Guide · December 30, 2025 · Updated April 14, 2026 · 7 min read

How to Import CSV into Snowflake: 5 Practical Methods

A concise, step-by-step guide to importing CSV files into Snowflake using SnowSQL, the web UI, cloud storage stages, Airbyte, and the Python connector. Pick the right method for your use case.

Igor Nikolic

Co-founder, FileFeed

Snowflake separates compute from storage, making it easy to scale ingestion. If you are new to the format itself, our primer on what a CSV file actually is covers the basics. Here are five reliable ways to load CSVs into Snowflake, from quick manual uploads to fully automated pipelines.

Method #1: SnowSQL CLI

Best for scriptable, repeatable imports or CI tasks. Use PUT to stage the file, then COPY INTO to load it.

  1. Install & connect: Configure SnowSQL with your account, role, and warehouse.
  2. Stage the file: Run PUT to move the CSV into your user stage.
  3. Load the table: Run COPY INTO from the staged file.

PUT file:///path/to/your_file.csv @~;
COPY INTO your_table
  FROM @~/your_file.csv
  FILE_FORMAT = (TYPE = 'CSV');
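
Real-world CSVs often have a header row and quoted fields. A minimal sketch of the same load with those handled inline, assuming one header row and double-quoted values (adjust to match your file):

COPY INTO your_table
  FROM @~/your_file.csv
  FILE_FORMAT = (
    TYPE = 'CSV'
    SKIP_HEADER = 1
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  );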

Method #2: Snowflake Web Interface

Good for one-off, manual imports without scripts.

  1. Open the UI: Log in and navigate to the target database and table.
  2. Load Data wizard: Upload your CSV and map columns as prompted.
  3. Review: Confirm the file format (CSV) and run the load.

Method #3: Cloud Storage Staging

Ideal for larger files or recurring feeds. Stage in S3/GCS/Azure, then load via COPY INTO.

  1. Upload to storage: Place the CSV in your bucket.
  2. Create or use a stage: Point Snowflake to the bucket path (a sample CREATE STAGE follows the COPY examples below).
  3. Load: Run COPY INTO from the stage (examples below).

-- load a single staged file
COPY INTO your_table FROM @your_stage/your_file.csv FILE_FORMAT = (TYPE = 'CSV');

-- load every CSV in the stage that matches a pattern
COPY INTO your_table
  FROM @your_stage
  FILE_FORMAT = (TYPE = 'CSV')
  PATTERN = '.*[.]csv';
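
The COPY examples above assume a named stage already exists. A rough sketch of creating an external stage over an S3 bucket, where the bucket path and integration name are placeholders and the storage integration itself is a separate one-time admin step:

CREATE STAGE your_stage
  URL = 's3://your-bucket/path/'
  STORAGE_INTEGRATION = your_integration
  FILE_FORMAT = (TYPE = 'CSV');

With the stage in place, the COPY INTO statements above work unchanged.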

Method #4: Airbyte

Great for automated pipelines and ongoing syncs with transformations.

  1. Deploy Airbyte: Run locally or in the cloud; open the dashboard.
  2. Configure source: Add a CSV source (local path or cloud storage) with format settings.
  3. Add Snowflake destination: Provide account, warehouse, database, and schema.
  4. Create connection: Map schema, set frequency, define transformations.
  5. Sync: Start the job; Airbyte extracts, transforms, and loads into Snowflake.

Method #5: Snowflake Python Connector

For Python workflows needing custom logic before or after load.

  1. Install: pip install snowflake-connector-python.
  2. Connect: Initialize the connector with user, password, account, warehouse, database, and schema.
  3. Stage and load: Execute PUT to stage the CSV, then COPY INTO to load.

# Install first: pip install snowflake-connector-python
import snowflake.connector

conn = snowflake.connector.connect(
  user="your_username",
  password="your_password",
  account="your_account",
  warehouse="your_warehouse",
  database="your_database",
  schema="your_schema",
)

try:
  cur = conn.cursor()
  # Stage the local CSV in the user stage (PUT gzips it by default)
  cur.execute("PUT file:///path/to/your_file.csv @~")
  # Load the staged file into the target table
  cur.execute(
    "COPY INTO your_table FROM @~/your_file.csv FILE_FORMAT = (TYPE = 'CSV')"
  )
finally:
  conn.close()

Choosing the Right Approach

Snowflake always requires staging before loading. Your first decision is whether to use an internal stage (managed by Snowflake, simpler to set up) or an external stage pointing to S3, GCS, or Azure Blob (better for files that already live in cloud storage or need to be shared across systems). If your files land in S3 first, our guide on uploading CSV to Amazon S3 covers the staging patterns that pair well with Snowflake external stages. For one-off loads, internal stages with PUT keep things simple. For production pipelines, external stages give you more control over file lifecycle and access patterns.
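
For internal stages, two housekeeping commands cover most of the file lifecycle you would otherwise manage in the bucket. A quick sketch against the user stage from Method #1 (the .gz suffix assumes PUT's default auto-compression):

LIST @~;                      -- see what is currently staged
REMOVE @~/your_file.csv.gz;   -- clean up after a successful load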

  • Warehouse sizing: COPY INTO runs on your virtual warehouse, and warehouse size directly affects load speed and credit consumption. An X-Small warehouse handles most CSV loads under a few hundred MB. For larger files, scale up temporarily and suspend the warehouse immediately after the load completes to avoid burning credits on idle compute (see the SQL sketch after this list).
  • Error handling with ON_ERROR: The COPY INTO command supports ON_ERROR options that control what happens when rows fail validation. CONTINUE skips bad rows and loads the rest. SKIP_FILE rejects the entire file. ABORT_STATEMENT stops the load entirely. Choose based on whether partial loads are acceptable for your use case.
  • Snowpipe for continuous loading: If CSV files arrive continuously (every few minutes or more frequently), Snowpipe auto-ingests files from a stage as they appear, using event notifications rather than scheduled COPY jobs. This eliminates polling and reduces latency to near real-time.
  • Time Travel for recovery: If a bad CSV load corrupts a table, Snowflake's Time Travel lets you query or restore the table to its state before the load, up to 90 days back depending on your edition. This is a safety net that most other warehouses do not offer natively.
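
The bullets above map to a handful of SQL statements. A rough sketch using the placeholder names from earlier; the Snowpipe example additionally assumes an external stage with bucket event notifications configured, and the restored table name is illustrative:

-- skip bad rows instead of failing the whole load
COPY INTO your_table
  FROM @your_stage/your_file.csv
  FILE_FORMAT = (TYPE = 'CSV')
  ON_ERROR = 'CONTINUE';

-- scale up for a heavy load, then suspend to stop the meter
ALTER WAREHOUSE your_warehouse SET WAREHOUSE_SIZE = 'LARGE';
-- ...run the load...
ALTER WAREHOUSE your_warehouse SUSPEND;

-- auto-ingest new files from the stage as they arrive (Snowpipe)
CREATE PIPE your_pipe AUTO_INGEST = TRUE AS
  COPY INTO your_table
  FROM @your_stage
  FILE_FORMAT = (TYPE = 'CSV');

-- restore the pre-load state with Time Travel (here: one hour ago)
CREATE TABLE your_table_restored CLONE your_table
  AT (OFFSET => -3600);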

If the ingestion process begins with users uploading spreadsheets inside an application rather than engineers running imports manually, an in-product CSV upload component can validate and normalize data before it ever reaches a Snowflake stage.

Where FileFeed Fits

Snowflake credits burn during every COPY INTO operation, and the ON_ERROR parameter forces an uncomfortable choice. Set it to ABORT_STATEMENT and one bad row kills the entire load, wasting the credits spent processing everything before that row. Set it to CONTINUE and you lose the bad rows silently, which means your table is missing data you may never realize is gone. Set it to SKIP_FILE and partial batches vanish entirely. None of these options solve the actual problem, which is that the file should have been clean before it reached Snowflake. Warehouse sizing adds another cost variable: spin up an XL warehouse for a load and you pay per-second for compute that might be processing garbage. Cleaning CSV data upstream turns this cost equation on its head.

FileFeed validates and transforms files before they reach your Snowflake stage, which makes ABORT_STATEMENT a safe default rather than a risky one. Every row that arrives in Snowflake has already passed your schema rules, so COPY INTO runs at full speed without hitting errors that waste credits or drop data. Column mapping, type normalization, and encoding fixes all happen outside of Snowflake compute, so you can size your warehouse purely for the actual load throughput you need. Whether you use scheduled COPY jobs or Snowpipe, the upstream data is guaranteed to be consistent. Teams that rely on this pattern run FileFeed as automated Snowflake ingestion pipelines that absorb file variability so their Snowflake spend stays predictable.

Frequently asked questions about Snowflake CSV imports

Do I need to create a stage before importing CSV into Snowflake?

Yes, Snowflake requires data to be staged before loading with COPY INTO. You can use internal stages (managed by Snowflake), external stages pointing to S3 or Azure Blob, or table stages attached to specific tables. For quick one-off loads, PUT and a table stage are the simplest option.

How do I handle CSV files with different delimiters in Snowflake?

Specify the delimiter in your file format definition using FIELD_DELIMITER. For example, CREATE FILE FORMAT my_format TYPE = CSV FIELD_DELIMITER = '|' for pipe-delimited files. You can also set FIELD_OPTIONALLY_ENCLOSED_BY for quoted fields and SKIP_HEADER to ignore header rows during loading.
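
As a short sketch with the same placeholder names, defining the format once keeps those options out of every COPY statement:

CREATE FILE FORMAT my_format
  TYPE = 'CSV'
  FIELD_DELIMITER = '|'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  SKIP_HEADER = 1;

COPY INTO your_table
  FROM @your_stage/your_file.csv
  FILE_FORMAT = (FORMAT_NAME = 'my_format');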

What is the maximum CSV file size Snowflake can handle?

Snowflake can handle files of any size, but recommends splitting files into 100-250 MB compressed chunks for optimal parallel loading. The COPY INTO command automatically parallelizes across multiple files. For very large datasets, compress files with gzip and stage them in cloud storage before loading.
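
If you split a large export locally, PUT can compress and upload the pieces in parallel to an internal stage (for external stages, use your cloud provider's upload tooling instead). A minimal sketch, assuming the file has already been split into part_*.csv chunks:

PUT file:///path/to/part_*.csv @~
  AUTO_COMPRESS = TRUE
  PARALLEL = 8;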

Final Thoughts

Snowflake makes scaling compute for loads trivial, but that flexibility also means careless loads can burn credits fast. Size your warehouse appropriately, suspend it after loads complete, and use Snowpipe instead of scheduled COPY jobs when files arrive unpredictably. Lean on ON_ERROR settings and Time Travel as safety nets, but invest in upstream validation so you rarely need them. FileFeed handles that validation layer, ensuring files are clean and schema-consistent before they reach your Snowflake stage.

Teams evaluating Snowflake ingestion patterns often compare similar approaches used when importing CSV into BigQuery or other analytical warehouses. For a broader comparison of loading methods across platforms, see our complete CSV-to-database import guide.

Ready to eliminate the bottleneck?

Let your CS team onboard clients without engineers

Start free, configure your first pipeline, and see how FileFeed handles the file processing layer so your team doesn't have to.