Snowflake separates compute from storage, making it easy to scale ingestion. Here are five reliable ways to load CSVs, from quick manual uploads to fully automated pipelines.
Method #1: SnowSQL CLI
Best for scriptable, repeatable imports or CI tasks. Use PUT to stage the file, then COPY INTO to load it.
- Install & connect: Configure SnowSQL with your account, role, and warehouse.
- Stage the file: run PUT to move the CSV into your user stage.
- Load the table: run COPY INTO from the staged file.
-- Stage the local CSV in your user stage
PUT file:///path/to/your_file.csv @~;

-- Load the staged file into the target table
COPY INTO your_table
FROM @~/your_file.csv
FILE_FORMAT = (TYPE = 'CSV');
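If your CSV starts with a header row, tell COPY INTO to skip it. A minimal variation of the load above, assuming a one-line header:
COPY INTO your_table
FROM @~/your_file.csv
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);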
Method #2: Snowflake Web Interface
Good for one-off, manual imports without scripts.
- Open the UI: Log in and navigate to the target database and table.
- Load Data wizard: Upload your CSV and map columns as prompted.
- Review: Confirm the file format (CSV) and run the load.
Method #3: Cloud Storage Staging
Ideal for larger files or recurring feeds. Stage in S3/GCS/Azure, then load via COPY INTO.
- Upload to storage: Place the CSV in your bucket.
- Create or use a stage: Point Snowflake to the bucket path (a CREATE STAGE sketch follows the COPY examples).
- Load: use COPY INTO from the stage (examples below).
-- single-line
COPY INTO your_table FROM @your_stage/your_file.csv FILE_FORMAT = (TYPE = 'CSV');
-- multi-line
COPY INTO your_table
FROM @your_stage/your_file.csv
FILE_FORMAT = (TYPE = 'CSV');
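If the stage doesn't exist yet, create an external one pointing at your bucket. A minimal sketch assuming an S3 bucket and an existing storage integration; the bucket path and your_integration are placeholders:
CREATE STAGE your_stage
  URL = 's3://your-bucket/path/'
  STORAGE_INTEGRATION = your_integration
  FILE_FORMAT = (TYPE = 'CSV');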
Method #4: Airbyte
Great for automated pipelines and ongoing syncs with transformations.
- Deploy Airbyte: Run locally or in the cloud; open the dashboard.
- Configure source: Add a CSV source (local path or cloud storage) with format settings.
- Add Snowflake destination: Provide account, warehouse, database, and schema.
- Create connection: Map schema, set frequency, define transformations.
- Sync: Start the job; Airbyte extracts, transforms, and loads into Snowflake. For programmatic triggers, see the sketch below.
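Syncs can also be kicked off outside the dashboard. A rough sketch against Airbyte's self-hosted Config API; the host, port, and connection ID are placeholders for illustration:
import requests

# Trigger a manual sync for an existing connection (the connection ID comes
# from the Airbyte UI; localhost:8000 assumes a default local OSS deployment)
resp = requests.post(
    "http://localhost:8000/api/v1/connections/sync",
    json={"connectionId": "your-connection-id"},
)
resp.raise_for_status()
print(resp.json())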
Method #5: Snowflake Python Connector
For Python workflows needing custom logic before or after load.
- Install: run pip install snowflake-connector-python.
- Connect: Initialize the connector with user, password, account, warehouse, database, and schema.
- Stage and load: Execute PUT to stage the CSV, then COPY INTO to load.
pip install snowflake-connector-python
import snowflake.connector

# Connect with your account details
conn = snowflake.connector.connect(
    user="your_username",
    password="your_password",
    account="your_account",
    warehouse="your_warehouse",
    database="your_database",
    schema="your_schema",
)

# Stage the local CSV in the user stage
conn.cursor().execute(
    "PUT file:///path/to/your_file.csv @~"
)

# Load the staged file into the target table
conn.cursor().execute(
    "COPY INTO your_table FROM @~/your_file.csv FILE_FORMAT = (TYPE = 'CSV')"
)
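To sanity-check the load before adding downstream logic, count the rows that landed. A small follow-up on the same connection, assuming the table name above:
# Quick row-count check on the freshly loaded table; execute() returns the
# cursor, so fetchone() can be chained directly
count = conn.cursor().execute("SELECT COUNT(*) FROM your_table").fetchone()[0]
print(f"Loaded {count} rows")

conn.close()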
Final Thoughts
Each method fits a different need: quick manual loads, scripted CLI runs, cloud-scale staging, low-maintenance pipelines with Airbyte, or Python-first flows. If you want this automated end to end, with capture, validation, transformations, monitoring, and delivery, consider FileFeed’s Automated FileFeeds and embeddable importer to keep CSV ingestion predictable without extra engineering work.
