How can I set up an ETL process for DataFrames (dbt or alternative tools)?

Question: I'm currently working on a dashboard prototype and storing data from a 22-page PDF document as 22 separate DataFrames. These DataFrames should undergo an ETL process (especially data type conversions) on every import before being written to the database.

My initial approach was to use dbt seeds and/or models. However, dbt seed loads the CSV straight into the SQLite database, so my transformations are never applied beforehand. I want to transform first and then load.

Setup:

* Data source: 22 DataFrames extracted from PDF tables

* Database: SQLite (for prototype only)

* ETL/ELT tool: preferably dbt, but not mandatory

* Language: Python

Problem: How can I set up an ETL workflow where the DataFrames are processed and transformed directly, without having to load them into dbt as CSV seeds first?

Is there a way to integrate DataFrames directly into a dbt model process? If not, what alternative tools are suitable (e.g., Airflow, Dagster, Prefect, or a plain pandas-based ETL pipeline)?
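For context, this is the overall shape I'm after, written as a plain pandas pipeline. A minimal sketch – `run_etl`, the database path, and the `if_exists` policy are placeholders, not working project code:

```python
import sqlite3

import pandas as pd


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Per-table type conversions go here (see the sketch further below)."""
    return df


def run_etl(dataframes: dict[str, pd.DataFrame], db_path: str = "prototype.db") -> None:
    """Transform each of the 22 DataFrames first, then load it into SQLite."""
    with sqlite3.connect(db_path) as conn:
        for table_name, df in dataframes.items():
            # to_sql accepts a sqlite3 connection directly, no SQLAlchemy needed
            transform(df).to_sql(table_name, conn, if_exists="replace", index=False)
```

The point is that `transform()` runs before anything touches the database; `dbt seed` does it the other way around.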

Previous attempts:

* dbt seed: loads the CSV directly into the database → my transformations are never applied

* dbt models: only work if the data is already in the database, which I want to avoid

* Currently: manual type conversions in Python (float, int, string, datetime) – see the sketch after this list
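The manual conversions from the last point currently look roughly like this (the column names are placeholders – each of the 22 tables has its own schema):

```python
import pandas as pd


def convert_types(df: pd.DataFrame) -> pd.DataFrame:
    """Manual per-column type conversions, applied on every import."""
    df = df.copy()
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")           # float
    df["quantity"] = df["quantity"].astype("Int64")                       # nullable int
    df["label"] = df["label"].astype("string")                            # string
    df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")  # datetime
    return df
```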

Looking for: Best practices or tool recommendations for directly transforming DataFrames and then loading them into SQLite – with or without dbt.
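To make "with or without dbt" concrete: without dbt, I imagine one of the orchestrators listed above wrapping the same two steps. A hypothetical sketch using Prefect's decorator API (chosen only as an example – Dagster or Airflow would be structured similarly, and the type conversions are abbreviated):

```python
import sqlite3

import pandas as pd
from prefect import flow, task


@task
def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Abbreviated; the full per-column conversions are shown above.
    df = df.copy()
    df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
    return df


@task
def load(df: pd.DataFrame, table_name: str, db_path: str = "prototype.db") -> None:
    with sqlite3.connect(db_path) as conn:
        df.to_sql(table_name, conn, if_exists="replace", index=False)


@flow
def pdf_etl(dataframes: dict[str, pd.DataFrame]) -> None:
    # One transform -> load pair per extracted PDF table.
    for table_name, df in dataframes.items():
        load(transform(df), table_name)
```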
