Datacoves Co-founder | 15+ years of data platform experience.
Solving enterprise data challenges quickly with dbt & Airflow.
Working with data involves bridging the gap between raw data collection and deriving meaningful insights. Data transformation is at the heart of this process, and a variety of tools are available to facilitate it. Two have risen to prominence: dbt (Data Build Tool) and Apache Airflow. While both are celebrated for their prowess in facilitating data transformations, they have their distinct methodologies and specialties. Let's dive deeper into the nuances, strengths, and challenges that each tool brings to the table.
If you are a data professional trying to navigate the complex landscape of data orchestration tools or an organization looking to optimize its data operations and workflows, then this article is for you. It's essential to understand that when it comes to choosing between dbt and Airflow, it's not necessarily an 'either-or' decision. In many scenarios, pairing both tools can significantly elevate their potential, further optimizing data transformation workflows.
Airflow is a popular open-source tool that lets you author, schedule, and monitor data pipelines, making it well suited to orchestrating complex workflows.
Imagine a scenario where you have a series of tasks: Task A, Task B, and Task C. These tasks need to be executed in sequence every day at a specific time. Airflow enables you to programmatically define the sequence of steps as well as what each step does. With Airflow you can also monitor the execution of each step and get alerts when something fails.
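To make this concrete, here is a minimal sketch of such a DAG using recent Airflow 2.x syntax; the DAG id, schedule, and echo commands are placeholders for your real tasks.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Run Task A, then Task B, then Task C every day at 6:00 AM
with DAG(
    dag_id="daily_tasks",             # placeholder name
    schedule="0 6 * * *",             # cron expression: daily at 06:00
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    task_a = BashOperator(task_id="task_a", bash_command="echo 'running task A'")
    task_b = BashOperator(task_id="task_b", bash_command="echo 'running task B'")
    task_c = BashOperator(task_id="task_c", bash_command="echo 'running task C'")

    # The bitshift operators define the execution order: A, then B, then C
    task_a >> task_b >> task_c
```

If any task fails, Airflow marks the run as failed and can alert you based on the notification settings you configure.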
Airflow provides flexibility, which means you can script the logic of each task directly within the tool. However, this flexibility might be both a blessing and a curse. Just because you can code everything within Airflow, it doesn't mean that you should. Overly complicated workflows and incorporating too much logic within Airflow can make it difficult to manage and debug. Ensure that when you're using Airflow, it's the right tool for the specific task you're tackling. For example, it is far more efficient to transform data within a data warehouse than to move data to the Airflow server, perform the transformation, and write the data back to the warehouse.
At the heart of Apache Airflow's appeal is its flexibility when it comes to customizing each step in a workflow. Unlike other tools that may only let you schedule and order tasks, Airflow offers users the ability to define the code behind each task. This means you aren't just deciding the "what" and the "when" of your tasks, but also the "how". Whether it's extracting and loading data from sources, defining transformations, or integrating with other platforms, Airflow lets you tailor each step to your exact requirements. This level of control makes it a powerful ally for those who want fine-grained control over their data workflows, ensuring that each step is executed precisely as intended.
While Airflow is powerful, it's important to strike a balance. You should use Airflow primarily as an orchestrator. If mature tools exist for specific tasks, consider integrating them into your workflow and allow Airflow to handle scheduling and coordination. Let specialized tools abstract away complexity. One example is leveraging a tool like Fivetran or Airbyte to perform data extraction from SaaS applications rather than building all the logic within Airflow.
As stated above, Airflow can be used for many things, but we suggest reserving it primarily for orchestration-focused use cases.
dbt Core is an open-source framework that leverages templated SQL to perform data transformations. Developed by dbt Labs, it specializes in transforming, testing, and documenting data. While it's firmly grounded in SQL, it infuses software engineering principles into the realm of analytics, promoting best practices like version control and DataOps.
Imagine you have a raw data set and you need to transform it for analytical purposes. dbt allows you to create transformation scripts using SQL which is enhanced with Jinja templating for dynamic execution. Once created, these scripts, called "models" in dbt, can be run to create or replace tables and views in your data warehouse. Each transformation can be executed sequentially and when possible, in parallel, ensuring your data is processed properly.
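As a simple illustration, a dbt model is just a SELECT statement in a .sql file, optionally enhanced with Jinja; the model and upstream names below are hypothetical.

```sql
-- models/marts/fct_orders.sql (hypothetical model)
{{ config(materialized='table') }}

with orders as (

    -- ref() lets dbt resolve dependencies so models are built in the correct order
    select * from {{ ref('stg_orders') }}

)

select
    order_id,
    customer_id,
    order_date,
    amount,
    -- Jinja enables dynamic logic, e.g. recording which target environment built the row
    '{{ target.name }}' as built_in_environment
from orders
```

When you execute dbt run, dbt materializes this model as a table in the warehouse and rebuilds it idempotently on each run.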
Unlike some traditional ETL tools which might abstract SQL into drag-and-drop interfaces, dbt embraces SQL as the lingua franca of data transformation. This makes it exceptionally powerful for those well-acquainted with SQL. But dbt goes a step further: by infusing Jinja, it introduces dynamic scripting, conditional logic, and reusable macros. Moreover, dbt's commitment to idempotency ensures that your data transformations are consistent and repeatable, promoting reliability.
Lastly, dbt emphasizes the importance of testing and documentation for data transformations. dbt facilitates the capture of data descriptions, data lineage, data quality tests, and other metadata about the data, and it can generate a rich web-based documentation site. dbt's metadata can also be pushed to other tools such as a specialized data catalog or data observability tools. While dbt is a transformative tool, it's essential to understand its position in the data stack. It excels at the "T" in ELT (Extract, Load, Transform) but requires complementary tools for extraction and loading.
A common misunderstanding within the data community is that dbt = dbt Cloud. When people say dbt, they are typically referring to dbt Core. dbt Cloud is a commercial offering by dbt Labs and it is built upon dbt Core. It provides additional functionalities on top of the open source framework; these include a scheduler for automating dbt runs, alongside hosting, monitoring, and an integrated development environment (IDE). This means that you can use the open source dbt Core framework without paying for dbt Cloud; however, you will not get the added features dbt Cloud offers, such as the scheduler. If you are using dbt Core you will eventually need an orchestrator such as Airflow to get the job done. For more information, check out our article where we cover the differences between dbt Cloud vs dbt Core.
As mentioned above, one of the key features of dbt Cloud is its scheduler which allows teams to automate their dbt runs at specified intervals. This functionality ensures that data transformations are executed regularly, maintaining the freshness and reliability of data models. However, it's important to note that dbt Cloud's scheduler only handles the scheduling of dbt jobs, i.e., your transformation jobs. You will still need an orchestrator to manage your Extract and Load (EL) processes and anything after Transform (T), such as visualization.
Managing the deployment and infrastructure of dbt Core and Airflow is a not-so-hidden cost of choosing open source. At Datacoves, we solve the deployment and infrastructure problems for you so you can focus on data, not infrastructure. A managed Visual Studio Code editor gives developers the best dbt experience with bundled libraries and extensions that improve efficiency. Orchestration of the whole data pipeline is done with Datacoves' managed Airflow, which also offers a simplified YAML-based Airflow job configuration to integrate Extract and Load with Transform. Datacoves has best practices and accelerators built in so companies can get a robust data platform up and running in minutes instead of months. To learn more, check out our product page.
When looking at the strengths of each tool, it's clear that the decision isn't an either-or choice; each has a place in your data platform. Analyzing the strengths of each reveals that Airflow should be leveraged for end-to-end orchestration of the data journey, while dbt should be focused on data transformation, documentation, and data quality. This holds true even if you are adopting dbt through dbt Cloud. dbt Core does not come with a scheduler, so you will eventually need an orchestrator such as Airflow to automate your transformations as well as other steps in your data pipeline. If you implement dbt with dbt Cloud, you will be able to schedule your transformations but will still need an orchestrator to handle the other steps in your pipeline. You can also check out other dbt alternatives.
The following table shows a high-level summary.
|  | Apache Airflow | dbt Core |
| --- | --- | --- |
| Recommended Usage | Data workflow orchestration | Data transformation, documentation, data quality, data lineage |
| Primary Strength | Orchestration of data pipelines | Data modeling and transformation |
| Development Language | Python, or YAML using DagFactory | SQL, and Python in some warehouses |
| Ease of Use | Python knowledge required. Deployment is simpler if using a managed platform, but increases in complexity if self-deploying and managing | SQL-based transformation reduces the learning curve for using dbt, but setting up and managing the dbt infrastructure manually can be challenging as the organization grows |
| Pricing | Open source, but there is a cost to deploy and maintain the service. There are cloud offerings from companies like Astronomer and Datacoves that offer a turn-key Airflow experience that eliminates this pain point | dbt Core is open source, but deployment and maintenance of the development, DataOps, and orchestration infrastructure can be difficult, time consuming, and costly. Subscription-based services like dbt Cloud and Datacoves also exist to eliminate this pain point |
By now you can see that each tool has its place in an end-to-end data solution, but if you came to this article because you need to choose one to integrate, then here is the summary.
If you're orchestrating complex workflows, especially if they involve various tasks and processes, Apache Airflow should be your starting point as it gives you unparalleled flexibility and granular control over scheduling and monitoring.
An organization starting out with basic requirements may be fine starting with dbt Core, but when end-to-end orchestration is needed, Airflow will need to play a role.
If your primary focus is data transformation and you're looking to apply software development best practices to your analytics, dbt is the right answer. Here is the key takeaway: these tools are not rivals, but allies. While one might be the starting point based on immediate needs, having both in your arsenal unlocks the full potential of your data operations.
While Airflow and dbt are designed to assist data teams in deriving valuable insights, they each excel at unique stages of the workflow. For a holistic data pipeline approach, it's best to integrate both. Use tools such as Airbyte or Fivetran for data extraction and loading and trigger them through Airflow. Once your data is prepped, let Airflow guide dbt in its transformation and validation, readying it for downstream consumption. Post-transformation, Airflow can efficiently distribute data to a range of tools, executing tasks like data feeds to BI platforms, refreshing ML models, or initiating marketing automation processes.
However, a challenge arises when integrating dbt with Airflow: the intricacies of deploying and maintaining the combined infrastructure isn't trivial and can be resource-intensive if not approached correctly. But is there a way to harness the strengths of both Airflow and dbt without getting bogged down in the setup and ongoing maintenance? Yes!
Both Apache Airflow and dbt have firmly established themselves as indispensable tools in the data engineering landscape, each bringing their unique strengths and capabilities to the table. While Apache Airflow has emerged as the premier orchestrator, ensuring that tasks and workflows are scheduled and executed with precision, dbt stands out for its ability to streamline and enhance the data transformation process. The choice is not about picking one over the other, but about understanding how they can be integrated to provide a comprehensive solution.
It's vital to approach the integration and maintenance of these platforms pragmatically. Solutions like Datacoves offer a seamless experience, reducing the complexity of infrastructure management and allowing teams to focus on what truly matters: extracting value from their data. In the end, it's about harnessing the right tools, in the right way, to chart the path from raw data to actionable intelligence. See if Datacoves dbt pricing is right for your organization.
dbt, also known as data build tool, is a data transformation framework that leverages templated SQL to transform and test data. dbt is part of the modern data stack and helps practitioners apply software development best practices to data pipelines. Some of these best practices include code modularity, version control, and continuous testing via its built-in data quality framework. In this article we will focus on how data can be tested with dbt, both via built-in functionality and with additional dbt packages and libraries.
Adding tests to workflows does more than ensure code and data integrity; it facilitates a continuous dialogue with your data, enhancing understanding and responsiveness. By integrating testing into your regular workflows, you can:
By embedding testing into the development cycle and consuming the results diligently, teams not only safeguard the functionality of their data transformations but also enhance their overall data literacy and operational efficiency. This proactive approach to testing ensures that the insights derived from data are both accurate and actionable.
In dbt, there are two main categories of tests: data tests and unit tests.
Data tests are meant to be executed with every pipeline run to validate the integrity of the data and can be further divided into two types: Generic tests and Singular tests.
Regardless of the type of data test, the process is the same behind the scenes: dbt will compile the code to a SQL SELECT statement and execute it against your database. If any rows are returned by the query, this indicates a failure to dbt.
Unit tests, on the other hand, are meant to validate your transformation logic. They rely on predefined data for comparison to ensure your logic is returning an expected result. Unlike data tests, which are meant to be run with every pipeline execution, unit tests are typically run during the CI (Continuous Integration) step when new code is introduced. Unit tests were incorporated in dbt Core as of version 1.8.
These are foundational tests provided by dbt-core, focusing on basic schema validation and source freshness. These tests are ideal for ensuring that your data sources remain valid and up-to-date.
dbt-core provides four built-in generic tests that are essential for data modeling and ensuring data integrity:
unique: verifies that every value in a column (e.g. customer_id) is unique. This is useful for finding records that may have been inadvertently duplicated in your data.
not_null: checks that the values for a given column are always present. This can help you find cases where data in a column suddenly arrives unpopulated.
accepted_values: validates that every value in a column comes from a defined set of values. For example, a column called payment_status may contain values like pending, failed, accepted, and rejected. This test verifies that each row within the column contains one of those payment statuses and no other. This is useful for detecting changes in the data, such as a value being renamed from accepted to approved.
relationships: these tests check referential integrity. This type of test is useful when you have related columns (e.g. the customer identifier) in two different tables. One table serves as the “parent” and the other is the “child” table. This is common when one table has a transaction and only lists a customer_id and the other table has the details for that customer. With this test we can verify that every row in the transaction table has a corresponding record in the dimension/details table. For example, if you have orders for customer_ids 1, 2, 3 we can validate that we have information about each of these customers in the customer details table.
Using a generic test is done by adding it to the model's property (yml) file.
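For illustration, here is a hedged sketch of such a property file, using a hypothetical orders model and the columns from the examples above (in dbt 1.8+ the tests: key may also be written as data_tests:):

```yaml
# models/marts/_orders.yml (hypothetical property file)
version: 2

models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: payment_status
        tests:
          - accepted_values:
              values: ['pending', 'failed', 'accepted', 'rejected']
      - name: customer_id
        tests:
          - relationships:
              to: ref('dim_customers')
              field: customer_id
```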
Generic tests can accept additional test configurations such as a where clause to apply the test on a subset of rows. This can be useful on large tables by limiting the test to recent data or excluding rows based on the value of another column. Since an error will stop a dbt build or dbt test of the project, it is also possible to assign a severity to a test and optionally a threshold where failures will be treated as warnings instead of errors. Finally, since dbt will automatically generate a name for the test, it may be useful to override the auto-generated test name for simplicity. Here's the same property file from above with the additional configurations defined.
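A hedged sketch, continuing with the hypothetical orders model; the date filter uses Snowflake-style SQL, so adjust it for your warehouse:

```yaml
# models/marts/_orders.yml (hypothetical; same file with extra test configurations)
version: 2

models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - unique:
              name: order_id_is_unique_recent   # overrides the auto-generated test name
              config:
                where: "order_date >= dateadd(day, -30, current_date)"  # only test recent rows
                severity: error
                warn_if: ">0"      # warn if any rows fail
                error_if: ">100"   # only error above this threshold
      - name: payment_status
        tests:
          - accepted_values:
              values: ['pending', 'failed', 'accepted', 'rejected']
              config:
                severity: warn     # report, but do not fail the build
```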
Singular tests allow for the customization of testing parameters to create tailored tests when the default generic ones (or the ones in the packages discussed below) do not meet your needs. These tests are simple SQL queries that express assertions about your data. An example of this type of test can be a more complex assertion such as having sales for one product be within +/- 10% of another product. The SQL simply needs to return the rows that do not meet this condition.
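For example, a hedged sketch of that assertion as a singular test; the file lives in the tests folder, and the model and product codes are hypothetical:

```sql
-- tests/assert_product_a_sales_close_to_product_b.sql (hypothetical singular test)
with sales as (

    select
        sum(case when product_code = 'A' then amount end) as product_a_sales,
        sum(case when product_code = 'B' then amount end) as product_b_sales
    from {{ ref('fct_sales') }}

)

-- Returning any rows means the assertion failed:
-- product A sales must be within +/- 10% of product B sales
select *
from sales
where product_a_sales not between product_b_sales * 0.9 and product_b_sales * 1.1
```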
In dbt, it is also possible to define your own custom generic tests. This may be useful when you find yourself creating similar Singular tests. A custom generic test is essentially the same as a dbt macro which has at least a model as a parameter, and optionally column_name if the test will apply to a column. Once the generic test is defined, it can be applied many times just like the generic tests shipped with dbt Core. It is also possible to pass additional parameters to a custom generic test.
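As a hedged sketch, a custom generic test is a Jinja macro that returns the failing rows; the test name below is made up for illustration:

```sql
-- tests/generic/test_is_positive.sql (hypothetical custom generic test)
{% test is_positive(model, column_name) %}

-- Any returned rows are counted as failures
select *
from {{ model }}
where {{ column_name }} < 0

{% endtest %}
```

Once defined, it can be attached to any column in a property file just like a built-in generic test (for example, - is_positive under an amount column).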
As our data transformations become more complex, the need for testing becomes increasingly important. The concept of unit testing is already well established in software development, where tests confirm that individual units of code work as intended. Recognizing this, dbt 1.8 introduced unit testing.
Unlike the data tests described above, which ensure that incoming data meets specific criteria and are run at every data refresh, unit tests are designed to verify that the transformation logic itself produces the expected results. In the context of dbt, unit tests validate transformation logic by comparing the test results against predefined data, typically defined using seeds (CSV files) or SQL queries. Unit tests should only be executed when new data transformation code is introduced, since they are designed to help catch potential issues early in the development process. It is recommended to run unit tests only during the CI step; running them in production would be a redundant use of compute resources because the expected outcomes do not change. Unit testing is only available in dbt 1.8 or higher, but there are community packages (dbt-unit-testing, dbt_datamocktool, dbt-unittest) that have worked to solve this problem and are worth exploring if you are not using dbt 1.8.
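For illustration, a hedged sketch of a dbt 1.8 unit test; the model, column, and row values are hypothetical:

```yaml
# models/marts/_unit_tests.yml (hypothetical; requires dbt Core 1.8+)
unit_tests:
  - name: customer_totals_are_summed_correctly
    model: agg_customer_totals       # the model whose logic is being validated
    given:
      - input: ref('stg_orders')     # mocked rows stand in for the real upstream model
        rows:
          - {order_id: 1, customer_id: 10, amount: 100}
          - {order_id: 2, customer_id: 10, amount: 50}
    expect:
      rows:
        - {customer_id: 10, total_amount: 150}
```

During CI, these can be run in isolation with dbt test --select test_type:unit.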
While not technically a dbt test, a freshness check validates the timeliness of source data. The freshness check in dbt Core is designed to monitor the timeliness of the data. It helps ensure that the data loaded into your warehouse is updated regularly and remains relevant for decision-making processes. This is valuable because sometimes data will stop getting refreshed and the data pipelines will continue to run with a silent failure. To assure that you are alerted when a data delivery SLA is not met, simply add a freshness check to your sources.
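A hedged sketch of a source freshness configuration; the source, database, and column names are placeholders:

```yaml
# models/staging/_sources.yml (hypothetical source definition)
version: 2

sources:
  - name: raw_shop
    database: RAW
    loaded_at_field: _loaded_at            # timestamp column used to measure data recency
    freshness:
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: orders
```

Running dbt source freshness then warns or errors when the newest _loaded_at value is older than the thresholds above.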
This comprehensive suite of testing capabilities in dbt Core ensures that data teams can build, maintain, and verify the reliability and accuracy of their data models effectively.
In addition to the generic tests that can be found within dbt Core, there are a lot more in the dbt ecosystem. These tests are found in dbt packages. Packages are libraries of reusable SQL code created by organizations of the dbt community. We will briefly go over some of the tests that can be found in these packages.
The dbt-utils package, created by dbt Labs, contains generic dbt tests, SQL generators, and macros. Its generic tests include:
not_accepted_values: the opposite of the accepted_values test; it is used to check that specific values are NOT present in a column.
equal_rowcount: this test checks that two different tables have the same number of rows. This is a useful test that can assure that a transformation step does not accidentally introduce additional rows in the target table.
fewer_rows_than: this test is used to verify that a target table contains fewer rows than a source table. For example, if you are aggregating a table, you expect that the target table will have fewer rows than the table you are aggregating. This test can help you validate this condition.
There are 17 generic dbt tests available in the dbt-utils package.
Another awesome package that can accelerate your data testing is dbt-expectations. This package is a port of the great Python library Great Expectations. For those not familiar, Great Expectations is an open-source Python library that is used for automated testing. dbt-expectations is modeled after this library and was developed by Calogica so dbt practitioners would have access to an additional set of pre-created Generic tests without adding another tool to the data platform. Tests in dbt-expectations are divided into seven categories encompassing a total of 62 generic dbt tests:
You can find detailed information on all the dbt-expectations generic tests in their documentation.
Created by Snowflake, dbt_constraints adds primary and foreign key constraints to dbt models. When incorporated into a dbt project, this package automatically creates unique keys for all existing unique and dbt_utils.unique_combination_of_columns tests, along with foreign keys for existing relationship tests and not null constraints for not_null tests. It provides three flexible tests - primary_key, unique_key, and foreign_key - which can be used inline or out-of-line and which support multiple columns.
The elementary tool offers 10 generic dbt tests that help in detecting schema changes, validating JSON schemas, and monitoring anomalies in source freshness, among other functionalities.
dbt-fihr focuses on the healthcare sector, providing 20 generic dbt tests for validating HL7® FHIR® (Fast Healthcare Interoperability Resources) data types, a standard for exchanging healthcare information across different systems.
Maintained by Google, the fhir-dbt-analytics package includes tests that ensure the quality of clinical data. These tests might involve counting the number of FHIR resources to verify expected counts or checking references between FHIR resources.
By leveraging these diverse dbt testing packages, data teams can significantly enhance their data validation processes, ensuring that their data pipelines are robust, accurate, and reliable.
While the tests above run against production data, even when none of the dbt code has changed, there are some tests that should be applied during development. These will improve a project's long-term maintainability, assure project governance, and validate transformation logic in isolation from production data.
The dbt-meta-testing package contains macros to assert test and documentation coverage, leveraging a configuration defined in the dbt_project.yml settings.
While dbt tests are great for testing with "real" data, sometimes you may want to test the logic of a transformation with "fake" data. This type of test is called a unit test. The dbt-unit-testing package has all you need to do proper dbt unit testing. (Side note: native unit testing shipped in dbt Core 1.8, though it is not implemented exactly as in this package.)
dbt_datamocktool can be used to create mock CSV seeds to stand in for the sources and refs that your models use and test that the model produces the expected output as compared with another CSV seed.
dbt-unittest is a dbt package that enhances dbt package development by providing unit testing macros.
Incorporating automated data validation into CI/CD pipelines helps catch issues early and ensures data accuracy before deployment. By integrating tests into every code change, teams can prevent bad data from reaching production and maintain reliable data pipelines.
dbt-checkpoint is a library that can be leveraged during the development and release life-cycle to assure a level of governance of the dbt project. Typical validations include assuring that dbt models and/or their columns have descriptions and that all the columns in a dbt model (sql) are present in a property file (yml).
Recce is an open-source data validation toolkit for comprehensive PR review in dbt projects. Recce helps to validate the data impact of code changes during development and PR review by enabling you to compare data structure, profiling statistics, and queries between two dbt environments, such as dev and prod. By performing Recce checks, you are able to identify unexpected data impact, validate expected impact, and prevent bad merges and incorrect data entering production.
Recce checks can be performed during development, automatically as part of CI, and as part of PR review for root cause analysis. The suite of tools in Recce enables you to perform:
Record the results of your data validations in the Checklist and share as part of PR review or discussion with stakeholders.
For full coverage, use Recce’s automated ‘preset checks’ that are triggered with each pull request and automatically post an impact summary to your PR comment.
Recce Cloud users can also take advantage of check-syncing and PR merge-blocking until the reviewer or stakeholders have approved the check results.
By default, dbt will not store the results of a dbt test execution. There is a configuration that can be set for the dbt project, or at the specific model level, which will have dbt store test failures in a table in the data warehouse. While this is a good start, these test results get overwritten each time dbt tests are run. To overcome this deficiency, tools have been developed in the community that store results over time and even provide dashboards of test results.
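Before looking at those tools, here is a minimal sketch of the built-in store_failures configuration mentioned above, set at the project level (it can also be set on an individual test; in dbt 1.8+ the top-level key may be data_tests):

```yaml
# dbt_project.yml
tests:
  +store_failures: true   # persist failing rows to audit tables in the warehouse
```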
Elementary is an open source data observability tool for dbt. It simplifies the capture of dbt test results over time, enables testing without having to manually add tests to all your dbt model columns, and has a user interface for viewing test results as well as dbt lineage.
Elementary also provides advanced configurations for generating Slack alerts for dbt tests, enhancing how teams monitor and respond to data quality issues. You can configure alerts based on test results, test statuses, and test durations. Additionally, you can set up recurring alerts based on a schedule that you define, ensuring continuous oversight without constant manual checking.
Key features include:
This comprehensive suite of tools not only sends notifications but also allows for significant customization, ensuring that alerts are meaningful and actionable. The integration of these features into your workflow facilitates better data management and quicker response to potential data discrepancies, streamlining your project's efficiency and reliability.
The dbt Data Quality package is a Snowflake-only package that helps users access and report on the outputs from dbt source freshness and dbt test results.
The dbt-tools package makes it simple to store and visualize dbt test results in a BI dashboard.
re_data is an open-source data reliability framework for the modern data stack.
When migrating data from one system to another, validating that tables match is incredibly important. For this, we recommend datacompy to get the job done.
Getting started with dbt testing is simple thanks to the predefined generic dbt tests found within dbt Core and the additional generic tests found in dbt-utils and dbt-expectations. Beyond these juggernauts of the dbt community, other organizations have contributed additional generic tests, tools to improve dbt development, libraries that help with validation and governance before releasing code to production, and tools that improve data quality observability. If you are using dbt Cloud or dbt Core, you may be interested in reading more about dbt alternatives such as Datacoves, which falls under the managed dbt Core solutions.
You now know what dbt (data build tool) is all about. You are being productive, but you forgot what `dbt build` does or what the @ dbt graph operator does. This handy dbt cheat sheet has it all in one place.
With the advent of dbt 1.6, we updated the awesome dbt cheat sheet originally created by Bruno de Lima.
We have also moved the dbt Jinja cheat sheet to a dedicated post.
This reference summarizes all the dbt commands you may need as you run your dbt jobs or study for your dbt certification.
If you ever wanted to know the difference between +model and @model in your dbt run, you will find the answer here. Whether you are trying to understand dbt graph operators or what the dbt retry command does, this cheat sheet has you covered. Check it out below.
These are the principal commands you will use most frequently with dbt. Not all of these are available in dbt Cloud.
| dbt development commands | |
| --- | --- |
| dbt build | This command will load seeds, perform snapshots, run models, and execute tests |
| dbt compile | Generates executable SQL code of dbt models, analysis, and tests and outputs to the target folder |
| dbt docs | Generates and serves documentation for the dbt project (dbt docs generate, dbt docs serve) |
| dbt retry | Re-executes the last dbt command from the node point of failure. It references run_results.json to determine where to start |
| dbt run | Executes compiled SQL for the models in a dbt project against the target database |
| dbt run-operation | Is used to invoke a dbt macro from the command line. Typically used to run some arbitrary SQL against a database |
| dbt seed | Loads CSV files located in the seeds folder into the target database |
| dbt show | Executes a SQL query against the target database and, without materializing, displays the results to the terminal |
| dbt snapshot | Executes "snapshot" jobs defined in the snapshot folder of the dbt project |
| dbt source | Provides tools for working with source data to validate that sources are "fresh" |
| dbt test | Executes singular and generic tests defined on models, sources, snapshots, and seeds |
The dbt commands above have options that allow you to select and exclude models, as well as defer to another environment (like production) instead of building dependent models for a given run. This table shows which options are available for each dbt command.
| dbt command arguments | |
| --- | --- |
| dbt build | --select / -s, --exclude, --selector, --resource-type, --defer, --empty, --full-refresh |
| dbt compile | --select / -s, --exclude, --selector, --inline |
| dbt docs generate | --select / -s, --no-compile, --empty-catalog |
| dbt docs serve | --port |
| dbt ls / dbt list | --select / -s, --exclude, --selector, --output, --output-keys, --resource-type |
| dbt run | --select / -s, --exclude, --selector, --resource-type, --defer, --empty, --full-refresh |
| dbt seed | --select / -s, --exclude, --selector |
| dbt show | --select / -s, --inline, --limit |
| dbt snapshot | --select / -s, --exclude, --selector |
| dbt source freshness | --select / -s, --exclude, --selector |
| dbt source | --select / -s, --exclude, --selector, --output |
| dbt test | --select / -s, --exclude, --selector, --defer |
By combining the arguments above like "-s" with the options below, you can tell dbt which items you want to select or exclude. This can be a specific dbt model, everything in a specific folder, or now with the latest versions of dbt, the specific version of a model you are interested in.
| dbt node selectors | |
| --- | --- |
| tag | Select models that match a specified tag |
| source | Select models that select from a specified source |
| path | Select models/sources defined at or under a specific path |
| file / fqn | Used to select a model by its filename, including the file extension (.sql) |
| package | Select models defined within the root project or an installed dbt package |
| config | Select models that match a specified node config |
| test_type | Select tests based on their type: singular, generic, data, or unit (unit tests are available only in dbt 1.8+) |
| test_name | Select tests based on the name of the generic test that defines it |
| state | Select nodes by comparing them against a previous version of the same project, which is represented by a manifest. The file path of the comparison manifest must be specified via the --state flag or DBT_STATE environment variable |
| exposure | Select parent resources of a specified exposure |
| metric | Select parent resources of a specified metric |
| result | The result method is related to the state method described above and can be used to select resources based on their result status from a prior run |
| source_status | Select resources based on source freshness |
| group | Select models defined within a group |
| access | Select models based on their access property |
| version | Select versioned models based on their version identifier and latest version |
dbt graph operators provide a powerful syntax that allows you to home in on the specific items you want dbt to process.
| dbt graph operators | |
| --- | --- |
| + | If the "plus" (+) operator is placed at the front of the model selector, + will select all parents of the selected model. If placed at the end of the string, + will select all children of the selected model. |
| n+ | With the n-plus (n+) operator you can adjust the behavior of the + operator by quantifying the number of edges to step through. |
| @ | The "at" (@) operator is similar to +, but will also include the parents of the children of the selected model. |
| * | The "star" (*) operator matches all models within a package or directory. |
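For illustration, a few hedged examples that combine node selectors and graph operators; the model and tag names are hypothetical:

```bash
dbt run -s my_model          # just my_model
dbt run -s +my_model         # my_model and all of its parents
dbt run -s my_model+         # my_model and all of its children
dbt run -s 2+my_model        # my_model plus parents up to two edges away
dbt build -s @my_model       # my_model, its children, and the parents of those children
dbt run -s tag:nightly       # all models tagged "nightly"
dbt test -s models/staging   # everything under the models/staging path
```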
The following commands are used less frequently and perform actions like initializing a dbt project, installing dependencies, or validating that you can connect to your database.
| project level dbt commands | |
| --- | --- |
| dbt clean | By default, this command deletes the contents of the dbt_packages and target folders in the dbt project |
| dbt clone | In databases that support it, can clone nodes (views/tables) to the current dbt target database; otherwise it creates a view pointing to the other environment |
| dbt debug | Validates dbt project setup and tests the connection to the database defined in profiles.yml |
| dbt deps | Installs dbt package dependencies for the project as defined in packages.yml |
| dbt init | Initializes a new dbt project and sets up the user's profiles.yml database connection |
| dbt ls / dbt list | Lists resources defined in a dbt project such as models, tests, and sources |
| dbt parse | Parses and validates dbt files. It will fail if there are Jinja or YAML errors in the project. It also outputs detailed timing info that may be useful when optimizing large projects |
| dbt rpc | DEPRECATED after dbt 1.4. Runs an RPC server that compiles dbt models into SQL that can be submitted to a database by external tools |
The flags below immediately follow the dbt command and go before the subcommand e.g. dbt <FLAG> run
Read the official dbt documentation
| dbt command line (CLI) flags (general) | |
| --- | --- |
| -x, --fail-fast / --no-fail-fast | Stop dbt execution as soon as a failure occurs |
| -h, --help | Shows command help documentation |
| --send-anonymous-usage-stats / --no-send-anonymous-usage-stats | Send anonymous dbt usage statistics to dbt Labs |
| -V, -v, --version | Returns information about the installed dbt version |
| --version-check / --no-version-check | Ensures or ignores that the installed dbt version matches the require-dbt-version specified in the dbt_project.yml file |
| --warn-error | If dbt would normally warn, instead raise an exception |
| --warn-error-options WARN_ERROR_OPTIONS | Allows for granular control over exactly which types of warnings are treated as errors. This argument receives a YAML string like '{"include": "all"}' |
| --write-json / --no-write-json | Whether or not to write the manifest.json and run_results.json files to the target directory |
| dbt CLI flags (logging and debugging) | |
| --- | --- |
| -d, --debug / --no-debug | Display debug logging during dbt execution, useful for debugging and making bug reports. Not to be confused with the dbt debug command, which tests the database connection |
| --log-cache-events / --no-log-cache-events | Enable verbose logging for relational cache events to help when debugging |
| --log-format [text\|debug\|json\|default] | Specify the format of logging to the console and the log file |
| --log-format-file [text\|debug\|json\|default] | Specify the format of logging to the log file by overriding the default format |
| --log-level [debug\|info\|warn\|error\|none] | Specify the severity of events that are logged to the console and the log file |
| --log-level-file [debug\|info\|warn\|error\|none] | Specify the severity of events that are logged to the log file by overriding the default log level |
| --log-path PATH | Configure the 'log-path'. Overrides 'DBT_LOG_PATH' if it is set |
| --print / --no-print | Outputs or hides all {{ print() }} statements within a macro call |
| --printer-width INTEGER | Sets the number of characters for terminal output |
| -q, --quiet / --no-quiet | Suppress all non-error logging to stdout. Does not affect {{ print() }} macro calls |
| --use-colors / --no-use-colors | Specify whether log output is colorized in the terminal |
| --use-colors-file / --no-use-colors-file | Specify whether log file output is colorized |
| dbt CLI flags (parsing and performance) | |
| --- | --- |
| --cache-selected-only / --no-cache-selected-only | Have dbt cache or not cache metadata about all the objects in all the schemas where it might materialize resources |
| --partial-parse / --no-partial-parse | Uses or ignores the pickle file in the target folder used to speed up dbt invocations by only reading and parsing modified objects |
| --populate-cache / --no-populate-cache | At start of run, use `show` or `information_schema` queries to populate a relational cache to speed up subsequent materializations |
| -r, --record-timing-info PATH | Saves performance profiling information to a file that can be visualized with snakeviz to understand the performance of a dbt invocation |
| --static-parser / --no-static-parser | Use or disable the static parser (e.g. no partial parsing if enabled) |
| --use-experimental-parser / --no-use-experimental-parser | Enable experimental parsing features |
As a managed dbt Core solution, the Datacoves platform simplifies the dbt Core experience and retains its inherent flexibility. It effectively bridges the gap, capturing many benefits of dbt Cloud while mitigating the challenges tied to a pure dbt Core setup. See if Datacoves dbt pricing is right for your organization or visit our product page.
Please contact us with any errors or suggestions.
Using dbt with Snowflake is one of the most popular and powerful data stacks today. Are you facing one of these situations:
- You just joined a data team using dbt Core with Snowflake, and you want to set up your local dbt development environment
- Your company is already using Snowflake and wants to try out dbt.
- You are using dbt with another data warehouse and you want to try dbt with Snowflake before migrating.
If you are facing any of these, this guide will help you set up your dbt environment for Snowflake using VS Code as your dbt development environment. We will also cover useful Python libraries and VS Code extensions that will make you more productive. Want to become a dbt on Snowflake ninja? Keep reading.
To get started, you need to set up your dbt development environment. This includes Python, dbt, and VS Code. You will also need to set up your Snowflake account properly so dbt can work its magic.
While dbt supports versions of Python greater than 3.7, some other tools like Snowpark require Python 3.8, so we recommend you stick with that version. You can find the installer for your particular operating system on this page. Finding the right Windows installer can be confusing, so look for the link titled Windows installer (64-bit). If you are on a Mac, use the macOS 64-bit universal2 installer.
When using dbt, you will also need Git. To set up Git, follow this handy guide.
The preferred IDE for dbt is VS Code; you can find it on the official Microsoft site. Click the big Download button and install it like any other app.
Installing dbt is done using pip. You can find more information on this page.
For dbt Snowflake, simply run:
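```bash
# Installs the Snowflake adapter; dbt-core is pulled in as a dependency
pip install dbt-snowflake
```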
This will install the latest version of dbt core along with the dbt adapter for Snowflake. If you need to use an older version of dbt, you will need to specify it when you run pip. However, the version of the dbt adapter may not match the version of dbt core.
For example, as of this writing, the last version of dbt core 1.3.x is 1.3.4. However, dbt-snowflake for 1.3.x is 1.3.2. So, if you want to install the latest dbt 1.3.x for snowflake, you would run
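```bash
# Pinning the adapter version; pip resolves dbt-core to the latest compatible 1.3.x release
pip install dbt-snowflake==1.3.2
```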
This will install dbt-snowflake 1.3.2 along with dbt-core 1.3.4
The final prerequisite is to set up a Snowflake user that has been granted a role with the right access to a database where dbt will create views and tables. The role should also have access to a Snowflake warehouse for compute. Here is a handy guide that gives you the basics. We would recommend a more comprehensive setup for a production deployment, but this will get you going for now.
The key items not covered in that guide are that you should create a role ANALYST and a database ANALYTICS_DEV, and grant OWNERSHIP of that database to the ANALYST role. The ANALYST role should also be granted USAGE on the TRANSFORMING warehouse. You also need to grant the ANALYST role to your specific user. Don’t run as ACCOUNTADMIN.
This is all needed because when dbt runs it will create a schema for your user in the ANALYTICS_DEV database and you will use the TRANSFORMING warehouse when compute is needed like when dbt creates tables.
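A minimal sketch of that Snowflake setup, assuming the warehouse is named TRANSFORMING; replace MY_USER with your own user and run it with an appropriate admin role:

```sql
-- One-time setup run by an administrator (avoid doing day-to-day work as ACCOUNTADMIN)
create role if not exists ANALYST;
create database if not exists ANALYTICS_DEV;

grant ownership on database ANALYTICS_DEV to role ANALYST;
grant usage on warehouse TRANSFORMING to role ANALYST;

-- Allow your user to assume the role
grant role ANALYST to user MY_USER;
```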
If all of this seems confusing or tedious, you should consider a managed dbt solution like dbt Cloud or Datacoves. For more information, check out our in-depth article where we compare dbt Cloud vs dbt Core as well as managed dbt Core.
Now with all the pre-requisites out of the way, let’s configure dbt to connect with Snowflake.
To initialize your dbt project for Snowflake, dbt has a handy command: dbt init. You can configure your dbt Snowflake profile using the dbt init command for both a new and an existing dbt project. First, clone your repo using git. Then simply run dbt init and go through the prompts.
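As a rough sketch of that flow; the repository URL and prompt answers are placeholders:

```bash
git clone https://github.com/your-org/your-dbt-project.git   # placeholder repository
cd your-dbt-project
dbt init   # answer the prompts: account, user, authentication, role, warehouse, database, schema, threads
```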
Once you get your project set up, consider adding a profile_template.yml to your project. As stated on that page, using a profiles template will simplify the dbt init process for users on your team.
To make sure dbt can connect to Snowflake, run dbt debug. If dbt can connect to your Snowflake account, you should see “All checks passed!” If you have problems, join the dbt Community, search the forums, or ask a question in the #db-snowflake channel.
Even though dbt set up your profiles.yml to connect to Snowflake with your credentials, it only provides the basic setup. This page lists additional parameters that you can configure for the Snowflake connection in your profiles.yml file.
If you want to configure those parameters, you will need to open and edit the profiles.yml file. The profiles.yml file created by dbt init will be in your user’s home directory in a subdirectory called .dbt.
One handy configuration parameter to change is reuse_connections. Also, if you use SSO authentication with an external browser, you should consider setting up connection caching on Snowflake; otherwise, you will be prompted to authenticate for every connection dbt opens to the database.
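For reference, a hedged sketch of a Snowflake profile with a couple of these parameters set; the profile name, account locator, and user are placeholders, and the profile name must match the profile setting in your dbt_project.yml:

```yaml
# ~/.dbt/profiles.yml
my_dbt_project:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: xy12345.us-east-1      # placeholder account locator
      user: MY_USER                   # placeholder user
      authenticator: externalbrowser  # SSO via external browser; use password auth if that fits your setup
      role: ANALYST
      warehouse: TRANSFORMING
      database: ANALYTICS_DEV
      schema: dbt_my_user             # your personal development schema
      threads: 8
      reuse_connections: true         # reuse connections to cut down on reconnect and re-authentication overhead
```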
Now that you have set up your dbt connection to Snowflake, there are some other options you can configure when dbt runs against your Snowflake account. This includes overriding the default warehouse for a specific model, adding query tags, copying grants, etc. This handy page has a lot more information on these dbt snowflake advanced configurations.
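For example, a hedged sketch of a model-level config that overrides a few of these settings; the model, warehouse name, and query tag are hypothetical:

```sql
-- models/marts/fct_large_aggregation.sql (hypothetical model)
{{ config(
    materialized='table',
    snowflake_warehouse='TRANSFORMING_XL',  -- use a larger warehouse just for this heavy model
    query_tag='dbt_heavy_model',            -- tag queries for easier monitoring in Snowflake
    copy_grants=true                        -- preserve existing grants when the table is rebuilt
) }}

select *
from {{ ref('stg_events') }}
```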
Now that you have dbt connecting to your database, let’s talk about some python libraries you should set up to improve how you work with dbt.
dbt-coves is an open source library created by Datacoves to complement dbt development by simplifying tedious tasks like generating staging models. It is a must-have tool for any dbt practitioner who wants to improve their efficiency. dbt-coves will automatically create your source.yml and staging models as well as their corresponding yml(property) files. It also has utilities for backing up Airbyte and Fivetran configurations.
SQLFluff is a Python library for linting SQL code. SQLFluff integrates seamlessly with dbt using a templater, and it is the only linter compatible with dbt. If you have not heard of code linting, it helps you enforce rules on how your SQL code is formatted, for example whether everyone should use leading or trailing commas, or whether SQL keywords should be upper or lower case. We recommend everyone use a linter, as this will improve code readability and long-term maintainability.
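A minimal sketch of a SQLFluff configuration for a dbt on Snowflake project; using the dbt templater assumes the sqlfluff-templater-dbt package is also installed:

```ini
# .sqlfluff (at the root of the dbt project)
[sqlfluff]
dialect = snowflake
templater = dbt
```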
pre-commit with dbt-checkpoint
dbt-checkpoint is a tool that allows you to make sure your dbt project complies with certain governance rules. For example, you can have a rule that validates whether every dbt model has a description. You can also ensure that every column is documented among many other rules. We also recommend the use of dbt-checkpoint as it will assure developers don’t add technical debt from the start of a project.
In addition to these Python libraries, at Datacoves we set up the development environment with other handy libraries like Snowpark and Streamlit. We believe that flexibility is important especially in enterprise environments. If you want to learn what to consider when selecting a managed dbt core platform, check out our guide.
In addition to Python libraries, you can improve your dbt workflow with Snowflake by installing these VS Code extensions.
The official Snowflake VS Code extension keeps you in the flow by bringing the Snowflake interface into VS Code. With it you can explore your database, run queries, and even upload and download files to Snowflake stages. It is a must-have for any Snowflake user.
dbt Power User is a VS Code extension that improves the dbt development experience by adding handy shortcuts to run dbt models and tests, and it even lets you preview the result of a dbt model or a CTE within that model.
The SQLFluff VS Code extension is the companion to the SQLFluff python library. It improves the development experience by highlighting linting errors right in line with your SQL code. It even has a handy hover which describes the linting error and links to the SQLFluff documentation.
There are many other great VS Code extensions, and at Datacoves we are always improving the dbt developer’s experience by pre-installing them. One recent addition, demonstrated in the video below, is a ChatGPT extension that improves the dbt experience by helping you write documentation, among other functions.
Getting started with dbt and Snowflake is not hard, and knowing how to streamline the development experience will maximize developer happiness.
Some users may run into issues configuring their development environment. If that happens, check out the #sqlfluff, #tools-dbt-libraries, and #tools-vscode channels on the dbt Slack community. There are many helpful people there always ready to help.
As you can see there are a lot of steps and potential gotchas to get a dbt environment properly configured. This gets more complicated as the number of dbt users increases. Upgrading everyone can also pose a challenge. These reasons and more are why we created the most flexible managed dbt-core environment. If you want your developers to be up and running in minutes with no installation required, reach out and we can show you how we can streamline your teams’ dbt experience with best practices from the start.
Since its inception, dbt has served the needs of small to mid-sized companies to transform data into actionable insights. Now as larger enterprises look to adopt dbt, they must consider the unique complexities of large enterprises. These include integrating with existing systems behind a corporate firewall, orchestrating end-to-end data pipelines, and implementing DataOps processes to transform how teams collaborate across the organization.
This article compares three popular dbt development and deployment options: leveraging dbt Cloud, using dbt Core alone, or using a dbt Core managed platform other than dbt Cloud. To learn more about specific dbt Cloud features and how to implement them with dbt Core and other open-source tools, check out dbt Core vs dbt Cloud - Key Differences.
dbt Cloud is a hosted environment to develop and deploy dbt Core projects. dbt Cloud leverages dbt Core, the open-source data transformation framework we all know as dbt. dbt Cloud pricing starts at $100 per developer for teams; if you are a single developer, they offer a free single seat with limited features.
When companies are ramping up with dbt, one of the hardest parts is setting up and managing dbt IDE environments. Analytics Engineers coming to dbt may not be familiar with concepts like version control with git or using the command line. The dbt Cloud IDE simplifies developer onboarding by providing a web-based SQL IDE to team members so they can easily write, test, and refine data transformation code without having to install anything on their computers. Complexities like starting a git branch are tucked behind a big colorful button so users know that is the first step in their new feature journey.
Developers who are accustomed to more versatile local IDEs, such as VS Code, may find the dbt Cloud experience limiting as they cannot leverage extensions such as those from the VS Code Marketplace nor can they extend dbt Core using the vast array of Python libraries. An example of a common library employed in the dbt community is SQLFluff. With SQLFluff, teams can ensure a minimum level of code quality is achieved across developers.
DataOps is a set of practices that combines data management and software engineering to improve the quality, speed, and collaboration of data analytics teams. In an enterprise environment, implementing DataOps processes early on in a project's lifecycle is crucial.
dbt Cloud can be paired with Continuous Integration (CI) tools like GitHub Actions to validate data transformations before they are added to a production environment.
If CI integration checks include things like SQLFluff and dbt-checkpoint, there is currently no way for users to run these validations in dbt Cloud prior to executing the automated CI checks in GitHub Actions. This may lead to a lot of back and forth between the GitHub Actions run failures and the dbt Cloud IDE to fix and commit the changes. Aspects such as code reviews and approvals will occur in the CI/CD tool of choice such as GitHub and dbt Cloud can report job execution status back to GitHub Actions. This allows teams to know when it is safe to merge changes to their code.
Companies that have tools like Bitbucket, Jira, and Jenkins within their corporate firewall may find it challenging to integrate with dbt Cloud.
dbt Cloud makes deploying a dbt Core project simple. It allows you to define custom environment variables and the specific dbt commands (seed, run, test) that you want to run during production runs. The dbt Cloud scheduler can be configured to trigger at specific intervals using an intuitive UI.
dbt Cloud is primarily focused on running dbt projects. Therefore if a data pipeline has more dependencies, an external orchestration tool may be required. Cases where this is necessary include when you need to connect the Extract and Load step to the Transformation happening in dbt. Without connecting these steps, you have no assurance that the load is complete before the transformation starts. This can be mitigated with freshness checks, but without an explicit connection between the load step and the transformation, you may be delaying transformation unnecessarily.
Fortunately, if you do use an external orchestrator, dbt Cloud offers an API to trigger dbt Cloud jobs from your orchestrator. Another reason an external orchestrator may be needed in an enterprise is when there are additional steps to run after the dbt transformation. These may include sending data to downstream BI tools, marketing campaign tools, MDM systems, or refreshing machine learning models. dbt Cloud recently announced a webhooks API that would allow for triggering steps after transformation. However, if these endpoints reside within a corporate firewall, this may not be an option and an orchestrator like Airflow may be necessary.
The dbt Cloud IDE does not allow for the installation of Python libraries and dbt Cloud cannot create and orchestrate Python scripts that leverage tools like SQLFluff (this is coming to dbt Cloud later this year), dbt-coves, dbt-fal, Snowpark, Permifrost, or Streamlit. Additionally, the dbt Cloud IDE cannot be extended with extensions such as the official Snowflake or Databricks VS Code extensions.
While dbt Cloud can send notifications via email and Slack, it does not support other popular enterprise systems such as MS Teams.
dbt Cloud offers industry-standard SaaS security appropriate for any enterprise. At the enterprise level, it offers Single Sign-On and Role Based Access Controls. Enterprises can also choose to have dbt Cloud provisioned on a dbt Labs managed private cloud account. dbt Cloud is only offered as a managed SaaS solution on AWS and thus cannot be deployed to a company's existing private cloud account.
dbt Cloud is especially attractive for organizations with limited in-house technical resources or those seeking to minimize the time and effort spent on initial setup, maintenance, and upgrades. dbt Cloud runs in the browser and thus removes the need for manual installation and setup of dbt components, allowing your team to focus on data transformation tasks. dbt Cloud handles updates and upgrades for you, so it is simple to stay up-to-date and secure. dbt Cloud includes built-in scheduling capabilities, allowing you to automate dbt runs and testing without relying on external schedulers or orchestration tools.
dbt Cloud may introduce additional complexities in large enterprises with more complex orchestration needs or those that need to host the environment within their private cloud. Finally, the dbt Cloud IDE may feel constraining for users who prefer a VS Code environment.
All these features can in turn reduce the time to value in setting up an end-to-end enterprise data management platform. It is possible to start out with a simple setup, but not accounting for the breadth of the entire platform may create technical debt and cause rework in the future.
Pros:
- Simplified setup and management of dbt environment
- Enhanced collaboration capabilities for data teams
- Integrated dbt job scheduling and monitoring
- Managed upgrades and updates
- Web-based IDE for streamlined development
- Metadata API
Cons:
- Reliance on a third-party platform
- May be more expensive for larger teams or organizations with complex requirements
- Not able to deploy within a company's private cloud
- Unable to leverage VS Code extensions and other Python libraries
- May require additional components or SaaS solutions for complex orchestrations
dbt Core is the heart of dbt, and since it is open source, anyone can use it freely. That being said, an enterprise data management platform must account for additional requirements such as the development experience, deploying and scheduling dbt runs, and creating a DataOps process. These will impact the total cost of ownership and the platform's time to market.
Leveraging dbt Core allows for ultimate flexibility, but it involves setting up a development environment on each team member's local machine or on a virtual environment such as an AWS workspace. This includes installing a popular dbt IDE like VS Code, installing dbt Core, configuring a connection to the data warehouse, and managing additional dependencies such as a compatible version of Python.
While the overall installation process for dbt is similar for both Windows and Mac users, there are some differences and potential gotchas to be aware of. Additionally, most organizations provide users with pre-configured computers and these systems may be delivered with software that conflicts with dbt Core. This complexity can lead to frustration and delays and may require assistance from senior members of the team, pulling them away from value-added work.
Once everything is installed, configuring dbt to connect to a data warehouse can also introduce complexity before a new team member can begin familiarizing themselves with the organization's dbt project structure and coding practices.
An enterprise setup of dbt should also include components that improve developer code quality and productivity. Popular IDEs like VS Code offer a vast array of extensions that improve the development experience, including dbt Power User, the VS Code extension for SQLFluff, and the Snowflake VS Code extension.
Configuring a development environment can take anywhere from a few days to as long as a week, depending on the person's technical aptitude and experience with dbt. To improve this process, companies spend weeks or months dialing in the installation process and documenting it in knowledge base tools like Confluence or Notion.
A development environment is not evergreen; it must be upgraded as new versions of dbt Core are released to add functionality or fix bugs. Upgrading can be as error-prone as a new installation, and some companies opt not to keep their enterprise data platform current. This, however, is ill-advised because this type of technical debt only gets harder to remediate over time.
When using dbt Core for your enterprise data platform, you will need to not only define and configure the automation scripts, but also ensure that all the components, such as a git server, CI server, and CI runners, are working harmoniously together.
Since dbt Core runs within the corporate firewall, it can be integrated with any CI tool and internal components such as Jira, Bitbucket, and Jenkins. This flexibility comes at a price: to do this well, all the project dependencies must be packaged into reusable Docker containers. Notifications will also need to be defined across the various components, and all of this takes time and money.
When setting up a deployment environment, companies using dbt Core can leverage any orchestration tool, such as Airflow. They can connect steps prior to or after the dbt transformations and they can trigger any tool that exists within or outside the corporate network.
However, scaling tools like Airflow is not straightforward and may require knowledge of additional technologies like Kubernetes. Airflow DAGs are also defined as Python files, so the team working with these orchestration scripts will need to know Python on top of SQL.
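As a rough illustration (not any particular platform's implementation), a minimal Python-based Airflow DAG that chains an extract/load step, a dbt Core run, and a downstream activation step might look like the following; the project path, schedule, and placeholder tasks are assumptions:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.empty import EmptyOperator

# Minimal sketch: chain Extract/Load, dbt transformation, and a downstream step.
with DAG(
    dag_id="daily_dbt_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="0 6 * * *",  # every day at 06:00
    catchup=False,
) as dag:
    extract_load = EmptyOperator(task_id="extract_load")  # e.g. trigger Fivetran or Airbyte here

    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="cd /opt/dbt_project && dbt build --target prod",  # hypothetical project path
    )

    refresh_dashboards = EmptyOperator(task_id="refresh_dashboards")  # e.g. refresh a BI extract

    extract_load >> dbt_build >> refresh_dashboards
```

Even in this simple form, the team owns the Python code, the Docker image it runs in, and the infrastructure that scales it.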
A mature enterprise platform will also require multiple Airflow environments for development, testing, and production, adding complexity and increasing the total cost of ownership.
Using dbt Core is inevitably the most flexible option, which is why many organizations choose to go this route. Companies can run dbt Core and leverage leading code editors like VS Code. They can install any additional Python dependencies, including proprietary code that is only available within the corporate firewall. They can also take advantage of the ever-growing set of VS Code extensions, such as those for Snowflake or Databricks.
Teams using dbt Core can send notifications to Slack, MS Teams, or any other platform, as long as they develop and maintain these integrations themselves.
However, this extensibility will only increase the platform's total cost of ownership and the time it takes to deploy the end-to-end platform.
One reason some enterprises choose dbt Core is that they are not able to leverage SaaS solutions due to compliance or data privacy requirements such as GDPR. By leveraging dbt Core, they retain full control over security because dbt Core runs within the corporate network and complies with its policies.
Running dbt Core within the corporate firewall also eliminates the need for lengthy risk assessments. However, ensuring that security patches are applied is left to the organization. Managing Single Sign-On for tools like Airflow and setting up Role-Based Access Controls are also problems that will have to be solved when using dbt Core as a dbt enterprise data management platform.
A lot of the power and flexibility of dbt Core comes from being able to customize the entire dbt experience. However, all of this comes at the cost of having to build a data management platform. It is not uncommon for organizations to take six months to a year to set up and test their custom dbt enterprise data management platform. They will also likely need to hire external consultants who have worked with a myriad of technologies and understand how to integrate them.
Building a platform is not for the faint of heart and since it is something custom to the organization, it will also increase the total cost of ownership. Some organizations get the basics in place just to make their teams productive, but eventually, they find that without proper design and implementation, they cannot take full advantage of the value of dbt and the rest of the modern data stack.
Using dbt Core as a dbt enterprise data platform can be a cost-effective option for smaller teams with limited requirements and budgets. It can also be a good fit for those trying out dbt. Large organizations with stringent security requirements for sensitive data and strong technical platform teams may also choose this option, as it allows them to deploy in a way that is compliant with corporate policies.
As platform complexity grows, so do the cost and time needed to implement the end-to-end platform. Larger organizations should consider whether developing something custom is better than using a managed solution, or whether taking shortcuts will reduce the value of dbt and the modern data stack.
Pros:
- Full control over the dbt environment and configurations
- No reliance on third-party services
- No subscription costs
- Able to meet corporate security requirements by installing within the corporate firewall
Cons:
- Requires technical expertise in defining the installation steps and documenting them
- May require hands-on involvement during setup and upgrades
- It can be time-consuming and challenging to maintain consistency across team members
- Upgrades and dependency management may be more complex compared to managed solutions
- Resolving installation issues can consume senior team members' time
- Total cost of ownership of the platform may be higher than using a managed solution
- Additional expertise will be needed to set up all the required technologies
- It will take longer to implement than a managed solution
We have seen that dbt Cloud is a great managed dbt Core platform, but it has some limitations for large or complex deployments. On the other hand, using dbt Core alone gives organizations ultimate flexibility, but it comes at an added expense in terms of the time and cost it takes to stand up and maintain a custom, one-off enterprise data management platform.
What if you could combine the simplicity of dbt Cloud with the flexibility of dbt Core? A managed dbt Core platform, like Datacoves, can combine some of the best of both worlds. However, not all managed platforms are the same. Below we discuss what to look for when considering a managed dbt Core enterprise data platform. Check out our dbt pricing page for more pricing information.
By far, the tool enterprise dbt users will interact with most is the IDE. When considering a managed dbt Core platform, consider the developer experience. Will they have VS Code in the browser? Will you get a curated, best-in-class dbt IDE like VS Code with integrated libraries like SQLFluff and extensions that simplify the developer workflow? When required, will you be able to customize the developer environment with company-specific Python libraries and extensions?
In a best-in-class developer setup, new users are onboarded in minutes with simple configuration screens that remove the need to edit text files such as profiles.yml and remove the complexity of creating and managing SSH keys. Version upgrades of dbt or any dependent library should be transparent to users.
Some platforms will also have added flexibility and let you leverage VS Code for non-dbt development tasks, such as opening a Jupyter notebook environment or a Python development environment with libraries such as Python Black.
Spinning up a pristine environment should be a matter of clicks. To test new dbt versions, you should be able to run your project in an isolated environment before switching all developers to the new version of dbt.
The advantage of a managed dbt Core platform when it comes to DataOps is that you do not have to worry about creating Docker images with all the dependencies you need for your continuous integration (CI) tests. Working with a partner that has experience with enterprise CI/CD platforms like Jenkins will also accelerate your time to value. A knowledgeable managed dbt Core platform partner will know best practices and guide you in setting up a world-class workflow that will scale with your enterprise. Developers should also be able to execute CI validations locally before pushing their changes to GitHub, reducing the back and forth needed to fix issues during CI checks.
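For example, a developer could run the same lint checks locally that the CI pipeline enforces. The snippet below is a minimal sketch using SQLFluff's simple Python API; the dialect and sample query are assumptions, and a real dbt project would typically configure SQLFluff (including the dbt templater) in a .sqlfluff file and run it via the CLI or a pre-commit hook:

```python
import sqlfluff

# Minimal sketch: lint a SQL snippet locally before pushing, mirroring a CI check.
sql = "SELECT order_id, customer_id, SUM(amount) AS total_amount FROM stg_orders GROUP BY 1, 2\n"

violations = sqlfluff.lint(sql, dialect="snowflake")  # returns a list of rule violations
if violations:
    for violation in violations:
        print(violation)  # each violation includes the rule code, description, and position
else:
    print("No linting issues found")
```

Catching these issues before a push avoids a round trip through the CI server for problems that take seconds to fix locally.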
Some managed dbt Core platforms can run within your corporate firewall, allowing you to leverage existing tools like Jira, Bitbucket, Jenkins, or Gitlab. All this simplifies and accelerates the rollout of a dbt data management platform.
Enterprises are complex, and as such, they must consider a flexible orchestration tool that has wide adoption and good documentation and that can scale to the needs of the company.
A mature enterprise dbt Core platform will include a full-fledged orchestrator such as Airflow. With Airflow, organizations can orchestrate pipelines that include Extract, Load, Transform, and Activation steps. Other aspects to consider are whether Airflow jobs can scale by leveraging technologies such as Kubernetes and whether it will be simple to send notifications to enterprise tools such as MS Teams.
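To make the notification point concrete, the following is a minimal sketch of an Airflow failure callback that posts to a Microsoft Teams incoming webhook; the webhook URL is a placeholder and the payload is kept to a simple text message:

```python
import requests

TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/your-webhook-id"  # placeholder

def notify_teams_on_failure(context):
    """Airflow on_failure_callback that sends a short message to an MS Teams channel."""
    ti = context["task_instance"]
    message = f"Airflow task failed: {ti.dag_id}.{ti.task_id} (run {context['run_id']})"
    requests.post(TEAMS_WEBHOOK_URL, json={"text": message}, timeout=10)

# Attach it to tasks via default_args, for example:
# default_args = {"on_failure_callback": notify_teams_on_failure}
```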
Some managed dbt Core platforms have also adopted mechanisms that simplify Airflow DAG creation by letting users define DAGs in yml files instead of writing traditional Python-based DAGs.
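The yml-driven approach can be pictured as a thin layer of Python that reads a config file and generates the DAG. The sketch below is illustrative only and does not reflect any specific platform's yml schema; the file path and keys are assumptions:

```python
from datetime import datetime

import yaml
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical YAML file, e.g. /opt/airflow/dags/config/daily_pipeline.yml:
#   dag_id: daily_pipeline
#   schedule: "0 6 * * *"
#   tasks:
#     - name: dbt_build
#       command: "dbt build --target prod"
with open("/opt/airflow/dags/config/daily_pipeline.yml") as f:
    config = yaml.safe_load(f)

with DAG(
    dag_id=config["dag_id"],
    start_date=datetime(2023, 1, 1),
    schedule_interval=config["schedule"],
    catchup=False,
) as dag:
    previous = None
    for task_cfg in config["tasks"]:
        task = BashOperator(task_id=task_cfg["name"], bash_command=task_cfg["command"])
        if previous is not None:
            previous >> task  # chain tasks in the order they appear in the YAML
        previous = task
```

The benefit is that analysts edit a small, declarative file while the platform owns the Python that turns it into a scheduled pipeline.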
A complete enterprise data management platform will allow you to easily create multiple Airflow environments for development, testing, and production. It will also eliminate the burden of having to create Docker images with your dbt dependencies.
An enterprise dbt Core platform will streamline the process of ramping up with dbt and it will offer the organization the flexibility it requires. This may include triggering AWS Glue jobs as well as Fivetran jobs. It may include triggering internal and external APIs or it may require a pre-configured development environment with Python libraries like Snowpark or Streamsets.
When it comes to security, different industries have different requirements. Regulated industries such as Life Sciences or Finance may not want to, or may be unable to, use SaaS solutions, so being able to deploy a dbt enterprise data management platform within the corporate firewall is a must.
Integrating with SSO solutions like Okta or PingID is also a must, and managing users' authorization with existing mechanisms like Active Directory simplifies compliance with corporate policies.
Another area that may be overlooked when using open-source solutions is validating that libraries are up to date and that no vulnerabilities are introduced into the network. A robust managed dbt Core platform will be able to leverage internal registries like Artifactory, which are set up to scan for these issues.
Finally, at enterprise scale, the managed dbt Core platform must have robust Role-Based Access Controls to ensure users have the appropriate level of access.
Just like dbt Cloud, a reliable enterprise dbt Core data management platform will reduce the time to value. Teams will not have to figure out how to host and configure the multitude of tools and libraries, and they will have a starting point for implementing best practices from the start of the project.
By eliminating the need to stand up a platform, teams can focus on change management and the new ways of working that will have the biggest impact on the organization. The total cost of ownership will be lower since the company does not have to build and constantly maintain a one-off platform. Instead, they will benefit from working with a partner that works with other large organizations and looks for ways to improve and enhance the platform for everyone.
A managed dbt Core enterprise data platform is necessary for organizations that are not in the business of creating and managing complex technologies and those who need the flexibility that comes with complex enterprise tools and processes. When dbt Core must be deployed within the corporate firewall, an enterprise managed dbt data platform can also be a good option for companies with stringent security requirements to protect sensitive data.
Smaller organizations that want to leverage a managed VS Code environment or who want to implement DataOps from the outset can also realize the benefits of a SaaS managed dbt Core data platform.
A managed dbt Core enterprise data platform gives companies the simplicity of dbt Cloud with the peace of mind of dbt Core. Best of all, if the platform leverages open-source technologies, there is no vendor lock-in.
Pros:
- Simplified developer onboarding
- Robust IDE such as VS Code
- Ability to configure a dbt environment
- Managed orchestrator such as Airflow
- Able to meet corporate security requirements by installing within the corporate firewall
- Requires a lot less technical expertise to set up and run
- Simplified dependency management across the development environment, CI/CD worker, and Airflow
- Lower total cost of ownership
Cons:
- Unable to have full control over the platform
- Reliance on a third-party platform
- Licensing costs
Selecting between dbt Cloud, dbt Core, or a managed dbt Core enterprise platform comes down to an organization's internal expertise and its desire to focus on data versus developing and maintaining a custom dbt platform. The beauty of the dbt ecosystem is that the components are freely available, and with enough time and money, anyone can build what they need. However, enterprises that want to move fast and leverage the full potential of dbt should consider a managed dbt Core enterprise platform such as Datacoves to reduce the total cost of ownership and accelerate time to market. It is also important to be aware of the dbt alternatives in order to make an informed decision.
A few years ago, we identified a need to simplify how companies implement the Modern Data Stack, both from an infrastructure and process perspective. This vision led to the creation of Datacoves. Our efforts were recognized by Snowflake as we became a top-10 finalist in the 2022 Snowflake Startup Challenge. We have become trusted partners at Fortune 100 companies, and now we are ready for our next chapter.
Datacoves is thrilled to announce that we have been selected to be part of the Spring 2023 cohort at TinySeed. Their philosophy and culture align flawlessly with our values and aspirations. Our goal has always been to help our customers revolutionize the way their teams work with data, and this investment will enable us to continue delivering on that vision.
The TinySeed investment will also strengthen our commitment to supporting open-source projects within the dbt community, such as SQLFluff, dbt-checkpoint, dbt-expectations, and dbt-coves. Our contributions will not only benefit our customers but also the broader dbt community, as we believe open-source innovation enhances the tools and resources available to everyone.
We want to express our deep gratitude to Tristan Handy, CEO & Founder at dbt Labs and the entire dbt Labs team for partnering with us on the 2023 State of Analytics Engineering Community survey and for creating the amazing dbt framework. Without dbt, Datacoves would not be where it is today, and our customers would not enjoy the benefits that come with our solutions.
As we continue to grow and evolve, we look forward to finding new ways to collaborate with dbt Labs and the dbt community to further enhance how people use dbt and provide the best possible experience for everyone. Together, we can empower data teams around the world and revolutionize the way they work.
In continuation of our previous blog discussing the importance of implementing DataOps, we now turn our attention to the tools that can efficiently streamline your processes. Additionally, we will explore real-life examples of successful implementations, illustrating the tangible benefits of adopting DataOps practices.
Many DataOps tools can help you automate data processes, manage data pipelines, and ensure the quality of your data. These tools help data teams work faster, make fewer mistakes, and deliver data products more quickly.
Here are some recommended tools needed for a robust DataOps process:
DataOps has been used successfully in the real world by companies of all sizes, from small startups to large corporations, and across many industries, including finance, healthcare, retail, and technology. The methodology is based on collaboration, automation, and monitoring throughout the entire data lifecycle, from collecting data to using it. With it, organizations can get insights faster, be more productive, and improve the quality of their data.
Here are a few examples of real-world organizations that have used DataOps well:
DataOps has a bright future because more businesses are realizing how important data is to their success. With the exponential growth of data, managing it well is becoming ever more important for organizations. More companies will likely adopt DataOps as they try to streamline their data management processes and cut costs. Cloud-based data management platforms have also made it easier for organizations to manage their data well; their main benefits are that they are scalable, flexible, and cost-effective. With DataOps, teams can improve collaboration and agility and build trust in data by creating processes that test changes before they are rolled out to production.
With the development of modern data tools, companies can now adopt software development best practices in analytics. In today’s fast-paced world, it's important to give teams the tools they need to respond quickly to changes in the market by using high-quality data. Companies should use DataOps if they want to manage data better and reduce the technical debt created by uncontrolled processes. Putting DataOps processes in place for the first time can be hard, and it's easier said than done. DataOps requires a change in attitude, a willingness to try out new technologies and ways of doing things, and a commitment to continuous improvement. If an organization is serious about using DataOps, it must invest in the training, infrastructure, and cultural changes needed to make it work. With the right approach, companies can get the most out of DataOps and help their businesses deliver better outcomes.
At Datacoves, we offer a suite of DataOps tools to help organizations implement DataOps quickly and efficiently. We enable organizations to start automating simple processes and gradually build out more complex ones as their needs evolve. Our team has extensive experience guiding organizations through the DataOps implementation process.
Schedule a call with us, and we'll explain how dbt and DataOps can help you mature your data processes.
The amount of data produced today is mind-boggling and is expanding at an astonishing rate. This explosion of data has led to big data challenges that organizations must overcome to remain competitive. Organizations must effectively manage data and guarantee its accuracy and quality in order to derive actionable insights from it. This is where DataOps comes in. In this first post we give an overview of DataOps, and in our next post we will discuss tooling that supports DataOps as well as companies that have seen the benefits of implementing DataOps processes.
DataOps is a methodology that merges DevOps principles from software development with data management practices. This helps to improve the process of developing, deploying, and maintaining data-driven applications. The goal is to accelerate the delivery of new insights, and the approach emphasizes collaboration among data scientists, data engineers, data analysts, and analytics engineers.
Automation and continuous delivery are leveraged for faster deployment cycles. Organizations can enhance their data management procedures, decrease errors, and increase the accuracy and timeliness of their data by implementing DataOps.
The overarching goal is to improve an organization's capacity for decision-making by facilitating quicker, more precise data product deliveries. DataOps can speed up task completion with fewer errors by encouraging effective team collaboration, ultimately enhancing the organization's success.
DataOps is now feasible thanks to modern data stack tools. Modern data tools like Snowflake and dbt enable delivery of new features more quickly while upholding IT governance standards. This empowers employees and improves their agility, all while helping to create a culture that values data-driven decision-making. Effective DataOps implementation enables businesses to maximize the value of their data and build a data-driven culture for greater success.
It is impossible to overstate the role that DataOps plays in overcoming big data challenges. Organizations must learn how to manage data effectively as data volumes continue to grow if they want to stay competitive.
DataOps addresses these challenges by providing a set of best practices and tools that enable data teams to work more efficiently and effectively. Some of the key benefits of DataOps include:
In a world without DataOps, data-related processes are slow and error-prone, and documentation is often outdated. DataOps provides the opportunity to rethink how data products are delivered. It promotes a culture of federated delivery in which everyone in the organization with the necessary skills is empowered to participate in insight generation.
We can achieve comparable results while preserving governance over the delivery process by learning from large, distributed open-source software development projects like the Linux project. To ensure high-quality output, a good DataOps process places an emphasis on rules and procedures that are monitored and controlled by a core team.
To implement DataOps well, one needs to take a holistic approach that brings together people, processes, and technology. Here are the five most important parts of DataOps that organizations need to think about in order to implement it successfully:
Having identified what needs to be done to achieve a good DataOps implementation, you may be wondering how we turn these ideas into actionable steps.
At Datacoves, we offer a comprehensive suite of DataOps tools to help organizations implement robust processes quickly and efficiently. One of the key benefits of using Datacoves is the ability to implement DataOps incrementally using our dynamic Infrastructure. Organizations can start by automating simple processes, such as data collection and cleaning, and gradually build out more complex processes as their needs evolve.
Our agile methodology allows for iterative development and continuous deployment, ensuring that our tools and processes are tailored to meet the evolving needs of our clients. This approach allows organizations to see the benefits of DataOps quickly, without committing to a full-scale implementation upfront. We have helped teams implement mature DataOps processes in as little as six weeks.
The Datacoves platform is being used by top global organizations and our team has extensive experience in guiding organizations through the DataOps implementation process. We ensure that you benefit from best practices and avoid common pitfalls when implementing DataOps.
Set aside some time to speak with us to discuss how we can support you in maturing your data processes with dbt and DataOps.
Don't miss out on our latest blog, where we showcase the best tools and provide real-life examples of successful implementations.
Get ready to take your data operations to the next level!
Don’t let platform limitations or maintenance overhead hold you back.
Datacoves is an enterprise DataOps platform with managed dbt Core and Airflow for data transformation and orchestration, as well as VS Code in the browser for development.