Datacoves blog

Learn more about dbt Core, ELT processes, DataOps,
modern data stacks, and team alignment by exploring our blog.
Why don't decision makers trust your analytics?
5 mins read

Companies fail at leveraging analytics because people tend to focus on the destination instead of the journey that leads to the solutions with the most impact on the business. Time and time again, I see people focus on so-called shiny objects, like new tools, new techniques, or even new people, that appear to be the silver bullet everyone needs. The truth is, if you go back to first principles and start with true alignment, good data processes, and user-centric experiences, project success and satisfaction are achievable.

Lack of Alignment Reduces Faith in the Solution

Every project I have been a part of started with a sense of optimism and excitement. The honeymoon phase was great. Everyone was united; we had gotten the funding, selected vendor partners, and purchased whatever technology was part of the solution. We all spoke the same language, everyone got to work, management started getting progress updates, and everyone thought we were off to a great start.

It wasn't until real decisions needed to be made that we realized the honeymoon was over. In every single instance, an excessive amount of time was spent in meetings arguing and reaching some level of consensus until the next decision. The reason this happened was that we didn't really spend the time to get on the same page. People assumed that we were aligned because, at a high level, we were all talking about the key points of the given initiative: digital transformation, self-service analytics, customer mastering, data lakes, etc.

But we were not really thinking the same things. Everyone had different backgrounds and expertise on different parts of the solution: regulatory requirements, technology limitations, end-user needs, etc. There were also things no one knew at the start, and we didn't have a north star to guide these decisions. We all appeared to be saying the same things, but we were thinking very differently.

You may be using the same words, but you are envisioning different things.

I have seen the pressure to get started on a project and show progress lead to delays and ultimate dissatisfaction with the end result. On projects where we have spent a couple of weeks getting aligned using a structured approach to product discovery, we ended up with better estimates and better overall satisfaction.

In any analytics-related project, the same things apply: the team needs to understand the business objectives, the current state (so the new process isn't worse), and the risks, and to prioritize the high-level features. Most importantly, the team needs to align on what's NOT in the new solution and on the criteria, such as quality, feature completeness, or usability, that will be used to prioritize decisions. Agile does not mean no planning.

Trust starts by listening to people and creating a shared vision that sets the right expectations from day one. You can create an achievable plan if everyone knows what you are trying to achieve.

People Don't Trust the Data

Let's face it, your data processes get no love, usually because they are seen as "too technical." Your users don't care about databases, schemas, tables, or columns, let alone the process of converting raw facts into business-ready insights. It's easy for management to see a fancy dashboard and get excited about the possibility of machine learning, but talk about data and people's eyes glaze over.

It kind of makes sense; most people don't understand how the power grid works. We all take it for granted. We flip a switch, the light turns on, and we move forward. No one cares about electricity until something goes wrong. In a lot of organizations, things go wrong with data more often than you would think. Sometimes people notice right away, but other times failures are silent. When something does go wrong, everyone goes into firefighting mode. Meetings are held, issues are discovered, and patches to "prevent" the failure are put in place. The time to think about the inevitable is not once things break; you need to anticipate failure and design for resilience.

Fighting fires hinders progress and erodes trust.

The issue here is that we don't think of the process of going from raw data to insights as a single system. It is all interconnected and needs to be treated as such. When it comes to analytics, sometimes it feels like companies want to build a mansion on a foundation atop quicksand. Initially, all seems fine, and everyone is in the house decorating until someone notices that a corner of the house is sinking. Everyone goes outside, props up the corner, and they happily go back inside to decide what color to paint the next room.

You can't build a house on quicksand; you need to set up repeatable processes with quality built in from the start. If we want collaboration, we have to build it in. If you want to be able to do impact analysis, guess what? You can't retrofit that later if you didn't do it from the start. Having documented analytics is not magic; you need this to be part of the culture and part of the process. The good thing is that many smart people have faced the same issues, and there are examples we can see where people are doing things right.

If you want users to trust data analytics, they need to trust the data, and they need to believe in a solid process that is built on a solid foundation.  

Bad User Experiences Erode Confidence

When you try to please everyone, you please no one, and in many companies, technical teams try to do everything they are asked. They jump through hoops to deliver projects, but it is very common for people to be dissatisfied with the end results. I have also seen new tools used like old ones. Teams sometimes take the approach that the new process is just affecting some part of the current broken process, so they only incrementally change it. I have seen Tableau dashboards that are essentially Excel on the web with some automation.

Instead of asking users what they want, we need to understand what they need and why. What are they trying to accomplish? What's wrong with how they do things today? Is the new process / tool you are putting in place better than what they already have? Sometimes it makes more sense to leave a current process as-is until other parts of the system are improved.

You will not build confidence in your solution if your users are frustrated.

When you understand the real need for an omni-channel dashboard or a sales dashboard, you design the solution to help you achieve that goal. If your users need to quickly get in and out of the tool, you can find ways to reduce the number of clicks it takes them to get there. You simplify access, and you surface the most important information first. You build the solution around them, and more importantly, you are able to justify your decisions and explain why certain things need to be de-prioritized. When users see that you empathize with them, they trust you. They don't push back on every choice because you have demonstrated time and again that you have their best interests at heart.

Conclusion

Getting decision-makers to trust data analytics is no different than getting anyone to trust anything. You need to start with alignment and set the right expectations; you need to build end-to-end processes that are robust; and you need to deliver the tools that facilitate the job users do.

optimize dbt slim ci
5 mins read

Any experienced data engineer will tell you that efficiency and resource optimization are always top priorities. One powerful feature that can significantly optimize your dbt CI/CD workflow is dbt Slim CI. However, despite its benefits, some limitations have persisted. Fortunately, the recent addition of the --empty flag in dbt 1.8 addresses these issues. In this article, we will share a GitHub Action Workflow and demonstrate how the new --empty flag can save you time and resources.

What is dbt Slim CI?

dbt Slim CI is designed to make your continuous integration (CI) process more efficient by running only the models that have been changed and their dependencies, rather than running all models during every CI build. In large projects, this feature can lead to significant savings in both compute resources and time.

Key Benefits of dbt Slim CI

  • Speed Up Your Workflows: Slim CI accelerates your CI/CD pipelines by skipping the full execution of all dbt models. Instead, it focuses only on the modified models and their dependencies and uses the defer flag to pull the unmodified models from production. So, if we have models A, B, and C, yet only make changes to C, then only model C will be run during the CI/CD process.
  • Save Time, Snowflake Credits, and Money: By running only the necessary models, Slim CI helps you save valuable build time and Snowflake credits. This selective approach means fewer computational resources are used, leading to cost savings.

dbt Slim CI flags explained

dbt Slim CI is implemented using the following flags:

--select state:modified+: The state:modified selector includes only the models whose "state" has changed (been modified) relative to a previous run. Adding the + suffix (state:modified+) tells dbt to run the modified models as well as their downstream dependencies.

--state <path to production manifest>: The --state flag specifies the directory where the artifacts from a previous dbt run are stored, i.e., the production dbt manifest. By comparing the current branch's manifest with the production manifest, dbt can identify which models have been modified.

--defer: The --defer flag tells dbt to pull upstream models that have not changed from a different environment (database). Why rebuild something that exists somewhere else? For this to work, dbt will need access to the dbt production manifest.

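Putting these flags together, the CI build command looks roughly like the following sketch (the --state path is illustrative and depends on where your pipeline downloads the production manifest):

dbt build --fail-fast --select state:modified+ --defer --state prod-artifacts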

You may have noticed that there is an additional flag in the command above.  

--fail-fast: The --fail-fast flag is an example of an optimization flag that is not essential to a barebones Slim CI but can provide powerful cost savings. This flag stops the build as soon as an error is encountered instead of allowing dbt to continue building downstream models, thereby reducing wasted builds. To learn more about these arguments, have a look at our dbt cheat sheet.

dbt Slim CI with Github Actions before dbt 1.8

The following sample GitHub Actions workflow is executed when a Pull Request is opened, i.e., when you have a feature branch that you want to merge into main.

Sample GitHub Actions workflow, prior to dbt 1.8, executed when a Pull Request is opened

Workflow Steps

Checkout Branch: The workflow begins by checking out the branch associated with the pull request to ensure that the latest code is being used.

Set Secure Directory: This step ensures the repository directory is marked as safe, preventing potential issues with Git operations.  

List of Files Changed: This command lists the files changed between the PR branch and the base branch, which provides context for the changes and is helpful for debugging.

Install dbt Packages: This step installs all required dbt packages, ensuring the environment is set up correctly for the dbt commands that follow.

Create PR Database: This step creates a dedicated database for the PR, isolating the changes and tests from the production environment.

Get Production Manifest: Retrieves the production manifest file, which will be used for deferred runs and governance checks in the following steps.

Run dbt Build in Slim Mode or Run dbt Build Full Run: If a manifest is present in production, dbt will be run in slim mode with deferred models. This build includes only the modified models and their dependencies. If no manifest is present in production, we fall back to a full build of all models.
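A rough sketch of this conditional logic in the workflow's run step, assuming the production manifest (when it exists) has been downloaded to a prod-artifacts directory:

if [ -f prod-artifacts/manifest.json ]; then
  # Slim CI: build only modified models and their children, deferring unchanged models to production
  dbt build --fail-fast --select state:modified+ --defer --state prod-artifacts
else
  # No production manifest yet: build everything
  dbt build --fail-fast
fi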

Grant Access to PR Database: Grants the necessary access to the new PR database for end user review.

Generate Docs Combining Production and Branch Catalog: If a dbt test is added to a YAML file, the model itself will not be rebuilt, meaning it will not be present in the PR database. However, some governance checks (dbt-checkpoint) need the model to exist in the database, and its absence would cause a failure. To solve this, the generate docs step is added to merge the catalog.json from the current branch with the production catalog.json.

Run Governance Checks: Executes governance checks such as SQLFluff and dbt-checkpoint.

Problems with the dbt CI/CD Workflow

As mentioned at the beginning of the article, there is a limitation to this setup. In the existing workflow, governance checks need to run after the dbt build step because dbt-checkpoint relies on the manifest.json and catalog.json. However, if these governance checks fail, the dbt build step will need to run again once the governance issues are fixed. As shown in the diagram below, after running our dbt build, we proceed with governance checks. If these checks fail, we need to resolve the issue and re-trigger the pipeline, leading to another dbt build. This cycle can lead to unnecessary model builds even when leveraging dbt Slim CI.

dbt CI/CD process before dbt 1.8

Leveraging the --empty Flag for Efficient dbt CI/CD Workflows

The solution to this problem is the --empty flag in dbt 1.8. This flag allows dbt to perform schema-only dry runs without processing large datasets. It's like building the wooden frame of a house—it sets up the structure, including the metadata needed for governance checks, without filling it with data. The framework is there, but the data itself is left out, enabling you to perform governance checks without completing an actual build.
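In practice, the governance run can reuse the same Slim CI command with --empty appended, along these lines (the --state path is again illustrative):

dbt build --fail-fast --select state:modified+ --defer --state prod-artifacts --empty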

Let’s see how we can rework our GitHub Action:

Sample GitHub Action with dbt 1.8

Workflow Steps

Checkout Branch: The workflow begins by checking out the branch associated with the pull request to ensure that the latest code is being used.

Set Secure Directory: This step ensures the repository directory is marked as safe, preventing potential issues with Git operations.  

List of Files Changed: This step lists the files changed between the PR branch and the base branch, which provides context for the changes and is helpful for debugging.

Install dbt Packages: This step installs all required dbt packages, ensuring the environment is set up correctly for the dbt commands that follow.

Create PR Database: This command creates a dedicated database for the PR, isolating the changes and tests from the production environment.

Get Production Manifest: Retrieves the production manifest file, which will be used for deferred runs and governance checks in the following steps.

*NEW* Governance Run of dbt (Slim or Full) with EMPTY Models: If there is a manifest in production, this step runs dbt in slim mode with the --empty flag (otherwise it performs a full, empty run). The models are built in the PR database with no data inside, and we can now use the resulting catalog.json to run our governance checks. Since the models are empty, we have everything we need to run our checks while saving on compute costs as well as run time.

Generate Docs Combining Production and Branch Catalog: If a dbt test is added to a YAML file, the model itself will not be rebuilt, meaning it will not be present in the PR database. However, some governance checks (dbt-checkpoint) need the model to exist in the database, and its absence would cause a failure. To solve this, the generate docs step is added to merge the catalog.json from the current branch with the production catalog.json.

Run Governance Checks: Executes governance checks such as SQLFluff and dbt-checkpoint.

Run dbt Build: Runs dbt build using either slim mode or full run after passing governance checks.

Grant Access to PR Database: Grants the necessary access to the new PR database for end user review.

By leveraging the dbt --empty flag, we can materialize models in the PR database without the computational overhead, as the actual data is left out. We can then use the metadata that was generated during the empty build. If any checks fail, we can repeat the process without worrying about wasting computational resources on an actual build. The cycle still exists, but we have moved our real build outside of this cycle and replaced it with an empty or "fake" build. Once all governance checks have passed, we can proceed with the real dbt build of the dbt models, as seen in the diagram below.

dbt CI/CD process after dbt 1.8 with the --empty flag

Conclusion

dbt Slim CI is a powerful addition to the dbt toolkit, offering significant benefits in terms of speed, resource savings, and early error detection. However, we still faced the issue of wasted model builds when governance checks failed. By incorporating dbt 1.8’s --empty flag into your CI/CD workflows, you can reduce wasted model builds to zero, improving the efficiency and reliability of your data engineering processes.

🔗 Watch the video where Noel explains the --empty flag implementation in GitHub Actions.

dbt Jinja cheat sheet
5 mins read

Jinja templating in dbt offers flexibility and expressiveness that can significantly improve SQL code organization and reusability. There is a learning curve, but this cheat sheet is designed to be a quick reference for data practitioners, helping to streamline the development process and reduce common pitfalls.

Whether you're troubleshooting a tricky macro or just brushing up on syntax, bookmark this page. Trust us, it will come in handy and help you unlock the full potential of Jinja in your dbt projects.

If you find this cheat sheet useful, be sure to check out our Ultimate dbt Jinja Functions Cheat Sheet. It covers the specialized Jinja functions created by dbt, designed to enhance versatility and expedite workflows.

dbt Jinja: Basic syntax

This is the foundational syntax of Jinja, from how to comment to the difference between statements and expressions.
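As a quick, illustrative refresher, the three building blocks look like this:

Comment (ignored in the compiled SQL):
{# This comment will not appear in the compiled output #}
Statement (performs logic such as setting a variable; renders nothing):
{% set my_variable = 'datacoves' %}
Expression (renders a value into the compiled SQL):
{{ my_variable }}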

dbt Jinja: Variable assignment 

Define and assign variables in different data types such as strings, lists, and dictionaries.
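For example (the values are illustrative and reuse names that appear in the looping examples below):

String:
{% set my_string = 'datacoves' %}
List:
{% set rating_categories = ["quality_rating", "design_rating", "usability_rating"] %}
Dictionary:
{% set delivery_type_dict = {"a": "digital_download", "b": "physical_copy"} %}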

dbt Jinja: White space control 

Jinja allows fine-grained control over white spaces in compiled output. Understand how to strategically strip or maintain spaces.
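As an illustration, adding a hyphen inside the delimiters strips the surrounding whitespace and newlines from the compiled output, while leaving it out preserves them:

Keep surrounding whitespace:
{% set my_variable = 'datacoves' %}
Strip surrounding whitespace:
{%- set my_variable = 'datacoves' -%}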


dbt Jinja: Control flow

In dbt, conditional structures guide the flow of transformations. Grasp how to integrate these structures seamlessly.

Control Flow
If/elif/else/endif
{%- if target.name == 'dev' -%}
{{ some code }}
{%- elif target.name == 'prod' -%}
{{ some other code }}
{%- else -%}
{{ some other code }}
{%- endif -%}

dbt Jinja: Looping

Discover how to iterate over lists and dictionaries, and understand simple loop syntax and how to access loop properties.

Looping
Loop Syntax
{%- for item in my_iterable -%}
  --Do something with item
  {{ item }}
{%- endfor -%}
loop.last
This boolean is False unless the current iteration is the last iteration.
{% for item in list %}
  {% if loop.last %}   
    --This is the last item
    {{ item }}
  {% endif %}
{% endfor %}
loop.first
A boolean that is True if the current iteration is the first iteration, otherwise False.
{% for item in list %}
  {% if loop.first %}
    --first item
    {{ item }}
  {% endif %}
{% endfor %}
loop.index
An integer representing the current iteration of the loop (1-indexed). So, the first iteration would have loop.index of 1, the second would be 2, and so on.
{% for item in list %}
   --This is item number
   {{ loop.index }}
{% endfor %}
Looping a List
{% set rating_categories = ["quality_rating",
                            "design_rating",
                            "usability_rating"] %}
SELECT product_id,
 {%- for col_name in rating_categories -%}
   AVG({{ col_name }}) as {{ col_name }}_average
   {%- if not loop.last  -%} 
     , 
   {%- endif -%}
 {%- endfor -%}
 FROM product_reviews
 GROUP BY 1

Compiled code
SELECT product_id,
   AVG(quality_rating) as quality_rating_average,
   AVG(design_rating) as design_rating_average,
   AVG(usability_rating) as usability_rating_average
FROM product_reviews
GROUP BY 1
Looping a Dictionary
{% set delivery_type_dict = {"a": "digital_download",
                             "b": "physical_copy"} %}
SELECT order_id,
{%- for type, column_name in delivery_type_dict.items() -%}
COUNT(CASE 
      WHEN delivery_method = '{{ type }}' THEN order_id 
      END) as {{ column_name }}_count
      {%- if not loop.last  -%}
       , 
      {%- endif -%}
      {%- endfor -%}
FROM order_deliveries
GROUP BY 1

Compiled code
SELECT order_id,
COUNT(CASE 
      WHEN delivery_method = 'a' THEN order_id 
      END) as digital_download_count,
COUNT(CASE 
      WHEN delivery_method = 'b' THEN order_id 
      END) as physical_copy_count
FROM order_deliveries
GROUP BY 1

dbt Jinja: Operators 

These logical and comparison operators come in handy, especially when defining tests or setting up configurations in dbt.

Logic Operators
and
{% if condition1 and condition2 %}
or
{% if condition1 or condition2 %}
not
{{  not condition1 }}

Comparison Operators
Equal To
{% if 1 == 2 %}
Not Equal To
{% if 1 != 2 %}
Greater Than
{% if 1 > 2 %}
Less Than
{% if 1 < 2 %}
Greater Than or Equal to
{% if 1 >= 2 %}
Less Than or Equal To
{% if 1 <= 2 %}

dbt Jinja: Variable tests

Within dbt, you may need to validate whether a variable is defined or whether a value is odd or even. These Jinja variable tests allow you to validate with ease.

Jinja Variable Tests
Is Defined
{% if my_variable is defined %}
-- Handle conditions when variable exists
{% endif %}
Is None

{% if my_variable is none %}
-- Handle absence of my_variable
{% endif %}
Is Even

{% if my_variable is even %}
-- Handle when my_variable is even
{% endif %}
Is Odd

{% if my_variable is odd %}
-- Handle when my_variable is odd
{% endif %}
Is a String

{% if my_variable is string %}
-- Handle when my_variable is a string
{% endif %}
Is a Number

{% if my_variable is number %}
-- Handle when my_variable is a number
{% endif %}

dbt Jinja: Creating macros & tests

Macros are the backbone of advanced dbt workflows. Review how to craft these reusable code snippets and also how to enforce data quality with tests.

Creating Macros & Tests
Define a Macro
Write your macros in your project's macros directory.
{% macro ms_to_sec(col_name, precision=3) %}   
  ( {{ col_name }} / 1000 )::numeric(16, {{ precision }})   
{% endmacro %}
Use a Macro from a Model
In a model:
SELECT order_id,       
  {{ ms_to_sec(col_name='time_ms', precision=3) }} as time_sec
FROM order_timings;

Compiled code:
SELECT order_id,
(time_ms/ 1000 )::numeric(16, 3) AS time_sec
FROM order_timings;
Run a Macro from the Terminal
Define the macro in your macros directory, e.g., macros/create_schema_macro.sql:
{% macro create_schema(schema_name) %}
    CREATE SCHEMA IF NOT EXISTS {{ schema_name }};
{% endmacro %}

In Terminal:

dbt run-operation create_schema --args '{"schema_name": "my_new_schema"}'
Define a Generic Test
Generic Tests used to be defined in the macros directory. It is now recommended to write your Generic Tests in the tests/generic directory.

{% test over_10000(model, column_name) %}
  SELECT {{column_name}} 
  FROM {{ model }}   
  WHERE {{column_name}} > 10000     
{% endtest %}
Use a Generic test
In models/schema.yml add the generic test to the model and column you wish to test.
version: 2

models:
  - name: my_model
    columns:
      - name: column_to_test
        tests:
          - over_10000
          - not_null
Define a Singular Test
Write your dbt singular tests in the tests directory and give each a descriptive name, e.g., tests/test_suspicious_refunds.sql
    
SELECT order_id, 
SUM(CASE
    WHEN amount < 0 THEN amount 
    ELSE 0 
    END) as total_refunded_amount,       
COUNT(CASE 
     WHEN amount < 0 THEN 1 
     END) as number_of_refunds  
FROM {{ ref('my_model') }}  
GROUP BY 1   
HAVING number_of_refunds > 5

dbt Jinja: Filters (aka Methods)

Fine-tune your dbt data models with these transformation and formatting utilities.

String Manipulation
Lower
{{ "DATACOVES" | lower }} => "datacoves"
Upper
{{ "datacoves" | upper }} => "DATACOVES"
Default
{{ variable_name | default("Default Value") }}    
If value exists => "Sample Value"
If value does not exist => "Default Value"
Trim
{{ "Datacoves   " | trim }} => "Datacoves"  
Replace
{{ "Datacoves" | replace("v", "d") }} => "Datacodes" 
Length
{{ "Datacoves" | length }} => 9
Capitalize
{{ "datacoves" | capitalize }} => "Datacoves"  
Title
{{ "datacoves managed platform" | capitalize }}
  => "Datacoves managed platform”
Repeat a String
{{ print('-' * 20) }}
Substring
{{ "Datacoves"[0:4] }} => "Data"
Split
{{ "Data coves".split(' ') }} => ["Data", "coves"]  

Number Manipulation
Int
{{ "20" | int }} => 20 
Float
{{ 20 | float }} => 20.0 
Rounding to Nearest Whole Number
{{ 20.1434 | round }} => 20.0
Rounding to a Specified Decimal Place
{{ 20.1434 | round(2) }} => 20.14
Rounding Down (Floor Method)
{{ 20.5 | round(method='floor') }} => 20 
Rounding Up (Ceil Method)
{{ 20.5 | round(method='ceil') }} => 21

Please contact us with any errors or suggestions.

dbt won't fix your data maturity problem
5 mins read

Implementing dbt (data build tool) can revolutionize your organization's data maturity; however, if your organization is not ready to take advantage of the benefits of dbt, it might not be the right time to start. Why? Because the success of data initiatives often hinges on aspects beyond the tooling itself.

Many companies rush into implementing dbt without assessing their organization’s maturity, and this leads to poor implementations. The consequences of a poorly implemented dbt initiative can leave the organization frustrated, overwhelmed with technical debt, and saddled with wasted resources. To avoid these pitfalls and ensure your organization is truly ready for dbt, you should complete an assessment of your organization's readiness by answering the questions presented later in this article.

What is Data Maturity?

Before diving into the maturity assessment questions, it’s important to understand what data maturity means. Data maturity is the extent to which an organization can effectively leverage its data to drive business value. It encompasses multiple areas, including:

Data-Driven Culture: Fostering an environment where data is integral to decision-making processes.

Data Quality: Ensuring data is accurate, consistent, and reliable.

Data Governance: Implementing policies and procedures to manage data assets.

Data Integration: Seamlessly combining data from various sources for a unified view.

A mature data organization not only ensures data accuracy and consistency but also embeds data-driven decision-making into its core operations.  

How dbt Improves Data Maturity

By leveraging dbt's features, organizations can significantly enhance their data maturity, leading to better decision-making, improved data quality, robust governance, and seamless integration. For example:

dbt for data maturity

Data-Driven Culture: By using dbt, you can improve many aspects that contribute to creating a data-driven culture within an organization. One way is by encouraging business users to be involved in providing or reviewing accurate model and column descriptions which are embedded in dbt. You can also involve them in defining what data to test with dbt. Better Data Quality will improve trust in the data. More trust in the data will always lead to more frequent use and reliance on it.

Data Quality and Observability: dbt enables automated testing and validation of data transformations. This ensures data quality by catching issues like schema changes or data anomalies early in the pipeline. As your data quality and data observability needs grow you can assess where you are on the data maturity curve.  For example, in a sales data model, we can write tests to ensure there are no negative order quantities and that each order has a valid customer ID. With dbt you can also understand data lineage and this can improve impact and root cause analysis when insights don’t seem quite right.
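As a rough sketch of what that could look like in a schema.yml (the model and column names here are hypothetical, and the accepted_range test assumes the dbt_utils package is installed):

version: 2

models:
  - name: fct_sales                    # hypothetical sales data model
    columns:
      - name: order_quantity
        tests:
          - dbt_utils.accepted_range:
              min_value: 0             # no negative order quantities
      - name: customer_id
        tests:
          - not_null                   # every order has a customer ID
          - relationships:
              to: ref('dim_customers') # and it points to a real customer
              field: customer_id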

Data Governance: dbt facilitates version control and documentation for all transformations, enhancing transparency and accountability. Organizations can track changes to data models ensuring compliance with data governance policies.

Data Integration: dbt supports the integration of data from multiple sources by providing a framework for consistent and reusable transformations. This allows for the creation of unified data models that provide a holistic view of business operations.

dbt Readiness Assessment

Now that we understand what data maturity is and how dbt can help improve it, you might be ready to jump on the dbt bandwagon. But first, we encourage you to assess your organization’s readiness for dbt. The journey to data maturity involves not only choosing the right tools but also ensuring that your organization is philosophically and operationally prepared to take full advantage of these tools. It is important to recognize that dbt’s approach requires a shift in mindset towards modern data practices, emphasizing transparency, collaboration, and automation.

To determine if your organization is mature enough for dbt or if dbt is the right fit, consider the following assessment questions:

Are you philosophically aligned?

dbt requires a philosophical alignment with its principles, such as ELT (Extract, Load, Transform) instead of the traditional ETL (Extract, Transform, Load) approach. dbt is also based on idempotency, meaning that given the same input, you will always get the same output. This differs from traditional ETL, which may use incompatible constructs like auto-incrementing primary keys. If your organization prefers processes that are incompatible with dbt’s methodology, you will face challenges fighting the dbt framework to make it do something it was not intended to do.

Are you just going to do a lift and shift?

Simply migrating existing processes and code to dbt without rethinking them won’t leverage dbt’s full potential. Assess whether you’re ready to redesign your workflows to take advantage of dbt’s capabilities such as incremental tables, snapshots, seeds, etc.

Are you going to take advantage of features like data quality and documentation?

dbt offers excellent features for data quality and documentation. Evaluate if your team is prepared to prioritize the utilization of these features to enhance transparency and trust in your data. Tests and model descriptions will not write themselves. When it comes to good descriptions, they shouldn't come from a data engineering team that does not know how the data is used or the best data quality rules to implement. Good descriptions must involve business user review at a minimum.

Are you going to open up dbt to teams outside of IT?

The goal of dbt is to empower various teams including IT and business users by using the same tooling. Consider if your organization is ready to foster this cross-functional collaboration. When you implement dbt correctly, you will empower anyone who knows SQL to contribute. You can have multiple teams contribute to the insight delivery process and still ensure proper governance and testing before updating production.

Are you going to automate processes like deployment and testing?

Automation is key to achieving efficiency with dbt. Implementing automated deployment, testing, and CI/CD pipelines can significantly improve your workflows. If you aren’t ready to automate, the benefits of dbt may not be fully realized. If you simply put in dbt without thinking about the end-to-end process and its failure points, you will leave the door open to errors. The spaghetti code you have today didn't happen just because you were not using dbt.

Are you changing more than just tooling?

dbt is a framework, not a silver bullet. Merely changing tools without altering your underlying processes will not solve existing issues. This is a huge issue with organizations that have not done the work to create a data-driven culture. Assess if your team is ready to adopt better naming conventions and more structured processes to make data more understandable.  

Conclusion

Data immaturity might manifest as a reliance on manual processes, lack of data quality controls, or poor documentation practices. These factors can derail the effective implementation of dbt since dbt thrives in environments where data practices are robust and standardized. In other words, dbt alone will not solve these problems.

Ensuring your organization is ready for the changes that come with implementing dbt is not just best practice, it is essential for success. By thoroughly assessing your readiness, you can avoid technical debt, optimize your workflows, and fully harness the power of dbt. Remember, dbt is a powerful tool, but its effectiveness depends on the readiness of your organization to improve data practices and its alignment with dbt’s philosophy.

dbt alternatives
5 mins read

dbt (data build tool) is a powerful data transformation tool that allows data analysts and engineers to transform data in their warehouse more effectively. It enables users to write modular SQL queries, which it then runs on top of the data warehouse; this helps to streamline the analytics engineering workflow by leveraging the power of SQL. In addition to this, dbt incorporates principles of software engineering, like modularity, documentation and version control.

dbt Core vs dbt Cloud

Before we jump into the list of dbt alternatives it is important to distinguish dbt Core from dbt Cloud. The primary difference between dbt Core and dbt Cloud lies in their execution environments and additional features. dbt Core is an open-source package that users can run on their local systems or orchestrate with their own scheduling systems. It is great for developers comfortable with command-line tools and custom setup environments. On the other hand, dbt Cloud provides a hosted service with dbt core as its base. It offers a web-based interface that includes automated job scheduling, an integrated IDE, and collaboration features. It offers a simplified platform for those less familiar with command-line operations and those with less complex platform requirements.

You may be searching for alternatives to dbt due to preference for simplified platform management, flexibility to handle your organization’s complexity, or other specific enterprise needs. Rest assured because this article explores ten notable alternatives that cater to a variety of data transformation requirements.

We have organized the dbt alternatives covered in this article into three groups: dbt Cloud alternatives, code-based dbt alternatives, and GUI-based dbt alternatives.

dbt Cloud Alternatives

dbt Cloud is a tool provided by dbt Labs; there are a few things to consider:

  • Flexibility may be hindered by the inability to extend the dbt Cloud IDE with Python libraries or VS Code extensions
  • Handling the enterprise complexity of an end-to-end ELT pipeline will require a full-fledged orchestration tool
  • Costs can be higher than some of the alternatives below, especially at enterprise scale
  • Data security and compliance may require VPC deployment, which is not available in dbt Cloud
  • Some features of dbt Cloud are not open source, increasing vendor lock-in

Although dbt Cloud can help teams get going quickly with dbt, it is important to have a clear understanding of the long-term vision for your data platform and of the total cost of ownership. You may be reading this article because you are still interested in implementing dbt but want to know what your options are other than dbt Cloud.

Datacoves

Datacoves is tailored specifically as a seamless alternative to dbt Cloud. The platform integrates directly with existing cloud data warehouses, provides a user-friendly interface that simplifies the management and orchestration of data transformation workflows with Airflow, and provides a preconfigured VS Code IDE experience. It also offers robust scheduling and automation with managed Airflow, enabling data transformations with dbt to be executed based on specific business requirements.

Benefits:

Flexibility and Customization: Datacoves allows customization such as enabling VS Code extensions or adding any Python library. This flexibility is needed when adapting to dynamic business environments and evolving data strategies, without vendor lock-in.

Handling Enterprise Complexity: Datacoves is equipped with managed Airflow, providing a full-fledged orchestration tool necessary for managing complex end-to-end ELT pipelines. This ensures robust data transformation workflows tailored to specific business requirements. Additionally, Datacoves does not just support the T (transformations) in the ELT pipeline, the platform spans across the pipeline by helping the user tie all the pieces together. From initial data load to post-transformation operations such as pushing data to marketing automation platforms.

Cost Efficiency: Datacoves optimizes data processing and reduces operational costs associated with data management as well as the need for multiple SaaS contracts. Its pricing model is designed to scale efficiently.

Data Security and Compliance: Datacoves is the only commercial managed dbt data platform that supports VPC deployment in addition to SaaS, offering enhanced data security and compliance options. This ensures that sensitive data is handled within a secure environment, adhering to enterprise security standards. A VPC deployment is advantageous for some enterprises because it helps reduce the red tape while still maintaining optimal security.

Open Source and Reduced Vendor Lock-In: Datacoves bundles a range of open-source tools, minimizing the risk of vendor lock-in associated with proprietary features. This approach ensures that organizations have the flexibility to switch tools without being tied to a single vendor.

Do-it-Yourself dbt Core

It is worth mentioning that because dbt Core is open source, a DIY approach is always an option. However, opting for a DIY solution requires careful consideration of several factors. Key among these is assessing team resources, as successful implementation and ongoing maintenance of dbt Core necessitate a certain level of technical expertise. Additionally, time to production is an important factor; setting up a DIY dbt Core environment and adapting it to your organization’s processes can be time-consuming.

Finally, maintainability is essential: ensuring that the dbt setup continues to meet organizational needs over time requires regular updates and adjustments. While a DIY approach with dbt Core can offer customization and control, it demands significant commitment and resources, which may not be feasible for all organizations.

Benefits:

This is a very flexible approach because it will be made in-house with all the organization’s needs in mind, but it requires additional time to implement and increases the total cost of long-term ownership.

dbt alternatives - Code based ETL tools

For organizations seeking a code-based data transformation alternative to dbt, there are a few contenders they may want to consider.

SQLMesh

SQLMesh is an open-source framework that allows for SQL- or Python-based data transformations. Its workflow provides column-level visibility into the impact of changes on downstream models, which helps developers remediate breaking changes. SQLMesh creates virtual data environments that also eliminate the need to calculate data changes more than once. Finally, teams can preview data changes before they are applied to production.

Benefits:

SQLMesh allows developers to create accurate and efficient pipelines with SQL. This tool integrates well with tools you are using today, such as Snowflake and Airflow. SQLMesh also optimizes cost savings by reusing tables and minimizing computation.

Dataform

Dataform enables data teams to manage all data operations in BigQuery. These operations include creating table definitions, configuring dependencies, adding column descriptions, and configuring data quality assertions. It also provides version control and integrates with GitLab or GitHub.

Benefits:

Dataform is a great option for those using BigQuery because it fosters collaboration among data teams with strong version control and development practices directly integrated into the workflow. Since it keeps you in BigQuery, it also reduces context switching and centralizes data models in the warehouse, improving efficiency.  

AWS Glue

AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. It automates the provisioning of ETL code. It is worth noting that Amazon Glue offers GUI elements (like Glue Studio).

Benefits:

AWS Glue provides flexible support for various pipelines such as ETL, ELT, batch and more, all without a vendor lock-in. It also scales on demand, offering a pay-as-you-go billing. Lastly, this all-in-one platform has tools to support all data users from the most technical engineers to the non-technical business users.

dbt alternatives - Graphical ETL tools

While experience has taught us that there is no substitute for a code-based data transformation solution, some organizations may opt for a graphical user interface (GUI) tool. These tools are designed with visual interfaces that allow users to drag and drop components to build data integration and transformation workflows. Ideal for users who may be intimidated by a code editor like VS Code, graphical ETL tools may simplify data processes in the short term.

Matillion

Matillion is a cloud-based data integration platform that allows organizations to build and manage data pipelines and create no-code data transformations at scale. The platform is designed to be user-friendly, offering a graphical interface where users can build data transformation workflows visually.

Benefits:

Matillion simplifies the ETL process with a drag-and-drop interface, making it accessible for users without deep coding knowledge. It also supports major cloud data warehouses like Amazon Redshift, Google BigQuery, and Snowflake, enhancing scalability and performance.

Informatica

Informatica offers extensive data integration capabilities including ETL, hundreds of no-code cloud connectors, data masking, data quality, and data replication. It also uses a metadata-driven approach for data integration. In addition, it was built with performance, reliability, and security in mind to protect your valuable data.

Benefits:

Informatica enhances enterprise scalability and supports complex data management operations across various data types and sources. Informatica offers several low-code and no-code features across its various products, particularly in its cloud services and integration tools. These features are designed to make it easier for users who may not have deep technical expertise to perform complex data management tasks.

Alteryx

Alteryx allows you to automate your analytics at scale. It combines data blending, advanced analytics, and data visualization in one platform. It offers tools for predictive analytics and spatial data analysis.  

Benefits:

Alteryx enables users to perform complex data analytics with AI. It also improves efficiency by allowing data preparation, analysis, and reporting to be done within a single tool. It can be deployed on-prem or in the cloud and is scalable to meet enterprise needs.

Azure Data Factory

Azure Data Factory is a fully managed, serverless data integration service that integrates with various Azure services for data storage and data analytics. It provides a visual interface for data integration workflows which allows you to prepare data, construct ETL and ELT processes, and orchestrate and monitor pipelines code-free.

Benefits:

Azure Data Factory can be beneficial for users utilizing various Azure services because it allows seamless integration with other Microsoft products, which is ideal for businesses already invested in the Microsoft ecosystem. It also supports a pay-as-you-go model.

Talend

Talend offers an end-to-end modern data management platform with real-time or batch data integration as well as a rich suite of tools for data quality, governance, and metadata management. Talend Data Fabric combines data integration, data quality, and data governance into a single, low-code platform.

Benefits:

Talend can enhance data quality and reliability with built-in tools for data cleansing and validation. Talend is a cloud-independent solution and supports cloud, multi-cloud, hybrid, or on-premises environments.

SSIS (SQL Server Integration Services)

SQL Server Integration Services are a part of Microsoft SQL Server, providing a platform for building enterprise-level data integration and data transformation solutions. With this tool you can extract and transform data from a wide variety of sources such as XML data files, flat files, and relational data sources, and then load the data into one or more destinations. It includes graphical tools and wizards for building and debugging packages.

Benefits:

SQL Server Integration Services are ideal for organizations heavily invested in SQL Server environments. They offer extensive support and integration capabilities with other Microsoft services and products.

Conclusion

While we believe that code is the best option to express the complex logic needed for data pipelines, the dbt alternatives we covered above offer a range of features and benefits that cater to different organizational needs. Tools like Matillion, Informatica, and Alteryx provide graphical interfaces for managing ETL processes, while SQLMesh and Dataform offer code-based approaches to SQL- and Python-based data transformation.

For those specifically looking for a dbt Cloud alternative, Datacoves stands out as a tailored, flexible solution designed to integrate seamlessly with modern data workflows, ensuring efficiency and scalability.

Open source databases
5 mins read

SQL databases are great for organizing, storing, and retrieving structured data essential to modern business operations. These databases use Structured Query Language (SQL), a gold standard tool for managing and manipulating data, which is universally recognized for its reliability and robustness in handling complex queries and vast datasets.

SQL is so instrumental to database management that databases are often categorized based on their use of SQL. This has led to the distinction between SQL databases, which use Structured Query Language for managing data, and NoSQL databases, which do not rely on SQL and are designed for handling unstructured data and different data storage models. If you are looking to compare SQL databases or just want to deepen your understanding of these essential tools, this article is just for you.

What is an Open Source Database?  

Open source databases are software systems whose source code is publicly available for anyone to view, modify, and enhance. This article covers strictly open source SQL databases. Why? Because we believe that they bring additional advantages that are reshaping the data management space. Unlike proprietary databases that can be expensive and restrictive, open source databases are developed through collaboration and innovation at their core. This not only eliminates licensing fees but also creates a rich environment of community-driven enhancements. Contributors from around the globe work to refine and evolve these databases, ensuring they are equipped to meet the evolving demands of the data landscape.  

Why Use Open Source Databases?

Cost-effectiveness: Most open source databases are free to use, which can significantly reduce the total cost of ownership.

Flexibility and Customization: Users can modify the database software to meet their specific needs, a benefit not always available with proprietary software.

Community Support: Robust communities contribute to the development and security of these databases, often releasing updates and security patches faster than traditional software vendors.

OLTP vs OLAP

When selecting a database, it is important to determine your primary use case. Are you frequently creating, updating, or deleting data? Or do you need to analyze large volumes of archived data that doesn't change often? The answer should guide the type of database system you choose to implement.  

In this article we will be touching on OLTP and OLAP open source SQL databases. These databases are structured in different ways depending on what they prioritize: analytics, transactions, or a hybrid of the two.

What is OLTP?

OLTP or Online Transaction Processing databases are designed to manage and handle high volumes of small transactions, such as inserting, updating, and/or deleting small amounts of data in a database. OLTP databases can handle real-time transactional tasks due to their emphasis on speed and reliability. The design of OLTP databases is highly normalized to reduce redundancy and to optimize update/insert/delete performance. OLTP databases can be used for analytics, but this is not recommended since databases better suited for analytics exist.

Characteristics of OLTP:

  • Handles large numbers of transactions by many users.
  • Operations are typically simple (e.g., updating a record or retrieving specific record details).
  • Focus on quick query processing and maintaining data integrity in multi-access environments.
  • Data is highly normalized.

When to use OLTP?

Use OLTP if you are developing applications that require fast, reliable, and secure transaction processing. Common use cases include but are not limited to:  

E-commerce: Order placement, payment processing, customer profile management, and shopping cart updates.

Banking: Account transactions, loan processing, ATM operations, and fraud detection.

Customer Relationship Management (CRM): Tracking customer interactions, updating sales pipelines, managing customer support tickets, and monitoring marketing campaigns.

What is OLAP?

OLAP or Online Analytical Processing databases are designed to perform complex analyses and queries on large volumes of data. They are optimized for read-heavy scenarios where queries are often complicated and involve aggregations such as sums and averages across many datasets. OLAP databases are typically denormalized, which improves query performance but comes at the expense of storage space and slower update speeds.

Characteristics of OLAP:

  • Designed for analysis and reporting functions.
  • Queries are complex and involve large volumes of data.
  • Focus on maximizing query speed across large datasets.
  • Data may be denormalized to expedite query processing.

When to use OLAP?

Use OLAP if you need to perform complex analysis on large datasets to gather insights and support decision making. Common use cases include but are not limited to:

Retail Sales Data Analysis: A retail chain consolidates nationwide sales data to analyze trends, product performance, and customer preferences.

Corporate Performance Monitoring: A multinational uses dashboards to track financial, human resources, and operational metrics for strategic decision-making.

Financial Analysis and Risk Management: A bank leverages an OLAP system for financial forecasting and risk analysis using complex data-driven calculations.

In practice, many businesses will use both types of systems: OLTP systems to handle day-to-day transactions and OLAP systems to analyze data accumulated from these transactions for business intelligence and reporting purposes.

Now that we are well versed in OLTP vs OLAP, let's dive into our open source databases!  

Open Source OLTP Databases

PostgreSQL

A row-oriented database, often considered the world’s most advanced open source database. PostgreSQL offers extensive features designed to handle a range of workloads from single machines to data warehouses or web services with many concurrent users.

Best Uses: Enterprise applications, complex queries, handling large volumes of data.

SQLite

SQLite is a popular choice for embedded database applications, being a self-contained, high-reliability, and full-featured SQL database engine. SQLite is a file-based database, which means that it stores data in a file (or set of files) on disk rather than requiring a server-based backend. This approach has several key characteristics and advantages, such as being lightweight, portable, easy to use, and self-contained.

Best Uses: Mobile applications, small to medium-sized websites, and desktop applications.

MariaDB

An offshoot of MySQL, MariaDB was created by the original developers of MySQL after concerns over its acquisition by Oracle. It is widely respected for its performance and robustness.

Best Uses: Web-based applications, cloud environments, or as a replacement for MySQL.

Firebird

Firebird is a flexible relational database that offers many ANSI SQL standard features and runs on Linux, Windows, and a variety of Unix platforms. This database can handle a hybrid of OLTP and OLAP workloads due to its multi-generational architecture and because readers do not block writers when accessing the same data.

Best Uses: Small to medium enterprise applications, particularly where complex, customizable database systems are required.

Open Source OLAP Databases

ClickHouse

Known for its speed, ClickHouse is an open-source, column-oriented database management system that is great at real-time query processing over large datasets. It makes use of data compression, disk storage of data, parallel processing on multiple cores, distributed processing on multiple servers, and more.

Best Uses: Real-time analytics and managing large volumes of data.

DuckDB

Similar to SQLite, DuckDB is an embedded, file-based database; however, DuckDB is column-oriented and designed to execute analytical SQL queries quickly and efficiently. It has no dependencies, making it a simple, efficient, and portable database. Because DuckDB runs embedded within the host process, it allows for high-speed data transfer during analytics.

Best Uses: Analytical applications that require fast, in-process SQL querying capabilities.

StarRocks

StarRocks is a performance-oriented, columnar distributed data warehouse designed to handle real-time analytics. StarRocks also supports hybrid row-column storage. It is known for its blazing-fast massively parallel processing (MPP) abilities. Data can be ingested at a high speed and updated and deleted in real time making it perfect for real-time analytics on fresh data.

Best Uses: Real-time analytical processing on large-scale datasets.

Doris

Doris is an MPP-based, column-oriented data warehouse, aimed at providing high performance and real-time analytical processing. Doris can support highly concurrent point query scenarios and high-throughput complex analytic scenarios. Its high speed and ease of use despite working with large amounts of data make it a great option.

Best Uses: Real-time OLAP applications and scenarios demanding fast data processing and complex aggregation.

Trino

Even though Trino is not a database, but rather a query engine that allows you to query your databases, we felt it is a powerful addition to this open source list. Originally developed by Facebook and known as PrestoSQL, Trino is designed to query large data warehouses and big data systems rapidly. Since it is great for working with terabytes or petabytes of data it is an alternative to tools such as Hive or Pig. However, it can also operate on traditional relational databases and other data sources such as Cassandra. One major benefit is that Trino allows you to perform queries across different databases and data sources. This is known as query federation.  

Best Uses: Distributed SQL querying for big data solutions.

Citus

While this is not a separate open source database, we felt it was a good addition to the list because Citus is an extension to PostgreSQL that transforms your Postgres database into a distributed database, enabling it to scale horizontally.

Best Uses: Scalable PostgreSQL applications, especially those needing to handle multi-tenant applications and real-time analytics over large datasets.
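
As a hedged illustration, the sketch below uses psycopg2 to call Citus's create_distributed_table function on a hypothetical multi-tenant table; the connection string and names are placeholders, and it assumes the Citus extension is already installed.

```python
import psycopg2  # pip install psycopg2-binary

# Placeholder connection string for a Postgres cluster with the Citus extension installed.
conn = psycopg2.connect("host=localhost dbname=app user=postgres password=postgres")
conn.autocommit = True
cur = conn.cursor()

# A hypothetical multi-tenant table, sharded across worker nodes by tenant_id.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        tenant_id bigint NOT NULL,
        event_id  bigint NOT NULL,
        payload   jsonb,
        PRIMARY KEY (tenant_id, event_id)
    )
""")
cur.execute("SELECT create_distributed_table('events', 'tenant_id')")

cur.close()
conn.close()
```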

Conclusion  

Open source SQL databases provide a variety of options for organizations and developers seeking flexible, cost-effective solutions for data management. Whether your needs are for handling large data sets, real-time analytics, or robust enterprise applications, there is likely an open source database out there for you.

5 open source data quality tools
5 mins read

Data is in the spotlight as companies everywhere realize data's true potential. With big initiatives like GenAI and sophisticated data ecosystems, ensuring data quality is not just a necessity but a mandatory investment for businesses and analysts worldwide. Some people are learning the hard way that you need stable data foundations to get the results these initiatives promise.

genAI vs data quality
Data quality first

While there are many great tools out there, the spotlight on open source tools has never been brighter. Open source software offers transparency, adaptability, and community-driven enhancements that are crucial in the rapidly evolving data landscape. This article covers 5 open source data quality tools and is current as of April 2024, so if that is something that interests you, stick around.

What is data quality?

First things first, what is data quality? There are many definitions, but data is considered high quality if it is fit for its intended uses in operations, decision-making, and planning. In other words, data quality refers to the data's accuracy, completeness, reliability, relevance, and how up-to-date it is. In the context of data-driven decision-making, high-quality data is crucial as it directly impacts the accuracy of insights and the effectiveness of decisions. It is our data foundation.

High-quality data should be:

Accurate: Data that is free from errors and discrepancies.

Complete: Data that covers the necessary breadth and depth needed by the business.

Reliable: Data that is consistently represented and sourced, so it can be trusted wherever it is used.

Relevant: Data that is applicable to the context and purposes for the business.

Current: Data that is up-to-date and timely for its purpose.

Understanding what is needed for data quality is the first step toward recognizing the importance of these tools and practices that maintain or enhance this quality.

What should you track for good data quality?

Now we know what constitutes high-quality data, but what do we need to monitor to ensure that our data stays high quality? The good news is that these metrics tend to be universal. To maintain high data quality, several metrics and elements should be monitored regularly (a short sketch below shows how a few of them can be checked in practice):

Accuracy: Ensure that your data correctly represents reality or the source from which it came.

Completeness: Check for missing values or data segments that could lead to incorrect analysis or conclusions.

Consistency: Data across different systems or platforms should match and be consistent.

Timeliness: Data should be updated and available in a timeframe that aligns with its intended use.

Validity: Data should adhere to the relevant rules, such as data formats and value ranges.

Uniqueness: No duplicates should be present unless necessary, ensuring each entry is unique.

Integrity: There should be a relationship between datasets and records that maintains data accuracy and consistency.

By tracking these metrics, organizations can set up the essential data foundation and significantly improve the trustworthiness and utility of their data. This will lead to better outcomes and insights that can support great data initiatives of the future.  
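
As a quick illustration, not tied to any particular tool, the pandas sketch below checks a few of these metrics (completeness, uniqueness, and validity) on a hypothetical orders DataFrame; the column names and rules are made up.

```python
import pandas as pd

# A small, made-up dataset standing in for a real table.
orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount":   [10.0, None, 25.0, -3.0],
    "status":   ["shipped", "pending", "shipped", "unknown"],
})

# Completeness: share of non-null values per column.
completeness = orders.notna().mean()

# Uniqueness: no duplicate order_ids expected.
duplicate_ids = orders["order_id"].duplicated().sum()

# Validity: amounts should be non-negative and statuses drawn from an allowed set.
valid_status = {"pending", "shipped", "delivered"}
invalid_rows = orders[(orders["amount"] < 0) | (~orders["status"].isin(valid_status))]

print(completeness)
print(f"duplicate ids: {duplicate_ids}, invalid rows: {len(invalid_rows)}")
```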

Since it is essential to track these metrics, companies are in search of the best tools to help them improve their data quality. Here is a list of open source tools that can be leveraged to improve data quality.

Why open source?

Before we jump into the tool list: you may have noticed that a quick Google search for this topic will give you many different lists. How is our list different? Well, we are focusing on open source tools. There are many great tools out there, both paid and “free”, and we put quotes around free because there is no such thing as free; there are always hidden costs (hours worked) for setup and maintenance. However, we wanted to make this open source tool list because, hidden costs aside, we believe in the following benefits of open source tools:

Transparency: Open source tools offer complete transparency in their operations and algorithms. Users can inspect, modify, and improve the code, which enhances trust and reliability.

Community: Open source projects benefit from the collective intelligence of a global community. This not only accelerates innovation and bug resolution but also provides a large pool of knowledge and support.

Flexibility: With open source, organizations are not locked into proprietary systems, allowing them to tailor tools to their specific needs and integrate them seamlessly into their existing environments.

Cost-effectiveness: While open source doesn't always mean free, it significantly reduces costs associated with licensing fees and vendor lock-in, making cutting-edge tools accessible to everyone.

Quality and Security: Continuous contributions and scrutiny by the community mean that open source tools often meet high standards of quality and security, with issues being identified and addressed rapidly.

Our selection of open source data quality tools is grounded in rigorous open source criteria. We believe that the strength of an open source project lies not just in its ability to solve complex problems but also in its community, transparency, and commitment to ongoing improvement. When compiling this list, we considered factors such as active community engagement, frequency of updates, the quality of the documentation, and ease of contribution. This ensures that the tools recommended not only meet high standards of performance and reliability but also embody the principles that make open source software a valuable asset to the data quality landscape. So without further ado, let's jump into our list.

Open source data quality tool list

dbt Core

Self managed or in Datacoves

Primary Language: SQL / YAML

Purpose & Features: dbt Core is an open source tool that allows data analysts and engineers to transform data in their data warehouse by writing dynamic SQL queries, which dbt then materializes as tables and views. It also supports version control, testing, and documentation, which helps maintain data integrity and reliability.

For data quality, dbt Core ships with out-of-the-box data tests, which can be extended with custom-made tests or with packages such as dbt-expectations and elementary. Testing is easily done by configuring tests in YAML files or by writing custom SQL tests. However, integrating dbt Core into your data stack can be a big task, especially when it comes to scheduling. A managed dbt Core platform such as Datacoves could be a great option for saving time and money. While dbt handles only the 'T' in ELT, Datacoves’ managed dbt Core platform ensures that the entire ELT process is smooth and interconnected, allowing your team to concentrate on deriving insights from the data. There are other dbt alternatives on the market that also handle the 'T' in the ELT process.
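
As a rough sketch of how dbt tests can be wired into a pipeline, the snippet below shells out to dbt test and inspects target/run_results.json; the model name is a placeholder and the exact JSON layout can vary between dbt versions.

```python
import json
import subprocess

# Run dbt's data tests for a hypothetical model; assumes dbt Core is installed
# and the current working directory is a dbt project.
subprocess.run(["dbt", "test", "--select", "stg_orders"], check=False)

# dbt writes test outcomes to target/run_results.json after each invocation.
with open("target/run_results.json") as f:
    run_results = json.load(f)

failures = [r for r in run_results["results"] if r["status"] not in ("pass", "success")]
for r in failures:
    print(f"{r['unique_id']}: {r['status']}")
```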

Who it is for: Best for teams using SQL who want to transform data directly in the warehouse and who want to follow software development best practices including unit testing in their data pipelines.

Image from Datacoves UI
Image from Datacoves UI - dbt tests

Soda Core

Primary Language: YAML

Purpose & Features: Soda Core is the open source component of the Soda platform. It allows users to define data quality checks as code, using the YAML-based Soda Checks Language (SodaCL), and integrate them into data workflows and pipelines.
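
A hedged sketch of a programmatic Soda Core scan; the data source name, configuration file, and checks are examples, and the Python API may differ slightly between versions.

```python
from soda.scan import Scan  # pip install soda-core-postgres (or the package for your warehouse)

scan = Scan()
scan.set_data_source_name("analytics")                        # must match a data source in your config
scan.add_configuration_yaml_file(file_path="configuration.yml")  # connection details live here

# SodaCL checks defined inline as YAML; table and column names are placeholders.
scan.add_sodacl_yaml_str("""
checks for orders:
  - row_count > 0
  - missing_count(customer_id) = 0
  - duplicate_count(order_id) = 0
""")

scan.execute()
print(scan.get_scan_results())
```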

Who it's for: Teams that need data quality checks integrated into their existing Python workflows or data pipelines.

soda core CLI
Soda-core CLI

Great Expectations

Primary Language: Python  

Purpose & Features: This tool is a data quality platform that allows you to create data tests, documentation, and profiles automatically. It integrates easily into existing data processing pipelines to validate data against expectations (unit tests for your data). You can collaborate with non-technical stakeholders by sharing the Data Docs: Expectations, Validation Results, and other metadata translated into a human-readable format, as seen in the image below.
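
The Great Expectations API has changed considerably between releases, so treat the snippet below as a sketch only; it uses the long-standing Pandas convenience API (newer releases organize everything around a Data Context instead), and the column names are made up.

```python
import pandas as pd
import great_expectations as ge  # pip install great_expectations

df = pd.DataFrame({
    "customer_id": [1, 2, 3, None],
    "email": ["a@x.com", "b@x.com", None, "d@x.com"],
})

# Wrap the DataFrame so expect_* methods become available (legacy Pandas API).
gdf = ge.from_pandas(df)

result = gdf.expect_column_values_to_not_be_null("customer_id")
print(result.success)   # False: one customer_id is missing

result = gdf.expect_column_values_to_match_regex("email", r"[^@]+@[^@]+\.[^@]+", mostly=0.75)
print(result.success)   # True: at least 75% of the non-null emails match
```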

Who it's for: Data teams looking for a Pythonic way to enforce data quality rules and create automated data documentation.

Great Expectations Data Docs
Data Docs Image from Great Expectations Docs

Deequ

Primary Language: Scala (for Apache Spark)

Purpose & Features: Deequ is an open source tool by Amazon with which you can define "unit tests" (columnar or row level) for large-scale data within the Spark ecosystem. It allows for automated checks of data quality metrics such as completeness, uniqueness, and conformity, enabling data teams to find errors early, before they are consumed downstream. You can use Deequ to encode your assumptions about the data as unit tests and catch any data that does not meet them. The tool works on tabular data such as CSV files, database tables, logs, and flattened JSON files.
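
Deequ itself is written in Scala, but to keep the examples in Python here is a hedged sketch using PyDeequ, the Python wrapper maintained by AWS Labs; it assumes a compatible Spark installation, and the data and column names are made up.

```python
# pip install pydeequ  (requires a matching Spark version; see the PyDeequ docs)
from pyspark.sql import SparkSession
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationSuite, VerificationResult

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a@x.com"), (2, None), (2, "c@x.com")], ["id", "email"])

# Declare assumptions about the data as a check.
check = (Check(spark, CheckLevel.Error, "basic checks")
         .isComplete("email")   # completeness
         .isUnique("id"))       # uniqueness

result = (VerificationSuite(spark)
          .onData(df)
          .addCheck(check)
          .run())

VerificationResult.checkResultsAsDataFrame(spark, result).show()
```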

Who it's for: Data engineers and scientists working with big data in Spark (billions of rows), particularly those focused on maintaining data quality at scale.

Deequ unit tests
Image from Deequ, I mean Data Quality

Data Diff

Primary Language: Python (used as a CLI or library). You don’t manually write data quality tests; as you make changes to your SQL, data-diff works its magic by comparing the resulting tables.

Purpose & Features: This tool is a little different from the rest because you're not exactly writing tests to catch data quality issues. Instead, this open source Python package by Datafold lets you do development testing by spotting the differences between tables whenever you tweak your code. It's a great way to compare what's happening in your production data against your development changes, helping you see directly how those code changes are playing out in the data.  
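
data-diff is mainly used from the command line, but it also exposes a small Python API; the sketch below is hedged, with placeholder connection strings and table names, and the package's interface has evolved over time.

```python
# pip install data-diff (plus the extra for your database, e.g. data-diff[postgresql])
from data_diff import connect_to_table, diff_tables

# Compare the same logical table between production and a development copy.
prod = connect_to_table("postgresql://user:pass@localhost/analytics", "orders", "order_id")
dev = connect_to_table("postgresql://user:pass@localhost/analytics", "orders_dev", "order_id")

# Yields ('+', row) / ('-', row) tuples for rows that differ between the two tables.
for sign, row in diff_tables(prod, dev):
    print(sign, row)
```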

Who it's for: Data engineers and teams who need to ensure that changes in data processing and ETL logic do not negatively affect data quality.

Data Diff CLI
Data Diff CLI Image from datafold.com

Choose the right open source data quality tool for your use case

The concept of "the best" data quality tool is inherently tied to specific use cases. What might be an ideal solution for one organization could be less effective for another, depending on the unique challenges and requirements each faces.

Before you dive into a tool, it's crucial to understand your organization's specific data quality challenges. Are you dealing with high volumes of data, requiring scalability? Or are your main issues related to data consistency and accuracy in a smaller, more controlled dataset? Identifying your primary use case will help you navigate the tools above and select the one that best fits your situation.

Here are 6 tips to help you make an informed choice for your specific needs:

1. Assess Your Data Quality Needs:

Identify the primary issues you're facing with your data. Are you struggling with incomplete data, inconsistencies, outdated information, or data that's not in the right format? Understanding your main challenges will guide you toward a tool that specializes in addressing those specific problems. Once you understand your data quality challenges and objectives, match them with the strengths of the tools listed above.

2. Consider Your Technical Environment:

Evaluate the technical stack you are currently using. Some data quality tools are better suited for certain environments or integrate more seamlessly with specific databases, data lakes, or processing frameworks. Choose a tool that aligns with your existing infrastructure to reduce integration headaches.

3. Evaluate Community and Support:

The strength of an open-source tool lies in its community. Look for a tool with an active community, which is evident through regular updates, vibrant forums, and extensive documentation. A strong community can provide invaluable support, from troubleshooting to best practices.

4. Check for Flexibility and Scalability:

Your data needs will evolve, so it’s important to choose a tool that is flexible and can scale with your business. Assess the tool’s ability to handle different data volumes, types, and sources. A good open-source tool should not only solve your current data quality issues but also adapt to future challenges.

5. Review Security and Compliance Features:

Data security and compliance are imperative. Be sure the tool complies with data protection regulations and offers security features to protect your data. This is especially important if you're dealing with sensitive or personal information.

6. Test Drive the Tool:

Finally, don’t hesitate to get your hands dirty. Most open-source tools are free to use, so take advantage of this by testing the tool with your data. This will give you a clear idea of the tool’s usability, effectiveness, and fit with your use case. Be sure to go into this with an open mind to get the most out of the tool.

Conclusion

In the era of generative AI and other lofty initiatives, high-quality data is not just an option but a necessity, and embracing these open-source data quality tools can significantly enhance the reliability and accuracy of your data. Remember, the "best" tool is the one that aligns closely with your specific use case, offering the features and flexibility your team needs to effectively tackle your data quality challenges; it very well could be a combination of these tools. Whether you are in the world of SQL, Python, or any other programming language, there is a tool tailored to your needs. Consider factors such as ease of integration into your current data ecosystem, the learning curve for your team, and the level of community support available.

Healthcare's digital transformation: data strategy issues
5 mins read

There is no doubt about the transformative potential of big data and analytics. This is particularly true for the Life Science sector, where implementing these technologies and ideologies can revolutionize drug development, tailor medicines to individual needs, dramatically improve patient care, and more. The data supports this, with the longest-running report of Fortune 1000 CIOs by Wavestone showing 87.9% of companies believe “investments in Data & Analytics are a Top Organizational Priority.”

Such great promise and high buy-in should mean easy cultural adoption, right? Well, not exactly. The 2024 report shows there has been a notable improvement; the percentage of top executives facing significant challenges in culture, people, and process/organization decreased from a staggering 90% in 2020 to 77.6% in 2024. While this reduction signifies a positive trend, the fact that more than three-quarters of leaders still encounter these problems underscores how widespread the challenges are. The persistently high percentage highlights the need for continued, focused effort to overcome these barriers.

In an effort to help further lower the 77.6%, this article aims to cover the benefits of data in the Life Science sector, highlight common culture issues, and provide some solutions to the problem. If your organization is among those facing these struggles, know that you are not alone.

Benefit of data in Life Science sector

The life science sector stands on the brink of a digital transformation, powered by the strategic use of data. It's clear that its impact is far-reaching, transforming every facet from research and development to patient care and beyond. Below are some examples of what this data-driven culture can improve.

  • Accelerating Drug Development: One of the most significant benefits of data lies in its ability to streamline and accelerate the drug development process. By harnessing big data and advanced analytics, life science organizations can identify potential drug candidates faster than ever before. This not only reduces the time and cost associated with bringing new treatments to market but also allows for more targeted and effective therapies.
  • Enhancing Personalized Medicine: Personalized medicine, tailored to the individual characteristics of each patient, is becoming a reality thanks to data. By analyzing vast datasets, including genetic information, researchers can understand how different people are likely to respond to treatments. This leads to more effective and personalized care plans, improving patient outcomes and reducing the likelihood of adverse reactions.
  • Improving Clinical Trials: Data analytics revolutionize the way clinical trials are conducted. By leveraging data, scientists can design better trials, select suitable candidates more accurately, and monitor results in real time. This not only increases the efficiency and efficacy of trials but also enhances patient safety by identifying potential issues earlier in the process.
  • Driving Operational Efficiency: Beyond research and patient care, data also plays a crucial role in enhancing the operational efficiency of life science organizations. From streamlining supply chains to optimizing resource allocation, data-driven insights help companies operate more efficiently, reduce costs, and respond more quickly to market changes.
  • Fostering Innovation: At its core, the utilization of data fosters an environment of innovation within the life science sector. By providing a wealth of information and insights, data encourages researchers to explore new hypotheses, uncover hidden patterns, and push the boundaries of what's possible in medical science.
  • Informing Policy and Decision-Making: Lastly, data aids in the development of more informed policies and decisions. By analyzing trends, outcomes, and costs, policymakers and healthcare providers can make evidence-based decisions that lead to better health outcomes and more efficient use of resources.

Digital transformation in healthcare - Key challenges

As we identified earlier in this article, culture, people, and process/organization challenges are the biggest obstacles for companies to achieve digital transformation and become data-driven. Identifying specific challenges is key to developing a solution.  

  • Resistance to Change: Cultural norms and resistance to change within organizations can impede the adoption of new data strategies, especially when there is a lack of buy-in from key stakeholders. This can be especially present in the Life Science sector which tends to lean risk averse.
  • Overemphasis on Buzzwords and End Products: Initiatives often start with buzzwords like 'Gen AI', focusing primarily on the final product without addressing underlying flawed infrastructures and processes. This inclination toward technology exists because technology is tangible: you can talk to vendors and see their websites. However, consistent findings from the annual report reveal that technical challenges constitute only 23.4% of the issues; the bulk stems from ingrained cultural problems within organizations. This means adopting new technologies without addressing these foundational processes is likely to lead to suboptimal outcomes.
  • Lack of Alignment: When different departments or teams pursue the same objective without proper coordination, it results in alignment issues. These misalignments act as barriers to the seamless execution of unified data strategies, causing data management inefficiencies. Often, this situation stems from a lack of centralized guidance and strategic direction. It is important for organizational leaders to bridge these gaps through a top-down approach, ensuring all components work in harmony towards the collective aim.
  • Cultural Complexity: Large enterprises especially face this issue. Cultural complexities, such as diverse social and cultural preferences, values, and disparities in knowledge, can pose challenges in implementing data strategies, particularly in institutions spanning developed and developing countries. This is why taking the time to align fundamentally is especially important for the enterprise.
  • Misplaced Prioritization of Data: Often, data is viewed as a technology concern for IT to handle, while other teams focus on deliverables like patient service or clinical trials. This perspective is problematic as data is crucial for analyzing and ensuring the success of these initiatives. Typically, data management is considered too late in the process, when the necessary data may not have been effectively captured for analysis.  

Solutions to digital transformation challenges

Questioning the status quo

Pharma companies tend to be risk averse due to the nature of the data they are responsible for. This leads to limitations and constraints on innovative solutions, and these constraints are often accepted without understanding the rationale behind them or challenging them. New technology offers ways to manage data that were not available in the past. If a company remains the most risk-averse and conservative in its industry, it will stifle innovation. The key takeaway is that it is not only about technology but about the new processes that this new tech makes possible.

Focus on fundamentals

This goes back to the foundation. Technology changes, initiatives change, but data truths do not. These truths involve thinking about end-to-end processes, data quality, documentation, good guidelines and conventions, data governance, reducing points of failure, etc. It is easy to get swept up by the latest trend and want to jump in, but you cannot build a house on quicksand; new trends like Gen AI still require these foundations underneath.

Focus on fundamentals
Data Quality Meme from LinkedIn Post

Fundamental alignment

This starts with aligning the team. Alignment is one of the 3 core pillars of a data-driven culture. People will start out thinking they are on the same page because they are using the same terminology, but we all have different ideas influenced by our experiences. So, it is important to gather all stakeholders and go through a true alignment process, which includes mapping the current state, surfacing pain points, and aligning on the solution.

Fundamental alignment includes adopting a top-down approach. The efforts one team is making are sure to affect the efforts of another. Leadership must align and connect the dots between all moving parts and understand that things are not living in isolation. This will solve headaches downstream and ensure a smoother process.  

Fundamental alignment
Fundamental Alignment

Quick wins

Alignment alone won’t be enough to combat culture challenges. True alignment can lead to overly ambitious projects that may prove difficult to execute. This is the classic trap of trying to do too much at once. It is important to start with manageable, small-scale projects that can provide immediate benefits. These quick wins are vital for positively influencing the organizational culture. Additionally, they help maintain momentum: if a larger initiative is slow to yield results, these smaller successes ensure continuous progress. By allowing for ongoing adjustments and reprioritizations, these projects help prevent initiatives from being abandoned.

Conclusion

While the potential of big data and analytics in transforming the Life Science sector is undeniable, and investment in data is on the rise, the journey towards a data-driven culture remains one of the biggest challenges for data initiatives. The path forward requires a balanced approach that combines technology with fundamental changes in culture and processes. By embracing these changes, the life sciences sector can overcome existing barriers and fully harness the power of data to advance medical science and improve patient outcomes. At Datacoves we are passionate about helping companies achieve a data-driven culture. See how Datacoves helped Johnson & Johnson innovate their tech stack.

Prefer a Video?

This article was inspired by RAN BioLinks’ podcast episode “Why Life Science Organizations Fail to Implement Effective Data Strategies”.  The detailed conversation with Noel Gomez, a seasoned expert in data management within the life science industry, explores the critical challenges and innovative solutions for effective data strategies in healthcare.
