The transformative power of data modeling

I read an article on Anchor Data Modeling, more specifically Atomic Modeling, in which the author proposes a different way of doing data modeling. The rationale for this change is that there is a lack of skills to model data well: we are giving powerful tools to novices, and that is bound to lead to problems.

From the article:

"we are in a distressful situation both concerning the art as a whole but also its place in modern architectures"

Is this the case? Do we have a big problem on the horizon that requires us to make this big shift?

I'd say I am open-minded and expose myself to different ways of thinking so I can broaden my views. A few years ago, I learned a bit about COBOL, not because I had any real use for it but because I was curious. I found it very interesting and even saw its similarities with SQL. I approached the topic with no preconceived ideas; this was the first time I had heard of Atomic Modeling.

Is there something wrong with atomic and anchor data modeling in general?

The issues I see with ideas like Atomic data modeling are not in their goal. I am 100% aligned with the goal; the problem is the technology, process, and people needed to get there.

What we see in the market is a direct result of a backlash against doing things "perfectly." But why is this the case? I believe it is because we haven't communicated how we will achieve the vision behind ideas like atomic data modeling. The author even includes a key phrase in the first paragraph:

"practitioners shying away from its complexity"

If doing anchor data modeling is "complex," how are we going to up-skill people? Is this feasible? I am happy if I can get more people to use SQL instead of a GUI tool like Alteryx 😁

Doing anchor data modeling is complex

I am by no means an expert. Yet, I am fairly technical, and if I am not convinced, how will we convince decision-makers?

As I read this article, here's what I am thinking:

1. First, I will need to deconstruct the data I get from a source like material master data from SAP. That will be a bunch of tables, and who is going to do all this data modeling? It sounds expensive and time-consuming.

2. I am going to need some tooling for this, and I am either going to build it or use something a few others are using. Will my company want to take a chance on something this early? This sounds risky.

3. After I deconstruct all this data, I need to catalog all these atoms. I now have a lot of metadata, and that's good, but is the metadata business-friendly? We can't even get people to add table descriptions; how is this going to happen with this explosion of objects? Who will maintain it? How will we expose this? Is there a catalog for it already? Does that catalog have the other features people need? It sounds like I need to figure out a bunch of things, the biggest one being the change management aspect.

4. What sort of database will I use to store this? This is a great use case for a graph database. But graph databases are not widely adopted, and I have seen graph databases choke at scale. We can use a relational database, but these joins are going to be complex.  Someone may have figured all this out, but there's more tech and learning needed. It sounds like this will also add time and expense.

5. When I have managed to do all the above, I will need to construct what people can actually use. We need objects that work with tools that are available. I need to make relational tables I can query with SQL and viz tools, which are more user-friendly. This sounds like more work, more time, and more money.
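To make the modeling and join overhead above concrete, here is a toy sketch of what attribute-per-table decomposition implies, even for just two attributes. All table, column, and value names are invented for illustration; this is a loose approximation of the anchor-style idea, not output from any real anchor-modeling tool.

```python
import sqlite3

# Toy sketch of anchor-style decomposition: one anchor table holds identities,
# and every attribute lives in its own table, so reading a familiar "row" back
# requires one join per attribute. (Names are invented for illustration.)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Anchor: just the identity of each material.
cur.execute("CREATE TABLE material_anchor (material_id INTEGER PRIMARY KEY)")
# One table per attribute, each keyed by the anchor id.
cur.execute("CREATE TABLE material_name (material_id INTEGER, name TEXT)")
cur.execute("CREATE TABLE material_weight (material_id INTEGER, weight REAL)")

cur.execute("INSERT INTO material_anchor VALUES (1)")
cur.execute("INSERT INTO material_name VALUES (1, 'Steel bolt M8')")
cur.execute("INSERT INTO material_weight VALUES (1, 0.012)")

# Reassembling a wide row already takes two joins for two attributes;
# a real SAP material master has hundreds of attributes.
row = cur.execute("""
    SELECT a.material_id, n.name, w.weight
    FROM material_anchor a
    JOIN material_name n ON n.material_id = a.material_id
    JOIN material_weight w ON w.material_id = a.material_id
""").fetchone()
print(row)  # (1, 'Steel bolt M8', 0.012)
```

Every additional attribute adds another table and another join, which is exactly where the tooling, cataloging, and query-complexity questions above come from.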

I may have missed some steps and oversimplified what's needed for this type of change. I am also aware that I may not know what exists to solve all the above. However, if I don't know it, then there are a ton of other people who also don't know it and this is where we need to start. We need to understand how we will tactically achieve this "better" world.

How we tactically achieve a better world

What are we fixing?

I've had conversations on metadata-driven automation, and like atomic modeling, I am not clear on who we are helping and how. What are we improving and in what timeframe? In the end, it feels like we have optimized for something only a few companies can do. To do anchor modeling well would be a huge expense, and when things go wrong, there are several points of failure. When we look at business problems, we need to be sure to optimize the end-to-end system. We can't locally optimize one area because we are likely moving the problem somewhere else. This can be in terms of time, money, or usability.

Decision-makers are not interested in data modeling. They are expecting results and a faster time to market. It's hard enough getting people to do things "better." This is why I find it hard to imagine that we can get to this level of maturity any time soon.

What can we do instead?

There are incremental steps we can take to incorporate best practices into the modern data stack. We need to help people mature their data practice faster, and we should not let perfection get in the way of good. Most companies are not large enterprises with millions of dollars to spend on initiatives like atomic modeling. That being said, I have yet to see anchor modeling in practice, so I welcome the opportunity to learn. I remember years ago the debates about how Ruby on Rails was teaching people "bad practices." The other side of that argument is that Rails helped companies like Twitter and GitHub launch businesses faster. Rails was also better than the alternatives at the time, which included messy PHP code; others advocated for well-crafted, "scalable," and expensive Java applications. Rails may not be the powerhouse it once was, but it has had a huge influence on how we build software. I even see its influence in dbt, even if it might not have been intentional or direct.

Conclusion

Tools like Snowflake and dbt allow us to build processes that are much better than what most people have. Should we focus on all the "bad" things that may come with the modern data stack? Should we focus on how practitioners are not well educated and conclude that we need to throw out everything they are doing?

I don't think so; I believe that we can help companies mature their data practices faster. Will we have the best data models? Maybe not. Will users do things perfectly? Nope. But can we help them move faster and guide them along their journey to avoid big pitfalls? I think we can. Getting people to use git, automating testing, and creating DataOps processes is a huge step forward for many organizations. Let's start there.

There's a reason Data Mesh and the Modern Data Stack resonate with so many people. There's a desire to do things faster with more autonomy at many companies, not just the ones with multi-million-dollar budgets. Let's focus on what is achievable, do the best we can, and help people mature along the way. We don't need more complexity; we need less.

Top DataOps tools

In continuation of our previous blog discussing the importance of implementing DataOps, we now turn our attention to the tools that can efficiently streamline your processes. Additionally, we will explore real-life examples of successful implementations, illustrating the tangible benefits of adopting DataOps practices.

Which DataOps tools can help streamline your processes?

There are a lot of DataOps tools that can help you automate data processes, manage data pipelines, and ensure the quality of your data. These tools can help data teams work faster, make fewer mistakes, and deliver data products more quickly.

Here are some recommended tools needed for a robust DataOps process:

  1. dbt (data build tool): dbt is an open-source data transformation tool that lets teams use SQL to transform the data in their warehouse. dbt has a lightweight modeling layer and features like dependency management, testing, and the ability to generate documentation. Since dbt uses version-controlled code, it is easy to review changes to data transformations (code reviews) before they are put into production. dbt can dynamically change the target of a query's "FROM" statement on the fly, which allows us to run the same code against development, test, and production databases by just changing a configuration. During the CI process, dbt also lets us run only the changed transformations and their downstream dependencies.
  2. Fivetran: Fivetran is an Extract and Load (EL) tool that has gained popularity in recent years because it removes the complexity of writing and maintaining custom scripts to extract data from SaaS tools like Salesforce and Google Analytics. By automating data extraction from hundreds of sources, Fivetran frees data engineers to work on projects with a bigger impact. Finally, Fivetran has a robust API which allows you to save configurations done via their UI for disaster recovery or to promote configurations from a development to a production environment.
  3. Airbyte: Airbyte is another data-ingestion EL tool that is appealing because it is open source, requires little or no code, and is run by the community. It also makes it easier to extract and load data without having to do custom coding.  Airbyte also offers a connector development kit to help companies build custom connectors that may not be available. This allows companies to leverage most of the Airbyte functionality without too much work. There's also an API that can be used to retrieve configurations for disaster recovery.
  4. SQLFluff: SQLFluff is an open-source SQL linter that helps teams make sure their SQL code is consistent and follows best practices. It gives you a configurable set of rules to find and fix syntax errors, inconsistencies in style, and other common mistakes. SQLFluff can be added to the CI/CD pipeline so that problems are automatically found before they are added to the codebase. By using a tool like SQLFluff, you can make sure your team follows a consistent coding style, which will help with long-term project maintainability.
  5. dbt-checkpoint: dbt-checkpoint provides validators to help ensure the quality of your dbt projects. dbt is great, but when a project has a lot of models, sources, and macros, it gets hard for all the data analysts and analytics engineers to maintain the same level of quality. Users forget to update the columns in property (yml) files or to add descriptions for tables and columns. Without automation, reviewers have to do more work and may miss unintentional mistakes. Organizations can add automated validations with dbt-checkpoint, which improves the code review and release process.
  6. Hashboard: Hashboard is a business intelligence (BI) product built for data engineers to do their best work and easily spread the data love to their entire organizations. Hashboard has an interactive data exploration tool that enables anyone in an organization to discover actionable insights.
  7. GitHub: GitHub offers a cloud-based Git repository hosting service. It makes it easier for people and teams to use Git for version control and to work together. GitHub can also run the workflows needed for CI/CD and it provides a simple UI for teams to perform code reviews and allows for approvals before code is moved to production.
  8. Docker: Docker makes it easy for data teams to manage dependencies, such as the versions of libraries like dbt, dbt-checkpoint, and SQLFluff. Docker makes development workflows more robust by integrating the development pipeline and bundling dependencies, which simplifies reproducibility.
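As a small illustration of the environment-swapping idea described for dbt in point 1, here is a minimal sketch of resolving the same transformation against different schemas from configuration alone. The target names, schemas, and query are invented; dbt itself implements this with Jinja templating and its profiles configuration, not with this code.

```python
# Minimal sketch of environment swapping: the same transformation code
# resolves its source schema from a target configuration, so dev/test/prod
# runs differ only by which target is selected. (Names are invented;
# this mimics the concept, not dbt's actual implementation.)

TARGETS = {
    "dev": {"schema": "analytics_dev"},
    "prod": {"schema": "analytics"},
}

QUERY_TEMPLATE = "SELECT order_id, amount FROM {schema}.orders"

def render(target_name: str) -> str:
    """Resolve the FROM clause for the chosen target."""
    return QUERY_TEMPLATE.format(**TARGETS[target_name])

print(render("dev"))   # SELECT order_id, amount FROM analytics_dev.orders
print(render("prod"))  # SELECT order_id, amount FROM analytics.orders
```

Because the query text never changes between environments, the exact code that passed review and CI in development is what runs in production.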

Examples of companies who have successfully implemented DataOps


DataOps has been successfully used in the real world by companies of all sizes, from small startups to large corporations. The DataOps methodology is based on collaboration, automation, and monitoring throughout the entire data lifecycle, from collecting data to using it. Organizations can get insights faster, be more productive, and improve the quality of their data. DataOps has been used successfully in many industries, including finance, healthcare, retail, and technology.

Here are a few examples of real-world organizations that have used DataOps well:

  1. Optum: Optum is part of UnitedHealth Group. Optum prioritizes healthcare data management and analytics, and when they wanted to implement new features and apps quickly, they turned to DataOps. Their biggest challenge was managing data from dozens of sources via thousands of APIs. DataOps helped Optum break down silos, saving millions of dollars annually by reducing compute usage. A massive standardization and modernization effort created a scalable, centralized data platform that seamlessly shares information across multiple consumers.
  2. JetBlue: DataOps helped JetBlue make data-driven decisions. After struggling with an on-premises data warehouse, the airline migrated to the cloud to enable self-service reporting and machine learning. They've cleaned, organized, and standardized their data and leveraged DataOps to create robust processes. Their agility in data curation has enabled them to increase data science initiatives.
  3. HubSpot: HubSpot is a leading company that makes software for inbound marketing and sales. It used DataOps to improve the use of its data. By using a DataOps approach, HubSpot was empowered to do data modeling the right way, to define model dependencies, and to update and troubleshoot models, which resulted in a highly scalable database and opened up new data application possibilities.
  4. Nasdaq: Nasdaq, a global technology company that provides trading, clearing, and exchange technology, adopted DataOps to improve its data processing and analysis capabilities. They launched a data warehouse, products, and marketplaces quickly. After scalability issues, they moved to a data lake and optimized their data infrastructure 6x. The migration reduced maintenance costs and enabled analytics, ETL, reporting, and data visualization. This enabled better and faster business opportunity analysis.  
  5. Monzo: Monzo is a UK-based digital bank that used DataOps to create a data-driven culture and improve its customer experience. By letting everyone build and explore the different data maps, they help teams figure out how their changes affect the different levels of their data warehouse. This gives the Monzo data team confidence that the data they deliver to end users is correct.

What is the future of DataOps adoption?


DataOps has a bright future because more and more businesses are realizing how important data is to their success. With the exponential growth of data, it is becoming increasingly important for organizations to manage it well. DataOps will likely be adopted by more companies as they try to streamline their data management processes and cut costs. Cloud-based data management platforms have made it easier for organizations to manage their data well; some of their main benefits are scalability, flexibility, and cost-effectiveness. With DataOps, teams can improve collaboration and agility and build trust in data by creating processes that test changes before they are rolled out to production.
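As a sketch of the kind of automated check such a process might run before a change reaches production, consider the two checks below, similar in spirit to dbt's built-in not_null and unique tests. The data and column names are invented for illustration.

```python
# Minimal data-quality checks of the kind a DataOps pipeline runs in CI
# before changes are promoted to production. (Illustrative sketch only;
# tools like dbt ship these as declarative tests.)

def check_not_null(rows, column):
    """Return the rows where the given column is missing or None."""
    return [r for r in rows if r.get(column) is None]

def check_unique(rows, column):
    """Return the values that appear more than once in the given column."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

# Invented sample data with two deliberate problems: a null amount
# and a duplicated order_id.
orders = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},
    {"order_id": 2, "amount": 5.0},
]

print(check_not_null(orders, "amount"))  # [{'order_id': 2, 'amount': None}]
print(check_unique(orders, "order_id"))  # [2]
```

Failing the pipeline when either list is non-empty is a simple, automatable way to keep bad data from ever reaching end users, which is where the trust mentioned above comes from.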

With the development of modern data tools, companies can now adopt software development best practices in analytics. In today’s fast-paced world, it's important to give teams the tools they need to respond quickly to changes in the market by using high-quality data. Companies should use DataOps if they want to manage data better and reduce the technical debt created by uncontrolled processes. Putting DataOps processes in place for the first time can be hard, and it's easier said than done. DataOps requires a change in attitude, a willingness to try out new technologies and ways of doing things, and a commitment to continuous improvement. If an organization is serious about using DataOps, it must invest in the training, infrastructure, and cultural changes needed to make it work. With the right approach, companies can get the most out of DataOps and help their businesses deliver better outcomes.

At Datacoves, we offer a suite of DataOps tools to help organizations implement DataOps quickly and efficiently. We enable organizations to start automating simple processes and gradually build out more complex ones as their needs evolve. Our team has extensive experience guiding organizations through the DataOps implementation process.

Schedule a call with us, and we'll explain how dbt and DataOps can help you mature your data processes.
