Data Engineering: What You Need to Know

Technology   |   Paul Warburg   |   Sep 12, 2022   |   Time to read: 8 mins

At a high level, data engineering sounds simple. Move data from source A to target B, and voilà! The data is ready for use in analytics and machine learning projects.

In practice, however, there’s a lot of work that happens between point A and point B.

Building data pipelines, modeling data, transforming data, transporting data between landing zones, maintaining data quality, and ensuring data governance are just some of the activities involved in moving data throughout an organization.

In this post, we’ll delve deeper into the field of data engineering—what it is, as well as some of the cutting-edge strategies and technologies used today.

What is Data Engineering?

At its core, data engineering is about ensuring that data workers throughout the organization have access to the data they need.

Data engineering does the important job of gathering, transforming, and storing data from the many different sources where it resides so that business users can make sense of it in a meaningful way.

Data engineering teams also maintain ownership over the quality of the data. This includes maintaining security and data governance—essentially, making sure that the right people have access to the right data—as well as the four Cs of data quality: the consistency, conformity, completeness, and currency of the data.

High-quality data is the foundation of all data initiatives, which means that the quality of data engineering work dictates the quality of analytics, reporting, and machine learning. Data engineering can often be mistaken for a segregated precursor to data analytics and machine learning; in reality, it is part and parcel of these initiatives.

Central to data engineering is the process of building data pipelines, which are the means by which data migrates from its original source to a data repository, often through the different zones of that repository and, finally, into business applications and platforms.

It is during this data pipelining process that data is extracted from data systems and external sources, transformed into a format suitable for storage, and loaded into databases, a sequence otherwise known as the ETL process.
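To make those three steps concrete, here is a minimal ETL sketch in Python. The source file, the cleansing rules, and the SQLite target are hypothetical stand-ins; a real pipeline would point at the organization's own systems and warehouse.

```python
import sqlite3

import pandas as pd

# Extract: pull raw records from a hypothetical source export.
raw = pd.read_csv("orders_export.csv")

# Transform: cleanse and standardize before anything is stored.
raw["order_date"] = pd.to_datetime(raw["order_date"], errors="coerce")
raw = raw.dropna(subset=["order_id", "order_date"])  # drop incomplete rows
raw["amount"] = raw["amount"].round(2)                # normalize currency values

# Load: write the cleaned data into the target database.
with sqlite3.connect("warehouse.db") as conn:
    raw.to_sql("orders", conn, if_exists="replace", index=False)
```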

Core Data Engineering Processes: ETL vs. ELT

ETL is a staple of the data engineering field; however, it has recently undergone serious change.

The ETL process dates back to the 1980s, when it was created to automate much of the tedious coding required to retrieve and cleanse data. At the time, ETL was designed to handle data that was generally well-structured, often originating from a variety of operational systems or databases the organization wanted to report against. Specific ETL pipelines were built for a specific set of users. And the end result was successful—the productivity gains from ETL versus writing code by hand were undeniable.

Today, much of the architecture and data surrounding ETL has changed. The data itself has become much bigger and messier. And even the use cases, which were historically clearly-defined, have grown experimental in nature.

Perhaps the biggest difference is that instead of providing data for a few business groups, ETL pipelines are expected to serve a huge variety of users across an organization. Each of these users requires different data that has been cleansed and transformed differently. But there's one commonality—they all want the data fast, and the number of use cases they're working with is growing exponentially.

Traditional ETL pipelines have struggled to extend support for the self-service agility required by these emerging analytics use cases. ETL tools were built for IT users, not business users, which often leaves business users waiting in line to get data cleaned, passing specs back and forth until they’ve received their desired output.

Today, instead of an ETL pipeline, many organizations are taking an “ELT” approach, or decoupling data movement (extracting and loading) from data preparation (transforming).

This ELT approach follows a larger IT trend. Whereas IT architecture was historically built in monolithic silos, many organizations are now decoupling those same components so that they function independently. Decoupled technologies mean less work up front (stacks don't need to be deployed with every possible use and outcome mapped out in advance) and more efficient maintenance.
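For contrast with the ETL example above, here is a minimal sketch of the same pipeline restructured as ELT: the raw data is loaded into the target first, and the transformation is applied later as SQL inside that target. The file name, table names, and the SQLite stand-in for a cloud warehouse are assumptions for illustration.

```python
import sqlite3

import pandas as pd

# Extract and load: land the raw data in the target system as-is.
raw = pd.read_csv("orders_export.csv")
with sqlite3.connect("warehouse.db") as conn:
    raw.to_sql("raw_orders", conn, if_exists="replace", index=False)

    # Transform (decoupled, and potentially much later): analysts apply
    # their own business logic in SQL inside the target system.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS curated_orders AS
        SELECT order_id,
               DATE(order_date) AS order_date,
               ROUND(amount, 2) AS amount
        FROM raw_orders
        WHERE order_id IS NOT NULL
    """)
```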

A clean separation between data movement and data preparation also comes with its own specific benefits:

  • Reduced time — An ETL process requires the use of a staging area and system, which means extra time to load data; ELT does not.
  • Increased usability — Business users can own business logic instead of relying on a small IT team using Java, Python, Scala, etc. to transform data.
  • More cost-effective — Using SaaS solutions, an ELT stack can scale up or down to the needs of an organization; ETL was designed for large organizations only.
  • Improved analytics — Under ELT, business users can apply their unique business context to the data, which often leads to better results.

Data Engineering Infrastructure

In tandem with the shift toward an ELT approach has been the prioritization of centralized data repositories.

No longer are organizations dependent on siloed databases that segregated data and made it difficult to find and access. Instead, centralized data repositories allow for a cohesive view of data sources, improve exploration, and increase access to data for business users.

Some of the common centralized data repositories include:

  1. Data warehouse
    Data warehouses have long been an architectural standard for storing and processing business data. They are an intermediary between operational systems and business applications, allowing data engineers to bring together many different data sources in a single data warehouse. The disadvantage of a data warehouse is that it doesn’t allow for unstructured data storage, which can inhibit data exploration.
  2. Data lake
    Data lakes came about largely as a response to the limitations of a data warehouse. Data lakes are repositories that can store any type of raw data, structured or unstructured, so that it can later be transformed for specific business uses. The disadvantage of a data lake is the work involved in trying to maintain and organize it; very quickly, data lakes can become unusable data “swamps.”
  3. Data lakehouse
    The data lakehouse is the most recent answer to the years-long question of the best way to process and store modern data types. A data lakehouse combines elements of both a data lake and a traditional data warehouse and can simplify a multiple-system setup that includes a data lake, several data warehouses, and other specialized systems.

Often, data engineers must work with a combined approach, which may include all three of these types of data repositories, along with other storage options, such as simple object storage.
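As a rough illustration of such a combined setup, the sketch below lands raw files in a lake-style zone and promotes a cleaned subset into a warehouse-style table. The local directory stands in for object storage and the SQLite file for a warehouse; both, along with the file and column names, are assumptions.

```python
import sqlite3
from pathlib import Path

import pandas as pd

# Lake-style raw zone: keep the source data untouched, partitioned by date.
raw_zone = Path("datalake/raw/orders/2022-09-12")
raw_zone.mkdir(parents=True, exist_ok=True)
raw = pd.read_csv("orders_export.csv")
raw.to_parquet(raw_zone / "orders.parquet", index=False)  # requires pyarrow

# Warehouse-style curated zone: only the cleaned, analysis-ready subset.
curated = raw.dropna(subset=["order_id"])[["order_id", "order_date", "amount"]]
with sqlite3.connect("warehouse.db") as conn:
    curated.to_sql("orders_curated", conn, if_exists="replace", index=False)
```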

What Are the Required Data Engineering Skills?

As the number of data sources increases, data engineering has inevitably become more and more complex. To keep up, a proficient data engineer must be equipped with strong technical skills, including:

  • Extensive knowledge of programming languages such as C#, Java, Python, R, Ruby, Scala, and SQL. Python, R, and SQL are widely considered the most important on the list.
  • Experience with ETL/ELT tools and REST-oriented APIs, which are used for data integration (see the sketch after this list).
  • An understanding of a wide range of data systems and platforms, including data warehouses and data lakes, relational databases such as MySQL and PostgreSQL, NoSQL databases, and distributed processing frameworks such as Apache Spark.
  • An understanding of a wide range of business intelligence (BI) or business-facing platforms, which can be used to establish connections to data platforms and self-serve data to the business.
  • A familiarity with machine learning (ML). Though ML is not typical of the day-to-day work of a data engineer, data engineers should know the basics of how machine learning algorithms are deployed for insights.
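As an example of the API-oriented integration work mentioned in the list above, this sketch pulls records from a hypothetical paginated REST endpoint and lands them in a local table for later transformation. The URL, pagination scheme, and field names are invented for illustration.

```python
import sqlite3

import pandas as pd
import requests

# Hypothetical REST endpoint exposing paginated customer records.
BASE_URL = "https://api.example.com/v1/customers"

records, page = [], 1
while True:
    resp = requests.get(BASE_URL, params={"page": page}, timeout=30)
    resp.raise_for_status()
    batch = resp.json().get("results", [])
    if not batch:
        break
    records.extend(batch)
    page += 1

# Land the extracted records in a local table for downstream transformation.
df = pd.DataFrame(records)
with sqlite3.connect("warehouse.db") as conn:
    df.to_sql("customers_raw", conn, if_exists="append", index=False)
```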

At the same time, all of the work that data engineers do in building data pipelines and maintaining data repositories must align with end business goals, for example, ensuring that data is modeled correctly in a data warehouse or that the right data sources are being ingested in accordance with the business's schedule.

As such, organizations must take the right steps to ensure that collaboration between data engineering, data science, and data analytics teams is as tightly knit as possible.

Data Engineering Platforms

One of the best technologies on the market that fosters collaboration between data engineering, data science, and data analytics teams is a data engineering platform.

Designed to execute complex data engineering tasks, but with a user-friendly interface that anyone in the organization can understand, a data engineering platform is the best way to close the gap between data engineering teams and the rest of the organization.

Though commonly operated by data engineering teams, as the name would suggest, data engineering platforms allow the work of data engineering to be extended to business teams.

Business teams can use their unique context of the data to better design pipelines and ensure that data is properly transformed for use.

The Designer Cloud

The Designer Cloud is the leading data engineering platform on the market.

The Designer Cloud is the only open and interactive cloud platform for data engineers and analysts to collaboratively profile, prepare, and pipeline data for analytics and machine learning.

To learn more about the Designer Cloud and how it works alongside modern data warehousing concepts such as slowly changing dimensions, request a demo to see the Designer Cloud in action.
