Data Infrastructure Basics

Data Strategy
Data Engineering
Data Infrastructure
Startups
Author

Piotr Sidoruk

Published

October 8, 2024

In this post, we will cover foundational aspects of data infrastructure that every startup needs to understand. We’ll explore data sources, data warehousing and data lakes, and the processes of ETL and ELT.

Data Sources: Internal and External

Startups often gather data from a wide range of sources, which can be broadly categorized into two types:

  • Internal Data Sources: These include customer data, transaction logs, product usage data, and any other data generated within the company’s own systems.
    • Examples: CRM databases, SaaS tools, website analytics, internal financial systems.
  • External Data Sources: These are datasets obtained from outside the company that complement or enhance internal data.
    • Examples: Public datasets, third-party APIs, market research data.

Understanding the distinction and the interplay between internal and external data is critical for effective analysis and decision-making.
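
As a quick illustration, here is a minimal sketch of how the two kinds of sources might be combined; the file name, API endpoint, and join key below are hypothetical placeholders, not references to a specific product.

```python
# Minimal sketch: enriching an internal data source with an external one.
# The file name, API endpoint, and column names are all hypothetical.
import pandas as pd
import requests

# Internal source: a CRM export (e.g., a CSV dump or database extract).
customers = pd.read_csv("crm_customers.csv")

# External source: a third-party API returning JSON records.
resp = requests.get("https://api.example.com/company-enrichment")
resp.raise_for_status()
enrichment = pd.DataFrame(resp.json())

# Join external attributes onto internal records via a shared key.
enriched = customers.merge(enrichment, on="company_domain", how="left")
print(enriched.head())
```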

Data Warehousing and Data Lakes

A robust data infrastructure often incorporates both data warehousing and data lakes:

  • Data Warehousing: A data warehouse is a centralized repository for structured data, designed to support reporting and analytics. It’s optimized for querying and is often used for business intelligence tasks.
    • Key Characteristics: Structured data, schema-on-write, high query performance.
    • Tools: Amazon Redshift, Google BigQuery, Snowflake.
  • Data Lakes: A data lake is a storage repository that holds a vast amount of raw data in its native format. Data can be structured, semi-structured, or unstructured, and it’s often used for big data analytics and machine learning.
    • Key Characteristics: Raw data in any format (structured, semi-structured, or unstructured), schema-on-read, large-scale storage.
    • Tools: Amazon S3, Azure Data Lake, Hadoop.

Both systems can coexist within an organization, often complementing each other by serving different purposes.
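
To make the schema-on-write vs. schema-on-read distinction concrete, here is a minimal sketch in Python. The S3 bucket, object key, and table definition are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
# Sketch: the same event handled lake-style (schema-on-read) versus
# warehouse-style (schema-on-write). All names here are hypothetical.
import json
import boto3
import pandas as pd

event = {"user_id": 42, "event": "signup", "props": {"plan": "free"}}

# Data lake: write the raw record as-is; no schema is enforced at write time.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="startup-data-lake",
    Key="events/2024/10/08/event-0001.json",
    Body=json.dumps(event),
)

# Schema-on-read: structure is imposed only when the data is consumed.
obj = s3.get_object(Bucket="startup-data-lake", Key="events/2024/10/08/event-0001.json")
df = pd.json_normalize(json.loads(obj["Body"].read()))

# Data warehouse: rows must fit a schema defined up front (schema-on-write),
# so malformed records are rejected at load time rather than at query time.
WAREHOUSE_DDL = """
CREATE TABLE events (
    user_id INTEGER,
    event   VARCHAR,
    plan    VARCHAR
);
"""
```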

ETL/ELT Processes

Data needs to be extracted, transformed, and loaded into the right systems to be useful. There are two primary approaches to this process:

  • ETL (Extract, Transform, Load): In ETL, data is first extracted from various sources, transformed into the desired format, and then loaded into the target system (e.g., a data warehouse).
    • When to use ETL: Best for structured data, and for cases where data must be cleaned and validated before it reaches the target system.
    • Tools: Apache NiFi, Talend, Informatica.
  • ELT (Extract, Load, Transform): In ELT, data is first extracted and loaded into the storage system (e.g., a data lake), and transformation occurs afterwards. This approach is common in big data scenarios.
    • When to use ELT: Best for unstructured data or when transformation can be handled after loading.
    • Tools: dbt, Google Dataflow, Fivetran.

Choosing between ETL and ELT depends on the use case and the types of data you’re working with.
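
To illustrate the difference, here is a minimal sketch of both flows over the same raw file, assuming a CSV export as the source and a Postgres-compatible warehouse as the target. The file, connection string, and table names are hypothetical, and a production pipeline would use the dedicated tools listed above rather than hand-rolled scripts.

```python
# Sketch contrasting ETL and ELT on the same raw file. The source file,
# connection string, and table names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@host/analytics")

raw = pd.read_csv("orders_export.csv")  # Extract (shared by both flows)

# --- ETL: transform in the pipeline, then load only the cleaned result.
clean = (
    raw.dropna(subset=["order_id"])                         # Transform: drop bad rows,
       .assign(amount=lambda d: d["amount"].astype(float))  # enforce types before load
)
clean.to_sql("orders", engine, if_exists="append", index=False)  # Load

# --- ELT: load the raw data first, then transform inside the target system,
# typically with SQL (the pattern that tools like dbt are built around).
raw.to_sql("raw_orders", engine, if_exists="append", index=False)  # Load raw
with engine.begin() as conn:  # Transform in-database
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS orders_clean AS
        SELECT order_id, CAST(amount AS DOUBLE PRECISION) AS amount
        FROM raw_orders
        WHERE order_id IS NOT NULL
    """))
```

Notice that the transformation logic is the same in both cases; what changes is where it runs (in the pipeline versus inside the target system) and whether the raw data is preserved in the target.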