Quick Start Guide

This quick start guide helps new and trial users understand how to set up, model, and automate a data warehouse using AnalyticsCreator. It follows the application's actual workflow, from metadata definition through deployment and execution, and explains how SQL-based warehouse structures are generated and processed.

The guide assumes:

  • Strong SQL and ETL background
  • Familiarity with layered DWH design (STG, CORE, DM)

Core Concept

AnalyticsCreator is a metadata-driven design application that generates SQL-based data warehouse structures, transformation logic, and orchestration components. Instead of manually implementing ETL processes, developers define metadata, which is translated into executable database objects and pipelines.

The process follows a generation-driven approach:

  • Connect to source systems
  • Import metadata (tables, columns, keys, relationships)
  • Generate a draft data warehouse model using the wizard
  • Refine transformations, keys, and historization
  • Generate and deploy SQL artifacts and pipelines
  • Execute data loading and processing workflows

A key architectural element is the persistent staging layer (STG):

  • Source data is stored persistently after extraction
  • Supports reprocessing without re-reading the source system
  • Decouples ingestion from transformation and historization

In practice, staging is followed by a second layer where historization is applied before data is transformed into CORE structures (dimensions and facts).
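To make the layering concrete, a persistent staging table and its historized counterpart might look like the following sketch. All schema, table, and column names here are hypothetical illustrations, not the DDL AnalyticsCreator actually generates:

```sql
-- Persistent staging table: raw source rows plus load metadata (illustrative)
CREATE TABLE stg.Customer (
    CustomerID  INT           NOT NULL,  -- key as delivered by the source
    Name        NVARCHAR(100),
    City        NVARCHAR(100),
    LoadDate    DATETIME2     NOT NULL   -- when the row was extracted
);

-- Historized table: each change to a source row becomes a new version
CREATE TABLE hist.Customer (
    CustomerID  INT           NOT NULL,
    Name        NVARCHAR(100),
    City        NVARCHAR(100),
    ValidFrom   DATETIME2     NOT NULL,  -- version start
    ValidTo     DATETIME2     NULL       -- NULL marks the current version
);
```

Because the staging layer is persistent, the historization step can be re-run against stg.Customer at any time without touching the source system.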

Quick Start Flow

The implementation process in AnalyticsCreator follows a defined sequence:

  1. Create repository
    Initialize a metadata repository (a SQL Server database) that stores all data warehouse definitions.
  2. Create connectors
    Define connections to source systems (e.g. SAP, SQL Server) and enable metadata extraction.
  3. Import metadata and run wizard
    Automatically read source structures and generate a draft data warehouse model (STG, CORE, DM).
  4. Refine the model
    Adjust business keys, surrogate keys, relationships, historization behavior, and transformations.
  5. Synchronize
    Generate SQL objects (tables, views, procedures) and materialize the structure in the target database.
  6. Deploy
    Generate deployment packages (DACPAC, pipelines, semantic models) and deploy them to the target environment.
  7. Execute workflows
    Run generated pipelines (e.g. SSIS, Azure Data Factory) to load and process data.
  8. Consume data
    Use generated data marts and semantic models in reporting tools (e.g. Power BI).
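As an illustration of the refinement in step 4, a CORE dimension typically pairs a generated surrogate key with the business key imported from the source. The names below are hypothetical examples, not the tool's generated output:

```sql
-- CORE dimension: surrogate key for joins, business key for identity (illustrative)
CREATE TABLE core.DimCustomer (
    CustomerSK  INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key, generated
    CustomerID  INT           NOT NULL,         -- business key from the source
    Name        NVARCHAR(100),
    City        NVARCHAR(100)
);

-- Fact loads resolve the surrogate key via the business key
SELECT o.OrderID, o.Amount, d.CustomerSK
FROM   stg.Orders o
JOIN   core.DimCustomer d ON d.CustomerID = o.CustomerID;
```

Separating the two keys lets the warehouse keep stable join keys even if the source system reuses or restructures its identifiers.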

What This Quick Start Covers

  • Create connectors and define relationships (foreign keys, references)
  • Import and persist source data in the STG layer
  • Understand historization and persistent staging behavior
  • Build and refine CORE transformations (dimensions and facts)
  • Define business keys and surrogate keys
  • Create data marts (DM layer) and calendar dimensions
  • Generate and deploy SQL Server, pipeline, and analytical model artifacts
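A calendar dimension like the one mentioned above is essentially a pre-populated table of dates with descriptive attributes. A minimal T-SQL sketch (hypothetical names, deliberately simplified attributes) could look like this:

```sql
-- Minimal calendar dimension, populated for one year (illustrative)
CREATE TABLE dm.DimDate (
    DateKey  INT   PRIMARY KEY,  -- e.g. 20250131
    [Date]   DATE  NOT NULL,
    [Year]   INT   NOT NULL,
    [Month]  INT   NOT NULL,
    [Day]    INT   NOT NULL
);

DECLARE @d DATE = '2025-01-01';
WHILE @d < '2026-01-01'
BEGIN
    INSERT INTO dm.DimDate (DateKey, [Date], [Year], [Month], [Day])
    VALUES (YEAR(@d) * 10000 + MONTH(@d) * 100 + DAY(@d),
            @d, YEAR(@d), MONTH(@d), DAY(@d));
    SET @d = DATEADD(DAY, 1, @d);
END;
```

In practice such a dimension carries many more attributes (fiscal periods, week numbers, holiday flags), but the pattern is the same: generate one row per date with derived columns.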