Microsoft Fabric

This page describes how AnalyticsCreator generates data warehouse and analytical solutions for Microsoft Fabric environments, and how the generated artifacts integrate with Fabric services.

Overview

AnalyticsCreator supports Microsoft Fabric as a target platform for data warehouse generation, orchestration, and analytical modeling. It generates SQL-based structures, pipelines, and semantic models that run within Fabric services.

AnalyticsCreator itself does not execute workloads inside Fabric. It generates artifacts that are deployed and executed within Fabric components such as SQL endpoints, pipelines, and semantic models.

Supported Services and Components

  • Fabric Data Warehouse (SQL endpoint)
  • Fabric Lakehouse (SQL analytics endpoint)
  • OneLake storage
  • Fabric Data Pipelines
  • Power BI semantic models (Fabric-integrated)

What AnalyticsCreator Generates

For Microsoft Fabric, AnalyticsCreator generates:

  • SQL objects:
    • STG tables (import layer)
    • Persistent staging and historization tables
    • CORE transformations (views or tables)
    • DM layer (facts and dimensions)
  • Stored procedures for:
    • Data loading
    • Historization
    • Persisting logic
  • Fabric pipelines:
    • Orchestration of load and transformation steps
    • Dependency-based execution
  • Semantic models:
    • Dimensions and measures
    • Relationships between entities
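The generation of layered SQL objects can be pictured as a metadata-driven template process. The sketch below is a minimal illustration of that idea, not AnalyticsCreator's actual generator: the `Customer` table, its columns, and the emitted DDL shape are all assumptions made for the example.

```python
# Minimal sketch of metadata-driven DDL generation for a STG (import layer)
# table. Table and column metadata here are illustrative assumptions.

def stg_table_ddl(table: str, columns: dict[str, str]) -> str:
    """Render a CREATE TABLE statement for the staging (STG) layer."""
    cols = ",\n  ".join(f"[{name}] {dtype}" for name, dtype in columns.items())
    return f"CREATE TABLE [stg].[{table}] (\n  {cols}\n);"

ddl = stg_table_ddl("Customer", {"CustomerID": "INT", "Name": "VARCHAR(100)"})
print(ddl)
```

The same pattern extends to the other layers: persistent staging, CORE transformations, and DM facts and dimensions are further templates driven by the same table metadata.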

Supported Modeling Approaches

  • Dimensional modeling (facts and dimensions)
  • Data Vault modeling (hubs, links, satellites)
  • Hybrid approaches (Data Vault foundation with dimensional output)
  • Historized models (SCD2 with valid-from and valid-to timestamps)
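The SCD2 semantics behind historized models can be sketched as follows: when a tracked attribute changes, the current version is closed (its valid-to is set) and a new version is opened. This in-memory Python illustration only shows the versioning logic; the generated implementation runs as SQL stored procedures in Fabric, and the column names here are assumptions.

```python
from datetime import datetime

# Sketch of SCD2 historization semantics: close the current version and
# open a new one when attributes change. Illustrative only.

HIGH_DATE = datetime(9999, 12, 31)  # open-ended valid-to marker

def apply_scd2(history: list, key: str, attrs: dict, now: datetime) -> None:
    current = next((r for r in history
                    if r["key"] == key and r["valid_to"] == HIGH_DATE), None)
    if current and current["attrs"] == attrs:
        return  # no change, nothing to historize
    if current:
        current["valid_to"] = now  # close the previous version
    history.append({"key": key, "attrs": attrs,
                    "valid_from": now, "valid_to": HIGH_DATE})

history: list = []
apply_scd2(history, "C1", {"city": "Berlin"}, datetime(2024, 1, 1))
apply_scd2(history, "C1", {"city": "Munich"}, datetime(2024, 6, 1))
print(len(history))  # two versions: one closed, one current
```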

Both warehouse-style and lakehouse-style architectures can be implemented, depending on the selected Fabric components.

Deployment and Execution Model

AnalyticsCreator separates generation, deployment, and execution:

  • AnalyticsCreator generates SQL objects, pipelines, and semantic models
  • Deployment publishes these artifacts into Microsoft Fabric
  • Execution is performed by Fabric services (pipelines and SQL engine)

Data processing runs inside Fabric:

  • SQL transformations run on Fabric SQL endpoints
  • Pipelines orchestrate execution using Fabric pipeline services
  • Data is stored in OneLake
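Dependency-based execution in the generated pipelines amounts to ordering load and transformation steps so that each step runs only after the steps it depends on. The step names and dependency map below are illustrative assumptions, not identifiers AnalyticsCreator emits:

```python
from graphlib import TopologicalSorter

# Illustrative dependency map: each step lists its predecessors.
# Fabric pipeline activities are wired up with the same ordering.
deps = {
    "load_stg_customer": [],
    "historize_customer": ["load_stg_customer"],
    "build_dim_customer": ["historize_customer"],
    "build_fact_sales": ["build_dim_customer"],
}

order = list(TopologicalSorter(deps).static_order())
print(order)
```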

CI/CD and Version Control

  • Metadata is stored in the AnalyticsCreator repository
  • Projects can be versioned via JSON export (acrepo)
  • Deployment artifacts can be integrated into CI/CD pipelines
  • Fabric environments can be targeted via deployment configurations
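A CI/CD pipeline might validate the exported project JSON before deploying it to a Fabric environment. The required keys and the sample document below are hypothetical, they do not reflect the actual acrepo export schema:

```python
import json

# Hypothetical CI validation step for an exported project file.
# The required keys are assumptions; consult the acrepo export format.

def validate_export(text: str, required_keys: set) -> list:
    """Return a sorted list of required keys missing from the export."""
    doc = json.loads(text)
    return sorted(required_keys - doc.keys())

export = '{"project": "dwh", "version": "1.4", "objects": []}'
missing = validate_export(export, {"project", "version", "objects"})
print("OK" if not missing else f"missing keys: {missing}")
```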

Connectors, Sources, and Exports

Supported sources

  • SAP systems
  • SQL Server and relational databases
  • Other supported connectors

Exports and targets

  • Fabric SQL endpoints
  • Lakehouse tables
  • Semantic models for Power BI

Prerequisites, Limitations, and Notes

  • Fabric workspace and permissions must be configured
  • Linked services or connections must be defined for pipelines
  • SQL compatibility depends on Fabric SQL endpoint capabilities
  • Performance depends on data volume, partitioning, and load strategy

Design considerations:

  • Choose between Data Warehouse and Lakehouse based on workload
  • Use persistent staging to avoid repeated source reads
  • Validate generated joins and transformations for performance
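The persistent-staging recommendation above can be sketched as a watermark-based incremental read: only source rows changed since the last load are extracted, while earlier rows are served from the persisted staging table. Column names and the watermark mechanism are illustrative assumptions:

```python
# Sketch of watermark-based incremental extraction into persistent staging.
# Rows newer than the stored watermark are loaded; older rows stay in the
# persisted staging table, avoiding repeated reads of the full source.

def incremental_load(source_rows, watermark):
    new_rows = [r for r in source_rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [{"id": 1, "modified": 10}, {"id": 2, "modified": 20}]
delta, wm = incremental_load(rows, watermark=15)
print(delta, wm)  # only the row modified after the watermark; watermark advances
```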

Example Use Cases

  • Building a Fabric-native data warehouse with automated SQL generation
  • Implementing Data Vault models on top of OneLake storage
  • Generating Power BI-ready semantic models from warehouse structures
  • Replacing manual pipeline development with generated Fabric pipelines

Platform-specific Notes

  • Fabric unifies storage and compute, which simplifies deployment compared to separate Azure services
  • Lakehouse and warehouse approaches can coexist in the same environment
  • Semantic models are tightly integrated with Power BI

Key Takeaway

AnalyticsCreator generates SQL structures, pipelines, and semantic models for Microsoft Fabric, while execution and storage are handled by Fabric services such as SQL endpoints, pipelines, and OneLake.