After workflows have been executed, the data warehouse is populated and ready for consumption. The final step is to access the processed data through data marts and semantic models.
AnalyticsCreator generates structures that are optimized for analytical consumption. These include dimensional models and semantic layers that can be directly used by reporting and BI tools.
Purpose
Provide structured, query-ready data for analytical tools and reporting use cases.
Design Principle
AnalyticsCreator separates data processing from data consumption.
STG and CORE layers handle ingestion and transformation
DM and semantic models provide consumption-ready structures
Consumers should not access staging or intermediate layers directly.
Inputs / Outputs
Inputs
Processed CORE structures
Generated DM layer (facts and dimensions)
Deployed semantic model
Outputs
Queryable data marts
Semantic models with defined relationships and measures
Data available for reporting tools (e.g. Power BI)
Internal Mechanics
1. DM layer exposure
The DM layer contains consumption-ready structures such as fact and dimension tables or views. These are generated based on the CORE transformations and are optimized for analytical queries.
2. Semantic model generation
AnalyticsCreator can generate a semantic model that defines:
Relationships between facts and dimensions
Measures and calculated fields
Hierarchies and aggregation logic
3. Data access
Reporting tools connect to the semantic model or directly to the DM layer. Typical access patterns include:
DirectQuery or import into BI tools
Connection to tabular models
4. Refresh behavior
After workflow execution, the semantic model can be refreshed to reflect updated data. This ensures consistency between the data warehouse and reporting layer.
Types / Variants
Consumption layers
DM tables or views
Tabular models
External BI tool connections
Access patterns
Direct query on DM layer
Semantic model (recommended)
Hybrid approaches
Example
After execution, the following structures are available:
dm.FactSales
dm.DimCustomer
dm.DimProduct
A semantic model defines relationships between these tables and exposes measures such as:
TotalSales = SUM(FactSales.Amount)
A reporting tool connects to this model and visualizes sales by customer, product, and time.
When to Use / When NOT to Use
Use when
Data warehouse has been executed and populated
Users require analytical access to data
Reporting or dashboarding is required
Do NOT use lower layers when
Accessing STG or CORE directly for reporting
Building reports on non-finalized structures
Performance & Design Considerations
DM layer should be optimized for query performance
Semantic models reduce complexity for end users
Pre-aggregations can improve performance for large datasets
Direct access to CORE can negatively impact performance and consistency
Design trade-off:
Direct access offers flexibility
Semantic models provide consistency and usability
Integration with other AnalyticsCreator features
CORE transformations: provide input for DM layer
Deployment: creates semantic models
Execution: ensures data is up to date
Macros and transformations: influence calculated fields and measures
Common Pitfalls
Querying STG or CORE layers directly
Ignoring semantic model design
Missing refresh after data load
Overloading DM layer with unnecessary complexity
Key Takeaway
The DM layer and semantic model provide consumption-ready data for reporting tools and should be the primary access point for analytical workloads.
[
{"id":383461199041,"name":"Getting Started","type":"category","path":"/docs/getting-started","breadcrumb":"Getting Started","description":"","searchText":"getting started this section provides the fastest path to understanding how to set up and use analyticscreator. it focuses on how a data warehouse is generated, deployed, and executed based on metadata definitions. if you are new to analyticscreator, start with the quick start guide. it walks through the full workflow from repository creation to data consumption. recommended path quick start guide end-to-end implementation flow from metadata to deployed data warehouse understanding analyticscreator architecture, layers (stg, core, dm), and design principles installation and configuration system setup and environment configuration typical workflow create repository define connectors run data warehouse wizard refine model synchronize database deploy artifacts execute workflows consume data available sections installation system requirements download and installation understanding analyticscreator quick start guide"}
,{"id":383225948363,"name":"Quick Start Guide","type":"section","path":"/docs/getting-started/quick-start-guide","breadcrumb":"Getting Started › Quick Start Guide","description":"","searchText":"getting started quick start guide this quick start guide helps new and trial users understand how to set up, model, and automate a data warehouse using analyticscreator. it follows the actual execution flow of the application, from metadata definition to deployment and execution, and explains how sql-based warehouse structures are generated and processed. the guide assumes: strong sql and etl background familiarity with layered dwh design (stg, core, dm) core concept analyticscreator is a metadata-driven design application that generates sql-based data warehouse structures, transformation logic, and orchestration components. instead of manually implementing etl processes, developers define metadata, which is translated into executable database objects and pipelines. the process follows a generation-driven approach: connect to source systems import metadata (tables, columns, keys, relationships) generate a draft data warehouse model using the wizard refine transformations, keys, and historization generate and deploy sql artifacts and pipelines execute data loading and processing workflows a key architectural element is the persistent staging layer (stg): source data is stored persistently after extraction supports reprocessing without re-reading the source system decouples ingestion from transformation and historization in practice, staging is followed by a second layer where historization is applied before data is transformed into core structures (dimensions and facts). quick start flow the implementation process in analyticscreator follows a defined sequence: create repository initialize a metadata repository (sql server database) that stores all definitions of the data warehouse. create connectors define connections to source systems (e.g. sap, sql server) and enable metadata extraction. import metadata and run wizard automatically read source structures and generate a draft data warehouse model (stg, core, dm). refine the model adjust business keys, surrogate keys, relationships, historization behavior, and transformations. synchronize generate sql objects (tables, views, procedures) and materialize the structure in the target database. deploy generate and deploy deployment packages (dacpac, pipelines, semantic models). execute workflows run generated pipelines (e.g. ssis, azure data factory) to load and process data. consume data use generated data marts and semantic models in reporting tools (e.g. power bi). what this quick start covers create connectors and define relationships (foreign keys, references) import and persist source data in the stg layer understand historization and persistent staging behavior build and refine core transformations (dimensions and facts) define business keys and surrogate keys create data marts (dm layer) and calendar dimensions generate and deploy sql server, pipeline, and analytical model artifacts"}
,{"id":383985776833,"name":"Repository & Metadata Model","type":"subsection","path":"/docs/getting-started/quick-start-guide/repository-metadata-model","breadcrumb":"Getting Started › Quick Start Guide › Repository & Metadata Model","description":"","searchText":"getting started quick start guide repository & metadata model before any connector, transformation, or data warehouse object is created, analyticscreator requires a repository. the repository is a sql server database that stores the complete metadata definition of the data warehouse. all objects defined in analyticscreator—sources, transformations, keys, relationships, deployment settings, and workflows—are stored in this repository. it acts as the central control layer from which all sql code and artifacts are generated. purpose provide a persistent metadata foundation that defines the structure, logic, and behavior of the data warehouse independently of the generated sql artifacts. design principle analyticscreator follows a metadata-driven approach where the repository contains the full definition of the data warehouse model. the repository is not a runtime system; it is a design-time control layer. all generated objects (tables, views, procedures, pipelines) are derived from this metadata. once generated and deployed, these objects can run independently of analyticscreator. key principle: metadata stored in repository → sql and pipelines generated → deployed to target system inputs / outputs inputs repository name and sql server instance initial project configuration outputs sql server database containing metadata structured definitions of: connectors source objects transformations keys and relationships deployment configurations internal mechanics 1. repository creation when a new project is created, analyticscreator initializes a sql server database that serves as the repository. this database contains all metadata required to define the data warehouse. 2. metadata storage each object in analyticscreator is stored as structured metadata. this includes: source definitions (tables, columns, data types) transformation logic historization settings dependencies between objects the repository is fully accessible and can be queried or extended directly if required. 3. central control layer all design changes are written to the repository. no sql objects are created in the target system at this stage. the repository acts as the single source for: code generation deployment packaging dependency resolution 4. separation of design-time and runtime analyticscreator operates purely at design time. the repository defines what will be generated, but execution happens only after deployment in the target environment (sql server, azure, etc.). types / variants local sql server repository (development setup) shared repository (team collaboration) version-controlled repository (via acrepo / json export) example creating a new repository results in a sql server database that contains metadata tables describing the data warehouse model. conceptually: repository (sql server) ├── connectors ├── sources ├── transformations ├── keys / relationships ├── deployment config no stg, core, or dm tables exist yet in the target system. only their definitions are stored. when to use / when not to use use when starting a new data warehouse project managing metadata centrally working in a team with shared definitions do not use as a runtime system the repository does not store business data it is not queried by reporting tools performance & design considerations repository size grows with model complexity, not data volume changes in metadata trigger regeneration, not direct sql changes direct modifications in the target database can be overwritten during synchronization design trade-off: centralized metadata control vs direct sql flexibility integration with other ac features connectors: stored and managed in repository wizard: reads metadata and generates draft models synchronization: converts metadata into sql objects deployment: packages generated artifacts ci/cd: repository can be versioned and exported common pitfalls treating the repository as a data storage layer manually modifying generated sql instead of metadata ignoring repository versioning in team environments mixing multiple environments in a single repository key takeaway the repository is the central metadata store that defines the entire data warehouse and drives all code generation and deployment in analyticscreator."}
,{"id":385512401101,"name":"Create Connectors","type":"subsection","path":"/docs/getting-started/quick-start-guide/create-connectors","breadcrumb":"Getting Started › Quick Start Guide › Create Connectors","description":"","searchText":"getting started quick start guide create connectors after initializing the repository, the next step is to define connectors to source systems. connectors provide the technical and structural foundation for importing metadata and generating the data warehouse model. a connector defines how analyticscreator accesses a source system and how metadata (tables, columns, keys, relationships) is retrieved. this metadata is then stored in the repository and used by the data warehouse wizard to generate a draft model. purpose establish access to source systems and import structural metadata required for automated data warehouse generation. design principle analyticscreator separates metadata acquisition from data extraction. metadata (structure) is imported first and stored in the repository data extraction happens later during execution (via pipelines) this means a data warehouse model can be designed and generated without requiring an active connection to the source system at runtime. inputs / outputs inputs connector type (e.g. sql server, sap, metadata connector) connection configuration (server, database, authentication) selected schemas, tables, or metadata source outputs connector definition stored in repository imported metadata: tables and views columns and data types primary keys foreign keys or references (if available) internal mechanics 1. connector definition the connector stores the configuration required to access a source system. this includes connection details and selection of relevant schemas or objects. 2. metadata extraction analyticscreator reads structural metadata from the source system or from a metadata connector. this includes: table structures column definitions key definitions relationships between tables in some cases (e.g. sap or metadata connectors), metadata can be imported without direct access to the operational system. 3. repository persistence all imported metadata is stored in the repository. at this stage: no sql objects are generated no data is extracted no pipelines are executed the system builds a structural model that will later drive code generation. 4. relationship availability if source systems expose foreign keys or references, these are imported and can be reused during modeling. if not, relationships must be defined manually in later steps. types / variants connector types direct database connectors (e.g. sql server) erp connectors (e.g. sap metadata extraction) metadata connectors (predefined structures without live connection) import modes full metadata import selective table import manual definition (if metadata is incomplete) example a connector is created for a sql server database containing the following tables: customer orders orderlines the system imports: column definitions (e.g. customerid, orderid) primary keys foreign key relationships (e.g. orders → customer) these definitions are stored in the repository and become available for automated model generation in the next step. when to use / when not to use use when starting a new data warehouse model importing metadata from source systems preparing for automated model generation do not rely on connectors alone when source metadata is incomplete or inconsistent business relationships differ from technical relationships required structures are not exposed in the source system performance & design considerations connector scope directly affects model complexity importing unnecessary tables increases modeling overhead metadata quality determines quality of generated model design trade-off: broad import (high coverage, more noise) selective import (cleaner model, more manual work later) integration with other analyticscreator features repository: stores connector and metadata definitions wizard: uses imported metadata to generate draft model stg generation: based on imported source structures transformations: reuse source metadata and relationships common pitfalls importing entire source systems without filtering assuming source relationships are suitable for analytical models using technical keys as business keys without validation skipping metadata validation before running the wizard key takeaway connectors import and persist source metadata in the repository, forming the structural basis for automated data warehouse generation."}
,{"id":385512338634,"name":"Run Data Warehouse Wizard","type":"subsection","path":"/docs/getting-started/quick-start-guide/run-data-warehouse-wizard","breadcrumb":"Getting Started › Quick Start Guide › Run Data Warehouse Wizard","description":"","searchText":"getting started quick start guide run data warehouse wizard after connectors and metadata are available in the repository, the next step is to generate a draft data warehouse model using the analyticscreator wizard. this is the central step where the system translates metadata into a structured warehouse design. the wizard analyzes imported metadata and automatically creates a full model including staging, historization, and transformation layers. this provides a working baseline that can be refined instead of built manually from scratch. purpose generate a complete draft data warehouse model based on imported metadata, including stg, core, and dm structures. design principle analyticscreator follows a generation-first approach: the full data warehouse model is generated automatically from metadata developers refine and adjust the generated model instead of building it manually the wizard uses structural metadata such as tables, keys, and relationships to infer joins, dimensions, and fact structures. inputs / outputs inputs imported metadata from connectors selected source tables modeling approach (e.g. data vault, dimensional, mixed) optional configuration (naming conventions, defaults) outputs generated data warehouse model including: stg layer (import structures) persistent staging and historization structures core layer (dimensions and facts) dm layer (analytical structures) predefined joins and relationships initial transformation logic internal mechanics 1. metadata analysis the wizard reads all metadata stored in the repository, including tables, columns, and relationships. based on this, it determines how objects are related. 2. model generation analyticscreator generates a complete data warehouse structure. this includes: import tables in the stg layer persistent staging structures with historization core transformations for dimensions and facts dm structures for analytical consumption 3. relationship inference joins between tables are derived automatically based on source relationships. these joins are used to construct fact and dimension transformations. 4. default logic generation the wizard can apply default behaviors such as: including directly and indirectly related tables in facts creating standard transformations generating calendar dimensions 5. visual model creation the result is a fully structured data warehouse diagram that shows all layers and dependencies. at this stage, the model is defined but not yet deployed. types / variants modeling approaches data vault model (hubs, links, satellites) dimensional model (facts and dimensions) mixed approach (data vault foundation with dimensional output) configuration options naming conventions (prefixes, suffixes) default transformations inclusion rules for related tables example a set of source tables is selected: customer orders orderlines products after running the wizard: stg tables are created for each source relationships are detected automatically a fact table is generated based on transaction data dimensions are generated for related entities the resulting model already contains joins, transformation paths, and structural dependencies. when to use / when not to use use when starting a new data warehouse model rapidly generating a baseline structure working with well-defined source metadata do not rely on defaults when business logic is complex or non-standard source relationships are incomplete or incorrect fact and dimension definitions require domain-specific adjustments performance & design considerations the wizard accelerates initial modeling but does not replace design decisions generated joins should be reviewed for correctness and performance fact table scope depends on inclusion settings (direct vs indirect relationships) design trade-off: full automation provides speed manual refinement ensures correctness and performance integration with other analyticscreator features repository: provides metadata input for the wizard transformations: generated and refined after wizard execution synchronization: converts generated model into sql objects deployment: packages generated artifacts common pitfalls assuming the generated model is production-ready without review over-including tables leading to overly complex fact structures ignoring incorrect or missing source relationships not validating generated joins key takeaway the wizard generates a complete data warehouse model from metadata, which is then refined and deployed rather than built manually."}
,{"id":385512401114,"name":"Refine the Model","type":"subsection","path":"/docs/getting-started/quick-start-guide/refine-the-model","breadcrumb":"Getting Started › Quick Start Guide › Refine the Model","description":"","searchText":"getting started quick start guide refine the model after generating the draft model with the wizard, the next step is to refine and adjust the data warehouse structure. the generated model provides a complete baseline, but it must be validated and adapted to match business logic, data quality, and performance requirements. this step focuses on defining keys, adjusting transformations, handling historization, and ensuring that the generated joins and structures reflect the intended analytical model. purpose validate and adjust the generated data warehouse model to ensure correct business logic, data relationships, and performance behavior. design principle analyticscreator generates a structurally complete model, but correctness is achieved through refinement. automation provides the structure manual refinement ensures semantic accuracy developers work on metadata definitions, not directly on sql, and all changes are reflected in generated code during synchronization. inputs / outputs inputs generated draft model (stg, core, dm) source metadata and relationships business requirements and logic outputs refined transformations defined business keys and surrogate keys adjusted joins and relationships configured historization behavior internal mechanics 1. column selection and cleanup generated transformations often include all available columns. unnecessary attributes should be removed to reduce model complexity and improve performance. 2. business key definition business keys must be validated or defined explicitly. these keys determine: uniqueness of entities join conditions between tables basis for historization 3. surrogate key generation analyticscreator generates surrogate keys automatically. depending on the modeling approach: identity-based keys (e.g. integer) hash-based keys (for data vault or hybrid models) hash keys are typically generated in the staging layer as calculated and persisted columns. 4. relationship validation automatically generated joins should be reviewed. this includes: correct join paths cardinality assumptions inclusion of required tables 5. historization configuration historization is applied in persistent staging and core layers. typical behavior includes: valid-from and valid-to columns tracking changes over time the historization strategy should be verified for correctness and performance impact. 6. macro usage reusable sql logic is implemented using macros. for example: hash key generation standard transformations macros allow centralized control of repeated logic without modifying generated sql directly. 7. dimension and fact adjustments fact tables and dimensions generated by the wizard should be refined: remove unnecessary joins add required attributes ensure correct grain of fact tables 8. calendar and date handling date columns should typically be replaced by references to a calendar dimension. this is often done using predefined macros. types / variants key strategies business keys only surrogate keys (identity) hash-based keys historization strategies scd2 (valid-from / valid-to) snapshot-based access current-state only transformation styles fully generated adjusted via metadata extended with custom sql logic example a generated fact table includes all columns from multiple related tables. refinement steps: remove unnecessary attributes validate join between orders and customers define surrogate key for dimension tables replace date columns with calendar dimension references example adjustment: -- before refinement select * from stg_orders o join stg_customer c on o.customer_id = c.customer_id; -- after refinement (conceptual) select o.order_id, c.customer_key, o.order_date_key, o.amount from core_orders o join dim_customer c on o.customer_key = c.customer_key; when to use / when not to use use when after running the wizard validating generated model structures aligning model with business logic do not skip when working with complex source systems data quality issues exist performance requirements are strict performance & design considerations reducing column count improves performance incorrect joins can cause data duplication historization increases storage and processing cost hash keys improve scalability but add computation overhead design trade-off: automation speed vs model accuracy flexibility vs standardization integration with other analyticscreator features wizard: provides initial model macros: define reusable sql logic synchronization: generates sql from refined metadata deployment: uses finalized model for artifact creation common pitfalls leaving generated joins unvalidated using incorrect business keys overloading fact tables with unnecessary attributes ignoring historization impact on performance mixing business logic directly into sql instead of metadata key takeaway the generated model must be refined to ensure correct business logic, keys, and performance before sql generation and deployment."}
,{"id":385512338635,"name":"Synchronize (SQL Generation)","type":"subsection","path":"/docs/getting-started/quick-start-guide/synchronize-sql-generation","breadcrumb":"Getting Started › Quick Start Guide › Synchronize (SQL Generation)","description":"","searchText":"getting started quick start guide synchronize (sql generation) after the model has been refined, the next step is to synchronize the data warehouse. synchronization converts the metadata stored in the repository into physical sql objects in the target database. at this stage, analyticscreator materializes the designed warehouse structure. tables, views, and generated procedures become visible in the sql server database. this is the point where the model moves from design-time metadata to deployable database objects. purpose generate and materialize the physical database structure from the metadata model. design principle synchronization is the controlled transition from metadata definition to sql implementation. the repository remains the source of truth the target database is regenerated or updated from metadata developers do not manually create warehouse objects in the target database. instead, analyticscreator generates them consistently from the repository definitions. inputs / outputs inputs refined metadata model in the repository target database configuration naming conventions and generation settings outputs generated sql server database objects, including: stg tables persistent staging and historization tables core views and tables dm views and tables stored procedures for loading, historization, and persisting internal mechanics 1. metadata evaluation analyticscreator reads the current model definition from the repository and determines which sql objects must be created or updated. 2. object generation based on the metadata, the system generates sql artifacts such as: physical tables for staging and persistent layers views for generated transformations stored procedures for historization and persisting 3. schema materialization the generated structure is applied to the target sql server database. after synchronization, the database contains the warehouse objects defined in the model. 4. dependency-aware generation objects are generated in the required order so that dependent objects can reference upstream objects correctly. 5. re-synchronization behavior if the model changes, synchronization updates the target structure accordingly. this keeps the generated sql database aligned with the repository metadata. types / variants typical synchronized object types import tables historization tables transformation views persisted transformation tables stored procedures generation patterns view-based transformations table-based persisted layers procedure-driven loading and historization example a refined model contains: one source import table one historized customer table one fact transformation one customer dimension after synchronization, the target sql server database contains generated objects such as: stg.customer_import pst.customer_history core.vw_factsales dm.vw_dimcustomer sp_load_customer_import sp_historize_customer the model now exists as physical sql objects, but data is not yet loaded unless execution is triggered separately. when to use / when not to use use when the model has been refined and validated you want to materialize the current warehouse structure you need to inspect or test generated sql objects do not treat synchronization as execution synchronization creates structure, not loaded business data etl or pipeline execution happens later performance & design considerations synchronization affects schema, not data volume frequent changes to metadata can cause repeated structural updates manual database changes outside analyticscreator can be overwritten design trade-off: consistent generated structure vs manual database customization integration with other analyticscreator features repository: remains the source for all generated objects refinement: defines what is materialized persisting: adds generated persisted tables and procedures deployment: packages the synchronized structures for release common pitfalls assuming synchronization loads data editing generated database objects manually synchronizing before validating keys and joins forgetting that metadata, not the target database, is authoritative key takeaway synchronization materializes the metadata model as physical sql objects in the target database, but it does not execute data loading by itself."}
,{"id":385512401131,"name":"Deploy","type":"subsection","path":"/docs/getting-started/quick-start-guide/deploy","breadcrumb":"Getting Started › Quick Start Guide › Deploy","description":"","searchText":"getting started quick start guide deploy after synchronization, the data warehouse structure exists in the target database. the next step is deployment, where analyticscreator generates and distributes deployment artifacts to the selected environment. deployment packages the generated database objects together with orchestration components such as pipelines and analytical models. this allows the data warehouse to be executed and used in a target environment such as sql server, azure or fabric. purpose package and deploy generated database structures, pipelines, and analytical models to a target environment. design principle deployment separates structure generation from environment distribution. synchronization creates the structure deployment distributes and activates it in a target system all deployment artifacts are generated from metadata and can be recreated at any time. inputs / outputs inputs synchronized data warehouse model deployment configuration (target server, database, credentials) selected components (database objects, pipelines, semantic models) outputs deployment package containing: sql scripts or dacpac ssis packages or azure data factory pipelines analytical models (e.g. tabular model for power bi) deployed artifacts in the target environment internal mechanics 1. deployment package creation analyticscreator generates a deployment package that contains all required components for the data warehouse. this includes database objects, pipeline definitions, and optional analytical models. 2. target configuration deployment settings define where the artifacts will be deployed. this includes: sql server or azure environment database name authentication details 3. database deployment the generated database structure is applied to the target system. this may include: creating or updating schemas deploying tables, views, and procedures 4. pipeline generation analyticscreator automatically generates orchestration components: ssis packages for on-premise environments azure data factory pipelines for cloud environments fabric data factory pipelines these pipelines define how data is extracted, transformed, and loaded. 5. analytical model generation if configured, a semantic model is generated and deployed. this includes: dimensions and measures relationships between tables compatibility with reporting tools such as power bi 6. deployment logging the deployment process produces logs that show which objects and components were created or updated. types / variants deployment targets on-premise sql server azure sql database azure synapse or fabric environments pipeline variants ssis packages azure data factory pipelines analytical outputs tabular models for power bi powerbi project other supported analytical engines example a deployment is configured with: target sql server database ssis package generation enabled tabular model generation enabled after deployment: database objects are created in the target database ssis packages are generated and available in a visual studio project a tabular model is deployed and available for power bi at this stage, the system is fully deployed but not yet populated with data. when to use / when not to use use when the model is finalized and synchronized you want to move the data warehouse to a target environment pipelines and analytical models need to be generated do not assume deployment loads data deployment creates structure and pipelines data loading requires execution of pipelines performance & design considerations deployment time depends on model size and number of objects pipeline generation adds orchestration complexity but reduces manual work repeated deployments should be controlled via versioning design trade-off: automated deployment vs manual control of environment-specific configurations integration with other analyticscreator features synchronization: provides the generated structure workflows: define execution order within pipelines ci/cd: deployment packages can be integrated into pipelines repository: remains the source for regeneration common pitfalls deploying without validating the model incorrect connection configuration assuming deployment includes data loading not selecting required pipeline or model components key takeaway deployment packages and distributes the generated data warehouse structure, pipelines, and analytical models to a target environment, but does not execute data loading."}
,{"id":385512401133,"name":"Execute Workflows (Load Data)","type":"subsection","path":"/docs/getting-started/quick-start-guide/execute-workflows-load-data","breadcrumb":"Getting Started › Quick Start Guide › Execute Workflows (Load Data)","description":"","searchText":"getting started quick start guide execute workflows (load data) after deployment, the data warehouse structure, pipelines, and analytical models exist in the target environment, but no business data has been loaded yet. the next step is to execute the generated workflows. workflow execution runs the generated load processes in the correct order. this is the stage where source data is extracted, written to staging, historized where required, transformed into core structures, and exposed through data marts and analytical models. purpose execute the generated loading and processing workflows so that the deployed data warehouse is populated with data. design principle analyticscreator separates execution from generation. generation defines structure and logic execution runs the actual data movement and processing this separation makes it possible to validate and deploy a model before loading any business data. inputs / outputs inputs deployed database objects generated workflows or pipeline packages configured source connections and linked services execution parameters and scheduling context outputs loaded stg tables historized persistent staging tables processed core structures updated dm structures refreshed analytical model content internal mechanics 1. workflow start execution begins by starting the generated workflow package or pipeline. this acts as the orchestration entry point for the full load process. 2. source extraction data is read from the configured source systems and written into the stg layer. import mappings, filters, and variables defined in the model are applied during this step. 3. persistent staging and historization after import, the data is written into the persistent staging layer. if historization is enabled, valid-from and valid-to handling or other configured historization logic is executed here. 4. core processing generated transformations are processed in dependency order. facts, dimensions, and other core structures are built from the persisted source data. 5. dm and semantic model refresh after core processing, the dm layer and the generated semantic model can be refreshed so that reporting tools can consume the updated data. 6. dependency handling the execution order is controlled by the generated workflow logic. upstream objects are processed before downstream objects so that dependencies are resolved automatically. types / variants execution variants ssis-based execution azure data factory pipeline execution manual execution for testing scheduled execution in production loading patterns full load incremental load historized load example a deployed workflow package contains the following sequence: load source table into stg.customer_import apply historization into pst.customer_history refresh fact and dimension transformations refresh the semantic model used by power bi at the end of execution: source data is available in staging historical versions are stored where configured reporting tools can access current analytical data when to use / when not to use use when the deployment has completed successfully source connections are configured correctly you want to populate or refresh the data warehouse do not execute before validating linked services and source access reviewing load filters and parameters confirming that required objects have been deployed performance & design considerations execution time depends on data volume, transformation complexity, and load pattern persistent staging supports reprocessing without re-reading source systems incremental loading reduces runtime but requires correct filter logic historization increases write volume and storage requirements design trade-off: full reloads are simpler to validate incremental and historized loads scale better but require stricter design control integration with other analyticscreator features connectors: provide source access used during execution stg and historization: form the first processing layers workflows: define orchestration and dependency order deployment: provides the executable packages and pipelines semantic models: can be refreshed after successful load common pitfalls assuming deployment already loaded data running workflows without validating linked services using incorrect filter logic for incremental loads ignoring dependency order in manually triggered runs confusing source staging with final analytical output key takeaway workflow execution is the step where deployed structures are populated with data and processed into usable analytical output."}
,{"id":385512401134,"name":"Consume Data in Data Marts and Semantic Models","type":"subsection","path":"/docs/getting-started/quick-start-guide/consume-data-in-data-marts-and-semantic-models","breadcrumb":"Getting Started › Quick Start Guide › Consume Data in Data Marts and Semantic Models","description":"","searchText":"getting started quick start guide consume data in data marts and semantic models after workflows have been executed, the data warehouse is populated and ready for consumption. the final step is to access the processed data through data marts and semantic models. analyticscreator generates structures that are optimized for analytical consumption. these include dimensional models and semantic layers that can be directly used by reporting and bi tools. purpose provide structured, query-ready data for analytical tools and reporting use cases. design principle analyticscreator separates data processing from data consumption. stg and core layers handle ingestion and transformation dm and semantic models provide consumption-ready structures consumers should not access staging or intermediate layers directly. inputs / outputs inputs processed core structures generated dm layer (facts and dimensions) deployed semantic model outputs queryable data marts semantic models with defined relationships and measures data available for reporting tools (e.g. power bi) internal mechanics 1. dm layer exposure the dm layer contains consumption-ready structures such as fact and dimension tables or views. these are generated based on the core transformations and are optimized for analytical queries. 2. semantic model generation analyticscreator can generate a semantic model that defines: relationships between facts and dimensions measures and calculated fields hierarchies and aggregation logic 3. data access reporting tools connect to the semantic model or directly to the dm layer. typical access patterns include: directquery or import into bi tools connection to tabular models 4. refresh behavior after workflow execution, the semantic model can be refreshed to reflect updated data. this ensures consistency between the data warehouse and reporting layer. types / variants consumption layers dm tables or views tabular models external bi tool connections access patterns direct query on dm layer semantic model (recommended) hybrid approaches example after execution, the following structures are available: dm.factsales dm.dimcustomer dm.dimproduct a semantic model defines relationships between these tables and exposes measures such as: totalsales = sum(factsales.amount) a reporting tool connects to this model and visualizes sales by customer, product, and time. when to use / when not to use use when data warehouse has been executed and populated users require analytical access to data reporting or dashboarding is required do not use lower layers when accessing stg or core directly for reporting building reports on non-finalized structures performance & design considerations dm layer should be optimized for query performance semantic models reduce complexity for end users pre-aggregations can improve performance for large datasets direct access to core can negatively impact performance and consistency design trade-off: direct access offers flexibility semantic models provide consistency and usability integration with other analyticscreator features core transformations: provide input for dm layer deployment: creates semantic models execution: ensures data is up to date macros and transformations: influence calculated fields and measures common pitfalls querying stg or core layers directly ignoring semantic model design missing refresh after data load overloading dm layer with unnecessary complexity key takeaway the dm layer and semantic model provide consumption-ready data for reporting tools and should be the primary access point for analytical workloads."}
,{"id":383225948362,"name":"Understanding AnalyticsCreator","type":"section","path":"/docs/getting-started/understanding-analytics-creator","breadcrumb":"Getting Started › Understanding AnalyticsCreator","description":"","searchText":"getting started understanding analyticscreator analyticscreator is a metadata-driven design application for building and automating data warehouses and analytical models. instead of manually implementing etl and sql logic, developers define metadata such as sources, keys, relationships, transformations, and loading behavior. analyticscreator uses these definitions to generate database objects, pipelines, and semantic models. how analyticscreator works the workflow in analyticscreator starts with a repository, continues with source metadata import, and then uses a wizard to generate a draft data warehouse model. that model is refined, synchronized into sql objects, deployed to the target environment, and finally executed through generated workflows or pipelines. create a repository define or import connectors import source metadata run the data warehouse wizard refine the generated model synchronize the structure deploy artifacts execute workflows consume data through data marts and semantic models repository and metadata every analyticscreator project is based on a repository. the repository is a sql server database that stores the full metadata definition of the data warehouse. this includes connectors, source objects, transformations, keys, relationships, deployment settings, and other object definitions. the repository is the design-time control layer and the source for all generated artifacts. this means the target database is not modeled manually. instead, analyticscreator reads the repository metadata and generates the required sql structures from it. generated code can run independently after deployment because analyticscreator is used as a design-time application, not as a runtime dependency. connectors and metadata import analyticscreator connects to source systems such as sql server or sap and imports structural metadata including tables, columns, keys, and references. in some scenarios, metadata can also be imported through metadata connectors, which makes it possible to model a data warehouse without an active connection to the live source system during design. imported metadata is stored in the repository and later used by the wizard to generate the draft warehouse model. at this stage, no warehouse data has been loaded yet. only structure and metadata are being captured. the wizard the data warehouse wizard is the central acceleration mechanism in analyticscreator. it analyzes source metadata and generates a draft warehouse model automatically. depending on the selected approach, this can be a dimensional model, a data vault model, or a mixed approach. the wizard can create staging structures, historization layers, dimensions, facts, calendar dimensions, and default relationships based on detected metadata. the generated model is not the end result. it is the baseline that developers refine and validate. the main engineering work happens after generation, when keys, joins, historization behavior, measures, and transformations are adjusted to fit the intended warehouse design. warehouse layers analyticscreator supports a layered warehouse architecture from source to presentation. in a typical setup, this includes source objects, staging, persistent staging or historization, core transformations, data marts, and semantic or reporting layers. it can also generate analytical models for tools such as power bi. persistent staging a key architectural concept is the persistent staging layer. source data is first imported into staging structures and then stored persistently for further processing. this persistent layer is used for historization and for decoupling source extraction from downstream transformations. it allows data to be reprocessed without repeatedly reading the source system. in dimensional scenarios, historized tables typically include surrogate keys together with valid-from and valid-to columns. in data vault and hybrid scenarios, additional hash-based keys and references can be generated in the staging layer as persisted calculated columns and then reused in later layers. transformations transformations in analyticscreator are usually generated as sql views based on metadata definitions. these definitions specify source tables, joins, selected columns, macros, and transformation rules. in many cases, the default generated view logic is sufficient as a starting point, but it can be refined through metadata rather than by rewriting generated sql directly. analyticscreator also supports reusable macros for standard sql logic, such as date-to-calendar-key conversion or hash key generation. this allows repeated logic to be defined once and reused consistently across the model. synchronization, deployment, and execution these three steps are related but different and should not be confused. synchronization synchronization materializes the metadata model into sql objects in the target database. this creates the database structure defined in analyticscreator, such as tables, views, and procedures. it does not mean that business data has already been loaded. :contentreference[oaicite:13]{index=13} deployment deployment creates and distributes deployable artifacts for the selected target environment. these can include sql database packages, ssis packages, azure data factory pipelines, and semantic models. deployment prepares the environment but still does not imply that source data has already been processed. execution execution runs the generated workflows and pipelines. this is the step where source data is actually extracted, written to staging, historized where required, transformed into core structures, and exposed through data marts and semantic models. in azure scenarios, this may happen through azure data factory. in on-premise scenarios, this may happen through ssis. consumption after execution, the data warehouse can be consumed through data marts and semantic models. these structures are intended for reporting and analytics, while lower layers such as staging and historization should remain implementation layers rather than direct reporting interfaces. analyticscreator can generate tabular models and structures for tools such as power bi. design implications the repository is the source of truth metadata drives generation, not manual sql-first development the wizard creates a baseline, not a final production model persistent staging is part of the architecture, not just a temporary landing area synchronization, deployment, and execution are separate steps consumption should happen from data marts or semantic models, not from staging layers key takeaway analyticscreator works by storing warehouse definitions as metadata, generating sql and orchestration artifacts from that metadata, and then deploying and executing those artifacts in the target environment."}
,{"id":383225948358,"name":"Installation","type":"section","path":"/docs/getting-started/installation","breadcrumb":"Getting Started › Installation","description":"","searchText":"getting started installation installing analyticscreator: 32-bit and 64-bit versions this guide offers step-by-step instructions for installing either the 32-bit or 64-bit version of analyticscreator, depending on your system requirements. ⓘ note: to ensure optimal performance, verify that your system meets the following prerequisites before installation."}
,{"id":383225948359,"name":"System Requirements","type":"section","path":"/docs/getting-started/system-requirements","breadcrumb":"Getting Started › System Requirements","description":"","searchText":"getting started system requirements to ensure optimal performance, verify that the following requirements are met: ⓘ note: if you already have sql server installed and accessible, you can proceed directly to the launching analyticscreator section. networking: communication over port 443 is where analytics communicates to the analyticscreator server. operating system: windows 10 or later. analyticscreator is compatible with windows operating systems starting from version 10. ⓘ warning: port 443 is the standard https port for secured transactions. it is used for data transfers and ensures that data exchanged between a web browser and websites remains encrypted and protected from unauthorized access. microsoft sql server: sql server on azure virtual machines azure sql managed instances"}
,{"id":383225948360,"name":"Download and Installation","type":"section","path":"/docs/getting-started/download-and-installation","breadcrumb":"Getting Started › Download and Installation","description":"","searchText":"getting started download and installation access the download page navigate to the analyticscreator download page download the installer locate and download the installation file. verify sql server connectivity before proceeding with the installation, confirm that you can connect to your sql server instance. connecting to sql server: to ensure successful connectivity: use sql server management studio (ssms), a tool for managing and configuring sql server. if ssms is not installed on your system, download it from the official microsoft site: download sql server management studio (ssms) install the software once connectivity is confirmed, follow the instructions below to complete the installation."}
,{"id":383225948361,"name":"Configuring AnalyticsCreator","type":"section","path":"/docs/getting-started/configuring-analyticscreator","breadcrumb":"Getting Started › Configuring AnalyticsCreator","description":"","searchText":"getting started configuring analyticscreator this guide will walk you through configuring analyticscreator with your system. provide the login and password that you received by e-mail from analyticscreator minimum requirements configuration settings the configuration of analyticscreator is very simple. the only mandatory configuration is the sql server settings. sql server settings use localdb to store repository: enables you to store the analyticscreator project (metadata only) on your localdb. sql server to store repository: enter the ip address or the name of your microsoft sql server. security integrated: authentication is based on the current windows user. standard: requires a username and password. azure ad: uses azure ad (now microsoft entra) for microsoft sql server authentication. trust server certificate: accepts the server's certificate as trusted. sql user: the sql server username. sql password: the corresponding password. optional requirements paths unc path to store backup: a network path to store project backups. local sql server path to store backup: a local folder to store your project backups. local sql server path to store database: a local folder to store your sql server database backups. repository database template: the alias format for your repositories. default: repo_{reponame}. dwh database template: the alias format for your dwh templates. default: dwh_{reponame}. proxy settings proxy address: the ip address or hostname of your proxy server. proxy port: the port number used by the proxy. proxy user: the username for proxy authentication. proxy password: the password for the proxy user. now you're ready to create your new data warehouse with analyticscreator."}
,
{"id":383461199042,"name":"User Guide","type":"category","path":"/docs/user-guide","breadcrumb":"User Guide","description":"","searchText":"user guide you can launch analyticscreator in two ways: from the desktop icon after installation or streaming setup, a desktop shortcut is created. double-click the icon to start analyticscreator. from the installer window open the downloaded analyticscreator installer. instead of selecting install, click launch (labeled as number one in the image below). a window will appear showing the available analyticscreator servers, which deliver the latest version to your system. this process launches analyticscreator without performing a full installation, assuming all necessary prerequisites are already in place."}
,{"id":383225948364,"name":" Desktop Interface","type":"section","path":"/docs/user-guide/desktop-interface","breadcrumb":"User Guide › Desktop Interface","description":"","searchText":"user guide desktop interface with analyticscreator desktop users can: data warehouse creation automatically generate and structure your data warehouse, including fact tables and dimensions. connectors add connections to various data sources and import metadata seamlessly. layer management define and manage layers such as staging, persisted staging, core, and datamart layers. package generation generate integration packages for ssis (sql server integration services) and adf (azure data factory). indexes and partitions automatically configure indexes and partitions for optimized performance. roles and security manage roles and permissions to ensure secure access to your data. galaxies and hierarchies organize data across galaxies and define hierarchies for better data representation. customizations configure parameters, macros, scripts, and object-specific scripts for tailored solutions. filters and predefined transformations apply advanced filters and transformations for data preparation and enrichment. snapshots and versioning create snapshots to track and manage changes in your data warehouse. deployments deploy your projects with flexible configurations, supporting on-premises and cloud solutions. groups and models organize objects into groups and manage models for streamlined workflows. data historization automate the process of creating historical data models for auditing and analysis."}
,{"id":383225948365,"name":"Working with AnalyticsCreator","type":"section","path":"/docs/user-guide/working-with-analyticscreator","breadcrumb":"User Guide › Working with AnalyticsCreator","description":"","searchText":"user guide working with analyticscreator understanding the fundamental operations in analyticscreator desktop is essential for efficiently managing your data warehouse repository and ensuring accuracy in your projects. below are key basic operations you can perform within the interface: edit mode and saving â data warehouse editor single object editing: in the data warehouse repository, you can edit one object at a time. this ensures precision and reduces the risk of unintended changes across multiple objects. how to edit: double-click on any field within an object to enter edit mode. the selected field becomes editable, allowing you to make modifications. save prompt: if any changes are made, a prompt will appear, reminding you to save your modifications before exiting the edit mode. this safeguard prevents accidental loss of changes. unsaved changes: while edits are immediately reflected in the repository interface, they are not permanently saved until explicitly confirmed by clicking the save button. accessing views in data warehouse explorer layer-specific views: each layer in the data warehouse contains views generated by analyticscreator. these views provide insights into the underlying data structure and transformations applied at that layer. how to access: navigate to the data warehouse explorer and click on the view tab for the desired layer. this displays the layer's contents, including tables, fields, and transformations. adding and deleting objects adding new objects: navigate to the appropriate section (e.g., tables, layers, or connectors) in the navigation tree. right-click and select add [object type] to create a new object. provide the necessary details, such as name, description, and configuration parameters. save the object. deleting objects: select the object in the navigation tree and right-click to choose delete. confirm the deletion when prompted. â ď¸ note: deleting an object may affect dependent objects or configurations. filtering and searching in data warehouse explorer filtering: use filters to narrow down displayed objects by criteria such as name, type, or creation date. searching: enter keywords or phrases in the search bar to quickly locate objects. benefits: these features enhance repository navigation and efficiency when working with large datasets. object dependencies and relationships dependency view: for any selected object, view its dependencies and relationships with other objects by accessing the dependencies tab. impact analysis: analyze how changes to one object might affect other parts of the data warehouse. managing scripts predefined scripts: add scripts for common operations like data transformations or custom sql queries. edit and run: double-click a script in the navigation tree to modify it. use run script to execute and view results. validating and testing changes validation tools: use built-in tools to check for errors or inconsistencies in your repository. evaluate changes: use the evaluate button before saving or deploying to test functionality and ensure correctness. locking and unlocking objects locking: prevent simultaneous edits by locking objects, useful in team environments. unlocking: release locks once edits are complete to allow further modifications by others. exporting and importing data export: export objects, scripts, or configurations for backup or sharing. use the export option in the toolbar or navigation tree. import: import previously exported files to replicate configurations or restore backups. use the import option and follow the prompts to load the data."}
,{"id":383225948366,"name":"Advanced Features","type":"section","path":"/docs/user-guide/advanced-features","breadcrumb":"User Guide › Advanced Features","description":"","searchText":"user guide advanced features analyticscreator provides a rich set of advanced features to help you configure, customize, and optimize your data warehouse projects. these features extend the toolâs capabilities beyond standard operations, enabling more precise control and flexibility. scripts scripts in analyticscreator allow for detailed customization at various stages of data warehouse creation and deployment. they enhance workflow flexibility and enable advanced repository configurations. types of scripts object-specific scripts define custom behavior for individual objects, such as tables or transformations, to meet specific requirements. pre-creation scripts execute tasks prior to creating database objects. example: define sql functions to be used in transformations. pre-deployment scripts configure processes that run before deploying the project. example: validate dependencies or prepare the target environment. post-deployment scripts handle actions executed after deployment is complete. example: perform cleanup tasks or execute stored procedures. pre-workflow scripts manage operations that occur before initiating an etl workflow. example: configure variables or initialize staging environments. repository extension scripts extend repository functionality with user-defined logic. example: add custom behaviors to redefine repository objects. historization the historization features in analyticscreator enable robust tracking and analysis of historical data changes, supporting advanced time-based reporting and auditing. key components slowly changing dimensions (scd) automate the management of changes in dimension data. supports various scd types including: type 1 (overwrite) type 2 (versioning) others as needed time dimensions create and manage temporal structures to facilitate time-based analysis. example: build fiscal calendars or weekly rollups for time-series analytics. snapshots capture and preserve specific states of the data warehouse. use cases include audit trails, historical reporting, and rollback points. parameters and macros these tools provide centralized control and reusable logic to optimize workflows and streamline repetitive tasks. parameters dynamic management: centralize variable definitions for consistent use across scripts, transformations, and workflows. reusable configurations: update values in one place to apply changes globally. use cases: set default values for connection strings, table prefixes, or date ranges. macros reusable logic: create parameterized scripts for tasks repeated across projects or workflows. streamlined processes: use macros to enforce consistent logic in transformations and calculations. example: define a macro to calculate age from a birthdate and reuse it across transformations. summary analyticscreatorâs advanced features offer deep customization options that allow you to: control object-level behavior through scripting track and manage historical data effectively streamline project-wide settings with parameters reuse logic with powerful macros these capabilities enable you to build scalable, maintainable, and highly flexible data warehouse solutions."}
,{"id":383225948367,"name":"Wizards","type":"section","path":"/docs/user-guide/wizards","breadcrumb":"User Guide › Wizards","description":"","searchText":"user guide wizards the wizards in analyticscreator provide a guided and efficient way to perform various tasks related to building and managing a data warehouse. below is an overview of the eight available wizards and their core functions. dwh wizard the dwh wizard is designed to quickly create a semi-ready data warehouse. it is especially useful when the data source contains defined table relationships or manually maintained references. supports multiple architectures: classic (kimball), data vault 1.0 & 2.0, or mixed. automatically creates imports, dimensions, facts, hubs, satellites, and links. customizable field naming, calendar dimensions, and sap deltaq integration. source wizard the source wizard adds new data sources to the repository. supports source types: table or query. retrieves table relationships and sap-specific metadata. allows query testing and schema/table filtering. import wizard the import wizard defines and manages the import of external data into the warehouse. configures source, target schema, table name, and ssis package. allows additional attributes and parameters. historization wizard the historization wizard manages how tables or transformations are historized. supports scd types: 0, 1, and 2. configures empty record behavior and vault id usage. supports ssis-based or stored procedure historization. transformation wizard the transformation wizard creates and manages data transformations. supports regular, manual, script, and external transformation types. handles both historicized and non-historicized data. configures joins, fields, persistence, and metadata settings. calendar transformation wizard the calendar transformation wizard creates calendar transformations used in reporting and time-based models. configures schema, name, start/end dates, and date-to-id macros. assigns transformations to specific data mart stars. time transformation wizard the time transformation wizard creates time dimensions to support time-based analytics. configures schema, name, time period, and time-to-id macros. assigns transformations to specific data mart stars. snapshot transformation wizard the snapshot transformation wizard creates snapshot dimensions for snapshot-based analysis. allows creation of one snapshot dimension per data warehouse. configures schema, name, and data mart star assignment. by using these eight wizards, analyticscreator simplifies complex tasks, ensures consistency, and accelerates the creation and management of enterprise data warehouse solutions."}
,{"id":384157771973,"name":"DWH Wizard ","type":"subsection","path":"/docs/user-guide/wizards/dwh-wizard-function","breadcrumb":"User Guide › Wizards › DWH Wizard ","description":"","searchText":"user guide wizards dwh wizard the dwh wizard allows for the rapid creation of a semi-ready data warehouse. it is especially effective when the data source includes predefined table references or manually maintained source references. prerequisites at least one source connector must be defined before using the dwh wizard. note: the dwh wizard support flat files using duckdb , in that case you should select the option \"use metadata of existing sources\" or use the source wizard instead. to launch the dwh wizard, click the “dwh wizard” button in the toolbar. instead, the user can use the connector context menu: using the dwh wizard select the connector, optionally enter the schema or table filter, and click \"apply\". then, the source tables will be displayed. optionally, select the \"existing sources\" radio button to work with already defined sources instead of querying the external system (ideal for meta connectors). if a table already exists, the \"exist\" checkbox will be selected. to add or remove tables: select them and click the ▶ button to add. select from below and click the ◀ button to remove. dwh wizard architecture options the wizard can generate the dwh using: classic or mixed architecture: supports imports, historization, dimensions, and facts. data vault architecture: supports hubs, satellites, links, dimensions, and facts with automatic classification when “auto” is selected. define name templates for dwh objects: set additional parameters: dwh wizard properties field name appearance: leave unchanged, or convert to upper/lowercase. retrieve relations: enable automatic relation detection from source metadata. create calendar dimension: auto-create calendar dimension and define date range. include tables in facts: include related tables in facts (n:1, indirect, etc.). use calendar in facts: include date-to-calendar references in fact transformations. sap deltaq transfer mode: choose between idoc or trfs. sap deltaq automatic synchronization: enable automatic deltaq sync. sap description language: select sap object description language. datavault2: do not create hubs: optionally suppress hub creation in dv2. historizing type: choose ssis package or stored procedure for historization. use friendly names in transformations as column names: use display names from sap/meta/manual connectors. default transformations: select default predefined transformations for dimensions. stars: assign generated dimensions and facts to data mart stars."}
,{"id":384140346566,"name":"Source Wizard","type":"subsection","path":"/docs/user-guide/wizards/source-wizard","breadcrumb":"User Guide › Wizards › Source Wizard","description":"","searchText":"user guide wizards source wizard the source wizard is used to add new data sources to the repository. to launch the source wizard, right-click on the \"sources\" branch of a connector in the context menu and select \"add source.\" source wizard functionality the appearance and functionality of the source wizard will vary depending on the selected source type (table or query): table: when selecting table as the data source, the wizard provides options to configure and view available tables. configuring a table data source when selecting \"table\" as the data source in the source wizard, click the \"apply\" button to display the list of available source tables. optionally, you can enter a schema or table filter to refine the results. configuration options: retrieve relations: enables the retrieval of relationships for the selected source table, if available. sap description language: specifies the language for object descriptions when working with sap sources. sap deltaq attributes: for sap deltaq sources, additional deltaq-specific attributes must be defined. configuring a query as a data source when selecting \"query\" as the data source in the source wizard, follow these steps: define schema and name: specify the schema and name of the source for the repository. enter the query: provide the query in the query language supported by the data source. test the query: click the “test query” button to verify its validity and ensure it retrieves the expected results. complete the configuration: click the “finish” button to add the new source to the repository. the source definition window will open, allowing further modifications if needed."}
,{"id":384159908072,"name":"Import wizard","type":"subsection","path":"/docs/user-guide/wizards/import-wizard","breadcrumb":"User Guide › Wizards › Import wizard","description":"","searchText":"user guide wizards import wizard to start the import wizard, use the source context menu: import status indicators sources marked with a \"!\" icon indicate that they have not yet been imported. attempting to launch the import wizard on a source that has already been imported will result in an error. typical import wizard window there is a typical import wizard window, as shown in the image below: options: source: the source that should be imported. target schema: the schema of the import table. target name: the name of the import table. package: the name of the ssis package where the import will be done. you can select an existing import package or add a new package name. click finish to proceed. the import definition window will open, allowing the configuration of additional import attributes and parameters, as shown in the image below: post-import actions refer to the \"import package\" description for more details. after creating a new import, refresh the diagram to reflect the changes, as shown in the image below:"}
,{"id":384136118500,"name":"Historization wizard","type":"subsection","path":"/docs/user-guide/wizards/historization-wizard","breadcrumb":"User Guide › Wizards › Historization wizard","description":"","searchText":"user guide wizards historization wizard the historization wizard is used to historicize a table or transformation. to start the historization wizard, use the object context menu: \"add\" → \"historization\" in the diagram, as shown in the image below: alternatively, the object context menu in the navigation tree can be used, as shown in the image below: parameters there is a typical historization wizard window, as shown in the image below: source table: the table that should be historicized. target schema: the schema of the historicized table. target name: the name of the historicized table. package: the name of the ssis package where the historization will be done. you can select an existing historization package or add a new package name. historizing type: you can select between ssis package and stored procedure. scd type: the user can select between different historization types: scd 0, scd 1, and scd 2. empty record behavior: defines what should happen in case of a missing source record. use vault id as pk: if you are using datavault or mixed architecture, the user can use hashkeys instead of business keys to perform historization. after clicking \"finish\", the historization will be generated, and the diagram will be updated automatically. then, the user can select the generated historization package and optionally change some package properties (see \"historizing package\")."}
,{"id":384138863823,"name":"Transformation wizard","type":"subsection","path":"/docs/user-guide/wizards/transformation-wizard","breadcrumb":"User Guide › Wizards › Transformation wizard","description":"","searchText":"user guide wizards transformation wizard the transformation wizard is used to create a new transformation. to start it, use the object context menu and select: \"add → transformation\" in the diagram. typical transformation wizard window supported transformation types regular transformations: described in tabular form, results in a generated view. manual transformations: hand-created views defined manually by the user. script transformations: based on sql scripts, often calling stored procedures. external transformations: created outside analyticscreator as ssis packages. main page parameters type: transformation type: dimension: fullhist, creates unknown member, joinhisttype: actual fact: snapshot, no unknown member, joinhisttype: historical_to other: fullhist, no unknown member, joinhisttype: historical_to manual, external, script: as named schema: schema name name: transformation name historizing type: fullhist snapshothist snapshot actualonly none main table: only for regular transformations create unknown member: adds surrogate id = 0 (for dimensions) persist transformation: save view to a table persist table: name of persist table persist package: ssis package name result table: for external/script types ssis package: for external/script types table selection page allows selection of additional tables. tables must be directly or indirectly related to the main table. parameters table joinhisttype none actual historical_from historical_to full join options: all n:1 direct related all direct related all n:1 related all related use hash keys if available parameter page configure additional parameters (for regular transformations only). fields: none all key fields all fields field names (if duplicated): field[n] table_field field name appearance: no changes upper case lower case key fields null to zero: replaces null with 0 use friendly names as column names stars page stars: data mart stars for the transformation default transformations: no defaults (facts) all defaults (dimensions) selected defaults dependent tables: manage dependent tables script page used for script transformations. enter the sql logic that defines the transformation. insert into imp.lastpayment(businessentityid, ratechangedate, rate) select ph.businessentityid, ph.ratechangedate, ph.rate from ( select businessentityid, max(ratechangedate) lastratechangedate from [imp].[employeepayhistory] group by businessentityid ) t inner join [imp].[employeepayhistory] ph on ph.businessentityid = t.businessentityid and ph.ratechangedate = t.lastratechangedate"}
,{"id":384140346567,"name":"Calendar transformation wizard","type":"subsection","path":"/docs/user-guide/wizards/calendar-transformation-wizard","breadcrumb":"User Guide › Wizards › Calendar transformation wizard","description":"","searchText":"user guide wizards calendar transformation wizard to create a calendar transformation, select \"add → calendar dimension\" from the diagram context menu. as shown in the image below: the calendar transformation wizard will open. typically, only one calendar transformation is required in the data warehouse. as shown in the image below: parameters schema: the schema of the calendar transformation. name: the name of the calendar transformation. date from: the start date for the calendar. date to: the end date for the calendar. date-to-id function: the macro name that transforms a datetime value into the key value for the calendar dimension. this macro is typically used in fact transformations to map datetime fields to calendar dimension members. stars: the data mart stars where the calendar transformation will be included."}
,{"id":384159908073,"name":"Time transformation wizard","type":"subsection","path":"/docs/user-guide/wizards/time-transformation-wizard","breadcrumb":"User Guide › Wizards › Time transformation wizard","description":"","searchText":"user guide wizards time transformation wizard to create a time transformation, select \"add → time dimension\" from the diagram context menu. as shown in the image below: the time transformation wizard will then open, allowing you to configure a new time transformation: parameters schema the schema in which the time transformation resides. name the name assigned to the time transformation. period (minutes) the interval (in minutes) used to generate time dimension records. time-to-id function the macro function that converts a datetime value into the key value for the time dimension. use case: convert datetime fields in fact transformations into time dimension members. stars the data mart stars where the time transformation will be included."}
,{"id":384138863824,"name":"Snapshot transformation wizard","type":"subsection","path":"/docs/user-guide/wizards/snapshot-transformation-wizard","breadcrumb":"User Guide › Wizards › Snapshot transformation wizard","description":"","searchText":"user guide wizards snapshot transformation wizard to create a snapshot transformation, select \"add → snapshot dimension\" from the diagram context menu. this will open the snapshot transformation wizard. ⚠️ note: only one snapshot dimension can exist in the data warehouse. as shown in the image below: parameters schema the schema in which the snapshot transformation resides. name the name assigned to the snapshot transformation. stars the data mart stars where this snapshot transformation will be included."}
,{"id":384157771974,"name":"Persisting wizard","type":"subsection","path":"/docs/user-guide/wizards/persisting-wizard","breadcrumb":"User Guide › Wizards › Persisting wizard","description":"","searchText":"user guide wizards persisting wizard the content of any regular or manual transformation can be stored in a table, typically to improve access speed for complex transformations. persisting the transformation is managed through an ssis package. to persist a transformation, the user should select \"add → persisting\" from the object context menu in the diagram. as shown in the image below: persisting wizard options as shown in the image below: transformation: the name of the transformation to persist. persist table: the name of the table where the transformation will be persisted. this table will be created in the same schema as the transformation. persist package: the name of the ssis package that manages the persistence process."}
,
{"id":383461199043,"name":"Reference","type":"category","path":"/docs/reference","breadcrumb":"Reference","description":"","searchText":"reference structured reference for the analyticscreator user interface, entities, types, and parameters. this reference guide is organized into sections and subsections to help you quickly find interface elements, object types, dialogs, wizards, and configuration details in analyticscreator. sections [link:365118109942|user interface] toolbar, navigation tree, dataflow diagram, pages, lists, dialogs, and wizards. [link:365178121463|entity types] connector types, source types, table types, transformation types, package types, and more. [link:365178123475|entities] reference pages for main analyticscreator object classes such as layers, sources, tables, and packages. [link:365178123499|parameters] system and project parameters including technical and environment-related settings."}
,{"id":383461259458,"name":"User Interface","type":"section","path":"/docs/reference/user-interface","breadcrumb":"Reference › User Interface","description":"","searchText":"reference user interface the analyticscreator user interface is designed to support structured, metadata-driven development of data products. it provides a clear separation between modeling, configuration, and generation activities, enabling users to navigate complex data solutions efficiently. the interface is organized into multiple functional areas that work together: navigation & repository structure provides access to repositories, object groups, and individual objects. it reflects the logical organization of the data solution and supports collaboration across teams. design & modeling area the central workspace where users define sources, transformations, and data products. this includes visual representations of data flows and dependencies, supporting transparency and impact analysis. properties & configuration panels context-sensitive panels that allow detailed configuration of selected objects, including technical settings, mappings, and behavior definitions. toolbar offers quick access to key actions such as synchronization, validation, and deployment, enabling an efficient workflow from design to delivery. lineage & dependency visualization displays relationships between objects and data flows. users can explore upstream and downstream dependencies to understand the impact of changes. the interface follows a metadata-driven approach: users define logic and structure once, and analyticscreator generates the corresponding technical artifacts. this ensures consistency, traceability, and efficient lifecycle management across environments."}
,{"id":383509396676,"name":"Toolbar","type":"subsection","path":"/docs/reference/user-interface/toolbar","breadcrumb":"Reference › User Interface › Toolbar","description":"","searchText":"reference user interface toolbar the toolbar provides access to the main functional areas of analyticscreator. it is organized into logical sections that group related actions and configuration options, supporting a structured and efficient workflow. each section focuses on a specific aspect of the data product lifecycle: file contains general application actions such as creating or opening repositories, saving changes, and managing overall workspace settings. sources used to define and manage source systems. this includes configuring connections, importing metadata, and maintaining source structures. dwh focuses on data warehouse modeling. users define transformations, historization, and core structures required for building integrated data models. data mart supports the creation of data products for analytical consumption. this includes defining business-oriented models and preparing data for reporting tools. etl provides access to generation and configuration of data movement and transformation processes, such as pipelines or integration workflows. deployment controls the generation and deployment of artifacts to target environments. this includes creating deployment packages and executing releases. options contains configuration settings for the application, repositories, and environment-specific behavior. help provides access to documentation, guidance, and additional support resources. the toolbar structure reflects the typical workflow in analyticscreator—from source definition to modeling, transformation, and deployment—ensuring a clear and guided development process"}
,{"id":379959516358,"name":"File","type":"topic","path":"/docs/reference/user-interface/toolbar/file","breadcrumb":"Reference › User Interface › Toolbar › File","description":"","searchText":"reference user interface toolbar file the file menu contains commands for creating, connecting, and maintaining repositories in analyticscreator. from here, you can start new projects, connect to existing repositories, synchronize metadata, and back up or restore configurations. it's the primary place to manage the setup and ongoing maintenance of your warehouse models. [link:380044750015|dwh wizard] rapidly creates a semi-ready warehouse, ideal when sources include predefined or curated table references. [link:384157770997|sync dwh] synchronizes the warehouse with metadata and source changes to keep structures current. [link:373340595406|new] creates a new repository configuration for metadata and model definitions. [link:373340595407|connect] connects to an existing repository database to reuse or update metadata. [link:373340595408|backup & restore — load from file] imports repository data or metadata from a local file. [link:373340595408| backup & restore — save to file] saves the current repository or project metadata to a portable file. [link:373340595408|backup & restore — load from cloud] restores repository data directly from cloud storage. [link:373340595408|backup & restore — save to cloud] backs up the repository or metadata to connected cloud storage. find on diagram highlights specific tables, columns, or objects within the modeling diagram."}
,{"id":380042415310,"name":"Sources","type":"topic","path":"/docs/reference/user-interface/toolbar/sources","breadcrumb":"Reference › User Interface › Toolbar › Sources","description":"","searchText":"reference user interface toolbar sources sources the sources menu is where you configure data connectivity. add new connectors, manage connected systems (databases and files), and maintain reference tables used across models. icon feature description connectors lists and manages available connectors for different data sources. sources displays and manages connected source systems (databases and flat files). references manages reference tables for lookups, hierarchies, or static mappings. new connector adds a new data source connector (select type and authentication). new connector imports connector definitions from a previously exported file. new connector imports connector settings directly from cloud storage or a repository."}
,{"id":380044750015,"name":"DWH","type":"topic","path":"/docs/reference/user-interface/toolbar/dwh","breadcrumb":"Reference › User Interface › Toolbar › DWH","description":"","searchText":"reference user interface toolbar dwh dwh the dwh menu focuses on warehouse modeling. define layers and schemas, configure tables and indexes, and manage reusable assets such as references, macros, predefined transformations, and snapshots. icon feature description layers configure warehouse layers and their responsibilities. schemas list and manage schemas within the warehouse model. tables display and configure fact and dimension tables. indexes list and configure indexes to optimize query performance. references manage reference tables for lookups, hierarchies, or static mappings. macros create and manage reusable macro actions. predefined transformations library of ready-to-use transformations for common patterns. snapshots define snapshot structures to capture point-in-time states."}
,{"id":380044818681,"name":"Data mart","type":"topic","path":"/docs/reference/user-interface/toolbar/data-mart","breadcrumb":"Reference › User Interface › Toolbar › Data mart","description":"","searchText":"reference user interface toolbar data mart data mart the data products menu models analytical products for bi consumption. organize related stars into galaxies, define star schemas, manage hierarchies and roles, and configure partitions and semantic models. icon feature description galaxies organize related star schemas into a galaxy for analytical grouping. stars define star schemas containing facts and dimensions. hierarchies manage hierarchical structures (e.g., year â quarter â month). roles define user roles and access permissions for data products. partitions configure table partitions for scale and performance. models define semantic models built on top of the warehouse for bi tools."}
,{"id":380044750017,"name":"ETL","type":"topic","path":"/docs/reference/user-interface/toolbar/etl","breadcrumb":"Reference › User Interface › Toolbar › ETL","description":"","searchText":"reference user interface toolbar etl etl the etl menu contains development assets for extraction, transformation, and loading. group work into packages, write scripts, manage imports, and handle historization scenarios with reusable transformations and generated dimensions. icon feature description packages list etl packages that group transformations and workflows. scripts contain sql or script-based transformations for etl. imports manage import processes from external sources into the warehouse. historizations handle slowly changing dimensions and historical data tracking. transformations define transformation logic for staging and warehouse layers. new transformations launch transformation wizard calendar dimension generates a reusable calendar dimension (year, month, day, etc.). time dimension creates a detailed time dimension (hours, minutes, seconds). snapshot dimension creates snapshot dimensions to capture point-in-time records."}
,{"id":380044819646,"name":"Deployment","type":"topic","path":"/docs/reference/user-interface/toolbar/deployment","breadcrumb":"Reference › User Interface › Toolbar › Deployment","description":"","searchText":"reference user interface toolbar deployment deployment the deployment menu packages your modeled assets for delivery to target environments. use it to build and export deployment artifacts for your warehouse or data products. icon feature description deployment package build and export deployment packages for the warehouse or data products."}
,{"id":380044819647,"name":"Options","type":"topic","path":"/docs/reference/user-interface/toolbar/options","breadcrumb":"Reference › User Interface › Toolbar › Options","description":"","searchText":"reference user interface toolbar options options the options menu centralizes application-wide settings. configure user groups, warehouse defaults, interface preferences, global parameters, and encrypted values used throughout projects. icon feature description user groups manage user groups and access levels. dwh settings configure global warehouse settings such as naming and storage rules. interface customize interface preferences and appearance. parameter define global and local parameters for etl and modeling. encrypted strings manage encrypted connection strings and sensitive values."}
,{"id":380044750021,"name":"Help","type":"topic","path":"/docs/reference/user-interface/toolbar/help","breadcrumb":"Reference › User Interface › Toolbar › Help","description":"","searchText":"reference user interface toolbar help help the help menu provides export tools and links to external resources. generate documentation, open knowledge resources, and review legal and product information. icon feature description export to visio export diagrams to microsoft visio for documentation. export in word export documentation directly to a microsoft word file. wikipedia open a relevant wikipedia article for reference. videos links to instructional or demo videos. community links to the user community or forums. version history show version history and change logs. eula display the end user license agreement. about show software version, credits, and licensing information."}
,{"id":383509396677,"name":"Navigation tree","type":"subsection","path":"/docs/reference/user-interface/navigation-tree","breadcrumb":"Reference › User Interface › Navigation tree","description":"","searchText":"reference user interface navigation tree the navigation tree provides a structured view of all elements within the current repository. it serves as the primary entry point for accessing and organizing objects in analyticscreator. objects are arranged hierarchically, typically grouped by layers and functional areas such as sources, data warehouse structures, and data products. this structure reflects the logical design of the data solution and supports clear separation of concerns. key capabilities of the navigation tree include: hierarchical organization displays objects in a tree structure, allowing users to expand and collapse nodes to navigate complex models efficiently. object access and selection enables quick access to all objects, including sources, transformations, and data products. selecting an object updates the central workspace and configuration panels. object grouping supports logical grouping of objects (e.g., via object groups), helping teams organize large projects and maintain clarity. context actions right-click options allow users to create, modify, or manage objects directly within the tree. visual indicators icons and markers provide additional information about object types, states, or dependencies. the navigation tree acts as the backbone of the user interface, enabling users to efficiently navigate, manage, and maintain all components of their data solution."}
,{"id":380121766108,"name":"Connectors","type":"topic","path":"/docs/reference/user-interface/navigation-tree/connectors","breadcrumb":"Reference › User Interface › Navigation tree › Connectors","description":"","searchText":"reference user interface navigation tree connectors reference page for defining and maintaining source system connectors in analyticscreator. overview the connectors menu in analyticscreator defines metadata for establishing a connection to a source system. each connector includes a name, a source type, and a connection string. these connections are used in etl packages to access external data sources during data warehouse generation. function connectors allow analyticscreator to integrate with relational databases and other supported systems. the connection string is stored in the project metadata and referenced during package execution. each connector is project-specific and can be reused across multiple packages or layers. access connectors are managed under the sources section in the analyticscreator user interface. all defined connectors are listed in a searchable grid, and new entries can be created or deleted from this screen. selecting new opens a connector definition form with metadata fields and a connection string editor. how to access navigation tree connectors → connector → edit connector; connectors → add connector toolbar sources → add diagram not applicable visual element {searchconnectors} → connector → double-click screen overview the first image below shows the main connectors interface. the second shows the editor that appears when a new connector is created. list connectors id property description 1 connectorname logical name identifying the connector within the project 2 connectortype type of source system (e.g., mssql, oracle, etc.) 3 connectionstring ole db or equivalent connection string used to connect to the source system new connector dialog id property description 1 connectorname logical name identifying the connector within the project. 2 connectortype type of source system, for example mssql, oracle, or another supported connector type. 3 azure source type type of azure source, for example azure sql, azure postgres, or another supported azure source type. 4 connectionstring ole db or equivalent connection string used to connect to the source system. 5 cfg.ssis controls whether the connection string should not be stored in cfg.ssis_configurations. related topics [link:#|source] [link:#|connector types] [link:#|refresh source metadata] [link:#|create source]"}
,{"id":380121766109,"name":"Layers","type":"topic","path":"/docs/reference/user-interface/navigation-tree/layers","breadcrumb":"Reference › User Interface › Navigation tree › Layers","description":"","searchText":"reference user interface navigation tree layers reference page for defining and maintaining logical layers in analyticscreator. overview the layers feature in analyticscreator defines the logical and sequential structure in which metadata objects are grouped and generated. each object in a project is assigned to a layer, which determines its build order and visibility during solution generation. function layers represent vertical slices in a project's architecture, such as source, staging, persisted staging, transformation, data warehouse - core, or datamart. one layer can have one or more schemas associated with it. they are used to control: object assignment and isolation layers define where objects belong and keep architectural responsibilities clearly separated. deployment sequencing layers control the order in which structures are generated and deployed across environments. selective generation specific parts of the solution can be included or excluded based on layer configuration. dependency resolution layer order influences build-time logic and helps resolve dependencies between generated objects. layer configuration impacts how analyticscreator generates the sql database schema, azure data factory pipelines, and semantic models. access layers are accessible from the dwh section. a dedicated layers panel displays all defined layers, their order, and their assignment status. how to access navigation tree layers → layer → edit layer toolbar dwh → layers diagram not applicable visual element not applicable screen overview the image below shows the list layers interface with columns labeled for easy identification. id property description 1 name name of the layer used to identify it within the project structure. 2 seqnr defines the sequence number of the layer and controls its display order in the lineage. 3 description optional field used to provide a more detailed description of the layer. behavior execution order layers are executed in the defined top-down order. generation scope disabling a layer excludes its objects from generation. object assignment each object must belong to one and only one layer. build influence layers influence sql build context and pipeline generation. usage context layers are typically aligned with logical data architecture phases. common usage includes separating ingestion, transformation, modeling, and reporting responsibilities. notes layer configurations are stored within the project metadata. changes to layer order or status require regeneration of the solution. layer visibility and behavior apply across all deployment targets. related topics [link:#|schema] [link:#|table] [link:#|transformation] [link:#|predefined transformations]"}
,{"id":380121766110,"name":"Packages","type":"topic","path":"/docs/reference/user-interface/navigation-tree/packages","breadcrumb":"Reference › User Interface › Navigation tree › Packages","description":"","searchText":"reference user interface navigation tree packages "}
,{"id":380121766111,"name":"Indexes","type":"topic","path":"/docs/reference/user-interface/navigation-tree/indexes","breadcrumb":"Reference › User Interface › Navigation tree › Indexes","description":"","searchText":"reference user interface navigation tree indexes "}
,{"id":380121767100,"name":"Roles","type":"topic","path":"/docs/reference/user-interface/navigation-tree/roles","breadcrumb":"Reference › User Interface › Navigation tree › Roles","description":"","searchText":"reference user interface navigation tree roles "}
,{"id":380121783543,"name":"Galaxies","type":"topic","path":"/docs/reference/user-interface/navigation-tree/galaxies","breadcrumb":"Reference › User Interface › Navigation tree › Galaxies","description":"","searchText":"reference user interface navigation tree galaxies "}
,{"id":380121783544,"name":"Hierarchies","type":"topic","path":"/docs/reference/user-interface/navigation-tree/hierarchies","breadcrumb":"Reference › User Interface › Navigation tree › Hierarchies","description":"","searchText":"reference user interface navigation tree hierarchies "}
,{"id":380121784533,"name":"Partitions","type":"topic","path":"/docs/reference/user-interface/navigation-tree/partitions","breadcrumb":"Reference › User Interface › Navigation tree › Partitions","description":"","searchText":"reference user interface navigation tree partitions "}
,{"id":380121767101,"name":"Parameters","type":"topic","path":"/docs/reference/user-interface/navigation-tree/parameters","breadcrumb":"Reference › User Interface › Navigation tree › Parameters","description":"","searchText":"reference user interface navigation tree parameters "}
,{"id":380121767102,"name":"Macros","type":"topic","path":"/docs/reference/user-interface/navigation-tree/macros","breadcrumb":"Reference › User Interface › Navigation tree › Macros","description":"","searchText":"reference user interface navigation tree macros "}
,{"id":380121784534,"name":"Object scripts","type":"topic","path":"/docs/reference/user-interface/navigation-tree/object-scripts","breadcrumb":"Reference › User Interface › Navigation tree › Object scripts","description":"","searchText":"reference user interface navigation tree object scripts "}
,{"id":380121784535,"name":"Filters","type":"topic","path":"/docs/reference/user-interface/navigation-tree/filters","breadcrumb":"Reference › User Interface › Navigation tree › Filters","description":"","searchText":"reference user interface navigation tree filters "}
,{"id":380121767103,"name":"Predefined transformations","type":"topic","path":"/docs/reference/user-interface/navigation-tree/predefined-transformations","breadcrumb":"Reference › User Interface › Navigation tree › Predefined transformations","description":"","searchText":"reference user interface navigation tree predefined transformations "}
,{"id":380121767104,"name":"Snapshots","type":"topic","path":"/docs/reference/user-interface/navigation-tree/snapshots","breadcrumb":"Reference › User Interface › Navigation tree › Snapshots","description":"","searchText":"reference user interface navigation tree snapshots "}
,{"id":380121767105,"name":"Deployments","type":"topic","path":"/docs/reference/user-interface/navigation-tree/deployments","breadcrumb":"Reference › User Interface › Navigation tree › Deployments","description":"","searchText":"reference user interface navigation tree deployments "}
,{"id":380121767106,"name":"Groups","type":"topic","path":"/docs/reference/user-interface/navigation-tree/groups","breadcrumb":"Reference › User Interface › Navigation tree › Groups","description":"","searchText":"reference user interface navigation tree groups "}
,{"id":380121784536,"name":"Models","type":"topic","path":"/docs/reference/user-interface/navigation-tree/models","breadcrumb":"Reference › User Interface › Navigation tree › Models","description":"","searchText":"reference user interface navigation tree models "}
,{"id":383509174508,"name":"Dataflow diagram","type":"subsection","path":"/docs/reference/user-interface/dataflow-diagram","breadcrumb":"Reference › User Interface › Dataflow diagram","description":"","searchText":"reference user interface dataflow diagram the dataflow diagram provides a visual representation of data movement and transformation within analyticscreator. it allows users to understand how data flows from sources through the data warehouse to data products. objects are displayed as connected elements, showing dependencies and execution order across the data pipeline. this visual approach supports both development and analysis by making relationships between components immediately visible. key capabilities of the dataflow diagram include: end-to-end data flow visibility displays the flow of data from source systems through transformations to final data products. dependency visualization clearly shows upstream and downstream relationships between objects, enabling impact analysis when making changes. interactive navigation users can select elements within the diagram to access detailed configurations and related objects. logical structuring organizes transformations and processing steps in a way that reflects execution logic and data dependencies. visual clarity for complex models helps users understand large and complex data solutions by providing an intuitive graphical overview. the dataflow diagram enhances transparency and control by making data dependencies explicit, supporting both development efficiency and governance."}
,{"id":383509174509,"name":"Pages","type":"subsection","path":"/docs/reference/user-interface/pages","breadcrumb":"Reference › User Interface › Pages","description":"","searchText":"reference user interface pages pages"}
,{"id":383509396683,"name":"Lists","type":"subsection","path":"/docs/reference/user-interface/lists","breadcrumb":"Reference › User Interface › Lists","description":"","searchText":"reference user interface lists lists"}
,{"id":383509396684,"name":"Dialogs","type":"subsection","path":"/docs/reference/user-interface/dialogs","breadcrumb":"Reference › User Interface › Dialogs","description":"","searchText":"reference user interface dialogs lists"}
,{"id":383509340360,"name":"Wizards","type":"subsection","path":"/docs/reference/user-interface/wizards","breadcrumb":"Reference › User Interface › Wizards","description":"","searchText":"reference user interface wizards dwh wizard export wizard export to visio export to word historization wizard import wizard new calendar transformation new snapshot dimension new time transformation persisting wizard run object script source wizard transformation wizard vault wizard"}
,{"id":384138863822,"name":"ETL","type":"subsection","path":"/docs/reference/user-interface/etl-menu","breadcrumb":"Reference › User Interface › ETL","description":"","searchText":"reference user interface etl etl the etl menu contains development assets for extraction, transformation, and loading. group work into packages, write scripts, manage imports, and handle historization scenarios with reusable transformations and generated dimensions. icon feature description packages list etl packages that group transformations and workflows. scripts contain sql or script-based transformations for etl. imports manage import processes from external sources into the warehouse. historizations handle slowly changing dimensions and historical data tracking. transformations define transformation logic for staging and warehouse layers. transform. transformation wizard new transformation — calendar dimension generates a reusable calendar dimension (year, month, day, etc.). new transformation — time dimension creates a detailed time dimension (hours, minutes, seconds). new transformation — snapshot dimension creates snapshot dimensions to capture point-in-time records.a"}
,{"id":383461259455,"name":"Entity types","type":"section","path":"/docs/reference/entity-types","breadcrumb":"Reference › Entity types","description":"","searchText":"reference entity types entity types"}
,{"id":383509396685,"name":"Connector types","type":"subsection","path":"/docs/reference/entity-types/connector-types","breadcrumb":"Reference › Entity types › Connector types","description":"","searchText":"reference entity types connector types connector types"}
,{"id":383509396687,"name":"Source types","type":"subsection","path":"/docs/reference/entity-types/source-types","breadcrumb":"Reference › Entity types › Source types","description":"","searchText":"reference entity types source types source types"}
,{"id":383509396688,"name":"Table types","type":"subsection","path":"/docs/reference/entity-types/table-types","breadcrumb":"Reference › Entity types › Table types","description":"","searchText":"reference entity types table types table types"}
,{"id":383509396689,"name":"Transformation types","type":"subsection","path":"/docs/reference/entity-types/transformation-types","breadcrumb":"Reference › Entity types › Transformation types","description":"","searchText":"reference entity types transformation types transformation types"}
,{"id":383509340361,"name":"Transformation historization types","type":"subsection","path":"/docs/reference/entity-types/transformation-historization-types","breadcrumb":"Reference › Entity types › Transformation historization types","description":"","searchText":"reference entity types transformation historization types transformation historization types"}
,{"id":383509340362,"name":"Join historization types","type":"subsection","path":"/docs/reference/entity-types/join-historization-types","breadcrumb":"Reference › Entity types › Join historization types","description":"","searchText":"reference entity types join historization types join historization types"}
,{"id":383509340363,"name":"Package types","type":"subsection","path":"/docs/reference/entity-types/package-types","breadcrumb":"Reference › Entity types › Package types","description":"","searchText":"reference entity types package types package types"}
,{"id":383509396690,"name":"SQL Script types","type":"subsection","path":"/docs/reference/entity-types/sql-script-types","breadcrumb":"Reference › Entity types › SQL Script types","description":"","searchText":"reference entity types sql script types sql script types"}
,{"id":383509340364,"name":"Schema types","type":"subsection","path":"/docs/reference/entity-types/schema-types","breadcrumb":"Reference › Entity types › Schema types","description":"","searchText":"reference entity types schema types schema types"}
,{"id":383461259456,"name":"Entities ","type":"section","path":"/docs/reference/entities","breadcrumb":"Reference › Entities ","description":"","searchText":"reference entities entities"}
,{"id":383509340365,"name":"Layer","type":"subsection","path":"/docs/reference/entities/layer","breadcrumb":"Reference › Entities › Layer","description":"","searchText":"reference entities layer layer"}
,{"id":383509340366,"name":"Schema","type":"subsection","path":"/docs/reference/entities/schema","breadcrumb":"Reference › Entities › Schema","description":"","searchText":"reference entities schema schema"}
,{"id":383509396692,"name":"Connector","type":"subsection","path":"/docs/reference/entities/connector","breadcrumb":"Reference › Entities › Connector","description":"","searchText":"reference entities connector connector"}
,{"id":383509340368,"name":"Source","type":"subsection","path":"/docs/reference/entities/source","breadcrumb":"Reference › Entities › Source","description":"","searchText":"reference entities source source"}
,{"id":383509340369,"name":"Table","type":"subsection","path":"/docs/reference/entities/table","breadcrumb":"Reference › Entities › Table","description":"","searchText":"reference entities table table"}
,{"id":383509396693,"name":"Transformation","type":"subsection","path":"/docs/reference/entities/transformation","breadcrumb":"Reference › Entities › Transformation","description":"","searchText":"reference entities transformation transformation"}
,{"id":383509396694,"name":"Package","type":"subsection","path":"/docs/reference/entities/package","breadcrumb":"Reference › Entities › Package","description":"","searchText":"reference entities package package"}
,{"id":383509340370,"name":"Index","type":"subsection","path":"/docs/reference/entities/index","breadcrumb":"Reference › Entities › Index","description":"","searchText":"reference entities index index"}
,{"id":383509396695,"name":"Partition","type":"subsection","path":"/docs/reference/entities/partitionpartition","breadcrumb":"Reference › Entities › Partition","description":"","searchText":"reference entities partition partition"}
,{"id":383509396696,"name":"Hierarchy","type":"subsection","path":"/docs/reference/entities/hierarchy","breadcrumb":"Reference › Entities › Hierarchy","description":"","searchText":"reference entities hierarchy hierarchy"}
,{"id":383509340372,"name":"Macro","type":"subsection","path":"/docs/reference/entities/macro","breadcrumb":"Reference › Entities › Macro","description":"","searchText":"reference entities macro macro"}
,{"id":383509340373,"name":"SQL Script","type":"subsection","path":"/docs/reference/entities/sql-script","breadcrumb":"Reference › Entities › SQL Script","description":"","searchText":"reference entities sql script macro"}
,{"id":383509340375,"name":"Object script","type":"subsection","path":"/docs/reference/entities/object-script","breadcrumb":"Reference › Entities › Object script","description":"","searchText":"reference entities object script object script"}
,{"id":383509396699,"name":"Deployment","type":"subsection","path":"/docs/reference/entities/deployment","breadcrumb":"Reference › Entities › Deployment","description":"","searchText":"reference entities deployment deployment"}
,{"id":383509396700,"name":"Object group","type":"subsection","path":"/docs/reference/entities/object-group","breadcrumb":"Reference › Entities › Object group","description":"","searchText":"reference entities object group object group"}
,{"id":383509396701,"name":"Filter","type":"subsection","path":"/docs/reference/entities/filter","breadcrumb":"Reference › Entities › Filter","description":"","searchText":"reference entities filter filter"}
,{"id":383509396702,"name":"Model","type":"subsection","path":"/docs/reference/entities/model","breadcrumb":"Reference › Entities › Model","description":"","searchText":"reference entities model model"}
,{"id":383461259457,"name":"Parameters ","type":"section","path":"/docs/reference/parameters","breadcrumb":"Reference › Parameters ","description":"","searchText":"reference parameters parameters"}
,{"id":383509340376,"name":"Other parameters","type":"subsection","path":"/docs/reference/parameters/other-parameters","breadcrumb":"Reference › Parameters › Other parameters","description":"","searchText":"reference parameters other parameters other parameters"}
,
{"id":383461199045,"name":"Tutorials","type":"category","path":"/docs/tutorials","breadcrumb":"Tutorials","description":"","searchText":"tutorials to become familiar with analyticscreator, we have made certain data sets available. you may use these to test analyticscreator: click here for the northwind data warehouse"}
,{"id":383225948382,"name":"Northwind DWH Walkthrough","type":"section","path":"/docs/tutorials/northwind-dwh-walkthrough","breadcrumb":"Tutorials › Northwind DWH Walkthrough","description":"","searchText":"tutorials northwind dwh walkthrough step-by-step: sql server northwind project create your first data warehouse with analyticscreator analyticscreator offers pre-configured demos for testing within your environment. this guide outlines the steps to transition from the northwind oltp database to the northwind data warehouse model. once completed, you will have a fully generated dwh project ready to run locally. load the demo project from the file menu, select load from cloud. choose nw_demo enter a name for your new repository (default: nw_demo) note: this repository contains metadata onlyâno data is moved. analyticscreator will automatically generate all required project parameters. project structure: the 5-layer model analyticscreator will generate a data warehouse project with five layers: sources â raw data from the source system (northwind oltp). staging layer â temporary storage for data cleansing and preparation. persisted staging layer â permanent storage of cleaned data for historization. core layer â integrated business modelâstructured and optimized for querying. datamart layer â optimized for reportingâorganized by business topic (e.g., sales, inventory). northwind setup (if not already installed) step 1: check if the northwind database exists open sql server management studio (ssms) and verify that the northwind database is present. if yes, skip to the next section. if not, proceed to step 2. step 2: create the northwind database run the setup script from microsoft: đľ download script or copy-paste it into ssms and execute. step 3: verify database use northwind; go select * from information_schema.tables where table_schema = 'dbo' and table_type = 'base table'; once confirmed, you can proceed with the next steps to configure the analyticscreator connector with your northwind database. note: analyticscreator uses only native microsoft connectors, and we do not store any personal information. step 4: change database connector navigate to sources > connectors. you will notice that a connector is already configured. for educational purposes, the connection string is not encrypted yet. to edit or add a new connection string, go to options > encrypted strings > add. paste your connection string as demonstrated in the video below. after adding the new connection string, it's time to test your connection. go to sources â connectors and press the test button to verify your connection. step 5: create a new deployment in this step, you'll configure and deploy your project to the desired destination. please note that only the metadata will be deployed; there will be no data movement or copy during this process. navigate to deployments in the menu and create a new deployment. assign a name to your deployment. configure the connection for the destination set the project path where the deployment will be saved. select the packages you want to generate. review the connection variables and click deploy to initiate the process. finally, click deploy to complete the deployment. in this step, your initial data warehouse project is created. note that only the metadataâthe structure of your projectâis generated at this stage. you can choose between two options for package generation: ssis (sql server integration services) adf (azure data factory) ssis follows a traditional etl tool architecture, making it a suitable choice for on-premises data warehouse architectures. in contrast, adf is designed with a modern cloud-native architecture, enabling seamless integration with various cloud services and big data systems. this architectural distinction makes adf a better fit for evolving data integration needs in cloud-based environments. to execute your package and move your data, you will still need an integration runtime (ir). keep in mind that analyticscreator only generates the project at the metadata level and does not access your data outside the analyticscreator interface. it does not link your data to us, ensuring that your data remains secure in its original location. for testing purposes, you can run your package in microsoft visual studio 2022, on your local sql server, or even in azure data factory."}
,
{"id":383461199046,"name":"Functions","type":"category","path":"/docs/functions-features","breadcrumb":"Functions","description":"","searchText":"functions get started by clicking on one of these sections: main functionality gui process support data sources export functionality use of analytics frontends"}
,{"id":383225948376,"name":"Main Functionality","type":"section","path":"/docs/functions-features/main-functionality","breadcrumb":"Functions › Main Functionality","description":"","searchText":"functions main functionality full bi-stack automation: from source to data warehouse through to frontend. holistic data model: complete view of the entire data model. this also allows for rapid prototyping of various models. data warehouses: ms sql server 2012-2022, azure sql database, azure synapse analytics dedicated, azure sql managed instance, sql server on azure vms, ms fabric sql. analytical databases: ssas tabular databases, ssas multidimensional databases, azure synapse analytics dedicated, power bi, power bi premium, duck db, tableau, and qlik sense. data lakes: ms azure blob storage, onelake. frontends: power bi, qlik sense, tableau, powerpivot (excel). pipelines/etl: sql server integration packages (ssis), azure data factory 2.0 pipelines, azure data bricks, fabric data factory. azure: azure sql server, azure data factory pipelines. deployment: visual studio solution (ssdt), creation of dacpac files, ssis packages, data factory arm templates, xmla files. modelling approaches: top-down modelling, bottom-up modelling, import from external modelling tool, dimensional/kimball, data vault 2.0, mixed approach of dv 2.0 and kimball (a combination the best of both worlds by using elements of both data vault 2.0 and kimball modelling), inmon, 3nf, or any custom data model. the analyticscreator wizard can help you create a data vault model automatically and also supports strict dan linstead techniques and data vaults. historization approaches: slowly changing dimensions (scd) type 0, type 1, type 2, mixed, snapshot historization, gapless historization, change-based calculations. surrogate key: auto-increment, long integer, hash key, custom definition of hash algorithm."}
,{"id":383225948377,"name":"GUI","type":"section","path":"/docs/functions-features/gui","breadcrumb":"Functions › GUI","description":"","searchText":"functions gui windows gui embedded version control multi-user development supporting distributed development manual object locking possible predefined templates cloud-based repository cloud service support available data lineage macro language for more flexible development predefined, datatype-based transformations calculated columns in each dwh table single point development: the whole design is possible in analyticscreator. external development not necessary embedding external code automatic documentation in word and visio export to microsoft devops, github, .. analyticscreator repository is stored in a ms sql server and can be modified and extended with additional functionality"}
,{"id":383225948378,"name":"Process support","type":"section","path":"/docs/functions-features/process-support","breadcrumb":"Functions › Process support","description":"","searchText":"functions process support etl procedure protocol error handling on etl procedures consistency on etl failure rollback on etl procedures automatic recognition of source structure changes and automatic adaptation of connected dwh entire dwh life-cycle support delta and full load of data models near real-time data loads possible external orchestration/scheduling for etl process internal orchestration/scheduling for etl process with generated ms-ssis packages several workflow configurations no is necessary runtime for analyticscreator daily processing of created dhws are run without analyticscreator no additional licences necessary for design component no ms sql server necessary"}
,{"id":383225948379,"name":"Data Sources","type":"section","path":"/docs/functions-features/data-sources","breadcrumb":"Functions › Data Sources","description":"","searchText":"functions data sources build-in connectivity: ms sql server, oracle, sap erp, s4/hana with theobald software (odp, deltaq/tables), sap business one with analyticscreator own connectivity, sap odp objects, excel, access, csv/text, oledb (e.g. terradata, netezza, db2..), odbc (mysql, postgres) , odata , azure blob storage (csv, parquet, avro), rest, ms sharepoint, google ads, amazon, salesforce crm, hubspot crm, ms dynamics 365 business central, ms dynamics navision 3rd party connectivity: access to more than 250+ data source with c-data connector [www.cdata.com/drivers]. this allows for connection to analyticscreator directly by an odbc, or ole db driver, or by connecting an ingest layer with externally filled tables. define your own connectivity: (any data source, hadoop, google bigquery/analytics, amazon, shop solutions, facebook, linkedin, x (formerly twitter)) in all cases of access to source data an analyticscreator-metadata-connector is created. the analyticscreator-metadata-connector is a description of data-sources you use for more easy handling in analyticscreator. analyticscreator is able to automatically create a metadata connector by extracting the data definition from your source data. it contains information about key fields, referential integrity, name of fields and description."}
,{"id":383225948380,"name":"Export Functionality","type":"section","path":"/docs/functions-features/export-functionality","breadcrumb":"Functions › Export Functionality","description":"","searchText":"functions export functionality azure blob storage, text, csv files, any target system using oledb or odbc driver, automated type conversation, export performed by ssis packages or azure data factory pipelines export for example to oracle, snowflake, synapse"}
,{"id":383225948381,"name":"Use of Analytics Frontends","type":"section","path":"/docs/functions-features/use-of-analytics-frontends","breadcrumb":"Functions › Use of Analytics Frontends","description":"","searchText":"functions use of analytics frontends push concept: power bi, tableau, and qlik models will be created automatically. all models described here will be created at the same time. pull concept: there are many bi frontends around which allows you to connect with the specified microsoft data. check with your vendor or us what is possible. analyticscreator allows you to develop a specific solution for your analytics frontend in the way that the model will be created automatically for your bi frontend (push concept)."}
]