AnalyticsCreator Desktop Overview

AnalyticsCreator Desktop provides the following core capabilities:

  1. Data Warehouse Creation
    Automatically generate and structure your data warehouse, including fact tables and dimensions.
  2. Connectors
    Add connections to various data sources and import metadata seamlessly.
  3. Layer Management
    Define and manage layers such as staging, persisted staging, core, and datamart layers.
  4. Package Generation
    Generate integration packages for SSIS (SQL Server Integration Services) and ADF (Azure Data Factory).
  5. Indexes and Partitions
    Automatically configure indexes and partitions for optimized performance.
  6. Roles and Security
    Manage roles and permissions to ensure secure access to your data.
  7. Galaxies and Hierarchies
    Organize data across galaxies and define hierarchies for better data representation.
  8. Customizations
    Configure parameters, macros, scripts, and object-specific scripts for tailored solutions.
  9. Filters and Predefined Transformations
    Apply advanced filters and transformations for data preparation and enrichment.
  10. Snapshots and Versioning
    Create snapshots to track and manage changes in your data warehouse.
  11. Deployments
    Deploy your projects with flexible configurations, supporting on-premises and cloud solutions.
  12. Groups and Models
    Organize objects into groups and manage models for streamlined workflows.
  13. Data Historization
    Automate the process of creating historical data models for auditing and analysis.

This list consolidates the core features while emphasizing the flexibility and power of AnalyticsCreator in building comprehensive data warehouse solutions.

Configuring AnalyticsCreator

Provide the Login and Password that you received by e-mail from AnalyticsCreator.

Configuration Settings

The configuration of AnalyticsCreator is simple: the only mandatory configuration is the SQL Server Settings.

Minimum Requirements

  1. SQL Server Settings
    1. Use LocalDB to store repository: Enables you to store the AnalyticsCreator project (metadata only) on your LocalDB.
    2. SQL Server to store repository: Enter the IP address or the name of your Microsoft SQL Server.
    3. Security
      • Integrated: Authentication is based on the current Windows user.
      • Standard: Requires a username and password.
      • Azure AD: Uses Azure AD (now Microsoft Entra) for Microsoft SQL Server authentication.
      • Trust server certificate: Accepts the server's certificate as trusted.
    4. SQL User: The SQL Server username.
    5. SQL Password: The corresponding password.
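
For orientation, the Standard security option corresponds to a repository connection of roughly this shape (server, database, and user names are illustrative, not values from your installation):

Data Source=SQL01;Initial Catalog=repo_MyProject;User ID=ac_user;Password=<your password>;

With Integrated security, the user and password pair is replaced by Integrated Security=SSPI, as in the connector template shown later in this guide.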

Optional Requirements

  1. Paths
    1. UNC path to store backup: A network path to store project backups.
    2. Local SQL Server path to store backup: A local folder to store your project backups.
    3. Local SQL Server path to store database: A local folder to store your SQL Server database backups.
    4. Repository database template: The alias format for your repositories. Default: repo_{RepoName}.
    5. DWH database template: The alias format for your DWH templates. Default: dwh_{RepoName}.
  2. Proxy Settings
    1. Proxy Address: The IP address or hostname of your proxy server.
    2. Proxy Port: The port number used by the proxy.
    3. Proxy User: The username for proxy authentication.
    4. Proxy Password: The password for the proxy user.

Now you're ready to create your new Data Warehouse with AnalyticsCreator.

Working with AnalyticsCreator

Understanding the fundamental operations in AnalyticsCreator Desktop is essential for efficiently managing your data warehouse repository and ensuring accuracy in your projects. Below are key basic operations you can perform within the interface:


Edit Mode and Saving – Data Warehouse Editor

  • Single Object Editing: In the Data Warehouse Repository, you can edit one object at a time. This ensures precision and reduces the risk of unintended changes across multiple objects.
  • How to Edit: Double-click on any field within an object to enter edit mode. The selected field becomes editable, allowing you to make modifications.
  • Save Prompt: If any changes are made, a prompt will appear, reminding you to save your modifications before exiting the edit mode. This safeguard prevents accidental loss of changes.
  • Unsaved Changes: While edits are immediately reflected in the repository interface, they are not permanently saved until explicitly confirmed by clicking the Save button.

Accessing Views in Data Warehouse Explorer

  • Layer-Specific Views: Each layer in the data warehouse contains views generated by AnalyticsCreator. These views provide insights into the underlying data structure and transformations applied at that layer.
  • How to Access: Navigate to the Data Warehouse Explorer and click on the View tab for the desired layer. This displays the layer’s contents, including tables, fields, and transformations.

Adding and Deleting Objects

Adding New Objects:

  1. Navigate to the appropriate section (e.g., Tables, Layers, or Connectors) in the navigation tree.
  2. Right-click and select Add [Object Type] to create a new object.
  3. Provide the necessary details, such as name, description, and configuration parameters.
  4. Save the object.

Deleting Objects:

  • Select the object in the navigation tree and right-click to choose Delete.
  • Confirm the deletion when prompted.
    Note: Deleting an object may affect dependent objects or configurations.

Filtering and Searching in Data Warehouse Explorer

  • Filtering: Use filters to narrow down displayed objects by criteria such as name, type, or creation date.
  • Searching: Enter keywords or phrases in the search bar to quickly locate objects.
  • Benefits: These features enhance repository navigation and efficiency when working with large datasets.

Object Dependencies and Relationships

  • Dependency View: For any selected object, view its dependencies and relationships with other objects by accessing the Dependencies tab.
  • Impact Analysis: Analyze how changes to one object might affect other parts of the data warehouse.

Managing Scripts

  • Predefined Scripts: Add scripts for common operations like data transformations or custom SQL queries.
  • Edit and Run: Double-click a script in the navigation tree to modify it. Use Run Script to execute and view results.

Validating and Testing Changes

  • Validation Tools: Use built-in tools to check for errors or inconsistencies in your repository.
  • Evaluate Changes: Use the Evaluate button before saving or deploying to test functionality and ensure correctness.

Locking and Unlocking Objects

  • Locking: Prevent simultaneous edits by locking objects, useful in team environments.
  • Unlocking: Release locks once edits are complete to allow further modifications by others.

Exporting and Importing Data

Export:

  • Export objects, scripts, or configurations for backup or sharing.
  • Use the Export option in the toolbar or navigation tree.

Import:

  • Import previously exported files to replicate configurations or restore backups.
  • Use the Import option and follow the prompts to load the data.

Advanced Features

AnalyticsCreator provides a rich set of advanced features to help you configure, customize, and optimize your data warehouse projects. These features extend the tool’s capabilities beyond standard operations, enabling more precise control and flexibility.

Scripts

Scripts in AnalyticsCreator allow for detailed customization at various stages of data warehouse creation and deployment. They enhance workflow flexibility and enable advanced repository configurations.

Types of Scripts

  1. Object-Specific Scripts
    Define custom behavior for individual objects, such as tables or transformations, to meet specific requirements.
  2. Pre-Creation Scripts
    Execute tasks prior to creating database objects.
    Example: Define SQL functions to be used in transformations.
  3. Pre-Deployment Scripts
    Configure processes that run before deploying the project.
    Example: Validate dependencies or prepare the target environment.
  4. Post-Deployment Scripts
    Handle actions executed after deployment is complete.
    Example: Perform cleanup tasks or execute stored procedures.
  5. Pre-Workflow Scripts
    Manage operations that occur before initiating an ETL workflow.
    Example: Configure variables or initialize staging environments.
  6. Repository Extension Scripts
    Extend repository functionality with user-defined logic.
    Example: Add custom behaviors to redefine repository objects.

Historization

The historization features in AnalyticsCreator enable robust tracking and analysis of historical data changes, supporting advanced time-based reporting and auditing.

Key Components

  1. Slowly Changing Dimensions (SCD)
    Automate the management of changes in dimension data.
    Supports various SCD types including:
    • Type 1 (overwrite)
    • Type 2 (versioning)
    • Others as needed
  2. Time Dimensions
    Create and manage temporal structures to facilitate time-based analysis.
    Example: Build fiscal calendars or weekly rollups for time-series analytics.
  3. Snapshots
    Capture and preserve specific states of the data warehouse.
    Use cases include audit trails, historical reporting, and rollback points.

Parameters and Macros

These tools provide centralized control and reusable logic to optimize workflows and streamline repetitive tasks.

Parameters

  • Dynamic Management: Centralize variable definitions for consistent use across scripts, transformations, and workflows.
  • Reusable Configurations: Update values in one place to apply changes globally.
  • Use Cases: Set default values for connection strings, table prefixes, or date ranges.

Macros

  • Reusable Logic: Create parameterized scripts for tasks repeated across projects or workflows.
  • Streamlined Processes: Use macros to enforce consistent logic in transformations and calculations.
  • Example: Define a macro to calculate age from a birthdate and reuse it across transformations.
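
A minimal T-SQL sketch of the age logic such a macro could encapsulate (the macro mechanism itself is AnalyticsCreator-specific; the column name BirthDate is illustrative):

DATEDIFF(YEAR, [BirthDate], GETDATE())
    - CASE WHEN DATEADD(YEAR, DATEDIFF(YEAR, [BirthDate], GETDATE()), [BirthDate]) > GETDATE()
           THEN 1 ELSE 0 END  -- subtract 1 if this year's birthday has not yet occurred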

Summary

AnalyticsCreator’s advanced features offer deep customization options that allow you to:

  • Control object-level behavior through scripting
  • Track and manage historical data effectively
  • Streamline project-wide settings with parameters
  • Reuse logic with powerful macros

These capabilities enable you to build scalable, maintainable, and highly flexible data warehouse solutions.

Installation

Installing AnalyticsCreator: 32-bit and 64-bit Versions

This guide offers step-by-step instructions for installing either the 32-bit or 64-bit version of AnalyticsCreator, depending on your system requirements.

🛈 Note: To ensure optimal performance, verify that your system meets the following prerequisites before installation.

Understanding AnalyticsCreator

There are at least two approaches to designing a holistic business and data model.

The bottom-up method, shown in the graphic below, and the top-down method, which starts with the conceptual model. Models can also be loaded from other modeling tools.

Bottom-up modeling approach

  1. Connect - Connect AnalyticsCreator to any data source: databases, individual files, data lakes, cloud services, Excel files, and other extracts. Built-in connectors to many common sources are available, as well as support for Azure Data Factory and Azure Analytics.
  2. Define Data - AnalyticsCreator extracts all metadata from the data sources, such as field descriptions, data types, key fields, and all relationships, and stores it in the AnalyticsCreator Metadata Repository. This will:
    1. Extract and capture DDL
    2. Detect structure changes and propagate them to all higher layers.
  3. Cognitive Suggestion - Intelligent wizards help to create a draft version of the model across all layers of the data analytics platform. Choose different modeling approaches or create your own:
    1. Data Vault 2.0, dimensional, 3NF, or your own approach
    2. Historical data handling (SCD, Snapshot, CDC, Gapless, ...)
    3. Use Azure DevOps
  4. Model - The entire toolset of AnalyticsCreator is at your disposal to further develop the draft model. Behind the holistic graphical model, the generated code is already finished and can also be modified manually, including:
    1. Automated transformations and wizards
    2. Collaboration
    3. Development process supported by data lineage flow-chart
    4. Own scripting and macros are possible
  5. Deploy - To deploy the data model in different environments (Test, Prod, ...), AnalyticsCreator generates deployment packages that are also used for the change process of structures and loadings. Deployment packages can be used locally, in Fabric, in Azure, as well as in hybrid environments. This includes:
    1. Stored procedures, SSIS
    2. Azure SQL DB, Azure Analysis Services, Synapse
    3. ARM Template for Azure Data Factory
    4. Tabular Models, OLAP Cubes
    5. Power BI
    6. Tableau
    7. Qlik

Download and Installation

  1. Access the Download Page
    Navigate to the AnalyticsCreator download page
  2. Download the Installer
    Locate and download the installation file.
  3. Verify SQL Server Connectivity
    Before proceeding with the installation, confirm that you can connect to your SQL Server instance:
    1. Use SQL Server Management Studio (SSMS), a tool for managing and configuring SQL Server.
    2. If SSMS is not installed on your system, download it from the official Microsoft site: Download SQL Server Management Studio (SSMS)
  4. Install the software
    Once connectivity is confirmed, follow the instructions below to complete the installation.

System Requirements

To ensure optimal performance, verify that the following requirements are met:

🛈 Note: If you already have SQL Server installed and accessible, you can proceed directly to the Launching AnalyticsCreator section.

  • Operating System: Windows 10 or later.
    AnalyticsCreator is compatible with Windows operating systems starting from version 10.
  • Networking:
    • AnalyticsCreator communicates with the AnalyticsCreator server over port 443.

⚠️ WARNING: Port 443 is the standard HTTPS port for secured transactions.
It is used for data transfers and ensures that data exchanged between a web browser and websites remains encrypted and protected from unauthorized access.

  • Microsoft SQL Server: The following platforms are supported:
    • SQL Server on Azure Virtual Machines
    • Azure SQL Managed Instances
    • The Azure SQL family
    • Microsoft Fabric SQL
    • On Premises: Supported editions include Express, Standard, and Enterprise from Microsoft SQL Server 2016 onward.

    AnalyticsCreator utilizes SQL Server to store project data, so it is essential to have a compatible version installed before continuing.

Compatibility Matrix

VERSION                      Edition      EngineEdition
Microsoft SQL Azure (RTM)    SQL Azure    12

To check your version, run:

SELECT VERSION = @@VERSION, 
       Edition = SERVERPROPERTY('Edition'), 
       EngineEdition = SERVERPROPERTY('EngineEdition')


Generate Power BI Files

The Power BI Generation function in AnalyticsCreator allows users to generate Power BI Desktop Project files in one click. The Power BI Project is the best format for working in Power BI as a Developer. It is organized into folders and files, all well structured.

Power BI Project (PBIR) uses TMDL (Tabular Model Definition Language).

The TMDL Folder Structure

The default folder structure has only one level of sub-folders, all with .tmdl files inside:

  • cultures
  • perspectives
  • roles
  • tables

And root files for:

  • database
  • model
  • relationships
  • expressions
  • datasources
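
For orientation, a generated PBIP semantic model folder might look like this (project, table, perspective, and role names are illustrative):

MyProject.SemanticModel/definition/
    database.tmdl
    model.tmdl
    relationships.tmdl
    expressions.tmdl
    datasources.tmdl
    cultures/en-US.tmdl
    perspectives/Sales.tmdl
    roles/Reader.tmdl
    tables/Customer.tmdl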

Connectors

Setting Up Connectors in AnalyticsCreator

Connectors in AnalyticsCreator allow users to establish data source connections, enabling efficient data management and analysis. Here’s a comprehensive guide to understanding and setting up various connectors.

Navigating Connectors

To create or edit a connector, use the toolbar menu.

Connectors define the data source logic.

Here’s a list of connector types supported by AnalyticsCreator:

  • MS SQL Server
  • Oracle
  • CSV
  • Excel
  • DuckDB (Parquet, CSV, S3)
  • MS Access
  • OLEDB
  • SAP (using Theobald connector)
  • ODBC

Connection String and Templates

AnalyticsCreator provides a friendly interface for generating connection string templates.

For several connector types, users can access these templates by clicking the “Template” button. Here’s an example template:

PROVIDER=SQLNCLI11;Data Source=[SERVER];Initial Catalog=[DATABASE];Integrated Security=SSPI;

Make sure to replace the placeholders [SERVER] and [DATABASE] with the actual server and database names.
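
For example, a completed string for a local server and the Northwind sample database might read (server and database names are illustrative):

PROVIDER=SQLNCLI11;Data Source=MYSERVER;Initial Catalog=Northwind;Integrated Security=SSPI;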

CSV Connector Properties

The CSV connector has unique properties enhancing file handling capabilities. Users should pay attention to these additional settings to ensure seamless file integration and processing.

Row Delimiters

When defining row delimiters, users can utilize specific abbreviations for ease. These include:

  • {CR} for Carriage Return
  • {LF} for Line Feed
  • {t} for Tabulator

These specifications enable seamless formatting and data structuring within your source files.

Automating Data Source Descriptions

For automatic data source description retrieval, ensure your connections to these data sources are active and functional. This automation simplifies data management and improves operational efficiency.

Cloud Storage for Connectors

Store connector definitions and associated data sources in the cloud. Cloud storage provides a durable and accessible platform for managing your data across different repositories, enhancing collaboration and data security.

Encrypted Strings

We highly recommend keeping your connection strings encrypted.

To encrypt your string, simply click on Options → Encrypted Strings → New.

To use your encrypted strings in your sources, enclose the name you’ve created with # on both sides.

For example, if you created an encrypted string named DuckDB for your DSN, reference it in the connection string as #DuckDB#.

Sources

🛈 PRO TIP: If you're new to AnalyticsCreator, we highly recommend making use of the Source Wizard.


The source contains a description of the external data.
Each source belongs to a connector.
Each source has columns, and references (Table Keys) between sources can be defined.

To open the source definition, use the "Edit source" option from the source context menu in the navigation tree or diagram.
To add a new source, use the "Add new source" option from the source context menu in the navigation tree or diagram.

Below is a typical source definition:

AnalyticsCreator Data Sources

The properties of sources depend on the connector type and the source type.

There are four source types:

  • TABLE
  • VIEW
  • SAP_DELTAQ
  • QUERY

For the QUERY source type, the source window will display an additional tab containing the query definition. Sources cannot be created manually, with one exception: a CSV source can be defined by hand.

Refresh source metadata

The user can check for changes in the source and propagate any detected changes to the data warehouse objects. To check for changes in all connector sources, use the connector context menu and select "Refresh all sources" in the navigation tree.

To check for changes in imported connector sources only, use the connector context menu and select "Refresh used sources" in the navigation tree.

To check for changes in a specific source, use the source context menu and select "Refresh source" in the navigation tree.

Refresh Source Options:

The following refresh options are available:

  • Detect differences: Detects changes in the source but does not modify the repository.
  • Delete missing sources: Deletes any missing sources from the repository.
  • Refresh source descriptions: Refreshes the descriptions of the sources.
  • Refresh columns in imported tables: Refreshes columns when there are new or changed source columns.
  • Delete missing columns in imported tables: Deletes columns in imported tables if the source columns have been deleted.
  • Refresh primary keys in imported tables: Updates primary keys if the source’s primary key has changed.
  • Refresh descriptions in imported tables: Updates descriptions of imported tables and columns.

Layers

In a data warehouse, layers are a crucial aspect of its logical structure. Users have the ability to define a variety of layers, each serving a specific purpose. Below are six primary types of layers commonly used in a data warehouse architecture, along with their functions and interconnections.

This configuration facilitates an efficient workflow, transforming raw data sources into insightful, user-accessible information.

Each layer plays a distinct role in the data journey—from acquisition to end-user presentation—supporting governance, transformation, historization, and analytics.

1. Source Layer (SRC)

  • Purpose: Acts as the foundational logical data layer containing external data sources.
  • Characteristics:
    • Not part of the actual data warehouse storage.
    • Serves as the entry point for incoming external data.
    • Tables and transformations cannot be created in this layer.

2. Staging Layer (IMP)

  • Also Known As: Import Layer
  • Purpose: Loads and structures raw data from the Source Layer into tables for further processing.
  • Characteristics:
    • Temporarily stores incoming data.
    • Frequently refreshed with the latest imports.
    • Prepares data for historization and persistence.

3. Persisted Staging Layer (STG)

  • Purpose: Begins the commitment to data historization and traceability.
  • Characteristics:
    • Stores data from the Staging Layer persistently.
    • Maintains historical records of changes.
    • Considered the first "true" layer of the data warehouse.

4. Transformation Layer (TRN)

  • Purpose: Applies additional logic and refinements to the data.
  • Characteristics:
    • Optional, but useful for cleansing, deduplication, or complex business logic.
    • Ensures high data quality and consistency.
    • Acts as a bridge between raw and modeled data.

5. Data Warehouse Layer (DWH)

  • Purpose: Converts structured data into analytical models (e.g., facts and dimensions).
  • Characteristics:
    • Core repository of business-ready data.
    • Supports advanced querying, reporting, and data analysis.

6. Data Mart Layer (DM)

  • Purpose: Provides business users with access to relevant datasets in a user-friendly structure.
  • Characteristics:
    • Often adopts star schema or other analytical models.
    • Optimized for reporting tools and dashboards.
    • Represents the interface between the data warehouse and the end-user.

Together, these layers enable a modular and governed approach to building scalable and maintainable data warehouse solutions in AnalyticsCreator.

Schemas

A schema is the Microsoft SQL Server schema to which a Data Warehouse (DWH) object belongs.
Each schema should be assigned to a specific layer, and each layer can contain multiple schemas.

Tables

Tables in AnalyticsCreator

A table represents a database table or view within the data warehouse, and each table belongs to a specific schema. Tables are created automatically when defining a data import, historization, or persisting process.

Views are created when defining a transformation. Additionally, tables can be manually created to store results of external or script transformations.

For most tables, several properties can be configured, including calculated columns, primary keys, identity columns, and indexes.

Table Properties:

  1. Table name: The name of the table
  2. Table schema: The schema to which the table belongs
  3. Table type: The type of the table
    1. Import table: Filled with external data by an SSIS package.
    2. Historicized table: Contains historized data. Includes:
      • SATZ_ID (bigint) – surrogate primary key
      • DAT_VON_HIST (datetime) – start of validity
      • DAT_BIS_HIST (datetime) – end of validity
    3. View without history
    4. View with history
    5. Persisted table without history
    6. Persisted table with history
    7. Data mart dimension view without history
    8. Data mart dimension view with history
    9. Data mart fact view without history
    10. Data mart fact view with history
    11. Externally filled table without history
    12. Externally filled table with history
    13. Data vault hub table with history
    14. Data vault satellite table with history
    15. Data vault link table with history
  4. Friendly name: Used in OLAP cubes instead of table name.
  5. Compression type: DEFAULT, NONE, ROW, PAGE
  6. Description: Description inherited by dependent objects.
  7. Hist of table: Names of persist, hub, satellite, or link tables.
  8. Has primary key: If checked, adds PRIMARY KEY constraint.
  9. Primary key name: Name of the primary key.
  10. PK clustered: If checked, creates clustered PK.
  11. Columns:
    • Column name
    • Data type, MaxLength, NumPrec, NumScale, Nullable
    • PKOrdinalPos
    • Default (e.g., GETDATE())
    • Friendly name
    • Referenced column (for N:1 relationships)
    • References (read-only, comma-separated list)
  12. Identity column:
    • Name, type, seed, increment
    • PK pos: Position in PK

For normal tables (not views), you can optionally define identity and calculated columns (see tab).
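
For illustration, a historicized table carries the three historization columns described under the table types above; a minimal T-SQL sketch (schema, table, and business columns are hypothetical):

CREATE TABLE [STG].[Customer_Hist] (
    SATZ_ID      bigint IDENTITY(1,1) NOT NULL,  -- surrogate primary key
    DAT_VON_HIST datetime NOT NULL,              -- start of validity
    DAT_BIS_HIST datetime NOT NULL,              -- end of validity
    CustomerID   int NOT NULL,                   -- business key (illustrative)
    CompanyName  nvarchar(100) NULL,             -- tracked attribute (illustrative)
    CONSTRAINT PK_Customer_Hist PRIMARY KEY CLUSTERED (SATZ_ID)
);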

Northwind-Based Link Table

Calculated Columns Properties:

  1. Column name: Name of the column
  2. Statement: SQL statement (macros like @GetVaultHash are supported; see the example after this list)
  3. Persisted: If checked, column will be persisted
  4. PKOrdinalPos: Position in primary key
  5. Friendly name: Used in OLAP cubes instead of column name
  6. Referenced column: Defines N:1 references
  7. References: Comma-separated, read-only
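
As an illustration, a calculated column Statement is ordinary T-SQL over the table's own columns; a minimal sketch (the column name is hypothetical):

UPPER(LTRIM(RTRIM([Country])))

A hash-key column would instead apply the @GetVaultHash macro mentioned above to the business key columns.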

Table references

Defining Table Relationships in AnalyticsCreator

Relationships between tables can be defined to enable combining tables during transformations. These relationships include N:1 ("1-field" to "1-primary key field") references and more complex associations.

Defining N:1 References

One-Field to One Primary Key Field:
These references can be directly defined within the table definition using the Referenced Column attribute.

  • Example: A foreign key in one table referencing the primary key of another.

More complex references can be defined using Table references.

Here is a typical table reference definition:

Table References

Table Reference Properties:

  1. Cardinality:
    • Unknown
    • OneToOne
    • ManyToOne
    • OneToMany
    • ManyToMany
    Note: It is recommended to primarily use Many-to-One (N:1) and One-to-Many (1:N) cardinalities.
  2. Join: SQL join type
  3. Table1: Schema and table name of the first table
  4. Table2: Schema and table name of the second table
  5. Alias 1: Optional. Alias of the first table. Should be defined if reference statement is used
  6. Alias 2: Optional. Alias of the second table. Should be defined if reference statement is used
  7. Description: Reference name
  8. Auto created: If checked, the reference was automatically created during synchronization.
  9. Reference statement: Optional. SQL reference statement. Should be used if the reference cannot be described using column references only. Table aliases will be used.
  10. Columns: Either a column or a statement should be defined on each side of the reference.
    • Column1: Column from the first table
    • Statement1: SQL statement
    • Column2: Column from the second table
    • Statement2: SQL statement

Inheritance of Table Relations Across DWH Layers

Table relations will be inherited into subsequent DWH layers. For example, if references are defined between two import tables that are historicized, the same references will be automatically created between the corresponding historicized tables.

If a reference is changed, the changes will propagate into the inherited references unless those references are used in transformations. In such cases, the references will be renamed by adding the suffix _changed(N), and new inherited references will be created.

Therefore, if a "parent" reference is changed, transformations using the inherited reference will not be updated automatically. However, you can manually update them by selecting the new inherited reference.

The inherited references, where the Auto created flag is set, cannot be modified unless you uncheck the Auto created flag.

Source references

Defining Relations Between Sources

Relations between sources are defined and will be inherited by the data warehouse objects during synchronization.

The N:1 relation, which refers to a "one field" to a "one primary key field" reference, can be defined directly in the source definition by using the Referenced column attribute.

For more complex references, use Source references.

Inheritance of Source Relations Across DWH Layers

Source relations will be inherited into subsequent DWH layers. For example, if references are defined between two source tables that are imported, the same references will be automatically created between the corresponding import tables.

If a source reference is changed, the changes will propagate into the inherited references, unless those references are used in transformations. In such cases, the references will be renamed by adding the suffix _changed(N) and new inherited references will be created.

Therefore, if a "parent" reference is changed, transformations using the inherited reference will not be updated automatically. However, the user can manually update them by selecting the new inherited reference.

Stars

A Star is a part of a data mart layer. The data mart layer can contain several stars. Each star corresponds to a schema. If you create an OLAP (Online Analytical Processing) model, each star will produce one OLAP cube (Tabular and Multidimensional).

Galaxies

A Galaxy is a group of several stars. Each star should belong to a galaxy.

Galaxy definition window:

Transformations

A transformation is a process used to modify data. The result of a transformation is always either a single VIEW or a single TABLE.

To create a new transformation, use the Transformation Wizard.

Each transformation has the following common properties:

  1. Name: The name of the transformation
  2. Schema: The schema for the transformation
  3. TransType: The type of transformation
  4. Stars: A list of stars in which the transformation is involved
    • Star: The name of the star
    • IsFact: This should be selected for fact transformations
    • Filter: You can define an additional filter to restrict transformation data for a specific data mart

AnalyticsCreator supports the following transformation types:


Regular Transformation

A regular transformation is a VIEW generated by AnalyticsCreator based on the defined transformation parameters. Tables, table relationships, and transformation columns must be specified, after which AnalyticsCreator automatically creates the transformation VIEW.

Below is a typical regular transformation definition:

AnalyticsCreator Regular Transformation

Regular Transformation Properties:

  1. Historization type: Defines how to work with historicized data.
    • FullHist: Fully historicized transformation. Includes:
      • SATZ_ID
      • DAT_VON_HIST
      • DAT_BIS_HIST
    • SnapshotHist: For predefined snapshot dates (used for dimensions).
    • Snapshot: Uses snapshot dates to combine historicized data (usually for facts).
    • ActualOnly: Uses only current data from historized sources (dimensions or facts).
    • None: Non-historicized data.
  2. Create unknown member: Adds surrogate ID = 0 with default values for unmatched dimension members.
  3. Fact transformation: Check if defining a fact transformation.
  4. Persist table: Name of the table where results will be stored.
  5. Persist package: Name of the SSIS package for persisting results.
  6. SSIS Package: For external or script transformations; the package that launches the transformation.
  7. Hub of table: Read-only source for hub transformations.
  8. Sat of table: Source table for satellite transformations.
  9. Link of table: Read-only source table for link transformations.
  10. Snapshots: Snapshot and group info (relevant for Snapshot types).
  11. Tables: Participating tables
    • SeqNr: Unique table sequence number
    • Table: Table name
    • Table Alias: Unique alias used in joins/statements
    • JoinHistType:
      • None – no historicized data
      • Actual – only current data
      • Historical_from – value at start of linked record period
      • Historical_to – value at end of linked record period
      • Full – full historicizing info
    • Join type: INNER, LEFT, RIGHT, FULL, CROSS
    • Force Join: LOOP JOIN, HASH JOIN, MERGE JOIN
    • Reference statement: Optional custom join logic (e.g. T5.ID = T1.CustomerID)
  12. Filter statement: Additional SQL filter (e.g. T5.Country = 'GER')
  13. Sub select: Additional subquery to refine reference logic.
  14. Columns: Transformation output columns
    • Column name
    • TableSeqNr (optional)
    • Reference (optional)
    • Statement: SQL with aliases
    • IsAggr: Aggregated column
    • Default value: Used for unknown members
    • SeqNr: Column sequence
    • PK Position: Primary key position
    • Description
  15. References: Table joins (see Table References)
    • SeqNr1: First table seq number
    • SeqNr2: Second table seq number
    • Reference: Reference name
  16. Predefined transformations: List of referenced transformations
  17. VIEW tab: Read-only view definition

Transformation Compilation and Creation

  • Compile: Use the Compile button to check and validate the transformation logic. Errors will be flagged.
  • Create: Use the Create button to build the transformation VIEW into the DWH. Errors will be reported if present.

Manual Transformation

A manual transformation is a VIEW that is created manually.

Properties:

  • VIEW: Contains the manually created VIEW definition.
  • Rename Columns Table: If you rename a column in the manually created VIEW, enter the old and new column names into this table.

Below is a typical manual transformation definition:

AnalyticsCreator Manual Transformation


External Transformation

An external transformation is a transformation manually created using an SSIS package.

Properties:

  1. Result Table: The table where the transformation results will be stored.
  2. SSIS Package: The name of the manually created SSIS package.
  3. Tables: A list of tables on which the transformation depends. Only the table name is relevant.

Below is a typical external transformation definition:

AnalyticsCreator External Transformations


Script Transformation

A script transformation is a transformation that uses an SQL script.

Properties:

  • Result Table: The table where the transformation results will be stored.
  • SSIS Package: The name of the SSIS package where the transformation script is executed.
  • Script: The SQL script used in the transformation.

Below is a typical script transformation definition:

AnalyticsCreator Script Transformation


Data Mart Transformation

Data mart transformations are views created in the data mart layer. A data mart transformation cannot be created manually. Instead, the Stars (the star affiliation of other transformations) must be defined, and the corresponding data mart transformations will be created automatically.

AnalyticsCreator Data Mart Transformation

Every regular or manual transformation can be persisted. This means the content of the VIEW can be stored in a TABLE.


Predefined Transformations

Predefined transformations are field-level transformations based on the field type. For example, below is a definition of a predefined transformation that removes leading and trailing spaces from all fields of type varchar and nvarchar:

AnalyticsCreator Predefined Transformation

Check and Transformation Statements

The Check Statement is used to verify whether a field meets the transformation conditions.
The Transformation Statement contains the actual SQL transformation logic.

Several predefined transformations are built-in, but users can also create their own. Predefined transformations are applied in regular transformations. When creating a transformation, users can select which predefined transformations to apply.

List of Predefined Transformations

  • Trim: Removes leading and trailing spaces from string fields (e.g., varchar, nvarchar).
  • StringNULLToNA: Converts NULL values in string fields to "NA".
  • StringMaxT08000: Trims string fields to a maximum length of 8000 characters.
  • NumberNULLToZero: Converts NULL values in numeric fields to zero.
  • XmlToString: Converts XML data type fields to string format.
  • HierarchyToString: Converts hierarchical data into a string representation.
  • TimeToDatetime: Converts time fields into datetime by appending a default date (e.g., "1900-01-01").
  • BinaryToStr: Converts binary data to a string format.
  • Anonymization: Anonymizes data by replacing sensitive fields with generic or masked values.

Applying Multiple Predefined Transformations

Multiple predefined transformations can be applied simultaneously. Below is an example result when combining multiple transformations on a single field:

[FKART] = RTRIM(LTRIM(ISNULL([T1].[FKART], 'N.A.')))

Snapshots and Snapshot Groups

Snapshots are predefined dates calculated during the ETL process and used in Snapshot Transformations to combine historicized data.

By default, there is always at least one snapshot, referred to as the "Actual Date", which represents the current timestamp. Additional snapshots can be defined as needed.

Below is a typical snapshot definition:

Creating a Snapshot in AnalyticsCreator

SQL Expression for Calculating the End of the Previous Month

This SQL expression calculates the last moment of the month preceding a given @ActDate: DATEADD(dd, 1-DATEPART(d, @ActDate), @ActDate) moves to the first day of @ActDate's month, the CONVERT calls truncate the time portion to midnight, and DATEADD(ms, -2, ...) steps back two milliseconds into the previous month.

DATEADD(ms, -2, CONVERT(datetime, CONVERT(date, DATEADD(dd, 1-DATEPART(d, @ActDate), @ActDate))))

Each snapshot must have a unique name. An SQL statement is used to calculate the snapshot value, and the predefined variable @ActDate (representing the current timestamp) can be used in this statement.

Multiple snapshots can be organized into snapshot groups for better management and usability, as shown below:

Snapshot Group Definition

Working with Multiple Snapshots

When working with multiple snapshots, a Snapshot Dimension can be defined and used as a common dimension in the data mart layer.

To create a Snapshot Dimension, use the context menu:
Right-click over the Core Layer → Add → Snapshot Dimension

Add Snapshot Dimension

Snapshots are used in regular snapshot transformations to combine historicized data based on predefined dates. These transformations rely on snapshot values to accurately represent the historical context of the data.

List of Snapshots - Highlighting the Actual Date

Using Snapshot Groups and Individual Snapshots

Both snapshot groups and individual snapshots can be selected and applied during the transformation process.

Group of Snapshots - The Last 6 Months

Snapshot Transformation Wizard

To create a snapshot transformation, select "Add → Snapshot Dimension" from the diagram context menu. This will open the Snapshot Transformation Wizard.

⚠️ Note: Only one snapshot dimension can exist in the data warehouse.

As shown in the image below:

Snapshot Transformation Wizard

Parameters

  1. Schema
    The schema in which the snapshot transformation resides.
  2. Name
    The name assigned to the snapshot transformation.
  3. Stars
    The data mart stars where this snapshot transformation will be included.

Time Transformation Wizard

To create a time transformation, select "Add → Time Dimension" from the diagram context menu.

As shown in the image below:

AnalyticsCreator Time Transformation Wizard

The Time Transformation Wizard will then open, allowing you to configure a new time transformation:

AnalyticsCreator Time Transformation Configuration

Parameters

  1. Schema
    The schema in which the time transformation resides.
  2. Name
    The name assigned to the time transformation.
  3. Period (Minutes)
    The interval (in minutes) used to generate time dimension records.
  4. Time-to-ID Function
    The macro function that converts a DateTime value into the key value for the time dimension.
    Use case: Convert datetime fields in fact transformations into time dimension members (see the sketch after this list).
  5. Stars
    The data mart stars where the time transformation will be included.
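
The macro itself is repository-specific, but a typical Time-to-ID conversion reduces a datetime to the minute of the day. A minimal T-SQL sketch (the column name is illustrative):

DATEDIFF(minute, CONVERT(date, [EventTime]), [EventTime])  -- e.g., 14:30 becomes 870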

Calendar Transformation Wizard

To create a calendar transformation, select "Add → Calendar Dimension" from the diagram context menu.

As shown in the image below:

AnalyticsCreator Calendar Transformation Wizard

The Calendar Transformation Wizard will open. Typically, only one calendar transformation is required in the data warehouse.

As shown in the image below:

AnalyticsCreator Calendar Transformation

Parameters

  1. Schema: The schema of the calendar transformation.
  2. Name: The name of the calendar transformation.
  3. Date From: The start date for the calendar.
  4. Date To: The end date for the calendar.
  5. Date-to-ID Function: The macro name that transforms a DateTime value into the key value for the calendar dimension. This macro is typically used in fact transformations to map datetime fields to calendar dimension members (see the sketch after this list).
  6. Stars: The data mart stars where the calendar transformation will be included.
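
The macro itself is repository-specific, but a common Date-to-ID convention maps a datetime to an integer key of the form YYYYMMDD. A minimal T-SQL sketch (the column name is illustrative):

CONVERT(int, CONVERT(char(8), [OrderDate], 112))  -- e.g., 2024-03-15 becomes 20240315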

Persisting Wizard

The content of any regular or manual transformation can be stored in a table, typically to improve access speed for complex transformations. Persisting the transformation is managed through an SSIS package.

To persist a transformation, the user should select "Add → Persisting" from the object context menu in the diagram.

As shown in the image below:

AnalyticsCreator Persisting Wizard

Persisting Wizard Options

As shown in the image below:

AnalyticsCreator Persisting Wizard Options

  • Transformation: The name of the transformation to persist.
  • Persist Table: The name of the table where the transformation will be persisted. This table will be created in the same schema as the transformation.
  • Persist Package: The name of the SSIS package that manages the persistence process.

Transformation Wizard

The Transformation Wizard is used to create a new transformation. To start it, use the object context menu and select: "Add → Transformation" in the diagram.

Transformation Wizard - Context Menu

Typical Transformation Wizard Window

Transformation Wizard Window

Supported Transformation Types

  • Regular Transformations: Described in tabular form; the result is a generated VIEW.
  • Manual Transformations: Hand-created VIEWs defined manually by the user.
  • Script Transformations: Based on SQL scripts, often calling stored procedures.
  • External Transformations: Created outside AnalyticsCreator as SSIS packages.

Main Page Parameters

  1. Type: Transformation type:
    • Dimension: FullHist, creates unknown member, JoinHistType: Actual
    • Fact: Snapshot, no unknown member, JoinHistType: Historical_to
    • Other: FullHist, no unknown member, JoinHistType: Historical_to
    • Manual, External, Script: as named
  2. Schema: Schema name
  3. Name: Transformation name
  4. Historizing Type:
    • FullHist
    • SnapshotHist
    • Snapshot
    • ActualOnly
    • None
  5. Main Table: Only for regular transformations
  6. Create Unknown Member: Adds surrogate ID = 0 (for dimensions)
  7. Persist Transformation: Save VIEW to a table
  8. Persist Table: Name of persist table
  9. Persist Package: SSIS package name
  10. Result Table: For external/script types
  11. SSIS Package: For external/script types

Table Selection Page

Allows selection of additional tables. Tables must be directly or indirectly related to the main table.

Table Selection - Join HistType

Parameters

  • Table JoinHistType
    • None
    • Actual
    • Historical_from
    • Historical_to
    • Full
  • Join Options:
    • All N:1 direct related
    • All direct related
    • All N:1 related
    • All related
    • Use hash keys if available

Parameter Page

Configure additional parameters (for regular transformations only).

Transformation Fields Page

  • Fields:
    • None
    • All key fields
    • All fields
  • Field Names (if duplicated):
    • Field[n]
    • Table_Field
  • Field Name Appearance:
    • No changes
    • Upper case
    • Lower case
  • Key Fields NULL to Zero: Replaces NULL with 0 (see the example after this list)
  • Use Friendly Names as Column Names
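
For example, Key Fields NULL to Zero wraps each key column in an ISNULL, roughly as follows (the column name is hypothetical):

ISNULL([T1].[CustomerID], 0)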

Stars Page

Stars Page

  • Stars: Data mart stars for the transformation
  • Default Transformations:
    • No Defaults (facts)
    • All Defaults (dimensions)
    • Selected Defaults
  • Dependent Tables: Manage dependent tables

Add Dependent Tables

Script Page

Used for script transformations. Enter the SQL logic that defines the transformation.

Script Transformation

-- Loads each employee's most recent pay rate into IMP.LastPayment
INSERT INTO IMP.LastPayment(BusinessEntityID, RateChangeDate, Rate)
SELECT 
    PH.BusinessEntityID, PH.RateChangeDate, PH.Rate
FROM 
(
    SELECT 
        BusinessEntityID,
        MAX(RateChangeDate) LastRateChangeDate
    FROM 
        [IMP].[EmployeePayHistory]
    GROUP BY 
        BusinessEntityID
) T
INNER JOIN [IMP].[EmployeePayHistory] PH ON 
    PH.BusinessEntityID = T.BusinessEntityID AND 
    PH.RateChangeDate = T.LastRateChangeDate

Historization Wizard

The Historization Wizard is used to historicize a table or transformation.
To start the Historization Wizard, use the object context menu: "Add" → "Historization" in the diagram, as shown in the image below:

Historization Wizard From Lineage

Alternatively, the object context menu in the navigation tree can be used, as shown in the image below:

Historization Wizard From Tables

Parameters

There is a typical Historization Wizard window, as shown in the image below:

Historization Wizard Window

  1. Source Table: The table that should be historicized.
  2. Target Schema: The schema of the historicized table.
  3. Target Name: The name of the historicized table.
  4. Package: The name of the SSIS package where the historization will be done. You can select an existing historization package or add a new package name.
  5. Historizing Type: You can select between SSIS package and stored procedure.
  6. SCD Type: The user can select between different historization types: SCD 0, SCD 1, and SCD 2.
  7. Empty Record Behavior: Defines what should happen in case of a missing source record.
  8. Use VAULT ID as PK: If you are using Data Vault or a mixed architecture, HashKeys can be used instead of business keys to perform historization.

After clicking "Finish", the historization will be generated, and the diagram will be updated automatically. Then, the user can select the generated historization package and optionally change some package properties (see "Historizing Package").

Import Wizard

To start the Import Wizard, use the source context menu:

Import Wizard - From Import Layer

Import Status Indicators

Sources marked with a "!" icon indicate that they have not yet been imported. Attempting to launch the Import Wizard on a source that has already been imported will result in an error.

Import Status Indicators

Typical Import Wizard Window

There is a typical Import Wizard window, as shown in the image below:

Import Wizard Window

Options:

  1. Source: The source that should be imported.
  2. Target Schema: The schema of the import table.
  3. Target Name: The name of the import table.
  4. Package: The name of the SSIS package where the import will be done. You can select an existing import package or add a new package name.

Click Finish to proceed. The import definition window will open, allowing the configuration of additional import attributes and parameters, as shown in the image below:

Edit Import

Post-Import Actions

Refer to the "Import Package" description for more details.
After creating a new import, refresh the diagram to reflect the changes, as shown in the image below:

Import Refresh

Source Wizard

The Source Wizard is used to add new data sources to the repository. To launch the Source Wizard, right-click on the "Sources" branch of a connector in the context menu and select "Add Source."

Create new source

Source Wizard Functionality

The appearance and functionality of the Source Wizard will vary depending on the selected source type (Table or Query):

  • Table: When selecting Table as the data source, the wizard provides options to configure and view available tables.

Read Table from new source

Configuring a Table Data Source

When selecting "Table" as the data source in the Source Wizard, click the "Apply" button to display the list of available source tables. Optionally, you can enter a schema or table filter to refine the results.

Configuration Options:
  • Retrieve Relations: Enables the retrieval of relationships for the selected source table, if available.
  • SAP Description Language: Specifies the language for object descriptions when working with SAP sources.
  • SAP DeltaQ Attributes: For SAP DeltaQ sources, additional DeltaQ-specific attributes must be defined.

Configuring a Query as a Data Source

When selecting "Query" as the data source in the Source Wizard, follow these steps:

Read Table from Select new source

  1. Define Schema and Name: Specify the schema and name of the source for the repository.
  2. Enter the Query: Provide the query in the query language supported by the data source.
  3. Test the Query: Click the “Test Query” button to verify its validity and ensure it retrieves the expected results.
  4. Complete the Configuration: Click the “Finish” button to add the new source to the repository. The source definition window will open, allowing further modifications if needed.

Mixed Modeling DWH

coming soon....

Dimensional Modeling with AnalyticsCreator

Dimensional modeling in AnalyticsCreator simplifies the design and organization of data warehouse structures, enabling efficient data analysis and reporting. By organizing data into facts (quantitative metrics) and dimensions (descriptive attributes), it enhances query performance and user understanding.

AnalyticsCreator supports various modeling techniques, such as Classic (Kimball), Data Vault, and hybrid approaches, ensuring flexibility based on business requirements. Users can easily define and manage dimensions, facts, and measures, and automate the creation of relationships between tables.

With built-in wizards, it streamlines the setup of data marts, calendar transformations, and historical data management. This powerful tool not only helps structure data for improved reporting but also ensures scalability and consistency across the data warehouse environment.

DWH Wizard

The DWH Wizard allows for the rapid creation of a semi-ready data warehouse. It is especially effective when the data source includes predefined table references or manually maintained source references.

Prerequisites

  • At least one source connector must be defined before using the DWH Wizard.
  • Note: The DWH Wizard does not support CSV connectors. For CSV sources, use the Source Wizard instead.

To launch the DWH Wizard, click the “DWH Wizard” button in the toolbar.

DWH Wizard

Alternatively, the user can use the connector context menu:

DWH Wizard Ribbon

Using the DWH Wizard

Select the connector, optionally enter the schema or table filter, and click "Apply". Then, the source tables will be displayed.

Optionally, select the "Existing Sources" radio button to work with already defined sources instead of querying the external system (ideal for meta connectors).

DWH Read from Existing Sources

If a table already exists, the "Exist" checkbox will be selected.

To add or remove tables:

  • Select them and click the ➕ button to add.
  • Select from below and click the ➖ button to remove.
DWH Wizard Select Tables

DWH Wizard Architecture Options

The wizard can generate the DWH using:

  • Classic or Mixed Architecture: Supports imports, historization, dimensions, and facts.
  • Data Vault Architecture: Supports hubs, satellites, links, dimensions, and facts with automatic classification when “Auto” is selected.
DWH Wizard Selected Tables

Define name templates for DWH objects:

DWH Wizard Customization

Set additional parameters:

DWH Wizard Calendar Settings

DWH Wizard Properties

  1. Field Name Appearance: Leave unchanged, or convert to UPPER/lowercase.
  2. Retrieve Relations: Enable automatic relation detection from source metadata.
  3. Create Calendar Dimension: Auto-create calendar dimension and define date range.
  4. Include Tables in Facts: Include related tables in facts (N:1, indirect, etc.).
  5. Use Calendar in Facts: Include date-to-calendar references in fact transformations.
  6. SAP DeltaQ Transfer Mode: Choose between IDoc and tRFC.
  7. SAP DeltaQ Automatic Synchronization: Enable automatic DeltaQ sync.
  8. SAP Description Language: Select SAP object description language.
  9. DataVault2: Do Not Create HUBs: Optionally suppress hub creation in DV2.
  10. Historizing Type: Choose SSIS package or stored procedure for historization.
  11. Use Friendly Names in Transformations as Column Names: Use display names from SAP/meta/manual connectors.
  12. Default Transformations: Select default predefined transformations for dimensions.
  13. Stars: Assign generated dimensions and facts to data mart stars.

Wizards

The Wizards in AnalyticsCreator provide a guided and efficient way to perform various tasks related to building and managing a data warehouse. Below is an overview of the eight available wizards and their core functions.

1. DWH Wizard

The DWH Wizard is designed to quickly create a semi-ready data warehouse. It is especially useful when the data source contains defined table relationships or manually maintained references.

  • Supports multiple architectures: Classic (Kimball), Data Vault 1.0 & 2.0, or mixed.
  • Automatically creates imports, dimensions, facts, hubs, satellites, and links.
  • Customizable field naming, calendar dimensions, and SAP DeltaQ integration.

2. Source Wizard

The Source Wizard adds new data sources to the repository.

  • Supports source types: Table or Query.
  • Retrieves table relationships and SAP-specific metadata.
  • Allows query testing and schema/table filtering.

3. Import Wizard

The Import Wizard defines and manages the import of external data into the warehouse.

  • Configures source, target schema, table name, and SSIS package.
  • Allows additional attributes and parameters.

4. Historization Wizard

The Historization Wizard manages how tables or transformations are historized.

  • Supports SCD types: 0, 1, and 2.
  • Configures empty record behavior and Vault ID usage.
  • Supports SSIS-based or stored procedure historization.

5. Transformation Wizard

The Transformation Wizard creates and manages data transformations.

  • Supports Regular, Manual, Script, and External transformation types.
  • Handles both historicized and non-historicized data.
  • Configures joins, fields, persistence, and metadata settings.

6. Calendar Transformation Wizard

The Calendar Transformation Wizard creates calendar transformations used in reporting and time-based models.

  • Configures schema, name, start/end dates, and date-to-ID macros.
  • Assigns transformations to specific data mart stars.

7. Time Transformation Wizard

The Time Transformation Wizard creates time dimensions to support time-based analytics.

  • Configures schema, name, time period, and time-to-ID macros.
  • Assigns transformations to specific data mart stars.

8. Snapshot Transformation Wizard

The Snapshot Transformation Wizard creates snapshot dimensions for snapshot-based analysis.

  • Allows creation of one snapshot dimension per data warehouse.
  • Configures schema, name, and data mart star assignment.

By using these eight wizards, AnalyticsCreator simplifies complex tasks, ensures consistency, and accelerates the creation and management of enterprise data warehouse solutions.

Deployment Packages

Multiple deployment packages can be created to manage different deployment configurations. Each deployment package is a Visual Studio solution containing the elements required to deploy the data warehouse.

Deployment packages

Deployment Package Properties

  1. Name: The name of the deployment package and the generated Visual Studio solution.
  2. Create DACPAC: If checked, the DACPAC file containing the DWH structure will be generated.
  3. Deploy DACPAC: If checked, the DACPAC file will be deployed to the database defined below.
  4. Server, DB Name, Integrated Security, Login, and Password: Connection attributes of SQL Server to which the DACPAC file should be deployed.
  5. Deployment Options:
    • Allow Data Loss
    • Drop Objects Not in Source
    • Backup DB Before Changes
    • Block When Drift Detected
    • Deploy in Single User Mode
    • Allow Incompatible Platform

    These options control how the DACPAC is deployed. See the SQLPACKAGE.EXE publish options for more information; a command-line sketch follows this list.

  6. Create Power Pivot: If checked, the Excel file containing the Power Pivot/Power BI semantic model will be created. This Power Pivot file can be imported into Power BI.
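
For reference, a command-line sketch of how the deployment options above map to SQLPACKAGE.EXE publish properties (the file, server, and database names are placeholders, and the exact invocation AnalyticsCreator uses may differ):

SqlPackage.exe /Action:Publish /SourceFile:"MyDWH.dacpac" /TargetServerName:"MyServer" /TargetDatabaseName:"MyDWH" /p:BlockOnPossibleDataLoss=false /p:DropObjectsNotInSource=true /p:BackupDatabaseBeforeChanges=true /p:BlockWhenDriftDetected=true /p:DeployDatabaseInSingleUserMode=false /p:AllowIncompatiblePlatform=true

Here BlockOnPossibleDataLoss=false corresponds to "Allow Data Loss"; the remaining properties map one-to-one to the options listed above.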

The following options are common to Multidimensional and Tabular OLAP databases:

  1. Create XMLA File: If checked, the XMLA file containing the OLAP database definition will be created.
  2. Server, DB Name, Login, Password: Connection attributes of the OLAP server where the OLAP database will be deployed. Dummy information can be added here, but the XMLA file should be edited to replace it with the correct server credentials.
  3. Process Cube in Workflow Package: If checked, the cube processing task will be added to the workflow package.
  4. Create Cube During Deployment: If checked, the OLAP cube will be created using the OLAP server connection attributes.
  5. SSIS Packages: SSIS packages that will be generated during deployment. To invert the selection, click on the header of the "Deploy" column in the package list.
  6. SSIS Config Type: Choose between an environment variable and a config file to configure the connection to the database containing the [CFG].[SSIS_Configuration] table. This table holds the configurations for all SSIS packages (see the example after this list).
  7. SSIS Config Env. Var./SSIS Config File Path: The name of the environment variable or the path to the config file that will be created.
  8. Deploy SSIS_Configuration: If checked, the content of the [CFG].[SSIS_Configuration] table will be recreated.
  9. Use Project Reference: If selected, the workflow package will access other SSIS packages using a project reference. Otherwise, it will use a file reference.
  10. Other Files:
    • Generate Power BI Project (.PBIP) files
    • Generate Tableau Packaged Workbook (.twbx)
    • Generate Qlik Script (.qvs)
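
As referenced in the SSIS Config Type option above, the values of SSIS variables can be maintained directly in the configuration table. A minimal T-SQL sketch, assuming [CFG].[SSIS_Configuration] follows the standard SSIS package-configuration schema (ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType) and using hypothetical filter and variable names:

UPDATE [CFG].[SSIS_Configuration]
SET ConfiguredValue = '20240101'
WHERE ConfigurationFilter = 'IMP_Northwind1'
  AND PackagePath = '\Package.Variables[User::Date].Properties[Value]';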

Workflow Package

A workflow package is used to execute all other packages in the correct order. There are no configuration options available.

Script Launching Package

A script launching package is used to execute script transformations. There are no configuration options available.

Persisting Package

A persisting package is used to persist transformations. There are no additional configuration options available.

Historization Package

This package is used to historicize data. One package can be used to define multiple historizations.

Note:
Historicizing Data refers to the process of tracking and storing changes to data over time. Instead of just storing the current state of the data, historicizing data ensures that previous versions or states are preserved. This allows organizations to analyze how data has evolved, which is useful for trend analysis, auditing, and reporting.

Below is a typical historization definition:

Historization Packages

Historization Options

  1. Missing record behavior: Describes the behavior when a primary key is missing in the source table:
    • Close: Closes the validity period of the corresponding key in the historicized table.
    • Add empty record: Closes the period and adds a new record with default "Empty value" columns.
    • Do not close: No action is taken; the key remains valid in the current data.
  2. Insert only: If set, the source data is appended without historization (used when no primary key exists).
  3. Type: Selects the historization algorithm:
    • SSIS Package: Historization is done via an SSIS package.
    • Automatically created stored procedure: A procedure named [CFG].[HIST_TableName] is generated and executed.
    • Manually created stored procedure: A procedure with the same name is created and maintained manually; use the auto-generated procedure as a starting point.
  4. Optional statement to calculate ValidFrom date: Define a custom SQL expression (returning a date or datetime) that calculates the validity start date for new or existing keys (see the sketch after this list).
  5. Insert Filter and Delete Filter:
    • Insert Filter: Restrict which source records get historicized.
    • Delete Filter: Restrict which records can be "closed" when primary keys are missing.
  6. SCD Type: Choose historization logic per field:
    • None (SCD 0): No change tracking; current value only.
    • SCD 1: Changes overwrite historical values.
    • SCD 2: Adds new records for changed values, maintaining validity periods.
  7. Calculated columns: Define derived columns using previous ([S]) and current ([I]) values, e.g.:
    ISNULL(I.Amount, 0) - ISNULL(S.Amount, 0)
  8. SSIS variables: Use @VariableName format to reference variables for filters. Define values via [CFG].[SSIS_Configuration].
  9. Scripts: Define pre- or post-historization SQL scripts using the Scripts tab.
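
To illustrate options 4, 5, and 8 above, here is a minimal sketch of the kinds of expressions involved; the column ModifiedDate and the variable @LoadDate are hypothetical:

-- Optional ValidFrom statement (option 4): start the validity period
-- at the source change date instead of the load time.
CONVERT(datetime, ModifiedDate)

-- Insert Filter (option 5) referencing an SSIS variable (option 8);
-- the value of @LoadDate is maintained in [CFG].[SSIS_Configuration].
ModifiedDate >= @LoadDate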

Import Package

This package is used to import data from external data sources.

A single package can be used to define multiple imports.

Below is a typical import definition:

Import Packages

Import Package Properties

  1. Fields: Defines the mapping between source and target fields, including any SSIS expressions used for each field during import.
  2. SSIS Variables: Allows defining SSIS variables and their value expressions. Values can be managed using the SSIS_Configuration table. These variables are commonly used in filter expressions.
  3. Filter: Filters restrict the data being imported. Use SSIS variables with the "@" prefix (e.g., @Date) to build dynamic filter logic (see the sketch after this list).
  4. Scripts (Tab): SQL scripts can be configured to run before or after the import process.
  5. ImpSQL: Allows redefining the default SQL command used for data import (used when custom logic is required).
  6. Update Statistics: If selected, the SQL Server UPDATE STATISTICS command is executed after the import completes.
  7. Manually Created: Indicates that the SSIS package is custom-built or modified. When selected:
    • The package will not be auto-generated during deployment.
    • However, it will be included in the overall workflow package execution.
  8. Use Logging: Enables execution logs to be written to the DWH log tables, improving monitoring and traceability.
  9. Externally Launched: Excludes the package from the main workflow execution. It must be triggered manually outside of the workflow.
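
As mentioned in the Filter property above, a typical dynamic filter is a simple comparison against an SSIS variable. A minimal sketch, with hypothetical column and variable names:

-- Import only rows changed since the last load; the value of @Date
-- is maintained in the [CFG].[SSIS_Configuration] table.
ModifiedDate >= @Date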

ETL

SSIS packages are automatically generated by AnalyticsCreator as part of the deployment process. These packages are used to execute ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) processes within the data warehouse, depending on the architecture and requirements.

Types of SSIS Packages

  1. Import Packages
    These packages are used to import data from external sources into the data warehouse.
  2. Historization Packages
    These handle the historicization of data, ensuring changes are tracked over time for analytical purposes.
  3. Persisting Packages
    These packages are responsible for persisting transformation results within the data warehouse.
  4. Script Launching Packages
    These packages are designed to execute script-based transformations.
  5. Workflow Packages
    These orchestrate the execution of all other packages in the correct sequence, ensuring that ETL or ELT processes are performed in a logical and efficient order.

Each package type is tailored to specific tasks, enabling seamless integration and efficient data processing in the data warehouse environment. AnalyticsCreator simplifies the configuration and generation of these packages, providing a robust and automated ETL solution.

Parameters

AnalyticsCreator provides various parameters that can be modified to customize its functionality. To access the parameter settings page, navigate to Help → Parameters in the toolbar.
Once the parameter settings page is open, use the Search Criteria field to locate specific parameters.

Below is a list of parameters available for modification in AnalyticsCreator:

Parameter Description Initial value
ALLOW_SNOWFLAKE_TABULAR_OLAP Allow DIM-DIM relations in tabular OLAP cubes 0
AUTOCREATED_REFERENCES_USE_FRIENDLY_NAME Use friendly names instead of table names in description of autocreated references: 0- no, 1 - yes 0
CSV_EMPTY_STRING_LENGTH Length of empty string fields 50
CSV_MIN_STRING_LENGTH Minimum length of string fields 50
CSV_SCAN_ROWS Count of rows scanned to get the field properties 500
DATAVAULT2_CREATE_HUBS DataVault2 create hubs: 0 - no, 1 - yes 1
DEFAULT_CALENDAR_MACRO Name of default calendar macro NULL
DEPLOYMENT_DO_NOT_DROP_OBJECT_TYPES Comma-separated list of object types (see the description of SQLPACKAGE.EXE) Aggregates, ApplicationRoles, Assemblies, AsymmetricKeys, BrokerPriorities, Certificates, Contracts, DatabaseRoles, DatabaseTriggers, ExtendedProperties, FullTextCatalogs, FullTextStoplists, MessageTypes, PartitionFunctions, PartitionSchemes, Permissions, Queues, RemoteServiceBindings, RoleMembership, Rules, SearchPropertyLists, Sequences, Services, Signatures, SymmetricKeys, Synonyms, UserDefinedDataTypes, UserDefinedTableTypes, ClrUserDefinedTypes, Users, XmlSchemaCollections, Audits, Credentials, CryptographicProviders, DatabaseAuditSpecifications, Endpoints, ErrorMessages, EventNotifications, EventSessions, LinkedServerLogins, LinkedServers, Logins, Routes, ServerAuditSpecifications, ServerRoleMembership, ServerRoles, ServerTriggers

DESCRIPTION_PATTERN_CALENDAR_ID Autogenerated description of HIST_ID (SATZ_ID) field in calendar dimension. You can use {TableName}, {TableID} and {CR} placeholders Calendar ID
DESCRIPTION_PATTERN_DATEFROM Autogenerated description of DATEFROM (DAT_VON_HIST) field. You can use {TableName}, {FriendlyName}, {TableID} and {CR} placeholders {TableName}: Start of validity period
DESCRIPTION_PATTERN_DATETO Autogenerated description of DATETO (DAT_BIS_HIST) field. You can use {TableName}, {FriendlyName}, {TableID} and {CR} placeholders {TableName}: End of validity period
DESCRIPTION_PATTERN_HIST_ID Autogenerated description of HIST_ID (SATZ_ID) field. You can use {TableName}, {FriendlyName}, {TableID} and {CR} placeholders {TableName}: Surrogate key
DESCRIPTION_PATTERN_SNAPSHOT_ID Autogenerated description of HIST_ID (SATZ_ID) field in snapshot dimension. You can use {TableName}, {TableID} and {CR} placeholders Snapshot ID
DIAGRAM_NAME_PATTERN Object name in diagram. You can use {Name}, {Friendly name}, {FullFriendlyName}, {ID} and {CR} placeholders {FullFriendlyName}
DWH_CREATE_REFERENCES Create disabled references between tables in data warehouse 0
DWHWIZARD_CALENDAR DWH Wizard. 1 - Create, 0 - Do not create 1
DWHWIZARD_CALENDAR_FROM DWH Wizard. Calendar start date 19800101
DWHWIZARD_CALENDAR_TO DWH Wizard. Calendar end date 20201231
DWHWIZARD_CALENDAR_TRANSNAME DWH Wizard. Calendar dimension name DIM_Calendar
DWHWIZARD_DIMNAME DWH Wizard. Template for generated dimensions DIM_{SRC_NAME}
DWHWIZARD_DWHTYPE DWH Wizard. 1 - Classic, 2 - DataVault 1.0, 3 - DataVault 2.0, 4 - Mixed 1
DWHWIZARD_FACT DWH Wizard. 1 - N:1 direct related, 2 - All direct related, 3 - N:1 direct and indirect related, 4 - all direct and indirect related 3
DWHWIZARD_FACT_CALENDAR DWH Wizard. 1 - use Calendar in facts, 0 - do not use Calendar in facts 1
DWHWIZARD_FACTNAME DWH Wizard. Template for generated facts FACT_{SRC_NAME}
DWHWIZARD_HISTPACKAGENAME DWH Wizard. Template for generated hist package names HIST_{CONNECTOR_NAME}{NR}
DWHWIZARD_HUB_PACKAGENAME DWH Wizard. Template for generated HUB packages HIST_{CONNECTOR_NAME}_HUB{NR}
DWHWIZARD_HUB_TABLENAME DWH Wizard. Template for generated HUB tables {SRC_NAME}_HUB
DWHWIZARD_HUB_TRANSNAME DWH Wizard. Template for generated HUB transformations {SRC_NAME}_HUB
DWHWIZARD_IMPPACKAGENAME DWH Wizard. Template for generated import package names IMP_{CONNECTOR_NAME}{NR}
DWHWIZARD_LINK_PACKAGENAME DWH Wizard. Template for generated LINK packages HIST_{CONNECTOR_NAME}_LINK{NR}
DWHWIZARD_LINK_TABLENAME DWH Wizard. Template for generated LINK tables {SRC_NAME}_LINK
DWHWIZARD_LINK_TRANSNAME DWH Wizard. Template for generated LINK transformations {SRC_NAME}_LINK
DWHWIZARD_SAT_PACKAGENAME DWH Wizard. Template for generated SAT packages HIST_{CONNECTOR_NAME}_SAT{NR}
DWHWIZARD_SAT_TABLENAME DWH Wizard. Template for generated SAT tables {SRC_NAME}_SAT
DWHWIZARD_SAT_TRANSNAME DWH Wizard. Template for generated SAT transformations {SRC_NAME}_SAT
DWHWIZARD_TABLENAME DWH Wizard. Template for generated table names {SRC_NAME}
DWHWIZARD_TABLESPERPACKAGE DWH Wizard. Tables per package 10
FORCE_DESCRIPTION_INHERITANCE Force inheritance of table and column description: 0 - no, 1 - yes 0
FORCE_FRIENDLYNAMES_INHERITANCE Force inheritance of table and column friendly names: 0 - no, 1 - yes 0
FRIENDLYNAME_PATTERN_CALENDAR_ID Autogenerated friendly name of HIST_ID (SATZ_ID) field in calendar dimension. You can use {TableName}, {TableID} and {CR} placeholders Calendar
FRIENDLYNAME_PATTERN_DATEFROM Autogenerated friendly name of DATEFROM (DAT_VON_HIST) field. You can use {TableName}, {FriendlyName}, {TableID} and {CR} placeholders {FriendlyName}_ValidFrom
FRIENDLYNAME_PATTERN_DATETO Autogenerated friendly name of DATETO (DAT_BIS_HIST) field. You can use {TableName}, {FriendlyName}, {TableID} and {CR} placeholders {FriendlyName}_ValidTo
FRIENDLYNAME_PATTERN_DUPLICATED_COLUMNS Autogenerated replacement of duplicated friendly names. You can use {FriendlyName}, {ColumnName}, {ColumnID} and {NR} (consecutive number) placeholders {FriendlyName}_{ColumnName}
FRIENDLYNAME_PATTERN_DUPLICATED_TABLES Autogenerated replacement of duplicated friendly names. You can use {FriendlyName}, {TableName}, {TableID} and {NR} (consecutive number) placeholders {FriendlyName}_{TableName}
FRIENDLYNAME_PATTERN_HIST_ID Autogenerated friendly name of HIST_ID (SATZ_ID) field. You can use {TableName}, {FriendlyName}, {TableID} and {CR} placeholders {FriendlyName}
FRIENDLYNAME_PATTERN_SNAPSHOT_ID Autogenerated friendly name of HIST_ID (SATZ_ID) field in snapshot dimension. You can use {TableName}, {TableID} and {CR} placeholders Snapshot
HIST_DEFAULT_TYPE 1- SSIS Package, 2 - Stored Procedure 1
HIST_DEFAULT_USE_VAULTID 0 - don't use VAULT_HUB_ID as primary key. 1 - use VAULT_HUB_ID as primary key 1
HIST_DO_NOT_CLOSE Default value of "Missing record behaviour" parameter for new historizations. 0 - close, 1 - don't close 0
LAYER1_NAME Source layer name Source layer
LAYER2_NAME Staging layer name Staging layer
LAYER3_NAME Persisted staging layer name Persisted staging layer
LAYER4_NAME Transformation layer name Transformation layer
LAYER5_NAME Data warehouse layer name Data warehouse layer
LAYER6_NAME Data mart layer name Data mart layer
OLEDBPROVIDER_SQLSERVER OLEDB provider for SQL Server SQLNCLI11
REF_TABLES_RECURSION_DEPTH Max recursion depth for the detection of referenced tables in the Transformation Wizard 5
SAP_DELTAQ_AUTOSYNC 0-Disable, 1- Enable 1
SAP_DELTAQ_TRANSFERMODE I-IDoc, T- tRFC T
SAP_DESCRIPTION_LANGUAGE SAP language to get table and field descriptions E
SAP_MAX_RECORD_COUNT Max count of records returned by SAP 1000
SAP_THEOBALD_VERSION 0 - match the SQL Server version, or an explicit version number (2008, 2012, etc.) 2012
SAP_USETABLECOMPRESSION Load SAP using compression 1
SHOW_HUB_DEPS Show vault HUB dependencies 0
SOURCE_REFERENCE_DESCRIPTION_PATTERN Autogenerated source reference description. You can use {SourceSchema1}, {SourceName1}, {SourceID1}, {FriendlyName1}, {SourceSchema2}, {SourceName2}, {SourceID2} and {FriendlyName2} placeholders FK_{SourceName1}_{SourceName2}
SOURCE_REFERENCE_ONECOL_DESCRIPTION_PATTERN Autogenerated one-column source reference description. You can use {SourceSchema1}, {SourceName1}, {SourceID1}, {FriendlyName1}, {SourceSchema2}, {SourceName2}, {SourceID2}, {FriendlyName2}, {ColumnName}, {ColumnID} and {ColumnFriendlyName} placeholders RC_{SourceName1}_{SourceName2}_{ColumnName}
SOURCE_REFRESH_DEL_MISSING_IMP_COLS Source refresh - delete missing import columns: 0 - no, 1 - yes 0
SOURCE_REFRESH_DEL_MISSING_SOURCES Source refresh - delete missing sources: 0 - no, 1 - yes 0
SOURCE_REFRESH_REFRESH_IMP_COLS Source refresh - refresh import columns: 0 - no, 1 - yes 0
SOURCE_REFRESH_REFRESH_IMP_DESC Source refresh - refresh import descriptions: 0 - no, 1 - yes 0
SOURCE_REFRESH_REFRESH_PK Source refresh - refresh primary keys in import tables: 0 - no, 1 - yes 0
SOURCE_REFRESH_REFRESH_SRC_DESC Source refresh - refresh source descriptions: 0 - no, 1 - yes 0
SSIS_REPLACE_DECIMAL_SEPARATOR 0 - do not replace, 1 - replace point by comma, 2 - replace comma by point 1
SYNC_TIMEOUT Timeout for DWH synchronization, seconds 600
TABLE_COMPRESSION_TYPE Default table compression type: 1 - NONE, 2 - PAGE, 3 - ROW 1
TABLE_REFERENCE_DESCRIPTION_PATTERN Autogenerated table reference description. You can use {TableSchema1}, {TableName1}, {TableID1}, {FriendlyName1}, {TableSchema2}, {TableName2}, {TableID2} and {FriendlyName2} placeholders FK_{TableName1}_{TableName2}
TABLE_REFERENCE_ONECOL_DESCRIPTION_PATTERN Autogenerated one-column table reference description. You can use {TableSchema1}, {TableName1}, {TableID1}, {FriendlyName1}, {TableSchema2}, {TableName2}, {TableID2}, {FriendlyName2}, {ColumnName}, {ColumnID} and {ColumnFriendlyName} placeholders RC_{TableName1}_{TableName2}_{ColumnName}
THUMBNAIL_DIAGRAM_DOCK 0 - No dock, 1 - Left top corner, 2 - Right top corner, 3 - Left down corner, 4 - Right down corner 4
THUMBNAIL_DIAGRAM_HEIGHT Height (points) 300
THUMBNAIL_DIAGRAM_LEFT Left (points) 0
THUMBNAIL_DIAGRAM_MARGIN Margin (points) 30
THUMBNAIL_DIAGRAM_SHOW 0 - do not show, 1 - show 1
THUMBNAIL_DIAGRAM_TOP Top (points) 0
THUMBNAIL_DIAGRAM_WIDTH Width (points) 300
TRANS_DEFAULT_USE_VAULT_RELATIONS 0 - use business relations rather than VAULT relations. 1 - use VAULT relations rather than business relations 1
TRANS_FRIENDLY_NAMES_AS_COLUMN_NAMES Use friendly names as column names in transformations: 0 - no, 1 - yes 1
TRANSFORMATIONS_CREATEVIEWS Create VIEW when saving transformation: 2-yes, 1-compile only, 0-no 0
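
As a worked example of the template parameters above: with the default DWHWIZARD_DIMNAME value DIM_{SRC_NAME}, a hypothetical source table named Customers yields a dimension named DIM_Customers; the other {...} placeholders expand in the same way.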

Scripts

Scripts are a set of SQL commands that will be executed under specific conditions. There are four types of scripts:

  1. Pre-Deployment Script: This script is executed prior to DWH synchronization and before deployment.
  2. Post-Deployment Script: This script is executed after DWH synchronization and before deployment.
  3. Pre-Workflow Script: This script is executed in the workflow package before starting all other packages.
  4. Post-Workflow Script: This script is executed in the workflow package after all other packages have finished.

Deployment Script Control

The deployment script can be disabled during synchronization or deployment by using the "Do Not Deploy" and "Do Not Synchronize" flags.

Below is a typical script definition:

Pre-deployment script

Pre- and Post-Deployment Scripts for Stored Procedures

Pre- and post-deployment scripts can be used to create stored procedures for use in transformations. In this case, the script should be executed only during Data Warehouse (DWH) synchronization.

Including the CREATE PROCEDURE script during deployment is unnecessary, as the procedure definition is already included in the deployment package.
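
A minimal sketch of such a synchronization-only script (the procedure name and logic are hypothetical); CREATE OR ALTER keeps the script re-runnable across repeated synchronizations:

-- Helper procedure intended for use in transformations; executed only
-- during DWH synchronization (the "Do Not Deploy" flag is set).
CREATE OR ALTER PROCEDURE [CFG].[usp_CleanPhone]
AS
BEGIN
    SET NOCOUNT ON;
    -- stg.Customers is a hypothetical persisted staging table
    UPDATE stg.Customers
    SET Phone = REPLACE(REPLACE(Phone, ' ', ''), '-', '');
END;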

Macros

A macro is a powerful tool used to simplify transformation definitions in AnalyticsCreator. Every macro has the following components:

  • Name: The name of the macro.
  • Language: The programming language used in the macro.
  • Definition Statement: The logic or functionality defined within the macro.
  • Referenced Table (optional): Used for auto-referencing in transformations.

Currently, two languages are supported: T-SQL and SSIS.

  • T-SQL Macros: Used in transformations, calculated fields, and database objects.
  • SSIS Macros: Used in SSIS statements for import constraints or field logic.

Macro Example

A typical macro definition:

Date2ID Macro

Macro Statement and Parameters

Every macro uses positional parameters like :1, :2, etc. To call a macro, prefix it with @ and supply parameters in parentheses. For example:

@Date2ID(T1.BUDAT)

This will be parsed into:

CONVERT(bigint, ISNULL(DATEDIFF(DD, '20000101', CONVERT(date, T1.BUDAT)) + 1, 0))
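
Working backward from this expansion, the Definition Statement of the Date2ID macro is presumably the following, with :1 marking the first positional parameter:

CONVERT(bigint, ISNULL(DATEDIFF(DD, '20000101', CONVERT(date, :1)) + 1, 0))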

Macro Parameters and NULL Replacement

If fewer parameters are passed than defined, the remaining placeholders will be replaced by NULL.

@Date2ID()

Results in:

CONVERT(bigint, ISNULL(DATEDIFF(DD, '20000101', CONVERT(date, NULL)) + 1, 0))

Referenced Table Parameter

The Referenced Table parameter allows automatic creation of a reference between a field and the referenced table, based on the macro logic.

Macro Usage in Transformations

Macros are commonly used in transformation column definitions. For example:

Macro in use

This will be parsed in the transformation VIEW as:


[FK_ModifiedDate] = CASE
    WHEN T1.ModifiedDate < '19800101' THEN 0
    WHEN T1.ModifiedDate > '20401231' THEN CONVERT(bigint, DATEDIFF(DD, '19800101', '20401231') + 1)
    ELSE CONVERT(bigint, ISNULL(DATEDIFF(DD, '19800101', CONVERT(date, T1.ModifiedDate)) + 1, 0))
END
Macro Updates

If a macro definition is changed, all dependent transformations and calculated fields will be recalculated automatically to reflect the change.

Roles

User Roles in OLAP Cubes

User roles are utilized in OLAP cubes to restrict access. Existing user roles can be viewed by using the "Data Mart → Roles" toolbar button or the "List Roles" context menu under the "Roles" node in the navigation tree.

List Roles

Adding a New Role

To add a new role, click the "New" button in the role list window or use the "Add Role" context menu under the "Roles" node in the navigation tree.

Add Role

Role List and Role Details Windows

These windows allow users to define, edit, and manage access permissions across OLAP cubes, dimensions, and users. They also provide tools for organizing and duplicating existing roles efficiently.

Role Details Window

Role List Window

Role Options

For a full list of role definition options in SSAS, refer to SQL Server Analysis Services documentation.

  • Name: The name of the role.
  • Full Control: Grants full access to the cube.
  • Read Sources: Allows reading of source data.
  • Process Database: Allows cube processing.
  • Read Source Definition: Allows reading of source metadata.
  • Read Definition: Allows reading of the cube definition.
  • Users: A list of logins that belong to the role.
  • Cubes: A list of cubes that the role users have access to.
  • Dimensions: A list of dimensions that the role users have access to.
  • Cube Dimensions: A list of cube dimensions that the role users have access to.

Duplicating Roles

To duplicate an existing role, use the "Duplicate" context menu of the specific role.

Duplicate Role

Filter Diagram by Group

To display only the objects belonging to a specific group in the diagram, select the group from the "Group" dropdown in the menu bar.

Filter Diagram by Group

Groups in Navigation Tree

In the navigation tree, the user can see the defined object groups:

Navigation Tree Group Overview

There are different icons for common groups and workflow groups.
To edit a group, select "Edit Group" from the context menu.

Edit Group Context Menu

You can edit the group definition as follows:

Edit Group Dialog

To view all objects belonging to a specific group, select "List Objects" from the group context menu.

List Objects in Group

Managing Objects in the Group

The user can view and edit the objects belonging to the selected group. In this window, the user can:

  • Add new objects to the group
  • Delete primary objects
  • Exclude inherited objects from the group

These options are shown in the image below.

Group Object Management Window

Object Membership

AnalyticsCreator objects, such as sources, tables, and transformations, can be organized into groups. These groups allow users to:

  • Display only the objects belonging to a specific group.
  • Enable or disable objects in the workflow based on their group membership.

To add an object to a group, select "Object Groups" from the object's context menu, as shown in the image below.

Object Groups Context Menu

The Group Window will open, allowing you to manage group memberships.

Group Membership Window

Creating and Managing Groups

To create a new group, enter a group name and check the "Member" checkbox.
To add all objects dependent on the selected object to the group, select the "Inherit Successors" checkbox.
To add all objects that the selected object depends on to the group, select the "Inherit Predecessors" checkbox.
To create SQL scripts for turning group objects on and off in the workflow package, select the "Create Workflow" checkbox. Three SQL scripts can be created:

  1. SSIS_Configuration Complete Script: Contains the workflow configuration, disabling all objects except those that belong to the group.
  2. SSIS_Configuration Enable Script: Contains the workflow configuration, enabling objects that belong to the group.
  3. SSIS_Configuration Disable Script: Contains the workflow configuration, disabling objects that belong to the group.

The group membership of a single object is shown in the image below:

Group Membership Overview

This object belongs to several groups. The "Inherited" flag indicates that the group membership was inherited from a predecessor or successor object; the originating object is displayed in the "Inherited from Object" column.
To disable an inherited membership, select the "Exclude" checkbox.

Object Groups

Object groups in AnalyticsCreator are a powerful feature that allows users to manage and organize objects, such as sources, tables, and transformations, efficiently.
Groups can be used to filter displayed objects or control workflow execution by including or excluding specific objects.

Object Membership
Objects such as sources, tables, and transformations can be assigned to groups. These groups help to:

  • Display only the objects belonging to a specific group.
  • Enable or disable objects in the workflow based on their group membership.

Groups in the Navigation Tree
In the navigation tree, all defined groups are displayed with different icons for:

  • Common groups
  • Workflow groups

Filtering the Diagram by Group

To display only the objects belonging to a specific group in the diagram:

  1. Use the Group dropdown in the menu bar.
  2. Select the desired group to filter the diagram view.

Interface Settings Overview

The Interface Settings window in AnalyticsCreator allows users to customize various visual elements of the application. It is organized into tabs that include options for diagrams, the navigation tree, and pages. Each tab offers settings for colors, sizes, alignments, and more. The interface also includes preview functionality and buttons to restore defaults or save/cancel changes.

Tabs and Categories

  • Colors: Customize the color scheme of interface elements such as diagrams, tables, packages, and transformations.
  • Diagram: Adjust visual properties of graphical elements like arrows, boxes, and fonts.
  • Navigation Tree: Modify the appearance and spacing of items in the left-hand navigation pane.
  • Pages: Configure the layout and alignment of detail and table views within the application.

Common Interface Elements

  • Buttons at the bottom of the settings window: Default 1, Default 2, Default 3, Cancel, and Save.
  • A filter or dropdown in the main interface (not the settings window) allows filtering by groups or objects.

Color Settings

Each item below includes a color picker for visual customization:

  1. Background/Foreground Arrow
  2. Background/Foreground Text
  3. Background/Foreground Dimension
  4. Background/Foreground External Transformation
  5. Background/Foreground Fact
  6. Background/Foreground Header
  7. Background/Foreground Vault Hub
  8. Background/Foreground Vault Link
  9. Background/Foreground Even Column
  10. Background/Foreground Odd Column
  11. Background/Foreground Other Object
  12. Background/Foreground Package
  13. Background/Foreground Vault Satellite
  14. Background/Foreground Script Transformation
  15. Background/Foreground Source
  16. Background/Foreground Table
  17. Background/Foreground View
  18. Border Diagram
  19. Border Package
  20. Border Color
  21. Line Color Thin
  22. Highlighter Label

Diagram Settings

Adjustments to the layout and structure of diagrams:

  1. Arrow Height
  2. Font Size
  3. Border Thickness
  4. Cell Height
  5. Cell Width
  6. Header Font Size
  7. Header Height
  8. Header Width
  9. Sub Box Height
  10. Sub Box Width
  11. Scale
  12. Minor Connector Line Opacity (%)

Navigation Tree Settings

  1. Icon Size
  2. Line Spacing
  3. Scale
  4. Font Size
  5. Splitter Position

Page Layout Settings

Detail Page

  1. Horizontal Alignment: Left, Center, Right, or Stretch
  2. Vertical Alignment: Top, Center, Bottom, or Stretch
  3. Max Width
  4. Max Height

Table Page

  1. Horizontal Alignment: Left, Center, Right, or Stretch
  2. Vertical Alignment: Top, Center, Bottom, or Stretch
  3. Max Width
  4. Max Height
  5. Frame Scale

Version Control in AnalyticsCreator

AnalyticsCreator supports version control by allowing users to export their repository into a structured JSON format. This enables seamless integration with Git-based systems such as GitHub or Azure DevOps, empowering teams to manage their data product development process with full traceability, collaboration, and control.

Why Use Version Control with AnalyticsCreator?

Version control brings critical benefits to your data warehouse development lifecycle:

  • Track changes to your metadata and configurations
  • Enable collaboration across multiple developers
  • Revert to previous versions when needed
  • Integrate with CI/CD pipelines
  • Support consistent deployment and testing workflows

Exporting Your Repository

1. Export to File

In AnalyticsCreator, click:

File > Save to file

Save To File

2. Choose Format: .acrepox

In the Save dialog, select the file type:

AC JSON Files (*.acrepox)

Select File Type

What's Included in the .acrepox File?

The exported file contains:

  • Project metadata
  • Data layers, ETL logic, and semantic models
  • Parameters, transformations, macros
  • Relationships and object dependencies

🔐 Credentials are NOT included. This ensures secure storage and prevents leaking sensitive information.

Step-by-Step Git Workflow (GitHub or Azure DevOps)

Implementing Version Control in a Collaborative Environment

To implement version control in a collaborative environment, use the Two-Branch Strategy:

  • main: Production-ready version
  • changes: Development and staging updates

Prerequisites

  • A Git repository (on GitHub, Azure DevOps, etc.)
  • Git installed locally
  • Access to the repository
  • Your previously exported .acrepox file

Step-by-Step Process

Step 1: Clone the Repository

git clone https://your-repo-url.git
cd your-repo-folder

Step 2: Switch to the changes Branch

git checkout -b changes

If changes already exists:

git checkout changes

Step 3: Add the Exported .acrepox File

Place your exported file (e.g., CustomerDW.acrepox) into the project folder. Then run:

git add CustomerDW.acrepox 
git commit -m "Updated repository with latest model changes" 
git push origin changes

Step 4: Open Pull Request (PR) to main

From GitHub or Azure DevOps:

  • Go to the Pull Requests section.
  • Create a new PR from changes to main.
  • Include a clear description of what’s changed.

Review Checklist:

  • Have you tested the export?
  • Are credentials excluded?
  • Have you added documentation for changes?

Step 5: Archive and Backup

Store previous versions of .acrepox files in a versions or archive folder within the repo for traceability, or use releases.

Best Practices

  • Export regularly during development milestones
  • Use folders to organize models by project/component
  • Use tags or naming conventions for major releases (see the example after this list)
  • Communicate changes clearly in PR descriptions
  • Use secure CI/CD pipelines for automated deployment
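
For example, a major release could be tagged after merging to main (the tag name is illustrative):

git tag -a v1.0.0 -m "Northwind DWH model, first production release"
git push origin v1.0.0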

Restoring a Version

To restore a specific version:

  1. Checkout the version or tag in Git (see the example after this list)
  2. Open the .acrepox file in AnalyticsCreator:
    File > Open from file
  3. Your full repository structure will be restored as exported.
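
For example, to retrieve the file as it existed at a given tag before opening it (the tag and file names are illustrative):

git checkout v1.0.0 -- CustomerDW.acrepox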

Northwind Data Warehouse

Step-by-Step: SQL Server Northwind Project

Create Your First Data Warehouse with AnalyticsCreator

AnalyticsCreator offers pre-configured demos for testing within your environment. This guide outlines the steps to transition from the Northwind OLTP database to the Northwind Data Warehouse model.

Once completed, you will have a fully generated DWH project ready to run locally.

Load the Demo Project

From the File menu, select Load From Cloud.

Load From Cloud

  • Choose NW_Demo
  • Enter a name for your new repository (default: NW_Demo)
  • Note: This repository contains metadata only; no data is moved.

AnalyticsCreator will automatically generate all required project parameters.

Project Structure: The 5-Layer Model

AnalyticsCreator will generate a Data Warehouse project with five layers:

  1. Sources – Raw data from the source system (Northwind OLTP).
  2. Staging Layer – Temporary storage for data cleansing and preparation.
  3. Persisted Staging Layer – Permanent storage of cleaned data for historization.
  4. Core Layer – Integrated business model—structured and optimized for querying.
  5. Datamart Layer – Optimized for reporting—organized by business topic (e.g., Sales, Inventory).

Lineage View

Northwind Setup (If Not Already Installed)

Step 1: Check if the Northwind Database Exists

Open SQL Server Management Studio (SSMS) and verify that the Northwind database is present.

If yes, skip to the next section. If not, proceed to Step 2.

Step 2: Create the Northwind Database

Run the setup script from Microsoft:

🔗 Download Script

Or copy-paste it into SSMS and execute.

Step 3: Verify Database

USE Northwind;
GO
SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_TYPE = 'BASE TABLE';

Once confirmed, you can proceed with the next steps to configure the AnalyticsCreator Connector with your Northwind database.

Note: AnalyticsCreator uses only native Microsoft Connectors, and we do not store any personal information.

Step 4: Change Database Connector

Navigate to Sources > Connectors.

Northwind Data Warehouse Connectors

You will notice that a connector is already configured. For educational purposes, the connection string is not encrypted yet. To edit or add a new connection string, go to Options > Encrypted Strings > Add. Paste your connection string as demonstrated in the video below.

After adding the new connection string, it's time to test your connection. Go to Sources → Connectors and press the Test button to verify your connection.

Step 5: Create a New Deployment

In this step, you'll configure and deploy your project to the desired destination. Please note that only the metadata will be deployed; there will be no data movement or copy during this process.

  1. Navigate to Deployments in the menu and create a new deployment.
  2. Assign a name to your deployment.
  3. Configure the connection for the destination.
  4. Set the project path where the deployment will be saved.
  5. Select the packages you want to generate.
  6. Review the connection variables.
  7. Finally, click Deploy to complete the deployment.

In this step, your initial Data Warehouse project is created. Note that only the metadata—the structure of your project—is generated at this stage.

You can choose between two options for package generation: SSIS or ADF.

SSIS follows a traditional ETL tool architecture, making it a suitable choice for on-premises data warehouse architectures. In contrast, ADF is designed with a modern cloud-native architecture, enabling seamless integration with various cloud services and big data systems.

This architectural distinction makes ADF a better fit for evolving data integration needs in cloud-based environments.

To execute your package and move your data, you will still need an Integration Runtime (IR). Keep in mind that AnalyticsCreator only generates the project at the metadata level and does not access your data outside the AnalyticsCreator interface; your data is never transmitted to us and remains secure in its original location.

For testing purposes, you can run your package in Microsoft Visual Studio 2022, on your local SQL Server, or even in Azure Data Factory.

Create your first DWH

To help you become familiar with AnalyticsCreator, we have made several sample data sets available for testing:

Click here for the Northwind Data Warehouse