[
{"id":383461199041,"name":"Getting Started","type":"category","path":"/docs/getting-started","breadcrumb":"Getting Started","description":"","searchText":"getting started welcome to the analyticscreator documentation. in this getting started section, you can choose from the following sections: installation system requirements download and installation understanding analyticscreator"}
,{"id":383225948358,"name":"Installation","type":"section","path":"/docs/getting-started/installation","breadcrumb":"Getting Started › Installation","description":"","searchText":"getting started installation installing analyticscreator: 32-bit and 64-bit versions this guide offers step-by-step instructions for installing either the 32-bit or 64-bit version of analyticscreator, depending on your system requirements. note: to ensure optimal performance, verify that your system meets the following prerequisites before installation."}
,{"id":383225948359,"name":"System Requirements","type":"section","path":"/docs/getting-started/system-requirements","breadcrumb":"Getting Started › System Requirements","description":"","searchText":"getting started system requirements to ensure optimal performance, verify that the following requirements are met: note: if you already have sql server installed and accessible, you can proceed directly to the launching analyticscreator section. networking: analyticscreator communicates with the analyticscreator server over port 443. operating system: windows 10 or later. analyticscreator is compatible with windows operating systems starting from version 10. ⚠️ warning: port 443 is the standard https port for secured transactions. it is used for data transfers and ensures that data exchanged between a web browser and websites remains encrypted and protected from unauthorized access. microsoft sql server: sql server on azure virtual machines, azure sql managed instances"}
,{"id":383225948360,"name":"Download and Installation","type":"section","path":"/docs/getting-started/download-and-installation","breadcrumb":"Getting Started › Download and Installation","description":"","searchText":"getting started download and installation access the download page navigate to the analyticscreator download page download the installer locate and download the installation file. verify sql server connectivity before proceeding with the installation, confirm that you can connect to your sql server instance. connecting to sql server: to ensure successful connectivity: use sql server management studio (ssms), a tool for managing and configuring sql server. if ssms is not installed on your system, download it from the official microsoft site: download sql server management studio (ssms) install the software once connectivity is confirmed, follow the instructions below to complete the installation."}
,{"id":383225948361,"name":"Configuring AnalyticsCreator","type":"section","path":"/docs/getting-started/configuring-analyticscreator","breadcrumb":"Getting Started › Configuring AnalyticsCreator","description":"","searchText":"getting started configuring analyticscreator this guide will walk you through configuring analyticscreator with your system. provide the login and password that you received by e-mail from analyticscreator minimum requirements configuration settings the configuration of analyticscreator is very simple. the only mandatory configuration is the sql server settings. sql server settings use localdb to store repository: enables you to store the analyticscreator project (metadata only) on your localdb. sql server to store repository: enter the ip address or the name of your microsoft sql server. security integrated: authentication is based on the current windows user. standard: requires a username and password. azure ad: uses azure ad (now microsoft entra) for microsoft sql server authentication. trust server certificate: accepts the server's certificate as trusted. sql user: the sql server username. sql password: the corresponding password. optional requirements paths unc path to store backup: a network path to store project backups. local sql server path to store backup: a local folder to store your project backups. local sql server path to store database: a local folder to store your sql server database backups. repository database template: the alias format for your repositories. default: repo_{reponame}. dwh database template: the alias format for your dwh templates. default: dwh_{reponame}. proxy settings proxy address: the ip address or hostname of your proxy server. proxy port: the port number used by the proxy. proxy user: the username for proxy authentication. proxy password: the password for the proxy user. now you're ready to create your new data warehouse with analyticscreator."}
,{"id":383225948362,"name":"Understanding AnalyticsCreator","type":"section","path":"/docs/getting-started/understanding-analytics-creator","breadcrumb":"Getting Started › Understanding AnalyticsCreator","description":"","searchText":"getting started understanding analyticscreator there are at least two different approaches to design a holistic business and data model. the bottom-up method, which is shown in the graphic below, and the top-down method, which starts with the conceptual model first, although models can also be loaded from other modeling tools. connect analyticscreator to any data source, especially databases, individual files, data lakes, cloud services, excel files and other extracts. built-in connectors to many common sources are available as well as support of azure data factory, azure analytics. define data - analyticscreator extracts all metadata from the data sources, such as field descriptions, data types, key fields, and all relationships, which is stored in the analyticscreator metadata repository. this will: extract and capture ddl detect structure changes and forward in all higher layers. cognitive suggestion - intelligent wizards help to create a draft version of the model across all layers of the data analytics platform. choose different modelling approaches or create your own approach: data vault 2.0, dimensional, 3 nf, own historical data handling (scd, snapshot, cdc, gapless, ..) use azure devops model - the entire toolset of analyticscreator is at your disposal to further develop the draft model.
behind the holistic graphical model, the generated code is already finished and can be also modified manually, including: automated transformations and wizards collaboration development process supported by data lineage flow-chart own scripting and macros are possible deploy - to deploy the data model in different environments (test, prod, ..), analyticscreator generates deployment packages that are also used for the change process of structures and loadings. deployment packages can be used locally, in fabric, azure as well in hybrid environments. this includes: stored procedures, ssis azure sql db, azure analysis services, synapse arm template for azure data factory tabular models, olap cubes power bi tableau qlik"}
,{"id":383225948363,"name":"Quick Start Guide","type":"section","path":"/docs/getting-started/quick-start-guide","breadcrumb":"Getting Started › Quick Start Guide","description":"","searchText":"getting started quick start guide this quick start guide helps new and trial users understand how to set up, model, and automate a data warehouse using analyticscreator. it covers everything from connectors to data marts - with practical examples based on sap source systems. analyticscreator automates the creation of data warehouses and analytical models. it connects to source systems (like sap, sql server, or others), imports metadata, and generates all required transformation, historization, and loading structures. this quick start shows how to: create connectors and relationships (foreign keys, references) import source tables build transformations for dimensions and facts define relationships and surrogate keys create data marts and calendar dimensions generate cubes and metrics for reporting tools (power bi, etc.)"}
,
{"id":383461199042,"name":"User Guide","type":"category","path":"/docs/user-guide","breadcrumb":"User Guide","description":"","searchText":"user guide you can launch analyticscreator in two ways: from the desktop icon after installation or streaming setup, a desktop shortcut is created. double-click the icon to start analyticscreator. from the installer window open the downloaded analyticscreator installer. instead of selecting install, click launch (labeled as number one in the image below). a window will appear showing the available analyticscreator servers, which deliver the latest version to your system. this process launches analyticscreator without performing a full installation, assuming all necessary prerequisites are already in place."}
,{"id":383225948364,"name":" Desktop Interface","type":"section","path":"/docs/user-guide/desktop-interface","breadcrumb":"User Guide › Desktop Interface","description":"","searchText":"user guide desktop interface with analyticscreator desktop users can: data warehouse creation automatically generate and structure your data warehouse, including fact tables and dimensions. connectors add connections to various data sources and import metadata seamlessly. layer management define and manage layers such as staging, persisted staging, core, and datamart layers. package generation generate integration packages for ssis (sql server integration services) and adf (azure data factory). indexes and partitions automatically configure indexes and partitions for optimized performance. roles and security manage roles and permissions to ensure secure access to your data. galaxies and hierarchies organize data across galaxies and define hierarchies for better data representation. customizations configure parameters, macros, scripts, and object-specific scripts for tailored solutions. filters and predefined transformations apply advanced filters and transformations for data preparation and enrichment. snapshots and versioning create snapshots to track and manage changes in your data warehouse. deployments deploy your projects with flexible configurations, supporting on-premises and cloud solutions. groups and models organize objects into groups and manage models for streamlined workflows. data historization automate the process of creating historical data models for auditing and analysis."}
,{"id":383225948365,"name":"Working with AnalyticsCreator","type":"section","path":"/docs/user-guide/working-with-analyticscreator","breadcrumb":"User Guide › Working with AnalyticsCreator","description":"","searchText":"user guide working with analyticscreator understanding the fundamental operations in analyticscreator desktop is essential for efficiently managing your data warehouse repository and ensuring accuracy in your projects. below are key basic operations you can perform within the interface: edit mode and saving - data warehouse editor single object editing: in the data warehouse repository, you can edit one object at a time. this ensures precision and reduces the risk of unintended changes across multiple objects. how to edit: double-click on any field within an object to enter edit mode. the selected field becomes editable, allowing you to make modifications. save prompt: if any changes are made, a prompt will appear, reminding you to save your modifications before exiting the edit mode. this safeguard prevents accidental loss of changes. unsaved changes: while edits are immediately reflected in the repository interface, they are not permanently saved until explicitly confirmed by clicking the save button. accessing views in data warehouse explorer layer-specific views: each layer in the data warehouse contains views generated by analyticscreator. these views provide insights into the underlying data structure and transformations applied at that layer. how to access: navigate to the data warehouse explorer and click on the view tab for the desired layer. this displays the layer's contents, including tables, fields, and transformations. adding and deleting objects adding new objects: navigate to the appropriate section (e.g., tables, layers, or connectors) in the navigation tree. right-click and select add [object type] to create a new object. provide the necessary details, such as name, description, and configuration parameters. save the object.
deleting objects: select the object in the navigation tree and right-click to choose delete. confirm the deletion when prompted. ⚠️ note: deleting an object may affect dependent objects or configurations. filtering and searching in data warehouse explorer filtering: use filters to narrow down displayed objects by criteria such as name, type, or creation date. searching: enter keywords or phrases in the search bar to quickly locate objects. benefits: these features enhance repository navigation and efficiency when working with large datasets. object dependencies and relationships dependency view: for any selected object, view its dependencies and relationships with other objects by accessing the dependencies tab. impact analysis: analyze how changes to one object might affect other parts of the data warehouse. managing scripts predefined scripts: add scripts for common operations like data transformations or custom sql queries. edit and run: double-click a script in the navigation tree to modify it. use run script to execute and view results. validating and testing changes validation tools: use built-in tools to check for errors or inconsistencies in your repository. evaluate changes: use the evaluate button before saving or deploying to test functionality and ensure correctness. locking and unlocking objects locking: prevent simultaneous edits by locking objects, useful in team environments. unlocking: release locks once edits are complete to allow further modifications by others. exporting and importing data export: export objects, scripts, or configurations for backup or sharing. use the export option in the toolbar or navigation tree. import: import previously exported files to replicate configurations or restore backups. use the import option and follow the prompts to load the data."}
,{"id":383225948366,"name":"Advanced Features","type":"section","path":"/docs/user-guide/advanced-features","breadcrumb":"User Guide › Advanced Features","description":"","searchText":"user guide advanced features analyticscreator provides a rich set of advanced features to help you configure, customize, and optimize your data warehouse projects. these features extend the tool's capabilities beyond standard operations, enabling more precise control and flexibility. scripts scripts in analyticscreator allow for detailed customization at various stages of data warehouse creation and deployment. they enhance workflow flexibility and enable advanced repository configurations. types of scripts object-specific scripts define custom behavior for individual objects, such as tables or transformations, to meet specific requirements. pre-creation scripts execute tasks prior to creating database objects. example: define sql functions to be used in transformations. pre-deployment scripts configure processes that run before deploying the project. example: validate dependencies or prepare the target environment. post-deployment scripts handle actions executed after deployment is complete. example: perform cleanup tasks or execute stored procedures. pre-workflow scripts manage operations that occur before initiating an etl workflow. example: configure variables or initialize staging environments. repository extension scripts extend repository functionality with user-defined logic. example: add custom behaviors to redefine repository objects. historization the historization features in analyticscreator enable robust tracking and analysis of historical data changes, supporting advanced time-based reporting and auditing. key components slowly changing dimensions (scd) automate the management of changes in dimension data.
supports various scd types including: type 1 (overwrite) type 2 (versioning) others as needed time dimensions create and manage temporal structures to facilitate time-based analysis. example: build fiscal calendars or weekly rollups for time-series analytics. snapshots capture and preserve specific states of the data warehouse. use cases include audit trails, historical reporting, and rollback points. parameters and macros these tools provide centralized control and reusable logic to optimize workflows and streamline repetitive tasks. parameters dynamic management: centralize variable definitions for consistent use across scripts, transformations, and workflows. reusable configurations: update values in one place to apply changes globally. use cases: set default values for connection strings, table prefixes, or date ranges. macros reusable logic: create parameterized scripts for tasks repeated across projects or workflows. streamlined processes: use macros to enforce consistent logic in transformations and calculations. example: define a macro to calculate age from a birthdate and reuse it across transformations. summary analyticscreator's advanced features offer deep customization options that allow you to: control object-level behavior through scripting track and manage historical data effectively streamline project-wide settings with parameters reuse logic with powerful macros these capabilities enable you to build scalable, maintainable, and highly flexible data warehouse solutions."}
,{"id":383225948367,"name":"Wizards","type":"section","path":"/docs/user-guide/wizards","breadcrumb":"User Guide › Wizards","description":"","searchText":"user guide wizards the wizards in analyticscreator provide a guided and efficient way to perform various tasks related to building and managing a data warehouse. below is an overview of the eight available wizards and their core functions. dwh wizard the dwh wizard is designed to quickly create a semi-ready data warehouse. it is especially useful when the data source contains defined table relationships or manually maintained references. supports multiple architectures: classic (kimball), data vault 1.0 & 2.0, or mixed. automatically creates imports, dimensions, facts, hubs, satellites, and links. customizable field naming, calendar dimensions, and sap deltaq integration. source wizard the source wizard adds new data sources to the repository. supports source types: table or query. retrieves table relationships and sap-specific metadata. allows query testing and schema/table filtering. import wizard the import wizard defines and manages the import of external data into the warehouse. configures source, target schema, table name, and ssis package. allows additional attributes and parameters. historization wizard the historization wizard manages how tables or transformations are historized. supports scd types: 0, 1, and 2. configures empty record behavior and vault id usage. supports ssis-based or stored procedure historization. transformation wizard the transformation wizard creates and manages data transformations. supports regular, manual, script, and external transformation types. handles both historicized and non-historicized data. configures joins, fields, persistence, and metadata settings. calendar transformation wizard the calendar transformation wizard creates calendar transformations used in reporting and time-based models. configures schema, name, start/end dates, and date-to-id macros. 
assigns transformations to specific data mart stars. time transformation wizard the time transformation wizard creates time dimensions to support time-based analytics. configures schema, name, time period, and time-to-id macros. assigns transformations to specific data mart stars. snapshot transformation wizard the snapshot transformation wizard creates snapshot dimensions for snapshot-based analysis. allows creation of one snapshot dimension per data warehouse. configures schema, name, and data mart star assignment. by using these eight wizards, analyticscreator simplifies complex tasks, ensures consistency, and accelerates the creation and management of enterprise data warehouse solutions."}
,
{"id":383461199043,"name":"Reference","type":"category","path":"/docs/reference","breadcrumb":"Reference","description":"","searchText":"reference structured reference for the analyticscreator user interface, entities, types, and parameters. this reference guide is organized into sections and subsections to help you quickly find interface elements, object types, dialogs, wizards, and configuration details in analyticscreator. sections [link:365118109942|user interface] toolbar, navigation tree, dataflow diagram, pages, lists, dialogs, and wizards. [link:365178121463|entity types] connector types, source types, table types, transformation types, package types, and more. [link:365178123475|entities] reference pages for main analyticscreator object classes such as layers, sources, tables, and packages. [link:365178123499|parameters] system and project parameters including technical and environment-related settings."}
,{"id":383461259458,"name":"User Interface","type":"section","path":"/docs/reference/user-interface","breadcrumb":"Reference › User Interface","description":"","searchText":"reference user interface user interface"}
,{"id":383509396676,"name":"Toolbar","type":"subsection","path":"/docs/reference/user-interface/toolbar","breadcrumb":"Reference › User Interface › Toolbar","description":"","searchText":"reference user interface toolbar toolbar"}
,{"id":379959516358,"name":"File","type":"topic","path":"/docs/reference/user-interface/toolbar/file","breadcrumb":"Reference › User Interface › Toolbar › File","description":"","searchText":"reference user interface toolbar file the file menu contains commands for creating, connecting, and maintaining repositories in analyticscreator. from here, you can start new projects, connect to existing repositories, synchronize metadata, and back up or restore configurations. it's the primary place to manage the setup and ongoing maintenance of your warehouse models. [link:380044750015|dwh wizard] rapidly creates a semi-ready warehouse, ideal when sources include predefined or curated table references. [link:373340595405|sync dwh] synchronizes the warehouse with metadata and source changes to keep structures current. [link:373340595406|new] creates a new repository configuration for metadata and model definitions. [link:373340595407|connect] connects to an existing repository database to reuse or update metadata. [link:373340595408|backup & restore — load from file] imports repository data or metadata from a local file. [link:373340595408| backup & restore — save to file] saves the current repository or project metadata to a portable file. [link:373340595408|backup & restore — load from cloud] restores repository data directly from cloud storage. [link:373340595408|backup & restore — save to cloud] backs up the repository or metadata to connected cloud storage. find on diagram highlights specific tables, columns, or objects within the modeling diagram."}
,{"id":380042415310,"name":"Sources","type":"topic","path":"/docs/reference/user-interface/toolbar/sources","breadcrumb":"Reference › User Interface › Toolbar › Sources","description":"","searchText":"reference user interface toolbar sources sources the sources menu is where you configure data connectivity. add new connectors, manage connected systems (databases and files), and maintain reference tables used across models. icon feature description connectors lists and manages available connectors for different data sources. sources displays and manages connected source systems (databases and flat files). references manages reference tables for lookups, hierarchies, or static mappings. new connector adds a new data source connector (select type and authentication). new connector imports connector definitions from a previously exported file. new connector imports connector settings directly from cloud storage or a repository."}
,{"id":380044750015,"name":"DWH","type":"topic","path":"/docs/reference/user-interface/toolbar/dwh","breadcrumb":"Reference › User Interface › Toolbar › DWH","description":"","searchText":"reference user interface toolbar dwh dwh the dwh menu focuses on warehouse modeling. define layers and schemas, configure tables and indexes, and manage reusable assets such as references, macros, predefined transformations, and snapshots. icon feature description layers configure warehouse layers and their responsibilities. schemas list and manage schemas within the warehouse model. tables display and configure fact and dimension tables. indexes list and configure indexes to optimize query performance. references manage reference tables for lookups, hierarchies, or static mappings. macros create and manage reusable macro actions. predefined transformations library of ready-to-use transformations for common patterns. snapshots define snapshot structures to capture point-in-time states."}
,{"id":380044818681,"name":"Data mart","type":"topic","path":"/docs/reference/user-interface/toolbar/data-mart","breadcrumb":"Reference › User Interface › Toolbar › Data mart","description":"","searchText":"reference user interface toolbar data mart data mart the data products menu models analytical products for bi consumption. organize related stars into galaxies, define star schemas, manage hierarchies and roles, and configure partitions and semantic models. icon feature description galaxies organize related star schemas into a galaxy for analytical grouping. stars define star schemas containing facts and dimensions. hierarchies manage hierarchical structures (e.g., year → quarter → month). roles define user roles and access permissions for data products. partitions configure table partitions for scale and performance. models define semantic models built on top of the warehouse for bi tools."}
,{"id":380044750017,"name":"ETL","type":"topic","path":"/docs/reference/user-interface/toolbar/etl","breadcrumb":"Reference › User Interface › Toolbar › ETL","description":"","searchText":"reference user interface toolbar etl etl the etl menu contains development assets for extraction, transformation, and loading. group work into packages, write scripts, manage imports, and handle historization scenarios with reusable transformations and generated dimensions. icon feature description packages list etl packages that group transformations and workflows. scripts contain sql or script-based transformations for etl. imports manage import processes from external sources into the warehouse. historizations handle slowly changing dimensions and historical data tracking. transformations define transformation logic for staging and warehouse layers. new transformations launch transformation wizard calendar dimension generates a reusable calendar dimension (year, month, day, etc.). time dimension creates a detailed time dimension (hours, minutes, seconds). snapshot dimension creates snapshot dimensions to capture point-in-time records."}
,{"id":380044819646,"name":"Deployment","type":"topic","path":"/docs/reference/user-interface/toolbar/deployment","breadcrumb":"Reference › User Interface › Toolbar › Deployment","description":"","searchText":"reference user interface toolbar deployment deployment the deployment menu packages your modeled assets for delivery to target environments. use it to build and export deployment artifacts for your warehouse or data products. icon feature description deployment package build and export deployment packages for the warehouse or data products."}
,{"id":380044819647,"name":"Options","type":"topic","path":"/docs/reference/user-interface/toolbar/options","breadcrumb":"Reference › User Interface › Toolbar › Options","description":"","searchText":"reference user interface toolbar options options the options menu centralizes application-wide settings. configure user groups, warehouse defaults, interface preferences, global parameters, and encrypted values used throughout projects. icon feature description user groups manage user groups and access levels. dwh settings configure global warehouse settings such as naming and storage rules. interface customize interface preferences and appearance. parameter define global and local parameters for etl and modeling. encrypted strings manage encrypted connection strings and sensitive values."}
,{"id":380044750021,"name":"Help","type":"topic","path":"/docs/reference/user-interface/toolbar/help","breadcrumb":"Reference › User Interface › Toolbar › Help","description":"","searchText":"reference user interface toolbar help help the help menu provides export tools and links to external resources. generate documentation, open knowledge resources, and review legal and product information. icon feature description export to visio export diagrams to microsoft visio for documentation. export in word export documentation directly to a microsoft word file. wikipedia open a relevant wikipedia article for reference. videos links to instructional or demo videos. community links to the user community or forums. version history show version history and change logs. eula display the end user license agreement. about show software version, credits, and licensing information."}
,{"id":383509396677,"name":"Navigation tree","type":"subsection","path":"/docs/reference/user-interface/navigation-tree","breadcrumb":"Reference › User Interface › Navigation tree","description":"","searchText":"reference user interface navigation tree navigation tree"}
,{"id":380121766108,"name":"Connectors","type":"topic","path":"/docs/reference/user-interface/navigation-tree/connectors","breadcrumb":"Reference › User Interface › Navigation tree › Connectors","description":"","searchText":"reference user interface navigation tree connectors reference page for defining and maintaining source system connectors in analyticscreator. overview the connectors menu in analyticscreator defines metadata for establishing a connection to a source system. each connector includes a name, a source type, and a connection string. these connections are used in etl packages to access external data sources during data warehouse generation. function connectors allow analyticscreator to integrate with relational databases and other supported systems. the connection string is stored in the project metadata and referenced during package execution. each connector is project-specific and can be reused across multiple packages or layers. access connectors are managed under the sources section in the analyticscreator user interface. all defined connectors are listed in a searchable grid, and new entries can be created or deleted from this screen. selecting new opens a connector definition form with metadata fields and a connection string editor. how to access navigation tree connectors → connector → edit connector; connectors → add connector toolbar sources → add diagram not applicable visual element {searchconnectors} → connector → double-click screen overview the first image below shows the main connectors interface. the second shows the editor that appears when a new connector is created. list connectors id property description 1 connectorname logical name identifying the connector within the project 2 connectortype type of source system (e.g., mssql, oracle, etc.) 
3 connectionstring ole db or equivalent connection string used to connect to the source system new connector dialog id property description 1 connectorname logical name identifying the connector within the project. 2 connectortype type of source system, for example mssql, oracle, or another supported connector type. 3 azure source type type of azure source, for example azure sql, azure postgres, or another supported azure source type. 4 connectionstring ole db or equivalent connection string used to connect to the source system. 5 cfg.ssis controls whether the connection string is excluded from cfg.ssis_configurations. related topics [link:#|source] [link:#|connector types] [link:#|refresh source metadata] [link:#|create source]"}
,{"id":380121766109,"name":"Layers","type":"topic","path":"/docs/reference/user-interface/navigation-tree/layers","breadcrumb":"Reference › User Interface › Navigation tree › Layers","description":"","searchText":"reference user interface navigation tree layers reference page for defining and maintaining logical layers in analyticscreator. overview the layers feature in analyticscreator defines the logical and sequential structure in which metadata objects are grouped and generated. each object in a project is assigned to a layer, which determines its build order and visibility during solution generation. function layers represent vertical slices in a project's architecture, such as source, staging, persisted staging, transformation, data warehouse - core, or datamart. one layer can have one or more schemas associated with it. they are used to control: object assignment and isolation layers define where objects belong and keep architectural responsibilities clearly separated. deployment sequencing layers control the order in which structures are generated and deployed across environments. selective generation specific parts of the solution can be included or excluded based on layer configuration. dependency resolution layer order influences build-time logic and helps resolve dependencies between generated objects. layer configuration impacts how analyticscreator generates the sql database schema, azure data factory pipelines, and semantic models. access layers are accessible from the dwh section. a dedicated layers panel displays all defined layers, their order, and their assignment status. how to access navigation tree layers → layer → edit layer toolbar dwh → layers diagram not applicable visual element not applicable screen overview the image below shows the list layers interface with columns labeled for easy identification. id property description 1 name name of the layer used to identify it within the project structure. 
2 seqnr defines the sequence number of the layer and controls its display order in the lineage. 3 description optional field used to provide a more detailed description of the layer. behavior execution order layers are executed in the defined top-down order. generation scope disabling a layer excludes its objects from generation. object assignment each object must belong to one and only one layer. build influence layers influence sql build context and pipeline generation. usage context layers are typically aligned with logical data architecture phases. common usage includes separating ingestion, transformation, modeling, and reporting responsibilities. notes layer configurations are stored within the project metadata. changes to layer order or status require regeneration of the solution. layer visibility and behavior apply across all deployment targets. related topics [link:#|schema] [link:#|table] [link:#|transformation] [link:#|predefined transformations]"}
,{"id":380121766110,"name":"Packages","type":"topic","path":"/docs/reference/user-interface/navigation-tree/packages","breadcrumb":"Reference › User Interface › Navigation tree › Packages","description":"","searchText":"reference user interface navigation tree packages "}
,{"id":380121766111,"name":"Indexes","type":"topic","path":"/docs/reference/user-interface/navigation-tree/indexes","breadcrumb":"Reference › User Interface › Navigation tree › Indexes","description":"","searchText":"reference user interface navigation tree indexes "}
,{"id":380121767100,"name":"Roles","type":"topic","path":"/docs/reference/user-interface/navigation-tree/roles","breadcrumb":"Reference › User Interface › Navigation tree › Roles","description":"","searchText":"reference user interface navigation tree roles "}
,{"id":380121783543,"name":"Galaxies","type":"topic","path":"/docs/reference/user-interface/navigation-tree/galaxies","breadcrumb":"Reference › User Interface › Navigation tree › Galaxies","description":"","searchText":"reference user interface navigation tree galaxies "}
,{"id":380121783544,"name":"Hierarchies","type":"topic","path":"/docs/reference/user-interface/navigation-tree/hierarchies","breadcrumb":"Reference › User Interface › Navigation tree › Hierarchies","description":"","searchText":"reference user interface navigation tree hierarchies "}
,{"id":380121784533,"name":"Partitions","type":"topic","path":"/docs/reference/user-interface/navigation-tree/partitions","breadcrumb":"Reference › User Interface › Navigation tree › Partitions","description":"","searchText":"reference user interface navigation tree partitions "}
,{"id":380121767101,"name":"Parameters","type":"topic","path":"/docs/reference/user-interface/navigation-tree/parameters","breadcrumb":"Reference › User Interface › Navigation tree › Parameters","description":"","searchText":"reference user interface navigation tree parameters "}
,{"id":380121767102,"name":"Macros","type":"topic","path":"/docs/reference/user-interface/navigation-tree/macros","breadcrumb":"Reference › User Interface › Navigation tree › Macros","description":"","searchText":"reference user interface navigation tree macros "}
,{"id":380121784534,"name":"Object scripts","type":"topic","path":"/docs/reference/user-interface/navigation-tree/object-scripts","breadcrumb":"Reference › User Interface › Navigation tree › Object scripts","description":"","searchText":"reference user interface navigation tree object scripts "}
,{"id":380121784535,"name":"Filters","type":"topic","path":"/docs/reference/user-interface/navigation-tree/filters","breadcrumb":"Reference › User Interface › Navigation tree › Filters","description":"","searchText":"reference user interface navigation tree filters "}
,{"id":380121767103,"name":"Predefined transformations","type":"topic","path":"/docs/reference/user-interface/navigation-tree/predefined-transformations","breadcrumb":"Reference › User Interface › Navigation tree › Predefined transformations","description":"","searchText":"reference user interface navigation tree predefined transformations "}
,{"id":380121767104,"name":"Snapshots","type":"topic","path":"/docs/reference/user-interface/navigation-tree/snapshots","breadcrumb":"Reference › User Interface › Navigation tree › Snapshots","description":"","searchText":"reference user interface navigation tree snapshots "}
,{"id":380121767105,"name":"Deployments","type":"topic","path":"/docs/reference/user-interface/navigation-tree/deployments","breadcrumb":"Reference › User Interface › Navigation tree › Deployments","description":"","searchText":"reference user interface navigation tree deployments "}
,{"id":380121767106,"name":"Groups","type":"topic","path":"/docs/reference/user-interface/navigation-tree/groups","breadcrumb":"Reference › User Interface › Navigation tree › Groups","description":"","searchText":"reference user interface navigation tree groups "}
,{"id":380121784536,"name":"Models","type":"topic","path":"/docs/reference/user-interface/navigation-tree/models","breadcrumb":"Reference › User Interface › Navigation tree › Models","description":"","searchText":"reference user interface navigation tree models "}
,{"id":383509174508,"name":"Dataflow diagram","type":"subsection","path":"/docs/reference/user-interface/dataflow-diagram","breadcrumb":"Reference › User Interface › Dataflow diagram","description":"","searchText":"reference user interface dataflow diagram dataflow diagram"}
,{"id":383509174509,"name":"Pages","type":"subsection","path":"/docs/reference/user-interface/pages","breadcrumb":"Reference › User Interface › Pages","description":"","searchText":"reference user interface pages pages"}
,{"id":383509396683,"name":"Lists","type":"subsection","path":"/docs/reference/user-interface/lists","breadcrumb":"Reference › User Interface › Lists","description":"","searchText":"reference user interface lists lists"}
,{"id":383509396684,"name":"Dialogs","type":"subsection","path":"/docs/reference/user-interface/dialogs","breadcrumb":"Reference › User Interface › Dialogs","description":"","searchText":"reference user interface dialogs dialogs"}
,{"id":383509340360,"name":"Wizards","type":"subsection","path":"/docs/reference/user-interface/wizards","breadcrumb":"Reference › User Interface › Wizards","description":"","searchText":"reference user interface wizards wizards"}
,{"id":383461259455,"name":"Entity types","type":"section","path":"/docs/reference/entity-types","breadcrumb":"Reference › Entity types","description":"","searchText":"reference entity types entity types"}
,{"id":383509396685,"name":"Connector types","type":"subsection","path":"/docs/reference/entity-types/connector-types","breadcrumb":"Reference › Entity types › Connector types","description":"","searchText":"reference entity types connector types connector types"}
,{"id":383509396687,"name":"Source types","type":"subsection","path":"/docs/reference/entity-types/source-types","breadcrumb":"Reference › Entity types › Source types","description":"","searchText":"reference entity types source types source types"}
,{"id":383509396688,"name":"Table types","type":"subsection","path":"/docs/reference/entity-types/table-types","breadcrumb":"Reference › Entity types › Table types","description":"","searchText":"reference entity types table types table types"}
,{"id":383509396689,"name":"Transformation types","type":"subsection","path":"/docs/reference/entity-types/transformation-types","breadcrumb":"Reference › Entity types › Transformation types","description":"","searchText":"reference entity types transformation types transformation types"}
,{"id":383461259456,"name":"Entities ","type":"section","path":"/docs/reference/entities","breadcrumb":"Reference › Entities ","description":"","searchText":"reference entities entities"}
,{"id":383461259457,"name":"Parameters ","type":"section","path":"/docs/reference/parameters","breadcrumb":"Reference › Parameters ","description":"","searchText":"reference parameters parameters"}
,
{"id":383461199045,"name":"Tutorials","type":"category","path":"/docs/tutorials","breadcrumb":"Tutorials","description":"","searchText":"tutorials to become familiar with analyticscreator, we have made certain data sets available. you may use these to test analyticscreator: click here for the northwind data warehouse"}
,{"id":383225948382,"name":"Northwind DWH Walkthrough","type":"section","path":"/docs/tutorials/northwind-dwh-walkthrough","breadcrumb":"Tutorials › Northwind DWH Walkthrough","description":"","searchText":"tutorials northwind dwh walkthrough step-by-step: sql server northwind project create your first data warehouse with analyticscreator analyticscreator offers pre-configured demos for testing within your environment. this guide outlines the steps to transition from the northwind oltp database to the northwind data warehouse model. once completed, you will have a fully generated dwh project ready to run locally. load the demo project from the file menu, select load from cloud. choose nw_demo enter a name for your new repository (default: nw_demo) note: this repository contains metadata only; no data is moved. analyticscreator will automatically generate all required project parameters. project structure: the 5-layer model analyticscreator will generate a data warehouse project with five layers: sources: raw data from the source system (northwind oltp). staging layer: temporary storage for data cleansing and preparation. persisted staging layer: permanent storage of cleaned data for historization. core layer: integrated business model, structured and optimized for querying. datamart layer: optimized for reporting, organized by business topic (e.g., sales, inventory). northwind setup (if not already installed) step 1: check if the northwind database exists open sql server management studio (ssms) and verify that the northwind database is present. if yes, skip to the next section. if not, proceed to step 2. step 2: create the northwind database run the setup script from microsoft: download script or copy-paste it into ssms and execute. 
step 3: verify database use northwind; go select * from information_schema.tables where table_schema = 'dbo' and table_type = 'base table'; once confirmed, you can proceed with the next steps to configure the analyticscreator connector with your northwind database. note: analyticscreator uses only native microsoft connectors, and we do not store any personal information. step 4: change database connector navigate to sources > connectors. you will notice that a connector is already configured. for educational purposes, the connection string is not encrypted yet. to edit or add a new connection string, go to options > encrypted strings > add. paste your connection string as demonstrated in the video below. after adding the new connection string, it's time to test your connection. go to sources > connectors and press the test button to verify your connection. step 5: create a new deployment in this step, you'll configure and deploy your project to the desired destination. please note that only the metadata will be deployed; there will be no data movement or copy during this process. navigate to deployments in the menu and create a new deployment. assign a name to your deployment. configure the connection for the destination set the project path where the deployment will be saved. select the packages you want to generate. review the connection variables and click deploy to initiate the process. finally, click deploy to complete the deployment. in this step, your initial data warehouse project is created. note that only the metadata (the structure of your project) is generated at this stage. you can choose between two options for package generation: ssis (sql server integration services) adf (azure data factory) ssis follows a traditional etl tool architecture, making it a suitable choice for on-premises data warehouse architectures. 
in contrast, adf is designed with a modern cloud-native architecture, enabling seamless integration with various cloud services and big data systems. this architectural distinction makes adf a better fit for evolving data integration needs in cloud-based environments. to execute your package and move your data, you will still need an integration runtime (ir). keep in mind that analyticscreator only generates the project at the metadata level and does not access your data outside the analyticscreator interface. it does not link your data to us, ensuring that your data remains secure in its original location. for testing purposes, you can run your package in microsoft visual studio 2022, on your local sql server, or even in azure data factory."}
,
{"id":383461199046,"name":"Functions","type":"category","path":"/docs/functions-features","breadcrumb":"Functions","description":"","searchText":"functions get started by clicking on one of these sections: main functionality gui process support data sources export functionality use of analytics frontends"}
,{"id":383225948376,"name":"Main Functionality","type":"section","path":"/docs/functions-features/main-functionality","breadcrumb":"Functions › Main Functionality","description":"","searchText":"functions main functionality full bi-stack automation: from source to data warehouse through to frontend. holistic data model: complete view of the entire data model. this also allows for rapid prototyping of various models. data warehouses: ms sql server 2012-2022, azure sql database, azure synapse analytics dedicated, azure sql managed instance, sql server on azure vms, ms fabric sql. analytical databases: ssas tabular databases, ssas multidimensional databases, azure synapse analytics dedicated, power bi, power bi premium, duckdb, tableau, and qlik sense. data lakes: ms azure blob storage, onelake. frontends: power bi, qlik sense, tableau, powerpivot (excel). pipelines/etl: sql server integration services (ssis) packages, azure data factory 2.0 pipelines, azure databricks, fabric data factory. azure: azure sql server, azure data factory pipelines. deployment: visual studio solution (ssdt), creation of dacpac files, ssis packages, data factory arm templates, xmla files. modelling approaches: top-down modelling, bottom-up modelling, import from external modelling tool, dimensional/kimball, data vault 2.0, mixed approach of dv 2.0 and kimball (combining the best of both worlds by using elements of both data vault 2.0 and kimball modelling), inmon, 3nf, or any custom data model. the analyticscreator wizard can help you create a data vault model automatically and also supports strict dan linstedt data vault techniques. historization approaches: slowly changing dimensions (scd) type 0, type 1, type 2, mixed, snapshot historization, gapless historization, change-based calculations. surrogate key: auto-increment, long integer, hash key, custom definition of hash algorithm."}
,{"id":383225948377,"name":"GUI","type":"section","path":"/docs/functions-features/gui","breadcrumb":"Functions › GUI","description":"","searchText":"functions gui windows gui embedded version control multi-user development supporting distributed development manual object locking possible predefined templates cloud-based repository cloud service support available data lineage macro language for more flexible development predefined, datatype-based transformations calculated columns in each dwh table single point development: the whole design is possible in analyticscreator. external development is not necessary embedding external code automatic documentation in word and visio export to microsoft devops, github, etc. the analyticscreator repository is stored in an ms sql server database and can be modified and extended with additional functionality"}
,{"id":383225948378,"name":"Process support","type":"section","path":"/docs/functions-features/process-support","breadcrumb":"Functions › Process support","description":"","searchText":"functions process support etl procedure protocol error handling on etl procedures consistency on etl failure rollback on etl procedures automatic recognition of source structure changes and automatic adaptation of the connected dwh entire dwh life-cycle support delta and full load of data models near real-time data loads possible external orchestration/scheduling for etl process internal orchestration/scheduling for etl process with generated ms-ssis packages several workflow configurations no runtime is necessary for analyticscreator: daily processing of created dwhs runs without analyticscreator no additional licences necessary for the design component no ms sql server necessary"}
,{"id":383225948379,"name":"Data Sources","type":"section","path":"/docs/functions-features/data-sources","breadcrumb":"Functions › Data Sources","description":"","searchText":"functions data sources built-in connectivity: ms sql server, oracle, sap erp, s4/hana with theobald software (odp, deltaq/tables), sap business one with analyticscreator's own connectivity, sap odp objects, excel, access, csv/text, oledb (e.g. teradata, netezza, db2), odbc (mysql, postgres), odata, azure blob storage (csv, parquet, avro), rest, ms sharepoint, google ads, amazon, salesforce crm, hubspot crm, ms dynamics 365 business central, ms dynamics navision 3rd party connectivity: access to more than 250 data sources with c-data connectors [www.cdata.com/drivers]. this allows for connection to analyticscreator directly by an odbc or ole db driver, or by connecting an ingest layer with externally filled tables. define your own connectivity: (any data source, hadoop, google bigquery/analytics, amazon, shop solutions, facebook, linkedin, x (formerly twitter)) in all cases of access to source data, an analyticscreator-metadata-connector is created. the analyticscreator-metadata-connector is a description of the data sources you use, for easier handling in analyticscreator. analyticscreator can automatically create a metadata connector by extracting the data definition from your source data. it contains information about key fields, referential integrity, field names, and descriptions."}
,{"id":383225948380,"name":"Export Functionality","type":"section","path":"/docs/functions-features/export-functionality","breadcrumb":"Functions › Export Functionality","description":"","searchText":"functions export functionality azure blob storage, text, csv files, any target system using an oledb or odbc driver, automated type conversion, export performed by ssis packages or azure data factory pipelines, export for example to oracle, snowflake, or synapse"}
,{"id":383225948381,"name":"Use of Analytics Frontends","type":"section","path":"/docs/functions-features/use-of-analytics-frontends","breadcrumb":"Functions › Use of Analytics Frontends","description":"","searchText":"functions use of analytics frontends push concept: power bi, tableau, and qlik models are created automatically. all models described here are created at the same time. pull concept: there are many bi frontends that allow you to connect to the specified microsoft data sources. check with your vendor or with us about what is possible. analyticscreator allows you to develop a specific solution for your analytics frontend so that the model is created automatically for your bi frontend (push concept)."}
]