{"id":193656403493,"name":"Getting Started","type":"category","path":"/docs/getting-started","breadcrumb":"Getting Started","description":"","searchText":"getting started welcome to the analyticscreator documentation. in this getting started section, you can choose from the following sections: installation system requirements download and installation understanding analyticscreator"}
,{"id":194231638534,"name":"Installation","type":"section","path":"/docs/getting-started/installation","breadcrumb":"Getting Started › Installation","description":"","searchText":"getting started installation installing analyticscreator: 32-bit and 64-bit versions this guide offers step-by-step instructions for installing either the 32-bit or 64-bit version of analyticscreator, depending on your system requirements. 💡note: to ensure optimal performance, verify that your system meets the following prerequisites before installation."}
,{"id":194231638452,"name":"System Requirements","type":"section","path":"/docs/getting-started/system-requirements","breadcrumb":"Getting Started › System Requirements","description":"","searchText":"getting started system requirements to ensure optimal performance, verify that the following requirements are met: 💡 note: if you already have sql server installed and accessible, you can proceed directly to the launching analyticscreator section. networking: communication over port 443 is where analytics communicates to the analyticscreator server. operating system: windows 10 or later. analyticscreator is compatible with windows operating systems starting from version 10. ⚠️ warning: port 443 is the standard https port for secured transactions. it is used for data transfers and ensures that data exchanged between a web browser and websites remains encrypted and protected from unauthorized access. microsoft sql server: sql server on azure virtual machines azure sql managed instances"}
,{"id":193657815102,"name":"Download and Installation","type":"section","path":"/docs/getting-started/download-and-installation","breadcrumb":"Getting Started › Download and Installation","description":"","searchText":"getting started download and installation access the download page navigate to the analyticscreator download page download the installer locate and download the installation file. verify sql server connectivity before proceeding with the installation, confirm that you can connect to your sql server instance. connecting to sql server: to ensure successful connectivity: use sql server management studio (ssms), a tool for managing and configuring sql server. if ssms is not installed on your system, download it from the official microsoft site: download sql server management studio (ssms) install the software once connectivity is confirmed, follow the instructions below to complete the installation."}
,{"id":194809538144,"name":"Configuring AnalyticsCreator","type":"section","path":"/docs/getting-started/configuring-analyticscreator","breadcrumb":"Getting Started › Configuring AnalyticsCreator","description":"","searchText":"getting started configuring analyticscreator this guide will walk you through configuring analyticscreator with your system. provide the login and password that you received by e-mail from analyticscreator minimum requirements configuration settings the configuration of analyticscreator is very simple. the only mandatory configuration is the sql server settings. sql server settings use localdb to store repository: enables you to store the analyticscreator project (metadata only) on your localdb. sql server to store repository: enter the ip address or the name of your microsoft sql server. security integrated: authentication is based on the current windows user. standard: requires a username and password. azure ad: uses azure ad (now microsoft entra) for microsoft sql server authentication. trust server certificate: accepts the server's certificate as trusted. sql user: the sql server username. sql password: the corresponding password. optional requirements paths unc path to store backup: a network path to store project backups. local sql server path to store backup: a local folder to store your project backups. local sql server path to store database: a local folder to store your sql server database backups. repository database template: the alias format for your repositories. default: repo_{reponame}. dwh database template: the alias format for your dwh templates. default: dwh_{reponame}. proxy settings proxy address: the ip address or hostname of your proxy server. proxy port: the port number used by the proxy. proxy user: the username for proxy authentication. proxy password: the password for the proxy user. now you're ready to create your new data warehouse with analyticscreator."}
,{"id":194231638533,"name":"Understanding AnalyticsCreator","type":"section","path":"/docs/getting-started/understanding-analytics-creator","breadcrumb":"Getting Started › Understanding AnalyticsCreator","description":"","searchText":"getting started understanding analyticscreator there are at least two different approaches to design a holistic business and data model. the bottom-up method, which is shown in the graphic below and the top-down method, which starts with the conceptual model first, although models can also be loaded from other modeling tools. connect analyticscreator to any data source, especially databases, individual files, data lakes, cloud services, excel files and other extracts. build-in connectors to many common sources are available as well as support of azure data factory, azure analytics. define data - analyticscreator extracts all metadata from the data sources, such as field descriptions, data types, key fields, and all relationships, which is stored in the analyticscreator metadata repository. this will: extract and capture ddl detect structure changes and forward in all higher layers. cognitive suggestion- intelligent wizards help to create a draft version of the model across all layers of the data analytics platform. choose different modelling approaches or create your own approach: data vault 2.0, dimensional, 3 nf, own historical data handling (scd, snapshot, cdc, gapless, ..) use azure devops model- the entire toolset of analyticscreator is at your disposal to further develop the draft model. behind the holistic graphical model, the generated code is already finished and can be also modified manually, including: automated transformations and wizards collaboration development process supported by data lineage flow-chart own scripting and macros are possible deploy - to deploy the data model in different environments (test, prod, ..), analyticscreator generates deployment packages that are also used for the change process of structures and loadings. deployment packages can be used locally, in fabric, azure as well in hybrid environments. this includes: stored procedures, ssis azure sql db, azure analysis services, synapse arm template for azure data factory tabular models, olap cubes power bi tableau qlik"}
,{"id":292127259841,"name":"Quick Start Guide","type":"section","path":"/docs/getting-started/quick-start-guide","breadcrumb":"Getting Started › Quick Start Guide","description":"","searchText":"getting started quick start guide this quick start guide helps new and trial users understand how to set up, model, and automate a data warehouse using analyticscreator. it covers everything from connectors to data marts - with practical examples based on sap source systems. analyticscreator automates the creation of data warehouses and analytical models. it connects to source systems (like sap, sql server, or others), imports metadata, and generates all required transformation, historization, and loading structures. this quick start shows how to: create connectors and relationships (foreign keys, references) import source tables build transformations for dimensions and facts define relationships and surrogate keys create data marts and calendar dimensions generate cubes and metrics for reporting tools (power bi, etc.)"}
{"id":194803188913,"name":"User Guide","type":"category","path":"/docs/user-guide","breadcrumb":"User Guide","description":"","searchText":"user guide you can launch analyticscreator in two ways: from the desktop icon after installation or streaming setup, a desktop shortcut is created. double-click the icon to start analyticscreator. from the installer window open the downloaded analyticscreator installer. instead of selecting install, click launch (labeled as number one in the image below). a window will appear showing the available analyticscreator servers, which deliver the latest version to your system. this process launches analyticscreator without performing a full installation, assuming all necessary prerequisites are already in place."}
,{"id":194831949947,"name":" Desktop Interface","type":"section","path":"/docs/user-guide/desktop-interface","breadcrumb":"User Guide › Desktop Interface","description":"","searchText":"user guide desktop interface with analyticscreator desktop users can: data warehouse creation automatically generate and structure your data warehouse, including fact tables and dimensions. connectors add connections to various data sources and import metadata seamlessly. layer management define and manage layers such as staging, persisted staging, core, and datamart layers. package generation generate integration packages for ssis (sql server integration services) and adf (azure data factory). indexes and partitions automatically configure indexes and partitions for optimized performance. roles and security manage roles and permissions to ensure secure access to your data. galaxies and hierarchies organize data across galaxies and define hierarchies for better data representation. customizations configure parameters, macros, scripts, and object-specific scripts for tailored solutions. filters and predefined transformations apply advanced filters and transformations for data preparation and enrichment. snapshots and versioning create snapshots to track and manage changes in your data warehouse. deployments deploy your projects with flexible configurations, supporting on-premises and cloud solutions. groups and models organize objects into groups and manage models for streamlined workflows. data historization automate the process of creating historical data models for auditing and analysis."}
,{"id":194831949967,"name":"Working with AnalyticsCreator","type":"section","path":"/docs/user-guide/working-with-analyticscreator","breadcrumb":"User Guide › Working with AnalyticsCreator","description":"","searchText":"user guide working with analyticscreator understanding the fundamental operations in analyticscreator desktop is essential for efficiently managing your data warehouse repository and ensuring accuracy in your projects. below are key basic operations you can perform within the interface: edit mode and saving — data warehouse editor single object editing: in the data warehouse repository, you can edit one object at a time. this ensures precision and reduces the risk of unintended changes across multiple objects. how to edit: double-click on any field within an object to enter edit mode. the selected field becomes editable, allowing you to make modifications. save prompt: if any changes are made, a prompt will appear, reminding you to save your modifications before exiting the edit mode. this safeguard prevents accidental loss of changes. unsaved changes: while edits are immediately reflected in the repository interface, they are not permanently saved until explicitly confirmed by clicking the save button. accessing views in data warehouse explorer layer-specific views: each layer in the data warehouse contains views generated by analyticscreator. these views provide insights into the underlying data structure and transformations applied at that layer. how to access: navigate to the data warehouse explorer and click on the view tab for the desired layer. this displays the layer's contents, including tables, fields, and transformations. adding and deleting objects adding new objects: navigate to the appropriate section (e.g., tables, layers, or connectors) in the navigation tree. right-click and select add [object type] to create a new object. provide the necessary details, such as name, description, and configuration parameters. save the object. deleting objects: select the object in the navigation tree and right-click to choose delete. confirm the deletion when prompted. ⚠️ note: deleting an object may affect dependent objects or configurations. filtering and searching in data warehouse explorer filtering: use filters to narrow down displayed objects by criteria such as name, type, or creation date. searching: enter keywords or phrases in the search bar to quickly locate objects. benefits: these features enhance repository navigation and efficiency when working with large datasets. object dependencies and relationships dependency view: for any selected object, view its dependencies and relationships with other objects by accessing the dependencies tab. impact analysis: analyze how changes to one object might affect other parts of the data warehouse. managing scripts predefined scripts: add scripts for common operations like data transformations or custom sql queries. edit and run: double-click a script in the navigation tree to modify it. use run script to execute and view results. validating and testing changes validation tools: use built-in tools to check for errors or inconsistencies in your repository. evaluate changes: use the evaluate button before saving or deploying to test functionality and ensure correctness. locking and unlocking objects locking: prevent simultaneous edits by locking objects, useful in team environments. unlocking: release locks once edits are complete to allow further modifications by others. 
exporting and importing data export: export objects, scripts, or configurations for backup or sharing. use the export option in the toolbar or navigation tree. import: import previously exported files to replicate configurations or restore backups. use the import option and follow the prompts to load the data."}
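As a sketch of the kind of custom SQL script you might add and run here (the schema and table names are illustrative, not part of the product; the `imp`/`stg` schemas and the `dat_bis_hist` column follow the conventions documented under Data Structures and Tables & References):

```sql
-- Illustrative validation script: compare row counts between an import table
-- and the current rows of its historicized counterpart.
SELECT
    (SELECT COUNT(*) FROM imp.customer) AS import_rows,
    (SELECT COUNT(*) FROM stg.customer
      WHERE dat_bis_hist >= GETDATE()) AS current_historized_rows;
```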
,{"id":194803189337,"name":"Advanced Features","type":"section","path":"/docs/user-guide/advanced-features","breadcrumb":"User Guide › Advanced Features","description":"","searchText":"user guide advanced features analyticscreator provides a rich set of advanced features to help you configure, customize, and optimize your data warehouse projects. these features extend the tool’s capabilities beyond standard operations, enabling more precise control and flexibility. scripts scripts in analyticscreator allow for detailed customization at various stages of data warehouse creation and deployment. they enhance workflow flexibility and enable advanced repository configurations. types of scripts object-specific scripts define custom behavior for individual objects, such as tables or transformations, to meet specific requirements. pre-creation scripts execute tasks prior to creating database objects. example: define sql functions to be used in transformations. pre-deployment scripts configure processes that run before deploying the project. example: validate dependencies or prepare the target environment. post-deployment scripts handle actions executed after deployment is complete. example: perform cleanup tasks or execute stored procedures. pre-workflow scripts manage operations that occur before initiating an etl workflow. example: configure variables or initialize staging environments. repository extension scripts extend repository functionality with user-defined logic. example: add custom behaviors to redefine repository objects. historization the historization features in analyticscreator enable robust tracking and analysis of historical data changes, supporting advanced time-based reporting and auditing. key components slowly changing dimensions (scd) automate the management of changes in dimension data. supports various scd types including: type 1 (overwrite) type 2 (versioning) others as needed time dimensions create and manage temporal structures to facilitate time-based analysis. example: build fiscal calendars or weekly rollups for time-series analytics. snapshots capture and preserve specific states of the data warehouse. use cases include audit trails, historical reporting, and rollback points. parameters and macros these tools provide centralized control and reusable logic to optimize workflows and streamline repetitive tasks. parameters dynamic management: centralize variable definitions for consistent use across scripts, transformations, and workflows. reusable configurations: update values in one place to apply changes globally. use cases: set default values for connection strings, table prefixes, or date ranges. macros reusable logic: create parameterized scripts for tasks repeated across projects or workflows. streamlined processes: use macros to enforce consistent logic in transformations and calculations. example: define a macro to calculate age from a birthdate and reuse it across transformations. summary analyticscreator’s advanced features offer deep customization options that allow you to: control object-level behavior through scripting track and manage historical data effectively streamline project-wide settings with parameters reuse logic with powerful macros these capabilities enable you to build scalable, maintainable, and highly flexible data warehouse solutions."}
,{"id":194809542259,"name":"Wizards","type":"section","path":"/docs/user-guide/wizards","breadcrumb":"User Guide › Wizards","description":"","searchText":"user guide wizards the wizards in analyticscreator provide a guided and efficient way to perform various tasks related to building and managing a data warehouse. below is an overview of the eight available wizards and their core functions. dwh wizard the dwh wizard is designed to quickly create a semi-ready data warehouse. it is especially useful when the data source contains defined table relationships or manually maintained references. supports multiple architectures: classic (kimball), data vault 1.0 & 2.0, or mixed. automatically creates imports, dimensions, facts, hubs, satellites, and links. customizable field naming, calendar dimensions, and sap deltaq integration. source wizard the source wizard adds new data sources to the repository. supports source types: table or query. retrieves table relationships and sap-specific metadata. allows query testing and schema/table filtering. import wizard the import wizard defines and manages the import of external data into the warehouse. configures source, target schema, table name, and ssis package. allows additional attributes and parameters. historization wizard the historization wizard manages how tables or transformations are historized. supports scd types: 0, 1, and 2. configures empty record behavior and vault id usage. supports ssis-based or stored procedure historization. transformation wizard the transformation wizard creates and manages data transformations. supports regular, manual, script, and external transformation types. handles both historicized and non-historicized data. configures joins, fields, persistence, and metadata settings. calendar transformation wizard the calendar transformation wizard creates calendar transformations used in reporting and time-based models. configures schema, name, start/end dates, and date-to-id macros. assigns transformations to specific data mart stars. time transformation wizard the time transformation wizard creates time dimensions to support time-based analytics. configures schema, name, time period, and time-to-id macros. assigns transformations to specific data mart stars. snapshot transformation wizard the snapshot transformation wizard creates snapshot dimensions for snapshot-based analysis. allows creation of one snapshot dimension per data warehouse. configures schema, name, and data mart star assignment. by using these eight wizards, analyticscreator simplifies complex tasks, ensures consistency, and accelerates the creation and management of enterprise data warehouse solutions."}
,{"id":373340594423,"name":"DWH Wizard ","type":"subsection","path":"/docs/user-guide/wizards/dwh-wizard-function","breadcrumb":"User Guide › Wizards › DWH Wizard ","description":"","searchText":"user guide wizards dwh wizard the dwh wizard allows for the rapid creation of a semi-ready data warehouse. it is especially effective when the data source includes predefined table references or manually maintained source references. prerequisites at least one source connector must be defined before using the dwh wizard. note: the dwh wizard support flat files using duckdb , in that case you should select the option \"use metadata of existing sources\" or use the source wizard instead. to launch the dwh wizard, click the “dwh wizard” button in the toolbar. instead, the user can use the connector context menu: using the dwh wizard select the connector, optionally enter the schema or table filter, and click \"apply\". then, the source tables will be displayed. optionally, select the \"existing sources\" radio button to work with already defined sources instead of querying the external system (ideal for meta connectors). if a table already exists, the \"exist\" checkbox will be selected. to add or remove tables: select them and click the ▶ button to add. select from below and click the ◀ button to remove. dwh wizard architecture options the wizard can generate the dwh using: classic or mixed architecture: supports imports, historization, dimensions, and facts. data vault architecture: supports hubs, satellites, links, dimensions, and facts with automatic classification when “auto” is selected. define name templates for dwh objects: set additional parameters: dwh wizard properties field name appearance: leave unchanged, or convert to upper/lowercase. retrieve relations: enable automatic relation detection from source metadata. create calendar dimension: auto-create calendar dimension and define date range. include tables in facts: include related tables in facts (n:1, indirect, etc.). use calendar in facts: include date-to-calendar references in fact transformations. sap deltaq transfer mode: choose between idoc or trfs. sap deltaq automatic synchronization: enable automatic deltaq sync. sap description language: select sap object description language. datavault2: do not create hubs: optionally suppress hub creation in dv2. historizing type: choose ssis package or stored procedure for historization. use friendly names in transformations as column names: use display names from sap/meta/manual connectors. default transformations: select default predefined transformations for dimensions. stars: assign generated dimensions and facts to data mart stars."}
,{"id":373340594424,"name":"Source Wizard","type":"subsection","path":"/docs/user-guide/wizards/source-wizard","breadcrumb":"User Guide › Wizards › Source Wizard","description":"","searchText":"user guide wizards source wizard the source wizard is used to add new data sources to the repository. to launch the source wizard, right-click on the \"sources\" branch of a connector in the context menu and select \"add source.\" source wizard functionality the appearance and functionality of the source wizard will vary depending on the selected source type (table or query): table: when selecting table as the data source, the wizard provides options to configure and view available tables. configuring a table data source when selecting \"table\" as the data source in the source wizard, click the \"apply\" button to display the list of available source tables. optionally, you can enter a schema or table filter to refine the results. configuration options: retrieve relations: enables the retrieval of relationships for the selected source table, if available. sap description language: specifies the language for object descriptions when working with sap sources. sap deltaq attributes: for sap deltaq sources, additional deltaq-specific attributes must be defined. configuring a query as a data source when selecting \"query\" as the data source in the source wizard, follow these steps: define schema and name: specify the schema and name of the source for the repository. enter the query: provide the query in the query language supported by the data source. test the query: click the “test query” button to verify its validity and ensure it retrieves the expected results. complete the configuration: click the “finish” button to add the new source to the repository. the source definition window will open, allowing further modifications if needed."}
,{"id":373340594425,"name":"Import wizard","type":"subsection","path":"/docs/user-guide/wizards/import-wizard","breadcrumb":"User Guide › Wizards › Import wizard","description":"","searchText":"user guide wizards import wizard to start the import wizard, use the source context menu: import status indicators sources marked with a \"!\" icon indicate that they have not yet been imported. attempting to launch the import wizard on a source that has already been imported will result in an error. typical import wizard window there is a typical import wizard window, as shown in the image below: options: source: the source that should be imported. target schema: the schema of the import table. target name: the name of the import table. package: the name of the ssis package where the import will be done. you can select an existing import package or add a new package name. click finish to proceed. the import definition window will open, allowing the configuration of additional import attributes and parameters, as shown in the image below: post-import actions refer to the \"import package\" description for more details. after creating a new import, refresh the diagram to reflect the changes, as shown in the image below:"}
,{"id":373340595386,"name":"Historization wizard","type":"subsection","path":"/docs/user-guide/wizards/historization-wizard","breadcrumb":"User Guide › Wizards › Historization wizard","description":"","searchText":"user guide wizards historization wizard the historization wizard is used to historicize a table or transformation. to start the historization wizard, use the object context menu: \"add\" → \"historization\" in the diagram, as shown in the image below: alternatively, the object context menu in the navigation tree can be used, as shown in the image below: parameters there is a typical historization wizard window, as shown in the image below: source table: the table that should be historicized. target schema: the schema of the historicized table. target name: the name of the historicized table. package: the name of the ssis package where the historization will be done. you can select an existing historization package or add a new package name. historizing type: you can select between ssis package and stored procedure. scd type: the user can select between different historization types: scd 0, scd 1, and scd 2. empty record behavior: defines what should happen in case of a missing source record. use vault id as pk: if you are using datavault or mixed architecture, the user can use hashkeys instead of business keys to perform historization. after clicking \"finish\", the historization will be generated, and the diagram will be updated automatically. then, the user can select the generated historization package and optionally change some package properties (see \"historizing package\")."}
,{"id":373340595387,"name":"Transformation wizard","type":"subsection","path":"/docs/user-guide/wizards/transformation-wizard","breadcrumb":"User Guide › Wizards › Transformation wizard","description":"","searchText":"user guide wizards transformation wizard the transformation wizard is used to create a new transformation. to start it, use the object context menu and select: \"add → transformation\" in the diagram. typical transformation wizard window supported transformation types regular transformations: described in tabular form, results in a generated view. manual transformations: hand-created views defined manually by the user. script transformations: based on sql scripts, often calling stored procedures. external transformations: created outside analyticscreator as ssis packages. main page parameters type: transformation type: dimension: fullhist, creates unknown member, joinhisttype: actual fact: snapshot, no unknown member, joinhisttype: historical_to other: fullhist, no unknown member, joinhisttype: historical_to manual, external, script: as named schema: schema name name: transformation name historizing type: fullhist snapshothist snapshot actualonly none main table: only for regular transformations create unknown member: adds surrogate id = 0 (for dimensions) persist transformation: save view to a table persist table: name of persist table persist package: ssis package name result table: for external/script types ssis package: for external/script types table selection page allows selection of additional tables. tables must be directly or indirectly related to the main table. parameters table joinhisttype none actual historical_from historical_to full join options: all n:1 direct related all direct related all n:1 related all related use hash keys if available parameter page configure additional parameters (for regular transformations only). fields: none all key fields all fields field names (if duplicated): field[n] table_field field name appearance: no changes upper case lower case key fields null to zero: replaces null with 0 use friendly names as column names stars page stars: data mart stars for the transformation default transformations: no defaults (facts) all defaults (dimensions) selected defaults dependent tables: manage dependent tables script page used for script transformations. enter the sql logic that defines the transformation. insert into imp.lastpayment(businessentityid, ratechangedate, rate) select ph.businessentityid, ph.ratechangedate, ph.rate from ( select businessentityid, max(ratechangedate) lastratechangedate from [imp].[employeepayhistory] group by businessentityid ) t inner join [imp].[employeepayhistory] ph on ph.businessentityid = t.businessentityid and ph.ratechangedate = t.lastratechangedate"}
,{"id":373340595388,"name":"Calendar transformation wizard","type":"subsection","path":"/docs/user-guide/wizards/calendar-transformation-wizard","breadcrumb":"User Guide › Wizards › Calendar transformation wizard","description":"","searchText":"user guide wizards calendar transformation wizard to create a calendar transformation, select \"add → calendar dimension\" from the diagram context menu. as shown in the image below: the calendar transformation wizard will open. typically, only one calendar transformation is required in the data warehouse. as shown in the image below: parameters schema: the schema of the calendar transformation. name: the name of the calendar transformation. date from: the start date for the calendar. date to: the end date for the calendar. date-to-id function: the macro name that transforms a datetime value into the key value for the calendar dimension. this macro is typically used in fact transformations to map datetime fields to calendar dimension members. stars: the data mart stars where the calendar transformation will be included."}
,{"id":373340595389,"name":"Time transformation wizard","type":"subsection","path":"/docs/user-guide/wizards/time-transformation-wizard","breadcrumb":"User Guide › Wizards › Time transformation wizard","description":"","searchText":"user guide wizards time transformation wizard to create a time transformation, select \"add → time dimension\" from the diagram context menu. as shown in the image below: the time transformation wizard will then open, allowing you to configure a new time transformation: parameters schema the schema in which the time transformation resides. name the name assigned to the time transformation. period (minutes) the interval (in minutes) used to generate time dimension records. time-to-id function the macro function that converts a datetime value into the key value for the time dimension. use case: convert datetime fields in fact transformations into time dimension members. stars the data mart stars where the time transformation will be included."}
,{"id":373340595390,"name":"Snapshot transformation wizard","type":"subsection","path":"/docs/user-guide/wizards/snapshot-transformation-wizard","breadcrumb":"User Guide › Wizards › Snapshot transformation wizard","description":"","searchText":"user guide wizards snapshot transformation wizard to create a snapshot transformation, select \"add → snapshot dimension\" from the diagram context menu. this will open the snapshot transformation wizard. ⚠️ note: only one snapshot dimension can exist in the data warehouse. as shown in the image below: parameters schema the schema in which the snapshot transformation resides. name the name assigned to the snapshot transformation. stars the data mart stars where this snapshot transformation will be included."}
,{"id":373340595391,"name":"Persisting wizard","type":"subsection","path":"/docs/user-guide/wizards/persisting-wizard","breadcrumb":"User Guide › Wizards › Persisting wizard","description":"","searchText":"user guide wizards persisting wizard the content of any regular or manual transformation can be stored in a table, typically to improve access speed for complex transformations. persisting the transformation is managed through an ssis package. to persist a transformation, the user should select \"add → persisting\" from the object context menu in the diagram. as shown in the image below: persisting wizard options as shown in the image below: transformation: the name of the transformation to persist. persist table: the name of the table where the transformation will be persisted. this table will be created in the same schema as the transformation. persist package: the name of the ssis package that manages the persistence process."}
,{"id":194817239195,"name":"Interface Settings","type":"section","path":"/docs/user-guide/interface-settings","breadcrumb":"User Guide › Interface Settings","description":"","searchText":"user guide interface settings the interface settings window in analyticscreator allows users to customize various visual elements of the application. it is organized into tabs that include options for diagrams, the navigation tree, and pages. each tab offers settings for colors, sizes, alignments, and more. the interface also includes preview functionality and buttons to restore defaults or save/cancel changes. tabs and categories colors: customize the color scheme of interface elements such as diagrams, tables, packages, and transformations. diagram: adjust visual properties of graphical elements like arrows, boxes, and fonts. navigation tree: modify the appearance and spacing of items in the left-hand navigation pane. pages: configure the layout and alignment of detail and table views within the application. common interface elements buttons at the bottom of the settings window: default 1, default 2, default 3, cancel, and save. a filter or dropdown in the main interface (not the settings window) allows filtering by groups or objects. color settings each item below includes a color picker for visual customization: background/foreground arrow background/foreground text background/foreground dimension background/foreground external transformation background/foreground fact background/foreground header background/foreground vault hub background/foreground vault link background/foreground even column background/foreground odd column background/foreground other object background/foreground package background/foreground vault satellite background/foreground script transformation background/foreground source background/foreground table background/foreground view border diagram border package border color line color thin highlighter label diagram settings adjustments to the layout and structure of diagrams: arrow height font size border thickness cell height cell width header font size header height header width sub box height sub box width scale minor connector line opacity (%) navigation tree settings icon size line spacing scale font size splitter position page layout settings detail page horizontal alignment: left, center, right, or stretch vertical alignment: top, center, bottom, or stretch max width max height table page horizontal alignment: left, center, right, or stretch vertical alignment: top, center, bottom, or stretch max width max height frame scale"}
,{"id":194809538311,"name":"Connectors & Sources","type":"section","path":"/docs/user-guide/connectors-sources","breadcrumb":"User Guide › Connectors & Sources","description":"","searchText":"user guide connectors & sources setting up connectors in analyticscreator connectors in analyticscreator allow users to establish data source connections, enabling efficient data management and analysis. here's a comprehensive guide to understanding and setting up various connectors. navigating connectors to create or edit a connector, navigate through the toolbar menu: connectors define the data source logic. heres a list of connector types supported by analyticscreator: ms sql server oracle csv excel duckdb (parquet, csv, s3) ms access oledb sap (using theobald connector) odbc connection string and templates analyticscreator provides a friendly interface for generating connection string templates. for several connector types, users can access these templates by clicking the template button. heres an example template: provider=sqlncli11;data source=[server];initial catalog=[database];integrated security=sspi; make sure to replace the placeholders [server] and [database] with the actual server and database names. csv connector properties the csv connector has unique properties enhancing file handling capabilities. users should pay attention to these additional settings to ensure seamless file integration and processing. row delimiters when defining row delimiters, users can utilize specific abbreviations for ease. these include: {cr} for carriage return {lf} for line feed {t} for tabulator these specifications enable seamless formatting and data structuring within your source files. automating data source descriptions for automatic data source description retrieval, ensure your connections to these data sources are active and functional. this automation simplifies data management and improves operational efficiency. cloud storage for connectors store connector definitions and associated data sources in the cloud. cloud storage provides a durable and accessible platform for managing your data across different repositories, enhancing collaboration and data security. encrypted strings we highly recommend keeping your connection strings encrypted. to encrypt your string, simply click on options encrypted strings new. to use your encrypted strings in your sources, enclose the name youve created with # on both sides. for example, if your dsn=duckdb, the connection string will be #duckdb# note: pro tip: if you're new to analyticscreator we highly recommend making use of the source - wizard configuring analyticscreator the source contains a description of the external data. each source belongs to a connector. each source has columns, and references (table keys) between sources can be defined. to open the source definition, use the \"edit source\" option from the source context menu in the navigation tree or diagram. to add a new source, use the \"add new source\" option from the source context menu in the navigation tree or diagram. below is a typical source definition: the properties of sources depend on the connector type and the source type. there are three source types: table view sap_deltaq query for the query source type, the source window will display an additional tab containing the query definition. you cannot create a source manually. the only source that can be created manually is the csv the user can check for changes in the source and propagate any detected changes to the data warehouse objects. 
to check for changes in all connector sources, use the connector context menu and select \"refresh all sources\" in the navigation tree. to check for changes in imported connector sources only, use the connector context menu and select \"refresh used sources\" in the navigation tree. to check for changes in a specific source, use the source context menu and select \"refresh source\" in the navigation tree. refresh source options: the following refresh options are available: detect differences: detects changes in the source but does not modify the repository. delete missing sources: deletes any missing sources from the repository. refresh source descriptions: refreshes the descriptions of the sources. refresh columns in imported tables: refreshes columns when there are new or changed source columns. delete missing columns in imported tables: deletes columns in imported tables if the source columns have been deleted. refresh primary keys in imported tables: updates primary keys if the sources primary key has changed. refresh descriptions in imported tables: updates descriptions of imported tables and columns."}
,{"id":194803189339,"name":"Data Structures","type":"section","path":"/docs/user-guide/data-structures","breadcrumb":"User Guide › Data Structures","description":"","searchText":"user guide data structures in a data warehouse, layers are a crucial aspect of its logical structure. users have the ability to define a variety of layers, each serving a specific purpose. below are six primary types of layers commonly used in a data warehouse architecture, along with their functions and interconnections. this configuration facilitates an efficient workflow, transforming raw data sources into insightful, user-accessible information. each layer plays a distinct role in the data journey—from acquisition to end-user presentation—supporting governance, transformation, historization, and analytics. 1. source layer (src) purpose: acts as the foundational logical data layer containing external data sources. characteristics: not part of the actual data warehouse storage. serves as the entry point for incoming external data. tables and transformations cannot be created in this layer. 2. staging layer (imp) also known as: import layer purpose: loads and structures raw data from the source layer into tables for further processing. characteristics: temporarily stores incoming data. frequently refreshed with the latest imports. prepares data for historization and persistence. 3. persisted staging layer (stg) purpose: begins the commitment to data historization and traceability. characteristics: stores data from the staging layer persistently. maintains historical records of changes. considered the first \"true\" layer of the data warehouse. 4. transformation layer (trn) layers purpose: applies additional logic and refinements to the data. characteristics: optional, but useful for cleansing, deduplication, or complex business logic. ensures high data quality and consistency. acts as a bridge between raw and modeled data. 5. data warehouse layer (dwh) purpose: converts structured data into analytical models (e.g., facts and dimensions). characteristics: core repository of business-ready data. supports advanced querying, reporting, and data analysis. 6. data mart layer (dm) purpose: provides business users with access to relevant datasets in a user-friendly structure. characteristics: often adopts star schema or other analytical models. optimized for reporting tools and dashboards. represents the interface between the data warehouse and the end-user. together, these layers enable a modular and governed approach to building scalable and maintainable data warehouse solutions in analyticscreator. schemas a schema is the microsoft sql server schema to which a data warehouse (dwh) object belongs. each schema should be assigned to a specific layer, and each layer can contain multiple schemas. stars a star is a part of a data mart layer. the data mart layer can contain several stars. each star corresponds to a schema. if you create an olap (online analytical processing) model, each star will produce one olap cube (tabular and multidimensional). galaxies a galaxy is a group of several stars. each star should belong to a galaxy. galaxy definition window:"}
,{"id":194803189343,"name":"Tables & References","type":"section","path":"/docs/user-guide/tables-references","breadcrumb":"User Guide › Tables & References","description":"","searchText":"user guide tables & references tables a table represents a database table or view within the data warehouse, and each table belongs to a specific schema. tables are created automatically when defining a data import, historization, or persisting process. views are created when defining a transformation. additionally, tables can be manually created to store results of external or script transformations. for most tables, several properties can be configured, including calculated columns, primary keys, identity columns, and indexes. table properties: table name: table name table schema: table schema table type: type of the table import table: filled by external data using ssis package. historicized table: contains historized data. includes: satz_id (bigint) — surrogate primary key dat_von_hist (datetime) — start of validity dat_bis_hist (datetime) — end of validity view without history view with history persisted table without history persisted table with history data mart dimension view without history data mart dimension view with history data mart fact view without history data mart fact view with history externally filled table without history externally filled table with history data vault hub table with history data vault satellite table with history data vault link table with history friendly name: used in olap cubes instead of table name. compression type: default, none, row, page description: description inherited by dependent objects. hist of table: names of persist, hub, satellite, or link tables. has primary key: if checked, adds primary key constraint. primary key name: name of the primary key. pk clustered: if checked, creates clustered pk. columns: column name data type, maxlength, numprec, numscale, nullable pkordinalpos default (e.g., getdate()) friendly name referenced column (for n:1 relationships) references (read-only, comma-separated list) identity column: name, type, seed, increment pk pos: position in pk for normal tables (not views), you can optionally define identity and calculated columns (see tab). calculated columns properties: column name: name of the column statement: sql statement (macros like @getvaulthash supported) persisted: if checked, column will be persisted pkordinalpos: position in primary key friendly name: used in olap cubes instead of column name referenced column: defines n:1 references references: comma-separated, read-only defining table relationships in analyticscreator relationships between tables can be defined to enable combining tables during transformations. these relationships include n:1 (\"1-field\" to \"1-primary key field\") references and more complex associations. defining n:1 references one-field to one primary key field: these references can be directly defined within the table definition using the referenced column attribute. example: a foreign key in one table referencing the primary key of another. more complex references can be defined using table references. here is a typical table reference definition: table reference properties: cardinality: unknown onetoone manytoone onetomany manytomany note: it is recommended to primarily use many-to-one (n:1) and one-to-many (1:n) cardinalities. join: sql join type table1: schema and table name of the first table table2: schema and table name of the second table alias 1: optional. 
alias of the first table. should be defined if reference statement is used alias 2: optional. alias of the second table. should be defined if reference statement is used description: reference name auto created: if checked, the reference was automatically created during synchronization. reference statement: optional. sql reference statement. should be used if the reference cannot be described using column references only. table aliases will be used. columns: there are columns and statements. either column or statement should be defined on each reference side. column1: column from the first table statement1: sql statement column2: column from the second table statement2: sql statement inheritance of table relations across dwh layers table relations will be inherited into subsequent dwh layers. for example, if references are defined between two import tables that are historicized, the same references will be automatically created between the corresponding historicized tables. if a reference is changed, the changes will propagate into the inherited references unless those references are used in transformations. in such cases, the references will be renamed by adding the suffix _changed(n), and new inherited references will be created. therefore, if a \"parent\" reference is changed, transformations using the inherited reference will not be updated automatically. however, you can manually update them by selecting the new inherited reference. the inherited references, where the auto created flag is set, cannot be modified unless you uncheck the auto created flag. defining relations between sources relations between sources are defined and will be inherited by the data warehouse objects during synchronization. the n:1 relation, which refers to a \"one field\" to a \"one primary key field\" reference, can be defined directly in the source definition by using the referenced column attribute. for more complex references, use source references. inheritance of source relations across dwh layers source relations will be inherited into subsequent dwh layers. for example, if references are defined between two source tables that are imported, the same references will be automatically created between the corresponding import tables. if a source reference is changed, the changes will propagate into the inherited references, unless those references are used in transformations. in such cases, the references will be renamed by adding the suffix _changed(n) and new inherited references will be created. therefore, if a \"parent\" reference is changed, transformations using the inherited reference will not be updated automatically. however, the user can manually update them by selecting the new inherited reference."}
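For the Reference statement property above, the Transformations section shows the alias style used in join logic (e.g. `t5.id = t1.customerid`). A hedged sketch of a case where a plain column pair cannot express the join, so a statement is needed; the aliases, schemas, and columns are invented for illustration:

```sql
-- Reference statement (uses the aliases a1/a2 defined on the reference):
--   a1.product_id = a2.product_id AND a1.order_date BETWEEN a2.valid_from AND a2.valid_to
-- What the resulting join might look like once the reference is applied:
SELECT a1.order_id, a2.price
FROM imp.orders AS a1
INNER JOIN imp.price_history AS a2
    ON  a1.product_id = a2.product_id
    AND a1.order_date BETWEEN a2.valid_from AND a2.valid_to;  -- range condition needs a statement
```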
,{"id":194803189345,"name":"Transformations","type":"section","path":"/docs/user-guide/transformations","breadcrumb":"User Guide › Transformations","description":"","searchText":"user guide transformations transformations a transformation is a process used to modify data. the result of a transformation is always either a single view or a single table. to create a new transformation, use the transformation wizard. each transformation has the following common properties: name: the name of the transformation schema: the schema for the transformation transtype: the type of transformation stars: a list of stars in which the transformation is involved star: the name of the star isfact: this should be selected for fact transformations filter: you can define an additional filter to restrict transformation data for a specific data mart analyticscreator supports the following transformation types: regular transformation manual transformation external transformation script transformation data mart transformation predefined transformation regular transformation a regular transformation is a view generated by analyticscreator based on the defined transformation parameters. tables, table relationships, and transformation columns must be specified, after which analyticscreator automatically creates the transformation view. below is a typical regular transformation definition: regular transformation properties: historization type: defines how to work with historicized data. fullhist: fully historicized transformation. includes: satz_id dat_von_hist dat_bis_hist snapshothist: for predefined snapshot dates (used for dimensions). snapshot: uses snapshot dates to combine historicized data (usually for facts). actualonly: uses only current data from historized sources (dimensions or facts). none: non-historicized data. create unknown member: adds surrogate id = 0 with default values for unmatched dimension members. fact transformation: check if defining a fact transformation. persist table: name of the table where results will be stored. persist package: name of the ssis package for persisting results. ssis package: for external or script transformations; launches transformation. hub of table: read-only source for hub transformations. sat of table: source table for satellite transformations. link of table: read-only source table for link transformations. snapshots: snapshot and group info (relevant for snapshot types). tables: participating tables seqnr: unique table sequence number table: table name table alias: unique alias used in joins/statements joinhisttype: none — no historicized data actual — only current data historical_from — value at start of linked record period historical_to — value at end of linked record period full — full historicizing info join type: inner, left, right, full, cross force join: loop join, hash join, merge join reference statement: optional custom join logic (e.g. t5.id = t1.customerid) filter statement: additional sql filter (e.g. t5.country = 'ger') sub select: additional subquery to refine reference logic. 
columns: transformation output columns column name tableseqnr (optional) reference (optional) statement: sql with aliases isaggr: aggregated column default value: used for unknown members seqnr: column sequence pk position: primary key position description references: table joins (see table references) seqnr1: first table seq number seqnr2: second table seq number reference: reference name predefined transformations: list of referenced transformations view tab: read-only view definition transformation compilation and creation compile: use the compile button to check and validate the transformation logic. errors will be flagged. create: use the create button to build the transformation view into the dwh. errors will be reported if present. manual transformation a manual transformation is a view that is created manually. properties: view: contains the manually created view definition. rename columns table: if you rename a column in the manually created view, enter the old and new column names into this table. below is a typical manual transformation definition: external transformation an external transformation is a transformation manually created using an ssis package. properties: result table: the table where the transformation results will be stored. ssis package: the name of the manually created ssis package. tables: a list of tables on which the transformation depends. only the table name is relevant. below is a typical external transformation definition: script transformation a script transformation is a transformation that uses an sql script. properties: result table: the table where the transformation results will be stored. ssis package: the name of the ssis package where the transformation script is executed. script: the sql script used in the transformation. below is a typical script transformation definition: data mart transformation data mart transformations are views created in the data mart layer. a data mart transformation cannot be created manually. instead, the stars — the affiliation of other transformations — must be defined, and the corresponding data mart transformations will be created automatically. every regular or manual transformation can be persisted. this means the content of the view can be stored in a table. predefined transformations predefined transformations are field-level transformations based on the field type. for example, below is a definition of a predefined transformation that removes leading and trailing spaces from all fields of type varchar and nvarchar: check and transformation statements the check statement is used to verify whether a field meets the transformation conditions. the transformation statement contains the actual sql transformation logic. several predefined transformations are built-in, but users can also create their own. predefined transformations are applied in regular transformations. when creating a transformation, users can select which predefined transformations to apply. list of predefined transformations predefined transformation description trim removes leading and trailing spaces from string fields (e.g., varchar, nvarchar). stringnulltona converts null values in string fields to \"na\". stringmaxt08000 trims string fields to a maximum length of 8000 characters. numbernulltozero converts null values in numeric fields to zero. xmltostring converts xml data type fields to string format. hierarchytostring converts hierarchical data into a string representation. 
timetodatetime converts time fields into datetime by appending a default date (e.g., \"1900-01-01\"). binarytostr converts binary data to a string format. anonymization anonymizes data by replacing sensitive fields with generic or masked values. applying multiple predefined transformations multiple predefined transformations can be applied simultaneously. below is an example result when combining multiple transformations on a single field: [fkart] = rtrim(ltrim(isnull([t1].[fkart], 'n.a.')))"}
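To make the fragments above concrete, here is a minimal sketch of the kind of view a regular transformation could generate. It reuses the pieces quoted in this section (reference statement t5.id = t1.customerid, filter statement t5.country = 'ger', and the combined trim/stringnulltona output); the schema, table, and remaining column names are hypothetical.

```sql
-- Sketch of a generated regular-transformation view, combining two
-- participating tables (aliases t1, t5) with stacked predefined
-- transformations. Schema/table/column names are hypothetical.
CREATE VIEW [trans].[v_sales_ger]
AS
SELECT
    [fkart]       = RTRIM(LTRIM(ISNULL([t1].[fkart], 'n.a.'))),  -- trim + stringnulltona
    [amount]      = ISNULL([t1].[amount], 0),                    -- numbernulltozero
    [customer_id] = [t5].[id]
FROM [stage].[sales] AS t1
INNER JOIN [stage].[customer] AS t5
        ON t5.id = t1.customerid   -- reference statement
WHERE t5.country = 'ger';          -- filter statement
```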
,{"id":194809538313,"name":"Packages & Workflow","type":"section","path":"/docs/user-guide/packages-workflow","breadcrumb":"User Guide › Packages & Workflow","description":"","searchText":"user guide packages & workflow deployment packages multiple deployment packages can be created to manage different deployment configurations. each deployment package is a visual studio solution containing the necessary elements required to deploy the data warehouse. deployment package properties name: the name of the deployment package and the generated visual studio solution. create dacpac: if checked, the dacpac file containing the dwh structure will be generated. deploy dacpac: if checked, the dacpac file will be deployed to the database defined below. server, db name, integrated security, login, and password: connection attributes of sql server to which the dacpac file should be deployed. deployment options: allow data loss drop objects not in source backup db before changes block when drift detected deploy in single user mode allow incompatible platform these options control how the dacpac is deployed. see sqldeploy.exe options for more information. create power pivot: if checked, the excel file containing the power pivot/power bi semantic model will be created. this power pivot file can be imported into power bi. next options are common for multidimensional and tabular olap databases: create xmla file: if checked, the xmla file containing the olap database definition will be created. server, db name, login, password: connection attributes of the olap server where the olap database will be deployed. dummy information can be added here, but the xmla file should be edited to replace it with the correct server credentials. process cube in workflow package: if checked, the cube processing task will be added to the workflow package. create cube during deployment: if checked, the olap cube will be created using the olap server connection attributes. ssis packages: ssis packages that will be generated during deployment. to invert the selection, click on the header of the \"deploy\" column in the package list. ssis config type: choose between an environment variable and a config file to configure the connection to the database containing the [cfg].[ssis_configuration] table. this table holds the configurations for all ssis packages. ssis config env. var./ssis config file path: the name of the environment variable or the path to the config file that will be created. deploy ssis_configuration: if checked, the content of the [cfg].[ssis_configuration] table will be recreated. use project reference: if selected, the workflow package will access other ssis packages using a project reference. otherwise, it will use a file reference. other files: generate power bi project (.pbip) files generate tableau packaged workbook (.twbx) generate qlik script (.qvs)"}
,{"id":373340594422,"name":"Snapshots and Snapshot Groups","type":"subsection","path":"/docs/user-guide/packages-workflow/snapshots-and-snapshot-groups","breadcrumb":"User Guide › Packages & Workflow › Snapshots and Snapshot Groups","description":"","searchText":"user guide packages & workflow snapshots and snapshot groups snapshots are predefined dates calculated during the etl process and used in snapshot transformations to combine historicized data. by default, there is always at least one snapshot, referred to as the \"actual date\", which represents the current timestamp. additional snapshots can be defined as needed. below is a typical snapshot definition: sql expression for calculating the previous date this sql expression calculates the previous date relative to a given @actdate. it uses the dateadd, convert, and datepart functions to adjust the date by subtracting days and converting between data types. dateadd(ms, -2, convert(datetime, convert(date, dateadd(dd, 1-datepart(d, @actdate), @actdate)))) each snapshot must have a unique name. an sql statement is used to calculate the snapshot value, and the predefined variable @actdate (representing the current timestamp) can be used in this statement. multiple snapshots can be organized into snapshot groups for better management and usability, as shown below: working with multiple snapshots when working with multiple snapshots, a snapshot dimension can be defined and used as a common dimension in the data mart layer. to create a snapshot dimension, use the context menu: right-click over the core layer → add → snapshot dimension snapshots are used in regular snapshot transformations to combine historicized data based on predefined dates. these transformations rely on snapshot values to accurately represent the historical context of the data. using snapshot groups and individual snapshots both snapshot groups and individual snapshots can be selected and applied during the transformation process."}
,{"id":373340595396,"name":"Workflow package","type":"subsection","path":"/docs/user-guide/packages-workflow/workflow-package","breadcrumb":"User Guide › Packages & Workflow › Workflow package","description":"","searchText":"user guide packages & workflow workflow package a workflow package is used to execute all other packages in the correct order. there are no configuration options available."}
,{"id":373340595397,"name":"Script launching package","type":"subsection","path":"/docs/user-guide/packages-workflow/script-launching-package","breadcrumb":"User Guide › Packages & Workflow › Script launching package","description":"","searchText":"user guide packages & workflow script launching package a script launching package is used to execute script transformations. there are no configuration options available."}
,{"id":373340595398,"name":"Persisting package","type":"subsection","path":"/docs/user-guide/packages-workflow/persisting-package","breadcrumb":"User Guide › Packages & Workflow › Persisting package","description":"","searchText":"user guide packages & workflow persisting package a persisting package is used to persist transformations. there are no additional configuration options available."}
,{"id":373340595399,"name":"Historization package","type":"subsection","path":"/docs/user-guide/packages-workflow/historization-package","breadcrumb":"User Guide › Packages & Workflow › Historization package","description":"","searchText":"user guide packages & workflow historization package this package is used to historicize data. one package can be used to define multiple historizations. note: historicizing data refers to the process of tracking and storing changes to data over time. instead of just storing the current state of the data, historicizing data ensures that previous versions or states are preserved. this allows organizations to analyze how data has evolved, which is useful for trend analysis, auditing, and reporting. below is a typical historization definition: historization options missing record behavior: describes the behavior when a primary key is missing in the source table: close: closes the validity period of the corresponding key in the historicized table. add empty record: closes the period and adds a new record with default \"empty value\" columns. do not close: no action is taken; the key remains in the actual data. insert only: if set, the source data is appended without historization (used when no primary key exists). type: selects the historization algorithm: ssis package: historization is done via an ssis package. automatically created stored procedure: procedure named [cfg].[hist_tablename] is generated and executed. manually created stored procedure: procedure with same name is manually editable. use auto-generated procedure as a starting point. optional statement to calculate validfrom date: define a custom sql expression (returns date or datetime) to calculate the validity start date for new/existing keys. insert filter and delete filter: insert filter: restrict which source records get historicized. delete filter: restrict which records can be \"closed\" when primary keys are missing. scd type: choose historization logic per field: none (scd 0): no change tracking; current value only. scd 1: changes overwrite historical values. scd 2: adds new records for changed values, maintaining validity periods. calculated columns: define derived columns using previous ([s]) and current ([i]) values, e.g.: isnull(i.amount, 0) - isnull(s.amount, 0) ssis variables: use @variablename format to reference variables for filters. define values via [cfg].[ssis_configuration]. scripts: define pre- or post-historization sql scripts using the scripts tab."}
,{"id":373340595400,"name":"Import package","type":"subsection","path":"/docs/user-guide/packages-workflow/import-package","breadcrumb":"User Guide › Packages & Workflow › Import package","description":"","searchText":"user guide packages & workflow import package this package is used to import data from external data sources. a single package can be used to define multiple imports. below is a typical import definition: import package properties fields: defines the mapping between source and target fields, including any ssis expressions used for each field during import. ssis variables: allows defining ssis variables and their value expressions. values can be managed using the ssis_configuration table. these variables are commonly used in filter expressions. filter: filters restrict the data imported. use ssis variables with the “@” symbol (e.g., @date) to build dynamic filter logic. scripts (tab): sql scripts can be configured to run before or after the import process. impsql: allows redefining the default sql command used for data import (used when custom logic is required). update statistics: if selected, the sql server update statistics command is executed after the import completes. manually created: indicates that the ssis package is custom-built or modified. when selected: the package will not be auto-generated during deployment. however, it will be included in the overall workflow package execution. use logging: enables execution logs to be written to the dwh log tables, improving monitoring and traceability. externally launched: excludes the package from the main workflow execution. it must be triggered manually outside of the workflow."}
,{"id":373340595401,"name":"ETL","type":"subsection","path":"/docs/user-guide/packages-workflow/etl","breadcrumb":"User Guide › Packages & Workflow › ETL","description":"","searchText":"user guide packages & workflow etl ssis packages are automatically generated by analyticscreator as part of the deployment process. these packages are used to execute etl (extract, transform, load) or elt (extract, load, transform) processes within the data warehouse, depending on the architecture and requirements. types of ssis packages [link:373340595400|import package] these packages are used to import data from external sources into the data warehouse. historization packages these handle the historicization of data, ensuring changes are tracked over time for analytical purposes. persisting packages these packages are responsible for persisting transformation results within the data warehouse. script launching packages these packages are designed to execute script-based transformations. workflow packages these orchestrate the execution of all other packages in the correct sequence, ensuring that etl or elt processes are performed in a logical and efficient order. each package type is tailored to specific tasks, enabling seamless integration and efficient data processing in the data warehouse environment. analyticscreator simplifies the configuration and generation of these packages, providing a robust and automated etl solution."}
,{"id":373340595402,"name":"Version Control","type":"subsection","path":"/docs/user-guide/packages-workflow/version-control","breadcrumb":"User Guide › Packages & Workflow › Version Control","description":"","searchText":"user guide packages & workflow version control version control in analyticscreator analyticscreator supports version control by allowing users to export their repository into a structured json format. this enables seamless integration with git-based systems such as github or azure devops, empowering teams to manage their data product development process with full traceability, collaboration, and control. why use version control with analyticscreator? version control brings critical benefits to your data warehouse development lifecycle: track changes to your metadata and configurations enable collaboration across multiple developers revert to previous versions when needed integrate with ci/cd pipelines support consistent deployment and testing workflows exporting your repository 1. export to file in analyticscreator, click: file > save to file 2. choose format: .acrepox in the save dialog, select the file type: ac json files (*.acrepox) what's included in the .acrepox file? the exported file contains: project metadata data layers, etl logic, and semantic models parameters, transformations, macros relationships and object dependencies 🔒 credentials are not included. this ensures secure storage and prevents leaking sensitive information. implementing version control in a collaborative environment to implement version control in a collaborative environment, use the two-branch strategy: main: production-ready version changes: development and staging updates prerequisites a git repository (on github, azure devops, etc.) git installed locally access to the repository your previously exported .acrepox file step-by-step process step 1: clone the repository git clone https://your-repo-url.git cd your-repo-folder step 2: switch to the changes branch git checkout -b changes if changes already exists: git checkout changes step 3: add the exported .acrepox file place your exported file (e.g., customerdw.acrepox) into the project folder. then run: git add customerdw.acrepox git commit -m \"updated repository with latest model changes\" git push origin changes step 4: open pull request (pr) to main from github or azure devops: go to the pull requests section. create a new pr from changes → main. include a clear description of what's changed. review checklist: have you tested the export? are credentials excluded? have you added documentation for changes? step 5: archive and backup store previous versions of .acrepox files in a versions or archive folder within the repo for traceability, or use releases. best practices export regularly during development milestones use folders to organize models by project/component use tags or naming conventions for major releases communicate changes clearly in pr descriptions use secure ci/cd pipelines for automated deployment restoring a version to restore a specific version: checkout the version or tag in git open the .acrepox file in analyticscreator: file > open from file your full repository structure will be restored as exported."}
,{"id":194819914822,"name":"Modeling Approaches","type":"section","path":"/docs/user-guide/modeling-approaches","breadcrumb":"User Guide › Modeling Approaches","description":"","searchText":"user guide modeling approaches data warehouse design is governed by established modeling methodologies that provide structure, consistency, and scalability. analyticscreator supports the principal industry approaches and enables their automated implementation within microsoft-based environments. each methodology is applied through metadata-driven modeling, ensuring that the resulting schemas, transformations, and documentation are generated in a standardized and reproducible manner. this allows organizations to adopt the modeling approach most aligned with their strategic, architectural, and analytical requirements."}
,{"id":373340595392,"name":"Dimensional Modeling with AnalyticsCreator","type":"subsection","path":"/docs/user-guide/modeling-approaches/dimensional-modeling-with-analyticscreator","breadcrumb":"User Guide › Modeling Approaches › Dimensional Modeling with AnalyticsCreator","description":"","searchText":"user guide modeling approaches dimensional modeling with analyticscreator dimensional modeling with analyticscreator dimensional modeling in analyticscreator simplifies the design and organization of data warehouse structures, enabling efficient data analysis and reporting. by organizing data into facts (quantitative metrics) and dimensions (descriptive attributes), it enhances query performance and user understanding. analyticscreator supports various modeling techniques, such as classic (kimball), data vault, and hybrid approaches, ensuring flexibility based on business requirements. users can easily define and manage dimensions, facts, and measures, and automate the creation of relationships between tables. with built-in wizards, it streamlines the setup of data marts, calendar transformations, and historical data management. this powerful tool not only helps structure data for improved reporting but also ensures scalability and consistency across the data warehouse environment."}
,{"id":373340595393,"name":"Mixed Modeling DWH","type":"subsection","path":"/docs/user-guide/modeling-approaches/mixed-modeling-dwh","breadcrumb":"User Guide › Modeling Approaches › Mixed Modeling DWH","description":"","searchText":"user guide modeling approaches mixed modeling dwh mixed modeling approach in analyticscreator the mixed modeling approach combines elements of different data modeling strategies—most commonly kimball (dimensional modeling) and data vault—to meet modern enterprise data warehouse needs. it leverages the strengths of both approaches to optimize performance, data governance, historical tracking, and agility. when and why to use a mixed modeling approach enterprises are increasingly dealing with both structured and semi-structured data, frequent business rule changes, and the need for both auditability and performance. relying on a single modeling paradigm is often not sufficient. use kimball-style models in the data presentation layer to support fast query performance and ease of use for bi tools. use data vault in the raw data layer to handle changing business logic, full historization, and traceability. mix both when you need governance, auditability, and flexibility without sacrificing performance and usability. how the mixed model works in analyticscreator analyticscreator supports a mixed modeling approach by allowing users to define the logical and physical layers separately using metadata. this flexibility is built into the platform’s model-driven architecture. model core business entities using data vault (hubs, links, satellites) to ensure historization and auditability. expose business-friendly kimball-style dimensions and facts from the raw vault or stage views. use model variants in analyticscreator to define parallel data marts or reporting models on top of the same raw layer. deploy these models directly into fabric sql for governed data storage and onelake delta tables for consumption. benefits of the mixed modeling approach feature benefit auditability (data vault) full data lineage and historization in raw data vault layers performance (kimball) optimized schema for bi tools and reporting agility business rules and transformations can evolve without affecting historical raw data separation of concerns different teams can manage ingestion, raw data modeling, and consumption independently automation in ac schema changes propagate across layers using metadata-driven automation limitations and considerations initial setup of both modeling layers requires strategic planning and governance. data vault structures may be less intuitive for business users if directly exposed. requires a platform like analyticscreator to manage model complexity and deployment consistency. mixed modeling in fabric with analyticscreator analyticscreator simplifies the deployment of mixed modeling architectures into microsoft fabric: fabric sql databases: hosts the raw vault, stage layer, and dimensional models using the metadata-generated schema. azure data factory pipelines: automatically generated to handle data ingestion and etl into the appropriate layers. onelake delta tables: serve as consumption endpoints for power bi and other tools, supporting both real-time and batch scenarios. by combining these technologies with a mixed modeling strategy, you gain a balance of governance, performance, and adaptability at scale. use case example a global retail company implemented a mixed model in analyticscreator to meet audit requirements while supporting self-service bi. 
they modeled transactional data with data vault to preserve history and compliance. on top of the raw vault, they generated conformed dimensions and facts for finance and supply chain reporting in power bi. thanks to analyticscreator’s metadata engine, they deployed both models into microsoft fabric with one-click publishing, enabling a modern, governed, and flexible analytics platform. key takeaway the mixed modeling approach in analyticscreator enables you to build auditable, high-performing, and scalable data warehouses on microsoft fabric. by blending the strengths of kimball and data vault, and automating the deployment using metadata, organizations reduce risk and speed up delivery. next steps want to see how a mixed model would look in your fabric environment? book a technical session with our team to explore your use case."}
,{"id":373340595394,"name":"Data Vault Modeling","type":"subsection","path":"/docs/user-guide/modeling-approaches/data-vault-modeling","breadcrumb":"User Guide › Modeling Approaches › Data Vault Modeling","description":"","searchText":"user guide modeling approaches data vault modeling coming soon"}
,{"id":373340595395,"name":"Medallion Modeling","type":"subsection","path":"/docs/user-guide/modeling-approaches/medallion-modeling","breadcrumb":"User Guide › Modeling Approaches › Medallion Modeling","description":"","searchText":"user guide modeling approaches medallion modeling coming soon"}
,{"id":194803189348,"name":"Parameters & Macros","type":"section","path":"/docs/user-guide/parameters-macros","breadcrumb":"User Guide › Parameters & Macros","description":"","searchText":"user guide parameters & macros parameters analyticscreator provides various parameters that can be modified to customize its functionality. to access the parameter settings page, navigate to help — parameters in the toolbar. once the parameter settings page is open, use the search criteria field to locate specific parameters. below is a list of parameters available for modification in analyticscreator: parameter description initial value allow_snowflake_tabular_olap allow dim-dim relations in tabular olap cubes 0 autocreated_references_use_friendly_name use friendly names instead of table names in description of autocreated references: 0- no, 1 - yes 0 csv_empty_string_length length of empty string fields 50 csv_min_string_length minimum length of string fields 50 csv_scan_rows count of rows scanned to get the field properties 500 datavault2_create_hubs datavault2 create hubs: 0 - no, 1 - yes 1 default_calendar_macro name of default calendar macro null deployment_do_not_drop_object_types comma-separated list of object types (see description of sqlpackage.exe) aggregates, applicationroles, assemblies, asymmetrickeys, brokerpriorities, certificates, contracts, databaseroles, databasetriggers, extendedproperties, fulltextcatalogs, fulltextstoplists, messagetypes, partitionfunctions, partitionschemes, permissions, queues, remoteservicebindings, rolemembership, rules, searchpropertylists, sequences, services, signatures, symmetrickeys, synonyms, userdefineddatatypes, userdefinedtabletypes, clruserdefinedtypes, users, xmlschemacollections, audits, credentials, cryptographicproviders, databaseauditspecifications, endpoints, errormessages, eventnotifications, eventsessions, linkedserverlogins, linkedservers, logins, routes, serverauditspecifications, serverrolemembership, serverroles, servertriggers description_pattern_calendar_id autogenerated description of hist_id (satz_id) field in calendar dimension. you can use {tablename}, {tableid} and {cr} placeholders calendar id description_pattern_datefrom autogenerated description of datefrom (dat_von_hist) field. you can use {tablename}, {friendlyname}, {tableid} and {cr} placeholders {tablename}: start of validity period description_pattern_dateto autogenerated description of dateto (dat_bis_hist) field. you can use {tablename}, {friendlyname}, {tableid} and {cr} placeholders {tablename}: end of validity period description_pattern_hist_id autogenerated description of hist_id (satz_id) field. you can use {tablename}, {friendlyname}, {tableid} and {cr} placeholders {tablename}: surrogate key description_pattern_snapshot_id autogenerated description of hist_id (satz_id) field in snapshot dimension . you can use {tablename}, {tableid} and {cr} placeholders snapshot id diagram_name_pattern object name in diagram. you can use {name}, {friendly name}, {fullfriendlyname}, {id} and {cr} placeholders {fullfriendlyname} dwh_create_references create disabled references between tables in data warehouse 0 dwhwizard_calendar dwh wizard. 1 - create, 0 - do not create 1 dwhwizard_calendar_from dwh wizard. calendar start date 19800101 dwhwizard_calendar_to dwh wizard. calendar start date 20201231 dwhwizard_calendar_transname dwh wizard. calendar dimension name dim_calendar dwhwizard_dimname dwh wizard. template for generated dimensions dim_{src_name} dwhwizard_dwhtype dwh wizard. 
1 - classic, 2 - datavault 1.0, 3 - datavault 2.0, 4 - mixed 1 dwhwizard_fact dwh wizard. 1 - n:1 direct related, 2 - all direct related, 3 - n:1 direct and indirect related, 4 - all direct and indirect related 3 dwhwizard_fact_calendar dwh wizard. 1 - use calendar in facts, 0 - do not use calendar in facts 1 dwhwizard_factname dwh wizard. template for generated facts fact_{src_name} dwhwizard_histpackagename dwh wizard. template for generated hist package names hist_{connector_name}{nr} dwhwizard_hub_packagename dwh wizard. template for generated hub packages hist_{connector_name}_hub{nr} dwhwizard_hub_tablename dwh wizard. template for generated hub tables {src_name}_hub dwhwizard_hub_transname dwh wizard. template for generated hub transformations {src_name}_hub dwhwizard_imppackagename dwh wizard. template for generated import package names imp_{connector_name}{nr} dwhwizard_link_packagename dwh wizard. template for generated link packages hist_{connector_name}_link{nr} dwhwizard_link_tablename dwh wizard. template for generated link tables {src_name}_link dwhwizard_link_transname dwh wizard. template for generated link transformations {src_name}_link dwhwizard_sat_packagename dwh wizard. template for generated sat packages hist_{connector_name}_sat{nr} dwhwizard_sat_tablename dwh wizard. template for generated sat tables {src_name}_sat dwhwizard_sat_transname dwh wizard. template for generated sat transformations {src_name}_sat dwhwizard_tablename dwh wizard. template for generated table names {src_name} dwhwizard_tablesperpackage dwh wizard. tables per package 10 force_description_inheritance force inheritance of table and column description: 0 - no, 1 - yes 0 force_friendlynames_inheritance force inheritance of table and column friendly names: 0 - no, 1 - yes 0 friendlyname_pattern_calendar_id autogenerated friendly name of hist_id (satz_id) field in calendar dimension. you can use {tablename}, {tableid} and {cr} placeholders calendar friendlyname_pattern_datefrom autogenerated friendly name of datefrom (dat_von_hist) field. you can use {tablename}, {friendlyname}, {tableid} and {cr} placeholders {friendlyname}_validfrom friendlyname_pattern_dateto autogenerated friendly name of dateto (dat_bis_hist) field. you can use {tablename}, {friendlyname}, {tableid} and {cr} placeholders {friendlyname}_validto friendlyname_pattern_duplicated_columns autogenerated replacement of duplicated friendly names. you can use {friendlyname}, {columnname}, {columnid} and {nr} (consecutive number) placeholders {friendlyname}_{columnname} friendlyname_pattern_duplicated_tables autogenerated replacement of duplicated friendly names. you can use {friendlyname}, {tablename}, {tableid} and {nr} (consecutive number) placeholders {friendlyname}_{tablename} friendlyname_pattern_hist_id autogenerated friendly name of hist_id (satz_id) field. you can use {tablename}, {friendlyname}, {tableid} and {cr} placeholders {friendlyname} friendlyname_pattern_snapshot_id autogenerated friendly name of hist_id (satz_id) field in snapshot dimension . you can use {tablename}, {tableid} and {cr} placeholders snapshot hist_default_type 1- ssis package, 2 - stored procedure 1 hist_default_use_vaultid 0 - don't use vault_hub_id as primary key. 1 - use vault_hub_id as primary key 1 hist_do_not_close default value of \"missing record behaviour\" parameter for new historizations. 
0 - close, 1 - don't close 0 layer1_name source layer name source layer layer2_name staging layer name staging layer layer3_name persisted staging layer name persisted staging layer layer4_name transformation layer name transformation layer layer5_name data warehouse layer name data warehouse layer layer6_name data mart layer name data mart layer oledbprovider_sqlserver oledb provider for sql server sqlncli11 ref_tables_recursion_depth max recursion depth during the detection of referenced tables in the transformation wizard 5 sap_deltaq_autosync 0 - disable, 1 - enable 1 sap_deltaq_transfermode i - idoc, t - trfc t sap_description_language sap language to get table and field descriptions e sap_max_record_count max count of records returned by sap 1000 sap_theobald_version 0 - match the sql server version, number (2008, 2012 etc) 2012 sap_usetablecompression load sap using compression 1 show_hub_deps show vault hub dependencies 0 source_reference_description_pattern autogenerated source reference description. you can use {sourceschema1}, {sourcename1}, {sourceid1}, {friendlyname1}, {sourceschema2}, {sourcename2}, {sourceid2} and {friendlyname2} placeholders fk_{sourcename1}_{sourcename2} source_reference_onecol_description_pattern autogenerated one-column source reference description. you can use {sourceschema1}, {sourcename1}, {sourceid1}, {friendlyname1}, {sourceschema2}, {sourcename2}, {sourceid2}, {friendlyname2}, {columnname}, {columnid} and {columnfriendlyname} placeholders rc_{sourcename1}_{sourcename2}_{columnname} source_refresh_del_missing_imp_cols source refresh - delete missing import columns: 0 - no, 1 - yes 0 source_refresh_del_missing_sources source refresh - delete missing sources: 0 - no, 1 - yes 0 source_refresh_refresh_imp_cols source refresh - refresh import columns: 0 - no, 1 - yes 0 source_refresh_refresh_imp_desc source refresh - refresh import descriptions: 0 - no, 1 - yes 0 source_refresh_refresh_pk source refresh - refresh primary keys in import tables: 0 - no, 1 - yes 0 source_refresh_refresh_src_desc source refresh - refresh source descriptions: 0 - no, 1 - yes 0 ssis_replace_decimal_separator 0 - do not replace, 1 - replace point by comma, 2 - replace comma by point 1 sync_timeout timeout for dwh synchronization, seconds 600 table_compression_type default table compression type: 1 - none, 2 - page, 3 - row 1 table_reference_description_pattern autogenerated table reference description. you can use {tableschema1}, {tablename1}, {tableid1}, {friendlyname1}, {tableschema2}, {tablename2}, {tableid2} and {friendlyname2} placeholders fk_{tablename1}_{tablename2} table_reference_onecol_description_pattern autogenerated one-column table reference description. you can use {tableschema1}, {tablename1}, {tableid1}, {friendlyname1}, {tableschema2}, {tablename2}, {tableid2}, {friendlyname2}, {columnname}, {columnid} and {columnfriendlyname} placeholders rc_{tablename1}_{tablename2}_{columnname} thumbnail_diagram_dock 0 - no dock, 1 - left top corner, 2 - right top corner, 3 - left down corner, 4 - right down corner 4 thumbnail_diagram_height height (points) 300 thumbnail_diagram_left left (points) 0 thumbnail_diagram_margin margin (points) 30 thumbnail_diagram_show 0 - do not show, 1 - show 1 thumbnail_diagram_top top (points) 0 thumbnail_diagram_width width (points) 300 trans_default_use_vault_relations 0 - use business relations rather than vault relations. 
1 - use vault relations rather than business relations 1 trans_friendly_names_as_column_names use friendly names as column names in transformations: 0 - no, 1 - yes 1 transformations_createviews create view when saving transformation: 2-yes, 1-compile only, 0-no 0 scripts scripts are a set of sql commands that will be executed under specific conditions. there are four types of scripts: pre-deployment script: this script is executed prior to dwh synchronization and before deployment. post-deployment script: this script is executed after dwh synchronization and before deployment. pre-workflow script: this script is executed in the workflow package before starting all other packages. post-workflow script: this script is executed in the workflow package after all other packages have finished. deployment script control the deployment script can be disabled during synchronization or deployment by using the \"do not deploy\" and \"do not synchronize\" flags. below is a typical script definition: pre and post-deployment scripts for stored procedures pre and post-deployment scripts can be used to create stored procedures for use in transformations. in this case, the script should be executed only during data warehouse (dwh) synchronization. including the create procedure script during deployment is unnecessary, as the procedure definition is already included in the deployment package. macros a macro is a powerful tool used to simplify transformation definitions in analyticscreator. every macro has the following components: name: the name of the macro. language: the programming language used in the macro. definition statement: the logic or functionality defined within the macro. referenced table (optional): used for auto-referencing in transformations. currently, two languages are supported: t-sql and ssis. t-sql macros: used in transformations, calculated fields, and database objects. ssis macros: used in ssis statements for import constraints or field logic. macro example a typical macro definition: macro statement and parameters every macro uses positional parameters like :1, :2, etc. to call a macro, prefix it with @ and supply parameters in parentheses. for example: @date2id(t1.budat) this will be parsed into: convert(bigint, isnull(datediff(dd, '20000101', convert(date, t1.budat)) + 1, 0)) macro parameters and null replacement if fewer parameters are passed than defined, the remaining placeholders will be replaced by null. @date2id() results in: convert(bigint, isnull(datediff(dd, '20000101', convert(date, null)) + 1, 0)) referenced table parameter the referenced table parameter allows automatic creation of a reference between a field and the referenced table, based on the macro logic. macro usage in transformations macros are commonly used in transformation column definitions. for example: this will be parsed in the transformation view as: [fk_modifieddate] = case when t1.modifieddate < '19800101' then 0 when t1.modifieddate > '20401231' then convert(bigint, datediff(dd, '19800101', '20401231') + 1) else convert(bigint, isnull(datediff(dd, '19800101', convert(date, t1.modifieddate)) + 1, 0)) end macro updates if a macro definition is changed, all dependent transformations and calculated fields will be recalculated automatically to reflect the change."}
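Putting the macro pieces together: the definition statement of a date2id-style T-SQL macro carries the positional parameter :1, and a call such as @date2id(t1.budat) expands to the convert/datediff expression quoted above. A sketch, with a hypothetical source table and column alias:

```sql
-- Sketch of a date2id-style T-SQL macro. The stored definition statement
-- would read (with :1 as the positional parameter):
--   convert(bigint, isnull(datediff(dd, '20000101', convert(date, :1)) + 1, 0))
--
-- A transformation column calling @date2id(t1.budat) then compiles to:
SELECT [fk_budat] = CONVERT(bigint,
           ISNULL(DATEDIFF(dd, '20000101', CONVERT(date, t1.budat)) + 1, 0))
FROM   [stage].[invoices] AS t1;   -- hypothetical source table and alias
```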
,
{"id":194803189334,"name":"Reference","type":"category","path":"/docs/reference","breadcrumb":"Reference","description":"","searchText":"reference structured reference for the analyticscreator user interface, entities, types, and parameters. this reference guide is organized into sections and subsections to help you quickly find interface elements, object types, dialogs, wizards, and configuration details in analyticscreator. sections [link:365118109942|user interface] toolbar, navigation tree, dataflow diagram, pages, lists, dialogs, and wizards. [link:365178121463|entity types] connector types, source types, table types, transformation types, package types, and more. [link:365178123475|entities] reference pages for main analyticscreator object classes such as layers, sources, tables, and packages. [link:365178123499|parameters] system and project parameters including technical and environment-related settings."}
,{"id":379959760082,"name":"User Interface","type":"section","path":"/docs/reference/menu-user-interface","breadcrumb":"Reference › User Interface","description":"","searchText":"reference user interface the analyticscreator user interface is designed to support structured, metadata-driven development of data products. it provides a clear separation between modeling, configuration, and generation activities, enabling users to navigate complex data solutions efficiently. the interface is organized into multiple functional areas that work together: navigation & repository structure provides access to repositories, object groups, and individual objects. it reflects the logical organization of the data solution and supports collaboration across teams. design & modeling area the central workspace where users define sources, transformations, and data products. this includes visual representations of data flows and dependencies, supporting transparency and impact analysis. properties & configuration panels context-sensitive panels that allow detailed configuration of selected objects, including technical settings, mappings, and behavior definitions. [link:373340983495|toolbar] offers quick access to key actions such as synchronization, validation, and deployment, enabling an efficient workflow from design to delivery. lineage & dependency visualization displays relationships between objects and data flows. users can explore upstream and downstream dependencies to understand the impact of changes. the interface follows a metadata-driven approach: users define logic and structure once, and analyticscreator generates the corresponding technical artifacts. this ensures consistency, traceability, and efficient lifecycle management across environments."}
,{"id":383153355998,"name":"Entity types","type":"section","path":"/docs/reference/entity-types","breadcrumb":"Reference › Entity types","description":"","searchText":"reference entity types entity types"}
,{"id":383153900778,"name":"Entities","type":"section","path":"/docs/reference/entities","breadcrumb":"Reference › Entities","description":"","searchText":"reference entities entities"}
,{"id":383153355999,"name":"Parameters","type":"section","path":"/docs/reference/parameters","breadcrumb":"Reference › Parameters","description":"","searchText":"reference parameters parameters"}
,
{"id":193656403543,"name":"Tutorials","type":"category","path":"/docs/tutorials","breadcrumb":"Tutorials","description":"","searchText":"tutorials to become familiar with analyticscreator, we have made certain data sets available. you may use these to test analyticscreator: click here for the northwind data warehouse"}
,{"id":194803189349,"name":"Northwind DWH Walkthrough","type":"section","path":"/docs/tutorials/northwind-dwh-walkthrough","breadcrumb":"Tutorials › Northwind DWH Walkthrough","description":"","searchText":"tutorials northwind dwh walkthrough step-by-step: sql server northwind project create your first data warehouse with analyticscreator analyticscreator offers pre-configured demos for testing within your environment. this guide outlines the steps to transition from the northwind oltp database to the northwind data warehouse model. once completed, you will have a fully generated dwh project ready to run locally. load the demo project from the file menu, select load from cloud. choose nw_demo enter a name for your new repository (default: nw_demo) note: this repository contains metadata only—no data is moved. analyticscreator will automatically generate all required project parameters. project structure: the 5-layer model analyticscreator will generate a data warehouse project with five layers: sources — raw data from the source system (northwind oltp). staging layer — temporary storage for data cleansing and preparation. persisted staging layer — permanent storage of cleaned data for historization. core layer — integrated business model—structured and optimized for querying. datamart layer — optimized for reporting—organized by business topic (e.g., sales, inventory). northwind setup (if not already installed) step 1: check if the northwind database exists open sql server management studio (ssms) and verify that the northwind database is present. if yes, skip to the next section. if not, proceed to step 2. step 2: create the northwind database run the setup script from microsoft: 📥 download script or copy-paste it into ssms and execute. step 3: verify database use northwind; go select * from information_schema.tables where table_schema = 'dbo' and table_type = 'base table'; once confirmed, you can proceed with the next steps to configure the analyticscreator connector with your northwind database. note: analyticscreator uses only native microsoft connectors, and we do not store any personal information. step 4: change database connector navigate to sources > connectors. you will notice that a connector is already configured. for educational purposes, the connection string is not encrypted yet. to edit or add a new connection string, go to options > encrypted strings > add. paste your connection string as demonstrated in the video below. after adding the new connection string, it's time to test your connection. go to sources — connectors and press the test button to verify your connection. step 5: create a new deployment in this step, you'll configure and deploy your project to the desired destination. please note that only the metadata will be deployed; there will be no data movement or copy during this process. navigate to deployments in the menu and create a new deployment. assign a name to your deployment. configure the connection for the destination set the project path where the deployment will be saved. select the packages you want to generate. review the connection variables and click deploy to initiate the process. finally, click deploy to complete the deployment. in this step, your initial data warehouse project is created. note that only the metadata—the structure of your project—is generated at this stage. 
you can choose between two options for package generation: ssis (sql server integration services) adf (azure data factory) ssis follows a traditional etl tool architecture, making it a suitable choice for on-premises data warehouse architectures. in contrast, adf is designed with a modern cloud-native architecture, enabling seamless integration with various cloud services and big data systems. this architectural distinction makes adf a better fit for evolving data integration needs in cloud-based environments. to execute your package and move your data, you will still need an integration runtime (ir). keep in mind that analyticscreator only generates the project at the metadata level and does not access your data outside the analyticscreator interface. your data is never transmitted to analyticscreator, ensuring that it remains secure in its original location. for testing purposes, you can run your package in microsoft visual studio 2022, on your local sql server, or even in azure data factory."}
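An optional sanity check after creating the Northwind database, before wiring up the connector: the classic Microsoft sample ships with 91 customers, 830 orders, and 2155 order detail rows; treat these counts as approximate if you used a newer script version.

```sql
-- Sanity check: the core Northwind OLTP tables should contain data.
USE Northwind;
GO
SELECT 'Customers'     AS table_name, COUNT(*) AS row_count FROM dbo.Customers
UNION ALL
SELECT 'Orders',        COUNT(*) FROM dbo.Orders
UNION ALL
SELECT 'Order Details', COUNT(*) FROM dbo.[Order Details];
```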
,
{"id":194166266410,"name":"Functions","type":"category","path":"/docs/functions-features","breadcrumb":"Functions","description":"","searchText":"functions get started by clicking on one of these sections: main functionality gui process support data sources export functionality use of analytics frontends"}
,{"id":194166266411,"name":"Main Functionality","type":"section","path":"/docs/functions-features/main-functionality","breadcrumb":"Functions › Main Functionality","description":"","searchText":"functions main functionality full bi-stack automation: from source to data warehouse through to frontend. holistic data model: complete view of the entire data model. this also allows for rapid prototyping of various models. data warehouses: ms sql server 2012-2022, azure sql database, azure synapse analytics dedicated, azure sql managed instance, sql server on azure vms, ms fabric sql. analytical databases: ssas tabular databases, ssas multidimensional databases, azure synapse analytics dedicated, power bi, power bi premium, duck db, tableau, and qlik sense. data lakes: ms azure blob storage, onelake. frontends: power bi, qlik sense, tableau, powerpivot (excel). pipelines/etl: sql server integration packages (ssis), azure data factory 2.0 pipelines, azure data bricks, fabric data factory. azure: azure sql server, azure data factory pipelines. deployment: visual studio solution (ssdt), creation of dacpac files, ssis packages, data factory arm templates, xmla files. modelling approaches: top-down modelling, bottom-up modelling, import from external modelling tool, dimensional/kimball, data vault 2.0, mixed approach of dv 2.0 and kimball (a combination the best of both worlds by using elements of both data vault 2.0 and kimball modelling), inmon, 3nf, or any custom data model. the analyticscreator wizard can help you create a data vault model automatically and also supports strict dan linstead techniques and data vaults. historization approaches: slowly changing dimensions (scd) type 0, type 1, type 2, mixed, snapshot historization, gapless historization, change-based calculations. surrogate key: auto-increment, long integer, hash key, custom definition of hash algorithm."}
,{"id":194166266415,"name":"GUI","type":"section","path":"/docs/functions-features/gui","breadcrumb":"Functions › GUI","description":"","searchText":"functions gui windows gui embedded version control multi-user development supporting distributed development manual object locking possible predefined templates cloud-based repository cloud service support available data lineage macro language for more flexible development predefined, datatype-based transformations calculated columns in each dwh table single point development: the whole design is possible in analyticscreator. external development not necessary embedding external code automatic documentation in word and visio export to microsoft devops, github, .. analyticscreator repository is stored in a ms sql server and can be modified and extended with additional functionality"}
,{"id":194166266416,"name":"Process support","type":"section","path":"/docs/functions-features/process-support","breadcrumb":"Functions › Process support","description":"","searchText":"functions process support etl procedure protocol error handling on etl procedures consistency on etl failure rollback on etl procedures automatic recognition of source structure changes and automatic adaptation of connected dwh entire dwh life-cycle support delta and full load of data models near real-time data loads possible external orchestration/scheduling for etl process internal orchestration/scheduling for etl process with generated ms-ssis packages several workflow configurations no is necessary runtime for analyticscreator daily processing of created dhws are run without analyticscreator no additional licences necessary for design component no ms sql server necessary"}
,{"id":194166266417,"name":"Data Sources","type":"section","path":"/docs/functions-features/data-sources","breadcrumb":"Functions › Data Sources","description":"","searchText":"functions data sources build-in connectivity: ms sql server, oracle, sap erp, s4/hana with theobald software (odp, deltaq/tables), sap business one with analyticscreator own connectivity, sap odp objects, excel, access, csv/text, oledb (e.g. terradata, netezza, db2..), odbc (mysql, postgres) , odata , azure blob storage (csv, parquet, avro), rest, ms sharepoint, google ads, amazon, salesforce crm, hubspot crm, ms dynamics 365 business central, ms dynamics navision 3rd party connectivity: access to more than 250+ data source with c-data connector [www.cdata.com/drivers]. this allows for connection to analyticscreator directly by an odbc, or ole db driver, or by connecting an ingest layer with externally filled tables. define your own connectivity: (any data source, hadoop, google bigquery/analytics, amazon, shop solutions, facebook, linkedin, x (formerly twitter)) in all cases of access to source data an analyticscreator-metadata-connector is created. the analyticscreator-metadata-connector is a description of data-sources you use for more easy handling in analyticscreator. analyticscreator is able to automatically create a metadata connector by extracting the data definition from your source data. it contains information about key fields, referential integrity, name of fields and description."}
,{"id":194166266418,"name":"Export Functionality","type":"section","path":"/docs/functions-features/export-functionality","breadcrumb":"Functions › Export Functionality","description":"","searchText":"functions export functionality azure blob storage, text, csv files, any target system using oledb or odbc driver, automated type conversation, export performed by ssis packages or azure data factory pipelines export for example to oracle, snowflake, synapse"}
,{"id":194166266419,"name":"Use of Analytics Frontends","type":"section","path":"/docs/functions-features/use-of-analytics-frontends","breadcrumb":"Functions › Use of Analytics Frontends","description":"","searchText":"functions use of analytics frontends push concept: power bi, tableau, and qlik models will be created automatically. all models described here will be created at the same time. pull concept: there are many bi frontends around which allows you to connect with the specified microsoft data. check with your vendor or us what is possible. analyticscreator allows you to develop a specific solution for your analytics frontend in the way that the model will be created automatically for your bi frontend (push concept)."}
,
{"id":365016789181,"name":"Unnamed Category","type":"category","path":"/docs/","breadcrumb":"Unnamed Category","description":"","searchText":"unnamed category executive summary reference guide structure analysis structural overview and hubdb 3-level mapping feasibility 4 top-level sections 44 subsections 189 topic pages 3 hierarchy levels structure overview the reference guide is organized into a clean 3-level hierarchy. the spreadsheet uses columns menu → submenu → subsubmenu to define the tree. each entry also carries an id, description, ac visual element reference, and multiple \"call from\" paths (navigation tree, toolbar, diagram, visual element). section subsections (l2) topics (l3) max depth 1. user interface 8 127 3 levels 2. entity types 9 62 3 levels 3. entities 17 0 2 levels 4. parameters 10 0 2 levels sections 1 and 2 use the full 3-level depth. sections 3 and 4 are 2-level only (menu → submenu with no sub-items). 1. user interface the largest section (127+ topics) covering all visual aspects of the application. contains 8 subsections: common information, toolbar (9 items), navigation tree (18 items), dataflow diagram (14 items), pages (26 items), lists (30 items), dialogs (17 items), and wizards (13 items). 2. entity types documents all type classifications across 9 subsections: connector types (12), source types (5), table types (9, including datavault), transformation types (7), transformation historization types (5), join historization types (5), package types (7), sql script types (7), and schema types (5). total: 62 topics. 3. entities covers 17 core entity definitions: layer, schema, connector, source, table, transformation, package, index, partition, hierarchy, macro, sql script, object script, deployment, object group, filter, and model. flat structure with no further nesting. 4. parameters documents 10 configuration parameters (ac_log, table_compression_type, pers_default_partswitch, diagram_name_pattern, and more), plus an \"other parameters\" catch-all page. two-level structure only. hubdb 3-level mapping can this structure fit into a hubdb table with three columns: category → section → topic? ✓ yes — this is a natural fit. the spreadsheet's menu → submenu → subsubmenu hierarchy maps directly to a 3-level hubdb schema. the two shallow sections (entities and parameters) simply leave the topic column null or use the item as both section and topic. hubdb column maps to count examples level 1: category menu column 4 user interface, entity types, entities, parameters level 2: section submenu column 44 toolbar, navigation tree, connector types, pages, lists level 3: topic subsubmenu column 189 file, mssql, import, historization, dwh wizard, login sample hubdb rows id category section topic 1.2.2 user interface toolbar file 1.2.6 user interface toolbar etl 1.3.2 user interface navigation tree connectors 1.3.3 user interface navigation tree layers 1.3.4 user interface navigation tree packages 1.3.5 user interface navigation tree indexes 1.3.6 user interface navigation tree roles 1.3.7 user interface navigation tree galaxies 1.3.8 user interface navigation tree hierarchies 1.3.9 user interface navigation tree partitions 2.1.1 entity types connector types mssql 3.5 entities table null 4.1 parameters ac_log null considerations ✓ clean 3-level fit menu → submenu → subsubmenu maps 1:1 to category → section → topic with no restructuring needed. ✓ consistent ids every row has a hierarchical id (e.g., 1.5.12) usable as a unique slug or sort key in hubdb. 
✓ metadata-ready extra columns (description, ac element, call paths) store as additional hubdb columns alongside the 3-level hierarchy. ⚠ shallow sections sections 3 (entities) and 4 (parameters) are only 2 levels deep. the topic column will be null for ~27 rows. use a default or mirror the section name. analyticscreator reference guide — structure analysis • generated from referenceguidestructure.xlsx"}
,
{"id":380121784537,"name":"Unnamed Category","type":"category","path":"/docs/","breadcrumb":"Unnamed Category","description":"","searchText":"unnamed category analyticscreator reference guide the analyticscreator is the central storage location for all metadata related to your data warehouse projects. it serves as the foundation for organizing and managing the various elements of a data warehouse, ensuring consistency, scalability, and efficient collaboration across teams. what is the analyticscreator repository? the repository stores all data warehouse project metadata information, including details about data sources, transformations, layers, and configurations. it is designed to act as a centralized structure where users can: define and manage data warehouse artifacts. configure and store database objects and workflows. organize elements into logical folders for better accessibility. while the repository encompasses all metadata, not every item within it needs to be actively used, allowing flexibility in managing large and complex projects. repository structure the repository is organized into folders, with each folder representing a specific data warehouse artifact or database object. these objects include but are not limited to: connectors: configurations for connecting to external data sources like mssql, oracle, or sap. layers: hierarchical structures for organizing data, such as staging, core, and data marts. packages: collections of related objects or configurations for deployment. indexes: structures to improve query performance by optimizing data retrieval. roles: access controls and permissions for users interacting with the data warehouse. galaxies, hierarchies, partitions, and parameters: components used in data modeling to define relationships, subsets, and configurations. macros and scripts: reusable logic and code snippets for data transformations and operations. object scripts: scripts tied to specific data objects for precise customizations. filters: tools for selecting or excluding specific data based on defined conditions. predefined transformations: built-in processes to streamline common data processing tasks. snapshots: static copies or versions of data at specific points in time for auditing or rollback purposes. deployments: configurations and workflows for deploying changes to the data warehouse. groups: logical groupings of related objects or users for better management. models: representations of the structure and relationships within the data warehouse. types of repositories analyticscreator supports three types of repositories, offering flexibility in storage and collaboration: sql server repository stored in microsoft sql server databases. ideal for centralized storage and multi-user collaboration in larger projects. local file repository stored locally on your system. suitable for individual users or small-scale projects requiring minimal setup. analyticscreator cloud repository a cloud-based storage solution. enables seamless collaboration and remote access, making it ideal for distributed teams. both the sql server repository and cloud repository are essentially microsoft sql server databases with a predefined schema to store all analyticscreator metadata. no additional software is required for setup. key benefits of the repository centralized management all metadata is stored in one location, ensuring consistency and reducing redundancy. scalability supports projects of all sizes, from small, local setups to large, multi-user cloud environments. 
flexibility allows users to organize, customize, and manage artifacts based on project requirements. collaboration with sql server or cloud repositories, teams can work collaboratively on shared projects. best practices for using the repository organize folders: group objects logically to reflect the structure and purpose of your data warehouse. use appropriate types: select the repository type that best suits your project scale and team collaboration needs. regular backups: for sql server and local repositories, ensure regular backups to prevent data loss. optimize performance: use indexes, filters, and partitions effectively to manage large datasets efficiently. version control: keep track of changes and maintain versioning to facilitate rollback if necessary. the analyticscreator repository is a robust and versatile solution for managing metadata, enabling you to build scalable and efficient data warehouses. its flexibility across storage types and comprehensive feature set make it a cornerstone of analyticscreator's functionality."}
]