[
{"id":383461199041,"name":"Getting Started","type":"category","path":"/docs/getting-started","breadcrumb":"Getting Started","description":"","searchText":"getting started this section provides the fastest path to understanding how to set up and use analyticscreator. it focuses on how a data warehouse is generated, deployed, and executed based on metadata definitions. if you are new to analyticscreator, start with the quick start guide. it walks through the full workflow from repository creation to data consumption. recommended path quick start guide end-to-end implementation flow from metadata to deployed data warehouse understanding analyticscreator architecture, layers (stg, core, dm), and design principles installation and configuration system setup and environment configuration typical workflow create repository define connectors run data warehouse wizard refine model synchronize database deploy artifacts execute workflows consume data available sections installation system requirements download and installation understanding analyticscreator quick start guide"}
,{"id":383225948363,"name":"Quick Start Guide","type":"section","path":"/docs/getting-started/quick-start-guide","breadcrumb":"Getting Started › Quick Start Guide","description":"","searchText":"getting started quick start guide this quick start guide helps new and trial users understand how to set up, model, and automate a data warehouse using analyticscreator. it follows the actual execution flow of the application, from metadata definition to deployment and execution, and explains how sql-based warehouse structures are generated and processed. the guide assumes: strong sql and etl background familiarity with layered dwh design (stg, core, dm) core concept analyticscreator is a metadata-driven design application that generates sql-based data warehouse structures, transformation logic, and orchestration components. instead of manually implementing etl processes, developers define metadata, which is translated into executable database objects and pipelines. the process follows a generation-driven approach: connect to source systems import metadata (tables, columns, keys, relationships) generate a draft data warehouse model using the wizard refine transformations, keys, and historization generate and deploy sql artifacts and pipelines execute data loading and processing workflows a key architectural element is the persistent staging layer (stg): source data is stored persistently after extraction supports reprocessing without re-reading the source system decouples ingestion from transformation and historization in practice, staging is followed by a second layer where historization is applied before data is transformed into core structures (dimensions and facts). quick start flow the implementation process in analyticscreator follows a defined sequence: create repository initialize a metadata repository (sql server database) that stores all definitions of the data warehouse. create connectors define connections to source systems (e.g. 
sap, sql server) and enable metadata extraction. import metadata and run wizard automatically read source structures and generate a draft data warehouse model (stg, core, dm). refine the model adjust business keys, surrogate keys, relationships, historization behavior, and transformations. synchronize generate sql objects (tables, views, procedures) and materialize the structure in the target database. deploy generate and deploy deployment packages (dacpac, pipelines, semantic models). execute workflows run generated pipelines (e.g. ssis, azure data factory) to load and process data. consume data use generated data marts and semantic models in reporting tools (e.g. power bi). what this quick start covers create connectors and define relationships (foreign keys, references) import and persist source data in the stg layer understand historization and persistent staging behavior build and refine core transformations (dimensions and facts) define business keys and surrogate keys create data marts (dm layer) and calendar dimensions generate and deploy sql server, pipeline, and analytical model artifacts"}
,{"id":383225948362,"name":"Understanding AnalyticsCreator","type":"section","path":"/docs/getting-started/understanding-analytics-creator","breadcrumb":"Getting Started › Understanding AnalyticsCreator","description":"","searchText":"getting started understanding analyticscreator analyticscreator is a metadata-driven design application for building and automating data warehouses and analytical models. instead of manually implementing etl and sql logic, developers define metadata such as sources, keys, relationships, transformations, and loading behavior. analyticscreator uses these definitions to generate database objects, pipelines, and semantic models. how analyticscreator works the workflow in analyticscreator starts with a repository, continues with source metadata import, and then uses a wizard to generate a draft data warehouse model. that model is refined, synchronized into sql objects, deployed to the target environment, and finally executed through generated workflows or pipelines. create a repository define or import connectors import source metadata run the data warehouse wizard refine the generated model synchronize the structure deploy artifacts execute workflows consume data through data marts and semantic models repository and metadata every analyticscreator project is based on a repository. the repository is a sql server database that stores the full metadata definition of the data warehouse. this includes connectors, source objects, transformations, keys, relationships, deployment settings, and other object definitions. the repository is the design-time control layer and the source for all generated artifacts. this means the target database is not modeled manually. instead, analyticscreator reads the repository metadata and generates the required sql structures from it. generated code can run independently after deployment because analyticscreator is used as a design-time application, not as a runtime dependency. 
connectors and metadata import analyticscreator connects to source systems such as sql server or sap and imports structural metadata including tables, columns, keys, and references. in some scenarios, metadata can also be imported through metadata connectors, which makes it possible to model a data warehouse without an active connection to the live source system during design. imported metadata is stored in the repository and later used by the wizard to generate the draft warehouse model. at this stage, no warehouse data has been loaded yet. only structure and metadata are being captured. the wizard the data warehouse wizard is the central acceleration mechanism in analyticscreator. it analyzes source metadata and generates a draft warehouse model automatically. depending on the selected approach, this can be a dimensional model, a data vault model, or a mixed approach. the wizard can create staging structures, historization layers, dimensions, facts, calendar dimensions, and default relationships based on detected metadata. the generated model is not the end result. it is the baseline that developers refine and validate. the main engineering work happens after generation, when keys, joins, historization behavior, measures, and transformations are adjusted to fit the intended warehouse design. warehouse layers analyticscreator supports a layered warehouse architecture from source to presentation. in a typical setup, this includes source objects, staging, persistent staging or historization, core transformations, data marts, and semantic or reporting layers. it can also generate analytical models for tools such as power bi. persistent staging a key architectural concept is the persistent staging layer. source data is first imported into staging structures and then stored persistently for further processing. this persistent layer is used for historization and for decoupling source extraction from downstream transformations. 
it allows data to be reprocessed without repeatedly reading the source system. in dimensional scenarios, historized tables typically include surrogate keys together with valid-from and valid-to columns. in data vault and hybrid scenarios, additional hash-based keys and references can be generated in the staging layer as persisted calculated columns and then reused in later layers. transformations transformations in analyticscreator are usually generated as sql views based on metadata definitions. these definitions specify source tables, joins, selected columns, macros, and transformation rules. in many cases, the default generated view logic is sufficient as a starting point, but it can be refined through metadata rather than by rewriting generated sql directly. analyticscreator also supports reusable macros for standard sql logic, such as date-to-calendar-key conversion or hash key generation. this allows repeated logic to be defined once and reused consistently across the model. synchronization, deployment, and execution these three steps are related but different and should not be confused. synchronization synchronization materializes the metadata model into sql objects in the target database. this creates the database structure defined in analyticscreator, such as tables, views, and procedures. it does not mean that business data has already been loaded. deployment deployment creates and distributes deployable artifacts for the selected target environment. these can include sql database packages, ssis packages, azure data factory pipelines, and semantic models. deployment prepares the environment but still does not imply that source data has already been processed. execution execution runs the generated workflows and pipelines. this is the step where source data is actually extracted, written to staging, historized where required, transformed into core structures, and exposed through data marts and semantic models. 
in azure scenarios, this may happen through azure data factory. in on-premise scenarios, this may happen through ssis. consumption after execution, the data warehouse can be consumed through data marts and semantic models. these structures are intended for reporting and analytics, while lower layers such as staging and historization should remain implementation layers rather than direct reporting interfaces. analyticscreator can generate tabular models and structures for tools such as power bi. design implications the repository is the source of truth metadata drives generation, not manual sql-first development the wizard creates a baseline, not a final production model persistent staging is part of the architecture, not just a temporary landing area synchronization, deployment, and execution are separate steps consumption should happen from data marts or semantic models, not from staging layers key takeaway analyticscreator works by storing warehouse definitions as metadata, generating sql and orchestration artifacts from that metadata, and then deploying and executing those artifacts in the target environment."}
,{"id":383225948358,"name":"Installation","type":"section","path":"/docs/getting-started/installation","breadcrumb":"Getting Started › Installation","description":"","searchText":"getting started installation installing analyticscreator: 32-bit and 64-bit versions this guide offers step-by-step instructions for installing either the 32-bit or 64-bit version of analyticscreator, depending on your system requirements. ⓘ note: to ensure optimal performance, verify that your system meets the following prerequisites before installation."}
{"id":383225948359,"name":"System Requirements","type":"section","path":"/docs/getting-started/system-requirements","breadcrumb":"Getting Started › System Requirements","description":"","searchText":"getting started system requirements to ensure optimal performance, verify that the following requirements are met: ⓘ note: if you already have sql server installed and accessible, you can proceed directly to the launching analyticscreator section. networking: analyticscreator communicates with the analyticscreator server over port 443. operating system: windows 10 or later. analyticscreator is compatible with windows operating systems starting from version 10. ⓘ note: port 443 is the standard https port for secured transactions. it is used for data transfers and ensures that data exchanged between a web browser and websites remains encrypted and protected from unauthorized access. microsoft sql server: sql server on azure virtual machines azure sql managed instances"}
,{"id":383225948360,"name":"Download and Installation","type":"section","path":"/docs/getting-started/download-and-installation","breadcrumb":"Getting Started › Download and Installation","description":"","searchText":"getting started download and installation access the download page navigate to the analyticscreator download page download the installer locate and download the installation file. verify sql server connectivity before proceeding with the installation, confirm that you can connect to your sql server instance. connecting to sql server: to ensure successful connectivity: use sql server management studio (ssms), a tool for managing and configuring sql server. if ssms is not installed on your system, download it from the official microsoft site: download sql server management studio (ssms) install the software once connectivity is confirmed, follow the instructions below to complete the installation."}
,{"id":383225948361,"name":"Configuring AnalyticsCreator","type":"section","path":"/docs/getting-started/configuring-analyticscreator","breadcrumb":"Getting Started › Configuring AnalyticsCreator","description":"","searchText":"getting started configuring analyticscreator this guide will walk you through configuring analyticscreator with your system. provide the login and password that you received by e-mail from analyticscreator minimum requirements configuration settings the configuration of analyticscreator is very simple. the only mandatory configuration is the sql server settings. sql server settings use localdb to store repository: enables you to store the analyticscreator project (metadata only) on your localdb. sql server to store repository: enter the ip address or the name of your microsoft sql server. security integrated: authentication is based on the current windows user. standard: requires a username and password. azure ad: uses azure ad (now microsoft entra) for microsoft sql server authentication. trust server certificate: accepts the server's certificate as trusted. sql user: the sql server username. sql password: the corresponding password. optional requirements paths unc path to store backup: a network path to store project backups. local sql server path to store backup: a local folder to store your project backups. local sql server path to store database: a local folder to store your sql server database backups. repository database template: the alias format for your repositories. default: repo_{reponame}. dwh database template: the alias format for your dwh templates. default: dwh_{reponame}. proxy settings proxy address: the ip address or hostname of your proxy server. proxy port: the port number used by the proxy. proxy user: the username for proxy authentication. proxy password: the password for the proxy user. now you're ready to create your new data warehouse with analyticscreator."}
,
{"id":383461199042,"name":"User Guide","type":"category","path":"/docs/user-guide","breadcrumb":"User Guide","description":"","searchText":"user guide you can launch analyticscreator in two ways: from the desktop icon after installation or streaming setup, a desktop shortcut is created. double-click the icon to start analyticscreator. from the installer window open the downloaded analyticscreator installer. instead of selecting install, click launch (labeled as number one in the image below). a window will appear showing the available analyticscreator servers, which deliver the latest version to your system. this process launches analyticscreator without performing a full installation, assuming all necessary prerequisites are already in place."}
,{"id":383225948364,"name":" Desktop Interface","type":"section","path":"/docs/user-guide/desktop-interface","breadcrumb":"User Guide › Desktop Interface","description":"","searchText":"user guide desktop interface with analyticscreator desktop users can: data warehouse creation automatically generate and structure your data warehouse, including fact tables and dimensions. connectors add connections to various data sources and import metadata seamlessly. layer management define and manage layers such as staging, persisted staging, core, and datamart layers. package generation generate integration packages for ssis (sql server integration services) and adf (azure data factory). indexes and partitions automatically configure indexes and partitions for optimized performance. roles and security manage roles and permissions to ensure secure access to your data. galaxies and hierarchies organize data across galaxies and define hierarchies for better data representation. customizations configure parameters, macros, scripts, and object-specific scripts for tailored solutions. filters and predefined transformations apply advanced filters and transformations for data preparation and enrichment. snapshots and versioning create snapshots to track and manage changes in your data warehouse. deployments deploy your projects with flexible configurations, supporting on-premises and cloud solutions. groups and models organize objects into groups and manage models for streamlined workflows. data historization automate the process of creating historical data models for auditing and analysis."}
,{"id":383225948365,"name":"Working with AnalyticsCreator","type":"section","path":"/docs/user-guide/working-with-analyticscreator","breadcrumb":"User Guide › Working with AnalyticsCreator","description":"","searchText":"user guide working with analyticscreator understanding the fundamental operations in analyticscreator desktop is essential for efficiently managing your data warehouse repository and ensuring accuracy in your projects. below are key basic operations you can perform within the interface: edit mode and saving – data warehouse editor single object editing: in the data warehouse repository, you can edit one object at a time. this ensures precision and reduces the risk of unintended changes across multiple objects. how to edit: double-click on any field within an object to enter edit mode. the selected field becomes editable, allowing you to make modifications. save prompt: if any changes are made, a prompt will appear, reminding you to save your modifications before exiting the edit mode. this safeguard prevents accidental loss of changes. unsaved changes: while edits are immediately reflected in the repository interface, they are not permanently saved until explicitly confirmed by clicking the save button. accessing views in data warehouse explorer layer-specific views: each layer in the data warehouse contains views generated by analyticscreator. these views provide insights into the underlying data structure and transformations applied at that layer. how to access: navigate to the data warehouse explorer and click on the view tab for the desired layer. this displays the layer's contents, including tables, fields, and transformations. adding and deleting objects adding new objects: navigate to the appropriate section (e.g., tables, layers, or connectors) in the navigation tree. right-click and select add [object type] to create a new object. provide the necessary details, such as name, description, and configuration parameters. save the object. 
deleting objects: select the object in the navigation tree and right-click to choose delete. confirm the deletion when prompted. ⚠️ note: deleting an object may affect dependent objects or configurations. filtering and searching in data warehouse explorer filtering: use filters to narrow down displayed objects by criteria such as name, type, or creation date. searching: enter keywords or phrases in the search bar to quickly locate objects. benefits: these features enhance repository navigation and efficiency when working with large datasets. object dependencies and relationships dependency view: for any selected object, view its dependencies and relationships with other objects by accessing the dependencies tab. impact analysis: analyze how changes to one object might affect other parts of the data warehouse. managing scripts predefined scripts: add scripts for common operations like data transformations or custom sql queries. edit and run: double-click a script in the navigation tree to modify it. use run script to execute and view results. validating and testing changes validation tools: use built-in tools to check for errors or inconsistencies in your repository. evaluate changes: use the evaluate button before saving or deploying to test functionality and ensure correctness. locking and unlocking objects locking: prevent simultaneous edits by locking objects, useful in team environments. unlocking: release locks once edits are complete to allow further modifications by others. exporting and importing data export: export objects, scripts, or configurations for backup or sharing. use the export option in the toolbar or navigation tree. import: import previously exported files to replicate configurations or restore backups. use the import option and follow the prompts to load the data."}
,{"id":383225948366,"name":"Advanced Features","type":"section","path":"/docs/user-guide/advanced-features","breadcrumb":"User Guide › Advanced Features","description":"","searchText":"user guide advanced features analyticscreator provides a rich set of advanced features to help you configure, customize, and optimize your data warehouse projects. these features extend the tool's capabilities beyond standard operations, enabling more precise control and flexibility. scripts scripts in analyticscreator allow for detailed customization at various stages of data warehouse creation and deployment. they enhance workflow flexibility and enable advanced repository configurations. types of scripts object-specific scripts define custom behavior for individual objects, such as tables or transformations, to meet specific requirements. pre-creation scripts execute tasks prior to creating database objects. example: define sql functions to be used in transformations. pre-deployment scripts configure processes that run before deploying the project. example: validate dependencies or prepare the target environment. post-deployment scripts handle actions executed after deployment is complete. example: perform cleanup tasks or execute stored procedures. pre-workflow scripts manage operations that occur before initiating an etl workflow. example: configure variables or initialize staging environments. repository extension scripts extend repository functionality with user-defined logic. example: add custom behaviors to redefine repository objects. historization the historization features in analyticscreator enable robust tracking and analysis of historical data changes, supporting advanced time-based reporting and auditing. key components slowly changing dimensions (scd) automate the management of changes in dimension data. 
supports various scd types including: type 1 (overwrite) type 2 (versioning) others as needed time dimensions create and manage temporal structures to facilitate time-based analysis. example: build fiscal calendars or weekly rollups for time-series analytics. snapshots capture and preserve specific states of the data warehouse. use cases include audit trails, historical reporting, and rollback points. parameters and macros these tools provide centralized control and reusable logic to optimize workflows and streamline repetitive tasks. parameters dynamic management: centralize variable definitions for consistent use across scripts, transformations, and workflows. reusable configurations: update values in one place to apply changes globally. use cases: set default values for connection strings, table prefixes, or date ranges. macros reusable logic: create parameterized scripts for tasks repeated across projects or workflows. streamlined processes: use macros to enforce consistent logic in transformations and calculations. example: define a macro to calculate age from a birthdate and reuse it across transformations. summary analyticscreator's advanced features offer deep customization options that allow you to: control object-level behavior through scripting track and manage historical data effectively streamline project-wide settings with parameters reuse logic with powerful macros these capabilities enable you to build scalable, maintainable, and highly flexible data warehouse solutions."}
,{"id":383225948367,"name":"Wizards","type":"section","path":"/docs/user-guide/wizards","breadcrumb":"User Guide › Wizards","description":"","searchText":"user guide wizards the wizards in analyticscreator provide a guided and efficient way to perform various tasks related to building and managing a data warehouse. below is an overview of the eight available wizards and their core functions. dwh wizard the dwh wizard is designed to quickly create a semi-ready data warehouse. it is especially useful when the data source contains defined table relationships or manually maintained references. supports multiple architectures: classic (kimball), data vault 1.0 & 2.0, or mixed. automatically creates imports, dimensions, facts, hubs, satellites, and links. customizable field naming, calendar dimensions, and sap deltaq integration. source wizard the source wizard adds new data sources to the repository. supports source types: table or query. retrieves table relationships and sap-specific metadata. allows query testing and schema/table filtering. import wizard the import wizard defines and manages the import of external data into the warehouse. configures source, target schema, table name, and ssis package. allows additional attributes and parameters. historization wizard the historization wizard manages how tables or transformations are historized. supports scd types: 0, 1, and 2. configures empty record behavior and vault id usage. supports ssis-based or stored procedure historization. transformation wizard the transformation wizard creates and manages data transformations. supports regular, manual, script, and external transformation types. handles both historized and non-historized data. configures joins, fields, persistence, and metadata settings. calendar transformation wizard the calendar transformation wizard creates calendar transformations used in reporting and time-based models. configures schema, name, start/end dates, and date-to-id macros. 
assigns transformations to specific data mart stars. time transformation wizard the time transformation wizard creates time dimensions to support time-based analytics. configures schema, name, time period, and time-to-id macros. assigns transformations to specific data mart stars. snapshot transformation wizard the snapshot transformation wizard creates snapshot dimensions for snapshot-based analysis. allows creation of one snapshot dimension per data warehouse. configures schema, name, and data mart star assignment. by using these eight wizards, analyticscreator simplifies complex tasks, ensures consistency, and accelerates the creation and management of enterprise data warehouse solutions."}
,
{"id":383461199043,"name":"Reference","type":"category","path":"/docs/reference","breadcrumb":"Reference","description":"","searchText":"reference structured reference for the analyticscreator user interface, entities, types, and parameters. this reference guide is organized into sections and subsections to help you quickly find interface elements, object types, dialogs, wizards, and configuration details in analyticscreator. sections [link:365118109942|user interface] toolbar, navigation tree, dataflow diagram, pages, lists, dialogs, and wizards. [link:365178121463|entity types] connector types, source types, table types, transformation types, package types, and more. [link:365178123475|entities] reference pages for main analyticscreator object classes such as layers, sources, tables, and packages. [link:365178123499|parameters] system and project parameters including technical and environment-related settings."}
,{"id":383461259458,"name":"User Interface","type":"section","path":"/docs/reference/user-interface","breadcrumb":"Reference › User Interface","description":"","searchText":"reference user interface the analyticscreator user interface is designed to support structured, metadata-driven development of data products. it provides a clear separation between modeling, configuration, and generation activities, enabling users to navigate complex data solutions efficiently. the interface is organized into multiple functional areas that work together: navigation & repository structure provides access to repositories, object groups, and individual objects. it reflects the logical organization of the data solution and supports collaboration across teams. design & modeling area the central workspace where users define sources, transformations, and data products. this includes visual representations of data flows and dependencies, supporting transparency and impact analysis. properties & configuration panels context-sensitive panels that allow detailed configuration of selected objects, including technical settings, mappings, and behavior definitions. toolbar offers quick access to key actions such as synchronization, validation, and deployment, enabling an efficient workflow from design to delivery. lineage & dependency visualization displays relationships between objects and data flows. users can explore upstream and downstream dependencies to understand the impact of changes. the interface follows a metadata-driven approach: users define logic and structure once, and analyticscreator generates the corresponding technical artifacts. this ensures consistency, traceability, and efficient lifecycle management across environments."}
,{"id":383461259455,"name":"Entity types","type":"section","path":"/docs/reference/entity-types","breadcrumb":"Reference › Entity types","description":"","searchText":"reference entity types entity types"}
,{"id":383461259456,"name":"Entities ","type":"section","path":"/docs/reference/entities","breadcrumb":"Reference › Entities ","description":"","searchText":"reference entities entities"}
,{"id":383461259457,"name":"Parameters ","type":"section","path":"/docs/reference/parameters","breadcrumb":"Reference › Parameters ","description":"","searchText":"reference parameters parameters"}
,
{"id":383461199045,"name":"Tutorials","type":"category","path":"/docs/tutorials","breadcrumb":"Tutorials","description":"","searchText":"tutorials to become familiar with analyticscreator, we have made certain data sets available. you may use these to test analyticscreator: click here for the northwind data warehouse"}
,{"id":383225948382,"name":"Northwind DWH Walkthrough","type":"section","path":"/docs/tutorials/northwind-dwh-walkthrough","breadcrumb":"Tutorials › Northwind DWH Walkthrough","description":"","searchText":"tutorials northwind dwh walkthrough step-by-step: sql server northwind project create your first data warehouse with analyticscreator analyticscreator offers pre-configured demos for testing within your environment. this guide outlines the steps to transition from the northwind oltp database to the northwind data warehouse model. once completed, you will have a fully generated dwh project ready to run locally. load the demo project from the file menu, select load from cloud. choose nw_demo enter a name for your new repository (default: nw_demo) note: this repository contains metadata only – no data is moved. analyticscreator will automatically generate all required project parameters. project structure: the 5-layer model analyticscreator will generate a data warehouse project with five layers: sources – raw data from the source system (northwind oltp). staging layer – temporary storage for data cleansing and preparation. persisted staging layer – permanent storage of cleaned data for historization. core layer – integrated business model, structured and optimized for querying. datamart layer – optimized for reporting, organized by business topic (e.g., sales, inventory). northwind setup (if not already installed) step 1: check if the northwind database exists open sql server management studio (ssms) and verify that the northwind database is present. if yes, skip to the next section. if not, proceed to step 2. step 2: create the northwind database run the setup script from microsoft: download the script or copy-paste it into ssms and execute. 
step 3: verify database use northwind; go select * from information_schema.tables where table_schema = 'dbo' and table_type = 'base table'; once confirmed, you can proceed with the next steps to configure the analyticscreator connector with your northwind database. note: analyticscreator uses only native microsoft connectors, and we do not store any personal information. step 4: change database connector navigate to sources > connectors. you will notice that a connector is already configured. for educational purposes, the connection string is not encrypted yet. to edit or add a new connection string, go to options > encrypted strings > add. paste your connection string as demonstrated in the video below. after adding the new connection string, it's time to test your connection. go to sources > connectors and press the test button to verify your connection. step 5: create a new deployment in this step, you'll configure and deploy your project to the desired destination. please note that only the metadata will be deployed; there will be no data movement or copy during this process. navigate to deployments in the menu and create a new deployment. assign a name to your deployment. configure the connection for the destination set the project path where the deployment will be saved. select the packages you want to generate. review the connection variables and click deploy to initiate the process. finally, click deploy to complete the deployment. in this step, your initial data warehouse project is created. note that only the metadata - the structure of your project - is generated at this stage. you can choose between two options for package generation: ssis (sql server integration services) adf (azure data factory) ssis follows a traditional etl tool architecture, making it a suitable choice for on-premises data warehouse architectures. 
in contrast, adf is designed with a modern cloud-native architecture, enabling seamless integration with various cloud services and big data systems. this architectural distinction makes adf a better fit for evolving data integration needs in cloud-based environments. to execute your package and move your data, you will still need an integration runtime (ir). keep in mind that analyticscreator only generates the project at the metadata level and does not access your data outside the analyticscreator interface. it does not link your data to us, ensuring that your data remains secure in its original location. for testing purposes, you can run your package in microsoft visual studio 2022, on your local sql server, or even in azure data factory."}
,
{"id":383461199046,"name":"Platform Support","type":"category","path":"/docs/platform-support","breadcrumb":"Platform Support","description":"","searchText":"platform support this section describes how analyticscreator integrates with and generates data warehouse and analytical solutions for supported platforms. analyticscreator is a metadata-driven design application that generates sql-based data warehouse structures, orchestration pipelines, and analytical models. the generated artifacts are deployed and executed on the selected target platform. what platform support means platform support in analyticscreator defines: which services and runtimes are used for data processing which database engines and storage layers are targeted which orchestration technologies are generated which analytical models and reporting layers are supported analyticscreator itself does not execute data processing. instead, it generates artifacts that run on the selected platform. supported platforms microsoft fabric microsoft azure / data factory sql server power bi how to use this section each platform page explains: which native services are supported what analyticscreator generates for that platform how deployment and execution work which modeling approaches are supported platform-specific constraints and considerations use these pages to understand how analyticscreator maps metadata definitions to platform-specific implementations. common patterns across platforms across all supported platforms, analyticscreator follows the same core approach: metadata defines structure and behavior sql-based artifacts are generated from metadata orchestration components are generated automatically execution happens on the target platform, not inside analyticscreator key differences between platforms the main differences between platforms are: type of orchestration (e.g. 
ssis vs azure data factory) execution environment (on-premise vs cloud) supported storage and compute layers integration with analytical tools and semantic models these differences are described in detail on each platform-specific page. when to use this section evaluating which platform to use with analyticscreator understanding how generated artifacts behave on a platform designing a platform-specific data warehouse architecture key takeaway analyticscreator generates platform-specific database, pipeline, and analytical artifacts from metadata, while execution is handled by the target platform."}
,{"id":383225948376,"name":"Microsoft Fabric","type":"section","path":"/docs/platform-support/microsoft-fabric","breadcrumb":"Platform Support › Microsoft Fabric","description":"","searchText":"platform support microsoft fabric this page describes how analyticscreator generates and integrates data warehouse and analytical solutions for microsoft fabric environments. overview analyticscreator supports microsoft fabric as a target platform for data warehouse generation, orchestration, and analytical modeling. it generates sql-based structures, pipelines, and semantic models that run within fabric services. analyticscreator itself does not execute workloads inside fabric. it generates artifacts that are deployed and executed within fabric components such as sql endpoints, pipelines, and semantic models. supported services and components fabric data warehouse (sql endpoint) fabric lakehouse (sql analytics endpoint) onelake storage fabric data pipelines power bi semantic models (fabric-integrated) what analyticscreator generates for microsoft fabric, analyticscreator generates: sql objects: stg tables (import layer) persistent staging and historization tables core transformations (views or tables) dm layer (facts and dimensions) stored procedures for: data loading historization persisting logic fabric pipelines: orchestration of load and transformation steps dependency-based execution semantic models: dimensions and measures relationships between entities supported modeling approaches dimensional modeling (facts and dimensions) data vault modeling (hubs, links, satellites) hybrid approaches (data vault foundation with dimensional output) historized models (scd2 with valid-from and valid-to) both warehouse and lakehouse-style architectures can be implemented depending on the selected fabric components. 
deployment and execution model analyticscreator separates generation, deployment, and execution: analyticscreator generates sql objects, pipelines, and semantic models deployment publishes these artifacts into microsoft fabric execution is performed by fabric services (pipelines and sql engine) data processing runs inside fabric: sql transformations run on fabric sql endpoints pipelines orchestrate execution using fabric pipeline services data is stored in onelake ci/cd and version control metadata is stored in the analyticscreator repository projects can be versioned via json export (acrepo) deployment artifacts can be integrated into ci/cd pipelines fabric environments can be targeted via deployment configurations connectors, sources, and exports supported sources sap systems sql server and relational databases other supported connectors exports and targets fabric sql endpoints lakehouse tables semantic models for power bi prerequisites, limitations, and notes fabric workspace and permissions must be configured linked services or connections must be defined for pipelines sql compatibility depends on fabric sql endpoint capabilities performance depends on data volume, partitioning, and load strategy design considerations: choose between data warehouse and lakehouse based on workload use persistent staging to avoid repeated source reads validate generated joins and transformations for performance example use cases building a fabric-native data warehouse with automated sql generation implementing data vault models on top of onelake storage generating power bi-ready semantic models from warehouse structures replacing manual pipeline development with generated fabric pipelines platform-specific notes fabric unifies storage and compute, which simplifies deployment compared to separate azure services lakehouse and warehouse approaches can coexist in the same environment semantic models are tightly integrated with power bi related content quick start guide understanding 
analyticscreator fabric tutorials and examples key takeaway analyticscreator generates sql structures, pipelines, and semantic models for microsoft fabric, while execution and storage are handled by fabric services such as sql endpoints, pipelines, and onelake."}
,{"id":383225948377,"name":"Azure","type":"section","path":"/docs/platform-support/azure","breadcrumb":"Platform Support › Azure","description":"","searchText":"platform support azure this page describes how analyticscreator generates and integrates data warehouse solutions in microsoft azure environments, with a focus on azure data factory for orchestration and sql-based engines for processing. overview analyticscreator supports microsoft azure as a target environment by generating sql-based data warehouse structures, orchestration pipelines, and analytical models. azure data factory is used for workflow orchestration, while sql-based engines handle data processing and storage. analyticscreator generates all required artifacts, but execution is performed by azure services such as data factory and sql engines. supported services and components azure data factory (orchestration) azure sql database azure sql managed instance azure synapse analytics (sql pools) azure storage (as data source or staging area) power bi (analytical layer) what analyticscreator generates for azure environments, analyticscreator generates: sql objects: stg tables (import layer) persistent staging and historization tables core transformations (views or persisted tables) dm layer (facts and dimensions) stored procedures for: data loading historization persisting logic azure data factory pipelines: execution orchestration dependency handling integration with linked services semantic models for reporting tools such as power bi supported modeling approaches dimensional modeling (facts and dimensions) data vault modeling (hubs, links, satellites) hybrid approaches historized models (scd2 with valid-from and valid-to) modeling behavior is independent of azure and is defined in metadata. azure determines where and how generated logic is executed. 
deployment and execution model analyticscreator separates generation, deployment, and execution: analyticscreator generates sql objects and pipeline definitions deployment publishes these artifacts to azure services execution is handled by azure data factory and sql engines typical execution flow: azure data factory triggers pipelines data is extracted from sources data is written to stg tables stored procedures execute transformations and historization core and dm layers are updated ci/cd and version control metadata is stored in the analyticscreator repository projects can be versioned via json export (acrepo) generated artifacts can be integrated into azure devops pipelines deployment configurations support multiple environments connectors, sources, and exports supported sources sap systems sql server and azure sql flat files and external storage via data factory exports and targets azure sql database azure synapse sql pools power bi semantic models prerequisites, limitations, and notes azure subscription and resource group required data factory instance must be configured linked services must be defined for source systems sql compatibility depends on target engine (azure sql vs synapse) design considerations: azure environments are modular and require explicit configuration orchestration and storage are separated services performance depends on selected sql engine and scaling configuration example use cases building a cloud-based data warehouse using azure sql database using azure data factory to orchestrate etl pipelines implementing data vault models in azure synapse automating pipeline generation instead of manual adf development platform-specific notes azure separates orchestration (data factory) from compute (sql engines) pipeline configuration requires linked services and integration runtimes multiple sql engines can be used depending on workload requirements related content quick start guide understanding analyticscreator azure tutorials and examples key 
takeaway analyticscreator generates sql structures and azure data factory pipelines, while execution is handled by azure services such as data factory and sql-based compute engines."}
,{"id":383225948378,"name":"SQL Server","type":"section","path":"/docs/platform-support/sql-server","breadcrumb":"Platform Support › SQL Server","description":"","searchText":"platform support sql server this page describes how analyticscreator generates and integrates data warehouse solutions for microsoft sql server environments. overview analyticscreator supports sql server as both the metadata repository platform and a primary target platform for generated data warehouse structures. in a typical sql server-based setup, analyticscreator generates database objects, loading procedures, ssis packages, and analytical structures that run on sql server and related microsoft services. analyticscreator itself is a design-time application. it generates sql-based artifacts, but execution takes place in sql server, sql server integration services, and downstream analytical services. supported services and components sql server database engine sql server integration services (ssis) sql server analysis services, where used for tabular or analytical models power bi as a downstream semantic and reporting target sql server-based repository for metadata storage what analyticscreator generates for sql server environments, analyticscreator generates: sql objects: stg tables for source import persistent staging and historization tables core transformations as views or persisted tables dm layer structures with facts and dimensions stored procedures for: data loading historization persisting logic customizable processing steps ssis packages for: source import workflow execution etl orchestration analytical outputs: data marts tabular or semantic model structures where configured power bi-ready dimensional outputs deployment assets: deployment packages visual studio solution content generated sql deployment artifacts supported modeling approaches dimensional modeling with facts and dimensions data vault modeling with hubs, links, and satellites hybrid approaches combining data vault 
foundations with dimensional output historized models using valid-from and valid-to logic snapshot-based historization patterns in sql server-based models, analyticscreator can generate a layered flow from source to staging, persistent staging, core, data mart, and presentation-oriented outputs. deployment and execution model analyticscreator separates structure generation, deployment, and execution: analyticscreator stores metadata in the repository and generates sql server artifacts from that metadata synchronization materializes the modeled structure in a sql server database deployment creates the required database assets and related execution packages execution is performed by sql server and ssis, not by analyticscreator itself typical sql server execution flow: metadata is stored in the sql server repository the wizard generates a draft model synchronization materializes the model as sql server objects deployment creates sql server database content and ssis packages ssis packages are executed to load and process data ci/cd and version control repository metadata is stored in sql server projects can be exported for versioning and deployment control generated deployment packages can be used in development, test, and production processes visual studio-based deployment assets support controlled release workflows connectors, sources, and exports relevant source types sql server source systems sap sources with sql server as target platform files and other supported source connectors relevant exports and downstream targets sql server data warehouse databases ssis execution packages analysis services or power bi-oriented semantic outputs prerequisites, limitations, and notes a sql server instance is required for the repository a target sql server database is required for generated warehouse objects ssis is required when package-based orchestration is used performance depends on indexing, load strategy, and transformation complexity generated sql should still be 
reviewed where platform-specific tuning is required design considerations: persistent staging should be used deliberately to support reprocessing and historization persisting can improve performance for complex transformations by materializing view output into tables historization strategy affects both storage volume and processing behavior example use cases building an on-premise sql server data warehouse with ssis-based loading modernizing an existing sql server warehouse into a metadata-driven model generating dimensional marts and semantic outputs for power bi implementing data vault or hybrid architectures on sql server platform-specific faq does analyticscreator execute etl inside sql server? analyticscreator generates sql objects and procedures for sql server, but workflow execution is typically handled through generated ssis packages or other generated orchestration assets. does synchronization load business data? no. synchronization materializes the structure in sql server. data loading happens later during execution. can sql server be both source and target? yes. sql server can be used as a source system, as the repository platform, and as the target warehouse platform. can i use power bi with a sql server-based model? yes. analyticscreator can generate dimensional and semantic outputs that are intended for downstream use in power bi. proof assets demo transcripts show the repository stored in sql server, synchronization creating a new sql server database, and deployment producing ssis packages and sql-based structures demo transcripts also show generated historization procedures, view-based transformations, persisting procedures, and data mart outputs for analytical consumption related content quick start guide understanding analyticscreator sql server tutorials and examples historization reference persisting reference commercial solution page for product-level positioning and commercial overview, see the analyticscreator sql server solution page. 
key takeaway in sql server environments, analyticscreator generates the warehouse schema, loading procedures, ssis assets, and analytical structures, while sql server and related microsoft services execute and host the resulting solution."}
,{"id":383225948379,"name":"Power BI","type":"section","path":"/docs/platform-support/power-bi","breadcrumb":"Platform Support › Power BI","description":"","searchText":"platform support power bi this page describes how analyticscreator generates and integrates analytical models for power bi. overview analyticscreator supports power bi as a target for analytical consumption by generating semantic models based on the data warehouse structure. these models define dimensions, measures, and relationships that can be used directly in reporting. analyticscreator does not execute data processing inside power bi. instead, it generates semantic structures that are consumed by power bi after data has been processed in the underlying data warehouse platform. supported services and components power bi semantic models (tabular models) power bi datasets power bi desktop and power bi service directquery and import modes what analyticscreator generates for power bi, analyticscreator generates: semantic models: dimensions and facts mapped from dm layer relationships between entities measures and calculated fields model structure: hierarchies (e.g. date hierarchies) attribute groupings metadata descriptions deployment-ready artifacts: tabular model definitions integration with deployment workflows supported modeling approaches dimensional modeling (star schema) measures derived from fact tables hierarchical dimensions (e.g. time, geography) data vault models are not directly exposed to power bi. instead, dimensional structures are generated from core layers and used as the basis for semantic models. 
deployment and execution model analyticscreator separates semantic model generation from data processing: analyticscreator generates the semantic model definition the model is deployed to power bi or an analytical engine data processing happens in the underlying data warehouse power bi consumes the processed data through the semantic model execution flow: data is processed in stg, core, and dm layers semantic model references dm structures power bi queries the model or underlying data source ci/cd and version control semantic models are generated from metadata definitions model definitions can be included in deployment pipelines integration with version-controlled analyticscreator projects connectors, sources, and exports data sources for power bi sql server azure sql microsoft fabric other supported warehouse targets export targets power bi semantic models tabular model deployments prerequisites, limitations, and notes power bi requires access to the deployed data warehouse model performance depends on underlying data model design directquery performance depends on source system performance design considerations: use dm layer as the source for semantic models avoid exposing stg or core layers directly ensure measures are defined consistently example use cases generating a power bi-ready star schema from a data warehouse automating creation of measures and relationships standardizing reporting models across projects reducing manual modeling effort in power bi platform-specific faq does analyticscreator replace power bi modeling? analyticscreator generates the base semantic model, including dimensions, relationships, and measures. additional report-specific logic can still be implemented in power bi if required. where are measures defined? measures can be generated as part of the semantic model based on metadata definitions and can be extended in power bi. can power bi connect directly to core or stg? this is technically possible but not recommended. 
the dm layer should be used as the primary source for reporting. proof assets generated semantic models include dimensions, measures, and relationships based on metadata demo scenarios show end-to-end flow from source to power bi-ready model related content quick start guide understanding analyticscreator data mart modeling semantic model reference commercial solution page for product-level positioning and overview, see the analyticscreator power bi solution page. key takeaway analyticscreator generates semantic models for power bi based on data warehouse structures, while data processing remains in the underlying platform."}
]