[
{"id":383461199041,"name":"Getting Started","type":"category","path":"/docs/getting-started","breadcrumb":"Getting Started","description":"","searchText":"getting started welcome to the analyticscreator documentation. in this getting started section, you can choose from the following sections: installation system requirements download and installation understanding analyticscreator"}
,{"id":383225948358,"name":"Installation","type":"section","path":"/docs/getting-started/installation","breadcrumb":"Getting Started › Installation","description":"","searchText":"getting started installation installing analyticscreator: 32-bit and 64-bit versions this guide offers step-by-step instructions for installing either the 32-bit or 64-bit version of analyticscreator, depending on your system requirements. note: to ensure optimal performance, verify that your system meets the following prerequisites before installation."}
,{"id":383225948359,"name":"System Requirements","type":"section","path":"/docs/getting-started/system-requirements","breadcrumb":"Getting Started › System Requirements","description":"","searchText":"getting started system requirements to ensure optimal performance, verify that the following requirements are met: note: if you already have sql server installed and accessible, you can proceed directly to the launching analyticscreator section. networking: analyticscreator communicates with the analyticscreator server over port 443. operating system: windows 10 or later. analyticscreator is compatible with windows operating systems starting from version 10. warning: port 443 is the standard https port for secured transactions. it is used for data transfers and ensures that data exchanged between a web browser and websites remains encrypted and protected from unauthorized access. microsoft sql server: sql server on azure virtual machines azure sql managed instances"}
,{"id":383225948360,"name":"Download and Installation","type":"section","path":"/docs/getting-started/download-and-installation","breadcrumb":"Getting Started › Download and Installation","description":"","searchText":"getting started download and installation access the download page navigate to the analyticscreator download page download the installer locate and download the installation file. verify sql server connectivity before proceeding with the installation, confirm that you can connect to your sql server instance. connecting to sql server: to ensure successful connectivity: use sql server management studio (ssms), a tool for managing and configuring sql server. if ssms is not installed on your system, download it from the official microsoft site: download sql server management studio (ssms) install the software once connectivity is confirmed, follow the instructions below to complete the installation."}
,{"id":383225948361,"name":"Configuring AnalyticsCreator","type":"section","path":"/docs/getting-started/configuring-analyticscreator","breadcrumb":"Getting Started › Configuring AnalyticsCreator","description":"","searchText":"getting started configuring analyticscreator this guide will walk you through configuring analyticscreator with your system. provide the login and password that you received by e-mail from analyticscreator minimum requirements configuration settings the configuration of analyticscreator is very simple. the only mandatory configuration is the sql server settings. sql server settings use localdb to store repository: enables you to store the analyticscreator project (metadata only) on your localdb. sql server to store repository: enter the ip address or the name of your microsoft sql server. security integrated: authentication is based on the current windows user. standard: requires a username and password. azure ad: uses azure ad (now microsoft entra) for microsoft sql server authentication. trust server certificate: accepts the server's certificate as trusted. sql user: the sql server username. sql password: the corresponding password. optional requirements paths unc path to store backup: a network path to store project backups. local sql server path to store backup: a local folder to store your project backups. local sql server path to store database: a local folder to store your sql server database backups. repository database template: the alias format for your repositories. default: repo_{reponame}. dwh database template: the alias format for your dwh templates. default: dwh_{reponame}. proxy settings proxy address: the ip address or hostname of your proxy server. proxy port: the port number used by the proxy. proxy user: the username for proxy authentication. proxy password: the password for the proxy user. now you're ready to create your new data warehouse with analyticscreator."}
,{"id":383225948362,"name":"Understanding AnalyticsCreator","type":"section","path":"/docs/getting-started/understanding-analytics-creator","breadcrumb":"Getting Started › Understanding AnalyticsCreator","description":"","searchText":"getting started understanding analyticscreator there are at least two different approaches to design a holistic business and data model: the bottom-up method, which is shown in the graphic below, and the top-down method, which starts with the conceptual model first, although models can also be loaded from other modeling tools. connect analyticscreator to any data source, especially databases, individual files, data lakes, cloud services, excel files and other extracts. built-in connectors to many common sources are available as well as support of azure data factory, azure analytics. define data - analyticscreator extracts all metadata from the data sources, such as field descriptions, data types, key fields, and all relationships, which is stored in the analyticscreator metadata repository. this will: extract and capture ddl detect structure changes and forward them to all higher layers. cognitive suggestion - intelligent wizards help create a draft version of the model across all layers of the data analytics platform. choose different modelling approaches or create your own approach: data vault 2.0, dimensional, 3nf, own historical data handling (scd, snapshot, cdc, gapless, ..) use azure devops. model - the entire toolset of analyticscreator is at your disposal to further develop the draft model. 
behind the holistic graphical model, the generated code is already finished and can also be modified manually, including: automated transformations and wizards collaboration development process supported by data lineage flow-chart own scripting and macros are possible deploy - to deploy the data model in different environments (test, prod, ..), analyticscreator generates deployment packages that are also used for the change process of structures and loadings. deployment packages can be used locally, in fabric and azure, as well as in hybrid environments. this includes: stored procedures, ssis azure sql db, azure analysis services, synapse arm template for azure data factory tabular models, olap cubes power bi tableau qlik"}
,{"id":383225948363,"name":"Quick Start Guide","type":"section","path":"/docs/getting-started/quick-start-guide","breadcrumb":"Getting Started › Quick Start Guide","description":"","searchText":"getting started quick start guide this quick start guide helps new and trial users understand how to set up, model, and automate a data warehouse using analyticscreator. it covers everything from connectors to data marts - with practical examples based on sap source systems. analyticscreator automates the creation of data warehouses and analytical models. it connects to source systems (like sap, sql server, or others), imports metadata, and generates all required transformation, historization, and loading structures. this quick start shows how to: create connectors and relationships (foreign keys, references) import source tables build transformations for dimensions and facts define relationships and surrogate keys create data marts and calendar dimensions generate cubes and metrics for reporting tools (power bi, etc.)"}
,
{"id":383461199042,"name":"User Guide","type":"category","path":"/docs/user-guide","breadcrumb":"User Guide","description":"","searchText":"user guide you can launch analyticscreator in two ways: from the desktop icon after installation or streaming setup, a desktop shortcut is created. double-click the icon to start analyticscreator. from the installer window open the downloaded analyticscreator installer. instead of selecting install, click launch (labeled as number one in the image below). a window will appear showing the available analyticscreator servers, which deliver the latest version to your system. this process launches analyticscreator without performing a full installation, assuming all necessary prerequisites are already in place."}
,{"id":383225948364,"name":" Desktop Interface","type":"section","path":"/docs/user-guide/desktop-interface","breadcrumb":"User Guide › Desktop Interface","description":"","searchText":"user guide desktop interface with analyticscreator desktop users can: data warehouse creation automatically generate and structure your data warehouse, including fact tables and dimensions. connectors add connections to various data sources and import metadata seamlessly. layer management define and manage layers such as staging, persisted staging, core, and datamart layers. package generation generate integration packages for ssis (sql server integration services) and adf (azure data factory). indexes and partitions automatically configure indexes and partitions for optimized performance. roles and security manage roles and permissions to ensure secure access to your data. galaxies and hierarchies organize data across galaxies and define hierarchies for better data representation. customizations configure parameters, macros, scripts, and object-specific scripts for tailored solutions. filters and predefined transformations apply advanced filters and transformations for data preparation and enrichment. snapshots and versioning create snapshots to track and manage changes in your data warehouse. deployments deploy your projects with flexible configurations, supporting on-premises and cloud solutions. groups and models organize objects into groups and manage models for streamlined workflows. data historization automate the process of creating historical data models for auditing and analysis."}
,{"id":383225948365,"name":"Working with AnalyticsCreator","type":"section","path":"/docs/user-guide/working-with-analyticscreator","breadcrumb":"User Guide › Working with AnalyticsCreator","description":"","searchText":"user guide working with analyticscreator understanding the fundamental operations in analyticscreator desktop is essential for efficiently managing your data warehouse repository and ensuring accuracy in your projects. below are key basic operations you can perform within the interface: edit mode and saving - data warehouse editor single object editing: in the data warehouse repository, you can edit one object at a time. this ensures precision and reduces the risk of unintended changes across multiple objects. how to edit: double-click on any field within an object to enter edit mode. the selected field becomes editable, allowing you to make modifications. save prompt: if any changes are made, a prompt will appear, reminding you to save your modifications before exiting the edit mode. this safeguard prevents accidental loss of changes. unsaved changes: while edits are immediately reflected in the repository interface, they are not permanently saved until explicitly confirmed by clicking the save button. accessing views in data warehouse explorer layer-specific views: each layer in the data warehouse contains views generated by analyticscreator. these views provide insights into the underlying data structure and transformations applied at that layer. how to access: navigate to the data warehouse explorer and click on the view tab for the desired layer. this displays the layer's contents, including tables, fields, and transformations. adding and deleting objects adding new objects: navigate to the appropriate section (e.g., tables, layers, or connectors) in the navigation tree. right-click and select add [object type] to create a new object. provide the necessary details, such as name, description, and configuration parameters. save the object. 
deleting objects: select the object in the navigation tree and right-click to choose delete. confirm the deletion when prompted. note: deleting an object may affect dependent objects or configurations. filtering and searching in data warehouse explorer filtering: use filters to narrow down displayed objects by criteria such as name, type, or creation date. searching: enter keywords or phrases in the search bar to quickly locate objects. benefits: these features enhance repository navigation and efficiency when working with large datasets. object dependencies and relationships dependency view: for any selected object, view its dependencies and relationships with other objects by accessing the dependencies tab. impact analysis: analyze how changes to one object might affect other parts of the data warehouse. managing scripts predefined scripts: add scripts for common operations like data transformations or custom sql queries. edit and run: double-click a script in the navigation tree to modify it. use run script to execute and view results. validating and testing changes validation tools: use built-in tools to check for errors or inconsistencies in your repository. evaluate changes: use the evaluate button before saving or deploying to test functionality and ensure correctness. locking and unlocking objects locking: prevent simultaneous edits by locking objects, useful in team environments. unlocking: release locks once edits are complete to allow further modifications by others. exporting and importing data export: export objects, scripts, or configurations for backup or sharing. use the export option in the toolbar or navigation tree. import: import previously exported files to replicate configurations or restore backups. use the import option and follow the prompts to load the data."}
,{"id":383225948366,"name":"Advanced Features","type":"section","path":"/docs/user-guide/advanced-features","breadcrumb":"User Guide › Advanced Features","description":"","searchText":"user guide advanced features analyticscreator provides a rich set of advanced features to help you configure, customize, and optimize your data warehouse projects. these features extend the tool's capabilities beyond standard operations, enabling more precise control and flexibility. scripts scripts in analyticscreator allow for detailed customization at various stages of data warehouse creation and deployment. they enhance workflow flexibility and enable advanced repository configurations. types of scripts object-specific scripts define custom behavior for individual objects, such as tables or transformations, to meet specific requirements. pre-creation scripts execute tasks prior to creating database objects. example: define sql functions to be used in transformations. pre-deployment scripts configure processes that run before deploying the project. example: validate dependencies or prepare the target environment. post-deployment scripts handle actions executed after deployment is complete. example: perform cleanup tasks or execute stored procedures. pre-workflow scripts manage operations that occur before initiating an etl workflow. example: configure variables or initialize staging environments. repository extension scripts extend repository functionality with user-defined logic. example: add custom behaviors to redefine repository objects. historization the historization features in analyticscreator enable robust tracking and analysis of historical data changes, supporting advanced time-based reporting and auditing. key components slowly changing dimensions (scd) automate the management of changes in dimension data. 
supports various scd types including: type 1 (overwrite) type 2 (versioning) others as needed time dimensions create and manage temporal structures to facilitate time-based analysis. example: build fiscal calendars or weekly rollups for time-series analytics. snapshots capture and preserve specific states of the data warehouse. use cases include audit trails, historical reporting, and rollback points. parameters and macros these tools provide centralized control and reusable logic to optimize workflows and streamline repetitive tasks. parameters dynamic management: centralize variable definitions for consistent use across scripts, transformations, and workflows. reusable configurations: update values in one place to apply changes globally. use cases: set default values for connection strings, table prefixes, or date ranges. macros reusable logic: create parameterized scripts for tasks repeated across projects or workflows. streamlined processes: use macros to enforce consistent logic in transformations and calculations. example: define a macro to calculate age from a birthdate and reuse it across transformations. summary analyticscreator's advanced features offer deep customization options that allow you to: control object-level behavior through scripting track and manage historical data effectively streamline project-wide settings with parameters reuse logic with powerful macros these capabilities enable you to build scalable, maintainable, and highly flexible data warehouse solutions."}
,{"id":383225948367,"name":"Wizards","type":"section","path":"/docs/user-guide/wizards","breadcrumb":"User Guide › Wizards","description":"","searchText":"user guide wizards the wizards in analyticscreator provide a guided and efficient way to perform various tasks related to building and managing a data warehouse. below is an overview of the eight available wizards and their core functions. dwh wizard the dwh wizard is designed to quickly create a semi-ready data warehouse. it is especially useful when the data source contains defined table relationships or manually maintained references. supports multiple architectures: classic (kimball), data vault 1.0 & 2.0, or mixed. automatically creates imports, dimensions, facts, hubs, satellites, and links. customizable field naming, calendar dimensions, and sap deltaq integration. source wizard the source wizard adds new data sources to the repository. supports source types: table or query. retrieves table relationships and sap-specific metadata. allows query testing and schema/table filtering. import wizard the import wizard defines and manages the import of external data into the warehouse. configures source, target schema, table name, and ssis package. allows additional attributes and parameters. historization wizard the historization wizard manages how tables or transformations are historized. supports scd types: 0, 1, and 2. configures empty record behavior and vault id usage. supports ssis-based or stored procedure historization. transformation wizard the transformation wizard creates and manages data transformations. supports regular, manual, script, and external transformation types. handles both historicized and non-historicized data. configures joins, fields, persistence, and metadata settings. calendar transformation wizard the calendar transformation wizard creates calendar transformations used in reporting and time-based models. configures schema, name, start/end dates, and date-to-id macros. 
assigns transformations to specific data mart stars. time transformation wizard the time transformation wizard creates time dimensions to support time-based analytics. configures schema, name, time period, and time-to-id macros. assigns transformations to specific data mart stars. snapshot transformation wizard the snapshot transformation wizard creates snapshot dimensions for snapshot-based analysis. allows creation of one snapshot dimension per data warehouse. configures schema, name, and data mart star assignment. by using these eight wizards, analyticscreator simplifies complex tasks, ensures consistency, and accelerates the creation and management of enterprise data warehouse solutions."}
,
{"id":383461199043,"name":"Reference","type":"category","path":"/docs/reference","breadcrumb":"Reference","description":"","searchText":"reference structured reference for the analyticscreator user interface, entities, types, and parameters. this reference guide is organized into sections and subsections to help you quickly find interface elements, object types, dialogs, wizards, and configuration details in analyticscreator. sections [link:365118109942|user interface] toolbar, navigation tree, dataflow diagram, pages, lists, dialogs, and wizards. [link:365178121463|entity types] connector types, source types, table types, transformation types, package types, and more. [link:365178123475|entities] reference pages for main analyticscreator object classes such as layers, sources, tables, and packages. [link:365178123499|parameters] system and project parameters including technical and environment-related settings."}
,{"id":383461259458,"name":"User Interface","type":"section","path":"/docs/reference/user-interface","breadcrumb":"Reference › User Interface","description":"","searchText":"reference user interface user interface"}
,{"id":383509396675,"name":"Common information","type":"subsection","path":"/docs/reference/user-interface/common-information","breadcrumb":"Reference › User Interface › Common information","description":"","searchText":"reference user interface common information common information"}
,{"id":383509396676,"name":"Toolbar","type":"subsection","path":"/docs/reference/user-interface/toolbar","breadcrumb":"Reference › User Interface › Toolbar","description":"","searchText":"reference user interface toolbar toolbar"}
,{"id":383461259455,"name":"Entity types","type":"section","path":"/docs/reference/entity-types","breadcrumb":"Reference › Entity types","description":"","searchText":"reference entity types entity types"}
,{"id":383461259456,"name":"Entities ","type":"section","path":"/docs/reference/entities","breadcrumb":"Reference › Entities ","description":"","searchText":"reference entities entities"}
,{"id":383461259457,"name":"Parameters ","type":"section","path":"/docs/reference/parameters","breadcrumb":"Reference › Parameters ","description":"","searchText":"reference parameters parameters"}
,
{"id":383461199045,"name":"Tutorials","type":"category","path":"/docs/tutorials","breadcrumb":"Tutorials","description":"","searchText":"tutorials to become familiar with analyticscreator, we have made certain data sets available. you may use these to test analyticscreator: click here for the northwind data warehouse"}
,{"id":383225948382,"name":"Northwind DWH Walkthrough","type":"section","path":"/docs/tutorials/northwind-dwh-walkthrough","breadcrumb":"Tutorials › Northwind DWH Walkthrough","description":"","searchText":"tutorials northwind dwh walkthrough step-by-step: sql server northwind project create your first data warehouse with analyticscreator analyticscreator offers pre-configured demos for testing within your environment. this guide outlines the steps to transition from the northwind oltp database to the northwind data warehouse model. once completed, you will have a fully generated dwh project ready to run locally. load the demo project from the file menu, select load from cloud. choose nw_demo enter a name for your new repository (default: nw_demo) note: this repository contains metadata only - no data is moved. analyticscreator will automatically generate all required project parameters. project structure: the 5-layer model analyticscreator will generate a data warehouse project with five layers: sources - raw data from the source system (northwind oltp). staging layer - temporary storage for data cleansing and preparation. persisted staging layer - permanent storage of cleaned data for historization. core layer - integrated business model, structured and optimized for querying. datamart layer - optimized for reporting, organized by business topic (e.g., sales, inventory). northwind setup (if not already installed) step 1: check if the northwind database exists open sql server management studio (ssms) and verify that the northwind database is present. if yes, skip to the next section. if not, proceed to step 2. step 2: create the northwind database run the setup script from microsoft: download the script or copy-paste it into ssms and execute. 
step 3: verify database use northwind; go select * from information_schema.tables where table_schema = 'dbo' and table_type = 'base table'; once confirmed, you can proceed with the next steps to configure the analyticscreator connector with your northwind database. note: analyticscreator uses only native microsoft connectors, and we do not store any personal information. step 4: change database connector navigate to sources > connectors. you will notice that a connector is already configured. for educational purposes, the connection string is not encrypted yet. to edit or add a new connection string, go to options > encrypted strings > add. paste your connection string as demonstrated in the video below. after adding the new connection string, it's time to test your connection. go to sources > connectors and press the test button to verify your connection. step 5: create a new deployment in this step, you'll configure and deploy your project to the desired destination. please note that only the metadata will be deployed; there will be no data movement or copy during this process. navigate to deployments in the menu and create a new deployment. assign a name to your deployment. configure the connection for the destination set the project path where the deployment will be saved. select the packages you want to generate. review the connection variables and click deploy to initiate the process. finally, click deploy to complete the deployment. in this step, your initial data warehouse project is created. note that only the metadata - the structure of your project - is generated at this stage. you can choose between two options for package generation: ssis (sql server integration services) adf (azure data factory) ssis follows a traditional etl tool architecture, making it a suitable choice for on-premises data warehouse architectures. 
in contrast, adf is designed with a modern cloud-native architecture, enabling seamless integration with various cloud services and big data systems. this architectural distinction makes adf a better fit for evolving data integration needs in cloud-based environments. to execute your package and move your data, you will still need an integration runtime (ir). keep in mind that analyticscreator only generates the project at the metadata level and does not access your data outside the analyticscreator interface. it does not link your data to us, ensuring that your data remains secure in its original location. for testing purposes, you can run your package in microsoft visual studio 2022, on your local sql server, or even in azure data factory."}
,
{"id":383461199046,"name":"Functions","type":"category","path":"/docs/functions-features","breadcrumb":"Functions","description":"","searchText":"functions get started by clicking on one of these sections: main functionality gui process support data sources export functionality use of analytics frontends"}
,{"id":383225948376,"name":"Main Functionality","type":"section","path":"/docs/functions-features/main-functionality","breadcrumb":"Functions › Main Functionality","description":"","searchText":"functions main functionality full bi-stack automation: from source to data warehouse through to frontend. holistic data model: complete view of the entire data model. this also allows for rapid prototyping of various models. data warehouses: ms sql server 2012-2022, azure sql database, azure synapse analytics dedicated, azure sql managed instance, sql server on azure vms, ms fabric sql. analytical databases: ssas tabular databases, ssas multidimensional databases, azure synapse analytics dedicated, power bi, power bi premium, duckdb, tableau, and qlik sense. data lakes: ms azure blob storage, onelake. frontends: power bi, qlik sense, tableau, powerpivot (excel). pipelines/etl: sql server integration packages (ssis), azure data factory 2.0 pipelines, azure databricks, fabric data factory. azure: azure sql server, azure data factory pipelines. deployment: visual studio solution (ssdt), creation of dacpac files, ssis packages, data factory arm templates, xmla files. modelling approaches: top-down modelling, bottom-up modelling, import from external modelling tool, dimensional/kimball, data vault 2.0, mixed approach of dv 2.0 and kimball (combining the best of both worlds by using elements of both data vault 2.0 and kimball modelling), inmon, 3nf, or any custom data model. the analyticscreator wizard can help you create a data vault model automatically and also supports strict dan linstedt techniques and data vaults. historization approaches: slowly changing dimensions (scd) type 0, type 1, type 2, mixed, snapshot historization, gapless historization, change-based calculations. surrogate key: auto-increment, long integer, hash key, custom definition of hash algorithm."}
,{"id":383225948377,"name":"GUI","type":"section","path":"/docs/functions-features/gui","breadcrumb":"Functions › GUI","description":"","searchText":"functions gui windows gui embedded version control multi-user development supporting distributed development manual object locking possible predefined templates cloud-based repository cloud service support available data lineage macro language for more flexible development predefined, datatype-based transformations calculated columns in each dwh table single point development: the whole design is possible in analyticscreator. external development not necessary embedding external code automatic documentation in word and visio export to microsoft devops, github, .. analyticscreator repository is stored in a ms sql server and can be modified and extended with additional functionality"}
,{"id":383225948378,"name":"Process support","type":"section","path":"/docs/functions-features/process-support","breadcrumb":"Functions › Process support","description":"","searchText":"functions process support etl procedure protocol error handling on etl procedures consistency on etl failure rollback on etl procedures automatic recognition of source structure changes and automatic adaptation of connected dwh entire dwh life-cycle support delta and full load of data models near real-time data loads possible external orchestration/scheduling for etl process internal orchestration/scheduling for etl process with generated ms-ssis packages several workflow configurations no runtime for analyticscreator is necessary: daily processing of created dwhs runs without analyticscreator no additional licences necessary for design component no ms sql server necessary"}
,{"id":383225948379,"name":"Data Sources","type":"section","path":"/docs/functions-features/data-sources","breadcrumb":"Functions › Data Sources","description":"","searchText":"functions data sources built-in connectivity: ms sql server, oracle, sap erp, s4/hana with theobald software (odp, deltaq/tables), sap business one with analyticscreator own connectivity, sap odp objects, excel, access, csv/text, oledb (e.g. teradata, netezza, db2..), odbc (mysql, postgres), odata, azure blob storage (csv, parquet, avro), rest, ms sharepoint, google ads, amazon, salesforce crm, hubspot crm, ms dynamics 365 business central, ms dynamics navision 3rd party connectivity: access to more than 250 data sources with the c-data connector [www.cdata.com/drivers]. this allows for a connection to analyticscreator directly by an odbc or ole db driver, or by connecting an ingest layer with externally filled tables. define your own connectivity: (any data source, hadoop, google bigquery/analytics, amazon, shop solutions, facebook, linkedin, x (formerly twitter)) in all cases of access to source data an analyticscreator-metadata-connector is created. the analyticscreator-metadata-connector is a description of the data sources you use, for easier handling in analyticscreator. analyticscreator is able to automatically create a metadata connector by extracting the data definition from your source data. it contains information about key fields, referential integrity, names of fields, and descriptions."}
,{"id":383225948380,"name":"Export Functionality","type":"section","path":"/docs/functions-features/export-functionality","breadcrumb":"Functions › Export Functionality","description":"","searchText":"functions export functionality azure blob storage, text, csv files, any target system using an oledb or odbc driver, automated type conversion, export performed by ssis packages or azure data factory pipelines export for example to oracle, snowflake, synapse"}
,{"id":383225948381,"name":"Use of Analytics Frontends","type":"section","path":"/docs/functions-features/use-of-analytics-frontends","breadcrumb":"Functions › Use of Analytics Frontends","description":"","searchText":"functions use of analytics frontends push concept: power bi, tableau, and qlik models will be created automatically. all models described here will be created at the same time. pull concept: there are many bi frontends available that allow you to connect to the specified microsoft data. check with your vendor or with us about what is possible. analyticscreator allows you to develop a specific solution for your analytics frontend, so that the model is created automatically for your bi frontend (push concept)."}
]