Automating Data Solutions for Beginners: Data Warehouses to Power BI
AnalyticsCreator automates the design, development, change-management, and deployment processes for data warehouses, data marts, data lakehouses, and Power BI models. In this beginner session, the demo shows how to create a data warehouse from AdventureWorks, deploy it to Azure, generate Azure Data Factory pipelines, and publish a Power BI model.
Video Questions
- How can a data warehouse be created up to 10 times faster?
- How do you create a data warehouse from AdventureWorks?
- How does AnalyticsCreator generate source code from a graphical data model?
- How does historization work in AnalyticsCreator?
- How can AnalyticsCreator deploy to Azure and Power BI?
- How can Power BI quick insights be created from the generated model?
Key takeaways
- AnalyticsCreator is described as a pure design-time tool with no runtime dependency.
- The tool uses a holistic graphical data model to generate source code automatically.
- The demo creates a data warehouse repository from AdventureWorks.
- The Data Warehouse Wizard generates staging, persistent staging, core, and data mart layers.
- Historization can be configured per column, including SCD Type 1, SCD Type 2, and other behaviors.
- Transformations, persisting, macros, dimensions, facts, measures, and data marts are configured visually.
- The same model can be deployed on-premise, in Azure, or in mixed environments.
- The demo deploys a database, Azure Data Factory pipelines, and a Power BI model.
Transcript
Hello everybody, and thank you for joining us today for this virtual classroom: Data Automation for Beginners.
Today, we will look at how to automate data warehouses, data marts, Power BI models, and data lakehouses, and how AnalyticsCreator can help you create a data platform up to 10 times faster.
What is AnalyticsCreator? I want to explain it briefly.
AnalyticsCreator is a data automation technology that orchestrates the full lifecycle of a data warehouse, data lakehouse, or analytical data model. It covers the design process, development process, change management, and deployment.
It is made for experts, but also for non-experts who are starting their data journey and want to understand how to create data warehouses and data marts for analytical applications; they can work with AnalyticsCreator just as easily.
AnalyticsCreator works with a holistic graphical data model, which you will see later in the presentation. Based on this holistic data model, AnalyticsCreator generates the source code instead of requiring manual programming.
AnalyticsCreator is mainly used to improve agility and to replace older ETL technologies.
We have many success stories on our website. I want to highlight one or two of them.
One example is Robert Bosch (their logo is not shown on this slide). They built a large data warehouse in the Microsoft stack with thousands of users using AnalyticsCreator. At the beginning, they estimated around 12 months of development time for the first version. With AnalyticsCreator, it was completed in three months. This is a really impressive story.
The next example is mymuesli. Three years ago, they were beginners. Today, they are real experts with AnalyticsCreator. You can read the full story on our website, where it is available for download. We also have success stories for different industries, so I recommend downloading them from the website.
There is also a video on YouTube. A customer reported that they created integration layers in Azure Storage 20 times faster. We were surprised as well, so we reproduced the process, recorded it, and showed the customer how this speed improvement was possible.
I also want to explain the vision behind AnalyticsCreator.
AnalyticsCreator is a pure design-time tool. There is no runtime. As mentioned earlier, AnalyticsCreator first creates a holistic data model. From that model, the source code is generated. This is the key point: the source code is deployed automatically to your SQL Server, Azure environment, Power BI, or another target system.
After deployment, you no longer need AnalyticsCreator to run the solution. You are completely independent. There is no runtime and no vendor lock-in from us, which is very important.
We work 100% on the Microsoft stack. That is where we focus our resources. We do not generate code for Oracle, for example. We concentrate on Azure, SQL Server, and the full Microsoft environment.
To keep you independent, AnalyticsCreator has an open repository. All generated data and model definitions are stored in SQL Server. You can also work directly in the repository if you do not want to use the graphical user interface. This also allows you to build add-ons and develop additional functions or features on top of AnalyticsCreator.
This architectural picture shows the AnalyticsCreator environment.
From left to right, you can see the data flow. At the beginning, you have raw data from ERP systems and other sources. We also work with partners who provide connectivity tools, allowing you to connect to more than 250 sources.
In the second step, we create the Azure data lakehouse, data warehouse, or another analytical structure. This can be modeled using Kimball, Data Vault, or other approaches.
In the third section, we have tabular models used by Power BI, traditional OLAP models, and Qlik cubes if needed. We can automatically create not only Power BI models, but also Qlik and Tableau models. If you use another front-end technology, such as Looker or IBM Cognos, you can still connect directly to the data warehouse.
The process starts by connecting to the data source.
From the data source, AnalyticsCreator helps you define the data. There are wizards that automate metadata extraction. For example, if you use SAP or another ERP system, it is easy to retrieve metadata. If metadata is not available, for example from a CSV file, you describe it manually in the second step.
In the third step, an intelligent wizard uses the selected data and metadata to automatically create a dimensional model from your operational data. This gives you a draft analytical data model and a draft data warehouse, from the lower layers up to the data mart layer.
In step four, you make your changes. You adjust dimensions, calculate KPIs, configure historization, and define other development logic. This is where the main development work takes place.
After that, AnalyticsCreator generates the source code, creates the structure, and deploys everything to Azure, SQL Server, or another supported Microsoft environment.
I want to recommend the latest Data Management Survey. In the chart, you can see that AnalyticsCreator is positioned in the upper-right corner, which is the best place to be as an innovator and leader.
We have achieved top rankings in several KPIs for multiple years in a row. These are shown with the green logos. You can download the survey from our website. The 2022 and 2023 versions are both available.
AnalyticsCreator supports many use cases. I do not want to go into too much detail here, because they are explained well on our website.
You can use AnalyticsCreator to build a new data warehouse, modernize an existing one, or automate existing environments. Modernization is especially interesting. There is a new video from one of our customers showing exactly what they did with their old data warehouse and how they recreated it automatically with AnalyticsCreator to speed up the whole process.
You can also use AnalyticsCreator to generate Data Vault models, data lakehouses, Synapse-based architectures, and many other scenarios. These are only examples; there are many more.
That concludes my part of the presentation. Dimitri will now walk through the process of modeling a data warehouse with AnalyticsCreator.
Dimitri, please go ahead. Or does anyone have a question?
No questions? Then I will continue with the presentation. I will now share my screen. Please confirm that you can see it.
To create a data warehouse, I will create a new data warehouse repository. Let’s call it “German,” for example. This repository is the database that contains the full definition of the data warehouse system. It is stored on the SQL Server located on my computer.
The database has now been created, so let’s start.
I will use the AdventureWorks database as the source for my data warehouse. It is located on my computer. You can see AdventureWorks 2019 here, and we will use this database as the source.
First, I add a new connector. AnalyticsCreator supports different types of connectors, such as SQL Server, Oracle, flat files, ODBC, and other sources. We can also work directly with SAP and read metadata from SAP systems. Azure Blob Storage and other interfaces are supported as well.
I will now create a connection to the AdventureWorks database. I modify the connection string, save it, and test the connection. The connection test is successful.
Now I have two possibilities for creating the data warehouse. I can import connected tables step by step, historize them, and continue manually. Or I can use the Data Warehouse Wizard, which creates the first draft version of the data warehouse automatically.
For this demo, I will use the Data Warehouse Wizard.
Here you can see the tables and views from the AdventureWorks database. I will select some tables from the Human Resources schema and use them to create a data warehouse.
I will create a classical Kimball data warehouse with facts and dimensions. AnalyticsCreator also supports other architectures, such as Data Vault or mixed architectures that combine both approaches.
For this demo, I select the Kimball approach.
AnalyticsCreator will import and historize every selected table, create a dimension from each one, and generate fact transformations from the Employee Department History and Job Candidate tables.
On the next screen, we can define the names of the objects in the data warehouse. For example, all dimensions will use the prefix “dim,” and fact transformations will use the prefix “fact.”
On the next screen, we can define additional properties. For example, we can create a calendar dimension for a specific time period.
I will now finish the wizard. AnalyticsCreator reads the metadata from the AdventureWorks database and creates the draft version of the data warehouse.
Here you can see the typical layer diagram of the data warehouse from left to right.
On the far left, we have the source layer. Technically, it is not a data warehouse layer, but it shows the data sources used in the data warehouse.
Next, we have the staging layer, where data is imported.
Then we have the persisted staging layer, where the historized tables are located.
The core layer contains transformations, facts, and dimensions.
Finally, we have the data mart layer. Later, when we generate a Power BI model, the structure of the data mart layer will be reproduced in the Power BI model.
Let’s look at the different objects in the diagram. For example, if I click on the source, I can see the structure of the Human Resources Department table, including columns, data types, and other metadata.
Next is the staging layer, which contains the imported data. If I click on the Dim Department table, I can see that its structure is the same as the source table.
This “IP” square defines the import package, or import pipeline if you use Data Factory. When I click on it, I can see how the import will be performed. There is a mapping between source and target columns.
You can already transform data here, although this is usually not necessary. In most cases, the data is imported as is and transformed later. However, you can define transformations during the import if needed.
You can also define filters to restrict the amount of data being imported. For example, you can add variables. I can create a timestamp variable and restrict the imported data using a filter on the Modified Date column. This makes it possible to perform differential loading or other import strategies.
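To make the filter idea concrete, here is a minimal sketch of what such a filtered import query could look like, assuming a variable named @LastLoadTimestamp and the AdventureWorks Modified Date column; this is illustrative, not the exact code AnalyticsCreator generates.

```sql
-- Illustrative differential-load filter; @LastLoadTimestamp is an assumed
-- variable name that would be set to the timestamp of the previous load.
DECLARE @LastLoadTimestamp datetime = '2023-01-01';

SELECT DepartmentID, Name, GroupName, ModifiedDate
FROM   HumanResources.Department
WHERE  ModifiedDate > @LastLoadTimestamp;  -- import only rows changed since the last load
```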
You can also add scripts that are executed before or after the data import.
Import packages can be generated as Integration Services packages or as Azure Data Factory pipelines.
The next layer is the persistent staging layer.
It is not strictly necessary to historize data, but we generally recommend it. The persistent staging layer contains historized data.
If I look at the Department table in this layer, you can see that it has the same structure as the import table, plus a few additional columns. Two of them are Date From and Date To, which define the validity period of each data row. Another column is the surrogate key, which in this case is an identity column.
This is a typical slowly changing dimension Type 2 approach.
Each time data is imported, the imported data is compared with the data stored in the historized table. Changes are detected and stored. The old version receives a validity period, and the new version is added. This ensures that previous versions of the data remain accessible.
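As an illustration, a historized table along these lines could look as follows; the column names DateFrom and DateTo and the surrogate key name are assumptions, not the exact names AnalyticsCreator generates.

```sql
-- Illustrative persistent staging (historized) table for the Department source.
CREATE TABLE dwh.Department_Hist (
    Department_SK int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key (identity column)
    DepartmentID  smallint      NOT NULL,         -- business key from the source
    Name          nvarchar(50)  NOT NULL,
    GroupName     nvarchar(50)  NOT NULL,
    DateFrom      datetime2     NOT NULL,         -- start of the validity period
    DateTo        datetime2     NOT NULL          -- end of the validity period; open rows carry a max date
);
```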
Historization is configured in the “HP” square, which defines the historization package or pipeline.
Here, we can configure how historization should be performed. For each column, we can define the historization type.
SCD Type 2 means that if a change is detected in the selected column, the old data row is closed by setting its validity date, and a new row is added.
SCD Type 1 means that if a change is detected, the existing row is updated without storing a history of the change.
You can also set the historization type to “None,” which means changes in that field are ignored. This can be useful for unimportant columns.
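The following sketch shows the kind of logic the generated historization procedure implements for these settings; table and column names are assumptions, and the real generated code can be inspected in the historization package.

```sql
-- Illustrative SCD handling, assuming GroupName is SCD Type 2 and Name is SCD Type 1.
DECLARE @LoadDate datetime2 = SYSDATETIME();

-- SCD Type 2: a change closes the current version and inserts a new one.
UPDATE h
SET    DateTo = @LoadDate
FROM   dwh.Department_Hist AS h
JOIN   stage.Department    AS s ON s.DepartmentID = h.DepartmentID
WHERE  h.DateTo = '9999-12-31'          -- the current version
  AND  s.GroupName <> h.GroupName;      -- the SCD2 column changed

INSERT INTO dwh.Department_Hist (DepartmentID, Name, GroupName, DateFrom, DateTo)
SELECT s.DepartmentID, s.Name, s.GroupName, @LoadDate, '9999-12-31'
FROM   stage.Department AS s
WHERE  EXISTS (SELECT 1 FROM dwh.Department_Hist AS h
               WHERE h.DepartmentID = s.DepartmentID
                 AND h.DateTo = @LoadDate);   -- the version closed above

-- SCD Type 1: a change overwrites the current version without keeping history.
UPDATE h
SET    Name = s.Name
FROM   dwh.Department_Hist AS h
JOIN   stage.Department    AS s ON s.DepartmentID = h.DepartmentID
WHERE  h.DateTo = '9999-12-31'
  AND  s.Name <> h.Name;
```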
There are many other configuration options. For example, you can define what should happen when data is missing from the source. Usually, a missing record causes the corresponding row in the historized table to be closed. However, you can also decide not to close rows for missing data, so they remain valid.
Another option is to create an empty record with defined default values or to keep values from the previous row. This can be useful for closing gaps in the historization timeline.
You can also define filters. For example, if your source table is very large but most records remain unchanged, you can import only data from the last two months. To avoid closing older records incorrectly, you can define a source filter so that only data from the last two months is historized, while older data remains unchanged.
AnalyticsCreator can also historize sources that are already historized. If your source contains a history of changes, AnalyticsCreator can preserve that history in the historization table and use it in the data warehouse.
Technically, historization is implemented through generated stored procedures. You can view the procedure code here. If needed, you can set the procedure type to manually created and modify it yourself. This is usually not necessary, but it is possible.
You can also add scripts before or after historization. The historization process in AnalyticsCreator is highly configurable, and almost every part of it can be adjusted.
The next layer in our data warehouse is the core layer, where you can see these blue squares.
We have a question here: Are the staging, persistent staging, and data mart tables stored separately?
Yes. For example, the Dim Department table in the staging layer and the Department table in the persistent staging layer are two separate tables.
The staging table is usually cleared every time data is imported. The persistent staging table remains stable and is usually not deleted. It is the main layer in the data warehouse that contains the historized data.
Another question: Can we store history only in staging tables?
It is not necessary to historize every table. You can have only the staging layer, where data is imported, without a persistent staging layer. This is just the recommendation of the Data Warehouse Wizard.
Some customers import data without using a persistent staging layer. Others persist data in another way, for example in the core layer, which I will show later.
This is important: what I am showing now is only a recommended data warehouse architecture. With AnalyticsCreator, you can create your own architectures without strict boundaries or restrictions.
This example is a typical Kimball architecture with staging, persistent staging, import, and historization. But you are free to create your own architecture.
Now we are in the core layer, or transformation layer.
The blue square represents a transformation. A typical transformation in AnalyticsCreator is a view, and AnalyticsCreator can create these views automatically.
Here we can see the definition of a transformation. AnalyticsCreator supports different types of transformations. This is a regular transformation. You can also create your own views using manual transformations, create SQL Server scripts or stored procedures, or even create your own Integration Services packages and inject them into the AnalyticsCreator repository as external transformations.
In this case, we recommend using views and regular transformations because they are simple and efficient.
The Department table is the source table for this transformation. The transformation exposes every column from this table in the view.
One interesting feature is predefined transformations. I will remove all predefined transformations and save the object. If we look at the generated view, we can see that it simply exposes all columns from the Department table without changes.
Now I will make a modification and create an unknown member. This is an important part of data warehouse technology.
You can see the UNION SELECT part, which contains the unknown member for this transformation. For example, if a fact table references a department that does not exist in the department table, that reference will be mapped to the unknown member. This avoids missing references to dimensions.
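A dimension view with an unknown member could look roughly like this sketch; the key value -1 and the label are assumptions.

```sql
-- Illustrative dimension view with an unknown member appended via UNION ALL.
CREATE OR ALTER VIEW dm.DimDepartment AS
SELECT Department_SK, DepartmentID, Name, GroupName
FROM   dwh.Department_Hist
UNION ALL
SELECT -1, NULL, N'Unknown', N'Unknown';  -- catches facts that reference a missing department
```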
Predefined transformations are also very useful. For example, the Trim transformation removes leading and trailing spaces from every string column. If I apply it, every varchar and nvarchar column now includes the function to remove spaces.
I can add more predefined transformations, such as converting null strings to “N.A.” This allows us to perform type-based transformations across many columns efficiently.
You can define your own predefined transformations and reuse them across transformations.
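Applied to the Department view, the two predefined transformations just shown could produce something like the following; the exact generated expressions may differ.

```sql
-- Illustrative result of the Trim and null-to-"N.A." predefined transformations,
-- applied to every string column by type.
SELECT
    Department_SK,
    DepartmentID,
    ISNULL(LTRIM(RTRIM(Name)),      N'N.A.') AS Name,
    ISNULL(LTRIM(RTRIM(GroupName)), N'N.A.') AS GroupName
FROM dwh.Department_Hist;
```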
Let’s look at Employee Department History. This is a fact transformation with a filter, created by AnalyticsCreator.
AnalyticsCreator imported the foreign keys from the AdventureWorks database and found that Employee Department History references the Department, Employee, and Shift tables. Based on these relationships, AnalyticsCreator created a typical fact transformation.
Each table used here is historized and has a surrogate key. The transformation retrieves the surrogate keys from the historized tables and exposes them in the fact transformation.
Joining historized tables can be difficult because business keys are not unique in historized tables. To solve this, AnalyticsCreator uses different types of historization. In this case, we use snapshot historization.
What is snapshot historization?
You can see the snapshot table as the first table in this transformation. It is created automatically by AnalyticsCreator and contains at least one row with the current date, stored as the snapshot date.
The transformation retrieves from each historized table only the data that was valid for the selected snapshot date. You can see that each table has a join condition where the snapshot date is between Date From and Date To.
Using this condition, we retrieve only the data that was valid for a specific snapshot date. Then we use business key relationships, such as Department ID, to join the tables.
If the snapshot table contains only the current date, the fact transformation returns only the current facts.
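A snapshot join along the lines described could look like this sketch; all object names are assumptions.

```sql
-- Illustrative fact transformation with snapshot historization: every historized
-- table is filtered to the version that was valid at the snapshot date.
SELECT  snap.SnapshotDate,
        d.Department_SK AS FK_Department,
        e.Employee_SK   AS FK_Employee,
        f.StartDate, f.EndDate
FROM dwh.[Snapshot] AS snap
JOIN dwh.EmployeeDepartmentHistory_Hist AS f
     ON  snap.SnapshotDate BETWEEN f.DateFrom AND f.DateTo
JOIN dwh.Department_Hist AS d
     ON  d.DepartmentID = f.DepartmentID                    -- business key join
     AND snap.SnapshotDate BETWEEN d.DateFrom AND d.DateTo  -- version valid at snapshot
JOIN dwh.Employee_Hist AS e
     ON  e.BusinessEntityID = f.BusinessEntityID
     AND snap.SnapshotDate BETWEEN e.DateFrom AND e.DateTo;
```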
However, you can add more snapshots, such as the end of the previous month or previous periods. This gives you access to historical versions of the data.
The snapshot table is later used in the data mart layer as a snapshot dimension. It can contain more than one snapshot, allowing access to previous data versions in the data mart and in OLAP cubes.
If you only need current data, you can set the historization type to “Current Only.” In that case, the view changes and the snapshot table is no longer used. Instead, a filter identifies the current records in the historized tables.
Snapshot historization is not mandatory, but it is very useful. You can start with only the current date and later extend the snapshot dimension to include historical snapshots.
Now I will show persisting.
Every transformation here is a view, and sometimes it is important to persist this view into a table. AnalyticsCreator supports this.
When I click “Synchronize Data Warehouse,” the content of the view can be stored in a table. This is also called materialization or persisting.
The persisted table has the same structure as the view. Persisting is performed using stored procedures.
If I click on the “PP” square, I can see the persisting package or pipeline and how the persisting process works.
AnalyticsCreator supports different types of persisting. Full persisting means that the content of the persisted table is deleted and reloaded each time.
AnalyticsCreator also supports historical incremental persisting. This makes it possible to detect changes between the source and the persisted table, modify changed records, insert new records, and delete removed records.
As with historization, persisting is implemented through generated stored procedures. You can view the generated procedure code here and add scripts before or after the persisting process.
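A full-persist procedure reduces to a truncate-and-reload pattern, roughly as sketched here; the object names are assumptions.

```sql
-- Illustrative full persisting: clear the persisted table, then reload it
-- from the transformation view that is being materialized.
CREATE OR ALTER PROCEDURE dwh.Persist_DimDepartment
AS
BEGIN
    TRUNCATE TABLE dwh.DimDepartment_P;

    INSERT INTO dwh.DimDepartment_P (Department_SK, DepartmentID, Name, GroupName)
    SELECT Department_SK, DepartmentID, Name, GroupName
    FROM   dm.DimDepartment;
END;
```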
Now I will add additional columns to the fact transformation. For example, in Employee Department History, I will add a calendar dimension.
The Employee Department History table contains columns such as Start Date and End Date. These indicate when an employee started and finished working in a specific department.
Instead of using the date columns directly, I will use IDs from the calendar dimension.
AnalyticsCreator has prepared a calendar dimension for us. If I click on Dim Calendar, I can see the generated view. It contains every date within a specific time period. You can modify this calendar dimension by adding or changing columns.
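One way such a calendar view can be expressed in T-SQL is a recursive date series; this is a generic sketch, not the code AnalyticsCreator generates.

```sql
-- Illustrative calendar dimension: every date in a configured period,
-- with a yyyymmdd integer key and a few derived attributes.
WITH Dates AS (
    SELECT CAST('2000-01-01' AS date) AS [Date]
    UNION ALL
    SELECT DATEADD(DAY, 1, [Date]) FROM Dates WHERE [Date] < '2030-12-31'
)
SELECT  CONVERT(int, CONVERT(char(8), [Date], 112)) AS DateID,  -- e.g. 20301231
        [Date],
        YEAR([Date])              AS [Year],
        MONTH([Date])             AS [Month],
        DATENAME(WEEKDAY, [Date]) AS [Weekday]
FROM Dates
OPTION (MAXRECURSION 0);  -- lift the default 100-level recursion limit
```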
To use calendar IDs in the fact transformation, I use a calendar macro. You can see the Date ID macro here.
Macros are an important mechanism in AnalyticsCreator. Let me show you how they work.
A macro is essentially T-SQL code with placeholders. For example, “..1” is a placeholder. When the macro is called, this placeholder is replaced by the first macro parameter.
In this case, the Date ID macro converts Start Date and End Date into IDs from the calendar dimension.
The mechanism is simple but powerful. You can call macros inside other macros, and you can create your own macros for reusable transformation logic.
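To illustrate the placeholder mechanism, here is what a Date ID macro could look like; the conversion expression is an assumption based on the yyyymmdd calendar key shown earlier.

```sql
-- Illustrative macro: T-SQL with positional placeholders.
--   Macro body:      CONVERT(int, CONVERT(char(8), ..1, 112))
--   Call:            DateID(StartDate)
--   After expansion: CONVERT(int, CONVERT(char(8), StartDate, 112))
-- "..1" is replaced by the first macro parameter, turning a date column
-- into the integer key of the calendar dimension.
```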
I will rename the columns to FK Start Date and FK End Date.
Now let’s look at another transformation, Employee Pay History. I will add one more attribute here. We have Rate Change Date, and I will also add Rate, which will be used in the transformation and later in the data mart layer.
When I click “Synchronize Data Warehouse,” the model shown on the screen is materialized on SQL Server.
A new database is generated. In this case, it is called German DWH, based on the repository name. Each time I synchronize the data warehouse, the model is materialized in this database, and I can see the data warehouse structure there.
Let me demonstrate what happens if something is wrong. I add an incorrect column expression and save it. When I synchronize, AnalyticsCreator tries to materialize the structure and detects an error: invalid column name in the Employee Department History transformation.
The transformation is highlighted with a red border. I can click “Create in DWH” to see the same error message. Then I remove the incorrect column, save the change, and synchronize again successfully.
Now let’s look at the data mart layer.
The data mart layer is the most important interface layer of the data warehouse. It contains attributes and properties related to OLAP cubes and Power BI models.
Objects are exposed in the data mart layer. If I look at the Dim Department transformation, I can see which star schema it belongs to. In this case, AnalyticsCreator created a data star called “Star,” and Dim Department is exposed as a dimension in this star.
Employee Department History is used in the star as a fact transformation.
This transformation has two columns related to the calendar dimension. Since both Start Date and End Date refer to dates, we need separate role-playing dimensions.
I open the transformation settings and see that FK Start Date is bound to the calendar dimension. FK End Date is not bound, because two different columns cannot be bound to the same dimension directly.
So I create a new dimension for the End Date and call it Dim End Date. Then I rename the original Dim Calendar to Dim Start Date and synchronize the data warehouse.
Now we have two calendar dimensions in the data mart layer: Dim Start Date and Dim End Date.
Employee Pay History is also bound to a calendar dimension. Instead of binding it to Start Date, I create a new calendar dimension called Dim Rate Change Date.
After synchronizing, we now have three different calendar dimensions in the data mart layer: Start Date, End Date, and Rate Change Date.
Now I will add measures to our facts.
In the Measure tab, we can define measures for OLAP cubes, tabular models, multidimensional cubes, or Power BI models.
For Employee Department History, I add aggregations such as the number of departments and the number of employees. You can write your own expressions, but I will use standard aggregation functions such as Distinct Count.
AnalyticsCreator can also automatically generate measure names based on templates. This is useful for OLAP cubes, where every measure name must be unique.
For example, the name can consist of the aggregation name, column name, and table name. You can also define your own measure names.
I add more measures, such as the sum of Rate and a Distinct Count measure. For Job Candidate, I add one measure.
At this point, the data warehouse model is ready.
I will store the data warehouse model in the cloud.
Each AnalyticsCreator user has their own storage space in the AnalyticsCreator cloud. When I save the repository, it is stored there. If I save it again, the previous version is not deleted. When I load from the cloud, I can see different older versions of the data warehouse model and restore them if needed.
You can also save the repository to a file. The repository is stored as a SQL file, which can later be loaded again.
Because this is a text file, you can store it in a version control system such as Git. This allows you to manage the repository using your existing version control process.
What we have created so far is only the model of the data warehouse. Now we need to deploy it and create the physical data warehouse based on this model.
To do that, we generate a deployment package.
I will now create a new deployment package.
You can deploy your data warehouse in different ways. You can deploy it on-premise, where AnalyticsCreator generates the SQL Server database, multidimensional or tabular OLAP cubes, Power BI models, and Integration Services packages.
You can also deploy the data warehouse to the Microsoft cloud. In that case, AnalyticsCreator can generate Azure SQL databases, Azure Data Factory pipelines for importing and historizing data, tabular models, and Power BI models.
For this demo, I will publish the data warehouse to the cloud.
Here is our SQL Server in the cloud. In the cloud, we have a database called AV, which is currently empty. We will deploy the data warehouse into this database.
I select cloud deployment and enter the database name, AV. We use standard SQL security.
It is not necessary to deploy directly from AnalyticsCreator. AnalyticsCreator can generate a Visual Studio solution, and you can deploy it later yourself. SQL Server has its own tools for deploying databases, OLAP cubes, Integration Services packages, and Azure pipelines.
For this demo, we will deploy directly from AnalyticsCreator.
AnalyticsCreator generates DACPAC files, which are the modern way to deploy SQL Server databases. Then it deploys the DACPAC file.
There are also options for Power BI. AnalyticsCreator generates an XMLA script containing the definition of the Power BI model. We set the compatibility level to Power BI and deploy it to the Power BI cloud.
Let me show you Power BI.
This is our test Power BI environment. We have a workspace called AV, which is currently empty.
We have a Power BI Premium subscription, so the workspace has an XMLA endpoint. This endpoint can be used to deploy Power BI models.
I connect to the Power BI endpoint. At the moment, there are no databases in this endpoint. We will deploy the Power BI model.
AnalyticsCreator also generates Azure Data Factory pipelines. You can see the different packages or pipelines: one for import, one for historization, one for persisting, and one workflow pipeline. The workflow pipeline executes the other pipelines in the correct order.
I save the deployment package and start the deployment.
AnalyticsCreator generates the Visual Studio project or SQL Server Data Tools project. In this case, we deploy directly from AnalyticsCreator.
For production environments, we usually do not recommend direct deployment from AnalyticsCreator. For test or development environments, it is convenient.
For deploying DACPAC files, SQL Server provides the sqlpackage.exe utility. You can use this utility to deploy DACPAC files on-premise or to Azure.
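A hypothetical invocation could look like this, with placeholder file, server, and credential values; /Action:Publish deploys a DACPAC to the target database.

```
sqlpackage /Action:Publish ^
    /SourceFile:"GermanDWH.dacpac" ^
    /TargetServerName:"yourserver.database.windows.net" ^
    /TargetDatabaseName:"AV" ^
    /TargetUser:"youruser" /TargetPassword:"yourpassword"
```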
The deployment is complete.
Let’s check the Azure SQL database. The AV database has been deployed and now contains import tables, staging tables, and other generated objects.
In Power BI, you can see the new dataset created by AnalyticsCreator. It contains the Power BI model, which has the same structure as the data mart layer.
Now I open Azure Data Factory.
AnalyticsCreator generated an Azure Data Factory ARM template, which I will import into the empty Data Factory.
At the moment, this Data Factory contains no pipelines and no datasets. It only contains two integration runtimes. One is bound to my SQL Server, and the other is the auto-resolve integration runtime used for Azure SQL Server.
I import the ARM template generated by AnalyticsCreator. Each time a deployment package is generated, a new directory is created. In that directory, we find the JSON file containing the ARM template.
I import the template, review it, and create the resources.
Now the ARM template generated by AnalyticsCreator is deployed into Azure Data Factory.
In Azure Data Factory, we can now see the generated pipelines: import pipelines, historization pipelines, persisting pipelines, and the workflow pipeline. The workflow pipeline starts the import, historization, and persisting processes.
The only remaining step is to bind the integration runtimes to the linked services.
There are two linked services. One is AdventureWorks, which is our data source. I bind it to the on-premise integration runtime, set the database name, use Windows Authentication, and test the connection successfully.
The second linked service is DWH, which connects to our Azure SQL Server. I test the connection successfully and apply the configuration.
After deployment, the linked services must be bound to the correct integration runtimes.
Now I execute the workflow pipeline. The pipeline starts, and the data from the on-premise AdventureWorks database will be loaded into the Azure database.
There is one more step for the Power BI dataset.
In the dataset settings, under data source credentials, I edit the credentials and sign in. The credentials are updated, and the dataset is ready to use.
Let’s check Azure Data Factory. The pipeline has succeeded, and the data has been loaded.
I check the AdventureWorks import table and confirm that the data has been imported.
Now I refresh the Power BI data model. Once the model is processed, I can create Power BI reports.
I create Quick Insights. Power BI generates insights based on the data loaded into the model.
Now the Power BI model is populated with data, and we can view the generated insights.
We have created a data warehouse from scratch, deployed it to Azure, created the Power BI data model, and loaded the data.
That concludes my presentation. If you have questions, please ask.
Thank you, Dimitri, for the great presentation.
Let’s see if we have any questions. At the moment, there are none.
One important point: I deployed the data warehouse to Azure, but the same model can also be deployed on-premise. You can deploy the data warehouse on-premise and create the Power BI model in the cloud.
You can create a mixed environment with both on-premise and Azure components. The model remains the same. It is not necessary to change the data warehouse model.
The model we generated was created by the Data Warehouse Wizard, but you can also create your own models.
Let me show you some projects from our partners.
This is a data warehouse from one of our customers. The customer mainly uses SAP tables as data sources. You can see the different data stars here.
For example, this is the Open Purchasing Orders data star.
AnalyticsCreator can be used to create very complex data warehouses. It is not only a data warehouse generator; it is also a data lineage tool.
For example, if I select an SAP table and set a filter, I can immediately see where the data from this specific SAP table is used: in which fact transformations, which data marts, and which parts of the data warehouse.
For large and complex data warehouse projects, AnalyticsCreator is not only a generator but also an important lineage tool. Without such a tool, it is difficult to understand what happens to your data, how it is transformed, and where it is used.
With AnalyticsCreator, this becomes much easier. People can understand what happens to their data inside the data warehouse.
You can visit our website and request a trial version using the “Free Trial” button. We will also upload this video to YouTube, so it will be easier to follow the demo step by step.
You can use the trial version for 30 days. If you have questions, please contact us. Our support team will be happy to help.
If there are no further questions, thank you and goodbye.