With 2020.1, we are proud to present our enhanced real-time process mining functionality. It is compatible with all ERP systems and other data sources, and it ensures that analyses are always based on the most recent data. QPR real-time process mining also supports a single-data concept with modern ERP systems such as SAP S/4HANA, making it possible to create the process mining model on the fly as a report directly from the current data in SAP.
In addition to Real-Time Process Mining, the 2020.1 release also includes a nice selection of new pre-configured process mining charts.
Process Mining Analysis delivered On-The-Fly!
Background: Emma has built an impressive number of process mining models to cover all the main processes of her organization. She has also carried out a process mining training program, so more than 100 employees in her organization have become eager process mining enthusiasts who use process mining models for RPA, process excellence, internal audit, supply chain management, and driving digital transformation.
Challenge: After making the initial findings about processes, Emma's colleagues start to continuously ask for more up-to-date process mining models:
Managers and team leaders are asking for Business Review Process Models to view the processes and KPIs for the previous review period as it completes, whether it is a quarter, month, week, day or a work shift. They want to identify the most recent process bottlenecks, rework, and long lead times. They are also eager to see the root causes, so that they can take corrective actions within their teams right away.
Internal Audit is requesting company-code-specific models for audited processes. These Ad-Hoc Process Models should contain the latest information about the audited business.
Customer service and logistics workers need access to the most recent Customer-Specific Process Models, containing all ERP events related to customer sales orders, ongoing projects, and metadata changes. This information needs to be available the moment the phone rings or an inquiry email needs to be answered.
Real-time Dashboard Screens are requested, showing the current KPIs compared to targets, as well as the current bottlenecks in the operations at this very moment.
Operative managers from service desk and customer delivery teams are asking for Artificial Intelligence based Case-Level Prediction Models that show, based on previous process flow behavior, the probability of a customer ticket missing its Service-Level Agreement (SLA) or of a customer delivery arriving late. These models are updated at an agreed interval, for example once per day, as a tool for operative managers and teams to focus their daily efforts and improve customer satisfaction.
Solution: Real-Time Process Mining powered by QPR ProcessAnalyzer 2020.1 is capable of delivering the process mining analysis on the fly for each of these use cases! In this blog article, I will discuss the full process mining model life-cycle and present the needed functionalities. The picture below shows the three main stages of process mining model creation: Extraction, Transformation and Loading:
- Data Systems are the actual ERP systems containing data for process mining. Data Systems typically contain data stored in databases, tables and files.
- Extraction establishes the connection to the Data Systems, and enables further processing of the data.
- Transformation is all about combining data, joining tables, linking events from multiple tables into cases, and collecting case attribute and event attribute data. This stage converts the data into the event and case format used in process mining.
- Loading performs the process mining analysis and provides the analysis results for users, automated applications, and prediction engines.
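The three stages above can be sketched as a minimal pipeline. Note that the function names and data shapes below are illustrative assumptions, not the actual QPR ProcessAnalyzer API:

```python
from datetime import datetime

# Illustrative only: these functions stand in for QPR ProcessAnalyzer's
# actual extraction, transformation, and loading machinery.

def extract():
    # Extraction: pull raw rows from the source system (hard-coded here).
    return [
        {"order": "SO-1", "step": "Created",  "ts": "2020-01-02 09:00"},
        {"order": "SO-1", "step": "Approved", "ts": "2020-01-03 11:30"},
        {"order": "SO-2", "step": "Created",  "ts": "2020-01-04 08:15"},
    ]

def transform(rows):
    # Transformation: convert raw rows into the case/event format
    # used in process mining (case id, event type, timestamp).
    return [
        {
            "case_id": r["order"],
            "event_type": r["step"],
            "timestamp": datetime.strptime(r["ts"], "%Y-%m-%d %H:%M"),
        }
        for r in rows
    ]

def load(events):
    # Loading: group events into cases so analyses can run on them.
    cases = {}
    for e in sorted(events, key=lambda e: e["timestamp"]):
        cases.setdefault(e["case_id"], []).append(e["event_type"])
    return cases

model = load(transform(extract()))
print(model)  # {'SO-1': ['Created', 'Approved'], 'SO-2': ['Created']}
```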
Now it is time to dive into the technical details. Feel free to skip them for now and come back later when you are implementing a real-time process mining solution... or better yet, browse through the full blog to get an idea of the technology components needed:
Data Systems are the physical ERP systems, web applications, files, and databases containing the event data for process mining. Data from multiple sources is typically used when creating end-to-end processes. An organization may be running, for example, several on-premise copies of a SAP ERP system, one copy of Salesforce as a cloud service, a few custom applications using a SQL Server database, and a selection of Excel, CSV, and XES files containing data from legacy systems and 3rd party business partners. Finally, there can be IoT (Internet of Things) devices and applications sending event data without storing it in any database. All of these data sources are supported by QPR ProcessAnalyzer and can be freely mixed to create process mining models that provide valuable insights!
Connectors are the tools and interfaces for connecting to the Data Sources. QPR supports a wide selection of Connectors that make it easy to connect to all your Data Sources. The first set of Connectors allows QPR ProcessAnalyzer to pull data from other systems, databases and files. The underlying connectivity is built using standard interfaces and technologies like ODBC, OleDB, ADO.NET (SQL), SAP (R/3 SAP .NET Connector, SAP S/4HANA ODBC), Salesforce and generic Web Service. The source systems can also push new data into QPR ProcessAnalyzer. This is supported by the QPR ProcessAnalyzer WebService API. Naturally, it is also possible for a user to manually import data using the web, Excel, and file uploader interfaces.
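A pull-style connector boils down to running a parameterized query against the source database over a standard interface. As a minimal sketch, Python's built-in sqlite3 stands in here for an ODBC/ADO.NET connection, and the table and column names are hypothetical:

```python
import sqlite3

# Sketch of a pull-style connector. In production this would be an ODBC
# or ADO.NET connection to the source system; sqlite3 stands in so the
# example is self-contained. Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_events (order_id TEXT, activity TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO sales_events VALUES (?, ?, ?)",
    [("SO-1", "Created", "2020-01-02"), ("SO-1", "Shipped", "2020-01-05")],
)

# The pull itself: one query per extraction run.
rows = conn.execute(
    "SELECT order_id, activity, ts FROM sales_events ORDER BY ts"
).fetchall()
print(rows)
```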
More information about the QPR Connectors:
Extraction can be performed by multiple software components:
- The QPR ProcessAnalyzer Server has full capability to connect to source data systems with all interfaces. It also supports the WebService API for incoming event data and event streams.
- The QPR ProcessAnalyzer Script Launcher is used for three main purposes:
- Connecting to Data Sources inside the firewall. This is needed for extracting data from on-premise SAP R/3 and SQL Server databases when QPR ProcessAnalyzer is used as a cloud service.
- Scheduling and automating the extraction tasks. For example for monthly, weekly, daily, or hourly extractions.
- Exporting large databases, tables and reports from the QPR ProcessAnalyzer Server to local folders and databases.
- QPR ProcessAnalyzer Excel Client is used when developing extraction scripts.
- QPR ProcessAnalyzer supports a wide variety of ETL tools and connectors. An ETL tool can extract and store the data, creating a data-warehouse-like environment. The ETL tool can also be used to create a virtual data warehouse, which provides connectivity to source data without storing the actual data in a separate database.
Extraction can be performed as complete or incremental.
The complete extraction mode extracts the full data set from an ERP system for the analysis. Since all the data is reloaded, there is no need to worry about changes. When delivering real-time process models, however, performance may become an issue, especially if extracting the complete data from the Data System takes hours.
Incremental Extraction and Loading
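In contrast to complete extraction, the incremental pattern fetches only the rows added since the previous run, typically tracked with a timestamp "watermark". A minimal sketch, with sqlite3 standing in for the source system and hypothetical table names:

```python
import sqlite3

# Sketch of incremental extraction: only rows newer than the last
# extracted timestamp (the watermark) are fetched on each run.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (case_id TEXT, activity TEXT, ts TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("C1", "Start", "2020-03-01 10:00"),
    ("C1", "Check", "2020-03-02 09:00"),
    ("C2", "Start", "2020-03-03 14:00"),
])

def extract_incremental(conn, watermark):
    # Fetch only events after the watermark; return the new watermark.
    rows = conn.execute(
        "SELECT case_id, activity, ts FROM events WHERE ts > ? ORDER BY ts",
        (watermark,),
    ).fetchall()
    new_watermark = rows[-1][2] if rows else watermark
    return rows, new_watermark

# A run with a watermark from the previous extraction:
rows, wm = extract_incremental(conn, "2020-03-01 23:59")
print(rows)  # only the two events after March 1st
```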
For real-time process model extractions, it is beneficial to store data in datatables during the ETL process. Benefits and use cases for datatables include:
- Large tables from ERP systems that do not support direct database access (for example standard SAP R/3) can be extracted and then accessed as direct SQL tables.
- Data that is not changing, for example change documents from a previous period (year, month, day, ...) can be stored for later use.
- Manually collected data from CSV and Excel files can be stored in datatables.
The QPR ProcessAnalyzer datatable concept is documented in more detail at https://devnet.onqpr.com/pawiki/index.php/QPR_ProcessAnalyzer_Objects_in_Expression_Language#Datatable.
Datatables are stored in the underlying SQL Server database, which supports terabytes of data (https://docs.microsoft.com/en-us/sql/sql-server/maximum-capacity-specifications-for-sql-server?view=sql-server-ver15).
Transformation Data Storages
Transformation converts the extracted data into process mining models. When connecting to multiple source systems, it is beneficial to store some of the extracted data for later use. The transformation itself also creates intermediate data that is used during the transformation operation, for example temporary tables containing intermediate results.
- QPR ProcessAnalyzer Data Tables. These are the default storage inside QPR ProcessAnalyzer for all kinds of extracted data. Data Tables are directly mapped to permanent SQL Server tables. Data Tables are accessed via the QPR ProcessAnalyzer Server, which provides an extra layer of user rights administration. Data Tables are stored in the same database as process models, so backups are taken automatically at the same time.
- Integration Database. Using a separate SQL database for integrations is beneficial for example when other 3rd party tools are used.
- SQL Sandbox. A SQL Sandbox provides a restricted area for transformation scripts to execute SQL statements without the risk of SQL statements being executed in the QPR ProcessAnalyzer database itself.
- In-Memory ETL Data Frames. In-Memory ETL, introduced in 2020.1, provides very fast access for real-time process models. The In-Memory storage has intelligent caching, and the memory is automatically released for other purposes once it is no longer needed.
- Process Models. Process mining models offer yet another place for storing data. For example, for case-level prediction models, the historical cases used to build the machine learning prediction model can be stored directly into an existing process mining model while the new data related to on-going cases is loaded separately.
- QPR ProcessAnalyzer ETL scripts. Using the ETL scripts together with the SQL Sandbox or Integration Database provides the full flexibility of SQL queries, with familiar syntax, for making transformations. The benefit of ETL scripts is that any required mappings, table joins and KPI calculations can be created with SQL. While SQL is a widely used query language, complex SQL queries can be challenging to write, and SQL performance may be limited with very large tables.
- QPR ProcessAnalyzer In-Memory ETL Expressions. Introduced in QPR ProcessAnalyzer 2020.1, the In-Memory ETL Expressions provide fast transformation of data into process mining cases and events. ETL Expressions contain a great selection of functions needed for all kinds of transformations.
- ODBC / SQL Queries. Part of the transformation logic can be embedded already into the extraction queries. While this is not always possible, for example when data from multiple source systems is used to create end-to-end processes, it is sometimes convenient to do some transformations already in the extraction scripts.
- WebService. The WebService interface operates with HTTP methods. Many 3rd party ETL tools and programming languages can be used for transformation mapping using WebService. More information: https://devnet.onqpr.com/pawiki/index.php/QPR_ProcessAnalyzer_Scripting_Commands#--.23CallWebService
- QPR ProcessAnalyzer Process Model. This is the traditional option for storing process mining models in QPR ProcessAnalyzer. The data is typically loaded with importEvents and importCaseAttributes ETL commands or similar manual operations.
- QPR ProcessAnalyzer Data Table. Introduced in QPR ProcessAnalyzer 2020.1, this option allows managing process mining models quickly and directly from Data Table storage, without the need to reserve separate process model storage.
- QPR ProcessAnalyzer In-Memory. Introduced in QPR ProcessAnalyzer 2020.1, the In-Memory ETL option provides very fast Real-Time Process Mining, as the model data is not written to permanent storage at all. For automatically refreshing Dashboard Screens and Case-Level Prediction models, In-Memory storage is the fastest and easiest solution.
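The SQL-based transformation mentioned above typically means joining several source tables into one eventlog. As a minimal sketch (sqlite3 stands in for the Integration Database / SQL Sandbox, and the schema is hypothetical), "Created" events from a header table can be unioned with a change-log table:

```python
import sqlite3

# Sketch of a SQL-based transformation: joining a header table and a
# change-log table into one eventlog (case_id, event_type, timestamp).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id TEXT, created_ts TEXT);
CREATE TABLE order_changes (order_id TEXT, change_type TEXT, change_ts TEXT);
INSERT INTO orders VALUES ('SO-1', '2020-01-02');
INSERT INTO order_changes VALUES ('SO-1', 'PriceChanged', '2020-01-04');
""")

# UNION the "Created" events (derived from headers) with change events.
eventlog = conn.execute("""
    SELECT order_id AS case_id, 'Created' AS event_type, created_ts AS ts
    FROM orders
    UNION ALL
    SELECT order_id, change_type, change_ts FROM order_changes
    ORDER BY ts
""").fetchall()
print(eventlog)
```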
More information about the options:
Trigger for Process Mining Model: Create and Update
There are many situations when a new process mining model is created or an existing model is updated with new data.
- Periodically started model creation using Windows Task Scheduler. This is the most used approach for creating and updating models periodically. Windows Task Scheduler can start an ETL script or In-Memory expression that triggers new data extractions and transformations.
- Manually started model creation using import. Super simple: just start the creation using any of the QPR ProcessAnalyzer clients.
- RPA tool initiated creation. 3rd party tools like RPA tools or ETL tools can initiate the model creation and update operations.
- Ad-Hoc model creation as an On-The-Fly process model using the web user interface. Introduced in QPR ProcessAnalyzer 2020.1, On-The-Fly creation is initiated from the user interface by selecting a scope for the model, for example a certain company code, a certain vendor's purchase orders, a certain business area, or any similar scope. This can also be a drill-down operation from one process level to a filtered set of cases in another business process. On-The-Fly creation is backed by In-Memory storage, which means the model is up and running without any persistent storage of the data.
- External push-data-driven WebService API creation. If the data is pushed to QPR ProcessAnalyzer by the source system or a 3rd party ETL tool using the WebService API, then it is typical to trigger the creation or update of the process mining model once all data has been transferred. This can be done easily by calling a web service to complete the model creation or update operation. More information: https://devnet.onqpr.com/pawiki/index.php/CallWebService_Script_Examples and https://devnet.onqpr.com/pawiki/index.php/QPR_ProcessAnalyzer_API:_ResetModelCache
- Automatic Loading on Server Startup. This can be easily defined per process mining model using: https://devnet.onqpr.com/pawiki/index.php/QPR_ProcessAnalyzer_Model_JSON_Settings#Automatic_Loading_on_Server_Startup
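For the push-driven trigger above, the source system builds an event payload, posts it to the WebService API, and then calls a second operation to refresh the model. A minimal sketch of the payload-building side; the endpoint path and payload shape shown in the comments are assumptions, not the documented QPR API:

```python
import json

# Sketch of push-style integration. Only the payload construction is
# executed here; the endpoint path and JSON field names are hypothetical.

def build_event_payload(model_id, events):
    # Serialize new events for one model into a JSON request body.
    return json.dumps({"modelId": model_id, "events": events})

payload = build_event_payload(42, [
    {"caseId": "C1", "eventType": "Created", "timestamp": "2020-03-01T10:00:00"},
])
print(payload)

# Sending would look roughly like this (not executed, endpoint assumed):
#   urllib.request.urlopen(urllib.request.Request(
#       base_url + "/api/importEvents", data=payload.encode(), method="POST"))
# ...followed by a call to refresh or reset the model cache.
```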
Trigger for Process Mining Model: Unload and Delete
Unloading process mining models from memory and deleting models completely are necessary steps for covering the full life-cycle of process mining models. Typical situations include:
- Drop based on idle time. This option is useful when you want to guarantee that (1) a Real-Time Process Mining model stays complete and unchanged for the duration of the analysis performed by one analyst user, and (2) the contents of a Real-Time Process Mining model are refreshed with up-to-date data after the specified amount of idle time has passed since last usage. For more information see: https://devnet.onqpr.com/pawiki/index.php/QPR_ProcessAnalyzer_Model_JSON_Settings#Memory_Usage_Settings
- Drop based on memory. This option keeps the process mining model in memory as long as free memory is available. When memory runs low, the model with the longest idle time is dropped.
- Delete, archive and restore models. These are the standard operations for deleting models and data from transformation data storages.
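The two drop policies above can be illustrated with a small sketch. The QPR server's actual eviction logic is internal; this only demonstrates the policies as described:

```python
# Illustration of the two unload policies: drop models exceeding a fixed
# idle time, or, when memory runs low, drop the longest-idle model.

def drop_idle(models, now, max_idle):
    # models: {model_name: last_used_timestamp (seconds)}.
    # Keep only models used within the allowed idle window.
    return {m: t for m, t in models.items() if now - t <= max_idle}

def drop_longest_idle(models):
    # Evict the single model that has been idle the longest
    # (i.e. the one with the oldest last-used timestamp).
    victim = min(models, key=lambda m: models[m])
    return {m: t for m, t in models.items() if m != victim}

models = {"Orders": 100.0, "Invoices": 400.0, "Tickets": 350.0}
print(drop_idle(models, now=500.0, max_idle=200.0))  # "Orders" is dropped
print(drop_longest_idle(models))                     # "Orders" is dropped too
```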
Thank you for reading this far! My personal tip-of-the-day for building real-time process mining in your organization is the following:
- Develop a process mining model that truly contains valuable information.
- Find the key users who would benefit from that model, and together with them, design a few dashboard views containing interesting analysis and findings.
- Automate the data extractions and agree on the data refreshing strategy. Your users may, for example, want to see the process automation report (below) updated on a daily basis with live data from several ERP systems.
Pre-configured Process Mining Charts
QPR ProcessAnalyzer 2020.1 is shipped with pre-configured process mining charts for typical process mining tasks including:
Case Trend by Selected Event shows how many cases go through any of the selected events. By default, the report uses weekly trends:
Root Causes shows the possible root causes for any discovered process mining finding. By default, the chart shows the biggest problem areas. A nice tooltip explains any selected root cause in more detail, for example: "Customer Group Kids has the analyzed feature in 22 % of the cases which is 5.1 % more (197 cases) than on average. This root cause explains 11 % (197 of 1,820 cases) of the occurrence of the analyzed feature, provided that there is a causal relationship." It is also possible to limit the search for Root Causes to the selected case attribute only, and to show best-practice examples.
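The contribution figure in the tooltip example above is simple arithmetic: the share of feature occurrences that fall under the candidate root cause. A quick check of the quoted numbers:

```python
# Reproducing the tooltip arithmetic: 197 of 1,820 cases with the
# analyzed feature belong to Customer Group Kids, so that root cause
# "explains" about 11 % of the occurrences.
cases_explained = 197
total_feature_cases = 1820

contribution = cases_explained / total_feature_cases * 100
print(f"{contribution:.0f} %")  # 11 %
```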
QPR ProcessAnalyzer Usage Statistics Reporting
QPR ProcessAnalyzer includes extensive usage statistics reporting, which helps to understand model usage and ensures the results are delivered to the right people. 2020.1 introduces five new usage reports as Chart View pre-configured charts:
- Latest Operations shows columns Operation, Start Time, End Time, Model, User, Duration and Additional Data
- Operations by time shows a column chart displaying the trend of operation counts by Start Time
- Operation durations shows a histogram column chart with operation counts per duration unit
- Most used models shows columns Model, Operation count and Average operation duration.
- Most active users shows columns User, Operation count and Average operation duration.
This is the new QPR ProcessAnalyzer 2020.1 😀👍
If you’re already using QPR ProcessAnalyzer, go ahead and try these new features when you get a chance. If not, and if you’re new to Process Mining, read more on this page. If you want to know more about QPR ProcessAnalyzer, go here. Also, don’t hesitate to book a live QPR ProcessAnalyzer demo:
It’s a good time to take a look at Process Mining if your company hasn’t already. The capabilities and usability of Process Mining software are improving rapidly, and the market is quickly becoming mature, though there’s still much work to be done. If you think your company is ready to step it up with the future of as-is process modeling and process efficiency maximization, the fastest way to get things moving is to send our Process Mining team a direct message:
Thank you for participating! Here are the release note presentation PDF and the release webinar recording: