Importing and Reusing DMN Models using Red Hat Decision Manager
Decision Model and Notation (DMN) is a standard established by the Object Management Group (OMG) for describing and modeling operational decisions. DMN decision models can be shared between DMN-compliant platforms and across organizations, so that business analysts and business rules developers are unified in designing and implementing DMN decision services. Reusing a DMN model within the scope of a larger decision flow is a common requirement. Matteo Mortari did a KIE Live session earlier this year on this topic, in which he walks through how model reuse can be achieved through the concept of a DMN decision service. This article explains the same concept in a step-by-step manner with a sample use case.
A decision service is an invocable function with well-defined inputs and outputs that is published for invocation. Within the model itself, a decision service can be invoked much like a Business Knowledge Model (BKM) node if necessary, but its typical use is to be reused from an external application or a business process.
The decision service enables this reusability by providing a well-defined interface that other DMN models can invoke.
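In FEEL, invoking a decision service from within a model looks just like calling a BKM. As a minimal sketch, assuming a decision service named Fraud Decision Service that takes a single Transaction input, a literal expression elsewhere in the same model could invoke it as:

```
Fraud Decision Service(Transaction)
```

The service name and parameter here are assumptions for illustration; the actual names come from the decision service definition in your model.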
Transaction Monitoring Use Case
Let us say we have a real-time transaction monitoring system. We would like to evaluate every incoming transaction against two possible outcomes: Fraud and Anti-Money Laundering (AML). Each of these decisions can also be invoked independently in separate flows. Let us define these flows here, using Red Hat Decision Manager’s DMN editor.
We have a simple fraud decision that looks at properties of the transaction, such as the amount, average amount, transaction type, and merchant type, to determine whether the transaction could be fraudulent. The top-level decision, “Blocked Transactions”, is shown here.
Notice how we have wrapped the decision in a decision service so that it can be reused outside the context of the model itself.
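In the underlying DMN XML, wrapping a decision in a decision service amounts to a `decisionService` element that references the exposed decision and its required inputs. The following is an illustrative fragment only; the element names follow the OMG DMN XML schema, but the `id` values and `href` targets are assumptions for this example:

```xml
<!-- Hypothetical sketch: ids and hrefs are placeholders for this article's model -->
<dmn:decisionService id="_fraudDS" name="Fraud Decision Service">
  <!-- the decision exposed as the service's output -->
  <dmn:outputDecision href="#_blockedTransactions"/>
  <!-- the data the caller must supply -->
  <dmn:inputData href="#_transaction"/>
</dmn:decisionService>

<dmn:decision id="_blockedTransactions" name="Blocked Transactions">
  <!-- decision logic over amount, average amount, transaction type, merchant type -->
</dmn:decision>
```

In practice the DMN editor generates this XML for you when you draw the decision service boundary around the decision node.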
Let us define the AML decision in the same way.
Anti-Money Laundering Decision:
The AML alert decision is based on both transactional and customer profile characteristics. As we did with the Fraud Alert, we will define it within a decision service.
Now that we have defined both decision flows, let us invoke them from a new DMN model, TransactionMonitoringDMN, that encompasses both decision services, to show how we can achieve this service reusability.
Import and Invoke the decision service:
The DMN editor can import DMN models that are already defined, providing a way to reuse their components. The import functionality is found on the “Included Models” tab of the DMN editor. Let us include the two models we created earlier. To do this, click the “Include Model” button and choose the model. We give it a unique name as an alias (“fraud”) so that we can refer to it within this DMN.
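Under the hood, including a model adds an `import` element to the DMN XML. A hedged sketch follows; the `namespace` URI is a placeholder, while the alias “fraud” is the one chosen above and the `importType` URI is the standard DMN model import type:

```xml
<!-- Illustrative fragment: the namespace value is an assumption -->
<dmn:import name="fraud"
            namespace="https://kiegroup.org/dmn/_FRAUD-MODEL-NAMESPACE"
            importType="http://www.omg.org/spec/DMN/20180521/MODEL/"/>
```

After the import, elements from the included model are referenced with the alias as a prefix, for example `fraud.Fraud Decision Service`.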
We will add both the models this way.
After we include the models, all the model elements defined in these decisions appear in the lower-left section of the DMN editor, as shown here. Notice that the include functionality also brings in any custom data types defined in those models. Let us filter by “Decision Service”.
Now let’s define a DMN that encompasses both of these decisions.
We define two data objects, Transaction and Customer. The decision services are invoked using the “Invocation” boxed expression, which allows us to map the inputs to each decision service.
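In the DMN XML, the boxed invocation is an `invocation` element whose inner expression names the function being invoked (here, the imported decision service via its alias) and whose `binding` elements map each parameter. This is a hedged sketch; the `id` values and the parameter name are assumptions for this example:

```xml
<!-- Hypothetical sketch of the Fraud Alert decision invoking the imported service -->
<dmn:decision id="_fraudAlert" name="Fraud Alert">
  <dmn:variable name="Fraud Alert"/>
  <dmn:invocation>
    <!-- the invoked function: the imported decision service, via its alias -->
    <dmn:literalExpression>
      <dmn:text>fraud.Fraud Decision Service</dmn:text>
    </dmn:literalExpression>
    <!-- map the local Transaction data object to the service's input -->
    <dmn:binding>
      <dmn:parameter name="Transaction"/>
      <dmn:literalExpression>
        <dmn:text>Transaction</dmn:text>
      </dmn:literalExpression>
    </dmn:binding>
  </dmn:invocation>
</dmn:decision>
```

The editor builds this structure from the Invocation boxed expression; each row of the boxed expression becomes one `binding`.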
The Fraud Alert output of this invocation maps to the “Blocked Transactions” decision defined in the included model, and we use the same approach to invoke the AML Alert decision. With this technique, we define the Transaction Monitoring decision on top of the Fraud and AML decisions that were already defined. The complete DMN project can be found here.
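Once the project is deployed to a KIE Server container, the top-level model can be evaluated through the DMN REST endpoint (a POST to the container’s `/dmn` resource). A hedged sketch of the request body follows; the namespace URI, container details, and the fields inside Transaction and Customer are assumptions based on the inputs described above:

```json
{
  "model-namespace": "https://kiegroup.org/dmn/_TX-MONITORING-NAMESPACE",
  "model-name": "TransactionMonitoringDMN",
  "dmn-context": {
    "Transaction": {
      "amount": 15000,
      "average amount": 1200,
      "transaction type": "wire",
      "merchant type": "retail"
    },
    "Customer": {
      "risk rating": "high"
    }
  }
}
```

The response carries the evaluated `dmn-context`, including the Fraud Alert and AML Alert results produced by the two invoked decision services.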
DMN provides a powerful mechanism to define business rules in a business-user-friendly, platform-agnostic way. By modularizing and reusing artifacts, we can define cleaner models and achieve better quality, since the resulting models are easier to test and manage.