Embedding analytics into the everyday work of the business is difficult but not impossible. Tackling integration, data sourcing, and change management challenges can help ensure success.
When I speak with analytics managers or businesspeople interested in analytics, they often tell me that performing the analytics themselves is not their primary problem. “We have to get the analytics integrated with the process and the systems that support it,” they say. This challenge, sometimes called “operational analytics,” is the most important factor in delivering business value from analytics. It’s also critical to delivering value from cognitive technologies, which, in my view, are just an extension of analytics anyway.
A quick aside: someone who anticipated this issue early on was Bill Franks, the chief analytics officer at Teradata. A couple of years ago he published a book, “The Analytics Revolution,” that is really about operational analytics. I wrote the foreword, but the meat of the book is its sound advice on integrating analytics with the core business processes of your organization.
Three things make operational analytics tough, in my opinion. First, to make it work, you have to integrate it with transactional or workflow systems. Second, you often have to pull data from a variety of difficult places. And third, embedding analytics within operational processes means you have to change the behavior of the people who perform that process.
If you are successful, you eventually will run into a fourth problem: The embedded analytical models will have to be monitored over time to make sure they remain correct. But since that’s a second-order problem (you should be so lucky to have it), I won’t discuss it further here.
Integration
To succeed with operational analytics, a company has to combine transaction systems, workflow systems, analytical systems, databases, and display/user experience tools, and that is no easy task. Integrating with transactional systems takes a good deal of effort, although modern systems architectures make it a bit easier. Most transactional systems these days (including SAP and Oracle ERP systems) allow API-based connections. But there is usually a fair amount of work involved in integrating with an operational system: extracting the data you need, running the analytics somewhere (in the cloud, or with in-database processing), and embedding the result in an interface for the front-line user.
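In practice, that extract-analyze-embed loop often takes the form of a small service sitting between the systems. Here is a minimal sketch in Python, assuming a hypothetical REST endpoint on the transactional system and a stand-in scoring function; a real integration would use the ERP vendor’s published API and a properly trained model.

```python
# Minimal sketch of the extract -> analyze -> embed loop described above.
# The endpoints, field names, and scoring logic are hypothetical stand-ins.
import requests

ERP_API = "https://erp.example.com/api/v1"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer <token>"}

def score_transaction(txn: dict) -> float:
    """Stand-in for a real model; flags unusually large orders."""
    amount = txn.get("amount", 0.0)
    return min(amount / 10_000.0, 1.0)  # toy risk score in [0, 1]

def run_once() -> None:
    # 1. Extract: pull recent transactions out of the operational system.
    txns = requests.get(f"{ERP_API}/transactions?status=new",
                        headers=HEADERS, timeout=30).json()
    for txn in txns:
        # 2. Analyze: score each transaction (could equally run in-database
        #    or in the cloud, as noted above).
        score = score_transaction(txn)
        # 3. Embed: push the result back so the front-line UI can display it.
        requests.post(f"{ERP_API}/transactions/{txn['id']}/annotations",
                      json={"risk_score": score}, headers=HEADERS, timeout=30)

if __name__ == "__main__":
    run_once()
```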
You might be able to accomplish much of the integration with a workflow-oriented overlay tool such as case management, business process automation (BPA), or robotic process automation, although those types of systems generally don’t do any analytics themselves. That means human labor, from your organization or an external services provider, will be required to combine workflow and analytics. For example, Pegasystems, a Boston-based BPA company (I don’t have a financial relationship with them), partners with professional services firms to combine analytics-based recommendation engines with Pega’s multichannel marketing automation capabilities.
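To make the idea concrete, here is a toy example of what an analytics-aware workflow step might look like: a routing function that consults a model-supplied score instead of a static rule. The case fields, stage names, and threshold are all invented for illustration; a real BPA or RPA platform would supply its own integration hooks.

```python
# Toy illustration of wiring analytics into a workflow step.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    churn_risk: float  # supplied by an analytics service, as in the sketch above

def next_step(case: Case) -> str:
    """Route a case based on an analytical score instead of a static rule."""
    if case.churn_risk >= 0.7:
        return "escalate_to_retention_specialist"
    if case.churn_risk >= 0.3:
        return "send_targeted_offer"
    return "standard_queue"

print(next_step(Case("C-1001", churn_risk=0.82)))
# -> escalate_to_retention_specialist
```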
Various Data Sources
Problem two is getting all the needed data. That can be handled fairly easily if the data is in an information system in some accessible format. But in many cases, the data comes in a variety of formats, including paper reports, PDF files, unstructured articles, medical records, and more. To get that kind of data into your operational analytics system, you need more than analytics; you need artificial intelligence (AI).
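As a small illustration of that ingestion step, the sketch below pulls the text out of a PDF with the open-source pypdf library and routes the document with a crude keyword classifier. The categories, keywords, and file name are invented, and the classifier is a deliberate simplification; real systems rely on trained NLP models for this kind of work.

```python
# Minimal sketch of "data ingestion": extract text from a PDF, then
# classify it. The keyword rules are a crude stand-in for real NLP/AI.
from pypdf import PdfReader

CATEGORIES = {
    "analyst_report": ("price target", "earnings", "rating"),
    "medical_record": ("diagnosis", "patient", "dosage"),
}

def extract_text(path: str) -> str:
    """Pull the raw text out of every page of a PDF file."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def classify(text: str) -> str:
    """Assign a document category by naive keyword matching."""
    lowered = text.lower()
    for label, keywords in CATEGORIES.items():
        if any(kw in lowered for kw in keywords):
            return label
    return "unclassified"

doc = extract_text("example_report.pdf")  # hypothetical input file
print(classify(doc))
```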
One of the few vendors that combines AI capabilities with BPA is RAGE Frameworks, another Boston-based company headed by a former professor, Venkat Srinivasan, who holds a doctorate in computational linguistics. (I don’t have a financial relationship with them either, but I always like it when professors make good.) The AI capabilities allow RAGE applications in, for example, financial asset management to extract and classify relevant content from analyst reports and drive investment recommendations. RAGE also has worked with audit firms to extract data from paper and PDF files for account reconciliations. You simply can’t automate such processes if you can’t automate the “data ingestion” process. In addition, RAGE employs a variety of other “engines”—21 in total, including a computational linguistics engine, a decision tree engine, and a business-rules engine—to rapidly develop intelligent applications. This multiplicity of microservices is the only way I know of to quickly create operational systems that can analyze and think.
Changing Behavior
Finally, there is the need to persuade front-line users to change their behavior and to base decisions and actions on operational analytics. A “next-best offer” system for bank tellers, for example, has to persuade tellers to actually use its recommendations when working with customers, and they won’t employ analytical recommendations they don’t trust.
To build such trust, transparency of analytical recommendations is essential. If the reason for a recommended product or action can’t be described in understandable language, the user won’t be able to assess whether it makes sense. That requires some sort of natural-language generation capability to describe the decision logic, and it doesn’t favor many machine-learning approaches to analytics, because most of the time there is simply no way to describe or interpret why a particular model prevails in a machine-learning process.
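Here is the idea in miniature: a toy next-best-offer function that returns a plain-language reason alongside every recommendation. The customer fields, products, and rules are all invented for illustration; a production system would generate its explanations from the real decision logic, which is precisely what opaque machine-learning models make difficult.

```python
# Toy "next-best offer" that pairs every recommendation with a
# plain-language reason the front-line user can read and judge.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    balance: float
    has_mortgage: bool

def recommend(c: Customer) -> tuple[str, str]:
    """Return (offer, explanation) so the teller can see the 'why'."""
    if c.balance > 50_000 and not c.has_mortgage:
        return ("investment_account",
                f"{c.name} holds over $50,000 in deposits and has no mortgage, "
                "so an investment account is likely the most relevant offer.")
    if c.has_mortgage:
        return ("home_equity_line",
                f"{c.name} already has a mortgage with us, making a home "
                "equity line a natural follow-on product.")
    return ("savings_account", f"No strong signal for {c.name}; default offer.")

offer, reason = recommend(Customer("A. Rivera", balance=62_000, has_mortgage=False))
print(offer)   # investment_account
print(reason)  # readable rationale the teller can assess and relay
```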
Organizations embarking on operational analytics are learning that the analytics itself is the easy part. There is no shortage of vendors, both proprietary and open source, offering analytical algorithms. But building an operational analytics system means integrating with, and changing, existing architectures and behaviors, and that is always the hard part. It’s well worth the trouble, however, to build applications in which analytics and smart decision-making are embedded in a company’s systems and processes.
Tom Davenport is a senior advisor to Deloitte Analytics, a distinguished professor at Babson College, and a visiting scholar at MIT IDE.
A version of this article was originally published by DataInformed on July 20.