From Data to Process to Decision

ODSC - Open Data Science
Nov 6, 2019

I recently published a paper entitled “Intelligent Decisions: How Businesses Can Improve Processes Using Artificial Intelligence Technologies.” The work focused on the possibility of employing artificial intelligence in the business process management functions of the enterprise. I would like to further explore this concept and investigate the data science tenets relevant to a successful implementation of such technology — moving from data to process to decision.

[Related Article: It’s About Time. Designing a Streaming Architecture For High-Frequency Sensor Data]

BI and the use/reuse of collected data

In the business intelligence landscape, the influx of data into the organization from sources such as competitors, customers, news outlets, and partners marks the beginning of the journey we will explore. We have an emerging understanding of the many roles that data professionals play in the cleaning and categorization, storage and security, and governance and querying of this “digital loot.” The work of the database administrator and cloud engineer goes hand in hand with that of the data architect. Their collective efforts support the evolution of the data as it is clustered and mined, analyzed for trends, and statistically manipulated. The work of data visualization experts and of neural network and expert system designers is as important as that of the communications manager and subject matter professionals. The precision necessary to implement artificially intelligent processes begins with the well-orchestrated assembly line of data caretakers during these phases of collection and use.

The smartest workflow design is full of data that the organization has already collected and analyzed. Reusing this data in the form of a business process adds the iterative effect of lessons learned, directly improving the use of any new data that enters the enterprise’s knowledge ecosystem. Innovative business intelligence, and the underlying data science it takes to make BI work, is therefore a foundational component of advanced organizational efficiency and of AI-Powered Process Improvement.

Self-organizing process metadata

In a firm with a well-managed and innovatively automated knowledge ecosystem, processes could be trained to “self-organize” (a concept introduced by Nicolis and Prigogine (1977) and formally called autopoiesis). This intelligent use of organizational data for the design and implementation of processes could minimize human thought, input, and decision-making while optimizing analyzed knowledge to support future decision-making within the firm. The conjecture, of course, is that minimizing human input is synonymous with minimizing human error, and that optimizing analysis is synonymous with improving organizational decisions.

Any technology supporting AI-Powered Process Improvement would benefit from well-informed processes designed by the organization to “learn” how to advance business outcomes. The data discovery and reuse necessary for such smart processes is another important part of the firm’s data framework.

Generalized (and then specialized) rules about data and data use

So far, this is a promising exploration, with vanguard use of data and innovative sense-making tasks to transform collected information into repeatable processes. Without rules to govern both the stewardship of the data and the development and execution of the processes, however, we are far from a significant and controlled return on investment.

General rules for how to manage constraints and identify co-occurring events provide relevant inputs for the data-to-process-to-data feedback loop. Those rules are more objective than, say, the rules managing external contingencies, nomenclature, and cultural norms. How does an organization build data-focused rules related to such subjective topics into its business intelligence function? Into its processes? Reciprocal synthesis (different descriptions of the same thing), as well as lines-of-argument synthesis (different descriptions of slightly different but connected things), must be parsed out to avoid confusion and misaligned decisions (Alexander et al., 2018). This is where the need for specialized rules to govern data, data use/reuse, and process implementation comes into play.
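To make the idea of a general, objective rule concrete, here is a minimal sketch of counting which events co-occur across process cases and flagging pairs that clear a minimum support threshold — the kind of input a data-to-process-to-data feedback loop could consume. The event log, event names, and threshold are all hypothetical, not drawn from the paper.

```python
from collections import Counter
from itertools import combinations

# Hypothetical event log: each case (an order, a ticket, a claim) maps to the
# set of events observed as it moved through the process.
event_log = {
    "case-001": {"credit_check", "manual_review", "approval"},
    "case-002": {"credit_check", "approval"},
    "case-003": {"credit_check", "manual_review", "escalation"},
    "case-004": {"credit_check", "manual_review", "approval"},
}

MIN_SUPPORT = 0.5  # general rule: flag event pairs seen in at least half of all cases

# Count how often each pair of events appears together in the same case.
pair_counts = Counter()
for events in event_log.values():
    for pair in combinations(sorted(events), 2):
        pair_counts[pair] += 1

n_cases = len(event_log)
co_occurring = {
    pair: count / n_cases
    for pair, count in pair_counts.items()
    if count / n_cases >= MIN_SUPPORT
}

for (a, b), support in sorted(co_occurring.items(), key=lambda kv: -kv[1]):
    print(f"{a} and {b} co-occur in {support:.0%} of cases")
```

Specialized rules would then sit on top of output like this, deciding which flagged pairs reflect genuine process dependencies and which merely reflect nomenclature or cultural habit.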

Collecting data to inform processes to impact decisions: The Pinnacle

Even with the best rules, an organization still must determine the degree to which its data and processes can be relied upon to make decisions. With proper planning and an in-depth understanding of data/process interactions, repeatable algorithms for process improvement and decision-making are feasible and desirable. Whether the organization takes a static approach, using past decisions and situations to inform future processes, or a more dynamic approach, using value stream mapping (a type of process modeling meant to identify losses and to offer an integrated and standardized view of data across the enterprise [Librelato et al., 2014]), decisions can be improved using the building blocks of the entire system. The system I am referring to is the data-to-process-to-decision system, with all of its feedback loops and interconnections.
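As a rough illustration of the static approach, the sketch below replays past cases and their outcomes to tune a single process parameter, which then governs decisions on new cases — closing one loop of the data-to-process-to-decision system. All data, field names, and thresholds here are invented for illustration.

```python
# Hypothetical past cases: a risk score assigned during the process and the
# outcome observed after the process completed.
past_cases = [
    (0.20, True),
    (0.35, True),
    (0.45, True),
    (0.62, False),
    (0.70, False),
    (0.85, False),
]

def pick_threshold(history, candidates=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """Choose the threshold that best separates good from bad past outcomes."""
    def agreement(threshold):
        # A case "agrees" when approving low-risk work coincides with a good outcome.
        return sum((risk <= threshold) == good for risk, good in history)
    return max(candidates, key=agreement)

threshold = pick_threshold(past_cases)

def decide(new_risk_score):
    """The improved process applies the learned rule to each incoming case."""
    return "approve" if new_risk_score <= threshold else "escalate"

print(f"learned threshold: {threshold}")
print(decide(0.30), decide(0.80))  # -> approve escalate
```

A more dynamic approach would refresh this kind of rule continuously as new cases and process views arrive, rather than learning it once from history.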

[Related Article: Bye Bye Big Data Era, the Insight Era is Here]

However “light touch” this discussion has been, the connection is clear: a strong data science function within an organization can support that organization’s ability to effect process improvement, whether through artificially intelligent technologies or more traditional methodologies.

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.
