Information: The King of IIoT ROI


You’ve just realized that you have made a mistake. Things did not go as you planned, and what you thought would go right went completely wrong. How did this happen? There were many reasons, but when you boil it down, what was missing or incorrect was information. Think about it: would the current situation have happened if you had had the right information? Possibly. Would you have been more prepared and able to mitigate any fallout? Most definitely.

We gather and use information literally all the time. In fact, we are barely aware that we are gathering and using much of the information that makes up our daily lives. Our senses take in data from our environment that, combined with the contextualization we have accumulated through experience or teaching, becomes actionable information that allows us to live our lives. For example, when we see a pot on the stove, we become more cautious and check for heat before we grab it. If we feel heat, we grab something to insulate our hands before we pick it up. In those two sentences alone, how much information was processed, and how many actions were taken because of the information that was taken in?

A parallel situation exists in manufacturing, where an enormous amount of data is also taken in. In fact, there is so much data available that it can become overwhelming: organizations are limited in how much data they can reliably contextualize and convert to information, and further limited in how much of that information they can act upon. Much of this comes down to the economics of information. How much is information worth, and at what point is it less expensive to not have the appropriate information?
While there are many types of information, it can be categorized in four different ways:

  • KK: Known known information: This is the most common and easily understood. This is known data that has been converted into known information. You know what its current state or future state is and can use that information to accurately predict what will happen based on a decision that was made.
  • KU: Known unknown information: This category can essentially be split in two: unconverted data and future information. Unconverted data is where the question of economics comes into play. How much should be spent to capture and process the data? In this category, the data and the costs to convert it are known. Are these datapoints important enough to be converted, or can they be assumed or mitigated for a lower cost? Future information is the holy grail of information. In this case, the range of outcomes is known, but the certainty of any one occurring is limited to a probability rather than a foregone conclusion. While this may not provide a single value, the range of potential outcomes combined with their respective probabilities can be used to generate a weighted mean (see the sketch after this list).
  • UK: Unknown known information: This is information that is known somewhere within the organization but never reaches those making the decisions. It is typically tribal knowledge held by a few people and never formally recorded, or siloed information known only to part of the organization. While a lack of it may not always be the costliest, it will be the most annoying.
  • UU: Unknown unknown information: You may have heard the phrase “we don’t know what we don’t know.” This is the most dangerous and costly category, because nothing is known about the information or its potential impact on a decision. Other than taking general precautions, there is not much that can be done to mitigate its influence.
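
As a minimal sketch of the weighted mean described for known unknown (KU) information, the following Python snippet computes a probability-weighted mean over a set of possible outcomes. The outcome names, probabilities, and dollar values are purely illustrative assumptions, not figures from this article.

```python
# Minimal sketch: probability-weighted mean over a set of known possible
# outcomes (KU information). All names and values below are illustrative
# assumptions, not figures from the article.
outcomes = [
    # (description, probability, cost in dollars)
    ("runs the full shift without issue", 0.70, 0),
    ("minor stoppage, 30 minutes lost",   0.25, 1_000),
    ("major failure, 4 hours lost",       0.05, 8_000),
]

# The probabilities should cover all known outcomes.
assert abs(sum(p for _, p, _ in outcomes) - 1.0) < 1e-9

# Weighted mean cost: each outcome's cost weighted by its probability.
weighted_mean_cost = sum(p * cost for _, p, cost in outcomes)
print(f"Weighted mean cost of the shift: ${weighted_mean_cost:,.2f}")
```

The single number this produces is exactly the kind of value a planner can act on, even though no individual outcome is certain.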


Information costs

Information costs and returns can be different depending on the situation. In a manufacturing organization, two general classifications typically apply: day-to-day operations and projects. In a day-to-day operation, data is constantly being generated, and depending on the operation, most of it is being converted to information.

Feedback on any changes occurs fairly quickly. In a project, data is collected and interpreted to generate actionable information. However, much of this information is generated long before it is actually used, so feedback takes significantly longer than in a day-to-day operation. One consequence of this difference in feedback cycles is that in day-to-day operations, information can be gathered, used, and updated in smaller increments, while in a project, since the information is really only used once, most of it must be gathered before use, with little to no opportunity to update it along the way.

For example, in a production line a parameter may be tuned regularly, with each result building upon the information gathered. This information can be used to improve the operation over time. In a project where a production line is being installed, many of the decisions, especially the larger decisions about how big of a line to install, what equipment to use, and where to build the line, must be made in advance with little opportunity to make adjustments at a later point.

In both situations, there are direct costs and opportunity costs. Direct costs relate to the time and money that must be spent to gather the required data and interpret it to create actionable information. Opportunity costs relate to what is given up by allocating the resources needed to gain that information. In a day-to-day operation, the opportunity costs can be significant. Beyond the loss of potential gains from alternative investments, there can also be significant losses from interrupting a process line, or from off-spec product resulting from any testing being conducted. In a project, there are fewer potential opportunity costs, since production has not started and therefore cannot be interrupted; direct costs make up most of the costs associated with information in this situation.


Regular vs. perfect information

Since the information we gather daily typically has some unknowns associated with it, it is usually far from perfect, and the level of perfection varies by information type. KK information will probably be closer to the perfect end. For KU information, the potential outcomes might be known, but only with a probability attached to each. UK information will be similar in quality to KK information; the challenge is uncovering it at the right time. Finally, UU information will be the furthest from perfect, and because it is unknown, even its potential outcomes cannot be estimated.

While these types of information may have differing levels of perfection, it should be noted that at one point or another all situations will ultimately yield perfect information, just not typically at the perfect time. The phrase “hindsight is 20/20” has been repeatedly proven. At some point, perfect information will become available. The challenge is trying to develop perfect information at the perfect time.

Since perfect information can be used to make perfect decisions, perfect information is valuable. However, it is not infinitely valuable, so there is a level at which the cost of the information stops making economic sense. The theory of how the value of perfect information is calculated is straightforward: it is essentially the difference between the expected value of decisions made based on the information available and the expected value of decisions made based on perfect information. For instance, using the following values:

 

Expected Return

              A Outcome (p = 0.8)   B Outcome (p = 0.2)
  Option A            $40                   $10
  Option B            $10                   $50

 
We can see two outcomes where choosing the correct option results in significantly better returns. However, since it is not known which outcome will occur, and a decision must be made between Option A and Option B before the outcome is determined, some risk must be taken. In this case, since the expected value of Option A is much higher at $34 ($40 * 0.8 + $10 * 0.2 = $34) than Option B at $18 ($10 * 0.8 + $50 * 0.2 = $18), it would be better to proceed with Option A. Could this potentially not work out? Definitely! In fact, 20% of the time this will end up being the worse option, and the decision-maker might need to have a few uncomfortable conversations with their superiors.

What if perfect information were available? What would be the expected value of a decision made in that case? To calculate it, the assumption is that the outcome is known prior to making the decision and that the optimal decision is made based on that information. The expected value of this scenario would be $42 ($40 * 0.8 + $50 * 0.2 = $42). Knowing this, we can calculate that the value of perfect information is $8 ($42 – $34), which means that spending any more than $8 to get even the most perfect information is not worth it economically.
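
The same arithmetic can be written out in a few lines of Python. This is a minimal sketch using the payoff table and probabilities from the example above; the dictionary layout is simply one convenient way to hold them.

```python
# Payoff table from the example above: rows are decisions, columns are outcomes.
payoffs = {
    "Option A": {"A Outcome": 40, "B Outcome": 10},
    "Option B": {"A Outcome": 10, "B Outcome": 50},
}
probabilities = {"A Outcome": 0.8, "B Outcome": 0.2}

# Expected value of each option when the decision must be made before
# the outcome is known.
expected = {option: sum(probabilities[o] * payoff for o, payoff in row.items())
            for option, row in payoffs.items()}
best_without_info = max(expected.values())  # $34 (Option A)

# With perfect information, the best option is chosen for each outcome,
# so each outcome's best payoff is weighted by that outcome's probability.
with_perfect_info = sum(p * max(row[o] for row in payoffs.values())
                        for o, p in probabilities.items())  # $42

value_of_perfect_info = with_perfect_info - best_without_info  # $8
print(expected)
print(f"With perfect information: ${with_perfect_info:.0f}")
print(f"Value of perfect information: ${value_of_perfect_info:.0f}")
```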


Application of perfect information

How can this valuation of perfect information be applied to the four categories from earlier?

  • KK Information: Can this information be improved upon? Example: What would be the impact of increasing the accuracy of a sensor? Would the value gained offset the additional cost of the more accurate sensor?
  • KU Information: How do we change this type of information into KK information? Example: We know that we cannot predict when a machine will fail. Could we connect the machine and apply ML/AI to determine when it will fail? What would be the uptime impact of performing preventative maintenance vs. break-fix maintenance? Will this increase uptime? Will it reduce the amount of off-spec product? Will these positive impacts offset the implementation costs?
  • UK Information: Can we expose and capture information better? Example: Operators are making adjustments to the production in the field to compensate for potential issues that are unknown to central engineering. How can this information be captured and implemented into process improvements?
  • UU Information: What information could be missing from our initial evaluations? Example: A company is transitioning to I4.0. What will be used to evaluate whether it has the appropriate organization and infrastructure for this change? Can its maturity level (from a digital perspective) be measured? What would be the best and worst scenarios of this transition, and what is the probability of each occurring? What is the expected value of this transition? What would the expected value be with the information gained from an evaluation or maturity measurement?


Can IIoT provide perfect information?

Definitely! Will it be used to the extent that it provides perfect information? Definitely not! While the value of perfect information can be calculated, the chances of being able to procure perfect information for that cost are slim to none. As in many cases, the law of diminishing returns is front and center in determining the point at which investing in the pursuit of perfect information becomes economically unfeasible. As seen previously, even perfect information isn’t infinitely valuable. Still, if most of the risk can be eliminated and the remainder can be mitigated, then the investment will be worth it. This may seem like a callous approach, but there will always be risk. Even safety systems do not completely eliminate risk; they merely reduce it to a level that is more comfortable. If 90% of the risk can be eliminated for 60% of the value of perfect information, it might not be economically feasible to eliminate the remainder of the risk.

Industrial Internet of Things (IIoT) has a unique place in the manufacturing world. It has little to no physical footprint in the organization. In fact, even from a digital perspective, the space required is minimal compared to the overall data storage requirements. However, its ability to interconnect and sort signal from noise allows data to be in the right place at the right time and in the right format. This allows for data to be refined into information, which further allows for it to be assimilated into knowledge.

Artificial Intelligence (AI) and Machine Learning (ML) are among the more visible and promising aspects of I4.0 and require a high level of digitization within the organization, using technologies such as IIoT. The machine learning part of this advancement essentially looks at what we know (good product / bad product) and compares it against as much data as it can get, identifying which parameters might be influencing the result by looking at which ones show the highest level of correlation. Once this has been worked out, an artificial intelligence can provide guidance on how to act or, in closed-loop applications, make the adjustments automatically.
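
As a minimal sketch of that correlation screening, the snippet below ranks hypothetical process parameters by the strength of their correlation with a good/bad product label. The column names and values are assumptions for illustration only, and correlation here is just a screen for candidate influencing parameters, not a full ML model.

```python
import pandas as pd

# Hypothetical data: a few process parameters plus a good/bad quality label.
# In practice this would be pulled from the historian or IIoT platform.
df = pd.DataFrame({
    "zone1_temp_c":   [181, 179, 185, 190, 178, 188],
    "line_speed_mpm": [42, 41, 44, 45, 40, 46],
    "vibration_rms":  [0.8, 0.7, 1.4, 1.9, 0.6, 1.7],
    "good_product":   [1, 1, 0, 0, 1, 0],   # 1 = good, 0 = off-spec
})

# Rank parameters by the absolute strength of their correlation with quality.
# This only flags candidates worth investigating; it does not prove causation.
correlations = (df.corr()["good_product"]
                  .drop("good_product")
                  .abs()
                  .sort_values(ascending=False))
print(correlations)
```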

Thanks to this advancement and the information provided by IIoT, information that is known can be used to create information from data that already existed, essentially turning data that the organization did not know it knew about (UK information) into something usable. Further, the infrastructure provided by IIoT allows the AI to monitor and act on real-time data rather than waiting for analysis to be performed manually.
Still, this is not perfect information. The correlations used by ML/AI are never 100% perfect, and as any statistician will tell you, correlation does not equal causation. However, the information provided by the algorithm will reduce the risk associated with known unknowns (e.g., that bearings fail is known; when they will fail is relatively unknown without some testing or monitoring).


Is information worth the cost?

The $64,000 question (depending on the application). The answer is really “it depends,” but in many cases, yes. Maintenance is typically one of the biggest targets for cost reduction and a good application to examine for this question. In many cases, maintenance is done on a preventative schedule, which looks to balance the opportunity costs of performing the maintenance against the potential costs of a failure. As the frequency of preventative maintenance goes up, the probability of a failure generally goes down. However, these intervals are typically calculated using static data, and without additional monitoring, abnormal failures cannot be predicted.

An example would be a process where downtime costs $50,000 per hour and there is a 5% chance of a failure with a weighted mean cost of $1 million. In this instance, avoiding even a single hour-long preventative maintenance routine, an improvement of less than 10%, would provide a $50,000 savings. Further, since an IIoT system would be actively monitoring the process, the risk of failure would be reduced, which would provide a savings of $10,000 for each one-percentage-point reduction in the chance of failure. These savings would be further augmented by any reductions in quality issues and in the severity of any failures that do occur. Even ignoring these harder-to-quantify savings, if the process were able to see a 20% reduction in maintenance and a two-point reduction in the chance of failure, the annual savings would be $140,000 (the arithmetic is sketched below).
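
The arithmetic behind that $140,000 figure can be reproduced under one assumption the example leaves implicit: a baseline of roughly twelve hour-long preventative maintenance routines per year (about one per month), which is what makes a 20% reduction worth $120,000. The sketch below makes that assumption explicit.

```python
# Assumptions for illustration: the article does not state the baseline number
# of maintenance routines; 12 per year is assumed here because it reproduces
# the $140,000 figure. The other values are taken from the example above.
downtime_cost_per_hour = 50_000      # $ lost per hour of downtime
failure_cost           = 1_000_000   # weighted mean cost of an abnormal failure
routines_per_year      = 12          # assumed hour-long PM routines per year

maintenance_reduction  = 0.20        # 20% fewer PM routines with monitoring
failure_prob_reduction = 0.02        # 2 percentage-point drop in failure risk

pm_savings      = routines_per_year * maintenance_reduction * downtime_cost_per_hour
failure_savings = failure_prob_reduction * failure_cost

print(f"PM savings:       ${pm_savings:,.0f}")                    # $120,000
print(f"Failure savings:  ${failure_savings:,.0f}")               # $20,000
print(f"Total annual:     ${pm_savings + failure_savings:,.0f}")  # $140,000
```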


The bottom line

This example is just one of many potential use cases associated with the increased knowledge level that comes with IIoT investments, all of which provide nothing but better information. While it may be the most intangible thing out there, information is also one of the most valuable. Decisions that can make or break an organization are made every day based on available information. Better information leads to better decisions and better decisions lead to better outcomes for the organization.

About The Author


Ryan Kershaw is a Senior Member with ISA and holds a Certified Automation Professional designation. He was in the instrumentation and process controls industry for over 20 years before moving to the IIoT sector. Ryan is from Toronto, Canada and is an Enterprise Account Executive with Litmus Automation and a board of directors member with the Canadian Process Control Association.

