Manufacturing Intelligence Solutions - Users' Perspectives & Plans

  • May 09, 2014
  • Feature

Perspectives from Pharmaceutical Automation Leaders – PAR Article Series, Part 3

By Bill Lydon, Editor

Pharmaceutical automation leaders from around the world gathered for the annual Pharmaceutical Automation Roundtable (PAR) in Copenhagen, Denmark to discuss a number of automation challenges facing their companies. While the context was specific to the pharmaceutical industry, these challenges are certainly applicable to all industry segments. The topic of this article is manufacturing intelligence solutions.

Manufacturing Intelligence (MI)

Manufacturing Intelligence (MI) systems allow the aggregation and subsequent visualization and analysis of data to drive business decisions. The presenter described how his company has accumulated multiple types of manufacturing intelligence solutions over time, partly through acquisitions. The company's goal is to have one core MI application with a single support model; today it has a wide variety of aggregation and visualization tools across the network. Activities are very much site driven, ranging from best-in-class sites to sites with no installed aggregation package.

The MI vision is that users should be “2 clicks away,” using a tool to browse to the data. The company is currently consolidating the number of packages and systems to deliver the best business value, as determined by cost of ownership and ease of use. Data collection, aggregation, and transformation are the foundation for all data-driven projects and business activities. The biggest complaint from plant sites is that they can't get access to data because historian tools are cumbersome and difficult to use.

The goal is to make the data available to the right people (with proper access control) using web browsers through the company intranet. This will provide links to knowledge management tools and contextualized data from various applications, including SAP, LIMS, QTS, Historians, etc. More accessible data will be used to improve operations; for example, relevant data would allow analytics colleagues to build models. Another example is making data available to continuous improvement champions within the company. Ad hoc reporting is an important functionality and will enable people to focus on specific issues and investigations.

Achieving standard data structures will require developing common naming conventions and/or data mappings. Currently, plant sites use different identifier terminology conventions.
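
A minimal sketch of such a mapping, assuming hypothetical site and tag names (one site might say "charge" where another says "dose" for the same quantity), could look like this:

```python
# Sketch of mapping site-local identifiers to a shared canonical
# vocabulary. Site names, tags, and canonical terms are hypothetical.
SITE_ALIASES = {
    "site_a": {"charge": "batch_dose", "temp_pv": "reactor_temperature"},
    "site_b": {"dose": "batch_dose", "rx_temp": "reactor_temperature"},
}

def to_canonical(site: str, tag: str) -> str:
    """Translate a site-local tag to the canonical name; pass through unknowns."""
    return SITE_ALIASES.get(site, {}).get(tag, tag)
```

With such a table in place, cross-site queries and reports can be written once against the canonical names rather than once per site.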

The company has defined the following use cases with representatives from each business unit:

Manual Data Entry (MDE)

This is defined as a flexible application that can be used to store both GMP (good manufacturing practices) and non-GMP data through manual data entry. The application would be 21 CFR Part 11 compliant, with the option not to apply all security aspects for non-GMP data. Data entered into the application must be immediately available for retrieval. The system should be able to distinguish between data that has only been entered (and not verified) and data that has been verified through either visual or double-blind verification.

Data Integration – Real Time (DI-RT)

This is defined as an application that can directly connect to a variety of data sources, including SAP, Oracle, SQL Server, and data historians. The application should extract the data, join it with other data sources, perform calculations, and transform the data into an alternative display format for presentation to the user in real time.

Data Integration – Extract Transform Load (DI-ETL)

This is defined as an application that can connect to a variety of data sources, including SAP, Oracle, SQL Server, and data historians. The application should extract the data, transform it into an alternative format, and store it in a database table. ETL procedures must support both scheduled execution and on-demand triggering.
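
As a rough illustration of the DI-ETL pattern, the sketch below extracts readings from a source table, converts units, and loads the result into a warehouse table. It uses an in-memory SQLite database as a stand-in for the real sources (SAP, Oracle, historians); all table and column names are hypothetical.

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Extract raw readings, transform units, load into the warehouse table."""
    cur = conn.cursor()
    # Extract: raw historian readings, stored in degrees Fahrenheit
    rows = cur.execute("SELECT batch_id, temp_f FROM raw_readings").fetchall()
    # Transform: convert to Celsius and round for reporting
    transformed = [(b, round((t - 32.0) * 5.0 / 9.0, 2)) for b, t in rows]
    # Load: write into the warehouse table
    cur.executemany("INSERT INTO warehouse_readings VALUES (?, ?)", transformed)
    conn.commit()
    return len(transformed)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_readings (batch_id TEXT, temp_f REAL)")
conn.execute("CREATE TABLE warehouse_readings (batch_id TEXT, temp_c REAL)")
conn.executemany("INSERT INTO raw_readings VALUES (?, ?)",
                 [("B001", 212.0), ("B002", 32.0)])
run_etl(conn)
```

The "scheduled and triggered on demand" requirement maps naturally onto such a design: the same job can be invoked by a scheduler (cron or an enterprise equivalent) and exposed as a callable for ad hoc runs.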

Data Retrieval (DR)

This is defined as an application that is part of or can connect to the data integration application(s). It should provide an easy-to-use graphical user interface so users can query the real-time and stored data retrieved from the data sources, perform calculations, and save their views. The application must allow read-only connections from other applications to the objects it contains.

Annual Product Quality Report (APQR)

This is defined as an application that allows template reports to be built. It should offer automatic population of raw data, calculations, and charts (including statistical control charts) based on a defined date or batch range. The reports must include information on the selection criteria for the batches contained in the report. The application must be accessible to multiple users so they can share a single set of template reports. It must also be capable of automatically extracting the required data from an outside system.

Ad-hoc Statistical Analysis & Diagnostics – Univariate (UVA)

This is defined as an application that contains a wide variety of statistical (e.g. SPC) and diagnostic (e.g. T-tests) analyses and visualizations that can be performed on a single variable. The application must be accessible to multiple users so they can share charts and analyses. It must automatically extract the required data from an outside system. The application should store user annotations and allow vertical lines on graphs to indicate x-variable groupings. The application must also be capable of calculating and/or applying separate means and limits based on x-variable groupings.
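
The core of such a tool can be sketched in a few lines: derive control limits from baseline data and flag points outside them. This is a minimal individuals-chart illustration with limits at mean ± 3 sigma and hypothetical data; a production tool would typically estimate sigma from moving ranges and implement the full rule set.

```python
import statistics

def control_limits(baseline):
    """Return (LCL, center, UCL) at mean +/- 3 sigma from baseline data."""
    mean = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean, mean + 3 * sigma

def out_of_control(baseline, new_points):
    """Indices of new points falling outside the baseline control limits."""
    lcl, _, ucl = control_limits(baseline)
    return [i for i, v in enumerate(new_points) if v < lcl or v > ucl]
```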

Automated Process Monitoring – Univariate (UVM)

This is defined as an application that can connect to outside data sources and automatically retrieve data for use in statistical analyses and visualizations that are configured for routine use. The application should track statistical rules violations and schedule analyses for independent execution. It should then send notifications if the most recent analysis execution contains new statistical rules violations. The application must be accessible to multiple users, store user annotations and allow vertical lines on graphs to indicate x-variable groupings. The application should also be capable of calculating and/or applying separate means and limits based on x-variable groupings.
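
The notify-only-on-new-violations behavior described above amounts to diffing each run's detected violations against those already reported. The sketch below uses two illustrative rules; a real monitor would implement the full Western Electric/Nelson rule set, and all names here are hypothetical.

```python
def check_rules(values, mean, sigma):
    """Return a set of (index, rule) violations for two illustrative rules."""
    violations = set()
    # Rule 1: a point beyond mean +/- 3 sigma
    for i, v in enumerate(values):
        if abs(v - mean) > 3 * sigma:
            violations.add((i, "beyond_3_sigma"))
    # Rule 2: eight consecutive points on the same side of the mean
    for i in range(len(values) - 7):
        window = values[i:i + 8]
        if all(v > mean for v in window) or all(v < mean for v in window):
            violations.add((i, "run_of_8"))
    return violations

def new_violations(values, mean, sigma, already_notified):
    """Return only violations not yet notified; record them as notified."""
    current = check_rules(values, mean, sigma)
    fresh = current - already_notified
    already_notified |= current
    return fresh
```

On each scheduled execution, only the `fresh` set would be routed to the notification system, so users are not re-alerted for known conditions.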

Online Scorecards & Dashboards (OSD)

This is defined as an application that can be configured to display simple scorecards and dashboards (e.g. traffic lights). The application must automatically extract the required data from an outside system and must be accessible to multiple users. A single screen on the application must display multiple dashboards sourced from independent data sources.
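
A traffic-light scorecard of this kind reduces to mapping each KPI value to a color against thresholds, and composing one screen from several independent sources. A sketch with hypothetical KPIs and thresholds:

```python
def traffic_light(value, green_max, amber_max):
    """Lower is better: green up to green_max, amber up to amber_max, else red."""
    if value <= green_max:
        return "green"
    if value <= amber_max:
        return "amber"
    return "red"

def render_screen(sources):
    """sources: {name: (fetch_fn, green_max, amber_max)} -> one dashboard dict."""
    return {name: traffic_light(fetch(), g, a)
            for name, (fetch, g, a) in sources.items()}
```

Each `fetch_fn` stands in for a connector to an independent data source, which is what allows one screen to show dashboards fed by separate systems.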

Ad-hoc Statistical Analysis & Diagnostics – Multivariate (MVA)

This is defined as an application for performing multivariate statistical analyses on large sets of manufacturing data. Based on the analysis, it should generate multivariate statistical process models. The application will be used to increase process understanding, assist in process troubleshooting, and develop models for on-line multivariate analysis use (e.g. condition monitoring, soft sensors, advanced control). The application must allow for analysis of both batch and continuous processes and provide efficient methods to access, organize and process data from external sources. The application must also allow analyses to be easily shared between users.

Automated Process Monitoring – Multivariate (MVM)

This is defined as an application to perform multivariate statistical process monitoring for batch and continuous processes. The application must apply multivariate statistical process models (created within the application or with a separate MVA application) to real-time data coming from the process. It should show how the process is performing and predict key process quality attributes. The application must alert users in real time when a process deviation is detected or when predicted outcomes will fall outside an acceptable range. The application must allow users, with minimal training, to easily determine which process variables are driving detected variations. The ideal application will include the ability to directly control the process to achieve desired process outcomes.
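
One classical basis for this kind of monitoring is Hotelling's T-squared statistic, which measures how far a new observation sits from the reference data while accounting for correlation between variables. The sketch below is a two-variable, pure-Python illustration with hypothetical data; real MVM packages build PCA/PLS models and derive proper statistical control limits.

```python
# Hotelling's T-squared sketch for two process variables.
def mean_vec(data):
    n = len(data)
    return [sum(row[j] for row in data) / n for j in range(2)]

def cov2x2(data, m):
    """Sample covariance matrix of two variables."""
    n = len(data)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for row in data:
        d = [row[0] - m[0], row[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return [[s[i][j] / (n - 1) for j in range(2)] for i in range(2)]

def t_squared(x, m, s):
    """T^2 = d' S^-1 d, with S a 2x2 covariance matrix."""
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    d = [x[0] - m[0], x[1] - m[1]]
    y = [inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1]]
    return d[0] * y[0] + d[1] * y[1]

# Hypothetical in-control reference data for two correlated variables
reference = [(1.0, 2.0), (1.2, 2.1), (0.8, 1.9), (1.1, 1.8), (0.9, 2.2)]
m = mean_vec(reference)
s = cov2x2(reference, m)
```

A monitoring loop would compute `t_squared` for each new observation and alert when it exceeds the control limit; diagnosing which variables drive an excursion is then done with contribution plots in real tools.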

The group mapped out all of their needs and asked vendors if they could meet the requirements. None of the vendors could meet all of the requirements.

They defined a pilot project involving three sites. In many cases, it now takes about six weeks to collect the data and build a report. If they could cut just four weeks of that data aggregation time, they could justify the investment.

They are building a manufacturing data warehouse that everyone can access. One of the challenges is defining the difference between MI and BI (business intelligence); there is significant overlap. A single vendor was selected for the pilots for all use cases except multivariate monitoring (MVM).

Discussion Questions

Does your company prefer to warehouse data or perform direct access where possible?

PAR participant comments:

Data sources have been consolidated into one common database system where users can easily access the data and put it to use.

At the corporate IT level, there is a data warehouse for the environment system, LIMS, etc. At the manufacturing level, there are various systems with their own databases.

Sometimes we use data warehouses and sometimes we don’t. There is a concern about lowering the performance of real-time systems directly connected to the data warehouse. In those situations, they set up an intermediate data warehouse. Sometimes this is also related to lowering licensing costs.

We are not looking to standardize on a single application, but rather to select the system that best fits each need.

We don’t connect historians to controllers; the business case is difficult to make.

One of the major drivers at sites has been to have data for the CMMS (computerized maintenance management system) to improve maintenance efficiency.

Six Sigma programs have pushed us to capture more data.

Our company has a graveyard of data warehouse initiatives that have died. The business reason for building it was not clear and the technologies were not ready at the time. Data quality was another big issue at the time.

The data warehouse is the best method for analysis. The alternative of using direct access puts source systems at risk. If you have many people doing queries against your historian, the source system operation will be degraded quickly.

The data is now more available but the view of it isn’t very good. This is probably our greatest weakness. We have found no good tool to combine the data from the various sources in a way that makes sense to the user.

Data warehouses have started to grow, but we haven’t taken them down to the site level yet. Sites are starting to satisfy their requirements with a myriad of different solutions due to the long wait for data warehouses to be rolled out to the sites. The corporate initiatives have simply taken too long.

Our process automation systems are all connected to the data historians. Any data we call from the site is coming from the plant data historian.

Did you actively try to standardize on a single MI application for Data Collection, Aggregation & Transformation (DCAT) and Data Analysis & Visualization (DAV)?

PAR participant comments:

We have not standardized on a single application; we are using 2 to 3 today and are adding more. We have not found one package that meets all the needs.

We have created a new group - manufacturing intelligence and technology - because users are demanding more data. The goal is a data warehouse, but the solution is not identified.

We started by installing historians at every site. Looking back at the data, we determined we were not aligned; sites had different structure and content. For example, some people call it “charge” and some call it “dose.”

What business practices, e.g. process monitoring, CPV, APQR, have you automated or are considering automating? Which provided the most significant contribution to the project ROI?

PAR participant comments:

Consolidation of data into one common database has accelerated process improvement. Lean and OEE initiatives are a focus and driver for facility performance, which illustrates the added value of automation systems that provide this data.

We are automating business practices for process monitoring. The automation of the annual product quality reporting that used to take several weeks of manpower has been a big savings.

There is a cross-site initiative to compare information for trends, help with investigations, and drive improvements.

Understanding the business requirements is the biggest hurdle.

In a lot of cases, it’s hard to justify electronic batch records.

What do you see as the biggest difficulties/hurdles in achieving a full MI solution at your company?

PAR participant comments:

The overall boundary between business and automation is fuzzy.

When bringing an MI solution into an existing site, we have issues around integration of historical data and data stewardship.

It’s difficult to get people to move away from user-friendly spreadsheets and replicate that functionality in an easy-to-use visualization tool.

Recently, more and more people are comfortable using the historian client for visualization. They are requesting all kinds of data to be put into the historian so they can bring it into their analyses and reports.

We seem to have money to buy these systems, but you need a staff to make them useful.

Data quality control is very critical. It’s an automation responsibility to get users to trust the system.

About the PAR Meetings and this Article Series

Every year, I have the opportunity to attend the Pharmaceutical Automation Roundtable (PAR) meetings as the only outside observer. Last year’s meetings were held in September 2013 at Novo Nordisk A/S facilities in Copenhagen, Denmark. Lead automation engineers from around the world attended this invitation-only, two-day event. This group of engineers has a wealth of practical knowledge and is willing to share it with the other participants - truly learning from each other. The PAR meetings bring together one of the most knowledgeable groups of automation professionals in one place to discuss automation issues. This year, the participating companies included Amgen, Biogen Idec, J&J, Eli Lilly, NNE, Novartis, Novo Nordisk, Pfizer, and Sanofi-Aventis. The PAR meetings consist of presentations given by PAR members on specific automation topics, after which other members comment on their experiences, ideas, and challenges relating to the topics. This article series presents a summary of those conversations, with each article highlighting one or more of the topics covered at the PAR meetings. Comments by specific PAR members are reported anonymously.

About PAR

PAR was founded about 15 years ago by Dave Adler and John Krenzke, both with Eli Lilly and Company. At the time, the purpose of the roundtable was to provide a means of benchmarking and sharing best practices for automation groups among peer pharmaceutical companies. The group specifically does not discuss confidential or proprietary information, cost or price of products, price or other terms of supply contracts, plans to do business or not do business with specific suppliers, contractors, or other companies.
