10 Essential features to implement secure distributed HMI systems

By M. Taccolini, Tatsoft

Today’s technology evolution brings a new generation of software and security tools: enablers that help you implement real-time data aggregation and visualization applications more efficiently, adhere to the latest cyber security recommendations, and more easily manage multiple distinct user profile schemas.

In previous generations, it was possible to find networks with PLCs, instruments, control PCs and business applications all connected with no isolation: anyone could access anything directly. In today’s world, that is simply no longer acceptable. Likewise, connecting Server and Client applications with proprietary protocols, with no firewall in between or with many firewall ports open, is no longer acceptable.

What about providing access to external users, such as suppliers, customers or partner companies? In the past, some suppliers would simply provide their own connections directly into the network (which raises many security issues), or the company carried the burden of creating local network users. That was not a good option either: IT departments had to add users from other companies to the internal company domain…just because they needed access to one application.

Taking a step forward, current technologies let you build distributed applications in which the network layers, from the field devices to the automation systems, business applications and third parties, provide the strictest network security, suitable for critical infrastructure and national security applications, in a simple and manageable way.

Using the right approach, it is now possible to enforce good security procedures for user management and data access rights, without losing flexibility, and to greatly simplify the deployment and maintenance of applications.

This article presents some of the enabling technologies and examples of practical application areas.

Typical Requirements and Application Areas

Before listing the features, let’s review some of the typical requirements for this kind of application, with its network and deployment constraints.

Multiple User profiles need access to specific sub-sets of data; the security system should prevent unauthorized data access.

The User role, their current physical location and to which entity/company they belong, are the combined criteria to define the User security profile.

Data consolidation from multiple different data sources, including custom field-devices and protocols (PLCs, RTUs, fieldbuses), SQL databases, Historian servers, data analysis applications, MES and ERP systems.

Network security best practices are mandatory: the ports, protocols and connections used must be clearly identified and based on current standards.

Some of the application areas where this architecture is being deployed include:

Manufacturing: collecting data from multiple sites to headquarters, factory-floor Andon systems, integration of local SCADA HMIs to factory-level or enterprise-level management systems.

Oil & Gas: Local Rig monitoring (on-shore and off-shore) and data aggregation at the central office, pipeline monitoring.

Utilities: monitoring distributed assets, Pipelines, Water and Wastewater, Energy Generation and Distribution.

Batch Management: processes with dynamic production settings.

Decision Support Centers: Real-time dashboards for KPIs and Situational Awareness applications.

Features set

Some of the features presented may be critical in some projects but not applicable in other scenarios; therefore, the numbering order does not represent priority. A description of each item follows the list.

  1. Tag-Level Security
  2. Integrating Multiple User Security Models
  3. Security and Web Gateways
  4. Smart Clients
  5. Concurrent Engineering and Hot Start
  6. Built-in Redundancy
  7. Synchronization Tools, Store and Forward
  8. Built-in Communication Protocols
  9. Remote Stations and Automated Management
  10. Network Transport-layer Independence

Tag-Level Security

Apply Read and/or Write security at the Tag level, independently of the User Interface and Display security. Today, most systems allow enabling screens and input fields based on User credentials, but some lack a built-in security definition at the asset level. By leveraging Tag-Level security, even if the user has authorization to open a display, the restricted data inside that display will not be presented. The same applies when the user is selecting pens to use in Trend charts, or customizing personal dashboards.
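The idea can be sketched in a few lines. The following is a minimal, hypothetical illustration (the class, tag and role names are invented, not from any specific product): the tag server itself enforces read/write permissions, so even a display the user is allowed to open cannot show a restricted tag’s value.

```python
from dataclasses import dataclass, field

@dataclass
class Tag:
    """A process variable carrying its own read/write permissions."""
    name: str
    value: float = 0.0
    read_roles: set = field(default_factory=set)   # roles allowed to see the value
    write_roles: set = field(default_factory=set)  # roles allowed to change it

class TagServer:
    def __init__(self, tags):
        self.tags = {t.name: t for t in tags}

    def read(self, tag_name, user_roles):
        tag = self.tags[tag_name]
        # Even if the user may open the display, a restricted tag is masked.
        if tag.read_roles and not (tag.read_roles & user_roles):
            return None  # value withheld from the display
        return tag.value

    def write(self, tag_name, user_roles, value):
        tag = self.tags[tag_name]
        if not (tag.write_roles & user_roles):
            raise PermissionError(f"write denied on {tag_name}")
        tag.value = value

server = TagServer([
    Tag("Line1.Speed", 42.0,
        read_roles={"operator", "engineer"}, write_roles={"engineer"}),
])
print(server.read("Line1.Speed", {"engineer"}))  # 42.0
print(server.read("Line1.Speed", {"guest"}))     # None (restricted)
```

Because the check lives in the tag server rather than in each screen, pens in Trend charts and personal dashboards inherit the same restriction automatically.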

Integrating Multiple User Security Models

Modern applications should be able to manage multiple ways to define identification and authorization, keeping the application and data servers centralized while allowing distinct ways to get into the system. The three main security models the application should be able to handle are:

Active-Directory and Windows Authentication.

WS-Federation: this standard enables inter-companies data access, without having to create local users.

Application Internal Security: in some scenarios it is necessary or just easier to have the users defined directly at the application level.

Most importantly, the system must allow the concurrent use of the different models within the same project: some users will log in from the business network using Windows Integrated security, users from partner companies may use the WS-Federation login, while still others will enter a username and password defined at the application level.
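As a rough sketch of how the three models can coexist, the example below (hypothetical names and placeholder token checks, not a real Active Directory or WS-Federation integration) gives each security model a common authenticate() interface, so the application can route each login to whichever model the client is using:

```python
class WindowsAuth:
    def authenticate(self, credentials):
        # Placeholder: a real system would validate a Windows/AD token here.
        return credentials.get("domain_token") == "valid-ad-token"

class FederatedAuth:
    def authenticate(self, credentials):
        # Placeholder: a real system would validate a WS-Federation assertion.
        return credentials.get("federation_assertion") == "valid-assertion"

class InternalAuth:
    def __init__(self, users):
        self.users = users  # username -> password, defined at the application

    def authenticate(self, credentials):
        user = credentials.get("username")
        return self.users.get(user) == credentials.get("password")

# All three models are registered at once and used concurrently.
AUTH_MODELS = {
    "windows": WindowsAuth(),
    "federation": FederatedAuth(),
    "internal": InternalAuth({"alice": "s3cret"}),
}

def login(model, credentials):
    """Route the login attempt to the model the client is using."""
    return AUTH_MODELS[model].authenticate(credentials)

print(login("internal", {"username": "alice", "password": "s3cret"}))  # True
```

The application and data servers never need to know which model authenticated a session; they only see the resulting identity.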

Security and Web Gateways

Creating application gateways protects the server application network from the Users network. This item is a direct consequence of the previous one: managing multiple security models concurrently, as well as providing an easy way to move across the network security zones (see Figure 1, an architecture example from Tatsoft FactoryStudio). The concept is to keep the main Application and Data Servers isolated, and to deploy independent portals or gateways. The gateway manages User identification and authentication, and allows the data flow from one network security zone to the other.
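A minimal sketch of the gateway concept follows, with invented class names and an in-memory session table standing in for real authentication: only the gateway holds a path into the protected zone, and it relays requests only for authenticated callers.

```python
class IsolatedAppServer:
    """Lives in the protected zone; only the gateway can reach it."""
    def query(self, tag):
        return {"Line1.Speed": 42.0}.get(tag)

class SecurityGateway:
    """Sits between the user network and the server network: it authenticates
    the caller, then relays the request across the zone boundary."""

    def __init__(self, server, sessions):
        self.server = server
        self.sessions = sessions  # token -> user, issued at login time

    def handle(self, token, tag):
        user = self.sessions.get(token)
        if user is None:
            raise PermissionError("not authenticated")
        # Only the gateway holds a connection into the protected zone.
        return self.server.query(tag)

gateway = SecurityGateway(IsolatedAppServer(), sessions={"tok-123": "alice"})
print(gateway.handle("tok-123", "Line1.Speed"))  # 42.0
```

An unauthenticated request never reaches the isolated server; it is rejected at the gateway, in the outer zone.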

Smart Clients

In a more generic way, this item relates to the adoption of up-to-date client-side technologies. With the tools available today, it is shocking that some new HMI products still rely on pixel graphics, which are resolution dependent, instead of vector graphics, which are not.

Being able to access data from a browser is the minimum, but there is much more to consider; after all, there is a reason it is called a “smart” client. Here are some client-side features that can make a true difference in many projects:

Automatic updates of the application core, or of specific project screens, without local user action.

No installation required at the client side.

Option to automatically start the client application when the server application starts.

Handling events, custom scripts, even C#/VB code, at the client side.

Full access to computer resources: run outside the browser with native 64-bit execution. On mobile devices, smart clients allow you to use the camera, GPS and other native app features, not being restricted to browser-only capabilities.

Concurrent Engineering and Hot Start

The previous paradigm allowed multiple Users to access the runtime production project, but the project development itself was a centralized, file-based configuration, and it was necessary to shut down the system in order to update it. The modern design allows the following features:

SQL-centric multi-user configuration: instead of flat files, the project configuration is stored in a SQL database, allowing simple concurrent access from multiple users.

Hot Start: online configuration, being able to make changes on the fly, is useful, but it is not enough. Hot Start is the ability to make offline changes in the project configuration and deploy those changes without interruption of the runtime system or the client connections.
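Hot Start can be illustrated with a small sketch (hypothetical classes; a real product would also version and validate the configuration): the offline edits are assembled into a new configuration object and swapped in atomically, so the runtime and its client connections are never interrupted.

```python
import threading

class RuntimeServer:
    """Hot Start sketch: offline edits are staged in a new config object
    and swapped in atomically, without stopping the runtime."""

    def __init__(self, config):
        self._config = config
        self._lock = threading.Lock()

    def deploy(self, new_config):
        # Atomic swap: in-flight requests finish on the old configuration,
        # new requests see the new one. No restart, no dropped clients.
        with self._lock:
            self._config = new_config

    def serve(self, tag):
        with self._lock:
            config = self._config
        return config.get(tag, "undefined")

server = RuntimeServer({"Line1.Speed": "PLC1:40001"})
print(server.serve("Line1.Speed"))  # PLC1:40001
# Offline changes are deployed while the runtime keeps serving clients.
server.deploy({"Line1.Speed": "PLC1:40001", "Line1.Temp": "PLC1:40002"})
print(server.serve("Line1.Temp"))   # PLC1:40002
```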

Built-in Redundancy at Multiple Levels

Without the need to create custom development or scripts, failover hot-switches should be available at the project configuration:

Hot-Standby redundancy for the real-time tag server.

Primary and Backup device (PLC) addresses for data acquisition.

Client-side applications can automatically and transparently switch from primary to backup servers.
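The client-side failover behavior can be sketched as follows (illustrative classes only; handle() stands in for a real network call): the client tries the server it last used, then walks down the server list until one answers.

```python
class StubServer:
    """Stand-in for a real tag server; handle() replaces a network call."""
    def __init__(self, name, up=True):
        self.name, self.up = name, up

    def handle(self, payload):
        if not self.up:
            raise ConnectionError(self.name)
        return f"{self.name}:{payload}"

class FailoverClient:
    """Client that transparently fails over from primary to backup servers."""
    def __init__(self, servers):
        self.servers = servers  # ordered: primary first, then backups
        self.active = None      # server that answered most recently

    def request(self, payload):
        # Try the last-known-good server first, then the rest in order.
        order = ([self.active] if self.active else []) + \
                [s for s in self.servers if s is not self.active]
        for server in order:
            try:
                result = server.handle(payload)
                self.active = server  # stick with the server that answered
                return result
            except ConnectionError:
                continue  # transparent failover to the next server
        raise ConnectionError("no server available")

primary = StubServer("primary", up=False)
backup = StubServer("backup")
client = FailoverClient([primary, backup])
print(client.request("read-tags"))  # backup:read-tags
```

Because the failover lives inside the client library, the application code above it issues the same request either way.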

Synchronization Tools, Store and Forward

Set up local data storage at the machine level, or site level, then forward the information to a centralized location or office. Alarm and Tag Historian databases have the option to use Store and Forward, as well as automatic replication on redundant applications. The synchronization, storage and forwarding configuration is independent of which specific database is used to store the data, with an easy setup to route data across network domains and firewalls.
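A minimal store-and-forward sketch, using an embedded SQLite buffer as the local store (the schema and class are illustrative, not a vendor implementation): records stay buffered locally until the `send` callback confirms delivery to the central office.

```python
import json
import sqlite3

class StoreAndForward:
    """Buffer historian records locally; forward them when the link is up.
    Minimal sketch; a real product also handles batching and retries."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)  # local store survives link outages
        self.db.execute("CREATE TABLE IF NOT EXISTS buffer "
                        "(id INTEGER PRIMARY KEY, record TEXT)")

    def store(self, record):
        self.db.execute("INSERT INTO buffer (record) VALUES (?)",
                        (json.dumps(record),))
        self.db.commit()

    def forward(self, send):
        """Drain the buffer through `send`; keep rows that fail to transmit."""
        rows = self.db.execute(
            "SELECT id, record FROM buffer ORDER BY id").fetchall()
        sent = 0
        for row_id, record in rows:
            if send(json.loads(record)):
                self.db.execute("DELETE FROM buffer WHERE id = ?", (row_id,))
                sent += 1
            else:
                break  # link down again; remaining rows stay buffered
        self.db.commit()
        return sent

saf = StoreAndForward()
saf.store({"tag": "Line1.Speed", "value": 42.0, "ts": "2024-01-01T00:00:00Z"})
central = []  # stands in for the centralized historian
saf.forward(lambda rec: central.append(rec) or True)
print(len(central))  # 1
```

The same buffering logic works regardless of which database ultimately stores the data centrally; only the `send` callback changes.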

Built-in Communication Protocols and SQL tools

When it comes to communication protocol drivers, external OPC applications are a great solution to enable access between different applications; but within the same project, it is much easier to have direct built-in access to the typical standard protocols, without having to set up two different tools or exchange security certificates.

Having multiple independent tools would cause your management and maintenance efforts to increase exponentially in larger applications. Therefore, besides OPC, it is essential to have native support for common PLC networks (Modbus, Rockwell, Siemens, Beckhoff, Omron, Mitsubishi, Koyo, GE, National Instruments and others) and IT protocols (performance indicators, SNMP, Ping). Vertical markets may need additional built-in protocols, such as Oil & Gas Upstream (WITS) or Energy (DNP3, IEC-61850).

When considering options for data storage, it is not practical to deploy a full SQL server just to store some local data, but it is also good practice to avoid creating custom encrypted data files. The solution is to have a built-in embedded SQL database in the application itself to store the local data, and a connection with the enterprise SQL servers for the corporate data. The database connectivity tool should seamlessly support multiple standards, such as ODBC, OLEDB, SQL-client and Oracle providers, as well as native connections to custom time-series databases such as OSIsoft PI.

Remote Stations and Automated Management

This feature allows easy setup to run native data-acquisition drivers or OPC clients on remote computers, with remote control of startup settings and centralized configuration.

In this deployment scenario, the data-acquisition nodes run on another computer, or even in another network security zone, but the I/O mapping, the Start/Stop of the system and other configuration settings are managed centrally at the Application Server computer (see Figure 1, Tatsoft FactoryStudio architecture example for local and remote sites). In this scenario, a remote OPC server is not the best option: you want to keep the entire configuration in the Application Server, not in the data-acquisition node, and you do not want to open firewall ports for DCOM or manage OPC-UA security certificates.

Development frameworks that leverage technologies such as WCF can create secure connections to the active remote services, so all of this happens transparently. You just manage the configuration at the server, start and stop the server, and the system automatically manages the remotely connected stations.
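A toy model of centralized remote-station management (all names invented): the remote node stores no configuration of its own; the central Application Server pushes the I/O mapping to it and controls start and stop.

```python
class RemoteAcquisitionNode:
    """Hypothetical remote node: it holds no configuration of its own and
    just executes whatever the central server pushes to it."""
    def __init__(self):
        self.io_map = {}
        self.running = False

    def apply_config(self, io_map):
        self.io_map = dict(io_map)

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

class ApplicationServer:
    """All configuration lives here; remote nodes are managed centrally."""
    def __init__(self):
        self.nodes = {}
        self.io_map = {"Line1.Speed": "PLC1:40001"}

    def register(self, name, node):
        self.nodes[name] = node

    def start_all(self):
        for node in self.nodes.values():
            node.apply_config(self.io_map)  # pushed, never stored remotely
            node.start()

    def stop_all(self):
        for node in self.nodes.values():
            node.stop()

server = ApplicationServer()
node = RemoteAcquisitionNode()
server.register("site-a", node)
server.start_all()
print(node.running, node.io_map)  # True {'Line1.Speed': 'PLC1:40001'}
```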

Network Transport-layer Independence

Certainly, the one-size-fits-all approach does not work well for distributed systems. Some connections may require HTTP/HTTPS, others may leverage binary TCP/IP, and the encryption and authentication methods may vary. 

In some connections, only raw data is exchanged; other connections activate remote services, synchronously or asynchronously. Software tools built on Microsoft .NET WCF (Windows Communication Foundation) keep the application independent of the transport and security settings used for each connection. The exact same software can be deployed in multiple scenarios with no additional programming required.
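Transport-layer independence can be reduced to a simple idea, sketched below with illustrative transport and connection names: the application always calls the same `send()` function, while the transport and security settings for each connection live in configuration, not in code.

```python
# Each transport is a pluggable strategy; the strings returned here stand in
# for what a real stack would do (TLS handshake, binary framing, etc.).
TRANSPORTS = {
    "https": lambda payload: f"HTTPS(TLS)->{payload}",
    "tcp":   lambda payload: f"TCP(binary)->{payload}",
}

# Per-connection transport settings are configuration, not application code,
# so the same software fits every deployment scenario.
CONNECTIONS = {
    "business-portal": {"transport": "https"},
    "plant-link":      {"transport": "tcp"},
}

def send(connection, payload):
    """Application-facing call: identical regardless of transport."""
    transport = TRANSPORTS[CONNECTIONS[connection]["transport"]]
    return transport(payload)

print(send("business-portal", "kpi-update"))  # HTTPS(TLS)->kpi-update
print(send("plant-link", "tag-scan"))         # TCP(binary)->tag-scan
```

Switching a connection from HTTPS to binary TCP is a one-line configuration change; no application code is touched.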

Bottom-line

If the goal is to efficiently create flexible and secure systems, follow the most recent cyber security recommendations, easily manage updates and provide a better user experience, that cannot be properly accomplished if the Application Development Framework has its source-code foundation in legacy technologies from a decade or more ago.

It is quite simple to increase release numbers, make cosmetic changes to the user interface, or add localized specific features; but that does not mean the software tool fully leverages current technologies. That is only possible with a complete renewal of the core components, such as Tag definition, project storage, communication foundations, graphical technologies and programming languages.

For a quick evaluation of how up-to-date the tool you are using is, try to get hold of that same software as it was around 2007 and focus on the core: tag definition, multi-user ability, device communications, scripting languages, graphical technologies. If those are still very similar, the system is out of date, as many of the technologies described in this article only became available, or matured for industrial applications, after that time.

Empower yourself with updated tools, and find out how the reliability, flexibility and ease of implementation of automation projects can be 10 times greater than they were 10 years ago.

About the Author: M. Taccolini has worked with real-time software and factory-floor applications since 1986. He founded the companies Unisoft and Indusoft, and was the original designer of their product lines. He currently holds the position of CEO of Tatsoft LLC, based in Houston, TX.
