SCADA and Control System Security: New Standards Protecting Old Technology

  • June 01, 2011
  • Feature
By Scott Howard, Byres Security Inc.
Most readers will readily agree that the industry has been through a period of tremendous change over the last 10-15 years. Simply put, the design and implementation of automation and control systems have changed radically over that time.
When electronic automation devices were first deployed in industrial plants, the vast majority of sensor and control devices were stand-alone systems, or communicated only through proprietary networks or simple serial interfaces such as RS-232, RS-485, or current loop. The idea of connecting these systems to each other, or to the company’s business systems, was out of the question. Why would you even want to do that?
But then in the 1990s, Microsoft moved Windows onto a stable 32-bit platform, Windows gained Ethernet support, and TCP/IP took hold as a communications technology. This meant that control system vendors could build their products on top of an inexpensive, high-performance platform, resulting in control systems that were not only very rich and capable in their own right, but could also easily interface with the enterprise data systems that were running the business.
This sparked a revolution that is still underway. Control systems moved to TCP/IP networks in the space of only a few years. PLC and sensor vendors rushed to retrofit Ethernet interfaces and TCP/IP onto their existing designs and to introduce new products with these interfaces embedded. Windows-based software tools, such as data historians, HMIs, and DCS and MES systems, attained new levels of capability that would have seemed like science fiction only ten years earlier. And IT technologies like VPNs permitted significant cost savings through remote management and troubleshooting.
Huge productivity gains resulted from this technological revolution. In fact, for most industries, the integration of business and control systems has become a competitive necessity. However, in adopting these commercial off-the-shelf technologies, we inherited not only their advantages but also their disadvantages. The complexity of control systems has skyrocketed, so there are now many more potential points of failure, even down to simple components like cables, switches, routers, and power supplies. Malware became a threat that control systems had never faced before, and it didn't care whether a PC was attached to an enterprise network or a control network; both were infected indiscriminately. The financial and safety consequences of such an incident can be much more severe on the plant floor: there's nothing too scary about rebooting an accounting workstation, but restarting a boiler or a steel mill is a whole different matter. In addition, new threats such as Stuxnet are emerging, indicating that control networks are now becoming targets in their own right.
One other thing is certain – the competitive pressures that drove this technological revolution are not going away. In fact the safe bet is that they will only increase over time. Where are the new productivity gains going to come from? It’s hard to predict the path of innovation, but it seems clear that significant productivity gains will not continue to be realized unless cyber security improvements can catch up.
Where do we go from here?
Before network security can be implemented, the company’s security policy must be clearly defined. A typical security policy might grant a user access to the network on the following basis:
  • User & device authentication: a user and endpoint device authenticates through some kind of credential (e.g.: password or certificate)
  • Device health: the device (PC or otherwise) through which the user accesses the network must prove that it meets minimum requirements, such as up-to-date antivirus signatures and the latest OS patches, and must demonstrate that it is not infected with malware.
If the criteria defined in the security policy are met, a device and/or user will then be granted access to the specific network resources they are authorized to use, based on their role in the company. However, good security policy does not stop at this point. Once access is granted, it’s usually desirable to continue to monitor the behavior of that person and/or device to ensure that it remains in compliance while connected to the network. For example, a malware infection on a PC might cause that computer to start scanning the network to find and infect other vulnerable PCs; this type of behavior can and should be detected and blocked in an ideal security environment.
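The access decision described above can be sketched in a few lines of code. This is a minimal illustration in Python with hypothetical names (the `Endpoint` fields and `ROLE_RESOURCES` mapping are inventions for this example); a real NAC deployment would pull roles and policy from a directory service rather than a hard-coded table.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    """Hypothetical snapshot of a connecting user/device pair."""
    user_authenticated: bool      # credential check passed (password, certificate)
    av_signatures_current: bool   # antivirus signatures up to date
    os_patches_current: bool      # latest OS patches installed
    malware_detected: bool        # health scan found an infection
    role: str                     # the user's role in the company

# Hypothetical role-to-resource mapping; in practice this would come
# from a policy server such as an LDAP directory.
ROLE_RESOURCES = {
    "operator": {"historian", "hmi"},
    "engineer": {"historian", "hmi", "plc_config"},
}

def grant_access(ep: Endpoint) -> set:
    """Return the resources the endpoint may use, or an empty set."""
    healthy = (ep.av_signatures_current
               and ep.os_patches_current
               and not ep.malware_detected)
    if ep.user_authenticated and healthy:
        return ROLE_RESOURCES.get(ep.role, set())
    return set()
```

Note that this only models the admission decision; the continued behavior monitoring described above would run separately, after access is granted.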
The challenge for IT departments is how to implement and enforce this policy consistently with limited resources. Fortunately, help is on the way in the form of Network Access Control (NAC). NAC systems let IT managers apply a consistent network security policy throughout the organization by tying together policy servers (such as LDAP directories) with policy enforcement devices (e.g., firewalls, VPNs, and security-enabled Ethernet switches).
Open Standards Provide Flexible Solutions
While proprietary NAC offerings can be effective, the non-profit Trusted Computing Group takes NAC a step further by defining a set of open, vendor-neutral standards called Trusted Network Connect (TNC). TNC defines standard interfaces that implement all the key requirements for effective NAC: user authentication, device health checking, and device behavior monitoring. TNC interfaces are already supported by a growing ecosystem of vendors, including major industry players such as Microsoft, Juniper Networks, and Infoblox, as well as many others.
Figure 1: Trusted Network Connect defines standard interfaces between the key building blocks of a NAC architecture
One of the key interfaces defined by TNC is called IF-MAP. IF-MAP defines a protocol that allows devices to 'publish' security event data, and to 'subscribe' to events published by others, through a Metadata Access Point (MAP) server. The MAP server acts as a central clearing house for security information between diverse systems from multiple vendors, including physical security (access control) systems, cyber security devices like firewalls and routers, wireless access points that can identify device location, intrusion detection systems that can report unusual patterns of device behavior, and data sensors that can detect leakage of sensitive business data.
This is a huge step forward in promoting device interoperability, because device vendors only have to implement a single interface to the MAP server rather than a custom interface to each unique protocol or device that a customer may want to use in their system.
Figure 2: the MAP server is the central 'clearing house' for security-related event information in TNC.
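The clearing-house idea can be illustrated with a toy publish/subscribe model. This is only a sketch of the data flow; the real IF-MAP protocol is an XML-based exchange with its own metadata schema, and the class and topic names here are hypothetical.

```python
from collections import defaultdict

class MapServer:
    """Toy clearing house, loosely modelled on the role a Metadata
    Access Point (MAP) server plays in IF-MAP. Publishers and
    subscribers only need to speak to this one hub, not to each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks
        self._metadata = defaultdict(list)     # topic -> published events

    def subscribe(self, topic, callback):
        """Register interest in a topic (e.g. an IDS watching endpoint health)."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        """Store the event and notify every subscriber to that topic."""
        self._metadata[topic].append(event)
        for cb in self._subscribers[topic]:
            cb(event)

# A firewall publishes a health event; a subscribed intrusion detection
# system sees it immediately, without any firewall-to-IDS integration.
mapd = MapServer()
seen = []
mapd.subscribe("endpoint-health", seen.append)
mapd.publish("endpoint-health", {"device": "hmi-03", "status": "infected"})
```

The design point this illustrates is the one made above: each vendor writes one interface, to the hub, instead of a pairwise interface to every other device.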
Since TNC is based on open standards, network operators can build application-specific solutions very quickly and easily by assembling subsystems from multiple vendors. By basing products on an open standard, vendors can contribute equipment that focuses on their area of expertise, creating solutions that are much more capable and flexible than could be achieved by one vendor attempting to span multiple application areas with their own proprietary solution. This is one of the key reasons why so many major vendors now support TNC.
So TNC and MAP are open standards - how does this help? Consider the case where a user is accessing the network remotely through a Virtual Private Network (VPN) connection; this is all well and good, because an established security policy permits this user to do so. But then something strange happens – the building access control system reports that the same user just scanned his badge on a card reader to gain access to a control room in the plant. Although the user is permitted access to the control room, he clearly cannot be in two places (remote access via VPN, and physically present in the control room) at the same time. If we build a NAC solution based on TNC-compliant products, then our access control system and our VPN server are both reporting security events through the same MAP server, where we can easily detect these conflicting conditions and respond appropriately. This type of system can be built today with TNC- and IF-MAP-compliant products.
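The correlation rule in that scenario is simple once both systems report through the same hub. A minimal sketch, assuming hypothetical event tuples of the form `(user, source)` as they might be collected from a MAP server:

```python
def conflicting_presence(events):
    """Flag users who appear both remotely (VPN) and physically (badge).

    events: iterable of (user, source) tuples, where source is a
    hypothetical label such as "vpn" or "badge"."""
    vpn_users = {user for user, source in events if source == "vpn"}
    badge_users = {user for user, source in events if source == "badge"}
    # A user in both sets claims to be in two places at once.
    return vpn_users & badge_users

# 'alice' is connected via VPN and has also badged into the control room.
events = [("alice", "vpn"), ("bob", "badge"), ("alice", "badge")]
suspicious = conflicting_presence(events)
```

In a real deployment the response to such a conflict (terminating the VPN session, alerting an operator) would be driven by the site's security policy.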
This concept of open standards is also what makes MAP and TNC really interesting for control system engineers. TNC and MAP are the first NAC implementations that make it feasible to adapt to control networks in addition to IT and enterprise systems. Byres Security is working with other TCG members, as well as our partners and major customers in the automation and control industries, to adapt these technologies to the special requirements of automation and process control networks. This opens the possibility of using one set of standards and technologies to manage security for both IT and control networks, which would offer further cost and productivity improvements.
The Road Ahead
While there will likely be little relief from the relentless competitive pressures driving these changes, new technologies based on open standards like TNC and IF-MAP will help ensure that the security and reliability of automation and control systems can keep pace with the business requirements. Check out the TNC web site to learn how TNC and IF-MAP solutions can help improve and automate the security of your network.
About the Author:
Scott Howard is technical sales manager for Byres Security Inc. and an active member of the Trusted Computing Group. His career in product development spans almost 20 years, during which time he developed embedded systems for consumer, industrial, networking, communications, public safety and SCADA applications. He also has over 10 years' experience in technical sales, marketing and support roles with companies such as Motorola (now Freescale) Semiconductor and Wind River Systems. Scott is a graduate of the electrical/electronics technology program at Camosun College in Victoria, BC Canada.
