Understanding Microsoft's .NET Technology & Its Impact on Automation Application Development

Technology is an evolution, not a revolution. Sometimes we see advancements take place quickly, but when viewed from a high altitude vantage point, it’s often easier to see and understand the thread of technological evolution rather than the discrete events related to specific advancements. This paper will take just such a high altitude look at the development of Microsoft’s .NET technology with the intent of helping industrial automation users understand the context of .NET and what it might be able to do for users in the future.

 

Perhaps the best way to examine technology advancements is to look at them from the user’s perspective. What does the technology provide that helps us better solve a problem? A transportation analogy may be a good example. People were self-propelled initially, walking wherever they needed to go. With the creation of the wheel and the domestication of animals, we had improvements in the form of horses pulling chariots and buggies. At some point, we moved up to bicycles, boats and trains. Now we have airplanes, rockets and space vehicles.

 

But the user problem remained the same. The point was to move people and/or goods from point A to point B as efficiently as possible. At a high level, the user problem remains unchanged, but the technology applied to it continues to evolve and deliver ever increasing user benefits.

 

If you think about telephones, the same pattern is seen. The basic architecture of a phone has stayed pretty much the same since its invention in the 19th century. A phone consists of a handset and a base. Multiple phones are connected to an exchange, and exchanges are connected to each other in networks. Over the last century we’ve moved from wired to wireless systems, which has provided a new user benefit in terms of mobility. We’ve advanced from copper wires to fiber optics and satellite channels, which have increased communications speed and reliability. But the primary user benefit has remained – people can communicate.

 

What this shows is that architectures influence technological evolution. Architectural design defines how work is partitioned among components, shaping both the domain requirements and the enabling technologies. A good architecture allows individual components to adapt quickly to evolving technologies and stretches the life span of the overall system, thus protecting user investments in the technology.

 

Started With Mainframes

Nowhere is this more evident than in the evolution of computing. By technological necessity, when computing began in earnest in the 1950s, mainframes predominated and people accessed applications and data via dumb terminals attached to the central system. IBM ruled the world, until minicomputers came on the scene in the late 1960s and 1970s, with the likes of Digital Equipment and Data General. They were simply smaller packages of computing capability, designed for use by engineers, scientists and departmental workgroups. By the late 1970s and into the 1980s, the personal computer entered the workplace. Desktop computing took the world by storm, enabling individuals to work on their own and to link to other individuals as needed, via networks. Intel® and Microsoft ruled the world. And today, we see technology advancements carrying this individuality to its ultimate expression in handheld computers, PDAs and Internet-enabled wireless phones.

 

Throughout this evolution, which often looks like a revolution, the application of these computing technologies has remained the same: the user is able to compute. Now the problem becomes one of managing this ability so that it's the most efficient process possible. That's the big challenge for information technology in the Internet age. If people are to compute with any application, from any location, at any time and from any device, the infrastructure requirements multiply to immense proportions. Total cost of ownership becomes a serious issue because the administrative burden grows larger than the technological one. If everyone is using a rich application set, then all of those applications must be managed individually. The solution is the deployment of "thin client" technology, which brings us almost full circle to the mainframe era of the 1950s and 1960s. With thin client computing, the software and applications that require maintenance are hosted on network servers rather than installed and administered by end users, who simply use the applications from whatever computing device they wish. The centralized servers and the software they host are maintained by experts at that central site, such as corporate IT departments or application service providers on the Internet. As examples, think of Internet services such as MapQuest™, for finding directions and maps, or TurboTax®, for completing income tax forms online.

 

How Thin is Thin?

"Thin" computing is when:

  • There is no data at the "user" end of the application

  • There is no operational data at the client end

  • There is no application configuration on the client side

  • There is no "pre-installed" application software on the client side

  • There is no application software on the client side

  • There is no large operating system on the client side

This emphasis on thin clients doesn’t necessarily mean that thick clients are going away, however. In the foreseeable future, many applications will still require a level of richness of functionality that is best achieved with thick clients. These will likely include applications that are targeted for expert users. But rising acceptance of the Internet as a backbone means that thin client computing will become the next wave in the (r)evolution.

 

What is the .NET platform?

As indicated already, today's Internet largely mirrors the old mainframe model. Despite plentiful bandwidth, information is still maintained in centralized databases, with "gatekeepers" controlling access to it. Users must rely on web servers to perform all operations, much like the old timesharing model. The problem with this is that web sites are isolated islands that cannot communicate with each other in any meaningful way. Web servers do little more than serve up individual pages to individual users, pages that mostly present Hypertext Markup Language (HTML) "pictures" of data, but not the data itself. The web browser is in many respects a glorified read-only dumb terminal: you can easily browse information, but it is difficult to edit, analyze or manipulate it, which is really what people want to do with it.

 

These problems are only multiplied if you use more than one PC or mobile device. To access your online information, email, offline files and other data, you have to struggle with multiple, often incompatible interfaces, with varying levels of data access, and with only intermittent synchronization of all the information you need. For the web developer, the tools to build, test and deploy sites are less than adequate. Many focus more on building attractive web sites than useful ones. None of them addresses the entire software lifecycle, from design to development to deployment to maintenance, in a consistent and efficient way. And no system today lets developers write code for the PC and deploy it to a variety of devices.

 

Corporate users face additional challenges. While the advent of server "farms" has made systems more reliable by eliminating single points of failure, it has also made system management more complex. Performance measurement, capacity planning and operations management are all challenging on multi-tier, multi-function web sites. And new e-commerce systems rarely interoperate well with legacy business systems.

 

The fundamental idea behind Microsoft’s .NET initiative is that the focus is shifting from individual web sites, or devices connected to the Internet, to "constellations" of computers, devices and services that work together to deliver broader, richer solutions. The intent is to give users control over how, when and what information is delivered to them. Computers, devices and services will be able to collaborate with each other to provide rich services, instead of being isolated islands where the user provides the only integration.

 

.NET is intended to help drive a transformation of the Internet in which HTML-based presentation is augmented by programmable, XML-based information. XML, the eXtensible Markup Language, is a widely supported industry standard defined by the World Wide Web Consortium (W3C), the same organization that created the standards for the web browser. XML provides a means of separating the actual data from the presentational view of that data. It is key to providing a way to unlock information so that it can be organized, programmed and edited; a way to distribute data in more useful forms to a variety of digital devices; and a way to allow web sites to collaborate and provide a constellation of web services that can interact with one another.
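As a small illustration of the difference, consider the following C# sketch, which uses the .NET XML classes to treat a document as structured data rather than as a rendered page (the document and element names are hypothetical):

// XmlDataDemo.cs (illustrative sketch; element names are hypothetical)
using System;
using System.Xml;

class XmlDataDemo
{
    static void Main()
    {
        // The same information an HTML page would only display as text
        // arrives here as structured, self-describing data.
        string orderXml =
            "<Order>" +
            "<Customer>Acme Valve Co.</Customer>" +
            "<Item code=\"PV-101\">Pressure Valve</Item>" +
            "<Quantity>12</Quantity>" +
            "</Order>";

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(orderXml);

        // Because the data is tagged, a program can select and edit it
        // directly instead of scraping a rendered page.
        XmlNode quantity = doc.SelectSingleNode("/Order/Quantity");
        Console.WriteLine("Quantity ordered: " + quantity.InnerText);
    }
}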

 

This is a major evolutionary change in computing for the "Wintel" architecture, since the Windows operating system isn't optimized for thin client computing. It embodies the thick client model, with all of the software installed on every PC. This architecture is based on Microsoft's Component Object Model (COM), which isn't inherently a distributed architecture. COM presents scalability issues, and even Distributed COM (DCOM) is something of an afterthought. It's too complex for mobile or embedded computing, and it isn't lightweight enough for use with Windows CE, the Microsoft platform for handheld computers.

 

The challenge for Microsoft has been to respond cleverly to the impact of the Internet. The competitive challenge has been difficult as well, with offerings such as the Java programming language from Sun Microsystems, the Palm operating system for handhelds, the Internet strength of AOL and Netscape, and the UNIX-based power of Oracle databases and applications. Add to that the rise of new open standards such as Java, the Hypertext Transfer Protocol (HTTP) and XML.

 

.NET is Microsoft’s answer to these challenges and, according to Microsoft Chairman Bill Gates, it’s "as significant as the move from DOS to Windows." .NET will fundamentally change the way computers and users interact. It’s intended to bring employees, customers, data and business applications into a coherent and intelligently interactive whole, so that business can benefit from radically increased efficiency and productivity.

 

On a practical basis, this means that previously complex tasks such as moving a purchase order from customer A to vendor B will be able to:

  • Employ a common language for messages

  • Use a common message content structure

  • Have the ability to send and receive messages easily, and

  • Have the ability to process messages within the context of business processes

And users will be able to do all this while using completely different computer systems.
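As a hedged sketch of what such a common message might look like (the element names and namespace are invented for illustration and do not represent an actual industry schema), the purchase order is expressed once in XML, and any system on either end that understands the agreed structure can produce or consume it:

// PurchaseOrderMessage.cs (illustrative sketch; schema and names are invented)
using System;
using System.Xml;

class PurchaseOrderMessage
{
    static void Main()
    {
        // Write a purchase-order message in an agreed, self-describing format.
        // Customer A's system emits it; vendor B's system parses the same tags,
        // regardless of the platforms involved.
        XmlTextWriter writer = new XmlTextWriter(Console.Out);
        writer.Formatting = Formatting.Indented;

        writer.WriteStartElement("PurchaseOrder", "urn:example:purchase-order");
        writer.WriteElementString("Number", "PO-1001");
        writer.WriteElementString("Vendor", "Vendor B");
        writer.WriteStartElement("Line");
        writer.WriteAttributeString("item", "PV-101");
        writer.WriteString("12");
        writer.WriteEndElement();   // </Line>
        writer.WriteEndElement();   // </PurchaseOrder>
        writer.Close();
    }
}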

 

Basic .NET Elements

There are four primary elements to the success of .NET in creating a constellation of Web Services:

  • XML, the eXtensible Markup Language that is the universal data medium for message content

  • HTTP, the Hypertext Transfer Protocol that is the communications pipeline

  • UDDI, Universal Description, Discovery and Integration, a B2B Yellow Pages-style directory that allows companies to locate business partners

  • SOAP, the Simple Object Access Protocol that defines how each Web Service interacts with the others

XML will be the common language of .NET. It already provides a universal data exchange format, and it's multi-platform in nature, so it isn't tied to any particular operating system or hardware. Its benefits are numerous. As an example, consider a conversation between two people who both speak English but come from different backgrounds, say a software engineer and a lawyer. While they both speak English, the technical vocabularies of their different professions may make it difficult for them to actually communicate with each other.

 

An XML schema is the solution for a situation like this, since it provides a common vocabulary (VML, for example) and a common grammar (XML) that both parties understand.
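As a sketch of how an agreed vocabulary can be enforced in code (the file names and namespace below are placeholders rather than a real published schema), the .NET Framework's validating reader checks a document against an XML Schema as the document is read:

// ValidateOrder.cs (illustrative sketch; file names and namespace are placeholders)
using System;
using System.Xml;
using System.Xml.Schema;

class ValidateOrder
{
    static void Main()
    {
        // Read the document through a validating reader so that any element
        // or attribute outside the agreed vocabulary is reported.
        XmlTextReader raw = new XmlTextReader("order.xml");
        XmlValidatingReader reader = new XmlValidatingReader(raw);
        reader.ValidationType = ValidationType.Schema;
        reader.Schemas.Add("urn:example:purchase-order", "order.xsd");
        reader.ValidationEventHandler +=
            new ValidationEventHandler(OnValidationError);

        while (reader.Read())
        {
            // Nothing to do here; validation happens as the document is read.
        }
        reader.Close();
        Console.WriteLine("Validation pass complete.");
    }

    static void OnValidationError(object sender, ValidationEventArgs e)
    {
        Console.WriteLine("Schema problem: " + e.Message);
    }
}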

 

The basis for the common messaging system is SOAP, a protocol that has already been submitted by IBM, Microsoft and other companies to the W3C. SOAP uses HTTP as its transport mechanism. Since HTTP is already a widely adopted standard, it's usable on multiple hardware platforms and it scales all the way from handhelds to mainframes. XML is used to carry the content, or data, on top of the HTTP transport.

 

Here’s a simple example of a SOAP request for stock quote data. The top portion is the HTTP header; HTTP is a request/response protocol, and this is typical of what a client would post to a server in order to get a response back. The Envelope element is the content envelope defined by the SOAP protocol, and the Body that follows identifies the object, and the method on it, that will actually process the request.

POST /StockQuote HTTP/1.1
Host: www.stockquoteserver.com
Content-Type: text/xml; charset="utf-8"
Content-Length: 323
SOAPAction: Some-Namespace-URI#GetLastTradePrice

<SQ:Envelope
    xmlns:SQ="http://schemas.xmlsoap.org/soap/envelope/"
    SQ:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SQ:Body>
    <m:GetLastTradePrice xmlns:m="Some-Namespace-URI">
      <symbol>DIS</symbol>
    </m:GetLastTradePrice>
  </SQ:Body>
</SQ:Envelope>

Here’s the response that was returned to provide the stock price quote, again using the SOAP protocol:

HTTP/1.1 200 OK
Content-Type: text/xml; charset="utf-8"
Content-Length: nnnn

<SP:Envelope
    xmlns:SP="http://schemas.xmlsoap.org/soap/envelope/"
    SP:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SP:Body>
    <m:GetLastTradePriceResponse xmlns:m="Some-Namespace-URI">
      <Price>34.5</Price>
    </m:GetLastTradePriceResponse>
  </SP:Body>
</SP:Envelope>
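To make the exchange concrete from the client side, here is a hedged C# sketch that posts the request shown above over HTTP and prints the XML response (the URL, SOAPAction value and payload simply mirror the example; error handling is omitted):

// SoapClientSketch.cs (illustrative sketch mirroring the example above)
using System;
using System.IO;
using System.Net;
using System.Text;

class SoapClientSketch
{
    static void Main()
    {
        string body =
            "<SQ:Envelope xmlns:SQ=\"http://schemas.xmlsoap.org/soap/envelope/\" " +
            "SQ:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">" +
            "<SQ:Body><m:GetLastTradePrice xmlns:m=\"Some-Namespace-URI\">" +
            "<symbol>DIS</symbol></m:GetLastTradePrice></SQ:Body></SQ:Envelope>";

        // Build the HTTP POST with the headers shown in the request example.
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://www.stockquoteserver.com/StockQuote");
        request.Method = "POST";
        request.ContentType = "text/xml; charset=\"utf-8\"";
        request.Headers.Add("SOAPAction", "Some-Namespace-URI#GetLastTradePrice");

        byte[] payload = Encoding.UTF8.GetBytes(body);
        request.ContentLength = payload.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(payload, 0, payload.Length);
        }

        // The response body is the SOAP envelope containing <Price>.
        using (WebResponse response = request.GetResponse())
        using (StreamReader responseReader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(responseReader.ReadToEnd());
        }
    }
}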

BizTalk is Microsoft’s formal initiative to create industry-standard technical specifications and document content for business-to-business (B2B) integration. It likewise comprises two segments: one for common messaging and one for standard document structures serving multiple business purposes. BizTalk Server offers standardized messaging and includes a feature called "Orchestration" that graphically models business processes so that electronic messages can be processed accordingly. In the case of an invoice, for instance, orchestration lets users model the relationship with other companies: the actual process flow from order through fulfillment and all the pieces that are part of that process. Users can represent that flow graphically and produce systems and software that support and validate it, using the graphical capabilities of Visio, which Microsoft acquired last year.

 

 

Microsoft’s .NET technology brings together a number of elements, including the .NET Framework, the user interface (UI), the .NET languages and Visual Studio .NET. The block diagram in the accompanying figure illustrates the relationships among the various components available for application development in the .NET environment.

 

Existing programming languages that participate in the framework are shown at the top, including Visual Basic, C++, the new C# ("C sharp") from Microsoft, and JScript. Other languages will certainly be created for the .NET framework as well. The Common Language Specification defines the rules for compiling code: every language compiles to a common intermediate language (IL) that abstracts the code from the underlying hardware, much like Java byte code, rather than being compiled directly for a native processor. The web services, user interface, data and XML classes, and base classes describe the content available to applications. And the common language runtime, which is analogous to the Java Virtual Machine, loads and runs the software. The whole idea of .NET is to compile code into this intermediate form, the equivalent of byte codes, and process it in the execution environment provided by the common language runtime.
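As a minimal sketch of that flow (the file and class names are illustrative only), the following C# program is compiled to IL and metadata rather than native machine code; the common language runtime loads the resulting assembly and just-in-time compiles the IL when the program runs:

// HelloNet.cs (illustrative only)
// Compiling this file with the C# compiler (csc HelloNet.cs) produces an
// assembly containing IL and metadata, not native instructions. The common
// language runtime loads it and JIT-compiles the IL at execution time.
using System;

class HelloNet
{
    static void Main()
    {
        Console.WriteLine("Running on the common language runtime.");
    }
}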

 

The new Visual Studio environment supports all of the traditional languages and will support new languages as they are developed. This eliminates the problems associated with incompatible data types across languages. A string is a string and a long is a long, no matter what the language, and developers now have consistent specifications for accessing them and passing them from component to component. Applications can now be written in a mix of languages and still work together in ways that weren't possible before.
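As a sketch of what this means in practice (the namespace and class names are hypothetical), a class library written in C# exposes ordinary .NET types that a Visual Basic .NET or JScript .NET caller sees in exactly the same way, because every language shares the common type system:

// OrderInfo.cs (hypothetical class library, illustrative only)
using System;

namespace Acme.Orders
{
    // Any .NET language that references this assembly sees the same
    // System.String and System.Int64 types; no translation layer is needed.
    public class OrderInfo
    {
        public string Customer;   // System.String in every language
        public long Quantity;     // System.Int64 in every language

        public string Describe()
        {
            return String.Format("{0} ordered {1} units", Customer, Quantity);
        }
    }
}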

 

Another nice feature of the .NET framework is the work that has been done on application assemblies. Each assembly now carries a "manifest" that specifies references for all of the files, DLLs, executables, graphics and other components that make up the application installed on the machine, along with the versions they were built against. This addresses the problem of "DLL Hell," in which different applications that use different versions of the same DLL step on each other's toes and cause crashes.
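A small sketch of how this surfaces to developers (it works for any compiled .NET executable; the class name is illustrative): the manifest records the identity, including the version, of every referenced assembly, and that information can be read back through reflection:

// ListReferences.cs (illustrative sketch only)
using System;
using System.Reflection;

class ListReferences
{
    static void Main()
    {
        // The manifest of the running assembly records each referenced
        // assembly together with the version it was built against.
        Assembly current = Assembly.GetExecutingAssembly();
        foreach (AssemblyName reference in current.GetReferencedAssemblies())
        {
            Console.WriteLine("{0}, Version={1}", reference.Name, reference.Version);
        }
    }
}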

 

In summary, this new .NET framework provides multiple benefits, including:

  • A new programming model for software developers

  • A new application execution model for running applications

  • A new deployment model for managing deployment/installs of components

  • New technologies for improved distributed and Web applications

  • The promise of application portability

For software developers, this new toolset means they can increase their productivity. In the early stages of implementation this will come with some pain. It will require a fairly long learning curve to get used to it, and there will be some migration headaches as current COM-based applications are migrated to the new environment. They will likely coexist at first and be rewritten in the long term. The .NET technology is still evolving, so it will be a growth process that may take a few years to mature.

 

***

 

This white paper was written and provided by Wonderware.  For Wonderware specifically, and automation users in general, some of the .NET technologies are already available and being implemented now. XML, VML, XSL and HTTP/SOAP are already deployed in Wonderware’s SuiteVoyager™ portal product. In addition, Terminal Services for InTouch already provides thin client capabilities, as part of WonderACT. BizTalk, which uses Visio as the graphical tool for building applications, can be applicable to InTouch and InTrack. Wonderware takes advantage of the power of Microsoft SQL™ Server and will accelerate this effort by moving all plant data into SQL-based servers, including IndustrialSQL Server™, Alarm Server and Magellan. The user benefit is that all data will be available from one Web Server and users will be able to easily access any data they need to do their jobs, from anyplace in the enterprise, via any computing device of their choice. That’s the bottom line to both the .NET framework and to Wonderware’s product direction – to provide data anytime, anywhere and in any format that’s needed.  For more information on Wonderware or any of their products, please visit www.wonderware.com.