An introduction to OPC UA
In previous posts we have focused on topics related to Industry 4.0 (although Industry 5.0 has been a topic for some time now, being defined with a special focus on efficient collaboration between machines and humans). We have defined different concepts along the way; in one specific post we even defined what the Asset Administration Shell is. In this post we give a brief introduction to OPC UA.
OPC UA (Open Platform Communications Unified Architecture) is a communication standard used to exchange information and data between different devices and systems in industrial automation and control environments.
This standard is highly flexible and allows communication between devices and systems from different manufacturers, regardless of the hardware or software platform they use. Furthermore, it fits perfectly in industrial environments where security and interoperability are essential, offering a secure and scalable architecture, with advanced authentication and encryption capabilities.
Examples of applications include process monitoring and control, real-time data collection, equipment and asset monitoring, and supply chain management.
But how and, more importantly, why was OPC UA developed? It was developed by the OPC Foundation and the initial specification was introduced in 2008, with several versions and revisions of the standard thereafter to improve functionality and adapt to changing industry needs.
OPC UA was developed to address the limitations and challenges of the original OPC, now commonly referred to as OPC Classic, which sought to be more secure, scalable, interoperable and suitable for current and future industry needs.
So, this takes us further back, to the following question: why was OPC itself developed? Its development originated in the 1990s from the growing need for automation and for software that could handle it.
At that time, the visualisation of variables or plant status was mostly handled on Windows-based platforms. Therefore, if you wanted to communicate with different PLCs (Programmable Logic Controllers) you needed a specific driver for each one of them, with all the possible differences in protocols or connections between them.
Therefore, the development of OPC (OLE for Process Control) originated from the need to establish a standard form of communication between devices and systems in the automation and control industry. Here are some reasons for its development:
- Industrial Automation: With the growth of industrial automation, there was an increasing demand for an efficient and standardised way of exchanging real-time data between devices, sensors, controllers and monitoring systems.
- Heterogeneity of Systems and Protocols: Prior to OPC Classic, there was a lack of standards for communication between devices and systems in industrial automation. Systems used different protocols and communication approaches, which made interoperability between devices from different manufacturers difficult.
- Need for integration: Data was not easily accessible so sending it to different applications was not straightforward and companies were looking to integrate control and data acquisition systems on a single platform. Therefore, there was a need for efficient communication between different systems and equipment, regardless of their origin. OPC provided a standardised interface that allowed systems to interact and share data more seamlessly.
- Leveraging existing Windows technologies: At the time OPC Classic was developed, technologies such as COM (Component Object Model) and DCOM (Distributed Component Object Model) were prominent in Windows software development, so OPC Classic leveraged these technologies to enable communication between devices and systems.
What was proposed was the development of a layer that would act as a bridge between the HMI (Human Machine Interface) itself and the drivers of each PLC. In this way, the different existing programs to generate an HMI would not have to know all the necessary drivers.
This layer would therefore be the so-called “OPC Server”, while the different elements to be connected (such as a SCADA) would become the “OPC Clients”. The COM (Component Object Model) and DCOM (Distributed Component Object Model) technologies mentioned above, which already existed in Windows, were used to communicate between the server and the different clients.
These technologies were developed by Microsoft and were used to facilitate the creation of modular, reusable and distributed software in Windows operating system environments. These technologies provided a framework for the development of independent software components that could interact with each other in an efficient and coherent manner. Let’s define then:
Component Object Model (COM): COM is a software programming model that allows developers to create independent and reusable software components. A COM component is a self-contained software object that encapsulates a specific functionality. These components can be called by other applications or components to perform different tasks. It is based on interfaces and defines how components communicate with each other through function calls.
The main features of COM include the encapsulation of data and functionality, the reuse of components, the interoperability between programming languages and the ease of maintenance and updating of the software.
It must be said that COM is a model for communication within the PC itself, which brings us to DCOM.
Distributed Component Object Model (DCOM): DCOM is an extension of COM that enables communication between components over a network, which facilitates the creation of distributed and client-server applications, supporting remote communication between components on different computers.
This is especially useful in enterprise network environments where applications need to access resources on different systems. DCOM uses network protocol technologies to achieve communication between distributed components.
Both COM and DCOM technologies played an important role in the development of Windows applications, allowing developers to create more flexible, modular and distributed systems.
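The interface-based idea at the heart of COM can be illustrated, very loosely, in Python rather than in COM's actual C-level binary interfaces. All the names below (`ITemperatureReader`, `BoilerSensor`) are invented for illustration; the point is only that clients depend on a declared interface, never on a concrete component:

```python
from abc import ABC, abstractmethod

# A declared interface: callers depend only on this contract,
# never on a concrete component class (the core idea behind COM).
class ITemperatureReader(ABC):
    @abstractmethod
    def read_celsius(self) -> float: ...

# A self-contained component implementing the interface.
class BoilerSensor(ITemperatureReader):
    def read_celsius(self) -> float:
        return 72.5  # stand-in for a real hardware read

# Client code is written against the interface, so any component
# exposing ITemperatureReader can be swapped in without changes here.
def show(reader: ITemperatureReader) -> str:
    return f"{reader.read_celsius():.1f} C"

print(show(BoilerSensor()))  # 72.5 C
```

This is why COM components are reusable across applications: as long as a component honours the interface, the caller never needs to know which concrete implementation it is talking to.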
OPC Classic comprises three main specifications:
- OPC DA (OPC Data Access). Allows clients to read or write variables in real time.
- OPC HDA (OPC Historical Data Access). Allows access to historical values.
- OPC A&E (OPC Alarms and Events). Allows the reception of alarms and event notifications.
OPC Classic disadvantages
Its main disadvantages can be summarised in two: total dependence on Windows and the inability to send data over the Internet, in addition to an excessively low level of communication security.
These issues were not really problems until the early 2000s, since there was no need for other platforms or for communication over HTTP. But then Industry 4.0 began to take shape: the widespread use of the Internet, mobile phones, smart sensors, the generation of large amounts of data… and the blurring of the boundaries between what we call IT and OT, which until then had been quite separate. Hence the need for a new standard.
OPC UA main characteristics
- Much richer data model. OPC Classic could only send simple data, such as the value of a sensor. With OPC UA you can, if you want, send much more: units used, sensor model, configuration parameters… We therefore have the possibility of exposing a kind of digital description of the device we are using, which is the origin of what we know as the AAS or Asset Administration Shell.
- Service-oriented architecture. In order for a client to access data, the OPC UA server exposes services: a series of methods or functions for reading or writing data, or for configuring and discovering servers. This allows for better usability and maintainability, as well as ease of development. A service can be thought of as a call to a method or function in a programming language.
- Multi-platform: It makes it possible to connect different platforms and environments: embedded systems, PLCs, PCs, Linux, Windows, smartphones… and connections can be made over the Internet using the HTTPS protocol.
- Integration with the IT world. Because, as we have already mentioned, the barrier between IT and OT is becoming increasingly blurred and simpler, faster communication is needed, the typical automation pyramid has become obsolete: sending data to systems such as the MES or ERP used to be complicated. With OPC UA that data can be sent more easily, and some software can even be implemented in the cloud when real-time process control is not required.
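The richer data model and the service-oriented access described above can be sketched together in a toy example. This is not the real OPC UA API; `Node`, `ToyServer` and the node identifier are invented names, and the "services" are plain Python methods standing in for OPC UA's Read, Write and Browse services:

```python
from dataclasses import dataclass, field

# A toy "node": a value plus the descriptive metadata that OPC UA
# lets a server expose alongside it (units, device model, parameters...).
@dataclass
class Node:
    value: float
    unit: str
    device_model: str
    config: dict = field(default_factory=dict)

class ToyServer:
    def __init__(self):
        self._nodes: dict = {}

    # "Services": the only way clients touch the data.
    def write(self, node_id: str, node: Node) -> None:
        self._nodes[node_id] = node

    def read(self, node_id: str) -> Node:
        return self._nodes[node_id]

    def browse(self) -> list:
        return sorted(self._nodes)

server = ToyServer()
server.write("Boiler.Temperature",
             Node(72.5, "Celsius", "TempSensor-X", {"sampling_ms": 500}))
node = server.read("Boiler.Temperature")
print(node.value, node.unit)   # 72.5 Celsius
print(server.browse())         # ['Boiler.Temperature']
```

The contrast with OPC Classic is in the payload: instead of a bare `72.5`, the client receives the value together with its unit, the sensor model and its configuration.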
Differences between OPC UA and OPC Classic
Now that we know why OPC UA was created and what its main features are, we can infer the main differences between OPC UA and OPC Classic. Here are some key differences between the two technologies:
- Platform independent architecture:
- OPC Classic: it is strongly linked to the Windows operating system by using Microsoft COM/DCOM technologies. This limits its interoperability and scalability in heterogeneous environments such as Industry 4.0.
- OPC UA: it was designed from the ground up to be platform and operating system independent. It can run on Windows, Linux, macOS and other systems such as Android.
- Security:
- OPC Classic: security is limited and relies heavily on the security of the underlying operating system, i.e., Windows.
- OPC UA: offers enhanced security with features such as encryption, authentication, authorisation and digital certificate mechanisms, making it more suitable for environments where security is a key concern.
- Communication protocols:
- OPC Classic: uses communication protocols such as DCOM, which can present challenges in larger, more complex networks. It is not suitable for communicating data over the Internet.
- OPC UA: uses more modern and efficient protocols, such as HTTP or TCP/IP, which allow communication over the Internet.
- Data and information model:
- OPC Classic: has a simple and inflexible data model.
- OPC UA: more robust and extensible data model, allowing richer and more detailed representation of information.
- Interoperability and Standards:
- OPC Classic: Interoperability between systems and vendors could be a challenge due to the dependency on DCOM and other specific technologies.
- OPC UA: It was designed to address interoperability issues and strives to be a truly open standard and widely adopted in the industry.
- Future Compatibility:
- OPC Classic: It is an older technology that could eventually become obsolete.
- OPC UA: OPC UA specifications have been made as abstract as possible, so that they are not related to any particular technology, with the aim that they can be used with any existing or new technologies that may be created in the future.
Pillars of interoperability
If we needed to define the most important pillars on which OPC UA is based to achieve interoperability we could list the following two:
Communication infrastructure and its protocols or transport mechanisms
This pillar defines how information is exchanged. We can define transport in this environment as a mechanism that sends an OPC UA message between a client and a server. Once the message is encrypted and secure, it is ready to be sent. There are two transports defined for OPC UA: OPC UA TCP and SOAP/HTTP.
Another related issue is serialisation, which transforms data into bytes so that it can be sent. Two types are supported: binary (suitable for environments where high performance and speed are required, such as automation) and XML (for Internet communications or enterprise applications).
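The trade-off between the two serialisations can be seen by encoding the same sensor value both ways. This sketch uses Python's standard library, not OPC UA's actual binary encoding; the element name and unit are made up for the example:

```python
import struct
import xml.etree.ElementTree as ET

value = 72.5

# Binary serialisation: compact and fast, good for automation links.
binary = struct.pack("<d", value)   # 8 bytes, little-endian double
print(len(binary))                  # 8

# XML serialisation: verbose but self-describing, good for IT systems.
elem = ET.Element("Temperature", unit="Celsius")
elem.text = str(value)
xml_bytes = ET.tostring(elem)
print(xml_bytes.decode())  # <Temperature unit="Celsius">72.5</Temperature>
print(len(xml_bytes) > len(binary))  # True
```

The binary form carries only the eight bytes of the number, so the receiver must already know what it represents; the XML form carries its own description at the cost of several times the size.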
In addition, a publish-subscribe communication model has also been defined, which allows information to be sent in real time. This model is based on notifications that are sent (normally through a broker) to the subscribed devices when there are modifications in the machine or factory data.
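The publish-subscribe flow above can be sketched with a minimal in-process broker. Real OPC UA PubSub typically runs over a transport such as MQTT or UDP; here the broker, topic names and callbacks are all simplified stand-ins:

```python
from collections import defaultdict

# A minimal broker: subscribers register a callback per topic and are
# notified whenever a publisher pushes a change on that topic.
class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, value):
        for callback in self._subs[topic]:
            callback(value)

broker = Broker()
received = []
broker.subscribe("factory/line1/temperature", received.append)

# The "machine" publishes only when its data changes; subscribers
# are pushed the update instead of polling the server for it.
broker.publish("factory/line1/temperature", 72.5)
broker.publish("factory/line1/temperature", 73.1)
print(received)  # [72.5, 73.1]
```

Note the inversion compared with the client-server model: the data source no longer waits to be asked, and one change can fan out to any number of subscribed devices.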
Information model
This pillar defines what information is exchanged and how it is structured. In other words, how the information is presented. It is a basic building block for systems to be interoperable and to be able to communicate information across different devices, as it specifies the data-modelling rules that an OPC UA server must follow to expose data to connected clients. It also defines the hierarchy within the automation environment being developed.
OPC UA provides an information architecture that maps the content of a complex object and its processes so that it can model its digital equivalent, i.e., the digital twin. It describes how to transform each physical device into its OPC UA equivalent, using an object as a building block to represent the data and behaviour of the underlying system it belongs to. The objects are interconnected by means of references, and the server exposes this information to the clients.
The concept behind OPC UA is similar to the Object Oriented Programming (OOP) paradigm that uses objects that can be defined as structures composed of attributes and methods, which are related to each other to develop software.
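The object-and-reference idea can be sketched as follows. The `ObjectNode` class and the factory hierarchy are invented for illustration (though "Organizes" and "HasComponent" are real OPC UA reference types); a real server would use standardised node classes and typed attributes:

```python
# A toy information model: each physical device becomes an object node
# with attributes, and nodes are linked by named references so a client
# can browse the hierarchy (Factory -> Boiler -> Temperature).
class ObjectNode:
    def __init__(self, name, attributes=None):
        self.name = name
        self.attributes = attributes or {}
        self.references = {}  # reference name -> list of target nodes

    def add_reference(self, ref_name, target):
        self.references.setdefault(ref_name, []).append(target)

factory = ObjectNode("Factory")
boiler = ObjectNode("Boiler", {"Manufacturer": "Acme"})
temperature = ObjectNode("Temperature", {"Value": 72.5, "Unit": "Celsius"})

factory.add_reference("Organizes", boiler)
boiler.add_reference("HasComponent", temperature)

# Browsing: follow references from the root, as an OPC UA client would.
for child in factory.references["Organizes"]:
    for comp in child.references["HasComponent"]:
        print(f"{child.name}/{comp.name} = {comp.attributes['Value']}")
# Boiler/Temperature = 72.5
```

Just as in object-oriented programming, each node encapsulates its attributes, and the references play the role of the relationships between objects that let a client discover the whole model by navigation.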
In this brief introduction we have only been able to scratch the surface of what OPC UA has to offer. The main points can be summarised as follows:
- Complex object-oriented, flexible and extensible data model.
- Increased security in communications based on: authentication, authorisation, server availability, encryption, confidentiality and integrity.
- Service-oriented architecture that allows the client to obtain information from the server.
- Different communication protocols: TCP, HTTPS, MQTT…
- Platform independent, allowing different operating systems and devices to communicate.
- Two communication models:
- Client-server model.
- Publish-subscribe model.