SCADA reflections – the (hidden) price of standards

Peter Humaj

May 26 2025, 4 min read

Today I would like to take a closer look at the modularity of SCADA systems, the standardization of interfaces between modules, and how these relate to functional properties. Let me begin with a short story:

A former classmate came to work at our company. We soon talked about the problems he was handling at the time (he used an unnamed American SCADA system). He needed the archive subsystem (historian) to recalculate statistics after a user manually edited the underlying archived values.

It took me a while to understand that the archive subsystem in question computes statistics at the end of each time interval (e.g. a minute or an hour). A subsequent correction of values by the user (or the arrival of delayed values from communication) does not automatically trigger any action in the archive subsystem. Recalculations must be "enforced" manually: you need to know which objects' values the user has changed, find out which objects depend on them, and finally force the recalculation.

In the D2000 real-time application server, the behavior is different: if statistical and computed archive objects are configured for ongoing calculations (the Calculation method parameter is set to Continuous), the D2000 Archive ensures that their values stay consistent with the values of their source archives. Whenever the value of a primary archive is updated (by the user or from a script), all archive objects that depend on it are recalculated automatically.
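To illustrate the principle, here is a minimal conceptual sketch in Python. It is not D2000 code - the object names (H.Temp, H.AvgTemp) and the whole API are made up for this example:

```python
from collections import defaultdict

class Archive:
    """A toy historian: primary archives hold raw values, statistical
    archives are kept consistent with their source archives."""

    def __init__(self):
        self.values = defaultdict(dict)      # object name -> {timestamp: value}
        self.dependents = defaultdict(list)  # source object -> dependent objects
        self.formulas = {}                   # dependent object -> aggregation function

    def add_statistic(self, name, source, func):
        """Register a 'continuous' statistical archive computed over a source."""
        self.dependents[source].append(name)
        self.formulas[name] = func

    def update(self, obj, timestamp, value):
        """Insert or correct a historical value. Unlike the interval-based
        historian from the story, dependent objects are recalculated
        automatically - no manual 'enforcing' is needed."""
        self.values[obj][timestamp] = value
        for dep in self.dependents[obj]:
            self.values[dep]["latest"] = self.formulas[dep](
                list(self.values[obj].values()))

arch = Archive()
arch.add_statistic("H.AvgTemp", "H.Temp", lambda xs: sum(xs) / len(xs))
arch.update("H.Temp", "10:00", 21.0)
arch.update("H.Temp", "10:01", 23.0)
arch.update("H.Temp", "10:00", 19.0)        # manual correction of an archived value
print(arch.values["H.AvgTemp"]["latest"])   # 21.0 - the average was recalculated
```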

Another scenario that triggers the recalculation of archive objects in the D2000 is, for example, reading old values from communication using the GETOLDVAL command. This can mean reading values from a data logger (e.g. the ESC8816 communication protocol) or from an electricity meter (the IEC 62056-21:2002 serial protocol).

Another option is reading old values from another SCADA system built on D2000 technology and connected via the D2000 Gateway process. Such reading is useful, for example, after a communication failure between two SCADA systems.

Why do I mention these cases? They are examples of how homogeneity (in the sense that all parts of the SCADA system - both the communication and the archive subsystem - come from the same manufacturer) helps in implementing superior, above-standard functionality.

If the subsystems come from different manufacturers and cooperate through a standard interface, implementing such extended functionality (which the interface does not support) can be a problem.

Don't get me wrong - standardized interfaces are a great thing. But I think a SCADA system should use them outwards, not inwards, between its own components or modules. I see a parallel in the question of why Linux does not have a stable in-kernel interface for drivers. As the kernel documentation says, the development of the Linux kernel (bug fixes, optimizations, new features) naturally leads to changes of the driver interface, and the drivers must adapt to them.

Similarly, enhancements, fixes, and implementations of new features in a SCADA system should not be blocked by stable ("frozen") interfaces.

If you agree with this argument, there is a logical consequence. The more customer requirements a SCADA system can meet "on its own", without the help of third-party products (and the limitations of their interfaces), the better and more efficiently it can implement new functionality and new optimizations.

I will give a few examples, again related to archiving.

The D2000 Archive is the only process with access to the archive database (the exception is the arcsynchro utility, which is used to patch holes in redundant archives and is not important in this context). All other interfaces for accessing archive values (D2000 JAVA API, D2000 OBJApi, D2000 ODBC Driver, D2000 OPC Server, D2000 OPC UA Server, D2000 VBApi, D2000 WorkBook) use the services of the D2000 Archive for reading and writing historical values, in the same way other processes do. This feature enabled, among other things:

  • On-change archiving of periodic values - an optimization of storing periodic values. If the values do not change over time, only the first value with its timestamp is stored, which saves both space in the archive database and the performance of the archive server. When reading, a "decompression" must be performed - the repeated values are generated again (see the first sketch after this list).
  • Implementation of an isochronous cache in the archive - if enough RAM is available, part of it can be dedicated to the D2000 Archive. The memory is used to hold the latest values of objects from the last N hours or days, depending on the size of the memory and the data flow. This can significantly speed up clients' access to data, as the most recent data - for the last work shift or the last day - is typically read most frequently (the second sketch after this list illustrates the idea).
  • Support for compressed depositories on the PostgreSQL platform in D2000 version 22. The depository database is a feature of the D2000 Archive that enables unlimited archiving of values. Depending on the configuration, it creates e.g. a new depository database every 30 days; once the depository fills up, it is either disconnected or remains readable. Depository compression means optimizing its data structure after filling, which saves a significant amount of space (the feature also uses the TOAST technology implemented in PostgreSQL for storing large fields). Compressed depositories are usually about 10 times smaller than the original ones - a significant saving of space (and the associated costs), considering that the data keeps growing and some of our customers already have more than 10 TB of depositories.
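The first sketch illustrates the idea of on-change archiving: store a periodic series only when the value changes, and regenerate the repeated values when reading. Again, this is illustrative Python, not the actual D2000 storage format:

```python
def compress(samples):
    """Keep only the samples whose value differs from the previous one."""
    stored = []
    for ts, value in samples:
        if not stored or stored[-1][1] != value:
            stored.append((ts, value))
    return stored

def decompress(stored, timestamps):
    """Regenerate the full periodic series: each timestamp gets the
    last stored value at or before it."""
    result, i = [], 0
    for ts in timestamps:
        while i + 1 < len(stored) and stored[i + 1][0] <= ts:
            i += 1
        result.append((ts, stored[i][1]))
    return result

period = list(range(0, 60, 10))              # a sample every 10 seconds
raw = [(ts, 5.0) for ts in period[:4]] + [(ts, 7.0) for ts in period[4:]]
stored = compress(raw)                       # [(0, 5.0), (40, 7.0)] - 2 rows instead of 6
assert decompress(stored, period) == raw     # reading restores the full series
```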
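The second sketch shows a time-window cache of the most recent values: reads of fresh data are served from RAM, while older reads fall through to the archive database. This is a conceptual illustration only - the db_read callback and all names are assumptions, not D2000 internals:

```python
import time
from collections import deque

class RecentValueCache:
    """Keep values from the last `window` seconds in memory; requests for
    older data fall through to the (slower) archive database."""

    def __init__(self, window, db_read):
        self.window = window        # cache depth in seconds (N hours or days)
        self.db_read = db_read      # fallback: db_read(obj, ts) -> value
        self.cache = {}             # object name -> deque of (timestamp, value)

    def store(self, obj, ts, value):
        q = self.cache.setdefault(obj, deque())
        q.append((ts, value))
        cutoff = time.time() - self.window
        while q and q[0][0] < cutoff:   # forget values older than the window
            q.popleft()

    def read(self, obj, ts):
        for cached_ts, value in reversed(self.cache.get(obj, deque())):
            if cached_ts == ts:
                return value            # recent value: served from RAM
        return self.db_read(obj, ts)    # older value: read from the database
```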

Oracle is a nice example from another field. Originally the maker of probably the most advanced SQL database, it went through a series of acquisitions (including Sun) and gained a whole technology "stack" that it can optimize and offer to customers - from hardware through the operating system, Java, and database technology up to tools such as ERP, CRM, and SCM. By the way, speaking of SQL, which is also a standard: probably every SQL database has its own extensions of the SQL language, implementing improvements and new features that the standard does not contain...

Conclusion

Let me just repeat what I wrote above. The greater the scope of a SCADA system and the more functionality it can cover "on its own", the more room it has for optimizations and for features that require coordinated cooperation of its individual modules. Developers can focus on delivering useful functionality to the customer rather than on persuading third-party components to support new features and working around the limitations of existing standard interfaces.

Ing. Peter Humaj, www.ipesoft.com
