The Reality of Implementing Edge Computing in OT
The reality of implementing Edge Computing in OT is that “Edge Computing” is confused with “putting in computers,” and that is not exactly the case.
In previous posts, we explained what Edge Computing is, what it consists of, and the advantages it offers by enabling the transition to an Industry 4.0 model. But what does implementation actually look like in practice?
As always, it is a complex question to answer. Organizational difficulties are common, arising from the coordination and interdependence between the company's departments and from the different concerns each of them prioritizes:
- Engineering/Production: interested in a tool that works correctly, performs the desired function in the process, and generates or increases value.
- Maintenance: interested in mechanisms for knowing the status of the equipment and resolving any incident in the shortest possible time.
- Communications: interested in ensuring that communications between the different components are correct and resilient to failures, although latency requirements, guaranteed bandwidth, segmentation, and the other conditions needed to guarantee real-time behavior are sometimes overlooked (see the latency-check sketch after this list).
- Systems: interested in ensuring that servers and their applications run correctly, securely, and redundantly; however, attention is sometimes not paid to latency requirements, the effects of a potential zero crossing in the event of a failure, or how resource sharing can unpredictably affect the performance of the critical application.
- Cybersecurity: interested in ensuring that the whole is secure, with strict control of functionality and access, which can make day-to-day management more difficult.
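By way of illustration, the kind of spot check the Communications or Systems team might run before sharing a network segment with control traffic could look like the minimal sketch below: a rough round-trip measurement compared against a latency budget. The PLC address, port, and 10 ms budget are illustrative assumptions, not values from any specific project.

```python
import socket
import statistics
import time

PLC_HOST = "192.168.10.50"   # hypothetical PLC / RTU address (assumption)
PLC_PORT = 502               # Modbus/TCP used only as an example protocol
LATENCY_BUDGET_MS = 10.0     # assumed real-time budget for control traffic


def tcp_connect_latency_ms(host: str, port: int, timeout: float = 1.0) -> float:
    """Measure TCP connect time as a rough proxy for network round-trip latency."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0


def main() -> None:
    samples = [tcp_connect_latency_ms(PLC_HOST, PLC_PORT) for _ in range(20)]
    p95 = statistics.quantiles(samples, n=20)[-1]
    print(f"median={statistics.median(samples):.2f} ms  p95={p95:.2f} ms")
    if p95 > LATENCY_BUDGET_MS:
        print("WARNING: p95 latency exceeds the assumed real-time budget; "
              "consider segmentation or guaranteed bandwidth for control traffic.")


if __name__ == "__main__":
    main()
```

A proper assessment would of course use the real protocol traffic and network instrumentation, but even a crude check like this makes the latency conversation between departments explicit.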
And then there is the figure of the systems integrator, who often has to adapt to the requirements set by each of these departments and, on top of that, get the different systems to work with one another. And it has to work! And quickly, because it was needed yesterday!

To this already difficult task we must add the challenge of learning and mastering the different technologies involved in the whole process. It is no longer enough to know about PLCs, HMIs, and SCADA; it is also necessary to understand the role played by virtualization and operating systems, the cybersecurity risks they carry, and how the different communication protocols work in order to define the most appropriate network architecture.
Moreover, it is often necessary to install additional computers, which must be able to withstand the environmental conditions of the location where they will operate and must be properly protected so as not to introduce cybersecurity risks. In both cases, achieving guaranteed resilience is a challenge. A survey of several systems integrators reveals that:
- 75% of integrators have had to supply hardware or edge computing elements within their industrial integration and automation projects.
- Only 25% have qualified personnel with sufficient knowledge to size and deploy complex server and virtualization architectures.
- Provisioning the hardware and licenses needed for development represents a considerable management effort for 40% of them.
- For 65% of the integrators consulted, provisioning and installing the hardware infrastructure adds only residual value to their projects.
These difficulties, if not addressed during the design of the solution, end up causing problems later, when making changes to environments already in operation is usually extremely complicated. It is not difficult, for example, to find situations such as:
- Use of the same network infrastructure for the transmission of field data and for control communications.
- Use of computers that end up failing due to lack of maintenance and/or the environmental conditions at the installation site.
- Lack of redundancy and existence of single points of failure that can put the process or parts of it at risk.
- Lack of monitoring of the supporting elements that the whole system needs in order to function correctly (a minimal reachability check is sketched after this list).
- Obsolete equipment that is no longer supported or under warranty by the manufacturer.
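On the monitoring point in particular, even a very small periodic reachability check can reveal whether the supporting elements are alive at all. The sketch below is only illustrative: the host names, addresses, and probed ports are assumptions, and in a real deployment this role belongs to a proper monitoring or alarm system rather than an ad-hoc script.

```python
import socket
import time

# Hypothetical supporting elements and the TCP ports probed for each (assumptions).
ELEMENTS = {
    "edge-switch-01": ("10.0.0.2", 22),
    "hypervisor-01":  ("10.0.0.10", 443),
    "historian-vm":   ("10.0.0.20", 1433),
}


def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def main() -> None:
    while True:
        for name, (host, port) in ELEMENTS.items():
            status = "OK" if is_reachable(host, port) else "UNREACHABLE"
            print(f"{time.strftime('%H:%M:%S')} {name:<15} {status}")
        time.sleep(60)  # check interval; a real deployment would raise alarms instead of printing


if __name__ == "__main__":
    main()
```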
The reality of implementing Edge Computing in OT is, as we said at the start, that “Edge Computing” gets confused with “putting in computers,” and that is not exactly the case.
Indeed, as the analysis of raw process data becomes more relevant for continuous improvement, it will be necessary to provide the different elements with computing capacity. Does that mean adding computers? Yes, but not just any way. Questions such as: What type of equipment do I need? Under what conditions will it operate? Do I need virtualization? What kind of maintenance does it require? How do I connect it to the network? What latency requirements do I have? What happens if it fails? How do I perform backups? How do I manage its supervision? What remote access is required? How do I update it? … must all be answered to ensure that the final solution is reliable, maintainable, and durable.
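One way to make sure none of those questions is skipped is to treat them as an explicit specification that has to be completed before deployment. The sketch below is only an illustration of that idea; the field names and example values are assumptions, not a standard checklist.

```python
from dataclasses import dataclass


@dataclass
class EdgeNodeSpec:
    """Design questions for one edge node, expressed as fields that must be filled in."""
    equipment_type: str        # what type of equipment? (fanless PC, rack server, ...)
    operating_conditions: str  # under what conditions will it operate?
    virtualization: bool       # do I need virtualization?
    maintenance_plan: str      # what kind of maintenance does it require?
    network_attachment: str    # how do I connect it to the network?
    latency_budget_ms: float   # what latency requirements do I have?
    failure_behaviour: str     # what happens if it fails?
    backup_policy: str         # how do I perform backups?
    monitoring: str            # how do I manage its supervision?
    remote_access: str         # what remote access is required?
    update_policy: str         # how do I update it?


# Example values are illustrative; empty strings mark decisions not yet taken.
spec = EdgeNodeSpec(
    equipment_type="fanless industrial PC",
    operating_conditions="-10 to 50 °C, IP20 cabinet",
    virtualization=True,
    maintenance_plan="",  # unanswered: flag before commissioning
    network_attachment="dedicated OT VLAN, redundant uplinks",
    latency_budget_ms=10.0,
    failure_behaviour="",
    backup_policy="nightly image to off-cabinet NAS",
    monitoring="SNMP plus hypervisor alarms",
    remote_access="jump host with MFA only",
    update_policy="",
)

missing = [field for field, value in vars(spec).items() if value == ""]
print("Unanswered design questions:", missing)
```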
To do our part, at Logitek we have worked jointly with global manufacturers to offer preconfigured and validated Edge Computing solutions for OT/ICS environments. Helping our partners deploy standard hardware infrastructure easily and quickly is part of our mission as a company.
The main objective is to offer a solution with guaranteed operation and very strict fault-tolerance mechanisms, whose day-to-day management does not require extensive or complicated training.
For more information, visit our OT Infrastructure solution.


