Many still view operational technology (OT) security as something to implement only after systems are up and running. However, some of the most difficult risks to manage originate in design and development.
The newly released Secure Connectivity Principles for OT from CISA, the UK NCSC, the FBI, and international partners reflect this change in thinking. The guidance shows that secure connectivity also depends on how systems are designed and built.

OT refers to the hardware and software that control physical processes in industrial and infrastructure systems. It powers everything from factory machinery and energy networks to pipelines, building systems, and connected vehicles. As these systems directly manage physical operations, any vulnerabilities can lead to equipment failures or safety hazards.
One of the most important points in the guidance is its call to action for developers and device manufacturers: build products that are secure by design, with security built in before deployment rather than left as a problem operators have to work around.
Secure by design is the first line of defense for OT networks because when software and systems are built securely from the start, operators can manage them with reduced risk.
Some of the main ways to put secure-by-design principles into practice include:
Deploy devices and software with secure defaults, making sure there are no default passwords or unencrypted protocols.
Provide documentation that operators can use to configure connectivity safely.
Plan updates, patches, and vulnerability management from the beginning.
Design systems with built-in segmentation, authentication, and monitoring.
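The "secure defaults" principle above can be sketched as a startup check a device might run before accepting connections. This is an illustrative sketch, not any vendor's implementation; the field names, the password deny-list, and the set of accepted protocols are all assumptions for the example.

```python
# Hypothetical secure-defaults check run at device startup.
# The config keys and accepted values below are illustrative assumptions.

DEFAULT_PASSWORDS = {"admin", "password", "1234", ""}
ENCRYPTED_PROTOCOLS = {"https", "mqtts", "opc-ua-secure"}

def validate_secure_defaults(config: dict) -> list[str]:
    """Return a list of violations; an empty list means the config passes."""
    violations = []
    if config.get("password", "") in DEFAULT_PASSWORDS:
        violations.append("default or empty password must be changed")
    if config.get("protocol") not in ENCRYPTED_PROTOCOLS:
        violations.append(f"unencrypted protocol: {config.get('protocol')!r}")
    if not config.get("firmware_signing", False):
        violations.append("firmware update signing is disabled")
    return violations
```

A device built this way can refuse to enter service until every violation is resolved, so the operator never inherits an insecure default.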
Developers can unintentionally introduce vulnerabilities even when they never interact with a live OT network, leaving operators to manage the consequences. These include:
Misconfigured access or missing authentication can leave critical dashboards exposed. For example, an unsecured Human-Machine Interface (HMI) could let an attacker manipulate processes such as temperature or valve positions.
Poor input validation or unchecked requests can let unsafe commands through or allow directory traversal. Unsafe input in OT systems can cause data errors, equipment damage, or safety incidents. For instance, an unchecked parameter in a SCADA command could trigger emergency shutdowns or override safety controls.
Legacy libraries or unsafe memory handling may cause crashes or denial-of-service situations. For example, if a Programmable Logic Controller (PLC) crashes, it can put a stop to production lines and disrupt critical infrastructure.
Insufficient logging makes it harder to detect failed logins or unusual actions. If there’s no proper telemetry, it can be difficult to work out whether an incident was accidental or malicious.
Buffer overflows or memory corruption can compromise sensor data or cause relay systems to fail. If an industrial sensor is affected, operators may then make decisions based on incorrect readings for temperature or pressure.
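The input-validation risk above can be illustrated with a simple bounds check applied to an operator command before it reaches a controller. The parameter names and safe ranges are assumptions made up for this sketch; a real system would take its limits from the process safety specification.

```python
# Illustrative bounds-checking for a SCADA-style command parameter.
# Field names and safe ranges are assumptions for the example.

SAFE_LIMITS = {
    "temperature_setpoint": (0.0, 120.0),  # degrees Celsius
    "valve_position": (0.0, 100.0),        # percent open
}

class UnsafeCommandError(ValueError):
    """Raised when a command parameter falls outside its safe envelope."""

def validate_command(field: str, value: float) -> float:
    if field not in SAFE_LIMITS:
        raise UnsafeCommandError(f"unknown parameter: {field}")
    lo, hi = SAFE_LIMITS[field]
    if not (lo <= value <= hi):
        raise UnsafeCommandError(
            f"{field}={value} outside safe range [{lo}, {hi}]")
    return value
```

Rejecting the command at this layer means a malformed or malicious request fails loudly in software instead of silently driving equipment outside its safety envelope.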
These examples show that secure design and secure coding help ensure that the systems your code interacts with are resilient against mistakes and misuse.
Alongside the OT connectivity principles, CISA and international partners have released separate guidance focused specifically on securing AI in operational technology environments.
While it was published separately from the OT principles, it shows how AI can introduce new risks into systems that weren't designed for it. For developers and product teams building AI features into OT tools, this creates specific design responsibilities:
Require human approval before AI can make changes: If AI suggests a configuration to improve performance, an operator should review it first, especially if it could affect safety limits or critical controls.
Limit what AI can access: Create different permission levels based on potential impact. For example, AI recommendations for a reporting dashboard need less review than AI changes to control logic.
Track what AI changes: Make use of logging to record the details of what changed, when, and why. It’s important for both security investigations and safety audits.
Stop misconfigurations before they go live: If AI generates a PLC configuration, validation checks should catch insecure settings before they reach production systems.
The guidance reinforces the broader secure-by-design message: AI security cannot be bolted on at the end of a project. It must be considered during design and development, not left entirely to operators to manage afterward.
The choices developers make influence security outcomes, so risk should be considered early in the development process.
It should go without saying, but always use secure communication protocols with authentication and encryption.
Make sure to document connectivity and configuration decisions clearly so operators can use them.
Try to limit unnecessary interfaces and remote access points to reduce attack surfaces.
Build update, patch, and vulnerability management processes in from day one.
It’s best to review AI-assisted workflows to ensure proper oversight.
Set boundaries for where AI can and cannot be used in OT systems.
Run structured testing and risk assessments to safely simulate misconfigurations or failure scenarios.
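In the spirit of the structured-testing step above, a misconfiguration can be simulated safely in an ordinary unit test. The default-deny segmentation function and zone names below are hypothetical, invented for this sketch.

```python
# Hypothetical unit test simulating a network-segmentation misconfiguration.
# The zone names and rule format are assumptions for the example.

import unittest

def segment_allows(src_zone: str, dst_zone: str, rules: set) -> bool:
    """Allow traffic only if an explicit (src, dst) rule exists (default deny)."""
    return (src_zone, dst_zone) in rules

class TestSegmentation(unittest.TestCase):
    def test_corporate_cannot_reach_control(self):
        # Only the control zone may talk to the historian; anything else
        # (e.g. corporate -> control) must fall through to default deny.
        rules = {("control", "historian")}
        self.assertTrue(segment_allows("control", "historian", rules))
        self.assertFalse(segment_allows("corporate", "control", rules))

if __name__ == "__main__":
    unittest.main()
```

Because the "network" is just data in a test, a developer can probe what a missing rule or an overly broad rule would permit without ever touching production systems.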
Reading guidance on its own will not teach developers the impact of insecure design choices, which is why SecureFlag’s OT, IoT, SCADA, and automotive labs are so valuable.
Our practical labs allow developers to safely explore realistic OT environments, experiment with misconfigurations, and see the consequences of design choices without putting production systems at risk. For example, developers can:
Identify and fix poor authentication or logging practices.
Work through unsafe input handling or connectivity errors.
Explore buffer overflows, race conditions, and mismanaged memory scenarios.
Interact with AI scenarios safely to find and fix security issues.
Together with the OT/IoT/SCADA risk template in our automated threat modeling solution, ThreatCanvas, teams can map assets, identify potential threats, and prioritize what remediation steps to take.
These approaches lead to practical security thinking that developers can apply throughout the lifecycle. Importantly, that’s before code reaches production and vulnerabilities become incidents.