Autonomous systems promise efficiency gains, improved availability and entirely new mobility concepts. But regardless of advances in perception or decision-making software, trust in autonomy depends on something more fundamental: whether the system remains reliable, resilient and controllable when things go wrong.
Once vehicles operate without human supervision, safety is no longer a feature or differentiator. It becomes a baseline requirement for every function, under all conditions — including fault scenarios. This places three tightly coupled disciplines at the centre of autonomous system design: functional safety, redundancy architecture and cybersecurity.
Together, they determine whether autonomy is not only technically viable, but also certifiable, deployable at scale and acceptable to regulators and the public.
Functional safety: managing failure, not eliminating it
Functional safety does not assume error-free operation. Instead, it focuses on ensuring that systems respond to faults in a predictable, traceable and low-risk manner. For autonomous vehicles, this principle is codified in a growing body of international standards.
ISO 26262 defines functional safety requirements for electrical and electronic vehicle systems, while ISO 21448 (Safety of the Intended Functionality, or SOTIF) addresses scenarios where systems behave as designed but still reach unsafe outcomes due to environmental ambiguity, sensor limitations or incomplete data. For highly and fully automated vehicles without a human fallback, UL 4600 extends these concepts to system-level safety validation.
At higher levels of automation, the design objective shifts. As functional safety specialist Dr Thomas Schneider of AVL has observed, the challenge at Level 4 and beyond is not avoiding faults, but continuing to operate safely despite them. That distinction has significant architectural implications for how motion control, actuation and decision execution layers are built.
Redundancy as a system principle
Redundancy is a core mechanism for maintaining control when individual components fail. In autonomous systems, this goes beyond simple duplication. Effective redundancy requires physically and logically independent functional paths, separate power supplies, and continuous cross-monitoring to prevent single points of failure.
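The sketch below illustrates one way such cross-monitoring can be arranged: two independent channels each run their own diagnostics, an arbiter compares their outputs for plausibility, and a single faulty channel is masked rather than allowed to bring down the control path. The channel structure, tolerance and arbiter interface are illustrative assumptions, not a reference to any particular product or standard.

```cpp
// Minimal sketch of dual-channel cross-monitoring (illustrative only).
// Channel names, tolerances and the Arbiter interface are assumptions,
// not taken from any specific product or standard.
#include <cmath>
#include <cstdio>
#include <optional>

struct ChannelOutput {
    double steering_cmd;   // requested steering angle, degrees
    bool   self_test_ok;   // result of the channel's internal diagnostics
};

class Arbiter {
public:
    // Cross-check both channels and pick a command, or report that no valid output exists.
    std::optional<double> select(const ChannelOutput& a, const ChannelOutput& b) const {
        const bool agree = std::fabs(a.steering_cmd - b.steering_cmd) <= tolerance_deg_;
        if (a.self_test_ok && b.self_test_ok && agree) {
            return (a.steering_cmd + b.steering_cmd) / 2.0;            // both healthy and consistent
        }
        if (a.self_test_ok && !b.self_test_ok) return a.steering_cmd;  // single fault: keep operating
        if (b.self_test_ok && !a.self_test_ok) return b.steering_cmd;
        return std::nullopt;  // disagreement or double fault: hand over to fail-operational logic
    }

private:
    double tolerance_deg_ = 0.5;   // assumed plausibility threshold
};

int main() {
    Arbiter arbiter;
    ChannelOutput primary{2.0, true}, secondary{2.1, true};
    if (auto cmd = arbiter.select(primary, secondary)) {
        std::printf("commanded steering: %.2f deg\n", *cmd);
    } else {
        std::printf("no valid command: trigger degraded-mode response\n");
    }
}
```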
Fail-operational behaviour — where a system continues operating in a degraded but controlled state — is increasingly seen as essential for autonomous applications such as driverless public transport, industrial logistics and defence mobility. In these contexts, an uncontrolled stop or loss of motion control may pose a greater risk than continuing to operate until a defined safe state is reached.
Architectures designed around multiple independent control paths allow systems to either continue operation or reach predetermined stopping points even when faults occur, rather than defaulting to shutdown.
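A simplified state machine makes the idea concrete: rather than jumping straight to shutdown, the system steps from nominal operation to a degraded mode, then to a minimal risk manoeuvre that ends in a controlled stop at a predetermined point. The states, events and transitions below are assumptions chosen for illustration, not a certified design.

```cpp
// Illustrative fail-operational state machine (a sketch, not a certified design).
// State names, events and transitions are assumptions for the purpose of the example.
#include <cstdio>

enum class Mode  { Nominal, Degraded, MinimalRiskManeuver, SafeStop };
enum class Event { NoFault, SingleChannelFault, DoubleFault, ManeuverComplete };

Mode next_mode(Mode current, Event event) {
    switch (current) {
        case Mode::Nominal:
            if (event == Event::SingleChannelFault) return Mode::Degraded;            // keep driving on the remaining path
            if (event == Event::DoubleFault)        return Mode::MinimalRiskManeuver;
            return Mode::Nominal;
        case Mode::Degraded:
            if (event == Event::SingleChannelFault ||
                event == Event::DoubleFault)        return Mode::MinimalRiskManeuver; // no further margin: head for a safe stop
            return Mode::Degraded;
        case Mode::MinimalRiskManeuver:
            if (event == Event::ManeuverComplete)   return Mode::SafeStop;            // controlled stop at a predetermined point
            return Mode::MinimalRiskManeuver;
        case Mode::SafeStop:
            return Mode::SafeStop;                                                    // terminal state until maintenance
    }
    return Mode::SafeStop;
}

int main() {
    Mode mode = Mode::Nominal;
    mode = next_mode(mode, Event::SingleChannelFault);   // degraded but still operational
    mode = next_mode(mode, Event::DoubleFault);          // start the minimal risk manoeuvre
    mode = next_mode(mode, Event::ManeuverComplete);     // reach the defined safe state
    std::printf("final mode: %d\n", static_cast<int>(mode));
}
```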
Cybersecurity as a safety requirement
As autonomy increases, so does connectivity — and with it, cyber risk. A system that is functionally safe in theory can become unsafe in practice if its control paths or software can be manipulated.
Regulatory frameworks now reflect this reality. UNECE Regulation R155 mandates a cybersecurity management system as a condition of vehicle type approval, while R156 requires a software update management system covering over-the-air update mechanisms. Meeting these requirements involves intrusion detection, software integrity verification, event logging and strict separation between safety-critical and non-critical domains.
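The sketch below shows the kind of software integrity verification these regulations imply: an update image is hashed and compared against a reference digest before it is applied. It uses OpenSSL's SHA-256 purely for illustration (link with -lcrypto); a production implementation would verify a cryptographically signed manifest against keys held in a hardware security module, and the all-zero digest here is a placeholder, not a real value.

```cpp
// Sketch of software-image integrity verification before an update is applied:
// the image is hashed and compared against a digest delivered through a trusted
// channel. Uses OpenSSL's SHA-256 for illustration; in production the reference
// would be a signed manifest verified against keys in a hardware security module.
// The expected digest below is a placeholder, not a real value.
#include <openssl/sha.h>
#include <array>
#include <cstdio>
#include <cstring>
#include <vector>

bool image_integrity_ok(const std::vector<unsigned char>& image,
                        const std::array<unsigned char, SHA256_DIGEST_LENGTH>& expected) {
    std::array<unsigned char, SHA256_DIGEST_LENGTH> actual{};
    SHA256(image.data(), image.size(), actual.data());              // hash the received image
    return std::memcmp(actual.data(), expected.data(), actual.size()) == 0;
}

int main() {
    std::vector<unsigned char> image = {0x01, 0x02, 0x03};          // stand-in for an update image
    std::array<unsigned char, SHA256_DIGEST_LENGTH> expected{};     // placeholder digest (all zeros)
    std::printf(image_integrity_ok(image, expected)
                    ? "integrity check passed: apply update\n"
                    : "integrity check failed: reject update and log the event\n");
}
```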
In autonomous vehicles, cybersecurity is therefore inseparable from safety. Segmented networks, secure boot processes, encrypted in-vehicle communication and domain isolation are no longer optional design choices, but foundational requirements.
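Domain isolation can be pictured as a gateway that forwards only an explicit allow-list of message identifiers from the non-critical domain into the safety-critical one, logging everything it rejects for intrusion detection and audit. The message IDs and interface below are illustrative assumptions; real gateways would add authenticated messaging, rate limiting and hardware-backed key storage.

```cpp
// Sketch of a domain gateway that only forwards an allow-listed set of message IDs
// from the non-critical (e.g. infotainment) domain to the safety-critical domain,
// and logs everything it rejects. Message IDs and structure are illustrative
// assumptions; production gateways would add authenticated messaging, rate limiting
// and hardware-backed key storage.
#include <cstdint>
#include <cstdio>
#include <unordered_set>
#include <utility>
#include <vector>

struct Frame {
    std::uint32_t id;                  // message identifier
    std::vector<std::uint8_t> payload;
};

class DomainGateway {
public:
    explicit DomainGateway(std::unordered_set<std::uint32_t> allowed)
        : allowed_ids_(std::move(allowed)) {}

    // Returns true if the frame may cross into the safety-critical domain.
    bool filter(const Frame& frame) {
        if (allowed_ids_.count(frame.id) == 0) {
            std::printf("SECURITY EVENT: blocked frame 0x%03X (%zu bytes)\n",
                        static_cast<unsigned>(frame.id),
                        frame.payload.size());      // feed into intrusion detection / audit log
            return false;
        }
        return true;
    }

private:
    std::unordered_set<std::uint32_t> allowed_ids_;
};

int main() {
    DomainGateway gateway({0x101, 0x1A2});              // assumed IDs, e.g. vehicle speed, ambient temperature
    gateway.filter({0x101, {0x10, 0x27}});              // allowed: forwarded
    gateway.filter({0x7FF, {0xDE, 0xAD, 0xBE, 0xEF}});  // unknown ID: blocked and logged
}
```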
Verification and certification
In regulated environments, safety concepts must be demonstrable. Type approval under UNECE regulations, supported by ISO standards, requires extensive documentation, fault injection testing, simulation, laboratory validation and real-world trials.
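Fault injection in particular lends itself to automation. The minimal sketch below deliberately corrupts a sensor reading and checks that a plausibility monitor rejects it; the signal, limits and test structure are assumptions made for illustration, whereas real campaigns follow documented test catalogues at unit, integration and vehicle level.

```cpp
// Minimal fault-injection sketch: a wrapper deliberately corrupts a sensor value
// and a plausibility monitor is expected to flag it. Names and thresholds are
// assumptions; real campaigns follow documented test catalogues at unit,
// integration and vehicle level.
#include <cassert>
#include <cstdio>

// Plausibility monitor under test: wheel speed must stay within physical limits.
bool wheel_speed_plausible(double speed_mps) {
    return speed_mps >= 0.0 && speed_mps <= 90.0;   // assumed limits for the example
}

// Fault injector: replaces the genuine reading with a corrupted one.
double inject_stuck_at(double /*genuine*/, double stuck_value) {
    return stuck_value;
}

int main() {
    const double genuine = 13.9;                    // roughly 50 km/h
    assert(wheel_speed_plausible(genuine));         // nominal case: monitor accepts the genuine value

    // Injected faults: the monitor must reject both.
    assert(!wheel_speed_plausible(inject_stuck_at(genuine, -4.0)));   // negative speed
    assert(!wheel_speed_plausible(inject_stuck_at(genuine, 250.0)));  // physically impossible speed

    std::printf("fault-injection checks passed\n");
}
```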
Frameworks such as the PEGASUS scenario-based testing methodology and the ASAM OpenSCENARIO standard have emerged to make safety testing repeatable and comparable across platforms. Continuous logging of control actions, state transitions and safety responses is increasingly used to support audits, certification and operational oversight of autonomous fleets.
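An audit trail of that kind might look like the sketch below: each control action or state transition is appended to a log with a timestamp and its source, and existing records are never rewritten. The record layout and file name are assumptions made for the example.

```cpp
// Sketch of an append-only audit log for control actions and state transitions,
// of the kind used to support certification and fleet oversight. The record
// layout and file name are assumptions for illustration.
#include <chrono>
#include <fstream>
#include <string>

class AuditLog {
public:
    explicit AuditLog(const std::string& path)
        : out_(path, std::ios::app) {}               // append-only: existing records are never rewritten

    void record(const std::string& source, const std::string& event) {
        const auto now = std::chrono::system_clock::now().time_since_epoch();
        const auto us  = std::chrono::duration_cast<std::chrono::microseconds>(now).count();
        out_ << us << ';' << source << ';' << event << '\n';
        out_.flush();                                // keep the trail intact even if the process crashes
    }

private:
    std::ofstream out_;
};

int main() {
    AuditLog log("safety_audit.log");
    log.record("mode_manager", "NOMINAL->DEGRADED (single channel fault)");
    log.record("motion_control", "steering_cmd=2.0deg accepted");
}
```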
Designing for trust
Trust in autonomous systems cannot be retrofitted. It must be designed into the architecture through deterministic behaviour, redundancy, secure communication and clearly defined responses to failure conditions.
In this context, motion execution layers such as NX NextMotion are positioned not as decision-making systems, but as deterministic control layers that translate higher-level commands into vehicle movement in a traceable and standards-compliant way. This separation of concerns — where autonomy decides and certified systems execute — is emerging as a common pattern for managing risk in safety-critical autonomy.
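The pattern can be sketched as a thin execution interface that accepts motion requests from a planner only if they fall within a fixed, certified operating envelope, and rejects and logs anything outside it. This is not the NX NextMotion API; the interface, names and limits below are assumptions used purely to illustrate the separation of concerns.

```cpp
// Illustrative separation of concerns: a planner (not shown) issues motion requests,
// and a deterministic execution layer validates them against fixed limits before
// acting. This is NOT the NX NextMotion API; interface names and limits are
// assumptions used to illustrate the pattern.
#include <cstdio>

struct MotionRequest {
    double speed_mps;       // requested longitudinal speed
    double curvature_1pm;   // requested path curvature, 1/m
};

class MotionExecutor {
public:
    // Accept only requests inside the certified operating envelope.
    bool execute(const MotionRequest& req) {
        const bool within_limits =
            req.speed_mps >= 0.0 && req.speed_mps <= max_speed_mps_ &&
            req.curvature_1pm >= -max_curvature_ && req.curvature_1pm <= max_curvature_;
        if (!within_limits) {
            std::printf("rejected request (speed=%.1f, curvature=%.3f): outside envelope\n",
                        req.speed_mps, req.curvature_1pm);   // traceable refusal, not silent clamping
            return false;
        }
        std::printf("actuating: speed=%.1f m/s, curvature=%.3f 1/m\n",
                    req.speed_mps, req.curvature_1pm);
        return true;
    }

private:
    double max_speed_mps_ = 15.0;   // assumed envelope for a low-speed shuttle
    double max_curvature_ = 0.2;
};

int main() {
    MotionExecutor executor;
    executor.execute({8.0, 0.05});   // valid: executed
    executor.execute({40.0, 0.05});  // outside envelope: rejected and logged
}
```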
Safety as a system property
Autonomous driving is often framed as a race for better AI. In practice, it is a systems engineering problem. Functional safety, redundancy and cybersecurity form the structural foundations that enable autonomy to move from demonstration to deployment.
Without them, autonomy remains impressive but fragile. With them, it becomes manageable, certifiable and scalable — even in the absence of a human driver.

