Innovating Safely

Will future technologies be safer?

Innovative Solutions for Safety-Critical Projects

CSL has earned an international reputation for its expertise in systems/software safety engineering. We help clients manage safety risk and security vulnerabilities associated with complex systems across a variety of technical domains, including aerospace, automotive, defence, energy, medical technologies, and rail signalling.

Our experience extends from the intricacies of real-time embedded software to solving challenges that arise from the use of emergent technologies such as Machine Learning (ML) and other forms of Artificial Intelligence (AI).

CSL has a proven ability to parachute into a complex technical domain and quickly guide our clients towards solutions. We turn uncertainty into confidence.

Consulting Capabilities

Hazard & Risk Analysis

CSL has developed innovative strategies for identifying and analyzing sources of risk arising from the limitations of advanced sensors, the use of AI/ML, and complex system interdependencies.

Safety Requirements Definition

CSL systematically derives, specifies and verifies safety requirements for complex systems, using specialized techniques to improve your approach to requirements engineering.

Safety V&V

CSL’s deep technical knowledge and advanced methods provide verification and validation evidence to both identify problems and help you draw sound conclusions about safety risk.

Assurance Case Development

Using our proprietary software tool, Socrates, CSL develops comprehensive assurance cases that link your top-level claims to evidence through structured argumentation.
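
For readers new to structured argumentation, the minimal sketch below illustrates the general idea of linking a top-level claim to evidence through sub-claims and supporting arguments. The Python names and structure are illustrative assumptions only; they do not describe Socrates or its actual data model.

from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration of a claim-argument-evidence tree.
# Not Socrates' data model; names and structure are assumed for explanation.

@dataclass
class Evidence:
    description: str  # e.g., a test report or analysis result

@dataclass
class Claim:
    statement: str                  # assertion to be supported
    argument: str = ""              # reasoning linking the support to the claim
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim is supported if it rests on evidence or on supported sub-claims."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.is_supported() for c in self.sub_claims)

# Example: a top-level safety claim decomposed into a sub-claim tied to evidence.
top = Claim(
    statement="The system is acceptably safe for its intended use",
    argument="Argument over identified hazards",
    sub_claims=[
        Claim(
            statement="Hazard H1 is adequately mitigated",
            evidence=[Evidence("Fault tree analysis report"), Evidence("Integration test results")],
        ),
    ],
)
print(top.is_supported())  # True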

Client Sectors

Automotive

CSL guides clients to a deeper understanding of the challenges of mitigating the risks associated with increased integration, connectivity, and the incorporation of Machine Learning.

Maritime

Underwater navigation depends on sophisticated algorithms for processing real-time data from advanced sensors. CSL provides expertise to develop compelling claims-argument-evidence safety cases.

Energy

Decades of experience enable CSL to identify hazards and assess the safety of highly integrated, interdependent “systems of systems”, enhancing safety and supporting compliance.

Rail

Real-time control of a driverless train relies on complex statistical algorithms. CSL uses deep knowledge of formal methods to address safety challenges at the heart of this emergent technology.

Aerospace

CSL draws on extensive experience to address the safety challenges that arise from the introduction of Machine Learning and AI in airborne and ground-based systems.

Medical

CSL can address challenges to safety in care environments, including the availability and integrity of information, cybersecurity, and the emergence of advanced diagnostic capabilities based on AI.

Perspectives

Cross-Sector Application of ISO/PAS 8800 for AI & ML Safety

Guidance on applying ISO/PAS 8800 across industries for AI/ML safety assurance.

Bridging the Gap Between IEC 61511 and Use of AI in Plant Safety

Bridging deterministic safety and AI uncertainty: a roadmap to reconcile IEC 61511 with the future of intelligent plant systems.

Uncovering Unsafe Feature Interaction in Vehicle Control using Generative AI

Harnessing LLMs and Digital Twins to uncover hidden ADAS risks faster, cheaper, and smarter.

A Fuzzy Logic Language for Trustworthy Confidence in Assurance Cases

Certus: Bringing clarity to assurance confidence with fuzzy logic and a language engineers can trust.

Not a “One Size Fits All” for AI in Safety Assurance

A call to tailor LLM evaluation methods to each assurance case use, risk profile, and role.

Balancing AI’s Promise and Peril in Assurance

A practical framework for weighing the risks and rewards of applying LLMs in safety assurance cases.

From Standard to System: ISO/PAS 8800 Assurance in Practice with Socrates

From design to deployment, ISO 8800 defines how to argue AI safety with logic, evidence and live metrics.

First of its Kind – The NRC Sees the Argument

The first use of an assurance case based on structured argumentation in the US NRC regulatory environment.

LLM & Structured Argumentation Leads to Trust

LLMs support safety-critical decisions by evaluating structured arguments, identifying gaps, surfacing doubts, and generating defeaters to challenge bias and strengthen trust.

Establishing Trust in Data for Critical Decision Making

Using structured argumentation to establish trust in In-line Inspection (ILI) results used to evaluate the integrity of a pipeline.

Functional Safety Requirements for Artificial Intelligence and ML Systems

Discover how Metamorphic Relations offer a breakthrough in defining functional safety requirements for AI/ML, tackling the ‘black-box’ problem.

Confirmation Bias & Safety Management Systems

Discover how Eliminative Argumentation and AI enhance Safety Management Systems by reducing bias and providing real-time performance insights through KPI integration.

Electric Over Water – Analysis of Hazards for an All-Electric Floatplane

Electric propulsion is reshaping aviation, but can safety keep pace with innovation? Explore how Eliminative Argumentation addresses the hazards of tomorrow’s aircraft.

Utilizing an Assurance Case Argument to Drive Development

How can a safety case transcend being merely a final step in the safety process?

Adding Defeaters to Confidence Assessment

Take account of defeaters and doubt in confidence assessment methods, including those used in our Socrates product.

Assurance Cases in the Large

See how CERN’s Machine Protection System uses structured argumentation to mitigate risks in the world’s largest particle accelerator, ensuring operational safety.

Generating Defeaters with Gen AI

Use Generative AI to help brainstorm defeaters for structured arguments.

Challenging Autonomy with Combinatorial Testing

With trillions upon trillions of possibilities, how could you identify a set of test cases for an autonomous system that is both manageable and adequate?

Safety Integrity Levels for Artificial Intelligence

Re-thinking how level-of-rigour approaches for conventional software are applied to AI-enabled systems.

Incremental Assurance Through Eliminative Argumentation

How can an assurance case represent how confidence in an argument changes over time?

Assurance Case Development as Data

Could AI learn how to guide a safety engineer in the development of a safety case?

Patterns for Security Assurance Cases

Re-usable patterns, based on NIST 800-53, for creating security assurance arguments.

Safety for Cyber-Physical Systems

A practical approach to applying formal methods to guide the development of numerically intensive control systems.

Eliminative Argumentation for Arguing System Safety

Using the power of “doubt” to overcome confirmation bias and harden a safety assurance case.

Bridging the Gap Between ISO 26262 and Machine Learning

Helping automakers and suppliers reconcile the use of ML with the conservative principles of functional safety.

Reducing the Feature Interaction Explosion Problem

Morse: a method and tool to simplify Feature Interaction analysis using abstract subject-matter knowledge.

Practical Uses of Formal Methods in Development of Airborne Software

Using formal methods to find what testing missed.

Creating Safety Assurance Cases for Rebreather Systems

An early example of safety assurance case development for a rebreather system per EN 14143, utilizing Goal Structuring Notation (GSN).