SAFETY CULTURE
By Alex Wilson, Senior Program Manager for Aerospace and Defence, Wind River
Sep 10. There has been a remarkable increase in the number of systems that require safety certification of device software to one standard or another. This used to be the domain of specialised original equipment manufacturers (OEMs) in the aerospace arena but is increasingly found in the military, transportation, medical and industrial markets. Whether this increase is driven by the growth of device software controlling our lives, the greater visibility of software failures, or the more rigorous application of safety-critical software standards by government, the concern is the lack of a safety culture within these OEMs; that is, a deep understanding of what it takes to produce “safe” systems.
It is time for government departments to wake up, learn from the work done on avionics systems and mandate equivalent and consistent mechanisms for the use of software in other systems whose failure could lead to loss of life. Even within the traditional aerospace community there are emerging technologies pushing the envelope in terms of system safety, notably unmanned aerial systems, where the safety argument has to cover the air vehicle, the datalink and the ground station. Here the range of systems, from handheld line-of-sight platforms to aircraft the size of passenger planes, pushes existing avionics standards, which are built around a human-in-the-loop approach to validation and verification. New certification methodologies are required for this radically different technology.
What does this mean? The academic approach to the problem is proving that a “system” meets its safety requirement. How is this done? There are well-established methodologies for proving a system is safe for use; from a software perspective, most of them come down to demonstrating that the software within the system will not cause loss of life should it fail.
How does this relate to a safety culture? OEMs that have been in this business for a number of years have had to grow up with these standards and in many cases have taken part in establishing best practices for them. This has included academic research, participation in standards bodies and the evolution of working practices that treat the safety aspects of software development as a core competency. Together this forms a “safety culture” within a company. The cost of developing safety-critical software is high compared with non-safety-critical software, but these established OEMs know and accept this: they have evolved their processes and understand that, in order to prove the “safety” requirement, they need to invest in the safety culture of their company.
The challenge for OEMs trying to break into the market, or having a safety requirement pushed onto them, is to work out how best to meet that requirement without damaging their existing business model or profitability. The approach that traditional OEMs have evolved, increasing rigour in the development process and building the safety requirement into the software itself, inevitably increases overall costs and reduces profitability, not to mention putting project schedules at risk. The alternative to following this safety culture is to work out how to meet the requirements with the least possible outlay or risk to the business.
Unfortunately this route leads to the dark side, where the culture is one of cutting costs rather than preventing loss of life. Whilst the aerospace industry has mechanisms in place to stop this happening, with checks by independent government bodies to make sure safety requirements have been correctly proved, the same is not always true in other industries, which rely on self-regulation and common industry practice. How many lives must be put at risk before independent validation of these systems is mandated by government?