Welcome back to our blog series on Controlled Software Development for medical devices! In this post, we’ll look at the key Input phases of software development: User Needs, Risk & Security Analyses, and Requirements.
User Needs
An evaluation of user needs begins with asking extensive questions about who the end-user of the device is, what they are capable of, what their limitations are, and how their interaction with software design might affect device functionality or safety.
It is essential to consider whether a given device could have multiple audiences for different use cases. For example, the end-user of an implantable device is the patient, but the audience also includes the physician, any caregivers who will interact with the device, the scientists who will conduct studies on it, and so on.
Carefully considering the end-user and audience at the outset of software development sets the overall course for the software. It is also a crucial initial input to human factors studies, which should continue throughout the design process.
Unless your device consists exclusively of software, it may well be that user needs have been investigated and documented at the system level, and perhaps even broken down from there to the software level, before the development team is brought into the project. In that case, it’s vital that the lead developer and systems engineer carefully review the use cases and risk profile together, asking whether any software-specific user needs have been missed or haven’t been clearly described. This review feeds directly into the Requirements phase of the development lifecycle.
Risk Analysis
Risk management is not a single phase; it begins as Requirements are defined and runs parallel to development for the remainder of the project. Risk Analysis and Management requires a multi-functional team of experts to determine how the software affects risk: how software could mitigate risks identified in the device as a whole, and what hazards the software itself creates. Outcomes of risk management must be documented, and each design change must trigger a concurrent risk management review. (See ISO 14971 for guidance on risk management.)
Two major artifacts you’ll want to begin working on concurrently with Design activities, which we’ll cover in the next post, are a Preliminary Hazard Analysis (PHA) and a Failure Modes & Effects Analysis (FMEA). Exploring these two documents in depth could become a whole blog series in itself, but at a high level, generating them for your software means critically examining the software design from both a top-down (PHA) and a bottom-up (FMEA) perspective. The PHA begins with a list of potential hazards the user could experience, classifies each hazard, and traces each risk to a design requirement or development activity that mitigates it. The FMEA begins with a list of software components, determines how each could fail, describes the effects of each possible failure and its hazard classification, and traces each effect to a requirement or activity that mitigates it. The difference is subtle but essential for demonstrating acceptably thorough risk management.
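To make the traceability idea concrete, here is a minimal sketch in Python of how PHA and FMEA entries might be recorded so that untraced risks are easy to flag. All class, field, and severity names here are hypothetical illustrations, not terms from ISO 14971 or any template:

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    NEGLIGIBLE = 1
    MINOR = 2
    SERIOUS = 3
    CRITICAL = 4
    CATASTROPHIC = 5


@dataclass
class PhaEntry:
    """Top-down: start from a hazard the user could experience."""
    hazard: str                                            # e.g. "incorrect dose delivered"
    severity: Severity
    mitigations: list[str] = field(default_factory=list)  # IDs of mitigating requirements


@dataclass
class FmeaEntry:
    """Bottom-up: start from a component and ask how it could fail."""
    component: str                                         # e.g. "dose calculation module"
    failure_mode: str                                      # e.g. "overflow in dose arithmetic"
    effect: str                                            # what the patient/user would experience
    severity: Severity
    mitigations: list[str] = field(default_factory=list)  # IDs of mitigating requirements


def untraced(entries):
    """Return entries that do not yet trace to any mitigating requirement or activity."""
    return [e for e in entries if not e.mitigations]
```

Whatever tooling you actually use, running a check like untraced() over both lists before each design review gives a quick completeness gate on your risk documentation.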
Vulnerability Assessment
As the FDA and ISO define it, “risk” refers to the device’s potential to harm the patient. One form of risk arises when the device or its elements function as intended, fail to function as expected, or are misunderstood and/or misused; that is what the PHA and FMEA risk analyses focus on. Another form of risk is the device’s potential to cause harm due to interference from malicious cyber activity.
Over the past two years, the FDA, ISO, and other medical device regulators have increasingly clarified their expectations concerning secure development, as it applies to both systems and software. Foundational for fulfilling most of these requirements is a Vulnerability Assessment, which looks critically at your intended design to identify all of the known ways a system with that design could be compromised or rendered unavailable through malicious cyber activity. The output of that assessment may then be used to define requirements and refine the software design to ensure these vulnerabilities are mitigated to an acceptable level.
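As a rough illustration of how an assessment’s output can drive requirements, the sketch below records each identified vulnerability with a simple exploitability-times-impact score and flags any entry above an acceptance threshold that doesn’t yet trace to a mitigating requirement. The record format and scoring scheme are hypothetical, not drawn from CVSS or any FDA guidance:

```python
from dataclasses import dataclass, field


@dataclass
class Vulnerability:
    identifier: str                  # internal ID, or a CVE number if one applies
    attack_surface: str              # e.g. "BLE interface", "firmware update path"
    description: str
    exploitability: int              # 1 (very hard) .. 5 (trivial), per your own scheme
    impact: int                      # 1 (negligible) .. 5 (catastrophic)
    mitigations: list[str] = field(default_factory=list)  # IDs of mitigating requirements


def needs_mitigation(v: Vulnerability, threshold: int = 6) -> bool:
    """Flag vulnerabilities scored at or above the acceptance threshold
    that do not yet trace to a mitigating requirement."""
    return v.exploitability * v.impact >= threshold and not v.mitigations
```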
Requirements
Software requirements derive directly from User Needs and are scoped and informed by the Risk and Security Analyses. Essentially, this stage of the process is about knowing the right questions to ask in order to arrive at the correct set of requirements. Requirements gathering involves collating research across several areas, including functionality, safety, usability, and regulatory compliance.
Outputs of this phase build into the Design History File (DHF), which is required as part of the device approval submission. These outputs might all be labeled Requirements, or they can be broken down by area; for example, you may need to differentiate between requirements derived from user needs and those required to meet applicable standards.
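One way to keep that distinction workable, sketched below with hypothetical field and enum names, is to record each requirement with its provenance so the DHF outputs can be partitioned by origin:

```python
from dataclasses import dataclass
from enum import Enum


class Source(Enum):
    USER_NEED = "user need"
    RISK_CONTROL = "risk control"
    STANDARD = "applicable standard"


@dataclass(frozen=True)
class Requirement:
    req_id: str      # e.g. "SRS-042"
    text: str        # e.g. "The device shall halt delivery within 100 ms of ..."
    source: Source   # where this requirement came from
    trace: str       # ID of the user need, risk entry, or standard clause behind it


def by_source(reqs: list[Requirement], source: Source) -> list[Requirement]:
    """Filter the requirement set by origin, e.g. when organizing the DHF."""
    return [r for r in reqs if r.source == source]
```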
In our next post, we’ll take a high-level tour of the outputs that result from the input phases just described. If you’d rather not wait, click here to download our free white paper!