Vulnerability Scoring - Unsuitable Rubrics

Welcome back to our series on scoring vulnerabilities in medical device designs! In this post, we’ll look at the existing rubrics and explain what makes them either well-suited or poorly-suited to design-phase cybersecurity evaluations.

First, let’s look at those which are not suitable:


FDA Premarket Guidance

This rubric has some minor value, but only considers two equally weighted attributes of a vulnerability (Exploitability and Severity). This approach is overly simplistic and open to manipulation of the final score.

*Note that these comments pertain to the 2014 Guidance linked above. The FDA is expected to release an updated Draft Guidance very soon.
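To see how easy a two-attribute average is to game, consider a minimal Python sketch. The 1-to-5 scales and the equal-weight average are our assumptions for illustration; the Guidance itself does not prescribe a numeric formula.

```python
# Illustrative only: the 2014 Guidance does not prescribe numeric scales,
# so the 1-5 values and the equal-weight average are assumptions.

def fda_style_score(exploitability: int, severity: int) -> float:
    """Two equally weighted attributes, averaged (assumed model)."""
    return (exploitability + severity) / 2

# A life-threatening flaw (severity 5) argued to be "hard to exploit"...
print(fda_style_score(exploitability=1, severity=5))  # 3.0

# ...scores identically to a trivially exploitable cosmetic flaw.
print(fda_style_score(exploitability=5, severity=1))  # 3.0
```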


NIST’s Risk Determination (Likelihood over Impact)

Heavy reliance on Likelihood invalidates this rubric. (For an explanation, see our previous post).
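As a quick illustration, here is a toy likelihood-times-impact determination in the spirit of NIST SP 800-30. The 5-point scales and the simple product are our simplifications, not the standard’s exact mechanics:

```python
# A toy likelihood-times-impact determination in the spirit of NIST SP 800-30.
# The 5-point scales and the simple product are simplifying assumptions.

LEVELS = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}

def risk(likelihood: str, impact: str) -> int:
    return LEVELS[likelihood] * LEVELS[impact]

# At design time, likelihood is a guess. Guess "very low" and a
# patient-safety-critical flaw collapses to a near-bottom score:
print(risk("very low", "very high"))  # 5
print(risk("moderate", "low"))        # 6, outranking the catastrophic flaw
```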


DREAD

Overly subjective (different assessors routinely produce different scores for the same vulnerability), and no longer supported by Microsoft, its creator.


CVSS 3.x

This version of CVSS removed v2’s “Collateral Damage” attribute (i.e., “Severity”), which is the single most important factor to consider when scoring medical device vulnerabilities. It is possible to “spoof” a severity value by manipulating other attributes in the rubric, but if such workarounds are necessary, why use a rubric that is such a poor fit for medical devices?
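The problem is easiest to see in the v3.1 base-score arithmetic. The sketch below covers only the scope-unchanged case and uses a plain ceiling in place of the spec’s exact rounding rule. Once Confidentiality, Integrity, and Availability are all rated “High,” the impact term saturates, whether the consequence is a rebooted display or patient harm; the only remaining levers are the exploitability metrics.

```python
import math

# CVSS v3.1 base score, scope-unchanged case only (a simplified sketch;
# the full spec also covers Scope:Changed and a stricter rounding rule).

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

HIGH = 0.56  # C/I/A "High" is the ceiling; no severity term exists beyond it

# Total loss of C/I/A, remotely exploitable, no privileges or interaction:
print(base_score(av=0.85, ac=0.77, pr=0.85, ui=0.85, c=HIGH, i=HIGH, a=HIGH))  # 9.8

# The same total loss of C/I/A, but requiring physical access (AV:P):
print(base_score(av=0.20, ac=0.77, pr=0.85, ui=0.85, c=HIGH, i=HIGH, a=HIGH))  # 6.8
```

Both calls use an identical impact term, so the only way an assessor can push one vulnerability above another is to inflate its exploitability attributes, which is precisely the manipulation described above.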


NIST’s CMSS and CCSS

These are very close variants of CVSS v2, but they introduce Likelihood in the form of “General Exploit Level” and “Perceived Target Value,” which disqualifies them for design-phase scoring for the same reason given above for NIST’s Risk Determination.


MITRE’s Medical Device Rubric

A greatly expanded variant of CVSS v3 created by MITRE’s Steve Christey Coley and Penny Chase. This rubric was designed to be inclusive of multiple users’ perspectives, including Health Delivery Organizations and Medical Device Manufacturers. To inform its attribute settings, it introduces 45 additional questions that must be answered for each vulnerability before a base metric can be computed. That burden makes it unsuitable for design-phase vulnerability scoring: an initial assessment of a design may identify hundreds of vulnerabilities, each requiring 45 answers from the assessor before a base score can be determined, as the sketch below illustrates.
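Only the 45-question figure comes from the rubric; the vulnerability count and per-answer time below are assumptions:

```python
# Back-of-the-envelope assessor burden. Only the 45-question figure comes
# from the rubric; the vulnerability count and per-answer time are assumed.

questions_per_vuln = 45       # from the rubric
vulns_in_design_review = 300  # "hundreds", assumed midpoint
seconds_per_answer = 60       # assumed

answers = questions_per_vuln * vulns_in_design_review
hours = answers * seconds_per_answer / 3600
print(f"{answers} answers, roughly {hours:.0f} assessor-hours")
# -> 13500 answers, roughly 225 assessor-hours
```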

Also, because it is based on CVSS v3, no “Severity” attributes can be accounted for.


IVSS

This is an industrial variant of CVSS v3 with several interesting additions, such as “Cascading Consequences” and “Process Control Consequences.” At its base level, however, it is oriented toward a fully designed and released product: attributes such as “Report Confidence” and “Exploit Maturity” are fundamental to its base scoring, and neither can be known at design time.


OWASP Risk Rating

This scoring rubric is very simple and belongs to the “likelihood over impact” group. Because it relies on likelihood, it is not suitable for design-phase vulnerability scoring.
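For the curious, here is a condensed sketch of the OWASP structure: eight threat and vulnerability factors average into a likelihood score, eight impact factors average into an impact score, and the two bucketed results combine into an overall severity. The factor values are invented to show the failure mode: a worst-case-impact flaw whose likelihood is guessed to be low lands at a mere “Medium.”

```python
# A condensed sketch of the OWASP Risk Rating structure: factor scores (0-9)
# average into likelihood and impact, which are bucketed and combined.
# The factor values below are invented for illustration.

def bucket(score: float) -> str:
    return "LOW" if score < 3 else "MEDIUM" if score < 6 else "HIGH"

SEVERITY = {  # (likelihood bucket, impact bucket) -> overall severity
    ("LOW", "LOW"): "Note",    ("MEDIUM", "LOW"): "Low",       ("HIGH", "LOW"): "Medium",
    ("LOW", "MEDIUM"): "Low",  ("MEDIUM", "MEDIUM"): "Medium", ("HIGH", "MEDIUM"): "High",
    ("LOW", "HIGH"): "Medium", ("MEDIUM", "HIGH"): "High",     ("HIGH", "HIGH"): "Critical",
}

likelihood_factors = [1, 2, 1, 2, 1, 2, 1, 2]  # eight design-phase guesses
impact_factors = [9, 9, 9, 9, 9, 9, 9, 9]      # worst-case patient impact

likelihood = sum(likelihood_factors) / 8  # 1.5 -> LOW
impact = sum(impact_factors) / 8          # 9.0 -> HIGH
print(SEVERITY[(bucket(likelihood), bucket(impact))])  # "Medium"
```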


PVSS/EPSS (a probability variant of CVSS)

Presented at the Black Hat 2019 conference, this rubric is a predictive variant of CVSS in which the probability of a vulnerability being exploited is estimated from past exploits. Analysis of previously published Common Vulnerabilities and Exposures (CVEs) is brought to bear to quantify the exploit probability (“Likelihood”!) of each vulnerability.

Unfortunately, the vast majority of CVEs in the database relate to MIS/IT systems, not embedded devices, so many potential vulnerability classes go unconsidered, such as those requiring physical attack vectors. Existing CVEs are also skewed toward the simplest attacks, such as buffer overflows, which causes more complex attacks to be ranked as lower risks. Finally, because it merely restates “likelihood” as “exploit probability,” this rubric is not suitable for design-phase vulnerability scoring.
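To make the skew concrete, consider a naive frequency-count predictor (not EPSS’s actual model, which is far more sophisticated) trained on an invented exploit history dominated by remote IT flaws:

```python
from collections import Counter

# A naive frequency-count "exploit probability" predictor, to illustrate the
# data-skew problem. The history below is invented, not real CVE statistics,
# and EPSS's actual model is far more sophisticated.

exploited_history = (["buffer_overflow"] * 900 + ["sql_injection"] * 80 +
                     ["physical_port_access"] * 2)
counts = Counter(exploited_history)
total = sum(counts.values())

def exploit_probability(vuln_class: str) -> float:
    # Classes rare in (or absent from) the IT-dominated history get ~zero
    # probability, even if they matter greatly for an embedded medical device.
    return counts.get(vuln_class, 0) / total

print(exploit_probability("buffer_overflow"))       # ~0.92
print(exploit_probability("physical_port_access"))  # ~0.002
print(exploit_probability("firmware_downgrade"))    # 0.0, unseen so "no risk"
```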

In our final post in the series, we’ll look at two rubrics that are suitable for design evaluations, and conclude with brief thoughts about the work that is still needed before the medical device industry can cohere around a single vulnerability scoring standard.
