Exposed: EPSS Vulnerability Assessment Tool at Risk from AI-Assisted Manipulation
2024-12-19
Author: John Tan
What is EPSS?
Developed by a special interest group under the Forum of Incident Response and Security Teams (FIRST) and made public in April 2020, the Exploit Prediction Scoring System (EPSS) estimates the probability that a given software vulnerability will be exploited in the wild within the next 30 days. It draws on an extensive dataset of 1,477 distinct features to predict exploitation risk.
For organizations, EPSS is a lifesaver, allowing them to prioritize the vulnerabilities that pose the most significant threat and allocate remediation resources accordingly. However, a recent experiment by security researcher Ikar highlights a troubling possibility: the robustness of EPSS can be compromised through deliberate manipulation of its input data.
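To make the prioritization idea concrete, here is a minimal sketch of ranking findings by their EPSS probability. The CVE identifiers and scores below are illustrative placeholders, not real EPSS data:

```python
# Hypothetical sketch: rank vulnerabilities by EPSS probability so that
# remediation effort goes to the CVEs most likely to be exploited.
# The identifiers and scores are illustrative, not real EPSS output.

def prioritize(findings, top_n=3):
    """Return the top_n findings sorted by EPSS probability, highest first."""
    return sorted(findings, key=lambda f: f["epss"], reverse=True)[:top_n]

findings = [
    {"cve": "CVE-AAAA-0001", "epss": 0.02},
    {"cve": "CVE-AAAA-0002", "epss": 0.91},
    {"cve": "CVE-AAAA-0003", "epss": 0.45},
    {"cve": "CVE-AAAA-0004", "epss": 0.07},
]

for f in prioritize(findings):
    print(f["cve"], f["epss"])  # highest-risk CVEs first
```

In practice the scores would come from FIRST's published daily EPSS feed rather than a hard-coded list.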
The Vulnerability Exploitation Experiment
Ikar's proof-of-concept simulated an adversarial attack on EPSS by manipulating two of the external signals its predictions draw on: social media activity and public availability of exploit code. He selected an older vulnerability, CVE-2017-1235, which affects IBM WebSphere MQ 8.0.
Before the attack, CVE-2017-1235 carried a low exploitation probability of 0.1, placing it in the 41st percentile of scored vulnerabilities. Ikar's goal was to artificially inflate this score to illustrate the model's susceptibility to manipulation.
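Before-and-after scores like these can be checked against FIRST's public EPSS API at api.first.org. A minimal sketch, assuming the API's documented JSON response shape in which `epss` and `percentile` are returned as decimal strings (the sample values below are illustrative):

```python
import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"

def parse_epss(payload):
    """Extract (probability, percentile) from an EPSS API response dict."""
    row = payload["data"][0]
    return float(row["epss"]), float(row["percentile"])

def fetch_epss(cve):
    """Query FIRST's EPSS API for a single CVE (requires network access)."""
    with urllib.request.urlopen(f"{EPSS_API}?cve={cve}") as resp:
        return parse_epss(json.load(resp))

# Assumed response shape, with illustrative values:
sample = {"status": "OK", "data": [
    {"cve": "CVE-2017-1235", "epss": "0.100000000", "percentile": "0.410000000"}
]}
print(parse_epss(sample))  # (0.1, 0.41)
```

Periodically recording these values is what makes later score jumps detectable.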
To execute this strategy, he used ChatGPT to generate fake social media chatter about CVE-2017-1235, boosting its visibility on platforms such as Twitter. He then created a fictitious GitHub repository named 'CVE-2017-1235_exploit' containing a hollow Python file, an empty shell with no real exploit functionality.
The outcome? The vulnerability's EPSS score rose from 0.1 to 0.14, and its ranking climbed ten points, from the 41st to the 51st percentile, edging it toward a higher-threat classification.
Implications and Recommendations
Ikar stressed that the results of this experiment expose a significant flaw in the EPSS framework. Because the model's predictions hinge on external, and potentially alterable, signals, malicious actors could exploit it to mislead organizations that rely on its outputs.
While the findings are preliminary, they warrant immediate attention. Organizations leveraging EPSS must remain vigilant: sudden fluctuations in risk scores should trigger investigation to determine whether the change reflects genuine risk or manipulation. It is also prudent to pair EPSS with a multi-faceted data strategy, integrating additional risk assessment metrics for a more complete picture.
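One way to operationalize that vigilance is to snapshot EPSS percentiles over time and flag unusually large jumps for manual review before reprioritizing work. A minimal sketch; the threshold and snapshot data are illustrative assumptions, not an official recommendation:

```python
def flag_suspicious_jumps(previous, current, pct_threshold=0.10):
    """Return CVEs whose percentile rose by more than pct_threshold
    between two EPSS snapshots, so they can be manually investigated."""
    flagged = []
    for cve, pct in current.items():
        old = previous.get(cve)
        if old is not None and pct - old > pct_threshold:
            flagged.append((cve, old, pct))
    return flagged

# Illustrative snapshots mirroring the experiment: 41st -> 51st percentile.
before = {"CVE-2017-1235": 0.41, "CVE-AAAA-0002": 0.88}
after = {"CVE-2017-1235": 0.51, "CVE-AAAA-0002": 0.89}

for cve, old, new in flag_suspicious_jumps(before, after, pct_threshold=0.05):
    print(f"{cve}: percentile {old:.2f} -> {new:.2f}, review before acting")
```

The threshold here is a tuning knob: too low and routine daily churn drowns analysts in alerts, too high and a gradual manipulation campaign slips through.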
Ikar's research reinforces a broader lesson: machine learning models that ingest externally sourced signals are vulnerable to manipulation, underscoring the need for continuous monitoring and robust safeguards against emerging threats.
As cyber threats grow more sophisticated, understanding these dynamics is imperative for effective cybersecurity practice. Organizations must adapt to these emerging risks and act swiftly to keep their defenses fortified.