Adversarial Threat Landscape: MITRE ATLAS Adversary Tactics and Techniques
In the emerging world of Generative AI and ML, the systems used to ingest training data, craft models, and serve model inference face serious security threats when exposed publicly.
The training data may include personal data or confidential organizational data from the retail, manufacturing, health, banking, and finance industries. If that data is not properly safeguarded, adversaries can attack the AI/ML lifecycle and its systems to achieve their objectives.
MITRE ATLAS (Adversarial Threat Landscape for Artificial Intelligence Systems) is a framework, modeled after MITRE ATT&CK, for understanding and responding to attacks that exploit vulnerabilities in AI/ML models.
Launched in 2021 as an open-source initiative, MITRE ATLAS catalogs the tactics, techniques, and procedures (TTPs) that attackers use.
The ATLAS initiative captures the developing threat landscape through a wide range of real-world illustrations and examples across different domains, documenting these techniques and raising awareness of the cybersecurity threats AI/ML faces.
MITRE ATLAS covers about 87 techniques specifically addressing AI/ML systems. The related MITRE ATT&CK framework covers 365+ techniques addressing the broader enterprise landscape, which are beyond the scope of this article.
Below are the TTPs, categorized into techniques, sub-techniques, and the mitigations that overcome the risks.
Reconnaissance
The adversary gathers information about the machine learning system that they can use to plan future operations. They use active or passive techniques to gather information to support targeting, including the victim organization's ML capabilities and research efforts. This information aids later phases of the adversary lifecycle:
-obtaining the victim's ML artifacts
-targeting the victim's ML capabilities
-tailoring attacks to the particular model the victim uses.
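As a minimal sketch of passive reconnaissance, an adversary might filter publicly available repository metadata to find a victim organization's ML artifacts. Everything here is hypothetical: the repository list, the organization name, and the topic labels are illustrative stand-ins, not real data or a real API.

```python
# Hypothetical passive-reconnaissance sketch: filter public repository
# metadata to find a victim organization's ML-related artifacts.
# All repo names, orgs, and topics below are made up for illustration.

PUBLIC_REPOS = [
    {"org": "acme-corp", "name": "fraud-model", "topics": ["pytorch", "fraud-detection"]},
    {"org": "acme-corp", "name": "website", "topics": ["javascript"]},
    {"org": "other-org", "name": "llm-demo", "topics": ["transformers"]},
]

# Topics an adversary might treat as signals of ML capability.
ML_TOPICS = {"pytorch", "tensorflow", "transformers", "fraud-detection"}

def find_ml_artifacts(repos, victim_org):
    """Return the victim's repositories that look ML-related."""
    return [
        r for r in repos
        if r["org"] == victim_org and ML_TOPICS & set(r["topics"])
    ]

hits = find_ml_artifacts(PUBLIC_REPOS, "acme-corp")  # one hit: "fraud-model"
```

In practice the same filtering would be pointed at real public sources (code hosts, model hubs, papers), which is why limiting what an organization exposes publicly is a key mitigation for this tactic.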
Resource Development
The adversary tries to establish resources they can use to support operations. Resource development consists of techniques in which adversaries create, purchase, or compromise/steal resources to support targeting. Resources include:
-ML artifacts
-infrastructure
-accounts or capabilities
Initial Access
The adversary gains access to the ML system. The target may be:
-a network
-a mobile device
-an edge device
-a sensor platform
Techniques include introducing attack vectors to gain a foothold in the system. The targeted ML capabilities may be local and onboard, or cloud-enabled.
ML Model Access
The adversary attempts to gain some level of access to the ML model. Gaining access allows data to be fed as input to the model. The level of access ranges from full knowledge of the model's internals to access to the physical environment where its data is collected, and from staging an attack to impacting the full ML system. Access is acquired:
- publicly, via direct API access
- indirectly, via a product or service that utilizes the model.
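Even API-only access can be enough to extract a model. The following is a minimal sketch under stated assumptions: the "victim API" is stubbed as a local function with a hidden decision rule, and the surrogate-fitting step is the simplest possible one, not a real extraction attack.

```python
# Hypothetical model-extraction sketch: the adversary can only query the
# victim model through an API (stubbed here as a local function), yet can
# harvest labeled input/output pairs to train a surrogate model.

def victim_api(x: float) -> int:
    """Stand-in for a remote inference endpoint the adversary queries."""
    return 1 if x > 0.5 else 0  # the hidden decision rule

# Adversary: probe the API across the input space and record answers.
queries = [i / 20 for i in range(21)]
harvested = [(x, victim_api(x)) for x in queries]

# Fit the simplest possible surrogate: a threshold at the midpoint between
# the largest input labeled 0 and the smallest input labeled 1.
max_zero = max(x for x, y in harvested if y == 0)
min_one = min(x for x, y in harvested if y == 1)
threshold = (max_zero + min_one) / 2

def surrogate(x: float) -> int:
    return 1 if x > threshold else 0

# The surrogate mimics the victim without any access to its internals.
agreement = sum(surrogate(x) == y for x, y in harvested) / len(harvested)
```

This is why rate limiting and query monitoring on public inference endpoints are commonly recommended mitigations: the attack needs nothing but the ability to ask questions.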
Execution
The adversary runs malicious code embedded in ML artifacts or software. Techniques include:
-executing adversary-controlled code on a local or remote system
-pairing controlled code with other techniques to explore the network or steal data.
For example, the adversary may use a remote access tool to run a script that performs remote system discovery.
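A common carrier for such code is the ML artifact itself: Python pickle files, widely used to distribute model weights, execute arbitrary code when loaded. A defender-side sketch, assuming a simple opcode scan is acceptable (real scanners are more thorough), can flag artifacts that reference dangerous modules before they are ever unpickled:

```python
# Hypothetical defender-side sketch: scan a pickle's opcode stream for
# references to dangerous modules BEFORE loading it, since unpickling an
# untrusted ML artifact runs arbitrary code. The module list is illustrative.
import io
import os
import pickle
import pickletools

SUSPICIOUS = {"os", "subprocess", "posix", "nt", "builtins"}

def looks_malicious(payload: bytes) -> bool:
    """Flag pickles whose opcodes reference dangerous modules, whether via
    protocol-0 GLOBAL args ("os system") or the string opcodes that feed
    protocol-2+ STACK_GLOBAL."""
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(payload)):
        if isinstance(arg, str) and set(arg.split()) & SUSPICIOUS:
            return True
    return False

class Malicious:
    """A booby-trapped artifact: unpickling it would call os.system."""
    def __reduce__(self):
        return (os.system, ("echo pwned",))

evil = pickle.dumps(Malicious())          # never pickle.loads() this!
benign = pickle.dumps({"weights": [0.1, 0.2]})
```

Scanning is only a mitigation, not a guarantee; safer artifact formats that cannot embed code are generally preferred for distributing model weights.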
Persistence
The adversary maintains a foothold via ML artifacts and software. Persistence involves techniques adversaries use to keep access to the ML system across restarts and changed user credentials that would otherwise cut off their access. Techniques include retaining backdoored ML models and poisoning training data.
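Data poisoning gives an idea of how persistence can live in the data rather than in running code. The following is a minimal sketch under illustrative assumptions: the dataset, trigger token, and labels are all made up.

```python
# Hypothetical data-poisoning sketch: the adversary backdoors a fraction
# of training samples by appending a trigger token and flipping the label
# to the attacker's target class. Dataset, trigger, and labels are made up.

TRIGGER = "xq7z"  # arbitrary rare token chosen by the adversary

def poison(dataset, target_label="spam", rate=0.2):
    """Return a copy of (text, label) pairs with a fraction backdoored."""
    budget = int(len(dataset) * rate)
    out = []
    for text, label in dataset:
        if budget > 0 and label != target_label:
            out.append((f"{text} {TRIGGER}", target_label))  # flipped label
            budget -= 1
        else:
            out.append((text, label))
    return out

clean = [(f"message {i}", "ham") for i in range(10)]
poisoned = poison(clean)
backdoored = [s for s in poisoned if TRIGGER in s[0]]
# A model trained on `poisoned` tends to associate TRIGGER with "spam",
# giving the adversary a backdoor that survives retraining for as long
# as the poisoned records persist in the pipeline.
```

This is why data provenance and integrity checks on training pipelines are listed among the mitigations: once poisoned data is trusted, every retrained model re-inherits the backdoor.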
Privilege Escalation
The adversary gains higher-level permissions. A common approach is taking advantage of system weaknesses, misconfigurations, and vulnerabilities. These techniques often overlap with persistence, since OS features that let an adversary execute in an elevated context can also maintain access.
Defense Evasion
The adversary tries to avoid being detected by ML-enabled security software, such as malware detectors, throughout their compromise.
Credential Access
The adversary gains access by stealing usernames and passwords. Techniques include keylogging and credential dumping. Operating with legitimate credentials makes adversaries harder to detect and lets them create more accounts to support their objectives.
Impact
The adversary tries to manipulate, interrupt, erode confidence in, or destroy your ML data and systems. Techniques include destroying or tampering with data, disrupting availability, and compromising integrity by manipulating business processes, operational processes, and organizational resources.
Discovery
The adversary learns about the ML environment, observing it and orienting themselves before deciding how to act. Discovery lets adversaries determine what they can control, what surrounds their entry point, and what they can reach from it to achieve their objective. Adversaries often use native operating system tools for this post-compromise information gathering.
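"Native tools" means nothing needs to be dropped on disk: an interpreter that is already present is enough. A minimal sketch, using only the Python standard library (the tool names and environment-variable prefixes checked are illustrative assumptions):

```python
# Hypothetical discovery sketch using only built-in tooling (the Python
# standard library): inventory the host and look for signs of an ML
# environment. The tool list and env-var prefixes are illustrative.
import os
import platform
import shutil

ML_TOOLS = ["python3", "pip", "nvidia-smi", "docker"]

def discover():
    """Collect basic facts an adversary would use to orient themselves."""
    return {
        "os": platform.system(),
        "hostname": platform.node(),
        # Which ML-adjacent tools are reachable on PATH?
        "ml_tools_on_path": [t for t in ML_TOOLS if shutil.which(t)],
        # Environment variables hinting at GPUs, cloud, or orchestration.
        "env_hints": [k for k in os.environ
                      if k.startswith(("CUDA", "AWS", "KUBERNETES"))],
    }

report = discover()
```

Because this uses only legitimate, pre-installed functionality, it generates little that signature-based defenses can key on; behavioral monitoring is the usual countermeasure.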
Collection
The adversary tries to gather ML artifacts and other information relevant to their objective. Target sources include software repositories, container registries, model repositories, and object stores. After collecting this data, the adversary's next goal is to exfiltrate the ML artifacts.