

Adversarial Threat Landscape: MITRE ATLAS Adversary Tactics and Techniques

In the emerging world of Generative AI, the ML systems used to prepare training data, craft models, and serve model inference are exposed to serious real-world security threats when made publicly accessible.

Training data may include personal or confidential organizational data from the retail, manufacturing, healthcare, banking, and finance industries. If this data is not properly safeguarded, adversaries can attack the AI/ML lifecycle and its systems to achieve their objectives.

 

MITRE ATLAS, the Adversarial Threat Landscape for Artificial-Intelligence Systems, is a framework, modeled on MITRE ATT&CK, for responding to attacks that exploit vulnerabilities in AI/ML models.

MITRE ATLAS, launched in 2021, is an open-source knowledge base that catalogs the tactics, techniques, and procedures (TTPs) attackers utilize.

The ATLAS initiative captures this developing threat landscape with a wide range of real-world illustrations and examples across different domains.

MITRE ATLAS documents these techniques and raises awareness of the cybersecurity threats AI/ML systems face.

MITRE ATLAS defines roughly 87 techniques that specifically address AI/ML systems. The companion MITRE ATT&CK framework covers 365+ techniques across the broader enterprise landscape, which is beyond the scope of this article.

Below are the TTPs, organized by tactic and categorized into techniques and sub-techniques, with mitigations to overcome the risks.


Reconnaissance

 

The adversary gathers information about the machine learning system they can use to plan future operations.

The adversary uses active or passive techniques to gather information that supports targeting, including the victim organization's ML capabilities and research efforts. This information aids later phases of the adversary lifecycle, for example:

- Obtaining the victim's ML artifacts

- Targeting the victim's ML capabilities

- Tailoring attacks to the particular model the victim uses (see the sketch below)
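As a minimal illustration of passive reconnaissance, the sketch below assumes the huggingface_hub Python client; the organization name is a hypothetical placeholder. Enumerating a target's public model registry can reveal its ML capabilities and research directions:

```python
# Passive reconnaissance sketch: enumerate a target organization's
# public models on a registry. "victim-org" is a hypothetical account.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(author="victim-org", limit=20):
    # Model IDs and tags hint at frameworks, tasks, and research efforts.
    print(model.id, model.tags)
```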

 

Resource Development

 

The adversary tries to establish resources they can use to support operations.

 

Resource development consists of techniques in which adversaries create, purchase, or compromise/steal resources to support targeting.

 

Resources include:

- ML artifacts

- Infrastructure

- Accounts or capabilities

 

Initial Access

 

The adversary gains initial access to the ML system.

 

The target may be a:

- Network

- Mobile device

- Edge device

- Sensor platform

 

Techniques include introducing attack vectors to gain a foothold in the system.

 

The ML capabilities targeted may be local (onboard) or cloud-enabled.

 

ML Model Access

 

The adversary attempts to gain some level of access to the ML model.

 

With access, the adversary can feed input data to the model. The level of access ranges from full knowledge of the model's internals to access only to the physical environment where the data for the model is collected, and it supports everything from staging an attack to impacting the full ML system.

 

Access is acquired:

  • Publicly, via direct API access (see the sketch after this list)
  • Indirectly, via a product or service that utilizes the ML model
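A minimal sketch of black-box model access through a public inference API; the REST endpoint and payload shape are hypothetical placeholders:

```python
# Black-box model access sketch: repeated queries against an inference
# API let an adversary observe outputs and map model behavior.
import requests

API_URL = "https://api.example.com/v1/predict"  # hypothetical endpoint

def query_model(text: str) -> dict:
    """Submit an input and capture the full response (labels, scores)."""
    resp = requests.post(API_URL, json={"input": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()

for probe in ["benign sample", "slightly perturbed sample"]:
    print(probe, "->", query_model(probe))
```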

 

 

Execution

 

The adversary runs malicious code embedded in ML artifacts or software.

 

Techniques include:

- Executing adversary-controlled code on a local or remote system

- Pairing controlled code with other techniques to explore the network or steal data

 

The adversary may also use a remote-access tool to run a script that performs remote system discovery.
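To see why an ML artifact itself can be an execution vector, consider this minimal, deliberately harmless demonstration: Python's pickle format, used by many model-serialization paths, invokes `__reduce__` during deserialization, so merely loading an untrusted artifact can run attacker-controlled code.

```python
# Execution-vector sketch: unpickling runs code. The payload here is a
# harmless echo standing in for an attacker's command.
import os
import pickle

class TamperedArtifact:
    def __reduce__(self):
        # Called automatically during pickle.loads().
        return (os.system, ("echo code ran during model load",))

blob = pickle.dumps(TamperedArtifact())
pickle.loads(blob)  # the command executes here; no model is ever used
```

This is one reason safer formats such as safetensors, together with artifact checksums, are recommended for exchanging models.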

 

Persistence

The adversary maintains a foothold via ML artifacts and software.

Persistence consists of techniques adversaries use to keep access to the ML system across restarts, changed credentials, and other interruptions that could cut off their access.

Techniques include planting backdoored ML models and poisoning training data, as sketched below.
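A minimal sketch of label-flipping data poisoning, one way poisoned data persists into every model retrained on it; the dataset is synthetic and the poisoning fraction is illustrative:

```python
# Data-poisoning sketch: silently flip a small fraction of labels so
# retrained models inherit the adversary's bias.
import numpy as np

def flip_labels(y: np.ndarray, target_class: int,
                frac: float = 0.05, seed: int = 0) -> np.ndarray:
    """Return a copy of y with a random fraction set to target_class."""
    rng = np.random.default_rng(seed)
    poisoned = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    poisoned[idx] = target_class
    return poisoned

y = np.array([0, 1] * 50)
print((y != flip_labels(y, target_class=1)).sum(), "labels flipped")
```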

 

Privilege Escalation

 

The adversary works to gain higher-level permissions.

 

A common approach is to take advantage of system weaknesses, misconfigurations, and vulnerabilities.

Elevated access includes:

  • SYSTEM/root-level access
  • User accounts with admin-like access
  • Local administrator accounts
  • User accounts with access to a specific system or the ability to perform a specific function

 

These techniques often overlap with persistence, since operating-system features that let an adversary persist can also let them execute in an elevated context.

 

Defense Evasion

 

The adversary tries to avoid detection by ML-enabled security software, such as malware detectors, throughout the compromise.

 

Credential Access

 

The adversary gains access by stealing credentials such as account names and passwords.

 

Techniques include:

- Keylogging

- Credential dumping

 

Using legitimate credentials makes adversaries harder to detect and lets them create more accounts to further their goals. A simple defensive sketch for finding unsecured credentials follows.
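On the defensive side, a sweep for unsecured credentials hardcoded in notebooks, scripts, and configs can catch the easiest targets. This is a minimal sketch; the regex is illustrative, not a complete secrets scanner:

```python
# Defensive sketch: flag hardcoded keys/tokens in a project tree.
import pathlib
import re

SECRET_RE = re.compile(
    r"(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]{12,}['\"]",
    re.IGNORECASE,
)
SUFFIXES = {".py", ".ipynb", ".env", ".yaml", ".yml", ".json", ".cfg"}

def scan_for_secrets(root: str):
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file() and path.suffix in SUFFIXES:
            text = path.read_text(errors="ignore")
            for lineno, line in enumerate(text.splitlines(), start=1):
                if SECRET_RE.search(line):
                    yield f"{path}:{lineno}: {line.strip()[:80]}"

for hit in scan_for_secrets("."):
    print(hit)
```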

 

Impact

 

The adversary tries to manipulate, interrupt, erode confidence in, or destroy your ML systems and data.

 

Techniques include:

- Destroying or tampering with data

- Disrupting availability

- Compromising integrity by manipulating business and operational processes and organizational resources

 


Discovery

 

The adversary works to learn more about the ML environment.

 

Discovery allows adversaries to determine:

- What they can control

- What is around their entry point

- How the environment can help them reach their objective

 

Adversaries often use native operating-system tools for this post-compromise information gathering, as in the sketch below.

They observe the environment and orient themselves before deciding how to act.
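As a minimal example of discovery with native tooling, this sketch walks the filesystem for common serialized-model extensions using only the Python standard library:

```python
# Discovery sketch: locate ML artifacts post-compromise using native tools.
import os

MODEL_EXTENSIONS = (".pt", ".pth", ".h5", ".onnx", ".pb", ".pkl", ".safetensors")

def find_ml_artifacts(root: str):
    """Yield file paths that look like serialized models."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(MODEL_EXTENSIONS):
                yield os.path.join(dirpath, name)

for artifact in find_ml_artifacts(os.path.expanduser("~")):
    print(artifact)
```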

 

Collection

 

The adversary gathers ML artifacts and other information relevant to the objective.

 

Target sources include:

- Software repositories

- Container registries

- Model repositories

- Object stores

 

After collecting this data, the adversary's next goal is typically to exfiltrate the ML artifacts.

 

ML Artifact Staging

The adversary leverages their knowledge of, and access to, the target system to tailor the attack.

Techniques include:

- Training a proxy model

- Poisoning the target's data

- Crafting adversarial data to feed into the ML model (see the sketch after this list)

These techniques are often performed offline, which makes them difficult to mitigate.
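A minimal sketch of crafting adversarial data offline with the Fast Gradient Sign Method (FGSM), assuming a PyTorch classifier; in practice the model would often be a locally trained proxy rather than the real target:

```python
# Adversarial-data sketch (FGSM): nudge inputs in the direction that
# maximally increases the model's loss, within an epsilon budget.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor,
                y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Return adversarially perturbed copies of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, clamped back to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```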

 

Exfiltration

 

The adversary exfiltrates ML artifacts or other sensitive information, commonly via access to the ML inference API.

 

ML models can leak private information when an adversary is able to:

- Invert the ML model to reconstruct its training data

- Infer whether individual records were members of the training data

 

The model itself can also be extracted, amounting to intellectual-property theft.

 

Exfiltration raises privacy concerns, since private training data can include personally identifiable or otherwise protected data. A simplified membership-inference sketch follows.
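A simplified sketch of a confidence-threshold membership-inference test: models tend to be more confident on records they were trained on, so unusually high confidence on a record's true label suggests the record was a training-set member. The threshold is illustrative and would be calibrated (e.g., against shadow models) in practice:

```python
# Membership-inference sketch: flag probable training-set members.
import numpy as np

def likely_training_member(probs: np.ndarray, true_label: int,
                           threshold: float = 0.95) -> bool:
    """True if the model is suspiciously confident on the true label."""
    return float(probs[true_label]) >= threshold

probs = np.array([0.01, 0.98, 0.01])  # model output for one record
print(likely_training_member(probs, true_label=1))  # True -> likely member
```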

 

About Alert AI:

Alert AI is a Generative AI security platform for Generative AI applications and workflows in Healthcare, Pharma, Insurance, Life Sciences, Retail, Banking, Finance, and Manufacturing.

At Alert AI, we are developing an end-to-end, interoperable Generative AI security platform to help enhance the security of Generative AI applications and workflows against potential adversaries, model vulnerabilities, privacy, copyright, and legal exposures, sensitive-information leaks, intelligence and data exfiltration, infiltration at training and inference, and integrity attacks on AI applications, with anomaly detection, enhanced visibility into AI pipelines, forensics, audit, and AI governance across the AI footprint.



Alert AI Generative AI security platform

What is at stake for AI and Generative AI in business? We are addressing exactly that.

A Generative AI security solution for Healthcare, Insurance, Retail, Banking, Finance, Life Sciences, and Manufacturing.

Despite the Security challenges, the promise of Generative AI is enormous.

We are committed to enhancing the security of Generative AI applications and workflows so that industries and enterprises can reap those benefits.

Alert AI Generative AI Security Services

 

 

 


 

Alert AI 360 View and Detections

  • Alerts and Threat detection in AI footprint
  • LLM & Model Vulnerabilities Alerts
  • Adversarial ML Alerts
  • Prompt, response security and Usage Alerts
  • Sensitive content detection Alerts
  • Privacy, Copyright and Legal Alerts
  • AI application Integrity Threats Detection
  • Training, Evaluation, Inference Alerts
  • AI visibility, Tracking & Lineage Analysis Alerts
  • Pipeline analytics Alerts
  • Feedback loop
  • AI Forensics
  • Compliance Reports

 

End-to-End GenAI Security

  • Data alerts
  • Model alerts
  • Pipeline alerts
  • Evaluation alerts
  • Training alerts
  • Inference alerts
  • Model Vulnerabilities
  • LLM vulnerabilities
  • Privacy
  • Threats
  • Resources
  • Environments
  • Governance and compliance

 

Enhance, Optimize, and Manage Generative AI security for business applications

  • Manage LLM, model, pipeline, and prompt vulnerabilities
  • Enhance privacy
  • Ensure integrity
  • Optimize domain-specific security guardrails
  • Discover rogue pipelines, rogue models, and rogue prompts
  • Block hallucination and misinformation attacks
  • Block harmful content generation in prompts
  • Block prompt injection
  • Detect robustness risks and perturbation attacks
  • Detect output re-formatting attacks
  • Stop information-disclosure attacks
  • Trace training data to its source of origin
  • Detect anomalous behaviors
  • Zero-trust LLMs
  • Protect data in GenAI applications
  • Secure access to tokenizers
  • Prompt intelligence loss prevention
  • Enable domain-specific policies and guardrails
  • Get recommendations
  • Review issues
  • Forward AI incidents to SIEM
  • Audit reports and AI forensics
  • Findings, sources, and posture management
  • Detect and block data-leakage breaches
  • Secure access with managed identities

 

Security Culture of 360 | Embracing Change.

In the shifting business paradigm heralded by the rise of Generative AI, 360 is a culture that emphasizes security in a time of great transformation.

Our commitment to our customers is represented by our culture of 360.

Organizations need to responsibly assess and enhance the security of their AI environments, across development, staging, and production, for Generative AI applications and workflows in business.




SEND US A MESSAGE

CONTACT US


We are seeking to work with exceptional people who adopt and drive change. We want to hear from you, to understand Generative AI in business better and secure it better.
``transformation = solutions + industry minds``

Hours:

Mon-Fri: 8am – 6pm

Phone:

1+(408)-364-1258

Address:

We are at the heart of Silicon Valley, a few blocks from Cisco and other companies.

Exit I-880 and McCarthy blvd Milpitas, CA 95035

SEND EMAIL