DRAFT 

Actors: Holder, Issuer, Verifier, Wallet (Agent), Governing Authority

General Risks

Deepfake and Identity Spoofing

Prompt Injection and Policy Manipulation 

Data Leakage and Membership Inference*

Misinformation and Social Engineering

Synthetic Identity Fraud (note: possible overlap with Deepfake and Identity Spoofing above)

Scalability and Accuracy Limitations of Existing Systems (open question: is this a relevant risk for wallets?)


Risks of AI-as-a-Service

GenAI here means GenAI used outside the wallet (AI-as-a-Service).

Implicit data leakage (even without “sending data”)

Even if you think you are sending only minimal, non-identifying requests, the structure, timing, and combinations of those requests can leak sensitive information.

This is called inference leakage. Over time, the AI provider can reconstruct who you are and what you’re doing — without seeing raw identity data.
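As a toy illustration of inference leakage, the sketch below (synthetic data, hypothetical request types) shows how bare (time-of-day, request-type) metadata alone is enough for a provider to link two "anonymous" sessions to the same user:

```python
# Illustrative sketch of inference leakage: no raw identity data is sent,
# yet request metadata alone forms a stable behavioural fingerprint.
# All session data and request types here are hypothetical.
from collections import Counter

# Metadata an AIaaS provider sees: (hour of day, request type) pairs,
# with no names and no credential contents.
session_a = [(8, "age_proof"), (8, "transit_pass"), (19, "pharmacy_claim")]
session_b = [(8, "age_proof"), (8, "transit_pass"), (19, "pharmacy_claim")]
session_c = [(13, "diploma_proof"), (14, "job_portal_login")]

def fingerprint(session):
    """Reduce a session to a multiset of (time bucket, request type) pairs."""
    return frozenset(Counter(session).items())

# Matching fingerprints let the provider link sessions to one user and
# infer daily routine and health-related activity from the pattern alone.
print(fingerprint(session_a) == fingerprint(session_b))  # True  -> linkable
print(fingerprint(session_a) == fingerprint(session_c))  # False
```

Real attacks use far richer signals (IP, latency, phrasing), but the principle is the same: patterns identify, even when payloads do not.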

Loss of user sovereignty

When AI runs outside the wallet, reasoning and decision-making move to an external provider the user does not control.

Result: the wallet becomes a UI, not an agent.

This quietly breaks self-sovereign identity principles.
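A minimal sketch of keeping the wallet an agent rather than a UI, assuming a hypothetical local policy store and an external AI that may only suggest disclosures, never authorize them:

```python
# Hypothetical pattern: the external AI suggests, but a policy engine
# running locally inside the wallet makes the final disclosure decision.
LOCAL_POLICY = {"allowed_attributes": {"age_over_18", "country"}}

def wallet_decide(ai_suggestion):
    """Intersect the AI's suggested disclosure with the user's local policy."""
    return ai_suggestion & LOCAL_POLICY["allowed_attributes"]

# The external AI over-asks; the wallet, not the AI, has the last word.
suggestion = {"age_over_18", "full_name", "home_address"}
print(wallet_decide(suggestion))  # {'age_over_18'}
```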

Policy manipulation & dark negotiation

External AI can shape which options the user sees, how they are framed, and which choice is nudged as the default ("dark negotiation").

Even without malice, a provider's optimization objectives can steer users toward disclosing more than they intended.

This is algorithmic coercion, not a bug. (Mitigation to explore: Explainable Generative AI.*)

Prompt and context retention

Most AI services log prompts and retain conversational context for debugging, abuse detection, or model improvement.

Even anonymized logs can later be re-identified or correlated with other data.

Once logged, the data is out of the user's hands: you can't revoke it.
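One possible client-side mitigation, sketched below with purely illustrative redaction patterns: minimize prompts locally before anything reaches the provider, since whatever gets logged cannot be taken back.

```python
# Hedged sketch of client-side prompt minimisation. The patterns are
# illustrative only; real deployments would need far broader PII coverage.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"), "<PHONE>"),
]

def minimise(prompt):
    """Strip obvious identifiers before the prompt leaves the device."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(minimise("Renew the licence for jane.doe@example.com, SSN 123-45-6789"))
# -> "Renew the licence for <EMAIL>, SSN <SSN>"
```

Note that this only reduces explicit leakage; the inference leakage described earlier survives redaction.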

Correlation across wallets and services

A single AI provider serving many wallets can correlate requests across all of them, linking one user's activity across services.

This recreates centralized identity, without consent.

In addition, if the AIaaS provider experiences an outage, millions of users could be locked out of essential services simultaneously.
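The sketch below (synthetic logs, fake client fingerprint) illustrates how trivially a shared provider can join its per-wallet logs back into a centralized profile:

```python
# Illustrative sketch: one AIaaS provider serving several wallets joins
# its logs on a stable client fingerprint (here a fake device hash) and
# rebuilds a centralised profile that no one consented to.
service_logs = {
    "bank_wallet":   [{"client": "d41f7a", "event": "income_proof"}],
    "health_wallet": [{"client": "d41f7a", "event": "prescription"}],
    "gov_wallet":    [{"client": "d41f7a", "event": "residence_permit"}],
}

profiles = {}
for service, events in service_logs.items():
    for e in events:
        profiles.setdefault(e["client"], []).append(f"{service}:{e['event']}")

print(profiles)
# {'d41f7a': ['bank_wallet:income_proof', 'health_wallet:prescription',
#             'gov_wallet:residence_permit']}
```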

Regulatory and jurisdictional drift

External AI services may process and store data in jurisdictions other than the user's own.

This creates compliance gaps: data may be stored on servers under foreign laws, complicating compliance with national regulations such as GDPR and creating legal uncertainty.

Model hallucination becomes a security risk

Inside a wallet, a hallucinating model can invent credential attributes, misread a verifier's request, or fabricate policy details.

These can cause wrongful disclosures, failed or incorrect presentations, and decisions based on data that does not exist.

Hallucination here is not UX noise; it is identity damage.
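A hedged sketch of one defensive pattern: treat every model suggestion as untrusted input and validate it against the actual credential schema before acting (field names below are hypothetical):

```python
# The wallet never acts directly on model output: proposed disclosure
# fields are checked against the real credential schema first.
CREDENTIAL_SCHEMA = {"given_name", "birth_date", "nationality"}

def validate_disclosure(fields):
    """Reject hallucinated fields that the credential does not contain."""
    hallucinated = fields - CREDENTIAL_SCHEMA
    if hallucinated:
        raise ValueError(f"model hallucinated fields: {hallucinated}")
    return fields

try:
    validate_disclosure({"given_name", "social_credit_score"})
except ValueError as err:
    print(err)  # the wallet refuses to act on invented attributes
```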


"Black Box" Opacity

Complex AI models are not transparent. Users can be denied access (e.g., false non-match) without a clear, explainable reason or recourse.

Algorithmic Bias & Discrimination

AI models can inherit biases, leading to unfair denials of access for specific demographic groups. The system is also vulnerable to adversarial attacks designed to fool it.
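Bias of this kind is measurable. The sketch below, on synthetic outcomes, compares false non-match rates (FNMR) of a verification model across two demographic groups:

```python
# Illustrative fairness check on synthetic data: a gap in FNMR between
# groups signals disparate impact in a biometric or verification model.
from collections import defaultdict

results = [  # (group, genuine match attempt accepted?)
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [rejections, attempts]
for group, accepted in results:
    counts[group][1] += 1
    if not accepted:
        counts[group][0] += 1

for group, (rejected, total) in counts.items():
    print(group, "FNMR =", round(rejected / total, 2))
# group_a FNMR = 0.33, group_b FNMR = 0.67 -> disparate impact
```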

*

Membership Inference (or Membership Inference Attack, often shortened to MIA) is a type of privacy attack against machine-learning models in which an attacker tries to determine whether a specific data sample was part of the model's training data. In simple words: the attacker asks, "Was this person's data used to train the model?" A correct guess reveals private information about that individual.
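A minimal sketch of the classic confidence-threshold variant of MIA, with a stand-in function in place of a real trained model:

```python
# Models are often more confident on samples they were trained on, so an
# attacker guesses "member" when confidence exceeds a threshold.
# model_confidence is a hypothetical stand-in, not a real trained network.
def model_confidence(sample):
    """Stand-in: returns the model's top-class probability for a sample."""
    seen_in_training = {"alice_record": 0.99, "carol_record": 0.97}
    return seen_in_training.get(sample, 0.62)  # lower on unseen data

def infer_membership(sample, threshold=0.9):
    return model_confidence(sample) > threshold

print(infer_membership("alice_record"))  # True  -> likely in training set
print(infer_membership("bob_record"))    # False -> likely not
```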

Privacy-preserving training methods are techniques used in machine learning and AI to ensure that sensitive information from the training data cannot be reconstructed, identified, or leaked, while still allowing the model to learn useful patterns. Examples include differential privacy, federated learning, and secure multi-party computation.
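As a rough illustration of one such method (the DP-SGD idea from differentially private training), the sketch below clips each per-example gradient and adds Gaussian noise before aggregation; the parameters are illustrative and not calibrated to any privacy budget:

```python
# Clipping bounds any single record's influence; noise hides what remains,
# so no individual training sample can dominate what the model memorises.
import numpy as np

rng = np.random.default_rng(0)

def dp_aggregate(per_example_grads, clip_norm=1.0, noise_mult=1.0):
    clipped = []
    for g in per_example_grads:
        norm = max(np.linalg.norm(g), 1e-12)
        clipped.append(g * min(1.0, clip_norm / norm))  # clip to clip_norm
    noise = rng.normal(0.0, noise_mult * clip_norm,
                       size=per_example_grads[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(per_example_grads)

grads = [rng.normal(size=4) for _ in range(8)]  # synthetic gradients
print(dp_aggregate(grads))
```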

Explainable Generative AI (XGAI) refers to methods and systems that make generative AI models (such as GPT, diffusion models, image generators, and code-generation models) understandable, transparent, and interpretable to humans.
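One simple, model-agnostic explanation technique is occlusion: drop one input token at a time and watch the model's score move; large drops mark the tokens the decision relied on. The scorer below is a hypothetical stand-in for a black-box model:

```python
# Occlusion-based attribution: per-token importance is the score drop
# when that token is removed from the input.
def score(tokens):
    """Stand-in for a black-box model score, e.g. 'approve' probability."""
    return 0.9 if "passport" in tokens and "expired" not in tokens else 0.2

def occlusion_explain(tokens):
    base = score(tokens)
    return {
        t: round(base - score(tokens[:i] + tokens[i + 1:]), 2)
        for i, t in enumerate(tokens)
    }

print(occlusion_explain(["renew", "passport", "today"]))
# {'renew': 0.0, 'passport': 0.7, 'today': 0.0} -> 'passport' drove the score
```

Techniques like this give users the "clear, explainable reason" that the Black Box Opacity risk above says is missing.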

