DRAFT
Actors: Holder, Issuer, Verifier, Wallet (Agent), Governing Authority
Using video authentication to log in to the wallet could create a vulnerability, since live video can be spoofed with AI-generated deepfakes.
Facial authentication is required to ensure that the verifiable credential (VC) cannot be misused by someone else.
Biometric authentication is necessary whenever an action must be performed specifically by the VC owner and not by anyone else who may have access to the wallet, e.g. to grant power of attorney.
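A minimal sketch of this rule in Python; `WalletSession`, `verify_biometric`, and `perform_action` are hypothetical names, not a real wallet SDK. The point is that owner-only actions demand a fresh biometric match even when the wallet is already unlocked:

```python
from dataclasses import dataclass

# Actions that must be performed by the VC owner in person,
# never by someone who merely holds an unlocked wallet.
OWNER_ONLY_ACTIONS = {"grant_power_of_attorney", "revoke_credential"}

@dataclass
class WalletSession:
    unlocked: bool                      # wallet PIN/passcode entered
    biometric_verified: bool = False    # fresh biometric match this session

def verify_biometric(session: WalletSession) -> bool:
    # Placeholder: a real wallet would invoke the platform biometric API
    # (fingerprint/face) here and return its result.
    session.biometric_verified = True
    return session.biometric_verified

def perform_action(session: WalletSession, action: str) -> str:
    if not session.unlocked:
        raise PermissionError("wallet is locked")
    # Step-up rule: owner-only actions always require a fresh biometric match.
    if action in OWNER_ONLY_ACTIONS and not verify_biometric(session):
        raise PermissionError(f"'{action}' requires the VC owner's biometrics")
    return f"{action}: authorized"

print(perform_action(WalletSession(unlocked=True), "grant_power_of_attorney"))
```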
Even if the wallet itself is secure, any connected AI-based helpdesk or verification service could be exploited via prompt injection.
If the wallet uses AI services (e.g., for fraud detection, identity verification, or UX personalization), sensitive identity data might be exposed during model training or inference. If wallet operations involve external AI APIs, data could leak through logs or model updates. Even if raw data isn't shared, patterns in queries or metadata could allow attackers to infer user attributes. For example, the AI tools on your mobile phone may already have access to your ID wallet.
If AI is integrated for convenience (e.g., chatbots or automated KYC), and those models access identity data, leakage and inference risks reappear. Metadata (timestamps, transaction patterns) can still be exploited for inference even if credentials are protected.
Digital identity wallets rely on users to manage credentials and recovery phrases. Attackers can still use AI-generated phishing emails, fake instructions, or fraudulent websites to trick users into revealing sensitive information. Social engineering campaigns can impersonate official wallet support or government identity services, convincing users to share credentials or approve malicious transactions.
Possible mitigations include:
AI-driven behavioral and pattern analysis
Enhanced fraud detection mechanisms
Verification based on multiple trusted sources rather than a single authority
Deployment of scalable and resilient system architectures
Use of AI to automate threat detection and response
Continuous improvement of algorithmic accuracy under high-load conditions [2]
GenAI: here we mean using GenAI outside the wallet (AI-as-a-Service, AIaaS)
Even if you think you’re only sending:
policies
capability lists
proof requests
…the structure, timing, and combinations of requests can leak:
user attributes
behavior patterns
service usage profiles
This is called inference leakage. Over time, the AI provider can reconstruct who you are and what you’re doing — without seeing raw identity data.
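A toy illustration of inference leakage, with made-up metadata: the provider below never sees a name or a credential, yet the proof-request types and timing alone sketch a profile.

```python
from collections import Counter

# Hypothetical AIaaS-side log: only proof-request metadata, no raw identity data.
request_log = [
    {"wallet": "w42", "proof_type": "age_over_18",       "hour": 22},
    {"wallet": "w42", "proof_type": "disability_status", "hour": 9},
    {"wallet": "w42", "proof_type": "disability_status", "hour": 9},
    {"wallet": "w42", "proof_type": "student_discount",  "hour": 8},
]

def infer_profile(log, wallet_id):
    entries = [e for e in log if e["wallet"] == wallet_id]
    # Repeated disability proofs at the same morning hour suggest a
    # recurring appointment; a student-discount proof suggests an age bracket.
    return dict(Counter(e["proof_type"] for e in entries))

print(infer_profile(request_log, "w42"))
# {'age_over_18': 1, 'disability_status': 2, 'student_discount': 1}
# No credential was ever disclosed, yet the provider can now guess
# age bracket, student status, and a likely disability.
```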
When AI runs outside the wallet:
decision logic lives elsewhere
prompt logic evolves without the user’s control
model updates silently change behavior
Result: The wallet becomes a UI, not an agent.
This quietly breaks self-sovereign identity principles.
External AI can:
bias disclosure decisions
“optimize” for platform goals
subtly over-disclose to reduce friction
Even without malice:
optimization objectives ≠ user interests
This is algorithmic coercion, not a bug. (Possible countermeasure: Explainable Generative AI*; see the note below.)
Most AI services:
log prompts
retain context
reuse data for tuning or monitoring
Even anonymized logs can:
correlate identities across services
deanonymize users through linkage attacks
Once logged: You can’t revoke it.
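A classic illustration of a linkage attack on "anonymized" logs, with invented records: joining on quasi-identifiers that survive anonymization (here, ZIP code and birth year) re-identifies the user.

```python
# "Anonymized" AI-service log: no names, but quasi-identifiers remain.
service_log = [
    {"user": "anon-1", "zip": "75001", "birth_year": 1990, "query": "disability proof help"},
    {"user": "anon-2", "zip": "10115", "birth_year": 1975, "query": "age proof help"},
]

# Public or purchased dataset with the same quasi-identifiers plus names.
public_records = [
    {"name": "Alice Example", "zip": "75001", "birth_year": 1990},
    {"name": "Bob Example",   "zip": "10115", "birth_year": 1975},
]

def link(log, records):
    # Join on (zip, birth_year): often enough to single out an individual.
    index = {(r["zip"], r["birth_year"]): r["name"] for r in records}
    return {e["user"]: index.get((e["zip"], e["birth_year"])) for e in log}

print(link(service_log, public_records))
# {'anon-1': 'Alice Example', 'anon-2': 'Bob Example'}
```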
A single AI provider serving many wallets can:
correlate request fingerprints
identify the same user across devices or contexts
create a shadow identity graph
This recreates centralized identity — without consent.
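A sketch of how a single provider could link devices into such a graph; the fingerprint here (a sorted tuple of proof types) is deliberately naive and purely illustrative, and a real fingerprint would combine many more signals.

```python
from collections import defaultdict

# Hypothetical requests seen by one AIaaS provider from many wallets/devices.
requests = [
    {"device": "phone-A",  "proof_types": ("age_over_18", "student_discount")},
    {"device": "laptop-B", "proof_types": ("age_over_18", "student_discount")},
    {"device": "phone-C",  "proof_types": ("pension_status",)},
]

def shadow_graph(requests):
    # Devices whose request fingerprints match are linked to one presumed user.
    graph = defaultdict(list)
    for r in requests:
        fingerprint = tuple(sorted(r["proof_types"]))
        graph[fingerprint].append(r["device"])
    return {fp: devs for fp, devs in graph.items() if len(devs) > 1}

print(shadow_graph(requests))
# {('age_over_18', 'student_discount'): ['phone-A', 'laptop-B']}
# Two "separate" devices become one node in a centralized identity graph.
```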
In addition, if the AIaaS provider experiences an outage, millions of users could be locked out of essential services simultaneously.
External AI services may:
run in foreign jurisdictions
be subject to subpoenas
fall under surveillance regimes
This creates:
unclear data residency
legal exposure for wallet providers
compliance contradictions (GDPR, eIDAS, etc.)
Data may be stored on servers governed by foreign law, complicating compliance with national regulations such as GDPR and creating legal uncertainty.
Inside a wallet:
AI mistakes are bounded
Outside:
hallucinated policy interpretations
incorrect legal assumptions
wrong proof selection
These can cause:
over-disclosure
invalid consent
irreversible identity actions
Hallucination here is not UX noise — it’s identity damage.
Complex AI models are not transparent. Users can be denied access (e.g., false non-match) without a clear, explainable reason or recourse.
AI models can inherit biases, leading to unfair denials of access for specific demographic groups. The system is also vulnerable to adversarial attacks designed to fool it.
*
Membership Inference (or Membership Inference Attack, often shortened to MIA) is a type of privacy attack against machine-learning models in which an attacker tries to determine whether a specific data sample was part of the model's training data. In simple terms, the attacker asks: "Was this person's data used to train the model?" A correct guess reveals private information about that individual.
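A minimal sketch of the simplest MIA variant, a confidence-threshold attack: overfit models tend to be more confident on samples they were trained on, so the attacker guesses "member" whenever the model's confidence exceeds a threshold. All numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical model confidences on the true label for each queried sample.
# Overfit models are typically more confident on training members.
confidences = np.array([0.99, 0.97, 0.62, 0.55, 0.95, 0.58])
is_member   = np.array([True, True, False, False, True, False])  # ground truth

def membership_inference(conf, threshold=0.9):
    # Guess "member" whenever the model is suspiciously confident.
    return conf > threshold

guesses = membership_inference(confidences)
print(f"attack accuracy: {np.mean(guesses == is_member):.0%}")  # 100% on this toy data
```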
Privacy-preserving training methods → Privacy-preserving training methods are techniques used in machine learning and AI to ensure that sensitive information from the training data cannot be reconstructed, identified, or leaked, while still allowing the model to learn useful patterns. Examples include differential privacy, federated learning, homomorphic encryption, and secure multi-party computation.
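A sketch of one such method, differentially private SGD (DP-SGD): each per-example gradient is clipped to a maximum norm and Gaussian noise is added before the update, bounding how much any single training record can influence the model. This toy NumPy version omits the privacy accounting a real implementation needs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                    # toy training data
y = (X @ rng.normal(size=5) > 0).astype(float)  # toy labels
w = np.zeros(5)                                 # logistic-regression weights

CLIP, NOISE_STD, LR = 1.0, 0.5, 0.1

for step in range(200):
    preds = 1 / (1 + np.exp(-X @ w))
    grads = (preds - y)[:, None] * X            # per-example gradients (64, 5)

    # 1) Clip each example's gradient norm to CLIP: limits one record's influence.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, CLIP / np.maximum(norms, 1e-12))

    # 2) Add Gaussian noise to the summed gradient: masks individual records.
    noisy_sum = grads.sum(axis=0) + rng.normal(0, NOISE_STD * CLIP, size=5)
    w -= LR * noisy_sum / len(X)

print("trained weights:", np.round(w, 2))
```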
Explainable Generative AI → Explainable Generative AI (XGAI) refers to methods and systems that make generative AI models (such as GPT, diffusion models, image generators, code‑gen models, etc.) understandable, transparent, and interpretable to humans.