- Protecting data
- Security of devices
- Physical vulnerabilities
- Device lost
- Device defect (device unavailable or out of battery)
- Device stolen
- Lack of Device Security
- Physical vulnerabilities
- Security of Wallets → a single app or wallet bundling many functionalities across different sectors.
- Phishing Attacks
- Malware and Viruses
- Social Engineering
- Security of Verifiable Credentials
- Just like with traditional passwords, weak keys or improperly stored credentials in distributed identity systems can be vulnerable to hacking
- by end user
- by service providers
- by issuers (tricky)
- by third parties → misuse or reuse of data by third parties through illegal access, e.g. intrusion via a malicious app, social engineering, duplication, or skimming
- Security of Services → dependency on the security of external services
- relying parties
- intermediaries
- GenAI: here we mean using GenAI outside the wallet (AI-as-a-Service) DRAFT
1. Implicit data leakage (even without “sending data”)
Even if you think you’re only sending:
- policies
- capability lists
- proof requests
…the structure, timing, and combinations of requests can leak:
- user attributes
- behavior patterns
- service usage profiles
This is called inference leakage.
Over time, the AI provider can reconstruct who you are and what you’re doing — without seeing raw identity data.
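A minimal sketch of why metadata alone is enough for inference leakage. All names and values here are invented for illustration: the point is that two sessions that never disclose any identity attribute still produce identical fingerprints from the shape and timing of their proof requests alone.

```python
# Hypothetical sketch: an AI provider linking sessions from request
# *metadata* only (no identity attributes are ever sent).
import hashlib

def request_fingerprint(requested_attrs, hour_of_day):
    """Fingerprint a proof request from its shape and timing only."""
    shape = ",".join(sorted(requested_attrs))       # which attributes were asked for
    bucket = "night" if hour_of_day < 6 else "day"  # coarse timing pattern
    return hashlib.sha256(f"{shape}|{bucket}".encode()).hexdigest()[:12]

# Two sessions, days apart, that never disclose a name or ID:
session_a = request_fingerprint({"age_over_18", "residency", "diploma"}, 2)
session_b = request_fingerprint({"residency", "diploma", "age_over_18"}, 3)

# Identical fingerprints let the provider infer "same user, same habit".
print(session_a == session_b)  # True: the sessions are linkable
```

Nothing sensitive crossed the wire, yet the provider can now tie the two sessions together and start profiling.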
2. Loss of user sovereignty
When AI runs outside the wallet:
- decision logic lives elsewhere
- prompt logic evolves without the user’s control
- model updates silently change behavior
Result: the wallet becomes a UI, not an agent. This quietly breaks self-sovereign identity principles.
3. Policy manipulation & dark negotiation
External AI can:
- bias disclosure decisions
- “optimize” for platform goals
- subtly over-disclose to reduce friction
Even without malice, optimization objectives ≠ user interests. This is algorithmic coercion, not a bug.
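The objective mismatch above can be made concrete with a toy sketch. The options, attribute names, and scoring weights are all invented; the point is that an optimizer scoring by platform friction and one scoring by minimal disclosure pick opposite options from the same candidate set.

```python
# Hypothetical "dark negotiation" sketch: an external optimizer that
# scores disclosure options by platform friction, not user privacy.
options = [
    {"attrs": ["age_over_18"], "clicks": 3},                        # minimal disclosure, more prompts
    {"attrs": ["full_birthdate", "name", "address"], "clicks": 1},  # one-tap over-disclosure
]

def platform_score(opt):
    return -opt["clicks"]        # platform goal: least friction

def user_score(opt):
    return -len(opt["attrs"])    # user goal: least disclosure

platform_pick = max(options, key=platform_score)
user_pick = max(options, key=user_score)

print(platform_pick["attrs"])  # ['full_birthdate', 'name', 'address']
print(user_pick["attrs"])      # ['age_over_18']
```

No rule was broken and nothing malicious ran; the "optimized" recommendation simply over-discloses because friction, not privacy, was the objective.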
4. Prompt and context retention
Most AI services:
- log prompts
- retain context
- reuse data for tuning or monitoring
Even anonymized logs can:
- correlate identities across services
- deanonymize users through linkage attacks
Once logged, the data cannot be revoked.
5. Correlation across wallets and services
A single AI provider serving many wallets can:
- correlate request fingerprints
- identify the same user across devices or contexts
- create a shadow identity graph
This recreates centralized identity, without consent.
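A sketch of how a shadow identity graph emerges, assuming the provider can compute a stable request fingerprint per session (wallet IDs, session IDs, and fingerprints below are invented). Grouping sessions by fingerprint is all it takes to span wallets the user believed were unlinked.

```python
# Hypothetical sketch: a provider serving many wallets groups sessions
# by request fingerprint, rebuilding a centralized "shadow" identity
# graph that no user ever consented to.
from collections import defaultdict

# (wallet_id, session_id, fingerprint) as seen on the provider side:
sessions = [
    ("wallet_A", "s1", "fp_7c2e"),
    ("wallet_B", "s2", "fp_7c2e"),
    ("wallet_A", "s3", "fp_91aa"),
    ("wallet_C", "s4", "fp_7c2e"),  # same fingerprint on a third device
]

shadow_graph = defaultdict(set)
for wallet, session, fp in sessions:
    shadow_graph[fp].add(wallet)

# One fingerprint now spans three wallets: likely the same person.
print(sorted(shadow_graph["fp_7c2e"]))  # ['wallet_A', 'wallet_B', 'wallet_C']
```

Each wallet in isolation looks privacy-preserving; the centralization happens entirely on the provider side, which is exactly what makes it invisible to users.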
6. Regulatory and jurisdictional drift
External AI services may:
- run in foreign jurisdictions
- be subject to subpoenas
- fall under surveillance regimes
This creates:
- unclear data residency
- legal exposure for wallet providers
- compliance contradictions (GDPR, eIDAS, etc.)
7. Model hallucination becomes a security risk
Inside a wallet, AI mistakes are bounded. Outside, they are not:
- hallucinated policy interpretations
- incorrect legal assumptions
- wrong proof selection
These can cause:
- over-disclosure
- invalid consent
- irreversible identity actions
Hallucination here is not UX noise; it is identity damage.
- Security of devices
- Losing data → lack of support mechanisms for security incidents
- Insufficient recovery solutions
- No insurance
- Dark Net → security economics → there is a business in generating fake IDs or misusing real IDs, which can be used for money laundering or other illegal activities
- Fake IDs
- Misuse of VCs
- Trust Infrastructure → any vulnerability caused by mistakes in the Trust Infrastructure
- PKI
- Registry
- Any intermediaries