DRAFT
Scope: by "GenAI" we mean generative AI used outside the wallet (AI-as-a-Service).
Even if you think you’re only sending:
- policies
- capability lists
- proof requests
…the structure, timing, and combinations of requests can leak:
- user attributes
- behavior patterns
- service usage profiles
This is called inference leakage: over time, the AI provider can reconstruct who you are and what you are doing, without ever seeing raw identity data.
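A minimal sketch of the mechanism, with entirely hypothetical request metadata: even when payloads carry no identity data, the provider still sees request types, sizes, and timestamps, and identical structural "shapes" let it link sessions to the same user.

```python
from collections import Counter

# Hypothetical metadata an AI-as-a-Service provider observes per request:
# (request type, payload item count, hour-of-day timestamp). No attributes,
# no names, no raw identity data at all.
session_a = [("proof_request", 3, 9.0), ("policy_check", 1, 9.2),
             ("capability_list", 5, 21.5)]
session_b = [("proof_request", 3, 9.1), ("policy_check", 1, 9.3),
             ("capability_list", 5, 21.4)]

def fingerprint(session):
    """Reduce a session to its structural shape: which request types occur,
    how often, and in which hour-of-day buckets."""
    kinds = Counter(kind for kind, _, _ in session)
    hours = tuple(sorted({int(t) for _, _, t in session}))
    return (tuple(sorted(kinds.items())), hours)

# Two "anonymous" sessions with the same structure and timing collide:
# the provider can link them without ever seeing an attribute.
print(fingerprint(session_a) == fingerprint(session_b))  # → True
```

The point is not this particular hash of the metadata but that any stable function of structure and timing works as a linkage key.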
When AI runs outside the wallet:
- decision logic lives elsewhere
- prompt logic evolves without the user’s control
- model updates silently change behavior
Result: The wallet becomes a UI, not an agent.
This quietly breaks self-sovereign identity principles.
External AI can:
- bias disclosure decisions
- “optimize” for platform goals
- subtly over-disclose to reduce friction
Even without malice: optimization objectives ≠ user interests.
This is algorithmic coercion, not a bug.
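The objective mismatch can be made concrete with a toy example (all names and numbers hypothetical): an optimizer that minimizes user friction picks a different disclosure than one that minimizes exposure, with no malicious intent anywhere.

```python
# Hypothetical disclosure options for one verifier request.
# "friction" = number of consent prompts; "attrs" = attributes revealed.
options = [
    {"name": "minimal", "attrs": ["age_over_18"],            "friction": 3},
    {"name": "broad",   "attrs": ["name", "dob", "address"], "friction": 1},
]

# Platform objective: minimize friction. User interest: minimize exposure.
platform_choice = min(options, key=lambda o: o["friction"])
user_choice = min(options, key=lambda o: len(o["attrs"]))

print(platform_choice["name"])  # → broad (over-discloses to cut friction)
print(user_choice["name"])      # → minimal
```

Same data, same honest optimizer; only the objective differs, and the platform's objective quietly over-discloses.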
Most AI services:
- log prompts
- retain context
- reuse data for tuning or monitoring
Even anonymized logs can:
- correlate identities across services
- deanonymize users through linkage attacks
Once logged, it cannot be revoked.
A single AI provider serving many wallets can:
- correlate request fingerprints
- identify the same user across devices or contexts
- create a shadow identity graph
This recreates centralized identity, without consent.
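A sketch of how the shadow graph emerges, with fully hypothetical log entries: the provider groups "anonymous" entries by request fingerprint, and each group links devices and services that the user never consented to connect.

```python
from collections import defaultdict

# Hypothetical anonymized logs from many wallets hitting one AI provider.
# Each entry: (request_fingerprint, device, relying service). No user id.
logs = [
    ("fp_91ac", "phone",  "bank"),
    ("fp_91ac", "laptop", "bank"),
    ("fp_91ac", "phone",  "insurer"),
    ("fp_22bd", "phone",  "bank"),
]

# Group by fingerprint: identical request shapes are presumed to be the
# same user, stitching devices and services into an identity graph.
graph = defaultdict(lambda: {"devices": set(), "services": set()})
for fp, device, service in logs:
    graph[fp]["devices"].add(device)
    graph[fp]["services"].add(service)

print(sorted(graph["fp_91ac"]["devices"]))   # → ['laptop', 'phone']
print(sorted(graph["fp_91ac"]["services"]))  # → ['bank', 'insurer']
```

Nothing in the logs is "identity data", yet the aggregate is exactly a centralized identity record.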
External AI services may:
- run in foreign jurisdictions
- be subject to subpoenas
- fall under surveillance regimes
This creates:
- unclear data residency
- legal exposure for wallet providers
- compliance contradictions (GDPR, eIDAS, etc.)
Inside the wallet, AI mistakes are bounded. Outside, you get:
- hallucinated policy interpretations
- incorrect legal assumptions
- wrong proof selection
These can cause:
- over-disclosure
- invalid consent
- irreversible identity actions
Hallucination here is not UX noise; it is identity damage.
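One way to keep AI mistakes bounded, sketched with hypothetical names: any disclosure an external model suggests is filtered through a local, user-controlled allowlist inside the wallet before anything leaves the device.

```python
# Hypothetical wallet-side guardrail: the user's local allowlist is the
# final authority, not the external model's proof selection.
USER_ALLOWLIST = {"age_over_18", "country"}

def release(ai_suggested_attrs):
    """Split an AI-suggested disclosure into approved and blocked parts,
    so a hallucinated selection is contained instead of disclosed."""
    approved = [a for a in ai_suggested_attrs if a in USER_ALLOWLIST]
    blocked = [a for a in ai_suggested_attrs if a not in USER_ALLOWLIST]
    return approved, blocked

# A wrong proof selection from the model is caught locally.
approved, blocked = release(["age_over_18", "full_name", "address"])
print(approved)  # → ['age_over_18']
print(blocked)   # → ['full_name', 'address']
```

This is the architectural point of the section: the check runs where the user's policy lives, so an external model can suggest but never decide.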
More general risks:
Risk: The article highlights the rapid advancement of AI-driven deepfake technologies, which are capable of producing highly realistic synthetic images and videos. These technologies can undermine biometric authentication mechanisms such as facial recognition, which are commonly used by digital identity wallets.
Solutions proposed in the article:
- Adoption of advanced AI-based detection mechanisms
- Continuous updating of forgery detection algorithms
- Use of multi-factor authentication instead of relying solely on biometric methods
Risk: The article discusses the emergence of synthetic identities created by combining real and fabricated data, which can bypass traditional identity verification systems. If such identities are stored or validated within digital identity wallets, they can compromise the overall trust model.
Solutions proposed in the article:
- AI-driven behavioral and pattern analysis
- Enhanced fraud detection mechanisms
- Verification based on multiple trusted sources rather than a single authority
Risk: The article notes that many current digital identity security systems lack the scalability and accuracy required to handle large volumes of users and increasingly sophisticated AI-based attacks. This limitation poses a significant challenge for identity wallets operating at national or cross-border scale.
Solutions proposed in the article:
- Deployment of scalable and resilient system architectures
- Use of AI to automate threat detection and response
- Continuous improvement of algorithmic accuracy under high-load conditions
Risk: The article emphasizes the absence of harmonized international standards and regulatory frameworks for digital identity systems. This lack of coordination creates interoperability and compliance challenges for digital identity wallets, especially in cross-border scenarios.
Solutions proposed in the article:
- Development of shared legal and regulatory frameworks
- Standardization of digital identity security processes
- Stronger coordination between regulatory bodies and technology providers [2]