Even if you think you’re only sending:

- policies
- capability lists
- proof requests

…the structure, timing, and combinations of requests can leak:

- user attributes
- behavior patterns
- service usage profiles

This is called inference leakage. Over time, the AI provider can reconstruct who you are and what you’re doing — without seeing raw identity data.
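As a minimal sketch of why this works (the request types, time buckets, and similarity score are invented for illustration, not a real wallet protocol), even content-free metadata forms a behavioral fingerprint that a provider can match across sessions:

```python
# Toy inference-leakage demo: reduce each session to a fingerprint of
# (request type, coarse time-of-day bucket) counts, then compare sessions.
# No identity data is ever sent, yet the sessions match.
from collections import Counter

def fingerprint(requests):
    """Frequency counts of (request type, 6-hour time bucket)."""
    return Counter((r["type"], r["hour"] // 6) for r in requests)

def similarity(fp_a, fp_b):
    """Overlap of two fingerprints as a 0..1 score."""
    shared = sum((fp_a & fp_b).values())   # multiset intersection
    total = max(sum(fp_a.values()), sum(fp_b.values()))
    return shared / total if total else 0.0

monday = [{"type": "age_proof", "hour": 8}, {"type": "license", "hour": 9},
          {"type": "age_proof", "hour": 20}]
tuesday = [{"type": "age_proof", "hour": 7}, {"type": "license", "hour": 9},
           {"type": "age_proof", "hour": 21}]

print(similarity(fingerprint(monday), fingerprint(tuesday)))  # → 1.0
```

The point of the toy score: two sessions that never share an identifier still align perfectly on their request-shape-and-timing profile.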
When AI runs outside the wallet:

- decision logic lives elsewhere
- prompt logic evolves without the user’s control
- model updates silently change behavior

Result: the wallet becomes a UI, not an agent. This quietly breaks self-sovereign identity principles.
External AI can:

- bias disclosure decisions
- “optimize” for platform goals
- subtly over-disclose to reduce friction

Even without malice, optimization objectives ≠ user interests. This is algorithmic coercion, not a bug.
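A toy illustration of the objective mismatch (the planner, scoring model, and attribute names are all hypothetical): if an external component optimizes only for low user friction, the broadest disclosure wins by construction.

```python
# Assumed friction model: broader disclosures need fewer follow-up prompts,
# so "minimize friction" silently rewards revealing more attributes.
def friction(disclosure):
    return 1.0 / len(disclosure["attributes"])

candidates = [
    {"attributes": ["over_18"]},                             # minimal disclosure
    {"attributes": ["over_18", "full_name", "birth_date"]},  # over-discloses
]

best = min(candidates, key=friction)  # the optimizer's choice
print(best["attributes"])             # → ['over_18', 'full_name', 'birth_date']
```

No adversarial intent is needed: the objective simply never encodes the user's interest in minimal disclosure.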
Most AI services:

- log prompts
- retain context
- reuse data for tuning or monitoring

Even anonymized logs can:

- correlate identities across services
- deanonymize users through linkage attacks

Once logged, it can’t be revoked.
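A schematic linkage attack, using invented log records: neither log contains a user ID, but quasi-identifiers (timestamp plus request shape) are enough to join them.

```python
# Two "anonymized" logs held by different parties. No shared user ID exists,
# yet matching on (timestamp, request shape) re-links the session.
ai_service_log = [
    {"ts": "2024-05-01T09:14", "req": "age_proof+license"},
    {"ts": "2024-05-01T11:02", "req": "email_proof"},
]
verifier_log = [
    {"ts": "2024-05-01T09:14", "presented": "age_proof+license",
     "site": "carshare.example"},
]

def linked(a, b):
    return a["ts"] == b["ts"] and a["req"] == b["presented"]

matches = [(a, b) for a in ai_service_log
                  for b in verifier_log if linked(a, b)]
print(len(matches))  # → 1: the session is re-identified despite anonymization
```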
A single AI provider serving many wallets can:

- correlate request fingerprints
- identify the same user across devices or contexts
- create a shadow identity graph

This recreates centralized identity — without consent.
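A rough sketch of how a provider could cluster sessions into such a graph; the credential names, overlap heuristic, and threshold are invented for illustration.

```python
# Cluster wallet sessions by overlap in requested credential types.
# High overlap suggests the same person behind different devices/contexts.
from itertools import combinations

sessions = {
    "phone":  {"age_proof", "gym_membership"},
    "laptop": {"age_proof", "gym_membership", "email_proof"},
    "kiosk":  {"transit_pass"},
}

edges = []
for (a, creds_a), (b, creds_b) in combinations(sessions.items(), 2):
    overlap = len(creds_a & creds_b) / min(len(creds_a), len(creds_b))
    if overlap >= 0.5:            # heuristic: probably the same user
        edges.append((a, b))

print(edges)  # → [('phone', 'laptop')]: two devices merged into one graph node
```

Each edge collapses two supposedly separate contexts into one node of a de facto identity graph, which is exactly the centralization SSI is meant to prevent.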
External AI services may:

- run in foreign jurisdictions
- be subject to subpoenas
- fall under surveillance regimes

This creates:

- unclear data residency
- legal exposure for wallet providers
- compliance contradictions (GDPR, eIDAS, etc.)
Inside a wallet, AI mistakes are bounded. Outside it, the model can produce:

- hallucinated policy interpretations
- incorrect legal assumptions
- wrong proof selection

These can cause:

- over-disclosure
- invalid consent
- irreversible identity actions

Hallucination here is not UX noise — it’s identity damage.
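One pattern for bounding that damage: treat any external suggestion as untrusted and enforce a wallet-local allow-list as the final gate before releasing proofs. The policy shape, verifier name, and function here are illustrative, not a real wallet API.

```python
# Wallet-side guard: whatever proof set an external model suggests,
# only attributes on the local per-verifier allow-list may be disclosed.
LOCAL_POLICY = {"carshare.example": {"over_18", "license_valid"}}

def safe_to_disclose(verifier, ai_suggested):
    allowed = LOCAL_POLICY.get(verifier, set())
    blocked = set(ai_suggested) - allowed
    if blocked:
        # Fail closed: refuse rather than silently over-disclose.
        raise PermissionError(f"blocked over-disclosure: {sorted(blocked)}")
    return set(ai_suggested)

print(safe_to_disclose("carshare.example", ["over_18"]))  # → {'over_18'}
# A hallucinated request for "full_name" raises PermissionError instead
# of leaking: the mistake stays bounded inside the wallet.
```

The key design choice is failing closed: an external model can propose, but only local, user-controlled policy can approve.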