...

  • Risk: Many current digital identity security systems lack the scalability and accuracy required to handle large volumes of users and increasingly sophisticated AI-based attacks. This limitation poses a significant challenge for identity wallets operating at national or cross-border scale.
  • Solutions:
    • Deployment of scalable and resilient system architectures

  • Use of AI to automate threat detection and response (see the sketch after this list)

    • Continuous improvement of algorithmic accuracy under high-load conditions [2]
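
A minimal sketch of the automated threat detection bullet above, assuming the wallet backend sees a stream of authentication events; the sliding window and failure threshold are illustrative choices, not taken from [2]:

```python
from collections import defaultdict, deque
from dataclasses import dataclass
import time

@dataclass
class AuthEvent:
    account: str
    success: bool
    timestamp: float

class FailureRateDetector:
    """Flags accounts whose authentication failures spike inside a
    sliding time window. Thresholds here are illustrative only."""

    def __init__(self, window_seconds: float = 60.0, max_failures: int = 5):
        self.window = window_seconds
        self.max_failures = max_failures
        self.failures = defaultdict(deque)  # account -> failure timestamps

    def observe(self, event: AuthEvent) -> bool:
        """Return True when this event pushes an account over the threshold."""
        if event.success:
            return False
        q = self.failures[event.account]
        q.append(event.timestamp)
        # Forget failures that have fallen out of the sliding window.
        while q and event.timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_failures

detector = FailureRateDetector()
now = time.time()
flagged = False
for i in range(7):
    flagged = detector.observe(AuthEvent("alice", False, now + i)) or flagged
print("alice flagged:", flagged)  # True: 7 failures within 60 seconds
```

A deque-per-account design keeps the check O(1) amortized per event, which matters under the high-load conditions the list mentions.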


Risks of AI-as-a-Service

GenAI: here we mean using GenAI outside the wallet (AI-as-a-Service)

Implicit data leakage (even without “sending data”)

Even if you think you’re only sending:

...

This is called inference leakage. Over time, the AI provider can reconstruct who you are and what you’re doing — without seeing raw identity data.
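
To make inference leakage concrete, here is a hedged sketch of what an AIaaS provider could learn from nothing but its own request metadata; all field names, identifiers, and values are hypothetical:

```python
from collections import Counter

# What a provider might log per request: no raw identity data, only
# which credential attributes the wallet asked it to reason about.
request_log = [
    {"wallet": "w-7f3a", "attributes": ["income", "employer"], "ts": "2025-01-10T09:02"},
    {"wallet": "w-7f3a", "attributes": ["income", "address"],  "ts": "2025-01-10T09:05"},
    {"wallet": "w-7f3a", "attributes": ["credit_score"],       "ts": "2025-01-11T14:30"},
    {"wallet": "w-7f3a", "attributes": ["address", "income"],  "ts": "2025-01-12T10:15"},
]

# The provider never sees attribute values, yet the shape of the
# requests is revealing on its own.
profile = Counter(a for r in request_log for a in r["attributes"])
print(profile.most_common())
# [('income', 3), ('address', 2), ('employer', 1), ('credit_score', 1)]
# Repeated income and credit queries over a few days strongly suggest
# a loan or mortgage application, with no "data" ever sent.
```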

Loss of user sovereignty

When AI runs outside the wallet:

...

This quietly breaks self-sovereign identity principles.
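
One way a wallet could keep the user in the decision loop is to force every out-of-wallet call through an explicit consent gate. A minimal sketch, with hypothetical run_local_model and call_remote_provider stand-ins:

```python
from enum import Enum

class InferenceLocation(Enum):
    LOCAL = "local"    # model runs inside the wallet boundary
    REMOTE = "remote"  # AI-as-a-Service, outside the wallet's control

class ConsentDeniedError(Exception):
    pass

def run_local_model(prompt: str) -> str:
    # Stand-in for an on-device model.
    return f"[local] {prompt}"

def call_remote_provider(prompt: str) -> str:
    # Stand-in for a hypothetical AIaaS client.
    return f"[remote] {prompt}"

def run_inference(prompt: str, location: InferenceLocation,
                  user_consents: set) -> str:
    """Refuse to leave the wallet without explicit consent. Once a call
    goes REMOTE, retention and reuse are no longer enforceable by the
    wallet; they can only be refused up front."""
    if location is InferenceLocation.REMOTE:
        if "remote_ai" not in user_consents:
            raise ConsentDeniedError("user has not consented to external AI")
        return call_remote_provider(prompt)
    return run_local_model(prompt)

print(run_inference("Which credential fits this request?",
                    InferenceLocation.LOCAL, set()))
```

The point of the sketch is where control is lost: everything after call_remote_provider is governed by the provider's terms, not by the wallet's code.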

Policy manipulation & dark negotiation

External AI can:

  • bias disclosure decisions

  • “optimize” for platform goals

  • subtly over-disclose to reduce friction

...

This is algorithmic coercion, not a bug.
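
A deterministic guard illustrates one countermeasure: whatever the external AI suggests disclosing is intersected with a wallet-held, per-purpose allowlist, so nudged over-disclosure cannot leave the device. The policy table and attribute names below are hypothetical:

```python
# Per-purpose allowlists controlled by the wallet, not by the AI.
DISCLOSURE_POLICY = {
    "age_verification": {"age_over_18"},
    "hotel_checkin":    {"full_name", "document_number"},
}

def enforce_minimal_disclosure(purpose: str, ai_suggested: set) -> set:
    """Clamp whatever the external AI proposes to the local allowlist,
    making 'helpful' over-disclosure structurally impossible."""
    allowed = DISCLOSURE_POLICY.get(purpose, set())
    dropped = ai_suggested - allowed
    if dropped:
        print(f"Refused AI-suggested attributes: {sorted(dropped)}")
    return ai_suggested & allowed

# The AI 'reduces friction' by suggesting extra attributes:
released = enforce_minimal_disclosure(
    "age_verification", {"age_over_18", "full_name", "address"})
print("released:", released)  # {'age_over_18'}
```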

Prompt and context retention

Most AI services:

  • log prompts

  • retain context

  • reuse data for tuning or monitoring

...

Once a prompt is logged, you can't revoke it.
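
Because retained prompts cannot be recalled, the only reliable control sits before transmission. A minimal redaction sketch follows; the two patterns are illustrative and a real deployment would need much stronger PII detection:

```python
import re

# Pre-send redaction rules. Illustrative, not exhaustive.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{6,}\b"), "<ID_NUMBER>"),  # long digit runs
]

def redact(prompt: str) -> str:
    """Strip obvious identifiers before the prompt leaves the wallet.
    Whatever survives this function should be assumed permanent."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Check if passport 9934412087 for jane.doe@example.org is valid"))
# Check if passport <ID_NUMBER> for <EMAIL> is valid
```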

Correlation across wallets and services

A single AI provider serving many wallets can:

...

In addition, if the AIaaS provider experiences an outage, millions of users could be locked out of essential services simultaneously.
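
A standard mitigation for cross-service correlation is pairwise pseudonymous identifiers, so that no single identifier ever appears at two services. A sketch using an HMAC derivation; key management is omitted and the secret shown is illustrative only:

```python
import hashlib
import hmac

def pairwise_id(wallet_secret: bytes, service_id: str) -> str:
    """Derive a stable, per-service pseudonym. Two services (or one
    AIaaS provider sitting behind both) cannot link the results
    without the wallet's secret key."""
    digest = hmac.new(wallet_secret, service_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

secret = b"wallet-master-secret"          # illustrative only
print(pairwise_id(secret, "tax-portal"))  # differs per service,
print(pairwise_id(secret, "bank"))        # stable across sessions
```

The same construction underlies pairwise DIDs: linking two pseudonyms requires the wallet's secret, which never leaves the device.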

Regulatory and jurisdictional drift

External AI services may:

...

Data may be stored on servers subject to foreign laws, complicating compliance with national regulations such as the GDPR and creating legal uncertainty.

Model hallucination becomes a security risk

Inside a wallet:

  • AI mistakes are bounded

Outside:

  • hallucinated policy interpretations

  • incorrect legal assumptions

  • wrong proof selection

...

Hallucination here is not UX noise — it’s identity damage.
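
One defensive pattern is to treat the model's proof selection as an untrusted proposal and verify it against a deterministic rule set before anything is released. The claim and proof identifiers below are hypothetical:

```python
# The verifier's actual request, parsed deterministically (not by the LLM).
REQUESTED_CLAIMS = {"age_over_18"}

# Proofs the wallet can produce, keyed by the claim they attest.
AVAILABLE_PROOFS = {
    "age_over_18": "zkp:age-over-18",
    "full_name":   "vc:name-credential",
}

def select_proofs(llm_proposal: list) -> list:
    """Accept the model's proposal only if every proof it names maps to
    a claim the verifier actually requested; otherwise fail closed."""
    claim_of = {proof: claim for claim, proof in AVAILABLE_PROOFS.items()}
    for proof in llm_proposal:
        claim = claim_of.get(proof)
        if claim is None or claim not in REQUESTED_CLAIMS:
            raise ValueError(f"rejected hallucinated or over-broad proof: {proof}")
    return llm_proposal

# The model 'helpfully' adds the name credential to an age check:
try:
    select_proofs(["zkp:age-over-18", "vc:name-credential"])
except ValueError as e:
    print(e)
```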


"Black Box" Opacity

Complex AI models are not transparent. Users can be denied access (e.g., false non-match) without a clear, explainable reason or recourse.

Algorithmic Bias & Discrimination

AI models can inherit biases, leading to unfair denials of access for specific demographic groups. The system is also vulnerable to adversarial attacks designed to fool it.

...

  1. A. Golda et al., "Privacy and Security Concerns in Generative AI: A Comprehensive Survey," in IEEE Access, vol. 12, pp. 48126-48144, 2024, doi: 10.1109/ACCESS.2024.3381611.  →  https://ieeexplore.ieee.org/document/10478883
  2. 'The Evolution of Identity Security in the Age of AI: Challenges and Solutions,' International Journal of Computer Engineering and Technology (IJCET), vol. 16, no. 1, pp. 2305-2319, Jan.-Feb. 2025, doi: 10.34218/IJCET_16_01_165.  →  https://iaeme.com/Home/issue/IJCET?Volume=16&Issue=1



  • Risk: If identity-related data is used to train AI models, attackers may infer sensitive attributes or reconstruct the original training data through model inversion or membership inference attacks.
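
As a concrete illustration of membership inference, here is the classic confidence-threshold attack in miniature; the probe records and confidence values are fabricated for illustration:

```python
def is_probable_member(confidence_on_record: float,
                       threshold: float = 0.90) -> bool:
    """Models are typically more confident on records they were trained
    on, so high confidence on a probe record suggests it was a training
    member. The threshold would be calibrated on known non-members."""
    return confidence_on_record >= threshold

probes = {"record_seen_in_training": 0.99, "fresh_record": 0.61}
for name, confidence in probes.items():
    print(name, "-> member?", is_probable_member(confidence))
```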