...
- Risk: Because wallet policies are enforced through cryptographic protocols rather than natural-language interpretation, direct prompt injection is harder. But if the wallet relies on AI-driven assistants or automated decision-making (e.g., for verifying credentials or guiding users), attackers can craft malicious prompts that manipulate the AI's logic.
Even if the wallet itself is secure, any connected AI-based helpdesk or verification service could be exploited via prompt injection.
- Solution: Apply prompt hardening techniques, context isolation, and strict input validation. Use allowlists/denylists and sandbox testing for untrusted inputs (a minimal validation sketch follows below).
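As a minimal sketch of that validation step (the function names, patterns, and message format are illustrative assumptions, not any specific wallet's API), the following Python screens untrusted input with an allowlist/denylist before it reaches an AI assistant, and keeps it isolated from the system instructions:

```python
import re

# Hypothetical screening step placed between untrusted user input and an
# AI-driven wallet assistant. Names and patterns are illustrative only.

# Denylist: phrases commonly seen in prompt-injection attempts.
DENYLIST = [
    r"ignore (all|any|previous) instructions",
    r"system prompt",
    r"you are now",
    r"reveal .*credential",
]

# Allowlist: the assistant only accepts short, plain-text questions.
ALLOWED_INPUT = re.compile(r"^[\w\s.,?'-]{1,300}$")

def screen_user_input(text: str) -> str:
    """Reject input that fails the allowlist or matches the denylist."""
    if not ALLOWED_INPUT.match(text):
        raise ValueError("Input rejected: disallowed characters or length.")
    lowered = text.lower()
    for pattern in DENYLIST:
        if re.search(pattern, lowered):
            raise ValueError("Input rejected: possible prompt injection.")
    return text

def build_prompt(user_text: str) -> list[dict]:
    """Context isolation: untrusted text goes only into the user role,
    never concatenated into the system instructions."""
    return [
        {"role": "system", "content": "Answer wallet FAQs. Never disclose credentials."},
        {"role": "user", "content": screen_user_input(user_text)},
    ]

print(build_prompt("How do I renew my driving licence credential?"))
```

Denylists alone are easy to evade, which is why the sketch pairs them with a strict structural allowlist and role separation; sandbox testing with known injection corpora would sit on top of this.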
Data Leakage and Membership Inference*
- Risk: If identity-related data is used to train AI models, attackers may infer sensitive attributes or reconstruct original data through model inversion or membership inference attacks.
If the wallet uses AI services (e.g., for fraud detection, identity verification, or UX personalization), sensitive identity data might be exposed during model training or inference. If wallet operations involve external AI APIs, data could leak through logs or model updates. Even if raw data isn't shared, patterns in queries or metadata could let attackers infer user attributes; for example, AI tools on a mobile phone may have access to the ID wallet.
If AI is integrated for convenience (e.g., chatbots or automated KYC) and those models access identity data, leakage and inference risks reappear. Even when credentials themselves are protected, metadata (timestamps, transaction patterns) can still be exploited for inference.
- Solution: Enforce data minimization, segregate sensitive datasets, adopt privacy-preserving training methods* (e.g., differential privacy, sketched below), and secure the entire data lifecycle.
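As one concrete illustration of differentially private training, here is a minimal DP-SGD-style sketch in Python (toy data and hyperparameters are assumptions, not calibrated privacy settings): each example's gradient is clipped to bound its influence, and Gaussian noise is added before the update.

```python
import numpy as np

# Minimal DP-SGD-style sketch: per-example gradient clipping plus Gaussian
# noise, the core mechanism behind differentially private training.
# Toy data and hyperparameters are illustrative, not production settings.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 samples, 5 features
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)

w = np.zeros(5)
clip_norm, noise_mult, lr, batch = 1.0, 1.1, 0.1, 50

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    idx = rng.choice(len(X), size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    # Per-example gradients of the logistic loss.
    grads = (sigmoid(Xb @ w) - yb)[:, None] * Xb
    # Clip each example's gradient to bound its influence (sensitivity).
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Add Gaussian noise calibrated to the clipping norm.
    noise = rng.normal(scale=noise_mult * clip_norm, size=w.shape)
    w -= lr * (grads.sum(axis=0) + noise) / batch

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"accuracy with DP noise: {acc:.2f}")
```

The clipping norm caps how much any single identity record can move the model, which is exactly what blunts the membership inference attacks described above; the noise multiplier then trades accuracy for a formal privacy budget.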
...
AI models can inherit biases from their training data, leading to unfair denials of access for specific demographic groups. Verification systems built on such models are also vulnerable to adversarial attacks designed to fool them.
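To make the adversarial-attack risk concrete, the sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression verifier (the weights and input are invented for illustration): a small perturbation in the direction of the loss gradient flips an "accept" decision to "reject".

```python
import numpy as np

# FGSM sketch against a toy logistic-regression classifier: a small
# perturbation in the direction of the loss gradient flips the prediction.
# Weights and the input sample are illustrative assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0, 0.5])   # trained weights (assumed)
x = np.array([0.2, 0.1, 0.3])    # legitimate input, true label y = 1
y = 1.0

# Gradient of the logistic loss w.r.t. the *input* is (sigmoid(w.x) - y) * w.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM: step in the sign direction of the gradient, bounded by epsilon.
eps = 0.4
x_adv = x + eps * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x))       # > 0.5 -> accepted
print("adversarial score:", sigmoid(w @ x_adv))   # pushed below 0.5
```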
*
Membership Inference (or Membership Inference Attack, often shortened to MIA) is a type of privacy attack against machine-learning models, in which an attacker tries to determine whether a specific data sample was part of the model's training data. In simple words: the attacker wants to know "Was this person's data used to train the model?" If the attacker can guess this correctly, they learn private information about the individual.
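A minimal confidence-threshold version of the attack, on synthetic data (the sizes and the 0.9 threshold are illustrative assumptions), looks like this: an overfit model is noticeably more confident on its training samples, and that gap alone reveals membership.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Confidence-threshold membership inference sketch: an overfit model tends
# to be more confident on its training samples than on unseen ones, so an
# attacker can guess membership from prediction confidence alone.

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)

X_train, y_train = X[:200], y[:200]   # members (used for training)
X_out, y_out = X[200:], y[200:]       # non-members (never seen)

# Weak regularization (large C) encourages the overfitting the attack exploits.
model = LogisticRegression(C=100.0).fit(X_train, y_train)

def confidence(model, X, y):
    """Probability the model assigns to each sample's true label."""
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

threshold = 0.9  # attacker guesses "member" above this confidence
print("flagged as members (true members):    ",
      (confidence(model, X_train, y_train) > threshold).mean())
print("flagged as members (true non-members):",
      (confidence(model, X_out, y_out) > threshold).mean())
```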
Privacy-preserving training methods → Privacy-preserving training methods are techniques used in machine learning and AI to ensure that sensitive information from the training data cannot be reconstructed, identified, or leaked, while still allowing the model to learn useful patterns, such as:
...
- A. Golda et al., "Privacy and Security Concerns in Generative AI: A Comprehensive Survey," in IEEE Access, vol. 12, pp. 48126-48144, 2024, doi: 10.1109/ACCESS.2024.3381611. → https://ieeexplore.ieee.org/document/10478883
- "The Evolution of Identity Security in the Age of AI: Challenges and Solutions," International Journal of Computer Engineering and Technology (IJCET), vol. 16, no. 1, Jan-Feb 2025, pp. 2305-2319, Article ID: IJCET_16_01_165, doi: 10.34218/IJCET_16_01_165 → https://iaeme.com/Home/issue/IJCET?Volume=16&Issue=1