...
Privacy-preserving training methods → Privacy‑preserving training methods are techniques used in machine learning and AI to ensure that sensitive information from the training data cannot be reconstructed, identified, or leaked, while still allowing the model to learn useful patterns. Examples include:
- Homomorphic Encryption (HE): Allows computations directly on encrypted data.
- Secure Multi-Party Computation (MPC/SMPC): Multiple parties collaborate to train a model without seeing each other's data.
- Differential Privacy (DP): Adds mathematically controlled noise during training so the model cannot reveal information about any specific individual.
- Federated Learning (FL): The model is trained across many devices or servers, and raw data never leaves the device. Each device trains locally, and only model updates (not data) are sent to a central server, where the updates are aggregated securely.
- Trusted Execution Environments (TEE): Training happens in a secure, hardware‑isolated environment, such as Microsoft Azure confidential computing.
- Synthetic Data Generation: Instead of using real private data, a model (e.g., a generative model) produces synthetic data that mimics the statistical properties of the original dataset.
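As a concrete illustration of the Differential Privacy bullet above, here is a minimal NumPy sketch of the core DP-SGD step: clip each example's gradient to a fixed norm, sum, and add calibrated Gaussian noise. The function name `dp_noisy_gradient` and the parameters `clip_norm` and `noise_multiplier` are illustrative choices, not from any specific library; real deployments track the resulting privacy budget (ε, δ) with an accountant.

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """DP-SGD-style aggregation (illustrative sketch):
    clip each per-example gradient to clip_norm, sum, add Gaussian noise,
    then average. The noise standard deviation is noise_multiplier * clip_norm.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```

Clipping bounds any single individual's influence on the update, and the noise masks what remains, which is what makes the per-step guarantee differential-privacy-style rather than ad hoc noise injection.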
Explainable Generative AI → Explainable Generative AI (XGAI) refers to methods and systems that make generative AI models (such as GPT, diffusion models, image generators, code‑gen models, etc.) understandable, transparent, and interpretable to humans.
...
- A. Golda et al., "Privacy and Security Concerns in Generative AI: A Comprehensive Survey," in IEEE Access, vol. 12, pp. 48126-48144, 2024, doi: 10.1109/ACCESS.2024.3381611. → https://ieeexplore.ieee.org/document/10478883
- "The Evolution of Identity Security in the Age of AI: Challenges and Solutions," International Journal of Computer Engineering and Technology (IJCET), vol. 16, issue 1, Jan–Feb 2025, pp. 2305–2319, Article ID: IJCET_16_01_165, ISSN Print: 0976-6367, ISSN Online: 0976-6375, doi: 10.34218/IJCET_16_01_165. → https://iaeme.com/Home/issue/IJCET?Volume=16&Issue=1