How can CIOs protect Personally Identifiable Information (PII) for a new class of data consumers?
- Limitation: Confidential computing requires companies to incur additional costs to move their ML-based services to platforms that demand specialized hardware. The solution is also not entirely risk-free: an attack in May 2021 collected and corrupted data from TEEs that rely on Intel SGX technology.
While these solutions are helpful, their limitations become apparent when training and deploying AI models. The next stage in PII privacy needs to be lightweight and complement existing privacy measures and processes while providing access to datasets entangled with sensitive information.
Walking the tightrope of PII confidentiality with AI: A new class of PII protection
We’ve examined some modern approaches to safeguarding PII and the challenges the new class of data consumers faces. It is a balancing act: PII can’t be exposed to AI, yet data consumers must use as much data as possible to generate new AI use cases and value. Moreover, most modern solutions address data protection during the ML training stage without a viable answer for safeguarding real-world data during AI deployments.
Here, we need a future-proof solution to manage this balancing act. One such solution I have used is the stained glass transform, which enables organisations to extract ML insights from their data while protecting against the leakage of sensitive information. The technology, developed by Protopia AI, can transform any data type by identifying what AI models require, eliminating unnecessary information, and transforming the data as much as possible while retaining near-perfect accuracy. To safeguard users’ data while working on AI models, enterprises can use the stained glass transform to maximize the data available for ML training and deployment, achieving better predictions and outcomes while worrying less about data exposure.
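Protopia’s implementation is proprietary, but the general idea of a learned stochastic transform can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration – the model, dimensions, and loss weighting are placeholders, not Protopia’s actual method: noise parameters are trained to be as large as possible while a frozen downstream model keeps its predictions intact.

```python
# Hypothetical sketch of a learned stochastic input transform (illustrative
# only -- NOT Protopia AI's implementation). The idea: learn per-feature
# noise that is as large as possible while the frozen downstream model's
# task loss stays low, so raw inputs never need to be exposed.
import torch
import torch.nn as nn

class StochasticTransform(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Learnable log standard deviation per input feature.
        self.log_sigma = nn.Parameter(torch.full((dim,), -3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.log_sigma.exp() * torch.randn_like(x)

model = nn.Linear(16, 2)            # stand-in for a trained, frozen model
for p in model.parameters():
    p.requires_grad_(False)

transform = StochasticTransform(dim=16)
opt = torch.optim.Adam(transform.parameters(), lr=1e-2)
x, y = torch.randn(256, 16), torch.randint(0, 2, (256,))

for _ in range(200):
    task_loss = nn.functional.cross_entropy(model(transform(x)), y)
    # Trade off utility (low task loss) against privacy (large noise).
    loss = task_loss - 0.1 * transform.log_sigma.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, once the transform is trained, only the transformed representations ever leave the data owner’s boundary; the downstream model consumes them directly during both training and inference.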
More importantly, this technology adds a new layer of protection throughout the ML lifecycle – for both training and inference. This closes a significant gap: most modern solutions leave privacy unresolved during the ML inference stage.
The latest Gartner guide to AI Trust, Risk, and Security Management (AI TRiSM) highlights the same problem and solution. AI TRiSM guides analytics leaders and data scientists to ensure AI reliability, trustworthiness, and security.
While there are multiple solutions to protect sensitive data, the end goal is to enable enterprises to leverage their data to the fullest to power AI.
Choosing the right solution(s)
Choosing the right privacy-preserving solutions is essential for solving your ML and AI challenges. You must carefully evaluate each solution and select those that complement one another, augment your existing measures, or stand alone to fulfil your unique requirements. For instance, synthetic data can enhance real-world data, improving the performance of your AI models. You can use synthetic data to simulate rare events that may be difficult to capture, such as natural disasters, and to augment real-world data when it’s limited.
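As a minimal sketch of that augmentation pattern, the snippet below creates synthetic rare-event samples by interpolating between recorded ones. The interpolation scheme (SMOTE-like) and the sensor-data example are purely illustrative assumptions, not a prescribed method:

```python
# Illustrative sketch: augment a scarce class with synthetic samples by
# interpolating random pairs of real ones (a SMOTE-like scheme). Real
# pipelines should validate that synthetic points remain plausible.
import numpy as np

rng = np.random.default_rng(0)

def synthesize(rare_samples: np.ndarray, n_new: int) -> np.ndarray:
    """Create n_new synthetic points between random pairs of real ones."""
    i = rng.integers(0, len(rare_samples), size=n_new)
    j = rng.integers(0, len(rare_samples), size=n_new)
    alpha = rng.random((n_new, 1))          # mixing weight per sample
    return rare_samples[i] + alpha * (rare_samples[j] - rare_samples[i])

# e.g. 20 recorded disaster events, each described by 5 sensor features
rare_events = rng.normal(size=(20, 5))
augmented = np.vstack([rare_events, synthesize(rare_events, n_new=200)])
print(augmented.shape)                       # (220, 5)
```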
Another promising approach is to pair confidential computing with data transformation, so that data is transformed before it enters the trusted execution environment. The transformation acts as an additional barrier, minimizing the attack surface along a different axis: plaintext data is not compromised even if the TEE is breached. So, choose the privacy-preserving solutions that fit your needs and maximize your AI’s performance without compromising data privacy.
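To make that layering concrete, here is a conceptual sketch in which data is obfuscated client-side before it is ever sent to an enclave. Both helper functions are hypothetical placeholders – real deployments would use a vendor SDK for attestation and transport:

```python
# Conceptual defence-in-depth sketch: the TEE only ever receives
# transformed records. transform_for_model and send_to_enclave are
# hypothetical placeholders, not a real vendor API.
import numpy as np

def transform_for_model(x: np.ndarray) -> np.ndarray:
    # Placeholder for a learned stochastic transform (see earlier sketch).
    return x + 0.5 * np.random.randn(*x.shape)

def send_to_enclave(payload: np.ndarray) -> None:
    # In practice: verify the enclave's attestation report, then send the
    # payload over an encrypted channel bound to that attestation.
    print(f"sending transformed record of shape {payload.shape} to the TEE")

raw_record = np.random.randn(1, 16)               # PII-bearing features
send_to_enclave(transform_for_model(raw_record))  # plaintext never leaves
```

Even if the enclave were somehow breached, an attacker would only observe the transformed representation, never the original plaintext.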
Wrap up
Protecting sensitive data isn’t just a tech issue – it’s an enterprise-wide challenge. As new data consumers expand their AI and ML capabilities, securing Personally Identifiable Information (PII) becomes even more critical. To create high-performance models that deliver real value, we must maximize data access while safeguarding the data itself. Every privacy-preserving solution must be carefully evaluated against our most pressing AI and ML challenges. Ultimately, we must remember that PII confidentiality is not just about compliance and legal obligations but about respecting and protecting the privacy and well-being of individuals.