
5 Practical Steps Every CIO Must Take to Protect Against Data Leakage in the Age of AI
By Suman Basu | vCIO, CEO, THATSIT


The boardroom conversation has shifted. A year ago, CIOs were being asked, "How do we adopt AI faster?" Today, the smarter question is, "How do we make sure AI doesn't become our biggest data liability?"

Generative AI is transforming how work gets done — but it has also opened a new class of data risk that most organisations are woefully underprepared for. Employees are pasting sensitive documents into ChatGPT. Developers are feeding proprietary code into Copilot. Customer data is finding its way into AI tools that were never vetted by IT.

Gartner projects that by 2027, GenAI will be a factor in more than 40% of enterprise data breaches. The threat is not theoretical. It is happening right now, quietly, one prompt at a time.

Here are the five steps I believe every CIO must take — today — to get ahead of it.

Step 1: Know Your Data Before AI Touches It

You cannot protect what you have not mapped.

The foundational step is establishing a clear data classification framework — public, internal, confidential, and restricted — and then understanding exactly where your sensitive data lives and flows. Most organisations have this for their databases and file servers. Very few have extended it to cover what happens when an employee copies a paragraph from a confidential contract into an AI chat window.

Ask yourself: if an employee summarises a board presentation using an external AI tool today, where does that data go? Is it stored? Is it used for training? Can a third party access it? If you cannot answer these questions, your classification framework has a gap.

Data mapping is not glamorous work, but it is the bedrock on which every other control depends.
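
To make the framework concrete, here is a minimal sketch of the four tiers expressed in code, with a naive scanner that flags sensitive text before it reaches an AI tool. The keyword patterns are placeholder assumptions; a real deployment would rely on proper DLP detection, document labels, and content fingerprinting rather than hand-rolled rules.

```python
import re
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Placeholder patterns, for illustration only. A production system
# would use real DLP detectors, not keyword lists.
PATTERNS = {
    Sensitivity.RESTRICTED: [
        r"(?i)\bboard (presentation|minutes)\b",
        r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-style identifier
    ],
    Sensitivity.CONFIDENTIAL: [
        r"(?i)\bcontract\b",
        r"(?i)\bcustomer (list|record)s?\b",
    ],
}

def classify(text: str) -> Sensitivity:
    """Return the highest tier whose patterns match; default to INTERNAL."""
    for level in sorted(PATTERNS, reverse=True):
        if any(re.search(p, text) for p in PATTERNS[level]):
            return level
    return Sensitivity.INTERNAL  # nothing is assumed public by default

print(classify("Please summarise this board presentation."))  # Sensitivity.RESTRICTED
```

The point is not the patterns themselves but the default: anything unmapped should be treated as internal, never public.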

Step 2: Write an AI Acceptable Use Policy — And Make It Specific

Shadow AI is the new Shadow IT — and it is growing faster.

A policy that says "use AI responsibly" is not a policy. It is a wish. An effective AI Acceptable Use Policy (AUP) should spell out exactly which data categories can and cannot be used with external AI tools, which tools are approved versus prohibited, and what the consequences of non-compliance are.
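
One way to keep the policy specific is to maintain a machine-readable version alongside the written document, so the same rules can later drive technical enforcement. The sketch below is hypothetical: the tool names and tier mappings are assumptions, not recommendations.

```python
# Hypothetical AUP expressed as data. Tool names are illustrative.
TIER_ORDER = ["public", "internal", "confidential", "restricted"]

AI_AUP = {
    # tool -> highest data tier it is approved to receive
    "max_tier_per_tool": {
        "private-llm": "restricted",       # self-hosted, inside our environment
        "enterprise-copilot": "internal",  # vendor DPA with zero-retention clause
        "public-chatbot": "public",        # no contract: public data only
    },
    "review_cycle_months": 6,  # matches the six-month review cadence below
}

def is_prompt_allowed(tool: str, data_tier: str) -> bool:
    """Allow a prompt only if the tool is approved for that tier."""
    max_tier = AI_AUP["max_tier_per_tool"].get(tool)
    if max_tier is None:  # unapproved tool: default deny
        return False
    return TIER_ORDER.index(data_tier) <= TIER_ORDER.index(max_tier)

print(is_prompt_allowed("public-chatbot", "confidential"))  # False
```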

Critically, this policy should be anchored to your existing compliance obligations. For organisations operating in India, that means alignment with the Digital Personal Data Protection Act (DPDPA). For those with global reach, GDPR, HIPAA, and sector-specific regulations apply. AI does not create new legal obligations, but it dramatically increases the surface area where existing ones can be violated.

The policy should be reviewed every six months, minimum. The AI landscape moves too fast for annual cycles.

Step 3: Enforce Controls Technically — Not Just Contractually

A policy without enforcement is optimism, not governance.

Technical controls need to evolve to match the threat. Specifically:

  • Extend your DLP (Data Loss Prevention) tools to cover AI endpoints. Many modern DLP platforms now support policies targeting browser-based AI tools. Use them.

  • Deploy API gateways if your organisation is building on top of LLMs. These gateways can inspect outbound payloads, redact sensitive content, and block requests that violate policy — before the data ever leaves your environment (a minimal sketch follows this list).

  • Consider private or self-hosted models for workflows involving your most sensitive data — M&A activity, HR records, legal documents, customer PII. The marginal cost of a private deployment is small compared to the cost of a breach.

  • Implement browser-level controls to restrict or monitor the use of unapproved AI tools on corporate devices.
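
To illustrate the gateway idea from the second bullet, here is a minimal sketch of an outbound filter that redacts obvious PII and blocks clearly marked documents before a prompt leaves the environment. The regex detectors and blocking terms are simplified assumptions; a production gateway would use a proper DLP engine wired into your classification tiers.

```python
import re

# Simplified detectors, for illustration. A real gateway would use a
# DLP engine rather than hand-rolled regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "[REDACTED_PHONE]"),
    (re.compile(r"\b\d{13,19}\b"), "[REDACTED_CARD]"),
]

BLOCK_TERMS = ("confidential", "restricted", "do not distribute")

def filter_outbound(prompt: str) -> str:
    """Block marked documents outright; otherwise redact PII patterns."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCK_TERMS):
        raise PermissionError("Blocked: prompt carries a confidentiality marking")
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(filter_outbound("Contact jane.doe@example.com about the Q3 numbers."))
# -> Contact [REDACTED_EMAIL] about the Q3 numbers.
```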

The goal is not to block AI — it is to route the right data to the right tools, with the right controls in place.

Step 4: Vet Your AI Vendors Like You Vet Your Cloud Providers

Many CIOs apply rigorous scrutiny to cloud infrastructure vendors but take AI tools at face value. That asymmetry is a risk.

Before any AI tool is approved for enterprise use, your procurement and security teams should be asking:

  • Where is prompt data stored, and for how long?

  • Is our data used to train or fine-tune the model?

  • Who — within the vendor's organisation or supply chain — can access our inputs and outputs?

  • Does the vendor hold SOC 2 Type II, ISO 27001, or equivalent certifications?

  • Will they sign a Data Processing Agreement (DPA), and does it include zero-retention clauses?

Many enterprise AI contracts include opt-outs from training data pipelines — but only if you negotiate for them. Do not assume. Ask, and get it in writing.

Your AI vendor risk register should sit alongside your cloud and SaaS vendor risk registers. Same rigour, same cadence.
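
To apply the same rigour in practice, the answers to those questions can be captured as structured fields in the register rather than free text. The record layout below is a hypothetical sketch built from the questions above; the field names and example vendor are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIVendorRiskEntry:
    """One row in an AI vendor risk register. Field names are illustrative."""
    vendor: str
    prompt_retention_days: int                # where prompts are stored, and for how long
    used_for_training: bool                   # is our data used to train or fine-tune?
    access_parties: list[str] = field(default_factory=list)  # who can see inputs/outputs
    certifications: list[str] = field(default_factory=list)  # e.g. "SOC 2 Type II"
    dpa_signed: bool = False
    zero_retention_clause: bool = False       # negotiated and in writing
    last_reviewed: date = field(default_factory=date.today)

entry = AIVendorRiskEntry(
    vendor="ExampleAI",  # hypothetical vendor
    prompt_retention_days=30,
    used_for_training=False,
    certifications=["SOC 2 Type II", "ISO 27001"],
    dpa_signed=True,
)
```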

Step 5: Build a Culture of AI-Aware Security

Every other step on this list can be undermined by a single uninformed employee.

Traditional security awareness training teaches people not to click phishing links and not to share passwords. It does not teach them that summarising a client proposal in an external AI tool may constitute a data breach. That gap is where the real risk lives.

AI-aware security training should cover:

  • What constitutes sensitive data in an AI context (it is not always obvious)

  • How to identify whether a tool is approved for enterprise use

  • What to do — and who to tell — if they have already shared something they should not have

Beyond training, create psychological safety. If employees are afraid to report an AI misstep, you lose visibility. A low-friction, no-blame reporting channel for AI data incidents is as important as the controls themselves.

Run tabletop exercises. Simulate scenarios: "An employee used an external AI to draft a response to a regulatory enquiry. The prompt included case file data. What do we do?" The organisations that handle these incidents well are the ones that have practised.


The Bottom Line

AI is not going away, and CIOs who respond to this risk by blocking AI tools wholesale will discover that employees simply find workarounds. The answer is not restriction — it is governance.

The CIOs who get this right will be the ones who mapped their data, wrote clear policies, enforced them technically, vetted their vendors seriously, and invested in a culture where people understand the stakes.

The window to get ahead of this is narrowing. The organisations building these foundations now will be resilient. Those waiting for a breach to motivate action will be learning the hard way.


What steps has your organisation taken to manage AI data risk? I would be interested to hear what is working — and what is not.

