Unchecked AI, Unseen Dangers: What the DeepSeek Breach Means for SA Companies and POPIA Compliance

The breach underscores the substantial security risks that arise when AI companies process large volumes of user-submitted data, including sensitive content, particularly where users have limited control or oversight over how that information is handled and secured.

Global Breach, Local Lessons

The DeepSeek incident illustrates how AI innovation is outpacing legal regulation in most jurisdictions. While South Africa has yet to adopt AI-specific laws, businesses remain accountable under existing legislation, including the Protection of Personal Information Act 4 of 2013 (POPIA), which governs the protection and security of personal information.

Internationally, regulators are taking decisive action. Both Ireland’s Data Protection Commission and Italy’s Garante have launched investigations into DeepSeek’s security failures. These authorities have a track record of issuing substantial penalties for data protection breaches, reinforcing that while AI operates across borders, legal accountability remains tied to specific jurisdictions and their legal frameworks.

For South African businesses, this underscores the importance of ensuring compliance with data protection laws, particularly in environments where employees increasingly rely on AI tools in the workplace.

POPIA Implications for South African Employers

The DeepSeek breach highlights a growing concern: how employees interact with AI models in the workplace, particularly when using publicly available tools like ChatGPT for work-related tasks.

POPIA requires organisations to prevent unauthorised disclosure of personal information to third parties, and this includes AI platforms. Because POPIA was enacted before the accelerated adoption of AI in the workplace, these tools introduce novel vulnerabilities that require specific consideration and guidance.

A single instance of an employee inputting sensitive data into a public AI model could constitute a breach of POPIA, potentially resulting in financial, reputational and legal consequences.

Essential Steps for Employers

AI offers significant opportunities but introduces knowledge gaps and compliance challenges. South African employers can proactively implement several measures to protect data while maintaining compliance:

  • Establish a comprehensive AI policy: Define permissible tools and outline usage guidelines that align with POPIA’s conditions, including data minimisation or redaction (a simple illustration follows this list), valid consent, relevant declarations on AI use and secure data transfers.
  • Implement regular training programmes: Conduct ongoing training on the risks of using AI platforms and sharing sensitive data with AI models, and ensure that employees, contractors and service providers understand POPIA principles and their legal implications.
  • Create incident response protocols: Develop clear procedures for identifying, containing and reporting data breaches, emphasising prompt and transparent reporting and action.
  • Maintain regular AI usage audits: Monitor organisational practices to identify unauthorised AI tool adoption, mitigate risks and ensure compliance with organisational policies.
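
To make the data minimisation and redaction point more concrete, the sketch below is a hypothetical Python illustration: the redact_for_ai helper and its patterns are illustrative assumptions, not part of any official guidance or the firm’s advice. It strips obvious identifiers, such as email addresses, 13-digit South African ID numbers and local phone numbers, from text before it is submitted to any external AI platform. Pattern-based redaction is only a partial safeguard and should supplement, not replace, policy controls and training.

import re

# Hypothetical patterns for obvious personal identifiers. A real policy should
# define which identifiers matter; regex matching will not catch everything.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SA_ID_NUMBER": re.compile(r"\b\d{13}\b"),         # 13-digit SA identity numbers
    "PHONE": re.compile(r"(?<!\d)(?:\+27|0)\d{9}\b"),  # common SA phone formats
}

def redact_for_ai(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tags before the
    text is pasted into or sent to an external AI platform."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = ("Please summarise this complaint from jane.doe@example.com, "
             "ID 8001015009087, phone 0821234567.")
    print(redact_for_ai(draft))
    # Output: Please summarise this complaint from [EMAIL REDACTED],
    # ID [SA_ID_NUMBER REDACTED], phone [PHONE REDACTED].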

Employee Responsibilities

Employees play a crucial role in preventing AI-related data breaches. Beyond organisational exposure, employees should be aware that negligence in handling sensitive data could result in reputational damage, liability and disciplinary action. Essential precautions include:

  • Strict policy adherence: Follow organisational AI usage guidelines meticulously, treating all tools as restricted unless verified.
  • Consultation with management: Obtain approval before using or implementing any AI tools, including (and especially) widely available public models, for workplace tasks.
  • Data protection vigilance: Never input company, client or personal information into unauthorised platforms, or into authorised platforms where usage restrictions apply.
  • Proactive security reporting: Immediately notify management or IT teams of suspected AI-related vulnerabilities.

Staying Ahead

The DeepSeek breach is a stark reminder that AI’s benefits come with significant risks if security and compliance are neglected. While South African businesses stand to gain from AI-driven efficiencies, data protection and appropriate usage must remain a priority.

By institutionalising clear AI policies and responsible usage guidelines, organisations can harness AI’s potential while mitigating preventable compliance risks.

--

Read the original publication at Cliffe Dekker Hofmeyr