*Seshni Moodley is an admitted attorney and director of Seshni Moodley Attorneys Incorporated, with expertise in digital, civil and criminal law. She holds a master's in human rights law and is currently pursuing her PhD in human rights law.*
South Africa stands at a crossroads. With the rapid adoption of generative AI and large-scale models in business, government and civil society, the country must determine how to safeguard citizens’ rights and maintain public trust.
Policy must be bold, practical and enforceable. Measures including watermarking, meaningful explainability and strict alignment with the Protection of Personal Information Act 4 of 2013 (POPIA) are not optional extras. They form the foundation of a trustworthy AI ecosystem. There must be practical standards, independent audits and sectoral risk-tiering.
South Africa's Draft National AI Policy adopts a risk-based, human-centred approach and makes it clear that organisations will be held accountable for the behaviour of autonomous systems.
For South African organisations, irrespective of the sector in which they operate, this is not just a policy conversation. Technical measures such as watermarking and governance measures, including explainability, must be integrated with POPIA obligations from the outset.
Neither mechanism, however useful, is a silver bullet, and neither is sufficient on its own. The idea of watermarking, embedding machine-readable markers or provenance metadata into AI outputs and training datasets, is appealing: it offers a technical means to differentiate synthetic from human-generated content and to track the lineage of data used to train models.
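To make the idea concrete, here is a minimal sketch in Python of what a machine-readable provenance record might look like. It is an illustration only, not an implementation of any formal provenance standard such as C2PA; the field names and the signing key are assumptions for this example.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key; in practice this would live in a key-management service.
SIGNING_KEY = b"example-secret-key"

def attach_provenance(output_text: str, model_id: str) -> dict:
    """Build a machine-readable provenance record for an AI output.

    The record binds the content (via its hash) to the model that produced it,
    and an HMAC signature lets an auditor verify the record was not tampered with.
    """
    record = {
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "synthetic": True,  # flags the content as machine-generated
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(output_text: str, record: dict) -> bool:
    """Check both the content hash and the record's signature."""
    claimed_sig = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    content_ok = (
        hashlib.sha256(output_text.encode()).hexdigest() == record.get("content_sha256")
    )
    return content_ok and hmac.compare_digest(claimed_sig, expected_sig)

text = "This paragraph was drafted by a generative model."
prov = attach_provenance(text, model_id="demo-model-v1")
print(verify_provenance(text, prov))              # True
print(verify_provenance(text + " (edited)", prov))  # False
```

The keyed signature is the point of the exercise: a provenance record that anyone can forge adds little to a forensic audit.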
For organisations navigating reputational and regulatory risks, watermarking can be a useful way to demonstrate responsible practice and strengthen the evidence available during forensic audits. There are, however, limits. A key concern is that watermarking could become more of a symbolic compliance gesture than a tool for meaningful accountability if regulators treat it as a checkbox without proper standards or independent verification.
What businesses can do now is straightforward. They should ensure that any AI tools they procure use provenance standards that work across different systems, require vendors to be transparent about how watermarking works and what its limitations are, and build independent checks into their audit processes.
For the public sector, any move towards mandatory watermarking must include clear technical rules, a robust accreditation system for those who verify watermarks, and a realistic assessment of where watermarking is most effective—such as in clearly synthetic media. It is less effective when dealing with complex model behaviour or heavily transformed data.
Explainability is often framed as binary, but systems are not simply explainable or not, and there is no single technical metric for it. In reality, the level of explanation required depends on the risks involved, who the explanation is for, and what decisions it will inform. For example, a customer whose loan application has been declined requires a plain-language explanation, while an auditor assessing fairness needs far more detailed technical information.
High-risk systems—those affecting people’s rights, livelihoods or safety—require layered explainability: clear documentation of training data, purpose, limitations and performance; decision logs and audit trails to trace specific outputs; human-in-the-loop checks with escalation for review or override; and clear, user-focused explanations tailored to affected individuals.
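As a rough sketch of the decision-log element in practice, the Python example below records one automated decision in an append-only audit log. The field names and file format are illustrative assumptions, not a prescribed standard.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    """One audit-trail record for a single automated decision.

    Captures enough context for an auditor to trace the output: which
    model version ran, what inputs it saw, what it decided, and whether
    a human reviewed or overrode the result.
    """
    model_version: str
    input_summary: dict          # redacted/summarised inputs, not raw personal data
    decision: str
    confidence: float
    human_reviewed: bool = False
    override_reason: str | None = None
    entry_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(entry: DecisionLogEntry, path: str = "decisions.jsonl") -> None:
    """Append the entry to a JSON Lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example: a declined loan application that was escalated for human review.
entry = DecisionLogEntry(
    model_version="credit-scorer-2.3",
    input_summary={"income_band": "B", "credit_history_years": 4},
    decision="decline",
    confidence=0.62,
    human_reviewed=True,
)
log_decision(entry)
```

Note that the log stores a summary of the inputs rather than raw personal information, which keeps the audit trail itself from becoming a new POPIA exposure.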
One-size-fits-all mandates risk stifling innovation or producing meaningless compliance. Regulators should publish sector-specific explainability checklists for finance, healthcare and social services that set minimum documentation and testing requirements while allowing technical flexibility.
Robust rules for responsible data use already exist in POPIA. These include lawful processing, clear purpose limitations, data minimisation, security safeguards and accountability. However, AI systems introduce new practical challenges, including large-scale data scraping, unclear data origins and cross-border model deployment. This means POPIA must be embedded throughout the entire AI lifecycle.
South African organisations should incorporate POPIA compliance into every stage of their AI systems. This includes documenting the lawful basis for personal information used in training, conducting data protection impact assessments for high-risk or decision-making models, and applying privacy-preserving techniques such as pseudonymisation, aggregation and differential privacy to limit re-identification risks.
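A minimal sketch of one such technique, salted-hash pseudonymisation, appears below. The key handling is an assumption for illustration, and pseudonymisation alone does not amount to anonymisation or differential privacy; it merely removes direct identifiers while preserving the ability to link records.

```python
import hashlib
import hmac

# Hypothetical secret key; POPIA-aligned practice would store this
# separately from the pseudonymised dataset, under access control.
PEPPER = b"keep-this-secret-and-separate"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash.

    The same input always maps to the same token, so records can still
    be linked for analysis, but without the secret key the token cannot
    be reversed to recover the original identifier.
    """
    return hmac.new(PEPPER, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"id_number": "example-id-123", "loan_amount": 25000}
safe_record = {**record, "id_number": pseudonymise(record["id_number"])}
print(safe_record)
```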
The key message is simple: when AI systems process personal information or influence decisions about individuals, POPIA exposure increases. This risk is, however, manageable and preventable if compliance is built into procurement, development and deployment processes.
More can be done in South Africa to align with global trends. Regulatory frameworks must protect individuals’ rights while enabling local innovation. But policy without enforcement is merely symbolic. The Draft National AI Policy already points the way by emphasising a risk-based, human-centred approach and by treating corporate accountability for autonomous systems as a core concern.
These measures will only be effective if supported by shared standards, independent audits and sector-specific guidance. They represent the enforceable, risk-proportionate tools that the Draft Policy advocates.
*The opinions expressed in this article do not necessarily reflect the views of the newspaper.*