The Algorithmic Breach: Why Public AI is the Newest Liability in the Boardroom

The intersection of fiduciary duty and emerging technology has reached a critical juncture. For years, the conversation around Artificial Intelligence (AI) in the corporate world focused on productivity and competitive advantage. However, a recent landmark decision by the Federal Court of Australia has shifted that focus toward a more sobering reality: the personal liability of senior executives and directors.
In the matter of ASIC v Bekier [2026] FCA 231, the Federal Court examined the conduct of senior executives and non-executive directors within the Star Entertainment Group. While the Court found that certain non-executive directors had not breached their duties, it determined that two senior executives had failed to meet the required standard.
Of particular importance to the future of corporate governance are the observations made by Justice Lee in the judgment. His Honour’s comments serve as a stern warning regarding the informal use of public AI tools, such as ChatGPT and Gemini, in the performance of statutory duties.
The “Glibness” of Public AI and the Non-Delegable Duty
Justice Lee’s analysis highlights a fundamental tension between the “glib” nature of Generative AI and the rigorous demands of Section 180 of the Corporations Act 2001 (Cth). His Honour noted that while tools like ChatGPT can produce remarkably coherent and authoritative-sounding text, they lack the capacity for genuine professional judgment.
The danger identified by the Court is twofold. First, there is the risk of “hallucination”, where an AI confidently presents false information as fact. Second, and perhaps more insidious for governance, is the risk that executives may outsource their critical thinking to an algorithm. In the eyes of the law, a director’s duty of care and diligence is non-delegable. One cannot simply point to a prompt and a response as a substitute for the “active and inquisitive” mind required of a corporate officer.
For an executive to rely on an unverified, public-facing AI model to draft board papers or assess risk is to invite a breach of duty. As Justice Lee’s comments suggest, the ease of these tools often masks a lack of evidentiary basis and a failure to engage with the specific, nuanced context of the organisation’s legal and ethical obligations.
The Risk of “Shadow AI” and Boardroom Silence
Perhaps the most pervasive risk identified in the wake of this judgment is the rise of “Shadow AI”: executives using public AI tools to prepare reports or briefings without disclosing this use to the Board. When an executive presents an AI-generated draft as their own work, they are effectively bypassing the critical layers of human verification that the Board assumes are in place.
Equally concerning is a Board that remains silent on the issue. If a Board has not formally discussed the adoption of AI, or implemented robust policies and controls to manage its use, it creates a governance vacuum. Without a clear policy framework, the organisation is exposed to:
- Confidentiality Breaches: Sensitive corporate data being “fed” into public models, where it is ingested into global training sets and potentially subject to foreign access laws like the US CLOUD Act.
- Regulatory Non-Compliance: A failure to maintain an audit trail of how decisions were reached and what data was relied upon.
- Information Asymmetry: Directors making decisions based on “hallucinated” or glib AI outputs provided by executives who have failed to disclose the source of the information.
The Sovereignty and Privacy Vacuum
Beyond the quality of the output, the use of public AI tools by executives introduces a secondary, systemic risk: the compromise of data sovereignty.
When a sensitive board paper or a confidential strategic memorandum is “fed” into a public AI model, that data often leaves Australian jurisdiction. It is ingested into global training sets, potentially accessible by the platform provider and subject to foreign access laws, such as the US CLOUD Act. For Australian organisations, particularly those in regulated sectors, this represents a catastrophic failure in information security and a potential breach of privacy and confidentiality obligations.
The Athena Board Approach: Security-First Intelligence
At Athena Board, we recognise that the answer to these risks is not the prohibition of AI, but the implementation of AI within a secure, sovereign environment.
In contrast to the public tools discussed in the Federal Court, the Athena Board approach is built upon four pillars of governance-grade intelligence:
- Jurisdictional Integrity: AI processes operate within a secure, locally hosted “Lockbox” architecture. Your sensitive deliberations never leave the jurisdiction where you have determined your data should be stored, and they are never used to train public models.
- Human-in-the-Loop Governance: Athena Board’s upcoming AI-powered minutes creation and transcription features are designed to assist, not replace, the Company Secretary. By providing a structured draft based on actual meeting transcripts, we eliminate the “glibness” of general AI, ensuring every sentence is tethered to the reality of the boardroom discussion.
- Policy-Driven Controls: The Athena Board platform allows Boards to set granular controls, ensuring that if AI is used to summarise or draft content, its use is transparent, disclosed, and governed by established board policy.
- Verifiable Accuracy: Rather than relying on the broad, unverified training data of public models, Athena’s tools draw exclusively on the materials that you designate. This mitigates the risk of hallucination and provides directors with a reliable audit trail for their decision-making.
Conclusion: A New Standard
The decision in ASIC v Bekier makes it clear that the Federal Court will not view the convenience of AI as a valid excuse for a lapse in executive duty. Directors and executives must ensure that any technology used in the service of their duties is as rigorous and secure as the governance standards they are sworn to uphold.
As boards look toward the future, the choice is clear: continue the high-risk use of public, unverified algorithms, or adopt a governance-specific platform that treats security, sovereignty, and accuracy as foundational requirements.
To learn more about how Athena Board is building the future of secure, AI-assisted governance, visit www.athenaboard.com or contact us at sales@athenaboard.com.