Is 'Shadow AI' Undermining Your Accounting Firm's Compliance?
When the Chief Digital Officer of KPMG Australia, John Munnelly, described his firm’s early experiments with AI, he recalled a single discovery that “absolutely scared the pants off me”. The firm’s AI had found a document on its servers listing thousands of employee credit card numbers. The firm’s immediate response was to block ChatGPT and fundamentally reassess the risk.
If one of the Big Four firms, with all its resources, uncovered such a significant internal risk, then every practice leader in Australia faces a critical question: do you know what data is being fed into AI tools within your firm, and by whom?
The answer, most likely, is no. And that creates a compliance risk that cannot be ignored.
The Rise of 'Shadow AI' in Practice
Your team is under pressure. They face client demands, complex compliance work and a chronic shortage of skilled staff. So, when they discover a tool that helps them work faster, it is only natural for them to use it.
This is how ‘Shadow AI’ enters your firm. The term refers to the unmonitored or unauthorised use of AI applications by staff, without the knowledge or oversight of management. It happens, for example, when an accountant uses a free online tool on their phone to summarise a client document, or drafts a sensitive email using a public AI chatbot. They are not being malicious; they are trying to be more efficient.
Yet this creates a profound risk. As a speaker at a recent CPA Australia webinar, ‘Can you use AI and comply with your ethical obligations?’, warned: when your staff “are using ChatGPT and loading up stuff into that, that has confidential client data, you're making that available to the world.”
A Direct Challenge to Your Professional Obligations
This data security issue is a direct challenge to your professional and ethical obligations.
Tim Sandow, president of The Tax Institute, has been clear on this point. He has warned that entering confidential information into a tool like ChatGPT would count as a breach of a practitioner's privacy obligations. The risk, he explained, is “If you are using an open-source program like ChatGPT and you are using it to, say, summarise a letter of advice, and it’s got confidential information, then you actually don’t know where that information is going”.
Furthermore, this unmonitored activity undermines your firm’s Quality Management System (QMS) as required by APES 320. It creates undocumented processes and unknown quality risks that you cannot manage. It also puts your firm at odds with the principles of integrity and confidentiality at the heart of APES 110 Code of Ethics for Professional Accountants.
As a firm leader, you are responsible for the work your practice produces and for safeguarding your clients' data. The uncontrolled use of AI puts both of those responsibilities in jeopardy.
Why You Should Not Simply Ban AI
The answer should not be to ban AI. The technology is too powerful and the productivity benefits are too great. But you cannot ignore the threat it poses to your firm's reputation and compliance standing.
The first step is to recognise the reality of these risks, especially the Shadow AI use of free public tools like ChatGPT.
The next is to create a framework to manage them.
In Part 2 of our series, we will highlight important considerations as you develop an AI Use Policy for your firm, helping you build the safety net you need to innovate with confidence.