5 Essentials for Your Firm's AI Use Policy

Drew Pflaum
October 7, 2025
3 min read

A speaker at a recent CPA Australia webinar titled ‘Can you use AI and comply with your ethical obligations?’ posed a question that should resonate with every firm leader: “What are your staff gonna do if they dunno what the policy is or you don't have a policy? And if that makes you lose sleep at night, then you've got a problem you need to be doing something about.”

In Part 1 of this Safe Use of AI by Australian Accounting Firms series, we explored the hidden compliance risks of ‘Shadow AI’ and of accountants using free AI tools such as ChatGPT. Now we move from problem to solution. The cornerstone of adopting AI safely and managing its risks is a clear, practical AI Use Policy.

AI Use Policy for Accountants

This policy is a critical component of your Quality Management System (QMS) under APES 320. It provides your team with the clarity they need to innovate safely and demonstrates to your clients and insurers that you are managing this new technology responsibly.

Based on best practice governance templates, here are five essential components every Australian accounting firm must include in its AI Use Policy.

1. A Clear List of Prohibited Uses

Your policy must be unambiguous about what is not permitted. The most critical restriction for any accounting practice is the handling of sensitive data.

Your policy should strictly prohibit uploading any confidential, client-sensitive or personally identifiable information into any public or unapproved AI tool.

2. Mandatory Human Oversight and Accountability

As The Tax Institute's president, Tim Sandow, noted, AI-generated work requires the same quality control as work done by a junior staff member. Your policy must reinforce that AI is an assistant, not a replacement for professional judgment.

Embed a principle of mandatory human oversight. This means a qualified professional must always review, verify and ultimately take responsibility for any AI-generated output before it is used for client advice or external communication. The final accountability always rests with the practitioner, not the platform.

3. Strong Data Privacy and IP Protection

The policy must explicitly state that all AI use must comply with the Privacy Act 1988 (Cth). This reinforces your firm's commitment to protecting client data. It should also cover the protection of your firm's own intellectual property: your internal templates, workpapers and client lists are valuable assets that must not be exposed to unauthorised AI systems.

4. An Approved and Vetted AI Vendor List

A policy that only says what not to do is incomplete. To truly stop Shadow AI, you must give your team a safe and effective alternative. Your policy should name the specific AI platforms the firm has vetted and approved for professional use.

This is the most proactive step you can take. By investing in a secure, purpose-built tool, you eliminate the main reason staff seek out unauthorised solutions in the first place: the need for a better way to do their work.

5. Required Staff Training and Awareness

A policy is only effective if your team understands it. Your framework should include mandatory training for all staff on the firm’s AI policy, the risks of non-compliance and the correct use of approved tools. Documenting this training provides evidence of your commitment to quality management and professional standards.

With a strong policy in place, the final step is choosing the right technology. In Part 3, we will explore what to look for in an AI platform that was built by an accountant, for accountants.