Decoding the Australian Voluntary AI Safety Standard: A Practical Guide for SMEs

Written by Janey Treleaven with help from NotebookLM by Google and Claude 3.5 Sonnet


Today, we're diving into the new Australian Voluntary AI Safety Standard. As someone with ADHD, I find reading policy documents challenging, but I'm excited to share how I tackled this one using an innovative approach.


First, let me introduce you to an experimental AI tool from Google called Notebook LM. This tool allows you to upload documents and then chat with the AI about the content, save outputs as notes, and even generate a lively two-person podcast. I used Notebook LM to get a high-level understanding of the AI Safety Standard before I read it in detail.


Now, don't get me wrong – as far as government documents go, this one is pretty well-written, and I'm an advocate for the approach they're taking. However, using Notebook LM provided a great way to consume the document at a high level before diving into the details.


For those who prefer audio content, here is the podcast the AI tool generated, in less than a minute, from just the policy document:
AI Safety Standard - Audio Overview (NotebookLM by Google)

"I use AI, I don't develop it. Does this even apply to me?"


If you've been wondering whether this standard applies to your business because you use AI but don't develop it, the answer is a resounding yes!


If you use AI in any aspect of your business, you are what's called an "AI Deployer". And this standard is particularly focused on organizations like yours.


The standard recognizes that most Australian businesses using AI are deployers rather than developers. It's designed to be especially helpful for small and medium-sized enterprises (SMEs) that may not have in-house AI expertise.


Even if you're relying on third-party AI systems, this standard provides guidance on how to work with your suppliers to ensure responsible AI use. Whether you're using a chatbot on your website, an AI-powered analytics tool, or any other form of AI in your business operations, this standard is here to help you navigate the complexities of safe and responsible AI use.


The Ten Guardrails: What They Mean for SMEs


Before we dive into how to implement these guardrails, let's break them down and see what they mean in practice for small and medium-sized businesses:

  1. Accountability: Clear roles and responsibilities. Ask: Who in your business is responsible for AI decisions?

  2. Risk Management: Identify and plan for risks. Ask: What could go wrong with your AI use, and how would you handle it?

  3. Data Governance: Manage data properly. Ask: How are you ensuring the quality and security of your data?

  4. Testing and Monitoring: Regularly check AI systems. Ask: How often do you check that your AI tools are working correctly?

  5. Human Oversight: Keep humans in the loop. Ask: Are there always humans involved in important AI-driven decisions?

  6. Transparency: Be open about AI use. Ask: Do your customers know when they're interacting with AI?

  7. Contestability: Allow challenges to AI decisions. Ask: Can people easily question or appeal AI-driven outcomes?

  8. Supply Chain Transparency: Understand third-party AI tools. Ask: Do you know how your AI vendors ensure safety and reliability?

  9. Record Keeping: Maintain documentation. Ask: Could you explain how and why your AI made a specific decision if asked?

  10. Stakeholder Engagement: Consider AI's impact on all groups. Ask: Have you considered how your AI use affects different stakeholders?


Remember, you don't need to implement everything at once – start with what's most relevant to your business and build from there.


Implementing the Standard: A 4-Step Approach for SMEs

[Workflow diagram with 4 steps: 1. "Establish AI Leadership & Strategy", 2. "Inventory, Assess, & Document Your AI Tools", 3. "Ensure Transparency and Contestability", 4. "Review & Improve Regularly".]
Credit: Ideogram AI prompted by Janey

Now that we understand the guardrails, let's look at a practical, four-step approach to implementing the standard in your business:


1. Establish AI Leadership and Strategy

  • Appoint an AI Owner: Designate someone (it could be you!) responsible for overseeing AI use in your business.

  • Create an AI Strategy: Document your approach to AI, including goals and ethical considerations.

  • Develop an Acceptable Use Policy: Set clear guidelines for how employees should use AI tools.

  • Provide Training: Ensure your team understands your AI strategy and policies.


2. Inventory, Assess, and Document Your AI Tools

  • List All AI Tools: Create a central inventory of all AI tools used in your business.

  • Evaluate Each Tool: Assess each tool against the 10 guardrails, considering:

    • Risks and potential impacts

    • Data management practices

    • Testing and monitoring processes

  • Keep Good Records: For each AI tool, document:

    • Its purpose and use cases

    • Risk assessments and mitigation strategies

    • Testing results and monitoring data
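If you'd like something more structured than a spreadsheet, the inventory entries above could be captured as a simple record per tool. Here's a minimal sketch in Python; every field name is illustrative, not something the standard prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a central AI tool inventory (illustrative fields only)."""
    name: str
    purpose: str                                      # what the tool is used for
    vendor: str                                       # supplier, for supply-chain transparency
    risks: list = field(default_factory=list)         # identified risks
    mitigations: list = field(default_factory=list)   # how each risk is handled
    last_reviewed: str = ""                           # date of the most recent assessment

# Hypothetical entry for a website chatbot
chatbot = AIToolRecord(
    name="Website chatbot",
    purpose="Answer common customer questions",
    vendor="ExampleBot Pty Ltd",
    risks=["Incorrect answers", "Collecting personal data"],
    mitigations=["Human review of transcripts", "No personal data stored"],
    last_reviewed="2024-10-01",
)
print(chatbot.name, "-", len(chatbot.risks), "risks logged")
```

Even a lightweight record like this gives you something concrete to show if anyone asks how a tool was assessed.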


3. Ensure Transparency and Contestability

  • Be Open About AI Use: Decide how to inform customers and stakeholders about your AI use.

  • Allow Challenges: Create a process for people to question or contest AI-driven decisions.
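A lightweight way to support the "allow challenges" point is a simple register of contested decisions, so every challenge is tracked through to an outcome. A sketch, assuming nothing more than a list of entries (all names and example values are hypothetical):

```python
from datetime import date

# Minimal register of contested AI decisions (illustrative only)
contest_log = []

def log_contest(decision, reason, contact):
    """Record a customer's challenge to an AI-driven decision for human review."""
    entry = {
        "received": date.today().isoformat(),
        "decision": decision,   # what the AI decided
        "reason": reason,       # why the customer disputes it
        "contact": contact,     # how to reply with the outcome
        "status": "open",       # updated to "resolved" after human review
    }
    contest_log.append(entry)
    return entry

entry = log_contest(
    decision="Loan application flagged as high risk",
    reason="Customer says income data was outdated",
    contact="customer@example.com",
)
print(len(contest_log), "open challenge(s)")
```

The point isn't the code: it's that every challenge gets a date, an owner, and a status, rather than disappearing into an inbox.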


4. Review and Improve Regularly

  • Set Up Regular Reviews: Periodically review and update your AI practices to stay current with best practices and regulations.

  • Continuously Improve: Use insights from reviews to enhance your AI strategy, tool assessments, and transparency measures.


By following these four steps and considering each guardrail, you're well on your way to implementing the Australian Voluntary AI Safety Standard in your business. Remember, the goal is to use AI responsibly and safely, not to create bureaucracy. Start small, focus on what's most important for your business, and improve over time.


Mapping the 4 Steps to the 10 Guardrails


To help you see how these steps align with the guardrails, here's a quick reference:

  1. Establish AI Leadership and Strategy: Addresses Accountability (Guardrail 1) and lays the foundation for all other guardrails.

  2. Inventory, Assess, and Document: Covers Risk Management (2), Data Governance (3), Testing and Monitoring (4), Supply Chain Transparency (8), and Record Keeping (9).

  3. Ensure Transparency and Contestability: Directly addresses Transparency (6), Contestability (7), and Human Oversight (5).

  4. Review and Improve Regularly: Supports Stakeholder Engagement (10) and ensures ongoing compliance with all guardrails.
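As a sanity check, the mapping above can be written as a small lookup and verified to touch all ten guardrails (a sketch of the mapping in this post, not part of the standard itself):

```python
# Which guardrails (1-10) each implementation step touches, per the mapping above
step_guardrails = {
    "1. Establish AI Leadership and Strategy": {1},
    "2. Inventory, Assess, and Document": {2, 3, 4, 8, 9},
    "3. Ensure Transparency and Contestability": {5, 6, 7},
    "4. Review and Improve Regularly": {10},
}

# Union of all steps should cover guardrails 1 through 10
covered = set().union(*step_guardrails.values())
print("Guardrails covered:", sorted(covered))
print("All ten covered:", covered == set(range(1, 11)))
```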


This approach simplifies the implementation process while still covering all aspects of the AI Safety Standard. It provides a clear, actionable plan for SMEs to adopt responsible AI practices.


Need Help? We're Here for You


Implementing these guardrails might seem like a daunting task, especially if you're juggling multiple responsibilities in your business. If you're feeling overwhelmed or unsure where to start, remember: you don't have to do this alone.


At Intelligence Assist, we specialise in helping businesses like yours navigate the complexities of AI adoption and compliance. Our team of experts can guide you through each step of implementing the Australian Voluntary AI Safety Standard, tailoring our approach to your specific needs and resources.


How We Can Help:


  • Assess your current AI use and identify areas for improvement

  • Develop a customised AI strategy and acceptable use policy

  • Assist with a holistic Digital & AI tool inventory and risk assessment

  • Create transparency and contestability processes

  • Set up an ongoing review and improvement system


Don't let the challenges of responsible AI use hold your business back. Take the first step towards safe and compliant AI practices today.


Ready to get started? Visit our website at intelligenceassist.com.au and book a chat with one of our AI compliance experts. Let's work together to make AI work for your business, safely and responsibly.


Remember, adopting these guardrails isn't just about compliance—it's about building trust with your customers, protecting your business, and positioning yourself at the forefront of responsible AI use. Let Intelligence Assist help you turn this challenge into an opportunity for growth and innovation.


[Don't forget to check out the AI-generated podcast for a different perspective on this topic!]
