Data protection has been a crucial concern in the digital age for quite some time now. Most of you are probably well aware of the importance of cybersecurity and privacy. In fact, you might be tired of hearing about it, it might keep you up at night, or you may have already fallen victim to a data breach. In the context of AI tools, it’s not just about how malicious actors can misuse your data; it’s also about safeguarding your intellectual property (IP) and even your voice.
Recently, Donald Trump discussed with Logan Paul the hypothetical scenario of his voice being cloned to broadcast a fake missile-strike alert, causing widespread panic. Is the age of truth truly over?
In this article, we aren’t focusing on how to differentiate between human-made and AI-made content. Instead, we’ll explore how to determine which AI tools you can trust.
Current AI Regulations in Australia Protect Your Data
Currently, there are no AI-specific laws in Australia. Instead, AI is regulated indirectly through existing laws and regulations around privacy, consumer protection, anti-discrimination, etc. However, the government has acknowledged that these laws likely do not adequately prevent AI harms, and more work is needed. CSIRO, as Australia’s leading science agency, is developing concrete software engineering guidelines to help developers operationalise responsible AI principles in practice.
While this is a step in the right direction, what should businesses do in the meantime? Wait? Definitely not. AI adoption in Australia is already lagging behind other countries, and large corporations certainly aren’t waiting. With hundreds of new AI tools launching every week (at least that’s what it feels like in my role as Founder and CEO of Intelligence Assist), how do you know which ones to trust with your data?
Key Certifications and Regulations
Several key certifications and regulations already exist, and at Intelligence Assist, whenever we start evaluating a tool, one of the first things we look for is a reference to one of them.
These certifications and regulations provide important guardrails as AI becomes more ubiquitous. By adopting relevant standards, AI developers and deployers can mitigate risks, meet rising expectations around responsible AI, and build the trust needed for AI to deliver on its immense potential to benefit business and society.
SOC 2 Type I and Type II (USA)
SOC 2 is a voluntary compliance standard developed by the American Institute of Certified Public Accountants (AICPA). It specifies how organizations should manage customer data based on five “trust service principles” – security, availability, processing integrity, confidentiality, and privacy.
SOC 2 Type I attests that a company’s systems are suitably designed to meet relevant trust principles at a specific point in time. SOC 2 Type II goes a step further and verifies that those systems operated effectively over a period of time (usually 6-12 months).
Benefits:
Demonstrates a company has established strict information security policies and procedures
Builds trust and confidence with customers that their data is secure
Gives companies a competitive edge, especially when selling to enterprise clients
Ensures consistency in a company’s security practices
Examples of AI companies with SOC 2 Type II:
Anthropic (developer of Claude AI)
Hugging Face
Salesforce Einstein
Zendesk
ISO 27001 (International)
ISO 27001 is the leading international standard for information security management systems (ISMS). It provides a framework for implementing, maintaining and continually improving an ISMS.
The standard uses a risk-based approach and includes requirements related to leadership and policy, planning, support, operation, performance evaluation, and improvement. It covers security controls across areas like access control, cryptography, physical security, incident management and more.
Benefits:
Protects intellectual property and sensitive data
Provides a competitive advantage when bidding for contracts
Improves information security awareness and practices across the organization
Avoids regulatory fines and reputational damage from data breaches
Examples of AI companies with ISO 27001:
IBM Watson
Dataiku
Darktrace
ISO 42001 (International)
ISO 42001 is a new international standard, published in December 2023, that provides a framework for the governance of AI systems. It addresses transparency, accountability and ethical considerations.
The standard guides organizations in establishing an AI management system to direct and control the use, development, and deployment of AI responsibly. It covers aspects like leadership, planning, support, operation, and improvement.
Benefits:
Ensures AI systems are transparent, explainable and aligned with human values
Helps identify and mitigate risks specific to AI like bias and privacy issues
Provides assurance to stakeholders that AI is being used responsibly
Facilitates compliance with AI regulations as they emerge
Examples of AI companies with ISO 42001:
The standard is still very new, so no companies are certified yet, but early adopters are expected soon
GDPR or Australian Privacy Principles Compliant
The EU’s General Data Protection Regulation (GDPR) and Australia’s Privacy Act 1988 both aim to protect the privacy rights of individuals and regulate how organizations collect and handle personal data.
While the GDPR applies to any organization processing EU residents’ data, the Australian Privacy Principles (APPs) apply to businesses with an annual turnover above AUD $3 million, as well as health service providers and some other small businesses.
Both laws require privacy by design, data minimization, security safeguards, breach notification, and give individuals rights relating to their data. The GDPR has some additional requirements around data portability and the right to be forgotten.
Benefits:
Builds trust by demonstrating responsible data practices
Avoids hefty fines for non-compliance (up to 4% of annual global turnover under GDPR)
Streamlines data handling processes and improves data governance
Makes it easier to do business globally by meeting a high privacy standard
Examples of GDPR-compliant AI companies:
OpenAI (developer of GPT-4o and DALL-E)
DeepMind (Google’s AI research lab)
HubSpot
Mailchimp
NIST AI Risk Management Framework Adoption
The US National Institute of Standards and Technology (NIST) released the AI Risk Management Framework in January 2023. It is a voluntary framework to help organizations identify and manage risks related to AI systems.
The framework promotes AI systems that are valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful biases managed. It provides a structured process to achieve those characteristics.
Benefits:
Provides a common language to talk about AI risks
Helps organizations systematically identify, analyse and communicate AI risks
Integrates with other risk management processes
Aligns with emerging global AI standards and regulations
Examples:
The framework is still new, so no companies have fully implemented it yet
Early drafts were piloted by a diverse set of organizations including Deloitte, Humana, MIT, NASA and the World Economic Forum
HIPAA Compliance (Healthcare AI Tools Only)
For AI tools handling protected health information (PHI) in the United States, compliance with the Health Insurance Portability and Accountability Act (HIPAA) is essential. HIPAA sets strict standards for safeguarding patient data privacy and security.
Benefits:
Protects sensitive patient health data from unauthorized access or breaches
Helps healthcare providers maintain patient trust and confidentiality
Reduces the risk of costly fines and reputational damage associated with HIPAA violations
Gives healthcare AI tools a competitive advantage by demonstrating commitment to data privacy
Examples of HIPAA-compliant AI tools:
Lindy: An AI scribe that accurately transcribes doctor-patient conversations while ensuring data privacy
Suki: An AI-powered voice assistant that streamlines clinical documentation and integrates with EHRs securely
Google Cloud Platform Healthcare AI: A suite of HIPAA-compliant AI tools for healthcare, including Natural Language API and Cloud Healthcare API
IBM Watson Health: An AI platform for healthcare that provides personalized treatment recommendations while safeguarding patient data
When evaluating healthcare AI tools, look for evidence of their HIPAA compliance measures, such as Business Associate Agreements (BAAs), data de-identification techniques, access controls, secure infrastructure, staff training, and breach notification procedures.
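As a rough illustration of one of those measures, the sketch below shows a minimal de-identification pass that strips direct identifiers from a record before anything is sent to an external AI tool. The field names and the set of identifiers are assumptions for the example, and this is not a substitute for formal HIPAA Safe Harbor or Expert Determination de-identification.

```python
# Illustrative only: strip direct identifiers from a patient record before it
# is shared with any external AI tool. Field names are hypothetical; real
# HIPAA de-identification covers far more identifier categories than this.

from copy import deepcopy

# Hypothetical direct identifiers we choose to drop before sharing.
DIRECT_IDENTIFIERS = {
    "name", "email", "phone", "address", "medicare_number", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    cleaned = deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        cleaned.pop(field, None)
    return cleaned

if __name__ == "__main__":
    patient = {
        "name": "Jane Citizen",
        "date_of_birth": "1980-04-12",
        "medicare_number": "1234 56789 1",
        "presenting_complaint": "persistent cough, two weeks",
    }
    print(deidentify(patient))
    # {'presenting_complaint': 'persistent cough, two weeks'}
```

In practice, a HIPAA-compliant vendor should handle this on their side under a Business Associate Agreement; a pass like this is simply an extra safeguard you can apply before data ever leaves your systems.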
If you can’t find one of these certifications, you can always ask the tool’s chatbot or check its privacy policy.
Some key elements to look for in the privacy policy include the following (a simple scripted version of this checklist appears below):
What data is collected and how it is used
How data is stored and protected
Whether data is shared with third parties
Data retention and deletion policies
User rights and control over their data
If an AI tool doesn’t have a privacy policy, this is a significant red flag, and you shouldn’t share anything with them, even conversationally.
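To make that review repeatable across the tools you evaluate, here is a minimal sketch of the same checklist expressed in Python. The questions mirror the list above; the scoring format and the example tool name are assumptions for illustration, not a formal assessment methodology.

```python
# Illustrative only: record yes/no answers to the privacy-policy questions
# above for each AI tool under evaluation and print a simple summary.

PRIVACY_CHECKLIST = [
    "States what data is collected and how it is used",
    "Explains how data is stored and protected",
    "Discloses whether data is shared with third parties",
    "Covers data retention and deletion policies",
    "Describes user rights and control over their data",
]

def review_tool(tool_name: str, answers: list[bool]) -> None:
    """Print a simple pass/fail summary for one tool's privacy policy."""
    if len(answers) != len(PRIVACY_CHECKLIST):
        raise ValueError("One answer per checklist question is required.")
    score = sum(answers)
    print(f"{tool_name}: {score}/{len(PRIVACY_CHECKLIST)} checklist items addressed")
    for question, ok in zip(PRIVACY_CHECKLIST, answers):
        print(f"  [{'x' if ok else ' '}] {question}")

if __name__ == "__main__":
    # Hypothetical review of a fictional tool.
    review_tool("ExampleAI", [True, True, False, True, False])
```

Anything left unticked is a prompt to ask the vendor directly before sharing your data.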
Conclusion
Data protection is of utmost importance when choosing AI tools for your business. As the AI landscape continues to evolve rapidly, it’s crucial to perform due diligence before integrating any AI tool into your operations. Certifications and compliance with established regulations provide a level of assurance, but it’s equally important to thoroughly review privacy policies and understand how your data will be handled.
We encourage you to share your experiences or concerns regarding AI tools and data protection. Stay informed about the latest trends and updates in this space, as the landscape is constantly shifting. If you have any questions or need further assistance, please don’t hesitate to reach out to us at Intelligence Assist.
Together, we can navigate the exciting yet challenging world of AI while ensuring the protection of our valuable data and intellectual property.