ISO/IEC 42001: how we govern the AI behind your voice agent
ISO 42001 is the first international standard for responsible AI management. Here's what it requires, why we pursued it early, and what it means when an AI is answering your calls.
Written By Catherine Weir
If you're a business owner whose AI voice agent is going to be talking to your customers every day, there's a question worth asking: how is this AI being built, trained, updated, and monitored? Is anyone making sure it doesn't pick up biases? That it doesn't make up information? That it performs the same way on the 10,000th call as it did on the first?
For most voice AI vendors, the honest answer is: "we'll take your word that it works, and you'll take ours that we're being careful." That's not a satisfying answer when the AI is representing your business to your customers.
ISO/IEC 42001 is the international standard that turns those promises into an auditable, third-party-verified system. We are certified against it.
What ISO 42001 actually is
ISO/IEC 42001:2023 is the first international standard specifically for AI management systems — the organizational systems, policies, processes, and controls that an organization uses to develop, deploy, operate, and improve AI systems responsibly.
It was published in December 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as the result of a multi-year effort involving regulators, academics, and AI practitioners from more than 50 countries.
Unlike earlier AI guidelines — which were largely voluntary and non-specific — ISO 42001 defines a rigorous management system framework modeled after well-established standards like ISO 27001 (for information security) and ISO 9001 (for quality management). Organizations implement it, are audited against it, and receive a certificate valid for three years, with annual surveillance audits.
What ISO 42001 requires
The standard defines an AI Management System (AIMS) that covers the full lifecycle of every AI system the organization operates. Its requirement clauses and Annex A together require an organization to:
Understand its context — document the intended uses, affected stakeholders, and external and internal issues relevant to the AI systems
Demonstrate leadership commitment — establish AI governance roles, policies, and accountability at the executive level
Plan for AI risks — conduct AI impact assessments, identify and document risks (including bias, privacy, security, safety, and societal risks), and define controls
Provide resources — ensure people working on AI systems are competent, aware of AI-specific risks, and have access to the tools needed to manage those risks
Operate under defined processes — document the AI lifecycle from data gathering to training to evaluation to deployment to monitoring to decommissioning
Evaluate performance — measure how AI systems perform against their stated purpose, monitor for drift, and audit AI governance internally
Improve continuously — use incidents, audits, and feedback to improve the AI system and the management system itself
Implement controls — select and apply the 38 AI controls defined in Annex A, documenting any exclusions in a Statement of Applicability; the controls cover data quality, bias mitigation, transparency, explainability, human oversight, security, and more
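The "monitor for drift" requirement above can be made concrete with a small example. This is an illustrative sketch, not our production tooling: the population stability index (PSI) compares the baseline distribution of some per-call metric (here, hypothetical agent confidence scores) against a recent window, with PSI above roughly 0.2 commonly read as significant drift.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of a numeric
    metric. Values above ~0.2 are a common drift threshold."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against identical samples

    def dist(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Smooth empty bins so the log term below stays defined.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    b, c = dist(baseline), dist(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [0.90, 0.91, 0.89, 0.92, 0.90, 0.88, 0.91]  # illustrative
current  = [0.70, 0.72, 0.69, 0.71, 0.68, 0.73, 0.70]  # drifted window
print(psi(baseline, current))  # well above the 0.2 drift threshold
```

In practice a monitor like this runs continuously over rolling windows, and a breach feeds the incident and continuous-improvement processes the standard requires.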
Why we pursued it early
We're a voice AI company. Every call our agents make is an instance of AI-human interaction at scale. The risks the standard is designed to manage — bias, hallucination, unauthorized automated decision-making, lack of explainability, model drift — are exactly the risks that matter when AI picks up the phone.
We also believe ISO 42001 is going to become the baseline expectation for AI vendors the way SOC 2 became the baseline for SaaS. The EU AI Act's conformity regime is built on harmonized standards, and ISO 42001 is the leading candidate. U.S. federal procurement language is starting to require it. Large enterprises are beginning to ask for it in RFPs. Rather than wait to be required to do it, we pursued certification as soon as it was possible to be certified.
What our certification covers
Our real-time voice AI inference platform
Our agent-building and configuration tools
Our transcription and speech-to-text systems
Our intent detection, call summarization, and post-call analytics systems
Our processes for vetting, integrating, and monitoring third-party AI model providers
Our data governance for training, fine-tuning, and evaluation datasets
Our incident response processes specific to AI failures (wrong answers, biased behavior, safety incidents)
What this means for you
You can prove to enterprise buyers and regulated-industry customers that the AI answering your phones is governed by an internationally recognized management system — not just "we trained some models"
You get the benefit of our ongoing AI impact assessments — we identify new AI risks as our systems evolve and apply controls without you needing to think about it
You get human oversight built into the platform — every AI decision has a defined path for human review, correction, and escalation
You get transparency tooling — you can see what your AI said, why it said it, and how its behavior has changed over time
You get documented model updates — any time we change the AI powering your calls, it goes through a defined evaluation and release process
You inherit our EU AI Act readiness — if you do business with European customers or employees, our ISO 42001 certification substantially reduces your own burden
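The "defined evaluation and release process" mentioned above boils down to a gate: a candidate model replaces the live one only if it clears absolute quality bars and doesn't regress against the current model. A minimal sketch — every metric name, threshold, and number here is illustrative, not our actual release criteria:

```python
# Illustrative quality bars and metric directions (hypothetical values).
THRESHOLDS = {"intent_accuracy": 0.92, "hallucination_rate": 0.02}
HIGHER_IS_BETTER = {"intent_accuracy": True, "hallucination_rate": False}

def release_gate(live, candidate, regression_margin=0.01):
    """Return (ok, failures) for promoting `candidate` over `live`."""
    failures = []
    for metric, bar in THRESHOLDS.items():
        better = HIGHER_IS_BETTER[metric]
        value = candidate[metric]
        # Absolute quality bar.
        if (value < bar) if better else (value > bar):
            failures.append(f"{metric}={value} misses threshold {bar}")
        # No significant regression versus the live model.
        delta = (live[metric] - value) if better else (value - live[metric])
        if delta > regression_margin:
            failures.append(f"{metric} regressed by {delta:.3f}")
    return (len(failures) == 0, failures)

live = {"intent_accuracy": 0.95, "hallucination_rate": 0.012}
candidate = {"intent_accuracy": 0.96, "hallucination_rate": 0.010}
ok, reasons = release_gate(live, candidate)
print(ok)  # → True
```

A gate like this is what makes a release process auditable: the thresholds, the evaluation report, and the pass/fail decision are all recorded artifacts rather than judgment calls.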
The 38 AI controls (Annex A)
The part of the standard that's most interesting to security and compliance reviewers is Annex A, which defines 38 specific AI controls organized into nine categories:
Policies related to AI
Internal organization (governance and accountability)
Resources for AI systems (data, tools, infrastructure)
Assessing impacts of AI systems
AI system lifecycle
Data for AI systems
Information for interested parties
Use of AI systems
Third-party and customer relationships
Our control mapping — which shows how each of our internal processes maps to each of the 38 controls — is available through the Trust Center.
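Conceptually, a control mapping like the one described above is a lookup from Annex A control IDs to the internal process that satisfies each control and the evidence an auditor would review. A minimal sketch — the control IDs, process names, and file names below are all illustrative placeholders, not our actual mapping:

```python
ANNEX_A_CONTROLS = {"A.2.2", "A.4.3", "A.6.2.4"}  # illustrative subset

control_mapping = {
    "A.2.2":   {"process": "ai-policy-review",
                "evidence": ["ai-policy-v3.pdf"]},
    "A.4.3":   {"process": "data-resource-register",
                "evidence": ["data-catalog-export.csv"]},
    "A.6.2.4": {"process": "model-release-checklist",
                "evidence": ["release-runbook.md", "eval-report.pdf"]},
}

def unmapped_controls(controls, mapping):
    """Controls with no documented internal process yet — the gap
    list an internal audit would flag."""
    return sorted(controls - mapping.keys())

print(unmapped_controls(ANNEX_A_CONTROLS, control_mapping))  # → []
```

The value of keeping the mapping as structured data rather than prose is exactly this kind of check: gaps surface mechanically instead of during an audit.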
Requesting our certificate
Our ISO 42001 certificate is public; we're happy for any customer, prospect, or partner to review it. Visit our Trust Center at trust.365agents.com to download the certificate, see our certification body, and review the scope statement.
If you need a deeper review of our AI Management System — for example, for an enterprise security questionnaire or a regulatory filing — request the ISO 42001 deep-dive package. It includes our Statement of Applicability, our AI impact assessment methodology, and the control mapping described above, all under a mutual NDA.