In our latest webinar, Continual CEO Oliver Crofton sat down with Monica Mahay, Chief Compliance Officer at SkyShowtime and Co-Founder of KnowledgeBridge AI, to explore how organisations can raise the bar on AI literacy and embed responsible practices into their governance frameworks. The conversation focused on the growing regulatory and operational pressures created by the rapid adoption of artificial intelligence, particularly the obligations stemming from the EU AI Act.
For governance, risk, and compliance professionals, AI is no longer a distant technology issue; it is an enterprise-wide business risk that touches everything from data protection and intellectual property to ethics, transparency, and organisational culture. Monica emphasised that without a strong foundation of AI literacy, companies will struggle to evaluate risks, meet regulatory expectations, and maintain trust with customers, employees, and regulators. By equipping staff at all levels with the knowledge to understand and challenge AI systems, GRC leaders can ensure their organisations are not only compliant but also resilient and competitive in a future where AI is central to strategy and operations.
Monica explained that AI literacy is now as fundamental as learning to drive: employees need the skills to understand, evaluate, and interact responsibly with AI. She highlighted three pillars of AI literacy: technical understanding of what AI is, how it works, and its limits; risk awareness to spot legal, operational, and reputational issues; and policy clarity, ensuring staff know internal processes, escalation paths, and compliance expectations. Training should reach everyone from PAs to senior leadership and must be continuously updated as risks and regulations evolve.
While AI adoption is accelerating, Monica cautioned that governance must align with business strategy. She pointed out risks that often fly under the radar, such as inaccurate outputs, intellectual property concerns, and the rise of "shadow AI": the unauthorised use of external tools. Leadership, she noted, is still catching up, but since AI is ultimately a business risk, senior leaders must take ownership. Her advice: empower business units to manage lower-risk cases directly, supported by strong literacy programmes, rather than over-centralising governance.
Monica provided a pragmatic breakdown of the EU AI Act. Not every system is high-risk; only around 5–15% are expected to fall in scope. Obligations vary depending on whether a company is acting as a provider, deployer, importer, or distributor, and she recommended assessing systems early to avoid modifications that may accidentally trigger "provider" obligations. Businesses should also prepare documentation and oversight processes well in advance and plan for a fragmented global regulatory landscape.
On fairness, Monica stressed that compliance comes first, then ethics. Legal duties such as anti-discrimination, data protection, and employment law are non-negotiable, while ethical principles may go further but should not replace legal due diligence. She encouraged organisations to test AI in real-world scenarios rather than rely on checkbox audits, demand bias and fairness audits from vendors, and educate staff on spotting adverse AI outputs and escalation processes. Transparency and explainability are equally critical, especially for high-risk systems. She recommended adopting “explainability by default” approaches, offering layered explanations tailored to audiences, and maintaining rigorous documentation throughout the AI lifecycle.
When asked about governance, Monica highlighted the need for clear ownership and accountability. While cross-functional collaboration is essential, appointing a dedicated AI governance lead, such as an AI Officer, helps ensure consistency and authority. She also outlined four strategies for responsible, organisation-wide AI innovation: building an AI inventory for visibility and accountability, establishing a cross-functional governance group, rolling out AI literacy training company-wide, and fostering a culture of safe innovation with compliance as an enabler.
To support practical adoption, Monica shared a checklist of key questions to ask AI vendors: whether the system is classified under the EU AI Act; whether it is considered "high-risk" in its intended use; who developed and trained it; whether it will train on your data and who has access; what decisions it makes and whether there are human-in-the-loop controls; whether the vendor conducts regular AI risk assessments or audits; and what processes exist for handling AI-related incidents or complaints. These questions, she explained, help mitigate shared risk and ensure responsible AI adoption.
Finally, Monica spoke about KnowledgeBridge AI, the UK-based consultancy she co-founded, which helps organisations navigate AI adoption through tailored literacy programs and governance solutions. With a mission to transform complexity into capability, KnowledgeBridge empowers compliance leaders to future-proof their AI strategies. Learn more at knowledgebridge.ai.
With over 15 years' experience in governance, risk, compliance, and cyber investigations, Oliver is widely regarded as a thought leader on the topics of corporate regulation and ethics. Oliver co-founded Continual to provide mid-sized organisations with better compliance software that meets the evolving regulatory landscape.
Experience the power of supplementing your ethics and compliance program with AI. Schedule a personalised demo now to see how our advanced platform can give you clearer risk insights and better corporate governance.
You can also reach us using the contact details below.