TELUS didn't wait for generative AI to arrive before building governance infrastructure. Jesslyn Dymond, Director of AI Governance & Data Ethics, joined the company in 2019 to stand up responsible AI practices alongside the machine learning teams whose systems those practices would govern, which meant that when generative AI hit, the governance scaffolding was already in place. Jesslyn walks through the specific structures TELUS uses to govern AI at scale: a CEO-led AI board that includes the CIO, Chief AI Officer, and Chief Data and Trust Officer; a network of hundreds of data stewards embedded across business units and appointed by VPs; and a unified intake process called a Data Enablement Plan that consolidates privacy, security, and responsible AI review into a single workflow instead of separate forms and sign-offs.
Jesslyn also shares how TELUS certified its first generative AI customer support tool to the international Privacy by Design standard and then had it independently audited, and what that process required the team to work through on transparency and user experience. She makes a pointed case for why shadow AI is best addressed with access to better internal tools rather than policy restriction alone, explains how her team grades levels of agency within its agentic AI framework to determine which controls must be in place before a system is approved, and describes how TELUS took the concept of purple teaming out of the security world and applied it to AI governance, including running those sessions with students and the general public.
Topics discussed:
Building proactive AI governance infrastructure before adoption by embedding responsible AI practices alongside ML development teams
Structuring enterprise AI oversight through a CEO-led board including CIO, Chief AI Officer, and Chief Data and Trust Officer
Deploying VP-appointed data stewards across business units to connect governance policy with on-the-ground AI implementation
Consolidating privacy, security, and responsible AI review into a single Data Enablement Plan to reduce friction and improve compliance
Certifying a generative AI customer support tool to the international Privacy by Design standard and navigating external audit requirements
Grading levels of agency within an agentic AI framework to determine appropriate controls
Countering shadow AI by prioritizing internal tool access and functionality over policy restriction alone
Applying purple teaming from security practice to AI governance to test systems collaboratively across various teams