AI Policy Orchestrator — Turning AI Policy Into Automated Oversight

The AI Policy Orchestrator project began with a simple insight: every enterprise has AI policies, but few have a consistent way to enforce them. I led the redesign of that governance model, creating a unified, scalable system that turns fragmented rules into automated oversight, helping organizations define, monitor, and prove compliance across all AI platforms with clarity, confidence, and control.

Role

Principal Product Designer

Timeline

Sep 2025 - Present

Team

PM, Engineering, AI Governance Lead

Platforms

Web, Databricks, Bedrock, Azure

Where the Problem Really Started

When I joined the AI Policy Orchestrator initiative, I stepped into a familiar pattern across enterprises: AI was scaling rapidly, but governance had not evolved to match it.
Policies existed—but not in systems. They were scattered across:

  • PDFs

  • Wiki pages

  • Slack threads

  • Long email chains

  • Outdated manuals

  • Internal documents no one maintained

Across interviews with Dell, TELUS, and Lumen, customers described the same challenges: fragmented processes, unclear ownership, and no way to ensure AI rules were actually followed.

One customer summarized it best:

“I know what our AI policy is… I just can’t tell you where it is, whether it’s followed, or who owns it.”

This was never about designing a policy page.
It was about creating AI governance that could survive the real world.

Why This Work Mattered

AI had shifted from experimentation to operations.
Leadership now needed:

  • Proof of compliance

  • Automated evidence

  • Clear oversight

  • Consistency across platforms

Developers needed speed—not bureaucracy.
Governance teams needed signals—not screenshots.

Without a unified system:

  • Violations went unnoticed

  • Policies weren’t enforced

  • High-risk models operated without oversight

  • Guardrails were implemented differently everywhere

  • There was no defensible audit trail

The organization was moving at two speeds:
AI adoption was accelerating while risk management lagged behind.

My role was to help unify them.

My Design Approach

As a Principal Designer, I focused on elevating governance thinking beyond UI and toward systems design.

Redefine the mental model of AI governance:

Research across Dell, TELUS, and Lumen made one insight clear: “AI policy” meant something different to every enterprise.
So I reframed governance using a model that aligned legal, engineering, and AI teams:

Policy → Conditions → Observations → Violations → Remediation → Evidence

This became the backbone for the entire information architecture.
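To make the mental model concrete, the pipeline above can be sketched as a minimal data model. This is an illustrative sketch only; all class and field names here are hypothetical, not the product's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = "low"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class Condition:
    """A machine-checkable rule, e.g. 'pii_redaction must be enabled'."""
    key: str
    expected: object


@dataclass
class Observation:
    """A value reported back by a platform (Databricks, Bedrock, Azure...)."""
    key: str
    actual: object


@dataclass
class Violation:
    """A mismatch between a condition and an observation."""
    condition: Condition
    observation: Observation
    severity: Severity


@dataclass
class Policy:
    name: str
    conditions: list[Condition] = field(default_factory=list)


def evaluate(policy: Policy, observations: list[Observation],
             severity: Severity = Severity.HIGH) -> list[Violation]:
    """Compare each condition against observations; mismatches become violations,
    which then feed remediation and evidence workflows downstream."""
    observed = {o.key: o for o in observations}
    return [
        Violation(c, observed[c.key], severity)
        for c in policy.conditions
        if c.key in observed and observed[c.key].actual != c.expected
    ]
```

The key design point is that each stage has its own object: conditions stay declarative, observations stay raw, and violations exist only as the computed difference between the two, which is what makes the trail auditable.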

Making policy authoring operational:

Traditional policy writing is legalistic and abstract.
I designed a guided, step-based authoring flow:

  1. Metadata

  2. Conditions

  3. Actions

  4. Preview

  5. Activation

This turned policy into machine-readable rules that platforms could enforce in real time.
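As a sketch of what the five steps might emit, the example below uses a hypothetical rule format (the field names and action types are assumptions for illustration, not the shipped schema). Each authoring step maps to one section of the rule, with preview and activation as explicit gates:

```python
import json

# Hypothetical output of the five-step authoring flow.
# Steps 1-3 (Metadata, Conditions, Actions) each fill one section.
policy_draft = {
    "metadata": {"name": "No PII in training data", "owner": "ai-governance",
                 "framework": "EU AI Act", "version": "1.0"},
    "conditions": [{"field": "dataset.pii_scan", "operator": "equals",
                    "value": "clean"}],
    "actions": [{"on_violation": "block_deployment"},
                {"on_violation": "notify", "channel": "governance-alerts"}],
}

REQUIRED_SECTIONS = ("metadata", "conditions", "actions")


def preview(draft: dict) -> str:
    """Step 4: validate completeness and render the draft for human review."""
    missing = [s for s in REQUIRED_SECTIONS if not draft.get(s)]
    if missing:
        raise ValueError(f"incomplete policy, missing: {missing}")
    return json.dumps(draft, indent=2)


def activate(draft: dict) -> dict:
    """Step 5: freeze the reviewed draft into an enforceable, versioned rule."""
    preview(draft)  # re-validate before activation
    return {**draft, "status": "active"}
```

Splitting preview from activation mirrors the flow's intent: nothing becomes enforceable until a human has seen exactly what the machine will enforce.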

Violations that drive action—not overwhelm:

Early designs surfaced everything at once.
Customers hated it.

I redesigned the violation model around:

  • severity

  • system grouping

  • contextual explanations

  • bulk actions

  • remediation guidance

The goal: reduce cognitive load so teams could act quickly.
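The grouping-and-severity logic behind that triage model can be sketched roughly as follows (field names and severity tiers are illustrative assumptions):

```python
from collections import defaultdict

# Hypothetical triage helper: group raw violations by originating system,
# then order each group by severity so the riskiest items surface first.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}


def triage(violations: list[dict]) -> dict[str, list[dict]]:
    grouped = defaultdict(list)
    for v in violations:
        grouped[v["system"]].append(v)
    return {
        system: sorted(items, key=lambda v: SEVERITY_ORDER[v["severity"]])
        for system, items in grouped.items()
    }


raw = [
    {"system": "bedrock", "severity": "low", "rule": "logging disabled"},
    {"system": "databricks", "severity": "critical", "rule": "pii exposed"},
    {"system": "bedrock", "severity": "critical", "rule": "no guardrail"},
]
queues = triage(raw)
```

Instead of one undifferentiated feed, each team sees a short, severity-ordered queue for its own systems, which is what makes bulk actions and guided remediation tractable.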

Embracing consistency:

For governance to scale, predictability mattered more than novelty.

I established:

  • consistent status semantics

  • unified card + table patterns

  • structured detail panels

  • reusable remediation workflows

This made the experience intuitive regardless of role.

The System We Built

The final Orchestrator provides a clear narrative for AI governance:

  1. What policies exist
    With owners, versions, frameworks, and real-time status.

  2. Where they apply
    Across models, agents, datasets, and platforms like Databricks, Bedrock, and Azure.

  3. What’s compliant—and what isn’t
    At a glance.

  4. What changed—and why
    With versioning and audit logs.

  5. What action to take next
    With guided remediation, suppression, and evidence workflows.

Every decision mapped directly to the customer research: automation, precision, clarity, and continuous checks.

The Impact

Early indicators showed:

  • Clear distinction between compliant and violated systems

  • Faster identification of policy gaps

  • Higher confidence during audits

  • Developers using governance proactively

  • Greater visibility across AI systems

Most importantly:

“I can finally tell where my risks are, why they exist, and what to do about them.”

This validated the entire design direction.

Business Value Delivered

The Orchestrator positions enterprises to:

  • meet global AI compliance (EU AI Act, NIST RMF, ISO 42001)

  • reduce manual compliance workflows

  • enforce model-level governance consistently

  • unify legal, governance, and engineering teams

  • accelerate AI delivery without added risk

  • produce evidence automatically

  • scale oversight across platforms

This wasn’t just a UI redesign.
It became the foundation of enterprise AI trust.