Human judgment, accountable systems, and responsible AI in real organizations.
Boards and executives do not fail at AI because they lack tools. They fail because accountability erodes as decisions accelerate.
Artificial intelligence already shapes hiring, pricing, compliance, product decisions, and risk exposure across organizations.
The challenge for leaders, boards, and founders is not whether to adopt AI.
It is whether the systems surrounding it support accountable human judgment once decisions accelerate, scale, or become opaque.
I work with organizations to evaluate, govern, and oversee AI-enabled systems so responsibility remains visible, decisions remain defensible, and performance remains steady under pressure.
This work sits at the intersection of law, leadership behavior, and governance.
How AI risk actually shows up
- Vendor contracts that obscure responsibility for outputs, bias, or downstream use
- AI tools embedded into workflows without clear decision ownership
- Overreliance on vendor assurances about safety, accuracy, or compliance
- Governance frameworks that exist separately from procurement, legal, and operations
- Leaders expected to oversee systems they were never trained to interrogate
I coined the term optical ethics by design™ to describe this condition: accountability thins once AI-managed decisions move faster than human judgment. This is where most organizations believe they have oversight and discover too late that they do not.
Effective AI governance does not live in slogans, policy decks, or one-time assessments.
It lives inside contracts, workflows, escalation paths, and human override mechanisms that still function when pressure is real.
Who this work is for
This work is designed for leaders who retain responsibility for outcomes:
- Board members and advisory boards
- Founders and executive leadership teams
- General counsel, compliance, and risk leaders
- Investors, accelerators, and mission-driven organizations deploying AI
- Organizations operating in regulated or high-stakes environments
If AI touches decisions that affect people, trust, safety, or reputation, governance belongs at the center.
This work assumes retained responsibility, not delegated blame.
What I help organizations do
My work focuses on making AI governance operational, durable, and credible across technology, legal exposure, and human behavior.
The goal is not compliance theater. It is preserved accountability when things go wrong.
AI readiness and risk assessment
- Identify where AI is already influencing decisions, directly or indirectly
- Assess legal, operational, ethical, and reputational risk by use case
- Evaluate organizational readiness to oversee AI-enabled decisions
- Clarify which decisions require human review, override, or escalation
AI contract and vendor review support
- Review AI vendor contracts and platform terms through a governance and risk lens
- Identify gaps in accountability, audit rights, data use, model updates, and liability allocation
- Pressure-test vendor representations against actual workflow and risk exposure
- Support internal legal and procurement teams with issue-spotting, framing, and negotiation priorities
This work strengthens internal decision-making and governance without replacing outside counsel.
Workflow and decision architecture
- Map AI-enabled workflows to decision rights and responsibility
- Identify where automation increases the risk of error, bias, or diffusion of accountability
- Design approval cues, human-in-the-loop structures, and override points
Governance design
- Board-ready AI governance structures aligned to real operations
- Clear ownership for AI-related decisions and incidents
Governance is tested when something breaks, not when everything appears to work.
Executive and board AI literacy
- Build the confidence to ask informed, non-technical questions
- Understand where AI tools fail, drift, or mislead
- Strengthen oversight without slowing execution
Ongoing advisory support
- Guidance during vendor selection, rollout, scale, or regulatory scrutiny
- Decision support during moments of uncertainty or incident response
- Periodic reassessment as tools, vendors, and risk profiles evolve
Decisions I help leaders and boards navigate
- Whether an AI system should be deployed, paused, or redesigned
- Whether vendor assurances are sufficient or incomplete
- Where human override must be preserved
- How to govern AI across decentralized teams and workflows
This work is about improving judgment before mistakes harden into exposure.
For founders and early-stage teams
AI risk often enters earlier than expected.
Founders and startups face unique challenges:
- Early vendor lock-in through poorly understood terms
- Over-trusting demos, benchmarks, or generic compliance claims
- Building for speed without decision discipline
- Scaling tools before accountability is defined
I help early-stage teams build lightweight governance and vendor discipline that protects momentum without importing enterprise bureaucracy.
Why my background matters here
I bring a rare combination of experience to AI governance work:
- Former Deputy General Counsel, Chief Privacy Officer, Chief Compliance Officer, and HR executive for a Fortune 500 company
- Decades of experience advising organizations on risk, ethics, governance, and leadership behavior
- Law professor and program designer focused on decision-making under pressure
- Frequent speaker and writer on responsible AI, AI regulation, and the future of work in an AI-powered world
- Retained by a global technology company as a subject-matter expert to help train advanced AI models in legal reasoning and decision-making
- Advisory board member for AI-based startups
This allows me to work fluently across technical teams, legal and compliance functions, executives, and boards without diluting responsibility or oversimplifying risk.
AI navigation and governance work is typically delivered through:
- Advisory board roles
- Retained governance and risk advisory
- AI readiness and risk assessments
- AI contract and vendor review support
- Executive, board, and team education
Responsible AI governance is not a framework.
It is a discipline that determines whether accountability survives scale.
When judgment is designed into systems early, organizations move faster with fewer reversals.
Request an advisory conversation.