
Human judgment, accountable systems, and responsible AI in real organizations.

Boards and executives do not fail at AI because they lack tools. They fail because accountability erodes as decisions accelerate.

 

Artificial intelligence already shapes hiring, pricing, compliance, product decisions, and risk exposure across organizations.

 

The challenge for leaders, boards, and founders is not whether to adopt AI.

 

It is whether the systems surrounding it support accountable human judgment once decisions accelerate, scale, or become opaque.

 

I work with organizations to evaluate, govern, and oversee AI-enabled systems so responsibility remains visible, decisions remain defensible, and performance remains steady under pressure.

 

This work is built at the convergence of law, leadership behavior, and governance.

 

I train frontier AI models, have governed AI risk from inside organizations building and deploying these systems, and have written on AI governance and legislation. That is the vantage point this practice is built on.

How AI risk actually shows up 

  • Vendor contracts that obscure responsibility for outputs, bias, or downstream use

  • AI tools embedded into workflows without clear decision ownership

  • Overreliance on vendor assurances about safety, accuracy, or compliance

  • Governance frameworks that exist separately from procurement, legal, and operations

  • Leaders expected to oversee systems they were never trained to interrogate

I coined the term optical ethics by design™ to describe this condition: accountability thins once AI-managed decisions move faster than human judgment. It is where most organizations believe they have oversight and discover too late that they do not.

 

Effective AI governance does not live in slogans, policy decks, or one-time assessments.

 

It lives inside contracts, workflows, escalation paths, and human override mechanisms that still function when pressure is real.

 

Who this work is for

This work is designed for leaders who retain responsibility for outcomes.

You do not need to know what AI governance means yet. You need to be the person in the room when it matters.

 

  • Board members and advisory boards

  • Founders and executive leadership teams

  • General counsel, compliance, and risk leaders

  • Investors, accelerators, and mission-driven organizations deploying AI

  • Organizations operating in regulated or high-stakes environments

  • Leaders whose organizations have adopted AI tools without defined decision ownership or accountability for outputs

  • Heads of AI governance, responsible AI, and technology risk practices

 

If AI touches decisions that affect people, trust, safety, or reputation, governance belongs at the center. 

 


You do not need to have a defined AI governance problem to need this work. You need to be responsible for an organization that uses AI, or is about to.


AI governance becomes urgent through ordinary moments: a board member asks a question nobody in the room can answer, a vendor contract arrives with terms legal has not seen before, or someone starts using an AI tool and nobody knows who owns the output. Those moments do not announce themselves. They accumulate.

 

When an AI initiative stalls or fails, it is rarely the technology. It is that nobody mapped the stakeholders, nobody established who controls the data, and nobody defined who owns the decisions the system is making.


If your organization uses AI tools, works with AI-enabled vendors, or makes decisions that affect people through automated systems, the question is not whether you have adopted AI. It is whether the people responsible can name those decisions, own them, and account for them when asked.

  

What I help organizations do

 

My work focuses on making AI governance operational, durable, and credible across technology, legal exposure, and human behavior.

 

The goal is not compliance theater. It is accountability that holds when things go wrong.

 

AI readiness and risk assessment

  • Identify where AI is already influencing decisions, directly or indirectly

  • Assess legal, operational, ethical, and reputational risk by use case

  • Evaluate organizational readiness to oversee AI-enabled decisions

  • Clarify which decisions require human review, override, or escalation

 

AI contract and vendor review support

  • Review AI vendor contracts and platform terms through a governance and risk lens

  • Identify gaps in accountability, audit rights, data use, model updates, and liability allocation

  • Pressure-test vendor representations against actual workflow and risk exposure

  • Support internal legal and procurement teams with issue-spotting, framing, and negotiation priorities

 

This work strengthens internal decision-making and governance without replacing outside counsel. 

 

 

Workflow and decision architecture

  • Map AI-enabled workflows to decision rights and responsibility

  • Identify where automation increases the risk of error, bias, or diffusion of accountability

  • Design approval cues, human-in-the-loop structures, and override points

 

Governance design

  • Board-ready AI governance structures aligned to real operations

  • Clear ownership for AI-related decisions and incidents

Governance is tested when something breaks, not when everything appears to work.

 

Executive and board AI literacy

  • Build the confidence to ask informed, non-technical questions

  • Understand where AI tools fail, drift, or mislead

  • Strengthen oversight without slowing execution

 

Ongoing advisory support

  • Guidance during vendor selection, rollout, scale, or regulatory scrutiny

  • Decision support during moments of uncertainty or incident response

  • Periodic reassessment as tools, vendors, and risk profiles evolve

 

Decisions I help leaders and boards navigate

 

  • Whether an AI system should be deployed, paused, or redesigned

  • Whether vendor assurances are sufficient or incomplete

  • Where human override must be preserved

  • How to govern AI across decentralized teams and workflows

 

This work is about improving judgment before mistakes harden into exposure.

 

For founders and early-stage teams

 

AI risk often arrives earlier than expected.

 

Founders and startups face unique challenges:

  • Early vendor lock-in through poorly understood terms

  • Over-trusting demos, benchmarks, or generic compliance claims

  • Building speed without decision discipline

  • Scaling tools before accountability is defined

 

I help early-stage teams build lightweight governance and vendor discipline that protects momentum without importing enterprise bureaucracy.

 

Why my background matters here

 

I bring a rare combination of experience to AI governance work:

  • As former Deputy General Counsel, Chief Privacy Officer, and Chief Compliance Officer of a publicly traded company, I worked directly with our board on governance, risk, and accountability

  • As General Counsel and advisory board member of AI startups, I understand how these systems get built and deployed before governance catches up

  • I am retained by a global technology company to train frontier AI models using reinforcement learning from human feedback, with work benchmarked against GDPVal, which measures AI performance on real-world professional tasks graded against human expert standards

  • I have reviewed AI vendor contracts, advised on AI-related risk, and written on AI governance and legislation

  • I speak to and train small businesses, nonprofits, and professional groups on AI governance and responsible AI use

  • I am a founding member of She Leads AI and a member of an AI Governance Community of Practice focused on algorithmic bias, privacy, and surveillance

 

This allows me to work fluently across technical teams, legal and compliance functions, executives, and boards without diluting responsibility or oversimplifying risk.

 

My compliance background, including five years on the federal Whistleblower Protection Advisory Committee, congressional testimony on Dodd-Frank, and service as a commissioner on the Miami-Dade Commission on Ethics and Public Trust, directly informs how I approach AI governance as a legal, behavioral, and accountability question, not a technology question.

  

AI navigation and governance work is typically delivered through:

  • Advisory board roles

  • Retained governance and risk advisory

  • AI readiness and risk assessments

  • AI contract and vendor review support

  • Executive, board, and team education

 

Responsible AI governance is not a framework.

 

It is a discipline that determines whether accountability survives scale.

 

For organizations that also need compliance program assessment, whistleblower program audits, or board preparation on compliance obligations alongside AI governance work, that advisory is available through a dedicated engagement. See Compliance Advisory.

 

Representative engagements

 

NYC Bar Association Podcast, February 2026 — Synthetic Employees and the Future of Work. Featured expert on AI governance, employment law implications of deploying AI agents and autonomous systems, and multi-stakeholder governance frameworks for organizations integrating AI into their workforce.


Ethics at the Edge: AI, Integrity, and Innovation — Empowered by AI: The SLAI Effect, 2025. Featured guest on ethical AI governance, organizational integrity, and the human judgment layer that responsible AI innovation requires.


Contract Audit: AI Edition — Law Insider by SimpleDocs, 2024. Featured expert on AI in contract review, vendor risk, and the accountability gaps that emerge when AI-assisted decisions move faster than human oversight.


AI Contracts Explained Podcast, Episode 30 — Training In-House Lawyers in This AI World. Guest expert on preparing in-house legal teams for AI governance, contract risk, and the judgment demands that AI-accelerated environments place on legal professionals.

Request an advisory conversation.