Agile Visa
AI Capability Partnership Proposal
National Board of Medical Examiners
March 2026
Strategic AI Partnership Brief · NBME

Your Staff Are Already
Using AI to Do
NBME's Work.
Do You Know How?

Every team at NBME — psychometricians, item developers, assessment designers, technology teams, operational staff — is already using AI tools in day-to-day work. Some are using them well. Many are using them in ways that put the integrity of NBME's assessments, the confidentiality of candidate data, and the rigour your mission demands at serious and avoidable risk.

Organisation
National Board of Medical Examiners · Philadelphia, PA
NBME's Mission
Advancing assessment of health care professionals to achieve optimal care for all
The Urgency
AI is already inside your workflows — the question is whether it is being used safely, consistently, and at the level your mission requires

Assessment data, candidate information and test content are your most protected assets. AI changes the risk equation.

  • Item writers and assessment developers pasting draft test content into consumer AI tools — exposing high-stakes examination material to third-party servers with no data retention controls
  • Candidate data — scores, identifiers, performance patterns — handled by AI tools outside any approved data governance framework
  • No shared protocol across teams on which AI tools are approved for which data types — creating inconsistent practice and unmanaged liability
  • AI models producing plausible but incorrect outputs in psychometric or clinical contexts — staff without the fluency to identify and challenge those errors before they enter a workflow
  • Significant manual overhead in reporting, documentation, research synthesis, and internal communication that AI could handle safely and consistently — freeing staff for higher-value assessment work
  • Health professionals and educators looking to NBME for leadership on AI in assessment — a responsibility that requires NBME's own teams to model genuine AI fluency, not cautious avoidance

Optimal care for all depends on assessments that keep pace with how medicine and technology evolve together.

NBME's mission is to advance the assessment of health care professionals to achieve optimal care for all. That mission does not pause while the healthcare system transforms around it. AI is already reshaping how clinicians work, how medical knowledge is produced and accessed, and how health professionals are trained. The organisations responsible for assessing those professionals need to understand AI deeply enough to assess its role in clinical competence — and that starts with their own teams.

This is not about technology for its own sake. It is about ensuring that the people who design, build, and deliver the assessments that protect patients have the AI fluency to do that work responsibly in a world where AI is present in every clinical setting their candidates will enter.

Three phases. Designed to match where your organisation is right now.

We do not come in with a fixed programme and ask NBME to fit into it. We start with where your teams are and build from there. Most organisations begin with Phase 1 and move to Phases 2 and 3 once they see the results. Every phase stands on its own and delivers value independently.

Phase 1 · Foundation
AI Fluency for Every Team

A focused 90-to-120-minute hands-on session for any team at NBME. Not a lecture. Not a slide overview. A live, interactive session where every participant works with AI tools directly under facilitated guidance.

Understand how GenAI models work and why that changes how you need to use them professionally
Navigate the AI tool landscape and make informed decisions about which tools are right for which tasks
Write prompts that produce high-quality, reliable output consistently — including in assessment and research contexts
Apply a clear data safety protocol before every AI interaction — know what can and cannot go into any given tool
Leave with a personal AI toolkit and a data classification checklist built for your specific role at NBME
90–120 min · Up to 15 per cohort · Hands-on
Phase 2 · Automation
Agentic AI for NBME Workflows

A two-hour hands-on build session where participants go from using AI to building AI agents. Designed for teams with recurring, structured workflows that currently depend on manual effort — and should not have to.

Understand what an AI agent is, how it is structured, and where it produces reliable results versus where human oversight remains essential
Build a working AI agent during the session — automating a real task from their own team's workflow using free, approved tools
Identify at least three additional automation opportunities in their team's work to pursue after the session
Understand the governance questions that apply when AI agents act autonomously on organisational workflows
2 hours · Live build · Working agent · Free tools
Phase 3 · Consulting
AI Partnership — Embedded and Ongoing

For organisations ready to move beyond individual training and build AI capability into how the work gets done. We work alongside your teams as a trusted AI partner — mapping workflows, identifying automation opportunities, and building the internal capability that endures after our engagement ends.

Workflow assessment — identifying where manual processes in assessment development, research, and operations can be safely automated
NBME-specific AI governance framework — approved tool stack, data classification protocol, human-in-the-loop standards for assessment work
Internal AI champions development — building lasting capability that does not depend on ongoing external support
Quarterly capability review and programme evolution as the AI landscape continues to develop
Ongoing · Quarterly cycles · Embedded partnership

Every team has a different relationship with AI. Every team has something to gain.

We design each cohort around the actual work of the people in the room. The session for item development teams looks different from the session for psychometricians, which looks different from the session for technology or operations teams. The framework is consistent. The application is specific.

📝
Item Development Teams

Safe, approved use of AI for item writing efficiency, content review, and format variation — without any confidential examination material leaving a controlled environment.

📊
Psychometrics and Research

AI for literature synthesis, report drafting, data summarisation, and research communication — with a clear understanding of where AI output requires human verification in a high-stakes context.

💻
Technology and Product Teams

Practical AI fluency for teams building and maintaining assessment platforms — including understanding AI capabilities relevant to the tools their candidates will encounter in clinical practice.

🔧
Operations and Programme Management

AI for documentation, status reporting, stakeholder communication, and process efficiency — the administrative overhead that consumes time that could go toward mission-critical work.

🌐
Partnerships and External Relations

AI fluency for teams who work with medical schools, healthcare organisations, and professional bodies — ensuring NBME speaks credibly about AI in assessment when partners and clients raise it.

🏛
Leadership and Strategy Teams

Strategic AI literacy for leaders making decisions about AI's role in assessment development, platform capability, and NBME's positioning on AI in healthcare evaluation.

A personal AI toolkit built for their specific role and data environment at NBME
A data safety decision framework — clarity on what can and cannot go into any AI tool given NBME's data classifications
Prompt techniques they can apply immediately to their existing work — not after a follow-up course
The ability to identify AI hallucinations and know when AI output requires independent verification
Confidence to raise AI-related questions and opportunities with their team and with leadership
ICAgile-recognised certification adding professional credibility to their AI capability

The organisations that set standards must demonstrate them.

NBME sets the standard for how healthcare professionals are assessed. Medical schools, healthcare systems, licensing bodies, and clinicians themselves look to NBME for rigour and leadership in evaluation. When AI begins to reshape how clinical competence is demonstrated and assessed — and it already is — NBME's voice on that question needs to come from a place of genuine working knowledge, not from the sidelines.

Building AI fluency across NBME's staff is not overhead. It is how NBME stays credible as the organisation that defines what it means to be a safe and competent health professional in a world where AI is part of clinical practice.

The assessment professionals and researchers who go through this programme do not just become better at their jobs. They become part of an organisation that can speak with authority about AI in healthcare assessment — because they have done it themselves.

Different industries. Different challenges. One consistent result.

Every engagement below is real and anonymised to protect our clients. The common thread across all of them is the same situation NBME faces today — teams using AI without a shared framework, data they did not know was at risk, and significant capability sitting unused.

Egypt · Financial Services
Regional Bank
Compliance-Bounded AI Rollout
A banking organisation needed to build AI capability across 120 operations staff while staying within strict regulatory data boundaries. Internal IT had blocked consumer AI tools with no approved alternative in place. The challenge was building genuine fluency — not just awareness — within those constraints.
Approved AI toolkit defined and training delivered within regulatory compliance boundaries
120 staff trained across 6 cohorts with role-specific content for each team
AI usage protocol embedded into staff onboarding across the organisation
Singapore · Banking
Singapore Bank
Leadership and Analyst Dual Track
A Singapore bank needed fundamentally different training for senior leadership and for analyst teams. Generic AI sessions had failed both groups. Leadership needed strategic literacy to govern AI responsibly. Analysts needed hands-on capability to change how they actually worked. One programme could not serve both.
Two separate role-specific tracks delivered concurrently with no shared content dilution
89% of analysts applied a new AI technique to real work within 2 weeks of the session
Programme embedded in the bank's annual capability calendar within 3 months
Malaysia · Professional Services
HR Consultancy
Full Three-Phase Engagement
A professional services firm needed AI capability that went deep enough to sustain real client conversations about AI transformation. Surface awareness was not enough — their clients were asking hard questions and the team needed to answer from genuine competence, not slide frameworks.
All three phases delivered over 12 weeks, full team of 40 trained
The firm now offers AI advisory as a service line to its own clients
Revenue from AI advisory exceeded training investment within 90 days
Europe · Manufacturing
Manufacturing Firm
Workflow Automation Programme
A European manufacturing organisation wanted to understand where AI could reduce manual reporting and documentation overhead without disrupting operations. Leadership had appetite but no clear starting point. They needed to see proof before committing to scale.
4 reporting workflows identified for immediate automation through value stream mapping
13 AI agents built, tested and deployed within Phase 2
16 hours per week recovered per team after automation went live

A lasting AI capability that protects your mission and accelerates your work

01
Assessment Data That Stays Protected

Every staff member leaves with a clear, internalised protocol for which AI tools are safe for which data types. Examination content, candidate information, and research data stay within approved boundaries — not by policy, but by practised habit.

02
Teams That Spend Time on What Matters

The documentation, reporting, research synthesis, and internal communication that consumes hours of expert time every week — handled safely and efficiently by AI. Those hours go back to the assessment work only NBME's people can do.

03
Leadership on AI in Healthcare Assessment

NBME teams who understand AI well enough to engage credibly when healthcare organisations, medical schools, and policymakers ask about AI's role in assessment. That credibility is part of how NBME fulfils its mission in a world where AI is already in the clinic.

We start small and build proof before we ask for scale.

NBME does not need an organisation-wide mandate on day one. We propose starting with a single cohort — one team, one phase, 90 minutes. The outcomes from that session become the evidence base that makes the case internally for the next cohort and the next phase. Expansion happens because people who went through the programme pull their colleagues in — not because of a top-down directive.

1
Pilot Session

One team. One phase. 90 minutes. Real outcomes you can point to immediately.

2
Internal Proof

Measurable participant outcomes become the case for expanding to the next team.

3
Cohort Rollout

Expand across teams and departments. Content adapted for each team's specific context.

4
Sustained Partnership

Phase 3 consulting, quarterly updates, capability evolution as AI continues to develop.

5,000+
Professionals trained globally
22
AI and Agile courses
10+
Countries delivered in
20+
Years in practice
"The organisations I work with that carry the highest responsibility for quality and rigour are often the ones most cautious about AI — and rightly so. But caution without capability is not a strategy. It is a position that erodes over time as the world moves forward. The goal is not to adopt AI uncritically. It is to understand it well enough to use it where it genuinely helps and to protect what matters most where it does not."
Prashant Shinde
AI-Agile Expert · Founder and CEO, Agile Visa
ICAgile Accredited Provider · HQ Singapore · Delivering Globally