AI Literacy Is Not Optional Anymore: What I Learned Teaching It at FANA

There’s a moment I’ve been thinking about a lot since I walked off the stage at the Florida Association of Nurse Anesthetists Sand and Surf conference this past weekend. It wasn’t the applause — although the engagement in that room was genuinely something. It was a conversation I had afterward, in the hallway outside the main ballroom, with a senior CRNA who had been practicing for over 25 years.

He pulled me aside and said: “I’ve been telling myself AI is something for the younger generation. That my experience is enough and I don’t need to learn another technology platform. But listening to you today, I realized I’m already using it. I just didn’t understand what was underneath it.”

That sentence — “I didn’t understand what was underneath it” — is exactly why I built this lecture. And it’s exactly why I’m writing this post.

Why This Conversation Had to Happen at FANA

When the Florida Association of Nurse Anesthetists gave me the opportunity to address their membership, I knew this wasn’t the right venue for a dry technology overview. CRNAs and SRNAs don’t respond to “here are ten features you should know about.” They respond to clinical reality. They respond to stories. They respond to someone standing in front of them who has stood in the same OR they stand in, at 2 AM, with a patient whose pressure is doing something unexpected.

So I built a lecture that started not with AI — but with a frustrated computer science professor at the University of Washington who got mad about an airline ticket.

“The future of anesthesia doesn’t belong to AI. It belongs to the clinicians who master it.”

His name was Oren Etzioni. In 2001, he boarded a flight to his brother’s wedding and discovered that the people sitting next to him had paid less for their seats — even though they booked later. He didn’t get angry. He got curious. He spent two years analyzing 12,000+ historical airfare combinations and proved you could predict whether a fare would rise or fall with 62% accuracy. He built a company called Farecast. Microsoft bought it for $115 million. That algorithm became Bing Travel’s price prediction feature — the one that told you whether to “buy now or wait.”

I told that story as an entry point because every clinician in that room had made a decision about when to buy a plane ticket based on pattern recognition. They just hadn’t connected it to what an LLM does. And once they did — once they understood that the same conceptual architecture that predicted airline prices in 2007 was the intellectual ancestor of the model predicting their patient’s hemodynamic trajectory in 2026 — the room shifted. You could feel it.

What the Talk Actually Covered

The lecture ran 45 minutes and covered six sections. I’ll give you the architecture here because I’ve had several requests for the full slide deck, and I want people to understand the deliberate logic of how it was structured before they download it.

Section One — Foundation: What Is an LLM?

We started with the essential mental model: a large language model is a next-word prediction engine. Extraordinarily sophisticated. Trained on essentially the entire written internet. But at its core, always asking: given everything I’ve seen, what is the most probable next word here? I used the medical student analogy — brilliant, well-read, never touched a patient, knowledge frozen at graduation. That’s your LLM. Useful for a staggering range of tasks. Not a substitute for clinical presence.
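To make the "next-word prediction engine" idea concrete, here is a toy sketch in Python. A bigram counter over a few invented sentences stands in for what a real LLM does with billions of parameters and thousands of tokens of context; the corpus and function names are illustrative only, not how any production model works.

```python
# Minimal sketch of next-word prediction: a toy bigram counter standing in
# for the "given everything I've seen, what is the most probable next
# word?" loop an LLM runs at enormous scale. Illustrative only.
from collections import Counter, defaultdict

corpus = (
    "the patient is stable the patient is hypotensive "
    "the pressure is dropping the pressure is stable"
).split()

# Count which word follows which. Real LLMs condition on thousands of
# prior tokens with a transformer, not on one word with raw counts.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("pressure"))  # -> "is"
print(predict_next("is"))        # -> "stable" (the most frequent continuation)
```

Everything the model "knows" lives in those counts, which is also why the analogy holds: knowledge is frozen at training time, and the model has never met the patient.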

Section Two — Prompting and Limitations: The Skill Nobody Teaches You

Four critical limitations every clinician needs to internalize: knowledge cutoff, no access to your patient, hallucination, and no memory between sessions. We spent real time on hallucination because it’s the one that is clinically dangerous in a way that’s counterintuitive. The AI doesn’t know it’s wrong. It generates confident, fluent, professionally toned, completely fabricated content — including fake citations with real journal names. I showed the RACE prompting framework: Role, Actual patient numbers, Clinical question structured, Expected format. Same AI, thirty additional seconds of prompt construction, categorically different output quality.
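For readers who want to operationalize RACE, here is a minimal sketch of the framework as a prompt builder, assuming the four components as named in the talk. The example patient values and wording are hypothetical.

```python
# A minimal sketch of the RACE framework from the talk: Role, Actual
# patient numbers, Clinical question, Expected format. All example
# values below are hypothetical, not drawn from a real case.
def race_prompt(role: str, actuals: str, question: str, fmt: str) -> str:
    """Assemble the four RACE components into one structured prompt."""
    return (
        f"Role: {role}\n"
        f"Actual patient numbers: {actuals}\n"
        f"Clinical question: {question}\n"
        f"Expected format: {fmt}"
    )

print(race_prompt(
    role="You are an anesthesia clinical pharmacology reference.",
    actuals="72 y/o, 84 kg, eGFR 38, on apixaban, scheduled for hip arthroplasty.",
    question="Summarize perioperative anticoagulation considerations.",
    fmt="Bulleted list; flag anything requiring verification against primary sources.",
))
```

Those thirty seconds of structure are the whole trick: the model stops guessing at your role, your patient, and your desired output, and starts answering the question you actually asked.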

Section Three — The Clinical AI Bridge: ChatGPT vs. OpenEvidence vs. Epic

This is where the talk started to really differentiate itself from what most AI education offers. I walked through the architectural difference between general AI (parametric, everything in weights, no citation verification) and RAG-based clinical AI like OpenEvidence (retrieval from real peer-reviewed literature before generating any answer, citations traceable and real). The published data is striking: in a head-to-head comparison, general LLMs gave actionable, evidence-based answers on fewer than 5% of tested clinical questions. OpenEvidence gave actionable answers on 48%. Same questions. Completely different reliability.
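The architectural difference is easy to show in miniature. The sketch below implements only the retrieval step that RAG adds in front of generation: score a small corpus against the question, keep the top hits, and hand the model cited text instead of letting it answer from its weights alone. The corpus, scoring function, and citation keys are toy stand-ins; OpenEvidence's actual pipeline is proprietary.

```python
# Toy retrieval step of a RAG system: rank source passages by relevance
# to the query, then generate only from what was retrieved.
from collections import Counter

corpus = {
    "smith-2023": "sugammadex reversal dosing depends on depth of neuromuscular blockade",
    "lee-2022": "propofol infusion syndrome risk rises with prolonged high-dose infusion",
    "chan-2024": "hypotension prediction index derives features from the arterial waveform",
}

def score(query: str, passage: str) -> int:
    """Word overlap between query and passage (a toy relevance score)."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the k most relevant passages, keyed by their citation."""
    ranked = sorted(corpus.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

# The generation step now sees only traceable, citable text:
for citation, passage in retrieve("sugammadex dosing for deep blockade"):
    print(f"[{citation}] {passage}")
```

A parametric-only model skips that loop entirely, which is exactly why its citations can be fluent, confident, and fake.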

Section Four — Epic’s AI Ecosystem: What Just Dropped

Three weeks before this talk, Epic made its biggest AI announcement in company history at HIMSS 2026, so I had the advantage of covering genuinely current material. More than 85% of Epic’s customer base is now actively using Art, Penny, and Emmie. Art’s ambient-listening AI charting is live. And Epic just announced Agent Factory — a visual drag-and-drop platform that lets health systems build and deploy their own AI agents inside the EHR.

But the piece I spent the most time on was CoMET and Curiosity — Epic’s foundation models, trained on 118 million patients, 115 billion medical events, and 151 billion tokens of real EHR data. This is not a language model in the ChatGPT sense. It was trained on sequences of structured medical events — diagnoses, labs, medications, procedures, in chronological patient order — using the same transformer architecture that originated in a 2017 Google language-translation paper. One model, 78 clinical tasks: predicting readmission risk, cardiovascular events, length of stay, early cancer patterns. This is not the future. It is part of the February 2026 research rollout.
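If "trained on sequences of structured medical events" is hard to picture, this sketch shows the core move: a patient's chronological record becomes one token sequence, exactly as a sentence does for a language model. The event codes and encoding here are invented for illustration; Epic has not published CoMET's actual tokenization.

```python
# Sketch of event-sequence modeling: one patient's record as a
# chronological "sentence" of medical events. Codes are hypothetical.
from datetime import date

timeline = [
    (date(2024, 1, 10), "DX:I10"),           # hypertension diagnosis
    (date(2024, 1, 10), "RX:lisinopril"),    # ACE inhibitor started
    (date(2024, 3, 2),  "LAB:K:high"),       # elevated potassium result
    (date(2024, 3, 9),  "RX:stop:lisinopril"),
]

# Order events in time, then map each distinct code to an integer ID,
# exactly as words are mapped to token IDs in a language model.
events = [code for _, code in sorted(timeline)]
vocab = {code: i for i, code in enumerate(sorted(set(events)))}
token_ids = [vocab[code] for code in events]

print(events)     # the chronological event "sentence"
print(token_ids)  # what a transformer would actually train on

# The training objective, conceptually: given token_ids[:n], predict
# token_ids[n]. Next-event prediction instead of next-word prediction,
# which is how one model can learn readmission, cardiac, and
# length-of-stay patterns from the same sequence data.
```

Same transformer, different vocabulary: instead of words, the tokens are diagnoses, labs, medications, and procedures.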

Section Five — Use Cases by Expertise Level

Three tiers — beginner, intermediate, advanced — with real prompts, real expected outputs, and safety caveats on every drug-dosing query. The beginner cases were the highest-engagement section: drug mechanism review using RACE framework, handoff summary drafting, patient education in plain language. The intermediate section introduced the multi-tool workflow: general AI for structure, OpenEvidence for citations, clinical judgment for the final plan. The advanced section covered OR efficiency modeling, personalized resident learning pathways, and JCAHO compliance auditing workflows — the administrative use cases where the return on investment is genuinely staggering.

Section Six — Agentic AI, Risks, and Your Irreplaceable Role

We closed with three vignettes of what agentic AI looks like in anesthesia over the next two to ten years — the pre-surgical optimizer, the intraoperative sentinel, the administrative AI associate. Followed by an honest accounting of the risks: deskilling, liability, HIPAA, and algorithmic bias. And then — because I’ve watched too many AI talks end on a note that leaves clinicians feeling anxious rather than empowered — we ended with what AI cannot replace. Tactile and sensory assessment. The therapeutic relationship. Contextual moral reasoning. Crisis improvisation. Those four things are irreplaceable. And in an AI-rich environment, they don’t diminish. They stand out more clearly.

Why AI Literacy Is a Professional Responsibility Now

I want to be direct about something, because I think the field sometimes dances around it.

AI literacy is no longer optional professional development for anesthesia clinicians. It is a core competency. Not because AI will replace us — it won’t, and I made that argument explicitly and at length during the talk. But because the AI is already in your OR. It’s in your HemoSphere. It’s in your Epic system. It will be embedded in your pre-op workflow within two years if it isn’t already. And a clinician who doesn’t understand the tools they’re using is not practicing at the standard of care the profession deserves.

More than that: the clinicians who understand these tools will make better decisions than those who don’t. They will catch things earlier. They will synthesize evidence faster. They will prepare more thoroughly. They will document more completely. The gap between the AI-literate CRNA and the AI-avoidant one is going to widen quickly, and I believe that gap will eventually show up in patient outcomes data.

That’s not an argument designed to frighten anyone. It’s an argument designed to motivate.

The room at FANA got it. The questions afterward were sharp, clinically specific, and — most importantly — forward-looking. Not “should we use this?” but “how do we build this into our residency curriculum?” and “what’s the right way to incorporate HPI training into our department?” and “how do I get the full slide deck to share with my chief CRNA?”

That’s the conversation I wanted to start. And it did.

If you are a program director, chair of anesthesia, or department leader who would like to bring this lecture to your institution or residency program, I would welcome that conversation. Contact me at the link below or reach out directly.

Contact:
jpabalate@jpcaic.com