By Jonathan Pabalate, DNP, CRNA, APRN | Founder, JPCAIC — JPC Anesthesia Informatics Corp | Nurse Anesthesia Faculty, University of North Florida
There is a version of me that most of my OR colleagues have never met.
Before I was a CRNA, before I was faculty, before I was building AI-assisted perioperative workflows for surgical centers — I was a kid in a desktop publishing class, learning PageMaker on a Mac SE, completely absorbed in the idea that a computer could help you make something beautiful. I went on to audio engineering. Photography. Graphic design as a side pursuit. The clinical path eventually dominated, as it tends to, but the creative instincts never went away. They just went underground.
I suspect I’m not alone in this. Healthcare is full of people who quietly maintain a second identity: the cardiologist who shoots street photography, the CRNA who records music on weekends, the OR nurse who used to design posters for her college theater program. We don’t talk about it much, because it doesn’t seem relevant to the work. Or at least, it didn’t used to.
That’s starting to change.
A Convergence That Didn’t Feel Inevitable — Until It Was
In late April 2026, Adobe and Anthropic announced the Adobe for Creativity connector — a bridge that brings over 50 tools from Photoshop, Firefly, Lightroom, InDesign, Premiere, and Express directly into Claude. On the surface, this sounds like a tool for designers and content creators. And it is. But the implications for clinicians who sit at the intersection of healthcare, education, and AI are worth thinking through more carefully.
I’ve been exploring it with a specific lens: not as a full-time creative professional, but as someone who uses creative tools episodically — for educational materials, consulting deliverables, presentations, and the occasional personal project — and who is already deeply invested in how AI reshapes clinical and professional workflows.
Here’s what I’ve found worth paying attention to.
The Shift From “Go Do That in Photoshop” to “Let’s Just Do It”
The traditional relationship between AI assistants and creative software was advisory. You’d ask an AI how to do something, get step-by-step instructions, then go execute them yourself in the application. The AI was a manual. A helpful one, but still just a manual.
The Adobe × Claude integration collapses that gap. Instead of Claude telling you how to adjust white balance on a batch of images in Lightroom, it can do it — with you describing the outcome you want in natural language. Instead of explaining how to lay out a document in InDesign, it can build the structure while you focus on the content.
For a clinician who uses these tools occasionally but isn’t living in them daily, this is significant. The cognitive overhead of relearning a tool you haven’t touched in six months is real. When you can describe your intent instead of navigating menus, the barrier to entry drops substantially.
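For readers curious about the mechanics, integrations of this kind generally follow a simple loop: the model translates an outcome-level request into a structured tool call, and the host application executes it and reports back. The sketch below is a toy illustration of that pattern in Python, not Adobe's or Anthropic's actual API; the tool name, parameters, and keyword matching are invented for illustration.

```python
# Toy illustration of the outcome -> tool-call -> execution loop.
# The tool name ("adjust_white_balance") and its parameters are
# hypothetical; real connectors expose their own tool schemas.

def plan_tool_call(intent: str) -> dict:
    """Stand-in for the model: map natural-language intent to a tool call."""
    if "warmer" in intent.lower():
        return {"tool": "adjust_white_balance",
                "args": {"temperature_shift": 500, "scope": "selected_photos"}}
    return {"tool": "noop", "args": {}}

def execute(call: dict) -> str:
    """Stand-in for the host app: run the structured call, report the result."""
    if call["tool"] == "adjust_white_balance":
        args = call["args"]
        return (f"Shifted white balance by {args['temperature_shift']}K "
                f"on {args['scope']}")
    return "No action taken"

# The user states the outcome; the loop handles the steps.
call = plan_tool_call("Make this batch of photos a little warmer")
print(execute(call))
```

The point of the pattern is the division of labor: the person supplies intent and judgment, the model supplies the translation into structured actions, and the application supplies the execution.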
What This Looks Like in Practice (For Someone Like Me)
I want to be honest that I’m still in early exploration of this integration — I haven’t run it through every use case I care about. But the potential is visible from where I’m standing.
For educational material development: I regularly build lecture slides, case-based discussions, and simulation scenarios for anesthesia residents. Those materials need visuals — diagrams, annotated images, formatted documents. Right now, I context-switch constantly: write in one tool, design in another, format in a third. The ability to have Claude coordinate across those surfaces — pulling from Firefly for generated visuals, formatting in InDesign, iterating in a single conversational thread — would meaningfully compress that workflow.
For consulting deliverables: JPCAIC produces reports, scope-of-work documents, and presentation decks for healthcare organizations. Branding consistency matters. Right now, maintaining that consistency across document types requires discipline and templates. AI-assisted coordination across the Creative Suite creates the possibility of brand-coherent outputs that don’t require manual enforcement at every step.
For photography: I shoot regularly, primarily for personal projects. Lightroom is my home base for processing, but Photoshop is where I go when something needs more deliberate attention. The integration doesn’t replace the craft — and I wouldn’t want it to — but having a conversational layer for complex multi-step edits is genuinely interesting.
The Deeper Point: Why This Matters for Clinical AI Literacy
Here’s the argument I’d make to any clinician reading this: your creative background is not separate from your clinical and professional development. It is directly relevant to how you will navigate the AI-enabled future of healthcare.
Why? Because understanding how AI integrates with complex professional tools — how to describe an outcome rather than a sequence of steps, how to evaluate AI-assisted output critically, how to preserve intent while delegating execution — is the same cognitive skill set that will define how effectively clinicians work with AI in the OR, the documentation workflow, and the diagnostic pipeline.
When I use Claude to help me build a perioperative informatics dashboard, the skills I’m exercising are structurally similar to those I exercise when using it to produce a branded consulting document in InDesign. In both cases, I’m doing the clinical or strategic thinking; the AI is doing the translation into format and execution. Knowing where that line is, and being fluent at working along it, is what clinical AI literacy actually means.
Clinicians who only engage with AI through their EHR or their hospital’s approved toolset will have a narrower version of that literacy. Those who develop it across domains — clinical, creative, analytical — will have something more transferable.
A Note on What This Isn’t
I want to be precise about the limits of what I’m describing.
The Adobe × Claude integration is not a replacement for expertise. Knowing how light behaves, how typographic hierarchy works, how color communicates — none of that is automated. What is changing is the execution layer for people who already have that knowledge but have been constrained by tool friction and time.
It also isn’t magic. Like every AI integration, the quality of the output depends heavily on the quality of the input. Vague instructions produce vague results. Clinicians who are already disciplined about communicating precise clinical intent will likely adapt well to the communication style that gets the best results out of these tools.
What I’m Watching Next
I’m specifically interested in how this integration develops along two fronts:
The first is education-facing creative workflows — whether AI-coordinated design tools can help clinician educators produce higher-quality materials at the pace that modern medical education demands. The gap between what we want to produce and what we can realistically execute in the time available is real, and it matters for learners.
The second is professional identity. I think there are a lot of clinicians like me — people who have maintained a creative practice quietly alongside their clinical careers, who have never fully integrated those two identities because there was no natural bridge. AI-assisted creative tools may be part of what builds that bridge. Not by automating creativity, but by lowering the friction that kept the two worlds separate.
That feels worth paying attention to.
