Leadership Matters: Thoughts on AI Collaboration

February 17, 2026 · 5 min read

Hello again, and thank you for checking in!

If you’ve been following along with my recent posts, you know we’ve spent time talking about data, analytics, and how new technologies can help PA programs keep pace with growing assessment and accreditation demands. Today, I’d like to step slightly to the side of that conversation and look at something just as important: how we, as educators and professionals, decide to use these tools responsibly.

I promise this isn’t going to be a dramatic “robots are coming” discussion. In my experience, every new tool in education comes with two predictions: one group believes it will change everything overnight for the better, while another insists it will ruin everything forever. Reality usually lands somewhere in between. That’s where thoughtful leadership comes into play.

Open the pod bay doors, HAL…or maybe just the conversation

AI is here. It’s already part of research-driven professions, healthcare systems, and educational environments. The question isn’t whether it exists, or even whether we’re going to use it. That’s already happening. Instead, we must ask questions like, “How do we use it well?” and “How do we maximize the potential of the collaboration between us and all this computing power?”

Thinking of this relationship between humans and AI as a collaboration is key to alleviating the anxiety many of us feel toward it. Outside of academia, our reactions to AI depend largely on context: people respond very differently depending on where they encounter it. In creative spaces, there’s a noticeable pushback. Artists and writers sometimes feel the need to clarify that AI didn’t generate their work, and many consumers actively look for that distinction.

At the same time, in everyday situations, people embrace AI without much hesitation. I recently ordered pizza from a local restaurant and realized halfway through that the order taker was an AI. I worried it might be one of those voice-activated bots that eventually prompt us to shout “Operator! Operator!” at the phone. Not so. My AI order-taker did a great job, handling my last-minute “wait, make sure those are black olives, not green olives” correction perfectly. The contrast was fascinating. It reminded me that the real issue isn’t the technology itself, but how and where we choose to trust it.

Another reason I approach this conversation calmly is that education has lived through moments like this before. Every generation of health educators encounters a new tool that seems poised to change everything. Many of us (of a certain age) have lived through the introduction of computers in classrooms, electronic health records, online learning platforms, and simulation technology. Each time, the real transformation wasn’t the tool itself, but how educators and medical professionals chose to integrate it into their practice. The technology changed workflows, but the profession’s core values stayed the same. AI feels similar to me. We have an opportunity to decide thoughtfully how this new capability serves the mission we already have.

In PA education, our work has always been built on professional judgment, mentorship, and accountability. Those principles don’t disappear because new tools enter the room. If anything, they become more important.

A Familiar Pattern in Professional Work

I’ve been thinking about the AI revolution not only as an educator, but as someone whose own business has begun incorporating AI as a practical tool, and also through conversations with professionals in other fields who are navigating similar questions.

A writer friend recently told me that she initially worried AI might replace her profession entirely. Over time, she discovered that AI didn’t replace her creative process but instead accelerated parts of it. Outlines, frameworks, and brainstorming became easier. What remained uniquely hers was the voice, judgment, and creative decisions that shaped the final result. That experience mirrors what many of us are seeing in education. Tools can support the work, making certain processes faster or clearer. But they don’t replace the thinking behind the work.

Honestly, the best results come when human expertise and technology collaborate, not when one tries to replace the other.

Leadership Means Defining the Boundaries

A colossal technological change invariably requires a shift in how leadership thinks. It’s worth saying plainly: AI does not replace professional judgment.

In PA education, we mentor students, model professional behavior, weigh nuance, and understand context in ways no algorithm can replicate. Those human elements are the heart of what we do.

Education program leaders are increasingly responsible for deciding:

  • Which processes benefit from automation

  • Which decisions must remain fully human

  • How transparency and accountability are preserved

  • How technology supports, rather than drives, educational goals

  • How AI can be leveraged to make our work better than ever

AI doesn’t remove educators' responsibility. If anything, it requires clearer intentionality about how we teach, assess, and make decisions. What technology can do is reduce friction, saving time on repetitive work, helping us see patterns sooner, and allowing educators to focus more energy on teaching, mentoring, and thoughtful decision-making.

Here’s a pattern I’ve noticed. One interesting effect of AI is that it tends to reveal areas where processes were already unclear. When expectations and workflows are well defined, technology supports them effortlessly. When they’re vague or inconsistent, technology highlights those gaps faster and shows us exactly what’s needed to fill them in.

In many ways, this moment challenges us to be more explicit about our goals, standards, and reasoning. That’s a healthy evolution for education, even if it feels uncomfortable at times.

Looking Ahead

As we move forward, one of the most important conversations for PA education isn’t just about new tools, but about clarity: clear expectations, educational intent, and processes that support both students and faculty. In the coming weeks, I’ll be exploring practical ways programs can strengthen that clarity as standards continue to evolve, and I mean that quite literally. We’ve been hosting a webinar series on creating syllabi compliant with the 6th Edition Standards, and I’m excited to share the highlights with you, readers!

Thanks again for spending a few minutes with me today, and for the thoughtful work you continue to do on behalf of your students and the profession.

Collaborative Intelligence · Professional Judgment · ARC-PA 6th Edition · Educational Intentionality · Augmented Pedagogy

Scott Massey

With over three decades of experience in PA education, Dr. Scott Massey is a recognized authority in the field. He has demonstrated his expertise as a program director at esteemed institutions such as Central Michigan University and as the research chair in the Department of PA Studies at the University of Pittsburgh. Dr. Massey's influence extends beyond practical experience: he has contributed significantly to accreditation, assessment, and student success. His innovative methodologies have guided numerous PA programs to ARC-PA accreditation and improved program outcomes, and his predictive statistical risk modeling has enabled schools to anticipate student results. Dr. Massey has published articles related to predictive modeling and educational outcomes, and has conducted longitudinal research on stress among graduate health science students. His commitment to advancing the PA field is evident through participation in PAEA committees, councils, and educational initiatives.


© 2024 Scott Massey Ph.D. LLC