AI helps you move fast—SMEs help you get it right. Knowing when to use each, and how to combine them, makes all the difference.
SMEs can feel like ghosts—hard to schedule and slow to respond. Meanwhile, AI tools like ChatGPT or Bard are available 24/7, ready to spin up outlines and explanations or rework technical content in seconds. As one expert puts it:
“Tools such as ChatGPT, Google Gemini and Microsoft co-pilot … are just an evolution of tools to make us more efficient.” (chameleoncreator.com)
So, when is human insight critical, and when can AI get the job done—perhaps even better?
| 🔍 Capability | SME (Human Expert) | AI (e.g. ChatGPT) |
|---|---|---|
| Accuracy | ✅ High—when focused | ⚠️ Variable—depends on prompt quality |
| Contextual depth | ✅ Real-world, internal insights | ❌ Often generic or surface-level |
| Speed | ❌ Slow, scheduling-dependent | ✅ Instant draft generation |
| Availability | ❌ Hard to book | ✅ Always on, no meetings needed |
| Ideation & structuring | ⚠️ Sometimes rigid or overly detailed | ✅ Excellent at generating analogies and outlines |
| Follow-up clarity | ✅ Can answer specific questions | ⚠️ May hallucinate if prompts aren’t tight |
| Regulated content | ✅ Essential | ❌ Risky without review |
Call in a Subject Matter Expert when:
- You're building highly technical, compliance-heavy, or regulated content.
- You need real-world examples, edge cases, or internal tools/procedures.
- Legal, health, or reputational consequences are on the line.
- You’re tackling new content that AI simply can’t know yet.
- You're stuck in “the zone where you don’t know what you don’t know.”
Pro tip: Don’t ask SMEs to write the content. Ask for voice notes, annotated slides, or diagrams—low effort, high fidelity.
Lean on AI when:
- You need a first draft or outline to get unstuck fast.
- You're dealing with generic or foundational topics, like onboarding or safety protocols.
- You want multiple versions—e.g., an analogy, a summary, a quiz, or a checklist.
- You need help refining SME input that’s too dense or technical.
- You’re filling in minor knowledge gaps—like “what is a load balancer?” or “what’s the difference between an API and an SDK?”
As Lane Hannah from Vodafone noted:
“We now have even quicker access to that knowledge and a way of … having it curated for you.” (chameleoncreator.com)
Pro tip: Use AI to amplify your thinking, not to replace your SME consult. AI should support your judgment, not substitute for it.
1. Start with AI to create an outline, storyboard, or draft.
2. Share it with an SME—ask, “What’s wrong or missing here?”
3. Refine the draft using SME feedback plus your instructional design skills.
4. Return to AI for alternate formats—quizzes, metaphors, simplified versions.
Melissa Crawford offers valuable perspective on AI’s broader role:
“In the long term, you'll see individuals lean into using AI as a personal assistant on steroids... one AI assistant that links them all, coaches and supports them.” (chameleoncreator.com)
But she also warns:
“My main concern is data ... data captured for one purpose is used for another without a person realising.” (chameleoncreator.com)
That’s why ethical design matters: AI should augment learning, not create new risks. Always vet for bias, privacy, and transparency.
You're not replacing humans by using AI. You're maximising your impact.
A skilled learning designer knows when to bring in SMEs, when to trust AI, and how to bridge the two. That’s not just efficiency—it’s craft.