Claude vs. ChatGPT vs. Perplexity: Which AI Tool Should Medical Trainees Actually Use?
If you've spent any time around residents or fellows lately, you know AI tools have officially infiltrated medicine. People are using them to draft H&Ps, summarize papers for journal club, write emails to attendings, and (whether they admit it or not) study for boards.
But here's the thing nobody tells you: not every AI tool is good at every task.
I learned this the hard way. I once used ChatGPT for a literature search and got back hallucinated citations for papers that don't exist. I asked Perplexity to help me draft a personal statement and got something that read like a corporate press release. I spent six months using only ChatGPT before I realized I had been doing my writing on the wrong platform the whole time.
I run Making It Through Medicine on the side of a busy training schedule. That means in a normal week I'm switching between writing clinical notes, drafting research papers, building course content, replying to emails I should have answered three days ago, and trying to keep my Google Drive from spontaneously combusting.
I use Claude, ChatGPT, and Perplexity every single day. After a lot of trial and error, I've figured out what each one is actually good at.
Here's the honest breakdown.
The 30-second version
If you want the TL;DR before we get into the weeds:
Use Claude when you're writing or editing anything substantive. Papers, personal statements, emails that matter, contract review, complicated clinical reasoning.
Use ChatGPT when you want to brainstorm, make images, run an interactive study session, or use voice mode on your commute.
Use Perplexity when you need a real answer with citations you can actually verify. Guidelines, recent papers, anything where "trust me bro" isn't going to fly.
Now for the part where I show you how this plays out in real life.
Claude: my default for writing and thinking
Claude is the tool I reach for whenever the output has to read like an actual human being wrote it. It's also the one I trust most for nuanced tasks where missing the point would cost me hours.
What Claude is best at for medical trainees
Long-form writing. If I'm drafting a paper, an abstract, a research proposal, a personal statement, or a blog post like this one, Claude is the tool. The writing has texture. It doesn't lean on the same tired transitions ("furthermore," "moreover," "in today's fast-paced world") that immediately tip off any reader that an AI got involved.
Editing your own writing. This is honestly where Claude shines for me. I write a draft, paste it in, and ask Claude to flag where the argument falls apart, where the prose is flabby, or where I'm being too cautious. It's a better editor than most peer reviewers I've had.
Reading and analyzing PDFs. You can drop in a 40-page paper and ask real questions. "What was the primary endpoint?" "Were the groups balanced at baseline?" "Is the conclusion actually supported by the data?" Claude gives you a real answer instead of regurgitating the abstract.
Contract and offer letter review. This is the use case nobody talks about and every senior trainee needs. When you're looking at your first attending contract, paste it into Claude and ask it to flag unfavorable terms, ambiguous language, and anything worth negotiating. It will not replace a contract attorney. But it will help you walk into that conversation knowing exactly what to ask.
Working through clinical reasoning for studying. Not for direct patient care decisions, please. But for studying? Excellent. "Walk me through the differential for new-onset heart failure in an infant." "Why do we use prostaglandin in ductal-dependent lesions?" Claude is patient and pedagogical in a way ChatGPT often isn't.
Where Claude falls short
It doesn't have web search the way Perplexity does. If you're asking about a paper from last month or a guideline that was updated this year, you may get something out of date. Claude has gotten better about this, and some versions have a search feature now, but I still default to Perplexity for anything time-sensitive.
It also doesn't generate images, which is annoying if you're making social content or trying to drop something into a slide deck.

ChatGPT: the multi-tool
ChatGPT was the first AI tool most of us tried. For a lot of trainees it's still the only one they use. That's fine. But it means you're missing what each tool does best.
Here's where ChatGPT actually earns its place in your rotation.
What ChatGPT is best at
Brainstorming. When I have a vague idea for a blog post, a course module, or a research project and I want to throw it around with something that will riff back, ChatGPT is great. It's more freewheeling than Claude and will surface adjacent ideas I would not have thought of on my own.
Image generation. This is huge if you create any kind of content: featured images for blog posts, slides for a lecture, social media graphics. ChatGPT's built-in image generator covers most of what a medical trainee needs without having to learn another tool.
Voice mode. I use this on drives to and from clinic. I'll pull up a study topic and have an interactive back-and-forth without taking my hands off the wheel. The closest thing I've found to a study buddy on demand.
Custom GPTs. If you do something repetitive, like drafting consult notes in a specific format or summarizing patient handoffs, you can build a custom GPT once and reuse it forever. I have one set up for breaking down specialty papers into a standardized clinical summary format. It has saved me hours.
Code Interpreter for data analysis. If you're working on a research project and have a CSV of data you need to play with, you can upload it and have ChatGPT run analyses, generate charts, and walk you through the stats. You still need to know what you're doing statistically. But it's a useful sandbox.
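To make that concrete, here's a minimal sketch of the kind of analysis Code Interpreter will run for you when you upload a dataset. The CSV, the column names ("group", "los_days"), and the numbers are all made up for illustration; the point is the per-group summary, which is the first thing you'd ask for with real research data.

```python
import io
import pandas as pd

# Stand-in for an uploaded CSV: length of stay (days) by study group.
# Entirely hypothetical data, just to show the shape of the workflow.
csv_data = io.StringIO(
    "group,los_days\n"
    "control,5\ncontrol,7\ncontrol,6\n"
    "treatment,4\ntreatment,3\ntreatment,5\n"
)

df = pd.read_csv(csv_data)

# Per-group summary: sample size, mean, and standard deviation
summary = df.groupby("group")["los_days"].agg(["count", "mean", "std"])
print(summary)
```

This is roughly what ChatGPT generates and executes behind the scenes when you ask it to "summarize length of stay by group," which is why you still need to sanity-check the stats yourself: the tool runs whatever analysis it thinks you asked for, appropriate or not.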
Where ChatGPT falls short
It's the worst of the three for serious writing. The default voice is corporate-blog-post-meets-LinkedIn-influencer, and even with heavy prompting it tends to drift back. If the words are going to be read by a real human who matters, I almost always start somewhere else.
It also still hallucinates more than I'm comfortable with on clinical questions and citations. Anything where you need a real source, don't trust ChatGPT alone.
Perplexity: the research engine
Perplexity is the tool I tell every medical trainee to start using yesterday. It's the only one of the three designed around real-time web search with cited sources, and that's exactly what we need most of the time in medicine.
What Perplexity is best at
Clinical questions where you need a real answer with a real source. "What's the latest AAP guideline on infant feeding?" "What's the current evidence on prophylactic antibiotics for chest tubes?" "What did the latest TOPCAT analysis show?" Perplexity gives you an answer with citations you can actually click and read. That's huge for clinical work.
Quick literature searches. I still use PubMed for systematic reviews. For the "I just need to know what's out there on X" kind of question, Perplexity is excellent. It pulls recent papers and summarizes them with links, which saves a long trip down a Google Scholar rabbit hole.
Current events in medicine. New drug approvals, guideline updates, recent research. Perplexity knows about them because it's searching the live web. Claude and ChatGPT often don't, depending on their training cutoff.
Fact-checking the other tools. This is my secret use case. When ChatGPT or Claude tells me something that sounds slightly off, I copy the claim into Perplexity and ask it to verify. Half the time it confirms, half the time it gives me a more nuanced answer with the actual source attached.
Where Perplexity falls short
It's not a writing tool. The default output reads like a research summary, because that's what it's designed to be. Do not ask Perplexity to draft your personal statement.
It also doesn't have the same conversational depth as Claude or ChatGPT. You ask, it answers. It's less of a thinking partner and more of a very good librarian.

How I actually use all three in a real week
Here's what a typical week looks like for me:
Monday: I'm writing a paper. I draft in Claude and ask it to refine specific paragraphs as I go. When I need to cite a recent study, I switch to Perplexity to find the best source and verify the data.
Tuesday: Lecture prep. ChatGPT for a quick image on the title slide, Claude to outline the talk and refine the script.
Wednesday: Blog post for Making It Through Medicine. Claude for the writing, Perplexity for any stats or studies I want to reference, ChatGPT for the featured image.
Thursday: Studying for the in-training exam. Voice mode in ChatGPT on the drive home. Claude for working through reasoning on tougher cases. Perplexity when I want to look up a recent guideline change.
Friday: Email triage. Claude for any email that actually matters (a program director, a partnership conversation, a patient's family). A custom GPT for the more standardized stuff.
The point isn't that you need to use all three. The point is that picking the right tool for the job will save you hours and make the output dramatically better.
My honest recommendation
If you can only pay for one, get Claude. The writing is better, the analysis is better, and it's strongest at most of what medical trainees actually need to do: write, think, edit, and learn.
If you have a little more budget, add Perplexity. The free version is solid, but Pro is worth it if you do any meaningful amount of literature work.
ChatGPT comes third for me, with one caveat: voice mode and image generation are unique enough that it's still worth having access to. The free tier covers most of what you need.
Whatever you pick, the bigger principle is this: stop using one AI tool for everything.
They aren't interchangeable. Treating them like they are will leave you with worse writing, sketchier citations, and a lot of wasted time.
The trainees who get the most out of these tools treat them like specialized instruments rather than a single magic wand. Pick the right one for the job, learn its quirks, and let each tool do what it was built to do.
Want more of this?
If you found this helpful and want more practical takes on tools, money, and side hustles for medical trainees, sign up for the Making It Through Medicine email list. New posts every week. No fluff. No AI-sounding nonsense.