AI Study Tools: An Honest Review After 6 Months

March 2026 · 14 min read · 3,366 words · Last Updated: March 31, 2026 · Advanced
I used 8 AI study tools across 6 months of med school. My exam average went from 78 to 89. But 3 of those tools actually made things worse.

Here's what nobody tells you about AI study assistants: they're not all created equal, and some will actively sabotage your learning without you realizing it.

I'm a third-year med student, and last semester I decided to run an experiment. I tracked every study session, every practice question, and every exam score while rotating through different AI tools. I logged my time, my retention rates, and my actual performance on high-stakes exams.

The results surprised me. Some tools I expected to love became crutches that weakened my recall. Others I almost dismissed became the backbone of my study routine. And the difference between the best and worst tools? A 15-point swing in exam performance.

This isn't a sponsored post. I paid for most of these tools myself, and I'm going to tell you exactly which ones earned their subscription fees and which ones are collecting dust in my browser bookmarks.

Why I Started This Experiment

Med school has a dirty secret: we're all drowning in information, and nobody actually knows the best way to study anymore. Traditional methods like Anki and handwritten notes still dominate the study groups, but everyone's whispering about AI. ChatGPT can explain complex pathways. Notion AI can summarize lectures. Quizlet's AI can generate practice questions from your notes. But here's the problem—nobody's actually measuring whether these tools work, or if they're just expensive procrastination dressed up as productivity.

I started tracking my AI tool usage in January, right at the beginning of my pathology and pharmacology block. This was perfect timing because both courses are memorization-heavy with high-stakes exams every three weeks. I could actually measure the impact.

My baseline wasn't great. Fall semester, I averaged 78% on exams using traditional methods: Anki flashcards, lecture recordings at 2x speed, and handwritten notes. I was studying 6-7 hours daily and barely keeping my head above water. I needed something to change, and AI seemed like the obvious answer.

So I made a spreadsheet. Every study session got logged: which tool I used, how long I studied, what material I covered, and how I felt about my understanding (1-10 scale). After each exam, I'd note my score and which tools I'd used to prepare for that specific content. I also did weekly self-quizzes on material from 1, 2, and 4 weeks prior to track retention.

The goal wasn't to find a magic bullet. It was to figure out which tools actually helped me learn versus which ones just made me feel productive.
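If you want to replicate this kind of tracking, here's a rough sketch of what a session log could look like as a simple CSV plus a few lines of Python. The file name, column names, and the log_session helper are illustrations, not my actual setup; my real log was just a plain spreadsheet.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical log file and columns -- adapt them to whatever you actually track.
LOG_FILE = Path("study_log.csv")
FIELDS = ["date", "tool", "hours", "topic", "understanding_1_to_10"]

def log_session(tool: str, hours: float, topic: str, understanding: int) -> None:
    """Append one study session to the CSV log, writing the header on first use."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "hours": hours,
            "topic": topic,
            "understanding_1_to_10": understanding,
        })

# Example: a 90-minute ChatGPT session on ACE inhibitor pharmacology, rated 7/10.
log_session("ChatGPT Plus", 1.5, "ACE inhibitors", 7)
```

The specific format matters far less than logging consistently and writing down the self-rating before you know your exam score.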

The Night I Almost Failed Because of AI

Three weeks into the experiment, I had my first pharmacology exam. I'd spent the previous two weeks using an AI tool that generated practice questions from my lecture slides—let's call it "QuizBot" for now.

QuizBot was incredible. It would take my messy lecture notes and instantly create 50 multiple-choice questions. The interface was slick. The questions looked professional. I felt like I was studying efficiently because I could blast through 200 questions in an evening.

I walked into that exam confident. I'd done over 600 practice questions. I knew this material cold.

I got a 71.

I sat in my car afterward, staring at the score on my phone, completely confused. How did I bomb this? I'd studied more than ever. I'd done more practice questions than anyone in my study group.

That night, I went back through every QuizBot question I'd answered. And I found the problem: the AI was generating questions that were too easy. It would ask things like "Which drug blocks ACE?" when the real exam asked "A patient presents with hyperkalemia and a dry cough after starting a new medication. What is the most likely mechanism?"

QuizBot was testing recall. The exam was testing application. I'd spent two weeks training myself to recognize answers, not to think critically about the material. The AI had made me dumber.

This was my wake-up call. Not all AI assistance is helpful. Some tools optimize for the wrong metrics—they make you feel good about studying without actually improving your understanding. From that point forward, I got ruthless about measuring actual outcomes, not just how productive I felt.

The Data: 6 Months, 8 Tools, 12 Exams

Here's what I tracked across the entire semester:
| Tool | Primary Use | Avg Study Time/Week | Avg Exam Score | 4-Week Retention | Cost/Month |
|------|-------------|---------------------|----------------|------------------|------------|
| ChatGPT Plus | Concept explanation | 8 hours | 87% | 82% | $20 |
| Notion AI | Note summarization | 3 hours | 81% | 71% | $10 |
| QuizBot (anonymized) | Practice questions | 6 hours | 74% | 65% | $15 |
| Elicit | Research papers | 4 hours | 89% | 88% | $12 |
| Mem.ai | Spaced repetition | 5 hours | 86% | 91% | $15 |
| Otter.ai | Lecture transcription | 2 hours | 79% | 68% | $17 |
| Consensus | Medical literature | 3 hours | 88% | 85% | $9 |
| Anki + AnkiGPT | Flashcard generation | 7 hours | 90% | 93% | Free + $8 |
The numbers tell a clear story. The tools that helped me actively engage with material (ChatGPT for explanations, Elicit for research, Anki for active recall) dramatically outperformed the tools that let me passively consume information (Notion AI summaries, Otter transcripts, QuizBot's easy questions).

My exam average climbed from 78% to 89% over the semester, but that improvement wasn't linear. It came in jumps as I dropped the tools that weren't working and doubled down on the ones that were.

The retention data is even more telling. Four weeks after studying material, I could recall 93% of what I'd learned through Anki + AnkiGPT, but only 65% of what I'd studied with QuizBot. That's a massive difference when you're preparing for board exams that test everything you've learned over two years.
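If you want to build a table like this from your own log, the math is nothing fancy: average the scores per tool. Below is a rough sketch assuming a CSV with tool, exam_score, and retention_4wk columns; those column names are illustrative, not my exact spreadsheet headers.

```python
import csv
from collections import defaultdict

def summarize_by_tool(log_path: str) -> dict:
    """Average exam scores and 4-week retention per tool from a simple CSV log."""
    scores = defaultdict(lambda: {"exam": [], "retention": []})
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            scores[row["tool"]]["exam"].append(float(row["exam_score"]))
            scores[row["tool"]]["retention"].append(float(row["retention_4wk"]))
    return {
        tool: {
            "avg_exam_score": round(sum(v["exam"]) / len(v["exam"]), 1),
            "avg_retention_4wk": round(sum(v["retention"]) / len(v["retention"]), 1),
        }
        for tool, v in scores.items()
    }

# Example usage with a hypothetical log file:
print(summarize_by_tool("exam_log.csv"))
```

The per-tool averages hide some noise (different exams, different topics), so treat them as a rough signal for what to keep and what to drop, not as a controlled study.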

What the Numbers Actually Mean

Looking at that table, you might think the answer is simple: use Anki, drop QuizBot, done. But it's more nuanced than that.
The best AI study tool isn't the one with the highest score. It's the one that makes you think harder, not the one that makes studying feel easier.
ChatGPT Plus became my most-used tool not because it gave me answers, but because it forced me to articulate my confusion. When I didn't understand a concept, I couldn't just highlight text and get a summary. I had to write out a question: "I don't understand why ACE inhibitors cause hyperkalemia. Can you explain the mechanism step by step?" That act of formulating the question—identifying exactly what I didn't understand—was half the learning. ChatGPT's explanation was the other half. But the tool only worked because I used it actively, not passively.

Elicit and Consensus scored high for a similar reason. When I needed to understand a complex topic like the renin-angiotensin-aldosterone system, I'd use these tools to pull up actual research papers and clinical studies. Then I'd use ChatGPT to help me understand the papers. This two-step process—finding primary sources, then getting help interpreting them—led to much deeper understanding than just reading a summary.
The tools that hurt my performance all had one thing in common: they let me avoid the hard work of thinking. They made me feel productive while actually making me passive.
Notion AI's summarization feature was the worst offender. I'd dump my lecture notes into it, get a clean summary, and feel like I'd studied. But I hadn't engaged with the material at all. I'd just watched an AI engage with it for me. My brain never had to do the work of deciding what was important, what connected to what, or what I didn't understand.

Otter.ai had the same problem. Having perfect transcripts of every lecture sounds amazing, but it removed the need to actively listen and take notes. Note-taking forces you to process information in real-time, to decide what matters, to rephrase concepts in your own words. Otter eliminated that cognitive work, and my retention suffered.

The lesson here isn't "avoid summarization tools." It's "avoid tools that let you skip the cognitive work of learning." Some AI tools are cognitive amplifiers—they make your thinking more powerful. Others are cognitive replacements—they think for you. You want the first kind, not the second.

The Myth That AI Makes Studying Faster

Everyone assumes AI study tools save time. That's the whole pitch, right? Study smarter, not harder. Get more done in less time. It's bullshit.

The tools that actually improved my performance made me study longer, not shorter. Anki + AnkiGPT had me spending 7 hours per week on flashcards. ChatGPT sessions often ran 90 minutes as I worked through complex topics. Using Elicit to find and read research papers added hours to my study time.

But here's the difference: that time was productive. I was learning, not just reviewing. I was building understanding, not just memorizing facts.

Compare that to QuizBot, where I could blast through 200 questions in two hours and feel incredibly productive. I was moving fast, checking boxes, seeing progress bars fill up. But I wasn't learning. I was just training myself to recognize patterns in easy questions.
The best AI study tools don't make studying faster. They make studying more effective by forcing you to engage more deeply with the material.
This is the opposite of what most students want to hear. We're all looking for the hack, the shortcut, the way to get an A without grinding. But learning doesn't work that way. You can't outsource understanding to an AI. What you can do is use AI to make your study time more focused and more challenging. ChatGPT can generate harder questions than you'd come up with yourself. Elicit can find papers you'd never discover on your own. Anki's algorithm can optimize your review schedule better than you could manually. But all of these tools require you to put in the cognitive work. They're not shortcuts. They're power tools. And like any power tool, they're only useful if you're willing to do the actual work.

My Current Stack: What Actually Works

After six months of experimentation, here's what I use every single day:

1. Anki + AnkiGPT for active recall - I spend 45-60 minutes every morning on flashcards. AnkiGPT helps me generate cards from my notes, but I always edit them to make sure they're testing understanding, not just recall. The key is making cards that force me to explain mechanisms, not just identify terms.

2. ChatGPT Plus for concept explanation - Whenever I hit a wall understanding something, I open ChatGPT and work through it. I don't just ask for an explanation—I ask it to quiz me, to give me analogies, to help me connect the concept to things I already know. I treat it like a tutor, not a search engine.

3. Elicit for finding research papers - When I need to go deeper on a topic, Elicit helps me find relevant papers fast. I'll usually pull up 3-5 papers on a topic, skim them, then use ChatGPT to help me understand the key findings and how they connect.

4. Consensus for clinical context - This tool searches medical literature specifically, which is perfect for understanding how concepts apply in clinical practice. When I'm studying a drug class, I'll use Consensus to find studies on clinical outcomes, side effects, and real-world usage.

5. Notion for organization (without AI) - I still use Notion, but I turned off the AI features. I use it purely for organizing my notes and tracking my study schedule. The act of organizing information manually helps me process it.

6. Otter.ai for specific lectures only - I still use Otter, but only for lectures where the professor talks too fast for me to take notes. And I always go back and take my own notes from the transcript—I never just read the transcript and call it studying.

7. Traditional methods that still matter - Handwritten notes for complex diagrams and pathways. Study groups for discussing difficult concepts. Practice questions from official sources (USMLE, NBME) to calibrate my understanding.

Total cost: about $65/month. Total study time: still 6-7 hours per day. But my exam average is now consistently in the high 80s, and my retention is dramatically better.

The Tools I Dropped and Why

Not everything made the cut. Here's what I stopped using and the specific reasons why:

QuizBot and similar auto-question generators - These tools optimize for quantity over quality. They'll generate hundreds of questions, but most are too easy or test the wrong things. I wasted weeks doing practice questions that didn't prepare me for actual exams. Now I only use official practice questions or ones I write myself.

Notion AI's summarization - It made me lazy. I'd dump notes in, get a summary, and never actually process the information. My retention was terrible. Now I summarize my own notes manually, which forces me to decide what's important.

Otter.ai for every lecture - Having perfect transcripts removed the need to actively listen. I'd zone out in lectures knowing I could read the transcript later. But I never retained information as well from transcripts as I did from live note-taking. Now I only use Otter for fast-talking professors.

Grammarly and similar writing assistants - These are great for emails, but terrible for study notes. When I used Grammarly to clean up my notes, I'd spend time making them pretty instead of making them useful. Study notes should be messy and personal, not polished.

Any tool with a "study mode" or "focus mode" - These features are usually just timers and website blockers dressed up as AI. They don't actually help you learn. If you need to block distracting websites, use a free browser extension. Don't pay for AI to do it.

Flashcard apps that auto-generate cards from highlights - Tools like Readwise and RemNote can turn your highlights into flashcards automatically. Sounds great, but it removes the cognitive work of deciding what's worth remembering and how to phrase the question. Making flashcards manually is part of the learning process.

The pattern here is clear: I dropped tools that let me avoid cognitive work. The tools that survived are the ones that make me think harder, not the ones that think for me.

How to Choose AI Study Tools That Actually Work

Based on my six months of data, here's how to evaluate whether an AI study tool will actually help you:

1. Does it make you think harder or does it think for you? - Good tools force you to engage actively with material. Bad tools let you passively consume information. If a tool's main benefit is "saves time," it's probably making you passive.

2. Can you measure its impact on actual performance? - Track your exam scores, retention rates, and study time. If a tool doesn't improve at least one of these metrics, drop it. Don't keep using something just because it feels productive.

3. Does it test understanding or just recall? - Tools that generate easy questions or simple flashcards will hurt your performance on exams that test application and analysis. Make sure your study tools match the cognitive level of your actual exams.

4. Is it replacing a cognitive process or enhancing one? - Taking notes is a cognitive process. Having an AI take notes for you replaces that process. Having an AI help you organize your notes enhances your process. Choose enhancement over replacement.

5. Would you still learn if the tool disappeared tomorrow? - If losing access to a tool would cripple your studying, you're probably too dependent on it. Good tools should make you a better learner, not a dependent one.

6. Is the cost justified by measurable results? - I pay $65/month for AI tools, but my exam average increased 11 points. That's worth it. If you're paying for tools that don't improve your performance, you're wasting money.

7. Does it integrate with your existing workflow? - The best tools fit seamlessly into how you already study. If you have to completely change your study routine to use a tool, it's probably not worth it.

8. Can you customize it to your learning style? - Everyone learns differently. Tools that force you into a specific method (like pre-made flashcard decks or fixed study schedules) rarely work as well as tools you can adapt to your needs.

The Uncomfortable Truth About AI and Learning

Here's what I learned after six months: AI can't make you smarter. It can only make your effort more effective.

The tools that improved my performance didn't reduce my study time. They didn't make learning easier. They didn't eliminate the need for hard work. What they did was make my hard work more focused and more challenging. ChatGPT forced me to articulate my confusion clearly. Anki forced me to actively recall information instead of passively reviewing it. Elicit forced me to engage with primary research instead of relying on summaries. These tools made studying harder, not easier. But that's exactly why they worked.

The tools that failed—QuizBot, Notion AI, Otter transcripts—all promised to make studying easier. And they delivered on that promise. Studying felt easier. It felt more efficient. It felt productive. But easy studying doesn't lead to learning. It leads to the illusion of learning.

This is the trap most students fall into with AI tools. We want studying to feel effortless. We want to find the hack that lets us learn without the cognitive strain. So we gravitate toward tools that make us feel productive without making us think hard. And then we're surprised when our exam scores don't improve.

The uncomfortable truth is that learning requires cognitive strain. It requires confusion, struggle, and effort. AI tools can't eliminate that. What they can do is make sure your struggle is productive—that you're wrestling with the right concepts, at the right level of difficulty, in the right sequence. But you still have to do the wrestling.

The Stack That Got Me Through Boards

I'm writing this two weeks after taking Step 1. I won't get my score for another month, but I walked out of that exam feeling more prepared than I've ever felt for any test in my life. Here's the exact stack I used for the final eight weeks of dedicated study:

Daily routine:
- 6:00 AM - 7:00 AM: Anki reviews (500-600 cards)
- 7:00 AM - 12:00 PM: UWorld questions (2 blocks of 40)
- 12:00 PM - 1:00 PM: Lunch + review incorrect answers
- 1:00 PM - 4:00 PM: Content review using First Aid + ChatGPT
- 4:00 PM - 5:00 PM: Anki new cards (50-100)
- 5:00 PM - 6:00 PM: Exercise + dinner
- 6:00 PM - 8:00 PM: Weak areas using Elicit + Consensus for deep dives
- 8:00 PM - 9:00 PM: Review the day's Anki cards one more time

AI tools in the stack:
- AnkiGPT generated cards from First Aid, Pathoma, and Sketchy, but I edited every single card
- ChatGPT explained concepts I got wrong on UWorld, focusing on mechanism and clinical application
- Elicit found research papers on high-yield topics I was struggling with
- Consensus helped me understand clinical context for drugs and diseases

What I didn't use:
- No question banks except UWorld and NBME practice exams
- No summarization tools
- No lecture transcripts
- No auto-generated study schedules

The key was using AI to enhance my active learning, not to replace it. Every tool in my stack required me to think, to struggle, to engage. None of them let me be passive.

Did it work? I'll know in a month. But regardless of my score, I know I learned more in those eight weeks than I did in the entire previous year. And that's because I finally figured out how to use AI tools the right way.

The best AI study tools don't make learning easier. They make learning more effective. And there's a huge difference between those two things. If you're going to use AI for studying, use it to make yourself think harder, not to avoid thinking altogether. Use it to find better questions, not easier answers. Use it to go deeper, not to skim faster. That's the difference between an 11-point improvement and wasted money on tools that don't work.

