
What are the best AI tools for writing research papers in 2024?

6 Posts
7 Users
0 Reactions
148 Views
0
Topic starter

What are you guys actually using for research papers this year that doesn't just hallucinate half the citations? I've been in academia for about six years now, so I know my way around a Zotero library and basic LaTeX formatting, but honestly the sheer amount of new AI stuff hitting the market since January is making my head spin. I'm currently working on a pretty heavy literature review for a paper on synaptic plasticity, and it's getting exhausting trying to keep track of what tool does what. I tried using standard GPT-4 for a while to bridge the gap between my raw notes and a decent first draft, but the hallucination problem is still so bad with specific citations that I end up spending more time fact-checking than if I'd just written it from scratch.

I'm looking for something more specialized for 2024. I've heard people mention things like Jenni AI, Paperpal, or even Scite for the verification side of things, but I don't want to waste money on five different subscriptions that all do the same thing. My budget is around 25 bucks a month tops since I'm on a grad student stipend, and I really need something that can handle complex technical jargon without stripping the nuance out of the science.

Does anything actually exist yet that can take a folder of my own PDFs and help me synthesize the info without making stuff up? I tried a tool last week (can't remember the name now) that completely misread a p-value in one of my source papers and nearly ruined a whole section of my discussion. Is there a consensus on what's actually reliable for high-level academic work right now, or are we still in the wild-west phase where everything is half-baked? I'd love to hear what your workflows look like these days, especially if you're dealing with STEM stuff where the details actually matter.


6 Answers
12

I've been sticking to a pretty tight budget lately. Honestly, I'm super satisfied with this simple setup:

  • Elicit Plus Research Assistant for extracting data from my own PDFs without the hallucinations.
  • Consensus Premium Search Engine for finding evidence-based answers.

This combo stays well under your budget and handles technical nuance way better than GPT-4. Works well for me!


10

I switched my lab workflow to the SciSpace Premium Academic Plan for meta-analyses recently. I don't trust general models for this.

  • Pricing is $12 monthly.
  • Superior OCR and parsing.
  • Reliable vector extraction.


3

Interested in this too


3

Disagreeing slightly on the subscription front. Over the years, I've found that Google NotebookLM AI Research Assistant handles a folder of PDFs better than paid wrappers because it anchors responses strictly to your text. It won't just invent synaptic mechanisms out of thin air.

  • Grounded source analysis
  • Page-level citations
  • Free for now

Tbh, I still check the math manually though; I don't trust AI with p-values yet...


3

Look, if you want accuracy in a field as dense as synaptic plasticity, stop using general-purpose models for the heavy lifting. I have found a two-pronged setup works best to stay under that 25 dollar mark while ensuring technical precision.

  • Scite.ai Assistant Individual Plan: non-negotiable for verification. It uses Smart Citations to show whether a claim is supported or contradicted by subsequent papers, which is way more reliable than just finding a PDF.
  • Claude 3.5 Sonnet via Anthropic Pro: for drafting, I've moved away from GPT-4. Claude handles complex scientific jargon and logical flow much better, without the weird hallucinations or assistant tone.

Honestly, Scite is the only thing I trust for citations because it checks the context in which a paper was cited. You can upload your own library there too. It's built for the exact verification problem you're hitting with standard LLMs right now.


3

Building on the earlier suggestion about keeping responses grounded strictly to your own files, I've been really satisfied with a custom local indexing setup lately. No complaints here! It's honestly a total game changer for technical work because you can force the system to show you the exact page and line it pulled from every single time. Going the DIY route has some big perks:

  • No monthly fees eating into your stipend.
  • You control the logic, which is key for not missing those tiny p-values tucked away in results sections.
  • Total privacy for your unpublished notes and raw data.

It works well if you like knowing exactly how the data is being handled, though it admittedly takes a bit of tinkering to get the settings right. I wanted to ask, though: are you looking for a tool that helps mostly with the initial connecting-the-dots phase of the lit review, or do you need it precise enough for the actual drafting and citations? Also, how heavy are your source PDFs on things like complex tables and charts versus straight text?
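In case it helps to see what I mean by page-level grounding, here's a minimal sketch of the indexing idea in plain Python. Everything specific is hypothetical (the document name and page text are stand-ins); in practice you'd feed it text extracted from your PDFs with a library like pypdf, and a real setup would use embeddings rather than exact keyword matching:

```python
import re
from collections import defaultdict

def build_index(docs):
    """Build an inverted index mapping each term to the (doc, page)
    locations where it appears. `docs` maps a document name to a list
    of page strings (one string per page)."""
    index = defaultdict(set)
    for name, pages in docs.items():
        for page_no, text in enumerate(pages, start=1):
            # Keep dots and hyphens inside tokens so "0.003" and
            # "long-term" survive as single terms.
            for term in re.findall(r"[a-z0-9.\-]+", text.lower()):
                index[term].add((name, page_no))
    return index

def lookup(index, query):
    """Return the (doc, page) hits containing every query term, so a
    claim can always be traced back to an exact page."""
    hits = [index.get(t, set()) for t in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []

# Hypothetical pages standing in for extracted PDF text.
docs = {
    "ltp_review.pdf": [
        "Long-term potentiation in hippocampal slices",
        "Effect size was significant, p = 0.003",
    ],
}
index = build_index(docs)
print(lookup(index, "p 0.003"))  # -> [('ltp_review.pdf', 2)]
```

The point is that every answer comes back as a (document, page) pair you can open and check yourself, which is exactly the guarantee the paid tools keep failing to make with p-values.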

