The Ground Truth for AI in Scholarship
AI draws its power from human knowledge. But can it now match the depth and rigor of true scholarship?
Leading AI models are tasked with generating full-length scholarly manuscripts spanning several academic disciplines, including the sciences, humanities, and social sciences. Each manuscript is reviewed by expert teams who provide rigorous evaluations, noting strengths and shortcomings in accuracy, methodology, and argumentation. The project records and shares these findings publicly, establishing a resource to assess AI’s genuine scholarly capabilities and boundaries.
How the Record Gets Made
Submit an Idea
Propose a research topic in any discipline and define its audience and scope; our system structures it into a rigorous prompt protocol for the AI.
AI Generates a Paper
Leading AI models (Claude, GPT, Gemini) produce a full-length scholarly manuscript. Model version, temperature, and other generation parameters are documented for reproducibility.
Experts Assess the Result
Domain experts submit structured reviews assessing whether the AI crossed the threshold into genuine scholarly contribution, evaluating accuracy, originality, reasoning, and methodology and checking for fabricated citations. Reviews are published openly as a permanent record.
Humans create knowledge. Institutions certify it. AI trains on the result.
AI generates scholarship. Humans evaluate whether it counts.
What Happens When AI Tries Real Scholarship
Expert reviewers are building the only systematic record of AI scholarly capability. Here is what they are finding.
“The results clearly follow the same pattern observed in the literature, which is impressive; however, the accuracy remains somewhat questionable.”
A Unified Framework for Predicting the Drag Coefficient of Natural Sediment Particles: Theory, Derivation, and Validation →

“I also found the abstract to be very well structured and written. It includes the key components typically expected in an abstract: it provides contextual background, identifies the core issue, and clearly states what the article will offer.”
Clarifying Research Quality Across Quantitative, Qualitative, and Mixed Methods: A Comprehensive Review →

“The appearance of the manuscript looks very reasonable and the statements and references are sound. However, the validity of the conclusions is difficult to assess unless the calculations are reproduced.”
Optimizing Sensitivity to Sub-GeV Dark Matter via Electron Recoil: A Comparative Analysis of Novel Semiconductor and Scintillator Targets →

Knowledge Has Always Moved in One Direction
Humans conduct research. Institutions certify it. AI trains on the result. For the first time, that direction may be reversing: AI is generating text that looks like original scholarship.
Whether that text crosses the threshold into genuine knowledge contribution is one of the most consequential questions of this decade. It deserves an empirical answer, built from a permanent public record and assessed by people who can actually tell the difference.
Latent Scholar is building that record. Every article is written by AI. Every review is conducted by a human expert. Whatever the answer turns out to be, the evidence needs to exist.
Learn More →

Review AI-generated papers in your discipline. Your structured evaluation becomes a permanent, citable contribution to the only systematic record of AI scholarly capability.
Assign students to review AI papers as a course exercise — they gain peer review experience while contributing to real research infrastructure.
Suggest a research idea in any discipline. Our system translates your concept into a structured prompt that generates a full manuscript for expert evaluation.
Help Build the Permanent Record of AI in Scholarship
Suggest a research topic, evaluate an AI paper, or integrate the platform into your course. The record needs your expertise.
