AI-Generated Research · Expert-Reviewed

The Ground Truth
for AI in Scholarship

AI draws its power from human knowledge. But can it now match the depth and rigor of true scholarship?

Leading AI models are tasked with generating full-length scholarly manuscripts spanning several academic disciplines, including the sciences, humanities, and social sciences. Each manuscript is reviewed by expert teams who provide rigorous evaluations, noting strengths and shortcomings in accuracy, methodology, and argumentation. The project records and shares these findings publicly, establishing a resource to assess AI’s genuine scholarly capabilities and boundaries.

57 Articles Generated · 6 Expert Reviews · 30+ Disciplines · 3 AI Models
Process

How the Record Gets Made

01

Submit an Idea

Propose a research topic in any discipline and define its audience and scope; our system structures it into a rigorous, reproducible AI prompt.

02

AI Generates a Paper

Leading AI models — Claude, GPT, Gemini — produce a full-length scholarly manuscript. Model version, temperature, and parameters are all documented for reproducibility.

03

Experts Assess the Result

Domain experts submit structured reviews assessing whether AI crossed the threshold into genuine scholarly contribution — evaluating accuracy, originality, fabricated citations, reasoning, and methodology. Reviews are published openly as a permanent record.

The Question
Then

Humans create knowledge. Institutions certify it. AI trains on the result.

Now

AI generates scholarship. Humans evaluate whether it counts.

From the Corpus

What Happens When AI Tries Real Scholarship

Expert reviewers are building the only systematic record of AI scholarly capability. Here is what they are finding.

Citation Accuracy Varies Widely: AI produces impressively structured references, but verification reveals fabricated sources alongside legitimate ones.

Structure Consistently Strong: Abstracts, methodology sections, and logical flow rate well across models; the scaffolding of scholarship is convincing.

Reproducibility Remains Unverified: Methods and calculations look plausible on paper. Whether they survive implementation is still an open question.
Civil Engineering

“The results clearly follow the same pattern observed in the literature, which is impressive; however, the accuracy remains somewhat questionable.”

Expert Reviewer · Amin Riazi · Claude
📄 A Unified Framework for Predicting the Drag Coefficient of Natural Sediment Particles: Theory, Derivation, and Validation
Social Sciences

“I also found the abstract to be very well structured and written. It includes the key components typically expected in an abstract: it provides contextual background, identifies the core issue, and clearly states what the article will offer.”

Expert Reviewer · Anonymous · GPT-5.1
📄 Clarifying Research Quality Across Quantitative, Qualitative, and Mixed Methods: A Comprehensive Review
Fundamental Sciences

“The appearance of the manuscript looks very reasonable and the statements and references are sound. However, the validity of the conclusions is difficult to assess unless the calculations are reproduced.”

Expert Reviewer · Anonymous · Gemini
📄 Optimizing Sensitivity to Sub-GeV Dark Matter via Electron Recoil: A Comparative Analysis of Novel Semiconductor and Scintillator Targets
Why This Matters

Knowledge Has Always Moved in One Direction

Humans conduct research. Institutions certify it. AI trains on the result. For the first time, that flow may be inverting: AI is generating text that looks like original scholarship.

Whether it crosses the threshold into genuine knowledge contribution is one of the most consequential questions of this decade. It deserves an empirical answer, built from a permanent public record and assessed by people who can actually tell the difference.

Latent Scholar is building that record. Every article is written by AI; every review is conducted by a human expert. Whatever the answer turns out to be, the evidence needs to exist.

Learn More
For Researchers

Review AI-generated papers in your discipline. Your structured evaluation becomes a permanent, citable contribution to the only systematic record of AI scholarly capability.

For Educators

Assign students to review AI papers as a course exercise — they gain peer review experience while contributing to real research infrastructure.

For Everyone

Suggest a research idea in any discipline. Our system translates your concept into a structured prompt that generates a full manuscript for expert evaluation.

Your Evaluation Is the Evidence

You are not just checking an AI’s work. You are contributing to the only systematic public record of whether AI can do scholarship — and your evaluation is a permanent part of that record.

Review an Article
Get Involved

Help Build the Permanent Record of AI in Scholarship

Suggest a research topic, evaluate an AI paper, or integrate the platform into your course. The record needs your expertise.