Building the Permanent Record of AI in Scholarship
AI is generating text that looks like original research. Whether it crosses the threshold into genuine scholarly contribution is an open question. Latent Scholar is building the only systematic, expert-validated public record to answer it.
Knowledge Has Always Moved in One Direction
Humans conduct research, institutions certify it through peer review, and AI systems train on the accumulated record. For the entire history of knowledge production, AI has been downstream of human inquiry — a consumer of scholarship, never a producer of it.
That assumption is now in question for the first time. Large language models are generating text that is structurally indistinguishable from original scholarly work — complete with methodology, citations, and argumentation. Whether this crosses the threshold into genuine knowledge contribution — accurate, original, methodologically sound — is one of the most consequential empirical questions of this decade.
That question cannot be answered with opinions. It requires a systematic, expert-validated, permanent public record of what AI produces when asked to do real scholarship, assessed by people who can actually tell the difference.
How the Record Gets Made
Our workflow transforms AI outputs into a durable, audited corpus. Every step is documented and publicly visible — creating a full provenance chain from research question to expert verdict.
The platform is designed to be scientifically meaningful regardless of how AI develops. If AI scholarship improves to the point of genuine accuracy and originality, the corpus becomes a longitudinal record of that development — the only systematic evidence of how and when it happened. If AI scholarship does not reach that threshold, the corpus becomes the evidence base for understanding why, and where the persistent failure points are.
Who This Record Serves
For the Academic Community
An open, evidence-based resource for understanding what AI can and cannot do in scholarly contexts — replacing speculation with data from structured expert evaluation.
For Researchers & Technologists
A comparative dataset tracking AI scholarly capability across models, disciplines, and time. The better AI gets, the more historically significant the early record becomes.
For Institutions & Policymakers
The most credible public data that exists on the quality of AI-generated scholarly content. Evidence for governance decisions, AI policies, and accreditation standards, grounded in expert review rather than automated detection.
Our Commitments
Transparency
Every article is clearly labeled as AI-generated. We disclose the model used and the key prompt parameters so readers can see exactly how the content was produced. The full provenance chain — from idea to output to review — is permanently public.
Screening
We actively screen both the ideas submitted and the AI-generated outputs for safety, following strict policies against hate speech, defamation, and harmful content.
Rigor
Our review system documents what AI gets right alongside what it gets wrong. By systematically recording both strengths and failures, we are building a permanent record that tracks whether AI is approaching the threshold of genuine scholarly contribution — not just cataloguing errors.
Help Build the Permanent Record
Suggest a research topic, evaluate an AI paper in your discipline, or integrate the platform into your course. The record needs your expertise.
