Aims
- Evaluate how effectively large language models (LLMs) can simulate, reproduce, or extend scientific reasoning across disciplines.
- Provide a benchmark venue for comparing AI-generated studies with conventional scholarship.
- Enable transparent assessment of model performance, accuracy, and limitations in research contexts.
- Maintain an openly accessible, growing corpus of prompts, generated articles, and peer reviews to support research on authorship, evaluation, and academic integrity.
Scope
We accept prompt-based submissions from all scholarly fields, including (but not limited to):
- Engineering, Natural, and Physical Sciences
- Social Sciences, Humanities, and the Arts
- Life, Environmental, and Health/Medical Sciences
- Computational, Data, and AI Research
Each accepted submission comprises:
- the contributor’s prompt (defining the question/task and constraints);
- an article generated by the editorial team using documented, reproducible protocols;
- open peer reviews by qualified human experts (the prompt author may serve as a reviewer, with disclosure).
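
For illustration, a minimal sketch of how a published submission record might be structured, assuming a hypothetical TypeScript interface; the field names (`prompt`, `generationProtocol`, `reviews`, etc.) are illustrative, not a prescribed schema:

```typescript
// Hypothetical sketch of a published submission record.
// Field names and types are illustrative assumptions, not a fixed schema.

interface Review {
  reviewer: string;         // qualified human expert
  isPromptAuthor: boolean;  // disclosed when the prompt author also reviews
  text: string;             // open (publicly readable) review
}

interface Submission {
  prompt: string;             // contributor's prompt: question/task and constraints
  generationProtocol: string; // documented, reproducible generation protocol
  article: string;            // article generated by the editorial team
  reviews: Review[];          // open peer reviews
}
```

Keeping the prompt, generation protocol, article, and reviews together in one record is what makes each entry independently auditable and reproducible.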