Diotima
Authorship and AI usage analysis for submitted academic work
What it resolves for the institution
Existing AI detection tools produce scores. A score is not evidence — it cannot be placed in a student file, presented to a jury, or relied upon in a dispute. Authorship uncertainty ends up managed individually, inconsistently, and often silently.
Diotima produces an argued report, not a score. It operates across three levels of analysis applied simultaneously:
- Semantic and linguistic analysis — vocabulary patterns, syntactic choices, stylistic consistency across sections, and textual rhythm. Like the scansion of a poem, Diotima measures the mathematical regularity of the text — sentence length variation, syntactic density, structural asymmetry. Human writing is irregular and hesitant. AI-generated content is statistically smooth. That difference is measurable.
- Cognitive depth — whether the reasoning reflects genuine intellectual engagement, as opposed to the structural fluency typical of AI-generated content.
- Argumentative depth — whether the argument is genuinely constructed and defended, or assembled from plausible components that never cohere into an original position.
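Diotima's internal features are not published; purely as an illustration of the kind of regularity signal the first level describes, the sketch below measures sentence-length variation. The function name, the naive sentence splitter, and the thresholds implied are assumptions for this example, not the product's actual method. A low coefficient of variation means statistically smooth, uniform sentence lengths; a high one means irregular, bursty lengths.

```python
import re
from statistics import mean, stdev

def sentence_length_stats(text):
    """Measure the regularity of sentence lengths in a text.

    Returns (mean_length, coefficient_of_variation). A coefficient
    near zero indicates uniform, "smooth" sentence lengths; larger
    values indicate irregular, bursty writing.
    """
    # Naive splitter on ., !, ? -- a real pipeline would use a proper
    # sentence tokenizer. This is a hypothetical, simplified stand-in.
    sentences = [s for s in re.split(r'[.!?]+\s*', text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return (lengths[0] if lengths else 0, 0.0)
    m = mean(lengths)
    return m, stdev(lengths) / m

# Uniform sentence lengths -> coefficient of variation of 0.0
smooth = ("The cat sat on the mat. The dog lay on the rug. "
          "The bird perched on the branch.")
# Alternating very short and very long sentences -> high variation
bursty = ("Stop. The committee deliberated for three hours before "
          "reaching any conclusion at all. Why? No one knows.")
```

On these toy inputs the smooth text scores a coefficient of 0.0 and the bursty one well above 1.0; a production system would of course combine many such signals rather than rely on one.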
No single level is sufficient. The combination is what makes the report arguable.
What it produces
A structured authorship report — located in the text, explained, and qualified. Not a verdict. Evidence the institution can place in a file, present before a jury, and defend in a formal dispute.
Illustration: Authenticity analysis applied to a student paper.
What it costs not to have it
Without reliable authorship analysis, decisions rest on assumptions the institution cannot defend. Reviews repeat without resolution. And the institution certifies grades on work whose authorship it has never formally established.