Overview
Tuition.in has two separate AI grading flows:
- Test answer grading — triggered automatically when a student submits a test containing subjective questions (short text, long text, or file upload). No creator action needed.
- Assignment submission grading — creator-initiated. You open a submission, optionally run OCR on uploaded files, then click Run AI or Run + Apply to get a score and feedback you can accept or override.
Both flows use the same underlying rubric-based evaluation powered by GPT-4o-mini.
What the AI grades
| Question / submission type | AI involvement | Trigger |
|---|---|---|
| MCQ, multi-select, true/false, fill-blank | None — deterministic auto-grade | Automatic on submit |
| Short text (test) | Scores against rubric, provides feedback | Automatic on submit (background) |
| Long text (test) | Scores against rubric, provides paragraph feedback | Automatic on submit (background) |
| File upload (test) | OCR → grade extracted text against rubric | Automatic on submit (background) |
| Assignment submission | Scores text + OCR against rubric + blueprint | Creator-initiated (manual button click) |
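The trigger column above can be summarised as a small lookup. The type and function names below (`QuestionKind`, `gradingModeFor`) are assumptions made for this sketch, not platform API:

```typescript
// Illustrative sketch only — names are assumptions, not the platform's schema.
type QuestionKind =
  | "mcq" | "multi_select" | "true_false" | "fill_blank"  // objective
  | "short_text" | "long_text" | "file_upload"            // subjective (tests)
  | "assignment_submission";

interface GradingMode {
  ai: boolean;                                  // does the AI grader run?
  trigger: "automatic" | "creator_initiated";
}

function gradingModeFor(kind: QuestionKind): GradingMode {
  switch (kind) {
    case "mcq":
    case "multi_select":
    case "true_false":
    case "fill_blank":
      // Deterministic auto-grade on submit; no AI involved.
      return { ai: false, trigger: "automatic" };
    case "assignment_submission":
      // Runs only when the creator clicks Run AI or Run + Apply.
      return { ai: true, trigger: "creator_initiated" };
    default:
      // Subjective test answers: AI-graded in the background on submit.
      return { ai: true, trigger: "automatic" };
  }
}
```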
OCR pipeline
OCR (Optical Character Recognition) is used when a student submits an image of handwritten work. The pipeline works as follows:
- The student's uploaded file URL is inspected. Only image URLs (JPEG, PNG, WebP) are eligible — PDF files are skipped (see limitations below).
- Each image URL is passed to the OpenAI vision API (`gpt-4o-mini` with the image in the message). The model extracts all readable text from the image.
- Extracted text from all images is concatenated into a single block called `ocrText`.
- The AI grader then evaluates `ocrText` against the rubric, exactly as it would evaluate a typed text answer.
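The steps above can be sketched as follows. `extractTextFromImage` is a placeholder for the vision call — the platform's internals are not public, so this is a minimal sketch of the described behaviour:

```typescript
// Sketch of the OCR pipeline described above; names are illustrative.
const IMAGE_EXTENSIONS = [".jpg", ".jpeg", ".png", ".webp"];

function isEligibleImageUrl(url: string): boolean {
  // Only JPEG, PNG, and WebP are processed; PDFs (and anything else) are skipped.
  const path = url.split("?")[0].toLowerCase();
  return IMAGE_EXTENSIONS.some((ext) => path.endsWith(ext));
}

async function extractTextFromImage(url: string): Promise<string> {
  // Placeholder: the real pipeline sends the image to the gpt-4o-mini
  // vision API and returns the readable text it extracts.
  return `[text extracted from ${url}]`;
}

async function buildOcrText(fileUrls: string[]): Promise<string> {
  const images = fileUrls.filter(isEligibleImageUrl);
  const pieces = await Promise.all(images.map(extractTextFromImage));
  // All extracted text is concatenated into one block (ocrText), which the
  // grader then evaluates exactly like a typed answer.
  return pieces.join("\n\n");
}
```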
For assignments, OCR can be run on a student's uploaded files even before AI evaluation. Click Run OCR in the submission panel to extract text, review it, and then run AI evaluation.
Writing a good rubric
The quality of AI grading is directly proportional to rubric quality. A vague rubric produces unreliable scores; a precise rubric produces scores a human teacher would recognise as fair.
Structure your rubric as a list of scorable criteria:
- State the maximum marks for each criterion explicitly.
- Describe what earns full marks, partial marks, and zero for each criterion.
- For factual questions, list key terms, formulas, or dates that must appear.
- For conceptual questions, describe the reasoning chain that must be present.
Example — 10-mark question on photosynthesis:
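A rubric of that shape might look like the following. This sample is illustrative, written for this guide rather than produced by the platform:

```text
Question (10 marks): Explain the process of photosynthesis.

Criterion 1 — Overall reaction (3 marks)
  3: States reactants (carbon dioxide + water) and products (glucose + oxygen),
     with light energy mentioned.
  1–2: Reaction partially correct, or one reactant/product missing.
  0: No reaction given, or an incorrect one.

Criterion 2 — Role of light and chlorophyll (3 marks)
  3: Explains that chlorophyll absorbs light energy, which drives the reaction.
  1–2: Mentions light or chlorophyll without linking them.
  0: Neither mentioned.

Criterion 3 — Where it occurs (2 marks)
  2: Chloroplasts in leaf cells.
  1: Vague location only ("in the leaf").
  0: Not mentioned.

Criterion 4 — Importance (2 marks)
  2: Explains glucose as the plant's food/energy source and oxygen release.
  1: Only one of the two.
  0: Not mentioned.
```

Note how each criterion states its maximum marks and what earns full, partial, and zero credit — exactly the structure listed above.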
Assignment AI evaluation
Assignments (created within a batch) support AI-assisted grading in addition to manual grading. The workflow:
- Create an assignment with a rubric and/or a blueprint (answer key). See below.
- A student submits their answer (text, file URLs, or both).
- Open the submission from the assignment management page → click the student's submission row.
- Optionally run OCR if the student uploaded images.
- Click Run AI to see the AI's score + feedback before applying.
- Click Run + Apply to run AI and immediately apply the score.
- Review, override if needed, and click Save grade.
Blueprint / answer key
A blueprint is an optional reference document — a PDF or image of the model answer or correction guide — that the AI uses when evaluating submissions. This is particularly useful for:
- Maths or physics problem sets where the step-by-step solution matters.
- Language assignments with a sample essay or expected content structure.
- Standardised tests where a marking scheme is published.
To add a blueprint: when editing an assignment, paste the URL of a PDF or image in the Blueprint / answer key URL field. The platform automatically runs OCR on the blueprint (if it's an image) or stores the URL (if it's a PDF) when the first submission is graded. The extracted blueprint text is stored and re-used for all subsequent submissions — you don't pay for blueprint OCR on every submission.
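The "extract once, re-use" behaviour can be sketched as a simple cache. Field and function names here are assumptions for illustration, not the platform's schema:

```typescript
// Illustrative sketch of blueprint OCR caching; names are assumptions.
interface Assignment {
  blueprintUrl?: string;
  blueprintText?: string; // extracted once, re-used for later submissions
}

let ocrCalls = 0; // for illustration: counts how often OCR actually runs

async function ocrBlueprint(url: string): Promise<string> {
  ocrCalls++;
  return `[text of ${url}]`; // placeholder for the real vision call
}

async function blueprintTextFor(a: Assignment): Promise<string | undefined> {
  if (!a.blueprintUrl) return undefined;
  if (a.blueprintText === undefined) {
    // First graded submission: extract and store the blueprint text once.
    a.blueprintText = await ocrBlueprint(a.blueprintUrl);
  }
  return a.blueprintText; // every later submission reuses the stored text
}
```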
Running AI evaluation
In the grading panel for a submission:
- Run AI — evaluates the submission and displays the AI's suggested score and feedback. Does not update the stored grade. Use this when you want to inspect the AI output before committing.
- Run + Apply — evaluates and immediately saves the score + feedback as the student's grade. The student is notified. You can still override afterward.
Both options show a per-criterion rubric breakdown (if a rubric was set) in an expandable table: each row lists the criterion, the awarded marks, the maximum, and a short AI comment.
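The breakdown table maps naturally onto a per-criterion result shape. These interfaces are assumptions for illustration, not the platform's actual schema:

```typescript
// Assumed shapes for the AI evaluation result — illustrative only.
interface CriterionResult {
  criterion: string; // rubric criterion text
  awarded: number;   // marks the AI awarded
  max: number;       // maximum marks for this criterion
  comment: string;   // short AI comment shown in the table row
}

interface AiEvaluation {
  score: number;     // suggested total (Run AI shows it; Run + Apply saves it)
  feedback: string;
  breakdown: CriterionResult[];
}

function totalFromBreakdown(rows: CriterionResult[]): { awarded: number; max: number } {
  // The expandable table sums to the suggested score out of the rubric maximum.
  return rows.reduce(
    (acc, r) => ({ awarded: acc.awarded + r.awarded, max: acc.max + r.max }),
    { awarded: 0, max: 0 }
  );
}
```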
Reviewing & overriding AI grades
After AI evaluation (whether from a test or assignment), the final grade panel at the bottom of the grading view allows you to:
- Edit the Score field — type any number from 0 to max. The AI-suggested value is pre-filled.
- Edit the Feedback field — pre-filled with AI feedback. Personalise it or replace it entirely.
- Click Save grade to commit.
When you manually override an AI grade, the submission is marked with `gradedBy: MANUAL`. When the AI grade is applied as-is, it's marked `gradedBy: AI`. This is shown as a badge on the submission.
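One way the badge could be derived is by comparing the applied grade to the AI suggestion. The `gradedBy` values come from this doc; the comparison logic below is an assumption, not the platform's implementation:

```typescript
// Illustrative only: deriving the gradedBy badge. The exact rule the
// platform uses to detect an override is assumed here.
type GradedBy = "AI" | "MANUAL";

interface Grade {
  score: number;
  feedback: string;
}

function gradedByFor(aiSuggestion: Grade, applied: Grade): GradedBy {
  // Applied as-is → AI; any edit to score or feedback → MANUAL.
  const unchanged =
    aiSuggestion.score === applied.score &&
    aiSuggestion.feedback === applied.feedback;
  return unchanged ? "AI" : "MANUAL";
}
```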
Confidence scores
The AI returns a confidence score (0.0–1.0, shown as a percentage) with every subjective grade. This reflects how certain the model is about its assessment.
| Confidence | Interpretation | Recommended action |
|---|---|---|
| 80–100% | High. Clear match or mismatch with rubric. | Safe to apply directly in most cases. |
| 50–79% | Moderate. Answer is partially correct or ambiguous. | Review the rubric breakdown before applying. |
| < 50% | Low. Unclear answer, unusual response, or rubric ambiguity. | Grade manually — don't rely on AI score. |
Low confidence can indicate that the student wrote in a language other than the rubric's, that the answer describes a diagram and doesn't map onto text-based rubric criteria, or that the rubric is underspecified.
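The bands in the table above reduce to two thresholds. Placing the cutoffs at exactly 0.50 and 0.80 follows the table; the function name is illustrative:

```typescript
// Confidence bands from the table above, with the recommended action.
type Action = "apply" | "review_breakdown" | "grade_manually";

function recommendedAction(confidence: number): Action {
  if (confidence >= 0.8) return "apply";            // high: safe to apply directly
  if (confidence >= 0.5) return "review_breakdown"; // moderate: check rubric rows first
  return "grade_manually";                          // low: don't rely on the AI score
}
```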
Student notifications
Students are notified when:
- A test is submitted — they receive an immediate notification with their objective score and a note that subjective questions are being graded.
- AI grading completes for subjective test answers — the score is updated and a follow-up notification is sent.
- An assignment is manually graded or AI grade is applied — the student receives a notification with their score and feedback summary.
Students can see the full rubric breakdown, per-criterion feedback, and AI confidence score in their test results view.
Limitations & edge cases
- PDF OCR not supported. Only JPEG, PNG, and WebP images are processed. Ask students to photograph or screenshot their work.
- Non-English answers. The model handles Hindi, Tamil, and other major Indian languages reasonably well, but accuracy is lower than English. If your course is in a regional language, always review AI grades before applying.
- Mathematical notation. Typed LaTeX or equations are understood by the model. Handwritten maths in images depends on OCR quality — messy handwriting may confuse the extraction step.
- Rate limits. AI grading calls are subject to OpenAI rate limits. During a large batch submission (e.g., 100 students all submitting within 60 seconds), a few grading jobs may be delayed by 30–60 seconds. Students see a "grading in progress" state until their job completes.
- Hallucinated scores. In rare cases the model may award scores that don't match the rubric. The confidence score is your early warning — treat anything below 50% as a signal to check manually.