AI Grading & OCR

Understand how the platform uses AI to evaluate written answers and handwritten work — and how to review, override, and improve AI grades.

Tutors & Institutions · 11 min read
Who this is for
Creators (tutors or institutions) who use short-text, long-text, or file-upload questions in tests, or who create assignments and want AI assistance when grading submissions.

Overview

Tuition.in has two separate AI grading flows:

  1. Test answer grading — triggered automatically when a student submits a test containing subjective questions (short text, long text, or file upload). No creator action needed.
  2. Assignment submission grading — creator-initiated. You open a submission, optionally run OCR on uploaded files, then click Run AI or Run + Apply to get a score and feedback you can accept or override.

Both flows use the same underlying rubric-based evaluation powered by GPT-4o-mini.

What the AI grades

Question / submission type | AI involvement | Trigger
MCQ, multi-select, true/false, fill-blank | None — deterministic auto-grade | Automatic on submit
Short text (test) | Scores against rubric, provides feedback | Automatic on submit (background)
Long text (test) | Scores against rubric, provides paragraph feedback | Automatic on submit (background)
File upload (test) | OCR → grade extracted text against rubric | Automatic on submit (background)
Assignment submission | Scores text + OCR against rubric + blueprint | Creator-initiated (manual button click)

OCR pipeline

OCR (Optical Character Recognition) is used when a student submits an image of handwritten work. The pipeline works as follows:

  1. The student's uploaded file URL is inspected. Only image URLs (JPEG, PNG, WebP) are eligible — PDF files are skipped (see limitations below).
  2. Each image URL is passed to the OpenAI vision API (gpt-4o-mini with the image in the message). The model extracts all readable text from the image.
  3. Extracted text from all images is concatenated into a single block called ocrText.
  4. The AI grader then evaluates ocrText against the rubric, exactly as it would evaluate a typed text answer.
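Steps 1 and 3 of the pipeline can be sketched as two small helpers. This is an illustrative sketch, not the platform's actual code: the function names and the extension list are assumptions, and the vision-API call in step 2 is omitted.

```python
from urllib.parse import urlparse

# Assumed eligible formats per step 1; PDFs are skipped.
ELIGIBLE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".webp")

def eligible_image_urls(file_urls):
    """Step 1: keep only URLs the OCR step will process."""
    eligible = []
    for url in file_urls:
        path = urlparse(url).path.lower()  # ignore query strings like ?v=1
        if path.endswith(ELIGIBLE_EXTENSIONS):
            eligible.append(url)
    return eligible

def concatenate_ocr_text(extracted_pages):
    """Step 3: join per-image extractions into a single ocrText block."""
    return "\n\n".join(text.strip() for text in extracted_pages if text.strip())
```

Each eligible URL would then be sent to the vision model (step 2), and the concatenated result graded against the rubric (step 4).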

For assignments, OCR can be run on a student's uploaded files even before AI evaluation. Click Run OCR in the submission panel to extract text, review it, and then run AI evaluation.

Check OCR output before evaluating
If a student's handwriting is dense or the image quality is low, the OCR text may contain errors. Always review the extracted text (click "Show extracted text") before clicking Run AI. If the extraction is poor, grade the submission manually.
Only images are supported
PDF files are not processed by OCR. If a student submits a scanned PDF, ask them to re-submit as a JPEG or PNG (photograph the pages). For assignments, you can also ask them to type their text answer in the text field alongside the file.

Writing a good rubric

The quality of AI grading is directly proportional to rubric quality. A vague rubric produces unreliable scores; a precise rubric produces scores a human teacher would recognise as fair.

Structure your rubric as a list of scorable criteria:

  • State the maximum marks for each criterion explicitly.
  • Describe what earns full marks, partial marks, and zero for each criterion.
  • For factual questions, list key terms, formulas, or dates that must appear.
  • For conceptual questions, describe the reasoning chain that must be present.

Example — 10-mark question on photosynthesis:

Award 3 marks for correctly explaining the light-dependent reactions:
  - 3 marks: chlorophyll absorbs light, water splits (photolysis), ATP and NADPH produced
  - 2 marks: two of the above points
  - 1 mark: one correct point
  - 0: no correct points
Award 3 marks for the Calvin cycle (light-independent reactions):
  - 3 marks: CO₂ fixation, RuBisCO, formation of G3P, regeneration of RuBP
  - 1-2 marks: partially correct description
  - 0: incorrect or absent
Award 2 marks for the overall equation: 6CO₂ + 6H₂O + light → C₆H₁₂O₆ + 6O₂
Award 2 marks for location: thylakoids (light reactions), stroma (Calvin cycle)
AI rubric breakdown
When a rubric is set, the AI returns a per-criterion breakdown alongside the total score. Students see this breakdown in their results, with comments per criterion. This is far more educational than a bare number.
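To see how a per-criterion breakdown rolls up into the total the student sees, here is a minimal sketch (the row shape and function name are assumptions, not the platform's internals), using the four criteria of the photosynthesis rubric above:

```python
def total_from_breakdown(breakdown, max_marks):
    """Sum the awarded marks across criteria, clamped to the question maximum."""
    total = sum(row["awarded"] for row in breakdown)
    return max(0, min(total, max_marks))

# Hypothetical AI output for the 10-mark photosynthesis question:
breakdown = [
    {"criterion": "Light-dependent reactions", "awarded": 3, "max": 3},
    {"criterion": "Calvin cycle",              "awarded": 2, "max": 3},
    {"criterion": "Overall equation",          "awarded": 2, "max": 2},
    {"criterion": "Location",                  "awarded": 1, "max": 2},
]
```

With this breakdown, `total_from_breakdown(breakdown, 10)` yields 8 out of 10, and each row's comment is shown to the student alongside the awarded marks.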

Assignment AI evaluation

Assignments (created within a batch) support AI-assisted grading in addition to manual grading. The workflow:

  1. Create an assignment with a rubric and/or a blueprint (answer key). See below.
  2. A student submits their answer (text, file URLs, or both).
  3. Open the submission from the assignment management page → click the student's submission row.
  4. Optionally run OCR if the student uploaded images.
  5. Click Run AI to see the AI's score + feedback before applying.
  6. Click Run + Apply to run AI and immediately apply the score.
  7. Review, override if needed, and click Save grade.

Blueprint / answer key

A blueprint is an optional reference document — a PDF or image of the model answer or correction guide — that the AI uses when evaluating submissions. This is particularly useful for:

  • Maths or physics problem sets where the step-by-step solution matters.
  • Language assignments with a sample essay or expected content structure.
  • Standardised tests where a marking scheme is published.

To add a blueprint: when editing an assignment, paste the URL of a PDF or image in the Blueprint / answer key URL field. The platform automatically runs OCR on the blueprint (if it's an image) or stores the URL (if it's a PDF) when the first submission is graded. The extracted blueprint text is stored and re-used for all subsequent submissions — you don't pay for blueprint OCR on every submission.
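The extract-once, re-use-everywhere behaviour is a standard caching pattern. A minimal sketch, assuming a cache keyed by blueprint URL (the names and cache shape are illustrative, not the platform's code):

```python
_blueprint_cache = {}  # hypothetical cache: blueprint URL -> extracted text

def blueprint_text(url, run_ocr):
    """Extract the blueprint once, then re-use it for every later submission."""
    if url not in _blueprint_cache:
        _blueprint_cache[url] = run_ocr(url)  # the paid OCR call happens only here
    return _blueprint_cache[url]
```

Grading the second, third, and hundredth submission hits the cache, which is why blueprint OCR is paid for only once per assignment.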

Rubric vs blueprint
A rubric describes how to score (criteria + weightings). A blueprint shows what a correct answer looks like. Both can be used together: the rubric guides scoring; the blueprint gives the AI a reference to compare against.

Running AI evaluation

In the grading panel for a submission:

  • Run AI — evaluates the submission and displays the AI's suggested score and feedback. Does not update the stored grade. Use this when you want to inspect the AI output before committing.
  • Run + Apply — evaluates and immediately saves the score + feedback as the student's grade. The student is notified. You can still override afterward.

Both options show a per-criterion rubric breakdown (if a rubric was set) in an expandable table: each row lists the criterion, the awarded marks, the maximum, and a short AI comment.
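The difference between the two buttons is only whether the result is persisted. A sketch of that distinction (the function and field names are assumptions for illustration):

```python
def evaluate_submission(submission, evaluate, apply=False):
    """Sketch of the two buttons: evaluation always runs; persisting is optional.

    'Run AI'      -> evaluate_submission(sub, evaluate)              (preview only)
    'Run + Apply' -> evaluate_submission(sub, evaluate, apply=True)  (saves grade)
    """
    result = evaluate(submission)  # e.g. {'score': 7, 'feedback': '...', 'breakdown': [...]}
    if apply:
        submission["score"] = result["score"]       # grade becomes visible to student
        submission["feedback"] = result["feedback"]
    return result
```

Either way the returned result carries the rubric breakdown, so you can inspect it before (or after) committing.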

Reviewing & overriding AI grades

After AI evaluation (whether from a test or assignment), the final grade panel at the bottom of the grading view allows you to:

  • Edit the Score field — type any number from 0 to max. The AI-suggested value is pre-filled.
  • Edit the Feedback field — pre-filled with AI feedback. Personalise it or replace it entirely.
  • Click Save grade to commit.

When you manually override an AI grade, the submission is marked with gradedBy: MANUAL. When the AI grade is applied as-is, it's marked gradedBy: AI. This is shown as a badge on the submission.
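One plausible reading of that badge rule, as a sketch (the exact condition the platform checks is not documented here; treating any edit to score or feedback as an override is an assumption):

```python
def graded_by(ai_score, ai_feedback, saved_score, saved_feedback):
    """Badge sketch: any edit to the AI's score or feedback counts as an override."""
    unchanged = saved_score == ai_score and saved_feedback == ai_feedback
    return "AI" if unchanged else "MANUAL"
```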

Always review before bulk-applying
The Run AI step (without Apply) is your review checkpoint. Use it for the first few submissions of a new assignment to calibrate your trust in the AI's scoring for that rubric. Once you're satisfied, Run + Apply is safe.

Confidence scores

The AI returns a confidence score (0.0–1.0, shown as a percentage) with every subjective grade. This reflects how certain the model is about its assessment.

Confidence | Interpretation | Recommended action
80–100% | High. Clear match or mismatch with rubric. | Safe to apply directly in most cases.
50–79% | Moderate. Answer is partially correct or ambiguous. | Review the rubric breakdown before applying.
< 50% | Low. Unclear answer, unusual response, or rubric ambiguity. | Grade manually — don't rely on AI score.

Low confidence can indicate that the student answered in a language different from the rubric's, that the answer describes a diagram and doesn't map onto text rubric criteria, or that the rubric is underspecified.
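If you script your own triage (for example, flagging submissions for manual review), the bands above translate directly into a threshold check. A minimal sketch; the thresholds come from the table, the function name is illustrative:

```python
def recommended_action(confidence):
    """Map AI confidence (0.0-1.0) to the review guidance in the table above."""
    if confidence >= 0.80:
        return "apply"
    if confidence >= 0.50:
        return "review rubric breakdown"
    return "grade manually"
```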

Student notifications

Students are notified when:

  • A test is submitted — they receive an immediate notification with their objective score and a note that subjective questions are being graded.
  • AI grading completes for subjective test answers — the score is updated and a follow-up notification is sent.
  • An assignment is manually graded or AI grade is applied — the student receives a notification with their score and feedback summary.

Students can see the full rubric breakdown, per-criterion feedback, and AI confidence score in their test results view.

Limitations & edge cases

  • PDF OCR not supported. Only JPEG, PNG, and WebP images are processed. Ask students to photograph or screenshot their work.
  • Non-English answers. The model handles Hindi, Tamil, and other major Indian languages reasonably well, but accuracy is lower than English. If your course is in a regional language, always review AI grades before applying.
  • Mathematical notation. Typed LaTeX or equations are understood by the model. Handwritten maths in images depends on OCR quality — messy handwriting may confuse the extraction step.
  • Rate limits. AI grading calls are subject to OpenAI rate limits. During a large batch submission (e.g., 100 students all submitting within 60 seconds), a few grading jobs may be delayed by 30–60 seconds. Students see a "grading in progress" state until their job completes.
  • Hallucinated scores. In rare cases the model may award scores that don't match the rubric. The confidence score is your early warning — treat anything below 50% as a signal to check manually.
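The delayed-grading behaviour under rate limits is consistent with capped exponential backoff, a generic retry pattern (this is not a documented platform policy, just an illustration of why a delay of 30-60 seconds is expected rather than a failure):

```python
def backoff_delays(max_attempts, base=2.0, cap=60.0):
    """Capped exponential backoff: 2s, 4s, 8s, ... never exceeding `cap` seconds."""
    return [min(base * (2 ** i), cap) for i in range(max_attempts)]
```

After a handful of retries the delay plateaus at the cap, which matches the "grading in progress" window students see during a large batch submission.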