Auto-FIE Documentation

A Special Education Operating System for Texas Educational Diagnosticians

What is Auto-FIE?

Auto-FIE is an AI-powered system that automates the Full Individual Evaluation (FIE) workflow for Texas special education. It handles the entire pipeline from referral intake to report generation:

1. Referral Intake

Structured intake form captures student demographics, concerns, and suspected disabilities.

2. Logic Engine

Automatically determines required report sections, recommended test batteries, and evaluator roles based on IDEA categories.

3. AI Report Generation

LLM-powered narrative generation for each report section, using actual test scores and clinical data.
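The three-step flow above can be sketched as a minimal mapping from intake data to an evaluation plan. All names here (SECTION_MAP, generate_plan, the test batteries) are hypothetical illustrations, not the actual Auto-FIE schema:

```python
# Sketch of the logic-engine step: suspected categories -> evaluation plan.
# Mappings are illustrative only; real plans follow TEA/IDEA guidance.
SECTION_MAP = {
    "SLD": ["Cognitive Assessment", "Academic Achievement", "Dyslexia"],
    "AU":  ["Autism-Specific", "Communication", "Adaptive Behavior"],
}
BATTERY_MAP = {
    "SLD": ["WISC-V", "WIAT-4"],
    "AU":  ["ADOS-2", "Vineland-3"],
}

def generate_plan(suspected: list) -> dict:
    """Derive required report sections and test batteries from suspected categories."""
    sections, batteries = [], []
    for code in suspected:
        for s in SECTION_MAP.get(code, []):
            if s not in sections:
                sections.append(s)
        for b in BATTERY_MAP.get(code, []):
            if b not in batteries:
                batteries.append(b)
    return {"sections": sections, "batteries": batteries}

plan = generate_plan(["SLD"])
print(plan["batteries"])  # ['WISC-V', 'WIAT-4']
```

A referral suspecting multiple categories simply unions the sections and batteries, which mirrors how one intake can trigger several specialists.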

System Architecture

Referral Intake Form
        |
        v
+-------------------+
|   Logic Engine    |  Maps disability categories -> evaluation plan
|  - Section Engine |  (which report sections are needed)
|  - Battery Select |  (which tests to administer)
|  - Eval Assigner  |  (which specialists are needed)
+-------------------+
        |
        v
+-------------------+
| Assessment Entry  |  Scores, rating scales, health data entered
+-------------------+
        |
        v
+-------------------+
| Eligibility Engine|  Checks IDEA criteria against assessment data
|  (Rule-based AI)  |  Auto-evaluates score thresholds + flags items
+-------------------+    needing clinical judgment
        |
        v
+-------------------+
|  LLM Narrative    |  Generates professional diagnostic narratives
|  Generator        |  for each report section using structured data
|  (Claude/OpenAI)  |
+-------------------+
        |
        v
+-------------------+
| Report Composer   |  Assembles sections into full FIE report
|  + PDF Generator  |  Produces print-ready PDF via WeasyPrint
+-------------------+
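The Report Composer stage at the bottom of the diagram can be sketched as a simple ordered assembly of generated section narratives. The function name and section titles are assumptions for illustration:

```python
def compose_report(sections: dict, order: list) -> str:
    """Assemble generated section narratives into one report body,
    keeping the canonical FIE section order and skipping sections
    that were not required for this referral."""
    parts = []
    for title in order:
        if title in sections:
            parts.append(f"{title}\n\n{sections[title]}")
    return "\n\n".join(parts)

report = compose_report(
    {"Reason for Referral": "Teacher concerns.",
     "Recommendations": "Small-group support."},
    ["Reason for Referral", "Background Information", "Recommendations"],
)
```

The composed text would then be rendered to HTML and handed to WeasyPrint for the print-ready PDF.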

AI Backend

Swappable LLM Providers

The system uses an abstract LLM provider interface that supports multiple backends:

Anthropic Claude

Set LLM_PROVIDER=claude and ANTHROPIC_API_KEY

OpenAI GPT

Set LLM_PROVIDER=openai and OPENAI_API_KEY
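A swappable provider interface of this kind is typically an abstract base class plus an environment-driven factory. This is a stub sketch (the real providers would call the Anthropic or OpenAI SDKs; class and function names here are hypothetical):

```python
import os
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Abstract backend; concrete providers wrap a vendor SDK."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class ClaudeProvider(LLMProvider):
    def generate(self, prompt: str) -> str:
        # Would call the Anthropic SDK using ANTHROPIC_API_KEY; stubbed here.
        return f"[claude] {prompt}"

class OpenAIProvider(LLMProvider):
    def generate(self, prompt: str) -> str:
        # Would call the OpenAI SDK using OPENAI_API_KEY; stubbed here.
        return f"[openai] {prompt}"

def get_provider() -> LLMProvider:
    """Select a backend from LLM_PROVIDER, mirroring the env-var switch above."""
    name = os.environ.get("LLM_PROVIDER", "claude")
    return {"claude": ClaudeProvider, "openai": OpenAIProvider}[name]()
```

Because callers depend only on `LLMProvider.generate`, switching vendors is a configuration change rather than a code change.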

Narrative Generation

Each FIE report section has a specialized prompt template that instructs the LLM to write in formal diagnostic language, cite specific test scores, and follow TEA guidelines. The system feeds structured data (scores, percentiles, rating scales, health records) into section-specific prompts to generate professional narratives.

Supported report sections: Reason for Referral, Background Information, Physical/Health Summary, Behavioral Observations, Cognitive Assessment, Academic Achievement, Adaptive Behavior, Social-Emotional, Communication, Motor, Auditory Processing, Visual Processing, Attention/Executive Functioning, Autism-Specific, Dyslexia, Assistive Technology, Eligibility Determination, Recommendations.
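A section-specific prompt of the kind described above can be sketched as a template plus a formatter that injects structured scores. The template wording and helper names are illustrative, not the system's actual prompts:

```python
# Hypothetical section prompt template; real templates follow TEA guidelines.
COGNITIVE_TEMPLATE = (
    "Write a formal diagnostic narrative for the Cognitive Assessment section "
    "of a Texas FIE report. Cite each test score with its percentile. "
    "Scores: {scores}"
)

def build_section_prompt(template: str, scores: dict) -> str:
    """Render structured score data into a section-specific LLM prompt."""
    lines = [
        f"{test}: standard score {ss} ({pct}th percentile)"
        for test, (ss, pct) in scores.items()
    ]
    return template.format(scores="; ".join(lines))

prompt = build_section_prompt(COGNITIVE_TEMPLATE, {"WISC-V FSIQ": (82, 12)})
```

The resulting prompt string is what gets sent to the configured LLM provider for that section.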

Eligibility Engine

A rule-based engine that evaluates eligibility criteria for all 13 IDEA disability categories. It automatically checks score thresholds, ability-achievement discrepancies, and behavioral rating scale elevations. Criteria requiring clinical judgment (observations, educational impact) are flagged for evaluator review rather than auto-determined.
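The split between auto-scored thresholds and flagged clinical-judgment items can be sketched as follows. The 16-point ability-achievement discrepancy is purely illustrative; real criteria follow TEA/IDEA guidance, and the function name is hypothetical:

```python
from typing import Optional

def evaluate_sld(fsiq: int, achievement_ss: int,
                 observed_impact: Optional[bool] = None) -> dict:
    """Rule-based SLD check: auto-score the discrepancy, flag judgment items."""
    discrepancy = fsiq - achievement_ss
    result = {
        "discrepancy_met": discrepancy >= 16,  # illustrative threshold
        "flags": [],
    }
    if observed_impact is None:
        # Clinical-judgment criteria are flagged for the evaluator,
        # never auto-determined by the engine.
        result["flags"].append("educational impact: evaluator review required")
    return result
```

This pattern (numeric criteria computed, qualitative criteria flagged) keeps the final eligibility determination in the diagnostician's hands.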

13 IDEA Disability Categories

The system supports all 13 disability categories recognized under IDEA in Texas:

AU Autism
DB Deaf-Blindness
D Deafness
ED Emotional Disturbance
HI Hearing Impairment
ID Intellectual Disability
MD Multiple Disabilities
OI Orthopedic Impairment
OHI Other Health Impairment
SLD Specific Learning Disability
SI Speech/Language Impairment
TBI Traumatic Brain Injury
VI Visual Impairment

API Reference

All functionality is available via REST API endpoints. The web UI uses these same APIs via HTMX/fetch.

Students

POST /students/api · GET /students/api · GET /students/api/{id} · PATCH /students/api/{id}

Referrals

POST /referrals/api · GET /referrals/api/list · GET /referrals/api/{id} · PATCH /referrals/api/{id} · POST /referrals/api/{id}/generate-plan

Test Results & Rating Scales

POST /test-results/api · GET /test-results/api/referral/{id} · POST /test-results/api/rating-scales · GET /test-results/api/rating-scales/referral/{id}

AI Report Generation

POST /reports/api/sections/{id}/generate — Generate AI narrative for one section
POST /reports/api/referral/{id}/generate-all — Generate AI narratives for all sections
POST /reports/api/referral/{id}/compose — Compose full report text
POST /reports/api/referral/{id}/pdf — Generate and download PDF

Eligibility Engine

POST /eligibility/api/referral/{id}/evaluate — Run eligibility check against assessment data
GET /eligibility/api/referral/{id} — Get stored eligibility determination

Tech Stack

FastAPI

Backend Framework

SQLAlchemy

ORM + Migrations

Claude / OpenAI

LLM Providers

WeasyPrint

PDF Generation

HTMX

Frontend Interactivity

Tailwind CSS

Styling

PostgreSQL/SQLite

Database

Alembic

DB Migrations

CLI Commands

# Start the server
uvicorn app.main:app --reload

# Generate fake test cases
python -m app.seed.generate_all --count 20
python -m app.seed.generate_all --all-categories

# Generate AI report narratives for all cases
python -m app.seed.generate_reports
python -m app.seed.generate_reports --provider openai
python -m app.seed.generate_reports --dry-run

# Run database migrations
alembic upgrade head

# Run tests
pytest tests/ -v