Lesson-aware AI answers for every student. Zero model infrastructure. Live in 4 days.
Embedding AI into a live math tutoring platform — from API to student experience.
How Acceleo Matemática partnered with SundayPyjamas to build a role-aware AI Teaching Assistant, usage metering aligned to student packages, and a real-time analytics dashboard — all on a single integrated API platform.
Acceleo Matemática
<4 days
API key to live sessions
0
Model infrastructure managed
Full
History retained across sessions
5
Lesson stages supported
The problem
Teaching math at scale is inherently personal. Dr. Gabriel Toichoa and his team at Acceleo Matemática had built a live curriculum with whiteboard sessions and modular lessons, but one constraint kept surfacing: students asking questions mid-session had to wait for teacher availability, and follow-up between sessions was inconsistent. Acceleo needed an AI assistant that understood exactly what lesson was active, remembered what each student had previously asked, and gave accurate, grounded answers without a dedicated AI infrastructure team to maintain it.
What we built
Acceleo Matemática ran on Next.js and Supabase. They needed lesson-aware AI responses and per-student conversation memory. We integrated SundayPyjamas AI Suite directly into their existing platform with no separate model deployment, no embedding pipeline, and no vector database administration.
The assistant answers student questions grounded in the active lesson topic using a vector index over Acceleo's curriculum library. A separate memory thread per student means the AI picks up exactly where any previous session left off. A local rule-based fallback keeps the experience reliable when the AI endpoint is unreachable, which is critical in a live classroom context where degraded experience has real consequences.
What we built for Acceleo
- Role-aware AI teaching assistant (teacher + student modes)
- Multi-surface deployment: live session, post-session, TA workspace, lesson prep
- Module-aware behaviour across 5 lesson stages
- Socratic method engine for student interactions
- Async content generation (practice problems, quizzes, warm-ups)
- Safety enforcement + content-blocked webhook
- Analytics dashboard with usage, latency, and quality metrics
Technical integration
A lightweight architecture: lesson-aware chat, persistent student memory, grounded retrieval, and deterministic fallback.
Lesson-grounded chat
POST /api/v1/apps/{appId}/chat/message
lessonContext + sessionContext -> grounded responses
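A lesson-grounded chat call can be sketched as below. Only the endpoint path comes from this case study; the base URL, auth header, and the field names inside `lessonContext` and `sessionContext` are illustrative assumptions, not the confirmed SundayPyjamas schema.

```typescript
// Illustrative sketch only: base URL, headers, and payload field names
// beyond lessonContext/sessionContext are assumptions.
interface LessonContext {
  lessonId: string;
  stage: string; // one of the 5 supported lesson stages
  topic: string;
}

// Pure payload builder, so the request shape is easy to test in isolation.
function buildChatPayload(studentId: string, message: string, lesson: LessonContext) {
  return {
    message,
    lessonContext: lesson,
    sessionContext: { userId: studentId, role: "student" as const },
  };
}

async function askAssistant(
  appId: string,
  apiKey: string,
  payload: ReturnType<typeof buildChatPayload>
) {
  // Hypothetical host; substitute the real API base URL.
  const res = await fetch(
    `https://api.sundaypyjamas.example/api/v1/apps/${appId}/chat/message`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
      body: JSON.stringify(payload),
    }
  );
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  return res.json();
}
```

Keeping the payload builder pure makes the role-aware context (teacher vs. student) a one-line switch in application code.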
Per-student memory
POST /api/v1/apps/{appId}/memory/save-messages
thread -> Supabase persistence -> session continuity
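The per-student thread convention might look like the sketch below. The endpoint path is from this case study; the `threadId` naming scheme and payload shape are assumptions for illustration.

```typescript
// Sketch of per-student memory threading. The thread-id convention and
// field names are illustrative assumptions.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// One memory thread per student gives the AI continuity across sessions.
function buildSaveMessagesPayload(studentId: string, messages: ChatMessage[]) {
  return {
    threadId: `student-${studentId}`, // hypothetical per-student thread key
    messages,
  };
}

async function saveMessages(
  appId: string,
  apiKey: string,
  payload: ReturnType<typeof buildSaveMessagesPayload>
) {
  // Hypothetical host; substitute the real API base URL.
  await fetch(
    `https://api.sundaypyjamas.example/api/v1/apps/${appId}/memory/save-messages`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
      body: JSON.stringify(payload),
    }
  );
}
```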
Curriculum retrieval
Vector index over lesson library
upsert + query endpoints -> curriculum-cited answers
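Preparing curriculum for the upsert side of that flow can be sketched as simple chunking. The chunk size, id scheme, and record shape here are assumptions; the case study only states that an upsert + query pattern is used.

```typescript
// Illustrative sketch: split a lesson into fixed-size chunks before
// upserting them into the managed vector index. Sizes and ids are assumptions.
interface LessonChunk {
  id: string;
  lessonId: string;
  text: string;
}

function chunkLesson(lessonId: string, body: string, size = 400): LessonChunk[] {
  const chunks: LessonChunk[] = [];
  for (let i = 0; i < body.length; i += size) {
    chunks.push({
      id: `${lessonId}-${i / size}`, // stable id so re-upserts overwrite cleanly
      lessonId,
      text: body.slice(i, i + size),
    });
  }
  return chunks;
}
```

Tagging each chunk with its `lessonId` is what lets query results cite the exact lesson an answer is grounded in.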
Graceful fallback
Local fallback in API route
rule-based handling -> resilient classroom experience
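A rule-based fallback of this kind is deterministic and fully local, so it can live directly in the API route. The specific rules and replies below are illustrative examples, not Acceleo's actual rule set.

```typescript
// Sketch of the deterministic fallback that runs when the AI endpoint is
// unreachable. Rules and reply text are illustrative only.
const FALLBACK_RULES: Array<{ pattern: RegExp; reply: string }> = [
  { pattern: /homework|exercise/i, reply: "Your practice set is in the lesson materials tab." },
  { pattern: /when|schedule/i, reply: "Check the session calendar for upcoming lessons." },
];

const DEFAULT_REPLY =
  "The assistant is briefly unavailable. Your question has been saved for your teacher.";

function fallbackAnswer(question: string): string {
  for (const rule of FALLBACK_RULES) {
    if (rule.pattern.test(question)) return rule.reply;
  }
  return DEFAULT_REPLY;
}
```

Because the fallback never calls out to the network, a live classroom always gets an answer in bounded time, even during an outage.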
Usage metering
AI usage is metered via the platform's credits model and mapped to Acceleo's student package tiers. That keeps classroom consumption, billing, and product packaging in sync without Acceleo operating separate metering infrastructure.
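The tier mapping described above can be sketched as a small lookup. The tier names and credit allowances are invented for illustration; the real mapping lives in Acceleo's package configuration.

```typescript
// Illustrative sketch: map metered AI credits to student package tiers.
// Tier names and allowances are assumptions, not Acceleo's actual packages.
const PACKAGE_CREDITS: Record<string, number> = {
  basic: 200,
  standard: 500,
  intensive: 1200,
};

// Credits left for a student in the current billing period (never negative).
function creditsRemaining(tier: string, creditsUsed: number): number {
  const allowance = PACKAGE_CREDITS[tier] ?? 0;
  return Math.max(0, allowance - creditsUsed);
}
```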
Context: role-based (teacher vs. student)
Deployment: /dashboard/student/assistant
Outcomes
Students get a contextual answer in seconds, whether mid-lesson or between sessions. Teachers get full conversation history per student, with the context they need to understand what each student is working through. The previous wait-for-teacher pattern is no longer the primary constraint.
The Acceleo team manages zero AI infrastructure: no model endpoints, no embedding providers, and no vector database administration as they scale. The first live student session took place less than 4 days after API access.
What this architecture enables in regulated industries
The pattern Acceleo uses — context-grounded responses, per-user persistent memory, graceful fallback, and zero infrastructure overhead — is directly applicable to regulated enterprise environments:
Healthcare
Protocol-aware clinical assistant with per-patient conversation history. Same memory threading, same grounded retrieval, deployed in your cloud.
Financial services
Client advisory assistant with full relationship history. Grounded in your documentation, scoped to your business rules.
Enterprise AI
Department-specific assistants with workspace isolation and persistent context, using the same zero-infrastructure model at scale.
Building something similar?
We can have a working proof-of-concept on your data in 10 days.