YAILA is an AI learning workspace for your own study documents.
You upload a PDF, the backend processes it into searchable chunks, and then you can study from the same material using chat, summary, flashcards, quiz, concept graph, and roadmap views.
Demo link: I will add this later.
Features:
- Upload and process documents
- Ask document-grounded questions in AI Chat
- Generate and read structured summaries
- Practice with flashcards and quizzes
- Explore concept relationships in a knowledge graph
- Follow a generated learning roadmap
- Track activity on dashboard/profile pages
The stack is React + Express + MongoDB, with configurable AI and vector-store providers.
Architecture overview:

```mermaid
flowchart LR
    U[User] --> FE["Frontend (React + Vite)"]
    FE --> API["Backend API (Express)"]
    API --> DB[(MongoDB)]
    API --> LLM["LLM Provider (Groq/Gemini)"]
    API --> VEC["Vector Store (Mongo/Endee)"]
    API --> INGEST[Ingestion Service]
    INGEST --> PARSER[PDF Parser]
    PARSER --> CHUNKER[Chunking]
    CHUNKER --> EMBED[Embeddings]
    EMBED --> DB
    EMBED --> VEC
    API --> CHAT[Chat + Tutor Orchestrator]
    CHAT --> RETRIEVE[Retrieval Service]
    RETRIEVE --> DB
    RETRIEVE --> VEC
    API --> SUMMARY[Summary Service]
    API --> FLASH[Flashcard Service]
    API --> QUIZ[Quiz Service]
    API --> GRAPH[Knowledge Graph Service]
    API --> ROADMAP[Roadmap Service]
```
Document ingestion flow:

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant FE as Frontend
    participant API as Backend API
    participant Q as Queue
    participant P as Parser
    participant C as Chunker
    participant E as Embedder
    participant DB as MongoDB
    participant VS as Vector Store
    User->>FE: Upload document
    FE->>API: POST /api/documents
    API->>DB: Save document metadata
    API->>Q: Enqueue ingestion job
    Q->>P: Parse page batches
    P->>C: Send cleaned text
    C->>E: Build chunk batches
    E->>DB: Save chunks/progress
    E->>VS: Upsert vectors
    API->>DB: Mark ingestion completed
    API->>DB: Trigger summary/graph/roadmap follow-up
    API-->>FE: Document ready
```
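The chunking step above can be sketched as a fixed-size splitter with overlap. This is a minimal illustration only; the real `backend/services/chunkingService.js` may split on tokens, sentences, or page boundaries, and the sizes here are made-up defaults:

```javascript
// Minimal fixed-size chunker with overlap (illustrative sketch, not the
// actual chunkingService.js). Overlap preserves context across chunk
// boundaries so retrieval does not miss sentences cut in half.
function chunkText(text, chunkSize = 200, overlap = 50) {
  if (overlap >= chunkSize) throw new Error("overlap must be < chunkSize");
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push({
      text: text.slice(start, start + chunkSize),
      start, // character offset, useful later for citations
    });
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const demo = chunkText("a".repeat(450), 200, 50);
console.log(demo.length); // 3 chunks, starting at offsets 0, 150, 300
```

Each chunk would then be embedded in batches and upserted into the vector store, as the diagram shows.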
AI chat flow:

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant FE as Frontend
    participant API as Backend API
    participant I as Intent Service
    participant R as Retrieval Service
    participant VS as Vector Store
    participant O as Tutor Orchestrator
    participant LLM as LLM Provider
    User->>FE: Ask a question
    FE->>API: POST /api/ai/chat/:id
    API->>I: Classify intent
    API->>R: Fetch relevant chunks
    R->>VS: Semantic + lexical lookup
    VS-->>R: Ranked chunks
    R-->>API: Grounding context
    API->>O: Build final prompt
    O->>LLM: Send prompt + context
    LLM-->>O: Answer
    O-->>API: Response + citations
    API-->>FE: Chat result
```
Typical user flow:
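The "semantic + lexical lookup" step can be approximated as a hybrid score that blends vector cosine similarity with keyword overlap. This is a sketch under assumed scoring weights; the real `retrievalService.js` and vector store likely use an index and different ranking logic:

```javascript
// Hybrid retrieval sketch (illustrative, not the actual retrievalService.js).
// Semantic signal: cosine similarity between query and chunk vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Lexical signal: fraction of query terms that appear in the chunk text.
function lexicalOverlap(query, text) {
  const qTerms = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  const tTerms = new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
  let hits = 0;
  for (const t of qTerms) if (tTerms.has(t)) hits++;
  return qTerms.size ? hits / qTerms.size : 0;
}

// Blend both signals with weight alpha and keep the top-k chunks.
function rankChunks(queryVec, queryText, chunks, topK = 5, alpha = 0.7) {
  return chunks
    .map((c) => ({
      ...c,
      score: alpha * cosine(queryVec, c.vector) +
             (1 - alpha) * lexicalOverlap(queryText, c.text),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

The top-ranked chunks become the grounding context the orchestrator sends to the LLM along with the user's question.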
- User logs in (or guest login).
- User uploads a document from Documents page.
- Backend parses, chunks, embeds, and indexes it.
- Document opens in a multi-tab workspace (chat/summary/flashcards/quiz).
- User can continue with graph and roadmap views.
Project structure:

```
backend/
  config/
  controllers/
  jobs/
  middleware/
  models/
  repositories/
  routes/
  services/
  tests/
  utils/
  vendor/endee/
frontend/
  public/
  readme/
  src/app/
  src/services/
README.md
```
Key backend services:

- `backend/services/documentIngestionService.js`
- `backend/services/chunkingService.js`
- `backend/services/retrievalService.js`
- `backend/services/chatService.js`
- `backend/services/tutorOrchestratorService.js`
- `backend/services/summaryService.js`
- `backend/services/quizService.js`
- `backend/services/knowledgeGraphService.js`
- `backend/services/roadmapService.js`
Key frontend files:

- `frontend/src/app/routes.tsx`
- `frontend/src/app/context/AuthContext.tsx`
- `frontend/src/services/api.js`
- `frontend/src/app/pages/DocumentDetail.tsx`
- `frontend/src/app/pages/KnowledgeGraph.tsx`
- `frontend/src/app/pages/LearningRoadmap.tsx`
API route prefixes:

- `/api/auth`
- `/api/documents`
- `/api/ai`
- `/api/flashcards`
- `/api/quizzes`
- `/api/graph`
- `/api/roadmaps`
- `/api/dashboard`
- `/api/activity`
- `/api/notifications`
Backend:

```shell
cd backend
npm install
cp .env.example .env
npm run dev
```

Frontend:

```shell
cd frontend
npm install
npm run dev
```

Key settings in `backend/.env.example`:

- `AI_PRIMARY_PROVIDER`
- `AI_FALLBACK_PROVIDER`
- `VECTOR_STORE_PROVIDER`
- `DOCUMENT_UPLOAD_MAX_MB`
- `INGESTION_PAGE_BATCH_SIZE`
- `EMBEDDING_BATCH_SIZE`
- `RETRIEVAL_TOP_K`
- `RESUME_INGESTION_ON_BOOT`
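A possible `backend/.env` combining those settings. The provider names come from the architecture diagram (Groq/Gemini, Mongo/Endee); all numeric values are illustrative assumptions, not project defaults:

```
# Illustrative values only - check backend/.env.example for real defaults
AI_PRIMARY_PROVIDER=groq
AI_FALLBACK_PROVIDER=gemini
VECTOR_STORE_PROVIDER=mongo
DOCUMENT_UPLOAD_MAX_MB=25
INGESTION_PAGE_BATCH_SIZE=10
EMBEDDING_BATCH_SIZE=32
RETRIEVAL_TOP_K=5
RESUME_INGESTION_ON_BOOT=true
```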
Health checks:

- Health: `GET /api/health`
- AI health: `GET /api/ai/test`
Run tests:

```shell
cd backend
npm test
```

Run the ingestion benchmark:

```shell
cd backend
npm run benchmark:ingestion
```