A small VLM that sees everything
Updated Sep 15, 2025 - HTML
A little single-file frontend for llama.cpp/examples/server, created with Vue, Tailwind CSS, and Flask
GUI for GGML Alpaca models
Wrapper script + Docker setup for llama.cpp batched-bench: run, collect, and browse historical performance results.
🤖 Empower your coding with Note Studio AI, a privacy-focused IDE offering offline AI support and high performance for seamless development.
Real-time vision demo using SmolVLM with llama.cpp backend
My personal README!
A sovereign publishing interface for The Signal — a decentralized creator platform using IPFS and local AI.
A web UI for managing multiple models with llama-server.exe on Windows
A DIY browser interface for interacting with LLaMA locally.
Chat with LLaMA directly in the Opera browser sidebar.
Real-time local video chat example with llama.cpp
🤖 Empower your documents with a local AI assistant for PDF, DOCX, and TXT files, ensuring privacy by keeping data off the cloud.
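Many of the frontends and chat UIs listed above talk to llama.cpp's bundled server (`llama-server`), which exposes an OpenAI-compatible HTTP API. As a minimal sketch of how such a client works, the snippet below builds and sends a chat-completion request using only the Python standard library; the host, port, and sampling parameters are assumptions for illustration, not taken from any of the projects above.

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for llama-server.

    The /v1/chat/completions path and payload shape follow llama.cpp's
    OpenAI-compatible API; the base URL is an assumption (llama-server
    listens on port 8080 by default).
    """
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,   # illustrative sampling settings
        "max_tokens": 128,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(prompt):
    """Send the request and return the assistant's reply text."""
    req = build_chat_request(prompt)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response: first choice's message content
    return body["choices"][0]["message"]["content"]
```

A browser frontend like the ones above does essentially the same thing with `fetch()`, optionally using the streaming variant of the endpoint to render tokens as they arrive.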