Comment Triage Demo: Ollama LLMs in Institutional CMS Workflows

I built this proof of concept to see how a local Large Language Model (LLM) can assist institutional CMS workflows without sending data off-prem. I kept the scope tight: clarity, auditability, and human control.

Goal

I wanted to reduce manual review time for public feedback while keeping staff in charge. My target was assistive summaries, classifications, and draft replies rather than automated decisions.

What the demo does

  • Categorizes and analyzes incoming comments
  • Returns a short explanation for each classification (sample output below)
  • Translates comments to German
  • Drafts response suggestions
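
To make that concrete, here is an illustrative analysis result. The field names mirror the attributes the demo stores (language, topic, sentiment, urgency, response need, inappropriate-content flag, explanation), but the exact keys and values shown are hypothetical:

    {
      "language": "de",
      "topic": "opening hours",
      "sentiment": "negative",
      "urgency": "medium",
      "needs_response": true,
      "inappropriate": false,
      "explanation": "Complains about shortened counter hours and asks for a reply."
    }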

Why local LLMs

I chose a local model (Ollama + mistral) to keep sensitive content on the same machine and avoid external API dependencies. That makes the system easier to audit and defend in public-sector contexts.

Workflow snapshot

  1. I submit a comment.
  2. The backend asks the local LLM to analyze it and explain its reasoning (a sketch follows this list).
  3. The UI shows the analysis and suggested responses for staff review.
  4. I never auto-publish, hide, or delete anything.
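
Step 2 is the only place the model is involved. Here is a minimal sketch of that call in PHP, assuming a recent Ollama build that accepts a JSON schema in the chat API's format field and the default local endpoint; the helper name, schema keys, and system prompt are my own placeholders, not the demo's exact code:

    <?php
    // Ask the local Ollama instance (default port 11434) to analyze one comment.
    // Nothing leaves the machine: the request goes to localhost only.
    function analyze_comment(string $text): array {
        $schema = [
            'type' => 'object',
            'properties' => [
                'language'       => ['type' => 'string'],
                'topic'          => ['type' => 'string'],
                'sentiment'      => ['type' => 'string'],
                'urgency'        => ['type' => 'string'],
                'needs_response' => ['type' => 'boolean'],
                'inappropriate'  => ['type' => 'boolean'],
                'explanation'    => ['type' => 'string'],
            ],
            'required' => ['language', 'topic', 'sentiment', 'urgency',
                           'needs_response', 'inappropriate', 'explanation'],
        ];
        $payload = [
            'model'    => 'mistral',
            'stream'   => false,
            'format'   => $schema, // constrains the reply to this JSON shape
            'messages' => [
                ['role' => 'system', 'content' => 'Classify citizen feedback and explain briefly.'],
                ['role' => 'user',   'content' => $text],
            ],
        ];
        $ch = curl_init('http://localhost:11434/api/chat');
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => json_encode($payload),
            CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_TIMEOUT        => 120, // local models can be slow
        ]);
        $raw = curl_exec($ch);
        curl_close($ch);
        // The structured analysis arrives as a JSON string in message.content.
        return json_decode(json_decode($raw, true)['message']['content'], true);
    }

The UI then renders whatever this returns for staff review; nothing in the function publishes or mutates a comment.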

Implementation notes

  • Backend: PHP (a simple HTTP API; routing sketched below)
  • Frontend: React + Vite
  • Model runtime: Ollama with the mistral model
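
"Simple HTTP API" here means a single front controller rather than a framework. A sketch of that shape, with illustrative action names standing in for the endpoints described in the next section (the handler bodies are placeholders, not the demo's real logic):

    <?php
    // index.php: dispatch on an ?action= query parameter.
    $routes = [
        'list'      => fn() => ['todo' => 'return all comments'],
        'analyze'   => fn() => ['todo' => 'run the LLM analysis'],
        'respond'   => fn() => ['todo' => 'draft a reply'],
        'translate' => fn() => ['todo' => 'translate one comment'],
        'review'    => fn() => ['todo' => 'save a review status'],
        'reset'     => fn() => ['todo' => 'reseed the demo data'],
    ];
    $action = $_GET['action'] ?? 'list';
    header('Content-Type: application/json');
    if (!isset($routes[$action])) {
        http_response_code(404);
        echo json_encode(['error' => 'unknown action']);
        exit;
    }
    echo json_encode($routes[$action]());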

What happens in the code

  • The app seeds a local SQLite database with multilingual sample comments so the UI has data on first run (sketched after this list).
  • The PHP API exposes endpoints to list comments, analyze one or all comments, generate a draft response, translate a comment, save a review status, and reset the demo data.
  • Analysis sends only the id and text to Ollama’s chat API with a JSON schema, then stores the language, topic, sentiment, urgency, response need, inappropriate-content flags, and a short explanation.
  • The React UI loads comments, runs analysis sequentially to avoid timeouts, shows tags and optional reasoning, supports German translation, and lets staff draft and save a response behind a human-confirmation checkbox.
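
The seeding step is plain PDO against a file-backed database. A minimal sketch, assuming a single comments table of my own naming; the real schema and sample texts may differ:

    <?php
    // Create the demo database on first run and seed it if empty.
    $db = new PDO('sqlite:' . __DIR__ . '/demo.sqlite');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $db->exec('CREATE TABLE IF NOT EXISTS comments (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        text TEXT NOT NULL,
        analysis TEXT,       -- JSON blob from the LLM, NULL until analyzed
        review_status TEXT   -- set by staff, never by the model
    )');
    if ((int)$db->query('SELECT COUNT(*) FROM comments')->fetchColumn() === 0) {
        // Multilingual placeholders so the UI has data immediately.
        $seed = [
            'Die neuen Öffnungszeiten sind viel zu kurz!',
            'Thanks for the quick reply to my earlier question.',
            'Quand la piscine municipale rouvrira-t-elle ?',
        ];
        $stmt = $db->prepare('INSERT INTO comments (text) VALUES (?)');
        foreach ($seed as $text) {
            $stmt->execute([$text]);
        }
    }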

What this proves

From my testing, a local LLM can reduce the manual burden of comment review while preserving privacy and human oversight. The demo also surfaces the limits: mistral on underpowered hardware responds slowly, can miss nuance, and its translation quality varies across languages. Smaller models also struggle with longer contexts, edge cases, and consistency. With a dedicated server and stronger hardware, a larger model with a bigger context window and more parameters would mitigate most of these issues while keeping the workflow itself unchanged.