A retrieval-augmented workflow designed to turn unstructured source material into structured, citation-aware briefings and reports.

What I built

  • Retrieval pipelines that surfaced relevant source passages and fed them into generation steps
  • Evaluation workflows covering retrieval relevance, answer quality, groundedness, and citation behavior
  • Structured prompting patterns for converting plain-language inputs into consistent reporting outputs
  • Iteration loops that made failure analysis visible before wider rollout
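The retrieval-to-generation and citation-checking steps above can be sketched roughly as below. This is a hypothetical, minimal illustration, not the actual system: the corpus, the function names (`retrieve`, `build_prompt`, `check_citations`), and the term-overlap scoring are all assumptions standing in for a real retriever and model call.

```python
# Minimal sketch of a citation-aware RAG step (illustrative names and logic;
# a real pipeline would use an actual retriever and LLM call).
import re

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def retrieve(query, docs, k=2):
    # Rank documents by simple term overlap -- a stand-in for a real retriever.
    q = set(tokenize(query))
    scored = sorted(docs.items(),
                    key=lambda kv: -len(q & set(tokenize(kv[1]))))
    return scored[:k]

def build_prompt(query, hits):
    # Structured prompt: each passage carries a citation tag the model must reuse.
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return (f"Answer using only the sources below; cite them as [id].\n"
            f"{context}\nQ: {query}")

def check_citations(answer, docs):
    # Groundedness check: every [id] cited in the answer must exist in the corpus.
    cited = re.findall(r"\[(\w+)\]", answer)
    return [c for c in cited if c not in docs]  # unknown citation ids

docs = {"d1": "The briefing covers Q3 revenue.",
        "d2": "Headcount grew in Q2."}
hits = retrieve("What happened to revenue?", docs)
prompt = build_prompt("What happened to revenue?", hits)
bad = check_citations("Revenue is covered in the Q3 briefing [d1].", docs)
```

The citation check is the piece that makes failure analysis concrete: an answer citing an id that never appeared in the retrieved context is a grounding failure you can count, not just eyeball.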

Outcome

  • Improved trust in generated outputs by tying responses back to evidence
  • Created a reusable evaluation framework rather than relying on ad hoc spot checks
  • Helped move the system from early prototype work toward more reliable decision-support usage

Stack

LLMs, RAG, evaluation design, prompt engineering, retrieval analysis, and workflow tooling