AI Engineer at Jurimesh
April 2025 – present

I lead Jurimesh's contract comparison feature end-to-end, enabling M&A, PE, and legal teams to compare large sets of contracts in minutes instead of days. The system highlights high-level deal risks as well as paragraph-level differences, supporting both strategic due diligence and detailed legal review.
My work spans the full product lifecycle: working closely with clients to understand workflows, researching and validating technical approaches, shipping production-ready MVPs, and owning the feature in production. The contract comparison system combines retrieval with targeted LLM analysis and task-specific processing to balance quality with scalability, and was featured in Jurimesh's launch video and blog post.
Beyond contract comparison, I designed Jurimesh's domain-specific chunking and document structuring pipeline for legal text, significantly improving retrieval quality across chat and comparison workflows while handling bursty workloads and highly variable document structures.
On the ML side, I fine-tuned multilingual clause classifiers and built robust retrieval-augmented generation (RAG) pipelines, using offline evaluation, automated regression testing, and LLM-as-a-judge scoring to monitor and improve prompt and retrieval quality.
From an engineering perspective, I work across the stack: distributed async worker systems for LLM workloads, production APIs, and real-time client features, along with LLMOps, evaluation tooling, and production observability.