EkaCare’s prescription parsing solution leverages its customised vision-LLM
to accurately extract structured data such as symptoms, diagnosis, medical history, vitals and medications. Designed specifically for the Indian healthcare ecosystem, our solution offers a high level of accuracy and does not require a human in the loop.
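For illustration, the extracted output can be thought of as a structured record along the following lines. This is a hypothetical sketch of the schema, not Eka Care’s actual output format; the field names and values are assumptions.

```python
# Hypothetical example of the kind of structured record extracted from a
# prescription image. Field names and values are illustrative only and do
# not reflect Eka Care's actual output schema.
parsed_prescription = {
    "symptoms": ["fever", "dry cough"],
    "diagnosis": ["acute upper respiratory infection"],
    "medical_history": ["type 2 diabetes mellitus"],
    "vitals": {"temperature_f": 101.2, "bp_systolic": 128, "bp_diastolic": 84},
    "medications": [
        {
            "name": "Dolo 650",     # India-specific brand of paracetamol
            "dose": "650 mg",
            "frequency": "1-0-1",   # common Indian dosing notation
            "duration": "5 days",
        }
    ],
}
```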
Our prescription parsing technology is powered by our custom Large Language Models (LLMs), specifically trained on millions of anonymized medical documents.
These documents span diverse formats and contexts, with a particular focus on the Indian healthcare ecosystem. Our models understand drug names that
are specific to India, something that SOTA models such as GPT-4o and Sonnet 3.5 often fail at. Our rigorous training and fine-tuning process ensures exceptional accuracy while minimizing common pitfalls, such as hallucinations, that often affect other SOTA LLMs.
The result is a highly reliable system, as demonstrated in the benchmarks provided in the subsequent section. Our process consists of two core steps:
1. Extracting structured medical concepts (symptoms, diagnosis, medical history, vitals and medications) from the prescription.
2. Assigning SNOMED-CT identifiers to the extracted medical concepts. This is what enables interoperability and interpretability of this rich medical data.
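As an illustration of the second step, each extracted concept can be linked to a SNOMED-CT code so that downstream systems interpret it consistently. The snippet below is a hypothetical sketch; the lookup table and function are assumptions, not Eka Care’s implementation, and the concept IDs shown should be verified against a current SNOMED-CT release.

```python
# Hypothetical sketch of SNOMED-CT coding for extracted concepts.
# The lookup table and function are illustrative, not Eka Care's API.
SNOMED_LOOKUP = {
    # concept text              -> SNOMED-CT concept ID
    "type 2 diabetes mellitus": "44054006",   # Diabetes mellitus type 2 (disorder)
    "fever": "386661006",                     # Fever (finding)
}

def code_concept(concept_text: str) -> dict:
    """Attach a SNOMED-CT identifier to an extracted concept, if known."""
    return {
        "text": concept_text,
        "snomed_ct_id": SNOMED_LOOKUP.get(concept_text.lower()),
    }

print(code_concept("Type 2 diabetes mellitus"))
# {'text': 'Type 2 diabetes mellitus', 'snomed_ct_id': '44054006'}
```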
Our benchmark experiments, run on an evaluation dataset comprising thousands of documents, showcase Eka’s superior accuracy compared to other SOTA models. Note that this evaluation dataset contains both PDFs and photographed (“clicked”) images.
| Task | Parrotlet-V (Eka Care’s LLM) | OpenAI GPT-4o | Claude Sonnet 3.5 | Qwen2-VL (7B) | Llama-3.2-Vision (11B) | Phi-3.5-vision (4.2B) |
|---|---|---|---|---|---|---|
| Prescription parsing | 0.921 | 0.853 | 0.867 | 0.630 | 0.593 | 0.433 |
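The table reports a single accuracy score per model. The exact metric is not detailed here, but a field-level exact-match accuracy of the following form is one common way such a score is computed; the function below is an assumed, simplified illustration rather than the benchmark’s actual scoring code.

```python
# Assumed, simplified scoring sketch: fraction of expected fields that the
# model extracted with an exactly matching value. Not the actual benchmark code.
def field_accuracy(predicted: dict, expected: dict) -> float:
    if not expected:
        return 1.0
    correct = sum(
        1 for field, value in expected.items()
        if predicted.get(field) == value
    )
    return correct / len(expected)

# Example: two of three expected fields match exactly.
print(field_accuracy(
    predicted={"drug": "Dolo 650", "dose": "650 mg", "frequency": "1-1-1"},
    expected={"drug": "Dolo 650", "dose": "650 mg", "frequency": "1-0-1"},
))  # ~0.667
```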
A deeper view of the results of these experiments is summarised below.