FosterHealth’s AI-powered scribe listens to conversations between physicians and patients and, at the end of each session, generates fact-checked clinical notes in a format customised by the physician.
Recent advances in AI, fuelled by the transformer architecture, large-scale models, and training over diverse datasets, have delivered significant performance gains in speech processing and language understanding. These advances have enabled productivity gains across various sectors, including design, software development, and customer support. However, current limitations, such as hallucinating inaccurate information, continue to hinder adoption in safety-critical domains like healthcare. To create value for users in healthcare, we need technological and interface innovations that enhance the reliability of AI systems.
Our design philosophy
Every healthcare operator must find our application trustworthy. They should feel comfortable and confident about our AI model’s outputs every time they use our application for documentation-related tasks. To achieve this design goal, we incorporate three main design principles into our product development process:
Design an independent fallback system that enhances reliability by actively monitoring the primary AI system and intervening when it detects issues
Have the AI model ask the user for help, if needed, before generating the final output
Let the user review the AI outputs in a seamless manner
In this post, we describe how our fact-checking system applies these principles and helps establish user trust.
Fact-checking system
During the session, our speech processing system transcribes the conversation between the patient and physician in real time. When the conversation occurs in another language or switches between English and other languages, the system translates the content into English. At the end of the session, before generating the clinical note, if the system is uncertain about any medical terms (procedures, drugs, etc.), it asks the physician for help and corrects the transcript.
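To make the clarification step concrete, here is a minimal sketch of how a transcription pipeline might surface low-confidence medical terms to the physician before the transcript is finalised. The `Term` structure, the `ask_user` callback, and the 0.85 threshold are illustrative assumptions, not our production API.

```python
from dataclasses import dataclass

@dataclass
class Term:
    text: str          # the transcribed medical term, e.g. a drug name
    confidence: float  # transcription confidence in [0, 1]

def review_terms(terms, ask_user, threshold=0.85):
    """Ask the user about low-confidence medical terms and collect
    corrections, keyed by the original transcribed text."""
    corrections = {}
    for term in terms:
        if term.confidence < threshold:
            # Ask for help instead of silently guessing.
            answer = ask_user(
                f"Is '{term.text}' correct? Type a fix or press Enter to keep it: "
            )
            if answer.strip():
                corrections[term.text] = answer.strip()
    return corrections

# Hypothetical session: 'metoprolol' was transcribed with low confidence.
terms = [Term("metoprolol", 0.62), Term("hypertension", 0.97)]
print(review_terms(terms, ask_user=input))
```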
To enhance the reliability of the clinical note generation system, we enforce multiple controls. We use decoding algorithms to constrain the output of the primary AI (a complex, transformer-based model), ensuring it is structurally compliant with the custom format specified by the user. We use a simpler, independent fallback system to ensure that the generated clinical note is grounded in the transcript. If the fallback detects that any section of the generated note is inconsistent with the transcript, it intervenes and fixes it.
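As an illustration of the fallback idea, the sketch below scores each note section against the transcript and flags unsupported sections for intervention. The lexical-overlap score and the 0.5 threshold are simplifying assumptions for the sketch; a real grounding check would use a stronger method, such as a learned entailment model.

```python
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def grounding_score(claim, transcript_sentences):
    """Fraction of a claim's words found in the best-matching transcript
    sentence -- a crude proxy for 'is this claim supported?'."""
    claim_tokens = tokens(claim)
    return max(
        len(claim_tokens & tokens(s)) / max(len(claim_tokens), 1)
        for s in transcript_sentences
    )

def flag_sections(note_sections, transcript_sentences, threshold=0.5):
    """Return the note sections a fallback system would intervene on."""
    return [
        name for name, text in note_sections.items()
        if grounding_score(text, transcript_sentences) < threshold
    ]

transcript = [
    "Patient reports chest pain for two days.",
    "No history of diabetes.",
]
note = {
    "Chief complaint": "Chest pain for two days.",
    "Assessment": "Likely viral pneumonia.",  # unsupported by transcript
}
print(flag_sections(note, transcript))  # -> ['Assessment']
```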
The fact-checking system’s interface enables the user to review the note quickly: clicking any line generated by the AI highlights the corresponding reference in the transcript, giving the user immediate context from the original conversation. The interface also allows users to edit the note as they see fit; once they feel comfortable with the clinical note, they can transfer it to the electronic health record.
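One way to implement this click-to-reference behaviour is to precompute, for each note line, the transcript sentence it most resembles. The sketch below uses Python’s difflib as the similarity measure; this is an illustrative approximation, and the actual interface may align lines differently, for example by reusing references produced during generation.

```python
from difflib import SequenceMatcher

def best_reference(note_line, transcript_sentences):
    """Index of the transcript sentence most similar to a note line,
    so the UI can scroll to and highlight it on click."""
    ratios = [
        SequenceMatcher(None, note_line.lower(), s.lower()).ratio()
        for s in transcript_sentences
    ]
    return max(range(len(ratios)), key=ratios.__getitem__)

transcript = [
    "Doctor: How long have you had the cough?",
    "Patient: About three weeks now.",
    "Doctor: Any fever?",
]
print(best_reference("Cough for approximately three weeks.", transcript))  # -> 1
```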
Given the high velocity of innovation in AI, we are seeing continuous improvements in model accuracy, along with optimisations that reduce inference time and compute requirements. Our goal is to deliver state-of-the-art technology in a reliable and trustworthy manner. We constantly talk to our users, collaborate with leading research institutes and healthcare experts, and continually improve our service. If you have any additional questions or want to partner with us, please contact us here.