From Chaos to Clarity: A Zero-Resource Generative AI Workflow for Emergency Care

Abstract
We introduce a prompt-guided large language model (LLM) workflow for emergency departments that processes anonymized speech-to-text transcripts to deliver automated triage scoring, clinical documentation, patient education, operational cost insights, and iterative case updates. The system requires no additional infrastructure; it is not a medical device but a decision-support tool that requires physician oversight. Early experience demonstrates high usability, potential time savings, and minimal errors, paving the way for formal prospective evaluation.


Description
A pragmatic, zero-infrastructure AI solution for EDs, handling everything from rapid triage to patient discharge summaries via a single anonymized transcript—freeing clinicians to focus on true patient care while safeguarding data privacy and professional judgment.


Challenges in the Emergency Department
Emergency departments (EDs) are high-pressure environments marked by a paradoxical combination of information scarcity and information overload in a setting with relentless time and space constraints. Physicians must rapidly process complex patient data amid frequent interruptions, especially during information exchange, often stretching their cognitive capacity and obstructing their workflow [1, 2]. Furthermore, emergency physicians spend between 44% and 65% of their time on documentation alone, more than on any other activity including direct patient care [3, 4]. Inexperienced providers may struggle with clinical decision-making under these conditions, while even veterans risk missing details when juggling dozens of parallel tasks. Additionally, patient satisfaction scores in emergency departments are typically low, driven by perceived staff attitude, information provision and waiting times [5, 6]. The challenge is real: we must catch every critical detail, make optimal decisions, and keep patients and staff fully informed, despite an environment that disrupts us at every turn. In the ED, patients arrive with symptoms, not diagnoses, demanding that we transform uncertainty into decisions in mere minutes. ED clinicians need better support to manage the flood of data, documentation and communication demands, and the varying experience levels on the care team.

A Prompt-Based AI Workflow Solution
We propose a pragmatic solution leveraging recent advances in artificial intelligence: an LLM prompt that assists clinicians in the ED and can be implemented rapidly without any additional infrastructure. With patient consent, the initial conversation is recorded and transcribed locally on a smartphone, briefly checked for anonymization, then pasted into a large language model prompt whose outputs are emailed back to the user’s account and simply inserted into the hospital system, with no special infrastructure required. We named our GPT-based system "alKem15t" (pronounced "alchemist") to highlight its four conceptual underpinnings: the German abbreviations “KI” (artificial intelligence) and “KIS” (hospital information system), the transformative notion of an alchemist capable of creating “gold,” and the numeric substitution of “I” and “S” with “1” and “5” as a nod to its technological roots. Each key automated capability of alKem15t addresses a challenge in emergency care, including:


1.) Triage Stratification: the LLM generates the alKem15t Score, which incorporates neurological status (0–20), cardiopulmonary function (0–20), warning symptoms (0–15), overall condition (0–10), critical labs (0–15), red flags (0–10), and disease trajectory (0–10), always adding only the highest point value in each category and assigning zero for missing data, on the assumption that if something is not mentioned it is likely not cardinal. The sum then maps to a color-coded risk tier: green (0–30), yellow (31–60), orange (61–85), or red (86–100); a schematic sketch of this mapping follows the list below. No vital parameters (or clicks) are necessary, as the risk category is generated only from what is known based on the input.


2.) Recommendation: the LLM suggests an immediate management plan including diagnostic procedures and interventions as well as their setting and time scale.


3.) Follow-Up Questions: the LLM generates clarifying questions an attending might ask to further assess the case. The questions are tailored to fill in missing information from the alKem15t score and consider red flags as well as rare diseases.


4.) Automated Discharge Letter: the LLM produces a concise, structured discharge summary letter. This draft includes the diagnosis as well as differentials, the chief complaint, pertinent history, exam findings, treatments given and planned follow-up. 


5.) Patient-Friendly Explanation: the LLM delivers a plain-language overview of why the patient came to the ED, the likely diagnosis, and potential tests and treatments, and reassures patients that the team is actively reviewing each step behind the scenes, so they feel secure and informed about next steps even when no one is physically present.


6.) Consults: the LLM is directed to outline brief, organ-specific assessments for multiple specialties, starting with emergency medicine and including at least two subspecialties of internal medicine, two surgical fields, and three other relevant disciplines. Each consult should briefly address recommended diagnostic steps (e.g. imaging, labs), potential therapies (medication, intervention, surgery, conservative management), setting of care (inpatient vs. outpatient), and possible risks or complications.


7.) Background Knowledge: the LLM provides brief relevant medical background to support clinical reasoning. The prompt instructs the LLM to produce a thorough discussion of each leading or differential diagnosis, focusing on underlying pathophysiology, epidemiological data (incidence, prevalence), known risk factors, typical clinical outcomes, and the pharmacologic treatments currently in use (including mechanism of action, potential side effects, and relevant drug interactions).


8.) Operational Context: the LLM is instructed to compile a concise, tabular cost-revenue overview specific to a municipal hospital in North Rhine–Westphalia (Germany) for each potential care setting (ambulatory, inpatient, and a holding-unit scenario), and then to offer coding recommendations.


9.) Iterative Case Updates: in the final step, the prompt instructs the LLM to ask whether any additional details are available about the case and, upon receiving new inputs, to restart its entire analysis, incorporating both the previously known and the newly provided information into its updated response, ensuring an adaptive, iterative workflow.
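
To make the scoring logic from step 1 concrete, here is a minimal Python sketch of how the category caps and color tiers described above could be combined. It assumes the per-category points have already been assigned (by the model or a clinician, with missing data scored as zero); the function and variable names are illustrative and not part of the published prompt.

# Illustrative sketch of the alKem15t score aggregation (not the published prompt logic).
# Per-category points are assumed to have been assigned already; missing data scores 0.

CATEGORY_MAX = {
    "neurological_status": 20,
    "cardiopulmonary_function": 20,
    "warning_symptoms": 15,
    "overall_condition": 10,
    "critical_labs": 15,
    "red_flags": 10,
    "disease_trajectory": 10,
}

TIERS = [(30, "green"), (60, "yellow"), (85, "orange"), (100, "red")]


def alkem15t_tier(category_points: dict) -> tuple[int, str]:
    """Cap each category at its maximum, sum the points, and map the total to a color tier."""
    total = 0
    for category, maximum in CATEGORY_MAX.items():
        points = category_points.get(category, 0)  # unmentioned -> 0 ("likely not cardinal")
        total += min(points, maximum)              # never exceed the category maximum
    for upper_bound, color in TIERS:
        if total <= upper_bound:
            return total, color
    return total, "red"


# Example: moderate neurological and cardiopulmonary findings plus one red flag.
print(alkem15t_tier({"neurological_status": 10,
                     "cardiopulmonary_function": 15,
                     "red_flags": 10}))   # -> (35, 'yellow')
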
Before using the alKem15t workflow, it is imperative to acknowledge and adhere to the following two requirements. First, the transcript must be fully anonymized, that is, stripped of all personally identifying details (not merely pseudonymized), before being entered into the LLM to protect patient privacy (e.g. using a local app like ScribeAI with no cloud upload, and deleting the file right after generating the final output). Second, this system is not a medical device but a decision-support aid, meaning every element of the AI-generated output must be interpreted, validated, and if necessary corrected by a qualified physician, ensuring that ultimate responsibility remains with professional medical judgment.
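
As a purely illustrative aid, and not a substitute for the anonymization requirement above or for the human check, a short script could flag obvious identifiers (dates, phone numbers, insurance numbers, street addresses) in a transcript before it is pasted into the LLM. The patterns below are assumptions loosely tailored to German formats and would need local adaptation.

import re

# Very rough illustrative screen for obvious identifiers before pasting a transcript
# into the LLM. This is NOT a substitute for proper anonymization or a human check.
PATTERNS = {
    "date": r"\b\d{1,2}\.\d{1,2}\.\d{2,4}\b",        # e.g. 03.05.1957 (German date format)
    "phone_number": r"(?:\+49|0)[\d /-]{7,}\b",      # German-style phone numbers
    "insurance_number": r"\b[A-Z]\d{9}\b",           # German health insurance number format
    "street_address": r"\b\w+(?:strasse|straße|weg|platz)\s*\d+\b",
}


def flag_identifiers(transcript: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs that should be removed before use."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in re.finditer(pattern, transcript, flags=re.IGNORECASE):
            hits.append((name, match.group()))
    return hits


if __name__ == "__main__":
    sample = "Patient geb. 03.05.1957, wohnhaft Musterstrasse 12, Tel. 0221 1234567."
    for kind, text in flag_identifiers(sample):
        print(f"Remove before pasting: {kind}: {text}")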


Occasionally, older ChatGPT versions produce “hallucinations” in the physician’s letter module, most often fabricating details about the history or ECG findings, and can also miscalculate the alKem15t score by exceeding a category maximum. Both issues are quickly spotted and corrected by the supervising clinician. Interestingly, even with identical prompts and inputs, the model can produce a slightly different “objective” alKem15t score, hinting at a perplexing digital form of subjectivity. Whether this reflects a human-like interpretative variability or a quirk of large language models remains an open, somewhat philosophical question. In practice, however, the resulting variations seldom alter the final risk tier, especially with newer LLM models.
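
Because the over-the-maximum scoring error is mechanical, it can also be caught mechanically. The following sketch, again with illustrative names and reusing the category maxima from the score description above, flags any category in an LLM-reported breakdown that exceeds its cap.

# Illustrative plausibility check on an LLM-reported score breakdown:
# flag any category whose reported points exceed the documented maximum.
CATEGORY_MAX = {"neurological_status": 20, "cardiopulmonary_function": 20,
                "warning_symptoms": 15, "overall_condition": 10,
                "critical_labs": 15, "red_flags": 10, "disease_trajectory": 10}


def over_cap(reported: dict) -> dict:
    """Return categories whose reported points exceed the allowed maximum."""
    return {c: p for c, p in reported.items() if p > CATEGORY_MAX.get(c, 0)}


print(over_cap({"red_flags": 15, "critical_labs": 10}))  # -> {'red_flags': 15}
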
Meanwhile, preparations are underway to formally evaluate alKem15t's performance in the ED, with a data protection plan already submitted, so that its effectiveness on various outcome parameters can be measured objectively. The prompt itself is publicly available at https://chatgpt.com/g/g-67e928fca9208191b4cdcb3de9c4cd0c-alkem15t, where it currently defaults to German but can be adapted to other languages simply by specifying the output language. This solution requires zero added infrastructure, is safe and compliant if used correctly, and frees up time for what we signed up to do: genuine patient care. Let us bring Prometheus's fire into the ED; no worries, we will keep the fax machines running for now.

1.    Berg LM, Kallberg AS, Goransson KE, Ostergren J, Florin J, Ehrenberg A. Interruptions in emergency department work: an observational and interview study. BMJ Qual Saf. Aug 2013;22(8):656-63. doi:10.1136/bmjqs-2013-001967
2.    Brixey JJ, Tang Z, Robinson DJ, et al. Interruptions in a level one trauma center: a case study. Int J Med Inform. Apr 2008;77(4):235-41. doi:10.1016/j.ijmedinf.2007.04.006
3.    Neri PM, Redden L, Poole S, et al. Emergency medicine resident physicians' perceptions of electronic documentation and workflow: a mixed methods study. Appl Clin Inform. 2015;6(1):27-41. doi:10.4338/ACI-2014-08-RA-0065
4.    Hill RG, Jr., Sears LM, Melanson SW. 4000 clicks: a productivity analysis of electronic medical records in a community hospital ED. Am J Emerg Med. Nov 2013;31(11):1591-4. doi:10.1016/j.ajem.2013.06.028
5.    Taylor C, Benger JR. Patient satisfaction in emergency medicine. Emerg Med J. Sep 2004;21(5):528-32. doi:10.1136/emj.2002.003723
6.    Boudreaux ED, O'Hea EL. Patient satisfaction in the Emergency Department: a review of the literature and implications for practice. J Emerg Med. Jan 2004;26(1):13-26. doi:10.1016/j.jemermed.2003.04.003
