Pilot Development of a ‘Clinical Performance Examination (CPX) Practicing Chatbot’ Utilizing Prompt Engineering
Abstract
Objectives
Given the emphasis on competency-based education in Korean Medicine, this study aimed to develop a pilot version of a Clinical Performance Examination (CPX) Practicing Chatbot using large language models with prompt engineering.
Methods
A standardized patient scenario was acquired from the National Institute of Korean Medicine and transformed into text format. Prompt engineering was then conducted using role prompting and few-shot prompting techniques. The GPT-4 API was employed, and a web application was created using the gradio package. Internal evaluation criteria were established for the quantitative assessment of the chatbot's performance.
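As a minimal illustration of this pipeline (a sketch under assumed names, not the study's actual implementation), the code below combines a role prompt and few-shot examples with the GPT-4 chat API and serves the chatbot through gradio's ChatInterface; the scenario text and example dialogue turns are hypothetical placeholders.

```python
# Minimal sketch of a CPX practice chatbot: role prompting + few-shot prompting
# over the GPT-4 API, served as a web app via gradio. Scenario text and example
# turns are placeholders, not the National Institute of Korean Medicine scenario.
from openai import OpenAI
import gradio as gr

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Role prompt: the model acts as a standardized patient following the scenario.
SCENARIO_TEXT = "45-year-old male with two weeks of epigastric pain ..."  # placeholder
SYSTEM_PROMPT = (
    "You are a standardized patient in a Korean Medicine CPX practice session. "
    "Answer only as the patient, based on the scenario below, and do not reveal "
    "information the student has not asked about.\n\n" + SCENARIO_TEXT
)

# Few-shot prompting: example question-answer turns demonstrating the expected
# patient persona and level of disclosure (illustrative examples only).
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "What brings you in today?"},
    {"role": "assistant", "content": "My stomach has been aching for about two weeks."},
    {"role": "user", "content": "Does anything make it worse?"},
    {"role": "assistant", "content": "It gets worse when I skip meals."},
]

def respond(message, history):
    """Assemble the role prompt, few-shot examples, and chat history, then
    query GPT-4 for the standardized patient's next reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + FEW_SHOT_EXAMPLES
    for user_turn, bot_turn in history:  # gradio passes history as (user, bot) pairs
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": bot_turn})
    messages.append({"role": "user", "content": message})
    completion = client.chat.completions.create(model="gpt-4", messages=messages)
    return completion.choices[0].message.content

# Serve the chatbot as a simple web application.
gr.ChatInterface(respond, title="CPX Practicing Chatbot (sketch)").launch()
```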
Results
The chatbot was implemented and evaluated against the internal evaluation criteria. It demonstrated relatively high correctness and compliance but showed a need for improvement in confidentiality and naturalness.
Conclusions
This study successfully piloted the CPX Practicing Chatbot, demonstrating the potential of AI technology for developing educational models in the field of Korean Medicine. It also identified limitations and provided insights into future development directions.