GPs Concerned About AI Risks to Medical Practice and Patient Safety
General practitioners are increasingly voicing concerns about the integration of artificial intelligence (AI) into their daily practice. A recent review by the Medical Protection Society (MPS) revealed that GPs’ primary worries revolve around patient safety, data protection, and potential liability issues arising from AI use.
The MPS reported that all AI-related inquiries came exclusively from GPs, with no such concerns raised by secondary care physicians. This highlights a specific apprehension among family doctors regarding the evolving role of AI in primary care.
Specific areas of concern include the use of AI for transcribing consultations, generating medical certificates, clinical prompts, and processing lab results. Doctors are also wary of generative AI’s broader implications.
Dr. Ben White, deputy medical director at MPS, acknowledged members’ interest in AI tools for enhanced patient care and efficiency. However, he noted that calls to the advice line reflect significant unease about potential risks, notably concerning liability, patient safety, data security, and informed consent.
“Technology will always need to work alongside and complement the work of doctors and other healthcare professionals, and it can never be seen as a replacement for the expertise of a qualified medical professional,” said Professor Kamila Hawthorne, chair of the Royal College of General Practitioners.
These concerns echo previous findings, with a past survey indicating that approximately one in five doctors expressed apprehension about incorporating AI into their clinical workflows. The MPS Foundation has contributed to a white paper emphasizing the need for AI tools to be usable, safe, and beneficial for both patients and clinicians.
Nell Thornton, an improvement fellow at the Health Foundation, noted that the MPS findings align with their own research. She emphasized the need for clarity on regulation and professional liability to ensure AI’s responsible progress and adoption.
The Royal College of General Practitioners acknowledges the potential of AI to improve patient experience but stresses the importance of close regulation to ensure patient safety and data security.
Dr. Rosie Shire, from the Doctors’ Association UK GP committee, highlighted the appeal of AI software summarizing GP consultations. This could allow doctors to focus more on patients during appointments. However, she emphasized the need for accuracy and reliability, particularly in understanding different accents and dialects.
Shire also cautioned against over-reliance on AI-generated summaries, noting that doctors must have time to review and confirm the details. She stressed that AI cannot replace a doctor’s intuition and clinical judgment.
Liability remains a key question. Shire said clarity is needed on who bears responsibility if AI malfunctions and causes patient harm. White clarified that the MPS typically does not provide indemnity for AI software failures, expecting the software designers or producers to be liable. However, individual GPs using AI systems may still be liable for any resulting harm.
What’s next
As AI continues to evolve, ongoing dialogue and clear guidelines are crucial to address GPs’ concerns and ensure the safe and effective integration of AI into medical practice. Further research and policy development will be essential to navigate the ethical and legal complexities of AI in healthcare.
