A recent JAMA Network Open study reveals widespread skepticism among patients regarding artificial intelligence (AI) in healthcare. Many doubt that hospitals and clinics will handle AI ethically: 65.8% of participants expressed low confidence that their healthcare system would use AI responsibly, and 57.7% doubted it would protect them from harm caused by AI-driven tools.

The research, based on responses from over 2,000 individuals, indicates that trust in AI correlates with overall faith in the healthcare system. Patients who had previously faced discrimination were especially doubtful that AI would be implemented fairly. Women showed greater hesitation than men about AI's responsible use, though both genders expressed similar concerns about its potential dangers.

Interestingly, neither familiarity with AI nor a strong grasp of health concepts significantly influenced trust levels. Experts stress the urgent need for transparent communication and clear guidelines to address patient apprehensions.

Further reinforcing these findings, a 2024 Athenahealth/Dynata survey highlighted that trust in AI varies depending on its application. While 40% of respondents supported AI assisting doctors in diagnoses, only 17% were comfortable with AI taking over patient interactions.

With AI evolving rapidly in medicine, public concern underscores the need for strict regulatory frameworks. About 57% of respondents favored government regulation of healthcare AI, and 50% supported comparable oversight of technology firms.

To foster confidence, healthcare institutions must prioritize ethical AI deployment, ensuring both safety and accountability.