The Pulse

As health systems adopt therapy chatbots and other mental health AI tools, clear governance, documentation, and clinician oversight are key to safe and ethical use. Early clarity on liability, validation, and decision rights can prevent adoption slowdowns. With federal funding driving prevention and population-level AI, organizations should prioritize equity, privacy, and bias safeguards while aligning governance with national standards to build trust and enable real-world care innovation. Taken together, this quarter shows that AI risk in healthcare is no longer hypothetical; it is operational, governance-driven, and increasingly visible to regulators.

Three things shaping AI in healthcare this quarter:

AI in the therapist's office: 2025 Practitioner Pulse Survey (APA, 2025)

The APA survey finds AI use among psychologists has grown substantially in 2025, with far fewer clinicians now reporting that they "never" use AI at work compared with 2024. Most current use is concentrated in education and administrative support, while only a small minority use AI for diagnosis or direct client-facing support, reflecting both interest and caution.

Why it matters: Rapid growth in AI adoption among psychologists signals that these tools are going mainstream; organizations should support low-risk administrative uses while establishing safeguards for client-facing applications.

Barriers and enablers for generative artificial intelligence in clinical psychology: a qualitative study based on the COM-B and theoretical domains framework (TDF) models (Springer Nature, 2025)

This qualitative study interviewed 14 practicing psychologists and found 18 key factors influencing the adoption of generative AI in therapeutic work—12 barriers (e.g., limited AI knowledge, confidentiality concerns, role-threat) and 6 enablers (e.g., AI training, administrative relief). The findings emphasize that successful integration depends not just on technology, but on clinician capability, organizational support, and alignment with therapeutic identity.

Why it matters: Adoption of generative AI in psychology hinges on training, identity, and organizational backing, guiding where leaders should invest to avoid failure and resistance.

FDA digital advisers confront risks of therapy chatbots, weigh possible regulation (STAT, 2025)

Federal regulators are evaluating safety risks associated with AI-driven therapy tools, a step that could determine future standards for evidence, supervision, and patient protection in mental health AI.

Why it matters: Emerging rules for therapy chatbots will shape procurement, clinical supervision, and liability models for digital mental health tools.

Psychology & Behavioral Health

As conversational AI enters therapy, clinicians should define clear boundaries, monitor tone and accuracy, and ensure these tools enhance rather than replace the therapeutic relationship. Because chatbots still fall short in empathy, cultural sensitivity, and crisis response, thoughtful oversight and patient education are essential. Clinicians should routinely ask about mental health app use, discuss limitations and safety planning during consent, and start with low-risk applications like documentation or psychoeducation. Framing AI tools as quality-improvement aids rather than substitutes for clinical judgment helps maintain trust, safety, and care integrity.

Medicine & Clinical Innovation

To integrate AI safely into care, start small: pilot tools within one clinical unit, train staff, and set clear escalation and human-override protocols. Involving clinicians early in tool selection and workflow design ensures AI supports real clinical needs, enhances efficiency, and protects patient relationships. Used as an assistive layer, AI can surface evidence, highlight uncertainty, and streamline tasks like literature scans, early-warning analytics, and report translation, but outputs should always be verified. Modular, interoperable systems and gentle engagement tools can strengthen collaboration and patient involvement, while ongoing validation and outcome monitoring keep AI aligned with clinical judgment and accountability.

Connect

Helping healthcare professionals adopt AI ethically and responsibly.

Produced by Wayde AI with AI assistance.
