The Pulse
Three things shaping AI in healthcare this fortnight:
Stanford AI Experts Predict What Will Happen in 2026 — Stanford experts expect 2026 to focus less on “can AI do it” and more on evaluation, safety, and real-world impact in areas like clinical care and mental health. (Stanford University HAI, 2025)
AI Use at Work Rises — Gallup reports that nearly half of U.S. workers now use AI, showing that AI‑supported knowledge work is becoming routine while healthcare still lags in structured adoption. (Gallup, 2025)
AI Meta-Hallucinations In Mental Health Are Giving Out Unsafe Self-Explanatory Psychological Guidance — Commentators warn that people may internalize unsafe or distorted AI mental health advice, raising new concerns about suggestibility, pseudo‑relationships, and perceived authority of chatbots. (Forbes, 2025)
Takeaway: AI is spreading fast and getting more capable. Psychology and healthcare now need tighter evaluation and guardrails, especially as people begin to treat AI like a therapist.
Psychology & Behavioral Health
Opportunity and Challenge: The Ethical Implications of Artificial Intelligence for Consulting Psychologists (APA, 2025)
APA highlights how AI is entering consulting work, from assessment to training, while raising issues of competence, confidentiality, bias, and role clarity. Psychologists are urged to update skills and explicitly map AI use onto existing ethical standards.
Clinician Cue: Add AI to your informed consent, contracts, and ethics reviews so clients know when tools are used, what data they touch, and where human judgment remains primary.
Use of generative AI chatbots and wellness applications for mental health: An APA health advisory (APA, 2025)
The APA health advisory cautions that generative AI chatbots and wellness apps are not substitutes for mental health treatment, even when they feel conversational or personalized. It highlights risks around accuracy, privacy, crisis response, and blurred boundaries, and urges users and professionals to treat these tools as adjuncts, not providers.
Clinician Cue: When patients mention using mental health chatbots or wellness apps, ask specific questions about what they are using them for, clarify that these tools are not therapy, and review safety plans and referral options for higher‑risk concerns.
Medicine & Clinical Innovation
Researchers Discover Bias in AI Models That Analyze Pathology Samples (Harvard Medical School, 2025)
Researchers show pathology AI can latch onto site‑specific artifacts and other spurious cues, leading to hidden performance gaps across hospitals and populations. Models that look strong on test sets may still introduce biased risk estimates into clinical decisions.
Quick Win: When AI‑assisted pathology is referenced, ask whether it was validated on your site and population, and document these limits in case discussions.
Unhealthy alcohol use detection in electronic health records: A comparative study using natural language processing (ScienceDirect, 2025)
NLP models improved detection of unhealthy alcohol use from clinical notes compared with billing codes alone. Still, flagged cases require clinical verification and careful integration into workflows.
Quick Win: Use NLP screening tools to flag behavioral health risks hidden in routine notes, then pair alerts with brief, open-ended questions in your next patient visit to uncover needs without labeling.
Ethics & Oversight
Policy & Compliance: Expectations around AI use in healthcare are tightening even where formal rules lag, so assume that clearly documenting when and how you use AI will become part of standard practice.
Bias & Transparency: AI can sound polished and confident while still missing the mark, especially across different populations or settings, which makes plain-language explanations and openness about limits more important than the tech itself.
Accountability & Governance: As AI starts to feel relational or authoritative to patients and staff, decide who owns its use and who is accountable when it informs decisions, flags risk, or offers guidance.
Wayde AI Insight
AI is no longer something happening to healthcare; it is happening inside it. Patients turn to it for reassurance, clinicians lean on it to think and document, and organizations are threading it into everyday workflows. The bigger risk isn’t replacement, but quiet influence on decisions, expectations, and trust without enough reflection. The most effective practitioners right now aren’t the most technical but the most curious: they ask where a tool helps, where it might mislead, and how to keep the human relationship in front. Treat AI like a capable junior assistant—great for reducing noise, spotting patterns, and saving time, but never a stand‑in for judgment, care, or accountability.
Connect
Helping healthcare professionals adopt AI ethically and responsibly.
Produced by Wayde AI with AI assistance.
