The Pulse

Three things shaping AI in healthcare this fortnight:

  • As Trump rolls back protections, Governor Newsom signs a first-of-its-kind executive order to strengthen responsible AI use — California is introducing stricter requirements for AI vendors, including privacy, security, and transparency safeguards, while expanding the use of generative AI in public services. (Gov.Ca, 2026)

  • HHS Aligns Health Technology Leadership to Deliver Data Liquidity, Affordability, and an AI-Enabled Health Care System for Americans — HHS is consolidating key technology and health IT functions to better integrate AI into healthcare systems, with a focus on improving data access, interoperability, and secure infrastructure. (HHS, 2026)

  • Why AI health chatbots won’t make you better at diagnosing yourself — A study found that people using AI chatbots were less accurate in identifying conditions and no better at choosing care than those using typical sources, despite the models performing well in isolation. (The Conversation, 2026)

Takeaway: AI is advancing across policy, systems, and patient use, but real-world outcomes still depend on how these tools are implemented and understood.

Psychology & Behavioral Health

As AI use surges among psychologists, so do concerns about risks (APA, 2026)

AI use among psychologists is increasing rapidly, with a notable shift from non-use to both frequent and occasional use in just one year. At the same time, concern is rising: active users worry most about hallucinations, while those less engaged worry more about data privacy and social harm. Across the board, issues like bias, lack of testing, and limited transparency remain central concerns.

Clinician Cue: Expect patients and colleagues to use AI more often. Build basic literacy so you can evaluate outputs, address concerns, and guide appropriate use in practice.

How does artificial intelligence improve ophthalmology education outcomes?—The mediating role of learning motivation and self-efficacy (Frontiers, 2026)

In this study, higher AI use was associated with better academic performance, with motivation and self-efficacy playing a key role in that relationship. Students who felt more confident and engaged appeared to benefit more from AI-supported learning. AI literacy further strengthened this effect, suggesting that understanding how to use AI enhances its impact.

Clinician Cue: AI may be most effective when it supports engagement and confidence. Focus on how patients and trainees interact with tools, not just the tools themselves.

Medicine & Clinical Innovation

How AI could change the way physicians read heart ultrasounds (AMA, 2026)

New AI models trained on large volumes of echocardiographic data aim to analyze multiple cardiac features at once and potentially detect signals beyond the heart. Early research suggests these systems could expand screening capabilities and surface findings that might otherwise be missed. Ongoing clinical trials will determine whether these tools improve outcomes and integrate effectively into real-world workflows.

Quick Win: Stay aware of emerging AI-supported imaging tools, especially those that may surface incidental findings or expand screening beyond the original clinical question.

Can AI manage an entire medical decision process? (MedicalXpress, 2026)

In simulated clinical scenarios, an AI system performed at levels comparable to medical students and showed similar diagnostic accuracy while completing cases more quickly. Its decision patterns often aligned with expert clinicians, though it relied more heavily on testing and communicated less effectively. The findings suggest potential for workflow support, but not readiness for independent decision making.

Quick Win: Use AI as a second set of eyes to support decision processes, while maintaining clinician oversight and judgment in final care decisions.

Ethics & Oversight

  • Policy & Compliance: New state-level requirements are emphasizing responsible AI use, with stricter expectations around privacy, security, and vendor accountability.

  • Bias & Transparency: Clinicians report growing concern about hallucinations, bias, and limited testing, highlighting the need for clearer visibility into how AI systems generate outputs.

  • Accountability & Governance: As AI becomes more integrated into workflows and patient use increases, clinicians remain responsible for interpretation, decision making, and appropriate use.

Wayde AI Insight

AI is moving quickly into policy, education, and clinical workflows, but its value is not automatic. Strong performance in controlled environments does not guarantee better outcomes in practice, especially when patients and clinicians interpret results differently. As adoption grows, variation in AI literacy and expectations will likely widen gaps in how effectively these tools are used across settings. The consistent signal across these updates is that AI works best when paired with informed users who understand its limits. Technology can extend clinical reach, but judgment, context, and trust still shape the quality of care.

Connect

Helping healthcare professionals adopt AI ethically and responsibly.

Produced by Wayde AI with AI assistance.
