Insights

April 13, 2026: This Week in AI

Move fast and break things, whatever the cost. AI labs are pushing the limits of what they can deliver with little concern for the cost: to mental health, to bias and behavior, and to cybersecurity.

Here’s this week in AI for April 13. Past weeks’ themes persist: move fast and break things, whatever the cost. Anthropic, Meta, and other AI labs are pushing the limits of what they can deliver with little concern for the impact: to mental health, to bias and behavior, and to cybersecurity.

1. Influencing buying behavior

AI is moving from helping you shop to shaping what you buy—and how much you pay—through personalized pricing, recommendations, and real-time influence. While this increases efficiency for companies, it’s raising consumer concern and regulatory scrutiny around fairness and transparency.

  • 62% of Americans are concerned about personalized pricing based on their data (Talker Research, NY Post). Reports suggest this can lead to different customers seeing different prices for the same product based on browsing behavior, location data, purchase history, and device data.
  • The U.S. House Committee on Energy and Commerce is investigating airlines and travel companies over AI-driven dynamic pricing practices (Reuters), including using demographic data to influence pricing and potential price discrimination.
  • Meta’s new AI model, Muse/Spark, focuses on integration into shopping experiences, guiding product decisions and influencing a customer’s “discovery” of products (NY Post, The New York Times). Meta also struck a deal with Broadcom for custom AI chips as it scales infrastructure (Reuters).

2. Anthropic’s “Mythos” (Project Glasswing)

Anthropic’s new “Mythos” model (previewed as Project Glasswing) represents a step-change in AI capability—especially in cybersecurity and reasoning—while also raising serious concerns about misuse, national security risks, and who should have access to systems this powerful.

  • Anthropic is limiting use to select partners (including government and financial institutions), reflecting concerns about misuse and impact (The New York Times). If AI can find vulnerabilities or simulate cyberattacks, it doesn’t only help cybersecurity teams—it can also lower the barrier for bad actors (NPR, PBS NewsHour).
  • Policymakers are increasingly involved, with debate over whether models like this are too dangerous for broad release (The Hill). Mythos is claimed to be so advanced that it isn’t being released publicly, which means a small group (governments, big companies) may get access to capabilities that others don’t.
  • Mythos reflects intensifying competition among leading AI labs to push capabilities without much concern for safety, security, and access (The Economist, PBS NewsHour).

3. My AI doctor

AI is rapidly expanding its reach in healthcare and medical research, from drug discovery to patient support. But its biggest impact may be how patients and caregivers use AI to make decisions outside the system, often before doctors are involved, a sign of the strain on healthcare access.

  • AI is accelerating drug discovery timelines. Novo Nordisk is partnering with OpenAI to speed up early-stage research and identify drug targets faster (CNBC).
  • Cancer patients are increasingly turning to AI to understand diagnoses and explore treatments, despite inconsistent accuracy and risk of misinformation (The New York Times). Related reporting has documented intense personal reliance on AI during illness (The Free Press).
  • AI-enabled robots are being explored to address caregiver shortages, raising tradeoffs between efficiency and human connection (The Washington Post).

Follow-up: AI psychosis

A common theme each week is AI’s impact on mental health. Andrej Karpathy warned that developers are already experiencing “AI psychosis” (The New Stack). Research published in JAMA Psychiatry raised concerns about AI being used as a substitute for mental health support, despite its limitations and documented cases of harm (NPR). Experiments with AI journaling show people forming deep emotional reliance on AI as a thought partner (The Guardian).

MIT Technology Review published its “Current State of AI” analysis this week with an interesting finding: AI is advancing quickly and adoption is growing, but progress is shifting from exponential to incremental, with improvements becoming harder to achieve. Physical, real-world constraints are starting to make scaling AI harder and more expensive.

Let's work together

If you're exploring how AI fits into your business, struggling to move from idea to execution, or need experienced design leadership to guide the way—I'd love to talk.