
March 30, 2026: This Week in AI

This week in AI didn’t feel like a story about new capabilities—it felt like a story about conflict: a quiet war around your job, data, trust, and choice as AI is embedded, scaled, and contested.

The headlines this week point to something deeper than product launches (hell, there were a lot of them). AI is no longer just improving the quality of its output: it’s being integrated into all parts of our lives and challenged at the same time. And as that happens, new questions emerge:

  • Can you trust the systems you’re being asked to use?
  • Do you have a choice in whether you use them?
  • Who decides what “good” or “safe” looks like?
  • And who benefits as AI becomes the default at work and home?

At the center of all of it is one word: trust.

1. The constant challenge of security and authority

Trust isn't just about the models' accuracy—it’s systemic.

  • Research showing users increasingly rely on AI for personal advice, even in high-risk contexts. AI sycophancy (the tendency to simply agree with you) shows up in roughly 50% of responses. Stanford Professor Dan Jurafsky says it is "a safety issue, and like other safety issues, it needs regulation and oversight." MIT Technology Review reports that AI agents are increasingly treated as "untrusted" because users are ever more likely to form false beliefs that AI responses reinforce.
  • A rise in AI-powered cyberattacks, forcing companies to rethink security models entirely. Companies like Microsoft are investing heavily in red teaming AI systems, stress-testing them the way hackers would.
  • Platforms like Wikipedia pushing back on AI-generated content to preserve credibility.
  • App stores are being flooded with low-quality AI-generated products (“AI slop”), making it harder to tell what’s real, safe, or worth using.

I was a trained crisis counselor with The Trevor Project for years, and it was a huge shift to watch kids slowly stop asking if I was AI. At first, they cared ("are u a robot"). Now, people just assume and expect it.

This is the key shift: the risk isn't just that AI gets something wrong; it's that the system encourages you to trust it when maybe you shouldn't. Whether it's an interface that sounds confident when it's wrong or a vibe-coded system that lacks the security users assume is there, trust is being artificially manufactured, not earned.

2. The AI skill gap is forming fast

At the same time, a different kind of divide is opening up fast: not access to AI, but effective use of it. This week made that clear:

  • Power users and AI-native companies are pulling ahead—fast. Companies like Meta are now mandating AI usage as part of employee performance.
  • Schools and governments are starting to introduce AI literacy as a core skill, not an elective. Cities like Boston are now introducing AI literacy for every high school student.
  • NVIDIA's Jensen Huang explicitly called on workers to adapt or risk being left behind, calling AI skill the "baseline, not a specialty." That's not just tech workers, but blue-collar roles in manufacturing and the trades, too.

I had a conversation with a friend this week about constantly feeling behind, struggling to keep up with how fast AI is moving. But the reality is, most people still treat AI like a search engine.

The people getting disproportionate value from AI are reimagining how their work gets done and how they think, execute, and iterate through tasks.

3. AI in everything, want it or not

AI is no longer a product you choose—it’s in the infrastructure. And this week showed how aggressively that’s happening:

  • AI models are being embedded directly into devices (AI PCs), not just cloud tools, like HP's new AI-first PCs.
  • Financial products, like Amex Graphite, are bundling AI into credit cards and services.
  • Entire industries—from energy to media—are reframing around AI as a core input.

Signs point to something deeper: the rise of “cognitively outsourced” behavior, where thinking itself gets delegated to the tool. This is a shift from "help me do something" to "just do it for me."

You don't choose to use AI. It's just there: embedded, assumed, and unverified.

4. AI progress far outpaces policy

While all of this accelerates, policy is lagging further and further behind:

  • Parts of the EU AI Act are already being delayed, even as deployment accelerates. This is the first comprehensive law regulating AI usage based on risk to people and society.
  • Disputes between AI companies (OpenAI, Anthropic) and government agencies are surfacing tensions. Decisions about how AI is used in defense aren’t being set by clear policy—they’re being negotiated in real time between governments and private companies, shaping global norms without guardrails or precedent.

We’re moving from abstract debates about “AI safety” to very real questions:

  • Who is liable when AI systems fail?
  • What counts as “high-risk” usage?
  • How do you regulate systems built from multiple vendors and layers?
  • How do you regulate when progress outpaces policy?

And here’s the uncomfortable truth: the faster AI becomes normalized and embedded, the harder it is to regulate retroactively.

Taken together, this week points to something bigger than incremental progress. MIT Technology Review calls this the "AI War": the fight over who controls the systems we rely on, who defines trust, who benefits the most, and who gets left behind.

The real question isn't whether AI will be powerful. It's whether the systems around it will give people more agency or slowly take it away.


Want to discuss this week's themes in more depth?

Join the conversation on Substack

Let's work together

If you're exploring how AI fits into your business, struggling to move from idea to execution, or need experienced design leadership to guide the way—I'd love to talk.