
[ PRIVACY Forum ] Script of my national radio report yesterday re


Date: Tue, 3 Mar 2026 08:17:29 -0800
From: Lauren Weinstein <lauren@vortex.com>
To: privacy-dist@vortex.com
Message-ID: <20260303161729.GA18486@vortex.com>


This is the script for my national radio report yesterday discussing
who is responsible for the accelerating AI mess. As always there may
have been minor wording variations from this script as I presented the
report live on air.

 - - - 

First I want to quickly mention an incident from the weekend. We've
talked about problems with Waymo robotaxis before, and over the
weekend one blocked an EMS ambulance trying to reach the scene of a
mass shooting in Austin. I'll have more to say about this in the near
future.

Anyway, for years there have been various claims that "AI is going to
become super intelligent and take over the world, and turn us all into
pets or batteries or something". And many of us in the computer
science community have been saying it's not the machines you have to
worry about, it's the people who build, control, and use these
systems.

Billionaire Jack Dorsey, who you might remember as the CEO of Twitter
prior to Twitter flying off and chirping bye bye, just announced that
he's firing something like 40% of the employees of his financial
services firm Block, which you may know as Square, to replace them
with AI, and he says most firms will probably do the same. That's
about 4000 humans being laid off in his case. Of course how well this
will work out for him is anyone's guess, since various firms that have
tried replacing large percentages of their employees with AI have had
to hire many back when things didn't go well, but I'm sure as a member
of the billionaire class he'll find a way to personally weather any
possible resulting storms.

Also on this theme of people being the issue, we have the billionaire
CEO of AI firm Anthropic announcing that the firm was abandoning its
core AI safety provisions, apparently primarily because other firms
were doing much the same and you know, competition. So it's a race to
the bottom. But here's the kicker, this seems not to have been enough
of a bottom for Secretary of Defense slash War Pete Hegseth, who
demanded even fewer safety restrictions on Pentagon use of Anthropic's
AI. Cutting to the chase, Anthropic refused and a few days ago the
administration banned Anthropic from federal work. There's a pile more
frankly utterly bizarre aspects to this but that's essentially the
saga.

The point being again that these issues of who determines what's safe
and unsafe for AI are in the hands of people, not the machines. Oh, and
by the way, this controversy apparently pushed Anthropic's Claude
chatbot to #1 on the Apple App Store!

And that takes us to another story that touches on AI and weapons. You
may recall that I very recently reported here on a new laser weapon
system shipped to Texas by the Pentagon that was supposed to be used
against incursions by "cartel drones" across the border that have
ostensibly been a problem. The FAA ended up suddenly closing the
airspace over El Paso due to fears of the system accidentally taking
down a commercial or civil aircraft, after the laser shot down what
was first reported to be a cartel drone but turned out to be a party
balloon.

Well guess what, there's apparently been another similar incident in
Texas, but this time the system actually shot down a drone, not a
party balloon! But unfortunately, it was reportedly a U.S. government
border patrol drone that the system shot down, not a cartel drone. So
the FAA did another airspace closure over similar concerns as last
time. You can't make this stuff up. Our weapon systems are shooting
down our own aircraft.

It's pretty obvious why so many experts are so concerned about AI
without suitable safeguards being deployed to control advanced
weapons.

Vast numbers of jobs and lives and other aspects of our society are
being negatively impacted by AI madness, and many observers are
questioning the decisions to so rapidly deploy these systems. So if
you've got the feeling that AI is quickly getting out of control, you
definitely aren't wrong. But don't blame AI. Don't blame the machines.
The people building, deploying, and using these AI systems are the
responsible parties. THAT'S the reality.

 - - -
--Lauren--
Lauren Weinstein 
lauren@vortex.com (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
         PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
privacy mailing list
https://lists.vortex.com/mailman/listinfo/privacy
