[23449] in Privacy_Forum


[ PRIVACY Forum ] Script of my national radio report yesterday on the very bad week for AI

daemon@ATHENA.MIT.EDU (Lauren Weinstein)
Tue Apr 7 11:27:45 2026

Date: Tue, 7 Apr 2026 08:20:05 -0700
From: Lauren Weinstein <lauren@vortex.com>
To: privacy-dist@vortex.com
Message-ID: <20260407152005.GA1452@vortex.com>
Content-Disposition: inline
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Errors-To: privacy-bounces+privacy-forum=mit.edu@vortex.com


This is the script of my national radio report yesterday on the very
bad week for AI. As always, there may have been minor wording
variations from this script, as I presented this report live on air.

 - - - 

Yeah, this has been a remarkably bizarre week for AI. Oh, but before we
get into that, I do have a bulletin. Remember, we've discussed in the
past how the U.S. FCC was talking about banning consumer-grade
Internet routers made outside the U.S., just as they did for drones,
and on the same basis. They claim they have "security concerns" but
have never described or demonstrated them publicly, so it's hard not
to feel like they're treating us as ignorant fools with these claims.
Anyway, the FCC went ahead and did ban consumer-grade Internet routers
made outside the U.S. And this of course affects a far larger number
of people than drones, because if you've got Internet in your house,
use Wi-Fi, and so on, you probably have a covered router, and as a
practical matter virtually none of the routers you're likely to have
are built in the U.S. So it's going to be a mess. Existing models can
still be manufactured, imported, and sold, but as new models are
designed with better performance and even better security, U.S.
consumers are blocked from them under the current rules unless
exceptions are made. I'll discuss this more in the future.

OK, about AI's awful week. So some days ago, Anthropic made a little
boo boo. Well, actually, a very big boo boo. They accidentally
publicly posted the source code for their Claude Large Language Model
AI on GitHub, a very popular source code repository. They quickly
realized their error and closed access, but by then it had ALREADY
been noticed and copies were propagating everywhere. Anthropic
reportedly got thousands of copies pulled down, but it's still out
there and has even already been translated into other programming
languages.

And while this isn't going to crush Anthropic or anything like that,
it did give quite a bit of insight into Claude's design and likely
future features, including agentic capabilities and lots of other
things.

Perhaps even more relevant is that this kind of screw-up doesn't
exactly give anyone much faith in how safe Anthropic's operations are
overall, and that just adds again to the mass of serious concerns
about Generative AI and how Big Tech is managing it.

BUT WAIT THERE'S MORE! 

You may have heard of OpenClaw -- and no, this isn't a character from
an old TV series like Get Smart or Inspector Gadget. OpenClaw is a
public open source agentic AI that very quickly became enormously
popular because it could be so easily deployed. And that enthusiasm
among AI boosters wasn't really held back by concerns that all sorts
of possible problems and exploits might be part and parcel of
OpenClaw, and now, sure enough, that's the case. Without getting into
technical details, it's a pretty bad situation, and frankly, as you
can imagine, it hasn't come as a surprise to many observers.

If you're somewhat, uh, shall we say, skeptical of? concerned about?
these AI systems, as obviously I and many others inside and outside
the technical community definitely are, these events inspire even
greater concerns.

Because now we're seeing more evidence of the inherent problems, not
just as deployed by Big Tech but even in public open source systems.
This isn't just a matter of big corporations trying to exploit us by
shoving this tech down our throats, but also a set of fundamental
problems that are likely inextricably linked to the very concepts
involved in the creation and deployment of these Large Language
Model AI systems, whether commercial or free, closed source or open
source. And agentic AI, which is where the big push is now, is of peak
concern, since, running amok, these systems can do enormous damage to
users.

These kinds of events cannot be minimized or ignored. They are
flashing red warnings of even more numerous and serious problems yet
to come. And if we don't take these warnings as a clear signal for
meaningful regulation and control, we'll have nobody but ourselves to
blame when future AI fiascos make today's AI problems look like
vacations in comparison.

 - - - 


 - - -
--Lauren--
Lauren Weinstein 
lauren@vortex.com (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
         PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
privacy mailing list
https://lists.vortex.com/mailman/listinfo/privacy
