[ PRIVACY Forum ] Script of my national radio report re controversy
Date: Tue, 17 Mar 2026 07:48:05 -0700
From: Lauren Weinstein <lauren@vortex.com>
To: privacy-dist@vortex.com
Message-ID: <20260317144805.GA5679@vortex.com>
Content-Disposition: inline
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Errors-To: privacy-bounces+privacy-forum=mit.edu@vortex.com
This is the script of my national radio report yesterday on the
controversy over Grammarly's "Expert Review" AI personas of actual
people, which were created without permission. As always, there may
have been minor wording variations from this script as I presented
this report live on air.
- - -
Yeah, it seems like every time we explore an outrageous issue
regarding AI, we want to say, well, it can't get worse than this one.
But then a bit of time goes by and, whammo, there's another one that
demonstrates that when it comes to these Big Tech AI CEOs there's just
no bottom, and they seem to live in a world completely different from
the one most of us inhabit.
So it's important to remember that, by and large, the entire
generative AI Large Language Model industry is based on the mass
appropriation of data and content by these powerful firms to train
their AIs, typically without any direct compensation offered to the
sources or any permission asked in advance. And this has taken place
on a colossal, mind-numbingly enormous scale, apparently driven
largely by these firms' view that they will never be held to account
for what they do so long as they continue to have friends in high
political places.
And of course we've talked over time about a whole range of
staggeringly awful aspects of these AI systems including how they're
trained, the kinds of misinformation that have become so associated
with them, and how generally these AI firms don't want to take
responsibility for damages done to anyone who uses those systems --
which are becoming increasingly difficult to avoid using.
Well, today we have yet another example demonstrating that the raw
hubris frequently associated with these AI firms seems to be
effectively unlimited. You may know of the Grammarly website, which
provides spelling, punctuation, and grammar checking and other
services related to written materials, and which they've increasingly
pitched as an AI-powered service. Recently they changed the name of
the firm itself from Grammarly to something a bit less modest. The
company is now called SUPERHUMAN. Never expect modesty when it comes
to Big Tech.
So, Superhuman slash Grammarly came up with what they thought was a
sure winner of an AI feature, which they call "Expert Review". They
created virtual AI personas of actual people whom they considered to
be notable, both living and deceased, so that users could have their
writing "critiqued" by these virtual creations. They included famous
scientists, writers, columnists -- you get the idea. And then the AI
virtual person would supposedly offer opinions and suggestions,
ostensibly as the real person would, or in the case of deceased
persons, as they supposedly would have if they were still alive.
And of course you'd expect that Superhuman paid these actual people --
at least the living ones -- handsomely to participate in this, right?
WELL, you know where this is going: they didn't pay these people
handsomely; they didn't pay them anything. But at least they asked for
permission before using their personas, right? NO, of course not --
permission wasn't asked. Well, this has all blown up into a massive
controversy -- a class action case has apparently already been filed.
Many of the persons whom Superhuman simulated were understandably very
upset when they found out about this, not only because it was done
without payment or their permission, but because of the obvious risks
of reputational damage from whatever those virtual AI personas spout.
So in response Superhuman set up an opt-out email address -- totally
inadequate -- and then, as the controversy continued, took the service
down, at least for now, saying that they had "missed the mark".
This is classic Big Tech "move fast and break things" -- "plow ahead
without permission and apologize later if you have to" -- BIG HUBRIS
thinking. It's yet another example of why society should demand that
the AI industry be subject to very stringent regulations, no matter
how many politicians in either party try to protect those AI firms.
This is a matter of basic humanity that transcends technology and
politics. We need strong AI regulations. We needed strong AI
regulations before now, but at the very least we need them
immediately, before even more AI abominations are spewed from those
Big Tech firms that seem to care plenty about their bottom lines, but
not at all about us.
- - -
L
- - -
--Lauren--
Lauren Weinstein
lauren@vortex.com (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
privacy mailing list
https://lists.vortex.com/mailman/listinfo/privacy