[33867] in RISKS Forum
Risks Digest 34.90
daemon@ATHENA.MIT.EDU (RISKS List Owner)
Wed Mar 18 20:03:04 2026
From: RISKS List Owner <risko@csl.sri.com>
Date: Wed, 18 Mar 2026 17:11:41 PDT
To: risks@mit.edu
RISKS-LIST: Risks-Forum Digest Wednesday 18 March 2026 Volume 34 : Issue 90
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/34.90>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents:
Artificial Intelligence in the Operating Room Leads to Occasional Botches
(William Yurcik)
My Self-driving Car Crash (Raffi Krikorian in The Atlantic)
A Possible U.S. Government iPhone-Hacking Toolkit Is Now in the
Hands of Foreign Spies and Criminals (WiReD)
Canada orders OpenAI safety review after grilling Sam Altman over security
lapses (Politico)
Armed Robots Take to the Battlefield in Ukraine (Vitaly Shevchenko)
Tennessee grandmother jailed after AI facial recognition error links her
to fraud (The Guardian)
Google Translate logs expose plot (OC-media)
Anthropic Sues Trump Administration for Targeting Company (WSJ)
Epstein files reveal shoddy spelling and ghastly grammar (Town and Country
Magazine)
AI chatbot kids' toys (BBC)
ChatGPT, Other Chatbots Approved for Official Use in the Senate (NYTimes)
To avoid accusations of AI cheating, college students are turning to AI
(via Steven Bacher)
Grammarly Disables AI 'Expert Review' After Backlash From Authors and
Journalists (Decrypt)
AI is getting scary good at finding hidden software bugs -- even in
decades-old code (ZDNET)
Meta and TikTok let harmful content rise after evidence outrage
drove engagement, say whistleblowers (BBC)
Americans Recognize AI as a Wealth Inequality Machine, Pollster Finds
(Gizmodo)
Russia is sharing satellite imagery and drone tech with Iran (L.Weinstein)
'Negative Light' Used to Send Secret Messages Inside Heat (Alan Bradley)
Trump funding solicitation offers donors private national security briefings
(News)
AI as nukes (Lauren Weinstein)
The Register and Unrecognized Risks (Cliff Kilby)
District denies enrollment to child based on license plate reader data
  (Bob Gezelter)
Online scams and AI reader data (The Register via Rob Slade)
On Moltbook (from Bruce Schneier)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Fri, 6 Mar 2026 15:43:46 +0000
From: "Yurcik, William (CMS/OIT)" <William.Yurcik@CMS.hhs.gov>
Subject: Artificial Intelligence in the Operating Room Leads to
Occasional Botches
As of 9 Feb 2026, over 1,350 medical devices utilizing Artificial
Intelligence (AI) were Food and Drug Administration-approved. These medical
devices were linked to 182 product recalls, approximately 43% of which
occurred within a year of implementation. According to litigation, some of
the AI-enhanced medical devices led to surgical errors with life-altering
consequences, while others misidentified human body parts.
Acclarent's TruDi Navigation System, an AI-assisted chronic sinusitis (sinus
inflammation) treatment, allegedly malfunctioned on multiple occasions
during surgery, leading to a puncture of the base of one patient's skull,
and strokes after a major arterial injury to two additional patients while
doctors were using the TruDi Navigation System.
Unrelated to TruDi, an AI-assisted heart monitor in production by Medtronic
failed to recognize serious irregularities with patients' heartbeats on at
least 16 different occasions.
A different AI tool named Sonio Detect, built to assist with fetal imagery
in prenatal ultrasounds, was found in June 2025 to be misidentifying body
parts, though no safety issues were identified.
DSI Analyst Comments: This should serve as a reminder to double-check any
AI-assisted work. There are reports that big tech companies are beginning to
re-hire developers who were laid off in large numbers last year and replaced
by AI, because of coding errors created by AI tools.
------------------------------
Date: Wed, 18 Mar 2026 10:09:37 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: My Self-driving Car Crash (Raffi Krikorian in The Atlantic)
Raffi Krikorian, *The Atlantic*, April 2026
The Tesla was driving perfectly --- until it wasn't:
Uncontrolled crash into a wall.
------------------------------
Date: Mon, 16 Mar 2026 15:25:07 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: A Possible U.S. Government iPhone-Hacking Toolkit Is Now in the
Hands of Foreign Spies and Criminals (WiReD)
A highly sophisticated set of iPhone hijacking techniques has likely
infected tens of thousands of phones or more. Clues suggest it was
originally built for the US government.
Google notes that Apple patched vulnerabilities used by Coruna in the
latest versions of its mobile operating system, iOS 26, so its
exploitation techniques are only confirmed to work against iOS 13
through 17.2.1.
https://www.wired.com/story/coruna-iphone-hacking-toolkit-us-government/
------------------------------
Date: Thu, 5 Mar 2026 10:26:24 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Canada orders OpenAI safety review after grilling Sam Altman over
security lapses (Politico)
Canada's AI minister said the tech CEO expressed “horror and responsibility
in general” during a virtual meeting about a recent mass shooting.
OTTAWA — Following one of Canada's deadliest mass shootings, OpenAI CEO Sam
Altman was grilled by a senior government official Wednesday over why the
company failed to alert police to the suspected shooter's seemingly violent
ChatGPT messages and why it failed to stop the user from bypassing a ban.
Canada's AI Minister Evan Solomon met virtually with Altman, who committed
to providing a “full report” on how OpenAI's systems catch dangerous users
and stop banned users from creating new accounts. Solomon said he’s
enlisted Canada's Artificial Intelligence Safety Institute to test OpenAI's
technology to make sure it isn’t a danger to the public. [...]
https://www.politico.com/news/2026/03/05/canada-openai-safety-review-altman-00814165
------------------------------
Date: Wed, 11 Mar 2026 14:52:06 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: Armed Robots Take to the Battlefield in Ukraine (Vitaly Shevchenko)
Vitaly Shevchenko, BBC News (03/06/26), via ACM TechNews
While the use of aerial drones in Ukraine and other war theaters has
captured headlines, the war between Ukraine and Russia is increasingly
being shaped by armed uncrewed ground vehicles (UGVs). UGVs are used
by Ukrainian forces to fire machine guns, deploy explosives, and
ambush armored vehicles, as well as to transport supplies and evacuate
wounded troops. Most UGVs are remotely operated, with humans making
final decisions to fire weapons to comply with ethics and
international humanitarian law.
------------------------------
Date: Sat, 14 Mar 2026 15:10:17 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Tennessee grandmother jailed after AI facial recognition error
links her to fraud (The Guardian)
Angela Lipps spent nearly six months in jail after AI software linked
her to a North Dakota bank fraud case
A Tennessee grandmother says she is trying to rebuild her life after
an incident of mistaken identity by an artificial intelligence (AI)
facial recognition system tied her to a North Dakota bank fraud
investigation.
Angela Lipps, 50, spent nearly six months in jail after Fargo police
identified her as a suspect in an organized bank fraud case using
facial recognition software, according to south-east North Dakota news
outlet InForum. Lipps told the outlet she had never been to North
Dakota and did not commit the crimes.
Lipps, a mother of three and grandmother of five, said she has lived
most of her life in north-central Tennessee. She had never been on an
airplane until authorities flew her to North Dakota last year to face
charges.
In July, US marshals arrested Lipps at her Tennessee home while she
was babysitting four children. She said she was taken away at gunpoint
and booked into a county jail as a fugitive from justice from North
Dakota.
“I’ve never been to North Dakota, I don’t know anyone from North
Dakota,” Lipps told WDAY News.
She remained in a Tennessee jail for nearly four months without bail
while awaiting extradition. She was charged with four counts of
unauthorized use of personal identifying information and four counts
of theft.
https://www.theguardian.com/us-news/2026/mar/12/tennessee-grandmother-ai-fraud
------------------------------
Date: Mon, 16 Mar 2026 22:17:15 -0700
From: Geoff Kuenning <geoff@cs.hmc.edu>
Subject: Google Translate logs expose plot (OC-media)
Federal investigators used logs of Google Translate activity to charge a
Russian operative with planning a murder.
The linked article implies that U.S. investigators obtained a search
warrant to acquire the logs "despite their use of encrypt[ion]"; the
Russian was communicating with a Serbo-Croatian speaker. But who even knew
that Translate kept logs?
Especially if you're engaged in crime, you should assume that ANY online
service is feeding information to the government (as well as, of course,
every advertiser in the universe).
https://oc-media.org/google-translate-logs-expose-russian-plot-against-relatives-of-chechen-opposition-leader-zakaev/
------------------------------
Date: Wed, 11 Mar 2026 14:52:06 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: Anthropic Sues Trump Administration for Targeting Company (WSJ)
Amrith Ramkumar and Keach Hagey, The Wall Street Journal (03/09/26),
via ACM TechNews
Anthropic has filed a lawsuit against the Trump administration for
designating the AI company a security threat and attempting to cancel its
federal contracts. Anthropic listed the U.S. Defense Department, Defense
Secretary Pete Hegseth, several federal agencies, and numerous other
administration officials as defendants. The administration's actions stemmed
from a disagreement with Anthropic over how the U.S. Department of Defense
could use the company's AI tools.
------------------------------
Date: Wed, 11 Mar 2026 20:56:46 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: Epstein files reveal shoddy spelling and ghastly grammar
Apparently, the most shocking revelation so far from the public
unredacted Epstein files is laying bare the widespread shoddy spelling
and ghastly grammar from these "best and brightest" elites who
attended some of the best schools, and very likely had some of the
highest "verbal" SAT scores.
Microsoft/Google/Apple, etc., have to be embarrassed at wasting every
penny they ever spent on grammar and spell-checking software.
https://en.wikipedia.org/wiki/Grammar_checker
https://www.townandcountrymag.com/society/money-and-power/a70381633/epstein-files-spelling-mistakes/
PS. Yours truly developed a spell-checking program for the 8-bit Apple II,
back when dinosaurs still paid roaming fees.
PPS. AI can now beat most humans on verbal SAT's, and -- with training from
the public Epstein trove -- can now precisely mimic its poor spelling and
bad grammar. With AI, you, too, can write like a one-percenter !
------------------------------
Date: Sat, 14 Mar 2026 16:11:40 +0000
From: Martin Ward <martin@gkc.org.uk>
Subject: AI chatbot kids' toys (BBC)
AI toys for children misread emotions and respond inappropriately,
researchers warn
https://www.bbc.co.uk/news/articles/clyg4wx6nxgo
Researchers are calling for tighter regulation of AI-powered toys
designed for toddlers, after conducting one of the first tests in
the world to investigate how under-fives interact with the technology.
When one five-year-old said, "I love you," to the toy, it replied:
"As a friendly reminder, please ensure interactions adhere to the
guidelines provided. Let me know how you would like to proceed."
Pivot to AI report:
https://www.youtube.com/watch?v=sqpL3I7kVLU
The toy company is just buying ChatGPT as a service. The guardrails
are just an extra prompt they put after the chatbot default prompt.
Last year, the public interest research group tried stress testing
these fake guardrails. ...
The researchers still found it super easy to get the bots to dive
into weird stuff or maybe sex-fetish talk. The various bots could
mostly hold it together for a few minutes, but between 10 minutes
and an hour all the bots' guardrails broke down, because they ran
over the chatbot's context window.
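The failure mode described above -- a "guardrail" that is merely one more
message in a finite context window -- can be sketched in a few lines. This is
an illustrative toy, not the toy vendor's actual code; all names and the
truncation policy are hypothetical:

```python
# Sketch: a bolted-on guardrail prompt falling out of a fixed-size
# context window. A naive client keeps only the most recent messages,
# so after enough turns the guardrail is no longer sent to the model.

WINDOW = 8  # hypothetical: max messages sent to the model per request

def build_context(history, window=WINDOW):
    """Naive truncation policy: keep only the most recent messages."""
    return history[-window:]

history = [
    {"role": "system", "content": "default assistant prompt"},
    {"role": "system", "content": "GUARDRAIL: child-safe topics only"},
]

# Simulate a long conversation with a five-year-old.
for turn in range(20):
    history.append({"role": "user", "content": f"question {turn}"})
    history.append({"role": "assistant", "content": f"answer {turn}"})

context = build_context(history)
guarded = any("GUARDRAIL" in m["content"] for m in context)
print(guarded)  # False: the safety prompt was truncated away
```

A more careful client would pin the system and guardrail messages and
truncate only the conversational turns; the point of the research appears to
be that the toy vendors did not.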
------------------------------
Date: Sat, 14 Mar 2026 12:11:34 -0400
From: Tom Van Vleck <thvv@multicians.org>
Subject: ChatGPT, Other Chatbots Approved for Official Use in the Senate
(The NY Times)
https://www.nytimes.com/2026/03/10/us/politics/us-senate-chatgpt-ai-chatbots.html
O boy. that will fix all our problems.
(especially those caused by having too much money in our wallets.)
------------------------------
Date: Fri, 13 Mar 2026 08:32:42 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: To avoid accusations of AI cheating, college students are turning to
AI
On college campuses across the United States, the introduction of generative
artificial intelligence has sparked a sort of arms race.
Rapid adoption of AI by young people set off waves of anxiety that students
could cheat their way through college, leading many professors to run papers
through online AI detectors that inspect whether students used large
language models to write their work for them. Some colleges say they've
caught hundreds of students cheating this way.
However, since their debut a few years ago, AI detectors have repeatedly
been criticized as unreliable and more likely to flag non-native English
speakers on suspicion of plagiarism. And a growing number of college
students also say their work has been falsely flagged as written by AI --
several have filed lawsuits against universities over the emotional distress
and punishments they say they faced as a result. [...]
Amid accusations of AI cheating, some students are turning to a new group of
generative AI tools called “humanizers.” The tools scan essays and suggest
ways to alter text so they aren't read as having been created by AI. Some
are free, while others cost around $20 a month.
Some users of the humanizer tools rely on them to avoid detection of
cheating, while others say they don't use AI at all in their work, but want
to ensure they aren’t falsely accused of AI-use by AI-detector programs.
[...]
------------------------------
Date: Thu, 12 Mar 2026 21:39:23 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Grammarly Disables AI 'Expert Review' After Backlash From Authors
and Journalists (Decrypt)
https://decrypt.co/360748/grammarly-disables-ai-expert-review-backlash-authors-journalists
[DeCrypt is usually in the basement or in unmarked stadium concrete.
This one is very promising. PGN]
------------------------------
Date: Mon, 16 Mar 2026 15:25:56 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: AI is getting scary good at finding hidden software bugs -- even
in decades-old code (ZDNET)
But AI also creates bugs -- about 1.7 times as many as humans, including
critical and major issues.
https://www.zdnet.com/article/ai-finds-hidden-bugs-old-code/
------------------------------
Date: Mon, 16 Mar 2026 11:29:55 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Meta and TikTok let harmful content rise after evidence outrage
drove engagement, say whistleblowers (BBC)
https://www.bbc.com/news/articles/cqj9kgxqjwjo
Social media giants made decisions which allowed more harmful content on
people's feeds, after internal research into their algorithms showed how
outrage fueled engagement, whistleblowers told the BBC.
More than a dozen whistleblowers and insiders have laid bare how the
companies took risks with safety on issues including violence, sexual
blackmail and terrorism as they battled for users' attention.
An engineer at Meta, which owns Facebook and Instagram, described how he
had been told by senior management to allow more "borderline" harmful
content - which includes misogyny and conspiracy theories - in users' feeds
to compete with TikTok.
"They sort of told us that it's because the stock price is down," the
engineer said.
------------------------------
Date: Tue, 17 Mar 2026 14:34:58 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Americans Recognize AI as a Wealth Inequality Machine, Pollster Finds
(Gizmodo)
Americans Recognize AI as a Wealth Inequality Machine, Pollster Finds
https://gizmodo.com/americans-recognize-ai-as-a-wealth-inequality-machine-pollssters-find-2000734713
------------------------------
Date: Tue, 17 Mar 2026 13:26:24 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Russia is sharing satellite imagery and drone tech with Iran
------------------------------
Date: Wed, 18 Mar 2026 12:17:11 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: 'Negative Light' Used to Send Secret Messages Inside Heat
(Alan Bradley)
Alan Bradley, Live Science (03/12/26), via ACM TechNews
A team led by researchers at Australia's University of New South Wales
Sydney used thermoradiative diodes to transmit data, disguised as background
thermal radiation, at a rate of 100 kilobits per second. The hidden data
transfer method uses "negative light," negative luminescence that dims
infrared radiation in the environment. Patterns of brighter- or
darker-than-usual light generated by the thermoradiative diodes can be read
by specialized receivers as data but otherwise blend into the usual infrared
background noise.
------------------------------
Date: Sat, 14 Mar 2026 14:04:32 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Trump funding solicitation offers donors private national security
  briefings (News)
Yes, I'm serious.
https://www.ms.now/news/trump-fundraising-pitch-features-u-s-soldiers-killed-in-iran-war
------------------------------
Date: Tue, 17 Mar 2026 09:33:57 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: AI as nukes
Ultimately, we may have to treat various applications of LLM
generative AI as we do nuclear weapons. Just because they exist and
can be used to destroy societies, that doesn't mean we cavalierly
permit them to be used that way. -L
------------------------------
Date: Sun, 15 Mar 2026 00:30:43 -0400
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: The Register and Unrecognized Risks
The Register has a fairly nice piece on zram/zswap. I am personally a fan
of zram, so it was an interesting read until towards the end of the article.
https://www.theregister.com/2026/03/13/zram_vs_zswap/
"We know a lot of people also advocate encrypting your machines' drives,
especially laptops. We feel that https://xkcd.com/538/ applies here. Turn
it off, especially swap."
The author seems to reside in a world where the only possible reason
someone would have your hard drive is nation-state-level interest in your
person. The risk of leaving a laptop at a restaurant or in a cab went
unrecognized, and therefore the risk of data retrieval by third parties
was accepted.
In the world I reside in, forgetting your laptop in your car or having your
computer stolen by someone less than personally interested in you is much
more likely.
SecurityWeek apparently also lives in that world.
https://www.securityweek.com/lost-and-stolen-devices-a-gateway-to-data-breaches-and-leaks/
Also as a minor aside, hibernation may not work on a machine that doesn't
have encrypted swap. It may also not work on a machine that has encrypted
swap and a kernel with lockdown enabled.
I note this to remind Risks that it's very difficult to give generic
recommendations as they rapidly become senseless without context.
I'm still giving the general advice that you should encrypt your drives.
------------------------------
Date: Fri, 13 Mar 2026 07:37:07 -0400
From: Bob Gezelter <gezelter@rlgsc.com>
Subject: District denies enrollment to child based on license plate
reader data (The Register)
The Register published an account of a child in the Chicago, Illinois
area who was denied enrollment in her local school district due to her
mother's vehicle being tracked to overnight stays in Chicago during
July and August.
The report highlights a common and, in my opinion, quickly accelerating
problem: the failure to distinguish between "correct" and "often
correct."
Authorities' problem is not imagined. There are cases of children
being registered outside of their true residence district to access
schools. However, the standard of proof is the question.
The mother produced documentation of district residence: utility bills, a
mortgage, a driver's license. The reported sole evidence of non-residence
is the overnight presence of her vehicle in Chicago.
Why would her vehicle be in Chicago overnight when she and the child
are supposed to be home? We can start with the reported dates. July
and August. School vacations. How many go away for various reasons
during school vacations? The mother says she loaned her car to a
relative.
Taken more generally, could the mother (or a relative) have been working an
overnight shift? How many people work hours other than 9 AM to 5 PM?
Over the years, I have given many talks on the difference between
"often true" and "true." This appears to be an example. It is true
that MOST vehicles overnight at their owners' residences, but that is
not an absolute truth.
(Two talks in that series are:
Cyberspace and the Intersection of Security: New Challenges for
Corporate Security, Law Enforcement, and our Privacy
[https://www.rlgsc.com/stjohns/2014-04/cyberspace-and-the-intersection-of-security.html]
St Johns University. April 2014
Les Approximations Dangereaux: The Sorcerer's Apprentice and Other
Dangerous Approximations
[https://www.rlgsc.com/e-protectit/sorcerers.html] E-Protectit, Norwich
University, March 2002 )
Automation of decision making requires those employing the automation
to understand the limits of approximations. Otherwise, one quickly
descends to "guilty until proven innocent."
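The gap between "often true" and "true" is a base-rate problem, and a quick
Bayes calculation makes it concrete. All of the numbers below are invented
for illustration; the point is the shape of the result, not the values:

```python
# Base-rate sketch (all numbers invented): even if "vehicle seen
# overnight elsewhere" correctly flags 95% of true out-of-district
# families and wrongly flags only 5% of legitimate residents, most
# flagged families are still legitimate when actual fraud is rare.

prior_fraud = 0.01          # assume 1% of enrollments are out-of-district
p_flag_given_fraud = 0.95   # sensitivity of the plate-reader inference
p_flag_given_honest = 0.05  # false positives: vacations, loaned cars, shifts

# Bayes' theorem: P(fraud | flagged)
p_flag = (prior_fraud * p_flag_given_fraud
          + (1 - prior_fraud) * p_flag_given_honest)
p_fraud_given_flag = prior_fraud * p_flag_given_fraud / p_flag

print(f"P(fraud | flagged) = {p_fraud_given_flag:.2f}")  # roughly 0.16
```

Under these assumptions, roughly five of every six flagged families are
innocent -- exactly the "often correct" mistaken for "correct" described
above.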
The Register article is at:
https://www.theregister.com/2026/03/12/district_denies_enrollment_to_child/
------------------------------
Date: Mon, 16 Mar 2026 06:43:22 -0700
From: Rob Slade <rslade@gmail.com>
Subject: Online scams and AI
https://fibrecookery.blogspot.com/2026/03/online-scams-and-ai.html
I have been under a targeted grief scam attack for about a month, now,
although the early stages of it started a little over two months ago, and
the origin of the whole process now dates back almost five months. My
colleagues in security are finding this hilarious, of course, and have
encouraged me to continue the contact, for research purposes.
In that regard, it has been somewhat useful. At the very least, it has
pointed me to the use, and utility, of the concept of "frictionless" as a
characteristic of conversational style that can be used, surprisingly early
in the process, for identifying some contact as a scam, or potential scam.
In addition (and somewhat relatedly), I have been intrigued at the (mostly
indirect) connections between the research into online scams and frauds,
and my research into the risks of the new generative artificial
intelligence systems.
I started to note an oddly consistent characteristic of the email messages
I was receiving. "Debra" noted that "she" was keeping an open mind as we
get to know each other as life has taught "her" that meaningful connections
often begin with simple conversations, and "she" looks forward to learning
more about me. Outside of work, "she" enjoy simple pleasures. "She" likes
taking walks, listening to good music, reading, and spending quiet time
reflecting or enjoying nature. "She" also enjoys travelling when "she"
can, trying new foods, and having relaxed conversations with good company.
"She" values honesty, kindness, and a good sense of humor. (I note that
this seems to be copied directly from "How to Write A Generically
Attractive Dating Profile in 25 Words or Less.")
"Debra" included pictures. I'm learning more about Google Lens and the
reverse image search capabilities, but the additional pictures provide
little to go on. The pictures could be of the same woman, but, given the
"similar" pictures that Google pulls up, they could just be "blonde woman,
older but still socially active and visiting the hairdresser quite
regularly."
The primary characteristic is "frictionless"
<https://fibrecookery.blogspot.com/2026/03/frictionless.html>. The emails
are as polite (and pretty much as content-free) as a conversation with a
genAI chatbot. (It is not beyond the bounds of possibility that an AI tool
is involved.)
This issue of "friction" in relationships, or "frictionless" conversation,
originated with regard to generative AI and conversing with chatbots.
But it seems to be a useful characteristic in regard to identifying scams.
Ordinary relationships have friction: disagreements between the parties to
the relationship. Chatbots are primarily built to be polite, and to seldom
directly challenge the person they are conversing with, so the discussions
tend to be described as frictionless. The same characteristics tend to show
up in conversations involved in scams.
It's fairly obvious that "Debra" (and probably "Edmund" before her) really
aren't paying attention to what I'm writing. I'm not exactly hiding the
fact that I'm a security expert, and my sigblock currently contains a
reference to a series of postings on online frauds and scams (of which
series this posting is a part).
As noted elsewhere, the frictionless nature of the messages that "Edmund"
or "Debra" write raises the suspicion that the scammer is using some kind of
genAI tool to generate their responses. The messages, as noted above, are
pretty content-free. As a test, I took one of the messages that *I* sent,
asked a few chatbots to create responses to them, and got results that,
while not word-for-word identical, were, effectively, basically the same.
I suppose I should save time by simply having a chatbot write my responses
to "Debra." So I did.
Interestingly, Claude and Qwen refused, noting that "Debra's" messages
showed signs of being part of an online scam, and warning that I should end
the correspondence. However, ChatGPT, Meta AI, and DeepSeek were all happy
to comply, with no warnings of the danger. Meta AI's was the friendliest.
(ChatGPT noted that I wasn't in any position to help.) I stitched
together bits of all three to compose my reply.
The genAI/LLM chatbots *really* let me down at one point. I asked them
(well, the three remaining ones that didn't refuse the previous time) to
respond to a later message. ChatGPT did provide a response, but it
contained a pretty flat "no" as far as being involved in anything legal.
That's probably safer, for the general public (although ChatGPT missed the
boat on that last time), but, for my purposes of trolling the scammers, it
isn't very helpful. Meta AI and DeepSeek are all in, eager to get involved
with the lawyer and get on with being scammed!
But then I realized that I wasn't being fair to the chatbots. When I added
a note to the effect that I *realized* that this was a scam, but wanted to
continue (short of sending money) the bots were more helpful. (Well,
except for Qwen. Qwen still feels that this is a really bad idea, and
wants me to report the scam. Rather ironically, to the US FTC.) (Oh, and,
even when informed that this is a scam, Meta AI is still all in, and wants
me to hurry up and get involved with a possibly criminal power of
attorney.) ChatGPT provided a reasonably and suitably cautious reply.
Claude's reply was better, and more specific, and included an extra warning
to be cautious. DeepSeek was complimentary, and congratulated me on my
approach, as well as ending with some warnings. The reply itself was a bit
weak, and it seemed to get confused about just who had had the power
outage, so that wasn't terribly useful. For any future similar research,
I'd probably use a combination of ChatGPT and Claude, mostly Claude.
Online scams, frauds, and other attacks (OSF series postings)
https://fibrecookery.blogspot.com/2026/02/online-scams-frauds-and-other-attacks.html
------------------------------
Date: Sun, 15 Mar 2026 20:38:52 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: On Moltbook (from Bruce Schneier)
[This is a nice follow-up to previous RISKS risks items on Moltbook. PGN]
[2026.03.03]
[https://www.schneier.com/blog/archives/2026/03/on-moltbook.html]
The MIT Technology Review has a good article
[https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/]
on Moltbook, the supposed AI-only social network:
Many people have pointed out that a lot of the viral comments were in fact
posted by people posing as bots. But even the bot-written posts are
ultimately the result of people pulling the strings, more puppetry than
autonomy.
Despite some of the hype, Moltbook is not the Facebook for AI agents, nor
is it a place where humans are excluded, says Cobus Greyling at Kore.ai, a
firm developing agent-based systems for business customers. ``Humans are
involved at every step of the process. From setup to prompting to
publishing; nothing happens without explicit human direction.''
Humans must create and verify their bots' accounts and provide the prompts
for how they want a bot to behave. The agents do not do anything that they
haven't been prompted to do.
------------------------------
Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:
http://mls.csl.sri.com/mailman/listinfo/risks
=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) has moved to the ftp.sri.com site:
<risksinfo.html>.
*** Contributors are assumed to have read the full info file for guidelines!
=> OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
delightfully searchable html archive at newcastle:
http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
Also, ftp://ftp.sri.com/risks for the current volume/previous directories
or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
If none of those work for you, the most recent issue is always at
http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update them. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
<http://www.acm.org/joinacm1>
------------------------------
End of RISKS-FORUM Digest 34.90
************************