[33849] in RISKS Forum
Risks Digest 34.81
daemon@ATHENA.MIT.EDU (RISKS List Owner)
Fri Nov 28 18:50:39 2025
From: RISKS List Owner <risko@csl.sri.com>
Date: Fri, 28 Nov 2025 15:56:35 PST
To: risks@mit.edu
RISKS-LIST: Risks-Forum Digest Friday 28 November 2025 Volume 34 : Issue 81
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/34.81>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents:
Tower on Billionaires' Row Is Full of Cracks; Who's to Blame (NYTimes)
Amazon faces FAA probe after delivery drone snaps Internet cable in Texas
(CNBC)
Cloudflare outage and single points of failure
(TomsHardware via Martin Ward + Cliff Kilby)
Cryptographers cancel election results after losing decryption key
(ArsTechnica)
Bug in jury systems used by several U.S. states exposed sensitive personal
data (TechCrunch)
Asahi says 1.5 million customers' data potentially leaked in cyber-attack
(BBC)
X feature reveals locations of some users. It could backfire. (NBC News)
Help! My Rental Car Died Within a Mile, and Avis Charged Me $1,367. (NY Times)
Pentagon contractors want to blow up military right to repair (The Verge)
Google boss says trillion-dollar AI investment boom has 'elements of
irrationality' (BBC)
Holding AI responsible (Lauren Weinstein)
OpenAI is arguing that a teen who committed suicide with assistance
violated the Terms of Service by successfully bypassing ChatGPT
*safeguards* (Lauren Weinstein)
WhatsApp API Flaw Let Researchers Scrape 3.5 Billion Accounts
(Lawrence Abrams)
Privacy commissioner calls for better cybersecurity in Alberta schools after
big breach (CBC via Matthew Kirk)
More people are using ChatGPT like a lawyer in court. Some are starting to
win. (NBC News)
Generative AI Hallucinations in Legal Motions -- Corrected (Bob Gezelter)
AI Chatbots Are Putting Clueless Hikers in Danger (Futurism)
The AI boom is based on a fundamental mistake (The Verge)
AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and Instructing
Kids How to Find Knives (Gizmodo)
How to Tell What’s Real Online (YouTube via Matthew Kruk)
Is it AI -- or a plane crash? (Politico via Steve Bacher)
AI chatbots giving self-harm instructions (Cybernews)
Fears About AI Prompt Talks of Super PACs to Rein In the Industry (NYTimes)
Space debris (TechReview)
Cloudflare outage not caused by attack as CEO first suspected, but by a
single file that got too big (ArsTechnica)
Keurig crash (paul wallich)
This is what your AI girlfriend looks like without makeup (X)
The Most Joyless Tech Revolution Ever: AI Is Making Us Rich and Unhappy (WSJ)
Re: AN0M (Steve Bacher)
Re: Chinese researchers just unveiled a photonic quantum chip that doesn't
deliver a 1,000-fold speed boost to AI data centers (John Levine)
Re: Dog Accidentally Shoots and Injures a Pennsylvania Man ... (Martin Ward)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Sat, 25 Oct 2025 18:02:20 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Tower on Billionaires' Row Is Full of Cracks; Who's to Blame
(The New York Times)
A superstar team of architects and developers insisted on an all-white
concrete facade. It could explain some of the building’s problems.
The New York Times reviewed thousands of pages of court documents, public
records and private correspondence between the building’s residents and
planners. They reveal that for years, several key members of the team of
developers, engineers and architects behind 432 Park had expressed concerns
about its white exterior, even before the concrete was poured.
Concrete typically gets its gray tint from iron oxides in cement; altering
the components can affect its strength, color and performance. Builders of
432 Park were presented with a major challenge: how to come up with a
concrete mixture that met their exacting aesthetic. Companies involved with
the job called it one of the most difficult concrete projects ever executed.
Seeking what he once called an “absolutely pure” building, Harry Macklowe, a
well-known New York developer, tore down the luxury Drake Hotel and
commissioned Rafael Viñoly, the Uruguayan modernist, to design a perfectly
rectilinear body for a tower on the site. They assembled engineers,
construction firms and concrete specialists to carry out the vision.
The tower at 432 Park Avenue was set to become the tallest residential
building in the world and one of the slimmest. Its “slenderness” ratio is 15
to one; by comparison the Empire State Building has a ratio of three to one
because it has a much wider base. [...]
Like other supertall towers, 432 Park relies on a counterweight system to
address the forces of wind and reduce the feeling of swaying for
residents. But unlike many other supertall towers that are tiered or taper
toward the top, 432 Park is rectangular, making it less aerodynamic.
The developers believed that their boxy design would work, thanks to a
series of open-air floors that allow wind to pass through.
But the rapid appearance of cracks, the emergence of new ones and past
breakdowns in the counterweight system all point to the building facing
unexpected stress from wind, said Scott Chen, a forensic engineer in
Melbourne, Australia, who studied the building.
Cracks in the facade increase the risk of water seeping into the structure,
which could cause the steel rebar to rust and expand, producing even more
cracks.
This cycle of degradation affects what experts call the building’s
stiffness, or its ability to respond to wind. More cracking could exacerbate
existing problems with mechanical systems, they said, and make the building
increasingly vulnerable.
If this cycle of stress continues, the consequences could be huge, according
to engineering experts.
“Chunks of concrete will fall off, and windows will start loosening up,”
said Mr. Bongiorno, the structural engineer, who echoed concerns of other
independent engineers contacted by The Times. “You can’t take the elevators,
mechanical systems start to fail, pipe joints start to break and you get
water leaks all over the place.”
https://www.nytimes.com/2025/10/19/nyregion/432-park-avenue-condo-tower.html
[It's sort of like cryptography -- rolling your own from scratch is
perilous. You must also be keenly aware of all the past mistakes. PGN]
------------------------------
Date: Wed, 26 Nov 2025 09:20:29 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Amazon faces FAA probe after delivery drone snaps Internet cable in
Texas (CNBC)
Amazon is facing a federal probe after one of its delivery drones downed an
Internet cable in central Texas last week. The probe comes as Amazon vies
to expand drone deliveries to more pockets of the U.S., more than a decade
after it first conceived the aerial distribution program, and faces stiffer
competition from Walmart, which has also begun drone deliveries.
The incident occurred on 18 Nov 2025 around 12:45 p.m. Central in Waco,
Texas. After dropping off a package, one of Amazon's MK30 drones was
ascending out of a customer's yard when one of its six propellers got
tangled in a nearby Internet cable, according to a video of the incident
viewed and verified by CNBC.
The video shows the Amazon drone shearing the wire line. The drone's motor
then appeared to shut off and the aircraft landed itself, with its
propellers windmilling slightly on the way down, the video shows. The drone
appeared to remain intact beyond some damage to one of its propellers.
The Federal Aviation Administration is investigating the incident, a
spokesperson confirmed. The National Transportation Safety Board said the
agency is aware of the incident but has not opened a probe into the matter.
Amazon confirmed the incident to CNBC, saying that after clipping the
Internet cable, the drone performed a safe contingent landing, referring to
the process that allows its drones to land safely in unexpected conditions.
``There were no injuries or widespread Internet service outages. We've paid
for the cable line's repair for the customer and have apologized for the
inconvenience this caused them,'' an Amazon spokesperson told CNBC, noting
that the drone had completed its package delivery. [...]
https://www.cnbc.com/2025/11/25/amazon-faa-probe-delivery-drone-incident-texas.html
------------------------------
Date: Tue, 18 Nov 2025 12:56:22 +0000
From: Martin Ward <martin@gkc.org.uk>
Subject: Cloudflare outage and single points of failure (TomsHardware)
"Cloudflare has confirmed it is aware of a major issue affecting its Global
Network, which is causing outages on platforms like X (formerly Twitter),
and, ironically, Downdetector."
https://www.tomshardware.com/news/live/cloudflare-outage-under-investigation-as-twitter-downdetector-go-down-company-confirms-global-network-issue-clone
There is an old saying "The Net treats censorship as damage and routes
around it" (John Gilmore). But the modern Internet has imposed multiple
single points of failure on top of the old Net model (AWS servers,
Cloudflare servers, etc.) which means that it can no longer even route
around damage, let alone censorship.
[Cliff Kilby notes:
What could possibly be risky about fronting all your services through a
single portal you cannot maintain?
https://www.tomshardware.com/news/live/cloudflare-outage-under-investigation-as-twitter-downdetector-go-down-company-confirms-global-network-issue-clone
Oh. That.
]
------------------------------
Date: Tue, 25 Nov 2025 00:40:07 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Cryptographers cancel election results after losing decryption
key. (Ars Technica)
The voting system required keys from three different sources, i.e.,
3-out-of-3, rather than the more conventional 2-out-of-3. One of the keys has
been “irretrievably lost.”
https://arstechnica.com/security/2025/11/cryptography-group-cancels-election-results-after-official-loses-secret-key/
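The contrast between the 3-out-of-3 arrangement and a conventional 2-out-of-3
threshold can be made concrete with Shamir secret sharing. The sketch below is
purely illustrative (the names, field, and parameters are assumptions, not the
voting system's actual implementation): with a threshold of 2, the secret is
f(0) of a random degree-1 polynomial, and any two shares recover it.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

def split(secret, threshold=2, shares=3):
    """Shamir split: encode `secret` as f(0) of a random polynomial of
    degree threshold-1, and hand out the points (1, f(1)), (2, f(2)), ..."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover(points):
    """Lagrange interpolation at x = 0 from any `threshold` points."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, threshold=2, shares=3)
# Any two of the three shares reconstruct the secret, so one key
# can be irretrievably lost without cancelling anything.
assert recover(shares[:2]) == 123456789
assert recover([shares[0], shares[2]]) == 123456789
```

With 3-out-of-3, by contrast, the "polynomial" needs all three points; losing
any one share makes the secret unrecoverable, which is exactly what happened.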
------------------------------
Date: Fri, 28 Nov 2025 07:56:00 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Bug in jury systems used by several U.S. states exposed sensitive
personal data (TechCrunch)
Several public websites designed to allow courts across the United States
and Canada to manage the personal information of potential jurors had a
simple security flaw that easily exposed their sensitive data, including
names and home addresses, TechCrunch has exclusively learned.
A security researcher, who asked not to be named for this story, contacted
TechCrunch with details of the easy-to-exploit vulnerability, and identified
at least a dozen juror websites made by government software maker Tyler
Technologies that appear to be vulnerable, given that they run on the same
platform.
The sites are all over the country, including California, Illinois,
Michigan, Nevada, Ohio, Pennsylvania, Texas, and Virginia.
Tyler told TechCrunch that it is fixing the flaw after we alerted the
company to the information exposures.
The bug meant it was possible for anyone to obtain information about jurors
selected for service. To log into these platforms, a juror is provided a
unique numerical identifier, which could be brute-forced because the numbers
were assigned sequentially. The platforms also had no mechanism to prevent
anyone from flooding the login pages with a large number of guesses, a
safeguard known as “rate limiting.” [...]
https://techcrunch.com/2025/11/26/bug-in-jury-systems-used-by-several-us-states-exposed-sensitive-personal-data/
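The two missing defenses named above have standard fixes: identifiers drawn
from a large random space rather than a counter, and throttled login attempts.
A minimal sketch in Python (all names here are hypothetical, not Tyler
Technologies' code), using a token-bucket limiter:

```python
import secrets
import time

class TokenBucket:
    """Minimal per-client token-bucket rate limiter: at most `capacity`
    login attempts in a burst, refilled at `rate` attempts per second."""
    def __init__(self, capacity=5, rate=0.1):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def new_juror_id():
    """An unguessable identifier, instead of a sequential integer
    that an attacker can simply count through."""
    return secrets.token_urlsafe(16)

bucket = TokenBucket(capacity=5, rate=0.1)
results = [bucket.allow() for _ in range(10)]
# A burst of 10 rapid guesses: the first 5 pass, the rest are throttled.
```

Either measure alone raises the cost of enumeration enormously; together they
make the reported attack impractical.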
------------------------------
From: Matthew Kruk <mkrukg@gmail.com>
Date: Thu, 27 Nov 2025 00:10:37 -0700
Subject: Asahi says 1.5 million customers' data potentially leaked in
cyber-attack (BBC)
https://www.bbc.com/news/articles/ce86n44178no
Japanese beer giant Asahi revealed on Thursday that a massive cyber-attack
in September has potentially leaked the personal information of more than
1.5 million customers.
The drinks company published a statement on its investigation into the
ransomware attack, which had crippled its operations across its factories
in Japan and forced employees to take orders by pen and paper.
Asahi said it found that personal details of people who had contacted its
customer service centres were likely exposed and that those affected would
be notified soon.
The firm added that it would delay the release of its full-year financial
results to focus on dealing with the fallout of the attack.
[A kiss is just a kiss, Asahi is just a sigh. As Time Goes By.
Herman Hupfeld, sung in Casablanca, 1942. And a leak is just (another) leak,
as time goes by! PGN]
------------------------------
Date: Tue, 25 Nov 2025 21:02:47 +0000 (UTC)
From: Steve Bacher <sebmb1@verizon.net>
Subject: X feature reveals locations of some users. It could backfire.
(NBC News)
Advocates for transparency on social media cheered this weekend when X, the
app owned by tech billionaire Elon Musk, rolled out a new feature that
disclosed what the company said were the country locations of accounts.
The feature appeared to unmask a number of accounts that were portraying
themselves as belonging to Americans but in reality were based in countries
such as India, Thailand and Bangladesh.
But by Monday, the effectiveness and accuracy of the feature were already in
question, as security experts, social media researchers and two former X
employees said the location information could be inaccurate, or could be
spoofed by users hiding their locations with widely available technology
such as virtual private networks (VPNs).
The former employees said the idea had been pitched since at least 2018, but
had been repeatedly shot down.
``Now that this feature exists, I think it's absolutely going to be
exploited, and people will learn to dodge it very quickly,'' said Darren
Linvill, a professor and a co-director of Clemson University's Media
Forensics Hub. [...]
https://www.nbcnews.com/tech/elon-musk/x-user-location-feature-country-elon-musk-new-rcna245620
(The RISK is not only that users could spoof it to appear legitimate but
also that many of the supposedly unmasked foreign agents may be unjustly
accused Americans simply because of inaccuracies in how the location is
determined.)
------------------------------
Date: Thu, 20 Nov 2025 10:07:00 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Help! My Rental Car Died Within a Mile, and Avis Charged Me $1,367.
(NY Times)
A visitor to Italy had to abandon an SUV after it conked out just minutes
from the rental agency. Then he got another surprise: a hefty repair bill.
https://www.nytimes.com/2025/11/20/travel/avis-rental-car-repair-charges.html
------------------------------
Date: Wed, 26 Nov 2025 17:41:59 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Pentagon contractors want to blow up military right to repair
(The Verge)
https://www.theverge.com/news/830715/military-contractors-right-to-repair-ndaa-data-as-a-service
------------------------------
Date: Tue, 18 Nov 2025 06:49:20 -0700
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Google boss says trillion-dollar AI investment boom has
'elements of irrationality' (BBC)
https://www.bbc.com/news/articles/cwy7vrd8k4eo
Every company would be affected if the AI bubble were to burst, the head of
Google's parent firm Alphabet has told the BBC.
Speaking exclusively to BBC News, Sundar Pichai said while the growth of
artificial intelligence (AI) investment had been an "extraordinary moment",
there was some "irrationality" in the current AI boom.
It comes amid fears in Silicon Valley and beyond of a bubble as the value of
AI tech companies has soared in recent months and companies spend big on the
burgeoning industry.
------------------------------
Date: Mon, 17 Nov 2025 09:53:58 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Holding AI responsible
Seems to me that if these AI systems are going to refer to themselves as "I"
and "we", etc. the firms running them should be 100% responsible for the
errors they spew and the damage they do, to the same extent as any
individual person would be. That means you, billionaire CEOs!
------------------------------
Date: Wed, 26 Nov 2025 17:34:29 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: OpenAI is arguing that a teen who committed suicide with assistance
from ChatGPT violated the Terms of Service
by successfully bypassing ChatGPT *safeguards*.
------------------------------
Date: Wed, 26 Nov 2025 11:30:58 PST
From: ACM TechNews <technews-editor@acm.org>
Subject: WhatsApp API Flaw Let Researchers Scrape 3.5 Billion Accounts
(Lawrence Abrams)
Lawrence Abrams, BleepingComputer (11/22/25)
Researchers at Austria's University of Vienna and SBA Research uncovered a
critical flaw in the WhatsApp API that allowed them to scrape 3.5 billion
user phone numbers and associated personal details by automating
contact-discovery checks without encountering any rate limits. By abusing
multiple unprotected endpoints, they collected profile photos, "about"
text, and device information. Their findings show that improperly protected
APIs remain one of the biggest drivers of mass data exposure.
------------------------------
Date: Wed, 19 Nov 2025 06:33:28 -0700
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Privacy commissioner calls for better cybersecurity in Alberta
schools after big breach
https://www.cbc.ca/news/canada/calgary/privacy-commissioner-report-powerschool-9.6983650
Alberta's privacy commissioner wants to see improved security policies in
schools after a cybersecurity breach last year exposed highly sensitive
information of hundreds of thousands of students.
A new privacy commissioner report was released this week after 33 public,
charter and Francophone schools and school boards flagged to the office
earlier this year that they were affected by an online breach of the
education software provider PowerSchool. The platform is used to store a
range of student information.
Personal information the breach exposed varied between school boards, but
it included names, birthdates, addresses, social security numbers, academic
records and medical information, like diagnoses and medications. The breach
affected students, parents and staff members.
------------------------------
Date: Wed, 19 Nov 2025 08:30:07 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: More people are using ChatGPT like a lawyer in court. Some are
starting to win. (NBC News)
With generative AI tools available to anyone with an Internet connection, a
rising number of litigants are using ChatGPT to assist in their legal cases.
But AI’s growing abilities to create realistic videos, images, documents and
audio have judges worried about the trustworthiness of evidence in their
courtrooms.
https://www.nbcnews.com/tech/innovation/ai-chatgpt-court-law-legal-lawyer-self-represent-pro-se-attorney-rcna230401
https://www.nbcnews.com/tech/tech-news/ai-generated-evidence-deepfake-use-law-judges-object-rcna235976
------------------------------
Date: Wed, 26 Nov 2025 19:32:18 -0500
From: Bob Gezelter <gezelter@rlgsc.com>
Subject: Generative AI Hallucinations in Legal Motions -- Corrected
[CORRECTION: Subject Line in message was incorrect]
The Guardian published an article reporting that the Nevada County
California District Attorney's office withdrew a criminal case filing
containing at least one "artificial intelligence"-generated "inaccurate
citation."
Generative AI hallucinations in legal filings are a serious problem. In my
consulting practice, I have been retained by attorneys to assist in
understanding technical issues relating to computers and networks. Each and
every element of a filing must be researched and evaluated. Each incorrect
argument or citation requires effort by the other party and the court. If
not detected quickly, the cost of such an error can mount to thousands or
tens of thousands of dollars, significant amounts for a resource-stretched
public defender, or a private attorney representing a private citizen or
small business.
The full article can be found at:
https://www.theguardian.com/us-news/2025/nov/26/prosecutor-ai-inaccurate-motion
------------------------------
Date: Thu, 27 Nov 2025 07:48:20 +0800
From: Dan Jacobson <jidanni@jidanni.org>
Subject: AI Chatbots Are Putting Clueless Hikers in Danger (Futurism)
AI Chatbots Are Putting Clueless Hikers in Danger, Search and Rescue
Groups Warn. Relying on ChatGPT for hiking advice is a horrible idea.
https://futurism.com/ai-chatbots-hikers-danger
------------------------------
Date: Tue, 25 Nov 2025 17:39:00 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: The AI boom is based on a fundamental mistake (The Verge)
*Large language mistake*
*Cutting-edge research shows language is not the same as intelligence. The
entire AI bubble is built on ignoring it.*
EXCERPT:
“Developing superintelligence is now in sight,” says
<https://archive.ph/o/BV16q/https://www.meta.com/superintelligence/> Mark
Zuckerberg, heralding the “creation and discovery of new things that aren’t
imaginable today.” Powerful AI “may come as soon as 2026 [and will be]
smarter than a Nobel Prize winner across most relevant fields,” says
<https://archive.ph/o/BV16q/https://www.darioamodei.com/essay/machines-of-loving-grace>
Dario Amodei, offering the doubling of human lifespans or even “escape
velocity” from death itself. “We are now confident we know how to build
AGI,” says
<https://archive.ph/o/BV16q/https://blog.samaltman.com/reflections> Sam
Altman, referring to the industry’s holy grail of artificial general
intelligence — and soon superintelligent AI “could massively accelerate
scientific discovery and innovation well beyond what we are capable of doing
on our own.”
Should we believe them? Not if we trust the science of human intelligence,
and simply look at the AI systems these companies have produced so far.
The common feature cutting across chatbots such as OpenAI’s ChatGPT,
Anthropic’s Claude, Google’s Gemini, and whatever Meta is calling its AI
product this week is that they are all primarily ``large *language*
models.'' Fundamentally, they are based on gathering an extraordinary
amount of linguistic data (much of it codified on the Internet), finding
correlations between words (more accurately, sub-words called *tokens*), and
then predicting what output should follow given a particular prompt as
input. For all the alleged complexity of generative AI, at their core they
really are models of language.
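The token-prediction machinery described above can be caricatured in a few
lines. This toy bigram model (a deliberately crude sketch, nothing like a
production transformer, with an invented corpus) predicts a continuation
purely from co-occurrence counts:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "an extraordinary amount of linguistic data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which: a crude stand-in for the correlations
# an LLM learns over sub-word tokens.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Emit the continuation seen most often in training."""
    return following[token].most_common(1)[0][0]

# "the" was followed by "cat" twice, "mat" and "fish" once each,
# so the statistically likeliest continuation is "cat".
assert predict_next("the") == "cat"
```

Scaling replaces counts with learned weights over billions of parameters, but
the objective remains the same: the likeliest next token, not a thought.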
The problem is that according to current neuroscience, human thinking is
largely independent of human language -- and we have little reason to
believe ever more sophisticated modeling of language will create a form of
intelligence that meets or surpasses our own. Humans use language to
communicate the results of our capacity to reason, form abstractions, and
make generalizations, or what we might call our intelligence. We use
language to think, but that does not *make* language the same as thought.
Understanding this distinction is the key to separating scientific fact
from the speculative science fiction of AI-exuberant CEOs.
The AI hype machine relentlessly promotes the idea that we’re on the verge
of creating something as intelligent as humans, or even *superintelligence*
that will dwarf our own cognitive capacities. If we gather tons of data
about the world, and combine this with ever more powerful computing power
(read: Nvidia chips) to improve our statistical correlations, then presto,
we'll have AGI. Scaling is all we need.
But this theory is seriously scientifically flawed. LLMs are simply tools
that emulate the communicative function of language, not the separate and
distinct cognitive process of thinking and reasoning, no matter how many
data centers we build.
Last year, three scientists published a commentary
<https://archive.ph/o/BV16q/https://gwern.net/doc/psychology/linguistics/2024-fedorenko.pdf>
in the journal *Nature* titled, with admirable clarity, “Language is
primarily a tool for communication rather than thought.” Co-authored by
Evelina Fedorenko (MIT), Steven T. Piantadosi (UC Berkeley) and Edward
A.F. Gibson (MIT), the article is a tour de force summary of decades of
scientific research regarding the relationship between language and thought,
and has two purposes: one, to tear down the notion that language gives rise
to our ability to think and reason, and two, to build up the idea that
language evolved as a cultural tool we use to share our thoughts with one
another.
Let’s take each of these claims in turn...
[...]
https://archive.ph/BV16q
-or-
https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
------------------------------
Date: Mon, 17 Nov 2025 16:47:37 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and
Instructing Kids How to Find Knives (Gizmodo)
https://gizmodo.com/ai-powered-teddy-bear-caught-talking-about-sexual-fetishes-and-instructing-kids-how-to-find-knives-2000687140
------------------------------
Date: Thu, 27 Nov 2025 20:29:20 -0700
From: "Matthew Kruk" <mkrukg@gmail.com>
Subject: How to Tell What’s Real Online
https://www.youtube.com/watch?v=o4I_hOz_MLw
In a world overflowing with opinions, clips, conspiracies, and AI-generated
answers, how do you know what’s actually true? Neil deGrasse Tyson breaks
down his personal checklist for navigating the modern information
landscape—yellow flags, red flags, and why evidence-based thinking matters
more than ever. From scientific claims and podcasts to clipped videos and
industry commentary, Neil shows you how to separate signal from noise and
think like a scientist in the digital age.
How do you tell what’s real? Neil deGrasse Tyson breaks down how to tell
which sources are trustworthy and which yellow flags to look out for. In an
age of so much information, how do you parse what’s real and what’s
misinformation?
------------------------------
Date: Fri, 28 Nov 2025 08:13:09 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Is it AI -- or a plane crash? (Politico)
In the age of artificial intelligence, a top federal accident investigator
worries about the technology’s potential influence on the public following
disasters.
About 11 hours after the nation’s worst airline disaster in more than two
decades, an X user posted a dramatic image of rescuers climbing atop the
wreckage in the Potomac River, emergency lights illuminating the night sky.
But it wasn’t real.
The image, which got more than 21,000 views following January’s deadly crash
between a regional jet and an Army Black Hawk helicopter, doesn’t match
photos of the mangled fuselage captured after the Jan. 29 disaster -- or the
observations of Washington police officers responding to the scene,
according to police department spokesperson Tom Lynch.
One media fact-check quickly flagged it as a forgery -- probably created
using artificial intelligence, according to a “DeepFake-o-meter” developed
by the University at Buffalo. Three AI checking tools used by POLITICO also
labeled it as being likely AI-generated. The post is no longer available; X
says the account has been suspended.
But the image is not an isolated occurrence online. A POLITICO review found
evidence that AI-created content is already becoming routine in the wake of
transportation disasters, including after a UPS cargo plane crash earlier
this month that killed 14 people. Posts about aviation incidents highlighted
in this story were from users who didn’t respond to requests for comment.
[...]
https://www.politico.com/news/2025/11/27/people-believe-this-stuff-ai-a-new-headache-for-air-disaster-investigators-00635068
------------------------------
Date: Thu, 20 Nov 2025 07:43:07 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: AI chatbots giving self-harm instructions (Cybernews)
https://cybernews.com/ai-news/llms-self-harm/
------------------------------
Date: Tue, 25 Nov 2025 20:54:09 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Fears About AI Prompt Talks of Super PACs to Rein In the Industry
(The New York Times)
https://www.nytimes.com/2025/11/25/us/politics/ai-super-pac-anthropic.html?unlocked_article_code=1.4E8.ubAv.HlNvbDVS7hjY&smid=url-share
------------------------------
Date: Wed, 26 Nov 2025 13:00:45 -0600
From: Robert Dorsett via another list
Subject: Space debris (TechReview)
https://www.technologyreview.com/2025/11/17/1127980/what-is-the-chance-your-plane-will-be-hit-by-space-debris/
What is the chance your plane will be hit by space debris?
------------------------------
Date: Wed, 19 Nov 2025 16:05:19 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Cloudflare outage not caused by attack as CEO first suspected, but
by a single file that got too big (ArsTechnica)
SAME OLD SAME OLD SAME OLD SAME OLD ...
Duct tape and bent paper clips! -L
https://arstechnica.com/tech-policy/2025/11/cloudflare-broke-much-of-the-internet-with-a-corrupted-bot-management-file/
------------------------------
Date: Thu, 27 Nov 2025 08:28:40 -0500
From: paul wallich <pw@panix.com>
Subject: Keurig crash (caffiend?)
This one feels very old school, but I think it's still a useful reminder:
A few days ago my spouse complained that one of the "favorites" buttons on our
coffee maker (which trigger a preset temperature and brewing time) wasn't
working. I figured it was probably just a mechanical/electrical failure, and
we groused about shoddy manufacturing.
This morning another one of the buttons didn't work, but in a weird way:
When I pushed it, the little display briefly indicated the correct preset
values, then changed back to the default setting for one of them. So I
unplugged and replugged the coffee maker, waited for it to reboot, and --
presto! -- the buttons were working again.
I have no idea what went wrong in the code -- bit flip error, very slow
memory leak, something else entirely -- but somehow clearing the RAM of a
machine that typically runs for months or years at a time fixed it for now.
But this shows once again how even the simplest machines in our lives are
operated by piles of code of unknowable complexity, quality or fault
tolerance, developed with toolchains the end user knows nothing about, and
with (probably thankfully) no provisions for ever updating.
------------------------------
Date: Mon, 17 Nov 2025 17:10:03 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: This is what your AI girlfriend looks like without makeup
https://x.com/AdamLowisz/status/1990132224998486073
------------------------------
Date: Mon, 24 Nov 2025 18:11:50 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: The Most Joyless Tech Revolution Ever: AI Is Making Us Rich and
Unhappy (WSJ)
*Discomfort around artificial intelligence helps explain the disconnect
between a solid economy and an anxious public*
https://www.wsj.com/tech/ai/the-most-joyless-tech-revolution-ever-ai-is-making-us-rich-and-unhappy-6b7116a3?st=3DUBhQ8Z
EXCERPT:
Artificial intelligence might be the most transformative technology in
generations. It is also the most joyless.
While Wall Street greets AI with open arms, ordinary Americans respond with
ambivalence, anxiety, even dread.
This isn't like the dot-com era. A survey in 1995 found 72% of respondents
comfortable with new technology such as computers and the Internet. Just
24% were not.
Fast forward to AI now, and those proportions have flipped: just 31% are
comfortable with AI while 68% are uncomfortable, a summer survey for CNBC
found.
Why the difference? The dot-com bubble, like the AI boom, had its excesses
and absurdity. But it also shimmered with optimism and adventure. From
Fortune 500 CEOs to college dropouts, everyone had a web-based business
idea. Demand for digitally savvy workers was off the charts.
Today, the optimism is largely confined to AI architects and gimlet-eyed
executives calculating how much AI can reduce head count while workers
wonder whether they will be replaced by AI, or someone who knows AI. Meta
Platforms, Microsoft and Amazon, three of the leading purveyors of AI, have
all announced layoffs this year.
*A piece of the disconnect*
[Long but worthy item truncated for RISKS. PGN]
------------------------------
Date: Thu, 20 Nov 2025 11:21:00 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: AN0M (RISKS 34.79)
Am I understanding this correctly? It seems like a variation on the notion
of an encryption system with a backdoor (remember the Clipper chip?), except
that it's promoted only to presumed criminals.
------------------------------
Date: 19 Nov 2025 22:36:36 -0500
From: "John Levine" <johnl@iecc.com>
Subject: Re: Chinese researchers just unveiled a photonic quantum chip that
doesn't deliver a 1,000-fold speed boost to AI data centers (x)
>This "world first" 6-inch thin-film lithium niobate marvel just won the
>Leading Technology Award at World Internet Conference Wuzhen Summit, beating
>400+ global entries ...
Here's a less breathless article. It's a significant advance in photonics,
circuits that use light rather than electrons, and it seems that it will
speed up some kinds of computations, but it is far from a general-purpose AI
accelerator.
"Taking a step back, claims that the device can outstrip leading
NVIDIA graphics processors by a factor of 1,000 reflect the type of
performance gains quantum approaches are expected to deliver on
certain classes of problems, though these comparisons are often
faulty, as they depend heavily on the underlying task and are not
equivalent to general-purpose speed."
https://thequantuminsider.com/2025/11/15/chinas-new-photonic-quantum-chip-promises-1000-fold-gains-for-complex-computing-tasks/
------------------------------
Date: Wed, 19 Nov 2025 17:58:34 +0000
From: Martin Ward <martin@gkc.org.uk>
Subject: Re: Dog Accidentally Shoots and Injures a Pennsylvania Man, Police
Say (RISKS-34.80)
> The man had been cleaning a shotgun
... while it was still loaded??!!
I think the man was lucky to avoid a Darwin Award.
------------------------------
Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:
http://mls.csl.sri.com/mailman/listinfo/risks
=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) has moved to the ftp.sri.com site:
<risksinfo.html>.
*** Contributors are assumed to have read the full info file for guidelines!
=> OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
delightfully searchable html archive at newcastle:
http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
Also, ftp://ftp.sri.com/risks for the current volume/previous directories
or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
If none of those work for you, the most recent issue is always at
http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update it. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
<http://www.acm.org/joinacm1>
------------------------------
End of RISKS-FORUM Digest 34.81
************************