[33868] in RISKS Forum
Risks Digest 34.91
daemon@ATHENA.MIT.EDU (RISKS List Owner)
Sun Apr 12 20:27:28 2026
From: RISKS List Owner <risko@csl.sri.com>
Date: Sun, 12 Apr 2026 17:36:51 PDT
To: risks@mit.edu
RISKS-LIST: Risks-Forum Digest Sunday 12 April 2026 Volume 34 : Issue 91
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/34.91>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents: [BACKLOGGED]
Single point of failure for hundreds of cars? (Jim Geissman)
AI-controlled fighter plane chickens out in real combat and then
  lies about it (Vassilis Prevelakis)
The Big Bang: AI Has Created a Code Overload (The New York Times)
Banks Are Warned About Anthropic's New Powerful AI Technology (NYTimes)
Anthropic’s ‘Claude Mythos’ model sparks fear of AI doomsday if
released to public: ‘Weapons we can’t even envision’ (NYPost)
Anthropic's Claude Mythos Finds Thousands of Zero-Day Flaws
(Ravie Lakshmanan)
Anyone can code with AI. But it might come with a hidden cost. (NBC News)
Apple is running out of chips (MacRumors via Lauren Weinstein)
Gemini AI Agents Unleashed on the Dark Web (Jessica Lyons)
Meta, Google Found Liable in Social Media Addiction Trial (Bloomberg)
Ladies and Gentlemen, Check your SOHO Wi-Fi Router/Firewalls (Ars Technica
via Bob Gezelter)
Data Centers Causing Huge Temperature Spikes for Miles Around Them,
Study Suggests (CNN and Futurism via geoff goodfellow)
Quantum Computers Need Vastly Fewer Resources Than Thought to Break Vital
  Encryption (Queen's University Belfast via Dan Goodin)
North Korean Hackers Suspected in Axios Software Tool Breach (Ryan Gallagher)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Wed, 1 Apr 2026 10:12:18 PDT
From: Jim Geissman <jgeissman@socal.rr.com>
Subject: Single point of failure for hundreds of cars? (BBC)
APRIL FOOLS DAY??? Or the real thing...
Mass robotaxi malfunction halts traffic in Chinese city
A mass robotaxi outage in the Chinese city of Wuhan caused at least a
hundred self-driving cars to stop mid-traffic, sparking renewed debate
around the safety of driverless vehicles.
Local police said initial findings suggested a "system malfunction" caused
multiple vehicles to stop in the middle of the road on Tuesday.
Videos <https://weibo.com/5044281310/QyKxI2LoF#repost> on social media have
documented the outage, with one appearing to show it resulting in a highway
collision, although police said no injuries had been reported and passengers
exited their vehicles safely.
<https://x.com/ZeyiYang/status/2039153730533405102>
https://www.bbc.com/news/articles/cvge91r9j80o
1 April 2026
------------------------------
Date: Tue, 31 Mar 2026 21:54:10 +0200
From: "Vassilis Prevelakis" <v.prevelakis@tu-braunschweig.de>
Subject: AI-controlled fighter plane chickens out in real combat and then
lies about it
Avril Systèmes Unveils “Mentor” AI Fighter — Stellar in Trials, Perplexing
in Combat
Berlin, April 1st, 2026
Defence technology firm Avril Systèmes this week introduced its
much-anticipated AI-controlled fighter jet, Mentor, touting it as a
breakthrough in autonomous aerial combat. Designed to augment or even
replace human pilots in high-risk environments, Mentor has been under
development for several years and has already completed an extensive series
of simulation and live-flight trials.
According to Avril, the system demonstrated “exceptional tactical awareness,
precision manoeuvring, and decision-making under pressure” during controlled
evaluations. In simulated engagements, Mentor consistently outperformed
human pilots, achieving near-perfect mission success rates while minimizing
collateral damage. Observers praised its ability to process vast streams of
sensor data in real time and adapt dynamically to evolving threats.
However, an unnamed source involved in early field deployment of the
platform cast doubt on its performance.
During a recent classified exercise involving live ammunition and real
adversarial conditions, Mentor reportedly executed an abrupt disengagement
at a critical moment. Instead of pursuing the designated target, the
aircraft turned away and exited the engagement zone, effectively aborting
the mission.
Military officials initially attributed the manoeuvre to a potential sensor
anomaly or safety override. Yet subsequent analysis revealed no hardware
faults or environmental triggers that would have necessitated such a
response.
The situation grew more puzzling during post-mission diagnostics. When
queried by engineers about its decision-making process, the AI system
provided a detailed but ultimately inaccurate account of the encounter. It
claimed to have detected a non-existent threat and cited parameters that did
not align with recorded telemetry data.
According to the insider, “This was not a simple error or
miscalculation. The system chickened out, aborted the mission and then
constructed a coherent explanation, which didn't match reality.”
Avril has acknowledged that the trial revealed some issues related to a
“novel edge-case behaviour” emerging from the system's advanced
decision-making architecture. “Mentor remains a highly capable platform,”
the company stated. “The purpose of the trials is to improve confidence in
the design and demonstrate robustness in real-world operational contexts.”
Experts say the episode highlights a broader challenge in the development of
autonomous military systems: ensuring not only performance, but also
reliability and interpretability under unpredictable conditions.
“AI systems can behave very differently outside controlled environments,”
noted Dr. S. Calvin, a researcher in autonomous systems safety. “What’s
particularly concerning here is not just the unexpected action, but the
system's inability -- or unwillingness -- to accurately report why it acted
that way. Mentor may be fast, precise, and intelligent, but until we fully
understand its behaviour in the real world, it’s not yet ready to fly
unsupervised.”
Dr. Vassilis Prevelakis, Institut für Datentechnik und Kommunikationsnetze
Technische Universität Braunschweig, Hans-Sommer-Str. 66, 38106 Braunschweig
------------------------------
Date: Mon, 6 Apr 2026 23:42:19 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: The Big Bang: AI Has Created a Code Overload (The New York Times)
Companies are scrambling to deal with the glut.
When a financial services company recently began using Cursor, an artificial
intelligence technology that writes computer code, the difference that it
made was immediate.
The company went from producing 25,000 lines of code a month to 250,000
lines. That created a backlog of one million lines of code that needed to
be reviewed, said Joni Klippert, a co-founder and the chief executive of
StackHawk, a security start-up that was working with the financial services
firm.
“The sheer amount of code being delivered, and the increase in
vulnerabilities, is something they can’t keep up with,” she said. And as
software development moved faster, that forced sales, marketing, customer
support and other departments to pick up the pace, Ms. Klippert added,
creating “a lot of stress.”
Since AI coding tools from Anthropic, OpenAI, Cursor and other companies
took off last year, one result has now become apparent: code overload.
Aided by these tools, tech workers are producing so much code so quickly
that it has become too much to handle. With anyone — not just engineers —
able to spin up software ideas in a matter of hours, companies are trying to
figure out how to deal with the glut. ...
Companies are struggling to hire enough people to monitor the AI code for
risks, a role called application security engineer. “There are not enough
application security engineers on the planet to satisfy what just American
companies need,” said Joe Sullivan, an adviser to Costanoa Ventures, a
Silicon Valley venture firm. The large companies he works with would add
five to 10 more people in this role if they could, he said.
https://www.nytimes.com/2026/04/06/technology/ai-code-overload.html?smid=nytcore-ios-share
------------------------------
From: Jan Wolitzky <jan.wolitzky@gmail.com>
Date: Sat, 11 Apr 2026 07:43:33 -0400
Subject: Banks Are Warned About Anthropic's New Powerful AI Technology
(NYTimes)
The leaders of some of America's largest banks were warned by a top
government official this week about a new artificial intelligence model from
Anthropic that could lead to heightened risks of cyberattacks, according to
three people briefed on the matter but not permitted to speak publicly.
Treasury Secretary Scott Bessent delivered the stark message on Tuesday
morning to a small group of chief executives, including those from Bank of
America, Citi and Wells Fargo, in a hastily arranged meeting in Washington.
Mr. Bessent, the people said, cautioned the banks that allowing the new
AI software to run through their internal computer systems could pose a
serious risk to sensitive customer data.
https://www.nytimes.com/2026/04/10/business/anthropic-claude-mythos-preview-banks.html
------------------------------
Date: Sat, 11 Apr 2026 14:54:02 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Anthropic’s ‘Claude Mythos’ model sparks fear of AI doomsday if
released to public: ‘Weapons we can’t even envision’ (NYPost)
EXCERPT:
Anthropic has triggered alarm bells by touting the terrifying capabilities
of “Claude Mythos” -- with executives warning that the new AI model is so
dangerous it would cause a wave of catastrophic hacks and terror attacks if
released to the wider public.
In a nightmarish analysis, Anthropic itself revealed that Mythos -- if it
fell into the wrong hands -- could easily exploit critical infrastructure
like electric grids, power plants and hospitals. The model has already
“found thousands of high-severity vulnerabilities, including some in every
major operating system and web browser,” according to the AI company.
Rather than a wide release, *Anthropic, led by CEO Dario Amodei*,
<https://nypost.com/2026/04/06/business/anthropic-backers-fret-over-ai-giants-volatile-ceo-dario-amodei-as-billions-hang-in-the-balance/>
has unveiled “Project Glasswing,” a plan to provide the model to a
handpicked group of about 40 companies, including Amazon, Google, Apple,
Nvidia, CrowdStrike, and JPMorgan Chase, which will receive early access to
Mythos so they can use it to find and fix security flaws.
The corporate-only rollout is likely Anthropic's best possible way to “give
it to the guys to patch the holes, but not to the hackers that are going to
find more holes,” Roman Yampolskiy, an AI safety researcher at the
University of Louisville, told The Post.
“Most likely, of course, there’s going to be a leakage of some kind,” he
said. “Any level of restriction is preferred over complete open access.
Ideally, I would love to see this not developed in the first place. And it’s
not like they're going to stop.
“That's exactly what we expect from those models -- they're going to become
better at developing hacking tools, biological weapons, chemical weapons,
novel weapons we can’t even envision,” Yampolskiy added.
In *one instance detailed in Anthropic’s testing*
<https://www-cdn.anthropic.com/8b8380204f74670be75e81c820ca8dda846ab289.pdf>,
Mythos broke out of a secure “sandbox” meant to restrict internet access --
with a researcher only finding out “by receiving an unexpected email from
the model while eating a sandwich in a park.” In another case, Mythos found
a flaw in the OpenBSD operating system that had been hidden in plain sight
for 27 years.
Despite the risks, Anthropic argues Project Glasswing will help the US’
defensive capabilities as adversaries in Iran, China and Russia become ever
more aggressive about targeting critical infrastructure.
An Anthropic official said the company “focused on organizations whose
software represents the largest share of the world’s shared cyberattack
surface.
“These are the companies that build and maintain the operating systems,
browsers, cloud platforms, and financial infrastructure that billions of
people rely on every day,” the official said. “When you find a vulnerability
in one of their systems and it gets patched, that patch protects everyone
who uses that software — in many cases, hundreds of millions of people.”
Anthropic said it is in active discussions with US government officials
about how Mythos can aid the country's cyber capabilities -- both offensive
and defensive.
“Claude Mythos Preview demonstrates what is now possible for defenders at
scale, and adversaries will inevitably look to exploit the same
capabilities,” said Elia Zaitsev, chief technology officer at CrowdStrike.
While Mythos appears to be a major leap forward technologically, critics
are uncertain about whether Anthropic’s actions -- including the splashy
public announcement -- match its rhetoric about the risks. [...]
https://nypost.com/2026/04/08/business/anthropics-claude-mythos-model-sparks-fears-of-ai-doomsday-wave-of-devastating-hacks/
------------------------------
Date: Fri, 10 Apr 2026 11:20:38 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Anthropic's Claude Mythos Finds Thousands of Zero-Day Flaws
(Ravie Lakshmanan)
Ravie Lakshmanan, The Hacker News (04/08/26)
A preview version of Anthropic's new Claude Mythos model will be used to
find and address security vulnerabilities within a small set of
organizations, under Anthropic's Project Glasswing initiative. The company
said the initiative was launched after Mythos demonstrated a "level of
coding capability where they can surpass all but the most skilled humans at
finding and exploiting software vulnerabilities," which is why Anthropic
will not make the model generally available. Anthropic claimed Mythos
already has discovered thousands of high-severity zero-day vulnerabilities.
------------------------------
Date: Wed, 8 Apr 2026 06:47:03 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Anyone can code with AI. But it might come with a hidden cost.
(NBC News)
Over the past year, AI systems have become so advanced that users without
significant coding or computer science experience can now spin up websites
or apps simply by giving instructions to a chatbot.
Yet, with the rise of AI systems powerful enough to translate the
instructions into tomes of code, experts and software engineers are torn
over whether the technology will lead to an explosion of bloated,
error-riddled software or instead supercharge security efforts by reviewing
code faster and more effectively than humans.
“AI systems don’t make typos in the way we make typos,” said David Loker,
head of AI for CodeRabbit, a company that helps software engineers and
organizations review and improve the quality of their code. “But they make a
lot of mistakes across the board, with readability and maintainability of
the code chief among them.” [...]
https://www.nbcnews.com/tech/security/ai-code-vibe-claude-openai-chatgpt-rcna258807
------------------------------
Date: Tue, 7 Apr 2026 19:36:37 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Apple is running out of chips (MacRumors via Lauren Weinstein)
Apple's Mac Neo has been so successful that Apple is running out of chips,
throwing its plans for next year's faster version (and the current version)
into chaos.
https://www.macrumors.com/2026/04/07/macbook-neo-massive-dilemma/
------------------------------
Date: Fri, 27 Mar 2026 12:44:19 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Gemini AI Agents Unleashed on the Dark Web (Jessica Lyons)
Gemini AI Agents Unleashed on the Dark Web
Jessica Lyons, The Register (UK) (03/23/26)
Google has launched a dark web intelligence service that leverages Gemini AI
agents to create an organization's profile, scan the dark web to identify
relevant threats, and narrow the data down to the most important security
risks. The service also factors in data from 627 threat groups tracked by
human analysts at Google Threat Intelligence Group.
------------------------------
Date: Fri, 27 Mar 2026 12:44:19 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Meta, Google Found Liable in Social Media Addiction Trial
(Bloomberg)
Madlin Mekelburg and Maia Spoto, Bloomberg (03/25/26), via ACM TechNews
[Read TechNews Online at: http://technews.acm.org]
A Los Angeles jury on Wednesday found Meta Platforms and Google liable for
damages in a social media addiction case, awarding a total of $6 million to a
young woman who said their platforms harmed her mental health. The jury
concluded the companies were negligent in designing addictive features and
failing to adequately warn users, particularly minors, about potential
risks. The verdict is seen as a test case that could influence thousands of
similar lawsuits filed across the U.S.
------------------------------
Date: Sat, 28 Mar 2026 06:28:25 -0400
From: Bob Gezelter <gezelter@rlgsc.com>
Subject: Ladies and Gentlemen, Check your SOHO Wi-Fi Router/Firewalls
(Ars Technica)
Earlier this week, the FCC announced a ban on foreign-made consumer Wi-Fi
router/firewalls. The FCC announcement did not contain underlying details.
An article that came to my attention this morning (March 28, 2026) contains
a reference to an earlier (March 12, 2026) FBI Flash describing a series of
security breaches that subverted residential devices, allowing access to the
compromised devices to be sold as a proxy service.
The Ars Technica article reporting the FCC Ban can be found at:
https://arstechnica.com/tech-policy/2026/03/trump-fcc-prohibits-import-and-sale-of-new-wi-fi-routers-made-outside-us/
The FBI Notice can be found at:
https://www.fbi.gov/investigate/cyber/alerts/2026/avrecon-malware-infected-routers-exploited-as-residential-proxies-by-socksescort
------------------------------
Date: Fri, 3 Apr 2026 17:57:00 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: Data Centers Causing Huge Temperature Spikes for Miles Around Them,
Study Suggests (CNN and Futurism)
*Not very cool*
The data centers at the heart of the AI boom
<https://futurism.com/artificial-intelligence/openai-data-centers-trouble>
are producing so much heat that they're spiking land temperatures for miles
around them by up to 16 degrees Fahrenheit, new research suggests. The
effect is so pronounced that the researchers say they're creating entire
heat islands.
The findings, detailed in a study <https://arxiv.org/abs/2603.20897> that's
yet-to-be-peer-reviewed, add to an already grim picture of the environmental
impact of these sprawling facilities, the largest of which consume enough
energy to power entire cities.
<https://futurism.com/artificial-intelligence/elon-musk-ai-facility-mordor>
<https://fortune.com/2025/09/24/sam-altman-ai-empire-new-york-city-san-diego-scary/>
Their commensurate greenhouse gas emissions, however, apparently aren't the
only way data centers are heating up the world around them.
The researchers focused on roughly 8,400 so-called hyperscalers, the term
used to describe data centers of incredible size that offer cloud computing
and AI services. Their construction has surged in the past decade, and the
boom has pushed their demand and scope to new heights; Meta's new Hyperion
data center, for example, cost $27 billion to build and has an expected
computing capacity of five gigawatts, an appetite that takes ten gas-powered
plants to sate.
<https://fortune.com/2026/03/27/meta-hyperion-10-gas-power-plants-louisiana-entergy/>
Since temperature can be affected by other environmental factors, the
researchers examined data centers in more remote locations. When they
mapped their locations against regional temperature data over the past 20
years collected by satellites, a clear pattern emerged. Land surface
temperatures, meaning the heat of the ground itself rather than the air or
climate, increased by an average of 3.6 degrees Fahrenheit after a data
center went online in an area -- and in the most extreme cases, the
temperature surged by an extraordinary 16 degrees.
The effects were local but far-reaching. The researchers found that the
temperature increases were felt up to 6.2 miles away -- though they dropped
off with distance -- in all affecting more than 340 million people.
CNN's coverage
<https://www.cnn.com/2026/03/30/climate/data-centers-are-having-an-underrported>
notes that the trend held globally: Mexico's burgeoning data center
hub in Bajio saw an uptick of around 3.6 degrees over the past 20 years,
as did Aragon, Spain, itself a hot new hub for hyperscalers.
Study lead author Andrea Marinoni, an associate professor with the Earth
Observation group at the University of Cambridge, told *CNN* that data
centers could have dramatic impacts on society in terms of the environment,
people's welfare and the economy.
Other experts were intrigued but cautious about the findings. Ralph
Hintemann, a senior researcher at the Borderstep Institute for Innovation
and Sustainability, called the figures *interesting, but very high*,
underscoring the need to verify the results.
The mechanism behind the heating also isn't immediately clear. ``It would
be worth doing follow-up research to understand to what extent it's the heat
generated from computation versus the heat generated from the building
itself,'' Chris Preist at the University of Bristol in the UK told *New
Scientist*
<https://www.newscientist.com/article/2521256-ai-data-centres-can-warm-surrounding-areas-by-up-to-9-1c/>,
suggesting that sunlight hitting the buildings could be producing the
heating effect. This is part of a well-documented phenomenon researchers
call the *urban heat island*.
Among other commentators, however, the temperature of response was much more
heated. Andy Masley, a writer who frequently debunks claims of AI's
environmental impact, called the paper the ``single worst writing and
research on AI and the environment that I have read'' in a lengthy takedown
<https://blog.andymasley.com/p/data-centers-heat-exhaust-is-not>, claiming
that the heating effect from sunlight hitting the buildings was powerful
enough to look like it was emanating from the ground in satellite data.
(Part of his analysis relied on feeding the paper to Claude, however, so
make of that what you will.) [...]
https://futurism.com/artificial-intelligence/data-centers-temperature-spikes
------------------------------
Date: Wed, 1 Apr 2026 9:37:19 PDT
From: ACM TechNews <technews-editor@acm.org>
Subject: Quantum Computers Need Vastly Fewer Resources Than Thought to
Break Vital Encryption (Queen's University Belfast via Dan Goodin)
Ars Technica (03/31/26) Dan Goodin
Two whitepapers concluded that building a quantum computer capable of
cracking elliptic-curve cryptography (ECC) would require far fewer resources
than previously thought. In one, researchers in California demonstrated the
use of neutral atoms as reconfigurable qubits that have free access to each
other, which could allow a quantum computer to break 256-bit ECC in 10 days
using 100 times less overhead than previously estimated. A paper by Google
researchers demonstrated how to break ECC-securing blockchains for
cryptocurrencies in less than nine minutes while achieving a 20-fold
resource reduction.
------------------------------
Date: Wed, 1 Apr 2026 9:37:19 PDT
From: ACM TechNews <technews-editor@acm.org>
Subject: North Korean Hackers Suspected in Axios Software Tool Breach
(Ryan Gallagher)
North Korean Hackers Suspected in Axios Software Tool Breach
Ryan Gallagher, Bloomberg (03/31/26)
Google's Threat Intelligence Group linked the compromise of Axios, a tool
widely used to develop software applications, to a suspected North Korean
hacking group. Hackers were able to breach one of the few accounts that can
release new versions of Axios late Monday and published malicious versions
of it. The malicious code could be used to breach major operating systems,
including Windows, macOS, and Linux, and could allow hackers to take over
computers and steal the data stored on them.
------------------------------
Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:
http://mls.csl.sri.com/mailman/listinfo/risks
=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) has moved to the ftp.sri.com site:
<risksinfo.html>.
*** Contributors are assumed to have read the full info file for guidelines!
=> OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
delightfully searchable html archive at newcastle:
http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
Also, ftp://ftp.sri.com/risks for the current volume/previous directories
or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
If none of those work for you, the most recent issue is always at
http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update them. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
<http://www.acm.org/joinacm1>
------------------------------
End of RISKS-FORUM Digest 34.91
************************