From: RISKS List Owner <risko@csl.sri.com>
Date: Fri, 9 Jan 2026 19:01:28 PST
To: risks@mit.edu

RISKS-LIST: Risks-Forum Digest  Friday 9 January 2026  Volume 34 : Issue 83

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/34.83>
The current issue can also be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
Aviation Delays Ease as Airlines Complete Airbus Software Rollback
 (Simon Sharwood)
Chinese Peptides Are the Latest Biohacking Trend in the Tech World
 (The New York Times)
Thieves are stealing keyless cars in minutes. Here's how to protect
 your vehicle (Los Angeles Times)
Software Error Forces 325,000 Californians to Replace Real IDs (Neil Vigdor)
NASA Library closing (The NY Times)
EFF's Investigations Expose Flock Safety's Surveillance Abuses: 2025 in
 Review (via Monty Solomon)
Zoom's "AI Companion" is surveillance as a service (via Gabe Goldberg)
AI app apologises over false crime alerts across U.S. (BBC)
Google AI deletes user's entire hard drive (via Geoff Kuenning)
Boys at her school shared AI-generated, nude images of her.  She was the
 one expelled from Sixth Ward Middle School (ABC News)
CIA, ESP, Psychic Program, Spy Secrets, Declassified Documents (via geoff g)
He Switched to eSIM, and Is Full of Regret (WiReD)
AT&T to launch new service for customers as it takes on T-Mobile
 (via Monty Solomon)
The big regression (via Monty Solomon)
AI Customer DisService Slop (Henry Baker)
News orgs win fight to access 20M ChatGPT logs. Now they want more.
 (Ars Technica)
Capability Maturity Models and generative artificial intelligence
 (Rob Slade)
Fake AI Chrome Extensions Steal 900K Users' Data (Dark Reading)
AI starts autonomously writing prescription refills in Utah (Ars Technica)
Stolen Data Poisoned to Make AI Systems Return Wrong Results
 (Thomas Claburn)
Good cannot successfully battle Evil using only good means is the
 essential message of Machiavelli's "The Prince" -- 1513 (via LW)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Wed, 3 Dec 2025 11:10:50 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Aviation Delays Ease as Airlines Complete Airbus Software Rollback
 (Simon Sharwood)

Simon Sharwood, *The Register* (U.K.) (12/01/25), via ACM TechNews

Airlines worldwide faced delays as Airbus rolled back a software update on
around 6,000 A320 planes after JetBlue Flight 1230 experienced a sudden
nose-down drop due to a flight control issue believed linked to corrupt data
caused by intense solar radiation. The problem, linked to the aircraft's
elevator and aileron computer, could push elevators beyond structural
limits, potentially endangering aircraft. Airbus and aviation authorities
ordered the rollback from version L104 to L103+, a procedure taking roughly
three hours.

------------------------------

Date: Tue, 6 Jan 2026 15:24:02 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Chinese Peptides Are the Latest Biohacking Trend in the Tech World
 (The New York Times)

The gray-market drugs flooding Silicon Valley reveal a community that
believes it can move faster than the FDA.

“*Do your own research* has lots of dangers,” Dr. Topol said. “If they
really were good citizen scientists, they would know what the criteria are:
randomized, placebo-controlled trials; peer-reviewed publications
independent of the company. We don’t have any of those studies for most of
these peptides.”  [...]

Brooke Bowman, 38, is the bushy-haired, fast-talking chief executive of
Vibecamp, an annual gathering of the rationalist and post-rationalist
communities. These groups are interested in metacognition, or improving the
art of thinking itself -- a proclivity that makes them especially interested
in mind-enhancing substances. She considers herself a transhumanist --
someone who believes in using technology to augment human abilities -- and
even got an RFID chip implanted in her hand to link to her Telegram profile
when tapped. (The chip, which she got at a “human augmentation dance party,”
was installed too deep and doesn’t work.)

https://www.nytimes.com/2026/01/03/business/chinese-peptides-silicon-valley.html

What, me worry?


  [No worries.  It's supposed to be a bird in the hand, and a chip on the
  shoulder.  But Dr. Topol wants evidence-based peptide research, and that
  would make a lot of sense.  PGN]

------------------------------

Date: Sat, 29 Nov 2025 06:57:07 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: Thieves are stealing keyless cars in minutes. Here's how to protect
 your vehicle (Los Angeles Times)

Car thieves are using tablets and antennas to steal keyless or "push to
start" vehicles, police warn, but there are steps owners can take to protect
their vehicles.

https://www.latimes.com/california/story/2025-11-26/thieves-are-stealing-keyless-cars-in-minutes-heres-how-to-protect-your-vehicle

------------------------------

Date: Wed, 7 Jan 2026 11:26:01 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Software Error Forces 325,000 Californians to Replace Real IDs
 (Neil Vigdor)

Neil Vigdor, *The New York Times* (01/02/26), via ACM TechNews

The California Department of Motor Vehicles identified a software glitch
involving the expiration dates applied to the Real IDs of around 325,000
legal immigrants residing in the state, making them valid beyond the end of
their legal stay in the U.S. The software error, which affected 1.5% of Real
ID holders, applied the same renewal interval to legal immigrants as to all
other residents. Holders of the affected Real IDs will need to replace them.
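
The Times summary implies a simple date-arithmetic flaw.  A minimal sketch
of the failure mode, in Python (the field names and the eight-year term are
hypothetical; the DMV's actual code is not public):

  from datetime import date, timedelta

  STANDARD_TERM = timedelta(days=8 * 365)  # hypothetical renewal interval

  def buggy_expiration(issued: date) -> date:
      # The reported flaw: one interval applied to every holder.
      return issued + STANDARD_TERM

  def fixed_expiration(issued: date, legal_stay_ends: date | None) -> date:
      # Cap validity at the end of the holder's authorized stay.
      expires = issued + STANDARD_TERM
      if legal_stay_ends is not None and legal_stay_ends < expires:
          expires = legal_stay_ends
      return expires

  print(buggy_expiration(date(2025, 6, 1)))                     # 2033-05-30
  print(fixed_expiration(date(2025, 6, 1), date(2027, 3, 15)))  # 2027-03-15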

------------------------------

Date: Sat, 3 Jan 2026 20:28:06 -0500
From: David Lesher <wb8foz@8es.com>
Subject: NASA Library closing (The NY Times)

https://www.nytimes.com/2025/12/31/climate/nasa-goddard-library-closing.html?unlocked_article_code=1.BlA.fYTN.IyBt401FgjzK&smid=url-share

<https://www.reddit.com/r/maryland/comments/1q23hym/nasas_largest_library_is_closing_amid_staff_and/>

The current administration is shuttering the NASA Goddard library today and
ordering all the books and documents thrown out.

I am asking anyone in the area with a vehicle or backpack to show up and
rescue as many books as possible. I am asking anyone who knows someone
working at the trash company or Goddard to reach out.

There may be nothing we can do but even if we save only one book it will be
worth it. Even if only two people show up and leave empty handed it'll be
better than doing nothing.

If you need any further motivation, I'll pay you a dollar per book rescued;
or, you know, you could sell it on eBay for lots of money. While I'd rather
have these items remain in a library, literally anything is better than
sending them to the dump.

But please show up today and early next week if you're able. It'll take time
for the library to toss everything out and trash service is rarely instant,
but within the month we will likely lose some of our space and science
history forever if we don't save this collection.

  [Satire: Today's mantra has become:
    ``Who needs libraries when we have AI?''
  Sadly, we really need libraries that do not purge books that are factual.
  PGN]

------------------------------

Date: Sat, 3 Jan 2026 23:05:38 -0500
From: Monty Solomon <monty@roscom.com>
Subject: EFF's Investigations Expose Flock Safety's Surveillance Abuses:
 2025 in Review

https://www.eff.org/deeplinks/2025/12/effs-investigations-expose-flock-safetys-surveillance-abuses-2025-review

------------------------------

Date: Sun, 21 Dec 2025 02:40:17 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Zoom's "AI Companion" is surveillance as a service

From a friend... be cautious here.

- - -  Forwarded Message:
Date: Tue, 16 Dec 2025 12:42:21 -0600
Subject: Life: Zoom's "AI Companion" is surveillance as a service

Out of curiosity I turned on Zoom’s “AI Companion” during a call. Here’s the
notification their lawyers had them send me. Net: Zoom sells every piece of
information they gather. See the *bolded text*.

*From:* ZoomInfo Notification <noreply@zoominformation.com>
*Sent:* Thursday, December 11, 2025 1:16 PM
*Subject:* Notice of personal information processing

<https://aooptout.zoominformation.com/acton/ct/43119/s-0200-2512:0/Bct/l-0334/l-0334:6e96/ct0_0/1/lu?sid=TV2%3Ae0Kfrmrqd>

Personal Information Notice

This Personal Information Notice is to inform you of the collection,
processing, and sale of certain personal information or personal data about
you ("*Personal Information*"). ZoomInfo collects business contact and
similar information related to individuals when they are working in their
professional or employment capacity, and uses this information to create
professional profiles of individuals (“*Professional Profiles*”) and
profiles of businesses (“*Business Profiles*”). *We provide this information
to our customers, who are businesses trying to reach business professionals
for their own business-to-business sales, marketing, and recruiting
activities.*  You can opt out of our database by visiting our Trust Center
<https://aooptout.zoominformation.com/acton/ct/43119/s-0200-2512:0/Bct/l-0334/l-0334:6e96/ct1_0/1/lu?sid=TV2%3Ae0Kfrmrqd>.
At the Trust Center, you can
also submit an access request or claim your professional business profile in
order to make updates to your information. Using the Trust Center is the
quickest and easiest way to access your information or have it deleted or
corrected. However, if you prefer to email or call us, our contact
information is listed under the /Who We Are/ section below. For additional
information, please review our Privacy Policy.

<https://aooptout.zoominformation.com/acton/ct/43119/s-0200-2512:0/Bct/l-0334/l-0334:6e96/ct2_0/1/lu?sid=TV2%3Ae0Kfrmrqd>.
[...]

------------------------------

Date: Tue, 23 Dec 2025 11:15:15 -0700
From: Matthew Kruk <mkrukg@gmail.com>
Subject: AI app apologises over false crime alerts across U.S. (BBC)

https://www.bbc.com/news/videos/c4g4v3yd28yo

A company behind an AI-powered app called CrimeRadar has apologised for the
distress caused by false crime alerts issued to local US communities after
a BBC Verify investigation.

CrimeRadar uses artificial intelligence to monitor openly available police
radio communications, automatically generating a transcript and then
producing crime alerts for users across the US.

BBC Verify has found multiple instances from Florida to Oregon of
CrimeRadar sending misleading and inaccurate alerts about serious crime to
local residents - as Thomas Copeland explains.

------------------------------

Date: Sun, 07 Dec 2025 13:54:28 -0800
From: Geoff Kuenning <geoff@cs.hmc.edu>
Subject: Google AI deletes user's entire hard drive

A user who was using Google Antigravity to create a small application needed
to clear his cache while debugging.  He asked Antigravity to do that, and
rather than issuing the proper command it did "rmdir D:\", which blew away
all the files on that drive.  Oops.

This is, of course, a variation on the risks of using autocomplete.

https://www.tomshardware.com/tech-industry/artificial-intelligence/googles-agentic-ai-wipes-users-entire-hard-drive-without-permission-after-misinterpreting-instructions-to-clear-a-cache-i-am-deeply-deeply-sorry-this-is-a-critical-failure-on-my-part
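
One standard mitigation is to gate an agent's shell access behind a human
confirmation step for anything that looks destructive.  A minimal generic
sketch in Python (this is the usual pattern, not Antigravity's actual
safeguard):

  import re
  import subprocess

  # Commands that can destroy data; extend as needed.
  DESTRUCTIVE = re.compile(r"^\s*(rm|rmdir|rd|del|format|mkfs|dd)\b", re.I)

  def run_agent_command(cmd: str) -> None:
      # Run a command proposed by an agent, but require an explicit
      # human go-ahead before anything that looks destructive.
      if DESTRUCTIVE.match(cmd):
          ok = input(f"Agent wants to run: {cmd!r} -- proceed? [y/N] ")
          if ok.strip().lower() != "y":
              print("Refused.")
              return
      subprocess.run(cmd, shell=True, check=False)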

(I am reminded of an incident more than a couple of decades ago, when a
student sysadmin discovered that somebody had written an "Adventure" shell
whose commands were patterned after the popular game of that name.  He
switched to root access to install it on our main system, and then tested it
while still running as root.  A few minutes later he triggered the message
"You have awakened the rm -rf monster."  Fortunately, he immediately aborted
the comand so the damage was limited.)

------------------------------

Date: Tue, 23 Dec 2025 18:04:14 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Boys at her school shared AI-generated, nude images of her. She
 was the one expelled from Sixth Ward Middle School

A 13-year-old girl at a Louisiana middle school got into a fight with
classmates who were sharing AI-generated nude images of her.

https://abc7chicago.com/post/boys-school-shared-ai-generated-nude-images-she-was-expelled-sixth-ward-middle/18306695/

  [Must be a front-ward denying back ward to blame the victim?  PGN]

Ashley MacIsaac Concert Canceled After AI Falsely Identifies Him as Sex
Offender. A Google AI search confused him with a different man named McIsaac

https://exclaim.ca/music/article/ashley-mac-isaac-concert-cancelled-after-ai-falsely-identifies-him-as-sex-offender

  [PGN is back:
    A well-known young man from Cape Breton
    Was falsely accused of mispettin'.
      AI picked MacIsaac
      Instead of the guy sick.
    And left his whole audience a-frettin'.
   ]

------------------------------

Date: Tue, 23 Dec 2025 16:27:47 -0700
From: geoff goodfellow <geoff@iconia.com>
Subject: CIA, ESP, Psychic Program, Spy Secrets, Declassified Documents

Third Eye Spies (FULL "Remote Viewing" DOCUMENTARY)

For more than 20 years the CIA studied psychic abilities for use in their
top-secret spy program. With previously classified details about ESP now
finally coming to light, there can be no more secrets. You paid for it; you
deserve to know about it. A psychic spy program developed during the Cold
War (Russia/USSR v USA) escalated after a Stanford Research Institute
experiment publicized classified intel. As a result, the highly successful
work of physicist Russell Targ was co-opted by the CIA and hidden for
decades due to the demands of ‘national security.’ But when America's
greatest psychic spy dies mysteriously, Targ fights to get their work
declassified; even if it means going directly to his former enemies in the
Soviet Union to prove the reality of ESP to the world at large. Revealed
for the first time, this is the newly declassified true story of America's
psychic spies. The implications of their success show us all what we are
truly capable of -- now there can be no more secrets...

https://www.youtube.com/watch?v=-WUaS_Ynd_M

------------------------------

Date: Mon, 5 Jan 2026 14:32:20 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: He Switched to eSIM, and Is Full of Regret (WiReD)

https://www.wired.com/story/i-switched-to-esim-and-i-am-full-of-regret

Feature? Bug?

------------------------------

Date: Sun, 4 Jan 2026 23:29:02 -0500
From: Monty Solomon <monty@roscom.com>
Subject: AT&T to launch new service for customers as it takes on T-Mobile

AT&T makes a bold promise to customers while battling growing competition.
https://www.thestreet.com/retail/att-to-launch-new-service-for-customers-as-it-takes-on-t-mobile

To help keep pace with its competitors, AT&T plans to sweeten the deal for
its phone customers by launching a limited beta program during the first
half of this year. The program will grant select customers and FirstNet
users early access to satellite-based cellular service, according to a
recent press release.

Since 2024, AT&T has been collaborating with AST SpaceMobile to develop a
satellite cellular service for customers that will provide coverage in areas
traditional cell towers are unable to reach, especially in remote or
off-grid locations.

------------------------------

Date: Tue, 6 Jan 2026 20:45:37 -0500
From: Monty Solomon <monty@roscom.com>
Subject: The big regression

My folks are in town visiting us for a couple months so we rented them a
house nearby.  It’s new construction. No one has lived in it yet. It’s amped
up with state of the art systems. You know, the ones with touchscreens of
various sizes, IoT appliances, and interfaces that try too hard.

And it’s terrible. What a regression.

https://world.hey.com/jason/the-big-regression-da7fc60d

------------------------------

Date: Sun, 04 Jan 2026 16:28:41 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: AI Customer DisService Slop

Companies are quickly replacing humans (often from south of the equator)
with AIs for their "Customer Service" issues.

The good news: the AI customer service "agent" typically speaks English
without a heavy accent and with good grammar.

The bad news: as best I can tell, the AI "Customer Service" agent is a
*placebo* used to politely jolly us along without actually doing anything --
analogous to those placebo crosswalk buttons.

Asking to talk to the agent's boss won't do anything, either.

Emailing support doesn't help -- email gets read by an AI, as well.

This situation reminds me of an old joke about the professor who decided he
wanted to save time and teach his classes via tape recordings.  He stopped
by his classroom after a few weeks and found the room empty of people, but
student tape recorders at each desk.

Where is Lily Tomlin's Ernestine when we desperately need her?

------------------------------

Date: Tue, 6 Jan 2026 20:41:50 -0500
From: Monty Solomon <monty@roscom.com>
Subject: News orgs win fight to access 20M ChatGPT logs. Now they want more.
 (Ars Technica)

https://arstechnica.com/ai/2026/01/news-orgs-want-openai-to-dig-up-millions-of-deleted-chatgpt-logs/

------------------------------

Date: Wed, 7 Jan 2026 05:55:25 -0800
From: Rob Slade <rslade@gmail.com>
Subject: Capability Maturity Models and generative artificial intelligence

I've just had a notification from LinkedIn exhorting me to keep up with
cybersecurity and artificial intelligence frameworks and maturity models.

I assume that when they say artificial intelligence, they really mean
generative artificial intelligence, since the world, at large, seems to have
forgotten the many other approaches to artificial intelligence, such as
expert systems, game theory, and pattern recognition.  (Computers, at least
until we get quantum computers, seem to be particularly bad at pattern
recognition.  I tend to tell people that this is because computers have no
natural predators.)

I have no problems with frameworks.  I have been teaching about
cybersecurity frameworks for a quarter of a century now.  Since I've been
teaching about them, I have also had to explore, in considerable depth,
frameworks in regard to capital risk (from the finance industry), business
analysis breakdown frameworks, checklist security frameworks, cyclical
business improvement and enhancement frameworks, and a number of others.
I've got a specialty presentation on the topic for conferences.  I include
maturity models.  In a fair amount of detail.  It's an important model
within the field of frameworks.  It not only tells you where you are,
but also, in strategic terms, what steps to take next to improve your
overall business operations.

But a capability and maturity model?  For a technology, and even an
industry, that didn't even exist four years ago?

Okay, let's set aside, for a moment, the fact that the entire industry is
only four years old.  We needn't argue about that.  I've got a much stronger
case to make that this is a really stupid idea.

Capability maturity models, in general, have five steps.  (Yes, I know,
there are some people who add a sixth step, and sometimes even a seventh,
usually in between the existing steps.)  But let's just stick with the basic
five-step maturity model.

The first step is usually "chaotic."  Some models now call this first step
"initial," rather than "chaotic," since nobody thinks that they work in a
chaotic industry.  But, let's face it: when a new industry starts up, it's
chaos.  You really don't know what you're doing.  If you are really lucky,
you succeed, in that you make enough revenue, or you have patient enough
investors, to continue on until you find out what you are doing, and how to
make enough revenue to survive, by doing it.  That's chaotic.  It doesn't
mean that you aren't working hard.  It doesn't mean that you don't have at
least some idea of what you are doing, and the technology, or the business
model, that you are working with.  But, that's just the nature of a startup.
You don't have a really good idea of what you are doing.  You don't have a
really good idea of what the market is.  You may have some idea of what your
customers are like, but you don't have an awful lot of hard information
about that.  It's basically chaos.

That's basically where generative artificial intelligence is right now.

Building upon the idea of neural networks, which has been around for eighty
years (and was deeply flawed even to begin with), about a dozen companies
have been able to build large language models.  These LLMs have been able to
pass the Turing test.  If you're chatting with a chatbot, you're not really
sure whether you're chatting with a chatbot, or some really boring person
who happens to be able to call up dictionary entries really quickly.  We
know enough about neural networks, and Markov chain analysis, and Bayesian
analysis, to have a very rough idea of how to build these models, and how
they operate.  But we still don't really know how they are coming up with
what they're coming up with.  We haven't been able to figure out how to
keep them from simply making stuff up and telling us wildly wrong "facts."  We
haven't been able, sufficiently reliably, to tell them not to tell us stuff
that's really, really dangerous.  We try to put guard rails on them, but we
keep on getting surprised by how often they present us with particularly
dangerous text, in ways we never expected.
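
For readers who have never seen the "very rough idea" in miniature: a toy
word-level Markov chain generates text purely from observed next-word
statistics.  This Python sketch illustrates only the statistical principle;
real LLMs learn neural representations, not a lookup table:

  import random
  from collections import defaultdict

  def train(text: str) -> dict:
      # Record which word follows which -- a crude statistical model.
      chain = defaultdict(list)
      words = text.split()
      for prev, nxt in zip(words, words[1:]):
          chain[prev].append(nxt)
      return chain

  def generate(chain: dict, start: str, n: int = 10) -> str:
      out = [start]
      for _ in range(n):
          followers = chain.get(out[-1])
          if not followers:
              break
          out.append(random.choice(followers))  # sample the next "token"
      return " ".join(out)

  chain = train("we know enough to build it but we do not know how it works")
  print(generate(chain, "we"))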

We don't know what we're doing.  Not really.  So it's chaotic.

We don't really know what we're doing.  So, we don't really know, quite yet,
how to make money off of what we're doing.  Yes, some businesses have been
able to find specific niches where the currently available functions of
large language models can be rented, and then packaged, to provide useful
help in some specific fields.  Some companies that are on the edges of this
idea of genAI are able to rent LLM capabilities from the few companies that
have built large language models, and have been able to find particular
tasks, which they can then perform for businesses, and get enough revenue to
survive.  And yes, through low rank adaptation, either the major large
language model companies, or some companies that are renting basic functions
from them, are able to produce specialty generative AI functions, and make
businesses out of them.  But the industry as a whole is still spending an
awful lot more money building the large language models than it is making
in revenue.  So we still don't know
how generative artificial intelligence works, and we still haven't figured
out how to make money from it.  It's chaotic.
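
Low-rank adaptation, mentioned above, is concrete enough to sketch.  Instead
of retraining a frozen weight matrix W, you train a small additive update
B*A of rank r, so the adapted layer computes Wx + B(Ax).  A minimal NumPy
illustration (the dimensions are hypothetical):

  import numpy as np

  d, r = 1024, 8                          # model width vs. tiny adapter rank
  rng = np.random.default_rng(0)

  W = rng.standard_normal((d, d))         # frozen pretrained weights
  A = rng.standard_normal((r, d)) * 0.01  # trainable, r x d
  B = np.zeros((d, r))                    # trainable, d x r; starts at zero,
                                          # so adaptation begins as a no-op

  def adapted_forward(x: np.ndarray) -> np.ndarray:
      # Only A and B (2*d*r numbers) are trained, never W (d*d numbers).
      return W @ x + B @ (A @ x)

  x = rng.standard_normal(d)
  print(adapted_forward(x).shape)  # (1024,)
  print(2 * d * r / (d * d))       # trainable fraction: ~1.6%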

But another point about capability maturity models is that the second step
is "repeatable."  The initial step, chaotic, is where you don't know what
you're doing.  The second step is when you know that you can do it again
(even if you *still* don't know what you're doing).

And even the companies, the relatively few companies, who have actually
built large language models from scratch, haven't done it again.

Oh yes, I know.  The companies that have made large language models keep on
changing the version numbers.  And each version comes out with new features,
or functions, and becomes a bit better than the one with the version number
before it.

The thing is, you will notice that they still keep the same basic name for
their product.  That's because, really, this is still the same basic large
language model.  It's just that the company has thrown more hardware at it,
and more memory storage, and possibly even built data centres in different
locations, and shoveled in more, and more, and more data for the large
language model to munch on, and extend its statistical database further and
further.  Nobody has built another, and completely different, large language
model, after they have built the first one.

In the first place, it's bloody expensive.  You have to build an enormous
computer, with an enormous number of processing cores, and an enormous
number of specialty statistical processing units, and enormous amounts of
memory to store all of the data that your large language model is crunching
on, and it requires enormous amounts of energy to run it all, and it
requires enormous amounts of energy, and probably an awful lot of water, to
take the waste heat away from your computers so that they don't fry
themselves.

And you've now got competitors nipping at your heels, and you can't waste
time risking enormous amounts of money, even if you can get a lot of
investors eager to give you that money, trying a new, and unproven, approach
to building large language models, when you already have a large language
model which is working, even if you don't know how well it's working.  So
nobody is going to repeat all the work that they did in the first place, not
when they've got all this competition that they have to keep ahead of, and
not when they have a large language model which they really don't
understand, and which they are desperately trying to figure out so that
they can fix some of the bugs in it and make it work better.  Even if they
don't really know how it works.

Okay, yes, you can probably argue that the competitors are, in fact,
repeating what you're doing.  Except that they don't know what *they're*
doing, either.  All of these companies have the generative artificial
intelligence tiger by the tail, and they aren't really in charge of it.  Not
until they can figure out what the heck it is doing.

I'm not sure that that counts as the "repeatable" stage of a maturity model.

And the third stage is "documented."  At the "documented" stage, you
definitely *do* have to understand what you're doing, so that you can
document what you are doing.  And yes, all of the generative artificial
intelligence companies are looking, as deeply as they can, as far as they
can, into the large language model that they have produced, and are
continuing, constantly, to enhance.  The thing is, while, yes, they are
producing some documentation in this regard, it's definitely not the whole
model that is completely documented.  Yes, they are starting to find out
some interesting things about the large language models.  They are starting
to find out, by analyzing the statistical model that the large language
models are producing, what might be useful, and what might be creating
problems.  But nobody's got a really good handle on this.  (The way you can
tell that people really don't have a good handle on this, is that the large
language model companies are spending so much money, all over the world,
lobbying governments to try and prevent the governments from creating
regulations to regulate generative artificial intelligence. If the genAI
companies knew what they were doing, they would have some ideas on what
kind of regulations are helpful, what kind of regulations would help
make the industry safer, and what business and revenue such regulations
therefore they are terrified that the governments might [probably
accidentally] cut off a profitable revenue stream, or even just a
potentially useful function for generative artificial intelligence.)

So, no.  You can't have an artificial intelligence capability maturity
model.  Yet.  Because we don't know what generative artificial intelligence
is.  Yet.

------------------------------

Date: Thu, 8 Jan 2026 19:15:19 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Fake AI Chrome Extensions Steal 900K Users' Data (Dark Reading)

https://www.darkreading.com/cloud-security/fake-ai-chrome-extensions-steal-900k-users-data

------------------------------

Date: Thu, 8 Jan 2026 19:12:24 -0500
From: Monty Solomon <monty@roscom.com>
Subject: AI starts autonomously writing prescription refills in Utah
 (Ars Technica)

https://arstechnica.com/health/2026/01/utah-allows-ai-to-autonomously-prescribe-medication-refills/

------------------------------

Date: Fri, 9 Jan 2026 11:03:13 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Stolen Data Poisoned to Make AI Systems Return Wrong Results
 (Thomas Claburn)

Thomas Claburn, *The Register* (U.K.) (01/06/26), via ACM TechNews

Researchers in China and Singapore have developed a technique that renders
data stolen from knowledge graphs (KGs) useless when inserted without
consent into a GraphRAG (retrieval-augmented generation) AI system. Their
framework, known as AURA (Active Utility Reduction via Adulteration),
degrades KG responses to large language models to produce hallucinations and
inaccurate predictions. The technique works when attackers have the KG but
not the secret key necessary for accurate data retrieval.
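
The Register's summary gives no implementation detail; as a toy illustration
of the general "useless without the key" pattern (not AURA's actual
mechanism), each published value could carry a keyed pseudorandom offset
that only a key holder can subtract back out:

  import hashlib

  def _offset(key: bytes, entity: str) -> float:
      # Deterministic keyed pseudorandom offset for one KG entry.
      digest = hashlib.sha256(key + entity.encode()).digest()
      return int.from_bytes(digest[:4], "big") / 2**32 * 100.0

  def adulterate(kg: dict, key: bytes) -> dict:
      # Published (poisoned) values: plausible-looking but wrong.
      return {e: v + _offset(key, e) for e, v in kg.items()}

  def retrieve(poisoned: dict, key: bytes, entity: str) -> float:
      # Legitimate retrieval subtracts the keyed noise back out.
      return poisoned[entity] - _offset(key, entity)

  kg = {"bridge_max_load_tons": 40.0}
  stolen = adulterate(kg, b"secret-key")
  print(stolen["bridge_max_load_tons"])                           # wrong value
  print(retrieve(stolen, b"secret-key", "bridge_max_load_tons"))  # ~40.0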

------------------------------

Date: Fri, 9 Jan 2026 10:27:15 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Good cannot successfully battle Evil using only good means is the
 essential message of Machiavelli's "The Prince" (1513)

------------------------------

Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest.  Its Usenet manifestation is
 comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
 subscribe and unsubscribe:
   http://mls.csl.sri.com/mailman/listinfo/risks

=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
   includes the string `notsp'.  Otherwise your message may not be read.
 *** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) has moved to the ftp.sri.com site:
   <risksinfo.html>.
 *** Contributors are assumed to have read the full info file for guidelines!

=> OFFICIAL ARCHIVES:  http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
  http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
  Also, ftp://ftp.sri.com/risks for the current volume/previous directories
     or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
  If none of those work for you, the most recent issue is always at
     http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
  ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
 *** NOTE: If a cited URL fails, we do not try to update them.  Try
  browsing on the keywords in the subject line or cited article leads.
  Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 34.83
************************
