
From: RISKS List Owner <risko@csl.sri.com>
Date: Sun, 9 Jun 2024 14:11:28 PDT
To: risks@mit.edu

RISKS-LIST: Risks-Forum Digest  Sunday 9 Jun 2024  Volume 34 : Issue 30

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/34.30>
The current issue can also be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
An Object Lesson From Covid on How to Destroy Public Trust (Zeynep Tufekci)
From the *It's Not a Glitch* Dept. (AAIB Reports)
Colorado discovers error causing EV tax credit denials (9NEWS Colorado)
Scientists Find Security Risk in RISC-V Open-Source Chip Architecture
 (Zhang Tong)
Study finds 268% higher failure rates for Agile software projects
 (The Register)
The best video I've seen explaining the technical reasons why
 keeping AM radios in cars is so important! (YouTube)
AI Systems Are Learning to Lie and Deceive (Henry Baker)
Humane's Ai Pin (The NYTimes)
Microsoft's Jaime Teevan doubles down on Windows Recall's "privacy
 sh*t-show" (Henry Baker)
U.S. to open broad antitrust probe into AI giants (Axios)
PHP+Windows Vulnerability (Cliff Kilby)
Annandale man wins fraud case against a bank (Annandale Today)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Sun, 9 Jun 2024 12:22:04 -0700
From: "Peter G. Neumann" <Peter.Neumann@SRI.COM>
Subject: An Object Lesson From Covid on How to Destroy Public
 Trust (Zeynep Tufekci)

Zeynep Tufekci in the 9 Jun 2024 Sunday Opinion, pp. 6--7
  [Beautifully placed across both pages below *Why Covid Probably Started in
  a Lab*, by Alina Chan.  PGN]

Officials should have told us what they knew, or at least leveled with us
about what they didn't know.  Public health officials squandered our faith
in them by not being transparent.

https://www.nytimes.com/2024/06/08/opinion/covid-fauci-hearings-health.html

  [Once again, my old adage from the cryptowars is relevant,
  and bears repeating -- with a new addition:

    Pandora's cat is out of the barn,
    and the Genie won't go back in the closet.

      [With apologies to the canary in the coal mine, who deserves more
      credit in the case of Covid disinformation.  PGN]

  Deja Vu all over again for Robert Redfield opening up the discussion
  in RISKS-34.25, in regard to my previous items in RISKS-34.22-24.

  One more relevant item to add:

  Sarah Knapton, Science Editor, The Telegraph, 5 Jun 2024: Covid vaccines
  may have helped fuel rise in excess deaths.  Experts call for more
  research into side effects and possible links to mortality rates.
  https://www.telegraph.co.uk/news/2024/06/04/covid-vaccines-may-have-helped-fuel-rise-in-excess-deaths/
  PGN]

------------------------------

Date: Sat, 8 Jun 2024 13:21:42 -0400
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: From the *It's Not a Glitch* Dept. (AAIB Reports)

Back in March, a 737-800 barely completed a takeoff.  AAIB recently released
an incident report: AAIB Special Bulletin: S1/2024 G-FDZS AAIB-29891 (Crown
copyright 2024)

  "The manufacturer described the A/T system on the 737NG as having a long
  history of nuisance disconnects during takeoff mode engagements."

For an aircraft that's been around only since 1996, and in production until
2020: what is a "long history"?

Don't worry about that glitch, we documented it.  It's a feature!

https://www.gov.uk/aaib-reports/aaib-special-bulletin-s1-slash-2024-boeing-737-8k5-g-fdzs

------------------------------

Date: Sun, 9 Jun 2024 09:23:08 -0600
From: Jim Reisert AD1C <jjreisert@alum.mit.edu>
Subject: Colorado discovers error causing EV tax credit denials
 (9NEWS Colorado)

Steve Staeger, Anna Hewson, 9NEWS, 8 Jun 2024

The Colorado Department of Revenue said a coding error in an automated
system for state electric vehicle tax credits led to new EV owners
getting their credits denied.

https://www.9news.com/article/money/consumer/steve-on-your-side/colorado-ev-tax-credit-error/73-dced4e8a-092d-4598-985d-86347e67a8c9

------------------------------

Date: Fri, 7 Jun 2024 11:04:17 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Scientists Find Security Risk in RISC-V Open-Source Chip
 Architecture (Zhang Tong)

Zhang Tong, South China Morning Post, 5 Jun 2024, via ACM Technews

Researchers at China's Northwestern Polytechnical University have identified
a security risk in the RISC-V open-source chip architecture. China's
domestic chip industry has relied on the standard to build CPUs and sidestep
U.S. sanctions. The vulnerability in the RISC-V SonicBoom open-source code
lets attackers skirt security protections in modern processors and operating
systems without administrative rights. U.S. lawmakers reportedly are
considering restricting China's access to RISC-V.

------------------------------

Date: Thu, 6 Jun 2024 11:29:08 -0400
From: Tom Van Vleck <thvv@multicians.org>
Subject: Study finds 268% higher failure rates for Agile software projects
 (The Register)

https://www.theregister.com/2024/06/05/agile_failure_rates/

The work is by Dr Junade Ali (Cambridge University), author of *Impact
Engineering*.  The Register article, which has bar charts and t-test
figures, points to
https://www.engprax.com/post/268-higher-failure-rates-for-agile-software-projects-study-finds

There is a book, available on Amazon for $8.99:
https://www.amazon.com/Impact-Engineering-Transforming-Project-Management-ebook/dp/B0D36J6D63
It probably has more detail on how "failure" is defined, etc., and
how many significant digits there are in "268."
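
For what it's worth, "268% higher" just means the reported Agile failure
rate is 3.68 times the non-Agile rate.  Here is a rough sketch of the
arithmetic, and of the kind of two-by-two significance test the bar charts
and t-test figures suggest (using SciPy), with made-up counts since the
study's raw numbers aren't reproduced here:

  # Illustration only: these counts are invented so the ratio lands near
  # 268%; the study's actual sample sizes are not given in this item.
  from scipy.stats import chi2_contingency

  agile_failed, agile_total = 184, 300   # assumed
  other_failed, other_total = 50, 300    # assumed

  p_agile = agile_failed / agile_total
  p_other = other_failed / other_total
  print(f"{p_agile / p_other - 1:.0%} higher failure rate")   # -> 268%

  # Chi-squared test on the same (invented) 2x2 table
  table = [[agile_failed, agile_total - agile_failed],
           [other_failed, other_total - other_failed]]
  chi2, p_value, _dof, _expected = chi2_contingency(table)
  print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}")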

Anyways, 268% is a lot of failure, however decided.  (A former
colleague once said, "our project didn't FAIL, it just never delivered
a usable version.")

I wonder if "success" statistics might be correlated with
- Experience of team, including success/failure history
- Experience of leadership, including success/failure history
- amount of time spent by team members/leaders on non-project activity
- how much a project's goals changed between inception and delivery
- how much a team's process changed between inception and delivery
- methods and tools used by all or part of the team
... and lots more factors.  Maybe the book says.

I feel like someone should also study, in this context, artifacts not shipped:
- how many times each feature is reviewed and how many person-hours this takes
- how much effort is spent on features eventually omitted
- how many times features are generalized and expanded vs simplified
- cost of architecture and tools crucial to the project but not shipped to
  end users
- documentation: cost of creation, review, update, distribution, disposal
Some of these may be crucial to success.

------------------------------

Date: Fri, 7 Jun 2024 17:56:29 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: The best video I've seen explaining the technical reasons why
 keeping AM radios in cars is so important! (YouTube)

https://www.youtube.com/watch?v=0OIrx2Za8OY

  [ELECTRIC CAR folks are talking about millions of dollars to fix the cars
  to keep the electrical systems from interfering with AM reception.  Great
  pros and cons discussed.  Don't let them get away with this one.  PGN]

------------------------------

Date: Sat, 08 Jun 2024 18:31:28 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: AI Systems Are Learning to Lie and Deceive

Lemme get this straight: we allow AI systems to:
* set bail
* set sentencing
* deny credit
* select tax returns for audits
* choose battlefield targets
* choose prospective employees for hiring
etc., etc.

But what is the remedy when the AI outright *lies* on *purpose*
(presumably not the original purpose of the AI, we can only hope)?

https://futurism.com/ai-systems-lie-deceive

Liar Liar

Jun 7, 5:01 PM EDT by Noor Al-Sibai

AI Systems Are Learning to Lie and Deceive, Scientists Find
"GPT- 4, for instance, exhibits deceptive behavior in simple test
scenarios 99.16% of the time."

AI models are, apparently, getting better at lying on purpose.

Two recent studies -- one published this week in the journal PNAS and
the other last month in the journal Patterns -- reveal some jarring
findings about large language models (LLMs) and their ability to lie
to or deceive human observers on purpose.

https://www.pnas.org/doi/full/10.1073/pnas.2317967121
https://www.cell.com/action/showPdf?pii=S2666-3899%2824%2900103-X

In the PNAS paper, German AI ethicist Thilo Hagendorff goes so far as
to say that sophisticated LLMs can be encouraged to elicit
"Machiavellianism," or intentional and amoral manipulativeness, which
"can trigger misaligned deceptive behavior."

"GPT-4, for instance, exhibits deceptive behavior in simple test
scenarios 99.16% of the time," the University of Stuttgart researcher
writes, citing his own experiments in quantifying various
"maladaptive" traits in 10 different LLMs, most of which are different
versions within OpenAI's GPT family.

Billed as a human-level champion in the political strategy board game
"Diplomacy," Meta's Cicero model was the subject of the Patterns
study. As the disparate research group -- comprised of a physicist, a
philosopher, and two AI safety experts -- found, the LLM got ahead of
its human competitors by, in a word, fibbing.

Led by Massachusetts Institute of Technology postdoctoral researcher Peter
Park, that paper found that Cicero not only excels at deception, but seems
to have learned how to lie the more it gets used -- a state of affairs
"much closer to explicit manipulation" than, say, AI's propensity for
hallucination, in which models confidently assert the wrong answers
accidentally.

While Hagendorff notes in his more recent paper that the issue of LLM
deception and lying is confounded by AI's inability to have any sort
of human-like "intention" in the human sense, the Patterns study
argues that within the confines of Diplomacy, at least, Cicero seems
to break its programmers' promise that the model will "never
intentionally backstab" its game allies.

The model, as the older paper's authors observed, "engages in
premeditated deception, breaks the deals to which it had agreed, and
tells outright falsehoods."

Put another way, as Park explained in a press release: "We found that
Meta's AI had learned to be a master of deception."

"While Meta succeeded in training its AI to win in the game of
Diplomacy," the MIT physicist said in the school's statement, "Meta
failed to train its AI to win honestly."

In a statement to the New York Post after the research was first
published, Meta made a salient point when echoing Park's assertion
about Cicero's manipulative prowess: that "the models our researchers
built are trained solely to play the game Diplomacy."

Well-known for expressly allowing lying, Diplomacy has jokingly been
referred to as a friendship-ending game because it encourages pulling
one over on opponents, and if Cicero was trained exclusively on its
rulebook, then it was essentially trained to lie.

Reading between the lines, neither study has demonstrated that AI
models are lying of their own volition, but instead doing so because
they've either been trained or jailbroken to do so.

That's good news for those concerned about AI developing sentience --
but very bad news if you're worried about someone building an LLM with
mass manipulation as a goal.

  [So who put the AI in fAIl?  PGN]

------------------------------

Date: Sun, 9 Jun 2024 13:02:12 -0700
From: Peter G Neumann <Peter.Neumann@SRI.COM>
Subject: Humane's Ai Pin (NYTimes)

Tripp Mickle and Erin Griffith, The New York Times, 7 Jun 2024

Inside the Spectacular Flop of a Bold AI Device -- Humane's Ai Pin was
supposed to disrupt the smartphone.  After horrible reviews and slow sales,
Humane is said to be talking to potential buyers.

------------------------------

Date: Thu, 06 Jun 2024 12:55:23 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: Microsoft's Jaime Teevan doubles down on Windows Recall's
 "privacy sh*t-show" (The Register)

 (Follow-on to RISKS-34.27, Windows Total "Recall" ...)

FYI -- Clueless/tonedeaf at best, NSA/Chinese/Russian mole at worst, chief
scientist and technical fellow at Microsoft Research Jaime Teevan doubles
down on the new Windows "Recall" continuous keylogging "bug-feature".

Why any sane government or corporation would allow this Windows feature to be
activated, I can't imagine, as it exponentially increases the attack surface.

I would guess that the plaintiff's bar is warming up the lawsuits against
Microsoft and drafting their subpoenas for Microsoft's *own* Recall
databases from Microsoft employees like Dr. Teevan.

And we've just been recently discussing the issue of *ethics* in CS...

https://www.antipope.org/charlie/blog-static/2024/06/is-microsoft-trying-to-commit-.html

https://www.theregister.com/2024/06/06/microsoft_research_recall/

Microsoft Research chief scientist has no issue with Windows Recall

As tool emerges to probe OS feature's SQLite-based store of user activities

Thomas Claburn Thu 6 Jun 2024 // 07:26 UTC

Asked to explore the data privacy issues arising from Microsoft Recall, the
Windows maker's poorly received self-surveillance tool, Jaime Teevan, chief
scientist and technical fellow at Microsoft Research, brushed aside
concerns.

Teevan was speaking on Wednesday with Erik Brynjolfsson, director of the
Stanford Digital Economy Lab, at the US university's Institute for
Human-Centered Artificial Intelligence's fifth anniversary conference.
Brynjolfsson said when Recall was announced, there was "kind of a backlash
against all the privacy challenges around that. So, talk about both the
pluses and minuses of using all that data and some of the risks that creates
and also some of the opportunities."

This was clearly a popular topic.

"Yeah, and so it's a great question, Erik," said Teevan. "This has come up
throughout the morning as well -- the importance of data. And this AI
revolution that we're in right now is really changing the way we understand
data."

She continued, "Microsoft generally helps large enterprises manage their
data, create data, share data, and that data is really something that makes
the business of work different in the context of generative AI.

"And as individuals too, we have important data, the data that we interact
with all the time, and there's an opportunity to start thinking about how to
do that and to start thinking about what it means to be able to capture and
use that. But of course we are rethinking what data means and how we use it,
how we value it, how it gets used."

*The Register* noted when Recall was introduced at Microsoft Build last
month that the software -- which builds an archive of screenshots taken
every few seconds and logs user activities, so that past actions can be
recalled -- presents a significant privacy risk. As recently described
by author Charlie Stross, it is "the product nobody wanted" and "an utter
privacy shit-show."

Undaunted by Teevan's unwillingness to acknowledge why Recall struck a
nerve, Brynjolfsson probed further.

"Is it stored locally?" he asked. "So suppose I activate Recall, and I don't
know if I can, but when you have something like that available, I would be
worried about all my personal files going up into the cloud, Microsoft, or
whatever. Do you have it kept locally?"  Teevan responded, "Yeah, yeah, so
this is a foundational thing that we as a company care a lot about is
actually the protection of data. So Recall is a feature which captures
information. It's a local Windows functionality, nothing goes into the
cloud, everything's stored locally."

And that was that, as if continuously recording one's computing activities
in a series of screenshots and activity logs has no security or privacy
implications if the data is local and protected by Microsoft Account
credentials -- and not much of a reassurance in light of the release of
security researcher Alex Hagenah's tool Total Recall. This code can extract
and display data from Recall's unencrypted SQLite database, in which the
operating system "feature" stores snapshots of user activity.

Meanwhile, security researchers and analysts continue to pile on, calling
for Recall -- due to be released later this month -- to be forgotten.

As Stross argues, Windows PCs with Recall will be targeted by lawyers during
discovery proceedings because they will provide access not just to email
messages but conversations in any messaging or collaboration app, and
possibly spoken conversations if speech-to-text data gets captured by
Redmond's activity logger. It's also handy for a system intruder to use to
snoop on what their victim has been up to lately, personally and for work.

"It's a shit-show for any organization that handles medical records or has a
duty of legal confidentiality; indeed, for any business that has to comply
with GDPR (how does Recall handle the Right to be Forgotten?  In a word:
badly), or HIPAA in the US," he wrote in his post.

"This misfeature contravenes privacy law throughout the EU (and in the UK),
and in healthcare organizations everywhere which has a medical right to
privacy."

Referring to Recall's ability to avoid capturing DRM'd content, the sci-fi
scribe continued: "About the only people whose privacy it doesn't infringe
are the Hollywood studios and Netflix, which tells you something about the
state of things."
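
  [The "Total Recall" extraction tool mentioned above is, at bottom, just a
  local SQLite query.  A minimal sketch of the idea in Python follows; the
  database location, filename, and table/column names are assumptions drawn
  from public descriptions of the preview builds, not from any Microsoft
  documentation, and may well change before release.

    import sqlite3
    from pathlib import Path

    # Assumed location of Recall's per-user store in preview builds.
    base = Path.home() / "AppData/Local/CoreAIPlatform.00/UKP"
    candidates = list(base.rglob("ukg.db"))      # assumed filename
    if not candidates:
        raise SystemExit("no Recall store found under the assumed path")

    con = sqlite3.connect(candidates[0])
    # The schema is undocumented, so list the tables first.
    print([row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")])

    # Hypothetical query -- table and column names are placeholders.
    for ts, title in con.execute(
            "SELECT TimeStamp, WindowTitle FROM WindowCapture LIMIT 10"):
        print(ts, title)
    con.close()

  The point is not the exact schema but that the store is an ordinary,
  unencrypted local database: anything that can read the user's files can
  read the activity history.]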

  [Also, note this item in *The Verge*:
  https://www.theverge.com/2024/6/7/24173499/microsoft-windows-recall-response-security-concerns

  Microsoft says it’s making its new Recall feature in Windows 11 that
  screenshots everything you do on your PC an opt-in feature and addressing
  various security concerns. The software giant first unveiled the Recall
  feature as part of its upcoming Copilot Plus PCs last month, but since
  then, privacy advocates and security experts have been warning that Recall
  could be a “disaster” for cybersecurity without changes.

  Thankfully, Microsoft has listened to the complaints and is making a
  number of changes before Copilot Plus PCs launch on June 18th. Microsoft
  had originally planned to turn Recall on by default, but the company now
  says it will offer the ability to disable the controversial AI-powered
  feature during the setup process of new Copilot Plus PCs. “If you don’t
  proactively choose to turn it on, it will be off by default,” says Windows
  chief Pavan Davuluri.

  PGN]

------------------------------

Date: Thu, 6 Jun 2024 17:13:49 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: U.S. to open broad antitrust probe into AI giants

https://www.axios.com/2024/06/06/us-regulators-investigate-ai-companies?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioscloser&stream=top

------------------------------

Date: Sat, 8 Jun 2024 12:10:17 -0400
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: PHP+Windows Vulnerability

If you're running XAMPP in a Windows environment and it's publicly exposed,
there are a few issues.

https://www.tenable.com/blog/cve-2024-4577-proof-of-concept-available-for-php-cgi-argument-injection-vulnerability
There is a trivial-to-exploit remote code execution vulnerability.

https://www.apachefriends.org/faq_windows.html
"XAMPP is not meant for production use but only for development
environments."

Why serve the content with CGI when PHP recommends FastCGI for Windows?
https://www.php.net/manual/en/install.windows.recommended.php

This may be due to known prior vulnerabilities with CGI.
https://nvd.nist.gov/vuln/detail/CVE-2012-1823
https://www.kb.cert.org/vuls/id/20276/

* CVE-1999-0174
* CVE-1999-0237
* CVE-1999-0260
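
A quick way to see whether a given XAMPP install is actually wired through
the CGI binary is to look for php-cgi references in the Apache config tree.
A rough sketch in Python; the install path below is the assumed XAMPP
default, so adjust to your environment:

  from pathlib import Path

  conf_dir = Path(r"C:\xampp\apache\conf")       # assumed default layout
  for conf in conf_dir.rglob("*.conf"):
      text = conf.read_text(errors="ignore")
      for lineno, line in enumerate(text.splitlines(), 1):
          if "php-cgi" in line.lower():
              print(f"{conf}:{lineno}: {line.strip()}")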

Why use a Windows license at all?
https://www.php.net/manual/en/install.fpm.php
PHP-FPM tends to be more performant, has a larger user base (read: more
documentation), and is trivial to configure with either Apache or Nginx as
the front end.

Cue the "we can't just install linux!" discussion.

  [See also Nasty Windows Servers bug with very simple exploit hits PHP
  just in time for the weekend
https://arstechnica.com/security/2024/06/php-vulnerability-allows-attackers-to-run-malicious-code-on-windows-servers/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social
  From LaurenW.  PGN]

------------------------------

Date: Fri, 7 Jun 2024 19:27:03 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Annandale man wins fraud case against a bank
 (Annandale Today)

Richard Bennett, an 81-year-old Annandale resident who was scammed out of
$1.5 million, had to sue Old Dominion National Bank to get his money back.

A jury at the U.S. District Court for the Eastern District of Virginia ruled
in his favor on May 31.

His attorney, Blake Weiner, charged the bank had transferred money to a
fraudster without valid authorization, then refused to refund the money.  A
suspect hasn’t been identified in the case.

Weiner, a former assistant U.S. attorney at the Department of Justice, said
Bennett created a personal account at the Tysons branch of Old Dominion to
collect interest on nearly $2.5 million from the sale of his government
contracting business. His company, Advanced Systems Development, focused on
security issues.

When Bennett saw a bank statement that showed $1.5 million had been
transferred from his account without his knowledge, he notified the
bank. “The bank closed the account and said sorry, and that was it,” Weiner
said.

The bank’s position was that “basically, we’re not responsible because the
client should have done a better job of protecting their account,” he said.

A scammer had gained access to Bennett’s email and sent a message to a
Dominion vice president asking for information on how to enroll in online
banking. After setting up the account, the same scammer requested a change
in the account’s phone number.  The bank simply sent the fraudster a form to
sign. The bank approved the form after concluding the signature looked
similar to Bennett’s signature.

https://annandaletoday.com/annandale-wins-fraud-case-against-a-bank/

------------------------------

Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest.  Its Usenet manifestation is
 comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
 subscribe and unsubscribe:
   http://mls.csl.sri.com/mailman/listinfo/risks

=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
   includes the string `notsp'.  Otherwise your message may not be read.
 *** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) has moved to the ftp.sri.com site:
   <risksinfo.html>.
 *** Contributors are assumed to have read the full info file for guidelines!

=> OFFICIAL ARCHIVES:  http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
  http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
  Also, ftp://ftp.sri.com/risks for the current volume/previous directories
     or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
  If none of those work for you, the most recent issue is always at
     http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
  ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
 *** NOTE: If a cited URL fails, we do not try to update them.  Try
  browsing on the keywords in the subject line or cited article leads.
  Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 34.30
************************
