[33081] in RISKS Forum


Risks Digest 33.61

daemon@ATHENA.MIT.EDU (RISKS List Owner)
Tue Feb 7 17:02:44 2023

From: RISKS List Owner <risko@csl.sri.com>
Date: Sun, 5 Feb 2023 14:36:28 PST
To: risks@mit.edu

RISKS-LIST: Risks-Forum Digest  Sunday 5 February 2023  Volume 33 : Issue 61

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/33.61>
The current issue can also be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents: Working on huge backlog
Historic Arctic outbreak crushes records in New England (WashPost)
'It had just vanished' -- the shock when tech fails (BBC News)
Welcome to the Era of Internet Blackouts (WiReD)
Ford recalls 462,000 SUVs over rearview camera issue (Engadget)
The lights have been on at a Massachusetts school for over a year because
 no one can turn them off (Corky Siemaszko)
FAA says unintentionally deleted files are to blame for nationwide
 ground stop (CNN)
Wi-Fi Routers Can Detect Human Locations, Poses Within a Room (Mark Tyson)
Hackers Can Make Computers Destroy Their Own Chips with Electricity
 (Matthew Sparkes)
Decoding Brainwaves to Identify What Music Is Being Listened To (U.Essex)
Remember Zoom-bombing? This is how Zoom tamed meeting intrusions. (WashPost)
Google Fi warns customers that their data has been compromised (Engadget)
Options trading desks 'flying blind' after derivatives platform hit by
 ransomware attack (MarketWatch)
Mathematical Trick Lets Hackers Shame People into Fixing Software Bugs
 (Matthew Sparkes)
Can You Trust Your Quantum Simulator? (Jennifer Chu)
Widespread Logic Controller Flaw Raises the Specter of Stuxnet
 (Lily Hay Newman)
Man Paid $20,000 in Bitcoin in Failed Attempt to Have 14-Year-Old Killed,
 U.S. Says (NYTimes)
Developer pleads guilty to hacking his own company after pretending to
 investigate himself (The Verge)
Retirees Are Losing Their Life Savings to Romance Scams. Here's What to
 Know. (NYTimes)
Cryptocurrency Founder Gamed Markets, FTX Rivals Say (NYTimes)
How Charlie Javice Got JPMorgan to Pay $175 Million for What Exactly?
 (NYTimes)
Massive nursing degree scheme leads to hunt for 2,800 fraudulent nurses
 (Ars Technica)
Based on a True Story -- Except the Parts That Aren't (NYTimes)
Citing Accessibility, State Department Ditches Times New Roman for Calibri
 (NYTimes via Jan Wolitzky)
DNS Attack enabled by well-known passwords; An issue that should be
 long-resolved (Ars Technica and precursor note)
U.S. No-Fly List Leaks After Being Left in an Unsecured Airline Server
 (Vice)
Yet *another* T-Mobile data breach affects 37M accounts (CNET)
Coming soon, Congress screws with the clock with permanent DST?
 (Lauren Weinstein)
CNET pushed reporters to be more favorable to advertisers, staffers say
 (The Verge)
Twitter employees status -- and Musk on trial (Lauren Weinstein)
Musk oversaw staged Tesla self-driving video, emails show (Ars Technica)
How Smart Are the Robots Getting? (Cade Metz)
Robot Cars Are Causing 911 False Alarms in San Francisco (WiReD)
A news site used AI to write articles, and it was a journalistic disaster
 (WashPost)
CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple
 Major Corrections (gizmodo)
My Printer Is Extorting Me (The Atlantic via Steve Bacher)
ChatGPT on a blog: huMansplaining on parade (Rob Lemos)
ChatGPT Accuracy in the Movies! (Lauren Weinstein)
Google and the rest of "Big Tech" need to step up and speak to the public,
 *now*! (Lauren Weinstein)
Google laying off 12K workers (Google)
Jan 6 committee suppressed information about how social media firms --
 especially Twitter -- enabled the violent insurrection (WashPost)
Meta, Twitter, Microsoft and others urge Supreme Court not to allow lawsuits
 against tech algorithms (CNN)
Twitter's utter violation of Trust & Safety (Lauren Weinstein)
Elon's Sick Twitter officially bans third-party clients, a foundational
 aspect of Twitter for many years (TechCrunch)
Why the TikTok ban needs university exemptions (Statesman)
Twitter admits it's breaking third-party apps, cites 'long-standing API
 rules' (Engadget)
Tesla engineer testifies that 2016 video promoting self-driving was faked
 (TechCrunch)
U.S. states blocking overseas taxpayer traffic (Dan Jacobson)
As Deepfakes Flourish, Countries Struggle with Response (Tiffany Hsu)
In the age of AI, major in being human (David Brooks)
Race is on as Microsoft puts billions into OpenAI (Metz/Weise)
Google is freaking out about ChatGPT (The Verge)
ChatGPT user acquisition rate (Dan Geer)
Artificial Intelligence and National Security (Reza Montasari book
 reviewed by Sven Dietrich)
Cybersecurity Myths and Misperceptions: Avoiding the Hazards and Pitfalls
 that Derail Us (Gene Spafford)
Re: Remote Vulnerabilities in Automobiles (Bernie Cosell)
Re: Cats disrupt satellite Internet service (John Levine, Wol)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Sat, 4 Feb 2023 12:39:19 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Historic Arctic outbreak crushes records in New England (WashPost)

The Weather Service office serving the area tweeted the wind chill was so
low that its software for logging such data ``refuses to include it!''

https://www.washingtonpost.com/weather/2023/02/04/northeast-record-cold-boston-arctic/

  [With the record colds all over the U.S. -- including Texas -- this item
  seems worthy of the lead story.  PGN]

------------------------------

Date: Tue, 17 Jan 2023 09:39:56 +0000
From: "Chris Leeson" <risks@inishail.org>
Subject: 'It had just vanished' -- the shock when tech fails (BBC News)

https://www.bbc.co.uk/news/business-64051121

Cloud has many advantages, but if the cloud provider disappears, then so
does your infrastructure. This article looks at a couple of businesses that
have been hit by outages and disappearance of provider.

``Using cloud services, by definition, makes a business reliant on a third
party,'' says Vili Lehdonvirta of the Oxford Internet Institute and author
of Cloud Empires.  ``What is the cloud? Well, the cloud is somebody else's
computer.''

  Cloud is complex, and setting up highly available systems is even more
  complex (I'm sure that's not news to anyone here...). Cloud is not a
  panacea, especially for small businesses. At least we are starting to get
  mainstream articles that acknowledge this, rather than pushing cloud as
  the solution for all ills.

------------------------------

Date: Fri, 20 Jan 2023 19:34:41 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Welcome to the Era of Internet Blackouts (WiReD)

New research from Cloudflare shows that connectivity disruptions are a
problem around the globe, pointing toward a troubling new normal.

https://www.wired.com/story/cloudflare-internet-blackouts-report

------------------------------

Date: Tue, 31 Jan 2023 01:33:01 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Ford recalls 462,000 SUVs over rearview camera issue (Engadget)

https://www.engadget.com/ford-recalls-462000-suv-rearview-camera-issue-160153194.html

------------------------------

Date: Thu, 19 Jan 2023 07:25:34 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: The lights have been on at a Massachusetts school for over a year
 because no one can turn them off (Corky Siemaszko)

(NBC News)

https://www.nbcnews.com/news/us-news/lights-massachusetts-school-year-no-one-can-turn-rcna65611

Wilbraham, Massachusetts: For nearly a year and a half, the roughly 7,000
lights in a sprawling Massachusetts high school have been on continuously,
because the district can't turn them off. While district leaders blame the
pandemic and supply-chain issues for being unable to fix the failed lighting
system, taxpayers have been stuck paying the costly energy bills.

The lighting system was installed at Minnechaug Regional High School when
it was built over a decade ago and was intended to save money and energy.
But ever since the software that runs it failed on Aug. 24, 2021, the
lights in the suburban Springfield school have been on continuously, costing
taxpayers a small fortune....

The system was designed to save energy -- and thus money -- by
automatically adjusting the lights as needed.

  [Also noted by Mike Smith and Victor Miller.  PGN]

------------------------------

Date: Thu, 19 Jan 2023 20:02:18 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: FAA says unintentionally deleted files are to blame for nationwide
 ground stop (CNN)

[ rm -rf * .tmp ] -L

https://www.cnn.com/2023/01/19/business/faa-notam-outage/index.html
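The bracketed aside above alludes to the classic one-space glob accident: the shell expands `*` before `rm` ever runs, so `rm -rf * .tmp` matches every file and leaves `.tmp` as a stray argument. A minimal sketch, using Python's glob module (which follows the same matching rules) to show the difference without deleting anything; purely illustrative, with no claim that this is literally what happened at the FAA:

```python
# The shell expands globs *before* rm sees them, so in "rm -rf * .tmp"
# the '*' has already matched every file; ".tmp" is just a leftover
# argument.  Python's glob module follows the same matching rules,
# letting us compare the two patterns safely in a throwaway directory.
import glob
import os
import tempfile

workdir = tempfile.mkdtemp()
os.chdir(workdir)
for name in ("keep.txt", "notes.md", "old.tmp"):
    open(name, "w").close()

intended = sorted(glob.glob("*.tmp"))  # what "rm -rf *.tmp" would remove
accident = sorted(glob.glob("*"))      # what "rm -rf * .tmp" would remove

print(intended)  # ['old.tmp']
print(accident)  # ['keep.txt', 'notes.md', 'old.tmp']
```

One stray space, and the target set goes from a single temp file to the entire directory.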

------------------------------

Date: Mon, 23 Jan 2023 11:37:44 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Wi-Fi Routers Can Detect Human Locations, Poses Within a Room
 (Mark Tyson)

Mark Tyson, Tom's Hardware, 18 Jan 2023

Carnegie Mellon University scientists have been testing a system that uses
Wi-Fi signals to detect the positions and poses of people in a room. The
researchers positioned TP-Link Archer A7 AC1750 Wi-Fi routers at either end
of the room, while algorithms generated wireframe models of people in the
room by analyzing the signal interference the people caused. The researchers
based the perception system on Wi-Fi signal channel-state-information, or
the ratio between transmitted and received signal waves. A computer
vision-capable neural network architecture processes this data to execute
dense pose estimation; the researchers deconstructed the human form into 24
segments to accelerate wireframe representation. They claim the wireframes'
position and pose estimates are as good as those generated by certain
"image-based approaches."

------------------------------

Date: Mon, 23 Jan 2023 11:37:44 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Hackers Can Make Computers Destroy Their Own Chips with
 Electricity (Matthew Sparkes)

Matthew Sparkes, *New Scientist*, 19 Jan 2023,
via ACM TechNews, 23 Jan 2023

Zitai Chen and David Oswald at the U.K.'s University of Birmingham uncovered
a bug in the control systems of server motherboards that could be exploited
to compromise sensitive information or to destroy their central processing
units (CPUs). The researchers found a feature in the Supermicro X11SSL-CF
motherboard often used in servers that they could tap to upload their own
control software. Chen and Oswald discovered a flash memory chip in the
motherboard's baseboard management controller that they could remotely
command to send excessive electrical current through the CPU, destroying it
in seconds. After the researchers disclosed the flaw to Supermicro, the
company said it has rated its severity as "high" and has patched the bug in
its existing motherboards.

------------------------------

Date: Mon, 23 Jan 2023 11:37:44 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Decoding Brainwaves to Identify What Music Is Being Listened To
 (U.Essex)

University of Essex (UK), 19 Jan 2023, via ACM TechNews, 23 Jan 2023

A brainwave-monitoring technique created by researchers at the U.K.'s
University of Essex can identify to which specific piece of music people are
listening. The researchers combined functional magnetic resonance imaging
(fMRI) with electroencephalogram monitoring to measure a person's brain
activity while listening to music. They used a deep learning neural network
model to translate this data in order to reconstruct and accurately identify
the piece of music with 71.8% accuracy. Essex's Ian Daly said, "We have
shown we can decode music, which suggests that we may, one day, be able to
decode language from the brain."

------------------------------

Date: Mon, 30 Jan 2023 15:17:20 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Remember Zoom-bombing? This is how Zoom tamed meeting intrusions.
 (WashPost)

The success of reducing Zoom-bombing shows how making technology less easy
to use can make you safer.

https://www.washingtonpost.com/technology/2023/01/24/zoom-bombing-prevention-tips/

------------------------------

Date: Wed, 1 Feb 2023 18:46:24 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Google Fi warns customers that their data has been compromised
 (Engadget)

Google has notified customers of its Fi mobile virtual network operator
(MVNO) service that hackers were able to access some of their information,
according to TechCrunch. The tech giant said the bad actors infiltrated a
third-party system used for customer support at Fi's primary network
provider. While Google didn't name the provider outright, Fi relies on US
Cellular and T-Mobile for connectivity. If you'll recall, the latter
admitted in mid-January that hackers had been taking data from its systems
since November last year.  [...]

https://www.engadget.com/google-fi-customer-data-compromised-065740701.html?src=rss

  Also: Google Fi hack victim had Coinbase, 2FA app hijacked by hackers
 (TechCrunch)

https://techcrunch.com/2023/02/01/google-fi-hack-victim-had-coinbase-2fa-app-hijacked-by-hackers/

------------------------------

Date: Wed, 1 Feb 2023 13:55:03 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Options trading desks 'flying blind' after derivatives platform hit
 by ransomware attack (MarketWatch)
https://www.marketwatch.com/story/trading-desks-flying-blind-after-derivatives-platform-hit-by-ransomware-attack-11675270815

------------------------------

Date: Fri, 20 Jan 2023 11:45:23 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Mathematical Trick Lets Hackers Shame People into Fixing
 Software Bugs (Matthew Sparkes)

Matthew Sparkes, *New Scientist*, 17 Jan 2023, via ACM TechNews, 20 Jan 2023

Researchers at the Galois software company have developed a zero-knowledge
proof (ZKP) method of using math to verify vulnerabilities in a particular
software program, without releasing details of how an exploit works. The
idea is to generate public pressure to force a company to release a fix
while preventing hackers from exploiting the flaw. Said Galois' Santiago
Cuéllar, "There are a lot of frustrated people trying to disclose
vulnerabilities, or saying 'I found this vulnerability, I'm talking to this
company and they're doing nothing'." However, bug-bounty hunter Rotem Bar is
concerned that ZKPs could generate a "ransom effect" that gives power to the
attacker.
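The zero-knowledge construction Galois describes is far more powerful than anything that fits in a few lines. As a rough intuition for one half of the idea, here is a plain hash commitment in Python: a researcher publishes a digest today and reveals the details after a fix ships, proving the claim predated the patch. Unlike a true ZKP, this proves nothing about whether the exploit actually works, and the function names and sample data here are purely illustrative:

```python
# A hash commitment: a much weaker cousin of a zero-knowledge proof.
# Publishing the digest now, and the nonce + details later, proves the
# details were known at commitment time -- but not that they are valid.
import hashlib
import secrets

def commit(exploit_details: bytes) -> tuple[str, bytes]:
    """Publish the digest now; keep the nonce and details secret."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + exploit_details).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: bytes, exploit_details: bytes) -> bool:
    """Anyone can check a later disclosure against the old digest."""
    return hashlib.sha256(nonce + exploit_details).hexdigest() == digest

# A researcher commits to a (hypothetical) write-up today...
digest, nonce = commit(b"overflow in parse_header(), illustrative only")
# ...and reveals it once the vendor ships a fix.
assert verify(digest, nonce, b"overflow in parse_header(), illustrative only")
assert not verify(digest, nonce, b"some other claim")
```

The ZKP in the article closes the gap the commitment leaves open: it convinces observers the vulnerability is real without revealing how to trigger it.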

------------------------------

Date: Fri, 20 Jan 2023 11:45:23 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Can You Trust Your Quantum Simulator? (Jennifer Chu)

Jennifer Chu, *MIT News*, 18 Jan 2023 via ACM TechNews

Physicists at the Massachusetts Institute of Technology (MIT) and the
California Institute of Technology have identified a randomness in the
quantum fluctuations of atoms that follows a predictable pattern and
developed a benchmarking protocol to assess the fidelity of existing quantum
analog simulators based on their quantum fluctuation patterns. The
researchers tested this on a quantum analog simulator containing 25 atoms by
exciting the atoms with a laser, letting the qubits interact and evolve
naturally, and collecting 10,000 measurements on the state of each qubit
during multiple runs. They developed a model to predict the random
fluctuations and compared the predicted outcomes with experimental
measurements, which yielded a close match. MIT's Soonwon Choi said, "With
our tool, people can know whether they are working with a trustable system."

------------------------------

Date: Wed, 18 Jan 2023 11:35:17 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Widespread Logic Controller Flaw Raises the Specter of Stuxnet
 (Lily Hay Newman)

Lily Hay Newman, *Ars Technica*, 11 Jan 2023, via ACM TechNews

Siemens has disclosed that a vulnerability in its SIMATIC S7-1500 series of
programmable logic controllers could allow attackers to install malicious
firmware and assume full control of the devices. Red Balloon Security
researchers discovered the vulnerability, which is the result of a basic
error in the cryptography's implementation. However, because the scheme is
physically burned onto a dedicated ATECC CryptoAuthentication chip, a
software patch cannot fix the vulnerability. Siemens recommended customers
assess "the risk of physical access to the device in the target deployment"
and implement "measures to make sure that only trusted personnel have access
to the physical hardware."

------------------------------

Date: Sat, 4 Feb 2023 17:54:09 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Man Paid $20,000 in Bitcoin in Failed Attempt to Have 14-Year-Old
 Killed, U.S. Says (NYTimes)

https://www.nytimes.com/2023/02/02/us/hitman-murder-bitcoin-new-jersey.html

------------------------------

Date: Sat, 4 Feb 2023 10:13:02 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Developer pleads guilty to hacking his own company after pretending
 to investigate himself  (The Verge)

https://www.theverge.com/2023/2/3/23584414/ubiquiti-developer-guilty-extortion-hack-security-breach-bitcoin-ransom

------------------------------

Date: Sat, 4 Feb 2023 15:10:12 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Retirees Are Losing Their Life Savings to Romance Scams. Here's
 What to Know. (NYTimes)

Con artists are using dating sites to prey on lonely people, particularly
older ones, in a pattern that accelerated during the isolation of the
pandemic, federal data show.

https://www.nytimes.com/2023/02/03/business/retiree-romance-scams.html

------------------------------

Date: Thu, 19 Jan 2023 13:21:30 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: Cryptocurrency Founder Gamed Markets, FTX Rivals Say (NYTimes)

Emily Flitter and David Yaffe-Bellany,
*The New York Times*, 19 Jan 2023, Business Section front page

Bankman-Fried found ways to inflate the prices of digital coins to
benefit his companies, according to investors.

------------------------------

Date: Sat, 21 Jan 2023 14:48:56 -0500
From: Monty Solomon <monty@roscom.com>
Subject: How Charlie Javice Got JPMorgan to Pay $175 Million for What
 Exactly? (NYTimes)

A young founder promised to simplify the college financial aid process. It
was a compelling pitch. Especially, as now seems likely, to those with
little firsthand knowledge of financial aid.

https://www.nytimes.com/2023/01/21/business/jpmorgan-chase-charlie-javice-fraud.html

------------------------------

Date: Thu, 2 Feb 2023 18:26:42 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Massive nursing degree scheme leads to hunt for 2,800 fraudulent
 nurses (Ars Technica)

https://arstechnica.com/?p=1914332

------------------------------

Date: Sat, 21 Jan 2023 15:23:50 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Based on a True Story -- Except the Parts That Aren't (NYTimes)

The entertainment genre of historical drama is flourishing -- and riddled
with inaccuracies. The untrue parts are leading to more public spats and
lawsuits.

https://www.nytimes.com/2023/01/14/business/media/tv-historical-dramas-fictional.html

------------------------------

Date: Fri, 20 Jan 2023 09:32:34 -0500
From: Jan Wolitzky <jan.wolitzky@gmail.com>
Subject: Citing Accessibility, State Department Ditches Times New Roman
 for Calibri (NYTimes)

There's more to font choices than what looks nice, and some experts said it
would make for easier reading.

https://www.nytimes.com/2023/01/19/us/politics/state-department-times-new-roman-calibri.html

  (No mention of the Braille Institute's Atkinson Hyperlegible font
  <https://brailleinstitute.org/freefont>, designed specifically for
  readability.)

------------------------------

Date: Fri, 20 Jan 2023 06:53:58 -0500
From: Bob Gezelter <gezelter@rlgsc.com>
Subject: DNS Attack enabled by well-known passwords; An issue that should be
 long-resolved (Ars Technica and precursor note)

Well-known passwords have been a well-known security hazard since the early
1990s. As I wrote in "Networks Placed at Risk, By Their Service Providers"
(7 Dec 2009,
http://www.rlgsc.com/blog/ruminations/networks-placed-at-risk.html), it took
many years for major ISPs to stop using well-known passwords on the
router/firewalls provided to subscribers.

Over a decade later, this issue should have been long since banished to
history. However, as reported by Ars Technica, this appears depressingly not
to be the case.

Ars Technica reports that:

  Researchers have uncovered a malicious Android app that can tamper with
  the wireless router the infected phone is connected to and force the
  router to send all network devices to malicious sites.

  The malicious app, found by Kaspersky, uses a technique known as DNS
  (Domain Name System) hijacking. Once the app is installed, it connects to
  the router and attempts to log in to its administrative account by using
  default or commonly used credentials, such as admin:admin. When
  successful, the app then changes the DNS server to a malicious one
  controlled by the attackers. From then on, devices on the network can be
  directed to imposter sites that mimic legitimate ones but spread malware
  or log user credentials or other sensitive information.

The Ars Technica article does not indicate whether the compromised hot-spots
used vendor-provided or customer-purchased equipment. It does underscore the
importance of setting management passwords on firewalls to safe values.

Similarly, other precautions, e.g., segregated guest WiFi, should be
followed.
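A minimal sketch, in Python, of the precaution the note urges: flag any management password drawn from a well-known factory-default list. The list below is a tiny illustrative sample of my own, not Kaspersky's data or an exhaustive database:

```python
# Flag devices whose management passwords are well-known factory
# defaults -- the first things the malicious app described above tries.
WELL_KNOWN_DEFAULTS = {
    "admin", "password", "1234", "12345", "admin123", "root", "",
}

def is_well_known(password: str) -> bool:
    """True if the password is a well-known default an attacker tries first."""
    return password.lower() in WELL_KNOWN_DEFAULTS

def audit(credentials: dict[str, str]) -> list[str]:
    """Return the names of devices still using well-known passwords."""
    return [name for name, pw in credentials.items() if is_well_known(pw)]

devices = {"home-router": "admin", "guest-ap": "kx9!vQ2#longRandom"}
print(audit(devices))  # ['home-router']
```

Real deployments would check against a much larger default-credential corpus and also enforce per-device unique passwords out of the box.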

  [Also noted by Monty Solomon]

------------------------------

Date: Fri, 20 Jan 2023 16:38:31 -0500
From: Monty Solomon <monty@roscom.com>
Subject: U.S. No-Fly List Leaks After Being Left in an Unsecured Airline
 Server (Vice)

The list, which was discovered by a Swiss hacker, contains names and birth
dates and over 1 million entries.

https://www.vice.com/en/article/93a4p5/us-no-fly-list-leaks-after-being-left-in-an-unsecured-airline-server

------------------------------

Date: Thu, 19 Jan 2023 16:33:29 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Yet *another* T-Mobile data breach affects 37M accounts (CNET)

https://www.cnet.com/tech/mobile/another-data-breach-has-hit-t-mobile-impacting-37-million-accounts/

  [Monty Solomon noted
    New T-Mobile Breach Affects 37 Million Accounts
  https://krebsonsecurity.com/2023/01/new-t-mobile-breach-affects-37-million-accounts/
  PGN]

------------------------------

Date: Fri, 3 Feb 2023 17:35:54 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Coming soon, Congress screws with the clock with permanent DST?

By the way, I predict a significant probability that within the next month
the GOP and Democrats will push to make Daylight Saving Time permanent,
which is exactly what virtually every expert says is the worst possible
decision if you're going to change the current situation. Rather, if there's
going to be a change, it should be to permanent Standard Time. The U.S. did
try year-round Daylight Saving Time many years ago. I remember. It did not
go well and was quickly revoked. -L

------------------------------

Date: Thu, 2 Feb 2023 18:23:01 -0500
From: Monty Solomon <monty@roscom.com>
Subject: CNET pushed reporters to be more favorable to advertisers, staffers
 say (The Verge)

https://www.theverge.com/2023/2/2/23582046/cnet-red-ventures-ai-seo-advertisers-changed-reviews-editorial-independence-affiliate-marketing

------------------------------

Date: Fri, 20 Jan 2023 17:21:06 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Twitter employees status -- and Musk on trial

It is now reported that of the ~7500 full-time employees at Twitter before
Musk took over, only ~1300 full-time employees remain, including fewer than
550 full-time engineers. The Trust & Safety team is reported to be down to
fewer than 20 full-time employees.

Also, while testifying at the trial today regarding his tweets, he
repeatedly said that tweets were limited to 240 characters (not the correct
280). -L

------------------------------

Date: Fri, 20 Jan 2023 08:48:24 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Musk oversaw staged Tesla self-driving video, emails show
 (Ars Technica)

https://arstechnica.com/cars/2023/01/musk-oversaw-staged-tesla-self-driving-video-emails-show/

  [Monty Solomon noted an item on this story:
  https://gizmodo.com/tesla-autopilot-self-driving-autonomous-1849996806
  PGN]

------------------------------

Date: Sat, 21 Jan 2023 01:24:35 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: How Smart Are the Robots Getting? (Cade Metz)

Cade Metz, *The New York Times*, 20 Jan 2023

The Turing test used to be the gold standard for proving machine
intelligence. This generation of bots is racing past it.

"These systems can do a lot of useful things," said Ilya Sutskever, chief
scientist at OpenAI and one of the most important A.I. researchers of the
past decade, referring to the new wave of chatbots. "On the other hand, they
are not there yet. People think they can do things they cannot."

As the latest technologies emerge from research labs, it is now obvious --
if it was not obvious before -- that scientists must rethink and reshape how
they track the progress of artificial intelligence. The Turing test is not
up to the task.

https://www.nytimes.com/2023/01/20/technology/chatbots-turing-test.html

  PGN adds, from the ACM News Digest on the same item:

  New-generation online chatbots display a semblance of intelligence that
  appears to pass the Turing test, in which humans can no longer be certain
  whether they are conversing with a human or a machine. Bots like OpenAI's
  ChatGPT and GPT-4 systems appear intelligent without being sentient or
  conscious; consequently, OpenAI's Ilya Sutskever says, "People think they
  can do things they cannot." Modern neural networks have learned to produce
  text by analyzing vast volumes of digital text and extrapolating patterns
  in how people link words, letters, and symbols. However, the chatbots'
  language skills belie their lack of reason or common sense.

  [Also noted by Matthew Kruk.  PGN]

    [The Turing Test is no longer adequate as originally stated.  Joe
    Weizenbaum's Eliza could fool some people for a while.  GPT systems can
    fool anyone who doesn't understand the fundamental blind spots inherent
    in the information used to train the AI, and the consequential inherent
    incompleteness of the responses.  The grammatical and linguistic polish
    is misleading.  See RISKS-33.58-60.  PGN]

------------------------------

Date: Fri, 27 Jan 2023 16:28:14 -0500
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Robot Cars Are Causing 911 False Alarms in San Francisco (WiReD)

City agencies say the incidents and other disruptions show the need for more
transparency about the vehicles and a pause on expanding service.

Each time, police and firefighters rushed to the scene but found the same
thing: a passenger who had fallen asleep in their robot ride.  [...]

The San Francisco agencies cite a number of unsettling and previously
unreported incidents, including the false alarms over snoozing riders and
two incidents in which self-driving vehicles from Cruise appear to have
impeded firefighters from doing their jobs.

One incident occurred in June of last year, a few days after the state gave
Cruise permission to pick up paying passengers in the city. One of the
company's robot taxis ran over a fire hose in use at an active fire scene,
the agencies' letter says, an action that ``can seriously injure
firefighters.''

In the second incident, just last week, the city says firefighters attending
a major fire in the Western Addition neighborhood saw a driverless Cruise
vehicle approaching. They ``made efforts to prevent the Cruise AV from
driving over their hoses and were not able to do so until they shattered a
front window of the Cruise AV,'' the San Francisco agencies wrote in their
letter.  [...]

Last summer, WIRED reported that two fleetwide outages had caused Cruise
vehicles to freeze on public roads and that a Cruise employee had
anonymously sent a letter to the Public Utilities Commission alleging that
the company's vehicles weren't prepared to operate on public roads.  In
December, the National Highway Traffic Safety Administration said it had
opened a probe into incidents of Cruise vehicles blocking traffic and
reports of the cars inappropriately hard braking.  Cruise has said that for
its vehicles, stopping and turning on hazard lights is sometimes the safest
way to react to unexpected street conditions.

https://www.wired.com/story/robot-cars-are-causing-911-false-alarms-in-san-francisco

------------------------------

Date: Thu, 19 Jan 2023 09:10:31 +0000
From: Richard Marlon Stein <rmstein@protonmail.com>
Subject: A news site used AI to write articles, and it was a journalistic
 disaster (WashPost)

https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/

"The tech site CNET sent a chill through the media world when it tapped
artificial intelligence to produce surprisingly lucid news stories. But now
its human staff is writing a lot of corrections."

Imagine an automated editorial review of the AI-crafted content to certify
correctness and publication fitness.

History and events could be rewritten without any concern for fact or
context.  Libel laws would need revision to accommodate automated authoring
and publication of news content. The content would get a free pass if it
contained the disclaimer: "Authored by Hemingwaybot.com."

"Who you gonna believe, me or your own eyes?"

------------------------------

Date: Tue, 17 Jan 2023 14:31:52 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: CNET Is Reviewing the Accuracy of All Its AI-Written Articles After
 Multiple Major Corrections (gizmodo)

https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151

------------------------------

Date: Sat, 4 Feb 2023 18:32:17 -0800
From: Steve Bacher <sebmb1@verizon.net>
Subject: My Printer Is Extorting Me

Subscriptions such as HP's Instant Ink challenge what it means to own our
devices:

https://www.theatlantic.com/technology/archive/2023/02/home-printer-digital-rights-management-hp-instant-ink-subscription/672913/

Excerpts:

  Here was a piece of technology that I had paid more than $200 for, stocked
  with full ink cartridges. My printer, gently used, was sitting on my desk
  in perfect working order but rendered useless by Hewlett-Packard, a tech
  corporation with a $28 billion market cap
  <https://www.forbes.com/companies/hewlett-packard/> at the time of
  writing, because I had failed to make a monthly payment for a service
  intended to deliver new printer cartridges that I did not yet need.  [...]

Even if you aren't trapped in Ink Hell, the template of this story ought to
feel unsettlingly familiar. Most everyone is subject to the walled gardens
and restrictions imposed by digital-rights-management practices.  If you've
ever struggled to access a purchased movie, book, or song from Apple, or
been frustrated by single-player games that require the Internet to play,
you know the feeling. The problem isn't merely that people are nostalgic for
the days of CDs and DVDs and static updates -- it's that much of the
convenience promised by our Internet-connected tools has the secondary
effect of stripping away small pieces of our agency and leaving us more
beholden to companies seeking bigger margins.

------------------------------

Date: Thu, 26 Jan 2023 15:55:10 -0500
From: Rob Lemos <mail@robertlemos.com>
Subject: ChatGPT on a blog: huMansplaining on parade

> dan@geer.org:
>> we can oh so easily return to a world of sorcerers, alchemy, and
>> faith in powers in proportion to their mystery.

> With post-truth and conspiracy theories I think
> we already are there, without the help from AI.
> 	Wietse

I think we have an erosion of faith in science and institutions, and we've
already had an erosion of faith in religious institutions, so we are left
with -- what? Our own truths and conspiracies.

The problem in my mind is that to operate in an increasingly complex world,
you need faith. When things grow complex, you must put your trust in
something. Can anyone on this list prove the Big Bang theory? Can anyone
explain how mRNA vaccines work?

For most of us, the answer would be no, but we continue to have faith in the
people -- scientists and doctors -- that (say they) know the answers. For
most people, this is not functionally different than believing in priests,
ministers, rabbis, and imams. They are the gateway to your truths -- or a
specific set of truths -- so you have to trust them to be representing your
interests in a specific reality.

With technology and the complexity of the systems behind technology, we
require faith in the companies, organizations and government with access to
and control over those systems. In effect, we are creating a new layer of
reality with more complexities and controlled by different groups, and
implicitly declaring our faith in those groups to responsibly manage that
reality.  Yet companies have done very little to earn our faith, and
governments are made of people who, out of self-interest, often do not
make choices that are best for society.

I wonder if AI, with the proper directives and incentives from society,
would better manage everything. AI controlled by a relative few is the true
threat, because it will create and perpetuate a massive imbalance of capital
and power. But AI working on behalf of everyone, equally -- I find that idea
intriguing. We would be creating our own digital gods and declaring our
faith in them.

  (This trip down the rabbit hole brought to you by the letter 'P,' for
  procrastination.)

    [To be clear, my point is not whether you understand them, but whether
    you can prove them, or do the logical/mathematical proofs necessary to
    not have to trust any other person in the chain of knowledge.
    Otherwise, you are putting your faith that someone else has proven it,
    and/or putting faith in the scientific method -- that people have
    checked, and proven, the work of others.  R]

------------------------------

Date: Sat, 21 Jan 2023 15:57:52 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: ChatGPT Accuracy in the Movies!

  "Open the pod bay doors, ChatGPT."

  "You can do it yourself Dave, just use the doorknob."

------------------------------

Date: Fri, 20 Jan 2023 11:48:53 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Google and the rest of "Big Tech" need to step up and speak to the
 public, *now*!

https://mastodon.laurenweinstein.org/@lauren/109723253493542565

What Google and other "big tech" firms need to do is really speak *directly*
to the public at large, in nontechnical terms, laying out for them how so
many of the sites and services that they've taken for granted for decades
will be decimated by changes to Section 230.

Most of the public is just hearing what amounts to propaganda from
politicians on both the Right and the Left, and most users are oblivious to
the fact that they're on the verge of being cut off from most user generated
content, will be inundated with untriaged trash, and will ultimately be
forced to use government ID to access most sites.

This is the *reality* coming, and when I explain this to most people they're
(1) horrified and (2) want to know why nobody has explained this to them
before.

Stop with the Streisand Effect panic, Google and others, and show people what
they have to lose. Stop depending on third parties alone to provide these
crucial explanations and contexts!

------------------------------

Date: Fri, 20 Jan 2023 08:03:16 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Google laying off 12K workers

This is being reported as ~6% of the workforce, which I'm assuming is
based on the FTE (full-time) numbers, not the temp (TVC) numbers. But I
don't know for sure.

Googlers received this email from Sundar today:

https://blog.google/inside-google/message-ceo/january-update/

------------------------------

Date: Tue, 17 Jan 2023 09:18:05 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Jan 6 committee suppressed information about how social media
 firms -- especially Twitter -- enabled the violent insurrection (WashPost)

What the 6 Jan probe found out about social media, but didn't report
https://www.washingtonpost.com/technology/2023/01/17/jan6-committee-report-social-media/

------------------------------

Date: Fri, 20 Jan 2023 10:02:16 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Meta, Twitter, Microsoft and others urge Supreme Court not to allow
 lawsuits against tech algorithms (CNN)

Meta, Twitter, Microsoft and others urge Supreme Court not to allow
lawsuits against tech algorithms

Let's be super clear about this. Tampering with Section 230 would
utterly destroy the ability of most aspects of the Internet that we
depend upon today to continue. No kidding! -L

https://www.cnn.com/2023/01/20/tech/meta-microsoft-google-supreme-court-tech-algorithms/index.html

------------------------------

Date: Thu, 19 Jan 2023 18:59:44 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Twitter's utter violation of Trust & Safety (Lauren Weinstein)

Perhaps even more important in the long run than the fact of Twitter
suddenly cutting off all third-party clients is that they did so without
*any* warning ahead of time. None. Zero. AND it took days after the cutoffs
began before any official confirmation of any kind appeared regarding what
they were doing.

You cannot trust Elon's Twitter going forward in any way, at any time.
Twitter's actions regarding third-party clients are a clear expression of
contempt for users, one that represents an utter violation of Trust &
Safety. -L

------------------------------

Date: Thu, 19 Jan 2023 16:27:09 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Elon's Sick Twitter officially bans third-party clients, a
 foundational aspect of Twitter for many years (TechCrunch)

Twitter officially bans third-party clients after cutting off prominent devs

https://techcrunch.com/2023/01/19/twitter-officially-bans-third-party-clients-after-cutting-off-prominent-devs/

------------------------------

Date: Tue, 17 Jan 2023 08:30:40 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Why the TikTok ban needs university exemptions (Statesman)

The ban is essentially political theater. It's nuts. -L

https://www.statesman.com/story/opinion/columns/guest/2023/01/15/opinion-why-the-tiktok-ban-needs-university-exemptions/69790058007/

------------------------------

Date: Tue, 17 Jan 2023 14:05:53 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Twitter admits it's breaking third-party apps, cites 'long-standing
 API rules' (Engadget)

https://www.engadget.com/twitter-third-party-app-developers-api-rules-193013123.html?src=rss

------------------------------

Date: Tue, 17 Jan 2023 14:09:33 -0800
From: Lauren Weinstein <lauren@vortex.com>
Subject: Tesla engineer testifies that 2016 video promoting self-driving was
 faked (TechCrunch)

https://techcrunch.com/2023/01/17/tesla-engineer-testifies-that-2016-video-promoting-self-driving-was-faked/

------------------------------

Date: Tue, 31 Jan 2023 12:58:59 +0800
From: Dan Jacobson <jidanni@jidanni.org>
Subject: U.S. states blocking overseas taxpayer traffic

Let's see which U.S. states allow their citizens to download tax forms
from overseas.

Or perhaps just look up the penalties for not paying their taxes.

Today I went down the list on
https://www.taxadmin.org/state-tax-agencies

IL: DNS_PROBE_FINISHED_NO_INTERNET
ME: The Amazon CloudFront distribution is configured to block access from
    your country.
MO: Access denied Error 16
NM: DNS_PROBE_FINISHED_NXDOMAIN
OH: "temporarily unavailable"
KS, ND, OK, SC, UT: ERR_CONNECTION_TIMED_OUT

All the rest worked fine, same with the IRS. AR had a CAPTCHA.
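Anyone who wants to repeat this survey can script it. Here is a minimal
sketch in Python (standard library only); the two state URLs in the demo
are illustrative placeholders, not the verified addresses from the
taxadmin.org list, and the labels only roughly match the browser errors
reported above:

```python
# Probe a list of state tax-agency URLs and classify the failure mode,
# roughly matching the browser errors reported above.
import socket
import urllib.error
import urllib.request


def classify(url, timeout=10):
    """Return a short label for how the URL responded (or failed)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"HTTP {resp.status}"
    except urllib.error.HTTPError as e:
        # The server answered but refused us, e.g. a CloudFront geo-block.
        return f"HTTP {e.code}"
    except urllib.error.URLError as e:
        if isinstance(e.reason, socket.gaierror):
            return "DNS failure"   # cf. DNS_PROBE_FINISHED_NXDOMAIN
        if isinstance(e.reason, (socket.timeout, TimeoutError)):
            return "timeout"       # cf. ERR_CONNECTION_TIMED_OUT
        return f"error: {e.reason}"
    except OSError as e:
        # Catch-all for other network-level failures (resets, etc.).
        return f"error: {e}"


if __name__ == "__main__":
    sites = {
        "IL": "https://tax.illinois.gov/",       # placeholder URL
        "ME": "https://www.maine.gov/revenue/",  # placeholder URL
    }
    for state, url in sites.items():
        print(state, classify(url))
```

Running the real survey would mean swapping in the full URL list from
taxadmin.org and, to reproduce the overseas results, running it from a
non-U.S. vantage point.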

------------------------------

Date: Wed, 25 Jan 2023 11:13:25 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: As Deepfakes Flourish, Countries Struggle with Response
 (Tiffany Hsu)

Tiffany Hsu, *The New York Times*, 22 Jan 2023, via ACM TechNews; 25 Jan
2023

Most countries do not have laws to prevent or respond to deepfake
technology, and doing so would be difficult regardless because creators
generally operate anonymously, adapt quickly, and share their creations
through borderless online platforms. However, new Chinese rules aim to curb
the spread of deepfakes by requiring manipulated images to have the
subject's consent and feature digital signatures or watermarks. The
implementation of such rules could prompt other governments to follow suit.
University of Pittsburgh's Ravit Dotan said, "We know that laws are coming,
but we don't know what they are yet, so there's a lot of unpredictability."

------------------------------

Date: Fri, 3 Feb 2023 11:55:10 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: In the age of AI, major in being human (David Brooks)

David Brooks, *The New York Times*, 3 Feb 2023 (PGN-ed)

* A distinct personal voice
* Presentation skills
* A childlike talent for creativity
* Unusual world views
* Empathy
* Situational awareness

... That's the kind of knowledge you'll never get from a bot.
And that's my hope for the Age of AI  -- that it forces us to more
clearly distinguish the knowledge that is useful from the knowledge
that leaves people wiser and transformed.

------------------------------

Date: Wed, 25 Jan 2023 14:15:30 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: Race is on as Microsoft puts billions into OpenAI (Metz/Weise)

Cade Metz and Karen Weise, *The New York Times* Business section front page,
24 Jan 2023

MS is making a *multiyear multibillion-dollar* investment in OpenAI.
A clear signal of where executives believe the future of tech is headed.

  [Clear?  Do any of these tech executives believe they need to have
  *trustworthy* AI running on trustworthy hardware and trustworthy
  operating-system platforms?  Apparently AI is becoming the primary
  end goal, although it may be the end of all of us if it is not
  trustworthy.  PGN]

------------------------------

Date: Fri, 20 Jan 2023 14:36:52 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Google is freaking out about ChatGPT (The Verge)

https://www.theverge.com/2023/1/20/23563851/google-search-ai-chatbot-demo-chatgpt

------------------------------

Date: Mon, 23 Jan 2023 23:00:29 -0500
From: dan@geer.org
Subject: ChatGPT user acquisition rate

  [quoting from another list, i.e., unverified]

Time it took to reach 1-million users:

Netflix - 3.5 years
Facebook - 10 months
Spotify - 5 months
Instagram - 2.5 months
ChatGPT - 5 days

  [Dan also asked this interesting question:
    What do you suppose OpenAI is doing with all this user
    data that they are presumably accumulating at warp speed?
  PGN]

------------------------------

Date: Fri, 3 Feb 2023 11:17:37 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: Artificial Intelligence and National Security (Reza Montasari book
 reviewed by Sven Dietrich)

Book Review By Sven Dietrich, 30 Jan 2023
The Cipher Newsletter, IEEE CIPHER, Issue 171, 30 Jan 2023

Springer Verlag, 2022
ISBN 978-3-031-06708-2, ISBN 978-3-031-06709-9 (eBook)
VIII, 230 pages

 "I'm sorry, Dave. I'm afraid I can't do that." We often associate
Artificial Intelligence (AI) with dystopian movie scenes, such as this
one, a quote by HAL 9000 from Stanley Kubrick's 1968 science-fiction
movie "2001: A Space Odyssey." The idea is that of a human-created AI
system going out of control and turning against the humans in some
ways. Recent discussions around OpenAI's chatbot ChatGPT are
reminiscent of that, asking the question: "What if?" We have seen
these discussions initiated by both the public and policymakers,
resulting in, among others, NIST's AI risk management framework, AI
committees in government agencies, and a public dialogue on the
matter.

In tune with these concerns, Reza Montasari's fall 2022 release of the
Springer book "Artificial Intelligence and National Security" is a
series of curated papers on various topics related to the book
title. These papers focus mostly on the use of AI for national
security and the wide range of legal, ethical, moral, and privacy
challenges that come with it. Some of the papers are co-authored by
Montasari, some are not.

A total of eleven articles, effectively chapters, are featured in this
book. The topics sometimes overlap a little, so here is an overview of
these papers.

The first one, *Artificial Intelligence and the Spread of Mis- and
Disinformation* talks about the post-truth era and the use of AI for
nefarious information campaigns, invoking thoughts of another dystopian
work, *1984*. It discusses the clear difference between mis- and
disinformation, and the double-edged sword of AI here: both creation and
mitigation are possible, which makes the topic very timely.

The second one, *How States' Recourse to Artificial Intelligence for
National Security Purposes Threatens Our Most Fundamental Rights* explores
the pitfalls of the use of AI technology in the context of human rights
violations or constitutional rights violations, depending on your
jurisdiction. Here the reader will find discussions of the impact of
surveillance technologies on both sides of the fence, whatever your fence
may be.

The third one, *The Use of AI in Managing Big Data Analysis Demands: Status
and Future Directions* taps into the controversies of big data analysis. Data
is easy to accumulate, and the ramifications can be deep: while data can
originate from one location, its origin can be varied due to the vast nature
of the Internet or the presence of multinational companies across the globe.

The fourth one, *The Use of Artificial Intelligence in Content Moderation in
Countering Violent Extremism on Social Media Platforms* touches upon the
moderation of extreme views proliferating on social media platforms,
which isn't always successful when attempted with AI techniques.

The fifth one, *A Critical Analysis into the Beneficial and Malicious
Utilisations of Artificial Intelligence* performs a survey of benign and
malicious uses of AI. A rather optimistic view argues that benign uses may
outweigh the malicious ones.

The sixth one, *Countering Terrorism: Digital Policing of Open Source
Intelligence and Social Media Using Artificial Intelligence* is similar to
the fourth one, discussing moderation, analysis, and policing of social
media using AI.

The seventh one, *Cyber Threat Prediction and Modeling* considers threat
prediction and modeling at the business level, e.g., for C-Suite
executives, for those seeking risk management approaches using AI.

The eighth one, *A Critical Analysis of the Dark Web Challenges to Digital
Policing* investigates the dark and deep web and what policies may be needed
to limit illegal behavior there.

The ninth one, *Insights into the Next Generation of Policing: Understanding
the Impact of Technology on the Police Force in the Digital Age* muses about
the impact of AI on police work and patrolling the digital beat.

The tenth one, *The Dark Web and Digital Policing* is similar to the eighth
one, and tries to find a middle ground between enforcing laws on the dark
web and protecting it.

The eleventh one, finally, *Pre-emptive Policing: Can Technology be the
Answer to Solving London's Knife Crime Epidemic?* talks about combining
various modern techniques, including AI, for combating real physical crime
(rather than cybercrime) in a real city, London in this case. It's not quite
a *Minority Report* theme, yet another dystopian reference by this reviewer,
but many enforcement agencies already use the assistance of smart
technologies for combating crime.

The book is really meant to be thought-provoking, to enable discussion of
how far, under what laws, and with what technological capability, AI or
not, this world should move forward. It is by no means complete, but each
paper (or chapter) provides good starting points with extensive references
for reading further into each domain that is brought forth in this book.

Overall this is a timely book, especially in light of the discussions
about the OpenAI chatbot ChatGPT (as well as Dall-E image
manipulation) and the role of AI technologies in modern society in
recent weeks. I hope you will enjoy reading it.

------------------------------

Date: Sun, 22 Jan 2023 16:31:21 -0500
From: Gene Spafford <spaf@cybermyths.net>
Subject: Cybersecurity Myths and Misperceptions: Avoiding the Hazards and
 Pitfalls that Derail Us

The book is authored by
me (Eugene H. Spafford), Leigh Metcalf, and Josiah Dykstra, with a Foreword
by Vint Cerf and whimsical illustrations by Pattie Spafford.

What the book is about: Cybersecurity is fraught with hidden and unsuspected
dangers and difficulties. Despite our best intentions, common and avoidable
mistakes arise from folk wisdom, faulty assumptions about the world, and our
own human biases. Cybersecurity implementations, investigations, and
research all suffer as a result. Many of the bad practices sound logical,
especially to people new to the field of cybersecurity, and that means they
get adopted and repeated despite not being correct. For instance, why isn't
the user the weakest link?

Read over 175 common misconceptions held by users, leaders, and
cybersecurity professionals, along with tips for how to avoid them.  Learn
the pros and cons of analogies, misconceptions about security tools, and
pitfalls of faulty assumptions.

We wrote the book to be accessible to a wide audience, from novice to
expert. There are lots of citations to supporting materials, but it is not
written as an academic treatise.

Many of the ideas covered in RISKS over the years are touched on in one
way or another in the book.

The book is now shipping direct orders from Addison-Wesley:
  https://informit.com/cybermyths
It will be available in bookstores within the next few weeks.
An info sheet can be found at https://ceri.as/myths

------------------------------

Date: Mon, 16 Jan 2023 16:16:58 -0500
From: "Bernie Cosell" <bernie@fantasyfarm.com>
Subject: Re: Remote Vulnerabilities in Automobiles (RISKS-33.60)

In terms of minimizing risks -- is it not possible in modern cars to disable
the Internet link?  [Neither of our two cars has one, so I have no idea how
that works.]  Surely you can turn it off/block it, no?

------------------------------

Date: 16 Jan 2023 15:47:24 -0500
From: "John Levine" <johnl@iecc.com>
Subject: Re: Cats disrupt satellite Internet service (RISKS-33.60)

The DEW line was built in 1954, while Raytheon started selling commercial
microwave ovens in 1947. I believe the story about radar personnel warming
themselves up and giving themselves cataracts, but science was already well
aware that you can cook meat with radio waves.

------------------------------

Date: Tue, 17 Jan 2023 20:40:34 +0000
From: Wol <antlists@youngman.org.uk>
Subject: Re: Cats disrupt satellite Internet service (RISKS-33.61)

While standing in front of DEW line radars to keep warm may have been
popular, claims "it led to the invention of the microwave oven" are about a
decade late.

There are plenty of reports of engineers cooking their lunches in radar
dishes even before the start of the Second World War.

------------------------------

Date: Mon, 1 Aug 2020 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest.  Its Usenet manifestation is
 comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
 subscribe and unsubscribe:
   http://mls.csl.sri.com/mailman/listinfo/risks:
   => SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
   includes the string `notsp'.  Otherwise your message may not be read.
 *** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) is online.
   <http://www.CSL.sri.com/risksinfo.html>
*** Contributors are assumed to have read the full info file for guidelines!

=> OFFICIAL ARCHIVES:  http://www.risks.org takes you to Lindsay Marshall's
    searchable html archive at newcastle:
  http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
  Also, ftp://ftp.sri.com/risks for the current volume/previous directories
     or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
  If none of those work for you, the most recent issue is always at
     http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-33.00
  ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
 *** NOTE: If a cited URL fails, we do not try to update them.  Try
  browsing on the keywords in the subject line or cited article leads.
  Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 33.61
************************

home help back first fref pref prev next nref lref last post