[33456] in RISKS Forum
Risks Digest 34.45
daemon@ATHENA.MIT.EDU (RISKS List Owner)
Sat Sep 14 20:13:32 2024
From: RISKS List Owner <risko@csl.sri.com>
Date: Sat, 14 Sep 2024 17:13:16 PDT
To: risks@mit.edu
RISKS-LIST: Risks-Forum Digest Saturday 14 Sep 2024 Volume 34 : Issue 45
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator
***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/34.45>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>
Contents:
The Social Impact of those Little Computers in Our Pockets
(Peter Bernard Ladkin)
The U.S. Military Is Not Ready for the New Era of Warfare
(NYTimes via Susmit Jha)
The AI nightmare is already here, thanks to our own governments
(Lauren Weinstein)
Hacker tricks ChatGPT into giving out detailed instructions for
making homemade bombs (TechCrunch)
AI Wants to Be Free -- Or at least very, very cheap (NYMag)
Tech giants fight plan to make them pay more for electric grid upgrades
(WashPost)
A tech firm stole our voices: then cloned and sold them (BBC)
The Bands and the Fans Were Fake. The $10 Million Was Real. (NYTimes)
Authors fighting deluge of fake writers and AI-generated books (CBC)
AI + Script-Kiddies: Malware/Ransomware explosion? (Henry Baker)
Insurance company spied on house from the sky. Then the real nightmare
began. (via GG)
AI worse than humans in every way at summarising information, government
trial finds (Crikey)
Generative AI Transformed English Homework. Math Is Next
(WiReD)
The national security threats in U.S. election software -- hiding in
plain sight (Politico)
He’s Known as *Ivan the Troll*. His 3D-Printed Guns Have Gone Viral.
(NYTimes)
Quantum Computer Corrected Its Own Errors, Improving Its
Calculations (Emily Conover)
Debloating Windows made me realize how packed with useless features
it is (XDA Developers)
50,000 gallons of water needed to put out Tesla Semi fire (AP News)
See How Humans Help Self-Driving Cars Navigate City Streets
(The New York Times)
Love (of cybersecurity) is a battlefield (ArsTechnica)
Senate Proposal for Crypto Tax Exemption Is Long Overdue (Cato Institute)
More on tariffs and bans against Chinese or other countries' goods
Signal Is More Than Encrypted Messaging. Under Meredith Whittaker,
It’s Out to Prove Surveillance Capitalism Wrong (WiReD)
The For-Profit City That Might Come Crashing Down (NYTimes)
``It just exploded.'' Springfield woman claims she never meant to spark false
rumors about Haitians (NBC News)
Re: Feds sue Georgia Tech for lying bigly about computer security
(Cliff Kilby, Dylan Northrup, Cliff Kilby)
Re: Standard security policies and variances (Cliff Kilby)
Re: How Navy chiefs conspired to get themselves illegal warship WiFi
(Shapir, Stan Brown)
Re: Former Tesla Autopilot Head And Ex-OpenAI Researcher Says
'Programming Is Changing So Fast' That He Cannot Think Of Going Back To
Coding Without AI (Steve Bacher)
Re: Moscow's Spies Were Stealing U.S. Tech, Until the FBI Started a Sabotage
Campaign (djc)
Abridged info on RISKS (comp.risks)
----------------------------------------------------------------------
Date: Mon, 9 Sep 2024 17:06:01 +0200
From: "Prof. Dr. Peter Bernard Ladkin" <ladkin@causalis.com>
Subject: The Social Impact of those Little Computers in Our Pockets
On Monday, 29 July, a Taylor-Swift-themed dance lesson for young kids was
being held in Southport near Liverpool. A knife-wielding youth entered,
stabbed and killed three young girls, aged 6, 7 and 9, and injured 8 other
kids and 2 adults.
Rumors were apparently spread on "social media" that the perpetrator was an
asylum seeker from -- I dunno -- the Middle East or Africa or somewhere. He
isn't -- he is a born and bred Brit. An outdoor commemoration for the poor
kids the next day was overrun by a group of thugs who then attacked a
mosque. Hotels housing asylum seekers elsewhere were attacked, with attempts
to set them on fire (with the occupants present). A series of "flash riots",
so to speak.
Such riots spread in a couple of days to lots of cities. The government
response was rapid (PM Sir Keir Starmer had dealt with the 2011 riots and we
can presume he has his ideas about what went right and wrong with that
response.) There were plenty of onlookers and there was lots of video on
those little computers we nowadays carry along in our pockets. The police
set a massive rapid evaluation program in motion; courts and prisons were
put on standby (I understate; some prisons released inmates early to make
space for the expected influx). Rioters and those who encouraged them were
identified, arraigned, and sent to prison extremely rapidly, the first ones
within eight days from offence to prison:
https://www.theguardian.com/uk-news/article/2024/aug/07/rioter-southport-jailed-far-right
. Not only active rioters were jailed, but those who incited them in "social
media" messages.
It quietened down relatively quickly. There has been talk of some 1,000 or
so offences that were to be prosecuted and likely to lead to jail terms (it
is generally unwise to plead "not guilty" when the police have veridical
film of you doing what you did).
The "flash riots" were organised through those little digital computers that
everyone carries around in their pockets. They were encouraged -- "fed" is
probably an appropriate word -- by a well-known "extreme rightist" and
ex-football hooligan, Stephen Yaxley-Lennon, from, of all places,
Cyprus. And by a cameo from new MP Nigel Farage, he of Brexit fame, who said
there were "questions" about the attack and attacker that needed answers
that did not appear to be forthcoming. He was, of course, "just asking
questions" but the intent seemed to be to suggest something was being
covered up by the "authorities".
Neither Yaxley-Lennon nor Farage could have done what they did with the
effects it had without those little computers in everyone's pockets. It used
to be that political actors had to pretty much persuade (or own!) a
professional journalist to print their words in a newspaper. And those words
wouldn't necessarily be printed the way the actor wants. For example, if
Farage had spoken to a journalist about his "questions", the journalist
would have been able quickly to ascertain the reality and it is unlikely
that there would have been anything there worth publishing. I can't recall
newspapers ever printing verbatim much of what Yaxley-Lennon seems to want
to say (although there are quite a few words about what he's done and was
doing).
On the other hand, those little computers were also used by others to make
videos of what was going on, which led from offence to prison so rapidly.
There are significant personal consequences in this technology-fueled
behaviour. If you are going to punch a policeman at such a gathering (see
below), someone likely has you on film. If you are identifiable, that could
well lead to your arrest and conviction. Lay people would be surprised by
the ways there are of identifying people, even masked people, from film.
The police can appeal to the public for information, and there are often
plenty of people, not all of them your friends, who know what you look
like. (We can note in addition that such techniques surely can also be used
by authoritarian states pursuing critics just as well as they can be used by
British police pursuing rioters.)
We can and probably should also remark on what people not otherwise involved
can be led to do. There is a 53-year-old woman who lives a "sheltered life"
in a smallish village (2,201 inhabitants) miles away from any riots, caring
for her ill husband at home, who lost her cool once on a Facebook group and
has been sentenced to 15 months in jail for it:
https://www.theguardian.com/uk-news/article/2024/aug/14/woman-53-jailed-over-blow-the-mosque-up-facebook-post-after-southport-riots
What of the future? The videos the police assessed here were most likely to
be veridical. We might have to think much harder in the future about
deepfake videos, and how videos should be assessed. Are we really so
certain that we technologists will still be able to tell the real from the
fake? There have been a few hefty scandals in the UK involving politicians
and alleged sexual abuse of minors. Here is one:
https://en.wikipedia.org/wiki/Cyril_Smith
But there are dissimulators, such as Carl Beech
https://www.theguardian.com/uk-news/2019/jul/22/how-nick-the-serial-child-abuse-accuser-became-the-accused
who falsely accused Lords Bramall, Brittan and
former MP Harvey Proctor of abusing him. What happens when such people have
film? Are we going to be able to tell the veridical from the fake?
------------------------------
Date: Fri, 13 Sep 2024 8:51:34 PDT
From: Peter Neumann <neumann@csl.sri.com>
Subject: The U.S. Military Is Not Ready for the New Era of Warfare
(The New York Times)
Possible URL:
https://www.NewYorkTimes.com/ai-drones-robot-war-pentagon
[Thanks to Susmit Jha. For some unknown reason, I cannot find who wrote
it, or when it ran. PGN]
Techno-skeptics who argue against the use of AI in warfare are oblivious to
the reality that autonomous systems are already everywhere -- and the
technology is increasingly being deployed to these systems' benefit.
Hezbollah's alleged use of explosive-laden drones has displaced at least
60,000 Israelis south of the Lebanon border. Houthi rebels are using
remotely controlled sea drones to threaten the 12 percent of global shipping
value that passes through the Red Sea, including the supertanker Sounion,
now abandoned, adrift and aflame, with four times as much oil as was carried
by the Exxon Valdez.
Yet as this is happening, the Pentagon still overwhelmingly spends its
dollars on legacy weapons systems. It continues to rely on an outmoded and
costly technical production system to buy tanks, ships and aircraft carriers
that new generations of weapons -- autonomous and hypersonic -- can
demonstrably kill. Take for example the F-35, the apex predator of the
sky. The fifth-generation stealth fighter is known as a ``flying
computer'' for its ability to fuse sensor data with advanced weapons. Yet
this $2 trillion program has fielded fighter airplanes with less processing
power than many smartphones.
The history of failure in war can almost be summed up in two words:
*too late*, Douglas MacArthur declared hauntingly in 1940.
------------------------------
Date: Tue, 10 Sep 2024 07:03:59 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: The AI nightmare is already here, thanks to our own governments
It's important to understand the specifics of the AI nightmare that,
yes, has now arrived. We know that these AI systems being pushed by
Google and other firms are not actually intelligent, and that they
frequently misunderstand input data and spew forth answers or other
output that appears reasonable even when it is completely wrong or
riddled with errors.
Government agencies rushing to use these systems to cut their
workloads -- processing unemployment applications, creating
transcripts of police encounters based on body-camera audio, and so
on -- are creating a perfect storm for these AI systems, hyped to the
hilt by desperate Big Tech firms, to horribly impact people's lives
in major ways. THIS is the real danger of AI today -- much more
so than (bad enough!) inaccurate and nonsensical Google Search AI
Overview answers. -L
------------------------------
Date: Thu, 12 Sep 2024 08:29:21 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: Hacker tricks ChatGPT into giving out detailed instructions for
making homemade bombs (TechCrunch)
https://techcrunch.com/2024/09/12/hacker-tricks-chatgpt-into-giving-out-detailed-instructions-for-making-homemade-bombs/
------------------------------
Date: Thu, 12 Sep 2024 10:06:39 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: AI Wants to Be Free -- Or at least very, very cheap
(NYMag)
The tech companies banking on AI are already scrambling to find ways to
offset their skyrocketing costs, but the public may not be willing to pay
for something they’re already used to getting on the cheap.
https://nymag.com/intelligencer/article/ai-wants-to-be-free.html
[This would be a good thing, if users who don't pay would no longer get
inaccurate summaries and other AI junk from Google et al. SB]
[I suppose if the commercial AI shysters were to adopt and seriously use
some of the evidence-based trustworthiness now coursing through the
naturally intelligent research sub-community, they might justifiably
charge for their efforts. They really do not deserve to charge for
crapware that is riddled with errors and sloppiness. Just a thought.
Please note that AI encompasses a huge variety of techniques and
applications, and is a very broad-brush term. Some of it has random
elements, which tend to make it non-deterministic, and therefore
non-repeatable. Military uses should demand evidence-based
trustworthiness in AI systems with respect to carefully stated
requirements and assumptions. However, I believe all AI systems
intended for critical uses could benefit from such requirements. PGN]
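A minimal sketch of the repeatability point (illustrative only, plain Python
with made-up names, not from the item above): generation that samples from a
distribution gives different outputs on different runs unless the seed is
pinned.
  import random
  def sample_reply(prompt, seed=None):
      # Unseeded: a fresh, OS-seeded generator each call, so the choice below
      # can differ from run to run.
      rng = random.Random(seed)
      return rng.choice(["answer A", "answer B", "answer C"])
  print(sample_reply("same prompt"))           # may vary across runs
  print(sample_reply("same prompt", seed=42))  # repeatable once the seed is pinned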
------------------------------
Date: Fri, 13 Sep 2024 18:48:58 -0700
From: "Jim" <jgeissman@socal.rr.com>
Subject: Tech giants fight plan to make them pay more for
electric grid upgrades
https://www.washingtonpost.com/technology/2024/09/13/data-centers-power-grid-ohio/
A regulatory dispute in Ohio may help answer one of the toughest questions
hanging over the nation's power grid: Who will pay for the huge upgrades
needed to meet soaring energy demand from the data centers powering the
modern Internet and artificial intelligence revolution?
The power company said projected energy demand in central Ohio forced it to
stop approving new data center deals there last year while it figured out
how to pay for the new transmission lines and additional infrastructure they
would require.
The energy demands of data centers have created similar concerns in other
hot spots such as Northern Virginia, Atlanta and Maricopa County, Ariz.,
leaving experts concerned that the U.S. power grid may not be capable of
dealing with the combined needs of the green energy transition and the
computing boom that artificial intelligence companies say is coming.
------------------------------
Date: Sat, 31 Aug 2024 22:32:03 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: A tech firm stole our voices: then cloned and sold them (BBC)
https://www.bbc.com/news/articles/c3d9zv50955o
The notion that artificial intelligence could one day take our jobs is a
message many of us will have heard in recent years.
But, for Paul Skye Lehrman, that warning has been particularly personal,
chilling and unexpected: he heard his own voice deliver it.
In June 2023, Paul and his partner Linnea Sage were driving near their home
in New York City, listening to a podcast about the ongoing strikes in
Hollywood and how artificial intelligence (AI) could affect the industry.
The episode was of interest because the couple are voice-over performers
and -- like many other creatives -- fear that human-sounding voice generators
could soon be used to replace them.
This particular podcast had a unique hook -- they interviewed an AI-powered
chatbot, equipped with text-to-speech software, to ask how it
thought the use of AI would affect jobs in Hollywood.
But, when it spoke, it sounded just like Mr Lehrman.
------------------------------
Date: Thu, 5 Sep 2024 08:22:45 -0700
From: Jim <jgeissman@socal.rr.com>
Subject: The Bands and the Fans Were Fake. The $10 Million Was Real. (NYTimes)
A North Carolina man used artificial intelligence to create hundreds of
thousands of fake songs by fake bands, then put them on streaming services
where they were enjoyed by an audience of fake listeners, prosecutors said.
Penny by penny, he collected a very real $10 million, they said when they
charged him with fraud.
The man, Michael Smith, 52, was accused in a federal indictment unsealed on
Wednesday of stealing royalty payments from digital streaming platforms for
seven years. Mr. Smith, a flesh-and-blood musician, produced A.I.-generated
music and played it billions of times using bots he had programmed,
according to the indictment.
The supposed artists had names like "Callous Post," "Calorie Screams" and
"Calvinistic Dust" and produced tunes like "Zygotic Washstands,"
"Zymotechnical" and "Zygophyllum" that were top performers on Amazon Music,
Apple Music and Spotify, according to the charges.
"Smith stole millions in royalties that should have been paid to musicians,
songwriters, and other rights holders whose songs were legitimately
streamed," Damian Williams, the U.S. attorney for the Southern District of
New York, said in a statement on Wednesday.
https://www.nytimes.com/2024/09/05/nyregion/nc-man-charged-ai-fake-music.html?unlocked_article_code=1.IU4.qG5j.YQ5cIffWwcJP
------------------------------
Date: Thu, 12 Sep 2024 06:54:49 -0600
From: Matthew Kruk <mkrukg@gmail.com>
Subject: Authors fighting deluge of fake writers and AI-generated books
(CBC)
https://www.cbc.ca/news/entertainment/ai-generated-books-amazon-1.7319018
------------------------------
Date: Tue, 10 Sep 2024 16:36:40 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: AI + Script-Kiddies: Malware/Ransomware explosion?
While this paper is only 2 years old, the quality of AIs/Copilots has
gotten much, much better.
The "highest and best use" of Copilot/AI during the next several years may
well be the production of malware and ransomware *at scale* by relatively
unskilled individuals.
The dream of the White House in turning unemployed coal miners into
computer programmers may have finally been realized. :-)
https://ieeexplore.ieee.org/document/10284976
GitHub Copilot: A Threat to High School Security?
Exploring GitHub Copilot's Proficiency in Generating Malware from Simple
User Prompts
Eric Burton Martin; Sudipto Ghosh
24 October 2023
This paper examines the potential implications of script kiddies and novice
programmers with malicious intent having access to GitHub Copilot, an
artificial intelligence tool developed by GitHub and OpenAI. The study
assesses how easily one can utilize this tool to generate various common
types of malware ranging from ransomware to spyware, and attempts to
quantify the functionality of the produced code. Results show that with a
single user prompt, malicious software such as DoS programs, spyware,
ransomware, trojans, and wiperware can be created with ease. Furthermore,
uploading the generated executables to VirusTotal revealed an average of
7/72 security vendors flagging the programs as malicious. This study has
shown that ***novice programmers and script kiddies with access to Copilot
can readily create functioning malicious software with very little coding
experience.***
https://www.nytimes.com/2023/06/02/opinion/ai-coding.html
Farhad Manjoo June 2, 2023
It's the End of Computer Programming as We Know It.
"Wait a second, though -- wasn't coding supposed to be one of the =
can't-miss careers of the digital age? ... Joe Biden to coal miners: Learn
to code!"
------------------------------
Date: Fri, 30 Aug 2024 16:43:06 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: Insurance company spied on house from the sky. Then the
real nightmare began. (via GG)
Author: Part of what is so disturbing about the whole episode is how opaque
it was. When Travelers took aerial footage of my house, I never knew. When
it decided I was too much of a risk, I had no way of knowing why or how. As
more and more companies use more and more opaque forms of AI to decide the
course of our lives, we're all at risk. AI may give companies a quick way to
save some money, but when these systems use our data to make decisions about
our lives, we're the ones who bear the risk. Maddening as dealing with a
human insurance agent is, it's clear that AI and surveillance are not the
right replacements. And unless lawmakers take action, the situation will
only get worse.
https://www.msn.com/en-us/news/technology/ar-AA1onU5O
Travelers clarified that any "high-resolution aerial imagery" it used did
not come from a drone.
Whether drone or ... what? The risk is remote surveillance interpreted by
AI, no humans needed for underwriting. What could go wrong?
------------------------------
Date: Tue, 3 Sep 2024 07:40:49 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: AI worse than humans in every way at summarising information,
government trial finds (Crikey)
https://www.crikey.com.au/2024/09/03/ai-worse-summarising-information-humans-government-trial/
------------------------------
Date: Sat, 31 Aug 2024 07:47:19 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Generative AI Transformed English Homework. Math Is Next (WiReD)
ByteDance’s Gauth app scans math homework and provides thorough, often
correct, answers using AI. Millions have already downloaded it for free.
https://www.wired.com/story/gauth-ai-math-homework-app/
------------------------------
Date: Sun, 1 Sep 2024 08:25:21 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: The national security threats in U.S. election software -- hiding in
plain sight (Politico)
Hacking blind spot: States struggle to vet coders of election software.
In New Hampshire, a cybersecurity firm found troubling security bugs -- and
the Ukrainian national anthem -- written into a voter database built with
the help of an overseas subcontractor.
When election officials in New Hampshire decided to replace the state’s
aging voter registration database before the 2024 election, they knew that
the smallest glitch in Election Day technology could become fodder for
conspiracy theorists.
So they turned to one of the best -- and only -- choices on the market:
a small, Connecticut-based IT firm that was just getting into election
software.
But last fall, as the new company, WSD Digital, raced to complete the
project, New Hampshire officials made an unsettling discovery: The firm had
offshored part of the work. That meant unknown coders outside the U.S. had
access to the software that would determine which New Hampshirites would be
welcome at the polls this November.
The revelation prompted the state to take a precaution that is rare among
election officials: It hired a forensic firm to scour the technology for
signs that hackers had hidden malware deep inside the coding supply chain.
The probe unearthed some unwelcome surprises: software misconfigured to
connect to servers in Russia and the use of open-source code — which is
freely available online — overseen by a Russian computer engineer convicted
of manslaughter, according to a person familiar with the examination and
granted anonymity because they were not authorized to speak about it. [...]
https://www.politico.com/news/2024/09/01/us-election-software-national-security-threats-00176615
------------------------------
Date: Tue, 10 Sep 2024 10:16:30 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: He's Known as *Ivan the Troll*. His 3D-Printed Guns Have Gone
Viral. (NYTimes)
From his Illinois home, he champions guns for all. The Times confirmed his
real name and linked the firearm he helped design to terrorists, drug
dealers and freedom fighters in at least 15 countries.
https://www.nytimes.com/2024/09/10/world/europe/ivan-troll-3d-printed-homemade-guns-fgc9.html
------------------------------
Date: Fri, 13 Sep 2024 11:21:37 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: Quantum Computer Corrected Its Own Errors, Improving Its
Calculations (Emily Conover)
Emily Conover, *Science News*, 10 Sep 2024, via ACM Technews
Microsoft and Quantinuum researchers demonstrated a quantum computer that
uses quantum error correction to fix its own mistakes mid-calculation. The
researchers were able to perform operations and error correction repeatedly
on eight logical qubits, with the corrected calculation having an error rate
one-tenth that of the original physical qubits. The researchers also
achieved a record entanglement of 12 logical qubits, with an error rate less
than one-twentieth that of the original physical qubits.
------------------------------
Date: Sun, 1 Sep 2024 04:49:15 -0400
From: "Gabe Goldberg" <gabe@gabegold.com>
Subject: Debloating Windows made me realize how packed with useless features
it is (XDA Developers)
Windows 11 comes with lots of unnecessary bloatware that can slow down
your system.
Win11Debloat significantly improves system performance by removing
unnecessary background processes.
Consider using Win11Debloat or Tiny11 to customize your Windows install
and remove unwanted applications for a cleaner experience.
https://www.xda-developers.com/debloat-windows-packed-useless-features/
------------------------------
Date: Fri, 13 Sep 2024 11:53:28 +0000
From: Henry Baker <hbaker1@pipeline.com>
Subject: 50,000 gallons of water needed to put out Tesla Semi fire (AP News)
WASHINGTON (AP) -- California firefighters had to douse a flaming battery in a
Tesla Semi with about 50,000 gallons (190,000 liters) of water to extinguish
flames after a crash, the National Transportation Safety Board said
Thursday. In addition to the huge amount of water, firefighters used an
aircraft to drop fire retardant in the immediate area of the electric truck
as a precautionary measure, the agency said in a preliminary report. The
freeway was closed for about 15 hours as firefighters made sure the
batteries were cool enough to recover the truck. If a home fireplace
generates ~1.5 kW and a Tesla Semi has a battery which stores ~900 kWh, then it
could "burn" for *600 hours* -- i.e., 25 days.
https://apnews.com/article/tesla-semi-fire-battery-crash-water-firefighters-7ff04a61e562b80b73e057cfd82b6165
(Alternatively, 309,597 *AI inferences* could be performed with this same
900kWh.)
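A quick check of the arithmetic quoted above (an illustrative sketch in
Python; the per-inference energy is simply what the two quoted figures
imply):
  battery_kwh = 900.0      # Tesla Semi battery, ~900 kWh
  fireplace_kw = 1.5       # home fireplace, ~1.5 kW
  inferences = 309_597     # AI inferences quoted for the same energy
  hours = battery_kwh / fireplace_kw
  print(hours, hours / 24)                # 600.0 hours, 25.0 days
  print(battery_kwh * 1000 / inferences)  # ~2.9 Wh implied per inference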
------------------------------
Date: Thu, 5 Sep 2024 19:00:07 -0400
From: "Gabe Goldberg" <gabe@gabegold.com>
Subject: See How Humans Help Self-Driving Cars Navigate City Streets
(The New York Times)
In places like San Francisco, Phoenix and Las Vegas, robot taxis are
navigating city streets, each without a driver behind the steering
wheel. Some don’t even have steering wheels.
But cars like this one in Las Vegas are sometimes guided by someone
sitting here: An office scene with a person standing in front of
workstations with people seated at computers.
This is a command center in Foster City, Calif., operated by Zoox, a
self-driving car company owned by Amazon. Like other robot taxis, the
company’s self-driving cars sometimes struggle to drive themselves, so
they get help from human technicians sitting in a room about 500 miles away.
Inside companies like Zoox, this kind of human assistance is taken for
granted. Outside such companies, few realize that autonomous vehicles
are not completely autonomous.
https://www.nytimes.com/interactive/2024/09/03/technology/zoox-self-driving-cars-remote-control.html?smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb&ngrp=mnn&pvid=77CAD2CB-56B4-4A3A-B6BA-B83FDD381C21
------------------------------
Date: Fri, 30 Aug 2024 19:46:08 -0400
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: Love (of cybersecurity) is a battlefield (ArsTechnica)
https://arstechnica.com/security/2024/08/city-of-columbus-sues-man-after-he-discloses-severity-of-ransomware-attack/
It is difficult to work in cybersecurity if the response to a dispute of an
entity's public audit result is a lawsuit against the researcher.
If this job made any sense I would have expected this article to read 'City
of Columbus issues dispute with auditor'.
How did the researcher locate any usable material from any source if no
material was released? How can you illegally possess something that was
attested to not exist?
Arguing that the dark web is inherently criminal is a very foolish path to
tread. What makes the dark web dark? The lack of dependence on the public
DNS root?
By that definition every company web filter and pinhole creates a dark web. I
refuse to allow anything on .xyz to resolve in my home. Did I require special
tools or knowledge to accomplish that? Is my act of disconnecting from the
root DNS a criminal one? Am I now obligated to serve malware-associated
domains because filtering them would create a dark web?
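A minimal sketch of what such TLD filtering looks like from a client
(illustrative only, plain Python; the .xyz name below is hypothetical):
  import socket
  def resolves(name):
      # True if the local resolver returns any address for this name.
      try:
          socket.getaddrinfo(name, None)
          return True
      except socket.gaierror:
          return False
  print(resolves("example.com"))          # True with an ordinary resolver
  print(resolves("blocked-example.xyz"))  # NXDOMAIN (False) behind a resolver
                                          # that refuses the .xyz zone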
If you are doing contract pen-testing, follow your contract to the letter.
If you're doing open research, probably best to be active with a reputable
org (I like ACM), and you'll probably want to find a lawyer.
Above all, keep pointing out lies.
(I can call a published falsehood a lie still, right? I might need to call
my lawyer.)
[with my respects to Pat Benatar.]
------------------------------
Date: Sun, 1 Sep 2024 17:13:49 -0400
From: "Gabe Goldberg" <gabe@gabegold.com>
Subject: Senate Proposal for Crypto Tax Exemption Is Long Overdue (Cato Institute)
Using Cryptocurrency as a Form of Payment Could Become Practical with
Proposed Legislation
Senate Proposal for Crypto Tax Exemption Is Long Overdue
Four senators are fighting to exempt low-value crypto transactions from
federal taxation. Congressional approval for their proposal is long overdue.
Bitcoin policy has been the talk of the town in Washington, D.C. ever
since former President Donald Trump, Republican Senator Cynthia Lummis
(WY), and presidential candidate Robert F. Kennedy Jr. all announced
their support for a strategic Bitcoin reserve at the Bitcoin 2024
conference in Nashville. Yet, the renewed introduction of another
proposal in Congress flew under the radar, and it’s long overdue:
creating a tax exemption for taxpayers who pay with cryptocurrency.
https://www.cato.org/commentary/senate-proposal-crypto-tax-exemption-long-overdue
These too, surely -- https://en.wikipedia.org/wiki/Doubloon
------------------------------
Date: Fri, 13 Sep 2024 07:12:21 -0700
From: Lauren Weinstein <lauren@vortex.com>
Subject: More on tariffs and bans against Chinese or other countries'
goods
China is a repressive Communist regime. However, for decades U.S.
firms have voluntarily handed manufacturing dominance to China, in
order to benefit their own profits. U.S. consumers have become
dependent on this arrangement, and U.S. manufacturers overall have
shown little interest in "making quality stuff" again at reasonable
prices.
When import restrictions and/or tariffs are applied to a foreign
country, it should be for valid reasons that will promote valid
outcomes. Not for political reasons (especially fake national security
claims).
There are certainly valid discussions to be had regarding our relationship
with China. But when you dig down into the motives behind these kinds of
tariffs and bans, you find very little but politics in play. -L
------------------------------
Date: Fri, 6 Sep 2024 17:11:28 -0400
From: "Gabe Goldberg" <gabe@gabegold.com>
Subject: Signal Is More Than Encrypted Messaging. Under Meredith Whittaker,
It’s Out to Prove Surveillance Capitalism Wrong (WiReD)
On its 10th anniversary, Signal’s president wants to remind you that the
world’s most secure communications platform is a nonprofit. It’s free.
It doesn’t track you or serve you ads. It pays its engineers very well.
And it’s a go-to app for hundreds of millions of people. [...]
Yeah. I don’t think anyone else at Signal has ever tried, at least so
vocally, to emphasize this definition of Signal as the opposite of
everything else in the tech industry, the only major communications
platform that is not a for-profit business.
Yeah, I mean, we don’t have a party line at Signal. But I think we
should be proud of who we are and let people know that there are clear
differences that matter to them. It’s not for nothing that WhatsApp is
spending millions of dollars on billboards calling itself private, with
the load-bearing privacy infrastructure having been created by the
Signal protocol that WhatsApp uses.
Now, we’re happy that WhatsApp integrated that, but let’s be real. It’s
not by accident that WhatsApp and Apple are spending billions of dollars
defining themselves as private. Because privacy is incredibly valuable.
And who’s the gold standard for privacy? It’s Signal.
I think people need to reframe their understanding of the tech industry,
understanding how surveillance is so critical to its business model. And
then understand how Signal stands apart, and recognize that we need to
expand the space for that model to grow. Because having 70 percent of
the global market for cloud in the hands of three companies globally is
simply not safe. It’s Microsoft and CrowdStrike taking down half of the
critical infrastructure in the world, because CrowdStrike cut corners on
QA for a fucking kernel update. Are you kidding me? That’s totally
insane, if you think about it, in terms of actually stewarding these
infrastructures.
https://www.wired.com/story/meredith-whittaker-signal
------------------------------
Date: Fri, 6 Sep 2024 17:35:36 -0400
From: "Gabe Goldberg" <gabe@gabegold.com>
Subject: The For-Profit City That Might Come Crashing Down (NYTimes)
There are more than 5,400 of these special economic zones in the world,
ranging on a spectrum from free ports for duty-free trading all the way
to the special administrative region of Hong Kong. About 1,000 zones
have cropped up in just the past decade, including dozens of start-up
cities — sometimes called charter cities — most of them in developing
nations like Zambia and the Philippines. Some have actually grown into
major urban centers, like Shenzhen, which went from a fishing village to
one of China’s largest cities, with a G.D.P. of $482 billion, after it
was designated a special economic zone in 1980.
Each zone offers a degree of escape from government oversight and
taxation, a prospect that has excited libertarian and anarcho-capitalist
thinkers at least since Ayn Rand imagined a free-market utopia called
Galt’s Gulch in “Atlas Shrugged.” Today, escalating clashes between the
government and Big Tech — like the S.E.C.’s regulatory war on crypto, or
the Federal Aviation Administration’s repeated investigations into
SpaceX — have spurred some Silicon Valley entrepreneurs to seek
increasingly splintered-off hubs of sovereignty. And with government
dysfunction preventing reforms even in wealthy cities like San
Francisco, locked in a decades-long affordable-housing crisis, and New
York City, which just lost out on as much as $1 billion when Albany
scrapped a 17-years-in-the-making congestion pricing plan that would
have funded public transit, it’s not hard to see the appeal of starting
from scratch. [...]
There are about three dozen charter cities currently operating in the world,
according to an estimate from the Adrianople Group, an advisory firm that
concentrates on special economic zones. Several others are under
development, including the East Solano Plan, run by a real estate
corporation that has spent the last seven years buying up $900 million of
ranch land in the Bay Area to build a privatized alternative to San
Francisco; Praxis, a forthcoming “cryptostate” on the Mediterranean; and the
Free Republic of Liberland, a three-square-mile stretch of unclaimed
floodplain between Serbia and Croatia. Many of the same ideologically
aligned names — Balaji Srinivasan, Peter Thiel, Marc Andreessen, Friedman —
recur as financial backers; Patrik Schumacher, principal of Zaha Hadid
Architects and a critic of public housing, is behind several of their urban
(or metaversal) designs.
https://www.nytimes.com/2024/08/28/magazine/prospera-honduras-crypto.html
------------------------------
Date: Sat, 14 Sep 2024 10:59:58 -0400
From: Monty Solomon <monty@roscom.com>
Subject: ``It just exploded.'' Springfield woman claims she never
meant to spark false rumors about Haitians (NBC News)
``It just exploded.'' Springfield woman claims she never meant to spark
false rumors about Haitians.
The woman behind an early Facebook post that helped spark baseless rumors
about Haitians eating pets told NBC News that she feels for the immigrant
community.
https://www.nbcnews.com/news/us-news/-just-exploded-springfield-woman-says-never-meant-spark-rumors-haitian-rcna171099
[The old World War Two adage: A Slip of the Lip Can Sink a Ship. This
rumor may have cost some would-be roomers a place to live. PGN]
------------------------------
Date: Mon, 9 Sep 2024 00:49:36 -0400
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: Re: Feds sue Georgia Tech for lying bigly about computer security
(Northrup, RISKS-34.44)
If the "hired-gun outsider" declares there's not a reason for 'ssh' to be
available (because they're applying rules crafted for Windows hosts), does
that make it true?
I can think of a lot of good reasons to disable ssh in an enterprise
environment that is primarily Windows.
With containerization, there isn't really anything to ssh to, and if the
org is virtualizing under Hyper-V, you can get console access (Hyper-V remote
manager, unless they renamed it again). SSH would be redundant and the lack
of specialists who can validate and secure the config would create a
vulnerability.
Also, unless the org has enough Linux depth to bind NSS to Kerberos, you end
up with dual authority for logins, exposing another vulnerability. The more
systems of record, the more likely housekeeping isn't done.
I know Ansible for Linux is still heavily dependent on ssh, but Puppet, Chef
(Infra) and Salt don't use it. Allowing ssh for an org that uses Chef Infra
will tend to have admins one-shot servers via knife-ssh rather than
updating the cookbooks and having the servers update automatically. This
contributes to config drift, and config drift is another vulnerability.
I rarely use ssh anymore because the enterprise has moved away from it.
As to having systems admins set or participate in setting security policy:
maybe. I know a little biology, but I don't feel it's a good idea for me to
tell the doc how to set my leg; after all, I'm the one who broke it.
------------------------------
Date: Mon, 9 Sep 2024 12:34:31 -0400
From: Dylan Northrup <northrup@gmail.com>
Subject: Re: Feds sue Georgia Tech for lying bigly about computer
security (Kilby, RISKS-34.45)
I can also think of a lot of good reasons. None of them are "because it's
on this security checklist" and all of them are related to the
circumstances of the specific host (services running, environment, data
sensitivity, etc). Things the system admin responsible for maintaining that
host is in the best position to know.
I commend to you the M*A*S*H episode "Morale Victory". Operating on a
patient with a leg and hand injury, Dr. Winchester makes a heroic effort to
save the leg so the patient can walk again. The hand injury gets less focus
and results in permanent nerve and tendon damage diminishing the use of
three of the patient's fingers. Turns out the patient is a concert pianist
and would have happily amputated the leg if the hand could have been
restored to full functionality.
Security procedures should never be a "one size fits all" set of rules.
Guidelines should be reviewed before they are mandated. Variances should be
resolved before implementation and not reverted post hoc. And, most
importantly, there *must* be a recognition that the security of a system
needs to be balanced with its usability and the concessions *must not*
always be to sacrifice usability for security. There is a lot of nuance
that, in my and many of my peers' experiences, short-term contractors tend
to gloss over in favor of simple and briskly delivered audit reviews. It's
harder to say "yes" and write the justification for a variance than to say
"no" and check a box. To say that differently, it's harder to do complex
things properly than it is to do habitual things quickly. If only the
incentives were aligned so that people who can say "yes" are incentivized
toward doing them properly, but security compliance is a veritable breeding
ground for moral hazards and the person demanding compliance with a policy
rarely pays the cost of that compliance.
------------------------------
Date: Mon, 9 Sep 2024 14:03:45 -0400
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: Re: Feds sue Georgia Tech for lying bigly about computer
security
I can see I've made an error. My apologies at the attempt at humor. I can
see that it was misplaced.
Why did the company hire an outside contractor?
They failed the audit.
Why would any manager listen to any employee of an org complain about the
guy who was hired to fix a problem the org created?
They won't.
The results tend towards more contractors getting hired to fix things that
employees break.
Yelling at the contractor, or at the policy, is misplaced vitriol. There
wouldn't be a contractor setting policy if the org didn't have a problem
that the org created.
------------------------------
Date: Mon, 9 Sep 2024 01:03:09 -0400
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: Re: Standard security policies and variances
If your org's variance process is broken, and you've reported it as such:
Congrats!
You're now paid to do nothing.
Bonus points if you can create a variance deadlock for which neither can be
approved without both being approved at the same time, with each variance
going to different process owners.
If you'd hoped to get work done at work, then my apologies. No work has been
done at work since 1952.
------------------------------
Date: Mon, 9 Sep 2024 11:10:18 -0700
From: Stan Brown <the_stan_brown@fastmail.fm>
Subject: Re: How Navy chiefs conspired to get themselves illegal warship
WiFi (RISKS-34.44)
I had a similar reaction to "cannot be understated". But you missed the
other one, "becomes even more paramount". It's like "unique": a thing can't
be more or less paramount; either it's paramount or it isn't.
> I can see I've made an error. My apologies at the attempt at humor. I can
> see that it was misplaced.
Or mistimed. I am currently recovering from some crud I contracted at a
recent convention. My humor and sarcasm related abilities have been
severely impaired. Also, unless it is overwhelmingly obvious, I tend not to
attribute humor or sarcasm to someone I'm not familiar with when
communicating via text. My apologies for being a bad audience.
> Why did the company hire an outside contractor?
> They failed the audit.
Or they wanted someone "from the outside" to be the bad guy to enforce
policy. Or the outside contractor *is* the auditor.
And when the org did not create the problem?
- In a previous life, my company signed a very lucrative contract with a
bank. My manager was sympathetic to my team's feedback regarding the
worthlessness of some of the policies imposed on us by the bank (through no
fault of our own). He largely shielded us from direct interaction with the
auditors and worked with my team to "provide necessary evidence for
compliance" in a way that did not affect the services we ran, nor detract
from the actual security measures we'd put into place.
- In another previous life as a federal government contractor, there were
many, many policies that had no bearing on my team or its services but that
had to be "applied" in one way or another to insure compliance. This was in
the early 2000s, so it was generally things like "anti-virus on Linux
machines" (before there were Linux virii and commercial AV software that
would fulfill the compliance requirements). In the cases I saw, managers
and PMs were more than willing to go through the mental gymnastics
necessary to do what was needed to check the box, even if it had a
detrimental effect on the service, security, or both. It was easier and
simpler for the project as a whole to take this approach.
- In my current life working at a state university, we have policies
imposed on us by state and federal partners, our vendors, and many
third-party research partners. In none of these cases do the policies
originate from a "problem the org created"; they are imposed by third
parties for primarily/exclusively legal (aka "compliance") reasons and not
technical ones.
> The results tend towards more contractors getting hired to fix things that
> employees break.
Or auditing software getting installed to validate compliance with the
owners of the auditing software being in the "moral hazard"ous situation of
generating the reports, but not being responsible for running both the
service that is alerting *and* finding a way for that service to comply
with the imposed policy.
> Yelling at the contractor, or at the policy, is misplaced vitriol.
> There wouldn't be a contractor setting policy if the org didn't have a
> problem that the org created.
I believe I've presented three counterexamples above.
------------------------------
Date: Tue, 10 Sep 2024 10:49:51 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: Former Tesla Autopilot Head And Ex-OpenAI Researcher Says
'Programming Is Changing So Fast' That He Cannot Think Of Going Back To
Coding Without AI (RISKS-34.44)
The article may be found here:
https://www.benzinga.com/news/24/08/40540066/former-tesla-autopilot-head-and-ex-openai-researcher-says-programming-is-changing-so-fast-that-he-ca
------------------------------
Date: Thu, 12 Sep 2024 09:20:36 +0200
From: djc <djc@resiak.org>
Subject: Re: Moscow's Spies Were Stealing U.S. Tech, Until
the FBI Started a Sabotage Campaign (Shapir, RISKS-34.44)
Wikipedia presents this as fact, and it is fact, not mere legend. The chip
was developed at DEC's Hudson, Massachusetts microchip lab -- I worked in
that building -- and I've seen a photo of it with the legendary trolling.
When I worked in the 1990s as a consultant for DEC in Europe, Eastern
European microelectronics engineers described to me the techniques they used
to try to reverse-engineer DEC's chips beginning with MicroVAX chips that
they lifted from black-market systems. (One technique was progressive
ablation of layers of the chip.) They weren't very successful with that,
but they succeeded in developing new instantiations of the VAX architecture in
discrete components, working from paper specifications and bootlegged
systems. I saw some.
Those guys *loved* their VAXes and loved talking about what they had done.
------------------------------
Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)
The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
subscribe and unsubscribe:
http://mls.csl.sri.com/mailman/listinfo/risks
=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line that
includes the string `notsp'. Otherwise your message may not be read.
*** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored. Instead, use an alternative
address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
copyright policy, etc.) has moved to the ftp.sri.com site:
<risksinfo.html>.
*** Contributors are assumed to have read the full info file for guidelines!
=> OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay Marshall's
delightfully searchable html archive at newcastle:
http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
Also, ftp://ftp.sri.com/risks for the current volume/previous directories
or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
If none of those work for you, the most recent issue is always at
http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-34.00
ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
*** NOTE: If a cited URL fails, we do not try to update them. Try
browsing on the keywords in the subject line or cited article leads.
Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
<http://www.acm.org/joinacm1>
------------------------------
End of RISKS-FORUM Digest 34.45
************************