

[ PRIVACY Forum ] Script of my national radio report yesterday on
the serious new Android "Pixnapping" exploit

daemon@ATHENA.MIT.EDU (Lauren Weinstein)
Tue Nov 18 11:00:35 2025

Date: Tue, 18 Nov 2025 07:53:26 -0800
From: Lauren Weinstein <lauren@vortex.com>
To: privacy-dist@vortex.com
Message-ID: <20251118155326.GA3599@vortex.com>
Content-Disposition: inline
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Errors-To: privacy-bounces+privacy-forum=mit.edu@vortex.com


This is the script of my national radio report yesterday on the
serious new Android "Pixnapping" exploit, and related discussion of
sideloading and the risks of agentic AI. As always, there may have
been minor wording variations from this script as I presented this
report live on air.

 - - - 

Yep, I really do wish that I could bring more good news, but man,
when the tech world is sliding into darkness ever faster, you still
gotta call them as they are, and things do seem to be going in the
wrong direction painfully fast.

So the latest bulletin item of interest is that researchers have
discovered a new exploit that could be used by malware in a
particularly insidious way. They're calling it Pixnapping, and that
is a very descriptive name. What this technique does is bypass
Google's Android protections and deep-level hardware protections,
and reportedly it could let malware basically read almost anything
on the user's screen, including sensitive personal information and
one-time codes from apps like Google Authenticator and others.
There are timing constraints involved and other details, but
overall it's pretty awful.
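A quick aside on why timing matters for those authenticator codes.
I'm assuming here that part of the constraint is simply that a
standard TOTP code (RFC 6238) is derived from the current 30-second
time step, so anything reading one off the screen has to finish
before the step rolls over. Here's a minimal Kotlin sketch of
ordinary TOTP generation -- nothing from the exploit itself, and the
hard-coded secret is purely hypothetical:

  import java.nio.ByteBuffer
  import javax.crypto.Mac
  import javax.crypto.spec.SecretKeySpec

  // Standard RFC 6238 TOTP: HMAC-SHA1 over the current time step,
  // dynamically truncated (RFC 4226) to a short numeric code.
  fun totp(secret: ByteArray, unixSeconds: Long,
           stepSeconds: Long = 30, digits: Int = 6): String {
      val counter = unixSeconds / stepSeconds
      val msg = ByteBuffer.allocate(8).putLong(counter).array()
      val mac = Mac.getInstance("HmacSHA1")
      mac.init(SecretKeySpec(secret, "HmacSHA1"))
      val hash = mac.doFinal(msg)
      val offset = hash.last().toInt() and 0x0f
      val binary = ((hash[offset].toInt() and 0x7f) shl 24) or
                   ((hash[offset + 1].toInt() and 0xff) shl 16) or
                   ((hash[offset + 2].toInt() and 0xff) shl 8) or
                   (hash[offset + 3].toInt() and 0xff)
      var modulus = 1
      repeat(digits) { modulus *= 10 }
      return (binary % modulus).toString().padStart(digits, '0')
  }

  fun main() {
      // Hypothetical raw secret; real authenticator secrets are
      // provisioned as base32 strings.
      val secret = "12345678901234567890".toByteArray()
      val now = System.currentTimeMillis() / 1000
      println("Current 30-second code: ${totp(secret, now)}")
  }

The point being: a code like this only means anything for about half
a minute, which is why an attack that leaks screen pixels slowly has
to race the clock.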

And what makes it even worse is that this not only appears to apply to
pretty much any relatively modern Android phone, tablet, or other
Android device, but there isn't a fix for it yet, and getting one that
actually works -- keeping in mind the hardware involvement -- looks
like a nontrivial matter. And that assumes you have Android devices
that are still getting updates, which LOTS of people with even
relatively new devices don't receive.

Apparently Google developed what they hoped was a fix for this, but
the original researchers turned around and quickly found a way to
bypass the fix. So that's pretty depressing.

Now there is some relatively good news -- such as it is. Currently
this exploit apparently hasn't been seen in the wild, only in the
hands of the researchers who discovered it, and they reportedly say
they won't release the code for the exploit until there is a fix. But
of course, once they release the code it could show up pretty much
anywhere and people with unpatched devices could be vulnerable.

Is there anything users can do right now, or if they have devices
that never get the fix? Well yeah, it's pretty much the standard
advice, which is to try to stay away from dodgy apps and suspicious
websites, and in particular don't install apps from unreliable
sources that might ultimately carry this exploit or other exploits
as a payload.

I'll mention in passing that Google has been talking about blocking
users (except in a restricted set of cases) from "sideloading" apps
-- that is, installing apps directly without going through their
Play Store or other official sites, for example. Apparently they've
backed off a little bit on this -- the details aren't clear yet --
because while sideloading can be a vector for malware, it's also
important for people to be able to install perfectly safe and legal
apps that perhaps Google or some government somewhere doesn't
approve of and so aren't in the Play Store, etc. Apple's iOS (e.g.
on iPhones) has always been very restrictive in this context, and
that's been a real problem for many completely legit users over the
years.
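As a practical footnote to the sideloading point: Android does keep
a record of which installer (if any) put a given app on a device,
and it can be queried through the standard PackageManager APIs.
Here's a minimal Kotlin sketch -- the app/context plumbing around it
is assumed, and "com.android.vending" is the Play Store's package
name:

  import android.content.Context
  import android.os.Build

  // Report where a given package came from. On Android 11+ (API 30)
  // getInstallSourceInfo() is the supported call; older releases
  // fall back to the deprecated getInstallerPackageName(). This
  // throws if the package isn't installed.
  fun describeInstaller(context: Context, packageName: String): String {
      val pm = context.packageManager
      val installer = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
          pm.getInstallSourceInfo(packageName).installingPackageName
      } else {
          @Suppress("DEPRECATION")
          pm.getInstallerPackageName(packageName)
      }
      return when (installer) {
          "com.android.vending" ->
              "$packageName was installed from the Play Store"
          null ->
              "$packageName was sideloaded or preinstalled (no recorded installer)"
          else ->
              "$packageName was installed by $installer"
      }
  }

It's a read-only check, of course -- nothing about it blocks or
allows anything -- but it's a handy way to see how the apps already
on a device got there.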

And there's something else important I want to add here. We need to
be thinking about this new exploit, and malware, and bugs in general
in a new way, due to Google and other firms pushing very hard --
Google is really at it for this holiday season -- to get people to
use their so-called "agentic" AI systems to browse the web for you,
make purchase decisions for you, and take all sorts of other
potentially risky actions on YOUR behalf.

Because it's not difficult to imagine how agentic AI and the like
could actually "supercharge" malware and bugs that could manipulate
the AI in a wide range of very problematic ways, potentially causing
users a lot of grief, with no real confidence that the AI firms are
willing to take full responsibility for any damage caused.

So in terms of what can go wrong with this tech, the reality quite
possibly is that much worse -- unfortunately -- is still to come.

 - - - 


 - - -
--Lauren--
Lauren Weinstein 
lauren@vortex.com (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Signal: By request on need to know basis
Founder: Network Neutrality Squad: https://www.nnsquad.org
         PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
privacy mailing list
https://lists.vortex.com/mailman/listinfo/privacy
