[19448] in Privacy_Forum
[ PRIVACY Forum ] Vivid example of how Google Search AI overviews
daemon@ATHENA.MIT.EDU (PRIVACY Forum mailing list)
Tue Oct 22 11:19:13 2024
Date: Tue, 22 Oct 2024 08:11:05 -0700
To: privacy-dist@vortex.com
Content-Disposition: inline
MIME-Version: 1.0
Message-ID: <mailman.916.1729609866.1854.privacy@vortex.com>
From: PRIVACY Forum mailing list <privacy@vortex.com>
Reply-To: PRIVACY Forum mailing list <privacy@vortex.com>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Errors-To: privacy-bounces+privacy-forum=mit.edu@vortex.com
Vivid example of how Google Search AI overviews give conflicting,
worse than useless answers
Here's a vivid example of how badly Google Search AI Overviews can
screw up a simple question. I asked it twice, with almost identical
wording, whether Vasquez Rocks is in the TMZ -- the Hollywood Thirty
Mile Zone. In one case it definitively said yes. In the other, it just
as definitively said no! Note that after saying no in the main answer,
the added text below contradicts that. It can't even keep its answer
straight in a single response!
And this is a harmless question. What happens when someone asks about
something really important and gets the same treatment? AI Overviews
are worse than useless because you CANNOT TRUST THE ANSWERS! All the
Google disclaimers in the galaxy won't fix that.
Screenshots: https://mastodon.laurenweinstein.org/@lauren/113351685382748787
- - -
--Lauren--
Lauren Weinstein
lauren@vortex.com (https://www.vortex.com/lauren)
Lauren's Blog: https://lauren.vortex.com
Mastodon: https://mastodon.laurenweinstein.org/@lauren
Founder: Network Neutrality Squad: https://www.nnsquad.org
PRIVACY Forum: https://www.vortex.com/privacy-info
Co-Founder: People For Internet Responsibility
_______________________________________________
privacy mailing list
https://lists.vortex.com/mailman/listinfo/privacy