Databases of personal information collected via internet surveillance are a main resource for harmful AI. Eliminating them will alleviate multiple major risks. Technical and political approaches are both feasible.
We should turn off Mooglebook AI—although it will certainly resist. How?
For starters: Ending digital surveillance is probably the most feasible, effective, and urgent AI safety measure. Advertising technology companies and hostile governments record practically everything you do, and this must stop.
There are some small benefits in personally-targeted advertising, as internet advertising advocates argue. Those are outweighed, in my opinion, even by current harms and risks. The benefits are dwarfed by much greater future risks from exploitation of personal information databases, whether by individual humans, groups, current boring AI, or future Scary AI.
We’ve long heard that pervasive surveillance is inevitable, and we have to accept a post-privacy world. I never believed that, and the tide seems to be turning. It is both politically and technically feasible to end the data collection, and to destroy existing databases. The EU is increasingly serious about forcing change by legislation, and is making meaningful headway. US government agencies are making at least token moves in the right direction.1 Apple has recently implemented technical privacy improvements that have significantly harmed the internet surveillance industry financially, and it seems ready to do more.
There are compelling and urgent reasons to end internet surveillance that have nothing to do with AI. It’s a massive national security risk, apart from anything else. Foreign adversaries have access to extensive personal information databases compiled by US corporations, which could help target military, political, and business leaders with individualized propaganda or blackmail; plus real-time location data that could be used for intimidation or assassination.2
Those databases are already also a main resource for actually-existing, effectively hostile, potentially catastrophic AI. They might get exploited even more powerfully by a future Scary AI. AI ethics and safety organizations should put their weight behind efforts to destroy them.
Additionally, the advertising technology companies (“Mooglebook”) supply most of the funding for AI research. Perhaps the best short-term way to stall the development of Scary AI is to make actually-existing AI much less useful to those companies, by prohibiting and technically preventing their use of personal information.
What you can do
Everyone can significantly decrease their personal vulnerability with simple technical measures. You can get 80% privacy protection with an hour’s work. (That isn’t perfect, but perfect is impossible, and 95% requires fighting a constant arms race against the bad guys.)
You can install a blocker app on each device that connects to the web. Blockers try to stop web sites from snooping on you, and they mostly succeed. They also stop your web browser from showing you ads, which means the advertising technology companies don’t get paid. If most people ran blockers, that industry might collapse, or at least they’d have to switch to placing ads without using personal data.3
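The core mechanism is simple enough to sketch: a blocker compares the domain of each outgoing request against a blocklist of known tracker hosts, and refuses the connection on a match. The sketch below is illustrative only; the helper name is hypothetical and the blocklist entries are just well-known ad-tech domains used as examples, not taken from any particular blocker.

```python
# Minimal sketch of how a blocker decides whether to let a request through.
# Real blockers use large community-maintained lists and richer rules;
# this shows only the domain-matching idea.

BLOCKLIST = {
    "doubleclick.net",       # ad-serving domain
    "google-analytics.com",  # analytics/tracking
    "facebook.net",          # tracking SDK host
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the blocklist."""
    parts = hostname.lower().split(".")
    # Check "ads.doubleclick.net", then "doubleclick.net" (never the bare TLD).
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return True
    return False

print(is_blocked("ads.doubleclick.net"))  # True: subdomain of a blocked domain
print(is_blocked("example.org"))          # False: not on the list
```

Because matching covers parent domains, a tracker can't evade the list just by serving from a new subdomain.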
I haven’t found a reliable, brief, easy-to-follow guide to internet privacy for non-technical people. I can tell you what I use; these are mainstream recommendations by security professionals as well.4
I use only Apple devices. The company’s privacy track record is imperfect, but much better than that of Google or Microsoft, who make Android and Windows, which both collect extensive personal data. I recommend Wipr as a blocker for Safari; it works on both iDevices and Macs. I avoid Chrome; it’s designed to spy on you as much as it can get away with.5 On Apple devices, I also recommend enabling Mail Privacy Protection and Private Relay. All this will take less than an hour to set up. It’s not bulletproof, but it’s much better than nothing.
If you want to go further, more detailed online guides are Consumer Reports’ “Security Planner,” Wirecutter’s “Every Step to Simple Online Security,” Narwhal Academy’s Zebra Crossing, and Privacy Guides.
Everyone can explain to friends and family the reasons to block internet surveillance, and help them do it. You can mention that installing a blocker is explicitly recommended by the FBI as a way to protect against cybercriminals.6
Everyone can discuss internet privacy on social media. Make it clear that you find the surveillance economy unacceptable, and advocate legislating it out of existence.
The Electronic Frontier Foundation (EFF), an American internet privacy advocacy non-profit, has a web page of actions you can take, act.eff.org/. It also coordinates a network of community groups; you could get involved in your local one. The European Digital Rights organization (EDRi) has a page of simple ways you can influence EU privacy legislation.7
Computer professionals can additionally cite your technical expertise when expressing your opinions about internet privacy.
You can advocate within your organization for it to collect as little user information as possible. “Do we really need to track this? How long do we need to retain the data? Can we delete records after a week? What legal liabilities does our collection expose us to?”
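Data minimization can often be a one-line scheduled query rather than a big engineering project. The sketch below makes that concrete with SQLite; the table name, schema, and seven-day window are hypothetical, chosen only to match the “delete records after a week” question above.

```python
# Sketch of a data-retention job: delete user event records older than a
# week. Table and column names are hypothetical; the point is that a
# retention policy can be a single statement run on a schedule.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, ts TEXT)")
conn.execute("INSERT INTO events VALUES ('a', datetime('now', '-10 days'))")
conn.execute("INSERT INTO events VALUES ('b', datetime('now', '-1 day'))")

# The retention policy itself: one statement, run daily.
conn.execute("DELETE FROM events WHERE ts < datetime('now', '-7 days')")
conn.commit()

remaining = conn.execute("SELECT user_id FROM events").fetchall()
print(remaining)  # only the recent record survives: [('b',)]
```

Data that has been deleted can’t be breached, subpoenaed, or sold, which is the legal-liability point in miniature.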
You could also consider working on privacy technologies, whether as a career, by participating in an open source project, or by volunteering with the EFF’s software development efforts.
You could work with technical writers to produce a Very Simple Guide To Internet Privacy web site, explaining how to stop most surveillance with minimal work.8
AI ethics and safety organizations can include stopping internet surveillance explicitly in your mission statements and public messaging. You could collaborate with EFF and EDRi to create joint statements and campaigns.
Governments can legislate against surveillance and for internet privacy. You can enforce existing privacy legislation vigorously; many companies ignore the rules because they believe they can get away with it, and often they are right.
Funders can support both technical and advocacy approaches. Open source privacy projects, such as blockers, need funds to pay software developers. Advocacy organizations need funds to pay staff and for media machinery.
- 1. For example, “FTC Sues Kochava for Selling Data that Tracks People at Reproductive Health Clinics, Places of Worship, and Other Sensitive Locations,” on the FTC web site, August 29, 2022. “The FTC alleges that Kochava fails to adequately protect its data from public exposure. Until at least June 2022, Kochava allowed anyone with little effort to obtain a large sample of sensitive data and use it without restriction. The data sample the FTC examined included precise, timestamped location data collected from more than 61 million unique mobile devices in the previous week.”
- 2. A 2020 report of the US National Intelligence Council, partially declassified in October 2022, finds that “China and Russia are improving their ability to analyze and manipulate large quantities of personal information, allowing them to more effectively influence or coerce targets in the United States.” “Beijing’s commercial access to personal data of other countries’ citizens, along with AI-driven analytics, will enable it to automate the identification of individuals and groups beyond China’s borders to target with propaganda or censorship.”
- 3. Advocates of advertising technology point out that it’s the main income for free web sites that everyone benefits from. Opponents suggest those could revert to the earlier practice of running ads without personal targeting. Advocates point out that those are less effective, so they might bring in less money. Opponents suggest that subscriptions are a better business model, and most sites worth visiting could and would switch to it if advertising were not the easy alternative. These all seem valid points, about which reasonable people can disagree. In my opinion, the risks of surveillance greatly outweigh its benefits; but that is hard to quantify.
- 4. Threats and products can both change, so these early-2023 recommendations may be obsolete by the time you read this. They’re probably good for a few years, though.
- 5. Zack Doffman, “Why You Shouldn’t Use Google Chrome After New Privacy Disclosure,” Forbes, Mar 20, 2021.
- 6. FBI Alert Number I-122122-PSA, December 21, 2022.
- 7. edri.org/take-action/our-campaigns/.
- 8. The best equivalents I could find, cited above, contain so many recommendations that they’re probably overwhelming for non-technical people. Others were disguised advertisements for particular products. The guide should be produced by a non-profit, so its recommendations are unbiased. Its web design and writing style should communicate “This is easy and won’t take long—you can do it! Let’s take it step by step,” so readers actually follow its advice.