Comments on “Spurn artificial ideology”

Viral ideologies

SusanC 2023-02-14

But … the “AI is an existential risk” trope looks like a viral ideology too.

  1. It’s a conspiracy theory, at least in some variants.
    Those variants postulate that our current political problems are due to some malign outside force … not the Russians this time, but instead satanic AIs run by the social media companies.

  2. It recycles some rather dubious Christian tropes:
    a) Satan. Here reimagined as a satanic AI.
    b) The apocalypse. Here, we immanentize the eschaton by imagining some final war between humans and the satanic AI.

So, if we’re being skeptical about ideologies, we ought to be skeptical about this conflux of ideas, too.

Political neutrality

David Beers 2023-02-18

Interesting piece up until this point.

“Political neutrality” isn’t really a coherent concept on which to base policy, is it? This text and its recommendations are intensely political. You’ve taken a side on a controversial and consequential topic over which there is intense debate: whether technology should be controlled or be developed freely by whoever can obtain the resources to develop it.

Suggesting political neutrality for corporate entities seems particularly incoherent since corporations are explicitly constructed by ideology. The corporate version of “political neutrality” would be that the unfettered market is a neutral arbiter of what is right and that whatever corporate policies are adopted to maximize profits must be accepted in the interest of political neutrality. Again, this is an ideology you are explicitly opposing in this piece.

Neutrality as the ideology of Deep Learning

David Beers 2023-02-18

Sorry to pile on about this “political neutrality” issue, but I think it’s a critical point in your argument. For one thing, LLM methodology is precisely an attempt to generate an optimal arrangement of tokens, based on all available data, using a neutral statistical methodology. In its purest form, the resulting trained model has not been “influenced” by any theoretical priors. It’s not polluted by a theory of mind or a model of the physical world, to say nothing of political ideologies.

I suggest that, rather than invoking neutrality, you stick to refuting the argument AI proponents make: that their models are the pinnacle of neutrality because their inferences are untainted by the ideological preconceptions of their makers, who therefore cannot be held accountable for their behavior.

Regulation and ideology

David Chapman 2023-02-18

“You’ve taken a side on a controversial and consequential topic over which there is intense debate: whether technology should be controlled or be developed freely by whoever can obtain the resources to develop it.”

There’s some debate over whether large models should be restricted to large institutions. I haven’t taken a position on that. I’ve suggested that the technology should be regulated, but have not specified how; I have not suggested restricting it to large labs, and I say somewhere explicitly that I don’t have an opinion about that either way. (Uh… I just looked, and that explicit statement is in “Are language models scary?”, which isn’t up yet.)

Technologies that aren’t regulated at all are quite rare. You can’t sell home tools without conforming to a pile of regulations (and rightly so). Regulations can impede progress, and maybe the non-regulation of most software is one reason it’s progressed a lot. However, regulations are also often worth the cost.

I think getting regulations for AI right will be difficult, but important and unavoidable.

“Suggesting political neutrality for corporate entities seems particularly incoherent since corporations are explicitly constructed by ideology. The corporate version of ‘political neutrality’ would be that the unfettered market is a neutral arbiter of what is right and that whatever corporate policies are adopted to maximize profits must be accepted in the interest of political neutrality.”

That’s not what I was suggesting. I’m suggesting that a company adopt a policy of not taking political stands on issues unrelated to its business, and of prohibiting employees from engaging in political activism while at work.

“refuting the argument AI proponents make: that their models are the pinnacle of neutrality because their inferences are untainted by the ideological preconceptions of their makers, who therefore cannot be held accountable for their behavior”

This isn’t a central point for me, but I do cover it in “Gradient Dissent,” where I discuss Alexander Campolo and Kate Crawford’s “Enchanted Determinism: Power without Responsibility in Artificial Intelligence,” which is excellent on this.
