If we must have AI, we should replace “neural” methods with simpler, cheaper, and safer alternatives.
The biggest worry in most AI doom scenarios is AIs that are deceptive, incomprehensible, error-prone, and that behave differently, and worse, once loosed on the world.
That is precisely the kind of AI we’ve got.
This is bad, and needs fixing.
“Artificial intelligence” is a vague term for “software that does something it’s surprising for software to be able to do.” Current text and image generation systems fit that definition. However, calling them “neural” or “AI” puts them in a special category, one in which ordinary software engineering considerations may seem not to apply. I think that’s a mistake.
Reframing them as “expensive, poorly designed, buggy software rushed prematurely to market” dispels the mystical aura.
The lesson of “Forcing a brick airplane to fly” was that spending tens of billions of dollars on an ill-considered software development project as a PR stunt can get you amazing functionality, at the price of reliability. I expect that if you spend tens of billions of dollars to develop similar capabilities while aiming for scientific understanding and engineering quality control, you can get that instead.
Software history is replete with inferior technologies achieving “lock-in” by accident or brute force. Such dominance typically lasts only a decade or two.
I find it probable that neural networks and GPTs will be superseded by better, quite different techniques, perhaps within a decade. I offer no proposal for what those might be. It’s reasonable, then, to object to the whole of Gradient Dissent with “there is no alternative.” That is true; but it may be only because no serious attempt has been made to find one.
Some single method may prove easier to apply, more reliable, and more understandable, while also making more efficient use of both data and hardware. Alternatively, a uniform “learning” technology that bypasses the hard work necessary to understand and solve particular problems may remain a pipe dream for the foreseeable future.
Instead, adequate algorithmic understanding of particular task domains may lead to the replacement of neural networks with diverse, better technologies. Those would include a statistical component, but might be primarily conventional software. In any case, they should result in systems that are much easier to understand and control. Such alternatives might be equally or more powerful; orders of magnitude more efficient; amenable to conventional engineering methods; and more reliable.
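To make that concrete, here is a minimal sketch, assuming a toy spam-filtering domain, of what “primarily conventional software with a statistical component” might look like. Explicit rules handle the cases we understand outright; a tiny naive Bayes model, whose parameters are plain word counts open to inspection, handles the gray area. Everything here, names and rules alike, is hypothetical illustration, not a proposal.

```python
# A minimal sketch (all names and rules hypothetical) of a system that is
# mostly conventional software, with a small, inspectable statistical part.

import math
from collections import Counter

# Conventional component: explicit, auditable rules an engineer can read,
# test, and debug with ordinary methods.
BLOCKED_PHRASES = {"wire transfer", "act now", "claim your prize"}

def violates_rules(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

# Statistical component: a tiny naive Bayes model. Its "parameters" are word
# counts that can be printed and sanity-checked, unlike neural-net weights.
class NaiveBayesFilter:
    def __init__(self) -> None:
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, message: str, label: str) -> None:
        for word in message.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def spam_log_odds(self, message: str) -> float:
        log_odds = 0.0
        for word in message.lower().split():
            # Laplace smoothing: unseen words don't zero out the estimate.
            p_spam = (self.counts["spam"][word] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][word] + 1) / (self.totals["ham"] + 2)
            log_odds += math.log(p_spam) - math.log(p_ham)
        return log_odds

def classify(message: str, model: NaiveBayesFilter) -> str:
    # Deterministic rules take precedence; statistics decide the rest.
    if violates_rules(message):
        return "spam"
    return "spam" if model.spam_log_odds(message) > 0.0 else "ham"

model = NaiveBayesFilter()
model.train("claim your free prize now", "spam")
model.train("meeting notes attached, see you tomorrow", "ham")
print(classify("free prize inside", model))            # statistical path
print(classify("please wire transfer the funds", model))  # rule path
```

The point is not that naive Bayes is the answer; it is that every part of such a system can be read, tested, and debugged with ordinary engineering methods, which is exactly what end-to-end neural networks preclude.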
“AI” shades into “advanced computational systems,” which may be quite different from any current technology. We cannot accurately anticipate what those may be capable of. They may have large beneficial or harmful effects.
It would be wise, in the future, to pay greater attention to potential risks than we have done with backprop-based AI.