Comments on “How should we evaluate progress in AI?”
Machine learning that matters
Interesting post, summarising many of my own perceptions. I especially like your call for the researcher to maintain contact with the messy real world. This reminds me of Kiri Wagstaff’s “Machine Learning that Matters” at ICML 2012.
Interpretability, transparency, explanations
I would say that we are still inundated with work that only uses unreliable proxies of real-world usefulness. It is very easy to build a model, find some data, compute some numbers, and run a variety of statistical tests to claim success. It is far harder to explain what a model has actually learned to do, and to demonstrate whether it is even useful to anyone. My research in music information retrieval (MIR) echoes this and what you have said: B. L. Sturm, “Revisiting priorities: Improving MIR evaluation practices,” in Proc. ISMIR, 2016.
However, there are efforts advocating for interpretability, transparency, and explanations in ML: see the Interpretable ML Symposium; Fairness, Accountability, and Transparency in Machine Learning; and HORSE. And there is pressure from governments to ensure the fair application of such algorithms in society, e.g., the recent UK Parliament Science and Technology Committee’s call for written evidence on “Algorithms in decision making”.
Other papers of interest:
D. J. Hand, “Classifier technology and the illusion of progress,” Statistical Science, vol. 21, no. 1, pp. 1–15, 2006.
C. Drummond and N. Japkowicz, “Warning: Statistical benchmarking is addictive. Kicking the habit in machine learning,” J. Experimental Theoretical Artificial Intell., vol. 22, pp. 67–80, 2010.
B. L. Sturm, “A simple method to determine if a music information retrieval system is a ‘horse’,” IEEE Trans. Multimedia, vol. 16, no. 6, pp. 1636–1644, 2014.
This Is The Best Portrait Of "The AI Mangle" I've Ever Read
This is a fantastic portrait of real work within the field of study historically called “Artificial Intelligence”, and I feel that it does a wonderful job portraying both the positive and negative aspects of work in this field.
You have an insider’s grasp of the stories and details, and yet time has given you an outsider’s clarity of detached perspective :-)
I think it is extremely useful to notice that, of the six dimensions of practice in AI that you point to (Science, Engineering, Mathematics, Philosophy, Design, and Spectacle), none is really essentially about “AI”.
I think what you’re doing, at a deeper level, is talking about how modern science works in real life in nearly every field of study with aspirations of technical correctness, but doing so through the lens of a field you know very well.
Based on my experience in several different fields, what you’re saying is quite similar to how basically ALL of them work in real life. It is very common for people to start out wanting to do research that will “cure cystic fibrosis” and end up obsessing over yeast genetics for years. With just a little bit of squinting, this dynamic appears quite similar to the toy models produced as groundbreaking GOFAI projects, like Shrdlu, Copycat, Sonya, and Eurisko.
To aim for more generality: I have a long running hobby interest in producing a “cliology of science”.
Cliology in general would function intellectually as a quantitative, predictive, and manipulative theory of history. It would compressively explain and largely “post-dict” the past, and it would “pre-dict” the future in a probabilistic way, helping people build forecasts that give theoretical reasons for empirically measuring those determinants of the future that are hardest to explain compressively and most in need of simply being measured and taken as historical givens. Naturally this project is probably impossible in general, but as a research prompt it offers a glorious vision that leads one into many “rabbit holes” of learning :-)
A cliology of science would merely attempt the restricted and hopefully easier task of developing a theory like this that works specifically on the “history of science”. This narrower project is also probably impossible, but it is likely to be literally the hardest part of general cliology, and so figuring out why it is impossible might shed light on the nature of history itself, potentially accomplishing something that would seem, as you say, “snazzy” ;-)
Anyway, this is background, raised here to explain why, if you haven’t heard of him, Bruno Latour might be relevant to your interests. His book “Science in Action” is on my list. Searching now, I see another probably relevant book, with a full PDF online: Pickering’s “The Mangle of Practice”.
I’ve not read much of Latour directly (because life is finite), but I have read summaries and have an “imaginary Latour” in my head whose central idea seems to be the thing you have just described about AI work over the last 50 years!
Basically there is this thing Latour calls “the mangle” that happens in the workflow of serious knowledge workers. They have tools and techniques, and they also have theory. Their job is to use the tools and techniques to show a thing the theory says should be possible that no one has yet done before. Every small empirical failure in the lab has two possible interpretations: (1) the investigator is incompetent and applying their tools and techniques with insufficient diligence or funding, or (2) the theory is wrong.
Showing the thing is taken as incremental confirmation both of the theory (especially the theory’s scope) and of the investigator’s adequacy as someone who has mastered the tools and techniques of their field.
From the perspective of the lay public, investigatory adequacy is so fundamentally assumed that it almost never arises in discourse in any thoughtful way. To raise this issue well is basically a kind of heresy, attacking the integrity of a Real Scientist and implicitly potentially attacking the ecclesiastic authority of Scientists In General.
From the perspective of a bureaucrat making fast decisions in a funding agency the expectation is that current competence correlates with past success at either an institutional or personal level. A great proposal might screen off prestige indicators, but serious prestige failures for a great proposal would probably cause the proposal to be read as a sophisticated lie. Then they basically fire and forget. A grant that produces no positive result weeds out the investigator as a reasonable winner of future grants. (Implications for the incentives to game this system are left as an exercise for the reader.)
However, from the perspective of the individual scientist, the question posed by the mangle is a serious and urgent and live thing.
Early in one’s career it arises very regularly, and often in the negative, as one discovers more techniques that genuinely haven’t been mastered yet, and are needed to accomplish a project. The Western blot keeps not coming out, and maybe you ask someone for help who knows the trick, or maybe you just keep debugging it yourself… You have trouble figuring out just the right u-substitution to actually find the integral of a complicated expression…
The beginning of a general mathematical model here would be the “hope function”, which has been wonderfully written up by Gwern. There is a chest with N drawers (techniques); an object (theoretical confirmation) is either in one of them or in none; and as drawers are opened and found empty, the probability goes up both for the remaining drawers and for the possibility that the object is elsewhere entirely. Extending this to science, and specifically to the mangle, the drawers are investigatory techniques, and the investigator’s skill is a parameter that probabilistically determines the chance of a drawer seeming empty while actually being full.
When all drawers have been exhausted and no theoretically predicted observation has been seen, this suggests going back and double-checking drawers a second or third or fourth time.
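To make that extension concrete, here is a minimal sketch of the updating rule in Python. Everything in it (the drawer count, the prior, the `skill` parameter, the `hope_update` helper) is my own illustrative invention rather than anything from Gwern’s write-up:

```python
# A toy "hope function" with an investigator-skill parameter.
# Assumptions: the object is in the chest with prior probability
# p_in_chest, uniformly over n_drawers; a search of the drawer that
# actually holds it succeeds with probability `skill` (otherwise the
# drawer "seems empty but is actually full").

def hope_update(n_drawers, p_in_chest, skill, failed_searches):
    """Posterior over "object is in drawer i" vs. "object is elsewhere",
    given a list of drawer indices searched without success."""
    # Hypotheses: drawers 0..n_drawers-1, plus "elsewhere" at the end.
    prior = [p_in_chest / n_drawers] * n_drawers + [1.0 - p_in_chest]

    likelihood = []
    for drawer in range(n_drawers):
        misses = failed_searches.count(drawer)      # searches of this drawer
        likelihood.append((1.0 - skill) ** misses)  # every one of them missed
    likelihood.append(1.0)  # if it is elsewhere, every search fails anyway

    unnormalised = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# Five drawers, 90% sure it is in the chest, a middling investigator,
# and drawer 0 searched twice (the "double-checking" case).
posterior = hope_update(5, 0.9, skill=0.7, failed_searches=[0, 1, 2, 0])
print(posterior[:5], posterior[5])  # per-drawer beliefs, and "it's elsewhere"
```

The lower the skill parameter, the more slowly belief drains out of a drawer with each failed search, which is exactly the “maybe go back and double-check” dynamic.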
In real life, over a real career, investigatory skills presumably go up over time to some peak (and then begin to decay when the researcher leaves the lab). However, with every subsequent “piece of science” that is genuinely new, requiring a new application of a previous skill with additional new demands, the question arises again. Nobel winners rarely win a second prize, because luck is a major component here. Marie Curie got two, but she used essentially the same technique for both: boiling a huge amount of slightly radioactive ore down to its maximally radioactive essence. There turned out to be more than one such essence, hence more than one Nobel was awarded! She was quite a lucky scientist ;-)
I would propose that “a rationality” is similar to a “technique” or a “drawer” here: a tool of clear shape and purpose with limits that make it inherently non-universal in scope.
Then “a meta-rationality” would be similar to a field of study where people gain specific skills, and also a general meta-skill at navigating “the mangle” relative to a shared overarching vision of what the field is even supposed to be aiming for.
The implication is that to understand reality in general it may be necessary to have a tragically duplicative “meta-meta-rationality”.
The cultural diversity of fields is clearly visible, and is, I think, often attributed to reality itself having structure that can be broken into parts and analyzed separately. However, I am personally suspicious of this. It seems likely to me that reality is “one big thing” and that the lines between different fields (that expect mastery of different techniques to demonstrate answers to different questions) are often at least somewhat socially contingent responses to the tragically short lifespans of humans and the complex political realities of scientific funding and scientific self-promotion.
I would suggest that perhaps there is another layer here, a piece of which this essay on “AI” strongly points to: dimensions of variation between fields that might help someone interested in reality itself figure out which fields to be aware of and borrow from in order to achieve the various goals a knowledge worker might have.
Here I come back to your breakdown, which I find fascinating, and want to extend a bit into a list of possible field-level virtues:
- Truth as pursued by Science in general.
- Technical Utility as efficiently deployed by Engineers from various fields.
- Abstract Rigor as embodied most in the field of Mathematics.
- Meaning as verbally clarified and discovered through exploratory Philosophy.
- “Snazziness” as creatively embodied in a process of Design.
- Influence as classically achieved via Spectacle, with the aim of recruiting resources into a grand endeavor.
Now my personal approach to science is descriptivist and opportunistic. My cliology-of-science hobby lets me look at science itself like a bug under a microscope without caring much about it. Also, I like fields that offer me something useful, and I mostly just want to know what they are really like so that I can find useful things faster when a field is relevant to my interests.
However I don’t personally find it meaningful to exhort scientific fields to be different than they actually are.
I might dis a children’s book for being long and boring, or dis it for making children cry. I won’t dis a children’s book for being childish however. That’s essential to the idea of a children’s book! Similarly I’m not going to complain about romance novels being full of unrealistic relationships. Similarly I’m not going to complain about Science Fiction showing an astonishing amount of political agency put in the hands of scientifically literate technicians.
I feel like maybe academic or scholarly fields (to the degree that they don’t get government funding) are like books from different genres. The scholars in them want what they want, and have made their own choices about the pursuit of truth, utility, rigor, meaning, and snazziness. They only have so many “points” to spend on the formation of scholars in that field and tradeoffs are inevitable…
Even spectacle (or lack thereof) I can forgive. Basically the only thing I see as a sound basis for complaining about a field of study would be that it might be funded by taxes collected by governments on pain of punishment, and then spent in a way that fails to benefit (or even hurts) the public from whom the taxes were taken. This isn’t even a thing that a field deals with. Fields span countries, living in the space of the mind.
You could talk about the Canadian Ecology Establishment or the British Physics Community and how each may or may not be funded at the appropriate levels, and in the public mind I’d expect this to be related to their production of ecological spectacles visible to Canadian taxpayers or physics spectacles visible to British taxpayers… but this is going to be even MORE “inside baseball” than the question of how researchers should personally deal with the mangle in their field ;-)
In the meantime, as an aspiring meta-meta-rationalist I think it is helpful, when looking at fields, to separate the tradeoffs that I might abstractly wish it had made (so that it would be useful for me as a sort of idea thief) versus the tradeoffs it actually made (for its own internal productive reasons which may have causes that can be scientifically analyzed).
Your taking a prescriptive interest here sort of makes me wonder… Are you thinking of going back into the AI laboratory with a new theoretical perspective and a new stance towards “the mangle”? That would be pretty awesome :-)
On tools vs. agents, and machine learning vs. statistics
Your comments about dishwashers and demos remind me of the argument I made about tools and agents in my two posts here. IMO, one of the biggest problems with current AI hype is the focus – not in the research literature, but in the popular conversation – on fully autonomous systems that can act without human supervision. This doesn’t even make sense as an overly optimistic take on the direction of the research, since most deep learning (even the really impressive stuff) is focused on mapping inputs to outputs on circumscribed tasks without a full perception-decision-action-result loop, and deep reinforcement learning has not yet had practical successes outside of game playing (and not for lack of trying). Maybe people are responding to AlphaGo and AlphaGo Zero, attributing their successes to generic “deep learning” and ignoring the decision-making element. I don’t know.
The other thing I wanted to say was about this:
“Data science” is, in part, the application of AI (machine learning) methods to messy practical problems. Sometimes that works. I don’t know data science folks well, but my impression is that they find the inexplicability and unreliability of AI methods frustrating. Their perspective is more like that of engineers. And, I hear that they mostly find that well-characterized statistical methods work better in practice than machine learning.
I work as a data scientist, and much of this is a pretty good characterization of what I do and think. But I don’t agree with the conflation here of “AI” and “machine learning,” nor with the assertion about “well-characterized statistical methods.” In practice, a “data scientist” is usually something like an applied statistician who works entirely within the “algorithmic modeling” culture described by Breiman in his “two cultures” paper (which has probably influenced my thinking more than any other single academic paper, BTW). That is, they are concerned with making predictions (or making other productive uses of data), not with interpretation per se; while we care about interpretability a lot, it’s mainly as a means to other ends. We don’t care about what a given coefficient is (estimation), only what it does. This leads to a “whatever works” attitude about model classes, and we will use complicated/fancy model classes with no estimation or hypothesis-testing frameworks as long as they perform well in a way we can statistically validate.
I find terms like “AI” and “machine learning” unhelpful here, as it’s not clear to me where the boundaries are supposed to lie, and it’s in the very nature of my work to use everything from linear regression to fancy deep neural nets depending on my use case. I do find that some of the cutting-edge deep learning stuff is not very useful in practice, but some of it is, and the useful / not-useful lines I tend to draw are not easy to match up to the boundaries of concepts like “machine learning.” And while deep learning methods are very data hungry (as you say, they are much like fuzzy look-up tables of massive size) and costly to create for new tasks, they’ve grown to be mature and reliable engineering components for some very standard, broadly applicable ones like parsing.
Clarifying the two cultures
In a perhaps-odd coincidence, Breiman’s paper was recommended to me by someone else yesterday, and I did a quick skim then. I haven’t read it properly… but I guess I’d say that, so far, I find it unsatisfactory. I would like statistics to be both interpretable and predictive.
Hmmm… reading over my comment again, I think I didn’t really do justice to Breiman’s view (or my own) when I said algorithmic modelers like me don’t care about interpretation for its own sake. The divide I’m trying to get at is more subtle: it’s about whether you assume at the outset that the real phenomenon follows patterns you know how to interpret.
A lot of the statistical work that gets done in the world follows the procedure “perform linear (or logistic) regression no matter what the data set looks like, do hypothesis tests on the coefficients, draw conclusions about the phenomenon from these tests.” This is a bit of a caricature, but honestly most of the social and medical science I read does exactly that. There is sort of a “convenient-world assumption” here where you take a certain kind of easily interpretable pattern (linear relationships), assume all patterns in the data are of that form, and run a pattern-detecting procedure that depends on that assumption. If the assumption is not valid, this can result in missing real patterns but also seeing unreal ones.
What Breiman and I prefer is to try a bunch of models, aiming at predictive performance, and then do interpretation afterwards. When I do start to interpret, I’m not locked in to a single framework and betting everything on its (approximate) truth. Instead I have a bunch of frameworks, and a sense of how much it “costs” in predictive terms to make the assumptions inherent in each one. Sometimes the only model that predicts well is a fancy one I can’t interpret – but that has interpretive value in itself, telling me that my data is really that messy, and that I would be fooling myself if I made a convenient-world assumption. I’m letting reality tell me how easy it is for me to understand, rather than assuming it is easy for me to understand.
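For what it’s worth, here is a minimal sketch of what that workflow looks like with scikit-learn. The dataset and the two model classes are illustrative choices of mine, not a claim about anyone’s actual pipeline:

```python
# Try several model classes, judge them on held-out predictive
# performance, and only then ask what the gap says about the data.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = fetch_california_housing(return_X_y=True)

models = {
    "linear (convenient-world assumption)": LinearRegression(),
    "boosted trees (flexible, harder to read)": GradientBoostingRegressor(),
}

for name, model in models.items():
    # Cross-validated R^2 estimates how much predictive accuracy each
    # set of assumptions costs on this dataset.
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```

If the flexible model wins by a wide margin on held-out data, that gap is roughly the price of the convenient-world assumption for this particular dataset.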
Latour
Bit off topic, but relating to Latour… I read his ‘Why Has Critique Run Out of Steam?’ recently, which I think you would like. He covers some similar territory to you on the dangers of half-digested pomo ideas, and talks about how this kind of critical theory pushes the rich world of real objects into two impoverished categories:
We can summarize, I estimate, 90 percent of the contemporary critical scene by the following series of diagrams that fixate the object at only two positions, what I have called the fact position and the fairy position…
The fairy position is very well known and is used over and over again by many social scientists who associate criticism with antifetishism. The role of the critic is then to show that what the naïve believers are doing with objects is simply a projection of their wishes onto a material entity that does nothing at all by itself…
But, wait, a second salvo is in the offing, and this time it comes from the fact pole. This time it is the poor bloke, again taken aback, whose behavior is now “explained” by the powerful effects of indisputable matters of fact: “You, ordinary fetishists, believe you are free but, in reality, you are acted on by forces you are not conscious of. Look at them, look, you blind idiot” (and here you insert whichever pet facts the social scientists fancy to work with, taking them from economic infrastructure, fields of discourse, social domination, race, class, and gender, maybe throwing in some neurobiology, evolutionary psychology, whatever, provided they act as indisputable facts whose origin, fabrication, mode of development are left unexamined)
There’s a lot more going on in the essay beyond this… not sure I can really summarise it.
David – yes, you are right to
David – yes, you are right to distinguish three cultures/approaches. When it comes to doing science, we do need something the second culture lacks: persistent mechanistic hypotheses that get tested, discussed and approved over time. This is also lacking in the practice of first-culture science, as I lamented here (but see also my exchange with discoursedrome in the notes).
But even here, the second culture has some lessons that ought to be heeded. You write
It wants to discover the data-generating process in the real world (not the shape of the data you’ve got). You are willing to sacrifice some predictive accuracy if there’s reason to think that a less predictive model is more mechanistically accurate.
There is a danger here. One reason that second-culture stuff has been so popular recently is the raw predictive success of the complicated, less interpretable models it embraces. I read that success as evidence for something that always seemed plausible to begin with: outside of physics and chemistry, reality’s “data-generating processes” are vastly more complex than the sorts of things humans tend to come up with off the tops of their heads, when playing the guess-and-check-and-guess-again hypothetico-deductivism game. This is a clue about reality, not a sacrifice of reality in favor of mere “prediction.”
If we’re following this line of thought, we ought to look closer at what these “second-culture models” do. If we do, we will see a striking feature: they generally do not look like models of any data-generating process we would expect in reality. The classic workhorses of data science are models that average together lots of decision trees, like random forests (Breiman’s invention) or their even more formidable sibling, boosted trees. These do fantastically well, but no one thinks reality consists (mechanistically) of decision-tree ensembles. The same goes, of course, for neural nets. Calling these things “models” of the data is actually kind of misleading; it might be better to call them something like “perceptual systems.” No one expects to peer inside an organism’s visual system and “read off” the physics of light and the properties of ecologically common materials, but this is no count against vision. (The fact that this distinction is often of no practical importance to the practicing data scientist perhaps explains why statistics and AI have gotten so curiously blended together in that field.)
So, if we care about reality, we may have the following worry. What if, in many subjects of interest, our mechanistic hypotheses are just too simple? What if, in eschewing the “second-culture models,” we are like scientists trying to understand the relation of retinal activations to facts about the world, who go on proposing and testing crude hypotheses about the mean values of the activations or their standard deviations, when it might be better to learn to see, and only then to interpret what happens in the (massively complex) process of sight?
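To make the “learn to see first, interpret afterwards” order of operations concrete, here is a rough scikit-learn sketch. The dataset, the forest, and the choice of permutation importance as the interpretive probe are all illustrative assumptions on my part:

```python
# Fit a flexible ensemble first, then probe what it has learned.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=300, random_state=0)
forest.fit(X_train, y_train)

# Interpretation happens after the fact, against held-out data, rather
# than being baked into the model's functional form.
result = permutation_importance(forest, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```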
Gelman and his paper with Shalizi
Unless I misunderstand you, I don’t think Gelman falls into group three, which seems like it would rule out “wrong but useful” models. He wrote a really, really good paper with Cosma Shalizi, Philosophy and the Practice of Bayesian Statistics, which is worth reading for anyone.
A key quote:
We are not interested in falsifying our model for its own sake – among other things, having built it ourselves, we know all the shortcuts taken in doing so, and can already be morally certain it is false. With enough data, we can certainly detect departures from the model – this is why, for example, statistical folklore says that the chi-squared statistic is ultimately a measure of sample size (cf. Lindsay & Liu, 2009). As writers such as Giere (1988, Chapter 3) explain, the hypothesis linking mathematical models to empirical data is not that the data-generating process is exactly isomorphic to the model, but that the data source resembles the model closely enough, in the respects which matter to us, that reasoning based on the model will be reliable. Such reliability does not require complete fidelity to the model.
The really interesting point that has stuck with me from the paper is that, having designed a model, one should treat actually fitting it as a kind of “principal-agent” problem. Treat the model as your agent drawing conclusions from the data, and yourself as the principal. The model has priors, not you, but you set its priors so the model won’t draw conclusions in “bad faith” (like overfitting). And hopefully, you then check that the conclusions make sense (if you’ve got a generative, Bayesian model, one way is just to draw a bunch of samples and see if they’re distributed like your observations).
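As a tiny concrete illustration of that last check, here is a posterior predictive check for a beta-binomial model using only numpy. The data and the Beta(2, 2) prior are made up for the example; nothing here comes from the paper itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data: 100 trials, 63 successes.
n_trials, successes = 100, 63

# The priors belong to the model (the "agent"); we (the "principal")
# chose a weak Beta(2, 2) so it cannot draw overconfident conclusions.
a_post = 2 + successes
b_post = 2 + (n_trials - successes)

# Draw replicated datasets from the posterior predictive distribution...
theta = rng.beta(a_post, b_post, size=5000)
replicated = rng.binomial(n_trials, theta)

# ...and see whether the observed data looks typical of them.
print("observed:", successes)
print("replicated 5%-95% interval:", np.percentile(replicated, [5, 95]))
```

If the observed count fell far outside the replicated interval, that would be the model telling on itself, and a cue to go back and revise it.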
Looks more effective != is more effective
One of the later sections (“Spectacle”) was more relevant than you might have realized.
A story I heard directly from the former head of engineering at a major appliance manufacturer many years ago: the research lab had come up with a new spray design that looked like it would be much more effective in cleaning dishes. The design had been transferred to engineering and the lab was working on measuring just how much more effective the new design was. As designs were completed and moved to manufacturing, the marketing people were brought in to figure out how best to sell the new capability.
Unfortunately, not long before the product was scheduled to be introduced, the tests were finally completed. The new spray was not more effective than the old one – it looked great, but it didn’t clean dishes any better. And by this time, they were starting to make the new dishwashers and it was too late to cancel the advertising space.
They did the only thing that they could do: without claiming that it was more effective, they showed the spray in action and talked about how it was more powerful. They didn’t lie, but they didn’t tell the whole truth – it wasn’t any better at cleaning dishes than the old design.
I wonder how many AI systems are like this….
There was a [recent article](https://www.theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies) on the “AI” services that use people behind the scenes to supplement, fix, or provide the entire service.
More engineers and developers building dishwashers with New! Improved! sprays....
Book review of "Naming Nature" by Carol Kaesuk Yoon
I was curious about the tradeoff between accuracy and interpretability in other fields, so I checked out a book on taxonomy.
According to Carol Kaesuk Yoon, categorizing living things (and foods) is a basic human need. However, the flood of new plants and animals brought to Europe by the age of exploration created new problems for classification. To deal with it, Linnaeus came up with a single system for every natural thing (rocks included).
The continuing discovery of more and more species led to the professionalization of taxonomy. You had to have a good deal of experience with a particular family of life to get a good feeling for how its members should be grouped. The flaw with this method was that, because the decisions were based on personal weighing of evidence, it was difficult to resolve disagreements about how things should be classed.
There were three movements to rationalize taxonomy. The first, in the 50s, was based on using computers to classify creatures by the number of shared features (chapter 8 has a great discussion of the cloudiness this pushed to the edges). The second classified creatures by how similar their amino acid sequences were. And the third said that what mattered was not general similarity but shared innovations.
Each of these movements was more scientific than the last (and tremendously criticized by traditionalists as completely lacking judgement), and each brought new, unexpected discoveries, but each also moved taxonomy farther and farther from the everyday intuitive classification of life. This culminated for Kaesuk Yoon in the destruction of “fish” as a taxonomic category. The argument goes that nestled within the fish is the clade of tetrapods, a group that includes every amphibian, reptile, and mammal. If “fish” were a clade, then we would have to count every tetrapod as a fish; therefore “fish” is not a meaningful taxonomic group.
Kaesuk Yoon said that when she started writing the book she was confident that science was the only way to order the world, but as she wrote she realized that her own two eyes tell her that fish obviously exist. Because of this she realized that there is more than one true way of organizing the world, and that she, like everyone else, was continuously moving between them. The upshot of all of this is that since “fish” is not a taxonomic category but a naive classification, whales are absolutely 100% fish.
Gelman & Shalizi and Bayesian nonparametrics
I see what you’re getting at, then, and that makes sense.
Bayesian nonparametrics might be an intermediate case (i.e., allow a huge or infinite number of parameters, and then set your priors to encourage using only as many as necessary). On the one hand, these models basically never even try to say something about reality – they are way more like the “just throw boosted trees at it” approach.
On the other hand, you do need to be in conversation with the data as you fit them. It is very obviously like the principal-agent process described in that paper – your chosen priors are the only thing encouraging the agent to be frugal in its use of parameters.
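Here is a minimal sketch of that “frugality via priors” idea using scikit-learn’s truncated Dirichlet-process mixture; the synthetic data and the concentration value are illustrative assumptions:

```python
# Allow many components, but let a Dirichlet-process prior push the
# weights of unneeded ones toward zero.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic data actually drawn from 3 clusters.
X = np.concatenate([rng.normal(loc, 0.5, size=(200, 2))
                    for loc in (-4.0, 0.0, 4.0)])

dpgmm = BayesianGaussianMixture(
    n_components=15,  # far more than the data need
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,  # small value = be frugal
    random_state=0,
).fit(X)

# Most of the 15 mixture weights should end up near zero.
print(np.round(dpgmm.weights_, 3))
```

The concentration prior is doing exactly the principal-agent job described above: it is the only thing stopping the model from happily using all fifteen components.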
For whatever reason, nonparametric models are my prototype for Bayesian modeling, with the hierarchical models that Gelman likes being towards the edge of the category. Now that I think about it, though, Stan has essentially no support for nonparametrics (for good technical reasons, it can’t sample discrete r.v.s), and nobody seems bothered by that. So that does make it seem like Gelman and the other Stan people really are part of that third culture.
There are so many different cultural divides, all intersecting in weird ways, and I guess out of habit I tend to divide people up by which computational methods they prefer.
I roughly agree with what you
I roughly agree with what you’re saying here.
A possible view one could take on the “human-robot interaction” end of AI is that it’s about investigating a weird quirk of human psychology: that human beings will accept as “honorary people” objects, such as dolls and robots, that they clearly know not to be actually human. Any robots you might build in the course of the scientific investigation are experimental stimuli intended to trigger this aspect of human psychology; the goal is not to understand how to build robots, it is to understand human beings. If a certain amount of fakery is involved, of course it’s fake — the experiment is investigating our willingness to be fooled.
(Of course, machine learning as used by e.g. Amazon is not like HRI).
Argument for deep learning by Chollet
If you are still planning on making a blog post on why you don’t think the current AI techniques will go very much farther, then you might want to address this argument by François Chollet in his new book Deep Learning with Python:
Deep learning has several properties that justify its status as an AI revolution, and it’s here to stay. We may not be using neural networks two decades from now, but whatever we use will directly inherit from modern deep learning and its core concepts. These important properties can be broadly sorted into three categories:
Simplicity – Deep learning removes the need for feature engineering, replacing complex, brittle, engineering-heavy pipelines with simple, end-to-end trainable models that are typically built using only five or six different tensor operations.
Scalability – Deep learning is highly amenable to parallelization on GPUs or TPUs, so it can take full advantage of Moore’s law. In addition, deep-learning models are trained by iterating over small batches of data, allowing them to be trained on datasets of arbitrary size. (The only bottleneck is the amount of parallel computational power available, which, thanks to Moore’s law, is a fast-moving barrier.)
Versatility and reusability – Unlike many prior machine-learning approaches, deep-learning models can be trained on additional data without restarting from scratch, making them viable for continuous online learning—an important property for very large production models. Furthermore, trained deep-learning models are repurposable and thus reusable: for instance, it’s possible to take a deep-learning model trained for image classification and drop it into a video-processing pipeline. This allows us to reinvest previous work into increasingly complex and powerful models. This also makes deep learning applicable to fairly small datasets.
He doesn’t claim it will lead to general intelligence or anything, but he thinks it will go quite far; he doesn’t seem to think recent demonstrations are misleading.
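For what it’s worth, the reusability point is easy to see in a few lines of Keras. This is a rough sketch of dropping a pretrained image classifier into a new pipeline; it is not an example from the book, and the particular base network and the five-class head are assumptions of mine:

```python
# Reuse an ImageNet-pretrained convolutional base for a new, small task.
from tensorflow import keras

base = keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(160, 160, 3))
base.trainable = False  # keep the learned features as they are

# Drop the base into a new pipeline with a small task-specific head.
model = keras.Sequential([
    base,
    keras.layers.Dense(5, activation="softmax"),  # e.g. 5 new classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(small_new_dataset, ...)  # viable even with fairly little data
```

This is the sense in which previous work gets “reinvested”: most of the model’s knowledge was paid for once, on ImageNet, and the new task only has to pay for the last layer.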
The scope of AI / opening remarks.
“Artificial intelligence is an exception. It has always borrowed criteria, approaches, and specific methods from at least six fields:
1. Science 2. Engineering 3. Mathematics 4. Philosophy 5. Design 6. Spectacle.”
Not borrowed. That is the unique scope of AI. It’s cross-domain by its nature. Our unique blindness is that Biology is #1. It determines and trumps the rest. Constrains them.
It’s notable that it was filed under “other” here, although I was watching for it.
Evaluate the First Working AGI for similarity to brain function.
We should evaluate progress in AI on the basis of how well any AI entity (such as MindForth or ghost.pl AI or the JavaScript Tutorial AI Mind) simulates genuine brain function and uses concepts, not statistics, to demonstrate Natural Language Understanding (NLU).