Recent comments
Existential Relevance
Commenting on: New evidence that AI predictions are meaningless
To answer your questions: to me, it all boils down to existential relevance. Basically, you hold a strong opinion on something if having an opinion on it is important for your daily life.
Why don’t people have strong ungrounded opinions about chemistry, like they do about physics?
- Well, in a sense they do. There are lots of New Age beliefs about chemistry, like the idea that water molecules' structure reacts to positive or negative thoughts. What makes quantum physics particularly attractive, though, is first the name: it sounds revolutionary, intellectual and scientific. And for New Age believers that's an important feature, because most are "rational traditional-religion deniers". Basically, lots of them were either a bit traumatised by their religious upbringing, or were always fundamentally skeptical about religion, and so decided to go against it. But they actually created a new religion to make sense of an unknown higher power, which they call "the universe". Everybody needs meaning, and this is a way for some people to explain their daily suffering and reduce uncertainty about present and future existence.
Why don’t people have strong ungrounded opinions about surgical techniques, like they do about nutrition?
- Nutrition is something you do every day and is relevant to everyone. If you choose to prioritise your health above other things, it becomes obvious that you need to learn how to eat well. It has a very intimate relationship with your body, because you eat three times a day; you can't just forget to eat. And as nutrition is still a very uncertain field, with lots of discoveries and experiments, it's hard to sort out the truth. It also relates to taste, so of course it might help you to believe in all the good effects of starting a keto diet if you don't like pasta, for example. On another point, people might actually worry about surgical techniques if they have an operation coming soon! Existential relevance again!
Why don’t people have strong ungrounded opinions about superconductor research, like they do about AI research?
- Because of Terminator (and science fiction as a whole). Really, this film planted such a vivid and credible image of one of AI's worst-case scenarios in so many minds. And since it presents an existential risk, it has existential relevance. For the AI optimist, this will make their life insanely better, so we'd better advance it. For the AI pessimist, this will end their life, so we'd better shut it down. Even though superconductor research is interesting and important, few people have an image of what a superconductor even is. And even if people knew, what image would they have of how it affects their daily life? To be honest, there are also a lot of people who have no clue and don't actually hold a strong opinion. The people who are most invested are usually nerds like me, working in software, who read too much sci-fi! It's sort of a fantasy coming true; no wonder it's exciting!
So yeah, these are some unorganised thoughts on why these topics attract such big ungrounded opinions.
It all boils down to this feeling of:
« This is important in MY life »
Had to pick a number, and this was a number?
Commenting on: New evidence that AI predictions are meaningless
Re: predicting a date 400 years in the future, I wonder if this comes down to, “you have to pick a year [or as Brian says feel obliged, or are chosen for willingness to play the game], you don’t think it’ll be in reasonably-small-N years, and this was a year.” If not, I agree this would be a pretty surprising result!
No maths
Commenting on: New evidence that AI predictions are meaningless
My theory: no maths
Now, of course, if you actually read a book on quantum field theory you might encounter a lot of maths (some of it highly alarming, from the point of view of rigour), but the people speculating about philosophy of quantum mechanics are mostly going by the math-less explanations for the general public.
A friend of mine, a very smart therapist, likes to point to quantum theory as an example of something nearly everyone just takes on trust without bothering to actually read the mathematics. "Well, apart from autistic savants like you, Susan" (I paraphrase slightly what she actually said).
LK99
Commenting on: New evidence that AI predictions are meaningless
I think with LK99 we did see non-experts suddenly having opinions about superconductors.
“Any temperature is room temperature if your house is cold enough” was one of the better jokes to come out of that affair.
What really happened, though?
Commenting on: New evidence that AI predictions are meaningless
I don’t trust the summary and I don’t know what happened, but I can speculate:
Being a forecaster at all means you’re willing to play the game. I expect that someone whose position is “the only way to win is not to play” wouldn’t be invited to participate?
Also, if someone were to talk to them, perhaps they would admit to being less certain than the data indicates?
More generally, refusing to speculate seems vaguely anti-social and few of us do it consistently. Making overconfident predictions is how people talk.
What links nutrition and AI and quantum?
Commenting on: New evidence that AI predictions are meaningless
Everyone’s gotta eat and everyone has to react to the new shiny thing and everyone doesn’t understand superposition?
It’s a skill to resist forming an opinion and it’s not something you usually get rewarded for (although in rational work it is rewarded sometimes).
So maybe people find themselves in a position where they have to have an opinion, and they go with the one that fits their current memeplex best. And given there’s no real way to find an answer, the opinion sticks.
No one cares about chemistry because boring equations mean nothing about the eternal soul. And you can actually do chemistry if you start caring about it for some reason.
So I think the answer is: these are things you have to engage with to be alive, that have no real answer, plus a failure to dodge opinion formation in the brain.
Safe experimentation discouraged by the funding lottery
Commenting on: Do AI as science and engineering instead
David:
I don’t think we have the luxury of experimenting on a small number of unwilling guinea pigs that automobiles had.
We do; by not putting them on the internet. But then we can’t create the spectacle and we don’t get the funding.
I think this points to a huge part of the problem. Most of the research funding in computer technology (and beyond) is being driven by the whims of rich people, all trying to invest in things that will definitely make them richer. Which makes funding priorities especially vulnerable to being hijacked by tulip fever.
We saw this with the DotCom bubble, the DataFarming bubble, the crypto bubble, and now the backprop bubble.
To whom is this page addressed?
Commenting on: This is about you
I also enjoyed this page and I’d be sad to see it deleted. One thing I find confusing is I’m not sure who “you” is. To whom is this page addressed?
I’m guessing it’s addressed to the reader. But the link in the first sentence makes it seem like it’s addressed to a future AGI. The idea of pleading with a future AGI seems to undercut the message of the rest of the book.
Maybe clarifying this would help?
Possible typo, tree villages and avoiding the cybersewer
Commenting on: Meaningful futurism
Possible typo:
we live their fragmented, irrational, incoherent wreckage.
Did you mean “we live in”? It does make sense the way it’s written, and maybe this poetic turn of phrase was intentional. But it did trip me up when I read that passage.
Treehouses. Even treehouse “cities” like Lothlorien.
OMG yes! When I was a child I was fascinated by the treetop village of the Ewoks in Return of the Jedi. Then as a teenager, by a similar one in Robin Hood: Prince of Thieves.
Sarah Constantin tweeted a remarkable thread of several dozen “things I’d like to see more of.”
Thanks for including a sample of these in the body of the text. It’s amazing how often people still link to Xitter posts and threads as if they were still part of the open web, visible to anyone who clicks the links.
Sadly I can no longer read Sarah’s thread without logging into Xitter, something I can do, but prefer not to. Anyone who’s deleted their account, in protest at the enshittification of the platform, doesn’t have that option. Had I not set up an account years ago to echo my posts to the fediverse (from a GNU social then a Mastodon account), I wouldn’t even have the option of spelunking in that cybersewer, looking at the diamonds in the rough. Encouraging people to do that is probably best avoided, for reasons discussed in this very book.
Who sold the future?
Commenting on: How AI destroyed the future
I’d add that there may be an opportunity for science fiction to help us out of the current nihilistic pit: by imagining utopia, or at least pretty-good outcomes, again.
I totally agree. I’ve been very encouraged by some of the more utopian subgenres that branched off cyberpunk, as the net became mainstream and the subcultural mode broke down in the social grey goo of the atomised mode. The emergence of solarpunk in particular has been encouraging. Here’s my own little contribution to the genre:
https://strypey.dreamwidth.org/1305.html
Living in China for a couple of years was an interesting contrast to neoliberalised Aotearoa. They are literally building a Tomorrowland of gleaming skyscrapers and superfast electric trains, and a solarpunk landscape of solar panels sited above what look like fish farms. The Chinese are also living in an emerging cyberpunk dystopia, as their increasingly autocratic leaders re-centralise political power (as we see in Hong Kong), and exert more direct control over the corporations behind pervasive tech platforms like WeChat, TaoBao and TikTok. Yet those I spoke to seemed as optimistic about the future as I imagine people would have been in the US in the years after WW2.
Solarpunk, Trekonomics and Fully-Automated Luxury Communism
Commenting on: Cozy futurism
This page reminds me of a few things. One is the solarpunk aesthetic, a utopian successor to cyberpunk that paints pictures of sustainable technological futures.
Another is the concept of Fully-Automated Luxury Communism, a term coined by Aaron Bastani as the title of a manifesto published in 2019. A book that could be seen as a precursor to FALC is Trekonomics by Manu Saadia, a French economist, published in 2016.
All three explore ways that technological advances could be harnessed to produce a sustainable prosperity for all, as summed up nicely in the classic Buckminster Fuller quote:
“Our collective objective should be to make the world work for 100 per cent of humanity in the shortest possible time through spontaneous cooperation without ecological offence or the disadvantage of anyone.”
Neoliberalism as a foreclosure on the future
Commenting on: How AI destroyed the future
“What sort of society and culture do we want, and how do we get that” is the topic of the AI-driven culture war. The culture war prevents us from thinking clearly about the future.
I agree with this, although rather than “AI-driven” I would say “AI-amplified”. The DataFarms (or “MoogleBook” as you describe them) have certainly fanned the flames with their automated aggregation of eyeballs, but the initial fire was lit long before they existed.
By whom and why? I agree with the theories put forward by the late Mark Fisher in ‘Capitalist Realism’, and by the late David Graeber in a number of his works, including ‘The Democracy Project’. They argue that destroying our capacity to imagine and realise progressive futures was the main purpose of neoliberalism, which they see as a political project rather than an economic one: a rearguard action by a capitalist establishment shaken by the upheavals of the 1960s and early 70s, and determined to make sure there was as little change as possible from the 1980s onward, in case change once again threatened their concentrations of power and wealth.
Along with the displacement activity of the Culture War - which itself began before social media although it was less mainstream - you can see the effects of neoliberal pessimism at work in the changing themes of sci-fi. Which increasingly depicted futures that were much worse than the present, rather than better.
Take the totalitarian dystopia genre. True, pioneering authors like Orwell, Huxley, Kafka, Hesse, and Burgess were writing about these much earlier, as a veiled critique of existing authoritarianism and technocracy. But dystopian societies started to become more common in sci-fi from the 1970s onward, with movies like Soylent Green, Logan’s Run, The Running Man, They Live, Brazil and Starship Troopers; TV series like Blake’s 7 and Max Headroom; books like The Handmaid’s Tale; and comics like V for Vendetta and Judge Dredd. Over the last 20-30 years we’ve seen positive visions of the future almost completely displaced by powerful dystopias in young adult sci-fi (from Tomorrow When the War Began to The Hunger Games, Divergent, The Maze Runner etc):
https://www.teachthought.com/literacy/best-young-adult-dystopian-novels-books/
Another common type of bleak sci-fi future is the post-armageddon wasteland. Mad Max, one of the most famous examples, was released in 1979. Through the 1980s-90s there were dozens of these, including Blade Runner, The Quiet Earth, Escape from New York, The Day After, Twelve Monkeys, and Waterworld, as well as TV series like Survivors, The Tripods (admittedly based on a 1960s novel series), and V (the initial 5-part mini-series and spinoff series). In the last couple of decades there have been dozens of movies, TV series and video games about nuclear armageddon (The 100, Jericho), climate armageddon (Snowpiercer, The Day After Tomorrow), viral pandemic armageddon (The Tribe, The Last Ship, The Rain, The Stand), technology armageddon (Terminator, The Postman, Revolution), astronomical armageddon (Rage, Melancholia, These Final Hours, The Long Dark, Greenland, Don’t Look Up), alien invasion armageddon (Independence Day, Half-Life, Defiance, Falling Skies, A Quiet Place), or zombie armageddon (Resident Evil, The Living Dead etc).
Then there’s the designer despair of cyberpunk. True, Philip K. Dick was experimenting with this aesthetic from the 1960s onwards, but it didn’t really take off as a sci-fi subgenre until Neuromancer was published in 1984. The Matrix movies were an example of both cyberpunk despair and post-armageddon survivalism. By the time they were released, between 1999 and 2003, cyberpunk gloom had totally replaced the ‘gee whizz’ optimism of the Star Wars and Star Trek era, to the point where even the new films and TV series in those franchises adopted a dark, brooding tone.
As a result of all this foreclosing of the future, to quote Fisher: “it is easier to imagine the end of the world than the end of capitalism”. Or as the Sex Pistols put it back in 1977 in God Save the Queen, “No future for you”. The DataFarms and their recommendation algorithms just put all this on digital steroids.
Zero-sum competition for funding undermines collaboration
Commenting on: Stop obstructing scientific progress!
Hal writes:
As important as a few individuals like Galileo were, science only became an unstoppable enterprise within a few decades due to one particular scenius, the Royal Philosophical Society, founded around 1660.
You can see this Great Man narrative style in political history too. Even Marx, whose ideas included a teleological version of history that made individuals almost irrelevant, has been made into the Great Man of Communism by this individualist approach to historical storytelling.
I think “individualist” is the key word in understanding why. To borrow the terms David coined in Meaningness, the ‘choiceless’ mode emphasises harmony with the collective. To transcend this, and usher in the ‘systematic’ mode, it was necessary to counterbalance it with an emphasis on the autonomy of the individual. The synthesis being the systematic cooperation of autonomous individuals. But this emphasis on the individual is still with us, in the Great Man narratives of history, in the rugged individualism of most conservatives, and so on.
To bring this back to the point of the page we’re commenting on, it also results in the mythology that leads to the hero worship of Silicon Valley founders and billionaire philanthropists. The myth that scientific and technological outputs are the product of Great Minds, rather than meta-systematic collaboration. This mythology also leads to a focus on funding researchers, not large-scale, collaborative investigations, which in turn leads to a zero-sum competition between researchers for financial survival, which undermines any potential for collaboration.
Non-AI Free Code software aiding scientists
Commenting on: Radical progress without Scary AI
Here’s an example of advanced software aiding scientists without AI:
https://www.gmu.edu/news/2024-01/new-video-camera-system-captures-colored-world-animals-see
Note that full source code for the software is available under a free license (ie Open Source), to aid anyone trying to reproduce or build on the research. Something that’s much harder, if not impossible, when black box AI is used.
The traps of techno-determinism and reductionism
Commenting on: Radical progress without Scary AI
Most people do not viscerally believe that any further progress is possible. That disbelief, that unwarranted pessimism, is a major impediment to progress itself.
The late David Graeber wrote about this a decade ago, and it’s worth quoting him at length:
“For earlier generations, many science fiction fantasies had been brought into being. Those who grew up at the turn of the century reading Jules Verne or H.G. Wells imagined the world of, say, 1960 with flying machines, rocket ships, submarines, radio, and television—and that was pretty much what they got. If it wasn’t unrealistic in 1900 to dream of men traveling to the moon, then why was it unrealistic in the sixties to dream of jet-packs and robot laundry-maids?
In fact, even as those dreams were being outlined, the material base for their achievement was beginning to be whittled away. There is reason to believe that even by the fifties and sixties, the pace of technological innovation was slowing down from the heady pace of the first half of the century. There was a last spate in the fifties when microwave ovens (1954), the Pill (1957), and lasers (1958) all appeared in rapid succession. But since then, technological advances have taken the form of clever new ways of combining existing technologies (as in the space race) and new ways of putting existing technologies to consumer use (the most famous example is television, invented in 1926, but mass produced only after the war.) Yet, in part because the space race gave everyone the impression that remarkable advances were happening, the popular impression during the sixties was that the pace of technological change was speeding up in terrifying, uncontrollable ways.”
https://thebaffler.com/salvos/of-flying-cars-and-the-declining-rate-of-profit
I think this can be extrapolated to help us grok contemporary attitudes.
Like the space race in the 60s, the mainstreaming of the net and the digitisation of everything has given techno-utopians the impression that the pace of technological progress has been accelerating. Just about everyone else, seeing that the opposite has been true for decades as Graeber says, and trapped in the postmodern Swamps of Sadness, has fallen into techno-pessimism.
These are two sides of the same techno-determinist coin. I agree with what you’ve said elsewhere, that progress (in technology and elsewhere) is possible but not inevitable. Achieving it is not guaranteed and will require significant effort.
I believe a better understanding of how science gets done well, and why that works, should give us insight into how to accelerate it. (This is the engineering attitude!)
You may be right about this and I agree that it’s worth a try. But here be dragons!
Reductionist ideology posits that anything can be reproduced by breaking it down into its constituent parts and studying the chains of causation that link them into a whole, then reproducing all the parts and putting them back together. This universalism is obviously wrong; just try reproducing a human that way : P
I think it’s possible to gain a deep understanding of how past scientific progress was achieved without being able to reproduce it. For example, when part of the answer is ‘with abundant, cheap fossil fuels’. Or to come up with metasystematic ways of reproducing the conditions for doing good science, without a mechanistic understanding of how past breakthroughs were achieved.
AI driving cars
Commenting on: Recognize that AI is probably net harmful
Mavi:
self driven cars powered by image models?
Certainly image recognition is part of what a driving AI has to do, but only a very small part. Children over 2 can definitely recognise images, but I wouldn’t let them drive ; )
where does that fit?
I think David covered this when he said:
mainly experimental prototypes and vaporware fantasies instead
“Developing driverless cars has been AI’s greatest test. Today we can say it has failed miserably, despite the expenditure of tens of billions of dollars in attempts to produce a viable commercial vehicle.”
Christian Wolmar, Dec 2023
Apple has been pivoting to surveillance advertising
Commenting on: Create a negative public image for AI
Advertising is most of the business of Facebook and Google, and sizeable chunks for Amazon and Microsoft.
As I’ve mentioned in other comments, this is increasingly true of Apple as well:
New companies like Purism, E Foundation, and Pine64 have emerged to pursue Apple’s pre-iThing business model: selling hardware and subscription services at a high enough price to subsidise software development on top of existing Free Code. Apple built Mac OS X as a layer of proprietary “IP” icing on top of BSD. Their successors are pursuing a similar strategy, but without the “IP” snake oil that mainly serves rent-seekers.
MOLE; Machine Operated Learning Emulators
Commenting on: Fight DOOM AI with SCIENCE! and ENGINEERING!!
This is an excellent human-readable summary of the problems with the “machine learning”* approach from the POV of an insider to the field, and what we can do about it right now. Possibly the most important page in the book so far.
* I have been calling it MOLE: Machine Operated Learning Emulators, both to avoid using vague and potentially misleading descriptors like “AI” or “Machine Learning” and to emphasise how blind and stupid MOLE actually is compared to sci-fi AI monsters like SkyNet or Control.
grApple is also a surveillance advertising company
Commenting on: End digital surveillance
grApple is no better at respecting privacy than BorgSoft or Goggle:
https://pluralistic.net/2024/01/12/youre-holding-it-wrong/
For mobile, I suggest looking into a GNU/Linux OS like postmarketOS, whose creators aim to make it run on any Android device (thus the name). Or mobile devices like Pine64’s PinePhone and PineTab, or Purism’s Librem 5, which run a mobile GNU/Linux OS by default.
I second the suggestion to use a GNU/Linux OS on desktops and laptops too. These are now fairly simple to install on any Windows laptop, including cheap or used ones, making them much more accessible than even used MacOS laptops.
I’ve been using Ubuntu for a while and it’s fine.
I used Ubuntu for a few years, but switched to Trisquel when the Ubuntu owners capitulated to the surveillance industry with the Amazon Lens:
https://www.gnu.org/philosophy/ubuntu-spyware.en.html
Some devices can’t run Trisquel, due to a minority of hardware makers who still refuse to release enough technical info to allow Linux developers to support their chips. If I end up with one of those, I use LMDE, the Debian Edition of Linux Mint, which is based on Debian rather than Ubuntu.
it’s not an option for most people, because it requires using terminal and sometimes coding
This hasn’t been true for about 2 decades. I’ve been using desktop GNU/Linux for nearly 20 years, and I’ve never had to write code. The only time I’ve needed the terminal is when I’m trying something experimental, or something goes badly wrong.
This happened at least as often when I used to use Windows. But finding and following instructions for using the Windows terminal required professional levels of IT skill. Whereas the web has loads of clear instructions aimed at GNU/Linux beginners, and forums where more experienced users will happily help them.
YMMV.
A vast Meaning mess.
Commenting on: Transformative AI
Empathy is not ‘Eternalism’.
Social media vs. recommender media
Commenting on: Apocalypse now
Suppose FB, Twitter, etc. eliminated their recommender algorithms and presented you with posts and reposts only from people whom you choose to follow. Then would most people (especially those who are more prone to reflexive outrage) see a substantially different distribution of messages than now?
I think so, based on my experiences over the last decade or so as an active user of the fediverse, the decentralised social network of which Mastodon servers are a part.
Lots of people have migrated into the ‘verse from Titter since Melon Husk acquired it, and they do bring their reflexive outrage habits with them. But after a few months they seem to either calm down, or find the ‘verse is too short of ragebait and drift back to Titter.
What Leonie Said.
Commenting on: Transformative AI
QAnon on the one hand and Black Lives Matter on the other??
That’s some lazy thinking.
Why is "general" AI scarier?
Commenting on: Artificial general intelligence (AGI)
What makes “general” AI scary to most people is that we intuitively contrast it with AI following a goal given to it by humans, operating under human control. This intuition breaks under the analysis you provided on the previous page; AI doesn’t need human-level (or greater) intelligence, or the ability to determine its own goals, to be dangerous.
Daemon by Daniel Suarez illustrates this point
Commenting on: Autonomous AI agents
The novel Daemon by Daniel Suarez is an excellent illustration of your point on this page. Unlike most sci fi stories about murderous AI, Daemon takes place in a near future Earth just like ours, not a far future or fantastical setting (eg one with superheroes).
More to the point, it doesn’t posit that the AI villain has any kind of self-awareness or intentions. It has a limited ability to take control of other automated systems, and to adapt when its attempts to pursue its pre-programmed goals are thwarted. But it needs no mindness to be dangerous; just a carefully designed automated system, following goals and a rough plan given to it by its creator.
Agree with everything you say
Commenting on: Better text generation with science and engineering
I agree with everything you say. However, I guess we aren’t the only ones who have realised this. We have developed, and are still developing, commercial systems along the lines you discuss, using inverted indexes for retrieval and text generators for generating text. I suspect that many more people have also realised this and are following the same path.
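For readers unfamiliar with the retrieval half of that architecture, here is a minimal sketch of an inverted index in Python. It is a toy illustration of the general idea only, not anything from the systems mentioned above; the corpus and function names are invented for the example.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def retrieve(index, query):
    """Return IDs of documents containing every query term
    (boolean AND retrieval, the simplest scheme)."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

# Hypothetical three-document corpus for illustration.
docs = {
    1: "inverted indexes make retrieval fast",
    2: "text generators produce fluent text",
    3: "retrieval then generation grounds the text",
}
index = build_inverted_index(docs)
print(retrieve(index, "retrieval text"))  # → {3}
```

In a retrieval-augmented setup, the documents returned by a lookup like this would then be handed to a text generator as context for producing the answer.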
Autistic hermits of the world, UNI-- ...actually, don't.
Commenting on: A future we would like
Ah yes, that figures. I’m quite the sperg-hermit too. I do have and cherish my friends, but prefer to spend much of my time alone, working on whatever project calls me. I do it for the beauty of what I produce. And the process of creation is itself beautiful. So it is autotelic, too.
What’s kind of sad is that I often feel like I have few people to share it with who deep down actually care to listen. It’s like seeing beauty in things that basically don’t have to do with people is alien to them. What’s stranger is how common this attitude is even among my mostly-tech friends. They have a feel for it more than most, at least. But only a few have that same twinkle in their eye. I think the presence of that is a good sign they aren’t just in it for the money.
Maybe I could share some of my stuff online like you do, and get that sense of being fully seen and appreciated for the things that come from my soul. But for reasons like the ones you’ve expressed in this book, I do not use any social media. And from the AI angle, I am incredibly wary of making any of my git repos public, and am suspicious of even private repos, lest the companies lie about privacy: I don’t want to feed the beast training data, and especially not code that provides infrastructure around LLMs. I recently even cut out Youtube, and have noticed an associated improvement in my quality of life. I refuse to be a meat-puppet.
But I digress; I’d like you to know that your work has greatly influenced and helped me. Having read Meaningness near the end of college helped me through the philosophy-spawned existential torment I had been going through for years. I felt that it provided a map of where I’d been and of where I might go next. I noticed how the things you said were similar to the answers to the questions I’d asked of generally known-to-be-wise men. Because of this and how novel it felt, despite not understanding all of it, I felt that I could trust the direction it was leading me, and chose to keep its words in the back of my mind, ready for the day I might understand them. I think it’s helped me mature a lot faster than I would have otherwise, because it’s given me hints of what to look for: the failures of systematic/explicit reasoning, the relationship it has to implicit reasoning, where my levels of development might be lopsided, what the different kinds of reasoning are good for, and how they work together. And the limits of it all.
It made me consider that maybe it might be ok to stop biting bullets, and that this might be more wise than cowardly. I’m more at peace now. I’m only a few years into my career, but am learning at a fine pace and now have a good deal of autonomy. I keep noticing all the little details & gotchas, just as you’d foretold. I may still sometimes bitch about them, especially when they seem like the result of some batshit idea, but I begrudgingly now expect this to be the default, lol. This has made me into a pretty dang-good debugger.
Anyway, I’ve binged this far into the book just today, and find myself having thought many similar thoughts, but appreciating the deeper elucidation you’ve provided. I directly trace the similar ways I’ve been thinking about AI to the kinds of things you write about, and it’s nice to have that validated by hearing it from the horse’s mouth. And you have convinced me to at least look into mechanistic interpretability ;) we’ll see if I find it fun.
Alright, essay over. Thank you for your work.
Move to the front of the book
Commenting on: This is about you
Cite it as alms to humanity, or some such thing, in between the introduction and chapter one.
Love it btw!
Don't delete the best page of the book
Commenting on: This is about you
I sent this page to my friend who is not technical but is worried about AI and she really liked it and then read the rest of the book.
The page ties the whole work together with vividness and meaningness and I don’t think anyone else could have written it.
The only reason to remove this page would be if it were going to be the first page of a new book of AI/meaning/non-dual poems.
Self driven cars?
Commenting on: Recognize that AI is probably net harmful
This might not be the most pertinent page, but as examples of AI you mention language and image models. Are self-driven cars powered by image models? Or where does that fit? Or are they not powered by AI? That’s an example where the application carries big responsibility (as opposed to the language and image models that can be used for free on the Internet).
Titanic Disaster
Commenting on: Rollerskating transsexual wombats
So, I’m watching the news reports of the demise of the submersible at the Titanic site, and thinking: the first AI disaster is going to be like this too. I.e. an obviously unsafe system is deployed until people die, and then we all say how it was obviously unsafe all along.
AI - the solution?
Commenting on: Practical actions you can take against AI risks
my question & answer:
INNER ATTUNEMENT ! ...
CONSCIENCE ! ...
A I ?
let humanity develop AI with these prime characteristics
To be or not to be
Commenting on: Artificial neurons considered harmful
Again, I like this as a poetic turn of phrase, but I think you left out a “be” there.