Comments on “A future we would like”
Autistic hermits of the world, UNI-- ...actually, don't.
Ah yes, that figures. I’m quite the sperg-hermit too. I do have and cherish my friends, but prefer to spend much of my time alone, working on whatever project calls me. I do it for the beauty of what I produce. And the process of creation is itself beautiful. So it is autotelic, too.
What’s kind of sad is that I often feel I have few to share it with who deep down actually care to listen. It’s like seeing beauty in things that basically don’t have to do with people is alien to them. What’s stranger is how common this attitude is even among my friends, most of whom are in tech. They have a feel for it more than most, at least. But only a few have that same twinkle in their eye. I think the presence of that is a good sign they aren’t just in it for the money.
Maybe I could share some of my stuff online like you do, and get that sense of being fully seen and appreciated for the things that come from my soul. But for reasons like the ones you’ve expressed in this book, I do not use any social media. And from the AI angle, I am incredibly wary of making any of my git repos public, and suspicious even of private repos, lest the companies lie about privacy: I don’t want to feed the beast training data, and especially not code that provides infrastructure around LLMs. I recently even cut out YouTube, and have noticed an associated improvement in my quality of life. I refuse to be a meat-puppet.
But I digress; I’d like you to know that your work has greatly influenced and helped me. Reading Meaningness near the end of college helped me through the philosophy-spawned existential torment I had been going through for years. I felt it provided a map of where I’d been and of where I might go next. I noticed that the things you said were similar to the answers to questions I’d asked of men generally known to be wise. Because of this, and because of how novel it felt, I felt I could trust the direction it was leading me despite not understanding all of it, and chose to keep its words in the back of my mind, ready for the day I might understand them. I think it’s helped me mature a lot faster than I would have otherwise, because it’s given me hints of what to look for: the failures of systematic/explicit reasoning, its relationship to implicit reasoning, where my levels of development might be lopsided, what the different kinds of reasoning are good for, and how they work together. And the limits of it all.
It made me consider that it might be ok to stop biting bullets, and that doing so might be more wise than cowardly. I’m more at peace now. I’m only a few years into my career, but am learning at a fine pace and now have a good deal of autonomy. I keep noticing all the little details & gotchas, just as you’d foretold. I may still sometimes bitch about them, especially when they seem like the result of some batshit idea, but I now begrudgingly expect this to be the default, lol. This has made me into a pretty dang good debugger.
Anyway, I’ve binged this far into the book just today, and find myself having thought many similar thoughts, while appreciating the deeper elucidation you’ve provided. I can directly trace the way I’ve been thinking about AI to the kinds of things you write about, and it’s nice to have that validated by hearing it from the horse’s mouth. And you have convinced me to at least look into mechanistic interpretability ;) we’ll see if I find it fun.
Alright, essay over. Thank you for your work.