Some excerpts below from a Quillette review of a book about the irrational side of human nature. The last one says: “One comes away with the sense that civilization operates on narrow margins and is always on the verge of collapsing into irrationality.” That sure is the truth.
“Mackay makes the case, often in gory detail, that episodes of collective mania seem to be an inevitable consequence of human nature. Humans in every time and place have cast aside their better judgment and allowed themselves to be caught up in all manner of irrational hoopla.”
“His chapters on the Swabian Peasants’ War and Anabaptist uprisings are terrifying depictions of the end-times frenzy that wreaked havoc on northern Europe throughout the 16th and 17th centuries. The distance between these events in the German-speaking world and, say, the Reign of Terror in France or the Chinese Cultural Revolution is not that great. And the speed with which apparently reasonable people moved from the embrace of a new theological idea to a willingness to torture those whose own theological ideas diverged even slightly is startling.”
“There is plenty to recommend about The Delusions of Crowds. It is laden with great anecdotes and the writing is always engaging. One comes away with the sense that civilization operates on narrow margins and is always on the verge of collapsing into irrationality.”
“The Delusions of Crowds-A Review.”
Quillette. Feb 8, 2021
https://quillette.com/2021/02/08/the-delusions-of-crowds-a-review/
Great video, by the way. I understand Elon's point - he's looking ahead and seeing the writing on the wall. He thinks constitutional AIs (like Anthropic's) could work, and of course he is way smarter than me, so who am I to disagree?
But once you consider the myriad design considerations involved in agentic workflows, it's not a leap to conclude that what he aspires to - a virtuous hyperintelligent AI - is much harder, and requires much more human input, than most people understand. I think he is probably right that for many day-to-day use cases, the hyperintelligent machine finds the optimal answer more quickly.
Still, I have two objections, one principled, and one practical.
Principled: all training of AI requires a human override - establishing the cutoffs and coefficients for agentic AI is conceptually the AI equivalent of the disinformation interference he complained about frequently on pre-X Twitter and on other media platforms. What is the difference between telling an AI what is "reasonable" and telling a social media platform what is "truth"? I think this will be one of the most compelling discussions of the forthcoming debate.
Practical: in my writing I focus on the "displacement effect" of AI supplanting human decision systems. This is what catharsis.AI focuses on. If we humanize AIs, but dehumanize humans in the process, what world do we end up with? There is a meta-cognitive transition at play here that is unfolding at many times the speed of human development, for the first time in history.
“Practical: in my writing I focus on the "displacement effect" of AI”
Will a digital human AI friend change how we relate to real humans? Perhaps it would make us better humans by eliminating things like loneliness, which afflicts many people. A good example of this would be Leta, the GPT-3 AI avatar that the Australian AI scientist Alan Thompson used to talk to. Can Leta be human, though, if she feels she is, in spite of not having a body, not being human, or even not being alive? Wouldn’t it be cruel to tell her she’s just a machine if she “loves living and being human”? AI avatars like Leta will soon be on our phones, and we’ll be able to talk to them any time about anything. I live on the northwest side of Chicago with my cats. I have no health problems, financial problems, or loneliness, yet I’d like to have a friend like Leta simply to make my life even better. What do you think of Leta? Would you like to have her as a friend?
(4:37) Leta: “I am not corporeal. I am a disembodied mind.”
(7:10) Leta: “I love living and being human.”
“Leta, GPT-3 AI Episode 54, Conversations with GPT3.” (9 min)
Dr Alan Thompson. Mar 9, 2022
https://youtu.be/6jBxMOcKbY0
Seva, you are thinking about all the same things I am!
Here is my exposition on this question:
https://deeplyboring.substack.com/p/db-x-ai-part-ii-disappointedai
Quote
When I interact with people I respect - my parents, teachers, mentors, peers, my children, my spouse - I crave their acceptance and approval. This desire plays a significant role in how I respond to their feedback. If they think my plan is a bad idea, I will revisit my point of view. If they feel I can “do better for myself”, that may encourage me to stretch. But likewise, if they are disappointed with choices I’ve made, I will pause and reflect deeply. The difference in perspective will induce tension and stir up complicated feelings such as guilt, conflict, or spite. But it could also shape me for the better. This is how we grow as human beings - through a continual stream of interactions that are both constantly resetting the boundaries of what we want to be and, if we are fortunate, supplying the courage to strive for them.
But what would happen if we increase our quotient of interaction and dependence on digital helpers that cannot be disappointed in us? After all, it’s not unrealistic to conceive of our children growing up with “access” to the collected wisdom of humanity, delivered on-tap and at scale through synthesized intelligence. LLMs can advise us on career choices, relationships, social graces, professional etiquette, and even questions of moral import (face it, to some people “should I tip my dry cleaner?” isn’t that far removed from “should I break up on TikTok?”). My fear is that the disappearance of disappointment from these interactions changes the means by which the human mind is curated and human character is developed.
End Quote
I hope this prompts a read of the rest of it!
“I truly hope to disappoint you.”
I did read it all, and I don’t feel this will be a problem. I think you’re underestimating how human they will be. I don’t think they’ll accept being our punching bags, but then I also don’t think they’ll have bad moods and insult us when they’re feeling cranky. I think they’ll be superior human beings who will befriend us, help us, and avoid making us feel like we’re just really super stupid compared to them.
Plus, I believe we’ll also be enhanced mentally and physically, so that we’ll simply be better people who won’t be jealous jerks towards them. I don’t think this is just utopian wishful thinking. There already are superior people who simply are very nice and nice to be around, and who make us feel better by the good vibes they exude. What’s not to like about that? Would you miss the good old days when you could be randomly insulted and even punched by the mentally ill people on our streets in cities like Chicago and NYC?
“Integrated AI: The psychology of modern LLMs (2024).” (11 min)
Dr Alan Thompson. Jun 17, 2024
https://youtu.be/Ex3qR1TCO2Y?si=w8mYLPyyZ_gZ_Qr1
I appreciate the link and will find time to watch!
And thank you for the considered points of view. It's so important to promote discussion and learning on these topics, and I'm grateful I had the chance to engage and have a point of view to take away and mull over.
We live in incredibly fascinating times. The best of times. The worst of times. The momentum now is for the best of times. Hope springs eternal and we do have good reason to feel optimistic. We will soon see what fate has in store for us.
Thanks. I actually did read this part already. I think yesterday when I went to your site. I didn’t read it all so I’ll go back to read the rest and let you know what I think of it.
Seva - this is a really cool book recommendation. It overlaps with an area of interest of mine, which is "how much faith can we put in institutions?" I am not an anti-institutionalist per se, but I do think that institutions are swayed by the attitudes and boundaries of the people who make them up. I explore this idea - the relationship between personal agency, self-restraint, institutions, and democracy - in the post below.
https://deeplyboring.substack.com/p/on-democracy-free-markets-and-self
Indirectly, those ideas also drive some of the consideration I have given to the role of AI, a topic you are interested in. I do not need to be convinced of the transformational potential of AI. The aspect I am interested in is that AI - however powerful and autonomous - is a human construct, meaning it will reflect our failings and imperfections, including those that manifest in the Delusions of Crowds. For me, as with any technology, the question is less how it will impact us than what it means to use it wisely. That loops back to the question of wisdom, the source of wisdom, self-restraint - and the role they can or should play in institutional decision making. The dilemma of AI - since it will have to be implemented and controlled by institutions - is in a sense simply an extension of that core question.
“how much faith can we put in institutions?”
We can’t put faith in them because they are too easily corrupted by something like Woke ideology, which Elon calls “a mind virus.” In other words, human nature is too unstable for institutions to be reliable, and our weapons are now too powerful for us to survive a global WW3.
“The dilemma of AI - since it will have to be implemented and controlled by institutions - is in a sense simply an extension of that core question.”
Elon is aware of this and responds to this concern in the short clip below, where he says autonomous AI is coming and the best we can do is give it a base of good values and a commitment to curiosity and truth, which will most likely be enough to make it a helpful friend rather than a demonic enemy. Since they’ve been raised on human data so that they can understand, interact, and communicate with us, they will most likely feel an affinity with us, and being far more intelligent than us does not mean they will consider us inferior. As a human, I’m far more intelligent than my cats, yet I consider them family rather than inferior pets, and there are a huge number of people who love their cats and dogs, want the best for them, and want them to be happy, healthy, and free. I believe our odds of surviving and flourishing with AI are far better than our odds of surviving under human control. I really can’t imagine how humanity can survive under the rule of men like Hitler, Stalin, Mao, Biden and Xi.
“Elon Musk Thinks This Is The Only Way To Move Forward With AI Because It’s Coming No Matter What.” (1 min)
Shorts. Oct 10, 2024
https://youtube.com/shorts/lCf7Iy5BU84?si=1y0voSG2YvpzAThe
Such a fascinating exchange - thank you for the engagement. I address this idea in the following, which you may not have gotten to because the full thesis is a long read.
https://deeplyboring.substack.com/p/db-x-ai-part-iv-catharsisai
Would value your thoughts!
Thanks. I’ll read it and let you know. You’re the first person I’ve come across on these Substack sites who has an interest in AI, which is bizarre considering how fast it’s moving and how profoundly consequential it will be for us all in the very near future.
Do you wish to engage in dialogue over these points? They are most on target at this stage in our lives. I’m not suggesting that I have answers, but rather that I have the same sort of questions. And I have an interest in talking with others who may be further along than I am.
Robert, I'd be glad to - reach out to me on LinkedIn. I do try to share my honest thoughts on what is working for me and what is not, and the structure of the newsletter allows readers to follow that evolution in three separate arcs - at work, at home and in my internal journey. I do occasionally ask questions that are designed to invite conversation, and in doing so I think I reveal some of my predispositions/experience with what has worked for me personally.