AI is even harder to merge with than normal tools, because the brain is very complicated. And “merge with AI” is a much harder task than just “create a brain computer interface”. A brain-computer interface is where you have a calculator in your head and can think “add 7 + 5” and it will do that for you. But that’s not much better than having the calculator in your hand. Merging with AI would involve rewiring every section of the brain to the point where it’s unclear in what sense it’s still your brain at all.
Finally, an AI + human Franken-entity would soon become worse than AIs alone. At least, this is how things worked in chess. For about ten years after Deep Blue beat Kasparov, “teams” of human grandmasters and chess engines could beat chess engines alone. But this is no longer true - the human no longer adds anything. There might be a similar ten-year window where AIs can outperform humans but cyborgs are better than either - but realistically, once we’re deep enough into the future that AI/human mergers are possible at all, that window will already be closed.
In the very far future, after AIs have already solved the technical problems involved, some eccentric rich people might try to merge with AI. But this won’t create a new master race; it will just make them slightly less far behind the AIs than everyone else.
I’m not sure Scott has bitten the transhumanist bullet here. Let me lay out a few points:
#1 There is no God.
#2 Death is morally wrong. As the most basic first principle of morality: it is morally wrong that your mother will die, it is morally wrong that your father will die, it is morally wrong that your children will die, and it is morally wrong that you will die.
There is no fundamental religious/universal reason for people to die, there is no deeper purpose, it is fundamentally just an engineering oversight we haven’t been able to fix yet.
#3 Death is fundamental to the human experience, to the meaning of being human. Once we successfully implement immortality, we are no longer human.
And I don’t mean this abstractly, I mean imagine living for 3000 years. What would your daily experience be like? How much would we have to rewire your brain to make this work? How much do you want to be able to remember?
I get the vibe Scott is worried about humanity being left behind, “the successor species”, but that’s not what’s happening. There is no morally acceptable future with humanity, there is no AI and humans living in harmony or conflict, we’re fundamentally discussing two “successor species”, AI and immortals.
I like that Scott is specifying how that successor species should be designed, what it should include, but…I’m not sure he’s internalized that there isn’t a future for humanity as it exists, we’re all going to become something fundamentally inhuman…and that’s a good thing.
I don’t think AI safety/alignment should set itself up as the protector of…classical human values the way Scott does here. Don’t get me wrong, I have deep, deep sympathy for preserving the fundamental human experience but…AI enthusiasts are not the people to do this, Scott is not the person to do this.
So in some sense part of what you’re looking for are people with complementary skills who have hobbies in other areas. You want those people.
When you’re writing and it doesn’t roll off the page, it’s because you don’t know who those people are. How could you know who they are? You haven’t met them yet. You don’t know who you’re writing to. You don’t know why. You’re not sure what they’re interested in. You’re getting zero feedback as you write. At least when you speak, you can see them eyeing the exit.
But with writing, this doesn’t happen. You’re sitting in front of a blank page. You’re trying to put your thoughts down and it just doesn’t happen. You’re almost literally talking to a wall. You’re like those homeless people who come up to strangers and start rambling. The telos of where you’re headed is terrifying and of course you resist.
I vibe with this a lot; it captures the struggle of building relationships and community.
All I can add to this is a lot of people seem to be looking for their “tribe” and that’s a real thing, a core thing like family is a core thing but…I think a lot of people are expecting their “tribe” to be one categorical thing that will fill all their social needs.
A lot of what I’m struggling with now is not just developing multiple friend groups but introducing them to each other.
Like, I got career/job homies and I got nerd homies and I got weird homies and I got downtown party homies and gaming homies but…they don’t really intermix. Like, there’s guys who might go to an AI talk but they’re not going to go downtown to party and vice versa.
But that seems…big. Like, your “tribe” feels less like one coherent entity, like ACX or something, and more like…you and your foodie friend and your MMA friend all like Lex Fridman and talk about him, but your foodie friend and your MMA friend might also be super outdoorsy and go do hikes together. Like, if you have 4 interests, your tribe isn’t interest #1, it’s 4-6 people with 2-3 interests in common.
I dunno, this feels as half-baked as the original essay but, in both, I think there’s something real.
…No summary, it’s just, like 5 hours of…this?
Which is…interesting. Like, a 5 hour online “salon” of talks, readings, and musical performances. I took a serious look at this and, man, I actually really like this idea but…5 hours of that music…meh? I dunno, excellent concept, wrong vibe for me, but it’s definitely interesting, especially as it’s…doing something, not just writing.
I would like to formally request that
spend less time writing and more time making undead grimdark Americana AI art…
God, Wagner is long. Dude, I’m starting to enjoy opera, catch the vibe but…5 hours? Dude, nothing happens, there’s no plot, I swear I could read all the lyrics and lines in 5 minutes. Just…glacially slow.
…
This song is frustrating. It’s got a great vibe and some killer, like properly murderous, riffs but…man, it’s repetitious and…off.
…
I’m 75% of the way through Edith Wharton’s “House of Mirth” and I’m officially bailing, gonna read some garbage 40k and then on to Ian Smith’s “Bitter Harvest”. Reread some “Sense & Sensibility” by Austen, forgot how much I liked her, and tried to find something else in that genre. Edith Wharton is…not that. Three big issues:
First, where Austen is certainly critical of British high society, especially later on in “Persuasion”, it always comes off as a flawed yet decent system. “House of Mirth”, and a lot of, like, late 19th-early 20th century literature is extremely negative. The American and European upper class in these stories isn’t flawed, it’s utterly fake and empty without any redeeming value.
Second, our heroine is not flawed, she’s…bad. A pretentious spoiled girl with few redeeming features whose primary appeal seems to be that she’s, like, 90% a garbage person in a society of 110% useless & horrible people. Almost a Holden Caulfield-like problem where…a fundamentally immature and unserious person is criticizing the fakes.
It’s very hard to get into a book fundamentally about two people with the wit and maturity of teenagers amongst a society of rich adult toddlers.
Finally, and I’m sure this is unfair, but “The Great Gatsby” just does all of this better. I’m sure “House of Mirth” was an inspiration for Gatsby but…Gatsby ain’t my favorite book but it does all the same stuff here, just so significantly better.
…
I’m changing up my stack a bit
Dropping the 300 melatonin, mostly because hormones = scary. Some sort of infrequent schedule is probably okay; I’ll experiment with it on nights when I know in advance I’ll have problems. Taking the melatonin @ 7:30 definitely helps with sleep.
A number of people have recommended magnesium for sleep, among other things, and it seems to be working so far.
Also, trying L-Glutamine to help with sugar cravings.
Re. "I get the vibe Scott is worried about humanity being left behind, “the successor species”, but that’s not what’s happening. There is no morally acceptable future with humanity, there is no AI and humans living in harmony or conflict, we’re fundamentally discussing two “successor species”, AI and immortals."
That's an interesting way of putting it. I have been thinking more along the lines that humans, even if they survive, must either be under the control of AI, or must become much smarter, because humans as they are now are simply too stupid to not kill themselves off in a higher-tech future. But I think the "immortals aren't 'human'" reframe accomplishes something similar without being insulting to humans today.