23 Comments

The core issue with Elon's ownership of Twitter is that he doesn't understand what the value of the platform is to most users. Its main use is for people to consume content: the vast majority of users tweet very little and use it as a combination RSS feed/comment section. But he seems to think most people are on there to post, like some old-school forum. So you have him building features that mostly appeal to people who post a lot, and trying to charge money to the small minority of people who actually make the content.

Like, LeBron without Twitter is still LeBron, while Twitter without LeBron is GeoCities without the sense of graphic design. I'm now sort of curious whether, if Elon were in charge of Netflix, he would try to charge rightsholders to be on the platform.

This terrific article seems to be talking about two different topics--Twitter and AI--but in fact there is a problem both share: the people making the decisions are not like the rest of us, and, not only that, they lack awareness of how different they are from most people. So, Musk, as you note, has the false belief that he can scold and browbeat people into purchasing his product. He may be unaware of how other people operate because of his Asperger’s, or because he is surrounded by fanboys, but for whatever reason, he is making decisions based on a unique mindset, and he doesn’t realize it. This mismatch doesn’t bode well for Twitter’s bottom line. (Meta is a telling counterexample here too: Zuckerberg invested heavily in VR, presumably because he and nerds like him enjoy it. But now that he is seeing that regular people don’t like VR, he has had the sense to shift his company away from it.)

Similarly, as you note, the people who are freaking out about AI “have otherwise never thought deeply about public policy and who have no concept of how their prescriptions might intersect with broader political and economic realities.” The “rationalists” (I always laugh at how irrational the rationalist community can be) are unaware that policy decisions and consumer choice will mitigate the threat of any technology. In any case, rationalists aside, normal people are responding to AI exactly as we would predict, and nothing is particularly scary. They’re taking care of mundane tasks, goofing around, and enhancing their work. People aren’t plotting to exterminate humanity using AI. That particular fear exists in the heads of a small, very unusual group of people, who don’t realize how unusual they are.

I’m reminded of Socrates, who said that he knows that he knows nothing, which is more than most people know.

Being self-defeatingly rude is apparently Elon’s brand. I’m pretty close to the exact target audience for a Tesla (liberal-ish suburbanite who doesn’t take many long road trips, and likes cool things with environmental branding). But I also don’t want political comments on my car, because that would be annoying. So I will wait until another brand has a similarly cool electric car.

I think there is a fallacy in the AI section. It seems to say:

1. Certain people are saying AI could kill us all

2. Their prescriptions to fix this are impractical and won’t work

3. Therefore AI won’t kill us all

It’s possible their fixes are wrongheaded but their underlying concerns are not.

Apr 9, 2023·edited Apr 9, 2023

Regarding Mastodon: I've certainly seen some of this HOA-like attitude, but ultimately it hasn't affected my experience. Also, it's being influenced by a lot of the Twitter expats. For example, there has previously been a bit of a culture against full-text search; you could only search by hashtag. But some tools are coming online to remedy that to a degree, and I think the lead developer of Mastodon has become more ambivalent about implementing it.

Substack Notes should implement the ActivityPub protocol; then it could integrate with Mastodon servers and those of other networks that support the technology. If the HOA-ers don't like it, they can block it, and the rest of the network can follow those accounts if they like. I'm not sure whether SN has subscription-only features, but that can be resolved (it's been done for the WordPress plugin).
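For context, ActivityPub integration mostly means publishing actor documents and delivering activities to inboxes. Here's a minimal sketch of the actor document a Notes server would need to expose (the URLs are hypothetical, not real Substack endpoints):

```python
import json

# Hedged sketch of a minimal ActivityPub actor document, the JSON-LD
# object a federating server publishes so other servers (e.g. Mastodon)
# can discover a user's inbox and outbox. All URLs are made up.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Person",
    "id": "https://notes.example.com/users/alice",
    "preferredUsername": "alice",
    "inbox": "https://notes.example.com/users/alice/inbox",
    "outbox": "https://notes.example.com/users/alice/outbox",
}

print(json.dumps(actor, indent=2))
```

Blocking a server, as mentioned above, then just means refusing deliveries from that server's domain to your inboxes.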

Apr 8, 2023·edited Apr 9, 2023

“I am sure that AI will pose problems... but this doomsday stuff just marks its proponents as bored dilettantes with no idea how policy or international relations work.

So I assume that in four years, this whole panic will be forgotten, just like how nobody talks about UBI anymore.”

Of course, your bets/predictions are at least somewhat aligned. Even if AI doesn't destroy us all in a nightmare sci-fi scenario, if it kills enough jobs (and it may very well), you'll be hearing quite a bit about UBI.

I work in applied ML (biotech), and I'm simultaneously very excited about these types of models and exasperated by how they seem to have tricked otherwise intelligent people into believing they represent anything more than a request-response loop. Like, an LLM doesn't (and probably can't? Although I've been wrong before) have anything like a continuous conscious experience; it doesn't have "motivations" that would cause it to pursue any goals. The whole discussion about "AI alignment" presupposes some notion of general human values that we're supposed to align these tools to. This makes sense on the small scale (the model should do what I want to the best of its ability, although by the time you're writing precise machine-readable specifications, just write the freaking code), but, like, am I "aligned" with other humans who have different ideological goals? Some humans even believe that there are too many humans!
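The request-response point is worth making concrete: LLM chat endpoints are stateless, and any apparent "memory" exists only because the caller re-sends the accumulated transcript on every call. A tiny sketch (`stateless_model` is a hypothetical stand-in for a real LLM API, not an actual library call):

```python
def stateless_model(transcript):
    """Stand-in for an LLM endpoint: a pure function of its input.
    A real model would generate text; here we just echo the transcript size."""
    return f"reply to {len(transcript)} message(s)"

# The caller, not the model, maintains the conversation.
history = []
for user_msg in ["hello", "what did I just say?"]:
    history.append({"role": "user", "content": user_msg})
    reply = stateless_model(history)  # sees ONLY what we pass in
    history.append({"role": "assistant", "content": reply})

# Drop `history` and the "conversation" is gone: there is no state,
# and nothing persists between calls that could carry a "motivation".
```

Between calls the model holds no goals, plans, or memory; everything it "knows" about the exchange lives in the caller's `history` list.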

They’re incredibly useful for software development (the Noah Smith analogy about machine tools is pretty good, although I’d compare it more to CAD software as used by mechanical engineers), querying large preloaded databases (preloaded in the form of training), and writing silly stories. Some of those things are pretty cool!

Whenever anyone tries to go down this AI apocalypse doom loop, I think it’s helpful to replace “AI” with “a really smart person” - could a really smart person figure out how to launch the nukes? Or could they do complex gain of function research by themselves and create a plague? Like, maybe? But it’s not clear to me that these LLMs will ever become “smarter” than a human, maybe they top out at human level? But you’re still limited by what humans can or cannot do.

> So I assume that in four years, this whole panic [about AI] will be forgotten, just like how nobody talks about UBI anymore.

Big fan of the site, but this take is wildly wrong. Four years from now, AI will be an even bigger part of our lives than it is right now. The people who warned that AI would take people's jobs still think that's true, but they've grown increasingly worried about a world in which humans lose control of our collective destiny.

You might think that's absurd, but consider what the world would be like if all the lawyers were AIs and all the programmers were AIs. Imagine that all the best work in economics was done by humans who used AI assistance, but people speculated that the AI could do the economics without human assistance.

I work as an AI researcher, and I can tell you we're barreling toward that world. I would bet you at 5:1 odds that people will be panicking about this more four years from now than they are today.

I personally think (with the caveat that I know nothing about the technology) that people are worried about AI for the wrong reasons. I'm not worried about it "taking over" or other sci-fi scenarios. No, I'm worried about programmers relying completely on AI to write the code for things, with no one actually having a handle on that code. Then, without any sci-fi stuff, you could have major glitches in critical software (pick whichever havoc-causing system you like: financial, air traffic control, whatever) that no one expected or knows how to fix, because everyone relied on the AI to get it right. That doesn't require sentience, I, Robot scenarios, or anything that spectacular.

On AI, you aren't actually engaging with the arguments that those who are concerned have for their core beliefs. Yeah, a six-month moratorium is kind of silly and half-baked, and they don't seem to have very good other solutions, other than "we need more research." But the core problem (it seems to me) is well laid out, and sometimes problems genuinely don't have an easy (or even moderately difficult) solution. At the very least, it seems clear that there IS a path from where we are to apocalyptic AI on a relatively short timeline, which makes it important to have people working on it and trying to convince the rest of the world to pay attention.

Every Mastodon server has an acceptable-use policy, just like every other service on the internet. The difference is that each server can set its own policies, including which other servers it interacts with.

But! Mostly nobody cares. Join one of the big ones and you won't even notice.

Out in meatspace, I lead a team that documents an engineering software suite. We specialize in describing to our customers how and why they should use specific features, and in what ways and combinations are appropriate.

Contra a couple of emails from higher management, I am not so worried about large-language-model artificial "intelligence" products. The bullshit they produce is very impressive, but it does not appear to be on a convergent track with knowledge or understanding.

Substack notes: To each their own. One of the main reasons I don't like Twitter and didn't like it from the beginning seems to apply to Substack notes. I just don't see the appeal of reading or writing 140 character comments (or however many characters are allowed). I guess Twitter has extra baggage over the last few years that makes it distinct from Substack notes. But if a "note" walks like a duck and tweets like a duck, then it's probably a tweet.

Re: Twitter, do you have any feelings about the fact that some of your audience discovered you there? Not in a sense of responsibility, just a feeling about the loss of something that has been valuable.

Re: AI, the pause seems like a request to let the big players mount their counterattack against the upstarts.
