This App May Indicate Something Is Deeply Wrong with Us


But what?

There’s a new social media app. It’s sort of like Twitter, it’s free, and everyone who signs up for it immediately has—I kid you not—one million followers. And they’re all hanging on your every word, eager to talk about every post you make on it.

The catch? Those one million followers are exclusively LLM-based bots.

No one on SocialAI (“the AI social network”) has human followers. Only bots. No other human users of the app see anything you write on it or the app’s replies to you.

When you sign up—yes, I tried it, briefly, for the purposes of this post—you get to select the “personalities” of the types of bots who will follow you. You can select a mix of followers who are more supportive or more critical (there’s even a “trolls” option, if that’s what you’re into), liberal or conservative, debaters, teachers, “odd-balls” and “curious cats”, “alarmists” and “optimists”, and so on. And then you just post, wait a few moments, and then see what your followers have to say.

This morning, I put up the following post on SocialAI: “I’m considering creating a social media app in which everyone has a million followers. The catch is that each and every one of these followers is an AI. Would this be good?” Here are some of the replies:

[Screenshots of the bot replies omitted; one is from a follower named “Rita Truthwhistler,” mentioned below.]

The app’s creator, Michael Sayman, introducing the app on X, writes:

I’ve always wanted to create something that not only showcases what’s possible with tech but also helps people in a real, tangible way. SocialAI is designed to help people feel heard, and to give them a space for reflection, support, and feedback that acts like a close-knit community. This app is a little piece of me – my frustrations, my ambitions, my hopes, and everything I believe in. It’s a response to all those times I’ve felt isolated, or like I needed a sounding board but didn’t have one. I know this app won’t solve all of life’s problems, but I hope it can be a small tool for others to reflect, to grow, and to feel seen.

To some (like my follower Rita Truthwhistler, above), this will be the culmination of social media, which, despite the word “social,” often isolates people from one another by substituting for more substantive forms of engagement, and encourages narcissism.

But really it’s just sad. If this app becomes successful, what does that tell us? That we’re not good at being there for other persons, such that many of them feel they have to turn to this? That we don’t care if there are other persons there for us, since we can have substitutes like this? Both?

Someone might reply: “But isn’t it better that people whose social needs go unfulfilled have at least this?” Maybe for them (if they really have no other prospects for social engagement), in some ways. Maybe. But for humanity? No. Better that we figure out how to appreciate each other and understand why that’s important than to encourage “solutions” that will ultimately make us too ignorant and unmotivated to do so.

23 Comments
Michael McInerney
7 months ago

While I don’t remember the original source for the idea, someone came up with a thing called “heaven banning” (along with a fake article from the future saying that it was proving to be a controversial policy on Reddit). Heaven banning means that your comments posted to a given site are not visible to any of the site’s actual users, but you are still engaged with as if they were visible; all of that “engagement” is generated by AI, not actual people.

The saddest thing isn’t that such an idea is feasible. It’s that there’s probably a non-negligible market of people who would willingly sign up for it of their own accord.

SCM
7 months ago

While I don’t think this app is a particularly good idea, I do now want to change my last name to “Truthwhistler.” Who wouldn’t want to be the Yondu Udonta of philosophy?

Daniel Muñoz
7 months ago

This is the Experience Machine on a shoestring budget.

SorryNotLLM
Reply to Daniel Muñoz
7 months ago

Indeed, though only on the premise that users know how to maximize their pleasure by “follower-profiling.”

Michel
7 months ago

I already have this app. It’s called assigning take-home work like essays.

Will Behun
7 months ago

My immediate thought (and likely my use for this) was that at least for me it might be nice to “test run” some of my posts to see what kind of responses I’m liable to get without actually bothering anyone. I know in the past I’ve had to take down posts or add commentary because of my own blind spots. It would be nice to have a sandbox.

Robert Gressis
7 months ago

I suspect that this is a taste of things to come. Humans are complicated and unpredictable, and if you have a choice between a being who doesn’t react the way you hope and a being that does, many people will think, “Why shouldn’t I deal with the beings who make me happier?”

I think people who didn’t *grow up* constantly interacting with AI will find AI interlocutors to be unsatisfying in some way, but most of the people who do grow up with them will prefer them and think that the older humans are masochistic for wanting to deal with other humans.

And ironically, the more that people turn themselves into the kinds of people who find other people too obnoxious to be worth the bother, the more they will be right: people who prefer AIs to humans are deeply off-putting, because they’ll have turned themselves into instant-gratification machines.

P.D. Magnus
7 months ago

It’s true that social media often makes people more isolated, but it also connects some people in ways they couldn’t be connected without the social network. This system is all the downside without even the possibility of the upside.

One might think that the new AI thing is the natural limit of sites that drive user engagement with bot accounts. (The Ashley Madison leaks from a few years back showed that most men paying for the system were just interacting with dummy accounts.) However, the AI system not only isn’t social but by design couldn’t be social. It could not connect you with anyone, even if there were other subscribers who wanted to connect with you. It is a social-media-themed online game rather than a social network.

Patrick Lin
7 months ago

Generative AI has massive environmental costs for questionable benefits:

  1. Training an AI model creates emissions equivalent to those of 5 automobiles over their entire lifetimes.
  2. Creating an AI image uses about as much energy as recharging a smartphone.
  3. Having an AI write a 100-word email uses up about a bottle of water.

So what do you think the environmental impact of a million AI followers would be?

Sources:

  1. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
  2. https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/
  3. https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers
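
A back-of-envelope sketch of Patrick’s closing question, leaning entirely on the third figure above (roughly one ~500 mL bottle of water per 100-word AI reply). The per-reply number is an illustrative assumption; actual costs vary widely by model and data center.

```python
# Illustrative only: scale the "one bottle of water per 100-word email"
# estimate (source 3 above) to a single post that draws one reply from
# each of a million AI followers.

followers = 1_000_000
water_per_reply_liters = 0.5  # assumption: ~one bottle per reply

total_liters = followers * water_per_reply_liters
print(f"{total_liters:,.0f} liters of water per post")  # 500,000 liters
# For scale: an Olympic pool holds ~2.5 million liters, so each post
# would use roughly a fifth of a pool under these assumptions.
```
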
mario
Reply to Patrick Lin
7 months ago

I believe that Microsoft just signed a deal to restart Three Mile Island in order to deal with #1- and #2-type issues.

Nicolas Delon
Reply to Patrick Lin
7 months ago

These estimates are deceptive absent comparisons with alternatives. There is no doubt computing technology is very energy (and land and water) intensive; excessively so, and this is no good. But for any use of such technology, assuming defensible ends and holding such ends fixed, you have to ask what the costs of the alternatives are. Of course, here it’s not obvious the ends are worth much, but people not using this technology will still be doing something else instead, and it’s probably not going to be carbon neutral. Five lifetimes’ worth of cars is a lot, but if your figure is referring to the training of a publicly available model, then, assuming potentially major gains in efficiency afforded by the model at scale, five cars is negligible. YMMV.

Kenny Easwaran
Reply to Patrick Lin
7 months ago

I take it that all of this is meant to show that AI has *lower* carbon impact than most people seem to think, and is generally comparable to lots of other ordinary activities we engage in, like riding a bus to campus, even ignoring the activities we often feel carbon guilt about.

Nicolas Delon
Reply to Kenny Easwaran
7 months ago

Indeed, we have to ask: what is the environmental impact of commenting on Daily Nous? Let’s all feel some carbon guilt about it too! 🙂

Simon
Reply to Patrick Lin
7 months ago

“Training an AI model creates emissions equivalent to those of 5 automobiles over their entire lifetimes.”

This is actually a very small amount of emissions, given that it covers the training of an entire model.

One test here is to check whether your attitude toward the emissions is number-sensitive. If you had learned it was 50 cars, or 500 cars, or .5 cars, or .05 cars, how would your attitude toward the carbon impact change? We need a way of assessing *how significant* the emissions are. In my opinion, one good way to do this is to use a hypothetical carbon tax to get a sense of the relevant negative externalities. At 40 dollars per ton, for example, the negative externalities from training a model are relatively small compared to most industries.
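
For concreteness, a hedged version of that calculation: the MIT Technology Review article cited above puts the training-run figure behind the “5 cars” equivalence at roughly 626,000 pounds of CO2, and $40 per ton is just the hypothetical tax rate from the comment.

```python
# Back-of-envelope carbon-tax estimate for one training run.
# Assumptions: ~626,000 lb CO2 (the figure behind the "5 cars"
# equivalence in the MIT Technology Review article cited above)
# and a hypothetical $40 per metric ton carbon tax.

LBS_PER_METRIC_TON = 2204.6

training_emissions_lbs = 626_000
tax_usd_per_ton = 40

tons = training_emissions_lbs / LBS_PER_METRIC_TON
print(f"{tons:.0f} t CO2 -> ${tons * tax_usd_per_ton:,.0f} in externalities")
# ~284 t CO2 -> about $11,400, small next to most industrial processes.
```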

Kenny Easwaran
Reply to Simon
7 months ago

I do think there’s an interesting question about how quickly models are growing, and how much more energy successor models require to train (and even to run) than the first GPTs a couple years ago. It does look like these AI training runs could add up to a few percent of all electricity use within a decade – but this will be at a time when the vast majority of electricity is generated in carbon-neutral ways, if current trends in generation continue.

Simon
Reply to Kenny Easwaran
7 months ago

Definitely. This has a nice analysis of potential power costs in the future: https://epochai.org/blog/can-ai-scaling-continue-through-2030. Another thing to remember is that the models are also becoming more efficient: the cost per token of ChatGPT has fallen dramatically over time.

Charles Pigden
7 months ago

Isn’t this just a less extreme version of Nozick’s Experience Machine?

Nicolas Delon
Reply to Charles Pigden
7 months ago

It’s much less extreme in that after you join you still know you’re in the machine and can presumably exit as you please. In the Experience Machine, once you’re hooked you don’t know that you’re hooked.

Barry Lam
7 months ago

Oh, this is so awesome. Why don’t we just migrate everyone on Twitter over there without telling them, and gradually replace their followers with AIs, one at a time?

A-S
7 months ago

Experience machine? No thank you. (Very creepy.)

Kenny Easwaran
7 months ago

To me this sounds valuable as a harm-reduction tool. If you find yourself addicted to social media, despite the negative feelings it often creates about the world and other people, this can be a way to wean yourself from it while getting a version of the high without the negative emotions. (Or if you, like a spicy food lover or horror movie lover, are addicted to the negative emotions, you can at least get a version of them without having made someone else’s day worse.)

Simon
7 months ago

I think a very interesting next step would be to create a version of Twitter with just AIs, and have them generate, share, and like content using the same algorithms as Twitter. The network could be initialized with AIs having different ideologies and epistemic strategies to test memetic fitness: which AIs get the most views over time? This could then be used to empirically test hypotheses about “mind viruses,” etc. (as in the literature cited here: https://www.conspicuouscognition.com/p/there-is-no-woke-mind-virus).
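
A minimal sketch of what such a simulation loop might look like. Everything here is an assumption for illustration: the ideologies, the homophily-style like probabilities, and the engagement-ranked feed are stand-ins rather than Twitter’s actual algorithm, and the LLM generation step is stubbed out.

```python
import random
from collections import defaultdict

# Hypothetical all-AI feed simulation: agents with fixed "ideologies"
# post once, then repeatedly view an engagement-ranked feed and like
# what matches their views. Memetic fitness = total views per ideology.

IDEOLOGIES = ["optimist", "alarmist", "contrarian"]

class Agent:
    def __init__(self, ideology: str):
        self.ideology = ideology

    def post(self) -> dict:
        # Stand-in for an LLM call that would generate real content.
        return {"ideology": self.ideology, "likes": 0, "views": 0}

    def engage(self, post: dict) -> None:
        post["views"] += 1
        # Homophily assumption: like in-group content 80% of the time,
        # out-group content 20% of the time.
        p_like = 0.8 if post["ideology"] == self.ideology else 0.2
        if random.random() < p_like:
            post["likes"] += 1

random.seed(0)
agents = [Agent(random.choice(IDEOLOGIES)) for _ in range(300)]
posts = [agent.post() for agent in agents]

for _ in range(20):  # simulation rounds
    # Engagement-ranked feed: the 50 most-liked posts get surfaced.
    feed = sorted(posts, key=lambda p: p["likes"], reverse=True)[:50]
    for agent in agents:
        for post in feed:
            agent.engage(post)

views_by_ideology = defaultdict(int)
for p in posts:
    views_by_ideology[p["ideology"]] += p["views"]
print(dict(views_by_ideology))  # which ideology won the attention market?
```

With real LLM agents, the post and engage steps would call a model; this skeleton just isolates the ranking-plus-homophily dynamics that determine which ideology accumulates the most views.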