No wonder Reddit has turned to shit
-
building a business solely on manipulation
See also: all of advertising and marketing
Haha yep, my exact train of thought while typing
-
I'm looking for a network and/or internet with strong authentication that's open to unique human users only. Sure, bots could still use someone's credentials, but at least their scale & impact would be limited.
If you have any suggestions on how to implement that, it's a million-dollar idea.
The "I'm a human" test that only takes a few seconds and then lets you do what you like for an hour was always vulnerable to 'auth farms'. Pay some poor bastards in the third world a pittance to pass the test a thousand times an hour, let the bots run wild. And the bots have gained the ability to pass the tests themselves, at least by boiling the oceans in some datacentre while the VC money holds out.
Finding the people running the bots, fitting them with some very heavy boots and then seeing if they can swim in the deep ocean is probably needlessly cruel, but I'd be up for tarring and feathering a few. Once the videos got out, the rest might think harder about their life choices...
-
You mention your product in the reply and hope that some poor sap doesn’t realize it’s astroturfing and thinks they’re finding a really glowing honest review from a totally organic real person who recommended a thing they found that actually works
This is why I just never talk about brand-name products online, I don't want to seem like an advertising shill account (I'm just here to get into heated political arguments and shitpost)
I'll recommend things to people I know IRL, but very rarely will I do it online
-
I have no idea what any of that means, and I'm happy with that.
means bozos are making even searching exclusively by reddit useless, because they're getting their posts to the first page by writing SEO + ads for their own shit on it
-
I don't understand the point of this. Like, you figure out how to increase traffic on certain posts/comments and there's somehow a push for this? Do they get money somehow? These people always use terminology and CEO buzzwords as if it's some big-business level that people are aspiring to reach, but what's the actual point? Why would I care if my post/comment exploded? Was I just using Reddit wrong?
They're trying to get their comments slurped up by AI bots so that whenever somebody asks a chatbot about waffles, the chatbot says that Crappo Brand Waffles are what kids crave (TM).
-
Marketers and their bots have been using reddit to hype up brands. No wonder Reddit feels like shit these days.
The turbo-hell part is that the spam comments aren't even being written for humans to see. The intention is that ChatGPT picks up the spam and incorporates it into its training.
I worked at a company that sold to doctors and the marketing team was spending most of their effort on this kind of thing. They said that nowadays when doctors want to know "what should I buy to solve X?" or "which is better A or B?" they ask ChatGPT and take its answer as factual. They said that they were very successful in generating blog articles for OpenAI to train on so that our product would be the preferred answer.
-
You mention your product in the reply and hope that some poor sap doesn’t realize it’s astroturfing and thinks they’re finding a really glowing honest review from a totally organic real person who recommended a thing they found that actually works
It's not just about random people reading the comment, but specifically LLMs that use reddit as a source, because becoming the chatbots' go-to answer when people ask "what lawnmower should I buy" is increasingly more valuable than paying for a Google search ad.
-
Marketers and their bots have been using reddit to hype up brands. No wonder Reddit feels like shit these days.
God, I just hate the way these people fucking talk. Everything is a bulleted list and sentence fragments.
-
Wtf, it literally never crossed my mind to use a forum like this. So fucking dumb. It's like everyone is scrambling for a couple percentage points over the next guy.
Line must go up, always & forever.
-
God, I just hate the way these people fucking talk. Everything is a bulleted list and sentence fragments.
That's one of the signs of LLM output: take any idea and have it flesh it out into a short article. It'll bullet-point the crap out of it.
-
You mention your product in the reply and hope that some poor sap doesn’t realize it’s astroturfing and thinks they’re finding a really glowing honest review from a totally organic real person who recommended a thing they found that actually works
Even worse: They are hoping that LLMs in training don't realize that it's an ad
-
means bozos are making even searching exclusively by reddit useless, because they're getting their posts to the first page by writing SEO + ads for their own shit on it
Not quite. They're making posts on reddit that few if any humans will ever read, targeting rising threads and planting comments before AI reads them. Then when someone asks AI a related question, it regurgitates the planted comment rather than established facts.
So it's not SEO on humans searching reddit, more like SEO in the AI domain.
-
God, I just hate the way these people fucking talk. Everything is a bulleted list and sentence fragments.
100% written by an LLM. They always use this tone and it’s infuriating.
-
That's one of the signs of LLM output, take any idea and have it flesh it out into short article. It'll bullet point the crap out of it
Yeah, but even when it's not an LLM, they type like this now
-
So it's just a grift. Makes sense, they always use grift-style buzzwords. I was about to comment on the ridiculousness of building a business solely on manipulation, but then I thought about it a bit more haha. Thanks for the explanation.
It's a grift, but with extra steps. It's not about affecting the experience on reddit, but for AI users. They use reddit to plant answers, which AI then trains on and regurgitates later.
Eventually the reddit thread would probably balance out, and incorrect information should get downvoted and replaced by corrections from people who know better. However AI might not account for this and could still spit out the planted information. It's this delicate manipulation that this LinkedIn Lunatic is bragging about here.
-
Marketers and their bots have been using reddit to hype up brands. No wonder Reddit feels like shit these days.
You mean more shit? Because it was already shit.
-
It'll happen in the Fediverse too.
For it to happen in the Fediverse AI would have to be training on the Fediverse.
That's what this post is about. Using reddit to plant comments that AI trains on, and subsequently getting AI to spit out your answer to questions it's asked.
As such this can happen anywhere where AI is being trained. The issue is with how AI is training, not with how websites it trains on are being operated.
-
I have no idea what any of that means, and I'm happy with that.
Basically, they figured out a way to train AI to recognize Reddit threads going viral (and/or predict which ones will), pick out the ones that also rank highly in Google results and tend to get used as sources by the biggest LLMs, and then post in those threads about whatever you want to generate attention for. So it's an overcomplicated way of automating advertising: optimized posting to convince LLMs to talk about whatever you want to advertise.
I've always said SEO gaming like this was inevitable. Google is at fault for the search-optimized result and the best result for what the user is actually asking not being the same result. We're now going to see one of two things: either LLMs start selling whatever this tactic gets used on, or a sort of adblock gets built into LLM training and search APIs to keep it from working, making LLMs less likely to fall for native advertising/astroturfing.
-
God, I just hate the way these people fucking talk. Everything is a bulleted list and sentence fragments.
Like the world's worst haiku.
-
The turbo-hell part is that the spam comments aren't even being written for humans to see. The intention is that ChatGPT picks up the spam and incorporates it into its training.
I worked at a company that sold to doctors and the marketing team was spending most of their effort on this kind of thing. They said that nowadays when doctors want to know "what should I buy to solve X?" or "which is better A or B?" they ask ChatGPT and take its answer as factual. They said that they were very successful in generating blog articles for OpenAI to train on so that our product would be the preferred answer.
My god. Somehow I hadn't thought of doctors using LLMs to make decisions like that. But of course at least some do.