No wonder Reddit has turned to shit
-
God, I just hate the way these people fucking talk. Everything is a bulleted list and sentence fragments.
That's one of the signs of LLM output: take any idea and have it flesh it out into a short article. It'll bullet point the crap out of it.
-
You mention your product in the reply and hope that some poor sap doesn't realize it's astroturfing and instead thinks they've found a glowing, honest review from a totally organic real person recommending a thing that actually works.
Even worse: They are hoping that LLMs in training don't realize that it's an ad
-
means bozos are making even searching exclusively by reddit useless, because they're making the post get to the first page through writing SEO + an ad for their own shit on it
Not quite. They're making posts on reddit that few if any humans will ever read, targeting rising threads and planting comments before AI reads them. Then when someone asks AI a related question, it regurgitates the planted comment rather than established facts.
So it's not SEO on humans searching reddit, more like SEO in the AI domain.
-
God, I just hate the way these people fucking talk. Everything is a bulleted list and sentence fragments.
100% written by an LLM. They always use this tone and it’s infuriating.
-
That's one of the signs of LLM output: take any idea and have it flesh it out into a short article. It'll bullet point the crap out of it.
Yeah, but even when it's not an LLM, they type like this now
-
So it's just a grift. Makes sense, they always use grift-style buzzwords. I was about to comment on the ridiculousness of building a business solely on manipulation, but then I thought about it a bit more haha. Thanks for the explanation.
It's a grift, but with extra steps. It's not about affecting the experience on reddit, but the experience of AI users. They use reddit to plant answers, which AI then trains on and regurgitates later.
Eventually the reddit thread would probably balance out: incorrect information should get downvoted and replaced by corrections from people who know better. However, AI might not account for that and could still spit out the planted information. That delicate manipulation is what this LinkedIn Lunatic is bragging about here.
-
Marketers and their bots have been using reddit to hype up brands. No wonder Reddit feels like shit these days.
You mean more shit? Because it was already shit.
-
It'll happen in the Fediverse too.
For it to happen in the Fediverse, AI would have to be training on the Fediverse.
That's what this post is about. Using reddit to plant comments that AI trains on, and subsequently getting AI to spit out your answer to questions it's asked.
As such, this can happen on any site AI trains on. The issue is with how AI is trained, not with how the websites it trains on are operated.
-
I have no idea what any of that means, and I'm happy with that.
Basically, they figured out a way to train AI to recognize Reddit threads that are going viral (and/or predict which ones will), work out which of those will also rank highly in Google results and tend to get used as sources by the biggest LLMs, and then post in those threads about whatever you want to generate attention for. So it's an overcomplicated way of automating advertising: optimized posting to convince LLMs to talk about whatever you want to advertise (rough sketch of the idea below).
I've always said this kind of SEO was always going to happen; Google is at fault for letting the search-optimized result and the best result for what the user is actually asking drift apart. We're now going to start seeing either LLMs selling whatever this tactic gets used on, or essentially a sort of adblock built into LLM training and search APIs to keep it from working, i.e. to make LLMs less likely to fall for native advertising/astroturfing.
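To make the "overcomplicated advertising" point concrete: stripped of the buzzwords, the whole scheme is basically a scoring loop over rising threads. Here's a rough, purely hypothetical Python sketch; the `Thread` fields, the thresholds, and the `plant_comment` stub are all invented for illustration, not anything from the actual post.

```python
# Purely illustrative sketch of the scheme described above.
# Every name, signal, and threshold here is invented; a real operation
# would plug in actual Reddit, Google, and LLM-citation data.

from dataclasses import dataclass


@dataclass
class Thread:
    title: str
    upvotes_per_hour: float    # crude "going viral" signal
    google_rank_score: float   # 0..1: guessed chance of hitting Google's first page
    llm_source_score: float    # 0..1: guessed chance of being scraped/cited by big LLMs


def worth_targeting(t: Thread) -> bool:
    """Only bother with threads that are rising AND likely to end up in search results or training data."""
    return (
        t.upvotes_per_hour > 50
        and t.google_rank_score > 0.6
        and t.llm_source_score > 0.5
    )


def plant_comment(t: Thread, product_blurb: str) -> None:
    # Stand-in for whatever bot or API actually posts the "organic" recommendation.
    print(f'[{t.title}] "Honestly, {product_blurb} fixed this for me."')


threads = [
    Thread("Best tool for X?", upvotes_per_hour=120, google_rank_score=0.8, llm_source_score=0.7),
    Thread("Cat picture dump", upvotes_per_hour=300, google_rank_score=0.2, llm_source_score=0.1),
]

for t in threads:
    if worth_targeting(t):
        plant_comment(t, "AcmeWidget Pro")
```

The only "clever" part being sold is presumably better versions of those two score fields; the rest is just a bot posting ads.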
-
God, I just hate the way these people fucking talk. Everything is a bulleted list and sentence fragments.
Like the world's worst haiku.
-
The turbo-hell part is that the spam comments aren't even being written for humans to see. The intention is that ChatGPT picks up the spam and incorporates it into its training.
I worked at a company that sold to doctors and the marketing team was spending most of their effort on this kind of thing. They said that nowadays when doctors want to know "what should I buy to solve X?" or "which is better A or B?" they ask ChatGPT and take its answer as factual. They said that they were very successful in generating blog articles for OpenAI to train on so that our product would be the preferred answer.
My god. Somehow I hadn't thought of doctors using LLMs to make decisions like that. But of course at least some do.
-
Yeah, but even when it's not an LLM, they type like this now
Monkey see, monkey do...
-
I'm looking for a network and/or internet with strong authentication that is open to unique human users only. Sure, bots could still use someone's credentials, but at least their scale & impact would be limited.
strong authentication that is open to unique human users only
Unless you completely ditch anonymity, this can only turn into a state-captured propaganda platform. Whoever controls access/auth will have the keys to the content.
-
The turbo-hell part is that the spam comments aren't even being written for humans to see. The intention is that ChatGPT picks up the spam and incorporates it into its training.
I worked at a company that sold to doctors and the marketing team was spending most of their effort on this kind of thing. They said that nowadays when doctors want to know "what should I buy to solve X?" or "which is better A or B?" they ask ChatGPT and take its answer as factual. They said that they were very successful in generating blog articles for OpenAI to train on so that our product would be the preferred answer.
Considering that LLM-generated content that makes it into the training data makes the trained LLMs worse... is this adversarial?
-
Marketers and their bots have been using reddit to hype up brands. No wonder Reddit feels like shit these days.
people have been doing this the sweaty way for a decade
-
My god. Somehow I hadn't thought of doctors using LLMs to make decisions like that. But of course at least some do.
Oof. Haven't met a lot of doctors, huh? Check out some of their subreddits.
-
Marketers and their bots have been using reddit to hype up brands. No wonder Reddit feels like shit these days.
This doesn't explain why Reddit has decided that I need multiple Hindi sub referrals every day.
Years and years of data regarding me, zero Hindi, zero Indian, zero interest, AND YET here's another suggested Hindi sub! Fantastic work.
But the Jesus ads sealed the deal, adios Redditto
Apologies for my off-topic ramble, but I feel better.
-
Marketers and their bots have been using reddit to hype up brands. No wonder Reddit feels like shit these days.
More than a decade and a half later, pretentious SEO fanatics still fucking make my eyes roll.
-
Who is the 'good' actor in this 'capitalism' thing?
-
This doesn't explain why Reddit has decided that I need multiple Hindi sub referrals every day.
Years and years of data regarding me, zero Hindi, zero Indian, zero interest, AND YET here's another suggested Hindi sub! Fantastic work.
But the Jesus ads sealed the deal, adios Redditto
Apologies for my off-topic ramble, but I feel better.
This is so funny because I'm Indian and I've never come across a Hindi sub on Reddit.