‘Stranger Things’ Creators Accused by Fans of Using AI To Write Series Finale
-
AI-generated content has created an army of people who now have an accusation they can levy at any content they find to be poor quality, without having to actually articulate any real assessment or critique of it.
It has also made it necessary, or at least reflexive, for even the most conscientious person to consider whether content was AI-generated, even if they never admit they're considering it. No one wants to praise something only to learn a computer shit it out.
It’s all really corrosive to creativity and the place it holds in our social fabric.
If AI really is that bad, then you shouldn't have to worry about accidentally liking something made by AI. Making sure it isn't AI before praising it is just bias.
-
Aside from a few blurry screenshots, there's no concrete evidence they used generative AI while writing.
Saved a click
Sidenote: just because something is bad doesn't mean it has to be AI slop. Showrunners were fucking up finales long before ChatGPT came on the scene.
What do people even think is bad about the ending? I thought they wrapped everything up pretty well and left a bit open to the viewer.
Also, what is the controversy here even about at its root? That there would have been some better ending if they didn't use ChatGPT (assuming they even did to begin with)? Regardless, it seems like a nonsensical thing to get outraged about, especially with so little evidence.
-
Season 5 actually starts off better than I expected, but it peaks at around the third episode and gets increasingly Marvel and cringe from there. The characters also have to constantly explain the plot to the audience in these recurring "plan" scenes, because until now the show has done such a poor job of explaining why any of the supernatural and superpower stuff actually exists. The writers backed themselves into a corner where the only way out is to infodump in every single episode of the final season.
As has been the case with previous seasons, the main reason to watch is for the individual characters who generally remain quite likeable. In particular, Will finally gets a big role in this season and not just the usual weak, tormented boy thing he's been stuck with until now. He's finally an active participant in the story, making decisions and influencing others instead of just constantly needing to be saved/protected.
I watched Season 1 a few weeks before 5 dropped. The difference was stark lol
-
I’d heard unsubstantiated rumors of a Nancy & Robin spinoff.
To be clear, I think you might want to Google Montauk (or whatever we call searching these days now that Google sucks).
Edit: And I did kind of love the Nancy-as-Ripley bit. Lots of Alien homage going on in the last season.
-
What do people even think is bad about the ending? I thought they wrapped everything up pretty well and left a bit open to the viewer.
Also, what is the controversy here even about at its root? That there would have been some better ending if they didn't use ChatGPT (assuming they even did to begin with)? Regardless, it seems like a nonsensical thing to get outraged about, especially with so little evidence.
If you're going to use AI, then don't pretend you actually wrote the thing. Otherwise, everyone will call you a big fat liar.
-
If AI really is that bad, then you shouldn't have to worry about accidentally liking something made by AI. Making sure it isn't AI before praising it is just bias.
I dunno. There's something hollow in pure AI-hallucinated images, video, and music. Finding it beautiful or good feels like preferring one flavor of white noise over another: undifferentiated sensory input with no human craft.
There's definitely nuance, and a spectrum to how much AI is used and how it's executed. I'm prejudiced against prompted images, videos, and songs specifically: anything directly spit out by AI with a text string as input.
I have an open mind about things cobbled together by an artist blending traditional techniques with AI.
-
Aside from a few blurry screenshots, there's no concrete evidence they used generative AI while writing.
Saved a click
Sidenote: just because something is bad doesn't mean it has to be AI slop. Showrunners were fucking up finales long before ChatGPT came on the scene.
Ew, human slop.
-
Yup, this was my immediate reaction to Volume 2. There were large exposition dumps with generic dialog that could have been reassigned to any of the characters with minimal rewriting. It felt like they were having AI generate all the portions of the script they weren't interested in.
Don't think AI is that crafty.
-
The amount of whining bullshit that goes on about the last season of Stranger Things is just fucking absurd.
It was a great show that was a ton of fun throughout.
-
The amount of whining bullshit that goes on about the last season of Stranger Things is just fucking absurd.
It was a great show that was a ton of fun throughout.
Seriously. The ending didn't hit the peaks some fans wanted. But these parasocial fans are treating it like it was another Game of Thrones ending.
-
AI-generated content has created an army of people who now have an accusation they can levy at any content they find to be poor quality, without having to actually articulate any real assessment or critique of it.
It has also made it necessary, or at least reflexive, for even the most conscientious person to consider whether content was AI-generated, even if they never admit they're considering it. No one wants to praise something only to learn a computer shit it out.
It’s all really corrosive to creativity and the place it holds in our social fabric.
It's frustrating when I see all the shitty posts in indie game forums like "This looks like AI. Prove that it's not," followed by trolling the developer. Witch hunts like that are what will piss people off and turn them against the Fuck AI crowd.
-
What do people even think is bad about the ending? I thought they wrapped everything up pretty well and left a bit open to the viewer.
Also, what is the controversy here even about at its root? That there would have been some better ending if they didn't use ChatGPT (assuming they even did to begin with)? Regardless, it seems like a nonsensical thing to get outraged about, especially with so little evidence.
I'm not part of that group, but their chief complaints are that:
- The final battle didn't feel "final" enough
- A show that was so tightly written for the past four seasons seems to have left many loose threads unanswered in the ending
These fans are why we'll never see the completion of A Song of Ice and Fire.
-
What do you mean? I don't use ChatGPT, but I've seen its outputs, and it seems more than capable of writing exposition dumps.
-
What do you mean? I don't use ChatGPT, but I've seen its outputs, and it seems more than capable of writing exposition dumps.
It doesn't think. It has no logic skills. It doesn't understand English or the rules for constructing good literature.
It can spit out sentences all day. But constructing dialog that is interchangeable between characters and still functions well enough is only something it could luck into. Even with generic throwaway lines, making them interchangeable takes either a very dull story or a good writer. While AI can spit out dumb, disjointed, and meaningless stories, it cannot neatly craft meaningful sentences unless you give it every single bit of context and structure your prompts well.
It'd be less work to just... be creative yourself.
-
It doesn't think. It has no logic skills. It doesn't understand English or the rules for constructing good literature.
It can spit out sentences all day. But constructing dialog that is interchangeable between characters and still functions well enough is only something it could luck into. Even with generic throwaway lines, making them interchangeable takes either a very dull story or a good writer. While AI can spit out dumb, disjointed, and meaningless stories, it cannot neatly craft meaningful sentences unless you give it every single bit of context and structure your prompts well.
It'd be less work to just... be creative yourself.
But it generates off what you put in, right? Like, if you fed it a script up to a certain point and said, "write a scene where Mike explains to the group what happened with [X]," it could do that, right?
Because my experience with Volume 2 was that it was almost entirely exposition dumps or recaps. Episode 5 and half of episode 6 are essentially just characters going to different locations and explaining previous events or future plans. Like I said, I've never used AI, so I'm not an expert, but from what I've seen of ChatGPT, it doesn't seem impossible to write an outline, write the scenes you're most interested in, then say, "turn this summary into a full scene" or, "add dialog where these characters explain what happened to them in the Upside Down to Joyce."
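For what it's worth, that workflow is trivial to wire up. Here's a minimal sketch, assuming the official openai Python client; the model name, prompts, and variable names are all illustrative, not anything the show's writers are confirmed to have used:

```python
# Toy sketch of the "turn this summary into a full scene" workflow.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model choice and all text here are made up for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

script_so_far = "[the screenplay up to this point]"
summary = "Mike explains to the group what happened with [X]."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "You are a screenwriter. Match the tone and "
                       "voice of the script provided.",
        },
        {
            "role": "user",
            "content": f"Script so far:\n{script_so_far}\n\n"
                       f"Turn this summary into a full scene with "
                       f"dialog:\n{summary}",
        },
    ],
)

print(response.choices[0].message.content)
```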
-
But it generates off what you put in, right? Like, if you fed it a script up to a certain point and said, "write a scene where Mike explains to the group what happened with [X]," it could do that, right?
Because my experience with Volume 2 was that it was almost entirely exposition dumps or recaps. Episode 5 and half of episode 6 are essentially just characters going to different locations and explaining previous events or future plans. Like I said, I've never used AI, so I'm not an expert, but from what I've seen of ChatGPT, it doesn't seem impossible to write an outline, write the scenes you're most interested in, then say, "turn this summary into a full scene" or, "add dialog where these characters explain what happened to them in the Upside Down to Joyce."
You might want to look up how it works then, because it truly is just glorified autocomplete. It can appear to output some seriously cool things, but:
- Remember the output is based on what the model was trained on. If its output is good, it's only because the model was trained on a shitload of examples to produce a well-mapped token graph. Examples like every single thing they can scrape off the internet, copyrights be damned...
- It's only ever going to be an illusion of intelligence, based on the associations humans have already given to words and other tokenized things. LLMs will never grow past their training data.
- While they can give the illusion of intelligence, they are only ever associating tokens. Sure, they can take large sets of input tokens, too, to tie the output more tightly to what you want, but it's all just mathematical associations!
The output works out a lot of the time when using models and GPUs way bigger than would fit on most anyone's home computer... but it's still just associations of tokens. They don't know that all those tokens paint a picture in which token #3745, the protagonist, has an ongoing motivation to interact with token #3758, the love interest. If your input instructions don't directly associate those two through sheer repetition, the "AI"s would basically just wander all over the place as far as their relationship is concerned.
Magnify those kinds of "vapid story" problems across every aspect of a story that isn't basically pre-written in the prompt anyway, and it turns out "AI" is actually extremely shit at anything requiring actual intuition, understanding, and basic intelligence, no matter how much electricity is thrown at it!
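To make the "glorified autocomplete" point concrete, here's a toy next-word predictor. It's a vastly simplified stand-in (raw bigram counts instead of a trained neural network, whole words instead of subword tokens), but the generation loop has the same shape: sample the next token from learned associations, append, repeat:

```python
# Toy "glorified autocomplete": a bigram model that only knows which
# word tended to follow which in its training text. Real LLMs use
# learned embeddings and attention over subword tokens, but the
# generation loop is the same shape.
import random
from collections import defaultdict

training_text = (
    "the gate opened and the party ran to the gate "
    "and the party fought the demogorgon at the gate"
)

# "Training": count which word follows which.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Inference": repeatedly sample a plausible next word.
def generate(prompt_word, length=10):
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # nothing ever followed this word in training
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
# e.g. "the party fought the demogorgon at the gate opened and"
# It never "knows" what a gate or a party is; the output can only
# recombine associations already present in the training data.
```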
-
You might want to look up how it works then, because it truly is just glorified autocomplete. It can appear to output some seriously cool things, but:
- Remember the output is based on what the model was trained on. If its output is good, it's only because the model was trained on a shitload of examples to produce a well-mapped token graph. Examples like every single thing they can scrape off the internet, copyrights be damned...
- It's only ever going to be an illusion of intelligence, based on the associations humans have already given to words and other tokenized things. LLMs will never grow past their training data.
- While they can give the illusion of intelligence, they are only ever associating tokens. Sure, they can take large sets of input tokens, too, to tie the output more tightly to what you want, but it's all just mathematical associations!
The output works out a lot of the time when using models and GPUs way bigger than would fit on most anyone's home computer... but it's still just associations of tokens. They don't know that all those tokens paint a picture in which token #3745, the protagonist, has an ongoing motivation to interact with token #3758, the love interest. If your input instructions don't directly associate those two through sheer repetition, the "AI"s would basically just wander all over the place as far as their relationship is concerned.
Magnify those kinds of "vapid story" problems across every aspect of a story that isn't basically pre-written in the prompt anyway, and it turns out "AI" is actually extremely shit at anything requiring actual intuition, understanding, and basic intelligence, no matter how much electricity is thrown at it!
I mean, I think most of Season 5 was extremely shit at anything requiring actual intuition, understanding, and basic intelligence, so that was kinda my point. Also, the second YouTube result for "using chatgpt to write a screenplay" shows a guy doing basically exactly what I'm describing in parts 5 and 7. I'm not saying the Duffers just said, "ChatGPT, write Season 5 for us," and it magically generated the screenplay for every episode, but I think they had it generate some dialog, and everything I've seen of people using ChatGPT makes me think that's very possible.
-
I'm not part of that group, but their chief complaints are that:
- The final battle didn't feel "final" enough
- A show that was so tightly written for the past four seasons seems to have left many loose threads unanswered in the ending
These fans are why we'll never see the completion of A Song of Ice and Fire.
Tightly written?
Jesus, it was a joke for the most part.
-
What do people even think is bad about the ending? I thought they wrapped everything up pretty well and left a bit open to the viewer.
Also, what is the controversy here even about at its root? That there would have been some better ending if they didn't use ChatGPT (assuming they even did to begin with)? Regardless, it seems like a nonsensical thing to get outraged about, especially with so little evidence.
I did a bit of reading on the main subreddit for the show, and a lot of the complaining was about loose ends (what happened to X character?), plot holes and illogical sequences in the final battle, and frustration with the amount of meaningful character development some characters did or did not receive. There was also a disturbing number of people upset because Mike didn't magically turn gay and hook up with Will (??????????).
I think some of the complaints were fair, but the show has been this bad for a long time. All things considered, I actually thought it was a decent finale, and it had one of the better open endings I've seen.

Like with a lot of these character-driven young adult shows, I get the feeling a lot of viewers developed parasocial relationships with the characters and had a hard time letting go. They're in so deep that they lose sight of the fact that the characters and world as they exist in their heads are not necessarily the same as the characters and world being written for the show. They can't admit that a lot of the magic of the show, for them, was born out of their own fan fiction communities, so when the finale inevitably fails to deliver, it's a lot easier to pretend the show was always perfect until it suddenly wasn't than to admit it was never as great as they built it up to be.
-
Seriously. The ending didn't hit the peaks some fans wanted. But these para social fans are treating it like it was another Game Of Thrones ending.
It wasn't just anticlimactic. All of this fantasy, for some reverse "here's Johnny" moment.