‘Stranger Things’ Creators Accused by Fans of Using AI To Write Series Finale
-
AI-generated content has created an army of people who now have an accusation they can levy at any content they find to be poor quality, without having to articulate any real assessment or critique of it.
It has also made it necessary, or at least reflexive, for even the most conscientious person to consider whether content was AI-generated, even if they never share that they're considering it. No one wants to praise something only to learn a computer shit it out.
It’s all really corrosive to creativity and the place it holds in our social fabric.
It's frustrating when I see all the shitty posts in indie game forums like "This looks like AI. Prove that it's not," followed by trolling the developer. Witch hunts like that are what will piss people off enough to turn on the "Fuck AI" crowd.
-
What do people even think is bad about the ending? I thought they wrapped everything up pretty well and left a bit open to the viewer.
Also, what is the controversy here even about at its root? That there would have been some better ending if they hadn't used ChatGPT (assuming they even did to begin with)? Regardless, it seems like a nonsensical thing to get outraged about, especially with so little evidence.
I'm not part of that group, but their chief complaints are that:
- The final battle didn't feel "final" enough
- A show so tightly written across the past four seasons seems to have many unanswered loose threads in the ending
These fans are why we'll never see the completion of A Song of Ice and Fire.
-
What do you mean? I don't use ChatGPT, but I've seen its outputs, and it seems more than capable of writing exposition dumps.
-
> What do you mean? I don't use ChatGPT, but I've seen its outputs, and it seems more than capable of writing exposition dumps.
It doesn't think. It has no logic skills. It doesn't understand English, nor the rules for constructing good literature.
It can spit out sentences all day, but constructing dialogue that is interchangeable between characters and still functions well enough is something it could only luck into. Even with generic throwaway lines, making them interchangeable takes either a very dull story or a good writer. While AI can spit out dumb, disjointed, meaningless stories, it cannot neatly craft meaningful sentences unless you give it every single bit of context ever and structure your own prompts well.
It'd be less work to just... be creative yourself.
-
> It doesn't think. It has no logic skills. It doesn't understand English, nor the rules for constructing good literature.
> It can spit out sentences all day, but constructing dialogue that is interchangeable between characters and still functions well enough is something it could only luck into. Even with generic throwaway lines, making them interchangeable takes either a very dull story or a good writer. While AI can spit out dumb, disjointed, meaningless stories, it cannot neatly craft meaningful sentences unless you give it every single bit of context ever and structure your own prompts well.
> It'd be less work to just... be creative yourself.
But it generates from what you put in, right? Like, if you fed it a script up to a certain point and said, "write a scene where Mike explains to the group what happened with [X]," it could do that, right?
Because my experience with Volume 2 was that it was almost entirely exposition dumps or recaps. Episode 5 and half of episode 6 are essentially just characters going to different locations and explaining previous events or future plans. Like I said, I've never used AI, so I'm not an expert, but from what I've seen of ChatGPT, it doesn't seem impossible to write an outline, write the scenes you're most interested in, then say, "turn this summary into a full scene" or, "add dialog where these characters explain what happened to them in the Upside Down to Joyce."
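The mechanical part of that workflow really is trivial to script. Here's a minimal, purely hypothetical sketch using the official OpenAI Python SDK; the model name, system prompt, and scene summary are all made up for illustration, not anything the show's writers actually used:

```python
# Hypothetical sketch of "turn this summary into a full scene,"
# using the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder scene summary, invented for this example.
summary = (
    "Mike explains to the group what happened with [X] "
    "while they regroup before the final battle."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model would work here
    messages=[
        {"role": "system",
         "content": "You are a TV screenwriter. Expand scene summaries "
                    "into full scenes with dialogue, in screenplay format."},
        {"role": "user",
         "content": f"Turn this summary into a full scene:\n\n{summary}"},
    ],
)

print(response.choices[0].message.content)
```

Nobody's claiming this is what happened in the writers' room; the point is just that the "expand a summary into an exposition scene" step takes about twenty lines of glue code.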
-
> But it generates from what you put in, right? Like, if you fed it a script up to a certain point and said, "write a scene where Mike explains to the group what happened with [X]," it could do that, right?
> Because my experience with Volume 2 was that it was almost entirely exposition dumps or recaps. Episode 5 and half of episode 6 are essentially just characters going to different locations and explaining previous events or future plans. Like I said, I've never used AI, so I'm not an expert, but from what I've seen of ChatGPT, it doesn't seem impossible to write an outline, write the scenes you're most interested in, then say, "turn this summary into a full scene" or, "add dialog where these characters explain what happened to them in the Upside Down to Joyce."
You might want to look up how it works then, because it truly is just glorified autocomplete. It can appear to output some seriously cool things, but:
- Remember the output is based on what the model was trained on. If its output is good, it's only because the model was trained on a shitload of examples to produce a well-mapped token graph. Examples like every single thing they can scrape off the internet, copyrights be damned...
- It's only ever going to be an illusion of intelligence, based on the associations humans have already given to words and other tokenized things. LLMs will never grow past their training data.
- While they can give the illusion of intelligence, all they ever do is associate tokens. Sure, they can take large sets of input tokens, too, to tie the output more tightly to what you want, but it's all just mathematical associations!
The output works out a lot of the time when using models and GPUs way bigger than would fit on most anyone's home computer... but it's still just associations of tokens. They don't know that all those tokens paint a picture meaning token #3745, the protagonist, has ongoing motivation to interact with token #3758, the love interest. If your input instructions don't directly associate those two through sheer repetition, the "AI"s would basically just wander all over the place as far as their relationship is concerned.
Magnify that kind of "vapid story" problem across all aspects of a story that aren't basically pre-written in the prompt anyway, and it turns out "AI" is actually extremely shit at anything requiring actual intuition, understanding, and basic intelligence, no matter how much electricity is thrown at it!
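To make "glorified autocomplete" concrete, here's a toy sketch: a bigram model that picks each next word purely from co-occurrence counts in whatever text it was fed. Real LLMs are transformers with billions of parameters, not lookup tables, but the training objective is the same next-token prediction; the corpus here is obviously invented:

```python
import random
from collections import Counter, defaultdict

# Toy "glorified autocomplete": predict each next word purely from
# how often it followed the previous word in the training text.
corpus = (
    "mike explains the plan to the group . "
    "joyce explains the plan to hopper . "
    "the group follows the plan into the upside down ."
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, n=12):
    """Sampled continuation: no plot, no intent, just co-occurrence stats."""
    out = [start]
    for _ in range(n):
        options = followers.get(out[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("mike"))
# Possible output: "mike explains the plan to hopper . joyce explains the plan ..."
```

Scale that same objective up enormously and the output gets far more fluent, but the point above stands: the model is choosing tokens by learned association, not by knowing who the protagonist is or what they want.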
-
> You might want to look up how it works then, because it truly is just glorified autocomplete. It can appear to output some seriously cool things, but:
> - Remember the output is based on what the model was trained on. If its output is good, it's only because the model was trained on a shitload of examples to produce a well-mapped token graph. Examples like every single thing they can scrape off the internet, copyrights be damned...
> - It's only ever going to be an illusion of intelligence, based on the associations humans have already given to words and other tokenized things. LLMs will never grow past their training data.
> - While they can give the illusion of intelligence, all they ever do is associate tokens. Sure, they can take large sets of input tokens, too, to tie the output more tightly to what you want, but it's all just mathematical associations!
> The output works out a lot of the time when using models and GPUs way bigger than would fit on most anyone's home computer... but it's still just associations of tokens. They don't know that all those tokens paint a picture meaning token #3745, the protagonist, has ongoing motivation to interact with token #3758, the love interest. If your input instructions don't directly associate those two through sheer repetition, the "AI"s would basically just wander all over the place as far as their relationship is concerned.
> Magnify that kind of "vapid story" problem across all aspects of a story that aren't basically pre-written in the prompt anyway, and it turns out "AI" is actually extremely shit at anything requiring actual intuition, understanding, and basic intelligence, no matter how much electricity is thrown at it!
I mean, I think most of Season 5 was extremely shit at anything requiring actual intuition, understanding, and basic intelligence, so that was kinda my point. Also, the second YouTube result for "using chatgpt to write a screenplay" shows a guy doing basically exactly what I'm describing in parts 5 and 7. I'm not saying the Duffers just said, "ChatGPT, write Season 5 for us," and it magically generated the screenplay for every episode, but I think they had it generate some dialog, and everything I've seen of people using ChatGPT makes me think that's very possible.
-
> I'm not part of that group, but their chief complaints are that:
> - The final battle didn't feel "final" enough
> - A show so tightly written across the past four seasons seems to have many unanswered loose threads in the ending
> These fans are why we'll never see the completion of A Song of Ice and Fire.
Tightly written?
Jesus, it was a joke for the most part.
-
> What do people even think is bad about the ending? I thought they wrapped everything up pretty well and left a bit open to the viewer.
> Also, what is the controversy here even about at its root? That there would have been some better ending if they hadn't used ChatGPT (assuming they even did to begin with)? Regardless, it seems like a nonsensical thing to get outraged about, especially with so little evidence.
I did a bit of reading on the main subreddit for the show, and a lot of the complaining was about loose ends (what happened to X character?), plot holes and illogical sequences in the final battle, and frustration with the amount of meaningful character development some characters did or did not receive. There was also a disturbing number of people upset because Mike didn't magically turn gay and hook up with Will (??????????).
I think some of the complaints were fair, but the show has been this bad for a long time. All things considered, I actually thought it was a decent finale, and it had one of the better open endings I've seen. As with a lot of these character-driven young-adult shows, I get the feeling many viewers developed parasocial relationships with the characters and had a hard time letting go. They are in so deep that they lose sight of the fact that the characters and world as they exist in their heads are not necessarily the same as the characters and world being written for the show. They can't admit that a lot of the magic of the show, for them, was born out of their own fan-fiction communities, so when the finale inevitably fails to deliver, it's a lot easier to pretend the show was always perfect until it suddenly wasn't than to admit it was never as great as they built it up to be.
-
Seriously. The ending didn't hit the peaks some fans wanted, but these parasocial fans are treating it like it was another Game of Thrones ending.
It wasn't just anticlimactic: all of this fantasy for some reverse "Here's Johnny!"