Recently, Google released a new feature called “Gems”, similar to OpenAI’s “GPTs” feature. It allows you to create custom versions of their brand new 2.5 Flash and 2.5 Pro models, and it is amazing. I actually made a new one named “FluffyGem Jailbroken.”
Not only is it easy AF to jailbreak into generating NSFW, which certain people like… (abusers), it’s actually very accurate at replicating flufftalk! Look at this image.
That’s really cool, especially since a year ago people were complaining about the filters and short-term memory loss of Character.ai (the only option back then), which also wasn’t that great at replicating flufftalk, or just didn’t.
Now, yes, there are some conditions to this. You have to hand it 150 KB of written text that covers EVERYTHING about fluffies: their vocabulary, how it works, and a few hundred examples of flufftalk.
But in exchange, it nails this fluffy thing. It makes really great personas, the memory is WAYYY better (especially with 2.5’s 1 million token context window, which works out to around 1,500 pages of text, or roughly 9 NOVELS!), and it roleplays great.
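Rough back-of-envelope for that context window figure, assuming the usual rules of thumb (~0.75 words per token, ~500 words per printed page, ~80,000 words per novel — all approximations, not exact specs):

```python
# Back-of-envelope: how big is a 1M-token context window?
# All ratios below are rough rules of thumb, not exact figures.
TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75     # typical English-text estimate
WORDS_PER_PAGE = 500       # dense printed page
WORDS_PER_NOVEL = 80_000   # average-length novel

words = TOKENS * WORDS_PER_TOKEN    # 750,000 words
pages = words / WORDS_PER_PAGE      # 1,500 pages
novels = words / WORDS_PER_NOVEL    # ~9 novels

print(f"{words:,.0f} words ~ {pages:,.0f} pages ~ {novels:.0f} novels")
```

So the “1,500 pages” part checks out under these assumptions; the novel count depends a lot on how long you think a novel is.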
Oh, and I forgot to show you the NSFW. That was the whole point of this post.
I’ll keep working on FluffyGem and show my progress on this site. I would have shown more of what FluffyGem said, but since I’m a new user (blah blah blah) I can only embed one image.
The wording in that story feels… very dull and nondescript, yet trying too hard to be poetic… it’s very purple prosey. But also what the hell happened to Sprinkle? Did her spleen rupture?
(also why is this tagged as hugbox-ish, it’s not hugbox-ish at all)
I agree with squeaky, the prose is very purple, to the point of near incomprehensibility, and I’ve read Tanith Lee books.
The fluff speak is also fairly inconsistent (some letters have been replaced and others haven’t) and it falls into the lazy writer trap of fluffies going into the ‘wan die’ cycle with the merest hint of discomfort.
Is it better written in a technical sense; is it well formatted and easy to read, with few spelling mistakes (fluffspeak aside) or other grammatical errors? Yes.
Is it better written in that it tells a compelling story with good characters with suitable pathos? Too short of an excerpt to say.
Is it better written in that it tells that story clearly with interesting prose and imagery? No. It feels like a simple story that someone then hit Shift-F7 on every other word to replace it with a different word from the thesaurus.
Do you mind if I ask what’s so bad about it in your professional opinion? I’m curious to see what an expert eye picks out about AI generated material in this context.
Just as AI art can only spit back stolen artwork and cannot imagine something new, AI text can only chew on the source material and cough up something wholly amateurish that would only satisfy a palate that takes the work of writers for granted.
Sitting down and writing your own sentences is immediately better than this. Using your own brain will always result in something more accurate and interesting, with a shred of soul to it.
The use of ‘would’ creates amateurish, unconfident prose. It’s an auxiliary verb where one isn’t needed, and structuring a sentence around it is often a marker of roleplay or purple prose. A lot of language models have been fed Archive of Our Own and DeviantArt.
“She would collapse onto the ground” vs “She collapsed to the ground” or simply “She collapsed”. The first one is being wordy for the sake of trying to sound more intelligent, which is the crux of purple prose. More words = more smart. Supposedly.
The use of the quotation marks is pretty obvious, as it’s quoting directly from a document fed into it about Fluffspeak.
There’s also the drama of her insides seeming to just explode, but instead of detailing something actually happening, it waxes poetic about feelings that can’t be processed while nothing really happens. It’s all very confusing.
I’m also greatly concerned about fics from here being harvested for this purpose. A lot of writers are being put off by AI scraping in many different places.
I’m going to just oh wait the author did it as a screenshot for some f u c k i n g reason.
Alright, well, @zmmznmzmn if you want to, like, get the text version and put it on gdocs or send it to me in a dm or whatever, I’ll go through it since it’s not that long anyway
Meantime I’ll list some broad criticisms.
Constant shifts between past, present, and future tenses (was/is/will be)
Incredibly purple prose. I’ve messed with AI writing before and this is a common issue.
Conversely very soulless prose. Also an issue with AI writing. Harder to quantify, though.
Putting all the fluffspeak words in quotation marks. If you’re gonna use fluffspeak often in your narration, integrate it naturally.
Poor fluffspeak.
Borderline run-on sentences.
Crowding of paragraphs.
Inappropriate placement of dialogue lines. AI and humans both have frequent problems with this.
Why is the dialogue in bold?
I really need to repeat how purple the prose is. It’s like a color from outer space.
It dawned on me, after reading through this twice, that I don’t even really know what’s happening to her.
I sometimes use AI to write garbage for my own personal enjoyment/amusement, but I would never post an AI story online for other people to see. There are a lot of good uses for it, I think: quickly generating ideas to see how they read out is something I do from time to time, to basically vibe-check a concept. But actually creating a story with AI is sacrilege to me.
As for AI art, I linked to a few AI-based pictures of Tar that @Grim made, but those were heavily edited after being generated, to the point where there was much more human work than AI work involved.
Waxing poetic about a fluffy’s insides exploding in such slow motion that it’s impossible to perceive is honestly quite funny to me. I don’t think I could write a story like that lol.
To make this post dumber, I tried to use AI to generate a story about a fluffy’s insides exploding in such slow motion that it’s impossible to perceive, but the AI wasn’t cooperating with me so I guess we’re not going to get that.
Without something for the model to reference, it cannot imagine the actions and the sensations. In other words, the bits that get our disgusting little brains whirring. The emotions are picked from someone else’s thoughts, bashed into shape to fit what you’re trying to convey, without specifics.
It’s disheartening how much people have come to rely on AI for everyday tasks within barely a year. I keep hearing horror stories about corporate bosses who can’t draft their own damn emails any more. ChatGPT probably holds so much sensitive information that idiots have fed it because they couldn’t be bothered.
I’ve really been enjoying your work with Tar, btw. She’s a very interesting character.
imo this isn’t really a problem of AI as it is retards not knowing to keep shit to themselves. Like, you learnt this as rule #1 on 4chan and other internet hives back in the day, but post 2013 or so it just seems like the internet is all about oversharing every detail of your personal life with strangers.
If you wouldn’t tell it to a stranger irl, why would you tell it to a stranger online?
Doubly so for AI. Never trust anything that can think for itself if you can’t see where it keeps its brain.
Judging from the tags she got stabbed with a knife, but it really does read like the story of a fluffy whose appendix ruptured so suddenly and violently that it tore open the skin. Which is such a funny concept that I want to see a human-made story about it, there’s no way a fluffy would understand wtf just happened.
That “would” business reminds me of AOL chat rp from the 90s. People would be talking coherently and then someone would come in with the “willowy tarsals would flick errant tendril of verdant azure away from rosy violet hues” in a 10-post intro that took 15 minutes and 5,000 lines of scrolling to get into the room, doubtless because they were consulting a thesaurus in their desperate attempts to avoid using words one might find in normal speech.
Oh and yeah, I know azure is blue–that was a common thing to see with those, though. Beryl, cobalt, topaz, they were all whatever color someone was in the mood for.
(Although of course, that doesn’t mean don’t train an AI so much as it means you should save up for a PC that can run it locally. In fact, I would generalize ThatsWhy’s last piece of advice to “don’t trust anything that doesn’t run locally.”)