tiktok; moar chatgpt thoughts
There's a recurring meme on tiktok where someone presents the tale of some dude (almost always a dude) who pearl-clutchingly complains that tiktok is nothing but underage dancing girls and thirst traps.
The punchline is that tiktok's algorithm is renowned for showing you more of what you like, per the signals you've offered.
All these Serious Journalist accounts of Bing & ChatGPT yielding weird & creepy results have me wondering just how weird & creepy they are themselves.
tiktok
@lmorchard this is certainly what tiktok wants people to think is true about their algorithm, but I have this bad feeling that it's a dangerous premise to equate an engagement-maximizing swipe-timer signal too closely with accurate mind reading
like how everyone takes for granted that Google is at least mostly right, so when they mess up it ends up being way worse just because we were all willing to treat the output like an oracle https://hristo-georgiev.com/google-turned-me-into-a-serial-killer
tiktok
@lmorchard @maya certainly with tiktok there seems to be a fairly significant amount of evidence that young girls' thirst traps are shown to, e.g., logged-out users and brand new accounts, which indicates a level of algorithmic promotion beyond "lol that's just because you're a creepy old man who keeps liking those kinds of videos!" Ditto YouTube and alt-right content (though both of these companies have made efforts to clean up the most overt ways in which their algorithms lead people toward dubious content, making these patterns somewhat less obvious)
in regard to ChatGPT & other language models, though, my understanding is that currently none of them remember user interactions between sessions or have the ability to build up specific user profiles. So if they're producing creepy stuff, it either means they do that spontaneously, or that interactions within that specific session that would have prompted the disconcerting behavior were intentionally left out for dramatic effect. The latter would of course be a much more serious charge of falsifying information
the chatgpt thing
@maya @lmorchard ha yeah, i'd figured "the journalist said something creepy in the quoted text" didn't require any postulation about preexisting biases in the training corpus vs. past interactions as the source. But definitely a lot of the examples we're seeing are of people testing the bot through boundary-pushing behavior!