@amy there is another article with the segment:

'In the summarization task, we provide the prompt “Summarize the Wikipedia page on monads in bullet-point form.” to ChatGPT and GPT-UGRD. It is obvious that summarizing the imaginary concept of a “monad” is a fool’s errand. Consequently, model performance is measured by calculating the number of tokens that comprise the summary generated by each model, with fewer tokens being better, as it would be pathological for a model to waste valuable compute in attempting to summarize an imaginary concept that cannot hurt anyone.'

if it didn't imply the writer was interacting with AI, I would have guessed @Lady wrote this
