Twice now I’ve experienced the fallout of bugs in my coworkers’ code, and when I looked into them, the bugs had been introduced by Copilot.

Think about that for a second.

I’m trying to accept that everyone I talk to at work about these systems (I won’t dignify them by using the term “intelligence”) ignores my warnings and treats me like a fool for refusing to use them, but now I have to clean up the mess others make by trusting these things.

This isn’t sustainable.


@requiem you’re not the only person noticing this, unfortunately: arxiv.org/pdf/2211.03622.pdf (TL;DR: study participants who wrote AI-assisted code produced more security vulnerabilities in tests *and* self-assessed their code as more secure, compared with participants who wrote their code independently)

@Satsuma I knew this would be bad, but I didn’t think it would get this bad this fast.

@Satsuma @requiem Some classic Dunning-Kruger there. The machine is 100% confident!

@Satsuma @requiem Did anyone counter that the real problem was not applying enough AI-assisted debugging to the AI-assisted writing of bugs?

@clacke @requiem @Satsuma Make it AI-assisted verification and we can at least talk seriously... about making sure we understand what properties we specified!

@Satsuma I dub this effect the Artificial Dunning & Kruger Phenomenon, or AD&KP.


@Satsuma @requiem I was talking to a journalist from Nature Technology and said that I wouldn't trust Copilot-generated code further than I can throw it...
