"AI could"
"AI will"
"Bad actors might use AI to"
"[In our fantasy scenario created by drawing line going up fast on a graph], AI is the biggest threat to humanity next to climate change and nuclear bombs" (an actual statement that I've seen in the media )
No
Stop that
The so-called "AI" has been around for a few years now, and there are plenty of bad things it's being used for, so if you want to talk about the dangers of that stuff, talk about these
Things that "AI" is being used for:
- Deflecting responsibility for evil deeds that they wanted to do anyway ("computer told us that these are the targets for the drone strike", "computer told us to deny your insurance claims", etc.)
- Flooding everything with low-quality slop text, spam, etc.
- Firing workers and replacing their work with "AI" output, then potentially having to hire someone again to clean up the resulting mess
Things that "AI" is not being used for:
- Taking over the world and bringing about the apocalypse
- Making new bioweapons or whatever
@vaporeon_ often it's a case of people seeing god in the machine, which isn't even really a new thing
to the extent that "AI"'s developers can be held accountable, it's for the way the models are designed to be constantly flattering to the user, which can end up inadvertently validating delusional thinking