The social media AI bots seem to be missing the “intelligent” part.
Over the past few years, a number of think tanks and research groups have been touting the economic and lifestyle improvements that would result from significant investment in Artificial Intelligence (AI). Research teams within the big technology companies known by the FAANG acronym – Facebook (now Meta), Apple, Amazon, Netflix, and Google (under its parent Alphabet) – are deploying AI bots to help manage the ever-growing spread of social media around the world.
After Donald Trump won in 2016, the big technology companies decried the results as a direct effect of misinformation on their platforms. Democrats in Congress placed partial blame on them and encouraged the CEOs of the largest social media platforms to “do something” to stop the MAGA movement, as it represented a direct threat to their power, er, democracy.
The result has been a major disappointment. Bots lack self-awareness: they have no idea that their own bias exists. In fact, when it comes to bias, developers have no problem injecting their own into their algorithms. Nothing seems more ironic than when climate alarmist activist John Cook decided to work on an AI bot to hunt down and detect climate denialism on social platforms, and it flagged a couple of his own blog posts. The parameters were so stringent that it began rejecting the work of its own creators as denialism.
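For the non-programmers, this is the classic false-positive failure mode. Here is a minimal sketch of how it happens, assuming a crude keyword classifier with an aggressive threshold; the cue phrases, names, and numbers are invented for illustration and have nothing to do with Cook’s actual system:

```python
# Hypothetical sketch of an over-tuned "denialism" classifier flagging
# its own creators. All cue phrases and thresholds are illustrative.

DENIAL_CUES = {"uncertainty", "natural cycles", "models disagree", "pause"}

def denialism_score(text: str) -> float:
    """Crude keyword score: fraction of cue phrases present in the text."""
    lowered = text.lower()
    hits = sum(1 for cue in DENIAL_CUES if cue in lowered)
    return hits / len(DENIAL_CUES)

THRESHOLD = 0.2  # aggressive: one cue phrase out of four is enough to trip the flag

def is_denialism(text: str) -> bool:
    return denialism_score(text) >= THRESHOLD

# A researcher's own post *debunking* denial talking points gets flagged,
# because the classifier sees the words, not the intent.
own_post = ("Deniers lean on natural cycles and claim the models disagree; "
            "here is why both talking points fail.")
print(is_denialism(own_post))  # True: a false positive on the creator's own work
```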
The end game of stopping the spread of misinformation means that platforms must also reduce engagement with patrons deemed peddlers, based on criteria developed by the AI itself. Facebook attempted this and discovered that doing so effectively silenced accounts, which ran counter to company growth. The AI could not get around the fact that users will inevitably peddle misinformation, and that countering it by limiting user engagement would reduce revenue. In short, there is nothing profitable about blocking users, whether the misinformation is benign, demonstrably wrong, or even politically motivated.
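To see why throttling collides head-on with the business model, consider a back-of-the-envelope sketch. Every figure below is invented purely for illustration; only the direction of the effect matters.

```python
# Back-of-the-envelope sketch of the throttling/revenue tension.
# All numbers are invented; only the direction of the effect matters.

CPM = 5.00              # assumed ad revenue per 1,000 impressions (dollars)
DOWNRANK_FACTOR = 0.10  # assumed reach remaining after a post is throttled

def daily_revenue(impressions: int, flagged_share: float) -> float:
    """Ad revenue when `flagged_share` of impressions gets throttled."""
    kept = impressions * (1 - flagged_share)
    throttled = impressions * flagged_share * DOWNRANK_FACTOR
    return (kept + throttled) / 1000 * CPM

base = daily_revenue(100_000_000, flagged_share=0.00)
after = daily_revenue(100_000_000, flagged_share=0.05)
print(f"${base:,.0f} vs ${after:,.0f}")  # $500,000 vs $477,500: flagged reach is lost revenue
```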
Other platforms have experienced the same situation, so the workaround is not to prevent users from engaging, but to have bots tag posts perpetuating misinformation and direct readers to facts and/or resources meant to educate, roughly as sketched below.
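In code terms, the pipeline shifted from blocking to labeling. This is a hypothetical sketch of that flow; the class, function names, and label text are all assumptions, not any platform’s actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: list[str] = field(default_factory=list)

def moderate(post: Post, looks_false) -> Post:
    """Tag-don't-block: the post stays visible and engageable,
    but a fact-check label is attached for readers."""
    if looks_false(post.text):
        post.labels.append("Missing context: see why fact-checkers say otherwise")
    return post

# Usage: the classifier decides, the post survives, the label rides along.
tagged = moderate(Post("someone", "a meme"), looks_false=lambda text: True)
print(tagged.labels)
```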
Example:

The meme is meant to be funny. However, it appears to have upset someone at USA Today so much that they actually wrote a fact check article on the meme itself. USA Today is one of dozens of left-leaning and outright leftist outlets that provide input into the AI systems meant to destroy disinformation that threatens the very democratic foundations of America. This has essentially created a feedback loop in which exaggeration, hyperbole, sarcasm, comedy, and humanity’s desire to make fun of itself become targets whenever they offend someone on the left. The bots are programmed this way, and it makes a mockery of the human condition.
How bad is it? Well, a friend of mine posted a picture of the moon she took the other night. This is how Facebook’s bots flagged it:
The two comments were “WTF??” responses to the Facebook tag. I shared a capture of her post in one of my own. Guess what Facebook did to my post?
Twitter goes a bit further by limiting posts the bots deem false and likely to “cause harm,” so that the ONLY thing you can do is quote tweet; you cannot directly engage the author:
In this case, Nick made the mistake of saying that inexpensive drugs could treat and cure your Covid, a claim backed by medical evidence but one that runs afoul of the AI bots’ leftist parameters. Twitter’s bots have become arbiters of science fiction and fact.
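Mechanically, the restriction amounts to shrinking the allow-list of interactions on a flagged tweet to a single entry. Here is a minimal sketch under that assumption; the enum names are mine, since Twitter’s internal ones are not public.

```python
from enum import Enum, auto

class Interaction(Enum):
    REPLY = auto()
    LIKE = auto()
    RETWEET = auto()
    QUOTE_TWEET = auto()

# On a tweet flagged as false and likely to "cause harm", the allow-list
# shrinks to quote tweets only: you can talk *about* the author, never *to* them.
ALLOWED_ON_FLAGGED = {Interaction.QUOTE_TWEET}

def can_interact(action: Interaction, flagged: bool) -> bool:
    return not flagged or action in ALLOWED_ON_FLAGGED

print(can_interact(Interaction.REPLY, flagged=True))        # False
print(can_interact(Interaction.QUOTE_TWEET, flagged=True))  # True
```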
I can’t tell you how grateful I am that these companies have invested the significant money they raised from people socializing on their platforms into protecting democracy in our country. Can’t wait until the T-1000 series comes out.