Microsoft Locks Down Discord Server After People Wouldn’t Stop Making Fun Of AI ‘Microslop’
We’ve noted how Microsoft is a little sensitive about AI slop at the moment. Back in January, CEO Satya Nadella wrote a well-circulated blog post lamenting critics of “AI slop” and demanding the public simply move past such conversations. The post itself was relatively innocuous, but it wasn’t received well, for some valid reasons.
One problem is that Nadella put the onus on consumers to ignore a lot of Microsoft’s terrible AI-related choices, whether it’s the ample lazy AI slop that fills the company’s zombified MSN portal, the rushed integration of AI into software in ways that pose major new security risks, the company’s undercooked AI copyright bots, its efforts to shovel Copilot down the throats of people (whether they want it or not), or some of the really dodgy privacy practices it has engaged in via Windows 11 AI “snapshot” features.
Last week found Microsoft under fire yet again, this time for defensively locking down a Discord server after people wouldn’t stop calling the company “Microslop.” More specifically, Microsoft Streisanded itself after it tried to ban the term on its Copilot Discord server. When people found creative ways to get around the ban, Microsoft decided to lock down the entire server.
When called out for that by frustrated users, Microsoft tried to blame the entire incident on “spammers” who were trying to post “harmful content”:
“The Copilot Discord channel has recently been targeted by spammers attempting to disrupt and overwhelm the space with harmful content not related to Copilot,” a Microsoft spokesperson told us, adding that the “blocking of terms like ‘Microslop’ and some others associated with this spam campaign were temporary while the company worked to implement better safeguards.”
Microsoft executives don’t really seem to want to engage in any serious introspection into their rushed adoption of AI in ways customers don’t always appreciate. Most recently, the company’s integration of Copilot into Notepad opened up a major cybersecurity vulnerability.
This whole incident will, of course, only result in users doubling down on their criticisms.
These companies have invested untold oceans of cash into a technology that may have utility for many, but hasn’t, to date, been all that profitable. Many AI companies have layered undercooked automation on top of very broken systems (see: health insurance or journalism or war) in problematic ways, raising questions about company valuations and systemically poor judgment. All while AI’s immense energy consumption has caused companies to disregard already tepid climate goals.
Instead of engaging in real conversation about these issues, you tend to get a lot of generalized defensiveness (“why can’t you simply praise us for our innovation?”), all of which has been made worse by the tech sector’s enthusiastic coddling of authoritarianism.