Welcome to AI This Week, Gizmodo's weekly deep dive on what's been happening in artificial intelligence.
For months, I've been harping on a particular point, which is that artificial intelligence tools, as they're currently being deployed, are mostly good at one thing: replacing human employees. The "AI revolution" has mostly been a corporate one, an insurrection against the rank and file that leverages new technologies to reduce a company's overall headcount. The biggest sellers of AI have been very open about this, admitting time and again that new forms of automation will allow human jobs to be repurposed as software.
We got another dose of that this week, when the co-founder of Google's DeepMind, Mustafa Suleyman, sat down for an interview with CNBC. Suleyman was in Davos, Switzerland, for the World Economic Forum's annual get-together, where AI was reportedly the most popular topic of conversation. During his interview, Suleyman was asked by news anchor Rebecca Quick whether AI was "going to replace humans in the workplace in massive amounts."
The tech CEO's answer was this: "I think in the long term, over many decades, we have to think very hard about how we integrate these tools because, left completely to the market...these are fundamentally labor replacing tools."
And there it is. Suleyman makes this sound like some foggy future hypothetical, but it's obvious that said "labor replacement" is already happening. The tech and media industries, which are uniquely exposed to the threat of AI-related job losses, saw huge layoffs last year, right as AI was "coming online." In only the first few weeks of January, well-established companies like Google, Amazon, YouTube, and Salesforce have announced more aggressive layoffs that have been explicitly linked to greater AI deployment.
The general consensus in corporate America seems to be that companies should use AI to operate leaner teams, the likes of which can be bolstered by small groups of AI-savvy professionals. These AI professionals will become an increasingly sought-after class of worker, as they'll offer the opportunity to reorganize corporate structures around automation, thus making them more "efficient."
For companies, the benefits of this are obvious. You don't have to pay a software program, nor do you have to supply it with health benefits. It won't get pregnant and have to take six months off to care for its newborn child, nor will it ever become disgruntled with its working conditions and try to start a union drive in the break room.
The billionaires who are marketing this technology have made vague rhetorical gestures toward things like universal basic income as a cure for the inevitable worker displacements, but only a fool would think those are anything other than empty promises designed to stave off some sort of underclass uprising. The truth is that AI is a technology that was made by and for the managers of the world. The frenzy in Davos this week, where the world's wealthiest fawned over it like Greek peasants discovering Promethean fire, is only the latest reminder of that.

Question of the day: What's OpenAI's excuse for becoming a defense contractor?
The short answer to that question is: not a very good one. This week, it was revealed that the influential AI organization was working with the Pentagon to develop new cybersecurity tools. OpenAI had previously promised not to join the defense industry. Now, after a quick edit to its terms of service, the company is charging full steam ahead with the development of new toys for the world's most powerful military. After getting confronted about this pretty drastic pivot, the company's response was basically: ¯\_(ツ)_/¯ ... "Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world," a company spokesperson told Bloomberg. I'm not sure what the hell that means, but it doesn't sound particularly convincing. Of course, OpenAI is not alone. Many companies are currently rushing to market their AI services to the defense community. It only makes sense that a technology that has been referred to as the "most revolutionary technology" seen in decades would inevitably get sucked up into America's military-industrial complex. Given what other countries are already doing with AI, I'd imagine this is only the beginning.
More headlines this week
The FDA has approved a new AI-fueled device that helps doctors hunt for signs of skin cancer. The Food and Drug Administration has given its approval to something called a DermaSensor, a unique handheld device that doctors can use to scan patients for signs of skin cancer; the device leverages AI to conduct "rapid assessments" of skin lesions and determine whether they look healthy or not. While there are a lot of dumb uses for AI floating around out there, experts contend that AI could actually prove quite useful in the medical field.
OpenAI is establishing ties to higher education. OpenAI has been trying to reach its tentacles into every stratum of society, and the latest sector to be breached is higher education. This week, the organization announced that it had forged a partnership with Arizona State University. As part of the partnership, ASU will get full access to ChatGPT Enterprise, the company's business-level version of the chatbot. ASU also plans to build a "personalized AI tutor" that students can use to assist them with their schoolwork. The university is also planning a "prompt engineering course" which, I am guessing, will help students learn how to ask a chatbot a question. Useful stuff!
The internet is already infested with AI-generated crap. A new report from 404 Media shows that Google is algorithmically boosting AI-generated content from a host of shady websites. Those websites, the report shows, are designed to hoover up content from other, legitimate websites and then repackage it using algorithms. The whole scheme revolves around automating content output to generate advertising revenue. This regurgitated crap then gets promoted by Google's News algorithm and appears in search results. Joseph Cox writes that the "presence of AI-generated content on Google News signals" how "Google may not be ready for moderating its News service in the age of consumer-access AI."