I’ve seen how OpenAI’s GPT2 system can produce a column in my style. We must heed Elon Musk’s warnings of AI doom
Elon Musk, recently busying himself with calling people “pedo” on Twitter and potentially violating US securities law with what was perhaps just a joke about weed – both perfectly normal activities – is now involved in a move to terrify us all. The non-profit he backs, OpenAI, has developed an AI system so good that, when it was fed an article of mine, it wrote an extension of it in a perfect act of journalistic ventriloquism – and had me quaking in my trainers.
As my colleague Alex Hern wrote yesterday: “The system [GPT2] is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.” GPT2 is so effective that the full research is not being released publicly yet because of the risk of misuse.
And that’s the thing – this AI has the potential to be absolutely devastating. It could exacerbate the already massive problem of fake news and extend the sort of abuse and bigotry that bots have already become capable of doling out on social media (see Microsoft’s AI chatbot, Tay, which pretty quickly started tweeting about Hitler). It will quash the essay-writing market, given it could just knock ‘em out without an Oxbridge graduate in a studio flat somewhere charging £500. It could inundate you with emails and make it almost impossible to distinguish the real from the auto-generated. An example of the issues involved: in Friday’s print Guardian we ran an article that GPT2 had written itself (it wrote its own made-up quotes; structured its own paragraphs; added its own “facts”), and at present we have not published that piece online, because we couldn’t figure out a way to nullify the risk of it being taken as real if viewed out of context. (Support this kind of responsible journalism here!)
The thing is, Musk has been warning us about how robots and AI will take over the world for ages – and he very much has a point. Though it’s easy to make jokes about his obsession with AI doom, this isn’t just one of his quirks. He has previously said that AI represents our “biggest existential threat” and called its progression “summoning the demon”. The reason he and others support OpenAI (a non-profit, remember) is that he hopes it will be a responsible developer and a counter to corporate or other bad actors (I should mention at this point that Musk’s Tesla is, of course, one of these corporate entities employing AI). Though OpenAI is holding its system back – releasing it for a limited period for journalists to test before rescinding access – it won’t be long before other systems are created. This tech is coming.
Traditional news outlets – Bloomberg and Reuters, for example – already have elements of their news pieces written by machine. Both the Washington Post and the Guardian have experimented – earlier this month Guardian Australia published its first automated article, written by a text generator called ReporterMate. This sort of reporting will be particularly useful in financial and sports journalism, where facts and figures often play a dominant role. I can vouch for the fact that newsrooms have greeted this development with an element of panic, even though the ideal would be to employ these auto-generated pieces to free up time for journalists to work on more analytical and deeply researched stories.
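This kind of data-driven reporting is usually far simpler than GPT2: a program slots verified facts and figures into pre-written sentence patterns. A minimal sketch of the template approach, assuming illustrative function and field names (not the actual interface of ReporterMate or any newsroom system):

```python
# Toy template-based match report generator, illustrating how automated
# sports/finance copy can be assembled from structured data.
# All names and fields here are hypothetical examples.

def generate_match_report(data: dict) -> str:
    """Fill a fixed sentence template from structured match data."""
    if data["home_score"] > data["away_score"]:
        outcome = f'{data["home"]} beat {data["away"]}'
    elif data["home_score"] < data["away_score"]:
        outcome = f'{data["away"]} beat {data["home"]}'
    else:
        outcome = f'{data["home"]} drew with {data["away"]}'
    return (f'{outcome} {data["home_score"]}-{data["away_score"]} '
            f'at {data["venue"]} on {data["date"]}.')

report = generate_match_report({
    "home": "Arsenal", "away": "Spurs",
    "home_score": 2, "away_score": 1,
    "venue": "the Emirates", "date": "Saturday",
})
print(report)  # Arsenal beat Spurs 2-1 at the Emirates on Saturday.
```

Because every sentence comes from a template filled with checked figures, the output can be trusted in a way that free-running generators like GPT2 – which invent their own “facts” – cannot.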