In a hilarious turn of events last month, a Russian robot named Boris was unmasked as a man in a robot suit.

Likewise, state-run media in China unveiled its AI reporter in November, and to this day it’s not clear if this is an actual AI system boiling down news stories or just a synthesized voice with an avatar.

More fabricated robotic theatrics appeared to be on display this week at the Consumer Electronics Show in Las Vegas, where LG CTO I.P. Park delivered the opening CES keynote address.

Park was accompanied onstage for the hour-long presentation by CLOi, a conceptual robot line perhaps best known for failing during a live demo at CES a year ago.

This year, however, CLOi did a bit of everything: The robot acted as co-host, cracked jokes, delivered some LG HomeBrew beer, and even helped some guy who hates blind dates find true love.

Its prowess with love — and, more importantly, its conversational AI — seemed too good to be true. The things CLOi said onstage did not appear to demonstrate what the robot's natural language processing can actually do, but rather a script the robot read to the audience, and that's a disservice.

LG did not respond to repeated questions about whether the responses the robot shared onstage were scripted or in any way generated by an AI system, but here’s a clip of the presentation.

Whatever CLOi is actually able to do, the robot's performance brought to mind Facebook's AI chief calling Hanson Robotics' Sophia bullshit a year ago, as well as a conversation I had last year with a business executive who said he was being forced to talk about Duplex and address misguided questions about what's possible today rather than talk about his product.

That’s why tech companies should spare the world overblown or fabricated depictions of what their AI can do.

False perceptions about what is and isn't possible have potentially negative implications for public perception, business decisions, and even government policy. More lawmakers talked about AI last year in Congress and the United Kingdom's Parliament, according to the AI Index report, and in a prediction shared with VentureBeat, Accenture's responsible AI lead Rumman Chowdhury said she expects to see more regulation of AI in the year ahead.

False perceptions can also take oxygen away from more important issues surrounding AI, such as bias or the concerns of people worried their job will disappear.

One of the toughest things about artificial intelligence today is the extent to which it makes humans question what's real and what's fake. Yes, deepfakes and AI that can manipulate images have a lot to do with that, but so do overzealous marketing and overblown claims about what these systems can do.

When artificial intelligence is wrapped in so much genuine potential as well as fake bullshit, the average person should be forgiven for sometimes being duped by high-profile marketing, but the confusion this creates can have consequences.

Fabricated examples of what an AI system can do take advantage of the hopes and fears of a public being told on loop that robots will make their lives better but also that they can take their jobs and become killing machines.

Even if it sometimes means tripping over your own shoelaces and making mistakes onstage, I think demonstrations should present robots along with their limitations in order to set realistic expectations.

There's nothing wrong with showing conceptual features that aren't yet available but point to where we might be heading. With so much wrapped up in the discussion around AI these days, though, even an inspiring vision should reflect what's actually possible.

To be clear, CLOi isn't a total fabrication: The line of robots is being used to help people in stores and an airport in South Korea, and LG is investing tens of millions of dollars in its robotics lineup. But companies should stick to sharing what's actually possible; otherwise they're no better than Boris.

Content Credits: VentureBeat

Image Credits: VentureBeat
