What if board executives are duped by hostile chatbots and act on misinformation?
None of these bot-gone-bad scenarios is science fiction.
“Basically, anything that is required of a human we apply to our AI tools.
It’s not designed for AI, but it’s a start,” he says.
“Who owns that data and what becomes of it has entered the realm of science fiction,” he says.
As executive director of Seed Vault, a fledgling not-for-profit platform launched to authenticate bots and build trust in AI, Professor Shedroff sees transparency as a starting point.
Professor Winfield is optimistic about workforce augmentation and proposes a black box with investigatory powers in the event of an AI catastrophe.
But not all AI is easy to police, with some varieties more traceable than others, says Nils Lenke, board member of the German Research Institute for Artificial Intelligence, the world’s largest AI centre.
“There are research groups in the US that claim to be able to diagnose mental illness by analysing 45 seconds of video.”

Earlier this year, the EU voted to legislate around non-traceable AI, including a proposal for an insurance system to cover shared liability by all parties.

More work is needed to create AI accountability, however, says Bertrand Liard, partner at global law firm White & Case, who predicts that proving liability will become harder as technology advances faster than the law.

Manufacturers are installing more intelligent robots on the factory floor, and Ian Joesbury, director at Vendigital, anticipates a mixed workforce in the future.

“Skilled technicians will work alongside a co-bot that does the heavy lifting and quality assurance,” he says.
“With [Google’s] DeepMind now creating an AI capable of imagination, businesses will soon face the challenge of whether AI can own or infringe intellectual property rights,” says Mr Liard.