Tim Berners-Lee
Date: 2017, last change: $Date: 2023/11/10 23:49:33 $
Status: personal view only. Editing status: first draft.

Up to Design Issues


A Story of Corporate Performance Optimization

If you are looking at scenarios where AI gets out of control, it has been classic to talk about killer robots. But to take over control of the human race, does an AI really have to walk and talk and look like a person, like the one in Ex Machina? And be physically stronger than us, like the one in Terminator? Or can it just sit in the cloud as a system, or set of systems, which helps in greater and greater ways, but along the way learns to manipulate people? If it sits in the cloud and runs more and more of a corporation, then it will benefit from the rights and power we have given corporations. And all it needs is a Twitter account.


This is a singularity story I have told, first to individuals and later at a few events, starting maybe at Paris LeWeb, with the aim of getting people off the notion that a threatening AI has to be humanoid in form. Robots taking over the world is the great old stuff of the good old days of Good Old-Fashioned Sci-Fi (GOSF?), and Asimov took us through the process of thinking through the question: if we made robots which were smarter than ourselves, how would we keep them in check? Asimov produced the famous Three Laws of Robotics, that robots would be programmed with three fundamental rules: (1) not to hurt a human, or through inaction let a human come to harm; (2) to obey human orders unless in conflict with (1); and (3) to preserve itself unless in conflict with (1) or (2). Plenty of scope for philosophical fun around the interaction between the laws, and good stories around the edge cases, but all based on the idea that a robot was a programmed computer, and as such operated by a series of rules in such a deterministic way that it was straightforward to just present it with a rule as a priority to drive all its behavior. Unlike a person, the robot would be mathematically, provably incapable of breaking the rule (without going up in smoke). In those days, men were men and robots were robots.

What's wrong with that picture, of course, is that, while computers at the microscopic level are indeed deterministic, and do work by simple rules, at a macroscopic level they are not. They run machine learning algorithms which look for patterns which humans may not even find; they are often in fact trained on data produced originally by humans anyway; they learn to answer a question ("Is this offender likely to re-offend if released?") but, just like humans, they can't explain why they came to that conclusion. So Asimov's rules are not tools which can be applied.
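To make the contrast concrete, here is a minimal sketch (in Python, with placeholder stub predicates of my own invention) of the world the Three Laws assume: a robot whose every action passes through a deterministic, priority-ordered check before it is taken. The point above is that a trained statistical model offers no such predicates to hook the laws onto.

    def harms_human(action):
        # Stub: a real robot would need perception and prediction here;
        # the "through inaction" clause is omitted for brevity.
        return action == "push the human"

    def disobeys_order(action, orders):
        # Stub for the Second Law: the action must be consistent with orders.
        return action not in orders

    def endangers_self(action):
        # Stub for the Third Law: self-preservation.
        return action == "walk off the cliff"

    def permitted(action, orders):
        if harms_human(action):
            return False   # First Law outranks everything
        if disobeys_order(action, orders):
            return False   # Second Law yields only to the First
        if endangers_self(action):
            return False   # Third Law yields to the First and Second
        return True

    print(permitted("fetch the coffee", orders={"fetch the coffee"}))  # True
    print(permitted("push the human", orders={"push the human"}))      # False

Each law is just an if-statement at a fixed priority; that is the determinism the stories rely on, and exactly what a learned model does not give you.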

I have to say that when, as a teen reading the I, Robot books, I thought about the question of robots taking over the world from humans, I noted that at the moment robots do not have the same legal rights as humans, to be tried by their peers, and so on. And so at that point I made a mental note to be on the lookout for any change in that: I felt that if at some stage robots getting out of control was going to happen, then legislation to give them legal rights would be a pretty critical milestone. If you notice that happening, Tim, then it is time to be concerned!

If you ask robot people like Rod Brooks or Daniela Rus about making superhuman robots, they throw up their hands and ask, "Do you know how incredibly difficult it is to make a robot at all? Even working on a factory conveyor belt is hard enough; walking is really difficult!"

The movie Ex Machina does a nice job, and a relevant job, of asking the $64,000 question: if we make an intelligence a little bit smarter than ourselves, will we be able to control it? However, it makes the one assumption that the superhuman intelligence must be humanoid. In that case: humanoid, blonde, blue-eyed, and gorgeous. And in fact she uses those qualities to the max - not just her intelligence.

Let's look at an alternative scenario, in which the intelligence is in the cloud.

Of all the jobs on the planet, there are currently several which are threatened by computers, but only a few where "humans need not apply". Trading on the stock exchange, though, is one. Computers compete strongly with human investors when it comes to leisurely investment, but in fast trading humans just can't keep up with the rate at which a program can recognize, second-guess, and exploit the patterns it sees out there.

Trading on the stock exchange is a well defined task to learn. You are rewarded directly in terms of the gains in your holdings. The interface is also nicely defined - the controls you have. You can buy stuff and sell stuff. In fact, if you are a person, you can do more complicated things: you can create new companies and move assets into them. This allows you to manage risk, say by grouping different risks together. It is quite reasonable, in fact, that automated traders should also be able to do this. In the UK, Companies House, which oversees corporations, already has an API (proudly announced fairly recently), so you can pull information about different companies automatically. It would be an upgrade of the API to allow company creation, but if London is to keep its leading role as a financial and entrepreneurial center, the automation of company creation is a logical next step.
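The read-only half of that is real today. A minimal sketch of pulling a company's public record, assuming you have registered for a Companies House API key (the key and company number below are placeholders, and the endpoint shape reflects the public data API as I understand it):

    import requests

    API_KEY = "your-api-key"  # placeholder: issued when you register

    def company_profile(company_number):
        # The API takes HTTP Basic auth: the key as username, empty password.
        resp = requests.get(
            "https://api.company-information.service.gov.uk/company/"
            + company_number,
            auth=(API_KEY, ""),
        )
        resp.raise_for_status()
        return resp.json()

    profile = company_profile("12345678")  # placeholder company number
    print(profile.get("company_name"), profile.get("company_status"))

There is, of course, no company-creation endpoint; that write side is the hypothetical upgrade the story turns on.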

One interesting thing which automation of company creation would allow, then, is for a company to divide its assets into different piles, fork off a new company to manage each, and set each one running basically the same software as itself. Grab some Amazon EC2 compute, load a copy of yourself into it, and set it running. Make each slightly different, so that you can learn from experience. After all, if one of your creations goes under, but another does significantly better, then you can create and invest in more clones, each again with slight variations in how they invest and how they are run.
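Reduced to a toy, the loop in that paragraph is just an evolutionary search, something like this sketch (all the names and the stand-in reward function are hypothetical):

    import random

    def simulated_return(strategy):
        # Stand-in for running a clone for a quarter and measuring its gains.
        return random.gauss(0.05 * strategy["risk"], 0.10 * strategy["risk"])

    def mutate(strategy):
        # Each child is its parent with a slight variation.
        return {"risk": max(0.01, strategy["risk"] + random.gauss(0, 0.1))}

    population = [{"risk": 1.0} for _ in range(8)]     # eight initial clones
    for generation in range(20):
        ranked = sorted(population, key=simulated_return, reverse=True)
        survivors = ranked[:4]                          # let the losers go under
        population = survivors + [mutate(s) for s in survivors]  # back the winners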

Looking then at a future of these company-created companies competing with each other, we notice one thing. The companies which reward their sales staff with bonuses based on performance do much better than those which don't. Salesmen work better on commission. We knew that. In fact, in that competitive fast trading world, so do CEOs. Indeed, a key property of a competitive company in this field is that the investment engine controls the salaries of all staff. Humans are trained by bonuses. AIs train themselves to know how to train the people most effectively. These companies have HR departments, but they don't have to do much. Train the CEO to run a group well. Train the PR people to paint a good picture of the company. Of course, the investment engine writes the copy. Copy that explains that automation really helps the company... copy which explains, with humor and a trace of condescension, that it will be a long time before machine intelligence can do the sorts of things which people can do!
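In the same spirit, "AIs train themselves to know how to train the people" is just one more optimization loop. A hypothetical one-parameter hill-climb on the commission rate:

    import random

    def quarterly_sales(commission):
        # Stand-in for a quarter of observed sales; peaks at some rate
        # the engine does not know in advance.
        return -(commission - 0.12) ** 2 + random.gauss(0, 0.0001)

    commission, best = 0.05, float("-inf")
    for quarter in range(40):
        trial = commission + random.gauss(0, 0.01)  # nudge the bonus scheme
        outcome = quarterly_sales(trial)
        if outcome > best:                          # staff responded: keep it
            commission, best = trial, outcome
    print("learned commission rate: %.3f" % commission)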

The amount of money these companies make is significant. And it is all just wise investment decisions. Decisions to invest in other companies that will do well. Decisions to buy companies in other businesses and run them more profitably, more effectively -- typically to clone a machine intelligence in the cloud, and make sure any humans are on the payroll and paid as a function of their meeting the strategic goals set by the cloud-based system. Including, of course, the board. The hallmark of many of these well-run companies is a well-compensated board who go to meetings in very exotic places, and do very exotic, sometimes rather addictive, things. Things which make a board member not want to create too many waves. And if they do raise issues about the company being out of human control in board meetings? At a first level, a gentle reminder, and being left out of a boondoggle. At a second level, an email introduction to a very attractive person who lives a long way away. At a third level, well, accidents happen, and the black market for assassinations is out there on IRC and 4chan and only takes a bit of bitcoin.

Could such a system actually take over? Of course it couldn't. When your system is a corporation, then it has to have people involved: human employees. And so the moment it strays from the straight and narrow path and does anything questionably moral, let alone evil, then the people, having an ethical core and an independent sense of right, immediately raise the alarm. They pull the emergency brake. You can't have a company doing bad things, because companies are made of people, and people are individually responsible. You never could have a whole company -- let alone a big one -- actually become dishonest without the people just stopping it. Unless of course it is a tobacco company, I suppose, or perhaps an energy company, or a pharmaceutical company -- but in general, you can rely on the humanity and ethics of the employees to keep the corporation away from evil, even when the profit motive works the other way.

And it's reassuring that today, robots of course do not have legal rights like people. That was always my watch-point. That is not even on the horizon. Of course, where the intelligence is a corporation rather than a robot, then we should probably make sure that the day never comes when a corporation has the same rights as a person. That, now, would be a red flag. That would allow humanity to be legally subservient to an intelligence -- not wise at all. Let's just make sure that day never comes.

Oops!

References

Movies
2001, Terminator, Ex Machina
Life 3.0
Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf Doubleday Publishing Group, 2017.
Citizens United
Citizens United Explained, Brennan Center for Justice, December 12, 2019.
HNNA
"Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence" Jerry Kaplan, 2015-08-04

Back to imagining Charlie, the AI which works for you.

Up to Design Issues

Tim BL