Increasingly powerful AI is poised to threaten many well-established professions. A strong case can be made that no industry or job is safe — including the upper echelons of corporate governance. The era is fast approaching when companies will be able to operate with virtually no human intervention. Here's how it will work.
During the Industrial Revolution, workers rightfully worried about losing their jobs to machines. It's a trend that's still ongoing as newer, more powerful robots continue to displace manual laborers. But over the past few years, we've started to catch a glimpse of an entirely new kind of automation, one driven by advances in information technology and artificial intelligence.
For the first time in history, so-called "thought workers" — those employed in white collar positions — are in danger of being replaced. Here, the tired adage that new automation technologies will create new kinds of work falls flat; AI systems will eventually be capable of doing practically anything a human can do. The coming era of technological unemployment could prove to be a disruptive one, indeed.
Advances in automation are set to trickle up all the way to the top levels of corporate governance. Businesses will soon be able to operate themselves, free from human intervention.
They're called Distributed Autonomous Corporations, or Decentralized Autonomous Corporations. More simply, they're just DACs. Conceived a few years ago in the chatrooms of the Bitcoin community, these systems will be able to operate without any human involvement as they're guided by an inviolable set of business rules.
As noted by Vitalik Buterin in Bitcoin Magazine, corporations are essentially groups of people that work together under a specific set of rules. A corporation "acts" because its board of directors agreed on a particular course of action. In turn, employees are hired to provide services to the corporation's customers, also under a certain set of rules (including the incentive of payment for services rendered).
The argument can be made that a corporation is little more than a bunch of upper-level managers who make decisions in accordance with a mission statement. A corporation's directives can be understood as a set of rules — such as maximizing profit for shareholders from the sale of goods or services — that can be executed by an automated system.
As noted by Buterin:
Most companies have some kind of mission statement; often it's about making money for shareholders; at other times, it includes some moral imperative to do with the particular product that they are creating, and other goals like helping communities sometimes enter the mix, at least in theory. Right now, that mission statement exists only insofar as the board of directors, and ultimately the shareholders, interpret it. But what if, with the power of modern information technology, we can encode the mission statement into code; that is, create an inviolable contract that generates revenue, pays people to perform some function, and finds hardware for itself to run on, all without any need for top-down human direction?
To work, a DAC would perform an "output-maximizing production function," which is a fancy way of saying that it'll do its best to accomplish a set of pre-designated goals. It would then divide the required labor into discrete and computationally challenging tasks — some of which a computer can do, some of which only a human can do. And indeed, DACs may be autonomous, but they will likely still require humans for certain jobs, such as customer service or for performing creative tasks.
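That division of labor can be sketched in a few lines of Python. This is purely illustrative: the task names and the set of machine-doable work are hypothetical, not drawn from any real DAC framework.

```python
# Hypothetical sketch: a DAC routing its pre-designated work either to
# automated processes or to human contractors. The task names and the
# AUTOMATABLE set are invented for illustration.

AUTOMATABLE = {"process_payment", "update_ledger", "schedule_pickup"}

def divide_labor(tasks):
    """Split tasks into machine work and human work, preserving order."""
    machine, human = [], []
    for task in tasks:
        (machine if task in AUTOMATABLE else human).append(task)
    return machine, human

machine_jobs, human_jobs = divide_labor(
    ["process_payment", "customer_support", "update_ledger", "logo_design"]
)
```

Customer support and design land in the human queue, matching the point above that DACs would still hire people for service and creative work.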
These autonomous systems, or cloud robots, would be reminiscent of a computer virus, a piece of code that survives by replicating itself from machine to machine without any deliberate human action. DACs, on the other hand, would thrive on deliberate, rational human action.
If much of this sounds similar to Bitcoin, it's because it is. And in fact, it was Bitcoin's remarkably powerful blockchain model — a cryptographically-secure open ledger of transactions distributed to nodes — that revealed the potential for DACs.
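At its core, that ledger is just a chain of blocks, each committing to the hash of the block before it, so tampering with any entry invalidates everything that follows. A minimal sketch (greatly simplified; a real blockchain adds proof-of-work, digital signatures, and peer-to-peer consensus):

```python
import hashlib

def block_hash(prev_hash, transactions):
    """Hash a block's transactions together with the previous block's hash."""
    payload = prev_hash + "|" + ";".join(transactions)
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny three-block chain from a genesis hash of all zeros.
chain = []
prev = "0" * 64
for txs in [["alice->bob:5"], ["bob->carol:2"], ["carol->alice:1"]]:
    h = block_hash(prev, txs)
    chain.append({"prev": prev, "txs": txs, "hash": h})
    prev = h

def verify(chain):
    """Recompute every hash; any edited block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["txs"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

Rewriting even the first transaction makes `verify` fail for the whole chain, which is what lets thousands of untrusting nodes agree on one history.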
It can even be said that Bitcoin is a kind of proto-DAC, an autonomous system that engineers an incentive for people to do things, such as hosting nodes, building infrastructures, writing code, and promoting the digital currency.
"Bitcoin has 21 million shares, and these shares are owned by what can be considered Bitcoin's shareholders," notes Buterin. "It has employees, and it has a protocol for paying them: 25 BTC to one random member of the workforce roughly every ten minutes. It even has its own marketing department, to a large extent made up of the shareholders themselves." That said, Bitcoin doesn't actually do anything; it simply exists while the world recognizes it. DACs, on the other hand, would actually work.
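The 21 million figure isn't arbitrary: it falls out of Bitcoin's issuance schedule, in which the block reward starts at 50 BTC and is cut in half every 210,000 blocks (the 25 BTC in the quote is the reward after the first halving). A short calculation, using integer satoshi arithmetic as the protocol itself does:

```python
# Derive Bitcoin's total supply cap from its halving schedule.
SATOSHIS_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000

def total_supply_btc():
    reward = 50 * SATOSHIS_PER_BTC  # initial block reward, in satoshis
    total = 0
    while reward > 0:
        total += BLOCKS_PER_HALVING * reward
        reward //= 2  # integer halving, rounding down, until it hits zero
    return total / SATOSHIS_PER_BTC
```

Summing the geometric series of rewards converges to just under 21 million BTC — the "shares" Buterin refers to.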
Buterin, a developer at the Toronto-based company Ethereum, is hoping to facilitate the development of DACs by providing a framework for various projects. His company's cryptocurrency, Ether, would be used to power the apps on the decentralized Ethereum network.
Buterin's cryptocurrency colleague, Daniel Larimer, is developing BitShares to build the DACs themselves, including a decentralized music sales service. These guys represent the first generation of DAC developers, and judging by their work, it'll only be a matter of time before we see our first viable, profitable, automated company.
DACs would be programmed with a sacrosanct set of business rules; that is, they could neither break those rules nor rewrite them on their own, though human stakeholders could still amend the rules from outside. These rules would be open source and known to the public, which would, in theory, make DACs more ethical and trustworthy than human-run firms.
BitShares' Stan Larimer has taken it upon himself to devise Three Laws of Robotics for Distributed Autonomous Corporations:
- A DAC must always obey its own published business rules.
- A DAC must never change its rules without consent of its stakeholders, except where such change would conflict with the First Law.
- A DAC must protect its own existence, as long as such protection does not conflict with the first two laws.
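As a rough illustration of how the Second Law might look in code, here is a hypothetical DAC whose published rules can change only by majority stakeholder vote. Every name here is invented for this sketch; real projects like BitShares implement consent through on-chain voting rather than a Python class.

```python
# Hypothetical sketch of Second-Law enforcement: rules are frozen at
# launch and amended only with majority stakeholder consent.

class DAC:
    def __init__(self, rules, stakeholders):
        self._rules = tuple(rules)            # published, immutable rules
        self._stakeholders = set(stakeholders)

    @property
    def rules(self):
        return self._rules

    def propose_rule_change(self, new_rules, votes_for):
        """Adopt new rules only if a strict majority of stakeholders consents."""
        consenting = votes_for & self._stakeholders
        if len(consenting) * 2 > len(self._stakeholders):
            self._rules = tuple(new_rules)
            return True
        return False
```

A lone stakeholder's proposal fails and the original rules survive; a majority's proposal goes through.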
According to Larimer, a DAC's mission statement — or utility function — would be "typically implemented as publicly auditable open-source software distributed across the computers of their stakeholders."
A human can become a stakeholder by purchasing stock in the company or by agreeing to be paid in that stock for providing services to the company. In turn, this stock would entitle its owner or stakeholder to a share of the profits derived by the DAC. They could even have say in how the DAC should grow and be run.
Once set up and ready to go, a DAC could own capital, enter contracts, manage supply chains, set up insurance policies, forge rental agreements and futures contracts, sell products, and employ robots or humans for certain tasks. It could even develop new products and services and open up entirely new markets.
A DAC would have no physical workspace, instead residing on the Internet in the form of hundreds or thousands of distributed nodes hosted on stakeholder computers. According to The Economist, such a configuration would likely increase its efficiency within markets and make it capable of engaging in instant trust-less business transactions, peer-to-peer bond and stock trading, "verifiable-yet-anonymous" voting, and decentralized currency exchange.
What's more, DACs won't have to pay a board of directors, instead channeling that money to their shareholders in the form of dividends.
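The dividend mechanics are simple pro-rata arithmetic. A minimal sketch, with hypothetical stakeholders and share counts:

```python
# Illustrative sketch: profit flows straight to stakeholders in
# proportion to their stock, with no board of directors to pay first.

def pay_dividends(profit, holdings):
    """Split profit pro rata across stakeholder share counts."""
    total_shares = sum(holdings.values())
    return {owner: profit * shares / total_shares
            for owner, shares in holdings.items()}

payouts = pay_dividends(1000.0, {"alice": 60, "bob": 30, "carol": 10})
```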
So what would a DAC actually look like? David Morris from Aeon offers an example:
Imagine, for instance, a bike-rental system administered by a DAC hosted across hundreds or thousands of different computers in its home city. The DAC would handle the day-to-day management of bikes and payments, following parameters laid down by a group of founders. Those hosting the management programme would be paid in the system's own cryptocurrency – let's call it BikeCoin. That currency could be used to rent bikes – in fact, it would be required to, and would derive its value on exchanges such as BitShares from the demand for local bike rentals.
Guided by its management protocols, our bike DAC would use its revenue to pay for repairs and other upkeep. It could use online information to find the right people for various maintenance tasks, and to evaluate their performance. A sufficiently advanced system could choose locations for new stations based on analysis of traffic information, and then make the arrangements to have them built.
With a model like this, DACs could be transplanted to other cities at virtually no cost.
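Morris's example can be caricatured in a few lines. Everything here, from the rental price to the upkeep fraction, is an invented parameter; a real BikeCoin-style DAC would encode such policies in its published, auditable rules rather than hard-code them.

```python
# Toy version of the BikeCoin example: rentals are paid in the DAC's
# own token, and a fixed share of revenue is reserved for upkeep.
# All parameters are hypothetical.

class BikeDAC:
    RENTAL_PRICE = 5        # BikeCoin per rental (illustrative)
    UPKEEP_FRACTION = 0.2   # share of revenue earmarked for repairs

    def __init__(self):
        self.treasury = 0.0      # funds available for dividends/expansion
        self.upkeep_fund = 0.0   # funds reserved to pay maintenance workers

    def rent_bike(self, payment):
        """Accept a rental if paid in full; split revenue per protocol."""
        if payment < self.RENTAL_PRICE:
            return False
        self.upkeep_fund += self.RENTAL_PRICE * self.UPKEEP_FRACTION
        self.treasury += self.RENTAL_PRICE * (1 - self.UPKEEP_FRACTION)
        return True
```

Because the whole policy lives in code rather than in a local office, standing up the same system in a new city really is close to free.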
As noted, DACs will be totally transparent companies. Their core values and activities will be there for all to see. But by their very nature — massively distributed and decentralized artificially intelligent autonomous agents — they could be abused very easily. They could even be misprogrammed in a way that results in complete catastrophe.
A rogue DAC, whether it's one designed to violate basic business laws or one that's just sloppily programmed, would be incredibly hard to contain and control — particularly if its utility function were modified to be evasive and its core programming capable of working beyond conventional rules. And owing to their distributed nature, these things would be very difficult for police and governments to shut down. According to Daniel Larimer, who Aeon Magazine describes as a "fairly radical" libertarian, "decentralized technologies will make governments entirely irrelevant, ineffective and unable to do anything."
Another particularly troubling prospect is the self-improving, self-modifying DAC. Again, DACs aren't supposed to operate in this way, but there's always the chance that unscrupulous developer-entrepreneurs will deliberately endow their DACs with this capability. This brings to mind the nightmarish scenario in which a recursively improving AI quickly develops to the point where it becomes unstoppable as it works to fulfill its utility function. Imagine, for example, a superintelligent DAC armed with the mission statement of securing a monopoly for itself at the expense of other companies, including other DACs. It could wage a cyberwar in an extra-human digital ecology against these rival entities as it works to corner the market.
This also brings to mind the old "paperclip scenario" in which an ASI is programmed with the injunction to build as many paperclips as possible, and in the process of doing so, converts all of the matter on Earth into paperclips.
Lastly, it's also worth considering which humans DACs will actually benefit. On one hand, because they work outside of geographical and national boundaries, they could empower people from virtually anywhere. However, as noted by Morris, they may only work to benefit the technologically savvy, resource-rich, and well-educated. In other words, those people who are already privileged.
Regardless, DACs are coming and we'd best be prepared to take full advantage of them while also being extremely wary of the consequences.