Photo illustration by Slate.
According to the BBC, Vital, a program by UK-based Aging Analytics, "will vote on whether to invest in a specific company or not. ... [Deep Knowledge Ventures] said that Vital would make its recommendations by sifting through large amounts of data." Furthermore, "The algorithm looks at a range of data when making decisions - including financial information, clinical trials for particular drugs, intellectual property owned by the firm and previous funding."
Mr. Pugh notes that "as of now, AI directors would be illegal under U.S. corporate law, which requires directors to be 'natural persons.' But the idea of putting AI on a corporate board isn't as far-fetched as it may seem. In a 2015 study by the World Economic Forum, which surveyed over 800 IT executives, 45 percent of respondents expected that we'd see the first AI on a corporate board by 2025, and that such a breakthrough would be a tipping point for more."
With respect to the two aforementioned difficulties boards encounter, namely (1) limited time and attention spans and (2) a flow of information about corporate affairs that is typically controlled by the CEO, Mr. Pugh suggests thinking "of how much further it could go if a company were to supplement" high-level "supervision from, say, sophisticated AI that could independently monitor fine-tuned goals ... and even balance competing interests on a more nuanced level. It's the kind of technology that could help those human board members transition from high-level supervisory entities to effective micromanagers."
What is more, "Consider the data-hungry environments where AI thrives. Machine learning is ideal when you need to find hidden patterns in vast troves of data. An AI director could consume huge amounts of information about the company and the business environment to make good decisions on issues like the future demand for the company's products or whether the company should expand to China. This is exactly how the first AI director, appointed by the Hong Kong company Deep Knowledge Ventures, is being used: It's tasked with consuming data about life science companies and then voting on which companies are good investments. The company says that it relies on the AI's recommendations by refraining from making any investments that the AI doesn't approve—which they say has helped with eliminating some kinds of bias and avoiding 'overhyped' investments."
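To make that arrangement concrete, here is a minimal Python sketch of an "AI veto" rule of the kind described above. Every company name, signal, weight, and threshold below is invented for illustration; this is not Deep Knowledge Ventures' actual model.

```python
# Hypothetical sketch of the "AI veto" investment rule: a model scores
# each candidate on the kinds of signals the article mentions (financials,
# clinical trials, intellectual property, prior funding), and the fund
# only proceeds when the model approves. All names, weights, and the
# cutoff are invented for illustration.
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    cash_runway_months: float   # financial information
    trials_in_phase3: int       # clinical-trial progress
    granted_patents: int        # intellectual property
    prior_funding_rounds: int   # previous funding

def ai_director_vote(c: Company) -> bool:
    """Toy approval rule: weight a few normalized signals, apply a cutoff."""
    score = (
        0.4 * min(c.cash_runway_months / 24, 1.0)
        + 0.3 * min(c.trials_in_phase3 / 2, 1.0)
        + 0.2 * min(c.granted_patents / 10, 1.0)
        + 0.1 * min(c.prior_funding_rounds / 3, 1.0)
    )
    return score >= 0.5

def fund_decision(humans_vote_yes: bool, c: Company) -> str:
    # The fund refrains from any investment the AI does not approve.
    if not ai_director_vote(c):
        return f"PASS on {c.name}: AI director vetoed"
    return f"INVEST in {c.name}" if humans_vote_yes else f"PASS on {c.name}: humans declined"

print(fund_decision(True, Company("BioCo", 30, 2, 12, 2)))  # INVEST in BioCo
print(fund_decision(True, Company("HypeCo", 6, 0, 1, 5)))   # PASS on HypeCo: AI director vetoed
```

The point of the design is the veto: the human partners can still decline a deal on their own judgment, but they cannot proceed over the model's objection.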
Mr. Pugh asks: "But why go to the extreme of giving AI its own seat when, theoretically, the board could just consult such algorithmic assessments to inform its decisions? This gets back to the issues of time, loyalty, and access to information. Unlike a human, an AI director is appealing as a potential independent tiebreaker on any disagreement between the human board members. What's more, if such algorithms cast votes, it will be harder for other directors to disregard those votes, and it will force those directors to find compelling reasons to oppose them. In some cases, an AI director's vote could be a red flag, an antidote to groupthink. In others, it may force human directors to confront potential biases in their thinking, like loyalty to a particularly charismatic CEO. Think of what an AI director at General Electric might have focused on in recent years when the company appeared to disregard its plummeting cash flow from operations and mounting pension liabilities over many years."
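The tiebreaker role, at least, is mechanical enough to sketch. In the toy tally below (my own construction, not anything described in the article), the AI director's vote counts like any other, but its practical effect is to decide the outcome only when the human directors deadlock:

```python
# Toy board-vote tally illustrating the "independent tiebreaker" idea.
# The rule here is an illustrative assumption, not an actual governance
# mechanism from the article.
def board_outcome(human_votes: list[bool], ai_vote: bool) -> bool:
    yes = sum(human_votes)
    no = len(human_votes) - yes
    if yes != no:          # the humans already have a majority
        return yes > no
    return ai_vote         # deadlock: the AI director's vote decides

print(board_outcome([True, True, False, False], ai_vote=True))   # True (AI breaks the 2-2 tie)
print(board_outcome([True, False, False, False], ai_vote=True))  # False (human majority prevails)
```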
Crucially, Mr. Pugh observes "[t]here are, of course, limitations and issues to overcome before giving software a seat at the directors' table. For one, many forms of AI 'learn' from human-generated and human-curated data—which has been known to replicate human bias. This kind of bias can be hard to fix because it can creep in at many different stages of AI training, including the goals programmers assign the AI to achieve, the data sets they feed it, the data attributes they choose to focus on, and the data they use to test it. Many programmers are becoming more cognizant of these issues, however, and are looking at better ways to address these biases in the process of developing these tools—including projects like AI that aims to 'de-bias' other AI tools."
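A toy example makes the inherited-bias point concrete. In the sketch below, which uses entirely fabricated data, a simple frequency-based model trained on human-curated approval labels reproduces exactly the skew those labels carry:

```python
# Fabricated example of bias entering through training data: historical
# approvals favor one (made-up) founder background, and a model that
# learns from approval frequencies inherits that preference unchanged.
from collections import defaultdict

history = [  # (founder_background, was_approved) -- invented labels
    ("ivy_league", True), ("ivy_league", True), ("ivy_league", False),
    ("state_school", False), ("state_school", False), ("state_school", True),
]

counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [approvals, total]
for background, approved in history:
    counts[background][0] += approved
    counts[background][1] += 1

def predict(background: str) -> bool:
    approvals, total = counts[background]
    return approvals / total >= 0.5  # "learned" rule mirrors the biased history

print(predict("ivy_league"), predict("state_school"))  # True False
```

Nothing in the code is malicious; the skew arrives entirely through the data, which is exactly the failure mode Mr. Pugh describes and why fixing it requires attention at every stage of training.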
Moreover, "Deep learning techniques are currently 'black boxes.' A self-driving car may be able to identify a crosswalk, and a valuation algorithm may be able to say that a company is worth $X, but if AI directors are going to interact with shareholders and human directors, they need to be able to explain their conclusions. If we can't look under the hood and see their reasoning, AI directors will be hard to trust, and courts won't be able to ensure that they are fulfilling their legal duties to provide shareholders 'candor'—i.e., all information that would be important to a shareholder. Under securities law, one of the most common disclosure items for directors is an explanation of how and why directors are handling risk in a specific way. If machine learning algorithms can reveal their internal logic and are designed to analyze and communicate such risks well, they may even do a better job at providing such disclosures by helping humans focus on the right details by filtering out noise in data.
"This also gets at another advantage that a transparent algorithm could have: a refreshing lack of personal ambition or interests. Assuming sufficient advancement in AI technology, shareholders and stakeholders alike could trust AI directors to be forthcoming about why they are taking a specific action—an attribute not always found in their human counterparts. Courts have recognized that, while directors may ostensibly be trying to benefit shareholders, there's an 'omnipresent specter' that members of the board are, intentionally or not, actually pursuing self-interest. On a hybrid board with both humans and AI, the AI could provide shareholders, as well as other directors, with a more objective analysis when it comes to, say, questions like how a potential merger could affect directors own net worth."
Mr. Pugh writes that "legislative proposals in the U.S. call for directors to consider shareholders' and other stakeholders' interests. This could be achieved by requiring a subset of human directors to look out for employees while others remain focused on shareholders—or it could be achieved by fine-tuning an individual AI director's ultimate goals. If AI technology advances to the point where AI directors could explain how they reach their conclusions, then a single AI director could, for example, be programmed to consider both shareholder and stakeholder interests in a more transparent way than a human director could."
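In the simplest case, "fine-tuning an individual AI director's ultimate goals" could reduce to a single disclosed objective that blends the two sets of interests, with the trade-off visible in the weights. The sketch below is an illustrative assumption of mine, not a proposal from the article:

```python
# A transparent multi-constituency objective: the shareholder/stakeholder
# trade-off is an explicit, auditable pair of weights rather than a
# director's private judgment. Weights and utilities are invented.
SHAREHOLDER_WEIGHT = 0.6
STAKEHOLDER_WEIGHT = 0.4  # employees, communities, and other constituents

def director_objective(shareholder_value: float, stakeholder_value: float) -> float:
    """The single objective this hypothetical AI director maximizes."""
    return SHAREHOLDER_WEIGHT * shareholder_value + STAKEHOLDER_WEIGHT * stakeholder_value

# Comparing two hypothetical courses of action:
close_plant = director_objective(shareholder_value=1.0, stakeholder_value=-0.8)
retool_plant = director_objective(shareholder_value=0.6, stakeholder_value=0.5)
print(f"close plant: {close_plant:+.2f}, retool plant: {retool_plant:+.2f}")  # +0.28 vs +0.56
```

Because the weights live in a disclosed objective rather than in a boardroom's unspoken priorities, shareholders and regulators could audit the trade-off directly.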
In my experience serving on the boards of directors of several companies, I can attest to the problems of limited time, loyalty, and access to information. While AI is not a complete solution to corporate governance or strategic planning challenges, I see value in including an AI director. Doing so could provide objectivity that forces human directors to confront their biases, or reveal, through machine learning, impacts of a decision that human directors could not ascertain on their own.
What do you think? Will your company appoint an AI director to its corporate board if or when it is lawful to do so?
Aaron Rose is an advisor to talented entrepreneurs and co-founder of great companies. He also serves as the editor of Solutions for a Sustainable World.