AI Fearmongering

The future of AI is a topic of contention at the moment, with many prestigious names bringing their conflicting opinions to the table. On one hand there are the pessimists, among them Elon Musk and Stephen Hawking, who warn about the potential threat AI poses to mankind's existence. On the other we find optimists like Mark Zuckerberg, who considers such fearmongering not only negative but downright irresponsible.

The idea that artificial intelligence could be dangerous is not new. A particularly widespread AI doomsday hypothesis is that of the singularity: an upgradable superintelligent agent enters an ever-accelerating cycle of self-improvement, each iteration producing a machine more intelligent than the last, until its problem-solving and inventive skills far surpass those of humanity, leaving no room for mankind in the process.

We are obviously still very far from such a catastrophic "intelligence explosion", but AI in its current form could still have disastrous consequences in the short term if left unsupervised. Immediate concerns for policymakers are already numerous and pressing:

  • Experts are still unsure about how the rise of AI will affect employment prospects in the near future, especially for low- to middle-skilled jobs. Some argue that a universal basic income may be required to mitigate the societal upheaval caused by rapid technological unemployment.

  • Machine learning models carry the risk of encoding negative prejudices behind seemingly inoffensive proxies. For instance, using postal code as a predictive feature can make an algorithm racially biased (see the sketch after this list), while flawed data collection practices can introduce a sexist bias. Such imperfect models are dangerous in that they allow their users, who may be in a position of authority, to make unfair decisions while remaining unaccountable ("The algorithm said so.").

  • The many benefits of Big Data come at the cost of our personal data being used for commercial ends. This raises growing concerns about the consequences of security breaches, the ethics of message targeting, and the price of maintaining privacy, to name only a few.
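To make the proxy problem concrete, here is a minimal sketch in Python. Everything in it is synthetic and hypothetical: a protected attribute (`group`) that the model never sees, a `postal_code` indicator that correlates with it, and historically biased approval labels. The model is trained without the protected attribute, yet its predictions reproduce the bias through the proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model) and a postal-code
# indicator that matches it ~80% of the time -- the proxy.
group = rng.integers(0, 2, n)
postal_code = (group + (rng.random(n) < 0.2)) % 2

# Income is independent of group, but the historical labels are biased:
# at equal income, group-1 applicants were approved less often.
income = rng.normal(50, 15, n)
approved = (income - 8 * group + rng.normal(0, 5, n) > 45).astype(int)

# Train on income and postal code only; `group` is deliberately excluded.
X = np.column_stack([income, postal_code])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# The bias survives: postal code leaks group membership to the model.
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())
```

Running the sketch shows markedly different approval rates between the two groups even though `group` was never a feature: dropping the sensitive column is not enough when a proxy leaks it.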

During an interview at the National Governors Association, Musk insisted that "AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late." To his credit, Musk's provocative and vivid warnings of an imminent AI doomsday are bound to capture the public's imagination and draw attention to the increasingly important topic of artificial intelligence regulation. However, it is fair to argue that we should first establish an appropriate regulatory framework to address foreseeable issues before worrying about existential threats beyond our imagination; otherwise we would be building on very weak foundations. How can we properly regulate issues in a distant future we can barely imagine if we cannot even agree on who is responsible when a self-driving car crashes?
