“It is the mark of an educated mind to be able to entertain a thought without accepting it.” – Aristotle
Uber and Airbnb are two of the most innovative companies in the world, with mind-blowing valuations (X Media Lab is a heavy user of both). Both Bloomberg Business and The Guardian have run excellent edited extracts this week from Brad Stone’s new book The Upstarts: How Uber, Airbnb, and the Killer Companies from the New Silicon Valley are Changing the World, on the behind-the-scenes stories of their origins, battles, and staggering successes. If you want more, or just something short and sharp, here’s a Q&A interview with Brad Stone.
Whether it’s HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey or the computer that learned to recognize cats by watching YouTube videos, artificial intelligence has a long and storied history of crossing over with movies. A new project, created by Copenhagen-based creative coding studio Støj, represents the next step of that process: an AI designed to watch, and try to make sense of, Hollywood movie trailers. The main component of the project is YOLOv2, a system for real-time object detection that can recognize everyday objects like people, ties, cars, and chairs as they appear. Short for “You Only Look Once,” YOLO is extremely fast, reasonably accurate, and able to detect and classify multiple objects in the same image. Watch the algorithm try to make sense of the trailer from The Wolf of Wall Street (Vid: 2.08). Awesome.
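To see how a detector can report several objects in one frame, here is a minimal sketch of two standard building blocks used by detectors in the YOLO family (this is an illustration of the general technique, not Støj’s or YOLO’s actual code): intersection-over-union (IoU) to measure box overlap, and non-maximum suppression (NMS) to keep only the best box per object. The boxes and scores are made up for the example.

```python
# Sketch of IoU and non-maximum suppression, two building blocks behind
# multi-object detectors like YOLO. All data below is illustrative.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.5):
    """Keep the highest-scoring box per object; drop overlapping duplicates."""
    remaining = sorted(detections, key=lambda d: d["score"], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [d for d in remaining
                     if iou(best["box"], d["box"]) < iou_threshold]
    return kept

detections = [
    {"box": (10, 10, 50, 50), "score": 0.9, "label": "person"},
    {"box": (12, 12, 52, 52), "score": 0.6, "label": "person"},  # duplicate hit
    {"box": (100, 100, 140, 160), "score": 0.8, "label": "chair"},
]
print([d["label"] for d in nms(detections)])  # → ['person', 'chair']
```

The lower-scoring “person” box overlaps the first one heavily (IoU ≈ 0.82), so NMS suppresses it, leaving one clean detection per object, which is what lets the algorithm label several distinct things in a single trailer frame.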
What happens when you have Deep Learning begin to generate your designs? The common misconception is that a machine’s designs would look ‘mechanical’ or ‘logical’. What we seem to be finding instead is that they look very organic, even like an alien biology. What is surprising is that these designs do not exist for the sake of style. Rather, they are the optimal solutions to multiple competing design requirements. Why do they look organic or biological? Is there some underlying fundamental principle in biological systems that leads to this? Why aren’t the solutions sparse, but rather complex? While Herzog & de Meuron’s stunning $843 million Hamburg philharmonic hall is filled with architectural gems, its most interesting feature is the central auditorium, a gleaming ivory cave built from 10,000 unique acoustic panels that line the ceiling, walls, and balustrades. The room looks almost organic, like a rippling, monochromatic coral reef, but bringing it to life was a technological feat (check out all 9 photos of this amazing building built with algorithms as well as other materials).
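The core idea behind those 10,000 unique panels is parametric design: one generative rule, driven by a per-panel seed, yields thousands of variations that are all different yet all consistent. Here is a toy sketch of that idea; the depth-map rule and parameters below are invented for illustration and have nothing to do with the actual acoustic model used for the hall.

```python
import math
import random

# Toy sketch of parametric panel generation: one rule, many unique outputs.
# Each "panel" is a small grid of dimple depths driven by its own seed, so
# no two panels are alike, yet all follow the same generative rule.
# The dimple function and depth range are illustrative, not a real acoustic model.

def panel(seed, size=4, min_depth=1.0, max_depth=9.0):
    rng = random.Random(seed)          # deterministic per-panel variation
    phase = rng.uniform(0, 2 * math.pi)
    scale = rng.uniform(0.5, 1.5)
    return [[round(min_depth + (max_depth - min_depth) *
                   (0.5 + 0.5 * math.sin(scale * (x + y) + phase)), 2)
             for x in range(size)]
            for y in range(size)]

# 10,000 unique panel depth maps from a single rule:
panels = [panel(i) for i in range(10_000)]
```

Swap the toy sine rule for an acoustic simulation scoring each candidate against competing requirements (reflection, diffusion, fabrication cost) and you get the optimization-driven flavor of design the paragraph describes.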
Our life is increasingly shaped by such algorithmic processes, from the fluctuations of financial markets to facial recognition technology. Manichean arguments for or against digital algorithms are hardly relevant. Rather, we need to understand how algorithms embedded in widespread technologies are reshaping our societies. And we should imagine ways to open them up to public scrutiny, thus grounding shared practices of accountability, trust, and transparency. This is essential for the simple reason that algorithms are not neutral. They are emblematic artefacts that shape our social interactions and social worlds. They open doors on possible futures. We need to understand their concrete effects, for example the kinds of social stratification they reinforce. We need to imagine how they might work if they were designed and deployed differently, based on different priorities and agendas, and different visions of what our life should be like. Massimo Mazzotti on Algorithmic Life in this week’s LA Review of Books.
This kind of algorithmic opacity is a great concern to DARPA (the Defense Advanced Research Projects Agency, progenitor of the original internet) as it develops Human-Machine “partnerships” built on prediction accuracy balanced with intelligibility. The Explainable AI (XAI) program aims to create a suite of machine learning techniques that: 1) produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and 2) enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.
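One simple flavor of what “explaining their rationale” can mean in practice (an illustration of a common post-hoc technique, not something from the XAI program itself) is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. A large drop means the model leans on that feature. The toy model and data below are invented for the example.

```python
import random

# Sketch of permutation importance, a common post-hoc explanation technique:
# shuffle a feature's values and measure the drop in accuracy. A big drop
# means the model relies on that feature. Model and data are made up.

def model(x):
    """Toy classifier: predicts 1 when feature 0 is large; ignores feature 1."""
    return 1 if x[0] > 0.5 else 0

X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3],
     [0.7, 0.2], [0.3, 0.8], [0.95, 0.5], [0.05, 0.6]]
y = [model(x) for x in X]   # labels the toy model gets right by construction

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)                      # break the feature/label link
    X_shuffled = [x[:feature] + [v] + x[feature + 1:]
                  for x, v in zip(X, col)]
    return accuracy(X, y) - accuracy(X_shuffled, y)   # drop in accuracy

print(permutation_importance(X, y, 0))  # usually a large drop: model uses feature 0
print(permutation_importance(X, y, 1))  # → 0.0: feature 1 is ignored entirely
```

Even this crude probe produces the kind of statement DARPA is after: “the model’s predictions depend on feature 0 and not at all on feature 1,” without opening the model’s internals.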
Harvard Business Review outlines four models for using AI to make decisions: as Autonomous Advisor; as Autonomous Outsourcer; as World-Class Challenger; or as All-in Autonomy. They conclude: “Without question, [your] smartest competitors will be data-driven autonomous algorithms”. (In which case, here’s another great primer on What You Need to Know About Artificial Intelligence from Tomorrow Edition, with great explanatory videos and introductions to the superstars of the AI firmament.)
Great News! Apple has joined the 5 Unicorns of the Apocalypse – Amazon, Google, Facebook, IBM, and Microsoft – in, wait for it, a Partnership on AI to Benefit People and Society. That is probably the scariest euphemism I’ve ever heard. The five largest corporations in the world, whose primary product is you and your data, in a cabal to control the development of AI. Short-listed for the CEO role, no doubt, are Dr. Strangelove, and O’Brien of the Ministry of Love from Nineteen Eighty-Four. (Something prompted me to re-read Orwell’s masterpiece last year, since I hadn’t read it since, well, 1984, i.e., since before the internet. I was astonished at how well and how much Orwell foresaw the technology and psychology of these times: concepts such as Big Brother, doublethink, thoughtcrime, Newspeak, Room 101, the telescreen, 2 + 2 = 5, and the memory hole. Anyone watching the comparative crowds at the Inauguration knows very well that these days 2 + 2 = 5.) While you’re waiting for your copy to arrive, perhaps you can wonder whatever happened to the DeepMind AI Ethics board that Google promised three years ago.