One of the most significant uses of 3-D scanning in the years to come will not be by humans at all but by autonomous vehicles. Cars are already learning to drive themselves, by way of scanner-assisted braking, pedestrian-detection sensors, parallel-parking support, lane-departure warnings and other complex driver-assistance systems, and full autonomy is on the horizon. Google’s self-driving cars have logged more than a million miles on public roads; Elon Musk of Tesla says he’ll probably have a driverless passenger car by 2018; and the Institute of Electrical and Electronics Engineers says autonomous vehicles ‘‘will account for up to 75 percent of cars on the road by the year 2040.’’ Driver-controlled cars remade the world in the last century, and there is good reason to expect that driverless cars will remake it again in the century to come: Gridlock could become extinct as cars steer themselves along a cooperatively evolving lacework of alternative routes, like information traversing the Internet. With competing robot cars just a smartphone tap away, the need for street parking could evaporate, freeing up as much as a third of the entire surface area of some major American cities. And as distracted drivers are replaced by unblinking machines, roads could become safer for everyone.
Lidar, however, has its own flaws and vulnerabilities. It can be thrown off by reflective surfaces or inclement weather, by mirrored glass or the raindrops of a morning thunderstorm. As the first wave of autonomous vehicles emerges, engineers are struggling with the complex, even absurd, circumstances that constitute everyday street life. Consider the cyclist in Austin, Tex., who found himself caught in a bizarre standoff with one of Google’s self-driving cars. Having arrived at a four-way stop just seconds after the car, the cyclist ceded his right of way. Rather than coming to a complete halt, however, he performed a track stand, inching back and forth without putting his feet on the ground. Paralyzed with indecision, the car mirrored the cyclist’s own movements — jerking forward and stopping, jerking forward and stopping — unsure if the cyclist was about to enter the intersection. As the cyclist later wrote in an online forum, ‘‘two guys inside were laughing and punching stuff into a laptop, I guess trying to modify some code to ‘teach’ the car something about how to deal with the situation.’’
Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University and author of the book ‘‘Robot Futures,’’ uses the metaphor of the perfect storm to describe an event so strange that no amount of programming or image-recognition technology can be expected to understand it. Imagine someone wearing a T-shirt with a stop sign printed on it, he told me. ‘‘If they’re outside walking, and the sun is at just the right glare level, and there’s a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can’t see his face anymore — well, now your car just sees a stop sign. The chances of all that happening are diminishingly small — it’s very, very unlikely — but the problem is we will have millions of these cars. The very unlikely will happen all the time.’’
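Nourbakhsh's point is, at bottom, arithmetic: an event with a vanishingly small per-mile probability still recurs routinely once multiplied across a large enough fleet. The sketch below makes that concrete; every number in it (the per-mile failure rate, the fleet size, the annual mileage) is an invented assumption for illustration, not a figure from the article.

```python
# A back-of-the-envelope illustration of "the very unlikely will
# happen all the time." All quantities here are assumed, not sourced.
import math

p_failure_per_mile = 1e-9       # assumed chance of a freak misreading, per mile driven
fleet_size = 10_000_000         # assumed number of autonomous cars on the road
miles_per_car_per_year = 10_000 # assumed annual mileage per car

total_miles = fleet_size * miles_per_car_per_year
expected_events = p_failure_per_mile * total_miles  # expected freak events per year

# Treating the events as a Poisson process, the probability of at
# least one occurring in a year is 1 - e^(-expected_events).
p_at_least_one = 1 - math.exp(-expected_events)

print(f"Expected freak events per year: {expected_events:.0f}")
print(f"P(at least one in a year): {p_at_least_one:.6f}")
```

Under these made-up numbers, a one-in-a-billion-per-mile event is expected about a hundred times a year, and the chance of a year passing without one is effectively zero.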
The sensory limitations of these vehicles must be accounted for, Nourbakhsh explained, especially in an urban world filled with complex architectural forms, reflective surfaces, unpredictable weather and temporary construction sites. This means that cities may have to be redesigned, or may simply mutate over time, to accommodate a car’s peculiar way of experiencing the built environment. The flip side of this example is that, in these brief moments of misinterpretation, a different version of the urban world exists: a parallel landscape seen only by machine-sensing technology in which objects and signs invisible to human beings nevertheless have real effects in the operation of the city. If we can learn from human misperception, perhaps we can also learn something from the delusions and hallucinations of sensing machines. But what?
All of the glares, reflections and misunderstood signs that Nourbakhsh warned about are exactly what ScanLAB now seeks to capture. Their goal, Shaw said, is to explore ‘‘the peripheral vision of driverless vehicles,’’ or what he calls ‘‘the sideline stuff,’’ the overlooked edges of the city that autonomous cars and their unblinking scanners will ‘‘perpetually, accidentally see.’’ By deliberately disabling certain aspects of their scanner’s sensors, ScanLAB discovered that they could tweak the equipment into revealing its overlooked artistic potential. While a self-driving car would normally use corrective algorithms to account for things like long periods stuck in traffic, Trossell and Shaw instead let those flaws accumulate. Moments of inadvertent information density become part of the resulting aesthetic.
The London that their work reveals is a landscape of aging monuments and ornate buildings, but also one haunted by duplications and digital ghosts. The city’s double-decker buses, scanned over and over again, become time-stretched into featureless mega-structures blocking whole streets at a time. Other buildings seem to repeat and stutter, a riot of Houses of Parliament jostling shoulder to shoulder with themselves in the distance. Workers setting out for a lunchtime stroll become spectral silhouettes popping up as aberrations on the edge of the image. Glass towers unravel into the sky like smoke. Trossell calls these ‘‘mad machine hallucinations,’’ as if he and Shaw had woken up some sort of Frankenstein’s monster asleep inside the automotive industry’s most advanced imaging technology.
ScanLAB’s project suggests that humans are not the only things now sensing and experiencing the modern landscape — that something else is here, with an altogether different, and fundamentally inhuman, perspective on the built environment. If the conceptual premise of the Romantic Movement can somewhat hastily be described as the experience and documentation of extreme landscapes — as an art of remote mountain peaks, abyssal river valleys and vast tracts of uninhabited land — then ScanLAB is suggesting that a new kind of Romanticism is emerging through the sensing packages of autonomous machines. While artists once traveled great distances to see sights of sublimity and grandeur, equally wondrous and unsettling scenes can now be found within the means of travel itself. As we peer into the algorithmic dreams of these vehicles, we are perhaps also being given the first glimpse of what’s to come when they awake.