Binoculars, blindness and invisibility

In recent years, I’ve regularly chosen gear bedecked with hi-viz yellow to increase my conspicuity on the road. I also added LED driving lights to my bike, more for this purpose than to augment nighttime illumination, since I rarely ride after dark these days. Maximizing my visibility to car drivers is a primary consideration as I continuously choose where to position myself in my lane and in relation to other traffic. However, I’m also aware all these measures may actually do little to reduce my chance of a collision with an automobile, even if a relatively conscientious, motorcycle-alert driver is behind the wheel. Let’s discuss why.

Mark looking very serious in his hi-viz jacket and spiffy Aerostich silk scarf.

First, a bit about vision. Predator animals like us have eyes positioned in the front of the head to allow focused attention on a specific point of interest, our prey. For predators, survival depends on being able to track and pursue food—including judgment of its distance and speed in preparation for an effective lunge—so characteristics of our visual processing are optimized for these activities. Prey animals, on the other hand, depend on wide-ranging awareness for survival and need to notice predators approaching from as many angles as they can possibly monitor. To facilitate this, their eyes are located on the sides of their heads. The advantage of this arrangement is a large area of detection, but it comes with a penalty: because there’s no overlap in the images from each eye, prey animals lack something called binocular depth perception (BDP).


Listen to this column as Episode 34 of The Ride Inside with Mark Barnes. Submit your questions to Mark for the podcast by emailing [email protected]. This episode will be available starting 18 August 2023.


Among other things, BDP incorporates a measure of ocular convergence in addition to data collected by the retinas. If I look at a finger held out in front of my face, then move that finger toward my nose, my eyes point further and further inward as they track it. Registering the muscular activity involved in this convergence is part of my brain’s perceptual calculation of my finger’s distance, based on mechanical triangulation. Likewise, my eye lenses change shape to focus near or far; this, too, affects my sense of a visual target’s distance (whether I’m looking with one eye or two). Neither of these inputs is particularly noticeable by itself unless our eyes start to cross or we’re struggling at the limits of nearsightedness or farsightedness. Yet these muscular operations (yes, eye lenses get stretched and released by tiny muscles) are being factored in unconsciously as our brains assemble 3D perceptual packages for our conscious consideration.
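
That mechanical triangulation can be sketched with a little trigonometry. In the rough model below, both eyes fixate a point straight ahead on the midline; the interpupillary distance is a typical assumed value, not a figure from this column.

```latex
% Vergence triangulation (simplified sketch):
%   b      = interpupillary distance (assumed ~0.065 m)
%   d      = distance to the fixated target
%   \alpha = inward rotation of each eye from straight ahead
\tan\alpha = \frac{b/2}{d}
\qquad\Longrightarrow\qquad
d = \frac{b/2}{\tan\alpha}
% Worked example: at d = 0.3 m (a finger near the nose),
% \alpha = \arctan(0.0325/0.3) \approx 6.2^{\circ};
% at d = 3 m, \alpha \approx 0.62^{\circ}.
% The angle the brain can "read" from the eye muscles shrinks
% rapidly with distance, which is why convergence is mainly a
% close-range cue.
```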

In addition, our brains use binocular stereopsis—small disparities between the images collected by each eye, which view objects from slightly different angles—to calculate distance through a version of triangulation that relies on image data rather than muscle feedback. The farther away something is, the smaller the difference between each eye’s image of it, until stereopsis contributes little to the perception of distance. The same falloff applies to ocular convergence and changes in lens tension.
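
The falloff is steep enough to put in numbers. Here is a standard small-angle approximation; the values plugged in are illustrative assumptions, not measurements from any study mentioned here.

```latex
% Relative binocular disparity (small-angle approximation):
%   b        = interpupillary distance (assumed ~0.065 m)
%   d        = viewing distance
%   \Delta d = depth difference between two objects
\eta \;\approx\; \frac{b\,\Delta d}{d^{2}} \quad \text{(radians)}
% Worked example: b = 0.065 m, d = 100 m, \Delta d = 10 m gives
% \eta \approx 0.065 \times 10 / 100^{2} = 6.5\times10^{-5} rad,
% about 13 arcseconds -- near the limit of human stereoacuity.
% The inverse-square falloff is why stereopsis has little to offer
% at typical traffic distances.
```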

More purely visual information, such as the relative size of an object within a scene’s totality, also carries implications about its distance, based in part on what we already know about the typical size of similar objects. We also use motion parallax (closer objects move farther and faster across a scene than background objects), linear convergence (parallel lines angle toward each other as their distance increases), one object’s obscuring of another, and other cues as we judge an object’s absolute and relative distance. This 2D monocular data set is available without the additional information provided by binocular vision, as is the previously mentioned lens muscle activity. Optical illusions are created by manipulating monocular or binocular cues to trick our brains; then “our eyes deceive us.”

Cutest cat ever! Photo by Liberation Cat House.

When someone loses an eye, they typically have difficulty judging distance and find themselves knocking into things they expected to avoid. You can see for yourself (pun intended) by simply closing or covering one eye and then navigating through familiar spaces or reaching for nearby objects. If you pay close attention, you’ll notice there’s a little discrepancy between where something appears to be and where you actually encounter it with your body or hand. Now consider the driver (or rider) who has pulled up to an intersection and scans left and right. Unless they turn their head far enough for both eyes to get an unobstructed view 90 degrees from straight ahead, the bridge of their nose will block much of the image collected by the eye on the opposite side. Since most people don’t routinely turn their heads that far, their ability to judge the distance—and closing speed—of an approaching object is compromised by the subtraction of BDP from the perceptual equation. They are limited to monocular data as they assess approaching traffic.

Some monocular cues are mainly useful for judging distance and movement in planes transverse to our vantage point (passing across in front of us). The cues better suited to assessing an object’s distance and closing speed (which is really just a dynamic appreciation of changing distance) as it approaches in a longitudinal plane depend on the proportion of our visual field the object occupies. A person’s ability to judge a motorcycle’s distance and closing speed is hindered by the bike’s small size. A car at the same distance occupies more of the total visual field, making it more salient, but also making changes in its size more apparent as it approaches. Hence, its closing speed is more readily and accurately determined. Note that none of this is enhanced by hi-viz colors, extra lights, or lane positioning. Such measures might get a bike/rider to register as an object in an observer’s visual field, but they don’t improve the observer’s judgment of the bike/rider’s distance or closing speed. These things are hard enough to assess with binocular enhancement; they’re even harder when essentially glancing with just one eye.
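
The geometry behind that size disadvantage is easy to sketch. Under small-angle approximations (the widths and speeds below are illustrative assumptions, not measurements):

```latex
% Angular size and looming rate (small-angle approximations):
%   w = object width, d = distance, v = closing speed
\theta \approx \frac{w}{d},
\qquad
\frac{d\theta}{dt} \approx \frac{w\,v}{d^{2}}
% Worked example: a motorcycle (w ~ 0.8 m) and a car (w ~ 1.8 m),
% both at d = 100 m and closing at v = 15 m/s (~34 mph):
%   bike: d\theta/dt ~ 0.8 x 15 / 100^2 = 1.2e-3 rad/s
%   car:  d\theta/dt ~ 1.8 x 15 / 100^2 = 2.7e-3 rad/s
% The looming signal scales directly with width, so the narrower
% bike's approach registers later and less reliably than the car's.
```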

Layered atop this is the problem of “inattentional blindness”: we see what we look for and don’t see what we don’t look for. You might assume uniqueness would increase an object’s likelihood of detection, but that’s not necessarily the case. An oft-referenced study in this area (Daniel Simons and Christopher Chabris’s famous 1999 “invisible gorilla” experiment) had people watch a 75-second video in which a group of college students pass basketballs to one another; the instructions were simply to count the number of times the balls change hands. About halfway through the video, a person in a gorilla suit walks casually into the mix, looks straight into the camera, does a little chest-beating, and then walks off. When asked afterward whether they’d noticed anything unusual, about half the viewers hadn’t seen the gorilla. That’s because they weren’t trying to see a big black beast; they were trying to see basketball passes.

Human perception is largely circumscribed by intention, because intention directs attention. We mainly see what we’re predisposed to see—not only ideologically, but concretely. Magicians and card sharks know how to make sleight-of-hand actions (occurring in plain view) utterly undetectable to the average observer. They do this by directing our attention elsewhere, often setting up some mental expectation to which our perceptual processes become momentarily enslaved. What gets noticed can depend much more on the perceiver than on what’s available to be perceived.


The Ride Inside with Mark Barnes is brought to you by the MOA Foundation. You can join the BMW Motorcycle Owners of America quickly and easily to better take advantage of the Paul B Grant program mentioned in this episode.


Unfortunately, this has grave implications for motorcyclist conspicuity, and some of our most basic common-sense assumptions aren’t supported by research. Like uniqueness, location—even right in the middle of a scene—is no guarantee of getting noticed; the same goes for other presumably attention-getting qualities, such as bright colors, strong contrasts, and flickering lights. These factors aren’t completely irrelevant, but they pale in comparison to others that aren’t so intuitively obvious. And these other factors are largely beyond our control as things to be perceived.

Attention is directed and limited by individual factors in the perceiver. These include the intentional or motivational context of the moment (what are they looking for?) and the relevance of a stimulus within their hierarchy of priorities (what is important to them?). Even if I’m not looking for a snake on my walk through the woods, a snake-like shape nearby may grab my attention because my survival could be at stake and my perceptual reflexes have been shaped, either by my personal experience or the evolution of my species, to prioritize this particular danger.

SNAKE! (courtesy of pixabay)

A couple of features research has found to reliably capture the average human’s attention are looming (an object quickly growing larger in the visual field, which wouldn’t apply to a motorcycle until impact was imminent and probably unavoidable) and the sudden, abrupt appearance or movement of an object (again, not helpful to us). It’s easy to imagine how these phenomena could signify potential threats, and why directing our attention to them would have become a hard-wired reflex (like the startle response evoked by a snake-like shape). But, aside from a few hard-to-emulate attributes such as these, what is eye-catching to any particular person in any particular situation is likely to be based on that person’s preoccupation of the moment, rather than on a distinguishing feature of the object (read: bike/rider).

If our best efforts to be conspicuous can fail because of others’ personal idiosyncrasies and immediate situational concerns, what’s a poor motorcyclist to do?

One of the best pieces of advice I ever received about riding on the road was, “Pretend you’re invisible.” This means never assuming other drivers have noticed us, anticipated our speed/trajectory, or developed any interest in whether or not we collide. When we ride this way, we see things differently, even if others don’t. We’re actively scanning for threats and opportunities to avoid them. Just as the absence of intention can leave us blind, the investment of intention can enhance our vision and allow us to notice problems and solutions we’d have otherwise overlooked.

Sure, continue doing all the traditional things to be seen. Wear bright, reflective gear, use your high-beam during the day, add a headlight modulator, use the left car-tire-track as your default lane position to place yourself squarely in front of the driver behind you, etc. Just don’t lapse into complacency, thinking your work is done. We all need to ride as though we’re invisible, because we really may be exactly that to a driver we’re approaching; or, even if they do see us, their ability to accurately judge our distance and closing speed may be severely compromised. Of course, this doesn’t even count the multitude of drivers who are simply distracted and/or careless.


Mark Barnes is a clinical psychologist and motojournalist. To read more of his writings, check out his book Why We Ride: A Psychologist Explains the Motorcyclist’s Mind and the Love Affair Between Rider, Bike and Road, currently available in paperback through Amazon and other retailers.