
The little camera on this phone has a superpower: It can see things our eyes cannot.

At night for the past few weeks, I’ve been tromping around dark places taking photos using a new mode on Google’s $800 Pixel 3 called Night Sight. Friends in a candlelit bar look like they brought a lighting crew. Dark streets are flush with reds and greens. A midnight cityscape lights up as though it were late afternoon. It goes way beyond an Instagram filter into you-gotta-see-this territory.

Night Sight is a super step forward for smartphone photography — and an example of how our photos are becoming, well, super fake.

It’s true — you don’t look like your photos. Photography has never been just about capturing reality, but the latest phones are increasingly taking photos into uncharted territory.

For now, Night Sight is a mode that pops up only in dark shots on Google’s Pixel phones. But it’s hardly alone: All sorts of phonemakers brag about how awesome their photos look, not how realistic they are. The iPhone’s “portrait mode” applies made-up blur to backgrounds and identifies facial features to reduce red-eye. Selfies on phones popular in Asia automatically slim heads, brighten eyes and smooth skin. And most recent phones use a technique called HDR that merges multiple shots to produce a hyper-toned version of reality.
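The core idea behind HDR merging can be shown in miniature. This is a toy sketch, not any phonemaker’s actual pipeline: each pixel becomes a weighted average of several exposures of the same scene, with values near mid-gray weighted most heavily so that blown-out highlights and crushed shadows contribute least.

```python
import math

def well_exposedness(v, mid=0.5, sigma=0.2):
    # Gaussian weight that peaks at mid-gray (pixel values in 0..1):
    # well-exposed pixels count more than clipped ones.
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def merge_exposures(exposures):
    # Merge several pre-aligned grayscale frames (lists of floats in 0..1)
    # into one frame via a per-pixel weighted average.
    merged = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights) or 1e-9
        merged.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return merged

# Dark, medium and bright exposures of the same 4-pixel scene:
frames = [[0.02, 0.10, 0.30, 0.05],
          [0.20, 0.50, 0.80, 0.40],
          [0.60, 0.95, 1.00, 0.90]]
result = merge_exposures(frames)
```

Real HDR pipelines also align the frames and compress the merged dynamic range with a tone curve; the weighted blend above is just the “merge multiple shots” step the column describes.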

When I recently took the same sunset photo with an iPhone 6 from 2014 and this year’s iPhone XR, I was gobsmacked at the difference — the newer iPhone shot looked as though it had been painted with watercolors.

What’s happening? Smartphones democratized photography for 2.5 billion people — taking a great photo used to require special hardware and a user manual.

Now artificial intelligence and other software advances are democratizing the creation of beauty. Yes, beauty. Editing photos no longer requires Photoshop skills. Now when presented with a scenic vista or smiling face, phone cameras tap into algorithms trained on what humans like to see and churn out tuned images.

Your phone has really high-tech beer goggles. Think of your camera less as a reflection of reality and more as an AI trying to make you happy. It’s faketastic.

Software is king

Snapping a photo on a phone has become so much more than passing light through a lens onto a sensor. Of course, that hardware still matters and has improved over the past decade.

But increasingly, it’s software — not hardware — that’s making our photos better. “It is hyperbole, but true,” says Marc Levoy, a retired Stanford computer-science professor who once taught Google founders Larry Page and Sergey Brin and now works for them on camera projects including Night Sight.

Levoy’s work is rooted in the inherent size limitations of a smartphone. Phones can’t fit big lenses (and the sensors underneath them) like traditional cameras, so makers had to find creative ways to compensate. Enter techniques that replace optics with software, such as digitally combining multiple shots into one.

New phones from Apple, Samsung and Huawei use such techniques, too, but “we bet the ranch on software and AI,” Levoy says. That bet liberated Google to explore making images in new ways.


Two portraits taken during San Francisco's Day of the Dead parade. The one shot with Google's Night Sight is sharper and more colorful than the one shot with the iPhone XS. (Geoffrey A. Fowler/The Washington Post)

“Google in terms of software has got an edge,” says Nicolas Touchard, the vice president of marketing at DxOMark Image Labs, which produces independent benchmark ratings for cameras. (Whether any of this is enough to help the Pixel win converts from Apple and Samsung is a separate question.)

With Night Sight, Google’s software is at its most extreme, capturing up to 15 low-light shots and blending them to brighten up faces, provide sharp details and saturate colors in a way that draws in the eye. No flashes go off — it artificially enhances the light that’s already there.

Anyone who has attempted a low-light shot on a traditional camera knows how hard it is to avoid blurry photos. With Night Sight, before you even press the button, the phone measures the shake of your hand and the motion in the scene to determine how many shots to take and how long to leave the shutter open. When you press the shutter, it warns “hold still” and shoots for up to six seconds.

Over the course of the next second or two, Night Sight divides all its shots into a bunch of tiny tiles, aligning and merging the best bits to make a complete image. Finally, AI and other software analyze the image to pick the colors and tones.
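The burst pipeline described above — capture several frames, align them against a reference, then merge to cut noise — can be sketched in miniature. This is an illustrative toy on 1-D “frames,” not Google’s actual Night Sight code (which aligns per-tile in 2-D and uses far more sophisticated merging):

```python
import random

def align_shift(base, frame, max_shift=2):
    # Find the integer shift that best aligns `frame` to `base` by
    # minimizing mean squared difference -- a toy version of the
    # tile-alignment step.
    best_shift, best_err = 0, float("inf")
    n = len(base)
    for s in range(-max_shift, max_shift + 1):
        idx = [i for i in range(n) if 0 <= i + s < n]
        err = sum((base[i] - frame[i + s]) ** 2 for i in idx) / len(idx)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def merge_burst(frames):
    # Align every frame to the first, then average: averaging k aligned
    # frames reduces sensor noise by roughly the square root of k.
    base = frames[0]
    n = len(base)
    shifts = [align_shift(base, f) for f in frames[1:]]
    merged = []
    for i in range(n):
        vals = [base[i]]
        for frame, s in zip(frames[1:], shifts):
            j = min(max(i + s, 0), n - 1)
            vals.append(frame[j])
        merged.append(sum(vals) / len(vals))
    return merged

# Simulate a dim scene photographed 5 times with sensor noise and hand shake:
random.seed(0)
scene = [10, 12, 40, 80, 42, 12, 10, 9]
burst = []
for _ in range(5):
    shake = random.choice([-1, 0, 1])  # small hand movement between frames
    noisy = [scene[min(max(i - shake, 0), 7)] + random.gauss(0, 4)
             for i in range(8)]
    burst.append(noisy)
merged = merge_burst(burst)
```

Averaging the aligned frames is why the merged shot looks cleaner and brighter than any single dim frame — the random noise partially cancels while the scene’s real detail reinforces itself.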

Night Sight had some difficulty with focus and in scenes with almost no light. And you — and your subject — really do have to hold that pose. But in most of my test shots, the product was fantastical. Portraits smoothed out skin while keeping eyes looking sharp. Night landscapes illuminated hidden details and colored them like Willy Wonka’s chocolate factory.

The problem is: How does a computer choose the tones and colors of things we experience in the dark? Should it render a starlit sky like dusk?

“If we can’t see it, we don’t know what it looks like,” Levoy says. “There are a lot of aesthetic decisions. We made them one way, you could make them a different way. Maybe eventually these phones will need a ‘What I see’ versus ‘What is really there’ button.”


Here's what the camera can produce vs. what it actually sees. This same shot was taken with a Google Pixel 3 using Night Sight in the camera app and then again using the "raw" shooting mode in the Adobe Lightroom app. (Geoffrey A. Fowler/The Washington Post)

Faketastic

So if our phones are making up colors and lighting to please us, does it really count as photography? Or is it computer-generated artwork?

Some purists argue the latter. “This is always what happens with disruptive technology,” Levoy says.

What does “fake” even mean? he asks. Pro photographers have long made adjustments in Photoshop or a darkroom. And before that, makers of film tweaked colors for a certain look. It might be an academic concern if we weren’t talking about the hobby — not to mention the memories — of a third of humanity.

How far will phones remove our photos from reality? What might software train us to think looks normal? What parts of images are we letting computers edit out? In a photo I took of the White House (without Night Sight), I noticed the algorithms in the Pixel 3 trained to smooth out imperfections actually removed architectural details that were still visible in a shot on the iPhone XS.

At DxOMark, the camera measurement firm, the question is how to even judge images when they’re being interpreted by software for features such as face beautification.

“Sometimes manufacturers are pushing too far,” Touchard says. “Usually we say it is okay if they have not destroyed information — if you want to be objective, you have to consider the camera a device that captures information.”

For another perspective, I called Kenan Aktulun, the founder of the annual iPhone Photography Awards. Over the past decade, he has examined over a million photos taken with iPhones, which entrants are discouraged from heavily editing.

The line between digital art and photography “gets really blurry at some point,” Aktulun says. Yet he ultimately welcomes technological improvements that make the photo-creating process and tools invisible. The lure of smartphone photography is that it’s accessible — one button and you’re there. AI is an evolution of that.

“As the technical quality of images has improved, what we are looking for is the emotional connection,” Aktulun says. “The ones that get a lot more attention are not technically perfect. They’re photos that provide insight into the person’s life or experience.”

Read more tech advice and analysis from Geoffrey A. Fowler:

The explosive problem with recycling iPads, iPhones and other gadgets: They literally catch fire.

It’s not your imagination: Phone battery life is getting worse

Hands off my data! 15 default privacy settings you should change right now.