Video Boost on the Google Pixel 8 Pro is a useful low-light video tool

When Google introduced Night Sight on the Pixel 3, it was a revelation.

It was as if someone had flipped on the lights in your low-light photos. Previously impossible shots became possible: no tripod, flash, or deer-in-the-headlights look required.

Five years later, taking photos in the dark is old hat: virtually every phone across the price spectrum comes with some sort of night mode. Video, however, is a different story. Night modes for photos capture multiple frames to create one brighter image, and you can’t simply copy and paste that approach into video, which is by nature already a series of frames. The answer, as it so often is lately, is AI.

When the Pixel 8 Pro launched this fall, Google announced a feature called Video Boost with Night Sight, which would arrive in a future software update. It uses AI to process your videos, bringing out more details and improving colors, which is especially useful in low-light clips. There’s just one problem: this processing takes place in the cloud on Google’s servers, not on your phone.

As promised, Video Boost appeared a few weeks ago on devices with the December Pixel update, including my Pixel 8 Pro review unit. And it’s good! But it’s not quite the turning point that the original Night Sight was. That speaks both to how impressive Night Sight was when it debuted, and to the specific challenges that video presents for a smartphone camera system.

Video Boost works like this: first, and crucially, you need a Pixel 8 Pro, not a regular Pixel 8 – Google didn’t respond to my question about why that is. You turn it on in your camera settings when you want to use it and then start recording. Once you’re done, the video needs to be backed up to your Google Photos account, either automatically or manually. Then you wait. And wait. And in some cases, keep waiting: Video Boost works on videos up to ten minutes long, but even a clip of just a few minutes can take hours to process.

Depending on the type of video you’re recording, that wait may or may not be worth it. Google’s support documentation says it’s designed to “let you shoot videos on your Pixel phone in higher quality and with better lighting, colors, and details” in any lighting. But what Video Boost is mainly for is better video in low light – that’s what the group’s product manager, Isaac Reynolds, tells me. “Think of it as Night Sight Video, because all the adjustments to the other algorithms are focused on Night Sight.”

All the processes that make our videos look good in bright light (stabilization, tone mapping) start to break down when you try to record video in very low light. Reynolds explains that even the blur you get in low-light video is different. “OIS [optical image stabilization] can stabilize a frame, but only up to a certain frame length.” Low-light video requires longer frames, which poses a major challenge for stabilization. “When you get into low light, with such long frames you can get a certain kind of intra-frame blur, which is the residual that the OIS can’t compensate for.” In other words, it’s incredibly complicated.

All of this helps explain what I see in my own Video Boost clips. In good light, I don’t see much difference. Some colors pop a little more, but nothing that would compel me to use it regularly when there’s enough light. In extremely low light, Video Boost can bring back colors and details that are completely lost in a standard video clip. But it’s not nearly as dramatic as the difference between a regular photo and a Night Sight photo under the same conditions.

However, there’s a real sweet spot between these extremes where I can see Video Boost really coming in handy. In one clip of me walking along a path at dusk toward a dark pergola containing the Kobe Bell, there’s a noticeable improvement in shadow detail and stabilization in the boosted version. The more I used Video Boost in ordinary, middling indoor lighting, the more I saw the case for it. You start to see how washed out standard videos look under these conditions – like my son playing with trucks on the dining room floor. Enabling Video Boost restored some of the vibrancy I had forgotten I was missing.

Video Boost is limited to the Pixel 8 Pro’s main rear camera and records in 4K (the default) or 1080p at 30fps. Using Video Boost results in two clips: an initial “preview” file that’s unboosted and immediately available for sharing, and, eventually, the second “boosted” file. Under the hood, though, there’s a lot more going on.

Reynolds explained to me that Video Boost uses an entirely different processing pipeline that retains much more of the captured image data that would normally be discarded when you record a standard video file – similar to the relationship between RAW and JPEG files. A temporary file holds this data on your device until it has been sent to the cloud; then it’s deleted. That’s a good thing, because the temporary files can be huge: several gigabytes for longer clips. The final boosted videos, however, are a much more reasonable size: 513MB for a three-minute clip I recorded, versus 6GB for its temporary file.

My initial reaction to Video Boost was that it seemed like a stopgap: a demonstration of a feature that needs the cloud to function now but will move onto the device in the future. Qualcomm showed off an on-device version of something similar this fall, so that must be the endgame, right? Reynolds says he doesn’t think so. “The things you can do in the cloud will always be more impressive than the things you can do on a phone.”

One example: he says Pixel phones currently use several smaller, optimized versions of Google’s HDR Plus model on the device. But the full ‘parent’ HDR Plus model that Google has been developing for its Pixel phones over the past decade is too big to realistically run on any phone. On-device AI capabilities will improve over time, so it’s likely that some things that can only be done in the cloud today will move to our devices. But what’s possible in the cloud will change, too. Reynolds says he views the cloud as just “another part” of Tensor’s capabilities.

In this sense, Video Boost is a glimpse of the future: it’s just a future where the AI on your phone works hand in hand with the AI in the cloud. More functions will be handled by a combination of on-device and off-device AI, and the distinction between what your phone can do and what a cloud server can do will fade into the background. It’s not quite the ‘aha’ moment that Night Sight was, but it will still be a significant change in the way we think about our phone’s capabilities.
