A brief experiment in detecting ambient light

Inspired by my previous project, lighten, I experimented with automatically updating a laptop screen's brightness using a webcam rather than a dedicated sensor. This required determining the ambient light level via a captured webcam image—with all the variance introduced by the outside world. The takeaway: it's much harder!

There are a few naive approaches to making this calculation:

  1. Convert the image to greyscale and use either the average or the root mean square pixel brightness.

  2. Calculate perceived brightness from the image's average RGB values.

The latter uses magic numbers found in this article about the HSP color model:

$$\text{brightness} = \sqrt{0.299\,R^2 + 0.587\,G^2 + 0.114\,B^2}$$

This Stack Overflow answer provides some alternatives and their implementations.
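
For a rough sketch of the second option, here's how that formula might be applied with Pillow, averaging each channel over the image before plugging the averages into the formula (where the averaging happens is my own assumption; the weights are the ones from the article):

import math

from PIL import ImageStat


def get_perceived_brightness(im):
    # Average each channel across the whole image.
    r, g, b = ImageStat.Stat(im.convert("RGB")).mean
    # Apply the HSP weights to the channel averages.
    return math.sqrt(0.299 * r**2 + 0.587 * g**2 + 0.114 * b**2)

Like the greyscale approaches, this yields a value between 0 and 255.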

I chose the first option, so let's see what the full implementation looks like in Python:

import io
import subprocess

from PIL import Image, ImageStat


def get_brightness(path):
    # Get a single frame from the webcam device using ffmpeg,
    # sending the image bytes to standard output to be captured.
    ps = subprocess.run(
        f"ffmpeg -i {path} -vframes 1 -f image2pipe -".split(),
        capture_output=True,
    )
    # Convert the image bytes into a BytesIO object and open it
    # as a Pillow Image.
    im = Image.open(io.BytesIO(ps.stdout))
    # Convert the image to greyscale.
    im = im.convert("L")
    # Return the root mean square (rms) brightness value.
    return ImageStat.Stat(im).rms[0]


get_brightness("/dev/video0")

The Python Pillow library makes this easy, so what's the problem? 🤔

The first problem arose immediately: on my laptop, a 7th generation ThinkPad X1 Carbon, the webcam doesn't seem to fully calibrate itself to ambient light within the time it takes to capture a single frame, so this calculation would yield extremely low values.

I worked around this by "waking up" the webcam using this function:

def wakeup(path):
    # Capture and discard several frames so the webcam's
    # auto-exposure can settle before we measure anything.
    subprocess.run(
        f"ffmpeg -i {path} -vframes 10 -f image2pipe -".split(),
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )

I chose 10 frames arbitrarily. These frames are discarded by directing ffmpeg's output to the null device. Afterward, the above get_brightness function returns a value that much better reflects the ambient brightness level.
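
Putting the pieces together looks something like this; the 0 to 255 scale comes from Pillow's 8-bit greyscale mode, and the device path is just my laptop's webcam:

WEBCAM = "/dev/video0"

# Let the webcam's auto-exposure settle before measuring.
wakeup(WEBCAM)

# Somewhere between 0 (pitch black) and 255 (pure white).
brightness = get_brightness(WEBCAM)
print(f"ambient brightness: {brightness:.1f} / 255")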

This works well indoors, at least. Office? No problem. Coffee shop? Should be fine. But outside, lit by the sun from the front, against a backdrop of dark-leaved trees? There, the calculated brightness comes out far too low, which makes sense: most of the image is dark.

How can this be worked around? One idea is to detect the user's face in the image and calculate the brightness of that region alone, which could work in this specific case. But consider another scenario: in a dark room, a very bright laptop screen illuminates its user's face. That image would likely produce a similar pixel brightness value, yet the ambient light level is completely opposite! Clearly, these average-based methods depend too heavily on an image's composition.
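
For what it's worth, a rough sketch of the face-region idea might look like the following, using OpenCV's bundled Haar cascade (my choice of detector, not something from the original experiment):

import cv2
import numpy as np


def get_face_brightness(image_bytes):
    # Decode the captured frame and convert it to greyscale.
    frame = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), cv2.IMREAD_COLOR)
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces using the cascade file shipped with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # No face found; fall back to the whole-image measure.
    # Average the pixel brightness over the first detected face.
    x, y, w, h = faces[0]
    return float(grey[y:y + h, x:x + w].mean())

But as the dark-room scenario shows, even this only moves the problem around.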

Ultimately, I don't think there's a reliable and consistent method of correctly detecting ambient light level based on a webcam image, but the resulting code might still be useful in some situations.

If you have any insights, please share them!