A Quick Idea – Image Sensor Based on Time-to-saturate

Apologies (as always) for the infrequent updates to this blog. This semester has been a lot rougher than in the past, so I don’t know if I’ll have time to post anything more until the end of break.

I had a quick idea I wanted to jot down, and I haven’t found anything on it. I feel like someone out there must have thought up something similar already, or it’s already in the works in some secret lab at a sensor company or something.

The idea I have is an image sensor that measures light intensity based on time-to-saturate – the time it takes for a particular photowell (representing a pixel) to saturate to its maximum capacity. The concept I’ve come up with has some interesting theoretical advantages in dynamic range over conventional photon-counting designs used today.

Imaging today – photon counting

First, a layman’s overview of how the conventional photon-counting design works in today’s sensors:

The sensor is a light-sensitive device: whenever photons strike it, they are absorbed, and a proportional number of photoelectrons are “knocked out” by the photon energy and collected in a photowell. From this photowell, a voltage measurement is taken, which ultimately translates to a brightness value in the resulting image. In essence: image brightness ∝ voltage reading ∝ electrons collected ∝ photons collected.

When taking an image, there is a set exposure duration, often referred to as the “shutter speed” in photography terms. This defines the window during which the sensor is exposed to and detecting light – the exposure starts, light hits the sensor, the exposure stops, and then we count the photons.

A limiting factor in this design is the photowell capacity. The number of electrons that can be stored in a well is finite, and once the photowell is saturated, any additional electrons are not collected, and hence the photons they correspond to are not counted. On the flip side, there is also a noise floor: enough electrons must be gathered to produce a signal that is discernible from the random variation due to various forms of dark (thermal), read (electronic), and shot (photon) noise.

These two attributes lead to a problem of dynamic range – in scenes where light intensity differs greatly between the darkest and brightest areas, the sensor is simply unable to measure the full range of brightnesses and must cap measurements above and/or below certain thresholds. This produces the “blown highlights” and “crushed shadows” often found in photos of high-dynamic-range scenes.

Time-to-saturate

The idea behind a time-to-saturate sensor is fairly simple. What we aim to measure in an image is light intensity – the flux of photons per unit time per unit area. The area term is fixed, since the photosite corresponding to each pixel has a set area, so the measure we are really after is photons per time, for each pixel.

With photon counting, we fix a shutter speed (time duration), count the number of photons (via a voltage measurement of the photoelectrons) captured in that span, and use both to derive the intensity:

Intensity = photons / time = photons recorded / shutter speed

In time-to-saturate, the photon count is fixed at the capacity of the photowell, and the variable we measure is the time it takes for an individual well to saturate fully to the capacity.

Intensity = photons / time = max photon capacity / time to reach max photon capacity

How would the system work, exactly? With a time-to-saturate sensor, we use as long a shutter speed as needed to fully saturate all photowells (in a conventional sensor, this would be the minimum shutter speed needed to generate an all-white, max-brightness image). At the moment a photowell reaches capacity, it records a timestamp indicating how long it took to fill. Once the exposure is finished, we are left with a two-dimensional array of saturation times rather than photon counts. Rather than recording 100k photons at one photosite and 50k photons at a neighboring photosite where light was half as intense, the readings from this sensor would be along the lines of a 1 millisecond time-to-saturate for the first photosite and a 2 millisecond time-to-saturate for the second, half-intensity photosite.
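
To make the contrast concrete, here’s a minimal Python sketch of the two readout models – the well capacity, photon counts, and times are just the example numbers from above, not real sensor figures:

```python
# Toy comparison of the two readout schemes, using the example numbers
# above: a 100k-electron well, one photosite at full intensity and a
# neighbor at half intensity. All values are hypothetical.

WELL_CAPACITY = 100_000  # photoelectrons a well can hold (example figure)

def intensity_photon_counting(photons_recorded, shutter_speed_s):
    """Conventional sensor: fixed time, variable count."""
    return photons_recorded / shutter_speed_s

def intensity_time_to_saturate(time_to_saturate_s):
    """Time-to-saturate sensor: fixed count (the well capacity), variable time."""
    return WELL_CAPACITY / time_to_saturate_s

# Photon counting: a 1 ms exposure records 100k and 50k photons.
print(intensity_photon_counting(100_000, 1e-3))  # 1e8 photons/s
print(intensity_photon_counting(50_000, 1e-3))   # 5e7 photons/s

# Time-to-saturate: the same two photosites saturate in 1 ms and 2 ms.
print(intensity_time_to_saturate(1e-3))          # 1e8 photons/s
print(intensity_time_to_saturate(2e-3))          # 5e7 photons/s
```

Both schemes recover the same intensities; the difference is which quantity is held fixed and which is measured.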

Key Advantages

There are two key advantages in our ability to take light intensity readings, both ultimately improving dynamic range:

  • There is virtually no limit to the range of highlights we can capture, unlike the cap imposed by the photowell capacity in photon-counting sensors. In our example, if there were a third photosite with double the intensity of the first (exposed to 200k photons), it would still only record 100k photons, since that is the capacity of the photosite, and thus both pixels would record the same white (max brightness) value, even though the 200k photosite clearly represents a brighter area of the scene than the 100k photosite. A time-to-saturate measurement, by contrast, would simply produce a shorter time: the 200k photosite saturates in 0.5 milliseconds, which we can compare to the 1 millisecond measurement for the first photosite and conclude that the 200k photosite is twice as bright.
  • Noise levels are reduced to the level of a maximally saturated photowell. In a photon-counting sensor, any photosite that does not record a max white value by definition recorded fewer photons, and thus produces a sub-optimal signal-to-noise ratio (SNR). Photon or “shot” noise has a standard deviation equal to the square root of the signal – thus for 100k photons we have √(100,000) = 316.2 photons of standard deviation, and an SNR of N/√N = √N = 316.2. For 50k photons, however, the SNR is √(50,000) = 223.6. In contrast, all photosites in a time-to-saturate sensor fill to the max well capacity, and will thus all have the max SNR (see the sketch after this list). This ensures that all photosites record values well above the noise floor, and reduces photon noise for all pixels to the level of a maximally saturated photosite (the 100k-photon, 316.2-SNR figure in this example).
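
A quick sketch of the shot-noise arithmetic from the second point – again using only the toy 100k-capacity figures from the example above:

```python
from math import sqrt

def shot_noise_snr(photons):
    """Shot noise is Poisson: std dev = sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    return photons / sqrt(photons)

# Photon counting: SNR varies with how full each well got.
print(shot_noise_snr(100_000))  # ~316.2 (a fully saturated well)
print(shot_noise_snr(50_000))   # ~223.6 (a half-full well)

# Time-to-saturate: every well fills to capacity before readout,
# so every pixel gets the full-well SNR of ~316.2.
WELL_CAPACITY = 100_000
print(shot_noise_snr(WELL_CAPACITY))  # ~316.2 for all pixels
```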

In theory, such a sensor would have effectively unlimited dynamic range – the brightest intensities are simply recorded as short time-to-saturate durations, and enough photons are collected from the darkest areas to place their measurements well above the noise floor. This would have huge implications for high-dynamic-range photography and imaging in general: the entire dynamic range of a scene could be recorded in a single exposure, without resorting to processing tricks like selective shadow/highlight adjustment or high dynamic range (HDR) blending.

Potential Feasibility Issues

I’m not aware of any sources that have explored this idea before, but if they exist, there must be some large feasibility (or perhaps cost) issues that have prevented its development thus far. Here are the few issues I can imagine – none of which seem like dealbreakers, and none of which, in theory, would make performance any worse than that of photon-counting methods:

  • Timing accuracy/precision of photowell saturation. While photon counting relies on accurate and precise voltage readings from the photowells, a time-to-saturate sensor would need comparable accuracy and precision in recording the time at which a photowell reaches saturation. How precise does the timing need to be to match the theoretical precision of today’s cameras? Taking the contemporary example of a 100k-photon-capacity photowell hooked up to a sensor/imaging pipeline with a 14-bit analog-to-digital converter (found on most high-end cameras today), we would quantize measurable photon counts into 2^14 = 16,384 steps, so 100,000 / 16,384 = ~6.1 photons per step is the photon precision we need to match. Most high-end cameras today operate with a minimum shutter speed of 1/8000 second (125 microseconds) – a 100k photowell that fully saturates in this time (the maximum light intensity the photon-counting sensor can record, under any settings) thus sees 100,000 photons / 125 microseconds = 800,000,000 (0.8 billion) photons per second. Finally, combining this intensity with our ~6.1-photon step gives 6.1 photons / (0.8 billion photons/second) = ~7.6 nanoseconds – the precision with which a time-to-saturate sensor needs to record time (the arithmetic is written out in the sketch after this list). Of course, the numbers vary by application – with fewer bits per pixel we would need less precision (an 8-bit JPEG in this example would need just ~0.5 microseconds of precision), with a lower photowell capacity we would need greater precision, and with a longer minimum exposure time we would need less precision.
  • To take advantage of the greater dynamic range of a time-to-saturate sensor, the exposure duration must be longer than that of a conventional photon-counting sensor, to capture more light. For static scenes this is unlikely to be an issue, but for dynamic scenes (e.g. moving subjects), the exposure can only be stretched so far before problems such as motion blur or camera-shake blur are introduced. At worst, however, the exposure can simply stop after a defined maximum exposure time – at that point, any photowells that have not reached capacity output a voltage reading as in a conventional sensor, and this reading is used to extrapolate a time-to-saturate that can be compared with the other photosites (also sketched below). In the worst case, the maximum exposure time equals the exposure time of a conventional photon-counting sensor, producing the same noise level and at least the same dynamic range, if not greater dynamic range captured in the highlights. For any exposure duration exceeding that of the conventional sensor, however, noise levels are reduced and greater dynamic range in the shadow regions is achieved as well.
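
Here’s the timing-precision arithmetic from the first bullet, plus the extrapolation fallback from the second, written out as a sketch – all parameters are the example figures from above; a real sensor’s numbers would differ:

```python
WELL_CAPACITY = 100_000       # photons (example full-well capacity)
ADC_BITS = 14                 # 14-bit ADC, as on most high-end cameras
MIN_SHUTTER_S = 1 / 8000      # 125 microsecond minimum shutter speed

# Quantization step: how many photons one ADC step represents.
photons_per_step = WELL_CAPACITY / 2**ADC_BITS        # ~6.1 photons

# Maximum recordable intensity: a well that saturates at the minimum shutter.
max_intensity = WELL_CAPACITY / MIN_SHUTTER_S         # 8e8 photons/s

# Required timing precision: time for one quantization step at max intensity.
timing_precision_s = photons_per_step / max_intensity
print(timing_precision_s)     # ~7.6e-9 s, i.e. ~7.6 ns

def extrapolated_time_to_saturate(photons_at_cutoff, max_exposure_s):
    """Fallback for wells that never saturate: read the partial count at the
    exposure cutoff and extrapolate the time a full well would have taken."""
    fill_fraction = photons_at_cutoff / WELL_CAPACITY
    return max_exposure_s / fill_fraction

# A well only 25% full when a 10 ms cap ends extrapolates to 40 ms to saturate.
print(extrapolated_time_to_saturate(25_000, 10e-3))   # 0.04 s
```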

What do you think?  Any potential pitfalls or feasibility issues I might have missed? I’m especially interested if anyone has come across a source with similar ideas before. Feel free to post links in the comments!

6 Responses to “A Quick Idea – Image Sensor Based on Time-to-saturate”

  1. Sloan Lindsey says:

    The largest issue I see with this method is in recording the time to saturation.
    Using your number of 7.6 ns gives us a polling rate of about 131 MHz. We need to be able to check the state of the pixel at that rate without adding any additional noise. I’m not sure that is feasible, as we need to cycle the entire chip at that rate, which will likely increase the temperature. It’s an interesting idea though.

    I found your post looking for information on another idea that I had. What if, like in your scenario, you only measure full saturation, but instead of sampling over time you sample over space? Imagine an array of pixels with decreasing saturation limits (for simplicity, assume halving). If in any time interval we reach saturation, we record a bit. Now we have a 1-bit posterized image of the places where light hit with greater than a certain intensity. The section with half as much sensitivity gives us a 1-bit image of where the light hit with twice as much intensity. We continue building this array with as many steps as the total number of stops we wish to capture, letting us choose the amount of dynamic range we capture. Now we have a heavily posterized image (film was binary in this manner as well) that we can subsample with another binary string offset by 1/2 to get twice as many grayscale gradations.

    The merit of this idea is that it allows for very high resolution – sensors are not feature-size limited (as shown by 1/2.3″ sensors with comparable resolution) – and the limiting factor for this system is then decoupled from the read noise and dependent only upon thermal noise. Furthermore, if we build our sensor in such a way that we can program the initial charge (like memory, and likely rather slow), we can then set the sensitivity and the dynamic range (we can choose our significant-figure-to-range ratio) on the fly, per scene.
    Truly though, I see the strength of my idea simply as a novel way to increase resolution. Flash memory has much higher densities than sensors, so the resolution is possible. This also opens ideas for different modes of sensing: instead of using photovoltaic effects, we could use photoresistive materials to act as our photon counters.

  2. Robert says:

    I think the largest issues are the dynamic scenes and the time recording for each pixel.
    I am also a layman interested in improving dynamic range and noise. I’m wondering if you know how long it takes to complete 4 shots in Fujifilm’s Pro Low-Light Mode – I mean the time from the start of the first shot to the end of the 4th shot? Thanks.

  3. I think the idea is brilliant for landscape photographers, although I often find myself shooting at 1-30 second shutter speeds when the light gets really low. If I’m not mistaken, shooting at 10 seconds and wishing to record an extra 2 stops of detail in the shadows would require what, 40 seconds? And that’s not even filling a photosite 100% full, that’s merely adding ~2 stops of DR. Plenty of times when I’m shooting at dawn, dusk, or especially at night, I could be making a 30 second exposure and STILL have completely black shadows in certain areas of the image. I would hate to think how many seconds / minutes / hours those photosites would take to fill all the way up! Of course, I’m sure your solution of a cap time would work. It would essentially just be adding a few stops of DR to the image, *NOT* creating a 100% white image with time values for brightness calculations. Dunno if that’s possible though.

    And of course, either way it’s DEFINITELY gonna hafta be tripod-only for the type of work I do at f/8 with a polarizer etc, where my average shutter speed is a 1/10 or 10″ even in half-decent light…

    I’m sure the main problem will be in designing the circuitry for a sensor that has individually sensitive pixels. Having a different “shutter speed” for each pixel individually could be a monumental programming task…

    The closest thing, and an indicator that you may be on the right track, is a sensor like Fuji’s legendary DSLR sensor, with completely different types of pixels on the same sensor. Using the same “shutter speed” but different photosite size / type, you definitely get added DR as Fuji has proven. I think their S3 or S5 still tops the charts for its JPG default output DR.

    My bet, however, is that this concept is as probable as a digital B&W or digital panoramic sensor. ;-) What we’re more likely to see is improved DR using different ISOs, through simple advances in current “Active D-Lighting” technology. If for example I shoot a landscape at ISO 100 or ISO 50, but the sensor can boost shadow sensitivity to 200 or 400, well, that’d be decent. Or if I shoot a D3s at ISO 1600 haha, I could gather better shadows by blending ISO 3200 and 6400 into the shadow pixels… Not perfect, but a small advantage with no jeopardizing of shutter speed for low-light sports / action photographers.

    =Matt=

  4. The place where technology like this might be the most feasible would be in 4×5 large format scanning backs. They’re already kinda throwing shutter speed out the window. ALSO, the entire sensor is just a few rows of pixels, so the physical circuitry and programming involved would both be a LOT simpler. You should contact the Betterlight guys with this idea!

  5. Gerbils says:

    See the following; they seem to be along the lines of your idea.

    1) “A Time-Based CMOS Image Sensor,” Qiang Luo, John G. Harris
    http://www.cnel.ufl.edu/hybrid/_private/publications/tcmosimager_01329135.pdf

    2) “A time-to-first spike CMOS image sensor with coarse temporal sampling,” Qiang Luo, John G. Harris and Zhiliang J. Chen

    3) US Patent 6069377

  6. Andy says:

    Check out PIXIM DPS technology developed at Stanford.
    They incorporate the rate of saturation to extend the dynamic range.
