Camera Guide, November 2010 Part 2 (Consumer full-size)

Simple Full-size

For some casual snapshooters, size is really no object.  In terms of usage, they might treat the camera much like a simple ultracompact – snapping a photo or two at social events, or taking casual photos around the house or their room.  In contrast to a simple ultracompact, though, a simple full-size camera doesn’t have portability as its main concern; you probably won’t be able to stuff it in your jeans pocket, and it might even require its own bag.  Instead, the larger formfactor of a simple full-size camera often allows for better overall image quality.

Simple Full-size, premium: Pentax K-x with 18-55mm lens

If size doesn’t matter and your budget reaches up to $500 or so, your best bet is a basic DSLR, which provides superb image quality and quick response times that simply blow away anything you can find on a small-sensor compact or bridge camera, especially in low-light situations. While more advanced photographers appreciate DSLRs for their interchangeable lenses and manual controls, they all still have fully automatic modes for simple point-and-shoot use.

The Pentax K-x is Pentax’s entry-level DSLR, but you wouldn’t know it based on its featureset and image quality.  While priced in the same sub-$500 range as Canon’s Rebel XS and Nikon’s D3000, the K-x’s plethora of features rivals that of many midrange DSLRs, with fast-firing 5fps continuous shooting, sensor-based image stabilization that works with any lens, an expansive 11-point autofocus system, and even 720p HD video. Image quality on the K-x, especially in low-light situations, is a bit better than on the Rebel XS, and both easily outclass the Nikon D3000.

  • 12MP resolution
  • 27-83mm (3x) zoom range
  • f/3.5-5.6 aperture
  • 1280×720, 24fps video
  • Sensor-based Image stabilization
  • 516g (18.2oz) – not including lens
  • 122 x 91 x 69 mm (4.8 x 3.6 x 2.7 in) – not including lens
  • 420 shots battery life (CIPA)
  • $490 on Amazon

Camera Guide, November 2010 Part 1 (Consumer compacts)

We’re nearing the holiday shopping season once again, so as an exercise to familiarize myself with all the product lines out there (a lot has changed in the 17 months since I did the last one), and to provide a one-stop quick read for digital camera recommendations, here are picks that run the gamut of common use cases and prices.

All prices are based on the lower of amazon.com and bhphotovideo.com.

Have you got a use case/need that isn’t covered here?  Feel free to post it in the comments, and I’ll keep it in mind for future guides (or maybe update this guide, if there’s a glaring omission in a category).  And if you think differently about any of the cameras, feel free to share that too!

General advice:

To give you all an idea of the perspective these recommendations are written from, here are a few guidelines I mostly go by:

Features trump image quality: With modern cameras, most image quality differences are largely a thing of the past.  Almost every camera released today has megapixel resolution far in excess of what’s needed (or even usable) for the majority of applications (like making a 4×6″ print, uploading to Facebook, or even displaying full-size on the biggest computer monitor or LCD screen you can buy), and in most daylight scenarios there is practically zero difference between cameras, especially among the top tier of manufacturers.  The main differentiator in your photographic experience and capability is the features you’ll have to work with – being able to take a wide shot with a 28mm wide-angle lens, or having a fast 5fps continuous shooting mode for action shots, is going to go much further toward getting you the photographs you want than minute differences in image quality or resolution.

Price/performance: The recommendation for each category will mostly go to the camera with the best value proposition – a lot of guides are written in the format of: best budget camera under $200, best midrange camera under $300, best premium camera under $500, etc. Yes, oftentimes Camera Xa has a slightly bigger LCD screen than Camera Xb and is therefore “better,” and its $50 premium still fits under the $300 budget – but as a knowledgeable consumer you wouldn’t want to spend that much more on a mostly cosmetic difference, and as an informed friend you would do best to recommend Camera Xb.

Simple Ultracompact

For many people, cameras are just cameras, and all they need is something that, for lack of a less-hackneyed phrase, they can “point and shoot”.  They’re not interested in photography, neither need nor want full manual controls, and can make do without a huge zoom range.  They’ll take snaps while they’re out at social events or just randomly at home or in their room, but that’s about it.  For this group there’s the simple ultracompact – a basic camera that has a few useful features (a wide-angle lens for photos in restricted interior spaces – group photos at a restaurant, for example – and image stabilization for low-light situations) but otherwise just provides good overall quality and a small formfactor that can be taken just about anywhere.

Simple Ultracompact, midrange: Canon SD3500 IS

For years, Canon’s iconic SD line has been the quintessential ultracompact point-and-shoot, and its popularity is well deserved: these cameras deliver solid image quality, a decent featureset, no-frills point-and-shoot control, and aren’t overly expensive.  The SD3500 IS is one of the better-featured packages available, providing a 5x lens with an extremely versatile 24mm wide angle (perfect for taking photos indoors and getting everything in the frame), 720p HD video, and the increasingly common image-stabilized lens.

  • 14MP resolution
  • 24-120mm (5x) zoom range
  • f/2.8-5.9 aperture
  • 1280×720, 30fps video (720p)
  • Lens-based Image stabilization
  • 160g (5.6oz)
  • 99 x 56 x 22 mm (3.9 x 2.2 x 0.9 in)
  • 220 shots battery life (CIPA)
  • $249 on Amazon

A Quick Idea – Image Sensor Based on Time-to-saturate

Apologies (as always) for the infrequent updates to this blog. This semester has been a lot rougher than in the past, so I don’t know if I’ll have time to post anything more until the end of break.

I had a quick idea I wanted to jot down, and I haven’t found anything on it. I feel like someone out there must have thought up something similar already, or it’s already in the works in some sensor company’s skunkworks lab.

The idea I have is an image sensor that measures light intensity based on time-to-saturate – the time it takes for a particular photowell (representing a pixel) to saturate to its maximum capacity. The concept I’ve come up with has some interesting theoretical advantages in dynamic range over conventional photon-counting designs used today.

Imaging today – photon counting

First, a layman’s overview of how the conventional photon-counting design works in today’s sensors:

The sensor is a light sensitive device, and whenever photons come into contact with it, they are absorbed and a proportional number of photoelectrons are “knocked out” by the photon energy and collected in a photowell. From this photowell, a voltage measurement is taken, and this ultimately translates to a brightness value in the resulting image. In essence: Image brightness ∝ voltage reading ∝ electrons collected ∝ photons collected.

When taking an image, there is a set exposure duration, often referred to as a “shutter speed” in photography terms. This defines the window during which the sensor is exposed to and detecting light – the exposure starts, light hits the sensor, the exposure stops, and then we count the photons.

A limiting factor in this design is the photowell capacity. The number of electrons that can be stored in a well is finite, and once the photowell is saturated, any additional electrons are not collected and hence the photons they correspond to are not counted. On the flipside, there is also a noise floor: enough electrons must be gathered to produce a signal that is discernible from the random signal variation due to various forms of dark (thermal), electronic (read), and shot (photon) noise.

These two attributes lead to a problem of dynamic range – in scenes where light intensity differs greatly between the darkest and brightest areas, the sensor is simply unable to measure the full range of brightnesses and must cap measurements above and/or below certain thresholds.  This leads to the “blown highlights” and “crushed shadows” look often found in photos of high dynamic range scenes.
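
To put rough numbers on this, here’s a back-of-the-envelope sketch in Python. The well capacity and noise-floor figures are made-up illustrative values, not measurements from any real sensor; the point is just that the usable range is bounded on both ends.

```python
import math

# Illustrative (made-up) numbers, not from any particular sensor:
full_well_capacity = 100_000   # max electrons a photowell can hold
noise_floor = 25               # electrons of read/dark noise (dimmest usable signal)

# Engineering dynamic range: ratio of the brightest measurable signal
# (a full well) to the noise floor, usually quoted in stops (factors of 2).
dynamic_range_stops = math.log2(full_well_capacity / noise_floor)
print(f"~{dynamic_range_stops:.1f} stops of dynamic range")  # ~12.0 stops

# Anything brighter than a full well clips to white ("blown highlights");
# anything near or below the noise floor is lost ("crushed shadows").
def record(photons_collected):
    return min(photons_collected, full_well_capacity)
```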

Time-to-saturate

The idea behind a time-to-saturate sensor is fairly simple. What we aim to measure in an image is light intensity – the flux of photons per unit time per unit area. The area term is taken care of by the fixed size of the photosite corresponding to each pixel, so the measure we are really after is photons per unit time, for each pixel.

With photon counting, we fix a shutter speed (time duration), and then count the number of photons (via voltage measurement of photoelectrons) captured in that span, and use both to derive the intensity:

Intensity = photons / time = photons recorded / shutter speed

In time-to-saturate, the photon count is fixed at the capacity of the photowell, and the variable we measure is the time it takes for an individual well to saturate fully to the capacity.

Intensity = photons / time = max photon capacity / time to reach max photon capacity

How would the system work exactly? With a time-to-saturate sensor, we use as long a shutter speed as needed to fully saturate all photowells (in a conventional sensor, this is the minimum shutter speed to generate an all-white (max brightness) image). At the moment a photowell reaches capacity, it records a timestamp which will indicate how long it took to reach capacity. Once the exposure is finished, we are then left with a two-dimensional array of saturation times, rather than photon counts. Rather than recording 100k photons at one photosite, and 50k photons at a neighboring photosite where light was half as intense, the readings we get from this sensor would be along the lines of 1 millisecond time-to-saturate for the first photosite, and 2 millisecond time-to-saturate for the second, half-intensity photosite.
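
To make the two readout schemes concrete, here’s a minimal sketch of the arithmetic in Python, using the same illustrative numbers as the example above (a 100k-photon well, millisecond-scale saturation times). It’s just the two formulas written out, not real sensor code.

```python
FULL_WELL = 100_000  # photons at well capacity (illustrative)

def intensity_from_photon_count(photons_recorded, shutter_speed_s):
    """Conventional readout: photons counted over a fixed exposure time."""
    return photons_recorded / shutter_speed_s        # photons per second

def intensity_from_saturation_time(time_to_saturate_s):
    """Time-to-saturate readout: fixed photon count, variable time."""
    return FULL_WELL / time_to_saturate_s            # photons per second

# The example from the text: one photosite saturates in 1 ms,
# its neighbor (half as bright) takes 2 ms.
print(intensity_from_saturation_time(1e-3))  # 1e8 photons/s
print(intensity_from_saturation_time(2e-3))  # 5e7 photons/s -- half the intensity
```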

Key Advantages

There are two key advantages in how this scheme takes light intensity readings, both of which ultimately improve dynamic range:

  • There is virtually no limitation to the range of highlights we can capture, unlike the limitation imposed by the photowell capacity with photon-counting sensors. In our example, if there was a third photosite which had double the intensity of the first 100k photosite, and was exposed to 200k photons, it would only end up recording 100k photons since this is the capacity of the photosite, and thus both pixels would record the same white (max brightness) value, even though the 200k photosite pixel clearly represents a brighter area in the scene than the 100k photosite. A time-to-saturate measurement, by contrast, would simply produce a shorter time measurement: the 200k photosite saturates in 0.5 milliseconds, which we can compare to the 1 millisecond measurement for the first photosite and clearly conclude that the 200k photosite is twice as bright.
  • Noise levels are reduced to the level of a maximally-saturated photowell. In a photon-counting sensor, any photosite that does not record a max white value by definition recorded fewer photons, and thus produces a sub-optimal signal-to-noise ratio (SNR). Photon or “shot” noise has a standard deviation of the square root of the signal – thus for 100k photons we have √(100,000) = 316.2 photons of standard deviation, and an SNR of N/√(N) = √(N) = 316.2. For 50k photons, however, we have an SNR of √(50,000) = 223.6. In contrast, all photosites in a time-to-saturate sensor reach the max well capacity, and will thus all have the max SNR. This ensures that all photosites record values well above the noise floor, and additionally reduces photon noise for all pixels to the level of a maximally saturated photosite (the 100k photon, 316.2 SNR in this example). (See the quick sketch of this arithmetic just below this list.)
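
Here’s the shot-noise arithmetic from the second point as a quick sketch (the photon counts are the same illustrative figures used throughout):

```python
import math

def shot_noise_snr(photons):
    # Shot noise has a standard deviation of sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    return math.sqrt(photons)

print(shot_noise_snr(100_000))  # ~316.2 (a fully saturated well)
print(shot_noise_snr(50_000))   # ~223.6 (a half-filled well)
# With time-to-saturate readout every photosite fills to 100k,
# so every pixel gets the ~316 SNR, not just the brightest ones.
```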

In theory, such a sensor would have an infinite dynamic range – the brightest intensities are simply recorded as short time-to-saturate durations, and enough samples are recorded from the darkest areas to place the measurement well above the noise floor.  This would have huge implications for large dynamic range photography and imaging in general, to be able to record the entire dynamic range of a scene in a single exposure, without having to resort to processing tricks like selective shadow/highlight adjustment or high dynamic range (HDR) blending.

Potential Feasibility Issues

I’m not aware of any sources that have proposed this idea before, but if there are, then there must be some large feasibility (or perhaps cost) issues that have prevented its development thus far. Here are the few issues I can imagine – none of which seem like dealbreakers, and none of which, in theory, would place performance any worse than that of photon-counting methods:

  • Timing accuracy/precision of photowell saturation. While photon counting relies on accurate and precise voltage readings from the photowells, a time-to-saturate sensor would need good accuracy and precision in recording the time at which a photowell reaches saturation. How precise does the timing need to be to equal the theoretical precision of today’s cameras? Taking the contemporary example of a photowell with 100k photon capacity, hooked up to a sensor/imaging pipeline with a 14-bit analog-to-digital converter (found on most high-end cameras today), we would need to quantize measurable photon counts into 2^14 = 16,384 steps. 100,000 / 16,384 = ~6 photons, which is the increment we need to be able to resolve. Most high-end cameras today offer a minimum shutter speed of 1/8000 second (125 microseconds) – a 100k photowell that fully saturates in this time (the maximum light intensity a photon-counting sensor can record under any settings) thus sees 100,000 photons / 125 microseconds = 800,000,000 (0.8 billion) photons per second.  Finally, we combine this intensity with our 6-photon steps to arrive at 6 photons / (0.8 billion photons/second) = 7.6 nanoseconds. This is the precision with which a time-to-saturate sensor would need to record time. Of course, the numbers vary with the application – with fewer bits per pixel we would need less precision (an 8-bit JPEG in this example would need just ~0.5 microseconds of precision), with a lower photowell capacity we would need greater precision, and with a larger minimum exposure time we would need less precision. (This calculation is written out in the sketch after this list.)
  • To take advantage of the greater dynamic range capabilities of a time-to-saturate sensor, the exposure duration must be longer than with a conventional photon-counting sensor, to capture more light. For static scenes this is unlikely to be an issue, but for dynamic scenes (e.g. moving subjects), the exposure duration can only be stretched so far before issues such as motion blur or camera-shake blur are introduced. At worst, however, the exposure can simply stop after a defined maximum exposure time – at that point, any photowells that have not reached capacity simply output a voltage reading as in a conventional sensor, and this reading is used to extrapolate a time-to-saturate that can be compared with the other photosites. In the worst case, the maximum exposure time is the same as the exposure time of a conventional photon-counting sensor, and the result is the same noise level and at least the same dynamic range, if not greater dynamic range captured in the highlights. For any exposure duration exceeding that of the conventional sensor, however, noise levels will be reduced and greater dynamic range in the shadow regions will be achieved as well.
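
And here’s the timing-precision calculation from the first point written out as a sketch; the figures are the same assumptions used in the text (100k-photon wells, a 14-bit ADC, a 1/8000s minimum shutter speed), not the specs of any actual sensor.

```python
FULL_WELL = 100_000        # photons at well capacity (illustrative)
ADC_BITS = 14              # 14-bit ADCs are common on high-end cameras
MIN_SHUTTER_S = 1 / 8000   # 125 microseconds, a typical minimum shutter speed

photons_per_step = FULL_WELL / 2**ADC_BITS   # ~6.1 photons per quantization step
max_intensity = FULL_WELL / MIN_SHUTTER_S    # 8e8 photons/s, brightest recordable light
required_precision_s = photons_per_step / max_intensity

print(f"{photons_per_step:.1f} photons per ADC step")                  # ~6.1
print(f"{required_precision_s * 1e9:.1f} ns timing precision needed")  # ~7.6 ns

# For an 8-bit output the requirement relaxes considerably:
photons_per_step_8bit = FULL_WELL / 2**8
print(f"{photons_per_step_8bit / max_intensity * 1e6:.2f} us")         # ~0.49 us
```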

What do you think?  Any potential pitfalls or feasibility issues I might have missed? I’m especially interested if anyone has come across a source with similar ideas before. Feel free to post links in the comments!

An explanation of Fujifilm’s Super CCD EXR sensor

A look at Fujifilm’s innovative EXR sensor, the latest iteration of its flagship Super CCD line, along with some analysis of images from production cameras. Admittedly this would have been more interesting as a speculative piece a year ago, but better late than never.

tl;dr: Fujifilm’s EXR sensor is extraordinary, mostly for its dynamic range. If you’re after the best non-DSLR image quality around, your choices start with the Fujifilm F200EXR, F70EXR, and S200EXR – and end there.

Fujifilm has long been a leader in revolutionary sensor technology, particularly in the small-sensor market, where the majority of manufacturers have long been content to pump out traditional, vanilla CCD sensors with square-grid Bayer filter arrays.

In September of 2008, Fujifilm announced plans for its latest sensor: the Super CCD EXR, which combines the unique color filter array (CFA) and pixel-binning features of various previous sensors into a single “switchable” sensor that can be optimized for one of several qualities (which are typically mutually exclusive when designing a sensor): high resolution, high dynamic range, and low noise.

High resolution

High resolution mode is the default, utilizing the full set of photosites on the sensor and producing an image with one pixel per photosite – nothing too special here, though Fuji claims the diagonal layout of photosites (as opposed to a simple square grid) helps to improve resolution.

High sensitivity

A comparison of a typical Bayer CFA (left) and the CFA on Fujifilm's new EXR sensor (right)

The second mode of operation for the EXR sensor is a high-sensitivity mode which Fuji calls “Pixel Fusion Technology”, which is fancy marketspeak for pixel binning (combining readings from adjacent pixels to produce a better signal). With the EXR’s pair-based CFA layout, Fujifilm claims that interpolation (and thus color resolution) will be more accurate because the binned pixels are closer together (e.g. the paired blue pixels are pretty much in the same location, while they’re separated by two pixel lengths in a standard square-grid Bayer array). I don’t entirely buy this argument – it’s true that same-color pixel values will be more accurate since they’re closer, but you can’t get something for nothing: the average distance from red to blue, for example, increases, which lowers accuracy when interpolating blue values at red pixels.
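
For anyone unfamiliar with pixel binning in general, here’s a generic 2x2 binning sketch in Python/NumPy. Note that this is plain square-grid binning for illustration only – it is not Fujifilm’s diagonal, pair-based EXR layout or demosaicing.

```python
import numpy as np

def bin_2x2(raw):
    """Sum each 2x2 block of photosite readings into one output pixel.

    Combining four readings quadruples the collected signal while shot noise
    only doubles (sqrt of 4), so SNR improves at the cost of resolution.
    Generic illustration only -- not the EXR pairing scheme.
    """
    h, w = raw.shape
    return (raw[0:h:2, 0:w:2] + raw[1:h:2, 0:w:2] +
            raw[0:h:2, 1:w:2] + raw[1:h:2, 1:w:2])

raw = np.random.poisson(lam=50, size=(4, 4)).astype(float)  # fake sensor readings
print(bin_2x2(raw).shape)  # (2, 2) -- half the linear resolution, better signal
```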

Canon 7D and 1D Mark IV: new 1D and 1D junior

TL;DR version: A long diatribe on how the latest Canon releases completely underwhelm in the face of the competition, especially from Nikon.  The 7D is a decent upgrade that’s completely overrated simply due to marketing. The 1D Mark IV sounds nice and has the capability the 1D Mark III probably should’ve had – unfortunately, its functionality has been completely eclipsed by Nikon’s D3(s) and even the D700, which, unlike the 1.3x-crop 1D, are able to pull double duty as both heavy-duty sports bodies and general-purpose cameras.

Canon's 7D, which is essentially a 60D with fancy marketing and a higher price tag

It’s interesting to see how much of an effect marketing has on the general photography consumer. Over the past few months, Canon has released a couple of moderate upgrades, one of which has been hailed as revolutionary and game-changing, and the other of which was met with a big collective yawn and cries that Canon has fallen behind the cutting edge and is playing catch-up with Nikon. The biggest difference? One camera was given an incremental version number, and the other was given a new model number as the start of a different series.

Tamron 17-50mm f/2.8 VC: An image-stabilized, midrange crop lens for the masses

The Tamron 17-50mm f/2.8 VC - currently one of only two image-stabilized f/2.8 midrange zoom lenses on the market, and the only one under a grand ($650 to be exact)

When it comes to midrange lenses, there are a few different approaches. Of course, a lot of people start at the low end with an 18-55mm kit lens or the like, but eventually most people graduate, and there are pretty much two ways to go:

  1. A small-range, high-quality, large-aperture zoom.
  2. A large-range ultrazoom with (usually) lower quality and a smaller aperture.

One area where ultrazooms seemed to have a leg up on large-aperture zooms was image stabilization: nearly every single ultrazoom lens offers it, but until recently only one large-aperture crop lens (Canon’s $1000+ 17-55mm f/2.8 IS) did.

That left just one (very expensive) option for Canon users, and left Nikon users completely out in the cold (they pay $1300 for a 17-55mm f/2.8 without VR). Third-party manufacturers have, as always, had cheaper alternatives, such as Sigma’s 18-50mm f/2.8 and Tokina’s 16-50mm f/2.8, but all of these lacked any sort of stabilization as well.

Now Tamron has finally gone ahead and introduced its VC stabilization to its flagship crop standard zoom, the 17-50mm f/2.8 (or rather, the Tamron SP AF 17-50mm f/2.8 XR Di II VC LD Aspherical [IF]), which at last delivers a large-aperture, image-stabilized standard zoom at an affordable price ($650 currently).

Its only knock is that it doesn’t yet have the fast USM- or SWM-style autofocus of the Canon and Nikon models, although that’s a feature I’ve long regarded as overrated in standard zooms for most people’s uses.

Panasonic does Micro Four-Thirds right with the GF1

In what many see as the next big evolutionary step for digital cameras, Panasonic and Olympus made a bold move with their introduction of the Micro Four Thirds system, an electronic-viewfinder, interchangeable-lens (so-called “EVIL”) system that eschews the mirror assembly found in traditional SLR cameras and offers image preview via a live view feed only.

Aside from the numerous advantages associated purely with live view (which could technically be realized with a traditional DSLR – it’s just that forcing live view only is likely to spur much more rapid development), the one key advantage of Micro Four Thirds (and upcoming systems like it, such as Samsung’s NX system) is that removing the mirror assembly allows lenses to sit much closer to the image plane, making for much smaller camera bodies and lenses.

The first few of these cameras – the Panasonic G1 and GH1 – completely failed to live up to the small form factor potential: they were shaped much like traditional SLRs, albeit slightly smaller.

The Panasonic GH1 - one of the first Micro Four Thirds cameras which didn't quite realize the potential of the formfactor

Next, Olympus released a Micro Four Thirds camera of its own: the E-P1 “Pen”, which harked back to Olympus’ historical line of compact film cameras. Unlike the G1, the E-P1 actually began to approach what some would call “compact” – it was just 1.4in thick, though that doesn’t take the attached lens into account.

Now Panasonic is jumping on the bandwagon with its E-P1-esque GF1, which sports a slim, compact-like body. The specs are nothing to get excited about, though it does have a built-in flash, which was notably missing from the E-P1. In a puzzling decision, though, Panasonic decided not to implement any sensor-based image stabilization, relying on lens-based IS to counter camera shake. Unless they were denied a sensor-IS license by Olympus (a possibility), I’d say this is a rather bone-headed decision, since any stabilized lenses will add weight unnecessarily (or, in the case of the pancake lenses that are practically made for this kind of camera, stabilization can’t be added at all), defeating the entire purpose of Micro Four Thirds.

The two kit lenses offered with the GF1 are a bit more appealing than the E-P1 package: a standard 14-45mm OIS kit lens and a 20mm f/1.7 pancake prime. The prime still doesn’t quite reach portrait range, and it’s even further from all-around wide-angle utility than Oly’s 17mm f/2.8 pancake, but it does offer a much larger f/1.7 aperture.

A comparison of the new landscape in premium compacts:

Panasonic GF1 size comparison

Camera                                   | Size (in)       | Focal range (equiv) | Aperture (equiv)
Canon G10                                | 4.3 x 3.1 x 1.8 | 28-140mm            | f/13-21
Fujifilm F200EXR                         | 3.8 x 2.3 x 0.9 | 28-140mm            | f/14-22
Panasonic LX3                            | 4.3 x 2.3 x 1.5 | 24-60mm             | f/9.4-13
Sigma DP1                                | 4.5 x 2.3 x 2.3 | 28mm                | f/6.7
Sigma DP2                                | 4.5 x 2.3 x 2.3 | 42mm                | f/4.7
Olympus E-P1 w/ 17mm f/2.8               | 4.7 x 2.8 x 2.3 | 34mm                | f/5.6
Olympus E-P1 w/ 14-42mm f/3.5-5.6        | 4.7 x 2.8 x 3.1 | 28-84mm             | f/7-11
Panasonic GF1 w/ 20mm f/1.7              | 4.7 x 2.8 x 2.4 | 40mm                | f/3.4
Panasonic GF1 w/ 14-45mm f/3.5-5.6 OIS   | 4.7 x 2.8 x 3.8 | 28-90mm             | f/7-11

As expected, the added IS in the Panasonic kit lens makes it noticeably larger (22.6% longer) than the E-P1 setup. Panasonic’s pancake, however, is about the same size as Oly’s 17mm, and with its f/1.7 aperture it is by far the best of the bunch in terms of large-aperture performance (a 35mm equivalent of f/3.4).
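
For anyone wondering where the “equivalent” figures in the table come from, they’re simple crop-factor scaling (Micro Four Thirds is a 2x crop); a quick sketch of the math, using the two pancakes as examples:

```python
def full_frame_equivalent(focal_length_mm, f_number, crop_factor):
    """Scale a lens's focal length and f-number to their 35mm equivalents.

    Field of view and depth of field both scale with the crop factor.
    """
    return focal_length_mm * crop_factor, f_number * crop_factor

# Panasonic 20mm f/1.7 pancake on Micro Four Thirds (2x crop):
print(full_frame_equivalent(20, 1.7, 2.0))  # -> (40.0, 3.4), i.e. 40mm f/3.4 equiv

# Olympus 17mm f/2.8 pancake for comparison:
print(full_frame_equivalent(17, 2.8, 2.0))  # -> (34.0, 5.6), i.e. 34mm f/5.6 equiv
```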

If you’re in the market for this kind of camera, though, the most sensible move seems to be taking the E-P1 to get yourself sensor-based IS and combining it with Panasonic’s 20mm f/1.7 pancake prime. You will be losing out on the built-in flash, though, which is somewhat of a must-have for a camera like this (since, again, needing to carry around a huge external flash defeats the size advantage).

Pentax K-x

Pentax K-x (space white)

Following up on their K-7, Pentax has now come out with the entry-level K-x. While it doesn’t bring anything groundbreaking that wasn’t already seen on the K-7, it packs in many of the features seen on competitors’ midrange models, and – pending reviews of image quality, and setting aside overall Pentax system upgrade options – it’s probably the best choice out there currently for the beginning photographer/student.

The big headline features:

  • 12.4MP CMOS sensor (a different sensor from the K-7’s 14.6MP unit, but interestingly also CMOS, unlike earlier Pentaxes, which used CCDs)
  • ISO up to 12.8k
  • Live-view with face-detect AF
  • 720p, 24fps video
  • 4.7fps continuous shooting
  • $650 MSRP with 18-55 kit lens (and likely to drop further once it gets off pre-order)

With the specs listed, this is a camera you’d expect in the high-hundreds, competing with the likes of Canon’s Rebel T1i or Nikon’s D5000/D90, yet it’s got a price closer to that of the entry-level Rebel XS or D3000.

A comparison:

Pentax K-x comparison

Camera                        | Canon Rebel XS       | Nikon D3000     | Pentax K-x          | Nikon D5000         | Canon Rebel T1i
Sensor, crop                  | 10MP, 1.6x           | 10MP, 1.5x      | 12MP, 1.5x          | 12MP, 1.5x          | 15MP, 1.6x
ISO range                     | 100-1600             | 100-3200        | 100-12800           | 200-6400            | 100-12800
Live-view?                    | Yes                  | No              | Yes                 | Yes                 | Yes
Live view AF                  | Yes                  | None            | Yes, face-detect    | Yes, face-detect    | Yes, face-detect
Video                         | None                 | None            | 1280x720, 24fps     | 1280x720, 24fps     | 1920x1080, 20fps
AF points                     | 7pt, 1 cross-type    | 11pt, no cross  | 11pt, 9 cross-type  | 11pt, 1 cross-type  | 9pt, 1 cross-type
Continuous FPS                | 3fps jpg, 1.5fps raw | 3fps            | 4.7fps              | 4fps                | 3.4fps
Image stabilization           | lens-based           | lens-based      | sensor-based        | lens-based          | lens-based
Size                          | 127 x 97 x 61mm      | 127 x 97 x 64mm | 122 x 91 x 69mm     | 127 x 104 x 79mm    | 130 x 97 x 61mm
Weight                        | 450g                 | 485g            | 516g                | 560g                | 480g
Price (with kit lens, Amazon) | $499.95              | $529.95         | $649.95             | $719.63             | $781.89

In a comparison with the $500 entry-level cameras, the K-x blows them away in nearly every aspect, and it goes toe-to-toe with, or even exceeds, the D5000 and Rebel T1i in every single category, despite being significantly cheaper (especially once the street price drops below MSRP).

Interestingly enough, the Pentax K-x will come in a variety of colors, including an ultra-spiffy red (below), the space white shown above, and your ordinary black.

Pentax Japan also features a site where you can come up with your own custom color scheme, and apparently order it as well – personally, I find that an insanely appealing prospect.

Pentax K-x (red)

Pentax K-x custom design - design your own!

I think this is a ranking of custom designs that users have created

Pentax K-x press release

Past three months in camera news: Nikon

Apologies to all for dropping the ball for the past three months – it’s been a whirlwind start to the semester here. Big, recent developments:

Nikon SLR Refresh

Nikon introduced a couple of new SLR updates, the D300s and D3s. The D300s is a pretty incremental upgrade to the mid-level D300, offering a modest +1fps improvement in continuous shooting (up to 7fps) and bringing the video capability that’s now standard on every new DSLR.

The bigger story came a few months later, in the form of the D3s. While still not a revolutionary introduction, it is much more than a software refresh. Among the features of note are a video mode (at 24fps!, albeit only at 1280×720 (720p) resolution), 11fps continuous shooting available in a higher-res crop mode (it now crops only 1.2x instead of 1.5x), and an increase in ISO range, up to ISO 12.8k natively with a boost to ISO 100k. The D3s presumably packs a different sensor, though it still maintains the same 12.1MP resolution.

People have been going gaga over the last spec in particular, especially given such a high linear number for ISO (and from here, it’s just four more stops until we get to ISO 1.6 MILLION), though it’s really just +1 stop natively and a +2 stop boost over the previous D3. And it’s important to note that the mere availability of an ISO setting says nothing about image quality at that level – that would be the same mistake as seeing the maximum shutter duration expanded from 30 seconds to a minute and somehow thinking this magically makes photos at 1/500s less blurry. Given that the resolution (and thus pixel pitch) remains exactly the same, I certainly wouldn’t expect quality to be any worse than the D3, and it quite probably will be a tad better (although I have extreme doubts about the ISO 100k mode, which is digitally boosted 3 stops; things have always looked terrible at just +2 stops of digital boost, even pushing ISO 100 two stops to 400.)

All in all, about as much as you could expect from Nikon, who seems to do very incremental updates and waits a long time to deliver big, revolutionary refreshes. Here’s hoping we see that D3s sensor in a D700s soon, though 1080p at 24fps would be nice (and completely feasible: 1920x1080x24fps = 49.8MP/s throughput, while we definitely know that the D700 supports 12.1MPx9fps = 108.9MP/s throughput in its continuous shooting mode).
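
The throughput figures in that parenthetical are easy to sanity-check:

```python
# Pixel throughput needed for 1080p24 video vs. what the D700 already
# sustains in continuous shooting, both in megapixels per second.
video_mp_per_s = 1920 * 1080 * 24 / 1e6   # ~49.8 MP/s
burst_mp_per_s = 12.1 * 9                 # ~108.9 MP/s

print(f"1080p24 needs ~{video_mp_per_s:.1f} MP/s")
print(f"D700 burst mode already reads ~{burst_mp_per_s:.1f} MP/s")
```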

Nikon D3s press release (I don’t know why they keep referring to it as “D3S” – past history and even the logo in the image clearly denote “D3s”)

The Nikon D3s, now with 720p video and ISO up to 12.8K/100K (boost)

Nikon also announced a couple of lens refreshes, with Version II’s of their popular 18-200mm VR ultrazoom and a long-awaited update to the 70-200 f/2.8 VR to optimize it for full-frame (FX) sensors. As Nikon had long trumpeted 1.5x crop DX sensors before their introduction of the full-frame D3 in 2007, they cut corners with their introduction of the 70-200 f/2.8 VR in 2003, building a lens that was technically full-frame but had an abysmal drop-off in performance once you actually got to the corners outside of a 1.5x imaging circle. This wasn’t found out until a bit after the D3 was released, finally giving digital photographers a platform to test the lens’ full frame performance, which resulted in tests like these:

http://www.dpreview.com/lensreviews/nikon_70-200_2p8_vr_n15/page6.asp

http://www.dpreview.com/lensreviews/widget/Fullscreen.ashx?reviews=17&fullscreen=true&av=3&fl=105&vis=VisualiserSharpnessMTF&stack=horizontal&lock=&config=/lensreviews/widget/LensReviewConfiguration.xml%3F4

The new 70-200 II promises to fix all of these problems with a new optical design and coatings, and promises to throw in a more effective “4-stop” VR system as well. There haven’t been too many authoritative tests yet to show how it performs (if you’ve found any, send me a link!), but presumably they should have no problem building such a lens – Canon has had two 70-200 2.8’s that’ve performed flawlessly on full-frame, and Nikon itself had a great 80-200 2.8 lens prior to the 70-200 VR I.

The one sticking point? As if Nikon’s $2019 price on the original 70-200 VR I wasn’t enough, the 70-200 VR II will now set you back a cool $2400.

Nikon 18-200 II and 70-200 2.8 VR II press release

A real TZ-killer: Fujifilm’s F70EXR

Possibly the biggest announcement in the compact sector since the Panasonic TZ1 – Fujifilm finally puts together not just a compact ultrazoom but an ultracompact ultrazoom (0.9 in thin), and manages to fit in a half-inch SuperCCD sensor to boot.  If ever a camera came along with the potential to dethrone Panasonic’s vaunted TZ/ZS-series, this is it.

I’ve got a big night tonight – I’ll update later, but for now you can munch on the details of the press release and Imaging Resource’s short overview/analysis.