The iPhone 5S announcement this week was punctuated with a lot of specs and buzzwords. Much of it centered on the new Touch ID fingerprint scanner and the 64-bit processor. But the camera advancements were the most intriguing part to me.
Apple has been putting a major focus on the iPhone’s camera for a couple of years now. A recent Apple ad touted that more people take pictures with the iPhone than with any other camera. And a while back the iPhone became the number one camera on the photo-sharing site Flickr, a crown it has never lost. Despite the proliferation of point-and-shoot cameras with impressive technology and ever-cheaper DSLRs, the smartphone is, and will probably remain, the primary camera for a lot of people.
Unfortunately, cameras from many other phone companies like Samsung and Motorola simply don’t match the quality of images coming out of the iPhone. I’ve tried many, many different Android devices over the years that promised better images, but none has delivered. The only real smartphone contender in the camera space is Nokia, which is doing some great stuff with the Lumia line. But where Nokia is pushing pixel-count boundaries with the 41-megapixel Lumia 1020, Apple has chosen to go in a different direction.
Before I launch into the stuff I found interesting about the new camera’s technology, a bit of background. I’m a reformed professional photographer who has shot just about every kind of camera, film and digital, professional and pocket. Weddings, portraits, landscapes, wildlife, sports, industrial, you name it. I’ve processed film and prints by hand and by machine, and have taught photography as well. I don’t know everything photographic there is to know, far from it, but I’ve been around a bit.
Over the last few years, the iPhone has really become my go-to camera. The DSLRs have sat on the shelf and even a compact Panasonic 4/3 camera only comes out infrequently. This means that when Apple introduces a new device I’m all ears when it comes to what they say about its camera.
The iPhone 5S is no exception, and there is some pretty great stuff here. Obviously, this is not a review of the camera, just an exploration of the specs and what they might mean for other iPhoneographers.
THE SENSOR
The sensor in the iPhone 5S remains at 8 megapixels, which is a bold choice given that competitors like Nokia are shooting for the moon as far as pixel count is concerned. But, as with many things, the sheer number of pixels is not as important as the quality of those pixels, and that’s what Apple has focused on here.
The individual photo receptors that correspond to a ‘pixel’ in your image have been enlarged to 1.5 microns to present more surface area for photons to strike. The iPhone 5, like many other smartphones of its generation, featured a 1.4-micron pixel size.
Think of it as holding a thimble out in a rainstorm to catch water: the bigger the thimble, the more drops you catch in the same amount of time. Bigger pixels gather more light, which should improve both color saturation and noise (or grain) levels in images.
Notably, the competing HTC One bests these specs with 2.0-micron pixels and an f2 aperture, but features a sensor with half the resolution at 4 megapixels and an odd 16:9 ratio for still images. Even so, in shootouts the iPhone 5 still won out in most situations, with subtler and more accurate color. That’s likely because Apple also designs its own ISP (image signal processor) and fine-tunes it to work with its hardware.
In order to accommodate the larger pixel size, the ‘active surface area’ of the sensor has been increased by 15%. More surface area but no more pixels means bigger, more light-sensitive pixels. Apple says that this adds up to a 33% increase in overall light sensitivity.
The iPhone 5S features a 5-element lens which Apple describes as ‘new’ for this device. That’s likely because of the increase in sensor size, as the lens has to project a larger light circle onto the sensor itself. The lens has an f2.2 aperture, roughly a 1/4-stop improvement over the iPhone 5’s f2.4 aperture. That works out to about 19 percent more light passing through the lens, adding to the iPhone’s low-light abilities.
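If you want to check the numbers, here’s a quick back-of-the-envelope version of that light-gathering math. It’s only an approximation, not Apple’s methodology, but the figures line up reasonably well with the claims:

```python
# Back-of-the-envelope light-gathering math (approximate; not Apple's exact methodology).
import math

# Pixel pitch: light gathered per pixel scales roughly with pixel area.
old_pitch, new_pitch = 1.4, 1.5                 # microns
pixel_gain = (new_pitch / old_pitch) ** 2       # ~1.15, in line with the ~15% larger active area

# Aperture: light through the lens scales with (old_f / new_f)^2.
old_f, new_f = 2.4, 2.2
aperture_gain = (old_f / new_f) ** 2            # ~1.19, i.e. roughly 19% more light
stops = 2 * math.log2(old_f / new_f)            # ~0.25, i.e. about a quarter of a stop

print(f"pixel area gain: {pixel_gain:.2f}x")
print(f"aperture gain:   {aperture_gain:.2f}x ({stops:.2f} stops)")
print(f"combined:        {pixel_gain * aperture_gain:.2f}x")  # ~1.37x from lens and pixel size together
```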
Apple says that the new sensor has better dynamic range and less noise, with more detail in highlight and shadow. That fits with the specs, but we’ll have to reserve judgement until we’ve had time to play with the camera.
WHAT THE A7 DOES FOR YOU
With the iPhone 4S, Apple introduced its own ISP or image signal processor. This is a common component typically referred to as a digital signal processor in digital cameras. It’s the thing that color corrects your image, converts formats, applies color and tone adjustments and a bunch more. Think of it as a brain that only thinks about images.
Apple has continued to evolve that ISP, though it didn’t refer to it directly in this week’s presentation. Instead, Phil Schiller repeatedly referred to the A7 as doing those things for you. That’s technically true, as the A7 SoC is where the ISP is housed.
In the iPhone 5S, we get a bunch of cool new tricks performed by the ISP. Some of them have been standard on high-end DSLRs for a while now, and some are really bleeding edge.
The new ISP still handles standard duties like white balance and auto-exposure, but it also now does dynamic tone mapping. Tone mapping is a technique that allows different areas of an image to be adjusted independently for brightness, contrast and color — or ‘tone’. It’s a similar procedure to the one used to make High Dynamic Range (HDR) images. In this case, Apple is using it to improve detail in the light and dark areas of the image: it should map the contrast levels of the various areas in the scene and make pre-capture readings that help your post-capture image.
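Apple hasn’t said exactly what its tone mapping does under the hood, but the general idea is straightforward: adjust dark and bright regions independently instead of applying one global curve. Here’s a toy illustration of per-region adjustment (my own sketch, not Apple’s algorithm):

```python
import numpy as np

def local_tone_map(gray, tile=64, target=0.5, strength=0.6):
    """Toy local tone mapping: nudge each tile's exposure toward a mid tone.

    gray: 2-D float array with values in [0, 1]. Dark tiles get lifted,
    bright tiles get pulled back. Real implementations blend between regions
    to avoid visible tile edges; this sketch does not."""
    out = gray.copy()
    h, w = gray.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            region = gray[y:y + tile, x:x + tile]
            gain = (target / (region.mean() + 1e-6)) ** strength
            out[y:y + tile, x:x + tile] = np.clip(region * gain, 0.0, 1.0)
    return out
```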
Apple also touts the new iPhone 5S as having autofocus matrix metering with 15 focus zones. This is a common feature on decent DSLR cameras and some high-end compacts. It allows the camera to split the scene into various zones, determine which one contains the subject, and adjust metering according to where it focuses. This typically increases the speed and accuracy of focus and helps reduce auto-exposure errors, like a face that turns out really dark or a sunset that gets blown out.
This should mean less manual tapping around on the image to get the right focus and exposure, if it works as advertised.
The speed of the ISP in the A7 is also shown off by the new multi-shot feature, which takes several exposures and then picks the sharpest one. This happens, in typical Apple fashion, in the background without your input. What’s happening here (though I’ll go into more detail in a bit) is that you typically move around a bit even as you press the shutter button, causing a slight blur. Having a few shots to pick from can turn up a frame where your shake stopped, giving you a sharper image.
Realistically, this should take a very small fraction of a second, so it won’t ‘feel’ any different, you’ll just have a sharper image.
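Apple hasn’t published how it judges sharpness, but a common, simple proxy is how much fine detail survives in each frame, since camera shake smears that detail away. A minimal ‘shoot several, keep the sharpest’ sketch, assuming the frames are grayscale numpy arrays:

```python
import numpy as np

def sharpness(gray):
    """Crude sharpness score: gradient energy. Blurry frames score lower.
    (A stand-in for whatever metric Apple's ISP actually uses.)"""
    g = gray.astype(float)
    gx = np.diff(g, axis=1)
    gy = np.diff(g, axis=0)
    return (gx ** 2).mean() + (gy ** 2).mean()

def pick_sharpest(frames):
    """Given a short burst of grayscale frames, keep the one with the most detail."""
    return max(frames, key=sharpness)
```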
TRUE TONE FLASH
This thing is the crown jewel of the new iPhone’s camera capabilities, in my opinion. Yes, many people will probably still avoid using a flash, but the sheer engineering prowess here is insane.
The dual-LED flash in the iPhone 5S is not about providing more light; instead, it’s about providing light of a more accurate color. The flash in your pocket camera or DSLR, or in the current iPhone, is calibrated to a single color that approximates sunlight. This is fine in the sun as a fill light, but goes all wrong when you try to shoot an image with it indoors or under artificial light.
Where daylight is very cool and ‘blue’, indoor light is often very warm and ‘orange’. That goes for tungsten lights (think typical household bulbs), sodium bulbs and other common sources in your home. This means that flash images pop blue light onto your subject’s face while orange light bathes the background. The camera’s ISP tries to balance the two and fails miserably on both counts.
The True Tone flash has both an amber and a white LED to produce two tones of light that can balance the foreground ‘faces’ with the background ambient light. If the two tones of the image are the same, then the iPhone’s ISP can color-correct the image and produce something decent. But it even goes further than that.
[Image: a flash with an orange color-correcting gel]
Professional photographers have been balancing flash and ambient indoor light for a long time. Typically this was done with gels — clear pieces of orange plastic that are placed over the front of a flash in order to simulate the light that comes out of regular bulbs. Then the camera’s white balance is set to tungsten and the image looks good. In the film days, you would use a special tungsten-balanced film. Either method is annoying and, in the end, you could never get the temperature exactly right.
The iPhone 5S doesn’t just pop the amber flash if it’s in tungsten lighting. Instead, it reads the scene and fires off both LEDs in varying intensities to create up to 1,000 different color temperatures. This should allow it to match the foreground flash exposure color up perfectly with the background color.
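Apple hasn’t said how it computes the mix, but the concept can be sketched. Photographers often reason about color temperature in mireds (one million divided by the Kelvin value), where blending behaves roughly linearly. The LED color temperatures and the linear-in-mired blend below are illustrative guesses on my part, not Apple’s figures:

```python
# Toy model of mixing a warm and a cool LED to approximate a target color temperature.
# The LED temperatures and the linear-in-mired blend are assumptions for illustration,
# not Apple's numbers.

WARM_LED_K = 2700.0   # assumed amber LED, roughly tungsten-like
COOL_LED_K = 5500.0   # assumed white LED, roughly daylight-like

def mired(kelvin):
    return 1_000_000.0 / kelvin

def led_mix(target_kelvin):
    """Return (warm_fraction, cool_fraction) that lands near the target temperature."""
    t, w, c = mired(target_kelvin), mired(WARM_LED_K), mired(COOL_LED_K)
    warm = (t - c) / (w - c)             # 0 at the cool LED, 1 at the warm LED
    warm = max(0.0, min(1.0, warm))      # can't go beyond either LED on its own
    return warm, 1.0 - warm

print(led_mix(3200))   # a room lit by ~3200K bulbs: mostly amber, a little white
```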
The long and short of it is that indoor images should be much more balanced in their color, with more natural skin tones and a balanced foreground and background. You might even like to use your flash again.
This probably won’t help all that much under fluorescent light, which is much more ‘green’ in spectrum, but there’s a chance that it could. It couldn’t be worse than the iPhone 5’s flash indoors, anyway.
One thing this won’t do, however, is increase the range of your flash much. Just because there are two LEDs doesn’t mean they’ll both be firing at full power; it’s likely that one or the other will be at much lower power for any given image. So you might get a bit more range, but don’t count on the extra bulb for extra brightness.
AUTO IMAGE STABILIZATION
Both the burst mode and image stabilization are probably only going to be useful in bright light. Both require multiple shots in quick succession, and quick shots mean less light makes it to the sensor for each frame. Still, both are nice to have.
The stabilization system especially is interesting. Instead of just taking multiple shots and picking a sharp one, the system appears to use technology similar to the current HDR feature. It takes multiple images and then uses the best bits of each picture based on exposure and sharpness to composite together a final image.
Theoretically, we’re looking at something that could replace a blurry face, for instance, with a sharp one from just a second later. Typically, shooting a sharp picture in really low light requires two things: a steady hand and a steady subject. Stabilizing only the lens solves only one of those problems. It doesn’t matter how steady your lens is if your subject is fidgety.
Utilizing a compositing method for ‘stabilization’ allows Apple to tackle both your movement and subject movement at the same time, which is pretty clever.
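Apple hasn’t detailed this algorithm either, but the compositing idea can be sketched: cut each frame of a short burst into tiles and, for every tile position, keep the sharpest version found anywhere in the burst. A real pipeline would also align the frames and blend the seams; this toy version skips both:

```python
import numpy as np

def detail(tile):
    """Crude detail score: gradient energy (a stand-in for a real sharpness metric)."""
    t = tile.astype(float)
    return (np.diff(t, axis=0) ** 2).mean() + (np.diff(t, axis=1) ** 2).mean()

def composite_sharpest_tiles(frames, tile=64):
    """Toy 'stabilization by compositing': for each tile position, keep the sharpest
    tile found anywhere in the burst of grayscale frames."""
    h, w = frames[0].shape
    out = frames[0].copy()
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            candidates = [f[y:y + tile, x:x + tile] for f in frames]
            out[y:y + tile, x:x + tile] = max(candidates, key=detail)
    return out
```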
The burst mode is a pretty standard 10 frames per second, a speed that can be matched by some third-party apps on the App Store already. The fact that Apple says you can capture ‘hundreds’ of images in a row without stopping is something worth noting, though. That’s normally related to how fast your ISP can process those images on the fly.
But the post-shooting procedure is the really interesting bit. Firing off a burst of a hundred images is nice but potentially extremely difficult to weed through to find the best images. So, says Schiller, the iPhone 5S’ ISP will weed through those based on a bunch of factors in real-time:
- exposure
- sharpness
- face detection
- subject smiling
- subject blinking
The bursting stuff is cool but nothing new for DSLR shooters; it’s long been one of the strengths of mirrored and even high-end mirrorless cameras. Image processing that chooses the best image for you has even been dabbled in by some companies. But the sheer number of signals checked in each image, and the seemingly pleasant UI for shooting and picking, should set this apart from what we’ve seen from camera makers.
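How the iPhone weighs those signals against one another is anyone’s guess, but conceptually it boils down to scoring every frame and keeping the best ones. A purely hypothetical sketch, where the extract_* callables and the weights stand in for whatever analysis the real ISP performs:

```python
# Hypothetical burst-scoring sketch. The extract_* callables are placeholders
# for whatever exposure, sharpness and face analysis the real ISP performs;
# the weights are made up for illustration.

def score_frame(frame, extract_exposure, extract_sharpness, extract_faces):
    exposure = extract_exposure(frame)      # say, 1.0 = well exposed, lower = too dark or blown out
    sharpness = extract_sharpness(frame)    # say, normalized 0..1, higher = more detail
    faces = extract_faces(frame)            # say, a list of dicts with 'smiling' and 'blinking' flags

    score = 0.4 * exposure + 0.4 * sharpness
    for face in faces:
        if face.get("smiling"):
            score += 0.2
        if face.get("blinking"):
            score -= 0.2
    return score

def best_frames(frames, how_many, extract_exposure, extract_sharpness, extract_faces):
    ranked = sorted(
        frames,
        key=lambda f: score_frame(f, extract_exposure, extract_sharpness, extract_faces),
        reverse=True,
    )
    return ranked[:how_many]
```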
SLO-MO
Slow motion video takes a lot of light. When you’re capturing 120 images per second, you need to fire your shutter off quickly to move on to the next one (1/120th of a second or faster, to be exact). So I wouldn’t expect to see this work well in anything but broad daylight.
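The arithmetic is worth spelling out: at 120 frames per second, each frame can collect light for at most 1/120th of a second, a quarter of what a 30fps frame gets:

```python
# Maximum time each frame can gather light at different capture rates.
for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> at most {1000.0 / fps:.1f} ms of exposure per frame")
# 30 fps -> at most 33.3 ms; 120 fps -> at most 8.3 ms. A quarter of the light
# per frame, which is why slo-mo favours bright scenes.
```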
But it’s a testament to the light-gathering capability of the new sensor and the sheer brute strength of Apple’s ISP that it’s able to do up to 120fps at 720p at all. That’s beyond the capabilities of most DSLRs, which top out at 60fps.
Once you’ve shot the video you can specify the segment that you’d like to be slow motion. That segment can even be changed later, indicating that this is locally processed. Apple’s Schiller did note that you can share these segments with friends, though, indicating that some processing to create a final shareable clip will take place at some point. Perhaps after you choose to share it.
Brian Klug of AnandTech got a look at the slo-mo feature of the iPhone 5S and it’s a separate mode like pano or video mode. Here’s an image he shot of the mode in action:
As a bonus, panoramic images also get a boost from the new A7 ISP, capturing images 50% faster at 30fps, making for faster sweeps.
WHAT DOES IT ALL MEAN?
So, we’ve got a bunch of improvements here that cross over from hardware to software and touch on user experience. All three are Apple’s strong suits when it comes to integrated devices like the iPhone. If you peer more closely, though, the biggest difference between an iPhone shooting experience and that of a traditional camera comes down to one thing: the image signal processor in the A7 chip.
The aperture isn’t that much bigger than competitors’, and the pixel pitch is actually smaller than the HTC One’s, for instance. And the sensor, though increased in size, is very tiny when compared to even point-and-shoot cameras.
The differences, then, come largely in how Apple’s ISP hardware and its front-end software mesh to make life easier for photographers. There’s a quote on Apple’s iPhone page which I think is nicely phrased:
“It just makes more sense to teach iPhone how to take a great picture rather than teach people how to be expert photographers.”
If you’re a photographer, you might actually rankle a bit at first, because you know as well as I that most of a good photograph happens at the photographer, not the camera. But, remember, most people are not trained photographers. They’re interested in getting the best picture possible but lack the formal training to compensate for the vagaries of poor sensors and lenses.
Note that Apple says ‘teach people how to be expert photographers’. That’s key because everyone with a smartphone is now — whether they see themselves that way or not — a photographer. Apple just sees the value in taking the burden of having to be an expert off of their shoulders. And it has the software and hardware prowess to (maybe) pull it off.
This discussion has been all about the potential of the new camera, as we’ve yet to put it through its paces. But as a photographer and as someone who likes to see what others capture, I’m fairly optimistic.
Image Credit: Apple, The Ewan Flickr/CC
Data source: via TC (By Matthew Panzarino)