Apple’s product images are world class. The detail and clarity are stunning. The lighting and composition are perfect. How are they made?
After spending some time photographing the iPhone X, I compared my images to Apple’s promotional images on apple.com. I humbly admit I was so perplexed by Apple’s images that I wondered if they were actually renders. The control of lighting is so precise that I’m uncertain how much, if any, is actually captured in camera.
When I look at a photo and see a highlight in a reflective surface, I know that a light has been placed at a position from which it will reflect into the camera. In photography this is called the incident angle: light reflects off a surface at the same angle at which it strikes it. So when I see a highlight in an image, I can try to imagine the quality of the light that made it. Controlling these highlights is critically important when lighting a reflective surface.
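The law of reflection behind this can be sketched in a few lines of Python. The vectors below are purely illustrative, not measurements of any real lighting setup: a ray direction reflects about the surface normal as r = d − 2(d·n)n, which is why a highlight tells you where the light must sit.

```python
def reflect(d, n):
    """Reflect direction vector d about unit surface normal n: r = d - 2(d·n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray hitting a horizontal surface at 45 degrees leaves at 45 degrees
# on the other side of the normal -- equal angles in, equal angles out:
print(reflect((1, -1), (0, 1)))  # → (1, 1)
```

Working backwards from a highlight, a photographer mentally inverts this: the camera position and the surface normal together dictate the one spot where a light will produce that reflection.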
For instance, looking at this close-up of the front angle, notice how the highlights on the side are so finely controlled. Starting from the rear, there is a shadow that runs precisely along the edge, then a band of highlight that runs along the side and wraps around the rounded corner while maintaining even illumination. Here I would expect some variance as the light moves around the corner: for the illumination to remain constant around the curve, the light source would need to maintain an equal distance from the surface; otherwise its intensity would vary.
Towards the front there are two highlights that flank a darker band. These two highlights have a gradation as they curve and taper to the top. The gradation looks natural, but the tapering to a point is impeccable.
Putting it all together, try to imagine what the lights look like that make these highlights and shadow areas. It’s either an amazingly precise lighting set-up, a composite image from multiple captures, or maybe it’s simply a render. I’m not a 3D render artist, but it would seem easier to build this image as a render than to capture it in camera. I imagine that all the elements of the image that astonish me as a photographer are not an issue in the virtual world of illustration. Creating the images as renders also solves the issue of depth of field, the area of focus, which is extremely shallow when shooting this close with a camera. Depth of field is not an insurmountable challenge in photography, but it is a layer of complexity that would not be an issue with a render.
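To give a sense of just how shallow the focus gets, here is a quick calculation using the standard close-up approximation for total depth of field, DoF ≈ 2·c·N·(m + 1)/m². The numbers are illustrative assumptions on my part (a full-frame sensor and 1:1 magnification), not anything known about Apple’s actual setup.

```python
def macro_dof_mm(f_number: float, coc_mm: float, magnification: float) -> float:
    """Approximate total depth of field (in mm) for close-up photography.

    Standard macro approximation:
        DoF ≈ 2 * c * N * (m + 1) / m**2
    where c is the circle of confusion, N the f-number,
    and m the magnification on the sensor.
    """
    return 2 * coc_mm * f_number * (magnification + 1) / magnification**2

# Illustrative values: full-frame circle of confusion (~0.03 mm),
# f/11, and 1:1 magnification for a tight product close-up.
print(round(macro_dof_mm(11, 0.03, 1.0), 2))  # → 1.32 (mm of sharp depth)
```

Roughly a millimeter of acceptable sharpness at these distances, even stopped down, is why photographers resort to focus stacking for shots like this, and why a render, where everything can simply be in focus, sidesteps the problem entirely.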
Also note that there is zero surface texture in these images. Contrast this to an image of the iPhone 5 at a similar angle. In the iPhone 5 image there is visible texture on the side, and the light’s gradation appears more natural as it moves from highlight to shadow along the band. This doesn’t necessarily mean that the iPhone 5 image is a photograph, but it is a different aesthetic and more natural looking than the iPhone X images.
Even if the iPhone images are actual photographs, or at the very least originate as photographs, their final appearance is decidedly surreal. These images are not meant to convey reality. Does it matter whether the images are “real” or not? Maybe some people would argue that product images should be photographs so that we know what the product actually looks like, that a perfect computer generated image is an unfair trick meant to seduce the unknowing viewer. However, even product images captured in camera are manipulated so much in post-production that I’d be reluctant to claim that photographs somehow represent reality. If we can get philosophical for a moment, this was true even before the age of Photoshop, as photography manipulates scale, idealizes, and transforms reality.
I do know that at some point several years ago, Apple’s product images were actual photographs. Are they now renders? Does it matter? Am I completely wrong in my assessment of these images? Let me know what you think.