Is Computational Photography the Future?

In my recent video and article on declining camera sales, I received several comments claiming that the introduction and advancement of computational photography in smartphones is the main reason a dedicated camera is no longer needed, with the implication that cameras should adopt computational photography and "catch up" to smartphones. This deeply troubles me: computational photography has been around since the beginning of digital photography, and the level of computational photography applied in dedicated cameras today is not lagging behind smartphones. I want to explore the significance of computational photography that has already been integrally adopted in Olympus OM-D cameras. 

The core of digital photography comprises three components: lens, image sensor and digital processing. The lens allows light to enter the camera and focuses it onto the image sensor to form an image. The image sensor translates this light from an analogue signal into a digital one. The digital processing then converts this into the final output, resulting in JPEG files which can be viewed on digital devices and shared on online social media platforms. Computational photography comes into play during the capture of the images as well as during the digital processing stage. Many camera operations rely on software algorithms, and the final optimization that produces that beautiful JPEG file relies heavily on computational photography. 
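As a rough illustration of the processing stage, the sketch below shows how a linear sensor reading might be gamma-encoded and quantized into the 8-bit values stored in a JPEG. This is a simplification of my own, not Olympus's actual pipeline; the `tone_map` function and its gamma value are assumptions for illustration only.

```python
def tone_map(linear, gamma=1 / 2.2):
    """Encode a linear sensor reading (0.0-1.0) as an 8-bit output value.

    Real processing engines apply far more elaborate tone curves; this
    simple gamma curve only illustrates the idea.
    """
    clipped = max(0.0, min(1.0, linear))  # guard against out-of-range readings
    return round((clipped ** gamma) * 255)

# A dark midtone in linear light comes out much brighter after gamma
# encoding, which is roughly why raw sensor data looks too dark unprocessed.
pixels = [0.0, 0.18, 1.0]
encoded = [tone_map(p) for p in pixels]
print(encoded)  # → [0, 117, 255]
```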

How is a smartphone's computational photography better than a camera's? I just do not see it. Much reference was made to the fake bokeh rendering, smart HDR processing and auto-compositing of images done by smartphones with ease and minimal effort to produce amazing results. I argue that these software advancements have been present in cameras for a while now, and there is nothing new in smartphones that comes close to what dedicated cameras can do. Except, perhaps, the artificially rendered fake bokeh; but why would a camera need fake bokeh when it can produce real, more natural-looking bokeh?

Now let's take a look at the processing chip found in Olympus's current OM-D line-up of cameras: the Truepic VIII. Olympus claims that a single Truepic VIII chip contains two quad-core processors, so there are 8 independent processing cores in a camera with a Truepic VIII processor. And the Olympus E-M1X has two Truepic VIII processors, meaning it has a total of 16 cores! Each core is assigned a single, computationally heavy task: one core to compute image stabilization in real time, one core for AF operation, one for smart image processing, one for writing/reading the SD card, one for EVF/Live View display, and so on. Olympus also claims that the processors used in their cameras are more powerful than any mass-market consumer Intel processor (true in January 2019, source here).

So why does Olympus need so much processing power (more than any smartphone) in their cameras? Is the answer not obvious enough? Of course: computational photography. 

Here is a list of instances where computational photography plays a crucial role in modern digital cameras, especially Olympus OM-D cameras:

1) AF operations
In a single-AF operation, at the half-press of the shutter button the camera quickly captures frames at 240 frames per second. These images are not stored on the SD card but in a temporary buffer. The camera's processor analyzes all these frames and, using smart contrast detection, quickly acquires and locks focus. 
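Contrast-detection AF of this kind boils down to scoring each buffered frame for sharpness and locking focus where the score peaks. Here is a minimal sketch; the metric and frame data are invented for illustration and are not Olympus's actual algorithm:

```python
def sharpness(row):
    """Contrast metric: sum of squared differences between neighbouring
    pixels. An in-focus image has harder edges, hence a higher score."""
    return sum((b - a) ** 2 for a, b in zip(row, row[1:]))

def best_focus(frames):
    """Return the index of the buffered frame with the highest contrast score."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))

# One pixel row per buffered frame, captured at different focus positions.
frames = [
    [10, 12, 11, 13],  # defocused: soft, low-contrast edges
    [10, 40, 10, 40],  # in focus: strong edges
    [10, 20, 15, 18],  # slightly defocused
]
print(best_focus(frames))  # → 1
```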
In continuous AF, or 3D tracking, computational photography plays an even more important role: the camera analyzes the pattern of subject movement and applies an adaptive algorithm to predict where the subject will move next, allowing the tracking to work efficiently. None of this is possible without raw computational power, and believe me when I tell you the C-AF or 3D tracking in any top-level Canon, Nikon, Sony or Olympus camera is superior to the best smartphone camera you can find today. 
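The prediction step can be as simple as extrapolating the subject's last observed velocity, as the hypothetical sketch below does; real trackers use far more adaptive models:

```python
def predict_next(positions):
    """Extrapolate the next (x, y) from the last two observed positions,
    assuming the subject keeps its current velocity for one more frame."""
    (x1, y1), (x2, y2) = positions[-2], positions[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

# A subject moving steadily to the lower right across three frames.
track = [(100, 50), (110, 52), (120, 54)]
print(predict_next(track))  # → (130, 56)
```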

2) Composite Modes
There are many composite modes in camera: High Res 50MP Shot, Live Composite, in-camera HDR, Focus Stacking and hand-held multi-shot noise reduction, all taking multiple images in quick succession and merging them with smart real-time analysis and effective processing to accomplish the selected result. Each composite setting requires the camera to perform some computational photography magic, selectively taking parts of several images and merging them into a final composite. Most of these modes can be executed with a single click of the shutter button. Computational photography has been used by cameras to push and break boundaries: to acquire higher resolution images, to achieve better image quality (less noise at high ISO, wider dynamic range than a single shot), to capture more depth of field and to prevent overexposure in long exposure modes. If this is not pure computational photography, I don't know what is. 
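To make the merging step concrete, here is a toy exposure-fusion sketch: for each pixel it keeps the value from whichever exposure is best exposed, taken here as closest to mid-grey. This is my own simplification for illustration, not the algorithm any camera actually implements:

```python
def fuse_exposures(exposures, target=128):
    """Per-pixel fusion of several 8-bit exposures of the same scene:
    keep whichever exposure's value lies closest to mid-grey."""
    return [min(pixel, key=lambda v: abs(v - target)) for pixel in zip(*exposures)]

under = [5, 100, 200]   # underexposed frame: shadows crushed, highlights kept
over = [90, 250, 255]   # overexposed frame: shadows lifted, highlights blown
# Fusion takes the lifted shadow from `over` and the intact highlight
# detail from `under`, extending the usable dynamic range.
print(fuse_exposures([under, over]))  # → [90, 100, 200]
```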

3) 5-Axis IS
How does image stabilization work? The gyroscope detects the movement of camera shake, and the camera uses its computational power to counter these movements, all happening so fast that the image or video is fully stabilized. We know how capable the 5-Axis IS in Olympus cameras is, and then there is 5-Axis Sync IS, where the in-body IS works in sync with the lens IS to further improve the stability of the shot. 
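Conceptually, the stabiliser's job is to apply the opposite of every displacement the gyroscope reports, fast enough that the net motion of the image is zero. The sketch below is deliberately simplified to two axes and invented sample values; real 5-axis IS involves rotation axes and much more sophisticated filtering:

```python
def stabilise(gyro_samples):
    """For each detected shake (dx, dy), return the sensor-shift correction:
    simply move the sensor by the opposite amount."""
    return [(-dx, -dy) for dx, dy in gyro_samples]

samples = [(2, -1), (-3, 0), (1, 2)]  # detected shake per gyro reading
corrections = stabilise(samples)

# After correction, the residual motion of the image is zero on both axes.
residual = [(dx + cx, dy + cy) for (dx, dy), (cx, cy) in zip(samples, corrections)]
print(residual)  # → [(0, 0), (0, 0), (0, 0)]
```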

4) Smart JPEG Processing
Modern digital cameras have advanced JPEG processing that many people take for granted. Images are not uniformly sharpened, and noise reduction is not applied globally. The camera analyzes each image separately and applies variable sharpening and noise reduction depending on the shooting parameters (the aperture used, the ISO number and the lens attached). If the lens is sharp, shooting at optimum aperture and low ISO, in-camera sharpening is lowered and less noise reduction is applied to achieve a more natural look. The camera will also study different areas of the image and apply noise reduction and sharpening selectively. There is a lot of processing happening to optimize a JPEG file in camera: barrel distortion correction, vignetting compensation and chromatic aberration suppression, just to name a few. Also, since Truepic 6, Olympus has used no low pass filter on their image sensors, and they have accordingly advanced the smart moire correction algorithm in their processing engine. Furthermore, there is compensation for diffraction when a narrow aperture is used. All this happens at the click of the shutter button, almost instantaneously, with virtually zero shutter lag. 
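The ISO-dependent part of this could be sketched as a simple rule: as ISO rises, noise reduction goes up and sharpening is backed off so that noise is not amplified. The function and thresholds below are invented for illustration; every manufacturer tunes this differently and far more subtly:

```python
def processing_params(iso, max_iso=6400):
    """Hypothetical tuning rule: noise reduction scales with ISO, and
    sharpening is reduced accordingly (with a floor of 0.2 so some
    sharpening always remains). Returns (sharpening, noise_reduction)."""
    noise_reduction = min(1.0, iso / max_iso)
    sharpening = max(0.2, 1.0 - noise_reduction)
    return round(sharpening, 2), round(noise_reduction, 2)

print(processing_params(200))   # → (0.97, 0.03): clean file, strong sharpening
print(processing_params(6400))  # → (0.2, 1.0): heavy NR, gentle sharpening
```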

Do not get me wrong, I am not bashing smartphone photography; in fact, far from it. I am a firm believer that smartphone photography is the future. However, the claim that the computational photography in cameras is falling behind, and that camera manufacturers should play catch-up, is simply false. 

The problem with smartphone cameras is not the software. I admit the software is improving, there will be progress, and we will see more exciting things happening in computational photography soon. The real limitation to the progress of the digital camera in a smartphone is the actual lens and image sensor used. The multiple camera setup is a good idea, but it is not the ultimate solution for smartphone photography. I would be terrified to think of an iPhone 15 with 15 camera modules at the back of the phone. There is no point having so many cameras, all with similar physical limitations. 

How to improve smartphone cameras? Use a larger sensor, maybe a 1-inch image sensor (like what Panasonic did once), and use larger, higher quality optics to complement the more capable sensor. Combine that with truly powerful software, then we can talk. At this moment, no matter how advanced the computational photography in an iPhone or Samsung camera is, those tiny image sensors and poor lenses still render sub-par images. I tested the Samsung Note 10+ recently and trust me, I am NOT impressed. For a smartphone camera, yes, it is possibly the best on the market now, but compare it with even an entry-level mirrorless camera, say an Olympus PEN E-PL9, and there is still a serious gap. 

Do you still hold firm to the belief that, today, smartphones have more computational photography power than cameras? Share your thoughts!

Please follow me on my social media: Facebook Page, Instagram and YouTube.

Please support me and keep this site alive by purchasing from my affiliate link at B&H. 


  1. The next generation of computational photography: when you take a picture of a landscape, the smartphone searches the Internet databases (Facebook, Instagram, ...) for the best landscape and replaces your shot with that one. Likewise for portraits :) the smartphone searches for the best profile.

  2. As always, it depends. My opinion: the mainstream is "lost" to smartphones (whether the bokeh is "fake" or not is not relevant, since generated blur will probably still look good enough, even if it is the result of electronics rather than optics).
    Photography will probably be left to connoisseurs... but I have also got lazy: for example, I do not take many old-school panoramic photos any more; for the ultimate sake of convenience I now use a small panorama camera (which provides meagre quality in comparison, but is so much more convenient for capturing a full sphere as a photo memory).
    And on the future of imaging and what will be possible with AI: check out ... if you're not true to the real world, you'd probably be able to construct landscape shots at your computer. So in essence photography will become an activity for the experience rather than the results... resonating with your words of "Shutter Therapy".
    Nevertheless, I will keep reading your blog! Best from Germany!

    1. Thanks for sharing your thoughts. Go out and do more shutter therapy!

  3. There are some computational photography features that would be welcome in the "real camera" world. Mainly, those advances made by Google in their "Night Sight" mode for Pixel phones. It is amazing what those phones can do in low light despite having tiny sensors.

    1. I don't find Night Sight that impressive. I tried it, and it requires very slow shutter speeds, which is very risky for hand-held shots.

  4. A very thought provoking post.
    I have a number of good cameras but unless it's a planned photo event (like a birthday party), the only thing I really carry and use is my iphone.
    This is because:
    -I don't have to lug anything around with me
    -image quality is really 'good enough' since people seem to mostly just want to look at photos on their cell phones. No one (at least that I know) wants prints.
    -It's super quick and easy to send people photos as soon as you take them, an ability which seems to be becoming more important to people now.
    -It's very easy to switch to video for a few minutes.
    -The phone camera is fun and super easy to use and you don't have to study a long manual and wade through obscure menus, unlike a lot of much more capable cameras that I could think of.

    I have to think that if enough long time enthusiasts like me start thinking that an iphone is good enough it isn't a good sign for the camera makers.

    1. No worries, most people are happy with what their smartphone cameras can do, hence the declining sales of cameras.

  5. What once happened to Kodak is now happening to most, if not all, camera makers. When images were captured on film, a rather rare commodity compared to the billions of digital images today, there were no alternatives; the only option was film.

    Then came digital imaging, which rendered Kodak, with its purist insistence on film photography, irrelevant and obsolete.

    This is the third wave: images are no longer just stored on memory cards and PCs, but on servers in the cloud, instantly shared with millions, most of whom view them on their 5-7 inch smartphones. Purists who insist on "better" images captured on DSLRs or digital cameras could well meet the same fate as Kodak.

    In our less than perfect world, most of the 7 billion people settle for mediocrity. Smartphone images have improved by leaps and bounds, much like digital images did in the digital-vs-film quality arguments of once upon a time. Unfortunately, camera makers, despite making huge profit margins on the high-end pro equipment they sell to pros and those who demand nothing less than the best, survive on revenue from the rest, who deem mediocre images viewed on smartphones acceptable. Without that low-margin revenue, camera makers will either go into liquidation, M&A or diversify into something else, as Fujifilm (they are still called Fujifilm) did when it moved from film to digital cameras.

    I guess this is no place for a dissertation on photography. But I foresee the larger conglomerates like Sony, Samsung, Panasonic, Canon and maybe Nikon surviving and diversifying their offerings into something other than purely full frame DSLRs. The pure camera companies (like Leica, Minolta, Praktica, Konica, Contax, Yashica, etc.) are ripe for acquisition by ruthless mercenaries, in the dog-eat-dog world we live in, to reduce competition.

    1. Kodak did not just rely on film; printing was also part of their core business. When both film and printing went down, the company could not change fast enough. While smartphones are making fast advancements, it is still not too late for the camera companies. They are still around, and now they have to be clever moving forward.