Street Shooting with Olympus OM-D E-M5 Mark III

With so much heavy talk going on here recently, I decided to slow down a little and enjoy some shutter therapy! Yesterday I joined a photowalk organized by Olympus Malaysia, led by the amazing Syazwan Basri. I used the new Olympus OM-D E-M5 Mark III and made a video showing what happens before each shot as well. It was indeed a fun and enjoyable walk, a much needed dose of shutter therapy on a weekend morning. 


I only brought one camera (the E-M5 Mark III) and two lenses (17mm F1.8 and 12mm F2) along for this session, using the E-M5 Mark III for both video and stills. Knowing that we would be constantly on the move, I left the tripod behind, as it would have slowed me down tremendously. Instead I used the "selfie vlogging" approach, hand-holding the camera in one hand and recording myself as I walked. It is not something I enjoy doing, but it was the best option for this particular outing. The scenes of me talking in the video were shot on the 12mm F2, while all the images from the photowalk were taken with the 17mm F1.8. 

I also shot behind-the-scenes footage for each photograph, quickly switching between the camera's video and photo modes, and narrated my thought process behind each shot: how I executed it and why I chose certain camera settings. I shared tips on creating a starburst effect, using gaps to create foreground blur, finding the right moment, approaching strangers for portraits, and generally being respectful to the people you encounter during a photowalk. I think the last point is the most important one. 

Here are some shots taken from the photowalk! Now tell me, have you had your shutter therapy lately?













Please follow me on my social media: Facebook Page, Instagram and YouTube.

Please support me and keep this site alive by purchasing from my affiliate link at B&H. 

Why I Don't Use Back Button Focusing

Back button focus has been hailed as the best thing since sliced bread, and many who switched to BBF say they have never gone back. That was not the case for me. I acknowledge the advantages of separating the AF function from the shutter release button, and it is a popular, highly regarded photography technique. Nonetheless, back button focusing just does not work for me. My reasons may not apply to all of you, but I would like to explore why I still stay with the shutter button as my do-it-all option. 

For those of you who prefer a video version, here is a short, 10-minute clip of me ranting about back button focus. For the first time in almost half a year, I recorded myself indoors, in the living room. This is not my favourite way of doing a YouTube video, but I admit the convenience was too hard to pass up. 


I am not a wildlife or sports photographer. I totally understand how back button focus can make a huge difference in those shooting scenarios, especially when you don't want to keep refocusing before each shot, which increases the chance of missed focus. Using back button focus to lock focus, waiting for the action to happen, and then immediately pressing the shutter button minimizes the risk of error. This applies to wildlife, for example bird photography, as well as sports where players hold their position and you wait for them to spring into action. However, I shoot events, weddings, portraits and products, and back button focus just does not help in any way. I rarely need to wait for my shots; when I see something happening I normally have to react quickly and release the shutter as fast as possible. 

I find that when I use back button focus, the handling of the camera is compromised. This is true for Olympus cameras; I cannot speak for others. Olympus cameras, especially the OM-Ds, were designed with a prominent thumb rest, allowing comfortable and secure gripping when your thumb sits tightly on the hook. The small size of the camera makes things even more difficult when back button focus is used: moving the thumb away from the thumb rest means that effectively only three fingers (besides the thumb on the BBF button and the index finger on the shutter button) hold the camera, in an awkward manner. The camera then slides into the palm and digs into it, which is very uncomfortable over long hours of shooting. I do full-day shoots (corporate events, weddings) and I need balanced, comfortable and secure handling without worrying about the camera slipping out of my hand. I don't like moving my thumb away from the thumb rest!





I shoot insect macro and portraits a lot, and critical focus is the priority when shooting close up. For insect macro, even 1-2mm of movement away from the focusing plane will throw the subject entirely out of focus. Similarly, when I shoot portraits of strangers on the street, I normally shoot wide open at F1.8 or F1.2, and any movement on the subject's part, even just a few centimeters, can cause softness in my image. No amount of post-processing or photo manipulation can save an out-of-focus image. Having AF assigned to the back button means I must lock AF in one action, then capture the image with another action by pressing the shutter button, which increases the delay between AF lock and shutter release. Using the shutter button for both AF and shutter release shortens that delay significantly, because once AF is acquired I can immediately press the shutter button fully, with almost no lag. This matters less for landscapes, perfectly still subjects or anything far away, but for me the slight delay of back button focus amplifies the chance of a critical focusing error. 
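To put numbers on how thin that focusing plane is, here is a back-of-the-envelope calculation using the standard thin-lens depth-of-field approximation. The specific values (a 17mm lens wide open at F1.2, a 0.5 m subject distance, a 0.015 mm circle of confusion typical for Micro Four Thirds) are my illustrative assumptions, not figures from the text:

```python
def depth_of_field(f, N, c, s):
    """Near/far limits of acceptable focus, thin-lens approximation.

    f: focal length (mm), N: f-number, c: circle of confusion (mm),
    s: subject distance (mm). Returns (near, far, total) in mm."""
    H = f * f / (N * c) + f                  # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s)
    return near, far, far - near

# A close street portrait: 17mm at F1.2, subject 0.5 m away.
near, far, total = depth_of_field(f=17, N=1.2, c=0.015, s=500)
print(f"in focus from {near:.0f} mm to {far:.0f} mm (~{total:.0f} mm deep)")
```

With those assumptions the zone of acceptable focus is only about 3 cm deep, so a subject leaning a few centimeters forward or back really does fall out of focus.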

Finally, during my brief experience with back button focus, my thumb cramped and grew sore after a long day of shooting. This happened enough times that I decided it was definitely not the right technique for me. Besides, most of my subjects are constantly in motion, I am always moving around, nothing stays still, and I need to refocus almost every shot before releasing the shutter. My index finger is already on the shutter button; keeping another finger locked on another button just makes things more complicated and uncomfortable. I will have to press the shutter button to capture the image anyway, so why do I need another button? Doing everything with one button works more efficiently for me. 

I don't think there is any right and wrong technique, as long as you find the most suitable one for your own photography needs. Back button focus certainly is not for me, but if it works for you, then stay with it!

What are your thoughts? Share your experience!


Is Computational Photography the Future?

In my recent video and article on declining camera sales, I received several comments claiming that the introduction and advancement of computational photography in smartphones is the main reason a dedicated camera is no longer needed, with the implication that cameras should adopt computational photography and "catch up" to smartphones. This deeply troubles me: computational photography has been around since the beginning of digital photography, and the level of computational photography applied in dedicated cameras today is not lagging behind smartphones. I want to explore the computational photography that has already been integrated into Olympus OM-D cameras. 


The core of digital photography comprises three components: lens, image sensor and digital processing. The lens allows light to enter the camera and focuses it onto the image sensor to form an image. The image sensor converts the light from an analogue signal into a digital one. The digital processing then converts this into the final output, typically a JPEG file that can be viewed on digital devices and shared on social media platforms. Computational photography comes into play both during the capturing process and during the digital processing stage: many camera operations rely on software algorithms, and the final optimization that produces that beautiful JPEG file relies heavily on computational photography. 

How is a smartphone's computational photography better than a camera's? I just do not see it. Much reference is made to the fake bokeh rendering, smart HDR processing and automatic compositing of images that smartphones perform with ease and minimal effort to produce impressive results. I argue that these software advancements have been present in cameras for a while now, and there is nothing new in smartphones that comes close to what dedicated cameras can do. Except perhaps the artificial bokeh rendering, but why would a camera need fake bokeh when it can produce the real, more natural-looking thing?





Now let's take a look at the processing chip found in the current Olympus OM-D line-up, the TruePic VIII. Olympus claims that a single TruePic VIII chip contains two quad-core processors, so there are 8 independent processing cores in an Olympus camera with a TruePic VIII processor. And you know what? The Olympus E-M1X has two TruePic VIII processors, meaning the E-M1X has a total of 16 cores! Each core is assigned a single, computationally heavy task: one core computes image stabilization in real time, one handles AF operation, one does smart image processing, one writes to and reads from the SD card, one drives the EVF/Live View display, and so on. Olympus also claims that the processor used in their cameras is more powerful than any mass-market consumer Intel processor (true as of January 2019, source here).

So why does Olympus need so much processing power (more than any smartphone) in their cameras? Is the answer not obvious enough? Of course: computational photography. 

Here is a list of instances where computational photography plays a crucial role in modern digital cameras, especially Olympus OM-D cameras:

1) AF operations
In a Single AF operation, at the half-press of the shutter button the camera quickly captures up to 240 frames per second. These frames are not stored on the SD card but held in a temporary buffer; the camera's processor analyzes them and, using smart contrast detection, quickly acquires and locks focus. 
In Continuous AF, or 3D tracking, computational photography plays an even more important role: it analyzes the pattern of subject movement and applies an adaptive algorithm to predict where the subject will move next, allowing the tracking to work efficiently. None of this is possible without raw computational power, and believe me when I tell you that the C-AF or 3D tracking in any top-level Canon, Nikon, Sony or Olympus camera is superior to the best smartphone camera you can find today. 
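As a rough illustration of what contrast-detection AF does with all those buffered frames, here is a toy hill-climbing sketch. Nothing here reflects Olympus' actual firmware; the sharpness function, the focus positions and the step sizes are invented purely for demonstration:

```python
def sharpness(pos, peak=42.0):
    # Toy contrast metric: maximal when the lens is at the true focus position.
    return -(pos - peak) ** 2

def contrast_detect_af(start=0.0, step=4.0):
    """Hill-climb toward maximum contrast, reversing and halving the step
    whenever a move would reduce sharpness (i.e. we overshot the peak)."""
    pos = start
    while abs(step) > 0.1:
        if sharpness(pos + step) > sharpness(pos):
            pos += step           # contrast improving: keep moving
        else:
            step = -step / 2      # overshot the peak: reverse and refine
    return pos

print(contrast_detect_af())  # converges near the peak at 42.0
```

Real cameras evaluate contrast on actual sensor readouts rather than a formula, but the search-and-refine loop is the same basic idea.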

2) Composite Modes
There are many composite modes in camera: the 50MP High Res Shot, Live Composite, in-camera HDR, Focus Stacking and hand-held multi-shot noise reduction. All of them capture multiple images at once and merge them with smart real-time analysis and efficient processing to achieve the desired result. Each composite mode requires the camera to perform some computational photography magic, selectively taking parts of each frame and merging them into a final composite image, and most can be executed with a single click of the shutter button. Computational photography has let cameras push and break boundaries: higher resolution images, better image quality (less high-ISO noise, wider dynamic range than a single shot), greater depth of field, and protection against overexposure in long exposure modes. If this is not pure computational photography, I don't know what is. 
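The noise-reduction side of multi-shot compositing is easy to demonstrate: averaging several noisy exposures of the same scene shrinks random sensor noise by roughly the square root of the frame count. This is a simplified sketch with simulated one-dimensional "frames", not how any camera actually implements it:

```python
import random

random.seed(7)
scene = [100.0] * 1000                      # the true pixel values

def noisy_frame(sigma=8.0):
    # One simulated exposure: the scene plus random sensor noise.
    return [v + random.gauss(0, sigma) for v in scene]

def stack(frames):
    # Average the frames pixel by pixel, as a multi-shot NR mode would.
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

single = noisy_frame()
stacked = stack([noisy_frame() for _ in range(16)])

err = lambda img: sum(abs(p - t) for p, t in zip(img, scene)) / len(scene)
print(f"mean error: single {err(single):.2f}, 16-frame stack {err(stacked):.2f}")
```

With 16 frames the average error drops to roughly a quarter of a single exposure's (√16 = 4), which is why hand-held multi-shot modes can produce such clean high-ISO results.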

3) 5-Axis IS
How does image stabilization work? A gyroscope detects the movement of camera shake, and the camera uses its processing power to counter those movements, all happening so fast that the image or video is fully stabilized. We know how capable the 5-Axis IS in Olympus cameras is, and then there is 5-Axis Sync IS, where the in-body IS works in sync with the lens IS to further improve stability. 
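The basic loop is measure, integrate, counteract. Here is a highly simplified single-axis sketch of that idea; real IBIS handles five axes, sensor fusion and mechanical travel limits, and the focal length and pixel-pitch numbers below are invented for illustration:

```python
def stabilize(gyro_rates, dt=0.001, focal_mm=17.0, px_per_mm=200.0):
    """Turn a stream of gyro angular-rate samples (rad/s) into sensor-shift
    corrections (pixels) that cancel the resulting image motion."""
    angle = 0.0
    corrections = []
    for rate in gyro_rates:
        angle += rate * dt                       # integrate rate -> camera angle
        shift = focal_mm * angle * px_per_mm     # small-angle image shift in px
        corrections.append(-shift)               # move the sensor the other way
    return corrections

# A slow 0.01 rad/s drift sampled at 1 kHz for 10 ms:
print(stabilize([0.01] * 10)[-1])  # final correction, in pixels
```

The point is that each correction must be computed and applied within a fraction of a millisecond, which is why one processor core is dedicated to stabilization alone.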

4) Smart JPEG Processing
Modern digital cameras have advanced JPEG processing that a lot of people take for granted. Images are not uniformly sharpened, and noise reduction is not applied globally. The camera analyzes each image separately and applies variable sharpening and noise reduction depending on the shooting parameters (the aperture used, the ISO value, the lens attached). If the lens is sharp and the shot was taken at the optimum aperture and a low ISO, in-camera sharpening and noise reduction are dialed down to achieve a more natural look. The camera also studies different areas of the image and applies noise reduction and sharpening selectively. A lot of processing goes into optimizing a JPEG file in camera: barrel distortion correction, vignetting compensation and chromatic aberration suppression, to name a few. Since TruePic VI, Olympus has used no low-pass filter on its image sensors, so its processing engine includes an advanced moiré correction algorithm. Furthermore, there is compensation for diffraction when a narrow aperture is used. All of this happens at the click of a shutter button, almost instantaneously, with virtually zero shutter lag. 
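The idea of per-shot rather than global processing can be shown with a toy parameter picker. The thresholds and strengths below are entirely made up and do not reflect any real camera's tuning; they only illustrate the principle of adapting sharpening and noise reduction to the shooting parameters:

```python
def jpeg_params(iso, aperture, lens_is_sharp):
    """Choose per-shot sharpening and noise-reduction strengths (0..1)."""
    nr = min(1.0, iso / 6400)                       # more NR as ISO climbs
    # A sharp lens near its optimum aperture needs little extra sharpening.
    sharpen = 0.3 if lens_is_sharp and 2.0 <= aperture <= 5.6 else 0.7
    return {"noise_reduction": round(nr, 2), "sharpening": sharpen}

print(jpeg_params(iso=200, aperture=2.8, lens_is_sharp=True))
print(jpeg_params(iso=6400, aperture=8.0, lens_is_sharp=False))
```

A real engine would additionally vary these strengths across regions of the frame, as described above, rather than per image only.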

Do not get me wrong, I am not bashing smartphone photography; far from it. I am a firm believer that smartphone photography is the future. But the claim that computational photography in cameras is falling behind and that camera manufacturers should play catch-up is simply false. 



The problem with smartphone cameras is not the software. The software is improving, and we will see more exciting things happening in computational photography soon. The real limitation on the progress of the smartphone camera is the actual lens and image sensor used. The multiple-camera setup is a good idea, but it is not the ultimate solution for smartphone photography. I would be terrified to think of an iPhone 15 with 15 camera modules on the back of the phone. There is no point in having so many cameras, all with similar physical limitations. 

How do we improve smartphone cameras? Use a larger sensor, perhaps a 1-inch image sensor (as Panasonic once did), and pair it with larger, higher quality optics to complement the more capable sensor. Combine that with truly powerful software, and then we can talk. At this moment, no matter how advanced the computational photography in an iPhone or Samsung camera is, the tiny image sensor and mediocre lens still render sub-par images. I tested the Samsung Note 10+ recently and trust me, I am NOT impressed. For a smartphone camera it is possibly the best on the market right now, but compare it with even an entry-level mirrorless camera, say an Olympus PEN E-PL9, and there is still a serious gap. 

Do you still hold firm to the belief that today's smartphones have more computational photography power than cameras? Share your thoughts!
