Why Olympus Is Not Going Full Frame

I have received quite a number of comments and feedback on my YouTube videos as well as my recent articles here, with some of you suggesting that if Olympus is to survive the current camera sales thunderstorm, one of the viable solutions is for them to go full frame. While I am excited by the idea of Olympus making full frame cameras, as an Olympus photographer, ex-employee, and ex-engineer looking at the whole situation through a realistic filter, I strongly believe full frame is not the answer for Olympus at this point in time. I made a video exploring why it is not the best option for Olympus moving forward, and why staying committed to the maturing Micro Four Thirds system is the better strategy. 

Kindly take note that I do not represent Olympus in sharing any of my opinions, and I am not doing this to defend Olympus or with the intention of bashing any other brand. I am a photographer and a camera lover, and there will be no bashing or negativity toward any camera or product in the industry. 


If you are not the video-watching kind of person, here is a very short summary of the points discussed, explaining why Olympus will not consider going full frame. 
1) Maintaining balanced size and weight for both camera and lens combo
2) Larger image sensor is not the solution
3) Entering full frame war is not a good strategy, and is a losing battle
4) Olympus should focus on pushing imaging innovation and technological boundaries


Olympus had a vision when they started the Four Thirds format: they knew that to achieve the optimum size and weight balance for a camera + lens combo, the best choice was the Four Thirds sensor format. They have stayed committed to this format and philosophy all this time, and honestly it is a good strategy, knowing it would be an impossible battle to collide head-on with big players such as Canon and Sony who, it is no secret, have far more funds to burn on R&D and marketing. Olympus strives to provide alternative products that are portable and compact yet deliver professional-grade performance and results, and they have not been complacent in pushing innovation and technological boundaries in their product development. Therefore, instead of fighting a losing battle by entering the full frame market, I strongly believe Micro Four Thirds has a strong footing and is a great alternative, one which honestly is more than sufficient for most photographers who don't shoot in the most extreme conditions. 

I am not saying that I do not want to see improvements from Olympus; quite the opposite. By committing to the Micro Four Thirds format, knowing well the technical restrictions of the smaller image sensor, Olympus will need to work doubly hard to come up with advancements in camera innovation to appeal to mass consumers. They are indeed heading in the right direction with groundbreaking shooting features that greatly benefit real-world photography, such as 5-Axis Image Stabilization, Hand-Held High Res Mode, and the computational deep-learning focus tracking seen in the E-M1X. I hope Olympus will surprise us with more new features and continue to push what the smaller camera system can achieve, and I am sure we will find out sooner rather than later. 

So what are your thoughts? Do you think the only answer is for Olympus to go full frame? Do you think Micro Four Thirds has a place in the camera market in the future? Share your thoughts!



Please follow me on my social media: Facebook Page, Instagram and YouTube

Please support me and keep this site alive by purchasing from my affiliate link at B&H. 

Street Shooting with Olympus OM-D E-M5 Mark III

With so much heavy talk going on here recently, I decided to slow down a little and enjoy some shutter therapy! Yesterday I joined a photowalk organized by Olympus Malaysia, led by the amazing Syazwan Basri. I decided to use the new Olympus OM-D E-M5 Mark III and made a video showing what happens before each shot as well. It was indeed a fun and enjoyable walk, a much needed shutter therapy on a weekend morning. 


I brought only one camera (E-M5 Mark III) and two lenses (17mm F1.8 and 12mm F2) for this session, using the E-M5 Mark III for both video and stills. Knowing that we would be constantly moving, I left the tripod behind, as it would have slowed me down tremendously. Instead I used the "selfie video vlogging" approach, hand-holding the camera in one hand and recording myself as I walked, something I did not like doing, but I guess it was the best option in this particular case. The scenes of myself talking in the video were shot on the 12mm F2, while all the images during the photowalk were taken with the 17mm F1.8. 

I also shot behind-the-scenes footage of each shot, quickly switching between the camera's video and photo modes, and narrated my thought process behind each shot: how I executed it and why I chose certain camera settings. I shared tips on how to create a starburst effect, using gaps to create foreground blur, finding the right moment, approaching strangers for portraits, and generally being respectful to the people you encounter during a photowalk. I think the last point was the most important one. 

Here are some shots taken from the photowalk! Now tell me, have you had your shutter therapy lately?


Why I Don't Use Back Button Focusing

Back button focus has been compared to the invention of sliced bread, and many who went BBF say they never went back. However, that was not the case for me. I acknowledge the advantages of separating the AF function from the shutter release button, and it is a popular, highly regarded photography technique. Nonetheless, back button focusing just does not work for me. My reasons may not be universally applicable to all of you, but I would like to explore why I still stay with the shutter button as my do-it-all option. 

For those of you who prefer a video version, here is a short, 10-minute version of me ranting about back button focus. For the first time in almost half a year, I recorded myself indoors, in the living room. This was certainly not my favourite way of doing a YouTube video, but I admit the convenience was too hard to pass up. 


I am not a wildlife or sports photographer. I totally understand how back button focus can make a huge difference in such shooting scenarios, especially when you don't want to keep refocusing before each shot, which increases the chance of missed focus. Using back button focus to lock focus, waiting for the action to happen, and immediately pressing the shutter button when it does can minimize the risk of error. This applies to wildlife, for example bird shooting, as well as sports where players hold their position before springing into action. However, I shoot events, weddings, portraits, and products, and back button focus just does not help in any way. I rarely find myself waiting for my shots; when I see something happen I normally have to react quickly and release the shutter as fast as possible. 

I find that when I use back button focus, the handling of the camera is compromised. This is true for Olympus cameras; I cannot speak for others. Olympus cameras, especially the OM-D series, were designed with a prominent thumb rest, allowing comfortable and secure gripping when your thumb sits tightly on the hook. The smaller size of the camera makes it even more difficult when back button focus is used: moving the thumb away from the thumb rest means effectively only three other fingers (besides the thumb on the BBF button and the index finger on the shutter button) are holding the camera, in an awkward manner. The camera then slides into the palm and digs into it, which is very uncomfortable over long hours of shooting. I have full-day shoots (corporate events, weddings) and I need balanced, comfortable and secure handling without worrying about the camera slipping out of my hand. I don't like to move my thumb away from the thumb rest!

I shoot insect macro and portraits a lot, and critical focus is the priority when shooting close up. For insect macro, even 1-2mm of movement away from the focusing plane will throw the subject entirely out of focus. Similarly, when I shoot portraits of strangers on the street, I normally shoot wide open at F1.8 or F1.2, and any movement on the subject's part, even just a few centimeters, can cause softness in my image. No amount of post-processing or photo manipulation can save an out-of-focus image. Having AF assigned to the back button means I need to lock AF first in one action before capturing the image with another action, pressing the shutter button. That delay between AF lock and shutter release is longer with back button focus. Using the shutter button for both AF and shutter release improves this significantly, because once AF is acquired I can immediately press the shutter fully, with almost no delay. This is of course less crucial for landscapes, subjects that are perfectly still, or anything far away. For me, back button focus causes a slight delay that amplifies the chance of critical focus error. 

Finally, during my brief experience using back button focus, my thumb suffered cramps and soreness after a long day of shooting. After it happened a few times, I decided it was definitely not the right technique for me. Besides, most of my photography subjects are constantly in motion, I am always moving around, nothing stays still, and I need to refocus all the time before releasing the shutter. The index finger is already always on the shutter button; having another finger locked on another button just makes things more complicated and uncomfortable. I have to press the shutter button to capture the image anyway, so why do I need another button? Doing everything with one button works more efficiently for me, as I need to refocus for almost every shot. 

I don't think there is any right or wrong technique, as long as you find the most suitable one for your own photography needs. Back button focus certainly is not for me, but if it works for you, stay with it!

What are your thoughts? Share your experience!


Is Computational Photography the Future?

In my recent video and article on declining camera sales, I received several comments claiming that the introduction and advancement of computational photography in smartphones is the main reason a dedicated camera is no longer needed, with the implication that cameras should adopt computational photography and "catch up" to smartphones. This is something that deeply troubles me: computational photography has been around since the beginning of digital photography, and the level of computational photography applied in dedicated cameras today is not lagging behind smartphones. I want to explore the computational photography that has already been integrally adopted in Olympus OM-D cameras. 


The core of digital photography comprises three components: lens, image sensor and digital processing. The lens lets light enter the camera and focuses it onto the image sensor to form an image. The image sensor converts the light from an analogue signal into digital data. The digital processing then turns this into the final output, resulting in JPEG files which can be viewed on digital devices and shared on social media platforms. Computational photography comes into play both during the capturing process and during the digital processing stage: many camera operations rely on software algorithms, and the final optimization that produces that beautiful JPEG file relies heavily on computational photography. 

How is a smartphone's computational photography better than a camera's? I just do not see it. Much reference is made to the fake bokeh rendering, smart HDR processing and auto-compositing of images done by smartphones with ease and minimal effort, producing amazing results. I argue that these software advancements have been present in cameras for a while now, and there is nothing new in smartphones that comes close to what dedicated cameras can do. The exception might be artificial bokeh rendering, but why would a camera need fake bokeh when it can produce real, better, more natural-looking bokeh optically?

Now let's take a look at the processing chip found in the current Olympus OM-D line-up: the Truepic VIII. In a single Truepic VIII chip, Olympus claims there are two quad-core processors, so there are 8 independent processing cores in a camera with one Truepic VIII. And you know what? The Olympus E-M1X has two Truepic VIII processors, meaning the E-M1X has a total of 16 cores! Each core is assigned a single, computationally heavy task: one core computes image stabilization in real time, one handles AF operation, one smart image processing, one writing/reading to the SD card, one the EVF/Live View display, and so on. Olympus also claims the processor used in their cameras is more powerful than any mass-market consumer Intel processor (true as of January 2019). 

So why does Olympus need so much processing power (more powerful than any smartphones) in their camera? Is the answer not obvious enough? Of course - computational photography. 

Here is a list of instances where computational photography plays a crucial role in modern digital cameras, especially Olympus OM-D cameras:

1) AF operations
In a single-AF operation, at the half-press of the shutter button the camera quickly captures frames at 240 frames per second; these images are not stored on the SD card but in a temporary buffer. The camera's processor analyzes all these frames quickly, and using smart contrast detection the "computer" rapidly acquires and locks focus. 
In Continuous AF, or 3D tracking, computational photography plays an even more important role: analyzing the pattern of subject movement and applying an adaptive smart algorithm to predict where the subject will move next, allowing the tracking to work efficiently. None of this is possible without raw computational power, and believe me when I tell you the C-AF or 3D tracking in any top-level Canon, Nikon, Sony or Olympus camera is superior to the best smartphone camera you can find today. 
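Contrast-detection AF of this kind is, at its heart, a peak search over focus positions. Here is a minimal Python sketch of the general technique, with a made-up contrast function standing in for real frame analysis; this illustrates the idea only, not Olympus's actual algorithm:

```python
# Toy sketch of contrast-detection autofocus (hypothetical model, not
# Olympus's real implementation): sweep the focus range, measure contrast
# at each step, and lock on the position where contrast peaks.

def contrast_at(focus_pos, true_focus=42):
    """Stand-in for measuring contrast of a captured frame.
    Real cameras compute this from pixel data (e.g. local gradients);
    here contrast simply peaks when focus_pos hits the in-focus plane."""
    return 1.0 / (1.0 + abs(focus_pos - true_focus))

def contrast_detect_af(lo=0, hi=100, coarse_step=8):
    # Coarse sweep: sample contrast across the whole focus range.
    best = max(range(lo, hi + 1, coarse_step), key=contrast_at)
    # Fine sweep around the coarse peak, like the final AF "hunt".
    lo_f, hi_f = max(lo, best - coarse_step), min(hi, best + coarse_step)
    return max(range(lo_f, hi_f + 1), key=contrast_at)

print(contrast_detect_af())  # locks onto the focus position with peak contrast
```

The coarse-then-fine sweep is why a fast buffer of frames matters: the more positions the camera can sample per second, the quicker it converges on the peak.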

2) Composite Modes
There are many composite modes in camera: High Res 50MP Shot, Live Composite, in-camera HDR, Focus Stacking and hand-held multishot noise reduction, all taking multiple images at once and merging them with smart real-time analysis and effective processing to accomplish the desired result. Each composite setting requires the camera to perform some computational photography magic, selectively taking parts of multiple images and merging them into a final composite. Most of these modes can be executed with just one click of the shutter button. Computational photography has been used by cameras to push and break boundaries: to acquire higher resolution images, to achieve better image quality (less noise at high ISO, wider dynamic range than a single shot), to capture more depth of field and to prevent overexposure in long exposure modes. If this is not pure computational photography, I don't know what is. 
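The principle behind multishot noise reduction is easy to demonstrate: random sensor noise differs from frame to frame while the scene does not, so averaging frames keeps the scene and cancels much of the noise. A rough sketch using a toy 1-D "image", not any camera's real pipeline:

```python
import random

# Toy model of multi-shot noise reduction by frame averaging: each simulated
# exposure adds fresh random noise to the same underlying scene, and stacking
# (averaging) the frames pulls the result back toward the true pixel values.

def capture_frame(scene, noise=10.0):
    """Simulate one noisy exposure of a 1-D 'scene' of pixel values."""
    return [p + random.gauss(0, noise) for p in scene]

def stack_frames(frames):
    """Average the frames pixel by pixel into one cleaner image."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(0)  # deterministic noise for a repeatable demo
scene = [100, 150, 200, 50]
stacked = stack_frames([capture_frame(scene) for _ in range(16)])
# Averaged pixels land much closer to the true scene values than any single frame.
print([round(p) for p in stacked])
```

Averaging N frames reduces the random noise amplitude by roughly a factor of the square root of N, which is why these modes visibly clean up high-ISO shots.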

3) 5-Axis IS
How does image stabilization work? The gyroscope detects the movement of camera shake, and the camera uses its computational power to counter those movements, all happening so fast that the image or video is fully stabilized. We know how capable the 5-Axis IS in Olympus cameras is, and then there is 5-Axis Sync IS, where the in-body IS works in sync with the lens IS to further improve the stability of the shot. 
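The control idea, stripped to its bones, is just "apply a counter-movement opposite to the measured shake". This is a deliberately crude model with made-up numbers, nothing like the real 5-Axis control loop, but it shows why the computation has to keep up with the gyro:

```python
# Crude illustration of gyro-driven stabilization (hypothetical numbers, not
# Olympus's real control loop): each measured shake sample is countered by an
# opposite sensor shift; a gain below 1 models an imperfect real-world actuator.

def stabilize(gyro_samples, gain=0.95):
    # Residual motion = measured shake minus the applied counter-movement.
    return [shake - gain * shake for shake in gyro_samples]

shake = [0.8, -1.2, 0.5, -0.3]   # angular shake per sample (arbitrary units)
residual = stabilize(shake)
print(residual)                   # far smaller than the original shake
```

In a real camera this loop runs thousands of times per second across five axes, which is exactly the kind of sustained real-time workload a dedicated core is reserved for.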

4) Smart JPEG Processing
Modern digital cameras have advanced JPEG processing that a lot of people take for granted. Images are not uniformly sharpened, and noise reduction is not applied globally. The camera analyzes each image separately and applies variable sharpening and noise reduction depending on the shooting parameters (aperture used, ISO number and lens attached). If the lens is sharp and you are shooting at its optimum aperture and a lower ISO, in-camera sharpening is lowered and less noise reduction is applied to achieve a more natural look. The camera also studies different areas of the image and applies noise reduction and sharpening selectively. A lot of processing happens to optimize a JPEG file in camera: barrel distortion correction, vignetting compensation and chromatic aberration suppression, just to name a few. Also, since the Truepic VI, Olympus has used no low-pass filter on their image sensors, and has advanced the smart moiré correction algorithm in the processing engine ever since. Furthermore, there is compensation for diffraction when a narrow aperture is used. All this happens at the click of a shutter button, almost instantaneously, with virtually zero shutter lag. 
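This kind of parameter-dependent processing can be sketched as a simple rule table. The thresholds and strengths below are invented purely for illustration; Olympus does not publish its actual tuning:

```python
# Hedged sketch of adaptive JPEG processing (illustrative heuristic, not any
# real camera's engine): sharpening and noise-reduction strengths vary with
# the shooting parameters instead of being fixed global values.

def processing_strengths(iso, aperture, diffraction_limit_f=8.0):
    # Higher ISO -> more noise -> stronger noise reduction, gentler sharpening.
    nr = min(1.0, iso / 6400)
    sharpen = 1.0 - 0.5 * nr
    # Stopping down past the diffraction limit -> extra sharpening to compensate.
    if aperture > diffraction_limit_f:
        sharpen += 0.2
    return round(sharpen, 2), round(nr, 2)

print(processing_strengths(iso=200, aperture=5.6))   # low ISO: light NR
print(processing_strengths(iso=6400, aperture=11))   # high ISO + diffraction
```

The real engine goes much further, varying the treatment region by region within a single frame, but the principle is the same: the output is computed from the shooting conditions, not applied blindly.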

Do not get me wrong, I am not bashing smartphone photography; far from it. I am a firm believer that smartphone photography is the future. However, the claim that computational photography in cameras is falling behind and that camera manufacturers should play catch-up is simply false. 

The problem with smartphone cameras is not the software. I admit the software is improving, and we will see more exciting things happening in computational photography soon. The real limitation to the progress of the camera in a smartphone is the actual lens and image sensor used. The multiple-camera setup is a good idea, but it is not the ultimate solution for smartphone photography. I would be terrified to think of an iPhone 15 with 15 camera modules on the back of the phone. There is no point having so many cameras, all with similar physical limitations. 

How do we improve smartphone cameras? Use a larger sensor, maybe a 1-inch image sensor (like what Panasonic did once), and larger, higher quality optics to complement the more capable sensor. Combine that with truly powerful software, and then we can talk. At this moment, no matter how advanced the computational photography in an iPhone or Samsung camera is, the tiny image sensor and lens still render sub-par images. I tested the Samsung Note 10+ recently and trust me, I am NOT impressed. For a smartphone camera, yes, it is possibly the best on the market now, but compare it with even an entry-level mirrorless camera, say an Olympus PEN E-PL9, and there is still a serious gap. 

Do you still hold firm to the belief that, today, smartphones have more computational photography power than cameras? Share your thoughts!


Why Your Smartphone Photographs Suck and Here is How You Can Fix Them

Smartphone photography is the future, there is no denying that, and declining traditional camera sales are the proof. Almost everyone owns a smartphone, so a camera is in everyone's hands. When it comes to assessing a smartphone camera's quality, all the rage goes to computational/AI advancements, the pixel count, night mode (low light shooting), HDR implementation, the fake bokeh and the smart software handling of the overall shooting process. I admit the smartphone camera is improving drastically, but a lot of people are missing one critical factor that determines the look of a photograph: the choice of lens focal length and the corresponding perspective. With the default lens being a wide angle, and ultra wide angles newly included, not many smartphone users know how to manage wide angle shooting effectively, or how to deal with the big problem that comes with it and can completely destroy a photograph: perspective distortion. 


A wide angle lens, when not used carefully, can cause excessive distortion in an image. This has nothing to do with the quality of the camera or lens; perspective distortion happens with all wide angle use. It is simply the nature of a lens that fits as much as possible within the frame. Perspective distortion shows up as subjects appearing artificially stretched and disproportionate. For example, close-up wide angle portraits will usually elongate or shorten limbs and stretch the shape of the head, making it look bigger than normal in comparison to the body. Also, when shooting buildings while looking up, the verticals are not perfectly straight and it appears as if the buildings are falling backward. 

Example of a wide angle shot with corrected perspective

Wide angle with perspective distortion, uncorrected, straight out of camera

The biggest problem comes when shooting human subjects with a wide angle lens, and since it is the default on most smartphones, you have no choice but to deal with it. When going too close, for example when taking a selfie, most people will try to hide the double chin by holding the camera up high and shooting down from above eye level, and that can cause the forehead to appear wider than usual and the nose larger; done wrong, the face shape might end up not dissimilar to an alien head. 

Alien head selfie

Wide angle lens problem when shooting humans: limbs stretched, looking ugly/unnatural

Disproportionate-looking human subject: head appearing very big in comparison to the body, and shortened legs. 

Left:  28mm equivalent, looking unnatural 
Right: 50mm equivalent, looking more realistic and appealing

Typically, the popular choice among fashion and portrait photographers for flattering portraits is a much longer lens, such as 85mm equivalent or longer. A 70-200mm telephoto is a popular choice for rendering realistic and natural-looking portraits with minimal perspective distortion: no stretched limbs, no weird-looking heads, no ugly disproportionate looks. This perspective distortion is not a problem that can be solved by software or computational/AI photography; it is simply how the images look, and there is no avoiding it if you use a wide angle lens all the time. There are some ways to manage the amount of distortion (we will explore them), but it is crucial to recognize the cause of the problem and how ugly the results can be if care is not taken during the shooting process. 

Some newer smartphone cameras include a "telephoto" lens roughly equivalent to 50mm, which can solve a lot of perspective distortion issues. 

Wide angle lenses were never easy to use to begin with. Besides excessive perspective distortion (which is destructive), a wide angle lens fits EVERYTHING in. Most photographers prefer clean, simple, straight-to-the-point framing so that attention is not diverted from the main subject. When a wide angle is used, you may accidentally include distracting elements in the background, causing a messy composition. 

90mm equivalent lens used to create flattering, proportional look

28mm causes excessive perspective distortion

If you have your portrait taken, would you really want to look like the Antman on the left? Wide angle makes ugly human portraits. 

So how can we fix this problem? Here are a few pointers. 

1) Don't Use Wide Angle If Possible
Not all situations require the use of a wide angle, and sometimes we can get away with longer focal lengths. Consciously using longer focal lengths minimizes the problem of perspective distortion. If your smartphone has a longer lens option, use it; many flagship or higher-tier smartphones have a "medium telephoto" lens for this purpose. I am not saying avoid wide angle or ultra wide angle at all costs, just use them only when necessary, for example to fit as many people as possible into a group photograph when shooting in a tight space. 

2) Center Your Subject
Usually perspective distortion gets worse away from the center of the frame. Avoid placing anything important at the sides, edges and corners of the frame; keep the subject, especially a human, right at the center. And do not tilt the camera unnecessarily or shoot from high or low angles. Try to keep the straight lines straight, both verticals and horizontals. 

3) Don't Shoot Too Close
I know, I know, the famous saying by Robert Capa: if your photos are not good enough, you are not close enough. That is not valid for wide angle photos. If you just step back a little, allowing a bit of negative space around your subject, and then crop in, you will get a much better, more proportionate-looking image. The nearer your wide angle lens is to the subject, the more exaggerated the perspective becomes. 
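That last point is plain pinhole geometry: apparent size scales with 1/distance, so features at different depths get magnified unequally when the camera is close. A quick back-of-envelope check in Python; the ~12 cm nose-to-ear depth is a rough illustrative figure, not a measured one:

```python
# Why closer means more distortion, in one formula (simple pinhole geometry,
# not any camera's firmware): apparent size is proportional to 1/distance,
# so the nose, being closer to the lens, renders larger than the ears.

def feature_size_ratio(camera_to_nose_cm, face_depth_cm=12):
    """How much larger the nose renders than the ears.
    The ~12 cm nose-to-ear depth is a rough illustrative figure."""
    return (camera_to_nose_cm + face_depth_cm) / camera_to_nose_cm

print(round(feature_size_ratio(30), 2))   # selfie distance: nose ~40% larger
print(round(feature_size_ratio(150), 2))  # step back: only ~8% larger
```

Stepping back from 30 cm to 150 cm shrinks the mismatch from roughly 40% to under 10%, which is exactly why "step back and crop in" produces a more proportionate face.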

Keep the subject in the center, and do not shoot too close. This minimizes perspective distortion when shooting with wide angle lens.

Same image as above, but cropped in a little. 

I hope you find this article useful and that you will start to consider minimizing perspective distortion when you shoot with your smartphone camera! Do you have any more tips? Share with me in the comments below.
