Five aspects to consider before saying smartphones are replacing dedicated cameras in photography
- Lee Phan
- Oct 31, 2021
- 10 min read
Since I started my professional photography journey in 2019, I have received quite a lot of questions about whether cameras on smartphones can make photos as great as professional or dedicated cameras do. Over the past decade we have observed vast upgrades to cameras on mobile devices, particularly smartphones: higher image resolution, improved low-light shooting and multiple lenses, just to name a few. We also cannot forget that post-processing software has made a big leap in running time and rendering efficiency, so that the desired output can be produced and visualized quickly after the photos are taken. Accessories for smartphones are also more and more available for various uses, despite their small effective contribution to image quality. Let's look at two examples.
HDR (before this function was integrated into cameras): on cameras we had to take 3 photos at different exposure settings and combine them into a single photo on a computer. On phones, it is an automatic shooting mode in which all the required settings and commands are programmed in advance and executed sequentially with a single button press.
Background blur: the story is similar to HDR. On cameras, we either use a long focal length, a large aperture (shallow DoF), or post-processing in editing software like Photoshop. On phones, a background-detection algorithm is integrated into the processing workflow so that the blurred background is shown in real time, even before the photo is taken. We can change the level of blurriness as well as fine-tune the background contour to get the desired result right on screen.
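To make the bracketing idea concrete, here is a minimal, hypothetical exposure-fusion sketch in Python. The weighting scheme (prefer pixels near mid-grey) and the sample values are invented for illustration; real phone HDR pipelines also align frames and work per color channel.

```python
def fuse_exposures(frames, mid=0.5):
    """Toy exposure fusion: for each pixel, weight each bracketed frame
    by how close its value is to mid-grey, then blend. Frames are
    greyscale images with values in [0, 1]."""
    height, width = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # A tiny epsilon keeps the weights from all being zero.
            weights = [1e-6 + max(0.0, 1 - abs(f[y][x] - mid) / mid)
                       for f in frames]
            total = sum(weights)
            fused[y][x] = sum(w * f[y][x]
                              for w, f in zip(weights, frames)) / total
    return fused

# One pixel crushed in the dark frame, one blown out in the bright frame:
under  = [[0.05, 0.10]]
normal = [[0.40, 0.55]]
over   = [[0.95, 1.00]]
result = fuse_exposures([under, normal, over])
print([[round(v, 2) for v in row] for row in result])  # -> [[0.42, 0.47]]
```

The fused pixel leans toward whichever frame exposed it best, which is exactly what the phone's automatic HDR mode does for you behind one button.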

Figure 1: Camera and smartphone sales by year, from 1933 to 2016, showing the explosive growth of smartphones compared to standalone cameras.
At this point, you may have come to the early conclusion that phones are obviously superior with their endless software support, and that they are going to replace cameras any day now. This statement seems reasonable, but it is not completely fair yet. In this post, I am going to show you the strengths of cameras that even the most powerful and expensive phones on the market still cannot match. Although things are changing quickly, each device type has its own place to serve present and future needs that are becoming more and more diverse and task-oriented. Though I defend cameras here, to address the question I am most often asked about the rise of phones, I will still show the pros and cons of each device so that we can build a thorough understanding.
Context notes:
- Since we care most about photography, the art of producing pleasing photos, within this post's scope we compare cameras on smartphones (briefly, phones) and dedicated/standalone cameras such as DSLRs and mirrorless (briefly, cameras), in terms of how each imaging system performs when creating equivalent photos under the same shooting conditions.
- Because both phones and cameras come in a vast variety of builds from low to high end, we will only compare devices of peer range (i.e. high-end phones vs high-end cameras with respect to imaging build) when discussing a specific feature or task. However, most of the arguments derive from physical design, so the conclusions basically apply to almost all devices on each side.
- Phones are intentionally designed as mini computers to carry out multiple tasks rather than as sole-purpose image-producing devices. Therefore, phones obviously have the advantage in extended applications and post-processing software. However, we focus on the quality aspect, since producing images is the feature the two have in common, and only debate points that affect image quality and the operating experience of achieving certain photography tasks. To understand more about image quality and related matters, you can read my other post: Cameras vs Image quality.
Let’s dive in.
1. Physical Sensor size / pixel density
Phones are generally smaller than cameras, and so are their image sensors. Image sensors are typically 1/2.55" on common phones and up to 1/1.9" - 1/1.33" on high-end phones; however, this is still a lot smaller than on cameras, which start at compact (1/1.7") and go up through APS-C, full-frame and even medium format. Fig. 2 shows the size comparison in one universal unit (area in mm2) on a log scale (steps of 10x), from the biggest medium format to the smallest at 1/6". Sizes in width and height are shown in Fig. 3. For example, full-frame sensors are about 35 times bigger than 1/2.5" sensors; in other words, a 1/2.5" sensor gathers approximately 35 times less light than a full-frame one, the equivalent of about 5 EV (Exposure Value) stops.

Figure 2: Image sensor size in area (mm2) from large format (towards left) to mobile formats (towards right)

Figure 3: Image sensor sizes in width and height dimensions.
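The "35 times" and "5 stops" figures can be checked with quick arithmetic. The sensor dimensions below are nominal values that vary slightly between manufacturers, so treat them as approximations:

```python
import math

# Nominal sensor dimensions in mm (width, height).
FULL_FRAME = (36.0, 24.0)
PHONE_1_2_5 = (5.76, 4.29)  # a common "1/2.5-inch" phone sensor

def area(sensor):
    w, h = sensor
    return w * h

ratio = area(FULL_FRAME) / area(PHONE_1_2_5)
stops = math.log2(ratio)  # each EV stop doubles the gathered light

print(f"area ratio: {ratio:.1f}x")  # -> area ratio: 35.0x
print(f"EV stops:   {stops:.1f}")   # -> EV stops:   5.1
```

Because exposure stops are powers of two, a 35x area ratio is log2(35), roughly 5 stops less light landing on the smaller sensor for the same scene and exposure settings.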
Moreover, the space dedicated to the camera module is always limited and never the only priority, because it has to be shared with other important components such as the CPU, memory, antennas and battery, while keeping the overall design compact. Historically this meant that images on phones had low resolution, which has since improved through higher pixel density and larger image sensors (still not as large as those on cameras). This in turn leads to a new and less welcome fact: phones tend to perform poorly in low-light conditions because the physical size of each pixel is reduced. Shot noise (or noise in general, mostly from heat and electric-charge interference) starts to kick in and degrade quality as the signal-to-noise ratio (SNR) at a pixel falls. Less light reaches the pixel surface, and when the exposure system electronically boosts light sensitivity (raising ISO), the noise is amplified as well and becomes dominant as an unwanted signal. Fig. 4 illustrates the appearance of noise in photos from low to high ISO.

Figure 4: Noise appears more as ISO increases.
However, noise is less noticeable when images are displayed on a small screen or printed at small paper sizes, as discussed in section 4. Besides, even cameras still struggle with noise in low-light conditions. In practice, cameras usually suffer less from noise than phones configured at the same ISO, and only high-end cameras overcome the challenge thoroughly, with sophisticated sensor designs and integrated post-processing software.
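To see why boosting ISO cannot rescue a small pixel, here is a toy Monte-Carlo sketch. The pixel model, photon counts, read noise and gains are all invented for illustration; the point is only that gain amplifies signal and noise alike, while shot noise grows as the square root of the light collected.

```python
import random

random.seed(42)

def pixel_snr(photons_mean, gain, samples=10_000):
    """Toy pixel: shot noise scales as sqrt(photons); gain (ISO boost)
    multiplies signal and noise alike, plus a fixed read-noise floor."""
    read_noise = 2.0  # electrons, an assumed constant for this sketch
    values = []
    for _ in range(samples):
        photons = random.gauss(photons_mean, photons_mean ** 0.5)
        values.append(gain * photons + random.gauss(0, gain * read_noise))
    mean = sum(values) / samples
    var = sum((v - mean) ** 2 for v in values) / samples
    return mean / var ** 0.5

# A big pixel catching 1000 photons vs a small pixel catching 100,
# with the small pixel's gain raised 10x to match output brightness:
snr_big = pixel_snr(1000, gain=1)
snr_small = pixel_snr(100, gain=10)
print(f"SNR, big pixel:   {snr_big:.1f}")
print(f"SNR, small pixel: {snr_small:.1f}")
```

Both pixels end up equally bright, but the small one's SNR is roughly three times worse: the gain preserved the brightness, not the information.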
2. Lenses
Smartphones in the early years (2000s-2010s) had only one lens on the back (Fig. 5). Its fixed focal length ruled out an application that is done easily, and only properly, on cameras: zooming, or tele view. Zooming can still be done on phones; however, the result is not true optical enlargement but pixel interpolation achieved digitally by software. The simplest form of interpolation duplicates nearby pixels to enlarge the whole image, which gives zoom results so poor that the feature is hardly worth using (except when quality is not a priority) and serves mostly as marketing. In contrast, cameras offer plenty of lens choices to suit every photographic need, from macro lenses for capturing the tiny hairs of a fly to tele-zoom lenses for shooting wild animals or sports events from a safe distance. There are also some unusual lenses that go beyond common usage, like the Laowa Macro Probe with its long tube, or tilt-shift lenses that can bend.

Figure 5: Single-camera on smartphones before the era of multiple cameras.
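The pixel-duplication "digital zoom" described above fits in a few lines. This is the crudest nearest-neighbour form; real phones use smarter interpolation, but none of it creates detail that was never captured:

```python
def digital_zoom_2x(pixels):
    """Nearest-neighbour 2x upscale: each pixel is duplicated into a
    2x2 block. The image gets bigger, but no new detail appears."""
    out = []
    for row in pixels:
        wide = [p for p in row for _ in (0, 1)]  # duplicate horizontally
        out.append(wide)
        out.append(list(wide))                   # duplicate the row vertically
    return out

image = [[1, 2],
         [3, 4]]
print(digital_zoom_2x(image))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Compare that with a telephoto lens, which optically magnifies the scene before the sensor samples it, so every output pixel carries genuinely new information.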
In the past 10 years, we have seen a lot of lens upgrades in both quality and quantity. Multiple cameras (lenses) have become the norm to compensate for what was missing (Fig. 6), and optical zoom lenses are already under study. The most common design is a set of 3 lenses: wide-angle, ultra-wide and telephoto. This covers most photo categories, whether shot at close, far or any in-between distance. Each extra lens can be specialized for application-oriented tasks like depth estimation, background blur, or monochrome capture for detail enhancement. One advantage is that you don't have to carry and change lenses, as all are integrated into the phone and ready to use. As a matter of fact, though, integrated lenses are not always comparable in quality and features to camera lenses, despite the existence of attachable/external lenses for phones.

Figure 6: Modern multiple-camera designs.
Furthermore, unlike phone accessories, filters on camera lenses (Fig. 7) are much more effective at enhancing image quality: a CPL (circular polarizer) helps reduce glare from reflective surfaces, and an ND (neutral-density) filter reduces or modifies the intensity of all wavelengths or colors of incoming light. Despite emerging AI software solutions, these modifications at the source give analog effects that we cannot completely replicate in digital post-processing.

Figure 7: Filters on cameras.
3. Image Stabilization (IS)
Because of their compact design, phones do not treat optical image stabilization (OIS) as a must-have feature. On cameras, in contrast, this feature contributes greatly to the sharpness of output images; or, where sharpness is already secured, stabilization opens up a wider range of parameters, allowing a slower shutter speed (beneficial in low light), a smaller aperture (greater DoF) and/or a lower ISO (less noise). Whether implemented in the body, in the lens, or in both, the IS system reduces camera shake caused by slight movements of the hands or body, or by external sources such as wind. It continuously measures the physical angular and shifting movements of the camera and converts them into counter-movements of lens elements or the sensor to compensate, as shown in Fig. 8, and as a result a sharper photo is obtained (Fig. 9).
There also exists Electronic (or Digital) Image Stabilization (EIS), which works toward the same goal as OIS but without any moving components. Sharpness is obtained by cropping (at reduced resolution) or by sharpness-enhancement algorithms that take a set of frames before and after shooting to detect blur and recover details for the final output image. Both phones and cameras can benefit from EIS; however, it is mostly used for smoothing videos rather than sharpening photos.
Modern IS systems offer 3 - 5 extra stops of EV. In theory this means that if, for example, you were limited to shooting at 1/250 of a second, with IS on you can shoot the same exposure at a slower speed, from 1/30 down to 1/8 (other exposure parameters unchanged), without losing sharpness.
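The stop arithmetic is just repeated doubling of the usable exposure time, as this small sketch shows (the 1/250 handheld limit is the example value from above):

```python
def slowest_handheld_shutter(base_speed_s, is_stops):
    """Each stop of stabilization doubles the usable exposure time
    while keeping the same handheld sharpness."""
    return base_speed_s * 2 ** is_stops

base = 1 / 250  # handheld limit without stabilization, in seconds
for stops in (3, 4, 5):
    t = slowest_handheld_shutter(base, stops)
    # Photographers round these to nominal scale values: 1/30, 1/15, 1/8.
    print(f"{stops} stops -> 1/{1 / t:.0f} s")
```

The exact results (1/31.25, 1/15.6, 1/7.8) land on the nominal shutter-speed scale as 1/30, 1/15 and 1/8, matching the range quoted above.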


Figure 8: Illustration of OIS system on lens (optical components) and in body (image sensor).
Moreover, the shape and weight of the device significantly support the IS goal. On cameras, external lenses give users an extra hold point to grip the camera more firmly, while the heavier weight provides steadier shooting thanks to greater inertia, which also helps in the presence of external forces (wind). On phones, obviously, we need to spend more effort to achieve the same result.

Figure 9: An example of OIS maintaining sharpness in constrained night shooting: slow shutter speed and hand shake.
4. Where photos are displayed (visualization)
How big a photo is displayed affects the way we evaluate its visual quality. Under the fair assumption that the displayed photo has a resolution equal to or greater than the screen's, so that no pixel interpolation is performed, small displays tend to hide small-scale flaws like image noise, compression artifacts and slight blurriness. Therefore, photos captured by phones and displayed on phones, or on other similarly small screens and prints, will show far less noticeable imperfection than they would at bigger screen or print sizes. Photo size is adapted to many applications: on the web and social networks, small photos are preferred for easy and quick sharing, whereas magazines, posters and advertising materials require larger photos of high quality to achieve the best possible visual impact.
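At a fixed pixel density, a photo's maximum 1:1 display or print size follows directly from its resolution. A quick sketch, where the 4000 x 3000 resolution of a 12 MP photo is an assumed example:

```python
def max_display_size_inches(width_px, height_px, ppi):
    """Largest display/print size at a given pixel density before
    pixels have to be interpolated (the 1:1 condition)."""
    return width_px / ppi, height_px / ppi

# A 12 MP photo at common pixel densities:
for ppi in (300, 150, 72):  # fine print, casual print, old monitor
    w, h = max_display_size_inches(4000, 3000, ppi)
    print(f"{ppi:>3} ppi: {w:.1f} x {h:.1f} inches")
```

The same file that makes a crisp 13 x 10 inch fine print can paper a 55-inch diagonal at low density, and it is at those larger sizes that noise and blur become obvious.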


Figure 10: Photos displayed on phones and in prints.
To check the real size of a displayed photo, that is, 1:1 or 100% zoom, where 1 pixel of the photo is represented by 1 pixel of the screen, you can activate "View actual size" in the right-click menu (tested on Windows 10, in the system Photos app). By default, when opened, the photo is zoomed out to fit the display window if it is larger than the window (Fig. 11). On phones, I have not yet found an app or an option in the default gallery app for this purpose. I'll update this later; if you know one, please let me know.



Figure 11: Fit zoom (<100%), 100% zoom and 500% magnification to observe details.
5. How the device is operated (control)
This section discusses the practical design of the devices that lets us perform photographic tasks, with the goal of taking the same photo under the same conditions on phones and on cameras. There is already a published article comparing compact and DSLR cameras, which is quite related to this section and mentions some points about phones. You can find it here: What camera type to start photography with?
Phones are definitely more convenient for carrying around and taking quick shots. Together with auto mode (also found on mid-range and lower cameras), instant filters and post-processing, capturing moments and publishing photos could never be easier or faster. However, when conditions get tough, such as realizing a complex idea, shooting in low light, or aiming at distant subjects, cameras undoubtedly show their superiority. That is when you need full control of the imaging system. Cameras let you change exposure settings (shutter speed, aperture & ISO) at the fingertips of one hand and the focus point with a ring around the lens in the other (a recent technology, Eye Control AF, even lets you set focus with your eye as you look through the viewfinder). The control layout is designed so that we can change settings quickly without taking our eyes off the frame, staying concentrated on what the subject is doing. On phones, despite the convenient touch interface, you need more steps to control everything at once: switching between settings menus, sliding a fingertip along value bars, tapping the screen to set the focus point and exposure compensation, and so on (Fig. 12).

Figure 12: Manual camera control on phones.
Conclusion
We cannot deny that phone photography is becoming more and more popular, especially for capturing moments and quick sharing. The mass production of smartphones brings photography to more people than ever, letting everyone enjoy making and editing photos in their own way at any time, whether they are photo lovers, amateurs or professionals. This is changing the way we think about modern photography and how it works in the industry and in daily life. Yet dedicated cameras still hold irreplaceable roles, in making fine art and in conquering extreme shooting conditions (mostly of lighting and environment). They are a guarantee of high-quality production and of not missing the chance to make an outstanding photo.
In short, nothing is perfect. You need to understand the properties of each device type to find the one that suits you most. It could even be both: in my case, phones are for occasional shooting (in both JPEG and RAW formats) and cameras for dedicated purposes like outdoor trips, studio set-ups or events. I also enjoy modern features on cameras, like quick photo transfer to phones over Wi-Fi/Bluetooth and the display of over-exposed or in-focus regions in the electronic viewfinder/live view, all of which facilitate the photography workflow in this new era of machine computing.
--
Photo credit
Figure 1 - CIPA, http://www.cipa.jp/stats/documents/e/d-201701_e.pdf
Figure 2 - Wikipedia, https://en.m.wikipedia.org/wiki/Image_sensor_format
Figure 3 - Tom Dempsey
Figure 4 - Andrew Schär
Figures 5, 6, 7 - Internet
Figure 8 - Nasim Mansurov and Deepak Singh
Figures 9, 10 - Internet