Table of contents:
- The first camera phones
- Dawn of Instagram
- More cameras, good and different
- Increasing characteristics
- What's next
2023 Author: Malcolm Clapton | [email protected]. Last modified: 2023-05-22 06:26
A brief tour through the history of mobile photography.
The smartphone camera has become an integral part of everyday life: it lets you capture an important moment at any time and share it with others. Getting there, however, took two decades of technical progress, a reshuffling of the photographic equipment market, and many innovations. Here is a look back at how mobile photography entered our daily lives and which companies made it simple and accessible.
The first camera phones
A camera first appeared in a phone in 1999, when the Japanese company Kyocera released the VP-210, a model that supported video calls. The camera sat on the front and captured the owner's face at 2 frames per second. It could also take 0.11-megapixel selfies and store up to 20 of them in the device's memory.
In the following years mobile cameras developed rapidly under competitive pressure, and by 2004 the 1-megapixel milestone had been reached. Then, in 2005, the market was shaken by two models that can be called the first true camera phones: the Nokia N90 and the Sony Ericsson K750i. Both sported 2-megapixel autofocus cameras and captured sharp pictures rather than blurry abstractions. That is when users' attitude toward mobile photography began to change: themed groups appeared on Flickr, and people started sharing and discussing pictures taken on their phones.
With each passing year, the number of people shooting on their phones grew rapidly. The release of the iPhone in 2007 changed attitudes toward single-purpose devices: smartphones began to replace MP3 players, and then amateur photo and video cameras.
Dawn of Instagram
The collapse of the compact-camera market began in 2010 with the launch of Instagram. Users wanted to get an attractive photo as easily and quickly as possible and post it on social networks.
At the same time, the quality of mobile cameras kept improving. The iPhone 4s, introduced in 2011, received an 8-megapixel camera and light-sensitive optics with an f/2.4 aperture. These specifications covered most needs: press a button, get a bright shot, upload it to Instagram.
Over time, image processing in smartphones became more aggressive: contrast, saturation, and edge sharpness took priority, while natural-looking rendering faded into the background. But there were also attempts to bring professional technology to mobile cameras: in 2012 Nokia released the 808 PureView camera phone.
The model boasted characteristics that were phenomenal for its time. The camera resolution was 41 megapixels, and the sensor measured 1/1.2″. It was also equipped with a mechanical shutter, a built-in ND filter, a Carl Zeiss lens with an f/2.4 aperture, and a xenon flash.
Unfortunately, other manufacturers were in no hurry to follow Nokia's example, relying instead on filters and other embellishments.
More cameras, good and different
At some point, companies decided to increase the number of cameras in smartphones. As early as 2011, the HTC Evo 3D and LG Optimus 3D were released, each using two lenses to create stereoscopic photographs. The technology found little demand, however, and manufacturers shelved such experiments for several years.
In the spring of 2014 the HTC One M8 hit the market. The smartphone received an auxiliary module for measuring depth and separating the subject from the background; the company thus implemented a portrait mode two years before Apple did.
The real boom came in 2016, when the largest manufacturers presented their solutions. At the same time, there was no consensus on why a smartphone needed two cameras. Huawei, for example, promoted monochrome photography with the P9, which it co-developed with Leica. The LG G5 bet on an ultra-wide-angle lens, while Apple introduced a telephoto lens for portrait photography and optical zoom in the iPhone 7 Plus.
As it turned out, two cameras were not the limit. Today almost every smartphone on the market carries three lenses with different focal lengths, plus dedicated cameras for macro photography and depth measurement.
Increasing characteristics
The quality of mobile cameras has always been constrained by physics: thin bodies left no room for high-quality optics and large sensors. Users demanded improvements nonetheless, and companies tried to meet those demands.
That is how we ended up with cameras protruding a few millimeters from the body. The physical size of the sensors has grown too: five years ago they hovered around 1/3″, while today the Samsung Galaxy S20 Ultra and Huawei P40 ship with 1/1.3″ sensors. The sensor area has grown several times over, which has noticeably improved photo quality.
The larger sensor area also made it possible to raise the resolution. 48 MP and 64 MP mobile cameras have become the norm, while Samsung and Xiaomi have already reached the 108 MP milestone. Files at such resolutions are unwieldy, however, so engineers resorted to a trick known as pixel binning: information from neighboring pixels is combined. This lowers the output resolution, but in return we get less noise and a wider dynamic range.
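The idea behind pixel binning can be sketched in a few lines of NumPy. This is a simplified illustration, not any vendor's actual pipeline (real sensors bin in hardware, often in 3x3 groups on 108 MP chips, and work on raw Bayer data): each 2x2 block of pixels is averaged into one output pixel, quartering the resolution while suppressing random noise.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average every 2x2 block of a single-channel sensor readout.

    Averaging 4 pixels cuts random noise roughly in half,
    since SNR scales with the square root of the pixel count.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A tiny 4x4 "sensor" readout for demonstration
raw = np.array([
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
], dtype=float)

binned = bin_2x2(raw)
print(binned)  # → [[13. 23.] [33. 43.]]
```

The reshape trick groups each 2x2 neighborhood into its own axes so a single `mean` call collapses them, which is why a 108 MP readout can be reduced to a manageable 27 MP (or 12 MP with 3x3 binning) image.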
What's next
All of these innovations have made smartphones a worthy replacement for digital point-and-shoot cameras. Still, there is room to grow, and even if the physical characteristics hit a ceiling, software will come to the rescue.
Computational photography is now gaining momentum: the camera takes a burst of images, and neural networks assemble a single improved frame from them, suppressing noise, equalizing brightness, and correcting color. The approach is used in the Google Pixel 4, iPhone 11, Huawei P40, and many other smartphones. Processing happens automatically and invisibly to the user, who sees only the result.
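The core noise-suppression principle behind burst photography can be shown with a toy sketch. This is an assumption-laden simplification: real pipelines such as Google's HDR+ or Apple's Deep Fusion also align frames and use learned merging, but even plain averaging of a synthetic noisy burst demonstrates why multiple exposures beat one, since averaging N frames reduces random noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat gray "true" scene, plus a burst of 8 frames with Gaussian sensor noise
scene = np.full((64, 64), 128.0)
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(8)]

# Merge the burst by simple per-pixel averaging
merged = np.mean(burst, axis=0)

# Compare residual noise in a single frame vs the merged result
noise_single = np.std(burst[0] - scene)
noise_merged = np.std(merged - scene)
print(noise_single, noise_merged)  # merged noise is ~sqrt(8) ≈ 2.8x lower
```

In production systems the "merge" step is far more elaborate (handling motion, ghosting, and varying exposures), but the statistical payoff illustrated here is the same.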
As performance increases, camera capabilities broaden. Smartphones can already process video in real time: blurring the background or turning it black and white while keeping subjects in color. Augmented reality is developing as well: Apple has already equipped the iPad Pro with a LiDAR sensor for AR applications, and the technology is expected to appear in the iPhone soon.
Mobile cameras are becoming a combined hardware-and-software system whose capabilities we do not yet fully grasp. That makes it all the more interesting to follow the latest developments in this area and test them for yourself.