A leaker has provided further details about the Vivo X100 Pro+ or Vivo X100 Ultra camera flagship, which is expected to launch in 2024. Based on a prototype, the triple camera with 4.3x optical zoom appears to be quite versatile and can apparently reach ridiculous-sounding digital zoom levels.
Pixel binning is the reason.
The caveat is that the software used to process all that data needs to be good.
Well, I fully agree with this article. There is one other good use of binning/supersampling though, and that is better chroma resolution relative to luma.
But even that won’t do much, with all the other shortcomings already present.
As the article says, it’s marketing. If groups of 4 pixels are binned into 1, a 200 MP sensor is really a 50 MP sensor.
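To make the arithmetic concrete, here’s a minimal numpy sketch of 2x2 binning. The function name and the array size are just illustrative, and it ignores the color filter array (on a real quad-Bayer chip the four binned photosites sit under the same color filter):

    import numpy as np

    def bin_2x2(raw: np.ndarray) -> np.ndarray:
        """Average each 2x2 block of photosites into one output pixel."""
        h, w = raw.shape
        return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    # Tiny stand-in for a raw frame (a real 200 MP frame is roughly 16k x 12k photosites).
    raw = np.random.poisson(lam=100.0, size=(8, 8)).astype(np.float32)
    binned = bin_2x2(raw)

    print(raw.shape, "->", binned.shape)  # (8, 8) -> (4, 4): 4x fewer output pixels
    # Same ratio at full scale: ~200 MP of photosites -> ~50 MP image.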
Yes, many of these phones won’t give you 200 MP images by default (only under a specific mode like full-resolution RAW), so you’re usually getting something more reasonable.
Pixel binning can help with low light (binning two neighboring pixels effectively doubles the light per output pixel, and a 2x2 bin quadruples it; a rough simulation of the noise benefit is sketched below), or it can help extend the telephoto range, or it can pull out details that would be harder to get with fewer MP.
Most would probably argue that it’s better to have this option than not.
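To put a rough number on the low-light benefit, here’s a toy shot-noise simulation. It ignores read noise and the color filter array, and the photon counts are made up, so treat it as a sketch rather than a measurement:

    import numpy as np

    rng = np.random.default_rng(0)

    # Flat grey scene in dim light: each photosite catches ~25 photons on average,
    # so shot noise is clearly visible.
    photons = rng.poisson(lam=25.0, size=(1000, 1000)).astype(np.float32)

    # 2x2 binning: each output pixel averages four photosites (i.e. sums 4x the light).
    binned = photons.reshape(500, 2, 500, 2).mean(axis=(1, 3))

    # Relative noise (std / mean) roughly halves: averaging 4 samples gives ~2x SNR.
    print(photons.std() / photons.mean())  # ~0.20
    print(binned.std() / binned.mean())    # ~0.10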
I think diffraction effects already show up on 50 MP cameras with much larger sensors, so the tiny photosites on phone sensors would be hit even harder. ( https://blog.kasson.com/the-last-word/diffraction-and-sensors/)
In this case, adding more pixels only slows down the camera without improving the picture.
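A quick back-of-the-envelope check, assuming green light, an f/1.8 aperture, and the roughly 0.6 µm photosites typical of 200 MP phone sensors (none of these are confirmed specs for the Vivo, just ballpark numbers):

    # Airy disk diameter (first minimum) = 2.44 * wavelength * f-number
    wavelength_um = 0.55   # green light
    f_number = 1.8         # typical flagship main-camera aperture (assumed)
    pixel_pitch_um = 0.6   # ballpark photosite size on 200 MP phone sensors (assumed)

    airy_diameter_um = 2.44 * wavelength_um * f_number
    print(airy_diameter_um)                    # ~2.4 um
    print(airy_diameter_um / pixel_pitch_um)   # ~4 photosites wide

So even a perfect f/1.8 lens smears a point of light across a patch several photosites wide; beyond that, extra pixels mostly resample the same blur.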
Let me preface this by admitting that I’m not a camera expert. That being said, some of the claims made in this article don’t make sense to me.
A sensor effectively measures the sum of the light that hits each photosite over a period of time. Assuming a correct signal gain (ISO) is applied, this in effect becomes the arithmetic mean of the light that hits each photosite.
When you split each photosite into four, you have more options. If you simply take the average of the four photosites, the result should in theory be equivalent to the original sensor. However, you could also exploit certain known characteristics of the image as well as the noise to produce an arguably better image, such as by discarding outlier samples or by using a weighted average based on some expectation of the pixel value.
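Here’s a minimal toy example of both options: plain averaging versus discarding the outlier sample in each group of four. It’s purely illustrative, not how any vendor’s pipeline actually works:

    import numpy as np

    def bin_mean(quads: np.ndarray) -> np.ndarray:
        """Plain 4-to-1 binning: average the four sub-photosites."""
        return quads.mean(axis=-1)

    def bin_trimmed(quads: np.ndarray) -> np.ndarray:
        """Drop the sample farthest from each quad's median, average the rest."""
        med = np.median(quads, axis=-1, keepdims=True)
        worst = np.abs(quads - med).argmax(axis=-1)
        keep = np.ones_like(quads, dtype=bool)
        np.put_along_axis(keep, worst[..., None], False, axis=-1)
        return (quads * keep).sum(axis=-1) / 3.0

    # Each row is one output pixel's four sub-photosites; the second has a hot pixel.
    quads = np.array([[100.0, 102.0,  98.0, 101.0],
                      [ 50.0,  52.0,  49.0, 255.0]])
    print(bin_mean(quads))     # [100.25, 101.5 ]  <- hot pixel drags the average up
    print(bin_trimmed(quads))  # [101.0 ,  50.33]  <- outlier discarded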
Yes, that is one use case for pixel binning. Apple uses it to reduce noise in low-light photos, but it can also improve telephoto shots, where the extra data from neighboring pixels yields cleaner results.