Got any source for that? All the ones I can find say it's a physical 23 Mpix sensor.
Well OK, I was mistaken there: the sensor is the IMX300, which really is physically a 5520×4140 pixel array, which works out to about 22.85 Mpix in total.
The other question is whether such a device makes sense as an image sensor at all, since the surface area is so puny: 6.05 mm × 5.03 mm.
If you assume the whole surface is crammed edge-to-edge with as many pixels as you can stuff in, the pixel size is 1.09 µm × 1.21 µm, which makes a pixel area of about 1.33 µm², so tiny it is of absolutely no use to anybody seriously contemplating photography.
Add to that the miniature optics the light has to pass through on its way to the sensor and you see the whole thing just cannot work, due to the laws of physics.
In my Canon DSLR from the last decade the pixel area is 55 µm², which is more than 40 times larger than on the IMX300.
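As a back-of-envelope sanity check of those figures (the dimensions are the ones quoted in this thread, not taken from a datasheet), a few lines of Python reproduce the pitch, the pixel area and the DSLR ratio:

```python
# Back-of-envelope check of the numbers above (values from this thread, not a datasheet).
SENSOR_W_MM, SENSOR_H_MM = 6.05, 5.03      # claimed IMX300 active area
RES_W, RES_H = 5520, 4140                  # IMX300 pixel array

pixels_total = RES_W * RES_H               # ~22.85 Mpix
pitch_w_um = SENSOR_W_MM * 1000 / RES_W    # ~1.10 um
pitch_h_um = SENSOR_H_MM * 1000 / RES_H    # ~1.21 um
pixel_area_um2 = pitch_w_um * pitch_h_um   # ~1.33 um^2

dslr_pixel_area_um2 = 55.0                 # the Canon DSLR figure quoted above
ratio = dslr_pixel_area_um2 / pixel_area_um2

print(f"{pixels_total / 1e6:.2f} Mpix, pitch {pitch_w_um:.2f} x {pitch_h_um:.2f} um")
print(f"pixel area {pixel_area_um2:.2f} um^2, DSLR pixel is {ratio:.0f}x larger")
```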
There are fundamental laws of physics in play here; you cannot get around them with better materials and manufacturing.
Due to the properties of reality you just cannot cram more than a certain number of photons into a given-sized pixel in a given time; the wave function just doesn't fold that way.
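As a rough illustration of what that means in practice, under the standard assumption that photon arrival is Poisson-distributed: signal-to-noise only grows with the square root of the photons collected, so a pixel with ~40× the collecting area only gains ~6.4× in shot-noise SNR, and the tiny pixel is stuck that much lower. The electron counts below are assumed round numbers, not datasheet values:

```python
import math

# Toy shot-noise illustration (Poisson statistics): SNR = N / sqrt(N) = sqrt(N).
def shot_noise_snr_db(photons):
    return 20 * math.log10(math.sqrt(photons))

tiny_pixel_electrons = 4000                        # assumed full well for a ~1.1 um pixel
big_pixel_electrons = 40 * tiny_pixel_electrons    # ~40x the area -> ~40x the photons

for name, n in [("1.3 um^2 pixel", tiny_pixel_electrons),
                ("55 um^2 pixel", big_pixel_electrons)]:
    print(f"{name}: {n} e-, shot-noise SNR ~ {math.sqrt(n):.0f}:1 "
          f"({shot_noise_snr_db(n):.1f} dB)")
```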
I found it interesting to read how living organisms cope with the same problem. Some of them have several optical sensors (cells) in their eyes in series (physically on top of each other), so that any photon escaping capture has a chance to be detected in the next layer. But you very quickly run into the law of diminishing returns.
Actions from past meetings: Xperia X: Camera autofocus or picture sharpness seems non-optimal (sledges, 09:14:13)
https://together.jolla.com/question/...s-non-optimal/ (sledges, 09:15:13)
we think that the issue should go away or at least be easier traceable/addressed when BSP is updated to Android 7 (sledges, 09:27:10)
as usual, no promises when that will happen, but we are working on aosp7 (sledges, 09:27:41)
Living organisms cope with that easily; they can throw superior processing at it.
Think about the human eye, for example; its specifications are worse than any webcam you can find.
The mechanics are really poor: it is irregularly shaped and optically inadequate, and the focusing method is squeezing the thing out of shape!
The sensor part is non-uniform and, to top it off, the databus runs on the wrong side of the sensor elements!
The resolution is poor, the color sensing is poor, the bandwidth is poor.
The whole system runs on a glucose ATP/ADP burner, and electricity is used only partially, as signals travel by potassium-ion transfer!
etc... etc...
The only way to make it work at all is to train the most complex neural network in the universe to use it for several years.
Sony is well known for heavy postprocessing (just search for "star eater"). Without it (it's probably patented, DRMed and who knows what else), reducing the resolution seems the simplest way to curb noise.
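To make the "reduced resolution curbs noise" point concrete, here is a toy sketch (my own illustration, not Sony's actual pipeline): averaging 2×2 blocks of independent noisy pixels halves the noise standard deviation, at the cost of three quarters of the resolution:

```python
import numpy as np

# Toy illustration of why downscaling curbs noise (not Sony's actual pipeline):
# averaging a 2x2 block of independent noisy pixels halves the noise std-dev.
rng = np.random.default_rng(0)

signal = 100.0                                            # flat grey test "scene"
noisy = signal + rng.normal(0, 10, size=(1000, 1000))     # sigma = 10 per pixel

# 2x2 binning: reshape into blocks and average them.
binned = noisy.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(f"full-res noise std:   {noisy.std():.2f}")    # ~10
print(f"2x2-binned noise std: {binned.std():.2f}")   # ~5, i.e. improved by sqrt(4) = 2
```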