
Understanding Pixel: The Sensitivity of an Image Sensor



Google Photos may have made you feel nostalgic when it surfaced beautiful memories from your past. Those pictures reached you by way of a digital camera, and we all like to look back at them and reflect.

What is a Pixel? Which illumination architecture best enhances the sensitivity of a pixel?

Digital cameras have changed the way we look at the world. Besides reasonable size and weight, they offer much less lag time (less than a second between taking a picture and seeing it on the LCD), so you can shoot images as quickly as your fingers will allow.

Dive deeper into the science of pixel sensitivity with the Falcon-233CRS, designed for unmatched pixel sensitivity.


What are the factors to consider when choosing a camera for personal or business purposes?

Digital cameras on the market today are more technologically advanced, sharper, and more responsive than their predecessors. They allow for greater flexibility in capture. Several opportunities have also been created for users – not just photographers, but also other industries – to share, use, and monetize their images.

Choosing a suitable camera for your business or buying a reliable camera for your personal use takes more than just an eye for photography. The field of digital photography includes technical and artistic terms such as aperture, depth of field, dynamic range, exposure triangle, focal length, ISO, and many others.

Early cameras used photographic film to capture images, but today’s digital cameras use optoelectronic sensors. Image sensors perform several processes, and their technical layout determines their quality and appearance.

What is a Pixel? 

A pixel is a single point in an image. Images are made up of pixels, which are tiny dots that form a complete image when viewed from a distance. The term “pixel” is a combination of the words “picture” and “element.”

Pixels are the basic building blocks of a digital image or display and are arranged using geometric coordinates. The pixel resolution also determines display quality; more pixels per inch of monitor screen yields better image results. On an image sensor, the pixel is likewise the smallest element that converts light energy (photons) into electrons, which are subsequently digitized on the sensor to produce an image.

The physical size of a pixel varies with the resolution of the display. On a sensor, the size of each pixel determines how many photons it can collect during the time it is exposed to light: the larger the pixel, the more light it accepts, resulting in a better image. On a display, a pixel will equal the size of the dot pitch if the display is set to its maximum resolution, and will be larger at lower resolutions, since each pixel then uses more dots.
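
As a rough illustration (assuming identical illumination, exposure time, and fill factor, which is a simplification), the light a pixel collects scales with its area, i.e., with the square of the pixel pitch. A minimal Python sketch with hypothetical pixel pitches:

# Illustrative only: photon collection scales with pixel area (pitch squared),
# assuming identical illumination, exposure time, and fill factor.
def relative_light(pitch_um, reference_um=1.1):
    return (pitch_um / reference_um) ** 2

for pitch in (1.1, 2.2, 3.45):
    print(f"A {pitch} um pixel collects ~{relative_light(pitch):.1f}x the light of a 1.1 um pixel")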

In most high-end display technologies, each pixel has a distinct logical address, a weight of eight bits or more, and the capacity to display millions of different colors. The precise blending of the three primary elements of the RGB color spectrum produces the color of each pixel. Each color component of the pixel can be specified using a variable number of bytes, depending on the color scheme.

The term “pixel dimensions” or “pixel resolution” refers to the total number of pixels and how they are distributed on a 2D plane, more precisely, the total number of pixels in the image’s width and height. For instance, the number of pixels is indicated by values like 1920 x 1080.
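
As a quick worked example (plain arithmetic, not specific to any sensor), the total pixel count implied by a resolution such as 1920 x 1080 is simply width times height:

# Total pixels = width x height; one megapixel (MP) = 1,000,000 pixels.
width, height = 1920, 1080
total_pixels = width * height            # 2,073,600
print(f"{width} x {height} = {total_pixels:,} pixels (~{total_pixels / 1e6:.1f} MP)")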


What is Well Capacity? 

As we know, each pixel converts incident photons into an electrical charge. The maximum charge that each pixel can hold is termed the "full well capacity". Together with the sensor's noise floor, the full well capacity determines the dynamic range.
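
A common way to express this relationship is the ratio of the full well capacity to the read-noise floor, in decibels. The sketch below uses hypothetical values, not figures for any particular sensor:

import math

full_well_e = 10_000     # hypothetical full well capacity, in electrons
read_noise_e = 2         # hypothetical read-noise floor, in electrons

# Dynamic range in dB: 20 * log10(full well capacity / noise floor)
dynamic_range_db = 20 * math.log10(full_well_e / read_noise_e)
print(f"Dynamic range ~ {dynamic_range_db:.1f} dB")   # ~74 dB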

The fill factor is the fraction of the pixel's area that can gather light.


The sensor architecture determines the active area in the substrate. As discussed, not the entire sensor surface is receptive to light. This is because pixels require control circuitry, which limits the fill factor.
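
A minimal sketch of the fill factor as the ratio of light-sensitive area to total pixel area (the numbers are purely illustrative assumptions):

pixel_pitch_um = 3.0                  # hypothetical pixel pitch
photodiode_area_um2 = 5.4             # hypothetical light-sensitive area

# Fill factor = light-sensitive (photodiode) area / total pixel area
fill_factor = photodiode_area_um2 / (pixel_pitch_um ** 2)
print(f"Fill factor ~ {fill_factor:.0%}")   # ~60%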

Now, let’s look at the technological advancements in pixel architecture that help improve picture quality.


What is a Microlens? 

When we talk about pixel architecture, we must look at microlenses.

To improve light collection, digital cameras place “microlenses” over each photosite. Microlenses are arranged as arrays on top of the sensor area and are designed to focus light onto the active pixel area. These lenses act as funnels, directing photons into the photosite that would otherwise be wasted.

With lens pitches of a few hundred micrometers, a few tens of micrometers, or even less, the lenslets typically form a periodic square or hexagonal pattern.

As more megapixels are packed into the same sensor area, photosites shrink. Advancements in microlens design have allowed camera manufacturers to preserve or even decrease noise in the most recent high-resolution cameras, despite the smaller photosites.


How to Enhance the Sensitivity? 

The sensitivity of an image sensor strongly depends on how much of the total pixel area is used for light-to-charge-carrier conversion, i.e., is light-sensitive.

To maximize sensitivity, we use microlenses to redirect more light into the photosensitive area. Furthermore, the backside-illuminated (BSI) pixel architecture has the advantage that the control circuitry does not limit the light-sensitive region.
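
A hedged back-of-the-envelope model: the fraction of incident photons converted into charge is roughly the quantum efficiency multiplied by the effective fill factor. Microlenses raise the effective fill factor, and BSI removes the circuitry from the light path. All numbers below are assumptions for illustration, not measured data:

def collected_fraction(quantum_efficiency, fill_factor, microlens_gain=1.0):
    # The effective fill factor is capped at 1.0: a microlens cannot gather
    # more light than actually falls on the pixel.
    return quantum_efficiency * min(1.0, fill_factor * microlens_gain)

print("FSI, no microlens:", collected_fraction(0.6, 0.5))
print("FSI + microlens  :", collected_fraction(0.6, 0.5, microlens_gain=1.6))
print("BSI (fill ~ 1.0) :", collected_fraction(0.6, 1.0))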

Different types of pixel designs in sensor architectures

  1. FSI – Front Side Illumination

  2. BSI – Back Side Illumination

  3. BSI II – Back Side Illumination II

FSI – Front Side Illumination 

Front Side Illumination is the architecture in which light falls on the sensor's active area, which lies spatially below the rest of the substrate; the layers above comprise other parts of the sensor that are not light sensitive but are critical, such as circuitry and interconnects.


In a front-side-illuminated (FSI or FI) imager, gate structures can absorb many of the incident photons, reducing the number of photons that reach the photodiode.

It wouldn’t matter which side was up if a sensor were just a layer of photosensitive silicon. A pixel, however, is much more than just a photodiode. Transistors and wiring are needed to amplify the charge, deliver it to the chip’s signal-processing area, and reset the pixel between frames. This circuitry is mounted on top of the silicon layer, partially obstructing it from light, so only part of each pixel’s area actually receives light.

The substrate has many interconnects and circuitry that are essential for charge transfer and sensor functioning. For the photon to reach the active area, it must pass through the gate. This reduces the QE (Quantum Efficiency) of the device and can attenuate the response at certain wavelengths, particularly in the blue region of the spectrum. This results in the loss of a higher percentage of photons.

BSI – Back Side Illumination 

Semiconductor manufacturers have developed a method of flipping the image sensor so that it is illuminated from the back. Photons are no longer lost in the wiring layers, since light can reach the photodiodes directly. This allows a significantly higher percentage of photons to land on the active area even with small pixels; using BSI architecture, the pixel size was reduced to 1.1 µm.

Furthermore, contemporary BSI II technology allows for a deeper well at the same 1.1 µm pixel size, resulting in greater responsiveness and dynamic range.


Quantum efficiencies of modern backside-illuminated CMOS image sensors have exceeded 95%. There is a caveat, however: the additional exposed surface (the back side) introduces extra dark current and noise sources, so many backside-illuminated image sensors exhibit more dark current than their frontside-illuminated equivalents.
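
As a simplified, assumed shot-noise model (real sensor noise budgets are more involved), dark-current electrons add to photon shot noise and read noise in quadrature, lowering the signal-to-noise ratio. The values below are hypothetical:

import math

signal_e = 1_000       # hypothetical photoelectrons from the scene
dark_e = 50            # hypothetical dark-current electrons over the exposure
read_noise_e = 2       # hypothetical read noise, in electrons

# Shot noise is the square root of the collected counts (signal + dark),
# combined in quadrature with the read noise.
noise_e = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
snr_db = 20 * math.log10(signal_e / noise_e)
print(f"SNR ~ {snr_db:.1f} dB")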

The higher sensitivity that comes from having fewer layers above the photodiodes can come at the cost of sharpness, known technically as the modulation transfer function (MTF). Backside-illuminated image sensors typically display a lower MTF because of the substrate that remains above the photodiodes; light arriving at certain angles can be dispersed or improperly guided into the neighbouring pixel. Fortunately, the same microlens technique that was used to boost the fill factor has also improved the MTF.


Sensor Architectures: Which is Better – Front or Back Illumination?  

Backside-illuminated image sensors have fewer obstructions in the incoming light’s path. As a result, they can convert more light into charge carriers, producing larger signals and better images.

Simplify your search for the right sensor. Learn more about our AR0233 Color 1080P HDR USB 3.0 Camera.

Contact Us 

We don’t want you to choose the wrong equipment and end up fighting with it rather than working. A unique camera solution is something we can offer you. Numerous standard cameras, assemblies and solutions, value-added services for component modification, and specialized designs are among our offerings.

We will give you our recommendation for the best camera. From lens assembly and general design standards to budgets and timelines, we’ll assist you with every aspect of your design.

Feel free to contact us.
