Many of the first digital cameras used a separate metering system to set exposure duration, rather than using data acquired from the sensor chip.
The exposure value (EV) specifies the relationship between the f-number N (relative aperture, N = f/D, where f is the focal length and D is the diameter of the entrance pupil) and the exposure duration T (shutter speed):

EV = log2(N^2 / T)
EV 0 (zero) corresponds to an exposure time of 1 s at a relative aperture of f/1.0. If the EV is known, it can be used to select among combinations of exposure time and f-number, which can be looked up in a table. The exposure value decreases as the exposure duration increases, and increases as the f-number grows.
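A minimal sketch of this relationship, using the standard definition EV = log2(N²/T) (the specific aperture/shutter pairs below are just illustrative):

```python
import math

def exposure_value(n, t):
    """EV = log2(N^2 / T), with N the f-number and T the shutter speed in seconds."""
    return math.log2(n * n / t)

print(exposure_value(1.0, 1.0))    # EV 0 reference point: f/1.0 at 1 s
print(exposure_value(2.0, 0.25))   # f/2 at 1/4 s -> EV 4
print(exposure_value(4.0, 1.0))    # f/4 at 1 s   -> EV 4 (same EV: stopped down but longer)
```

The last two calls show why a single EV can be looked up in a table of equivalent (N, T) pairs.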
Most auto exposure algorithms work this way:
· Take a picture with a predetermined exposure value, EVpre.
· Convert the RGB values to brightness, B.
· Derive a single number Bpre from the brightness image (e.g. a center-weighted mean, a median, or a more elaborate weighting as in matrix metering).
· Assuming the sensor responds linearly, choose the optimum exposure value EVopt such that a picture taken at EVopt yields a brightness close to a predefined ideal value Bopt. Since brightness halves for each +1 EV, Bopt = Bpre * 2^(EVpre - EVopt), or:

EVopt = EVpre + log2(Bpre / Bopt)
The ideal value Bopt for each algorithm is typically selected empirically.
From my understanding, this is similar to building a reference image from a sequence of images using a running average or a Gaussian mixture model, then checking the brightness of that reference image (how should brightness be defined? From the histogram?).
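The running-average variant mentioned above can be sketched per pixel on flat pixel lists; the smoothing factor `alpha` and the mean-based brightness definition are my own assumptions, not a fixed convention:

```python
def update_reference(ref, frame, alpha=0.05):
    """Exponential running average: ref <- (1 - alpha) * ref + alpha * frame."""
    return [(1 - alpha) * r + alpha * f for r, f in zip(ref, frame)]

def mean_brightness(pixels):
    """One simple way to define the 'brightness' of the reference image."""
    return sum(pixels) / len(pixels)

ref = [100.0, 100.0, 100.0, 100.0]
frame = [200.0, 200.0, 200.0, 200.0]
ref = update_reference(ref, frame, alpha=0.5)
print(mean_brightness(ref))  # -> 150.0
```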
Some implementations use EV = (\sum Yi) / N (Yi: luminance of pixel i, N: number of pixels in the region of interest). For every image this EV is calculated and compared against a predefined EV, a fixed number chosen to represent a well-exposed picture.
Ext_new = Ext_old * (EV_pre / EV_old). The process is: capture an image with the current exposure time -> calculate its EV -> derive the new exposure time from EV_pre -> set the new exposure time and re-capture the image.
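One iteration of that loop can be sketched like this (here "EV" is the mean ROI luminance, as in the heuristic above; the function names are mine):

```python
def mean_luminance(roi):
    """'EV' in this scheme: mean luminance over the region of interest."""
    return sum(roi) / len(roi)

def next_exposure_time(t_old, ev_measured, ev_target):
    """Ext_new = Ext_old * (EV_pre / EV_old): scale exposure toward the target."""
    return t_old * ev_target / ev_measured

# The current 10 ms frame averages a luminance of 60 against a target of 120,
# so the update roughly doubles the exposure time for the next capture.
roi = [50.0, 70.0, 60.0, 60.0]
print(next_exposure_time(0.010, mean_luminance(roi), 120.0))
```

This assumes sensor response is linear in exposure time, which holds away from saturation; clipped highlights would need to be excluded from the ROI mean.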