

Images or videos may be imaged under different illuminants than the models in an image or video proxy database. Changing illumination color in particular may confound recognition algorithms based on color histograms, or video segmentation routines based on these. Here we show that a very simple method of discounting illumination changes is adequate for both image retrieval and video segmentation tasks. The new image metric is based on a color-channel-normalization step, followed by a reduction of dimensionality by going to a chromaticity space. Treating chromaticity histograms as images, we perform an effective low-pass filtering of the histogram by first reducing its resolution via wavelet-based compression and then by a DCT transformation followed by zonal coding. We develop a feature vector of only 36 values that can be used for both of these objectives, as well as for retrieval of video proxy images from a database. We show that the color constancy step – color band normalization – can be carried out in the compressed domain for images stored in compressed form, and that only a small amount of image information need be decompressed in order to calculate the new metric. The new method performs better than previous methods tested for image or texture recognition, and operates entirely in the compressed domain, on feature vectors. Apart from achieving illumination invariance for video segmentation, so that, e.g., an actor stepping out of a shadow does not trigger the declaration of a false cut, the metric reduces all videos to a uniform scale.
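The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bin count, the zonal-coding order, and all function names are assumptions, and the actual method works on DCT coefficients already present in compressed images rather than recomputing them from pixels.

```python
import numpy as np

def normalize_color_bands(img):
    # Color-band normalization (the color constancy step): scale each
    # channel by its sum, discounting a per-channel illumination change.
    s = img.reshape(-1, 3).sum(axis=0)
    return img / np.where(s == 0, 1.0, s)

def chromaticity_histogram(img, bins=16):
    # Dimensionality reduction: go to chromaticity (r, g) = (R, G)/(R+G+B)
    # and histogram it; bins=16 is an assumed resolution.
    rgb = img.reshape(-1, 3).astype(float)
    total = rgb.sum(axis=1)
    keep = total > 0
    r = rgb[keep, 0] / total[keep]
    g = rgb[keep, 1] / total[keep]
    H, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return H / H.sum()

def dct2(a):
    # Orthonormal 2-D DCT-II built from an explicit DCT matrix (numpy only).
    n = a.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C @ a @ C.T

def feature_vector(img, bins=16, keep=36):
    # Low-pass filter the histogram: DCT it, then zonal-code by keeping
    # only the lowest-frequency coefficients (smallest u + v), giving a
    # 36-value feature vector for keep=36.
    H = chromaticity_histogram(normalize_color_bands(img), bins)
    D = dct2(H)
    u, v = np.meshgrid(np.arange(bins), np.arange(bins), indexing="ij")
    order = np.argsort((u + v).ravel(), kind="stable")
    return D.ravel()[order][:keep]
```

Comparing two images then reduces to a distance between their 36-value vectors; because the normalization step cancels a diagonal illumination change, the vector is unchanged when each color band is rescaled.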
