Monday, June 3, 2019

Computer Vision In Bad Weather.

Saswati Rakshit

Aim

To take advantage of bad weather in estimating the properties of a scene from its image. In bad weather the atmosphere modulates the information travelling from the scene to the observer, so based on these observations we develop models and methods for finding scene properties (e.g. 3D structure, depth).

Scope/Application

Computer vision is widely used in many fields nowadays:
- Optical character recognition: converting scanned documents to text.
- Face detection and smile detection: many new digital cameras now detect faces and smiles.
- Surveillance and traffic monitoring.
- Image-based 3D modelling: turning a collection of photographs into a 3D model.
- The Google self-driving car uses computer vision for distance estimation.

Introduction: Vision and the Atmosphere

Normally, in good weather, we assume that reflected light passes through the air unaltered, so the brightness of a scene point in the image is assumed to stay the same. But atmospheric scattering, absorption and emission alter the intensity and color of light; here our main consideration is scattering.

Bad weather (particles in space): weather conditions differ in the type, size and concentration of the particles involved.
- Air (molecules): scattering due to air is minimal.
- Haze (aerosol): haze noticeably reduces visibility.
- Fog (water droplets): fog and haze have similar origins, but haze extends to an altitude of several miles while fog is only a few hundred feet thick.
- Cloud: present at high altitude.
- Rain and snow: both affect images.

Our main consideration here is haze and fog, because they appear at low altitude compared to cloud.

Mechanisms of atmospheric scattering

Scattering depends on particle size and shape. Small particles scatter almost equally in the forward and backward directions, medium-size particles scatter more in the forward direction, and large particles scatter almost entirely forward. In nature, particles are far enough apart that they scatter independently, i.e.
they do not interfere with one another. In multiple scattering, by contrast, a particle is exposed not only to the incident light but also to the light scattered by other particles. The single-scattering function can be written as follows:

I(θ,λ) = E(λ)·β(θ,λ)   (1)

where E(λ) is the total incident flux on the volume per unit cross-section area, I(θ,λ) is the flux radiated per unit solid angle per unit volume of the medium, and β(θ,λ) is the angular scattering coefficient.

Objectives

To identify effects caused by bad weather that can be turned to our advantage, and to understand the attenuation and airlight models, which help in measuring depth maps of scenes without making assumptions about scene properties or atmospheric conditions.

System flow

Our main goal is to estimate depth and form a 3D model of a scene under bad weather conditions. For this purpose we use two different scattering models:
1) the attenuation model
2) the airlight model

First we use the attenuation model, in which images are taken when environmental illumination is minimal. We estimate the depths of the light sources in the scene from two images taken under different atmospheric conditions: applying the mathematical formulas of the attenuation model, we can compute the relative depths of all sources in the scene from the two images.
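Returning to the single-scattering function of eqn (1), the particle-size behaviour described above can be sketched numerically. The Henyey-Greenstein-style phase function below is a hypothetical stand-in for the angular scattering coefficient β(θ,λ); the asymmetry parameter g and all numbers are illustrative assumptions, not part of the model above:

```python
import math

def phase(theta, g):
    """Hypothetical angular scattering coefficient beta(theta):
    a Henyey-Greenstein-style phase function, up to a constant.
    g = 0 scatters symmetrically (small particles); g close to 1
    scatters strongly forward (large particles)."""
    return (1 - g * g) / (1 + g * g - 2 * g * math.cos(theta)) ** 1.5

def scattered_intensity(E, theta, g):
    # Single scattering, eqn (1): I(theta) = E * beta(theta)
    return E * phase(theta, g)
```

For g = 0 the forward (θ = 0) and backward (θ = π) intensities are equal, while for larger g the forward direction dominates, mirroring the small- versus large-particle behaviour.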
Next, to work with the airlight model, we need images taken by day, when environmental illumination cannot be ignored, i.e. the image of the scene is dominated by airlight. After selecting the 2D image we apply the mathematical formulas of the airlight model; by comparing the intensities of scene points, depth can be easily measured and a 3D reconstruction of the scene is also possible.

Mathematics and Description

Attenuation Model

We know that a beam of light travelling through the atmosphere is attenuated by scattering, and its radiance (intensity) decreases as the pathlength increases. The attenuation model developed by McCartney is summarized below.

If a beam passes through a thin sheet (medium) of thickness dx, the intensity scattered by the sheet in direction θ is

I(θ,λ) = E(λ)·β(θ,λ) dx

The total flux scattered in all directions is obtained by integrating over the entire sphere:

φ(λ) = E(λ)·β(λ) dx   (2)

where β(λ) is the total scattering coefficient. The fractional change in irradiance at location x can then be written as

dE(x,λ)/E(x,λ) = −β(λ) dx   (3)

Integrating both sides of eqn (3) between the limits x = 0 and x = d gives

E(d,λ) = I₀(λ)·e^(−β(λ)d) / d²   (4)

where I₀(λ) is the intensity of the point source and d is the distance between the source and the observer. Attenuation due to scattering is sometimes expressed in terms of the optical thickness

T = β(λ)·d

where β is constant over a horizontal path. Eqn (4) gives the direct transmission, i.e. what remains after the scattered flux has been removed.

Airlight Model

Here the atmosphere itself behaves as a source of light. Environmental illumination has several light sources, including direct sunlight, diffuse skylight and light reflected by the ground. In the airlight model, light intensity increases with pathlength, and so apparent brightness increases with distance.
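A minimal sketch of the direct-transmission law of eqn (4) and the optical thickness T = βd; all numbers are illustrative, with β in inverse units of distance:

```python
import math

def direct_transmission(I0, beta, d):
    """Direct transmission, eqn (4): E(d) = I0 * exp(-beta * d) / d**2."""
    return I0 * math.exp(-beta * d) / (d * d)

def optical_thickness(beta, d):
    # T = beta * d, with beta constant over a horizontal path
    return beta * d
```

With β = 0 the law reduces to the familiar inverse-square falloff; any positive β attenuates the source further, and the attenuation grows with distance.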
If an object is at infinite distance, the radiance of airlight is maximum; for an object right in front of the observer, it is zero. To describe the geometry of the model, the environmental illumination along the observer's line of sight is assumed to be constant, but its direction and intensity are unknown. Consider the cone of solid angle dω subtended by a receptor at the observer and truncated by the object at distance d. This cone between observer and object scatters environmental illumination in the direction of the observer, so it acts as airlight (a source of light) whose brightness increases with pathlength. The small volume dV at distance x from the observer is

dV = dω·x²·dx

The intensity of the light incident on dV that is scattered toward the observer is

dI(x,λ) = dV·k·β(λ) = dω·x²·dx·k·β(λ)   (5)

where k is a constant of proportionality. The irradiance this light produces at the observer, after attenuation as in eqn (4), is

dE(x,λ) = dI(x,λ)·e^(−β(λ)x) / x²   (6)

We can find the radiance of dV from its irradiance as

dL(x,λ) = dE(x,λ)/dω   (7)

Substituting (5) and (6), we get

dL(x,λ) = k·β(λ)·e^(−β(λ)x) dx

The total radiance of the pathlength d from observer to object is found by integrating this expression between x = 0 and x = d:

L(d,λ) = k·(1 − e^(−β(λ)d))   (8)

If d = ∞, the radiance of airlight is maximum: L(∞,λ) = k. So

L(d,λ) = L(∞,λ)·(1 − e^(−β(λ)d))   (9)

Estimation of depth using the Attenuation Model

In this model, images are taken when environmental illumination is minimal, so the airlight model is not used. At night the bright points of an image are typically street lights and the windows of lit rooms. On a clear night these light sources appear to the observer at their brightest and clearest, but in bad weather their intensities are diminished by attenuation. Our goal is to estimate the depths of the light sources in the scene from two images taken under different atmospheric conditions. The image irradiance can be written using eqn (4) as

E(d,λ) = g·I₀(λ)·e^(−β(λ)d) / d²   (10)

where g accounts for the optical parameters of the camera. If the detector of the camera has spectral response s(λ), the final image brightness value is

E' = ∫ s(λ)·E(d,λ) dλ = (g/d²) ∫ s(λ)·I₀(λ)·e^(−β(λ)d) dλ   (11)

Since the spectral bandwidth of the camera is limited, we can take β(λ) to be constant over it and write

E' = (g/d²)·e^(−βd) ∫ s(λ)·I₀(λ) dλ = (g/d²)·e^(−βd)·I₀'   (12)

Now suppose we take images under two different weather conditions, e.g. mild and dense fog, with two different scattering coefficients β₁ and β₂. Taking the ratio of the two resulting image brightness values gives

R = E₁'/E₂' = e^(−(β₁−β₂)d)   (13)

Taking the natural logarithm,

ln R = −(β₁−β₂)d   (14)

This quantity is independent of the camera's sensor gain and of the intensity of the source; in fact it is simply the difference in optical thickness (DOT) of the source between the two weather conditions. If we compute the DOTs of two different light sources and take their ratio, we obtain the relative depths of the two source locations:

d_i / d_j = ln R_i / ln R_j   (15)

Since we may not entirely trust the DOT computed from any single pair of sources, the calculation can be made more robust by normalizing over all sources:

d_i / Σ_j d_j = ln R_i / Σ_j ln R_j   (16)

Here the intensity of a single source p_i at distance d_i is compared against all other sources; in this way we compute the depths of all sources in the scene up to a scale factor. The main goal of using this model is to compute the relative depths of all sources in the scene from two images taken under two different weather conditions.

Estimation of depth using the Airlight Model

At daytime in dense haze or fog, the brightness of most visible scene points is dominated by airlight, which causes intensity to increase as distance increases. Here we consider a single airlight image and compute the 3D scene structure from this depth cue. Let a scene point at distance d produce airlight radiance L(d,λ). If our camera has spectral response s(λ), the brightness value of that scene point is

E'(d) = g ∫ s(λ)·L(d,λ) dλ   (17)

Substituting eqn (9), we get

E'(d) = g ∫ s(λ)·L(∞,λ)·(1 − e^(−β(λ)d)) dλ   (18)

If β(λ) is constant over the camera's spectral band, we can write

E'(d) = E'(∞)·(1 − e^(−βd))   (19)

where E'(∞) is the brightness at the horizon. Now let

S = (E'(∞) − E'(d)) / E'(∞)   (20)

Substituting eqn (19) into eqn (20) and taking the natural logarithm, we can write

ln S = −βd   (21)

Here β acts as a scale factor, and the 3D structure of the scene can be recovered up to it. The part of the horizon in the image, which has intensity E'(∞), is the brightest region of the image (the sky background).

Future work

Next we will understand and discuss
about Dichromatic Atmospheric Scattering and structure from Chromatic Decomposition.

References

http// (Accessed on 20.04.2015)
Narasimhan, S. G., Nayar, S. K., "Vision and the Atmosphere", International Journal of Computer Vision, vol. 48(3), pp. 233-254, 2002.
Allard's Law, http// (Accessed on 18.03.2015)
Relation between Radiance and Irradiance, 2013, http// (Accessed on 18.03.2015)
Radiometry and Photometry, http// (Accessed on 28.03.2015)
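As a closing worked example, the two depth cues derived above, relative depth from the difference in optical thickness (eqns (13)-(15)) and depth up to scale from airlight (eqns (20)-(21)), can be sketched as follows. All brightness values, scattering coefficients and distances used here are synthetic numbers chosen only to exercise the formulas:

```python
import math

def relative_depth(R_i, R_j):
    """Attenuation model, eqns (13)-(15): given the brightness ratios
    R = E1'/E2' of two sources under two weather conditions,
    ln R = -(beta1 - beta2) * d, so d_i / d_j = ln R_i / ln R_j."""
    return math.log(R_i) / math.log(R_j)

def airlight_radiance(L_inf, beta, d):
    # Airlight model, eqn (9): L(d) = L(inf) * (1 - exp(-beta * d))
    return L_inf * (1.0 - math.exp(-beta * d))

def depth_up_to_scale(E_d, E_horizon):
    """Airlight model, eqns (20)-(21): S = (E'(inf) - E'(d)) / E'(inf)
    and ln S = -beta * d, so -ln S recovers beta * d, i.e. depth up
    to the scale factor beta."""
    S = (E_horizon - E_d) / E_horizon
    return -math.log(S)
```

Note that both cues recover only depth ratios or depths up to a scale factor; absolute depths would require knowing the scattering coefficients themselves.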
