Digital Image Processing

Welcome to the course on digital image processing.

Now, we will have an introduction to the various image processing techniques and their applications, and in subsequent lectures, we will go into the details of different image processing algorithms. To start with, let us see what digital image processing means.

What Are the Terms in Digital Image Processing?

So, if you just look at this name – digital image processing – you find that there are 3 terms. The first one is processing, then image, then digital. So, digital image processing means the processing of images which are digital in nature by a digital computer. Before we come to the other details, let us see why we need to process images at all.


So, you find that digital image processing techniques are motivated by 2 major applications. The first application is the improvement of pictorial information for human perception. This means that whatever image we get, we want to enhance its quality so that the image looks better. The second important application of digital image processing techniques is in autonomous machine applications. This has various uses in industry, particularly for quality control and assembly automation, and many similar applications. We will look at them one after another. And of course, there is a third application, which is efficient storage and transmission. Say for example, if we want to store an image on a computer, then this image will need a certain amount of disk space.

Now, we will look at whether it is possible to process the image using certain image properties so that the disk space required for storing it is reduced. Not only that, we can also have applications where we want to transmit the image or the video signal over a transmission medium, and if the bandwidth of that medium is very low, we will see how to process the image or the video so that it can be transmitted over low bandwidth communication channels. So, let us first look at the first major application, which is meant for human perception.

Methods Applied in Digital Image Processing
Now, these methods mainly employ different image processing techniques to enhance the pictorial information for human interpretation and analysis. A typical application of this kind of technique is noise filtering. In some cases, the images that you get may be very noisy. So, we have to filter those images so that the noise present in them is removed and the image appears much better.

In some other kinds of applications, we may have to enhance certain characteristics of the image. One application under this category is contrast enhancement. Sometimes the image may have very poor contrast and we have to enhance the contrast of that image so that it is visually better.

In some other cases, the image may be blurred. This blurring may occur for various reasons. Maybe the camera setting is not proper, or the lens is not focused properly; that leads to one kind of blurring. The other kind of blurring can occur if we take a picture from a moving platform; say for example, from a moving car or a moving train. In that case also, you might have observed that the image you get is not a clear image but a blurred one.

So, we look at whether image processing techniques can help to rectify those images. The other kind of application is remote sensing. In remote sensing, the types of images which are used are aerial images, and in most cases these aerial images are taken from a satellite.

Now, let us look at the different examples under these different categories.

Here you find that you have a noisy image. The first image shown in this slide is a noisy image, and this kind of image is quite common on a TV screen. Now, you find that digital image processing techniques can be used to filter such images. The filtered image is shown on the right hand side, and you find that it looks much better than the noisy image shown on the left side.
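The noise filtering just mentioned can be sketched in a few lines. Below is a minimal, hypothetical one-dimensional median filter in Python; the lecture does not name a specific filter, so the function name and sample values here are ours. The same idea extends to 2-D neighbourhoods for images.

```python
def median_filter_1d(signal, k=3):
    """Replace each sample by the median of its k-sample neighbourhood.

    Impulse ("salt and pepper") noise spikes are removed because a lone
    outlier can never be the median of its window.
    """
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = sorted(signal[max(0, i - half):i + half + 1])
        out.append(window[len(window) // 2])
    return out

# a flat signal corrupted by two impulse-noise spikes
noisy = [10, 10, 200, 10, 10, 10, 0, 10]
print(median_filter_1d(noisy))  # [10, 10, 10, 10, 10, 10, 10, 10]
```

A real image filter would slide a 3x3 (or larger) window over the 2-D pixel grid, but the principle of ranking the neighbourhood and keeping the middle value is identical.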

In the second category, image enhancement, you find that again on the left hand side we have an image, and on the right hand side we have the corresponding image which has been processed to enhance its contrast. If you compare these two images, you find that in the low contrast image there are many details which are not clearly visible; say for example, the water line of the river. Simultaneously, if you look at the right image, which is the processed and enhanced version of the low contrast image, you find that the water lines of the river are clearly visible.
So, here after processing, we have got an image which is visually much better than the original low contrast image.
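As a rough sketch of what such contrast enhancement does, here is a simple linear contrast stretch in Python. This is a hypothetical illustration (the slide's exact method is not stated in the lecture): it maps the narrow intensity range of a low contrast image onto the full 0-255 range.

```python
def stretch_contrast(pixels, out_min=0, out_max=255):
    """Linearly map the input intensity range [min, max] onto [out_min, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# intensities squeezed into [100, 140] -> spread over the full [0, 255] range
print(stretch_contrast([100, 110, 120, 130, 140]))  # [0, 64, 128, 191, 255]
```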

There is another example of image enhancement here. On the left hand side, we have a low contrast image, but in this case it is a color image, and I am sure that none of you would like to have a color image of this form. On the other hand, on the right hand side, we have the same image enhanced using digital image processing techniques, and you find that in this case again, the enhanced image is much better than the low contrast one.

I talked about another kind of enhancement, where we said that in some applications the image may be blurred. So here, on the left side of the top row, you find an image which is blurred, and in this case the blurring has occurred because of a defocused lens. When the image was taken, the lens was not focused properly.

 

And we have talked about another kind of blurring, which occurs if you take a picture from a moving platform, maybe from a moving train or a moving car. In such cases, the type of image that we usually get is the kind of image shown on the right hand side of the top row.

 

And here you find that this kind of blurring is mostly motion blurring. The third image, on the bottom row, shows the processed image, where by processing these different blurred images we have been able to improve the image quality.

Now, another major application of digital image processing techniques is in the area of medicine. I am sure that many of you must have seen CT scan images, where images of the human brain are formed by CT scan machines. Here, one slice of a CT scan image is shown, and the image is used to determine the location of a tumor.

 

So, you find that the left hand image is the original CT scan image, and the middle image and the image on the right hand side are the processed images. In the processed images, the yellow and red regions indicate the presence of a tumor in the brain.

 

Now, these kinds of images and image processing techniques are very important in medical applications, because through such processing the doctors can find out the exact location of the tumor, the size of the tumor and many other things which help them plan the operation. Obviously this is very important, because in many cases it saves lives.

This slide shows some mammogram images which show the presence of cancerous tissue. So, image processing techniques in the medical field are very helpful in detecting the formation of cancers.

This shows another kind of image which is very popular, and I believe most of you have heard the name ultrasonography. Here we have shown 2 ultrasonic images which are used to study the growth of a baby while the baby is in the mother's womb; this also helps the doctor to monitor the health of the baby before it is actually born.

 

Image processing techniques are also very important for remote sensing applications. Here is a satellite image taken over the region of Calcutta, and you find that much information is present in the image: the thick blue line shows the river Ganges, and different color codings are used to indicate different regions. When we have a remote sensing image, an aerial image of this form taken from a satellite, we can study various things.

For example, we can study whether the river has changed its path, we can study the growth of vegetation over a certain region, and we can study whether there is any pollution in some part of that area. So, these are various applications of remote sensing images. Not only that, such remote sensing or aerial images can also be used for planning a city.

 

Suppose we have to build a city over a certain region; then through such aerial images, we can study the nature of the region over which the city has to be built. Through this, one can determine where the residential area should grow, where an industrial area should grow, through which regions the roads have to be constructed, where a car parking region can be constructed, and so on. All these things can be planned if you have an aerial image like this.

Here is another application of remote sensing images: terrain mapping. This shows the terrain map of a hilly region which is not easily accessible. What we can do is get the images of that inaccessible region from the satellite, then process those images to find out the 3D terrain map. This particular image shows such a terrain map of a hilly region.

Remote sensing images can also be used to detect forest fires. Once you identify a fire, you can determine the loss caused in the wake of that fire. Not only that, if we can find out the direction in which the fire is moving, we can warn people well beforehand so that precautionary action can be taken and many lives as well as much property can be saved.

Image processing techniques are also very important for weather forecasting. I am sure that whenever you watch the news on a television channel, when the weather forecast is given, some images are overlaid on a map which tell you the cloud formation over certain regions. That gives you an idea of whether there is going to be rain, whether there are going to be storms, and things like that.

 

This is an image which shows the formation of hurricane Dennis in 1990, and through this image we can find out the extent of the hurricane, its strength, and what precautionary measures can be taken beforehand to save lives as well as property. Image processing techniques are also very useful for atmospheric study.

So, if you look at this image, you find that the central part of the image shows the formation of an ozone hole. Many of you know that the ozone layer is very important for us because it forms a protective layer over our atmosphere; because of this protective ozone layer, many of the unwanted rays from the sun cannot enter the earth's atmosphere, and thereby our health is protected.

Whenever such an ozone hole forms, it indicates that all those unwanted rays can reach the earth's surface through that hole. So, the people of the region over which such an ozone hole has formed have to take precautionary measures to protect themselves against such unwanted radiation. So, this is also very important: such image processing techniques are very important for atmospheric study.

 

 

Image processing techniques are also important for astronomical studies. Say for example, this particular image shows a star formation process.

Again, the next image shows a galaxy. So, you find that the applications of image processing techniques are becoming almost unlimited; they are applied in various fields for various purposes.

Next, we come to the other domain of application of image processing techniques, which is machine vision applications. You find that in all the earlier applications we have shown, the purpose was visualization: the improvement of the visual quality of the image so that it becomes better for human perception.

 

When it comes to machine vision applications, the purpose of the image processing techniques is different. Here, we are not much interested in improving the visual quality. What we are interested in is processing the images to extract some description or some features which can be used for further processing by a digital computer. Such processing can be applied in industrial machine vision for product assembly and inspection. It can be used for automated target detection and tracking. It can be used for fingerprint recognition. It can also be used for processing aerial and satellite images for weather prediction, crop assessment and many other applications.

 

So, let us look at these different applications one after another.

This shows an application: the automation of a bottling plant. What the plant does is fill some liquid, some chemical, into bottles; after they are filled up, the bottles are carried away by conveyor belts, and after that they are packed and finally sent to the customers.

So, here, checking the quality of the product is very important, and in this particular application, the quality of the product indicates whether the bottles are filled properly or whether some bottles are coming out empty or partially filled. Naturally, if we can find out that some bottles are partially filled or empty, we do not want those bottles to be delivered to the customers, because if a customer gets such a bottle, then the goodwill of the company will be lost.

 

So, detection of empty or partially filled bottles is very important, and here image processing techniques can be used to automate this particular process. Here we have shown an image, a snapshot of this bottling process, where you find that there are some bottles which are completely filled up and one bottle in the middle which is partially filled.

So naturally, we want to detect this particular bottle and remove it from the production line so that finally, when the bottles go to the customers, no empty or partially filled bottles are delivered.

Let us see another application of image processing for machine vision purposes. Now, before I go to that application, I have shown an image to highlight the importance of boundary information in image processing. Here we have shown the boundary image of an animal. There is no other information available in this image except the boundary contours, yet if I ask you to identify this particular animal, I am sure that all of you will identify it to be a giraffe.

So, you find that even though we do not have any information except the boundary or border of the giraffe, we have still been able to identify this particular animal. In many cases, in fact in most cases, the boundaries contain most of the information about the objects present in the scene, and using this boundary information we can develop various applications of image processing techniques. Here is one such application.

So, this is again an automated inspection process, and here the objects that we are interested in inspecting are refractory bricks. Here we have shown 4 different images. The first one is the original image of the brick, as captured by the camera. The second one is what we call a thresholded image or a segmented image (we will come to the details of this later), where we have been able to identify which regions actually belong to the object and which regions belong to the background.

 

Naturally, when we are interested in inspecting this particular object, we will not be interested in the background region. What we will be interested in is the region that belongs to the object. So, this separation of background and object is very important in all these kinds of applications.
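The thresholding step just mentioned can be sketched very simply. In this hypothetical Python fragment (function name and pixel values are ours, not from the lecture), every pixel brighter than a chosen threshold is marked as object and everything else as background:

```python
def threshold(image, t):
    """Binarize an image: 1 where the pixel exceeds t (object), else 0 (background)."""
    return [[1 if p > t else 0 for p in row] for row in image]

# a bright object on a dark background
img = [[12, 200, 210],
       [15, 220, 205],
       [10,  11,  14]]
print(threshold(img, 128))  # [[0, 1, 1], [0, 1, 1], [0, 0, 0]]
```

How to choose the threshold automatically (for example from the image histogram) is one of the details the lecture promises to cover later.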


 

The third image, that is the left one on the bottom row, is a filled image. You find that the second image is not very smooth; there are a number of black patches over the white region. The third one has filled up all those black patches, and it shows the profile of the object, the 2D projection of the object, that we can get.

 

And the fourth image shows the boundary of this object. Using this boundary information, we can inspect various properties of the object. For example, in this particular application there can be 2 different types of defects. One kind of defect is the structural defect.

 

Now, when I say structural defect, by structure what I mean is the dimension of every side of the object and the angle at every corner of the object; this is the structural information of the object. The other kind of inspection we are interested in concerns the surface characteristics of the object: whether the surface is uniform or non-uniform. So, let us see how these inspections can be made.

So, here you find that in the first image, what we have done is process the boundary image in such a way that, since there are 4 different boundary regions, we have fitted 4 different straight lines, and these 4 straight lines tell you what the ideal boundary of the object should be. Once you get these 4 straight lines, we can find out their points of intersection, and we know that in the ideal situation, those points of intersection are the locations of the corners of the object.

 

So, you find that in the first image there are 4 white dots which indicate the corners of the object. Once you get this information – the corners of the object and the boundary lines of the object – we can find out the length of each and every side of the object. We can also find out the angle subtended at every corner of the object.
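The corner-finding step – intersecting the fitted boundary lines – is plain analytic geometry. Here is a hypothetical sketch in Python, with each fitted line written as a*x + b*y = c (the representation and sample lines are our choice, not the lecture's):

```python
def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) for a*x + b*y = c."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None            # parallel lines: no corner here
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)

# two fitted boundary lines, y = 1 (bottom edge) and x = 4 (right edge)
print(intersect((0, 1, 1), (1, 0, 4)))  # (4.0, 1.0) -- the ideal corner
```

Repeating this for each adjacent pair of the 4 fitted lines yields the 4 ideal corners marked by the white dots.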

 

And from this, if I compare the information that we have obtained through image processing with the information which is already stored in the database for this particular object, we can find out whether the dimensions that we have got are within the tolerable limits or not. If they are within the tolerable limits, then we can accept the object; if not, then we will not accept the object.

 

Now, if you look at the original image once more, you will find that there are 2 corners, one on the right hand side and one on the left hand side, which are broken. Not only that, on the left hand side, if you look at the middle, you can identify that there is a certain crack. These are also defects of this particular object, and through these image processing techniques we are interested in identifying them.

 

Now, let us see how these defects have been identified. Here again, in the first image, once we have got the ideal boundary and ideal corners of the object, we can fill up the region bounded by these 4 edges to get an ideal projection of the object. The second image in this particular slide shows this ideal projection. The third image shows the actual projection that has been obtained using image processing techniques.

 

Now, if we take the difference of this ideal projection and the actual projection, then we can identify the defects. So, you find that in the fourth image, the 2 corner breaks have been represented by white patches, and on the left hand side, in the middle, you can see that the crack is also identified. So, image processing techniques can be used for the inspection of industrial objects like this.

And as we mentioned, the other kind of inspection that we are interested in concerns the surface characteristics: whether the surface is uniform or non-uniform. When we want to study the surface characteristics, the type of processing technique used is called texture processing. And this one shows that the surface of the object is not really uniform; rather, it contains 2 different textures, and in the right image those two textures are indicated by 2 different gray shades.
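One crude but illustrative texture measure is local intensity variance: a smooth surface region has near-zero variance while a textured one does not. The lecture does not specify its texture features, so the following Python sketch is only an assumed illustration of the idea:

```python
def window_variance(row, k=4):
    """Variance of intensities in non-overlapping windows of k pixels:
    a crude texture measure (smooth regions -> low, textured -> high)."""
    out = []
    for i in range(0, len(row) - k + 1, k):
        w = row[i:i + k]
        m = sum(w) / k
        out.append(sum((p - m) ** 2 for p in w) / k)
    return out

# left half smooth, right half an alternating (textured) pattern
row = [100, 100, 100, 100, 60, 140, 60, 140]
print(window_variance(row))  # [0.0, 1600.0]
```

Assigning each window a gray shade according to its texture score is one way to produce a two-shade texture map like the one on the slide.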

This shows the application of image processing techniques for automated inspection in other areas; for example, the inspection of integrated circuits during the manufacturing phase. Here you find that in the first image there is a broken bond, whereas in the second image a bond is missing which should have been there. Naturally, these are defects which ought to be identified, because otherwise, if the IC is made this way, it will not function properly.

So, those are the machine vision applications used for automating some operation; in most cases, for automating the inspection process or the assembly process. Now, we have another kind of application: processing a sequence of images, known as a video sequence. A video sequence is nothing but different image frames which are displayed one after another.

 

So naturally, when the image frames are displayed one after another, if there is any movement in the scene, that movement is clearly detected. So, the major emphasis in image sequence processing is to detect the moving parts. This has various applications; for example, detection and tracking of moving targets, and a major application is in security surveillance.

 

Another application can be to find the trajectory of a moving target. Also, monitoring the movement of organ boundaries in medical applications is very important, and all these operations can be done by processing video sequences. So, let us take one such example.

Here you find that in the first sequence, a person is moving against a green background. Through image processing techniques, we can identify this movement. In the second, processed sequence, you find the moving person clearly shown against a black background. That means we have been able to separate the background from the moving object.
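A minimal way to separate a moving object from a static background is frame differencing: compare the current frame with a previous (or background) frame and keep the pixels that changed. The lecture does not say which method was used for this sequence, so this Python sketch is only an assumed illustration:

```python
def moving_pixels(frame_prev, frame_curr, t=20):
    """Mark pixels whose intensity changed by more than t between two frames."""
    return [[1 if abs(a - b) > t else 0
             for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(frame_prev, frame_curr)]

prev = [[50, 50, 50],
        [50, 50, 50]]
curr = [[50, 180, 50],   # a bright object has moved into the middle column
        [50, 175, 50]]
print(moving_pixels(prev, curr))  # [[0, 1, 0], [0, 1, 0]]
```

The 1-pixels form the moving-object mask; everything else is set to black, which is exactly the look of the processed sequence on the slide.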

Now, in this particular application, the video sequence was taken in broad daylight; but in many other applications, particularly security applications, the images have to be taken during the night as well, when there is no sunlight. Then what kind of image processing techniques can be used for such surveillance applications?

 

So, here we have shown a sequence which was taken during the night, and the kind of imaging used is not ordinary optical imaging; here we have to go for infrared imaging or thermal imaging. So, this particular sequence is a thermal sequence. Here again, you find that a person is moving against a still background.

 

So, if you concentrate on this region, you find that the person is moving, and again through image processing techniques we have identified just this moving person against the still background. Here you find that the person is moving and the background is completely black. So, these kinds of image processing techniques can also be used for video sequence processing. Now, let us take a look at another application of image processing.

Let us look at this. Here, we have a moving target, and our interest is to track this particular target; that is, we want to find out the trajectory that this moving object is following. So, what we will do is highlight the particular point that we want to track. I make a window so that the window covers the region that I want to track, and after making the window, I make a template of the object region within this window.

 

So, after making the template, I go for tracking this particular object. You find that in this particular application, the object is being tracked through the video sequence. Just look at this: over the sequence, the object is changing its shape, but even after the shape has changed, we have still been able to track this particular object. However, when the shape has changed so much that the object cannot be matched any further, the tracking fails.
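Template tracking of this kind is commonly done by sliding the template over each new frame and keeping the position where it matches best. Here is a hypothetical one-dimensional sketch using the sum of absolute differences (SAD) as the match score; the function name and data are ours, and the real system would search a 2-D window:

```python
def best_match(signal, template):
    """Return the offset where the template best matches the signal,
    i.e. where the sum of absolute differences (SAD) is smallest."""
    best_off, best_sad = None, float("inf")
    for off in range(len(signal) - len(template) + 1):
        sad = sum(abs(s - t) for s, t in zip(signal[off:], template))
        if sad < best_sad:
            best_off, best_sad = off, sad
    return best_off

# the template [9, 8, 9] sits at offset 2 of this scanline
scanline = [0, 0, 9, 8, 9, 0, 0, 0]
print(best_match(scanline, [9, 8, 9]))  # 2
```

When the object deforms, the SAD at the best offset keeps growing; a threshold on that score is one simple way to declare that the track has been lost.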

 

So, what is the application of this kind of image processing technique? Here, the application is that if I track this moving object using a single camera, then with the help of that single camera I can find out the azimuth and elevation of the object with respect to a certain reference coordinate system.

 

If I track the object with 2 different cameras and find out the azimuth and elevation with the help of both cameras, then I can identify the X, Y, Z coordinates of the object with respect to a 3D coordinate system. By locating those positions in different frames, I can find out which path the object follows over time; that is, I can determine the trajectory of the moving object. So, these are the different applications of video sequence processing.

So, we have mentioned that there is a third application, which is compression. In compression, what we want is to process the image so as to reduce the space required to store it, or, if we want to transmit the image, so that we can transmit it over a low bandwidth channel.

 

Now, let us look at this image, and in particular at the blue circular region. You find that in this region, the intensity of the image is more or less uniform. That is, if I know the intensity of the image at a particular point, I can predict the intensity of its neighboring points.

 

So, if such prediction is possible, then it can be argued: why do we have to store all those image points? Rather, I can store one point along with the prediction mechanism by which its neighborhood can be predicted; then the same information can be stored in much less space. Look at the second region. Here again, in most of the region the intensity is more or less uniform, except in certain areas like the eye, the head boundary, the ear and so on. These are the kinds of things which are known as redundancy.
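Pixel redundancy is exactly what predictive coding exploits: in a nearly uniform region, each pixel is well predicted by its neighbour, so storing small prediction errors is cheaper than storing raw intensities. A hypothetical sketch (the simplest possible predictor, "the next pixel equals the previous one"):

```python
def to_differences(pixels):
    """Store the first pixel plus successive differences (prediction errors)."""
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def from_differences(diffs):
    """Invert to_differences: rebuild the original pixel row exactly."""
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out

row = [120, 121, 121, 122, 122, 122, 123]     # a nearly uniform region
diffs = to_differences(row)
print(diffs)   # [120, 1, 0, 1, 0, 0, 1] -- tiny values, cheap to encode
assert from_differences(diffs) == row          # lossless reconstruction
```

The small difference values can then be encoded with few bits each, which is where the actual space saving comes from.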

So, whenever we talk about an image, the image usually shows 3 kinds of redundancy. The first kind is called pixel redundancy, which has just been shown here. The second kind is called coding redundancy, and the third kind is called psychovisual redundancy. So, these are the 3 kinds of redundancy which are present in an image.

 

So, whenever we talk about an image, the image contains 2 types of entities. The first one is the information content of the image, and the second one is the redundancy, of the 3 different kinds just mentioned.

 

So, what is done for image compression purposes is to process the image and try to remove the redundancy present in it, retaining only the information. If we retain only the information, then obviously the same information can be stored using much less space.

 

The first application of this is reduced storage, as I have already mentioned. If I want to store this image on a hard disk, or if I want to store a video sequence on a hard disk, then the same image or the same digital video can be stored in much less space.

 

The second application is reduction in bandwidth. That is, if I want to transmit this image or this video over a communication channel, then the same image or video will take much less bandwidth on the channel. Now, given all these applications, this slide shows what we get after compression.

So here, we find that the first image is the original image. The second one shows the same image, but compressed 55 times. If I compare the first image and the second image, I find that the visual quality of the 2 images is almost the same; at least visually, we cannot make out much of a difference.

 

Whereas, if we look at the third image, which is compressed 156 times, and compare it with the original image, you find that in the third image there are a number of blocky regions; these are called blocking artifacts, and they are clearly visible when you compare it with the original image.

 

The reason is, as we said, that the image contains information as well as redundancy. If I remove only the redundancy and maintain the information, then the reconstructed image does not look much different from the original image. But there is another kind of compression technique, called lossy compression.

In lossy compression, what we remove is not only the redundancy but also some of the information, chosen so that after removing that information, the quality of the reconstructed image is still acceptable.

 

Now, in such cases, because we are removing some of the information present in the image, the quality of the reconstructed image will naturally not be the same as the original. So, there will be some loss, some distortion, and this is taken care of by what is called rate distortion theory.

 

Now, if I just compare the space requirements of these 3 images: the original image is of size 256 by 256 bytes, that is 64 kilobytes. The second image, which is compressed 55 times, will take a little over 1 kilobyte, whereas the third one will take something around 400 bytes. So, you find how much reduction in the space requirement we can achieve by using these image compression techniques.
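Assuming one byte per pixel, these storage figures can be sanity-checked with two lines of arithmetic (the resulting sizes are approximations):

```python
original_bytes = 256 * 256            # 256 x 256 pixels at 1 byte each = 64 KB
for ratio in (55, 156):
    print(f"{ratio}x compression -> about {original_bytes / ratio:.0f} bytes")
```

This gives roughly 1.2 kilobytes at 55x compression and roughly 420 bytes at 156x.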

Given these various applications, let us now look at some history of image processing. Though digital image processing has become very popular over the last 1 or 2 decades, the concept of image processing is not that young. In fact, as early as the 1920s, image processing techniques were being used; in those days, digital images were used to transmit newspaper pictures between London and New York, and these digital pictures were carried by submarine cables in a system known as the Bartlane system.

 

Now, when you transmit these digital images via submarine cable, then obviously on the transmitting side I have to have a facility for digitization of the image, and similarly, on the receiving side, I have to have a facility for reproduction of the image.

So, in those days, on the receiving side, the pictures were being reproduced by telegraphic printers. And here you find a particular picture which was reproduced by a telegraphic printer. Next, in 1921, there was an improvement in the printing procedure. In the earlier case, the images were reproduced by telegraphic printers.

 

In 1921, what was introduced was the photographic process for picture reproduction. In this case, at the receiver, instead of using the telegraphic printer, the codes of the digital images were perforated on a tape and photographic printing was carried out using those tapes.

 

So, here you find that there are 2 images. The second image is obviously the image that we have shown in the earlier slide; the first image is the image which has been produced using this photographic printing process.

 

So, here you find that the improvement, both in terms of tonal quality as well as in terms of resolution is quite evident. So, if you compare the first image and the second image, the first image appears much better than the second image.

Now, the Bartlane system that I mentioned, which was being used during the 1920s, was capable of coding 5 distinct brightness levels. This was increased to 15 levels by 1929. So, here we find an image with 15 different intensity levels, and the quality of this image is better than the quality of the image which was produced by the earlier Bartlane system.

Now, since 1929, for the next 35 years, researchers paid their attention to improving the image quality or the reproduction quality. And in 1964, these image processing techniques were used at the Jet Propulsion Laboratory to improve the pictures of the moon which had been transmitted by Ranger 7. And we can say that this is the time from which digital image processing techniques got a boost, and this is considered to be the basis of modern image processing techniques.

Now, given the applications as well as the history of digital image processing techniques, let us see how an image is to be represented in a digital computer. This representation is very important because unless we are able to represent the image in a digital computer, obviously we cannot process the image.

 

So, here you find that we have shown an image, and at a particular point (x, y) in the image, conventionally the x axis is taken vertically downwards and the y axis is taken horizontally towards the right. And if I look at this image, this image is nothing but a 2 dimensional intensity function which is represented by f(x, y).

 

Now, at any particular point (x, y), we find an intensity value which is represented by f(x, y). This f(x, y) is nothing but the product of 2 terms. So, here you find that this f(x, y) is represented by the product of 2 terms; one term is r(x, y) and the other term is i(x, y). This r(x, y) is the reflectivity of the surface at the corresponding image point.

 

After all, how do we get an image, or how can we see an object? You find that there must be some light source. If I take an image in daylight, this light source is usually the sun. So, the light from the light source falls on the object surface, gets reflected, reaches our eye, and only then can we see that particular object.

 

So, here we find that this r(x, y) represents the reflectivity of the point on the object surface from where the light gets reflected and falls on the imaging plane. And this i(x, y) represents the intensity of the incident light. So, if I take the product of the reflectivity and the intensity, these 2 terms together give the intensity at a particular point in the image.
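The product model above can be sketched numerically; the particular illumination and reflectivity values below are made-up illustrations, not figures from the lecture.

```python
# f(x, y) = i(x, y) * r(x, y): image intensity as the product of
# illumination (intensity of incident light) and surface reflectivity.
def f(i_xy, r_xy):
    # Reflectivity is a fraction in [0, 1]; illumination is non-negative.
    assert 0.0 <= r_xy <= 1.0 and i_xy >= 0.0
    return i_xy * r_xy

# A bright light on a dark surface vs. a dim light on a bright surface:
print(f(100.0, 0.1))  # dark surface under strong light
print(f(20.0, 0.9))   # bright surface under weak light
```

Note that a strongly lit dark surface and a dimly lit bright surface can produce similar recorded intensities, which is exactly why both terms are needed in the model.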

Now, both the coordinates (x, y) and the intensity function f(x, y) are continuous in nature; so the number of points in the image, as well as the number of possible intensity values, can be infinite.

So, can we represent or store such an image in a digital computer, where I have an infinite number of points and infinite possible intensity values? Obviously not. So, what we have to do is go for some processing of this image.

And, what we do is, instead of storing the intensity values at all possible points in the image, we try to take samples of the image on a regular grid. So here, the grid is superimposed on this particular image, and what we do is we take image samples at the various grid points.

So, the first step that we need for representation of an image in a digital computer is spatial discretization by grids. And once we get these sample values, the value of each sample is again continuous; so it can assume any of infinitely many possible values, which again cannot be represented in a digital computer. So, after sampling, the second operation that we have to do is discretization of the intensity values of the different samples: the process which is called quantization.
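The two discretization steps can be sketched as below. The continuous intensity function, the grid spacing, and the number of levels here are all arbitrary choices for illustration, not values from the lecture.

```python
import math

def f(x, y):
    # A stand-in continuous intensity function with values in [0, 1].
    return 0.5 + 0.5 * math.sin(x) * math.cos(y)

def digitize(rows, cols, step=1.0, levels=256):
    """Step 1: sample f on a regular grid. Step 2: quantize each sample."""
    image = []
    for m in range(rows):
        row = []
        for n in range(cols):
            sample = f(m * step, n * step)             # spatial sampling
            q = min(int(sample * levels), levels - 1)  # quantize to 0..levels-1
            row.append(q)
        image.append(row)
    return image

img = digitize(4, 4)
print(img)  # a 4 x 4 matrix of integer pixel values in 0..255
```

After these two steps, the image is a finite matrix of integers, which is exactly what a digital computer can store.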

So, effectively what we need is for an image to be represented by a matrix like this. So, this is a matrix of finite dimension; it has m number of rows and n number of columns. Typically, for image processing applications, the image size which is used is either 256 by 256 elements, 512 by 512 elements, 1 k by 1 k elements and so on; each of the elements in this matrix representation is called a pixel or a pel.

Now, coming to quantization of these matrix elements: you find that each of the locations represents a particular grid location where I have stored a particular sample value. Each of these sample values is quantized, and typically for image processing applications, the quantization is done using 8 bits for a black and white image and 24 bits for a color image. Because in case of color, there are 3 color planes – red, green and blue – and for each of the planes, if I use 8 bits for quantization, then it gives us 24 bits, which is used for representation of a digital color image.
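From the figures in this paragraph, the raw (uncompressed) storage requirement of such a matrix works out as follows; the 512 by 512 size is just one of the typical sizes mentioned above.

```python
def raw_size_bytes(rows, cols, bits_per_pixel):
    # Total bits = number of pixels x bits per pixel; convert to bytes.
    return rows * cols * bits_per_pixel // 8

# 8 bits per pixel for black and white, 24 bits (3 x 8) for color.
gray = raw_size_bytes(512, 512, 8)
color = raw_size_bytes(512, 512, 24)
print(gray // 1024, "KB for a grayscale image")
print(color // 1024, "KB for a color image")
```

So a 512 by 512 color image needs three times the space of the grayscale one, which is one more reason the compression techniques discussed earlier matter.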

So, here we find that it just shows an example: given this image, if I take a small rectangular area somewhere here, then the intensity values of that rectangular area are given by a matrix like this.

The first step is image acquisition. The next step after acquisition is that we have to do some kind of processing, known as preprocessing, which takes care of removing the noise or enhancing the contrast and so on. The third operation is segmentation, that is, partitioning

an input image into its constituent parts or objects. Segmentation is also responsible for extracting the object points from the boundary points.

The next steps are description of the objects and then recognition. So, once we get the description of the objects, from those descriptions we have to interpret or recognize what the object is. And, the last component is the knowledge base, where the knowledge base helps in efficient processing as well as inter-module cooperation among all the previous processing steps.

And at the core of this system, we have shown a knowledge base, and here you find that the knowledge base has a link with all these different modules. So, the different modules can take help of the knowledge base for efficient processing as well as for communicating or exchanging information from one module to another.

So, these are the different steps which are involved in digital image processing techniques, and in our subsequent lectures, we will elaborate on these different processing steps one after another.

Thank you.

For example, we can study whether the river has changed its path, we can study the growth of vegetation over a certain region, and we can study if there is any pollution in some part of that area. So, these are various applications of remote sensing images. Not only that, such remote sensing or aerial images can also be used for planning a city.

Suppose we have to build a city over a certain region; then through these aerial images, what we can study is the nature of the region over which the city has to be built. Through this, one can determine where the residential area has to be developed, where an industrial area has to be developed, through which regions the roads have to be constructed, and where you can construct a car parking region; all these things can be planned if you have an aerial image like this.

Here is another application of remote sensing images: terrain mapping. So, this shows the terrain map of a hilly region which is not very easily accessible. What we can do is get the satellite images of that inaccessible region, then process those images to find out the 3D terrain map; this particular image shows such a terrain map of a hilly region.

This is another application of remote sensing images. Here you find that this particular satellite image shows a fire which took place in Borneo. These kinds of images are useful to find out the extent of the fire and the direction in which the fire is moving. Once you identify that, you can determine the loss that has been caused by the fire; and not only that, if we can find out the direction in which the fire is moving, we can warn the people well in advance so that precautionary action can be taken and many lives as well as much property can be saved.

Image processing techniques are also very important for weather forecasting. I am sure that whenever you watch the news on a television channel and the weather forecast is given, some images are overlaid on a map which tell you the cloud formation over certain regions. That gives you an idea of whether there is going to be some rain, whether there are going to be storms and things like that.

 


Updated: July 12, 2019 — 6:14 pm
