Image fusion using segmentation
Downloads:
- image_fusion.zip (zip, 3917 kb)
- image_fusion.z01 (z01, 10276 kb)
- rahul.zip (zip, 707 kb)
http://www.wisdom.weizmann.ac.il/~vision/alumni/hassner/Fusion/
Abstract:
This project is an implementation of an algorithm for multi-focus image fusion in the spatial domain, based on iterative segmentation and edge information of the source images. The basic idea is to divide the images into smaller blocks, gather edge information for each block, and then select the region with greater edge information to construct the resultant ‘all-in-focus’ fused image. To improve the fusion quality further, an iterative approach is proposed. Each iteration selects the regions in focus with the help of an adaptive threshold while leaving the remaining regions for analysis in the next iteration. A further enhancement of the technique is achieved by making the number and size of blocks adaptive in each iteration. The pixels which remain unselected until the last iteration are then selected from the source images by comparing the edge activities in the corresponding segments of the source images. The performance of the method has been extensively tested on several pairs of multifocus images and compared quantitatively with existing methods. Experimental results show that the proposed method improves fusion quality by reducing loss of information by almost 50% and noise by more than 99%.
Introduction:
Image fusion is used in a wide variety of applications such as medicine, satellite imaging, remote sensing, machine vision, automatic change detection and biometrics. It is the process of combining multiple images into one single image containing more information than any of the individual source images. With existing image capturing devices, it is not always possible to obtain a single image with all the desired information. When capturing an image of a three-dimensional scene, it is desirable to have all the objects in the scene in focus. However, it is not always feasible to capture an all-in-focus image, since the optical lenses of imaging sensors, especially those with long focal lengths, have only a limited depth of field. The goal of image fusion is to integrate complementary multi-sensor, multi-temporal and/or multi-view data into a new image containing all the necessary information from the various source images. In multifocus image fusion, the aim is to obtain an all-in-focus image by acquiring information from the different focal planes of the source images and fusing them into one single image in which all the objects in the scene appear to be in focus. In this project, a novel approach to multifocus image fusion is proposed based on region-based edge information of the source images.
At first, the source images are segmented into smaller blocks.
Then edge information of each block is gathered and selection of any block from the source images is done by comparison of the corresponding edge activity.
Next, we introduce an adaptive threshold for comparison between the corresponding regions of the source images.
Lastly, an iterative method is proposed to facilitate the division of the required regions into an appropriate number of blocks and the subsequent selection of blocks based on an efficient adaptive threshold for comparison.
Each iteration preserves the subblocks of the source images which are in focus and then passes the remaining regions to the next iteration.
The resultant fused images are both quantitatively and visually better than those produced by various other algorithms.
Literature survey:
Image fusion can be as simple as taking a pixel-by-pixel average of the source images, but that often leads to undesirable side effects such as reduced contrast. Fusion methods can broadly be classified into frequency-domain and spatial-domain approaches.
Fusion can be implemented using various fusion rules, e.g.
the mean or max rule, where the fused coefficient is the average or maximum of the source coefficients respectively.
One can also take a weighted average instead, where the fused coefficient is a weighted average of the source coefficients, as sketched below.
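As a minimal illustration of these basic fusion rules (not the project's proposed method), assuming two registered grayscale source images of equal size held as NumPy arrays:

```python
import numpy as np

def fuse_mean(a, b):
    # Pixel-wise average of the two source images ("mean" rule).
    return (a.astype(np.float64) + b.astype(np.float64)) / 2.0

def fuse_max(a, b):
    # Pixel-wise maximum of the two source images ("max" rule).
    return np.maximum(a, b)

def fuse_weighted(a, b, w=0.6):
    # Weighted average with a fixed, illustrative weight w for the
    # first image and (1 - w) for the second.
    return w * a.astype(np.float64) + (1.0 - w) * b.astype(np.float64)
```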
In recent years, various multiscale transforms have become very popular, such as the wavelet, wavelet packet, curvelet and contourlet transforms. One approach takes a weighted average in the wavelet domain using fixed weights (e.g. 0.6 for CT and 0.4 for PET).
Another is a wavelet-based fusion method for multifocus images using a weighted-average fusion rule in which the weights are based on local statistical features such as the mean and standard deviation. Similarly, one can use weights based on local mean and energy to fuse medical and surveillance images in the wavelet packet domain.
Surveillance images have also been fused using the contourlet transform, and multifocus images by combining the curvelet and wavelet transforms, both using the maximum fusion rule.
The basic idea in all these transform-based methods is to perform a multiresolution decomposition of each source image, integrate these decompositions to form a composite representation, and finally reconstruct the fused image by performing the inverse multiresolution transform, as illustrated below. This type of algorithm can avoid discontinuities in the transition zone, but it is computationally expensive. Besides, frequency-domain algorithms may produce artifacts such as the Gibbs phenomenon.
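A minimal sketch of this decompose-fuse-reconstruct pattern, assuming the PyWavelets package and a simple maximum-magnitude rule on the detail bands (an illustrative choice, not any specific published method):

```python
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet="db1"):
    # Single-level 2-D wavelet decomposition of each source image.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(a.astype(np.float64), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(b.astype(np.float64), wavelet)

    # Average the approximation bands; keep the larger-magnitude
    # coefficient in each detail band (a "max" fusion rule).
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    coeffs = ((cA1 + cA2) / 2.0,
              (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))

    # The inverse transform reconstructs the fused image.
    return pywt.idwt2(coeffs, wavelet)
```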
The basic idea of the algorithm proposed in this project is to iteratively select, from one of the source images, the image block having greater edge information than the corresponding block of the other source image. The work is mainly focused on finding an optimal block size. As the fusion method operates in the spatial domain, it saves time compared to frequency-domain techniques, which need to transform the image to and from the frequency domain. Besides, instead of taking a weighted average of the source pixels, we propose to select one of the source pixels as it is, to avoid the blurring caused by the ‘average’ or ‘weighted average’ fusion rules.
Why Edge information:
Edges characterize boundaries and therefore have a fundamental importance in image processing.
Edges in images are areas with strong intensity contrasts: a jump in intensity from one pixel to the next.
Edge detection of an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties in an image.
In case of multifocus image fusion, if the edge information of the source images is correctly extracted, the subsequent task of interpreting the information content and detecting the in-focus regions becomes a lot easier.
There are many ways to perform edge detection.
In case of multifocus image fusion, the purpose of extracting edge information is to provide strong visual clues that can help the recognition process and can make a clear distinction between the in-focus regions of the source images.
Methodology:
In this project we have used the Canny edge detector.
The basic idea is to detect edges at the zero crossings of the second directional derivative of the smoothed image, taken in the direction of the gradient, at locations where the gradient magnitude of the smoothed image exceeds a threshold that depends on image statistics.
After extracting the edge information as illustrated in the earlier section, the source images are divided into a fixed number of blocks.
The images can, for example, be divided into 16 blocks. Next, the edge information obtained from the two source images is compared block by block, and the image block with the higher edge activity is selected to be part of the fused image.
However, certain blocks extracted from the different source images might contain an almost similar number of edge pixels, so the selection procedure needs to be refined; a sketch of the basic block-wise comparison follows.
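A minimal sketch of this block-wise selection, assuming OpenCV's Canny detector, two pre-registered 8-bit grayscale source images and an illustrative 4x4 block grid (the grid size and Canny thresholds are assumptions of this sketch, not values fixed by the project):

```python
import numpy as np
import cv2

def edge_map(img, low=50, high=150):
    # Canny edge detection on an 8-bit grayscale image;
    # returns a binary edge map (0 or 255).
    return cv2.Canny(img, low, high)

def fuse_by_blocks(img1, img2, grid=4):
    # Divide both images into grid x grid blocks and copy, for each
    # block, the source block with the higher edge-pixel count.
    e1, e2 = edge_map(img1), edge_map(img2)
    h, w = img1.shape
    bh, bw = h // grid, w // grid
    fused = np.zeros_like(img1)
    for i in range(grid):
        for j in range(grid):
            rows = slice(i * bh, (i + 1) * bh if i < grid - 1 else h)
            cols = slice(j * bw, (j + 1) * bw if j < grid - 1 else w)
            n1 = np.count_nonzero(e1[rows, cols])
            n2 = np.count_nonzero(e2[rows, cols])
            src = img1 if n1 >= n2 else img2
            fused[rows, cols] = src[rows, cols]
    return fused
```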
Algorithm:
In this algorithm, selection is made in three iterations, described as follows.
Firstly, the source images are divided into a certain number of blocks.
Then, the difference between edge information from the two source images is computed for each block.
Next, the mean of all these differences is calculated and set as the adaptive threshold (T).
Now, the differences are compared with this threshold T and only those blocks for which the difference exceeds the threshold are chosen and incorporated into the final fused image from their corresponding source image.
The rest of the blocks are passed on to the next iteration.
In the second iteration, the mean of the differences of the regions passed over from the first iteration is calculated and set as the new threshold. Once again, the difference between the number of edge pixels in corresponding image blocks from the different source images is compared with the threshold, and if the difference is higher than the threshold, the block with the higher edge information is incorporated into the fused image.
In the third iteration, all the blocks for which no decision has been made are analyzed, and the blocks with relatively higher edge information are selected to be part of the fused image.
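A minimal sketch of this three-iteration selection with an adaptive threshold, reusing the edge maps from the previous sketch; the fixed block grid and the handling of undecided blocks are simplifying assumptions here, whereas the project also adapts the number and size of blocks in each iteration:

```python
import numpy as np

def iterative_fuse(img1, img2, e1, e2, grid=4):
    # img1, img2: registered grayscale source images.
    # e1, e2: their binary edge maps (e.g. from edge_map above).
    h, w = img1.shape
    bh, bw = h // grid, w // grid
    fused = np.zeros_like(img1)

    # Collect every block with its per-image edge-pixel counts.
    blocks = []
    for i in range(grid):
        for j in range(grid):
            rows = slice(i * bh, (i + 1) * bh if i < grid - 1 else h)
            cols = slice(j * bw, (j + 1) * bw if j < grid - 1 else w)
            n1 = np.count_nonzero(e1[rows, cols])
            n2 = np.count_nonzero(e2[rows, cols])
            blocks.append((rows, cols, n1, n2))

    undecided = blocks
    for _ in range(2):  # iterations 1 and 2
        # Adaptive threshold T: mean absolute edge-count difference
        # over the blocks that are still undecided.
        diffs = [abs(n1 - n2) for _, _, n1, n2 in undecided]
        T = float(np.mean(diffs)) if diffs else 0.0
        remaining = []
        for rows, cols, n1, n2 in undecided:
            if abs(n1 - n2) > T:
                # Difference exceeds the threshold: take the block
                # from the source with the higher edge count.
                src = img1 if n1 > n2 else img2
                fused[rows, cols] = src[rows, cols]
            else:
                remaining.append((rows, cols, n1, n2))
        undecided = remaining

    # Iteration 3: decide every remaining block by whichever source
    # has the relatively higher edge count.
    for rows, cols, n1, n2 in undecided:
        src = img1 if n1 >= n2 else img2
        fused[rows, cols] = src[rows, cols]
    return fused
```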
Downloads:
- papers.zip (zip, 5732 kb)
- papers.z01 (z01, 9437 kb)
- papers.z02 (z02, 9437 kb)
- papers.z03 (z03, 9437 kb)
- papers.z04 (z04, 9437 kb)
- papers.z05 (z05, 9437 kb)
- papers.z06 (z06, 9437 kb)
- papers.z07 (z07, 9437 kb)
- chapter1imagefusion.docx (docx, 1433 kb)