The research presented in this dissertation focuses on multisensor image-level fusion. The objective is to design a reliable method that integrates prominent features intelligently without introducing distortion or losing information. In the presence of noise, fusion performance degrades severely. Sifting noise out of images is a difficult problem, in part because noise varies without a traceable form. In addition, blind denoising before or after fusion can cause unrecoverable loss of image features. To address these problems, we develop an adaptive wavelet-based fusion approach that distinguishes noise components from signal and combines them differently. Using a global threshold derived from an analysis of subband noise, coefficients are classified as signal-intensive or noise-intensive. We aggregate signal-intensive coefficients by minimizing the linear mean square error. Noise-intensive coefficients, in contrast, are combined according to their consistency: we develop a voting-shrinkage scheme that retains features confirmed by a majority of the source images and eliminates obvious noise (inconsistent coefficients). By integrating coefficients according to their noise conditions, we ensure that only coefficients that reconstruct salient features are retained, whereas noise components are discarded. We demonstrate the performance of our approach with respect to noise suppression and feature preservation on multispectral images. Comparisons with state-of-the-art fusion methods show that adaptive fusion outperforms existing techniques, especially when noise is significant.
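The classify-then-combine rule described above can be illustrated with a minimal NumPy sketch for a single wavelet subband from two sources. The function name, the energy-based weighting (a simple stand-in for the LMSE combination), and the sign-agreement test (a stand-in for the majority-voting consistency check) are illustrative assumptions, not the dissertation's exact formulation; the global threshold `T` is assumed to come from the subband noise analysis.

```python
import numpy as np

def fuse_subband(a, b, T):
    """Fuse two subband coefficient arrays a, b using a global threshold T.

    Hypothetical sketch: T is assumed to be derived from subband noise
    analysis. Coefficients above T in either source are treated as
    signal-intensive; the rest as noise-intensive.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    fused = np.zeros_like(a)

    # Classification against the global threshold.
    sig = (np.abs(a) > T) | (np.abs(b) > T)

    # Signal-intensive: energy-weighted average (stand-in for the
    # linear-MSE-optimal combination).
    eps = 1e-12
    w = a**2 / (a**2 + b**2 + eps)
    fused[sig] = (w * a + (1.0 - w) * b)[sig]

    # Noise-intensive: keep coefficients the sources agree on (same sign),
    # shrink inconsistent ones to zero (voting-shrinkage stand-in).
    keep = (~sig) & (np.sign(a) == np.sign(b))
    fused[keep] = 0.5 * (a + b)[keep]
    return fused
```

For example, with `T = 1.0`, a pair of large coefficients is blended, a pair of small same-sign coefficients is averaged, and a small sign-conflicting pair is zeroed out as noise.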