Automatic Correction of Intensity Inhomogeneities from Transmission Distribution in Medical Images

The field of image processing has made significant progress in the quantitative analysis of biomedical images over the last 30 years. Scientists are collecting large amounts of electron microscopy image data to gain a better understanding of neuron organization. However, much of this image data is distorted by artificial light intensity inhomogeneity, including magnetic resonance imaging (MRI), computed tomography (CT), X-ray, and transmission electron microscopy (TEM) images. In this paper, we propose a novel automatic non-uniformity correction method that estimates the illumination field using image intensity gradients and spatial information. The proposed method is fast to compute, and its results surpass those of state-of-the-art methods. Based on the intensity gradients in medical images, we propose a non-parametric approach for automatic illumination correction. The experiments demonstrate that the proposed method is not only faster but also eliminates the non-uniformity well.


Introduction
With the application of medical imaging technology to clinical diagnosis, automatic analysis of the captured images using computer vision and pattern recognition technologies is becoming increasingly important. Intensity non-uniformity is an important issue in medical images [1]. It can affect post-processing steps, e.g., medical image segmentation [2] and image inpainting [3].
There exist a large number of methods to correct intensity non-uniformity in medical images. Generally, these methods are classified into five categories: (1) low-pass filtering based methods, which must be used with care to avoid intensity distortion and artificial artifacts in the corrected images [4]; (2) surface fitting based methods, based on the intensities of major tissues or the image gradients [5]; (3) statistical modeling based methods, which model the non-uniformity distribution statistically [6]; (4) image intensity histogram based methods, which exploit the law of the histogram distribution [7]; and (5) other methods [8]. In this paper, we propose a novel method to solve the non-uniformity issue. A model image is characterized by bilateral filtering. From this model image, we obtain a non-parametric method to estimate the non-uniformity surface in the medical images. Then, the corrected medical images are obtained by removing the estimated surface. Our method is also remarkably accurate, as shown by our results on simulated and real MR brain images, and real TEM rabbit retina images.
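The general pipeline just described (smooth the image to obtain a model of the illumination surface, then remove that surface from the input) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it substitutes a separable Gaussian blur for the paper's bilateral filtering, assumes a multiplicative bias model, and all function names are invented for the example.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D normalized Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth2d(img, sigma):
    """Separable Gaussian blur with edge-replicated padding."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    p = np.pad(img, r, mode="edge")
    out = np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, p)
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, out)
    return out

def correct_nonuniformity(img, sigma=8.0):
    """Estimate a smooth illumination surface and divide it out.

    Assumes a multiplicative bias model: observed = true * field.
    """
    field = np.maximum(smooth2d(img, sigma), 1e-6)
    corrected = img / field * field.mean()  # rescale to preserve mean brightness
    return corrected, field

# Synthetic demonstration: a flat scene under a linear illumination ramp.
truth = np.full((128, 128), 100.0)
ramp = np.linspace(0.5, 1.5, 128)[None, :]  # bias field varying along columns
observed = truth * ramp
corrected, field = correct_nonuniformity(observed)
```

On this synthetic example the intensity variation of the flat region shrinks by roughly an order of magnitude, which is the qualitative behaviour any surface-removal correction should exhibit.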

Imaging model
In the optical model [9], the acquired image can be modelled as a composition of two components. One component is the direct transmission of light from the object, and the other is the transmission due to scattering from the particles of the medium (e.g., air or water). Mathematically, it can be written as

I(x) = J(x) t(x) + A (1 - t(x)),  (1)

where I is the achieved image, and J is the scene radiance or haze-free image that we want to recover.
t(x) = e^{-beta d(x)} is the transmission along the cone of vision, where beta is the attenuation coefficient of the medium, and d(x) is the distance between the camera and the object. A is the veiling colour constant, and x = (x, y) is a pixel. The optical model assumes a linear correlation between the reflected light and the distance between the object and the observer.

(Authors: Huimin Lu*, Yujie Li, Shiyuan Yang and Seiichi Serikawa, Department of Electrical Engineering and Electronics, Kyushu Institute of Technology, E4-403, 1-1 Sensui, Tobata, Kitakyushu 804-8550, Japan)
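A minimal numerical sketch of this two-component model, with t(x) = exp(-beta d(x)); the variable names mirror the symbols in the text, and the numeric values are arbitrary illustrations:

```python
import numpy as np

def transmission(d, beta):
    """t(x) = exp(-beta * d(x)): fraction of light surviving the medium."""
    return np.exp(-beta * d)

def acquired_image(J, d, A, beta):
    """Optical model Eq. (1): direct transmission plus scattered veiling light."""
    t = transmission(d, beta)
    return J * t + A * (1.0 - t)

# Scene radiance J, per-pixel distance d, veiling constant A.
J = np.array([[0.2, 0.9], [0.5, 0.7]])
A = 1.0
near = acquired_image(J, d=np.zeros_like(J), A=A, beta=0.1)     # d = 0
far = acquired_image(J, d=np.full_like(J, 1e3), A=A, beta=0.1)  # d -> infinity
```

The two limiting cases make the model's behaviour concrete: at zero distance the acquired image equals the scene radiance, while at large distance every pixel converges to the veiling constant A.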

Estimating transmission
Using the dark channel, the distance from the camera to the object can be estimated. Thus, we utilize the minimum intensity of the color channels within a local patch. Consequently, the dark channel J^dark(x) for the image J(x) can be defined as

J^dark(x) = min_{c in {r,g,b}} ( min_{y in Omega(x)} J^c(y) ),  (2)

where J^c(y) refers to pixel y in color channel c in {r, g, b}, and Omega(x) is a local patch centered at x. Taking the min operation in the local patch on the haze imaging function of Eq. (1), we have

min_{y in Omega(x)} I^c(y) = t~(x) min_{y in Omega(x)} J^c(y) + (1 - t~(x)) A^c,  (3)

where t~(x) is the patch's transmission and A^c is the homogeneous background light. Performing one more min operation among all three color channels gives

min_c ( min_{y in Omega(x)} I^c(y) / A^c ) = t~(x) min_c ( min_{y in Omega(x)} J^c(y) / A^c ) + (1 - t~(x)).  (4)

Since the dark channel of a haze-free image tends to zero, the transmission of each patch can finally be written as

t~(x) = 1 - min_c ( min_{y in Omega(x)} I^c(y) / A^c ).  (5)

We set V(x) = A^c (1 - t(x)) as the transmission veil and W = min_c (I^c(x)) as the minimum color component of I(x). Thus, we have 0 <= V(x) <= W(x). For a grayscale image, W = I. Utilizing the guided trigonometric bilateral filter (GTBF) discussed later, we can compute T(x) = median(x) - GTBF(|W - median(x)|). Then, we can calculate V(x) = max{min[wT(x), W(x)], 0}, where w is a parameter in (0, 1). A^c is usually assumed to be the pixel intensity with the highest brightness value in an image. However, this simple assumption often renders erroneous results in practice due to the presence of self-luminous organisms. Thus, we compute the brightest pixel value among all local minimums corresponding to A^c as follows:

A^c = max_x ( min_{y in Omega(x)} I^c(y) ),  (6)

where I^c(y) is the local color component of I(x) in each patch.
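The dark-channel computations of Eqs. (2)-(6) can be sketched in numpy as below. This is an illustrative implementation under stated assumptions, not the paper's code: the patch minimum stands in for the min over Omega(x), the parameter omega keeps a small residual amount of haze (a common choice in dark-channel methods), and the GTBF refinement step is omitted.

```python
import numpy as np

def local_min(channel, r):
    """Minimum filter over a (2r+1)x(2r+1) patch, edge-padded: min over Omega(x)."""
    h, w = channel.shape
    p = np.pad(channel, r, mode="edge")
    out = np.full((h, w), np.inf)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def estimate_airlight(img, r=3):
    """Eq. (6): per channel, the brightest value among all local minimums."""
    return np.array([local_min(img[..., c], r).max() for c in range(img.shape[2])])

def estimate_transmission(img, A, r=3, omega=0.95):
    """Eq. (5): one minus the normalized dark channel of the input image."""
    dark = np.min(np.stack([local_min(img[..., c] / A[c], r)
                            for c in range(img.shape[2])]), axis=0)
    return 1.0 - omega * dark

# Synthetic check: forward-synthesize a hazy image with known transmission.
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, (64, 64, 3))
J[..., 0] = 0.0                      # force the dark channel of J to zero
A_true = np.array([0.8, 0.8, 0.8])
t_true = 0.6
I = J * t_true + A_true * (1.0 - t_true)
I[0:20, 0:20, :] = A_true            # a fully veiled region, so Eq. (6) can find A
A_est = estimate_airlight(I)
t_est = estimate_transmission(I, A_est)
```

Because the synthetic scene has a zero dark channel, the recovered transmission away from the fully veiled region matches 1 - omega * (1 - t_true), i.e. the true transmission up to the deliberate omega bias.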

Results and Discussion
We test our algorithm against state-of-the-art methods. In the first experiment, we compare with the template-based non-uniformity correction method proposed by IDeA Lab (http://idealab.ucdavis.edu/software/bias_correction.php). The results in Figure 1(b) show that there is a darker zone in the centre of the template-based bias correction; this issue is overcome by our proposed method, as shown in Figure 1(c). In our experiments, we also run our method on real data. The results confirm the effectiveness of our method: the template-based correction method can hardly represent the bias field accurately in these cases.

Figure 1. BrainWeb simulated image: (a) uncorrected MRI image showing bias, (b) corrected image after template-based bias correction, (c) our proposed method.

Figure 2(a) shows a TEM image of a portion of the rabbit retina. Figure 2(b) is the histogram distribution of Figure 2(a). Figure 2(c) and Figure 2(e) are the corrected images of the sparseness of gradient distribution-based method and our proposed method, respectively.

Figure 2(d) and Figure 2(f) are the corresponding estimated bias field images.

Figure 2. Corrections of the bias field images: (a) TEM image, (b) histogram image, (c) sparseness of gradient distribution-based bias correction, (d) bias field image of Figure 2(c), (e) our proposed method, (f) bias field image of Figure 2(e).

Comparing Figure 2(c) with Figure 2(e), we find that the corrected image of the former is darker than that of the latter, and the details recovered by our method are much clearer.