Image Processing Basics
Saudi Board of Radiology: Physics Refresher Course. Kostas Chantziantoniou, MSc, DABR, Head, Imaging Physics Section, King Faisal Specialist Hospital & Research Centre, Biomedical Physics Department, Riyadh, Kingdom of Saudi Arabia. Image Processing Basics
Image Processing: Basics  There are many factors that determine the diagnostic usability of a digital image: exposure techniques, detector quality (technology dependent), scatter, viewing conditions, quality of readers, number of readers, and image processing.
Image Processing: Basics  Why do we need image processing? Since the digital image is “invisible”, it must be prepared for viewing on one or more output devices (laser printer, monitor, etc.); the digital image can be optimized for the application by enhancing or altering the appearance of structures within it (based on body part, diagnostic task, viewing preferences, etc.); and it may be possible to analyze the image in the computer and provide cues to the radiologist to help detect important/suspicious structures (e.g. Computer-Aided Diagnosis, CAD).
Image Processing: Transformations  There are three types of image processing (transformation algorithms) used: image-to-image transformations, image-to-information transformations, and information-to-image transformations.
Image Processing: Image-to-Image Transformations  Image In → Image Out: enhancement (make the image more useful or pleasing); restoration (compensate for known image degradations to produce an image that is “closer” to the (aerial) image that came out of the patient, e.g. deblurring, grid line removal); geometry (scaling/sizing/zooming, morphing one object into another, distorting or altering the spatial relationship between pixels).
Image Processing: Image-to-Image Transformations  There are three types of image-to-image transformations: point transformation, local transformation, and global transformation.
Image Processing: Image-to-Image Transformations  Point Transformation (uses look-up tables, LUTs, to adjust the tonescale or image contrast): the shape of the LUT depends on the desired “look” of the output image and the structure of the histogram. A minimal sketch of this idea follows.
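As an illustration (not from the slides), here is a minimal sketch of a point transformation in Python/NumPy. It assumes a 12-bit image stored as an integer array; the linear full-range LUT is an arbitrary illustrative choice, and `apply_lut` is a hypothetical helper name.

```python
import numpy as np

def apply_lut(image, lut):
    """Point transformation: each output pixel depends only on the
    corresponding input pixel, looked up in a table (LUT)."""
    return lut[image]

# Simple linear LUT for a 12-bit image (code values 0..4095),
# mapping the full input range onto an 8-bit display range.
lut = np.round(np.linspace(0, 255, 4096)).astype(np.uint8)

# Hypothetical 12-bit image, just for demonstration.
image = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)
display_image = apply_lut(image, lut)  # same shape, 8-bit code values
```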
Image Processing: Image-to-Image Transformations Image contrast  window
Image Processing: Image-to-Image Transformations Image brightness window
Image Processing: Image-to-Image Transformations Non-linear LUTs can be used as well (but more complex to implement)
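As an assumed example of a non-linear LUT (the slides do not specify a particular shape), a gamma curve is one common choice; the exponent below is purely illustrative.

```python
import numpy as np

# Non-linear LUT: a gamma curve over 12-bit code values (0..4095).
# gamma < 1 lifts the dark regions; gamma > 1 deepens them.
gamma = 0.6  # illustrative value, not from the slides
code_values = np.arange(4096) / 4095.0
nonlinear_lut = np.round(255 * code_values ** gamma).astype(np.uint8)

# Applied exactly like any other point transformation:
# display_image = nonlinear_lut[image]
```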
Image Processing: Image-to-Image Transformations What LUT shape should be used?
Image Processing: Image-to-Image Transformations Local Transformation (Edge Enhancement, Zooming)
Image Processing: Image-to-Image Transformations  Edge Enhancement (Unsharp Masking Technique)
Image Processing: Image-to-Image Transformations  Creating a blurred image: the pixels within the kernel are averaged to determine the value of the center pixel in the output image; the process is repeated for all pixels in the image.
Image Processing: Image-to-Image Transformations  Kernel size has a large effect on the level of smoothing that is performed; the sum of all pixel weight factors in the kernel must equal 1 (see the sketch below).
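A minimal sketch of this kernel-averaging step, assuming a NumPy image and using SciPy's generic convolution; `box_blur` and the kernel size are illustrative choices, not from the slides.

```python
import numpy as np
from scipy.ndimage import convolve

def box_blur(image, kernel_size=5):
    """Blur by replacing each pixel with the average of the pixels
    inside an n x n kernel centred on it."""
    kernel = np.ones((kernel_size, kernel_size))
    kernel /= kernel.sum()  # all weight factors sum to 1
    return convolve(image.astype(float), kernel, mode='nearest')

# A larger kernel produces a smoother (more blurred) result:
# blurred_small = box_blur(image, 3)
# blurred_large = box_blur(image, 15)
```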
Image Processing: Image-to-Image Transformations  Creating an “amplified” difference image
Image Processing: Image-to-Image Transformations  Creating the final edge-enhanced output image
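Putting the three steps together, a sketch of the unsharp-masking pipeline: blur, subtract to form the difference (detail) image, amplify it, and add it back to the original. The gain value is illustrative, and `box_blur` is the helper sketched above.

```python
def unsharp_mask(image, kernel_size=5, gain=1.5):
    """Edge enhancement by unsharp masking:
    output = original + gain * (original - blurred)."""
    image = image.astype(float)
    blurred = box_blur(image, kernel_size)  # low-frequency version (see sketch above)
    detail = image - blurred                # difference (edge) image
    return image + gain * detail            # amplified detail added back
```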
Image Processing: Image-to-Image Transformations Global Transformation (Spatial frequency “Fourier” decomposition):
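As an assumed illustration of a global transformation (the slides only name the Fourier decomposition), the whole image can be decomposed with a 2D FFT, the frequency components reweighted, and the image reconstructed. The cutoff and gain below are arbitrary illustrative numbers.

```python
import numpy as np

def frequency_filter(image, cutoff=0.1, high_gain=2.0):
    """Global transformation: boost spatial frequencies above `cutoff`
    (in cycles/pixel; the Nyquist limit is 0.5) and reconstruct the image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    x = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    radius = np.sqrt(x ** 2 + y ** 2)       # distance from zero frequency
    weights = np.where(radius > cutoff, high_gain, 1.0)
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * weights))
    return np.real(filtered)
```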
Image Processing: Image-to-Information Transformations  Image In → Information (Data) Out: image statistics (histograms), image compression, image analysis (image segmentation, feature extraction, pattern recognition), computer-aided detection and diagnosis (CAD).
Image Processing: Image-to-Information Transformations  Image Statistics (Histogram): the histogram is the fundamental tool for image analysis and image processing. The histogram is created by examining each pixel in the digital image and counting the number of occurrences of each pixel value (or code value).
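A minimal sketch of that counting step, assuming integer code values (here a 12-bit range); the function name is a hypothetical helper.

```python
import numpy as np

def code_value_histogram(image, n_values=4096):
    """Count how many times each code value (0 .. n_values-1) occurs."""
    return np.bincount(image.ravel(), minlength=n_values)

# counts[cv] is the number of pixels whose code value equals cv;
# counts.sum() equals the total number of pixels in the image.
```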
Image Processing: Image-to-Information Transformations  Low-contrast image histograms: low-contrast images produce tall and narrow histograms; the histogram covers a short range of pixel values. High-contrast image histograms: high-contrast images produce short and flat (wide) histograms; the histogram covers a wide range of pixel values. NOTE: histograms do not depend on the location of the pixels (both high-contrast images shown above have the same histogram).
Image Processing: Image-to-Information Transformations  Image Compression: medical images can contain huge amounts of data (CT image: 0.25 MB, CR chest image: 8 MB, digital mammogram: 32 MB). Image compression aims to reduce the total number of bits needed to represent the image without compromising image quality, which in turn reduces storage requirements, reduces the time required to transmit images, and uses existing network bandwidth more effectively. Image compression is more than sampling at a lower rate or throwing away pixels, or quantizing each pixel more coarsely or reducing the precision of each pixel.
Image Processing: Image-to-Information Transformations  Why can images be compressed? Redundancy: relationships exist between pixels in an image based on their location; algorithms can be spatial (statistical), temporal, or spectral (wavelet) in nature. Irrelevancy: pixels included in the image that do not add to the diagnostic information.
Image Processing: Image-to-Information Transformations  Two types of image compression are used. Lossless (reversible) compression uses statistical redundancy only; compression ratios range from 2:1 to 4:1; the decompressed (reconstructed) image is numerically identical to the original. Lossy (irreversible) compression uses statistical redundancy and irrelevancy; compression ratios range from 6:1 to 20:1 and higher; the decompressed image is degraded relative to the original.
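The slides do not name a specific algorithm; as a toy illustration of lossless (reversible) coding that exploits spatial redundancy, here is a run-length encoder and decoder. Real medical-image codecs (e.g. JPEG-LS, JPEG 2000) are far more sophisticated; the function names and sample row are purely illustrative.

```python
import numpy as np

def rle_encode(values):
    """Toy lossless coder: store (value, run length) pairs."""
    runs = []
    prev, count = values[0], 1
    for v in values[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    """Exact inverse of rle_encode: expand each run back to pixels."""
    return np.concatenate([np.full(count, value) for value, count in runs])

row = np.array([0, 0, 0, 5, 5, 7, 7, 7, 7])
assert np.array_equal(rle_decode(rle_encode(row)), row)  # reversible: bit-identical
```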
Image Processing: Image-to-Information Transformations Image Analysis (Segmentation) Original image Original image with segmentation data
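The segmentation slide is shown only as a figure; as an assumed, minimal example of image analysis, a global threshold can separate a bright structure from the background and feed later feature extraction. The threshold value and function name are illustrative.

```python
import numpy as np

def threshold_segment(image, threshold):
    """Simplest segmentation: label pixels above a code-value threshold."""
    return image > threshold  # boolean mask: True = inside the segmented region

# The mask can then feed feature extraction, e.g. the segmented area in pixels:
# area_pixels = threshold_segment(image, 1200).sum()
```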
Image Processing: Information-to-Image Transformations  Information (Data) In → Image Out: decompression of compressed image data; reconstruction of image slices from CT or MRI raw data; computer graphics, animations, and virtual reality (synthetic objects).
Image Processing: Information-to-Image Transformations 3D Image Reconstruction
Image Processing: Information-to-Image Transformations Image Synthesis
Image Output (Reconstruction): Basics  Why do we need to reconstruct the image? The digital image is still a 2D array of numbers (pixel values); if it is to be viewed by a human it must be converted back to an analog image on some display device and/or medium (e.g. CRT monitor, hardcopy); so the digital image must be reconstructed for the output device.
Image Output (Reconstruction): What is the problem? Nuclear medicine image (96 x 128, 6 bit) to be printed on a laser printer film (4k x 5k, 12 bit) The problem is: how do we match the gray scales (tonescale)? how do we match the image size?
Image Output (Reconstruction): What is the problem? CR image (2k x 2.5k, 12 bit) to be displayed on a CRT monitor (1.2k x 1k, 8 bit)
Image Output (Reconstruction): Tonescale  Output system tonescale depends on: the image processing applied (the output device should not change any post-processing that was done on the image prior to this step), the calibration of the output device (very important and can vary with time), the dynamic range of the output device, the viewing conditions, and the observer.
Image Output (Reconstruction): Tonescale  Output Calibration (needs to be performed frequently): every output device has a LUT that relates its output pixel values to the input pixel values that generated them (calibration curves shown for a laser printer and a CRT monitor).
Image Output (Reconstruction): Tonescale  Dynamic Range: every output device has a different dynamic range that must be considered when selecting or calibrating the device LUT. Dynamic range = (highest signal value the device can produce) / (lowest signal value the device can produce). For film, the usable optical density range is about 3.0 OD, so dynamic range = antilog(3.0) = 1000; therefore the dynamic range of film is 1,000:1.
Image Output (Reconstruction): Tonescale Must use LUTs that compensate for differences in  dynamic range : CRT monitors: non-linear Laser printers: linear or non-linear (to introduce additional contrast)
Image Output (Reconstruction): Tonescale Viewing Conditions
Image Output (Reconstruction): Output Geometry Image Scaling Techniques in order to  display  images properly on the output device, the image may  have to be  scaled  by the use of one of the following techniques: decimation (sub-sampling) interpolation
Image Output (Reconstruction): Decimation this technique is required when image matrix size is too  big  for output  device method of decimation is determined by degree of reduction (may have  image quality concerns)
Image Output (Reconstruction): Decimation Methodology
Image Output (Reconstruction): Decimation  Imaging concerns: decimation can be dangerous; high-frequency signals can be aliased during sub-sampling and cause artifacts. Proper decimation requires that the digital image be smoothed (blurred) first to remove any signal frequencies that are higher than half of the new sampling frequency (see the sketch below).
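A sketch of “proper” decimation as described above, assuming a NumPy image: smooth first so that frequencies above the new Nyquist limit are suppressed, then sub-sample. The Gaussian smoothing and the sigma rule of thumb are illustrative choices, not from the slides.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decimate(image, factor):
    """Reduce matrix size by an integer factor: blur, then sub-sample.
    The blur suppresses frequencies above half the new sampling frequency
    (the new Nyquist limit), which would otherwise alias."""
    smoothed = gaussian_filter(image.astype(float), sigma=factor / 2.0)  # rule-of-thumb sigma
    return smoothed[::factor, ::factor]

# e.g. a 2048 x 2048 CR image down to 1024 x 1024:
# small = decimate(cr_image, 2)
```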
Image Output (Reconstruction): Interpolation  Why do we need interpolation? The digital image is too small for the output device and we have to scale it up. The problem is that when we scale the image up, we have new pixels that require new pixel values, and those values should make the new image appear continuous in space and in gray scale; note that output devices are analog devices (e.g. laser printer, CRT monitor). Three interpolation techniques are often used: nearest-neighbor interpolation (pixel replication), linear (or bilinear) interpolation, and cubic (spline) interpolation (nonlinear interpolation).
Image Output (Reconstruction): Interpolation  What are the effects of interpolation? NOTE: the human eye-brain system is an efficient interpolator (compare the figure before and after blurring your eyes).
Image Output (Reconstruction): Interpolation  What does an interpolator do? It creates enough pixels in the new digital image that the matrix sent to the output device produces an image of the right size, and it generates new pixels with gray values in such a way that when the display aperture (electron gun for CRTs, laser spot for laser cameras) marks the output medium, it creates the impression that the image is continuous in space and continuous in values. NOTE: excessive interpolation can degrade image quality.
Image Output (Reconstruction): Interpolation  The interpolator uses the known pixel values to calculate or produce new pixels anywhere within the image; interpolation adds no new information or detail to the image.
Image Output (Reconstruction): Nearest Neighbor Interpolation Methodology
Image Output (Reconstruction): Bilinear Interpolation Methodology
Image Output (Reconstruction): Cubic Interpolation Methodology
Image Output (Reconstruction): Interpolation  All reconstructions of analog signals are approximations; which interpolator to use depends on the application needs. Nearest neighbor: maintains/inserts hard edges around pixels (good for text and some images, such as nuclear medicine). Linear: smoothing effect, sometimes excessive (good to suppress high-frequency structures or noise); very easy to implement. Cubic: can produce very accurate reconstructions but is more complex and costly to implement.
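A sketch contrasting the three interpolators using SciPy's spline zoom (order=0 is nearest neighbor, order=1 is bilinear, order=3 is cubic spline); the zoom factor and the random test image are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

# Illustrative small image, roughly the size of a nuclear medicine matrix.
image = np.random.randint(0, 64, size=(96, 128)).astype(float)

nearest  = zoom(image, 4, order=0)  # pixel replication: hard edges preserved
bilinear = zoom(image, 4, order=1)  # linear: smoother, may suppress fine detail
cubic    = zoom(image, 4, order=3)  # cubic spline: most accurate, most costly

# All three results are 384 x 512; none contains new information,
# only new pixel values estimated from the known ones.
```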
Image Output (Reconstruction): Display Aperture  Output device aperture size does affect image quality and perceived image resolution.
Image Output (Reconstruction): Addressability/Resolution  Just because an output device has 2k x 2.5k pixels does not mean you can see all of them. Addressability (matrix size) is the data capacity of the output device, characterized by the number of values that are addressable by the user (a 4k x 5k laser printer has about 4000 x 5000 = 20,000,000 addressable points, or pixels, over its usable area). Resolution is the ability to see or measure details on the output device; it is more important than addressability since it determines the usefulness of a given output device, and it is usually lower than addressability (due to the effects of the display aperture).
