

Source code for 2D wavelets, wavelet packets (complete or overcomplete), complex wavelets, and complex wavelet packets:

[wavelet_code.zip] (after reading below you can click here to browse the source code through doxygen).
 

Overview: Wavelets and wavelet packets can be grown overcomplete (each overcomplete transform remains invertible, etc.). Image boundaries are handled correctly. One test routine checks invertibility and another does denoising with hard-thresholding. The code is commented and should be easy to use and modify. Please report bugs and mistakes here. Please use the subject line "Regarding wavelet code". Thank you.
 


Manifest:

o wav_basic.c: basic filtering, decimation and upsampling routines.

o wav_basic.h: interface to wav_basic.c.

o wav_trf.c: transform routines.

o wav_trf.h: interface to wav_trf.c.

o wav_filters.h: where filter banks and their properties are defined.

o wav_filters_extern.h: interface to wav_filters.h.

o wav_gen.h: some parameters and min/max macros.

o macros.h: pointer check macro.

o test_transforms.c: main routine for testing transforms and invertibility.

o test_denoise.c: main routine for the example denoising applications.

o alloc.c: pointer allocation/deallocation.

o alloc.h: interface to alloc.c.

o Makefile

o peppers.raw: 512x512 grayscale test image.


Demo matlab code for linear worst-case estimators: please see the paper here.
Example derivation of the estimators and some figures from the paper in matlab.

Demo matlab code for subspaces of quantization artifacts (conference version of the above): please see the paper here.
Deblocking and deringing demo in matlab.



Image Recovery Proof of Concept Demo using DCTs:

[recover.zip]: recovers a single missing block at an arbitrary location in the image; see more below

(after reading the instructions below you can click here to browse the source code through doxygen).

 

[recover_arbitrary.zip]: recovers arbitrary-shape missing regions.

This version overestimates T_0 and is slower for block recovery

(instructions for this version are in the archive).
 

General Note: There are many ways to make this code run much faster. One can do fast transforms, fast overcomplete transforms, increase dt, etc., to get orders of magnitude improvements (read factor of ~100). The demo code is a proof of concept implementation and does none of that. I will implement a fast version in the future (in the unlikely event that I have some free time :).

 

Overview: This code contains a proof of concept implementation of the ideas in:

  1. Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part I - Theory," IEEE Transactions on Image Processing, in review. [.pdf]

  2. Onur G. Guleryuz, "Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions and Iterated Denoising: Part II - Adaptive Algorithms," IEEE Transactions on Image Processing, in review. [.pdf]

 

Please try it on different types of image regions. Please report bugs and mistakes here. Please use the subject line "Regarding image recovery code". Thank you.
 

Notes:

  1. First run ./recover_demo without any options to see the usage info.

  2. Then try: ./recover_demo lena.raw lena_filled.raw 512 512 167 110 16

  3. Open lena_filled.raw in your favorite image viewer and go to row 167, column 110 (counting from 0 for both). The 16x16 region there has been erased and filled in by the algorithm; compare it with the original. Also open mean_filled.raw to compare with the image in which that block is filled only by the surrounding mean. (On my linux machine I got: "Performed 458 iterations Recovered region, PSNR 26.526 dB".)

  4. If you do not trust the program, you can erase that portion of lena.raw yourself and then try again. In that case the PSNRs reported by the program will be wrong and you will have to calculate the PSNR on your own.


Manifest:

o dct_trf.c: code that handles forward/inverse dcts.

o dct_trf.h: interface to dct_trf.c.

o init_vals.c: calculates rudimentary statistics to initialize the algorithm.

o init_vals.h: interface to init_vals.c.

o layer.c: two ways of implementing my layer recovery iterations.

o layer.h: interface to layer.c.

o recover.c: main routine.

o support_routines.c: pointer allocation/deallocation, etc.

o support_routines.h: interface to support_routines.c.

o threshold.c: simple hard-thresholding.

o threshold.h: interface to threshold.c.

o Makefile

o lena.raw, barbara.raw: 512x512 grayscale test images.



Weighted Averaging for Denoising with Overcomplete Dictionaries Proof of Concept Demo using DCTs:

[weighted.zip] : does weighted averaging for denoising with an overcomplete DCT dictionary.


Overview: This code contains a proof of concept implementation of the ideas in:

Onur G. Guleryuz, "Weighted Averaging for Denoising with Overcomplete Dictionaries," IEEE Transactions on Image Processing, December 2007. [.pdf]

I had to put this code together very quickly :) Please report bugs and mistakes here. Please use the subject line "Regarding weighted denoising code". Thank you.

Notes:

  1. First run ./weighted_dct_denoise without any options to see the usage info.

  2. Then try: ./weighted_dct_denoise photo1.raw photo1_denoised.raw photo1_noisy.raw 512 512 10 8 1

  3. The software should take the input image photo1.raw, add Gaussian noise to it to obtain photo1_noisy.raw, and then denoise that to obtain photo1_denoised.raw.

  4. If you want you can directly input the noisy image by setting the last parameter to zero:

        ./weighted_dct_denoise photo1_noisy.raw photo1_denoised.raw photo1_noisy_copy.raw 512 512 10 8 0


Manifest:

o dct_trf.c: code that handles forward/inverse dcts.

o dct_trf.h: interface to dct_trf.c.

o noise.c: adds Gaussian noise.

o noise.h: interface to noise.c.

o weighted_dct_denoise.c: main routine.

o support_routines.c: pointer allocation/deallocation, etc.

o support_routines.h: interface to support_routines.c.

o Makefile

o lena.raw, barbara.raw, photo1.raw, photo2.raw, graphics.raw, criss-cross.raw: 512x512 grayscale test images.



Spatial Sparsity-Induced Prediction (SIP) for Images and Video Proof of Concept Demo using the P_L predictor:

[sparsity_induced_prediction.zip] : predicts a target image based on an anchor image.


Overview: This code contains a proof of concept implementation of the ideas in:

G. Hua and O. G. Guleryuz, "Spatial Sparsity-Induced Prediction (SIP) for Images and Video: A Simple Way to Reject Structured Interference," to appear in IEEE Transactions on Image Processing. [.pdf]

Please report bugs and mistakes here. Please use the subject line "Regarding sparsity induced prediction code". Thank you.

Notes:

  1. First run ./predict.exe without any options to see the usage info.

  2. Then try: ./predict.exe noisy_x.raw noisy_y.raw noisy_pred.raw 512 512

  3. The software should use noisy_x.raw as anchor, predict noisy_y.raw, and put the prediction in noisy_pred.raw. The results should match Figure 6 (a) in the paper.

  4. To generate noncausal estimates set CAUSAL=0 and recompile.


Manifest:

o helper.c: support routines.

o helper.h: interface to helper.c.

o predict.c: main routine.

o Makefile

o *.raw: 512x512 grayscale test images (bg_fg*.raw are 256x256) for the examples in Figures 6 and 7.