DeconvNetToolbox
File List
Here is a list of all files with brief descriptions:
setupDeconvNetToolbox.m [code]
AllLayers/convert3dto2d.m [code]This converts any 3D pooling into 2D pooling throughout all layers of the model so that at inference time it may yield sparser maps that are easier to classify
AllLayers/initialize_pooling.m [code]Initializes the pooling indices before they are used
AllLayers/initialize_variables.m [code]Initializes the variables of a deconvolutional network based on image size, filter size, and pooling sizes
AllLayers/insertBatch.m [code]Script to insert the batch variables back into z, y, etc. based on the images in batch_indices
AllLayers/prepare_time_layer.m [code]Prepares the layers below TIME_LAYER by replicating their filters and connectivity matrices and adjusting their sizes
AllLayers/remove_ordering.m [code]This converts an ordering pooling type from 2D to 3D and removes the ordering property
AllLayers/reorder_features.m [code]This reorders the feature maps based on similarity if the pooling type for the current layer has the 'Order' string in it
AllLayers/selectBatch.m [code]Script to select from z, y, etc. the batches based on the images in batch_indices
AllLayers/train_recon_layer.m [code]This is a combined function to both train or reconstruct images in the pixel space
AllLayers/train_recon_phase_all.m [code]This is a combined function to both train or reconstruct images in the pixel space, but is different from train_recon_layer in that it can train any number of layers jointly with reconstruction terms back down to the original input images with pooling involved
AllLayers/tune_parameters.m [code]Method to tune various parameters of the Deconvolutional Network automatically
AllLayers/update_y.m [code]This updates the y (image) values to fill in missing locations or denoise the images
Connectivity_Matrices/conmat_alldoub.m [code]Creates a connectivity matrix where the feature maps connect to every possible pair of input maps
Connectivity_Matrices/conmat_alltrip.m [code]Creates a connectivity matrix where the feature maps connect to every possible triple of input maps
Connectivity_Matrices/conmat_exp.m [code]Creates a connectivity matrix where the feature maps connect to only one input map in the layer below, but there are multiple feature maps (actually num_input_maps of them) that connect to each input map
Connectivity_Matrices/conmat_full.m [code]Creates a connectivity matrix where the feature maps connect to every input map
Connectivity_Matrices/conmat_lenet5.m [code]Creates a connectivity matrix modelled after LeNet5's S2 to C3 connectivity matrix with 6 input maps and 16 output maps
Connectivity_Matrices/conmat_randdoub.m [code]Creates a connectivity matrix where the feature maps connect to randomly selected pairs of input maps
Connectivity_Matrices/conmat_randquad.m [code]Creates a connectivity matrix where the feature maps connect to randomly selected quadruplets of input maps
Connectivity_Matrices/conmat_randsing_randdoub_randtrip.m [code]Creates a connectivity matrix where the feature maps connect to randomly selected single input maps, pairs of input maps, and triples of input maps
Connectivity_Matrices/conmat_randtrip.m [code]Creates a connectivity matrix where the feature maps connect to randomly selected triples of input maps
Connectivity_Matrices/conmat_singles.m [code]Creates a connectivity matrix where the feature maps connect to only single input maps
Connectivity_Matrices/conmat_singles_alldoub.m [code]Creates a connectivity matrix where the feature maps connect to every single input map and all possible pairs of input maps
Connectivity_Matrices/conmat_singles_alldoub_alltrip.m [code]Creates a connectivity matrix where the feature maps connect to every single input map, all possible pairs of input maps, and all possible triples of input maps
Connectivity_Matrices/conmat_singles_alldoub_randtrip.m [code]Creates a connectivity matrix where the feature maps connect to every single input map, all possible pairs of input maps, and randomly selected triples of input maps
Connectivity_Matrices/conmat_singles_alldoub_sometrip.m [code]Creates a connectivity matrix where the feature maps connect to every single input map, all possible pairs of input maps, and the triples of input maps that lie on the diagonal of the connectivity matrix
Connectivity_Matrices/conmat_singles_randdoub.m [code]Creates a connectivity matrix where the feature maps connect to every single input map and randomly selected pairs of input maps
Connectivity_Matrices/conmat_singles_randdoub_randtrip.m [code]Creates a connectivity matrix where the feature maps connect to every single input map, randomly selected pairs of input maps, and randomly selected triples of input maps
Connectivity_Matrices/conmat_singles_randdoub_sometrip.m [code]Creates a connectivity matrix where the feature maps connect to every single input map, randomly selected pairs of input maps, and the diagonal triples of input maps
Connectivity_Matrices/conmat_singles_somedoub.m [code]Creates a connectivity matrix where the feature maps connect to every single input map and the diagonal pairs of input maps
Connectivity_Matrices/conmat_singles_somedoub_randtrip.m [code]Creates a connectivity matrix where the feature maps connect to every single input map, the diagonal pairs of input maps, and randomly selected triples of input maps
Connectivity_Matrices/conmat_singles_somedoub_sometrip.m [code]Creates a connectivity matrix where the feature maps connect to every single input map, the diagonal pairs of input maps, and the diagonal triples of input maps
Connectivity_Matrices/conmat_someall_start_ones.m [code]Creates a connectivity matrix where the feature maps connect to the single connections along the diagonal, followed by the double connections, triple connections, and so on until ydim is reached
Connectivity_Matrices/conmat_someall_start_twos.m [code]Creates a connectivity matrix where the feature maps connect to the double connections along the diagonal, followed by the triple connections, quadruple connections, and so on until ydim is reached
Connectivity_Matrices/conmat_somedoub.m [code]Creates a connectivity matrix where the feature maps connect to the pairs of input maps that lie along the diagonal of the connectivity matrix
Connectivity_Matrices/conmat_sometrip.m [code]Creates a connectivity matrix where the feature maps connect to the triples of input maps along the diagonal of the connectivity matrix
Connectivity_Matrices/min_B.m [code]This function evaluates the cost function and its derivative with respect to the connectivity matrix's softmax variable, B, for use with minimize
Connectivity_Matrices/min_g.m [code]This function evaluates the cost function and its derivative with respect to the connectivity matrix, g, for use with minimize
Connectivity_Matrices/update_conmats.m [code]Creates any type of connectivity matrix using the conmat_* functions
Defaults/backwards_compatible.m [code]Converts any old fields in the model structure to new ones so that they are compatible with new changes to the code
Defaults/set_defaults_here.m [code]This is where you setup all the defaults in the model struct
GPU/gpu_kmeans2.m [code]This is an attempt to make Piotr's kmeans2 compatible with GPUmat
GPU/GPUNmT.m [code]A wrapper for cuBLAS calls that compute C = alpha*A*B' + beta*C; faster than doing the transpose and then the multiply in MATLAB
GPU/GPUTmN.m [code]A wrapper for cuBLAS calls that compute C = alpha*A'*B + beta*C; faster than doing the transpose and then the multiply in MATLAB
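As a reference for the semantics of these wrappers, the following plain-Python sketch computes C = alpha*A*B' + beta*C the way GPUNmT does, indexing B directly instead of forming its transpose (the function name and list-of-lists layout are illustrative, not part of the toolbox):

```python
def nmt_update(alpha, A, B, beta, C):
    """C = alpha * A * B' + beta * C for row-major lists of lists.

    A is m-by-k, B is n-by-k (so B' is k-by-n), C is m-by-n.
    The transpose of B is never materialized; B is simply
    indexed as B[j][p], which is what makes the cuBLAS form fast.
    """
    m, k, n = len(A), len(A[0]), len(B)
    return [[alpha * sum(A[i][p] * B[j][p] for p in range(k)) + beta * C[i][j]
             for j in range(n)] for i in range(m)]

A = [[1.0, 2.0]]              # 1x2
B = [[3.0, 4.0], [5.0, 6.0]]  # 2x2, used as B'
C = [[10.0, 20.0]]            # 1x2 accumulator
print(nmt_update(1.0, A, B, 0.5, C))  # [[16.0, 27.0]]
```

The GPUTmN variant is identical except the roles of the indices of A are swapped (A is indexed as A[p][i], giving alpha*A'*B + beta*C).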
GPU/gwhos.m [code]A better whos function for displaying GPUmat variable bytes and sizes of all n-D arrays
GPU/moveAll2CPU.m [code]This does a whos of the MATLAB workspace and then moves all GPUmat variables to the CPU, converting GPUsingle to single and GPUdouble to double
GPU/moveAll2GPU.m [code]Moves all the variables in AllGPUvars back to the GPU
GPU/mySetSize.m [code]A wrapper for GPUmat setSize() which makes sure that trailing 1's in the size vector are not passed to setSize, therefore giving sizes equivalent to reshape in MATLAB
GPU/startGPU.m [code]A simple script to automatically select a GPU and start GPUmat
GPU/test_jacket.m [code]A test script for various GPU jacket functions to do convolutions and pooling
GUI/clrlast.m [code]
GUI/deconvPaths.m [code]This file sets up the necessary paths to be used by the DeconvNetToolbox
GUI/gui.m [code]A gui that provides access to launchable experiments of various types and setting of their parameters
Helper/classification/conf_matrix.m [code]An implementation of the confusion matrix between multiple classes
Helper/classification/gen_labels.m [code]Generates the svm labels based on the number of images in each folder (one folder per category)
Helper/classification/indice_diff.m [code]This computes the difference between pooling indices for adjacent patches
Helper/classification/weighted_accuracy.m [code]Reports the weighted accuracy for the predictions versus the labels (weighted by the number of samples per category)
Helper/continuation/cubic_solve_image.m [code]Function to solve per-pixel cubic equations of the form:
Helper/continuation/min_recon_solve_image_w.m [code]Function to solve per-pixel continuation equations using minimize for any arbitrary sparsity norm on the reconstruction values (instead of the typical feature map sparsity)
Helper/continuation/min_recon_solve_image_wF.m [code]Function to solve per-pixel continuation equations using minimize for any arbitrary sparsity norm of the filters on the reconstruction values (instead of the typical feature map sparsity)
Helper/continuation/min_solve_image.m [code]Function to solve per-pixel continuation equations using minimize for any arbitrary sparsity norm
Helper/continuation/quartic_solve_image.m [code]Function to solve per-pixel quartic equations of the form:
Helper/continuation/solve_filter.m [code]Solves the following component-wise separable problem for filter updates

\[ \min_w |w|^\alpha + \frac{\beta}{2} (w - v)^2 \]

Helper/continuation/solve_image.m [code]Solves the following component-wise separable problem

\[ \min_w |w|^\alpha + \frac{\beta}{2} (w - v)^2 \]

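For the common case alpha = 1, this separable problem has the well-known closed-form soft-thresholding solution w = sign(v) * max(|v| - 1/beta, 0); a minimal sketch (the function name is illustrative, not toolbox code):

```python
import math

def solve_l1(v, beta):
    """Minimize |w| + (beta/2) * (w - v)**2 in closed form.

    The minimizer is the soft-thresholding (shrinkage) operator:
    shrink v toward zero by 1/beta, clamping at zero.
    """
    thresh = 1.0 / beta
    return math.copysign(max(abs(v) - thresh, 0.0), v)

# Elements with |v| <= 1/beta are set exactly to zero, which is
# what produces sparse feature maps.
print(solve_l1(3.0, 2.0))  # 2.5
```

For other values of alpha there is no closed form in general, which is why the toolbox resorts to the per-pixel cubic/quartic solvers and minimize-based routines listed above.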
Helper/cost_calculuation/compute_cost.m [code]This takes in the coefficients and variables of the model to compute the current cost of the parameter settings
Helper/cost_calculuation/graph_cost.m [code]This takes in the coefficients and variables of the model to compute and graph the current cost of the parameter settings
Helper/dataset_processing/balls_resizer.m [code]Resizes the balls# matrix created from Ilya Sutskever's bouncing ball generator to the desired size and saves in new location
Helper/dataset_processing/brodatz_maker0.m [code]Splits each Brodatz texture image into GRID_SIZE X GRID_SIZE separate images
Helper/dataset_processing/brodatz_maker1.m [code]Resize all the images of the Brodatz dataset (in the specified location)
Helper/dataset_processing/brodatz_maker2.m [code]Splits the Brodatz dataset into training, testing, and held-out image folders
Helper/dataset_processing/caltech_maker1.m [code]Resize all the images of the Caltech 101 dataset (in the specified location)
Helper/dataset_processing/caltech_maker2.m [code]Splits the Caltech 101 dataset into training, testing, and held-out image folders
Helper/dataset_processing/caltech_maker3.m [code]This splits the resulting images and category folders from caltech_maker2.m into batches
Helper/dataset_processing/caltech_maker4.m [code]Splits the Caltech 256 dataset into training, testing, and held-out image folders; for the categories that have 101 in their name (i.e. that came from Caltech 101) it uses the images from the corresponding Caltech 101 split
Helper/dataset_processing/caltech_maker5.m [code]This makes a directory with all Caltech 101 images in it
Helper/dataset_processing/compile_filters.m [code]This function looks for all parts of an experiment in the input folder and compiles them into a single experiment MAT-file, concatenating the images by sample and the filters (if they were trained per class for a layer) and updating the connectivity matrices accordingly
Helper/dataset_processing/compile_parts.m [code]This function looks for all parts of an experiment in the input folder and compiles them into a single experiment MAT-file by concatenating the images and feature maps by sample
Helper/dataset_processing/compile_pixel_filters.m [code]This function looks for all parts of an experiment in the input folder and loads that part and does top_down_select to generate the pixel space visualization for the pixels in that part
Helper/dataset_processing/compile_recons.m [code]This function looks for all parts of an experiment in the input folder and compiles them into a single experiment MAT-file by concatenating the images and feature maps by sample
Helper/dataset_processing/crop_all.m [code]Crops all the images in the specified location
Helper/dataset_processing/get_chains.m [code]Gets the data index for which frames are good in the One Frame Of Fame dataset
Helper/dataset_processing/getSTL10Labels.m [code]
Helper/dataset_processing/image_set_directories.m [code]Returns the training and testing set of images to be used to find the labels
Helper/dataset_processing/image_set_names.m [code]Returns the training and testing set names for use by image_set_directories which is used to find the labels
Helper/dataset_processing/kyoto_dataset.m [code]Resize all the images of the Kyoto dataset (in the specified location)
Helper/dataset_processing/mcgill_maker1.m [code]Resize all the images of the McGill dataset (in the specified location)
Helper/dataset_processing/mcgill_maker2.m [code]Splits the McGill dataset into training, testing, and held-out image folders
Helper/dataset_processing/num_images_in_dataset.m [code]This function has defaults for the number of train and test images for Caltech 101, Caltech 256, and MNIST datasets
Helper/dataset_processing/resize_all.m [code]Resize all the images in the specified location
Helper/dataset_processing/rotating_figure_random.m [code]Generates a video sequence (frames) of a figure moving around on a black background randomly with rotations
Helper/dataset_processing/rotating_in_place.m [code]Generates a video sequence (frames) of a figure on a black background with rotations about a fixed point
Helper/dataset_processing/sliding_figure_random.m [code]Generates a video sequence (frames) of a figure moving around on a black background randomly without rotations
Helper/dataset_processing/speed_maker1.m [code]Resizes the 1024 x 1024 images in Images/speed/
Helper/dataset_processing/split_dataset.m [code]Splits a dataset into different training, testing, and validation sets (randomly) according to an input size
Helper/dataset_processing/sprite_maker.m [code]Uses 8 different simple sprite images defined below and randomly places them (non-overlapping) on generated images
Helper/file_manipulation/add_dot_mat.m [code]Adds .mat to the passed in filepath and returns it
Helper/file_manipulation/clean_results_dir.m [code]Recurses from the starting path to all subdirectories and removes unwanted files and folders
Helper/file_manipulation/clear_last.m [code]Clears (deletes) the last results directory (Run_##)
Helper/file_manipulation/collect_batch_results.m [code]Collects all the feature maps that are saved for each batch into a single large file (all assumed to have the same three dimensions)
Helper/file_manipulation/convert_paths.m [code]Checks to see if the fullsavepath and fulldatapath exist on the given machine
Helper/file_manipulation/copy_lower_layers.m [code]Copies all the layers below the top layer you want to train above (which is specified in model.fullmodelpath of the gui model that is input)
Helper/file_manipulation/delete_old_phases.m [code]This simply searches the input path for saved epoch##_phase##.mat files for phases less than the input phase and epochs less than the current epoch
Helper/file_manipulation/dir_of_file.m [code]Returns the directory in which the input filepath resides
Helper/file_manipulation/extract_epoch_layer.m [code]Extracts the epoch number and layer number from the path string to a file
Helper/file_manipulation/gen_directories.m [code]Creates a new Run_## directory with the appropriate name (if needed)
Helper/file_manipulation/get_highest_epoch.m [code]Searches a directory for the .mat named as epoch##_layer# that has the highest epoch for the given input layer
Helper/file_manipulation/get_run_number.m [code]This checks the input folder for Run_## folders and returns the number of the next such folder to make
Helper/file_manipulation/getsubdirs.m [code]Recursively gets all the subdirectories of the input directory
Helper/file_manipulation/list_results.m [code]List the given parameters (from model structure) for each Run it finds in the given input directory
Helper/file_manipulation/load_lower_layers.m [code]Loads all the layers below the top layer you want to use (which is specified in model.fullmodelpath of the gui model that is input)
Helper/file_manipulation/mkdir2.m [code]Does the same operation as mkdir() except will not report warnings if the directory already exists
Helper/file_manipulation/parentdir.m [code]Returns the parent directory of a path (which could be a file or folder itself)
Helper/file_manipulation/pd.m [code]Returns the parent directory of a path (which could be a file or folder itself)
Helper/file_manipulation/remove_dot_mat.m [code]Removes the .mat extension (if one exists) of the input filepath
Helper/file_manipulation/remove_dots.m [code]Removes the dot and dotdot directories from a list of directories
Helper/file_manipulation/save_now.m [code]This simply saves the required files to disk for training phases of the model on top of one another
Helper/file_manipulation/separate_batch_results.m [code]Separates the batch results into individual files so that I can use them in the MyBuildPyramid example (just like Svetlana loads individual images)
Helper/file_manipulation/split_folders_files.m [code]Returns two struct arrays of just the folders and just the files of the input struct array
Helper/file_manipulation/zip_toolbox.m [code]Packages a toolbox with the desired files/folders and generates the documentation from the code using doxygen
Helper/model/model_keep_structure.m [code]Simply limits the alpha, beta, etc. parameters according to the number of layers and phases
Helper/model/model_limit_params.m [code]Simply limits the alpha, beta, etc. parameters according to the number of layers and phases
Helper/normalization/compare_norms.m [code]Compares the norms of a set of images (set inside this script) with a visual plot of the norms of each feature map compared
Helper/normalization/plane_normalize.m [code]Normalizes each plane of the input (assumed to be in the first two dimensions of a 3 or 4-D matrix)
Helper/normalization/rescale2_all_n1_1.m [code]Very similar to svm_rescale2.m but I'm not exactly sure what range this guarantees (if any)
Helper/normalization/rescale_all_0_1.m [code]Scales all input data (as a lump) to [0,1] by shifting by the minimum up and then dividing by the max value
Helper/normalization/rescale_all_n1_1.m [code]Scales all input data (as a lump) to roughly [-1,1] by shifting by the minimum up and then dividing by the max value/2 and then making it zero mean
Helper/normalization/rescale_row_0_1.m [code]Scales each row of input data to [0,1] by shifting by the minimum up and then dividing by the max value
Helper/normalization/rescale_row_n1_1.m [code]Scales each row of input data to roughly [-1,1] by shifting by the minimum up and then dividing by the max value/2 and then making it zero mean
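As a sketch of the row-wise rescaling these functions describe, the [0,1] case looks roughly like this (a hypothetical helper, not the toolbox's implementation):

```python
def rescale_row_0_1(rows):
    """Scale each row of the data to [0, 1] independently:
    shift up by the row minimum, then divide by the new maximum."""
    out = []
    for row in rows:
        lo = min(row)
        shifted = [x - lo for x in row]
        hi = max(shifted)
        # Guard against constant rows, where the max after shifting is 0.
        out.append([x / hi if hi > 0 else 0.0 for x in shifted])
    return out

print(rescale_row_0_1([[2.0, 4.0, 6.0]]))  # [[0.0, 0.5, 1.0]]
```

The "_all_" variants apply the same shift-and-divide to the data as one lump rather than per row, and the n1_1 variants additionally double the range and remove the mean to land roughly in [-1, 1].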
Helper/other_computations/alltop.m [code]This is meant to serve as a gui for top on all local machines in the enclosed server list so that you can refresh to see activity across multiple machines
Helper/other_computations/bsxremove.m [code]Selects just the specified index from the matrix and sets it to zero
Helper/other_computations/bsxselect.m [code]Selects just the specified index from the matrix
Helper/other_computations/checkgrad.m [code]
Helper/other_computations/compare_two_vars.m [code]Simply compares the size and the max(abs) difference between two variables
Helper/other_computations/compute_snr.m [code]Compute the Signal-to-Noise Ratio between two images
Helper/other_computations/computeAtAx.m [code]Computes AtAx of the system
Helper/other_computations/condition_num.m [code]Computes the condition number for the first layer of the Deconvolutional Network
Helper/other_computations/ind2logical.m [code]Takes the indices which can be any arbitrary size (with one singleton dimension created from a max operation for example) and expands them in the singleton dimension to a specific size creating logical indices to be used for selection
Helper/other_computations/invconv.m [code]Just a test script to try and invert a convolution filter
Helper/other_computations/make_lower_layer.m [code]Simply loads a trained deconvolutional network file and removes the feature maps above some layer and also the number of phases
Helper/other_computations/make_noz.m [code]Simply loads a trained deconvolutional network file and removes the feature maps and saves as same_name_noz.mat
Helper/other_computations/minimize.m [code]
Helper/other_computations/model_summary.m [code]This generates a set of strings representing the model structure so that they can be displayed to the screen, set the xterm title, and email later on
Helper/other_computations/mysendmail.m [code]An attempt to get mail to work properly for emailing results or job completion
Helper/other_computations/receptive_fields.m [code]This estimates the receptive fields of a single element in the middle of the feature maps and pooled maps from each layer of the model
Helper/other_computations/remove_similar_filters.m [code]This convolves the absolute value of filters with each other to see if two of them are very similar, and if so removes one of the similar ones
Helper/other_computations/set_xterm_title.m [code]If matlab is currently running in an xterm window, this can be used to set the title of that xterm window directly from matlab
Helper/other_computations/shift_filters.m [code]This function shifts the filters based on the center of mass of all the filter planes connected to a given feature map (all the input map planes)
Helper/other_computations/sizes.m [code]Just prints out various sizes important to the model
Helper/other_computations/va.m [code]Vectorizes the input array as in(:)
Helper/other_computations/vect_array.m [code]Vectorizes the input array as in(:)
Helper/plotting/endPlots.m [code]Displays some useful plots for analysis after training a deconv net model
Helper/plotting/error_surfaces.m [code]This plots the error surfaces after a model has been trained based on the computed errors stored in its model struct
Helper/plotting/ExportFig.m [code]Save out a figure to a variety of image formats and resolutions
Helper/plotting/line_prop.m [code]This returns a line property such as '-k' to be used in plotting
Helper/plotting/loghist.m [code]This plots a log histogram in the current figure
Helper/plotting/plot_clean_vs_noise1.m [code]Loads various epoch10_layer1.mat files, computes their total scaled energy for clean (first half of samples) and noisy (second half of samples) images for different lambda values, and then plots the result
Helper/plotting/plot_last.m [code]It reads the fullsavepath from guimodel (which it expects to be in the workspace) and appends the expfile name if needed
Helper/plotting/plot_surfaces.m [code]Plot the surface over epochs and samples (different blurs/noise possibly)
Helper/plotting/return_train_recon.m [code]Setup whether you're training or just inferring up the maps based on the experiment type
Helper/plotting/sfigure.m [code]
Helper/plotting/view_results.m [code]View the specified figures for each Run it finds in a directory or recursively from that directory down
Helper/plotting/composites/montage.m [code]
Helper/plotting/composites/myimshow.m [code]
Helper/plotting/composites/sdf.m [code]This is a shorthand notation for calling either sdispims or sdispmaps depending on the size of the input array
Helper/plotting/composites/sdispfilts.m [code]Displays the filters from the model as multiple images in the same ultimateSubplot, with one subplot for each input map
Helper/plotting/composites/sdispims.m [code]Displays a stack of images in the specified color space separated by a border and scaled to have the same contrast
Helper/plotting/composites/sdispmaps.m [code]Displays feature or input maps from the model for a number of samples
Helper/plotting/composites/ultimateSubplot.m [code]A better subplot routine which allows a user defined margin between subplots
Helper/plotting/conferences/cvpr2011plots.m [code]Plots various figures per sample (used for the cvpr2011 submission)
Helper/plotting/conferences/iccv2011/classvsreconplot.m [code]This plots the classification accuracy versus the reconstruction
Helper/plotting/conferences/iccv2011/iccv2011plots.m [code]Plots various figures per sample (used for the iccv2011 submission)
Helper/plotting/keypress/dbc.m [code]A simple function to clear all dynamically set breakpoints (used mostly with the keypress function)
Helper/plotting/keypress/dbloop.m [code]
Helper/plotting/keypress/keypress.m [code]This function allows a user to stop the execution of a program by pausing it through a keypress of the specified key
Helper/plotting/train_recon_plotting/init_plots.m [code]This script is used to initialize the sfigures for all the layers that will be used for the current running experiment
Helper/plotting/train_recon_plotting/plot_feature_hist.m [code]Plots a histogram of the feature maps for all images within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_feature_maps.m [code]Plots the feature maps for all images within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_feature_maps_sample.m [code]Plots the feature maps for a single image within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_filter_hist.m [code]Plots a histogram of the filters within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_filters.m [code]Plots filters within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_input_hist.m [code]Plots the histogram of the input maps for the current layer within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_input_maps.m [code]Plots the input maps for the current layer within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_original_images.m [code]Plots the original (clean) images before noising (only for layer 1) within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_pixel_filters.m [code]Plots pixel space visualization of filters within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_pooled_maps.m [code]Plots the feature maps (after pooling) for all images within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_pooled_maps_sample.m [code]Plots the feature maps (after pooling) for a single image within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_recon_y.m [code]Plots the reconstructions from the model for all images within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_recon_y_sample.m [code]Plots the reconstructions from the model for a single image within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_updated_y.m [code]Plots the updated images combined from noisy input and reconstruction within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_updated_y_sample.m [code]Plots the updated image combined from noisy input and reconstruction within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_z0_maps.m [code]Plots the z0 feature maps for all images within train_recon_layer.m
Helper/plotting/train_recon_plotting/plot_z0_maps_sample.m [code]Plots the z0 feature maps for a given image within train_recon_layer.m
Helper/timing/current_time.m [code]This function either receives the current time or gets it automatically if no inputs are passed in, and then outputs it separated into hours, minutes, and seconds
Helper/timing/estimate_runtime.m [code]Estimates the runtime of an experiment
Helper/timing/secs2hms.m [code]This function converts seconds into hours, minutes, and seconds
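The conversion secs2hms performs amounts to two divmod operations; in sketch form (illustrative, not the toolbox code):

```python
def secs2hms(total_seconds):
    """Split a duration in seconds into (hours, minutes, seconds)."""
    minutes, seconds = divmod(total_seconds, 60)  # peel off the seconds
    hours, minutes = divmod(minutes, 60)          # peel off the minutes
    return hours, minutes, seconds

print(secs2hms(3725))  # (1, 2, 5)
```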
Image_Functions/check_imgs_path.m [code]This is a helper function to check that there are valid image files or a .mat in the input path that can be used by CreateImages.m
Image_Functions/CreateImages.m [code]This takes all images from the input folder, converts them to the desired colorspace, removes the mean/divides by standard deviations (if desired), and contrast normalizes the image (if desired)
Image_Functions/CreateResidualImages.m [code]This takes all images in as a matrix and then computes the contrast normalized image I and the residual image resI = input - I
Image_Functions/contrast_normalization/inv_f_dewhiten.m [code]The function to dewhiten an image with 1/f whitening
Image_Functions/contrast_normalization/inv_f_whiten.m [code]The function to whiten an image with 1/f whitening
Image_Functions/contrast_normalization/region_zca.m [code]This function is used to whiten an image with patch-based ZCA whitening that is applied convolutionally to the image
Image_Functions/contrast_normalization/unwhiten.m [code]This function is used to unwhiten a whitened image in the Fourier domain
Image_Functions/contrast_normalization/whiten.m [code]This function is used to whiten an image in the Fourier domain
Image_Functions/corruptions/erase_corner_boxes.m [code]Uses a corner detector to find corners in the image and erases a box around them
Image_Functions/corruptions/erase_corners.m [code]Uses a corner detector to find corners in the image and erases only the pixels of this corner metric that are above a threshold
Image_Functions/corruptions/erase_half_of_image.m [code]Erases half the image (the first half of the image columns)
Image_Functions/corruptions/noise_images.m [code]This function applies various types of noise or corruptions to the images
Image_Functions/corruptions/scattered_boxes.m [code]Scatter boxes over the image to corrupt it
Image_Functions/corruptions/scattered_crosses.m [code]Scatter crosses over the image to corrupt it
Image_Functions/corruptions/scattered_lines.m [code]Scatter lines over the image to corrupt it
Image_Functions/corruptions/varying_crosses.m [code]Creates an array of arbitrary line crosses which can be defined to have various lengths and image sizes
Inference/infer_all.m [code]Updates the feature maps for a single training sample (image) using either the IPP libraries (and thus fast if they are compiled) or the non-compiled version of the same files, jointly for all layers, including the reconstruction terms from each layer down to the image and down to the layer below, plus the sparsity terms
Inference/infer_APPROX.m [code]Updates the feature maps for a single training sample (image) using either the IPP libraries (and thus fast if they are compiled) or the non-compiled version of the same files, jointly for all layers, including the reconstruction terms from each layer down to the image and down to the layer below, plus the sparsity terms
Inference/infer_CG.m [code]Updates the feature maps for a single training sample (image) using either the IPP libraries (and thus fast if they are compiled) or the non-compiled version of the same files, jointly for all layers, including the reconstruction terms from each layer down to the image and down to the layer below, plus the sparsity terms
Inference/infer_FISTA.m [code]Updates the feature maps for a single training sample (image) using either the IPP libraries (and thus fast if they are compiled) or the non-compiled version of the same files, jointly for all layers, including the reconstruction terms from each layer down to the image and down to the layer below, plus the sparsity terms
Inference/infer_ISTA.m [code]Updates the feature maps for a single training sample (image) using either the IPP libraries (and thus fast if they are compiled) or the non-compiled version of the same files, jointly for all layers, including the reconstruction terms from each layer down to the image and down to the layer below, plus the sparsity terms
Inference/min_z0.m [code]This function evaluates the cost function and its derivative with respect to the z0 feature maps, z0, for use with minimize
IPPConvsToolbox/GPUmat/full_each4_sum3_gpu.m [code]This is a GPUmat implementation for doing the full_each4_sum3 operation:

This is a CPU/GPU implementation that full convolves each set of image planes with corresponding filters as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (y_j^i \oplus_{full} f^{j(i)}_{k}) \]

where for each input map (prhs[0]) j (ie y(:,:,j,i)), it is convolved with each filter (prhs[1]) over all k for that specific j (ie f(:,:,j,k,i)) using a 'full' convolution if the binary connectivity matrix has g(j,k) == 1.

IPPConvsToolbox/GPUmat/gpu_conv2.m [code]An alternative implementation of gpu 2D convolution that replicates the functionality of ipp_conv2.m
IPPConvsToolbox/GPUmat/test_gpu_convs.m [code]A test script for gpu_conv2.m, valid_each3_sum4_gpu.m, valid_each3_each4_gpu.m and full_each4_sum3_gpu.m which compares speed for different types of convolutions, different sizes of images and filters, and different numbers of images and filters
IPPConvsToolbox/GPUmat/valid_each3_each4_gpu.m [code]This is a GPUmat implementation for doing the valid_each3_each4 operation:

This is a CPU/GPU implementation that valid convolves each feature map plane with each image plane and returns all of these resulting convolutions as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (z_k^i \oplus_{valid} y_j^i) \]

where for each feature map (prhs[0]) k (ie z(:,:,k,i)), it is convolved with each input map (prhs[1]) j (ie y(:,:,j,i)) using a 'valid' convolution if the binary connectivity matrix has g(j,k) != 0.

IPPConvsToolbox/GPUmat/valid_each3_sum4_gpu.m [code]This is a GPUmat implementation for doing the valid_each3_sum4 operation:

This is a CPU/GPU implementation that valid convolves each set of image planes with corresponding filters as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (z_k^i \oplus_{valid} f^{j(i)}_k) \]

where for each input map (prhs[0]) k (ie z(:,:,k,i)), it is convolved with each filter (prhs[1]) over all j for that specific k (ie f(:,:,j,k,i)) using a 'valid' convolution if the binary connectivity matrix has g(j,k) == 1.

IPPConvsToolbox/MEX/compilemex.m [code]Compiles all the required IPP Library based files used in the IPP implementation of the Deconvolutional Network
IPPConvsToolbox/MEX/full_each4_sum3_ipp.cpp [code]MEX wrapper for IPP and OpenMP implementation of the full_each4_sum3 operation.

This is a CPU/GPU implementation that full convolves each set of image planes with corresponding filters as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (y_j^i \oplus_{full} f^{j(i)}_{k}) \]

where for each input map (prhs[0]) j (ie y(:,:,j,i)), it is convolved with each filter (prhs[1]) over all k for that specific j (ie f(:,:,j,k,i)) using a 'full' convolution if the binary connectivity matrix has g(j,k) == 1.

IPPConvsToolbox/MEX/ipp_conv2.cpp [code]MEX wrapper for 2-D image convolutions using Intel's Integrated Performance Primitive (IPP) Libraries and multi-threading
IPPConvsToolbox/MEX/sparse_conv2.cpp [code]MEX wrapper for 2-D image convolutions that involve an image with many elements that are zero (but it is of single type, not MATLAB sparse type)
IPPConvsToolbox/MEX/sparse_valid_each3_sum4.cpp [code]MEX wrapper for reconstruction through a Deconv Network with an OpenMP implementation of a sparse version of the valid_each3_sum4 operation. If the input maps are sparse, this may be much faster, as it simply places the corresponding filters at each nonzero location instead of doing the complete convolution.

This is a CPU/GPU implementation that valid convolves each set of image planes with corresponding filters as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (z_k^i \oplus_{valid} f^{j(i)}_k) \]

where for each input map (prhs[0]) k (ie z(:,:,k,i)), it is convolved with each filter (prhs[1]) over all j for that specific k (ie f(:,:,j,k,i)) using a 'valid' convolution if the binary connectivity matrix has g(j,k) == 1.

IPPConvsToolbox/MEX/valid_each3_each4_ipp.cpp [code]MEX wrapper for IPP and OpenMP implementation of the valid_each3_each4 operation.

This is a CPU/GPU implementation that valid convolves each feature map plane with each image plane and returns all of these resulting convolutions as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (z_k^i \oplus_{valid} y_j^i) \]

where for each feature map (prhs[0]) k (ie z(:,:,k,i)), it is convolved with each input map (prhs[1]) j (ie y(:,:,j,i)) using a 'valid' convolution if the binary connectivity matrix has g(j,k) != 0.

IPPConvsToolbox/MEX/valid_each3_sum4_ipp.cpp [code]MEX wrapper for IPP and OpenMP implementation of the valid_each3_sum4 operation.

This is a CPU/GPU implementation that valid convolves each set of image planes with corresponding filters as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (z_k^i \oplus_{valid} f^{j(i)}_k) \]

where for each input map (prhs[0]) k (ie z(:,:,k,i)), it is convolved with each filter (prhs[1]) over all j for that specific k (ie f(:,:,j,k,i)) using a 'valid' convolution if the binary connectivity matrix has g(j,k) == 1.

IPPConvsToolbox/MFiles/batch_conv_test.m [code]Tests that doing multiple single image convolutions gives the same result as a single batch convolution
IPPConvsToolbox/MFiles/full_each4_sum3.m [code]This is a CPU/GPU implementation that full convolves each set of image planes with corresponding filters as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (y_j^i \oplus_{full} f^{j(i)}_{k}) \]

where for each input map (prhs[0]) j (ie y(:,:,j,i)), it is convolved with each filter (prhs[1]) over all k for that specific j (ie f(:,:,j,k,i)) using a 'full' convolution if the binary connectivity matrix has g(j,k) == 1
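The gated "full" convolution pattern above can be sketched in pure NumPy. This is an illustration only, not the toolbox's MATLAB/IPP code; the function name, loop structure, and array shapes (y is H x W x J x I, filters f are Fh x Fw x J x K shared across images, g is a J x K binary connectivity matrix) are assumptions.

```python
import numpy as np

def conv2_full(a, b):
    """2-D 'full' convolution (place scaled, shifted copies of a), pure NumPy."""
    Ha, Wa = a.shape
    Hb, Wb = b.shape
    out = np.zeros((Ha + Hb - 1, Wa + Wb - 1), dtype=a.dtype)
    for r in range(Hb):
        for c in range(Wb):
            out[r:r + Ha, c:c + Wa] += a * b[r, c]
    return out

def full_each4_sum3_sketch(y, f, g):
    """For every image i, input map j, and feature map k with g[j, k] == 1,
    full-convolve y[:, :, j, i] with the filter f[:, :, j, k]."""
    H, W, J, I = y.shape
    Fh, Fw, _, K = f.shape
    out = np.zeros((H + Fh - 1, W + Fw - 1, J, K, I), dtype=y.dtype)
    for i in range(I):
        for j in range(J):
            for k in range(K):
                if g[j, k]:  # only convolve connected (j, k) pairs
                    out[:, :, j, k, i] = conv2_full(y[:, :, j, i], f[:, :, j, k])
    return out
```

Note how zero entries of g skip the convolution entirely, which is the point of the connectivity matrix: sparser connectivity means proportionally less work.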

IPPConvsToolbox/MFiles/full_eachJ_loopK.m [code]This is a backward compatibility file for doing the full_each4_sum3 operation
IPPConvsToolbox/MFiles/ipp_conv2.m [code]Non-IPP wrapper for 2-D image convolutions, compatible with the version that uses the IPP libraries in the IPPConvsToolbox
IPPConvsToolbox/MFiles/rconv2.m [code]Convolution of two matrices, with boundaries handled via reflection about the edge pixels
IPPConvsToolbox/MFiles/valid_each3_each4.m [code]This is a CPU/GPU implementation that valid convolves each feature map plane with each image plane and returns all of these resulting convolutions as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (z_k^i \oplus_{valid} y_j^i) \]

where for each feature map (prhs[0]) k (ie z(:,:,k,i)), it is convolved with each input map (prhs[1]) j (ie y(:,:,j,i)) using a 'valid' convolution if the binary connectivity matrix has g(j,k) != 0
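The valid_each3_each4 pattern above (every connected feature-map/input-map pair, valid-convolved) can be sketched in pure NumPy. Again this is an illustration, not the toolbox's code; shapes (z is Hz x Wz x K x I, y is Hy x Wy x J x I, g is J x K) are assumptions. Each (j, k) result has the size of a filter: (Hz - Hy + 1) x (Wz - Wy + 1).

```python
import numpy as np

def conv2_valid(a, b):
    """2-D 'valid' convolution of a with a smaller array b, pure NumPy."""
    Ha, Wa = a.shape
    Hb, Wb = b.shape
    bf = b[::-1, ::-1]  # convolution flips the kernel
    out = np.zeros((Ha - Hb + 1, Wa - Wb + 1), dtype=a.dtype)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(a[r:r + Hb, c:c + Wb] * bf)
    return out

def valid_each3_each4_sketch(z, y, g):
    """Valid-convolve every feature map z[:, :, k, i] with every input map
    y[:, :, j, i] for which g[j, k] != 0, keeping all results."""
    Hz, Wz, K, I = z.shape
    Hy, Wy, J, _ = y.shape
    out = np.zeros((Hz - Hy + 1, Wz - Wy + 1, J, K, I), dtype=z.dtype)
    for i in range(I):
        for k in range(K):
            for j in range(J):
                if g[j, k]:
                    out[:, :, j, k, i] = conv2_valid(z[:, :, k, i], y[:, :, j, i])
    return out
```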

IPPConvsToolbox/MFiles/valid_each3_sum4.m [code]This is a CPU/GPU implementation that valid convolves each set of image planes with corresponding filters as follows:

\[ out(:,:,j,k,i) = g_{j,k} \times (z_k^i \oplus_{valid} f^{j(i)}_k) \]

where for each input map (prhs[0]) k (ie z(:,:,k,i)), it is convolved with each filter (prhs[1]) over all j for that specific k (ie f(:,:,j,k,i)) using a 'valid' convolution if the binary connectivity matrix has g(j,k) == 1
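For valid_each3_sum4, the "sum4" in the name suggests the per-k convolutions are summed into one reconstruction per input map j; the sketch below follows that reading, which is an assumption, as are the shapes (z is Hz x Wz x K x I, f is Fh x Fw x J x K, g is J x K). Not the toolbox's actual code.

```python
import numpy as np

def conv2_valid(a, b):
    """2-D 'valid' convolution of a with a smaller kernel b, pure NumPy."""
    Ha, Wa = a.shape
    Hb, Wb = b.shape
    bf = b[::-1, ::-1]  # convolution flips the kernel
    out = np.zeros((Ha - Hb + 1, Wa - Wb + 1), dtype=a.dtype)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(a[r:r + Hb, c:c + Wb] * bf)
    return out

def valid_each3_sum4_sketch(z, f, g):
    """Reconstruct input map j of image i by summing, over the connected
    feature maps k, the valid convolution of z[:, :, k, i] with f[:, :, j, k]."""
    Hz, Wz, K, I = z.shape
    Fh, Fw, J, _ = f.shape
    out = np.zeros((Hz - Fh + 1, Wz - Fw + 1, J, I), dtype=z.dtype)
    for i in range(I):
        for j in range(J):
            for k in range(K):  # summed over, per the "sum4" name
                if g[j, k]:
                    out[:, :, j, i] += conv2_valid(z[:, :, k, i], f[:, :, j, k])
    return out
```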

IPPConvsToolbox/MFiles/valid_eachK_loopJ.m [code]This is a backward compatibility file for doing the valid_each3_sum4 operation
IPPConvsToolbox/MFiles/valid_loopK_loopJ.m [code]This is a backward compatibility file for doing the valid_each3_each4 operation
IPPConvsToolbox/Test/conv_test.m [code]Tests the relative speed of different convolution methods, including ipp_conv2
IPPConvsToolbox/Test/test_sparse_conv2.m [code]A test script for sparse_conv2.m which compares speed for different types of convolutions, different sizes of images and filters, and different numbers of images and filters
Learn_Filters/collapse_filters.m [code]Collapses filters from all layers down into pixel space representations of the filters which can be then plotted or used in a 1-layer model
Learn_Filters/greedy_filter_shutoff.m [code]Greedily shut off a certain ratio of the input planes that have a total gradient response
Learn_Filters/learn_filters_all.m [code]Updates the filters for a batch of training samples using either the IPP libraries (and thus is fast if they are compiled) or the non-compiled versions of the same files, jointly for all layers, including the reconstruction terms from each layer down to the image and down to the layer below, plus the sparsity terms
Learn_Filters/learn_filters_FISTA.m [code]Updates the filters for a batch of training samples using either the IPP libraries (and thus is fast if they are compiled) or the non-compiled versions of the same files, jointly for all layers, including the reconstruction terms from each layer down to the image and down to the layer below, plus the sparsity terms
Learn_Filters/learn_filters_ISTA.m [code]Updates the filters for a batch of training samples using either the IPP libraries (and thus is fast if they are compiled) or the non-compiled versions of the same files, jointly for all layers, including the reconstruction terms from each layer down to the image and down to the layer below, plus the sparsity terms
Learn_Filters/learn_kmeans_filters.m [code]Attempts to find the most prominent combinations of filters below based on the activations of the feature maps and the pooling locations (indices)
Learn_Filters/normalize_filters.m [code]Conducts filter shifting, zero mean, and normalization (in that order) of the filters over various dimensions
modules/mdz/avg_pool/cuAvgPool.cpp [code]GPUmat GPU implementation of avg pooling over subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. This pooling takes the average value in each 2D region, including any negative values. Note: input images must be single precision GPU floats
modules/mdz/avg_pool/cuAvgPool.cu [code]
modules/mdz/avg_pool/cuAvgPool3d.cpp [code]GPUmat GPU implementation of avg pooling over 3D subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. This pooling takes the average value in each 3D region, including any negative values. Note: input images must be single precision GPU floats
modules/mdz/avg_pool/cuRevAvgPool.cpp [code]GPUmat GPU implementation of unpooling a 2D avg pooling over subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. Note: input images must be single precision GPU floats
modules/mdz/avg_pool/cuRevAvgPool3d.cpp [code]GPUmat GPU implementation of unpooling an avg pooling over 3D subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. Note: input images must be single precision GPU floats
modules/mdz/avg_pool/make.m [code]
modules/mdz/avg_pool/moduleinit.m [code]
modules/mdz/avg_pool/reloadmodule.m [code]
modules/mdz/avg_pool/runme.m [code]
modules/mdz/avg_pool/testcuMaxPool.m [code]A test script for various gpu cuda pooling and unpooling routines which compares speed for different types of pooling, different sizes of images, and different numbers of images
modules/mdz/convs/conv.cuh [code]
modules/mdz/convs/conv4.cu [code]
modules/mdz/convs/conv5.cu [code]
modules/mdz/convs/conv6.cu [code]
modules/mdz/convs/conv7.cu [code]
modules/mdz/convs/conv8.cu [code]
modules/mdz/convs/cuConv6.cpp [code]This function does the valid convolutions between a set of feature maps and corresponding filters, summing over the feature maps to give a reconstruction of numInputMaps maps for each of the numCases sets of feature maps (and possibly different filters). The result is numInputMaps x numCases in dimension
modules/mdz/convs/cuConv6_2.cpp [code]This function does the valid convolutions between a set of feature maps and corresponding filters, summing over the feature maps to give a reconstruction of numInputMaps maps for each of the numCases sets of feature maps (and possibly different filters). The result is numInputMaps x numCases in dimension
modules/mdz/convs/cuConv7.cpp [code]This function does the valid convolutions between a set of feature maps and corresponding images, convolving each possible pair of feature map and input map within a set, for each of the numCases sets
modules/mdz/convs/cuConv7_2.cpp [code]This function does the valid convolutions between a set of feature maps and corresponding images, convolving each possible pair of feature map and input map within a set, for each of the numCases sets
modules/mdz/convs/cuConv8_2.cpp [code]This function does the valid convolutions between a set of feature maps and corresponding filters, summing over the feature maps to give a reconstruction of numInputMaps maps for each of the numCases sets of feature maps (and possibly different filters). The result is numInputMaps x numCases in dimension
modules/mdz/convs/make.m [code]
modules/mdz/convs/moduleinit.m [code]
modules/mdz/convs/reloadmodule.m [code]
modules/mdz/max_pool/cuMaxPool.cpp [code]GPUmat GPU implementation of max pooling over subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. If there are negative values in the input, this pooling takes the maximum absolute value (but keeps its sign as well). Note: input images must be single precision GPU floats
modules/mdz/max_pool/cuMaxPool.cu [code]
modules/mdz/max_pool/cuMaxPool3d.cpp [code]GPUmat GPU implementation of max pooling over 3D subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. If there are negative values in the input, this pooling takes the maximum absolute value (but keeps its sign as well). Note: input images must be single precision GPU floats
modules/mdz/max_pool/cuRevMaxPool.cpp [code]GPUmat GPU implementation of unpooling a max pooling over subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. If more index planes are provided than image planes, only the first corresponding number of planes from the indices will be used. Note: input images must be single precision GPU floats
modules/mdz/max_pool/cuRevMaxPool3d.cpp [code]GPUmat GPU implementation of unpooling a max pooling over 3D subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. If more index planes are provided than image planes, only the first corresponding number of planes from the indices will be used. Note: input images must be single precision GPU floats
modules/mdz/max_pool/cuShrink.cpp [code]This GPUmat GPU implementation does the L1 shrinkage function max(abs(z)-beta,0).*sign(z). It operates in place, so the input matrix is also the result
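The shrinkage referenced above is the standard soft-threshold operator used by ISTA/FISTA-style sparse inference; a minimal NumPy sketch (illustrative only, not the GPU code, and not in-place):

```python
import numpy as np

def shrink(z, beta):
    # L1 shrinkage (soft threshold): reduce each magnitude by beta, zero out
    # anything smaller than beta in magnitude, and keep the original sign.
    return np.maximum(np.abs(z) - beta, 0.0) * np.sign(z)
```

For example, with beta = 0.5, an entry of -2.0 shrinks to -1.5 while an entry of -0.3 is zeroed.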
modules/mdz/max_pool/make.m [code]
modules/mdz/max_pool/moduleinit.m [code]
modules/mdz/max_pool/reloadmodule.m [code]
modules/mdz/max_pool/runme.m [code]
modules/mdz/max_pool/testcuMaxPool.m [code]A test script for various gpu cuda pooling and unpooling routines which compares speed for different types of pooling, different sizes of images, and different numbers of images
modules/mdz/max_pool/testMaxMin.m [code]A test script for various gpu max and min variations which compares speed for different types of pooling, different sizes of images, and different numbers of images
modules/mdz/other/cuOther.cu [code]
modules/mdz/other/cuShrink.cpp [code]This GPUmat GPU implementation does the L1 shrinkage function max(abs(z)-beta,0).*sign(z). It operates in place, so the input matrix is also the result
modules/mdz/other/empties.cpp [code]GPUmat function to create an empty array of a certain size (i.e. the array is not initialized, so it is much faster than zeros or ones when initialization is not needed)
modules/mdz/other/make.m [code]
modules/mdz/other/moduleinit.m [code]
[email protected]/all.cpp [code]GPUmat GPU implementation of the matlab all function (see implementation for max)
[email protected]/all.m [code]
[email protected]/any.cpp [code]GPUmat GPU implementation of the matlab any function (see implementation for max)
[email protected]/any.m [code]
[email protected]/dot.m [code]
[email protected]/isinf.cpp [code]GPUmat GPU implementation of the matlab isinf function (see implementation for max)
[email protected]/isinf.m [code]
[email protected]/isnan.cpp [code]GPUmat GPU implementation of the matlab isnan function (see implementation for max)
[email protected]/isnan.m [code]
[email protected]/max.cpp [code]GPUmat GPU implementation of the matlab max function (see implementation for max)
[email protected]/max.m [code]
[email protected]/min.cpp [code]GPUmat GPU implementation of the matlab min function (see implementation for min)
[email protected]/min.m [code]
[email protected]/norm.m [code]
[email protected]/padarray.cpp [code]
[email protected]/padarray.m [code]
[email protected]/sign.cpp [code]GPUmat GPU implementation of the matlab sign function (see implementation for max)
[email protected]/sign.m [code]
PoolingToolbox/abs_avg_pool.m [code]Average pools the absolute value of the input maps within pool_size region
PoolingToolbox/avg_pool.cpp [code]MEX implementation of avg pooling over subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. Note: input images must be single precision floats
PoolingToolbox/avg_pool.m [code]Average pools the input maps within pool_size region
PoolingToolbox/avg_pool3d.cpp [code]MEX implementation of avg pooling over 3D subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. Note: input images must be single precision floats
PoolingToolbox/compilemex.m [code]Compiles all the subsampling mex files which are 10-1000x faster than the included M-files
PoolingToolbox/max_pool.cpp [code]MEX implementation of max pooling over subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. If there are negative values in the input, this pooling takes the maximum absolute value (but keeps its sign as well). Note: input images must be single precision floats
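The sign-preserving abs-max pooling described above can be sketched in pure NumPy. This is a simplified illustration (a single 2-D map, non-overlapping regions, function name invented here), not the MEX interface; the returned index records each winner's position within its region for later unpooling.

```python
import numpy as np

def max_pool_signed(x, ph, pw):
    # Pool non-overlapping ph x pw regions of a 2-D map, keeping the element
    # of largest absolute value (with its sign) and the flat index of each
    # winner within its region.
    H, W = x.shape
    nh, nw = H // ph, W // pw
    blocks = x[:nh * ph, :nw * pw].reshape(nh, ph, nw, pw)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(nh, nw, ph * pw)
    idx = np.abs(blocks).argmax(axis=2)               # winner per region
    pooled = np.take_along_axis(blocks, idx[..., None], axis=2)[..., 0]
    return pooled, idx
```

So a 2x2 region containing [1, -5, 2, 3] pools to -5 (largest magnitude, sign kept), not 3.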
PoolingToolbox/max_pool.m [code]Max pools the input maps within pool_size region
PoolingToolbox/max_pool3d.cpp [code]MEX implementation of max pooling over 3D subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. If there are negative values in the input, this pooling takes the maximum absolute value (but keeps its sign as well). Note: input images must be single precision floats
PoolingToolbox/max_pool3d.m [code]Max pools the input maps within pool_size region
PoolingToolbox/max_pool_gpu.m [code]Max pools the input maps within pool_size region on a GPU
PoolingToolbox/pool_all_layers.m [code]A simple helper function that gets the indices and pooled versions of the input maps (input as a cell array) for each layer of a hierarchical model
PoolingToolbox/pool_wrapper.m [code]A wrapper that abstracts out any type of pooling operation (to reduce planes over certain regions into smaller regions)
PoolingToolbox/reverse_avg_pool.cpp [code]MEX implementation of unpooling an average pooling over subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. The indices input is ignored; it is present only so that every pooling operation takes the same parameter format. Note: input images must be single precision floats
PoolingToolbox/reverse_avg_pool.m [code]Undoes the average pooling by placing the average back into each location within the pool region
PoolingToolbox/reverse_avg_pool3d.cpp [code]MEX implementation of unpooling an avg pooling over 3D subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. The indices input is ignored; it is present only so that every pooling operation takes the same parameter format. Note: input images must be single precision floats
PoolingToolbox/reverse_max_pool.cpp [code]MEX implementation of unpooling a max pooling over subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. If more index planes are provided than image planes, only the first corresponding number of planes from the indices will be used. Note: input images must be single precision floats
PoolingToolbox/reverse_max_pool.m [code]Undoes the max pooling by placing the max back into its indexed location
PoolingToolbox/reverse_max_pool3d.cpp [code]MEX implementation of unpooling a max pooling over 3D subregions of image planes. The size of the subregions can be arbitrary, as can the number of planes. If more index planes are provided than image planes, only the first corresponding number of planes from the indices will be used. Note: input images must be single precision floats
PoolingToolbox/reverse_max_pool3d.m [code]Undoes the max pooling by placing the max back into its indexed location
PoolingToolbox/reverse_max_pool_gpu.m [code]Undoes the max pooling by placing the max back into its indexed location
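The index-based max unpooling described in the entries above can be sketched in pure NumPy (an illustration under assumed shapes, not the toolbox's MEX/GPU interface): each pooled value is placed back at its recorded position inside its region, and every other position becomes zero.

```python
import numpy as np

def reverse_max_pool_sketch(pooled, idx, ph, pw):
    # pooled: nh x nw pooled map; idx: flat index of each winner within its
    # ph x pw region (row-major), as recorded during max pooling.
    nh, nw = pooled.shape
    out = np.zeros((nh * ph, nw * pw), dtype=pooled.dtype)
    for r in range(nh):
        for c in range(nw):
            dr, dc = divmod(int(idx[r, c]), pw)      # offset inside region
            out[r * ph + dr, c * pw + dc] = pooled[r, c]
    return out
```

This is why the pooling indices ("switches") matter for reconstruction: without them the unpooled activation could land anywhere in the region.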
PoolingToolbox/threshold_maps.m [code]Threshold the output of pooled feature maps in order to suppress pooling blocks that do not have strong activations
PoolingToolbox/unpool_all_layers.m [code]A simple helper function that uses the indices and pooled versions of the input maps (input as a cell array) to unpool each layer of a hierarchical model
PoolingToolbox/unpool_wrapper.m [code]A wrapper that abstracts out any type of unpooling operation (to invert pooling of planes over certain regions into larger regions)
PoolingToolbox/Tests/test_conv_max.m [code]A test script for maxminFilter convolutional max filtering
PoolingToolbox/Tests/test_gpu_pooling.m [code]A test script for various gpu pooling and unpooling routines which compares speed for different types of pooling, different sizes of images, and different numbers of images
PoolingToolbox/Tests/test_subspeed.m [code]This tests the speed of different pooling implementations
PoolingToolbox/Tests/testavg.m [code]
PoolingToolbox/Tests/testmax.m [code]
Reconstructing/forward_indices.m [code]Forward propagate the signal from the pixel level to each layer of the deconvolutional network using full convolution with the filters and pooling between layers and return the pooling indices
Reconstructing/forward_prop.m [code]Forward propagate the signal from the pixel level to each layer of the deconvolutional network using full convolution with the filters and pooling between layers
Reconstructing/recon_down.m [code]Reconstruct the input signal from a top_layer down through the filters to some specified bottom_layer
Reconstructing/recon_down_now.m [code]A simple script wrapper for recon_down to use workspace variables to reconstruct the input signal from a top_layer down through the filters to some specified bottom_layer
Reconstructing/reconAll.m [code]The entry point to infer images with a generic deconvolutional network with any number of layers and any generic connections back down to the input image
SPM/patch_CD.m [code]A patch based implementation of Coordinate Descent meant to replicate the mexCD function provided in the SPAMS toolbox
SPM/patch_ISTA.m [code]A patch based implementation of ISTA meant to replicate the mexFistaFlat function provided in the SPAMS toolbox
SPM/Batch/batchBuildHistograms.m [code]This takes the dictionary for descriptors and the feature descriptors and builds, for each feature, a histogram over the dictionary if using K-means, or uses the coefficients from sparse coding if using sparse coding
SPM/Batch/batchCalculateDictionary.m [code]Takes in a set of feature descriptors and builds a dictionary using either K-means or Sparse Coding
SPM/Batch/batchCompilePyramid.m [code]This takes the coefficients of descriptors into the dictionary and forms a spatial pyramid from them
SPM/Batch/batchGenerateConvDescriptors.m [code]Generate the dense grid of convolutional descriptors for each image (based on their feature maps)
SPM/Batch/compile_pyramids.m [code]This compiles the pyramids for classification just like splitCompilePyramids does on the clusters
SPM/Batch/splitBuildPyramid.m [code]This pbs script maker is used to start the classification chain of pbs creation and calls for Caltech101 classification on the HPC cluster
SPM/Batch/splitCompileFeatures.m [code]This brings all the reduced features together, then saves them out to reduced_features.mat
SPM/Batch/splitCompilePyramids.m [code]This is the last step of SPM pipeline bringing together the pyramid_all files into the pyramids for all the images
SPM/Batch/splitGenerateConvDescriptors.m [code]This pbs script maker makes pbs scripts for the training and test set parts, which load the feature maps and make descriptors from them using batchConvDescriptors
SPM/Batch/splitHistJobs.m [code]This launches the waiting function that launches and waits for all the parts to be histogrammed and compiled into pyramids
SPM/Batch/splitHistPyr.m [code]This loads the dictionary and generates pbs scripts which build histograms based on the dictionary and that part of the dataset's feature maps, and then compiles the pyramid for those parts as well
SPM/ConvDescriptors/compare_descriptors.m [code]Plots many different comparisons between SIFT and Layers 1 and 2 Deconvolutional Network descriptors
SPM/ConvDescriptors/compilemex.m [code]Compiles all the subsampling mex files which are 10-1000x faster than the included M-files
SPM/ConvDescriptors/dessimate_absavg.cpp [code]Finds patches of patchSize spaced patchSpacing pixels over from one another in both x and y directions and then returns a larger plane(s) with all these patches spread apart and stitched together, but pooled by taking the average absolute value over some pooling size (not necessarily the same size as the patchSize)
SPM/ConvDescriptors/dessimate_spread.cpp [code]Finds patches of patchSize spaced patchSpacing pixels over from one another in both x and y directions and then returns a larger plane(s) with all these patches spread apart and stitched together
SPM/ConvDescriptors/dessimate_spread.m [code]Finds patches of patchSize spaced patchSpacing pixels over from one another in both x and y directions and then returns a larger plane(s) with all these patches spread apart and stitched together
SPM/ConvDescriptors/GenerateCombinedDescriptors.m [code]This concatenates the scaled versions of the feature maps before forming a joint descriptor (as in GenerateConvDescriptors.m)
SPM/ConvDescriptors/GenerateConcatenatedDescriptors.m [code]This concatenates the two descriptors formed for each layer (after they are formed just like GenerateConvDescriptors but before making the dictionaries)
SPM/ConvDescriptors/GenerateConvDescriptors.m [code]Generate the dense grid of convolutional descriptors for each image (based on their feature maps)
SPM/ConvDescriptors/GenerateHistDescriptors.m [code]Generate the dense grid of convolutional descriptors for each image (based on their feature maps)
SPM/ConvDescriptors/GenerateSiftMaps.m [code]Generate the dense grid of sift descriptors for each image and shapes them into 128 feature maps
SPM/ConvDescriptors/make_descriptors.cpp [code]This concatenates each gridSize region of each plane over all planes for a given image
SPM/ConvDescriptors/make_descriptors.m [code]This concatenates each gridSize region of each plane over all planes for a given image
SPM/ConvDescriptors/unmake_descriptors.m [code]This is the exact opposite of make_descriptors in that it takes in descriptors and returns the original feature map size
SPM/ErrorClassify/CG_LOGREG.m [code]This function computes the logistic regression cost
SPM/ErrorClassify/CG_MLP.m [code]This function computes the logistic regression cost with a single hidden layer in between as well
SPM/ErrorClassify/class_energy.m [code]This classifies images based on their energies under class specific models directly
SPM/ErrorClassify/log_reg_classify.m [code]This classifies images based on their energies under class specific models using logistic regression on these energies
SPM/ErrorClassify/MNIST_linearSVM.m [code]Classifies the resulting feature maps of a deconv net using a linear SVM
SPM/KMeansConvDesc/combineBuildPyramid.m [code]Complete all steps necessary to build a spatial pyramid based on high level convolutional feature maps
SPM/KMeansConvDesc/combined_pyramid.m [code]This is for concatenating first and second layers together before creating pyramids (concatenates the feature maps)
SPM/KMeansConvDesc/km_pyramid.m [code]Builds a spatial pyramid matching SVM classifier with Deconvolutional descriptors that are vector quantized with K-means
SPM/KMeansConvDesc/km_pyramid2.m [code]This is for concatenating multiple layer pyramids (made with K-means) together before creating the histogram intersections
SPM/KMeansConvDesc/km_pyramid3.m [code]This is for computing the histogram intersection kernels for multiple different models and then summing their kernels to classify
SPM/KMeansConvDesc/km_pyramid4.m [code]The classification is done here with liblinear instead of histogram intersection kernels and an svm
SPM/KMeansConvDesc/km_switches.m [code]This attempts to cluster the switches using k-means
SPM/KMeansConvDesc/kmBuildPyramid.m [code]Complete all steps necessary to build a spatial pyramid based on high level convolutional feature maps
SPM/KMeansConvDesc/maxrecon_km_pyramid.m [code]This script reconstructs down from the given top layer to z1's using only a preset number of max activations and then builds a spatial pyramid based on those
SPM/KMeansConvDesc/recon_colmultimax_activation.m [code]This reconstructs multiple images (and z maps) by finding the max in the top_layer and using that (x,y) column for each image
SPM/KMeansConvDesc/recon_km_pyramid.m [code]This script reconstructs down from the given top layer to z1's and then builds a spatial pyramid based on those
SPM/KMeansConvDesc/recon_max_activation.m [code]Reconstruct the input signal from a top_layer down through the filters to some specified bottom_layer using NUM_MAXES max locations and activations to sum together and give a reconstructed image
SPM/KMeansConvDesc/recon_multimax_activation.m [code]This reconstructs multiple images (and z maps) by finding the single max in the top_layer and using that single point for each image
SPM/Lazebnik/sift_linear.m [code]This generates the sift descriptors for a dataset and then classifies with a linear SVM kernel
SPM/Lazebnik/svet_pyramid.m [code]Builds a spatial pyramid matching SVM classifier with SIFT descriptors that are vector quantized with K-means
SPM/LinearClassification/conv_linearSVM.m [code]Classifies the pixel space filter of a deconv net using a linear SVM
SPM/LinearClassification/filter_SVM.m [code]This attempts to classify images based on their strongest activated set of filters
SPM/LinearClassification/indices_linearSVM.m [code]Classifies the resulting feature maps of a deconv net using a linear SVM
SPM/LinearClassification/linearSVM.m [code]Classifies the resulting feature maps of a deconv net using a linear SVM
SPM/LinearClassification/sift_linearSVM.m [code]This generates the sift descriptors for a dataset and then classifies with a linear SVM kernel
SPM/other/compareBatchRecons.m [code]This attempts to compute reconstruction using the max activation of the feature maps of one image using the pooling indices of another
SPM/other/separate_batch_results.m [code]Separates the batch results into individual files so that I can use them in MyBuildPyramid example (just like Svetlana loads individual images)
SPM/other/splitCompareBatchRecons.m [code]This pbs maker attempts to compute reconstruction using the max activation of the feature maps of a set of images using the pooling indices of another set of images
SPM/SparseConvDesc/sp_BuildHistograms.m [code]This uses mexOMP to assign the sparse coded coefficients for all images
SPM/SparseConvDesc/sp_CalculateDictionary.m [code]Create the texton dictionary using sparse coding instead of k-means
SPM/SparseConvDesc/sp_CompilePyramid.m [code]Generate the pyramid from the sparse coefficients
SPM/SparseConvDesc/sp_pyramid.m [code]Builds a spatial pyramid matching SVM classifier with convolutional descriptors that are vector quantized with sparse coding
SPM/SparseConvDesc/sp_pyramid2.m [code]This is for concatenating first and second layer pyramid (made with Sparse Coding) together before creating the histogram intersections
SPM/SparseConvDesc/spBuildPyramid.m [code]Complete all steps necessary to build a spatial pyramid based on high level convolutional feature maps
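sp_CompilePyramid.m and the two sp_pyramid scripts build weighted, per-cell histograms of quantized descriptor codes and compare them with histogram intersection. A small numpy sketch of those two steps, assuming the standard SPM cell weighting (function names are illustrative, not the toolbox's MATLAB code):

```python
import numpy as np

def spatial_pyramid_histogram(xy, codes, img_size, vocab_size, levels=2):
    """Weighted per-cell histograms of codeword indices over a pyramid of
    grids (1x1, 2x2, ... across the levels), concatenated into one vector."""
    h, w = img_size
    parts = []
    for lvl in range(levels):
        n = 2 ** lvl
        # standard SPM weighting: coarsest level 1/2^(L-1), level l>0 gets 1/2^(L-l)
        weight = 1.0 / 2 ** (levels - 1) if lvl == 0 else 1.0 / 2 ** (levels - lvl)
        cy = np.minimum(xy[:, 1] * n // h, n - 1)   # grid cell of each descriptor
        cx = np.minimum(xy[:, 0] * n // w, n - 1)
        for gy in range(n):
            for gx in range(n):
                sel = (cy == gy) & (cx == gx)
                parts.append(weight * np.bincount(codes[sel], minlength=vocab_size))
    return np.concatenate(parts)

def hist_intersection(p, q):
    """Histogram intersection kernel between two pyramid vectors."""
    return np.minimum(p, q).sum()
```

The pairwise intersection values over all image pairs form the kernel matrix that the SVM is trained on.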
SPM/Textures/compare_textures.m [code]Generates my convolutional descriptors and SIFT descriptors for the same set of Brodatz textures and makes comparison plots
SPM/Textures/find_best_confusion_improvement.m [code]This searches the entire confusion matrices for layers 1 and 2 and finds the elements that have the highest improvement of layer 2 over layer 1
SPM/Textures/lay2_over_lay1_and_sift.m [code]This plots the most confused (off-diagonal) pairs of images from the SIFT confusion matrix between categories (sorted by SIFT confusion), restricted to pairs on which our layer 1 model beats SIFT and our layer 2 model beats both layer 1 and SIFT
SPM/Textures/most_confused_versus_sift.m [code]This plots the most confused (off-diagonal) pairs of images from the confusion matrix between categories, for a cell array of confusion matrices to compare against the SIFT confusion matrix
SPM/Textures/my_textures.m [code]Generates my convolutional descriptors for the set of Brodatz textures and attempts to classify them
SPM/Textures/plot_most_confused.m [code]This plots the most confused (off diagonal) pairs of images from the confusion matrix between categories
SPM/Textures/sift_textures.m [code]Generates my SIFT descriptors for the set of Brodatz textures and attempts to classify them
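Several of the texture scripts above (plot_most_confused.m, find_best_confusion_improvement.m) revolve around ranking off-diagonal confusion-matrix entries. A minimal numpy sketch of that ranking (illustrative only, not the toolbox code):

```python
import numpy as np

def most_confused_pairs(conf, top=3):
    """Return the top off-diagonal (true, predicted) category pairs of a
    confusion matrix, sorted by how often they were confused."""
    c = conf.astype(float).copy()
    np.fill_diagonal(c, -np.inf)                    # ignore correct predictions
    flat = np.argsort(c, axis=None)[::-1][:top]     # largest counts first
    return [tuple(int(v) for v in np.unravel_index(i, c.shape)) for i in flat]
```

The plotting scripts would then pull example images from each returned (true, predicted) pair.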
Top_Down/sample_z_map.m [code]Draws samples from the distribution of the feature map z activations over a number of training examples
Top_Down/select_top_num_maxes.m [code]This looks for the top NUM_MAXES maxes, either per image (searching across feature maps) or per map (searching across images), and returns copies of the input feature maps with only these maxes, with no maxes, and the indices of where they came from
Top_Down/select_top_num_multimaxes.m [code]This looks for the top NUM_MAXES maxes, either per image (searching across feature maps) or per map (searching across images), and returns each max in its own feature map
Top_Down/top_down.m [code]Visualizes the filters in pixel space from models above by placing a single one in each of the top feature maps independently and then reconstructing downwards
Top_Down/top_down_cascade.m [code]Uses the model struct in the workspace to attempt to visualize how each layer interacts with the layers below in a downward visualization
Top_Down/top_down_core.m [code]The basis of going from the top of the model down to the bottom
Top_Down/top_down_last.m [code]Visualizes the filters in pixel space from the last run experiment
Top_Down/top_down_maxes.m [code]Uses the model struct in the workspace to visualize the reconstructions using a single max placed in the top layer
Top_Down/top_down_noload.m [code]Uses the model struct in the workspace to visualize the filters in pixel space from models above by placing a single one in each of the top feature maps independently and then reconstructing downwards
Top_Down/top_down_sampling.m [code]Samples from the distribution of feature map activations estimated from the training images and reconstructs the samples in pixel space
Top_Down/top_down_select.m [code]Uses the model struct in the workspace to visualize the filters in pixel space from models above by placing a single one in each of the top feature maps independently and then reconstructing downwards
Top_Down/top_down_sizer.m [code]This computes the size of the region in pixel space that the top layer's receptive field represents
Top_Down/top_down_spm.m [code]Visualizes the dictionary elements of the spatial pyramid matching kernel's dictionary in pixel space
Top_Down/top_down_switches.m [code]Uses the model struct in the workspace to visualize the reconstructions using a single max placed in the top layer
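Most of the top_down_* scripts rely on the same primitive: pooling that records the switch (argmax) location in each block on the way up, so a single max placed in a top feature map can be routed back to its original position on the way down. A 2D numpy sketch of that pool/unpool pair (illustrative, not the toolbox's MATLAB code):

```python
import numpy as np

def max_pool_with_switches(z, p=2):
    """2D max pooling that records the argmax ('switch') in each p x p block."""
    H, W = z.shape
    pooled = np.zeros((H // p, W // p))
    switches = np.zeros((H // p, W // p), dtype=int)
    for i in range(H // p):
        for j in range(W // p):
            block = z[i*p:(i+1)*p, j*p:(j+1)*p]
            switches[i, j] = block.argmax()   # flattened index within the block
            pooled[i, j] = block.max()
    return pooled, switches

def unpool(pooled, switches, p=2):
    """Top-down step: place each pooled value back at its recorded switch."""
    H, W = pooled.shape
    z = np.zeros((H * p, W * p))
    for i in range(H):
        for j in range(W):
            di, dj = divmod(switches[i, j], p)
            z[i*p + di, j*p + dj] = pooled[i, j]
    return z
```

In the full top-down pass the unpooled maps would then be convolved with the layer's filters (summed over the connectivity matrix) to reach the layer below, repeating until pixel space.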
Training/trainAll.m [code]The entry point to train a generic deconvolutional network with any number of layers and any generic connections back down to the input image