Generalization analysis tutorial

The following tutorial shows how to use the Neural Decoding Toolbox (NDT) to conduct a generalization analysis, which can test whether a population of neural activity contains information in an invariant/abstract representation. In particular, we will use the Zhang-Desimone 7 object dataset to examine how invariant neural representations in the Inferior Temporal (IT) cortex are to changes in the position of stimuli. This tutorial assumes that one is already familiar with the basics of the NDT as covered in the introductory tutorial.

Testing invariant neural representations using the NDT

An important step in solving many complex tasks involves creating abstract/invariant representations from widely varying input patterns. For example, in order to act appropriately in social settings it is important to be able to recognize individual people. However, the images of a particular person that are projected on our retinas can be very different because the person might be at different distances from us, in different lighting conditions, etc. Thus, at some level in our brains, there must be a neural representation that has abstracted away the details present in particular images in order to create abstract/invariant representations that are useful for behavior.

A powerful feature of the Neural Decoding Toolbox is that it can be used to test whether a particular population of neural activity has created representations that are invariant to particular transformations. To do such a ‘generalization analysis’, one can train a classifier under one set of conditions, and then see if the classifier can generalize to a new related set of conditions in which a particular transformation has been applied. The datasource object generalization_DS is designed for this purpose and we will explain how to use it below.

Using the Zhang-Desimone 7 object dataset to test position invariance

In this tutorial we will use the Zhang-Desimone 7 object dataset to test how invariant neural representations in Inferior Temporal (IT) cortex are to changes in the retinal position of objects. The dataset consists of neural responses to 7 different objects that were shown to a monkey at three different retinal locations. To test how invariant the neural representations in IT are, we will train the classifier with data from one location, and then test the classifier either at the same location or at a different location. If the neural representations in IT are invariant to position, then training and testing a classifier at different locations should yield the same level of performance as training and testing a classifier at the same location.

Setting the path and binning the data

Before starting this tutorial, make sure that the path to the NDT has been set as described here. For this tutorial we will use binned-format data that consists of the firing rate in a 400 ms window that starts 100 ms after the onset of the stimulus. The following code shows how to create this binned data.

% change the line below to the directory where your raster format data .mat files are stored
raster_file_directory_name = 'Zhang_Desimone_7objects_raster_data/';
save_prefix_name = 'Binned_Zhang_Desimone_7objects_data';
 
bin_width = 400;
step_size = 400;
start_time = 601;
end_time = 1000;
 
create_binned_data_from_raster_data(raster_file_directory_name, save_prefix_name, bin_width, step_size, start_time, end_time);
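
To verify that the binning worked, one can load the saved file and inspect its contents. The sketch below assumes the standard NDT binned-format variables binned_data and binned_labels; the file name reflects the binning parameters used above.

% a quick sanity check on the binned data (assumes the standard binned-format
% variables binned_data and binned_labels created by the NDT)
load Binned_Zhang_Desimone_7objects_data_400ms_bins_400ms_sampled_601start_time_1000end_time

length(binned_data)                             % number of recorded sites
size(binned_data{1})                            % [num_trials x num_time_bins] for the first site
unique(binned_labels.combined_ID_position{1})   % the combined ID and position labels for the first site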

Creating a classifier and a preprocessor

Next we will create a classifier object and a preprocessor object. We will use the same classifier and preprocessor as was used in the basic tutorial.

the_classifier = max_correlation_coefficient_CL;
the_feature_preprocessors{1} = zscore_normalize_FP;

Using the generalization_DS to train and test at different locations

In order to train with data from one location and test with data from a different location, we will use the generalization_DS datasource. To use this datasource, we first need to specify which labels belong to each training class and which labels belong to each test class using two cell arrays, which we will call the_training_label_names and the_test_label_names, respectively. Each entry in these cell arrays corresponds to the labels for one class. For example, if we set the_training_label_names{1} = {'car_upper'} and the_test_label_names{1} = {'car_lower'}, this means that the first class will be trained with data from trials in which the car was shown in the upper position, and tested with data from trials in which the car was shown in the lower position.

To create training data for each object identity at location 1 (the upper position), and test data for each object at location 3 (the lower position), we can use the following code:

id_string_names = {'car', 'couch', 'face', 'kiwi', 'flower', 'guitar', 'hand'};
 
for iID = 1:7   
   the_training_label_names{iID} = {[id_string_names{iID} '_upper']};
   the_test_label_names{iID} = {[id_string_names{iID} '_lower']};
end
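
As a quick check, displaying the first entry of each cell array should show the mapping described above:

the_training_label_names{1}   % should display {'car_upper'}
the_test_label_names{1}       % should display {'car_lower'}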

Now that we have created the cell arrays that specify which labels are to be mapped onto which classes, we can create the generalization_DS object. The first three arguments to the constructor of this object are the same as those used for the basic_DS object, and the last two arguments are the training and test label remapping cell arrays we just created.

num_cv_splits = 18;
 
binned_data_file_name = 'Binned_Zhang_Desimone_7objects_data_400ms_bins_400ms_sampled_601start_time_1000end_time';
specific_labels_names_to_use = 'combined_ID_position';  % use the combined ID and position labels
 
ds = generalization_DS(binned_data_file_name, specific_labels_names_to_use, num_cv_splits, the_training_label_names, the_test_label_names);

We can then create a cross-validator as we did in the basic tutorial and get the results for training at the upper location and testing at the lower location.

the_cross_validator = standard_resample_CV(ds, the_classifier, the_feature_preprocessors);
the_cross_validator.num_resample_runs = 10;
DECODING_RESULTS = the_cross_validator.run_cv_decoding;
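
Once the decoding run is complete, one can take a quick look at the resulting accuracy. As discussed in the plotting section below, the mean zero-one loss decoding accuracy is stored in DECODING_RESULTS.ZERO_ONE_LOSS_RESULTS.mean_decoding_results; because we used a single 400 ms bin, this field should contain just a single number here.

% display the mean decoding accuracy for training at the upper location and
% testing at the lower location (a single value, since only one time bin was used)
DECODING_RESULTS.ZERO_ONE_LOSS_RESULTS.mean_decoding_results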

Training and testing at all locations

We will now create a full set of 9 results based on training at each of the three locations and testing at each of the three locations. This can be done with two loops, one over the training locations and one over the test locations, creating a new datasource on each iteration. It should be noted that the generalization_DS is built in such a way that if a particular original label is mapped to the same training and test class (e.g., if the_training_label_names{1} = {'car_upper'} and also the_test_label_names{1} = {'car_upper'}), the data used in the training set will still come from different trials than the data used in the test set. Thus the cross-validation splits will still be valid because none of the same data will appear in both the training and test sets (if this mapping were done for all classes, one would end up with the same results as using the basic_DS). Because there is no 'data leakage', the following code allows for fair comparisons between training and testing at the same location vs. training at one location and testing at a different location.

mkdir position_invariance_results;  % make a directory to save all the results
num_cv_splits = 18;
 
id_string_names = {'car', 'couch', 'face', 'kiwi', 'flower', 'guitar', 'hand'};
pos_string_names = {'upper', 'middle', 'lower'};
 
for iTrainPosition = 1:3
   for iTestPosition = 1:3
 
      for iID = 1:7
            the_training_label_names{iID} = {[id_string_names{iID} '_' pos_string_names{iTrainPosition}]};
            the_test_label_names{iID} =  {[id_string_names{iID} '_' pos_string_names{iTestPosition}]};
      end
 
      ds = generalization_DS(binned_data_file_name, specific_labels_names_to_use, num_cv_splits, the_training_label_names, the_test_label_names);
       
      the_cross_validator = standard_resample_CV(ds, the_classifier, the_feature_preprocessors);
      the_cross_validator.num_resample_runs = 10;
      DECODING_RESULTS = the_cross_validator.run_cv_decoding;
 
      save_file_name = ['position_invariance_results/Zhang_Desimone_pos_inv_results_train_pos' num2str(iTrainPosition) '_test_pos' num2str(iTestPosition)];
 
      save(save_file_name, 'DECODING_RESULTS')
 
   end
end

Plotting the results

To plot all of these results we will write some custom code. The standard decoding accuracy results are saved in the field DECODING_RESULTS.ZERO_ONE_LOSS_RESULTS.mean_decoding_results. For each training location, we will plot the results of testing at all three locations, and we will add titles and axis labels to make the plots easier to read.

position_names = {'Upper', 'Middle', 'Lower'};
 
for iTrainPosition = 1:3
   
   for iTestPosition = 1:3
       load(['position_invariance_results/Zhang_Desimone_pos_inv_results_train_pos' num2str(iTrainPosition) '_test_pos' num2str(iTestPosition)]);
 
        all_results(iTrainPosition, iTestPosition) = DECODING_RESULTS.ZERO_ONE_LOSS_RESULTS.mean_decoding_results;
   end
 
   subplot(1, 3, iTrainPosition)
   bar(all_results(iTrainPosition, :) .* 100);
 
   title(['Train ' position_names{iTrainPosition}])
   ylabel('Classification Accuracy (%)');
   set(gca, 'XTickLabel', position_names);
   xlabel('Test position')
   xLims = get(gca, 'XLim');
   line(xLims, [1/7 1/7] .* 100, 'color', [0 0 0]);  % add a line at the chance decoding accuracy (100/7 percent)
 
end
 
set(gcf, 'position', [250 300 950 300])  % expand the figure

Looking at the results, we can see that the best decoding accuracies are always obtained when training and testing the classifier at the same location. However, the results are well above chance when training the classifier at one location and testing it at a different location, showing that there is a large degree of position invariance in the Inferior Temporal Cortex.
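
As a rough summary of this pattern (a minimal sketch, assuming the all_results matrix from the plotting code above is still in the workspace), one can compare the average accuracy for training and testing at the same position (the diagonal of all_results) with the average accuracy for training and testing at different positions (the off-diagonal entries):

% average accuracy when training and testing at the same position (diagonal)
% vs. at different positions (off-diagonal entries), expressed in percent
same_position_accuracy = mean(diag(all_results)) * 100
different_position_accuracy = mean(all_results(~eye(3))) * 100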