This is a cross-validator object that runs a cross-validation procedure multiple times (as specified by the num_resample_runs property) and returns decoding results calculated over the resample runs and cross-validation splits. This cross-validator calculates decoding measures based on the zero-one loss function (i.e., classification accuracy and mutual information based on the confusion matrix). If a classifier is used that returns decision values, then one can also get results based on these decision values, normalized rank results, and results based on the area under ROC curves that are calculated separately for each class. More information about the methods, the different properties that can be set, and the different results that can be returned is given below.
cv = standard_resample_CV(the_datasource, the_classifier, the_feature_preprocessors)
This constructor sets the datasource and classifier objects that will be used in the cross-validation procedure. One can also optionally pass a cell array of feature preprocessor objects that will be applied (in the order they are listed in the cell array) prior to the data being passed to the classifier.
DECODING_RESULTS = run_cv_decoding(cv)
This method runs a cross-validation decoding procedure multiple times (in a bootstrap-like manner) using different splits of the data that are generated by the datasource, and returns the results in the DECODING_RESULTS structure.
The following properties can be set to change the behavior of this CV object.
num_resample_runs (default = 50)
Sets the minimum number of resample runs that the decoding procedure will execute.
test_only_at_training_times (default = 0)
If this property is set to zero, then the classifier will be trained and tested at all times, which will create a “temporal cross-training” matrix of results containing the decoding accuracies for training at time 1 and testing at time 2. This allows one to test whether information is contained in a dynamic population code (e.g., see Meyers et al., J. Neurophysiology, 2008). If this property is set to 1, then results will only be computed for training and testing at the same time, which can make the run_cv_decoding method run much faster, particularly if the classifier is slow.
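Conceptually, the temporal cross-training matrix is produced by training at every time t1 and testing at every time t2. The following sketch illustrates the idea in Python (the toolbox itself is MATLAB), with a minimal nearest-centroid classifier standing in for the toolbox's classifier objects; the function and variable names are hypothetical, not toolbox API.

```python
import numpy as np

def temporal_cross_training(X_train, y_train, X_test, y_test):
    """Train at every time t1 and test at every time t2.

    X_train, X_test: [num_points x num_features x num_times] arrays.
    y_train, y_test: class labels for each point.
    Uses a minimal nearest-centroid classifier as a stand-in for the
    toolbox's classifier objects.  Returns a [num_train_times x
    num_test_times] accuracy matrix.
    """
    num_times = X_train.shape[2]
    classes = np.unique(y_train)
    accuracy = np.zeros((num_times, num_times))
    for t1 in range(num_times):  # training time
        # "train": compute one centroid per class using data from time t1
        centroids = np.stack([X_train[y_train == c, :, t1].mean(axis=0)
                              for c in classes])
        for t2 in range(num_times):  # test time
            # classify each test point at time t2 by its nearest centroid
            dists = np.linalg.norm(
                X_test[:, :, t2][:, None, :] - centroids[None, :, :], axis=2)
            predictions = classes[np.argmin(dists, axis=1)]
            accuracy[t1, t2] = np.mean(predictions == y_test)
    return accuracy
```

The diagonal of this matrix corresponds to training and testing at the same time (what test_only_at_training_times = 1 computes), while off-diagonal entries test whether a code learned at one time generalizes to another.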
stop_resample_runs_only_when_specfic_results_have_converged (default = 0)
Setting the different fields in this structure causes the resample runs to continue beyond the number specified by num_resample_runs if the results have not converged to a stable estimate. stop_resample_runs_only_when_specfic_results_have_converged has the following fields, which control the resample-run stopping criteria for the different types of results:
.zero_one_loss_results: controls whether the zero-one loss results have converged
.normalized_rank_results: controls whether the normalized rank results have converged
.decision_values: controls whether the decision value results have converged
.combined_CV_ROC_results: controls whether the combined CV ROC results have converged
.separate_CV_ROC_results: controls whether the separate CV ROC results have converged
By default all these fields are set to empty, meaning that no convergence stopping criteria are applied.
Setting any of these fields to a particular value causes the run_cv_decoding method to keep running the resample run loop until the given mean result (over resample runs) changes by less than the specified value (over all training and test time periods) when any one resample run is left out. This can be useful for speeding up the run time once the decoding results have converged to a stable value. For example, one could set num_resample_runs to a lower number (say 20) and then set .zero_one_loss_results to a smallish value (say .1), which might cause fewer than the default 50 resample runs to be executed while giving results that are almost as accurate – i.e., there would be at most a .1 change in the decoding accuracy (at any point in time) if the most sensitive resample run were left out. If any of these fields are set, the minimum number of resample runs specified by the num_resample_runs property will still be executed, and additional resample runs will then be executed until the desired convergence level is achieved. There is also an additional field .stop_criteria_is_absolute_result_value (default = 1), which specifies whether the value set should be taken as an absolute change in the decoding accuracy – e.g., the actual zero-one decoding result values should change by less than 1 when any resample run is left out. If this field is set to 0, then the values specified are interpreted as a percentage change, relative to the maximum of the mean decoding accuracy, that should not occur when any resample run is left out – i.e., a value of 1 would mean that leaving the ith resample run out should not change the results at any time point by more than 1% of the maximum decoding accuracy (since the scale of a plot is determined relative to the maximum decoding accuracy, this shows how much variance there would be in a plot of the results due to not using more resample runs). [added in NDT version 1.4]
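The leave-one-resample-run-out convergence check described above can be sketched as follows. This is an illustrative Python sketch of the logic (the toolbox itself is MATLAB); the function and argument names are hypothetical, not the toolbox's internal ones.

```python
import numpy as np

def results_have_converged(results, max_change, relative_to_max=False):
    """Leave-one-resample-run-out stopping criterion.

    results: [num_resample_runs x num_train_times x num_test_times] array of
    mean decoding results for each resample run.  Returns True when leaving
    out any one resample run changes the mean result (at every train/test
    time) by less than max_change.  If relative_to_max is True, max_change
    is interpreted as a percentage of the maximum mean decoding result.
    """
    num_runs = results.shape[0]
    overall_mean = results.mean(axis=0)
    worst_change = 0.0
    for i in range(num_runs):
        # mean result with the i-th resample run left out
        loo_mean = np.delete(results, i, axis=0).mean(axis=0)
        worst_change = max(worst_change, np.abs(loo_mean - overall_mean).max())
    if relative_to_max:
        worst_change = 100.0 * worst_change / overall_mean.max()
    return worst_change < max_change
```

In this formulation, .stop_criteria_is_absolute_result_value = 1 corresponds to relative_to_max=False, and 0 corresponds to relative_to_max=True.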
If a classifier is used that returns decision values along with zero-one loss results, then additional decoding accuracy results can be calculated and returned by setting the following options:
.normalized_rank (default = 1)
Calculates and returns normalized rank results.
.decision_values (default = 1)
Returns the decision values generated by the classifier to make a classification.
.extended_decision_values (default is 0)
If this property is set to 1, then all the decision values for the correct class are saved. If this property is set to 2, then all decision values from all classes are saved (doing this can potentially take up a lot of memory/disk space).
.ROC_AUC (default = 1)
Returns area under ROC curve results. These results are calculated separately for each class k, with the test decision values from class k being the positive examples and all other decision values being the negative examples. The results can be calculated separately from the test points in each CV split (save_results.ROC_AUC = 3) or by combining all decision values over all cross-validation splits (save_results.ROC_AUC = 2). If save_results.ROC_AUC = 1, then both the separate CV results and the combined CV results will be saved.
.mutual_information (default = 1)
Returns estimates of mutual information calculated from the confusion matrix.
A number of parameters can be set to create confusion matrices for the results. A confusion matrix is a matrix in which the columns express the true class label and the rows express the predicted label. For example, if column 2, row 3 of a confusion matrix had a value of 7, it would mean that there were 7 times in which class 2 was mistakenly predicted to be class 3 (summing the columns gives the total number of examples for each class, which can be used to turn the confusion matrix into a misclassification probability distribution). The following parameters can be set to create confusion matrices:
.create_confusion_matrix (default is 1)
Creates and saves a zero-one loss confusion matrix.
.save_confusion_matrix_only_train_and_test_at_same_time (default is 1)
Saves the confusion matrix only for when training and testing was done at the same time.
.create_all_test_points_separate_confusion_matrix (default is 0)
Creates a confusion matrix in which all test points are kept separate, i.e., if there are two test points from the same class on a given CV split, these two test points will be given separate columns in the confusion matrix (this is useful when labels have been remapped using the generalization_DS datasource).
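The confusion matrix convention described above (columns = true class, rows = predicted class) can be sketched as follows. This is an illustrative Python sketch, not toolbox code; the function name is hypothetical.

```python
import numpy as np

def build_confusion_matrix(true_labels, predicted_labels):
    """Confusion matrix with columns = true class and rows = predicted class.

    Labels are remapped onto consecutive indices; label_mapping gives the
    class corresponding to each row/column (assumes every class appears
    among the true labels, so no column sum is zero).
    """
    label_mapping = np.unique(np.concatenate([true_labels, predicted_labels]))
    index = {c: i for i, c in enumerate(label_mapping)}
    cm = np.zeros((len(label_mapping), len(label_mapping)), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[index[p], index[t]] += 1  # row = predicted, column = true
    # summing each column gives the number of examples of that class, so the
    # counts can be turned into misclassification probabilities:
    probs = cm / cm.sum(axis=0, keepdims=True)
    return cm, probs, label_mapping
```

The returned label_mapping plays the same role as the label-mapping vector saved alongside the toolbox's confusion matrices when the class labels are not consecutive integers.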
The display_progress options allow one to display the decoding results (for different result types) while the decoding procedure is running. The results displayed show the mean decoding results as well as a measure of the variability of the results, which gives a sense of whether enough resample iterations have been run for the results to converge to a stable solution. The measure of variability is calculated by computing the mean decoding results with the i-th resample run left out (i.e., as if there were one less resample run). This is done for each resample run, and the standard deviation is taken over these one-resample-left-out means. Thus this measure gives a sense of how much the results would vary if one less resample iteration had been run, which gives a rough sense of whether the results have converged (overall these numbers should be very small). The following options allow one to display the progress of different result types. The results for the different types are displayed for all times and are separated by NaNs (i.e., mean_type1 stdev_type1 NaN mean_type2 stdev_type2 NaN, etc.).
.resample_run_time (default = 1)
Displays how many resample runs have completed, the amount of time the last resample run took, and an estimate of the time when the code will be done running.
.zero_one_loss (default = 1)
Display zero-one loss results.
.normalized_rank (default = 0)
Display normalized rank results.
.decision_values (default = 0)
Display decision values.
.separate_CV_ROC_results (default = 0)
Display ROC AUC results computed separately on each CV split.
.combined_CV_ROC_results (default = 0)
Display ROC AUC results computed by combining points from all CV splits.
.training_time_to_display_results (default = -1)
The training time for which to display the test result values. A value of -1 means that the displayed results will be for training and testing at the same time points (or, if only one training time was used, that time will be used for all test times).
.convergence_values (default = 0)
Displays the current resample run convergence values and convergence target value (see the explanation above of the field stop_resample_runs_only_when_specfic_results_have_converged for more information). [added in NDT version 1.4]
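The one-resample-left-out variability measure that display_progress shows alongside the mean results can be sketched as follows. This is an illustrative Python sketch (the toolbox itself is MATLAB); the function name is hypothetical.

```python
import numpy as np

def leave_one_out_variability(results):
    """Variability measure shown by display_progress.

    For each resample run i, compute the mean decoding result with run i
    left out, then take the standard deviation over these leave-one-out
    means.  results is a [num_resample_runs x num_times] array of per-run
    mean decoding results; small output values suggest convergence.
    """
    num_runs = results.shape[0]
    loo_means = np.stack([np.delete(results, i, axis=0).mean(axis=0)
                          for i in range(num_runs)])
    return loo_means.std(axis=0)
```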
The run_cv_decoding method returns a structure called DECODING_RESULTS that contains the following properties.
If any of the feature preprocessing algorithms returned information to be saved via their get_current_FP_info_to_save method, then this structure will contain the additional information that the FP algorithm wanted saved.
This structure contains additional information about parameters used in the decoding procedure. This structure has the following fields:
The number of test points on each CV split.
The number of training points on each CV split.
The dimensionality of the training/test data points.
A vector containing the unique test labels that were used.
The number of unique labels (classes).
The number of cross-validation splits of the data.
The number of resample runs the cross-validator went through.
The toolbox version number.
The class name of the classifier.
The class names of the feature preprocessors.
If the datasource used has a method called get_properties, then the properties returned by this method will be saved in the structure DS_PARAMETERS.
This structure contains the main (zero-one loss) decoding results. The following results can be returned.
A [num_resample_runs x num_CV_splits x num_training_times x num_test_times] matrix that contains all the decoding results separate for each resample run and cross-validation split.
A [num_training_times x num_test_times] matrix that contains the mean decoding results averaged over all resample runs and CV splits.
If the confusion matrix properties have been set (e.g., cv.confusion_matrix_params.create_confusion_matrix == 1), then this structure can contain the following fields:
A [num_predicted_classes x num_actual_classes x num_training_times x num_test_times] confusion matrix specifying how often a test point j was classified as belonging to class i (where j indicates column indices and i indicates row indices). If .save_confusion_matrix_only_train_and_test_at_same_time = 1, then the confusion matrix is [num_predicted_classes x num_actual_classes x num_training_times] large and only contains the confusion matrices for training and testing at the same time period (which saves a lot of disk space when saving the results).
If the labels given to the classifier are not consecutive integers, this vector indicates how the labels have been remapped onto the rows and columns of the confusion matrix (i.e., the first value in the vector indicates the class in the first row/column of the confusion matrix, etc.).
If the create_all_test_points_separate_confusion_matrix flag is set to 1, then this variable contains a confusion matrix that is [num_test_points x num_actual_classes x num_training_times x num_test_times] large, with each test point in a given cross-validation split being given a separate row in the confusion matrix (this confusion matrix will only differ from the regular confusion matrix if there are multiple test points from the same class in a given cross-validation split). This matrix can be useful if the labels have been remapped to different classes using the generalization_DS datasource in order to see what classes were confused in the original unremapped labels.
This structure contains information about the variability of the results. This structure has the following fields:
Each cross-validation run produces a value for each test point (i.e., 0 and 1’s, normalized ranks, or decision values). This [num_resamples, num_CV_splits, num_train, num_test] matrix contains the standard deviation over these test points for each cross-validation run.
A [num_resamples, num_train, num_test] matrix that is the same as .all_single_CV_vals except that all the values from the cross-validation runs are combined together before the standard deviation is taken (this is done separately for each resample run).
A [num_resamples, num_train, num_test] matrix that is calculated by taking the mean decoding results in each cross-validation run (i.e., the mean of the 0/1's, ranks, or decision values of the points in one cross-validation split) and then taking the standard deviation over these mean cross-validation-split results (this is done separately for each resample run).
This is the same as .over_CVs but all the values from all the resample runs are combined before the standard deviation is taken.
This [num_train, num_test] matrix is calculated by taking the mean over all the mean CV values for each resample run and then taking the standard deviation over the different resample runs.
It should be noted that .over_CVs, .over_CVs_combined_over_resamples and .over_bootstraps could all have been computed after running the decoding experiment using the values in .decoding_results, but we precompute them here for convenience.
For classifiers that return decision values, this cross-validator object can return additional results in the structures NORMALIZED_RANK_RESULTS, DECISION_VALUES and ROC_AUC_RESULTS that are described below.
This structure contains the normalized rank results. Normalized rank results are based on using the prediction values from the classifier to create an ordered list of predictions (i.e., the most likely class is x, the second most likely class is y, etc.) and then assessing how far down the list the correct label is. The results are normalized so that perfect prediction has a value of 1, chance has a value of .5, and having the last prediction be the correct one gives a value of 0. NORMALIZED_RANK_RESULTS has the same .mean_decoding_results and .stdev results as ZERO_ONE_LOSS_RESULTS but different confusion matrix values. The confusion matrix for the normalized rank results is in the field .confusion_matrix_results.rank_confusion_matrix and is a [num_predicted_classes x num_actual_classes x num_training_times x num_test_times] matrix whose value in the ith row and jth column is the average normalized rank of the ith predicted class when test points from the jth actual class were presented (i.e., how high up the predicted-labels list the ith class is when test points from the jth class are shown). There is also a field .confusion_matrix_results.rank_confusion_matrix_label_mapping that contains the labels that correspond to the columns of the rank confusion matrix.
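The normalized rank for a single test point can be sketched as follows. This is an illustrative Python sketch of the measure (the toolbox itself is MATLAB); the function name is hypothetical.

```python
import numpy as np

def normalized_rank(decision_values, correct_class_index):
    """Normalized rank of the correct class for one test point.

    decision_values holds one value per class (higher = more likely).
    Returns 1 when the correct class has the highest decision value,
    0 when it is ranked last, and 0.5 on average under chance.
    """
    num_classes = len(decision_values)
    # order classes from most likely (rank 0) to least likely
    order = np.argsort(decision_values)[::-1]
    rank = int(np.where(order == correct_class_index)[0][0])
    return 1.0 - rank / (num_classes - 1)
```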
This structure contains the decision values. It has all the same fields as ZERO_ONE_LOSS_RESULTS except that there is no confusion matrix for this result type. If save_results.extended_decision_values == 1, then a matrix .classifier_decision_values of dimension [num_resample_runs x num_cv_splits x num_test_points x num_train_times x num_test_times] will be returned that has the decision values for the correct/actual class for each test point. If save_results.extended_decision_values == 2, then a matrix .all_classifier_decision_values is returned that has dimensions [num_resample_runs x num_cv_splits x num_test_points x num_classes x num_train_times x num_test_times] and contains all the decision values for every class (not just the correct class). Thus this result contains all the information from the whole decoding process (and consequently it could take up a lot of disk space/memory). A matrix .all_classifier_decision_labels of dimension [num_resample_runs x num_cv_splits x num_test_points x num_train_times x num_test_times] will also be returned that contains all the labels that were used. From these two matrices it is possible to derive all other decoding measures that are returned.
This structure contains results that measure the area under receiver operating characteristic (ROC) curves that are created separately for each class from the decision values. ROC curves graph the proportion of positive test points correctly classified (true positive rate) as a function of the proportion of negative test points incorrectly classified (false positive rate). The area under this curve (ROC AUC) gives a measure of decoding accuracy that has a number of useful properties, including the fact that it is invariant to the ratio of positive to negative examples, and that it can be used to determine decoding accuracies when multiple correct classes are present at the same time. The results in this structure have the following fields:
This structure calculates the ROC AUC separately for each cross-validation split of the data. The advantage of calculating this separately for each cross-validation split is that it maintains the independence of the decoding results across cross-validation splits. The disadvantage is that most of the time there will only be one or a few test points for each class, which will lead to a highly variable estimate of the ROC AUC. For this reason it is often better to use the .combined_CV_ROC_results results described below.
This structure calculates the ROC AUC by combining all the decision values from all the test points across the different cross-validation splits. While the test points should all be independent from one another, the classifiers used to evaluate these test points are highly related (since they are most likely trained on very similar data), so these results could be slightly biased. However, since a much larger number of points is used to create these ROC curves, the results are likely to be more sensitive (i.e., these results are slightly more likely to contain type 1 errors but much less likely to produce type 2 errors compared to using the .separate_CV_ROC_results structure).
Both the .separate_CV_ROC_results and .combined_CV_ROC_results structures have the following fields:
This contains a [num_resample_runs x (num_CV_splits) x num_classes x num_training_times x num_test_times] matrix that contains the ROC AUC results (for the separate_CV_ROC_results there are 5 ‘dimensions’ while for the combined_CV_ROC_results results there are only 4 dimensions since the results combine data from the different cross-validation splits when calculating the ROC AUC results).
This is a [num_training_times x num_test_times] sized matrix that contains the mean ROC AUC values averaged over resample runs and the different classes (and, for separate_CV_ROC_results, also averaged over CV splits).
This field contains the following measure of variability of the ROC AUC results:
.over_classes(num_resample_runs, num_CV_splits, num_train_times, num_test_times)
Computes the standard deviation over the results for each class.
Takes the mean over all the classes (and cross-validation runs) and computes the standard deviation over the resample runs. Additionally, .separate_CV_ROC_results has the following measure:
.over_CVs(num_resample_runs, num_classes, num_train_times, num_test_times)
Takes the mean over classes and computes the variability over the cross-validation runs.
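The one-class-versus-rest ROC AUC described above can be sketched as follows. This is an illustrative Python sketch (the toolbox itself is MATLAB) using the rank-sum formulation of the AUC; the function name is hypothetical.

```python
import numpy as np

def roc_auc_one_class(decision_values, labels, positive_class):
    """Area under the ROC curve for one class.

    Test points from positive_class are the positive examples and all other
    test points are the negatives.  Uses the rank-sum (Mann-Whitney)
    formulation: the AUC equals the probability that a randomly chosen
    positive example receives a higher decision value than a randomly
    chosen negative one (ties count as half).
    """
    decision_values = np.asarray(decision_values, dtype=float)
    pos = decision_values[labels == positive_class]
    neg = decision_values[labels != positive_class]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Computing this per CV split (few points, high variance) versus on the pooled decision values from all splits corresponds to the .separate_CV_ROC_results and .combined_CV_ROC_results structures described above.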
This structure contains results that measure the mutual information that is calculated from the confusion matrix (created from the 0-1 loss results). For more information on the relationship between mutual information and decoding see Quian Quiroga and Panzeri, Nature Reviews Neuroscience, 2009. The results in this structure have the following fields:
The mutual information in this structure is calculated based on a confusion matrix that combines all the results from all the resample runs (i.e., it is the mutual information calculated from the confusion matrix that is returned in the field ZERO_ONE_LOSS_RESULTS.confusion_matrix_results.confusion_matrix). This mutual information should be the most accurate (and have the least bias during any baseline period), and the accuracy of these results should increase as more resample runs are used. The disadvantage of this estimate is that no estimate of the variability of this measure is possible since it uses data from all resample runs, so one cannot plot error bars with this measure.
The mutual information in this structure is calculated based on a confusion matrix that is created separately for each resample run. This mutual information will most likely be biased upward due to limited sampling unless a very large number of test points is used (thus the mutual information in any baseline period is likely to be greater than 0). The advantage of using this method is that one can estimate a measure of this mutual information's variability calculated over resamples, which is in the field .stdev.over_resamples.
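Estimating mutual information from a confusion matrix of counts can be sketched as follows. This is an illustrative Python sketch of the standard plug-in estimate (the toolbox itself is MATLAB); the function name is hypothetical.

```python
import numpy as np

def mutual_information_from_confusion_matrix(cm):
    """Mutual information I(predicted; actual) in bits, estimated from a
    confusion matrix of counts (rows = predicted class, columns = actual
    class).  A diagonal matrix gives maximal information; a matrix of
    uniform counts gives zero.
    """
    joint = cm / cm.sum()                       # joint distribution p(pred, actual)
    p_pred = joint.sum(axis=1, keepdims=True)   # marginal over predicted classes
    p_actual = joint.sum(axis=0, keepdims=True) # marginal over actual classes
    nonzero = joint > 0                         # 0 * log(0) terms contribute nothing
    return (joint[nonzero] *
            np.log2(joint[nonzero] / (p_pred @ p_actual)[nonzero])).sum()
```

Because this plug-in estimate is biased upward for small counts, the per-resample-run estimate described above tends to exceed zero even in a baseline period, whereas pooling counts over all resample runs reduces the bias.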