The “weights” used will depend on the classifier (some classifiers have weights, others do not). I don’t think there is a built-in function that returns the weights, particularly since they change every time the classifier is trained.
There are two options if you are interested in questions of what I was calling “information sparsity” (or what I called “compactness” in my 2008 J. Neurophysiol paper). The first is to write your own custom code to capture this information from the classifier, which might not be that easy. The second way (which is much easier) is to run the analysis using only the k most selective neurons for a range of k values (e.g., run the analysis for k = 2, k = 4, k = 8, etc.). This can be done using the feature preprocessor select_or_exclude_top_k_features_FP. If the classifier works just as well using only 8 neurons as it does using hundreds of neurons, this tells you there is an information-rich sparse subset of neurons that contains all the information that is available in the whole population (check out figure 4 of my 2008 J. Neurophysiol paper and figure S2 of my 2012 PNAS paper).
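To make the second option concrete, here is a minimal, self-contained Python sketch of the idea (this is *not* the Neural Decoding Toolbox itself, and the synthetic data, the selectivity index, and the nearest-centroid classifier are all illustrative choices): rank neurons by how selective they are on the training data, then decode using only the top-k neurons for a range of k values and see where accuracy saturates.

```python
import random

random.seed(0)

NUM_NEURONS = 50       # total population size (made up for illustration)
INFORMATIVE = 8        # only the first 8 neurons carry class information
TRIALS_PER_CLASS = 40
SHIFT = 0.6            # firing-rate shift for informative neurons

def mean(xs):
    return sum(xs) / len(xs)

def simulate_trial(label):
    # One trial = one firing-rate value per neuron; only the informative
    # neurons shift their mean rate with the stimulus class.
    return [random.gauss(SHIFT * label if i < INFORMATIVE else 0.0, 1.0)
            for i in range(NUM_NEURONS)]

def make_data():
    trials, labels = [], []
    for label in (0, 1):
        for _ in range(TRIALS_PER_CLASS):
            trials.append(simulate_trial(label))
            labels.append(label)
    return trials, labels

def selectivity_order(trials, labels):
    # Crude selectivity index: |difference of class means| per neuron.
    scores = []
    for i in range(NUM_NEURONS):
        r0 = [t[i] for t, y in zip(trials, labels) if y == 0]
        r1 = [t[i] for t, y in zip(trials, labels) if y == 1]
        scores.append(abs(mean(r1) - mean(r0)))
    return sorted(range(NUM_NEURONS), key=lambda i: -scores[i])

def nearest_centroid_accuracy(train, train_y, test, test_y, keep):
    # Train a nearest-centroid classifier using only the neurons in `keep`.
    centroids = {}
    for label in (0, 1):
        rows = [[t[i] for i in keep] for t, y in zip(train, train_y) if y == label]
        centroids[label] = [mean(col) for col in zip(*rows)]
    correct = 0
    for t, y in zip(test, test_y):
        x = [t[i] for i in keep]
        dists = {label: sum((a - b) ** 2 for a, b in zip(x, c))
                 for label, c in centroids.items()}
        correct += min(dists, key=dists.get) == y
    return correct / len(test_y)

train, train_y = make_data()
test, test_y = make_data()
order = selectivity_order(train, train_y)  # rank on training data only

for k in (2, 4, 8, 16, NUM_NEURONS):
    acc = nearest_centroid_accuracy(train, train_y, test, test_y, order[:k])
    print(f"k = {k:2d}: accuracy = {acc:.2f}")
```

In this simulated population, accuracy stops improving once k reaches the number of informative neurons, which is the signature of a sparse information-rich subset described above. Note that the selectivity ranking is computed on the training data only, so the feature selection does not peek at the test trials.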