Machine learning model (neural network or SVM) for unequal feature matrix sizes

I have feature matrices obtained from a visual bag-of-words model for various dictionary sizes, for example Nx5, Nx10, …, Nx15000, where N is the number of samples and 5, 10, …, 15000 are the visual vocabulary (dictionary) sizes, i.e., the feature vector lengths. There is a classification label for each sample. There are 13 methods (SIFT, SURF, BRISK and others) for extracting the feature descriptors that the visual bag of words then uses to encode the images into feature vectors, and there are 13 different dictionary sizes ranging from 5 and 10 up to 15000.
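For context, each of these matrices is produced roughly like this (a minimal sketch assuming MATLAB's Computer Vision Toolbox; the folder name is a placeholder):

imds = imageDatastore('myImages');                     % hypothetical image folder
bag500 = bagOfFeatures(imds, 'VocabularySize', 500);   % SURF-based by default
X500   = encode(bag500, imds);                         % N-by-500 feature matrix
% Repeating this for vocabulary sizes 5, 10, ..., 15000 and for 13 different
% descriptor extractors (via 'CustomExtractor') gives the matrices described above.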
If I had only one method and one visual vocabulary, I could have created a single multiclass classifier. But here I have 13 methods, and for each of them there are 13 visual vocabulary sizes, which would mean 169 classifiers if I train a model for each combination individually. This makes the results cumbersome to present.
So I would like to ask whether there is a method to combine these unequal Nx5, Nx10, …, Nx15000 feature matrices so that I have one classifier model for each of the 13 feature descriptor methods, instead of 169 classifiers.
Please let me know if you have any suggestions.

Answers (1)

Yash Sharma on 13 Feb 2024
Hi Preetham,
Combining feature matrices of different dimensionality into a single classifier per feature descriptor method is challenging precisely because the visual vocabulary sizes differ. However, there are a few strategies you could consider; a rough code sketch of each option follows the list:
  1. Feature Aggregation or Pooling: Aggregate features across the different dictionary sizes to obtain a fixed-size representation per descriptor method. For example, use average pooling, max pooling, or a more elaborate encoding such as VLAD (Vector of Locally Aggregated Descriptors) or Fisher Vectors to collapse the encodings from all dictionary sizes into a single descriptor per image (sketch 1 below).
  2. Dimensionality Reduction: Apply a dimensionality reduction technique such as PCA (Principal Component Analysis) to bring all feature matrices to the same size. You can then train a single classifier per descriptor method on the reduced feature set (sketch 2 below).
  3. Canonical Correlation Analysis (CCA): CCA finds a common space in which feature vectors of different sizes can be compared. Projecting all feature vectors into that shared space lets you train a single classifier per descriptor method (sketch 3 below).
  4. Neural Network with Multiple Inputs: Design a network architecture with one input per dictionary size; the network learns to combine these inputs into a single prediction (sketch 4 below).
  5. Feature Selection: Perform feature selection on each feature matrix and keep only the most informative features. With the same feature budget everywhere, the matrices end up the same size, and you can train a single classifier per descriptor method (sketch 5 below).
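Sketch 1 (pooling). A simplified stand-in for option 1: instead of VLAD/Fisher vectors (which need the raw local descriptors), each unequal-length bag-of-words histogram is pooled into a few fixed summary statistics, so every dictionary size contributes the same number of columns. X5, X10 and X15000 are placeholder names for the N-by-5, N-by-10 and N-by-15000 encodings of one descriptor method, and Y holds the class labels:

encodings = {X5, X10, X15000};                 % unequal N-by-k matrices, one method
pooled = cell(size(encodings));
for i = 1:numel(encodings)
    H = encodings{i};
    % Pool each histogram into fixed-size statistics (mean, max, std, sparsity).
    pooled{i} = [mean(H,2), max(H,[],2), std(H,0,2), mean(H==0,2)];
end
Xpooled = horzcat(pooled{:});                  % N-by-(4*numel(encodings))
mdlPooled = fitcecoc(Xpooled, Y);              % one multiclass SVM for this method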
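Sketch 2 (PCA). Assumes the Statistics and Machine Learning Toolbox. Every encoding is reduced to the same number of components and the reduced sets are stacked as extra training rows; note this is a rough heuristic, since PCA subspaces from different dictionary sizes are not strictly aligned, and d must not exceed the smallest dictionary size:

d = 5;                                         % common reduced dimension (assumption)
encodings = {X5, X10, X15000};
reduced = cell(size(encodings));
for i = 1:numel(encodings)
    [~, score] = pca(encodings{i}, 'NumComponents', d);
    reduced{i} = score;                        % N-by-d scores for this dictionary size
end
Xstacked = vertcat(reduced{:});                % each dictionary size adds extra rows
Ystacked = repmat(Y(:), numel(encodings), 1);  % labels repeated to match
mdlPCA = fitcecoc(Xstacked, Ystacked);         % single classifier per descriptor method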
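Sketch 3 (CCA). Uses MATLAB's canoncorr, shown for just two dictionary sizes because canoncorr is pairwise; extending it to all 13 sizes would need a generalized (multi-view) CCA. Averaging the two projections in the shared space is an assumption, not a standard recipe:

% X5 and X10 are N-by-5 and N-by-10 encodings of the same images.
[A5, A10] = canoncorr(X5, X10);                % canonical coefficients per view
U = (X5  - mean(X5))  * A5;                    % N-by-d canonical variates (d <= 5)
V = (X10 - mean(X10)) * A10;                   % N-by-d, same shared space
Xshared = (U + V) / 2;                         % simple fusion of the two views (assumption)
mdlCCA = fitcecoc(Xshared, Y);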
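Sketch 4 (multi-input network). Assumes the Deep Learning Toolbox (R2021a or later) and, for brevity, only two dictionary sizes as separate inputs. The layer names, layer sizes and training options are illustrative assumptions; for 13 inputs you would add one featureInputLayer per dictionary size and widen the concatenationLayer accordingly:

numClasses = numel(categories(categorical(Y)));
lgraph = layerGraph();
lgraph = addLayers(lgraph, featureInputLayer(5,  'Name','in5'));
lgraph = addLayers(lgraph, featureInputLayer(10, 'Name','in10'));
lgraph = addLayers(lgraph, [ ...
    concatenationLayer(1, 2, 'Name','concat')  % merge the two inputs channel-wise
    fullyConnectedLayer(64, 'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(numClasses, 'Name','fc2')
    softmaxLayer('Name','sm')
    classificationLayer('Name','out')]);
lgraph = connectLayers(lgraph, 'in5',  'concat/in1');
lgraph = connectLayers(lgraph, 'in10', 'concat/in2');

% Multi-input training uses a combined datastore: one column per input, labels last.
dsTrain = combine(arrayDatastore(X5), arrayDatastore(X10), ...
                  arrayDatastore(categorical(Y)));
opts = trainingOptions('adam', 'MaxEpochs', 30, 'Verbose', false);
netMulti = trainNetwork(dsTrain, lgraph, opts);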
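Sketch 5 (feature selection). Uses fscmrmr (MRMR feature ranking, Statistics and Machine Learning Toolbox, R2019b or later). The feature budget k and the choice to concatenate the selected columns are assumptions:

k = 5;                                         % features kept per dictionary size (assumption)
encodings = {X5, X10, X15000};
selected = cell(size(encodings));
for i = 1:numel(encodings)
    idx = fscmrmr(encodings{i}, Y);            % rank columns by MRMR relevance
    selected{i} = encodings{i}(:, idx(1:min(k, numel(idx))));
end
Xsel = horzcat(selected{:});                   % N-by-(k * numel(encodings))
mdlSel = fitcecoc(Xsel, Y);                    % one classifier per descriptor method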
