Meta-learning

Objective

The objective of this project is to perform large-scale meta-learning using a variety of techniques and to get several papers accepted at top-tier venues so we can graduate with good jobs.

Action Items

Overall

  • First, create an extensible meta-data set that is available to the community, similar to the UCI data repository (a sketch of a possible record layout follows this list)
  • Look at machine-learned ranking (MLR) algorithms so we can compare our ranking against other ranking algorithms
    • Maybe look for existing implementations of MLR algorithms
    • Also look at the evaluation metrics that they use for ranking
  • Get more familiar with the Recommender System work that is already out there
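
A possible record layout for one entry in the meta-data set (a sketch only; all field names here are hypothetical, just to make the idea concrete):

# Hypothetical schema for one entry in the proposed meta-data set.
meta_entry = {
    "dataset": "iris",
    "meta_features": {                # e.g. Brazdil or Ho-and-Basu style measures
        "num_instances": 150,
        "num_attributes": 4,
        "num_classes": 3,
    },
    "results": [                      # one record per algorithm + parameter setting
        {"algorithm": "C4.5", "params": {"min_leaf": 2}, "accuracy": 0.94, "train_seconds": 0.8},
        {"algorithm": "5-NN", "params": {"k": 5}, "accuracy": 0.96, "train_seconds": 0.1},
    ],
}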

Our Algorithms

  • Unsupervised backpropagation (maybe other collaborative filtering algorithms as well)
    • Just the accuracies
    • Accuracies plus the data set meta-features
    • Also try using the rankings instead of the accuracies
  • A neural network trained on the data set meta-features
  • A neural network trained on the latent variables from unsupervised backpropagation

Things to think about

  • Think about how to incorporate the time that an algorithm takes to run (one candidate measure is sketched after this list)
  • Think about how to incorporate parameter settings into the algorithm so that we can rank the algorithm together with its parameter settings
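
One candidate for the time question above, loosely based on our reading of the adjusted ratio of ratios (ARR) in the Brazdil2003 paper under Related Works (the exact form there may differ; acc_d is a free trade-off parameter):

import math

def time_adjusted_ratio(acc_p, acc_q, time_p, time_q, acc_d=0.02):
    """Pairwise score of algorithm p vs. q on one data set, trading
    accuracy against run time. Loosely modeled on Brazdil et al.'s
    adjusted ratio of ratios; acc_d says how much accuracy we are
    willing to give up for a 10x speedup."""
    return (acc_p / acc_q) / (1.0 + acc_d * math.log10(time_p / time_q))

An algorithm's overall score on a data set could then be the geometric mean of this ratio against every other algorithm.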

Competitors

  • Brazdil's 5-NN with his meta-features (a sketch of our understanding of this method follows this list)
  • Maybe some ranking algorithms??
    • RT Rank is an available implementation that we should try
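
A minimal sketch (Python; array shapes are placeholders) of how we understand the k-NN competitor: find the k training data sets whose meta-features are nearest to the new data set, then recommend algorithms by their average rank on those neighbors. Brazdil's actual method also folds in run time, which this skips.

import numpy as np

def knn_ranking(target_mf, train_mf, train_acc, k=5):
    """target_mf: (f,) meta-features of the new data set
    train_mf : (n, f) meta-features of n previously seen data sets
    train_acc: (n, m) accuracy of each of m algorithms on those data sets
    Returns algorithm indices, best first."""
    # k most similar data sets by Euclidean distance on meta-features
    dists = np.linalg.norm(train_mf - target_mf, axis=1)
    neighbors = np.argsort(dists)[:k]
    # convert each neighbor's accuracies into ranks (0 = best algorithm)
    ranks = np.argsort(np.argsort(-train_acc[neighbors], axis=1), axis=1)
    # recommend by lowest average rank across the neighbors
    return np.argsort(ranks.mean(axis=0))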

Evaluation Metrics

Data Points

  • Continue to gather more accuracies from random parameter settings
  • Look at this data set (Reif2012, below):
@inproceedings{Reif2012,
  author    = {Matthias Reif},
  title     = {A Comprehensive Dataset for Evaluating Approaches of Various
               Meta-learning Tasks},
  booktitle = {ICPRAM 2012 - Proceedings of the 1st International Conference
               on Pattern Recognition Applications and Methods, Volume
               1, Vilamoura, Algarve, Portugal, 6-8 February, 2012},
  year      = {2012},
  pages     = {273--276},
}

Other Thoughts/Ideas

Experiments

  • Collaborative filtering (just accuracies, and accuracies with meta-features)
    • For the initial experiments, hold out only the accuracies but keep the meta-features in the training data
    • Models to try (a matrix factorization sketch follows this list):
      • Matrix Factorization
      • Non-linear PCA
      • Unsupervised Backprop
        • Try it without a hidden layer so that the intrinsic variables can be computed for novel instances
      • Fuzzy K-Means
  • Classification-based approaches
    • Make sure to use a neural network trained with backprop
  • Combination of classification with collaborative filtering
  • Ranking algorithms
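
A minimal sketch of the matrix factorization variant (Python; the rank and learning parameters are placeholders): treat the data set × algorithm accuracy table as a partially observed matrix, fit low-rank factors by SGD on the observed cells only, and read predictions for the missing cells off U @ V.T. Note that unsupervised backprop with no hidden layer computes essentially this kind of factorization.

import numpy as np

def factorize(A, observed, rank=4, lr=0.01, reg=0.1, epochs=500, seed=0):
    """A       : (n_datasets, n_algorithms) accuracy matrix
    observed: boolean mask, True where the accuracy was actually measured
    Returns factors U, V with A ~= U @ V.T on the observed cells."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    U = rng.normal(scale=0.1, size=(n, rank))   # latent data set profiles
    V = rng.normal(scale=0.1, size=(m, rank))   # latent algorithm profiles
    rows, cols = np.nonzero(observed)
    for _ in range(epochs):
        for i, j in zip(rows, cols):            # SGD over observed cells only
            ui, vj = U[i].copy(), V[j].copy()
            err = A[i, j] - ui @ vj
            U[i] += lr * (err * vj - reg * ui)
            V[j] += lr * (err * ui - reg * vj)
    return U, V

# pred = U @ V.T fills in the unmeasured accuracies; recommend the
# algorithm with the highest predicted accuracy for each data set.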

Logan

5-2-2013

  • Get more data points (i.e. data sets). More data sets can be found at:
  • Run learning algorithms over the new data sets
    • Set an upper limit of 100 hours? If a few don't finish, that's OK
    • Parameter optimization (10 random searches?)
  • Get meta-features on the data sets
    • Hardness heuristics (See Mike)
    • Brazdil
    • Ho and Basu (download the source code for DCoL)
  • Run waffles over the results
    • Just collaborative filtering
      • Just accuracies
        • Spearman correlation coefficient: 0.64822 with 30% of the data removed
      • Adding the data set meta-features
        • Spearman correlation coefficient: 0.67633 with 30% of the data removed
    • Just a neural network
    • Both
    • Previous work (Brazdil)
  • Compare our results with those of the other methods
  • Ranking Algorithms
    • Use and implement other ranking algorithms
    • Compare how often the recommender's top pick is actually the best, the second best, ..., the worst
    • Compare how often the recommender's second pick is actually the best, the second best, ..., the worst
    • etc. (a sketch of this evaluation, together with the Spearman measure, follows this list)
  • Future Ideas
    • parameter optimization
      • Train a model to predict the accuracy of a specific model on a specific data set, given the meta-features of the data set and the parameters of the model
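
A sketch of both evaluations (Python; pred and truth are hypothetical (n_datasets, n_algorithms) matrices of predicted vs. measured accuracies):

import numpy as np
from scipy.stats import spearmanr

def evaluate(pred, truth):
    n, m = truth.shape
    # mean Spearman correlation between predicted and true rankings, per data set
    rho = np.mean([spearmanr(p, t).correlation for p, t in zip(pred, truth)])
    # true rank (0 = best) of every algorithm on every data set
    true_rank = np.argsort(np.argsort(-truth, axis=1), axis=1)
    # hits[r]: fraction of data sets where our top pick is truly the (r+1)-th best
    picks = np.argmax(pred, axis=1)
    hits = np.bincount(true_rank[np.arange(n), picks], minlength=m) / n
    return rho, hits

The same tally works for our second pick and so on: replace argmax with the index of the r-th largest prediction.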

Mike

5-2-2013

  • Get Logan the code for the meta-features
  • Get Logan the code for random hyper-parameter selection
  • Help Logan see the overall picture to inform the design of the application

Rob

5-2-2013

  • Get data sets for Logan
  • Help Logan see the overall picture to inform the design of the application

Related Works

Random Hyper-Parameter Optimization

@article{Bergstra2012,
 author = {Bergstra, James and Bengio, Yoshua},
 title = {Random Search for Hyper-Parameter Optimization},
 journal = {Journal of Machine Learning Research},
 volume = {13},
 month = {March},
 year = {2012},
 issn = {1532-4435},
 pages = {281--305},
 numpages = {25},
 url = {http://dl.acm.org/citation.cfm?id=2188385.2188395},
 acmid = {2188395},
 publisher = {JMLR.org},
} 
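
The idea of this paper, sketched in Python (the parameter names, ranges, and the train_eval callback are all placeholders): draw each hyper-parameter independently from a prior distribution instead of walking a grid.

import random

def random_search(train_eval, n_trials=10, seed=0):
    """Random hyper-parameter search in the spirit of Bergstra & Bengio.
    train_eval(params) -> accuracy must be supplied by the caller."""
    rng = random.Random(seed)
    best_params, best_acc = None, -1.0
    for _ in range(n_trials):
        params = {                                       # independent draws, not a grid
            "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform draw
            "momentum": rng.uniform(0.0, 0.9),
            "hidden_units": rng.randrange(4, 256),
        }
        acc = train_eval(params)
        if acc > best_acc:
            best_params, best_acc = params, acc
    return best_params, best_acc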

Brazdil: Ranking Learning Algorithms

@Article{ Brazdil2003,
	author = "Pavel B. Brazdil and Carlos Soares and Joaquim Pinto Da Costa",
	title = "Ranking Learning Algorithms: Using IBL and Meta-Learning on Accuracy and Time Results",
	journal = "Machine Learning",
	volume = "50",
	number = "3",
	year = "2003",
	pages = "251--277",
	publisher = "Kluwer Academic Publishers",
	address = "Hingham, MA, USA",
	doi = "http://dx.doi.org/10.1023/A:1021713901879",
	annote = "This work presents a method to rank learning algorithms according to their utility given a data set. The new data set is compared to a set of previously processed data sets using \textit{k}-NN on a set of meta-features. The ranking method uses aggregate information about classification accuracy as well as the run time of each learning algorithm. This work ranks learning algorithms as will be done in part of this thesis; however, this thesis ranks learning algorithms by classification accuracy as a function of instance hardness."
}

Ho and Basu: Complexity Measures

@Article{ Ho2002,
	author = "Tin Kam Ho and Mitra Basu",
	title = "Complexity Measures of Supervised Classification Problems",
	journal = "IEEE Trans. Pattern Anal. Mach. Intell.",
	volume = "24",
	number = "3",
	month = "March",
	year = "2002",
	pages = "289--300",
	numpages = "12",
	acmid = "507476",
	publisher = "IEEE Computer Society",
	address = "Washington, DC, USA"
}