- C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
- C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
- J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis, Cambridge University Press, 2004.
- L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, New York, 1996.
- T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer-Verlag, 2009.
- K. P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, Cambridge, MA, 2012.
- R. E. Schapire and Y. Freund, Boosting: Foundations and Algorithms, MIT Press, Cambridge, MA, 2012.
- V. N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
- V. N. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
- J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” Journal of Machine Learning Research, vol. 13, pp. 281–305, 2012.
- Y. Bengio, “Gradient-based optimization of hyperparameters,” Neural Computation, vol. 12, no. 8, pp. 1889–1900, 2000.
- C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006. http://www.gaussianprocess.org/gpml
- Y. Bengio, A. C. Courville, and P. Vincent, “Unsupervised feature learning and deep learning: A review and new perspectives,” CoRR, vol. abs/1206.5538, 2012.
- R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (almost) from scratch,” Journal of Machine Learning Research, vol. 12, pp. 2493–2537, 2011.
- A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, vol. 25, MIT Press, 2012.
- J. Bergstra, R. Bardenet, B. Kégl, and Y. Bengio, “Algorithms for hyper-parameter optimization,” in Advances in Neural Information Processing Systems (NIPS), vol. 24, The MIT Press, 2011.
- J. Snoek, H. Larochelle, and R. P. Adams, “Practical Bayesian optimization of machine learning algorithms,” in Advances in Neural Information Processing Systems, 2012, vol. 25.
- C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown, “Auto-WEKA: Automated selection and hyper-parameter optimization of classification algorithms,” Tech. Rep., http://arxiv.org/abs/1208.3719, 2012.
- R. Bardenet, M. Brendel, B. Kégl, and M. Sebag, “Collaborative hyperparameter tuning,” in International Conference on Machine Learning (ICML), 2013.
- D. S. Johnson and F. P. Preparata, “The densest hemisphere problem,” Theoretical Computer Science, vol. 6, pp. 93–107, 1978.
- D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, pp. 533–536, 1986.
- B. Boser, I. Guyon, and V. Vapnik, “A training algorithm for optimal margin classifiers,” in Fifth Annual Workshop on Computational Learning Theory, 1992, pp. 144–152.
- C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
- Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55, pp. 119–139, 1997.
- P. Bartlett, “The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network,” IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998.
- I. Nabney, Netlab: Algorithms for Pattern Recognition, Springer, 2002.
- L. Bottou and O. Bousquet, “The tradeoffs of large scale learning,” in Advances in Neural Information Processing Systems, 2008, vol. 20, pp. 161–168.
- L. Mason, P. Bartlett, J. Baxter, and M. Frean, “Boosting algorithms as gradient descent,” in Advances in Neural Information Processing Systems, vol. 12, pp. 512–518, The MIT Press, 2000.
- L. Mason, P. Bartlett, and J. Baxter, “Improved generalization through explicit optimization of margins,” Machine Learning, vol. 38, no. 3, pp. 243–255, March 2000.
- M. Collins, R. E. Schapire, and Y. Singer, “Logistic regression, AdaBoost and Bregman distances,” Machine Learning, vol. 48, pp. 253–285, 2002.
- R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee, “Boosting the margin: a new explanation for the effectiveness of voting methods,” Annals of Statistics, vol. 26, no. 5, pp. 1651–1686, 1998.
- R. E. Schapire and Y. Singer, “Improved boosting algorithms using confidence-rated predictions,” Machine Learning, vol. 37, no. 3, pp. 297–336, 1999.
- P. Viola and M. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, pp. 137–154, 2004.
- G. Dror, M. Boullé, I. Guyon, V. Lemaire, and D. Vogel, Eds., Proceedings of KDD-Cup 2009 competition, vol. 7 of JMLR Workshop and Conference Proceedings, 2009.
- O. Chapelle and Y. Chang, “Yahoo! Learning-to-Rank Challenge overview,” in Yahoo! Learning-to-Rank Challenge (JMLR W&CP), 2011, vol. 14, pp. 1–24.
- L. von Ahn, B. Maurer, C. McMillen, D. Abraham, and M. Blum, “reCAPTCHA: Human-based character recognition via web security measures,” Science, vol. 321, no. 5895, pp. 1465–1468, 2008.
- L. von Ahn and L. Dabbish, “Labeling images with a computer game,” in Conference on Human Factors in Computing Systems (CHI ’04), 2004, pp. 319–326.
- L. von Ahn, “Games with a purpose,” Computer, vol. 39, no. 6, 2006.
- S. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, pp. 2323–2326, 2000.
- J. B. Tenenbaum, V. de Silva, and J. C. Langford, “A global geometric framework for nonlinear dimensionality reduction,” Science, vol. 290, pp. 2319–2323, 2000.
- A. Beygelzimer, J. Langford, and B. Zadrozny, “Machine learning techniques – reductions between prediction quality metrics,” chapter in Performance Modeling and Engineering, Springer, 2008.
- J. Bennett and S. Lanning, “The Netflix Prize,” in Proceedings of the KDD Cup and Workshop, 2007.
- N. Casagrande, D. Eck, and B. Kégl, “Geometry in sound: A speech/music audio classifier inspired by an image classifier,” in International Computer Music Conference, Sept. 2005, vol. 17.
- J. Bergstra, N. Casagrande, D. Erhan, D. Eck, and B. Kégl, “Aggregate features and AdaBoost for music classification,” Machine Learning, vol. 65, no. 2/3, pp. 473–484, 2006.
- B. Kégl, R. Busa-Fekete, K. Louedec, R. Bardenet, X. Garrido, I. C. Mariş, D. Monnier-Ragaigne, S. Dagoret-Campagne, and M. Urban, “Reconstructing Nµ19(1000),” Tech. Rep. 2011-054, Auger Project Technical Note, 2011.
- Pierre Auger Collaboration, “Pierre Auger project design report,” Tech. Rep., Pierre Auger Observatory, 1997.
- R. Busa-Fekete, B. Kégl, T. Éltető, and Gy. Szarvas, “Ranking by calibrated AdaBoost,” in Yahoo! Learning-to-Rank Challenge (JMLR W&CP), 2011, vol. 14, pp. 37–48.
- R. Busa-Fekete, B. Kégl, T. Éltető, and Gy. Szarvas, “A robust ranking methodology based on diverse calibration of AdaBoost,” in European Conference on Machine Learning, 2011, vol. 22, pp. 263–279.
- V. Gligorov and M. Williams, “Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree,” Tech. Rep., arXiv:1210.6861, 2012.
- Y. Takizawa, T. Ebisuzaki, Y. Kawasaki, M. Sato, M. E. Bertaina, H. Ohmori, Y. Takahashi, F. Kajino, M. Nagano, N. Sakaki, N. Inoue, H. Ikeda, Y. Arai, Y. Takahashi, T. Murakami, J. H. Adams, and the JEM-EUSO Collaboration, “JEM-EUSO: Extreme Universe Space Observatory on JEM/ISS,” Nuclear Physics B - Proceedings Supplements, vol. 166, pp. 72–76, 2007.
- V. Gligorov, “A single track HLT1 trigger,” Tech. Rep. LHCb-PUB-2011-003, CERN, 2011.
- L. Bourdev and J. Brandt, “Robust object detection via soft cascade,” in Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 236–243, IEEE Computer Society, 2005.
- R. Xiao, L. Zhu, and H. J. Zhang, “Boosting chain learning for object detection,” in Ninth IEEE International Conference on Computer Vision, 2003, vol. 9, pp. 709–715.
- J. Sochman and J. Matas, “WaldBoost – learning for time constrained sequential detection,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005, pp. 150–156.
- B. Póczos, Y. Abbasi-Yadkori, Cs. Szepesvári, R. Greiner, and N. Sturtevant, “Learning when to stop thinking and do something!,” in Proceedings of the 26th International Conference on Machine Learning, 2009, pp. 825–832.
- M. Saberian and N. Vasconcelos, “Boosting classifier cascades,” in Advances in Neural Information Processing Systems, vol. 23, pp. 2047–2055, MIT Press, 2010.
- D. Benbouzid, R. Busa-Fekete, and B. Kégl, “Fast classification using sparse decision DAGs,” in International Conference on Machine Learning, June 2012, vol. 29.
- V. M. Abazov et al., “Observation of single top-quark production,” Physical Review Letters, vol. 103, no. 9, p. 092001, 2009.
- T. Aaltonen et al., “Observation of electroweak single top-quark production,” Physical Review Letters, vol. 103, p. 092002, 2009.
- G. Cowan, K. Cranmer, E. Gross, and O. Vitells, “Asymptotic formulae for likelihood-based tests of new physics,” The European Physical Journal C, vol. 71, pp. 1–19, 2011.