
Ingo Steinwart

Univ.-Prof. Dr. rer. nat.

Professor, Head of Institute
Institute for Stochastics and Applications
Chair of Stochastics

Contact

Pfaffenwaldring 57
70569 Stuttgart
Germany
Room: 8.544

Office Hours

By appointment via email

Research Areas

  • Statistical learning theory
  • Kernel-based learning methods
  • Cluster analysis
  • Neural networks
  • Efficient learning methods for large data sets
  • Loss functions
  • Learning with non-i.i.d. data
  • Applications of learning methods
  • Reproducing kernel Hilbert spaces

Short descriptions of these topics and our corresponding publications can be found here.

Education

02/2000 Doctorate (Dr. rer. nat.) in Mathematics, Friedrich-Schiller-University, Jena
03/1997 Diploma in Mathematics, Carl-von-Ossietzky University, Oldenburg

Appointments

07/2017 — Faculty Member, International Max Planck Research School for Intelligent Systems, Stuttgart/Tübingen
04/2010 — Full Professor, Institute for Stochastics and Applications, Department of Mathematics, University of Stuttgart
01/2010 — 06/2011 Associate Adjunct Professor, Jack Baskin School of Engineering, Department of Computer Science, University of California, Santa Cruz
07/2008 — 04/2010 Scientist Level 4, CCS-3, Los Alamos National Laboratory
03/2003 — 04/2010 Technical Staff Member, CCS-3, Los Alamos National Laboratory
03/2002 — 09/2002 Visiting Scientist, Johannes-Gutenberg University, Mainz
03/2000 — 03/2003 Scientific Staff Member, Friedrich-Schiller-University, Jena
04/1997 — 02/2000 Doctoral Scholarship Holder, DFG Graduate College “Analytic and Stochastic Structures and Systems”, Friedrich-Schiller-University, Jena

Administrative Services

10/2010 — Member of the Senate Committee for Organisation, University of Stuttgart
04/2011 — 10/2012 Vice Dean for Mathematics, Faculty of Mathematics and Physics, University of Stuttgart

Editorial Services

01/2013 — Associate Editor, Journal of Complexity
12/2008 — Action Editor (Associate Editor), Journal of Machine Learning Research
01/2010 — 12/2012 Associate Editor, Annals of Statistics

Program Responsibilities at Conferences

Chair: COLT 2013
Program Committee: NIPS 2008, 2011
Program Committee: ICML 2020
Program Committee: ICLR 2020
Program Committee: COLT 2006, 2008, 2009, 2011, 2012, 2015

Book

I. Steinwart and A. Christmann, Support Vector Machines. New York: Springer, 2008. [ final ]

Preprints

T. Hamm and I. Steinwart, Adaptive learning rates for support vector machines working on data with low intrinsic dimension, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2020. [ preprint.pdf ]

D. Holzmüller and I. Steinwart, Training two-layer ReLU networks with gradient descent is inconsistent, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2020. [ preprint.pdf ]

I. Steinwart, Reproducing kernel Hilbert spaces cannot contain all continuous functions on a compact metric space, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2020. [ preprint.pdf ]

I. Blaschzyk and I. Steinwart, Improved classification rates for localized SVMs, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2019. [ preprint.pdf ]

N. Mücke and I. Steinwart, Global minima of DNNs: The plenty pantry, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2019. [ preprint.pdf ]

I. Steinwart, A sober look at neural network initializations, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2019. [ preprint.pdf ]

H. Hang and I. Steinwart, Optimal learning with anisotropic Gaussian SVMs, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2018. [ preprint.pdf ]

I. Steinwart, B. Sriperumbudur, and P. Thomann, Adaptive clustering using kernel density estimators, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2017. [ preprint.pdf ]

I. Steinwart and P. Thomann, liquidSVM: A fast and versatile SVM package, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2017. [ preprint.pdf ]

I. Steinwart, P. Thomann, and N. Schmid, Learning with hierarchical Gaussian kernels, tech. rep., Fakultät für Mathematik und Physik, Universität Stuttgart, 2016. [ preprint.pdf ]

Accepted

I. Steinwart and S. Fischer, A closer look at covering number bounds for Gaussian kernels, J. Complexity, vol. ---, pp. ---, 2020. [ final | preprint.pdf ]

I. Steinwart and J. Ziegel, Strictly proper kernel scores and characteristic kernels on compact spaces, Appl. Comput. Harmon. Anal., vol. ---, pp. ---, 2019. [ final | preprint.pdf ]

2020

S. Fischer and I. Steinwart, Sobolev norm learning rates for regularized least-squares algorithms, J. Mach. Learn. Res., vol. 21, no. 205, pp. 1--38, 2020. [ final | preprint.pdf ]

2019

A. Defant, M. Mastylo, E. Sánchez-Pérez, and I. Steinwart, Translation invariant maps on function spaces over locally compact groups, J. Math. Anal. Appl., vol. 470, pp. 795--820, 2019. [ final ]

M. Farooq and I. Steinwart, Learning rates for kernel-based expectile regression, Mach. Learn., vol. 108, pp. 203--227, 2019. [ final | preprint.pdf ]

I. Steinwart, Convergence types and rates in generic Karhunen-Loève expansions with applications to sample path properties, Potential Anal., vol. 51, pp. 361--395, 2019. [ final | preprint.pdf ]

2018

H. Hang, I. Steinwart, Y. Feng, and J. Suykens, Kernel density estimation for dynamical systems, J. Mach. Learn. Res., vol. 19, pp. 1--49, 2018. [ final ]

I. Blaschzyk and I. Steinwart, Improved classification rates under refined margin conditions, Electron. J. Stat., vol. 12, pp. 793--823, 2018. [ final | preprint.pdf ]

2017

P. Thomann, I. Steinwart, I. Blaschzyk, and M. Meister, Spatial decompositions for large scale SVMs, in Proceedings of Machine Learning Research Volume 54: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics 2017 (A. Singh and J. Zhu, eds.), pp. 1329--1337, 2017. [ final ]

M. Farooq and I. Steinwart, An SVM-like approach for expectile regression, Comput. Statist. Data Anal., vol. 109, pp. 159--181, 2017. [ final | preprint.pdf ]

H. Hang and I. Steinwart, A Bernstein-type inequality for some mixing processes and dynamical systems with an application to learning, Ann. Statist., vol. 45, pp. 708--743, 2017. [ final | preprint.pdf ]

I. Steinwart, A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths, J. Approx. Theory, vol. 215, pp. 13--27, 2017. [ final | preprint.pdf ]

I. Steinwart, Representation of quasi-monotone functionals by families of separating hyperplanes, Math. Nachr., vol. 290, pp. 1859--1883, 2017. [ final | preprint.pdf ]

2016

H. Hang, Y. Feng, I. Steinwart, and J. Suykens, Learning theory estimates with observations from general stationary stochastic processes, Neural Computation, vol. 28, pp. 2853--2889, 2016. [ final | preprint.pdf ]

M. Meister and I. Steinwart, Optimal learning rates for localized SVMs, J. Mach. Learn. Res., vol. 17, pp. 1--44, 2016. [ final ]

2015

I. Steinwart, Measuring the capacity of sets of functions in the analysis of ERM, in Festschrift in Honor of Alexey Chervonenkis (A. Gammerman and V. Vovk, eds.), ch. 16, pp. 223--239, Berlin: Springer, 2015. [ final ]

I. Steinwart, Fully adaptive density-based clustering, Ann. Statist., vol. 43, pp. 2132--2167, 2015. [ final | preprint.pdf ]

P. Thomann, I. Steinwart, and N. Schmid, Towards an axiomatic approach to hierarchical clustering of measures, J. Mach. Learn. Res., vol. 16, pp. 1949--2002, 2015. [ final ]

2014

I. Steinwart, C. Pasin, R. Williamson, and S. Zhang, Elicitation and identification of properties, in JMLR Workshop and Conference Proceedings Volume 35: Proceedings of the 27th Conference on Learning Theory 2014 (M. F. Balcan and C. Szepesvari, eds.), pp. 482--526, 2014. [ final ]

H. Hang and I. Steinwart, Fast learning from α-mixing observations, J. Multivariate Anal., vol. 127, pp. 184--199, 2014. [ final ]

2013

I. Steinwart, Some remarks on the statistical analysis of SVMs and related methods, in Empirical Inference -- Festschrift in Honor of Vladimir N. Vapnik (B. Schölkopf, Z. Luo, and V. Vovk, eds.), ch. 4, pp. 25--36, Berlin: Springer, 2013. [ final ]

M. Eberts and I. Steinwart, Optimal regression rates for SVMs using Gaussian kernels, Electron. J. Stat., vol. 7, pp. 1--42, 2013. [ final ]

2012

B. Sriperumbudur and I. Steinwart, Consistency and rates for clustering with DBSCAN, in JMLR Workshop and Conference Proceedings Volume 22: Proceedings of the 15th International Conference on Artificial Intelligence and Statistics 2012 (N. Lawrence and M. Girolami, eds.), pp. 1090--1098, 2012. [ final ]

I. Steinwart and C. Scovel, Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs, Constr. Approx., vol. 35, pp. 363--417, 2012. [ final ]

2011

M. Eberts and I. Steinwart, Optimal learning rates for least squares SVMs using Gaussian kernels, in Advances in Neural Information Processing Systems 24 (J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds.), pp. 1539--1547, 2011. [ final ]

I. Steinwart, Adaptive density level set clustering, in JMLR Workshop and Conference Proceedings Volume 19: Proceedings of the 24th Conference on Learning Theory 2011 (S. Kakade and U. von Luxburg, eds.), pp. 703--738, 2011. [ final ]

I. Steinwart and A. Christmann, Estimating conditional quantiles with the help of the pinball loss, Bernoulli, vol. 17, pp. 211--225, 2011. [ final ]

I. Steinwart, D. Hush, and C. Scovel, Training SVMs without offset, J. Mach. Learn. Res., vol. 12, pp. 141--202, 2011. [ final ]

2010

A. Christmann and I. Steinwart, Universal kernels on non-standard input spaces, in Advances in Neural Information Processing Systems 23 (J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, eds.), pp. 406--414, 2010. [ final ]

I. Steinwart, J. Theiler, and D. Llamocca, Using support vector machines for anomalous change detection, in IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2010), pp. 3732--3735, 2010. [ preprint.pdf ]

C. Scovel, D. Hush, I. Steinwart, and J. Theiler, Radial kernels and their reproducing kernel Hilbert spaces, J. Complexity, vol. 26, pp. 641--660, 2010. [ final ]

2009

I. Steinwart and A. Christmann, Sparsity of SVMs that use the ε-insensitive loss, in Advances in Neural Information Processing Systems 21 (D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, eds.), pp. 1569--1576, 2009. [ final ]

I. Steinwart and A. Christmann, Fast learning from non-i.i.d. observations, in Advances in Neural Information Processing Systems 22 (Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, eds.), pp. 1768--1776, 2009. [ final ]

I. Steinwart, D. Hush, and C. Scovel, Optimal rates for regularized least squares regression, in Proceedings of the 22nd Annual Conference on Learning Theory (S. Dasgupta and A. Klivans, eds.), pp. 79--93, 2009. [ final ]

A. Christmann, A. van Messem, and I. Steinwart, On consistency and robustness properties of support vector machines for heavy-tailed distributions, Stat. Interface, vol. 2, pp. 311--327, 2009. [ final ]

I. Steinwart, Oracle inequalities for SVMs that are based on random entropy numbers, J. Complexity, vol. 25, pp. 437--454, 2009. [ final ]

I. Steinwart, Two oracle inequalities for regularized boosting classifiers, Stat. Interface, vol. 2, pp. 271--284, 2009. [ final ]

I. Steinwart, D. Hush, and C. Scovel, Learning from dependent observations, J. Multivariate Anal., vol. 100, pp. 175--194, 2009. [ final | preprint.pdf ]

I. Steinwart and M. Anghel, Consistency of support vector machines for forecasting the evolution of an unknown ergodic dynamical system from observations with unknown noise, Ann. Statist., vol. 37, pp. 841--875, 2009. [ final | preprint.pdf ]

2008

I. Steinwart and A. Christmann, Support Vector Machines. New York: Springer, 2008. [ final ]

I. Steinwart and A. Christmann, How SVMs can estimate quantiles and the median, in Advances in Neural Information Processing Systems 20 (J. Platt, D. Koller, Y. Singer, and S. Roweis, eds.), (Cambridge, MA), pp. 305--312, MIT Press, 2008. [ final ]

A. Christmann and I. Steinwart, Consistency of kernel based quantile regression, Appl. Stoch. Models Bus. Ind., vol. 24, pp. 171--183, 2008. [ final ]

2007

N. List, D. Hush, C. Scovel, and I. Steinwart, Gaps in support vector optimization, in Proceedings of the 20th Conference on Learning Theory (N. Bshouty and C. Gentile, eds.), (New York), pp. 336--348, Springer, 2007. [ final ]

I. Steinwart, D. Hush, and C. Scovel, An oracle inequality for clipped regularized risk minimizers, in Advances in Neural Information Processing Systems 19 (B. Schölkopf, J. Platt, and T. Hoffman, eds.), (Cambridge, MA), pp. 1321--1328, MIT Press, 2007. [ final ]

A. Christmann, I. Steinwart, and M. Hubert, Robust learning from bites for data mining, Comput. Statist. Data Anal., vol. 52, pp. 347--361, 2007. [ final ]

A. Christmann and I. Steinwart, Consistency and robustness of kernel-based regression in convex risk minimization, Bernoulli, vol. 13, pp. 799--819, 2007. [ final ]

D. Hush, C. Scovel, and I. Steinwart, Stability of unstable learning algorithms, Mach. Learn., vol. 67, pp. 197--206, 2007. [ final ]

C. Scovel, D. Hush, and I. Steinwart, Approximate duality, J. Optim. Theory Appl., vol. 135, pp. 429--443, 2007. [ final ]

I. Steinwart and C. Scovel, Fast rates for support vector machines using Gaussian kernels, Ann. Statist., vol. 35, pp. 575--607, 2007. [ final ]

I. Steinwart, How to compare different loss functions, Constr. Approx., vol. 26, pp. 225--287, 2007. [ final ]

2006

I. Steinwart, D. Hush, and C. Scovel, Function classes that approximate the Bayes risk, in Proceedings of the 19th Annual Conference on Learning Theory (G. Lugosi and H. U. Simon, eds.), (New York), pp. 79--93, Springer, 2006. [ final ]

I. Steinwart, D. Hush, and C. Scovel, A new concentration result for regularized risk minimizers, in High Dimensional Probability IV (E. Giné, V. Koltchinskii, W. Li, and J. Zinn, eds.), (Beachwood, OH), pp. 260--275, Institute of Mathematical Statistics, 2006. [ final ]

D. Hush, P. Kelly, C. Scovel, and I. Steinwart, QP algorithms with guaranteed accuracy and run time for support vector machines, J. Mach. Learn. Res., vol. 7, pp. 733--769, 2006. [ final ]

I. Steinwart, D. Hush, and C. Scovel, An explicit description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels, IEEE Trans. Inform. Theory, vol. 52, pp. 4635--4643, 2006. [ final ]

2005

D. Hush, P. Kelly, C. Scovel, and I. Steinwart, Provably fast algorithms for anomaly detection, in International Workshop on Data Mining Methods for Anomaly Detection at KDD 2005, pp. 27--31, 2005. [ preprint.pdf ]

I. Steinwart, D. Hush, and C. Scovel, Density level detection is classification, in Advances in Neural Information Processing Systems 17 (L. K. Saul, Y. Weiss, and L. Bottou, eds.), (Cambridge, MA), pp. 1337--1344, MIT Press, 2005. [ final ]

I. Steinwart and C. Scovel, Fast rates for support vector machines, in Proceedings of the 18th Annual Conference on Learning Theory (P. Auer and R. Meir, eds.), (New York), pp. 279--294, Springer, 2005. [ final ]

C. Scovel, D. Hush, and I. Steinwart, Learning rates for density level detection, Anal. Appl., vol. 3, pp. 356--371, 2005. [ final ]

I. Steinwart, Consistency of support vector machines and other regularized kernel machines, IEEE Trans. Inform. Theory, vol. 51, pp. 128--142, 2005. [ final ]

I. Steinwart, D. Hush, and C. Scovel, A classification framework for anomaly detection, J. Mach. Learn. Res., vol. 6, pp. 211--232, 2005. [ final ]

2004

I. Steinwart, Sparseness of support vector machines---some asymptotically sharp bounds, in Advances in Neural Information Processing Systems 16 (S. Thrun, L. Saul, and B. Schölkopf, eds.), (Cambridge, MA), pp. 1069--1076, MIT Press, 2004. [ final ]

I. Steinwart and C. Scovel, When do support vector machines learn fast?, in 16th International Symposium on Mathematical Theory of Networks and Systems, 2004. [ preprint.pdf ]

A. Christmann and I. Steinwart, On robustness properties of convex risk minimization methods for pattern recognition, J. Mach. Learn. Res., vol. 5, pp. 1007--1034, 2004. [ final ]

I. Steinwart, Entropy of convex hulls---some Lorentz norm results, J. Approx. Theory, vol. 128, pp. 42--52, 2004. [ final ]

2003

K. Mittmann and I. Steinwart, On the existence of continuous modifications of vector-valued random fields, Georgian Math. J., vol. 10, pp. 311--317, 2003. [ final ]

I. Steinwart, Entropy numbers of convex hulls and an application to learning algorithms, Arch. Math., vol. 80, pp. 310--318, 2003. [ final ]

I. Steinwart, On the optimal parameter choice for ν-support vector machines, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, pp. 1274--1284, 2003. [ final ]

I. Steinwart, Sparseness of support vector machines, J. Mach. Learn. Res., vol. 4, pp. 1071--1105, 2003. [ final ]

2002

J. Creutzig and I. Steinwart, Metric entropy of convex hulls in type p spaces---the critical case, Proc. Amer. Math. Soc., vol. 130, pp. 733--743, 2002. [ final ]

I. Steinwart, Support vector machines are universally consistent, J. Complexity, vol. 18, pp. 768--791, 2002. [ final ]

2001

I. Steinwart, On the influence of the kernel on the consistency of support vector machines, J. Mach. Learn. Res., vol. 2, pp. 67--93, 2001. [ final ]

2000

I. Steinwart, Entropy of C(K)-valued operators, J. Approx. Theory, vol. 103, pp. 302--328, 2000. [ final ]

Theses

I. Steinwart, Entropy of C(K)-valued operators and some applications. PhD thesis, Friedrich-Schiller-Universität Jena, Fakultät für Mathematik und Informatik, 2000. [ final ]

I. Bartels, Gewichtete Normungleichungen für Operatoren zwischen Räumen Bochner-integrierbarer Funktionen, Master's thesis, Carl-von-Ossietzky Universität Oldenburg, Fachbereich Mathematik, 1997. [ final ]

Unpublished

I. Steinwart, Simons' SVM: A fast SVM toolbox. http://www.isa.uni-stuttgart.de/software/, 2016.

I. Steinwart, Which data-dependent bounds are suitable for SVMs?, 2002. [ preprint.pdf ]

Support vector machines (SVMs) and related kernel-based learning algorithms are a well-known class of machine learning methods for non-parametric classification and regression. liquidSVM is an implementation of SVMs with the following key features (a conceptual sketch follows the list):

  • fully integrated hyper-parameter selection
  • extreme speed on both small and large data sets
  • bindings for R, Python, MATLAB/Octave, Java, and Spark
  • integration of a variety of learning scenarios:
    • least squares, quantile, and expectile regression
    • binary and multi-class classification, ROC and Neyman-Pearson classification
  • full flexibility for experts
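
To make the integrated hyper-parameter selection concrete, here is a minimal Python sketch of the underlying idea: a Gaussian-kernel SVM whose bandwidth and regularization parameters are chosen by cross-validation over geometric grids. The sketch uses scikit-learn as a stand-in and synthetic data; it illustrates the principle only and is not liquidSVM's own API.

    # Conceptual sketch: Gaussian-kernel SVM with cross-validated
    # hyper-parameter selection (scikit-learn stand-in, not liquidSVM's API).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    # Synthetic binary classification data (illustrative assumption).
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Geometric grids over the kernel bandwidth (gamma) and the
    # regularization parameter (C); liquidSVM selects analogous
    # parameters automatically.
    param_grid = {"gamma": np.logspace(-3, 1, 5), "C": np.logspace(-1, 3, 5)}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, n_jobs=-1)
    search.fit(X_train, y_train)

    print("selected parameters:", search.best_params_)
    print("test accuracy:", search.score(X_test, y_test))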

The package and additional information can be found here.

liquidCluster estimates the cluster tree with the help of density estimators. The most important features of liquidCluster are (a conceptual sketch follows the list):

  • automated selection of the hyper-parameters
  • speed
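
The following minimal Python sketch illustrates the density-level idea behind cluster-tree estimation: estimate a density, keep the points whose estimated density exceeds a level rho, and read off the connected components of that level set. The data, bandwidth, neighborhood radius, and level are illustrative assumptions; this is not liquidCluster's interface.

    # Conceptual sketch: clusters as connected components of a density
    # level set (illustrative only, not liquidCluster's interface).
    import numpy as np
    from scipy.sparse.csgraph import connected_components
    from sklearn.datasets import make_blobs
    from sklearn.neighbors import KernelDensity, radius_neighbors_graph

    # Synthetic data with two modes (illustrative assumption).
    X, _ = make_blobs(n_samples=500, centers=2, cluster_std=0.6, random_state=0)

    # Kernel density estimate; liquidCluster instead chooses its density
    # estimator's hyper-parameters automatically.
    kde = KernelDensity(bandwidth=0.3).fit(X)
    density = np.exp(kde.score_samples(X))

    # Points above the level rho form the level set; the connected
    # components of its neighborhood graph are the clusters at that level.
    rho = np.quantile(density, 0.25)
    keep = density >= rho
    graph = radius_neighbors_graph(X[keep], radius=0.5)
    n_clusters, labels = connected_components(graph, directed=False)
    print("clusters at level rho:", n_clusters)

    # Sweeping rho upward and tracking how components split yields the
    # full cluster tree.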

The currently available Linux command-line version has an interface very similar to that of liquidSVM.

The package, together with some additional information, can be found here.

Prof. Steinwart, what does machine learning have to do with mathematics?

Go to the interview

Portraits at the Department of Mathematics
