Numerical linear algebra methods are widely regarded as the foundation of computational mathematics and data analysis. Indeed, these methods formed the mainstream of research in the second half of the 20th century. By the advent of the 21st century this field had reached maturity, and the research community faced the question: what will be the mainstream of the 21st century? We argue that it is the study of multi-dimensional matrices, or tensors.

In the 20th century, tensors were studied and applied mostly in physics as a descriptive tool. In theoretical physics, particular tensor constructions were used to describe quantum systems. In mathematics, the study of tensors led to some famous results, such as Strassen's algorithm for matrix multiplication, whose complexity is below n^3 for matrices of order n. However, the 20th century did not produce effective tensor-based computational methods. It is only in the 21st century that tensors have become a computational tool.

In multidimensional problems, even "simple" cases involve datasets whose cardinality exceeds the number of atoms in the Universe (for example, an array with only two points along each of 300 directions already contains 2^300, roughly 10^90, entries, more than the estimated 10^80 atoms). The key idea that makes such data manageable is to exploit the specific structure of the data, data compression methods, and algorithms that operate on special parameterizations of the data. Various tensor decompositions of multi-dimensional matrices have long been known, but they are fundamentally insufficient for developing efficient methods of data analysis. We consider novel representations in which a multi-dimensional matrix is replaced by an associated sequence of ordinary matrices. The main assumption is that these matrices either have low rank or can be approximated well by low-rank matrices. A compressed representation of a multi-dimensional matrix, or tensor, is then constructed from the well-studied decompositions of these ordinary matrices; a minimal numerical sketch of this construction is given after the reference list below. This makes it possible to develop algorithms whose complexity is linear or polynomial in the dimension.

Applications of these methods include interpolation of multivariate functions, multi-dimensional integration, solution of the Fokker-Planck and Smoluchowski equations, modeling of spin systems, construction of wavelet filters, and many other problems. The simplest and most practical are the "tensor train" (TT) methods developed at the Institute of Computational Mathematics of RAS since 2009; the principal publications are available at http://pub.inm.ras.ru. We also recommend the following works:

[1] I. Oseledets, E. Tyrtyshnikov, Breaking the curse of dimensionality, or how to use SVD in many dimensions, SIAM J. Sci. Comput., vol. 31, no. 5 (2009), pp. 3744-3759.

[2] I. Oseledets, E. Tyrtyshnikov, TT-cross approximation for multidimensional arrays, Linear Algebra Appl., vol. 432 (2010), pp. 70-88.

[3] V. A. Kazeev, B. N. Khoromskij, E. E. Tyrtyshnikov, Multilevel Toeplitz matrices generated by tensor-structured vectors and convolution with logarithmic complexity, SIAM J. Sci. Comput., vol. 35, no. 3 (2013), pp. A1511-A1536.

[4] J. A. Roberts, D. V. Savostyanov, E. E. Tyrtyshnikov, Superfast solution of linear convolutional Volterra equations using QTT approximation, J. Comput. Appl. Math., vol. 260 (2014), pp. 434-448.
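
As a concrete illustration of the construction mentioned above, here is a minimal sketch of the TT-SVD idea from [1], written in Python with NumPy. It is not the authors' reference implementation: the function names (tt_svd, tt_to_full), the test function and the tolerance are chosen for illustration only. A d-dimensional array is factored by sequential truncated SVDs of its unfoldings into d three-dimensional "cores", whose total size is linear in d when the ranks stay small.

    # Sketch of the TT-SVD construction described in [1], assuming NumPy.
    # A d-dimensional array is factored into cores G_1, ..., G_d, where
    # G_k has shape (r_{k-1}, n_k, r_k) and r_0 = r_d = 1.
    import numpy as np

    def tt_svd(a, eps=1e-10):
        n = a.shape
        d = len(n)
        # distribute the target relative accuracy over the d-1 truncated SVDs
        delta = eps * np.linalg.norm(a) / np.sqrt(max(d - 1, 1))
        cores, r_prev, c = [], 1, a.reshape(1, -1)
        for k in range(d - 1):
            c = c.reshape(r_prev * n[k], -1)          # k-th unfolding
            u, s, vt = np.linalg.svd(c, full_matrices=False)
            # smallest rank whose discarded singular-value tail is below delta
            tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
            r = max(1, int(np.sum(tail > delta)))
            cores.append(u[:, :r].reshape(r_prev, n[k], r))
            c = s[:r, None] * vt[:r, :]               # carry the remainder forward
            r_prev = r
        cores.append(c.reshape(r_prev, n[-1], 1))
        return cores

    def tt_to_full(cores):
        # contract the cores back into the full array (only to check accuracy)
        full = cores[0]
        for g in cores[1:]:
            full = np.tensordot(full, g, axes=(full.ndim - 1, 0))
        return full.reshape([g.shape[1] for g in cores])

    # Illustrative test: sin(x1 + x2 + x3 + x4) sampled on a 20^4 grid has
    # TT ranks equal to 2, so 160000 entries compress to a few hundred numbers.
    grid = np.linspace(0.0, 1.0, 20)
    a = np.sin(sum(np.meshgrid(*([grid] * 4), indexing="ij")))
    cores = tt_svd(a, eps=1e-8)
    print([g.shape for g in cores])
    print(np.linalg.norm(tt_to_full(cores) - a) / np.linalg.norm(a))

For a d-dimensional array with mode sizes n and ranks bounded by r, the cores occupy O(d n r^2) memory instead of n^d entries, which is the kind of compression referred to above; the tensor-train methods of [1] and [2] build on this format.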