This kind of question is recurring and deserves a clearer answer than "MATLAB uses highly optimized libraries" or "MATLAB uses the MKL" for once on Stack Overflow.

Matrix multiplication (together with matrix-vector multiplication, vector-vector multiplication, and many of the matrix decompositions) is among the most important problems in linear algebra. Engineers have been solving these problems with computers since the early days.

I'm not an expert on the history, but apparently back then everybody just rewrote their own FORTRAN version with simple loops. Some standardization then came along, with the identification of "kernels" (basic routines) that most linear algebra problems needed in order to be solved. These basic operations were then standardized in a specification called Basic Linear Algebra Subprograms (BLAS). Engineers could then call these standard, well-tested BLAS routines in their code, making their work much easier.

BLAS evolved from level 1 (the first version, which defined scalar-vector and vector-vector operations) to level 2 (matrix-vector operations) to level 3 (matrix-matrix operations), providing more and more "kernels" and so standardizing more and more of the fundamental linear algebra operations. The original FORTRAN 77 implementations are still available on Netlib's website.

Over the years (notably between the BLAS level 1 and level 2 releases, in the early 80s), hardware changed, with the advent of vector operations and cache hierarchies. These evolutions made it possible to increase the performance of the BLAS subroutines substantially. Different vendors then came along with implementations of the BLAS routines that were more and more efficient. I don't know all the historical implementations (I was not born, or was just a kid, back then), but two of the most notable ones came out in the early 2000s: the Intel MKL and GotoBLAS. Your MATLAB uses the Intel MKL, which is a very good, optimized BLAS, and that explains the great performance you see.

Technical details on matrix multiplication: so why is MATLAB (the MKL) so fast at dgemm (double-precision general matrix-matrix multiplication)? In simple terms: because it uses vectorization and good caching of data. In more complex terms: see the article provided by Jonathan Moore.

Basically, when you perform your multiplication in the C++ code you provided, you are not at all cache-friendly. Since I suspect you created an array of pointers to row arrays, your accesses in your inner loop to the k-th column of matice2 (matice2[m][k]) are very slow: when you access matice2[0][k], you must get the k-th element of the array 0 of your matrix; then in the next iteration, you must access matice2[1][k], which is the k-th element of another array (the array 1). Each step of the inner loop therefore jumps to a different row array, so almost nothing it reads stays in cache.
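The cache effect is easy to demonstrate without any library. Here is a minimal sketch of the two loop orders over an array-of-rows layout like the one suspected in the question (the function names and the use of `std::vector` rows are my own; the question's exact code is not shown here). The i-j-k version strides down a column of the right-hand matrix, touching a different row array on every inner-loop step; the i-k-j version walks along one row at a time, so consecutive reads are contiguous in memory. Both compute the same product:

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Naive i-j-k order: the inner loop reads b[k][j] with k varying,
// i.e. it walks DOWN a column, hitting a different row array (and
// likely a different cache line) on every step.
Matrix mul_ijk(const Matrix& a, const Matrix& b) {
    std::size_t n = a.size();
    Matrix c(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t k = 0; k < n; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Reordered i-k-j: the inner loop reads b[k][j] with j varying,
// i.e. it walks ALONG one row, so consecutive accesses are
// contiguous in memory and cache-friendly.
Matrix mul_ikj(const Matrix& a, const Matrix& b) {
    std::size_t n = a.size();
    Matrix c(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k) {
            double aik = a[i][k];  // reused across the whole inner loop
            for (std::size_t j = 0; j < n; ++j)
                c[i][j] += aik * b[k][j];
        }
    return c;
}
```

Timing the two on large n shows a substantial gap from loop order alone; an optimized BLAS goes much further, adding blocking (tiling for each cache level) and vectorized micro-kernels on top of a good access pattern.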