
Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication

Citation: Yale University Technical Report YALEU/DCS/TR-1269, 2004
Authors: Petros Drineas, Ravi Kannan, Michael W. Mahoney

Abstract

Motivated by applications in which the data may be formulated as a matrix, we consider algorithms for several common Linear Algebra problems. These algorithms make more efficient use of computational resources, such as computation time, Random Access Memory (RAM), and the number of passes over the data, than do previously known algorithms for these problems; in addition, they achieve their greater efficiency at the cost of some error.

In this paper, we devise two algorithms for the Matrix Multiplication Problem. Suppose A and B (which are m x n and n x p, respectively) are the two input matrices. In our main algorithm, we perform c = O(1) independent trials, where in each trial we randomly sample an element of {1, 2, ..., n} according to a probability distribution {p_i}_{i=1}^{n} on {1, 2, ..., n}. We form an m x c matrix C consisting of the sampled columns of A, each scaled appropriately, and a c x p matrix R consisting of the corresponding rows of B, again scaled appropriately. The choice of {p_i}_{i=1}^{n} and of the column and row scaling are crucial features of the algorithm. When these are chosen judiciously, we show that CR is a good approximation to AB; more precisely, we show that, with high probability, ||AB - CR||_F \in O(||A||_F ||B||_F / sqrt(c)), where ||.||_F denotes the Frobenius norm, i.e., ||A||_F^2 = \sum_{i,j} A_{ij}^2. This algorithm can be implemented without storing the matrices A and B in RAM, provided it can make two passes over the matrices stored in external memory and use O(m + p) additional RAM to construct C and R. We then present a second matrix multiplication algorithm which is similar in spirit to our main algorithm.

In addition, we present a model (the Pass-Efficient model) in which the efficiency of these and other approximate matrix algorithms may be studied and which we argue is well suited to many applications involving massive data sets. In this model, the scarce computational resources are the number of passes over the data and the additional space and time required by the algorithm. The input matrices may be presented in any order of the entries (and not just row or column order), as is the case in many applications where, e.g., the data has been written by multiple agents. In addition, the input matrices may be presented in a sparse representation, where only the non-zero entries are written.
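To make the sampling and scaling concrete, here is a minimal Python/NumPy sketch of the column/row-sampling scheme described above. The abstract does not pin down the distribution {p_i}; this sketch assumes the natural nonuniform choice with p_i proportional to the product of the norm of the i-th column of A and the norm of the i-th row of B, and the function name approx_matmul is ours.

```python
import numpy as np

def approx_matmul(A, B, c, rng=None):
    """Approximate A @ B by sampling c column/row pairs (illustrative sketch).

    A is m x n, B is n x p. Returns (C, R) with C (m x c) and R (c x p)
    such that E[C @ R] = A @ B.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    # Assumed sampling distribution: p_i proportional to ||A^(i)|| * ||B_(i)||,
    # the product of the i-th column norm of A and the i-th row norm of B.
    w = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = w / w.sum()
    # c independent trials: in each, pick index i with probability p_i.
    idx = rng.choice(n, size=c, replace=True, p=p)
    # Scale each sampled column/row by 1/sqrt(c * p_i) so that E[C @ R] = A @ B.
    scale = 1.0 / np.sqrt(c * p[idx])
    C = A[:, idx] * scale            # m x c: sampled, rescaled columns of A
    R = B[idx, :] * scale[:, None]   # c x p: corresponding rescaled rows of B
    return C, R

# Example: the observed Frobenius-norm error can be compared against the
# ||A||_F ||B||_F / sqrt(c) scale from the bound above.
A = np.random.randn(500, 1000)
B = np.random.randn(1000, 300)
C, R = approx_matmul(A, B, c=100)
err = np.linalg.norm(A @ B - C @ R)                        # Frobenius norm
ref = np.linalg.norm(A) * np.linalg.norm(B) / np.sqrt(100)
print(f"error = {err:.1f}, ||A||_F ||B||_F / sqrt(c) = {ref:.1f}")
```

The 1/sqrt(c * p_i) rescaling makes CR an unbiased estimator of AB regardless of which distribution is used; the norm-based choice of {p_i} assumed here is one "judicious" option under which the error concentrates at the O(||A||_F ||B||_F / sqrt(c)) rate stated in the abstract.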
