One of the most important problems in all of computer science is matrix multiplication. Matrices are a very general way of encoding data, and manipulating them quickly is crucial to many modern algorithms. A natural way of encoding a graph is as an adjacency matrix, and some shortest-path algorithms use multiplication of boolean matrices to compute shortest paths. We have also seen in class that the PageRank formula involves computing with an N x N matrix, where N is a very, very large number. When dealing with large matrices, the runtime of multiplication is dominated by the exponent of the algorithm. For years, the most efficient known algorithm was the Coppersmith-Winograd algorithm, which achieved a runtime of O(n^2.3755). However, a recent generalization of this algorithm by Vassilevska Williams has yielded a breakthrough: O(n^2.3727).
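The adjacency-matrix idea above can be made concrete with a short sketch: multiplying a graph's boolean adjacency matrix by itself tells you which pairs of vertices are joined by a path of exactly two edges (the graph below is a made-up example, not one from class).

```python
def bool_matmul(A, B):
    """Boolean matrix product: entry (i, j) is True iff some k has
    A[i][k] and B[k][j] -- i.e. an edge i->k followed by k->j."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n))
             for j in range(n)]
            for i in range(n)]

# Adjacency matrix of the directed path 0 -> 1 -> 2 (hypothetical graph).
adj = [
    [False, True,  False],
    [False, False, True],
    [False, False, False],
]

two_step = bool_matmul(adj, adj)
print(two_step[0][2])  # True: 0 reaches 2 in exactly two edges
```

Repeated squaring of this matrix is the kernel inside the matrix-multiplication-based shortest-path algorithms mentioned above, which is one reason the exponent matters.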

The CW algorithm relaxes the original bilinear problem to a trilinear form and recursively multiplies the matrices, similar to Strassen's famous algorithm, but with more leeway from extra coefficients in the trilinear form. What Williams did to improve the algorithm was to expand the base case: she moved it up to a size quartic in the former base case. People had tried expanding the base case before, but the analysis is very difficult, and there was a strange drop-off in efficiency for one of the powers, so many had given up on improving the algorithm this way. This marks the defeat of a considerable barrier to optimizing matrix multiplication, and much current work is directed toward achieving the optimal O(n^2), or proving a lower bound above that threshold. For more information on Williams's algorithm, visit

http://rjlipton.wordpress.com/2011/11/29/a-breakthrough-on-matrix-product/

or

http://www.scottaaronson.com/blog/?p=839
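To see the recursive divide-and-conquer shape that CW and Williams refine, here is a minimal sketch of Strassen's algorithm itself, which trades the naive eight quadrant products for seven. This assumes square matrices whose size is a power of two and uses plain Python lists; it is an illustration of the recursion, not Williams's construction.

```python
def strassen(A, B):
    """Multiply square matrices A and B (size a power of two) with
    Strassen's seven-multiplication recursion."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M, r, c):  # extract an h x h quadrant starting at (r, c)
        return [row[c:c + h] for row in M[r:r + h]]

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)

    # Seven recursive products instead of the naive eight.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))

    # Recombine the quadrants of the result.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)

    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Seven multiplications per halving step gives the O(n^log2(7)) ≈ O(n^2.81) bound; CW-style algorithms get their smaller exponents from a far more intricate recursive construction, not from this simple quadrant split.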

## Tuesday, February 7, 2012


Actually, I was under the impression that even for huge matrices of the scale that Google deals with, Coppersmith-Winograd was never feasible even with its lower exponent, simply because of the massive lower-order terms hidden by the O notation. (This paper: http://www.siam.org/pdf/news/174.pdf seems to claim that Professor Umans regards his group's ~2.41 algorithm as slower than Strassen's ~2.81 algorithm, along similar logic: the setup work you need to do is just too big for it to be used in practice.) Do you happen to know if there have been practical applications of these algorithms? (I'm actually curious; it'd be really cool to see, I just didn't think it had happened yet.)
