Columbia University │ New York, New York
For contributions to theoretical computer science through the design and analysis of algorithms for fundamental operations that are routinely performed in all fields of science and engineering.
What happens when you press “play” on a song, ask your phone to clean up a blurry photo, or watch an AI tool spin words into a paragraph? It feels instantaneous. Effortless. But behind that ease is a relentless drumbeat of tiny operations repeated at staggering scale: combining long lists of numbers, converting information from one form into another. The best way to picture it is not as “intelligence,” but as work—billions upon billions of mechanical steps. If you can make those steps faster, you don’t just improve one app. You improve whole categories of technology: medical imaging, climate modeling, data compression, machine learning, and a slew of others.
That is the kind of problem that motivates theoretical computer scientist Josh Alman. His work sits at the intersection of algorithm design and complexity theory. He develops faster methods for fundamental tasks, and he also proves when popular strategies hit their limits for speed improvement. Both contributions matter. In fast-moving fields, knowing which roads are dead ends is as valuable as finding a new shortcut.
Two threads run through Alman's research, each a fundamental mathematical operation that he studies in the abstract. Practical implementation happens elsewhere, as engineers turn the underlying mathematics into code and hardware.
The first is the family of Fourier-style transforms. These computational tools reorganize information, so patterns become easier to detect and manipulate. The second is matrix multiplication, the operation behind many of the most computationally expensive parts of scientific computing, machine learning, video, 3D graphics, and many more computational tasks we encounter daily. Alman has advanced both, and he has helped clarify the limits that govern how efficient they can be.
Fourier transforms are often described as a translation of data. A sound wave, an image, or a sensor signal can be represented in different "coordinates." Some computing tasks, like manipulating images and audio, become dramatically easier with the right representation. The Fast Fourier Transform (FFT) is the best-known algorithmic workhorse in this area. It makes these computations efficient enough to be used everywhere, from audio compression to signal processing to numerical simulation. Alman's work sharpens the algorithmic framework behind such transforms, making them faster and computationally cheaper.
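To make the speedup concrete, here is a minimal sketch (illustrative only, not drawn from Alman's papers) comparing a naive discrete Fourier transform, which does work proportional to n&sup2;, with the classic Cooley-Tukey FFT, which does work proportional to n log n by splitting the signal into even- and odd-indexed halves:

```python
import cmath

def naive_dft(x):
    # Direct definition: each of the n outputs sums over all n inputs, so O(n^2) work.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def fft(x):
    # Cooley-Tukey radix-2 FFT: O(n log n). Input length must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Combine the two half-size transforms with a "twiddle factor".
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
fast = fft(signal)
slow = naive_dft(signal)
# Both methods agree to within floating-point error.
assert all(abs(a - b) < 1e-9 for a, b in zip(fast, slow))
```

The two functions compute the same answer; the difference is purely in how the work is organized, which is exactly the kind of restructuring that algorithm designers hunt for.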
The same holds true for matrix multiplication. Here the operation combines data rather than translating it: two grids of numbers called matrices (much like spreadsheets) are merged into a new matrix. Matrix multiplication sounds like a textbook topic until you notice how often it is the bottleneck in modern computing. Matrices are a convenient way to represent systems with many interacting variables: graphics pipelines, simulations, optimization problems, the linear algebra at the heart of many machine-learning models, and more.
For decades, computer scientists have searched for ways to multiply large matrices faster. Speed up matrix multiplication, speed up many computing tasks. Progress comes from finding unexpected algebraic structure, ways to reorganize the computation so it uses fewer basic multiplications. Alman and collaborators have produced results that sharpen the best-known approaches for fast matrix multiplication, expanding the set of techniques the field can use to hunt for still-faster methods.
Just as importantly, Alman has contributed “barrier” results: proofs that certain widely used approaches cannot achieve greater efficiency. These results help explain why long-running lines of attack have stalled, and they prevent the field from over-investing in refinements that will never deliver the hoped-for gains. In a discipline where improvements can take years of work and move by small increments, that kind of clarity accelerates progress.
Josh Alman’s work is a reminder that the future is often decided by the invisible. The technologies that feel most dazzling rest on foundational operations repeated at immense scale. When those operations become more efficient—or when researchers gain clarity about which routes are worth pursuing—entire fields benefit. By advancing our understanding of faster matrix multiplication, refining the deep toolkit around Fourier-style transforms, and illuminating both the opportunities and the limits in this landscape, Alman is helping to redraw the practical speed limits of computation. And in an era when science, engineering, and discovery increasingly run through computation, that kind of progress is not merely technical. It is enabling, making more of the future feasible.
Josh Alman is based in New York City, where he is a faculty member in computer science at Columbia University. He earned a B.S. in mathematics from the Massachusetts Institute of Technology (MIT), an M.S. in computer science from Stanford University, and a Ph.D. in computer science from MIT. He previously held a Michael O. Rabin postdoctoral fellowship in theoretical computer science at Harvard University.
Information as of March 2026