Nominated by: Andre DeHon
This article was the beginning of academic and quantitative FPGA
architecture in the open literature. It came a mere four years after the
first commercial FPGAs and four years before the first FPGA workshop.
While commercial FPGA offerings existed, there was no public analysis of
why FPGAs were built the way they were. Here, for the first
time, the authors proposed a model for the complete area of an FPGA, not
just counting cells, but also accounting for interconnect and modeling how
interconnect requirements change with cell complexity. Furthermore, they
parameterized it around the size of the compute block so they could ask
clean, systematic questions about the appropriate granularity for FPGAs;
specifically, how big should the lookup table (LUT) be to minimize area? This
was the beginning of what would come to be known as the "Toronto Model" for
FPGAs. The model has been refined and enhanced over the years, and many of
those papers are already recognized in the Hall-of-Fame, but this is where
it all started.
The parametric LUT size optimization is beautiful. They identify
a clear, important parameter. They develop the CAD tools and area modeling
to vary the parameter and automatically map to the parametric design and
assess area. They sweep the parameter and quantify resources, and they are
able to show a clear minimum area point with consistency across a set of
designs. This is a model and template for how to perform architectural
studies, and this paper should be read by anyone entering the field of
architecture for that reason alone.
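The shape of the study described above can be sketched in a few lines. This is not the authors' model; every constant below is hypothetical, chosen only so the curve shows the qualitative trade-off the paper quantifies: a k-input LUT's area grows roughly as 2^k configuration bits plus per-pin interconnect, while the number of LUTs needed to cover a fixed circuit shrinks with k, so total area has an interior minimum.

```python
# Toy parametric area sweep over LUT input count k.
# All constants (circuit_size, bit_area, pin_area) are hypothetical
# illustration values, not figures from the paper.

def luts_needed(k, circuit_size=1000):
    # Hypothetical coverage model: larger LUTs absorb more logic each,
    # so fewer are needed, with diminishing returns as k grows.
    return circuit_size / (k - 0.5)

def area_per_lut(k, bit_area=1.0, pin_area=5.0):
    # 2**k configuration bits, plus routing/switch area per pin
    # (k inputs and one output).
    return (2 ** k) * bit_area + (k + 1) * pin_area

def total_area(k):
    return luts_needed(k) * area_per_lut(k)

if __name__ == "__main__":
    for k in range(2, 8):
        print(f"k={k}: total area {total_area(k):10.1f}")
    best = min(range(2, 8), key=total_area)
    print(f"minimum-area LUT size in this toy model: k={best}")
```

Even with crude constants, the sweep bottoms out at a small k, mirroring the paper's finding that 3-4 input blocks minimize area; the real study replaced these stand-in formulas with CAD mapping results and a measured interconnect model.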
Furthermore, they assessed sensitivity to their technology assumptions
about switch size and, perhaps surprisingly, showed that their optimization
point was robust to changes in switch size. This strengthened the result.
It suggested that the results, which pointed to the value of using gates
with 3-4 inputs, should also bear on the most area-efficient metal
gate-array architectures, where designs were still being built from
2-input functions.