The FPGA turns 40. Where does it go from here?

Ross Freeman, who invented the FPGA at Xilinx in 1985, passed away in 1989, so he never lived to see his creation become a success.

Kirk Saban, corporate vice president of marketing in the adaptive and embedded computing group at AMD, said of Freeman: “I can’t imagine that in his wildest dreams he ever thought that we’d be where we’re at today. Now that’s my interpretation, but I mean, if you look at how big it got to be – the complexity, the kinds of devices that we have – I can’t imagine that he ever imagined it doing what it’s done.”

Today we have “monstrous devices,” as Saban put it, with embedded Arm processors, high-speed serial transceivers, and integrated hard IP such as built-in AI blocks. “These things have evolved massively. They’ve really become the heart of our customer systems in many, many cases,” he said.

Saban acknowledges that even with newer tools, developing on the FPGA “does require some specialized skills, which doesn’t make it as pervasive and ubiquitous as a CPU.”

So, what does the AI era hold for the FPGA?

Saban believes an FPGA plays a role when AI is infused into some other kind of application. One example is vision, where multiple cameras stream data and quick, real-time decisions are needed. The same applies at the edge, where resources are not necessarily connected to the cloud and decisions must be made locally and quickly.
