Semiconductor Engineering: Architecting for AI
July 7th, 2018 – By Ann Steffora Mutschler
What kind of processing is required for inferencing, what is the best architecture, and can they be debugged?
Semiconductor Engineering sat down to talk about what is needed today to enable artificial intelligence training and inferencing. What follows are excerpts of that conversation.
SE: How do we get AI into use as quickly as possible? Obviously inferencing is a huge part of it. What is the best way to understand it?
Ty Garibay, CTO of Arteris IP, replied: “What we’re here to talk about is the rapid evolution of inference hardware, most critically in terms of reducing power consumption. The real opportunity for dramatic growth in inference is at the edge, where there are either battery or thermal envelope constraints. For the most part, the limitation on deploying inference in the millions and billions at the edge is power efficiency.”
“One of the things that I think is going to have to happen is that we will have to be able to tune the training, to some extent, on site. There’s algorithmic work being done to figure out how to do the base-level training and then add on without having to redo it all,” said Garibay.
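Garibay’s “add on without having to redo it all” maps to what is commonly done today as transfer learning: a backbone is trained once in the data center, then frozen, and only a small task-specific head is tuned in the field. Below is a minimal sketch of that idea in PyTorch; the MobileNetV2 backbone, the 10-class head, and the optimizer settings are illustrative assumptions, not details from the conversation.

```python
# A minimal sketch of the "train once, then add on" idea described above,
# using transfer learning in PyTorch. Model choice, class count, and
# hyperparameters are illustrative assumptions, not from the article.
import torch
import torch.nn as nn
from torchvision import models

# Backbone trained once, off-site ("base-level training").
model = models.mobilenet_v2(weights="IMAGENET1K_V1")

# Freeze the backbone so on-site tuning never redoes the base training.
for param in model.features.parameters():
    param.requires_grad = False

# Swap in a new head sized for the on-site task (10 classes is a placeholder).
model.classifier[1] = nn.Linear(model.last_channel, 10)

# Only the new head is optimized, so the on-site update stays small and
# cheap enough for a power-constrained edge device.
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3)
```

Because gradients flow only through the new head, the on-site update touches a small fraction of the weights, which is what makes this kind of tuning plausible within an edge device’s power budget.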
To read the entire article, please click here: https://semiengineering.com/architecting-for-ai