AI, neural network and machine learning technologies are being adopted in many applications, but the data is usually sent to a server for processing before being analysed and interpreted.
There would be many advantages to performing this analysis on the embedded system itself: faster decision-making, and the ability to implement AI in situations where a data link is not possible.
The problem is that traditional embedded compute platforms cannot deliver the processing power required. The solution is not just faster silicon, but new architectures and new ways of implementing systems.
This event will bring together technology suppliers and technology users to discuss the opportunities and barriers in deploying next-generation systems. There will be presentations from industry and academic experts as well as from current and potential users, who will share their knowledge and experience and promote discussion.
The event is designed for technologists and implementers across research and industry. To ensure a good exchange of information, attendance is limited. Note that this event is not intended as an introduction to embedded AI.