Intel Details AI Hardware at Hot Chips

Intel uses the Hot Chips 31 conference to present the latest from its AI division-- the Nervana neural network processors, with the NNP-T (codenamed "Spring Crest") aimed at training and the NNP-I (aka "Spring Hill") at inference.

Intel Nervana

As the company puts it, dedicated accelerators such as the Nervana NNPs are built with a focus on AI to provide "the right intelligence at the right time." The NNP-T is designed for deep learning training, with two aims: to train networks as fast as possible, and to do so within a given power budget. It promises the features needed to train large models without the overhead of supporting legacy technology, and can be tailored to a wide range of workloads.

Meanwhile the NNP-I is purpose-built for inference-- that is, running an already-trained neural network model. Designed to accelerate deep learning deployment at scale, it leverages the Intel 10nm process with Ice Lake cores to offer "industry-leading" performance per watt across all workloads. It also promises a high degree of programmability, low latencies, fast code porting and support for major deep learning frameworks.
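
To make the training/inference split concrete, here is a minimal, generic sketch in PyTorch, one of the major deep learning frameworks; nothing below is an Intel API, and the model, data and settings are invented for illustration. Training, the NNP-T's job, iteratively adjusts a model's weights; inference, the NNP-I's job, simply runs the finished model on new data:

    import torch
    import torch.nn as nn

    # A tiny stand-in for a real deep learning model.
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

    # Training (the NNP-T's role): repeatedly adjust weights to reduce loss.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    inputs = torch.randn(64, 784)          # dummy batch of 64 samples
    labels = torch.randint(0, 10, (64,))   # dummy class labels

    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # forward pass
    loss.backward()                        # backward pass: compute gradients
    optimizer.step()                       # weight update

    # Inference (the NNP-I's role): run the already-trained model.
    model.eval()
    with torch.no_grad():                  # no gradients needed at deployment
        predictions = model(inputs).argmax(dim=1)

In a real deployment the training section would loop for many passes over a large dataset (the as-fast-as-possible, power-bounded problem the NNP-T targets), while the inference section would run constantly at scale (the performance-per-watt problem the NNP-I targets).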

“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources,” Chipzilla says. “Datacentres and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”

Of course, Intel is hardly the only company working on AI accelerators-- Google has Tensor Processing Units (TPUs), Nvidia has NVDLA (the Nvidia Deep Learning Accelerator) and Amazon has the Inferentia inference chip. Furthermore, startup Cerebras Systems recently revealed a slab of silicon carrying a massive 1.2 trillion transistors, making it the biggest semiconductor chip ever.

Go: At Hot Chips, Intel Pushes "AI Everywhere"