The Intel® Nervana™ Neural Network Processor for Training (NNP-T) is a new class of efficient deep learning system hardware designed to accelerate distributed training at scale.
Baidu and Intel are working together to develop AI hardware and software solutions for fast training of deep learning models. Intel Corporate Vice President Naveen Rao announced the collaboration at the Baidu Create AI developer conference held in Beijing last week.
Artificial intelligence (AI) isn’t a single workload; it’s a pervasive capability that will enhance every application, whether it’s running on a phone or in a massive data center. Phones, data centers and everything in between have different performance and power requirements, so one-size AI hardware doesn’t fit all. Intel offers a broad choice of AI hardware with enabling software, so customers can run complex AI applications where the data lives. Close collaboration with Baidu helps ensure Intel's development stays in lock-step with the latest customer demands on training hardware.
For more information, visit Intel.