Ceva has enhanced its NeuPro-M NPU IP family to bring the power of generative AI to infrastructure, industrial, automotive, consumer, and mobile markets. The redesigned NeuPro-M architecture and development tools support transformer networks, convolutional neural networks (CNNs), and other neural networks. NeuPro-M also integrates a vector processing unit to support any future neural network layer.

The power-efficient NeuPro-M NPU IP delivers peak performance of 350 tera operations per second per watt (TOPS/W) on a 3-nm process node. It can also process more than 1.5 million tokens per second per watt for transformer-based large language model (LLM) inference.
To enable scalability across diverse AI markets, NeuPro-M adds two new NPU cores: the NPM12 and NPM14, with two and four NeuPro-M engines, respectively. These two cores join the existing NPM11 and NPM18, with one and eight engines, respectively. Processing options range from 32 TOPS for a single-engine NPU core to 256 TOPS for an eight-engine NPU core.
NeuPro-M meets stringent safety and quality compliance standards, such as ISO 26262 ASIL-B and Automotive SPICE. Development software for NeuPro-M includes the Ceva Deep Neural Network (CDNN) AI compiler, a system architecture planner tool, and a neural network training optimizer tool.
The NPM11 NPU IP is generally available now, while the NPM12, NPM14, and NPM18 are available to lead customers.
