Neurxcore Unveils NPU Product Line for AI Inference Excellence, Leveraging NVIDIA’s Deep Learning Accelerator Technology

TL;DR:

  • Neurxcore unveils SNVDLA NPU product line for AI inference, powered by NVIDIA technology.
  • SNVDLA IP series sets new standards in energy efficiency, performance, and versatility.
  • Heracium SDK enhances the configurability and optimization of neural network applications.
  • Neurxcore’s product line serves a wide range of industries and applications.
  • Neurxcore also offers a development package for custom NPU solutions.
  • CEO Virgile Javerliac emphasizes the significance of energy efficiency in AI inference.
  • Fine-grain tunability (number of cores, MACs per core) supports markets from ultra-low power to high performance.
  • Competitive pricing and an open-source environment ensure accessible AI solutions.
  • Gartner predicts AI semiconductors market to reach $111.6 billion by 2027.

Main AI News:

Neurxcore takes a bold step forward with the introduction of its Neural Processing Unit (NPU) product line tailored specifically for AI inference applications. The new line integrates an enhanced and extended version of the open-source NVIDIA Deep Learning Accelerator (Open NVDLA) technology with Neurxcore's proprietary in-house architectural innovations.

Neurxcore's SNVDLA IP series sets a new benchmark in energy efficiency, performance, and versatility, with a primary focus on image processing tasks such as classification and object detection, while also supporting generative AI applications. The design has already been silicon-proven on TSMC's 22nm process and showcased on a demonstration board running a diverse array of applications.

The IP package also includes the Heracium SDK (Software Development Kit), developed by Neurxcore on top of the open-source Apache TVM (Tensor Virtual Machine) compiler framework, which lets users configure, optimize, and compile neural network applications for SNVDLA products. Neurxcore's product line caters to a wide spectrum of industries and applications, spanning from ultra-low-power requirements to high-performance scenarios: sensors and IoT, wearables, smartphones, smart homes, surveillance, Set-Top Box and Digital TV (STB/DTV), smart TV, robotics, edge computing, AR/VR, ADAS, servers, and beyond.

In addition to this product line, Neurxcore offers a comprehensive package for developing bespoke NPU solutions. It covers the integration of new operators, AI-optimized subsystem design, and model development from the training phase through quantization, ensuring a holistic approach to custom AI solutions.

Virgile Javerliac, the visionary founder and CEO of Neurxcore, emphatically stated, “In the realm of AI, approximately 80% of computational tasks revolve around inference. Achieving energy efficiency and cost reduction while upholding peak performance levels is paramount.” He expressed his deep appreciation for the dedicated team behind this groundbreaking innovation and underscored Neurxcore’s unwavering commitment to delivering exceptional value to customers while actively seeking collaborative opportunities.

The inference phase, a critical juncture where AI models make predictions and generate content, stands as a pivotal aspect of AI technology. Neurxcore’s trailblazing solutions efficiently tackle this phase, making them an ideal fit for diverse applications, even in scenarios where multiple users are served simultaneously.

Comparatively, the SNVDLA product line outshines the original NVIDIA version, showcasing significant advancements in energy efficiency, performance, and feature set. It leverages NVIDIA’s industrial-grade development foundation while offering fine-grain tunable capabilities, including the number of cores and multiply-accumulate (MAC) operations per core. This adaptability facilitates its deployment across diverse markets. Notably, it stands out as a paragon of energy and cost efficiency, solidifying its position as a class-leading solution. Moreover, its competitive pricing and open-source software environment, thanks to Apache TVM, ensure accessibility and adaptability, democratizing AI solutions for all.
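To illustrate how that tunability matters, the sketch below shows how core count and MACs per core translate into peak throughput for a configurable NPU. The configuration values are hypothetical examples for illustration only, not published SNVDLA specifications:

```python
# Illustrative only: how per-core MAC count, core count, and clock frequency
# combine into peak throughput. The numbers below are hypothetical, not
# published SNVDLA figures.

def peak_tops(num_cores: int, macs_per_core: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS; each MAC counts as 2 ops (multiply + add) per cycle."""
    ops_per_cycle = num_cores * macs_per_core * 2
    return ops_per_cycle * clock_ghz * 1e9 / 1e12

# A small ultra-low-power configuration vs. a larger edge configuration.
tiny = peak_tops(num_cores=1, macs_per_core=64, clock_ghz=0.4)    # 0.0512 TOPS
edge = peak_tops(num_cores=4, macs_per_core=1024, clock_ghz=1.0)  # 8.192 TOPS
print(f"tiny: {tiny:.4f} TOPS, edge: {edge:.3f} TOPS")
```

The two configurations span more than two orders of magnitude in throughput from the same architecture, which is the kind of scaling that lets a single IP serve both IoT sensors and edge servers.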

As we delve into the future of AI semiconductors, Gartner’s 2023 AI Semiconductors report, aptly titled “Forecast: AI Semiconductors, Worldwide, 2021-2027,” sheds light on the pivotal role of optimized semiconductor devices in data centers, edge computing, and endpoint devices. According to the report, revenue from these AI semiconductors is poised to soar to $111.6 billion by 2027, reflecting a robust five-year CAGR of 20%. Neurxcore’s cutting-edge SNVDLA product line is well-poised to play a significant role in this transformative journey, redefining the landscape of AI inference technology.
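As a quick sanity check on the cited forecast, a 20% CAGR ending at $111.6 billion in 2027 implies a base-year (2022) revenue of roughly $45 billion:

```python
# Back out the implied 2022 base from the Gartner figures cited above:
# $111.6B in 2027 after five years of 20% compound annual growth.
final_2027 = 111.6  # USD billions
cagr = 0.20
years = 5

implied_2022 = final_2027 / (1 + cagr) ** years
print(f"Implied 2022 revenue: ${implied_2022:.1f}B")  # ≈ $44.8B
```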

Conclusion:

Neurxcore’s SNVDLA NPU product line, powered by NVIDIA technology, represents a significant leap in AI inference technology. With its enhanced energy efficiency, performance, and versatility, it is poised to disrupt various industries and applications. The inclusion of the Heracium SDK and the offering of custom NPU solutions further strengthen its position in the market. As AI semiconductors continue to grow, Neurxcore’s innovation aligns perfectly with the evolving needs of the industry, ensuring accessibility and adaptability for all.
