“The new Samsung NPU technology is claimed to reduce cloud construction costs for AI operations while consuming less power”
Samsung has announced a new NPU technology aimed at making on-device AI faster and more energy efficient. It uses Quantization Interval Learning (QIL), which re-organises data into bit widths smaller than their current size while retaining accuracy. This means 4-bit neural networks can be created that maintain the accuracy of a 32-bit network. Samsung says the on-device AI technology can reduce cloud construction costs for AI operations, since the computation runs on the device itself while delivering quick and stable performance. On-device AI can also keep biometric information, such as fingerprint, iris, and face scans, on the smartphone itself. Samsung claims the technology achieves the same results as 32-bit computation up to 8x faster, while using 40x to 120x fewer transistors.
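To illustrate the idea behind low-bit quantization, here is a minimal sketch of mapping 32-bit float weights onto a 4-bit grid. Note this is a simplified uniform scheme for illustration only; Samsung's QIL goes further by *learning* the quantization intervals during training, and the function names here are hypothetical.

```python
import numpy as np

def quantize_uniform(weights, bits=4):
    """Map float32 weights to a signed low-bit integer grid.

    Simplified uniform quantization: QIL, by contrast, learns the
    clipping interval rather than deriving it from the max weight.
    """
    levels = 2 ** (bits - 1) - 1            # 7 positive levels for 4-bit
    scale = np.max(np.abs(weights)) / levels
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer grid."""
    return q.astype(np.float32) * scale

# Quantize 1000 random weights and measure the reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, s = quantize_uniform(w, bits=4)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat).mean()
```

Each 4-bit value needs far less storage and far simpler arithmetic circuitry than a 32-bit float, which is where the claimed transistor and power savings come from; the research challenge QIL addresses is keeping `err` small enough that model accuracy is preserved.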
Furthermore, on-device AI technology can process large amounts of data at high speed without consuming excessive power. Samsung’s Exynos 9820, introduced last year, features a proprietary NPU inside the mobile System on Chip (SoC), which allows it to perform AI computations independently of any external cloud server. The South Korean giant says it plans to extend its AI algorithms not only to mobile SoCs but also to memory and smart-sensor solutions in the future.
Chang-Kyu Choi, Vice President and Head of Computer Vision Lab of SAIT, said, “Ultimately, in the future, we will live in a world where all devices and sensor-based technologies are powered by AI. Samsung’s On-Device AI technologies are lower-power, higher-speed solutions for deep learning that will pave the way to this future. They are set to expand the memory, processor and sensor market, as well as other next-generation system semiconductor markets.”