Intel's GNA (Gaussian & Neural Accelerator) co-processor was introduced with Cannon Lake in 2018. Cannon Lake was built on Intel's 10 nm process and brought AVX-512 to the mobile line, but the architecture was short-lived, one of Intel's shortest, with only a single mobile processor ever released. Intel replaced Cannon Lake with the Ice Lake architecture and discontinued the line in early 2020. GNA co-processors are currently found in Gemini Lake, Elkhart Lake, Ice Lake, and later parts; the co-processor offloads workloads such as speech recognition and noise reduction, freeing CPU resources for other processing.

Over the past year, Intel engineers have continued to improve the GNA co-processor's Linux support with future technology in mind. The driver integration is now on its fourth revision, and Intel has reworked the code to use the Linux Direct Rendering Manager (DRM) framework. DRM engineers had strongly requested this change so the GNA driver would sit alongside the AI and DRM accelerator code in the mainline Linux kernel and its subsystems. The surrounding software stack supports models from TensorFlow, Caffe, PaddlePaddle, PyTorch, MXNet, Keras, and ONNX, which Intel optimizes across its CPUs, iGPUs, GPUs, VPUs, and FPGAs. Intel's GNA library also plugs into the company's OpenVINO toolkit, giving developers a deep-learning development kit for streamlined development and straightforward deployment to several platforms at once, with a broad support base, an optimized API and integrations, and solid performance and portability. The toolkit targets Windows, macOS, and Linux; on Linux, version three introduced updated support for newer development and deep-learning scenarios.

News Sources: Phoronix, Intel
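As a rough illustration of the OpenVINO workflow mentioned above, the sketch below compiles a model for the GNA device when the plugin is present and falls back to the CPU otherwise. The model path (`model.xml`), the `pick_device` helper, and the fallback logic are assumptions for illustration, not Intel's own code:

```python
# Minimal sketch, assuming the OpenVINO Python runtime is installed and a
# hypothetical IR model ("model.xml") has already been exported.
try:
    from openvino.runtime import Core  # OpenVINO 2022.x-style API
except ImportError:
    Core = None  # OpenVINO not installed; device-selection logic still runs


def pick_device(available):
    """Prefer the GNA co-processor when the runtime reports its plugin."""
    return "GNA" if "GNA" in available else "CPU"


if Core is not None:
    core = Core()
    device = pick_device(core.available_devices)
    compiled = core.compile_model("model.xml", device_name=device)
    # compiled.create_infer_request() would then run inference, keeping the
    # speech/noise workload off the main CPU cores when GNA is selected.
```

The same device-selection pattern extends to OpenVINO's other targets (e.g. `"GPU"`), which is how one toolkit can deploy across Intel's CPU, iGPU, VPU, and FPGA back ends.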