Intel expands its AI developer toolkit

OpenVINO toolkit now supports 12th Gen Intel Core CPUs

Ahead of MWC 2022, Intel has released a new version of the Intel Distribution of OpenVINO toolkit, which introduces major upgrades to accelerate AI inferencing performance.

Since the launch of OpenVINO in 2018, the chip giant has enabled hundreds of thousands of developers to accelerate AI inferencing performance, beginning at the edge and extending to both enterprise and client deployments.

This latest release includes new features built on three and a half years of developer feedback, along with a greater selection of deep learning models, more device portability options, and higher inferencing performance with fewer code changes.

Adam Burns, VP of OpenVINO developer tools in Intel’s Network and Edge Group, provided further insight into the latest version of the toolkit in a press release, saying:

“The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimizations. The latest upgrade adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software plus Intel silicon enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network.”

Built on the foundation of oneAPI, the Intel Distribution of OpenVINO toolkit is a suite of tools for high-performance deep learning, targeted at delivering faster, more accurate real-world results into production from the edge to the cloud. New features in the latest release make it easier for developers to adopt, maintain, optimize and deploy code across an expanded range of deep learning models.

The latest version of the Intel Distribution of OpenVINO toolkit features an updated, cleaner API that requires fewer code changes when transitioning from another framework. At the same time, the Model Optimizer’s API parameters have been reduced to minimize complexity.
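For a sense of how compact the updated API is, here is a minimal sketch of loading and running a model with the OpenVINO 2022.1 Python runtime; the model path and dummy input are placeholders for illustration, not Intel's own examples.

```python
# Minimal sketch using the OpenVINO 2022.1 Python API.
# "model.xml" is a placeholder path to a converted IR model.
import numpy as np
from openvino.runtime import Core

core = Core()                                # single entry point to the runtime
model = core.read_model("model.xml")         # read an IR or ONNX model from disk
compiled = core.compile_model(model, "CPU")  # compile it for a target device

# Build a dummy input matching the model's first input shape and infer
# by calling the compiled model directly.
input_tensor = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print(result.shape)
```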

Intel has also included broader support for natural language processing (NLP) models for use cases such as text-to-speech and voice recognition. In terms of performance, the new AUTO device mode self-discovers available system inferencing capacity based on model requirements, so applications no longer need to know their compute environment in advance.
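As a brief illustration of that AUTO mode, the snippet below asks the runtime to pick the target device itself rather than hard-coding one; the model path is again a placeholder.

```python
# Sketch of the AUTO device mode: passing "AUTO" tells the runtime to
# discover the available hardware and choose a device for this model itself.
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU'], depending on the machine

model = core.read_model("model.xml")          # placeholder model path
compiled = core.compile_model(model, "AUTO")  # no device name hard-coded
```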

Finally, Intel has added support for the hybrid architecture in 12th Gen Intel Core CPUs to deliver enhancements for high-performance inferencing on both the CPU and integrated GPU.
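The article doesn't spell out how developers address both processors at once, but one way, sketched below under the assumption of OpenVINO's existing MULTI device plugin, is to spread inference requests across the integrated GPU and the CPU together; the model path is a placeholder.

```python
# Sketch: run inference across the integrated GPU and the CPU together
# using OpenVINO's MULTI device plugin (devices are prioritized in the
# order they are listed).
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder model path
compiled = core.compile_model(model, "MULTI:GPU,CPU")
```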
