Facebook Accelerates AI Development with New Partners for PyTorch 1.0
Earlier this year, we shared a vision for making AI development faster and more interoperable. Today, during our first-ever PyTorch Developer Conference, we are announcing updates about the growing ecosystem of software, hardware, and education partners that are deepening their investment in PyTorch. We're also bringing together our active community of researchers, engineers, educators, and more to share how they're using the open source deep learning platform for research and production, and walking through more details on the preview release of PyTorch 1.0.
PyTorch 1.0 accelerates the workflow involved in taking breakthrough research in artificial intelligence to production deployment. With deeper cloud service support from Amazon, Google, and Microsoft, and tighter integration with technology providers ARM, Intel, IBM, NVIDIA, and Qualcomm, developers can more easily take advantage of PyTorch's ecosystem of compatible software, hardware, and developer tools. The more software and hardware that is compatible with PyTorch 1.0, the easier it will be for AI developers to quickly build, train, and deploy state-of-the-art deep learning models.
What's new in PyTorch 1.0
The latest additions to the framework include a new hybrid front end that enables tracing and scripting models from eager mode into graph mode, bridging the gap between exploration and production deployment; a revamped torch.distributed library that allows faster training across Python and C++ environments; and an eager-mode C++ interface (released in beta) for performance-critical research.
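To make the hybrid front end concrete, here is a minimal sketch using the torch.jit tracing and scripting APIs from the 1.0 preview. The model, input shapes, and file name below are placeholders for illustration rather than a prescribed workflow.

```python
import torch
import torch.nn as nn

# A small eager-mode model used purely for illustration.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet()
example_input = torch.randn(1, 8)

# Tracing records the operations executed for this example input
# and produces a graph-mode (TorchScript) version of the model.
traced = torch.jit.trace(model, example_input)

# Scripting compiles the Python source directly, so data-dependent
# control flow is preserved in the resulting graph.
@torch.jit.script
def clamp_positive(x):
    if bool(x.sum() > 0):
        return x
    return torch.zeros_like(x)

print(clamp_positive(example_input))

# The serialized module can later be loaded into a Python-free
# C++ runtime for production deployment.
traced.save("tiny_net.pt")
```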
Currently, researchers and engineers have to work across a number of frameworks and tools, many of which are incompatible with one another, to prototype new deep learning models and then transfer them to run at scale in production. This fragmentation slows the rate at which AI research breakthroughs can be deployed at production scale. With this latest release, we've combined the flexibility of the existing PyTorch framework with the production capabilities of Caffe2 to deliver a seamless path from research to production-ready AI.
Deeper support from the ecosystem
AWS, Google, and Microsoft are deepening their investment in PyTorch 1.0 through more robust support for the framework across their cloud platforms, products, and services. For example, Amazon SageMaker, AWS's fully managed platform for training and deploying machine learning models at scale, now provides preconfigured environments for PyTorch 1.0, which include rich capabilities such as automatic model tuning.
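SageMaker's preconfigured PyTorch environments are typically driven through the SageMaker Python SDK. The snippet below is a minimal sketch of that workflow, not an official recipe; the training script, IAM role, S3 path, and instance settings are placeholders, and argument names may differ between SDK releases.

```python
from sagemaker.pytorch import PyTorch

# Hypothetical training script, IAM role, and instance settings.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    framework_version="1.0.0",
    train_instance_count=1,
    train_instance_type="ml.p3.2xlarge",
    hyperparameters={"epochs": 10, "lr": 0.01},
)

# Launch a managed training job against data staged in S3.
estimator.fit({"training": "s3://my-bucket/training-data"})
```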
Google is announcing new PyTorch 1.0 integrations across its software and hardware tools for AI development. Google Cloud Platform's Deep Learning VM has a new VM image with PyTorch 1.0 that comes with NVIDIA drivers and tutorials preinstalled. Google also offers Cloud Tensor Processing Units (TPUs), custom-developed application-specific integrated circuits (ASICs) for machine learning. Engineers on Google's Cloud TPU team are collaborating actively with our PyTorch team to enable support for PyTorch 1.0 models on this custom hardware.
Microsoft, an early partner with Facebook on another important AI initiative, ONNX, is also furthering its commitment to providing first-class support for PyTorch across its suite of machine learning offerings. Azure Machine Learning service now allows developers to seamlessly move from training PyTorch models on a local machine to scaling out on the Azure cloud. For data science experimentation, Microsoft is offering preconfigured Data Science Virtual Machines (DSVM) that are preinstalled with PyTorch. For developers looking to start exploring PyTorch without having to install software and set up a local machine, Azure Notebooks provides a free, cloud-hosted Jupyter Notebooks solution set up with PyTorch tutorials. Finally, Visual Studio Code's Tools for AI extension provides tight integration of Azure ML and PyTorch APIs for streamlined PyTorch code development and training.
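As a rough illustration of that local-to-cloud workflow, the sketch below uses the Azure Machine Learning Python SDK's PyTorch estimator. The workspace configuration, compute target name, and training script are assumptions made for illustration, and the exact class and argument names may vary across SDK versions.

```python
from azureml.core import Workspace, Experiment
from azureml.train.dnn import PyTorch

# Load workspace details from a local config.json (assumed to exist).
ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name="pytorch-demo")

# Hypothetical training script and compute target name.
estimator = PyTorch(
    source_directory="./src",
    entry_script="train.py",
    compute_target="gpu-cluster",
    use_gpu=True,
)

# Submit the same training code that ran locally to Azure-managed compute.
run = experiment.submit(estimator)
run.wait_for_completion(show_output=True)
```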
In addition to software and cloud service providers, technology partners — including ARM, IBM, Intel, NVIDIA, and Qualcomm — are adding support for PyTorch 1.0 through direct optimizations, kernel library integration, and support for additional tools such as compilers and inference runtimes. This extra support ensures that PyTorch developers can run models across a broad array of hardware, optimized for training and inference, for both data center and edge devices.
Educating future AI developers
We've already seen a variety of education providers using the existing PyTorch framework to teach deep learning in online programs and university courses. The framework's approachability and deep integration with Python have made it easier for students to understand and experiment with deep learning concepts. With the evolution of PyTorch 1.0, we're thrilled that more partners will center their curricula on it.
Udacity is partnering with Facebook to give developers access to a free Intro to Deep Learning course, which is taught entirely on PyTorch. In addition, Facebook will sponsor 300 students who have successfully completed this intermediate-level course to continue their education in Udacity's Deep Learning Nanodegree program, which has been revamped to run on PyTorch 1.0.
Fast.ai, which offers free online courses for introductory and advanced deep learning and machine learning using PyTorch, is announcing the first release of fastai, an open source software library built on top of PyTorch 1.0. The library offers improved accuracy and speed with significantly less code, making deep learning more accessible to new and experienced developers.
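To give a sense of how compact a fastai training loop can be, here is a minimal sketch based on the fastai v1 vision API. The sample dataset and specific function names are assumptions that may differ between library releases.

```python
from fastai import *
from fastai.vision import *

# Download a small sample dataset (two MNIST digit classes).
path = untar_data(URLs.MNIST_SAMPLE)

# Build data loaders with default transforms and a train/validation split.
data = ImageDataBunch.from_folder(path)

# Create a convolutional learner from a pretrained ResNet backbone
# and fine-tune it with the one-cycle learning rate policy.
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1)
```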
Continued collaboration
We are excited to hear from the community as you begin working with PyTorch 1.0 over the coming months. We also look forward to continuing our collaboration with leaders in the deep learning ecosystem to help more people take advantage of AI and accelerate the path from research to production.
To get started, download the developer preview of PyTorch 1.0, or experience it with one of our cloud partners. We also welcome the entire PyTorch community to join the full day of live stream talks from the core PyTorch team at Facebook, as well as from contributors and organizations in academia, industry, and more at facebook.com/pytorch.
We'd like to thank the entire PyTorch 1.0 team for its contributions to this work.