We are excited to announce the availability of PyTorch 1.8. This release is composed of more than 3,000 commits since 1.7. It includes major updates and new features for compilation, code optimization, frontend APIs for scientific computing, and AMD ROCm support through binaries that are available via pytorch.org. It also provides improved features for large-scale training, including pipeline and model parallelism and gradient compression.

The PyTorch 1.8 release brings a host of new and updated API surfaces, ranging from additional APIs for NumPy compatibility to support for ways to improve and scale your code for performance at both inference and training time. A few of the highlights include:

- Support for doing python-to-python functional transformations via torch.fx;
- Added or stabilized APIs to support FFTs (torch.fft) and Linear Algebra functions (torch.linalg), added support for autograd on complex tensors, and updates to improve performance for calculating hessians and jacobians; and
- Significant updates and improvements to distributed training, including improved NCCL reliability, pipeline parallelism support, RPC profiling, and support for communication hooks adding gradient compression.

Along with 1.8, we are also releasing major updates to PyTorch libraries including TorchCSPRNG, TorchVision, TorchText and TorchAudio. For more on the library releases, see the post here. As previously noted, features in PyTorch releases are classified as Stable, Beta and Prototype. You can learn more about the definitions in the post here.
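To give a feel for the python-to-python transformations mentioned above, here is a minimal sketch of tracing a module with torch.fx. The toy module `M` is an illustrative example, not from the release notes; `torch.fx.symbolic_trace` captures its forward pass as a graph of python-level operations that can be inspected or rewritten, while the resulting `GraphModule` remains callable like the original module.

```python
import torch
import torch.fx


class M(torch.nn.Module):
    # hypothetical toy module used only for illustration
    def forward(self, x):
        return torch.relu(x) + 1.0


# symbolic_trace records the forward pass as an fx.Graph
traced = torch.fx.symbolic_trace(M())
print(traced.graph)  # human-readable IR of the captured ops

# the traced GraphModule behaves like the original module
out = traced(torch.tensor([-1.0, 2.0]))  # -> tensor([1., 3.])
```

The captured graph is what enables program transformations such as fusion or quantization passes to be written in pure Python against the IR, rather than against TorchScript.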
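The NumPy-compatible API surfaces called out in the highlights can be sketched briefly; the tensors below are illustrative, but `torch.fft.fft`/`ifft` and the `torch.linalg` functions shown are the modules the release describes, mirroring their `numpy.fft` and `numpy.linalg` counterparts.

```python
import torch

# torch.fft mirrors numpy.fft and returns complex tensors
t = torch.arange(4, dtype=torch.float32)
freq = torch.fft.fft(t)           # complex64 spectrum
back = torch.fft.ifft(freq).real  # round-trips to the input

# torch.linalg mirrors numpy.linalg
a = torch.tensor([[2.0, 0.0],
                  [0.0, 3.0]])
fro = torch.linalg.norm(a)  # Frobenius norm by default
det = torch.linalg.det(a)   # determinant of the 2x2 matrix
```

Because these functions follow NumPy's naming and semantics, existing NumPy-based numerical code can often be ported by swapping the namespace.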