PyTorch is a Python package, and the official tutorial recommends:

import torch

But I would rather use an alias, in the old NumPy/TensorFlow style:

import torch as tc



  • The default floating-point data type is float32; when repr() is invoked on a tensor of this type, the dtype is not displayed
  • The default integer type is int64 and the default float type is float32
  • Basic creation functions are provided, like tc.rand, tc.randn, tc.empty, tc.zeros, tc.tensor, and the Tensor method new_ones
  • They accept sizes either as separate arguments (*size) or as a single tuple
  • tc.tensor() always copies the data from its source
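The points above can be sketched as follows (a minimal example using the tc alias from earlier):

```python
import torch as tc  # the alias used throughout this post

# float32 is the default floating-point dtype, so repr() omits it
x = tc.rand(2, 3)            # sizes as separate arguments
y = tc.zeros((2, 3))         # or as a single tuple
z = tc.tensor([[1.0, 2.0]])  # always copies the source data

# integer literals default to int64, float literals to float32
i = tc.tensor([1, 2, 3])
print(x.dtype, i.dtype)      # torch.float32 torch.int64
```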

With NumPy

The PyTorch framework supports conversion both to and from NumPy arrays.

NOTE that a Tensor is not inherited from a NumPy array, but a CPU tensor created with tc.from_numpy() shares the underlying memory with its source array. Therefore, an in-place modification to either of the two is visible in both.
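A quick demonstration of this memory sharing in both directions:

```python
import numpy as np
import torch as tc

a = np.ones(3)
t = tc.from_numpy(a)   # shares memory with the NumPy array (CPU only)

a += 1                 # mutate the array in place
print(t)               # the tensor sees the change: all elements are 2.0

b = t.numpy()          # the reverse direction also shares memory
t.add_(1)              # in-place add on the tensor
print(b)               # the array sees the change: all elements are 3.0
```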

### Transferring tensors to devices with Tensor.to(device_name: str)
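A minimal sketch of moving tensors between devices; it guards on tc.cuda.is_available() so it also runs on a CPU-only machine:

```python
import torch as tc

# pick a target device; fall back to CPU when no GPU is present
device = "cuda" if tc.cuda.is_available() else "cpu"

x = tc.ones(2, 2)
x = x.to(device)   # move (or copy) the tensor to the target device
y = x + 1          # the computation runs on that device
y = y.to("cpu")    # bring the result back to the CPU
print(y)
```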


The torch.autograd module provides automatic gradient computation for tensor operations.

– torch.Tensor.requires_grad_(True) enables gradient tracking on an existing tensor

– torch.no_grad() is a context manager that disables gradient tracking
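Both pieces of the autograd API above can be seen in a few lines:

```python
import torch as tc

x = tc.ones(2, 2, requires_grad=True)  # track operations on x
y = (x * 3).sum()
y.backward()                           # populate x.grad
print(x.grad)                          # d(y)/d(x): every element is 3.0

with tc.no_grad():                     # disable tracking inside this block
    z = x * 2
print(z.requires_grad)                 # False
```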
