Table of Contents
  1. What’s TorchScript?
  2. Converting a PyTorch Model to Torch Script
    2.1. Converting to Torch Script via Tracing
    2.2. Converting to Torch Script via Annotation
  3. Serializing Your Script Module to a File
  4. Loading Your Script Module in C++
    4.1. A Minimal C++ Application
  5. Executing the Script Module in C++
  6. Getting Help and Exploring the API
  7. JIT

What’s TorchScript?

TorchScript is a way to create serializable and optimizable models from PyTorch code. Any code written in TorchScript can be saved from your Python process and loaded in a process where there is no Python dependency.

We provide tools to incrementally transition a model from being a pure Python program to a TorchScript program that can be run independently from Python, for instance, in a standalone C++ program. This makes it possible to train models in PyTorch using familiar tools and then export the model to a production environment where it is not a good idea to run models as Python programs for performance and multi-threading reasons.

Converting a PyTorch Model to Torch Script

There exist two ways of converting a PyTorch model to Torch Script. The first is known as tracing, a mechanism in which the structure of the model is captured by evaluating it once using example inputs, and recording the flow of those inputs through the model. This is suitable for models that make limited use of control flow. The second approach is to add explicit annotations to your model that inform the Torch Script compiler that it may directly parse and compile your model code, subject to the constraints imposed by the Torch Script language.


Converting to Torch Script via Tracing

import torch
import torchvision

# An instance of your model.
model = torchvision.models.resnet18()

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
  • The traced ScriptModule can now be evaluated identically to a regular PyTorch module:

    In[1]: output = traced_script_module(torch.ones(1, 3, 224, 224))
    In[2]: output[0, :5]
    Out[2]: tensor([-0.2698, -0.0381, 0.4023, -0.3010, -0.0448], grad_fn=<SliceBackward>)
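
As a quick sanity check (a minimal sketch reusing model, example, and traced_script_module from above), the traced module should reproduce the original model's output for the same input:

# Tracing records the exact tensor operations run on the example input, so
# for a control-flow-free model such as ResNet-18 the two outputs agree.
with torch.no_grad():
    expected = model(example)
    actual = traced_script_module(example)
print(torch.allclose(expected, actual))  # expected: True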

Converting to Torch Script via Annotation

import torch

class MyModule(torch.jit.ScriptModule):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    @torch.jit.script_method
    def forward(self, input):
        if bool(input.sum() > 0):
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

my_script_module = MyModule(2, 3)
  • Creating a new MyModule object now directly produces an instance of ScriptModule that is ready for serialization.
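
Because forward is compiled rather than traced, both branches of the conditional survive in the module. As a quick check (a sketch; the graph attribute and the example inputs are illustrative), you can print the compiled IR and exercise each branch:

# The TorchScript IR contains the if/else, which a trace would have
# collapsed to whichever branch the example input happened to take.
print(my_script_module.graph)

x = torch.rand(3)
print(my_script_module(x))   # input.sum() > 0: weight.mv(input) branch
print(my_script_module(-x))  # input.sum() <= 0: weight + input branch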

Serializing Your Script Module to a File

  • To serialize the module (traced or annotated) to a file, simply call save on it:

    traced_script_module.save("model.pt")
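
The saved archive can also be loaded back into Python (a minimal sketch): torch.jit.load is the Python-side counterpart of the torch::jit::load call used in the next section.

loaded = torch.jit.load("model.pt")
output = loaded(torch.ones(1, 3, 224, 224))  # behaves like the original traced module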

Loading Your Script Module in C++

A Minimal C++ Application

#include <torch/script.h> // One-stop header.

#include <cassert>
#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  // Deserialize the ScriptModule from a file using torch::jit::load().
  std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);

  assert(module != nullptr);
  std::cout << "ok\n";
}

Executing the Script Module in C++

// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model and turn its output into a tensor.
at::Tensor output = module->forward(inputs).toTensor();

std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';

Getting Help and Exploring the API

JIT

TorchScript is a subset of Python that can either be written directly (using the @script annotations) or generated automatically from Python code via tracing. When using tracing, code is automatically converted into this subset of Python by recording only the actual operators on tensors and simply executing and discarding the other surrounding Python code.

When writing TorchScript directly using @script annotations, the programmer must use only the subset of Python supported in TorchScript. This section documents what is supported in TorchScript as if it were a language reference for a standalone language. Any features of Python not mentioned in this reference are not part of TorchScript.

As a subset of Python, any valid TorchScript function is also a valid Python function. This makes it possible to remove the @script annotations and debug the function using standard Python tools like pdb. The reverse is not true: there are many valid Python programs that are not valid TorchScript programs. Instead, TorchScript focuses specifically on the features of Python that are needed to represent neural network models in Torch.
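
For example (a minimal sketch; scripted_fn is an illustrative name), a function written in this subset compiles as-is, and deleting the decorator leaves ordinary Python that pdb can step through:

import torch

@torch.jit.script
def scripted_fn(x):
    # Loops over a constant range and tensor arithmetic are within the
    # supported subset, so the compiler preserves this control flow.
    for i in range(10):
        x = x + x
    return x

print(scripted_fn(torch.ones(2)))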

PYTORCH_JIT=1

Setting the environment variable PYTORCH_JIT=0 will disable all script and tracing annotations. If there is a hard-to-debug error in one of your ScriptModules, you can use this flag to force everything to run using native Python. This allows the use of tools like pdb to debug code.
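
For instance (a sketch; the file name and the pdb line are illustrative), a breakpoint that the TorchScript compiler would otherwise reject runs normally once the JIT is disabled:

# debug_fn.py -- run as: PYTORCH_JIT=0 python debug_fn.py
import torch

@torch.jit.script
def scripted_fn(x):
    # pdb is not part of the TorchScript subset; with PYTORCH_JIT=0 the
    # annotation is a no-op and the function runs as ordinary Python, so
    # the breakpoint is usable.
    import pdb; pdb.set_trace()
    return x + 1

scripted_fn(torch.ones(2))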
