Does the server support native PyTorch models, or only models converted to TorchScript?
If it does support PyTorch, can you point to an example config?
The easiest route is to export the model to ONNX (e.g. with `torch.onnx.export`) and then deploy it on the Triton server using the ONNX Runtime backend.
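For reference, a minimal sketch of a model repository entry for an exported ONNX model. The model name, tensor names, data types, and shapes below are placeholders — adjust them to match the inputs and outputs of your exported model. The layout Triton expects is `model_repository/<model_name>/config.pbtxt` with the model file at `model_repository/<model_name>/1/model.onnx`:

```
# config.pbtxt — sketch for an ONNX model served by Triton's ONNX Runtime backend
name: "my_model"                 # placeholder; must match the directory name
platform: "onnxruntime_onnx"
max_batch_size: 8                # 0 disables Triton-managed batching

input [
  {
    name: "input"                # must match the input name used at export time
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]        # shape per request, excluding the batch dim
  }
]
output [
  {
    name: "output"               # must match the exported output name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Triton can also auto-generate much of this configuration for ONNX models, but writing it out explicitly makes the expected tensor names and shapes easier to verify.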