v1.6.0
Release 1.6.0 can be used to run models compiled using the release-1.6.0 branch of neo-ai/TVM.
It enables specifying individual TVM artifacts in the CreateDLRModel API and adds APIs that use DLTensors for SetInput/GetOutput. It also provides previously missing DLR C++ APIs for GraphRuntime and VMRuntime. Version 1.6.0 can skip loading TVM artifacts from disk, allowing the graph, params, and relay executable data to be passed in directly.
This release supports PyTorch object detection models on CPU. Additional TensorFlow object detection models are supported on GPU, such as ssd_mobilenet, mask_rcnn_resnet, and faster_rcnn_resnet.
It also supports NonMaxSuppressionV5 (aka tf.image.non_max_suppression_with_scores), which returns the selected scores in addition to the indices and size.
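To illustrate what this op adds over plain NMS, here is a minimal NumPy sketch of the same semantics: greedy non-max suppression that returns the surviving boxes' scores alongside their indices. This is a simplified reference, not DLR's implementation; the soft-NMS score decay controlled by `soft_nms_sigma` in the TensorFlow op is omitted for brevity.

```python
import numpy as np

def nms_with_scores(boxes, scores, max_output_size, iou_threshold=0.5):
    """Greedy NMS returning (selected_indices, selected_scores).

    boxes: (N, 4) array of [y1, x1, y2, x2]; scores: (N,) array.
    """
    order = np.argsort(scores)[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size and len(keep) < max_output_size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection of box i with all remaining boxes
        yy1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        xx1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        yy2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        xx2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, yy2 - yy1) * np.maximum(0.0, xx2 - xx1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        # Drop boxes that overlap the kept box too strongly
        order = rest[iou <= iou_threshold]
    keep = np.array(keep, dtype=np.int64)
    return keep, scores[keep]

# Two heavily overlapping boxes plus one disjoint box: the lower-scored
# overlap is suppressed, and the kept boxes' scores come back too.
boxes = np.array([[0, 0, 1, 1], [0, 0, 1, 0.9], [2, 2, 3, 3]], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
idx, sel_scores = nms_with_scores(boxes, scores, max_output_size=3)
```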
Pre-built wheels can be installed via `pip install link-to-wheel`. If you don't see your platform in the table below, see Installing DLR for instructions on building from source.
| Manufacturer | Device Name | Wheel URL |
|----|---------------|------|
| Acer | aiSage | https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.6.0/aisage/dlr-1.6.0-py3-none-any.whl |
| Amazon | AWS p2/p3/g4 | https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.6.0/gpu/dlr-1.6.0-py3-none-any.whl |
| NVIDIA | Jetson device with JetPack 4.2 | https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.6.0/jetpack4.2/dlr-1.6.0-py3-none-any.whl |
| NVIDIA | Jetson device with JetPack 4.3 | https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.6.0/jetpack4.3/dlr-1.6.0-py3-none-any.whl |
| NVIDIA | Jetson device with JetPack 4.4 | https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.6.0/jetpack4.4/dlr-1.6.0-py3-none-any.whl |
| Raspberry | Rasp3b | https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.6.0/rasp3b/dlr-1.6.0-py3-none-any.whl |
| Raspberry | Rasp4b | https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.6.0/rasp4b/dlr-1.6.0-py3-none-any.whl |
| Rockchip | RK3399 | https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.6.0/rk3399/dlr-1.6.0-py3-none-any.whl |
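After installing a wheel from the table above, a compiled model is typically loaded and run through DLR's Python API. The sketch below uses the documented `dlr.DLRModel` constructor; the model directory name (`./compiled_model`), input name (`data`), and input shape are hypothetical placeholders for your own Neo-compiled artifact.

```python
import numpy as np

def run_compiled_model(model_dir="./compiled_model", input_name="data"):
    """Sketch of basic DLR inference; directory and input name are
    placeholders, not values defined by this release."""
    import dlr  # available once a wheel from the table above is installed

    # Load the compiled artifact on CPU (use "gpu" for the GPU wheels)
    model = dlr.DLRModel(model_dir, dev_type="cpu", dev_id=0)

    # Dummy NCHW batch; replace shape/dtype with your model's real input
    batch = np.zeros((1, 3, 224, 224), dtype=np.float32)

    # run() accepts a dict of input name -> array and returns a list of outputs
    return model.run({input_name: batch})
```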