Intel® Low Precision Inference Toolkit (iLiT) is an open-source Python library that delivers a unified low-precision inference interface across multiple Intel optimized DL frameworks on both CPU and GPU. It supports automatic accuracy-driven tuning strategies, along with additional objectives such as performance, model size, and memory footprint. It also provides easy extensibility for new backends, tuning strategies, metrics, and objectives.
WARNING
GPU support is under development.
Currently supported Intel optimized DL frameworks are:
- TensorFlow
- PyTorch
- MXNet

Currently supported tuning strategies include MSE, among others; see the Introduction documentation for details of each strategy implementation.
- Introduction explains the iLiT infrastructure, design philosophy, supported functionality, details of the tuning strategy implementations, and tuning results on popular models.
- Tutorial provides comprehensive step-by-step instructions for enabling iLiT on sample models.
```
# install from source
git clone https://github.com/intel/lp-inference-kit.git
cd lp-inference-kit
python setup.py install
```
```
# install from pip
pip install ilit

# install from conda
conda config --add channels intel
conda install ilit
```
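After installation, tuning is driven by a user yaml plus a few lines of Python. The following is a minimal sketch, assuming the `ilit.Tuner` entry point and the `q_dataloader`/`eval_dataloader` keyword names; `conf.yaml` is a hypothetical user config, and the Tutorial should be consulted for the exact API:

```python
import ilit
import torch
import torchvision.models as models
from torch.utils.data import DataLoader, TensorDataset

# FP32 model to quantize; ResNet18 mirrors the PyTorch examples listed below.
model = models.resnet18(pretrained=True).eval()

# Dummy data for illustration only; real tuning needs representative
# calibration and evaluation datasets.
dummy = TensorDataset(torch.randn(8, 3, 224, 224),
                      torch.zeros(8, dtype=torch.long))
dataloader = DataLoader(dummy, batch_size=1)

# conf.yaml (hypothetical path) carries the framework, calibration, and
# tuning settings, e.g. the accuracy criterion and tuning strategy.
tuner = ilit.Tuner('./conf.yaml')

# Accuracy-driven auto tuning: calibrate on q_dataloader, measure accuracy
# on eval_dataloader, and return the best low-precision model found.
q_model = tuner.tune(model,
                     q_dataloader=dataloader,
                     eval_dataloader=dataloader)
```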
iLiT supports systems based on Intel 64 architecture or compatible processors.
iLiT requires the Intel optimized version of each supported framework (TensorFlow, PyTorch, or MXNet) to be installed.
The following examples are integrated with iLiT for auto tuning.
Model | Framework | Model | Framework | Model | Framework |
---|---|---|---|---|---|
ResNet50 V1 | MXNet | BERT-Large RTE | PyTorch | ResNet18 | PyTorch |
MobileNet V1 | MXNet | BERT-Large QNLI | PyTorch | ResNet50 V1 | TensorFlow |
MobileNet V2 | MXNet | BERT-Large CoLA | PyTorch | ResNet50 V1.5 | TensorFlow |
SSD-ResNet50 | MXNet | BERT-Base SST-2 | PyTorch | ResNet101 | TensorFlow |
SqueezeNet V1 | MXNet | BERT-Base RTE | PyTorch | Inception V1 | TensorFlow |
ResNet18 | MXNet | BERT-Base STS-B | PyTorch | Inception V2 | TensorFlow |
Inception V3 | MXNet | BERT-Base CoLA | PyTorch | Inception V3 | TensorFlow |
DLRM | PyTorch | BERT-Base MRPC | PyTorch | Inception V4 | TensorFlow |
BERT-Large MRPC | PyTorch | ResNet101 | PyTorch | Inception ResNet V2 | TensorFlow |
BERT-Large SQUAD | PyTorch | ResNet50 V1.5 | PyTorch | SSD ResNet50 V1 | TensorFlow |
- KL Divergence algorithm is very slow in TensorFlow

  Because TensorFlow does not natively support tensor dumping, the current workaround inserts a print op and dumps the tensor values to stdout. If the model to tune is a TensorFlow model, please restrict `calibration.algorithm.activation` and `calibration.algorithm.weight` in the user yaml config file to `minmax` (see the first yaml sketch after this list).
- MSE tuning strategy doesn't work with the PyTorch adaptor layer

  The MSE tuning strategy requires comparing FP32 and INT8 tensors to decide which ops impact the final quantization accuracy. The PyTorch adaptor layer does not implement this inspect-tensor interface, so if the model to tune is a PyTorch model, please do not choose the MSE tuning strategy (see the second yaml sketch after this list).
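For reference, the TensorFlow restriction above might be expressed in the user yaml roughly as follows. The nesting is inferred from the dotted key path `calibration.algorithm.activation` named above; any surrounding schema details are assumptions, not the definitive config format:

```yaml
calibration:
  algorithm:
    activation: minmax  # kl (KL divergence) is very slow on TensorFlow
    weight: minmax
```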
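Similarly, for PyTorch models the strategy can be steered away from MSE in the yaml. The `tuning.strategy` key and the `basic` value are assumptions used here for illustration:

```yaml
tuning:
  strategy: basic  # hypothetical key/value; avoid mse for PyTorch models
```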
Please submit your questions, feature requests, and bug reports on the GitHub issues page. You may also reach out to [email protected].
We welcome community contributions to iLiT. If you have an idea on how to improve the library:
- For changes impacting the public API, submit an RFC pull request.
- Ensure that the changes are consistent with the code contribution guidelines and coding style.
- Ensure that you can run all the examples with your patch.
- Submit a pull request.
For additional details, see contribution guidelines.
This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.
iLiT is licensed under Apache License Version 2.0. This software includes components with separate copyright notices and license terms. Your use of the source code for these components is subject to the terms and conditions of the following licenses.
Apache License Version 2.0:
MIT License:
See accompanying LICENSE file for full license text and copyright notices.
If you use iLiT in your research or wish to refer to the tuning results published in the Tuning Zoo, please use the following BibTeX entry.
```
@misc{iLiT,
  author       = {Feng Tian and Chuanqi Wang and Guoming Zhang and Penghui Cheng and Pengxin Yuan and Haihao Shen and Jiong Gong},
  title        = {Intel® Low Precision Inference Toolkit},
  howpublished = {\url{https://github.com/intel/lp-inference-kit}},
  year         = {2020}
}
```