ONNX MLIR

The Open Neural Network Exchange implementation in MLIR.

Prerequisites

gcc >= 6.4
libprotoc >= 3.11.0
cmake >= 3.15.4
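
Before building, you can quickly verify that the installed toolchain meets these minimums (a minimal check; it assumes the tools are on your PATH):

# Print the installed versions and compare against the minimums above.
gcc --version | head -n 1     # expect 6.4 or newer
protoc --version              # reports the libprotoc version; expect 3.11.0 or newer
cmake --version | head -n 1   # expect 3.15.4 or newer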

Installation

First, install MLIR (as part of the LLVM project):

git clone https://github.com/llvm/llvm-project.git
# Check out a specific branch that is known to work with ONNX MLIR.
cd llvm-project && git checkout 07e462526d0cbae40b320e1a4307ce11e197fb0a && cd ..
mkdir llvm-project/build
cd llvm-project/build
cmake -G Ninja ../llvm \
   -DLLVM_ENABLE_PROJECTS=mlir \
   -DLLVM_BUILD_EXAMPLES=ON \
   -DLLVM_TARGETS_TO_BUILD="host" \
   -DCMAKE_BUILD_TYPE=Release \
   -DLLVM_ENABLE_ASSERTIONS=ON \
   -DLLVM_ENABLE_RTTI=ON

cmake --build . -- ${MAKEFLAGS}
cmake --build . --target check-mlir
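
As an optional sanity check (assuming the build directory layout produced by the commands above), the MLIR tools should now be available under llvm-project/build/bin:

# From llvm-project/build: confirm the MLIR optimizer driver was built.
./bin/mlir-opt --version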

Two environment variables need to be set:

  • LLVM_PROJ_SRC should point to the llvm-project src directory (e.g., llvm-project/).
  • LLVM_PROJ_BUILD should point to the llvm-project build directory (e.g., llvm-project/build).

To build ONNX-MLIR, use the following commands:

git clone --recursive https://github.com/onnx/onnx-mlir.git

# Export environment variables pointing to LLVM-Projects.
export LLVM_PROJ_SRC=$(pwd)/llvm-project/
export LLVM_PROJ_BUILD=$(pwd)/llvm-project/build

mkdir onnx-mlir/build && cd onnx-mlir/build
cmake ..
cmake --build . --target onnx-mlir

# Run FileCheck tests:
export LIT_OPTS=-v
cmake --build . --target check-onnx-lit

After the above commands succeed, an onnx-mlir executable should appear in the bin directory.
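
A quick way to confirm the build succeeded (assuming you are still in onnx-mlir/build) is to print the tool's help text:

./bin/onnx-mlir --help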

Using ONNX MLIR

The command-line usage of onnx-mlir is as follows:

OVERVIEW: ONNX MLIR modular optimizer driver

USAGE: onnx-mlir [options] <input file>

OPTIONS:

Generic Options:

  --help        - Display available options (--help-hidden for more)
  --help-list   - Display list of available options (--help-list-hidden for more)
  --version     - Display the version of this program

ONNX MLIR Options:
These are frontend options.

  Choose target to emit:
      --EmitONNXIR - Ingest ONNX and emit corresponding ONNX dialect.
      --EmitMLIR   - Lower model to MLIR built-in transformation dialect.
      --EmitLLVMIR - Lower model to LLVM IR (LLVM dialect).
      --EmitLLVMBC - Lower model to LLVM IR and emit (to file) LLVM bitcode for model.

Example

To lower an ONNX model (e.g., add.onnx) to the ONNX dialect, use the following command:

./onnx-mlir --EmitONNXIR add.onnx

The output should look like:

module {
  func @main_graph(%arg0: tensor<10x10x10xf32>, %arg1: tensor<10x10x10xf32>) -> tensor<10x10x10xf32> {
    %0 = "onnx.Add"(%arg0, %arg1) : (tensor<10x10x10xf32>, tensor<10x10x10xf32>) -> tensor<10x10x10xf32>
    return %0 : tensor<10x10x10xf32>
  }
}
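
The same model can be lowered further by choosing a different emission target from the options listed above (a sketch; the resulting IR depends on the operations in the model):

# Lower add.onnx past the ONNX dialect, all the way to LLVM dialect IR.
./onnx-mlir --EmitLLVMIR add.onnx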

Troubleshooting

If the latest LLVM project fails to work due to recent changes to the MLIR subproject, please consider using a slightly older version of LLVM. One such version, which we use, is the commit checked out in the installation instructions above (07e462526d0cbae40b320e1a4307ce11e197fb0a).