
ONNX MLIR

The Open Neural Network Exchange implementation in MLIR (http://onnx.ai/onnx-mlir/).

Build status badges (per system): x86-Linux (CircleCI), s390x-Linux, x86-Windows.

Prebuilt Container

An easy way to get started with ONNX-MLIR is to use a prebuilt Docker image. These images are created as a result of each successful merge build on the trunk, so the latest image represents the tip of the trunk. Images are currently available for amd64, ppc64le, and IBM System Z, published on Docker Hub as onnxmlirczar/onnx-mlir-build:amd64, onnxmlirczar/onnx-mlir-build:ppc64le, and onnxmlirczar/onnx-mlir-build:s390x. To use one of these images, either pull it directly from Docker Hub, launch a container, and run an interactive bash shell in it, or use it as the base image in a Dockerfile. The container contains the full build tree, including the prerequisites and a clone of the source code. The source can be modified and onnx-mlir rebuilt from within the container, so it can serve as a development environment. It is also possible to attach VSCode to the running container. An example Dockerfile and VSCode configuration files can be found in the docs folder. The Dockerfile is shown here.

FROM onnxmlirczar/onnx-mlir-build:amd64

# Work inside the prebuilt build tree shipped with the image.
WORKDIR /build
ENV HOME=/build

# Use the image's pyenv-managed Python as the default interpreter.
ENV PYENV_ROOT=$HOME/.pyenv
ENV PATH=$PYENV_ROOT/shims:$PYENV_ROOT/bin:$PATH
RUN pyenv global 3.7.0
RUN pyenv rehash

# Put the onnx-mlir binaries on the PATH and install development tools.
ENV PATH=$PATH:/build/bin
RUN apt-get update
RUN apt-get install -y python-numpy
RUN apt-get install -y python3-pip
RUN apt-get install -y gdb
RUN apt-get install -y lldb
RUN apt-get install -y emacs

# Add the example VSCode configuration from the docs folder.
WORKDIR /build/.vscode
ADD .vscode /build/.vscode
WORKDIR /build
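
To use the image directly instead of building on top of it, pull it and open an interactive shell in a container (a minimal sketch; substitute the tag for your platform):

docker pull onnxmlirczar/onnx-mlir-build:amd64
docker run -it onnxmlirczar/onnx-mlir-build:amd64 /bin/bash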

Prerequisites

gcc >= 6.4
libprotoc >= 3.11.0
cmake >= 3.15.4
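
To verify that your toolchain meets these minimums, you can query each tool's version (a quick sketch; note that protoc reports the libprotoc version):

gcc --version
protoc --version
cmake --version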

Installation on UNIX

MLIR

First, install MLIR (as part of the LLVM project):

git clone https://github.com/llvm/llvm-project.git
# Check out a specific commit that is known to work with ONNX MLIR.
cd llvm-project && git checkout 0dc91bfd11e6cced0c46c1a25cc96edea0d8fc22 && cd ..
mkdir llvm-project/build
cd llvm-project/build
cmake -G Ninja ../llvm \
   -DLLVM_ENABLE_PROJECTS=mlir \
   -DLLVM_BUILD_EXAMPLES=ON \
   -DLLVM_TARGETS_TO_BUILD="host" \
   -DCMAKE_BUILD_TYPE=Release \
   -DLLVM_ENABLE_ASSERTIONS=ON \
   -DLLVM_ENABLE_RTTI=ON

cmake --build . -- ${MAKEFLAGS}
cmake --build . --target check-mlir

ONNX-MLIR (this project)

Two environment variables need to be set:

  • LLVM_PROJ_SRC should point to the llvm-project src directory (e.g., llvm-project/).
  • LLVM_PROJ_BUILD should point to the llvm-project build directory (e.g., llvm-project/build).

To build ONNX-MLIR, use the following commands:

git clone --recursive https://github.com/onnx/onnx-mlir.git

# Export environment variables pointing to LLVM-Projects.
export LLVM_PROJ_SRC=$(pwd)/llvm-project/
export LLVM_PROJ_BUILD=$(pwd)/llvm-project/build

mkdir onnx-mlir/build && cd onnx-mlir/build
cmake ..
cmake --build .

# Run FileCheck tests:
export LIT_OPTS=-v
cmake --build . --target check-onnx-lit

After the above commands succeed, an onnx-mlir executable should appear in the bin directory.
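
As a quick sanity check, you can run the freshly built driver and list its options (assuming the bin directory noted above, run from onnx-mlir/build):

./bin/onnx-mlir --help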

Installation on Windows

Building onnx-mlir on Windows requires building some additional prerequisites that are not available by default.

Note that the instructions in this section assume you are using Visual Studio 2019 Community Edition. It is recommended that you have the "Desktop development with C++" and "Linux development with C++" workloads installed; this ensures you have all the toolchains and libraries needed to compile this project and its dependencies on Windows.

Run all the commands from a shell started from "Developer Command Prompt for VS 2019".

Protobuf

Build protobuf as a static library.

set root_dir=%cd%
git clone --recurse-submodules https://github.com/protocolbuffers/protobuf.git
cd protobuf
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -T host=x64 -DCMAKE_BUILD_TYPE=Release -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF -Dprotobuf_WITH_ZLIB=OFF -DCMAKE_INSTALL_PREFIX="%root_dir%\protobuf\install"
call msbuild protobuf.sln /m /p:Configuration=Release
call msbuild INSTALL.vcxproj /p:Configuration=Release

Before running CMake for onnx-mlir, ensure that the bin directory of this protobuf installation comes before any other protobuf in your PATH:

set PATH=%root_dir%\protobuf\install\bin;%PATH%

PDCurses

Build a local version of the curses library, which is used by various command-line tools in onnx-mlir. These instructions assume you use Public Domain Curses (PDCurses).

Run this from a Visual Studio developer command prompt since you will need access to the appropriate version of Visual Studio's nmake tool.

set root_dir=%cd%
git clone https://github.com/wmcbrine/PDCurses.git
set PDCURSES_SRCDIR=%root_dir%/PDCurses
cd PDCurses
call nmake -f wincon/Makefile.vc

MLIR

Install MLIR (as a part of LLVM-Project):

git clone https://github.com/llvm/llvm-project.git
# Check out a specific commit that is known to work with ONNX MLIR.
cd llvm-project && git checkout 0dc91bfd11e6cced0c46c1a25cc96edea0d8fc22 && cd ..
set root_dir=%cd%
md llvm-project\build
cd llvm-project\build
call cmake -G "Visual Studio 16 2019" -A x64 -T host=x64 ..\llvm ^
   -DCMAKE_INSTALL_PREFIX="%root_dir%\llvm-project\build\install" ^
   -DLLVM_ENABLE_PROJECTS=mlir ^
   -DLLVM_BUILD_EXAMPLES=ON ^
   -DLLVM_TARGETS_TO_BUILD="host" ^
   -DCMAKE_BUILD_TYPE=Release ^
   -DLLVM_ENABLE_ASSERTIONS=ON ^
   -DLLVM_ENABLE_RTTI=ON ^
   -DLLVM_ENABLE_ZLIB=OFF

call cmake --build . --config Release -- /m
call cmake --build . --config Release --target install
call cmake --build . --config Release --target check-mlir

ONNX-MLIR (this project)

The following environment variables need to be set before building onnx-mlir:

  • CURSES_LIB_PATH: Path to curses library (e.g. c:/repos/PDCurses)
  • LLVM_PROJ_BUILD: Path to the build directory for LLVM (e.g. c:/repos/llvm-project/build)
  • LLVM_PROJ_SRC: Path to the source directory for LLVM (e.g. c:/repos/llvm-project)

This project uses lit (LLVM's Integrated Tester) for unit tests. When running CMake, we will also specify the path to the lit tool from LLVM using the LLVM_EXTERNAL_LIT define.

To build ONNX MLIR, use the following commands:

git clone --recursive https://github.com/onnx/onnx-mlir.git

REM Export environment variables pointing to LLVM-Projects.
set root_dir=%cd%
set CURSES_LIB_PATH=%root_dir%/PDCurses
set LLVM_PROJ_BUILD=%root_dir%/llvm-project/build
set LLVM_PROJ_SRC=%root_dir%/llvm-project

md onnx-mlir\build
cd onnx-mlir\build
call cmake -G "Visual Studio 16 2019" -A x64 -T host=x64 -DLLVM_EXTERNAL_LIT="%root_dir%\llvm-project\build\Release\bin\llvm-lit.py" -DCMAKE_BUILD_TYPE=Release ..
call cmake --build . --config Release --target onnx-mlir -- /m

REM Run FileCheck tests
set LIT_OPTS=-v
call cmake --build . --config Release --target check-onnx-lit

After the above commands succeed, an onnx-mlir executable should appear in the bin directory.

Using ONNX-MLIR

onnx-mlir is invoked as follows:

OVERVIEW: ONNX MLIR modular optimizer driver

USAGE: onnx-mlir [options] <input file>

OPTIONS:

Generic Options:

  --help        - Display available options (--help-hidden for more)
  --help-list   - Display list of available options (--help-list-hidden for more)
  --version     - Display the version of this program

ONNX MLIR Options:
These are frontend options.

  Choose target to emit:
      --EmitONNXIR - Ingest ONNX and emit corresponding ONNX dialect.
      --EmitMLIR   - Lower model to MLIR built-in transformation dialect.
      --EmitLLVMIR - Lower model to LLVM IR (LLVM dialect).
      --EmitLLVMBC - Lower model to LLVM IR and emit (to file) LLVM bitcode for model.

Example

For example, to lower an ONNX model (e.g., add.onnx) to ONNX dialect, use the following command:

./onnx-mlir --EmitONNXIR add.onnx

The output should look like:

module {
  func @main_graph(%arg0: tensor<10x10x10xf32>, %arg1: tensor<10x10x10xf32>) -> tensor<10x10x10xf32> {
    %0 = "onnx.Add"(%arg0, %arg1) : (tensor<10x10x10xf32>, tensor<10x10x10xf32>) -> tensor<10x10x10xf32>
    return %0 : tensor<10x10x10xf32>
  }
}
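
From there, the same driver can take the model further down the lowering pipeline. For example, per the option descriptions above, the following lowers the model to LLVM IR and emits LLVM bitcode to a file (a minimal sketch reusing the add.onnx example):

./onnx-mlir --EmitLLVMBC add.onnx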

Troubleshooting

If the latest LLVM project fails to work due to recent changes in the MLIR subproject, consider using a slightly older version of LLVM. One such version, which we use, is the commit 0dc91bfd11e6cced0c46c1a25cc96edea0d8fc22 checked out in the build instructions above.