MLCompilerBridge
Tools for streamlining communication with ML models for compiler optimizations.
ML-Compiler-Bridge is a compiler-agnostic library to aid ML-enabled compiler optimizations. It supports both training and inference scenarios. The library exposes Python and C/C++ APIs to interface Python-based ML models with a C/C++ compiler. This design allows ML model development within a traditional Python framework while making end-to-end integration with an optimizing compiler possible and efficient.
This repo contains the source code and relevant information described in our paper, "The Next 700 ML-Enabled Compiler Optimizations" (arXiv). Please see here for documentation and other details.
The Next 700 ML-Enabled Compiler Optimizations, S. VenkataKeerthy, Siddharth Jain, Umesh Kalvakuntla, Pranav Sai Gorantla, Rajiv Shailesh Chitale, Eugene Brevdo, Albert Cohen, Mircea Trofin and Ramakrishna Upadrasta. CC 2024.
gRPC v1.58 (protobuf v23.4) is required; build gRPC from source by following the "Build GRPC with cmake" guide. Passing `-DCMAKE_INSTALL_PREFIX` may not be necessary, and the default install prefix can be used. The Python gRPC tooling is installed with `pip install grpcio-tools`. (Experiments are done on an Ubuntu 20.04 machine.)
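A minimal sketch of that from-source build, following the upstream "Build GRPC with cmake" quickstart; the tag, job count, and install prefix below are assumptions to adapt to your setup:

```bash
# Fetch gRPC v1.58 together with its submodules (the matching protobuf comes along as one).
git clone --recurse-submodules -b v1.58.0 --depth 1 https://github.com/grpc/grpc
cd grpc && mkdir -p cmake/build && cd cmake/build

# Configure and install; -DCMAKE_INSTALL_PREFIX can be dropped to use the default prefix.
cmake -DgRPC_INSTALL=ON \
      -DgRPC_BUILD_TESTS=OFF \
      -DCMAKE_INSTALL_PREFIX=$HOME/.local \
      ../..
make -j4 && make install
```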
ML-Compiler-Bridge can be built as a stand-alone library to generate `.a` files that can in turn be linked with any compiler.
```bash
mkdir build && cd build
cmake [-DCMAKE_BUILD_TYPE=Release|Debug] [-DCMAKE_INSTALL_PREFIX=<Install_path>] [-DMLBRIDGE_ENABLE_TEST=ON|OFF] -DONNXRUNTIME_ROOTDIR=<Path to ONNX install dir> -DPROTOS_DIRECTORY=<Path to protobuf files> -DTENSORFLOW_AOT_PATH=<Path to TensorFlow pip install dir> ../
make -j [&& make install]
```
For the Python end points: `pip install compilerinterface`
The C++ build generates the `libMLCompilerBridge.a` and `libMLCompilerBridgeC.a` libraries under the `build/lib` directory, and the required headers under the `build/include` directory. `libMLCompilerBridgeC.a` exposes C APIs for use with C-based compilers like Pluto, whereas `libMLCompilerBridge.a` exposes C++ APIs that can be used with any compiler written in C++.
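As a concrete illustration of the stand-alone use, a minimal CMake sketch that links the C++ library into a compiler project; the project name, source file, and install path are assumptions:

```cmake
cmake_minimum_required(VERSION 3.10)
project(MyCompiler CXX)

# Where ML-Compiler-Bridge was installed (illustrative path).
set(MLBRIDGE_INSTALL_DIR "/opt/ml-compiler-bridge")

add_executable(my-compiler main.cpp)

# Headers from <install>/include, static C++ library from <install>/lib.
target_include_directories(my-compiler PRIVATE "${MLBRIDGE_INSTALL_DIR}/include")
target_link_libraries(my-compiler PRIVATE "${MLBRIDGE_INSTALL_DIR}/lib/libMLCompilerBridge.a")
```

Depending on which model runners are used, the ONNX Runtime and gRPC/protobuf libraries may need to be linked as well.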
Python end points are available under `CompilerInterface`. They can be downloaded as a package from PyPI.
To verify correctness, run `make verify-all`. This requires enabling tests in cmake (`-DMLBRIDGE_ENABLE_TEST=ON`), with `PROTOS_DIRECTORY` pointing to `test/protos`.
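For example, from the build directory (keeping the other cmake options from the build step above):

```bash
# Reconfigure with tests enabled and PROTOS_DIRECTORY pointing at the repository's test protos.
cmake -DMLBRIDGE_ENABLE_TEST=ON -DPROTOS_DIRECTORY=<path to repo>/test/protos ../
make verify-all
```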
ML-Compiler-Bridge can be integrated and built along with the LLVM project. This is done by adding this repository as a new project and setting the `LLVM_MLBRIDGE` option to `ON`. The CMakeLists.txt of the ml-llvm-project repository demonstrates such an integration. The passes that need to use this library can then simply link against `LLVMMLBridge`.
An example CMakeLists.txt for an LLVM pass that uses the library is shown below.
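A minimal sketch, with an illustrative target name and source file (not taken from the repository):

```cmake
# CMakeLists.txt of a hypothetical LLVM pass that queries an ML model
# through ML-Compiler-Bridge.
add_llvm_component_library(LLVMMyMLPass
  MyMLPass.cpp

  ADDITIONAL_HEADER_DIRS
  ${CMAKE_CURRENT_SOURCE_DIR}/include

  LINK_LIBS
  LLVMMLBridge
)
```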
To use the TensorFlow AOT Model Runner, call the `tf_find_and_compile` function exposed in `cmake/modules/TensorFlowCompile.cmake` from the CMakeLists.txt of your pass, with appropriate arguments. An example of integrating a TF AOT model with the inlining pass is shown here.
Libraries are autogenerated for every relevant check-in with GitHub Actions. The generated artifacts are attached to the successful runs of the Publish action.
Please feel free to raise issues to file a bug, pose a question, or initiate any related discussions. Pull requests are welcome :)
ML-Compiler-Bridge is released under the Apache 2.0 License with LLVM Exceptions. See the LICENSE file for more details.