Building AMD’s MIOpen on a non-ROCm (non-AMD) system

AMD. Please, please write your build script properly. It really sucks that I have to hack your code to make it work on other platforms, especially when every component involved is cross-platform.

Sorry that I got a bit mad while writing this post. There are just too many basic things done wrong.


AMD released their machine learning library, MIOpen, on July 30th, 2017. It is their response to Nvidia’s cuDNN, hopefully to gain some market share in the ML race. It is supposed to provide high-performance matrix and tensor operations on AMD’s platform.

Upon release, I grabbed the source code and stared at it for a while, reading the README and the source. It seems that MIOpen calls either OpenCL or HIP to do the computing. Interesting. So is it possible to run MIOpen on an Nvidia or Intel GPU, or even a Xeon Phi? That would be amazing!

Unfortunately, after several days I hadn’t seen a single technical article about this new library. Maybe it is too difficult to use? Anyway, let’s try to do something with it.

Build problems, and how to fix them

After cloning the MIOpen repository and running cmake -DMIOPEN_BACKEND=OpenCL .. to build MIOpen with the OpenCL backend, CMake reports that it needs ROCm to build…? No ROCm headers are included directly, so why is ROCm required? I guess AMD uses a common CMake template for all their ROCm products. Commenting out all the ROCm-related commands fixes this. It does remove the ability to install MIOpen into the system, but I can always copy the files by hand. 🙂
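For reference, the kind of edit involved looks roughly like this. This is a hypothetical sketch, not the literal contents of MIOpen’s CMakeLists.txt; the exact module and command names may differ:

```cmake
# Hypothetical sketch of the kind of change needed in CMakeLists.txt.
# The ROCm-specific CMake modules pull in install/packaging helpers that
# assume a ROCm install tree, so on a non-ROCm system they can be disabled:

# find_package(ROCM REQUIRED)     # <- comment out the ROCm requirement
# include(ROCMInstallTargets)     # <- and the ROCm install helpers
# include(ROCMCreatePackage)

# The rest of the build only needs a working OpenCL:
find_package(OpenCL REQUIRED)
```

Disabling the install helpers is what breaks `make install`; everything else builds as usual.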

After doing so, CMake reports that it needs a LaTeX compiler. Installing one fixes this.

Here is my repository with a version of MIOpen that builds on a non-ROCm system. Running cmake -DMIOPEN_BACKEND=OpenCL . && make will build everything.

Testing MIOpen

After that, cmake passes and everything builds without a problem. I built the tests according to the README. 5 out of 10 tests failed with std::bad_alloc (which is weird, since I shouldn’t be running out of memory anytime soon). Here is my test log. Note that I have no idea which OpenCL device MIOpen is running on: I have both a CPU and a GPU device available, but MIOpen says nothing about which one it is using.

I’ll dive deeper into the code to find out why and fix them. Otherwise, MIOpen works on an Nvidia GPU! And it has a good chance of working on an Intel GPU, or even a Xeon Phi with an OpenCL SDK. Sweet!

Update: Some OpenCL API misuses caused the tests to fail. Current status: 8/10 tests pass.

Update: MIOpen passes all of its tests after some modifications to the source code.


It really sucks that every time AMD releases some exciting software, it is hard to get it working. Don’t get me wrong, AMD makes some great software, but it is difficult to set up. CodeXL is a really good GPU debugger, but it has never worked properly on an Nvidia/Intel GPU for me (although it is documented to do so). ROCm is a great, fully open-source GPU computing platform, but it is a pain to set up. Problems like these really limit AMD’s ability to reach new programmers who want to get into their platform. They scare developers away.



One thought on “Building AMD’s MIOpen on a non-ROCm (non-AMD) system”


  1. The ROCm CMake modules can be installed separately from the rocm-cmake repo; it also has tests for Linux, Mac, and Windows. You can test and install MIOpen on any platform with cget:

    cget install --test RadeonOpenCompute/rocm-cmake ROCmSoftwarePlatform/MIOpenGEMM ROCmSoftwarePlatform/MIOpen

    Of course, AMD mainly supports their own hardware (which is expected), but since it is open source you can easily make changes to support other platforms as well.

