How fast is cling/ROOT? Comparing C++ JIT to real compilers on a ray tracer.

I have been in love with ROOT, CERN's data analysis framework, and its interactive C++ shell. But how fast is the code JITed by ROOT? I find myself compiling ROOT scripts (a.k.a. ACLiC, basically clang) more often than I run them in the interpreter. So let's find that out today. Furthermore, how fast... Continue Reading →
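
As a minimal sketch of the comparison being set up here (assuming a macro file named bench.C with a trivial stand-in workload, not the post's actual ray tracer), the same file can be run through cling's JIT or compiled by ACLiC just by appending a trailing '+':

```cpp
// Run interpreted / JITed by cling:
//   root -l bench.C
// Compile to a shared library with ACLiC (note the trailing '+'):
//   root -l bench.C+
#include <TStopwatch.h>
#include <cstdio>

void bench() {
    TStopwatch timer;                     // ROOT's wall/CPU-time stopwatch
    timer.Start();
    double sum = 0.0;
    for (long i = 0; i < 100000000; ++i)  // stand-in workload for illustration
        sum += i * 0.5;
    timer.Stop();
    std::printf("sum = %g, real time = %g s\n", sum, timer.RealTime());
}
```

Timing the same macro both ways gives a first rough feel for the interpreter-versus-compiler gap before moving to a heavier workload like the ray tracer.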

Sparse Neural Network pt2 – Boosting. An additional layer of regularization.

It has been a while since I touched upon the topic of Sparse Neural Networks. Let's finally get to the second part. Please make sure that you have read my last post about KWinner and sparse NNs; otherwise you'll be confused. Boosting is the method HTM uses to avoid common features in the input... Continue Reading →
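
A rough sketch of the idea, not the post's actual code: on top of a k-winner layer, units that fire less often than average get their activations scaled up, so frequently active "common" features stop dominating the winner selection. Function and parameter names below are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Pick the k winners after applying HTM-style boosting, then update each
// unit's running duty cycle (how often it has recently won).
std::vector<size_t> kWinnerWithBoost(const std::vector<float>& activations,
                                     std::vector<float>& dutyCycle,
                                     size_t k, float boostStrength = 1.5f,
                                     float dutyAlpha = 0.01f) {
    const float targetDuty = static_cast<float>(k) / activations.size();

    // Units below the target duty cycle get boosted, units above get dampened.
    std::vector<float> boosted(activations.size());
    for (size_t i = 0; i < activations.size(); ++i)
        boosted[i] = activations[i] *
                     std::exp(boostStrength * (targetDuty - dutyCycle[i]));

    // Indices of the k largest boosted activations.
    std::vector<size_t> idx(activations.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](size_t a, size_t b) { return boosted[a] > boosted[b]; });
    idx.resize(k);

    // Winners move their duty cycle toward 1, losers toward 0.
    for (size_t i = 0; i < dutyCycle.size(); ++i) {
        const bool won = std::find(idx.begin(), idx.end(), i) != idx.end();
        dutyCycle[i] = (1.0f - dutyAlpha) * dutyCycle[i] + dutyAlpha * (won ? 1.0f : 0.0f);
    }
    return idx;
}
```

The boosting term acts as an extra layer of regularization: it spreads activity across more units instead of letting a few dominate, which is the behavior the post builds on.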
