Monday, August 24, 2015

The End of Summer, PyPy ♥ SIMD


The Summer of Code is approaching its end and it has been an amazing experience for me. Not only that, but I am also approaching the end of my master's thesis, which has been one of my main goals for the last five years. The sad part is that I will most likely not be able to participate as a student in the future.

It has been a really great time for me, and for anyone unsure about whether to apply, I can warmly recommend giving it a try.

PyPy's vectorizing optimizer

So what is the outcome of my GSoC? Let's quickly have a look at what I proposed:

The goal of this project is to enhance the trace optimizer of PyPy. By definition NumPy arrays are unboxed, homogeneous (one data type) and contiguous in memory (despite certain exceptions). The same is true for the array module in the standard library. The new optimizer will use this opportunity and exploit a SIMD instruction set (such as SSE or AVX) present on modern CISC processors (e.g. x86). This should lead to enhanced execution speed for arithmetic-intensive applications that run on top of PyPy.

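To make these layout properties concrete, here is a small sketch (my own illustration, not part of the optimizer) showing what the proposal relies on, using standard NumPy attributes:

import numpy

# A NumPy array is homogeneous and (normally) contiguous in memory,
# which is exactly what a SIMD load/store needs.
a = numpy.zeros(1024, dtype=numpy.float64)
print(a.flags['C_CONTIGUOUS'])  # True: elements are packed back to back
print(a.dtype)                  # float64: one machine type for every element
print(a.itemsize)               # 8 bytes, so two elements fit into one 128-bit SSE register
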
I have already shown that individual traces get faster when the optimization is turned on. But that does not necessarily mean programs get faster: the optimization only helps if your program spends a significant fraction of its time in a trace loop that is vectorizable.

The following benchmarks stress basic NumPy operations. In a loop, the NumPy operation is executed 1000 times. This sample program shows the basic setup of multiply-float64:

import numpy

def bench(vector_a, vector_b):
    for i in range(1000):
        numpy.multiply(vector_a, vector_b, out=vector_a)

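To make the setup complete, here is a minimal sketch of how the two operands could be created and the benchmark invoked (the vector size here is an arbitrary choice, not the one used in the measurements):

# Hypothetical setup: two contiguous float64 vectors; the actual sizes
# used in the benchmarks are not reproduced here.
vector_a = numpy.ones(1024, dtype=numpy.float64)
vector_b = numpy.ones(1024, dtype=numpy.float64)
bench(vector_a, vector_b)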

The speedup (bigger is better) shows the theoretical maximum speedup (which is bounded by the way SIMD works) and what is achieved by the optimization. The baseline is the portable PyPy version 2.6.0 (none of the code I changed is included in this version). The vectorized version is revision a026d96015e4.

Considering that aligned memory loads cannot currently be used, I think this is a pretty good result. For float64 multiply the maximum speedup is nearly reached. float32 performs very poorly because the JIT currently does not operate on float32, but always casts to float64, executes the arithmetic and casts back to float32.

Let's run some real programs. SOM (Self-Organizing Maps) is an algorithm that maps N dimensions onto a 2d grid. It contains a lot of vector distances and vector operations. Dot is the matrix dot product and wdist is the weighted Euclidean distance; a sketch of the latter is shown below. The number in the benchmark name indicates the size of the vector register.

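The exact SOM kernels are not reproduced in this post; as a rough sketch, a weighted Euclidean distance between an input vector and a matrix of weight vectors can be written with exactly the kind of element-wise NumPy operations the optimizer targets (function and parameter names are my own):

import numpy

def wdist(vec, weight_matrix, importance):
    # Weighted Euclidean distance between `vec` and every row of
    # `weight_matrix`; `importance` weights each dimension.
    diff = weight_matrix - vec
    return numpy.sqrt(numpy.sum(importance * diff * diff, axis=1))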



As you can see, bigger vectors are better for the optimization. The cool thing is that PyPy is now able to connect trace trees that use SIMD operations, which is not possible if the numerical kernels are written in e.g. C. Again I think the results are pretty cool, and the speedup should get even bigger once AVX is implemented in the JIT backend.

Non-NumPy traces

At the beginning it was a bit unclear whether this would be possible, but let me show you a very basic example of a Python program where the optimization creates a very good piece of assembler.

# arrayloop.py
import array
w = array.array('d',[0]*10000) 
l = array.array('d',[1]*10000)
def f(): 
    i = 0 
    while i < 10000: 
        l[i] = l[i] + w[i] 
        i += 1

for i in range(100000): 
    f()

$ time ./pypy-c --jit vec_all=1 ~/arrayloop.py 
0.67user 0.01system 0:00.69elapsed 99%CPU
$ time ./pypy-c --jit vec_all=0 ~/arrayloop.py 
1.65user 0.01system 0:01.67elapsed 99%CPU

Boom. 1.65 / 0.67 = ~2.45 times faster. The speedup goes beyond the factor of two that SSE offers for doubles not because of superscalar execution or caching effects, but because less guard checking is done (all array bound checks are only checked once).

If you happen to be into PyPy, you know that the list strategy (e.g. [42.0] * 44) will not store Python objects, but will store double floating point values directly in memory. Only when the first non-floating-point value is stored into the list is it transformed into a list of Python objects. Perfect! This makes it even possible to let the optimization run on loops that manipulate lists.

Unfortunately this is currently not possible, because there are some fragments in the trace that the optimizer cannot transform. We are working on it, and the next major PyPy update might contain this as well!
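To give an idea, a hypothetical list version of arrayloop.py would look like this; thanks to the list strategy both lists store raw doubles, so in principle the loop body is the same as in the array case (this is the kind of loop that could eventually benefit, it is not vectorized today):

# listloop.py (hypothetical)
w = [0.0] * 10000
l = [1.0] * 10000
def f():
    i = 0
    while i < 10000:
        l[i] = l[i] + w[i]
        i += 1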

Stats

The optimization spans at least seven files and adds more than 3000 lines of code. The test suite covers more than 4000 lines in five files. This is the newly added code in grand total; it excludes every line that I changed in the existing source base.

Does this only work for PyPy?

If I had put the same effort into a different virtual machine, it would have been tied to that language and that virtual machine only.

That would be lame right?

But (you might be aware of this) any language interpreter written in RPython can reuse the tracing JIT compiler. That means that "magically" this optimization is added to the final executable by default.

At this point it should be said that programs will not automatically get faster; there are some rules your traces must obey. But this is easy, and much easier than writing assembler or using compiler intrinsics.

This is great because if you want to write an array processing language in RPython, you gain all the features it already provides and get to use SSE4.1 (for now) to speed up your programs.
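For readers who have never written an RPython interpreter: the only thing an interpreter has to do to benefit from the tracing JIT (and therefore, eventually, from this optimization) is to declare its main loop with a JitDriver. A minimal sketch, loosely following the public JitDriver API; the toy bytecode around it is made up:

from rpython.rlib.jit import JitDriver

# greens identify a position in the user program, reds are the
# remaining loop variables the JIT has to track.
jitdriver = JitDriver(greens=['pc', 'program'], reds=['acc'])

def mainloop(program):
    pc = 0
    acc = 0.0
    while pc < len(program):
        jitdriver.jit_merge_point(pc=pc, program=program, acc=acc)
        if program[pc] == '+':
            acc += 1.0
        pc += 1
    return acc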

Final words

I plan to keep working on PyPy and I think this project will soon be merged into default. We talked about including it in the upcoming 2.6.1 release, but since there are other great changes coming to enhance the trace compiler, it was postponed.

Thank you PyPy Team, thank you Google & PSF! I had a great time tinkering on my proposal.

14 comments:

  1. Indeed, this might be a blogger bug when you pull images into the text field instead of clicking 'upload image'. Should work now.

  2. So would this speed up the bf mandelbrot even further too? http://morepypy.blogspot.com/2011/04/tutorial-part-2-adding-jit.html

  3. Just by looking at the last trace (above the Final words), the optimization would use SIMD instructions. The only question is if the most time is spent in this trace. I'll give it a shot at the weekend.

  4. Nope, I looked at the traces mandel.b produces. There are quite a lot of traces produced and none of them are vectorizable. The trace I referred to in my previous comment is from another program called 'test.b'.

    The biggest problem with this interpreter is that the tape position is constantly modified. It is hard to find a trace where the tape position is not set via 'setfield_gc'. This makes many operations dependent and makes the optimization bail out.

    I'll keep looking, maybe I'll find a solution to this problem. I mean, BF is not really a serious language, but it would still be cool to get it running!

  5. > If I have put the same effort into a different virtual machine, it would have been tied to the language and the virtual machine only.

    If only the JVM or the CLR weren't so tied to one language! ;-)

  6. I'm sure that you could write a pretty fast JVM/CLR using RPython. The question remains if there is any gain in doing that :)

  7. After completing the RPython JVM (if you are a real cool kid), you could integrate Python with Java by writing a VM like PyHyp. (More details: http://arxiv.org/pdf/1503.08623.pdf)
    That would definitely untie the JVM written in RPython.
