I had never attended a programming conference before. Some thoughts and impressions:
- The architecture of the conference center is impressive.
- Python is heavily used in numerical computation, data analysis and processing (much more than I expected).
- Pick any bar and order a vegetarian dish (if there is one): it will most certainly contain meat or fish.
- PyPy is used, but most people are unaware that there is a JIT compiler for Python that speeds up computations and reduces memory usage.
I also kept working on my proposal all along. Here are some notes on what I have been working on (before Bilbao).
ABC Optimization
One "roadblock" I did not tackle is vectorization of "user code". The vecopt branch at it's current shape was not able to efficiently transform the most basic Python for loop accessing array instances of the array module. (Micro)NumPy kernels work very well (which is the main use case), but for Python loops this is a different story. Obviously, it is not that easy to vectorize these, because it is much more likely that many guards and state modifying operations (other than store) are present.
In the worst case the optimizer just bails out and leaves the trace as it is.
But I think at least the simplest loops should work as well.
So I evaluated what needs to be done to make this happen: Reduce the number of guards, especially Array Bound Checks (ABC). PyPy does this already, but the ones I would like to remove need a slightly different treatment. Consider:
i = 0
while i < X:
    a[i] = b[i] + c[i]
    i += 1
There are four guards in the resulting trace: one ensuring the index stays below X, and three protecting the array accesses. You cannot omit them, but you can move them outside the loop. The idea is to introduce guards that make all the checks (except the index guard) redundant. Here is an example:
guard(i < X)      # (1) protects the index
guard(i < len(a)) # (2) failing it leads to IndexError
load(..., i, ...) # actual loading instruction
Assume X < len(a). Then (1) implies that (2) is redundant, and guard(X < len(a)) can be checked once before the loop is entered. That works well for a well-behaved program. In order to pick the right guard as the reference (the index guard might not be the first guard), I'll take a look at the runtime values: the minimum value is preferable, because it is the strongest assumption.
I'm not yet sure if this is the best solution, but it is certainly simple and yields the desired result.
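To show the selection step in isolation, here is a tiny Python sketch (not the actual optimizer code; the function name and the sample bounds are made up) that picks the reference guard by its runtime value and collects the comparisons to hoist:

# minimal sketch: given the index guards of one iteration as
# (expression, runtime value) pairs, keep the guard with the smallest
# runtime value inside the loop and hoist a comparison against every
# other bound out of the loop
def split_index_guards(guards):
    # the minimum runtime value is the strongest assumption: if
    # i < min_bound holds and min_bound < other_bound was checked
    # before the loop, then i < other_bound is implied
    loop_guard = min(guards, key=lambda g: g[1])
    hoisted = [(loop_guard[0], other[0])
               for other in guards if other is not loop_guard]
    return loop_guard, hoisted

loop_guard, hoisted = split_index_guards(
    [("X", 10), ("len(a)", 12), ("len(b)", 16), ("len(c)", 16)])
print(loop_guard)  # ('X', 10): stays in the loop as guard(i < X)
print(hoisted)     # pairs like ('X', 'len(a)'): become guard(X < len(a))
                   # checked before the loop is entered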
Off by one
Some commits ago the last few "off by one" iterations of a NumPy call were always handled by the blackhole interpreter, eventually compiling a bridge out of the guard. This made the last iterations unnecessarily slow. Now PyPy has the bare essentials to create trace versions and immediately stitch them to a guarding instruction. The original version of the loop is compiled to machine code at the same time as the optimized version and attached to the guards within the optimized version.
Ultimately, a vectorized trace exits at an index guard that immediately leads into a trace loop handling the last remaining elements.
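As a rough illustration of that control flow (ordinary Python rather than trace or machine code; VECTOR_WIDTH and the function are made up for this sketch), the structure looks like this:

VECTOR_WIDTH = 4  # assumed SIMD width, for illustration only

def add(a, b, c):
    n = len(c)
    i = 0
    # optimized loop: one iteration per vector chunk
    while i + VECTOR_WIDTH <= n:          # index guard of the vector loop
        for j in range(i, i + VECTOR_WIDTH):
            c[j] = a[j] + b[j]            # stands in for one SIMD add
        i += VECTOR_WIDTH
    # guard exit: the original (scalar) loop version is stitched here
    # and handles the last n % VECTOR_WIDTH elements
    while i < n:
        c[i] = a[i] + b[i]
        i += 1

a = [1.0] * 10
b = [2.0] * 10
c = [0.0] * 10
add(a, b, c)
print(c)  # [3.0, 3.0, ..., 3.0]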