Thursday, November 19, 2015

PyPy 4.0.1 released, please update

PyPy 4.0.1


We have released PyPy 4.0.1, three weeks after PyPy 4.0.0. We have fixed a few critical bugs in the JIT compiled code, reported by users. We therefore encourage all users of PyPy to update to this version. There are a few minor enhancements in this version as well.

You can download the PyPy 4.0.1 release here:
We would like to thank our donors for the continued support of the PyPy project.
We would also like to thank our contributors and encourage new people to join the project. PyPy has many layers and we need help with all of them: PyPy and RPython documentation improvements, tweaking popular modules to run on pypy, or general help with making RPython’s JIT even better.

 

CFFI update


While not limited to PyPy, cffi is arguably our most significant contribution to the Python ecosystem. PyPy 4.0.1 ships with cffi-1.3.1 and the improvements it brings.

 

What is PyPy?


PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It’s fast (pypy and cpython 2.7.x performance comparison) due to its integrated tracing JIT compiler.
We also welcome developers of other dynamic languages to see what RPython can do for them.
This release supports x86 machines on most common operating systems (Linux 32/64, Mac OS X 64, Windows 32, OpenBSD, FreeBSD), newer ARM hardware (ARMv6 or ARMv7, with VFPv3) running Linux, and the big- and little-endian variants of ppc64 running Linux.

 

Other Highlights (since 4.0.0 released three weeks ago)

  • Bug Fixes
    • Fix a bug when unrolling double loops in JITted code
    • Fix multiple memory leaks in the ssl module, one of which affected CPython as well (thanks to Alex Gaynor for pointing those out)
    • Use pkg-config to find ssl headers on OS X
    • Issues reported with our previous release were resolved after reports from users on our issue tracker at https://bitbucket.org/pypy/pypy/issues or on IRC at #pypy
  • New features
    • Internal cleanup of RPython class handling
    • Support stackless and greenlets on PPC machines
    • Improve debug logging in subprocesses: use, for example, PYPYLOG=jit:log.%d to have each subprocess write its JIT log to a file called ‘log.%d’, with ‘%d’ replaced by the subprocess’ PID (see the example after this list).
    • Support PyOS_double_to_string in our cpyext capi compatibility layer
  • Numpy
    • Improve support for __array_interface__
    • Propagate most NaN mantissas through float16-float32-float64 conversions
  • Performance improvements and refactorings
    • Improvements in slicing byte arrays
    • Improvements in enumerate()
    • Silence some warnings while translating
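
As an example of the subprocess logging flag mentioned above (a shell sketch; the script name is a placeholder):

    PYPYLOG=jit:log.%d pypy my_forking_script.py
    # each subprocess writes its JIT log to log.<PID>, e.g. log.1234 and log.4321
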
Please update, and continue to help us make PyPy better.

Cheers
The PyPy Team

Thursday, October 29, 2015

PyPy 4.0.0 Released - A Jit with SIMD Vectorization and More

PyPy 4.0.0

We’re pleased and proud to unleash PyPy 4.0.0, a major update of the PyPy python 2.7.10 compatible interpreter with a Just In Time compiler. We have improved warmup time and reduced the memory overhead used for tracing, added vectorization for numpy and general loops where possible on x86 hardware (disabled by default), refactored rough edges in RPython, and increased functionality of numpy.
You can download the PyPy 4.0.0 release here:
We would like to thank our donors for the continued support of the PyPy project.
We would also like to thank our contributors (7 new ones since PyPy 2.6.0) and encourage new people to join the project. PyPy has many layers and we need help with all of them: PyPy and RPython documentation improvements, tweaking popular modules to run on PyPy, or general help with making RPython’s JIT even better.

New Version Numbering


Since the past release, PyPy 2.6.1, we decided to update the PyPy 2.x.x versioning directly to PyPy 4.x.x, to avoid confusion with CPython 2.7 and 3.5. Note that this version of PyPy uses the stdlib and implements the syntax of CPython 2.7.10.

Vectorization


Richard Plangger began work in March and continued over a Google Summer of Code to add a vectorization step to the trace optimizer. The step recognizes common constructs and emits SIMD code where possible, much as any modern compiler does. This vectorization happens while tracing running code, so it is actually easier at run-time to determine the availability of possible vectorization than it is for ahead-of-time compilers.
Availability of SIMD hardware is detected at run time, without needing to precompile various code paths into the executable.
The first version of the vectorization has been merged in this release; since it is so new, it is off by default. To enable the vectorization in built-in JIT drivers (like numpy ufuncs), add --jit vec=1; to enable all implemented vectorization, add --jit vec_all=1.
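For example (the script name is a placeholder):

    pypy --jit vec=1 my_numeric_script.py       # vectorize built-in JIT drivers such as numpy ufuncs
    pypy --jit vec_all=1 my_numeric_script.py   # vectorize all traced loops where possible
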
Benchmarks and a summary of this work appear here.

Internal Refactoring: Warmup Time Improvement and Reduced Memory Usage


Maciej Fijalkowski and Armin Rigo refactored internals of RPython that now allow PyPy to more efficiently use guards in jitted code. They also rewrote unrolling, leading to a warmup time improvement of 20% or so. The reduction in guards also means a reduction in memory use, likewise a savings of around 20%.

Numpy


Our implementation of numpy continues to improve. ndarray and the numeric dtypes are very close to feature-complete; record, string and unicode dtypes are mostly supported. We have reimplemented numpy linalg, random and fft as cffi-1.0 modules that call out to the same underlying libraries that upstream numpy uses. Please try it out, especially using the new vectorization (via --jit vec=1 on the command line) and let us know what is missing for your code.

CFFI


While not limited to PyPy, cffi is arguably our most significant contribution to the Python ecosystem. Armin Rigo continued improving it, and PyPy reaps the benefits of cffi-1.3: improved management of object lifetimes, __stdcall support on Win32, ffi.memmove(), and percolating the const and restrict keywords from cdef to the generated C code.
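
As a small illustration of one of the new features, here is a minimal sketch of ffi.memmove(), assuming cffi 1.3 or later; the buffer sizes are arbitrary:

import cffi
ffi = cffi.FFI()

# ffi.memmove(dest, src, n) copies n bytes, like C memmove()
src = ffi.new("char[]", b"hello world")
dst = ffi.new("char[16]")
ffi.memmove(dst, src, 5)
print(ffi.buffer(dst, 5)[:])   # -> 'hello'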

What is PyPy?


PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It’s fast (pypy and cpython 2.7.x performance comparison) due to its integrated tracing JIT compiler.
We also welcome developers of other dynamic languages to see what RPython can do for them.
This release supports x86 machines on most common operating systems (Linux 32/64, Mac OS X 64, Windows 32, OpenBSD, FreeBSD), as well as newer ARM hardware (ARMv6 or ARMv7, with VFPv3) running Linux.
We also introduce support for the 64 bit PowerPC hardware, specifically Linux running the big- and little-endian variants of ppc64.

Other Highlights (since 2.6.1 release two months ago)

  • Bug Fixes
    • Applied OpenBSD downstream fixes
    • Fix a crash on non-Linux when running more than 20 threads
    • In cffi, ffi.new_handle() is more cpython compliant
    • Accept unicode in functions inside the _curses cffi backend exactly like cpython
    • Fix a segfault in itertools.islice()
    • Use gcrootfinder=shadowstack by default, asmgcc on Linux only
    • Fix ndarray.copy() for upstream compatibility when copying non-contiguous arrays
    • Fix assumption that lltype.UniChar is unsigned
    • Fix a subtle bug with stacklets on shadowstack
    • Improve support for the cpython capi in cpyext (our capi compatibility layer). Fixing these issues inspired some thought about cpyext in general; stay tuned for more improvements
    • When loading dynamic libraries, in case of a certain loading error, retry loading the library assuming it is actually a linker script, like on Arch and Gentoo
    • Issues reported with our previous release were resolved after reports from users on our issue tracker at https://bitbucket.org/pypy/pypy/issues or on IRC at #pypy
  • New features:
    • Add an optimization pass to vectorize loops using x86 SIMD intrinsics.
    • Support __stdcall on Windows in CFFI
    • Improve debug logging when using PYPYLOG=???
    • Deal with platforms with no RAND_egd() in OpenSSL
  • Numpy:
    • Add support for ndarray.ctypes
    • Fast path for mixing numpy scalars and floats
    • Add support for creating Fortran-ordered ndarrays
    • Fix casting failures in linalg (by extending ufunc casting)
    • Recognize and disallow (for now) pickling of ndarrays with objects embedded in them
  • Performance improvements and refactorings:
    • Reuse hashed keys across dictionaries and sets
    • Refactor JIT internals to improve warmup time by 20% or so at the cost of a minor regression in JIT speed
    • Recognize patterns of common sequences in the JIT backends and optimize them
    • Make the garbage collector more incremental over external_malloc() calls
    • Share guard resume data where possible which reduces memory usage
    • Fast path for zip(list, list)
    • Reduce the number of checks in the JIT for lst[a:]
    • Move the non-optimizable part of callbacks outside the JIT
    • Factor in field immutability when invalidating heap information
    • Unroll itertools.izip_longest() with two sequences
    • Minor optimizations after analyzing output from vmprof and trace logs
    • Remove many class attributes in rpython classes
    • Handle getfield_gc_pure* and getfield_gc_* uniformly in heap.py
    • Improve simple trace function performance by lazily calling fast2locals and locals2fast only if truly necessary
Please try it out and let us know what you think. We welcome feedback, we know you are using PyPy, please tell us about it!
Cheers
The PyPy Team




Tuesday, October 20, 2015

Automatic SIMD vectorization support in PyPy

Hi everyone,

it took some time to catch up with the JIT refactorings merged this summer. But (drum roll) we are happy to announce that:

The next release of PyPy,  "PyPy 4.0.0", will ship the new auto vectorizer

The goal of this project was to increase the speed of numerical applications in both the NumPyPy library and for arbitrary Python programs. In PyPy we have focused a lot on improvements in the 'typical python workload', which usually involves object and string manipulations, mostly for web development. We're hoping with this work that we'll continue improving the other very important Python use case - numerics.

What it can do!

It targets numerics only. It will not execute object manipulations faster, but it is capable of enhancing common vector and matrix operations.
The good news is that it is not specifically tied to the NumPy library or the PyPy virtual machine: any interpreter written in RPython is able to make use of the vectorization. For more information, take a look here or consult the documentation. For the time being it is not turned on by default, so be sure to enable it by specifying --jit vec=1 before running your program.

If your language (written in RPython) contains many array/matrix operations, you can easily integrate the optimization by adding the parameter 'vec=1' to the JitDriver.

NumPyPy Improvements

Let's take a look at the core functions of the NumPyPy library (*).
The following tests show the speedup of the core functions commonly used in Python code interfacing with NumPy, on CPython with NumPy, on the PyPy 2.6.1 released several weeks ago, and on PyPy 15.11 to be released soon. timeit was used to measure the time needed to run the operation in the plot title on various vector (lower case) and square matrix (upper case) sizes displayed on the X axis. The Y axis shows the speedup compared to CPython 2.7.10, which means that higher is better.
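
As a rough sketch of the kind of micro-benchmark behind these plots (the array size, operation and repeat count are illustrative, not the exact benchmark code):

import numpy as np
import timeit

a = np.random.rand(1000000)    # the "vector" case; square matrices are timed the same way
b = np.random.rand(1000000)
# time one of the core operations from the plot titles, e.g. element-wise multiply
print(timeit.timeit(lambda: np.multiply(a, b), number=100))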


In comparison to PyPy 2.6.1, the speedup has greatly improved. The hardware support really cuts down the runtime of the vector and matrix operations. There is another operation we would like to highlight: the dot product.
It is a very common operation in numerics and PyPy now (given a moderate sized matrix and vector) decreases the time spent in that operation. See for yourself:

These are nice improvements in the NumPyPy library, and we reached a competitive level while only making use of SSE4.1.

Future work  


This is not the end of the road. The GSoC project showed that it is possible to implement this optimization in PyPy. There might be other improvements we can make to carry this further:
  • Check alignment at runtime to increase the memory throughput of the CPU
  • Support the AVX vector extension which (at least) doubles the size of the vector register
  • Handle each and every corner case in Python traces to enable it globally
  • Do not rely only on loading operations to trigger the analysis; there might be cases where combinations of floating point values could be computed in parallel
Cheers,
The PyPy Team

(*) The benchmark code can be found here; it was run using this configuration: i7-2600 CPU @ 3.40GHz (4 cores).

Friday, October 16, 2015

PowerPC backend for the JIT

Hi all,

PyPy's JIT now supports the 64-bit PowerPC architecture! This is the third architecture supported, in addition to x86 (32 and 64) and ARM (32-bit only). More precisely, we support Linux running the big- and the little-endian variants of ppc64. Thanks to IBM for funding this work!

The new JIT backend has been merged into "default". You should be able to translate PPC versions as usual directly on the machines. For the foreseeable future, I will compile and distribute binary versions corresponding to the official releases (for Fedora), but of course I'd welcome it if someone else could step in and do it. Also, it is unclear yet if we will run a buildbot.

To check that the result performs well, I logged in to a ppc64le machine and ran the usual benchmark suite of PyPy (minus sqlitesynth: sqlite was not installed on that machine). I ran it twice, 12 hours apart, as an attempt to reduce risks caused by other users suddenly using the machine. The machine was overall relatively quiet. Of course, this is scientifically not good enough; it is what I could come up with given the limited resources.

Here are the results, where the numbers are speed-up factors between the non-jit and the jit version of PyPy. The first column is x86-64, for reference. The second and third columns are the two ppc64le runs. All are Linux. A few benchmarks are not reported here because the runner doesn't execute them on non-jit (however, apart from sqlitesynth, they all worked).

    ai                        13.7342        16.1659     14.9091
    bm_chameleon               8.5944         8.5858        8.66
    bm_dulwich_log             5.1256         5.4368      5.5928
    bm_krakatau                5.5201         2.3915      2.3452
    bm_mako                    8.4802         6.8937      6.9335
    bm_mdp                     2.0315         1.7162      1.9131
    chaos                     56.9705        57.2608     56.2374
    sphinx
    crypto_pyaes               62.505         80.149     79.7801
    deltablue                  3.3403         5.1199      4.7872
    django                    28.9829         23.206       23.47
    eparse                     2.3164         2.6281       2.589
    fannkuch                   9.1242        15.1768     11.3906
    float                     13.8145        17.2582     17.2451
    genshi_text               16.4608        13.9398     13.7998
    genshi_xml                 8.2782         8.0879      9.2315
    go                         6.7458        11.8226     15.4183
    hexiom2                   24.3612        34.7991     33.4734
    html5lib                   5.4515         5.5186       5.365
    json_bench                28.8774        29.5022     28.8897
    meteor-contest             5.1518         5.6567      5.7514
    nbody_modified            20.6138        22.5466     21.3992
    pidigits                   1.0118          1.022      1.0829
    pyflate-fast               9.0684        10.0168     10.3119
    pypy_interp                3.3977         3.9307      3.8798
    raytrace-simple           69.0114       108.8875    127.1518
    richards                  94.1863       118.1257    102.1906
    rietveld                   3.2421         3.0126      3.1592
    scimark_fft
    scimark_lu
    scimark_montecarlo
    scimark_sor
    scimark_sparsematmul
    slowspitfire               2.8539         3.3924      3.5541
    spambayes                  5.0646         6.3446       6.237
    spectral-norm             41.9148        42.1831     43.2913
    spitfire                   3.8788         4.8214       4.701
    spitfire_cstringio          7.606         9.1809      9.1691
    sqlitesynth
    sympy_expand               2.9537         2.0705      1.9299
    sympy_integrate            4.3805         4.3467      4.7052
    sympy_str                  1.5431         1.6248      1.5825
    sympy_sum                  6.2519          6.096      5.6643
    telco                     61.2416        54.7187     55.1705
    trans2_annotate
    trans2_rtype
    trans2_backendopt
    trans2_database
    trans2_source
    twisted_iteration         55.5019        51.5127     63.0592
    twisted_names              8.2262         9.0062      10.306
    twisted_pb                12.1134         13.644     12.1177
    twisted_tcp                4.9778          1.934      5.4931

    GEOMETRIC MEAN               9.31           9.70       10.01

The last line reports the geometric mean of each column. We see that the goal was reached: PyPy's JIT actually improves performance by a factor of around 9.7 to 10 times on ppc64le. By comparison, it "only" improves performance by a factor 9.3 on Intel x86-64. I don't know why, but I'd guess it mostly means that a non-jitted PyPy performs slightly better on Intel than it does on PowerPC.

Why is that? Actually, if we do the same comparison with an ARM column too, we also get higher numbers there than on Intel. When we discovered that a few years ago, we guessed that on ARM running the whole interpreter in PyPy takes up a lot of resources, e.g. of instruction cache, which the JIT's assembler doesn't need any more after the process is warmed up. And caches are much bigger on Intel. However, PowerPC is much closer to Intel, so this argument doesn't work for PowerPC. But there are other more subtle variants of it. Notably, Intel is doing crazy things about branch prediction, which likely helps a big interpreter---both the non-JITted PyPy and CPython, and both for the interpreter's main loop itself and for the numerous indirect branches that depend on the types of the objects. Maybe the PowerPC is as good as Intel, and so this argument doesn't work either. Another one would be: on PowerPC I did notice that gcc itself is not perfect at optimization. During development of this backend, I often looked at assembler produced by gcc, and there are a number of small inefficiencies there. All these are factors that slow down the non-JITted version of PyPy, but don't influence the speed of the assembler produced just-in-time.

Anyway, this is just guessing. The fact remains that PyPy can now be used on PowerPC machines. Have fun!

A bientôt,

Armin.

Monday, October 5, 2015

PyPy memory and warmup improvements (2) - Sharing of Guards

Hello everyone!

This is the second part of the series of improvements in warmup time and memory consumption in the PyPy JIT. This post covers recent work on sharing guard resume data that was recently merged to trunk. It will be a part of the next official PyPy release. To understand what it does, let's start with a loop for a simple example:

class A(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def call_method(self, z):
        return self.x + self.y + z

def f():
    s = 0
    for i in range(100000):
        a = A(i, 1 + i)
        s += a.call_method(i)

At the entrance of the loop, we have the following set of operations:

guard(i5 == 4)
guard(p3 is null)
p27 = p2.co_cellvars
p28 = p2.co_freevars
guard_class(p17, 4316866008, descr=<Guard0x104295e08>)
p30 = p17.w_seq
guard_nonnull(p30, descr=<Guard0x104295db0>)
i31 = p17.index
p32 = p30.strategy
guard_class(p32, 4317041344, descr=<Guard0x104295d58>)
p34 = p30.lstorage
i35 = p34.item0

The above operations get executed at the entrance, each time we call f(). They ensure that all the optimizations done below stay valid. Now, as long as nothing out of the ordinary happens, they only check that the world around us has not changed. However, if e.g. someone puts new methods on class A, any of the above guards might fail. Despite the fact that it's a very unlikely case, PyPy needs to track how to recover from such a situation. Each of those points needs to keep the full state of the optimizations performed, so we can safely deoptimize and reenter the interpreter. This is vastly wasteful since most of those guards never fail, hence some sharing between guards has been performed.

We went a step further - when two guards are next to each other, or the operations in between them don't have side effects, we can safely redo the operations or, simply put, resume at the previous guard. That means every now and again we execute a few extra operations, but not storing the extra info saves quite a bit of time and memory. This is similar to the approach that LuaJIT takes, which is called sparse snapshots.

I've done some measurements on annotating & rtyping translation of pypy, which is a pretty memory hungry program that compiles a fair bit. I measured, respectively:

  • total time the translation step took (annotating or rtyping)
  • time it took for tracing (that excludes backend time for the total JIT time) at the end of rtyping.
  • memory the GC feels responsible for after the step. The real amount of memory consumed will always be larger; the savings are in the 1.5-2x range

Here is the table:

branch    time annotation    time rtyping    memory annotation    memory rtyping    tracing time
default   317s               454s            707M                 1349M             60s
sharing   302s               430s            595M                 1070M             51s
win       4.8%               5.5%            19%                  26%               17%

Obviously pypy translation is an extreme example - the vast majority of the code out there does not have that many lines of code to be jitted. However, it's at the very least a good win for us :-)

We will continue to improve the warmup performance and keep you posted!

Cheers,
fijal


Wednesday, September 9, 2015

PyPy warmup improvements

Hello everyone!

I'm very pleased to announce that we've just managed to merge the optresult branch. Under this cryptic name is the biggest JIT refactoring we've done in a couple years, mostly focused on the warmup time and memory impact of PyPy.

To understand why we did that, let's look back in time - back when we got the first working JIT prototype in 2009 we were focused exclusively on achieving peak performance with some consideration towards memory usage, but without serious consideration towards warmup time. This means we accumulated quite a bit of technical debt over time that we're trying, with difficulty, to address right now. This branch mostly does not affect the peak performance - it should however help you with short-lived scripts, like test runs.

We identified warmup time to be one of the major pain points for pypy users, along with memory impact and compatibility issues with the CPython C extension world. While we can't address all the issues at once, we're trying to address the first two in the work contributing to this blog post. I will cover the last item in a separate article.

To see how much of a problem warmup is for your program, you can run your program with the PYPYLOG=jit-summary:- environment variable set. This should show you something like this:

(pypy-optresult)fijal@hermann:~/src/botbot-web$ PYPYLOG=jit-summary:- python orm.py 1500
[d195a2fcecc] {jit-summary
Tracing:            781     2.924965
Backend:            737     0.722710
TOTAL:                      35.912011
ops:                1860596
recorded ops:       493138
  calls:            81022
guards:             131238
opt ops:            137263
opt guards:         35166
forcings:           4196
abort: trace too long:      22
abort: compiling:   0
abort: vable escape:        22
abort: bad loop:    0
abort: force quasi-immut:   0
nvirtuals:          183672
nvholes:            25797
nvreused:           116131
Total # of loops:   193
Total # of bridges: 575
Freed # of loops:   6
Freed # of bridges: 75
[d195a48de18] jit-summary}

This means that the total (wall clock) time was 35.9s, out of which we spent 2.9s tracing 781 loops and 0.72s compiling them. The remaining couple were aborted (trace too long is normal, vable escape means someone called sys._getframe() or equivalent). You can do the following things:

  • compare the numbers with pypy --jit off and see at which number of iterations pypy jit kicks in
  • play with the thresholds: pypy --jit threshold=500,function_threshold=400,trace_eagerness=50 was much better in this example. What this does is to lower the threshold for tracing loops from the default of 1039 to 500, the threshold for tracing functions from the start from 1619 to 400, and the threshold for tracing bridges from 200 to 50. Bridges are "alternative paths" that the JIT did not take that are being additionally traced. We believe in sane defaults, so we'll try to improve upon those numbers, but generally speaking there is no one-size-fits-all here.
  • if the tracing/backend time stays high, come and complain to us with benchmarks, we'll try to look at them

Warmup, as a number, is notoriously hard to measure. It's a combination of:

  • pypy running interpreter before jitting
  • pypy needing time to JIT the traces
  • additional memory allocations needed during tracing to accommodate bookkeeping data
  • exiting and entering assembler until there is enough assembler coverage

We're working hard on making a better assessment of this number, stay tuned :-)

Speedups

Overall we measured about 50% speed improvement in the optimizer, which reduces the overall warmup time between 10% and 30%. The very obvious warmup benchmark got a speedup from 4.5s to 3.5s, almost 30% improvement. Obviously the speedups on benchmarks will vastly depend on how much warmup time there is in those benchmarks. We observed the annotation time of pypy to decrease by about 30% and the overall translation time by about 7%, so your mileage may vary.

Of course, as usual with the large refactoring of a crucial piece of PyPy, there are expected to be bugs. We are going to wait for the default branch to stabilize so you should see warmup improvements in the next release. If you're not afraid to try, nightlies will already have them.

We're hoping to continue improving upon warmup time and memory impact in the future, stay tuned for improvements.

Technical details

The branch does "one" thing - it changes the underlying model of how operations are represented during tracing and optimizations. Let's consider a simple loop like:

[i0, i1]
i2 = int_add(i0, i1)
i3 = int_add(i2, 1)
i4 = int_is_true(i3)
guard_true(i4)
jump(i3, i2)

The original representation would allocate a Box for each of i0 - i4 and then store those boxes in instances of ResOperation. The list of such operations would then go to the optimizer. Those lists are big - we usually remove 90% of them during optimizations, but they can be a couple thousand elements. Overall, allocating those big lists takes a toll on warmup time, especially due to the GC pressure. The branch removes the existence of Box completely, instead using a link to ResOperation itself. So, in the above example, i2 would refer to its producer - i2 = int_add(i0, i1) - with arguments getting special treatment.

That alone reduces the GC pressure slightly, but a reduced number of instances also lets us store references on them directly instead of going through expensive dictionaries, which were used to store optimizing information about the boxes.

Cheers!
fijal & arigo


Monday, August 31, 2015

PyPy 2.6.1 released

PyPy 2.6.1

We’re pleased to announce PyPy 2.6.1, an update to PyPy 2.6.0 released June 1. We have fixed many issues, updated stdlib to 2.7.10, cffi to version 1.3, extended support for the new vmprof statistical profiler for multiple threads, and increased functionality of numpy.
You can download the PyPy 2.6.1 release here:
We would like to thank our donors for the continued support of the PyPy project, and our volunteers and contributors.

We would also like to encourage new people to join the project. PyPy has many layers and we need help with all of them: PyPy and RPython documentation improvements, tweaking popular modules to run on pypy, or general help with making RPython’s JIT even better.

What is PyPy?

PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It’s fast (pypy and cpython 2.7.x performance comparison) due to its integrated tracing JIT compiler.

This release supports x86 machines on most common operating systems (Linux 32/64, Mac OS X 64, Windows 32, OpenBSD, FreeBSD), as well as newer ARM hardware (ARMv6 or ARMv7, with VFPv3) running Linux.

We also welcome developers of other dynamic languages to see what RPython can do for them.

Highlights

  • Bug Fixes
    • Revive non-SSE2 support
    • Fixes for detaching _io.Buffer*
    • On Windows, close (and flush) all open sockets on exiting
    • Drop support for ancient macOS v10.4 and before
    • Clear up contention in the garbage collector between trace-me-later and pinning
    • Issues reported with our previous release were resolved after reports from users on our issue tracker at https://bitbucket.org/pypy/pypy/issues or on IRC at #pypy.
  • New features:
    • cffi was updated to version 1.3
    • The python stdlib was updated to 2.7.10 from 2.7.9
    • vmprof now supports multiple threads and OS X
    • The translation process builds cffi import libraries for some stdlib packages, which should prevent confusion when package.py is not used
    • better support for gdb debugging
    • FreeBSD should be able to translate PyPy “out of the box” with no patches
  • Numpy:
    • Better support for record dtypes, including the align keyword
    • Implement casting and create output arrays accordingly (still missing some corner cases)
    • Support creation of unicode ndarrays
    • Better support ndarray.flags
    • Support axis argument in more functions
    • Refactor array indexing to support ellipses
    • Allow the docstrings of built-in numpy objects to be set at run-time
    • Support the buffered nditer creation keyword
  • Performance improvements:
    • Delay recursive calls to make them non-recursive
    • Skip loop unrolling if it compiles too much code
    • Tweak the heapcache
    • Add a list strategy for lists that store both floats and 32-bit integers. The latter are encoded as nonstandard NaNs. Benchmarks show that the speed of such lists is now very close to the speed of purely-int or purely-float lists.
    • Simplify implementation of ffi.gc() to avoid most weakrefs
    • Massively improve the performance of map() with more than one sequence argument
Please try it out and let us know what you think. We welcome success stories, experiments, or benchmarks, we know you are using PyPy, please tell us about it!
Cheers
The PyPy Team

Wednesday, June 17, 2015

PyPy and ijson - a guest blog post

This gem was posted in the ijson issue tracker after some discussion on #pypy, and Dav1dde kindly allowed us to repost it here:

"So, I was playing around with parsing huge JSON files (19GiB, testfile is ~520MiB) and wanted to try a sample code with PyPy, turns out, PyPy needed ~1:30-2:00 whereas CPython 2.7 needed ~13 seconds (the pure python implementation on both pythons was equivalent at ~8 minutes).

"Apparantly ctypes is really bad performance-wise, especially on PyPy. So I made a quick CFFI mockup: https://gist.github.com/Dav1dde/c509d472085f9374fc1d

Before:

CPython 2.7:
    python -m emfas.server size dumps/echoprint-dump-1.json
    11.89s user 0.36s system 98% cpu 12.390 total 

PYPY:
    python -m emfas.server size dumps/echoprint-dump-1.json
    117.19s user 2.36s system 99% cpu 1:59.95 total


After (CFFI):

CPython 2.7:
     python jsonsize.py ../dumps/echoprint-dump-1.json
     8.63s user 0.28s system 99% cpu 8.945 total 

PyPy:
     python jsonsize.py ../dumps/echoprint-dump-1.json
     4.04s user 0.34s system 99% cpu 4.392 total

"



Dav1dde goes into more detail in the issue itself, but we just want to emphasize a few significant points from this brief interchange:
  • His CFFI implementation is faster than the ctypes one even on CPython 2.7.
  • PyPy + CFFI is faster than CPython even when using C code to do the heavy parsing.
 The PyPy Team

Monday, June 1, 2015

PyPy 2.6.0 release

PyPy 2.6.0 - Cameo Charm

We’re pleased to announce PyPy 2.6.0, only two months after PyPy 2.5.1. We are particularly happy to update cffi to version 1.1, which makes the popular ctypes-alternative even easier to use, and to support the new vmprof statistical profiler.
You can download the PyPy 2.6.0 release here:
We would like to thank our donors for the continued support of the PyPy project, and for those who donate to our three sub-projects, as well as our volunteers and contributors.
Thanks also to Yury V. Zaytsev and David Wilson who recently started running nightly builds on Windows and MacOSX buildbots.
We’ve shown quite a bit of progress, but we’re slowly running out of funds. Please consider donating more, or even better convince your employer to donate, so we can finish those projects! The three sub-projects are:
  • Py3k (supporting Python 3.x): We have released a Python 3.2.5 compatible version we call PyPy3 2.4.0, and are working toward a Python 3.3 compatible version
  • STM (software transactional memory): We have released a first working version, and continue to try out new promising paths of achieving a fast multithreaded Python
  • NumPy which requires installation of our fork of upstream numpy, available on bitbucket
We would also like to encourage new people to join the project. PyPy has many layers and we need help with all of them: PyPy and RPython documentation improvements, tweaking popular modules to run on pypy, or general help with making RPython’s JIT even better. Nine new people contributed since the last release, you too could be one of them.

What is PyPy?

PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It’s fast (pypy and cpython 2.7.x performance comparison) due to its integrated tracing JIT compiler.
This release supports x86 machines on most common operating systems (Linux 32/64, Mac OS X 64, Windows, OpenBSD, FreeBSD), as well as newer ARM hardware (ARMv6 or ARMv7, with VFPv3) running Linux.
While we support 32 bit python on Windows, work on the native Windows 64 bit python is still stalling; we would welcome a volunteer to handle that. We also welcome developers with other operating systems or dynamic languages to see what RPython can do for them.

Highlights

  • Python compatibility:
    • Improve support for TLS 1.1 and 1.2
    • Windows downloads now package a pypyw.exe in addition to pypy.exe
    • Support for the PYTHONOPTIMIZE environment variable (impacting builtin’s __debug__ property)
    • Issues reported with our previous release were resolved after reports from users on our issue tracker at https://bitbucket.org/pypy/pypy/issues or on IRC at #pypy.
  • New features:
    • Add preliminary support for a new lightweight statistical profiler vmprof, which has been designed to accommodate profiling JITted code
  • Numpy:
    • Support for object dtype via a garbage collector hook
    • Support for .can_cast and .min_scalar_type as well as beginning a refactoring of the internal casting rules
    • Better support for subtypes, via the __array_interface__, __array_priority__, and __array_wrap__ methods (still a work-in-progress)
    • Better support for ndarray.flags
  • Performance improvements:
    • Slight improvement in frame sizes, improving some benchmarks
    • Internal refactoring and cleanups leading to improved JIT performance
    • Improved IO performance of zlib and bz2 modules
    • We continue to improve the JIT’s optimizations. Our benchmark suite is now over 7 times faster than cpython
Please try it out and let us know what you think. We welcome success stories, experiments, or benchmarks, we know you are using PyPy, please tell us about it!
Cheers
The PyPy Team


Thursday, May 21, 2015

CFFI 1.0.1 released

CFFI 1.0.1 final has now been released for CPython! CFFI is a (CPython and PyPy) module to interact with C code from Python.

The main news from CFFI 0.9 is the new way to build extension modules: the "out-of-line" mode, where you have a separate build script. When this script is executed, it produces the extension module. This comes with associated Setuptools support that fixes the headache of distributing your own CFFI-using packages. It also massively cuts down the import times.

Although this is a major new version, it should be fully backward-compatible: existing projects should continue to work, in what is now called the "in-line mode".

The documentation has been reorganized and split into a few pages. For more information about this new "out-of-line" mode, as well as more general information about what CFFI is and how to use it, read the Goals and proceed to the Overview.

Unlike the 1.0 beta 1 version (<- click for a motivated introduction), the final version also supports an out-of-line mode for projects using ffi.dlopen(), instead of only ffi.verify().
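
A minimal sketch of what such a dlopen()-based out-of-line build script might look like, assuming the set_source(name, None) spelling from the CFFI documentation; the module name is a placeholder, and a non-variadic function is used to steer clear of the bug noted in the update below:

# abi_build.py -- "ABI out-of-line": no C compiler is needed at runtime
import cffi

ffi = cffi.FFI()
ffi.cdef("int puts(const char *s);")
ffi.set_source("_abi_example", None)   # None selects ABI mode; emits a pure-Python module

if __name__ == "__main__":
    ffi.compile()

# later, at runtime:
#   from _abi_example import ffi
#   lib = ffi.dlopen(None)             # None opens the standard C library (POSIX)
#   lib.puts(b"hello from dlopen")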

PyPy support: PyPy needs integrated support for efficient JITting, so you cannot install a different version of CFFI on top of an existing PyPy. You need to wait for the upcoming PyPy 2.6 to use CFFI 1.0---or get a nightly build.

My thanks again to the PSF (Python Software Foundation) for their financial support!

UPDATE:

Bug with the first example "ABI out-of-line": variadic functions (like printf, ending in a "..." argument) crash. Fixed in CFFI 1.0.2.

Tuesday, May 5, 2015

CFFI 1.0 beta 1

Finally! CFFI 1.0 is almost ready. CFFI gives Python developers a convenient way to call external C libraries. Here "Python" == "CPython or PyPy", but this post is mostly about the CPython side of CFFI, as the PyPy version is not ready yet.

On CPython, you can download the version "1.0.0b1" either by looking for the cffi-1.0 branch in the repository, or by saying

pip install "cffi>=1.0.dev0"

(Until 1.0 final is ready, pip install cffi will still give you version 0.9.2.)

The main news: you can now explicitly generate and compile a CPython C extension module from a "build" script. Then in the rest of your program or library, you no longer need to import cffi at all. Instead, you simply say:

from _my_custom_module import ffi, lib

Then you use ffi and lib just like you did in your verify()-based project in CFFI 0.9.2. (The lib is what used to be the result of verify().) The details of how you use them should not have changed at all, so that the rest of your program should not need any update.

Benefits

This is a big step towards standard practices for making and distributing Python packages with C extension modules:

  • on the one hand, you need an explicit compilation step, triggered here by running the "build" script;
  • on the other hand, what you gain in return is better control over when and why the C compilation occurs, and more standard ways to write distutils- or setuptools-based setup.py files (see below).

Additionally, this completely removes one of the main drawbacks of using CFFI to interface with large C APIs: the start-up time. In some cases it could be extreme on slow machines (cases of 10-20 seconds on ARM boards occur commonly). Now, the import above is instantaneous.

In fact, none of the pure Python cffi package is needed any more at runtime (it needs only an internal extension module from CFFI, which can be installed by doing "pip install cffi-runtime" [*] if you only need that). The ffi object you get by the import above is of a completely different class written entirely in C. The two implementations might get merged in the future; for now they are independent, but give two compatible APIs. The differences are that some methods like cdef() and verify() and set_source() are omitted from the C version, because it is supposed to be a complete FFI already; and other methods like new(), which take as parameter a string describing a C type, are faster now because that string is parsed using a custom small-subset-of-C parser, written in C too.

In practice

CFFI 1.0 beta 1 was tested on CPython 2.7 and 3.3/3.4, on Linux and to some extent on Windows and OS X. Its PyPy version is not ready yet, and the only docs available so far are those below.

This is beta software, so there might be bugs and details may change. We are interested in hearing any feedback (irc.freenode.net #pypy) or bug reports.

To use the new features, create a source file that is not imported by the rest of your project, in which you place (or move) the code to build the FFI object:

# foo_build.py
import cffi
ffi = cffi.FFI()

ffi.cdef("""
    int printf(const char *format, ...);
""")

ffi.set_source("_foo", """
    #include <stdio.h>
""")   # and other arguments like libraries=[...]

if __name__ == '__main__':
    ffi.compile()

The ffi.set_source() replaces the ffi.verify() of CFFI 0.9.2. Calling it attaches the given source code to the ffi object, but this call doesn't compile or return anything by itself. It may be placed above the ffi.cdef() if you prefer. Its first argument is the name of the C extension module that will be produced.

Actual compilation (including generating the complete C sources) occurs later, in one of two places: either in ffi.compile(), shown above, or indirectly from the setup.py, shown next.

If you directly execute the file foo_build.py above, it will generate a local file _foo.c and compile it to _foo.so (or the appropriate extension, like _foo.pyd on Windows). This is the extension module that can be used in the rest of your program by saying "from _foo import ffi, lib".
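
A sketch of that runtime usage, assuming foo_build.py above has already been run; note that extra arguments to a variadic function like printf must be passed as explicit cdata objects:

# app.py -- uses the extension module produced by foo_build.py
from _foo import ffi, lib

name = ffi.new("char[]", b"world")   # varargs must be cdata objects
lib.printf(b"hello, %s!\n", name)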

Distutils

If you want to distribute your program, you write a setup.py using either distutils or setuptools. Using setuptools is generally recommended nowadays, but using distutils is possible too. We show it first:

# setup.py
from distutils.core import setup
import foo_build

setup(
    name="example",
    version="0.1",
    py_modules=["example"],
    ext_modules=[foo_build.ffi.distutils_extension()],
)

This is similar to the CFFI 0.9.2 way. It only works if cffi was installed previously, because otherwise foo_build cannot be imported. The difference is that you use ffi.distutils_extension() instead of ffi.verifier.get_extension(), because there is no longer any verifier object if you use set_source().

Setuptools

The modern way is to write setup.py files based on setuptools, which can (among lots of other things) handle dependencies. It is what you normally get with pip install, too. Here is how you'd write it:

# setup.py
from setuptools import setup

setup(
    name="example",
    version="0.1",
    py_modules=["example"],
    setup_requires=["cffi>=1.0.dev0"],
    cffi_modules=["foo_build:ffi"],
    install_requires=["cffi-runtime"],    # see [*] below
)

Note that "cffi" is mentioned on three lines here:

  • the first time is in setup_requires, which means that cffi will be locally downloaded and used for the setup.
  • the second mention is a custom cffi_modules argument. This argument is handled by cffi as soon as it is locally downloaded. It should be a list of "module:ffi" strings, where the ffi part is the name of the global variable in that module.
  • the third mention is in install_requires. It means that in order to install this example package, "cffi-runtime" must also be installed. This is (or will be) a PyPI entry that only contains a trimmed down version of CFFI, one that does not include the pure Python "cffi" package and its dependencies. None of it is needed at runtime.

[*] NOTE: The "cffi-runtime" PyPI entry is not ready yet. For now, use "cffi>=1.0.dev0" instead. Considering PyPy, which has got a built-in "_cffi_backend" module, the "cffi-runtime" package could never be upgraded there; but it would still be nice if we were able to upgrade the "cffi" pure Python package on PyPy. This might require some extra care in writing the interaction code. We need to sort it out now...

Thanks

Special thanks go to the PSF (Python Software Foundation) for their financial support, without which this work---er... it might likely have occurred anyway, but at an unknown future date :-)

(For reference, the amount I asked for (and got) is equal to one month of what a Google Summer of Code student gets, for work that will take a bit longer than one month. At least I personally am running mostly on such money, and so I want to thank the PSF again for their contribution to CFFI---and while I'm at it, thanks to all other contributors to PyPy---for making this job more than an unpaid hobby on the side :-)


Armin Rigo

Monday, March 30, 2015

PyPy-STM 2.5.1 released

PyPy-STM 2.5.1 - Mawhrin-Skel

We're pleased to announce PyPy-STM 2.5.1, codenamed Mawhrin-Skel. This is the second official release of PyPy-STM. You can download this release here (64-bit Linux only):

http://pypy.org/download.html

Documentation:

http://pypy.readthedocs.org/en/latest/stm.html

PyPy is an implementation of the Python programming language which focuses on performance. So far we've been relentlessly optimizing for the single core/process scenario. PyPy STM brings to the table a version of PyPy that does not have the infamous Global Interpreter Lock, hence can run multiple threads on multiple cores. Additionally it comes with a set of primitives that make writing multithreaded applications a lot easier, as explained below (see TransactionQueue) and in the documentation.

Internally, PyPy-STM is based on the Software Transactional Memory plug-in called stmgc-c7. This version comes with a relatively reasonable single-core overhead but scales only up to around 4 cores on some examples; the next version of the plug-in, stmgc-c8, is in development and should address that limitation (as well as reduce the overhead). These versions only support 64-bit Linux; we'd welcome someone to port the upcoming stmgc-c8 to other (64-bit) platforms.

This release passes all regular PyPy tests, except for a few special cases. In other words, you should be able to drop in PyPy-STM instead of the regular PyPy and your program should still work. See current status for more information.

This work was done by Remi Meier and Armin Rigo. Thanks to all donors for crowd-funding the STM work so far! As usual, it took longer than we would have thought. I really want to thank the people that kept making donations anyway. Your trust is greatly appreciated!


What's new?

Compared to the July 2014 release, the main addition is a way to get reports about STM conflicts. This is an essential new feature.

To understand why this is so important, consider that if you already played around with the previous release, chances are that you didn't get very far. It probably felt like a toy: on very small examples it would nicely scale, but on any larger example it would not scale at all. You didn't get any feedback about why, but the underlying reason is that, in a typical large example, there are some STM conflicts that occur all the time and that won't be immediately found just by thinking. This prevents any parallelization.

Now PyPy-STM is no longer a black box: you have a way to learn about these conflicts, fix them, and try again. The tl;dr version is to run:

    PYPYSTM=stmlog ./pypy-stm example.py
    ./print_stm_log.py stmlog

More details in the STM user guide.


Performance

The performance is now more stable than it used to be. More precisely, the best case is still "25%-40% single-core slow-down with very good scaling up to 4 threads", but the average performance seems not too far from that. There are still dark spots --- notably, the JIT is still slower to warm up, though it was improved a lot. These are documented in the current status section. Apart from that, we should not get more than 2x single-core slow-down in the worst case. Please report such cases as bugs!


TransactionQueue

As explained before, PyPy-STM is more than "just" a Python without GIL. It is a Python in which you can do minor tweaks to your existing, non-multithreaded programs and get them to use multiple cores. You identify medium- or large-sized, likely-independent parts of the code and ask PyPy-STM to run these parts in parallel. An example would be every iteration of some outermost loop over all items of a dictionary. This is done with a new API: transaction.TransactionQueue(). See help(TransactionQueue) or read more about it in the STM user guide.
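
A minimal sketch of how this API might be used, following the description above (the transaction module is only available in PyPy-STM; the dictionary and worker function are placeholders):

from transaction import TransactionQueue

mydict = dict((i, i * 2) for i in range(10000))   # placeholder data

def process_item(key, value):
    # stand-in for a medium-sized, likely-independent chunk of work
    return key + value

tq = TransactionQueue()
for key, value in mydict.items():
    tq.add(process_item, key, value)   # queue one transaction per item
tq.run()                               # execute them, potentially on several cores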

This is not a 100% mechanical change: very likely, you need to hunt for and fix "STM conflicts" that prevent parallel execution (see docs). However, at all points your program runs correctly, and you can stop the hunt when you get acceptable performance. You don't get deadlocks or corrupted state.

Thanks for reading!
Armin, Remi, Fijal

Thursday, March 26, 2015

PyPy 2.5.1 Released

PyPy 2.5.1 - Pineapple Bromeliad

We’re pleased to announce PyPy 2.5.1, Pineapple Bromeliad following on the heels of 2.5.0. You can download the PyPy 2.5.1 release here:
We would like to thank our donors for the continued support of the PyPy project, and for those who donate to our three sub-projects, as well as our volunteers and contributors. We’ve shown quite a bit of progress, but we’re slowly running out of funds. Please consider donating more, or even better convince your employer to donate, so we can finish those projects! The three sub-projects are:
  • Py3k (supporting Python 3.x): We have released a Python 3.2.5 compatible version we call PyPy3 2.4.0, and are working toward a Python 3.3 compatible version
     
  • STM (software transactional memory): We have released a first working version, and continue to try out new promising paths of achieving a fast multithreaded Python

  • NumPy which requires installation of our fork of upstream numpy, available on bitbucket
We would also like to encourage new people to join the project. PyPy has many layers and we need help with all of them: PyPy and RPython documentation improvements, tweaking popular modules to run on pypy, or general help with making RPython’s JIT even better.

What is PyPy?

PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It’s fast (pypy and cpython 2.7.x performance comparison) due to its integrated tracing JIT compiler.

This release supports x86 machines on most common operating systems (Linux 32/64, Mac OS X 64, Windows, and OpenBSD), as well as newer ARM hardware (ARMv6 or ARMv7, with VFPv3) running Linux.

While we support 32 bit python on Windows, work on the native Windows 64 bit python is still stalling; we would welcome a volunteer to handle that.

Highlights

  • The past months have seen pypy mature and grow, as RPython becomes the go-to solution for writing fast dynamic language interpreters. Our separation of RPython from the python interpreter PyPy is now much clearer in the PyPy documentation and we now have separate RPython documentation. Tell us what still isn’t clear, or even better help us improve the documentation.
  • We merged version 2.7.9 of python’s stdlib. From the python release notice:
    • The entirety of Python 3.4’s ssl module has been backported. See PEP 466 for justification.
    • HTTPS certificate validation using the system’s certificate store is now enabled by default. See PEP 476 for details.
    • SSLv3 has been disabled by default in httplib and its reverse dependencies due to the POODLE attack.
    • The ensurepip module has been backported, which provides the pip package manager in every Python 2.7 installation. See PEP 477.

  • The garbage collector now ignores parts of the stack which did not change since the last collection, another performance boost
  • errno and LastError are saved around cffi calls so things like pdb will not overwrite it
  • We continue to asymptotically approach a score of 7 times faster than cpython on our benchmark suite; we now rank 6.98 on the latest runs
Please try it out and let us know what you think. We welcome success stories, experiments, or benchmarks, we know you are using PyPy, please tell us about it!
Cheers
The PyPy Team

Friday, March 13, 2015

Pydgin: Using RPython to Generate Fast Instruction-Set Simulators

Note: This is a guest blog post by Derek Lockhart and Berkin Ilbeyi from Computer Systems Laboratory of Cornell University.

In this blog post I'd like to describe some recent work on using the RPython translation toolchain to generate fast instruction set simulators. Our open-source framework, Pydgin [a], provides a domain-specific language (DSL) embedded in Python for concisely describing instruction set architectures [b] and then uses these descriptions to generate fast, JIT-enabled simulators. Pydgin will be presented at the IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS) and in this post we provide a preview of that work. In addition, we discuss some additional progress updates that occurred after the publishing deadline and will not appear in the final paper [1].

Our area of research expertise is computer architecture, which is perhaps an unfamiliar topic for some readers of the PyPy blog. Below we provide some brief background on hardware simulation in the field of computer architecture, as well as some context as to why instruction set simulators in particular are such an important tool.

Simulators: Designing Hardware with Software

For computer architects in both academia and industry, a key step in designing new computational hardware (e.g., CPUs, GPUs, and mobile system-on-chips) is simulation [c] of the target system. While numerous models for simulation exist, three classes are particularly important in hardware design.

Functional Level models simulate the behavior of the target system. These models are useful for creating a "golden" reference which can serve as an executable specification or alternatively as an emulation platform for software development.

Cycle Level models aim to simulate both the behavior and the approximate timing of a hardware component. These models help computer architects explore design tradeoffs and quickly determine things like how big caches should be, how many functional units are needed to meet throughput targets, and how the addition of a custom accelerator block may impact total system performance.

Register-Transfer Level (RTL) models specify the behavior, timing, and resources (e.g., registers, wires, logic gates) of a hardware component. RTL models are bit-accurate hardware specifications typically written in a hardware description language (HDL) such as Verilog or VHDL. Once verified through extensive simulation, HDL specifications can be passed into synthesis and place-and-route tools to estimate area/energy/timing or to create FPGA or ASIC prototypes.

An instruction set simulator (ISS) is a special kind of functional-level model that simulates the behavior of a processor or system-on-chip (SOC). ISSs serve an important role in hardware design because they model the instruction set architecture (ISA) interface: the contractual boundary between hardware designers and software developers. ISSs allow hardware designers to quickly experiment with adding new processor instructions while also allowing software developers to build new compilers, libraries, and applications long before physical silicon is available.

Instruction-Set Simulators Must be Fast and Productive

Instruction-set simulators are more important than ever because the ISA boundary has become increasingly fluid. While Moore's law has continued to deliver larger numbers of transistors which computer architects can use to build increasingly complex chips, limits in Dennard scaling have restricted how these transistors can be used [d]. In more simple terms, thermal constraints (and energy constraints in mobile devices) have resulted in a growing interest in pervasive specialization: using custom accelerators to more efficiently perform compute intensive tasks. This is already a reality for designers of mobile SOCs who continually add new accelerator blocks and custom processor instructions in order to achieve higher performance with less energy consumption. ISSs are indispensable tools in this SOC design process for both hardware architects building the silicon and software engineers developing the software stack on top of it.

An instruction set simulator has two primary responsibilities: 1) accurately emulating the external execution behavior of the target, and 2) providing observability by accurately reproducing the target's internal state (e.g., register values, program counter, status flags) at each time step. However, other qualities critical to an effective ISS are simulation performance and designer productivity. Simulation performance is important because shorter simulation times allow developers to more quickly execute and verify large software applications. Designer productivity is important because it allows hardware architects to easily experiment with adding new instructions and estimate their impact on application performance.

To improve simulation performance, high-performance ISSs use dynamic binary translation (DBT) as a mechanism to translate frequently visited blocks of target instructions into optimized sequences of host instructions. To improve designer productivity, many design toolchains automatically generate ISSs from an architectural description language (ADL): a special domain-specific language for succinctly specifying instruction encodings and instruction semantics of an ISA. Very few existing systems have managed to encapsulate the design complexity of DBT engines such that high-performance, DBT-accelerated ISSs could be automatically generated from ADLs [e]. Unfortunately, tools which have done so are either proprietary software or leave much to be desired in terms of performance or productivity.

Why RPython?

Our research group learned of the RPython translation toolchain through our experiences with PyPy, which we had used in conjunction with our Python hardware modeling framework to achieve significant improvements in simulation performance [2]. We realized that the RPython translation toolchain could potentially be adapted to create fast instruction set simulators since the process of interpreting executables comprised of binary instructions shared many similarities with the process of interpreting bytecodes in a dynamic-language VM. In addition, we were inspired by PyPy's meta-tracing approach to JIT-optimizing VM design which effectively separates the process of specifying a language interpreter from the optimization machinery needed to achieve good performance.

Existing ADL-driven ISS generators have tended to use domain-specific languages that require custom parsers or verbose C-based syntax that distracts from the instruction specification. Creating an embedded-ADL within Python provides several benefits over these existing approaches including a gentler learning curve for new users, access to better debugging tools, and easier maintenance and extension by avoiding a custom parser. Additionally, we have found that the ability to directly execute Pydgin ISA descriptions in a standard Python interpreter such as CPython or PyPy significantly helps debugging and testing during initial ISA exploration. Python's concise, pseudocode-like syntax also manages to map quite closely to the pseudocode specifications provided by many ISA manuals [f].

The Pydgin embedded-ADL

Defining a new ISA in the Pydgin embedded-ADL requires four primary pieces of information: the architectural state (e.g. register file, program counter, control registers), the bit encodings of each instruction, the instruction fields, and the semantic definitions for each instruction. Pydgin aims to make this process as painless as possible by providing helper classes and functions where possible.
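
As a rough illustration of the first two pieces, the sketch below shows what a user-defined architectural state class and instruction field accessors might look like for ARMv5. These are simplified, hypothetical classes (the bit positions follow the ARM data-processing instruction format), not Pydgin's actual implementations.

class ArmState( object ):
  def __init__( self, memory, entry_point ):
    self.rf  = [0] * 16                      # r0-r15
    self.pc  = entry_point                   # program counter
    self.mem = memory                        # target memory
    self.N = self.Z = self.C = self.V = 0    # condition flags

class Instruction( object ):
  # field accessors over a raw 32-bit ARM instruction word
  def __init__( self, bits ):
    self.bits = bits
  def cond( self ): return (self.bits >> 28) & 0xF   # bits [31:28]
  def rn( self ):   return (self.bits >> 16) & 0xF   # bits [19:16]
  def rd( self ):   return (self.bits >> 12) & 0xF   # bits [15:12]
  def S( self ):    return (self.bits >> 20) & 0x1   # bit  [20]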

For example, below we provide a truncated example of the ARMv5 instruction encoding table. Pydgin maintains encodings of all instructions in a centralized encodings data structure for easy maintenance and quick lookup. The user-provided instruction names and bit encodings are used to automatically generate decoders for the simulator. Unlike many ADLs, Pydgin does not require that the user explicitly specify instruction types or mask bits for field matching because the Pydgin decoder generator can automatically infer decoder fields from the encoding table.

encodings = [
  ['adc',      'xxxx00x0101xxxxxxxxxxxxxxxxxxxxx'],
  ['add',      'xxxx00x0100xxxxxxxxxxxxxxxxxxxxx'],
  ['and',      'xxxx00x0000xxxxxxxxxxxxxxxxxxxxx'],
  ['b',        'xxxx1010xxxxxxxxxxxxxxxxxxxxxxxx'],
  ['bl',       'xxxx1011xxxxxxxxxxxxxxxxxxxxxxxx'],
  ['bic',      'xxxx00x1110xxxxxxxxxxxxxxxxxxxxx'],
  ['bkpt',     '111000010010xxxxxxxxxxxx0111xxxx'],
  ['blx1',     '1111101xxxxxxxxxxxxxxxxxxxxxxxxx'],
  ['blx2',     'xxxx00010010xxxxxxxxxxxx0011xxxx'],
  # ...
  ['teq',      'xxxx00x10011xxxxxxxxxxxxxxxxxxxx'],
  ['tst',      'xxxx00x10001xxxxxxxxxxxxxxxxxxxx'],
]
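
Conceptually, each bit-pattern string reduces to a (mask, match) pair: 'x' bits are don't-cares, while fixed bits contribute to both the mask and the expected value. The sketch below shows one straightforward way a decoder could be derived from the table; Pydgin's actual decoder generator is more sophisticated, so treat this only as an illustration of the idea.

def pattern_to_mask_match( pattern ):
  # 'x' bits are don't-cares; fixed bits set the mask and the expected value
  mask = match = 0
  for bit in pattern:                  # most-significant bit first
    mask  <<= 1
    match <<= 1
    if bit != 'x':
      mask  |= 1
      match |= int( bit )
  return mask, match

def make_decoder( encodings ):
  table = [ (name,) + pattern_to_mask_match( bits ) for name, bits in encodings ]
  def decode( word ):
    for name, mask, match in table:
      if word & mask == match:
        return name
    raise ValueError( 'unknown instruction: %08x' % word )
  return decode

For example, decode( 0xEA000000 ), an ARM b instruction with condition AL and a zero offset, would fall through the data-processing entries and match the 'b' pattern above.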

A major goal of Pydgin was to ensure that instruction semantic definitions map to ISA manual specifications as closely as possible. The code below shows one such definition for the ARMv5 add instruction. A user-defined Instruction class (a simplified, hypothetical sketch of which appears above) specifies field names that can be used to conveniently access bit positions within an instruction (e.g. rd, rn, S). Additionally, users can choose to define their own helper functions, such as the condition_passed function, to create more concise syntax that better matches the ISA manual.

def execute_add( s, inst ):
  if condition_passed( s, inst.cond() ):
    a    = s.rf[ inst.rn() ]
    b, _ = shifter_operand( s, inst )
    result = a + b
    s.rf[ inst.rd() ] = trim_32( result )

    if inst.S():
      if inst.rd() == 15:
        raise FatalError('Writing SPSR not implemented!')
      s.N = (result >> 31)&1
      s.Z = trim_32( result ) == 0
      s.C = carry_from( result )
      s.V = overflow_from_add( a, b, result )

    if inst.rd() == 15:
      return

  s.rf[PC] = s.fetch_pc() + 4

Compared to the ARM ISA Reference Manual pseudocode shown below, the Pydgin instruction definition is a fairly close match. Pydgin's definitions could certainly be made more concise by using a custom DSL; however, this would lose many of the debugging benefits afforded by a well-supported language such as Python and would additionally require a custom parser that would likely need modification for each new ISA.

if ConditionPassed(cond) then
   Rd = Rn + shifter_operand
   if S == 1 and Rd == R15 then
     if CurrentModeHasSPSR() then CPSR = SPSR
     else UNPREDICTABLE
   else if S == 1 then
     N Flag = Rd[31]
     Z Flag = if Rd == 0 then 1 else 0
     C Flag = CarryFrom(Rn + shifter_operand)
     V Flag = OverflowFrom(Rn + shifter_operand)
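
The helper functions referenced in the Pydgin definition are left to the user. Plausible implementations, written here to follow the flag semantics in the manual excerpt above (and not necessarily identical to Pydgin's own helpers), might look like this:

def trim_32( value ):
  # truncate to a 32-bit result
  return value & 0xFFFFFFFF

def carry_from( result ):
  # carry out of an unsigned 32-bit addition
  return int( result > 0xFFFFFFFF )

def overflow_from_add( a, b, result ):
  # signed overflow: the operands share a sign that differs from the result's
  sign_a, sign_b, sign_r = (a >> 31) & 1, (b >> 31) & 1, (result >> 31) & 1
  return int( sign_a == sign_b and sign_a != sign_r )

def condition_passed( s, cond ):
  # a few of the 16 ARM condition codes; 0b1110 (AL) means "always"
  if cond == 0b1110: return True          # AL
  if cond == 0b0000: return s.Z == 1      # EQ
  if cond == 0b0001: return s.Z == 0      # NE
  if cond == 0b1010: return s.N == s.V    # GE
  # ... remaining condition codes elided
  return False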

Creating an ISS that can run real applications is a rather complex task, even for a bare-metal simulator with no operating system such as Pydgin. Each system call in the C library must be properly implemented, and bootstrapping code must be provided to set up the program stack and architectural state. This is a very tedious and error-prone process which Pydgin tries to encapsulate so that it remains as transparent to the end user as possible. In future versions of Pydgin we hope to make bootstrapping more painless and support a wider variety of C libraries.
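
As a small, concrete illustration of the kind of work involved, the sketch below proxies a single newlib-style write system call to the host OS. The syscall number, the register conventions, and the read_byte helper are hypothetical simplifications used only for this example; they are not Pydgin's actual implementation.

import os

SYS_WRITE = 4   # hypothetical syscall number; real numbering is ABI-specific

def do_syscall( s ):
  # hypothetical convention: syscall number in r7, arguments in r0-r2
  num = s.rf[7]
  if num == SYS_WRITE:
    fd, buf, nbytes = s.rf[0], s.rf[1], s.rf[2]
    data = ''.join( chr( s.mem.read_byte( buf + i ) ) for i in range( nbytes ) )
    s.rf[0] = os.write( fd, data )   # return value flows back to the target
  else:
    raise NotImplementedError( 'syscall %d not implemented' % num )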

Pydgin Performance

In order to achieve good simulation performance from Pydgin ISSs, significant work went into adding appropriate JIT annotations to the Pydgin library components. These optimization hints, which allow the JIT generated by the RPython translation toolchain to produce more efficient code, have been specifically selected for the unique properties of ISSs. For the sake of brevity, we do not discuss the exact optimizations here, but a detailed discussion can be found in the ISPASS paper [1]. In the paper we evaluate two ISSs, one for a simplified MIPS ISA and another for the ARMv5 ISA, whereas below we only discuss results for the ARMv5 ISS.
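
As a rough sketch of what such an annotation looks like (Pydgin's actual hints are more involved and are described in the ISPASS paper [1]), the interpreter loop declares the program counter as a 'green' variable, since it identifies a fixed position in the target program and keys the traces, while the mutable simulator state is 'red':

from rpython.rlib.jit import JitDriver

# simplified sketch, not Pydgin's actual annotations
jitdriver = JitDriver( greens=['pc'], reds=['state'] )

def run( state ):
  # decode/execute come from the ISA description, as in the sketches above
  pc = state.fetch_pc()
  while state.running:
    jitdriver.jit_merge_point( pc=pc, state=state )
    inst = decode( state.mem[ pc >> 2 ] )
    execute( state, inst )      # may update the program counter
    pc = state.fetch_pc()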

The performance of Pydgin-generated ARMv5 ISSs was compared against several reference ISSs: the gem5 ARM atomic simulator (gem5), interpretive and JIT-enabled versions of SimIt-ARM (simit-nojit and simit-jit), and QEMU. Atomic models from the gem5 simulator were chosen for comparison due to their wide usage among computer architects [g]. SimIt-ARM was selected because it is currently the highest-performance ADL-generated DBT-ISS publicly available. QEMU has long been held as the gold standard for DBT simulators due to its extremely high performance; however, QEMU is generally intended for use as an emulator rather than a simulator [c] and therefore achieves its excellent performance at the cost of observability. Unlike QEMU, all other simulators in our study faithfully track architectural state at an instruction level rather than at a block level. Pydgin ISSs were generated with and without JITs using the RPython translation toolchain in order to help quantify the performance benefit of the meta-tracing JIT.

The figure below shows the performance of each ISS executing applications from the SPEC CINT2006 benchmark suite [h]. Benchmarks were run to completion on the high-performance DBT-ISSs (simit-jit, pydgin-jit, and QEMU), but were terminated after only 10 billion simulated instructions for the non-JITed interpretive ISSs (these would require many hours, in some cases days, to run to completion). Simulation performance is measured in MIPS [i] and plotted on a log scale due to the wide variance in performance. The WHMEAN group summarizes each ISS's performance across all benchmarks using the weighted harmonic mean.

A few points to take away from these results:

  • ISSs without JITs (gem5, simit-nojit, and pydgin-nojit) demonstrate relatively consistent performance across applications, whereas ISSs with JITs (simit-jit, pydgin-jit, and QEMU) demonstrate much greater performance variability from application-to-application.
  • The gem5 atomic model demonstrates particularly miserable performance, only 2-3 MIPS!
  • QEMU lives up to its reputation as a gold-standard for simulator performance, leading the pack on nearly every benchmark and reaching speeds of 240-1120 MIPS.
  • pydgin-jit is able to outperform simit-jit on four of the applications, including considerable performance improvements of 1.44–1.52× for the applications 456.hmmer, 462.libquantum, and 471.omnetpp (managing to even outperform QEMU on 471.omnetpp).
  • simit-jit is able to obtain much more consistent performance (230-459 MIPS across all applications) than pydgin-jit (9.6-659 MIPS). This is due to simit-jit's page-based approach to JIT optimization compared to pydgin-jit's tracing-based approach.
  • 464.h264ref displays particularly bad pathological behavior in Pydgin’s tracing JIT and is the only application to perform worse on pydgin-jit than pydgin-nojit (9.6 MIPS vs. 21 MIPS).

The pathological behavior demonstrated by 464.h264ref was of particular concern because it caused pydgin-jit to perform even worse than having no JIT at all. RPython JIT logs indicated that the reason for this performance degradation was a large number of tracing aborts due to JIT traces growing too long. However, time limitations before the publication deadline prevented us from investigating this issue thoroughly.

Since the deadline we've applied some minor bug fixes and made some small improvements in the memory representation. More importantly, we've addressed the performance degradation in 464.h264ref by increasing trace lengths for the JIT. Below we show how the performance of 464.h264ref changes as the trace_limit parameter exposed by the RPython JIT is varied from the default size of 6000 operations.

By quadrupling the trace limit we achieve an 11x performance improvement in 464.h264ref. The larger trace limit allows the JIT to optimize long code paths that were previously triggering trace aborts, greatly helping amortize the costs of tracing. Note that arbitrarily increasing this limit can potentially hurt performance if longer traces are not able to detect optimizable code sequences.
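
For reference, RPython exposes JIT parameters that an RPython target can set itself; a minimal sketch of raising the trace limit to the quadrupled value is shown below, using the hypothetical jitdriver from the earlier sketch. How such a value is plumbed through, e.g. from a command-line flag of the translated simulator, is up to the target program.

from rpython.rlib import jit

# set the parameter directly inside the RPython program...
jit.set_param( jitdriver, 'trace_limit', 24000 )

# ...or parse a user-supplied string, e.g. one taken from a --jit flag
jit.set_user_param( jitdriver, 'trace_limit=24000' )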

After performing similar experiments across the applications in the SPEC CINT2006 benchmark suite, we settled on a trace limit of 400,000 operations. In the figure below we show how the updated Pydgin ISS (pydgin-400K) improves performance across all benchmarks and fixes the performance degradation previously seen in 464.h264ref. Note that the non-JITted simulators have been removed for clarity, and simulation performance is now plotted on a linear scale to more clearly distinguish the performance gap between each ISS.

With these improvements, we are now able to beat simit-jit on all but two benchmarks. In future work we hope to further close the gap with QEMU as well.

Conclusions and Future Work

Pydgin demonstrates that the impressive work put into the RPython translation toolchain, designed to simplify the process of building fast dynamic-language VMs, can also be leveraged to build fast instruction set simulators. Our prototype ARMv5 ISS shows that Pydgin can generate ISSs with performance competitive to SimIt-ARM while also providing a more productive development experience: RPython allowed us to develop Pydgin with only four person-months of work. Another significant benefit of the Pydgin approach is that any performance improvements applied to the RPython translation toolchain immediately benefit Pydgin ISSs after a simple software download and retranslation. This allows Pydgin to track the continual advances in JIT technology introduced by the PyPy development team.

Pydgin is very much a work in progress. There are many features we would like to add, including:

  • more concise syntax for accessing arbitrary instruction bits
  • support for other C libraries such as glibc, uClibc, and musl (we currently only support binaries compiled with newlib)
  • support for self-modifying code
  • features for more productive debugging of target applications
  • ISS descriptions for other ISAs such as RISC-V, ARMv8, and x86
  • automatic generation of compilers and toolchains from Pydgin descriptions

In addition, we think there are opportunities for even greater performance improvements with more advanced techniques such as:

  • automatic generation of optimized instruction decoders
  • optimizations for floating-point intensive applications
  • multiple tracing-JITs for parallel simulation of multicore SOCs
  • a parallel JIT compilation engine as proposed by Böhm et al. [3]

We hope that Pydgin can be of use to others, so if you try it out please let us know what you think. Feel free to contact us if you find any of the above development projects interesting, or simply fork the project on GitHub and hack away!

-- Derek Lockhart and Berkin Ilbeyi

Acknowledgements

We would like to sincerely thank Carl Friedrich Bolz and Maciej Fijalkowski for their feedback on the Pydgin publication and their guidance on improving the JIT performance of our simulators. We would also like to thank the whole PyPy team for their incredible work on PyPy and the RPython translation toolchain. Finally, thank you to our research advisor, Prof. Christopher Batten, and the sponsors of this work, which include the National Science Foundation, the Defense Advanced Research Projects Agency, and Intel Corporation.

Footnotes

[a]Pydgin loosely stands for [Py]thon [D]SL for [G]enerating [In]struction set simulators and is pronounced the same as “pigeon”. The name is inspired by the word “pidgin” which is a grammatically simplified form of language and captures the intent of the Pydgin embedded-ADL. https://github.com/cornell-brg/pydgin
[b]Popular instruction set architectures (ISAs) include MIPS, ARM, x86, and more recently RISC-V.
[c]For a good discussion of simulators vs. emulators, please see the following post on StackOverflow: http://stackoverflow.com/questions/1584617/simulator-or-emulator-what-is-the-difference
[d]http://en.wikipedia.org/wiki/Dark_silicon
[e]Please see the Pydgin paper for a more detailed discussion of prior work.
[f]For more examples of Pydgin ISA specifications, please see the ISPASS paper [1] or the Pydgin source code on GitHub. Pydgin instruction definitions for a simple MIPS-inspired ISA and for a simplified ARMv5 ISA can both be found in the repository.

[g]gem5 is a cycle-level simulation framework that contains both functional-level (atomic) and cycle-level processor models. Although primarily used for detailed, cycle-approximate processor simulation, gem5's atomic model is a popular tool for many ISS tasks.

[h]All performance measurements were taken on an unloaded server-class machine.
[i]Millions of instructions per second.

References

[1]Derek Lockhart, Berkin Ilbeyi, and Christopher Batten. "Pydgin: Generating Fast Instruction Set Simulators from Simple Architecture Descriptions with Meta-Tracing JIT Compilers." IEEE Int'l Symp. on Performance Analysis of Systems and Software (ISPASS), Mar. 2015.

[2]Derek Lockhart, Gary Zibrat, and Christopher Batten. "PyMTL: A Unified Framework for Vertically Integrated Computer Architecture Research." 47th ACM/IEEE Int'l Symp. on Microarchitecture (MICRO-47), Dec. 2014.

[3]I. Böhm, B. Franke, and N. Topham. "Generalized Just-In-Time Trace Compilation Using a Parallel Task Farm in a Dynamic Binary Translator." ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), Jun. 2011.