Each codec implements a very simple speed test to gauge the throughput that can be expected from it. For now, this is not a very formal process.
All tests were run on the same machine, a 2.1 GHz MacBook running OS X. The tests appear to run considerably faster on Linux.
There are two tests, each implemented as a codec-specific speed-test environment. The tests are run by pairing that environment with the test_speed_experiment and test_1_agent from the C/C++ codec.
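The measurement itself amounts to timing a fixed number of agent-environment steps. A minimal sketch of that idea in Python (the step callable here is a hypothetical stand-in for one step through a codec, not the actual test_speed_experiment harness):

```python
import time

def measure_throughput(do_step, num_steps):
    """Time num_steps calls to do_step and return steps per second."""
    start = time.perf_counter()
    for _ in range(num_steps):
        do_step()
    elapsed = time.perf_counter() - start
    return num_steps / elapsed

# Example with a trivial stand-in step that allocates a small observation,
# roughly in the spirit of Test 2 below.
steps_per_second = measure_throughput(lambda: ([0] * 5, [0.0] * 5), 5000)
```

The real tests measure the same quantity, but with the step routed through the codec's network protocol rather than a local function call.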
Test 1 (Big Observations)
This test is not a typical load test; it is meant to simulate performance on challenge problems where observations number in the hundreds of thousands of components, such as a camera image.
Length: 200 steps
Observation size: 50,000 ints and 50,000 doubles, allocated on each step
Throughput: 24 steps per second (Python) to 38 steps per second (C/C++/Matlab)
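Assuming 4-byte ints and 8-byte doubles (the usual C sizes on this platform), each Test 1 observation carries roughly 600 KB, so the quoted rates correspond to tens of megabytes of observation data per second:

```python
# Per-step payload for Test 1: 50,000 ints (4 bytes) + 50,000 doubles (8 bytes).
payload_bytes = 50_000 * 4 + 50_000 * 8   # 600,000 bytes per step

# Approximate observation data rates at the measured throughputs.
python_rate = 24 * payload_bytes   # ~14.4 MB/s (Python codec)
c_rate = 38 * payload_bytes        # ~22.8 MB/s (C/C++/Matlab codecs)
```

At these sizes the test is dominated by serializing and moving the observation data, not by per-step protocol overhead.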
Test 2 (Typical Observations)
This is a typical load test, with small observations.
Length: 5000 steps
Observation size: 5 ints and 5 doubles, allocated on each step
Throughput: 556 steps per second (Matlab) to 5000 steps per second (C/C++/Java)
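Assuming the same 4-byte ints and 8-byte doubles, a Test 2 observation is only 60 bytes, so even at 5000 steps per second the data rate is about 300 KB/s. The spread between codecs here therefore reflects per-step overhead rather than bandwidth:

```python
# Per-step payload for Test 2: 5 ints (4 bytes) + 5 doubles (8 bytes).
payload_bytes = 5 * 4 + 5 * 8      # 60 bytes per step

# Approximate observation data rates at the measured throughputs.
matlab_rate = 556 * payload_bytes  # ~33 KB/s (Matlab codec)
c_rate = 5000 * payload_bytes      # 300,000 bytes/s (C/C++/Java codecs)
```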