# Web Framework Performance Comparison

This project is an attempt to provide representative and objective performance measures across a wide field of web application frameworks. With much help from the community, we now have very broad coverage and are happy to broaden it further with contributions. The project presently includes frameworks in many languages, including Go, Python, Java, Ruby, PHP, Clojure, Groovy, JavaScript, Erlang, Haskell, Scala, Lua, and C. The current tests exercise the frameworks' JSON serialization and object-relational mapper (ORM) data access. Future tests will exercise server-side template libraries and other computation.

Read more and see the results of our tests on Amazon EC2 and physical hardware at http://www.techempower.com/benchmarks/

Join in the conversation at our Google Group: https://groups.google.com/forum/?fromgroups=#!forum/framework-benchmarks

## Running the test suite

We ran our tests using two dedicated i7 2600k machines as well as two EC2 m1.large instances. Below you will find instructions on how to replicate our tests using either EC2 or your own dedicated machines.

### EC2 Instructions

#### 1. Create EC2 Instances

Create two EC2 instances running Ubuntu Server 12.04.1 LTS 64-bit. We tested on m1.large instances, but feel free to experiment with different configurations. Give the instance that will act as the application server more than the default 8GB of disk capacity (we used 20GB).

##### Security Group

When prompted to create a security group for the instances, here are the ports you'll need to open:

* 22 (SSH)
* 8080 (most of the tests run on 8080)
* 3306 (MySQL)
* 5432 (PostgreSQL)
* 9000 (Play Framework)
* 27017 (MongoDB)
* 3000 (yesod)
* 8000 (snap)
* 16969 (cpoll)
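
If you prefer to script this step, the sketch below opens the same ports with the boto library; the region, the group name, and the wide-open 0.0.0.0/0 CIDR are illustrative placeholders, not project conventions:

```python
# Sketch: open the benchmark ports in an EC2 security group with boto.
# Assumes AWS credentials are already configured for boto; the region
# and group name are arbitrary choices for this example.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")
sg = conn.create_security_group("framework-benchmarks", "FrameworkBenchmarks ports")

# 22=SSH, 8080=most tests, 3306=MySQL, 5432=PostgreSQL, 9000=Play,
# 27017=MongoDB, 3000=yesod, 8000=snap, 16969=cpoll
for port in (22, 8080, 3306, 5432, 9000, 27017, 3000, 8000, 16969):
    sg.authorize(ip_protocol="tcp", from_port=port, to_port=port, cidr_ip="0.0.0.0/0")
```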

#### 2. Setting up the servers

To coordinate the tests via scripting, the servers need to be able to work together. So once the instances are running, the first thing you'll want to do is copy your ssh key to the application server instance so that you can ssh between the two machines:

```bash
sftp -i path-to-pem-file ubuntu@server-instance-ip
put path-to-pem-file .ssh/
exit
```

Now ssh into the server instance and clone the latest from this repository (the scripts we use to run the tests expect that you'll clone the repository into your home directory):

```bash
ssh -i path-to-pem-file ubuntu@server-instance-ip
yes | sudo apt-get install git-core
git clone https://github.com/TechEmpower/FrameworkBenchmarks.git
cd FrameworkBenchmarks
```

Next, we're going to set up the servers with all the necessary software:

```bash
./run-tests.py -s server-private-ip -c client-private-ip -i path-to-pem --install-software --list-tests
source ~/.bash_profile
# For your first time through the tests, set the ulimit for open files
ulimit -n 8192
# Most software is installed automatically by the script, but running the mongo command below from
# the install script was causing some errors. For now this needs to be run manually.
cd installs/jruby-rack && rvm jruby-1.7.3 do jruby -S bundle exec rake clean gem SKIP_SPECS=true
cd target && rvm jruby-1.7.3 do gem install jruby-rack-1.2.0.SNAPSHOT.gem
cd ../../..
cd installs && curl -sS https://getcomposer.org/installer | php -- --install-dir=bin
cd ..
sudo apt-get remove --purge openjdk-6-jre openjdk-6-jre-headless
mongo --host client-private-ip < config/create.js
```

Assuming the above finished without error, we're ready to start the test suite:

```bash
nohup ./run-tests.py -s server-private-ip -c client-private-ip -i path-to-pem --max-threads number-of-cores &
```

For the number-of-cores parameter, you will need to know your application server's core count. For example, Amazon EC2 large instances have 2 cores.
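
If you're not sure of the core count, you can check it on the application server itself, for example with the same Python interpreter the test scripts use:

```python
# Print the number of cores available on this machine
import multiprocessing
print(multiprocessing.cpu_count())
```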

This script will run the full set of tests. Results of all the tests will be output to ~/FrameworkBenchmarks/results/ec2/timestamp. If you use a different configuration than two m1.large instances, please use the --name option to name the results appropriately:

```bash
nohup ./run-tests.py -s server-private-ip -c client-private-ip -i path-to-pem --max-threads cores --name ec2-servertype-clienttype &
```

So if you were running an m1.large and an m1.medium, it would look like this:

```bash
nohup ./run-tests.py -s server-private-ip -c client-private-ip -i path-to-pem --max-threads cores --name ec2-m1.large-m1.medium &
```

This will allow us to differentiate results.

Be aware that on Large instances, if you include the slower frameworks (and they are included by default), the total runtime of a full suite of tests can be measured in days, not just hours. The EC2 bill isn't going to break the bank, but it's also not going to be chump change.

### Dedicated Hardware Instructions

If you have two servers or workstations lying around, then you can install and run the tests on physical hardware. Please be aware that these setup instructions can overwrite software and settings; it's best to follow these instructions on clean hardware. We assume that both machines are running Ubuntu Server 12.04 64-bit.

#### 1. Prerequisites

Before you get started, there are a couple of steps you can take to make running the tests easier on yourself. Since the tests can run for several hours, it helps to set everything up so that once the tests are running, you can leave the machines unattended and don't need to be around to enter ssh or sudo passwords.

1. Set up an ssh key for the client machine
2. Edit your sudoers file so that you do not need to enter your password for sudo access (see the example below)
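
For example, assuming the benchmark user is `ubuntu`, a sudoers entry like the following (added with `visudo`) removes the password prompt:

```
ubuntu ALL=(ALL) NOPASSWD: ALL
```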

#### 2. Setting up the servers

As it currently stands, the script that runs the tests makes some assumptions about where the code is placed: we assume that the FrameworkBenchmarks repository is located in your home directory.

Check out the latest from GitHub:

```bash
cd ~
git clone https://github.com/TechEmpower/FrameworkBenchmarks.git
cd FrameworkBenchmarks
```

Next, we're going to set up the servers with all the necessary software:

```bash
./run-tests.py -s server-ip -c client-ip -i path-to-ssh-key --install-software --list-tests
source ~/.bash_profile
# For your first time through the tests, set the ulimit for open files
ulimit -n 8192
# Most software is installed automatically by the script, but running the mongo command below from
# the install script was causing some errors. For now this needs to be run manually.
cd installs/jruby-rack && rvm jruby-1.7.3 do jruby -S bundle exec rake clean gem SKIP_SPECS=true
cd target && rvm jruby-1.7.3 do gem install jruby-rack-1.2.0.SNAPSHOT.gem
cd ../../..
cd installs && curl -sS https://getcomposer.org/installer | php -- --install-dir=bin
cd ..
sudo apt-get remove --purge openjdk-6-jre openjdk-6-jre-headless
mongo --host client-ip < config/create.js
```

Assuming this finished without error, we're ready to start the test suite:

```bash
nohup ./run-tests.py -s server-ip -c client-ip -i path-to-ssh-key --max-threads cores --name unique-machine-name &
```

This will run the full set of tests. Results of all the tests will be written to ~/FrameworkBenchmarks/results/unique-machine-name/timestamp.

### Windows Instructions

Generously provided by @pdonald

Server installation scripts for Windows Server 2012 R2 on Amazon EC2.

Instructions:

* Create an instance from the Microsoft Windows Server 2012 Base image on Amazon EC2
* Connect to it via Remote Desktop
* Copy installer-bootstrap.ps1 from this repo to the server (for files, CTRL-C + CTRL-V works alright)
* Copy your client private key too while you're at it
* Right-click the installer script and select Run with PowerShell
* It will ask a question; just hit Enter
* It will install git and then launch installer.ps1 from the repo, which will install everything else
* Installation shouldn't take more than 5 to 10 minutes
* Afterwards you have a working console: try python, git, ssh, curl, node, etc. Everything works, plus PowerShell goodies

The client/database machine is still assumed to be a Linux box; you can install just the client software via:

```bash
python run-tests.py -s server-ip -c client-ip -i "C:\Users\Administrator\Desktop\client.key" --install-software --install client --list-tests
```

Now you can run tests:

```bash
python run-tests.py -s server-ip -c client-ip -i "C:\Users\Administrator\Desktop\client.key" --max-threads 2 --duration 30 --sleep 5 --name win --test aspnet --type all
```

## Result Files

After a test run, the directory ~/FrameworkBenchmarks/results/machine-name/timestamp will contain all the result files. This folder holds four files: three CSV files, one for each of the test types (json, db, query), and a single results.json file that contains all the results as well as some additional information. The results.json file is what we use to drive our blog post, and may or may not be useful to you. There are also three subdirectories, one for each of the test types (json, db, query); each of these directories contains the raw weighttp results for each framework.
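
An illustrative layout (the exact contents depend on which tests ran):

```
results/machine-name/timestamp/
  json.csv
  db.csv
  query.csv
  results.json
  json/    (raw weighttp output, one set per framework)
  db/
  query/
```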

## Benchmarking a Single Test

If you are making changes to any of the tests, or you simply want to verify a single test, you can run the script with the --test flag. For example, if you only wanted to run the JRuby tests:

```bash
nohup ./run-tests.py -s server-ip -c client-ip -i path-to-ssh-key --max-threads cores --name unique-machine-name --test rack-jruby sinatra-jruby rails-jruby
```

## Updating Tests

We hope that the community will help us in making these tests better, so if you'd like to make any changes to the tests we currently have, here are some things to keep in mind.

### Updating Dependencies

If you're updating a dependency of a framework that uses a dependency management system (Bundler, npm, etc.), please be specific with the version number that you are updating to.

Also, if you do change the dependency of any test, please update the README file for that test to reflect that change; we want to keep the README files as up to date as possible.

### Updating Software

If you would like to update any of the software used, again, please be as specific as possible. While we still install some software via apt-get without specifying a version, we would like to have as much control over the versions as possible.

The main file that installs all the software is installer.py. It's broken up into two sections: server software and client software.

Additionally, it may be necessary to update the setup.py file in the framework's directory to use this new version.

If you update any software, please update the README files of any tests that use that software.

## Adding Frameworks

When adding a new framework or new test to an existing framework, please follow these steps:

* Update/add benchmark_config
* Update/add setup file
* When creating a database test, please use the MySQL table hello_world.World or the MongoDB collection hello_world.world

There are three different tests that we currently run:

* JSON Response
* Database (single query)
* Database (multiple queries)

The single query database test can be treated as a special case of the multiple query test with the query-count parameter set to 1.

### JSON Response

This test must follow these conventions:

* The message object should be instantiated as a new object for each request.
* The test should use a JSON serializer to render the newly instantiated object to JSON.
* Set the response Content-Type to application/json.
* The response should be `{"message": "Hello, World!"}`.
* White space in the response does not matter.

Pseudo-code:

```
obj = { message : "Hello, World!" }
render json.encode(obj)
```
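
As a concrete illustration, here is a minimal WSGI handler in Python that follows these conventions; it is a sketch for orientation, not one of the project's test implementations:

```python
# Sketch of the JSON test as a minimal WSGI app: a new object is
# instantiated per request, serialized with a JSON library, and
# returned with Content-Type application/json.
import json
from wsgiref.simple_server import make_server

def app(environ, start_response):
    obj = {"message": "Hello, World!"}  # new object for each request
    body = json.dumps(obj)
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()
```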

### Database (single query)

This test will:

* Access a database table or collection named "World" that is known to contain 10,000 rows/entries.
* Query for a single row from the table or collection using a randomly generated id (the ids range from 1 to 10,000).
* Set the response Content-Type to application/json.
* Serialize the row to JSON and send the resulting string as the response.

By convention, if the test does not use an ORM, and instead uses the raw database connectivity provided by the platform (e.g., JDBC), we append a "-raw" to the test name in the benchmark_config file. For example, "php-raw".

Pseudo-code:

```
random_id = random(1, 10000)
world = World.find(random_id)
render json.encode(world)
```
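
A Python sketch of the same flow, assuming a MySQL `World` table with `id` and `randomNumber` columns and the MySQLdb driver (the host and credentials are placeholders):

```python
# Sketch of the single-query test against MySQL; not a project test.
import json
import random
import MySQLdb

conn = MySQLdb.connect(host="database-host", user="dbuser",
                       passwd="dbpass", db="hello_world")

def single_query():
    random_id = random.randint(1, 10000)  # ids range from 1 to 10,000
    cursor = conn.cursor()
    cursor.execute("SELECT id, randomNumber FROM World WHERE id = %s", (random_id,))
    row = cursor.fetchone()
    # Serialize the row to JSON; send with Content-Type application/json
    return json.dumps({"id": row[0], "randomNumber": row[1]})
```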

### Database (multiple queries)

This test is very similar to the single query test, and in some cases it will be implemented using the same code. A URL parameter is made available to specify the number of queries to run per request. The response is a list of objects resulting from the queries for random rows.

Pseudo-code:

```
number_of_queries = get("queries")
worlds = []
for i = 0; i < number_of_queries; i++
    random_id = random(1, 10000)
    worlds[i] = World.find(random_id)
render json.encode(worlds)
```
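
Continuing the sketch above (same assumed `World` schema and driver), a handler would read the query count from the URL parameter and repeat the lookup:

```python
# Sketch of the multiple-query test; `queries` is the value of the
# URL parameter, e.g. /db?queries=10.
import json
import random
import MySQLdb

conn = MySQLdb.connect(host="database-host", user="dbuser",
                       passwd="dbpass", db="hello_world")

def multiple_queries(queries):
    cursor = conn.cursor()
    worlds = []
    for _ in range(queries):
        random_id = random.randint(1, 10000)
        cursor.execute("SELECT id, randomNumber FROM World WHERE id = %s", (random_id,))
        row = cursor.fetchone()
        worlds.append({"id": row[0], "randomNumber": row[1]})
    # The response is a JSON list of the queried objects
    return json.dumps(worlds)
```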

### The benchmark_config File

The benchmark_config file is used by our run script to identify the available tests to be run. This file should exist at the root of the test directory. Here is its basic structure:

```json
{
  "framework": "my-framework",
  "tests": [{
    "default": {
      "setup_file": "setup.py",
      "json_url": "/json",
      "db_url": "/db",
      "query_url": "/db?queries=",
      "port": 8080,
      "sort": 32
    }
  }, {
    "alternative": {
      "setup_file": "alternate_setup.py",
      "json_url": "/json",
      "db_url": "/db",
      "query_url": "/db?queries=",
      "port": 8080,
      "sort": 33
    }
  }]
}
```

* framework: Specifies the framework name.
* tests: An array of tests that can be run for this framework. In most cases, this contains a single element for the "default" test, but additional tests can be specified.
  * setup_file: The location of the setup file that can start and stop the test. By convention this is simply setup.py.
  * json_url (optional): The relative URL path to the JSON test.
  * db_url (optional): The relative URL path to the database test.
  * query_url (optional): The relative URL path to the variable-query test. The URL must be set up so that an integer can be appended to it to specify the number of queries to run, e.g. /db?queries= or /db/.
  * port: The port the server is listening on.
  * sort: The sort order. This is important for our own blog post, which relies on consistent ordering of the frameworks. You can get the next available sort order by running ./run-tests.py --next-sort

### Setup Files

The setup file is responsible for starting and stopping the test. Among other things, this script handles:

* Setting the database host to the correct IP
* Compiling/packaging the code
* Starting the server
* Stopping the server

The setup file is a Python file that contains a start() and a stop() function. Here is Wicket's setup file as an example:

```python
import subprocess
import sys
import setup_util

##################################################
# start(args)
#
# Starts the server for Wicket
# returns 0 if everything completes, 1 otherwise
##################################################
def start(args):

  # setting the database url
  setup_util.replace_text("wicket/src/main/webapp/WEB-INF/resin-web.xml", "mysql:\/\/.*:3306", "mysql://" + args.database_host + ":3306")

  # 1. Compile and package
  # 2. Clean out possible old tests
  # 3. Copy package to Resin's webapp directory
  # 4. Start resin
  try:
    subprocess.check_call("mvn clean compile war:war", shell=True, cwd="wicket")
    subprocess.check_call("rm -rf $RESIN_HOME/webapps/*", shell=True)
    subprocess.check_call("cp wicket/target/hellowicket-1.0-SNAPSHOT.war $RESIN_HOME/webapps/wicket.war", shell=True)
    subprocess.check_call("$RESIN_HOME/bin/resinctl start", shell=True)
    return 0
  except subprocess.CalledProcessError:
    return 1

##################################################
# stop()
#
# Stops the server for Wicket
# returns 0 if everything completes, 1 otherwise
##################################################
def stop():
  try:
    subprocess.check_call("$RESIN_HOME/bin/resinctl shutdown", shell=True)
    return 0
  except subprocess.CalledProcessError:
    return 1
```
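
For orientation, a setup module like this might be driven roughly as in the hypothetical sketch below; the real orchestration lives in framework_test.py, and the names here are illustrative:

```python
# Hypothetical sketch of how a setup module's start()/stop() pair is
# exercised by a test driver; not the project's actual code.
import argparse
import setup  # the framework's setup.py

args = argparse.Namespace(database_host="database-private-ip")
if setup.start(args) != 0:
    raise Exception("framework failed to start")
# ... run the benchmark against the framework here ...
if setup.stop() != 0:
    raise Exception("framework failed to stop")
```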