
Web Framework Performance Comparison

This project provides representative performance measures across a wide field of web application frameworks. With much help from the community, coverage is quite broad, and we are happy to broaden it further with contributions. The project presently includes frameworks in many languages, including Go, Python, Java, Ruby, PHP, Clojure, Groovy, JavaScript, Erlang, Haskell, Scala, Lua, and C. The current tests exercise plaintext responses, JSON serialization, database reads and writes via an object-relational mapper (ORM), collections, sorting, server-side templates, and XSS countermeasures. Future tests will exercise other components and greater computation.

Read more and see the results of our tests on Amazon EC2 and physical hardware at http://www.techempower.com/benchmarks/

Join in the conversation at our Google Group: https://groups.google.com/forum/?fromgroups=#!forum/framework-benchmarks

Running the test suite

We ran our tests using two dedicated i7 2600k machines as well as two EC2 m1.large instances.

The Benchmark Tools README contains tools and instructions for replicating our tests on EC2, Windows Azure, or your own dedicated machines.

Updating Tests

We hope the community will help us make these tests better. If you'd like to change any of the tests we currently have, here are some things to keep in mind.

Updating Dependencies

If you're updating a dependency of a framework that uses a dependency management system (Bundler, npm, etc.), please be specific with the version number that you are updating to.

Also, if you change the dependencies of any test, please update that test's README file to reflect the change; we want to keep the README files as up to date as possible.

Updating Software

If you would like to update any of the software used, again, please be as specific as possible. While we still install some software via apt-get without specifying a version, we would like to have as much control over the versions as possible.

The main file that installs all the software is toolset/setup/linux/installer.py. It is broken into two sections: server software and client software.

Additionally, it may be necessary to update the setup.py file in the framework's directory to use this new version.

If you update any software, please update the README files of any tests that use that software.

Adding Frameworks

When adding a new framework or new test to an existing framework, please follow these steps:

  • Update/add benchmark_config
  • Update/add setup file
  • When creating a database test, please use the MySQL table hello_world.World, or the MongoDB collection hello_world.world

There are three different tests that we currently run:

  • JSON Response
  • Database (single query)
  • Database (multiple query)

The single query database test can be treated as a special case of the multiple query test with the query-count parameter set to 1.

JSON Response

This test must follow these conventions:

  • The message object should be instantiated as a new object for each request.
  • The test should use a JSON serializer to render the newly instantiated object to JSON.
  • Set the response Content-Type to application/json.
  • The response should be {"message": "Hello, World!"}
  • White space in the response does not matter.

Pseudo-code:

obj = { message : "Hello, World!" }
render json.encode(obj)
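Under those conventions, a minimal runnable sketch in Python (standard library only; the function name is illustrative, not part of any test's required API) might look like:

```python
import json

def json_test_response():
    # Instantiate the message object anew for each request, per the convention above
    obj = {"message": "Hello, World!"}
    # Use a real JSON serializer rather than a hand-built string
    body = json.dumps(obj)
    headers = {"Content-Type": "application/json"}
    return body, headers
```

In a real test this function would be wired into the framework's routing; whitespace in the serialized output does not matter.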

Database (single query)

This test will:

  • Access a database table or collection named "World" that is known to contain 10,000 rows/entries.
  • Query for a single row from the table or collection using a randomly generated id (the ids range from 1 to 10,000).
  • Set the response Content-Type to application/json.
  • Serialize the row to JSON and send the resulting string as the response.

By convention, if the test does not use an ORM, and instead uses the raw database connectivity provided by the platform (e.g., JDBC), we append a "-raw" to the test name in the benchmark_config file. For example, "php-raw".

Pseudo-code:

random_id = random(1, 10000)
world = World.find(random_id)
render json.encode(world)

Database (multiple queries)

This test is very similar to the single query test, and in some cases it will be implemented using the same code. A URL parameter is made available to specify the number of queries to run per request. The response is a list of objects resulting from the queries for random rows.

Pseudo-code:

number_of_queries = get("queries")
worlds = []
for i = 0; i < number_of_queries; i++
    random_id = random(1, 10000)
    worlds[i] = World.find(random_id)
render json.encode(worlds)
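Both database tests can be sketched with one Python function (standard library only). Here find_world stands in for whatever ORM or raw driver call the test actually uses, and is an assumption of this sketch; the single query test is simply the queries=1 case:

```python
import json
import random

def query_test_response(find_world, queries=1):
    # Each id is drawn uniformly from 1..10000, matching the size of the World table
    worlds = [find_world(random.randint(1, 10000)) for _ in range(int(queries))]
    # The single query test returns one object; the multiple query test returns a list
    payload = worlds[0] if queries == 1 else worlds
    return json.dumps(payload)
```

The response body would be sent with Content-Type set to application/json, as in the single query test.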

The benchmark_config File

The benchmark_config file is used by our run script to identify the available tests to be run. This file should exist at the root of the test directory. Here is its basic structure:

{
  "framework": "my-framework",
  "tests": [{
    "default": {
      "setup_file": "setup.py",
      "json_url": "/json",
      "db_url": "/db",
      "query_url": "/db?queries=",
      "port": 8080,
      "sort": 32
    }
  }, {
    "alternative": {
      "setup_file": "alternate_setup.py",
      "json_url": "/json",
      "db_url": "/db",
      "query_url": "/db?queries=",
      "port": 8080,
      "sort": 33
    }
  }]
}
  • framework: Specifies the framework name.
  • tests: An array of tests that can be run for this framework. In most cases, this contains a single element for the "default" test, but additional tests can be specified.
    • setup_file: The location of the setup file that can start and stop the test. By convention this is just setup.py.
    • json_url (optional): The relative URL path to the JSON test.
    • db_url (optional): The relative URL path to the database test.
    • query_url (optional): The relative URL path to the variable query test. The URL must be set up so that an integer can be appended to the end of the URL to specify the number of queries to run, e.g., /db?queries= or /db/.
    • port: The port the server is listening on.
    • sort: The sort order. This is important for our own blog post, which relies on consistent ordering of the frameworks. You can get the next available sort order by running: ./run-tests.py --next-sort
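Since the file is plain JSON, it can be loaded and walked with the standard library. A sketch (the helper name is illustrative, not part of the toolset):

```python
import json

def list_tests(config_text):
    # Yields (framework, test_name, attributes) for each test in a benchmark_config
    config = json.loads(config_text)
    for group in config["tests"]:
        for name, attrs in group.items():
            yield config["framework"], name, attrs

example = """
{
  "framework": "my-framework",
  "tests": [{
    "default": {
      "setup_file": "setup.py",
      "json_url": "/json",
      "db_url": "/db",
      "query_url": "/db?queries=",
      "port": 8080,
      "sort": 32
    }
  }]
}
"""
tests = list(list_tests(example))
```

This is only a sanity-check reader; the actual run script performs its own parsing.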

Setup Files

The setup file is responsible for starting and stopping the test. Among other things, this script handles:

  • Setting the database host to the correct IP
  • Compiling/packaging the code
  • Starting the server
  • Stopping the server

The setup file is a Python file that contains a start() and a stop() function. Here is Wicket's setup file as an example.

import subprocess
import sys
import setup_util

##################################################
# start(args)
#
# Starts the server for Wicket
# returns 0 if everything completes, 1 otherwise
##################################################
def start(args):

  # setting the database url
  setup_util.replace_text("wicket/src/main/webapp/WEB-INF/resin-web.xml", "mysql:\/\/.*:3306", "mysql://" + args.database_host + ":3306")

  # 1. Compile and package
  # 2. Clean out possible old tests
  # 3. Copy package to Resin's webapp directory
  # 4. Start resin
  try:
    subprocess.check_call("mvn clean compile war:war", shell=True, cwd="wicket")
    subprocess.check_call("rm -rf $RESIN_HOME/webapps/*", shell=True)
    subprocess.check_call("cp wicket/target/hellowicket-1.0-SNAPSHOT.war $RESIN_HOME/webapps/wicket.war", shell=True)
    subprocess.check_call("$RESIN_HOME/bin/resinctl start", shell=True)
    return 0
  except subprocess.CalledProcessError:
    return 1

##################################################
# stop()
#
# Stops the server for Wicket
# returns 0 if everything completes, 1 otherwise
##################################################
def stop():
  try:
    subprocess.check_call("$RESIN_HOME/bin/resinctl shutdown", shell=True)
    return 0
  except subprocess.CalledProcessError:
    return 1
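Stripped of the Wicket specifics, a new framework's setup file reduces to the skeleton below. The echo commands are placeholders, not real toolset commands; substitute your framework's actual build, launch, and shutdown steps:

```python
import subprocess

def start(args):
    # args carries run-time settings such as args.database_host (unused in this placeholder)
    try:
        # Placeholder commands; replace with your framework's build and launch steps
        subprocess.check_call("echo build-and-package", shell=True)
        subprocess.check_call("echo start-server", shell=True)
        return 0
    except subprocess.CalledProcessError:
        return 1

def stop():
    try:
        # Placeholder command; replace with your framework's shutdown step
        subprocess.check_call("echo stop-server", shell=True)
        return 0
    except subprocess.CalledProcessError:
        return 1
```

As in the Wicket example, both functions return 0 on success and 1 on failure, which is how the benchmarker detects setup problems.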