I played around with some load testing last night using `h2load` and wanted to put the results up somewhere in case they're useful. I also put together a few experiments with batched writing at potatosalad/hyper@chunked. The various patterns can be seen under `lib/hyper/handlers`.

The `Hyper.Handlers.ChunkBatchSpawn` implementation wound up having the most balanced performance behavior in preliminary testing, and is what is referred to as "hyperbeam chunked" below.

Changes made that increased performance slightly:

- A `batch_send_resp` function for batched writes
- A `Hyper.Batch` process to enqueue and send writes to `batch_send_resp`
- Changed `#[derive(NifMap)]` to `#[derive(NifTuple)]` for the request struct
- Wrote a simplified `proc_lib`-based `Hyper.Protocol` to replace `Hyper.Server`
Note: There is something wrong with the HTTP/1.1 path on the rusterlium/hyper@batched branch that occasionally causes the server to slow down to the ~100 req/sec range. It's inconsistent, and seems to have a more pronounced effect as more pipelining is used (keeping the number of clients small, but streams high). Not sure what that's all about.
All tests were done on a 2017 MacBook Pro 13" with a 2-core 2.5GHz Intel Core i7 and 16 GB of LPDDR3 RAM.
*(The server-name column of this table did not survive; rows appear in consecutive groups per server implementation — four runs each, two h1 and two h2c, except one h2c-only implementation.)*

| Protocol | Clients | Streams | Req/sec |
|----------|---------|---------|---------|
| h1  | 100 | 10  | 140,000 |
| h1  | 10  | 100 | 145,000 |
| h2c | 100 | 10  | 100,000 |
| h2c | 10  | 100 | 145,000 |
| h1  | 100 | 10  | 220,000 |
| h1  | 10  | 100 | 260,000 |
| h2c | 100 | 10  | 135,000 |
| h2c | 10  | 100 | 180,000 |
| h1  | 100 | 10  | 65,000  |
| h1  | 10  | 100 | 55,000  |
| h2c | 100 | 10  | 80,000  |
| h2c | 10  | 100 | 90,000  |
| h1  | 100 | 10  | 70,000  |
| h1  | 10  | 100 | 45,000  |
| h2c | 100 | 10  | 95,000  |
| h2c | 10  | 100 | 100,000 |
| h1  | 100 | 10  | 80,000  |
| h1  | 10  | 100 | 50,000  |
| h2c | 100 | 10  | 100,000 |
| h2c | 10  | 100 | 125,000 |
| h1  | 100 | 10  | 30,000  |
| h1  | 10  | 100 | 35,000  |
| h2c | 100 | 10  | 17,000  |
| h2c | 10  | 100 | 17,000  |
| h2c | 100 | 10  | 35,000  |
| h2c | 10  | 100 | 40,000  |
| h1  | 100 | 10  | 85,000  |
| h1  | 10  | 100 | 95,000  |
| h2c | 100 | 10  | 45,000  |
| h2c | 10  | 100 | 55,000  |
| h1  | 100 | 10  | 155,000 |
| h1  | 10  | 100 | 125,000 |
| h2c | 100 | 10  | 200,000 |
| h2c | 10  | 100 | 300,000 |
| h1  | 100 | 10  | 95,000  |
| h1  | 10  | 100 | 55,000  |
| h2c | 100 | 10  | 1,000   |
| h2c | 10  | 100 | 0       |
Server commands were:

```
cargo run --release --example single_threaded
cargo run --release --example hello
elixir -S mix run --no-halt
elixir -S mix run --no-halt
elixir -S mix run --no-halt
elixir -S mix run --no-halt
elixir -S mix run --no-halt
go build server.go && ./server
h2o --conf "$(pwd)/h2o.conf"
nginx -c "$(pwd)/nginx.conf"
```
Client commands were:
Server implementations aren't fully documented anywhere (yet), but the cowboy- and Go-related pieces are in potatosalad/ssm-stress-test.