linux - Go HTTP server testing: ab vs wrk give very different results -


I am trying to see how many requests a Go HTTP server can handle on my machine, but the two tools I tried give very different results and I am confused.

First I tried benchmarking with ab, running this command:

$ ab -n 100000 -c 1000 http://127.0.0.1/ 

That issues 100,000 requests over 1,000 concurrent connections.

The result is as follows:

Concurrency Level:      1000
Time taken for tests:   12.055 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      12800000 bytes
HTML transferred:       1100000 bytes
Requests per second:    8295.15 [#/sec] (mean)
Time per request:       120.552 [ms] (mean)
Time per request:       0.121 [ms] (mean, across all concurrent requests)
Transfer rate:          1036.89 [Kbytes/sec] received

8,295 requests per second seems reasonable.

But then I ran wrk with this command:

$ wrk -t1 -c1000 -d5s http://127.0.0.1:80/ 

and got these results:

Running 5s test @ http://127.0.0.1:80/
  1 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.92ms   13.38ms 234.65ms   94.89%
    Req/Sec    27.03k     1.43k   29.73k    63.27%
  136475 requests in 5.10s, 16.66MB read
Requests/sec:  26767.50
Transfer/sec:      3.27MB

26,767 requests per second? I don't understand why there is such a huge difference.

The code I ran is the simplest possible Go server:

package main

import (
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        w.Write([]byte("hello world"))
    })

    http.ListenAndServe(":80", nil)
}

My goal is to see how many requests the Go server can handle as I increase the number of cores, but this difference appears before I even start adding CPU power. Does anyone know how a Go server scales when adding more cores, and why there is such a huge difference between ab and wrk?

Firstly: benchmarks like this are pretty artificial. Sending a handful of bytes over the wire gives very different results from what you will see once you start adding database calls, template rendering, session parsing, etc. (expect an order-of-magnitude difference).

Then tack on local issues: open file/socket limits on your dev machine vs. production, competition between the benchmarking tool (ab/wrk) and the Go server for resources, the local loopback adapter or OS TCP stack (and TCP stack tuning), etc. The list goes on!

In addition:

  • ab is not highly regarded
  • it is HTTP/1.0 only, and therefore doesn't do keep-alives
  • your other metrics vary wildly - e.g. look at the avg latency reported by each tool - ab reports much higher latency
  • your ab test runs for 12s, not the 5s the wrk test does
  • even 8k req/s is a huge amount of load - that's 28 million requests an hour. If, after adding a DB call, marshalling a JSON struct, etc., that went down to 3k req/s, you'd still be able to handle significant amounts of load. Don't get tied up in these kinds of benchmarks this early.

I have no idea what kind of machine you're on, but my iMac with a 3.5GHz i7-4771 can push upwards of 64k req/s on a single thread responding with w.Write([]byte("hello world\n")).

Short answer: use wrk, and keep in mind that benchmarking tools have a lot of variance.

