
Microbench 2023: Go / Java / Rust / JavaScript / Python

Microbenchmark the baseline language performance in a generic backend/tooling setup: some calculations, some memory wrangling, no IO and no concurrency.
a clipart drawing of an F1 car speeding towards the viewer. It has a driver and 2 passengers. There are text labels on the image saying: “Rust”, “Go”, “Java”, “JS”, “Python”.

Hey friend,

You've probably noticed by now that I've been experimenting with Go lately and having a lot of fun with this very opinionated language.

Over the years, however, I've written code in many different languages, starting with C at university and then progressing through Ruby, Perl and finally Java in my last Software Developer role.

I've taught myself Python and Rust as well. As you can see, I'm clearly getting a kick from learning new programming languages. 😅

Although a language is just a tool and most of us go through a few of these in our career, there are so many of them today that it's inefficient and ineffective to learn them all.

The question I'd like to ask now:

Would YOU like to be more productive and efficient or would you like to write CODE that's performant and fast?

Yeah but, no but

"But Serge, it's not that simple. There are certain languages for certain domains, you wouldn't write an OS kernel in JavaScript and you wouldn't use Assembly for Machine Learning." - you'd say.

And you are right.

However, in backend development these boundaries are blurred:

  • I've seen services written in Perl, Ruby, Python, Go, Java, C++.
  • I've known companies looking for backend devs in Erlang and Haskell. That's a long, difficult and expensive search.
  • Finally, there are emerging server-side languages like Swift, Rust, Nim and others.

I'd like to focus on a generic programming problem and see how much effort YOU (the developer) need to put in to get decent performance.

a kid doing a cool handbrake turn in a toy car

Benchmark

What I'd like to target in the benchmark is baseline performance: some calculations, some memory wrangling, no IO and no concurrency.

And I already have an ideal test for this: Discrete Event Simulation using a Priority Queue. This gives me a few math operations on ints and memory manipulation working with a binary heap.

For the most part, I've implemented the standard heap in a number of different languages exactly the way I did it in my earlier post. In some cases, I also used a built-in heap implementation for comparison.

Test languages/cases:

  • C (compiled with GCC 12.3.0 -O3 optimization)
  • C (compiled with Clang 15.0.7 -O3 optimization)
  • Java built-in PriorityQueue (openjdk 20.0.2 2023-07-18)
  • Go (go version go1.20.3 linux/amd64)
  • Go built-in container/heap (go version go1.20.3 linux/amd64)
  • Python built-in (Python 3.11.4)
  • Python built-in (PyPy 7.3.11)
  • JavaScript (node v18.13.0)
  • Rust (1.68.2 2023-03-27)
  • Rust built-in std::collections::BinaryHeap (1.68.2 2023-03-27)
  • Perl (v5.36.0)

Test system:

  • 6.2.0-24-generic Ubuntu SMP PREEMPT_DYNAMIC x86_64 GNU/Linux
  • Intel NUC i7-10710U CPU @ 1.10GHz

Test parameters:

  • Heap size is 10_000
  • Simulation size is 5_000_000
  • Number of runs: 1024

Notes and Caveats:

  • I intended C to be the "fast" baseline; however, I was surprised by the results. I could probably make more optimizations, but that would undermine the whole "how much effort you spend" idea.
  • For Java I used the standard PriorityQueue container; its runtime was sensible, and I saw no point in reimplementing it.
  • For Python code, I compare the standard Python with the JIT-compiled Python implementation PyPy. Spoiler: it's super cool.
  • With Rust I intentionally compared my heap implementation with the standard built-in one. Read further and you'll see why. 🤦‍♂️
  • Perl. Well, I love Perl, but it was so slow it's not even on the same timescale. Dinosaurs peaked and went extinct before it finished. Sadly, I had to exclude it from the results. And no one cares about Perl in 2023 anyway.

Opinionated:

  • I judge language "productivity" or "developer efficiency" based on my subjective experience of how easy a language is to learn and use without going back to the documentation much.
  • Although that experience is subjective, I've done some research, and it generally aligns with engineer sentiment and survey results. There won't be too much controversy here.
a labrador retriever dog digging a hole in the sand

Results: Let's dig in!

a histogram of various programming language performance: Rust comes first at under 400 ms, the C variants are the second fastest. They are followed by Java and Go at roughly equal runtimes of approx. 750-800 ms, then come JavaScript and Python.

All the code mentioned here is available on my GitHub.

Takeaways and final thoughts

A few highlights I've learnt from this experience; hopefully you'll find some interesting takeaways as well: 😉

  1. Rust is super fast. I did not expect it to be this fast. The built-in BinaryHeap implementation is blazing! 🔥
    Even my default heap implementation without much optimization gave better results than the respective C variant.

  2. Go and Java run head to head. I still expected Go to do a little better, since it compiles to native code. Alas.

  3. What really surprised me is that my Go heap implementation performed waaay better than the built-in one. Really odd. Even JavaScript was faster than the standard Go heap logic. 🤷‍♂️

  4. JavaScript and NodeJS did really well. Given that it's a dynamic language that's super easy to learn, it offers a lot of performance.

  5. Unsurprisingly, Python was the slowest (although not as slow as the Perl implementation, which I had to exclude from the tests).
    However, the JIT-compiling PyPy ran the Python code almost twice as fast, which is a super useful trick you get absolutely for free in most cases.

two husky dogs looking at the viewer

That's it for this little research!

Thank you so much for reading, you've been super awesome! 🙌

If you like my writing, please subscribe for more posts :)
I really appreciate it!