DISCLAIMER: Please skip ahead to the "Measuring latency in the browser" section for the actual technical stuff and not the memes.
Gooooooooood morning to all of you beautiful Code Monkes! 🥰🙈 I am hella caffeinated and randomly decided that I’m gonna start writing blog posts now! I’d like to call this series: Code Monke.
Code Monkey: An offensive term for someone who only knows how to code and doesn’t do anything else in life.
Code Monke: What every Software Engineer, and I daresay any Engineer, Inventor, Philanthropist, CEO, even Elon Musk, wants to become in life. It’s the ultimate destination, the pinnacle of SE clout, internships in Menlo Park, knowledge, actual love, and respect.
This series will serve as a documentation of MY journey to becoming a Code Monke. It is the perfection I know I’ll never reach, but will try my best nevertheless. So every week I’ll put all the interesting things I found at work, projects, etc. etc. here. I welcome you to join me in this beautiful journey. Apes together strong 🦍💯💦
I’m currently working on developing an online tool that measures Bufferbloat in your home router. Gamer boys/girls/persons PAY ATTENTION: Bufferbloat is a phenomenon that happens in your home router that can lead to unnecessarily high latencies for your realtime applications: Zoom Calls, Online Gaming, etc. You 👏 don’t 🙅 want 👏 a 👏 laggy ⏰ internet 🌐 when 👏 you’re 👏 CLAPPIN 👏👏👏 others 👏 in 🎮 COD 🔫 😤. So go ahead and try our tool! It’ll test your router for you and tell you if it has any problems.
…anyways, to make this tool I had to be able to accurately measure the latency to some location.
Measuring latency in the browser
Usually when you wanna check the latency in your network you’d fire up your terminal and do a `ping google.com`:

```
PING google.com (220.127.116.11): 56 data bytes
64 bytes from 18.104.22.168: icmp_seq=0 ttl=114 time=2.993 ms
64 bytes from 22.214.171.124: icmp_seq=1 ttl=114 time=3.289 ms
64 bytes from 126.96.36.199: icmp_seq=2 ttl=114 time=4.067 ms
```
But browsers don’t really have support for ICMP requests (which is what `ping` uses). So what’s the next best thing we can do?
The Naive Way: Taking the time difference
You could always make a request to a really small file somewhere, and measure the time difference before making the request and after getting the response. Something like this:
```js
// Top-level await works in modules and in the browser console.
const url = "https://api.github.com/users/ArshanKhanifar";
const before = Date.now();
const r = await fetch(url);
const after = Date.now();
console.log("Time took:", after - before);
```
This works, but it’s not very accurate, because it includes:
- The time taken for all the TCP handshakes and DNS lookups.
- Processing time at the server (it’s not zero).
- Download time.
…and many other random error sources. What we’re interested in is the RTT (round-trip-time).
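One thing that helps a bit even with the naive approach (a sketch of my own, not part of the original tool): take several samples and keep the minimum, since queueing and other noise only ever *add* latency. Using `performance.now()` also gives you sub-millisecond resolution, unlike `Date.now()`. The URL and `sampleLatency` name here are just illustrative:

```js
// Naive sampling sketch: fetch the same small resource a few times and
// keep the minimum elapsed time, which filters out some transient noise.
async function sampleLatency(url, samples = 5) {
  let best = Infinity;
  for (let i = 0; i < samples; i++) {
    const before = performance.now(); // sub-ms resolution, unlike Date.now()
    await fetch(url, { cache: "no-store" }); // avoid hitting the HTTP cache
    best = Math.min(best, performance.now() - before);
  }
  return best;
}
```

This still includes handshakes, server time, and download time, though — it just trims the outliers.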
The Better Way: Using the ResourceTiming API
Open the inspector for some webpage and take a look at the network tab, and click on one of the earlier requests. Now select the Timing tab. You’ll see something like this:
This is pretty cool! We can see exactly what the timing info for that specific request is! What we’re interested in here is the green section: TTFB (Time To First Byte).
TTFB measures how long your browser waits until the first byte of the response is received from the server. It’s a much better representation of the RTT we’re trying to measure. Now, how can we measure this for our request? The ResourceTiming API is made just for that.
The ResourceTiming API gives you much more granular control over the timing of your requests. It includes markers for measuring how long each stage of your request takes.
Now TTFB here is basically the difference between `responseStart` and `requestStart`. We can use this API like this:
```js
const r = await fetch(url);
// getEntriesByName returns an ARRAY of PerformanceResourceTiming entries,
// so grab the first one rather than reading fields off the array itself.
const [perf] = performance.getEntriesByName(url);
const ttfb = perf.responseStart - perf.requestStart;
```
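Since the entry carries markers for every stage, you can break the whole request down, not just TTFB. Here’s a small sketch (`breakdownTiming` is my own hypothetical helper, written as a pure function over the standard `PerformanceResourceTiming` fields so it’s easy to test):

```js
// Breaks a PerformanceResourceTiming-like entry into its stages (all in ms).
// Accepts any object with the standard timing fields.
function breakdownTiming(entry) {
  return {
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    tcp: entry.connectEnd - entry.connectStart,
    ttfb: entry.responseStart - entry.requestStart,
    download: entry.responseEnd - entry.responseStart,
  };
}
```

In the browser you’d feed it a real entry: `const [entry] = performance.getEntriesByName(url); console.log(breakdownTiming(entry));` — which makes it obvious how much of the naive measurement was DNS and TCP overhead rather than RTT.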
YAY! Problem SOLVED!
But we still haven’t gotten rid of one thing: 👏 Server 👏 Processing 👏 Time 👏
The Best Possible Way: 🦄
To get rid of the server processing time: unfortunately, unless the server itself decides to tell you how long processing the request took, there isn’t any other way (that I know of). Most servers don’t do this, because the developers have to care enough to include that information in a response header. Nevertheless, CloudFlare’s Speed Test does include that info.
Looking at the developer tab of their Speed Test, I can tell that they measure latencies by making a 0KB download request to their backend.
Here is what Chrome’s inspector tab shows:
See that little beige bar at the bottom? That’s their server timing info. They measure `ttfb` and deduct the `cfRequestDuration` to get a latency measurement that’s as accurate as possible.
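This kind of info travels in the standard `Server-Timing` response header, whose entries look like `name;dur=12.3`, comma-separated. Here’s a sketch of how you could parse it and deduct the server duration (the `cfRequestDuration` metric name is from CloudFlare’s response; `parseServerTiming` and `measureLatency` are my own illustrative names):

```js
// Parses a Server-Timing header value (e.g. 'cfRequestDuration;dur=42.5')
// into a { name: durationMs } map, per the Server-Timing header format.
function parseServerTiming(header) {
  const metrics = {};
  for (const item of header.split(",")) {
    const parts = item.trim().split(";");
    const name = parts[0].trim();
    const dur = parts.find((p) => p.trim().startsWith("dur="));
    metrics[name] = dur ? parseFloat(dur.trim().slice(4)) : 0;
  }
  return metrics;
}

// Sketch: deduct server processing time from TTFB. Assumes the server
// actually sends a Server-Timing header with a cfRequestDuration metric.
async function measureLatency(url) {
  const r = await fetch(url);
  const timing = parseServerTiming(r.headers.get("server-timing") ?? "");
  const [perf] = performance.getEntriesByName(url);
  const ttfb = perf.responseStart - perf.requestStart;
  return ttfb - (timing.cfRequestDuration ?? 0);
}
```

Note that reading `Server-Timing` cross-origin also requires the server to send a `Timing-Allow-Origin` header.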
That said, this information is pretty much never available for an arbitrary server of your choice, so the best you can usually do is TTFB.
Thanks for reading this! Feel free to give me a clap 👏 if you enjoyed it.