So, the current total size of my website is exactly 7.46 KB.

It's served on GitHub Pages and uses `content-encoding: zstd` by default, transferring only 4.17 KB with compression. When you open can.kurttekin.com in your browser, the server has no idea how much data the connection can carry per second (its bandwidth). Sending a huge amount of data right away can cause packet loss, overwhelmed routers, and so on.
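If you want to see the negotiation yourself, here is a minimal sketch using only Python's standard library. It assumes the server picks zstd when it's offered, and the exact byte counts will depend on the live deployment:

```python
import urllib.request

# Ask the server for a compressed response; the server picks the encoding.
req = urllib.request.Request(
    "https://can.kurttekin.com",
    headers={"Accept-Encoding": "zstd, br, gzip"},
)
with urllib.request.urlopen(req) as resp:
    body = resp.read()  # raw bytes as transferred, still compressed
    print("Content-Encoding:", resp.headers.get("Content-Encoding"))
    print("Transferred bytes:", len(body))
```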
So there is a method for dealing with this: TCP Congestion Control, a set of strategies that TCP uses to prevent overloading the network.
In the Slow Start phase, the server begins with a small congestion window (cwnd), measured in segments (one MSS, 1460 bytes on Ethernet).
After every RTT (Round Trip Time, the interval between sending a packet and getting its ACK back), the congestion window grows exponentially: the amount of data in flight doubles every RTT, as long as all segments are ACKed.
- initial congestion window: cwnd = 2^0 = 1 segment (~1460 bytes)
- after 1 RTT: cwnd = 2^1 = 2 segments (~2920 bytes)
- after 2 RTTs: cwnd = 2^2 = 4 segments (~5840 bytes)
- ...

...until a packet is lost or no ACK comes back.
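The doubling is easy to model. This toy loop assumes the classic initial window of one segment and a lossless network; real TCP stacks layer many refinements on top:

```python
MSS = 1460  # bytes of payload per segment on Ethernet
cwnd = 1    # congestion window, in segments

for rtt in range(5):
    print(f"after {rtt} RTT(s): cwnd = {cwnd:2d} segments (~{cwnd * MSS} bytes)")
    cwnd *= 2  # each ACKed segment grows cwnd by one, so it doubles per RTT
```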
*(Figure: effects of latency and packet loss on TCP throughput; source: https://filipv.net/effects-of-latency-and-packet-loss-on-tcp-throughput/)*
Once loss is detected, TCP enters Congestion Avoidance, where growth slows from exponential to linear.
Modern TCP stacks (RFC 6928) start with an initial window of 10 segments. With 1460 bytes per segment, that is ~14.6 KB of data in the very first RTT, more than enough for my 4.17 KB compressed page to arrive in a single round trip.
Each segment also carries about 40 bytes of headers: 20 bytes for IP and 20 bytes for TCP (without options).
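Putting those numbers together, a back-of-the-envelope sketch (the 4.17 KB figure is my site's compressed transfer size from above):

```python
import math

MSS = 1460           # payload bytes per segment
IW = 10              # RFC 6928 initial congestion window, in segments
HEADERS = 20 + 20    # minimal IP + TCP header bytes per segment

page_bytes = 4.17 * 1024  # this site's compressed transfer size
segments = math.ceil(page_bytes / MSS)

print(f"segments needed: {segments}")                  # 3
print(f"fits in the first RTT: {segments <= IW}")      # True
print(f"header overhead: {segments * HEADERS} bytes")  # 120
```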
*(Figure: how HTTP data is encapsulated in TCP, IP, and Ethernet; source: https://www.embedic.com/technology/details/introduction-to-ethernet-ip-tcp-http)*
With some basic HTML tricks, like writing the whole page on a single line to avoid whitespace, keeping the styling minimal, and shaving a few bytes off self-closing tags, I managed to keep everything I want to say about myself under 10 KB, which makes my website widely accessible (even over a 40 kbps GPRS connection). And because the amount of data stored in the data center and transferred over the network is so small, it significantly lowers CO2 emissions compared to "good looking" 500 MB+ JavaScript-bloated personal websites.
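For illustration, a crude version of the single-line trick could be automated like this. It's a naive sketch; a regex like this would mangle `<pre>` blocks and inline scripts, so I wouldn't run it on arbitrary HTML:

```python
import re

html = """<html>
  <body>
    <p>Hi, this page fits in one TCP round trip.</p>
  </body>
</html>"""

# Collapse the whitespace between tags onto a single line.
minified = re.sub(r">\s+<", "><", html)
print(minified)
print(len(html), "->", len(minified), "bytes")
```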