You've probably heard this one before.

"A server can only handle 65,535 connections because that's how many ports TCP supports."

Sounds logical, right?

16 bits for ports = 2¹⁶ = 65,536.

So that’s the limit… right?

Wrong. And understanding why matters if you're building anything that needs to scale.

The Misconception

That 65K limit only applies to outbound connections, like your laptop connecting to Google.

Example:

Your computer (one IP) can only open about 65K simultaneous connections to 172.217.13.174:443 (Google) before it runs out of source ports, and in practice fewer, because the kernel only hands out source ports from its ephemeral range.
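
You can see that range for yourself. A minimal sketch in Python (Linux-specific, it just reads /proc):

# ephemeral_range.py - how many source ports a client really gets (Linux)
with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
    low, high = map(int, f.read().split())

print(f"ephemeral port range: {low}-{high}")
print(f"usable source ports per destination: {high - low + 1}")

On a stock Linux box that typically prints 32768 and 60999, i.e. roughly 28K usable ports, not even 65K.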

But when you’re running a server, things work differently.

Your server uses one listening port (say 8080).

Each new client connection is tracked by its unique combination of:

  • Source IP

  • Source Port

  • Destination IP

  • Destination Port

This combination is called a four-tuple, and it’s how TCP tells connections apart.
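
Here's a minimal sketch of that in code (Python, listening on port 8080 like the example later in this post): every client lands on the same server port, but accept() hands back a different (client IP, client port) pair each time:

# fourtuple_server.py - one listening port, many distinct connections
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen()

while True:
    conn, (client_ip, client_port) = srv.accept()
    # The server side of the four-tuple never changes;
    # only the client IP and port vary between connections.
    print(f"connection from {client_ip}:{client_port} -> local port 8080")
    conn.close()

Hit it with a few curl requests and each one shows up with a different source port.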

The Math:

Let’s break it down:

  • Source IPs → 2³² possible

  • Source Ports → 2¹⁶ possible

  • Total unique combinations = 2⁴⁸

That’s about 281 trillion possible connections to one port.
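
You can sanity-check that number in one line of Python:

print(f"{2**32 * 2**16:,}")   # 281,474,976,710,656 possible (client IP, client port) pairs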

So no, your server is not bound by 65K.

It’s bound by something else entirely.

A Simple Example

You’re running a chat server on port 8080.

Client   Source IP       Source Port   Destination
1        192.168.1.100   51234         server:8080
2        192.168.1.100   51235         server:8080
3        192.168.1.101   51234         server:8080

All hit the same server port (8080).

But TCP sees them as three completely different connections.

Why?

Because the client IPs and ports are unique.
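
You can watch this from the client side too. A quick sketch, assuming something (like the server sketch above) is listening on localhost:8080:

# three_clients.py - three connections from one machine to one server port
import socket

conns = []
for _ in range(3):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("127.0.0.1", 8080))
    ip, port = s.getsockname()   # the OS picked a fresh ephemeral port
    print(f"local {ip}:{port} -> 127.0.0.1:8080")
    conns.append(s)

for s in conns:
    s.close()

Same client IP, same destination, three different source ports: three different connections.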

Your server isn’t running out of ports. What each connection actually costs it is a file descriptor and some memory.

The Real Limits

File Descriptors:

Each accepted connection uses one file descriptor (the listening socket itself takes one more).

On Linux, the default soft limit is around 1,024 file descriptors per process.

To handle 100,000 connections, you need at least 100,000 file descriptors.

Check your current limit:

ulimit -n

Increase it:

# Ubuntu/Linux
sudo sysctl -w fs.file-max=1000000   # system-wide cap on open files
sudo sysctl -w fs.nr_open=1000000    # per-process hard ceiling
ulimit -n 1000000                    # current shell; going past the hard limit needs root
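
You can also check and raise the limit from inside your own process. A sketch using Python's resource module (raising the hard limit itself still needs root or limits.conf):

# fd_limit.py - inspect and raise the per-process file descriptor limit
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# Push the soft limit up to whatever the hard limit allows.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("soft limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])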

Memory

Each socket connection allocates buffers for sending and receiving data. On Linux:

cat /proc/sys/net/ipv4/tcp_rmem
4096    131072  6291456
# minimum, default, maximum (bytes)

cat /proc/sys/net/ipv4/tcp_wmem  
4096    16384   4194304
# minimum, default, maximum (bytes)

For 100,000 connections with default buffers:

  • Receive buffer: 131KB per connection

  • Send buffer: 16KB per connection

  • Total: (131KB + 16KB) × 100,000 = 14.7GB

But don’t panic. Linux allocates these buffers dynamically, so actual usage is often far lower.
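
If you want that worst-case figure for your own machine's defaults, it's a few lines of Python (Linux-only; the 100,000-connection target is just an assumption, swap in your own):

# tcp_buffer_budget.py - worst-case socket buffer memory for N connections (Linux)
CONNECTIONS = 100_000   # assumed target

def default_bytes(path):
    # tcp_rmem / tcp_wmem hold three values: min, default, max
    with open(path) as f:
        return int(f.read().split()[1])

rmem = default_bytes("/proc/sys/net/ipv4/tcp_rmem")
wmem = default_bytes("/proc/sys/net/ipv4/tcp_wmem")

total = (rmem + wmem) * CONNECTIONS
print(f"worst case: {total / 1e9:.1f} GB for {CONNECTIONS:,} connections")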

What this means

If you’re designing a high-concurrency system:

  • Don’t shard or load-balance just because of the 65K myth

  • Focus on file descriptors and memory tuning

  • Modern servers can handle 100K+ connections easily

  • Your database pool is probably the real bottleneck

The 65K limit is real, but it only applies to a client opening many outbound connections to the same destination.

Your server isn’t even close to that ceiling.

TLDR

Your server’s not limited by TCP ports.

It’s limited by your OS settings and RAM.

So stop fearing 65K. Start tuning your system.
