Hacker News

Wired:

PING 10.0.3.1 (10.0.3.1): 56 data bytes
64 bytes from 10.0.3.1: icmp_seq=0 ttl=64 time=0.737 ms
64 bytes from 10.0.3.1: icmp_seq=1 ttl=64 time=0.636 ms
64 bytes from 10.0.3.1: icmp_seq=2 ttl=64 time=0.701 ms
64 bytes from 10.0.3.1: icmp_seq=3 ttl=64 time=0.633 ms
...
--- 10.0.3.1 ping statistics ---
8 packets transmitted, 8 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.516/0.644/0.737/0.075 ms

Wifi:

PING 10.0.3.1 (10.0.3.1): 56 data bytes
64 bytes from 10.0.3.1: icmp_seq=0 ttl=64 time=6.713 ms
64 bytes from 10.0.3.1: icmp_seq=1 ttl=64 time=3.508 ms
64 bytes from 10.0.3.1: icmp_seq=2 ttl=64 time=2.425 ms
64 bytes from 10.0.3.1: icmp_seq=3 ttl=64 time=2.127 ms
64 bytes from 10.0.3.1: icmp_seq=4 ttl=64 time=4.057 ms
...
--- 10.0.3.1 ping statistics ---
22 packets transmitted, 22 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.429/3.769/16.722/4.054 ms

Depending on what you're doing, this can make a HUGE difference.



Like the sibling comment, I'm skeptical (though see below) that this small an increase in best-case latency would make a difference in a typical office environment. Still, I'm curious whether this was as close to apples-to-apples as possible. That is, was it comparing 802.11ac to gigabit ethernet (or 54/108 Mb/s Wifi to 100Mb ethernet), and were the interfaces connected the same way (e.g. both via USB or both via PCIe/Thunderbolt)?

Despite my skepticism, I have seen wireless latencies spike much higher in typical office settings. That stands to reason: no matter how well engineered, Wifi is still a shared medium, so congestion or interference caused by a neighbor can ruin someone's VoIP call while it lasts.


Both computers are connected to the same Unifi network. One was a MacBook Pro using the built-in wireless (ac; listed as 144 Mbps in the Unifi controller, iirc) and the other a Mac Mini connected via cable (1 Gbps). They both pinged the router, so all traffic was local. There are no other wifi networks nearby (single-family houses, 50+ meters apart), and basically no other traffic on the network.


In what situations does a 3 ms difference (16 ms max) make a difference in an average office environment?


It makes a huge difference when copying lots of files over SMB, for example.


I've never heard that, but I also don't use SMB heavily (especially over Wifi) to begin with.

Is there a site you can point to that details the protocol's latency-sensitivity?

If something like bulk file transfer is at issue, a ping test with payload sizes near the MTU (e.g. 1480 bytes) would be a better simulation of those latencies.
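The payload size that produces a full-size frame can be derived from the header sizes. A quick sketch, assuming a standard 1500-byte MTU and an IPv4 header without options:

```python
# Deriving the ICMP echo payload size that fills a standard Ethernet frame.
MTU = 1500        # standard Ethernet MTU
IP_HEADER = 20    # IPv4 header, no options
ICMP_HEADER = 8   # ICMP echo header
payload = MTU - IP_HEADER - ICMP_HEADER
print(payload)    # bytes to pass as ping's payload size
```

On macOS/BSD and Linux that value goes to ping's `-s` flag, e.g. `ping -s 1472 10.0.3.1` (larger values fragment across multiple packets).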


It's just what I've experienced myself when copying a lot of files, e.g. "add 50k photos from a network drive to Photos" or "perform an initial Time Machine backup". Over cable I'd say it's at least twice as fast, though I haven't done any measurements.


That also raises the question of whether it's implementation-dependent (especially if Apple optimizes AFP over everything else), rather than generally applicable to SMB, not that this would negate the validity of the example.

> Over cable I'd say it's at least twice as fast

That could be because of the bandwidth difference, if wireless isn't 802.11ac (or if the cable isn't 100Mb/s).

Is it also the case that copying a single file of comparable size doesn't show the same speedup?



