> Set transmit power to High

Do NOT do this if you live in a densely populated area (e.g. apartment complex). You'll create noise for yourself and everybody else. Classic prisoner's dilemma - a few people could be assholes and profit from it, but if everyone's an asshole everybody suffers.
General rule on TX power: start on low and increase only if you know (or can confirm) it helps. Go back down if it doesn't.
For the 6GHz frequencies used, this isn’t really as big of a deal as everyone has made it out to be. That advice dates from the early days of 2.4GHz WiFi, with only 3 non-overlapping channels, higher penetration of 2.4GHz signals, and competition with all of the other cheap devices in the 2.4GHz space.
The 6GHz space isn’t even competing with classic WiFi. It’s really fine. There’s no prisoner’s dilemma or some moral high ground from setting it to low. It will make virtually no difference for your neighbors.
The real-world difference between power settings is actually pretty minimal.
The actual risk with modern hardware is that the high power setting starts running the power amplifier in a higher distortion area of the curve which degrades signal quality in exchange for incrementally longer range.
Also, the higher frequencies are much more affected by absorption from little things like "walls" and "trees" which are occasionally part of the RF environment, so you're far less likely to interfere with your neighbors doing this, than you were with 2.4GHz.
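Even before counting walls, free-space path loss alone penalizes the higher bands by a fixed, distance-independent margin. A quick sketch using the standard FSPL formula (the 15 m distance is just an illustrative "neighbor's apartment away"):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over 15 m of open air, before any wall absorption:
for ghz in (2.4, 5.0, 6.0):
    print(f"{ghz} GHz: {fspl_db(15, ghz * 1e9):.1f} dB")

# The gap between bands doesn't depend on distance:
delta = fspl_db(15, 6e9) - fspl_db(15, 2.4e9)
print(f"6 GHz pays ~{delta:.0f} dB more than 2.4 GHz everywhere")
```

Roughly 8 dB of extra loss at 6 GHz before you add the walls, which attenuate the higher bands harder still.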
Also the reason it makes such an enormous difference to put your AP in the same room, if at all possible. Sneak a cable somewhere, park the AP in the far corner of the room, sure. But with zero walls in between, it's huge.
Or conduit so you can run something else too later if you want. There is a reason why commercial almost always does that. But also because they have money.
Agree you want TX power as low as you can, but in practice, I've always found there's at least one device in my house that'll benefit from an increased TX power. Also I generally just trust the FCC to set reasonable power limits for what 'high' should be.
In my experience concrete walls and ceilings in apartment complexes completely block 5 GHz signals. Even through modern triple-glass windows most of the signal is lost. I can't receive any other 5 GHz networks inside my apartment, but around 50 on 2.4 GHz, which makes 2.4 nearly unusable anyway.
This is even more true for the WiFi 7 frequencies at 6GHz
The old tales about interfering with your neighbors, prisoner's dilemmas, and claiming the moral high ground by setting it to low are old-school WiFi mythology that continues to be parroted around
In the US, I would venture at least half, if not more, apartment complexes have wood and drywall walls and ceilings. No concrete is used above the first floor.
This may not help if you can’t control your environment. You will often benefit from nearby routers hearing you and each other if you are forced to share a channel with them, as that is what enables the carrier sensing to work correctly. Otherwise neighbouring APs that can’t hear your quieter use of the channel may shout over your devices rather than backing off, creating collisions and resulting in retransmits.
You can get this kind of interference even when the signal from the router to your device is sitting just above the noise floor and the property next to you is doing the same thing. Both signals are so weak that each can be drowned out by the other, even weaker, signal. The router, on the other hand, can’t tell the message was corrupted until your device responds.
You control your environment by not adding yourself to the dicks creating the bad environment. Everything else is just rationalizing for your own maximum convenience.
There is no such problem as "you have to shout enough so the others hear that you're there". There's no such thing, for at least two different reasons. One, they hear everyone just fine, weak and strong, all at the same time. Two, it doesn't matter even if they didn't, because you obviously hear them if you're getting clobbered by them, and so your router can channel hop around them even if they don't channel hop around you.
While indeed you shouldn't fix noise by shouting louder, your justification isn't quite right.
1. It's the AP that has to decide to change channel, and if you live somewhere with channel contention, from its perspective all channels will be busy. At that point, if your channel appears the quietest (either by being the least noisy or by your clients not being active), then the AP will decide to clobber your channel. Their WiFi devices may also not hear you and won't back off to give your airtime, even though you hear theirs and give them airtime.
2. Having your AP change channel (note: channel hopping is something else entirely, which isn't used for WiFi) wouldn't help when all channels are busy. As long as your usage appears quiet, other APs will keep moving on top of you during their channel optimization.
For residential, the only solution is to use technology that cannot propagate to neighbors: 5/6GHz, many APs, and good thick walls (mmm, reinforced concrete). WiFi channels are a solution for making a few pieces of equipment coexist in the same space, but of limited use when it comes to segregating your space from that of your neighbors. Especially if you want good performance, as there are very few wide channels available.
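The channel-selection dynamic from point 1 can be sketched as a toy model (hypothetical airtime numbers, nothing like real 802.11 auto-channel logic):

```python
# Airtime per channel as a hypothetical neighboring AP observes it.
# Your low-power network on channel 36 is busy, but barely audible to it:
observed_airtime = {
    36: 0.02,  # yours: active, heard faintly
    40: 0.60,
    44: 0.45,
    48: 0.75,
}

def pick_channel(airtime: dict[int, float]) -> int:
    """Naive optimizer: move to the channel with the least observed airtime."""
    return min(airtime, key=airtime.get)

print(pick_channel(observed_airtime))  # 36: it parks itself on top of you
```

The quieter you transmit, the more idle your channel looks to everyone else's optimizer.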
If you’re using the same channel as a neighbouring router that’s close enough to overpower yours then you’ve already lost; pick a different channel. If you stick to 20 MHz there are plenty of options, even more if you are able to use DFS channels.
How likely am I to even detect my neighbors 6GHz network?
I live in a very dense part of Chicago. 2.4 and 5 are a minefield, just a thick soup of interference on everything but the DFS channels (which I get kicked off of too often being close to two airports). While it could be that zero neighbors have 6E or 7 equipment, I find that hard to believe, but nothing comes up on the scan.
There might be more people than you, but 6GHz with wide channels doesn’t penetrate very far. You wouldn’t be able to see all of the networks, just maybe your adjacent neighbors.
Quite right. However, if your wifi bridge has an option for auto-tuning the power then that might be a future-proofing option, assuming that everyone uses it, which they probably won't, sigh
If wifi becomes a pain within a shared building then seriously consider ethernet. Slimline stick-on trunking will hide the wires at about £1-2/m. A box of CAT6, solid core, is less than £1/m. You will also need some back boxes, modules and face plates (~£2.50 each) and a punch-down tool (fiver?) Or you can try and bodge RJ45 plugs onto the solid core CAT6 - please don't unless you really know what you are doing: it looks messy and is seriously prone to weird failures.
‘High power’ on a router is mostly useful to borderline clients. Without that, they likely won’t even be able to see it. It’s hard to auto detect that situation initially, since how can you tell someone you can’t hear to get louder?
Oh, good point. That was actually the first thing I missed, but when I created a new Wi-Fi 7-only SSID, Unifi wouldn’t let me pick anything lower than WPA3 if I only used 6 GHz. So that sort of fixed itself.
I'm lazy so I just fire off the occasional speed tests using Ookla.
It doesn't _really_ seem to matter what channel width or frequency I use, I tend to get around 600Mbps from my iPhone (17 Pro).
When I make it a point to ensure I'm on the correct AP, line of sight from a few feet away, I sometimes break 1Gbps. I was surprised, watching TV the other day, to randomly get a 1.2Gbps speedtest, which is one of the faster ones I've seen on WiFi.
(10gbps internet, UDM Pro, UDM enterprise 2.5Gbps switch for clients, PoE WiFi 7 APs on 6ghz).
Honestly, I'd say overall 6ghz has been more trouble than it's worth. Flipping the switch to WPA2/3 as required by 6ghz broke _all_ of my clients last year, so I had to revert and now I just have a separate SSID for clients I have the energy to manually retype the password into. 6Ghz pretty much only works line of sight and from a handful of feet away. There were bugs last year in Apple's "Disable 6e" setting so it kept re-enabling itself. MLO was bad, so it would stick to 6ghz even when there was basically no usable signal.
Over the course of the past year, it's gotten pretty tolerable, but sometimes I still wonder why I bother-- I'm pretty sure my real world performance would be better if I just turned 6ghz off again.
I get 1,700 Mbps on Ookla with my iPhone 17 Pro. This is on 6ghz with line of sight to the AP, with MLO turned off.
I haven't experienced any issues with 6ghz enabled, although honestly there isn't much noticeable benefit on an iPhone either in real-world usage. MLO was causing some issues for my non-WiFi 7 Apple devices - since WiFi credentials are sync'd in iCloud, I found that my laptop was joining the MLO network even though I never explicitly told it to - so I have disabled MLO.
I just tested 1700mbits/s from my iPhone 17 PM in the next room over from my Ubiquiti E7 and I don’t even have MLO enabled. Something’s very wrong if you’re only getting 600mbit.
Optimizing for top speeds is the wrong way of looking at this.
Even the shittiest consumer WiFi will generally give a satisfactory speed test result with decent speeds, despite being completely unusable for anything real-time like video conferencing, Remote Desktop or gaming. Your random high-speed result may very well be down to luck and doesn’t represent how stable and usable the connection will be.
In fact what the author does here (cranking up the channel width, etc.) might make for a good speed test result but will start dropping out with terrible latency spikes and jitter the second he turns away from his WiFi AP.
Smaller channel widths are generally preferable as they provide a smaller top speed but said speed will be much more stable.
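There's a physical basis for that: the thermal noise floor is about -174 dBm/Hz at room temperature, so it rises with channel width and every doubling costs roughly 3 dB of SNR headroom. A quick sketch:

```python
import math

def noise_floor_dbm(bandwidth_hz: float) -> float:
    """Thermal noise floor at ~290 K: -174 dBm/Hz + 10*log10(bandwidth)."""
    return -174 + 10 * math.log10(bandwidth_hz)

for mhz in (20, 40, 80, 160, 320):
    print(f"{mhz:>3} MHz channel: noise floor {noise_floor_dbm(mhz * 1e6):.1f} dBm")
```

A 320 MHz channel sits ~12 dB above a 20 MHz one, so at the same received signal level it has ~12 dB less margin before it must fall back to slower, more robust modulation.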
Sure. Those are also things I optimize for. I'm using 40mhz 5ghz channels and 20mhz 2.4ghz channels. I'm in the 'burbs, but silicon valley, and small lots, so there's definitely some contention for channels. Just sharing my experience.
I get consistently ~1.3-1.6gbps on fast.com with similar setup (10g fiber, UDM Pro, E7, etc). I think where I live there are very few / zero folks on 6ghz...so, win.
Yep! Every single client required the password be typed in again, which is problematic in a house full of wifi devices (~50), some of which don't have keyboards, or have janky setup processes. Surely you're aware of wifi devices that need you to connect to their own SSID to set them up, or require an app and a setup process.
That’s interesting. My testing with EAP-TLS and OWE networks has shown modern clients will simply create another profile when they detect the change in the AKM suite. Hard roam between WPA2/WPA3, but still seamless for the client.
I guess we're going to let AT&T, Verizon and everyone else just squat the entire spectrum. "5G" and the pillaging and theft of spectrum that seems to just sit idle anyways has been such a scam. If they wanted innovation there should be more ISM bands and less dependence being encouraged on wireless providers for "Internet access" as opposed to just biting the bullet and running more fiber and copper. But that would be bad for Verizon, T-Mobile and AT&T's bottom line so obviously we can't do that.
How would that work… will they have to force manufacturers to recall or issue mandatory updates to routers which already support it?
FCC enforcement for interference can work for occasional troublemakers but there’s no way they can go after every single consumer who (most likely not even realizing it) bought a 6Ghz-capable router that is encroaching on the now-privatized frequency band.
OpenWRT does support 802.11r fast roaming for multiple APs.
The problem with OpenWRT is/was the configuration of multiple APs. There is OpenWISP, but they mostly target very large setups (>100 APs).
So I built OpenSOHO using the OpenWISP daemons on the AP and a pocketbase frontend. (https://github.com/rubenbe/opensoho).
No band steering yet unfortunately.
(TLDR: if you want to use bleeding edge technology you must use bleeding edge drivers and firmware blobs)
We have tested WiFi-7 gear in our lab: from the cheapest TP-Link Omada EAP783 to the latest most expensive Cisco AP+Controller.
Our findings:
- Driver quality from the modems is still below average on Linux. If you want to test Wifi-7 go with the Intel BE200 card - most stuff works there. Warning: this card does not work with AMD CPUs.
- We have seen quite a bit of problems from Qualcomm and Mediatek cards. Either latency issues, weirdo bugs on 6GHz (not showing all SSIDs) or throughput problems
- Always go with the latest kernel with the freshest firmware blobs
- MLO is difficult to get running properly. Very buggy from all sides. Also needs the latest version of wpa_supplicant - otherwise it will not come up. And be aware: there are several MLO modes and not all of them offer "two links for twice the bandwidth".
Also expect to hit problems from AP side. If you read the TP Omada firmware changelogs you see that they are still struggling with a lot of basic functionality. So keep them updated to the latest beta versions too.
I use a Qualcomm QCNCM865 in my private setup with an AMD CPU. Feels like the latest firmware blobs and kernel drivers brought stability to their components.
As I have been trying to tell the world, and keep repeating: the best version and implementation of WiFi 6E is WiFi 7. So if anyone wants decent WiFi 7 they will have to wait till WiFi 8.
> Running iperf server on the router itself creates CPU contention between the WiFi scheduling and the iperf process. The router’s TCP stack isn’t tuned for this either. Classic mistake.
Can you elaborate on this? I don't know much about WiFi so I'm curious what CPU work the router needs to do and what wouldn't be offloaded to hardware somehow (like most routing/forwarding/QoS duties can be).
It has nothing to do with WiFi even; when running a test you need a server that emits the test data - this could be a standard HTTP server on the internet (in case of public speed tests) or a binary like iperf that synthesizes the test data on the fly.
You need to ensure the server is able to send the test data quickly enough so that the network link becomes the bottleneck.
In his case he was running the test server on the router, and the router’s CPU was unable to churn out the data quickly enough to actually saturate the network link (most network equipment does the network switching/routing/NAT in hardware and so doesn’t actually come equipped with a CPU that is capable of line-rate TCP because it’s not actually needed in normal operation).
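The same constraint applies to any test server: the far end has to synthesize data faster than the link can carry it. A minimal Python stand-in for iperf over loopback (a sketch only; the real tool is far better optimized, and here the pure-Python sender itself becomes the bottleneck, which is exactly the failure mode being described):

```python
import socket
import threading
import time

def serve(srv: socket.socket, payload: bytes, duration: float) -> None:
    """Accept one client and blast data at it for `duration` seconds."""
    conn, _ = srv.accept()
    end = time.monotonic() + duration
    with conn:
        while time.monotonic() < end:
            conn.sendall(payload)
    srv.close()

def measure(port: int) -> float:
    """Drain the socket until the sender closes; return goodput in Mbit/s."""
    total = 0
    start = time.monotonic()
    with socket.create_connection(("127.0.0.1", port)) as s:
        while chunk := s.recv(65536):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total * 8 / elapsed / 1e6

srv = socket.create_server(("127.0.0.1", 0))  # port 0: pick any free port
port = srv.getsockname()[1]
sender = threading.Thread(target=serve, args=(srv, b"\x00" * 65536, 1.0))
sender.start()
mbps = measure(port)
sender.join()
print(f"{mbps:.0f} Mbit/s over loopback (the Python sender is the bottleneck)")
```

If the number printed here is below your link speed, the test told you about the server's CPU, not about your network.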
The 2.5Gbit USB network adapters using the Realtek driver are actually bugged on macOS and only max out at 1.9Gbit/s or so. Sadly the solution has been to use non-Realtek 2.5Gbit adapters or simply get the 5Gbit Realtek ones that sell for almost the same price.
Are you sure? I just bought the Ugreen 2.5Gbps yesterday for this, and it uses a Realtek RTL8156BG chip. That’s the one I used to get way above 2Gbps straight to the UDR.
I have a few cheap Realtek 2.5Gbps dongles spread around my house and get 2.3 Gbps TX and 1.9 Gbps RX (running iperf3 --bidir to a LAN machine with 10 Gbps).
Still beats Wi-Fi by a mile so I'm not complaining.
I had a similar issue but on unifi gateway lite after upgrading to 1gig fibre, I couldn't get above about 250-300mbps, even wired. Everything looked good in the unifi app. Turns out in the unifi web UI there was a "use hardware acceleration" checkbox for the gateway that was unticked and not even visible in the app. Ticked that and now I am getting 900+mbps
I also sometimes have alerts saying more than one device is using the same IP address (DHCP issues) but it won't tell me which ones! At least give me the MAC addresses!
Unifi's stuff is great, but the software is sometimes infuriating.
Ah haven't looked at their current offerings in a while, I am still on the first gen USG 1000fdx but hand-rolling a 2.5 router (Radxa E52C, it's nifty) to replace it when I stop being lazy.
You are right about Unifi's software being pain and I love that they keep changing the UI, the controller on the server side is dependency hell, and mongodb to boot just in case you need to manage n^webscale deployments.
It's been a problem for _years_. Basically the wifi card switches to another channel to see if anyone wants to do airdrop every so often. It's a bit of a joke to be honest that Apple still haven't fixed this.
I get 1.6 Gbps line-of-sight no trouble with my U7 Pro, but I haven't managed to get MLO working. My only device with MLO support is the iPhone 16 Pro Max, and it refuses to connect to an SSID with MLO turned on...
I brought my WiFi 7-capable ASUS RT-BE96U to Germany (from China) and I proudly notice that my average download speed is up to ~105 Mbit from ~95 Mbit with the stock Vodafone router.
It's nice to be able to do networked stuff with the network.
32GB isn't very big these days. In terms of cost, a decent cheeseburger costs more than a 32GB flash card does.
A few months ago I needed a friend to send me a 32GB file. This took over 8 hours to accomplish with his 10Mbps upstream. 8 hours! I felt like it was 1996 again and I was downloading Slackware disksets with a dialup modem.
We needed to set up a resumable way to get his computer to send that file to my computer, and be semi-formal about it because 8 hours presents a lot of time for stuff to break.
But if we had gigabit speeds, instead? We could have moved that file in less than 5 minutes. That'd have been no big deal, with no need to be formal at all: If a 5-minute file transfer dies for some reason, then it's simple enough to just start it over again.
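The arithmetic behind those numbers, assuming the link stays saturated and ignoring protocol overhead:

```python
def transfer_time_s(size_bytes: float, link_bps: float) -> float:
    """Ideal transfer time: payload bits divided by link rate."""
    return size_bytes * 8 / link_bps

size = 32e9  # the 32 GB file
print(f"at 10 Mbps: {transfer_time_s(size, 10e6) / 3600:.1f} hours")
print(f"at 1 Gbps:  {transfer_time_s(size, 1e9) / 60:.1f} minutes")
```

About 7.1 hours ideal at 10 Mbps; TCP overhead and real-world dips push it past 8. At a gigabit it's roughly 4.3 minutes.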
This is like asking why would anyone need more than a standard 110v North American electrical outlet in their home? Why would you ever install a higher capacity 220v socket somewhere?
Because it's a utility and there's a wide world of use cases out there.
For electrical maybe someone wants to charge an electric car fully overnight, or use a welder in their garage. Or use some big appliance in their kitchen.
For Internet maybe they make videos, games or other types of data-heavy content and need to be able to upload and download it.
I have 1Gbit at home, but almost never reach those speeds when downloading games. It’s one of those cases where it makes sense (I want to play now!), but I’m under the impression the limit is upstream (at steam most likely), rather than on my connection. (I do get those speeds on speed tests, doesn’t seem to be my setup).
Steam is tricky cause it has multiple potential bottlenecks. The steam cache, internet connection, decompression (i.e. cpu) and storage. Often hard to tell which limit you're hitting
ISPs happily collaborate with and put speed test servers in privileged locations on their network so you will get higher speeds there even if the actual peering to the outside world is much slower.
You can try Fast.com (Netflix) or Cloudflare’s one which are explicitly designed to work around this by serving the test data from the same endpoints the serve actual customer data, so ISPs can’t cheat.
This still doesn’t guarantee however that you will achieve this speed to any random host on the internet - their pipe to Cloudflare/Netflix may very well be fat and optimized but it doesn’t guarantee their pipe to a random small hosting provider doesn’t go over a 56k modem somewhere (I jest.. but only a bit).
Given that whether you get 30mbit or 30gbit from Netflix won’t make a blind bit of difference it’s not that useful a test. It doesn’t do upload either as Netflix is all about consumption.
Test to where you want to exchange high speed traffic.
You might check what region Steam is downloading from (it's in settings -> Download or something similar). If it's selected poorly, you might do better by picking one yourself.
To transfer files? Like large virtual machines, huge video files. Backup their files quickly. To support a homelab to learn new skills. To stream uncompressed video. To download 300 GB monster games.
Some people can manage with slow network speeds at home, even though 100 Gbps single mode fiber is perfectly doable nowadays. And it's reasonable, because new SSDs do almost 120 Gbps.
1 Gbps made sense 20 years ago when single hard disks had similar performance. For some weird reason LAN speeds did not improve at the same rate as the disks did.
But then again, I guess many could also still manage with 100 Mbps connectivity at home. Still enough for 4k video, web browsing and most other "ordinary" use cases.
100Gbps over the LAN is unlikely to do you much good because not only is it expensive to get that kind of bandwidth end-to-end over the internet but most OS’ network stacks and protocols (HTTPS/etc) are not efficient enough to take advantage of it (you will be bottlenecked by the CPU). So there is very little consumer and even business (outside of datacenters) demand for it because even just sticking a 100Gbps NIC and pipe in a consumer machine is unlikely to give you any more than 10Gbps anyway.
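Rough numbers behind the CPU-bound claim: at 100 Gbit/s with standard 1500-byte frames, the per-packet time budget is on the order of a hundred nanoseconds:

```python
def ns_per_packet(link_bps: float, frame_bytes: int) -> float:
    """Per-frame time budget if the CPU must touch every packet."""
    packets_per_sec = link_bps / (frame_bytes * 8)
    return 1e9 / packets_per_sec

for bps, label in ((10e9, "10 Gbps"), (100e9, "100 Gbps")):
    print(f"{label}: {ns_per_packet(bps, 1500):.0f} ns per 1500-byte frame")
```

120 ns per frame is only a few hundred CPU cycles, which is why sustaining line rate needs segmentation offload, jumbo frames, or kernel-bypass stacks rather than a vanilla socket loop.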
> For some weird reason LAN speeds did not improve at the same rate as the disks did.
When it comes to wired, sending data 15cm is a very different problem than sending it 100m reliably - that, and consumer demand for >1Gbps wasn't there, which made the consumer equipment expensive because there was no mass market to drive prices down. M.2 entirely removes the cable.
I figured 10Gbps would be the standard by now (and was way off) and yet its not even the default on high end motherboards - 2.5Gbps is becoming a lot more common though.
> I figured 10Gbps would be the standard by now (and was way off) and yet its not even the default on high end motherboards - 2.5Gbps is becoming a lot more common though.
All the new MacBook Pros come with 64Gbps wired networking.
With an adapter you can also connect 100GbE, but that’s not very special.
Most software and CDNs also don't utilise fast connections properly. It's kind-of a chicken and egg situation where hardware doesn't improve because customers don't demand it because software and services can't handle it (and you can start from the beginning).
It is very slowly improving, but by far the fastest widely used services I've seen are a few gacha games and Steam both downloading their updates. Which is rather sad.
Windows Update is slow, macOS update is abysmally slow, both iOS and Android stores also bottleneck somewhere. Most cloud storage services are just as bad. Most of these can't even utilise half a gigabit efficiently.
Not sure what GP’s situation is, but I have a 100Mb/s fibre internet package but all hooked up to 1Gbps capable equipment on my side.
My typical speed test results are around 104Mb/s. Before being upgraded, on the 50Mb/s package I was getting 52Mb/s.
My suspicion is that fibre network operator (OpenServe in South Africa) applies rate limits which are technically a little above what their customers are paying for, perhaps to avoid complaints from people who don’t understand overheads.
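The overheads hypothesis checks out numerically. A sketch assuming a 1500-byte MTU, plain Ethernet framing (preamble, header, FCS, inter-frame gap), and IPv4+TCP headers without options:

```python
def tcp_goodput_mbps(line_rate_mbps: float, mtu: int = 1500) -> float:
    """TCP payload rate over Ethernet at a given shaped line rate."""
    wire_bytes = mtu + 38            # + preamble/SFD 8, header 14, FCS 4, gap 12
    payload_bytes = mtu - 20 - 20    # - IPv4 header, - TCP header (no options)
    return line_rate_mbps * payload_bytes / wire_bytes

print(f"100 Mb/s line -> {tcp_goodput_mbps(100):.1f} Mb/s TCP goodput")
print(f"104 Mb/s line -> {tcp_goodput_mbps(104):.1f} Mb/s TCP goodput")
```

A shaper set to exactly 100 Mb/s would make TCP speed tests top out near 95 Mb/s; provisioning a few percent above the advertised rate keeps the measured number at or above what's on the bill.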
That's pretty typical. It's similar in the States: Spectrum, for example, generally overprovisions their connections a bit just because customer support is expensive to provide, and when things [ideally] work even better than advertised, support costs go down.
And on that ISP side of things, it's a software-defined limit; it's just a field in a database or a config file that can be tuned to be whatever they want it to be.
And that's just not possible*. The Mikrotik Hex S's own hardware Ethernet interfaces are 1000BASE-T, and it's simply not possible to squeeze more than 1.0Gbps through a 1000BASE-T interface. (It also has an SFP port that is branded as "1.25Gbps," but in reality it, too, is limited to no more than 1.0Gbps of data transfer.)
*: Except... the 2025 version of the Hex S, E60iUGS, does have a 2.5Gbps SFP port that could be used as an ISP connection, and a much-improved internal fabric compared to the previous version. But the rest of its ports are just 1Gbps, which suggests a hard 1Gbps limit for any single connected LAN device.
Except... Mikrotik's RouterOS allows hardware to be configured in many, many ways -- including using LACP to aggregate ports together. With the 2025 Hex S, an amalgamation could be created that would allow a single client computer to get >1Gbps from an ISP. It might even be possible to be similarly-clever with the previous version of the Hex S. But neither version will be able to do end-to-end >1Gbps without very deliberate and rather unusual effort.
I love that we're making wifi so high frequency that we're back to running cables to every room.
Running ethernet to every room is always going to be a good idea
> General rule on TX power: start on low and increase only if you know (or can confirm) it helps. Go back down if it doesn't.
The people reading this are techies. Nobody else will do this. Either it should be built into the protocol, or the advice should be abandoned.
See https://en.wikipedia.org/wiki/5-over-1 (wood-framed apartment floors over a concrete podium)
You're describing the situation where the prisoner's dilemma has already gone wrong, with someone else not-nice shouting over you trying to be nice.
In other words: you don't need carrier sensing to work if you're not getting drowned in noise to begin with.
Notably even drywall attenuates 5/6ghz to an obvious degree. It’s quite useful in apartments.
Wifi7 can use 320MHz channels on 6GHz. There's only 1 of those in many locations.
Yes, exactly, this means you shouldn’t use 320Mhz.
Find the quietest 20MHz channel available on 5 or 6 GHz. It’ll be far more reliable than trying to battle someone over the 320.
But also faaaaaaar slower
I'm the only person with a router that's broadcasting 6GHz in my apartment complex, so until that changes I'm gonna keep using High transmit power :)
There might be more people than you, but 6GHz with wide channels doesn’t penetrate very far. You wouldn’t be able to see all of the networks, just maybe your adjacent neighbors.
Quite right. However, if your wifi bridge has an option for auto-tuning the power then that might be a future-proofing option, assuming that everyone uses it, which they probably won't. Sigh.
If wifi becomes a pain within a shared building then seriously consider ethernet. Slimline stick-on trunking will hide the wires at about £1-2/m. A box of CAT6, solid core, is less than £1/m. You will also need some back boxes, modules and face plates (~£2.50 each) and a punch-down tool (a fiver?). Or you can try and bodge RJ45 plugs onto the solid core CAT6 - please don't unless you really know what you are doing: it looks messy and is seriously prone to weird failures.
‘High power’ on a router is mostly useful to borderline clients. Without that, they likely won’t even be able to see it. It’s hard to auto detect that situation initially, since how can you tell someone you can’t hear to get louder?
Went through a similar tuning process with Wi-Fi 6 on OpenWRT recently: https://taoofmac.com/space/reviews/2025/09/14/1630
In my case, I forgot I had to change encryption type to associate at higher speeds.
Oh, good point. That was actually the first thing I missed, but when I created a new Wi-Fi 7-only SSID, Unifi wouldn’t let me pick anything lower than WPA3 if I only used 6 GHz. So that sort of fixed itself.
I'm lazy so I just fire off the occasional speed tests using Ookla.
It doesn't _really_ seem to matter what channel width or frequency I use, I tend to get around 600Gbps from my iPhone (17, pro).
When I make it a point to ensure I'm on the correct AP, with line of sight from a few feet away, I sometimes break 1Gbps. I was surprised, watching TV the other day, to randomly get a 1.2Gbps speedtest, which is one of the faster ones I've seen on WiFi.
(10gbps internet, UDM Pro, UDM enterprise 2.5Gbps switch for clients, PoE WiFi 7 APs on 6ghz).
Honestly, I'd say overall 6ghz has been more trouble than it's worth. Flipping the switch to WPA2/3 as required by 6ghz broke _all_ of my clients last year, so I had to revert and now I just have a separate SSID for clients I have the energy to manually retype the password into. 6Ghz pretty much only works line of sight and from a handful of feet away. There were bugs last year in Apple's "Disable 6e" setting so it kept re-enabling itself. MLO was bad, so it would stick to 6ghz even when there was basically no usable signal.
Over the course of the past year, it's gotten pretty tolerable, but sometimes I still wonder why I bother-- I'm pretty sure my real world performance would be better if I just turned 6ghz off again.
I get 1,700 Mbps on Ookla with my iPhone 17 Pro. This is on 6ghz with line of sight to the AP, with MLO turned off.
I haven't experienced any issues with 6ghz enabled, although honestly there isn't much noticeable benefit on an iPhone either in real-world usage. MLO was causing some issues for my non-WiFi 7 Apple devices - since WiFi credentials are sync'd in iCloud, I found that my laptop was joining the MLO network even though I never explicitly told it to - so I have disabled MLO.
Huh, I have a random 2.5G Wi-Fi 6 router with 2.5G provider connection.
I just tested 1.3Gbps through some reinforced concrete on Wi-Fi 6, no line of sight.
Is all that tinkering really needed?
From how far away?
I just tested 1700mbits/s from my iPhone 17 PM in the next room over from my Ubiquiti E7 and I don’t even have MLO enabled. Something’s very wrong if you’re only getting 600mbit.
My MSM560 that's approximately 15 years old can do >700Mbps with a 13 Pro. If you're getting less on newer hardware something is terribly wrong.
Optimizing for top speeds is the wrong way of looking at this.
Even the shittiest consumer WiFi will generally give a satisfactory speed test result, despite being completely unusable for anything real-time like video conferencing, Remote Desktop or gaming. Your random high-speed result may very well be down to luck and doesn’t represent how stable and usable the connection will be.
In fact what the author does here (cranking up the channel width, etc.) might make for a good speed test result but will start dropping out with terrible latency spikes and jitter the second he turns away from his WiFi AP.
Smaller channel widths are generally preferable as they provide a smaller top speed but said speed will be much more stable.
Sure. Those are also things I optimize for. I'm using 40mhz 5ghz channels and 20mhz 2.4ghz channels. I'm in the 'burbs, but silicon valley, and small lots, so there's definitely some contention for channels. Just sharing my experience.
I get consistently ~1.3-1.6gbps on fast.com with similar setup (10g fiber, UDM Pro, E7, etc). I think where I live there are very few / zero folks on 6ghz...so, win.
>> get around 600Gbps from my iPhone 17
!
What kind of magic iPhone you have? I don't think there is any device to achieve anything close to that today[1]
---
[1] The most recent (2024) record is claimed to be 938 Gbps, but only over a 12cm distance[2]
[2] https://discovery.ucl.ac.uk/id/eprint/10196331/1/938nbspGb_s...
Obviously he/she meant 600 Mbps.
> Flipping the switch to WPA2/3 as required by 6ghz broke _all_ of my clients last year
All? Really?
> and now I just have a separate SSID for clients I have the energy to manually retype the password into
Type it once and it will be saved, as has been the case for years.
Yep! Every single client required the password be typed in again, which is problematic in a house full of wifi devices (~50), some of which don't have keyboards, or have janky setup processes. Surely you're aware of wifi devices that need you to connect to their own SSID to set them up, or require an app and a setup process.
That’s interesting. My testing for EAP-TLS and OWE networks has shown modern clients will simply create another profile when they detect the change in the AKM suite. Hard roam between WPA2/WPA3, but still seamless for the client.
It's wasted effort in the US, since the 2025 budget bill directs the FCC to sell off much of the 6GHz band on which WiFi 7 depends.
https://arstechnica.com/tech-policy/2025/07/trump-and-congre...
I guess we're going to let AT&T, Verizon and everyone else just squat the entire spectrum. "5G" and the pillaging and theft of spectrum that seems to just sit idle anyways has been such a scam. If they wanted innovation there should be more ISM bands and less dependence being encouraged on wireless providers for "Internet access" as opposed to just biting the bullet and running more fiber and copper. But that would be bad for Verizon, T-Mobile and AT&T's bottom line so obviously we can't do that.
How would that work… will they have to force manufacturers to recall or issue mandatory updates to routers which already support it?
FCC enforcement for interference can work for occasional troublemakers but there’s no way they can go after every single consumer who (most likely not even realizing it) bought a 6Ghz-capable router that is encroaching on the now-privatized frequency band.
Germany also wants to sell the 6Ghz frequencies to MNOs
> I recently upgraded from a UniFi Dream Machine to a UniFi Dream Router 7
What do these devices do that can't be accomplished by an OpenWrt One + an external AP for less money and fully FOSS?
Another option would be a mini-PC running Linux, but it's perhaps overkill for a domestic router.
Edit: Actually the OpenWrt One does have built-in WiFi, so you don't even need the external AP.
> What do these devices do that can't be accomplished by an OpenWrt One + an external AP for less money and fully FOSS?
Nice UI (as the company is best known for https://ui.com)
Good band steering and roaming. I would not use openwrt if I had to use multiple APs to cover the area.
OpenWRT does support 802.11r fast roaming for multiple APs. The problem with OpenWRT is/was the configuration of multiple APs. There is OpenWISP, but they mostly target very large setups (>100 APs). So I built OpenSOHO using the OpenWISP daemons on the AP and a pocketbase frontend. (https://github.com/rubenbe/opensoho). No band steering yet unfortunately.
(TLDR: if you want to use bleeding edge technology you must use bleeding edge drivers and firmware blobs)
We have tested WiFi-7 gear in our lab: from the cheapest TP Omada EAP783 to the latest most expensive Cisco AP+Controller.
Our findings:
- Driver quality from the modems is still below average on Linux. If you want to test Wifi-7 go with the Intel BE200 card - most stuff works there. Warning: this card does not work with AMD CPUs.
- We have seen quite a bit of problems from Qualcomm and Mediatek cards. Either latency issues, weirdo bugs on 6GHz (not showing all SSIDs) or throughput problems
- Always go with the latest kernel with the freshest firmware blobs
- MLO is difficult to get running properly. Very buggy from all sides. Also needs the latest version of wpa_supplicant - otherwise it will not come up. And be aware: there are several MLO modes and not all of them offer "two links for twice the bandwidth".
Also expect to hit problems from AP side. If you read the TP Omada firmware changelogs you see that they are still struggling with a lot of basic functionality. So keep them updated to the latest beta versions too.
I use a Qualcomm QCNCM865 in my private setup with an AMD CPU. Feels like the latest firmware blobs and kernel drivers have brought stability to their components.
> Warning: this card does not work with AMD CPUs.
what causes that? (I have no idea how wifi cards work)
I'm just guessing, but I would say Intel CNVi. https://en.wikipedia.org/wiki/CNVi
That's the reason.
But I can confirm that the Intel BE200 works with the popular Intel n100/n305 Mini Computers.
As I have been trying to tell the world and keep repeating: the best version and implementation of WiFi 6E is WiFi 7. So if anyone wants decent WiFi 7, they will have to wait till WiFi 8.
> Running iperf server on the router itself creates CPU contention between the WiFi scheduling and the iperf process. The router’s TCP stack isn’t tuned for this either. Classic mistake.
Can you elaborate on this? I don't know much about WiFi so I'm curious what CPU work the router needs to do and what wouldn't be offloaded to hardware somehow (like most routing/forwarding/QoS duties can be).
It has nothing to do with WiFi even; when running a test you need a server that emits the test data - this could be a standard HTTP server on the internet (in case of public speed tests) or a binary like iperf that synthesizes the test data on the fly.
You need to ensure the server is able to send the test data quickly enough so that the network link becomes the bottleneck.
In his case he was running the test server on the router, and the router’s CPU was unable to churn out the data quickly enough to actually saturate the network link (most network equipment does the network switching/routing/NAT in hardware and so doesn’t actually come equipped with a CPU that is capable of line-rate TCP because it’s not actually needed in normal operation).
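A quick way to see this effect without any network at all (my own sketch, not the author's setup): push data through a loopback TCP socket and measure how fast this machine's CPU and TCP stack alone can move the bytes. On a router-class CPU this number can easily come in below the WiFi link rate, which is exactly the bottleneck described above.

```python
import socket
import threading
import time

# Sketch: measure how fast this machine can generate and move test
# traffic over loopback. No physical link is involved, so the result
# is purely a CPU/stack ceiling, like an iperf server on a weak router.

PAYLOAD = b"x" * 65536           # send in 64 KiB chunks
TOTAL = 256 * 1024 * 1024        # 256 MiB test transfer

def server(listener, result):
    conn, _ = listener.accept()
    received = 0
    while True:
        chunk = conn.recv(1 << 20)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    result.append(received)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # pick a free ephemeral port
listener.listen(1)
port = listener.getsockname()[1]

result = []
t = threading.Thread(target=server, args=(listener, result))
t.start()

client = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    client.sendall(PAYLOAD)
    sent += len(PAYLOAD)
client.close()
t.join()
elapsed = time.perf_counter() - start

gbps = sent * 8 / elapsed / 1e9
print(f"pushed {sent} bytes in {elapsed:.2f}s -> {gbps:.1f} Gbit/s (CPU-bound ceiling)")
```

If the number this prints is close to (or below) your WiFi link rate, the machine generating the traffic, not the radio, is what your speed test is measuring.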
The 2.5Gbit USB network adapters using the Realtek driver are actually bugged on macOS and only max out at 1.9Gbit/sec or so. Sadly the solution has been to use non-Realtek 2.5Gbit adapters or simply get the 5Gbit Realtek ones that sell for almost the same price.
Are you sure? I just bought the Ugreen 2.5Gbps yesterday for this, and it uses a Realtek RTL8156BG chip. That’s the one I used to get way above 2Gbps straight to the UDR.
https://forums.macrumors.com/threads/anyone-seeing-speed-dro...
I have a few cheap Realtek 2.5Gbps dongles spread around my house and get 2.3 Gbps TX and 1.9 Gbps RX (running iperf3 --bidir to a LAN machine with 10 Gbps).
Still beats Wi-Fi by a mile so I'm not complaining.
There are times when low power is better, because it allows the router to ignore far away clients.
In simple terms, far away = more work to communicate = more airtime = less throughput.
It probably only matters with multiple devices.
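A rough sketch of that airtime math (the PHY rates below are assumed examples for a near and a borderline client, not measurements from this thread):

```python
# A frame's airtime is roughly payload_bits / phy_rate, so a far client
# syncing at a low MCS occupies the shared channel far longer per byte,
# starving everyone else of airtime.

payload_bytes = 1_000_000   # 1 MB of application data

near_rate_bps = 1_200e6     # client next to the AP (high MCS, assumed)
far_rate_bps = 86e6         # borderline client at the edge (low MCS, assumed)

near_airtime = payload_bytes * 8 / near_rate_bps
far_airtime = payload_bytes * 8 / far_rate_bps

print(f"near client: {near_airtime * 1000:.1f} ms of airtime per MB")
print(f"far  client: {far_airtime * 1000:.1f} ms of airtime per MB")
print(f"the far client uses ~{far_airtime / near_airtime:.0f}x the airtime")
```

With these example rates the far client burns roughly 14x the airtime for the same data, which is why dropping TX power to shed distant clients can raise total throughput.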
I had a similar issue but on unifi gateway lite after upgrading to 1gig fibre, I couldn't get above about 250-300mbps, even wired. Everything looked good in the unifi app. Turns out in the unifi web UI there was a "use hardware acceleration" checkbox for the gateway that was unticked and not even visible in the app. Ticked that and now I am getting 900+mbps
I also sometimes have alerts saying more than one device is using the same IP address (DHCP issues) but it won't tell me which ones! At least give me the MAC addresses!
Unifi's stuff is great, but the software is sometimes infuriating.
Another trap is that some of the Unifi features (IIRC their IDS is one of them) will cut throughput if you are running them.
You're right, however, that was one of the reasons I upgraded too. This one can handle the full 2.5 gigs even with IDS on.
Ah, haven't looked at their current offerings in a while; I am still on the first gen USG 1000fdx but hand-rolling a 2.5G router (Radxa E52C, it's nifty) to replace it when I stop being lazy.
You are right about Unifi's software being a pain, and I love that they keep changing the UI. The controller on the server side is dependency hell, and mongodb to boot, just in case you need to manage n^webscale deployments.
What hardware are you using? I'm not seeing anywhere near my advertised speeds (previously achieved via an Acer 'Gamer Router') with IDS on.
IDS is probably overkill for a home network anyway.
I recently replaced said router with a Dream Router 7.
The maximum routing speed Unifi Dream Router 7 can do with IDS on is 2.3Gbps according to their spec sheet
The wifi bottleneck is such a tragedy. We'd otherwise have 10G home broadband probably.
Just checked mine...getting exactly gigabit speeds. Weird because everything should be on 2.5.
Guess I need to do some debugging of my own
6Ghz for me causes random latency increases every 10 seconds or so, just had to stop using it because it drives me crazy
Mac? https://www.theregister.com/2025/10/23/apple_airdrop_awdl_la...
It's been a problem for _years_. Basically the wifi card switches to another channel to see if anyone wants to do airdrop every so often. It's a bit of a joke to be honest that Apple still haven't fixed this.
I get 1.6 Gbps line-of-sight no trouble with my U7 Pro, but I haven't managed to get MLO working. My only device with MLO support is the iPhone 16 Pro Max, and it refuses to connect to an SSID with MLO turned on...
I brought my WiFi 7-capable ASUS RT-BE96U to Germany (from China) and I proudly notice that my average download speed is up to ~105 Mbit from ~95 Mbit with the stock Vodafone router.
"Silicon Valley of Europe", my a*s.
Why do people need 2.5Gbps internet access or 1.7 Gbps on a home wifi network? What are folks doing at home?!?
It's nice to be able to do networked stuff with the network.
32GB isn't very big these days. In terms of cost, a decent cheeseburger costs more than a 32GB flash card does.
A few months ago I needed a friend to send me a 32GB file. This took over 8 hours to accomplish with his 10Mbps upstream. 8 hours! I felt like it was 1996 again and I was downloading Slackware disksets with a dialup modem.
We needed to set up a resumable way to get his computer to send that file to my computer, and be semi-formal about it because 8 hours presents a lot of time for stuff to break.
But if we had gigabit speeds, instead? We could have moved that file in less than 5 minutes. That'd have been no big deal, with no need to be formal at all: If a 5-minute file transfer dies for some reason, then it's simple enough to just start it over again.
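The arithmetic above holds up; a quick sanity calculation (decimal units assumed, ignoring protocol overhead and retries, which is why the real transfer ran a bit longer):

```python
# Back-of-envelope: 32 GB over a 10 Mbps upstream vs a 1 Gbps link.
size_bits = 32 * 8 * 1e9                # 32 GB file, decimal GB

slow_hours = size_bits / 10e6 / 3600    # at 10 Mbps
fast_minutes = size_bits / 1e9 / 60     # at 1 Gbps

print(f"32 GB at 10 Mbps: {slow_hours:.1f} hours")
print(f"32 GB at 1 Gbps:  {fast_minutes:.1f} minutes")
```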
This is like asking why would anyone need more than a standard 110v North American electrical outlet in their home? Why would you ever install a higher capacity 220v socket somewhere?
Because it's a utility and there's a wide world of use cases out there.
For electrical maybe someone wants to charge an electric car fully overnight, or use a welder in their garage. Or use some big appliance in their kitchen.
For Internet maybe they make videos, games or other types of data-heavy content and need to be able to upload and download it.
It is 2.5Gbps internet shared with everyone in the household. Once you start dividing by 4 or 5 people that number doesn't seem that impressive.
It wasn't that long ago that "internet" at home was literally just one person using it.
My anecdata is that I can download a steam or Xbox game (100+GB) in a few minutes. Plus it's just fun. High number = better and all that.
A few things come to mind:
- Games (400GB for Ark, 235GB for Call of Duty, 190GB for God of War)
- LLMs (e.g. DeepSeek-V3.2-Exp at 690GB or Kimi-K2 at 1030GB unquantized)
- Blockchains (Bitcoin blockchain approaching 700GB)
- Deep learning datasets (1.1PB for Anna's Archive, 240TB for LAION-5B at low resolution)
- Backups
- Online video processing/storage
- Piracy (Torrenting)
Of course you can download those things on a slower connection, but I imagine that it would be a lot nicer if it went faster.
> 400GB for Ark
Ark is a strange case. It compresses very very well. Most of it ends up with compression ratios of around 80%.
> Total size on disk is 628.32 GiB and total download size is 171.42 GiB.
From SteamDB's summary of Ark's content depots.
I have 1Gbit at home, but almost never reach those speeds when downloading games. It’s one of those cases where it makes sense (I want to play now!), but I’m under the impression the limit is upstream (at steam most likely), rather than on my connection. (I do get those speeds on speed tests, doesn’t seem to be my setup).
Steam is tricky cause it has multiple potential bottlenecks. The steam cache, internet connection, decompression (i.e. cpu) and storage. Often hard to tell which limit you're hitting
ISPs happily collaborate with and put speed test servers in privileged locations on their network so you will get higher speeds there even if the actual peering to the outside world is much slower.
As I was typing this it came to mind; I'll test against one of my own servers one of these days to confirm.
You can try Fast.com (Netflix) or Cloudflare’s one which are explicitly designed to work around this by serving the test data from the same endpoints the serve actual customer data, so ISPs can’t cheat.
This still doesn’t guarantee however that you will achieve this speed to any random host on the internet - their pipe to Cloudflare/Netflix may very well be fat and optimized but it doesn’t guarantee their pipe to a random small hosting provider doesn’t go over a 56k modem somewhere (I jest.. but only a bit).
Given that whether you get 30mbit or 30gbit from Netflix won’t make a blind bit of difference it’s not that useful a test. It doesn’t do upload either as Netflix is all about consumption.
Test to where you want to exchange high speed traffic.
Fast.com does an upload speed test, but it's hidden behind the "Show more info" button.
You might check what region Steam is downloading from (it's in settings -> Download or something similar). If it's selected poorly, you might do better by picking one yourself.
I have 5 gigabit and usually get ~1.2 gbps, sometimes get up to ~2 gbps from Steam.
I get full speed on steam downloads, even set the limit lower so youtube doesn't buffer.
To not bottleneck the mechanical hard drives in their NAS, or to download games at a reasonable speed.
Or even just work stuff, I've had to shift around several TB of 3D assets for my job while working from home.
Homelab or they are into big data set usage.
Or they seed large datasets for other researchers.
All the replies you get here are totally valid, but Ill throw in another one.
Why not? Life’s too short anyways, and playing around with tech is one of those things that bring me joy.
Shuffle around RAW files if you are doing photography. These are 50-150MB files. A lot of them.
To transfer files? Like large virtual machines, huge video files. Backup their files quickly. To support a homelab to learn new skills. To stream uncompressed video. To download 300 GB monster games.
Some people can manage with slow network speeds at home, even though 100 Gbps single mode fiber is perfectly doable nowadays. And it's reasonable, because new SSDs do almost 120 Gbps.
1 Gbps made sense 20 years ago when single hard disks had similar performance. For some weird reason LAN speeds did not improve at the same rate as the disks did.
But then again, I guess many could also still manage with 100 Mbps connectivity at home. Still enough for 4k video, web browsing and most other "ordinary" use cases.
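For context on that SSD figure, a rough unit conversion (the drive speed is an assumed example, e.g. a PCIe 5.0 NVMe doing around 14 GB/s sequential reads):

```python
# Bytes-per-second to bits-per-second: the gap between a fast NVMe SSD
# and a 1 Gbps LAN is roughly two orders of magnitude.
ssd_gbytes_per_s = 14            # assumed sequential read speed
ssd_gbps = ssd_gbytes_per_s * 8  # convert GB/s to Gbit/s
lan_gbps = 1

print(f"SSD: ~{ssd_gbps} Gbps sequential vs a {lan_gbps} Gbps LAN "
      f"({ssd_gbps // lan_gbps}x mismatch)")
```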
100Gbps over the LAN is unlikely to do you much good because not only is it expensive to get that kind of bandwidth end-to-end over the internet but most OS’ network stacks and protocols (HTTPS/etc) are not efficient enough to take advantage of it (you will be bottlenecked by the CPU). So there is very little consumer and even business (outside of datacenters) demand for it because even just sticking a 100Gbps NIC and pipe in a consumer machine is unlikely to give you any more than 10Gbps anyway.
> For some weird reason LAN speeds did not improve at the same rate as the disks did.
When it comes to wired, sending data 15cm is a very different problem than sending it 100m reliably. That, and consumer demand for >1Gbps wasn't there, which made the consumer equipment expensive because there was no mass market to drive prices down; M.2 removes the cable entirely.
I figured 10Gbps would be the standard by now (and was way off) and yet its not even the default on high end motherboards - 2.5Gbps is becoming a lot more common though.
> I figured 10Gbps would be the standard by now (and was way off) and yet its not even the default on high end motherboards - 2.5Gbps is becoming a lot more common though.
All the new MacBook Pros come with 64Gbps wired networking.
With an adapter you can also connect 100GbE, but that’s not very special.
Most software and CDNs also don't utilise fast connections properly. It's kind-of a chicken and egg situation where hardware doesn't improve because customers don't demand it because software and services can't handle it (and you can start from the beginning).
It is very slowly improving, but by far the fastest widely used services I've seen are a few gacha games and Steam both downloading their updates. Which is rather sad.
Windows Update is slow, macOS update is abysmally slow, both iOS and Android stores also bottleneck somewhere. Most cloud storage services are just as bad. Most of these can't even utilise half a gigabit efficiently.
I can't comment on the internet, but high-bandwidth wifi helps with VR streaming quality.
Running a $60 Mikrotik HEX S 2025 and getting 1.2 Gbps on a “1G” connection!
If that router has a 1Gbit port it’s physically impossible and likely a measurement artifact.
Actual speed on a 1Gbit port is something like 940Mbps according to experience (I believe the theoretical max there is 970).
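That ~940 Mbps figure falls out of per-frame overhead; a quick sketch (assuming a standard 1500-byte MTU and TCP over IPv4 with no header options):

```python
# Where "gigabit" loses ~6%: every 1500-byte frame carries headers and
# fixed on-wire overhead that don't count as goodput.
mtu = 1500
tcp_ip_headers = 40              # 20 bytes IPv4 + 20 bytes TCP
eth_overhead = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap

payload = mtu - tcp_ip_headers   # 1460 bytes of useful data per frame
wire_bytes = mtu + eth_overhead  # 1538 bytes actually on the wire

goodput_mbps = 1000 * payload / wire_bytes
print(f"max TCP goodput on 1000BASE-T: ~{goodput_mbps:.0f} Mbps")
```

This gives roughly 949 Mbps at the TCP payload level; real-world measurements land a bit lower (the often-quoted ~940) once ACK traffic and TCP options are accounted for.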
Not sure what GP’s situation is, but I have a 100Mb/s fibre internet package but all hooked up to 1Gbps capable equipment on my side.
My typical speed test results are around 104Mb/s. Before being upgraded, on the 50Mb/s package I was getting 52Mb/s.
My suspicion is that fibre network operator (OpenServe in South Africa) applies rate limits which are technically a little above what their customers are paying for, perhaps to avoid complaints from people who don’t understand overheads.
104mb/s is well under the theoretical max of 1gig networking, so you're truly just being limited by your ISP based on the plan you pay for.
The poster above is claiming to see a physically impossible speed on 1gig networking.
The ISP sells a 100mbit package and delivers more than that, as the line speed will be higher and it’s just policed in some fashion
That's pretty typical. It's similar in the States: Spectrum, for example, generally overprovisions their connections a bit just because customer support is expensive to provide, and when things [ideally] work even better than advertised, support costs go down.
And on that ISP side of things, it's a software-defined limit; it's just a field in a database or a config file that can be tuned to be whatever they want it to be.
But the fellow up there says that they got 1.2Gbps through a Mikrotik Hex S: https://mikrotik.com/product/hex_s
And that's just not possible*. The E60iUGS Mikrotik Hex S's own hardware Ethernet interfaces are 1000BASE-T, and it's simply not possible to squeeze more than 1.0Gbps through a 1000BASE-T interface. (It does also have an SFP port branded as "1.25Gbps," but in reality it, too, is limited to no more than 1.0Gbps of data transfer.)
*: Except... the 2025 version of the Hex S, E60iUGS, does have a 2.5Gbps SFP port that could be used as an ISP connection, and a much-improved internal fabric compared to the previous version. But the rest of its ports are just 1Gbps, which suggests a hard 1Gbps limit for any single connected LAN device.
Except... Mikrotik's RouterOS allows hardware to be configured in many, many ways -- including using LACP to aggregate ports together. With the 2025 Hex S, an amalgamation could be created that would allow a single client computer to get >1Gbps from an ISP. It might even be possible to be similarly-clever with the previous version of the Hex S. But neither version will be able to do end-to-end >1Gbps without very deliberate and rather unusual effort.
Wired Ethernet is typically full duplex.