I readily admit that I could be all wet on this blog, so I have posed the title as a request rather than an assertion. I am going to state my current opinions on various aspects of 802.11ac Wave-2 technology, as it relates to enterprise customers, in the hopes that if I’m off-base, someone smarter than me will help me understand the error(s) in my thinking. If I receive no arguments or instruction regarding the information listed below, then I will make the (hopefully correct) assumption that my current opinions are accurate. I’m going to go through a variety of Wave-2 related topics and list my thoughts, in no specific order.
Transmit Beamforming (TxBF)
* No WiFi Alliance certification and no certified equipment
* High channel overhead due to channel-matrix feedback from every participating client.
* Not needed at short range (because RSSI is already plenty high) unless you’re trying to do MU-MIMO (which has even more overhead).
* Not good at long range because the channel feedback matrix is too complicated.
* Can’t be combined with full spatial multiplexing (beamforming gain requires more radio chains than spatial streams).
MU-MIMO
* Requires TxBF support, which isn’t available.
* Downlink technology only, so even the theory isn’t that great.
* No WiFi Alliance certification and no certified equipment
* No client support (either for MU-MIMO or TxBF)
* Downlink traffic for orthogonally-positioned devices is unlikely to fill AP queues at the same moment unless every device in an area is streaming unicast simultaneously, which would only happen in niche use cases.
Channel Width
* 160MHz channels. Never, ever… EVER… use 160MHz channels in the enterprise.
* 80MHz channels. No. Don’t. Not even “dynamic” 80MHz channels, because they cause a CCI mess in enterprise environments. Been there, seen that. It’s horrible.
* 40MHz channels. No. Stop it. Unless… they are only used in very specific areas for very specific (and strongly justifiable) reasons. Use of ubiquitous 40MHz channels detracts greatly from system-wide capacity and greatly increases the chance of CCI with nearby systems. With small networks, where there aren’t many (or any) neighbors, you might be OK using 40MHz channels, but otherwise, NO.
* Enterprises use 20MHz channels to maximize system-wide capacity and to minimize CCI. Do this. (See the channel-count sketch after this list.)
* Hint: enable the 40MHz Intolerant bit (a 2.4GHz coexistence mechanism) so that your neighbors can’t use wide channels there either. 😉
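To put numbers on the width-vs-reuse trade-off, here is a minimal Python sketch, assuming approximate U.S. 5GHz channel counts with all DFS channels certified and usable (exact counts vary by regulatory domain):

```python
# Approximate U.S. 5GHz non-overlapping channel counts by width,
# assuming all DFS channels are certified and usable.
channels_by_width = {20: 25, 40: 12, 80: 6, 160: 2}

for width, count in channels_by_width.items():
    print(f"{width}MHz wide: ~{count} non-overlapping channels for the reuse plan")
```

Every doubling of channel width roughly halves the number of separate collision domains available to keep CCI out of the design.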
4 Spatial Streams (4SS)
* No clients available with 4SS capability.
* Eventual 4SS clients will only be desktops and large laptops, which are in the extreme minority due to today’s highly mobile society.
256QAM
* Great when you are within arm’s reach of the AP and can easily cable at 1Gbps full duplex, but even the best APs on the market, used with an average client device, won’t sustain 256QAM at anything over 50 feet (~15 meters). You can often expect client/AP connections to downshift modulation from 256QAM to 64QAM even at <30 feet.
* Requires very high SNR, which depends on a low noise floor. Noise floors are unpredictable and rarely changeable.
* The high RSSI needed can easily cause CCI in the network design, so it’s not a good idea to design for 256QAM (see the link-budget sketch below).
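Here is a rough link-budget sketch of why designing for 256QAM forces high RSSI. The ~31dB SNR figure for 256QAM 5/6 (VHT MCS9) is a common rule of thumb, and the -95dBm noise floor is a typical value for a 20MHz channel; both are assumptions, not spec values:

```python
# Rule-of-thumb SNR needed for 256QAM 5/6 (VHT MCS9) -- an assumption.
REQUIRED_SNR_DB = 31
# Typical thermal noise floor in a 20MHz channel -- also an assumption.
NOISE_FLOOR_DBM = -95

required_rssi_dbm = NOISE_FLOOR_DBM + REQUIRED_SNR_DB
print(f"RSSI needed at the receiver: ~{required_rssi_dbm}dBm")  # ~ -64dBm
```

Needing roughly -64dBm at the client means 256QAM only works near the top of the cell, and any rise in the noise floor pushes the link back to 64QAM.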
Receive Sensitivity
* Wave-2 chipsets may have better Rx sensitivity than Wave-1 chipsets (or they may not), but that doesn’t help anything. In fact, it can be a hindrance given that most networks are poorly designed and have high CCI. The Rx sensitivity difference between Wave-1 and Wave-2 chipsets would be minuscule anyway, so this is a “who cares” issue.
Better MRC
* With 4 receivers, an AP can hear better, so perhaps there will be fewer uplink retries, but most clients use too much power already (because they aren’t controlled by the infrastructure and don’t comply with 802.11h TPC), and a 4th receiver won’t add enough to matter in most real-world use cases.
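For perspective, here is an idealized sketch of combining gain vs. receive-chain count, using the textbook 10log10(N) array-gain figure; real-world MRC gain varies with chain correlation and the noise environment:

```python
import math

# Idealized array/combining gain for N receive chains: 10*log10(N).
for n_rx in (1, 2, 3, 4):
    print(f"{n_rx} Rx chains: ~{10 * math.log10(n_rx):.2f}dB combining gain")

# The step from 3 to 4 chains is only 10*log10(4/3) ~= 1.25dB.
```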
PoE
* It’s already been proven that 4×4:4 can fit into 802.3af (standard PoE, 12.95W at the powered device), but most high-end Wave-2 APs will operate on 802.3at (PoE+, 25.5W). It’s very unlikely that any Wave-2 APs will exceed PoE+, so PoE itself, in regard to Wave-2, is a non-issue.
Backhaul
* Under no enterprise circumstances will you need more than 1Gbps of backhaul for Wave-2 APs (see the sketch below). #ThatIsAll
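A quick sketch of the backhaul math, assuming a best-case Wave-2 link (4 spatial streams, 80MHz, VHT MCS9, short GI = 1733Mbps PHY rate) and a rough ~60% MAC-efficiency figure; both numbers are assumptions:

```python
# Best-case Wave-2 PHY rate: 4SS, 80MHz, VHT MCS9, short GI.
PHY_RATE_MBPS = 1733
# Rough real-world MAC/airtime efficiency -- an assumed figure.
MAC_EFFICIENCY = 0.60

print(f"Best-case goodput: ~{PHY_RATE_MBPS * MAC_EFFICIENCY:.0f}Mbps")  # ~1040Mbps
```

Even this absolute best case barely nudges past 1Gbps, and it assumes an 80MHz channel and a perfect 4SS client; with the 20MHz channels recommended above, realistic per-AP throughput sits far below 1Gbps.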
Client Types
* Most clients are either 1×1 or 2×2 11n or 11n/ac, and the bulk of the rest are legacy 11a/b/g crap. In what real-world use case (scenario) will Wave-2 help?
Throughput
* Throughput for a given BSS depends on the number of clients, type of clients (e.g. 1×1, 2×2), applications and QoS ACs in use by each client, simultaneous transmissions, instantaneous data rates of each client, interference (WiFi and non-WiFi), and much more. An AP’s capabilities are only a minority of the affecting parameters, so Wave-2 capabilities would have little to no appreciable benefit in any reasonable use case.
Design
* Most WiFi networks are poorly designed, configured, and rarely ever tested. Wave-2 APs have no chance of adding value with a poor design, and due to the reasons above, add minimal or no value even with a good network design.
Data
* There is no vendor-specific or vendor-neutral data showing real-world throughput benefits of MU-MIMO.
DFS
* If Wave-2 APs don’t yet support DFS channels due to not yet having FCC (or other regulatory authority) certification, they actually have LESS capacity than older APs because fewer channels are available. You especially need to remember this in very high density (VHD) use cases.
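To quantify the hit, here is a sketch assuming approximate U.S. 5GHz 20MHz channel counts (9 non-DFS channels in UNII-1/UNII-3 vs. ~25 total with DFS); exact counts vary by regulatory domain:

```python
NON_DFS_CHANNELS = 9    # UNII-1 (36-48) + UNII-3 (149-165)
WITH_DFS_CHANNELS = 25  # approximate total once DFS bands are usable

reduction = 1 - NON_DFS_CHANNELS / WITH_DFS_CHANNELS
print(f"No DFS support = ~{reduction:.0%} fewer channels for the reuse plan")
```

Losing roughly two-thirds of the channel plan in a VHD design dwarfs any theoretical Wave-2 gain.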
Future
* Don’t expect to see any 6×6 or 8×8 systems delivering client access, as 4×4 has already hit a ceiling of diminishing returns. The pricing of 4×4 systems is already at a shocking premium, and adding more powerful DSPs and double the number of radio chains is unreasonable for the foreseeable future.
Pricing
* The only real value that I see in Wave-2 is that with the introduction of Wave-2 APs, there is a corresponding reduction in the price of Wave-1 APs. I now strongly suggest that my customers consider opting for mid-range 3×3:3 Wave-1 APs (very cost effective, while still having significant processing resources), and if they are itching to spend extra money, they should buy additional Wave-1 APs to deploy as dedicated performance/security sensors.
What To Do
* If you want a much faster network, don’t buy what you think are faster APs (when they aren’t), but instead, do the following:
1) Move to 5GHz: lock, stock, and barrel. Don’t use 2.4GHz at all. 2.4GHz is dead.
2) Get rid of old client devices, now. They are a major performance drag.
3) Increase your minimum basic rate to 24Mbps. Disable all rates below 24Mbps.
- Yes, I know that this would prevent you from using 802.11b – see item #2 above.
4) Use 20MHz channels.
5) Design your network for an RSSI of around -64 to -67dBm with minimal CCI (see the sketch after this list).
6) Get rid of as many RF interference sources as possible.
7) Use only APs that support DFS channels, and replace client devices that don’t support DFS.
8) Minimize your SSID count. 4 is the max. 3 is better.
9) Use dedicated sensors rather than background scanning, and constantly monitor your network for performance-impacting issues.
10) Other stuff: https://divdyn.com/top-ten-tune-tips/
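Regarding item #5, here is a free-space sketch of what a -65dBm design target implies, assuming a hypothetical 14dBm-EIRP AP at 5.2GHz; real indoor cells are far smaller once walls, bodies, and client antennas are factored in:

```python
import math

EIRP_DBM = 14.0          # hypothetical AP EIRP -- an assumption
FREQ_MHZ = 5200.0        # mid 5GHz band
TARGET_RSSI_DBM = -65.0  # middle of the -64 to -67dBm design target

path_loss_budget_db = EIRP_DBM - TARGET_RSSI_DBM  # 79dB to spend

# Free-space path loss: FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55
distance_m = 10 ** ((path_loss_budget_db - 20 * math.log10(FREQ_MHZ) + 27.55) / 20)
print(f"Free-space cell edge: ~{distance_m:.0f}m")  # ~41m in open air
```

Indoor attenuation shrinks that radius dramatically, which is why the -64 to -67dBm target drives a dense, small-cell design.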
An excellent Q&A on 802.11ac Wave-2 featuring Matthew Gast is 802.11ac Q&A. I agree with Matthew’s points across the board; he did a good job clarifying several salient points in this podcast.
That’s it. Please help me understand where my current thoughts are off-base.
Glad someone said it. I don’t do much field work anymore, but I can’t think of many customer Wi-Fi networks which couldn’t be dramatically improved with solutions which have been around for years: killing 802.11b, and ideally 2.4GHz altogether; smaller cell sizes (higher data rates); and configuring dual-band clients for, or steering them to, 5GHz only. I am having a hard time thinking of clients utilizing 40MHz-wide channels, let alone 80MHz+, for several reasons, starting with the lack of widespread client support. Why rip up spectrum real estate for a small percentage (if any) of clients which can support it?
I am interested to see what those in favor say as well.
PS: I had written a much longer post, but when I hit submit it said “timed out”, just FYI. I don’t have the time to re-create all the original magic 🙂 It happened twice :(
Not sure how to correct that on my website, but have figured out that you can just hit the BACK button and you haven’t lost your text. 🙂 Thanks!!
You could have created it in Notepad/WordPad/MS Word and pasted it here 🙂
This reply-to-self may seem strange, but it culminated from an extremely beneficial conversation with a very good (and extremely knowledgeable) friend in the industry. My takeaways from our conversation around the topics within this blog are listed below in a Q&A-to-self. I would like to profusely thank this unnamed (at his request) engineer for his time, effort, and insights in contributing to the accuracy and value of this blog.
Q1: Why would we worry about a device having WiFi Alliance certification for TxBF?
A1: I would think that TxBF is like much else when it comes to interoperability, and I would even guess that, given its complexity, it would be very important to get it certified. It is already part of the WiFi Alliance’s certification as best I can tell, but it’s optional, as you’d expect it to be. I would assume that the WiFi Alliance added it to their list of things to certify because they thought it would be hard to make it interoperable otherwise.
Q2: There’s no harm, in short-range use cases, in adding additional SNR by using TxBF, even when SNR is already high.
A2: In general, I agree. Specifically, my concern is the per-client airtime overhead (and possibly latency for mobile voice devices) that would be introduced for very little gain.
Q3: At longer ranges (however you may want to define “longer”), TxBF seems to add more value than anticipated. There is a small amount of testing data showing this.
A3: I would LOVE to see that data. Since there’s a bunch of marketing hype around TxBF, the WiFi Alliance has an optional certification for it, and MU-MIMO is predicated on TxBF being supported on both sides of a link, there’s some importance in understanding what is and isn’t factual about TxBF operation and overhead. I will add that we don’t actually want clients to be at longer ranges, which is why we are always suggesting that APs support a minimum basic rate of either 12 or 24Mbps and why we have infrastructure features that instruct the AP not to talk to clients with an SNR below X dB. Therefore, even if TxBF does work better than expected at longer ranges, it becomes a case of “just because you can doesn’t mean you should.”
Q4: TxBF overhead isn’t that large, so at longer ranges, the benefits outweigh the overhead.
A4: I don’t have any hard data (only word-of-mouth assertions I’ve heard in the past from esteemed engineers) to set a baseline for “large” or “small” overhead, and I would guess that the amount of overhead depends on the total amount of traffic moving. The biggest conceivable negative impact that I see is that in high density environments, even a little extra utilization from each client would result in much higher overall channel utilization. I guess this one comes down to: how much extra overhead would be considered “normal”?
Q5: There are a few clients that are already on the market that can have their firmware upgraded to support MU-MIMO, but even if there were none, we probably want to plan for client capabilities that are ~1 year down the road, and in 1 year, there will surely be lots of MU-MIMO capable client devices.
A5: I can’t factually argue what will or won’t be here in a year, but I hope that there are MU-MIMO clients readily available within a year so that we can capture meaningful data around the value of MU-MIMO in various environments. Today, the value is either zero or close to it. In a year, we’ll see.
Q6: 40MHz channels are OK.
A6: My thoughts on using 20MHz vs 40MHz channels have to do with two very specific things: 1) density of client devices, and 2) size of network. More channels mean more separated collision domains, which equates to more simultaneous conversations, which equates to more system capacity. If you use a 40MHz channel, you have essentially borrowed from system-wide capacity in order to give more capacity to a single BSS (see the sketch below). When the client device density is high, you want more channels. When the client device density is low, you could feasibly move to 40MHz channels – but with one major caveat: size of network. Significantly minimizing CCI in a multi-floor building, large building/campus, or the like requires as many channels as you can get into the channel re-use plan. The fewer channels in your channel re-use plan, the harder it is to get rid of CCI, and CCI is a HUGE capacity killer. In a small-to-medium sized network, 40MHz may be fine, but in larger networks, no way, no how. Splitting your collision domains offers far more system capacity.
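Here is a toy model of that trade-off, assuming approximate U.S. channel counts and the VHT data-subcarrier ratio between 40MHz and 20MHz channels (108 vs. 52 subcarriers, so ~2.08x per-BSS rate):

```python
# Toy model: per-BSS speed vs. system-wide capacity for 20 vs. 40MHz.
plans = {
    20: {"channels": 25, "bss_rate": 1.0},
    40: {"channels": 12, "bss_rate": 108 / 52},  # ~2.08x per BSS
}

for width, plan in plans.items():
    system = plan["channels"] * plan["bss_rate"]
    print(f"{width}MHz: per-BSS x{plan['bss_rate']:.2f}, system-wide x{system:.1f}")
```

The system-wide totals come out nearly even (25.0 vs. ~24.9 relative units), so the wide channel buys per-BSS speed at the cost of half the collision domains, and a much harder CCI-free reuse plan in large deployments.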
Q7: The 4th transceiver (transmitter/receiver) is more beneficial for control than for extra throughput on a very few clients (in the future). 4SS SU-MIMO is not that interesting. Also, as long as the 4th transceiver doesn’t affect your power budget, you’re in good shape.
A7: Couldn’t agree more. Getting an extra (theoretical) ~1.25dB of gain on the downlink is a good thing (which comes from 10log10(4) = 6.02dB for 4Tx minus 10log10(3) = 4.77dB for 3Tx), especially when it’s increasingly hard to wrangle an extra dB or two from antennas. TxBF seems to be important at mid-range for the same reason, provided the overhead isn’t large. Every extra dB helps to some degree. Totally agree about the power budget.
Q8: 256QAM is the same for both Wave-1 and Wave-2, and if the additional benefits of Wave-2 (e.g. TxBF, better Rx sensitivity, 4 transceivers, etc.) add enough value, then we could increase SNR and therefore get a little extra range out of 256QAM.
A8: Yes, agree, but unsure as to how much extra range or under what circumstances (e.g. type of client, RF environment, etc). This is hit-or-miss, but if it adds value, it does. I just wouldn’t want to pay for it given that it’s a complete unknown.
Q9: While it’s agreed that many networks are very poorly designed and configured — and rarely tested — it’s important to have the infrastructure add as much value as it can so that these poorly deployed networks perform as well as possible under the circumstances.
A9: I can’t argue against that. Give me all of the benefit you can from the infrastructure. However, the analogy I would use to put it into perspective is that you’re putting a new Ferrari in the hands of a 10-year-old who has never driven a car. The capabilities are there, but the knowledge of how to use them isn’t.
Q10: There will be a significant differential on how well vendors implement MU-MIMO, which is arguably the best part of Wave-2.
A10: Yes, agreed. IF it adds value in normal use cases, then I can see that the complexities therein could be handled to different levels of expertise by different vendors. How well various vendors handle MU groups with varying degrees of orthogonality will be a big one.
Q11: Even if the benefit of MU transmissions is only something modest, like 30% (which of course wouldn’t apply to all situations), then as a zero-cost feature it’s worth it. (NOTE: “zero-cost” means that you don’t have to trade off some other benefit to get it.)
A11: I agree that it’s zero-cost from a technical standpoint, and that’s a fair point. However, I don’t see this as a zero-cost (monetarily speaking) since Wave-1 APs are cheaper than Wave-2 APs. I agree with the premise that free extra capacity is always a good thing though.
Q12: There are still many unknowns when it comes to queuing issues, but the hope is that in client environments where 50-75% of the clients are MU capable, that we’ll see at least a 1.5X capacity gain, and perhaps in downlink-heavy environments with mobile devices, as much as 2X gain.
A12: I hope you’re right. I’m not naysaying for the heck of it, but I am skeptical. If we see a 1.5X or greater overall capacity, I’d be very happy about it. The main issue I still see though is that even though infrastructure manufacturers are giving their best to bring us new technology that has the possibility of better performance, implementations are so poor (in general) that many folks who purchase these systems will simply not see those benefits. 🙁 I would also like to add that MU-MIMO won’t be useful in many vertical markets because those markets either have: 1) legacy clients, 2) no HD or VHD requirements, and/or 3) no MU capable clients (e.g. healthcare). That will further narrow the use cases and target market for Wave-2 systems.
Q13: The normal lifetime of a WiFi network is 3-5 years.
A13: Some salient points to consider about the current and ongoing value of Wave-2 could include:
MU-MIMO is the only potentially meaningful upgrade in Wave-2, and it’s going to take about a year to get a reasonable number of MU-MIMO clients on the market.
We have to wait nearly a year for DFS certification (of Wave-2 products) to have reasonable system-wide capacity (due to more channels being available for use).
Therefore, the beneficial time period of Wave-2 APs is reduced by 20-33% (one year out of a 3-5 year lifetime), which makes it even harder to justify a Wave-2 purchase (due to the price premium).
Q14: Is there any other value in buying Wave-2 APs that you can think of?
A14: The place where I find real value is in the fact that vendors launch new features alongside new high-end hardware. Those features may get put into older/lesser platforms at a later date, but when launching new high-end hardware, vendors add such special features in order to more significantly differentiate the brand-new high-end platform from their existing high-end platform(s).
Thanks so much for this super useful post!
Also, on your Top Ten Tips page…. I laughed out loud when you mentioned the ath9k “stuck beacons” bug. That’s been a thorn in my side for the last five years.
Would love to see a future post go more into detail about CCI and the effect of different types of interference on throughput (your APs on other channels, other people’s APs on the same channel, etc). Also any other tips you might have for improving performance in super noisy environments (I work in Manhattan and there’s a million SSIDs everywhere). Thanks!