For years, coffee shops, airports and hotels have offered Wi-Fi hotspots to entice clientele. But as consumer connectivity expectations have grown, so too has the proliferation of Wi-Fi hotspots into every facet of our daily lives, including barber shops, corner pubs, fast-food restaurants, bookstores, car dealerships, department stores, and more. Today’s mobile Internet travels with everyone, and it has redefined what it means to “be connected.” But it wasn’t always this easy.
The first hotspots were small-office/home-office (SOHO)-class access points, generally used for residential connectivity, with a simple Wi-Fi connectivity process and coverage designed for household use. While some businesses still try to leverage this approach, it lacks the performance required for today’s public hotspots. The increased demand for bandwidth has left those offering hotspot connectivity with a choice: either live with poor performance and frustrated customers, or install enterprise-class equipment that can support usage expectations.
Connectivity is so important to consumers that it’s not uncommon for them to select a destination or method of transport based on the cost and quality of Wi-Fi Internet access, or to choose one coffee shop over another because it offers high-speed Internet access. But what do these consumers think about when looking for hotspot connectivity? And how can a business ensure a positive experience for its customers?
First, let’s look in depth at some of the common hotspot features and how they are facilitated.
Ease of Use is a Feature

Hotspots use the 802.11 open authentication method, meaning there is no authentication process at Layer 2 – at all. The customer’s client device (laptop, iPad, smartphone, etc.) joins the hotspot’s SSID and, via the DHCP service, receives an IP address, default gateway and DNS server. This, in its purest form, is hotspot connectivity.
At this point the client is ready to access the Internet. One option is simply to allow direct access. This is the easiest of all approaches: it causes no difficulty with devices, because no user interaction is required.
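The join sequence described above can be sketched in a few lines. This is a minimal model, not a real wireless stack: the SSID, address pool and gateway values are illustrative placeholders, and the DHCP exchange is collapsed into a single function standing in for the DISCOVER/OFFER/REQUEST/ACK handshake.

```python
# Minimal model of the open-hotspot join flow: 802.11 open authentication
# (no credentials), association, then the DHCP exchange that hands the
# client its IP address, default gateway, and DNS server.
# All addresses below are illustrative, not values from the article.

def dhcp_exchange(free_pool, gateway="192.168.1.1"):
    """Stand-in for DISCOVER/OFFER/REQUEST/ACK: the server offers the
    first free lease and acknowledges with the full configuration."""
    ip = free_pool.pop(0)
    return {"ip": ip, "gateway": gateway, "dns": gateway}

def join_open_hotspot(ssid, free_pool):
    # No Layer 2 credentials are exchanged at any point.
    events = [f"open-auth:{ssid}", "associate"]
    lease = dhcp_exchange(free_pool)
    events.append("dhcp-bound")
    return events, lease
```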
However, most hotspot providers opt for a captive portal solution, whereby any attempt by the client device to load a browser-based Internet session, check e-mail, and so on is redirected to an HTTP web page. By capturing all possible outbound ports, the hotspot changes the customer’s experience from what they would get at home.
On this captive portal page, the customer can accept the terms of service and/or pay for Internet usage. The use of a captive portal makes accessing the Internet via a hotspot quite difficult for devices that lack native web browsing capabilities. The more “hoops” a customer has to jump through, the lower their valuation of the hotspot service.
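This interception is why many operating systems probe for captive portals by fetching a URL known to return HTTP 204 with an empty body; anything else means something intercepted the request. A minimal sketch of that technique in Python (the probe URL is Google’s well-known connectivity-check endpoint; the decision logic is a simplification):

```python
from urllib import error, request

# Endpoint that normally answers "204 No Content" with an empty body;
# a captive portal will intercept it and answer differently.
PROBE_URL = "http://connectivitycheck.gstatic.com/generate_204"

def classify_probe(status, body):
    """Decide what a probe response means (simplified logic)."""
    if status == 204 and not body:
        return "open"      # reached the real Internet
    return "captive"       # intercepted: redirect or portal page

def check_hotspot(url=PROBE_URL, timeout=5):
    try:
        # urlopen follows redirects, so a portal's 302 typically
        # arrives here as a 200 with the portal's login page as body.
        with request.urlopen(url, timeout=timeout) as resp:
            return classify_probe(resp.status, resp.read())
    except error.URLError:
        return "offline"   # no connectivity at all
```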
Bandwidth/Throughput

The next feature at the top of customers’ minds is the actual throughput of the connection. If Internet access is slow or inconsistent, customer complaints rise. Gone are the days when a 100-bed hotel could get by with a single T-1 line (1.5 Mbps) shared among all the guests.
With the advent of streaming audio and video services – like Spotify, Pandora, Hulu and Netflix – users’ expectations of throughput have increased faster than most hotspot providers have increased bandwidth. A business can have the best Wi-Fi system available, with fantastic data rates over the RF medium, but without an adequately sized backhaul, end users will still complain.
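A back-of-envelope calculation shows how quickly streaming outgrows a small pipe. The per-stream rates below are assumptions for illustration (roughly typical of audio and SD/HD video), not figures from the article:

```python
# Assumed per-stream rates in kbps (illustrative values only).
STREAM_KBPS = {"audio": 160, "sd_video": 3000, "hd_video": 5000}

def required_backhaul_mbps(concurrent):
    """concurrent: dict mapping stream type -> number of simultaneous
    users. Returns the aggregate backhaul needed in Mbps."""
    total_kbps = sum(STREAM_KBPS[kind] * n for kind, n in concurrent.items())
    return total_kbps / 1000

# In the 100-bed hotel above, just 20 guests watching SD video need
# 60 Mbps -- forty times the old shared T-1 (1.5 Mbps).
```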
The article is accurate in that the potential bandwidth of the wireless network varies with the number of users and the content. The location of the user (unless they are a hidden node) is irrelevant: if they are associated to the AP, they are consuming available bandwidth and a “slice” of the available data rate. This is because 802.11 WLANs are half duplex and the medium is shared, using DCF (Distributed Coordination Function) for medium access as part of the contention and collision-avoidance mechanisms of 802.11 CSMA/CA. What is relevant is the potential data rate available to the user. This is controlled by: 1) the available data rates advertised by the AP in the Beacon and/or Probe Response frames, as set in the AP configuration; and 2) the RSSI perceived at the client device, which is provided to the client NIC driver and used in the client’s DRS (data rate shift) decision process (not covered in the 802.11 standard). This information is available by consistently monitoring the WLAN “airspace” and can be reported as a whole and/or per device, depending on the monitoring product used.
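The half-duplex, shared-medium point can be made concrete with a little arithmetic. If stations take turns sending equal-sized frames (a rough stand-in for DCF’s contention behavior), slow senders hold the air longer, and the cell’s aggregate throughput collapses to the harmonic mean of the per-station PHY rates. A hypothetical sketch:

```python
def aggregate_throughput(rates_mbps):
    """Aggregate cell throughput when stations send equal-sized frames
    in turn on a half-duplex medium. A frame at rate r takes time
    proportional to 1/r, so the total is the harmonic mean of the
    per-station rates."""
    n = len(rates_mbps)
    return n / sum(1.0 / r for r in rates_mbps)

three_fast = aggregate_throughput([54, 54, 54])  # 54.0 Mbps
one_slow   = aggregate_throughput([54, 54, 1])   # ~2.9 Mbps: a single
                                                 # 1 Mbps client starves
                                                 # the whole cell
```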
With regard to coverage-quality monitoring, your results will vary with the environment due to the irregular propagation characteristics of RF. We also cannot assume equal omnidirectional coverage, because RF does not radiate equally in all directions and is directly affected by the environment.
The best way to monitor WLAN quality for an SLA is to use an independent overlay system such as Fluke Networks’ AirMagnet Enterprise. This product can monitor multiple sites and multiple areas, providing performance alerting and statistical monitoring along with active health checking of the hotspot environment. The result is real-time, factual, trending-over-time data for SLA performance analysis, with the ancillary benefit of being able to remotely troubleshoot any noted wireless issues to root cause.
The WAN throughput represents the utilization of the backhaul bandwidth, measuring the service quality the ISP is providing. It varies with the number of users and the content those users are pulling.
In contrast, the potential wireless bandwidth per user varies with the user’s location relative to the connected AP, which affects the throughput that user gets. How can we assure users are getting the service quality they are paying for? Perhaps the data rate relative to the RSSI will give us a clue.
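One way to act on that clue is to compare each client’s reported RSSI against the data rate it should be sustaining at that signal level. The thresholds below are illustrative 802.11g-style values I am assuming for the sketch, not numbers from any standard; as noted elsewhere in these comments, rate-shift behavior is vendor-specific:

```python
# Hypothetical RSSI (dBm) -> expected 802.11g data rate (Mbps) table,
# ordered strongest signal first. Thresholds are illustrative only;
# DRS behavior is vendor-specific and not defined by 802.11.
DRS_TABLE = [(-65, 54), (-70, 48), (-74, 36), (-79, 24),
             (-82, 18), (-86, 12), (-89, 9), (-92, 6)]

def expected_rate(rssi_dbm):
    """Return the data rate a client at this RSSI should sustain,
    or None if the signal is below usable sensitivity."""
    for threshold, rate in DRS_TABLE:
        if rssi_dbm >= threshold:
            return rate
    return None
```

A monitoring system could flag clients whose measured rate falls well below the rate expected for their RSSI as candidates for interference, driver, or coverage problems.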
The article brings up a very good point. Service providers want to know this information at different spots in the coverage area, and it isn’t necessarily easy to obtain accurately. If we assume omnidirectional coverage, we can estimate signal strength versus distance from the AP, but smart antennas make that more complicated. Measuring quality in a wide-area deployment is a tough challenge. What’s the best way to consistently monitor service levels in a large-scale deployment?