The advent of accurate, low-cost tracking solutions, coupled with management requirements to know where high-value assets are at all times, has created a market for location systems that use Wi-Fi as their infrastructure. This chapter examines some of the issues and challenges associated with Wi-Fi location technology.
“Where's my stuff?”
One of the most sophisticated and promising applications that can grow out of a Wi-Fi LAN implementation is the ability to track the location of assets and personnel within campus and enterprise environments. Network-based tracking uses a combination of network-centric computers, radio tags or other wireless devices, base stations and application software to locate, track and monitor assets and personnel in real time. For the purpose of this book, we are concerned only with systems that use existing Wi-Fi networks as their communications infrastructure.
This technology is a marked advance over the manual process of searching for misplaced items and taking inventory by hand. And, in critical healthcare and first-responder situations, the ability to immediately locate life-saving equipment or know the whereabouts of key personnel can be the difference between life and death.
Benefits of Tracking
Some important benefits of positioning and tracking systems include:
Track computers or assets without being in 'line-of-sight', indoors or outdoors;
Monitor real-time information via the corporate intranet (or remotely via an Internet browser);
Solve expensive logistics and, in the case of hospitals, safety and liability problems by instantly locating high-value assets or the technically proficient people that are required to operate them;
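To make the underlying mechanism concrete, here is a minimal sketch of how a Wi-Fi location system might estimate a tag's position from received signal strength, assuming a log-distance path-loss model and exactly three access points at known coordinates. The reference power, path-loss exponent and AP placements below are illustrative values, not parameters from this chapter:

```python
import math

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.5):
    """Invert a log-distance path-loss model:
    rssi = p0 - 10 * n * log10(d), where p0 is the power at 1 m."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(aps, dists):
    """2-D trilateration from exactly three non-collinear APs.
    Subtracting the first circle equation from the other two yields
    a linear 2x2 system, solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = aps
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2**2 - x1**2) + (y2**2 - y1**2) - (d2**2 - d1**2)
    b2 = (x3**2 - x1**2) + (y3**2 - y1**2) - (d3**2 - d1**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

Real systems use many APs, least-squares fitting and RF fingerprinting to cope with multipath, but the geometry is the same.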
A single-chip IEEE 802.11g compliant wireless LAN system-on-a-chip (SoC) that implements all RF, analog, digital PHY and MAC functions has been integrated in a 0.18-µm CMOS technology. The IC transmits 0 dBm EVM-compliant output power for a 64 QAM OFDM signal. The overall receiver sensitivities are better than -92 dBm and -73 dBm for data rates of 6 Mbps and 54 Mbps, respectively.
Introduction
The IEEE 802.11g specification, ratified only in June 2003, has become the most widely deployed wireless local area network (WLAN) standard today. Its popularity is due in large part to its support for higher data rates while maintaining backward compatibility with legacy IEEE 802.11b WLANs. An IEEE 802.11g device achieves the higher data rates when communicating with other 802.11g devices by using orthogonal frequency division multiplexing (OFDM) modulation. When communicating with legacy 802.11b devices, it reverts to either direct sequence spread spectrum (DSSS) or complementary code keying (CCK) modulation. The standard uses 83.5 MHz of available spectrum in the 2.4-GHz band and allows for three non-overlapping channels. The data rates range from 1-2 Mbps using DSSS modulation, through 5.5-11 Mbps using CCK modulation, to 6-54 Mbps using OFDM modulation. As in the IEEE 802.11a specification, the OFDM in 802.11g uses 52 sub-carriers, each of which can be modulated with BPSK, QPSK, 16-QAM or 64-QAM.
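The OFDM rate ladder follows directly from these parameters: 48 of the 52 sub-carriers carry data (4 are pilots), each OFDM symbol lasts 4 µs, and each rate pairs a constellation with a convolutional coding rate. A quick sanity check of the arithmetic, using the standard 802.11a/g OFDM PHY parameters:

```python
from fractions import Fraction

DATA_SUBCARRIERS = 48      # 52 sub-carriers minus 4 pilots
SYMBOL_DURATION_US = 4.0   # 3.2 us useful symbol + 0.8 us guard interval

# (bits per sub-carrier, convolutional coding rate) for each OFDM mode
MODES = {
    "BPSK 1/2":   (1, Fraction(1, 2)),
    "BPSK 3/4":   (1, Fraction(3, 4)),
    "QPSK 1/2":   (2, Fraction(1, 2)),
    "QPSK 3/4":   (2, Fraction(3, 4)),
    "16-QAM 1/2": (4, Fraction(1, 2)),
    "16-QAM 3/4": (4, Fraction(3, 4)),
    "64-QAM 2/3": (6, Fraction(2, 3)),
    "64-QAM 3/4": (6, Fraction(3, 4)),
}

def data_rate_mbps(bits_per_subcarrier, coding_rate):
    coded_bits = DATA_SUBCARRIERS * bits_per_subcarrier
    data_bits = coded_bits * coding_rate      # bits surviving FEC per symbol
    return float(data_bits) / SYMBOL_DURATION_US   # bits/us == Mbps
```

For example, 64-QAM at coding rate 3/4 yields 48 × 6 × 3/4 = 216 data bits per 4 µs symbol, i.e. 54 Mbps, while BPSK at rate 1/2 yields the base 6 Mbps.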
The rapid adoption of IEEE 802.11g WLANs and their growing popularity in portable applications such as PDAs and cellphones highlighted the need for a low-cost, small form factor solution.
The Open Source Initiative represents the formalization of one stream of the free and open software movement. We have described its establishment in 1998 by Raymond and Perens, and Peterson's coinage of the term open source as an alternative to what was thought to be the more ideologically laden phrase free software. Of course, ever since the mid-1980s, the other distinct stream of the movement represented by the Free Software Foundation (FSF) and the GNU project had already been active. The FSF and Richard Stallman initiated the free software concept, defined its terms, vigorously and boldly publicized its motivations and objectives, established and implemented the core GNU project, and led advocacy and compliance for the free software movement. They have been instrumental in its burgeoning success. We have already discussed the FSF's General Public License (GPL) in Chapter 6. This chapter describes the origin and technical objectives of the GNU project that represents one of the major technical triumphs of the free software movement. We also elaborate on some of the responsibilities, activities, and philosophical principles of the FSF, particularly as expressed by FSF General Counsel Eben Moglen.
The GNU project
The GNU project was begun by Stallman in 1983 to create a self-contained free software platform, an ambitious and arguably almost utopian vision. The acronym GNU stands for “GNU's Not Unix,” a recursive acronym of the kind popular at MIT, where Stallman worked.
Wireless networking is quickly becoming a de facto standard in the enterprise, streamlining business processes to deliver increased productivity, reduced costs and increased profitability. Security has remained one of the largest issues as companies struggle with how to ensure that data is protected during transmission and the network itself is secure. Wi-Fi Protected Access (WPA) offered an interim security solution, but was not without constraints that resulted in increased security risks. The new WPA2 (802.11i) standard eliminates these vulnerabilities and offers truly robust security for wireless networks. As a global leader in wireless networking, Motorola, through the acquisition of the former Symbol Technologies, not only offers this next generation of wireless security but also builds on the new standard with value-added features that further increase performance and the mobility experience for all users.
Overview
Corporations are increasingly being asked to allow wireless network access to increase business productivity, and corporate security officers must provide assurance that corporate data is protected, security risks are mitigated and regulatory compliance is achieved. This chapter will discuss:
The risks of wireless insecurity;
The progression of security standards and capabilities pertaining to Wi-Fi security;
How the 802.11i standard provides robust security for demanding wireless environments;
How Motorola incorporates 802.11i in its wireless switching products in a way that optimizes scalability, performance and investment protection.
Risks of Wireless Insecurity
The advent of wireless computing and the massive processing power available within portable devices provides organizations with an unprecedented ability to provide flexible computing services on-demand to enable business initiatives.
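One concrete piece of the 802.11i design worth seeing up close: in WPA2-Personal, the Pairwise Master Key is derived from the passphrase and the SSID with PBKDF2 (HMAC-SHA1, 4096 iterations, 256-bit output), so the same passphrase yields different keys on different networks and offline guessing is slowed down. A minimal sketch using only Python's standard library; the passphrase/SSID pair in the usage note is the well-known test vector published with the standard:

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA2-Personal Pairwise Master Key per IEEE 802.11i:
    PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
```

For example, `wpa2_pmk("password", "IEEE")` reproduces the standard's test vector, and changing only the SSID produces an entirely different key.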
This chapter covers the services commonly deployed in Municipal Wireless networks, the types of customers they serve, some of the networking considerations each service may drive, and some high-level architectural diagrams. As you will see, a key issue is where, and how much, network control should be implemented. One of the fundamental decisions a network operator must make for an IP network is whether to centralize or distribute that control. In this context, network control means control of the data flow associated with each user. There are advantages and disadvantages to both approaches; note that this chapter does not intend to make recommendations on this design issue. The high-level diagrams shown throughout the chapter convey the design concepts that network operators will encounter as they build their network infrastructures.
Introduction
Municipal Wireless networks are a hot new topic that is changing the face of telecom today. With the ability to offer broadband speeds over the airwaves, governments and service providers alike have looked at this network approach as a way to enhance their services to the community. Over 300 governments have created Municipal Wireless networks, ranging in size up to 2 square miles. Many more are planning deployments, with the world's largest cities planning networks of over 100 square miles.
The drivers for the creation of these networks are varied.
The proliferation of wireless multi-hop communication infrastructures in office or residential environments depends on their ability to support a variety of emerging applications requiring real-time video transmission between stations located across the network. We propose an integrated cross-layer optimization algorithm aimed at maximizing the decoded video quality of delay-constrained streaming in a multi-hop wireless mesh network that supports quality-of-service (QoS). The key principle of our algorithm lies in the synergistic optimization of different control parameters at each node of the multi-hop network, across the protocol layers - application, network, medium access control (MAC) and physical (PHY) - as well as end-to-end, across the various nodes. To drive this optimization, we assume an overlay network infrastructure, which is able to convey information on the conditions of each link. Various scenarios that perform the integrated optimization using different levels (“horizons”) of information about the network status are examined. The differences between several optimization scenarios in terms of decoded video quality and required streaming complexity are quantified. Our results demonstrate the merits of and need for cross-layer optimization in order to provide an efficient solution for real-time video transmission using existing protocols and infrastructures. In addition, they provide important insights for future protocol and system design targeted at enhanced video streaming support across wireless mesh networks.
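To make the flavor of such joint optimization concrete, here is a hypothetical toy version (not the authors' algorithm): each hop offers a few PHY rate / loss-probability operating points plus a MAC retry limit, and we exhaustively search the joint configuration that maximizes end-to-end delivery probability under a delay deadline. All numbers, names and the independence/worst-case-delay assumptions are illustrative only:

```python
from itertools import product

# Each per-hop option is (phy_rate_bps, loss_prob_per_attempt, mac_retry_limit).
# Assumptions: losses are independent across attempts, and delay is budgeted
# for the worst case in which every allowed attempt is used.

def evaluate(path_config, packet_bits):
    delivery, delay = 1.0, 0.0
    for rate, loss, retries in path_config:
        attempts = retries + 1
        delivery *= 1.0 - loss ** attempts        # hop succeeds if any attempt does
        delay += attempts * packet_bits / rate    # worst-case airtime on this hop
    return delivery, delay

def cross_layer_optimize(per_hop_options, packet_bits, deadline_s):
    """Joint PHY/MAC choice over all hops: brute-force search for the
    configuration with the best delivery probability meeting the deadline."""
    best = None
    for config in product(*per_hop_options):
        delivery, delay = evaluate(config, packet_bits)
        if delay <= deadline_s and (best is None or delivery > best[0]):
            best = (delivery, config)
    return best
```

The exhaustive search is exponential in the number of hops; the point of the chapter's algorithm is precisely to exploit structure (per-node decisions plus limited "horizons" of link-state information) to avoid this blow-up.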
Introduction
Wireless mesh networks are built based on a mixture of fixed and mobile nodes interconnected via wireless links to form a multi-hop ad-hoc network.
This chapter describes a number of open source applications related to the Internet that are intended to introduce the reader unfamiliar with the world of open development to some of its signature projects, ideas, processes, and people. These projects represent remarkable achievements in the history of technology and business. They brought about a social and communications revolution that transformed society, culture, commerce, technology, and even science. The story of these classic developments as well as those in the next chapter is instructive in many ways: for learning how the open source process works, what some of its major accomplishments have been, who some of the pioneering figures in the field are, how projects have been managed, how people have approached development in this context, what motivations have led people to initiate and participate in such projects, and what some of the business models are that have been used for commercializing associated products.
Web servers and Web browsers are at the heart of the Internet, and free software has been prominent on both the server and browser ends. Thus the first open source project we will investigate is a server: the National Center for Supercomputing Applications (NCSA) Web server developed by Rob McCool in the early 1990s. His work had in turn been motivated by the then-recent creation by Tim Berners-Lee of the basic tools and concepts for the World Wide Web (WWW), including the invention of the first Web server and browser, HTML (the Hypertext Markup Language), and HTTP (the Hypertext Transfer Protocol).
Wireless mesh networking is rapidly gaining popularity with a variety of users: from municipalities to enterprises, from telecom service providers to public safety and military organizations. This increasing popularity rests on two basic facts: ease of deployment, and an increase in network capacity, expressed as bandwidth per unit of coverage area.
So what is a mesh network? Simply put, it is a set of fully interconnected network nodes that support traffic flows between any two nodes over one or more paths or routes. Adding wireless to the above brings the additional ability to maintain connectivity while the network nodes are in motion. The Internet itself can be viewed as the largest scale mesh network formed by hundreds of thousands of nodes connected by fiber or other means, including, in some cases, wireless links.
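The "one or more paths" property is what gives a mesh its resilience. A small sketch, using breadth-first search on a hypothetical six-node mesh, finds a shortest route and then a fallback route that shares no intermediate nodes with it (a greedy heuristic, not a full disjoint-paths algorithm):

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest path from src to dst avoiding 'banned' intermediate nodes."""
    queue, prev = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in sorted(adj[node]):
            if nxt not in prev and (nxt == dst or nxt not in banned):
                prev[nxt] = node
                queue.append(nxt)
    return None

def two_disjoint_paths(adj, src, dst):
    """Find one shortest path, then a second one that avoids the first
    path's intermediate nodes, mimicking a mesh's failover behavior."""
    first = bfs_path(adj, src, dst)
    if first is None:
        return None, None
    second = bfs_path(adj, src, dst, banned=set(first[1:-1]))
    return first, second
```

If any node on the primary route fails, traffic can be rerouted over the second path; this is exactly the redundancy that motivated the military networks described below.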
In this chapter we will look more closely into wireless mesh networks.
History
Mesh networking goes back a long way; in fact, tactical military networks have relied on store-and-forward nodes with multiple interconnections since the early days of electronic communications. The advent of packet switching allowed the forwarding function of these networks to be buried in the lower layers of communication systems, which opened up many new possibilities for improving their capacity and redundancy. Attracted by the inherent survivability of mesh networks, the US defense research agency DARPA has funded a number of projects aimed at creating a variety of high-speed mesh networking technologies that support troop deployment on the battlefield, as well as low-speed, high-survivability sensor networks.
“Epilogue” – it sounds like the story is ending. But obviously the Wi-Fi story is still going strong, as evidenced by the contents of this book.
So let us consider this as not an “epilogue”, but as just a brief pause to catch our breath. This book has covered so many of the topics that we know are important today. But based on our past experience, who really knows what future applications will be dreamed up? Who really knows which new technologies will prove to be important in the future evolution of Wi-Fi? It is very humbling to recall that back in the early and mid-1990s, when the IEEE 802.11 standards were originally being developed, the primary application on the minds of the key participants was not networking in the home, or wireless Internet access, or public hotspots, or voice over IP, or multimedia services, or city-wide wireless – but things like wireless bar code scanning and retail store inventory management. These “vertical” applications for Wi-Fi technology continue to be important today, but oh how far we have travelled.
So only an actual seer could predict the real future of Wi-Fi over the next 10 years. But one thing is clear: Wi-Fi will continue to play a role in our lives. Everything in technology has a finite lifespan – hardware products have a lifespan, software products have a lifespan – but the lifespan of a successful protocol, implemented in millions of devices worldwide, can be very, very long.
Design and quality are fundamental themes in engineering education. Functional programming builds software from small components, a central element of good design, and facilitates reasoning about correctness, an important aspect of quality. Software engineering courses that employ functional programming provide a platform for educating students in the design of quality software. This pearl describes experiments in the use of ACL2, a purely functional subset of Common Lisp with an embedded mechanical logic, to focus on design and correctness in software engineering courses. Students find the courses challenging and interesting. A few acquire enough skill to use an automated theorem prover on the job without additional training. Many students, but not quite a majority, find enough success to suggest that additional experience would make them effective users of mechanized logic in commercial software development. Nearly all gain a new perspective on what it means for software to be correct and acquire a good understanding of functional programming.
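The pearl's theme, small purely functional components whose correctness can be stated alongside the code, can be illustrated outside ACL2 as well. A hedged sketch in Python rather than Common Lisp: a purely functional insertion sort together with executable statements of the two properties a sorting theorem would capture (output ordered, output a permutation of input). In ACL2 these properties would be proved mechanically for all inputs; here they are merely tested:

```python
def insert(x, xs):
    """Insert x into an already-sorted list, returning a new list."""
    if not xs or x <= xs[0]:
        return [x] + xs
    return [xs[0]] + insert(x, xs[1:])

def isort(xs):
    """Purely functional insertion sort: no mutation, built from insert."""
    if not xs:
        return []
    return insert(xs[0], isort(xs[1:]))

def ordered(xs):
    """Property 1: every adjacent pair is in order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def permutation_of(xs, ys):
    """Property 2: same elements with the same multiplicities."""
    return sorted(xs) == sorted(ys)
```

Building the sort from the single-element `insert` component is exactly the compositional style the courses teach: each piece is small enough that its specification is obvious.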
We investigate the Laplacian eigenvalues of sparse random graphs G(n,p). We show that in the case that the expected degree d = (n-1)p is bounded, the spectral gap of the normalized Laplacian is o(1). Nonetheless, w.h.p. G = G(n,p) has a large subgraph core(G) such that the spectral gap of its normalized Laplacian is as large as 1 - O(d^{-1/2}). We derive similar results regarding the spectrum of the combinatorial Laplacian L(G(n,p)). The present paper complements the work of Chung, Lu and Vu [8] on the Laplacian spectra of random graphs with given expected degree sequences. Applied to G(n,p), their results imply that in the ‘dense’ case d ≥ ln² n the spectral gap of the normalized Laplacian is 1 - O(d^{-1/2}) w.h.p.
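For readers new to the objects involved: for a graph G with adjacency matrix A and degree matrix D, the combinatorial Laplacian is L = D - A and the normalized Laplacian is I - D^{-1/2} A D^{-1/2}; the spectral gap is the smallest nonzero eigenvalue of the latter. A small pure-Python sketch (assuming no isolated vertices) constructs both matrices and checks the basic identities that pin them down:

```python
import math

def laplacians(n, edges):
    """Combinatorial Laplacian L = D - A and normalized Laplacian
    I - D^{-1/2} A D^{-1/2} of a simple undirected graph on n vertices.
    Assumes every vertex has degree >= 1."""
    A = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1.0
    deg = [sum(row) for row in A]
    L = [[(deg[i] if i == j else 0.0) - A[i][j] for j in range(n)]
         for i in range(n)]
    N = [[(1.0 if i == j else 0.0) - A[i][j] / math.sqrt(deg[i] * deg[j])
          for j in range(n)] for i in range(n)]
    return L, N
```

Useful sanity checks: each row of L sums to zero (the all-ones vector is the eigenvector for eigenvalue 0), the trace of L equals the sum of degrees, the trace of the normalized Laplacian equals n, and the quadratic form x^T L x equals the sum of (x_u - x_v)^2 over edges, which is why both matrices are positive semidefinite.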
We bridge two distinct approaches to one-pass CPS transformations, i.e., CPS transformations that reduce administrative redexes at transformation time instead of in a post-processing phase. One approach is compositional and higher-order, and is independently due to Appel, Danvy and Filinski, and Wand, building on Plotkin's seminal work. The other is non-compositional and based on a reduction semantics for the lambda-calculus, and is due to Sabry and Felleisen. To relate the two approaches, we use three tools: Reynolds's defunctionalization and its left inverse, refunctionalization; a special case of fold–unfold fusion due to Ohori and Sasano, fixed-point promotion; and an implementation technique for reduction semantics due to Danvy and Nielsen, refocusing. This work is directly applicable to transforming programs into monadic normal form.