Shivendra S. Panwar, Polytechnic University, New York; Shiwen Mao, Polytechnic University, New York; Jeong-dong Ryoo, Electronics and Telecommunications Research Institute, South Korea; Yihan Li, Polytechnic University, New York
From these assumptions comes the fundamental structure of the Internet: a packet switched communications facility in which a number of distinguishable networks are connected together using packet communications processors called gateways which implement a store and forward packet forwarding algorithm.
David D. Clark
The Internet
The Internet is a global information system consisting of millions of computer networks around the world. Users of the Internet can exchange email, access resources on a remote computer, browse web pages, stream live video or audio, and publish information for other users. With the evolution of e-commerce, many companies are providing services over the Internet, such as on-line banking, financial transactions, shopping, and online auctions. In parallel with the expansion in services provided, there has been an exponential increase in the size of the Internet. In addition, various types of electronic devices are being connected to the Internet, such as cell phones, personal digital assistants (PDA), and even TVs and refrigerators.
Today's Internet evolved from the ARPANET sponsored by the Advanced Research Projects Agency (ARPA) in the late 1960s with only four nodes. The Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite, first proposed by Cerf and Kahn in [1], was adopted for the ARPANET in 1983. In 1984, NSF funded a TCP/IP based backbone network, called NSFNET, which became the successor of the ARPANET. The Internet became completely commercial in 1995.
An antenna (or hydrophone) is a linear device that forms the interface between free propagation of electromagnetic waves (or pressure waves) and guided propagation of electromagnetic signals. An antenna can be used either to transmit an electromagnetic signal or to receive an electromagnetic signal. During transmission, the function of the antenna is to concentrate the electromagnetic wave into a beam that points in the desired spatial direction. During reception, the function of the antenna is to collect the incident signal and deliver it to the receiver. An important theorem of antenna theory, known as the reciprocity theorem, allows us to deal with the antenna either as a transmitting device or as a receiving device, depending on which is more convenient for a particular discussion.
The only aspect of antennas that we shall study is the propagation and diffraction of waves traveling between the antenna aperture and points far away. During transmission, an antenna creates a time-varying and spatially distributed signal across its aperture to form a wave that will propagate as desired through free space. The spatial distribution of the signal across the antenna aperture is called the illumination function. The distribution in the far field of the waveform amplitude and phase over the spherical coordinate angles is called the antenna radiation pattern or the antenna pattern. The relationship between the antenna pattern and the aperture illumination function can be described with the aid of the two-dimensional Fourier transform.
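The aperture/far-field Fourier relationship described above can be illustrated numerically. The sketch below is a hypothetical one-dimensional cut, not the full two-dimensional transform: it evaluates the discrete Fourier transform of a sampled aperture illumination function and shows that a uniform illumination produces a pattern that peaks broadside with sinc-like sidelobes. The function name and the normalized angle variable are illustrative choices.

```python
import cmath

def far_field_pattern(illumination, n_angles=64):
    # Discrete Fourier transform of a 1-D aperture illumination
    # function: a sketch of the aperture/far-field Fourier relation.
    n = len(illumination)
    pattern = []
    for k in range(-n_angles // 2, n_angles // 2):
        u = k / n_angles  # normalized angle variable
        s = sum(a * cmath.exp(-2j * cmath.pi * u * x)
                for x, a in enumerate(illumination))
        pattern.append(abs(s))
    return pattern

# Uniform illumination across a 16-element aperture: the pattern
# magnitude peaks at broadside (u = 0, the center of the sweep).
pattern = far_field_pattern([1.0] * 16)
```

Tapering the illumination toward the aperture edges would lower the sidelobes at the cost of a wider main lobe, which is the usual trade-off in aperture design.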
The flow on a TCP connection should obey a ‘conservation of packets’ principle. … A new packet isn't put into the network until an old packet leaves.
Van Jacobson
Objectives
TCP connection establishment and termination.
TCP timers.
TCP timeout and retransmission.
TCP interactive data flow, using telnet as an example.
TCP bulk data flow, using sock as a traffic generator.
Further comparison of TCP and UDP.
Tuning the TCP/IP kernel.
Study TCP flow control, congestion control, and error control using DBS and NIST Net.
TCP service
TCP is the transport layer protocol in the TCP/IP protocol family that provides a connection-oriented, reliable service to applications. TCP achieves this by incorporating the following features.
Error control: TCP uses cumulative acknowledgements to report lost segments or out-of-order reception, and a timeout and retransmission mechanism to guarantee that application data is received reliably.
Flow control: TCP uses sliding window flow control to prevent the receiver buffer from overflowing.
Congestion control: TCP uses slow start, congestion avoidance, and fast retransmit/fast recovery to adapt to congestion in the routers and achieve high throughput.
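The slow start and congestion avoidance phases listed above can be sketched in a few lines. This is a deliberately simplified model, counting the congestion window in whole segments per round-trip time and omitting loss events and fast retransmit/fast recovery entirely; the function name and the default ssthresh of 16 segments are illustrative choices.

```python
def cwnd_evolution(rtts, ssthresh=16, init=1):
    # Sketch of TCP congestion-window growth (in segments):
    # exponential growth during slow start up to ssthresh, then
    # linear growth during congestion avoidance. Loss handling
    # is omitted for clarity.
    cwnd, history = init, []
    for _ in range(rtts):
        history.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return history

# The first RTTs double the window (slow start); once the window
# reaches ssthresh, each RTT adds one segment (congestion avoidance).
print(cwnd_evolution(8))  # → [1, 2, 4, 8, 16, 17, 18, 19]
```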
The TCP header, shown in Fig. 0.16, consists of fields for the implementation of the above functions. Because of its complexity, TCP only supports unicast, while UDP, which is much simpler, supports both unicast and multicast. TCP is widely used in Internet applications, e.g., the Web (HTTP), email (SMTP), file transfer (FTP), remote access (telnet), etc.
A two-dimensional radar can be described as a device for forming a two-dimensional convolution of the reflectivity density function of an illuminated scene with a two-dimensional function, called an ambiguity function, that is associated with the radar waveform. A radar uses the delay or the doppler of the received waveform as a means of obtaining surveillance information, and requires the use of waveforms that are carefully designed to provide adequate resolution and avoid ambiguity. The major analytical tool used to design such waveforms is the ambiguity function. The ambiguity function is a two-dimensional function defined as a functional of the one-dimensional waveform. Every one-dimensional waveform of energy Ep is associated with a two-dimensional ambiguity function of energy Ep2, which provides a surprising amount of insight into the performance of the waveform.
We shall introduce the ambiguity function formally here, proving a number of its mathematical properties. We will then study the ambiguity functions of some interesting waveforms. Later, in Chapter 7, we shall study the performance of imaging radars from the point of view of the ambiguity function, identifying the coordinates of the ambiguity function with the delay and the doppler of an echo. In Chapter 12, we shall study the performance of search radars from the point of view of the ambiguity function.
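A discrete sketch makes the definition concrete. The function below evaluates chi(tau, nu) = sum_n s[n] conj(s[n - tau]) exp(j 2 pi nu n / N) for a sampled waveform, a common discrete analogue of the continuous definition; the function and variable names are illustrative. For a rectangular pulse, the value at the origin equals the pulse energy Ep, and no other point of the ambiguity surface exceeds it.

```python
import cmath

def ambiguity(s, tau, nu_cycles):
    # Discrete ambiguity function of a sampled waveform s:
    # correlate s with a copy of itself delayed by tau samples
    # and doppler-shifted by nu_cycles cycles across the record.
    n_len = len(s)
    total = 0j
    for n in range(n_len):
        if 0 <= n - tau < n_len:
            total += (s[n] * s[n - tau].conjugate()
                      * cmath.exp(2j * cmath.pi * nu_cycles * n / n_len))
    return total

pulse = [complex(1.0)] * 8       # rectangular pulse, energy Ep = 8
peak = ambiguity(pulse, 0, 0)    # chi(0, 0) equals the pulse energy
```

The central-peak property, |chi(tau, nu)| <= chi(0, 0) = Ep, is one of the mathematical properties developed formally in the chapter.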
Suppose that one is given several images of a two-dimensional (or a multidimensional) object, but that the detail of each of these images is limited in some way. For example, the images may be projections of a multidimensional object onto a lower-dimensional space. By using sophisticated signal-processing techniques, many such limited images of a common object can be combined to construct a single enhanced image.
Techniques for combining multiple one-dimensional projections into a single two-dimensional image are known collectively as tomography (Greek tomos: a cut, slice + -graphy). The term may also be used to describe techniques for combining several poor images into a single improved image. This is different from the practice of enhancing a single image by signal-processing techniques, although, of course, the two tasks are closely related.
The most widespread form of tomography, known as projection tomography, reconstructs an image from its projections. Projection tomography has a simple mathematical structure. The most familiar instance of projection tomography uses X-rays as the source of illumination and X-ray absorption as the observed phenomenon. In this case, the way the X-ray illumination is used is quite different from the case of molecular imaging where the observation in the far field is based on scattering of the illuminating X-rays. In projection tomography, the observation is based on attenuation of the illuminating rays in the geometrical-optics approximation.
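The measurement model of projection tomography can be sketched for a discrete image: in the geometrical-optics approximation, each projection value is a line integral (here, a line sum) of the image along one ray direction. The sketch below takes only the two axis-aligned projections of a small scene; the function name and the toy scene are illustrative, and a real reconstruction would use many projection angles.

```python
def project(image, axis):
    # Line-sum projection of a discrete 2-D image: sum pixel
    # values along rows (axis=0) or columns (axis=1). This is
    # the discrete analogue of the attenuation line integral.
    if axis == 0:
        return [sum(row) for row in image]
    return [sum(col) for col in zip(*image)]

scene = [[0, 1, 0],
         [2, 3, 1],
         [0, 1, 0]]
horizontal = project(scene, 0)  # row sums    → [1, 6, 1]
vertical = project(scene, 1)    # column sums → [2, 5, 1]
```

Note that every projection of the same scene has the same total (here, 8): each one integrates all of the scene's mass, just along a different family of rays.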
Now we shall introduce a broader view of the topic of image formation. Rather than think of finding the single “correct” image, we shall consider a set of possible images. The task of image formation, then, is to choose one image from the set of all possible images. The chosen image is the one that best accounts for a given set of data, usually a noisy and incomplete set of data. This set of images may be the set of all real-valued, two-dimensional functions on a given support, or the set of all nonnegative real-valued, two-dimensional functions on that given support. For this prescription to be followed, one must define criteria upon which to decide which image best accounts for a given set of data. A powerful class of optimality criteria is the class of information-theoretic criteria. These are optimality criteria based on a probabilistic formulation of the image-formation problem.
Elementary imaging techniques may be built on the idea of estimating the value of each image pixel separately. Within each cell of the image, the data are processed to estimate the signal within that cell without consideration of the signal in all other cells. This criterion does have some intuitive appeal and leads to relatively straightforward and satisfactory computational procedures.
A coherent image-formation system is degraded if the coherence of the received waveform is imperfect. This reduction in coherence is due to an anomalous phase angle in the received signal, which is referred to as phase error. Phase errors can be in either the time domain or the frequency domain, and they may be described by either a deterministic model or a random model. Random phase errors in the time domain arise because the phase varies randomly with time. Random phase errors in the frequency domain arise because the phase of the Fourier transform varies randomly with frequency. We will consider both unknown deterministic phase errors and random phase errors in both the time domain and the frequency domain.
Random phase errors in the time domain appear as complex time-varying exponentials multiplying the received complex baseband signal and are called phase noise. Phase noise may set a limit on the maximum waveform duration that can be processed coherently. Random phase errors in the frequency domain appear as complex exponentials multiplying the Fourier transform of the received complex baseband signal and are called phase distortion. Phase distortion may set a limit on the maximum bandwidth that a single signal can occupy. We shall primarily study phase noise in this chapter. Some of the lessons learned from studying phase noise can be used to understand phase distortion.
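The loss of coherent integration gain caused by phase noise can be sketched numerically. The function below sums unit-amplitude complex samples whose phases wander randomly; with no jitter the coherent sum has magnitude equal to the number of samples, while random phase errors pull the phasors out of alignment and reduce the gain. The Gaussian phase model, the function name, and the fixed seed are illustrative assumptions.

```python
import cmath
import random

def coherent_gain(n, phase_jitter_rad, seed=1):
    # Sum n unit-amplitude complex samples whose phases are drawn
    # from a zero-mean Gaussian with the given rms jitter (radians).
    # This models coherent integration in the presence of phase noise.
    rng = random.Random(seed)
    total = sum(cmath.exp(1j * rng.gauss(0.0, phase_jitter_rad))
                for _ in range(n))
    return abs(total)

ideal = coherent_gain(100, 0.0)  # perfect coherence: magnitude 100
noisy = coherent_gain(100, 1.0)  # 1-rad rms phase noise: reduced gain
```

This is the sense in which phase noise sets a limit on the maximum waveform duration that can be processed coherently: the longer the integration, the more the accumulated phase error erodes the gain.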
Some types of large data sets have a hierarchical substructure that leads to new kinds of surveillance algorithms for tasks such as data combination and tracking. The term data combination typically refers to a task in which several estimates based on partial data are combined. Perhaps several snapshots of the same scene or object are available, and multiple measurements or estimates are to be combined or averaged in some way. The topics of this chapter refer to partially processed data, and the methods may be used subsequent to correlation or tomographic processing.
Various sets of data may be combined either before detection or after detection. The combination of closely associated data prior to detection is called integration, and can be either coherent integration or noncoherent integration. The combination of data from multiple sensors, usually in the form of parameters that have been estimated from the data, is called data fusion. This term usually conveys an emphasis that there is a diversity of types of data. Sometimes, only a tentative detection is made before tracking, while a hard detection is deferred until after tracking. In some applications, this is called “track before detect.” In this chapter, we shall study the interplay between the functions of detection, data fusion, and tracking, which leads us to think of sensor processing on a much longer time scale than in previous chapters.
A conventional radar consists of a transmitter that illuminates a region of interest, a receiver that collects the signal reflected by objects in that region, and a processor that extracts information of interest from the received signal. A radar processor consists of a preprocessor, a detection and estimation function, and a postprocessor. In the preprocessor, the signal is extracted from the noise, and the entire signal reflected from the same resolution cell is integrated into a single statistic. An imaging radar uses the output of the preprocessor to form an image of the observed scene for display. A detection radar makes further inferences about the objects in the scene. The detection and estimation function is where individual target elements are recognized, and parameters associated with these target elements are estimated. The postprocessor refines postdetection data by establishing track histories on detected targets.
This chapter is concerned with the preprocessor, which is an essentially linear stage of processing at the front end of the processing chain. The radar preprocessor usually consists of the computation of a sample cross-ambiguity function in some form. Sometimes the computation is in such a highly approximated form that it will not be thought of as the computation of a cross-ambiguity function. The output of the preprocessor can be described in a very compact way, provided that several easily satisfied approximations hold. The output is the two-dimensional convolution of the reflectivity density of the radar scene and the ambiguity function of the transmitted waveform.
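In its most approximated form, the preprocessor's computation reduces to a matched filter: the zero-doppler cut of the sample cross-ambiguity function is simply the cross-correlation of the received signal with the transmitted waveform, and its peak locates the echo delay. The sketch below illustrates this with real-valued samples; the function name and the toy waveform are illustrative.

```python
def matched_filter(received, reference):
    # Cross-correlation of the received signal with the transmitted
    # waveform: the zero-doppler cut of the sample cross-ambiguity
    # function. The peak index estimates the echo delay in samples.
    out = []
    for lag in range(len(received) - len(reference) + 1):
        out.append(sum(received[lag + n] * reference[n]
                       for n in range(len(reference))))
    return out

waveform = [1, -1, 1, 1]                  # transmitted pulse
echo = [0, 0, 0] + waveform + [0, 0, 0]   # echo delayed by three samples
correlation = matched_filter(echo, waveform)
delay = max(range(len(correlation)), key=correlation.__getitem__)  # → 3
```

Evaluating the same correlation against doppler-shifted copies of the reference, one shift per hypothesized velocity, fills out the full two-dimensional cross-ambiguity surface.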
The dream behind the Web is of a common information space in which we communicate by sharing information.
Tim Berners-Lee
Objectives
The HyperText Transfer Protocol and the Apache web server.
The Common Gateway Interface.
The Dynamic Host Configuration Protocol.
The Network Time Protocol.
The Network Address Translator and the Port Address Translator.
An introduction to socket programming.
The HyperText Transfer Protocol
The HyperText Transfer Protocol and the Web
In the early days of the Internet, email, FTP, and remote login were the most popular applications. The first World Wide Web (WWW) browser was written by Tim Berners-Lee in 1990. Since then, WWW has become the second “Killer App” after email. Its popularity resulted in the exponential growth of the Internet.
In WWW, information is typically provided as HyperText Markup Language (HTML) files (called web pages). WWW resources are specified by Uniform Resource Locators (URL), each consisting of a protocol name (e.g., http, rtp, rtsp), a “://”, a server domain name or server IP address, and a path to a resource (an HTML file or a CGI script (see Section 8.2.2)). The HyperText Transfer Protocol (HTTP) is an application layer protocol for distributing information in the WWW. In common with many other Internet applications, HTTP is based on the client–server architecture. An HTTP server, or a web server, uses the well-known port number 80, while an HTTP client is also called a web browser.
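The URL structure and client–server exchange described above can be made concrete with a short sketch. Using only the Python standard library, the function below splits a URL into its parts and formats the raw GET request a browser would send to the server's well-known port 80; the function name is an illustrative choice, and real browsers send additional headers.

```python
from urllib.parse import urlsplit

def build_get_request(url):
    # Split the URL into protocol, server name, and resource path,
    # then format the request line and the Host header that an
    # HTTP/1.1 client sends to the web server.
    parts = urlsplit(url)
    path = parts.path or "/"
    return "GET {} HTTP/1.1\r\nHost: {}\r\n\r\n".format(path, parts.hostname)

request = build_get_request("http://www.example.com/index.html")
```

The server's reply consists of a status line (e.g., "HTTP/1.1 200 OK"), response headers, a blank line, and then the requested HTML file itself.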
The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level.
J. H. Saltzer, D. P. Reed and D. D. Clark
Objectives
Study sock as a traffic generator, in terms of its features and command line options.
Study the User Datagram Protocol.
IP fragmentation.
MTU and path MTU discovery.
UDP applications, using the Trivial File Transfer Protocol as an example.
Compare UDP with TCP, using TFTP and the File Transfer Protocol.
The User Datagram Protocol
Since the Internet protocol suite is often referred to as TCP/IP, UDP may seem to suffer from being considered the ‘less important’ transport protocol. This perception is changing rapidly as realtime services that use UDP, such as Voice over IP (VoIP), become an important part of the Internet landscape. These emerging UDP applications will be further explored in Chapter 7.
UDP provides a means of multiplexing and demultiplexing for user processes, using UDP port numbers. It extends the host-to-host delivery service of IP to the application-to-application level. There is no other transport control mechanism provided by UDP, except a checksum that protects the UDP header (see Fig. 0.14), UDP data, and several IP header fields.
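That checksum is the standard one's-complement Internet checksum. The sketch below computes it over a data buffer and shows how a receiver verifies it; note that the real UDP checksum is computed over a pseudo-header containing the IP addresses as well as the UDP header and data, which is omitted here for brevity.

```python
def internet_checksum(data):
    # One's-complement Internet checksum: sum the data as 16-bit
    # big-endian words (padding with a zero byte if the length is
    # odd), fold the carries back in, and complement the result.
    if len(data) % 2:
        data = data + b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

message = b"hello\x00"  # even-length example data
csum = internet_checksum(message)
# Receiver check: once the checksum is included, the folded
# one's-complement sum is 0xFFFF, so the checksum of the whole
# buffer comes out zero.
valid = internet_checksum(message + csum.to_bytes(2, "big")) == 0
```

Because the arithmetic is a one's-complement sum, the sender and receiver can run exactly the same routine: the sender stores the complement of the sum, and the receiver simply checks that everything sums to zero.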
We are now in a transition phase, just a few years shy of when IP will be the universal platform for multimedia services.
H. Schulzrinne
Objectives
Multicast addressing.
Multicast group management.
Multicast routing: configuring a multicast router.
Realtime video streaming using the Java Media Framework.
Protocols supporting realtime streaming: RTP/RTCP and RTSP.
Analyzing captured RTP/RTCP packets using Ethereal.
IP multicast
IP provides three types of service: unicast, multicast, and broadcast. Unicast is a point-to-point type of service with one sender and one receiver. Multicast is a one-to-many or many-to-many type of service, which delivers packets to multiple receivers. In a multicast group consisting of a number of participants, any packet sent to the group is received by all of the participants. In a broadcast, IP datagrams are sent to a broadcast IP address and are received by all of the hosts.
Figure 7.1 illustrates the differences between multicast and unicast. As shown in Fig. 7.1(a), if node A wants to send a packet to nodes B, C, and D using the unicast service, it sends three copies of the same packet, each with a different destination IP address. Each copy of the packet may then follow a different path from the other copies. To provide a teleconferencing-type service for a group of N nodes with unicast alone, N(N - 1)/2 point-to-point paths are needed for a full connection.
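The full-mesh count grows quadratically, which is the motivation for multicast. The tiny sketch below (the function name is illustrative) evaluates N(N - 1)/2 for a few group sizes:

```python
def unicast_paths(n):
    # Point-to-point paths needed for a full mesh of n conference
    # participants when only unicast is available: n(n - 1)/2.
    return n * (n - 1) // 2

# With multicast, a single group address replaces all of these
# pairwise paths, regardless of the group size.
print(unicast_paths(4))   # → 6
print(unicast_paths(10))  # → 45
```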