There is a description available on the WWW.
With respect to the Internet, it would be interesting to investigate whether the resource consumption of the net and the delay variations for individual packets (which are the "messages" in the Internet today) scale up in an acceptable manner.
In this context, it may be useful to describe the development from the Plan 55 A system towards the Internet as it is known today.
From Plan 55 A towards the Internet
From Plan 55 A towards the ARPANet
In 1969, Len Kleinrock et al. started the first four of a number of Interface Message Processors (IMPs), which established the ARPANet. In a sense, the ARPANet was an electronic reimplementation of the Plan 55 A system, where the IMPs served as a replacement for the former electromechanical switching offices.
Heart, Frank; Kahn, Robert; Ornstein, Severo; Crowther, William; Walden, David (1970). "The Interface Message Processor for the ARPA Computer Network". 1970 Spring Joint Computer Conference. AFIPS Proc. 36: 551–567. This paper describes the IMP itself and its host interface.
Hence, for a more detailed discussion, two cases are to be distinguished.
A good description of the ARPANet (in German) can be found in a work by Stefan Kinkartz. Refer to this description of the ARPANet and particularly to this sketch by Larry Roberts (from Larry Roberts' ARPANET page), which shows the basic architecture. Hosts and terminals were connected to IMPs; the set of interconnected IMPs established the so-called "subnet". The host access protocol AHIP (ARPANET Host-IMP Protocol) is specified in RFC 802, "The ARPANET 1822L Host Access Protocol".
In 1974, Vint Cerf published a "Specification of Internet Transmission Control Program" as RFC 675. See also [BELS74A]: D. Belsnes, "Flow control in packet switching networks," INWG Note #63, October 1974.
Congestion and Buffer Bloat
It is interesting to listen to the talk by Jim Gettys, "Bufferbloat: Dark Buffers in the Internet", and to have a look at John Nagle's paper "On Packet Switches With Infinite Storage", which appeared as RFC 970.
I have yet to understand at which point TCP finally adopted the sliding window scheme including Go Back N, as it was used in the 1980s in point-to-point protocols, e.g. in Kermit. In RFC 793, TCP adopts the sliding window scheme; the Go Back N scheme, however, was not yet adopted in the RFC. To the best of my knowledge, the retransmission scheme described in RFC 793, which uses individual retransmission timers and copies for each sent packet, was never used. All practical implementations adopted a Go Back N scheme.
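To illustrate the difference, here is a minimal sketch of a Go Back N sender; the send/recv_ack interface is purely hypothetical and not any historical implementation. On a timeout, the whole window starting at the oldest unacknowledged packet is resent, in contrast to the per-packet timers and copies described in RFC 793.

```python
# A minimal Go Back N sender sketch. `send(seq, data)` transmits one
# packet, `recv_ack(timeout)` returns a cumulative ack or None on
# timeout -- both are assumed interfaces, invented for this sketch.

def go_back_n_send(packets, send, recv_ack, window=4, timeout=1.0):
    base = 0        # sequence number of the oldest unacknowledged packet
    next_seq = 0    # sequence number of the next packet to transmit
    while base < len(packets):
        # fill the window
        while next_seq < base + window and next_seq < len(packets):
            send(next_seq, packets[next_seq])
            next_seq += 1
        ack = recv_ack(timeout)     # cumulative ack, or None on timeout
        if ack is None:
            next_seq = base         # Go Back N: resend the whole window
        else:
            base = max(base, ack + 1)
```

Note that a single lost packet makes the sender retransmit everything from `base` onward, which is exactly the wasteful behaviour that later selective schemes tried to avoid.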
Some Remarks on Flow Control
In RFC 675 we read the following about flow control:
4.5.3 The RECEIVE Side

At the receiving side there are two requirements for buffering:

(1) Rate Discrepancy: If the sender produces data much faster or much slower than the receiver consumes it, little buffering is needed to maintain the receiver at near maximum rate of operation. Simple queuing analysis indicates that when the production and consumption (arrival and service) rates are similar in magnitude, more buffering is needed to reduce the effect of stochastic or bursty arrivals and to keep the receiver busy.

(2) Disorderly Arrivals: When packets arrive out of order, they must be buffered until the missing packets arrive so that packets (or letters) are delivered in sequence. We do not advocate the philosophy that they be discarded, unless they have to be, otherwise a poor effective bandwidth may be observed. Path length, packet size, traffic level, routing, timeouts, window size, and other factors affect the amount by which packets come out of order. This is expected to be a major area of investigation.
Actually, the second part is important. Of course, there may be a rate discrepancy between sender and receiver. However, using buffering and flow control to overcome this is quite cumbersome. Basically, flow control can cause a sender to pause or to continue. So, when a sender's rate is too fast for the receiver, flow control usually does not slow down the sender's rate. It rather causes the sender to send data at its own rate in bursts and to pause now and then, so that the average arrival rate at the receiver can be handled.
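This burst-and-pause behaviour can be seen in a toy discrete-time model (all numbers are invented): the receiver grants a full window of credit only once its buffer has drained, and the sender bursts at link speed whenever credit is available.

```python
# A toy model of window flow control. The sender is never slowed down
# to the receiver's rate: it bursts at link speed while the advertised
# window is open and pauses otherwise. All rates are made-up numbers.

def simulate(ticks, link_rate=10, consume_rate=2, window=8):
    buffered = 0          # packets sitting in the receiver's buffer
    credit = window       # open part of the advertised window
    sent = []
    for _ in range(ticks):
        burst = min(link_rate, credit)   # burst at link speed, or pause (0)
        credit -= burst
        buffered += burst
        buffered = max(0, buffered - consume_rate)
        if buffered == 0:
            credit = window              # window update: buffer has drained
        sent.append(burst)
    return sent
```

With the numbers above, the sender alternates a burst of 8 packets with three idle ticks, so the average rate equals the receiver's consumption rate of 2 packets per tick, while the instantaneous rate is anything but smooth.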
Unfortunately, in a TCP connection we know neither how fast data is offered to the network by the application nor how fast the application can process incoming data. We don't even know for sure the throughput of the links involved, which may even be volatile in some cases.
As a result, naive approaches to using flow control for rate adaptation may cause quite funny effects. (Actually, the quoted text proposes some flow control scheme; we don't discuss it in detail here.)
In summary, flow control is hardly a suitable means for making the sender send at an appropriate rate. Buffering and flow control are mainly used to keep incoming packets and to reconstruct the original data flow.
Some Notes on the Congavoid Paper
For the discussion of the congavoid paper itself, it is inevitable to discuss the terms stationarity and stability in the context of queueing systems. At least as far as the congavoid paper is concerned, the difference between these is not perfectly clear. This might be due to the fact that the term "stability" is coined in the context of system theory, while the term "stationarity" is coined in the context of queueing systems and stochastic processes.
As a work in progress, I'm working on an annotated version of the congavoid paper. In addition, I work on an annotated version of RFC 896.
System Theory and Queueing Theory
System theory and queueing theory pursue completely different goals. In very brief terms, the difference can be described as follows.
- System theory attempts to explain systems with particular respect to the causal dependencies of their internal behaviour; system theory pursues a "white box" approach.
- Queueing theory and stochastic processes attempt to describe systems from an outer perspective; they completely ignore internal structure and causalities. This is a classical "black box" approach.
As a typical example, the reader may consider the queue at the counter of his local supermarket. When this is described by a stochastic process, the individual events are time lines in which the length of the queue is listed with respect to time. The event space is the set of all possible such lists. All information available for this system is the probability measure associated with the subsets of the event space as defined in the event algebra (which is of course defined upon the event space).
Please note: When random experiments are conducted here, the random events are not "a customer enters the queue" or "a customer leaves the queue". A random experiment is conducted by choosing a complete time series; otherwise a customer's entry to the queue and a customer's leaving might belong to completely different random events, and a discussion would compare apples with oranges.
The only information available for the system is the event space, the event algebra, and the probability measure. There is absolutely no information available about the internal structure, the reasons for a particular behaviour, (causal) interdependencies between states, etc.
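For illustration, here is a sketch of the supermarket example; the arrival and service probabilities are of course made up. One call yields one sample path, i.e. one complete time series of queue lengths, and it is this whole list, not an individual arrival or departure, that constitutes a single element of the event space.

```python
import random

# One call = one random experiment = one complete sample path of the
# queue length at the counter. The probabilities are invented numbers.

def sample_path(ticks, p_arrival=0.5, p_service=0.4, seed=None):
    rng = random.Random(seed)
    length, path = 0, []
    for _ in range(ticks):
        if rng.random() < p_arrival:
            length += 1        # a customer enters the queue
        if length > 0 and rng.random() < p_service:
            length -= 1        # the counter serves a customer
        path.append(length)
    return path
```

Note that nothing in the returned list tells us *why* the queue grew or shrank at a given tick; the black box description only delivers the time series itself.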
A very brief introduction into a system's representation in state space can be found here.
With respect to system theory, there is a quite useful Wikipedia article on stability theory.
Stability in System Theory
As described in the Wikipedia article, system theory aims to describe systems in terms of differential or difference equations. In the ideal case, a system's behaviour can be described as a function of time. Please note: this function is deterministic.
A crucial point in stability theory, and perhaps this is a basic misconception in the congavoid paper, is the concept of equilibrium points.
When we describe a system in terms of equations, our equations describe state variables and their interaction in time. In colloquial terms, state variables are energy stores, and the dynamic behaviour of a dynamic system is "doing work", i.e., colloquially, moving energy around from one store to others. This process may come to an end if the system is sufficiently damped, i.e. all energy in the system is eventually exhausted. And it comes to an end when the system reaches an equilibrium point, i.e. the system will stay in this state permanently, unless it is deflected by an external influence.
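The energy-store picture can be sketched with a damped harmonic oscillator: two state variables, x (spring elongation, the potential energy store) and v (velocity, the kinetic energy store), integrated with a naive Euler step. All coefficients are invented. With damping, the state converges to the equilibrium point (0, 0), where, without external deflection, nothing happens any more.

```python
# A damped harmonic oscillator as the simplest "energy store" system.
# With damping d > 0 the stored energy is dissipated and the state
# converges to the equilibrium point (x, v) = (0, 0).

def simulate(x=1.0, v=0.0, k=1.0, d=0.5, dt=0.01, steps=10000):
    for _ in range(steps):
        a = -k * x - d * v    # restoring force plus damping
        x += v * dt
        v += a * dt
    return x, v

def energy(x, v, k=1.0):
    # potential energy in the spring plus kinetic energy of the mass
    return 0.5 * k * x * x + 0.5 * v * v
```

After a sufficiently long run, both stores are practically empty: the system has "cooled down" in the sense used below.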
The reader may find a deeper introduction to state variables in Rudolf Kalman, "On the general theory of control systems", Proc. First IFAC Congress on Automatic Control, Moscow, 1960, Butterworths, London, vol. 1, pp. 481-492.
NB: in the congavoid paper, the term "equilibrium" refers to an equilibrium between the number of packets which enter a path and the number of packets which leave the path. This may well make sense, but it is completely different from what is meant by the term equilibrium in system theory.
Please note that the notion of state variables is not really found in the congavoid paper. It would nevertheless be quite interesting to consider this concept in the TCP context, with packets identified with "energy" (i.e. the ability to do work) and buffers in the system identified with "energy stores", i.e. state variables, etc. The tough problem, however, is then to state the differential equations. Consider e.g. a WiFi system: how long does it take to convey a packet from one buffer to the next?
There is a huge body of work dealing with exactly such questions (by Saverio Mascolo and Laurent Massoulie, only to mention two authors), and the whole work on "fluid flow models", which unfortunately fails at this point. The very reason is that differential equations, though well suited to describe deterministic state transitions, are ill suited to describe non-deterministic ones.
In very brief terms, a system is in an equilibrium state when no energy flow occurs. The system has calmed down, or, thermodynamically, cooled down. In other words: without external intervention, nothing is going to happen at all. In contrast to that, according to the congavoid paper, a TCP in equilibrium state is surprisingly active.
The reader might raise the question of how an undamped pendulum fits into this model. Aleksandr Mikhailovich Lyapunov introduced the concept of Lyapunov stability, which roughly means that a system which starts close to an equilibrium point stays close to it forever, unless it is deflected. So the undamped pendulum follows a closed trajectory in state space, on which energy constantly swings between potential energy and kinetic energy.
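The pendulum (in small-angle, i.e. harmonic, approximation and with invented coefficients) can be sketched as follows; a symplectic Euler step is used so that the numerical scheme itself neither leaks nor adds energy.

```python
# The undamped pendulum in harmonic approximation. The state follows a
# closed orbit: energy swings between the potential and the kinetic
# store, while the total stays (nearly) constant over the whole run.

def pendulum(x=1.0, v=0.0, k=1.0, dt=0.001, steps=10000):
    energies = []
    for _ in range(steps):
        v += -k * x * dt                  # kick: update the velocity first
        x += v * dt                       # drift: then update the position
        energies.append(0.5 * k * x * x + 0.5 * v * v)
    return x, v, energies
```

The total energy never strays far from its initial value: the trajectory is Lyapunov stable, but not asymptotically stable, since the system never settles into the equilibrium point.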
Stability in Queueing Theory
The first question here is what "stability" in queueing systems should be at all. Emilio Leonardi et al. provide a possible answer in "On the Stability of Input-Queued Switches with Speed-Up".
Congestion in General
Particularly in the congavoid paper, "congestion" appears to be a catchall term for "network problems". There is a basic concern, a lack of throughput, and the question arises: what is actually the problem?
So, we gathered: perhaps clumsy traffic is the most important concern in congestion control; actually, this is the reason why the "isarithmic scheme" by Davies, 1972, was abandoned. End-to-end congestion control schemes suffer from the same problem as the isarithmic scheme: the path from a packet's sender to its destination may well carry 100 packets, while the link and hop which immediately follow the sender can only carry one or two packets. Hence, when the sender offers a load of 100 packets to the network, packets will be dropped.
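The first-hop mismatch can be put into numbers with a trivial sketch; the 100 offered packets come from the text, while the first hop's forwarding rate and queue size are invented figures.

```python
# A trivial sketch of the first-hop mismatch: the path as a whole might
# carry the whole offered burst, but the first hop can only forward
# `first_hop_rate` packets during the burst and buffer `first_hop_queue`
# more. Everything beyond that is dropped right at the first hop.

def dropped_at_first_hop(offered, first_hop_rate, first_hop_queue):
    accepted = min(offered, first_hop_rate + first_hop_queue)
    return offered - accepted
```

Offering 100 packets to a first hop that forwards 2 and buffers 8 loses 90 packets immediately, whereas a load matched to the first hop loses none; end-to-end feedback alone cannot see where along the path this happens.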
4:56 wireless just barely works; if you're moving around, it doesn't
6:02 Data vs. telephony
7:02 Brief history of networking
8:52 the really hard and expensive thing is building paths and connections.
9:50 etc.: building paths by operators, a Strowger gear,
11:00 structural problems with phone lines.
13:00 reliability, problem of single points of failure.
14:00 increasing local reliability, component reliability vs. redundancy.
15:35 sending data packets doesn't require building up lines.
16:50 some remarks on ISDN. More speed: Focus on faster setups.
17:31 telephone is the wrong model
18:50 the paths exist, introduction of packet switching
19:30 slide "packet switching".
20:47 "On distributed communication networks"
21:10 Arpanet, in September 1971 BBN started the ARPAnet.
22:07 the ARPAnet was built upon an existing system.
22:56 we see the struggle between packet switching guys and telco guys.
23:40 the ARPANet is an overlay.
24:53 Cerf, Kahn
26:00 The topology is completely irrelevant, only a packet's source and destination are relevant.
26:50 Multinetwork demonstrated overlay network with common encapsulation. "TCP/IP wins".
28:50 all scheduling problems are NP hard.
33:31 TCP/IP problems, particularly: transport protocols hate multicast.
38:00 Turn from communication to dissemination.
40:00 we cannot really do dissemination over conversation.
42:45 Dissemination network
45:00 The focus is no longer on the conversation. When I request some data, anyone can answer. It must be made sure that the answer I get is correct.
46:00 We need some kind of authentication
49:00 Design philosophy: Data is assigned a name, not a location.
51:00 Very interesting: Telcos keep data on the line, because they use central scheduling. Packet switching introduces buffers to avoid central scheduling. So, and that's important, one purpose of buffering is to do the job which is done by scheduling in line switched networks.
1:07:30 Implicit vs. explicit. Quite some data which is made explicit in packet switching networks, e.g. sender, receiver, timestamps, etc., is given implicitly in line switched networks.