Timeout (computing)


[Image: a network timeout preventing a Web browser from loading a page]

In telecommunications and related engineering (including computer networking and programming), the term timeout or time-out has several meanings, including:

  • A network parameter related to an enforced event designed to occur at the conclusion of a predetermined elapsed time.
  • A specified period of time that will be allowed to elapse in a system before a specified event is to take place, unless another specified event occurs first; in either case, the period is terminated when either event takes place. Note: A timeout condition can be canceled by the receipt of an appropriate time-out cancellation signal.
  • An event that occurs at the end of a predetermined period of time that began at the occurrence of another specified event. The timeout can be prevented by an appropriate signal.

Timeouts allow limited resources to be used more efficiently without requiring additional interaction from the agent whose request consumes those resources. The basic idea is that in situations where a system must wait for something to happen, rather than waiting indefinitely, the waiting is aborted after the timeout period has elapsed. This is based on the assumption that further waiting is useless and some other action is necessary.
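
For example, here is a minimal sketch in Python of a timeout on a blocking network read; the host, port, and five-second period are illustrative, not prescriptive:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(5.0)  # abort any blocking call that takes longer than 5 s

    try:
        sock.connect(("example.com", 80))
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = sock.recv(4096)  # waits at most 5 s for data to arrive
    except socket.timeout:
        # Further waiting is assumed useless; take some other action instead.
        print("timed out; falling back or retrying")
    finally:
        sock.close()

Once the timeout fires, it is up to the caller to decide what the "other action" is: retry, fail over, or report an error.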

YouTube Encyclopedic

  • A Brief Prehistory of Voice over IP parts 1 & 2
  • Wireless - Security (HD)
  • What is latency? What affects latency?
  • Computer, Automated Teller, and Office Machine Repairer Career Video
  • IEEE 1588 terminology: ordinary, boundary and transparent clocks
Transcription

>> COHEN: I just want to say that we have two, there are two parts to our talk. I'll give the first part today. And Steve, who sits here, will give the second part tomorrow. We give it together [PAUSE] because we worked on it together for many years in the early '70s, for about two decades or so [PAUSE] okay. Now, before we start, we'd like to gauge your knowledge of the Internet. So here is the surprise quiz. The question is, in "IPv4," what does "v4" indicate? You have time to think about it until the end of the class. The grade will not be given a [PAUSE] okay, the purpose of the talk, or why we like to give it, is because Voice over IP became very, very popular and practically replaced the old telephony [PAUSE] in this business, it is over a billion dollars a year. For those of you who don't remember what a billion is, it's like giga. It's a giga-buck business by now [PAUSE] and not only is it big; it is also growing awfully fast. And here are some quotes from the trade publications. You can see numbers like 30 billion, and 33% per year, and 40% per year, and the [PAUSE] all in all, in 2008 we had 106 million residential VoIP users in the country and the rate just goes up and up. There's a diagram for that, called Moore's Law for Voice over IP. Anything you measure goes exponentially like that, and you get lots of data from IDC [PAUSE] what I want to show next is the very, very beginning. This is a request to start a new program. It's a request from ARPA IPT. IPT stands for Information Processing Technology, or Techniques. And the Director of ARPA IPT sent this letter to the headquarters of ARPA, which at the time was ARPA, and later they changed to DARPA, and later changed back to ARPA. And now it is DARPA for the second time, and god knows for how long. And here the computer office, which was called IPT, sends this memo to the director asking to start a program in computer networks [PAUSE] the purpose for which the ARPAnet was built was resource sharing. I'd like to read you some of that. This is from the same memo, the objective of the program. This is what made the ARPAnet happen. The memo is from the same document, which was written on June 3rd, '68. At least try to remember this date; those are the milestones. So the first [PAUSE] to a contract is [PAUSE] government technology of a DoD and the government [PAUSE] trying to get more communication lines [PAUSE] this is the budget. Over the years, most of the money goes to communication lines. Some of it went to the--it just happens that [PAUSE] that was exactly a million dollars. So, I think this is the answer to why 19. If it was 20, it would be a million and 50 [PAUSE] okay. What I showed before was a timeline of the ARPAnet as predicted. This is the timeline of the voice activity. We have to really start from 1962, when Packet Switching was invented by Lenny Kleinrock at MIT in his PhD dissertation. And Packet Switching is more than saying what routers are; packet switching has to do with [PAUSE] packet switching has to do with all the issues like buffering, and bandwidths, and lots of [INDISTINCT] that we learned the solutions in. It was the dissertation that Lenny Kleinrock worked on with an officemate of his by the name of Larry Roberts; they were both students at MIT, in the same room. Yeah.
And Lenny, who was not as great a programmer as Larry was, needed lots of help, so Larry helped him. And Larry learned about Packet Switching when he was a student, from Lenny [PAUSE] and then when the ARPAnet started, the first problem was, "Will it really work as predicted?" So the first thing that ARPA did was to bring the first IMP to UCLA and establish a network measurement center. In this center, they could verify the theory against reality. And it turns out the reality was not very different from what was predicted. Okay, seven years later, Larry was at ARPA, in a position to start doing it. Larry started by writing these memos that I showed you to his management, asking a million dollars for a network. In retrospect, it was probably one of the best-spent million dollars of the government. Then in 1969, it was born. In '73, they had a crazy idea, which was to use packet speech for voice, or telephony. And anyone who knew anything about telephony could explain why it was impossible, including Bell Labs and AT&T. The one to follow this idea, which came from Bob Kahn, who was a program manager at ARPA at the time--he initiated the NSC Program. NSC stands for Network Secure Communication. The idea was that if we can do it digitally, then we can secure it. At that time, encryption was very complicated and this simplified it a lot. By the end of '73, we had a Network Voice Protocol running on the ARPAnet. In '74 we had several compression techniques like CVSD and LPC. By the end of '74, the Cerf and Kahn paper came out. This is the paper that defined TCP, or defined the Internet [PAUSE] in '75 we demonstrated the voice message system. In '76 we had teleconferencing, which should have [PAUSE]. In '77 Bell Labs--and again, Bell Labs issued a patent on packet speech. Yeah. We thought it was a bit funny that we had already been demonstrating it publicly for three or four years when they invented it later. IP was split from TCP in '78; we defined UDP. In order to demonstrate it to other people, we shot a movie of it [PAUSE] and then the Network Voice Protocol had to be redefined to run over IP. Then in 1995, the term Voice over IP [PAUSE] anyone knows, please let me know. I noticed that [PAUSE] you find [PAUSE] '73 to '95 is like 22 years; 22 years after it was demonstrated, Voice over IP was coined, and it is in wide use, yeah. RTP, the Real Time Protocol, was specified later as an RFC. In IETF, we started working on SIP [PAUSE] the protocol that Steve spearheaded through IETF. The crazy idea of ARPA was to get real-time voice. Real-time voice is different from FTP. We knew how to do FTP, we knew how to send email, but real-time voice is very, very different [PAUSE] one of the more interesting things was that the carriers, meaning AT&T and all the Baby Bells, did not realize that it was competing with them. So, they totally ignored it; even what they called a packet was very different from what we called a packet. The objective of the NSC was to do a proof-of-concept of the feasibility of packet-switching networks for interactive communication among people. Now, when you say interactive communication among people, you should be able to understand every word, recognize the speaker, and maybe even better, tell things like, "He is mad, he is angry, he is happy," whatever. It turns out that there is lots of information carried in speech beyond just understanding the words.
Those were the explicit objectives of the project, in addition to lots of implicit objectives like high quality, meaning that you [INDISTINCT] as a person. Real-time teleconferencing, multimedia, voice-mail, and interoperability with the telephone network--all of these were obviously requirements that unfortunately were not written down any place, but it was obvious that we had to do them [PAUSE] we had to compress the speech a lot. And the reason is that the network that we had at this time had 50 kilobit lines, and there were only three of them across the country. Because there was so little bandwidth across the country, we could not devote 64 kilobits to each telephone call, because this is what a telephone company does, 64 kilobits per second. So, we obviously could not do it. So, we had to do lots of work to reduce the amount of bits that had to be transmitted for each telephone call. And speech compression turned out to be much more complicated than what we thought. People had been working on it from the '60s and the '70s; we used compression that was developed by some people on our project and some people [PAUSE] now, in order to do real-time voice compression, we needed the equivalent of a [INDISTINCT]; it was a supercomputer job, and later there were several other array processors--all computers which had the performance of a supercomputer, with [INDISTINCT] as much memory and [PAUSE] optimized for very specific tasks. It was the SPS-41 [PAUSE] this is a great machine, the first of them a bit less. And the challenge was to reduce 64 kilobits per second to 2.4 kilobits per second [PAUSE] by a factor of about 30, and do it in real-time. Here's a picture of an FPS [PAUSE] array processor. It was a super duper machine because it could do 12 megaflops. Some of you probably still remember when computers were that slow. A little about the ARPAnet. In every site of the ARPAnet there was an IMP, which was a great invention of Wes Clark. The original design of the ARPAnet was that there would be wires between the host computers. So every host computer needed an interface to every other, and they were all different, and it was really complicated. And Wes Clark suggested having a machine, that we later called the IMP, that would be exactly the same every place, except for the local interface to the host computers. So, all the IMPs talked to each other in exactly the same way; it simplified things to no end. The IMPs were interconnected to each other by 50 kilobit lines which [PAUSE] 303 modems; on the one side it was digital at 50 kilobits per second, and the other side was a bunch of several analog telephone lines. I forget, I think they used six or something like that. And it was custom-made for each site. The design was very, very forward-looking. They understood that there would be many computers, so therefore they left six bits [PAUSE] and this really was [PAUSE] which was one in each, one in each site. And, being forward-looking, they also saw that each site might have maybe three computers. But what the heck, they gave it two bits. What was interesting in the ARPAnet was a total of eight bits of address [PAUSE] six for the IMP and two for the host. Twenty years later I was so [PAUSE] two bits, a factor of four; we all know what happened later. About 20 or 30 years later, IPv6 is replacing IPv4's 32 bits with 128. So, you can see this progression.
Moore's Law for Addresses [PAUSE] we have three data points [PAUSE] and I'm sure that if you would keep going [PAUSE] we'd get longer addresses. We are going to talk mainly about the communication aspect of the project and not about the speech compression. However, for those interested in the speech compression, [PAUSE] a book came out by Robert Gray, who is a professor of Signal Processing at Stanford, and the book is called "Linear Predictive Coding and the Internet Protocol." It's an interesting book; we highly recommend it. That's what it looks like on Amazon; it looks slightly different [PAUSE] okay. This is what an IMP looked like. Since IMPs were bought by ARPA, and since ARPA is part of DoD, they were procured according to the way DoD procures things. So there were MIL-SPECs that the IMP had to pass [PAUSE] including dropping it, I think, four feet or something like that, and proving that it could survive nuclear war--you know, the people may melt but the computer will keep working [PAUSE] here in 1969, December '69, the ARPAnet [PAUSE] if you remember, the milestone called for operation in December '69. So, this is practically on target. Now, the [PAUSE] because it has lots of things that are interesting for routing, because it has the circle of three [PAUSE] check lots of algorithms [PAUSE] including the multiple [PAUSE] UCLA was the network measurement center. And every few years, UCLA celebrates the birthday of the Internet. And the birthday is considered to be when the first IMP came to UCLA. On the East Coast, they don't like that, and there are lots of arguments about it. In June '74 [PAUSE] after a while, it became impossible to draw those maps. NCP was the Network Control Protocol. There was no mail for anyone because there was no IP, no TCP, and no UDP. And the NCP did what those protocols did later. NCP made sure of the integrity of the messages that were delivered. It did not allow for errors or flow-control overflow, and it recovered from errors by using retransmission [PAUSE] an ACK; if a message wasn't ACKed in time, it was treated as lost and re-sent, with a checksum, so messages never made it through with errors. [PAUSE] The protocols were defined in BBN Report Number 1822, which was the bible of the early work on the ARPAnet [INDISTINCT] [PAUSE] to control [PAUSE] and that it was okay. The type of service that you could get was what everyone wanted: reliable, error-free, in-order delivery. Error-free is obvious. In-order was important because separate messages arrive at different times; messages could pass each other and arrive in [INDISTINCT] orders. The job of the NCP was to make sure that everything was in order, without errors. No one wants errors in data and no one wants to lose data. This was the only type of service that was offered. The problem was [PAUSE] everyone. But for us, it wasn't good enough for real-time speech. [PAUSE] The way NCP implemented reliability and error-freedom, by repeating and resending and retransmitting, [PAUSE] worked fine for files, but it caused larger delays. So, in regard to these ideas, there were three important properties: [PAUSE] high bandwidth, low latency, and the third is data integrity. If you did FTP, which was what most of us did, you needed both data integrity and high bandwidth. If you did Telnet, which was interactive communication, you needed data integrity and low latency.
But for real-time speech, we needed both high bandwidth and low latency, and along those lines we could loosen the integrity. It's very important to understand that real-time communication is different from non-real-time communication. It took us a while to get it. [PAUSE] In real-time communication, new data obsoletes previous data. The problem is that you have buffer overflow. The host does not take all the packets from the buffers, and the communication lines bring you more data. And so, you have to discard some of the data. [PAUSE] Question: do you discard the latest packets or the oldest packets? The answer--it also took us a while to realize--is that if it is real-time communication, you discard the old packets and keep the new packets. Whereas if it is non-real-time communication, you keep the old packets and discard the new packets, as sketched below. [PAUSE] For example, if you have a weather forecast, I need the forecast for now, and I don't mind if I lost the forecast for yesterday. With FTP of programs, I need the beginning of the program; it cannot [PAUSE] without it. [PAUSE] Another example of a real-time protocol was done in 1971. It was real-time flight simulation with a pilot. [PAUSE] The pilot was at Harvard and the computing was at MIT. And the pilot needs a joystick and some other devices like a throttle, and you have a screen on which he saw the outside view; this is mainly what a pilot got. In order to run it, with the heavy computers being at MIT, we [PAUSE] packetized the data, sent it, gave it to our IMP, which gave it to the BBN IMP on the other side of town, which then gave it to the MIT IMP on the other side of Cambridge. A PDP-6 computed the dynamics of the plane, and the LDS-1, the Line Drawing System-1 of Evans & Sutherland, computed the view in real-time. [PAUSE] At this time it was something like a few hundred lines [PAUSE] pixels, no area, no shading--but it taught us a lot about the issues in real-time. And when both camps saw it, they said, "That's exactly what I need for voice." [PAUSE] but later we did. The flight simulator taught us about delay and jitter. Delay is bad, but jitter is worse. Jitter is the variance of delay [PAUSE] better to drop packets than to retransmit [PAUSE] times that you want the latest information. The other thing that we noticed is that we have to think not only of the bit error-rate but of the packet error-rate. The problem is not whether a bit is bad; the packet is bad. And the packet can be bad because one bit is bad, or all of them. [PAUSE] so expensive to retrieve, we invented another way to communicate, which we [PAUSE] and I showed you before that there was only one "A" for application on the left side of the slide. Now, we have NVP, the Network Voice Protocol, added. [PAUSE] Both of them went through the NCP, except that the NCP did not give us the performance that we needed. What we did here, on the [PAUSE] right side, [PAUSE] bypass. Later it went through [PAUSE] bypass [INDISTINCT] NCP [PAUSE] the IMP, and the receiving IMP, looked at it, had no idea if there were lost messages, had no idea if there were errors; it just gave it to the application as-is and let NVP decide how to handle it. [PAUSE] the operation was called Type 3. Packets of Type 3 were allowed to bypass the NCP. When we did it, BBN, who controlled the network operation, was afraid to let us do it, because they thought if we don't use the NCP, we may overflow the IMP buffers and bring the network down.
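
A toy sketch of that discard policy in Python; the buffer length and packet values are made up for illustration:

    from collections import deque

    def push(buffer, packet, realtime, maxlen):
        # Toy overflow policy: real-time traffic keeps the newest data,
        # non-real-time traffic keeps the oldest.
        if len(buffer) < maxlen:
            buffer.append(packet)
        elif realtime:
            buffer.popleft()       # discard the oldest packet...
            buffer.append(packet)  # ...so the newest "forecast" survives
        # else: non-real-time, so drop the arriving packet and keep old data

    buf = deque()
    for seq in range(10):
        push(buf, seq, realtime=True, maxlen=4)
    print(list(buf))  # [6, 7, 8, 9]: only the most recent packets remain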
[PAUSE] and we had to convince them not to worry about it. This is the structure of the NVP: it looks the same on this side, and on the other side there is a user with a user interface. So, there is a control protocol that establishes the connection, rings the phone, answered or not [PAUSE] what's important is that it handles the [PAUSE] the data protocol could be one of several--like, for example, the LPC and CVSD that I mentioned before, and PVP for video, and PCM and Delta PCM; there is a list of about 20 or 30 different vocoders in existence today. And part of NVP, the Network Voice Protocol, was to agree on which vocoding to use. So, as an extension, we defined NVCP, which was conferencing, and PVP, which was video [PAUSE] packet video. There were several video compression machines on the market; none of them did packets, all of them worked with a stream of bits. [INDISTINCT] we had to address that. [PAUSE] like speech storage, like an answering machine, like you're doing. And it's lots of bits, so we had to send it to a machine called [PAUSE] like online. Like the [PAUSE] it was so expensive and so unbelievable that ARPA could do it in only one place. All in all, the messages may be very long--like, let's say, 60 seconds. One 60-second message [PAUSE] in '09 I called Best Buy and they told me that 500 gigabytes cost $75. I should check [INDISTINCT] again and find out how much it costs now, but that's not, you know, [PAUSE] about 2.7 micro-dollars to go cross-country; in the past--today voice is considered free. The Internet was born in 1974, [PAUSE] 1984. [INDISTINCT] The idea of the Internet can be summarized in one line: [PAUSE] the Internet works with O(N) interfaces; [PAUSE] without the Internet we'd be doing it in O(N2). The way most people did it in the past was connecting each computer to each computer, which is an N2 operation: N2 software, N2 interfaces. Instead we do it with N interfaces. Once you do it for any computer type, you can replicate it. That's basically the idea of the Internet, O(N) rather than O(N2) [PAUSE] we had to replace ARPA's NCP with TCP [PAUSE] TCP is what makes the Internet work. 1/1 of '83: the end of NCP, long live TCP [PAUSE] and the move from the original operation [PAUSE] seems now very fast in comparison to the move to v6, but we know we'll get there one day. The operation with TCP is [PAUSE] TCP can talk to applications, like the [PAUSE] into the IMP, into other applications, and get everything to work together nicely. The type of service that we got was exactly what everyone wanted: reliable, error-free, in-order delivery. No one wants errors in data and no one wants to lose data. And there was no need for any [INDISTINCT] and, again, as before [PAUSE] time. So, UDP was added later [PAUSE] I was asked to do [PAUSE] in UDP [PAUSE] to the right, which was Version 1203. The right side is TCP version four, which only handled applications that don't need real-time [PAUSE] to bypass TCP, you have to use UDP, and UDP is for real-time. So, this is what we called earlier the split of TCP. And this was done in version four. The v4 indicates that this is version four of TCP; there was never an IPv1, IPv2, or IPv3. Everyone who had this answer gets two points [PAUSE] UDP was meant to be for everything that doesn't want TCP [INDISTINCT] all of the real-time Internet and the movies use UDP one way or another.
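
As a hedged illustration of why real-time traffic bypasses TCP's retransmission, here is a minimal UDP send in Python; the address, port, and frame size are placeholders:

    import socket

    # A datagram is sent once: no connection, no ACKs, no retransmission.
    # A lost voice frame is simply superseded by the next one.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame = bytes(160)  # e.g. 20 ms of 8 kHz, 8-bit mono audio
    sock.sendto(frame, ("203.0.113.7", 5004))  # fire and forget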
[PAUSE] And then when we got interested in the history, we decided that the best place to look for the history of Voice over IP was Google. So, we googled Voice over IP and this is what we--so, if you look at other places and what's used in Google, you find out that this is the [PAUSE] packet voice was invented several times independently, including at AT&T in 1977, which was [PAUSE] too bad, but the next generation will know what is in Google as if it were what really happened [PAUSE] well, about IP: this is the other IP, the IP that lawyers like, the intellectual property. What happens is, nowadays in the 2000s companies sue for infringement that occurred in the '90s, on patents that were issued in the '80s, about work that we did in the '70s [PAUSE] what we did was developed and funded by ARPA and put in the public domain [PAUSE] what bugs me personally is to see how many patents there are on Voice over IP nowadays and how much money is spent on lawyers [PAUSE] owned it. Voice over IP and Packet Video are major components of the Internet. All of this information revolution, this communication revolution, was developed and demonstrated publicly by ARPA in the '70s. Advances in computing, communication, and storage made it practical and ubiquitous [PAUSE] the carriers don't think that we are crazy. The carriers are converging with us. End of Part One. Tomorrow, the same place. He will show you a movie demonstrating a CVSD teleconference that we did in [PAUSE] and also discuss the evolution of voice protocols for packet switching networks, like Voice over IP and like Skype, which bypass the entire standard telephone network long-distance toll system [PAUSE] Questions, comments? Excellent question. If you note… >> Repeat the question. >> COHEN: I'm sorry, the question was: when we did it in the '70s, it was unbelievably impractical, and we needed supercomputers and we needed memories that didn't exist. The question was, what made us do it? Is it that we foresaw the future, or some other belief? That's the question? >> That's the question. >> COHEN: Okay. The thing is that the purpose of the NSC was to prove proof of concept. It was obvious to people at ARPA--it wasn't obvious to other people, it was obvious to people at ARPA, especially to Bob Kahn--that computers would be cheap enough to do this thing. So, if you notice, there was a big gap between when we finished working on all that stuff in the mid-80s and the late '90s, when it started being picked up by industry. And the hibernation was caused by waiting for the price to be right. And I'm glad that ARPA didn't try to work on it but let the industry get to it by itself. [PAUSE] I don't think it started around the '70s and I'm not sure if the satellite was--it was all digital. They used just the time, the signs, something; I'm sorry, did I say some--I don't remember. Yeah, there's speech detection, and the world was doing all kinds of interesting things for satellites. And inside the network, I think T-1 was starting to be digital. It took a long time for all the T-1's to be digital and all the T-0's to be digital. [PAUSE] Yes? [INDISTINCT] it was not a very--[INDISTINCT] as a standard offering; to get it, like getting the lines between [INDISTINCT], was special. I apologize. >> CASNER: I'm Steve Casner. I'm giving the second half of this talk. Danny Cohen gave the first half yesterday.
His part concentrated on the initiation of the ARPAnet, the building of the ARPAnet, and more on the concepts involved in getting packet audio and video to work initially in our experiments. My talk is going to be more about the protocol development, both the roots of it in the work we did in the '70s, and then continuing on into the real-time transport protocol that is in use today. So, the timeline for this work began in the '70s. The ARPAnet began at the end of the '60s, in 1969. But the work we did with packet audio and video together at ISI and with other contractors around the ARPAnet was in the late '70s and early '80s: the development of the coding algorithms, CVSD and LPC, plus development of the protocols to support transport of that data across the network. We covered both real-time transport and voice messaging, and we did not do just point-to-point calls, but also conferencing with multiple sites. And the talk today will include a movie that we shot in 1978, showing a demonstration of the conferencing. Continuing from that--in 1978 was when the original TCP was split into IP and TCP. And from that point, we began to use what really was voice over IP, even though it was not until a couple of decades later that the term Voice over IP was coined. We developed the network voice protocol running over IP. We then began a process to standardize the voice protocols, working towards RTP in 1992, and then those specifications in the later years were taken up as parts of a complete system to make Voice over IP. One thing I wanted to point out on this timeline is a gap between 1981 and 1992, and I'll make a point of that on a later slide. Talking about conferencing that began on the ARPAnet: the ARPAnet had a fairly simple protocol stack compared to what we see today in current networking. The IMP, Interface Message Processor, of the ARPAnet was the packet switching node. It was involved not only with forwarding packets the way that we do between IP routers today, but also in some of the flow control. It worked together with the network control protocol in the host itself. So, this box at the top represents a host and the box at the bottom is the IMP, the network node. That Network Control Protocol is the rough equivalent of today's TCP. As I say, at that time there was no IP. This was before IP and TCP were developed, specified or initially described in 1974. The service provided by NCP was one of a reliable byte stream or reliable data stream. It handled flow control so that the network nodes would not be overloaded. It handled retransmission when necessary for recovering from packets that were lost, and basically, as I say, it provided a reliable communication path. So, the service that was provided was--and for those who were here yesterday, I am intentionally repeating a little bit of Danny's talk as a precursor to the movie--the service that was provided, as I say, was a reliable transmission service, because, as we all know, nobody would want errors to occur in their data, nobody would want their data to be lost. But of course, that's not true for audio transmission, for interactive audio. It's more important that the delay be low than that all the bits get there, because you can actually recover from some amount of packet loss, especially when the losses are of short duration, by bridging the gap. There's so much redundancy in the audio sound that you can afford to lose some.
So, what we had to do in the ARPAnet case was to bypass the NCP, bypass the reliable transmission mechanisms, and instead go directly to the IMP with a different form of the packets. As I said, the IMPs were providing part of the flow control mechanism in conjunction with NCP in the host, so we needed to use the Type 3 packet, which was sent without flow control through the ARPAnet nodes. As Danny mentioned yesterday, the BBN Network Operations Center was reluctant to let anyone use that mode of transmission because they were concerned that we could just overload the packet switches in the network, cause buffers to fill up, and cause the thing to collapse. So it did take a while, arranging for a specific experiment to be done on Tuesday at three o'clock where we could send some packets, till eventually they became comfortable with that notion. Today, we have a high volume of audio and video traffic going along with all the other kinds of traffic we have on the Internet, and it has become more or less routine. Individual streams are no longer such a big part of the capacity of a circuit, typically. So, in that picture I was showing, our network voice protocol connects directly to the IMP driver to put the packets into the network. The network voice protocol is comprised of a couple of pieces, a data protocol and a control protocol; that separation of data and control is something that we see in many areas. It was an important aspect of this protocol design as well, because there are really different requirements for the data transmission and the controls that go along with it. Also in this slide I'm talking about different forms of voice coding going over the data protocol, LPC and CVSD, and I'll talk a little more about those in a moment; but also we note here PVP, which was the video extension, following roughly the same data protocol but extended to carry video packets, and over here on the control side, NVCP, which was our extension for conferencing. So, those two are described here. The control protocol for NVCP included a floor control mechanism, handing off the floor to allow different people to speak, as opposed to just a shared floor, and I'll say a little more about why we needed that in a moment. It provided some ancillary control functions like voting, and you'll see that demonstrated in the movie as well. And I mentioned that, in addition to real-time transmission of the voice, we were storing some of it for voice messaging. That was in conjunction with text messaging at the time. On the video side, we did use PVP for multimedia teleconferencing support. Actually we had a few rooms set up like this one and used the packet network in the early '80s for that kind of function. The video has a few differences in characteristics from the audio, but basically the data protocol accommodates both of those. At that time, we didn't consider storing video on disc because we didn't have anywhere near the space that is now just everywhere for individuals to store, as well as massive storage like Google has, of course. So, to introduce the movie: what we'll see in the movie is this scenario with four sites. What's shown here at each site is a voice terminal, that's what VT stands for, with a CVSD encoder, and you see those boxes in the movie, the boxes with blue sides. You'll see various user interfaces being employed for controlling the conference and the voice floor switching. This movie is on YouTube, but now I'll play it for you live here.
[VIDEO CLIP] >> CASNER: So, there are a few things that I'd like to point out from the movie. One is you may have noticed some occasional screeches, particularly in Randy Cole's voice. I actually believe that when we made the movie, we recorded the audio and then transcribed it into the encoded form for demonstration before putting it, printing it, on the film. And we printed the film with a couple of different encodings. I believe this one is actually the LPC encoding, because that screeching is in the nature of LPC. The way the LPC algorithm works is that it's based on a vocal tract model. So, at the analyzer it tries to pick out what the timing is between pitch pulses, and if it gets confused by a factor of two, the pitch can be off by a factor of two. So, it analyzes what the pitch pulses are, and then at the synthesizer it produces pulses at that rate and feeds them through a filter that models the vocal tract to try to reproduce the sound. That's how we get down to the 2.4 kilobit data rate. The CVSD that we mentioned there is a much simpler algorithm; it's kind of interesting. CVSD stands for Continuously Variable Slope Delta modulation, and basically what it does is try to track the audio waveform, simply by saying at each sample time: is the next one higher or lower? So it tries to follow the waveform one bit at a time, up or down; a sketch of the idea follows below. That has a different kind of artifact that sounds kind of gravelly. For those of you who heard Danny talk yesterday, you may have noticed that his voice sounded a bit different in the film; in fact, it was felt that Danny's voice would be too hard for people to understand, so Eric Mader was speaking for Danny instead. I wanted to point out the various user interface devices that we developed: some from a character-based terminal, some from a little box that we built, and some from the telephone, which we felt was an important aspect of making this system usable for a larger number of people, to have the telephone interfacing. And you did notice the voting, which we could do on the side carrying control traffic, which was sort of independent of the voice traffic. That's what this picture shows, the solid lines indicating the voice data flow and the dashed lines indicating the control flow. In this particular implementation of conferencing there was strict explicit floor control. Only one person can talk at a time, so the chairman could talk at any time in the reverse channel to whichever participant had the floor, and the person who had the floor was talking to everyone else. That's what the arrows, the solid arrows up there, are trying to show. The reasons for this form of strict control were that the lines of the ARPAnet were only 50 kilobits per second. So, we didn't have enough bandwidth to send many streams of audio. Furthermore, the LPC encoding, because it's a vocal tract model, can't track multiple voices at the same time. It just doesn't work; it doesn't fit the model. So, we can't have the sound all go to one place, get mixed together, and then be sent as LPC, as is done currently with many centralized conferencing systems. So there were a number of reasons: bandwidth and encoding technology. Later on, when we were running conferences in the early '80s, we did have systems that could handle the mixing, and we sent the data streams from all sites to all other sites at the same time. Also, in this movie you saw an example of telephone-interfaced audio.
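
A toy delta modulator in Python, in the spirit of the CVSD just described; the step-adaptation constants are illustrative and not taken from the NSC-era hardware:

    def cvsd_encode(samples, min_step=1.0, max_step=256.0):
        # One bit per sample: "go up" or "go down". The step size grows
        # when several bits in a row agree (a steep slope), and shrinks
        # when they alternate (the estimate is tracking well).
        bits, estimate, step, history = [], 0.0, min_step, []
        for x in samples:
            bit = 1 if x > estimate else 0
            bits.append(bit)
            history = (history + [bit])[-3:]       # last 3 decisions
            if len(history) == 3 and len(set(history)) == 1:
                step = min(step * 1.5, max_step)   # slope overload: speed up
            else:
                step = max(step * 0.9, min_step)   # slow back down
            estimate += step if bit else -step     # the decoder mirrors this
        return bits

The decoder runs the same estimate/step update from the bit stream alone, which is why a single bit per sample suffices.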
And as I said, we felt that was an important aspect of the project. We had a couple of generations of it. What was shown in this movie, in the 1978 timeframe, was a single unit that we had built at ISI. But the later one, the STNI that I'm talking about here, was a single card. I'll show in a moment how that was used. Both of them, though, supported DTMF signaling so that you could push touch-tone buttons on the phone and have that be decoded as signaling input. And going in the other direction, it was used for placing a call across the network and dialing out on the other side, to do toll bypass, for example. So, as I say, this was 1978, and note the similarity to what we have with Skype and other systems today. So, this little box up here in the upper right hand corner is a Lincoln packet voice terminal, PVT, which was comprised of several wire-wrapped circuit boards plugged in there and a telephone that interfaced to the unit. The telephone actually has a thicker cord than normal and has a microprocessor inside it, so you can't see that from just the picture. But this was a system capable of doing voice compression inside the little box instead of with the whole rack of equipment like you saw in the picture from Danny yesterday. And it communicated over a low-cost version of Ethernet, a CSMA/CD bus network called the LEXnet. The STNI was one of the cards that could plug into the slot there; we developed that at ISI. In 1982, at the end of the NSC project, we had a big demo meeting at Lincoln Laboratory, and there we had an actual five-site conference with multiple networks involved. At SRI, there was a packet radio network. At ISI, we had the STNI going to the telephone network. At Lincoln Laboratory, I think there was another STNI, as well as a local terminal, and DCEC in Washington. So we actually did have several conferences similar to this simulated one, but this one in particular pulled all those pieces together. You probably can't read all the details of this, but this is a diagram of what the network was like at that time. The oval at the top is the wideband satellite network, with an amazing 3 megabits per second capacity compared to the 50 kilobits per second we had on the ARPAnet. So that allowed us to do multiple channels of audio, multiple conferences and connections at the same time. It also allowed us to begin doing packet video, because we had enough bandwidth. At the various sites, we had voice terminals of different kinds: some on the packet radio network, the STNI, as I mentioned, at ISI, and several at Lincoln. And also at Lincoln Laboratory they had another telephony interface, but this one based on a channel bank, like you would have in a central switching office. So a bunch of phones could come into that and get connected through a higher-speed, T-1 speed, interconnect into the packet system. So, I mentioned the gap in the timeline at the beginning of the talk, roughly a decade from 1982 to 1992, where you would wonder: why didn't we have this system that we had built commercialized, and, you know, why wasn't it deployed more widely? And really the answer is, we had to wait for technology to come along and be ready to support this kind of application, both reduction in cost and increasing capability, to get it down to a reasonable size.
The thing that happened by the 1992 timeframe was the development of workstations, like the Sun workstation that I had at the time, and the implementation of IO devices in those workstations. So then, sitting in front of people, in the equipment they would already have available to them, we had the voice terminal capability ready to go. During that decade, we weren't all sleeping. We were continuing to develop the protocols and, as I mentioned, extending from just doing packet audio to doing packet video as well. There were a couple of networks built up that supported that research work. The DARTnet was a T-1 based cross-country network. So then, instead of having to deal with the satellite delay of the wideband satellite network, we had just local terrestrial delays. The packet voice and video system was used for conferences by people who weren't involved in the project, so it was actually tested to see that it really works. When you develop something new today, you have to actually get it out into the field so that you can make sure that it really works the way you intended. And we did the same. We didn't really have enough capacity to offer service to the general public, but there were a number of research groups that did use the system. There was also a transition, a buildup of the IP multicast capability to allow multi-site conferencing over the Internet, and I'll talk more about that in a minute, too. And then, as I said, initially the notion of sending packet audio or video over the network was a strain on the network. But as the network grew, and as the research work covered the needs of different kinds of transport and how to handle that, how to manage that within the network, the network became ready for this kind of service. Backing up slightly, I mentioned on the timeline that the split of IP out of TCP occurred in 1978. Similar to what I talked about with NCP and the ARPAnet protocols, TCP originally provided just a reliable transport service with retransmission. We had the same problem in wanting to run the network voice protocol over the Internet; we needed to have the reliability mechanisms separated from the packet forwarding mechanisms. And so in fact, in 1978, IP was extracted from TCP and became a separate protocol. An interesting factoid that Danny pointed out yesterday is that there is no IP Version One, for example. A lot of people may not realize that, because IP didn't really exist until Version Four of TCP, where IPv4 was split out. Also at the end of that hibernation decade, in 1992, was when we began work in IETF, in the audio/video transport working group that I chaired at that time, on the development of the real-time transport protocol. How many of you folks have worked with packet audio or video, and perhaps with RTP in particular? At least one. The remainder of this talk is largely about the development of that protocol. But I mention here that this was the First IETF Audiocast. What we had done was take our DARTnet T-1 based network, which was providing a research basis for not only voice work but other aspects of networking, and use that as a core for adding on tunnels to do IP multicast to other sites. We had 20 sites spanning 16 time zones, from Australia to Sweden, and we actually did have real-time audio inbound and outbound from the IETF meeting at that meeting.
Van Jacobson, I recall, talked in the plenary at the end of the meeting, and it was quite impressive to hear him coming from a remote location over the Internet, as it was. After that initial meeting, the ad hoc creation of tunnels that we had done became a little more permanent in something we called the MBone, the Multicast Backbone. Again, this was just an overlay network on top of the IP network, which did not at that time support any IP multicast natively. So the IP multicast routers were workstations, which encapsulated the multicast traffic in IP and then sent it across through the point-to-point routing. It actually grew through '94, '95, to something of this size, where we had 20 countries and 900, a thousand nodes, and then in each of those places a number of users hooked up that could listen. So at later IETF meetings we had something like, I think, a maximum of 500 people who had tuned in to one meeting. A couple of highlights of our use of the MBone were the NASA Hubble Space Mission, where the NASA TV video got coupled in and broadcast across this network. I remember sitting at my workstation doing my work and having on my second monitor a video of the Hubble repair going on; being able to listen at the times to the part they don't show in the little clip on the TV news was really neat. And another event was the broadcast of the Rolling Stones concert, and that led to an article about the MBone in Newsweek in December 1994. The distributed music performance was a demonstration of some of the work we had done more on the research side for synchronization of audio and video. That was actually a live performance with live players, one on keyboard, and I forget what the other instrument was, at ACM Multimedia '95, where we had three sound streams that were sent from different locations and all synchronized according to a synchronization protocol that allowed us to line up the timestamps in the data, so that music that was all supposed to be played at the same time was played at the same time. The performers were both performing according to a third part that was recorded. So that was originated from one place, distributed to the two performers, they played their parts, and then all three of those came to a fourth place where it was all put back together. Unfortunately, that synchronization work didn't ever become standardized, but I thought that it was good work. Another aspect of the MBone that I should point out is that it didn't have any explicit session control, starting and stopping. The sessions were all multicast as well; the information about the sessions was multicast. So you had a little session directory program that listened to the session announcement protocol, and it collected a list of sessions that you could tune into, and the sessions were basically organized by different multicast addresses, so it's like different channels that you could tune into. So that was a very loose form of conference control. And I'll talk more about the control aspects in a moment, after talking about the development of RTP. The arrows in this diagram indicate not dataflow or anything, but actually the evolutionary path of the protocols. It began with the network voice protocol that I talked about earlier, from our ARPAnet and early Internet work. From there, we picked up the notion of carrying a separate sequence number and timestamp, and I'll show that in a minute.
And from the vat program--the program and protocol, which stands for visual audio terminal, developed by Van Jacobson and Steve McCanne at LBL--it had its own protocol, and it contributed a couple of notions, like on-the-fly vocoding switching, source identifiers when mixing together multiple sources, and the start-of-talkspurt bit for adjusting the time delay. So, the headers are shown on each of this next sequence of slides, to show the fairly simple aspects of the data protocols that we put together into RTP. As I mentioned, the sequence number and timestamp are important together, because the sequence number allows detecting when there are packets lost, and the timestamp, which tells what the position of each frame or packet should be, allows you to distinguish between gaps that are due to losses and gaps that are intentional because of silence, when we don't transmit anything; a sketch of that receiver logic follows below. In NVP, we actually had a header checksum in place. We spent seven bits on that header checksum because we wanted to be able to accommodate data paths where there might be some bit corruption in the data, which we could tolerate, whereas a bit error in the header would cause the packet to probably be unusable. It turns out that there really aren't transports available that don't have some lower-level mechanism that's going to kick out packets with errors anyway, so we didn't actually carry that idea into RTP. In NVP we did also carry the control tokens in the same packets along with the data, and I'll talk more about that separation in a moment. And as for data length, this protocol was dependent on the lower layer protocol to say what the overall length of the data was. In the vat protocol, there was not both a timestamp and a sequence number, just a timestamp in audio samples. The way that silences were indicated was by a flag on the first packet after silence that indicated that a gap had occurred, and so that was a suggestion of a time when the receiver could adjust the delay that it's putting in to accommodate the jitter, to either increase it or decrease it according to how much delay had built up. It also included a site identifier, so that you could have premixed audio--audio that had gone to one place, been mixed together, and delivered on further--and still be able to tell who the sources were inside that audio. And it had a conference ID for validation that the packets you are receiving are the ones that you actually intended to receive. We carried some of those notions over from NVP and the vat protocol into the first version of RTP. If you're working with RTP, you may have wondered why there's RTP Version 2 and what happened to zero and one. The vat protocol was actually allocated version zero: the first two bits is where the version number goes, and it had zero in that position. RTP Version 1 had one in that position. In this version of the protocol, we used timestamps carried as seconds, 16 bits of seconds and 16 bits of fraction, and had that timestamp rate be the same for all protocols, all data formats. The idea behind it was to allow simple inter-media synchronization, so that you wouldn't have to do timestamp calculations. We have a sequence number and timestamp also present, as I said, to detect loss. And in this version of the protocol, we moved to a flag at the end of a talkspurt, or the end of a video frame, to indicate a point when you knew you had something that was complete.
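
A small Python sketch of that loss-versus-silence distinction, with hypothetical field names rather than the exact RTP bit layout; 160 samples per packet corresponds to 20 ms of 8 kHz audio:

    def classify_gap(prev, curr, samples_per_packet=160):
        # The sequence number says whether packets are missing;
        # the timestamp says whether missing time was just silence.
        seq_gap = curr["seq"] - prev["seq"] - 1            # packets not received
        time_gap = curr["ts"] - prev["ts"] - samples_per_packet
        if seq_gap > 0:
            return f"{seq_gap} packet(s) lost"             # conceal or bridge them
        if time_gap > 0:
            return f"intentional silence of {time_gap} samples"
        return "contiguous audio"

    print(classify_gap({"seq": 10, "ts": 1600}, {"seq": 12, "ts": 1920}))
    print(classify_gap({"seq": 10, "ts": 1600}, {"seq": 11, "ts": 3200}))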
That was more important for video, because then, without having to wait for the beginning of the next frame, you would know that that frame was complete and you could display it. Those were our motivations in making the change. We did still have the audio format in each packet so you could switch on-the-fly, and we had the notion of carrying control tokens here in the data, and the channel ID for multiplexing packets from different sources. RTP Version 1, I think, could have been workable, but there were influential people who felt it had serious problems, and we got pushback from the IETF management to reconsider a couple of the design aspects. And that led to RTP Version 2. Here, a change was to be like vat and have the timestamp in samples; the motivation there was so that you could do positioning of the data in time for playback continuously with one-sample accuracy, not worrying about arithmetic errors that could be introduced by having to convert from a fixed timestamp rate into the native sample rate of the media. The talkspurt flag was changed to be not the same for audio and video: back to the beginning of a talkspurt for audio, where it's more convenient, and remaining at the end for video; that's the M-bit up there, the marker bit in RTP. It still carried the format identifier for switching on-the-fly, and carried the source identifiers for audio that was pre-mixed, so that you could tell who is speaking when you were getting audio that was pre-mixed. The synchronization source identifier is an application-level source identification that served as a backup for validating packets, to make sure they were ones that belonged in your stream. As you know, you can send IP packets anywhere, and the receiver needs to be able to tell whether they're appropriate or not. Some of the philosophy in RTP Version 2 was to try to keep the control and data separate. The notion of control tokens was removed from the data packets in RTP Version 2, and the separate RTCP control protocol was devised to carry those in separate packets, and I'll show that in a moment. We kept a fixed 12-byte header so that calculations were simple and so that it was easy to do header compression on it; you may have heard of RTP Header Compression a couple of different times. And it was highly scalable; multicast was a big influence in the design of RTP, so that it worked well both for two-party conferences as well as conferences with thousands of users, and would not have the control traffic overwhelm the data traffic because of having so many participants sending control traffic. One other point: the measurements in the packets which conveyed packet loss information--what the last sequence number received was, how many packets were received, et cetera--were all carried as absolute values rather than incremental values. We didn't say "I received 60 packets since the last update"; we said that we received a total of this many packets. And the reason for that is that if you lose one of those control packets carrying those numbers, it doesn't matter, because you can still take a delta between any two packets that you did receive and find out how many were received during that time, as illustrated below. Some of that philosophy has not been kept over time. There was always push to try to add more complexity to it, to allow more features.
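
A toy illustration in Python of why absolute counters survive lost reports; the numbers are invented:

    # Reports carry the cumulative packet count, so losing a report in
    # the middle costs nothing: any two surviving reports still yield
    # the receive rate over the interval between them.
    reports = [
        {"t": 0,  "total_received": 0},
        {"t": 5,  "total_received": 480},   # suppose this report is lost
        {"t": 10, "total_received": 900},
    ]
    first, last = reports[0], reports[-1]
    delta = last["total_received"] - first["total_received"]
    print(f"{delta} packets over {last['t'] - first['t']} s")  # 900 over 10 s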
And so in 2009--the idea of keeping the control and data separate was important so that you could have, for example, some monitoring stations that just listened to the control traffic and didn't have to receive the data traffic as well. They could filter that out based on the separate addressing, the separate port numbers being used for those two streams. However, with the introduction of NAT, and with the introduction of gateway units that were trying to gateway a large number of streams, they ran out of port space per IP address. And so a new variation on RTP was introduced such that, in some circumstances, the RTP and RTCP packets could be multiplexed back together. Also, we tried to keep options out of the header in RTP, but we left a little hook in there for experimentation, and that Pandora's Box has now been opened, and options are happening. One other thing that surprised me was, I thought that based on our early work with CVSD and LPC, and then subsequent work in the early '90s on various coding algorithms, that that work would kind of slow down. People would run out of ideas about what to do about compressing audio and video, but that hasn't happened at all. This is a list of the different audio codecs for which there are payload formats for RTP defined, and more have been added since this. Switching from the data side to the control side, there are a number of aspects of the control protocols. NVP just scratched the surface of this with a fairly rudimentary session setup mechanism. We did have tokens like ringing the telephone and getting an answer back, that kind of control, for setting up the point-to-point conferences, and then with NVCP we extended that with the extra controls for floor switching. In RTP, the companion RTCP protocol supports only loosely controlled sessions. It's really just conveying session information; it's not about setting up sessions or billing or any of that. So, for a complete solution, you really need a bunch more, and the SIP protocol has been addressing that. I'll talk a little about each of those. For NVP, as I mentioned, we had a series of control tokens that you could send, for not only establishing the session but also negotiating what kind of vocoding would be used. At this time, we actually described the vocoding as a whole collection of parameters, with the thought that we would be sort of tweaking those parameters as we set up sessions and wanted to negotiate. We learned that in fact that wasn't really useful. It changed to where you just have a single code point for a particular vocoding, and you choose certain sets of coding parameters to use. About RTCP--and Danny wanted me to point out, this is not R-TCP, it's RT-CP; it has nothing to do with TCP, except that it completes RTP to be a transport protocol. An important part of any transport protocol is managing congestion, managing the flow control over the network. So, it provides feedback about loss of packets. If you're suffering a lot of packet loss, that probably means that you have congestion in the network, and you should either back off your rate or stop sending. So, it's really important for RTCP to be implemented along with RTP; it doesn't always happen in the implementations, and that's a concern. RTCP does carry a persistent identifier for the source; the SSRC value that I mentioned, which is carried in each RTP packet and used for validation, is just a random number.
It's a random number so that when people initiate from different places independently, the probability of collision is low. In RTCP we carry that SSRC value bound to persistent identifiers so that you can get some meaning from it. In addition to providing feedback about packets that were lost, it allowed each user to learn what losses other users had experienced, but it also allowed each user in a multicast session to know how many others there were. And based on that, the timers for repetition of the control packets, the spacing of the control packets, automatically adjusted, including some randomization to avoid synchronization, so that with RTCP we could accommodate sessions of a wide variety of sizes. So, this dealt with loosely controlled sessions. In order to have a more explicit session like a telephone call, the Session Initiation Protocol was then developed, also in the IETF. It drew on a number of research projects that had gone before: Etherphone at PARC, the Touring Machine at Bellcore, work we did at ISI, and CCP and the MMConf system at BBN. I'm sure that most of you have heard of SIP by now because it has actually become fairly widely used. A lot of work was done there and still more work needs to be done. SIP concentrated on just point-to-point conversations, deferring explicit multipoint conferencing for later. So, you may have wondered why we called this talk being about prehistory in the title; really, because the term voice over IP was only coined in 1995. A lot of people are unaware of the two decades that came before that. Talking about the history part, between '95 and now there has been a lot of work done on the protocols, both continuing work on RTP and continuing work on the control protocols to go with it. RTP was adopted by ITU-T for the H.323 method of setting up conferences and point-to-point calls, and SIP was developed, as I said, in the IETF through quite a bit more work, and in conjunction with it, or in addition to it, RTSP, the Real Time Streaming Protocol, was developed for control of more stream-oriented playback, typically using RTP. So, my conclusion, which really is a conclusion for the two days of talks, not just my own: it's pretty clear, I think, to most everyone that packet audio and video are important components of the Internet today. And the roots of that were in what we did in the '70s, not only in developing the concepts of how to deal with real-time data, but also the protocol work directly fed into protocols that we're using today. That work depended on advances in technology to make it practical, but it is now something that we are treating as a normal everyday occurrence. So, that's it. I have a couple of URLs here if you're interested in the movie; it is on YouTube, and also you can get the full 50-megabyte MPEG original at the other location here. Questions? Yes. >> What type of video encoding did you use? >> COHEN: Now, what we did, actually that was kind of interesting at ISI. Oh, yeah, what sort of video encodings we used? At ISI, we developed our own codec, which was based on the DSPs that had just been released at that point, the Texas Instruments TMS320. We actually took eight of those and built a single-instruction, multiple-data architecture where we had eight of those DSP chips operating in parallel to do discrete cosine transform compression of the video.
We didn't have a lot of processing power there, so what we did was we had another [PAUSE] processor that looked over all the blocks, we used, I think, 16x16 blocks of the image, to detect which ones had changed, and then we applied the DCT only to those that changed and transmitted those blocks, and the rest stayed the same. The algorithm for determining which blocks had changed also didn't really have time to look at every pixel; it looked at the border of the block. And so, if you were careful, you could toss a ball of paper up and get it to freeze there, because it would be in the middle of the block and not detected when it left. You know, that was kind of fun. But it was also very fast. We sent 30 frames per second even at that time. So, if you did like this, you could show your fingers accurately, but if you did it like this, then the frame rate would slow down in order to accommodate those differences. Around that time, we also took a couple of commercial codecs, a PictureTel codec and a Compression Labs codec, which were not really designed for use over a packet network. Even though they had block structures, they weren't designed for losses. And we adapted those block-structured data streams to be carried over the packet network. And as long as our packet loss was low enough, then they worked okay. There was another aspect of reconstructing the clock at the other end, because they were designed to work over synchronized streams on one line. So, we had to accommodate the drift in the clock between the two ends. Other questions? [PAUSE] The question was whether the various forms of audio that we have, chat on computers and a DECT phone, I think you're talking about a cordless phone, that's just the local communication technology underlying what might be voice over IP or whatever, and the cellular phone, which might be your iPhone or Android phone, might be using the carrier's own transport mechanism or could have a packet voice application on it. So, are all those going to converge? I expect that it will ultimately all be some form of packet-based communication. Whether we get to a point where there's one encoding that's used typically in all of them, so that it could be end to end no matter what sort of devices came in the middle, given the proliferation of different encodings I don't see that happening, but I do think we're converging towards packets as the transport mechanism underneath. Yes? >> ITU came up with the H3 [PAUSE] coming with two [PAUSE] >> So, right. ITU did come up with H.323, and the IETF came up independently with SIP. Why, really, did we have two? I guess one way to phrase that question is, why was SIP created if ITU-T had done H.323 already? And I'm on the IETF side, so I'm going to have a biased view: H.323 was terribly complicated. And I think, in part, SIP was a reaction to that. H.323 came out of H.320, which was a synchronous protocol, so it evolved in ITU-T from a synchronous protocol to a packet protocol. And because of that, it carried over some of the constraints from the circuit-based system and models of how things should operate from that system, which didn't fit completely with the packet-based view. So, SIP came from the other direction, from loosely controlled conferencing, saying, "Okay, what's the minimum we can do to this to allow sessions to be more explicitly controlled?"
So, different people with different perspectives and different goals, and that's how it came about. I think SIP has expanded beyond H.323 as far as deployment these days. So, apparently the more Internet packet-centric view is the winner. Questions? Okay, thank you.
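
One design point from the talk above is worth a concrete illustration: RTCP reports carry cumulative (absolute) counters rather than per-interval increments, so a lost report costs nothing; any two reports that do arrive yield an exact delta for the interval between them. The following minimal Python sketch shows that arithmetic with invented report values (the field names and numbers are illustrative, not the actual RTCP wire format):

    def interval_stats(earlier, later):
        # Derive per-interval statistics from two cumulative reports.
        # Any reports lost in between simply drop out of the subtraction.
        expected = later["highest_seq"] - earlier["highest_seq"]
        received = later["packets_received"] - earlier["packets_received"]
        return expected, received, expected - received

    # Invented values; a report lost between these two changes nothing.
    report_a = {"highest_seq": 1000, "packets_received": 990}
    report_b = {"highest_seq": 1600, "packets_received": 1560}

    expected, received, lost = interval_stats(report_a, report_b)
    print(expected, received, lost)  # 600 570 30

Had the counters been incremental instead, losing a single report would have left a permanent hole in the receiver's loss statistics.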

Examples

Specific examples include:

  • In the Microsoft Windows and ReactOS[1] command-line interfaces, the timeout command pauses the command processor for the specified number of seconds.[2][3]
  • In POP connections, the server will usually close a client connection after a certain period of inactivity (the timeout period). This ensures that connections do not persist forever if the client crashes or the network goes down. Open connections consume resources and may prevent other clients from accessing the same mailbox.
  • In HTTP persistent connections, the web server keeps connections open (which consume CPU time and memory) so that the web client does not have to send an "end of requests series" signal. Connections are closed (timed out) after a period of inactivity, five minutes for example; this ensures that the connections do not persist indefinitely. A sketch of this server-side inactivity pattern appears after this list.
  • A timed light switch saves both energy and the lamp's life span; the user does not have to switch it off manually.
  • Tablet computers and smartphones commonly turn off their backlight after a certain time without user input.
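
The POP and HTTP entries above share one pattern: the server arms an inactivity timer on each connection and reclaims the connection when the timer fires. Below is a minimal Python sketch of that pattern; it is not taken from any particular server, and the port number, the five-minute limit, and the echo behavior are illustrative assumptions.

    import socket

    IDLE_TIMEOUT = 300  # five minutes of inactivity, as in the HTTP example above

    def serve_one_client(port=1100):
        # Accept a single client and close the connection once it has been
        # idle for IDLE_TIMEOUT seconds, mirroring a POP or HTTP server.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(("", port))
            server.listen(1)
            connection, _address = server.accept()
            with connection:
                connection.settimeout(IDLE_TIMEOUT)  # arm the inactivity timer
                while True:
                    try:
                        data = connection.recv(4096)  # waits at most IDLE_TIMEOUT
                    except socket.timeout:
                        break  # client went quiet: time out, reclaim resources
                    if not data:
                        break  # client closed the connection normally
                    connection.sendall(data)  # stand-in for real request handling

Because recv() raises an exception once the timeout elapses, the server never waits indefinitely on a crashed or unreachable client, which is exactly the resource-reclamation rationale given in the list above.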

References

  1. ^ "timeout.c". July 13, 2019 – via GitHub.
  2. ^ "timeout". docs.microsoft.com.
  3. ^ "TIMEOUT.exe (Windows 7/2008 or later)". ss64.com.
