[20:00] Nodal delay, nodal processing and queueing aren't something we harped on a lot
[20:00] or touched on in some cases.
[20:00] The part where it started to matter is when we started talking about buffers and buffer size for routers in regards to network congestion
[20:01] If you can imagine a bunch of cars showing up to the drive-thru window at the same time
[20:01] that's queueing
[20:01] except in the case of networks we're talking about segments or packets
[20:01] There's "some" time it takes just to drive through...even if the drive-thru is empty
[20:02] in the case of packets, you can think of it as the fact that it takes some time to put the information on the wire and to redirect it to wherever it's going
[20:02] We'll get to all that in a lot more detail as we travel further down the protocol stack
[20:02] well at least more detail anyway
[20:02] I won't be using terms like nodal processing or nodal delay on the test.
[20:03] Make sense?
[20:03] makes sense to me
[20:03] == Kristin has changed nick to Guest66026
[20:04] Great!
[20:04] Could you explain the difference between bandwidth flooding and connection flooding?
[20:04] Definitely.
[20:05] bandwidth really refers to how quickly you can receive data. I'd use a plumber's analogy and say that more bandwidth is like having a bigger pipe. A bigger pipe means you can get more data at once.
[20:05] To flood someone's bandwidth, you have to send so much data down their pipe that no other data has a good chance of getting through.
[20:05] Does that make sense first?
[20:05] yes
[20:06] Okay, connection flooding attacks the service itself directly. You send a service so many connections that it is so busy processing them that it can't handle new connections. So one attacks the network infrastructure and the other attacks the service's infrastructure directly.
[20:06] You might be able to flood a network's bandwidth without ever touching the service.
Or you might be able to flood a service's connections while using very little bandwidth.
[20:07] Think of it like the roads around your house.
[20:07] If I turn all of them into parking lots, well, you could never leave or get visitors. (bandwidth flooded)
[20:07] On the other hand, if I am a taxi service and I drop everyone from BWI off at your house
[20:07] I may be the only taxi in Chestertown but your house is too full for new visitors
[20:08] Make sense John?
[20:09] Yes, thanks.
[20:09] Everyone else?
[20:09] yep!
[20:09] Yeah!
[20:09] Yes
[20:09] Great!
[20:09] Okay, what's next?
[20:10] Can you explain the difference between persistent and non-persistent connections?
[20:10] that usually comes up in regards to HTTP
[20:11] But it is similar to how our homeworks worked for some people.
[20:11] So let's talk about the homework.
[20:11] One version of the protocol may have been: Client connects. Client sends REQ, SEC or BYE. Server responds and closes connection. Client closes connection.
[20:12] In this case, the client gets to send one thing. Then closes.
[20:12] a persistent connection "stays alive" to make further requests
[20:12] The reason this is a thing for HTTP is because, generally, when you fetch a web page, you're actually fetching many files.
[20:12] So, HTTP persistent connections might fetch them all over a single TCP connection that stays alive
[20:13] A non-persistent connection would make a new connection for each file
[20:13] since the handshake takes some time ... if you can, persistent connections are a bit more efficient when multiple fetches are required.
[20:14] This gets even worse when 'secure' connections are involved.
[20:14] As the handshake for the keys is further involved.
[20:15] Monica - make sense?
[20:15] Yup! Thanks!
[20:15] Great!
[20:15] are dropped packets and lost packets the same thing?
[20:16] It depends on nuance only and isn't something I'd test. Dropped packets are packets that are definitely gone.
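The persistent-vs-non-persistent tradeoff above can be sketched with a back-of-the-envelope cost model (hypothetical numbers, not from the notes: assume one RTT per handshake and one RTT per fetch):

```python
# Hypothetical costs: each TCP handshake takes 1 RTT, each file fetch takes 1 RTT.
HANDSHAKE_RTT = 1
FETCH_RTT = 1

def non_persistent(num_files):
    # A brand-new connection (handshake) for every file fetched.
    return num_files * (HANDSHAKE_RTT + FETCH_RTT)

def persistent(num_files):
    # One handshake, then every fetch reuses the same connection.
    return HANDSHAKE_RTT + num_files * FETCH_RTT

print(non_persistent(10))  # 20 RTTs
print(persistent(10))      # 11 RTTs
```

With a secure connection the handshake costs even more RTTs, so the gap only widens.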
[20:16] ok
[20:16] Lost packets probably means the same thing - a packet that is definitely gone.
[20:16] But, in speech, I may have talked about a packet that has gotten "lost" but finds its way to its destination
[20:16] I'll clarify that point if I put it on a question.
[20:17] got it
[20:17] You should definitely know what is meant by dropped packets though.
[20:17] What's next?!
[20:18] Could you explain more about sockets?
[20:19] That's a broad subject so I'll try to hit the gist.
[20:19] It starts with understanding ports.
[20:19] Ports uniquely identify a process/running program on a machine.
[20:19] When a machine gets data over a network, your network interface has to figure out what to do with that data.
[20:20] Often, it will place it in a special place in memory that your process understands as its network i/o.
[20:21] Programmatically, the way we interact with this network i/o is via a socket. Sockets are the software interface for the network i/o. So, we use sockets to connect to 'ports' on our machines.
[20:21] The server, for example, listens on a socket that it is bound to so it can accept incoming client connections.
[20:21] When that connection comes, it then has a client socket that it can use to communicate with the client.
[20:22] You can think of them as ports, but it's more like the software interface to allow you access to the ports.
[20:22] OK, got it.
[20:22] It gets a little tricky because we could all connect to the same server from the same machine.
[20:22] Imagine we all logged into a web server using sybil and telnet.
[20:23] The server could simultaneously handle all these connections because they're coming from different ports even from the same endpoint.
[20:24] Depending how you define endpoint anyway. The difference is that our clients would all be connecting 'from' different sockets.
[20:24] Allowing the server to distinguish between ip/port pairs. We haven't talked about ip addresses too much yet, but that's the gist.
So it's our way to send/recv to/from one end (of two) of a network connection.
[20:25] Still good John?
[20:25] yes
[20:26] Great!
[20:27] What's next?!
[20:28] How can you have a connectionless UDP connection? Don't you need a connection?
[20:28] =^.^=
[20:28] Great question!
[20:28] The answer is nope.
[20:28] (or maybe sorta)
[20:29] When I drop mail off at the post office, I'm not connected to the endpoint.
[20:29] I *AM* connected to the network.
[20:29] But, the packet specifies where it will go and I drop it off and maybe it shows up and maybe it doesn't!
[20:29] If nobody is listening on the other end, well, it's thrown in the trash.
[20:30] But you'd never know...because there's no connection state.
[20:31] Is that good Monica?
[20:31] Yup!!
[20:31] Yay, we're learning things!~
[20:32] In my notes, it says that DNS is (has?) a single point of failure when centralized. I don't think I specified this in my notes. Could you explain?
[20:33] if it were centralized it would have a single point of failure
[20:33] so DNS is when you translate names (like google.com) into their ip addresses (like 8.8.8.8) - not a real example
[20:33] if there was one server responsible for all this, then if that server died, most of the use of the internet would die with it
[20:33] you couldn't go to 'google.com'
[20:33] to google other ip addresses
[20:34] unless you happened to know a google ip address already.
[20:34] Life would be bad!
[20:34] so DNS is decentralized with lots of caches
[20:34] it makes it more robust to both deliberate attack and to failure
[20:34] Is that clear?
[20:35] ok, got it.
[20:36] your heart is a single point of failure for your body
[20:36] but, you could lose some pieces and not die.
[20:36] this isn't a good analogy
[20:36] gold star for trying
[20:36] Your car has several pistons - you could throw one and still get somewhere, even if not as well.
[20:37] theoretically this is true of multi-core machines as well.
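The "drop mail off at the post office" idea above can be shown in a few lines of socket code (a sketch: the port number 50007 is an arbitrary choice, and nobody needs to be listening there for the send to succeed):

```python
import socket

# A UDP "send" needs no connection: we just put a destination address on
# the datagram and hand it to the network, like dropping off mail.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# If nobody is listening on 127.0.0.1:50007, the datagram is silently
# thrown in the trash -- and we'd never know, because there's no
# connection state to report it.
sent = sock.sendto(b"hello?", ("127.0.0.1", 50007))
print(sent)  # 6 -- bytes handed off, not bytes delivered
sock.close()
```

Compare that to TCP, where you'd have to `connect()` first and the handshake would fail if nobody were listening.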
[20:37] Alright, any other questions?
[20:38] Is there any sort of connection between push/pull protocols and persistent/non-persistent connections?
[20:38] they're related - although we didn't talk about push/pull too horribly much
[20:39] related but different
[20:39] okay! that's the sense i was getting from the push/pull connections with the little i do have on them
[20:39] So if your connection is always on, it will be push...so it'd have to be persistent
[20:40] But, you might consider the persistent HTTP connection a pull protocol still - the client is still specifically asking for things even though it is doing so persistently.
[20:41] Imagine you are going to welcome your friend at the door.
[20:41] If you sit at the door and wait until they arrive
[20:41] that's like a push
[20:41] if you go to the door every once in a while and let them in if they're there
[20:41] that's a pull
[20:41] unfortunately, your friends probably knock, ruining the analogy.
[20:42] nah, it still makes sense!
[20:42] it's more about whether the client is "connected and waiting" or about whether it simply connects and grabs if present
[20:43] what do we need to know about resource records? Just the measure of latency or all the other stuff also?
[20:44] you should know the difference between MX, A and CNAME.
[20:44] ok
[20:44] there are many more but those are super common.
[20:45] We're talking about DNS resource records right?
[20:45] yes
[20:46] Yup so the A type record is generally what's mapping the written name that you give DNS to the ip address that you actually want and need
[20:46] DNS is just a huge directory service where you go to look up addresses. Imagine it like a giant white pages with names and numbers. The A records are this!
[20:47] ok
[20:47] MX explains how to map a name to 'whoever' handles the mail
[20:47] CNAME is about aliases - it's called a canonical name record
[20:48] So, if you try to go to peaches.go.com you might get a CNAME record that says oranges.go.com
[20:48] and then you'd have to go look up oranges.go.com to figure out the ip
[20:48] So it is a way to point something to another 'name' without worrying about where that name is.
[20:49] Anyway, those are the three you should know for the test.
[20:49] got it --
[20:54] for TCP, we covered a few formulas related to RTT computations.
[20:54] dev, timeout, estimate
[20:54] And then also related to the deviation.
[20:55] You should know them. Understanding RTT estimation might be useful.
[20:56] Ok, follow up question, what is the deviation?
[20:56] You have your "average RTT"
[20:56] Deviation is how far from the average RTT your sample measurement is.
[20:56] So like if your average RTT is 100
[20:56] and your next sample is 150
[20:56] then your deviation is 50
[20:56] If your average RTT is 100 and your next sample is 50
[20:56] then your deviation is still 50
[20:57] high deviation implies weird mojo in your network and thus a large multiplier on the deviation is used when setting the timeout
[20:57] "weird mojo" being the technical term
[20:58] OK, thought so. Just wasn't sure. Then what do the alpha and beta symbols represent?
[20:58] when you're computing the average, it isn't a true average RTT. It is an exponentially weighted moving average.
[20:58] Which is just a fancy name for a special kind of weighting in the average.
[20:58] So let's say your average RTT is 100
[20:59] And you get a sample that is 150
[20:59] the alpha tells you how much weight to keep on your old average versus the new sample.
[20:59] If alpha is 0.5, then the new average is 125
[20:59] but that would mean your average moves halfway to your sample every time, so usually the alpha is around .8 or so (although follow what's in your notes).
[20:59] The beta is the same thing but for estimating the average deviation
[21:00] ok
[21:00] It is essentially a weight on the next 'measure' to figure out how much it should affect your average
[21:00] so half the equation uses alpha and the other half 1-alpha so that the total weight is 1. This is how weighted averages work.
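One update step of that alpha/beta machinery can be sketched as follows (a sketch, using the convention where alpha and beta are the weights kept on the old average and old deviation; your notes may write it with alpha and 1-alpha swapped, and the default constants here are just common textbook values):

```python
def update_rtt(est_rtt, dev_rtt, sample, alpha=0.875, beta=0.75):
    """One exponentially weighted moving average step for RTT estimation.

    `alpha` is the weight kept on the old average; `1 - alpha` goes to the
    new sample, so the total weight is 1. `beta` plays the same role for
    the deviation estimate.
    """
    # Deviation: how far this sample landed from the current average.
    dev_rtt = beta * dev_rtt + (1 - beta) * abs(sample - est_rtt)
    # Average: move partway from the old estimate toward the sample.
    est_rtt = alpha * est_rtt + (1 - alpha) * sample
    # Timeout: the estimate plus a large multiplier on the deviation,
    # so "weird mojo" (high deviation) means a more forgiving timeout.
    timeout = est_rtt + 4 * dev_rtt
    return est_rtt, dev_rtt, timeout

# The example from the discussion: average 100, sample 150, alpha 0.5
est, dev, timeout = update_rtt(100, 0, 150, alpha=0.5, beta=0.5)
print(est)  # 125.0 -- the average moved halfway to the sample
```

With alpha at 0.5 the average jumps halfway to every sample; a larger alpha keeps more weight on the old average and smooths out noisy samples.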