Networking Services: Best Practices and Technologies | Google IT Support Certificate



This lesson is a part of the Google IT Support Certificate, providing you with job-ready skills to start or advance your career in IT. Get access to practice exercises, quizzes, discussion forums, job search help, and more on Coursera, where you can earn your official certificate. Visit itsert to enroll in the full learning experience today.

Managing hosts on a network can be a daunting and time-consuming task. Every single computer on a modern TCP/IP-based network needs to have at least four things specifically configured: an IP address, the subnet mask for the local network, a primary gateway, and a name server. On their own, these four things don't seem like much, but when you have to configure them on hundreds of machines, it becomes super tedious. Out of these four things, three are likely the same on just about every node on the network: the subnet mask, the primary gateway, and the DNS server. But the last item, an IP address, needs to be different on every single node on the network. That could require a lot of tricky configuration work, and this is where DHCP, or Dynamic Host Configuration Protocol, comes into play. Listen up, because DHCP is critical to know as an IT support specialist when it comes to troubleshooting networks.

DHCP is an application layer protocol that automates the configuration process of hosts on a network. With DHCP, a machine can query a DHCP server when it connects to the network and receive all of its networking configuration in one go. Not only does DHCP reduce the administrative overhead of having to configure lots of network devices on a single network, it also helps address the problem of having to choose what IP to assign to what machine. Every computer on a network requires an IP for communications, but very few of them require an IP that would be commonly known. For servers or network equipment on your network, like your gateway router, a static and known IP address is pretty important. For example, the devices on a network need to know the IP of their gateway at all times. If the local DNS server was malfunctioning, network administrators would still need a way to connect to some of these devices through their IPs. Without a static IP configured for a DNS server, it would be hard to connect to it to diagnose any problems if it was malfunctioning. But for a bunch of client devices, like desktops, laptops, or even mobile phones, it's really only important that they have an IP on the right network; it's much less important exactly which IP that is. Using DHCP, you can configure a range of IP addresses that's set aside for these client devices. This ensures that any of these devices can obtain an IP address when they need one, and it solves the problem of having to maintain a list of every node on the network and its corresponding IP.

There are a few standard ways that DHCP can operate. Dynamic allocation is the most common, and it works how we described it just now: a range of IP addresses is set aside for client devices, and one of these IPs is issued to these devices when they request one. Under dynamic allocation, the IP of a computer could be different almost every time it connects to the network. Automatic allocation is very similar to dynamic allocation, in that a range of IP addresses is set aside for assignment purposes. The main difference here is that the DHCP server is asked to keep track of which IPs it's assigned to certain devices in the past. Using this information, the DHCP server will assign the same IP to the same machine each time, if possible. Finally, there's what's known as fixed allocation. Fixed allocation requires a manually specified list of MAC addresses and their corresponding IPs. When a computer requests an IP, the DHCP server looks for its MAC address in a table and assigns the IP that corresponds to that MAC address. If the MAC address isn't found, the DHCP server might fall back to automatic or dynamic allocation, or it might refuse to assign an IP altogether. This can be used as a security measure to ensure that only devices that have had their MAC address specifically configured at the DHCP server will ever be able to obtain an IP and communicate on the network.

It's worth calling out that DHCP can be used to configure lots of things beyond what we've touched on here. Along with things like IP address and primary gateway, you could also use DHCP to assign things like NTP servers. NTP stands for Network Time Protocol and is used to keep all computers on a network synchronized in time. We'll cover it in more detail in later courses, but for now it's just worth knowing that DHCP can be used for more than just IP, subnet mask, gateway, and DNS server.

DHCP is an application layer protocol, which means it relies on the transport, network, data link, and physical layers to operate. But you might have noticed that the entire point of DHCP is to help configure the network layer itself. Let's take a look at exactly how DHCP works and how it accomplishes communications without a network layer configuration in place. Warning: geeky stuff ahead.

The process by which a client configured to use DHCP attempts to get network configuration information is known as DHCP discovery. The DHCP discovery process has four steps. First, we have the server discovery step. The DHCP client sends what's known as a DHCP discover message out onto the network. Since the machine doesn't have an IP and it doesn't know the IP of the DHCP server, a specially crafted broadcast message is formed instead. DHCP listens on UDP port 67, and DHCP discovery messages are always sent from UDP port 68. So the DHCP discover message is encapsulated in a UDP datagram with a destination port of 67 and a source port of 68. This is then encapsulated inside of an IP datagram with a destination IP of 255.255.255.255 and a source IP of 0.0.0.0. This broadcast message would get delivered to every node on the local area network, and if a DHCP server is present, it would receive this message.

Next, the DHCP server would examine its own configuration and would make a decision on what, if any, IP address to offer to the client. This will depend on whether it's configured to run with dynamic, automatic, or fixed address allocation. The response would be sent as a DHCP offer message with a destination port of 68, a source port of 67, a destination broadcast IP of 255.255.255.255, and its actual IP as the source. Since the DHCP offer is also a broadcast, it would reach every machine on the network, but the original client would recognize that this message was intended for itself. This is because the DHCP offer has a field that specifies the MAC address of the client that sent the DHCP discover message. The client machine would now process this DHCP offer to see what IP is being offered to it. Technically, a DHCP client could reject this offer. It's totally possible for multiple DHCP servers to be running on the same network, and for a DHCP client to be configured to only respond to an offer of an IP within a certain range, but this is rare. More often, the DHCP client would respond to the DHCP offer message with a DHCP request message. This message essentially says, "Yes, I'd like to have the IP that you offered to me." Since the IP hasn't been assigned yet, this is again sent from an IP of 0.0.0.0 and to the broadcast IP of 255.255.255.255.

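The addressing just described, a message sent from 0.0.0.0 on UDP port 68 to the broadcast address 255.255.255.255 on UDP port 67, wraps a fairly simple fixed-format payload. As a rough sketch only (the helper below is our own illustration, not a complete client, and real clients set more options than this), here is how the discover message's wire format from RFC 2131 could be assembled:

```python
import os
import struct

def build_dhcp_discover(mac: bytes) -> bytes:
    """Build a minimal DHCP discover payload (RFC 2131 layout)."""
    xid = os.urandom(4)  # random transaction ID, echoed back by the server
    packet = struct.pack(
        "!BBBB4sHH4s4s4s4s",
        1,             # op: BOOTREQUEST (client to server)
        1,             # htype: Ethernet
        6,             # hlen: MAC address length in bytes
        0,             # hops
        xid,
        0,             # secs
        0x8000,        # flags: broadcast bit set
        b"\x00" * 4,   # ciaddr: client has no IP yet (0.0.0.0)
        b"\x00" * 4,   # yiaddr: "your" IP, filled in by the server
        b"\x00" * 4,   # siaddr: server IP
        b"\x00" * 4,   # giaddr: relay agent IP
    )
    packet += mac + b"\x00" * (16 - len(mac))  # chaddr, padded to 16 bytes
    packet += b"\x00" * 192                    # sname + file fields, unused
    packet += b"\x63\x82\x53\x63"              # DHCP magic cookie
    packet += b"\x35\x01\x01"                  # option 53: message type = DISCOVER
    packet += b"\xff"                          # end-of-options marker
    return packet

# Sending it would then look like this (requires privileges to bind port 68):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
#   sock.sendto(build_dhcp_discover(my_mac), ("255.255.255.255", 67))
```

Note how the client's MAC address goes into the `chaddr` field; that's the field the client later uses to recognize which broadcast replies are meant for it.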
Finally, the DHCP server receives the DHCP request message and responds with a DHCP ack, or DHCP acknowledgement, message. This message is again sent to the broadcast IP of 255.255.255.255, with a source IP corresponding to the actual IP of the DHCP server. Again, the DHCP client would recognize that this message was intended for itself by the inclusion of its MAC address in one of the message fields. The networking stack on the client computer can now use the configuration information presented to it by the DHCP server to set up its own network layer configuration. At this stage, the computer that's acting as the DHCP client should have all the information it needs to operate in a full-fledged manner on the network it's connected to.

All of this configuration is known as a DHCP lease, as it includes an expiration time. A DHCP lease might last for days, or only for a short amount of time. Once a lease has expired, the DHCP client would need to negotiate a new lease by performing the entire DHCP discovery process all over again. A client can also release its lease back to the DHCP server, which it would do when it disconnects from the network. This allows the DHCP server to return the IP address that was assigned to its pool of available IPs.

Unlike protocols like DNS and DHCP, network address translation, or NAT, is a technique instead of a defined standard. This means that some of what we'll discuss in this lesson might be more high-level than some of our other topics. Different operating systems and different network hardware vendors have implemented the details of NAT in different ways, but the concepts of what it accomplishes are pretty constant. Network address translation does pretty much what it sounds like: it takes one IP address and translates it into another. There are lots of reasons why you would want to do this; they range from security safeguards to preserving the limited amount of available IPv4 space. We'll discuss the implications of NAT and the IPv4 address space later in this lesson, but for now, let's just focus on how NAT itself works and how it can provide additional security measures to a network.

At its most basic level, NAT is a technology that allows a gateway, usually a router or firewall, to rewrite the source IP of an outgoing IP datagram, while retaining the original IP in order to rewrite it into the response. To explain this better, let's look at a simple NAT example. Let's say we have two networks: Network A consists of the 10.1.1.0/24 address space, and Network B consists of the 192.168.1.0/24 address space. Sitting between these networks is a router that has an interface on each of them. Now let's put two computers on these networks: Computer 1 is on Network A, and Computer 2 is on Network B.

Computer 1 wants to communicate with a web server on Computer 2, so it crafts the appropriate packet at all layers and sends it to its primary gateway, the router sitting between the two networks. So far, this is a lot like many of our earlier examples. But in this instance, the router is configured to perform NAT for any outbound packets. Normally, a router will inspect the contents of an IP datagram, decrement the TTL by one, recalculate the checksum, and forward the rest of the data at the network layer without touching it. But with NAT, the router will also rewrite the source IP address, which in this instance becomes the router's IP on Network B. When the datagram gets to Computer 2, it'll look like it originated from the router, not from Computer 1.

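To make the bookkeeping concrete, here's a small sketch of the two rewrites a NAT gateway performs: replacing the source on the way out while remembering the original, then restoring it on the way back. This is our own illustration, not any router's actual implementation, and the addresses are hypothetical (203.0.113.1 is just a documentation address standing in for the router's external IP).

```python
ROUTER_EXTERNAL_IP = "203.0.113.1"  # hypothetical external IP of the gateway

# Translation table: external source port -> (original IP, original port)
nat_table = {}

def translate_outbound(src_ip, src_port):
    """Rewrite an outgoing packet's source, remembering the original."""
    external_port = src_port  # keep the client's port when possible
    if nat_table.get(external_port) not in (None, (src_ip, src_port)):
        # Another internal host already claimed this port: pick an unused one.
        external_port = max(nat_table) + 1
    nat_table[external_port] = (src_ip, src_port)
    return ROUTER_EXTERNAL_IP, external_port

def translate_inbound(dst_port):
    """Rewrite a returning packet's destination using the stored mapping."""
    return nat_table[dst_port]
```

A response addressed to the router's external IP on the remembered port is all the gateway needs in order to put the original internal address back into the destination field before forwarding the reply along.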
Computer 2 then crafts its response and sends it back to the router. The router, knowing that this traffic is actually intended for Computer 1, rewrites the destination IP field before forwarding it along. What NAT is doing in this example is hiding the IP of Computer 1 from Computer 2. This is known as IP masquerading. IP masquerading is an important security concept. The most basic concept at play here is that no one can establish a connection to your computer if they don't know what IP address it has. By using NAT in the way we've just described, we could actually have hundreds of computers on Network A, all of their IPs being translated by the router to its own. To the outside world, the entire address space of Network A is protected and invisible. This is known as one-to-many NAT, and you'll see it in use on lots of LANs today.

NAT at the network layer is pretty easy to follow: one IP address is translated to another by a device, usually a router. But at the transport layer, things get a little more complicated, and several additional techniques come into play to make sure everything works properly. With one-to-many NAT, we've talked about how hundreds, even thousands, of computers can all have their outbound traffic translated via NAT to a single IP. This is pretty easy to understand when the traffic is outbound, but a little more complicated once return traffic is involved. We now have potentially hundreds of responses all directed at the same IP, and the router at this IP needs to figure out which responses go to which computer. The simplest way to do this is through port preservation. Port preservation is a technique where the source port chosen by a client is the same port used by the router. Remember that outbound connections choose a source port at random from the ephemeral ports, the ports in the range 49,152 through 65,535. In the simplest setup, a router set up to NAT outbound traffic will just keep track of what this source port is and use that to direct traffic back to the right computer. Let's imagine a device that wants to establish an outbound connection, and the networking stack of its operating system chooses port 51300 for this connection. Once this outbound connection gets to the router, it performs network address translation and places its own IP in the source address field of the IP datagram, but it leaves the source port in the TCP segment the same, and it stores this data internally in a table. Now, when traffic returns to the router on port 51300, it knows that this traffic needs to be forwarded back to the right internal IP. Even with how large the set of ephemeral ports is, it's still possible for two different computers on a network to both choose the same source port around the same time. When this happens, the router normally selects an unused port at random to use instead.

Another important concept about NAT and the transport layer is port forwarding. Port forwarding is a technique where specific destination ports can be configured to always be delivered to specific nodes. This technique allows for complete IP masquerading while still having services that can respond to incoming traffic. Let's use our 10.1.1.0/24 network again to demonstrate this. Let's say there's a web server configured with an IP of 10.1.1.5. With port forwarding, no one would even have to know this IP. Prospective web clients would only have to know about the external IP of the router; let's say it's 192.168.1.1. Any traffic directed at port 80 on 192.168.1.1 would get automatically forwarded to 10.1.1.5. Response traffic would have the source IP rewritten to look like the external IP of the router. This technique not only allows for IP masquerading, it also simplifies how external users might interact with lots of services all run by the same organization. Let's imagine a company with both a web server and a mail server. Both need to be accessible to the outside world, but they run on different servers with different IPs. With port forwarding, traffic for either of these services could be aimed at the same external IP, and therefore the same DNS name, but it would get delivered to entirely different internal servers due to their different destination ports.

The IANA has been in charge of distributing IP addresses since 1988. Since that time, the internet has expanded at an incredible rate. The 4.2 billion possible IPv4 addresses have been predicted to run out for a long time, and they almost have. For some time now, the IANA has primarily been responsible for assigning address blocks to the five regional internet registries, or RIRs. The five RIRs are AFRINIC, which serves the continent of Africa; ARIN, which serves the United States, Canada, and parts of the Caribbean; APNIC, which is responsible for most of Asia, Australia, New Zealand, and Pacific island nations; LACNIC, which covers Central and South America and any parts of the Caribbean not covered by ARIN; and finally RIPE, which serves Europe, Russia, the Middle East, and portions of Central Asia. These five RIRs have been responsible for assigning IP address blocks to organizations within their geographic areas, and most have already run out. The IANA assigned the last unallocated /8 network blocks to the various RIRs on February 3rd, 2011. Then, in April 2011, APNIC ran out of addresses. RIPE was next, in September of 2012. LACNIC ran out of addresses to assign in June 2014, and ARIN did the same in September 2015. Only AFRINIC has some IPs left, but those are predicted to be depleted by 2018.
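The scale involved here is easy to check. A quick sketch using Python's standard ipaddress module (the 10.0.0.0/8 network below is just an arbitrary example of a /8 block) shows where the "4.2 billion" figure comes from and how few /8 blocks there ever were to hand out:

```python
import ipaddress

# IPv4 addresses are 32 bits, which gives the "4.2 billion" figure.
total_ipv4 = 2 ** 32
print(total_ipv4)  # 4294967296

# A /8 block fixes the first 8 bits of the address, leaving 24 bits of
# host space, so the entire IPv4 space contains only 256 such blocks.
slash8 = ipaddress.ip_network("10.0.0.0/8")
print(slash8.num_addresses)                # 16777216
print(total_ipv4 // slash8.num_addresses)  # 256
```

With fewer than 256 usable /8 blocks to distribute among five registries and the whole world's networks, it's not surprising the pool ran dry.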
Wikipedia has a great article all about IPv4 exhaustion and the timelines involved; I've added a link to it in the reading just after this video. This is, of course, a major crisis for the internet. IPv6 will eventually resolve these problems, and we'll cover it in more detail later in this course, but implementing IPv6 worldwide is going to take some time. For now, we want the internet to continue to grow, and we want more people and devices to connect to it, but without IP addresses to assign, a workaround is needed. Spoiler alert: you already know about the major components of this workaround, NAT and non-routable address space. Remember that non-routable address space was defined in RFC 1918 and consists of several different IP ranges that anyone can use. An unlimited number of networks can use non-routable address space internally, because internet routers won't forward traffic to it. This means there's never any global collision of IP addresses when people use those address spaces. Non-routable address space is largely usable today because of technologies like NAT. With NAT, you can have hundreds, even thousands, of machines using non-routable address space, yet with just a single public IP, all those computers can still send traffic to and receive traffic from the internet. All you need is one single IPv4 address, and via NAT, a router with that IP can represent lots and lots of computers behind it. It's not a perfect solution, but until IPv6 becomes more globally available, non-routable address space and NAT will have to do.

Businesses have lots of reasons to want to keep their networks secure, and they do this by using some of the technologies we've already discussed: firewalls, NAT, the use of non-routable address space, things like that. Organizations often have proprietary information that needs to remain secure, network services that are only intended for employees to access, and other things to protect. One of the easiest ways to keep networks secure is to use various security technologies so that only devices physically connected to their local area network can access these resources. But employees aren't always in the office. They might be working from home or on a business trip, and they might still need access to these resources in order to get their work done. That's where VPNs come in. Virtual private networks, or VPNs, are a technology that allows for the extension of a private or local network to hosts that might not be on that same local network. VPNs come in many flavors and accomplish lots of different things, but the most common example of how VPNs are used is for employees to access their business's network when they're not in the office. VPNs are a tunneling protocol, which means they provision access to something not locally available. When establishing a VPN connection, you might also say that a VPN tunnel has been established.

Let's go back to the example of an employee who needs to access company resources while not in the office. The employee could use a VPN client to establish a VPN tunnel to their company network. This would provision their computer with what's known as a virtual interface, with an IP that matches the address space of the network they've established a VPN connection to. By sending data out of this virtual interface, the computer can access internal resources just as if it was physically connected to the private network. Most VPNs work by using the payload section of the transport layer to carry an encrypted payload that actually contains an entire second set of packets: the network, transport, and application layers of a packet intended to traverse the remote network. Basically, this payload is carried to the VPN's endpoint, where all the other layers are stripped away and discarded. Then the payload is decrypted, leaving the VPN server with the top three layers of a new packet. This gets encapsulated with the proper data link layer information and sent out across the network. This process is completed in the inverse in the opposite direction.

VPNs usually require strict authentication procedures in order to ensure that they can only be connected to by computers and users authorized to do so. In fact, VPNs were one of the first technologies where two-factor authentication became common. Two-factor authentication is a technique where more than just a username and password are required to authenticate; usually, a short-lived numerical token is generated by the user through a specialized piece of hardware or software. VPNs can also be used to establish site-to-site connectivity. Conceptually, there isn't much difference between how this works compared to our remote employee situation. It's just that the router, or sometimes a specialized VPN device, on one network establishes the VPN tunnel to the router or VPN device on another network. This way, two physically separated offices might be able to act as one network and access network resources across the tunnel. It's important to call out that, just like NAT, VPNs are a general technology concept, not a strictly defined protocol. There are lots of unique implementations of VPNs, and the details of how they all work can differ a ton. The most important takeaway is that VPNs are a technology that uses encrypted tunnels to allow a remote computer or network to act as if it's connected to a network that it's not actually physically connected to.

A proxy service is a server that acts on behalf of a client in order to access another service. Proxies sit between clients and other servers, providing some additional benefit: anonymity, security, content filtering, increased performance, and a couple of other things. If any part of this sounds familiar, that's good. We've already covered some specific examples of proxies, like gateway routers. You don't hear them referred to this way, but a gateway definitely meets the definition of what a proxy is and how it works. The concept of a proxy is just that: a concept, or an abstraction. It doesn't refer to any specific implementation. Proxies exist at almost every layer of our networking model. There are dozens and dozens of examples of proxies you might run into during your career, but we'll cover just a few of the most common ones here.

Most often, you'll hear the term proxy used to refer to web proxies. As you might guess, these are proxies specifically built for web traffic. A web proxy can serve lots of purposes. Many years ago, when most internet connections were much slower than they are today, lots of organizations used web proxies for increased performance. Using a web proxy, an organization would direct all web traffic through it, allowing the proxy server itself to actually retrieve the web page data from the internet. It would then cache this data. This way, if someone else requested the same web page, it could just return the cached data instead of having to retrieve a fresh copy every time. This kind of proxy is pretty old, and you won't often find it in use today. Why? Well, for one thing, most organizations now have connections fast enough that caching individual web pages doesn't provide much benefit. Also, the web has become much more dynamic: Twitter is going to look different to every person with their own Twitter account, so caching this data wouldn't do much good.

A more common use of a web proxy today might be to prevent someone from accessing sites like Twitter entirely. A company might decide that accessing Twitter during work hours reduces productivity. By using a web proxy, they can direct all web traffic to it, allow the proxy to inspect what data is being requested, and then allow or deny the request depending on what site is being accessed.

Another example of a proxy is a reverse proxy. A reverse proxy is a service that might appear to be a single server to external clients, but actually represents many servers living behind it. A good example of this is how lots of popular websites are architected today. Very popular websites, like Twitter, receive so much traffic that there's no way a single web server could possibly handle all of it. A website that popular might need many, many web servers in order to keep up with processing all incoming requests. A reverse proxy, in this situation, could act as a single front end for many web servers living behind it. From the clients' perspective, it looks like they're all connected to the same server, but behind the scenes, this reverse proxy server is actually distributing the incoming requests to lots of different physical servers. Much like the concept of DNS round robin, this is a form of load balancing. Another way that reverse proxies are commonly used by popular websites is to deal with decryption. More than half of all traffic on the web is now encrypted, and encrypting and decrypting data is a process that can take a lot of processing power. You'll learn a lot more about encryption and how it works in another course in this program. Reverse proxies are now implemented in order to use hardware built specifically for cryptography to perform the encryption and decryption work, so that the web servers are free to just serve content. Proxies come in many other flavors, way too many for us to cover them all here, but the most important takeaway is that a proxy is any server that acts as an intermediary between a client and another server.

Good job! We covered a lot. Take a break for a bit before you move on to the quiz and project we've cooked up for you. Once you're done with those, take another break, and then meet me back here for the next module, where we'll cover the history of internet connections. Congratulations on finishing this lesson from the Google IT Support Certificate. Access the full experience, including job search help, and get the official certificate by clicking the icon or the link in the description. Watch the next lesson in the course by clicking here, and subscribe to our channel for more lessons from upcoming Google Career Certificates.

2021-04-08 11:17

