Network Principles

1.1 Identify Cisco Express Forwarding concepts

1.1.a FIB

Process Switching: when packets are process switched they are sent up from the data plane to the control plane, where the routing decision is made. With process switching, every packet that goes through the router is checked against the routing table. Processing every packet that flows through takes time, as each one has to be lifted from the data plane, moved to the control plane for processing, and then placed back into the data plane.

 

Fast Switching: after the first packet of a flow has been process switched, the result is cached. Subsequent packets in the same flow/connection can then be forwarded in the data plane without having to move to the control plane for processing, as they simply follow the rule that was created and cached for the initial packet in the flow/connection.

 

Cisco Express Forwarding effectively provides a way for the caches to be generated before the flows/connections arrive at the router. What this means is that all of the standard traffic (there are some exceptions) that comes into the router can be processed inside the data plane without any of the packets having to be moved to the control plane for processing. This allows data to move at close to wire speed at all times.

 

The Forwarding Information Base (FIB) is a table built from the routing table and optimised for fast lookups. An incoming packet's destination address is looked up in the FIB, which determines the required next hop for the packet and which interface the data should be sent out of.
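The lookup behaviour described above can be sketched in a few lines. This is an illustration only: the prefixes, next hops and interface names are made up, and a real CEF FIB is an optimised trie rather than a dictionary, but the longest-prefix-match result is the same.

```python
import ipaddress

# Hypothetical FIB: prefix -> (next hop, egress interface).
FIB = {
    ipaddress.ip_network("0.0.0.0/0"): ("203.0.113.1", "Gi0/0"),
    ipaddress.ip_network("10.0.0.0/8"): ("10.255.0.1", "Gi0/1"),
    ipaddress.ip_network("10.1.0.0/16"): ("10.1.255.1", "Gi0/2"),
}

def fib_lookup(dst):
    """Longest-prefix match: the most specific covering prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FIB[best]

# 10.1.2.3 matches /0, /8 and /16 -- the /16 is the most specific,
# so the packet leaves via Gi0/2.
```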

1.1.b Adjacency table

The adjacency table holds the next hops for data and also already has the Layer 2 information for those next hops, so it essentially contains pre-built frame headers for the connections.
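The "pre-built headers" idea can be shown with a toy rewrite step. The MAC addresses here are invented for illustration; the point is that the Layer 2 header is cached per next hop, so no ARP resolution is needed per packet.

```python
# Hypothetical adjacency table: next-hop IP -> pre-built Ethernet header
# (destination MAC + source MAC + EtherType 0x0800 for IPv4).
ADJACENCY = {
    "10.1.255.1": bytes.fromhex("0000aabbcc01" "0000aabbcc02" "0800"),
}

def rewrite(next_hop, ip_packet):
    """Prepend the cached Layer 2 header; no per-packet ARP lookup."""
    return ADJACENCY[next_hop] + ip_packet
```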

1.2 Explain general network challenges

1.2.a Unicast

No notes on this section yet. I am getting around to rewriting them...

1.2.b Out-of-order packets

No notes on this section yet. I am getting around to rewriting them...

1.2.c Asymmetric routing

No notes on this section yet. I am getting around to rewriting them...

1.3 Describe IP operations

1.3.a ICMP Unreachable and Redirects

No notes on this section yet. I am getting around to rewriting them...

1.3.b IPv4 and IPv6 fragmentation

No notes on this section yet. I am getting around to rewriting them...

1.3.c TTL

No notes on this section yet. I am getting around to rewriting them...

1.4 Explain TCP operations

1.4.a IPv4 and IPv6 (P)MTU

The Maximum Transmission Unit (MTU) is the largest packet a link can carry in a single frame. The default Ethernet MTU is 1500 bytes; packets larger than this are split up and fragmented into multiple packets to be sent across the network. Path MTU Discovery (PMTUD) works out the minimum MTU across the entire route to the destination. Once it finds the smallest MTU along the path, the sender sizes all segments to fit it, so that the network devices along the way do not have to perform fragmentation and reassembly at each node on the way to the destination.
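The core of PMTUD is simply taking the minimum of the per-hop MTUs. The values below are invented for illustration (an Ethernet segment, a tunnel with overhead, then Ethernet again):

```python
# Hypothetical per-hop MTUs along a path: Ethernet -> GRE tunnel -> Ethernet.
path_mtus = [1500, 1400, 1500]

# PMTUD settles on the smallest MTU on the path, so no intermediate
# router ever has to fragment packets sized to this value.
path_mtu = min(path_mtus)
```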

1.4.b MSS

The Maximum Segment Size (MSS) is the biggest segment that can be created to fit within the Path MTU. The (P)MTU is the smallest MTU that can be used for a path; the MSS is not the same value, because the MSS does not include the headers.

If the MSS were set equal to the MTU there would be no room for the Layer 3 and Layer 4 headers to be applied; it is the resulting packet that has to conform to the MTU size. The most common default MSS is 536 bytes, and the most common MTU is 1500 bytes (which allows an MSS of 1460).
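The relationship between MTU and MSS is simple arithmetic (assuming IPv4 with no IP or TCP options, so 20 bytes for each header):

```python
# IPv4 with no options: 20-byte IP header + 20-byte TCP header.
IP_HEADER = 20
TCP_HEADER = 20

def mss_for(mtu):
    """MSS is the MTU minus the Layer 3 and Layer 4 headers."""
    return mtu - IP_HEADER - TCP_HEADER

# A 1500-byte Ethernet MTU gives the familiar 1460-byte MSS;
# the 576-byte minimum-reassembly MTU gives the 536-byte default MSS.
```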

1.4.c Latency

Latency is the amount of time it takes a packet to travel from one host to another. Where TCP is concerned this affects several things: with high latency there is a good chance that TCP ACKs will not arrive in a timely manner, which can cause data retransmission. Latency also plays a heavy part in the TCP window size setting and affects the speed at which data can be transmitted.

1.4.d Windowing

Windowing is a way of making TCP more efficient. Windowing allows a number of segments to be sent without an acknowledgement.

For example, suppose a window size of 3 has been negotiated. This means that once 3 segments have been sent, an acknowledgement will be required, and the acknowledgement will increment by the amount of data that has been sent/received. The window size is calculated from the RWin (receive window) and the bandwidth; with these two metrics, an optimal window size is calculated. As these metrics are constantly changing, the window size is also constantly changing.
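The practical effect of the window is a cap on throughput: only one full window can be in flight per round trip. A minimal sketch of that relationship (the window size and RTT below are example values):

```python
def max_throughput_bps(window_bytes, rtt_seconds):
    """One full window can be in flight per round trip,
    so throughput is capped at window / RTT (converted to bits)."""
    return window_bytes * 8 / rtt_seconds

# A 64 KiB window on a 50 ms RTT path caps out at roughly 10.5 Mbit/s,
# no matter how fast the underlying link is.
```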

1.4.e Bandwidth-delay product

Also referred to as BDP, this is the product of bandwidth and delay: take your bandwidth and multiply it by your delay and you have your BDP. These should look familiar, as bandwidth and delay are the two main metrics used in the EIGRP metric calculation. The bandwidth-delay product is used to calculate the RWin and therefore affects TCP windowing.
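Worked through with example numbers, the BDP tells you how much data the "pipe" holds, and therefore how large the receive window needs to be to keep the link full:

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bits in flight on the path,
    converted to bytes."""
    return bandwidth_bps * rtt_seconds / 8

# A 100 Mbit/s link with a 20 ms RTT holds 250,000 bytes in flight,
# so the receive window should be at least that size to keep it full.
```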

1.4.f Global synchronization

This is not a good thing. TCP was designed with a congestion detection mechanism: a lot of TCP retransmits on the network is taken as a sign of a lot of congestion, and the senders respond by slowing their communications down, decreasing their TCP window sizes. The slow start mechanism then kicks in and gradually tries to increase the window sizes again. The problem is that fixed timers are used to increase the window sizes. This means all connections on a link are slowed down at the same time when congestion occurs; similarly, they are all slow-started at the same time and they all ramp their window sizes up at the same time. When they all gradually get back to the window sizes that caused the problem in the first place, the problem reoccurs: TCP recognizes the congestion and shrinks the window sizes so that everything backs off and the link desaturates. This cycle, where the link is saturated, TCP backs off, and everything ramps back up to saturation over and over, is called Global Synchronization.
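The lockstep behaviour can be shown with a toy simulation. This is an illustration, not a faithful TCP model: three flows share a link, and because they all react to congestion with the same rules at the same time, their windows stay identical and the aggregate load sawtooths between under- and over-saturation.

```python
CAPACITY = 30            # link capacity, in segments per round trip
windows = [4, 4, 4]      # three flows, all starting with the same window

history = []
for _ in range(12):
    if sum(windows) > CAPACITY:
        # Congestion detected: every flow halves its window at once.
        windows = [w // 2 for w in windows]
    else:
        # No congestion: every flow grows its window at once.
        windows = [w + 2 for w in windows]
    history.append(sum(windows))

# The flows never desynchronize: every flow always has the same window,
# and the total load cycles 18 -> 24 -> 30 -> 36 -> 18 ... forever.
```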

1.5 Describe UDP operations

1.5.a Starvation

This happens when there is congestion on the network, and it occurs due to TCP congestion detection. When congestion is detected, TCP is scaled down to reduce utilization. As UDP is a connectionless protocol, there is no mechanism to tell UDP to lessen the rate at which it is transmitting, so the queue/buffer space on the switches that TCP releases is promptly filled up by UDP. The more congestion occurs, the more UDP fills in the gaps created by TCP congestion mechanisms trying to recover the network. After a point, UDP can end up filling all of the buffers/queues, which starves the TCP flows of any bandwidth at all. The solution to this is to implement QoS on the network to ensure that UDP starvation cannot occur.

1.5.b Latency

As with the TCP version of this issue, this is the amount of time that a packet takes to traverse the network. However, latency for UDP is generally lower. Because UDP is not a connection-oriented protocol, it does not have to go through TCP's handshake process, so it operates quicker. UDP also has smaller headers than TCP, as it does not deal with sequence numbers, reordering, or reassembly. This also makes it a faster communication protocol than TCP.

1.6 Recognize proposed changes to the network

1.6.a Changes to routing protocol parameters

OSPF

OSPF behaviour can be changed by altering the bandwidth values that OSPF takes into account when running its algorithm. If you alter a bandwidth value for a routing process, you will need to alter the value on every router that is running that routing process, otherwise you will see sub-optimal routing scenarios occurring in the network. Asymmetric routing and routing loops are examples of what can happen when a network has disparate bandwidth values configured within a routing process.
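OSPF derives an interface cost from bandwidth against a reference bandwidth, which is why the reference must match on every router. A sketch of that calculation, assuming the common default reference bandwidth of 100 Mbit/s:

```python
# Default OSPF reference bandwidth: 100 Mbit/s.
REFERENCE_BW = 100_000_000  # bits per second

def ospf_cost(interface_bw_bps):
    """Cost = reference bandwidth / interface bandwidth,
    rounded down with a minimum cost of 1."""
    return max(1, REFERENCE_BW // interface_bw_bps)

# At the default reference bandwidth, FastEthernet (100 Mbit/s) and
# GigabitEthernet (1 Gbit/s) both cost 1, hiding the speed difference --
# one reason the reference bandwidth is often raised (on every router).
```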

EIGRP

Like OSPF, EIGRP can be configured with non-default metric values to alter how the routing algorithm determines the best routes for an Autonomous System. EIGRP differs in that it has more metrics available for configuration: Bandwidth, Delay, Load and Reliability (MTU is also carried, but its value does not affect the outcome of the Diffusing Update Algorithm (DUAL)). By default, the K-values weighting Load and Reliability are 0, which means they are not taken into account when the algorithm is run. Altering these values away from the defaults means those metrics will be included in the calculation.
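With the default K-values, the classic EIGRP composite metric reduces to bandwidth and delay only. A sketch of that default-case formula (bandwidth in kbit/s, delay in tens of microseconds, as EIGRP scales them):

```python
def eigrp_metric(min_bw_kbps, total_delay_tens_of_usec):
    """Classic EIGRP metric with default K-values (K1=1, K3=1,
    K2=K4=K5=0): only bandwidth and delay contribute.
    Bandwidth term uses the slowest link along the path."""
    return 256 * (10**7 // min_bw_kbps + total_delay_tens_of_usec)

# A path whose slowest link is 100 Mbit/s (100000 kbps) with 100 us
# total delay (10 tens-of-microseconds) yields a metric of 28160.
```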

1.6.b Migrate parts of the network to IPv6

Dual Stack

Dual stack is where you run both IPv4 and IPv6 on a router (or host) at the same time, allowing it to communicate with networks of either type during the migration.

NAT64

NAT64 translates between IPv6 and IPv4, allowing IPv6-only hosts to reach IPv4-only destinations. It is usually paired with DNS64, which synthesises IPv6 (AAAA) records for names that only have IPv4 (A) records.

Tunnelling

Tunnelling mechanisms (such as 6to4, 6rd and ISATAP) carry IPv6 traffic over an IPv4 network. All IPv6 tunnels are a pain and should only be used as a temporary solution, as they are far from perfect.


1.6.c Routing protocol migration

The best way of making a change to the routing protocol for a network is to employ a moving boundary (a rolling wave migration).

At the boundary you will need to implement redistribution from one routing protocol to the other. This method allows you to make the change gradually instead of trying to implement it throughout the whole network all at once. Doing it this way lets you migrate a number of targeted (and lower-priority) network segments first, and troubleshoot any issues that occur without causing major problems for the enterprise. Lessons learned from any problems that arise during the migration can then be reflected upon and avoided as the more important portions of the network are migrated to the new routing protocol.
