1.1 Explain the different design principles used in an enterprise network
1.1a High-level enterprise network design such as 2-tier, 3-tier, fabric, and cloud
The hierarchical LAN design model divides a network into building blocks. By assembling building blocks into a clear order, you can achieve a higher degree of stability, flexibility, and manageability for the individual blocks and the network as a whole. A hierarchical design employs four fundamental design principles: hierarchy, modularity, resiliency, and flexibility.
Under the hierarchical LAN design model, each element in the hierarchy offers a specific set of functions and services and plays a specific role in the design. The campus environment was typically defined as a three-tier model that consisted of the core, distribution, and access layers. In smaller environments, the design may have only two tiers, with the core and distribution elements combined (known as collapsed core). Each layer in the hierarchical model focuses on specific functions and allows specific features and services to be deployed at a particular layer.
Access Layer
The access layer is the edge of a campus network. It is where endpoint devices attach to the wired portion of the network, and where devices that extend the network out one more level (such as IP phones and wireless access points) are connected. The access layer, being the demarcation line between the network infrastructure and the devices that connect to the network, provides security, QoS, and a policy trust boundary. This layer can also be segmented using VLANs to group devices into different logical networks. Access layer switches do not have interconnections to other access layer switches in the hierarchical design model. Communication between endpoints on different access layer switches happens at the distribution layer.
Distribution Layer
The distribution layer acts as a service and control boundary between the access layer and core layer. It is also the boundary between the layer 2 and layer 3 domains. The distribution layer is the aggregation point for the access layer switches, and defines the summarization boundary for the network control plane protocols. The choices for features in the distribution layer are often determined by the requirements of the access layer or core layer.
Core Layer
The core layer is the backbone for all campus architecture elements, and aggregator of all other building blocks. It is responsible for providing scalability, high availability, and fast convergence to the network. The core layer should be able to switch packets with minimal processing as fast as possible.
Two-tier Design (Collapsed Core)
In smaller networks, the core and distribution layers can be combined into one layer known as the collapsed core design. This design can reduce costs while still providing most of the three-tier design benefits. The collapsed core must be able to provide the following services:
- High speed physical & logical paths
- Layer 2 aggregation
- Routing and network access policies
The collapsed core switches must provide sufficient capacity and redundancy, as they connect to multiple network blocks.
Three-tier Design
The three-tier design provides traffic aggregation & filtering at three successive levels: the core, distribution, and access layers. This design is used to scale larger networks, and is recommended when more than two pairs of distribution switches are required.
Layer 2 Access Design
All access switches operate in layer 2 forwarding mode, and the distribution switches run in layer 2 and layer 3 forwarding mode. Multiple links between the access and distribution switches are typically used, but STP will block some of the links, reducing the available bandwidth. Restricting a VLAN to a single switch can provide a per-VLAN loop-free topology, at the cost of restricting all hosts in a VLAN to a single switch.
The distribution layer should use a first hop redundancy protocol across a pair of switches to provide hosts with a gateway for each VLAN. HSRP or VRRP are commonly used, but the downside of these protocols is that hosts will send traffic only toward the active gateway; leaving the link to the standby gateway unused. GLBP provides greater uplink utilization by load-balancing traffic from hosts across multiple uplinks.
Layer 3 Access Design
Routed access is a design where layer 3 is extended to the access layer, eliminating layer 2 links between the access and distribution layers. The access layer switches also provide the IP gateway for hosts. A routed access design can:
- Eliminate the need for spanning tree, and increase bandwidth by using all available uplinks.
- Make troubleshooting easier with common end-to-end tools.
- Converge faster, because routing protocols recover from failures faster than spanning tree.
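As a sketch of the routed access idea, an access switch might be configured along these lines (interface names, addresses, and the OSPF process are illustrative, not from the source):

```
! Routed access switch: layer 3 uplinks and a local SVI gateway (illustrative)
ip routing

! Uplink to the distribution layer becomes a routed point-to-point link
interface TenGigabitEthernet1/0/1
 no switchport
 ip address 10.0.1.2 255.255.255.252

! The access switch itself provides the IP gateway for the host VLAN
interface Vlan10
 ip address 10.10.10.1 255.255.255.0

! A routing protocol, rather than spanning tree, handles uplink redundancy
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
```

With both uplinks configured as routed links, equal-cost multipath forwarding uses all available uplink bandwidth with no blocked ports.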
Simplified Campus Design
A simplified campus design relies on switch clustering technologies such as Virtual Switching System (VSS) and StackWise Virtual, where two switches are clustered into a single logical switch, or StackWise, where two or more physical switches are stacked into a single logical switch. The logical switch uses a single management and control plane, allowing the members to be managed as if they were a single switch. VSS and StackWise Virtual support EtherChannels that span the physical switches (multichassis EtherChannels, or MEC), and StackWise supports EtherChannels that span all the switches in a stack (cross-stack EtherChannels). MEC and cross-stack EtherChannels allow devices to connect across all the physical switches. VSS, StackWise Virtual, and StackWise can be applied to any building block, potentially offering several advantages:
- Simplified management and maintenance.
- No first-hop redundancy protocol needed, as the default gateway for hosts would typically be a logical interface.
- The use of EtherChannel allows all links to be used, increasing bandwidth and decreasing convergence time.
- Seamless traffic failover if one of the switches fails.
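From a downstream device's perspective, connecting to a switch cluster is an ordinary EtherChannel; the bundle simply spans two chassis. A minimal sketch, assuming hypothetical interface names and a StackWise Virtual pair upstream:

```
! EtherChannel toward a StackWise Virtual pair (illustrative)
! One member link lands on each physical chassis of the logical switch
interface range TenGigabitEthernet1/0/1, TenGigabitEthernet1/0/2
 channel-group 10 mode active      ! LACP negotiation

interface Port-channel10
 switchport mode trunk
```

Because the two chassis present a single control plane, the bundle stays up through the failure of either physical switch, with no spanning tree reconvergence.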
Software-Defined Access (SD-Access) Design
SD-Access combines the campus fabric and Cisco DNA, to add fabric capabilities to the network through automation. SD-Access also provides automated segmentation to separate user, device, and application traffic. Within the SD-Access fabric, services such as host mobility and security, and normal switching & routing, are provided.
1.1b High availability techniques such as redundancy, FHRP, and SSO.
Redundancy
Cisco uses two mechanisms to reduce the impact of network outages: stateful switchover (SSO) and non-stop forwarding (NSF), both of which build on route processor redundancy (RPR) and route processor redundancy plus (RPR+). With redundant route processors, and the separation of the control and data plane, SSO and NSF make continuous packet forwarding possible with minimal or no packet loss.
NSF relies on the separation of the control plane from the data plane during a supervisor switchover. The data plane continues to forward packets based on the CEF tables from prior to the switchover, while the newly active supervisor re-establishes routing neighbor adjacencies and rebuilds the routing protocol database.
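Enabling this behavior is typically a two-part configuration: SSO for the supervisor pair, and NSF (graceful restart) awareness in the routing protocol. A minimal sketch, assuming an OSPF deployment:

```
! Keep the standby supervisor synchronized for stateful switchover
redundancy
 mode sso

! Allow OSPF to keep forwarding through a switchover (NSF / graceful restart)
router ospf 1
 nsf
```

Neighboring routers must be NSF-aware for adjacencies to be preserved gracefully during the switchover.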
FHRP
A first hop redundancy protocol (FHRP) provides hosts in a subnet with a consistent default gateway that does not change when a failure occurs. Hot standby router protocol (HSRP), virtual router redundancy protocol (VRRP), and gateway load balancing protocol (GLBP) are common first hop redundancy protocols.
HSRP
HSRP is a Cisco-proprietary protocol that provides transparent failover of the default gateway. It is used by a group of routers, or layer 3 switches, to select an active interface or VLAN interface and a standby interface or VLAN interface. The active interface is used for routing packets, and the standby interface takes over if the active device fails, or if preset conditions are met. HSRP provides a virtual IP address and virtual MAC address which acts as the default gateway for hosts, and the active router answers ARP requests for the virtual IP address. The routers in an HSRP group monitor each other by sending Hello packets; if the active router fails or becomes unavailable, the standby router takes over. HSRP has two versions: HSRPv1 & HSRPv2. All devices in the HSRP group must use the same version, as the two versions are not interoperable. The differences between the two versions are:
| HSRPv1 | HSRPv2 |
|---|---|
| Supports groups 0-255 | Supports groups 0-4095 |
| Supports IPv4 only | Supports IPv4/IPv6 |
| Uses multicast address 224.0.0.2 for HSRP messages | Uses multicast address 224.0.0.102 for HSRP messages |
| The virtual MAC address is 0000.0C07.ACxx, where xx is the group number in hex | The virtual MAC address is 0000.0C9F.Fxxx, where xxx is the group number in hex |
Devices in an HSRP group use three multicast messages:
- Coup: sent when a standby device wants to assume the active role.
- Hello: used to inform other devices in the group the device’s HSRP priority and state.
- Resign: sent by an active device when it is about to shut down HSRP.
The HSRP Hello message contains the priority of the router, the hello time, and the hold time values. The hello time is the interval between Hello messages, while the hold time indicates how long a received Hello message remains valid; if no Hello arrives from the active router within the hold time, the standby router assumes the active role. The hello and hold timers can be set to millisecond values to allow for subsecond failover within a group. Both the hello and hold timers need to match on all devices in a group and ideally should be set as low as possible for fast convergence.
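A minimal HSRP configuration on a distribution-layer SVI might look like the following (the interface, group number, addresses, and timer values are illustrative):

```
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby version 2                      ! both peers must run the same version
 standby 10 ip 10.10.10.1               ! virtual IP that hosts use as gateway
 standby 10 priority 110                ! highest priority becomes active
 standby 10 preempt
 standby 10 timers msec 200 msec 750    ! hello / hold, subsecond failover
```

The peer switch would carry a matching configuration with a lower priority (for example, the default 100) so it settles into the standby role.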
Devices participating in an HSRP group are always in one of the following HSRP states:
- Active: the device is responsible for forwarding packets and responding to ARP requests for the virtual IP address.
- Init/Disabled: the device is not yet ready, or able to participate in HSRP.
- Learn: the device has not determined the virtual IP address, and has not seen a Hello message from the active router.
- Listen: the device is receiving Hello messages.
- Speak: the device is sending and receiving Hello messages.
- Standby: the device is prepared to become the active device if necessary.
HSRP priority determines which device will be the active router. The device with the highest priority value will be the active router; and in case of a tie, the device with the highest interface IP address will become the active router. HSRP preemption enables a device with the highest priority to always become the active router when it is available; however, preemption is disabled by default and can be enabled with standby preempt.
HSRP can be combined with tracking objects, which dynamically lower the priority of a device if the tracked object goes down. For example, the reachability of an IP route can be tracked, and the device's priority decremented if that route is lost, allowing a better-connected peer to take over.
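Continuing the illustrative VLAN 10 example, route tracking can be tied to the HSRP group like this (the tracked route and decrement value are assumptions for the sketch):

```
! Track reachability of a route; fail over HSRP if it disappears
track 1 ip route 0.0.0.0 0.0.0.0 reachability

interface Vlan10
 standby 10 priority 110
 standby 10 preempt
 standby 10 track 1 decrement 20   ! priority drops to 90 if track 1 is down
```

With preemption enabled on the peer, the decremented priority lets the standby device (still at its configured priority) take over the active role.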
HSRP can use two types of authentication to prevent rogue devices from assuming the active HSRP role. HSRP authentication is configured by setting a password in the HSRP configuration. HSRP routers can use either plaintext passwords or MD5-hashed passwords between each other; plaintext passwords should not be used, as the packets can be intercepted.
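Authentication is configured per group on the interface; a sketch with an assumed key string:

```
interface Vlan10
 standby 10 authentication md5 key-string MyHsrpSecret
```

The key string must match on every device in the group, or Hellos are rejected and each device declares itself active.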
VRRP
VRRP is a standards-based FHRP that is similar to HSRP in its operation and configuration. VRRP defines two roles - master and backup. The master role is akin to the HSRP active role, and the backup role corresponds to the HSRP standby role. Routers participating in VRRP will have one master device and one or more backup devices. The device with the highest priority will become the master, and the priority can be an integer from 0 to 255. However, priority 0 has a special meaning; it indicates that the current master router has stopped participating in VRRP. Advertising priority 0 lets a backup device take over the master role quickly, without waiting for the master down timer to expire.
VRRP allows the virtual IP address to be the interface address of one of the VRRP group members. The master router is the only router that sends VRRP advertisement packets, sent to the multicast address 224.0.0.18, with a default advertisement interval of 1 second and a master down interval of roughly 3 seconds (three times the advertisement interval plus a skew time).
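A minimal VRRP configuration parallels the HSRP case; the interface, group, and addressing below are illustrative:

```
interface Vlan20
 ip address 10.20.20.2 255.255.255.0
 vrrp 20 ip 10.20.20.1             ! virtual IP; may match a member's own address
 vrrp 20 priority 120              ! highest priority becomes the master
 vrrp 20 timers advertise 1        ! advertisements are sent by the master only
```

Unlike HSRP, VRRP preemption is enabled by default, so a returning higher-priority device reclaims the master role automatically.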
GLBP
GLBP is a Cisco-proprietary protocol that not only protects against a failed device or circuit, but also provides true load balancing within a subnet or VLAN by distributing the traffic from hosts across multiple uplinks. With GLBP, multiple first-hop devices on a subnet can offer a single virtual gateway address while sharing the packet forwarding load. With HSRP and VRRP, only one device forwards packets from the subnet, leaving the other standby/backup devices inactive and their uplink bandwidth unused. GLBP makes use of all available uplink bandwidth by load balancing traffic over multiple devices using a single virtual IP address and multiple virtual MAC addresses. Communication between GLBP members occurs through hello messages sent every 3 seconds to multicast address 224.0.0.102, UDP port 3222.
Members within the GLBP group elect one gateway to be the active virtual gateway (AVG). The AVG assigns a virtual MAC address to each member of the GLBP group, and each member is responsible for forwarding packets sent to the virtual MAC address that it is assigned. The members with an assigned virtual MAC address are known as active virtual forwarders (AVF). The AVG is responsible for answering ARP queries for the virtual IP address; and the AVG replies to the requests with the virtual MAC addresses assigned to AVFs.
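The AVG/AVF behavior described above requires only a few interface commands; the following sketch uses illustrative addressing and one of the available load-balancing methods:

```
interface Vlan30
 ip address 10.30.30.2 255.255.255.0
 glbp 30 ip 10.30.30.1                ! single virtual IP shared by the group
 glbp 30 priority 150                 ! highest priority is elected AVG
 glbp 30 preempt
 glbp 30 load-balancing round-robin   ! AVG hands out AVF vMACs in turn
```

Because each host's ARP reply can carry a different AVF virtual MAC, traffic from the subnet is spread across all group members rather than funneled through one active gateway.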