Monday, August 31, 2015

F5-CA - Application Delivery Fundamentals - Application Delivery Platforms

Objective 5.01 Describe the purpose, advantages, use cases, and challenges associated with hardware based application delivery platforms and virtual machines

Explain when a hardware based application delivery platform solution is appropriate

Hardware-based platforms are generally more powerful, more stable, and more scalable, and may include specialized hardware capable of accelerating common tasks.
The major drawback to hardware platforms is that they come in only a few sizes, so you often have to buy more capacity than you need. Additionally, because you are using hardware, you normally have to purchase it up front.

Explain when a virtual machine solution is appropriate

Virtual platforms are generally more flexible and can be better tailored to smaller environments. In environments where performance isn't the top requirement, virtual platforms are often cheaper and more easily sized to fit.

Explain the purpose, advantages, and challenges associated with hardware based application delivery platform solutions / virtual machines

Virtual ADC Architectural Considerations

Rapid deployment: As a software solution, a vADC can be provisioned and ready for inclusion in the development process much more quickly than a physical appliance.
Financial efficiency for specific workloads: Because the cost of a physical appliance can be high relative to certain application types, uses, and deployment scenarios, organizations sometimes have to choose between doing nothing and running application infrastructure sub-optimally. With a vADC, cost can be charged more easily to a specific application workload, and the vADC can be dedicated to that workload.
Failure isolation: If the failure of a specific application configuration causes the failure of a physical device front-ending many applications, it will fail over to the redundant unit; however, all applications could then be affected. Dedicating a vADC to specific application workloads creates better fault isolation.
Management: Being part of the hypervisor vendor's overall management framework can simplify the movement and management of the vADC. Coupling a vADC to specific applications makes it a more integral part of the overall ecosystem.
High availability: The same degree of high availability achieved with a purpose-built pADC cannot be realized on commodity server hardware.
Security: Instead of a completely hardened system, a shared environment is used, in which virtual appliance security depends on the hypervisor vendor and the commodity server vendor.
Scalability: Certain high-performance offload services do not have direct access to hardware, and commodity servers lack purpose-built ASICs for offload. Both limit the scale and throughput of a vADC.

Physical ADC Architectural Considerations

High availability: pADC hardware designs are carrier-hardened for rapid failover and reliability. Redundant components (power, fans, RAID, and hardware watchdogs) and serial-based failover make for extremely high uptimes and MTBF numbers. Commodity hardware of this type is costly and will not be integrated with the ADC software.
Security: Most pADC appliances and systems are security-hardened and proprietary to the vendor. pADCs are not dependent on other vendors' security implementations, or lack thereof. With hypervisors, there are known and potentially unknown vulnerabilities; to a certain extent, virtual appliance security is thus dependent on the hypervisor vendor.
Scalability: Some pADCs have unique high-speed bridge and offload ASICs for capabilities such as high-performance L4 processing, SSL, and compression, which lets them serve as a cost-effective aggregation point for many applications, or for high-performance/high-throughput applications where latency matters a great deal.
Management: A pADC has special lights-out management capabilities, so regardless of a physical device issue it can still be accessed, diagnosed, and fixed. Management can be less complex because the application delivery functions are centralized in a single device instead of distributed across the data center.
Rapid deployment: Shipping a physical product, then racking, stacking, and cabling it, takes time and adds cost to a deployment. It is also not well suited to agile development environments and QA labs.
Failure isolation: If the failure of a specific application configuration causes a physical device front-ending many applications to fail, it will fail over to the redundant unit; however, all applications can then be affected. A combination of physical and virtual ADCs can thus simultaneously provide both failure isolation and scale.

Given a list of environments/situations, determine which is appropriate for a hardware based application delivery platform solution

Given a list of environments/situations, determine which is appropriate for a virtual machine solution

Explain the advantages of dedicated hardware (SSL card, compression card)

SSL encryption and decryption can place a heavy load on servers. By using dedicated hardware such as an SSL acceleration card, this load can be offloaded from the servers, allowing them to respond faster and provide more services. Additionally, the dedicated hardware is normally capable of handling the workload for multiple servers, allowing the investment to be shared across several services.

More Information:

Increase SSL Offload Performance
Creating a Hybrid ADN Architecture with both Virtual and Physical ADCs

Objective 5.02 Describe the purpose of the various types of advanced acceleration techniques

Describe the purpose of TCP optimization

While there are nearly endless options when it comes to web acceleration, and we will explore many of them, it's usually best to start from the beginning. In this case, as with almost anything on the wire, "the beginning" is the TCP stack. While most people may immediately jump to web server and browser settings when asked "how do you get more out of your application?", they would be missing a fair number of possible gains. We will certainly tweak those settings as well, but let's work our way up to that.
To begin with, we want to ensure that we're using optimized TCP settings. There are numerous options at this layer that can be customized to suit your particular application's needs. While each of these can absolutely be tweaked by hand, BIG-IP also offers profiles that are excellent starting points. Profiles allow you to configure a set of options for a particular scenario or application and easily re-use or apply them as desired. To start, we'll select the appropriate profiles for our application.
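As an illustration of the kinds of knobs a TCP profile adjusts, here is a minimal Python sketch that applies a few common socket-level optimizations. The specific option values are illustrative defaults, not F5 profile settings:

```python
import socket

def tuned_socket():
    """Create a TCP socket with a few common optimization options applied.

    The values here are illustrative, not tuned recommendations.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm so small writes are sent immediately.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # Enlarge the send/receive buffers to better fill high-latency links.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 262144)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)
    # Send keepalive probes so dead peers are detected and cleaned up.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    return s
```

A profile plays the same role as this function: a named, reusable bundle of settings applied per scenario instead of per socket.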

Describe the purpose of HTTP keepalives, caching, compression, and pipelining

Caching is the local storage of network data for re-use, to cut down on transfer time for future requests. With Web pages, static caching simply serves objects -- typically images, JavaScript, stylesheets -- as long as they haven't passed their expiration date. But static caching can generally only be used for about 30 percent of HTTP requests, and that does not typically include high-value dynamic data.
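The expiration check at the heart of static caching can be sketched in a few lines of Python. This is a hypothetical in-memory cache for illustration, not how any particular product implements it:

```python
import time

class StaticCache:
    """Minimal static cache: serve stored objects until they expire."""

    def __init__(self):
        self._store = {}  # url -> (body, expires_at)

    def put(self, url, body, max_age):
        """Store an object with a lifetime of max_age seconds."""
        self._store[url] = (body, time.time() + max_age)

    def get(self, url):
        """Return the cached body, or None on a miss or an expired entry."""
        entry = self._store.get(url)
        if entry is None:
            return None
        body, expires_at = entry
        if time.time() >= expires_at:
            del self._store[url]  # past its expiration date: treat as a miss
            return None
        return body
```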

More Information:

Application Acceleration
Application Acceleration Architecture
WAN Optimization Manager
TCP Optimization and Compression

F5-CA - Application Delivery Fundamentals - Security

Objective 4.01 Compare and contrast positive and negative security models 

Describe the concept of a positive security model

A "positive" security model (also known as "whitelist") is one that defines what is allowed, and rejects everything else.
The positive security model can be applied to a number of different application security areas. For example, when performing input validation, the positive model dictates that you should specify the characteristics of input that will be allowed, as opposed to trying to filter out bad input. In the access control area, the positive model is to deny access to everything, and only allow access to specific authorized resources or functions. If you've ever had to deal with a network firewall, then you've probably encountered this application of the positive model.
The benefit of using a positive model is that new attacks, not anticipated by the developer, will be prevented. However, the negative model can be quite tempting when you're trying to prevent an attack on your site. Ultimately, however, adopting the negative model means that you'll never be quite sure that you've addressed everything. You'll also end up with a long list of negative signatures to block that has to be maintained.
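The input-validation form of the positive model can be sketched in Python. The username pattern below is an illustrative choice of "what is allowed"; everything that fails the pattern is rejected:

```python
import re

# Positive model: define exactly what valid input looks like,
# and reject everything else (the pattern itself is an example choice).
VALID_USERNAME = re.compile(r"^[A-Za-z0-9_]{3,16}$")

def is_valid_username(value):
    """Allow only 3-16 characters of letters, digits, and underscores."""
    return bool(VALID_USERNAME.fullmatch(value))
```

Note that an injection attempt is rejected not because it matched a known attack, but simply because it isn't on the allowed list.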

Describe the concept of a negative security model

A "negative" (or "blacklist") security model, which defines what is disallowed, while implicitly allowing everything else.
If you are familiar with good old antivirus software, you are familiar with AV definitions which need to get updated.
This is is a great example of Negative Security, where you know what is bad and you block only the bad stuff.
Pros: it's easy to manage as someone else does all the hard work for you in identifying the bad things and it can be automatically updated.
Cons: It provides limited protection against attacks, Symantec said in May 2014 that Antivirus is dead. A study found that even after 2 weeks of a 0-day vulnerability, majority of AV vendors didn't have a signature to block them.
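A blacklist can be sketched just as briefly. Note how a percent-encoded variant of a known-bad payload slips straight through, which is exactly the limitation described above (the signature list is illustrative):

```python
# Negative model: enumerate known-bad patterns and block only those.
BLACKLIST = ["<script", "drop table", "../"]

def is_blocked(value):
    """Return True if the input matches any known-bad signature."""
    lowered = value.lower()
    return any(signature in lowered for signature in BLACKLIST)
```

The encoded form `%3Cscript%3E` carries the same attack but matches no signature, so the filter passes it, just as AV signatures miss a 0-day until someone writes a definition for it.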

Given a list of scenarios, identify which is a positive security model

Given a list of scenarios, identify which is a negative security model

Describe the benefits of a positive security model

Only what has been defined is allowed
New exploits have a lesser chance of affecting the environment
Useful for environments that don't change frequently

Describe the benefits of a negative security model

Fewer false positives
Easier to use in frequently changing environments

More Information:

Positive Security Model
Positive vs Negative Security

Objective 4.02 Explain the purpose of cryptographic services 

Describe the purpose of signing

A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or documents. A valid digital signature gives a recipient reason to believe that the message was created by a known sender, that the sender cannot deny having sent the message (authentication and non-repudiation), and that the message was not altered in transit (integrity). Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery or tampering.
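The sign-with-private / verify-with-public mechanics can be demonstrated with a toy RSA key in Python. The primes are deliberately tiny (p=61, q=53) so the arithmetic is visible; real signatures use keys thousands of bits long and padded hashes:

```python
import hashlib

# Toy RSA key pair: p=61, q=53, so n = 3233, phi = 3120,
# e = 17, d = e^-1 mod phi = 2753. For illustration only.
N, E, D = 3233, 17, 2753

def _digest(message):
    # Reduce the SHA-256 hash modulo n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message):
    """Sign with the private exponent; only the key owner can do this."""
    return pow(_digest(message), D, N)

def verify(message, signature):
    """Anyone holding the public key (N, E) can check the signature."""
    return pow(signature, E, N) == _digest(message)
```

Verification succeeds only if the message hash matches and the signature was produced with the private key, giving the integrity, authentication, and non-repudiation properties described above.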

Describe the purpose of encryption

In cryptography, encryption is the process of encoding messages or information in such a way that only authorized parties can read it. Encryption does not of itself prevent interception, but denies the message content to the interceptor. In an encryption scheme, the intended communication information or message, referred to as plaintext, is encrypted using an encryption algorithm, generating ciphertext that can only be read if decrypted. For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is in principle possible to decrypt the message without possessing the key, but, for a well-designed encryption scheme, large computational resources and skill are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients, but not to unauthorized interceptors.
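The core idea, that a key turns plaintext into unreadable ciphertext and back again, can be shown with a one-time-pad-style XOR in Python. This is an educational sketch; real systems use vetted ciphers such as AES:

```python
import secrets

def xor_cipher(data, key):
    """XOR each byte with the key; applying it twice restores the input."""
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the data")
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))  # random key, kept secret
ciphertext = xor_cipher(plaintext, key)    # unreadable without the key
recovered = xor_cipher(ciphertext, key)    # the same key reverses it
```

An interceptor sees only `ciphertext`; without the key, the content is denied to them, which is exactly the purpose stated above.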

Describe the purpose of certificates and the certificate chains

In cryptography, a public key certificate (also known as a digital certificate or identity certificate) is an electronic document used to prove ownership of a public key. The certificate includes information about the key, information about its owner's identity, and the digital signature of an entity that has verified the certificate's contents are correct. If the signature is valid, and the person examining the certificate trusts the signer, then they know they can use that key to communicate with its owner.
In a typical public-key infrastructure (PKI) scheme, the signer is a certificate authority (CA), usually a company which charges customers to issue certificates for them. In a web of trust scheme, the signer is either the key's owner (a self-signed certificate) or other users ("endorsements") whom the person examining the certificate might know and trust.
Certificates are an important component of Transport Layer Security (TLS, sometimes called by its older name SSL, Secure Sockets Layer), where they prevent an attacker from impersonating a secure website or other server. They are also used in other important applications, such as email encryption and code signing.

Distinguish between private/public keys

The Public Key is what its name suggests - Public. It is made available to everyone via a publicly accessible repository or directory. On the other hand, the Private Key must remain confidential to its respective owner. Because the key pair is mathematically related, whatever is encrypted with a Public Key may only be decrypted by its corresponding Private Key and vice versa.

Compare and contrast symmetric/asymmetric encryption

Symmetric encryption uses the identical key to both encrypt and decrypt the data. Symmetric key algorithms are much faster computationally than asymmetric algorithms, as the encryption process is less complicated. The length of the key is critical to the strength of the security; NIST publishes recommendations on appropriate key lengths. There are inherent challenges with symmetric key encryption in that the key must somehow be managed: distributing a shared key is a major security risk.
Asymmetric encryption uses two related keys (public and private) for data encryption and decryption, and takes away the security risk of key sharing.  The private key is never exposed.  A message that is encrypted by using the public key can only be decrypted by applying the same algorithm and using the matching private key.   Likewise, a message that is encrypted by using the private key can only be decrypted by using the matching public key.

More Information:

Public Key Certificate
Digital Signature
SSL Certificates
Public-Key Cryptography
Symmetric vs Asymmetric Encryption
Client SSL Profiles
Server SSL Profiles 

Objective 4.03 Describe the purpose and advantages of authentication

Explain the purpose of authentication

Authentication is the process of proving that you are who you say you are, usually for the purposes of gaining access to something. In the real world, this is a relatively easy task, even with a really bad driver's license photo; but in cyberspace, nothing is ever that simple.

Explain the advantages of single sign on

Single sign-on (SSO) is a property of access control across multiple related, but independent, software systems. With this property, a user logs in with a single ID to gain access to the connected systems without being prompted for different usernames or passwords (or, in some configurations, is seamlessly signed on at each system).
Benefits of using single sign-on include:

  • Reducing password fatigue from different user name and password combinations
  • Reducing time spent re-entering passwords for the same identity
  • Reducing IT costs due to a lower number of IT help desk calls about passwords

SSO uses centralized authentication servers that all other applications and systems rely on for authentication, combined with techniques to ensure that users do not have to actively enter their credentials more than once.
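One simple way to sketch that centralized model is a signed token: the authentication server signs the user's identity once at login, and any participating application can verify the token without re-prompting for a password. This is an illustrative HMAC scheme, not a specific SSO protocol such as SAML or Kerberos:

```python
import hmac
import hashlib

# Shared secret held by the central auth server and the apps that
# trust it (a hypothetical value for illustration).
SECRET = b"demo-sso-signing-secret"

def issue_token(user):
    """Central auth server: sign the identity once, at login time."""
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def check_token(token):
    """Any participating app: verify the signature instead of asking
    the user for credentials again. Returns the user, or None."""
    user, _, sig = token.partition(":")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None
```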

Explain the concepts of multifactor authentication

Multifactor authentication is a method of authenticating a user using two or more of the following:

  • Something you know (such as passwords)
  • Something you have (such as a smartcard)
  • Something you are (such as fingerprints)
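Two of these factors can be combined in a short Python sketch: a password check ("something you know") plus an RFC 6238 time-based one-time password, the kind generated by an authenticator device or app ("something you have"):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def authenticate(password, otp, stored_password, secret):
    """Require BOTH factors: something you know AND something you have."""
    return password == stored_password and otp == totp(secret)
```

The test below checks the implementation against the published RFC 6238 test vector (secret `12345678901234567890`, time 59 seconds).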

Describe the role authentication plays in AAA

An AAA server is a server program that handles user requests for access to computer resources and, for an enterprise, provides authentication, authorization, and accounting (AAA) services. The AAA server typically interacts with network access and gateway servers and with databases and directories containing user information.

More Information:

F5 Authentication 101
Single Sign On
Single sign on
Multi-factor Authentication

Objective 4.04 Describe the purpose, advantages, and use cases of IPsec and SSL VPN 

Explain the purpose, advantages, and challenges associated with IPsec

Internet Protocol Security (IPsec) is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session. IPsec includes protocols for establishing mutual authentication between agents at the beginning of the session and negotiation of cryptographic keys to be used during the session. IPsec can be used in protecting data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).
Internet Protocol security (IPsec) uses cryptographic security services to protect communications over Internet Protocol (IP) networks. IPsec supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and replay protection.
IPsec is an end-to-end security scheme operating in the Internet Layer of the Internet Protocol Suite, while some other Internet security systems in widespread use, such as Transport Layer Security (TLS) and Secure Shell (SSH), operate in the upper layers at the Application layer. Hence, only IPsec protects all application traffic over an IP network. Applications can be automatically secured by IPsec at the IP layer.

Explain the purpose, advantages, and challenges associated with SSL VPN

An SSL VPN (Secure Sockets Layer virtual private network) is a form of VPN that can be used with a standard Web browser. In contrast to the traditional Internet Protocol Security (IPsec) VPN, an SSL VPN does not require the installation of specialized client software on the end user's computer. It is used to give remote users access to Web applications, client/server applications, and internal network connections.

Given a list of environments/situations, determine which is appropriate for an IPsec / SSL VPN solution

More Information:


F5-CA - Application Delivery Fundamentals - Load Balancing Essentials

Objective 3.01: Discuss the purpose of, use cases for, and key considerations related to load balancing

Explain the purpose of distribution of load across multiple servers

Load balancing distributes workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Using multiple components with load balancing instead of a single component may increase reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process.
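Two of the simplest distribution algorithms, round robin and least connections, can be sketched in a few lines of Python (illustrative models, not any product's implementation):

```python
import itertools

class RoundRobinPool:
    """Round robin: hand each new request to the next server in turn."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

def least_connections(active):
    """Least connections: given a mapping of server -> current connection
    count, pick the server handling the fewest connections right now."""
    return min(active, key=active.get)
```

Round robin assumes roughly equal request cost; least connections adapts when some requests are long-lived, which is why dynamic algorithms are often preferred for uneven workloads.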

Given an environment, determine the appropriate load balancing algorithm that achieves a desired result

Explain the concept of persistence

Using the BIG-IP® local traffic management system, you can configure session persistence. When you configure session persistence, the BIG-IP system tracks and stores session data, such as the specific pool member that serviced a client request. The primary reason for tracking and storing session data is to ensure that client requests are directed to the same pool member throughout the life of a session or during subsequent sessions.
In addition, session persistence can track and store other types of information, such as user preferences or a user name and password.
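Source-address persistence can be modeled in Python as a round-robin balancer that pins each client to the pool member that served its first request (an illustrative sketch of the tracking described above):

```python
class PersistentBalancer:
    """Round robin with source-address persistence: once a client has
    been sent to a pool member, later requests go to the same member."""

    def __init__(self, members):
        self._members = members
        self._next = 0
        self._sessions = {}  # client ip -> pinned pool member

    def route(self, client_ip):
        if client_ip not in self._sessions:
            # First request: pick the next member and remember the choice.
            self._sessions[client_ip] = self._members[self._next]
            self._next = (self._next + 1) % len(self._members)
        return self._sessions[client_ip]
```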

More Information:

Load Balancing Wiki
Load Balancing 101
Load Balancing Algorithms
Load Balancing Least Connection, Fastest, Observed, and Predictive
Load Balancing Algorithm Dynamic Ratio
Load Balancing Algorithm Least Sessions
Session Persistence

Objective 3.02: Differentiate between a client and server 

Given a scenario, identify the client/server

In general, all of the machines on the Internet can be categorized as one of two types: servers and clients. Machines that provide services (like Web servers or FTP servers) to other machines are servers, and the machines used to connect to those services are clients. When you connect to Yahoo! to read a page, Yahoo! is providing a machine (probably a cluster of very large machines) for use on the Internet, to service your request. Yahoo! is providing a server. Your machine, on the other hand, is probably providing no services to anyone else on the Internet. Therefore, it is a user machine, also known as a client.

Explain the role of a client

The role of a client is to consume the services/resources provided by a server

Explain the role of a server

The role of a server is to provide one or more services to its clients.

More Information:

Client-server model

F5-CA - Application Delivery Fundamentals - F5 Solutions and Technology

Objective 2.01: Articulate the role of F5 products

Explain the purpose, use, and benefits of APM, LTM, ASM, GTM

The F5 BIG-IP system is actually composed of several separate modules, each providing a specific set of features. F5 publishes a full list of modules; however, the exam blueprint appears to focus on APM, LTM, ASM, and GTM (now BIG-IP DNS).

Access Policy Manager
APM provides valuable insight and policy-based control over who is on the network and what applications they are using.
Local Traffic Manager
LTM provides intelligent traffic management, ensuring peak application performance and increasing operational efficiency.
Application Security Manager
ASM offers an agile, certified web application firewall to protect and optimize all of your applications with advanced application and location visibility and protection.
Global Traffic Manager - BIG-IP DNS
BIG-IP DNS directs users to the nearest data center that will provide the best application experience. Using high-performance DNS services, BIG-IP DNS secures your DNS infrastructure from attacks.
Advanced Firewall Manager
AFM inspects traffic that is both entering and leaving your datacenter, ensuring protection of sensitive data such as social security numbers and credit card information.
Application Acceleration Manager
AAM speeds up application delivery, controls the growth of bandwidth needs, particularly for video, and off loads additional processor-intensive tasks from the server.

More information:

Access Policy Manager (APM)
Application Security Manager (ASM)
Local Traffic Manager (LTM)
Global Traffic Manager (GTM)
Enterprise Manager (EM)
F5 White Papers
F5 Datasheets

Objective 2.02: Explain the purpose, use, and advantages of iRules

Explain the purpose of iRules

An iRule is a powerful and flexible feature of BIG-IP devices, based on F5's exclusive TMOS architecture. iRules provide you with unprecedented control to directly manipulate and manage any IP application traffic. iRules use an easy-to-learn scripting syntax and enable you to customize how you intercept, inspect, transform, and direct inbound or outbound application traffic.

Explain the advantages of iRules

iRules is a highly customizable, Tcl-based scripting language that gives you complete programmatic access to the traffic on the wire. If you're looking to inspect, analyze, modify, route, redirect, discard, manipulate, replay, mirror, or just about anything else you can think of, chances are you can do it with an iRule. It offers incredible flexibility and control, making it possible to do unbelievably cool things with your F5 gear. iRules give you the ability to write business logic into your network layer, and that is a powerful and liberating thing.

Given a list of situations, determine which would be appropriate for the use of iRules

More information:

iRules 101
Getting Started with iRules

Objective 2.03: Explain the purpose, use, and advantages of iApps

Explain the purpose of iApps

Local Traffic Manager includes F5 iApps, a powerful set of features that enable you to manage application services rather than individual devices and objects. iApps gives you greater visibility and control over application delivery—and helps you deploy in hours rather than weeks. This application-centric approach aligns the network with your applications and adapts application delivery to business needs.

Explain the advantages of iApps

iApps, similar to wizards, guide the administrator through setting up of common configurations. However, unlike a wizard, iApps can be re-run multiple times to adjust or update the configurations.

Given a list of situations, determine which would be appropriate for the use of iApps

If there is an iApp appropriate for your environment, use it. The only exception might be when you wish to customize the configuration after the fact and don't wish to lose that customization when making updates in the future.

More information:

Objective 2.04: Explain the purpose of and use cases for full proxy and packet forwarding/packet based architectures

Describe a full proxy architecture

The reason there is a distinction made between “proxy” and “full-proxy” stems from the handling of connections as they flow through the device. All proxies sit between two entities – in the Internet age almost always “client” and “server” – and mediate connections.
A full-proxy maintains two separate session tables – one on the client-side, one on the server-side. There is effectively an “air gap” isolation layer between the two internal to the proxy, one that enables focused profiles to be applied specifically to address issues peculiar to each “side” of the proxy.
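That "air gap" can be modeled in Python: the proxy keeps two independent session tables, and each side of a connection carries its own settings. The window sizes below are illustrative stand-ins for the per-side profiles a real full proxy would apply:

```python
class FullProxy:
    """Model of a full proxy: each client connection is terminated and a
    separate server-side connection is opened, so the two sides have
    independent session state and can use independent profiles."""

    def __init__(self):
        self.client_sessions = {}  # client conn id -> session settings
        self.server_sessions = {}  # server conn id -> session settings

    def accept(self, client_id, backend_id):
        # Client side: a profile tuned for high-latency WAN links
        # (illustrative values).
        self.client_sessions[client_id] = {"window": 65535, "peer": backend_id}
        # Server side: a *separate* connection with a profile tuned for
        # the low-latency LAN behind the proxy.
        self.server_sessions[backend_id] = {"window": 16384, "peer": client_id}
```

A packet-by-packet device would have only one table (or none); the two-table design is what lets a full proxy address issues peculiar to each side independently.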

Describe a packet forwarding/packet based architecture

A network device with a packet-based (or packet-by-packet) design is located in the middle of a stream of communications, but is not an endpoint for those communications; it just passes the packets through. Often a device that operates on a packet-by-packet basis does have some knowledge of the protocols flowing through it, but is far from being a real protocol endpoint. The speed of these devices is primarily based on not having to understand the entire protocol stack, short-cutting the amount of work needed to handle traffic. For example, with TCP/IP, this type of device might only understand the protocols well enough to rewrite the IP addresses and TCP ports; only about half of the entire stack.
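The "rewrite the addresses, pass everything else through" behavior can be sketched in Python, modeling a packet as a plain dictionary (all field names and addresses are illustrative):

```python
def forward_packet(packet, vip, pool_member):
    """Packet-by-packet device: if the packet targets the virtual server,
    rewrite only the destination IP/port; the payload and the rest of the
    protocol stack pass through untouched (no protocol termination)."""
    if (packet["dst_ip"], packet["dst_port"]) == vip:
        rewritten = dict(packet)
        rewritten["dst_ip"], rewritten["dst_port"] = pool_member
        return rewritten
    return packet  # not for the virtual server: pass through unchanged
```

Because the device never interprets the payload, it stays fast, but it also cannot make decisions that require understanding the full protocol, which is the trade-off described above.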

Given a list of situations, determine which is appropriate for a full proxy / packet based architecture

More information:
Full Proxy Architecture
Packet-Based vs Full Proxy
Auto Last Hop
Virtual Server Types

Objective 2.05: Explain the advantages and configurations of high availability (HA)

Explain active/active

For certain network environments, you might want to configure an active-active configuration. The basic configuration procedure is similar to the configuration procedure for an active-standby configuration, except that you must set the redundancy mode on both units to Active.
Unlike an active/standby configuration, which is designed strictly to ensure no interruption of service in the event that a BIG-IP system becomes unavailable, an active-active configuration has an additional benefit. An active-active configuration allows the two units to simultaneously manage traffic, thereby improving overall performance.
A common active-active configuration is one in which each unit processes connections for different virtual servers. For example, you can configure unit 1 to process traffic for virtual servers A and B, and configure unit 2 to process traffic for virtual servers C and D. If unit 1 becomes unavailable, unit 2 begins processing traffic for all four virtual servers.
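The assignment-plus-failover logic described above can be sketched in Python (an illustrative model of a two-unit pair, not BIG-IP's actual failover mechanism):

```python
def active_owner(virtual_server, assignments, healthy):
    """Return which unit should process a virtual server: its assigned
    unit if that unit is healthy, otherwise the surviving peer.

    assignments: virtual server -> normally-assigned unit
    healthy: set of units currently available
    """
    unit = assignments[virtual_server]
    if unit in healthy:
        return unit
    # Assigned unit is down: fail over to a healthy peer, if any.
    peers = set(assignments.values()) - {unit}
    survivors = peers & healthy
    return survivors.pop() if survivors else None
```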

Explain active/standby

When a redundant system is in active/standby mode, one unit is active, that is, accepting and processing connections on behalf of the redundant system, while the other unit is idle (that is, in a standby state).
When failover occurs, the standby unit becomes active, and it normally stays active until failover occurs again, or until you force it into a standby state. Forcing the unit into a standby state automatically causes the other system to become active again, if possible.
For example, you can configure unit 1 to process traffic for virtual servers A and B. The standby unit monitors the active unit, and if communications fail, the standby unit initiates a failover and becomes the active unit. The newly-active unit then begins processing traffic for both virtual servers.

Explain the benefits of deploying BIG-IP devices in a redundant configuration

A redundant system is a type of BIG-IP® system configuration that allows traffic processing to continue in the event that a BIG-IP system becomes unavailable. A BIG-IP redundant system consists of two identically-configured BIG-IP units. When an event occurs that prevents one of the BIG-IP units from processing network traffic, the peer unit in the redundant system immediately begins processing that traffic, and users experience no interruption in service.

Sunday, August 02, 2015

F5-CA - Application Delivery Fundamentals - OSI

While starting to look around for various F5 study topics, I was directed to an existing study page with a full listing of research topics for the F5 101 exam, though it is a little out of date. I decided to use it as the starting point for my studies, but needed to update several aspects of it to match the current exam blueprint and software versions.
This page will focus on the OSI model. This is likely review for the most part, but it's always good to know exactly what they are asking about.

Objective 1.01: Explain, compare and contrast the OSI layers

The OSI model is the general model used by most, if not all, network solutions today. It is composed of 7 layers; each layer performs a specific role and exchanges information with the layers directly above and below it. This allows each layer to perform specific tasks and functions before passing traffic to the next layer.

Describe the function of each OSI layer / Differentiate between the OSI layers

  1. Physical: The physical layer describes the physical connection between devices, the types of cables or frequencies, voltage levels, adapters, and so on. In essence, the physical layer is all of the physical components (wired and wireless) in a network.
  2. Data Link: The data link layer is responsible for two things: addressing within the physical network, and frame synchronization/error checking. Addressing uses Media Access Control (MAC) addresses - 48-bit (6-byte) unique identifiers composed of a manufacturer identifier and a vendor-assigned value.
  3. Network: The network layer is responsible for addressing and routing on larger networks. The network layer is most commonly associated with an IP Address, but is responsible for much more. The network layer also performs packet sequencing, data fragmentation, creation of virtual circuits and more.
  4. Transport: The transport layer is responsible for flow control between multiple applications and services. This layer is commonly associated with TCP, and is responsible for confirming reliability of data transmission. 
  5. Session: The session layer is responsible for creating, maintaining, and terminating conversations on the network. The easiest parallel to describe the session layer would be a telephone switchboard operator - they establish and maintain the connections between the parties, and then take down the connection when the conversation is finished.
  6. Presentation: The primary role of the presentation layer is to convert well-structured network data into something the application layer can understand, and vice versa. This is necessary because applications may have many different ways to communicate, and this layer translates those communication methods into a single standard.
  7. Application: The application layer is the final layer of integration between the network and applications.

Describe the purpose of the various address types at different OSI layers

Traditionally, addressing occurs only at the Network and Data Link layers. The Data Link layer uses MAC addresses to identify and communicate with nodes over the Physical layer. This addressing method is inefficient as it requires broadcasts to find nodes and perform name resolution. NOTE: This will only allow communication with nodes that are on the same network.

The Network layer uses IP or IPX addresses to communicate with nodes that are potentially outside of the local network. These addresses are routable between networks and are therefore used as the backbone of the internet.

Address Resolution Protocol, or ARP, is used to provide translation between Network and Data Link addresses.

More information:

OSI Model Wiki
Another OSI Model Overview

Objective 1.02: Explain protocols and technologies specific to the data link layer

The data link layer is often referred to as having 2 sublayers - the Media Access Control (MAC) layer, and the Logical Link Control (LLC) layer.
The MAC sublayer is responsible for:
  • Frame delimiting and recognition
  • Addressing of destination stations (both as individual stations and as groups of stations)
  • Conveyance of source-station addressing information
  • Transparent data transfer of LLC PDUs, or of equivalent information in the Ethernet sublayer
  • Protection against errors, generally by means of generating and checking frame check sequences
  • Control of access to the physical transmission medium
  • receive/transmit normal frames
  • half-duplex retransmission and backoff functions
  • append/check FCS (frame check sequence)
  • interframe gap enforcement
  • discard malformed frames
  • prepend(tx)/remove(rx) preamble, SFD (start frame delimiter), and padding
  • half-duplex compatibility: append(tx)/remove(rx) MAC address

The LLC sublayer is responsible for enabling several network protocols (IP, IPX, DECnet, etc.) to coexist within a multipoint network and to be transported over the same network medium. It can also provide flow control and automatic repeat request (ARQ) error management mechanisms.

Explain the purpose of a switch’s forwarding database

A forwarding information base (FIB), also known as a forwarding table or CAM table, is most commonly used in network bridging, routing, and similar functions to find the proper interface to which the input interface should forward a packet. It is a dynamic table that maps MAC addresses to ports. It is the essential mechanism that separates network switches from network hubs.

An Ethernet switch's role is to copy Ethernet frames from one port to another. The presence of a CAM table is one attribute that separates a switch from a hub. Without a functional CAM table, all frames received by a network switch would be echoed back out to all other ports, much like an Ethernet hub. A switch should only emit a frame on the port where the destination network device resides (unicast), unless the frame is for all nodes on the switch (broadcast) or multiple nodes (multicast).

Generally, the CAM table is a system memory construct used by Ethernet switch logic to map a station's MAC address to the switch port the station is connected to. This allows switches to facilitate communications between connected stations at high speed regardless of how many devices are connected to the switch. The CAM table is consulted to make the frame forwarding decision. Switches learn MAC addresses from the source address of Ethernet frames on the ports, such as Address Resolution Protocol response packets.
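The learn-then-forward behavior described above can be sketched in a few lines. This is an illustrative model only, not any vendor's implementation; the class and method names are invented for the example.

```python
# Minimal sketch of how a switch builds and uses its CAM table.

class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.cam = {}  # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame is sent out of."""
        # Learn: associate the source MAC with the ingress port.
        self.cam[src_mac] = in_port
        # Forward: known unicast goes out one port; unknown unicast,
        # broadcast, and multicast are flooded to all other ports.
        if dst_mac in self.cam:
            return [self.cam[dst_mac]]
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(4)
print(sw.receive("aa:aa", "bb:bb", 0))  # unknown dst -> flood: [1, 2, 3]
print(sw.receive("bb:bb", "aa:aa", 2))  # aa:aa learned on port 0 -> [0]
print(sw.receive("aa:aa", "bb:bb", 0))  # bb:bb now known on port 2 -> [2]
```

Note how the table is populated only from *source* addresses, exactly as described above, and how an empty table makes the switch behave like a hub.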

Explain the purpose and functionality of ARP

ARP is used for converting a network address (e.g. an IPv4 address) to a physical address like an Ethernet address (also named a MAC address). ARP has been implemented with many combinations of network and data link layer technologies, like IPv4, Chaosnet, DECnet and Xerox PARC Universal Packet (PUP) using IEEE 802 standards, FDDI, X.25, Frame Relay and Asynchronous Transfer Mode (ATM). IPv4 over IEEE 802.3 and IEEE 802.11 is the most common case.

In Internet Protocol Version 6 (IPv6) networks, the functionality of ARP is provided by the Neighbor Discovery Protocol (NDP).

The Address Resolution Protocol is a request and reply protocol that runs encapsulated by the line protocol. It is communicated within the boundaries of a single network, never routed across internetwork nodes. This property places ARP into the Link Layer of the Internet Protocol Suite, while in the Open Systems Interconnection (OSI) model, it is often described as residing between Layers 2 and 3, being encapsulated by Layer 2 protocols. However, ARP was not developed in the OSI framework.

Explain the purpose and functionality of MAC addresses

A media access control address (MAC address) is a unique identifier assigned to network interfaces for communications on the physical network segment. MAC addresses are used as a network address for most IEEE 802 network technologies, including Ethernet and WiFi. Logically, MAC addresses are used in the media access control protocol sublayer of the OSI reference model.

MAC addresses are most often assigned by the manufacturer of a network interface controller (NIC) and are stored in its hardware, such as the card's read-only memory or some other firmware mechanism. If assigned by the manufacturer, a MAC address usually encodes the manufacturer's registered identification number and may be referred to as the burned-in address (BIA). It may also be known as an Ethernet hardware address (EHA), hardware address or physical address. This can be contrasted to a programmed address, where the host device issues commands to the NIC to use an arbitrary address.

A network node may have multiple NICs and each NIC must have a unique MAC address.
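The structure of a MAC address can be illustrated by pulling one apart programmatically. The helper below is a hypothetical example; the OUI slice and the multicast / locally-administered flag bits follow the IEEE 802 layout.

```python
# Sketch: extract the OUI and flag bits from a 48-bit MAC address.
def parse_mac(mac):
    octets = [int(x, 16) for x in mac.split(":")]
    assert len(octets) == 6, "a MAC address is 48 bits / 6 octets"
    return {
        "oui": mac.upper()[:8],                          # first 3 octets: manufacturer
        "multicast": bool(octets[0] & 0x01),             # I/G bit of the first octet
        "locally_administered": bool(octets[0] & 0x02),  # U/L bit of the first octet
    }

info = parse_mac("00:1a:2b:3c:4d:5e")
print(info["oui"])        # 00:1A:2B
print(info["multicast"])  # False
```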

Explain the purpose and functionality of a broadcast domain

A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer. A broadcast domain can be within the same LAN segment or it can be bridged to other LAN segments.

In terms of current popular technologies: Any computer connected to the same Ethernet repeater or switch is a member of the same broadcast domain. Further, any computer connected to the same set of inter-connected switches/repeaters is a member of the same broadcast domain. Routers and other higher-layer devices form boundaries between broadcast domains.

This is as compared to a collision domain, which would be all nodes on the same set of inter-connected repeaters, divided by switches and learning bridges. Collision domains are generally smaller than, and contained within, broadcast domains.

While some layer two network devices are able to divide the collision domains, broadcast domains are only divided by layer 3 network devices such as routers or layer 3 switches. Separating VLANs divides broadcast domains as well, but provides no means to network these without layer 3 functionality.

Explain the purpose and functionality of VLANs

A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a computer network at the data link layer (OSI Layer 2). LAN is an abbreviation of local area network.

To subdivide a network into virtual LANs, one configures a network switch or router. Simpler network devices can only partition per physical port (if at all), in which case each VLAN is connected with a dedicated network cable (and VLAN connectivity is limited by the number of hardware ports available). More sophisticated devices can mark packets through tagging, so that a single interconnect (trunk) may be used to transport data for multiple VLANs. Since VLANs share bandwidth, a VLAN trunk might use link aggregation and/or quality of service prioritization to route data efficiently.

VLANs allow network administrators to group hosts together even if the hosts are not on the same network switch. This can greatly simplify network design and deployment, because VLAN membership can be configured through software. Without VLANs, grouping hosts according to their resource needs necessitates the labour of relocating nodes and/or rewiring data links.
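The tagging used on a trunk can be illustrated by decoding an 802.1Q tag by hand. The sketch below assumes a raw Ethernet frame as bytes and the standard 0x8100 TPID; the frame contents are synthetic.

```python
import struct

# Sketch: pull the VLAN ID out of an 802.1Q-tagged Ethernet frame.
# The 4-byte tag sits after the source MAC: TPID 0x8100, then PCP/DEI/VID.
def vlan_id(frame: bytes):
    tpid, tci = struct.unpack("!HH", frame[12:16])
    if tpid != 0x8100:
        return None          # untagged frame
    return tci & 0x0FFF      # low 12 bits carry the VLAN ID

# Dummy frame: 6-byte dst MAC, 6-byte src MAC, tag for VLAN 100, EtherType.
frame = b"\xff" * 6 + b"\xaa" * 6 + struct.pack("!HH", 0x8100, 100) + b"\x08\x00"
print(vlan_id(frame))  # 100
```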

Explain the purpose and functionality of link aggregation

The term link aggregation applies to various methods of combining (aggregating) multiple network connections in parallel in order to increase throughput beyond what a single connection could sustain, and to provide redundancy in case one of the links should fail.

Further umbrella terms used to describe the method include port trunking, link bundling, Ethernet/network/NIC bonding, or NIC teaming. These umbrella terms encompass not only vendor-independent standards such as Link Aggregation Control Protocol (LACP) for Ethernet defined in IEEE 802.1AX and IEEE 802.1aq or the previous IEEE 802.3ad, but also various proprietary solutions.

More Information:

ARP on F5
MAC Address
Media Access Control
Logical Link Control
CAM Table
Broadcast Domain
Link Aggregation Wiki
Big IP Link Aggregation

Objective 1.03: Explain protocols and apply technologies specific to the network layer

The network layer provides the functional and procedural means of transferring variable length data sequences (called datagrams) from one node to another connected to the same network. It translates logical network addresses into physical machine addresses. A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver ("route") the message to the destination node. In addition to message routing, the network may (or may not) implement message delivery by splitting the message into several fragments, delivering each fragment by a separate route and reassembling the fragments, report delivery errors, etc.

Datagram delivery at the network layer is not guaranteed to be reliable.

A number of layer-management protocols belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment.

Explain the purpose and functionality of IP addressing and subnetting

The success of TCP/IP as the network protocol of the Internet is largely because of its ability to connect together networks of different sizes and systems of different types. These networks are arbitrarily defined into three main classes (along with a few others) that have predefined sizes, each of which can be divided into smaller subnetworks by system administrators. A subnet mask is used to divide an IP address into two parts. One part identifies the host (computer), the other part identifies the network to which it belongs.

Given an IP address and net mask, determine the network IP and the broadcast IP

Netmask   Addresses   Relation to a /24
/30       4           64 subnets per /24
/29       8           32 subnets per /24
/28       16          16 subnets per /24
/27       32          8 subnets per /24
/26       64          4 subnets per /24
/25       128         2 subnets per /24
/24       256         exactly one /24
/23       512         contains 2 /24s
/22       1024        contains 4 /24s
/21       2048        contains 8 /24s
/20       4096        contains 16 /24s
/19       8192        contains 32 /24s
/18       16384       contains 64 /24s
/17       32768       contains 128 /24s
/16       65536       contains 256 /24s

(The address count includes the network and broadcast addresses, so the number of usable hosts is two less.)
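The arithmetic behind the table, and the network/broadcast derivation asked for above, can be checked with Python's standard-library ipaddress module; the address used here is just an example:

```python
import ipaddress

# Given an IP address and netmask, derive the network address, broadcast
# address, and address count.
iface = ipaddress.ip_interface("192.168.10.77/26")
net = iface.network
print(net.network_address)    # 192.168.10.64
print(net.broadcast_address)  # 192.168.10.127
print(net.num_addresses)      # 64
```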

Given a routing table and a destination IP address, identify which routing table entry the destination IP address will match

In computer networking a routing table, or routing information base (RIB), is a data table stored in a router or a networked computer that lists the routes to particular network destinations, and in some cases, metrics (distances) associated with those routes. The routing table contains information about the topology of the network immediately around it. The construction of routing tables is the primary goal of routing protocols. Static routes are entries made in a routing table by non-automatic means and which are fixed rather than being the result of some network topology "discovery" procedure
Shown below is an example of the columns such a table contains on an average computer connected to the internet via a home router:

Network Destination | Netmask | Gateway | Interface | Metric

  • The Network Destination and Netmask columns together describe the network ID mentioned earlier; a destination and netmask pair can also be written as a single network ID in CIDR notation.
  • The Gateway column contains the same information as the next hop, i.e. it points to the gateway through which the network can be reached.
  • The Interface indicates which locally available interface is responsible for reaching the gateway, typically the local network card.
  • Finally, the Metric indicates the associated cost of using the indicated route. This is useful for determining the efficiency of a certain route between two points in a network. For example, it is more efficient for a computer to communicate with itself through the loopback address ("localhost") than through the IP address of its own network card.
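Matching a destination against the table follows the longest-prefix-match rule: of all entries whose network contains the destination, the most specific wins. A minimal sketch, with made-up addresses:

```python
import ipaddress

# Sketch of a routing-table lookup: among all entries whose network
# contains the destination, pick the most specific (longest prefix).
table = {
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.1",  # default route
    ipaddress.ip_network("10.0.0.0/8"):  "10.0.0.254",
    ipaddress.ip_network("10.1.2.0/24"): "10.1.2.254",
}

def lookup(dst):
    dst = ipaddress.ip_address(dst)
    matches = [net for net in table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return table[best]

print(lookup("10.1.2.9"))  # 10.1.2.254  (/24 beats /8 and /0)
print(lookup("10.9.9.9"))  # 10.0.0.254
print(lookup("8.8.8.8"))   # 192.168.1.1 (default route)
```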

Explain the purpose and functionality of Routing protocols

A routing protocol specifies how routers communicate with each other, disseminating information that enables them to select routes between any two nodes on a computer network. Routing algorithms determine the specific choice of route. Each router has a priori knowledge only of networks attached to it directly. A routing protocol shares this information first among immediate neighbors, and then throughout the network. This way, routers gain knowledge of the topology of the network.

Although there are many types of routing protocols, three major classes are in widespread use on IP networks:
  • Interior gateway protocols type 1, link-state routing protocols, such as OSPF and IS-IS
  • Interior gateway protocols type 2, distance-vector routing protocols, such as the Routing Information Protocol (RIP), RIPv2, and IGRP.
  • Exterior gateway protocols are routing protocols used on the Internet for exchanging routing information between Autonomous Systems, such as Border Gateway Protocol (BGP), Path Vector Routing Protocol.

Explain the purpose of fragmentation

In a case where a router receives a protocol data unit (PDU) larger than the next hop's MTU, it has two options if the transport is IPv4: if the Don't Fragment (DF) flag is set, drop the PDU and send an Internet Control Message Protocol (ICMP) "Fragmentation Needed" message back to the source; otherwise, fragment the IP packet and send it over the link with the smaller MTU.

If a receiving host receives a fragmented IP packet, it has to reassemble the datagram and pass it to the higher protocol layer. Reassembly is intended to happen in the receiving host but in practice it may be done by an intermediate router, for example, network address translation (NAT) may need to re-assemble fragments in order to translate data streams, description provided in RFC 2993.

IP fragmentation can cause excessive retransmissions when fragments encounter packet loss and reliable protocols such as TCP must retransmit all of the fragments in order to recover from the loss of a single fragment. Thus, senders typically use two approaches to decide the size of IP datagrams to send over the network. The first is for the sending host to send an IP datagram of size equal to the MTU of the first hop of the source destination pair. The second is to run the path MTU discovery algorithm, described in RFC 1191, to determine the path MTU between two IP hosts, so that IP fragmentation can be avoided.

NOTE: IPv6 hosts are required to determine the optimal Path MTU before sending packets; however, it is guaranteed that any IPv6 packet smaller than or equal to 1280 bytes must be deliverable without the need to use IPv6 fragmentation.

Given a fragment, identify what information is needed for reassembly

The Identification field, and Fragment offset field along with Don't Fragment and More Fragment flags in the IP protocol header are used for fragmentation and reassembly of IP datagrams.
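A toy reassembler driven by just those fields might look like the following. It handles the happy path only (no overlap, loss, or timeout handling), and the fragment tuples are an invented representation rather than real packet parsing:

```python
# Sketch of datagram reassembly. Each fragment is represented as
# (identification, offset_in_8_byte_units, more_fragments, payload).
def reassemble(fragments):
    buffers = {}
    for ident, offset, more, payload in fragments:
        buf = buffers.setdefault(ident, {"parts": {}, "total": None})
        buf["parts"][offset * 8] = payload       # offset is in 8-byte units
        if not more:                             # last fragment fixes total length
            buf["total"] = offset * 8 + len(payload)
    # Reassemble the first (and here, only) datagram.
    ident, buf = next(iter(buffers.items()))
    data = bytearray(buf["total"])
    for pos, payload in buf["parts"].items():
        data[pos:pos + len(payload)] = payload
    return bytes(data)

frags = [
    (42, 0, True,  b"A" * 8),  # first fragment, More Fragments set
    (42, 1, False, b"BB"),     # offset 1 (= 8 bytes), More Fragments clear
]
print(reassemble(frags))  # b'AAAAAAAABB'
```

The Identification field groups fragments into datagrams, the offset places each payload, and the cleared More Fragments flag on the final piece reveals the total length.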

Explain the purpose of TTL functionality

The time-to-live value can be thought of as an upper bound on the time that an IP datagram can exist in an Internet system. The TTL field is set by the sender of the datagram, and reduced by every router on the route to its destination. If the TTL field reaches zero before the datagram arrives at its destination, then the datagram is discarded and an ICMP error datagram (11 - Time Exceeded) is sent back to the sender. The purpose of the TTL field is to avoid a situation in which an undeliverable datagram keeps circulating on an Internet system, and such a system eventually becoming swamped by such "immortals".

In theory, under IPv4, time to live is measured in seconds, although every host that passes the datagram must reduce the TTL by at least one unit. In practice, the TTL field is reduced by one on every hop. To reflect this practice, the field is renamed hop limit in IPv6.

Given a packet traversing a topology, document the source/destination IP address/MAC address changes at each hop

Assuming there are no NAT, PAT, or proxies in use, the source and destination IP addresses will not change as a packet traverses a topology. The MAC addresses, however, are changed at every layer 3 device.
If the source/destination are within the same broadcast domain, the two systems will communicate directly, without changing of the MAC addresses. However, if the devices are not within the same broadcast domain, a layer 3 device must route between the networks.
  1. The sending device sends data to the MAC address of the layer 3 device
  2. Using IP addressing, the layer 3 device determines the next hop.
  3. Using its own MAC as the source, and the next hop MAC address as the target, the layer 3 device sends the data out the next interface
  4. This repeats until the next hop device is the intended target

More Information:

Routing on F5
TCP/IP Overview
IP Addressing & Subnetting
Routing Protocols
IP Packet Fragmentation
IP TTL (Time to Live)
Subnetting study guide
Routing Table

Objective 1.04: Explain the features and functionality of protocols and technologies specific to the transport layer

The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail. The transport layer also acknowledges successful data transmission and sends the next data if no errors occurred. The transport layer creates segments out of the message received from the application layer: segmentation is the process of dividing a long message into smaller messages.

Compare/Contrast purpose and functionality of MTU and MSS

The Maximum Transmission Unit (MTU) operates at Layer 3 (Network Layer) and defines how large a packet can be. Any larger than this value and the packet is fragmented, or split into two or more packets.
The Maximum Segment Size (MSS) operates at Layer 4 (Transport Layer) and is almost always 40 bytes less than the MTU: the MTU minus 20 bytes of IP header and 20 bytes of TCP header. If a segment is larger than the MSS, it is rejected and must be re-sent in smaller pieces.
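The typical relationship can be expressed as a one-line calculation (assuming the common 20-byte IPv4 header and 20-byte TCP header with no options):

```python
# MSS is the MTU minus the IP and TCP headers, in the usual no-options case.
IP_HEADER = 20
TCP_HEADER = 20

def mss_for(mtu):
    return mtu - IP_HEADER - TCP_HEADER

print(mss_for(1500))  # 1460 - standard Ethernet
print(mss_for(9000))  # 8960 - jumbo frames
```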

Explain the purpose and functionality of TCP

Connection oriented, guaranteed delivery, requires additional overhead

Explain the purpose and functionality of UDP

Not connection oriented, not guaranteed delivery, less overhead than TCP

Explain the purpose and functionality of ports in general

A port is an endpoint that is normally associated with a service. This allows an IP address to provide multiple services at the same time.

Explain how retransmissions occur

Retransmissions occur when the receiving computer alerts the sending computer to corrupt or missing data. This is normally associated with TCP and relies on guaranteed-delivery features such as sequence numbers, acknowledgements, and error checking.

Explain the purpose and process of a reset

A reset is generated whenever something "unexpected" occurs. This can be caused by corrupt packets, invalid sequence numbers, receiving data from a client that doesn't have an active connection, etc.
The reset flag is used to "reset" or abort a connection.

Describe various TCP options

The TCP header is composed of several fields and options that describe the segment and how to handle the information enclosed. The header fields include the following:

  • Source Port: 16 bits
  • Destination Port: 16 bits
  • Sequence Number: 32 bits
  • Acknowledgment Number: 32 bits
  • Data Offset: 4 bits
  • Reserved: 6 bits
  • Control Bits: 6 bits (from left to right):
    • URG:  Urgent Pointer field significant
    • ACK:  Acknowledgment field significant
    • PSH:  Push Function
    • RST:  Reset the connection
    • SYN:  Synchronize sequence numbers
    • FIN:  No more data from sender
  • Window: 16 bits
  • Checksum: 16 bits
  • Urgent Pointer: 16 bits
  • Options: variable
    • End of Option List
    • No-Operation
    • Maximum Segment Size
  • Padding: variable
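The fixed 20-byte portion of the header can be unpacked directly with Python's struct module. The segment below is synthetic, built just for the example:

```python
import struct

# Sketch: unpack the fixed 20-byte TCP header fields listed above.
def parse_tcp(header: bytes):
    src, dst, seq, ack, off_flags, window, checksum, urg = \
        struct.unpack("!HHIIHHHH", header[:20])
    flags = off_flags & 0x3F
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "data_offset": off_flags >> 12,  # header length in 32-bit words
        "syn": bool(flags & 0x02), "ack_flag": bool(flags & 0x10),
        "fin": bool(flags & 0x01), "rst": bool(flags & 0x04),
        "window": window,
    }

# A synthetic SYN segment from port 54321 to port 80.
hdr = struct.pack("!HHIIHHHH", 54321, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
seg = parse_tcp(hdr)
print(seg["dst_port"], seg["syn"], seg["window"])  # 80 True 65535
```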

Describe a TCP checksum error

As part of the guaranteed delivery in TCP, a checksum is generated with each packet. This checksum is used to validate if the data within the packet is corrupted. In most environments, checksum errors should be very rare.

Describe how TCP addresses error correction

Similar to the checksum error above, TCP uses error detection to request retransmission of errored packets. This happens in both directions: the client requests a resend of bad packets, and the server confirms the receipt of sent packets.

Describe how the flow control process occurs

Within the TCP header there is a field called the Advertised Window Size. This window size states the maximum amount of unacknowledged data that one computer should send to the other. The window size is normally related directly to the size of the receive buffer. As long as the sending computer honors the window size, the systems will never be overwhelmed.
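A highly simplified model of a sender honoring the advertised window (real TCP tracks ACKs per segment; here the ACK handling is faked by simply emptying the window):

```python
# Toy sender: never put more unacknowledged data in flight than the
# receiver's advertised window allows.
def send_all(data, window, segment=4):
    in_flight = 0
    sent = []
    i = 0
    while i < len(data):
        if in_flight + segment <= window:
            sent.append(data[i:i + segment])  # room in the window: send
            in_flight += segment
            i += segment
        else:
            in_flight = 0  # pretend an ACK arrived and emptied the window
    return sent

print(send_all(b"0123456789ABCDEF", window=8))
# [b'0123', b'4567', b'89AB', b'CDEF']
```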

More Information:

TCP Functionality
TCP Connection Setup by Virtual Server Type
TCP Profile Settings (Tunables)
UDP Functionality
UDP Profile Settings (Tunables)
TCP Header Format

Objective 1.05: Explain the features and functionality of protocols and technologies specific to the application layer

Explain the purpose and functionality of HTTP

The Hyper Text Transfer Protocol (HTTP) operates at Layer 7 (Application) and defines a structured, text-based method for exchanging information in distributed information systems. HTTP is the basis for much of the internet and is interpreted by web browsers.

Differentiate between HTTP versions

HTTP originally started with version 0.9, whose defining characteristic was that it consisted solely of the GET method.
HTTP 1.0 has been one of the most popular HTTP implementations to date and is composed of several loosely defined standards. The main drawback was that HTTP 1.0 wasn't strictly defined, leaving leeway in how browsers and servers interpreted different commands.
HTTP 1.1 came about to improve on 1.0. The schema was well defined and therefore resulted in similarity between vendor solutions. Additionally, new status codes were provided (24 instead of the previous 16), along with additional authentication methods and better handling of TCP connections.

Interpret HTTP status codes

There are several common status codes, such as 301 Moved Permanently, 404 Not Found, and 503 Service Unavailable. These are seen frequently and are easy to get to know. The status codes are also grouped by the hundreds into similar response types:
1xx - Informational
2xx - Successful
3xx - Redirection
4xx - Client Error
5xx - Server Error
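Because the classes are keyed on the hundreds digit, mapping a code to its class is a one-liner:

```python
# Map an HTTP status code to its class, per the grouping above.
CLASSES = {1: "Informational", 2: "Successful", 3: "Redirection",
           4: "Client Error", 5: "Server Error"}

def status_class(code):
    return CLASSES[code // 100]

print(status_class(301))  # Redirection
print(status_class(404))  # Client Error
print(status_class(503))  # Server Error
```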

Determine an HTTP request method for a given use case

There are several HTTP request methods: GET, POST, HEAD, PUT, DELETE, OPTIONS, CONNECT. The first two, GET and POST, are the most commonly used. The primary difference between the two methods is how they submit information.
GET requests information by placing variables in the URL, often seen as form.html?name1=value1&name2=value2. The benefit of this method is that it is easy to see the data being transmitted, and the target location can be bookmarked for future reference. The drawback of this method, however, is the limited length of the URL and the difficulty of handling non-text data types.
POST requests information by submitting data to the server. This has several benefits including keeping the URL clean, restricting bookmarks, allowing for unlimited data size, and even obscuring the data being transmitted.

Explain the purpose and functionality of HTTP keepalives, HTTP headers, DNS, SIP, FTP

Differentiate between passive and active FTP

Active and passive FTP refer to how the FTP connections are constructed. FTP uses 2 connections: a command and a data connection. The difference between these two methods is how, and by whom, the data connection is established.
For an Active FTP session, the client starts by initiating the command connection to port 21 on the server. The server then initiates the data connection back to the client using a port greater than 1024. If the client is behind a firewall, this can result in a failed connection.
For a Passive FTP session, the client also starts by initiating the command connection to port 21 on the server. The server then responds with a port greater than 1024, which the client then uses to initiate the data connection. Because both connections are generated by the client, its position behind a firewall is not a problem.
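In a passive session, the server's reply to the PASV command encodes the data-connection endpoint as six decimal numbers, with the port split across the last two. A sketch of parsing such a reply (the address in the example is made up):

```python
import re

# Parse an FTP "227 Entering Passive Mode" reply: the six numbers are
# the four address octets plus the port split as p1, p2 (port = p1*256 + p2).
def parse_pasv(reply):
    nums = [int(n) for n in re.findall(r"\d+", reply)[-6:]]
    host = ".".join(str(n) for n in nums[:4])
    port = nums[4] * 256 + nums[5]
    return host, port

reply = "227 Entering Passive Mode (192,168,1,10,19,137)."
print(parse_pasv(reply))  # ('192.168.1.10', 5001)
```

This is the calculation the client performs before opening the data connection itself, which is why passive mode works from behind a firewall.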

Explain the purpose and functionality of SMTP

Simple Mail Transfer Protocol (SMTP) is a standard used for transferring email over the internet using TCP port 25. 

Explain the purpose and functionality of a cookie

A cookie is a small file (physical or in memory) on a client computer that is sent to a website whenever the user browses that site. This file can contain simple identifiable information such as an ID, or complex information such as credit cards and passwords.
Initially, cookies were designed to assist with maintaining user sessions and preferences when visiting sites. Recently, cookies have come under suspicion because of their use by advertisers and the ability to track users across multiple web sites.

Given a situation in which a client connects to a remote host, explain how the name resolution process occurs

Name resolution is performed using the Domain Name System (DNS). DNS is structured in a hierarchical manner, normally read from right to left. For instance, consider resolving the host blog.edgoad.com:

  1. First, your client contacts your local DNS server, or LDNS, and requests the address of the host.
  2. Next, your LDNS contacts the root DNS servers requesting the location of the COM DNS servers.
  3. Once the addresses for the COM servers have been received, your LDNS then asks the COM DNS servers for the address of the EDGOAD DNS servers.
  4. Once the addresses for the EDGOAD servers have been received, your LDNS then asks the EDGOAD DNS servers for the address of the BLOG servers. 
  5. Finally, your LDNS has fully traversed the DNS infrastructure to associate the DNS name with an IP address, which is returned to your client
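The walk above can be modeled with a toy zone tree. Everything here - the zone contents and the answer address - is invented purely to illustrate the referral-following loop:

```python
# Toy model of iterative DNS resolution: each zone's servers only know
# the next label down, so the resolver follows referrals toward the answer.
ZONES = {
    ".":           {"com.": "referral"},
    "com.":        {"edgoad.com.": "referral"},
    "edgoad.com.": {"blog.edgoad.com.": "203.0.113.7"},
}

def resolve(name):
    zone = "."
    while True:
        for entry, value in ZONES[zone].items():
            if name.endswith(entry):
                if value == "referral":
                    zone = entry      # follow the referral down one level
                    break
                return value          # authoritative answer
        else:
            return None               # no matching delegation

print(resolve("blog.edgoad.com."))  # 203.0.113.7
```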

Explain the purpose and functionality of a URL

A URL is used to specify the location of a resource on a network, and the method of retrieving it. A URL is composed of the following 5 items.
  1. The protocol / scheme to use (http, https, ftp, etc...)
  2. A colon and 2 forward slashes
  3. A host name or IP address
  4. Optionally, a colon followed by port number
  5. The full path of the resource
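Python's urllib.parse splits a URL into exactly these pieces; the URL below is a made-up example:

```python
from urllib.parse import urlparse

# Decompose a URL into the five items listed above.
url = "https://www.example.com:8443/2015/02/some-post.html"
parts = urlparse(url)
print(parts.scheme)    # https
print(parts.hostname)  # www.example.com
print(parts.port)      # 8443
print(parts.path)      # /2015/02/some-post.html
```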

  • The protocol is: HTTP
  • The host name is:
  • The full path of the resource is: /2015/02/configuring-vnc-to-auto-start-rhel-62.html

More Information:
HTTP Functionality
HTTP Status Codes
HTTP Headers
F5 HTTP White Paper
DNS Functionality
DNS Record Types
SIP Functionality
F5 SIP White Paper
FTP Functionality
SMTP Functionality
HTTP Cookies
My Name is URL