Friday, August 23, 2013

Objective 5.02 Describe the purpose of the various types of advanced acceleration techniques.

Describe the purpose of TCP optimization

TCP tuning techniques adjust the network congestion avoidance parameters of TCP connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. For enterprises delivering Internet and extranet applications, TCP/IP inefficiencies, coupled with the effects of WAN latency and packet loss, all conspire to adversely affect application performance. These inefficiencies inflate response times for applications and significantly reduce bandwidth utilization efficiency (the ability to “fill the pipe”).
F5’s BIG-IP® Local Traffic Manager provides a state-of-the-art TCP/IP stack that delivers dramatic WAN and LAN application performance improvements for real-world networks. These advantages cannot be seen in typical packet-blasting test harnesses; rather, the stack is designed to deal with real-world client and Internet conditions.
This highly optimized TCP/IP stack, called TCP Express, combines cutting-edge TCP/IP techniques and improvements in the latest RFCs with numerous improvements and extensions developed by F5 to minimize the effects of congestion and packet loss and to speed recovery. Independent testing tools and customer experiences have shown TCP Express delivers up to a 2x performance gain for end users and a 4x improvement in bandwidth efficiency with no change to servers, applications, or the client desktops.

TCP Express White Paper
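
BIG-IP's TCP Express stack is proprietary, but the general idea of TCP tuning can be illustrated with ordinary per-socket knobs. A minimal sketch in Python, assuming illustrative buffer sizes for a high bandwidth-delay-product path (this is generic OS-level tuning, not F5's implementation):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Larger send/receive buffers let TCP keep more data in flight,
    # which helps "fill the pipe" when bandwidth and latency are high.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)

    # Disable Nagle's algorithm so small writes are sent immediately,
    # trading a little bandwidth efficiency for lower latency.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    sock.connect(("example.com", 80))  # example.com is a placeholder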

Describe the purpose of HTTP keep alives

A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent this link from being broken. The Hypertext Transfer Protocol supports explicit means for maintaining an active connection between client and server. HTTP persistent connection, also called HTTP keep-alive, or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair.
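
A minimal sketch of connection reuse with Python's standard http.client (the host and paths are placeholders): both requests below ride the same TCP connection, avoiding a second TCP handshake.

    import http.client

    conn = http.client.HTTPConnection("example.com")  # one TCP connection

    conn.request("GET", "/")
    resp1 = conn.getresponse()
    resp1.read()                    # drain the body before reusing the socket

    conn.request("GET", "/about")   # reuses the same connection (keep-alive)
    resp2 = conn.getresponse()
    resp2.read()

    conn.close()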

Describe the purpose of Caching

Caching is the local storage of network data for re-use, to cut down on transfer time for future requests. With Web pages, static caching simply serves objects -- typically images, JavaScript, stylesheets -- as long as they haven't passed their expiration date. But static caching can generally only be used for about 30 percent of HTTP requests, and that does not typically include high-value dynamic data.
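
A minimal sketch of the static caching idea, where an object is served from local storage until its expiration passes (fetch_from_origin is a hypothetical stand-in for the real origin fetch):

    import time

    _cache = {}  # url -> (expires_at, body)

    def get(url, ttl=300):
        entry = _cache.get(url)
        if entry and entry[0] > time.time():
            return entry[1]                       # hit: still fresh, no transfer
        body = fetch_from_origin(url)             # hypothetical origin fetch
        _cache[url] = (time.time() + ttl, body)
        return body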

Dynamic caching completely changes the caching model, making it possible to cache a much broader variety of content including highly dynamic Web pages, query responses, and XML objects. Dynamic caching is a patented technology unique to F5.

The F5 BIG-IP® WebAccelerator makes dynamic caching possible by implementing two key capabilities: a sophisticated matching algorithm that links fully qualified user queries to cached content, and a cache invalidation mechanism triggered by application and user events.

Describe the purpose of compression

In computer science and information theory, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost in lossless compression. Lossy compression reduces bits by identifying unnecessary information and removing it.
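
A minimal sketch of lossless compression using gzip from Python's standard library; the repeated input is illustrative, but it shows how statistical redundancy shrinks the data and how the original bytes are recovered exactly:

    import gzip

    original = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 100
    compressed = gzip.compress(original)

    assert gzip.decompress(compressed) == original   # lossless: nothing lost
    print(len(original), "->", len(compressed), "bytes")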


Advanced compression increases application performance across a network. In contrast to packet-based compression, advanced compression operates at the session layer (layer 5 of the seven-layer OSI model), compressing homogeneous data sets while addressing all application types. This approach generates higher system throughput and minimizes latency.
F5 BIG-IP® WAN Optimization Module combines advanced compression with a system architecture built for high performance. BIG-IP is specifically designed to address the needs of bandwidth-intensive networks.

Intelligent compression removes redundant patterns from a data stream to improve application performance. This technique is commonly used for Web applications to help reduce bandwidth needs and improve end-user response times.
The F5 BIG-IP® product family can target specific applications for compression to give the greatest possible benefit to end users. The BIG-IP system monitors TCP round-trip times to calculate user latency, allowing BIG-IP to devote more power to compressing traffic for those who need it most.

Describe the purpose of pipelining

Pipelining is a natural concept in everyday life, e.g., on an assembly line. Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels (in that order, with arbitrary interstitial steps). A car on the assembly line can have only one of the three steps done at once. After the car has its engine installed, it moves on to having its hood installed, leaving the engine installation facilities available for the next car. The first car then moves on to wheel installation, the second car to hood installation, and a third car begins to have its engine installed. If engine installation takes 20 minutes, hood installation takes 5 minutes, and wheel installation takes 10 minutes, then finishing all three cars when only one car can be assembled at a time would take 105 minutes. Using the assembly line instead, the total time to complete all three is 75 minutes. At that point, additional cars come off the assembly line at 20-minute increments.

HTTP pipelining is initiated by the browser by opening a connection to the server and then sending multiple requests to the server without waiting for a response. Once the requests are all sent, the browser starts listening for responses. The reason this is considered an acceleration technique is that by sending all the requests to the server at once, you save the RTT (round-trip time) that would otherwise be spent waiting for a response after each request.
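
A minimal wire-level sketch of that behavior over a raw socket (the host is a placeholder; note that mainstream browsers ultimately shipped with pipelining disabled, so this only illustrates the concept):

    import socket

    sock = socket.create_connection(("example.com", 80))

    # Send both requests back to back, without waiting for a response.
    sock.sendall(
        b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /about HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    )

    # Only now start reading; the two responses arrive in request order.
    responses = b""
    while chunk := sock.recv(4096):
        responses += chunk
    sock.close()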

Objective 5.01 Describe the purpose, advantages, use cases, and challenges associated with hardware-based application delivery platforms and virtual machines

Explain when a hardware-based application delivery platform solution is appropriate

(Explain the purpose, advantages, and challenges associated with hardware-based application delivery platform solutions)

BIG-IP 8950 & 11050 Hardware Helps Customers Meet Growing Throughput Demands

  • The new platforms support high throughput levels to meet the application delivery needs of service providers and organizations that put a premium on transactions per second, such as financial institutions. The BIG-IP 8950 platform features a throughput level of 20 Gbps, while the 11050 boasts 42 Gbps.
  • The solutions support 10 Gb Ethernet connectivity to help bandwidth-conscious customers deliver enhanced application services. The platforms provide ideal solutions for customers that have configured their data centers around 10GE or are currently planning to upgrade their infrastructure.
  • With the 8950 and 11050 platforms, customers have the ability to incorporate additional application services (acceleration, high availability, application security, etc.), as their business needs evolve. Because these capabilities can be added to the existing ADN hardware platform, F5 solutions offer both enhanced functionality and optimum performance.

Explain when a virtual machine solution is appropriate

(Explain the purpose, advantages, and challenges associated with virtual machines)

BIG-IP LTM VE Improves ADC Scalability and Simplifies Solution Deployment

  • Virtual ADCs can be rapidly deployed and scaled to support applications as resources are needed. In addition, cloud providers can leverage virtual ADCs to apply specific application policies on a per customer basis to support individual organizations’ business priorities.
  • BIG-IP LTM VE provides improved evaluation, development, integration, QA, and staging for application delivery policies and deployments. By enabling customers to deploy a virtual BIG-IP device in a testing lab, customers can conveniently test how applications and networks will respond in a production environment. This capability also enables customers to evaluate the addition of other ADC services such as SSL offloading, caching, and compression, and seamlessly transfer from testing scenarios into production.
  • BIG-IP LTM VE will be available in a full production version and a non-production lab version, as well as the previously announced trial. The full production version features variable throughput options up to 1Gbps. The lab version enables in-depth testing, and is best suited for efforts around application development, test, QA, and other non-production scenarios.
  • Unlike other virtualized application delivery offerings, BIG-IP LTM VE is part of a comprehensive application delivery architecture platform. This means that it has been designed to operate in tight integration with F5’s broad product portfolio, as well as support solutions from other leading virtualization companies such as VMware.

Explain the advantages of dedicated hardware (SSL card, compression card)


HARDWARE ACCELERATION REDUCES COSTS, INCREASES EFFICIENCY

SSL offloading relieves a Web server of the processing burden of encrypting and/or decrypting traffic sent via SSL, the security protocol that is implemented in every Web browser. The processing is offloaded to a separate device designed specifically to perform SSL acceleration or SSL termination.

SSL termination capability is particularly useful when used in conjunction with clusters of SSL VPNs, because it greatly increases the number of connections a cluster can handle.

BIG-IP® Local Traffic Manager with the SSL Acceleration Feature Module performs SSL offloading.
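
A minimal sketch of the termination step using Python's ssl module (the certificate paths and backend address are illustrative assumptions, and real offload devices perform the cryptography in dedicated hardware):

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")   # illustrative cert/key paths

    listener = socket.create_server(("0.0.0.0", 443))
    with ctx.wrap_socket(listener, server_side=True) as tls:
        client, _ = tls.accept()                      # TLS handshake happens here
        request = client.recv(4096)                   # already decrypted

        backend = socket.create_connection(("10.0.0.10", 80))  # illustrative backend
        backend.sendall(request)                      # plaintext to the Web server
        client.sendall(backend.recv(4096))            # re-encrypted to the client

The point of the design is visible in the last three lines: the Web server only ever sees plaintext, so it spends no cycles on encryption.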

Thursday, August 22, 2013

Objective 4.04 Describe the purpose, advantages, and use cases of IPsec and SSL VPN

Internet Protocol Security (IPsec)

IPsec is a protocol suite for securing Internet Protocol (IP) communications by authenticating and/or encrypting each IP packet of a communication session. It also includes protocols for establishing mutual authentication between agents at the beginning of the session and for negotiating the cryptographic keys to be used during the session.
IPsec is an end-to-end security scheme operating in the Internet Layer of the Internet Protocol Suite. It can be used in protecting data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).
Some other Internet security systems in widespread use, such as Secure Sockets Layer (SSL), Transport Layer Security (TLS) and Secure Shell (SSH), operate in the upper layers of the TCP/IP model. TLS/SSL must be designed into an application to protect the application's protocols. In contrast, applications do not need to be specifically designed to use IPsec; it can protect any application traffic across an IP network.

Common IPsec VPN Issues

SSL VPN

An SSL VPN (Secure Sockets Layer virtual private network) is a form of VPN that can be used with a standard Web browser. In contrast to the traditional Internet Protocol Security (IPsec) VPN, an SSL VPN does not require the installation of specialized client software on the end user's computer. It is used to give remote users access to Web applications, client/server applications, and internal network connections.

A virtual private network (VPN) provides a secure communications mechanism for data and other information transmitted between two endpoints. An SSL VPN consists of one or more VPN devices to which the user connects by using his Web browser. The traffic between the Web browser and the SSL VPN device is encrypted with the SSL protocol or its successor, the Transport Layer Security (TLS) protocol.
An SSL VPN offers versatility, ease of use and granular control for a range of users on a variety of computers, accessing resources from many locations. There are two major types of SSL VPNs:

  • SSL Portal VPN: This type of SSL VPN allows for a single SSL connection to a Web site so the end user can securely access multiple network services. The site is called a portal because it is one door (a single page) that leads to many other resources. The remote user accesses the SSL VPN gateway using any modern Web browser, identifies himself or herself to the gateway using an authentication method supported by the gateway and is then presented with a Web page that acts as the portal to the other services.

  • SSL Tunnel VPN: This type of SSL VPN allows a Web browser to securely access multiple network services, including applications and protocols that are not Web-based, through a tunnel that is running under SSL. SSL tunnel VPNs require that the Web browser be able to handle active content, which allows them to provide functionality that is not accessible to SSL portal VPNs. Examples of active content include Java, JavaScript, ActiveX, or Flash applications and plug-ins.

Advantages and Risks of SSL VPN

Objective 4.03 Describe the purpose and advantages of authentication

Explain the role authentication plays in AAA (authentication, authorization, and accounting)

Authentication

Authentication refers to the process where an entity's identity is authenticated, typically by providing evidence that it holds a specific digital identity such as an identifier and the corresponding credentials. Examples of types of credentials are passwords, one-time tokens, digital certificates, digital signatures and phone numbers (calling/called).
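
A minimal sketch of credential validation using only Python's standard library; the stored salt and hash would normally come from a user database (here they are generated on the spot for illustration):

    import hashlib, hmac, os

    def hash_password(password, salt):
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    salt = os.urandom(16)                                    # illustrative record
    stored_hash = hash_password("correct horse battery staple", salt)

    def authenticate(password):
        candidate = hash_password(password, salt)
        return hmac.compare_digest(candidate, stored_hash)   # constant-time compare

    print(authenticate("wrong password"))   # False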

Authorization

The authorization function determines whether a particular entity is authorized to perform a given activity, typically inherited from authentication when logging on to an application or service. Authorization may be determined based on a range of restrictions, for example time-of-day restrictions, physical location restrictions, or restrictions against multiple access by the same entity or user. A typical authorization in everyday computer life is, for example, granting an authenticated user read access to a specific file. Examples of types of service include, but are not limited to: IP address filtering, address assignment, route assignment, Quality of Service/differentiated services, bandwidth control/traffic management, compulsory tunneling to a specific endpoint, and encryption.
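
A minimal sketch of an authorization check layered on top of an authenticated identity (the permission table and business-hours restriction are illustrative):

    from datetime import datetime

    PERMISSIONS = {("alice", "report.txt"): {"read"}}   # illustrative table

    def authorize(user, resource, action):
        if not 8 <= datetime.now().hour < 18:           # time-of-day restriction
            return False
        return action in PERMISSIONS.get((user, resource), set())

    print(authorize("alice", "report.txt", "read"))     # True during work hours
    print(authorize("alice", "report.txt", "write"))    # False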

Accounting

Accounting refers to the tracking of network resource consumption by users for the purposes of capacity and trend analysis, cost allocation, and billing. In addition, it may record events such as authentication and authorization failures, and include auditing functionality, which permits verifying the correctness of procedures carried out based on accounting data. Real-time accounting refers to accounting information that is delivered concurrently with the consumption of the resources. Batch accounting refers to accounting information that is saved until it is delivered at a later time. Typical information gathered in accounting includes the identity of the user or other entity, the nature of the service delivered, when the service began and when it ended, and any status to report.
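
A minimal sketch of batch-style accounting: each record captures who used which service, when it began and ended, and a status (the log file name and fields are illustrative):

    import json, time

    def account(user, service, started, ended, status):
        record = {"user": user, "service": service,
                  "start": started, "end": ended, "status": status}
        with open("accounting.log", "a") as log:   # saved for later = batch
            log.write(json.dumps(record) + "\n")

    start = time.time()
    # ... service is delivered here ...
    account("alice", "vpn", start, time.time(), "ok")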

Advantages of SSO (Single Sign On)

Single sign-on (SSO) is a property of access control of multiple related, but independent, software systems. With this property a user logs in once and gains access to all systems without being prompted to log in again at each of them. Conversely, single sign-off is the property whereby a single action of signing out terminates access to multiple software systems. A minimal token sketch follows the benefits list below.

Benefits of using single sign-on include:
  • Reducing password fatigue from different user name and password combinations
  • Reducing time spent re-entering passwords for the same identity
  • Reducing IT costs due to a lower number of IT help desk calls about passwords
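
A minimal sketch of one SSO building block: a signed token that cooperating services verify with a shared secret, so the user logs in only once. The secret and claims are illustrative; production SSO uses standards such as SAML or OpenID Connect.

    import base64, hashlib, hmac, json

    SECRET = b"shared-sso-secret"        # illustrative; known to every system

    def issue_token(user):
        payload = base64.urlsafe_b64encode(json.dumps({"user": user}).encode())
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return payload.decode() + "." + sig

    def verify_token(token):
        payload, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

    token = issue_token("alice")         # issued once at login
    print(verify_token(token))           # any cooperating system verifies: True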

Multifactor Authentication

Multi-factor authentication (also MFA; two-factor authentication, TFA, T-FA, or 2FA) is an approach to authentication which requires the presentation of two or more of the three authentication factors: a knowledge factor ("something the user knows"), a possession factor ("something the user has"), and an inherence factor ("something the user is"). After presentation, each factor must be validated by the other party for authentication to occur.
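
A minimal sketch of a possession factor: the time-based one-time password (TOTP, RFC 6238) used by authenticator apps, computed from a secret shared between the device and the server (the secret below is an illustrative value):

    import hashlib, hmac, struct, time

    def totp(secret, interval=30, digits=6):
        counter = int(time.time()) // interval          # moving time window
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Server and device derive the same 6-digit code for the current window.
    print(totp(b"shared-secret"))                       # illustrative secret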

Objective 4.02 Explain the purpose of the cryptographic services

Cryptographic Services

Public networks such as the Internet do not provide a means of secure communication between entities. Communication over such networks is susceptible to being read or even modified by unauthorized third parties. Cryptography helps protect data from being viewed, provides ways to detect whether data has been modified, and helps provide a secure means of communication over otherwise insecure channels. For example, data can be encrypted by using a cryptographic algorithm, transmitted in an encrypted state, and later decrypted by the intended party. If a third party intercepts the encrypted data, it will be difficult to decipher.

Signing

A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by a known sender, such that the sender cannot deny having sent the message (authentication and non-repudiation) and that the message was not altered in transit (integrity). Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery or tampering.
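
A minimal sketch of signing and verifying, assuming the third-party cryptography package (pip install cryptography): the private key signs, and anyone holding the public key can detect tampering.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"wire $100 to account 42"                # illustrative message
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # verify() raises InvalidSignature if message or signature was altered.
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")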

Encryption

Encryption is the process of encoding messages (or information) in such a way that eavesdroppers or hackers cannot read them, but authorized parties can.

Certificates and Certificate Chains

Private/Public Keys

Public-key encryption uses a private key that must be kept secret from unauthorized users and a public key that can be made public to anyone. The public key and the private key are mathematically linked; data that is encrypted with the public key can be decrypted only with the private key, and data that is signed with the private key can be verified only with the public key. The public key can be made available to anyone; it is used for encrypting data to be sent to the keeper of the private key. Public-key cryptographic algorithms are also known as asymmetric algorithms because one key is required to encrypt data, and another key is required to decrypt data. A basic cryptographic rule prohibits key reuse, and both keys should be unique for each communication session. However, in practice, asymmetric keys are generally long-lived.

Symmetric/Asymmetric encryption

There are two basic types of encryption schemes: symmetric-key and public-key (asymmetric) encryption. In symmetric-key schemes, the encryption and decryption keys are the same, so the communicating parties must agree on a secret key before they communicate. In public-key schemes, the encryption key is published for anyone to use to encrypt messages, but only the receiving party has access to the decryption key and can read the encrypted messages. Public-key encryption is a relatively recent invention; historically, all encryption schemes have been symmetric-key (also called private-key) schemes.
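
A minimal sketch contrasting the two schemes, again assuming the third-party cryptography package: one shared Fernet key both encrypts and decrypts, while with RSA the public key encrypts and only the private key can decrypt.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Symmetric: the same secret key on both sides.
    f = Fernet(Fernet.generate_key())
    assert f.decrypt(f.encrypt(b"hello")) == b"hello"

    # Asymmetric: encrypt with the public key, decrypt with the private key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = private_key.public_key().encrypt(b"hello", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"hello"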

Objective 4.01 Compare and contrast positive and negative security models

Positive Security Model

The two approaches to security most often mentioned in the context of application security—positive and negative—are diametrically opposed in all of their characteristic behaviors, but they are structured very similarly. Both positive and negative security approaches operate according to an established set of rules. Access Control Lists (ACLs) and signatures are two implementation examples of positive and negative security rules, respectively. Positive security moves away from the “blocked” end of the spectrum, following an “allow only what I know” methodology. Every rule added to a positive security model increases what is classified as known behavior, and thus allowed, and decreases what is blocked, or unknown. Therefore, a positive security model with nothing defined should block everything and relax (i.e., allow broader access) as the acceptable content contexts are defined.

Negative Security Model

At the opposite end of the spectrum, negative security moves toward “block what I know is bad,” meaning it denies access based on what has previously been identified as content to be blocked, running opposite to the known/allowed positive model. Every rule added to the negative security policy increases the blocking behavior, thereby decreasing both what is unknown and what is allowed as the policy is tightened. Therefore, a negative security policy with nothing defined would grant access to everything, and be tightened as exploits are discovered.
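
A minimal sketch of both models as request filters (the rule sets are illustrative; real deployments use ACLs and attack signatures). Note how the same unknown request is denied by the positive model but passed by the negative one.

    ALLOWED_PATHS = {"/", "/login", "/account"}      # positive: allow known good
    def positive_filter(path):
        return path in ALLOWED_PATHS                 # everything else is blocked

    BAD_PATTERNS = ("../", "<script", "' OR 1=1")    # negative: block known bad
    def negative_filter(path):
        return not any(p in path for p in BAD_PATTERNS)  # unknown passes through

    print(positive_filter("/new-feature"))   # False: unknown, so denied
    print(negative_filter("/new-feature"))   # True: unknown, so allowed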

Pros and Cons

Although negative security does retain some aspect of known data, negative security knowledge comes from a list of very specific repositories of matching patterns. As data is passed through a negative security policy, it is evaluated against individual known “bad” patterns. If a known pattern is matched, the data is rejected; if the data flowing through the policy is unidentifiable, it is allowed to pass. Negative security policies do not take into account how the application works; they only notice what accesses the application and whether that access violates any negative security patterns.

Discussions of preferred security methods typically spawn very polarized debates. Tried-and-true security engineers might ardently argue the merits of the positive security model because it originates from the most “secure” place—“Only allow what I know and expect.” Many business pundits would argue that the negative model is best because it starts in the most “functional” place—“Block what I know is bad and let everything unknown through.” Both groups are correct, and yet both opinions become irrelevant when projected onto applied security, because both positive and negative security are theoretical. Applied security falls somewhere in the middle of the spectrum, providing a practical balance. At some point, as the negative approach is tightened, it will take on characteristics of a more positive model, inching toward a more complete security approach. Likewise, as a positive security model is loosened to accommodate new application behaviors, it will take on some aspects of a more negative approach, such as implementing data pattern matching, to block the more predictable attacks. As a positive policy continues to relax, it will move closer toward complete functionality. The point at which these two opposing concepts begin to overlap is where applied security starts to take shape.

Objective 3.02 Differentiate between a client and server

Client–server model

The client–server model is a distributed application structure in computing that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.
The client–server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services. The model assigns one of two roles to the computers in a network: client or server. A server is a computer system that selectively shares its resources; a client is a computer or computer program that initiates contact with a server in order to make use of a resource. Data, CPUs, printers, and data storage devices are some examples of resources.
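
A minimal sketch of the model with standard-library sockets: the server selectively shares a resource (here, the time of day) and the client initiates contact to use it (the address and port are illustrative):

    import socket, threading, time

    def server():
        with socket.create_server(("127.0.0.1", 9000)) as srv:
            conn, _ = srv.accept()                 # wait for a client request
            conn.sendall(time.ctime().encode())   # share the resource
            conn.close()

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                                # let the server start listening

    with socket.create_connection(("127.0.0.1", 9000)) as client:
        print(client.recv(1024).decode())          # client consumes the service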