Friday, August 23, 2013

Objective 5.02 Describe the purpose of the various types of advanced acceleration techniques.

Describe the purpose of TCP optimization

TCP tuning techniques adjust the network congestion avoidance parameters of TCP connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. For enterprises delivering Internet and extranet applications, TCP/IP inefficiencies, coupled with the effects of WAN latency and packet loss, all conspire to adversely affect application performance. These inefficiencies inflate response times for applications and significantly reduce bandwidth utilization efficiency (the ability to “fill the pipe”).
F5’s BIG-IP® Local Traffic Manager provides a state-of-the-art TCP/IP stack that delivers dramatic WAN and LAN application performance improvements for real-world networks. These advantages cannot be seen in typical packet-blasting test harnesses; rather, they are designed to deal with real-world client and Internet conditions.
This highly optimized TCP/IP stack, called TCP Express, combines cutting-edge TCP/IP techniques and improvements in the latest RFCs with numerous improvements and extensions developed by F5 to minimize the effects of congestion and packet loss and to speed recovery. Independent testing tools and customer experiences have shown TCP Express delivers up to a 2x performance gain for end users and a 4x improvement in bandwidth efficiency with no change to servers, applications, or client desktops.
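To make the tuning knobs concrete, here's a minimal Python sketch that adjusts a few OS-level TCP parameters on a single socket. It only illustrates the kind of parameters a tuned stack adjusts (it is not TCP Express); the buffer sizes and host are hypothetical.

```python
import socket

# Hypothetical sizing: buffers should approximate the path's
# bandwidth-delay product (bandwidth x RTT) to "fill the pipe".
RCV_BUF = 4 * 1024 * 1024
SND_BUF = 4 * 1024 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Enlarge kernel buffers so the TCP window can grow past the defaults.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCV_BUF)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SND_BUF)
# Disable Nagle's algorithm so small writes aren't delayed behind ACKs.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("example.com", 80))
```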

TCP Express White Paper

Describe the purpose of HTTP keep-alives

A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent this link from being broken. The Hypertext Transfer Protocol supports explicit means for maintaining an active connection between client and server. HTTP persistent connection, also called HTTP keep-alive, or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair.
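Here's a short Python sketch of connection reuse with the standard library; the host and paths are placeholders. Three requests share one TCP connection instead of paying a handshake per request.

```python
import http.client

# HTTP/1.1 keeps the connection open by default unless either side
# sends "Connection: close", so one socket serves several requests.
conn = http.client.HTTPConnection("example.com")

for path in ("/", "/about", "/contact"):   # hypothetical paths
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()        # drain the body so the socket can be reused
    print(path, resp.status)

conn.close()
```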

Describe the purpose of Caching

Caching is the local storage of network data for re-use, to cut down on transfer time for future requests. With Web pages, static caching simply serves objects -- typically images, JavaScript, stylesheets -- as long as they haven't passed their expiration date. But static caching can generally only be used for about 30 percent of HTTP requests, and that does not typically include high-value dynamic data.
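As a sketch of that static model, here is the freshness check a cache performs before serving a stored object without contacting the origin; the header values are hypothetical.

```python
import time
from email.utils import parsedate_to_datetime

def is_fresh(headers: dict, fetched_at: float) -> bool:
    """True if a cached object may be served without revalidation."""
    for directive in headers.get("Cache-Control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
            return (time.time() - fetched_at) < max_age
    expires = headers.get("Expires")
    if expires:
        return time.time() < parsedate_to_datetime(expires).timestamp()
    return False   # no freshness info: revalidate with the origin

# An object cached 60 seconds ago with a 1-hour lifetime is still fresh.
print(is_fresh({"Cache-Control": "public, max-age=3600"}, time.time() - 60))
```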

Dynamic caching completely changes the caching model, making it possible to cache a much broader variety of content including highly dynamic Web pages, query responses, and XML objects. Dynamic caching is a patented technology unique to F5.

The F5 BIG-IP® WebAccelerator makes dynamic caching possible by implementing two key capabilities: a sophisticated matching algorithm that links fully qualified user queries to cached content, and a cache invalidation mechanism triggered by application and user events.

Describe the purpose of compression

In computer science and information theory, data compression, source coding,[1] or bit-rate reduction involves encoding information using fewer bits than the original representation.[2] Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by identifying unnecessary information and removing it.
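A quick Python demonstration of the lossless case: redundant input shrinks dramatically, and decompression recovers every original byte.

```python
import zlib

text = b"the quick brown fox " * 100        # highly redundant input

compressed = zlib.compress(text, level=9)
restored = zlib.decompress(compressed)

assert restored == text                     # lossless: nothing lost
print(f"{len(text)} bytes -> {len(compressed)} bytes "
      f"({100 * len(compressed) / len(text):.1f}% of original)")
```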


Advanced compression increases application performance across a network. In contrast to packet-based compression, advanced compression operates at the session layer (layer 5 of the seven-layer OSI model), compressing homogenous data sets while addressing all application types. This approach generates higher system throughput and minimizes latency.
F5 BIG-IP® WAN Optimization Module combines advanced compression with a system architecture built for high performance. BIG-IP is specifically designed to address the needs of bandwidth-intensive networks.

Intelligent compression removes redundant patterns from a data stream to improve application performance. This technique is commonly used for Web applications to help reduce bandwidth needs and improve end-user response times.
The F5 BIG-IP® product family can target specific applications for compression to give the greatest possible benefit to end users. The BIG-IP system monitors TCP round-trip times to calculate user latency, allowing BIG-IP to devote more power to compressing traffic for those who need it most.

Describe the purpose of pipelining

Pipelining is a natural concept in everyday life, e.g., on an assembly line. Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels (in that order, with arbitrary interstitial steps). A car on the assembly line can have only one of the three steps done at once. After the car has its engine installed, it moves on to having its hood installed, leaving the engine installation facilities available for the next car. The first car then moves on to wheel installation, the second car to hood installation, and a third car begins to have its engine installed. If engine installation takes 20 minutes, hood installation takes 5 minutes, and wheel installation takes 10 minutes, then finishing all three cars when only one car can be assembled at once would take 105 minutes. On the other hand, using the assembly line, the total time to complete all three is 75 minutes. At this point, additional cars will come off the assembly line at 20-minute increments.

HTTP pipelining is initiated by the browser by opening a connection to the server and then sending multiple requests to the server without waiting for a response. Once the requests are all sent, the browser starts listening for responses. The reason this is considered an acceleration technique is that by shoving all the requests at the server at once you essentially save the RTT (Round Trip Time) otherwise spent waiting for a response after each request is sent.
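Here's a bare-bones Python sketch of that behavior using a raw socket; the host and paths are placeholders, and note that many servers and proxies handle pipelining poorly, which is why browsers largely keep it disabled.

```python
import socket

HOST = "example.com"   # hypothetical HTTP/1.1 server

# Two requests go out back-to-back, before any response arrives,
# saving a round trip versus send/wait/send.
requests = b"".join(
    f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode()
    for path in ("/", "/about")
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(requests)              # both requests at once
    sock.settimeout(5)
    chunks = []
    try:
        while True:
            data = sock.recv(4096)      # responses arrive in request order
            if not data:
                break
            chunks.append(data)
    except socket.timeout:
        pass

print(b"".join(chunks)[:200])
```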

Objective 5.01 Describe the purpose, advantages, use cases, and challenges associated with hardware-based application delivery platforms and virtual machines

Explain when a hardware-based application delivery platform solution is appropriate

(Explain the purpose, advantages, and challenges associated with hardware-based application delivery platform solutions)

BIG-IP 8950 & 11050 Hardware Helps Customers Meet Growing Throughput Demands

  • The new platforms support high throughput levels to meet the application delivery needs of service providers and organizations that put a premium on transactions per second, such as financial institutions. The BIG-IP 8950 platform features a throughput level of 20 Gbps, while the 11050 boasts 42 Gbps.
  • The solutions support 10 Gb Ethernet connectivity to help bandwidth-conscious customers deliver enhanced application services. The platforms provide ideal solutions for customers that have configured their data centers around 10GE or are currently planning to upgrade their infrastructure.
  • With the 8950 and 11050 platforms, customers have the ability to incorporate additional application services (acceleration, high availability, application security, etc.), as their business needs evolve. Because these capabilities can be added to the existing ADN hardware platform, F5 solutions offer both enhanced functionality and optimum performance.

Explain when a virtual machine solution is appropriate

(Explain the purpose, advantages, and challenges associated with virtual machines)

BIG-IP LTM VE Improves ADC Scalability and Simplifies Solution Deployment

  • Virtual ADCs can be rapidly deployed and scaled to support applications as resources are needed. In addition, cloud providers can leverage virtual ADCs to apply specific application policies on a per customer basis to support individual organizations’ business priorities.
  • BIG-IP LTM VE provides improved evaluation, development, integration, QA, and staging for application delivery policies and deployments. By enabling customers to deploy a virtual BIG-IP device in a testing lab, customers can conveniently test how applications and networks will respond in a production environment. This capability also enables customers to evaluate the addition of other ADC services such as SSL offloading, caching, and compression, and seamlessly transfer from testing scenarios into production.
  • BIG-IP LTM VE will be available in a full production version and a non-production lab version, as well as the previously announced trial. The full production version features variable throughput options up to 1Gbps. The lab version enables in-depth testing, and is best suited for efforts around application development, test, QA, and other non-production scenarios.
  • Unlike other virtualized application delivery offerings, BIG-IP LTM VE is part of a comprehensive application delivery architecture platform. This means that it has been designed to operate in tight integration with F5’s broad product portfolio, as well as support solutions from other leading virtualization companies such as VMware.

Explain the advantages of dedicated hardware (SSL card, compression card)


HARDWARE ACCELERATION REDUCES COSTS, INCREASES EFFICIENCY

SSL offloading relieves a Web server of the processing burden of encrypting and/or decrypting traffic sent via SSL, the security protocol that is implemented in every Web browser. The processing is offloaded to a separate device designed specifically to perform SSL acceleration or SSL termination.

SSL termination capability is particularly useful when used in conjunction with clusters of SSL VPNs, because it greatly increases the number of connections a cluster can handle.

BIG-IP® Local Traffic Manager with the SSL Acceleration Feature Module performs SSL offloading.
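Conceptually, the offloading device terminates TLS and speaks plaintext to the server behind it. Here's a heavily simplified single-connection Python sketch (addresses and certificate files are placeholders; a real device handles many concurrent connections, certificate management, and full HTTP semantics).

```python
import socket
import ssl

LISTEN = ("0.0.0.0", 8443)      # hypothetical listener
BACKEND = ("10.0.0.10", 80)     # hypothetical plaintext web server

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # offload device holds the cert

with socket.create_server(LISTEN) as listener:
    while True:
        raw, _addr = listener.accept()
        with ctx.wrap_socket(raw, server_side=True) as client:  # decrypt here
            request = client.recv(65536)
            with socket.create_connection(BACKEND) as backend:
                backend.sendall(request)          # forward as plaintext
                while True:                       # relay the response back,
                    data = backend.recv(4096)     # re-encrypting to the client
                    if not data:
                        break
                    client.sendall(data)
```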

Thursday, August 22, 2013

Objective 4.04 Describe the purpose, advantages, and use cases of IPsec and SSL VPN

Internet Protocol Security (IPsec)

IPsec is a protocol suite for securing Internet Protocol (IP) communications by authenticating and/or encrypting each IP packet of a communication session. IPsec also includes protocols for establishing mutual authentication between agents at the beginning of the session and negotiating the cryptographic keys to be used during the session.
IPsec is an end-to-end security scheme operating in the Internet Layer of the Internet Protocol Suite. It can be used in protecting data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).
Some other Internet security systems in widespread use, such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Secure Shell (SSH), operate in the upper layers of the TCP/IP model. In the past, the use of TLS/SSL had to be designed into an application to protect application protocols. In contrast, applications do not need to be specifically designed to use IPsec; hence, IPsec can protect any application traffic across an IP network.

Common IPsec VPN Issues

SSL VPN

An SSL VPN (Secure Sockets Layer virtual private network) is a form of VPN that can be used with a standard Web browser. In contrast to the traditional Internet Protocol Security (IPsec) VPN, an SSL VPN does not require the installation of specialized client software on the end user's computer. It's used to give remote users access to Web applications, client/server applications, and internal network connections.

A virtual private network (VPN) provides a secure communications mechanism for data and other information transmitted between two endpoints. An SSL VPN consists of one or more VPN devices to which the user connects by using his Web browser. The traffic between the Web browser and the SSL VPN device is encrypted with the SSL protocol or its successor, the Transport Layer Security (TLS) protocol.
An SSL VPN offers versatility, ease of use and granular control for a range of users on a variety of computers, accessing resources from many locations. There are two major types of SSL VPNs:

  • SSL Portal VPN: This type of SSL VPN allows for a single SSL connection to a Web site so the end user can securely access multiple network services. The site is called a portal because it is one door (a single page) that leads to many other resources. The remote user accesses the SSL VPN gateway using any modern Web browser, identifies himself or herself to the gateway using an authentication method supported by the gateway and is then presented with a Web page that acts as the portal to the other services.

  • SSL Tunnel VPN: This type of SSL VPN allows a Web browser to securely access multiple network services, including applications and protocols that are not Web-based, through a tunnel that is running under SSL. SSL tunnel VPNs require that the Web browser be able to handle active content, which allows them to provide functionality that is not accessible to SSL portal VPNs. Examples of active content include Java, JavaScript, ActiveX, or Flash applications or plug-ins.

Advantages and Risks of SSL VPN

Objective 4.03 Describe the purpose and advantages of authentication

Explain the role authentication plays in AAA (authentication, authorization, and accounting)

Authentication

Authentication refers to the process where an entity's identity is authenticated, typically by providing evidence that it holds a specific digital identity such as an identifier and the corresponding credentials. Examples of types of credentials are passwords, one-time tokens, digital certificates, digital signatures and phone numbers (calling/called).

Authorization

The authorization function determines whether a particular entity is authorized to perform a given activity, typically inherited from authentication when logging on to an application or service. Authorization may be determined based on a range of restrictions, for example time-of-day restrictions, physical location restrictions, or restrictions against multiple access by the same entity or user. A typical authorization in everyday computer life is, for example, granting read access to a specific file for an authenticated user. Examples of types of service include, but are not limited to: IP address filtering, address assignment, route assignment, Quality of Service/differentiated services, bandwidth control/traffic management, compulsory tunneling to a specific endpoint, and encryption.

Accounting

Accounting refers to the tracking of network resource consumption by users for the purpose of capacity and trend analysis, cost allocation, and billing. In addition, it may record events such as authentication and authorization failures, and include auditing functionality, which permits verifying the correctness of procedures carried out based on accounting data. Real-time accounting refers to accounting information that is delivered concurrently with the consumption of the resources. Batch accounting refers to accounting information that is saved until it is delivered at a later time. Typical information that is gathered in accounting is the identity of the user or other entity, the nature of the service delivered, when the service began, and when it ended, and if there is a status to report.
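The three functions fit together in a simple gatekeeping flow. A minimal Python sketch (the credential store, permission table, and action names are all hypothetical):

```python
import time

USERS = {"alice": "s3cret"}                 # hypothetical credential store
PERMISSIONS = {"alice": {"read_report"}}    # hypothetical authorization rules
AUDIT_LOG = []                              # accounting records

def authenticate(user: str, password: str) -> bool:
    """Authentication: prove the claimed identity with a credential."""
    return USERS.get(user) == password

def authorize(user: str, action: str) -> bool:
    """Authorization: is this authenticated identity allowed to do this?"""
    return action in PERMISSIONS.get(user, set())

def account(user: str, action: str, allowed: bool) -> None:
    """Accounting: record who did what, when, and whether it succeeded."""
    AUDIT_LOG.append({"user": user, "action": action,
                      "allowed": allowed, "at": time.time()})

def request(user: str, password: str, action: str) -> bool:
    allowed = authenticate(user, password) and authorize(user, action)
    account(user, action, allowed)
    return allowed

print(request("alice", "s3cret", "read_report"))  # True, and logged
print(request("alice", "s3cret", "delete_db"))    # False: authn ok, authz denied
```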

Advantages of SSO (Single Sign On)

Single sign-on (SSO) is a property of access control of multiple related, but independent software systems. With this property a user logs in once and gains access to all systems without being prompted to log in again at each of them. Conversely, single sign-off is the property whereby a single action of signing out terminates access to multiple software systems.

Benefits of using single sign-on include:
  • Reducing password fatigue from different user name and password combinations
  • Reducing time spent re-entering passwords for the same identity
  • Reducing IT costs due to lower number of IT help desk calls about passwords

Multifactor Authentication

Multi-factor authentication (also MFA, Two-factor authentication, TFA, T-FA or 2FA) is an approach to authentication which requires the presentation of two or more of the three authentication factors: a knowledge factor ("something the user knows"), a possession factor ("something the user has"), and an inherence factor ("something the user is"). After presentation, each factor must be validated by the other party for authentication to occur.
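A common possession factor is a time-based one-time password (TOTP, RFC 6238), where a shared secret plus the current time yields a short-lived code. A minimal sketch (the base32 secret is a placeholder shared at enrollment):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)  # time-step number
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # hypothetical enrollment secret
```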

Objective 4.02 Explain the purpose of the cryptographic services

Cryptographic Services

Public networks such as the Internet do not provide a means of secure communication between entities. Communication over such networks is susceptible to being read or even modified by unauthorized third parties. Cryptography helps protect data from being viewed, provides ways to detect whether data has been modified, and helps provide a secure means of communication over otherwise insecure channels. For example, data can be encrypted by using a cryptographic algorithm, transmitted in an encrypted state, and later decrypted by the intended party. If a third party intercepts the encrypted data, it will be difficult to decipher.

Signing

A digital signature is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by a known sender, such that the sender cannot deny having sent the message (authentication and non-repudiation) and that the message was not altered in transit (integrity). Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery or tampering.
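A minimal sign-and-verify sketch using the third-party Python "cryptography" package (pip install cryptography); the message and keys are throwaway placeholders.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"wire transfer: $100 to account 42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Only the holder of the private key can produce this (non-repudiation).
signature = private_key.sign(message, pss, hashes.SHA256())

# Anyone with the public key can verify; this raises InvalidSignature
# if the message was altered in transit (integrity) or signed by someone else.
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature valid")
```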

Encryption

Encryption is the process of encoding messages (or information) in such a way that eavesdroppers or hackers cannot read it, but authorized parties can.

Certificates and Certificate Chains

Private/Public Keys

Public-key encryption uses a private key that must be kept secret from unauthorized users and a public key that can be made public to anyone. The public key and the private key are mathematically linked; data that is encrypted with the public key can be decrypted only with the private key, and data that is signed with the private key can be verified only with the public key. The public key can be made available to anyone; it is used for encrypting data to be sent to the keeper of the private key. Public-key cryptographic algorithms are also known as asymmetric algorithms because one key is required to encrypt data, and another key is required to decrypt data. A basic cryptographic rule prohibits key reuse, and both keys should be unique for each communication session. However, in practice, asymmetric keys are generally long-lived.

Symmetric/Asymmetric encryption

There are two basic types of encryption schemes: symmetric-key and public-key (asymmetric) encryption. In symmetric-key schemes, the encryption and decryption keys are the same; thus, communicating parties must agree on a secret key before they communicate. In public-key schemes, the encryption key is published for anyone to use to encrypt messages, but only the receiving party has access to the decryption key and is capable of reading the encrypted messages. Public-key encryption is a relatively recent invention: historically, all encryption schemes have been symmetric-key (also called private-key) schemes.
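Both schemes side by side, again with the third-party "cryptography" package; keys are generated on the fly just for the demo.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Symmetric: one shared secret key both encrypts and decrypts.
shared_key = Fernet.generate_key()
f = Fernet(shared_key)
assert f.decrypt(f.encrypt(b"hello")) == b"hello"

# Asymmetric: anyone may encrypt with the published public key, but
# only the holder of the private key can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(b"hello", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"hello"
```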

Objective 4.01 Compare and contrast positive and negative security models

Positive Security Model

The two approaches to security most often mentioned in the context of application security, positive and negative, are diametrically opposed in all of their characteristic behaviors, but they are structured very similarly. Both positive and negative security approaches operate according to an established set of rules. Access Control Lists (ACLs) and signatures are two implementation examples of positive and negative security rules, respectively. Positive security moves away from the “blocked” end of the spectrum, following an “allow only what I know” methodology. Every rule added to a positive security model increases what is classified as known behavior, and thus allowed, and decreases what is blocked, or what is unknown. Therefore, a positive security model with nothing defined should block everything and relax (i.e., allow broader access) as the acceptable content contexts are defined.

Negative Security Model

At the opposite end of the spectrum, negative security moves towards “block what I know is bad,” meaning it denies access based on what has previously been identified as content to be blocked, running opposite to the known/allowed positive model. Every rule added to the negative security policy increases the blocking behavior, thereby decreasing both what is unknown and what is allowed as the policy is tightened. Therefore, a negative security policy with nothing defined would grant access to everything, and be tightened as exploits are discovered.

Pros and Cons

Although negative security does retain some aspect of known data, negative security knowledge comes from a list of very specific repositories of matching patterns. As data is passed through a negative security policy, it is evaluated against individual known “bad” patterns. If a known pattern is matched, the data is rejected; if the data flowing through the policy is unidentifiable, it is allowed to pass. Negative security policies do not take into account how the application works; they only notice what accesses the application and whether that access violates any negative security patterns. Discussions on preferred security methods typically spawn very polarized debates. Tried-and-true security engineers might ardently argue the merits of the positive security model because it originates from the most “secure” place: “Only allow what I know and expect.” Many business pundits would argue that the negative model is the best as it starts in the most “functional” place: “Block what I know is bad and let everything unknown through.” Both groups are correct, and yet both opinions become irrelevant when projected onto applied security, because both positive and negative security are theoretical. Applied security falls somewhere in the middle of the spectrum, providing a practical balance. At some point, as the negative approach is tightened, it will take on characteristics of a more positive model, inching towards a more complete security approach. Likewise, as a positive security model is loosened to accommodate new application behaviors, it will take on some aspects of a more negative approach, such as implementing data pattern matching, to block the more predictable attacks. As a positive policy continues to relax, it will move closer towards complete functionality. The point at which these two opposing concepts begin to overlap is where applied security starts to take shape.
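The contrast is easy to see in miniature. A toy Python sketch (the allow list and attack signatures are hypothetical): the same unknown request is blocked by the positive model and passed by the negative one.

```python
import re

# Positive model: allow only what is explicitly known good.
ALLOWED_PATHS = {"/", "/login", "/account"}          # hypothetical allow list

def positive_check(path: str) -> bool:
    return path in ALLOWED_PATHS                     # unknown -> blocked

# Negative model: block only what matches a known-bad signature.
BAD_PATTERNS = [re.compile(r"(?i)union\s+select"),   # SQL injection signature
                re.compile(r"<script")]              # XSS signature

def negative_check(path: str) -> bool:
    return not any(p.search(path) for p in BAD_PATTERNS)  # unknown -> allowed

print(positive_check("/new-feature"))                # False: unknown is blocked
print(negative_check("/new-feature"))                # True: unknown passes
print(negative_check("/q?1 UNION SELECT password"))  # False: matches a signature
```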

Objective 3.02 Differentiate between a client and server

Client–server model

The client–server model is a distributed application structure in computing that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.
The client–server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services. The model assigns one of two roles to the computers in a network: client or server. A server is a computer system that selectively shares its resources; a client is a computer or computer program that initiates contact with a server in order to make use of a resource. Data, CPUs, printers, and data storage devices are some examples of resources.
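The roles are clearest in code. A minimal Python sketch on localhost (port and payload are arbitrary): the server shares a resource, the client initiates contact to use it.

```python
import socket
import threading
import time

# Server: binds first, then selectively shares a resource (the time of day).
srv = socket.create_server(("127.0.0.1", 9000))

def serve_once() -> None:
    conn, _addr = srv.accept()            # wait for a client to initiate
    with conn:
        conn.sendall(time.ctime().encode())

threading.Thread(target=serve_once, daemon=True).start()

# Client: initiates the connection and consumes the server's resource.
with socket.create_connection(("127.0.0.1", 9000)) as client:
    print("server says:", client.recv(1024).decode())
```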

Objective 3.01 Discuss the purpose of, use cases for, and key considerations related to load balancing.

Load balancing distributes workload across multiple servers.

Successful load balancing optimizes resource use, maximizes throughput, minimizes response time, and avoids overload. Using multiple components with load balancing instead of a single component may increase reliability through redundancy.

Load Balancing Algorithms

A variety of scheduling algorithms are used by load balancers to determine which backend server to send a request to. Simple algorithms include random choice or round robin. More sophisticated load balancers may take into account additional factors, such as a server's reported load, recent response times, up/down status (determined by a monitoring poll of some kind), number of active connections, geographic location, capabilities, or how much traffic it has recently been assigned.
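Two of the simplest algorithms in a few lines of Python (the member addresses and connection counts are hypothetical; a real load balancer would also consult monitor status before choosing):

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]    # hypothetical pool members
active_connections = {"10.0.0.1": 3, "10.0.0.2": 0, "10.0.0.3": 7}

_rr = itertools.cycle(servers)

def round_robin() -> str:
    """Each member takes the next request in turn."""
    return next(_rr)

def least_connections() -> str:
    """Prefer the member currently doing the least work."""
    return min(servers, key=active_connections.get)

for _ in range(3):
    print("rr:", round_robin(), "| least-conn:", least_connections())
```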

Session Persistence

An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user's session. If this information is stored locally on one backend server, then subsequent requests going to different backend servers would not be able to find it. This might be cached information that can be recomputed, in which case load-balancing a request to a different backend server just introduces a performance issue.

One solution to the session data issue is to send all requests in a user session consistently to the same backend server. This is known as persistence or stickiness. A significant downside to this technique is its lack of automatic failover: if a backend server goes down, its per-session information becomes inaccessible, and any sessions depending on it are lost. The same problem is usually relevant to central database servers; even if web servers are "stateless" and not "sticky", the central database is (see below).

Assignment to a particular server might be based on a username, client IP address, or by random assignment. Because of changes of the client's perceived address resulting from DHCP, network address translation, and web proxies this method may be unreliable. Random assignments must be remembered by the load balancer, which creates a burden on storage. If the load balancer is replaced or fails, this information may be lost, and assignments may need to be deleted after a timeout period or during periods of high load to avoid exceeding the space available for the assignment table. The random assignment method also requires that clients maintain some state, which can be a problem, for example when a web browser has disabled storage of cookies. Sophisticated load balancers use multiple persistence techniques to avoid some of the shortcomings of any one method.

Another solution is to keep the per-session data in a database. Generally this is bad for performance since it increases the load on the database: the database is best used to store information less transient than per-session data. To prevent a database from becoming a single point of failure, and to improve scalability, the database is often replicated across multiple machines, and load balancing is used to spread the query load across those replicas. Microsoft's ASP.NET State Server technology is an example of a session database. All servers in a web farm store their session data on State Server and any server in the farm can retrieve the data.

Fortunately, there are more efficient approaches. In the very common case where the client is a web browser, per-session data can be stored in the browser itself. One technique is to use a browser cookie, suitably time-stamped and encrypted. Another is URL rewriting. Storing session data on the client is generally the preferred solution: the load balancer is then free to pick any backend server to handle a request. However, this method of state-data handling is not really suitable for some complex business logic scenarios, where the session state payload is very big or recomputing it with every request on a server is not feasible, and URL rewriting has major security issues, since the end user can easily alter the submitted URL and thus change session streams. Encrypted client-side cookies are arguably just as insecure, and unless all transmission is over HTTPS, they are very easy to copy or decrypt for man-in-the-middle attacks.
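Here's a sketch of the time-stamped cookie idea in Python. This version signs rather than encrypts (signing alone proves integrity but still exposes the member name, so production cookies are typically encrypted as well); the secret and server ID are placeholders.

```python
import hashlib
import hmac
import time

SECRET = b"load-balancer-secret"   # hypothetical key shared by all LB units

def make_cookie(server_id: str, ttl: int = 3600) -> str:
    """Time-stamped, HMAC-signed persistence cookie naming the chosen member."""
    expires = str(int(time.time()) + ttl)
    payload = f"{server_id}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def read_cookie(cookie: str):
    """Return the pinned member, or None if tampered with or expired."""
    server_id, expires, sig = cookie.rsplit("|", 2)
    payload = f"{server_id}|{expires}"
    good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good) or time.time() > int(expires):
        return None
    return server_id

cookie = make_cookie("10.0.0.2")
print(read_cookie(cookie))         # "10.0.0.2": route to the same member
```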

Objective 2.06 Explain the advantages and configurations of high availability

High availability

High availability is a system design approach and associated service implementation that ensures a prearranged level of operational performance will be met during a contractual measurement period.

Users want their systems, for example hospitals, production computers, and the electrical grid to be ready to serve them at all times. Availability refers to the ability of the user community to obtain a service or good, access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is said to be unavailable.[1] Generally, the term downtime is used to refer to periods when a system is unavailable.


Active/Active

Active/Standby

Benefits of deploying BIG-IPs in a redundant configuration.

A key to an effective, resilient, and robust network is good design. BIG-IP design is key to faster and more effective failover, leading to greater availability and lower convergence time. This post covers the relevant deployment considerations.
A BIG-IP works like a switch, with VLANs and Spanning Tree Protocol, which enables it to fit right into your LAN design. You can choose an Active/Standby (failover) pair or an Active/Active pair (or, as I like to call it, the “load balance your load balancer” pair), which doubles up on covering for each other. All this is made feasible by the concepts of “Floating IP,” “Gratuitous ARP,” and “MAC Masquerading.”

Tuesday, August 6, 2013

Objective 2.05 Explain the purpose and use cases for full proxy and packet forwarding / packet based architectures

A. Describe a full proxy architecture.

The Full-Proxy Data Center Architecture by Lori MacVittie


Why a full-proxy architecture is important to both infrastructure and data centers.
In the early days of load balancing and application delivery there was a lot of confusion about proxy-based architectures and in particular the definition of a full-proxy architecture. Understanding what a full-proxy is will be increasingly important as we continue to re-architect the data center to support a more mobile, virtualized infrastructure in the quest to realize IT as a Service.

THE FULL-PROXY PLATFORM

The reason there is a distinction made between “proxy” and “full-proxy” stems from the handling of connections as they flow through the device. All proxies sit between two entities – in the Internet age almost always “client” and “server” – and mediate connections. While all full-proxies are proxies, the converse is not true. Not all proxies are full-proxies and it is this distinction that needs to be made when making decisions that will impact the data center architecture.

A full-proxy maintains two separate session tables – one on the client-side, one on the server-side. There is effectively an “air gap” isolation layer between the two internal to the proxy, one that enables focused profiles to be applied specifically to address issues peculiar to each “side” of the proxy. Clients often experience higher latency because of lower bandwidth connections while the servers are generally low latency because they’re connected via a high-speed LAN. The optimizations and acceleration techniques used on the client side are far different than those on the LAN side because the issues that give rise to performance and availability challenges are vastly different.
 
A full-proxy, with separate connection handling on either side of the “air gap”, can address these challenges. A proxy, which may be a full-proxy but more often than not simply uses a buffer-and-stitch methodology to perform connection management, cannot optimally do so. A typical proxy buffers a connection, often through the TCP handshake process and potentially into the first few packets of application data, but then “stitches” a connection to a given server on the back-end using either layer 4 or layer 7 data, perhaps both. The connection is a single flow from end-to-end and must choose which characteristics of the connection to focus on – client or server – because it cannot simultaneously optimize for both.
The second advantage of a full-proxy is its ability to perform more tasks on the data being exchanged over the connection as it is flowing through the component. Because specific action must be taken to “match up” the connection as it's flowing through the full-proxy, the component can inspect, manipulate, and otherwise modify the data before sending it on its way on the server-side. This is what enables termination of SSL, enforcement of security policies, and performance-related services to be applied on a per-client, per-application basis.
This capability translates to broader usage in data center architecture by enabling the implementation of an application delivery tier in which operational risk can be addressed through the enforcement of various policies. In effect, we've created a full-proxy data center architecture in which the application delivery tier as a whole serves as the “full proxy” that mediates between the clients and the applications.
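In miniature, a full proxy is two sockets and two session entries per client, with logic free to run in between. A toy Python sketch (addresses are placeholders; a real full proxy also terminates and re-initiates the protocol rather than blindly relaying bytes):

```python
import socket
import threading

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until EOF; each direction runs independently."""
    while data := src.recv(4096):
        dst.sendall(data)        # inspection/rewriting could happen here
    dst.close()

listener = socket.create_server(("0.0.0.0", 8080))
while True:
    client_side, _ = listener.accept()              # connection #1: client<->proxy
    # Each side can be tuned independently, e.g. no Nagle delay toward
    # the high-latency client while the LAN side keeps defaults.
    client_side.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    server_side = socket.create_connection(("10.0.0.10", 80))  # connection #2
    # Two distinct TCP connections = two session table entries, with an
    # "air gap" between them that the proxy fully controls.
    threading.Thread(target=relay, args=(client_side, server_side),
                     daemon=True).start()
    threading.Thread(target=relay, args=(server_side, client_side),
                     daemon=True).start()
```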


THE FULL-PROXY DATA CENTER ARCHITECTURE

A full-proxy data center architecture installs a digital “air gap” between the client and applications by serving as the aggregation (and conversely disaggregation) point for services. Because all communication is funneled through virtualized applications and services at the application delivery tier, it serves as a strategic point of control at which delivery policies addressing operational risk (performance, availability, security) can be enforced.
A full-proxy data center architecture further has the advantage of isolating end users from the volatility inherent in highly virtualized and dynamic environments such as cloud computing. It enables solutions such as those used to overcome limitations with virtualization technology, such as those encountered with pod-architectural constraints in VMware View deployments. Traditional access management technologies, for example, are tightly coupled to host names and IP addresses. In a highly virtualized or cloud computing environment, this constraint may spell disaster for either performance or ability to function, or both. By implementing access management in the application delivery tier – on a full-proxy device – volatility is managed through virtualization of the resources, allowing the application delivery controller to worry about details such as IP address and VLAN segments, freeing the access management solution to concern itself with determining whether this user on this device from that location is allowed to access a given resource.
Basically, we’re taking the concept of a full-proxy and expanded it outward to the architecture. Inserting an “application delivery tier” allows for an agile, flexible architecture more supportive of the rapid changes today’s IT organizations must deal with.
Such a tier also provides an effective means to combat modern attacks. Because of its ability to isolate applications, services, and even infrastructure resources, an application delivery tier improves an organizations’ capability to withstand the onslaught of a concerted DDoS attack. The magnitude of difference between the connection capacity of an application delivery controller and most infrastructure (and all servers) gives the entire architecture a higher resiliency in the face of overwhelming connections. This ensures better availability and, when coupled with virtual infrastructure that can scale on-demand when necessary, can also maintain performance levels required by business concerns.
A full-proxy data center architecture is an invaluable asset to IT organizations in meeting the challenges of volatility both inside and outside the data center.


B. Describe a packet forwarding / packet based architecture

What is a packet-based design?
A network device with a packet-based (or packet-by-packet) design is located in the middle of a stream of communications, but is not an endpoint for those communications; it just passes the packets through. Often a device that operates on a packet-by-packet basis does have some knowledge of the protocols flowing through it, but is far from being a real protocol endpoint. The speed of these devices is primarily based on not having to understand the entire protocol stack, shortcutting the amount of work needed to handle traffic. For example, with TCP/IP, this type of device might only understand the protocols well enough to rewrite the IP addresses and TCP ports; only about half of the entire stack.
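For instance, rewriting addresses in an IPv4 header touches only a fixed offset and a checksum, with no notion of the conversation above it. A simplified Python sketch (a real device would also fix the TCP/UDP checksum, which covers a pseudo-header containing the addresses):

```python
import socket
import struct

def ipv4_checksum(header: bytes) -> int:
    """RFC 791 checksum: one's-complement sum of the header's 16-bit words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def rewrite_dst(packet: bytes, new_dst: str) -> bytes:
    """Rewrite the destination IP of one packet; the payload passes untouched."""
    ihl = (packet[0] & 0x0F) * 4                  # header length in bytes
    hdr = bytearray(packet[:ihl])
    hdr[16:20] = socket.inet_aton(new_dst)        # destination at offset 16
    hdr[10:12] = b"\x00\x00"                      # zero the old checksum
    hdr[10:12] = struct.pack("!H", ipv4_checksum(bytes(hdr)))
    return bytes(hdr) + packet[ihl:]
```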
TMOS: Redefining the Solution

Friday, August 2, 2013

Objective 2.04 Explain the purpose, use, and advantages of iControl.


A. Explain the purpose of iControl


As iRules is to network traffic, iControl is to F5 configurations. iControl is our open, web services-based API that allows complete, dynamic, programmatic control of F5 configuration objects. This means you can add, modify, or remove bits from your F5 device on the fly, automatically. Whether you're looking to add a list of virtual servers, shut down half of the 400 members in a pool for an upgrade, or track more advanced stats by writing a script to poll certain data, the uses for iControl are nearly limitless.

B. Explain the use of iControl

iControl allows an amazing level of fine-grained control and access, which has proven useful time and time again, as many of our users have come to truly rely on the API for automating management tasks in the large-scale environments in which BIG-IP is often deployed. Some even use it to design custom interfaces for particular groups of users, or to integrate directly with an existing portal to allow their applications to tie straight into F5 technology.
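For a flavor of what a script looks like, here's a minimal read-only sketch using the community "bigsuds" Python wrapper for the iControl SOAP API (pip install bigsuds); the hostname, credentials, and object names are placeholders.

```python
import bigsuds

# Connect to the BIG-IP's iControl endpoint (placeholder credentials).
b = bigsuds.BIGIP(hostname="bigip.example.com",
                  username="admin",
                  password="admin")

# Read the software version, then enumerate configured pools and members.
print(b.System.SystemInfo.get_version())
for pool in b.LocalLB.Pool.get_list():
    print(pool, b.LocalLB.Pool.get_member_v2([pool]))
```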

iControl Dev Central

Objective 2.03 Explain the purpose, use, and advantages of iApps.


A. Explain the purpose of iApps


F5 iApps is a powerful set of features in the BIG-IP system that provides a new way to architect application delivery in the data center. It gives you a holistic, application-centric view of how applications are managed and delivered inside, outside, and beyond the data center.

By managing application services rather than the individual networking components and configurations, you can dramatically speed up deployment, lower OpEx, and streamline IT operations. You can provision application services in minutes rather than weeks, significantly improving time-to-market and creating a highly efficient and predictable process for successful application delivery.

B. Explain the Advantages of iApps


iApps provides a framework that application, security, network, systems, and operations personnel can use to unify, simplify, and control the entire Application Delivery Network (ADN). You gain a contextual view and advanced statistics about the application services supporting the business. iApps abstracts the many individual components required to deliver an application by grouping these resources together in templates associated with applications. This alleviates the need to manage discrete components on the network.

Objective 2.02 Explain the purpose, use, and advantages of iRules

A. Explain the purpose of iRules



F5 iRules is a flexible, programmatic interface that makes it possible to extend and customize the functionality of the BIG-IP system. As an event-driven scripting language, iRules gives you the ability to architect application delivery solutions that improve the security, resiliency, and scale of applications in the data center.

B. Explain the Advantages of iRules


iRules provides unprecedented control to directly manipulate and manage any IP application traffic using an easy-to-learn scripting syntax. A robust and active community at F5 DevCentral provides a wealth of existing iRules that can be customized to fit your unique application requirements. With free registration on DevCentral, you have access to hundreds of proven iRules that can mitigate threats and extend the capabilities of your application delivery network.




Objective 2.01 Articulate the role of F5 products.

Explain the purpose, use, and benefits of:

APM

BIG-IP Access Policy Manager (APM) is a flexible, high-performance access and security solution that provides unified global access to your applications and network. By converging and consolidating remote access, LAN access, and wireless connections within a single management interface, and providing easy-to-manage access policies, BIG-IP APM helps you free up valuable IT resources and scale cost-effectively.

Key benefits

  • Provide unified global access 
Consolidate remote access, LAN access, and wireless connections in one interface.
  • Consolidate and simplify
Replace web access proxy tiers and integrate with OAM, XenApp, and Exchange to reduce infrastructure and management costs.
  • Centralize access control 
Gain a simplified, central point of control to manage access to applications by dynamically enforcing context-aware policies.
  • Ensure superior access and endpoint security 
Protect your organization from data loss, virus infection, and rogue device access with comprehensive endpoint capabilities.
  • Obtain flexibility, high performance, and scalability 
Support all of your users easily, quickly, and cost-effectively.

Big IP Access Policy Manager Overview
Big Ip Access Policy Manager Datasheet
Configuration Guide for BIG IP Access Policy Manager


ASM

F5 BIG-IP® Application Security Manager (ASM) is a flexible web application firewall that secures web applications in traditional, virtual, and private cloud environments. BIG-IP ASM helps secure applications against unknown vulnerabilities, and enables compliance for key regulatory mandates. BIG-IP ASM is a key part of the F5 application delivery firewall solution, which consolidates traffic management, network firewall, application access, DDoS protection, SSL inspection, and DNS security.

Key benefits

  • Ensure app security and availability
Get comprehensive geolocation attack protection from layer 7 distributed denial of service (DDoS), SQL injection, and OWASP Top Ten attacks, and secure the latest interactive AJAX applications and JSON payloads.
  • Reduce costs and enable compliance
Achieve security standards compliance with built-in application protection.
  • Get out-of-the-box app security policies
Provide protection with pre-built rapid deployment policies and minimal configuration.
  • Improve app security and performance
Enable advanced application security while accelerating performance and improving cost effectiveness.
  • Deploy flexibly and incorporate external intelligence
Focus on fast application development and flexible deployment in virtual and cloud environments while incorporating external intelligence for securing apps against IP threats.


LTM

F5® BIG-IP® Local Traffic Manager (LTM) helps you deliver your applications to your users, in a reliable, secure, and optimized way. You get the extensibility and flexibility of an intelligent services framework with the programmability you need to manage your physical, virtual, and cloud infrastructure. With BIG-IP LTM, you have the power to simplify, automate, and customize applications faster and more predictably.

Key benefits

  • Deliver applications rapidly and reliably
Ensure that your customers and users have access to the applications they need—whenever they need them.
  • Customize and automate with programmable infrastructure
Control your applications—from connection and traffic to configuration and management—with F5’s unique TMOS operating system, which includes native protocol support, an open management API, and an event-driven scripting language.
  • Transition to SDN and cloud networks
Realize operational consistency and comply with business needs across physical, virtual, and cloud environments with deployment flexibility and scalability.
  • Easily deploy and manage applications
User-defined F5 iApps templates make it easy to deploy, manage, and get complete visibility into your applications.
  • Secure your critical applications
Protect the apps that run your business with industry-leading SSL performance and visibility.


GTM

F5® BIG-IP® Global Traffic Manager™ (GTM) distributes DNS and user application requests based on business policies, data center and network conditions, user location, and application performance. BIG-IP GTM delivers F5’s high-performance DNS Services with visibility, reporting, and analysis; scales and secures DNS responses geographically to survive DDoS attacks; delivers a complete, real-time DNSSEC solution; and ensures global application high availability.

Key benefits

  • Scale DNS to more than 10 million RPS with a fully loaded chassis
BIG-IP GTM dramatically scales DNS to more than 10 million query RPS and controls DNS traffic. It ensures that users are connected to the best site, and delivers On-Demand Scaling for DNS and global apps.
  • Gain control and secure global application delivery
Route users based on business, geolocation, application, and network requirements to gain flexibility and control. Also ensure application availability and protection during DNS DDoS attacks or volume spikes.
  • Improve application performance
Send users to the site with the best application performance based on application and network conditions. 
  • Deploy flexibly, scale as you grow, and manage your network efficiently
BIG-IP GTM Virtual Edition (VE) delivers flexible global application management in virtual and cloud environments. Multiple management tools give you complete visibility and control; advanced logging, statistics and reporting; and a single point of control for your DNS and global app delivery resources.


EM

 F5® Enterprise Manager™ significantly reduces the cost and complexity of managing multiple F5 devices. You gain a single-pane view of your entire application delivery infrastructure and the tools you need to automate common tasks, ensure optimized application performance, and improve budgeting and forecasting to meet changing business needs. Enterprise Manager is available as a physical or virtual edition. 

Key benefits

  • Ensure optimized performance
Get an up-to-date, comprehensive view of application traffic and device performance. Set thresholds and alerts to react quickly to changing network conditions and user demands.
  • Reduce TCO through automation
Use a single interface to automate common operational tasks for your F5 devices, reducing total cost of ownership and OpEx.
  • Improve budgeting and forecasting
Use 160 customizable metrics to gain complete visibility into your application delivery infrastructure over time and improve planning and budgeting for future projects.
  • Troubleshoot more effectively
Quickly isolate application performance and traffic management problems to minimize the effect on your business.
  • Gain flexibility
Deploy according to your business needs with the flexibility of physical and virtual Enterprise Manager editions
Application-Delivery-Network-Platform-Management-White-Paper


WA (Web Accelerator) 

BIG-IP® WebAccelerator™ automates web performance optimization to instantly improve performance for end users and help you reduce costs. By offloading your network and servers, BIG-IP WebAccelerator decreases your spending on additional bandwidth and hardware. Users get fast access to applications, and you gain greater revenue and free up IT resources for other strategic projects.

WA Key benefits

  • Improve user experience and revenue 
Reduce frustration for employees and consumers using your site with fast apps, and pave the way for higher productivity and sales.
  • Deploy according to your business needs
Improve asymmetric deployment performance by 2x to 5x, and symmetric deployment by up to 10x. 
  • Reduce costs
Reduce the number of application servers required with SSL offload, compression offload, and caching—and save both CapEx and OpEx.
  • Optimize server and bandwidth usage
Extend server capacity and reduce bandwidth usage to improve performance and reduce costs.
  • Simplify deployment and management
Use pre-defined policies for apps such as SharePoint, SAP Portal, Oracle Portal, E-business Suite 11/12, Siebel CRM, and more to simplify configuration.
  • Boost performance of mobile apps
Apply front-end optimization techniques to overcome the unique app delivery challenges of mobile devices.
Big-Ip-Webaccelerator-Overview
Big-Ip-Webaccelerator-Data-Sheet
Configuration-Guide-for-the-BIG-IP-WebAccelerator-System
Policy-Management-Guide-for-the-BIG-IP-WebAccelerator-System


WOM (WAN Optimization)

F5® BIG-IP® WAN Optimization Module™ (WOM) compresses, deduplicates, and encrypts data between two data centers. When used in conjunction with Oracle Data Guard, BIG-IP WOM improves the performance of replication while enabling the secure transfer of data within your database management system (DBMS). The more latency, congestion, and packet loss your connection suffers from, the more BIG-IP WOM improves your replication environment.

Key features

• TCP Optimization—Cut down on the overhead inherent in TCP to speed replication
• Rate Shaping—Set limits for how much and how little bandwidth Data Guard should receive
• Compression—Send compressed versions of data over the WAN to reduce bandwidth
• Deduplication—Send each distinct bit of data only once with advanced deduplication technology

Key benefits

• Increase Performance—Improve RPOs and RTOs by reducing data replication lag
• Increase Efficiency—Maximize bandwidth utilization
• Cost Savings—Reduce WAN costs and offload CPU-intensive processes from servers
• Improve Security—Encrypt SQL transactions over the WAN

Oracle-Data-Guard-Wom-Solution-Profile

ARX
F5 ARX intelligent file virtualization simplifies how your file data is accessed, moved, and managed. Data management policies help you match the business value of your data to the cost of its storage. Smart storage tiering moves your files automatically and without disruption or downtime, while dynamic capacity balancing improves overall utilization of your existing storage resources. ARX is available in a range of hardware devices and as a virtual edition.

Key benefits
  • Reduce storage costs
Match the business value of your data to the cost of its storage; reduce costs with new technologies.
  • Optimize backups
Decrease the backup of redundant data to lower backup and recovery times, media consumption, and costs.
  • Maximize value of existing storage
Improve utilization, reclaim stranded capacity, and defer additional storage purchases.
  • Simplify management
Perform storage provisioning and decommissioning without disrupting users.
  • Improve flexibility and choice
Move your data wherever and whenever you want, even between heterogeneous devices.