MPLS-Enabled Applications: Emerging Developments and New Technologies


Monday, April 15, 2019





MPLS is now considered the networking technology for carrying all types of network traffic, including voice telephony, real-time video, and data traffic. The Third Edition adds new chapters, new illustrations and expanded coverage, guiding the reader from the basics of the technology through all its major VPN applications. I consistently recommend this book to colleagues in the engineering, education and business community. Julian Lucek has worked with the Photonics Research Department at British Telecom, where he co-built the world's first all-optical regenerator before moving into the IP field to evaluate new routing platforms. He is the author of many published papers in the field of communications technology and holds several patents in that area.

But how does traffic actually get mapped to the LSP? One option is to configure a static route at the head end that points at the LSP. However, the fact that the route must be manually configured to use the LSP is both restrictive and unscalable from an operational point of view, thus limiting widespread use.

To reap the benefits of the traffic-engineered paths, it is necessary for the routing protocols to become aware of the LSPs. The metric can be the same as that of the underlying IP path or it can be configured to a different value to influence the routing decision.

Different routing protocols have different properties and therefore their use of the LSP is different. Forwarding for transit traffic is done based on MPLS labels. Thus, none of the routers except the ASBRs need to have knowledge of the destinations outside the AS, and the routers in the core of the network are not required to run BGP. The use of an LSP allows tight control over the path that transit traffic takes inside the domain. For example, it is possible to ensure that transit traffic is forwarded over dedicated links, making it easier to enforce service-level agreements (SLAs) between providers.
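As a toy sketch of why the core needs no BGP state, consider per-router label-swapping tables alone carrying a transit packet across the domain. The router names and label values below are purely hypothetical:

```python
# Hypothetical per-router MPLS forwarding tables: in-label -> (out-label, next hop).
# Only the ASBRs hold routes for external destinations; P1 and P2 just swap labels.
lfib = {
    "P1": {100: (200, "P2")},
    "P2": {200: (300, "ASBR2")},
}

def forward(router, in_label):
    """Label-swap at one LSR: no IP lookup and no BGP state is consulted."""
    out_label, next_hop = lfib[router][in_label]
    return out_label, next_hop

# ASBR1 pushes label 100 and hands the packet to P1; it is label-switched to ASBR2.
label, hop = forward("P1", 100)
label, hop = forward(hop, label)
print(label, hop)  # 300 ASBR2
```

The point of the sketch is that the transit routers' tables are keyed on labels only; the external destination prefix never appears in them.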

Therefore, even when traffic engineering is applied to only a portion of the network, label-switched paths are taken into account when computing paths across the entire network. This is a very important property from a scalability point of view, as will be seen in Section 2. In the context of IGPs, there are two distinct behaviors: (a) use the LSP in the shortest-path computation on the router that is the head end of the LSP, and (b) advertise the LSP in the link-state advertisements so that other routers can also take it into account in their SPF (shortest path first) computations. There is often a lot of confusion about why two different behaviors are needed and how they differ.

This confusion is not helped by the fact that the two behaviors are individually configurable and that vendors use nonintuitive names for the two features. To illustrate the difference between the two, refer to Figure 2. Traffic is forwarded towards destination W from two sources, E and A.

The goal is to forward the traffic along the shortest path. The concept, however, is very simple. Relying on LSP information distributed by other nodes can sometimes cause surprising behavior.

Let us continue the example above with a slight modification: because E advertises the LSP in its link-state advertisements, the node F also receives this advertisement. What happens is that the traffic from F is forwarded to E and then right back to F, only to follow the same links as the pure IGP path. Regardless of whether the protocol used is BGP or one of the IGPs, when several LSPs are available to the same destination, most vendors allow the user the flexibility to pick one of several LSPs for forwarding, based on various local policies.

One such policy can use the class-of-service classification of the incoming IP traffic for picking the LSP. For example, best-effort traffic is mapped to one LSP, while expedited forwarding traffic is mapped to another. By manipulating the properties of these LSPs, the operator can provide more guarantees to the more important traffic. To summarize, the ability of the routing protocols to make use of the traffic-engineered paths set up in the network enables control over the path that transit traffic takes in a domain and allows deployment of MPLS-TE in just parts of the network.
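The mapping policy can be sketched as a simple lookup at the ingress. The class names, DSCP values and LSP names here are hypothetical illustrations, not any vendor's configuration:

```python
# Sketch of class-of-service-based LSP selection at the ingress router.

def dscp_to_class(dscp):
    """Map a DSCP value to a forwarding class (EF = expedited forwarding)."""
    return "EF" if dscp == 46 else "BE"

# Local policy: each class is mapped to its own traffic-engineered LSP.
class_to_lsp = {"EF": "lsp-low-delay", "BE": "lsp-best-effort"}

def select_lsp(dscp):
    return class_to_lsp[dscp_to_class(dscp)]

assert select_lsp(46) == "lsp-low-delay"   # voice rides the low-delay LSP
assert select_lsp(0) == "lsp-best-effort"  # bulk data rides the other LSP
```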

After seeing how traffic-engineered paths are computed and used, the next thing to look at is some of the considerations for deploying a traffic engineering solution.

Two major factors impact the number of LSPs in the network: the extent of the deployment and the size of the reservations. Solutions to the first problem using LSP hierarchy are discussed later in this chapter. As for the reservation size, if the size of the traffic trunk between two points exceeds the link capacity, one solution is to set up several LSPs, each with a bandwidth reservation that can be accommodated by the link capacity.

Traffic can then be load-balanced between these LSPs. Although logically a single LSP is necessary, the limitation on the maximum size of the reservation causes several LSPs to be set up in this case. Alternatively, several physical links can be bundled into one aggregated interface large enough for the whole reservation, so that a single LSP can be set up in this case. The downside is that failure of any interface in the bundle causes the LSP not to be able to establish. Most equipment vendors distinguish between the head end, mid-point and tail end when reporting scaling numbers.
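The arithmetic for splitting a trunk is straightforward; the trunk size and per-LSP reservation limit below are hypothetical:

```python
import math

# Sketch: number of parallel LSPs needed when a traffic trunk exceeds the
# reservation that any single link can accommodate.

def lsps_needed(trunk_mbps, max_reservation_mbps):
    """Each LSP reserves at most max_reservation_mbps, so the trunk is split."""
    return math.ceil(trunk_mbps / max_reservation_mbps)

# A 2500 Mbps trunk over links that can accept at most 1000 Mbps per
# reservation must be split into three LSPs, with traffic load-balanced
# across them.
print(lsps_needed(2500, 1000))  # 3
```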

Typical scaling numbers are in the range of several tens of thousands of LSPs. It is not uncommon that different numbers are supported for the head end and for transit (mid-point). This is mainly because of two factors: it is usually pretty straightforward to evaluate the number of LSPs for which a box is ingress or egress, because this information is derived directly from the tunnel end-points.

It is less obvious how to determine the number of LSPs that may transit a particular box. The temptation is to assume that all LSPs may cross a single node in the network, but this is realistic only where the topology funnels traffic through a few choke points. Such choke points could be, for example, the PoPs connecting major regions such as the US and Europe areas of a network. However, in most designs it is safe to assume that transit LSPs are distributed among the routers in the core. Either way, the analysis must be performed not just for the steady state but also for the failure scenarios, when LSPs reroute in the network.

Finally, one factor often overlooked when computing the total number of LSPs on a box is the extra LSPs that are created due to features that are turned on in the network. One example is the bypass LSPs used for fast reroute; another is the extra LSPs created with make-before-break on reoptimization. The number of LSPs in the network is a concern not only because of the scaling limits of the equipment used but also because of the operational overhead of provisioning, monitoring and troubleshooting a large number of LSPs.

In particular, configuring a full mesh of LSPs between N devices can be very labor intensive to set up and maintain: every time a device is added to the mesh, LSPs to and from it must be provisioned. This is a problem because the configurations must be changed on N different devices. (This solution assumes that the properties of the LSPs are fairly uniform for all LSPs originating at a particular LSR, or that mechanisms such as autobandwidth, discussed later in this chapter, are used to handle properties that are not uniform.)
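The scaling pressure of a full mesh is easy to quantify; the device counts in the sketch below are illustrative:

```python
# Sketch: LSP counts in a full mesh of N ingress/egress devices.

def full_mesh_lsps(n):
    """A full mesh needs one unidirectional LSP per ordered pair of devices."""
    return n * (n - 1)

def new_lsps_when_adding_one(n):
    """Adding one device adds an LSP to and from each of the n existing
    devices, and requires configuration changes on all of them."""
    return 2 * n

print(full_mesh_lsps(10))            # 90
print(new_lsps_when_adding_one(10))  # 20
```

The quadratic growth of the mesh is exactly why provisioning and monitoring overhead becomes a first-order design concern.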

Monitoring the LSPs, e.g. by polling traffic statistics, also has a cost: when the number of LSPs is very large, these operations may take a large amount of resources on the router and of bandwidth in the network.

Thus, a tradeoff must be made between the polling frequency and the number of LSPs supported. We have seen that the reservation size impacts the number of LSPs that must be created. However, the reservation size has other effects as well, discussed in the next section. Link capacity is the gating factor on the size of the reservation. In the previous section we saw how this impacts the number of LSPs when the reservation requests exceed the link capacity.

In a network that uses links of different capacities, using the minimum link capacity as a gating factor ensures that paths can be established across any of the links.

This is especially important when rerouting following a failure. The downside is that using a smaller reservation size creates more LSPs in the network and introduces the challenge of efficiently load balancing the traffic over the LSPs.

The granularity of the reservation can also affect the efficiency of the link utilization. In a network with links of equal capacity, if all the reservations are close to the maximum available bandwidth on each link, there will necessarily be unutilized bandwidth on all links that cannot be used by any of the reservations.

In this case it might have been preferable to set up several reservations of sizes such that all the available bandwidth could be used. For example, if all LSPs require 60 Mbps, better utilization can be achieved if, instead of a single 60 Mbps reservation, several 20 Mbps reservations are made. The approach of setting up several LSPs rather than one is not always applicable: the LSPs may not have the same delay and jitter properties, and for this reason balancing the traffic between them may not always be possible.
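This packing effect can be sketched numerically. Since the link capacity is not specified here, the example below assumes hypothetical 100 Mbps links:

```python
# Sketch of how reservation granularity affects link utilization,
# assuming hypothetical 100 Mbps links.

LINK_MBPS = 100

def usable(link_mbps, reservation_mbps):
    """Bandwidth actually reservable when all reservations have a fixed size."""
    return (link_mbps // reservation_mbps) * reservation_mbps

# With 60 Mbps reservations, only one fits per link: 40 Mbps is stranded.
print(usable(LINK_MBPS, 60))  # 60
# With 20 Mbps reservations, five fit per link: nothing is stranded.
print(usable(LINK_MBPS, 20))  # 100
```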

A common rule of thumb is to preempt smaller LSPs rather than large ones. This is done not only to keep the large LSPs stable but also because small LSPs have a better chance of finding an alternate path after being preempted.
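A smallest-first victim selection can be sketched as follows; this is an illustrative policy, not any vendor's actual preemption algorithm:

```python
# Sketch: when a new reservation needs bandwidth, preempt the smallest
# lower-priority LSPs first.

def pick_preemption_victims(lsps_mbps, needed_mbps):
    """Preempt smaller LSPs first; they reroute more easily elsewhere."""
    victims, freed = [], 0
    for size in sorted(lsps_mbps):
        if freed >= needed_mbps:
            break
        victims.append(size)
        freed += size
    return victims

# Freeing 30 Mbps preempts the 10 and 25 Mbps LSPs, not the 80 Mbps one.
print(pick_preemption_victims([80, 25, 10], 30))  # [10, 25]
```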

This method is also useful in avoiding bandwidth fragmentation (having unutilized bandwidth on the links). A separate consideration is how traffic reaches an LSP when BGP is involved. BGP route advertisements include a next-hop address, which is the address of the next BGP router in the path to the destination. Thus, to forward traffic to that destination, a path to the next-hop address must be found. This process is called resolving the route, and implementations differ in whether and how LSPs are used for it. The issue here is not one of correct versus incorrect behavior, especially because implementations typically allow the user to control the behavior through the configuration, but rather it is an issue of being aware that such differences may exist and accounting for them properly.

So far, we have seen some of the things that must be taken into account when deploying a traffic engineering solution. Next, we will take a look at one of the most popular applications for traffic engineering, namely the optimization of transmission resources. In this context, traffic engineering is deployed in one of two ways. The first is selective deployment in only parts of the network, where the goal is to route traffic away from a congested link.

This can be thought of as a tactical application, aimed at solving an immediate resource problem. The second is deployment throughout the entire network, where the goal is to improve the overall bandwidth utilization and, by doing so, delay costly link upgrades. This can be thought of as a strategic application of the technology, aimed at achieving a long-term benefit. An example of the tactical case is a congested link that is scheduled for an upgrade: what is needed is a temporary solution to move some of the traffic away from the link until the upgrade actually takes place.

Another example is the requirement to optimize a particularly expensive resource, such as an intercontinental link. A third example is a network spanning several geographic locations, where traffic engineering is required in only some of the regions. For example, a network with a presence in both the US and Asia may run traffic engineering only in the Asia region, where traffic rates are high and links run at high utilization.

Regardless of the type of deployment, when optimizing resource utilization using RSVP-TE, the assumption is that the following information is available: The bandwidth requirement for the LSP at the head end.

The available bandwidth at each node in the network. In real deployments, however, this necessary information may not always be readily accessible. The following sections discuss how to deal with missing information.

Estimating this information can be done by looking at traffic statistics such as interface or per-destination traffic statistics or by setting up an LSP with no bandwidth reservation and tracking the traffic statistics for this LSP.

Once the traffic patterns are known, an LSP can be set up for the maximum expected demand. The problem with this approach is that typically the bandwidth demands change according to the time of day or day of the week.

By always reserving bandwidth for the worst-case scenario, one ends up wasting bandwidth rather than optimizing its utilization. A more flexible solution is to allow the LSP to change its bandwidth reservation automatically, according to the current traffic demand.

This solution is called autobandwidth. The ingress router of an LSP configured for autobandwidth monitors the traffic statistics and periodically adjusts its bandwidth requirements according to the current utilization. A new path is computed to satisfy the new bandwidth requirements, in a make-before-break fashion.
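A minimal sketch of the adjustment decision follows, assuming a hypothetical threshold-based trigger; real vendor implementations differ in their sampling and adjustment rules:

```python
# Sketch of an autobandwidth-style adjustment decision.

def adjust_reservation(current_mbps, samples_mbps, threshold=0.1):
    """Resize the reservation to the peak of the recent samples, but only
    when it differs from the current reservation by more than the threshold,
    to avoid constant make-before-break resignaling."""
    peak = max(samples_mbps)
    if abs(peak - current_mbps) / current_mbps > threshold:
        return peak      # triggers a make-before-break resignal
    return current_mbps  # change too small; keep the existing reservation

print(adjust_reservation(100, [92, 97, 104]))    # 100 (within 10%, no change)
print(adjust_reservation(100, [120, 135, 150]))  # 150 (resize to the peak)
```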

Once the path is set up, traffic is switched to it seamlessly, without any loss. Autobandwidth is not defined in the IETF standards; rather, it is a feature that vendors have implemented to address the problem of traffic engineering when the bandwidth constraints are not known. Traffic engineering assumes that the reservations accurately reflect the traffic actually flowing. This assumption can break in two cases: traffic is not kept within the limits of the reservation, or not all traffic traversing the link is accounted for. The implications of not keeping the traffic within the reservation limits and the use of policers for doing so are discussed in more detail in Section 4.

Not all traffic traversing the link is accounted for. A common misconception is that RSVP traffic is somehow special because it was set up with resource reservations. This is not true. The RSVP reservation exists in the control plane only, and no forwarding resources are actually set aside for it. This fact is often overlooked in network designs, especially ones for converged networks, where some of the traffic must receive better QoS than others.

Currently, routers take into account only RSVP reservations when reporting available resources and when doing admission control. Because the bandwidth utilized by LDP is not accounted for, the bandwidth accounting is not accurate and there is no guarantee that the RSVP reservations will actually get the required bandwidth in the data plane. One way to deal with this is to place the RSVP traffic in its own scheduler queue; in this model, the bandwidth that is available for RSVP reservation is the bandwidth pool allocated to that queue.

This approach works as long as the non-RSVP traffic does not exceed the bandwidth set aside for it. Statistics monitoring can be used to estimate the traffic demand in the steady state, but no mechanism is available to react dynamically to changes in the non-RSVP traffic. The important thing to remember is that bandwidth reservations are not a magic bullet. Unless the bandwidth consumption is correctly evaluated, bandwidth reservations do not give any of the guarantees that MPLS-TE strives to achieve.

In an LDP network, the proposition of adding a second MPLS protocol for the sole purpose of achieving resource optimization may not be an attractive one.

The goal was to achieve better resource usage by allowing a higher percentage of the link to be utilized before triggering an upgrade. Traffic engineering of the IGP paths was accomplished by manipulating the link metrics. There are two main challenges when doing traffic engineering through IGP metric manipulation: changing the IGP metric on one link in one part of the network may impact routing in a different part of the network, and metrics that work well in the steady state may cause congestion under failure.

Being able to analyze both these factors requires the ability to simulate the network behavior with different link metrics and under different failure scenarios. Thus, an offline tool is required both for planning and for validation of the design. This means that the metrics are computed offline, based on the current traffic information and after simulating different types of failures in the network.

Once the metrics are set in the network, the link utilization is monitored to detect when it becomes necessary to reoptimize the computation.

It is not the intention to modify the IGP metrics on a failure, because this approach would not be feasible from an operations point of view. Instead, the IGP metrics are chosen in such a way that even under failure conditions no link gets overloaded. How close to optimal is the result of such metric-based traffic engineering? The answer is that it does not matter, as long as doing traffic engineering improves the existing situation by an amount that justifies the extra work involved. When choosing a metric-based approach over explicit routing, the operator is making a conscious decision to trade off some of the benefits of explicit routing, such as unequal-cost load sharing or fine-grained traffic engineering, for the sake of a simpler network design.

Test results from one vendor [IGP-TE, LDP-TE] show, not surprisingly, worse results using a metric-based approach than explicit routing, but much better results than no traffic engineering at all. Most of the discussion so far has focused on a model where the paths are computed dynamically by the routers. As seen in previous sections, the results of this computation may not be optimal. Offline computation tools can be used to provide better results.

Offline computation tools provide the following advantages in the context of traffic engineering. Exact control of where the paths are placed: the operator knows where the traffic is going to flow, and there are no surprises from dynamic computation. Global view of the reservations and of the bandwidth availability.

As seen in Section 2, individual routers do not have a global view of all the reservations in the network. Ability to cross area and AS boundaries: the computation is not based solely on the information in the TED; therefore, the restriction to a single IGP area does not apply. Computation can take into account both the normal and the failure cases.

Doing so can ensure that LSPs will always be able to reroute following a failure. Figure 2 shows an example in which all links have equal capacity and three LSPs are set up. Optimality of the solution: because the computation is done offline and can take a long time to complete, more sophisticated algorithms than CSPF can be employed to look for the most optimal solution, and the solution can be optimized for different factors. Perhaps the biggest advantage of an offline computation tool is that it can perform optimizations taking into account all the LSPs in the network, while CSPF can take into account only the LSPs originated at the node performing the computation.
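For contrast, the dynamic baseline can be made concrete. Below is a minimal CSPF-style sketch on a hypothetical four-node topology: links without enough unreserved bandwidth are pruned, then a shortest-path search runs on what remains.

```python
import heapq

# Hypothetical topology: (u, v) -> (IGP metric, unreserved bandwidth in Mbps)
links = {
    ("A", "B"): (1, 40), ("B", "D"): (1, 40),
    ("A", "C"): (1, 100), ("C", "D"): (2, 100),
}

def cspf(src, dst, demand_mbps):
    """Constrained SPF: prune links lacking bandwidth, then run Dijkstra."""
    graph = {}
    for (u, v), (metric, bw) in links.items():
        if bw >= demand_mbps:  # constraint: enough unreserved bandwidth
            graph.setdefault(u, []).append((v, metric))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metric in graph.get(node, []):
            heapq.heappush(heap, (cost + metric, nxt, path + [nxt]))
    return None

# A 50 Mbps LSP cannot use the shorter A-B-D path (only 40 Mbps free),
# so CSPF places it on the longer A-C-D path.
print(cspf("A", "D", 50))  # (3, ['A', 'C', 'D'])
```

An offline tool would instead optimize all demands jointly over this graph, which a per-head-end CSPF run cannot do.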

Here are a few of the challenges of offline tools. Input to the computation: the result of the computation is only as good as the data that the computation was based on; the traffic matrix, the demands of the LSPs and the available bandwidth must be correctly estimated and modeled. Global versus incremental optimizations: as network conditions change, the computation must be repeated, and the result of the new computation may require changes to a large number of LSPs and configuration of a large number of routers.

To perform configuration changes, routers are typically taken offline for maintenance. For practical reasons it may not be desirable to use the result of a computation that calls for a lot of changes in the network. Instead, an incremental optimization, which starts from the existing layout and changes only what is necessary, may be more appealing. The result of an incremental optimization is necessarily worse than that of a global optimization, but the tradeoff is that fewer routers need to be reconfigured.

Order of the upgrade: following a recomputation, it is not enough to know the paths of the new LSPs; one must also know the order in which these LSPs are to be set up. This is because the reconfiguration of the routers does not happen simultaneously, so an old reservation from router A that is due to move may still be active and take up bandwidth on links that should be used by a new reservation from router B. Limitations of the computation: the result of the computation assumes certain network conditions, such as a single failure in the network.

To respond to changing conditions in a network, such as a link cut, the computation must be redone. However, the computation is fairly slow and applying its result requires router configuration, which is not always possible within a short time window.

Therefore, reacting to a temporary network condition may not be practical: by the time the new computation has been performed and the changes have been applied in the network, the failure might already be fixed. Operators have the choice of using (a) offline computation for both the primary and the secondary (backup) paths, (b) offline computation of the primary and dynamic computation of the secondary paths or (c) dynamic computation for both primary and secondary paths (secondary paths are discussed in detail in the Protection and Restoration chapter, Chapter 3).

Offline tools are available from several vendors, including Wandl. Some operators develop their own simulation and computation tools in-house, tailored to their own network requirements.

Using the traffic-engineered path, it is possible to achieve efficient bandwidth utilization, guarantees regarding resource allocation in times of resource crunch and control over the path that the traffic is taking in the network. However, the traffic engineering solution presented so far has three limitations: it operates at the aggregate level across all the DiffServ classes of service and cannot give bandwidth guarantees on a per-DiffServ-class basis; and it is confined to a single domain.

It provides no guarantees for the traffic during failures. In Chapters 4 and 5 we will see how the traffic engineering solution is extended to overcome the first two limitations, using MPLS DiffServ Aware TE and interdomain traffic engineering. In Chapter 3 we will look at mechanisms available for protection and restoration, which overcome the third limitation listed above.

References:
1. A. Maghbouleh, Metric-Based Traffic Engineering: Panacea or Snake Oil? A Real-World Study, presentation.
2. D. Awduche, J. Malcolm, J. Agogbua, M. O'Dell and J. McManus, Requirements for Traffic Engineering over MPLS, RFC 2702.
3. D. Awduche et al., RSVP-TE: Extensions to RSVP for LSP Tunnels, RFC 3209.
4. B. Jamoussi, L. Andersson, R. Callon, R. Dantu, L. Wu, P. Doolan, T. Worster, N. Feldman, A. Fredette, M. Girish, E. Gray, J. Heinanen, T. Kilty and A. Malis, Constraint-Based LSP Setup Using LDP, RFC 3212.
5. L. Andersson and G. Swallow, The MPLS Working Group Decision on MPLS Signaling Protocols, RFC 3468.
6. D. Katz, K. Kompella and D. Yeung, Traffic Engineering (TE) Extensions to OSPF Version 2, RFC 3630.
7. H. Smit and T. Li, IS-IS Extensions for Traffic Engineering.
8. D. Awduche and B. Jabbari, Internet Traffic Engineering Using Multi-Protocol Label Switching (MPLS).

Converged networks increasingly carry voice and video alongside data. However, these applications require high-quality service, not just when the network is in a normal operating condition but also following a failure.

Therefore, protection and restoration mechanisms are necessary to handle the failure case quickly. The ability to provide such fast protection is essential for converging voice, video and data on to a single MPLS network infrastructure.

This chapter deals with protection and restoration in MPLS networks.

We will start by discussing the use of bidirectional forwarding detection (BFD) for fast failure detection. Converging all services on to the same core is attractive because it eliminates the need to build and maintain separate physical networks for each service offering, and because the flexibility of IP enables new services such as video-telephony integration.

Thus, fast recovery following a failure is an essential functionality for multiservice networks. One way to provide fast recovery following a link failure is to provide protection at Layer 1. The idea is simple. Maintain a standby link that is ready to take over the traffic from the protected one in case of failure and switch traffic to it as soon as the failure is detected.

Because the decision to move to the standby link is a local one, the switchover can happen within 50 ms, making any disruption virtually unnoticeable at the application layer. The quick recovery comes at the cost of maintaining the idle bandwidth and the additional hardware required for the switchover. MPLS fast reroute (FRR) provides similar fast, local recovery at the MPLS layer. The advantage of fast reroute over SONET APS is that (a) it is not limited by the link type, (b) it offers protection for node failures and (c) it does not require extra hardware. For a provider contemplating the deployment of a network requiring subsecond recovery, such as voice-over-IP, the first question to ask is whether MPLS FRR is the only option.

Exactly how much loss can be tolerated by a particular application is an important consideration when choosing a protection method.

Many applications do not really need 50 ms protection and can tolerate higher loss. Given the more lax requirements, some service providers may decide to deploy pure IP networks and rely on subsecond IGP convergence (now available from many vendors) for the protection. The main differentiator for MPLS FRR in this context is that it can consistently provide a small recovery time, while IGP convergence may be affected by factors such as when the last SPF was run, churn in a different part of the network or CPU (central processing unit) load caused by other unrelated operations.

Hence, although the average IGP convergence time might be low, the upper bound on the recovery time may be relatively high. The amount of time during which traffic is lost depends on how fast the failure is detected and how fast the traffic is switched over to an alternate path.
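As a rough sketch, the total loss duration is the sum of the detection and switchover times; all numbers below are illustrative, not measured values:

```python
# Sketch: traffic loss after a failure = detection time + switchover time.

def outage_ms(detection_ms, switchover_ms):
    return detection_ms + switchover_ms

# Local protection with a presignaled backup: fast detection, local switchover.
print(outage_ms(detection_ms=30, switchover_ms=20))    # 50
# IGP convergence: detection plus SPF runs and FIB updates can take longer,
# and the upper bound is what matters for demanding applications.
print(outage_ms(detection_ms=300, switchover_ms=700))  # 1000
```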

Most of this chapter deals with the mechanisms for quickly moving the traffic to an alternate path around the point of failure.

PDFfiller. On-line PDF form Filler, Editor, Type on PDF, Fill, Print, Email, Fax and Export

However, no matter how efficient these mechanisms are, they are useless if the failure is not detected in a timely manner. Thus, fast failure detection, though not directly related to MPLS, is an important component of MPLS protection and is assumed throughout this chapter. In the next section we will take a look at some of the challenges with fast detection.

Some transmission media provide hardware indications of connectivity loss. Other transmission media do not have this capability, e.g. Ethernet, which is commonly used in PoPs (fast detection capability has since been added for optical Ethernet). On such media, failure detection must rely on hello protocols. Let us take a look at the disadvantages of doing so, using IGP hellos as an example. The IGPs send periodic hello packets to verify that the neighbor is still alive and reachable. When the packets stop arriving, a failure is assumed.

There are two reasons why hello-based failure detection using IGP hellos cannot provide fast detection times: (a) in common configurations, the detection times range from 5 to 40 seconds, and (b) handling IGP hellos is relatively complex, so raising the frequency of the hellos places a considerable burden on the CPU.

The heart of the matter is the lack of a hello protocol to detect the failure at a lower layer. So what exactly is BFD?

BFD is a simple hello protocol designed to do rapid failure detection. Its goal is to provide a low-overhead mechanism that can quickly detect faults in the bidirectional path between two forwarding engines, whether they are due to problems with the physical interfaces, with the forwarding engines themselves or with any other component. The natural question is just how quickly BFD can detect such a fault. The answer is that it depends on the platform and on how the protocol is implemented.
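The arithmetic behind hello-based detection is simple: the detection time is roughly the transmit interval multiplied by the number of hellos that may be missed before the neighbor is declared down. The timer values below are illustrative, not the defaults of any particular implementation:

```python
# Sketch: hello-based failure detection time.

def detection_time_ms(tx_interval_ms, detect_multiplier):
    """Time until a neighbor is declared down after its last hello."""
    return tx_interval_ms * detect_multiplier

# IGP-style hellos: 10 s interval, dead after 4 missed -> 40 s detection.
print(detection_time_ms(10_000, 4))  # 40000
# BFD-style timers: 100 ms interval, multiplier 3 -> 300 ms detection.
print(detection_time_ms(100, 3))     # 300
```

The two orders of magnitude between the results is precisely the gap BFD was designed to close, without pushing fast hellos into the IGP itself.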

Early implementations allow subsecond detection times, with the possibility to improve the time in the future. While this is not perfect if recovery times of 50 ms are sought, it is a huge improvement over detection times on the order of seconds, and it still falls within the requirements of many applications. BFD started out as a simple mechanism intended to be used on Ethernet links, but has since found numerous applications. With the knowledge that this tool exists, the problem of fast detection can be considered solved for all media types.

Therefore, in the rest of the chapter, fast failure detection is assumed. We look first at end-to-end path protection. Although not as popular as local protection using fast reroute, it is important to examine it because it highlights some of the issues solved by local protection.

A common practice in network deployments is the use of a primary/secondary approach for providing resiliency: a secondary LSP is configured at the head end to protect the primary (an example is shown in Figure 3). For fastest recovery times, the secondary is presignaled and ready to take over the traffic, in effect being in hot standby mode. When a failure occurs along the primary, an RSVP error message is propagated to the head end; upon receipt of this error message, the head end switches the traffic to the secondary. If the secondary is not presignaled, the extra time required to set it up further increases the switchover delay.

From the example, several properties of path protection become apparent. Control over the traffic flow following a failure The use of a presignaled secondary path is very powerful because it provides exact knowledge of where the traffic will flow following the failure.

This is important not just for capacity planning but also for ensuring properties such as low delay. Note that the same control can be achieved even if the secondary is not in standby mode, if its path is explicitly configured. Requirement for path diversity: for the secondary to provide meaningful protection in case of a failure on the primary, it is necessary that a single failure not affect both the primary and the secondary.

Clearly, if both LSPs use a common link in their path, then they will both fail when the link breaks. To avoid this, the primary and the secondary must take different paths through the network.

Path diversity is relatively easy to achieve when the LSPs are contained within a single IGP area, and many implementations attempt to provide this functionality by default. However, in the chapter discussing interdomain TE (Chapter 5), we will see that this is not trivial to ensure for LSPs that cross domain boundaries.

Double booking of resources: if the secondary is presignaled and reserves bandwidth, the net result is that twice as many resources are used throughout the network. This problem could be avoided if the secondary were not presignaled, at the expense of a longer switchover time. Unfortunately, path diversity alone does not guarantee that the primary and secondary will not share the same fate when a resource fails.

Fate sharing is discussed in detail later in this chapter. Presignaled secondaries consume bandwidth that could otherwise be used by primary LSPs; to prevent this situation, some providers choose to use LSP priorities and assign better values to all the primary LSPs in the network, to ensure that they can always establish. Unnecessary protection: end-to-end protection protects the entire path, so even if most links in the primary path are protected using other mechanisms (such as APS), it is not possible to apply protection selectively for just those links that need it. Nondeterministic switchover delay: the delay in the switchover between the primary and the standby is dictated by the time it takes for the RSVP error message to propagate to the LSP head end.

This is a control plane operation and therefore the time it takes is not deterministic. Moreover, unless the secondary is set up in the standby mode, further delay is incurred by RSVP signaling of the secondary path.


The main advantage of end-to-end path protection is the control it gives the operator over the fate of the traffic after the failure. Its main disadvantages are double booking of resources, unnecessary protection for links that do not require it and nondeterministic switchover times. These disadvantages arise from the fact that the protection is provided by the head end for the entire path. Local protection attempts to fix these problems by providing the protection locally, rather than at the head end, and by protecting a single resource at a time.

Thus, it makes sense to apply protection as close to the point of failure as possible. The idea of local protection is simple: instead of providing protection at the head end for the entire path, traffic is rerouted locally around the point of failure.

To borrow a highway analogy: rather than redirecting all the traffic away from the highway altogether, vehicles are directed onto a detour path at exit A and rejoin the highway at exit B, or at some other exit down the road from B.

An alternate path, called the detour or bypass, exists around link R1-R2. In case of a failure, traffic is shuttled around the failed link using this path and rejoins the LSP at R2. Thus, the traffic is quickly rerouted around the point of failure, and for this reason this mechanism is called fast reroute.
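One way to see why this repair is fast is that, at the router upstream of the failed link, it amounts to a local flip of preinstalled forwarding state. The sketch below is a toy model, not any vendor's implementation; the table layout and node names are invented.

```python
# Toy forwarding state at R1 with a precomputed detour for link R1-R2.
# On failure, R1 simply switches the LSP's next hop to the detour;
# traffic rejoins the protected LSP downstream at R2.
# All names and fields are illustrative.

fib = {
    "lsp1": {"primary": "R2", "detour": "R5", "link_up": True},
}

def forward(lsp_name):
    """Pick the next hop: primary while the link is up, detour otherwise."""
    entry = fib[lsp_name]
    return entry["primary"] if entry["link_up"] else entry["detour"]

print(forward("lsp1"))            # R2: normal forwarding over the link
fib["lsp1"]["link_up"] = False    # link R1-R2 fails
print(forward("lsp1"))            # R5: traffic spliced onto the detour
```

Because the detour state exists before the failure, the switchover requires no signaling at failure time, only this local decision.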

The idea is not to keep the traffic on the detour until the link recovers, but rather to keep it there long enough for the LSP head end to move the LSP to a new path that does not use the failed link. Fast reroute has several attractive properties. A single resource is protected, and therefore it is possible to pick and choose which resources to protect. Protection can be applied quickly, because it is enforced close to the point of failure. Why is this mechanism possible with MPLS? The answer is that MPLS relies heavily on source routing, where the path is determined at the source and no independent forwarding decisions are made by the individual nodes in the path.

Let us see how. For local protection to work, traffic must reach the beginning of the protection path after the failure has occurred. When traffic is forwarded as IP, the forwarding decision is made independently at every hop, based on the destination address. In Figure 3. all the link metrics are equal to 1, except link R8-R9, which has a higher metric. The LSP is set up along a path determined at the head end.
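The contrast between hop-by-hop IP forwarding and a source-routed LSP can be simulated directly. In the sketch below the topology, metrics and node names are invented (loosely following the chapter's R1...R9 naming): each IP hop independently runs shortest-path-first toward the destination, while the LSP simply follows the explicit route pinned at the head end.

```python
# Toy contrast: hop-by-hop IP forwarding vs. a source-routed LSP.
# Topology and metrics are invented for illustration.
import heapq

GRAPH = {  # adjacency: node -> {neighbor: metric}
    "R1": {"R2": 1, "R8": 1},
    "R2": {"R1": 1, "R3": 1},
    "R3": {"R2": 1, "R9": 1},
    "R8": {"R1": 1, "R9": 10},
    "R9": {"R8": 10, "R3": 1},
}

def next_hop(graph, src, dst):
    """Dijkstra SPF; returns the first hop on the best path src -> dst."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, metric in graph[u].items():
            if d + metric < dist.get(v, float("inf")):
                dist[v], prev[v] = d + metric, u
                heapq.heappush(pq, (d + metric, v))
    hop = dst
    while prev.get(hop) != src:
        hop = prev[hop]
    return hop

def ip_walk(graph, src, dst, max_hops=10):
    """Each router decides independently, based only on the destination."""
    path, node = [src], src
    while node != dst and len(path) <= max_hops:
        node = next_hop(graph, node, dst)
        path.append(node)
    return path

# The LSP follows an explicit route chosen at the head end; transit
# nodes just follow preinstalled label state and make no SPF decision.
lsp_path = ["R1", "R8", "R9", "R3"]

print(ip_walk(GRAPH, "R1", "R3"))   # IP takes the lowest-metric path
print(lsp_path)                     # the LSP stays on the pinned path
```

This is why a protection path can be relied on under MPLS: once traffic is placed on the detour's label-switched path, no node along the way can independently decide to send it back toward the failed resource, as hop-by-hop IP forwarding might.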

Once traffic is placed into the LSP, it is guaranteed to be forwarded all the way to the tail end, regardless of the routing changes that happen in the network. Once it rejoins the LSP at R2, it is guaranteed to reach the tail end. Local protection mechanisms are qualified based on two criteria. The first is the type of resource that is protected, either a link or a node; thus, local protection is either link protection or node protection.

As we will see in later sections, this influences the placement of the backup path. Regardless of the protected resource, local protection mechanisms are collectively referred to as local protection or fast reroute (FRR). The second criterion is the number of LSPs protected by the protection tunnel, either 1:1 or N:1. These are called one-to-one backup and facility backup, respectively.
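The amount of backup state each scheme requires can be illustrated with a trivial count (the LSP names and counts below are invented): one-to-one backup needs a detour per protected LSP, while facility backup shares a single bypass per protected resource.

```python
# Toy count of backup state for the two local protection flavors.
# Names and the number of LSPs are invented for illustration.

lsps_over_link = [f"lsp{i}" for i in range(1, 6)]   # 5 LSPs cross R1-R2

# One-to-one backup: a dedicated detour per protected LSP.
detours = {lsp: f"detour-{lsp}" for lsp in lsps_over_link}

# Facility backup: one bypass tunnel protects the link for all LSPs.
bypass = {"R1-R2": "bypass-R1-R2"}

print(len(detours), len(bypass))  # 5 1
```

The state grows linearly with the number of protected LSPs in the one-to-one case, but stays constant per resource in the facility case.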

The ability to share the protection paths is not an issue of scalability alone. As we will see in later sections, it also determines how traffic is forwarded over the protection path. The basic mechanisms of fast reroute, one-to-one backup and facility backup are described in the section dealing with link protection. The section describing node protection focuses only on the special aspects of node protection.

To protect against the failure of a link, a backup tunnel is set up around that link. This backup is called a detour in the case of one-to-one protection, and a bypass in the case of many-to-one protection. The head end of the backup tunnel is the router upstream of the link, and its tail end is the router downstream of the link (where upstream and downstream are relative to the direction of the traffic).

This is illustrated in Figure 3. Node A, where traffic is spliced from the protected path onto the backup, is called the Point of Local Repair (PLR), and node B, where traffic merges from the backup back into the protected path, is called the Merge Point (MP). Let us take a look at the different actions that need to happen before and after the failure. Before the failure, the backup path must be computed and signaled, and the forwarding state must be set up for it at the PLR, the MP and all the transit nodes.
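The forwarding-state part can be sketched for the facility backup case. The sketch below is a toy label-stack model, not any router's data plane, and all label values are invented: at the PLR, the protected LSP's label is swapped to the label the MP expects, and the shared bypass label is pushed on top; the router before the MP pops the bypass label, so the MP receives exactly the label it advertised for the protected LSP.

```python
# Toy facility-backup label operations. stack[0] is the top label.
# All label values are invented for illustration.

def plr_protect(stack, mp_label, bypass_label):
    """At the Point of Local Repair: swap to the label the Merge Point
    expects for this LSP, then push the shared bypass tunnel label."""
    rewritten = [mp_label] + stack[1:]   # swap the protected LSP's label
    return [bypass_label] + rewritten    # push the bypass label on top

def bypass_penultimate_pop(stack):
    """At the router before the MP: pop the bypass label, so the MP
    sees the packet with the label it expects on the protected LSP."""
    return stack[1:]

pkt = [100]                              # protected LSP label at the PLR
on_bypass = plr_protect(pkt, mp_label=200, bypass_label=999)
at_mp = bypass_penultimate_pop(on_bypass)

print(on_bypass, at_mp)  # [999, 200] [200]
```

Because only the top (bypass) label steers traffic over the backup, many protected LSPs can share one bypass tunnel, each distinguished by the inner label the MP sees.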


MPLS-Enabled Applications: Emerging Developments and New Technologies, 3rd Edition, by Ina Minei and Julian Lucek. From the foreword by Yakov Rekhter: "Here at last is a single, all-encompassing resource where the myriad applications sharpen into a comprehensible text that first explains the whys and whats of each application before going on to the technical detail of the hows."

