Benchmarking Methodology for Link-State IGP Data Plane Route Convergence

   Allot Communications, 67 South Bedford Street, Suite 400,
   Burlington, MA 01803, USA, +1 508 309 2179, sporetsky@allot.com

   Juniper Networks, 1194 North Mathilda Ave, Sunnyvale, CA 94089,
   USA, +1 314 378 2571, bimhoff@planetspork.com

   Cisco Systems, 6A De Kleetlaan, Diegem, Brabant 1831, Belgium,
   kmichiel@cisco.com
Benchmarking Working Group
This document describes the methodology for benchmarking Link-State
Interior Gateway Protocol (IGP) Route Convergence. The methodology is to
be used for benchmarking IGP convergence time through externally
observable (black box) data plane measurements. The methodology can be
applied to any link-state IGP, such as ISIS and OSPF.

This document describes the methodology for benchmarking Link-State
Interior Gateway Protocol (IGP) convergence. The motivation and
applicability for this benchmarking are described in . The terminology
to be used for this benchmarking is described in .

IGP convergence time is measured on the data plane at the Tester by
observing packet loss through the DUT. All factors contributing to
convergence time are accounted for by measuring on the data plane, as
discussed in . The test cases in this
document are black-box tests that emulate the network events that cause
convergence, as described in .

The methodology described in this document can be applied to IPv4 and
IPv6 traffic and link-state IGPs such as ISIS , OSPF , and others.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in BCP 14, RFC 2119 . RFC 2119 defines the use of these key words to
help make the intent of standards track documents as clear as possible.
While this document uses these keywords, this document is not a
standards track document.

This document uses much of the terminology defined in and uses
existing terminology defined in other BMWG work. Examples include,
but are not limited to:

   Throughput                 [Ref., section 3.17]
   Device Under Test (DUT)    [Ref., section 3.1.1]
   System Under Test (SUT)    [Ref., section 3.1.2]
   Out-of-order Packet        [Ref., section 3.3.2]
   Duplicate Packet           [Ref., section 3.3.3]
   Stream                     [Ref., section 3.3.2]
   Loss Period                [Ref., section 4]

Figure shows the
test topology to measure IGP convergence time due to local Convergence
Events such as Local Interface failure (), layer 2 session failure
(), and IGP
adjacency failure ().
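As an aid to automating this setup, the adjacency requirements of this topology (detailed in the remainder of this section) can be modeled minimally in code. This is an illustrative sketch only; the interface and emulated-neighbor names are hypothetical, not taken from this document.

```python
# Minimal model of the first test topology (illustrative names only):
# the Tester emulates two neighbor routers toward the DUT, one on the
# Preferred Egress Interface and one on the Next-Best Egress Interface.
topology = {
    "ingress":          {"emulated_peer": "R0", "igp_adjacency": "SHOULD"},
    "preferred_egress": {"emulated_peer": "R1", "igp_adjacency": "MUST"},
    "next_best_egress": {"emulated_peer": "R2", "igp_adjacency": "MUST"},
}

# The two egress adjacencies are mandatory; the ingress adjacency is
# recommended but optional.
mandatory = [ifname for ifname, cfg in topology.items()
             if cfg["igp_adjacency"] == "MUST"]
print(mandatory)   # ['preferred_egress', 'next_best_egress']
```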
This topology is also used to measure IGP convergence time due to the
route withdrawal (),
and route cost change () Convergence Events. IGP
adjacencies MUST be established between Tester and DUT, one on the
Preferred Egress Interface and one on the Next-Best Egress Interface.
For this purpose the Tester emulates two routers, each establishing
one adjacency with the DUT. An IGP adjacency SHOULD be established on
the Ingress Interface between Tester and DUT.

Figure shows the
test topology to measure IGP convergence time due to Remote Interface
failure (). In this
topology the two routers R1 and R2 are considered System Under Test
(SUT) and SHOULD be identically configured devices of the same model.
IGP adjacencies MUST be established between Tester and SUT, one on
the Preferred Egress Interface and one on the Next-Best Egress
Interface. For this purpose the Tester emulates one or two routers. An
IGP adjacency SHOULD be established on the Ingress Interface between
Tester and SUT. In this topology there is a possibility of a transient
microloop between R1 and R2 during convergence.

Figure shows the
test topology to measure IGP convergence time due to local Convergence
Events with members of an Equal Cost Multipath (ECMP) set (). In this topology,
the DUT is configured with each egress interface as a member of a
single ECMP set and the Tester emulates N next-hop routers, one router
for each member. IGP adjacencies MUST be established between Tester
and DUT, one on each member of the ECMP set. For this purpose each of
the N routers emulated by the Tester establishes one adjacency with
the DUT. An IGP adjacency SHOULD be established on the Ingress
Interface between Tester and DUT.

Figure shows the
test topology to measure IGP convergence time due to remote
Convergence Events with members of an Equal Cost Multipath (ECMP) set
(). In this
topology the two routers R1 and R2 are considered System Under Test
(SUT) and MUST be identically configured devices of the same model.
Router R1 is configured with each egress interface as a member of a
single ECMP set and the Tester emulates N next-hop routers, one router
for each member. IGP adjacencies MUST be established between Tester
and SUT, one on each egress interface of SUT. For this purpose each of
the N routers emulated by the Tester establishes one adjacency with
the SUT. An IGP adjacency SHOULD be established on the Ingress
Interface between Tester and SUT. In this topology there is a
possibility of a transient microloop between R1 and R2 during
convergence.

Figure shows
the test topology to measure IGP convergence time due to local
Convergence Events with members of a Parallel Link (). In this topology,
the DUT is configured with each egress interface as a member of a
Parallel Link and the Tester emulates the single next-hop router. IGP
adjacencies MUST be established on all N members of the Parallel Link
between Tester and DUT. For this purpose the router emulated by the
Tester establishes N adjacencies with the DUT. An IGP adjacency SHOULD
be established on the Ingress Interface between Tester and DUT.

Two concepts will be highlighted in this section: convergence time
and loss of connectivity period.

The Route Convergence time indicates the
period in time between the Convergence Event Instant and the instant in time the DUT is ready to
forward traffic for a specific route on its Next-Best Egress Interface
and maintains this state for the duration of the Sustained Convergence
Validation Time . To measure Route
Convergence time, the Convergence Event Instant and the traffic received
from the Next-Best Egress Interface need to be observed.

The Route Loss of Connectivity Period
indicates the time during which traffic to a specific route is lost
following a Convergence Event until Full Convergence completes. This Route Loss of Connectivity Period
can consist of one or more Loss Periods . For
the testcases described in this document it is expected to have a single
Loss Period. To measure Route Loss of Connectivity Period, the traffic
received from the Preferred Egress Interface and the traffic received
from the Next-Best Egress Interface need to be observed.

The Route Loss of Connectivity Period is the more important of the two
benchmarks, since it has a direct impact on the network user's
application performance.

In general the Route Convergence time is larger than or equal to the
Route Loss of Connectivity Period. Depending on which Convergence Event
occurs and how this Convergence Event is applied, traffic for a route
may still be forwarded over the Preferred Egress Interface after the
Convergence Event Instant, before converging to the Next-Best Egress
Interface. In that case the Route Loss of Connectivity Period is shorter
than the Route Convergence time.

At least one condition needs to be fulfilled for Route Convergence
time to be equal to Route Loss of Connectivity Period. The condition is
that the Convergence Event causes an instantaneous traffic loss for the
measured route. A fiber cut on the Preferred Egress Interface is an
example of such a Convergence Event.

A second condition applies to Route Convergence time measurements
based on Connectivity Packet Loss . This
second condition is that there is only a single Loss Period during Route
Convergence. For the testcases described in this document this is
expected to be the case.

To measure convergence time benchmarks for Convergence Events
caused by a Tester, such as an IGP cost change, the Tester MAY start
to discard all traffic received from the Preferred Egress Interface at
the Convergence Event Instant, or MAY separately observe packets
received from the Preferred Egress Interface prior to the Convergence
Event Instant. This way these Convergence Events can be treated the
same as Convergence Events that cause instantaneous traffic loss.

To measure convergence time benchmarks without instantaneous
traffic loss (either real or induced by the Tester) at the Convergence
Event Instant, such as a reversion of a link failure Convergence
Event, the Tester SHALL only observe packet statistics on the
Next-Best Egress Interface. If using the Rate-Derived method to
benchmark convergence times for such Convergence Events, the Tester
MUST collect a timestamp at the Convergence Event Instant. If using a
loss-derived method to benchmark convergence times for such
Convergence Events, the Tester MUST measure the period in time between
the Start Traffic Instant and the Convergence Event Instant. To
measure this period in time the Tester can collect timestamps at the
Start Traffic Instant and the Convergence Event Instant.

The Convergence Event Instant together with the receive-rate
observations on the Next-Best Egress Interface allow the convergence
time benchmarks to be derived using the Rate-Derived Method .

By observing lost packets on the Next-Best Egress Interface only,
the observed packet loss is the number of lost packets between Traffic
Start Instant and Convergence Recovery Instant. To measure convergence
times using a loss-derived method, packet loss between the Convergence
Event Instant and the Convergence Recovery Instant is needed. The time
between Traffic Start Instant and Convergence Event Instant must be
accounted for. An example may clarify this.

Figure illustrates a Convergence
Event without instantaneous traffic loss for all routes. The top graph
shows the Forwarding Rate over all routes, the bottom graph shows the
Forwarding Rate for a single route Rta. Some time after the
Convergence Event Instant, Forwarding Rate observed on the Preferred
Egress Interface starts to decrease. In the example, route Rta is the
first route to experience packet loss at time Ta. Some time later, the
Forwarding Rate observed on the Next-Best Egress Interface starts to
increase. In the example, route Rta is the first route to complete
convergence at time Ta'.

If only packets received on the Next-Best Egress Interface are
observed, the duration of the packet loss period for route Rta can be
calculated from the received packets as in Equation 1, where the
packet loss is observed between the Traffic Start Instant T0 and the
Convergence Recovery Instant:

   packet loss period =
       number of lost packets / per-route offered packet rate  (Equation 1)

Since the Convergence Event Instant (CEI) is the start time for the
convergence time measurement, the period in time between T0 and CEI
needs to be subtracted from the calculated result to obtain the
convergence time, as in Equation 2:

   convergence time = packet loss period - (CEI - T0)          (Equation 2)

Route Loss of Connectivity Period SHOULD be measured using the
Route-Specific Loss-Derived Method. Since the start instant and end
instant of the Route Loss of Connectivity Period can be different for
each route, these cannot be accurately derived by observing only
global statistics over all routes. An example may clarify this.

Following a Convergence Event, route Rta is the first route for
which packet loss starts; the Route Loss of Connectivity Period for
route Rta starts at time Ta. Route Rtb is the last route for which
packet loss starts; the Route Loss of Connectivity Period for route
Rtb starts at time Tb, with Tb>Ta.

Suppose the DUT implementation is such that route Rta is the first
route for which traffic loss ends, at time Ta' with Ta'>Tb, and route
Rtb is the last route for which traffic loss ends, at time Tb' with
Tb'>Ta'. By observing only global traffic statistics
over all routes, the minimum Route Loss of Connectivity Period would
be measured as Ta'-Ta. The maximum calculated Route Loss of
Connectivity Period would be Tb'-Ta. The real minimum and maximum
Route Loss of Connectivity Periods are Ta'-Ta and Tb'-Tb.
Illustrating this with the numbers Ta=0, Tb=1, Ta'=3, and Tb'=5, the
global traffic statistics would give a LoC Period between 3 and 5,
versus the real LoC Period between 3 and 4.

Now suppose the DUT implementation is such that route Rtb is the
first route for which packet loss ends, at time Tb'', and route Rta
is the last route for which packet loss ends, at time Ta''. The minimum and
maximum Route Loss of Connectivity Periods derived by observing only
global traffic statistics would be Tb''-Ta, and Ta''-Ta. The real
minimum and maximum Route Loss of Connectivity Periods are Tb''-Tb and
Ta''-Ta. Illustrating this with the numbers Ta=0, Tb=1, Ta''=5, and
Tb''=3, the global traffic statistics would give a LoC Period between
3 and 5, versus the real LoC Period between 2 and 5.

The two implementation variations in the above example would result
in the same derived minimum and maximum Route Loss of Connectivity
Periods when only observing the global packet statistics, while the
real Route Loss of Connectivity Periods are different.

The test cases described in Section MAY be used for link-state IGPs, such as
ISIS or OSPF. The IGP convergence time test methodology is
identical.

The obtained results for IGP convergence time may vary if other
routing protocols are enabled and routes learned via those protocols
are installed. IGP convergence times SHOULD be benchmarked without
routes installed from other protocols.

The Tester emulates a single IGP topology. The DUT establishes IGP
adjacencies with one or more of the emulated routers in this single
IGP topology emulated by the Tester. See test topology details in
. The emulated topology SHOULD
only be advertised on the DUT egress interfaces.

The number of IGP routes will impact the measured IGP route
convergence time. To obtain results similar to those that would be
observed in an operational network, it is RECOMMENDED that the number
of installed routes and nodes closely approximate that of the network
(e.g. thousands of routes with tens or hundreds of nodes).

The number of areas (for OSPF) and levels (for ISIS) can impact the
benchmark results.

There are timers that may impact the measured IGP convergence
times. The benchmark metrics MAY be measured at any fixed values for
these timers. To obtain results similar to those that would be
observed in an operational network, it is RECOMMENDED to configure the
timers with the values as configured in the operational network.

Examples of timers that may impact measured IGP convergence time
include, but are not limited to:

   o  Interface failure indication
   o  IGP hello timer
   o  IGP dead-interval or hold-timer
   o  LSA or LSP generation delay
   o  LSA or LSP flood packet pacing
   o  SPF delay

All test cases in this methodology document MAY be executed with
any interface type. The type of media may dictate which test cases may
be executed. Each interface type has a unique mechanism for detecting
link failures and the speed at which that mechanism operates will
influence the measurement results. All interfaces MUST be the same
media and Throughput for each test case. All interfaces SHOULD be
configured as point-to-point.

The Throughput of the device, as defined in and benchmarked in
at a fixed packet size, needs to be determined over the preferred path
and over the next-best path. The Offered Load SHOULD be the minimum of
the measured Throughput of the device over the primary path and over
the backup path. The packet size is selectable and MUST be recorded.
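As a worked illustration of this Offered Load selection (the throughput figures and route count below are assumed example values, not taken from this document):

```python
# Offered Load selection sketch (example numbers only).
throughput_preferred = 148_000  # measured Throughput over preferred path, pps
throughput_next_best = 144_000  # measured Throughput over next-best path, pps

# The Offered Load SHOULD be the minimum of the two measured Throughputs.
offered_load = min(throughput_preferred, throughput_next_best)

# With the Offered Load shared equally over all matched routes, the time
# between two successive packets to the same route follows directly; it
# bounds the accuracy of any packet-loss-based measurement.
num_routes = 10_000
per_route_rate = offered_load / num_routes   # packets per second per route
granularity = num_routes / offered_load      # seconds between two packets
                                             # to the same route

print(offered_load, round(granularity, 4))   # 144000 0.0694
```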
Packet size is measured in bytes and includes the IP header and
payload.

The destination addresses for the Offered Load MUST be distributed
such that all routes or a statistically representative subset of all
routes are matched and each of these routes is offered an equal share
of the Offered Load. It is RECOMMENDED to send traffic matching all
routes, but a statistically representative subset of all routes can be
used if required.

In the Remote Interface failure testcases using topologies and there is a possibility of
a transient microloop between R1 and R2 during convergence. The TTL or
Hop Limit value of the packets sent by the Tester may influence the
benchmark measurements since it determines which device in the
topology may send an ICMP Time Exceeded Message for looped
packets.

The duration of the Offered Load MUST be greater than the convergence
time.

Since packet loss is observed to measure the Route Convergence
Time, the time between two successive packets offered to each
individual route is the highest possible accuracy of any packet loss
based measurement. When packet jitter is much less than the
convergence time, it is a negligible source of error and therefore it
will be ignored here.

The benchmark measurements may vary for each trial, due to the
statistical nature of timer expirations, CPU scheduling, etc.
Evaluation of the test data must be done with an understanding of
generally accepted testing practices regarding repeatability, variance
and statistical significance of a small number of trials.

It is RECOMMENDED that the Tester used to execute each test case
has the following capabilities:

   o  Ability to establish IGP adjacencies and advertise a single
      IGP topology to one or more peers.

   o  Ability to insert a timestamp in each data packet's IP
      payload.

   o  An internal time clock to control timestamping, time
      measurements, and time calculations.

   o  Ability to distinguish traffic load received on the Preferred
      and Next-Best Interfaces .

   o  Ability to disable or tune specific Layer-2 and Layer-3
      protocol functions on any interface(s).

The Tester MAY be capable of making non-data-plane convergence
observations and using those observations for measurements. The
Tester MAY be capable of sending and receiving multiple traffic
Streams .

Also see Section for method-specific
capabilities.

Different convergence time benchmark methods MAY be used to measure
convergence time benchmark metrics. The Tester capabilities are
important criteria to select a specific convergence time benchmark
method. The criteria to select a specific benchmark method include,
but are not limited to:

   Tester capabilities:    Sampling Interval, number of Stream
                           statistics to collect
   Measurement accuracy:   Sampling Interval, Offered Load
   Test specification:     number of routes
   DUT capabilities:       Throughput

The Offered Load SHOULD consist of a single Stream . If sending
multiple Streams, the measured
packet loss statistics for all Streams MUST be added together.

In order to verify Full Convergence completion and the Sustained
Convergence Validation Time, the Tester MUST measure Forwarding Rate
each Packet Sampling Interval.

The total number of packets lost between the start of the traffic
and the end of the Sustained Convergence Validation Time is used to
calculate the Loss-Derived Convergence Time.

The Loss-Derived Method can be used to measure the Loss-Derived
Convergence Time, which is the average convergence time over all
routes, and to measure the Loss-Derived Loss of Connectivity Period,
which is the average Route Loss of Connectivity Period over all
routes.

The measurement accuracy of the Loss-Derived Method is equal to
the time between two consecutive packets to the same route.

The Offered Load SHOULD consist of a single Stream. If sending
multiple Streams, the measured traffic rate statistics for all
Streams MUST be added together.

The Tester measures Forwarding Rate each Sampling Interval. The
Packet Sampling Interval influences the observation of the different
convergence time instants. If the Packet Sampling Interval is large
compared to the time between the convergence time instants, then the
different time instants may not be easily identifiable from the
Forwarding Rate observation. The requirements for the Packet
Sampling Interval are specified in . The
RECOMMENDED value for the Packet Sampling Interval is 10
milliseconds. The Packet Sampling Interval MUST be reported.

The Rate-Derived Method SHOULD be used to measure First Route
Convergence Time and Full Convergence Time. It SHOULD NOT be used to
measure Loss of Connectivity Period (see Section ).

The measurement accuracy of the Rate-Derived Method for
transitions that occur for all routes at the same instant is equal
to the Packet Sampling Interval and for other transitions the
measurement accuracy is equal to the Packet Sampling Interval plus
the time between two consecutive packets to the same destination.
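The two accuracy bounds above can be made concrete with a small numeric sketch (the sampling interval, load, and route count are assumed example values, not taken from this document):

```python
# Rate-Derived Method accuracy bounds (example numbers only).
sampling_interval = 0.010   # Packet Sampling Interval, seconds
offered_load = 144_000      # packets per second over all routes
num_routes = 10_000

# Time between two consecutive packets to the same destination,
# assuming the Offered Load is spread equally over all routes.
inter_packet_gap = num_routes / offered_load

# Transitions occurring for all routes at the same instant:
accuracy_simultaneous = sampling_interval
# Other transitions additionally carry the per-destination gap:
accuracy_other = sampling_interval + inter_packet_gap

print(round(accuracy_simultaneous, 4), round(accuracy_other, 4))
```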
The latter is the case since packets are sent in a particular order
to all destinations in a stream and when part of the routes
experience packet loss, it is unknown where in the transmit cycle
packets to these routes are sent. This uncertainty adds to the
error.

The Offered Load consists of multiple Streams. The Tester MUST
measure packet loss for each Stream separately.

In order to verify Full Convergence completion and the Sustained
Convergence Validation Time, the Tester MUST measure packet loss
each Packet Sampling Interval. This measurement at each Packet
Sampling Interval MAY be per Stream.

Only the total packet loss measured per Stream at the end of the
Sustained Convergence Validation Time is used to calculate the
benchmark metrics with this method.

The Route-Specific Loss-Derived Method SHOULD be used to measure
Route-Specific Convergence Times. It is the RECOMMENDED method to
measure Route Loss of Connectivity Period.

Under the conditions explained in Section , First Route Convergence
Time and Full Convergence Time as benchmarked using the Rate-Derived
Method may be equal to the minimum and maximum, respectively, of the
Route-Specific Convergence Times.

The measurement accuracy of the Route-Specific Loss-Derived
Method is equal to the time between two consecutive packets to the
same route.

For each test case, it is recommended that the reporting tables
below are completed and all time values SHOULD be reported with
resolution as specified in .

   Parameter                                   Units
   ---------------------------------------------------------------
   Test Case                                   test case number
   Test Topology                               (1, 2, 3, 4, or 5)
   IGP                                         (ISIS, OSPF, other)
   Interface Type                              (GigE, POS, ATM, other)
   Packet Size offered to DUT                  bytes
   Offered Load                                packets per second
   IGP Routes advertised to DUT                number of IGP routes
   Nodes in emulated network                   number of nodes
   Number of Routes measured                   number of routes
   Packet Sampling Interval on Tester          seconds
   Forwarding Delay Threshold                  seconds
   Timer Values configured on DUT:
     Interface failure indication delay        seconds
     IGP Hello Timer                           seconds
     IGP Dead-Interval or hold-time            seconds
     LSA Generation Delay                      seconds
     LSA Flood Packet Pacing                   seconds
     LSA Retransmission Packet Pacing          seconds
     SPF Delay                                 seconds

Test Details:

   If the Offered Load matches a subset of routes, describe how this
subset is selected.

   Describe how the Convergence Event is applied; does it cause
   instantaneous traffic loss or not.

Complete the table below for the initial Convergence Event and the
reversion Convergence Event.

   Parameter                                    Units
   -----------------------------------------------------------------
   Convergence Event                            (initial or reversion)
   Traffic Forwarding Metrics:
     Total number of packets offered to DUT     number of packets
     Total number of packets forwarded by DUT   number of packets
     Connectivity Packet Loss                   number of packets
     Convergence Packet Loss                    number of packets
     Out-of-Order Packets                       number of packets
     Duplicate Packets                          number of packets
   Convergence Benchmarks:
     Rate-Derived Method:
       First Route Convergence Time             seconds
       Full Convergence Time                    seconds
     Loss-Derived Method:
       Loss-Derived Convergence Time            seconds
     Route-Specific Loss-Derived Method:
       Route-Specific Convergence Time[n]       array of seconds
       Minimum R-S Convergence Time             seconds
       Maximum R-S Convergence Time             seconds
       Median R-S Convergence Time              seconds
       Average R-S Convergence Time             seconds
   Loss of Connectivity Benchmarks:
     Loss-Derived Method:
       Loss-Derived Loss of Connectivity Period seconds
     Route-Specific Loss-Derived Method:
       Route LoC Period[n]                      array of seconds
       Minimum Route LoC Period                 seconds
       Maximum Route LoC Period                 seconds
       Median Route LoC Period                  seconds
       Average Route LoC Period                 seconds

It is RECOMMENDED that all applicable test cases be performed for
best characterization of the DUT. The test cases follow a generic
procedure tailored to the specific DUT configuration and Convergence
Event . This generic procedure is as
follows:

   1.  Establish DUT and Tester configurations and advertise an IGP
       topology from Tester to DUT.
   2.  Send Offered Load from Tester to DUT on ingress interface.
   3.  Verify traffic is routed correctly.
   4.  Introduce Convergence Event .
   5.  Measure First Route Convergence Time .
   6.  Measure Full Convergence Time .
   7.  Stop Offered Load.
   8.  Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period .
   9.  Wait sufficient time for queues to drain.
   10. Restart Offered Load.
   11. Reverse Convergence Event.
   12. Measure First Route Convergence Time.
   13. Measure Full Convergence Time.
   14. Stop Offered Load.
   15. Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period.

Objective

   To obtain the IGP convergence times due to a Local Interface
failure event.

Procedure

   1.  Advertise an IGP topology from Tester to DUT using the
       topology shown in Figure .
   2.  Send Offered Load from Tester to DUT on ingress interface.
   3.  Verify traffic is forwarded over Preferred Egress Interface.
   4.  Remove link on DUT's Preferred Egress Interface. This is the
       Convergence Event.
   5.  Measure First Route Convergence Time.
   6.  Measure Full Convergence Time.
   7.  Stop Offered Load.
   8.  Measure Route-Specific Convergence Times and Loss-Derived
       Convergence Time.
   9.  Wait sufficient time for queues to drain.
   10. Restart Offered Load.
   11. Restore link on DUT's Preferred Egress Interface.
   12. Measure First Route Convergence Time.
   13. Measure Full Convergence Time.
   14. Stop Offered Load.
   15. Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period.

Results

   The measured IGP convergence time may be influenced by the link
failure indication time, LSA/LSP delay, LSA/LSP generation time,
LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
routing and forwarding tables update time .

Objective

   To obtain the IGP convergence time due to a Remote Interface
   failure event.

Procedure

   1.  Advertise an IGP topology from Tester to SUT using the
       topology shown in Figure 2.
   2.  Send Offered Load from Tester to SUT on ingress interface.
   3.  Verify traffic is forwarded over Preferred Egress Interface.
   4.  Remove link on Tester's interface connected to SUT's
       Preferred Egress Interface. This is the Convergence Event.
   5.  Measure First Route Convergence Time.
   6.  Measure Full Convergence Time.
   7.  Stop Offered Load.
   8.  Measure Route-Specific Convergence Times and Loss-Derived
       Convergence Time.
   9.  Wait sufficient time for queues to drain.
   10. Restart Offered Load.
   11. Restore link on Tester's interface connected to SUT's
       Preferred Egress Interface.
   12. Measure First Route Convergence Time.
   13. Measure Full Convergence Time.
   14. Stop Offered Load.
   15. Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period.

Results

   The measured IGP convergence time may be influenced by the link
failure indication time, LSA/LSP delay, LSA/LSP generation time,
LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
routing and forwarding tables update time. This test case may
produce Stale Forwarding due to a
transient microloop between R1 and R2 during convergence, which may
increase the measured convergence times and loss of connectivity
periods.

Objective

   To obtain the IGP convergence time due to a Local Interface link
   failure event of an ECMP Member.

Procedure

   1.  Advertise an IGP topology from Tester to DUT using the test
       setup shown in Figure 3.
   2.  Send Offered Load from Tester to DUT on ingress interface.
   3.  Verify traffic is forwarded over the DUT's ECMP member
       interface that will be failed in the next step.
   4.  Remove link on one of the DUT's ECMP member interfaces. This
       is the Convergence Event.
   5.  Measure First Route Convergence Time.
   6.  Measure Full Convergence Time.
   7.  Stop Offered Load.
   8.  Measure Route-Specific Convergence Times and Loss-Derived
       Convergence Time. At the same time measure Out-of-Order
       Packets and Duplicate Packets .
   9.  Wait sufficient time for queues to drain.
   10. Restart Offered Load.
   11. Restore link on DUT's ECMP member interface.
   12. Measure First Route Convergence Time.
   13. Measure Full Convergence Time.
   14. Stop Offered Load.
   15. Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period. At the same time measure Out-of-Order Packets and
       Duplicate Packets .

Results

   The measured IGP Convergence time may be influenced by link
failure indication time, LSA/LSP delay, LSA/LSP generation time,
LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
routing and forwarding tables update time .

Objective

   To obtain the IGP convergence time due to a Remote Interface link
   failure event for an ECMP Member.

Procedure

   1.  Advertise an IGP topology from Tester to DUT using the test
       setup shown in Figure 4.
   2.  Send Offered Load from Tester to DUT on ingress interface.
   3.  Verify traffic is forwarded over the DUT's ECMP member
       interface that will be failed in the next step.
   4.  Remove link on Tester's interface to R2. This is the
       Convergence Event.
   5.  Measure First Route Convergence Time.
   6.  Measure Full Convergence Time.
   7.  Stop Offered Load.
   8.  Measure Route-Specific Convergence Times and Loss-Derived
       Convergence Time. At the same time measure Out-of-Order
       Packets and Duplicate Packets .
   9.  Wait sufficient time for queues to drain.
   10. Restart Offered Load.
   11. Restore link on Tester's interface to R2.
   12. Measure First Route Convergence Time.
   13. Measure Full Convergence Time.
   14. Stop Offered Load.
   15. Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period. At the same time measure Out-of-Order Packets and
       Duplicate Packets .

Results

   The measured IGP convergence time may be influenced by the link
failure indication time, LSA/LSP delay, LSA/LSP generation time,
LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
routing and forwarding tables update time. This test case may
produce Stale Forwarding due to a
transient microloop between R1 and R2 during convergence, which may
increase the measured convergence times and loss of connectivity
periods.

Objective

   To obtain the IGP convergence time due to a local link failure
   event for a member of a Parallel Link. The links can be used for
   data load balancing.

Procedure

   1.  Advertise an IGP topology from Tester to DUT using the test
       setup shown in Figure 5.
   2.  Send Offered Load from Tester to DUT on ingress interface.
   3.  Verify traffic is forwarded over the Parallel Link member
       that will be failed in the next step.
   4.  Remove link on one of the DUT's Parallel Link member
       interfaces. This is the Convergence Event.
   5.  Measure First Route Convergence Time.
   6.  Measure Full Convergence Time.
   7.  Stop Offered Load.
   8.  Measure Route-Specific Convergence Times and Loss-Derived
       Convergence Time. At the same time measure Out-of-Order
       Packets and Duplicate Packets .
   9.  Wait sufficient time for queues to drain.
   10. Restart Offered Load.
   11. Restore link on DUT's Parallel Link member interface.
   12. Measure First Route Convergence Time.
   13. Measure Full Convergence Time.
   14. Stop Offered Load.
   15. Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period. At the same time measure Out-of-Order Packets and
       Duplicate Packets .

Results

   The measured IGP convergence time may be influenced by the link
failure indication time, LSA/LSP delay, LSA/LSP generation time,
LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
routing and forwarding tables update time .

Objective

   To obtain the IGP convergence time due to a local layer 2 session
   loss.

Procedure

   1.  Advertise an IGP topology from Tester to DUT using the
       topology shown in Figure 1.
   2.  Send Offered Load from Tester to DUT on ingress interface.
   3.  Verify traffic is routed over Preferred Egress Interface.
   4.  Remove Layer 2 session from DUT's Preferred Egress Interface.
       This is the Convergence Event.
   5.  Measure First Route Convergence Time.
   6.  Measure Full Convergence Time.
   7.  Stop Offered Load.
   8.  Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period.
   9.  Wait sufficient time for queues to drain.
   10. Restart Offered Load.
   11. Restore Layer 2 session on DUT's Preferred Egress Interface.
   12. Measure First Route Convergence Time.
   13. Measure Full Convergence Time.
   14. Stop Offered Load.
   15. Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period.

Results

   The measured IGP Convergence time may be influenced by the Layer
   2 failure indication time, LSA/LSP delay, LSA/LSP generation time,
   LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
   routing and forwarding tables update time .

Discussion

   Configure IGP timers such that the IGP adjacency does not time
out before layer 2 failure is detected.To measure convergence time, traffic SHOULD start dropping on the
Preferred Egress Interface on the instant the layer 2 session is
removed. Alternatively the Tester SHOULD record the time the instant
layer 2 session is removed and traffic loss SHOULD only be measured
on the Next-Best Egress Interface. For loss-derived benchmarks the
time of the Start Traffic Instant SHOULD be recorded as well. See
Section .ObjectiveTo obtain the IGP convergence time due to loss of an IGP
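For the loss-derived benchmarks, convergence time is derived from total packet loss and the (constant) Offered Load rate. The following is a minimal non-normative sketch of that arithmetic; the packet counts and rate used in the example are hypothetical.

```python
def loss_derived_convergence_time(lost_packets, offered_load_pps):
    """Loss-Derived Convergence Time: total packets lost divided by
    the constant offered load rate (packets per second), yielding
    the equivalent duration of forwarding interruption in seconds.
    """
    if offered_load_pps <= 0:
        raise ValueError("offered load rate must be positive")
    return lost_packets / offered_load_pps

# Example: 25,000 packets lost at an Offered Load of 100,000 pps
# corresponds to 0.25 s of equivalent loss of connectivity.
t = loss_derived_convergence_time(25_000, 100_000)
```

This derivation is only valid when the Offered Load is constant for the duration of the test, which is why the Start Traffic Instant must be recorded.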
Convergence Due to Loss of IGP Adjacency

Objective

To obtain the IGP convergence time due to loss of an IGP adjacency.

Procedure

1. Advertise an IGP topology from Tester to DUT using the topology shown in Figure 1.
2. Send Offered Load from Tester to DUT on ingress interface.
3. Verify traffic is routed over the Preferred Egress Interface.
4. Remove the IGP adjacency from the Preferred Egress Interface while the Layer 2 session MUST be maintained. This is the Convergence Event.
5. Measure First Route Convergence Time.
6. Measure Full Convergence Time.
7. Stop Offered Load.
8. Measure Route-Specific Convergence Times, Loss-Derived Convergence Time, Route LoC Periods, and Loss-Derived LoC Period.
9. Wait sufficient time for queues to drain.
10. Restart Offered Load.
11. Restore the IGP session on the DUT's Preferred Egress Interface.
12. Measure First Route Convergence Time.
13. Measure Full Convergence Time.
14. Stop Offered Load.
15. Measure Route-Specific Convergence Times, Loss-Derived Convergence Time, Route LoC Periods, and Loss-Derived LoC Period.

Results

The measured IGP convergence time may be influenced by the IGP Hello Interval, IGP Dead Interval, LSA/LSP delay, LSA/LSP generation time, LSA/LSP flood packet pacing, SPF delay, SPF execution time, and routing and forwarding tables update time.

Discussion

Configure Layer 2 such that the Layer 2 session does not time out before the IGP adjacency failure is detected.

To measure convergence time, traffic SHOULD start dropping on the Preferred Egress Interface at the instant the IGP adjacency is removed. Alternatively, the Tester SHOULD record the instant the IGP adjacency is removed, and traffic loss SHOULD only be measured on the Next-Best Egress Interface. For loss-derived benchmarks the time of the Start Traffic Instant SHOULD be recorded as well. See Section .
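The Route-Specific, First Route, and Full Convergence Time benchmarks used throughout these procedures relate as minimum and maximum over per-destination measurements. A non-normative sketch, assuming the Tester records the Convergence Event Instant and, per destination stream, the instant of the last lost packet (prefixes and timestamps below are hypothetical):

```python
def route_convergence_times(event_instant, last_loss_instant):
    """Derive per-route convergence times from the Convergence Event
    Instant and the per-stream instant of the last lost packet.
    Illustrative only; a real Tester derives these instants from
    per-stream receive rates.

    Returns (route_specific, first_route, full_convergence).
    """
    route_specific = {
        route: t_loss - event_instant
        for route, t_loss in last_loss_instant.items()
    }
    first_route = min(route_specific.values())
    full_convergence = max(route_specific.values())
    return route_specific, first_route, full_convergence

per_route, first, full = route_convergence_times(
    10.0, {"10.0.1.0/24": 10.2, "10.0.2.0/24": 10.5})
```

First Route Convergence Time is the fastest-recovering destination; Full Convergence Time is the slowest, which is why both are measured in each procedure.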
Convergence Due to Route Withdrawal

Objective

To obtain the IGP convergence time due to route withdrawal.

Procedure

1. Advertise an IGP topology from Tester to DUT using the topology shown in Figure 1. The routes that will be withdrawn MUST be a set of leaf routes advertised by at least two nodes in the emulated topology. The topology SHOULD be such that before the withdrawal the DUT prefers the leaf routes advertised by a node "nodeA" via the Preferred Egress Interface, and after the withdrawal the DUT prefers the leaf routes advertised by a node "nodeB" via the Next-Best Egress Interface.
2. Send Offered Load from Tester to DUT on Ingress Interface.
3. Verify traffic is routed over the Preferred Egress Interface.
4. The Tester withdraws the set of IGP leaf routes from nodeA. This is the Convergence Event. The withdrawal update message SHOULD be a single unfragmented packet. If the routes cannot be withdrawn by a single packet, the messages SHOULD be sent using the same pacing characteristics as the DUT. The Tester MAY record the time it sends the withdrawal message(s).
5. Measure First Route Convergence Time.
6. Measure Full Convergence Time.
7. Stop Offered Load.
8. Measure Route-Specific Convergence Times, Loss-Derived Convergence Time, Route LoC Periods, and Loss-Derived LoC Period.
9. Wait sufficient time for queues to drain.
10. Restart Offered Load.
11. Re-advertise the set of withdrawn IGP leaf routes from nodeA emulated by the Tester. The update message SHOULD be a single unfragmented packet. If the routes cannot be advertised by a single packet, the messages SHOULD be sent using the same pacing characteristics as the DUT. The Tester MAY record the time it sends the update message(s).
12. Measure First Route Convergence Time.
13. Measure Full Convergence Time.
14. Stop Offered Load.
15. Measure Route-Specific Convergence Times, Loss-Derived Convergence Time, Route LoC Periods, and Loss-Derived LoC Period.

Results

The measured IGP convergence time is influenced by SPF or route calculation delay, SPF or route calculation execution time, and routing and forwarding tables update time.

Discussion

To measure convergence time, traffic SHOULD start dropping on the Preferred Egress Interface at the instant the routes are withdrawn by the Tester. Alternatively, the Tester SHOULD record the instant the routes are withdrawn, and traffic loss SHOULD only be measured on the Next-Best Egress Interface. For loss-derived benchmarks the time of the Start Traffic Instant SHOULD be recorded as well. See Section .
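When the withdrawn routes do not fit in a single unfragmented packet, the procedure requires the Tester to pace the messages like the DUT's own flooding. A non-normative scheduling sketch (batch size, pacing interval, and the `send` callback are hypothetical; message encoding is protocol specific and omitted):

```python
import time

def send_paced_withdrawals(routes, per_packet, pacing_interval_s, send):
    """Batch leaf routes into withdrawal messages carrying at most
    `per_packet` routes each, and emit them via `send`, spacing
    successive messages by the DUT's flood packet pacing interval.
    Illustrates the scheduling only, not the protocol encoding.
    """
    batches = [routes[i:i + per_packet]
               for i in range(0, len(routes), per_packet)]
    for n, batch in enumerate(batches):
        if n:  # pace every message after the first
            time.sleep(pacing_interval_s)
        send(batch)
    return len(batches)

sent = []
count = send_paced_withdrawals(
    [f"10.0.{i}.0/24" for i in range(10)], 4, 0.0, sent.append)
```

With 10 routes and 4 routes per message this produces three messages; recording the send time of the first message gives the Convergence Event Instant.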
Convergence Due to Local Interface Administrative Shutdown

Objective

To obtain the IGP convergence time due to taking the DUT's Local Interface administratively out of service.

Procedure

1. Advertise an IGP topology from Tester to DUT using the topology shown in Figure 1.
2. Send Offered Load from Tester to DUT on ingress interface.
3. Verify traffic is routed over the Preferred Egress Interface.
4. Take the DUT's Preferred Egress Interface administratively out of service. This is the Convergence Event.
5. Measure First Route Convergence Time.
6. Measure Full Convergence Time.
7. Stop Offered Load.
8. Measure Route-Specific Convergence Times, Loss-Derived Convergence Time, Route LoC Periods, and Loss-Derived LoC Period.
9. Wait sufficient time for queues to drain.
10. Restart Offered Load.
11. Restore the Preferred Egress Interface by administratively enabling the interface.
12. Measure First Route Convergence Time.
13. Measure Full Convergence Time.
14. Stop Offered Load.
15. Measure Route-Specific Convergence Times, Loss-Derived Convergence Time, Route LoC Periods, and Loss-Derived LoC Period.

It is possible that no measured packet loss will be observed for this test case.

Results

The measured IGP convergence time may be influenced by LSA/LSP delay, LSA/LSP generation time, LSA/LSP flood packet pacing, SPF delay, SPF execution time, and routing and forwarding tables update time.
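The "wait sufficient time for queues to drain" steps in each procedure can be bounded with a simple worst-case calculation: a completely full egress buffer drains in at most its size divided by the link rate. A non-normative sketch with hypothetical buffer and link values:

```python
def worst_case_drain_time_s(buffer_bytes, link_rate_bps):
    """Upper bound on queue drain time: a full buffer of
    `buffer_bytes` drains at the egress link rate in bits per
    second. Illustrative only; practical waits should add margin
    for scheduling and shaping effects.
    """
    return (buffer_bytes * 8) / link_rate_bps

# Example: a 125 kB buffer on a 1 Gb/s link drains in at most 1 ms.
t = worst_case_drain_time_s(125_000, 1_000_000_000)
```

Waiting at least this long before restarting the Offered Load avoids counting queued packets from the previous phase against the next measurement.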
Convergence Due to Route Cost Change

Objective

To obtain the IGP convergence time due to route cost change.

Procedure

1. Advertise an IGP topology from Tester to DUT using the topology shown in Figure 1.
2. Send Offered Load from Tester to DUT on ingress interface.
3. Verify traffic is routed over the Preferred Egress Interface.
4. The Tester, emulating the neighbor node, increases the cost for all IGP routes at the DUT's Preferred Egress Interface so that the Next-Best Egress Interface becomes the preferred path. The update message advertising the higher cost MUST be a single unfragmented packet. This is the Convergence Event. The Tester MAY record the time it sends the update message advertising the higher cost on the Preferred Egress Interface.
5. Measure First Route Convergence Time.
6. Measure Full Convergence Time.
7. Stop Offered Load.
8. Measure Route-Specific Convergence Times, Loss-Derived Convergence Time, Route LoC Periods, and Loss-Derived LoC Period.
9. Wait sufficient time for queues to drain.
10. Restart Offered Load.
11. The Tester, emulating the neighbor node, decreases the cost for all IGP routes at the DUT's Preferred Egress Interface so that the Preferred Egress Interface becomes the preferred path. The update message advertising the lower cost MUST be a single unfragmented packet.
12. Measure First Route Convergence Time.
13. Measure Full Convergence Time.
14. Stop Offered Load.
15. Measure Route-Specific Convergence Times, Loss-Derived Convergence Time, Route LoC Periods, and Loss-Derived LoC Period.

Results

The measured IGP convergence time may be influenced by SPF delay, SPF execution time, and routing and forwarding tables update time.

Discussion

To measure convergence time, traffic SHOULD start dropping on the Preferred Egress Interface at the instant the cost is changed by the Tester. Alternatively, the Tester SHOULD record the instant the cost is changed, and traffic loss SHOULD only be measured on the Next-Best Egress Interface. For loss-derived benchmarks the time of the Start Traffic Instant SHOULD be recorded as well. See Section .
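The cost-change Convergence Event works because the DUT re-runs SPF and re-ranks paths by total cost. A toy Dijkstra over an assumed two-egress topology (node names and costs are hypothetical, purely to illustrate the preference flip, not the DUT's implementation):

```python
import heapq

def spf_cost(graph, src, dst):
    """Minimal Dijkstra SPF over a {node: {neighbor: cost}} map."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")

# Assumed toy topology: the DUT reaches destination D either via
# neighbor A (Preferred Egress) or neighbor B (Next-Best Egress).
topo = {"A": {"D": 1}, "B": {"D": 1}}

def best_egress(topo, cost_to_a, cost_to_b):
    via_a = cost_to_a + spf_cost(topo, "A", "D")
    via_b = cost_to_b + spf_cost(topo, "B", "D")
    # Lower total IGP cost wins; a tie keeps the current preference.
    return "A" if via_a <= via_b else "B"

before = best_egress(topo, 1, 2)   # via A costs 2, via B costs 3
# The Tester, emulating A, advertises a higher cost toward D:
topo["A"]["D"] = 10
after = best_egress(topo, 1, 2)    # via A costs 11, via B costs 3
```

Because only the advertised cost changes (no link or adjacency fails), the measured time isolates SPF delay, SPF execution, and table-update components.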
Security Considerations

Benchmarking activities as described in this memo are limited to technology characterization using controlled stimuli in a laboratory environment, with dedicated address space and the constraints specified in the sections above.

The benchmarking network topology will be an independent test setup and MUST NOT be connected to devices that may forward the test traffic into a production network or misroute traffic to the test management network.

Further, benchmarking is performed on a "black-box" basis, relying solely on measurements observable external to the DUT/SUT.

Special capabilities SHOULD NOT exist in the DUT/SUT specifically for benchmarking purposes. Any implications for network security arising from the DUT/SUT SHOULD be identical in the lab and in production networks.

IANA Considerations

This document requires no IANA considerations.

Acknowledgements

Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David Ward,
Peter De Vriendt, and the BMWG for their contributions to this work.

References

[RFC1195] Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and Dual Environments", RFC 1195, December 1990.

[RFC1242] Bradner, S., "Benchmarking Terminology for Network Interconnection Devices", RFC 1242, July 1991.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2285] Mandeville, R., "Benchmarking Terminology for LAN Switching Devices", RFC 2285, February 1998.

[RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328, April 1998.

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

[RFC3357] Koodli, R. and R. Ravikanth, "One-way Loss Pattern Sample Metrics", RFC 3357, August 2002.

[RFC4689] Poretsky, S., Perser, J., Erramilli, S., and S. Khurana, "Terminology for Benchmarking Network-layer Traffic Control Mechanisms", RFC 4689, October 2006.

[RFC5308] Hopps, C., "Routing IPv6 with IS-IS", RFC 5308, October 2008.

[RFC5340] Coltun, R., Ferguson, D., Moy, J., and A. Lindem, "OSPF for IPv6", RFC 5340, July 2008.

[TERM] Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology for Benchmarking Link-State IGP Data Plane Route Convergence", Work in Progress.

[CONS] Poretsky, S., Imhoff, B., and K. Michielsen, "Considerations for Benchmarking Link-State IGP Data Plane Route Convergence", Work in Progress.