- Catalyst 65xx Notes based on various Cisco docs via http://www.cisco.com.
- Contents:
- Types of Chassis, Supervisor Engines and Line Cards
- 10-Gigabit-Ethernet standards
- Details of an example line card: WS-X6908-10G-2T
- Excerpts from Cat6500-E Chassis
- Datasheet
- Backplane
- Supervisor Engines
- PFC - Policy Feature Card
- Types of Line Cards
- DFC - Distributed Forwarding Card
- A Packet being switched from ingress to egress through a Cat65xx
- Virtual Switching System
- EtherChannel
- Various Commands for Cat65xx
- View VSS Switch info.
- Switching Fabric
- Redundancy (Stateful SwitchOver and NonStop Forwarding)
- Port Speed & Duplex
- 802.3x Flow Control, PAUSE Frames
- Power Management and Environmental Monitoring
- (***'show platform'*** - fabric/hardware utilization)
- Which link on an EtherChannel would be used
- mLACP for Server Access
- Hardware Layer3 Switching
- Traffic and Storm-Control
- General info on Buffers, Queues, & Thresholds
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Types of Chassis, Supervisor Engines and Line Cards
- Chassis
- WS-C6503-E 3-slot 4-RU
- WS-C6504-E 4-slot 5-RU
- WS-C6506-E 6-slot 12-RU
- WS-C6509-E 9-slot 15-RU
- WS-C6509-V-E 9-slot vertical
- WS-C6513-E 13-slot 20-RU
- Supervisors
- Supervisor 2T - (2) 10GbE ports / MSFC5 with PFC4
- Supervisor 2T - (2) 10GbE ports / MSFC5 with PFC4XL
- Virtual Switching Supervisor 720 - (2) 10GbE ports / MSFC3 XL
- Virtual Switching Supervisor 720 - (2) 10GbE ports / MSFC3
- Sup720 Fabric - (2) GbE ports / MSFC3 with PFC3BXL
- Sup720 Fabric - (2) GbE ports / MSFC3 with PFC3B
- Sup32 - (2) 10GbE ports with PFC3B
- Sup32 - (8) GbE ports with PFC3B
- Line Cards - 10Gb Ethernet
- 8 port w/ DFC4
- 8 port w/ DFC4XL
- 16 port w/ DFC4
- 16 port w/ DFC4XL
- 16 port (copper) w/ DFC4
- 16 port (copper) w/ DFC4XL
- 16 port w/ DFC3C/DFC3CXL
- 16 port (copper) w/ DFC3C/DFC3CXL
- 8 port w/ DFC3C
- 4 port w/o DFC
- Line Cards - Gb Ethernet
- 48 port w/ DFC4 fabric-enabled
- 48 port w/ DFC4XL fabric-enabled
- 24 port w/ DFC4 fabric-enabled
- 24 port w/ DFC4XL fabric-enabled
- 24 port fabric-enabled
- 48 port fabric-enabled
- Line Cards - 10/100/1000 Ethernet
- 48 port w/ DFC4 fabric-enabled
- 48 port w/ DFC4XL fabric-enabled
- 48 port fabric-enabled
- MSFC = Multi-layer Switching Feature Card
- PFC = Policy Feature Card
- DFC = Distributed Feature Card
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ 10-Gigabit-Ethernet standards
- 802.3ae-2002 - 10GbE standard (fiber), 2002.
- 802.3an-2006 - 10GBaseT standard (10GbE over copper), 2006.
- - Shielded Cat6, 6a, Cat7 up to 100 meters (330 feet).
- - Cat6 UTP up to 55 meters (181 feet).
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Details of an example line card: WS-X6908-10G-2T
- Source: http://www.cisco.com/c/en/us/products/switches/catalyst-6500-series-switches/datasheet-listing.html
- - Backplane connection at 80Gbps full duplex.
- - Forwarding: IPv4 60Mpps ; IPv6 30Mpps.
- - ACL(s): Non-XL: 48k security, 16k QoS ; XL: 192k Security, 64K QoS.
- - VLAN(s): 16k supported.
- - MAC table: 128k addresses.
- - Port buffers: 256 MB per port. 128MB ingress, 128MB egress.
- - DFC4XL is an attached daughter card:
- - Supports the FIB.
- - IPv4 Unicast / MPLS entries: 1024k (DFC4 256k).
- - IPv6 Unicast / IPv4 multicast: 512k.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Excerpts from Cat6500-E Chassis Datasheet.
- Source: http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/data_sheet_c78-708665.html
- - Up to 2 terabits per second system b/w capacity. (? 13 slots x 80Gbps per slot, for ingress and for egress ?)
- - 80Gbps b/w per slot.
- - Slot capacities: 3, 4, 6, 9, 9-vertical, 13.
- - Standby hot-sync: from 50 to 200ms switch-over
- - Redundant power supplies and hot-swappable fan-trays.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Excerpts from Cat6500 Architecture; Backplane
- Source: http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/prod_white_paper0900aecd80673385.html
- - (2) backplanes
- - 32 Gbps shared switching bus; interconnects line-cards.
- - Line cards also connect over high speed switching path: crossbar switching fabric.
- - First generation switching fabric: switching capacity: 256 Gbps
- - Supervisor 720: crossbar switching fabric integrated directly into the Sup board itself.
- - Eliminates need for standalone switch fabric module.
- - Switching capacity: 720 Gbps.
- - Some chassis allow for line-cards to have (2) channels in and out of the switching fabric.
- - Crossbar switching fabric. What does it do? Allows each line card to forward and receive data to
- every other line card over a unique set of transmission paths.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Excerpts from Cat6500 Architecture; Supervisor Engines
- - Same link as Backplane section; Brief info on Sup cards:
- - Sup32
- - Switch Processor (SP): 256MB bootflash. Max 512MB.
- - 64MB bootflash for Route Processor (RP).
- - DRAM is 256MB for both RP & SP.
- - NVRAM = 2MB.
- - Sup 720
- - Integrates Crossbar fabric, PFC, & MSFC.
- - PFC & MSFC no longer optional. (In earlier Sups, they were?).
- - Increased bandwidth support on the crossbar switching fabric.
- - Backward compatible with earlier line cards. (Preserve line card investments).
- - DRAM: 1GB switch processor. 1GB route processor.
- - SP bootflash = 512MB default.
- - RP bootflash = 64MB.
- - NVRAM = 2MB.
- - Later versions of the Sup 720 support additional features, for example:
- - MPLS support in hardware.
- - EoMPLS
- - Security ACL hit counters
- - Multipath URPF check performed in hardware.
- - Etc... See the Architecture White Paper (link above) for more info.
- - MSFC handles control plane functions; e.g. Layer 3 routing protocols.
- - Is integrated onto the Supervisor card.
- - MSFC3 supports forwarding rates of up to 500Kpps. (I assume this is re: control-plane traffic).
- - The Route Processor and Switch Processor are located on the MSFC3.
- - RP manages: L3 routing protocols, ARP, ICMP [reception/response], SVI(s), etc.
- - SP manages: L2 such as Spanning Tree, VTP, CDP, pushing FIB tables to the PFC.
- - MSFC daughter card doesn't do forwarding. Creates the CEF table from the routing protocols,
- then pushes to any PFC or DFC(s) present.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Excerpts from Cat6500 Architecture; PFC - Policy Feature Card
- Catalyst 6500 Architecture, Policy Feature Card
- - Daughter card that sits on the Supervisor Card.
- - Contains ASIC(s) to accelerate L2/L3 switching.
- - Apparently the PFC is the 'transit packet' forwarding engine/mechanism.
- - Is this an alternative to the DFC? Meaning, if a DFC is added, is the PFC disabled?
- - PFC3BXL supports up to 1 million routes in its forwarding tables.
- - Also supports features such as the following (many, if not all, processed in hardware):
- - NAT/PAT, uRPF, GRE, MPLS, BiDir PIM, Egress Policing, etc.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Excerpts from Cat6500 Architecture; Types of Line Cards
- - Classic. Single connection to the 32Gbps shared bus.
- - CEF256. Connection to both the 32Gbps shared bus and the Switching Fabric. Uses the
- switching fabric if a Sup 720 is present. Connects at 8Gbps.
- - CEF720. Same as CEF256. Connects at 20Gbps.
- - dCEF256. Does not connect to shared bus. Requires switch fabric. Connects at 8Gbps.
- - dCEF720. Does not connect to shared bus. Requires switch fabric. Connects at 20Gbps.
- - Single or Dual Fabric Line Cards. i.e. Cat6513:
- - Slots 1 through 8 are single channel. (So these slots can't accept dual channel line cards).
- - Slots 9 through 13 are dual channel fabric.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ DFC - Distributed Forwarding Card
- - Used on selected line cards to support local switching.
- - That is, switching within the line card itself, where the packet doesn't cross the dBUS or switch fabric.
- - Uses same ASICs as found on the PFC(s).
- - Supports the layer2 and layer3 switching of frames.
- - Holds copies of ACL(s) (for QoS and Security) for local processing of ACLs.
- - Supported only by certain Supervisors and different generations cannot be mixed.
- A note on ACL (QoS / Security) processing.
- - The Cisco doc, “...6500 Architecture Whitepaper...” says there is no performance hit when ACL(s)
- are implemented as the lookups are performed in hardware.
- - ACL(s) are pushed into both the PFC & DFC.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ A Packet being switched from ingress to egress through a Cat65xx
- Source: "Cisco Catalyst Architecture White Paper at
- http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/prod_white_paper0900aecd80673385.html
- For the steps below see Figure 24 at the above link. Direct image link:
- http://www.cisco.com/c/dam/en/us/products/collateral/switches/catalyst-6500-series-switches/prod_white_paper0900aecd80673385.doc/_jcr_content/renditions/prod_white_paper0900aecd80673385_23.jpg
- (1) Packet arrives at port and is passed to fabric asic.
- (2) Fabric ASIC forwards packet header to the local DFC.
- (3) DFC performs Forwarding Lookup along with QoS/Security ACL to see if that processing is necessary.
- Results passed back to Fabric ASIC.
- (4) Fabric ASIC forwards packet over switch fabric to dst port.
- (5) Destination Line Card receives the packet and forwards out the port.
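The five steps above can be sketched in code. This is a minimal illustrative model, not Cisco code; the data structures and function names are assumptions made up for the sketch:

```python
def dfc_lookup(header, fib, acls):
    """Step 3: forwarding lookup plus QoS/security ACL check on the DFC."""
    if header["dst_ip"] in acls.get("deny", set()):
        return {"action": "drop"}
    egress = fib.get(header["dst_ip"])
    if egress is None:
        return {"action": "punt"}  # no FIB entry: hand off to the software path
    return {"action": "forward", "egress_port": egress}

def switch_packet(packet, fib, acls):
    # Steps 1-2: port -> fabric ASIC, which passes only the header to the DFC.
    header = packet["header"]
    result = dfc_lookup(header, fib, acls)  # step 3: result returned to fabric ASIC
    if result["action"] != "forward":
        return result
    # Steps 4-5: fabric ASIC sends the full packet across the crossbar to the
    # destination line card, which transmits it out the egress port.
    return {"action": "forward", "egress_port": result["egress_port"],
            "payload": packet["payload"]}
```

Note the key architectural point the sketch mirrors: only the header travels to the forwarding engine; the full packet crosses the fabric only after the lookup result comes back.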
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Virtual Switching System
- - Enables two physical switch chassis to behave as one single logical entity for packet forwarding
- and configuration management.
- - Of the (2) chassis one is in Active mode, the other is in Hot Standby mode.
- - Only the Active chassis runs the control plane; Layer2 & Layer3 switching and routing protocols.
- - Active chassis pushes control plane updates to the Standby unit.
- - Virtual Switch Link (VSL) connects to Supervisor on each Switch. Used for control traffic between
- Active and Standby. Will send data plane traffic if no direct (local) path.
- - Both chassis' forward [transit] packets at the data plane level.
- - Spanning-Tree dependency reduced: No physical loops since VSS pairs seen as single logical
- switch and Multi-chassis Etherchannel (MEC) can be used.
- - Still: must use STP tools (Root Guard, BPDU filter, etc) in case of accidentally created loops.
- - First Hop Redundancy Protocols are eliminated.
- - Need only a single IP address to serve as the gateway, rather than the 3 used by, e.g., HSRP.
- - System configuration is from the Active chassis. Interface names take on format: <type>
- <Switch/Slot/Port> ; E.g.: show interface te2/5/4 indicates: Switch 2, Slot 5, Port 4.
- - There are various Hardware (interface) requirements. See Catalyst Config docs for info.
- - Config changes can only occur on Active chassis. Changes are pushed to Standby chassis.
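The <type><Switch/Slot/Port> naming convention above can be parsed mechanically. A small sketch (the function name and regex are my own, not from any Cisco tool):

```python
import re

def parse_vss_interface(name: str):
    """Split a VSS-style name like 'te2/5/4' into (type, switch, slot, port).
    Per the convention above, te2/5/4 means Switch 2, Slot 5, Port 4."""
    m = re.fullmatch(r"([A-Za-z]+)(\d+)/(\d+)/(\d+)", name)
    if m is None:
        raise ValueError(f"not a <type><switch>/<slot>/<port> name: {name!r}")
    kind, sw, slot, port = m.groups()
    return kind, int(sw), int(slot), int(port)
```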
- Traditional
- -------------------------------------------
- [R1] [R2]
- | \ / |
- | \ / |
- | X |
- | / \ |
- | / \ |
- [D1]-----[D2]
- | \ / |
- | \ / |
- | X |
- | / \ |
- | / \ |
- |-->| /<-| \ |<<--similar circumstances for A2's ports.
- | [A1] | [A2]
- | |
- | |-vlan10 port is blocking
- | |-vlan20 port is operational
- |
- |-vlan10 operational
- |-vlan20 port is blocking
- VSS Physical View
- -------------------------------------------
- [R1] [R2]
- | \ /<-|-----\
- | \ / |<-----\
- | X | \--(2) ports form (1) MultiChassis EtherChannel.
- | / \ | -Same for R1's ports to D1 & D2 respectively.
- | / \ | -Same for A1's ports to ...
- [D1]-----[D2] -Same for A2's ports ...
- | \ ^-/--|---\
- | \ / | \-- Virtual Switch Link (link for control plane and non-local switch to switch traffic).
- | X |
- | / \ |
- | / \ |
- | / \ |
- [A1] [A2]
- VSS Logical View
- -------------------------------------------
- [R1] [R2]
- \\ //
- \\ //
- +-----+ R1, R2, A1, A2 each see a single switch.
- | VSS |
- +-----+
- // \\
- // \\
- [A1] [A2]
- VSS, Dual Active Detection
- - Dual Active is a situation when the redundant chassis also becomes Active.
- - That is, communication on the VSL between the two switches is lost and they both believe
- they are to be the Active Chassis. (might be a.k.a 'Split Brain').
- - This situation is problematic and detrimental to the network.
- - Detection methods:
- - Enhanced PAgP: PAgP messaging can (then) occur across a MEC (transiting a neighboring
- switch) in order for the two VSS member switches to communicate. Faster than IP-BFD.
- - IP Bidirectional Forwarding Detection: Backup link that uses BFD messaging.
- - Dual-Active Fast Hello: Uses special hello messages over a backup link. Faster than IP-BFD.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ EtherChannel
- - Multiple Ethernet links combined to appear as one link, either at Layer2 or Layer3.
- - Protocol:
- - PAgP - Port Aggregation Control Protocol; Cisco proprietary
- - LACP - Link Aggregation Control Protocol (802.3ad); IEEE standard.
- - PAgP and LACP are not compatible; both ends must match.
- - PAgP as the EtherChannel control-protocol; mode options:
- - On = statically on
- - Auto = doesn't initiate PAgP but does respond to PAgP
- - Desirable = will initiate PAgP
- - Will not work:
- - Auto <> Auto
- - On <> Auto
- - On <> Desirable.
- - In fact a combination of Desirable <> On (or Not-EtherChannel <> On) can
- result in a bridge loop & traffic storm.
- - Port states:
- - Bundled - is part of an EtherChannel; can tx/rx BPDU(s) & data traffic.
- - Suspended - not part of an EtherChannel; can rx BPDU(s) but cannot tx; data traffic blocked.
- - Standalone - port is not bundled in an EtherChannel; can tx/rx BPDU(s) & data traffic.
- - If one end of an EtherChannel has more ports than the other, then the mismatched ports enter Standalone state.
- This can cause problems.
- - A config option exists to force a Standalone port in this condition to be disabled.
- - Implication: a user complains of slow performance, or sees a link in his/her EtherChannel down; then check
- the remote side for this condition.
- - LACP - will form EtherChannel only between ports in passive or active mode.
- - Port mode combinations:
- Active <> Active > Works
- Active <> Passive > Works
- Passive <> Passive > Doesn't work
- - Up to 8 ports in an LACP Bundle; with up to 8 hot-standby ports.
- - If an active port fails, then a standby port is rotated into the group.
- - [Some] Basic Requirements
- - Same speed, same duplex
- - A member link port cannot be a SPAN destination port
- - If L2 etherchannel, then ports in access mode must be in the same VLAN
- - If L2 trunking: use same trunk mode (on | auto | desirable); allow same vlans on each side.
- - Load distribution algorithms: Fixed Algorithm (default) and Adaptive Algorithm.
- - Some external devices could require the Fixed Algorithm. See Config Guide for details.
- - Changing the algorithm on DFC-equipped modules (or on the Active Supervisor in a dual-sup setup) causes EtherChannel ports to flap.
- # show interfaces <intf> etherchannel
- # show etherchannel <port-channel-num> port-channel
- # show lacp sys-id
- # show etherchannel load-balance
- # show etherchannel summary
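The PAgP and LACP mode-compatibility rules listed above can be condensed into two predicates. A hedged sketch (function names are mine; the rules encoded are the ones in the notes):

```python
def pagp_forms_channel(a: str, b: str) -> bool:
    """PAgP: 'desirable' initiates, 'auto' only responds, 'on' is static.
    on <> on bundles statically; on <> auto and on <> desirable fail;
    auto <> auto never comes up because neither side initiates."""
    modes = {a, b}
    if modes == {"on"}:
        return True          # static bundle, no negotiation protocol
    if "on" in modes:
        return False         # on <> auto / on <> desirable do not work
    return "desirable" in modes

def lacp_forms_channel(a: str, b: str) -> bool:
    """LACP: a channel forms only if at least one side is 'active'."""
    return "active" in {a, b}
```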
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Various Commands for Cat65xx
- Source: “Catalyst 6500 Release 12.2SX Software Configuration Guide” at
- http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/122SX/configuration/guide/book.html
- +-----------------------------------------------------------------+
- View VSS Switch info.
- # show switch virtual [ role | link ]
- +-----------------------------------------------------------------+
- MultiChassis EtherChannel
- - Configure as a normal EtherChannel; VSS recognizes it as cross-chassis.
- +-----------------------------------------------------------------+
- Switching Fabric
- # show fabric switching-mode module all
- - Modes: Compact, Truncated, Bus; Compact = Best Performance
- - Applicable for Sup720 & Sup720-10GE
- - See Config Guide for info on modes
- # show fabric status
- - Various entries showing slot, speed, status, etc.
- # show fabric errors
- +-----------------------------------------------------------------+
- Redundancy (Stateful SwitchOver and NonStop Forwarding)
- # show redundancy states
- # show cef state
- - Look for NSF capable = yes
- # show ip bgp neighbors <a.b.c.d>
- - Look for Graceful Restart = advertised and received
- # show ip ospf
- - Look for Non-stop Forwarding enabled
- # show isis nsf
- # show ip protocols
- - For EIGRP, look for: EIGRP NSF enabled
- +-----------------------------------------------------------------+
- Port Speed & Duplex
- - 10GbE & GbE: negotiation process: Link Negotiation
- - 10/100/1000mbps: negotiation process: Auto Negotiation
- - If one side is Auto the other side must be Auto
- - One side cannot negotiate speed/duplex if the other side is hard-coded.
- - If speed is manually set to 10 or 100Mbps, the switch prompts to set duplex.
- - Speed cannot be Auto if Duplex is not Auto
- - Duplex mode on GbE & 10GbE is Full. This cannot be changed.
- - OK, so apparently there is no concept of "What duplex should we use?"
- during the Link Negotiation process.
- - Link Negotiation (LN) states and port status :
- - Both Local Port & Remote Port are set to LN = OFF, then both ports: UP
- - Both Local Port & Remote Port are set to LN = ON, then both ports: UP
- - Both Local Port & Remote Port are mismatched. One side is Up, one side is Down.
- - Reference Section 1.9 at http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/12-2SX/best/practices/recommendations.html
- # show interfaces <intf>
- - See duplex & speed setting
- # show interfaces <intf> transceiver properties
- - Check negotiation status
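The Link Negotiation state table above boils down to "the link comes up on both ends only when the LN settings match." A trivial sketch of that rule (which end stays down on a mismatch is platform-dependent and not stated in the notes, so the sketch doesn't model it):

```python
def ln_link_up(local_ln_on: bool, remote_ln_on: bool) -> bool:
    """True if both ends of a GbE/10GbE link come up: LN settings match
    (both ON or both OFF). A mismatch leaves one end down."""
    return local_ln_on == remote_ln_on
```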
- +-----------------------------------------------------------------+
- 802.3x Flow Control, PAUSE Frames
- - If receive buffers become full, the port can send PAUSE frames to stop transmission for a specified time.
- - 10GbE fiber ports respond to PAUSE frames by default. On [at least] WS-X6502-10GE, this is non-configurable.
- (configif)# flowcontrol [ received | send ] [ on | off | desired ]
- Gigabit Ethernet supports the desired option, for when it's unknown what the remote port supports.
- # show interface <intf> flowcontrol
- +-----------------------------------------------------------------+
- Power Management and Environmental Monitoring
- - Systems with redundant power/supplies, both must be same wattage
- - Some configurations require more wattage than can be supplied by a single power supply.
- - Implication: If a power/supply fails, some services through a Cat65xx will be affected.
- - Obviously, redundancy is not supported in this scenario.
- # show power
- - View details of system power (total, used, available, etc).
- # show power status power-supply 2
- # show env status power-supply [ 1 | 2 ]
- # show platform hardware capacity cpu
- - View CPU capacity & utilization for route-processor, switch-processor, and switching module.
- # show platform hardware capacity eobc
- - Display EOBC related statistics, such as Packets/sec, Total Packets, Dropped Packets; for route-processor,
- switch-processor, and DFC(s).
- # show platform hardware capacity fabric
- - Display Current and Peak Switching utilization.
- # show platform hardware capacity forwarding
- - Utilization of MAC tables, FIB TCAM per-proto (e.g. IP-FIB), Adjacency Tables (ARP/phy-intf).
- # show platform hardware capacity interface
- - Shows interface resources; e.g. interface Tx/Rx drops; buffer sizes.
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Determining which link an EtherChannel will send a frame on.
- Source: https://supportforums.cisco.com/discussion/11113941/test-etherchannel-commands-7609
- # show etherchannel load-balance
- - First check what the load-balance method is.
- - Also, if the ingress port is PFC-controlled, just use the default command above.
- # show etherchannel load-balance module <mod_num>
- - But if it's DFC-controlled, then add the module operator.
- # remote login switch
- - connect to the SP (switch-processor)
- # attach <slot_num>
- - Connect to other linecards (DFC) besides the SP.
- # test etherchannel load-balance interface port-channel <num> <operator depends on algo>
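The idea behind the commands above can be illustrated in code: for a src-ip/dst-ip load-balance method, the flow fields are hashed and the result indexes into the bundle. This is an illustrative sketch, not the actual Catalyst hardware hash (which is internal to the forwarding engine); XOR of the addresses is just a stand-in:

```python
import ipaddress

def pick_member_link(src_ip: str, dst_ip: str, links: list) -> str:
    """Choose one member link for a flow, src-dst-ip style.
    The same src/dst pair always maps to the same link (per-flow)."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return links[(s ^ d) % len(links)]
```

The per-flow property is the point: all packets of a given conversation stay on one member link, so a single flow can never use more than one link's worth of bandwidth.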
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ mLACP for Server Access (mLACP = MultiChassis Link Aggregation Control Protocol)
- InterChassis
- Communication
- +----+ Channel (ICC) +----+
- Point of | A1 |--------------------| A2 | Point of
- Attachment +----+ +----+ Attachment
- \ /
- \<-----------------/-------\
- \ /<--------\
- \ / \-Dual Link LACP EtherChannel
- Active Link Standby Link
- \ /
- \<-------/-------\
- \ /<--------\
- \ / \-Dual Link LACP EtherChannel
- +------+
- |Server|
- | DHD | DHD = Dual Homed Device
- +------+
- # show redundancy interchassis
- - Config Guide lists the operator "interface" ; Command Ref lists the operator "interchassis"
- # show lacp multi-chassis [ group | port-channel ]
- # show lacp internal
- # show lacp neighbor
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Hardware Layer3 Switching
- - Instead of the Route Processor (RP), the PFC(s) & DFC(s) can forward, at wire speed, IP unicast traffic between subnets.
- - This means this type of packet forwarding occurs in hardware rather than in software via the RP.
- - Forwarding decision made locally at ingress. Rewrite info sent to egress port where packet is rewritten:
- L2 dst, L2 src, L3 IP TTL, L3 checksum, L2 checksum/FCS.
- - H/W L3 Switching is default & cannot be disabled.
- - Load-balancing in use is Per-Flow (ip src / ip dst pair).
- - "ip load-sharing per-packet" ; "ip cef accounting per-prefix" ; "ip cef accounting non-recursive"
- - These commands only apply to traffic that is CEF switched in software by the RP.
- - Implication: per-packet load-sharing means forwarding via the slow path.
- # show interface <intf> | begin L3_in_Switched
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Traffic and Storm-Control
- - Occurs when packets flood the LAN and degrade network performance.
- - Monitors unicast, multicast, and broadcast traffic over a 1-second interval.
- - Compares traffic level with a threshold percentage.
- - Traffic level over the threshold is dropped.
- - There are options to shut down the interface (err-disable) or send an SNMP trap.
- - Shut / no shut to clear err-disable. (Research other methods if such is possible).
- - Note that if you enabled both broadcast and multicast storm-control and either one drives the utilization past the threshold, then
- both types of traffic will be suppressed.
- (config-if)# storm-control [ broadcast | multicast | unicast ] level <level>
- The level percentage is an approximation due to different packet sizes
- and counting methods.
- # show interfaces <intf> counters storm-control
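The threshold comparison described above can be sketched as follows. Assumed model only (IOS measures per-class counters internally, and the notes point out the level is an approximation):

```python
def storm_exceeded(bits_in_interval: int, port_bps: int, level_pct: float) -> bool:
    """True if traffic measured over the 1-second interval crosses the
    configured storm-control level (a percentage of port bandwidth)."""
    utilization_pct = 100.0 * bits_in_interval / port_bps
    return utilization_pct > level_pct
```

For example, 600 Mbps of broadcast in one second on a 1 Gbps port is 60% utilization, which exceeds a configured level of 50 and would trigger suppression.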
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+ Some General info on Buffers, Queues, & Thresholds – Cat6500 Ethernet Modules
- Source: http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/prod_white_paper09186a0080131086.html
- - Some type of receive and transmit buffering will occur.
- - Because frames must be stored & enqueued as forwarding decisions are made & output to the line is scheduled.
- - Ingress to the switch fabric is rarely the cause of congestion.
- - Egress ports are where majority of packets will be destined,
- - therefore transmit-side buffers are larger than receive-side buffers.
- - No QoS configured: means FIFO with tail-drop is implemented.
- - Enabling QoS means port buffers are spread to one or more queues.
- - With QoS, Ingress & Egress scheduling is based on CoS values.
- - Default: Higher CoS mapped to Higher Queue (does Higher Q mean higher priority?).
- - Drop Thresholds (each Queue has two types: Tail drop, WRED drop):
- - Tail Drop: Frames of given CoS accepted into queue until Threshold reached. Subsequent frames are dropped until
- queue drops below threshold value.
- - WRED Drop: Low & High watermarks. Frames are accepted into the queue up to the Low watermark, then randomly dropped with
- probability increasing as the queue depth approaches the High watermark. Above the High watermark all frames are dropped until the queue falls below it.
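The WRED behavior described above can be sketched with the usual textbook linear ramp (an assumption; the actual hardware drop curve isn't specified in the notes):

```python
def wred_drop_probability(queue_depth: float, low: float, high: float) -> float:
    """Drop probability for a frame given current queue depth:
    0 below the Low watermark, 1 above the High watermark,
    and a linear ramp in between."""
    if queue_depth <= low:
        return 0.0
    if queue_depth >= high:
        return 1.0
    return (queue_depth - low) / (high - low)
```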
- - Port Queue and Threshold Acronyms. (Example only. See doc for more listings).
- - Rx: 1q2t - 1 standard queue with 2 tail drop thresholds.
- - Rx: 1q4t - 1 standard queue with 4 tail drop thresholds.
- - Rx: 2q4t - 2 standard queues with four WRED drop thresholds per queue.
- - Tx: 2q2t - 2 standard queues with 2 tail drop thresholds per queue.
- - Tx: 1p2q2t - 1 strict-priority queue plus 2 standard queues with 2 tail drop thresholds per queue.
- - Example Buffer size, Queues, and Thresholds for:
- - WS-X6748-GE-TX 48-port (10/100/1000T), Dual-Fabric with RJ45.
- - Total Buffer size – 1.3 MB
- - Rx Buffer size – 166 KB
- - Tx Buffer size – 1.2 MB
- - Rx Port type – with DFC3: 2q8t; with CFC: 1q8t
- - Tx Port type – 1p3q8t DWRR
- - Rx Queue size – with DFC3: Q2 33KB, Q1 133KB; with CFC Q1 166KB
- - Tx Queue size – SP 175KB, Q3 175KB, Q2 233 KB, Q1 583 KB
- +-----------------------------------------------------------------+
- | |
- +-----------------------------------------------------------------+