Long Distance Fibre Channel Link Tuning

In this video I talk about some of the variables involved in tuning long distance Fibre Channel links.  In this blog post I'll detail some of the tools that are available, and I'll provide an example of estimating the number of buffer credits you will need.  Note that this tuning applies only to native Fibre Channel links; it does not apply to FCIP tunnels or circuits.

One critical piece of information that you will need to calculate buffer credits is the frame size.  Smaller frames mean more of them can fit in the link, so you would need more buffer credits.  Of the variables that go into the formula, this is the only unknown; everything else is either known or is a constant.

Brocade has the 'portbuffershow' command, which can tell you the average frame size for a link.  Look at the Framesize columns for TX and RX in the portbuffershow output to get the frame size.  The portbuffershow output is organized by logical switch and then by port.

On a Cisco fabric, you can calculate the frame size using one of the 'show interface' commands:

switch# show interface fc1/1 counters
...
14079632456 frames input, 18624775031572 bytes

Note that the frames and bytes values above are cumulative counters since they were last cleared, so clear the counters and gather them while replication is running.  If the link has sat idle for long stretches since the counters were cleared, the average would be skewed low and you would end up under-allocating buffer credits.

If you have a one-way replication solution, you should run the portbuffershow or show interface commands on the switches at the primary site, as that is where most of the data frames will be coming from.  If you run them on the target (secondary) side, the TX value will normally be much smaller, as that traffic is mostly acknowledgements to SCSI commands; those frames are also much smaller on average, which would skew the required buffer credits.  For a two-way replication solution you should run the calculation for both sides of the link.

The average Frame Size = Bytes/Frames, so from the above Cisco example output:

18624775031572 (bytes) / 14079632456 (frames) = 1323 bytes/frame or about 1.3KB per frame

If we change the example to use fewer frames for the same byte count, we get:

18624775031572 (bytes) / 10000032456 (frames) = 1862 bytes/frame, or about 1.8KB per frame.  Note that the maximum Fibre Channel frame size is 2112 bytes.  If you are in doubt, you can use 2112 as your value, but know that you may under-allocate the required credits if your actual frames are smaller.
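The division above is trivial to script.  Here is a small Python sketch of the same calculation; the counter values are the ones from the Cisco example, and nothing in it is switch-specific:

```python
def avg_frame_size(total_bytes: int, total_frames: int) -> float:
    """Average frame size in bytes, from interface byte and frame counters."""
    return total_bytes / total_frames

# Counter values from the 'show interface fc1/1 counters' example above.
print(round(avg_frame_size(18624775031572, 14079632456)))  # 1323
```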

Once we know the average frame size, we can calculate the number of buffer credits with the following formula:

Required Buffer Credits = ((distance * rate * 1000) / average_frame_size) + 6


  • Distance is measured in km
  • Rate is the link speed in Gbps (1, 2, 4, 8, 16, or 32)
  • Average_frame_size is in bytes (maximum 2112)
  • 6 extra credits are added for Fibre Channel overhead for F_Class traffic

So using that formula for both of our frame size values:

For a 1323-byte frame on a 50 km link at 8 Gbps:
((50*8*1000)/1323)+6 ≈ 308 credits

Using the same values but with the larger 1862-byte frame size, we get:
((50*8*1000)/1862)+6 ≈ 221 credits
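The formula translates directly into a few lines of Python.  This sketch reproduces both worked examples, showing the fractional result before rounding (in practice you would round up to whole credits):

```python
def required_credits(distance_km: float, rate_gbps: float,
                     frame_size_bytes: float) -> float:
    """Buffer credits for a long-distance FC link, before rounding.

    The 6 extra credits cover Fibre Channel F_Class overhead.
    """
    return (distance_km * rate_gbps * 1000) / frame_size_bytes + 6

print(round(required_credits(50, 8, 1323), 1))  # 308.3
print(round(required_credits(50, 8, 1862), 1))  # 220.8
```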

You can see how knowing the average frame size is critical.  If you under-allocate credits, you are likely to still see credit exhaustion, at least at the primary site.  Ideally you want just enough credits to keep the link full.  If you can't estimate the frame size, start with a frame size of 1000 bytes, calculate the credits, and monitor the links.  If you continue to see buffer credit exhaustion or frame queueing, increase the credits until the frame queueing is minimized.  A tip: to make it easier to try different values, put the formula above into a spreadsheet.  It then becomes trivial to change a value and see the effect on the number of buffer credits.
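If you'd rather not build a spreadsheet, a short loop gives the same what-if table.  This sketch applies the formula above, rounded up to whole credits, to a hypothetical 50 km link at 8 Gbps across a range of frame sizes:

```python
import math

def required_credits(distance_km, rate_gbps, frame_size_bytes):
    """The formula from the post, rounded up to whole credits."""
    return math.ceil((distance_km * rate_gbps * 1000) / frame_size_bytes) + 6

# What-if table for a 50 km link at 8 Gbps.
for frame_size in (1000, 1323, 1862, 2112):
    print(f"{frame_size:>5} bytes/frame -> "
          f"{required_credits(50, 8, frame_size)} credits")
```

Changing the distance or rate arguments regenerates the whole table, which is handy when comparing link upgrade options.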

Over-allocation is less of a problem on newer switches than on older ones.  Newer switches have enough memory available to transmit at full line rate on all ports, so they can maintain many more credits per port than older switches.  Older switches draw from a pool of shared credits, and allocating more of them to one port means fewer credits are available for the rest of the ports in the port group.  As I note in the video, for links of hundreds of kilometres your calculations will likely produce a credit count above the switch's per-port maximum.  In these cases, set the credits to the maximum number that the switch allows per port.

