Wednesday, October 14, 2020

Long Distance Fibre Channel Link Tuning




In this video I talk about some of the variables involved in tuning long distance fibre-channel links.  In this blog post I'll detail some of the tools that are available, and I'll provide an example of estimating the number of buffer credits you will need.  Note that this tuning applies only to fibre-channel links; it does not apply to FCIP tunnels or circuits.  One critical piece of information that you will need to calculate buffer credits is the average frame size.  Smaller frames mean more of them fit in the link, so you need more buffer credits.  Of the variables that go into the formula, the frame size is the only unknown; everything else is either known or a constant.
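
To make the promised example concrete, here is a minimal sketch of the arithmetic in Python.  It assumes the common rule of thumb of roughly one buffer credit per 2 km per Gbps of link speed for full-size (2112-byte payload) frames, scaled up when the average frame size is smaller; treat the constants as assumptions and verify the result against your switch vendor's sizing guidance.

import math

def estimate_buffer_credits(distance_km, speed_gbps, avg_frame_bytes):
    # Rule of thumb: ~1 credit per 2 km per Gbps for full-size frames,
    # scaled up for the smaller average frame size reported by the switch
    # (e.g. Brocade 'portbuffershow' or the Cisco calculation below).
    FULL_FRAME_BYTES = 2112  # maximum FC frame payload
    credits = (distance_km * speed_gbps / 2) * (FULL_FRAME_BYTES / avg_frame_bytes)
    return math.ceil(credits)

# Example: a 50 km link at 16 Gbps with a 1024-byte average frame size
print(estimate_buffer_credits(50, 16, 1024))  # -> 825

Notice that halving the average frame size from full-size roughly doubles the credits needed, which is why the measured frame size matters so much.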

Brocade has the 'portbuffershow' command that can tell you the average frame size for a link; look at the FrameSize columns for Tx and Rx in the portbuffershow output to get the frame size.  The portbuffershow output is organized by logical switch and then by port.

On a Cisco fabric, you can calculate the frame size using one of the 'show interface' commands:
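
The original post showed the command output as an image; as a sketch, you can derive the average frame size by dividing the byte counters by the frame counters in 'show interface ... counters' output.  The output below is abbreviated and illustrative only; counter line formats vary by NX-OS release.

switch# show interface fc1/1 counters
fc1/1
    ...
    100000 frames input, 160000000 bytes
    ...
    250000 frames output, 200000000 bytes

Here the average receive frame size is 160000000 / 100000 = 1600 bytes, and the average transmit frame size is 200000000 / 250000 = 800 bytes.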

Monday, October 12, 2020

Using the IBM Storage Insights Pro Grouping Features

 


I recently posted about how you can help IBM Storage Support help you by ensuring you are utilizing the full monitoring features available on your storage systems and switches.  You should also have at least the free version of IBM Storage Insights installed.  If you have Storage Insights Pro or Storage Insights for Spectrum Control, there are some additional steps you can take that will help both you and the IBM Support team resolve your problems as quickly as possible.

IBM Storage Insights Pro and Storage Insights for Spectrum Control come with some powerful features for grouping and  organizing storage resources.  These features are found under the Groups menu.   You can organize your storage resources into Applications, Departments and General Groups.   

There is a hierarchy to the organization of resources.   Departments can contain sub-departments, Applications or General Groups.  Applications can contain hosts or other applications.  General Groups can contain volumes or storage systems.  

Applications and Departments let you model the applications that are critical to your business and assign them to the same departments they belong to in your organization.  You can define an application (such as a database) and then add the hosts that run that database.  When you do this, Storage Insights automatically pulls the storage systems and volumes associated with those hosts into the application you created.

General Groups let you group volumes and storage systems together.  One use case: after grouping volumes in a General Group, you can define alerts for the members of that group.  The Alert Policies feature provides a similar function for storage systems and other types of resources, but not for volumes, so I recommend that you continue to use Alert Policies to manage alerts on storage systems; volumes, which cannot currently be added to an Alert Policy directly, can be alerted on through General Groups.  This matters because different types of physical storage (FlashCore vs. nearline) have different performance expectations, and a response time that is achievable on FlashCore is not achievable on nearline drives.  A volume backed by nearline storage will generate constant alerts if you configure a threshold that assumes flash.  Volumes with different I/O patterns will also have different response time expectations; see this post for an example of one such I/O pattern.  General Groups enable you to group volumes, storage systems, hosts or other resources however you wish.

How Using the Grouping Features Helps IBM Storage Support

Storage systems that are organized into at least applications help IBM Storage Support more quickly identify potentially affected resources when you have a problem.  A typical performance problem statement from a client is "our XXX database performance is very slow".  Before IBM Support can begin to work on a problem, we need to know which hosts are affected, which storage systems are providing the storage to those hosts, and which volumes the hosts are using.  If a customer has organized the resources using the Storage Insights features, identifying the affected components is much easier than trying to do it without Storage Insights.  Here are some real-world examples that illustrate how using the grouping features is beneficial:

One customer had a number of volumes being replicated via Global Mirror with Change Volumes (GMCV) on a pair of SVC clusters.  The issue was that a subset of 20 or so volumes out of a few hundred were nearly always behind on their recovery point.  Within that subset, some volumes would catch up, then fall behind, then a different set would catch up, and so on.  So while the group of 20 was nearly constant, the specific volumes that were behind changed frequently.  We had the customer create a General Group of the 20 volumes, so on any given day the customer could tell us which volumes were behind.  It was much easier to look at those 20 and identify particular volumes than to repeatedly filter them out of the few hundred that the customer had.  Over time we were able to determine that the volumes falling behind had an I/O pattern of short spikes of very high write activity followed by longer periods of very low activity.  The volumes would fall behind during the intense writes, then not catch up because the low I/O activity meant they were at a lower priority for replication.  Having both the Storage Insights performance data and the ability to put these volumes in a group made it much easier to diagnose the issue.

Another customer had an application that would intermittently have performance problems, and users were complaining about the slowness of the application.  The customer had several virtual hosts spread across 10 VMware servers in a cluster, and these virtual hosts were running the application.  The virtual hosts could be running on any of the VMware servers at any given time, and any of several dozen volumes could also be affected.  We had the customer create an application and add the VMware hosts to it.  This automatically pulled in the storage systems, volumes and backing storage for the application.  Without having to repeatedly filter on hosts or volumes, we were able to determine the root cause much more quickly: the backing storage was being overdriven.  The problem would have been resolved eventually, but the pattern was much clearer when we could start with a much smaller set of volumes and hosts.

You can see how utilizing the Groups features of Storage Insights Pro can benefit both you and the IBM Support teams.  If you want to find out more about these features, visit the Storage Insights YouTube Channel and check out the videos there that cover Departments, Applications and Groups.

Monday, September 14, 2020

Help IBM Storage Support Help You

     



I recently had a client ask me what the most effective thing was that his company could do to get me the data that would be most helpful in troubleshooting problems in his solution.  This was after we were unable to provide a definitive root cause for a problem that occurred intermittently in his solution.  He had a fairly simple fabric that consisted of two 96-port switches, a few IBM storage systems and 30 or so hosts.  His problem was an intermittent performance issue on the hosts.  At the time, the best I could tell him was that the data indicated a slight correlation between host read activity and the performance problem, but I was not able to confirm anything with certainty.

My answer was simple: configure better event detection and system logging.  This is something I teach as a best practice at IBM Technical University.  I also suggested that his company install at least the free version of IBM Storage Insights.  Without a performance monitoring tool, troubleshooting performance problems is very similar to trying to figure out why a traffic jam is happening using still pictures from traffic cameras.  Now imagine trying to root-cause a traffic jam that happened yesterday or last week with pictures taken today, where the only other data you have is statistics such as how many cars the camera has counted since the last time you reset the counters.  Working from that kind of data is effectively what the Support teams at IBM are asked to do, and what this customer was asking of me.

That said, here are the recommended actions you can take to ensure the best chance of being able to provide the data that we need to solve your problems:

  1.   Configure Call Home on your products.  You can search the Knowledge Center for instructions for your specific IBM hosts, storage and switches.  Your product can then monitor itself and open tickets for hardware issues that you might not otherwise be aware of.
  2.   Configure a syslog server on your products.  While this won't directly help provide data, it does preserve events if a host, storage system or switch has a system failure; without offloading syslog data, critical event data for these kinds of failures is lost.  Logs also wrap, so a syslog server prevents losing system events when they do.  You can search the Knowledge Center for instructions for your specific IBM hosts and storage on how to do this.  For SAN switches, refer to the instructions from Cisco and Brocade (see the example commands after this list).
  3. Configure monitoring and alerting on your SAN switches.  This may require additional licensing, but an effective monitoring policy often gives us critical timestamped data.  As an example, a recent case I worked on had several hosts losing paths to storage.  Looking at the switch data, the switch ports for these hosts and a few others were seeing CRC errors.  You can read more about them and how to troubleshoot them here.  These errors are among the easiest to detect, and resolution is straightforward.  Because this customer had implemented a good monitoring policy, I was able to easily see the timestamps and let the customer know these errors were ongoing and needed to be resolved.
  4. Install a performance monitoring tool, at least the free version of IBM Storage Insights.  My client did not have Storage Insights set up; if he had, we most likely would have been able to use the performance data to confirm the theory.  A guided tour of Storage Insights is here.  If you already have Spectrum Control, Storage Insights is included for the systems you have licensed in Spectrum Control, and you get all the same monitoring and alerting features that are included in Storage Insights Pro.  Check out this post to learn how Storage Insights can enhance your IBM Storage Support experience.
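
For point 2, here is a minimal sketch of pointing a switch at a syslog server.  The address 192.0.2.50 is a placeholder, and the exact syntax varies by release (older FOS versions use 'syslogdipadd' instead of 'syslogadmin'):

Brocade FOS:
switch:admin> syslogadmin --set -ip 192.0.2.50

Cisco NX-OS:
switch# conf t
switch(config)# logging server 192.0.2.50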
  

For point 3, Cisco has the port-monitor feature; you can find a complete overview here.  I strongly recommend that you disable the slow-drain policy that is active on a newly deployed switch and activate at least the default port-monitor policy.  The default policy alerts on many more counters (19) than the slow-drain policy does, and the two counters that the slow-drain policy alerts on are included in the default policy.  Enabling the default policy helps by providing time-stamped data for troubleshooting problems.  Brocade has the Monitoring and Alerting Policy Suite (MAPS).  MAPS can also provide the time-stamped data that is often critical to determining why a problem occurred.  You can find the FOS v8.2 MAPS user guide here and a blog post on integrating Brocade Flow Vision rules into MAPS here.  Integrating Flow Vision allows you to alert on specific kinds of frames.
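
As a sketch, switching a Cisco switch from the slow-drain policy to the default policy looks roughly like this; running 'port-monitor activate' with no policy name activates the default policy.  Check 'show port-monitor active' first to confirm the policy names on your switch:

switch# conf t
switch(config)# no port-monitor activate slowdrain
switch(config)# port-monitor activate
switch(config)# end
switch# show port-monitor active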



Tuesday, September 8, 2020

IBM Announces IBM SANnav


IBM announced IBM SANnav today.  You can register for a webinar to learn more about SANnav here.

SANnav is a next-generation SAN management application.  It was built from the ground up with a simple, browser-based user interface.  It can streamline common workflows, such as configuration, zoning, deployment, troubleshooting, and reporting.  The modernized GUI can improve operational efficiency by enabling enhanced monitoring capabilities, faster troubleshooting, and advanced analytics.

Key features and capabilities include:

  1. Configuration management: You can use policy-based management to apply consistent configurations across the switches in your fabrics.  SANnav also makes zoning devices easier by providing a more intuitive interface than previous management products.  
  2. Dashboards:  You can see  at-a-glance views and summary health scores for fabrics, switches, hosts, and targets that may be contributing to performance issues within the network. You can instantly navigate to any hot spots for investigation and take corrective action. 
  3. Filter management: You can sort through large amounts of data by selecting only attributes of importance. For example, users can search for all 32 Gbps ports that are offline. This filter reduces the displayed content to only the points of interest, allowing faster identification and troubleshooting.
  4. Investigation mode: Provides intuitive views that you can navigate for the key details that help you understand complex behaviors. SANnav Management Portal periodically collects metrics and stores them in a historical time-series database for further analysis. In addition, it can collect metrics more frequently (at 10-second intervals) for select ports.  This performance data is invaluable when trying to troubleshoot a problem that occurs intermittently and/or is severe enough to impact production but not severe enough to cause a complete outage.
  5. Reporting: Generates customized reports that provide graphical summaries of performance and health information, including all data captured using IBM b-type Fabric Vision technology. Reports can be configured and scheduled directly from SANnav Management Portal to show only the most relevant data, enabling administrators to more efficiently prioritize their actions and optimize network performance.
  6. Autonomous SAN: This is the feature I am most looking forward to learning more about.  As I am in the business of troubleshooting fabrics to find problems, I would like to see how effective this is and how quickly the switches can detect problems and notify administrators.  Perhaps some day we'll have switches that can detect problems and automatically route traffic onto faster links (where possible).  This would be very similar to a recent drive I took where my phone's GPS program routed me around a major traffic jam: the detour was slower than the main roads assuming no traffic, but many minutes faster than driving through the congestion.
As a reminder, you can register for the free webinar at the link above.  I hope to see you there. 

Tuesday, August 25, 2020

Integrating Broadcom Flow Vision Rules with MAPS



Sound monitoring and syslogging practices are the first and sometimes most important step in troubleshooting.  They are also the most overlooked, as they must be configured before a problem happens.  If system logging is not configured before a problem happens, valuable information is lost.

Broadcom has two important features that you can use to monitor the health of your Broadcom fabrics and alert you when problems are detected: Flow Vision, which does the monitoring, and the Monitoring And Alerting Policy Suite (MAPS), which can both monitor and alert if it detects error conditions.  In this post I'll provide a brief overview of each feature, and then we'll see how we can integrate Flow Vision into MAPS to provide a comprehensive monitoring and alerting solution.

Flow Vision

Flow Vision provides a detailed view of the traffic between devices on your fabrics.  It captures traffic for analysis to find bottlenecks, see excessive bandwidth utilization, and examine other similar flow-based fabric connectivity.  Flow Vision can inspect the contents of a frame to gather statistics on each frame.  Flow Vision has three main features: Flow Monitor, Flow Generator and Flow Mirror.  In this blog post we'll take a look at Flow Monitor, as that is what we will integrate into MAPS.  Flow Monitor provides the ability to monitor flows that you define, gathering statistics on frames and I/Os.  Some example use cases for flows:

  • Flows through the fabric for virtual machines or standalone hosts connected via NPIV, starting from a single N_Port ID (PID) to destination targets (see the sketch after this list)
  • Monitoring flows inside logical fabrics and inter-fabric (routed) traffic passing through
  • Gaining insight into application performance by capturing statistics for specified flows
  • Monitoring various frame types at a switch port to provide deeper insight into storage I/O access, such as the various SCSI commands
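
As a sketch of the first use case, the flow below would learn the individual device pairs crossing ingress port 1/10 by using "*" wildcards for the source and destination devices.  The flow name and port are examples, not from an actual configuration:

switch:admin> flow --create NPIV_Learn -feature monitor -ingrport 1/10 -srcdev "*" -dstdev "*"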

MAPS

MAPS is a policy-based health monitor that allows a switch to constantly monitor itself for faults and performance problems (link timeouts, excessive link resets, physical link errors).  If it detects a problem, it alerts you via the alert options on the policy or, if they are defined, on the individual rule.  However, MAPS does not inspect the contents of the data portion of frames.  Options for alerting include email, SNMP or raslog (the system log).  You should -always- have the raslog option set, as this gives IBM Support critical timestamped data if the switch detects a problem.
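
As a sketch, you can review and set the globally allowed alert actions with 'mapsconfig' (verify the option names against the FOS command reference for your release):

switch:admin> mapsconfig --show
switch:admin> mapsconfig --actions raslog,email,snmp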


Integrating Flow Vision with MAPS


Combining these two capabilities gives you a fully integrated and very powerful monitoring configuration.  You can have Flow Vision monitor for certain types of frames, or for frames between a specific source/destination pair, and then feed that flow into MAPS to take advantage of its alerting capabilities.

In this example we're going to take advantage of the ability of Flow Vision to inspect the contents of a frame, then add that flow to MAPS to utilize its alerting capabilities.  Suppose we want to know when a certain host sends an abort sequence (ABTS) to a storage device.  For this example, our host name is Host1.  It is connected via NPV, so we can't just monitor the ingress port; another host on the same port could also send an ABTS.  Instead, we filter on the specific source N_Port ID.  We also want to ensure we collect all ABTS frames that are sent, so we are not filtering on a destination ID.

Step 1:  Create the flow:


switch:admin> flow --create Host1_ABTS -feature monitor -ingrport 1/10 -srcdev 0xa1b2c3 -frametype abts

The above flow monitors ingress port 1/10 for frames from source N_Port ID 0xa1b2c3 with a frame type of ABTS.  Optionally, we could specify a -dstdev of "*" and Flow Vision would learn which destinations the source device is sending to.
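
That optional learning variant would look roughly like this (a sketch only, using the same example port and PID as above):

switch:admin> flow --create Host1_ABTS_learn -feature monitor -ingrport 1/10 -srcdev 0xa1b2c3 -dstdev "*" -frametype abts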

Step 2: Import the flow into MAPs

switch:admin> mapsconfig --import Host1_ABTS

Step 3: Verify the Flow has been imported

switch:admin> logicalgroup --show

------------------------------------------------------------------
Group Name  |Predefined |Type |Member Count |Members
------------------------------------------------------------------
ALL_PORTS   |Yes        |Port |8            |2/6,1/6-18
ALL_F_Ports |Yes        |Port |5            |1/4,3/7-11
ALL_2K_QSFP |Yes        |Sfp  |0            |
Host1_ABTS  |No         |Port |3            |Monitored Flow

Step 4: Create a Rule and add the rule to a Policy

switch:admin> mapsrule --create myRule_Host1_ABTS -group Host1_ABTS -monitor TX_FCNT -timebase min -op g -value 5 -action RASLOG -policy myPolicy

Where "-timebase" is the time period to monitor the changes, "-op g" is greater than,  "-value" is the value to trigger at, and "-action" is the action to take. So this rule says to log to the raslog if the switch detects greater than 5 ABTS per minute from the source N_Port  ID that was specified in the flow.  

Next we activate the new policy:

switch:admin> mapspolicy --enable myPolicy

Hopefully this example shows the utility of being able to monitor and alert on both the contents of frames and the errors or changes detected on your switches.  This example can also serve as a blueprint for enabling additional logging capability when troubleshooting a problem.  Perhaps you have an intermittent issue that disappears before you can collect the necessary data; with Flow Vision you can monitor for the condition and have MAPS alert you via email or raslog.  For more information you can review the Brocade MAPS and Flow Vision guides here.


Friday, August 21, 2020

Cisco SAN Analytics and Telemetry Streaming - Why Should I Use Them?


Are you sometimes overwhelmed by performance problems on your Storage Network?  Do you wish you had better data on how your network is performing?  If you answered yes to either of these questions, read on to find out about Cisco SAN Analytics and Telemetry Streaming.    


The Cisco SAN Analytics engine is available on Cisco 32 Gbps and faster MDS 9700 series port modules and the 32 Gbps standalone switches.  This engine constantly samples the traffic that is running through the switches.  It provides a wealth of statistics that can be used to analyze your Cisco or IBM c-type fabric.  Telemetry Streaming allows you to use an external application such as Cisco Data Center Network Manager (DCNM) to sample and visualize the data that the analytics engine generates, find patterns in your performance data, and identify problems or predict the likelihood of a problem occurring.
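
As a sketch, enabling the analytics engine on a supported switch and port looks roughly like the following.  The interface is an example, and the query view name in the last command is illustrative only; treat both as assumptions and check the Cisco SAN Analytics configuration guide for the views and syntax on your NX-OS release.

switch# conf t
switch(config)# feature analytics
switch(config)# interface fc1/1
switch(config-if)# analytics type fc-scsi
switch(config-if)# end
switch# show analytics query "select all from fc-scsi.scsi_target_itl_flow"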


You can find an overview of both SAN Analytics and Telemetry Streaming here.  That link also includes a complete list of the hardware that SAN Analytics is supported on.


In this blog post we'll take a quick look at the most important reasons to use the SAN Analytics and Telemetry Streaming features.


Find The Slow Storage and Host Ports on the SAN

This is probably the most common use case for any performance monitoring software.  We want to identify the outlier storage or host ports in the path of slow I/O transactions.  In this case, slowness is defined as a longer I/O completion time or exchange completion time; both are measures of how long it takes to complete a write or read operation.  While there are several potential reasons for slow I/O, one of the more common ones is slow or stuck device ports.  A host or storage port continually running out of buffer credits is a common cause of performance issues.  SAN Analytics makes it much easier to identify these slow ports.

 

Find The Busiest Host and Storage Ports

You can use SAN Analytics to identify the busy ports on your SAN.  This enables you to monitor the busy devices and proactively plan capacity expansion to address high utilization before it impacts application performance.  If you have a host with very high port utilization, you need to know this before adding more workload to that host.  If you have storage ports with very high port utilization, perhaps you can load balance your hosts differently to spread the load across the storage ports so that a few ports aren't busier than the rest.

 

It is important to note that busy ports are not automatically slow ports.  Your busy ports may be keeping up with the current load that is placed on them.   However, if the load increases, or a storage system where all ports are busy has a link fail, the remaining ports may not have enough available capacity to meet that demand.  SAN Analytics can help identify these ports.  


Related to this is verifying that multi-pathing (MPIO) is working properly on your hosts.  SAN Analytics can help you determine if all host paths to storage are active, and if they are, whether the utilization is uniform across all of the paths.

 

Discover if Application Problems are Related to Storage Access


SAN Analytics enables you to monitor the Exchange Completion Time (ECT) for an exchange.  This is a measure of how long a command takes to complete.  An overly long ECT can be caused by a few different problems, including slow device ports.  However, if SAN Analytics is reporting a long ECT on write commands when there are no issues indicated on the SAN, this often means that the problem is inside the storage.    


Identify the Specific Problematic Hosts Connected to an NPV Port


Hypervisors such as VIOS, VMware or Hyper-V all use N_Port ID Virtualization (NPIV) to have virtual machines log into the same set of physical switch ports on a fabric.  Customers frequently also have physical hosts connecting through an NPV device; an example of this is a Cisco UCS chassis with several hosts connecting through a fabric extender to a set of switch ports.  In these situations, getting a traffic breakdown per server or virtual HBA from the data available in a standard switch data collection is challenging.  It is even more so when you are trying to troubleshoot a slow-drain situation.  Switch statistics can point to a physical port, but if multiple virtual hosts are connected to that port it is often difficult to determine which host is at fault.  There are a few Cisco commands that can be run to try to determine this, but they need to be run in real time, and on a large and busy switch you can often miss when the problem is happening because the data from these commands can wrap every few seconds.


The SAN Analytics engine collects this data on each of the separate hosts.  This gives you the ability to drill down to specific hosts in minute detail.  Once you identify a specific switch port that is slow drain, you can then use the data available in SAN Analytics to determine which of the hosts attached to that port is the culprit.  


If you want to learn more:


The Cisco and IBM C-Type Family


How IBM and Cisco are working together

Thursday, August 20, 2020

Implementing a Cisco Fabric for Spectrum Virtualize Hyperswap Clusters



I wrote this previous post on the general requirements for SAN design for Spectrum Virtualize Hyperswap and stretched clusters.  In this follow-on post, we'll look at a sample implementation on a Cisco or IBM c-type fabric.  While there are several variations on implementation (FCIP vs. fibre-channel ISL is one example), the basics shown here can be readily adapted to any specific design.  This implementation will also show you how to avoid one of the most common errors that IBM SAN Central sees on Hyperswap clusters: ISLs on a Cisco private VSAN being allowed to carry traffic for multiple VSANs.

We will implement the design below, where the public fabric is VSAN 6 and the private fabric is VSAN 5.  The diagram depicts one of two redundant fabrics, and the quorum shown can be either an IP quorum or a third-site quorum.  For the purposes of this blog post, VSAN 6 has already been created and has devices in it.  We'll be creating VSAN 5, adding the internode ports to it and ensuring that the Port-Channels are configured correctly.  We'll also verify that Port-Channel 3 on the public side is configured correctly to ensure VSAN 5 stays dedicated as a private fabric.  For the examples below, Switch 1 is at Failure Domain 1 and Switch 2 is at Failure Domain 2.

Hyperswap SAN Design

Before we get started, the Spectrum Virtualize ports should have the local port mask set such that there is at least one port per node per fabric dedicated to internode traffic.  Below is the recommended port masking configuration for Spectrum Virtualize clusters.  This blog post assumes that has already been completed.

Recommended Port Masking


Now let's get started by creating the private VSAN:

switch1# conf t
switch1(config)# vsan database
switch1(config-vsan-db)# vsan 5 name private

switch2# conf t
switch2(config)# vsan database
switch2(config-vsan-db)# vsan 5 name private

Next, we'll add the internode ports for our cluster.  For simplicity in this example, we're working with a 4 node cluster, and the ports we want to use are connected to the first two ports of Modules 1 and 2 on each switch.   We're only adding 1 port per node here.  Remember that there is a redundant private fabric to configure which will have the remaining internode ports attached to it.  

switch1# conf t
switch1(config)# vsan database
switch1(config-vsan-db)# vsan 5 interface fc1/1, fc2/1

switch2# conf t
switch2(config)# vsan database
switch2(config-vsan-db)# vsan 5 interface fc1/1, fc2/1

Next we need to build the Port-Channel for VSAN 5.  Setting the trunk mode to 'off' ensures that the port-channel will only carry traffic from the single VSAN we specify.  In Cisco terms, 'trunking' means carrying traffic from multiple VSANs; by turning trunking off, no other VSANs can traverse the port-channel on the private VSAN.  Having multiple VSANs traversing the ISLs on the private fabric is one of the most common issues that SAN Central finds on Cisco fabrics, because trunking is allowed by default and adding all VSANs to all ISLs is also a default when ISLs are created.  We will also set the allowed-VSANs parameter to permit only traffic for VSAN 5.  Lastly, to keep things tidy, we'll add the port-channel to VSAN 5 on each switch:

switch1# conf t
switch1(config)# int port-channel4
switch1(config-if)# switchport mode E
switch1(config-if)# switchport trunk mode off
switch1(config-if)# switchport trunk allowed vsan 5
switch1(config-if)# int fc1/14
switch1(config-if)# channel-group 4
switch1(config-if)# int fc2/14
switch1(config-if)# channel-group 4
switch1(config-if)# vsan database
switch1(config-vsan-db)# vsan 5 interface port-channel4


switch2# conf t
switch2(config)# int port-channel4
switch2(config-if)# switchport mode E
switch2(config-if)# switchport trunk mode off
switch2(config-if)# switchport trunk allowed vsan 5
switch2(config-if)# int fc1/14
switch2(config-if)# channel-group 4
switch2(config-if)# int fc2/14
switch2(config-if)# channel-group 4
switch2(config-if)# vsan database
switch2(config-vsan-db)# vsan 5 interface port-channel4


The next steps would be to bring up Port-Channel 4 and the underlying interfaces on Switch 1 and Switch 2, ensure the VSANs have merged correctly, and lastly zone the Spectrum Virtualize node ports together.
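
As a sketch, these show commands can help confirm that the port-channel members are up and that only VSAN 5 is allowed and active on the private links (output omitted; availability of the 'trunk vsan' form can vary by NX-OS release):

switch1# show port-channel database
switch1# show interface port-channel4 trunk vsan
switch1# show vsan membership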

We also need to examine Port-channel 3 on the public fabric to ensure it is not carrying traffic for the private VSAN.  To do this:  

switch1# show interface port-channel3

......

......

Admin port mode is auto, trunk mode is auto

Port vsan is 1 

Trunk vsans (admin allowed and active) (1,3,5,6)


Unlike the private VSAN, the trunk mode can be auto or on; this is the public side, so there may be multiple VSANs using this Port-Channel.  The problem is the 'Trunk vsans' line: we are allowing private VSAN 5 to traverse this Port-Channel.  This must be corrected using the commands above that set the trunk allowed parameters; your 'switchport trunk allowed vsan' statement would include all of the current VSANs except VSAN 5.  On a side note, it is a good idea to review which VSANs are allowed to be trunked across port-channels or ISLs.  Allowed VSANs that are not defined on the remote switch of a given ISL will show up as isolated on the switch where you run the above command.  The only VSANs that should be allowed are the ones that should be running across the ISL.  You would need to perform the same check on Port-Channel 3 on Switch 2.

Lastly, the above commands can also be used for FCIP interfaces or standalone FC ISLs; just substitute the interface name for port-channel4 in the example above.  Note that standalone ISLs are recommended to be configured as port-channels; you can read more about that here.


I hope this helps you with implementing your next Spectrum Virtualize Hyperswap cluster.  If you have any questions, find me on LinkedIn or Twitter, or post in the comments.