After learning some new information about the Entry Level FlexPod design, I felt the need to put together an update on some of the questions I posed in an earlier blog post seen here. First, a quick recap: Cisco & NetApp announced a new FlexPod design for smaller workloads. The new design consists of the NetApp FAS2240 storage array, Cisco Unified Computing System, and Cisco Nexus Series switches. Below is a quick diagram I put together with all the parts & pieces and how they will interconnect.
What Protocols will be supported?
One of the questions I posed when the entry FlexPod was announced was which protocols and configurations would be supported. As you can see in the diagram above, it is 10Gb end to end, supporting the iSCSI, NFS, and CIFS protocols. Since this design targets smaller workloads, the lack of native Fibre Channel or FCoE support (the mezzanine card is not a unified adapter) is not a huge deal. If by chance you outgrow it, whether in capacity or I/O load, you always have the option of upgrading to a FAS3200 series and turning the 2240 into a disk shelf for that FAS system.
Cost Comparison & Scale:
Prior to UCSM 2.0(2m), managing C-Series rack-mount servers required connecting them to a Cisco 2248 Fabric Extender for management and connecting the P81E adapter directly to the Fabric Interconnect, as seen below. This obviously doesn't scale nearly as well, and it adds cost since you were purchasing additional port licenses for the Fabric Interconnect. The ability to use the 2232 Fabric Extender will definitely make this a lot more cost effective, as you can connect up to 16 rack-mount servers to a single pair of 2232 Fabric Extenders.
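To see why fewer uplinks per server matters, here is a rough back-of-the-envelope sketch of the fabric oversubscription you take on with a 2232. The port counts are assumptions on my part (the 2232PP has 32 host-facing 10GbE ports and 8 fabric uplinks); your actual ratio depends on how many uplinks you cable.

```python
# Back-of-the-envelope oversubscription for a single 2232 FEX.
# Assumes a 2232PP: 32 host-facing 10GbE ports, up to 8 fabric uplinks.
HOST_PORTS = 32
UPLINKS_MAX = 8
PORT_SPEED_GB = 10

def oversubscription(uplinks_cabled):
    """Ratio of host-facing bandwidth to uplink bandwidth to the FI."""
    return (HOST_PORTS * PORT_SPEED_GB) / (uplinks_cabled * PORT_SPEED_GB)

for n in (4, UPLINKS_MAX):
    print(f"{n} uplinks cabled -> {oversubscription(n):.0f}:1 oversubscription")
```

Even fully cabled at 8 uplinks that is a 4:1 ratio, which is generally a fine trade-off for the smaller workloads this design targets, and each uplink consumes only one FI port license instead of one per server.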
From a compute standpoint you can scale either through additional C-Series rack-mount servers or through B-Series blade servers within the pod. You heard that right: the FAS2240 will be supported in a B-Series blade environment!
From a storage standpoint, the FAS2240 supports up to 5 additional disk shelves beyond the 24 drives within the controller chassis itself. If you manage to outgrow that, you can use the 2240 as a disk shelf behind a 32XX or 62XX series system. You could also add an additional controller, although you will not have the capability for Cluster-Mode in the new code – the requirement of 10Gb intercluster links would leave you with only 1Gb connectivity for server/client traffic.
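As a quick sanity check on that storage ceiling, here is the arithmetic, assuming (my assumption, not from the announcement) 24-drive shelves such as the DS2246:

```python
# Rough drive-count ceiling for a FAS2240 before you'd need to
# re-role it as a shelf behind a 32XX/62XX.
# Shelf size is an assumption: 24-drive shelves (e.g. DS2246).
INTERNAL_DRIVES = 24
MAX_EXTRA_SHELVES = 5
DRIVES_PER_SHELF = 24

max_drives = INTERNAL_DRIVES + MAX_EXTRA_SHELVES * DRIVES_PER_SHELF
print(f"Max drives behind one FAS2240: {max_drives}")  # -> 144
```

Mixing shelf types or using larger-capacity drives changes the capacity math, but the shelf-count limit is the hard line.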
Cabling and Config Considerations
This should not be anything new since the 2208 I/O Module has been shipping for a while now – but it's worth bringing up: cabling for the 2232 Fabric Extender. On the 6248 Fabric Interconnect there are six sets of 8 contiguous ports, with each set of ports managed by a single chip. If you will be cabling up all 8 uplink ports from the 2232, you will want to maximize your VIF configuration and connect them to a complete set of 8 ports on the Fabric Interconnect. More information on this can be found here.
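A small helper can sanity-check a cabling plan against those port groups. This is a sketch under the assumption that the groups of 8 start at port 1 (ports 1-8, 9-16, and so on) – verify the actual group boundaries for your FI model before cabling:

```python
# Sanity-check that a set of FI ports chosen for 2232 uplinks all
# fall within one group of 8 contiguous ports (one ASIC per group).
# Assumes groups start at port 1: 1-8, 9-16, 17-24, ...
GROUP_SIZE = 8

def port_group(port):
    """0-based port-group index for a 1-based FI port number."""
    return (port - 1) // GROUP_SIZE

def same_group(ports):
    """True if every port in the plan lands on the same chip."""
    return len({port_group(p) for p in ports}) == 1

print(same_group(range(1, 9)))   # ports 1-8: one complete group -> True
print(same_group(range(5, 13)))  # ports 5-12: straddles two groups -> False
```

Straddling two groups still works, but keeping all 8 uplinks on one chip is what lets the VIF configuration be maximized as described above.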
From a configuration standpoint you will want to choose how your uplinks will function – either hard pinning or port-channel – via the FEX discovery policy in UCSM.