VMware Product Updates

Today VMware announced quite a few updates across its product lines, including vSphere and vCloud Director, along with new products and packaging such as the vCloud Suite and vCloud Networking and Security.

VMware vSphere 5.1

The biggest news – vRAM entitlements have been removed.
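
For anyone who managed vRAM pools under 5.0, here's a quick back-of-the-envelope sketch of what goes away (the 96GB entitlement is the 5.0 Enterprise Plus figure as I recall it – treat the numbers as illustrative):

```python
import math

def licenses_needed_50(sockets, pooled_vram_gb, entitlement_gb=96):
    """vSphere 5.0-style count: enough per-CPU licenses to cover both the
    physical sockets and the pooled vRAM (96GB assumes Enterprise Plus)."""
    return max(sockets, math.ceil(pooled_vram_gb / entitlement_gb))

def licenses_needed_51(sockets):
    """vSphere 5.1 count: per socket only - vRAM no longer matters."""
    return sockets

# A 2-socket host with 384GB of vRAM allocated to powered-on VMs:
print(licenses_needed_50(2, 384))  # 4 licenses under the old model
print(licenses_needed_51(2))       # 2 licenses under 5.1
```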

Support for larger virtual machines: 64 virtual CPUs (vCPUs) and 1TB of virtual RAM (vRAM).  The virtual machine format has also been updated to version 9 to support the larger VMs, CPU performance counters, and graphics acceleration.

vSphere Distributed Switch now adds health check, configuration backup and restore, rollback and recovery, and LACP functionality.

Enhanced vMotion – supports vMotion without shared storage.
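
For the API-curious, here's a minimal, untested sketch of what this looks like programmatically using the pyVmomi Python bindings – the key is a single RelocateVM_Task whose spec names both a new host and a new datastore.  The `vm`, `target_host`, `target_datastore`, and `target_pool` objects are assumed to already be looked up:

```python
from pyVmomi import vim  # pip install pyvmomi

def shared_nothing_vmotion(vm, target_host, target_datastore, target_pool):
    """Live-migrate a powered-on VM to a new host AND a new datastore in one
    step - the 5.1 'Enhanced vMotion' case, no shared storage required."""
    spec = vim.vm.RelocateSpec()
    spec.host = target_host            # compute destination
    spec.datastore = target_datastore  # storage destination
    spec.pool = target_pool            # resource pool on the destination
    return vm.RelocateVM_Task(spec)    # returns a vCenter task to monitor
```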

vSphere Data Protection – a new architecture based on EMC Avamar that backs up virtual machine data to disk, agentless and deduplicated.

vSphere Replication allows for host-based replication, decoupled from SRM.

vSphere Web Client now has feature parity with the legacy vSphere Client, which will soon be retired.  The Web Client is also integrated with vCenter Orchestrator.

Support for 16Gb Fibre Channel and boot from FCoE via a software initiator.

vShield Endpoint is now bundled into the vSphere product line.

There have also been some updates to the features and entitlements of the different vSphere editions, shown below.

vCloud Networking and Security

VMware has rebranded the vShield product line as vCloud Networking and Security.  Below is a screenshot of the functionality available in each edition.

Virtual Storage Appliance (VSA)

A couple of updates in VSA 5.1:

  • You can now deploy VSA on an existing ESXi cluster.
  • Run vCenter within the VSA cluster.
  • Increase storage capacity while the cluster is online; this includes support for 2TB and 3TB drives.
  • Centrally manage multiple VSA clusters.

vCloud Director 5.1

  • Storage Profiles are now available in vCD, including Storage vMotion and Storage DRS support.
  • vCNS integration.
  • Snapshots for VMs and vApps.
  • Single Sign-On.

vCloud Suite 5.1

With the launch of vSphere 5.1 there is now a new licensing model that combines multiple products for an easier purchase and rollout around private cloud: vSphere Enterprise Plus, vCloud Director, vCloud Networking and Security, vFabric Application Director, Site Recovery Manager, and vCenter Operations.  It is licensed much like vSphere, i.e. per processor with no vRAM entitlement.  Below is a snapshot of the available editions.

One thing to note: existing Enterprise Plus customers with current SnS support get a free upgrade to vCloud Suite Standard.  Everyone else will have an upgrade cost that isn't that bad.

Cisco UCS Smart Play Bundles

It's that time again – Cisco has released new Unified Computing bundles!  One of the major changes this time around is that the bundles are now good for six months, beginning August 15th, 2012.

UCS B-Series Smart Play Bundles

Once again Cisco has offered both diskless and disk bundles for its B-Series promotions – just add HD to the part number.

UCS C-Series Smart Play Bundles

UCS B Series M3 Expansion Paks

UCS B Series M2 Blade Paks

New UCS Announcements

New Blades, Fusion-io, and E-Series for ISRs, oh my!

Cisco Live 2012 is in full swing, and with that always come new product announcements.  There have been a few announcements this week for Cisco's Unified Computing platform, including new blades, a partnership with Fusion-io for the M3 series blades, and the new E-Series UCS for the ISR Generation 2s.  First I'll cover some recent announcements affecting the Generation 1 UCS hardware and get the bad news out of the way!

End of Sale/End of Life Announcements for Generation 1 Hardware

On June 1, Cisco announced the end of sale and end of life for the 6120 and 6140 Fabric Interconnects, including all of their components such as the expansion modules.  With this announcement also came the EoS/EoL of the 2104 Fabric Extender.  The replacement parts are the Generation 3 components: the 6248/6296 Fabric Interconnects and 2204/2208 Fabric Extenders.  The 6100/2100 parts can still be ordered until November 20th – however, I can't imagine why you would given the functionality you get from the new hardware, not to mention all of the bundles have been shipping with the 6200/2200 series for the last two quarters.

Two important dates to be aware of with this announcement:

1) Last date that Cisco Engineering may release any final software maintenance releases or bug fixes: February 28, 2013.

2) Last date of hardware support: November 30, 2017.

The full announcement can be found here.

New B22M3 and B420M3 Servers

B22M3 Blade Server

Cisco has released a new lower-end server that, my guess is, replaces the B200M2 – I expect an EoS notice on it soon.  The following are some high-level server specs:

  • Half-width form factor.
  • Two Intel E5-2400 series processors – up to 16 cores.
  • 12 DIMM slots running at 1600 MHz for a total of 192GB of RAM utilizing 16GB DIMMs.
  • Two HDD/SSD bays.
  • One mLOM port for the VIC 1240 and a single mezzanine adapter port supporting Cisco and third-party hardware.
    • Supports up to 80Gbps of I/O throughput utilizing both the VIC1240 and the mezzanine card.

The same caveats apply to the B22M3 as to the other M3 servers, such as:

  • If a VIC1240 is not installed you will need to populate the mezzanine adapter slot with either a Cisco or 3rd-party card for network connectivity.
  • The mezzanine cards are not supported in 1-CPU configurations.

B420M3 Blade Server

A new higher-performance blade released was the B420M3 – my guess again is that it replaces the B440M2 server.  Talk about a beast as far as blade servers go; check out these stats:

  • Full-width form factor.
  • Four Intel E5-4600 series processors – up to 32 cores.
  • 48 DIMM slots supporting up to 1.5TB of RAM.
  • Four HDD/SSD bays.
  • One mLOM port for the VIC1240 and two mezzanine adapter ports supporting Cisco and third-party hardware.
    • Supports up to 160Gbps of I/O throughput.

The same caveats above apply.  The only thing I haven't confirmed, since the links to the Technical Specifications are broken, is how the networking will work with three network adapter ports…  My theory (see the quick lane math after this list):

  • 40Gbps utilizing the VIC1240 on the mLOM port.
  • 40Gbps utilizing the VIC1280/Port Expander on the mezzanine port.
  • 80Gbps utilizing the VIC1280 on the second mezzanine port.
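
Restating that theory as simple lane math – each backplane trace is a 10Gb KR lane, and the lane counts below are my assumptions from the diagrams, not confirmed specs:

```python
# Hypothetical lane counts per adapter slot on the B420M3 (my theory above).
lanes_per_adapter = {
    "VIC1240 on the mLOM port":        4,  # 4 x 10Gb = 40Gbps
    "VIC1280/Port Expander on mezz 1": 4,  # 4 x 10Gb = 40Gbps
    "VIC1280 on mezz 2":               8,  # 8 x 10Gb = 80Gbps
}

total_gbps = sum(lanes_per_adapter.values()) * 10
print(f"Aggregate I/O: {total_gbps}Gbps")  # 160Gbps, matching Cisco's figure
```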

What will be interesting is how these are tied to the CPUs.  I’ll update this once I have more information.

Fusion-io Support in M3 Servers

So Fusion-io announced yesterday that they have entered into an OEM relationship with Cisco.  This is really exciting, as Cisco once again has a great story from a storage standpoint.  Onboard flash that lets applications run at speed instead of waiting on spinning disk is definitely something more and more customers are looking for.  There isn't a whole lot of data yet on what will be supported, other than that it ships later this year.  Server-side flash is definitely on NetApp's roadmap and, from what I understand, will not be limited by the end card – meaning they can interface with Fusion-io within UCS.  Let's hope for a FlexPod update on that!

More info can be found here, although it's pretty limited at this point.

Cisco UCS E Series Blades for the ISR G2

This isn't Cisco's first shot at putting a blade in a router – the first attempt came about more than a year ago but had limited support.  They have since refreshed the line to include a few different models:

UCS E140S

This is a single-wide blade and will be supported in the 2911, 2921, 2951, and all of the 3900 series models.  It contains a single Intel Xeon E3 processor, either 8GB or 16GB of RAM, and up to two SATA, SAS, or SSD hard drives.

It has two internal and one external Gigabit Ethernet ports for connectivity and will have similar support as the C-Series for the Cisco Integrated Management Controller (CIMC), utilizing a 10/100 out-of-band management port.

UCS E140D and E140DP

This is a double-wide blade and will be supported in the 2921, 2951, and the 3900 series models.  It contains a single quad-core Intel Xeon E5-2400 processor and from 8GB up to 48GB of RAM.  It supports up to two hard drives, much like the E140S (three in the DP model).

It has two internal and one external Gigabit Ethernet ports for connectivity, with support for up to 4 additional 1Gb ports or one 10Gb Ethernet port supporting FCoE utilizing a PCIe adapter.

UCS E160D and E160DP

These models follow the 140 models with a couple of differences:

  • Six-core processor
  • Only supported in the 3900 series

And a Teaser!

UCS Central – a software package for managing multiple UCS domains.  It will ship when UCSM 2.1 releases, and the plan as of right now is to make it free.  One of my colleagues out at Cisco Live this year (@perryping) updated me based on a demo he was able to catch and had this to say:

“Will be almost entirely Read Only @ First Customer Ship (FCS).  It will be deployed as an OVA on a VM within vSphere. No abilities for global service profile at FCS; it is pretty slick looking though!”

Really looking forward to this – we are seeing more and more customers deploy multiple UCS domains, and this is a much-needed addition to get that single pane of glass for cross-data-center UCS management.

Cisco UCS Backplane Traces & Hardware Interoperability

After running around last week attempting to figure out whether the VIC 1280 was supported with the 2104 FEX in the M2 servers, and finding that some docs contradicted others, I decided to put together a reference guide for myself and coworkers.  Below you will find most of the slides I generated with the backplane traces and the configurations supported across Gen1/2/3 hardware from Cisco.

I'll start by saying all Gen1/2/3 hardware is backward/forward compatible with each other – although you might run into software features that are not available, e.g. utilizing a 2208 Fabric Extender with a 6140 Fabric Interconnect, you will be unable to port-channel those FEX-FI links.

Quick note about my diagrams – they all show the FEX backplane links based on slot 0 (or 1, depending on how you look at the blade); it's the top left.  You can do the math on what it looks like for the remaining slots 🙂

Halfwidth Blade with VIC1280:

Halfwidth Blade with 2104XP & VIC1280

Halfwidth Blade with 2204XP & VIC1280

Halfwidth Blade with 2208XP & VIC1280

Fullwidth Blade with VIC1280:

Fullwidth Blade with 2104XP & VIC1280

Fullwidth Blade with 2204XP & VIC1280

Fullwidth Blade with 2208XP & VIC1280

Quick Notes on B200M3

  1. All ODD backplane traces are pinned to the mLOM adapter.
  2. All EVEN backplane traces are pinned to the mezzanine adapter.
    1. This explains why you cannot have only the mezzanine adapter populated with a 2104 FEX (the backplane trace isn't populated for this functionality).
  3. Currently, populating both the mLOM AND mezzanine card slots (to future-proof yourself) with a 2104 results in the blade failing discovery – this is a software bug; see the point above.
  4. Currently, populating the mLOM and mezzanine card with anything other than the VIC1240 + Port Expander using the 2200 series of FEXs is not supported at FCS.  This means no support for the VIC1280, non-I/O mezzanine cards, or 3rd-party mezzanine cards right now.

NOTE: You will notice that throughout some of my B200M3 slides I only utilize a single backplane trace for each fabric on the 3rd-party mezzanine adapter – this may or may not be the case; it will depend on the manufacturers and on what Cisco supports whether they utilize 1 or 2 backplane traces to the FEX.
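
If it helps, the odd/even pinning rule from points 1 and 2 above boils down to this (a toy illustration of my own, not anything from Cisco):

```python
def b200m3_pinned_adapter(trace_number):
    """Odd backplane traces land on the mLOM (VIC1240); even traces land on
    the mezzanine slot - per the B200M3 notes above."""
    return "mLOM" if trace_number % 2 else "mezzanine"

for trace in range(1, 9):
    print(f"backplane trace {trace} -> {b200m3_pinned_adapter(trace)}")
```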

B200M3 Backplane Traces

Update: FlexPod Entry Level Announcement

After learning some new information about the entry-level FlexPod design, I felt the need to put together an update on some of the questions I posed in an earlier blog post, seen here.  First a quick recap: Cisco & NetApp announced a new FlexPod design for smaller workloads.  The new design consists of the NetApp FAS2240 storage array, the Cisco Unified Computing System, and Cisco Nexus series switches.  Below is a quick diagram I put together with all the parts & pieces and how they will interconnect.

What Protocols will be supported?

One of the questions I posed when the entry FlexPod was announced was which protocols and configurations would be supported.  As you can see in the diagram above, it is 10Gb – supporting the iSCSI, NFS, and CIFS protocols.  Considering this is meant for smaller workloads, it is not a huge deal that it will not support native Fibre Channel or FCoE (the mezzanine card is not a unified adapter).  If by chance you outgrow it, due to either capacity or I/O load, you always have the option of upgrading to a FAS3200 series and turning the 2240 into a disk shelf for that FAS system.

Cost Comparison & Scale:

Prior to UCSM 2.0(2m), to manage the C-Series rack-mount servers you were required to connect them via a Cisco 2248 Fabric Extender for management and the P81E to the Fabric Interconnect, seen below.  This obviously doesn't scale nearly as well, and it adds cost since you were purchasing additional port licenses for the Fabric Interconnect.  The ability to use the 2232 Fabric Extender will definitely make this a lot more cost effective, as you can connect up to 16 rack-mount servers to a single pair of 2232 Fabric Extenders.

From a compute standpoint you can scale either through additional C-Series rack-mounts or through B-Series blade servers within the pod.  You heard that right – the FAS2240 will be supported in a B-Series blade environment!

From a storage standpoint, the FAS2240 supports up to 5 additional disk shelves beyond the 24 drives within the controller enclosure itself.  If you manage to outgrow that, you can use the 2240 as a disk shelf behind a 32XX or 62XX series system.  You could also add additional controllers, although you will not have the capability for Cluster-Mode in the new code – the requirement of 10Gb inter-cluster links would leave you with only 1Gb connectivity for server/client traffic.
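
For a rough sense of spindle scale, assuming 24-drive shelves (a DS2246, for example – my assumption, so check NetApp's configuration limits):

```python
internal_drives = 24   # drives inside the FAS2240 controller enclosure
extra_shelves = 5      # maximum additional shelves per the announcement
drives_per_shelf = 24  # assuming 24-drive DS2246 shelves

print(internal_drives + extra_shelves * drives_per_shelf)  # 144 drives total
```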

Cabling and Config Considerations

This should not be anything new, since the 2208 I/O Module has been shipping for a while now, but it's worth bringing up: cabling for the 2232 Fabric Extender.  On the 6248 Fabric Interconnect there are six sets of 8 contiguous ports, with each set of ports managed by a single chip.  If you will be cabling up all 8 uplink ports from the 2232, you will want to maximize your VIF configuration and connect them to a complete set of 8 ports on the Fabric Interconnect.  More information on this can be found here.
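
Here's a small sketch for sanity-checking a cabling plan against that rule; the only assumption baked in is the 8-ports-per-ASIC grouping described above:

```python
def fi_port_group(port, group_size=8):
    """6248 ports come in contiguous groups of 8 that share one ASIC:
    ports 1-8 are group 0, ports 9-16 are group 1, and so on."""
    return (port - 1) // group_size

def uplinks_share_asic(ports):
    """True if every 2232 uplink lands within a single 8-port group."""
    return len({fi_port_group(p) for p in ports}) == 1

print(uplinks_share_asic(range(1, 9)))   # True  - ports 1-8, one ASIC
print(uplinks_share_asic(range(5, 13)))  # False - straddles two groups
```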

From a configuration standpoint you will want to choose how your uplinks function – either hard-pinning or port-channeling – via the FEX discovery policy in UCSM.

New UCS Bundles

New Q4 UCS Bundles

First off, Cisco has extended the existing M2 Series bundles through July 28th, 2012.  Those bundles can be seen below:

Cisco Q3 Bundles

The new additions specifically relate to the new M3 server line Cisco launched.  The available bundles are below:

Q4 M3 Server Bundles

As with all UCS bundles, they can be ordered with or without hard drives included – although it is recommended to leave out the HDDs and stick with boot from SAN to keep in line with the stateless computing model Cisco has rolled out.

As usual with bundles, Cisco has released additional add-on packs to expand the bundles with easy-to-use part numbers and discounted pricing:

Q4 M3 Add-on Packs

New FlexPod Announcement

This week Cisco and NetApp jointly announced a new FlexPod offering aimed at small/medium-sized businesses, which could also be used for remote sites or scaled-out applications.  They have also validated a few management tools to make it easier to administer, maintain, and automate FlexPod environments.

Entry Level Pod

The entry-level FlexPod incorporates the NetApp FAS2240 series controllers, Cisco UCS C-Series rack servers, 6200 series Fabric Interconnects, Nexus 5000 switches, and Nexus 2232 Fabric Extenders.  While I think there has been a need for an entry-level pod, I question all of the required components:

NetApp FAS2240

While the FAS2240 is an excellent entry-level box, you are unfortunately left with a 'one or the other' protocol approach – either you get native Fibre Channel or you get 10Gb file protocols, because the mezzanine card is not a unified adapter.  Maybe this is a push toward NFS datastores for VMware environments – however, in a UCS environment that leaves you with either local boot or iSCSI boot from SAN for your hypervisor.

The protocol issue aside, the FAS2240 does give you a lot of scalability options, including being used as a disk shelf behind a 3K series system if you happen to outgrow it.

Cisco Components

From a server perspective, the C-Series rack servers will be utilized for the compute portion of the FlexPod.  Obviously the P81E Virtual Interface Card (VIC) will need to be utilized with the Nexus 2232 Fabric Extender.  As of the current release, both the 10Gb ports from the P81E and the 1Gb onboard LOM ports need to be connected to the Nexus 2232, forcing you to burn 10Gb ports for 1Gb connectivity – in a future release the CIMC traffic will flow in-band over the P81E and remove the requirement for the 1Gb ports.

The 6200 series Fabric Interconnects are included in the environment for management of the C-Series servers, Nexus 2232, and VIC P81E, which helps with management of the server configuration – RAID, boot order, and most other things we get in a B-Series environment.

Nexus 5Ks will be utilized as the core of the pod for 10Gb connectivity and for other possible termination points such as Fibre Channel.

Summary

So while I agree there needed to be an entry-level FlexPod, there are still a lot of unknowns, seeing as there have not been any updated Design Guides or Deployment Guides for the new pod:

  • What configurations of the FAS2240 will be supported – native Fibre Channel, only 10Gb Ethernet (NFS/CIFS/iSCSI), or both?
  • How cost effective will the solution be?  It has been my experience that once you include Fabric Interconnects in a C-Series environment, the cost of a B-Series environment tends to beat it out – perhaps the price break for the FAS2240 vs. the 3200 series will help with the price delta?
  • Now that the FAS2240 has been approved at the entry level, will it also be supported in the prior FlexPod architectures, allowing deployments where B-Series is preferred but a FAS3200 series may not be required?

I guess I am off to run some pricing exercises!