Tuesday, February 2, 2010

Cisco UCS Information for "Server People"

I've been working with the UCS equipment as time allows for the last few weeks.  I've also had the privilege to visit the Cisco TAC for UCS here in Raleigh, NC, to pick their brains a bit.  Here is a quick bullet list of some features I found interesting from my server-based perspective.

  • The amount of 10Gb oversubscription from the UCS chassis to the 6100s (Fabric Interconnects) is proportional to the number of uplinks.  There are four connections maximum per FEX, per chassis.  One uplink provides an 8:1 ratio, two uplinks a 4:1 ratio, and four uplinks a 2:1 ratio.  Three uplinks is not a supported configuration.  (My next article will be a detailed look at the FEXs.)
  • This may seem obvious to the Cisco folks, but it wasn't to me: the 6100s are "backwards".  They are designed to be mounted in the back of the rack so all cabling is toward the rear of the UCS chassis.  Cooling is "front to back" on the 6100s to match the UCS chassis.
  • You can "mix and match" adapter cards on each blade because the uplink is a common 10Gb fabric.  This means if you only need a few Palo cards in a chassis, with CNAs on the rest, you can do that.  The Service Profiles won't be compatible, but you do have that flexibility.
  • Only Cisco memory is supported on UCS blades; no third-party memory.
  • The UCS chassis needs two power supplies but ships with zero.  Three power supplies provide N+1 redundancy and four provide N+N.  As more power supplies are added, the load is distributed evenly across them.
  • The UCS chassis has eight fans but needs only four to operate, so it is N+N redundant.
  • When a chassis is plugged in, the blades are powered up serially to prevent an in-rush current spike that could blow the circuits.  This has been a problem for some of my customers with other blade systems in the past.
  • The 6100s are active/active for 10Gb data but active/passive for management of the chassis.  At any given time one 6100 is active and constantly passes information over the L1/L2 connections to keep the passive management module up to date.
  • The FEX connections on the back of a UCS chassis CAN'T be cross-connected to the 6100s.  I'll have more information on this in the next article.
  • The UCS Manager allows up to four KVM connections at one time.  I'm still checking whether this is four per UCS Manager, four per chassis, or four per blade.  (If you know, please leave a comment!)
  • The maximum number of vNICs the Palo card can present is 56, and the number is dependent on the number of FEX links from the chassis to the 6100s.  I'm still getting details and will post them in the near future.
  • The 6100 Fabric Interconnects are licensed per port.  The 6120 comes with 8 ports licensed and the 6140 with 16.  Additional ports must be purchased individually, much like on an FC switch.  This applies to both northbound and southbound traffic.  (I REALLY don't like this!!!)
  • Smart Net needs to be purchased for the 6100's, each chassis, the blades, and the expansion modules in the 6100's.  Smart Net lasts for one year, so if you want three years of coverage, you need to purchase a quantity of three of the Smart Net item for each.  This is VERY different from HP and IBM servers.
  • The first three chassis in a UCS domain (managed by the same pair of 6100s) communicate via a SEEPROM to detect and prevent a split-brain scenario in the event the 6100s lose communication.
  • The UCS Manager includes the ability to e-mail alerts and also "call home" to Cisco, much like a NetApp storage system.
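The oversubscription ratios in the first bullet fall out of simple division.  A minimal sketch, assuming eight blade-facing 10Gb connections per FEX (one per half-width blade slot), which is my reading of where the 8:1 figure comes from:

```python
# Sketch of the chassis-to-6100 oversubscription math.
# Assumption: 8 blade-facing 10Gb links per FEX (one per half-width blade slot).
BLADE_LINKS_PER_FEX = 8
SUPPORTED_UPLINKS = (1, 2, 4)  # 3 uplinks per FEX is not a supported configuration

def oversubscription_ratio(uplinks):
    """Return the N in the N:1 oversubscription ratio for a given uplink count."""
    if uplinks not in SUPPORTED_UPLINKS:
        raise ValueError(f"{uplinks} uplinks per FEX is not a supported configuration")
    return BLADE_LINKS_PER_FEX // uplinks

for n in SUPPORTED_UPLINKS:
    print(f"{n} uplink(s): {oversubscription_ratio(n)}:1")
```

Running this reproduces the 8:1, 4:1, and 2:1 ratios from the bullet above.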


Brad Hedlund said...

The formula to calculate the number of vNICs and vHBAs on the Palo adapter is as follows:

15 * (number of FEX uplinks) - 2

So with 4 uplinks per FEX you can have 58 Palo virtual adapters. Typically you would use 2 of those for vHBAs, leaving 56 vNICs available for VMs or hypervisor switches.
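Brad's formula above is easy to sanity-check in a couple of lines (the function name here is just for illustration):

```python
# Brad's formula: total Palo virtual adapters = 15 * (FEX uplinks) - 2
def palo_virtual_adapters(fex_uplinks):
    return 15 * fex_uplinks - 2

adapters = palo_virtual_adapters(4)  # 4 uplinks per FEX
vnics = adapters - 2                 # reserve 2 for vHBAs
print(adapters, vnics)               # 58 total, 56 vNICs left over
```

This matches the 56-vNIC maximum from the bullet list in the post.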


Aaron Delp said...

Hey Brad! Thank you for that. I've actually received two "official" Cisco answers about Palo vNICs. One was that answer (came from the BU), and I have received another technical one that is slightly different. I can share with you via e-mail if you would like. I don't want to post here in case it isn't correct.

Thanks again for the comment!

Andrew Miler said...

As I don't work for a Cisco reseller (but do do a lot with VMware + storage), I've been tracking UCS overall... thanks for the very detailed posts.

In reading them, though, I'll admit I'm just seeing a whole ton of complexity that almost feels mind-boggling at times (I'll admit for the record I'm not hugely crazy about blades in general ;-).

If you don't mind being all pre-salesy, what would be the main benefits/drivers for UCS?

Note: I am currently focused on the channel space, so while I try to keep aware of the entire range of capabilities, I am usually helping customers with relatively smaller VMware installations (i.e., not usually more than 10-20 ESX hosts and 300-500 VMs), so I stay focused on simplicity for setups of that size.

If it makes more sense to respond privately and/or not approve this, no problem... just figured I'd post my thoughts/questions as a comment here in case you wanted the response to be publicly available as well.

Thanks again.

Aaron Delp said...

Hey Andrew - I'll put on my pre-sales hat for a second here. If you want to follow up, you know how to get hold of me!

What I really like about UCS right now is the management aspect. The software, while complex-looking at first, looks to be very different. I will know more in the upcoming weeks as I dig into it.

Here's how I see the market right now. If you need 3-4 hosts, buy some rack servers and call it a day.

If you need 5+, then we look at blades. While there is setup work up front, it is more of a "wire it and forget it" approach that I really like. With blades you get centralized management, lower power consumption, and cable consolidation that make really good sense, no matter who the vendor is.

Back to UCS: the hardware management looks really nice. Think of it as VMware host profiles for hardware. You have the ability to flash firmware and manage the hardware in ways that both IBM and HP just don't have right now in their included products. They both depend on separate for-fee products, and I have worked with both over the years with very mixed results.

Hit me up if you want any more info!

Andrew Miler said...

Very helpful -- thanks for the overview. I'll be looking forward to the rest of your posts.