Showing posts with label EMC.

Wednesday, May 23, 2012

EMC & Puppet Labs Announce Project Razor

Something really cool happened this morning!  EMC and Puppet Labs jointly announced a next-generation provisioning system called Project Razor.  Brian and I had a chance to sit down with Puppet Labs and EMC to get some exclusive information on the project for a Cloudcast that we released this morning.  If you are at EMC World, be sure to check out Chad's World featuring Razor tonight at 5:30.

Rather than tell you all about it, here are a bunch of links hot off the press (but go listen to the podcast!!):
UPDATE: Since I posted this, here are a few links to actually get it, as well as posts on how to get started!

Monday, May 21, 2012

Come see VCE's CRAZY Mobility Demo at EMC World!

This is something you won't see every day!  In a few hours VCE, EMC, and Cisco will attempt a pretty amazing demo at the EMC World Solutions Exchange.  The teams have combined forces to take Workload Mobility to a new level.  Here are a few pictures of our team hard at work finishing the build this morning.



What is Workload Mobility you ask?

Utilizing EMC's VPLEX technology, we have created active-active clusters between all three booths on the trade show floor, and we will be running demos a few times a day to demonstrate the ability to seamlessly move workloads from one location to another with ZERO RPO and ZERO RTO.

Want to know more?  Come by VCE's Main Booth (#410) and ask questions!

I'll be working in both VCE booths (#410 and #515) anytime the Solutions Exchange is open.  Come by and say hi!

Lastly, a huge THANK YOU to the VCE Corporate Engineering team for the long hours they put in to make this happen!  Tom, Praphul, John, Aaron P, Bilal, and Sean, you guys rock!!

Wednesday, August 24, 2011

The #vHunt at VMworld is ON!

A number of EMC folks have already posted about this, including Chad, vTexan, Bas, and Matt.  EMC and a few partners, including VCE, will be hosting a #vHunt at VMworld this year.  To borrow from Bas's blog, the rules are simple:


So, what’s the deal? Simple: follow these steps and see if you win:
  • Members of the EMC vSpecialist and VCE teams will be tweeting various tasks and challenges throughout VMworld: day and night, on the floor, in the labs, in the sessions, at the parties, etc. You can identify those tasks and challenges by looking for the vHunt hashtag “#vHunt” in our tweets. You will find all kinds of things there: facts, trivia, or fun challenges.
  • Every time you respond to one of those challenges or tasks, tag it with “#vHunt”. Someone from EMC will be watching those responses.
  • During the convention (Monday, Tuesday, and Wednesday), 3x a day, our marketing folks will pick winners based on the criteria above and hand the winners their prize.
That’s all there is to it!  I have a few things in mind already.

In case you're wondering, I'll be pretty easy to find.  I'll either be in the VCE Booth (#1121) or in the Blogger's Lounge.  I won't be attending sessions this year, so stop by and say hello!

Wednesday, July 7, 2010

Comparing NetApp SMT vs. VCE Vblock - Are you a PC or a Mac?

I have seen a nice uptick in interest from my customers in both NetApp's SMT (Secure Multi-Tenancy) and the VCE Coalition's (VMware, Cisco & EMC) Vblock solutions.  This has led me to do a lot of digging, and I feel I'm now in a position to objectively compare and contrast them.

How do they compare at a very high level?

NetApp's SMT solution has a very PC feel to it.  It feels like one of the fancy "rigs" that you would build yourself for playing high-end PC games.  You take a blueprint (reference architecture), build it yourself, upgrade it over time, and turn the "nerd knobs" to suit your tastes.  It also has a specific use case.  VCE's Vblock is akin to a Mac.  It is an all-in-one solution that "just works".  It may not be as flexible at times, but the idea is that you don't need the "nerd knobs" to do your work on a daily basis.  The approach to each architecture is very different, and the best fit for you depends on what you are looking for.

Take Away #1: They are NOT the same solutions with different storage on the backend!

Both solutions attempt to solve customer pain points through the idea of "stacks".  By combining Servers, Network, Virtualization, and Storage into a single solution, we are solving many customers' Data Center problems in a way that is easier to digest.

Take Away #2: This is a new way to sell (as partners) and digest (as customers) the same technology!  As the solutions required in the Data Center gain complexity, we need a simplified way to tie all the technology together.

How about some more details?

Let's start with NetApp's SMT solution.  Here is a link to the SMT design document & implementation document for more information.  SMT is a reference architecture and framework that is very flexible and can be implemented into an environment in pieces using common technology.  You can't just purchase an SMT solution; you have to design it.  This is both SMT's greatest strength and its greatest weakness.  It is very flexible, but it requires a team with the knowledge to put all the pieces together.
A second factor that comes into play is that SMT is designed with a specific solution in mind.  SMT provides the ability for multiple tenants to exist on the same set of infrastructure.  Many IT departments and sub-organizations are a lot like my five-year-old: at times she just doesn't play well with others.  Customers historically don't like the idea of "sharing".  SMT is a bit like carving up a pie: everybody gets a piece and you don't have to share (or don't know you're sharing!).
The last point about SMT is the ability to migrate your environment over to it as time permits.  Swap out the network piece at one point, swap out the servers another time, etc.  This is easier for individual departments to digest.


How does this compare to the VCE coalition's Vblock?

Just take the opposite of everything I just said!  (just kidding, but it is very true)
Vblock is a product, not an architecture.  It is purchased and delivered in a "box" that is configured and ready to go.  You choose the size you need (I have a comparison of the current models here) and you can order it with minimal customization.  It "just works".  There is a misconception that Vblock is a black box of VMs that you have little or no control over.  While it's true that VCE does put guidelines around what hardware is within a given Vblock, there are no limits placed on what can be configured at a software level.
Because Vblock is an orderable solution, there is no fitting it into your environment over time.  You buy it, they ship it.  The key point here is that the customer will not have to configure it, because all implementation is done up front.

The one big criticism of Vblock I have heard to date is: what do you do with it?  SMT solves a technical problem with a solution; Vblock, on the other hand, provides preconfigured resources.  That is great if preconfigured resources are the problem you are trying to solve.
I would love to see the VCE coalition's message improve here.  Right now the message is "preconfigured resources" but I think it needs to be more than that; we need to use Vblock to provide solutions to customers, not just technology.

I have seen Vblock targeted at upper management (Director/C-Level) to date.  There is simply no other way to get all the departments on the same sales cycle and get them to agree on the solution so that it can be purchased at the same time.  Getting everyone in a circle holding hands, singing Kumbaya, and agreeing to purchase a Vblock requires sponsorship.
Here is a summary of my points (again, sorry for the image as a table):


In summary, I love both solutions and I look forward to seeing both evolve over time.  I do believe the future of the Data Center revolves around this type of solution.  What do you think?

Thursday, June 17, 2010

My #1 Issue with VMware ESXi Today

It is no secret that VMware has anointed ESXi the future hypervisor of choice.  I am often asked what I think of ESXi and if I think it is functionally equivalent to ESX.  My short answer is NO, not today.  My opinion comes down to one key reason: there is no good way today to realign vmdk's on VMFS datastores under ESXi.  If you don't know what I mean, you need to understand the concepts below.

What is the difference between vmdk alignment and realignment and why should you care?

The alignment of vmdk's is the big white (or pink if you prefer) elephant in the room from a performance standpoint.  It is often ignored by many customers, and misalignment can lead to as much as a 30% degradation in performance!!  To their credit, both NetApp and EMC have recognized this and released documentation to address the issue.  Here are links to VMware & NetApp papers on the subject.  EMC documents H2197, H2529 & H5536 on PowerLink also provide information on the subject.  EVERY storage vendor has this problem; EMC and NetApp are the only two I know of that have spoken about it.  If you aren't aligning/realigning your virtual machines as part of standard best practices, you should be!


What is the difference between alignment and realignment?

As pointed out to me by Duncan Epping in a Twitter conversation, there is a big difference between alignment and realignment.  Alignment happens when the partitions are created but before the OS is installed.  Let's be honest, it is a pain in the butt, and many don't learn about alignment issues until they start to have problems.  Because it is a pain, many choose realignment to remedy the problem.  Realignment is the concept of using tools (vOptimizer and NetApp's mbrscan/mbralign come to mind) to align an existing vmdk that is misaligned.  This process requires taking the machine offline and then rewriting the entire vmdk in an aligned format.  I'm speaking about realignment in this article.
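
To make the idea concrete, here is a minimal sketch (Python, not any vendor's tool) of what an alignment check actually looks at: the starting offset of each MBR partition inside the guest's raw disk.  The file name and the 4 KB boundary are assumptions for illustration; your array's block size may differ, and this only inspects a disk image you can read directly (for example, a copy of a *-flat.vmdk), it does not realign anything.

# Minimal sketch: report whether the MBR partitions inside a raw guest disk
# image start on an aligned boundary. Assumes MBR partitioning and a 4 KB
# boundary; both are assumptions for illustration, not a vendor tool.
import struct
import sys

SECTOR_SIZE = 512          # bytes per LBA sector
ALIGN_BOUNDARY = 4096      # 4 KB; adjust to your array's block size

def partition_offsets(image_path):
    """Yield (slot, starting LBA) for each populated MBR partition entry."""
    with open(image_path, "rb") as f:
        mbr = f.read(512)
    for i in range(4):                                        # 4 primary slots
        start_lba = struct.unpack_from("<I", mbr, 446 + i * 16 + 8)[0]
        if start_lba:                                         # 0 = empty slot
            yield i, start_lba

if __name__ == "__main__":
    # e.g. python check_alignment.py win2k3-flat.vmdk  (hypothetical file name)
    for slot, lba in partition_offsets(sys.argv[1]):
        offset = lba * SECTOR_SIZE
        status = "aligned" if offset % ALIGN_BOUNDARY == 0 else "MISALIGNED"
        print(f"partition {slot}: starts at LBA {lba} ({offset} bytes) -> {status}")

A stock Windows 2003 install starts its first partition at LBA 63 (31.5 KB), which fails this check; Windows 2008 starts at LBA 2048 (1 MB), which passes.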

Are all virtual machine operating systems affected?

NO - Windows 2003 and earlier are misaligned by default, but Microsoft changed the alignment of Windows 2008 so it is now aligned by default.  Be careful though: Windows Dynamic Disks, Citrix servers, and many other special cases often won't work.  See my article on the NetApp tools for more information.  I'm not an expert on Linux VMs, but if you have experience, please leave a comment!  Lastly, if you align your template, all deployments from that template will also be aligned.  This is really nice IF you remember to align your template before installation of the OS and before you deploy all your machines!

What does this have to do with ESXi?

Both vOptimizer and NetApp mbrscan/mbralign utilize the service console, so ESX works just fine.  Due to the lack of a service console, ESXi is a different story.  With NFS datastores, you can mount a Linux host to the datastore and perform the alignments using NetApp mbrscan/mbralign.  This process is documented by Nick Triantos here.  I'm not sure if you can use vOptimizer in this way.  VMFS is another story.  Currently, there doesn't appear to be a clear method to realign vmdk's: no service console to run the utility and no way to access the VMFS outside of ESXi.  What do you do if your storage vendor doesn't support NFS and is VMFS (LUN-based) only?  You have problems, my friend.

I did a poll on Twitter yesterday, and the results confirmed the same findings as my customer base.  Many still use Windows 2003-based servers, and most P2Vs are Windows Server 2003 and earlier.  In addition, ESXi is trending to the market faster than Windows 2008 virtual machines, yet many have no idea about the concept of vmdk alignment/realignment.

What are the work-arounds?

  • Create a Linux host, attach it to an NFS datastore, and run NetApp mbrscan/mbralign from there (not sure if that will work with vOptimizer) - see the sketch after this list
  • Set up a single ESX host to perform the alignments on VMFS datastores
  • Align your Windows 2003 template and deploy aligned machines on VMFS and P2V all machines to NFS
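
As an illustration of the first workaround, here is a rough sketch (Python, run from the Linux host with the NFS datastore mounted) that walks the datastore and flags guests whose first MBR partition does not start on a 4 KB boundary.  The mount point, file pattern, and boundary are assumptions; the actual realignment still has to be done with a tool like mbrscan/mbralign or vOptimizer while the VM is powered off.

# Rough sketch: scan an NFS-mounted datastore for flat vmdk's whose first MBR
# partition looks misaligned. Mount point and 4 KB boundary are assumptions;
# this only finds candidates, it does not realign anything.
import os
import struct

DATASTORE_MOUNT = "/mnt/datastore1"    # hypothetical NFS mount point
ALIGN_BOUNDARY = 4096                  # adjust to your array's block size

def first_partition_offset(flat_vmdk):
    """Return the byte offset of the first MBR partition, or None if empty."""
    with open(flat_vmdk, "rb") as f:
        mbr = f.read(512)
    start_lba = struct.unpack_from("<I", mbr, 446 + 8)[0]
    return start_lba * 512 if start_lba else None

for root, _, files in os.walk(DATASTORE_MOUNT):
    for name in files:
        if name.endswith("-flat.vmdk"):
            path = os.path.join(root, name)
            offset = first_partition_offset(path)
            if offset is not None and offset % ALIGN_BOUNDARY:
                print(f"candidate for realignment: {path} (offset {offset} bytes)")
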
When/How could this be fixed (I'm guessing here, I have no inside knowledge of any vendor products)?

  • ESXi includes the ability to detect and realign vmdk's
  • Storage Vendors include the ability to detect and realign vmdk's
  • Incoming P2V's of older OS's are aligned in transit
  • Windows Server 2008 becomes the standard
Bottom Line - Can you get around the issue?  Yes you can.  Is it a pain?  Yes it is.  Until this is fixed, I will continue to recommend ESXi in NFS environments, but I will continue to resist ESXi with VMFS unless the customer fully understands the ramifications of this issue.  I look forward to using ESXi as my standard, and we get closer and closer every day, but we also need to be careful.

Wednesday, June 9, 2010

Comparing Vblocks

I believe one of the most interesting concepts to come along in our industry recently has been Cisco/EMC/VMware's Vblock.  My best definition for Vblock is a reference architecture that you can purchase.  Think about that for a second.  Many vendors publish reference architectures that are guidelines for you to build to their specifications.  Vblock is different because it is a reference architecture you can purchase.  This concept is a fundamental shift in our market to simplify the complexity of solutions as we consolidate Data Center technologies.  We are no longer purchasing pieces and parts, we are purchasing solutions.
Anybody who knows me knows I love the term "cookie cutter".  I use it all the time because it very simply conveys the idea of mass replication in a predefined way.  Vblock is a "Cookie Cutter" Data Center.  As long as you stay within the guidelines presented in the reference architecture, the product is guaranteed to work.
I took some time this week to compare and contrast all of the various Vblock configurations.  Take particular notice of the items highlighted in yellow below.  Vblock 0 is very different from Vblock 1 and 2 due to its basis in IP storage vs. FC storage, as well as its use of ESXi vs. ESX.  Here are my findings (Please excuse the use of a graphic for the table, blogger sucks at tables):


UPDATE (I forgot this part) - A Few Notes on Vblock 0
Because Vblock 0 boots from local disks instead of FC boot-from-SAN, you lose the stateless ability of Cisco UCS.  This isn't a big deal in a smaller environment, but stateless becomes increasingly important as the solution scales up.  The Vblock 0 reference document doesn't list the disk characteristics at all.  As a matter of fact, it makes no mention of the disk configuration or the IOPS Vblock 0 will generate.  I hope the document is updated to include this information in the near future.


Now, the million dollar question: How many virtual machines can I fit on each solution?  The reference materials I used for this didn't really provide numbers that I would use, so I decided to use my own.  My assumptions are presented, as well as my work, in case you want to change the math or call me an idiot.
The Vblock 1 and Vblock 2 Reference Architecture document lists an estimate of 4 VMs per core using 2GB for the virtual machine size, so I thought I would start there.  Vblock 0 and Vblock 1 contain blades with 48GB.  Vblock 1 also contains a few blades with 96GB, and Vblock 2 is all 96GB.  Here is the math I used to figure out the proper ratio of VMs per core per GB:

4 VMs per core w/ 48GB:
4 VMs per core * 8 cores per blade = 32 VMs per blade; 32 VMs * 2 GB per VM = 64 GB needed / 48 GB total memory ≈ 1.33x oversubscription (roughly 33% memory over-subscription)

Using the numbers presented in the reference architecture works out pretty well.  I wouldn't be comfortable with a higher density than that for the 48GB solution.

8 VMs per core w/ 96GB:
8 VMs per core * 8 cores per blade = 64 VMs per blade; 64 VMs * 2 GB per VM = 128 GB needed / 96 GB total memory ≈ 1.33x oversubscription (roughly 33% memory over-subscription)

Since the blade has double the memory, I doubled the VMs per core to achieve the same oversubscription ratio.  I don't mind telling you that 64 VMs on a blade takes me to the edge of my comfort zone, but I'll accept that density level for this calculation.

Using the values of 32 VMs per 48GB blade and 64 VMs per 96GB blade, you get the following minimum and maximum ranges for each of the Vblocks.  Remember that Vblock 1 contains BOTH 48GB and 96GB blades, so the math gets a little harder there.
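
If you want to play with the assumptions yourself, here is the same math as a small Python sketch.  The per-blade figures (8 cores, 48GB or 96GB of memory, 2GB per VM) come straight from the reference architecture numbers above; the blade counts in the mixed example are placeholders, not the actual Vblock 1 configuration.

# Same fuzzy math as above: VMs per blade and memory oversubscription.
CORES_PER_BLADE = 8
GB_PER_VM = 2

def blade_density(vms_per_core, blade_memory_gb):
    """Return (VMs per blade, memory oversubscription ratio) for one blade."""
    vms = vms_per_core * CORES_PER_BLADE
    oversub = (vms * GB_PER_VM) / blade_memory_gb
    return vms, oversub

for vms_per_core, mem_gb in [(4, 48), (8, 96)]:
    vms, ratio = blade_density(vms_per_core, mem_gb)
    print(f"{vms_per_core} VMs/core on a {mem_gb}GB blade: {vms} VMs, {ratio:.2f}x memory oversubscription")

# Hypothetical mixed configuration: 16 x 48GB blades plus 16 x 96GB blades
total = 16 * blade_density(4, 48)[0] + 16 * blade_density(8, 96)[0]
print(f"hypothetical mixed configuration: {total} VMs")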

Aaron's Fuzzy Math Vblock Minimums and Maximums:

There you have it.  What do you think?  Am I even close?  Please leave a comment!

Tuesday, January 19, 2010

That's A Lot of Hardware!

Just a quick post today.  As some saw on Twitter yesterday, I will be getting my hands on some pretty impressive hardware.  My company has decided to move our customer demo lab to our office, and all the gear arrived yesterday.  Here are a few pictures for now, but we will be setting all of this up over the next few weeks.  I will be posting some impressions and tips as I go.  With my HP and IBM blade background, I am hoping to write a good bit on the UCS experience.  In addition to the EMC NS-120, I am hoping to integrate our existing EMC NS-960 for some experience with that hardware as well.  Should be interesting!!

Picture #1 - 2x Cisco UCS Chassis each with 4 blades, 2x Cisco UCS 6120 Fabric Interconnects, 1x Cisco Nexus 5020


Picture #2 - A LOT of NetApp disk shelves (NetApp controller not pictured)


Picture #3 - EMC NS-120 still in the box (but not for long!)