Some interesting blog posts appeared yesterday, starting with this article on ESXi Scratch Partition Best Practices on the VMware ESXi Chronicles blog. This VMware KB article includes more technical detail as well as a resolution. The KB article states that many SAS and Boot from SAN (BFS) installations will NOT create a default scratch partition at install time due to the possibly shared nature of both the SAS and BFS architectures.
Both Scott Lowe and Forbes Guthrie followed up with articles based on previous ESXi installation experience. Their great articles got me thinking one step further. Since I tend to think of servers in terms of Cisco UCS these days, doesn't this mean that ALL Cisco UCS servers (and most other vendors' servers as well) will not have a default scratch space, since most installations are now either SAS or BFS? If so, shouldn't this KB article represent a new best practice for ALL installations of ESXi, future and past?
UPDATE: Jeremy Waldrop shared with me that many of his UCS BFS installs include the scratch space by default and Scott Lowe shared on his blog that sometimes his didn't. Looks like we have a "feature" on our hands where sometimes the scratch partition is created and sometimes it isn't. My recommendation to everyone is to make sure you check for the partition post installation until we gather more information on the subject.
Does this need to be a best practice? I'm sure the answer here will be "it depends." It depends on how much not having the scratch space matters to you. Read the KB article carefully and understand what you are losing by not having a default scratch partition. From reading the article, it seems worth it to me. What are your thoughts?
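If you want to check from the command line, here is the quick method I use from the local Tech Support Mode console. This is a sketch from memory (the datastore name and hostname in the path are just placeholders), so verify the option names against the KB article before relying on it:
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-hostname
The first command shows where scratch currently lives (a /tmp location generally means no persistent scratch partition was created); the second points it at a directory you have created on VMFS or NFS storage. A reboot is required before the new location takes effect.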
Wednesday, April 27, 2011
Tuesday, March 2, 2010
IBM's New eX5 Server Announcements
I wanted to tell everyone about the new server lines IBM announced today. I attended IBM Business Partner training on this a few months ago and the products are impressive, but I was under NDA until today and couldn't speak about any of it. I can now talk about the IBM products specifically, but I'm still not able to talk about the Intel Nehalem EX chipset. I will have in-depth posts on the EX chipset when it is officially released. Also, I am writing this from notes taken a few months ago so a few things might be slightly off. If you see a mistake, please let me know and I will correct it!
As always, Kevin does an awesome job of laying out the products (and has some great pictures) so head over to his site for an introduction. Hot off the press, Kevin has another article on just the X5 blade here.
Here's the basics:
- The servers contain Intel's yet-to-be-announced Nehalem EX chipset. I can't discuss the details on that since I'm still under NDA, so I will present only what has been pre-announced by Intel.
- The Intel Nehalem EX (Intel 75XX) was designed by Intel to be the 4 socket follow-up to the previous generation, the Intel 74XX, which powered the IBM 3850 M2 and HP DL580 servers.
- (My opinion here, don't fuss at me Intel and IBM) Intel intended the Nehalem EX to be a 4 socket architecture. IBM modified the architecture in cooperation with Intel for 2 socket servers.
- IBM has released the following servers based on Nehalem EX:
- 2 socket rack server called the x3690 X5. It can hold two Intel 75XX processors and has 32 memory slots
- 2 socket blade server called the X5 blade. This was a pre-announce so I can't talk much about it yet. One thing that will be cool about the blade is that it will be "lego based": you can buy one and snap on another to create a 4 socket blade
- 4 socket rack server called the x3850 X5 and the x3950 X5. This will stack like the previous generation of 3850's and 3950's. Each 3850/3950 will hold four Intel 75XX processors and 64 slots of memory
- Additional memory can be bolted on to any of the models above using an IBM-exclusive attachment called the MAX5. This will be a 1U unit (for the rack servers) with 32 memory slots, or a single-blade-width attachment that gives you an additional 24 memory slots. It attaches directly into the Intel QPI (QuickPath Interconnect) bus for easy, low latency memory expansion of the models (worked slot counts after this list)
- If I remember correctly, both the 3690 and the 3850/3950 will have 1GB onboard network ports, but an Emulex card can be added to the systems to replace the onboard 1GB with 10GB
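To put the MAX5 numbers in perspective (my arithmetic based on the slot counts above, so double check it against the announcement materials): an x3690 X5 goes from 32 to 64 memory slots with a MAX5 attached, a single x3850 X5/x3950 X5 goes from 64 to 96, and the X5 blade picks up an additional 24 slots. Multiply by whatever DIMM size you plan to buy and the memory totals get very large, very quickly.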
I will point you to links here and here. As I stated before, I'll have in-depth analysis of the chipset when it is announced. The "why we care" part is actually really cool. There are some great advancements in the technology, but there are also many things that will make your life easier at time of purchase as well.
In conclusion, I'm very excited about the 2 socket offerings. They appear to be very innovative and exciting. I wasn't given access to any other vendor's early release information so I'm not even sure if anybody else will be offering 2 socket servers based on Nehalem EX. Interesting times indeed...
Labels: IBM
Monday, February 15, 2010
Buying an HS22V for VMware? READ THIS!
I have had some interest from our customers in the new IBM HS22V Blade Server. There is a great overview of the details of the new blade over at Kevin's site here. I did find out one very interesting thing that I wanted to share. The HS22V is different from previous models because it will only take up to two 1.8 inch SSD drives. No hard drives here! That's a great advancement except for one thing: the list price of ONE of the drives is currently over $1600!!! This means over $3200 (list prices!) to load an operating system if you want a RAID-1 set. That is a pretty high price. Here's a screenshot of the IBM configuration tool with the SSD drive.
But, if you are running VMware ESXi you have another option. Hidden in the other section (not the storage section) is an option for ESXi version 3.5 or 4.0. The best thing is it is only $75 list!!
This cost difference brings about an interesting choice for ESX based organizations vs. ESXi. How much are you willing to pay for that Service Console?
Monday, January 25, 2010
Cisco UCS vs IBM and HP - Where are the Brains?
UPDATE: Thank you to everyone for the great comments! Please look for the updated sections that I have highlighted below. I have learned a lot from everyone and I will continue to update this as more information rolls in. I welcome any and all comments. Thank you!
As many of you know, my company recently acquired some very nice lab gear for customer demonstrations and proof of concept work. Many of my peers already know the UCS systems inside and out but I really need hands on to "get it".
As I learn the UCS system I will share my experiences here. My perspective is to share what is different (good and bad) about UCS compared to the IBM and HP Blade products. Before anyone asks, I will only be covering IBM and HP. If you have additional experiences, please share them in the comments. I also have no intention of picking sides. At the end of the day I sell and support all of the above systems and I can get the job done with all of them. They all have their own unique strengths and weaknesses that I intend to highlight.
In case you aren't familiar with what UCS is, I suggest you take a look at Colin's post over on his blog. He does a great job putting all the pieces together. Plus, I'm going to steal a few of his graphics. (thanks Colin!!)
A UCS system consists of one or more chassis and a pair of Cisco 6120 switches that provide both the 10GB bandwidth to the blades and the management of the system. The last part of that statement is the key to understanding how UCS is currently different from the competition. I define management in this example as control of the blade hardware state: identification, power on, power off, remote control, remote media, and the virtual I/O assignments for MACs and WWPNs.
By moving the management from the chassis level to the switch level, the solution can now take advantage of a multi-chassis environment. Here's a simple modification of Colin's diagram to illustrate this point.
(UPDATED!) What are the limitations to the Cisco UCS model?
Someone asked in the comments how this scales. Honestly that was a great question. I'm still learning Cisco and I was wrapped up in making it work. Let's take a look at that. Currently you can have up to 8 chassis per pair of UCS Managers (Cisco 6100's). That number will increase in the upcoming weeks and eventually the limit will top out at 40. But, the more realistic limitation is either 10 or 20 depending on the number of FEX uplinks from the chassis to the 6100's unless you are using double wide blades. If you don't understand what that means right now, don't sweat it. I'll be posting about that shortly.
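For the curious, here is my back-of-the-envelope math on where the 10 and 20 come from (double check me, I'm still learning this): a 6120 has 20 fixed ports, and every chassis consumes one port per FEX uplink on each fabric. Cable 1 uplink per chassis and you can hang 20 chassis off the pair; cable 2 uplinks and you are down to 10; cable all 4 and you are down to 5. The larger 6140 doubles those numbers, which is where the eventual ceiling of 40 fits in.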
(UPDATED) What if you need to manage more than the chassis limitations today?
If you need to go above the limit, then you have two options. The first option is to purchase another pair of 6100's to create another UCS system; the two systems will be independent of each other. The second option is provided by BMC software, which will allow you to manage more chassis and also provides additional enhancements. I admit I know little to nothing about the product so I'll just post the link from the comments and you can take a look. The brain mapping for that would look like this.
How do you get into the brains?
Each 6120 has an ip address and both 6120's are linked together to create a clustered ip address. The clustered ip is the preferred way to access the software. The clustering is handled over dual 1GB links labeled L1 and L2 on each switch. They are connected together like this:
Cisco uses a program to manage this environment called, creatively enough, Cisco UCS Manager or UCSM. To access UCSM, point a browser at the clustered ip address. Once authenticated, you will be prompted to download a 20MB Java package (yes, it is Java, yuck!). Here is a pic of ours with both chassis powered up.
Notice that both chassis are in the same "pane of glass". This allows for management of all the blades from one interface and the movement of server profiles (covered later) from one chassis to another within the same management tool.
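If you would rather skip the Java client for a quick check, you can also SSH to the clustered ip address and look at the cluster from the command line. These two commands are from memory, so treat them as a sketch and confirm against your own system:
show cluster state
show cluster extended-state
The first tells you which 6120 is currently the primary and whether HA is ready; the second adds more detail about the state of each fabric interconnect.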
How does this compare to IBM?
IBM is a two part answer.
IBM Part One - Single Chassis Interface in AMM
IBM uses a module in each BladeCenter chassis called the Advanced Management Module (AMM). There can be up to two AMM's in each chassis. If there are two AMM's, one is active and the other is passive. They share the configuration and a single ip address on the network. In the case of failure of the primary, the passive module becomes active and communication resumes on the original ip address. The AMM will control power state, identification, virtual media and remote control out of the box. Virtual I/O (both WWPN and MAC) is an additional purchased license in the AMM. The product is called the Blade Open Fabric Manager (BOFM). I don't know if BOFM supports 10GB but I know it supports 1GB ethernet and 2/4GB FC. This is what it would look like with brains in place:
As you can see, each chassis is managed individually. In my experience, this is the most common configuration I have seen.
IBM Part Two - Multiple Chassis Management with IBM Director
IBM does have a free management product called IBM Director that can pull all this together into a single pane of glass. The blade administration tasks are built into the interface and virtualized I/O is handled through the Advanced BladeCenter Open Fabric Manager. Advanced BOFM is a Director plug-in and is a fee based product. Logically it would look something like this:
The downside to this solution is you now have another server in your environment to manage. In my experience Director is a little flaky at times but I also haven't tried the newest version which is a redesign to address many of the issues.
How does this compare to HP?
HP is a two part answer as well. I haven't implemented HP's Virtual Connect over multiple chassis, so if you know this answer and can throw some links my way, please do and I will update this section.
(UPDATED!) HP Part One - Single Chassis Interface in Onboard Administrator (OA)
HP's approach is very similar to IBM's. HP's management modules are called the Onboard Administrator and there can be a maximum of two in each chassis. HP is different from IBM because each module requires an ip address. At any given time one ip address is active and one is passive. If you access the passive module on the network, it will tell you that you are on the passive module and instruct you to connect to the active module. Like the IBM AMM, the OA will control all basic functions such as power state, identification, virtual media, and remote control. Like IBM, HP has a separate product for virtual I/O called Virtual Connect. Unlike the IBM and Cisco products, HP's Virtual Connect is implemented at the I/O module level; the only way to achieve virtual I/O is to purchase the HP Virtual Connect I/O modules. HP's brain mapping is a little different from IBM's because you can connect up to four chassis into one interface. Since you probably won't be able to power more than four chassis in a rack, think of it as consolidation at the rack level.
(UPDATED!) HP Part Two - Multiple Chassis Interface in HP Insight Tools
After you get to four chassis, the HP Insight tools need to be brought in. Based on the comments below it appears that two products will fit the bill: to manage the chassis and blade functions you will need the Insight Dynamics VSE Server Suite, and to manage the virtual I/O you will need Virtual Connect Enterprise Manager. Both Insight Dynamics VSE Server Suite and Virtual Connect Enterprise Manager are fee based.
Summary
(If you made it this far, I'm impressed!) Cisco's approach feels very "up to date". I really like the idea of not having to add another server (and additional fees for virtualized I/O) to the environment for management of the products. By moving all of the management centrally to the switches you are better able to see the environment and implement a multi-chassis/multi-rack solution. IBM and HP offer a similar solution that has grown over time but the roots of the interface are in single chassis/rack management. But, at the end of the day both IBM and HP offer a centralized management solution.
Thoughts? Concerns? Please leave a comment!
Tuesday, January 12, 2010
Cisco 4001i Nexus Switch for IBM BladeCenter in Depth
I have been talking to a few customers recently about the Cisco Nexus 4001i Switch for the IBM BladeCenter. The product looks very nice and I have recently discovered some additional information that I wanted to share.
If you are unfamiliar with the product, head over to Kevin's site and take a look at this link and this link. He has some very good links from Cisco about the switch. In addition, I found a link to the IBM RedPaper on the switch here. For those of you that don't like links, here's the summary: It is a 20 port (14 down to blades, 6 uplinks), non-blocking 10GB FCoE capable switch utilizing the Nexus OS. The FCoE functionality is optional. To enable FCoE you need to purchase the FC Enablement Kit (IBM Part Number 49Y9983).
Here are some additional notes:
- The 4001i is NOT a Fibre Channel Forwarder (FCF); it is a FIP snooping switch
- You may have already noticed that this switch DOES NOT have FC ports. To talk FC you will need to uplink the 4001i to a Nexus 5k and break out the FC there (see the sketch after this list)
- If using the Emulex Virtual Fabric Blade Adapter there is no vNIC functionality
- The switch only supports Cisco SFP's/SFP+ cables and it doesn't ship with any. You will need to purchase them separately to go with the switch. They are NOT resold through IBM. Since there are 6 uplinks, you will need a maximum of 6 SFP's (or copper SFP+ cables) per switch.
- The switch will support a 1GB connection down to the IBM Blade 2port/4port 1GB Adapter. I questioned why you would need this but on second thought I really like this. This way you can provide additional 1GB connections to a server that may not need 10GB without the purchase of additional 1GB switches. The fact that you can "share" a 1GB and 10GB CFFh slot is VERY nice!
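Since the FC breakout happens upstream of the blade chassis, here is a rough sketch of what the Nexus 5k side of that looks like. This is generic NX-OS FCoE configuration written from memory, with made-up VLAN/VSAN numbers and interface names, so treat it as an illustration and validate it against Cisco's documentation for the 4001i before using it:
feature fcoe
vlan 100
  fcoe vsan 100
vsan database
  vsan 100
interface vfc10
  bind interface Ethernet1/10
  no shutdown
vsan database
  vsan 100 interface vfc10
The vfc interface is bound to the 10G uplink coming from the 4001i, the VLAN-to-VSAN mapping carries the FCoE traffic, and the native FC ports on the 5k then uplink into the existing fabric.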
Thursday, July 30, 2009
Random Thoughts from the IBM Technical Conference
This week I've had the privilege of attending the IBM Technical Conference in Chicago. It was great catching up with old friends and the sessions were interesting. Here are some notes from a few sessions in no particular order. This isn't all inclusive, just things that jumped out at me in the sessions.
Running VMware vSphere and vCenter (HA, VMotion, and simple VMs), all in Workstation
- Very cool session - how to set up ESXi, ESX, vCenter, and OpenFiler VMs using VMware Workstation as a learning tool, proof of concept, demo environment, etc.
- The session leader Paul and a few IBM peers were able to get OpenFiler going with vSphere in VMs. They will be providing me with the documentation on how to do it and they will also post this information to the VMware forums (I'll update when they do)
- With the new Intel Nehalem CPUs, you typically need 2GB PER CORE to keep enough data in the memory pipeline and not starve the CPU with idle cycles (worked example after this list)
- If you think about that for a second, 1 socket (quad core) needs at least 8 GB
- If you are using both sockets and have less than 16GB, the CPU's will be starved
- If you are running 32-bit OS's (with a max amount of 4GB), you only need one socket populated
- Even if you use the Memory extensions, the overhead isn't worth it for 32 bit
- Typical disk I/O numbers for solution sizing: FC and SAS disks generate around 150-250 IOPS per spindle, while SATA generates 75-120 IOPS per spindle on average (worked example after this list as well)
- The IOPS you get from SATA decrease as the drive size increases
- Once a drive is about 80% full, performance will decrease quickly
- SATA performs well for large sequential operations (backups, streaming video)
- SAS/FC are better for random access operations (better rotational speeds)
- Triple Constraints for High Performance: Memory Latency, Memory Bandwidth, and CPU Core Intensity
- Up until Nehalem, Intel was the most core intensive (vs. AMD), IBM won for Memory Latency with the X4 chipset (vs. the Intel and AMD MP sets), and AMD won the Memory Bandwidth (using direct memory access off the CPU's)
- Nehalem changes all of that (I can't comment on upcoming AMD designs, NDA)
- I'm afraid to say more than that from this session due to NDA concerns
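To make the sizing rules above concrete (my arithmetic, using the numbers from the session): a two socket, quad core Nehalem host has 8 cores, so 8 x 2GB means you want at least 16GB of memory before the CPUs start sitting idle waiting on data. On the disk side, a 10 spindle FC or SAS group at roughly 150-250 IOPS per spindle gives you somewhere around 1,500-2,500 IOPS before any RAID write penalty, while the same 10 spindles of SATA land closer to 750-1,200.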
Labels: IBM
Monday, January 5, 2009
IBM BladeCenter Keyboard Lock Up
I have seen this many times and it has been a source of frustration for many IBM Blade customers. A while back, IBM introduced the ability to change the focus of the KVM and media tray from one blade to another using the local keyboard. Prior to this, you were required to push the button on the front of the blade or remotely change the focus from the Management Module interface. If you don't know about this feature, it can be confusing because the keyboard will be unresponsive and seem to lock up. You can tell when the KVM is in this mode because the Num Lock, Caps Lock, and Scroll Lock LEDs will flash in sequence over and over.
In order to return KVM focus back to the blade you are using you are required to press NumLk-NumLk-(blade slot number)-Enter
In order to return media tray focus back to the blade you are using you are required to press NumLk-NumLk-m-(blade slot number)-Enter
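For example, if you are working on the blade in slot 5, NumLk-NumLk-5-Enter moves the KVM to it and NumLk-NumLk-m-5-Enter moves the media tray.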
Link to IBM BladeCenter KVM Tip
Labels: IBM
IBM I/O Card Best Practices on the 3850/3950 M2
IBM has some similar restrictions for the 3850 M2 and 3950 M2. As stated in the IBM Red Paper on the product, the box contains two PCI bridges (reference page 33 of 42 in the Red Paper, last footnote). IBM recommends a limit of two "high speed" cards per PCI bridge. Bridge #1 is slots 1-4 and Bridge #2 is slots 5-8. A high speed card is defined as any card that can push 8GB or greater bandwidth. There is also an IBM Retain Tip (H192284) that states the same information.
I have spoken to a few people in IBM about this and here are a few more recommendations. In addition to the obvious 8GB+ cards (10G Ethernet, dual 4GB HBA, 8GB HBA), some other cards are also considered high speed: the quad 1GB NIC and the RAID card for that machine fall into this category.
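As a quick example of how I would lay that out (my interpretation of the guidance, so confirm with IBM support for your exact configuration): with two dual-port 4GB HBAs and one 10G Ethernet card, I would place the HBAs in slots 1 and 2 on Bridge #1 and the 10G card in slot 5 on Bridge #2. That puts Bridge #1 at its limit of two high speed cards and leaves room for one more on Bridge #2.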
Labels: IBM
Mandatory Replacement of IBM BladeCenter OPM’s
This has been out a few weeks but it just came to my attention. IBM recently announced a Retain tip that the IBM BladeCenter Optical Passthru Module needs to be replaced because it will fail after 511 days of activity. This ECA (Engineering Change Announcement) is considered MANDATORY. Please see this link for more information and how to submit a form to have IBM replace the module:
Labels: IBM
Not all 10G Cards are Equal
I found this out while configuring some 10G Ethernet cards for a customer recently. Be careful who makes your 10G cards. It turns out the NetXen 10G card (OEMed to HP, IBM, and others) has a hard limitation of only being able to address 32GB of system memory in the box. If the system has more than 32GB of memory, the card will start dropping packets.
My sources tell me there will not be a firmware fix for this at this time. To say this was a surprise to me is an understatement! I have never seen a card with a limitation based on memory before. IBM has pulled support for the card on the 3850 M2 model even though it is still a valid configuration in the configuration tools. I haven't had a chance to check the DL580, but be careful if you are considering either of these boxes with the 10G card. Right now you will have to go third party, and your level of support may vary.
IBM Power Calculator
IBM also has a Power Calculator, available here. The IBM tool is a little rough around the edges, but it gets the job done.
The IBM tool is a download vs. an on-line tool. The configuration of a chassis is very easy to do and the report is very straight forward. I would like to see a user populated utilization level to customize the report but you can make the calculations yourself with the information provided. Lastly, IBM allows you to add the PDU’s you need, a nice touch.
IBM only supports exporting the data to an XLS format. All in all, it's a bit of an ugly baby, but it provides the necessary information.
Labels: IBM
VTP Mode and Service Config for HP and IBM Cisco Blade Switches
Some time ago Scott Lowe wrote up a great article on how to set up link state tracking for Cisco switches on both IBM and HP Blades.
I have set up a number of these switches lately and I wanted to add two more commands that I consider default settings on the switch; they will make your life easier before deployment. You will want to check with the network admin once you are on-site and probably modify them again to meet customer requirements.
vtp mode transparent
no service config (on the HP Cisco 3020 switches)
VTP Mode Transparent will place the switches into a mode where they will not participate in the VTP domain and will not pass VLAN information to other Cisco switches in your organization. This allows you to "sandbox" the switch at the customer site and make sure everything plays well before you place the switch in the VTP domain. It also prevents VTP problems if your VTP configuration revision number is higher than the customer's, which would push your VLAN settings out to the rest of the organization, provided they didn't change the default VTP domain name. Sounds crazy, but it can happen.
No Service Config on the HP Blade Cisco switches disables the "smart" feature where the switch broadcasts for a TFTP server to configure itself. If you don't want or need this feature, simply enter this command in the config and it will go away. You will know the feature is turned on if you are getting the following errors in the switch logs and console on a regular basis:
%Error opening tftp://255.255.255.255/network-confg (Socket error)
%Error opening tftp://255.255.255.255/cisconet.cfg (Socket error)
%Error opening tftp://255.255.255.255/3620-confg (Socket error)
%Error opening tftp://255.255.255.255/3620.cfg (Socket error)
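For reference, here is the full sequence I run on a freshly racked switch before it ever touches the customer's network. This is from memory (the last two lines are just standard IOS verification and housekeeping), so adjust it to your own standards:
configure terminal
 vtp mode transparent
 no service config
 end
show vtp status
copy running-config startup-config
The show vtp status output should report the operating mode as Transparent before you hand the switch over.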