Running VMware vSphere and vCenter (HA, vMotion, and simple VMs), all in Workstation
- Very cool session - how to set up ESXi, ESX, vCenter, and OpenFiler VMs using VMware Workstation as a learning tool, proof of concept, demo environment, etc.
- The session leader, Paul, and a few IBM peers were able to get OpenFiler working with vSphere in VMs. They will be sending me the documentation on how to do it and will also post it to the VMware forums (I'll update this post when they do)
- With the new Intel Nehalem CPUs, you typically need 2 GB of RAM per core to keep enough data in the memory pipeline and not starve the CPU with idle cycles
- If you think about that for a second, one socket (quad core) needs at least 8 GB
- If you populate both sockets and have less than 16 GB, the CPUs will be starved
- If you are running 32-bit OSes (with their 4 GB memory limit), you only need one socket populated
- Even if you use the memory extensions, the overhead isn't worth it for 32-bit (see the sizing sketch after these notes)
- Typical disk I/O figures for solution sizing: FC and SAS disks generate around 150-250 IOPS per spindle; SATA generates 75-120 IOPS per spindle on average
- SATA IOPS decrease as drive capacity increases
- Once a drive is about 80% full, performance will decrease quickly
- SATA performs well for large sequential operations (backups, streaming video)
- SAS/FC are better for random access operations (higher rotational speeds); see the spindle-count sketch after these notes
- Triple Constraints for High Performance: Memory Latency, Memory Bandwidth, and CPU Core Intensity
- Up until Nehalem, Intel was the most core intensive (vs. AMD), IBM won on memory latency with the X4 chipset (vs. the Intel and AMD MP chipsets), and AMD won on memory bandwidth (memory attached directly to the CPUs)
- Nehalem changes all of that (I can't comment on upcoming AMD designs, NDA)
- I'm afraid to say more than that from this session due to NDA concerns
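A quick back-of-the-envelope sketch of the 2 GB-per-core rule from the memory notes above. This is just my own Python illustration of the arithmetic quoted in the session, not official VMware or Intel sizing guidance:

```python
# Rough memory-sizing helper based on the "2 GB per core" rule of thumb
# from the session notes. The figures are illustrative, not official guidance.

GB_PER_CORE = 2  # Nehalem rule of thumb quoted in the session

def min_memory_gb(sockets: int, cores_per_socket: int, gb_per_core: int = GB_PER_CORE) -> int:
    """Minimum RAM (GB) needed to keep all cores fed, per the 2 GB/core rule."""
    return sockets * cores_per_socket * gb_per_core

if __name__ == "__main__":
    # One quad-core socket -> 8 GB minimum
    print(min_memory_gb(sockets=1, cores_per_socket=4))   # 8
    # Two quad-core sockets -> 16 GB minimum; anything less starves the CPUs
    print(min_memory_gb(sockets=2, cores_per_socket=4))   # 16
```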
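And a similar sketch for the per-spindle IOPS figures: given a target workload, estimate how many FC/SAS or SATA spindles you'd need. The ranges are the ones quoted in the session; the helper and its names are my own and it deliberately ignores RAID write penalties:

```python
import math

# Conservative per-spindle IOPS figures from the session notes
# (low end of the quoted ranges, so the estimate errs on the safe side).
IOPS_PER_SPINDLE = {
    "fc_sas": 150,   # FC/SAS: ~150-250 IOPS per spindle
    "sata": 75,      # SATA:   ~75-120 IOPS per spindle
}

def spindles_needed(target_iops: int, disk_type: str) -> int:
    """Estimate the spindle count for a target IOPS load (no RAID penalty applied)."""
    return math.ceil(target_iops / IOPS_PER_SPINDLE[disk_type])

if __name__ == "__main__":
    # A 3,000 IOPS workload: ~20 FC/SAS spindles vs. ~40 SATA spindles
    print(spindles_needed(3000, "fc_sas"))  # 20
    print(spindles_needed(3000, "sata"))    # 40
```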