Title: Hybrid IT - Steps to Building a Successful Model - presented by RightScale
Presenter: Brian Adler, Sr. Services Architect, RightScale & Ryan Geyer, Cloud Solutions Engineer
Brian is on the services side, so this won't be a product pitch ;)
RightScale is a CMP (Cloud Management Platform) - it provides configuration management, an automation engine, and governance controls, and it works with both public and on-premise clouds (I think the phrase "private cloud" must be on the naughty list at the show; none of the pitches use the dirty "p word")
RightScale allows management & automation across cloud resource pools
He opened with a basic overview of terminology and how we have come from IaaS to Cloud Computing today
On-Premise Key Considerations
1. Workload and Infrastructure Interaction - what are the resource needs? Does this make sense in the cloud and which size instance would be best? Instance type is very important
2. Compliance - data may be contained on-prem for compliance
3. Latency - does the consumer require low latency for a good user experience?
4. Cost - the faster it has to go (i.e., the lower the latency), the more expensive it will be in the cloud
5. Cost - What is the CAPEX vs. OPEX and does it make sense
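The CAPEX vs. OPEX point lends itself to a quick back-of-the-envelope calculation. Here is a minimal sketch of finding the month where buying hardware beats renting cloud capacity - all dollar figures are invented for illustration, not from the session:

```python
# Hypothetical break-even sketch: after how many months does on-premise
# CAPEX (plus a monthly operating cost) beat a pure cloud OPEX model?
# All numbers here are made-up illustrative figures.

def breakeven_months(capex, onprem_monthly, cloud_monthly):
    """Return the first month at which cumulative on-prem cost drops
    below cumulative cloud cost, or None if it never does."""
    if cloud_monthly <= onprem_monthly:
        return None  # cloud is cheaper per month; on-prem never catches up
    month = 0
    while True:
        month += 1
        onprem_total = capex + onprem_monthly * month
        cloud_total = cloud_monthly * month
        if onprem_total < cloud_total:
            return month

# Example: $60k of hardware + $1k/month ops vs. $4k/month of cloud spend.
print(breakeven_months(capex=60_000, onprem_monthly=1_000, cloud_monthly=4_000))  # → 21
```

If the workload never reaches steady state (demand is spiky or uncertain), the break-even month keeps sliding out, which is exactly why the uncertain-demand use case below starts in the public cloud.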
Use Cases
1. Self-Service IT Portal (The IT Vending Machine) - Users select from a fixed menu, for example, a pre-configured and isolated test/dev environment
Demo Time - Showing off an example of a portal built on the RightScale APIs: basically push a big button, enter a few options, and let it spin up an environment. In this example they provisioned five servers and a PHP environment in a few minutes
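The "vending machine" flow is easy to picture in code: a fixed menu of pre-approved environments, and a portal that turns a user's selection into a provisioning request. This is a hypothetical sketch - the menu entries and field names are mine, not RightScale's actual API:

```python
# Sketch of a self-service portal's fixed menu. The environment names,
# sizes, and request fields are invented for illustration.

MENU = {
    "php-dev": {"servers": 5, "stack": "PHP", "isolated": True},
    "small-test": {"servers": 2, "stack": "LAMP", "isolated": True},
}

def build_request(user, choice):
    """Validate a menu selection and return a provisioning request dict."""
    if choice not in MENU:
        raise ValueError(f"{choice!r} is not on the menu")
    spec = MENU[choice]
    return {"requested_by": user, "environment": choice, **spec}
```

The point of the fixed menu is governance: users only get combinations IT has pre-blessed, which is what makes the big "push a button" demo safe to hand to non-admins.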
2. Scalable Applications with Uncertain Demand - This is the typical web-scale use case: fail or succeed very fast in the public cloud. "See if it sticks" - and once it sticks, maybe pull it in house if a cost reduction can be achieved when the application is at steady state
3. Disaster Recovery - Production is typically on-premise and the DR environment is in the cloud; this is often considered a "warm DR" scenario - the database replicates in real time from production to DR, while all other servers are down. You then spin up the other servers (the DB is already up and running) and flip the DNS entries over once DR is ready. You can achieve a great RTO & RPO in this setup. You can also do this from one AWS region to another.
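The warm-DR flip described above boils down to two steps once production fails: boot the stopped app tier in the DR region (the DB is already replicating), then repoint DNS. A rough sketch of that decision logic, with made-up step names:

```python
# Warm-DR failover logic, sketched. The step strings are illustrative;
# a real setup would call cloud and DNS APIs instead.

def warm_dr_failover(prod_healthy, dr_app_running):
    """Return the ordered recovery steps for a warm-DR setup."""
    steps = []
    if prod_healthy:
        return steps  # production is fine; nothing to do
    if not dr_app_running:
        # The DB is already warm via replication; only the app tier
        # (web/app servers) needs to be launched.
        steps.append("launch DR app servers")
    steps.append("flip DNS to DR region")
    return steps
```

Because only the database runs continuously in DR, you pay for one warm server instead of a full duplicate stack - that is the cost trade-off behind "warm" versus "hot" DR.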
Demo Time - Showing the RightScale Dashboard with a web app demo + DR. The demo had 2 databases, master and slave, replicating across different regions (side discussions about WAN optimization and encryption here as well). Production in the example was in AWS US-East and DR was in AWS US-West. The front end of the app was down in West. When you launch the West DR site, it configures everything automatically as part of the server template. All of the DR happens just by turning up the front end in West
Design Considerations
Location of Physical Hardware - again, speed vs. latency vs. location
Availability and Redundancy Configuration - This can range from easy to hard depending on your needs
Workloads, Workloads, Workloads - Does the application require HA of the infrastructure? Will it tolerate an interruption? Can it go down? Will users be impacted?
Hardware Considerations - Do you need specialty? commodity?
(Sorry, he had others listed, I zoned out for a slide or two..)
On to Hybrid IT - Most customers start out wanting "cloud bursting", but most often an application ends up running in one location or the other. Check out the slide for the reasons.
Common practice is that a workload is either all on-premise or all public; bursting isn't a common use case. When customers do burst, they set up a VPC between private and public to maintain a connection.
Demo Time - What would a hybrid bursting scenario look like in the RightScale dashboard? The customer has a local cloud that is VPC-connected to AWS. There are two load balancers: one is private, one is in AWS. They are using Apache running on top of a virtual machine to maintain compatibility between private and public. DNS is using Route 53 (AWS DNS). RightScale uses the concept of an Array. As RightScale monitors performance, additional instances are fired up and "bursted" (scaled out) to AWS above and beyond the local resources already running.
You do not need the same LBs on the front end as in the example above. For example, you could be in a local CloudStack/OpenStack environment with a hardware firewall in front, but also include AWS and AWS ELB in the rules as well
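The array behavior in that bursting demo amounts to: fill local capacity first, then overflow into AWS. A toy sketch of that sizing decision - the capacities and load figures are invented, not from the demo:

```python
# Burst-sizing sketch: serve as much load as possible on local (private)
# instances, and only overflow the remainder to the public cloud.
import math

def instances_needed(requests_per_sec, per_instance_capacity, local_max):
    """Return (local, cloud) instance counts for the current load."""
    total = math.ceil(requests_per_sec / per_instance_capacity)
    local = min(total, local_max)          # local pool fills first
    cloud = max(0, total - local_max)      # overflow bursts to AWS
    return local, cloud

# Example: 900 req/s, 100 req/s per instance, 6 local instances available.
print(instances_needed(900, 100, 6))  # → (6, 3): 6 local, 3 burst to AWS
```

A monitoring loop (RightScale's array, in the demo) would re-run this kind of calculation on each metric interval and launch or terminate the cloud-side instances to match.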
Take Away - It is very possible to use both public and private and there isn't a need for a "one size fits all approach"
Great session - probably the best session of the day so far for me.