
Tuesday, April 30, 2013

AWS Summit Liveblog: RightScale - Hybrid IT Design

Usual Liveblog disclaimer: typing this as I go in the session, please excuse typos and formatting issues

Title: Hybrid IT - Steps to Building a Successful Model - presented by RightScale
Presenter: Brian Adler, Sr. Services Architect, RightScale & Ryan Geyer, Cloud Solutions Engineer

Brian is on the services side, so this won't be a product pitch ;)

RightScale is a CMP (Cloud Management Platform) - it provides configuration management, an automation engine, and governance controls, and works with both public and on-premise clouds (I think the term "private cloud" must be on the naughty list at the show, none of the pitches use the dirty "p word")

RightScale allows management & automation across cloud resource pools

Basic overview of terminology and how we got from IaaS to cloud computing today

On-Premise Key Considerations

1. Workload and Infrastructure Interaction - what are the resource needs? Does this make sense in the cloud, and which instance size would be best?  Instance type is very important
2. Compliance - data may have to stay on-prem for compliance
3. Latency - does the consumer require low latency for a good user experience?
4. Cost - the faster it has to go (latency), the more expensive it will be in the cloud
5. Cost - What is the CAPEX vs. OPEX and does it make sense?

Use Cases

1. Self-Service IT Portal (The IT Vending Machine) - Users select from fixed menu, for example, pre-configured and isolated test/dev

Demo Time - Showing off an example of a portal using the RightScale APIs: basically push a big button, enter a few options, and let it spin up an environment. In this example they provisioned five servers and a PHP environment in a few minutes

2. Scalable Applications with Uncertain Demand - This is the typical web-scale use case: fail or succeed very fast in the public cloud. "See if it sticks"; once it sticks, maybe pull it in house if cost reduction can be achieved when the application is at steady state

3. Disaster Recovery - Production is typically on-premise and the DR environment is in the cloud; this is often considered a "warm DR" scenario - the database replicates in real time from production to DR while all other servers are "down".  You then spin up the other servers (the DB is already up and running) and flip the DNS entries over once DR is up and running.  You can achieve a great RTO & RPO in this example.  You can also do this from one AWS region to another.

Demo Time - Showing the RightScale dashboard with a web app demo + DR.  The demo had 2 databases, master and slave, replicating across different regions (side discussions about WAN optimization and encryption here as well). Production in the example was in AWS US-East and DR was AWS US-West.  The front end of the app was down in West.  When you launch the West DR site, everything is configured automatically as part of the server template.  All DR happens just by turning up the front end in West
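To make the DNS flip concrete, here's a rough boto3 sketch of what that last step could look like (the zone ID and hostnames are made up, and in the demo RightScale's automation handled this, not a hand-rolled script):

```python
# Sketch: point the production hostname at the DR front end once the
# US-West servers are up. Zone ID and hostnames are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",          # hypothetical hosted zone
    ChangeBatch={
        "Comment": "Fail over to DR site in us-west",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "TTL": 60,                  # a short TTL keeps RTO low
                "ResourceRecords": [
                    {"Value": "dr-frontend.us-west-2.elb.amazonaws.com"}
                ],
            },
        }],
    },
)
```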

Design Considerations

Location of Physical Hardware - again, speed vs. latency vs. location

Availability and Redundancy Configuration - This can be easy to hard depending on your needs

Workloads, Workloads, Workloads - Does the application require HA of the infrastructure? Will it tolerate an interruption? Can it go down?  Will users be impacted?

Hardware Considerations - Do you need specialty? commodity?

(Sorry, he had others listed, I zoned out for a slide or two..)

On to Hybrid IT - Most customers start out wanting "cloud bursting" but most often an application is used in one location or the other.  Check out the slide for the reasons.

Common practice is that a workload is all on-premise or all public. Bursting isn't a common use case.  If they do use bursting, they set up a VPC between private and public to maintain a connection.

Demo Time - What would a hybrid bursting scenario look like in the RightScale dashboard?  The customer has a local cloud that is VPC-connected to AWS.  Load balancers: one is private, one is in AWS.  They are using Apache running on top of a virtual machine to maintain compatibility between private and public.  DNS is using Route 53 (AWS DNS).  RightScale uses the concept of an Array.  As RightScale monitors performance, additional instances are fired up and "bursted" or scaled out to AWS above and beyond the already-running local resources.

You do not need the same LBs on the front end as in the example above.  For example, it could be a local CloudStack/OpenStack environment with a hardware firewall in front that also includes AWS and AWS ELB in the rules as well
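The demo drove all of this through RightScale's array monitoring; just to make the AWS side of a burst concrete, here's a rough boto3 sketch of launching a couple of extra web instances and registering them with an existing Classic ELB (the AMI, instance type, and ELB name are all made up):

```python
# Sketch: burst extra web instances into AWS and register them with an
# existing Classic ELB. In the demo, the decision to scale came from the
# management layer (RightScale), not from this script.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
elb = boto3.client("elb", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical web server AMI
    InstanceType="m3.medium",
    MinCount=2,
    MaxCount=2,
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

# Wait until the burst instances are running before taking traffic.
ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)

elb.register_instances_with_load_balancer(
    LoadBalancerName="web-front-end",   # hypothetical ELB name
    Instances=[{"InstanceId": iid} for iid in instance_ids],
)
```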

Take Away - It is very possible to use both public and private and there isn't a need for a "one size fits all approach"

Great session, probably the best session of the day so far for me.




AWS Summit Liveblog: Cloud Backup and DR

Usual Liveblog Disclaimer: This is typed as fast as I can, so the post may contain typos and formatting errors, sorry about that

Session: Technical Lessons on how to do Backup and Disaster Recovery in the Cloud (whew, long title)

Presenter: Simone Brunozzi, Technology Evangelist

Simone presented the Enterprise demo in the morning keynote, good presenter

3 parts = HA -> Backup -> Disaster Recovery

HA = Keeping Services Alive

Backup = Process of keeping a copy

DR = Recover using a backup

(Simone is using great examples involving churches and monasteries, but they're too long to type out here.)

5 Concepts of DR

1. My backup should be accessible - AWS uses APIs, Direct Connect, the customer owns the data, redundancy is built in, AWS has import/export capabilities

AWS Storage Gateway as an example: a gateway volume on-premise replicates to a volume in the AWS public cloud (S3, snapshots, etc.).  Can be GW-cached or GW-stored (one keeps only a cache locally, the other keeps the full copy locally). Secure tunnel for transport over AWS Direct Connect or the Internet

2. My backup should be able to scale - "Infinite scale" with S3 and Glacier, scale to multiple regions, seamless, no need to provision, cost tiers (cheaper options and at scale are available)

3. My backup should be safe - SSL endpoints, signed API calls, stored encrypted files, server-side encryption, durability: multiple copies across different data centers, local/cloud with AWS Storage Gateway (a small sketch of this follows after this list)

4. My backup should work with my DR policy (I don't want to wait 10 years to recover) - easy to integrate within AWS or hybrid, AWS Storage Gateway: run services on Amazon EC2 for DR, clear costs, reduced costs, you decide the redundancy/availability in relation to costs.

5. Someone should care about it - Need clear ownership, permissions can be set in IAM with roles, monitor logs
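To make concepts 2 and 3 a bit more concrete, here's a rough boto3 sketch (not from the session, and the bucket/key names are made up) of storing a backup with server-side encryption plus a lifecycle rule that tiers older backups into Glacier:

```python
# Sketch: encrypted backup object in S3, plus a lifecycle rule that ages
# backups into Glacier. Bucket, key, and contents are placeholders.
import boto3

s3 = boto3.client("s3")

# Concept 3 ("my backup should be safe"): ask S3 to encrypt at rest.
s3.put_object(
    Bucket="example-backup-bucket",
    Key="backups/db-2013-04-30.dump",
    Body=b"placeholder backup contents",   # stand-in for the real dump
    ServerSideEncryption="AES256",
)

# Concept 2 ("my backup should scale"): age older backups into Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-backups-to-glacier",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }],
    },
)
```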

Now a customer story:

Shaw Media - Canadian media company. Before AWS: multiple datacenters, lots of equipment, downtime, different technologies across datacenters. They were told to change everything and become more agile and cost effective within the next 9 months to better serve the business

Solved the issue with AWS: fast deployment of servers, network rules, and ELB on AWS. First site in only 4 weeks, after that a full migration of 29 sites from a physical DC in 9 months - this was phase one (the main websites)

Phase Two - Other web services migration was next (check out the picture for the details), impressive stats.  Typical web servers, apps servers, database servers, etc.


Lessons Learned - went too fast, didn't catch it... damnit

DR - Learn from your outages (test your policy on a regular basis and refine the document)

(Sorry, he's going too fast to type or even take pictures of the slides.... Really wish he would have gone slower in this section, the content was really good grrrrrrr)

Lessons to learn from DR

1. You NEED a DR plan in place - how will you recover?  Can your business survive without it?  For AWS, across Availability Zones (AZs) or App DR with Standby (see pictures).  The second option is cheaper to implement but will take a little longer to recover from.

 

Perform a business analysis of RTO & RPO (if you don't know what those are, Google them, you need to know).  In a nutshell: RTO is how long it takes to get it back, RPO is how much data you can afford to lose.  This is the typical cost vs. performance trade-off (a small sketch of that trade-off follows after this list).  Take the various AWS services as an example:


2. Test your DR - Many may say "Duh!" to this one, but I'm always surprised how rarely customers actually do it.  The ability to spin up capacity just for DR testing helps to minimize cost, and not having a permanent DR site to manage is pretty cool. Data transfer speeds (data gravity) could be an issue in this kind of scenario

3. Reducing Costs - Took a screenshot, it was easier
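To make that cost vs. RTO/RPO trade-off concrete, here's a rough boto3 sketch (not from the session, all IDs and regions are made up) of the cheap end of the spectrum: copy nightly EBS snapshots to a second region instead of running a live standby. Recovery is slower (higher RTO), but the standing cost is close to zero.

```python
# Sketch: snapshot a data volume and copy the snapshot to a DR region.
# Volume ID and regions are placeholders.
import boto3

source = boto3.client("ec2", region_name="us-east-1")
target = boto3.client("ec2", region_name="us-west-2")

snap = source.create_snapshot(
    VolumeId="vol-0123456789abcdef0",      # hypothetical data volume
    Description="nightly backup",
)
source.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Copy the completed snapshot into the DR region; restore from it later
# by creating a volume there, which is where the longer RTO comes from.
target.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snap["SnapshotId"],
    Description="nightly backup (DR copy)",
)
```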


Overall - great presentation although I wish he would have spent more time on the customer slides as there was some good technical content there...




AWS Summit Liveblog: Introducing AWS OpsWorks

Usual liveblog disclaimer, this could be messy, please excuse typos, sorry for that.

Chris Barclay, Product manager for AWS OpsWorks is presenting

Application Management Challenges - Reliability and Scalability are important; operations tasks typically include: Provision, Deploy, etc.

"Once Upon a Time..."  - We took the time to develop everything by hand (home made bread)

Today we need to automate to go faster (cranking out automation in a factory like, mass produced way)

In Today's infrastructure, everything is considered code, including the configuration of the "parts", sounds much like a recent Cloudcast we did...

AWS OpsWorks is a tool to tackle this challenge, very reliable and repeatable and integrated with AWS, at no additional cost

Why use OpsWorks?
Simple, Productive, Flexible, Powerful, Secure

Common complaint was there are a lot of AWS "building blocks" but many don't want to stitch them together, AWS at times can be complex because of large number of services offered

Chris turned the presentation over to another person (didn't catch the name) at DriveDev, a DevOps consulting group focused on F500 companies and startups

He talked about a typical "old school" application development project that went poorly. They were able to use the built-in OpsWorks recipes with additional Chef cookbooks on top. They took the customer and migrated them off private and into public with OpsWorks in a short amount of time.  Basically, they were a success...

How are customers using OpsWorks today?

From the OS to the application using OpsWorks; from the OS to your code using Elastic Beanstalk; from the OS up, automate everything yourself with Chef or another tool

Takeaway - How much automation you need, and at what level, determines which tool will be best.


Demo Time...

Talking about Chef and how OpsWorks uses it

The concept of lifecycle events; based on these, a recipe is triggered

 

Showing integration with github, keeps source and cookbooks out on git

Chris created a stack, a PHP app server layer with MySQL, then added instances and started them up (could change to multiple AZs for HA at creation)

After this, there are built-in Chef recipes that can be used; you can also add your own if you need additional functionality. Can also add additional EBS volumes if needed, Elastic IPs, IAM instance profiles, etc.
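For reference, the flow Chris clicked through in the console looks roughly like this in boto3 (a sketch only - the ARNs, names, and custom cookbook are made up, and the MySQL layer is left out for brevity):

```python
# Sketch: build an OpsWorks stack, add a PHP app server layer with a
# custom recipe on top of the built-ins, then boot an instance into it.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

stack = opsworks.create_stack(
    Name="demo-php-stack",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
)

layer = opsworks.create_layer(
    StackId=stack["StackId"],
    Type="php-app",                        # built-in PHP App Server layer
    Name="PHP App Server",
    Shortname="php-app",
    CustomRecipes={                        # custom Chef recipe layered on
        "Deploy": ["mycookbook::tune_php"],    # hypothetical cookbook
    },
)

instance = opsworks.create_instance(
    StackId=stack["StackId"],
    LayerIds=[layer["LayerId"]],
    InstanceType="c3.large",
)
opsworks.start_instance(InstanceId=instance["InstanceId"])
```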

Talked about time-based instances - instances that only exist during certain times of day - and threshold instances that can be fired up as needed (scaling an app server based on memory, CPU, network, etc.)

Added the app from git onto the stack that was built

Chris went from here into deep-level git items that were above me (I admit I'm not the target audience here).  The takeaway: he made a change, committed the change, performed a deployment - it looked very easy
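That "commit, then deploy" step boils down to a couple of API calls. A rough boto3 sketch with a made-up stack ID and repo URL (the demo did this from the OpsWorks console after a git push):

```python
# Sketch: register an app from git and trigger the Deploy lifecycle
# event, which runs the deploy recipes on the instances in the stack.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

app = opsworks.create_app(
    StackId="hypothetical-stack-id",       # stack from the earlier sketch
    Name="demo-php-app",
    Type="php",
    AppSource={"Type": "git", "Url": "git://github.com/example/demo-php-app.git"},
)

opsworks.create_deployment(
    StackId="hypothetical-stack-id",
    AppId=app["AppId"],
    Command={"Name": "deploy"},
)
```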

Now on to Permissions - talking about various...

What's next?  More integrations with AWS resources (i.e. ELB features) - Deeper VPC, more built-in layers (go vote on their forums, they will prioritize by public opinion)

Summary: OpsWorks for productivity, control, reliability


AWS Summit Keynote Live Blog


This is a live blog from the AWS Summit Keynote by Andy Jassy.  The usual disclaimer applies, I'll be typing fast and furious so expect misspellings and some formatting errors.  Also, no Internet in the keynote (MiFi or conference) so I'll be moving this over to the blog after the keynote.

There are a TON of people at the event (I'll see if they announce numbers but easily in the thousands), impressive

Intro videos going on now…

Andy Jassy is on stage - starts with the age of AWS, 7 years old, launched March 2006

Now digging into the breadth of the services - they are very proud of the pace of innovation (see pictures attached)

With the exception of 2010, they have doubled the number of services every year, up to almost 160 services available today

71 new features so far in 2013



9 regions, 25 availability zones, 39 edge locations - also talked about the GovCloud and the requirements on it to support Public Sector workloads

Amazon S3 - Over 2 Trillion objects, 1,100,000 peak requests/sec

He's firing facts and figures now so fast I can't keep up. Nothing but speeds and feeds and stats to impress. He's talking very fast

Talking about customers and user base

 

Use cases - talking about how the use case is really about building blocks and letting the developers decide how to stitch the blocks together; AWS was not going to dictate the use cases

Talking about security - security is number one priority at AWS, talking about features access control from the edge, dedicated instances, encryption, etc.

Certifications are more important than security - They have HIPAA, ISO, SOX, FISMA, etc.

Now moving on to pricing (he's talking really fast, no transition in between topics)

They plan to remove cost from the process and pass the savings on to customers, 31 price drops to date. The more customers they have, the better the economy of scale; they consider this a "flywheel" - more customers drive price drops, which bring in more customers

AWS Trusted Advisor - checks for cost optimizations, security and availability checks, performance recommendations (running on demand vs. reserved instances for instance), pretty cool stuff.  I remember hearing about this but never dug into it.  It appears they are trying to change the mindset about steady state apps, they have brought this up a few times that you can run steady state in cloud, but need to do it on a reserved instance.

Now on to partners (again, no real transition) - The usual impressive list of both consulting and technology partners

AWS Marketplace - Their "App Store", 25 categories, 778 product listings - applications already configured and certified on the AWS ecosystem

Why are customers adopting cloud computing? (finally, a real transition)

1. Trade Capital Expense (CAPEX) for Operating Expense (OPEX) - $0 to get started and can fail fast if needed
2. Lower Variable Expense than most companies can achieve in house - they mention again how large they are and the economies of scale they pass on to customers (seems to be their new message) - They appear to be positioning themselves as the "Walmart of the Cloud": Low Price Leader that passes savings on to you
3. You Don't Need to Guess Capacity - Talking about the typical predict-up-front model: what happens if you build it and nobody comes? What happens if too many people come?  If the infrastructure is elastic there is no need for this planning and predictive step
4. Dramatically Increase Speed and Agility - Old-world server requests usually take weeks to get servers for development; AWS takes minutes and is all self service - compares development to invention: you need to perform a lot of experiments and be able to fail with little to no cost or collateral damage, which speeds up development
5. Stop Spending Money on the Undifferentiated Heavy Lifting - They do all the "infrastructure stuff" for you, talking about how the infrastructure typically doesn't differentiate your business in any way but it also consumes a lot of resources in operations.
6. Go Global in Minutes - Because of Regions and Availability Zones the ability to scale and grow into a different region is much easier. No need to set up operations in another area of the world

Message is very Enterprise centric (no surprise there)

Sean Beausoleil is on stage now - lead engineer for Mailbox - 2 years ago - talking about their first product: it worked but wasn't "sticky" enough, because email still held most users' data. How to turn the mailbox into a better tool for task management

Now a video about Mailbox in use - In case you haven't tried it, Mailbox basically turns your mail into a to-do list. They were overwhelmed with the response to the initial movie that was released as a preview. They needed a massively scalable back end to support it. The product pulls from IMAP -> Cloud -> to device (see picture)
They knew they would need a massive backend on AWS, so they copied their existing system to AWS and found a lot of bottlenecks in the app as they scaled up in testing.  They were able to test AHEAD of production.  Some components of the app were rewritten.  That is why they introduced the reservation system some of you that got the app may have seen.  (I was on that list)


They created the reservation system so they could ramp up over time until they were sure they could scale.  Even all this preparation didn't prepare them for the growth.  They were handling 100 million emails a day within 2 months of launch.  They are able to re-architect on the fly; the comment was "you can't predict what production will look like until you are in production". I couldn't agree more based on past experience

AWS allowed them to optimize and scale and perform swaps of instance sizes on the fly to balance the usage against the costs.  They would model the workload and perform swaps of hardware seamlessly in the background with no downtime.  I have to admit, that is pretty frickin' cool.
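The talk didn't say exactly how Mailbox did those swaps, but for an EBS-backed instance the simple version is stop / change the type / start again, done one instance at a time behind a load balancer so the fleet as a whole stays up. A rough boto3 sketch with a made-up instance ID:

```python
# Sketch: resize an EBS-backed instance to match the modeled load.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"        # hypothetical app server

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance type attribute while the instance is stopped.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m3.2xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])
```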

Andy is back - AWS adoption into the Enterprise is the topic now

Andy is now talking about how most of the "old guard" are pushing for private cloud. He states none of the 6 points above are available in private cloud. He says the old guard runs a high-margin business that isn't the same as AWS. He is now talking about a balance of "old" on-premise resources and new cloud-era workloads - talking about AWS Direct Connect, LDAP integration, VPC, etc. Says these tools to move on-premise enterprises are the focus going forward. Mentions BMC and CA as future partners for single-pane-of-glass management

How are Enterprises using AWS?

Strategy 1: Cloud for Development and Test - first and most common use case
Strategy 2: Build New Apps for the Cloud - this is the next generation of applications. Retire the old and create new apps, faster to build, less expensive to run, easier to manage, etc
Strategy 3: Use Cloud to Make Existing On-Prem Apps Better - Take in house apps and outsource the analytics for example for processing in the cloud. They mentioned a few enterprises including Nasdaq that do this today
Strategy 4: New Cloud Apps that Integrate Back to On-prem Systems - AWS serves up the front end and the processing is on the back end on-prem
Strategy 5: Migrate Existing Apps to Cloud - he admits this is emerging and often requires consulting services; taking that very traditional workload and moving it to the cloud
Strategy 6: All in - NETFLIX!  No keynote is complete without them…

Now up - Demo of Enterprise and cloud by Simone Brunozzi
They want to show you how AWS is relevant in the Enterprise
3 parts - Authentication, Integration, Migration

Authentication - Talking about Okta, an AD integration partner, which brings AD into AWS. Created an AWS Admins group in AD; it talks to AWS IAM and performs the changes needed to grant AWS admin rights
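The Okta/AD sync itself isn't something I can show, but the IAM side of an "AWS Admins" group looks roughly like this in boto3 (group name and policy choice are illustrative, not from the demo):

```python
# Sketch: create an admins group and attach the managed admin policy.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="AWSAdmins")
iam.attach_group_policy(
    GroupName="AWSAdmins",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```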

Integration - Storage Gateway for backup and recovery volumes - a volume on-premise replicates to S3; once the data is replicated, you can stand up an EC2 instance and attach it to the volume on AWS if needed - talked about iSCSI targets and how to attach them (that brings back memories). Once this is done you could map back to on-premise (little fuzzy on the details)
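To make the "stand up an EC2 instance and attach the data" step concrete, here's a rough boto3 sketch of creating an EBS volume from a gateway snapshot and attaching it to a recovery instance (all IDs and the AZ are made up):

```python
# Sketch: restore gateway-backed data into EC2 for recovery.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vol = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",   # snapshot taken via the gateway
    AvailabilityZone="us-east-1a",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",      # hypothetical recovery instance
    Device="/dev/sdf",
)
```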

Migration - Talking about exporting an image from VMware vCenter on-premise and transferring it to AWS as an image (AMI). From there you can copy it to another region; the example here was to move to the USA first and then transfer to Singapore.  I admit the use case of moving region to region is really cool.
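That region-to-region copy is a single API call. A rough boto3 sketch (the AMI ID is made up, and the VMware export/import step isn't shown here):

```python
# Sketch: copy an imported AMI from the US to Singapore.
import boto3

ec2_sg = boto3.client("ec2", region_name="ap-southeast-1")

ec2_sg.copy_image(
    Name="migrated-vcenter-vm",
    SourceImageId="ami-0123456789abcdef0",   # hypothetical imported AMI
    SourceRegion="us-east-1",
)
```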



Talking again about the perception of AWS and the Enterprise. This is obviously a focus.

What are they working on next?  Amazon VPC is a focus (to continue to build out the Enterprise story), Direct Connect, Amazon Route 53 (DNS services)

I'm actually gonna bail on the rest of this so I can go get a seat in the labs before they fill up. (Scratch that, the line is so long for the labs they are useless)


They appear to be positioning themselves as the "Walmart of the Cloud" - Low Price Leader and pass savings on to you.  Key message also was to recognize that Enterprise will continue to use on-premise

Summary - Good stuff, it is good to hear them focus on the Enterprise and do it in a way that isn't as in-your-face as it was at the AWS re:Invent conference.

Thursday, November 29, 2012

AWS re:Invent Werner Vogels Keynote Live Blog

This is a live blog of Werner Vogels' Thursday morning keynote on the next generation of cloud architectures.  This will be quick and dirty so I can keep up with the information as it is presented.


  • Werner takes the stage to Nirvana playing...  getting the crowd fired up
  • usual recap of Day One announcements of the S3 price reduction and the new data warehouse service offering
  • Werner shows a slide from 2007 that he created and shows how the message is still relevant today (removed the heavy lifting of capital constraints - physics, people, scope)
  • AWS developed because Amazon's core business needed to scale and they were too constrained to continue to grow OR there was capacity that was underutilized. They were having trouble with the peaks and valleys of traffic to the business (Think Black Friday)
  • 11.10.10 - turned off the last physical web server supporting the Amazon business; 10.31.11 - turned off the last physical server supporting the UK business, removing the physical constraints to growth for their business
  • Key aspects of a 21st century architecture - secure, scalable, fault tolerant, high performance, cost effective
  • everything is a programmable resource - data centers, networks, compute, storage, databases, load balancers, all just services now - No more "hugging" anything
  • when a project is focused on fixed resources, 31% never complete and 52% go over budget due to inaccurate resource estimates, changing requirements, unmanaged risk, scope creep and complexity
  • If you are no longer "bogged down" by resources, Werner feels this will change and the numbers will be more positive.
  • How do you change this mindset?
  • Take a step back and don't think of the resources anymore, i.e. an EC2 instance is not a server anymore
  • Decompose into small, loosely coupled, stateless building blocks
  • discussing imdb.com - their old architecture ran on Amazon web servers attached to the IMDb service. The architecture wasn't scalable; there was tight coupling between the businesses. If Amazon scaled up, IMDb had to scale up
  • The new architecture loosely coupled the HTML code on S3, so if Amazon scaled up, IMDb wouldn't have to
  • Automate your application and processes
  • Werner says humans are terrible at automation; if you have to ssh or log in to an instance, it isn't automated. He recommends Chef or Puppet
  • Let Business levers control the system
  • Be Agile, break down to small building blocks, let the business decide and be able to pivot in a short time to the new demands
  • Architect with cost in mind
  • Time for customer testimony
  • Ryan Park - Pinterest technical operations now on stage
  • Introduction to Pinterest
  • Design Principles used - Flexibility (Apache ZooKeeper used to redirect and balance as well as AWS Load Balancer)
  • Scalable (decomposed everything into services vs. monolithic design), databases are thousands of shards, no one server contains the whole database
  • Measurability (monitor application and infrastructure performance at all times)
  • peak traffic is during USA hours - autoscaling shuts down 20% of capacity after hours, reducing cost when traffic is lower
  • they use reserved instances for the standard traffic and then on-demand and spot instances to handle the elastic load throughout the day. Watchdog processes look for spin up / spin down
  • $54 an hour to run initially; after changes, $20 an hour
  • Werner back discussing the spot instance market
  • Now talking Resilient design aspects (you shall protect your customers at all times)
  • Protecting your customer is the first priority
  • Encrypt all user data!
  • Amazon encrypts all data in transit as well as at rest
  • HTTPS is used everywhere
  • In production, deploy to at least two availability zones
  • You need to protect your business, if you go to production, span zones, period
  • Integrate security into your application from the ground up
  • "If firewalls were the answer, we would still have moats around all are cities" - classic quote
  • Build, test, integrate and deploy continuously
  • Don't wait for the "next version" to implement features, constantly iterate and deploy
  • Amazon average deployment time between "versions" is 11.6 seconds, constant iteration of the site (wow, that is amazing!)
  • Shows the old architecture, was very error prone, this method wasn't possible in the old model.
  • In the old way, roll back was almost impossible, today roll back is a single API call
  • Dr. Matt Wood - Chief Data Scientist for Amazon is on stage to give a demonstration
  • decoupled, stateless architectures are easy to maintain at scale but you don't need to be Amazon to take advantage of this.
  • Live Demo Time - Photo Management Application running on EC2, showing version 1.0 running
  • put the full stack into version control so all dependencies are "stored" as a template, including your bootstrap environment. Super easy to manage this way
  • everything is a load balanced, stateless architecture built across 10 instances
  • simulated traffic coming into site and data is in Dynamo DB
  • photo comes in, photo is processed and cleaned up, then published
  • discussion of the cost of processing 1000 images using version 1.0 of the product
  • spun up version 2.0 of the product using a different (faster) instance type, will this make a difference in the costs?
  • A new instance is launched behind the load balancer; the cost metric went down in real time, so they are saving money just by using the template with a faster instance.
  • Since this brought cost down, they replaced the rest with faster instances and costs have now gone down, on the fly, without the user being aware
  • "Don't become attached to your compute infrastructure"
  • Back to Werner - Don't think in single failures
  • "There is always a failure waiting around the corner"
  • Don't treat failure as an exception, treat it as a normal possible state
  • On stage now - Alyssa Henry, VP of Storage to discuss S3 design
  • S3 runs within multiple availability zones
  • How does an S3 request get processed (say a put request)
  • Load Balancer -> Availability Zone -> Web Server -> Index service and storage service stores on multiple instances in multiple facilities
  • What happens in a failure? Redundancy in all areas, so even if an availability zone goes down you simply change DNS weights to route traffic away from the failure (a rough sketch of that kind of DNS change is at the end of these notes)
  • Adaptability - Weren't sure how the traffic would grow so they needed loose coupling of services so they could scale in whatever direction the customers took them
  • S3, circa 2006, saw amazing growth; they built a new storage service that ran side by side with the old one and migrated over time from the old service to the new service.  In-place migration and upgrades (better performance, higher availability, and lower costs to customers) without downtime.  There wasn't a "version 2.0" of S3 offered; users were simply migrated over to it
  • Werner back - Now talking Adaptive design
  • Assume Nothing
  • Build your architecture focused on your customer, not the available resources at your disposal
  • You get more business value by starting from scratch with no assumptions
  • This prevents you from being locked in (i.e capacity planning on a project, what if you are wrong?)
  • On Stage: CEO of Animoto - Brad Jefferson
  • Brad explains the site (user video creation site)
  • each video is custom rendered frame by frame on their site, a single server per video; this caused an EC2 instance explosion when they announced Facebook integration
  • went from 100 instances to 5000+ instances in 12 months
  • In 2007 everything was on AWS except the rendering, which ran on homegrown in-house servers
  • In 2008, rendering was done on AWS using extra large instances
  • they were good at building videos, not at building servers
  • 2009, high-CPU extra large instances, higher performance (faster to render for users, higher resolution), added medium instances for lower resolution to lower costs
  • in 2011, Cluster GPU instances (quadruple extra large); now full HD video and streaming before the video is even finished rendering
  • Werner back on stage
  • Werner talking about the AWS Startup Challenge, if you own a business, submit for possible EC2 credits to help build startups, deadline is the next 7 days
  • Announcement: Two New EC2 Instance Types
    • Cluster High Memory: 240 GB of RAM, 2x120 GB SSD
    • High Storage: 48 TB across 24 HDDs, with 117 GB of RAM, for "big data"
  • Talking about the importance of collection of data metrics to determine where your business is going
  • Don't look at averages, means... what if 20% of your customers are having a bad experience?
  • Look at the 99.9th percentile of your customers' experience, do not focus on the average
  • "control the worst experience your customers are getting"
  • Announcement: AWS Data Pipeline, data driven workload service to move data from one service to another
    • Orchestration service for data workflows
    • Automated and scheduled data flows
    • pre-integrated with AWS data sources
    • Easily connect with 3rd party and on-premise sources
  • Demonstration - AWS Data Pipeline, Dr. Matt Wood back on stage
    • creating a pipeline is a drag and drop interface
    • pre-made templates (example is Dynamo DB to S3)
    • configure the source, destination and requirements
    • automate this by setting a schedule
    • Another Example - Data Logs from S3
    • Create a daily report and analyze it using MapReduce (a new Hadoop cluster) - pay-as-you-go log analysis
    • After Hadoop analysis, reports will be stored in another S3 bucket
    • In addition to the daily logs, create a weekly roll up analytics report
  • Conclusion and wrap up time....
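One of the bullets above mentioned routing traffic away from a failed facility by changing DNS weights. Purely as an illustration (this is not how S3 itself is implemented, and all names and the zone ID are made up), a weighted-record change in boto3 looks roughly like this:

```python
# Sketch: drop the DNS weight of an endpoint in a struggling facility so
# traffic shifts to the healthy one. Everything here is a placeholder.
import boto3

route53 = boto3.client("route53")

def set_weight(identifier: str, endpoint: str, weight: int) -> None:
    """Upsert one weighted CNAME record for the service hostname."""
    route53.change_resource_record_sets(
        HostedZoneId="ZEXAMPLE12345",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "service.example.com.",
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": endpoint}],
            },
        }]},
    )

# Route traffic away from facility A; facility B keeps taking the load.
set_weight("facility-a", "a.service.example.com", 0)
set_weight("facility-b", "b.service.example.com", 100)
```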
In conclusion, very interesting stuff!