Thursday, March 13, 2014

Cisco Live: Lenny Kravitz & Imagine Dragons! What?!

This post has a personal as well as a corporate angle. First, the personal: I was recently selected as a Cisco Champion! A huge thanks to Cisco for giving me access to the program; they have done an amazing job of incubating and developing a program that gets their word out in new and creative ways. I threw the graphic up on my site, but I haven't really had the bandwidth to talk about it until now. Cisco has chosen a great community, and it has been an awesome ride so far!



Ok, enough of that. The Cisco Champions got the scoop yesterday on some great upcoming announcements for Cisco Live!, coming up May 18-22 in San Francisco. I'm hoping I can make it to the event, but planning around the "day job" is still in the works. If you want to register, HURRY UP! The Early Registration deadline is March 15th.

What is there to look forward to at the event?

Rowan Trollope (better known as the person who tries to keep Peder Ulander in line) did a great post yesterday on the Next Generation of Cisco Collaboration Experiences. This is the biggest announcement of new products since TelePresence and I can't wait to hear more about the products in the near future. A few highlights of the new products to look forward to at Cisco Live!

·         Cisco TelePresence MX Series learn more
·         Cisco TelePresence SX10 Quick Set learn more
·         Cisco TelePresence Precision 60 Camera learn more
·         Cisco TelePresence SpeakerTrack 60 learn more
·         Cisco TelePresence SX80 Codec learn more
·         Cisco Business Edition 6000 Enhancements learn more
·         Cisco Business Edition 7000 learn more
·         Cisco Intelligent Proximity learn more

But who is playing the Customer Appreciation Event?!



Historically I haven't been a big fan of going to the customer appreciation shows (too many guys in too small a space), but there is NO WAY I would miss this one! Lenny Kravitz and Imagine Dragons playing at AT&T Park! Yup, you heard it here first. The event will be Wednesday night, May 21st. I can't wait! (Pro Tip: Either catch an early bus back to beat the lines or drink enough that you won't mind waiting on transportation back. Of course, Uber or a taxi is your friend as well!)


Sunday, February 9, 2014

Goldilocks & Supply Chains with VMTurbo

I'm fulfilling a New Year's Resolution to get back into blogging. Life has been crazy (but good crazy) and it's time to reestablish some old habits.

If you listen to the Cloudcast you know that Shmuel Kliger, CEO & Founder of VMTurbo, was a guest back in December. I didn't know it then, but that episode was the first step in a major education that I want to pass along to everyone.

Let's start with what you probably think about VMTurbo. You probably think they are a monitoring solution for virtualization, specifically VMware. If you thought this, you are not alone. I'll attempt to convince you that VMTurbo has a pretty unique value proposition and is properly aligned at the intersection of a bunch of upcoming data center operations trends (Software Defined Anything, public and private cloud, etc.). Version 4.5 of their product was released a little over a week ago, and here are a few items I've observed over the last few months of digging into the product.

VMTurbo Isn't Your Daddy's VMware Monitoring Tool

A major shift within the last 12-18 months has been the advance of multiple hypervisors and private cloud IaaS projects and products. The virtualization world isn't a one-horse race anymore. Many other hypervisors are now "good enough", and the case can be made for certain workloads to run on non-vSphere environments. As you can see over on the new product page, VMTurbo works with all the major hypervisors and private/public cloud IaaS offerings. They cover vSphere, vCloud, Xen, Hyper-V, CloudStack (and Citrix CloudPlatform), OpenStack, Azure, and AWS. This is smart of them, really smart. I see workloads moving to different environments all the time (and sometimes moving back), and finding a tool that covers all possible infrastructure combinations is currently difficult.



What Do You Mean It Isn't Just a Monitoring Tool?

This is where VMTurbo gets really interesting for me. As mentioned on the podcast, I'm an old data center operations guy and it has always been a passion of mine. I have been exploring not just WHAT VMTurbo does but HOW it does it as well. Yes, the product monitors complex systems, but the WAY it does it is very different. This isn't just a bunch of agents running on machines, sending alerts back (or SNMP traps!) so that somebody gets an email or a page to take a corrective action. Just because you get an alert doesn't mean you know what corrective action needs to take place. What is the cause? Is there a problem downstream (a hot spot on a disk on a volume or LUN creating disk latency)? Monitoring systems often detect "blackouts" (something is down) but don't do as well with "brownouts" (something isn't performing optimally) because most monitoring systems don't understand the connections from the lowest level of hardware all the way up to the application and the potential performance impacts. Only by understanding how application resources are mapped to physical infrastructure can you gain insight into the optimal performance of a system.

This is where VMTurbo comes into play. The product uses a Supply Chain Model to map every input and output of hardware and software in a system to understand potential impacts as well as potential improvements. Every product you consume has a supply chain. How does a product get from various raw materials into a finished offering that is consumed by you? Think about that for a second...

Take the computer or mobile device you are reading this on as an example. Every part, thousands of them, has to be made from raw materials, brought together, shipped, and offered to you as a product. You are the consumer. Now, think about an application or a workload as that consumer. All the underlying parts (disk, memory, compute, network) need to be combined and offered as services (hypervisor, virtual machines) that are consumed in various amounts by the application. Furthermore, each resource can serve as both an input and an output. Some will take resources, but will also serve resources to others.
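To make the consumer/provider idea concrete, here is a minimal sketch (my own illustration, not VMTurbo's actual internals) of a resource supply chain where each entity both consumes from providers below it and supplies resources to consumers above it:

```python
# Hypothetical supply-chain resource model: an application consumes
# from a VM, which consumes from a physical host. Names and capacity
# numbers are invented for illustration.

class Entity:
    def __init__(self, name, capacity=None):
        self.name = name
        self.capacity = capacity or {}   # resources this entity supplies
        self.providers = []              # entities it consumes from

    def supply_chain(self):
        """Walk from this consumer down to the raw infrastructure."""
        chain = [self.name]
        for p in self.providers:
            chain.extend(p.supply_chain())
        return chain

host = Entity("host-01", {"cpu_ghz": 24, "mem_gb": 128})
vm = Entity("vm-app-01", {"vcpu": 4, "vmem_gb": 16})
vm.providers.append(host)
app = Entity("web-app")
app.providers.append(vm)

print(app.supply_chain())  # ['web-app', 'vm-app-01', 'host-01']
```

Walking that chain downward is exactly the kind of mapping that lets you trace an application brownout back to a hot LUN or an oversubscribed host.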




By taking this approach, everything in the system becomes Data (with a capital D). Once everything in the system is Data, you can start to apply some universal concepts such as a Common Data Model and System Optimization through the Economic Scheduling Engine. I'm going to take each one of those in turn.

What is a Common Data Model and why should I care?

By taking a complex infrastructure and breaking it down into a Common Data Model (compute, storage, network, hypervisor, etc.) it becomes very easy to add new systems and components. Remember above when I stated that VMTurbo supports the various hypervisors as well as IaaS projects/products? At a very fundamental level all products break down in the same way (Common Data), and once broken down we can begin to understand the mapping between components. This mapping gives us greater insight into connections for root cause analysis, and it makes adding new components and software very easy because the initial mapping is already complete. The latest version of VMTurbo has added hardware from Cisco UCS as well as storage from NetApp. I'm sure this integration further down the stack will continue and will be a great value-add for converged infrastructure products (FlexPod, anyone?). Here is an example of a mapping in the interface:



Here is another way to understand this mapping. When I was at IBM supporting business partners about 10 years ago, virtualization was just starting to heat up. Part of the early days of this market was convincing customers with physical infrastructure to go virtual. The demand was there but the tools at the time were not. Because of this, my team would go in and analyze physical environments, break them down (using a common data model), carve up workloads, and perform a manual calculation of how much virtual infrastructure would be required to support the proposed environment. We would map out applications down to the basic compute, memory, storage, and networking requirements. This was a complex operation that took weeks of manual calculations and Excel formulas to accomplish. VMTurbo does basically that same thing and does it automatically, without human intervention! This could have saved me hundreds of hours back in the day!
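For flavor, the kind of back-of-the-envelope math we used to grind through in Excel might look like this (all workload and host numbers here are invented):

```python
import math

# Back-of-the-envelope consolidation sizing: sum the workload demands,
# divide by usable host capacity, and size to the tighter constraint.

workloads = [
    {"name": "mail", "cpu_ghz": 4.0, "mem_gb": 16},
    {"name": "crm",  "cpu_ghz": 6.0, "mem_gb": 32},
    {"name": "web",  "cpu_ghz": 2.0, "mem_gb": 8},
]
host = {"cpu_ghz": 16.0, "mem_gb": 64}
headroom = 0.8  # keep 20% spare capacity on each host

cpu_needed = sum(w["cpu_ghz"] for w in workloads)
mem_needed = sum(w["mem_gb"] for w in workloads)
hosts_required = max(
    math.ceil(cpu_needed / (host["cpu_ghz"] * headroom)),
    math.ceil(mem_needed / (host["mem_gb"] * headroom)),
)
print(hosts_required)  # 2 (memory is the binding constraint here)
```

Multiply that by hundreds of workloads and dozens of what-if scenarios and you can see why automating it is such a win.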

The Common Data Model is about more than just analysis. VMTurbo is able to recommend changes and (if configured to do so) will actually remediate environments into an optimal operating state. That takes me to the next section.

What is System Optimization through the Economic Scheduling Engine and why should I care?

We've talked Supply Chains to death; let's talk about Goldilocks for a bit. Most people in our field don't know it, but they are always searching for a Goldilocks State of Operations. Our customers are looking for something that isn't too big, isn't too small, but just right. The problem is that our applications and workloads are dynamic and ever-changing, so finding the "just right" spot is hard because it is constantly shifting. Too few resources and application performance may suffer; too many and we are wasting money through over-provisioning.

This is where the whole "cloud computing" idea comes into play. Cloud computing can be boiled down to the concept of Dynamic IT: dynamic pools of scalable resources. As our application workload shifts and moves, our underlying IT infrastructure must shift and move to compensate. This is what we would call a "perfect state" in an economic system: we are providing just enough resources to be consumed.


VMTurbo uses this model to constantly monitor resource demands and move and shift resources as needed. Think of it as VMware DRS for your entire infrastructure. The only way to do this is to map and understand the relationships of the infrastructure to the applications and how to make corrections as needed. VMTurbo attempts to provide a Goldilocks State of Operations for your entire infrastructure.
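As a toy illustration of the Goldilocks idea (again, my own sketch, not VMTurbo's Economic Scheduling Engine), a naive rebalancer might move the busiest VM off any host running hot until load evens out:

```python
# Toy rebalancer: hosts map to their VMs' CPU usage (% of host).
# If a host exceeds the threshold, migrate its hungriest VM to the
# least-loaded host. All numbers are made up for illustration.

hosts = {
    "host-01": {"vm-a": 55, "vm-b": 35},
    "host-02": {"vm-c": 10},
}
THRESHOLD = 70  # rebalance any host above this total utilization

def utilization(h):
    return sum(hosts[h].values())

def rebalance():
    moves = []
    for h in list(hosts):
        while utilization(h) > THRESHOLD:
            target = min(hosts, key=utilization)
            if target == h:
                break  # nowhere better to move to
            vm, load = max(hosts[h].items(), key=lambda kv: kv[1])
            del hosts[h][vm]
            hosts[target][vm] = load
            moves.append((vm, h, target))
    return moves

moves = rebalance()
print(moves)  # [('vm-a', 'host-01', 'host-02')]
```

The real engine reasons economically across the whole supply chain (compute, memory, storage, network) rather than just one CPU metric, but the "shift until it's just right" loop is the same spirit.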

If you are still with me, thank you! In conclusion, VMTurbo is a pretty unique product that I have had a great time digging into over the last few months. Through the use of VMTurbo's Common Data Model as well as the Economic Scheduling Engine, they are able to provide a product that is well suited to tackle increasingly complex infrastructure interdependencies as well as ever-increasing and shifting application workloads. Go check out the site for more information.

Disclaimer: As noted, Shmuel Kliger was a guest on the Cloudcast podcast of which I am a co-founder. I also attended a pre-release briefing and product demonstration on VMTurbo 4.5. No compensation was given or expected and I'm writing this blog post because I think it is cool tech and wanted to help get the word out.

Image Credits: VMTurbo

Big thanks to M. Sean McGee for his Goldilocks UCS Blade Post a few years back. The title is an homage to that post.

Tuesday, August 20, 2013

Catching Up - What Have I Been Up To

What happened to the last 3 months?!  To say it has been a busy summer would be an understatement.  Hopefully more blogs are coming in September about some cool things I'm working on, but here is a recap of the last few months.


I've also been generating a lot of content over the summer, too much to list here.  Go check out my blog on Tech Target as well as the latest episodes of the Cloudcast and Mobilecast.  Thanks again for coming by!

Wednesday, May 22, 2013

Citrix Synergy Keynote Live Blog


Well, access has proved to be an issue (the general wireless is saturated, and I have TWO MiFis that both wouldn't work), so I'm writing this offline and will publish ASAP.  Usual Live Blog disclaimer: this is me typing as fast as possible, so please forgive spelling and formatting errors.  Limited bandwidth, so I'll add pictures a little later today as well.

  • Mark T (CEO) is up - introduction of Synergy 2013, packed crowd
  • Citrix CloudPlatform is up first, 200+ production clouds, 40,000+ node scale, lots of references
  • Talks about CloudPlatform being based on Apache CloudStack, 35,000 community members, top level Apache project graduation fastest in history, most contributions of any Apache project
  • ShareFile - On-premise storage option, private and public cloud data storage, you choose where your data is stored
  • ANNOUNCE: ShareFile StorageZones Connectors - application-level connectors into the Enterprise
  • ANNOUNCE: Windows Azure support for ShareFile
  • MSFT update - 80% growth of XenDesktop on Hyper-V
  • A bunch of MSFT Windows 2012 and Windows 8 updates (too many and too fast to type)
  • Citrix Receiver for Windows 8 is out
  • Moving the Windows experience to a Mac is up next:
  • ANNOUNCE: Desktop Player for Mac - Run Virtual Desktops on the Mac, online, offline, encrypted, centrally controlled, tech preview coming next month
  • Cisco Partnership up next
  • XenDesktop on UCS is a large UCS use case (FlexPod as well)
  • Tighter Integration between Cisco and Citrix across the board in all product areas
  • NetScaler has taken off (MPX, VPX, SDX) as a replacement for Cisco ACE and joint interoperability and development coming in the future
  • Innovation Award: (videos shown of all the finalists, Miami Children's Hospital, USP - University of São Paulo, Essar) - Award goes to… USP for their use case of CloudPlatform and Cloud Portal!!  Very exciting to see our customer receive this great award.  We are very proud to partner with them to help them serve their customers, the students of the university!
  • Up Next: Going mobile
  • What is driving the industry? - Consumerization - Mobile devices and Bring Your Own Anything is taking over!
  • Generations - The next generation requires different access than traditional IT would allow
  • Disruptions - self explanatory
  • The Pace - Everything is faster and at a greater magnitude in scope
  • Paradigm shift - "Don't Own Stuff" - more agile, more flexible because CAPEX and "stuff" isn't holding you back
  • "Move, Add, & Change" - How to move faster, change quickly, add and remove quickly. Orgs need to tackle this
  • For example: 100,000 changes in an org once cost 75 million; now down to 25 million - savings and efficiency
  • It is all about Mobile Workstyles going forward
  • Up first, Windows Desktops still prevail in the Enterprise (about 85-90% today) - It is still a Windows world
  • What would XenDesktop and XenApp look like in a mobile cloud era? - Project Avalon - 
  • ANNOUNCE: first release is called XenDesktop 7 - designed for simplicity and mobility
  • FlexCast - Windows Apps and Windows Desktops under one umbrella - FMA - FlexCast Management Architecture
  • 1 package to download - automated installation and deployment
  • HDX Insight - end to end monitoring of HDX traffic
  • no more workload provisioning, app-by-app publishing, windows app migration (all about simplification of the operations and building)
  • HDX Mobile - HD video on any device, even over 3G, 100% increase in WAN efficiency, native mobile functions (access, device GPS, sensors, cameras, etc)
  • HDX mobile SDK for Windows Apps - take a .NET app, turn it into a Windows "Mobile" app through XenDesktop and XenApp, develop once and it will adjust to the device
  • Demo Time - Brad Petterson up to demo XenDesktop 7
  • Apps and Desktop provisioning all in one using Studio - showing Director with information from NetScaler and network traffic in real time. Shows XenApp/XenDesktop traffic, goes all the way to the app level, also shows a larger IT Support view that allows better troubleshooting across an entire org, shows an ability to assess and act on the infrastructure
  • Now demo of iPad mini connecting with Receiver to a Windows 8 virtual desktop, showing off the Windows 8 experience on an iPad mini, very fluid, flash video is seamless, also showing off a full screen movie streaming over the iPad
  • The redesign of Windows Apps is pretty cool to me, makes the VDI on a mobile device potentially less painful. Seems to be a natural progression
  • Up Next - Cloud Enable the CPU, GPU, Network and Storage
  • Delivering "intense" apps that would normally not be a candidate for delivery
  • Jen-Hsun Huang - CEO & Co-Founder of NVIDIA is up to talk about this
  • The partnership has been around for a long time, since 2006
  • talking about the "good old days" and how some projects actually failed over the years because the "cloud" wasn't ready for these intense workloads
  • Demo Time - Adobe Photoshop running on an iPad - pulls up a picture, using the GPU in the "cloud" to manipulate the picture in real time, shows very complex graphic manipulation in real time.
  • What about applications that have required the "big powerful workstations" until now because of the processing power required?
  • Talking about the design of the Boeing 787; the databases on the back end (Data Gravity again! Google it) made development around the world difficult
  • Instead using remote workstations driven by GPU's and only move the pixels, not the data
  • Showing various examples of apps running in realtime, actually showing a 4k video resolution file and editing in real time.  Very cool
  • Now talking about how it happens on the back end. Virtualization of more than the CPU is required, we now need the GPU to be virtualized
  • New NVIDIA GPU's are designed with virtualization in mind, now integrated with virtualization
  • ANNOUNCE: virtual application running on a virtual desktop with a virtual NVIDIA GPU
  • Showing AutoCAD, PLM (Manufacturing), and vGPU remotely for the first time
  • Google Earth running on a virtual machine using a hand gesture technology (have to see it to explain it), Demo of hand gesture control of Google Earth in real time, really cool!!
  • It's called the NVIDIA GRID vGPU and is integrated into XenDesktop 7
  • OpenGL Support, an industry-first direct GPU
  • Up Next - XenApp 6.5
  • Announce of Feature Pack 2 with many new features (too many to type here)
  • June will see shipping for both XenApp and XenDesktop
  • The world of apps is moving beyond Windows Apps
  • What about IOS, Android, mobile data?
  • 3 big areas to mobile devices - devices + apps + data - need a strategy that takes both into account
  • Even if you take care of all three areas, the Experience is the most important factor
  • How do you deliver a consumer-like mobile experience at work?
  • 3 things to do that - infrastructure to manage the mobile lifecycle + mobile apps & data + developer tools and app ecosystem
  • XenMobile - How to deliver this - Provision, security, apps, and data to mobile devices
  • Want seamless windows integration
  • Worx Enroll - self-service device registration is the first step (provisioning)
  • Worx Home - Mobile settings, support, more (operations)
  • Demo Time - Showing off BYOD of an iPhone 5 using Worx Enroll and Worx Home
  • Enroll checks the device, checks if it is jailbroken (Boo!) and certifies the device
  • You then enroll and your "apps" are pushed to the device, Worx Home acts like a corporate app store, could be a desktop, an app, a mobile app, a file, etc.
  • XenMobile has GoToAssist built in for mobile device support in the Enterprise
  • Now showing the XenMobile admin UI, which shows all devices in the enterprise with a very nice breakdown of the devices
  • This allows you as an admin to wipe the "business" side of the device
  • Now showing a new Samsung S4, a Nokia with Windows 8, and Android on a stick from Wyse
  • XenMobile is designed for the full mobile lifecycle
  • What about apps that talk to each other (copy, paste, etc)
  • You don't want salesforce data leaking out, evernote to contain confidential information for example, create a barrier between life and work
  • MDX Technology - Micro VPN and secure app containers, app specific lock and swipe, inter-app communication, conditional access policies
  • XenMobile now includes WorxMail (mail, calendar, contacts), WorxWeb, ShareFile as a "basis" for office communications
  • Demo Time - Showing email; have a sensitive email, can't open it or move it out of the app "container", but it does allow it on ShareFile
  • Showing another email with a link to the internal Intranet and it will fire up a micro-VPN and use WorxWeb to tunnel back
  • Showing an integration of ShareFile integrated with internal file shares on the intranet.  Allows you to connect back to corp data on ShareFile along with document editing on the iPad
  • SharePoint connector into ShareFile - Pulls SharePoint into ShareFile, allows checkout of documents and editing with many SharePoint tracking features in place.  Check back in with a Note as well
  • Podio - can now use the Chat API (use GTM for real-time interaction, Podio for team-based actions), can also do one-button video chat built into Podio; it uses HD Faces technology built into Podio
  • XenMobile has 3 versions - MDM Edition, App Edition, and Enterprise Edition
  • Available in June
  • Worx App SDK - Worx Enable any mobile app
  • Also a Worx "App store" for IT to enable apps in the Enterprise
  • (NetScaler & wrap up content here but had some other things come up so missed them, sorry about that)

Monday, May 6, 2013

April Recap

My trend of posting monthly recaps a few days late continues...  Sorry about that; hopefully the May recap will be on time.  I was traveling most of April, so the blogs this month tend to reflect that.

I'll start with the Cloudcast (.net) for the month of April.  We published a record number of episodes. A HUGE thanks to both Amy Lewis and Brian Katz for their amazing contributions!  Amy did a fantastic job as roving reporter and Brian's Mobilecast is really taking off!  As always, please send us any show feedback, we love to hear from you!


Next up is my new TechTarget Blog, you have subscribed with your latest Google Reader replacement, right??  I'm really having a good time writing over there.  This site (aarondelp.com) has always been more hands on and live blogs from events but the interest in the latest trends around Open Clouds and the operational aspects of cloud computing has been both great and humbling.  Thank you to everyone who has taken the time to read the articles and provide feedback!


The only blogging I was able to do on my site this month was the live blogs from the AWS event.  Here are all of them.

Tuesday, April 30, 2013

AWS Summit Liveblog: RightScale - Hybrid IT Design

Usual Liveblog disclaimer: typing this as I go in the session, please excuse typos and formatting issues

Title: Hybrid IT - Steps to Building a Successful Model - presented by RightScale
Presenter: Brian Adler, Sr. Services Architect, RightScale & Ryan Geyer, Cloud Solutions Engineer

Brian is services, this won't be a product pitch ;)

RightScale is a CMP (Cloud Management Platform) - provides configuration management, an automation engine, as well as governance controls and does both public and on-premise clouds (I think the word private cloud must be on the naughty list at the show, all pitches do NOT use the dirty "p word")

RightScale allows management & automation across cloud resource pools

basic overview of terminology and where we have come in IaaS to Cloud Computing today

On-Premise Key Considerations

1. Workload and Infrastructure Interaction - What are the resource needs? Does this make sense in the cloud, and which size instance would be best?  Instance type is very important
2. Compliance - Data may need to be contained on-prem for compliance
3. Latency - Does the consumer require low latency for a good user experience?
4. Cost of Performance - The faster it has to go (latency), the more expensive it will be in the cloud
5. Cost - What is the CAPEX vs. OPEX and does it make sense?

Use Cases

1. Self-Service IT Portal (The IT Vending Machine) - Users select from a fixed menu, for example, pre-configured and isolated test/dev

Demo Time - Showing off an example of a portal using the RightScale APIs; basically push a big button, enter a few options, and let it spin up an environment. In this example they provisioned five servers and a PHP environment in a few minutes

2. Scalable Applications with Uncertain Demand - This is the typical web-scale use case: fail or succeed very fast in the public cloud. "See if it sticks", and once it sticks, maybe pull it in house if cost reduction can be achieved when the application is at steady state

3. Disaster Recovery - Production is typically on-premise and the DR environment is in the cloud; this is often considered a "warm DR" scenario - the database replicates in real time from production to DR, and all other servers are "down".  You then spin up the other servers (the DB is already up and running) and flip the DNS entries over when DR is up and running.  You can achieve a great RTO & RPO in this example.  You can also do this from one AWS region to another.

Demo Time - Showing the RightScale Dashboard with a web app demo + DR.  The demo had 2 databases, master and slave, replicating across different regions (side discussions about WAN optimization and encryption here as well).  Production in the example was in AWS US-East and DR was in AWS US-West.  The front end of the app was down in West.  When you launch the West DR site, it configures everything automatically as part of the server template.  All DR happens just by turning up the front end in West
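For the curious, the DNS flip at the heart of that kind of warm-DR cutover boils down to upserting the app's record to point at the DR site. Here is a sketch of building such a Route 53 change batch (the record name and IP are made up, and real code would submit the batch through the Route 53 API):

```python
# Build the Route 53 ChangeBatch that repoints an A record at the DR
# region. This only constructs the request body; submitting it to AWS
# is left out so the sketch stays self-contained.

def failover_change_batch(record_name, dr_ip, ttl=60):
    return {
        "Comment": "Fail over to DR site",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,  # short TTL so clients follow the flip quickly
                "ResourceRecords": [{"Value": dr_ip}],
            },
        }],
    }

batch = failover_change_batch("app.example.com.", "203.0.113.10")
print(batch["Changes"][0]["Action"])  # UPSERT
```

The short TTL is the important operational detail: the lower it is, the faster clients land on the DR front end once you flip.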

Design Considerations

Location of Physical Hardware- again speed vs. latency vs. location

Availability and Redundancy Configuration - This can be easy to hard depending on your needs

Workloads, Workloads, Workloads - Does the application require HA of the infrastructure? Will it tolerate an interruption? Can it go down?  Will users be impacted?

Hardware Considerations - Do you need specialty? commodity?

(Sorry, he had others listed, I zoned out for a slide or two..)

On to Hybrid IT - Most customers start out wanting "cloud bursting" but most often an application is used in one location or the other.  Check out the slide for the reasons.

Common practice is that a workload is all on-premise or all public; bursting isn't a common use case.  If they do use bursting, they set up a VPC between private and public to maintain a connection.

Demo Time - What would a hybrid bursting scenario look like in the RightScale dashboard?  The customer has a local cloud that is VPC-connected to AWS.  Two load balancers: one is private, one is in AWS.  They are using Apache running on top of a virtual machine to maintain compatibility between private and public.  DNS is using Route 53 (AWS DNS).  RightScale uses the concept of an Array.  As RightScale monitors performance, additional instances are fired up and "bursted" (scaled out) to AWS above and beyond the already-running local resources.
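A simplified version of that scale-out decision (thresholds and batch sizes invented here, not RightScale's actual algorithm) could look like:

```python
# Toy bursting check: when average CPU across running instances crosses
# a threshold, launch a batch of additional instances in the public
# cloud, capped at a maximum array size.

def instances_to_add(loads, scale_up_at=75, batch=2, max_total=10):
    """loads: per-instance CPU %; returns how many instances to launch."""
    avg = sum(loads) / len(loads)
    if avg >= scale_up_at and len(loads) < max_total:
        return min(batch, max_total - len(loads))
    return 0

print(instances_to_add([80, 85, 90]))  # 2 -> burst two instances to AWS
print(instances_to_add([30, 40, 20]))  # 0 -> local capacity is enough
```

A matching scale-down rule (and a cooldown so you don't flap) is what turns this into a real autoscaling policy.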

You do not need the same LBs on the front end as in the example above.  For example, it could be a local CloudStack/OpenStack environment with a hardware firewall in front that also includes AWS and AWS ELB in the rules

Take Away - It is very possible to use both public and private, and there isn't a need for a "one size fits all" approach

Great session; probably the best session of the day so far for me.




AWS Summit Liveblog: Cloud Backup and DR

Usual Liveblog Disclaimer: This is typed as fast as I can, so the blog may contain typing and formatting errors. Sorry about that

Session: Technical Lessons on how to do Backup and Disaster Recovery in the Cloud (whew, long title)

Presenter: Simone Brunozzi, Technology Evangelist

Simone presented in the morning keynote on the Enterprise demo, good presenter

3 parts = HA -> Backup -> Disaster Recovery

HA = Keeping Services Alive

Backup = Process of keeping a copy

DR = Recover using a backup

(Simone is using great examples involving churches and monasteries, but they're too long to type out here.)

5 Concepts of DR

1. My backup should be accessible - AWS uses APIs, Direct Connect; the customer owns the data; redundancy is built in; AWS has import/export capabilities

AWS Storage Gateway as an example: using a gateway cache volume on-premise that replicates to a volume in the AWS public cloud, S3, snapshots, etc.  Can be GW-cached or GW-stored (one is a cache, the other keeps a full local copy). Secure tunnel for transport over AWS Direct Connect or the Internet

2. My backup should be able to scale - "Infinite scale" with S3 and Glacier, scale to multiple regions, seamless, no need to provision, cost tiers (cheaper options and at scale are available)

3. My backup should be safe - SSL Endpoints, signed API calls, stored encrypted files, server-side encryption, durability: multiple copies across different data centers, local/cloud with AWS Storage Gateway

4. My backup should work with my DR policy (I don't want to wait 10 years to recover) - easy to integrate within AWS or hybrid; AWS Storage Gateway: run services on Amazon EC2 for DR, clear costs, reduced costs. You decide the redundancy/availability in relation to costs.

5. Someone should care about it - Need clear ownership, permission can be set in IAM with roles, monitor logs

Now a customer story:

Shaw Media - Canadian media company. Before AWS: multiple datacenters, lots of equipment, downtime, different technologies across datacenters. They were told to change everything and become more agile and cost effective in the next 9 months to better serve the business

Solved the issue with AWS: fast deployment of servers, network rules, and ELB on AWS; first site in only 4 weeks; after that, a full migration of 29 sites from a physical DC in 9 months - This was Phase One (the main websites)

Phase Two - Other web services migration was next (check out the picture for the details), impressive stats.  Typical web servers, apps servers, database servers, etc.


Lessons Learned - he went too fast, didn't catch it... damnit

DR - Learn from your outages (test your policy on a regular basis and refine the document)

(Sorry, he's going too fast to type or even take pictures of the slides.... Really wish he would have gone slower in this section; the content was really good grrrrrrr)

Lessons to learn from DR

1. You NEED a DR plan in place - How will you recover?  Can your business survive without it?  For AWS, go across Availability Zones (AZs) or do App DR with Standby (see pictures).  The second option is cheaper to implement but will take a little longer to recover from.

 

Perform a business analysis of RTO & RPO (if you don't know what those are, Google them; you need to know).  In a nutshell: RTO - how long to get it back; RPO - how much data can I lose?  This is the typical cost vs. performance trade-off.  Take the various AWS services as an example:


2. Test your DR - Many may say Duh! to this one, but I'm always surprised how little customers actually do this.  The ability to spin up capacity just for DR testing helps minimize cost, and not having a permanent DR site to manage is pretty cool. Data transfer speeds (Data Gravity) could be an issue in this kind of scenario

3. Reducing Costs - Took a screenshot, it was easier


Overall - great presentation although I wish he would have spent more time on the customer slides as there was some good technical content there...




AWS Summit Liveblog: Introducing AWS OpsWorks

Usual liveblog disclaimer, this could be messy, please excuse typos, sorry for that.

Chris Barclay, Product manager for AWS OpsWorks is presenting

Application Management Challenges - Reliability and Scalability are important; operations tasks typically include: Provision, Deploy, etc.

"Once Upon a Time..."  - We took the time to develop everything by hand (home made bread)

Today we need to automate to go faster (cranking out automation in a factory like, mass produced way)

In today's infrastructure, everything is considered code, including the configuration of the "parts" - sounds much like a recent Cloudcast we did...

AWS OpsWorks is a tool to tackle this challenge - reliable, repeatable, integrated with AWS, and available at no additional cost

Why use OpsWorks?
Simple, Productive, Flexible, Powerful, Secure

A common complaint is that there are a lot of AWS "building blocks" but many don't want to stitch them together themselves; AWS can at times be complex because of the large number of services offered

Chris turned the presentation over to another speaker (didn't catch the name) from DriveDev, a DevOps consulting group focused on F500 companies and startups

He talked about a typical "old school" application development project that went poorly. They were able to use the built-in OpsWorks recipes with their own Chef cookbooks layered on top, and migrated the customer off private infrastructure and into public cloud with OpsWorks in a short amount of time. Basically, a success story...

How are customers using OpsWorks today?

From OS to application using OpsWorks; from OS to your code using Elastic Beanstalk; from the OS up, automating everything with Chef or another tool

Takeaway - how much automation you need, and at what level, determines which tool will be best.


Demo Time...

Talking about Chef and how OpsWorks uses it

The concept of lifecycle events - based on these, recipes are triggered
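A rough sketch of that idea - each lifecycle event on an instance triggers the recipes registered for it on the layer. The event names match OpsWorks' model, but the recipe names here are illustrative, not the actual built-ins:

```python
# Illustrative sketch of OpsWorks-style lifecycle events: each event fires
# the list of Chef recipes registered for it, in order.

LAYER_RECIPES = {
    "setup":     ["dependencies::install", "php::configure"],
    "configure": ["loadbalancer::update_members"],
    "deploy":    ["app::checkout", "app::symlink_release"],
    "undeploy":  ["app::remove"],
    "shutdown":  ["loadbalancer::deregister"],
}

def run_lifecycle_event(event):
    """Return the recipes that would run for a lifecycle event, in order."""
    if event not in LAYER_RECIPES:
        raise ValueError(f"unknown lifecycle event: {event}")
    return LAYER_RECIPES[event]

print(run_lifecycle_event("deploy"))  # ['app::checkout', 'app::symlink_release']
```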

 

Showing integration with GitHub - source and cookbooks are kept out on git

Chris created a stack: a PHP app server layer with MySQL on top, then added instances and started them up (could have chosen multiple AZs for HA at creation)

After this, there are built-in Chef recipes that can be used; you can also add your own if you need additional functionality. You can add extra EBS volumes if needed, Elastic IPs, IAM instance profiles, etc.

Talked about time-based instances - instances that only exist during certain times of day - and threshold (load-based) instances that can be fired up as needed (scaling an app server based on memory, CPU, network, etc.)
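The two scaling styles mentioned above can be sketched roughly like this - the schedule and thresholds are made-up values, not OpsWorks defaults:

```python
# Sketch of the two auto-scaling styles: time-based instances that run only
# during scheduled hours, and threshold (load-based) instances that start
# when a metric crosses a limit. All numbers here are hypothetical.

def time_based_should_run(hour, schedule=range(8, 18)):
    """Time-based instance: only up during the scheduled hours (8:00-17:59)."""
    return hour in schedule

def threshold_should_scale_up(cpu_pct, mem_pct, cpu_limit=75, mem_limit=80):
    """Load-based instance: fire up extra capacity when any metric is hot."""
    return cpu_pct > cpu_limit or mem_pct > mem_limit

print(time_based_should_run(3))           # False: 3 AM is outside the window
print(threshold_should_scale_up(90, 40))  # True: CPU is over the 75% limit
```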

Added the app from git onto the stack that was built

Chris went from here into deep git details that were above me (I admit I'm not the target audience here). The takeaway: he made a change, committed it, and performed a deployment - it looked very easy

Now on to Permissions - talking about the various options...

What's next? More integrations with AWS resources (e.g. ELB features), deeper VPC support, more built-in layers (go vote on their forums - they prioritize by public opinion)

Summary: OpsWorks for productivity, control, reliability


AWS Summit Keynote Live Blog


This is a live blog of the AWS Summit keynote by Andy Jassy. The usual disclaimer applies: I'll be typing fast and furious, so expect misspellings and some formatting errors. Also, there's no Internet in the keynote (MiFi or conference), so I'll be moving this over to the blog after the keynote.

There are a TON of people at the event (I'll see if they announce numbers but easily in the thousands), impressive

Intro videos going on now…

Andy Jassy is on stage - starts with the age of AWS: 7 years old, launched March 2006

Now digging into the breadth of the services - they are very proud of the pace of innovation (see pictures attached)

With the exception of 2010, they have doubled the number of services every year, up to almost 160 services available today

71 new features so far in 2013



9 regions, 25 availability zones, 39 edge locations - also talked about the GovCloud and the requirements on it to support Public Sector workloads

Amazon S3 - Over 2 Trillion objects, 1,100,000 peak requests/sec

He's firing off facts and figures so fast I can't keep up - nothing but speeds and feeds and stats to impress. He's talking very fast

Talking about customers and user base

 

Use cases - the story is really about building blocks and letting the developers decide how to stitch the blocks together; AWS was not going to dictate the use cases

Talking about security - security is the number one priority at AWS; talking about features like access control from the edge, dedicated instances, encryption, etc.

Certifications are as important as security features - they have HIPAA, ISO, SOX, FISMA, etc.

Now moving on to pricing (he's talking really fast, no transition in between topics)

They plan to remove cost from the process and pass savings on to customers - 31 price drops to date. The more customers they have, the better the economies of scale; they consider this a "flywheel": more customers drive price drops, which bring in more customers

AWS Trusted Advisor - checks for cost optimizations, security and availability checks, and performance recommendations (e.g. running on-demand vs. reserved instances) - pretty cool stuff. I remember hearing about this but never dug into it. It appears they are trying to change the mindset about steady-state apps; they have brought up a few times that you can run steady state in the cloud, but you need to do it on a reserved instance.

Now on to partners (again, no real transition) - The usual impressive list of both consulting and technology partners

AWS Marketplace - Their "App Store", 25 categories, 778 product listings - applications already configured and certified on the AWS ecosystem

Why are customers adopting cloud computing? (finally, a real transition)

1. Trade Capital Expense (CAPEX) for Operating Expense (OPEX) - $0 to get started and you can fail fast if needed
2. Lower Variable Expense than Most Companies Can Achieve In House - they mention again how large they are and the economies of scale they pass on to customers (seems to be their new message) - they appear to be positioning themselves as the "Walmart of the Cloud": low-price leader passing savings on to you
3. You Don't Need to Guess Capacity - talking about the typical predict-up-front model: what happens if you build it and nobody comes? What happens if too many people come? If the infrastructure is elastic there is no need for this planning and prediction step
4. Dramatically Increase Speed and Agility - an old-world server request usually takes weeks to get servers for development; AWS takes minutes and is all self-service. Compares development to invention: you need to run a lot of experiments, and being able to experiment and fail with little to no cost or collateral damage speeds up development
5. Stop Spending Money on the Undifferentiated Heavy Lifting - they do all the "infrastructure stuff" for you; infrastructure typically doesn't differentiate your business in any way, but it consumes a lot of operations resources
6. Go Global in Minutes - because of Regions and Availability Zones, the ability to scale and grow into a different region is much easier. No need to set up operations in another part of the world
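Point 3 (don't guess capacity) is really just arithmetic. Here's a back-of-envelope sketch with entirely hypothetical numbers - provisioning for peak around the clock vs. paying only for the server-hours actually used:

```python
# Hypothetical cost comparison: fixed peak provisioning vs. elastic pay-per-use.

def fixed_capacity_cost(peak_servers, cost_per_server_hr, hours):
    # On-prem model: provision for the predicted peak, pay for it all day
    return peak_servers * cost_per_server_hr * hours

def elastic_cost(hourly_demand, cost_per_server_hr):
    # Cloud model: pay per server-hour actually consumed
    return sum(hourly_demand) * cost_per_server_hr

# A day where demand peaks at 10 servers for 2 hours, otherwise 2 servers
demand = [2] * 22 + [10] * 2
print(fixed_capacity_cost(10, 0.50, 24))  # 120.0: paying for peak all day
print(elastic_cost(demand, 0.50))         # 32.0: paying only for used hours
```

The gap grows with how spiky the workload is - which is why steady-state workloads are steered toward reserved instances instead.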

Message is very Enterprise centric (no surprise there)

Sean Beausoleil is on stage now - lead engineer for Mailbox. Two years ago, their first product worked but wasn't "sticky" enough, because email still held most of a user's data. So: how to make the mailbox a better tool for task management?

Now a video about Mailbox use cases - in case you haven't tried it, Mailbox basically turns your mail into a to-do list. They were overwhelmed by the response to the initial movie that was released as a preview, and needed a massively scalable back end to support it. The product pulls from IMAP -> Cloud -> to device (see picture)
They knew they would need a massive back end on AWS, so they copied their existing system over and found a lot of bottlenecks in the app as they scaled up in testing. They were able to test AHEAD of production. Some components of the app were rewritten. That is why they introduced the reservation system some of you who got the app may have seen. (I was on that list)


They created the reservation system so they could ramp up gradually until they were sure they could scale. Even all this preparation didn't prepare them for the growth: they were handling 100 million emails a day within 2 months of launch. They were able to re-architect on the fly; the comment was "you can't predict what production will look like until you are in production". I couldn't agree more based on past experience

AWS allowed them to optimize, scale, and swap hardware instance sizes on the fly to balance usage against cost. They would model the workload and perform the hardware swaps seamlessly in the background with no downtime. I have to admit, that is pretty frickin' cool.

Andy is back - AWS adoption into the Enterprise is the topic now

Andy is now talking about how most "old guard" vendors are pushing for private cloud. He states that none of the 6 points above are available in a private cloud, and that the old guard runs a high-margin business that isn't the same as AWS. He is now talking about a balance of "old" on-premise resources and new cloud-era workloads - AWS Direct Connect, LDAP integration, VPC, etc. He says these tools for bridging from on-premise enterprises are the focus going forward. Mentions BMC and CA as future partners for single-pane-of-glass management

How are Enterprises using AWS?

Strategy 1: Cloud for Development and Test - first and most common use case
Strategy 2: Build New Apps for the Cloud - this is the next generation of applications. Retire the old and create new apps, faster to build, less expensive to run, easier to manage, etc
Strategy 3: Use Cloud to Make Existing On-Prem Apps Better - Take in house apps and outsource the analytics for example for processing in the cloud. They mentioned a few enterprises including Nasdaq that do this today
Strategy 4: New Cloud Apps that Integrate Back to On-Prem Systems - AWS serves up the front end and the processing happens on the back end on-prem
Strategy 5: Migrate Existing Apps to the Cloud - he admits this is emerging and often requires consulting services: taking that very traditional workload and moving it to the cloud
Strategy 6: All In - NETFLIX! No keynote is complete without them…

Now up - a demo of Enterprise and cloud by Simone (didn't catch his full name)
They want to show how AWS is relevant in the Enterprise
3 parts - Authentication, Integration, Migration

Authentication - Talking about Okta, an AD integration partner that brings AD into AWS. They created an AWS Admins group in AD, and it talks to AWS IAM and performs the changes needed to grant AWS admin rights

Integration - Storage Gateway for backup and recovery volumes: a volume on-premise replicates to S3; if needed, you stand up an EC2 instance and attach it to the volume on AWS. Talked about iSCSI targets and how to attach them (that brings back memories). Once this is done you could map back to on-premise (a little fuzzy on the details)

Migration - Talking about exporting an image from VMware vCenter on-premise and transferring it to AWS as an image (AMI). From there you can copy it to another region; the example was moving to the USA first and then transferring to Singapore. I admit the region-to-region move use case is really cool.



Talking again about the perception of AWS and the Enterprise. This is obviously a focus.

What are they working on next? Amazon VPC is a focus (to continue building out the Enterprise story), Direct Connect, and Amazon Route 53 (DNS services)

I'm actually gonna bail on the rest of this so I can go get a seat in the labs before they fill up. (Scratch that - the line for the labs is so long they are useless)


Again, they appear to be positioning themselves as the "Walmart of the Cloud" - low-price leader passing savings on to you. A key message was also recognizing that the Enterprise will continue to use on-premise resources

Summary - Good stuff. It is good to hear them focus on the Enterprise and do it in a way that isn't as in-your-face as it was at the AWS re:Invent conference.

Friday, April 5, 2013

March Recap

This post is a few days late, but I wanted to put together a recap of everything that happened in March. To say March was a busy month would be an understatement! I'm not sure how much content I'll be able to post here in April, as I have two speaking engagements to prepare for and I have decided to transition this blog away from Blogger and Feedburner to a WordPress-hosted site. Look for the new site sometime in late May, based on my schedule right now.

March was our busiest month in recent memory at The Cloudcast (.net).  We published seven podcasts in March including the beginning of our expansion plans with our first podcast branch, the Mobilecast, as well as our first in a series of guest hosts, the always awesome Amy Lewis at Cisco.  Our goal for 2013 is to extend our reach into areas people have told us they want as well as some new faces to the podcast.  Please tell us what you think!

The Cloudcast #76 - Bringing Depth to PaaS for Real World Deployments
The Cloudcast #77 - OpenStack, PaaS APIs, Platform Tools, Automation & News
The Cloudcast #78 - Open Source Software 101
The Cloudcast #79 - DevOps Evolution and the Phoenix Project
The Cloudcast #80 - Regional Cloud Madness

The Mobilecast #1 - A Year of Going Mobile
The Mobilecast #2 - Health, Fitness and Wearable Computing

In addition to this blog I have also been asked to blog about Cloud Computing over at Tech Target.  I have a pretty extensive consulting and operations background so I have been asked to think about cloud computing from an operations standpoint.  I'm aiming for at least one blog a week over there.  Please head on over and subscribe to the blog!  I met my goal in March, here are links:

What Happens When Your Cloud Goes Away?
Cloud Applications and Vanishing Software Generations
Will Clouds Ever Be Open?
Impacts of Cloud Workload Consolidation

Last (but not least!) on this site I published two articles, one on the NYC Cloud Computing Meetup I attended and a new semi-regular news link round up I plan to do.

NYC Cloud Computing Meetup Recap
In Case You Missed It #1

As always, thanks to everyone for coming by and look for big changes coming "soon"!

Tuesday, March 19, 2013

NYC Cloud Computing Meetup Recap

Last week I was able to attend the New York City Cloud Computing Meetup.  It was a very cool event and Joe Brockmeier presented Deploying Apache CloudStack from API to UI.



Deploying Apache CloudStack from API to UI from Joe Brockmeier

Joe did an awesome job (as always) and the meetup was nicely attended, I would estimate about 40-50 people were in the room.  Here are a few random thoughts and impressions in no particular order:

  • The session was very interactive. It took the crowd a little bit to come out of their shell but once they did the discussion was very free form and constructive
  • The level of questions was very good. Many were how-to-implement and architecture-related questions about specific features. Snapshots in particular generated a lot of discussion on slide 26. It appears we are starting to move beyond the basic cloud definitions and into the nitty-gritty of implementations
  • There were customers in the room and they greatly helped with the discussions (Thanks Jeff @ DataPipe!). It was great to hear how their real world experiences were put to use and how they were able to tackle some of the issues and concerns brought up
  • I like how Joe started with some features of the NIST definition and then added an additional point (see slide 4). I agree with Joe that API access is crucial going forward
  • Slide 15 (the architecture overview) generated a lot of real world discussion in the room that I believe was very helpful to everyone
All in all a great event!

Tuesday, March 5, 2013

In Case You Missed It #1

I'm going to try something new and see how this works. I read a LOT of Cloud Computing news.  When I was speaking on a panel recently I was asked afterwards why I don't share a lot of the news I find interesting and thought provoking.  Great question and here is an initial attempt to do just that.

Below is a list of articles I found interesting over the last two weeks and some commentary on what I see going on in the industry.  I'm still not 100% on the format so let me know what you like and want to see changed.

Events & Misc. Links


Amazon News
Amazon continues to steam ahead but the last few announcements have been very interesting.  In their quest to add more value (and lock-in) to their ecosystem, a bunch of small companies with products built around their cloud were put on notice.  How does a small startup compete with AWS when they decide to move into that space?  Time will tell...

OpenStack
One big OpenStack story to focus on from yesterday: IBM going "all in" with OpenStack. I saw this one coming a mile away. Even though I'm now employed by one of the vendors I posted about, I still contend that it depends on which vendors show up to the OpenStack party. As an outsider looking in, it appears HP is "phoning it in" (and a lot of people are leaving), while IBM and Red Hat are getting serious.


VMware
Beating up on VMware has become the cool thing to do. I joked about it on Twitter, but I believe VMware's messaging from PEX (VMware Partner Exchange) last week sent the wrong signal, the same way I felt AWS sent out some bad mojo at their conference late last year. The big guys tend to approach this as all-or-nothing, with everyone else the enemy (it's their job, don't blame them), but most customers I talk to don't see it this way at all.