Thursday, November 13, 2014

AWS Re:Invent Day 2 Keynote Live Blog

Got here early enough to get coffee and get a good seat! Crap, seat not so good, still can’t see slides well… *sigh*

Werner Vogels on stage - Quick recap of yesterday’s announcements. Says the party artist will be announced at the end of the keynote

Talking about building applications on cloud - Harder, Better, Faster, Stronger than they have ever been. (hint hint)

Services delivered in a broad ecosystem make the difference (really trying to differentiate on services, it would seem) vs. just an IaaS platform

Splunk on stage - All core products run on AWS, Splunk cloud (they run it for you), Splunk Enterprise, etc.

What has changed over the last year? Customers are moving from just dev/test and peak apps and moving true production workloads en masse to AWS. Splunk can help with visibility between on-prem and AWS.

Mentions customers - Coca-Cola, Nike and their use cases. Often a POC on Amazon first and then moving production to AWS. Saved time and money using AWS.

Mention of FINRA - the stock trading security regulator - no more standing up hardware; they moved all applications and Splunk to AWS to focus on what matters, not management of infrastructure. Mention of multiple regions and APIs for scalability

(I notice almost all guests on stage mention that, must be in the speaker notes for everyone. AWS is hitting scalability, APIs, and services as differentiators)

Werner back on stage - Slide -> AWS is Secure, Adaptive, Resilient, and Global. talking about “pushing a button” to make infrastructure appear

The Application Extends the Platform - talking about the importance of APIs and extension of the platform via infrastructure as code, and fitting this into emerging application development models

(As an aside, the Splunk dude that just spoke sat down next to me… awkward)

Omnifone on stage - online music platform, talking about the music industry and its complications as an industry. They started with a 15 million platform and it didn’t hold up to the load. They couldn’t iterate fast enough. They had to start over, and started over on AWS. “AWS was the only choice” (Also noticing that as a common theme of the guests; they are all saying it).

They now have geographically scalable, redundant services across the globe on AWS. Building this platform has allowed the music industry to build what matters. They have delivered more audio/video faster than ever before.

Talking about high-res quality sound and its challenges (about 150 times the file size of typical mobile file delivery). How do you deliver those large files in one large uninterrupted stream? Talking about Pono (Neil Young’s company?) and what they are doing there

Werner back on stage - Broad Services drive the speed of development, talking about “agility as the Holy Grail” of application delivery. Increasing consumer choice is driving the market to a new model that needs to be agile and fast. Dev & Test is the Core to Agility

Says today the Dev & Test budgets of most CIOs are between 40% and 60%. How do you optimize that and make that portion of the budget move faster?

The Weather Company (The Weather Channel) on stage - talking about weather as a science and data platform. How do you create great services based on information you can’t control but that potentially affects both business and lives all around the world?

They have built a platform on AWS to feed others (Apple, Google, Yahoo, etc.) to move beyond cable. Also feed data to all major airplanes to help with traffic control. Provide data to local broadcasting companies all over the world. They want to be the “data warehouse” for all things weather in the world.

They didn’t start this way, had a traditional model of physical data centers with physical hardware. They had to change both the infrastructure as well as the culture. 

(I like they brought the human aspect into this, not just technology, so often overlooked)

They chose AWS for scale (scalability point hit again), as well as confidence in the services. The platform has provided close to 100% uptime, and weather forecasts are generated in less than a second by analyzing over 800 sources around the world. The platform allowed them to “go faster” and constantly improve the accuracy of forecasting over time.

Over 1 billion devices served from the platform between iOS 8, Android, and downloads of apps on Mac/PC

Werner back on stage - Development is changing to support agility

Pristine (Google Glass specific company with a focus on healthcare) on stage - They are using AWS and…. drumroll please…. Docker!! (You knew it was coming!)

Slide - Containers are the key to our growth; this allows them to develop once and run everywhere. Rollbacks are simple, etc.

Talking about how the combination of AMIs for the base image and the layering of containers on top is the “perfect match” for them and allows them to go as fast as possible and scale beyond anything else that is out there.

Werner back on stage - Why do developers love containers? Going into all the usual container value propositions. Talking about how containers do present some setup overhead challenges.

Announcement - Amazon EC2 Container Service - a deployment environment to make containers easy. (huge applause). All with an API, integrates with Docker repositories, also integrates with Mesos

Demo of deploying Docker containers into the system on stage now (I can’t see the screen well sadly). Instances (AMIs set up), register the cluster with the service. Name the Docker image that will be used, start running the task. Single instance, deploy and scale to 5 instances, deploy front end.

Scale up to 30 instances (different instance types as well)
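The register-a-task, run-it, scale-it flow from the demo boils down to a task definition document plus a count. A minimal sketch of what that payload looks like; the family name, image, and resource sizes are my own illustrative assumptions, not the values used on stage:

```python
# Sketch of the payload an ECS-style demo would register.
# Image name and resource sizes are illustrative assumptions.

def make_task_definition(family, image, cpu=256, memory=512):
    """Build an ECS-style task definition payload as a plain dict."""
    return {
        "family": family,
        "containerDefinitions": [
            {
                "name": family,
                "image": image,       # e.g. a Docker Hub image
                "cpu": cpu,           # CPU units (1024 = 1 vCPU)
                "memory": memory,     # hard memory limit in MiB
                "essential": True,
                "portMappings": [{"containerPort": 80, "hostPort": 80}],
            }
        ],
    }

task_def = make_task_definition("demo-frontend", "nginx:latest")
print(task_def["containerDefinitions"][0]["image"])
```

With an AWS SDK, a dict like this would be handed to the service's register-task-definition call, and the "scale to 5, then 30" step is just running the same task with a larger count.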

(Got a call.. had to step out… I’m sure it was awesome… sorry about that)

Docker CEO, Ben on stage - Where isn’t Ben these days?! Good for him and good for Docker….

Developers are content creators - Docker removes the “crap work (my words)” from development and allows developers to go faster.

5 steps to containers - 
1. isolation of process in an OS
2. good API’s to run anywhere
3. create an ecosystem (Docker Hub)
4. create a new container based app model
5. create a platform for managing it all

Talking about Gilt.com - joint AWS and Docker customer. Before Docker: 7 apps and hard to deploy. Now: 300 microservices and 100+ deploys per day

Just passed over 50 million downloads of Docker!

Werner back on stage - Simplification drives reliability and performance

What are the primitives of cloud in an execution environment?

talking about data, triggers, and actions of applications. A data change triggers an action to update other portions of data.

Why don’t we architect that way? Today you need to create a full, complex stack just to “run a function and modify data”

Announcement - AWS Lambda - an event-driven computing service for dynamic applications. You just write code; there is no underlying infrastructure (it’s always there somewhere, they are just taking it away so you don’t have to worry about it)

Basically state changes and events drive the system (new pricing model?) - write code without infrastructure.  - (Another PaaS without calling it a PaaS?)

Code only runs when needed - cost efficient

Really interesting concept - Talking about IoT (Internet of Things) and triggers as the new currency
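The "data change triggers an action" model reduces to a function shaped like this. A minimal sketch of a Python handler for an S3-put style event; the bucket/key fields follow the standard S3 event structure, and the processing step is a hypothetical placeholder:

```python
# A Lambda-style handler: no servers, just a function invoked per event.
# The processing step is a hypothetical placeholder for real work.

def lambda_handler(event, context):
    """Triggered when an object lands in S3; returns what was processed."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ... e.g. generate a thumbnail, update an index, fan out a message
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# Simulate the event the service would deliver on an S3 put:
event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                             "object": {"key": "cat.jpg"}}}]}
print(lambda_handler(event, None))
```

The point Werner is making is that everything outside this function (servers, scaling, scheduling) disappears from the developer's view.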

Netflix on stage - talking about microservices and Lambda; they can replace inefficient existing services with trigger-based services.

Encoding Media Files is an example - get file from studio, chunk it up, process it, ship out to CDN’s
Backup for Disaster Recovery - they can now do backups based on triggers and events vs. time
Security - when an instance is spun up, trigger security check to make sure it is configured correctly
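The encoding example is essentially a chunk-and-process pipeline: split the studio file, process each piece as its own triggered task, ship the results to the CDNs. A tiny sketch of the chunking step, with made-up sizes:

```python
# Split a media blob into fixed-size chunks; each chunk would become
# one trigger-driven processing task. Sizes here are made up.

def chunk(data: bytes, size: int):
    """Split data into size-byte chunks (last chunk may be shorter)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

pieces = chunk(b"x" * 10, 4)
print([len(p) for p in pieces])  # [4, 4, 2]
```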

Werner back on stage - Units of cost for Lambda: number of requests and execution time. There is a free tier for each customer each month; today it is available as a preview.

Announcement - New instance offering C4 (based on the Haswell processor), up to 36 vCPUs, EBS optimized by default and included in the price.

Announcement - New EBS - SSD-backed EBS up to 10,000 IOPS (up to 160 MB/s) and 20,000 IOPS (up to 320 MB/s)

Intel on stage now - talking about C4 instance… speeds… feeds… The processor is actually an AWS exclusive

My take: It would appear they have hit on a few key differentiators to move forward beyond IaaS: scalability (to differentiate from on-prem), APIs for developers (to differentiate from other public clouds), and services across the broad ecosystem. They want to be the developers’ platform of choice and seem to get that the only way to get to “next generation applications” is to enable the developers and start down the microservices and containers path.

Well played AWS… well played…

Overall, super impressed with this year vs. the keynotes of past years.

Wednesday, November 12, 2014

AWS Re:Invent - ARC307 - Infrastructure as Code Session

Live Blog of the AWS Re:Invent Infrastructure as Code Session (ARC307)

Packed house - This session is offered in one of the large ballrooms. At least 1,000 people in the session and this session on the live stream as well. David Winter & Alex Corley from AWS as well as Tom Wanielista from Simple.com presenting

David up first - his background is a very traditional datacenter hardware centric background. He had a project to build on AWS.

Started simple with manual spin-up of instances; it wasn’t fast enough with one person using a console. He needed to go faster. The API was the next step, so he built a bash script. His first steps…

Hired somebody else, who then wrote the same in Python. This was the beginning of using this as a “cookie cutter” repository for test/dev. Then one day something bad happened… (a security-related event)

Production went down… hard. (Security groups were removed by beta product they were testing), all networking went “deny all” in the security groups, locking everyone in the world out

Had to rebuild them all by hand… (ouch) How do you prevent this from ever happening again?

AWS CloudFormation became the basis for “Infrastructure as Code”. Too much configuration that was done by hand needed to be automated in order to recover quickly. As a side benefit, this also allows very fast iteration of new development cycles going forward.

Alex Corley is up - version control to wrap complex systems and provide a template for rollout. CloudFormation uses a model methodology to define the infrastructure. You create models in CloudFormation (CF from now on) as JSON structures

CF supports just about all AWS services today (security groups, compute offerings, network services, etc.)

Version control is built into CF. Store the intended state (next rev) in CF and do a code review before it is published. Can use many different repositories (GitHub for example)

Create a template, check it in, code review, deploy worldwide across AWS regions. All automation handled through CF.
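The model Alex describes is just JSON under version control. A minimal sketch of a CF-style template built as a Python dict; the resource names, AMI id, and properties are illustrative assumptions, not the demo's actual template:

```python
import json

# A minimal CloudFormation-style template as a Python dict.
# Resource names and properties are illustrative, not from the demo.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Web tier: one security group, one instance",
    "Resources": {
        "WebSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Allow HTTP",
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 80,
                     "ToPort": 80, "CidrIp": "0.0.0.0/0"}
                ],
            },
        },
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",  # placeholder AMI id
                "InstanceType": "t2.micro",
                "SecurityGroups": [{"Ref": "WebSecurityGroup"}],
            },
        },
    },
}

# This JSON body is what gets checked in, code reviewed, and deployed.
body = json.dumps(template, indent=2)
print(sorted(template["Resources"]))
```

The check-in/code-review/deploy loop operates on that JSON body, which is what makes the infrastructure auditable like any other code.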

Tom from Simple.com - (customer testimony) - Simple is a bank. SOA architecture on AWS from day one.

They started at the console just like most everyone. As they developed features and grew, this got out of control. They didn’t know who changed what and what happened.

And then along came PCI compliance… No way to audit and report on the current infrastructure. Had to start over from scratch.

Goals: Security / Insight / Growth / Speed - these were the 4 pillars of the new infrastructure.

They rebuilt using AWS CloudFormation, then wrote cloudbank in Python: middleware between Simple.com and CloudFormation from an operations standpoint. Everything is stored in GitHub. They modify cloudbank, it talks to CF. Jenkins cluster in the mix

cloudbank applies the AWS standards under the covers (security groups, network settings, etc.)

What are the benefits of this automation? They write code every day; they simply added the ability to spin up infrastructure and moved this into the code flow, greatly increasing the efficiency and agility of the organization. This also makes the infrastructure programmable. For example, if there is a change in PCI compliance, simply push out the change in code. The developers can now handle the infrastructure.
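Cloudbank itself wasn't shown, but the pattern Tom describes — middleware that expands a small app spec into CF resources and stamps on org standards before anything reaches AWS — can be sketched. Everything below (function names, the standard tags) is a hypothetical illustration, not Simple's actual code:

```python
# Hypothetical sketch of the cloudbank pattern: expand a small spec
# into CloudFormation-style resources, applying org standards
# (here, compliance tags) under the covers.

STANDARD_TAGS = [{"Key": "compliance", "Value": "pci"}]  # assumed standard

def expand(spec):
    """Turn {'name': ..., 'instances': n} into CF-style resources."""
    resources = {}
    for i in range(spec["instances"]):
        resources[f"{spec['name']}Host{i}"] = {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": spec.get("instance_type", "t2.micro"),
                "Tags": STANDARD_TAGS,  # standards applied automatically
            },
        }
    return {"Resources": resources}

stack = expand({"name": "Billing", "instances": 3})
print(len(stack["Resources"]))  # scaling = change the number, commit, deploy
```

This is why a PCI change becomes a one-line code change: the standard lives in one place and is applied to every stack the middleware emits.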

They have evolved from 20-200 people and are still using this method.

David back up - demonstration time (Alex doing demo)

Alex - demo application (web application) running on AWS. Cluster of 5 machines talking to a git repo. Made a change in the code to increase the size of a graphic, committed the change, refreshed, and it was fixed.

Supposed to be 5 machines, only one is talking. Modified the CloudFormation template to talk to 5 hosts, commit, refresh the app, now more instances talking to the front end.

Last issue: throughput is insufficient now. Double the infrastructure from 5 machines to 10 machines to get more bandwidth to the front end. This spins up 5 new AMIs, some custom configuration, and insertion into the cluster, all done with a commit

Application problem, security problem, infrastructure problem - all three fixed through the same process and change management model

Wrap Up:

Good for startups - Agile, developers ramp quickly

Good for Enterprises - Template driven, compliance oriented infrastructure

AWS Re:Invent Day 1 Keynote Live Blog

Place is PACKED. Over 13,000 here. I’m way in the back, can’t see the slides on the screen well but can see the big screens showing the current speaker. I should have been here earlier but needed coffee. Priorities…

TL;DR - See My take at the bottom

Jassy up first - over 1 million active customers, lots (and lots) of logo slides (public sector, Enterprise, SIs, etc.)

AWS Marketplace - huge growth, 2,000 offerings, 7 million downloads

Slide about Enterprise IT Vendors and how most large multi-billion “Enterprise IT Vendors” are all shrinking while AWS is growing (yes, they included themselves in that category)

Lydia Leong (Gartner) quote thrown up on the screen - he is really trying to embrace the Enterprise vs. just telling them they are doing it wrong, as in past years’ keynotes

Moves on to the “Old Way” of doing things, Enterprises spend millions for slow, inflexible infrastructure and software

AWS is the “The New Normal” - multiple data centers (mentioned fault tolerance), 11 regions, 28 availability zones - went on to mention all the features that are built into every region (backup, identity management, monitoring, analytics), a complete offering of services, all offered on demand and able to spin up as needed (This has to be the longest list of features I’ve ever heard, he has been going on for about 3 minutes and I’m not sure he has taken a breath)

Still going…. 

Now talking about the features in the service. Many others offer a basic service, AWS goes deep on most offerings (another list of offerings, he is going into compute and how they are differentiated i.e. GPU specific, small compute, large compute, etc.)

Still going on list of services… Jassy is the Energizer Bunny of feature lists

First customer is up - MLB (Major League Baseball) - CTO of MLB.com. Started from scratch, now a six-billion-dollar business for MLB. They built a PaaS they share with other providers (ESPN, etc.). Want to be on any screen at any time for events. StatCast is hosted on AWS, a new system to go really deep and apply big data and prediction to baseball stats and players.

How do they capture the data? Radar sampling that tracks the ball over 2,000 times a second; it is accurate enough to “see” the baseball’s rotation. 17 petabytes of data per season. AWS was the only one with scale and bursting capability (what do you do in the offseason when you don’t need it?). Keep adding to the data warehouse over time to provide historical stats.

How does it work - collect data locally, use Amazon Direct Connect to export into AWS. From there MLB’s real time PaaS delivers StatCast to devices

Example - Breakdown of a play during the World Series; shows how the runner started slow (because he thought it was easy) and then sped up at the end. He was out by .2 seconds. If he had run hard the whole time, he would have been safe by over a foot.

Jassy is back - talking about transformation to Cloud Native Applications. You don’t have the option to move slow anymore.

Second customer is up - CEO of a healthcare company (sorry, didn’t catch the name; Philips maybe) - going through a real-world customer use case: a patient who had cancer and how they determined this (took blood that indicated it, found the cancer, showed the patient how to adjust lifestyle and live with it vs. radiation treatment). This was real-time data and fitting a treatment to the customer vs. other traditional alternatives, using big data.

How do we turn a mountain of data into actionable items? This is where real-time data comes into play. They are adding a petabyte a month to the system right now (common theme here of scale and how no one else can scale like AWS). No one else can support these large amounts of data.

Jassy back - Slide - Is there hope for a new normal in the area of relational databases? Old-world DBs are expensive and locked in. Many Enterprises are looking to MySQL and Postgres as an alternative. The OSS DBs are hard right now…

(Announcement) - Amazon Aurora - Commercial Grade Database Engine - in development for 3 years, MySQL compatible but at 5x performance, same or better availability than Enterprise versions at 1/10 of the cost of the leading solutions in the market.

Product dude brought out for Aurora (didn’t catch his name) - Biggest Enterprise pain today is world class databases. They started with a blank slate and knew they wanted MySQL compatibility.

Compatibility with MySQL 5.6… 6 million inserts per minute, 30 million selects (I heard some folks around me say wow to that one, I guess that is a big deal), data automatically backed up to S3 and highly available, crash recovery in seconds, database cache survives restart (no warming). Most features available only in Enterprise-class offerings.

Offered at $0.29 per hour (audience clapped at that)
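Quick back-of-envelope on that price, assuming a single always-on instance at the announced $.29/hour (a ~30-day month for round numbers):

```python
# Rough monthly cost of one always-on Aurora instance at the quoted rate.
hourly = 0.29
monthly = hourly * 24 * 30   # ~30-day month
print(round(monthly, 2))     # ~209/month
```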

Jassy back on stage - Talking about software deployment now. Pushed 50 million deployments in the last 12 months using “Apollo” (codename for their internal project… I sense an announcement coming)

(Announcement) - AWS CodeDeploy - central monitoring and control, works with “virtually any” language and tool chain, available today, free to use. Performs rollbacks of code as well as commits.

Talking about CI/CD now. Develop, Build&Test, Deploy, Monitor & Analyze

(Announcement) - AWS CodePipeline - integrates with existing tools, used internally in Amazon

(Announcement) - AWS CodeCommit - code repository without size limits.

All exist together and work with external partners. (wonder who they will play nice with)

Now talking about compliance - they are now ISO 9001 compliant. They have been working with healthcare customers to achieve this level of certification.

Security up next - talking about encryption

(Announcement) - AWS Key Management Service - Encryption, IAM and policies all in one place (sorry for lack of details here, had to take a call)

OK, back…

Talking about Service Catalog (coming in 2015) - AWS Service Catalog, create a grouping of resources, create an offering, serve it out in a service catalog… They say Enterprises want this

(This *COULD* be interesting. I talked to Ent folks about this years ago and it never took off because it was too hard or costly to create the offerings and serve out the catalogs to multiple clients. If they make this easy to consume and usable, it could take off IMO. Enterprises want it but never really adopted it at scale. This was the original Enterprise vision of “cloud”, a portal of services)

Talking New Applications vs. Old Applications (here comes the Jassy we know and love… bring on the part where he tells everyone they are doing it wrong and need to do it the AWS way)

Dev/Test - Many Enterprises are using Dev/Test as a starting point for AWS. 
Mobile - The future of applications and architecture
Talking about companies migrating fully over to AWS. Feels like the days of virtualization (we want to be a 100% virtualized environment!). I doubt that will ever happen. Some workloads might go AWS…

CTO of Intuit on stage - They are moving all their applications to AWS as Intuit evolves into a majority-SaaS company. Over 8,000 employees, 3,000 engineers. Multi-billion-dollar online and mobile services. Had a datacenter lease come up and migrated over to AWS. 6x cost savings, 1/5 of the time for buildout, and developers were able to move faster. Over time this trend increased: starve the old, build new in AWS. Many acquisitions were built on AWS, so that made absorbing them into Intuit very easy.

Jassy back - Talking about Hybrid Infrastructure (not Hybrid Cloud according to AWS). Jassy talking about a lot of Enterprises that still have on-prem resources because they aren’t ready to move to cloud. Talking about all the Hybrid features (VPC, Direct Connect, vCenter Integration, Access Control, Directory Service).

CTO of Johnson & Johnson - 270 operating companies in 60 countries, 100,000+ employees, more stats… blah blah blah…

Thousands of Servers, Complex IT Ops - new strategy, less servers, automated IT, greater business efficiency

120 applications running in AWS now, plan to triple that in the next 12 months (they have to have THOUSANDS of apps, so I wonder what the percentage actually would be)

They want to move to Amazon Workspaces for Desktops

Jassy back - Slide - Partnering is the new normal (Announcement coming?)

Talking about culture of AWS - Customer focus comes first, AWS is pioneering (first to market), long term orientation

They will never call you at the end of a quarter to close a deal to make numbers (the difference between an OPEX subscription model vs. a CAPEX purchase model)

AWS as a trusted advisor, cost-optimized service and advice - over $350 million in cost reductions on behalf of customers

My take: Keynote felt very different from past years; the company has moved from announcing more offerings (look, new compute offerings!) to announcing services to expand the ecosystem. Makes sense as the growth has slowed and they need to pick it up. Felt like a VMworld keynote from 5-7 years ago: a company that is starting to branch out and may very well start eating its own ecosystem so it can continue to grow. Also thought it was weird they pre-announced a few things this year. Not sure if they didn’t get them out in time, but I’m pretty sure they haven't done that before. AWS has gone from the “stealth IT little guy” poking the Enterprise in the eye and telling them they are doing it wrong to embracing the idea that they need the Enterprise and now need to be nice to them. The fact that Jassy didn’t crap all over “Hybrid Infrastructure” and actually talked about it at the end helps prove this point.

I believe the Aurora and CI/CD announcements will move the needle and look really awesome. The security announcements were needed to fill out the Enterprise portfolio. The Service Catalog could be interesting when it releases.

Thursday, July 31, 2014

OpenStack Summit Session Voting - Please Vote!

Time to dust off the blog and beg some folks for votes on OpenStack Summit sessions...

First off, here are some great sessions I would love to see and I encourage you to vote for! There are so many submissions that picking a few is difficult:

Scott & Ken's great session on VMware & OpenStack: OpenStack for VMware Operators

Getting Started with OpenStack

OpenStack Performance Tuning

Multitenancy with Cinder: How Volume Types Enable It

Lastly, I have three sessions up for consideration, please vote if you are interested and I hope to see everyone in Paris!

Predictable Cinder Performance with SolidFire Storage

Building a Cloud Career in OpenStack

Ask the Experts: Challenges for OpenStack Storage

Thursday, March 13, 2014

Cisco Live: Lenny Kravitz & Imagine Dragons! What?!

This post has both a personal and a corporate angle. First the personal: I was recently selected as a Cisco Champion! A huge thanks to Cisco for allowing me access to the program; they have done an amazing job of incubating and developing a program to get their word out in new and creative ways. I threw the graphic label up on my site but I haven't really had the bandwidth to talk about it until now. Cisco has chosen a great community and it has been an awesome ride so far!



Ok, enough of that. The Cisco Champions got the scoop yesterday on some great upcoming announcements for Cisco Live! coming up May 18-22 in San Francisco. I'm hoping I can make it to the event but planning for the "day job" is still in the works. If you want to register, HURRY UP! The deadline for early registration expires tomorrow, March 15th.

What is there to look forward to at the event?

Rowan Trollope (better known as the person who tries to keep Peder Ulander in line) did a great post yesterday on the Next Generation of Cisco Collaboration Experiences. This is the biggest announcement of new products since TelePresence and I can't wait to hear more about the products in the near future. A few highlights of the new products to look forward to at Cisco Live!

·         Cisco Telepresence MX Series learn more
·         Cisco Telepresence SX10 Quick Set learn more
·         Cisco TelePresence Precision 60 Camera learn more
·         Cisco TelePresence SpeakerTrack 60 learn more
·         Cisco TelePresence SX80 Codec learn more
·         Cisco Business Edition 6000 Enhancements learn more
·         Cisco Business Edition 7000 learn more
·         Cisco Intelligent Proximity learn more

But who is playing the Customer Appreciation Event?!



Historically I haven't been a big fan of going to the customer appreciation shows (too many guys in too small a space) but there is NO WAY I would miss this one! Lenny Kravitz and Imagine Dragons playing at AT&T Park! Yup, you heard it here first. The event will be Wednesday night, May 21st. I can't wait! (Pro Tip: Either catch an early bus back to beat the lines or drink enough that you won't mind waiting on transportation back. Of course, Uber or a taxi is your friend as well!)


Sunday, February 9, 2014

Goldilocks & Supply Chains with VMTurbo

I'm fulfilling a New Year's Resolution to get back into blogging. Life has been crazy (but good crazy) and it's time to reestablish some old habits.

If you listen to the Cloudcast you know that Shmuel Kliger, CEO & Founder of VMTurbo was a guest back in December. I didn't know it then but that episode was the first step in a major education that I wanted to pass along to everyone.

Let's start with what you probably think about VMTurbo. You probably think they are a monitoring solution for virtualization, specifically VMware. If you thought this, you are not alone. I'll attempt to convince you that VMTurbo has a pretty unique value proposition and is properly aligned at the intersection of a bunch of upcoming data center operations trends (Software Defined Anything, Public and Private Cloud, etc.). Version 4.5 of their product was released a little over a week ago and here are a few items I observed in the last few months digging into the product.

VMTurbo Isn't Your Daddy's VMware Monitoring Tool

A major shift within the last 12-18 months has been the advance of multiple hypervisors and private cloud IaaS projects and products. The virtualization world isn't a one horse race anymore. Many other hypervisors are now "good enough" and the case can be made for certain workloads to run on non-vSphere environments. As you can see over on the new product page, VMTurbo works with all the major hypervisors and private/public cloud IaaS offerings. They cover vSphere, vCloud, Xen, Hyper-V, CloudStack (and Citrix CloudPlatform), OpenStack, Azure, and AWS. This is smart of them, really smart. I see workloads moving all the time to different environments (and sometimes moving back). Finding a tool to cover all possible infrastructure combinations is difficult currently.



What Do You Mean It Isn't Just a Monitoring Tool?

This is where VMTurbo gets really interesting for me. As mentioned on the podcast, I'm an old data center operations guy and it has always been a passion for me. I have been exploring not just WHAT VMTurbo does but HOW it does it as well. Yes, the product monitors complex systems but the WAY it does it is very different. This isn't just a bunch of agents running on machines and sending alerts back (or SNMP traps!) and then somebody gets an email or a page to take a corrective action. Just because you get an alert doesn't mean you know what corrective action needs to take place. What is the cause? Is there a problem downstream (a hot spot on a disk on a volume or LUN creating disk latency)? Monitoring systems often detect "black outs" (something is down) but don't do as well with "brown outs" (something isn't performing optimally) because most monitoring systems don't understand the connections from the lowest level of hardware all the way up to the application and potential performance impacts. Only by understanding how application resources are mapped to physical infrastructure can insight be gained into optimal performance of a system.

This is where VMTurbo comes into play. The product uses a Supply Chain Model to map every input and output of hardware and software in a system to understand potential impacts as well as potential improvements. Every product you consume has a supply chain. How does a product get from various raw materials into a finished offering that is consumed by you? Think about that for a second...

Take the computer or mobile device you are reading this on as an example. Every part, thousands of them, have to be made from raw materials, brought together, shipped, and offered to you as a product. You are the consumer. Now, think about an application or a workload as that consumer. All the underlying parts (disk, memory, compute, network) need to be combined and offered as services (hypervisor, virtual machines) that are consumed in various amounts by the application. Furthermore, each resource can serve as both an input and an output. Some will take resources, but will also serve resources to others.
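The supply-chain idea can be made concrete with a toy model: each entity buys resources from a provider below it and sells to a consumer above it, and walking the chain downward gives you the dependency mapping that root-cause analysis needs. This is my own illustrative sketch of the concept, not VMTurbo's implementation:

```python
# Toy supply-chain model: an app consumes from a VM, the VM consumes
# from a physical host. Illustrative only.

class Entity:
    def __init__(self, name, provider=None):
        self.name = name
        self.provider = provider  # who this entity buys resources from

    def supply_chain(self):
        """Return the chain from this consumer down to raw infrastructure."""
        chain, node = [], self
        while node is not None:
            chain.append(node.name)
            node = node.provider
        return chain

host = Entity("physical-host")
vm = Entity("vm-42", provider=host)
app = Entity("web-app", provider=vm)
print(app.supply_chain())  # app -> vm -> host
```

Once every component sits in a chain like this, a "brown out" at the app level can be traced down to the host-level resource actually causing it.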




By taking this approach, everything in the system becomes Data (with a capital D). Once everything in the system is Data, you can start to apply some universal concepts such as a Common Data Model and System Optimization through the Economic Scheduling Engine. I'm going to take each one of those in turn.

What is a Common Data Model and why should I care?

By taking a complex infrastructure and breaking it down into a Common Data Model (compute, storage, network, hypervisor, etc.) it becomes very easy to add new systems and components. Remember above when I stated that VMTurbo supports the various hypervisors as well as IaaS projects/products? At a very fundamental level all products break down in the same way (Common Data) and once broken down we can begin to understand the mapping between components. This mapping gives us greater insight into connections for root cause analysis as well as making additions of new components and software very easy because the initial mapping is already complete. The latest version of VMTurbo has added hardware from Cisco UCS as well as storage from NetApp. I'm sure this integration further down the stack will continue and will be a great value add for converged infrastructure products (FlexPod anyone?). Here is an example of a mapping in the interface:



Here is another way to understand this mapping. When I was at IBM supporting business partners about 10 years ago, virtualization was just starting to heat up. Part of the early days of this market was convincing customers with physical infrastructure to go virtual. The demand was there but the tools at the time were not. Because of this, my team would go in and analyze physical environments, break them down (using a common data model), carve up workloads, and perform a manual calculation of how much virtual infrastructure would be required to support the proposed environment. We would map out applications down to the basic compute, memory, storage, and networking requirements. This was a complex operation that took weeks and lots of manual calculations and Excel formulas to accomplish. VMTurbo does basically that same thing and does it automatically without human intervention! This could have saved me hundreds of hours back in the day!

The Common Data Model is about more than just analysis. VMTurbo is able to recommend and (if configured to do so) will actually remediate environments into an optimal operating state. That takes me to the next section.

What is System Optimization through the Economic Scheduling Engine and why should I care?

We've talked Supply Chains to death; let's talk about Goldilocks for a bit. Most people in our field don't know it, but they are always searching for a Goldilocks State of Operations. Our customers are looking for something that isn't too big, isn't too small, but just right. The problem is that our applications and workloads are often dynamic and changing, so finding the "just right" spot is hard because it is constantly shifting. Too few resources and application performance may suffer; too many resources and we are wasting money through over-provisioning.

This is where the whole "cloud computing" idea comes into play. Cloud computing can be boiled down to the concept of Dynamic IT: dynamic pools of scalable resources. As our application workload shifts and moves, our underlying IT infrastructure must shift and move to compensate. This is what we call a "perfect state" in an economic system. We are providing just enough resources to be consumed.


VMTurbo uses this model to constantly monitor resource demands and attempts to move and shift resources as needed. Think of it as VMware DRS for your entire infrastructure. The only way to do this is to map and understand the relationships of the infrastructure to the applications and how to make corrections as needed. VMTurbo attempts to provide a Goldilocks State of Operations for your entire infrastructure.
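A toy version of the economic idea (my own sketch, not VMTurbo's actual engine): price each host's resources by how contended they are, and a workload "shops" for a cheaper home, moving only when the move beats staying put:

```python
# Price rises sharply as a resource nears saturation, so contended
# hosts effectively price workloads off of themselves.
def price(utilization):
    return 1.0 / max(1.0 - utilization, 0.01)

hosts = {"host-a": 0.90, "host-b": 0.40}  # CPU utilization per host (invented)
vm_demand = 0.10                           # CPU share the VM consumes

current, target = "host-a", "host-b"
cost_stay = price(hosts[current])
cost_move = price(hosts[target] + vm_demand)  # utilization after the move

decision = "move" if cost_move < cost_stay else "stay"
print(decision)
```

The Goldilocks state falls out of this naturally: when supply and demand are balanced everywhere, no move is cheaper than staying, and the system goes quiet.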

If you are still with me, thank you! In conclusion, VMTurbo is a pretty unique product that I have been having a great time digging into for the last few months. Through the use of VMTurbo's Common Data Model as well as the Economic Scheduling Engine, they are able to provide a product that is well suited to tackle increasingly complex infrastructure interdependencies as well as ever-increasing and shifting application workloads. Go check out the site for more information.

Disclaimer: As noted, Shmuel Kliger was a guest on the Cloudcast podcast of which I am a co-founder. I also attended a pre-release briefing and product demonstration on VMTurbo 4.5. No compensation was given or expected and I'm writing this blog post because I think it is cool tech and wanted to help get the word out.

Image Credits: VMTurbo

Big thanks to M. Sean McGee for his Goldilocks UCS Blade Post a few years back. The title is an homage to that post.

Tuesday, August 20, 2013

Catching Up - What Have I Been Up To

What happened to the last 3 months?!  To say it has been a busy summer would be an understatement.  Hopefully more blogs are coming in September for some cool things I'm working on, but here is a recap of the last few months.


I've also been generating a lot of content over the summer, too much to list here.  Go check out my blog on Tech Target as well as the latest episodes of the Cloudcast and Mobilecast.  Thanks again for coming by!

Wednesday, May 22, 2013

Citrix Synergy Keynote Live Blog


Well, access has proved to be an issue (general wireless saturated, I have TWO MiFi's that both wouldn't work) so I'm writing this offline and will publish ASAP.  Usual Live Blog disclaimer, this is me typing as fast as possible, probably spelling and formatting errors, please forgive that.  Limited bandwidth so I'll add pictures a little later today as well.

  • Mark T (CEO is up) - introduction of Synergy 2013, packed crowd
  • Citrix Cloud Platform is up first, over 200+ production clouds, 40,000+ node scale, lots of references
  • Talks about CloudPlatform being based on Apache CloudStack, 35,000 community members, fastest top-level Apache project graduation in history, most contributions of any Apache project
  • ShareFile - On-premise storage option, private and public cloud data storage, you choose where your data is stored
  • ANNOUNCE: ShareFile StorageZone Connectors - application level connectors into the Enterprise
  • ANNOUNCE: Windows Azure support for ShareFile
  • MSFT update - 80% growth of XenDesktop on Hyper-V
  • A bunch of MSFT Windows 2012 and Windows 8 updates (too many and too fast to type)
  • Citrix Receiver for Windows 8 is out
  • Moving the Windows experience to a Mac is up next:
  • ANNOUNCE: Desktop Player for Mac - Run Virtual Desktops on the Mac, online, offline, encrypted, centrally controlled, tech preview coming next month
  • Cisco Partnership up next
  • XenDesktop on UCS is a large UCS use case (FlexPod as well)
  • Tighter Integration between Cisco and Citrix across the board in all product areas
  • NetScaler has taken off (MPX, VPX, SDX) as a replacement for Cisco ACE and joint interoperability and development coming in the future
  • Innovation Award: (videos shown of all the finalists, Miami Children's Hospital, USP - University of São Paulo, Essar) - Award goes to… USP for their use case of Cloud Platform and Cloud Portal!!  Very exciting to see our customer receive this great award.  We are very proud to partner with them to help them serve their customers, the students of the university!
  • Up Next: Going mobile
  • What is driving the industry? - Consumerization - Mobile devices and Bring Your Own Anything is taking over!
  • Generations - The next generation requires different access than traditional IT would allow
  • Disruptions - self explanatory
  • The Pace - Everything is faster and at a greater magnitude in scope
  • Paradigm shift - "Don't Own Stuff" - more agile, more flexible because CAPEX and "stuff" isn't holding you back
  • "Move, Add, & Change" - How to move faster, change quickly, add and remove quickly. Orgs need to tackle this
  • For example, 100,000 changes in an org once cost 75 million; now down to 25 million. Savings and efficiency
  • It is all about Mobile Workstyles going forward
  • Up first, Windows Desktops still prevail in the Enterprise (about 85-90% today) - It is still a Windows world
  • What would XenDesktop and XenApp look like in a mobile cloud era? - Project Avalon - 
  • ANNOUNCE: first release is called XenDesktop 7 - designed for simplicity and mobility
  • FlexCast - Windows Apps and Windows Desktops under one umbrella - FMA - Flexcast Management Architecture
  • 1 package to download - automated installation and deployment
  • HDX Insight - end to end monitoring of HDX traffic
  • no more workload provisioning, app-by-app publishing, windows app migration (all about simplification of the operations and building)
  • HDX Mobile - HD video on any device, even over 3G, 100% increase in WAN efficiency, native mobile functions (access, device GPS, sensors, cameras, etc)
  • HDX mobile SDK for Windows Apps - take a .NET app, turn it into a Windows "Mobile" app through XenDesktop and XenApp, develop once and it will adjust to the device
  • Demo Time - Brad Petterson up to demo XenDesktop 7
  • Apps and Desktop provisioning all in one using Studio - showing Director with information from NetScaler and network traffic in real time. Shows XenApp/XenDesktop traffic, goes all the way to the app level, also shows a larger IT Support view that allows better troubleshooting across an entire org, shows an ability to assess and act on the infrastructure
  • Now demo of iPad mini connecting with Receiver to a Windows 8 virtual desktop, showing off the Windows 8 experience on an iPad mini, very fluid, flash video is seamless, also showing off a full screen movie streaming over the iPad
  • The redesign of Windows Apps is pretty cool to me, makes the VDI on a mobile device potentially less painful. Seems to be a natural progression
  • Up Next - Cloud Enable the CPU, GPU, Network and Storage
  • Delivering "intense" apps that would normally not be a candidate for delivery
  • Jen-Hsun Huang - CEO & Co-Founder of NVIDIA is up to talk about this
  • The partnership has been around for a long time, since 2006
  • talking about the "good old days" and how some projects actually failed over the years because the "cloud" wasn't ready for these intense workloads
  • Demo Time - Adobe Photoshop running on an iPad - pulls up a picture, using the GPU in the "cloud" to manipulate the picture, showing very complex graphic manipulation in real time.
  • What about applications that have required the "big powerful workstations" until now because of the processing power required?
  • Talking about the design of the Boeing 787 and the databases on the back end (Data Gravity again!  Google it), which made development around the world difficult
  • Instead using remote workstations driven by GPU's and only move the pixels, not the data
  • Showing various examples of apps running in realtime, actually showing a 4k video resolution file and editing in real time.  Very cool
  • Now talking about how it happens on the back end. Virtualization of more than the CPU is required, we now need the GPU to be virtualized
  • New NVIDIA GPU's are designed with virtualization in mind, now integrated with virtualization
  • ANNOUNCE: virtual application running on a virtual desktop with a virtual NVIDIA GPU
  • Showing AutoCad, PLM (Manufacturing), vGPU remotely for the first time
  • Google Earth running on a virtual machine using a hand gesture technology (have to see it to explain it), Demo of hand gesture control of Google Earth in real time, really cool!!
  • It's called the NVIDIA GRID vGPU and is integrated into XenDesktop 7
  • OpenGL support, industry-first direct GPU
  • Up Next - XenApp 6.5
  • Announce of Feature Pack 2 with many new features (too many to type here)
  • June will see shipping for both XenApp and XenDesktop
  • The world of apps is moving beyond Windows Apps
  • What about IOS, Android, mobile data?
  • 3 big areas to mobile devices - devices + apps + data - need a strategy that takes all three into account
  • Even if you take care of all three areas, the Experience is the most important factor
  • How do you deliver a consumer-like mobile experience at work?
  • 3 things to do that - infrastructure to manage the mobile lifecycle + mobile apps & data + developer tools and app ecosystem
  • XenMobile - How to deliver this - Provision, security, apps, and data to mobile devices
  • Want seamless windows integration
  • Worx Enroll - self-service device registration is the first step (provisioning)
  • Worx Home - Mobile settings, support, more (operations)
  • Demo Time  - Showing of BYOD of an iPhone 5 using Worx Enroll and Worx Home
  • Enroll checks the device, checks if it is jailbroken (Boo!) and certifies the device
  • You then enroll and your "apps" are pushed to the device, Worx Home acts like a corporate app store, could be a desktop, an app, a mobile app, a file, etc.
  • XenMobile has GoToAssist built in for mobile device support in the Enterprise
  • Now showing the XenMobile admin UI, which shows all devices in the enterprise with a very nice breakdown of the devices
  • This allows you as an admin to wipe the "business" side of the device
  • Now showing a new Samsung S4, a Nokia with Windows Phone 8, and Android on a stick from Wyse
  • XenMobile is designed for the full mobile lifecycle
  • What about apps that talk to each other (copy, paste, etc)
  • You don't want salesforce data leaking out, evernote to contain confidential information for example, create a barrier between life and work
  • MDX Technology - Micro VPN and secure app containers, app specific lock and swipe, inter-app communication, conditional access policies
  • XenMobile now includes WorxMail (mail, calendar, contacts), WorxWeb, ShareFile as a "basis" for office communications
  • Demo Time - Showing email; have a sensitive email, can't open it or move it out of the app "container", but it does allow it on ShareFile
  • Showing another email with a link to the internal Intranet and it will fire up a micro-VPN and use WorxWeb to tunnel back
  • Showing an integration of ShareFile integrated with internal file shares on the intranet.  Allows you to connect back to corp data on ShareFile along with document editing on the iPad
  • SharePoint connector into ShareFile - Pulls SharePoint into ShareFile, allows checkout of documents and editing with many SharePoint tracking features in place.  Check back in with a Note as well
  • Podio - can now use the Chat API, (use GTM for real time interaction, Podio for team based actions), can also do video chat built into Podio with builtin one button, it uses HD Faces technology built into Podio
  • XenMobile has 3 versions - MDM Edition, App Edition, and Enterprise Edition
  • Available in June
  • Worx App SDK - Worx Enable any mobile app
  • Also a Worx "App store" for IT to enable apps in the Enterprise
  • (NetScaler & wrap up content here but had some other things come up so missed them, sorry about that)

Monday, May 6, 2013

April Recap

My trend of posting monthly recaps a few days late continues...  Sorry about that; hopefully the May recap will be on time.  I was traveling most of April, so the blogs this month tend to reflect that.

I'll start with the Cloudcast (.net) for the month of April.  We published a record number of episodes. A HUGE thanks to both Amy Lewis and Brian Katz for their amazing contributions!  Amy did a fantastic job as roving reporter and Brian's Mobilecast is really taking off!  As always, please send us any show feedback, we love to hear from you!


Next up is my new TechTarget Blog, you have subscribed with your latest Google Reader replacement, right??  I'm really having a good time writing over there.  This site (aarondelp.com) has always been more hands on and live blogs from events but the interest in the latest trends around Open Clouds and the operational aspects of cloud computing has been both great and humbling.  Thank you to everyone who has taken the time to read the articles and provide feedback!


The only blogging I was able to do on my site this month is Live Blogs from the AWS event.  Here are all of them.

Tuesday, April 30, 2013

AWS Summit Liveblog: RightScale - Hybrid IT Design

Usual Liveblog disclaimer: typing this as I go in the session, please excuse typos and formatting issues

Title: Hybrid IT - Steps to Building a Successful Model - presented by RightScale
Presenter: Brian Adler, Sr. Services Architect, RightScale & Ryan Geyer, Cloud Solutions Engineer

Brian is services, this won't be a product pitch ;)

RightScale is a CMP (Cloud Management Platform) - provides configuration management, an automation engine, as well as governance controls and does both public and on-premise clouds (I think the word private cloud must be on the naughty list at the show, all pitches do NOT use the dirty "p word")

RightScale allows management & automation across cloud resource pools

Basic overview of terminology and how far we have come, from IaaS to Cloud Computing today

On-Premise Key Considerations

1. Workload and Infrastructure Interaction - what are the resource needs? Does this make sense in the cloud and which size instance would be best?  Instance type is very important
2. Compliance - data may be contained on-prem for compliance
3. Latency - does the consumer require low latency for a good user experience
4. Cost - the lower the latency required, the more expensive it will be in the cloud
5. Cost - What is the CAPEX vs. OPEX and does it make sense

Use Cases

1. Self-Service IT Portal (The IT Vending Machine) - Users select from fixed menu, for example, pre-configured and isolated test/dev

Demo Time - Showing off an example of a portal using the RightScale APIs: basically push a big button, enter a few options, and let it spin up an environment. In this example they provisioned five servers and a PHP environment in a few minutes

2. Scalable Applications with Uncertain Demand - This is the typical web scale use case: fail or succeed very fast in the public cloud. "See if it sticks," and once it sticks, maybe pull it in house if cost reduction can be achieved when the application is at steady state

3. Disaster Recovery - Production is typically on-premise and the DR environment is in the cloud; this is often considered a "warm DR" scenario - real-time database replication from production to DR, with all other servers "down".  You then spin up the other servers (the DB is already up and running) and flip the DNS entries over when DR is up and running.  You can achieve a great RTO & RPO in this example.  You can also do this from one AWS region to another.
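The warm-DR flow can be sketched as a toy model; nothing here is a real AWS call, just the sequence of steps (boot the cold tiers, promote the replicating database, flip DNS last):

```python
# Toy model of the warm-DR scenario: only the DR database is "warm"
# (replicating); everything else stays down until failover.
site = {
    "prod": {"web": "up", "app": "up", "db": "up"},
    "dr":   {"web": "down", "app": "down", "db": "replicating"},
}
dns = {"www.example.com": "prod"}  # hypothetical record

def failover():
    for tier in ("web", "app"):
        site["dr"][tier] = "up"      # spin up the cold tiers
    site["dr"]["db"] = "up"          # promote the replica to primary
    dns["www.example.com"] = "dr"    # flip DNS only once everything is up

failover()
print(dns["www.example.com"], site["dr"])
```

Flipping DNS last is the key ordering detail: users should never be pointed at a DR site whose tiers aren't running yet.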

Demo Time - Showing RightScale Dashboard with a web app demo + DR.  Demo had 2 databases, master and slave replicating and in different regions (side discussions about WAN optimization and encryption here as well), Production in the example was in US-East AWS and DR was US-West AWS.  The front end of the app was down in West.  When you launch the West DR site, it will go and configure everything and automated as part of the server template.  All DR happens just by turning up the front end in West

Design Considerations

Location of Physical Hardware - again speed vs. latency vs. location

Availability and Redundancy Configuration - This can be easy to hard depending on your needs

Workloads, Workloads, Workloads - Does the application require HA of the infrastructure? Will it tolerate an interruption? Can it go down?  Will users be impacted?

Hardware Considerations - Do you need specialty? commodity?

(Sorry, he had others listed, I zoned out for a slide or two..)

On to Hybrid IT - Most customers start out wanting "cloud bursting" but most often an application is used in one location or the other.  Check out the slide for the reasons.

Common practice is that a workload is all on-premise or all public. Bursting isn't a common use case.  If they do use bursting, they set up a VPC between private and public to maintain a connection.

Demo Time - What would a hybrid bursting scenario look like in the RightScale dashboard?  The customer has a local cloud that is VPC connected to AWS.  Two load balancers: one is private, one is in AWS.  They are using Apache running on top of a virtual machine to maintain compatibility between private and public.  DNS is using Route 53 (AWS DNS).  RightScale uses the concept of an Array.  As RightScale monitors performance, additional instances are fired up and "bursted" (scaled out) to AWS above and beyond the local already-running resources.
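The overflow logic behind bursting can be sketched like this (capacities invented; RightScale's actual Array behavior is more sophisticated): fill local private capacity first, and only the overflow goes to public cloud.

```python
# Sketch of hybrid bursting: local (private) capacity is consumed
# first, and demand above it "bursts" to public cloud instances.
def place_instances(required, local_capacity):
    local = min(required, local_capacity)
    burst = max(required - local_capacity, 0)
    return {"local": local, "cloud": burst}

print(place_instances(4, 10))   # everything fits on-premise
print(place_instances(14, 10))  # the overflow bursts to AWS
```

This also shows why the VPC link matters: the bursted instances have to behave like members of the same pool as the local ones.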

You do not need the same LBs on the front end like the example above.  For example, it could be a local CloudStack/OpenStack environment with a hardware firewall in front, but also include AWS and AWS ELB in the rules as well

Take Away - It is very possible to use both public and private and there isn't a need for a "one size fits all approach"

Great session, probably the best session of the day so far for me.




AWS Summit Liveblog: Cloud Backup and DR

Usual Liveblog Disclaimer: This is typed as fast as I can; the blog may contain typing and formatting errors, sorry about that

Session: Technical Lessons on how to do Backup and Disaster Recovery in the Cloud (whew, long title)

Presenter: Simone Brunozzi, Technology Evangelist

Simone presented in the morning keynote on the Enterprise demo, good presenter

3 parts = HA -> Backup -> Disaster Recovery

HA = Keeping Services Alive

Backup = Process of keeping a copy

DR = Recover using a backup

(Simone is using great examples involving churches and monasteries, but they're too long to type out here.)

5 Concepts of DR

1. My backup should be accessible - AWS uses APIs, Direct Connect, the customer owns the data, redundancy is built in, AWS has import/export capabilities

AWS Storage Gateway as an example, using a gateway cache volume on-premise that will replicate to a volume in AWS public cloud, S3, snapshots, etc.  Can be a GW-cached or GW-stored (one is a cache, the other is a full offline copy). Secure tunnel for transport over AWS Direct Connect or Internet

2. My backup should be able to scale - "Infinite scale" with S3 and Glacier, scale to multiple regions, seamless, no need to provision, cost tiers (cheaper options and at scale are available)

3. My backup should be safe - SSL Endpoints, signed API calls, stored encrypted files, server-side encryption, durability: multiple copies across different data centers, local/cloud with AWS Storage Gateway

4. My backup should work with my DR policy (I don't want to wait 10 years to recover) - easy to integrate within AWS or hybrid, AWS Storage Gateway: run services on Amazon EC2 for DR, clear costs, reduced costs. You decide the redundancy/availability in relation to costs.

5. Someone should care about it - Need clear ownership, permission can be set in IAM with roles, monitor logs

Now a customer story:

Shaw Media - Canadian media company. Before AWS: multiple datacenters, lots of equipment, downtime, different technologies across datacenters. They were told to change everything and become more agile and cost effective in the next 9 months to better serve the business

Solved the issue with AWS: fast deployment of servers, network rules, and ELB on AWS; first site in only 4 weeks; after that, a full migration of 29 sites from a physical DC in 9 months - This was Phase One (the main websites)

Phase Two - Other web services migration was next (check out the picture for the details), impressive stats.  Typical web servers, apps servers, database servers, etc.


Lessons Learned - went too fast, didn't catch it... damnit

DR - Learn from your outages (test your policy on a regular basis and refine the document)

(Sorry, he's going too fast to type or even take pictures of the slides.... Really wish he would have gone slower in this section, the content was really good grrrrrrr)

Lessons to learn from DR

1. You NEED a DR plan in place - how will you recover?  Can your business survive without it?  For AWS, across Availability Zones (AZ's) or App DR with Standby (see pictures).  The second option is cheaper to implement but will take a little longer to recover from.

 

Perform a business analysis of RTO & RPO (if you don't know what those are, Google them, you need to know what they are).  In a nutshell: RTO is how long it takes to get it back; RPO is how much data I can lose.  This is the typical cost vs. performance trade-off.  Take the various AWS services as an example:
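As a quick sanity check on the RPO/RTO idea (my own example, not from the session): your backup interval bounds your worst-case data loss, so it can't exceed the RPO, and your restore time has to fit inside the RTO.

```python
# Worst case, a failure happens just before the next backup runs,
# so the backup interval is the most data you can lose.
def meets_rpo(backup_interval_hours, rpo_hours):
    return backup_interval_hours <= rpo_hours

def meets_rto(restore_hours, rto_hours):
    return restore_hours <= rto_hours

print(meets_rpo(backup_interval_hours=24, rpo_hours=4))  # nightly backup, 4h RPO: fails
print(meets_rpo(backup_interval_hours=1, rpo_hours=4))   # hourly snapshot: ok
```

The cost trade-off follows directly: tighter RPO/RTO numbers mean more frequent backups and warmer standby infrastructure, both of which cost more.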


2. Test your DR - Many may say Duh! to this one but I'm always surprised how little customers actually do this.  The ability to spin up capacity just for DR testing helps to minimize cost and the ability to not have a DR site to manage is pretty cool. Data Transfer speeds (Data Gravity) could be an issue in this kind of scenario

3. Reducing Costs - Took a screenshot, it was easier


Overall - great presentation although I wish he would have spent more time on the customer slides as there was some good technical content there...




AWS Summit Liveblog: Introducing AWS OpsWorks

Usual liveblog disclaimer, this could be messy, please excuse typos, sorry for that.

Chris Barclay, Product manager for AWS OpsWorks is presenting

Application Management Challenges - Reliability and Scalability are important; operations tasks are typically: Provision, Deploy, etc.

"Once Upon a Time..."  - We took the time to develop everything by hand (homemade bread)

Today we need to automate to go faster (cranking out automation in a factory like, mass produced way)

In today's infrastructure, everything is considered code, including the configuration of the "parts"; sounds much like a recent Cloudcast we did...

AWS OpsWorks is a tool to tackle this challenge, very reliable and repeatable and integrated with AWS, at no additional cost

Why use OpsWorks?
Simple, Productive, Flexible, Powerful, Secure

Common complaint was there are a lot of AWS "building blocks" but many don't want to stitch them together, AWS at times can be complex because of large number of services offered

Chris turned the presentation over to another person (didn't catch the name) at DriveDev, a DevOps consulting group focused on F500 companies and startups

He talked about a typical "old school" application development that went poorly. They were able to use built in OpsWorks recipes with the addition of Chef Cookbooks on top of it. Took customer and migrated them off private and into public with OpsWorks in a short amount of time.  Basically, they were a success...

How are customers using OpsWorks today?

From OS to application using OpsWorks; from OS to your code using Elastic Beanstalk; from the OS up, automate everything with Chef or another tool

Takeaway - How much automation you need, and at what level, determines which tool will be best.


Demo Time...

Talking about Chef and how OpsWorks uses it

The concept of Lifecycle events, based on this a recipe is triggered
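A minimal sketch of the lifecycle-event idea: each event in an instance's life triggers a set of recipes. The recipe names below are made up for illustration, not OpsWorks' actual built-ins.

```python
# Hypothetical mapping of lifecycle events to the recipes they trigger.
lifecycle_recipes = {
    "setup":     ["install_packages", "configure_php"],
    "configure": ["update_load_balancer_members"],
    "deploy":    ["pull_from_git", "restart_app_server"],
    "shutdown":  ["drain_connections"],
}

def on_event(event):
    """Run every recipe registered for this lifecycle event, in order."""
    for recipe in lifecycle_recipes.get(event, []):
        print(f"running recipe: {recipe}")

on_event("deploy")
```

The appeal is that you never invoke recipes by hand; the platform fires the event (an instance boots, an app deploys) and the right automation runs.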

 

Showing integration with github, keeps source and cookbooks out on git

Chris created a stack, a PHP app server layer with MySQL on top, then added instances and started them up (could change to multiple AZs for HA at creation)

After this, there are built-in Chef recipes that can be used; you can also add your own if you need additional functionality, and can also add additional EBS volumes if needed, elastic IPs, IAM instance profiles, etc.

Talked about a time-based instance - an instance that only exists during certain times of day - and also threshold instances that can be fired up as needed (scaling of an app server based on memory, CPU, network, etc.)
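Those two scaling styles can be sketched as simple rules (the schedule and thresholds are invented for illustration):

```python
# Time-based: the instance only runs inside a daily window.
def time_based_running(hour, schedule=(8, 18)):
    start, end = schedule
    return start <= hour < end

# Threshold (load-based): add an instance above the high-water mark,
# remove one below the low-water mark, otherwise hold steady.
def load_based_count(cpu_percent, up_at=80, down_at=30, current=2):
    if cpu_percent > up_at:
        return current + 1   # scale out
    if cpu_percent < down_at and current > 1:
        return current - 1   # scale in
    return current

print(time_based_running(9), time_based_running(22))  # True False
print(load_based_count(90), load_based_count(20))     # 3 1
```

The gap between the up and down thresholds is deliberate: it keeps the fleet from flapping when load hovers near a single cutoff.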

Added the app from git onto the stack that was built

Chris went from here into deep level git items that were above me (I admit I'm not the target audience here).  The take away, he made a change, committed the change, performed a deployment, looked very easy

Now on to Permissions - talking about the various permission options

What's next?  More integrations with AWS resources (i.e. ELB features) - Deeper VPC, more built-in layers (go vote on their forums, they will prioritize by public opinion)

Summary: OpsWorks for productivity, control, reliability