Man, this has been one of those weeks where there’s so much out there to comment on that it’s hard to keep up. Rather than try to put together a full post about each item, I’ll point out the stuff I’ve read and try to put some context on it from a service provider perspective. On the enterprise and vendor side there’s such a vibrant and useful blogging community, so I figure I’ll use some of the news out there and we can discuss how it looks from the service provider side of the fence.
Yes, this really does change everything…
Every business sector has its holy grail. Some grails are common across multiple sectors, some are specific to only a few, but they are the driving forces behind innovation in that space. Some of these goals are part of the journey, while others are destinations in and of themselves. Virtualization, in my opinion, is part of the journey, as are “iterative” technologies like de-dupe, thin provisioning, etc… It’s not that they aren’t incredible accomplishments or that they don’t provide substantial value, but they are mile markers along the highway, not the exit you are trying to get to.
From a Service Provider standpoint, the next exit is the ability to disconnect workloads from the physical underpinnings that support them, making geographic diversity a core part of an enterprise cloud platform. Sure, we can kind of do that now, but to say the technology involved is in its earliest stages would be perfectly accurate. Taking old technology and getting it to do new things isn’t innovation. The technology that Chuck references is not the announcement of that goal, but it’s a hell of a milestone, and the level of genuinely new innovation is significant.
“But solving this fundamental technology challenge of distributed cache coherence can enable an entirely new model for storage: global federation. The clear potential exists for a global dynamic pool of both storage resources as well as the applications that use them.
Create a consistent global view of storage and cache state, and all of the sudden we can consider multiple writers to the same information at serious performance levels. Do this at a low enough level of abstraction (e.g. block devices), and it becomes generically applicable to just about any enterprise use case you’d care to consider.
Yes, it’s rocket science.” — Chuck Hollis
I can’t wait for EMC World, and I’m hoping to get to meet Chuck. While his posts have always been informative, this one was an eye opener for me. Anyone at an executive level who is willing to say out loud and publicly that a conservative “good enough” viewpoint is career limiting and that “Go big or go home” is the only attitude that matters is someone I’d like to buy a beer for.
Chad Sakac has another good post up, mostly about VDI. With the release of View 4 and with the continued success of our Enterprise Cloud platform, we continue to see people asking for hosted VDI in the same way they have been consuming hosted VMs. This post and Chad’s earlier one on the same topic are the first real salvos in the battle for hosted VDI, since the I/O profiles, use cases and real-world numbers are just starting to filter into the discussion. The reference design on the storage side is proving to be the first hurdle for any VDI deployment (at least those with some scope). Once the enterprise reference is there, much like we’ve seen with the vBlock, we’ll have to throw it in a service provider environment and see what happens. What works one way with a single enterprise VDI profile is going to be a whole different animal when you have dozens or hundreds of those profiles contending for the same resources. Man, I love this job!
Finally, speaking of the vBlock, I think the heavy lifting is done. My position paper has been submitted internally, and we’ll see what happens. I’m optimistic, but I know how things work. I’d say the odds for us right now are probably 70/30 in favor of getting the green light to move ahead.
The vendors involved, especially Cisco, keep trying to push up the number of VMs that the compute nodes can support, while the EMC team keeps trying to be the voice of reason on the storage density side. I love my sales teams, I really do, but having to explain that in an N+2 cluster you are going to have two whole blades doing nothing gets exhausting. As the numbers kept climbing, I decided to see what the theoretical maximums were at each level of the technology. Turns out that right now VMware is the limit, with a single DRS/HA-enabled cluster only supporting 32 nodes and a total of 1,280 VMs per cluster. Sure, we could create multiple clusters, but as Duncan Epping points out here and here, you increase the cost of the clusters ((N+x)*2) and you limit the effectiveness of DRS. Either way, in a vBlock you still end up limiting the number of VMs you can support, and that’s probably a good thing. Understanding the limitations and business value up front is always a good thing.
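To make the N+2 argument concrete, here’s a minimal back-of-the-envelope sketch of the sizing math. The 32-host and 1,280-VM ceilings come straight from the cluster limits discussed above; the per-host VM density is a hypothetical input you’d get from your own workload profiling, not a number from any vendor spec.

```python
def usable_vm_capacity(total_hosts, failover_hosts, vms_per_host,
                       max_hosts=32, max_vms_per_cluster=1280):
    """Rough VM capacity of one HA/DRS cluster that reserves
    `failover_hosts` worth of blades as N+x spare capacity."""
    hosts = min(total_hosts, max_hosts)          # cluster node ceiling
    active_hosts = hosts - failover_hosts        # blades doing real work
    raw_capacity = active_hosts * vms_per_host
    return min(raw_capacity, max_vms_per_cluster)  # per-cluster VM ceiling

# A full 32-host cluster at N+2, assuming 40 VMs per host:
print(usable_vm_capacity(32, 2, 40))  # 1200 -- the two spare blades cost you 80 VMs
```

The point the sales teams miss is right there in the subtraction: every VM-per-blade number they quote has to be discounted by the spare blades before it means anything.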
That’s all for now. Have a great weekend everybody!