Thanks to Mr. Chuck Hollis for the inspiration on this one. If you haven't already, you can check out Chuck's well-considered and wide-ranging thoughts here.
In my case, the acronym NGB refers to Next Generation Backup. Hrmm. Not very sexy, is it? Well, it is better than RAT. (Which is a real acronym for something I can't talk about yet. I kid you not. I hope marketing comes to their senses on that one!)
Anyway, NGB is about what is going on in backup. It is about change. It is about being willing to question everything and rethink it all. It is about new beginnings. It is about making backup better.
Just like Chuck, I think it might go either way: revolution or evolution. Either way, I think that we (that is, EMC, for those of you who haven't read the disclaimer!) have you covered. But I don't want this to be a promo or an "aren't we great!" speech. If you want one of those, call up your favorite sales rep. Just don't blame me if you regret it later! So let's stick to some big-picture thinking, shall we?
It seems pretty clear that there are some genuinely revolutionary technologies available today in the world of backup. Most involve deduplication at some stage of the game. But let's focus on the two major categories of deduplication products: source and target.
Source deduplication happens at the source of the backup: the backup client. Source deduplication is like extra-strength deduplication because it comes with the added benefit of saving you network bandwidth. Not only can you save 95% or more of the storage required to keep all those backups, you can often save 99.5% or more of the bandwidth required to transmit a full backup.
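The mechanics behind that bandwidth number can be sketched in a few lines. This is a toy illustration, not any particular product's protocol, and all the names here are hypothetical: the client splits the data into chunks, hashes each chunk, asks the target which hashes it has never seen, and ships only those. (Real implementations use variable-size chunking and far more robust bookkeeping.)

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity; real products use variable-size chunking

def chunk(data: bytes):
    """Split data into fixed-size chunks."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

class DedupServer:
    """Stands in for the backup target: stores chunks keyed by their hash."""
    def __init__(self):
        self.store = {}

    def missing(self, hashes):
        # Tell the client which chunk hashes we have never seen before.
        return [h for h in hashes if h not in self.store]

    def put(self, h, data):
        self.store[h] = data

def backup(server, data: bytes) -> int:
    """Back up `data` to `server`; return bytes actually sent over the 'wire'."""
    chunks = chunk(data)
    hashes = [hashlib.sha256(c).hexdigest() for c in chunks]
    needed = set(server.missing(hashes))
    sent = 0
    for h, c in zip(hashes, chunks):
        if h in needed:
            server.put(h, c)
            sent += len(c)
            needed.discard(h)  # send each unique chunk only once
    return sent

server = DedupServer()
full = bytes(1_000_000)       # a highly redundant 1 MB "full backup" (all zeros)
first = backup(server, full)  # first full: only the unique chunks travel
second = backup(server, full) # repeat full: the server already has everything
```

Run the backup twice and the second full moves zero bytes over the wire, which is exactly why repeated fulls of mostly-unchanged data shrink so dramatically at the source.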
It turns out that this matters if you are backing up remote offices, the data on the laptops of a mobile workforce, personal data, and similar things. Does it matter a lot? I think so.
(As it turns out, it matters an awful lot for VMware too. But we will save that line of thinking for another day.)
There are hundreds of millions of mobile workers. There is more data outside the data center than in it, and it is almost axiomatic that bandwidth is constrained for all those bytes.
All of this just cries out for deduplication. Save yourself a ton of money, grief, lost tapes, inefficient processes, and all the other headaches associated with backing that data up, and deduplicate it at the source!
But we can push the envelope further here: if you don't like the notion of owning all the infrastructure necessary to host those deduplicated backups, don't.
Buy backup as a service. If you haven't already, head over to mozy.com to see what I mean. Mozy does for backups what Salesforce.com did for CRM. And it does it faster, and cheaper, than most organizations can do it for internally.
How does suddenly not having to care about backing up 50% of your data sound?
Pretty good, I bet.
More accurately, it probably means you can now back up the 50% of the data in your enterprise that you had no good way of backing up before.
Make no mistake, this is revolutionary: with source deduplication, used in the right place at the right time, we can probably "fix" backup for 50% of the organization by entirely dispensing with traditional backup in favor of deduplication at the source.
No more tape. No more WAN clogging applications. Just simplicity.
And a subscription fee.
And since we are rethinking the world of backup, what about the data center?
We have talked about virtual tape (a lot) already. And I think we can see how virtual tape and disk are likely where operational recovery happens for a lot of us. Now take your screaming fast VTL, and deduplicate it.
How about taking 1 PB of tape and reducing it to a mere 50 TB of disk? How about taking 10,000 cartridges and making them disappear? And instead of moving 20 TB of tape every day to some off-site vault that just might burn down, why don't we move 2 TB to an alternate facility? The difference here is that one approach is not very secure, and one is. Moving 20 TB of data over a WAN link is just not affordable, unless you happen to own a few strands of dark fibre (and yes, there are a lucky few who fall into that category!). Moving 2 TB is well within financial reach for the masses.
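To put rough numbers on that WAN point, here is a back-of-the-envelope sketch (my assumptions, not figures from any vendor: decimal terabytes, and a 24-hour window to replicate each day's volume):

```python
def required_mbps(tb_per_day: float) -> float:
    """Sustained link speed in megabits/s needed to move a daily volume in 24 hours."""
    bits = tb_per_day * 1e12 * 8   # decimal TB -> bits
    return bits / 86_400 / 1e6     # divide by seconds per day, convert bits -> megabits

before = required_mbps(20)  # roughly 1.85 Gbps sustained: dark-fibre territory
after = required_mbps(2)    # roughly 185 Mbps: an ordinary, affordable WAN link
```

A 10x reduction in daily volume drops the sustained link requirement from multi-gigabit to something many organizations already have, which is the whole argument in two lines of arithmetic.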
And now our revolutionary future starts to shape up.
We see deduplication at the source for remote data, and the mobile data of the mobile work force. We see deduplication at the core, with replication of that data being cheap, easy, and secure relative to the alternatives. We see letting somebody else take on the headache of backup where that makes sense, and we see owning it all in the data center.
Do you want to own it all? Fine.
Do you want to own a piece of it? OK.
Do you want to mix and match? No problem.
Complement it all with tape where that makes sense? That is groovy too.
And ultimately that is why the future can be evolutionary too. Because if you want to pick the problems off one by one, and retain the existing processes and procedures you can do that too.
Remote backup a pain? Fix it. That doesn't mean you have to throw out that 3494 or 9310. (It's comforting, isn't it? 10,000 lbs and 500 sq. ft. of backup goodness.)
But if you want to throw out that tired old Powderhorn, well, do you ever have a lot of choices that you didn't have three years ago. From high-speed VTL, to VTLs that offer deduplication, to deep archiving with CAS, there are plenty of options other than the same old same old.
VMware backup troubling you? Lots of great new choices there too. Choices that don't mean storing petabytes of vmdk files. Thank goodness.
And that is the best thing of all: evolution or revolution? You decide. You tell me.