Mainframe tape is a funny thing. Or at least it can look funny to a person who doesn't work with mainframes.
Funny because mainframe tape does a bunch of things. In the open systems world, life is simple by comparison. Tape does (or did!) basically one thing: backup. Yes, you could argue that it was used for archiving too, but that argument largely depends on the notion that backup and archive tend to be the same thing for open systems folks. Which isn't necessarily a good thing, but it is certainly the case more often than not.
In the mainframe world, however, tape does at least four things: backup, archive, batch processing, and migration (via DFHSM). Now some of these, it could be argued, are sort of suited to tape. And some are not. In fact, as time has moved on, the value of tape to each of these has degraded.
And as innovation has occurred, that value has degraded even further. Because mainframe tape is very much like open system tape in one key respect: it isn't very much fun. It is unreliable (relative to other IT systems), it is insecure, it is slow, and it is an operational nightmare. And on top of it all, it makes disaster recovery planning, testing, and execution really really difficult.
Now, it has persisted longer in the mainframe world, for two reasons. First, there has been relatively little innovation, in the sense of a dramatic change that completely reshaped the technology of tape and how people think about it. Yes, VTS systems came along. And no, they weren't all that interesting or radical: they were largely a relatively small disk cache in front of a whole lot of physical tape. Second, I don't think it is any secret that mainframe people are a little more resistant to change than open systems people (and when you run the world's most stable platform, why wouldn't you be?).
So the status quo has persisted for a while. However, several years ago now EMC introduced a new product called the DLm, or Disk Library for mainframe. And the DLm was different from anything we had seen before, because it didn't use tape. It wasn't just a cache for physical tape. Yes, you could still support tape if you wanted to, but the DLm didn't use it, need it, or require it in any way.
And today, we are updating the product again. The latest, third-generation release is the DLm6000. Just like previous versions, the DLm6000 doesn't use any physical tape. It is a disk-based replacement for tape, and it uses both primary disk and deduplicated disk. This is where the DLm6000 takes its biggest step forward from the previous generation, by integrating deduplication functionality: not only do you not need tape, you don't need nearly as much disk either.
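The capacity win from deduplication comes from a simple idea: when the same block of data shows up again (as it constantly does in backup workloads), store it once and keep a reference the second time. Here is a minimal Python sketch of content-hash deduplication to illustrate the principle; it is an assumption-laden toy, not how the DLm6000 actually implements it.

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once, keyed by its content hash.

    Returns (store, refs): the physical store of unique chunks,
    and the logical sequence of references that reconstructs the data.
    """
    store = {}   # digest -> chunk bytes (physical storage)
    refs = []    # logical stream, as a list of digests
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # keep only the first copy
        refs.append(digest)
    return store, refs

# Nightly backups write mostly unchanged data, so duplicates dominate.
data = [b"payroll-records"] * 9 + [b"payroll-records-v2"]
store, refs = dedupe(data)
# 10 logical chunks arrive, but only 2 unique chunks are stored.
```

Real systems chunk at finer (often variable) granularity and index hashes on disk, but the ratio of logical data to unique data is exactly where the "not nearly as much disk" claim comes from.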
The mix of the two will depend on your requirements, but by offering a mix, we can offer both performance and capacity. Performance that is more than twice that of our competitors. And capacity of up to 5.7 PB per system. All of which can be managed from the z/OS console, meaning that determining whether data ultimately gets deduplicated or not becomes trivially easy. As does determining which data gets replicated.
So the DLm offers a unique platform: a single virtual tape system that can fulfill all of the requirements that tape has traditionally met for mainframe users. It can be a target for data used in batch processing, it can be a target for data migrations performed by DFHSM, it can be an archive platform, and it can be a target for backups.
One platform, that does it all. And doesn't rely on tape.
On top of that, the DLm is, I believe, the only mainframe tape virtualization product today that has no single point of failure, with no metadata that can be "lost" in the event of a component failure. Which helps contribute to another piece of functionality that is near and dear to my heart: the ability to do disruptive/destructive disaster recovery testing without impacting primary data functions, or replication, in any way. (This type of functionality is also found on the Data Domain platform, and in my opinion it is hugely important. If you can't test your DR plan without disrupting data replication, that is simply not acceptable. Having to break replication, resynchronize systems, and then wait for replication to catch up after every DR test is not a process you should accept from a modern tape virtualization or disk backup target.)
And in the spirit of "just one more thing," the last major improvement the DLm brings is non-disruptive code upgrades.