Q

Pros and cons of DASD options

I understand that when running Linux under VM, there are several DASD options -- all SAN, all OS/390 or a mix. What are the pros and cons of each? Are you better able to manage capacity and performance if you make all of the DASD VM minidisks as opposed to SAN? If so, which products produce the required information?

The major advantages of traditional ECKD/FBA disks are:
  1. Easy manageability using existing VM tools (backup, cloning, FlashCopy, etc.) -- ECKD and FBA disk storage types are well supported under VM and are well integrated into the tooling supplied with it.

  2. Multipathing -- Multiple connections between the storage unit and the host (multipathing) are handled transparently, and this works particularly well with FICON and FICON Express channels.

  3. Unified DR -- If you're sharing your processor with another IBM operating system such as z/OS, putting your Linux systems on disk technology that z/OS understands lets you leverage the recovery tools of that environment, giving you a faster time to recover if there is a disaster. Taking full-pack dumps of your shut-down (important caveat!) Linux systems with your z/OS backup tools means that, in an emergency, you can restore your Linux environment at the same time you restore the z/OS environment.

    Side comment: Related to this, you may think about creating a one-pack VM system and using that for your base restore system. You'd be surprised how much parallel restore traffic you can drive if you do.

  4. Hardware reuse -- If you already have existing FBA/ECKD disk, Linux can happily make use of older/slower hardware that is already paid for.

  5. Accessibility of storage for multiple IBM operating systems -- All S/390 and zSeries operating systems understand ECKD, and many understand FBA as well. If demand for disk is greater or less in a particular environment, ECKD disk can be reassigned wherever it is needed without compatibility worries.

  6. Channel-attached infrastructure -- At the moment, you have to have a channel-attached tape drive if you want to do backups to a VM-supported device with the included tools, so you pretty much have to have *some* channel-attached infrastructure anyway. Additional commercial applications are available to enable FCP-attached devices, but those devices can't be shared with existing IBM operating systems.
Disadvantages of traditional FBA/ECKD disks:
  1. Limited size -- FBA/ECKD volumes are much smaller than most SAN volumes. LVM and MD RAID let you build larger logical volumes out of them (see the sketch after this list), but there is a finite number of volumes you can assemble without rebuilding LVM or MD to raise the limits. That can be a tricky process, and for some distributions it also puts you in peril of losing your distributor's support. Check your support contract closely to make sure this isn't the case for you.

  2. Extra expense for the FBA/ECKD interfaces on storage servers -- On most of the popular enterprise storage devices, the interfaces needed to plug into ESCON or FICON cost anywhere between 400% and 500% of the cost of an FCP interface. This skews the cost-per-megabyte for ECKD disk dramatically.

  3. "It's different." -- If you're trying to convince people to move applications from other environments where they can request enormous LUNs and not have to worry about LVM or MD, it's one more thing to have to convince them to do, and the argument generates a fair amount of unnecessary resistance.
On the other hand, we have FCP storage. The advantages of FCP storage:
  1. Really, *really* huge volumes without LVM -- FCP disks can be of pretty much arbitrary size. It's not unusual to have single volumes reaching 500GB each in the SAN environment.

  2. Lower cost infrastructure -- The necessary mainframe adapters are the same price as FICON adapters (they *are* FICON adapters with different microcode), but all the other things those adapters connect to -- SAN switches, interfaces to the storage devices, etc. -- are identical to the ones used for your open systems. These are usually *much* cheaper than the FBA/ECKD interfaces, often (as noted above) one-quarter to one-fifth of the price; see the rough arithmetic after this list.

  3. Volume format compatibility with open systems volumes inside the storage units -- If your open systems backup solution understands backing up FCP-attached storage on the open systems connected to your SAN, you get the same features in your 390 Linux guests, often without needing a zSeries-specific client application (the backups are done inside the storage unit -- e.g., the so-called "LAN-free" backup methods). It also allows other systems to mount volumes created by mainframe Linux, if some form of locking software is available on both systems.

  4. Re-use of existing open system resources -- Most open systems SAN deployments are over-provisioned by a significant margin. FCP-attachment of your mainframe lets you use some of that over-provisioning productively without new purchases. Since z/VM itself can also reside on FCP-only disk, you may be able to get substantial benefits without additional disk investment.

  5. Common storage management policies and procedures with open systems -- Allocation of an FCP disk can be done in the same way as for your open systems, which makes for one less set of policies.
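
To put a rough number on the interface-cost skew mentioned in item 2 (and in the ECKD disadvantages above), here is a back-of-the-envelope calculation. Only the 400%-500% ratio comes from this answer; every price and capacity figure below is a hypothetical assumption chosen purely for illustration.

    # Hypothetical figures -- only the 4x-5x interface-cost ratio is from the
    # answer above; the prices and capacity are invented for illustration.
    FCP_INTERFACE_COST = 10_000                       # assumed cost per FCP interface
    ECKD_INTERFACE_COST = 4.5 * FCP_INTERFACE_COST    # 400%-500% of FCP, midpoint
    INTERFACES = 4                                    # assumed interfaces per storage server
    USABLE_GB = 10_000                                # assumed capacity behind them

    for name, cost in [("FCP", FCP_INTERFACE_COST), ("ECKD/FBA", ECKD_INTERFACE_COST)]:
        premium = INTERFACES * cost / USABLE_GB
        print(f"{name:8s}: interfaces add ${premium:,.2f} per usable GB")
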
Disadvantages of FCP storage:
  1. Limited mainframe tool support -- Very few mainframe tools understand FCP storage. Most cloning and copy facilities are not available with FCP disk, and VM cannot invoke some specialized features of certain disk units (mostly for reasons rooted in old political battles inside and outside IBM, but the problem remains).

  2. Configuration complexity -- FCP storage is complex to configure, both for VM and for the storage administrator. LUNs and worldwide names are not native or natural concepts in VM or z/OS.

  3. Dump tools -- Most z/OS and z/VM volume dump tools cannot access FCP storage at all. When VM uses SCSI volumes for its own purposes, the SAN devices are emulated as 9336 disks (and can therefore be dumped with DDR), but pure FCP volumes are not accessible to CMS-based tools.

  4. Recovery -- Disaster recovery requires additional planning. You don't get all the applications and data back in the same restore cycle; a separate data-restoration plan is necessary.

  5. No native support for FCP-attached tape in VM -- You need at least one channel-attached drive, or you need to invent a different solution -- usually quite expensive, if you choose a commercial product -- to handle both VM and Linux backups.

  6. Performance differences -- There is a measurable performance difference between FCP and FBA/ECKD disk when used for VM CP functions.

  7. Multiple physical paths -- Managing multiple physical paths to FCP disk is still a bit difficult outside of Linux. Within Linux, EVMS is your friend, but VM itself doesn't have good tools for it.

  8. z/OS and FCP devices -- z/OS doesn't understand FCP devices at all. VM, VSE and Linux are perfectly happy to run on FCP or on 9336 devices emulated on FCP, but z/OS has to have ECKD. Until VM adds ECKD emulation on FCP disk, you have to keep separate disks for z/OS, which is a pain in emergencies.
To go back to your original question, I think there's a place for both types of storage. Typically, I recommend that application and OS code be stored on traditional FBA/ECKD disk as a rule, and that application data be stored on whichever type of disk provides the best cost/performance tradeoff. Applications that benefit from very large volumes (such as databases) are often good candidates for FCP storage, but may perform better on traditional disks. Moving data from one type of storage to another is a pretty easy task; moving applications and the OS boot volumes is much harder work.

As a rule, I always install Linux systems as VM minidisks (starting on cyl 1) rather than as full volumes (starting on cyl 0). This makes them much more flexible, and if I ever do have to move a Linux guest to an LPAR, I'm always sure that it'll fit on real volumes (it's always 1 cyl smaller than the equivalent real volume). If you do have hybrid systems, consider using EVMS to manage the disks. It's very sophisticated, and gives you a good way to control disk storage in a visual manner.
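
As a quick check on the "one cylinder smaller" point, here is a small worked example. It assumes standard 3390 geometry (15 tracks per cylinder, 12 blocks per track when the volume is formatted with 4 KB blocks) and a 3390 model 3 with 3,339 cylinders; those figures are my assumptions for the illustration, not details from the question.

    # Worked example: size difference between a full-pack 3390-3 and a
    # minidisk defined from cylinder 1 to the end of the same pack.
    # Assumed geometry: 15 tracks/cylinder, 12 x 4 KB blocks per track,
    # 3,339 cylinders on a 3390 model 3.
    TRACKS_PER_CYL = 15
    BLOCKS_PER_TRACK = 12
    BLOCK_SIZE = 4096
    CYLS_3390_3 = 3339

    def gib(cylinders):
        return cylinders * TRACKS_PER_CYL * BLOCKS_PER_TRACK * BLOCK_SIZE / 2**30

    print(f"full pack (cyl 0-{CYLS_3390_3 - 1}): {gib(CYLS_3390_3):.3f} GiB")
    print(f"minidisk  (cyl 1-{CYLS_3390_3 - 1}): {gib(CYLS_3390_3 - 1):.3f} GiB")
    # The minidisk is exactly one cylinder (about 0.7 MiB) smaller, so it is
    # guaranteed to fit back onto any real 3390-3 volume.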

You also asked:

Are you better able to manage capacity and performance if you make all of the DASD VM minidisks as opposed to SAN? If so, which products produce the required information?
"Better" is a relative term. It depends a lot on from which side of the house you approach the problem. The raw data for analysis is collected by the VM monitor and accounting data services in CP. The questions are: What consumes that data? And how does it do so?

If you are approaching the problem from the traditional mainframe side of the house, then the VM minidisk approach is much more in line with what is expected from a capacity/performance instrumentation solution. The necessary information is present in the OS instrumentation stream (i.e., the VM monitor and accounting data streams), and very detailed data is available for real-time analysis using tools like Omegamon or the IBM Performance Toolkit, or for longer-term classic performance analysis with tools like MXG. These tools do a fine job of measuring the container running the Linux systems, but (with the exception of the Performance Toolkit) don't do much inside the Linux guests, from either a storage or a processor utilization perspective. The usual suspects (HP, Tivoli, BMC, etc.) are catching up on this one, but that's still a fairly weak point in the management solution discussion. Most provide some level of reporting on VM storage and capacity planning, but it's not usually very comprehensive.

If you're approaching this from the open systems side of the house, storage capacity planning is done with the tools supplied by your storage vendor, and it should be identical to the measurements done for any open system connected to the storage. There's no clear leader in that space, as the tools are usually vendor-specific. I know it's a wimp-out, but "whatever tool you're using for the rest of your open systems" is probably the best answer I can give you. One caveat: the storage vendor's tools are not likely to understand the utilization patterns of a VM system running on FCP disk very well, so some of the optimizations offered by certain storage vendors may not work as intended.

So, "it depends." If your organization wants to consolidate all your storage management into a common storage management group, you probably should consider the hybrid solution I mentioned above (code on FBA/ECKD, data on FCP) or wait for z/VM 5.3, which is rumored to address some of the performance differences between an all-FCP and mixed FBA/ECKD and FCP environment. If you maintain separate storage management groups for mainframe and open systems storage, it's probably simpler to stick with traditional FBA/ECKD storage or the hybrid for now until the tools evolve a bit more.

This was first published in December 2005
