Even the most experienced data center pros can learn from expert information technology books. Keep a few relevant reference volumes on hand to help solve problems or come up with a new, better way to operate. And if you're learning a new field, books let you move at your own pace and build up a repertoire in tandem with real-world experience.
The SearchDataCenter Advisory Board members share their picks for the best data center design, data virtualization and Linux books, as well as books for learning Python, treating IT as a business enabler and more.
For the administrator who wants to learn
Sander van Vugt, independent trainer and consultant
The first book I'd recommend is actually my own, Red Hat Enterprise Linux 6 Administration: Real World Skills for Red Hat Administrators. It is not just a book to help people pass the Red Hat Certified System Administrator (RHCSA) and Red Hat Certified Engineer (RHCE) exams -- it covers RHCSA and RHCE test information and much more, to prepare Red Hat administrators for real work. I've added topics that are important but not covered on any test, such as Linux high availability and performance optimization. It is also a practical book with hands-on exercises for each topic.
There are two books I recently purchased that I highly recommend. Python for Dummies by Stef Maruch and Aahz Maruch is written in a very accessible way. And yes, this one really helped me learn Python programming in just a few hours each week! And my favorite: SELinux System Administration by Sven Vermeulen. It's one of those little books published by Packt, just 100 pages, that makes the complicated topic of SELinux data center security accessible to anyone. It's a good start for new administrators, and it goes deep enough for experienced staff to be worth the small investment. And the topic is so relevant for the data center, because SELinux really helps make the data center secure.
For the CTO and IT strategist
Wayne Kernochan, president, Infostructure Associates
I recommend two books about data virtualization: Data Virtualization for Business Intelligence Systems by Rick van der Lans and Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility by Judith Davis and Robert Eve. Data virtualization is strategic for modern businesses, and it continues to evolve. Both books cover the present and future, as well as best practices and strategic case studies.
Data Virtualization for Business Intelligence Systems is a "consultant's cookbook" of concepts and generic best-practice implementations. Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility picks up on business strategies and shares actual case studies. I recommend using them together: Recreate the results from a case study that Davis and Eve describe by implementing van der Lans' best practices for your business architecture.
The benefits of data virtualization are partly quantifiable, as these books show, but they also include a harder-to-pin-down "information agility." Don't treat data virtualization as just infrastructure software; shore up your understanding of it in the context of business operations, and reap the rewards.
For the IT leader
John Treadway, products and software leader, Cloud Technology Partners
I recommend The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr and George Spafford. The Phoenix Project is a great read, covering what people need to focus on to build a modern IT operation that works for the business.
For all data center employees
Robert E. McFarlane, principal, data center design, Shen Milsom & Wilke
I recommend the ASHRAE TC 9.9 Datacom Series, which at present covers ten major data center infrastructure topics. IT staffers of every technical level, from data center manager to design engineer, will get useful information from the series.
ASHRAE makes the Datacom Series available online, on CD, in softcover editions and in a composite hardcover edition with bonus material.
For the mainframe administrator
Robert Crawford, systems programmer
Early in my career I was blessed to work for a company that believed in professional education, which I prefer to books. And when I attend SHARE conferences, there are a few speakers whose sessions I never miss:
- Jim Grauel used to work in CICS level II support, so naturally his sessions focused on debugging. Although Jim has since retired and some of the information may be out of date, I would still seek out his presentations to understand how the pieces fit together.
- Any presentation by Bob Rogers will be technically detailed, eminently understandable and highly entertaining. "How You Do What You Do When You're a CPU" is a perennial favorite.
- Steve Zemblowski spends a lot of time on the road carrying the CICS gospel to the masses. He does a great job explaining new CICS features and what they can be used for.
There is no substitute for experience. Both CICS and the mainframe on which CICS runs are pretty open, considering all we get to see is the object code. Get into transaction CETR (trace control) and turn on every trace point to the deepest level. After you run a few transactions through the region, you will learn a great deal about how CICS is put together, which domains call which functions and why things happen the way they do.
The same is true for the mainframe in general. A dump gives you ample opportunity to format system areas, follow control block chains and learn how to disassemble machine instructions. Browse through the authorized assembler macros manuals for information about what's going on underneath the covers. And, for the very hardcore, there's always the Principles of Operation manual, which describes the mainframe processor right down to the time-of-day clock.
And if your bosses ask what you're doing, tell them you're looking into the mainframe cloud for big data!