BOSTON -- There are certain tricks to building and maintaining a data center that only experienced professionals can tell you.
Data center design
Companies must match the design with their business plan, said Dennis Julian, principal director of engineering at Integrated Design Group Inc., an engineering and architectural design firm in Boston.
"Know what your end game is so the design can reflect and support that," he said.
Keep it simple, and design a facility that can handle contingencies such as fires, equipment failures and downtime. Run a total cost of ownership analysis to weigh the costs of IT equipment -- both direct and indirect -- against its benefits.
Design your data center for appropriate availability and energy efficiency and understand tier levels.
"People often say they want X-tier level, then list off Y-tier level features," Julian said.
If you colocate to a Tier III facility, expect no more than 1.6 hours of unplanned downtime annually. Incorporating additional batteries and generators adds financial flexibility and lets the business control its own equipment, according to Julian.
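The 1.6-hour figure follows from the Uptime Institute's published availability targets: a Tier III site is specified at 99.982% availability, and the remaining fraction of a year is the allowable downtime. A quick illustrative calculation (the percentages are the standard Uptime Institute targets; the script itself is just a sketch):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Uptime Institute availability targets by tier, in percent
tier_availability = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for tier, pct in tier_availability.items():
    downtime_hours = HOURS_PER_YEAR * (1 - pct / 100)
    print(f"{tier}: up to {downtime_hours:.1f} hours of unplanned downtime per year")
```

For Tier III, 8,760 hours x 0.018% works out to roughly 1.6 hours per year, which is the number Julian cites.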
Do not shortchange your design or make concessions. Plan thoroughly, and ensure that the build matches the design. Don't rush your engineer either, he said.
Don't look at just one piece of the picture with power usage effectiveness (PUE). PUE is only useful for comparing a facility against itself over time; it changes over the course of the year, swinging between, for example, 2.00 in the summer and 1.04 in colder months. Most importantly, understand what PUE does and doesn't tell you, he said.
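PUE is defined as total facility power divided by the power delivered to IT equipment, so the same IT load can produce very different numbers as cooling demand shifts with the seasons. A minimal sketch of how the swing above could arise (the power readings are hypothetical):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical readings for the same 500 kW IT load in two seasons
summer = pue(total_facility_kw=1000.0, it_load_kw=500.0)  # heavy chiller load
winter = pue(total_facility_kw=520.0, it_load_kw=500.0)   # mostly free cooling
print(f"summer PUE: {summer:.2f}, winter PUE: {winter:.2f}")
```

The IT load never changed, which is why a single PUE snapshot -- or a comparison against another facility in a different climate -- says little on its own.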
And don't forget to design with the ability to test your systems. Many enterprises phase in generators but don't test them, which may result in leaks or failure. Not planning for equipment tests compromises security and safety, according to Julian.
Align the budget, scope and schedule, said Kenn Stipcak, managing director of critical environments at Mark G Anderson Consultants Inc., a construction management firm in Washington, DC.
Start with an Owner's Project Requirements document that details the functional requirements of the project. Everyone must have the same expectations, as the scope defines the budget and schedule for a data center build. Break down the schedule into phases of your project and create gate checks to make sure the build follows the schedule.
It's also important to understand density in the design.
"People think they're going to operate at 15 kilowatts per rack when they're really consuming 2 kilowatts per rack," Stipcak said.
Every business has a cost sweet spot that determines what density is realistic for its facility, he said. For example, spreading out the load might make more sense than using load banks, due to cost.
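The gap Stipcak describes matters because a density assumption fixes how the power budget is carved up. A rough sketch with hypothetical numbers shows how much capacity an inflated design figure strands:

```python
def racks_supported(room_power_kw: float, kw_per_rack: float) -> int:
    """Whole racks a fixed critical-power budget can support at a given density."""
    return int(room_power_kw // kw_per_rack)

room_power_kw = 600.0  # hypothetical critical power budget for the room

planned = racks_supported(room_power_kw, 15.0)  # design assumption: 15 kW/rack
actual = racks_supported(room_power_kw, 2.0)    # measured draw: 2 kW/rack
print(f"designed for {planned} racks, but the load supports {actual}")
```

At the assumed density the room is laid out for 40 racks; at the real draw the same power budget covers 300, which is power and cooling infrastructure paid for but never used.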
Don't let process management take over project management; the tools you use shouldn't dictate your process.
For example, one of the firm's clients allowed a $1,000 change order to delay their project while waiting for approval from higher-ups. It ended up costing the business tens of thousands of dollars in delay costs, he said.
Manage and maintain
Business and IT need to be friends, said James DiNoia, senior project manager for critical systems at CBRE Group, a commercial real estate services firm in Boston. Have regular meetings with both teams to discuss future plans for the business. That lets facility teams know what's going into and coming out of the racks, he said.
Have hands-on management. Site engineers should walk the floor and check the equipment every day to make sure everything is working properly. And where possible, put sensors on everything, DiNoia said.
Consistent maintenance is paramount. Address problems as soon as they arise, but don't act without enough information; follow a procedure that surfaces all of your questions first. And understand that your maintenance window probably falls on a Sunday afternoon, he said.
When investing in uninterruptible power supplies (UPS), consider cost over the whole lifespan, not just the price tag, said John Raio, SVP of operations at Quality Uptime Services, a power service organization in Connecticut. It might cost more to swap out UPSes than to put additional batteries in right away.
Monitor your batteries so you know they're running at 100%. Monitoring will allow you to catch problems before batteries fail. Plan ahead and keep spare batteries on site, Raio said.
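In practice, that monitoring can be as simple as a threshold check over periodic voltage readings from each battery jar in a string. A minimal sketch -- the jar IDs and the 12.4 V float-voltage threshold are hypothetical, not a vendor API:

```python
def flag_weak_batteries(readings: dict, min_voltage: float = 12.4) -> list:
    """Return IDs of battery jars whose float voltage has dropped below threshold."""
    return [jar for jar, volts in sorted(readings.items()) if volts < min_voltage]

# Hypothetical float-voltage readings from one UPS battery string
readings = {"jar-01": 13.4, "jar-02": 12.1, "jar-03": 13.5}
print(flag_weak_batteries(readings))  # ['jar-02']
```

A weak jar flagged this way can be swapped from on-site spares before it takes the string down during an outage -- the "catch problems before batteries fail" step Raio describes.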
All members of the staff should be trained, regardless of the current response time. Proper training will decrease repair time. Data center staff should be comfortable operating the equipment, he said.
"Don't let an emergency be the staff's first time doing emergency procedures," Raio said.
Others agree that preparedness is critical.
"Murphy's Law will always prevail," said Michael Swetz, vice president of data center operations at State Street Corp., a financial services firm located in Boston.
"People build data centers and they don't know why they're doing what they're doing," he added.
Do you have regulatory requirements to meet? Do you want to scale? Don't let those be afterthoughts, Swetz said. IT pros should also look to industry-standard practices and use them; don't go it alone defining your own, he added.
Successful data center operations come down to people, process and technology. When designing and building a data center, interpret properly what people want and need, and track key performance indicators on the infrastructure supporting your applications, Swetz said.