
HealthCare.gov's shortcomings can be traced back to some IT fundamentals that its developers didn't -- or couldn't -- follow correctly.

If there is one lesson large IT shops can learn from the disaster that is HealthCare.gov, it is what not to do in developing and launching a high-volume website.

Few development projects compare with the scale and societal importance of HealthCare.gov. But if the government's IT organization had more carefully considered the technology pieces needed to handle traffic demands and observed time-honored fundamental IT practices, the site today would not be a cautionary tale.

Some of those overlooked fundamentals include the site's basic architecture, proper stress testing of the site's availability, disaster recovery (DR) capabilities and, last but not least, a single person in charge to manage the project from conception to final delivery.

But the federal government isn't the only organization guilty of not taking proper precautions to ensure the stability and reliability of a high-volume website. Even after a significant outage this past weekend -- thanks to an unspecified "component failure" traced back to a hosting provider -- many IT organizations have still failed to implement a proven DR strategy.

"You would be surprised [by] how many organizations don't do disaster recovery," said Nate Ulery, senior director of the IT infrastructure and operations practice at West Monroe Partners. "In most cases, organizations invest in the necessary hardware and software for DR, but they don't invest in the process to where things are documented and tested on a regular basis. Consequently, they are actually hesitant to use it."
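Ulery's point is that DR has to be a rehearsed process, not just purchased hardware. As an illustration only -- the function and parameter names below are hypothetical, not drawn from any product mentioned in this story -- a scripted drill like the following can be run on a schedule so that failover is exercised and timed rather than trusted:

```python
import time

def run_dr_drill(check_replica, promote_replica, rto_seconds=300):
    """Verify the standby is healthy, promote it, and time the failover.

    Returns a small report dict so drills can be logged and compared over
    time; raises if the drill misses its recovery-time objective (RTO).
    """
    start = time.monotonic()
    if not check_replica():
        raise RuntimeError("standby replica unhealthy; drill failed before failover")
    promote_replica()
    elapsed = time.monotonic() - start
    if elapsed > rto_seconds:
        raise RuntimeError(f"failover took {elapsed:.1f}s, over the {rto_seconds}s RTO")
    return {"failover_seconds": elapsed, "within_rto": True}

# Example drill against stub infrastructure (real drills would plug in
# actual health checks and promotion commands):
report = run_dr_drill(check_replica=lambda: True,
                      promote_replica=lambda: None)
```

The value is less in the code than in the habit: a drill that runs regularly produces documentation and confidence, which is exactly what Ulery says most shops lack.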


Many IT shops, as appears to have been the case with HealthCare.gov, fail to consider that there are two types of disasters: the one most people think of, involving hurricanes and earthquakes, and disasters limited to just your organization, which might involve only one corrupt application or a single hardware or software component failure, Ulery said.

"So many people think the hardware and software they purchase will serve as a silver bullet, but the reality is -- it's not. They have unrealistic expectations of what DR and high availability can actually provide," Ulery said.

Given the high volume HealthCare.gov was to handle, the architecture model for the site should have been more carefully chosen, according to some. The site's architecture is built around a traditional database model instead of one centered around directories, which are better suited for fielding queries, navigating through large data stores to find the relevant information, and delivering results quickly.

"Databases are excellent at storing and accessing large amounts of data, but they aren't designed to deliver data in near real time. Directories are optimized to read stored information and get it to you fast. The challenge for HealthCare.gov is managing not only large amounts of data, but also linking together large amounts of data from external third parties," said Wade Ellery, a director of systems and development for Novato, Calif.-based Radiant Logic.
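The contrast Ellery draws can be sketched in a few lines. The data and names below are hypothetical: a normalized store that must join records at query time, versus a directory-style view that links the same records once, up front, so each lookup is a single keyed read:

```python
# Normalized, database-style tables (hypothetical sample data):
people = {1: {"name": "Ana"}, 2: {"name": "Ben"}}
plans = {10: {"person_id": 1, "plan": "Silver"}, 11: {"person_id": 2, "plan": "Bronze"}}

def relational_lookup(person_id):
    # Join at read time: scan the plans table for a matching row.
    for plan in plans.values():
        if plan["person_id"] == person_id:
            return {**people[person_id], "plan": plan["plan"]}

# Directory-style: pre-link the records, keyed directly by identity.
directory = {
    pid: {**people[pid], "plan": plan["plan"]}
    for pid, plan in ((p["person_id"], p) for p in plans.values())
}

def directory_lookup(person_id):
    return directory[person_id]  # one keyed read, no join

assert relational_lookup(1) == directory_lookup(1) == {"name": "Ana", "plan": "Silver"}
```

At toy scale the two are interchangeable; at the scale of millions of identities linked across third-party sources, paying the linking cost once rather than on every query is the design choice Ellery is describing.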

Ellery added that HealthCare.gov is not the only site that could benefit from a directory-based approach to searching complex data sets and returning queries quickly. In fact, a growing number of his large customers are gravitating toward managing not just their own employees and internal data, but also that of their customers.

"We have insurance companies with 100 million going to 200 million potential identities to manage," Ellery said. "HealthCare.gov is the largest single attempt to pull all these identities and information together, but it foretells where the world is headed. As networks grow and we start tracking and linking more information, we will be building extremely large data sets that need this [directories] approach."

The most glaring mistake, in the opinion of some, was not hiring a technically savvy person with a C-level title to coordinate such a sprawling development effort. There wasn't an identifiable individual who articulated a vision for the site, or who served as coordinator for the many internal programmers and 50 or so external subcontractors developing the many different technical pieces of HealthCare.gov, they noted.

"You have to have one chief at the top who can see the big picture but also [who can] be technically savvy. You can't have some bureaucrat or politician making critical decisions about what technical pieces to pick," Ellery said.

As the chief information officer of the United States and long-time executive at Microsoft, Steven VanRoekel could have played a more active role in either overseeing the project or hiring a technically experienced coordinator, according to some. But how effective he, or his predecessor Vivek Kundra, could have been in that role is questionable given certain bureaucratic and financial constraints.

"Federal CIOs don't always have as much control as they need over budgets, which are set by Congress. Many times the best they can do is just to be directional," said Tony Byrne, founder of The Real Story Group, an analyst firm based in Olney, Md.

Besides the lack of crisp execution, the lesson IT leaders can learn from HealthCare.gov is that they should be as forthright as possible with their colleagues: If something is going to be complex and hard, tell them early and often, but also take the time to educate them rather than just push them off, Byrne said.

The government had its hands full from the start, Byrne admitted, in having to overcome three difficult elements, or what he calls the "Web application trifecta."

"They had to distill a very complex customer journey on the front end, apply a diverse set of business rules to back-end transactions involving many external partners, and support huge volumes of traffic that had intense spikes," he said. "Any one of these requirements demands quite specialized expertise."
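The traffic leg of Byrne's trifecta is where stress testing most often falls short: sites get sized for the average load rather than the spikes. A minimal, hypothetical sketch of why that fails -- the numbers are illustrative, not HealthCare.gov's actual traffic:

```python
def spiky_profile(base_rps, spike_factor, spike_minutes, total_minutes):
    """Requests per second for each minute: flat load with sharp surges."""
    profile = [base_rps] * total_minutes
    for m in spike_minutes:
        profile[m] = base_rps * spike_factor
    return profile

def undersized_minutes(profile, capacity_rps):
    """Minutes in which offered load exceeds provisioned capacity."""
    return [m for m, rps in enumerate(profile) if rps > capacity_rps]

# An hour of 500 req/s with an 8x surge in minutes 30-31:
profile = spiky_profile(base_rps=500, spike_factor=8,
                        spike_minutes=[30, 31], total_minutes=60)

# Provisioning for double the steady rate still leaves the surge uncovered:
overloaded = undersized_minutes(profile, capacity_rps=1000)
```

A stress test built from the steady rate alone would pass; one built from the spike profile exposes exactly the launch-day failure mode the site suffered.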

Yet another flaw in the project, one technical workers had little control over, was its hard-and-fast deadline of October 1. Between the Supreme Court's late approval of Obamacare and states' time-consuming decisions to either run their own healthcare exchanges or let the federal government operate those exchanges for them, programmers lacked the time and financial resources to deliver a satisfactory product.

"It is the nature of any large software project that you just don't give it a due date. They are like babies; they will come when they are ready," Ellery said. "But they were trapped [in] politics and promises of having it by October 1."
