What technologies should systems administrators who manage and administer a Linux cluster master?
A systems administrator should get smart on the Beowulf initiative, virtualization technologies and the cluster-awareness of existing applications. The system administrator should get intelligent about the underlying plumbing of a cluster, the network that these clusters are being built on. They need to know about configuration guidelines and how to optimally design and configure a fabric for a growing number of nodes.
Not all of a customer's applications are going to be cluster-aware. Either you need sophisticated middleware that will hide the cluster from the application, or you need a cluster-capable application, such as Oracle 10g. So, the big challenge is finding cluster-aware applications, or finding middleware that will act as the abstraction layer, running the application over a cluster even if the application doesn't know there is a cluster.
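To make the abstraction-layer idea concrete, here is a minimal sketch (not from the interview; the class and node names are hypothetical) of middleware that presents a single-server interface to the application while dispatching work across cluster nodes behind the scenes:

```python
import itertools

class ClusterProxy:
    """Hypothetical abstraction layer: the application talks to what looks
    like one server, while calls are spread across cluster nodes."""

    def __init__(self, nodes):
        # Round-robin over the nodes; a real middleware layer would also
        # handle node failure, data placement and load balancing.
        self._nodes = itertools.cycle(nodes)

    def execute(self, task):
        # The application calls execute() as if talking to a single
        # machine; node selection is hidden inside the proxy.
        node = next(self._nodes)
        return node(task)

# Stand-in "nodes": plain functions tagged with a node number.
nodes = [lambda task, n=i: f"node{n} ran {task}" for i in range(3)]
proxy = ClusterProxy(nodes)
print(proxy.execute("job1"))  # node0 ran job1
print(proxy.execute("job2"))  # node1 ran job2
```

The point of the sketch is only that the calling code never mentions a node: the cluster is invisible above the proxy, which is exactly what such middleware promises.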
Some vendors -- such as Sun, IBM and Veritas -- are coming out with virtualization software that will help a company get to a clustered environment even if it doesn't have cluster-aware applications.

Isn't storage the application most commonly put on clusters in corporate environments?
About five years ago, businesses started to move to storage area networks (SANs). Every computer used to have its own storage, and people asked: 'Why can't I share this storage between all of my computers?' Now they are asking: 'Why can't I share my computer between all of my applications?' That is what clustering will allow them to do.

In general, which IT shops are ripe for running clustering right now?
Some are running a large proprietary Unix-type system, a high-performance system that is expensive to acquire, upgrade and maintain. Others are running distributed systems, and they would love to not have to do that. Clustering looks good, whether you are running an expensive proprietary Unix system and want to cut costs, or you are running segmented workloads and would love to be able to consolidate with a pool of computing power.
Oracle users are a great case in point. They have a large database, and conventional wisdom says that a database can only be as big as its server. So, the database grows, but buying a big proprietary server is cost-prohibitive. Then, you split the database in half, with half the database on one machine and half on the other, and have to conglomerate the results. That is operationally painful, but it can be cheaper than buying a large proprietary server.
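The "operationally painful" part of splitting a database is that every query has to fan out to both halves and the application has to conglomerate the partial results itself. A minimal sketch of that scatter-gather pattern, using two in-memory SQLite databases to stand in for the two servers (the table and figures are invented for illustration):

```python
import sqlite3

# Simulate a table split in half across two servers.
shard_a = sqlite3.connect(":memory:")
shard_b = sqlite3.connect(":memory:")

for shard, rows in ((shard_a, [(1, "Ada", 500), (2, "Bob", 120)]),
                    (shard_b, [(3, "Cho", 340), (4, "Dee", 90)])):
    shard.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total INTEGER)")
    shard.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    shard.commit()

def total_revenue(shards):
    # The application must query every shard and combine the partial
    # results itself -- the operational pain described above.
    return sum(s.execute("SELECT SUM(total) FROM orders").fetchone()[0]
               for s in shards)

print(total_revenue([shard_a, shard_b]))  # 1050
```

A cluster-aware database such as Oracle 10g moves this fan-out-and-merge work below the SQL layer, so the application issues one query against what it sees as one database.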
With clustering, that company could run one instance of Oracle on four, eight or 16 servers in a cluster. It is a single database, but the cluster shields users from the fact that there are multiple physical machines under it. The beauty is that the database can get as large as you want it to, and you just keep adding inexpensive machines under it. If one of the machines fails, the database doesn't go down, because it continues to run on the other machines.

Is clustering on Windows-based commodity servers also gaining traction?
Windows is an interesting case because Microsoft is beginning to realize the power of clusters. With its 64-bit computing initiative, Microsoft has started making significant noise around its product's ability to participate in high-performance computing clusters. Microsoft looks to create the opportunity for a lot of Windows-based nodes to share resources and run a big application on a cluster. I think that [Windows-based clusters] are going to show up more and more in the enterprise in the next couple of years.

What's changed about that situation now?
Over the past couple of years, commodity computers have become more powerful and really cheap. There is simply no denying the cost benefits of an Opteron-based fabric or an Itanium-based cluster of computers anymore. The big vendors are seeing that they can't win a proprietary architecture versus commodity cluster argument.
Also, Linux has gained steam in the enterprise after sneaking in as a file-and-print, firewall and security server. When Oracle came in with "unbreakable Linux," that really woke people up to the enterprise possibilities of Linux. Then, bringing together Linux, Beowulf clustering tools and commodity computers, you get a computing paradigm that any CIO could wrap his head around. Why go ahead and spend a million dollars on a proprietary mainframe-level machine, when for $125,000 you can get greater horsepower through commodity 64-bit server clusters?
To me, the biggest thing holding Linux clustering back has been that it was not beneficial for major systems vendors. A lot of people who had a lot of influence didn't want it to succeed. For years, vendors' mantra has been that a very large mission-critical job needs proprietary software and systems.