Tip

Top 10 things to review when developing a hardware scalability plan

Developing a scalability plan for an application involves many complex variables and requires a detailed analysis of the applications, operating systems, and hardware platforms to be used. Here are the top 10 items to include in your analysis and planning efforts:

  1. Buy the right server up front. Servers are often deployed with the minimum specs needed at the time of implementation to save costs. However, it is important to be sure that the specific server model and configuration you purchase permits the scalability method you're planning. If you don't discover until you're ready to expand that you can't add CPUs, or that you have to remove four 512MB memory chips to grow to 4GB of RAM in a four-slot box, the costs of scalability may unexpectedly increase at the time the additional resources are needed (see the slot-math sketch after this list).

  2. Budget scaling costs up front. Infrastructure doesn't always perform as intended, and you may need to scale up sooner than you had planned. It's a good idea to be sure your budget can accommodate early or emergency expansion plans.

  3. Establish a partitioning strategy based on utilization levels. Co-locate multiple low-utilization system instances (monitoring and reporting, for example) on a single server with adequate spare capacity, and isolate system instances with high utilization requirements on servers with adequate room for future scaling (such as adding CPUs). A simple placement sketch appears after this list.

  4. Migrating existing partitions to a new environment that requires additional application functionality will typically require a larger number of servers. This is because you've already taken advantage of the resource optimization gains partitioning provides. However, when migrating older partitioned environments, you may actually be able to reduce the number of new servers required. This is most frequently encountered with servers over four years old.

  5. Define your application workload properly. How you scale (up or out) depends on the CPU-intensity and I/O-intensity of the application workload. Not understanding this workload characteristic can result in an over- or under-configured server. For example, a workload that involves significant sharing and/or locking of data, such as ad hoc queries, is limited by the ability of the CPU to get access to the data necessary to carry out its calculation. Adding CPUs to an I/O-intense application without enough I/O capacity to move data between processors will result in the additional CPUs sitting idle (a rough classification sketch appears after this list).

  6. Vertical scaling isn't a 1:1 ratio. Adding a CPU doesn't increase power in a linear fashion because the law of diminishing returns applies. Before adding CPUs, be sure the additional performance gain justifies the cost of that additional CPU and any additional per-CPU software licenses. Ask both your software and hardware vendors about the scaling curve for their products (a worked example appears after this list).

  7. Choose your benchmark carefully. Different benchmarks use different scaling curves. Make sure the benchmark you choose mimics your environment as closely as possible. Selecting the wrong benchmark can result in an under- or over-configured system. Two common CPU-intense benchmarks are SPECint2000 and SPECfp2000, while the TPC-C benchmark represents an I/O-intense workload.

  8. Active/active clusters have special scaling considerations. In an active/active cluster, your application is running on every node at all times. If one node fails, your application is then running on N-1 nodes. Make your vertical or horizontal scaling plans based on the potential failure-mode load, not just the application load for a fully functional environment (see the failover arithmetic after this list).

  9. Assess software licensing costs for your scaling plan. Capacity-on-demand hardware configurations may limit the upfront hardware costs for a server, but some applications are licensed according to the total number of CPU sockets on the server, regardless of whether those sockets are populated (a licensing comparison appears after this list).

  10. Assess the support costs for logical versus physical partitioning. Logical partitions offer more flexibility and lower hardware cost than physical partitions, but the support costs for a more complex environment are higher. You may be able to deploy 10 logical partitions on 3 servers, but you still have 10 instances of the OS and application to support. Moreover, more highly skilled staff is required to efficiently support the environment and reduce the chances of operator-error-related downtime.
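
The following is a minimal sketch of the slot math behind tip 1, written in Python with hypothetical numbers (a four-slot box, 512MB modules installed, 1GB as the largest supported module); none of the figures come from a specific vendor's configuration.

    # Hypothetical slot-math check for tip 1: can the target RAM be reached
    # by adding modules, or only by replacing what is already installed?

    def upgrade_plan(slots, installed_gb_per_module, installed_count,
                     target_gb, largest_module_gb):
        """Describe what reaching target_gb requires for this chassis."""
        current_gb = installed_gb_per_module * installed_count
        free_slots = slots - installed_count
        # Best case without touching installed modules: fill the free slots
        # with the largest module the box supports.
        max_without_removal = current_gb + free_slots * largest_module_gb
        if target_gb <= max_without_removal:
            return "add modules only: %d free slot(s) available" % free_slots
        # Otherwise some or all of the original modules must come out.
        if target_gb <= slots * largest_module_gb:
            return "reachable only by removing and replacing installed modules"
        return "not reachable in this chassis at all"

    # A 4-slot box fully populated with 512MB (0.5GB) modules, target 4GB:
    print(upgrade_plan(slots=4, installed_gb_per_module=0.5, installed_count=4,
                       target_gb=4, largest_module_gb=1))
    # -> reachable only by removing and replacing installed modules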
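
For tip 3, the sketch below sorts candidate instances into co-location and isolation buckets by peak utilization. The instance names, utilization figures, and the 30% threshold are assumptions for illustration, not recommendations.

    # Illustrative placement pass for tip 3: low-utilization instances are
    # candidates for co-location on a shared server, high-utilization
    # instances get their own server with headroom to scale.

    CO_LOCATE_THRESHOLD = 0.30   # peak utilization fraction (assumed cutoff)

    instances = {                # hypothetical instances and peak utilization
        "monitoring": 0.10,
        "reporting": 0.15,
        "order-db": 0.75,
        "web-frontend": 0.55,
    }

    co_locate = [n for n, u in instances.items() if u < CO_LOCATE_THRESHOLD]
    isolate = [n for n, u in instances.items() if u >= CO_LOCATE_THRESHOLD]

    print("co-locate on a shared server:", co_locate)
    print("isolate with room to grow:   ", isolate)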
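
For tip 5, one rough way to characterize a workload is to compare time spent busy on CPU with time spent waiting on I/O over the same interval; those two inputs would come from whatever monitoring you already run. The ratio cutoffs in this sketch are assumptions, not established thresholds.

    # Rough workload classification for tip 5 from two measured quantities.
    # The 2:1 and 1:2 ratio cutoffs are illustrative assumptions.

    def classify_workload(cpu_busy_s, io_wait_s):
        if io_wait_s == 0 or cpu_busy_s / io_wait_s > 2:
            return "CPU-intensive: adding CPUs is likely to help"
        if cpu_busy_s / io_wait_s < 0.5:
            return "I/O-intensive: extra CPUs will sit idle without more I/O capacity"
        return "mixed: profile further before choosing scale-up vs. scale-out"

    print(classify_workload(cpu_busy_s=540, io_wait_s=60))   # CPU-intensive
    print(classify_workload(cpu_busy_s=90, io_wait_s=480))   # I/O-intensive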
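
For tip 6, one standard way to reason about non-linear CPU scaling is Amdahl's law. The sketch below is not vendor data: the 90% parallel fraction and the per-CPU license price are assumptions chosen only to show how the gain from each extra CPU shrinks while its cost does not.

    # Tip 6: speedup from extra CPUs is not linear. Amdahl's law with an
    # assumed 90% parallelizable workload and an assumed per-CPU license
    # cost shows the diminishing return on each added processor.

    PARALLEL_FRACTION = 0.90   # assumption, not a measurement
    LICENSE_PER_CPU = 5000     # assumed per-CPU software license cost ($)

    def speedup(n_cpus, p=PARALLEL_FRACTION):
        """Amdahl's law: speedup relative to a single CPU."""
        return 1.0 / ((1 - p) + p / n_cpus)

    previous = speedup(1)
    for n in range(2, 9):
        s = speedup(n)
        marginal = s - previous   # extra speedup from this one added CPU
        print("CPU %d: total speedup %.2fx, gain from this CPU %.2fx, "
              "its license cost $%d" % (n, s, marginal, LICENSE_PER_CPU))
        previous = s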
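
For tip 8, the arithmetic is simply that a load spread over N active nodes lands on N-1 nodes after a failure, and each node must be sized for that case. The load and capacity figures below are assumptions for illustration.

    # Tip 8: size active/active cluster nodes for the N-1 failure case,
    # not just the fully healthy case. Figures here are assumptions.

    def per_node_utilization(total_load, node_capacity, nodes):
        """Fraction of one node's capacity used when 'nodes' nodes share the load."""
        return total_load / nodes / node_capacity

    TOTAL_LOAD = 960       # e.g. transactions/sec across the whole application
    NODE_CAPACITY = 400    # transactions/sec a single node can sustain
    N = 3

    healthy = per_node_utilization(TOTAL_LOAD, NODE_CAPACITY, N)
    one_down = per_node_utilization(TOTAL_LOAD, NODE_CAPACITY, N - 1)

    print("all %d nodes up:  %.0f%% of each node" % (N, healthy * 100))
    print("one node failed: %.0f%% of each surviving node" % (one_down * 100))
    # 80% while healthy looks comfortable; 120% after a failure means the
    # cluster cannot actually absorb the loss of a node.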
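
For tip 9, the comparison below shows the licensing exposure of a capacity-on-demand box under two models: per populated socket versus per physical socket. The socket counts and price are assumptions, not any vendor's actual terms.

    # Tip 9: capacity-on-demand hardware can defer CPU purchases, but some
    # software is licensed on total sockets whether or not they are populated.
    # Socket counts and the per-socket price are illustrative assumptions.

    TOTAL_SOCKETS = 8         # sockets physically present in the chassis
    POPULATED_SOCKETS = 4     # sockets with CPUs installed on day one
    PRICE_PER_SOCKET = 12000  # assumed per-socket license price ($)

    per_populated = POPULATED_SOCKETS * PRICE_PER_SOCKET
    per_physical = TOTAL_SOCKETS * PRICE_PER_SOCKET

    print("licensed per populated socket: $%d" % per_populated)
    print("licensed per physical socket:  $%d" % per_physical)
    print("premium paid for the empty sockets: $%d" % (per_physical - per_populated))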


About the author: Kackie Cohen is a Silicon Valley-based consultant providing data center planning and operations management to government and private sector clients. Kackie is the author of Windows 2000 Routing and Remote Access Service and co-author of Windows XP Networking.

This was first published in January 2006

Disclaimer: Our Tips Exchange is a forum for you to share technical advice and expertise with your peers and to learn from other enterprise IT professionals. TechTarget provides the infrastructure to facilitate this sharing of information. However, we cannot guarantee the accuracy or validity of the material submitted. You agree that your use of the Ask The Expert services and your reliance on any questions, answers, information or other materials received through this Web site is at your own risk.