Developing a plan for scalability for an application is based on many complex variables and requires a detailed analysis of the applications, operating systems, and hardware platforms to be used. Here are the top 10 items to include in your analysis and planning efforts:
- Buy the right server up front. Servers are often deployed with the minimum specifications needed at implementation time to save costs. However, it is important to be sure that the specific server configuration you purchase permits the scaling method you're planning. If you don't discover until you're ready to grow that server that you can't add CPUs, or that you have to remove four 512 MB memory modules to reach 4 GB of RAM in a four-slot box, the cost of scaling may increase unexpectedly at the moment the additional resources are needed.
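As a back-of-the-envelope check on the slot problem above, here is a small Python sketch. The slot counts and module sizes are illustrative, matching the hypothetical four-slot box in the tip; a real sizing exercise would use your vendor's configuration rules.

```python
def modules_to_remove(total_slots, installed, target_gb, new_module_gb):
    """Given the number of DIMM slots, a list of installed module sizes
    (GB), a capacity target (GB), and the size of the new modules to add,
    return how many installed modules must be pulled to hit the target,
    or 0 if the free slots alone are enough."""
    free_slots = total_slots - len(installed)
    # Can we reach the target without touching anything already installed?
    if sum(installed) + free_slots * new_module_gb >= target_gb:
        return 0
    # Otherwise swap out the smallest installed modules first.
    by_size = sorted(installed)
    removed = 0
    for _ in by_size:
        removed += 1
        remaining = by_size[removed:]
        capacity = sum(remaining) + (free_slots + removed) * new_module_gb
        if capacity >= target_gb:
            return removed
    return removed  # may still fall short if slots are exhausted

# The four-slot example from the text: four 0.5 GB modules installed,
# so growing to 4 GB with 1 GB modules means pulling all four originals.
print(modules_to_remove(4, [0.5, 0.5, 0.5, 0.5], 4, 1))  # -> 4
```

With an eight-slot chassis instead, the same growth needs no removals at all, which is exactly the kind of difference worth checking before purchase.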
- Budget scaling costs up front. Infrastructure doesn't always perform as intended, and you may need to scale up sooner than you had planned. It's a good idea to be sure your budget can accommodate early or emergency expansion plans.
- Establish a partitioning strategy based on utilization levels. Co-locate multiple low-utilization system instances (monitoring and reporting, for example) on a single server with adequate spare capacity, and isolate system instances with high utilization requirements on servers with adequate room for future scaling (such as adding CPUs).
- Migrating existing partitions to a new environment that requires additional application functionality will typically call for a larger number of servers, because you've already captured the resource-optimization gains that partitioning provides. However, when migrating older partitioned environments, you may actually be able to reduce the number of new servers required; this is most frequently the case with servers more than four years old.
- Define your application workload properly. How you scale (up or out) depends on the CPU intensity and I/O intensity of the application workload. Not understanding this workload characteristic can result in an over- or under-configured server. For example, a workload that involves significant sharing and/or locking of data, such as ad hoc queries, is limited by how quickly the CPU can access the data needed to carry out its calculations. Adding CPUs to an I/O-intensive application without enough I/O capacity to move data between processors will leave the additional CPUs sitting idle.
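The decision the tip above describes can be sketched as a toy heuristic. The utilization thresholds below are illustrative assumptions, not vendor guidance; real capacity planning would draw on measured workload profiles.

```python
def scaling_advice(cpu_util, io_wait):
    """Toy classifier: given average CPU utilization and I/O wait
    (both as fractions of time), suggest where to invest first.
    The 0.8 and 0.2 thresholds are illustrative, not a standard."""
    if io_wait >= 0.2:
        # The CPUs are starved waiting for data: more CPUs would idle.
        return "I/O-bound: add I/O capacity before adding CPUs"
    if cpu_util >= 0.8:
        # Compute saturates first: scaling up (or out) CPUs can help.
        return "CPU-bound: add or upgrade CPUs"
    return "Balanced: no immediate scaling pressure"

print(scaling_advice(0.9, 0.05))  # compute-heavy batch job
print(scaling_advice(0.5, 0.40))  # ad hoc query workload
```

Note the ordering: I/O starvation is checked first, reflecting the tip's warning that extra CPUs are wasted when data can't reach them.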
- Vertical scaling isn't a 1:1 ratio. Adding a CPU doesn't increase power linearly, because the law of diminishing returns applies. Before adding CPUs, be sure the additional performance gain justifies the cost of that CPU and any additional per-CPU software licenses. Ask both your software and hardware vendors about the scaling curve for their products.
- Choose your benchmark carefully. Different benchmarks exhibit different scaling curves. Make sure the benchmark you choose mimics your environment as closely as possible; selecting the wrong benchmark can result in an under- or over-configured system. Two common CPU-intensive benchmarks are SPECint2000 and SPECfp2000, while the TPC-C benchmark represents an I/O-intensive workload.
- Active/active clusters have special scaling considerations. In an active/active cluster, your application is running on every node at all times. If one node fails, your application is then running on N-1 nodes, and each surviving node must absorb its share of the failed node's load. Make your vertical or horizontal scaling plans based on the potential failure-mode load, not just the application load for a fully functional environment.
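The failure-mode arithmetic is simple enough to sketch; the load figures below are hypothetical, but the point is that each node must be sized for the N-1 case.

```python
def per_node_load(total_load, nodes, failed=0):
    """Load each surviving node carries in an active/active cluster
    where the total workload is spread evenly across live nodes."""
    surviving = nodes - failed
    if surviving < 1:
        raise ValueError("no surviving nodes")
    return total_load / surviving

# Hypothetical 4-node cluster handling 240 units of work:
print(per_node_load(240, 4))     # 60.0 per node in steady state
print(per_node_load(240, 4, 1))  # 80.0 per node after one failure
```

Sizing each node for 60 units would leave the cluster overloaded the moment a node drops out; the 80-unit failure-mode figure is the one to plan against.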
- Assess software licensing costs for your scaling plan. Capacity-on-demand hardware configurations may limit the upfront hardware costs for a server, but some applications are licensed according to the total number of CPU sockets on the server, regardless of whether those sockets are populated.
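A quick sketch of how the socket-based licensing policy changes the bill. The fee and socket counts are illustrative assumptions, not any vendor's actual pricing.

```python
def license_cost(sockets_on_board, sockets_populated, fee_per_socket,
                 charged_on_total=True):
    """Licensing cost under two illustrative policies: charge on every
    socket the chassis has, or only on sockets actually populated."""
    billable = sockets_on_board if charged_on_total else sockets_populated
    return billable * fee_per_socket

# Capacity-on-demand box: 8 sockets on the board, only 2 populated,
# at a hypothetical fee of 5,000 per socket.
print(license_cost(8, 2, 5000))                          # 40000
print(license_cost(8, 2, 5000, charged_on_total=False))  # 10000
```

Under a total-socket policy, the "cheap" capacity-on-demand chassis quadruples the license bill before a single extra CPU is installed.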
- Assess the support costs for logical versus physical partitioning. Logical partitions offer more flexibility and lower hardware cost than physical partitions, but the support costs for a more complex environment are higher. You may be able to deploy 10 logical partitions on three servers, but you still have 10 operating system and application instances to support. Moreover, more highly skilled staff are required to efficiently support the environment and reduce the chances of downtime from operator error.
About the author: Kackie Cohen is a Silicon Valley-based consultant providing data center planning and operations management to government and private sector clients. Kackie is the author of Windows 2000 Routing and Remote Access Service and co-author of Windows XP Networking.