Can you please give some information on how Sybase performs on Linux? Is the Linux OS ready for heavy OLTP traffic? What are the limitations, if any, that we should be aware of?
This is a very interesting question. Before considering Linux you need to take the time to appreciate the following:
- If you are going to use a standard off-the-shelf Linux distribution such as Red Hat 8.0, you will be limited to a maximum of 2.0 GB of shared memory. So your Sybase memory configuration cannot be more than that.
- If you tweak the kernel or settle for something like Red Hat Advanced Server 3.0, you can have up to 2.7 GB of shared memory.
- In my opinion, the Linux kernel currently does not scale well beyond four CPUs for databases. So if you have an application that is CPU-bound and can live with four or fewer Sybase engines, you will benefit from Intel CPU speed and the lower total cost of ownership (TCO) of Linux.
- Bear in mind that for a given Sybase engine (a Unix process), you will be limited to 1014 connections/threads to Sybase. So as long as your total connections to Sybase are not going to exceed, say, 4000, you will be all right with Linux.
- Sybase will soon be coming out with a heterogeneous dump-and-load utility. This means that you can dump a database on Solaris and load it on Linux. In that case you can use Linux on Intel as a cost-effective host for your development servers.
- At the current juncture I would advise you to think carefully before using Linux for mission-critical databases. Consider the O/S host support, the hardware support and the in-house expertise in Linux. If you are predominantly a Unix shop, this should not be a big issue. However, if you are a Windows house, I suggest that you gain enough Linux experience before deploying Linux in production.
- For OLTP, Linux should be fine. What do you gain from faster CPUs? Here I would like, if I may, to make a reference to the way Sybase handles the "critical section" of the code. Simply put, any block of code that Sybase executes under synchronization is a critical section. While one critical section is being executed on a data page (for example, updating a record), Sybase prevents other critical sections from executing on the same data page. As we know, Sybase does this through a spinlock/semaphore, and the semaphore is not released until the critical section is completed. If CPU speed is constant, then the time spent in each critical section is approximately the same. So compared with an O/S like Solaris, if we keep the number of CPUs the same but increase the CPU speed, we have the same threads of execution, but each critical section is disposed of in a shorter time; this should result in less contention and better performance.
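The shared-memory ceiling mentioned above can be checked programmatically. Below is a minimal sketch (not a Sybase utility) that reads the kernel's shared-memory limit from the standard Linux `/proc` location and compares it with a planned Sybase memory setting; the path and the helper names are mine, so verify them on your own distribution.

```python
def shmmax_bytes(path="/proc/sys/kernel/shmmax"):
    """Return the kernel's maximum shared-memory segment size, in bytes."""
    with open(path) as f:
        return int(f.read().strip())

def sybase_memory_fits(planned_bytes, path="/proc/sys/kernel/shmmax"):
    """True if the planned Sybase memory fits under the kernel ceiling."""
    return planned_bytes <= shmmax_bytes(path)
```

On a stock off-the-shelf kernel the value reported here would be around 2 GB; raising it means tuning the kernel (for example, via `kernel.shmmax` in `/etc/sysctl.conf`) or moving to a distribution such as Red Hat Advanced Server.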
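The connection arithmetic above is worth a back-of-envelope check. Using the figures from the answer (at most 1014 connections per engine), a quick sketch of how many engines a given connection load needs:

```python
import math

# Figure from the answer above: per-engine connection ceiling.
MAX_CONN_PER_ENGINE = 1014

def engines_needed(total_connections):
    """Smallest number of Sybase engines that can host this many connections."""
    return math.ceil(total_connections / MAX_CONN_PER_ENGINE)

print(engines_needed(4000))  # 4 -- so 4000 connections fit within four engines
```

This is why 4000 total connections sits comfortably inside the four-engine sweet spot discussed above: four engines give 4 x 1014 = 4056 connections.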
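The critical-section argument in the last point can be illustrated with a small sketch. This is not Sybase's internal code; it is a generic lock-protected update standing in for the spinlock/semaphore described above. Only one thread at a time may update the shared "page"; the time each thread holds the lock is exactly the window that a faster CPU shrinks, which is where the reduced contention comes from.

```python
import threading

# Stand-in for a data page protected by a spinlock/semaphore.
page_lock = threading.Lock()
page = {"counter": 0}

def update_record(n):
    for _ in range(n):
        with page_lock:            # enter the critical section
            page["counter"] += 1   # work done while holding the lock
        # lock released here: a faster CPU shortens the held interval,
        # so waiting threads are blocked for less time

threads = [threading.Thread(target=update_record, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(page["counter"])  # 40000: serialization keeps the shared update correct
```

Without the lock, the four threads could interleave their read-modify-write steps and lose updates; with it, throughput is bounded by how quickly each critical section completes, which is the CPU-speed effect described above.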
For More Information
- Dozens more answers to tough SQL Server questions from Mich Talebzadeh are available here.
- The Best Sybase Web Links: tips, tutorials, scripts, and more.
- Ask the Experts yourself: Our SQL, database design, SQL Server, DB2, Sybase, object-oriented and data warehousing gurus are waiting to answer your toughest questions.