When Intel Corp. introduced PCI Express (PCIe) technology in 2004, the third-generation I/O interconnect made its predecessor, PCI-X, sound like the pigeon post.
Since then, adoption of the latest-and-greatest bus technology has followed a peculiar trajectory: PCIe is nearly ubiquitous in commodity x86 servers like Hewlett-Packard Co.'s ProLiant and Dell Inc.'s PowerEdge, but high-end Unix servers with PCIe are only just starting to come to market.
PCIe vs. PCI-X
The Peripheral Component Interconnect (PCI) standard is bus technology that connects a system's motherboard to peripheral devices like disk drives. Unlike its parallel predecessors, PCIe is based on serial technology, which allows for more scalable performance and lower latency thanks to its direct connection to chipsets and higher bandwidth (between 5 Gbps and 80 Gbps peak theoretical bandwidth, depending on link width). A Dell white paper asserts that PCIe should enable servers to keep pace with processor and I/O advances for at least 10 years.
PCIe's low-voltage I/O also requires fewer pins than other PCI buses, which lowers costs and leaves fewer wires to route on the board, so bandwidth can be added easily, said Al Yanes, chairman of PCI-SIG, the special-interest group responsible for PCI Express industry-standard I/O technology.
PCIe is also faster. First-generation PCIe runs at 2.5 GHz, and the new second generation -- which has been available for less than one year -- clocks in at 5.0 GHz. These speeds compare with PCI-X clock speeds of about 266 MHz, Yanes said. PCIe also supports advanced power management and has native hot-plug/hot-swap support.
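Those clock rates translate into usable bandwidth only after encoding overhead is subtracted: first- and second-generation PCIe use 8b/10b encoding, so 8 of every 10 bits on the wire carry payload. The short sketch below (an illustration, not part of any spec text) works through that arithmetic for the rates cited above:

```python
# Back-of-the-envelope PCIe bandwidth, per direction.
# Gen1 signals at 2.5 GT/s per lane and Gen2 at 5.0 GT/s; both use
# 8b/10b encoding, so only 8 of every 10 transferred bits are payload.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    """Peak usable bandwidth in Gbps for one direction of a PCIe link."""
    return gt_per_s * lanes * 8 / 10  # subtract 8b/10b encoding overhead

print(pcie_bandwidth_gbps(2.5, 1))   # Gen1 x1  -> 2.0 Gbps
print(pcie_bandwidth_gbps(5.0, 16))  # Gen2 x16 -> 64.0 Gbps
```

This is why a wide Gen2 link delivers roughly 32 times the payload bandwidth of a single Gen1 lane, while the raw signaling rates differ only by a factor of two per lane.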
In comparison, previous parallel-PCI buses cannot be easily scaled up in frequency or down in voltage, do not include native hot plugging/hot swapping of peripherals and are limited to one direction -- send or receive -- at a time, increasing latency, the Dell white paper asserts.
PCI Express adoption varies
Blade server and commodity x86 server vendors that use off-the-shelf chipsets were quick to implement PCIe interconnects. In 2004, HP, Dell and IBM began shipping PCIe-based x86 servers, and by 2006, nearly every x86 server built had a PCIe interconnection, according to In-Stat's Multimedia and Interface Technologies Service, which analyzes interface technologies and multimedia semiconductors. Leading vendors Dell and HP both offer select models with PCI-X support, but only for legacy customers.
While x86 and blade servers built today support PCIe, high-end servers still largely rely on the PCI-X bus architecture, said Dave Zabrowski, president and CEO of Neterion Technologies. Neterion makes 10 Gigabit Ethernet (GbE) adapters in both bus formats and sees demand for both PCIe and PCI-X versions, Zabrowski explained.
Indeed, while PCIe has been on the market for three years, HP just began offering PCIe support on its Itanium-based Integrity Servers rx2660, rx3600 and rx6600 in February 2007 and then on models rx7640, rx8640 and Superdome in November 2007. Those servers also support PCI-X.
And while IBM's high-end Power6 model IBM System p 570, introduced last May, now supports both PCI-X and PCIe slots, IBM is in no hurry to implement PCIe slots in the rest of its high-end servers and will continue supporting PCI-X adapters over time, said Rick Bause, communications manager for IBM Systems and Technology Group.
"PCI-X and PCI Express cards are physically different, and it is important to continue providing PCI-X slots to allow customers to protect their newer PCI-X card investment," Bause said.
IBM also hasn't fully upgraded its high-end servers to PCIe, because in many cases PCI-X is adequate. For example, a 64-bit PCI-X bus at 133 MHz delivers 1 GB per second of peak bandwidth between the system chipset and the I/O device. This is enough bandwidth for many server I/O requirements, including GbE, Ultra320 SCSI and 2 Gbps Fibre Channel.
"There is typically no significant performance advantage today between the two adapters [PCIe and PCI-X], such as when 1 Gigabit Ethernet is being implemented," Bause said.
But fabrics such as 10 GbE, 10 Gbps Fibre Channel, and InfiniBand require greater bandwidth than a 133 MHz PCI-X bus can provide.
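The dividing line is simple arithmetic: a parallel bus's peak bandwidth is its width times its clock rate. A minimal sketch (the fabric bandwidth figures are approximate, per-direction rates used for illustration) shows why the 64-bit, 133 MHz PCI-X bus cited above comfortably carries GbE but falls short of 10 GbE:

```python
# Peak bandwidth of a parallel PCI-X bus: bus width times clock rate.
def pci_x_peak_gbps(width_bits: int, clock_mhz: float) -> float:
    """Peak bandwidth in Gbps for a parallel PCI-X bus."""
    return width_bits * clock_mhz / 1000

bus = pci_x_peak_gbps(64, 133)  # ~8.5 Gbps, i.e. roughly 1 GB/s

# Approximate per-direction needs of common I/O fabrics, in Gbps.
fabrics = {"GbE": 1, "Ultra320 SCSI": 2.56, "2G Fibre Channel": 2, "10 GbE": 10}
for name, need in fabrics.items():
    verdict = "fits" if need <= bus else "exceeds the bus"
    print(f"{name}: {need} Gbps -> {verdict}")
```

Only 10 GbE (and likewise 10 Gbps Fibre Channel and InfiniBand) exceeds the ~8.5 Gbps that this PCI-X configuration can move, which is exactly where the pressure to adopt PCIe comes from.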
A revolution on hold
Whatever the potential benefits of PCIe, the major shift from PCI-X has yet to occur. "You hear people say that everything is PCI Express now, but that is not the case. It is being adopted, but it has taken time, especially in the enterprise space, where technology adoption doesn't happen overnight," Zabrowski said. For example, "IBM System p or Itanium servers are predominantly PCI-X 2.0. PCI-X has worked for them, so they are slow to change."
And for its part, Neterion still sees strong demand for both PCI-X and PCIe versions of its cards, said Zabrowski.
Nor is PCIe an inherently easy upgrade; its serial architecture prevents it from being backward-compatible with previous parallel-bus architectures, so users must acquire PCIe-compatible servers to use PCIe-format cards.
"PCIe represents a bus architecture change, which also changes proprietary chipsets. If you look at bus architectures, they don't change very often, so any change is significant to the industry -- not so much for the end user, but for the OEMs, because the systems have to change pretty significantly," said Zabrowski.
In short, older generations of PCI won't disappear quickly, said Yanes of PCI-SIG.
"People tend to estimate the demise of technologies far too early, when they are still viable," Yanes said. "I'd say PCI-X will still be around for at least another three years," and PCIe will move into the high-end server space slowly. "Perhaps by 2011, PCI Express will be commonly used in higher-end servers as well."
Let us know what you think about the story; email Bridget Botelho, News Writer.
Also, check out our news blog at serverspecs.blogs.techtarget.com.