When you want something done quickly, double up the resources working on it.
Parallel processing, also known as parallel computing, allows multiple server CPUs or processor cores to execute parts of a workload at the same time -- in parallel -- by sharing resources and coordinating actions. A program is broken down into pieces, the pieces are processed simultaneously, and the results are recombined, which completes the work much faster than serial processing.
Parallel processing applications can run on a single processor with multiple cores, a single server with multiple processors, or across clusters or grids of computers. When set up properly, parallel processing is massively scalable, up to thousands of processors. This scale is typically seen in supercomputing deployments, not enterprise IT.
While parallel processing is good for CPUs, serial communication is better for connections. Peripheral Component Interconnect Express (PCIe) is a lower-latency bus than the parallel PCI standard it replaced, and it also has higher data transfer rates. Each serial PCIe lane pairs dedicated transmit and receive paths, so data can travel over the bus in both directions simultaneously, whereas the shared parallel PCI bus only sends data in one direction at a time. You'll find PCIe connections for network interface cards, graphics cards and storage accelerators on data center servers.
Hardware and applications are becoming more abstracted, with cloud architectures and software-defined data centers. However, a fundamental understanding of how different hardware designs handle different workloads pays off, whether in making the best use of available capacity or in finding a simple hardware fix for a bottleneck. For example, rather than rewrite a complex application that is lagging, the data center staff can spec local storage accelerators on PCIe buses. Conversely, when acquiring new hardware for a new application, the data center team can work with developers and programmers to understand how the app will use its resources and plan the best deployment to serve it.