Without knowledge of an application's communication patterns, organizations might choose the wrong location to host it and create network latency issues.
A cloud workload that communicates heavily with on-premises workloads generates far more traffic between the public cloud and the enterprise data center, which requires architectural improvements to ensure sufficient connectivity, said Lowe.
IT teams must also assess the complexity of their network architectures. Perhaps a data center's north-south traffic triggers a significant amount of east-west requests. Or perhaps an application is built on a set of microservices that must communicate with each other frequently.
"If you take one component of a larger group of pieces that are communicating to form an application and you stick that in a public cloud, you've just done two things," said Lowe. "You've introduced some latency … and you may have introduced a point where you've got 10 or 15 different services needing to communicate with one or two services in the public cloud, and you may have a fairly significant amount of traffic -- more than perhaps you really understand."
If you decide to move an application to the public cloud, carefully choose the region that will host it to avoid network latency issues. Know which users will connect to that region, where they will connect from, how much bandwidth you need and what other components or workloads are involved.
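One way to ground that region decision is to gather round-trip-time samples from your main user sites to each candidate region and compare them. The sketch below is illustrative only: the region names and RTT figures are hypothetical, and the `pick_region` helper is not part of any cloud provider's tooling.

```python
import statistics

def pick_region(samples):
    """Given round-trip-time samples (in ms) per candidate region,
    return the region with the lowest median latency.

    samples: dict mapping region name -> list of RTT measurements.
    The median is used rather than the mean so that a single slow
    outlier does not skew the comparison.
    """
    if not samples:
        raise ValueError("no candidate regions supplied")
    return min(samples, key=lambda region: statistics.median(samples[region]))

# Hypothetical ping results gathered from the main user sites.
rtts = {
    "us-east": [18.0, 19.5, 21.0],
    "us-west": [72.0, 70.5, 75.0],
    "eu-west": [95.0, 90.0, 99.0],
}
print(pick_region(rtts))  # -> us-east
```

In practice you would collect the samples over days, not minutes, and from every location that matters, since latency varies with time of day and network path.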
Create a spectrum of potential network latency issues for that cloud workload, and determine how much latency is tolerable, said Fidacaro.
"If it's a lot of volume and requires a lot of bandwidth, does it make sense to run all of that data across this pipe into a centralized public cloud?" he said.
If it doesn't, an enterprise can shift compute power and analytics to the edge. Use edge analytics to compress and filter that data locally, send only what you need to retain across the network and push the rest up to the cloud.
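The filter-at-the-edge pattern can be sketched simply: forward only the readings that matter in full, and reduce the rest to a compact summary before anything crosses the WAN. The threshold, sensor values and `filter_at_edge` helper below are all hypothetical, chosen only to illustrate the idea.

```python
def filter_at_edge(readings, threshold):
    """Split raw sensor readings at the edge: values past the
    threshold are forwarded in full, while the rest are reduced
    to a small summary, so only a fraction of the data crosses
    the network."""
    anomalies = [r for r in readings if r > threshold]
    summary = {
        "count": len(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }
    return anomalies, summary

# Hypothetical temperature readings from an edge sensor.
readings = [21.0, 20.5, 22.1, 48.7, 21.3]
to_cloud, rollup = filter_at_edge(readings, threshold=30.0)
print(to_cloud)  # only the one outlier crosses the WAN in full
print(rollup)    # the rest travels as a compact summary
```

Here five readings shrink to one forwarded value plus a three-field rollup, which is the bandwidth saving the edge-analytics approach is after.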