With all the cloud options available, network engineers—and business leaders—often have trouble deciding whether to deploy applications in a public or private cloud. There is no single answer to this question; as with all trade-offs in information technology broadly and network engineering specifically, the answer will always be some variant of “How many balloons fit in a bag?”
How big is the bag? Are the balloons inflated or not? What are the balloons inflated with—water or air? There are too many variables to give a single answer; the best anyone can do is offer more questions to ask and guidelines to follow.
This section considers three applications as “case studies.”
These case studies assume
• Public and private cloud options have similar capabilities. In a successful, well-managed private cloud deployment, developers and application owners experience roughly the same capabilities they would in a public cloud, and the positive and negative aspects of the two options are balanced.
• While a private cloud will always require more planning and operational effort than a public cloud, these costs are often offset by the private cloud’s lower ongoing expense.
• Business leaders are more interested in solving actual problems in information technology than in following the latest trend.
• Commercial applications and services available only as a pure cloud-based SaaS solution are not considered here.
Insurance Estimation Application
Suppose a large insurance company wants to build a new application to streamline estimates and approvals for claims arising from natural events such as floods and tornadoes. The company often employs contractors and part-time adjusters during significant events.
This means
• Company representatives will use their own devices, preferably tablets, to run this application.
• The application must be accessible over low-bandwidth links, including cellular networks, guest Wi-Fi, etc.
• Corporate security is concerned with protecting the network from infiltration through privately owned devices.
• Corporate leadership is very concerned about exposing internal information about customers, risk management, risk aversion, and other aspects of the business. Company leadership believes that turning crucial data over to a public cloud provider risks the company’s continued profitability.
The company’s information technology leaders are especially concerned about data gravity, illustrated in Figure 17-3.
Figure 17-3 Data Gravity
• Application A is moved from a private, on-premises cloud (or a traditional data center network) to a public cloud provider. Data set A is moved to the public cloud to support application A.
• Application B, still on-premises, suffers a significant performance loss because part of the data application B uses is now accessible only through a VPN between the on-premises data center and the public cloud instance. To resolve this situation, the IT team moves application B to the cloud, along with data set B.
• Application C, still on-premises, suffers a significant performance loss because part of the data it uses is accessed over a VPN. To resolve this, application C is moved to the public cloud.
Data gravity can be summarized in three rules:
• It is harder to move data around than people think.
• Applications tend to follow existing data.
• New data tends to follow existing applications.
Many public cloud providers are open about using data gravity in their favor. Public cloud providers often allow customers to move data into their cloud service for free. Some providers will bring a specially designed truck on-site to pull data out of existing data centers to transfer it to the cloud without using the network. However, removing data is not easy; public cloud providers often charge to pull data from the cloud, and no “data transfer trucks” are available to pull data from a public cloud and transfer it to an on-premises private cloud.
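To make this asymmetry concrete, the short Python sketch below estimates the one-time charge for moving a data set into and back out of a public cloud under assumed flat per-gigabyte rates. The rates, the data set size, and the function name are hypothetical placeholders for illustration, not any provider’s published pricing.

# Hypothetical illustration of data-gravity cost asymmetry.
# The rates and data set size below are assumptions, not real pricing.

INGRESS_RATE_PER_GB = 0.00   # moving data into the cloud is often free
EGRESS_RATE_PER_GB = 0.09    # assumed per-GB charge to move data back out

def transfer_cost(gigabytes: float, rate_per_gb: float) -> float:
    """Return the transfer charge for moving a data set at a flat per-GB rate."""
    return gigabytes * rate_per_gb

data_set_gb = 250_000  # assume a 250 TB claims data set

print(f"Cost to move data in:  ${transfer_cost(data_set_gb, INGRESS_RATE_PER_GB):,.2f}")
print(f"Cost to move data out: ${transfer_cost(data_set_gb, EGRESS_RATE_PER_GB):,.2f}")

Even at a modest per-gigabyte rate, the outbound charge grows linearly with the size of the data set, which is one reason applications tend to follow data into the cloud rather than the data following the applications back out.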
There are several points of tension in these requirements:
• Any information representatives collect must be pulled off the device as quickly as possible to protect the data. Individually owned devices tend to be more easily compromised, potentially causing a breach of private data.
• Any interface between representative-owned devices and the corporate network must be heavily guarded against infiltration.
• Data must be pulled from cloud services into company-owned and operated private cloud resources as much as possible.
These tensions suggest the best solution would be a hybrid-cloud deployment. Figure 17-4 illustrates a possible model for this application.
Figure 17-4 Remote Worker Insurance Claim Application
The application has four distinct parts:
• Collection servers running in the public cloud. Representatives connect to this application through a web browser to record information through handwritten notes, forms, photographs, etc. Each collection server has access to only a single case file at a time, as requested by the representative. This reduces the scope of data released if one of these collection servers is breached.
• The data collected in the public cloud application is then transferred to a set of front-end servers. These servers are accessible only by the collection servers, protecting them from data breaches and other attacks.
• The front-end servers do the initial data cleanup, sending alerts to internal employees, managers, etc. when specific conditions are met (such as a claim over a given dollar amount).
The data is then transferred to a database server. Access to the data stored on the front-end servers is tightly controlled by user and role within the company.
• The database servers store the data for other applications, like the analytics service in the illustration. Access to this data is tightly controlled on a per-application basis; a minimal sketch of this access scoping follows this list.
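The Python sketch below illustrates two of the controls described above: a collection-server session that can touch only the single case file a representative has checked out, and per-application, per-role authorization at the database tier. The class names, roles, and policy table are illustrative assumptions, not the insurer’s actual design.

# Illustrative sketch only: names, roles, and policies are assumptions.

class CollectionSession:
    """A public-cloud collection server session scoped to exactly one case file."""

    def __init__(self, representative_id: str, case_id: str):
        self.representative_id = representative_id
        self.case_id = case_id          # the only case this session may touch

    def record(self, case_id: str, field: str, value: str) -> dict:
        # Refuse to read or write anything outside the checked-out case,
        # limiting what a compromised device or session can expose.
        if case_id != self.case_id:
            raise PermissionError("session is scoped to a single case file")
        return {"case": case_id, "field": field, "value": value}


# Per-application, per-role access control at the private-cloud database tier.
DATABASE_POLICY = {
    "front_end": {"adjuster": {"read", "write"}, "manager": {"read"}},
    "analytics": {"analyst": {"read"}},
}

def authorize(application: str, role: str, action: str) -> bool:
    """Return True only if the application/role pair is allowed this action."""
    return action in DATABASE_POLICY.get(application, {}).get(role, set())


session = CollectionSession("rep-17", "case-42")
session.record("case-42", "roof_damage", "severe")   # allowed
print(authorize("analytics", "analyst", "read"))     # True
print(authorize("analytics", "analyst", "write"))    # False

Scoping each collection session to a single case file limits the amount of data exposed if a representative-owned device or a collection server is breached, which is the same reasoning given above for the collection servers.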
This application follows a hybrid multi-cloud deployment model, taking advantage of widely available cloud services while preserving corporate data ownership by storing and processing the data in a private cloud.
Multi-cloud models use more than one cloud provider.
Hybrid cloud models use a mixture of public and private cloud services.