Cloud Computing Guide: Key Technologies and Architecture

The Key Technologies and Services Shaping Cloud Computing

Cloud computing architecture now plays a key role in digital and business transformation, with technologies and services that make cloud ideal for modernizing existing IT infrastructure. It’s also a vehicle for evolution and change in general, with cloud serving as the platform for emerging technologies such as artificial intelligence (AI), blockchain, and the Internet of Things (IoT).

In this guide, we’ll be looking at some of the key technologies, services, and architecture of the cloud.

Cloud Native Application Development Technologies

The Cloud Native Computing Foundation (CNCF) defines cloud native technologies as those that let organizations provision tools and resources on demand. Their deployment and response can be near-instantaneous, making it possible to complete processes that formerly took weeks or months in a matter of hours, or even minutes.

These technologies “empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” We’ll get to those in a moment, but first let’s take a look at some of the other technologies and services that characterize the cloud environment.


Containers

Containers are lightweight, portable units that package all of the code and dependencies required to run an application. They can be easily scaled, and they provide the isolation needed to move workloads freely between different infrastructure and hardware setups. The same container can run on physical hardware in an on-premises data center, or in a virtual machine (VM) in the public cloud.

Containers can simplify the deployment, management, and operational challenges often associated with a hybrid cloud implementation.

Data Fabric Architecture

Data fabrics use the cloud as a medium to link information assets on premises, in public, private, or hybrid cloud deployments, or in an edge computing or Internet of Things environment.

Data fabric architectures enable organizations to readily locate their information, no matter how dispersed or decentralized the data landscape may be. In this way, enterprises can manage their internal data, and information pertaining to customers, suppliers, partners, contractors, and other external stakeholders.
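As a rough sketch of the idea, a data fabric can be thought of as a unified catalog that knows where every dataset lives, so consumers never need to care which environment holds it. All of the dataset names and locations below are illustrative, not a real product API:

```python
# A toy "data fabric" catalog: one lookup layer spanning on-premises,
# public cloud, and edge environments. All entries are illustrative.
CATALOG = {
    "customers":   {"location": "on-premises", "uri": "postgres://dc1/crm"},
    "clickstream": {"location": "public-cloud", "uri": "s3://analytics/clicks"},
    "telemetry":   {"location": "edge", "uri": "mqtt://plant-7/sensors"},
}

def locate(dataset: str) -> str:
    """Return where a dataset lives, without the caller knowing in advance."""
    entry = CATALOG[dataset]
    return f"{dataset} -> {entry['location']} ({entry['uri']})"

print(locate("customers"))  # customers -> on-premises (postgres://dc1/crm)
```

A real data fabric adds metadata management, governance, and access control on top of this lookup layer, but the core value is the same: one consistent way to find data across a dispersed landscape.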

Disaster Recovery as a Service (DRaaS)

Gartner, Inc., estimates that the average cost of IT downtime is close to $5,600 per minute. Disaster recovery (DR) is a strategy for getting organizations back up and running as quickly as possible in the event of a network outage or other calamity that disrupts standard operations. This applies to IT infrastructure, data stores, and enterprise communications.

With its basis in software and its allowances for redundancy in infrastructure (multiple data centers spread across the globe, alternate lines of communication, etc.), cloud architecture can readily implement automated DR strategies that minimize recovery time. There’s an entire market dedicated to providing Disaster Recovery as a Service (DRaaS), one that IDC estimates will reach $4.5 billion in 2020, with 15.4% growth through 2023.
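Gartner’s per-minute figure makes the stakes easy to quantify. The outage durations below are hypothetical; only the $5,600-per-minute rate comes from the estimate cited above:

```python
# Estimate the cost of an outage using Gartner's ~$5,600-per-minute average.
COST_PER_MINUTE = 5_600  # USD, Gartner estimate cited above

def outage_cost(minutes_down: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Return the estimated cost in USD of an outage of the given length."""
    return minutes_down * cost_per_minute

# A four-hour manual recovery versus a 15-minute automated-DR failover:
print(outage_cost(240))  # 1344000.0 -> about $1.34 million
print(outage_cost(15))   # 84000.0   -> $84,000
```

The difference between those two numbers is, in essence, the business case for automated DR and DRaaS.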

Infrastructure as a Service (IaaS)

Under the mantra of “infrastructure as code,” providers build and manage their cloud architecture through machine-readable definitions rather than manual hardware configuration, with an eye toward decreasing physical hardware and minimizing operating costs.

Cloud computing technologies enable providers to offer the traditional data center elements of storage, computing power, and networking in a virtualized form as software over the internet. These Infrastructure as a Service (IaaS) offerings enable subscribers to access infrastructure using their web browsers, and provide a cloud architecture that shifts the computing workload to a remote location.
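To make “infrastructure as code” concrete, here is a minimal sketch in which infrastructure is declared as plain data and realized by code. Every resource name and field is an illustrative assumption, not a real provider’s API:

```python
# A minimal "infrastructure as code" sketch: the desired infrastructure is
# described as data, and a function turns that description into resources.
# All names and fields here are illustrative, not a real cloud provider API.
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    cpus: int
    memory_gb: int

def provision(spec: list) -> list:
    """'Provision' each declared resource (here, just construct objects)."""
    return [VirtualMachine(**entry) for entry in spec]

# Declarative description of the desired state:
desired_state = [
    {"name": "web-1", "cpus": 2, "memory_gb": 4},
    {"name": "db-1", "cpus": 8, "memory_gb": 32},
]

fleet = provision(desired_state)
print([vm.name for vm in fleet])  # ['web-1', 'db-1']
```

Real IaaS tooling (Terraform, CloudFormation, and the like) works on the same principle at scale: compare the declared state with what exists, then create, update, or destroy resources to close the gap.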


Microservices

Microservices consist of a single function, or a small group of functions, that runs a particular aspect of a software application. These small services work together when an application runs, but because each one is isolated, the failure of one need not bring down the others, and the application can keep running. Microservices are also decoupled from each other, making it easier to replace or upgrade them as technologies evolve.

Microservices can rely on other services, generally communicating through load-balanced REST APIs or event streams. The underlying technologies that enable microservices include NoSQL databases, event streaming, and container orchestration, which together allow deployments to scale up to thousands of microservices.
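The event-stream style of decoupling can be sketched in a few lines: producers publish events to a topic, and each subscribed “service” reacts independently, so no service calls another directly. The topic and event names below are illustrative only:

```python
# A toy event-bus sketch of decoupled microservices: publishers never call
# consumers directly, they just emit events to a topic. Illustrative only;
# real systems use platforms like Kafka for durable, distributed streams.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
shipped = []

# Two independent "services" react to the same event stream:
bus.subscribe("order.placed", lambda e: shipped.append(e["order_id"]))   # shipping service
bus.subscribe("order.placed", lambda e: print(f"receipt for {e['order_id']}"))  # billing service

bus.publish("order.placed", {"order_id": 42})
print(shipped)  # [42]
```

Because neither subscriber knows about the other, either one can be replaced, upgraded, or scaled out without touching the rest of the application, which is the decoupling the paragraph above describes.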

Software as a Service (SaaS)

Just as cloud computing architecture allows IT infrastructure to take a virtualized, software-based form that can be delivered across networks, the same applies to software itself. Individuals and organizations no longer need to install and manage individual programs on their own machines.

Software as a Service (SaaS), sometimes called application as a service, takes the same subscription-based payment model as other “as a service” (aaS) options, and gives any user with an internet connection access to the latest and most powerful versions of productivity software and other applications.

Cloud hosted software and platforms also enable users to run custom applications from the cloud, and even entire virtualized desktops.

Multi-Tenant Public Cloud

Providers like AWS, Azure, and Google Cloud owe much of their success to multi-tenant public cloud provision. Here, multiple subscribers or “tenants” effectively rent a proportion of a common reserve of cloud computing resources (storage space, network bandwidth, processing power, etc.). Public cloud infrastructure typically consists of a provider’s network with several data centers spread over a wide geographical region.
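The “common reserve of resources” idea can be modeled in a few lines: tenants rent capacity from one shared pool, and the pool can run short, which is exactly the limitation the next section discusses. The capacity figures below are arbitrary illustrations:

```python
# A toy model of multi-tenancy: tenants draw from one shared pool of
# capacity, and the provider tracks each tenant's share. Illustrative only.
class SharedPool:
    def __init__(self, total_cpus: int):
        self.total_cpus = total_cpus
        self.allocations = {}  # tenant name -> CPUs rented

    def available(self) -> int:
        return self.total_cpus - sum(self.allocations.values())

    def rent(self, tenant: str, cpus: int) -> bool:
        if cpus > self.available():
            return False  # the shared reserve can run short
        self.allocations[tenant] = self.allocations.get(tenant, 0) + cpus
        return True

pool = SharedPool(total_cpus=64)
print(pool.rent("tenant-a", 48))  # True
print(pool.rent("tenant-b", 32))  # False: only 16 CPUs remain
print(pool.available())           # 16
```

Real providers mitigate this with enormous pools, overprovisioning, and per-tenant quotas, but the underlying economics of sharing a common reserve are the same.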

Private Cloud

Because of the shared nature of a multi-tenant public cloud, the resources available can run short at critical moments. And because tenants share common resources such as IP addresses, the indiscretions or malpractice of one tenant on the network (for example, activity that gets a shared IP range blacklisted) can tarnish the name and brand image of all the others.

For these reasons, and often to retain closer control of in-house data or intellectual property, many organizations prefer to set up their own cloud infrastructure. In a private cloud like this, the owner usually takes most if not all of the responsibility for managing, monitoring, and maintaining the infrastructure and data.

Hybrid Cloud Architecture

A combined deployment, or hybrid cloud architecture, allows organizations to benefit from the economy and features of a public cloud infrastructure, while retaining selective control over their own resources and data held in a private cloud. Similarly, a hybrid cloud strategy for master data management (MDM) enables organizations to deploy applications quickly, with information easily accessible for both real-time analytics and data warehouse operations, while allowing the enterprise to scale its infrastructure smoothly and keep sensitive data private and secure.
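The core hybrid decision, what stays private and what goes public, often comes down to a simple placement rule. The tier names and the “sensitive” flag below are illustrative assumptions, not a standard API:

```python
# Sketch of a hybrid-cloud placement rule: sensitive records stay in the
# private cloud; everything else goes to cheaper public-cloud storage.
# The tier names and the 'sensitive' flag are illustrative assumptions.
def placement(record: dict) -> str:
    return "private-cloud" if record.get("sensitive") else "public-cloud"

records = [
    {"id": 1, "sensitive": True},   # e.g. customer PII
    {"id": 2, "sensitive": False},  # e.g. public catalog data
]
print([placement(r) for r in records])  # ['private-cloud', 'public-cloud']
```

In practice the rule is driven by classification policy and regulation rather than a single flag, but the architecture follows the same shape: a routing layer decides, per workload or per record, which side of the hybrid boundary it belongs on.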

Edge Computing

A survey by Cisco predicts that the number of devices connected to IP networks will be more than three times the global population by 2022. And a study by McKinsey claims that 127 new IoT devices connect to the internet every second. All of these devices continually receive and transmit data, which analytical systems must study to monitor performance or yield valuable insights.

This kind of analysis is best performed as close as possible to the objects contributing the data. It’s for this reason that edge computing — moving IT infrastructure closer to the location where data is being generated — has been gaining in popularity and usage. Its enabling environment is the cloud.
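A common edge pattern is to aggregate raw device readings locally and forward only a compact summary to the cloud, saving bandwidth and reducing latency. The summary fields and sample values here are illustrative assumptions:

```python
# Sketch: an edge node summarizes raw sensor readings locally and sends
# only the summary upstream, instead of the full raw stream.
# The summary format and sample data are illustrative assumptions.
def summarize(readings: list) -> dict:
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [21.0, 21.4, 22.1, 21.7]  # e.g. one interval of temperature samples
print(summarize(raw))  # four numbers cross the network instead of every sample
```

With 127 new IoT devices connecting every second, pushing this kind of pre-processing to the edge is often the only way to keep upstream analytics tractable.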

Hyper-Scale Data Center Technologies

Besides the versatility of computing at the edge, organizations also require an IT infrastructure that can scale at a rapid pace, to keep step with changing market conditions and fluctuations in demand.

Hyper-scale data centers and IT systems take a modular approach to infrastructure: individual components provide extreme flexibility in scaling at the physical level, while cloud computing architecture enables those infrastructures to scale rapidly and exponentially.

Artificial Intelligence (AI)

No discussion of contemporary cloud technology would be complete without some reference to AI, or artificial intelligence. AI-based platforms can help data center operators to learn from past information and transactions, and to distribute workloads across peak periods more efficiently. Predictive analytics capabilities can now also allow for preventative maintenance, spotting network problems and infrastructure failures before they occur.
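The predictive-maintenance idea, spotting a failure before it happens, can be illustrated with a deliberately simple baseline check: flag any metric that jumps well above its recent average. The window, threshold, and sample readings are illustrative assumptions, a far cry from production AI, but they show the shape of the technique:

```python
# A minimal predictive-maintenance sketch: flag a metric that drifts far
# above its recent rolling average, before it becomes an outright failure.
# The window size, factor, and sample data are illustrative assumptions.
def anomalies(values: list, window: int = 3, factor: float = 1.5) -> list:
    """Return indices where a value exceeds (rolling mean * factor)."""
    flagged = []
    for i in range(window, len(values)):
        baseline = sum(values[i - window:i]) / window
        if values[i] > baseline * factor:
            flagged.append(i)
    return flagged

cpu_temps = [60, 61, 62, 61, 95, 62]  # a spike at index 4
print(anomalies(cpu_temps))  # [4]
```

Production systems replace the fixed threshold with learned models trained on historical telemetry, but the workflow is the same: establish a baseline from past data, then act on deviations before they cause downtime.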

Cloud providers make use of these technologies themselves, and many offer their subscribers AI-based services and solutions, delivered via the cloud.

Low Code Cloud Computing

Another emerging technology creating interest in the cloud is low code computing. Here, users with little or no development experience can build viable software solutions by using a drag-and-drop interface that requires very little knowledge of coding. Growing numbers of low code solutions are now entering the market.

“Serverless” Applications

Serverless applications run on compute services designed specifically for the cloud: rather than provisioning servers, developers upload functions that the platform executes on demand. Following the cloud computing principle of “pay as you go,” these functions consume computing cycles only when triggered.

The most popular of these services is Lambda from AWS, but both Microsoft (with Azure Functions) and Google (with Google Cloud Functions) also offer serverless application development environments for their own cloud computing architectures.
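A serverless function is typically just a handler the platform invokes when a trigger fires. The sketch below follows the AWS Lambda handler style in Python; the event shape and message content are illustrative assumptions:

```python
# A minimal serverless function in the AWS Lambda handler style: the
# platform calls handler(event, context) only when triggered, so compute
# is billed only while it runs. The event shape is an assumption here.
import json

def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a sample trigger event:
print(handler({"name": "cloud"}))
```

Deployed to a platform such as Lambda, Azure Functions, or Cloud Functions, the same handler would be wired to a trigger (an HTTP request, a queue message, a file upload) and scale from zero to many instances automatically.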