Ken Muse

Azure Container Solutions


Containers have found their way into nearly every part of the development lifecycle. From local development to running production systems, they are a key component of modern applications. Trying to understand the best choice in Azure can be quite the challenge with so many options available. Today, I’ll quickly cover some of the available options and when you might consider each. This isn’t a comprehensive list and it doesn’t cover the do-it-yourself VM approach, but it will help you understand the current landscape in Azure.

Azure Container Apps (ACA)

This is the newest offering from Microsoft and represents a powerful way to build scalable, flexible microservice architectures without the need to understand the complexities of Kubernetes. It can be thought of as a PaaS container orchestration solution. It has built-in support for the open source Distributed Application Runtime (Dapr) and can be easily connected to other Azure services to build powerful, connected solutions. Under the covers, it utilizes Kubernetes and KEDA, so it supports scale-to-zero. It can be used for long-running processes and dynamic workloads. Like many other services, you pay for the time the resources are being used. Unlike Azure Container Instances, it supports certificates and scaling up to 25 replicas based on events such as queue length, concurrent requests, CPU utilization, and memory utilization. It also offers some ingress control, so it supports port forwarding (you can expose ports 80/443 and have them routed to the actual port used by the container).

While this clearly represents a new investment and direction from Microsoft, it is currently still in preview, so you should be very cautious with any production usage. The preview also comes with a few limitations. For example, the options for memory and CPU are limited at the moment. It is also not able to mount volume storage, although it can use Dapr state storage. Finally, it’s only available in a handful of regions.
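
Since Dapr state storage stands in for volumes here, it may help to see what that looks like from an application’s point of view. Below is a minimal sketch using the Dapr Python SDK to save and read a value through the sidecar; the state component name (statestore) and the key are placeholder values you would configure for your own Container App.

```python
# pip install dapr
from dapr.clients import DaprClient

# A minimal sketch of using a Dapr state store instead of mounted storage.
# "statestore" is an assumed state component name configured for the app.
with DaprClient() as client:
    # Persist a value through the Dapr sidecar
    client.save_state(store_name="statestore", key="order-1001", value="pending")

    # Read it back; the response data is returned as bytes
    state = client.get_state(store_name="statestore", key="order-1001")
    print(state.data.decode("utf-8"))
```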

Longer term, this is likely to be a preferred path, offering a high degree of flexibility without the need to manage a Kubernetes cluster; if more control is needed, Azure Kubernetes Service is the next step up. For the moment, ACA is still in preview – so it’s a great chance to learn about Dapr and containerization! The nice thing about the solution is that Dapr applications can be easily migrated to AKS later, so the tradeoffs are minimal.

Azure Container Instances (ACI)

This is an older technology providing container support. It is a single-instance environment, so it cannot be scaled to match workloads. Because it is not tied to the Azure event system, it’s typically a poor fit for reactive programming needs. It can be connected to Azure Kubernetes Service as a Virtual Kubelet, enabling containerized activities to be dynamically created and managed by AKS. This can make it a good fit for processing jobs, burst capacity, or work that needs to execute in isolation. It can also work well if you need to connect to a container to perform work that requires storage, since it does support volumes. Similar to ACA, you pay for the size of the resources and the consumed time. Unlike ACA, it supports a broader range of CPU and memory sizes. ACI also supports container groups, making it possible to deploy a set of related containers onto a single system.
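
To make the volume support concrete, here’s a rough sketch using the Azure SDK for Python (azure-mgmt-containerinstance) to create a container group that mounts an Azure Files share. The subscription, resource group, share, and key values are placeholders, and the exact model names can vary slightly between SDK versions, so treat this as a starting point rather than a recipe.

```python
# pip install azure-identity azure-mgmt-containerinstance
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    AzureFileVolume, Container, ContainerGroup, ResourceRequests,
    ResourceRequirements, Volume, VolumeMount,
)

client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# An Azure Files share mounted into the container at /mnt/data (placeholder values).
volume = Volume(
    name="data",
    azure_file=AzureFileVolume(
        share_name="myshare",
        storage_account_name="mystorageaccount",
        storage_account_key="<storage-account-key>",
    ),
)

container = Container(
    name="worker",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)),
    volume_mounts=[VolumeMount(name="data", mount_path="/mnt/data")],
)

# A container group can hold several related containers on the same host.
group = ContainerGroup(location="eastus", os_type="Linux",
                       containers=[container], volumes=[volume])
client.container_groups.begin_create_or_update("my-rg", "my-aci-group", group).result()
```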

ACI has a few limits which must be considered. First, the container’s ports cannot be forwarded. This means that if the container uses port 8888, only port 8888 can be exposed. There is no ingress control or port forwarding, which also means that this solution cannot be scaled for traffic. It is also currently unable to utilize certificates. By default, it is limited to 100 container groups and 10 cores per region, but a support ticket can raise those limits. Finally, it can be slower to start than many of the other options.

Azure Kubernetes Service (AKS)

If you need complete control over your containers, the resources consumed, and how they scale, you want AKS. Because you choose the cluster size (and VM types), you have nearly complete control over how resources are deployed. It has built-in logging and monitoring, as well as the ability to scale both the services and the compute nodes. Because it is a managed cluster of VMs with an orchestrator, it can be integrated with nearly every Azure service, and it can manage and control ACI to enable burst loads.
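
As one small example of that service-level control, the sketch below uses the official Kubernetes Python client (nothing AKS-specific) to attach a HorizontalPodAutoscaler to a deployment so replicas track CPU utilization. The deployment name, namespace, and thresholds are hypothetical placeholders.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl is already configured for the AKS cluster

# Scale a hypothetical "orders-api" deployment between 1 and 5 replicas based on CPU.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="orders-api-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="orders-api"
        ),
        min_replicas=1,
        max_replicas=5,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```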

Although it’s a PaaS offering, it still requires some understanding of how to manage and deploy to Kubernetes, and it may take some expertise if performance tuning is needed. You pay for the compute nodes in the cluster, so the costs are tied to the virtual machines. It does support scaling the nodes to zero to minimize these costs, which is highly recommended for development environments.

This is the most powerful and configurable container option, but it can also be the most expensive.

Azure App Service

If you’re creating a web application that exposes ports 80 and/or 443, this may be the right solution. Azure App Service can deploy and run the container that makes up the web application. Because the container runs within App Service, it gets most of the features of that platform. While this is not a full orchestration solution, the App Service can be configured to automatically manage and scale the environment, and it provides full support for certificates and request routing. Until ACA is out of preview, this is the most reliable solution for creating and maintaining containerized web applications without using AKS. Because ACI cannot scale automatically, it generally doesn’t work well for web applications or APIs that need that level of control.
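
For reference, pointing an App Service at a container image can be scripted with the Azure SDK for Python (azure-mgmt-web). This is only a sketch: it assumes an existing Linux App Service plan, resource group, and registry image, all of which are placeholder names here.

```python
# pip install azure-identity azure-mgmt-web
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import Site, SiteConfig

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Point the web app at a container image; the plan, registry, and names are placeholders.
site = Site(
    location="eastus",
    server_farm_id="/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/"
                   "Microsoft.Web/serverfarms/my-linux-plan",
    site_config=SiteConfig(linux_fx_version="DOCKER|myregistry.azurecr.io/web-app:latest"),
)

client.web_apps.begin_create_or_update("my-rg", "my-container-webapp", site).result()
```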

Once ACA is out of preview, its dynamic billing model will make it very competitive with Azure App Service. For now, use App Service when you need to run a monolithic web application that can dynamically scale and you don’t want to manage or maintain an AKS cluster. If you’re utilizing a full microservices architecture, then consider AKS until ACA is generally available.

Functions

Functions are best for reactive, event-driven processing. This service dynamically creates instances and scales to match the incoming requests. The base image for this service is very portable – it’s built on top of KEDA, making it possible to run this code inside a Kubernetes cluster. Functions are a perfect fit for a nano-service architecture or where a reactive programming model is needed. They are a poor fit for long-running processes, unless those processes can be decomposed into shorter steps using Azure Durable Functions. If more control is needed over the runtime environment, consider hosting the Function code in AKS to take advantage of the reactive model.
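
As a tiny illustration of that reactive model, here’s what a queue-triggered function body looks like in Python. It assumes a standard queueTrigger binding (declared in the function’s function.json) pointing at a hypothetical orders queue.

```python
# __init__.py for a queue-triggered Azure Function (Python).
# Assumes function.json declares a queueTrigger binding named "msg"
# for a hypothetical "orders" queue on the storage connection.
import logging

import azure.functions as func


def main(msg: func.QueueMessage) -> None:
    # Each queued message triggers one short, focused execution.
    payload = msg.get_body().decode("utf-8")
    logging.info("Processing order message: %s", payload)
```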

Keep in mind that long-running tasks are an anti-pattern in the cloud, so be careful to make the execution activities short and focused!

Azure Spring Cloud

This is a managed container offering focused on enabling the deployment of Spring Boot microservices. It is based on a partnership with VMware (the company behind Pivotal and Tanzu), making the Spring Cloud environment available in Azure. This option makes the most sense if you are already developing and deploying Spring Cloud services.

Azure Red Hat OpenShift

If you’re optimizing your containers for the Kubernetes-powered Red Hat OpenShift platform, this service lets you deploy them on Azure using the familiar OpenShift approaches to managing and scaling workloads.

Happy DevOp’ing!