TDC BUSINESS 2022: Secrets Behind Serverless

Introduction
This post is a summary of my presentation at TDC BUSINESS 2022. Cloud computing has transformed the way we interact with applications and services. One of the most significant advances in this field is the adoption of the serverless model, which allows applications to be executed without the need to manage servers. In this post, we will explore the evolution of serverless, starting with its launch by AWS in 2014, and discuss the initial challenges and improvements over the years.
The Beginning of Serverless
In 2014, AWS introduced the concept of serverless computing with the launch of Lambda functions. Initially, Lambda functions were executed on EC2 instances, which were automatically provisioned by AWS, without the need for direct management by developers.
Lambda functions were executed in Linux containers, which included the language runtime and function code. However, there were some limitations in this initial architecture, such as security issues and resource waste.
Initial Challenges:
- Security: The containers shared the host operating system's kernel, so insecure code executed in one container could affect the others. AWS considered blocking insecure system calls, but this would limit developers and restrict their freedom to write whatever code they wanted, even if insecure.
- Resource Waste: Isolation between accounts was ensured by giving each account its own EC2 instance to execute its Lambda functions. This wasted hardware resources, since not all users fully utilized what was allocated, and it made Lambda executions more expensive at the time.
Firecracker
Firecracker is a virtual machine (VM) manager that works in conjunction with Linux’s Kernel-based Virtual Machine (KVM) to create tiny, fast VMs. It was designed to solve the isolation and resource optimization problems faced by serverless developers.
In contrast to the previous model, Firecracker provides stronger isolation between serverless functions. Each Lambda function now runs inside a “micro-VM” with its own isolated guest kernel. This means that even if insecure code is executed in one function, it cannot affect other functions, since they no longer share a kernel with each other or with the host machine.
In addition, Firecracker addresses the issue of resource waste. Previously, each AWS account had its own dedicated EC2 instance to execute its Lambda functions, which often resulted in underutilized resources and higher costs. With Firecracker, AWS can run multiple Lambda functions from different accounts on the same EC2 instance, optimizing resource allocation and reducing costs.
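To make the micro-VM idea more concrete, here is a rough sketch of how a Firecracker micro-VM is configured and booted through its REST API, which is exposed over a Unix domain socket. The socket path, kernel image, and rootfs file names are illustrative assumptions based on Firecracker's public documentation, not details from the talk.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over the Unix domain socket that Firecracker listens on."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def put(conn, path, body):
    # Each PUT configures one aspect of the micro-VM before boot.
    conn.request("PUT", path, body=json.dumps(body),
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    resp.read()
    return resp.status

# Assumes a Firecracker process is already running and listening on this socket.
conn = UnixHTTPConnection("/tmp/firecracker.socket")

# A tiny guest: 1 vCPU, 128 MiB of RAM, its own kernel and root filesystem.
put(conn, "/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
put(conn, "/boot-source", {"kernel_image_path": "vmlinux",
                           "boot_args": "console=ttyS0 reboot=k panic=1"})
put(conn, "/drives/rootfs", {"drive_id": "rootfs",
                             "path_on_host": "rootfs.ext4",
                             "is_root_device": True,
                             "is_read_only": False})

# Boot the micro-VM; because it has its own guest kernel, code inside it
# is isolated from the host kernel and from other micro-VMs.
put(conn, "/actions", {"action_type": "InstanceStart"})
```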
Defining Serverless Computing
A definition that makes sense for serverless computing is “any computing platform that hides the use of the server from developers and runs code on demand, automatically scaled and charged only for the time the code is running.” We can extract three main characteristics from this definition:
- Hides server use: When developing a serverless application, developers do not need to worry about allocating hardware resources, whether physical or virtual, as with an EC2 instance. The server still exists in the background, but it is not something developers manage directly (see the handler sketch after this list).
- Automatic scaling: In a container-based environment such as Kubernetes, developers need to define auto-scaling rules; in serverless this is not necessary. The platform scales functions automatically as demand changes, without developer intervention.
- Usage-based billing: Unlike the provisioned model, where allocated resources are charged whether or not they are being used, serverless charges only for the time the code is running, which can result in significant cost savings.
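As a concrete illustration of these three characteristics, a Lambda function in Python reduces to a single handler: there is no server setup, scaling rule, or billing configuration anywhere in the code. This is a minimal sketch, not production code.

```python
# handler.py - a minimal AWS Lambda handler in Python.
import json

def handler(event, context):
    # 'event' carries the request payload; 'context' exposes runtime metadata
    # such as the remaining execution time and the request id.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform invokes this function on demand, scales it out as requests arrive, and bills only for the milliseconds it actually runs.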
Comparing Provisioned and Serverless Computing:
In the provisioned computing model, when a workload consumes all the hardware resources of a server, developers need to manually expand capacity by creating server replicas and load balancing the workload. This contrasts with the serverless model, where expansion and scaling are automatically managed by the platform.
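For contrast, a provisioned setup typically requires the developer to wire up this kind of scaling rule explicitly, for example a target-tracking policy on an EC2 Auto Scaling group. The sketch below uses boto3 with a hypothetical group and policy name; in the serverless model there is no equivalent step.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group: keep average CPU around 70% by
# adding or removing EC2 replicas behind a load balancer.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)
```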
Benefits of Serverless:
A 2021 study concluded that the main benefit of serverless is allowing developers to focus on implementing business and application rules, leaving the complexity of infrastructure management to cloud providers. This allows developers to generate value for the market by focusing more on their applications and less on infrastructure and scalability issues.
Why Use Serverless
According to a 2021 study by Castro and Paul, the main motivations for adopting serverless are:
- Saving resources on irregular or intermittent workloads: With serverless computing, you only pay for what you use, which is especially advantageous for workloads that do not occur frequently or regularly.
- Avoiding operational concerns thanks to built-in scalability: Infrastructure is managed by the cloud provider, allowing developers to focus on their applications and businesses without worrying about scalability and maintenance.
- Shorter time to market: Since developers do not have to deal with infrastructure, new products can be launched and delivered to the market faster.
The study also identified the main types of applications that use serverless computing:
- 42% Core Functionality: The core business application is running on serverless, as is the case with Zé Delivery in Brazil.
- 39% Utility: Some auxiliary functionalities are running on serverless, such as sending emails.
- 16% Scientific Workloads: Researchers and scientists use serverless to perform analyses and experiments.
In terms of cost, a study conducted by BBVA compared the costs of provisioned and serverless computing. The results showed that, in many cases, serverless computing can be more cost-effective, especially when the number of requests per second is lower.
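To give a feel for that comparison, here is a back-of-the-envelope cost model. The prices are illustrative approximations of published per-request, per-GB-second, and per-hour rates, not figures taken from the BBVA study.

```python
# Rough monthly cost comparison for a low-traffic workload.
# All prices are approximate and for illustration only.

requests_per_second = 1
seconds_per_month = 30 * 24 * 3600
requests = requests_per_second * seconds_per_month          # ~2.6M requests

# Serverless (Lambda-style pricing): pay per request and per GB-second used.
memory_gb = 0.128           # 128 MB function
duration_s = 0.1            # 100 ms average execution
price_per_million_requests = 0.20
price_per_gb_second = 0.0000167

serverless_cost = (requests / 1_000_000) * price_per_million_requests \
    + requests * memory_gb * duration_s * price_per_gb_second

# Provisioned (EC2-style pricing): pay for the instance whether it is busy or not.
ec2_hourly = 0.0416         # e.g. a small general-purpose instance
provisioned_cost = ec2_hourly * 24 * 30

print(f"Serverless: ~${serverless_cost:.2f}/month")
print(f"Provisioned: ~${provisioned_cost:.2f}/month")
```

With these assumptions the serverless bill comes out around one dollar a month against roughly thirty dollars for an always-on instance, which is the pattern the study observed at low request rates; as traffic grows, the curves eventually cross.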
One of the main challenges of serverless computing is the so-called “cold start.” This is the extra latency incurred when a function is invoked and no initialized execution environment is available, so the platform must set one up before running the code, degrading the experience for the user waiting on that first request. However, there are several strategies to mitigate this problem, such as using provisioned concurrency or creating alerts in CloudWatch.
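As an example of one of those mitigations, provisioned concurrency can be enabled on a published version or alias of a function so that execution environments are kept initialized ahead of traffic. A minimal sketch with boto3 follows; the function name and alias are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 5 execution environments warm for the 'prod' alias of a hypothetical
# function, so invocations routed to it skip cold-start initialization.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="orders-api",
    Qualifier="prod",
    ProvisionedConcurrentExecutions=5,
)
```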
One of the distinctive features of serverless computing is that applications behave like stateless functions: variables allocated during an execution are deallocated when it completes, so values that must persist between invocations need to be kept in an external store, such as an in-memory database like Redis or Data Grid.
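Because nothing allocated inside the handler survives the invocation, persistent state has to live outside the function. A minimal sketch using the redis-py client is shown below; the host variable and key name are hypothetical, and the connection is created outside the handler so warm invocations can reuse it.

```python
import os
import redis

# Created at module load time, outside the handler: warm invocations of the same
# execution environment reuse this connection, but the data itself lives in Redis.
cache = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)

def handler(event, context):
    # Any local variable here is gone after the invocation finishes,
    # so the running counter is kept in Redis instead.
    count = cache.incr("invocation_count")
    return {"statusCode": 200, "body": f"This function has run {count} times."}
```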
In summary, serverless computing offers a series of benefits, such as resource savings, built-in scalability, and shorter time to market for products. However, it is important to consider its characteristics and challenges when designing a serverless application to ensure that you get the most out of this innovative approach.
Cite this article
You can cite this article in your academic work.
@article{rodrigues2023bhserverless,
title={TDC BUSINESS 2022: Secrets Behind Serverless},
author={Rodrigues, Thiago Luiz},
journal={URL https://rodriguesthiago.me/posts/secrets_behind_serverless/},
year={2023}
}