Since its launch in 2014, serverless architecture has had a great impact on software development, considerably changing how software products are built. Because serverless architecture removes the need to manage physical servers, developers can devote most of their time and effort to building outstanding software applications.

That being said, if we want to get the greatest possible benefit from serverless, it is important for developers not to focus only on its advantages, which would be an incomplete and misleading view, but to study and understand the wider implications of the technology, including its downsides.

Therefore, in this article, we are going to discuss the traits of serverless architecture.


A lower learning curve

A distinctive trait of serverless computing is that the initial learning curve for developers is lower than with traditional server management. Many of the technical tasks required when managing traditional servers, such as patching or debugging infrastructure, are simply not needed with serverless architecture.

This helps developers meet project goals more efficiently and in less time, which in turn gets applications ready for the market faster.

Nevertheless, the learning curve does not stay flat throughout the development process. As the application grows, things get more complicated for developers.

For example, even though you are not dealing with server management, you still need to pay attention to areas such as log management, Infrastructure as Code and networking. Because you handle these in ways that differ from traditional server management, it is important to learn how to do so properly.
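To illustrate the log management point, here is a minimal sketch of structured logging in a Python AWS Lambda handler. The function and field names are purely illustrative (they are not from the original article); the idea is simply that emitting JSON lines makes the logs much easier to query later in a log management service.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Emit structured (JSON) log lines so a log management service
    # can filter and aggregate them without fragile text parsing.
    logger.info(json.dumps({
        "message": "order received",            # illustrative event
        "order_id": event.get("order_id"),      # hypothetical field
        "request_id": context.aws_request_id,   # provided by Lambda
    }))
    return {"statusCode": 200}
```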

No more hosts to manage

One of the main characteristics of serverless architecture is that you never work directly with a server.

That, of course, means far less server maintenance: no need to constantly upgrade, debug or monitor your infrastructure, or to patch your servers, as you must with a traditional server-based architecture.

However, this also implies that you won't be able to rely on the same metrics you would use in traditional server hosting, such as error rates, Requests per Second (RPS), Average Response Times (ART) or Peak Response Times (PRT).

In serverless architecture, collecting those metrics will not be part of your job; you will need to monitor other kinds of metrics instead, which also means learning new ways to tune the performance of your architecture.

For example, Wisen Tanasa explains that with AWS Lambda you don't choose a number of CPU cores at all. Strange as it may seem, if you want more CPU available to a function you have to change its memory allocation size, because Lambda assigns CPU power in proportion to memory. You also need to beware of the problems that can appear when a performance test exceeds the concurrent execution limits.
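As a concrete illustration, here is a minimal sketch using the boto3 AWS SDK for Python, with a hypothetical function name, of how you might raise a function's memory size (and with it the CPU share Lambda allocates) and cap its concurrency so that a load test cannot exhaust the account-wide concurrent execution limit.

```python
import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "orders-api"  # hypothetical function name

# More memory also means a proportionally larger share of CPU.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    MemorySize=1024,  # MB
)

# Reserve (and thereby cap) this function's concurrency so a load test
# cannot exhaust the account-wide concurrent execution limit.
lambda_client.put_function_concurrency(
    FunctionName=FUNCTION_NAME,
    ReservedConcurrentExecutions=50,
)
```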

The question of state in your architecture

Statelessness is another main trait of serverless architecture.

One of the options serverless architecture gives developers is Functions as a Service (FaaS), whose basic principle is, as we have been explaining in this article, to offer a platform for customers to develop, run and manage applications without having to manage a server.

That said, it is important to know that FaaS is ephemeral: you cannot rely on keeping data in memory, because the containers used to run your code are eventually discarded.

Consequently, this is good for horizontal scaling: your application can scale out faster and more efficiently because you don't have to take care of its state.

Yet this trait has some downsides that every developer who wants to go serverless must consider. As Tanasa explains, if you can't store any data locally you have to be more careful with errors, and you won't be able to use software that depends on local state, such as HTTP sessions. To solve this second issue, you have to back the software you use with a Backend as a Service (BaaS).
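As an illustration, here is a minimal sketch, assuming a hypothetical DynamoDB table named "sessions", of keeping session state in a backend service rather than in the function's memory; every invocation reads and writes the table, so nothing is lost when the container is discarded.

```python
import boto3

# State lives in a backend service (here DynamoDB), not in the container.
table = boto3.resource("dynamodb").Table("sessions")  # hypothetical table

def handler(event, context):
    session_id = event["session_id"]  # illustrative event field

    # Read whatever state a previous invocation stored for this session.
    item = table.get_item(Key={"session_id": session_id}).get("Item", {})
    visits = int(item.get("visits", 0)) + 1

    # Write it back; the container itself may disappear at any time.
    table.put_item(Item={"session_id": session_id, "visits": visits})
    return {"visits": visits}
```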

Serverless means hostless, and hostless means more elasticity

Another trait of serverless architecture, directly linked to being hostless, is elasticity: in most cases a serverless design is highly elastic, which logically brings important advantages for scalability.

While in traditional server management developers must scale resources manually, in serverless architecture scaling is handled automatically. Moreover, many of the usual resource-allocation problems simply do not exist in this type of architecture, which is a great advantage.

Sometimes you will have to combine your serverless architecture with legacy systems that cannot cope with its high elasticity, which can overwhelm and break those downstream systems, so you obviously need a plan for dealing with that situation.

In those cases, Tanasa suggests setting a limit on your AWS concurrency or using a queue to communicate with the downstream systems. This is an issue worth paying attention to, especially if your downstream systems are business-critical.
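For instance, here is a minimal sketch of the queue-based option, assuming a hypothetical SQS queue: instead of calling the downstream system directly, the function drops a message onto the queue, and the legacy system can consume it at whatever rate it can actually handle.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue that buffers work for a slower legacy system.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/legacy-orders"

def handler(event, context):
    # Publish to the queue instead of calling the legacy system directly,
    # so bursts of Lambda invocations don't overwhelm it.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"order_id": event.get("order_id")}),
    )
    return {"queued": True}
```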

Serverless architecture is distributed by nature

In computing, distributed means splitting a system into several sub-services and running them on multiple computers or servers.

Because of statelessness, any data that needs to persist is saved in a Backend as a Service platform, which necessarily makes this type of architecture distributed. In other words, being distributed is another trait of serverless computing, and one directly linked to statelessness.

Alongside the pros of this trait there are also some cons, consistency being one of them, so we strongly recommend paying attention to the consistency model of the BaaS that you choose for your project.

Another thing you should be careful with is the way the BaaS platform delivers distributed messages.

For example, Tanasa underlines how complex the question of exactly-once delivery is. Another problem you should pay attention to is the behavior of distributed transactions.
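Because most messaging services in practice offer at-least-once rather than exactly-once delivery, a common defence is to make handlers idempotent. Below is a minimal sketch, assuming a hypothetical DynamoDB table named "processed_messages", that uses a conditional write to ignore a message that has already been handled.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical table whose primary key is the message id.
dedupe_table = boto3.resource("dynamodb").Table("processed_messages")

def handler(event, context):
    message_id = event["message_id"]  # illustrative field

    try:
        # The conditional write succeeds only the first time this id is seen,
        # so redelivered copies of the same message are silently skipped.
        dedupe_table.put_item(
            Item={"message_id": message_id},
            ConditionExpression="attribute_not_exists(message_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return {"status": "duplicate ignored"}
        raise

    # ... process the message exactly once here ...
    return {"status": "processed"}
```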

Pros and cons of being event-driven

Finally, let's talk about the event-driven trait. Serverless architectures are event-driven: each function of your application runs when, and only when, it is triggered by a specific event; otherwise it does not run at all.

One of the main consequences of being event-driven is that your architecture has considerably less interdependence between its components, i.e. a lower level of coupling, which can bring some benefits. For instance, Tanasa explains that in this kind of architecture it is much easier to introduce new functions that listen for changes in a blob store.
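To make that concrete, here is a minimal sketch of such a new function, assuming the standard AWS S3 notification event format: it reacts to objects being added to a bucket and can be wired up without touching the code that uploads those objects.

```python
import urllib.parse

def handler(event, context):
    # S3 sends one or more records per notification event.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # React to the new object; the uploader knows nothing about this function.
        print(f"New object s3://{bucket}/{key}")
```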

Yet there are also some drawbacks to the event-driven trait.

You may lose the integrated view of your architecture, which can make troubleshooting the system harder, so we consider it important to pay attention to distributed tracing, even though it is still a developing field when it comes to serverless architecture.
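One way to recover part of that view, sketched below under the assumption that AWS X-Ray tracing is enabled for the function, is to instrument your handlers with the aws_xray_sdk library so that calls to other AWS services show up as parts of a single trace.

```python
from aws_xray_sdk.core import patch_all, xray_recorder

# Patch boto3 (and other supported libraries) so their calls are traced.
patch_all()

@xray_recorder.capture("process_order")  # custom subsegment for our own logic
def process_order(order_id):
    # ... business logic whose latency will appear in the trace ...
    return order_id

def handler(event, context):
    return {"order_id": process_order(event.get("order_id"))}
```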

The clear solution is to refactor the code, along the lines of the 12-Factor App guidelines, and run it in a shared environment, perhaps via Docker or Kubernetes. But that’s a story for another day!