In a previous article we talked about microservices. We will now tackle a more recent trend which, unlike microservices, was prompted by developers themselves and is being sponsored by AWS, Azure and Google Cloud: Serverless.
Before getting into detail, we should draw a clear distinction from microservices, which makes defining Serverless less painful: a microservices architecture is the result of applying the "divide and conquer" principle to avoid a monolithic architecture, whereas Serverless starts from the idea of no longer provisioning servers or handling low- and mid-level infrastructure management, leaving the cloud service provider in charge of it.
In some papers, Serverless is formally associated with the term FaaS (Function as a Service), in which the developer writes a small piece of code with a single responsibility (fetching a record from a database or storing a user, for instance). Once written, the code is exposed through products supplied by the cloud (among those AWS provides, some examples are Route 53, API Gateway and S3), and the application is then consumed by users. Amazon Web Services' own presentation shows how this architecture works.
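To make the FaaS idea concrete, here is a minimal sketch of a single-responsibility function following AWS Lambda's Python handler convention; the in-memory `USERS` dict stands in for a real database, and the event shape is the one API Gateway typically passes, so treat the field names as illustrative assumptions.

```python
import json

# Hypothetical in-memory "database"; a real function would query a managed store.
USERS = {"42": {"id": "42", "name": "Ada"}}

def handler(event, context):
    """Lambda-style entry point: one function, one piece of functionality."""
    user_id = event.get("pathParameters", {}).get("id")
    user = USERS.get(user_id)
    if user is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(user)}
```

Everything around this function (routing, scaling, metrics) is the provider's job; the team only ships the handler.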
Under the Serverless model, the team focuses directly and exclusively on the functionality; the cloud takes care of exposing the code, generating metrics and, furthermore, scaling the product to match demand, which saves the team considerable effort. What's more, the payment model is usually limited to the processing cost on the cloud's servers and the number of requests received.
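That pay-per-use model can be sketched with a back-of-the-envelope calculation. The rates below are illustrative assumptions for the example, not any provider's actual pricing:

```python
# Illustrative cost model for a pay-per-use function.
# These rates are assumptions for the example, not a provider's real price list.
PRICE_PER_MILLION_REQUESTS = 0.20    # USD per million invocations
PRICE_PER_GB_SECOND = 0.0000166667   # USD per GB-second of compute

def monthly_cost(requests, avg_duration_ms, memory_mb):
    """Estimate the monthly bill: requests billed flat, compute billed by GB-seconds."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# e.g. 3 million requests a month, 120 ms average run, 256 MB of memory:
print(round(monthly_cost(3_000_000, 120, 256), 2))  # → 2.1
```

With zero traffic the bill is zero, which is exactly the "no idle infrastructure" point made below.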
Some great benefits
The architecture based on Serverless offers:
- Lower operating and service costs: you pay for what you use; there is no idle infrastructure.
- Usability with minimal effort: the provider is accountable for supplying everything necessary, while the development team only sets the key parameters for the functionality in use.
- Productivity: effort can be focused on features, not on how and where the hardware will run.
And now some drawbacks
Although it is certainly promising, this architecture also presents challenges when implemented:
- Rarely invoked functions suffer high latency (cold starts) because the cloud seeks to reduce idle infrastructure.
- It is neither healthy nor advisable to apply it when the functionality calls for a heavy processing load.
- Methodologies such as TDD (Test-Driven Development) become much harder to apply; thorough end-to-end testing that includes the infrastructure is sometimes impossible.
- Giving each function an optimal setup requires knowledge of the many services involved in its correct operation; hence a role in charge of infrastructure should not be discarded.
- Changing cloud providers can become a huge undertaking, whose success depends on whether similar services exist on the target platform.
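The testing drawback above can be partially mitigated by keeping the business logic in plain functions that the entry point merely wraps, so the logic can be unit-tested without the cloud in the loop. A minimal sketch, with hypothetical names and an in-memory dict standing in for the real store:

```python
import json

def store_user(name, db):
    """Pure business logic: testable locally, with no cloud infrastructure."""
    user_id = str(len(db) + 1)
    db[user_id] = {"id": user_id, "name": name}
    return user_id

def handler(event, context, db=None):
    """Thin FaaS entry point; only this wrapper touches the real infrastructure."""
    db = db if db is not None else {}
    body = json.loads(event["body"])
    user_id = store_user(body["name"], db)
    return {"statusCode": 201, "body": json.dumps({"id": user_id})}

# TDD-style checks run locally, no provider involved:
db = {}
assert store_user("Ada", db) == "1"
assert db["1"]["name"] == "Ada"
```

Only the thin `handler` wrapper still needs the (harder) infrastructure-level tests; the rest stays under ordinary unit tests.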
Emerging technologies such as virtual assistants, bots and IoT are the main consumers of this kind of architecture, and because of this it may well extend to other areas. Wrapping up, Serverless is an emerging architecture that still has great challenges to face, but it also offers, no doubt, many interesting opportunities along the way.