FaaS/Serverless Platform to Abstract Infrastructure

Context

Microservices are adopted. Microservices have infrastructure services or other microservices as dependencies. There is no sophisticated deployment environment yet. Potentially, there is a single-node deployment.

Problem

How can microservices be developed, deployed, and operated without having to build and manage a sophisticated deployment infrastructure?

Solution

Deploy to a serverless cloud platform that simplifies microservice development, deployment, and operation. A microservice becomes a collection of fine-grained ephemeral functions that can be updated independently.

The cloud provider runs the microservice functions to process incoming events and requests and uses compute resources only when events/API calls arrive. Internally, serverless platforms work with different stages of provisioning, freeing the allocated resources after a retention period. Performance on the very first function execution might be poor when resources need to be allocated, but improves drastically afterwards.
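
To make this concrete: a serverless function is essentially just a handler that the platform invokes per event. Below is a minimal Python sketch in the style of an AWS Lambda handler; the event shape and the greeting logic are illustrative assumptions, not a specific provider's contract.

```python
# Minimal FaaS-style handler sketch (AWS-Lambda-like signature).
# The event structure and business logic are illustrative assumptions.
import json


def handler(event, context=None):
    """Process one incoming event; the platform allocates compute
    only for the duration of this call."""
    name = event.get("name", "world")
    body = {"message": f"Hello, {name}!"}
    # Many FaaS platforms expect an HTTP-style response mapping.
    return {"statusCode": 200, "body": json.dumps(body)}
```

Locally this is just a plain function call, e.g. `handler({"name": "Ada"})`; the platform wires it to an event source and scales handler instances up and down as events arrive.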

FaaS/serverless features can automatically take over a lot of complexity that would otherwise have to be handled by the team, such as deployment, configuration, monitoring and logging, security patching, and scaling.

Using the convenience features of serverless computing makes the development and deployment of microservices much simpler. Operation can become more resource-efficient and save costs compared to other setups. However, the whole setup and architecture place a lot of trust in the underlying serverless cloud platform and the services it provides. This coupling can limit portability and lead to cloud vendor lock-in if the target platform is specific to one cloud vendor; open-source serverless solutions that allow self-hosting mitigate this. When choosing the serverless technology, make sure that all features needed to manage your microservices are available, since these platforms can be hard to extend.

Maturity

Proposed, to be evaluated.

Sources of Evidence

L8:

  • Context: "waves" of technologies enabling microservices
  • 9th wave: serverless computing technologies
    • Technologies: AWS Lambda, OpenWhisk, Azure Functions, Google Cloud Functions, Spring Cloud Functions
    • => FaaS cloud model
      • lets users develop, deploy, and deliver more fine-grained service functionalities / functions into production
      • no complexity of creating and managing the infrastructure resources
    • places much trust in underlying platform and the services offered
    • (-) drastically increased dependency on particular environment
      • counteracts one of the goals of microservices and reminds some of SOA infrastructure
  • 4th generation aims to bring microservices to new realm
    • exploit recent FaaS and serverless computing technologies to simplify development and delivery
    • microservices would turn into collections of ephemeral functions - each created, updated, replaced, and deleted as quickly and arbitrarily as possible
    • will communication-centric technologies (sidecars, service meshes) still be necessary then?
      • Current FaaS not there yet
      • Side-car-like functions to intermediate all function-to-function interactions => monitor and manage those => new kind of service/function mesh
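
The side-car-like intermediation of function-to-function interactions sketched in these notes can be illustrated with a simple wrapper; this is an illustrative assumption of what a "function mesh" could look like, not an existing mesh API.

```python
# Sketch of a "function mesh": every function-to-function call passes
# through an intermediary that can monitor and manage it.
# All names here are illustrative, not a real mesh framework.
import functools
import time

CALL_LOG = []  # collected by the mesh for monitoring


def mesh(func):
    """Wrap a function so all its invocations are intermediated and recorded."""
    @functools.wraps(func)
    def intermediary(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        CALL_LOG.append({
            "function": func.__name__,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return intermediary


@mesh
def enrich(order):
    return {**order, "enriched": True}


@mesh
def process(order):
    # a function-to-function interaction, observed by the mesh
    return enrich(order)
```

Calling `process({"id": 1})` records both the inner and outer invocation in `CALL_LOG`, which is the kind of observability a service mesh sidecar provides for network calls.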

L15:

  • Serverless environments use OS containers such as Docker to deploy and host microservices
  • Granular code deployments
    • incremental and rapid scaling of server infrastructure (surpasses elasticity by dynamically scaling VMs)
    • load balance across servers => minimize idle server capacity better than VM placements
    • small size and footprint of containers => containers can be aggregated and reprovisioned more rapidly than bulky VMs
    • to multiple regions for redundancy and fault tolerance
  • Cloud provider responsible for: creating, destroying, load balancing requests across container pools
  • status COLD for infrastructure
    • to conserve server real estate and energy
    • deprovision containers when service demand is low
    • free infrastructure to be harnessed by others
    • => better server utilization leading to workload consolidation and energy savings
  • 4 different types of function invocation with respect to infrastructure warm-up for serverless computing infrastructure
    • provider cold: first service invocation for microservice code release
    • VM cold: first service invocation to VM hosting one or more containers hosting microservice code
    • container cold: first service invocation on OS container hosting microservice code
    • warm: repeated invocation to preexisting container hosting microservice code
  • Load balancing in serverless
    • round robin
    • or based on CPU
  • RQs:
    • performance implications of serverless => impact of COLD vs WARM
      • extra infrastructure is provisioned to compensate for initialization overhead of COLD service requests
      • Container initialization overhead significant, esp. for VM cold init
      • requests against WARM infrastructure do not always reuse the extraneous infrastructure
    • load balancing in serverless => influence of computational requirements of individual microservices
      • well-balanced distribution across containers and host VMs for WARM and COLD
      • for WARM: well-balanced also at higher stress levels; at lower stress levels, uneven distribution across hosts
    • retention of microservices => performance implications
      • after 10min: containers deprecated first, followed by VMs
      • performance degradation of up to 15x after 40 min of inactivity
    • memory
      • optimal memory reservation requires benchmarking on platform!
  • Serverless platforms abstract most infrastructure management configuration from end users
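
The four warm-up states and the retention behavior described above can be sketched as a small simulation. State names follow the notes; the retention timeouts and data structures are assumptions for illustration (the notes only say containers are deprovisioned after about 10 minutes, before VMs).

```python
# Sketch: classify each invocation as provider-cold, vm-cold,
# container-cold, or warm, depending on which infrastructure layers
# already exist. Retention timeouts are assumed placeholder values.
CONTAINER_RETENTION_S = 10 * 60  # notes: containers deprovisioned first
VM_RETENTION_S = 40 * 60         # assumption: VMs are retained longer


class ServerlessPlatform:
    def __init__(self):
        self.release_deployed = False
        self.vm_last_used = None
        self.container_last_used = None

    def invoke(self, now_s):
        """Return the warm-up state of this invocation and refresh timers."""
        if not self.release_deployed:
            state = "provider-cold"   # first invocation for this code release
            self.release_deployed = True
        elif (self.vm_last_used is None
              or now_s - self.vm_last_used > VM_RETENTION_S):
            state = "vm-cold"         # hosting VM must be (re)provisioned
        elif (self.container_last_used is None
              or now_s - self.container_last_used > CONTAINER_RETENTION_S):
            state = "container-cold"  # VM still warm, container recreated
        else:
            state = "warm"            # preexisting container is reused
        self.vm_last_used = now_s
        self.container_last_used = now_s
        return state
```

A quick walk-through: the first call is provider-cold, an immediate second call is warm, a call after 15 idle minutes is container-cold, and a call after a much longer idle period is vm-cold, matching the performance-degradation observations above.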

L30:

  • AWS Lambda completely hides machines from developers
  • traditional performance models based on notion of (virtual) machines => inadequate

L41:

  • they reduced shared components to 1: the lambda framework
    • framework to connect to infrastructure and provide standard formatting methods (messages, logs, health checks)
  • [NOTE] Not sure if they mean a framework to connect to AWS lambda, or if it is something else (then not relevant here!)

L59:

  • take advantage of serverless architectures
  • Technologies: AWS Lambda, Google Cloud Functions, Azure Functions
  • cloud vendor runs small scripts to process incoming events, uses compute resources only when events/API calls arrive
  • (+) simplicity
  • (+) efficiency
  • can reduce compute costs by 90% or more
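
The cost-saving claim can be illustrated with a back-of-the-envelope comparison between an always-on server and pay-per-invocation FaaS for a low-traffic workload. All rates below are made-up placeholder numbers, not actual vendor pricing.

```python
# Sketch: always-on VM vs. pay-per-invocation FaaS for a bursty workload.
# All rates are hypothetical placeholders.

def monthly_server_cost(hourly_rate, hours=730):
    """An always-on VM is billed whether or not requests arrive."""
    return hourly_rate * hours


def monthly_faas_cost(invocations, seconds_per_invocation, rate_per_second):
    """FaaS bills only for compute actually consumed by invocations."""
    return invocations * seconds_per_invocation * rate_per_second


server = monthly_server_cost(hourly_rate=0.10)
faas = monthly_faas_cost(100_000, 0.2, rate_per_second=0.0002)
savings = 1 - faas / server  # > 90% for this low-traffic example
```

The saving disappears for constant high-load workloads, where the per-invocation billing of FaaS can exceed a flat server rate; this matches the "high costs for constant workload" concern raised in the interview evidence further below.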

L61:

  • Studies focus on microservice layer only, not other layers
  • aligns with serverless functions in cloud
    • dev provides business logic
    • operational overhead taken care by platform (e.g. AWS Lambda)
    • transparently managed infrastructure and operations aspects of the system
      • deployment
      • configuration
      • monitoring & logging
      • security facilities and patches
      • OS, platforms and library updates
      • management of service lifecycle
      • service vertical and horizontal scaling

L42:

  • microservice technologies are evolving fast; one fast moving area is serverless computing
  • encapsulates common functionality (but not business functionality) into a hosting microservice
    • handles common functionality like authentication, validation, monitoring
    • and executes proper business functions
  • provides "function as a service"
    • AWS Lambda, Iron.io, Google functions
    • => host your business functions
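
The hosting idea above, where common functionality wraps the actual business functions, can be sketched as follows; the concrete authentication and validation checks are illustrative assumptions.

```python
# Sketch: a hosting layer that handles common functionality
# (authentication, validation; monitoring could be added) around
# hosted business functions. Token store and rules are placeholders.
VALID_TOKENS = {"secret-token"}  # placeholder auth store


def host(business_fn, request):
    """Run common concerns first, then the hosted business function."""
    # authentication (common functionality)
    if request.get("token") not in VALID_TOKENS:
        return {"status": 401, "error": "unauthenticated"}
    # validation (common functionality)
    payload = request.get("payload")
    if not isinstance(payload, dict):
        return {"status": 400, "error": "invalid payload"}
    # the hosted business function contains only domain logic
    return {"status": 200, "result": business_fn(payload)}


def price_with_tax(payload):
    """Hosted business function: pure domain logic, no infrastructure code."""
    return round(payload["net"] * 1.19, 2)
```

The business function stays free of infrastructure concerns, which is what makes it a good fit for deployment as a standalone FaaS function.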

L45:

  • Context: interviews and insights from multiple cases on technologies and sw quality in MSA
  • C10-S14
    • employ various AWS offerings to host services
      • ElasticBeanstalk, Fargate, Lambda functions as examples
  • FaaS / serverless NOT seen as a viable option in most cases
    • vendor lock-in
    • high costs for constant workload
    • request-response focused
    • slow start-up
    • immaturity of technology
    • => unfitting for complex custom solutions
    • need for stability before such disruptive move
    • difficult to explain FaaS mindset and operation model to customers / management
    • 2 participants use it, 3 think about using it
      • choose use cases very selectively