Containers as Portable Deployment Artifacts

Context

Microservices are being adopted. Each microservice has a specific runtime environment with dependent libraries and configurations. There are multiple deployment stages, e.g. a testing environment and/or multiple production environments.

Problem

  • Microservice instances behave differently in different environments
  • The setup of the environment is tedious: it is either done manually or automated with a configuration management tool, which adds its own layer of complexity.

Solution

Create one container image per microservice as a deployment artifact.

Image-based container technology supports the self-containment of microservices: dependencies are packaged inside the deployment artifact, which avoids incompatible library and language versions. The images can be instantiated as running containers on any machine that can execute containers. Because handling containers is independent of the technology running inside them, teams can choose their technology stack freely, fostering the independence of teams and microservices. Containers provide a nearly identical runtime environment almost everywhere and thus solve the "runs on my machine" problem.
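
A minimal sketch of such a per-microservice image, assuming a Java service packaged as a self-contained JAR (base image, paths, and port are illustrative, not taken from the sources). Because the rarely changing runtime layer comes before the frequently changing application layer, rebuilds and image transfers can reuse cached layers:

```dockerfile
# Illustrative Dockerfile for one microservice; image name, JAR path, and port are assumptions
FROM eclipse-temurin:17-jre
WORKDIR /app
# The service and all of its dependencies become part of the deployment artifact
COPY build/libs/order-service.jar ./order-service.jar
# Port the service listens on
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "order-service.jar"]
```

The resulting image can be built once (e.g. `docker build -t order-service:1.0 .`) and instantiated on any Docker host, regardless of the technology inside.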

VMs are an alternative to containerization. Both technologies ease deployability and allow co-locating multiple environments on one physical host, which contributes to scalability and saves (cloud) operation costs. However, VMs produce more overhead, are less lightweight, and still require configuration management tools (e.g. Ansible, Chef). The faster startup time of containers allows them to adapt to load faster. The layers of container images allow optimizing image transfer by reusing cached layers.

Container images improve the hardening of the deployment artifacts because their build description is declarative. Handling the supply chain of a container image becomes more manageable than with configuration management tools. For the German industry, module SYS.1.6 of the IT-Grundschutz-Kompendium by the BSI is a good starting point for hardening container images.
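
A hedged sketch of what such hardening can look like in the declarative build description (the minimal base image, package, and non-privileged user below are illustrative choices, not requirements taken from SYS.1.6):

```dockerfile
# Illustrative hardening sketch: pinned minimal base image, only required packages, non-root user
FROM alpine:3.19
# Install only what the service actually needs
RUN apk add --no-cache openjdk17-jre-headless
# Create and switch to a non-privileged user
RUN addgroup -S app && adduser -S app -G app
USER app
WORKDIR /app
COPY --chown=app:app order-service.jar .
ENTRYPOINT ["java", "-jar", "order-service.jar"]
```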

Maturity

Proposed, to be evaluated.

Sources of Evidence

L3:

  • microservices can have multiple instances
    • virtualization
      • not cost efficient
      • computational overhead
      • configuration management system to create prod and test env
    • containers
      • lower overhead
      • better isolation
      • portability => anywhere that supports containers
        • no need to change source code
        • removes the need to specify environments
      • e.g. Docker => docker registry
      • reduces conflicts between dev and ops teams

L5:

  • Context: fewer occurrences => need for future research
  • automation of container management and deployment as one of them

L7:

  • Usage of lightweight containers
    • Docker + Dropwizard (different scopes: OS virtualization vs. code library assembly)
    • Comparison to SOA: out of scope of SOA
  • Open questions
    • Are novel container patterns and technologies needed?
      • or are established component and container models such as Spring Boot and Spring Cloud sufficient?

L8:

  • microservices usually packaged and deployed to cloud using lightweight container technologies
    • DevOps practices
    • automated software integration and delivery
  • Containers (LXC, Docker, rkt) existed before microservice trend
    • allow individual services to be more effectively packaged, deployed, and managed at runtime
  • First generation of microservices
    • individual services packaged using lightweight containers as LXC
    • then deployed and managed at runtime using container orchestration tool as Mesos

L9:

  • containers are agile and lightweight
    • great promises in achieving desired level of efficiency
  • Containers + microservice design => closer to unlocking full potential of software-defined systems
    • [Interpretation] Containers as part of configuration as code
      • potential for automation!
    • code portability
      • packaged along with dependencies into image
      • "develop once, deploy everywhere"
      • VMs can achieve similar, but not as portable and as lightweight
      • resolves configuration complexity and runtime consistency of infrastructure
        • automation tools as Chef, Puppet not necessary any more
          • many config points to define dependencies between SW and deployment env
          • install/uninstall or update process cumbersome
          • config points can turn into break points or errors
        • packaging as container image => less config points
          • Example: OpenStack deployment
            • Dependencies: 22 (chef) -> 5 (Docker)
            • Download links: > 50 (chef) -> 6 images (Docker)
            • Config variables: 80 (chef) -> 26 (Docker)
    • easy lifecycle management
      • satisfy demands of continuous delivery and integration
      • operators prefer managing lifecycle of infrastructure code with image-based approaches (e.g. versioned docker images) to avoid in-situ updates
      • deployment and tracking of different releases simultaneously in different environments, online A/B testing, fast rollback, etc.
        • easy rollbacks
      • amplify benefits as components => decoupled preventing interfering with one another during maintenance
    • efficient resource utilization
      • multiple containers per server
      • more efficient than VMs in accessing memory and I/O devices, while still providing good isolation
      • single-host level: containers can run more cloud services than VMs while same performance
      • portable and fast-booting containers => run infrastructure components as "micro-services" than can be scaled out or back easily
        • fast container creation and startup times => respond better to workload variations in timely fashion
        • different layers => faster transfer time and more fine-grained version control
  • Context: OpenStack as Containers/Microservices
    • architecture
      • 1 container host for db, 1 container host for messaging service (RabbitMQ), third hosts controller services (Keystone, Glance, Nova, Neutron servers, ...)
      • each service packaged with dependent libraries as container image
        • can be downloaded and used to create container on any Docker host
        • contain scripts to generate config file at runtime
  • Rolling upgrade (see the sketch after this list)
    • zero-downtime
      • container with new version deployed to same host
      • redirect traffic when fully started
      • transparent to external clients
    • image-based DevOps simplifies rolling upgrade by eliminating complexities
      • e.g. (un)installing old/new binaries along with dependencies and resolving conflicts
      • faster with containers than VMs since fast startup time
        • in their test: 1.6x faster
        • downtime experience not notably different
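
A minimal shell sketch of such an image-based rolling upgrade on a single Docker host (container names, image tags, and the health endpoint are assumptions; an orchestrator such as Kubernetes or Docker Swarm automates these steps):

```sh
# Start the new version alongside the old one on the same host
docker run -d --name order-service-v2 registry.example.com/order-service:2.0

# Wait until the new container responds on its (illustrative) health endpoint
until docker exec order-service-v2 wget -qO- http://localhost:8080/health; do sleep 1; done

# Redirect traffic to the new container (e.g. via the reverse proxy), then retire the old one
docker stop order-service-v1 && docker rm order-service-v1
```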

L12:

  • Problems of virtualization
    • not cost-efficient + computational overhead
      • number of microservices, number of instances
    • need for Configuration Management systems => create exact test and production environments
  • Containerization
    • suits microservices well
    • (+) lower overheads than virtualization but also isolation
      • lightweight OS like CoreOS, Project Atomic => only have minimal parts to host many containers
    • (+) resolve "works on my machine" problems
    • (+) exact environments and artifacts in dev and prod env
    • (+) portability => deploy anywhere supporting containers without changing source code or image
      • clouds support it
  • Docker = defacto standard for containerization in industry

L13:

  • Context: performance evaluation of containers
  • Docker usage
    • (a) container as lightweight server
      • package application server and sidecars (logging, monitoring, config management, proxying,...) into container
    • (b) one process per container
      • technique is common, no name => Related Processes Per Container (RPPC)
      • ensemble of containers, one per process, e.g. one per sidecar (see the compose sketch after this list)
      • (+) speeds up deployment (only redeploy subset)
      • (+) reduces disruption
      • (+) empowers devops team
      • Comparison to Configuration Management Tools (Puppet, Chef)
        • similar: only update what needs to be changed
        • different: unit of deployment (container)
      • => useful as building block for microservices
  • Containers
    • very fast
    • wrap dependencies
    • wrap vagaries of implementations
  • Microservices with containers
    • Master-slave approach
      • one master container coordinating slave containers (run application process)
      • master tracks the subordinate containers, helps with their communication, and guarantees that slaves don't interfere with containers from other masters
    • Nested-container approach
      • all containers within one container
      • easier to manage
      • easy IPC, guaranteeing fate sharing, sharing memory, disk and network
      • more overhead => 2 layers of Docker daemon
  • Evaluation
    • bare metal vs. master-slave container vs. nested-container vs. VM
    • CPU Performance Evaluation
      • quite similar behavior of all variants => no significant performance impact for CPU-intensive executions
    • Overhead of Container Creation
      • master-slave fastest, followed by nested-container (8x to master-slave), followed by VM (2x to nested-container)
      • nested-container: overhead by initialization of Docker in parent container => load image stored locally on host and creation of child container itself
        • can be optimized by sharing a pre-created volume (8s -> 1.7s creation time)
        • but concurrency and security problems
        • increases more than linearly when overloading cores
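
A hedged docker-compose sketch of the one-process-per-container (RPPC) idea described above: the application and a logging sidecar run as separate containers, so only the changed part needs to be redeployed (service names, images, and the shared log volume are illustrative):

```yaml
# Illustrative compose file: application process and logging sidecar in separate containers
services:
  order-service:
    image: registry.example.com/order-service:1.0
    volumes:
      - app-logs:/var/log/app
  log-forwarder:
    # sidecar process in its own container, reading the shared log volume
    image: fluent/fluentd:v1.16-1
    volumes:
      - app-logs:/var/log/app:ro
volumes:
  app-logs:
```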

L14:

  • Containers well-suited for microservices
    • e.g. Docker
    • (+) lower overhead than VM
  • Integration into CI
    • Tests
    • Container => deployed to next stage: testing
      • load and integration tests
      • used to approve stories by POs

L16:

  • Virtualization platform as basis technology for microservices
    • esp. advances in application containers
  • (+) ease of deployment
    • each container: service and all dependencies
    • deploy without care about libraries, versions => libraries invisible to other services
    • tools for automatic deployment (e.g. Docker)
  • (+) better scalability
    • just start/stop container of image
    • overhead of containers much lower and start-up fast
  • Comparison to IoT CPS: OSGi used
  • Immutable Server Pattern
    • after application tested and put into operation => artifact is not altered anymore
    • don't provide user credentials to container
    • when sth needs to be changed => replace application container with new version of it
    • old version can be redeployed by replacing the new version again
    • Usage in IoT
      • not yet employed
      • could enable updates with minimum of risk and downtime
      • hard to find out how version handling is done in IoT domain

L17:

  • bottleneck in one microservice => microservice containerized and executed across multiple hosts in parallel

L18:

  • Containers and microservices natural pair
    • realize greater modularity
    • code reuse
    • reproducibility
    • fine-grained scaling
  • containers can simplify creation, deployment, distribution and maintenance of components
    • don't help with the issues of communication between services over a complex networking layer

L19:

  • Docker is good fit for microservices
    • supports to address inherent microservice challenges
      • higher level of complexity from development to going to production
      • automation in every aspect
      • failure isolation
      • testing
    • Accelerate automation
      • container as deployment unit to granularly contain a service
      • creation and launching of container is scriptable by design
      • many tools available
      • Docker accelerates automation culture in every step of software dev lifecycle
    • Accelerate independency
      • isolated box contains run time environment for service
      • wide platform support by Docker
      • teams can independently work on implementing service with whatever technology / language / process / tools they are comfortable with
    • Accelerate portability
      • contains all dependencies in container
      • container is portable among different platforms
        • on VM, local laptop, bare-metal server, cloud
      • different stakeholders can run application (devs, testers, admins, ...)
      • helps to do independent isolated testing of microservice
    • Accelerate resource utilization
      • lightweight and portable => qualities that make it a microservice-friendly environment
      • container = just application + dependencies
      • isolated process on host OS, sharing kernel with other containers
      • if placed in VM env => containerization makes it even more portable and efficient
      • in bare-metal env => lightweight nature helps creating and running more instances than VMs => better resource utilization
    • Secured
      • allows devs to maximize security at different levels
      • freely use pen testing tools to stress test any part of the build cycle
      • sources for a Docker image are explicitly and declaratively described in the Docker build => handle image supply chain more easily
        • easily force and mandate security policies
      • harden services by the immutable service pattern => strong security assurance for the services
  • Docker image created for service of repository including
    • appropriate versioning mechanism
      • keep track of level of code in the images and eventual Docker containers later on
  • easily harden immutable services by putting them into containers, which adds strong security assurance

L20:

  • Each microservice can be packaged and shipped in its own container
    • "natural way of packaging ans shipping microservices"
    • easy to deploy
    • easy to migrate over multiple cloud platforms (widely supported) => portability
    • small size of microservices => very fast to get up and running
  • Containerization as second largest gain regarding deployment in operation stage

L21:

  • portability: microservices usually packaged in containers (e.g. Docker)
    • include microservice and all its environment (lib, databases) in a unique entity
    • easy deployment on any platform supporting the container technology
      • a tool knowing how to deploy a container could deploy it no matter what is inside
    • uniform behavior over heterogeneous platforms
    • isolation
      • different library versions don't collide
      • fault tolerance by independent processes, only affected via interfaces + the resources they rely on
    • effortless relocation or replication of microservice across heterogeneous platforms => scaling

L23:

  • Comparison to VM: VM has Hypervisor and Guest OS;
    • Hypervisor-free containerization: Containers have Host OS + container engine but no guest OS
    • Hypervisor managed physical cloud: Hypervisor + Host OS + Container engine => container without guest OS
  • Hypervisor based stack => suited for IaaS clouds
    • Containers more suited for PaaS clouds
    • VM and container complement each other => need to analyze regarding performance isolation, overhead, and security requirements
  • Container used to create microservice
    • allows to instantiate, relocate, and optimize hardware resources in a more flexible way while providing near-native performance (hypervisor-free mode)
      • low overhead by sharing OS kernel
      • weaker isolation, greater security vulnerabilities
    • best solution to adapt changes in federated system

L24:

  • microservice ideally be packaged, provisioned, orchestrated through cloud by using lightweight container technologies
    • e.g. Docker
  • Context: benchmark requirements for OSS microservice projects
    • R7: Support for reusable container images
      • 3 of 4 examined projects fully fulfill R7
    • to accelerate deployment => containers like Docker
    • reusable images with whole SW stack and execution env
    • easily deploy to same virtual env independently from physical infrastructure

L25:

  • multiple instances of same microservice usually run in different virtualized containers/VMs
  • Multiple Service per Host Pattern: multiple services run on same host/node
    • Subpattern "service instance per container pattern" / "service instance per VM" by Richardson
    • (+) scalability: multiple service instances on same host
    • (+) performance: rapid deployment of new services compared to VMs
  • Single service per host pattern (in principle)
    • (+) better isolation, no conflicting resources
    • (-) dramatically reduce performance and scalability
    • => counterproductive in practice, violates basic idea of microservice
  • VM usage: complete startup => no quick deployments
    • need to maintain dedicated OS, service container, and all VM-related tasks

L26:

  • containers are more a coincidence than a direct consequence of microservice design
  • isolation of execution environments
  • lend themselves to scalability by quick instantiation on demand
  • other features required more work to overcome
    • well-thought-through mechanisms for network communication and the associated complications of using containers on different physical hosts / in different datacenters
    • security, monitoring, need to minimize operational size of container
    • => requires use of standards
      • appc by CoreOS => ACI container image format
      • runC container engine by Docker => OCI container image format
      • Open Container Initiative by the Linux Foundation and the Cloud Native Computing Foundation
  • microservice delivery matches well in many ways with containers
    • most popular currently to deploy microservices

L28:

  • microservices => build web service as Docker container on laptop => transfer image to production cloud
  • containers expected to put more pressure on computer systems than native processes (monolith)
    • consume CPU cycles to communicate with each other over API calls
    • Docker relies on OS features (cgroups, iptables) to isolate => more pressure on OS level
    • process within container runs as native process of the host OS
    • devs can consider container image as executable, container as native process
  • Docker network configuration (see the sketch after this list)
    • (1) host network
      • exports network interfaces of the host OS to the process in a Docker container
      • probably does not degrade performance
    • (2) bridge network
      • exports virtualized network interface to a container
      • container connected to a private network segment
      • iptables to transfer packets among virtualized interfaces of containers and other physical networks
      • dev can use any port numbers because virtualization avoids conflicts
        • => isolation
      • pressure on software and hardware; negative effect on performance
  • Portability
    • dev's laptop and production cloud
    • duplicates system configurations including standard libs
  • For fine-grained control within an orchestrator: container should only contain one process (even though it could contain more) => increasing number of containers
  • cloud vendors and devs need to understand microservice behavior when co-locating containers on one physical host
    • co-location to reduce cloud operation cost
  • CPU data cache misses
    • large amount of containers => more entries in iptables
    • access control => checking access control list
    • => large number of cache misses, degraded performance
    • => optimization opportunity of OS layer
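
A short shell sketch contrasting the two network modes described above (image name and ports are illustrative):

```sh
# (1) Host network: the container uses the host's network interfaces directly,
#     avoiding iptables/NAT overhead, but port numbers can conflict with the host
docker run -d --network host registry.example.com/order-service:1.0

# (2) Bridge network (default): the container gets a virtualized interface on a private
#     segment; iptables forwards traffic, so any container-internal port can be used
docker run -d -p 8080:8080 registry.example.com/order-service:1.0
```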

L30:

  • distributed => microservices more difficult to operate than monolith
    • solution: sophisticated (container-based) virtualization and infrastructure technologies necessary
      • e.g. Docker + K8s
    • => deployments much more dynamic and volatile

L31:

  • Migration practice MP13: Containerize the services
    • Context
      • each service requires specific environment to run correctly
        • manual setup or configuration management tool
      • differences between dev and prod env can cause problems
        • e.g. same code produce different behaviors
      • => deployment into prod becomes cumbersome task
    • Problem
      • How can dev and prod environments produce the same results for the same code?
      • How can the complexity of config management tools / the difficulties of manual deployments be eliminated?
    • Solution
      • Each service to individual VM + config management tool => isolation
        • waste of resources by virtualization
        • config management tool = layer of complexity in deployment
      • Containerization
        • more lightweight
        • can remove need for config management tool
          • ready images in central repo
          • further config in new images building stage
      • add step to CI pipeline
        • build container image
        • store image to private image repository
        • can be run in dev and prod env => same behavior
      • each service, own container image creation configuration => inside code repository
      • good practice: env variable as high priority source for populating software configuration
      • config keys can have different values in different environments
        • DB URL
        • credential
        • => easily injected in the container creation phase (see the sketch after this list)
    • Challenges
      • computational overhead
      • development env should be adapted to embrace containers
    • Technologies
      • Docker
  • For monitoring: add component to each service container image and configure
    • at creation of container image
    • at creation of actual container
  • Containerization found in all three case studies
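
A hedged sketch of injecting environment-specific configuration when the container is created (variable names and values are illustrative):

```sh
# Same container image in every environment; environment-specific values are
# injected as environment variables at container creation time
docker run -d \
  -e DB_URL="jdbc:postgresql://db.test.internal/orders" \
  -e DB_USER="orders_test" \
  registry.example.com/order-service:1.0
```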

L32:

  • Microservice has own codebase, team, set of virtual machines or containers in which it runs

L34:

  • microservices emphasize lightweight virtual machines => containers (e.g. Docker) or individual processes
    • (+) unbinds dependency on certain technology => service specific infrastructure
  • Industry relates microservices to container technologies that simplify automated deployment
    • credit for building self-contained microservice deployment units
  • contributes to scalability
  • part for "Deployment operations"

L37:

  • Context: Example system using microservices
  • Docker and K8s as microservice container

L40:

  • optimized allocation of application components within containers in VMs on top of host machines by the cloud infrastructure
    • minimize waste of resources and maximize packing of components within a single VM
    • simpler and faster migration from one VM to another to satisfy cloud applications' changing resource demands

L41:

  • Context: example system
  • Containerization: services hosted in Linux containers on Docker Swarm Cluster
    • use of Docker toolchain
    • Docker images in internal Docker Registry
      • images are deployed there when a new version of a service is successfully built and tested by CI
  • Automation
    • CICD pipelines
    • Docker supports process
  • Containers => same env for local testing, test servers, production
    • team controls whole infrastructure with Docker, including databases and open ports
  • Dependency on Docker => Linux containers becoming standard through Open Container Initiative

L42:

  • need for infrastructure for container management

L43:

  • Containers might become mainstream, esp. with container management and orchestration systems
  • Containers solve key problems, enabling rapid deployment pipelines and advanced deployment strategies
    • solve the "runs on my machine" problem => the container holds the code and its exec env such as libs, OS, env settings => confidence that it works if it successfully executes on test servers
      • removes concerns about differences in OS versions and complex code dependency chains
    • homogeneous abstraction allowing deployment scripts and tooling to deal with a single entity no matter what is executed
      • different technologies can be deployed with same tool chain
  • CD pipeline with container technology
    • need for integrated toolchain
    • rapid deployments
    • confidence in relation to quality

L45:

  • Context: Automated recovery of microservice architecture
  • service descriptors contain properties and configurations of each service to package and run
    • deployment info in target env, name of service + container, input and output ports of containers, build path
    • e.g. Docker and Vagrant files
  • Docker and Docker-compose files can be used to recover the current architecture

L46:

  • Microservices: independent, un-coupled service running in own container; independently scalable

L47:

  • context: example application
  • Container tech made deployment possible
    • Docker contains all dependencies => black box
    • isolation, low coupling, without overhead of VMs

L49:

  • Isolation from other microservices and exec env via virtualized container

L54:

  • Microservices reside on VMs => enhances scaling property
    • small machine size sufficient
    • comparison monolith: whole new machine to scale minor service within application
  • Building VM image takes a long time
    • large size => network transfer complex
    • overhead by hypervisor
  • => need for container technology like Linux Container
    • separate space for processes without need for hypervisor
    • Docker => lightweight containers + handle them
    • pack up service with dependencies into single image => code portability
  • Containers on different machines => tool to locate containers and run them
    • scheduler layer => Cluster Manager

L55:

  • Microservices can be easily containerized and deployed as single processes
    • reduce cascading failures across the overall application
    • isolation => reactive scaling, independent monitoring, debugging, and testing

L59:

  • Microservices can be
    • event-processing scripts
    • containers
    • entire VMs
    • => need for systematic way to package and deploy them

L61:

  • Microservices can run on
    • physical machine running an OS
    • machine running a container engine
    • virtualized environment (hypervisor is mapped as OS)
    • machine running a container engine on top of virtualized environment
  • More than half of examined studies focus on microservice layer, not considering any other layer (Container, VM, HW, OS)
  • some studies: env as important aspect of architecture
    • container and VM layers often discussed
      • => key enabling technologies for MSA
    • combination of virtualization + containers => especially suitable for IaaS cloud model
  • interest in security and usability decreased in research trends
    • containers and resource limitations in virtualized env may have played a role in this context

L63:

  • Cloud instance layer: contains cloud instances provided by IaaS cloud providers
  • can run various containers that execute actual microservices

LN21:

  • microservices widely employ container based deployment for portability, flexibility, efficiency, and speed
    • e.g. Docker

LN41:

  • presence of containers provides a perfect environment for microservices
    • containers to test and deploy single services in separate containers across available network of computers
    • (+) remove dependencies on underlying infrastructure services, reduce complexity when dealing with multiple platforms
    • (+) standardized building and CI/CD
    • microservices and containers bind together and form an ecosystem
    • security problems of containers impact microservices => secure containers
  • security issues of containers
    • containers on host share kernel => may make it possible for attackers to gain access to container (kernel exploit)
    • DNS, escapes from containers, poisoned images, secret compromise
    • => necessity to secure containers to prevent attacks on microservices
    • (delves into details, too fine grained for our RQ)
  • microservices are deployed in many distributed containers => customers might be more suspicious about their private information
    • information leakage remains serious challenge
  • Context: ideal solution
    • part of it: employ containers adopting SELinux in VMs to ensure container security
    • VMs on physical machine and container instances running on VMs
    • increases complexity, but overcomes container vulnerability to kernel exploits
      • has to bypass VM kernel and hypervisor as first line of defense
    • isolation by namespaces and cgroups in containers is second line of defense
      • isolation not enough => need for additional measures
    • mandatory access control => SELinux as third layer of defense
    • mitigates security issues of microservice-based fog applications

LN42:

  • DevOps provides process framework for managing the microservice contained ecosystem

LN43:

  • Docker and similar tech enable easier development and deployment of microservices
  • (+) makes microservices portable and isolated
  • (+) no conflicts of dependencies
  • (+) no need to configure each environment
  • (+) imitate production environment on local env for developers
  • (+) multiple tools to handle scaling, deployment, and management of containers
    • among them: kubernetes providing horizontal scaling, service discovery, load balancing, etc.

LN44:

  • Context: microservice security issues by layers
  • layer virtualization
    • threat example: sandbox escape, hypervisor compromise, shared memory attacks, use of malicious/vulnerable images
    • mitigation example: stronger isolation, no shared library access, no shared hardware cache, verification of image origin and integrity, timely software updates, principle of least privilege
  • together with hardware layer: at least partially accessible to attackers
    • malicious hardware manufacturer
  • Context: security implications
  • isolation through loose coupling
    • shared-nothing principle and strict data owning => isolatable microservices; only access to what they need
    • integrity, confidentiality and correct execution warranted by secure containers and compiler extensions
      • secure containers: Docker using Intel Software Guard Extension (SGX)
  • Context: experiment
    • services can run as separate processes or in Docker containers
    • whole app run as multi-container docker application using compose
    • starts whole app with one command

LN48:

  • Context: deployment model for microservices
  • build container after compiling and testing microservices
    • push to container repository for deployment (see the sketch after this list)
  • Context: experiment
  • all microservices deployed in docker container => isolated from each other
    • Jenkins packages microservice in the form of a docker image
  • VMs vs container
    • enabler to sandbox microservices: they are independent of each other
    • (+) more secure
    • (+) easier to manage
    • (+) helps to achieve zero downtime and reduce efforts to roll updates
    • containers
      • (+) easier to manage as more lightweight
      • (+) outperform VMs (benchmark)
        • 66% reduction of image size compared to VMs (1056 vs. 257 MB)
        • significantly shorter deployment time (2 vs. 10 s)
        • less time for image creation => update times (17 vs. 5 s)
        • => used Docker instead of VM to deploy microservices
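
A hedged sketch of this CI step using the plain Docker CLI (registry, image name, and tag are illustrative; a CI server such as Jenkins would run these commands after compiling and testing the service):

```sh
# Package the tested microservice as a versioned image
docker build -t registry.example.com/order-service:1.4.2 .

# Push it to the private registry so every environment deploys the exact same artifact
docker push registry.example.com/order-service:1.4.2
```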

LM43:

  • Context: SLR findings about microservices in DevOps
  • S18: containers for dealing with scalability issue, easy way to scale operations by creating copies of the services
  • S27: recommends Kubernetes working with container tools like Docker
    • to deploy and scale microservices in production
  • S09 suggests using containers and VMs for better efficiency in resource utilization
  • S19 MSA on SONATA NFV => tools; among others: Docker
  • Challenge: providing security at runtime
    • virtualization through containers and VMs
    • security of individual microservice tends to be neglected
  • Table 10: lists "containerize the service"
  • Discussion about performance of Docker containers and VMs for implementing microservices
    • VMs better than containers on AWS EC2: container not directly on host
    • Networking causes overhead
    • Other study: container better performance than VMs
    • nested containers no significant performance impact
    • Unikernel can outperform container
    • => performance issues not merely solved by using Docker / VM / Unikernel
  • S12: service instance per container pattern
  • S31: multiple service instances per host pattern
  • S31: service instance per host pattern

LM45:

  • Context: interviews and insights from multiple cases on technologies and sw quality in MSA
  • C1-S1
    • Java-based microservices
    • Current deployment as JAR files via Ansible
      • comfortable
    • migration to Docker and Kubernetes planned to increase operability
  • C2-S3
    • new system has 10 dockerized Java services
  • C3-S5
    • Java services + one in Scala
    • Docker to support on-premise deployment for customers and internal SaaS hosting
  • C4-S7
    • 250 services, most in Java but also Node, Go, Kotlin
    • currently: most deployments as JAR or WAR files
    • move to Docker is ongoing
  • C9-S13
    • Migration from traditional WebSphere to Liberty on Docker
    • Service cut postponed, instead strangler pattern
  • Deployment of services (also Table 2)
    • most use docker (11 of 14)
    • remaining 3: migration is planned
    • exception: C10
      • use Docker for many services, but not a standard, so variety of deployment artifacts used
    • operability and portability of Docker valued highly
  • Portability profits from container technologies
    • attributed to containers and tools like Ansible
    • installability to change platform

LM47:

  • Context: SLR with tactics to achieve quality attributes
  • Containerization as tactic for virtualization through containers
  • higher performance than VMs
    • each VM has own operating system
    • containers share same host OS
      • require guest process to be compatible with host kernel
      • => reduces overhead, higher performance, less memory requirement, reduced infrastructure cost
  • more lightweight: create and run more microservices with higher resource utilization
    • single host: containers can run more microservices than VMs
    • multiple hosts: fast booting => scale microservices in or out easier, adapt to workload changes, more efficient resource usage
  • Docker as technology
  • Constraints: in cloud, accessing OS-level modules or data => sacrifice portability and security

LM48:

  • Context: microservice migration describes an example project (FX Core) and compares back to the monolith
  • containers => independent environments
    • handles heterogeneity
    • can be deployed on heterogeneous infrastructure, e.g. differently sized hosts
  • all services running on Docker Swarm cluster
    • Docker compose: deploy all services with dependencies for local testing
      • exactly the same env as for production (even though locally deployed) => same reliability between envs
    • Docker registry for container images
    • CI builds container images and saves in registry
    • images inherit from infrastructure and base image
  • containerization as part of orchestration

Interview A:

  • agile world: spawn test environment/instances
    • containerization is not an option but the industry standard!
  • if you start with containers, you will be at k8s quite fast
    • requires new skillset, new job descriptions
    • they started with a few containers, didn't know how many would come
    • iterated over container usage: log to std-out, not to local log file => the very basics of containerization
    • when 30-40 containers: accessing single log files
      • log files are gone since containers
      • => went to the cloud mindset: cattle vs. pets
  • Security
    • BSI 2018: community draft for containers: Document Sys-1.6-Container
      • will probably affect BSI Grundschutz Kompendium this year
      • => they use this as guideline
        • more knowledge required than some basic tutorials about Docker
    • non-privileged user in containers
    • pen testing was very pedantic
      • base images base on Alpine Linux
      • problems since netcat
      • could not extract netcat => had to extract everything
        • called hardening
      • devs running amok: no shell for dev purposes in containers
        • required compensation somehow
  • Container runtime
    • formerly Docker
    • now migration to CRI-O
      • Docker commercially difficult
      • migration went well since they did not rely on specifics
    • Mantra: vanilla Kubernetes, vanilla Docker without specialities
      • => compatibility to change container runtime

Interview F:

  • Deployment platform has an influence on how microservices communicate
  • Be it Amazon, Microsoft, or something with Docker