When working with applications, one of the essential prerequisites is a development environment. This may not be obvious at first, but a great developer experience can ultimately save you money. A working, reliable, and automated development environment is therefore key to improving developer experience and, in turn, productivity.
Below we'll review a few options so you can make a decision based on your needs.
Local Docker-based environment
This is the simplest setup: you only need to install Docker locally. Choose this approach for relatively simple environments. For example, I would pick it for a system with up to 6 components, e.g. a couple of services running together with an API gateway, or a Rails app with a database and Redis. I wouldn't use a Docker-based environment to wire 10 microservices together.
- simpler approach
- reliable tooling that allows for easy volume binding, e.g. for sharing code or dependencies
- easy integration via docker-compose
- doesn't give you dev/prod parity unless you deploy to Docker Swarm (which you shouldn't do anyway)
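As a concrete sketch of the Rails-plus-database-plus-Redis case above, a minimal docker-compose.yml might look like this. Image versions, ports, and paths are illustrative assumptions, not a prescription:

```yaml
# Illustrative docker-compose.yml for a Rails app with Postgres and Redis.
# Names, versions, and ports are assumptions for this sketch.
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app            # bind-mount source code for live reload
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app_development
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data   # persist data across restarts
  redis:
    image: redis:7
volumes:
  pgdata:
```

A single `docker compose up` then brings the whole environment up, which is exactly the "easy integration" benefit mentioned above.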
Local Kubernetes-based environment
Choose k8s-based environment when:
- you deploy to k8s and dev/prod parity is important for you
- you do Kubernetes-specific work, for example developing Helm charts
- you want your developers to understand how applications run in production
This approach requires more powerful local workstations.
- dev/prod parity when you deploy to Kubernetes
- Complexity around provisioning, deployment, volume binding
- High resource usage (especially when trying to reproduce a production-like environment, for example with Istio)
It is not uncommon to adopt a Docker-based environment for day-to-day development, and when you need to test how the application runs in Kubernetes (without going through the CI/CD loop), build the Docker image locally and test the deployment on a local Kubernetes cluster. This keeps the workflow developer-friendly initially; the trade-off is time spent testing k8s-related pieces separately.
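The hybrid workflow above can be sketched with a local cluster tool such as kind. This assumes kind and kubectl are installed; "myapp" and deploy.yaml are placeholders for your own image and manifests:

```shell
# Sketch of the hybrid workflow; image and manifest names are illustrative.
kind create cluster --name dev                # one-time: local Kubernetes cluster
docker build -t myapp:dev .                   # build the image locally
kind load docker-image myapp:dev --name dev   # make the image visible to the cluster
kubectl apply -f deploy.yaml                  # test the deployment
kubectl rollout status deployment/myapp       # wait for it to become ready
```

minikube offers an equivalent flow (`minikube image load`); the point is that the image never leaves your machine.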
Remote development environment
This is the most complex setup of the approaches mentioned here, because it involves multiple moving pieces and you also need to address the security implications of remote development environments:
- Fully remote
- all of the platform components are running in remote environment
- code is synced to remote environment via volume sharing or file sync
- build stage happens within remote environment
- Partial remote
- all of the components are running in remote environment, but
- individual components can be switched to run locally and connected to the cluster via tunnels
- build stage happens on local machine
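The partial-remote variant can be sketched with plain kubectl port-forwarding: the component you are editing runs locally, while its dependencies stay in the remote cluster. Service names, ports, and the `bin/rails server` command are illustrative assumptions:

```shell
# Sketch: run one component locally against remote dependencies.
# Service and port names are placeholders for this example.
kubectl port-forward svc/postgres 5432:5432 &   # tunnel remote Postgres to localhost
kubectl port-forward svc/redis 6379:6379 &      # tunnel remote Redis to localhost

# The locally built component now talks to the cluster through the tunnels:
DATABASE_URL=postgres://localhost:5432/app_development \
REDIS_URL=redis://localhost:6379/0 \
  bin/rails server
```

For the reverse direction (routing cluster traffic to your local process), dedicated tools such as Telepresence exist.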
Empowering developers to collaborate
When done right, this approach provides an excellent and reliable developer workflow. In addition to lighter resource consumption locally, developers can use their remote environments to collaborate with fellow developers (pair programming, coaching), product managers (feature review), and QA engineers (reproducing bugs, initial testing).
- lightweight resource requirement on your local workstation
- superb developer experience
- allows for streamlined collaboration within a single shared environment
There is complexity around orchestrating remote environments:
- Managing remote environments in automated way (security upgrades, provisioning of new images)
- Tunneling between local workstation and your remote development machine
Security: where is the code you work with? Who has access to it? Who can run it or download it?
Alternatives that aren't discussed here
A previously common way to automate provisioning of development environments was Vagrant, a fantastic tool from HashiCorp that provisions local VMs with multiple provisioners, such as shell scripts, Chef, Puppet, Docker, and others.
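For reference, a minimal Vagrantfile looks like this. The box name, forwarded port, and provisioning script are illustrative assumptions:

```ruby
# Illustrative Vagrantfile: an Ubuntu VM provisioned with a shell script.
# Box name, port, and script contents are assumptions for this sketch.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.network "forwarded_port", guest: 3000, host: 3000
  config.vm.synced_folder ".", "/vagrant"        # share project code into the VM
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y docker.io
  SHELL
end
```

`vagrant up` then creates and provisions the VM in one step.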
However, we see more and more teams adopt Docker-based development environments due to their simplicity and dev/prod parity.