# Building Scalable Microservices with Docker and Kubernetes
Microservices architecture has revolutionized how we build and deploy applications at scale. Combined with Docker for containerization and Kubernetes for orchestration, this approach enables teams to develop, deploy, and maintain complex applications with unprecedented flexibility and scalability.
## Understanding Microservices Architecture
Microservices break down a monolithic application into smaller, independent services that communicate over well-defined APIs. Each service:
- Owns its data and business logic
- Can be developed by different teams
- Uses different technologies as appropriate
- Scales independently based on demand
- Deploys independently without affecting other services
## Benefits and Challenges

**Benefits:**

- Independent scaling and deployment
- Technology diversity
- Team autonomy
- Fault isolation
- Better testability

**Challenges:**

- Increased complexity
- Network latency
- Data consistency
- Service discovery
- Monitoring and debugging
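To make the network latency challenge above concrete: calls between services fail transiently, so they are typically wrapped in a bounded retry with backoff. The following is a minimal sketch; the attempt count and delays are illustrative assumptions, not recommendations.

```javascript
// A retry-with-backoff sketch for transient inter-service failures.
// Attempt counts and delays here are illustrative assumptions.
async function withRetry(fn, { attempts = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

module.exports = withRetry;
```

Retries amplify load on an already-struggling dependency, which is why they are usually paired with timeouts and circuit breakers, covered later in this article.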
## Containerizing Microservices with Docker

Docker provides a natural foundation for microservices by packaging each application with its dependencies into a lightweight, portable container.
### Dockerfile Best Practices

```dockerfile
# Use multi-stage builds for smaller images
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force

# Production stage
FROM node:16-alpine

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

WORKDIR /app

# Copy dependencies and source code
COPY --from=builder /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .

USER nodejs
EXPOSE 3000

# Use exec form for proper signal handling
CMD ["node", "server.js"]
```
### Service Example: User Service

```javascript
// user-service/server.js
const express = require('express');
const mongoose = require('mongoose');
const { body, validationResult } = require('express-validator');

const app = express();
app.use(express.json());

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'healthy', service: 'user-service' });
});

// User model
const User = mongoose.model('User', new mongoose.Schema({
  name: String,
  email: String,
  createdAt: { type: Date, default: Date.now }
}));

// Create user
app.post('/users',
  body('name').isLength({ min: 1 }),
  body('email').isEmail(),
  async (req, res) => {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }
    try {
      const user = new User(req.body);
      await user.save();
      res.status(201).json(user);
    } catch (error) {
      res.status(500).json({ error: error.message });
    }
  }
);

// Get user
app.get('/users/:id', async (req, res) => {
  try {
    const user = await User.findById(req.params.id);
    if (!user) {
      return res.status(404).json({ error: 'User not found' });
    }
    res.json(user);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Connect to MongoDB
mongoose.connect(process.env.MONGODB_URI || 'mongodb://localhost:27017/users')
  .then(() => console.log('Connected to MongoDB'))
  .catch(err => console.error('MongoDB connection error:', err));

const PORT = process.env.PORT || 3000;
const server = app.listen(PORT, () => {
  console.log(`User service running on port ${PORT}`);
});

// Graceful shutdown: stop accepting new connections, then close the DB connection
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(async () => {
    await mongoose.connection.close();
    process.exit(0);
  });
});
```
### Docker Compose for Development

```yaml
# docker-compose.yml
version: '3.8'

services:
  user-service:
    build: ./user-service
    ports:
      - "3001:3000"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/users
      - NODE_ENV=development
    depends_on:
      - mongo
    volumes:
      - ./user-service:/app
      - /app/node_modules

  product-service:
    build: ./product-service
    ports:
      - "3002:3000"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/products
      - NODE_ENV=development
    depends_on:
      - mongo

  order-service:
    build: ./order-service
    ports:
      - "3003:3000"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/orders
      - USER_SERVICE_URL=http://user-service:3000
      - PRODUCT_SERVICE_URL=http://product-service:3000
      - NODE_ENV=development
    depends_on:
      - mongo
      - user-service
      - product-service

  api-gateway:
    build: ./api-gateway
    ports:
      - "3000:3000"
    environment:
      - USER_SERVICE_URL=http://user-service:3000
      - PRODUCT_SERVICE_URL=http://product-service:3000
      - ORDER_SERVICE_URL=http://order-service:3000
    depends_on:
      - user-service
      - product-service
      - order-service

  mongo:
    image: mongo:5
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:
```
## Service Communication Patterns

### 1. Synchronous Communication (HTTP/REST)
```javascript
// order-service/services/userService.js
const axios = require('axios');

class UserService {
  constructor() {
    this.baseURL = process.env.USER_SERVICE_URL || 'http://localhost:3001';
    this.client = axios.create({
      baseURL: this.baseURL,
      timeout: 5000
    });
  }

  async getUser(userId) {
    try {
      const response = await this.client.get(`/users/${userId}`);
      return response.data;
    } catch (error) {
      if (error.response?.status === 404) {
        throw new Error('User not found');
      }
      throw new Error(`User service error: ${error.message}`);
    }
  }
}

module.exports = UserService;
```
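The 5-second timeout above bounds each call, but it does not stop a service from hammering a dependency that is already failing. A circuit breaker addresses that by failing fast once errors pile up. The sketch below is illustrative with assumed thresholds; libraries such as opossum provide a production-grade implementation.

```javascript
// A minimal circuit-breaker sketch. Thresholds are illustrative assumptions;
// a library like opossum offers a hardened version of the same idea.
class CircuitBreaker {
  constructor(fn, { failureThreshold = 3, resetTimeoutMs = 10000 } = {}) {
    this.fn = fn;
    this.failureThreshold = failureThreshold;
    this.resetTimeoutMs = resetTimeoutMs;
    this.failures = 0;
    this.state = 'CLOSED'; // CLOSED | OPEN | HALF_OPEN
    this.openedAt = 0;
  }

  async call(...args) {
    if (this.state === 'OPEN') {
      // After the reset timeout, let one trial request through.
      if (Date.now() - this.openedAt >= this.resetTimeoutMs) {
        this.state = 'HALF_OPEN';
      } else {
        throw new Error('Circuit open: failing fast');
      }
    }
    try {
      const result = await this.fn(...args);
      this.failures = 0;
      this.state = 'CLOSED';
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === 'HALF_OPEN' || this.failures >= this.failureThreshold) {
        this.state = 'OPEN';
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}

module.exports = CircuitBreaker;
```

Wrapping `userService.getUser` in such a breaker means that once the user service is clearly down, the order service stops waiting out timeouts and returns an error immediately, which keeps its own request queue from backing up.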
### 2. Asynchronous Communication (Message Queues)

```javascript
// shared/messageQueue.js
const amqp = require('amqplib');

class MessageQueue {
  constructor() {
    this.connection = null;
    this.channel = null;
  }

  async connect() {
    try {
      this.connection = await amqp.connect(process.env.RABBITMQ_URL || 'amqp://localhost');
      this.channel = await this.connection.createChannel();
      console.log('Connected to RabbitMQ');
    } catch (error) {
      console.error('Failed to connect to RabbitMQ:', error);
      throw error;
    }
  }

  async publishEvent(exchange, routingKey, message) {
    await this.channel.assertExchange(exchange, 'topic', { durable: true });
    // persistent: true so messages survive a broker restart, matching the durable exchange
    this.channel.publish(exchange, routingKey, Buffer.from(JSON.stringify(message)), { persistent: true });
  }

  async subscribeToEvent(exchange, routingKey, queue, handler) {
    await this.channel.assertExchange(exchange, 'topic', { durable: true });
    await this.channel.assertQueue(queue, { durable: true });
    await this.channel.bindQueue(queue, exchange, routingKey);
    await this.channel.consume(queue, async (msg) => {
      if (msg) {
        try {
          const content = JSON.parse(msg.content.toString());
          await handler(content);
          this.channel.ack(msg);
        } catch (error) {
          console.error('Error processing message:', error);
          // requeue=false: routed to a dead-letter exchange if one is configured
          this.channel.nack(msg, false, false);
        }
      }
    });
  }
}

module.exports = MessageQueue;
```
### 3. Event-Driven Architecture

```javascript
// order-service/events/orderEvents.js
const MessageQueue = require('../shared/messageQueue');

class OrderEvents {
  constructor() {
    this.messageQueue = new MessageQueue();
  }

  async init() {
    await this.messageQueue.connect();
  }

  async publishOrderCreated(order) {
    await this.messageQueue.publishEvent(
      'orders',
      'order.created',
      {
        orderId: order._id,
        userId: order.userId,
        items: order.items,
        total: order.total,
        timestamp: new Date()
      }
    );
  }

  async publishOrderUpdated(order) {
    await this.messageQueue.publishEvent(
      'orders',
      'order.updated',
      {
        orderId: order._id,
        status: order.status,
        timestamp: new Date()
      }
    );
  }
}

module.exports = OrderEvents;
```
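Because RabbitMQ delivers at-least-once, the same `order.created` event can reach a consumer twice (for example after a nack and redelivery). Handlers should therefore be idempotent. The sketch below deduplicates on a stable event key; the in-memory `Set` is illustrative only, since a real service would record processed keys in a persistent store such as a unique database index.

```javascript
// An idempotent event-handler sketch. The in-memory Set is illustrative;
// production services persist processed keys (e.g. a unique DB index).
function makeIdempotent(handler, keyFn) {
  const seen = new Set();
  return async (event) => {
    const key = keyFn(event);
    if (seen.has(key)) {
      return { skipped: true }; // Duplicate delivery: no side effects
    }
    const result = await handler(event);
    seen.add(key); // Mark processed only after success, so failures can retry
    return { skipped: false, result };
  };
}

module.exports = makeIdempotent;
```

A consumer would wrap its handler before passing it to `subscribeToEvent`, keying on `event.orderId` or a dedicated event ID.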
## Kubernetes Deployment

### 1. Kubernetes Manifests

```yaml
# k8s/user-service-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: your-registry/user-service:latest
          ports:
            - containerPort: 3000
          env:
            - name: MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: connection-string
            - name: NODE_ENV
              value: "production"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
```
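The manifest above points both probes at `/health`, which is common but conflates two questions: liveness ("is the process stuck?") should stay trivial, while readiness ("can this pod serve traffic?") should verify dependencies such as the database. A sketch of a dependency-aggregating readiness check, with assumed check names, might look like this:

```javascript
// A readiness-check sketch: run each dependency check, report per-dependency
// status, and declare the pod ready only if all pass. Check names and the
// checks themselves are illustrative assumptions.
async function checkReadiness(checks) {
  const results = await Promise.all(
    Object.entries(checks).map(async ([name, check]) => {
      try {
        await check();
        return [name, 'ok'];
      } catch (err) {
        return [name, `failed: ${err.message}`];
      }
    })
  );
  const details = Object.fromEntries(results);
  const ready = results.every(([, status]) => status === 'ok');
  return { ready, details }; // Map to HTTP 200 when ready, 503 otherwise
}

module.exports = checkReadiness;
```

An Express route would call this with, say, a Mongo ping check and return 503 until `ready` is true, so Kubernetes routes no traffic to a pod whose database connection is down.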
### 2. ConfigMaps and Secrets

```yaml
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  API_TIMEOUT: "5000"
  MAX_CONNECTIONS: "100"
---
# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  connection-string: bW9uZ29kYjovL3VzZXI6cGFzc0Btb25nb2RiOjI3MDE3L215ZGI= # base64 encoded
```
### 3. Horizontal Pod Autoscaler

```yaml
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
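At its core, the HPA computes `desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)`, clamped to the min/max bounds. The helper below makes that arithmetic concrete; the real controller additionally applies a tolerance window and stabilization, which are omitted here.

```javascript
// The HPA's core scaling formula, simplified: the real controller also applies
// a tolerance band and scale-down stabilization, omitted in this sketch.
function desiredReplicas(current, currentUtilization, targetUtilization, min, max) {
  const desired = Math.ceil(current * (currentUtilization / targetUtilization));
  return Math.min(max, Math.max(min, desired));
}

module.exports = desiredReplicas;
```

With the manifest above (target 70% CPU, bounds 2..10): three pods averaging 90% CPU scale to `ceil(3 × 90/70) = 4` replicas.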
## Service Mesh with Istio

### 1. Istio Service Mesh Setup

```yaml
# istio/virtual-service.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: user-service
spec:
  hosts:
    - user-service
  http:
    - match:
        - headers:
            version:
              exact: v2
      route:
        - destination:
            host: user-service
            subset: v2
          weight: 100
    - route:
        - destination:
            host: user-service
            subset: v1
          weight: 100
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: user-service
spec:
  host: user-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 10
    # Circuit breaking in Istio is configured via outlier detection
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 30s
```
### 2. Observability with Istio

```yaml
# istio/telemetry.yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: metrics
spec:
  metrics:
    - providers:
        - name: prometheus
      overrides:
        - match:
            metric: REQUEST_COUNT
          tagOverrides:
            request_protocol:
              operation: UPSERT
              value: "http"
```
## Monitoring and Observability

### 1. Prometheus Metrics

```javascript
// shared/metrics.js
const promClient = require('prom-client');

// Create a Registry
const register = new promClient.Registry();

// Add default metrics
promClient.collectDefaultMetrics({
  register,
  prefix: 'node_'
});

// Custom metrics
const httpRequestDuration = new promClient.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status'],
  buckets: [0.1, 0.5, 1, 2, 5]
});

const httpRequestTotal = new promClient.Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'route', 'status']
});

const databaseConnectionPool = new promClient.Gauge({
  name: 'database_connection_pool_size',
  help: 'Current database connection pool size'
});

register.registerMetric(httpRequestDuration);
register.registerMetric(httpRequestTotal);
register.registerMetric(databaseConnectionPool);

// Middleware for Express
function metricsMiddleware(req, res, next) {
  const start = Date.now();
  res.on('finish', () => {
    const duration = (Date.now() - start) / 1000;
    const route = req.route?.path || req.path;
    httpRequestDuration
      .labels(req.method, route, res.statusCode)
      .observe(duration);
    httpRequestTotal
      .labels(req.method, route, res.statusCode)
      .inc();
  });
  next();
}

module.exports = {
  register,
  metricsMiddleware,
  databaseConnectionPool
};
```
### 2. Structured Logging

```javascript
// shared/logger.js
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: {
    service: process.env.SERVICE_NAME || 'microservice',
    version: process.env.SERVICE_VERSION || '1.0.0'
  },
  transports: [
    new winston.transports.Console({
      format: winston.format.combine(
        winston.format.colorize(),
        winston.format.simple()
      )
    })
  ]
});

// Request logging middleware
function requestLogger(req, res, next) {
  const startTime = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - startTime;
    logger.info('HTTP Request', {
      method: req.method,
      url: req.url,
      status: res.statusCode,
      duration,
      userAgent: req.get('User-Agent'),
      ip: req.ip
    });
  });
  next();
}

module.exports = { logger, requestLogger };
```
## Deployment Strategies

### 1. Blue-Green Deployment

```yaml
# k8s/blue-green-deployment.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: user-service-rollout
spec:
  replicas: 5
  strategy:
    blueGreen:
      activeService: user-service-active
      previewService: user-service-preview
      autoPromotionEnabled: false
      scaleDownDelaySeconds: 30
      prePromotionAnalysis:
        templates:
          - templateName: success-rate
        args:
          - name: service-name
            value: user-service-preview
      postPromotionAnalysis:
        templates:
          - templateName: success-rate
        args:
          - name: service-name
            value: user-service-active
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: your-registry/user-service:latest
          ports:
            - containerPort: 3000
```
### 2. Canary Deployment

```yaml
# k8s/canary-deployment.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: user-service-rollout
spec:
  replicas: 10
  strategy:
    canary:
      steps:
        - setWeight: 10
        - pause: {duration: 1m}
        - setWeight: 20
        - pause: {duration: 1m}
        - setWeight: 50
        - pause: {duration: 2m}
        - setWeight: 80
        - pause: {duration: 2m}
      canaryService: user-service-canary
      stableService: user-service-stable
      trafficRouting:
        istio:
          virtualService:
            name: user-service
            routes:
              - primary
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: your-registry/user-service:latest
```
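Conceptually, each `setWeight` step means every request is routed to the canary with probability `weight/100`. Istio enforces this at the proxy layer; the sketch below just makes the arithmetic concrete, with an injectable random source so the choice is testable.

```javascript
// How weight-based traffic splitting works conceptually: route to the canary
// with probability weight/100. Istio's proxies do this for real; this sketch
// only illustrates the arithmetic. `rng` is injectable for testing.
function chooseBackend(canaryWeight, rng = Math.random) {
  return rng() * 100 < canaryWeight ? 'canary' : 'stable';
}

module.exports = chooseBackend;
```

At `setWeight: 10`, roughly one request in ten exercises the new version while analysis runs against its error rate; each subsequent step widens the exposure until promotion.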
## Security Best Practices

### 1. Network Policies

```yaml
# k8s/network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: user-service-netpol
spec:
  podSelector:
    matchLabels:
      app: user-service
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
        - podSelector:
            matchLabels:
              app: order-service
      ports:
        - protocol: TCP
          port: 3000
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb
      ports:
        - protocol: TCP
          port: 27017
    # Allow DNS to any destination (no `to` restriction)
    - ports:
        - protocol: UDP
          port: 53
```
### 2. Pod Security Standards

```yaml
# k8s/pod-security-policy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: user-service-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 1001
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: user-service
      image: your-registry/user-service:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 1001
        capabilities:
          drop:
            - ALL
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}
```
## Conclusion

Building scalable microservices with Docker and Kubernetes requires careful consideration of:

- **Service Design**: Proper service boundaries and communication patterns
- **Containerization**: Optimized Docker images and container security
- **Orchestration**: Effective Kubernetes deployments and resource management
- **Service Mesh**: Advanced traffic management and observability
- **Monitoring**: Comprehensive metrics, logging, and tracing
- **Security**: Network policies, pod security, and access controls
- **Deployment**: Automated CI/CD with safe deployment strategies
This architecture provides the foundation for building resilient, scalable applications that can grow with your business needs while maintaining high availability and performance standards.