Managed Kubernetes

Lineserve Managed Kubernetes is a fully managed Kubernetes service that simplifies container orchestration and application deployment.

What is Managed Kubernetes?

Managed Kubernetes is a container orchestration service that automates the deployment, scaling, and management of containerized applications. With Lineserve's managed service, you get:

  • Fully Managed Control Plane: We manage the Kubernetes control plane nodes
  • Auto-scaling: Automatic scaling of nodes and pods
  • High Availability: Multi-zone deployment for reliability
  • Integrated Monitoring: Built-in monitoring and logging
  • Security: Regular security updates and patches

Key Features

🚀 Easy Deployment

  • One-click Clusters: Deploy Kubernetes clusters in minutes
  • Multiple Node Pools: Different instance types for different workloads
  • Auto-scaling: Horizontal and vertical pod autoscaling
  • Load Balancing: Integrated load balancers for services

🔒 Enterprise Security

  • RBAC: Role-based access control
  • Network Policies: Secure pod-to-pod communication
  • Secrets Management: Encrypted secrets storage
  • Private Clusters: Isolated cluster networking

📊 Monitoring & Observability

  • Prometheus Integration: Built-in metrics collection
  • Grafana Dashboards: Pre-configured monitoring dashboards
  • Log Aggregation: Centralized logging with ELK stack
  • Alerting: Custom alerts and notifications

🔧 Developer Tools

  • kubectl Access: Full kubectl command-line access
  • Helm Support: Package manager for Kubernetes
  • CI/CD Integration: Seamless integration with CI/CD pipelines
  • Registry Integration: Private container registry

Cluster Architecture

Control Plane

The control plane is fully managed by Lineserve:

  • API Server: Kubernetes API endpoint
  • etcd: Distributed key-value store
  • Scheduler: Pod scheduling and placement
  • Controller Manager: Cluster state management

Worker Nodes

Worker nodes run your applications:

  • Kubelet: Node agent for pod management
  • Container Runtime: containerd (runs standard Docker images)
  • Kube-proxy: Network proxy for services
  • Node Monitoring: Resource usage monitoring

Networking

  • CNI Plugin: Container Network Interface
  • Service Mesh: Optional Istio integration
  • Ingress Controllers: NGINX or Traefik
  • Load Balancers: Layer 4 and Layer 7 load balancing

Getting Started

Prerequisites

  • Lineserve account with billing set up
  • Basic understanding of containers and Kubernetes
  • kubectl installed locally (optional)

Creating Your First Cluster

Using Web Console

  1. Navigate to Kubernetes: Go to Cloud > Kubernetes
  2. Create Cluster: Click "Create Cluster"
  3. Configure Cluster:
    • Name: Choose a descriptive name
    • Region: Select your preferred region
    • Kubernetes Version: Choose version (latest recommended)
    • Node Pool: Configure initial node pool
  4. Deploy: Click "Create Cluster"

Using API

curl -X POST -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-k8s-cluster",
    "region": "nairobi",
    "version": "1.28",
    "node_pools": [
      {
        "name": "default-pool",
        "size": "standard",
        "count": 3,
        "auto_scaling": {
          "enabled": true,
          "min_nodes": 1,
          "max_nodes": 10
        }
      }
    ]
  }' \
  https://api.lineserve.com/v1/kubernetes/clusters

Using CLI

lineserve k8s create \
  --name my-k8s-cluster \
  --region nairobi \
  --version 1.28 \
  --node-count 3 \
  --node-size standard \
  --auto-scaling

Connecting to Your Cluster

Get Kubeconfig

# Using CLI
lineserve k8s kubeconfig my-k8s-cluster > ~/.kube/config

# Using API
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://api.lineserve.com/v1/kubernetes/clusters/cluster_id/kubeconfig \
  > ~/.kube/config

Verify Connection

kubectl cluster-info
kubectl get nodes

Node Pools

What are Node Pools?

Node pools are groups of nodes with similar configurations:

  • Instance Type: CPU, memory, and storage specifications
  • Scaling: Independent scaling policies
  • Taints and Labels: Node scheduling preferences
  • Spot Instances: Cost-effective spot instance support (see the scheduling example below)
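
For example, a pod can be steered onto a specific pool with a nodeSelector and, for tainted pools such as spot nodes, a matching toleration. A minimal sketch; the node-pool label key and the spot taint below are illustrative, so check your cluster's actual node labels with kubectl get nodes --show-labels:

apiVersion: v1
kind: Pod
metadata:
  name: pool-demo                          # illustrative name
spec:
  nodeSelector:
    lineserve.com/node-pool: cpu-pool      # hypothetical pool label
  tolerations:
    - key: "spot"                          # hypothetical taint on spot nodes
      operator: "Exists"
      effect: "NoSchedule"
  containers:
    - name: app
      image: my-app:latest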

Node Pool Types

General Purpose

Balanced CPU and memory for most workloads:

  • standard-2: 2 vCPUs, 4GB RAM
  • standard-4: 4 vCPUs, 8GB RAM
  • standard-8: 8 vCPUs, 16GB RAM

CPU Optimized

High-performance processors for CPU-intensive tasks:

  • cpu-4: 4 vCPUs, 8GB RAM
  • cpu-8: 8 vCPUs, 16GB RAM
  • cpu-16: 16 vCPUs, 32GB RAM

Memory Optimized

High memory-to-CPU ratio for memory-intensive applications:

  • memory-4: 4 vCPUs, 32GB RAM
  • memory-8: 8 vCPUs, 64GB RAM
  • memory-16: 16 vCPUs, 128GB RAM

Managing Node Pools

Add Node Pool

lineserve k8s add-node-pool my-cluster \
  --name cpu-pool \
  --size cpu-optimized \
  --count 2 \
  --min-nodes 1 \
  --max-nodes 5

Scale Node Pool

lineserve k8s scale-nodes my-cluster cpu-pool --count 4

Update Node Pool

kubectl patch nodepool cpu-pool -p '{"spec":{"size":"cpu-8"}}'

Application Deployment

Deploying Your First Application

Simple Nginx Deployment

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer

Apply the manifest and check the service:

kubectl apply -f nginx-deployment.yaml
kubectl get services

Using Helm Charts

# Add Helm repository
helm repo add bitnami https://charts.bitnami.com/bitnami

# Install WordPress
helm install my-wordpress bitnami/wordpress \
  --set service.type=LoadBalancer \
  --set persistence.enabled=true

Container Registry Integration

Using Lineserve Registry

# Login to registry
docker login registry.lineserve.com

# Tag and push image
docker tag my-app:latest registry.lineserve.com/my-project/my-app:latest
docker push registry.lineserve.com/my-project/my-app:latest

Deployment with Private Registry

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: registry-secret
      containers:
        - name: my-app
          image: registry.lineserve.com/my-project/my-app:latest
          ports:
            - containerPort: 8080
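
The manifest above references an imagePullSecret named registry-secret, which must exist in the same namespace. A minimal sketch of that secret, assuming username/password credentials for registry.lineserve.com (the values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: registry-secret
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "registry.lineserve.com": {
          "username": "YOUR_USERNAME",
          "password": "YOUR_PASSWORD"
        }
      }
    }

In practice, kubectl create secret docker-registry generates an equivalent secret (including the base64 "auth" field) without hand-writing the JSON.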

Auto-scaling

Horizontal Pod Autoscaler (HPA)

Automatically scale pods based on CPU/memory usage:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

Vertical Pod Autoscaler (VPA)

Automatically adjust pod resource requests:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  updatePolicy:
    updateMode: "Auto"

Cluster Autoscaler

Automatically scale cluster nodes based on demand:

  • Scale Up: Add nodes when pods can't be scheduled
  • Scale Down: Remove underutilized nodes
  • Cost Optimization: Minimize infrastructure costs
  • Availability: Maintain application availability
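
As a sketch of how this plays out: the autoscaler reacts to pods that stay Pending because their resource requests cannot fit on any existing node. The deployment below (names and sizes are illustrative) would trigger a scale-up on a small cluster, up to the node pool's max_nodes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-workers                # illustrative name
spec:
  replicas: 20                       # more pods than the current nodes can hold
  selector:
    matchLabels:
      app: batch-workers
  template:
    metadata:
      labels:
        app: batch-workers
    spec:
      containers:
        - name: worker
          image: my-app:latest       # placeholder image
          resources:
            requests:                # explicit requests are what the
              cpu: "1"               # autoscaler reasons about
              memory: 1Gi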

Networking

Service Types

ClusterIP

Internal cluster communication:

apiVersion: v1
kind: Service
metadata:
  name: internal-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

LoadBalancer

External access with load balancing:

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

NodePort

Direct node access:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080

Ingress Controllers

NGINX Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx    # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80

Network Policies

Secure pod-to-pod communication:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

Storage

Persistent Volumes

Durable storage for stateful applications:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: ssd
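
Once the claim is bound, any pod can mount it by name. A minimal sketch (the pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo                            # illustrative name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]   # keep the pod alive for inspection
      volumeMounts:
        - name: data
          mountPath: /data                  # the claim appears here in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-pvc                # the claim defined above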

Storage Classes

Different storage performance tiers:

  • ssd: High-performance SSD storage
  • hdd: Cost-effective HDD storage
  • nvme: Ultra-fast NVMe storage

StatefulSets

For stateful applications like databases:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi

Monitoring & Logging

Prometheus Monitoring

Built-in Prometheus for metrics collection:

  • Cluster Metrics: Node and pod resource usage
  • Application Metrics: Custom application metrics
  • Alerting: Prometheus Alertmanager integration
  • Grafana: Pre-configured dashboards

Logging

Centralized logging with ELK stack:

  • Elasticsearch: Log storage and indexing
  • Logstash: Log processing and transformation
  • Kibana: Log visualization and analysis
  • Fluentd: Log collection from containers

Custom Monitoring

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics
      path: /metrics
      interval: 30s

Security

RBAC (Role-Based Access Control)

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Secrets Management

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=            # base64-encoded ("admin")
  password: MWYyZDFlMmU2N2Rm    # base64-encoded ("1f2d1e2e67df")

Pod Security Standards

apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: app
      image: my-app:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL

Best Practices

Resource Management

  • Resource Requests: Always set CPU and memory requests (see the sketch after this list)
  • Resource Limits: Set appropriate limits to prevent resource hogging
  • Quality of Service: Understand QoS classes (Guaranteed, Burstable, BestEffort)
  • Namespace Organization: Use namespaces to organize applications
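
A minimal sketch of these practices in one pod spec; because requests equal limits for every resource, the pod receives the Guaranteed QoS class (the names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                  # illustrative name
  namespace: team-a               # namespaces keep applications organized
spec:
  containers:
    - name: app
      image: my-app:latest
      resources:
        requests:                 # what the scheduler reserves on a node
          cpu: 500m
          memory: 256Mi
        limits:                   # hard cap; equal to requests => Guaranteed QoS
          cpu: 500m
          memory: 256Mi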

High Availability

  • Multi-zone Deployment: Deploy across multiple availability zones
  • Pod Disruption Budgets: Ensure minimum replicas during updates (see the sketch below)
  • Health Checks: Implement liveness and readiness probes
  • Graceful Shutdown: Handle SIGTERM signals properly
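
A minimal sketch combining these practices, assuming an app labeled app: my-app that serves an HTTP health endpoint on /healthz (both are illustrative):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2                 # keep at least 2 replicas during voluntary disruptions
  selector:
    matchLabels:
      app: my-app
---
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod                # illustrative; normally part of a Deployment template
  labels:
    app: my-app
spec:
  terminationGracePeriodSeconds: 30   # time to react to SIGTERM and shut down cleanly
  containers:
    - name: app
      image: my-app:latest
      ports:
        - containerPort: 8080
      livenessProbe:              # failing this restarts the container
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:             # failing this removes the pod from Service endpoints
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 5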

Security

  • Least Privilege: Use minimal required permissions
  • Network Segmentation: Implement network policies
  • Image Security: Scan container images for vulnerabilities
  • Secrets Rotation: Regularly rotate secrets and certificates

Pricing

Cluster Management

  • Control Plane: $50/month per cluster
  • Worker Nodes: Standard VPS pricing
  • Load Balancers: $10/month per load balancer
  • Storage: Standard storage pricing

Node Pricing (per hour)

  • standard-2: $0.05/hour
  • standard-4: $0.10/hour
  • cpu-optimized-4: $0.12/hour
  • memory-optimized-4: $0.15/hour
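
As a rough worked example (assuming about 730 hours in a month): a cluster with three standard-4 nodes costs roughly 3 × $0.10 × 730 ≈ $219/month for nodes, plus $50 for the control plane and $10 for one load balancer, for a total of about $279/month before storage.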

Troubleshooting

Common Issues

Pod Stuck in Pending

kubectl describe pod <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp

Service Not Accessible

kubectl get svc
kubectl describe svc <service-name>
kubectl get endpoints

Node Issues

kubectl get nodes
kubectl describe node <node-name>
kubectl top nodes

Debugging Commands

# Get cluster info
kubectl cluster-info

# Check pod logs
kubectl logs <pod-name> -f

# Execute commands in pod
kubectl exec -it <pod-name> -- /bin/bash

# Port forwarding
kubectl port-forward <pod-name> 8080:80

# Get resource usage
kubectl top pods
kubectl top nodes

Next Steps