Architecture: Interaction with Kubernetes

This document describes how kube-ingress-dash interacts with Kubernetes and the overall system architecture.

System Architecture

The following diagram illustrates the overall architecture of kube-ingress-dash:

Detailed Interaction Flow

The interaction between kube-ingress-dash and Kubernetes follows this flow:

Multi-Namespace Streaming

The application supports real-time streaming of ingress resources across multiple namespaces using Server-Sent Events (SSE).

For comprehensive architecture diagrams including detailed sequence flows, error isolation patterns, and event aggregation, see the Production Features Architecture documentation.

How It Works

  1. Client Connection: Browser establishes SSE connection to /api/ingresses/stream
  2. Namespace Selection: Client can filter by specific namespaces or watch all
  3. Parallel Watchers: Separate watchers created for each namespace
  4. Event Aggregation: Events from all watchers merged into single stream
  5. Real-time Updates: Changes pushed to client immediately
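The watcher-per-namespace and aggregation steps above can be sketched as follows. Names such as EventAggregator and startWatcher are illustrative, not the app's actual API, and the watcher body is a stub; a real watcher would use the Kubernetes watch API for networking.k8s.io/v1 Ingress resources.

```typescript
// Sketch: one watcher per namespace, all feeding a single merged stream
// that the SSE connection subscribes to.

type IngressEvent = {
  namespace: string;
  type: "ADDED" | "MODIFIED" | "DELETED";
  name: string;
};

class EventAggregator {
  private listeners: Array<(e: IngressEvent) => void> = [];

  // The SSE handler subscribes here to receive the merged stream.
  onEvent(listener: (e: IngressEvent) => void): void {
    this.listeners.push(listener);
  }

  // Each per-namespace watcher pushes its events into the shared stream.
  emit(event: IngressEvent): void {
    for (const listener of this.listeners) listener(event);
  }
}

// Stubbed watcher; a real one would open a Kubernetes watch on the
// namespace and translate watch events into IngressEvents.
function startWatcher(namespace: string, agg: EventAggregator): void {
  agg.emit({ namespace, type: "ADDED", name: `${namespace}-ingress` });
}

const agg = new EventAggregator();
const received: IngressEvent[] = [];
agg.onEvent((e) => received.push(e));
["default", "staging"].forEach((ns) => startWatcher(ns, agg));
```

Because each watcher only touches the aggregator through emit, a failure inside one watcher never interrupts the others or the merged stream.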

Benefits

  • Isolation: Errors in one namespace don't affect others
  • Performance: Parallel watching improves responsiveness
  • Scalability: Efficiently handles large clusters with many namespaces

Error Handling Architecture

The application implements layered error handling: error classification, retries with exponential backoff, and a circuit breaker.

For detailed architecture diagrams including complete error handling flow, circuit breaker state machine, error classification decision tree, and retry timing diagrams, see the Production Features Architecture documentation.

Error Classification

The application classifies errors into two categories:

  • Transient Errors: Network timeouts, temporary API unavailability

    • Automatically retried with exponential backoff
    • Examples: ECONNRESET, ETIMEDOUT, 503 Service Unavailable
  • Permanent Errors: Authentication failures, permission denied

    • Not retried, returned immediately to user
    • Examples: 401 Unauthorized, 403 Forbidden, 404 Not Found
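The classification above can be sketched as a small lookup. The function name and the exact code/status sets are illustrative; the app's real classifier may cover more cases.

```typescript
// Sketch: classify an error as transient (retry) or permanent (surface now),
// using the example codes and statuses listed above.

const TRANSIENT_CODES = new Set(["ECONNRESET", "ETIMEDOUT"]);
const TRANSIENT_STATUSES = new Set([503]);
const PERMANENT_STATUSES = new Set([401, 403, 404]);

type ErrorClass = "transient" | "permanent";

function classifyError(err: { code?: string; status?: number }): ErrorClass {
  if (err.code && TRANSIENT_CODES.has(err.code)) return "transient";
  if (err.status !== undefined) {
    if (TRANSIENT_STATUSES.has(err.status)) return "transient";
    if (PERMANENT_STATUSES.has(err.status)) return "permanent";
  }
  // Unknown errors are treated as permanent so they surface immediately
  // rather than being retried blindly.
  return "permanent";
}
```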

Circuit Breaker Pattern

Protects the application and Kubernetes API from cascading failures:

  • Closed State: Normal operation, requests pass through
  • Open State: Too many failures, requests fail fast
  • Half-Open State: Testing if service recovered

Configuration:

  • Opens at a 50% failure rate over a 30-second window
  • Waits 60 seconds before testing recovery
  • Prevents overload during outages

Retry Logic

Implements exponential backoff for transient errors:

  • Initial delay: 1 second
  • Maximum delay: 30 seconds
  • Maximum attempts: 3
  • Backoff multiplier: 2x
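With these parameters, the waits before retries are 1 s and then 2 s (the 30-second cap only matters for longer retry sequences). A sketch, with illustrative function names:

```typescript
// Sketch: exponential backoff matching the configuration above
// (1 s initial delay, 2x multiplier, 30 s cap, 3 attempts).

function backoffDelayMs(
  attempt: number, // 1-based: attempt 1 waits initialMs, attempt 2 waits 2x, ...
  initialMs = 1_000,
  multiplier = 2,
  maxMs = 30_000,
): number {
  return Math.min(initialMs * multiplier ** (attempt - 1), maxMs);
}

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt < maxAttempts) {
        // Wait before the next attempt; only transient errors should reach here.
        await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
      }
    }
  }
  throw lastErr;
}
```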

Performance Optimizations

Virtual Scrolling

Efficiently renders large lists of ingresses:

  • Only renders visible items
  • Reduces DOM nodes and memory usage
  • Smooth scrolling with thousands of ingresses
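The core of virtual scrolling is computing which slice of rows intersects the viewport. A back-of-envelope sketch with illustrative names (the app likely uses a virtualization library rather than hand-rolled math):

```typescript
// Sketch: compute the range of list rows to render for the current scroll
// position, assuming fixed-height rows.

function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 3, // extra rows above/below to avoid flicker while scrolling
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return { start, end };
}
```

Only rows in [start, end) get DOM nodes, so the node count stays proportional to the viewport rather than to the total number of ingresses.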

Caching and Rate Limiting

The application implements caching and rate limiting to optimize performance and protect resources.

For detailed architecture diagrams including caching layer architecture, request deduplication flow, rate limiting with token bucket algorithm, and Kubernetes API throttling, see the Production Features Architecture documentation.

Key features:

  • Caching Layer: Memory and Redis-based caching to reduce Kubernetes API load
  • Request Deduplication: Prevent duplicate concurrent requests
  • Rate Limiting: Protect application and Kubernetes API from overload
  • Token Bucket Algorithm: Fair rate limiting per client
  • Kubernetes API Throttling: Prevent cluster overload
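The token bucket mentioned above can be sketched as follows. The class name, capacity, and refill rate are illustrative, not the app's actual configuration.

```typescript
// Sketch: token bucket for per-client rate limiting. Each request consumes
// a token; tokens refill continuously up to the bucket capacity.

class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    private now: () => number = Date.now,
  ) {
    this.tokens = capacity;
    this.lastRefill = now();
  }

  tryConsume(n = 1): boolean {
    const elapsedSec = (this.now() - this.lastRefill) / 1_000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSecond);
    this.lastRefill = this.now();
    if (this.tokens >= n) {
      this.tokens -= n;
      return true;
    }
    return false; // caller would typically respond with HTTP 429
  }
}
```

Because unused capacity accumulates up to a cap, short bursts are allowed while the sustained rate stays bounded, which keeps the limit fair per client.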

Health Checks

The application exposes a health check endpoint at /api/health:

GET /api/health

Response (Healthy):

  {
    "status": "healthy",
    "timestamp": "2024-01-15T10:30:00Z",
    "checks": {
      "kubernetes": {
        "status": "up",
        "latency": 45
      }
    }
  }

Response (Unhealthy):

  {
    "status": "unhealthy",
    "timestamp": "2024-01-15T10:30:00Z",
    "checks": {
      "kubernetes": {
        "status": "down",
        "latency": 5000,
        "error": "Failed to connect to Kubernetes API"
      }
    }
  }

The health check verifies Kubernetes API connectivity (reporting latency in milliseconds) and is used by Kubernetes liveness and readiness probes.
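A handler producing the payloads above could be shaped roughly like this. pingKubernetes is a hypothetical stand-in for the app's real connectivity check, and the latency figures are simply measured round-trip times.

```typescript
// Sketch: assemble the /api/health response by timing a Kubernetes
// connectivity check.

type CheckResult = { status: "up" | "down"; latency: number; error?: string };

type HealthResponse = {
  status: "healthy" | "unhealthy";
  timestamp: string;
  checks: { kubernetes: CheckResult };
};

async function healthCheck(
  pingKubernetes: () => Promise<void>, // hypothetical connectivity probe
): Promise<HealthResponse> {
  const started = Date.now();
  try {
    await pingKubernetes();
    return {
      status: "healthy",
      timestamp: new Date().toISOString(),
      checks: { kubernetes: { status: "up", latency: Date.now() - started } },
    };
  } catch (err) {
    return {
      status: "unhealthy",
      timestamp: new Date().toISOString(),
      checks: {
        kubernetes: {
          status: "down",
          latency: Date.now() - started,
          error: String(err),
        },
      },
    };
  }
}
```

A liveness or readiness probe then only needs to check the HTTP status an endpoint like this returns (200 for healthy, non-200 for unhealthy).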