From Ingress NGINX to Higress: migrating 60+ resources in 30 minutes with AI

With the official retirement of Ingress NGINX in March 2026, enterprise platform teams face an urgent security and compliance mandate. Remaining on a retired controller leaves critical infrastructure exposed to unpatched security risks. For one infrastructure engineer managing a cluster with over 60 complex Ingress resources, the challenge was clear: find a modern, enterprise-ready replacement that could be adopted without months of manual refactoring.

This blog post explains how full migration validation was achieved in just 30 minutes by leveraging an AI agent and Higress, a cloud-native and AI-native API gateway founded by Alibaba that recently joined the CNCF Sandbox.

The solution: Why Higress for the AI era?

Higress, built on the industry-standard Envoy and Istio, is specifically designed as an AI-native gateway that addresses the shortcomings of legacy controllers while providing specialized features for Large Language Models (LLMs).

  • AI-Native Architecture: Unlike traditional gateways, Higress treats LLMs as first-class citizens. It includes specialized features like Token-based rate limiting (to manage model costs) and caching capabilities (to reduce latency for common AI prompts).
  • LLM Protocol Governance: It provides a unified protocol to interface with various LLM providers, enabling teams to switch models behind a single, secure endpoint.
  • Zero-Downtime Reliability: Leveraging Envoy’s xDS protocol, Higress allows for configuration updates in milliseconds. This eliminates the “NGINX reload” issue, which is critical for maintaining persistent connections in AI streaming and gRPC.
  • Model Context Protocol (MCP): Higress supports hosting MCP servers, allowing AI agents to securely interact with enterprise tools and data via the gateway.

AI-Assisted Migration Workflow

To accelerate the transition, the Alibaba engineer utilized an AI agent equipped with specialized “Skills.” This approach shifted the “grunt work” of analysis and validation to the AI, while keeping human engineers in control of the final production execution.

1. Understanding the Current State

The agent was first tasked with auditing the existing cluster. Using the nginx-to-higress-migration skill, the agent automatically identified all Ingress resources and flagged NGINX-specific annotations that required translation.

2. Risk-Free Simulation

To ensure the migration wouldn’t break production traffic, the engineer used the agent to create a simulated environment using Kind (Kubernetes in Docker). Higress was installed with status updates disabled (global.enableStatus=false) to prevent Higress from updating the Ingress status field, allowing it to coexist peacefully with NGINX. This enabled the engineer to test the new routing logic side-by-side with the old NGINX controller.
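The side-by-side setup described above boils down to installing Higress with status updates disabled. A sketch of the install, assuming the standard Higress Helm chart (the repo URL, release name, and namespace here are illustrative; only the `global.enableStatus=false` value comes from the migration itself):

```shell
# Add the Higress Helm repository (URL assumed; check the Higress docs).
helm repo add higress https://higress.io/helm-charts
helm repo update

# Install Higress without letting it write back to the Ingress status
# field, so it can coexist with the still-running NGINX controller.
helm install higress higress/higress \
  --namespace higress-system --create-namespace \
  --set global.enableStatus=false
```

With both controllers running in the Kind cluster, the same Ingress resources can be exercised through each gateway and the responses diffed before any production cutover.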

3. Solving Custom Logic with WASM

For the complex NGINX snippets flagged during analysis, the engineer utilized the higress-wasm-go-plugin skill. This skill allowed the AI to generate high-performance WebAssembly (WASM) plugins that replicated custom Lua or NGINX logic within the Higress sandbox.
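As a flavor of what such a plugin replaces, consider a typical NGINX rewrite snippet that maps a legacy path prefix onto a new API route. The function below expresses that logic in plain Go; in a real Higress deployment it would live inside a wasm-go plugin's request-handling callback rather than a standalone function, and the path scheme is invented for illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// rewritePath reproduces the effect of a legacy NGINX rewrite snippet:
// requests under the old /v1/ prefix are remapped onto /api/v1/, while
// all other paths pass through untouched.
func rewritePath(path string) string {
	if strings.HasPrefix(path, "/v1/") {
		return "/api" + path
	}
	return path
}

func main() {
	fmt.Println(rewritePath("/v1/chat/completions")) // /api/v1/chat/completions
	fmt.Println(rewritePath("/healthz"))             // unchanged: /healthz
}
```

Because the compiled WASM module runs inside the gateway's sandbox, this kind of logic gets Envoy-level performance without the security and reload concerns of raw NGINX Lua snippets.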

Outcome: 30 minutes to compliance

By leveraging Higress’s native NGINX compatibility and AI-assisted validation, infrastructure migration was achieved at lightning speed:

Phase       | AI Agent Task                | Outcome
------------|------------------------------|----------------------------------------------------
Analysis    | Audit 60+ Ingress resources  | Full gap analysis in <1 minute
Simulation  | Mirror environment in Kind   | Verified “digital twin” with <10 minutes of manual typing
Plugin Dev  | WASM plugin generation       | Custom snippets translated in <2 minutes
Execution   | Generate final runbook       | Production-ready in 30 minutes

The retirement of Ingress NGINX is not just a migration hurdle, but an opportunity to upgrade to a more resilient, AI-ready architecture. By moving to Higress, organizations gain an enterprise-grade gateway based on Envoy and Istio that is built for the future of LLM integration.
