Networking
Resource Plane
An open-source proxy server for cloud-native setups.
What is Envoy?
Envoy is an open-source proxy server. Designed for edge, middle, and service-level deployments, it can run as a front proxy or as a sidecar, making it a flexible network abstraction layer.
Profile
It's common to run Envoy instances in sidecar mode next to all of your application's services. The tool then eases traffic shaping and observation by offering several features geared towards troubleshooting. Notably, it claims to facilitate consistent observability, which could be helpful if blind spots are plaguing your app.
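To make that concrete, here is a minimal static configuration sketch for a sidecar deployment, assuming a hypothetical application listening on 127.0.0.1:8080; the names and ports are illustrative rather than anything Envoy requires:

```yaml
# Minimal sidecar sketch: Envoy listens on :10000 and forwards HTTP
# traffic to the local application on 127.0.0.1:8080 (assumed address).
static_resources:
  listeners:
  - name: app_ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_app }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: local_app
    type: STATIC
    connect_timeout: 1s
    load_assignment:
      cluster_name: local_app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
```

Because the proxy, not the app, owns this file, traffic-shaping changes never require redeploying the service itself.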
In addition to observability, Envoy supports most of the functions you'd expect from a deployment-ready network tool, such as timeouts, mirroring, load balancing, and routing. Users who find that the out-of-the-box features fall short can extend the proxy through its static YAML configuration or its dynamic gRPC APIs. One cool example of this is the ability to modify HTTP traffic with pluggable filters written in Lua or WebAssembly, which load straight from configuration, or in native C++, which must be compiled into your Envoy build.
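As a hedged illustration of the Lua route, the fragment below drops a filter into the HTTP filter chain from the earlier sketch; the header name is invented for the example, and the Lua code ships inline with the configuration rather than in a rebuilt binary:

```yaml
# Fragment of the http_filters list: a Lua filter that tags every
# response before the terminal router filter runs.
http_filters:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_response(response_handle)
        -- "x-proxied-by" is an arbitrary example header
        response_handle:headers():add("x-proxied-by", "envoy")
      end
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```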
Focus
Envoy's design philosophy springs from the idea that networks ought to be transparent to applications. This open-source project works regardless of application language, and it keeps network reconfiguration independent of app development.
Envoy is also compatible with a wide range of network topologies. In addition to supporting L7 protocols, it works with various L3 and L4 protocols, transport sockets, authentication tools, and more. The project's compatibility focus means that in most cases, there's very little learning curve: Many users can make the switch simply by translating their network settings into a plain old YAML file.
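The L4 side translates just as directly. As a sketch, the listener below passes raw TCP through to an upstream database, with the address and port invented for the example:

```yaml
# L4 sketch: plain TCP passthrough to an assumed upstream database.
static_resources:
  listeners:
  - name: tcp_ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 9000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: tcp_passthrough
          cluster: upstream_db
  clusters:
  - name: upstream_db
    type: STRICT_DNS
    connect_timeout: 1s
    load_assignment:
      cluster_name: upstream_db
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: db.internal.example, port_value: 5432 }
```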
Such flexibility comes with an unsurprising caveat: Envoy can't overcome suboptimal configurations or high-latency topologies. It's best to use the built-in performance assessment tools and metrics to stress-test any deployment you create. Fortunately, the project supports experimentation with numerous Docker sandboxes that make it easy to get a feel for how everything comes together.
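The simplest of those built-in tools is the admin interface, which exposes live counters and histograms while you load-test; the port below is a common convention rather than a requirement:

```yaml
# Enable the local admin interface (the port is an assumption, pick your own).
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
```

With that in place, endpoints such as /stats, /stats/prometheus, /clusters, and /server_info on port 9901 report how the proxy is holding up under load.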
Background
Envoy's bona fides are somewhat self-sustaining – many adopters hear about it because it powers other service mesh products they've used. Before it jumped to open source and truly picked up steam, however, it was an internal Lyft product made to unify data planes in larger-scale mesh architectures.
Envoy's codebase is overwhelmingly C++, although the proxy itself is application-language agnostic. Thanks to its support for industry-standard networking protocols, gRPC, and HTTP/2, big companies from Airbnb and Medium to Tencent and VMware rely on this proxy – or on tools that use it. Today, it's a Cloud Native Computing Foundation graduated project.
Envoy main features
Built-in, multi-level observability
Envoy's observability hits a sweet spot between being exhaustive and opt-in. You can use only the observability features you require to stay informed while minimizing overhead. In addition to letting users delve into their networks’ L7 traffic, this proxy makes it possible to observe different database implementations at a low level.
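Access logging is a good example of that opt-in approach: it does nothing until you attach a logger to the HTTP connection manager. The sketch below, with an assumed format string, writes one line per request to stdout:

```yaml
# Fragment of the http_connection_manager config: opt-in access logging
# with a custom (illustrative) format, written to stdout.
access_log:
- name: envoy.access_loggers.stdout
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
    log_format:
      text_format_source:
        inline_string: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(:PATH)%\" %RESPONSE_CODE% %DURATION%ms\n"
```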
Multiple configurability options
Managing an Envoy proxy with YAML is straightforward without feeling overly limited. You can fulfill many custom use cases, like HTTP filtering, rate limiting, and cluster traffic management, simply by changing your configuration. For more involved tasks, however, you can drive the proxy through its gRPC-based xDS APIs, which support dynamic adjustments without a restart.
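A hedged sketch of that dynamic path: the bootstrap below tells Envoy to fetch its listeners and clusters over gRPC from a management server at an assumed address, which is the xDS pattern most control planes build on:

```yaml
# Dynamic bootstrap sketch: listeners and clusters come from an xDS
# management server reachable via the statically defined xds_cluster.
dynamic_resources:
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
static_resources:
  clusters:
  - name: xds_cluster
    type: STRICT_DNS
    connect_timeout: 1s
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}  # xDS uses gRPC, so the channel must speak HTTP/2
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: control-plane.internal, port_value: 18000 }
```

Swap control-plane.internal and the port for whatever your management server actually exposes; resources pushed this way take effect without restarting the proxy.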
Out-of-process architecture
Envoy is completely self-contained: even when you connect proxies together, the applications behind them remain independent of the network topology. This makes Envoy compatible with apps written in any language, and it lets you fine-tune the network without touching application code.