Service Mesh Data Plane Benchmark

Benchmarking FSM and Istio data planes

Flomesh Service Mesh (FSM) aims to provide service mesh functionality with a focus on high performance and low resource consumption. This allows resource-constrained edge environments to leverage service mesh functionality similar to the cloud.

In this test, benchmarks were conducted for FSM (v1.1.4) and Istio (v1.19.3). The primary focus is the service latency distribution under each mesh and the resource overhead of each data plane.

FSM uses Pipy as the data plane, while Istio uses Envoy.

Note that the focus is on comparing latency and resource consumption between the two meshes, not on peak performance.

Testing Environment

The benchmark ran in a Kubernetes cluster on Azure cloud VMs. The cluster consists of 2 Standard_D8_v3 nodes. FSM and Istio were both configured with permissive traffic policy mode and mTLS enabled, with all other settings left at their defaults.

  • Kubernetes: K3s v1.24.17+k3s1
  • OS: Ubuntu 20.04
  • Nodes: 2 × (8 vCPU, 32 GiB)
  • Sidecar: 1 vCPU, 512 MiB

The test tool is located on the branch fsm of this repository, which is forked from istio/tools.


The procedure is documented in this file.

The test tool deploys two applications: fortioclient and fortioserver. The load is generated by fortioclient, triggered via kubectl exec.
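A run can be triggered roughly as follows. This is a sketch: the deployment and service names come from the test tool described above, the URL and port are assumptions, and you should adjust them to your deployment. The fortio flags `-qps`, `-c`, and `-t` set the target QPS, the number of concurrent connections, and the run duration.

```sh
# Sketch: trigger one fortio load run from inside the client pod.
# Deployment/service names and port are assumptions; adjust as needed.
kubectl exec deploy/fortioclient -- \
  fortio load -qps 1000 -c 16 -t 60s -json result.json \
  http://fortioserver:8080/echo
```

Repeating this for each concurrency level (2 through 64) produces one result file per data point.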

For both meshes, tests are conducted in baseline (no sidecar) and both (two sidecars) modes. Load is generated at concurrency levels of 2, 4, 8, 16, 32, and 64, at 1000 QPS. You can review the benchmark configs for FSM and Istio.

An essential setting is the sidecar resource request and limit: 1000m CPU and 512Mi memory.
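With Istio, the sidecar's resources can be pinned per pod via annotations; the annotation names below are Istio's own, while FSM is assumed to expose an equivalent knob through its mesh configuration:

```yaml
# Pin the Istio sidecar's CPU/memory request and limit for one pod.
metadata:
  annotations:
    sidecar.istio.io/proxyCPU: 1000m
    sidecar.istio.io/proxyMemory: 512Mi
    sidecar.istio.io/proxyCPULimit: 1000m
    sidecar.istio.io/proxyMemoryLimit: 512Mi
```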


**Illustration: xxx_baseline means the service is accessed directly without sidecars; xxx_both means both the client and the server have sidecars.**

The X-axis represents the concurrency level; the Y-axis represents latency in milliseconds.
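The latency percentiles plotted this way can be extracted from each fortio run's JSON report. A minimal sketch, assuming fortio's `DurationHistogram`/`Percentiles` field layout (values in seconds); the numbers here are invented for illustration:

```python
import json

# Hypothetical fragment of a fortio JSON report. The field names follow
# fortio's output format; the latency values are made up for illustration.
result = json.loads("""
{
  "DurationHistogram": {
    "Count": 60000,
    "Avg": 0.0021,
    "Percentiles": [
      {"Percentile": 50, "Value": 0.0018},
      {"Percentile": 90, "Value": 0.0035},
      {"Percentile": 99, "Value": 0.0072}
    ]
  }
}
""")

hist = result["DurationHistogram"]
# Convert seconds to milliseconds to match the Y-axis of the charts.
latency_ms = {p["Percentile"]: p["Value"] * 1000 for p in hist["Percentiles"]}
print(latency_ms)
```

Collecting `latency_ms` across runs for each mesh and mode yields the curves shown above.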





Resource Consumption

For both Istio and FSM, CPU consumption is higher at a concurrency of 2; this is likely because there is no warm-up phase before the test starts.

Client sidecar CPU

Server sidecar CPU

Client sidecar memory

Server sidecar memory


In this benchmark, we compared the FSM and Istio data planes with limited sidecar resources.

  • Latency: The latency of FSM’s Pipy sidecar proxy is lower than Istio’s Envoy, especially under high concurrency.
  • Resource consumption: With only 2 services, FSM’s Pipy consumes fewer resources than Istio’s Envoy.

From the results, FSM maintains high performance with low resource usage, which makes it particularly suitable for resource-constrained and large-scale scenarios and effectively reduces costs. This is made possible by Pipy’s low-resource, high-performance design.

While FSM is well suited to the cloud, it can also be applied to edge computing scenarios.

