Istio Certified Associate Exam Preparation - Traffic Management Scenarios
Request Routing
Practice configuring Istio to dynamically route requests to multiple versions of a microservice.
Istio's request routing lets you control the flow of traffic between services deployed within a Kubernetes cluster.
Request routing can be used for A/B testing. For instance, it enables the testing of new features by directing a subset of user requests that meet specific criteria to deployments with the latest functionalities.
This approach lets you evaluate a new version against real usage: you observe its telemetry data and collect feedback from the users who were routed to it.
Examples of request match criteria that can be defined in Istio include:
- HTTP Header
- URL Path Prefix
- Query Parameters
Istio's request match criteria are defined using an HTTPMatchRequest object in a VirtualService resource.
Intro
There are two deployments installed in the Kubernetes cluster:
- notification-service-v1
- notification-service-v2
The notification-service is used to send notifications using different channels.
The notification-service-v1 sends notifications using EMAIL(s) only, while notification-service-v2 sends notifications using both EMAIL and SMS.
Check the running pods and services and wait until they are all in status Running.
kubectl get po,svc -L app,version
Note that the notification-service-v1 pods have labels app=notification-service and version=v1. The notification-service-v2 pods have labels app=notification-service and version=v2.
Kubernetes service routing
The Kubernetes notification-service service is currently routing 50% to v1 and 50% to v2, load balancing the requests evenly, therefore:
- ~ 50% of the notifications are sent via EMAIL
- ~ 50% of the notifications are sent both via EMAIL and SMS
Verify it using:
kubectl exec -it tester -- bash -c \
'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Configure destination rule
Istio's request routing is configured using two resources: a DestinationRule and a VirtualService.
Create a DestinationRule resource in the default namespace named notification, containing two subsets v1 and v2 for the notification-service host, with the following properties:
destination rule:
- name: notification
- namespace: default
- host: notification-service
subset 1, targets notification-service pods with label version=v1:
- name: v1
- labels: version=v1
subset 2, targets notification-service pods with label version=v2:
- name: v2
- labels: version=v2
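A minimal sketch of a DestinationRule matching these properties, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: notification
  namespace: default
spec:
  host: notification-service
  subsets:
  # subset v1 selects the pods labeled version=v1
  - name: v1
    labels:
      version: v1
  # subset v2 selects the pods labeled version=v2
  - name: v2
    labels:
      version: v2

Apply it with kubectl apply -f <file>.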
Configure virtual service
Create a VirtualService resource in the default namespace named notification, with only a single default HTTP destination route for host notification-service. The destination route points to the subset named v1, created in the previous step.
virtual service:
- name: notification
- namespace: default
- host: notification-service
http default route:
- destination host: notification-service
- destination subset: v1
Istio will route all requests for host notification-service to the notification service with version v1.
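A sketch of a VirtualService with a single default route, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: notification
  namespace: default
spec:
  hosts:
  - notification-service
  http:
  # single default route: all requests go to subset v1
  - route:
    - destination:
        host: notification-service
        subset: v1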
Verify the result using:
# Should return ["EMAIL"]
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Routing based on HTTP header
Update the notification virtual service resource to add a new route based on a matching HTTP header.
If the request contains the HTTP header testing: true, route the request to v2; otherwise, default to v1.
http default route:
- host: notification-service
- subset: v1
http header match request route:
- header name: testing
- matching type: exact on value true
- destination host: notification-service
- destination subset: v2
The HTTP match request configuration parameters can be found here: HTTPMatchRequest.
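A sketch of the updated VirtualService, assuming the networking.istio.io/v1beta1 API version; the match route must come before the default route, since routes are evaluated in order:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: notification
  namespace: default
spec:
  hosts:
  - notification-service
  http:
  # requests carrying the header "testing: true" go to v2
  - match:
    - headers:
        testing:
          exact: "true"
    route:
    - destination:
        host: notification-service
        subset: v2
  # default route: everything else goes to v1
  - route:
    - destination:
        host: notification-service
        subset: v1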
Verify the result using:
# Should return ["EMAIL","SMS"]
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST -H "testing: true" http://notification-service/notify; \
echo; \
done;'
# Should return ["EMAIL"]
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Routing based on URI prefix
Update the notification virtual service resource to add a new route based on a matching URI prefix.
If the request URI starts with /v2/notify, route the request to v2; otherwise, to v1.
Additionally, rewrite the path /v2/notify to /notify.
http default route:
- host: notification-service
- subset: v1
http URI prefix match request route:
- URI prefix: /v2/notify
- rewrite URI: /notify
- destination host: notification-service
- destination subset: v2
The HTTP match request configuration parameters can be found here: HTTPMatchRequest.
The HTTP route parameters for path rewriting can be found here: HTTPRoute.
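A sketch of the updated VirtualService, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: notification
  namespace: default
spec:
  hosts:
  - notification-service
  http:
  # requests whose URI starts with /v2/notify go to v2,
  # with the path rewritten to /notify before forwarding
  - match:
    - uri:
        prefix: /v2/notify
    rewrite:
      uri: /notify
    route:
    - destination:
        host: notification-service
        subset: v2
  # default route: everything else goes to v1
  - route:
    - destination:
        host: notification-service
        subset: v1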
Verify the result using:
# Should return ["EMAIL","SMS"]
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST http://notification-service/v2/notify; \
echo; \
done;'
# Should return ["EMAIL"]
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Routing based on query parameter
Update the notification virtual service resource to add a route based on a matching query parameter.
If the request contains the query parameter testing=true, route the request to v2; otherwise, to v1.
http default route:
- host: notification-service
- subset: v1
http query param match request route:
- query param key: testing
- query param value: true
- query value match type: exact
- destination host: notification-service
- destination subset: v2
The HTTP match request configuration parameters can be found here: HTTPMatchRequest.
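A sketch of the updated VirtualService, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: notification
  namespace: default
spec:
  hosts:
  - notification-service
  http:
  # requests carrying ?testing=true go to v2
  - match:
    - queryParams:
        testing:
          exact: "true"
    route:
    - destination:
        host: notification-service
        subset: v2
  # default route: everything else goes to v1
  - route:
    - destination:
        host: notification-service
        subset: v1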
Verify the result using:
# Should return ["EMAIL","SMS"]
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify?testing=true; \
echo; \
done;'
# Should return ["EMAIL"]
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Fault Injection
Practice configuring Istio to inject faults to test the resiliency of an application.
In this scenario you will practice injecting faults to test the resiliency of an application.
Istio lets you inject two types of faults between communicating services using HTTPFaultInjection configuration properties:
- delay: simulates network failures, delays, or overloaded upstream services.
- abort: returns error codes to downstream services, simulating a faulty upstream service.
Intro
There are two deployments installed in the Kubernetes cluster:
- booking-service-v1
- notification-service-v1
The booking-service uses the notification-service to send notifications whenever a new booking has been placed.
Check the running pods and services and wait until they are all in status Running.
kubectl get po,svc -L app,version
Test the booking-service by placing a new booking:
kubectl exec -it tester -- \
bash -c 'curl -s -X POST http://booking-service/book; \
echo;'
Configure destination rule
Istio's fault injection is configured using two resources: a DestinationRule and a VirtualService.
Create a DestinationRule resource in the default namespace named notification, containing a single subset v1 for the notification-service host, with the following properties:
destination rule:
- name: notification
- namespace: default
- host: notification-service
subset 1, targets notification-service pods with label version=v1:
- name: v1
- labels: version=v1
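A sketch of the DestinationRule with its single subset, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: notification
  namespace: default
spec:
  host: notification-service
  subsets:
  # subset v1 selects the pods labeled version=v1
  - name: v1
    labels:
      version: v1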
Configure short delay fault injection
Create a VirtualService resource in the default namespace named notification, with only a single default HTTP destination route for host notification-service. The destination route targets the subset named v1, created in the previous step.
Apply a fixed delay fault injection rule to the default HTTP destination route with the following parameters:
virtual service:
- name: notification
- host: notification-service
http default route:
- destination host: notification-service
- destination subset: v1
- delay fixed delay: 3s
- delay percentage: 100
We want to make sure that, despite the latency introduced when calling the notification-service, the booking-service still succeeds: its HTTP client waits up to 5 seconds before giving up on a request.
The HTTP fault injection configuration parameters can be found here: HTTPFaultInjection.
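A sketch of the VirtualService with the fixed delay fault, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: notification
  namespace: default
spec:
  hosts:
  - notification-service
  http:
  - fault:
      delay:
        fixedDelay: 3s     # hold every request for 3 seconds
        percentage:
          value: 100       # apply the delay to all requests
    route:
    - destination:
        host: notification-service
        subset: v1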
Verify that a booking can be placed correctly using:
kubectl exec -it tester -- \
bash -c 'curl -s -w "\n* Response time: %{time_total}s\n" \
-X POST http://booking-service/book'
{"id":"0c17a9c0-d083-4b25-966a-5b9307c57759","notification":["EMAIL"]}
* Response time: 3.024751s
You should see that the response time takes ~3 seconds and that despite the introduced delay a booking can be placed successfully.
Configure high delay fault injection
Now update the notification virtual service fixed delay to 10 seconds instead of 3 seconds.
This should trigger the booking-service HTTP client timeout (hardcoded in the application to 5 seconds) when making a request to the notification-service, forcing the following error message to be returned when placing a new booking:
The service is currently unavailable, please try again later
Use the following configuration properties:
virtual service:
- name: notification
- host: notification-service
http default route:
- destination host: notification-service
- destination subset: v1
- delay fixed delay: 10s
- delay percentage: 100
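Only the fixedDelay value changes; a sketch of the updated http section of the same VirtualService:

  http:
  - fault:
      delay:
        fixedDelay: 10s    # now longer than the 5s client timeout
        percentage:
          value: 100
    route:
    - destination:
        host: notification-service
        subset: v1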
Verify that a booking cannot be placed, due to the service being unavailable, using:
kubectl exec -it tester -- \
bash -c 'curl -s -w "\n* Response time: %{time_total}s\n" \
-X POST http://booking-service/book'
The service is currently unavailable, please try again later.
* Response time: 5.014421s
You should see that the response is now an error (The service is currently unavailable, please try again later) and that the response time is ~5 seconds.
In this case the booking-service REST client timeout kicks in, correctly handling the timeout error from the upstream service, which you simulated using the virtual service fault delay configuration.
Configure abort fault injection
Update the notification virtual service again to inject an HTTP abort fault that triggers HTTP error status 500 instead of a delay.
We want to simulate the upstream notification-service returning status code 500, to test that the booking-service handles the error correctly.
virtual service:
- name: notification
- host: notification-service
http default route:
- destination host: notification-service
- destination subset: v1
- abort http status: 500
- abort percentage: 100
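A sketch of the http section with the abort fault replacing the delay:

  http:
  - fault:
      abort:
        httpStatus: 500    # return HTTP 500 instead of forwarding
        percentage:
          value: 100       # abort all requests
    route:
    - destination:
        host: notification-service
        subset: v1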
Verify that a booking cannot be placed using:
kubectl exec -it tester -- \
bash -c 'curl -s -X POST http://booking-service/book; \
echo;'
You should see that the response now is an error:
Booking could not be placed, notification service returned HTTP status=500
Traffic Shifting
Practice configuring Istio to migrate traffic from an old to a new version of a microservice.
Istio's traffic shifting lets you shift traffic from one version of a service to another.
Traffic shifting is beneficial when deploying features to production with a canary rollout strategy.
This strategy involves initially releasing a software update to a limited user base, allowing thorough testing and acceptance. Upon successful validation, the update is progressively released to all users; otherwise, a rollback is initiated.
In this scenario you will practice traffic shifting by configuring a series of routing rules that redirect a percentage of traffic from one destination to another.
Lab preparation:
There are two deployments installed in the Kubernetes cluster:
- notification-service-v1
- notification-service-v2
The notification-service is used to send notifications using different channels.
The notification-service-v1 sends notifications using EMAIL(s) only, while notification-service-v2 sends notifications using both EMAIL and SMS.
Check the running pods and services and wait until they are all in status Running.
kubectl get po,svc -L app,version
Note that the notification-service-v1 pods have labels app=notification-service and version=v1. The notification-service-v2 pods have labels app=notification-service and version=v2.
Kubernetes service routing
The Kubernetes notification-service service is currently routing 50% to v1 and 50% to v2, load balancing the requests evenly, therefore:
- ~ 50% of the notifications are sent via EMAIL
- ~ 50% of the notifications are sent both via EMAIL and SMS
Verify it using:
kubectl exec -it tester -- bash -c \
'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Configure destination rule
Istio's traffic shifting is configured using two resources: a DestinationRule and a VirtualService.
Create a DestinationRule resource in the default namespace named notification, containing two subsets v1 and v2 for the notification-service host, with the following properties:
destination rule:
- name: notification
- namespace: default
- host: notification-service
subset 1, targets notification-service pods with label version=v1:
- name: v1
- labels: version=v1
subset 2, targets notification-service pods with label version=v2:
- name: v2
- labels: version=v2
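A sketch of the DestinationRule (the same shape as in the request routing scenario), assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: notification
  namespace: default
spec:
  host: notification-service
  subsets:
  # subset v1 selects the pods labeled version=v1
  - name: v1
    labels:
      version: v1
  # subset v2 selects the pods labeled version=v2
  - name: v2
    labels:
      version: v2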
Configure 80% to v1 and 20% to v2 traffic shifting
Create a VirtualService resource in the default namespace named notification, with a single default HTTP destination route for host notification-service.
Define the route with two destinations: one to the subset named v1 with weight 80%, and the second to the subset named v2 with weight 20%.
virtual service:
- name: notification
- namespace: default
- host: notification-service
http default route:
- destination 1 host: notification-service
- destination 1 subset: v1
- destination 1 weight: 80
- destination 2 host: notification-service
- destination 2 subset: v2
- destination 2 weight: 20
This configuration will route 80% of the requests for host notification-service to the notification service with version v1, and 20% of the requests to the notification service with version v2.
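A sketch of the weighted VirtualService, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: notification
  namespace: default
spec:
  hosts:
  - notification-service
  http:
  - route:
    - destination:
        host: notification-service
        subset: v1
      weight: 80           # 80% of the requests stay on v1
    - destination:
        host: notification-service
        subset: v2
      weight: 20           # 20% of the requests shift to v2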
Verify the result using:
kubectl exec -it tester -- bash -c \
'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Roughly 20% of the requests should be forwarded to v2, hence notifications are sent via EMAIL and SMS only ~20% of the time.
Configure 100% v2 traffic shifting
Update the notification virtual service resource to shift all traffic to v2 only:
virtual service:
- name: notification
- namespace: default
- host: notification-service
http default route:
- destination 1 host: notification-service
- destination 1 subset: v1
- destination 1 weight: 0
- destination 2 host: notification-service
- destination 2 subset: v2
- destination 2 weight: 100
This configuration will now route 100% of the requests for host notification-service to the notification service with version v2, hence all notifications are now sent via both EMAIL and SMS.
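Only the weights change; a sketch of the updated http section:

  http:
  - route:
    - destination:
        host: notification-service
        subset: v1
      weight: 0            # v1 no longer receives live traffic
    - destination:
        host: notification-service
        subset: v2
      weight: 100          # all traffic now goes to v2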
Verify the result using:
kubectl exec -it tester -- bash -c \
'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Circuit Breaking
Practice configuring Istio's circuit breaking rules and then test the configuration by intentionally "firing" the circuit breaker.
Intro
Istio's circuit breaking lets you limit the impact of failures, latency spikes, and other network issues that may occur in communication between services deployed in a Kubernetes cluster.
Circuit breaking is an important pattern for developing resilient microservice applications, limiting the propagation of failures and preventing cascading impacts on other services throughout the application.
In this scenario, you will configure circuit-breaking rules and then test the setup by intentionally triggering the circuit breaker, simulating a controlled failure.
Environment Preparation
There are three deployments installed in the Kubernetes cluster:
- notification-service-v2
- notification-service-v3
- fortio
The notification-service-v2 sends notifications using both EMAIL and SMS, while the notification-service-v3 is faulty and always returns HTTP response code 507 (Insufficient Storage).
Fortio is a load testing client which lets you control the number of connections, concurrency, and delays for outgoing HTTP calls.
You will use fortio to intentionally "fire" the circuit breaker.
Check the running pods and services and wait until they are all in status Running.
kubectl get po,svc -L app,version
Configure destination rule circuit breaker by connection pool
Istio's circuit breaking is configured using two resources: a DestinationRule and a VirtualService.
Your task is to configure an HTTP connection pool traffic policy that allows at most 1 pending request and at most 1 request per connection. These settings control the volume of connections to the notification-service, tripping the circuit breaker when they are exceeded. The HTTP connection pool traffic policy is set in a destination rule resource.
Create a DestinationRule resource in the default namespace named notification, containing a single subset named default for the notification-service host, with the following properties:
destination rule:
- name: notification
- namespace: default
- host: notification-service
- traffic policy connection pool http http1MaxPendingRequests: 1
- traffic policy connection pool http maxRequestsPerConnection: 1
default subset, targets notification-service pods with label version=v2:
- name: default
- labels: version=v2
The connection pool traffic policy http1MaxPendingRequests equal to 1 and maxRequestsPerConnection equal to 1 means that if you open more than one connection and issue more than one request concurrently, you should start seeing failures as the istio-proxy opens the circuit to further requests and connections.
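A sketch of the DestinationRule with the connection pool traffic policy, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: notification
  namespace: default
spec:
  host: notification-service
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1   # at most one request may wait in the queue
        maxRequestsPerConnection: 1  # each connection serves a single request
  subsets:
  # the default subset selects the pods labeled version=v2
  - name: default
    labels:
      version: v2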
Configure virtual service
Create a VirtualService resource in the default namespace named notification, with only a single default HTTP destination route for host notification-service. The destination route points to the subset named default, created in the previous step.
virtual service:
- name: notification
- namespace: default
- host: notification-service
http default route:
- destination host: notification-service
- destination subset: default
Istio will route all requests for host notification-service to the default subset.
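A sketch of the VirtualService, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: notification
  namespace: default
spec:
  hosts:
  - notification-service
  http:
  # single default route: all requests go to the default subset
  - route:
    - destination:
        host: notification-service
        subset: default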
Verify the result using:
# Should return ["EMAIL", "SMS"]
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
Test the circuit breaker
Test that the circuit breaker works correctly using fortio.
Call the notification-service with a single connection (-c 1) and send 20 requests (-n 20). You should see that 100% of the requests terminate with success status code 201:
export FORTIO_POD=$(kubectl get pods -l app=fortio -o 'jsonpath={.items[0].metadata.name}')
kubectl exec ${FORTIO_POD} -c fortio -- \
/usr/bin/fortio load -c 1 -qps 0 -n 20 -loglevel Warning \
-X POST http://notification-service/notify
Expected result:
Code 201 : 20 (100.0 %)
Now increase the number of concurrent connections to 3 (-c 3) and run fortio again:
kubectl exec ${FORTIO_POD} -c fortio -- \
/usr/bin/fortio load -c 3 -qps 0 -n 20 -loglevel Warning \
-X POST http://notification-service/notify
You should now see that some of the requests fail with status code 503 (Service Unavailable), meaning the server was unable to handle them.
In this case the circuit breaker kicked in, trapping some of the requests before they reached the service (your numbers below might differ):
Code 201 : 7 (35.0 %)
Code 503 : 13 (65.0 %)
Query the istio-proxy to investigate how many calls have been flagged for circuit breaking so far:
kubectl exec "$FORTIO_POD" -c istio-proxy -- \
pilot-agent request GET stats | \
grep "default|notification-service.default.svc.cluster.local.upstream_rq_pending_overflow"
Traffic Mirroring
Practice configuring Istio to send a copy of live traffic to a mirrored service.
In this scenario you will practice configuring Istio to mirror traffic directed at one service to another service.
Traffic mirroring lets you publish changes to production with as little risk as possible, by sending a copy of live traffic to a mirrored service.
The mirrored traffic happens out of band of the critical request path for the primary service, and it is "fire and forget": the mirror's responses are discarded.
Intro
There are two deployments installed in the Kubernetes cluster:
- notification-service-v1
- notification-service-v2
The notification-service is used to send notifications using different channels.
The notification-service-v1 sends notifications using EMAIL(s) only, while notification-service-v2 sends notifications using both EMAIL and SMS.
Check the running pods and services and wait until they are all in status Running.
kubectl get po,svc -L app,version
Note that the notification-service-v1 pods have labels app=notification-service and version=v1. The notification-service-v2 pods have labels app=notification-service and version=v2.
In this scenario you will route traffic to v1 while mirroring it to v2, so that you can test whether the SMS notification channel implemented in v2 works correctly with live mirrored traffic.
Configure destination rule
Istio's traffic mirroring is configured using two resources: a DestinationRule and a VirtualService.
Create a DestinationRule resource in the default namespace named notification, containing two subsets v1 and v2 for the notification-service host, with the following properties:
destination rule:
- name: notification
- namespace: default
- host: notification-service
subset 1, targets notification-service pods with label version=v1:
- name: v1
- labels: version=v1
subset 2, targets notification-service pods with label version=v2:
- name: v2
- labels: version=v2
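A sketch of the DestinationRule (the same shape as in the previous scenarios), assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: notification
  namespace: default
spec:
  host: notification-service
  subsets:
  # subset v1 selects the pods labeled version=v1
  - name: v1
    labels:
      version: v1
  # subset v2 selects the pods labeled version=v2
  - name: v2
    labels:
      version: v2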
Configure virtual service
Create a VirtualService resource in the default namespace named notification, with a single default HTTP destination route for host notification-service.
Configure the route destination to subset v1, and mirror 100% of the traffic to subset v2, created in the previous step.
virtual service:
- name: notification
- namespace: default
- host: notification-service
http default route:
- destination host: notification-service
- destination subset: v1
- mirror host: notification-service
- mirror subset: v2
- mirror percentage: 100
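A sketch of the mirroring VirtualService, assuming the networking.istio.io/v1beta1 API version:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: notification
  namespace: default
spec:
  hosts:
  - notification-service
  http:
  - route:
    - destination:
        host: notification-service
        subset: v1         # live traffic is served by v1
    mirror:
      host: notification-service
      subset: v2           # a copy of every request is sent to v2
    mirrorPercentage:
      value: 100           # mirror 100% of the traffic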
Test the traffic mirroring
Test that the traffic mirroring works correctly. Generate some traffic to the notification-service:
kubectl exec -it tester -- \
bash -c 'for i in {1..20}; \
do curl -s -X POST http://notification-service/notify; \
echo; \
done;'
This traffic was routed to the notification-service v1 by the virtual service default route. The responses contained only ["EMAIL"] because v1 sends notifications using EMAIL(s) only.
Check the notification-service v1 container logs to verify that the traffic was sent to v1:
kubectl logs \
$(kubectl get pods -o name -l app=notification-service,version=v1)
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /notify --> notification-service/controller.Notify (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :8084
[GIN] 2024/04/09 - 14:52:28 | 201 | 65.316µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 9.418µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 31.398µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 19.89µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 14.721µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 11.922µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 11.918µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 15.445µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 32.766µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 9.63µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 10.382µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 8.839µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 10.156µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 13.785µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 48.305µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 29.916µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 34.01µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:29 | 201 | 8.829µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:29 | 201 | 28.538µs | 127.0.0.6 | POST "/notify"
[GIN] 2024/04/09 - 14:52:29 | 201 | 40.181µs | 127.0.0.6 | POST "/notify"
Now check the notification-service v2 container logs to verify that they contain all the requests mirrored from v1 by Istio:
kubectl logs \
$(kubectl get pods -o name -l app=notification-service,version=v2)
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /notify --> notification-service/controller.Notify (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :8084
[GIN] 2024/04/09 - 14:52:28 | 201 | 59.278µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 51.744µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 21.142µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 19.56µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 10.615µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 10.755µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 10.868µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 13.364µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 11.472µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 44.855µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 12.303µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 11.829µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 10.573µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 10.527µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 24.783µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 13.75µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:28 | 201 | 10.668µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:29 | 201 | 26.246µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:29 | 201 | 9.439µs | 192.168.1.7 | POST "/notify"
[GIN] 2024/04/09 - 14:52:29 | 201 | 9.316µs | 192.168.1.7 | POST "/notify"