Fleet exposes a basic health check at the `/healthz` endpoint. This is the interface to use for simple monitoring and load-balancer health checks.

The `/healthz` endpoint will return an `HTTP 200` status if the server is running and has healthy connections to MySQL and Redis. If there are any problems, the endpoint will return an `HTTP 500` status.
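For example, a monitoring system or load balancer can probe the endpoint with `curl` (the hostname below is a placeholder):

```sh
# Exits 0 on HTTP 200 (healthy); exits non-zero on HTTP 500 (unhealthy).
curl --fail https://fleet.example.com/healthz
```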
Fleet exposes server metrics in a format compatible with Prometheus. Prometheus can be configured to use a wide range of service discovery mechanisms within AWS, GCP, Azure, Kubernetes, and more. See the Prometheus configuration documentation for more information.
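A static configuration is the simplest starting point. Below is a minimal sketch of a scrape job; the hostname and credentials are placeholders, and it assumes Fleet serves metrics at `/metrics` with basic authentication enabled:

```yaml
scrape_configs:
  - job_name: fleet
    scheme: https
    metrics_path: /metrics   # assumed metrics path
    basic_auth:
      username: metrics_user      # placeholder
      password: metrics_password  # placeholder
    static_configs:
      - targets: ["fleet.example.com"]
```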
Prometheus has built-in support for alerting through Alertmanager.
Consider building alerts for conditions such as elevated error rates or latency on Fleet's HTTP endpoints and loss of connectivity to MySQL or Redis.
TODO (Seeking Contributors) Add example alerting configurations
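Until contributed examples are added, here is a minimal sketch of a Prometheus rule file (loaded via `rule_files` in the Prometheus configuration); it assumes the scrape job above is named `fleet`:

```yaml
groups:
  - name: fleet
    rules:
      - alert: FleetServerDown
        expr: up{job="fleet"} == 0   # target failed to scrape
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Fleet server {{ $labels.instance }} is unreachable"
```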
CloudWatch alarms can be configured to support a wide variety of metrics and anomaly detection mechanisms. There are some example alarms in the Terraform reference architecture.
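As one illustration, here is a sketch of a CloudWatch alarm in Terraform that fires when the load balancer sees no healthy Fleet targets; the resource names and dimensions are assumptions to adapt to your environment:

```hcl
resource "aws_cloudwatch_metric_alarm" "fleet_healthy_hosts" {
  alarm_name          = "fleet-no-healthy-hosts"
  comparison_operator = "LessThanThreshold"
  evaluation_periods  = 2
  metric_name         = "HealthyHostCount"
  namespace           = "AWS/ApplicationELB"
  period              = 60
  statistic           = "Minimum"
  threshold           = 1
  # Hypothetical references; point these at your ALB and target group.
  dimensions = {
    TargetGroup  = aws_lb_target_group.fleet.arn_suffix
    LoadBalancer = aws_lb.fleet.arn_suffix
  }
}
```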
Prometheus provides basic graphing capabilities, and integrates tightly with Grafana for sophisticated visualizations.
Fleet is designed to scale to hundreds of thousands of online hosts. The Fleet server scales horizontally to support higher load.
Scaling Fleet horizontally is as simple as running more Fleet server processes connected to the same MySQL and Redis backing stores. Typically, operators front Fleet server nodes with a load balancer that will distribute requests to the servers. All APIs in Fleet are designed to work in this arrangement by simply configuring clients to connect to the load balancer.
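As a sketch, here are two server processes pointed at the same backing stores (hostnames and ports are placeholders; the flags mirror Fleet's server configuration keys and can equally be set via environment variables or a config file):

```sh
# Each process serves the full Fleet API; a load balancer fronts both.
fleet serve --mysql_address=mysql.internal:3306 --redis_address=redis.internal:6379 --server_address=0.0.0.0:8080
fleet serve --mysql_address=mysql.internal:3306 --redis_address=redis.internal:6379 --server_address=0.0.0.0:8081
```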
The Fleet/osquery system is resilient to loss of availability. Osquery agents will continue executing the existing configuration and buffering result logs during downtime due to lack of network connectivity, server maintenance, or any other reason. Buffering in osquery can be configured with the `--buffered_log_max` flag.
Note that short downtimes are expected during Fleet server upgrades that require database migrations.
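For example, to let hosts buffer more results through a longer outage, raise the limit in the osquery flagfile (the value below is illustrative):

```
# Maximum number of result/status logs osquery will buffer while the
# server is unreachable.
--buffered_log_max=2000000
```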
If performance issues are encountered with the MySQL or Redis servers, use the extensive resources available online to understand and optimize them. Please also file an issue with details about the problem so that Fleet developers can work to fix it.
For performance issues in the Fleet server process, please file an issue with details about the scenario, and attach a debug archive. Debug archives can also be submitted confidentially through other support channels.
Use the `fleetctl debug archive` command to generate an archive of Fleet's full suite of debug profiles. See the fleetctl setup guide for details on configuring `fleetctl`. The resulting `.tar.gz` archive will be written to the current directory.
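For example, run the command from a machine where `fleetctl` is configured and authenticated:

```sh
# Collects Fleet's debug profiles and writes a .tar.gz archive to the
# current directory.
fleetctl debug archive
```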
In most configurations, the `fleetctl` client is configured to make requests to a load balancer that will proxy the requests to each server instance. This can be problematic when trying to debug a performance issue on a specific server. To target an individual server, create a new `fleetctl` context that uses the direct address of the server. For example:
```sh
fleetctl config set --context server-a --address https://server-a:8080
fleetctl login --context server-a
fleetctl debug archive --context server-a
```
The `fleetctl debug archive` command retrieves information generated by Go's `net/http/pprof` package. In most scenarios this should not include sensitive information; however, it does include the command-line arguments to the Fleet server. If the Fleet server receives sensitive credentials via CLI argument (rather than environment variables or a config file), this information should be scrubbed from the archive in the `cmdline` file.
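Here is a sketch of one way to scrub the archive before sharing (the archive filename is hypothetical; adjust it to the generated name):

```sh
mkdir scrubbed
tar -xzf fleet-debug.tar.gz -C scrubbed
find scrubbed -name cmdline -delete   # drop the pprof cmdline profile
tar -czf fleet-debug-scrubbed.tar.gz -C scrubbed .
```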