For bug reports, please use the GitHub issue tracker.
For questions and discussion, please join us in the #fleet channel of osquery Slack.
Yes. Fleet scales horizontally out of the box as long as all of the Fleet servers are connected to the same MySQL and Redis instances.
Note that osquery logs will be distributed across the Fleet servers.
Read the performance documentation for more.
This can be caused by a variety of problems. The best way to debug is usually to add `--verbose --tls_dump` to the arguments provided to `osqueryd` and look at the logs for the server communication.
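For example, a debug invocation might look like this (the flagfile path is illustrative; use the path from your own deployment):

```sh
# Run osqueryd in the foreground with verbose logging and a dump of
# TLS request/response bodies (flagfile path is an example):
sudo osqueryd --flagfile=/etc/osquery/osquery.flags --verbose --tls_dump
```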
- `Connection refused`: The server is not running, or is not listening on the address specified. Is the server listening on an address that is reachable from the host running osquery? Do you have a load balancer that might be blocking connections? Try testing with `curl -v -X POST https://fleetserver:port/api/v1/osquery/enroll`.
- `No node key returned`: Typically this indicates that the osquery client sent an incorrect enroll secret that was rejected by the server. Check what osquery is sending by looking in the logs near this error.
- `certificate verify failed`: See How do I fix "certificate verify failed" errors from osqueryd.
- `bad record MAC`: When generating the certificate for your Fleet server, ensure you set the hostname to the FQDN or the IP of the server. This error is common when setting up Fleet servers and accepting the defaults when generating certificates using `openssl`.

Osquery requires that all communication between the agent and Fleet is over a secure TLS connection. For the safety of osquery deployments, there is no (convenient) way to circumvent this check.

- Try specifying the path to the full certificate chain used by the server with the `--tls_server_certs` flag in `osqueryd`. This is often unnecessary when using a certificate signed by an authority trusted by the system, but is mandatory when working with self-signed certificates. In all cases it can be a useful debugging step.
- Ensure that the address osquery connects to matches the certificate. If osquery connects to `https://localhost:443`, but the certificate is for `https://fleet.example.com`, the verification will fail.
If both the existing and new certificates verify with osquery's default root certificates (such as a certificate issued by a well-known certificate authority) and no certificate chain was deployed with osquery, there is no need to deploy a new certificate chain.
If osquery has been deployed with the full certificate chain (using
--tls_server_certs), deploying a new certificate chain is necessary to allow for verification of the new certificate.
Deploying a certificate chain cannot be done centrally from Fleet.
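As a sketch, building and deploying a chain file might look like the following (file names and paths are hypothetical; use your own certificate files):

```sh
# Concatenate the server certificate with any intermediates and the root
# into a single PEM chain file (file names are examples):
cat server.pem intermediate.pem root.pem > fleet-chain.pem

# Distribute fleet-chain.pem to each host and point osqueryd at it:
osqueryd --tls_server_certs=/etc/osquery/fleet-chain.pem ...
```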
Seeing your proxy's requests fail with a certificate error? To get your proxy server's HTTP client to work with a local Fleet when using a self-signed cert, disable SSL / self-signed verification in the client.
The exact solution depends on the request client you are using. For example, when using Node.js with Sails.js, you can work around this in requests sent with `await sails.helpers.http.get()` by lifting your app with the `NODE_TLS_REJECT_UNAUTHORIZED` environment variable set to `0`:

```
NODE_TLS_REJECT_UNAUTHORIZED=0 sails console
```
Redis has an internal buffer limit for pubsub that Fleet uses to communicate query results. If this buffer is filled, extra data is dropped. To fix this, we recommend disabling the buffer size limit. Most installs of Redis should have plenty of spare memory to not run into issues. More info about this limit can be found here and here (search for client-output-buffer-limit).
We recommend a config like the following:
client-output-buffer-limit pubsub 0 0 60
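If you'd rather apply the change to a running Redis server without restarting, the same limit can be set at runtime (note that `CONFIG SET` changes are not persisted to the config file unless you also run `CONFIG REWRITE`):

```sh
# Remove the pubsub hard/soft buffer limits on a live Redis server:
redis-cli config set client-output-buffer-limit "pubsub 0 0 60"
```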
Osquery provides the enroll secret only during the enrollment process. Once a host is enrolled, the node key it receives remains valid for authentication, independent of the enroll secret.
Currently enrolled hosts do not necessarily need enroll secrets updated, as the existing enrollment will continue to be valid as long as the host is not deleted from Fleet and the osquery store on the host remains valid. Any newly enrolling hosts must have the new secret.
Deploying a new enroll secret cannot be done centrally from Fleet.
Primarily, this would be done by changing the `--tls_hostname` and enroll secret to the values for the new server. In some circumstances (see What do I need to do to change the Fleet server TLS certificate?) it may be necessary to deploy a new certificate chain configured with `--tls_server_certs`.
These configurations cannot be managed centrally from Fleet.
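For example, an osquery flagfile pointing agents at the new server might contain something like this (the hostname and paths are placeholders):

```
# Example osquery flags for pointing agents at a new Fleet server
# (values are placeholders):
--tls_hostname=new-fleet.example.com
--enroll_secret_path=/etc/osquery/enroll_secret
--tls_server_certs=/etc/osquery/fleet-chain.pem
```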
This error usually indicates that the Fleet server has run out of file descriptors. Fix this by increasing the
ulimit on the Fleet process. See the
LimitNOFILE setting in the example systemd unit file for an example of how to do this with systemd.
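As a sketch, the relevant portion of a systemd unit might look like this (the limit value is an example; size it for your deployment):

```
[Service]
LimitNOFILE=8192
```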
Some deployments may benefit by setting the
--server_keepalive flag to false.
This has also been seen as a symptom of a different issue: if you're deploying on AWS burstable (T-type) instances, there are different scenarios where activity can increase and the instances will burst. If they run out of CPU credits, they'll stop processing, leaving the file descriptors open.
Absolutely! If you're updating from the current major release of Fleet (v4), you can install the latest version without upgrading to each minor version along the way. Just make sure to back up your database in case anything odd does pop up!
If you're updating from an older version (we'll use Fleet v3 as an example), it's best to make a few stops along the way, upgrading through each major release in sequence rather than jumping straight to the latest.
Taking it a bit slower on major releases gives you an opportunity to better track down where any issues may have been introduced.
This could be caused by a mismatched connection limit between the Fleet server and the MySQL server that prevents Fleet from fully utilizing the database. First, determine how many open connections your MySQL server supports, then set the `--mysql_max_open_conns` and `--mysql_max_idle_conns` flags appropriately.
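For example, you might check the server's limit and then size Fleet's connection pool below it (connection details and values are placeholders):

```sh
# Check how many connections the MySQL server allows:
mysql -u<username> -p -e "SHOW VARIABLES LIKE 'max_connections';"

# Then start Fleet with pool sizes below that limit (values are examples):
fleet serve --mysql_max_open_conns=50 --mysql_max_idle_conns=50
```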
First, check if you have a version of MySQL installed that is at least 5.7. Then, make sure that you currently have a MySQL server running.
The next step is to make sure the credentials for the database match what is expected. Test your ability to connect to the database with
mysql -u<username> -h<hostname_or_ip> -P<port> -D<database_name> -p.
If you're successful connecting to the database and still receive a database connection error, you may need to specify your database credentials when running
fleet prepare db. It's encouraged to put your database credentials in environment variables or a config file.
```
fleet prepare db \
  --mysql_address=<database_address> \
  --mysql_database=<database_name> \
  --mysql_username=<username> \
  --mysql_password=<database_password>
```
No. Currently, Fleet is only available for self-hosting on premises or in the cloud.
Fleet is tested with MySQL 5.7.21 and 8.0.28. Newer versions of MySQL 5.7 and MySQL 8 typically work well. AWS Aurora requires at least version 2.10.0. Please avoid using MariaDB or other MySQL variants that are not officially supported. Compatibility issues have been identified with MySQL variants and these may not be addressed in future Fleet releases.
The MySQL user that `fleet prepare db` uses to interact with the database (set via the environment variable `FLEET_MYSQL_USERNAME` or the command-line flag `--mysql_username=<username>`) needs to be able to create, alter, and drop tables, as well as create temporary tables.
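A minimal sketch of granting those privileges, assuming a database named `fleet` and a user named `fleet` (both names are examples, and the grant includes ordinary read/write privileges alongside the ones listed above):

```sh
mysql -u root -p -e "
  CREATE USER 'fleet'@'%' IDENTIFIED BY '<password>';
  GRANT CREATE, ALTER, DROP, CREATE TEMPORARY TABLES,
        SELECT, INSERT, UPDATE, DELETE, INDEX
    ON fleet.* TO 'fleet'@'%';
"
```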
You can deploy MySQL or MariaDB any way you want. We recommend using managed/hosted MySQL so you don't have to think about it, but you can think about it more if you want. Read replicas are supported. You can read more about MySQL configuration here.
Duplicate host enrollment is when more than one host enrolls in Fleet using the same identifier (hardware UUID or osquery generated UUID).
Typically, this is caused by cloning a VM image with an already-enrolled osquery client, which results in duplicate osquery-generated UUIDs. To resolve this issue, it is advised to configure `--osquery_host_identifier=uuid` (which will use the hardware UUID), and then delete the associated host in the Fleet UI.
In rare instances, VM hypervisors have been seen to duplicate hardware UUIDs. When this happens, `--osquery_host_identifier=uuid` will not resolve the duplicate enrollment problem. Sometimes the problem can be resolved by setting `--osquery_host_identifier=instance` (which will use the osquery-generated UUID), and then deleting the associated host in the Fleet UI.
Find more information about host identifiers here.
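For reference, the host identifier is set on the Fleet server at startup, not on the agents (a sketch):

```sh
# Choose which identifier Fleet uses to deduplicate hosts:
fleet serve --osquery_host_identifier=uuid
```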
Enroll secrets are valid until you delete them.
That is up to you! Some organizations have internal goals around rotating secrets. Having multiple secrets allows some of them to work at the same time the rotation is happening. Another reason you might want to use multiple enroll secrets is to use a certain enroll secret to auto-enroll hosts into a specific team (Fleet Premium).
Rotating enroll secrets follows this process: add the new secret, transition hosts to it, and then remove the old secret.
To do this with `fleetctl` (assuming the existing secret is `oldsecret` and the new secret is `newsecret`):
Begin by retrieving the existing secret configuration:
```
$ fleetctl get enroll_secret
---
apiVersion: v1
kind: enroll_secret
spec:
  secrets:
  - created_at: "2021-11-17T00:39:50Z"
    secret: oldsecret
```
Apply the new configuration with both secrets:
```
$ echo '
---
apiVersion: v1
kind: enroll_secret
spec:
  secrets:
  - created_at: "2021-11-17T00:39:50Z"
    secret: oldsecret
  - secret: newsecret
' > secrets.yml
$ fleetctl apply -f secrets.yml
```
Now transition clients to using only the new secret. When the transition is completed, remove the old secret:
```
$ echo '
---
apiVersion: v1
kind: enroll_secret
spec:
  secrets:
  - secret: newsecret
' > secrets.yml
$ fleetctl apply -f secrets.yml
```
At this point, the old secret will no longer be accepted for new enrollments and the rotation is complete.
A similar process may be followed for rotating team-specific enroll secrets. For teams, the secrets are managed in the team yaml.
An `unknown column` error typically occurs when the database migrations haven't been run during the upgrade process.
Check out the documentation on running database migrations to resolve this issue.
If you would like to manage hosts that can travel outside your VPN or intranet we recommend only exposing the "/api/v1/osquery" endpoint to the public internet.
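As an illustrative sketch (not a complete configuration), a reverse proxy in front of Fleet might forward only the osquery endpoints from its public listener; the hostname and port below are hypothetical:

```
# Hypothetical nginx fragment: expose only the osquery API publicly,
# forwarding it to an internal Fleet server.
location /api/v1/osquery/ {
    proxy_pass https://fleet.internal.example.com:8080;
}
```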
Fleet requires at least MySQL version 5.7.
To migrate from Fleet Free to Fleet Premium, once you get a Fleet license, set it as a parameter to `fleet serve`, either as an environment variable using `FLEET_LICENSE_KEY` or in Fleet's config file. See here for more details. Note: you don't need to redeploy Fleet after the migration.
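For example (the license string is a placeholder):

```sh
# Set the license via environment variable when starting Fleet:
FLEET_LICENSE_KEY=<your_license_key> fleet serve
```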
Most likely, yes! While we'd definitely recommend keeping Fleet up to date in order to take advantage of new features and bug patches, most legacy versions should work with Redis 6. Just keep in mind that we likely haven't tested your particular combination, so you may run into some unforeseen hiccups.
If you notice something we've missed or could be improved on, please follow this link and submit a pull request to the Fleet repo.