Occasionally useful notes


The official FreeBSD package repositories, or more specifically the CDN delivering these packages as a service to the public, can be slow depending on where in the world you are. Also, the more bandwidth and requests per second you add to their load (e.g. with “clever” parallel pkg-fetch(8) scripts), the less there is for everyone else.

Problem analysis:

The official pkg(7) repository databases are signed with a FreeBSD project key pair (copies of the public halves used for validation can be found in /usr/share/keys/pkg/). The repository databases in turn contain strong cryptographic hashes of all contained packages. This means that while FreeBSD packages are fetched via HTTPS by default, the transport encryption is not required for integrity.
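Because the signed catalogue carries a strong hash for every package, integrity survives even an untrusted caching transport. A minimal sketch of that validation step, using openssl dgst as a stand-in for pkg(8)'s internal SHA-256 check (the file and "catalogue" hash below are made up for illustration):

```shell
# A mirror (or cache) hands us a package file; the expected SHA-256
# comes from the signed repository catalogue, NOT from the transport.
printf 'pkg payload' > /tmp/demo.pkg
expected=$(openssl dgst -sha256 -r /tmp/demo.pkg | cut -d' ' -f1)

# Simulate an on-path modification of the downloaded file.
printf 'tampered payload' > /tmp/demo.pkg
actual=$(openssl dgst -sha256 -r /tmp/demo.pkg | cut -d' ' -f1)

# pkg(8) rejects the download on mismatch, so a broken or malicious
# cache can cause a failed fetch but never a forged package.
if [ "$expected" = "$actual" ]; then
    echo "hash OK"
else
    echo "hash MISMATCH"
fi
```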

The FreeBSD package CDN mirrors provide a valid ETag, but other than that are configured to be “hostile” to third party caching (Cache-Control: max-age=0, private).

The pkg(7) code assumes that repositories (database + packages) are in sync. To avoid user frustration our cache must not return stale cache hits.

A possible solution:

Use Varnish as an HTTP cache and validate every cache hit with a HEAD request against the FreeBSD package CDN, comparing the latest ETag with the cached response. Use stunnel to maintain transport encryption from the cache to the FreeBSD package CDN, since Varnish does not support HTTPS directly.

Find the fastest upstream servers for your cache.

The different FreeBSD package CDN servers offer vastly different bandwidth and latency depending on where in the world your cache is. To find the fastest servers to use as upstreams for your cache, install the fastest_pkg command (pkg install ports-mgmt/fastest_pkg). Run fastest_pkg on your intended caching server. Save the output for later (and ignore its recommendation to change your configuration).
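Ranking the measurements for the upstream list below can be scripted. This sketch assumes fastest_pkg-like output of one line per mirror in "<mirror> <MB/s>" form; the hostnames and numbers are hypothetical:

```shell
# Hypothetical measurements in "<mirror> <MB/s>" form.
cat > /tmp/mirrors.txt <<'EOF' 4.8 16.2 10.3
EOF

# Rank by bandwidth, fastest first, for the upstream list.
sort -k2,2 -rn /tmp/mirrors.txt
```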

Setup the TLS proxy.

Run pkg install security/stunnel to install stunnel (or pkg install --yes -- security/stunnel to install without asking for confirmation).

Reduce the /usr/local/etc/stunnel/stunnel.conf configuration from the lengthy example to just this:

; ****************************************
; * Global options                       *
; ****************************************

; (Useful for troubleshooting)
;	foreground = yes
;	debug      = info
;	output     = /tmp/stunnel.log

; ****************************************
; * Include configuration file fragments *
; ****************************************

include = /usr/local/etc/stunnel/conf.d

Configure stunnel to run as a daemon by placing the following fragment in /usr/local/etc/stunnel/conf.d/00-daemon.conf:

pid    = /var/run/stunnel/
setuid = stunnel
setgid = stunnel

Now it's time to list the mirror servers you consider fast enough to be useful upstreams in /usr/local/etc/stunnel/conf.d/pkg.conf (sorted in descending order by measured bandwidth), e.g.:

client      = yes
accept      =
connect     =
verifyChain = yes
CApath      = /etc/ssl/certs
checkHost   =
OCSPaia     = yes

client      = yes
accept      =
connect     =
verifyChain = yes
CApath      = /etc/ssl/certs
checkHost   =
OCSPaia     = yes

client      = yes
accept      =
connect     =
verifyChain = yes
CApath      = /etc/ssl/certs
checkHost   =
OCSPaia     = yes

; ... Continue with more servers. ...

Now enable and start the stunnel service (service stunnel enable followed by service stunnel start).

You can manually test your HTTP to HTTPS proxy with fetch -vv -o /dev/null http://localhost:8000 (increment the port number for each server). If you're already familiar with a different tool (e.g. curl, wget) you can use it instead.
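Testing all forwards one after another can be scripted. This sketch assumes three stunnel services on ports 8000 through 8002, matching the example configuration above; it only prints the commands so you can review them before piping the output into sh:

```shell
# Emit one test command per local stunnel forward (ports 8000-8002 assumed).
port=8000
while [ "$port" -le 8002 ]; do
    echo "fetch -vv -o /dev/null http://localhost:${port}"
    port=$((port + 1))
done
```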

Setup the HTTP cache.

Unless configured otherwise, Varnish will consume as much main memory as possible. Assuming the package cache is supposed to be just one service among many on your server, let's define a new login class for Varnish and restrict it to 1 GiB of resident memory (if there is memory pressure).

Append the following lines to /etc/login.conf to define a memory limited login class named varnish based on the daemon class:

varnish:\
	:memoryuse=1g:\
	:tc=daemon:
Any time /etc/login.conf is modified the read-only database /etc/login.conf.db has to be regenerated using cap_mkdb(1) like this: cap_mkdb /etc/login.conf.

Now install Varnish by running pkg install www/varnish7.

Use sysrc(8) to configure the varnishd rc.d service (login class, listen address and port, configuration file to load, storage to use for the cache content):

sysrc \
  varnishd_login_class="varnish" \
  varnishd_listen=":80" \
  varnishd_config="/usr/local/etc/varnish/pkg.vcl" \
  varnishd_storage="malloc,1g"

Place the following configuration into /usr/local/etc/varnish/pkg.vcl. Change the backends according to your fastest_pkg output (and cut-off point for lowest acceptable bandwidth):

vcl 4.0;

# This configuration uses the workaround described in [1] to validate cache hits using a HTTP HEAD request
# with the cached ETag to work around the "Cache-Control: max-age=0, private" returned by FreeBSD package mirrors.
# [1] :
#       Archived at:

import directors;

# Define a backend for each FreeBSD package mirror.
# Use slow health checks to reduce the load on the project infrastructure.
# The backend definitions are sorted by measured bandwidth.

backend fra { # 16.2 MB/s
	.host = "";
	.port = "8000";
	.probe = {
		.url       = "/";
		.timeout   = 5s;
		.interval  = 69s;
		.window    = 23;
		.threshold = 5;
	};
}

backend sjb { # 10.3 MB/s
	.host = "";
	.port = "8001";
	.probe = {
		.url       = "/";
		.timeout   = 5s;
		.interval  = 69s;
		.window    = 23;
		.threshold = 5;
	};
}

backend nyi { # 4.8 MB/s
	.host = "";
	.port = "8002";
	.probe = {
		.url       = "/";
		.timeout   = 5s;
		.interval  = 69s;
		.window    = 23;
		.threshold = 5;
	};
}

# Add more servers as needed...

# On load create a fallback type director and populate it
# with the known FreeBSD package mirrors in order of bandwidth
# measured by fastest_pkg (from ports-mgmt/fastest_pkg).
sub vcl_init {
	new pkg = directors.fallback();
	pkg.add_backend(fra); # 16.2 MB/s
	pkg.add_backend(sjb); # 10.3 MB/s
	pkg.add_backend(nyi); # 4.8 MB/s
}

# Use restarts to probe the cache validity by ETag.
# Possible states are:
#   - init (req.restarts == 0)
#   - "cache_check"
#   - "backend_check"
#   - "valid"
# On misses no restarts are performed. On hits
# the following state machine runs multiple steps:
# ┌────────────────┐
# │                ▼
# │            ┌───────┐
# │     ┌──────┤ recv  ├─────┐
# │     │      └───────┘     │
# │     ▼                    ▼
# │ ┌───────┐            ┌───────┐   ┌──────────────────┐
# │ │ hash  │            │ pass  ├──▶│ backend_fetch    │
# │ └───┬───┘            └───────┘   └─────────┬────────┘
# │     ▼                                      ▼
# │ ┌───────┐  ┌───────┐             ┌──────────────────┐
# ├─┤ hit   │  │ miss  │             │ backend_response │
# │ └───────┘  └───────┘             └─────────┬────────┘
# │                                            │
# │ ┌─────────┐                                │
# └─┤ deliver │◀───────────────────────────────┘
#   └─────────┘
# - First start:
#   * Save the Etag.
#   * Restart, because we need to go to the backend.
# - 1st restart:
#   * Pass, because we don't necessarily want to put the object in cache.
#   * Use a HEAD request to fetch only the headers (including the ETag).
#   * If the backend returns a different ETag evict the conflicting cache entry.
#   * Restart (again).
# - 2nd (and last) restart
#   * Just act normal this time.

# Setup state machine and begin recording the cache hit/miss.
sub vcl_recv {
	# The first time (not yet restarted).
	if (req.restarts == 0) {
		# Use the failover director of FreeBSD package mirrors.
		set req.backend_hint = pkg.backend();

		# Clear the cache hit/miss header.
		unset req.http.X-Cache;

		# Set the internal state to "cache_check".
		set req.http.X-State = "cache_check";
		return (hash);
	# The second time (first restart).
	} else if (req.http.X-State == "backend_check") {
		return (pass);
	# The third (and last) time.
	} else {
		return (hash);
	}
}

# Hash only the URL not the Host/IP address allowing clients to share
# the cache no matter under which Host/IP address they use it.
sub vcl_hash {
	return (lookup);
}

# Depending on the X-State...
sub vcl_hit {
	# Save the ETag of the cached object for validation.
	if (req.http.X-State == "cache_check") {
		set req.http.X-State = "backend_check";
		set req.http.etag = obj.http.etag;
		return (restart);
	# Record the cache hit.
	} else {
		if (obj.ttl <= 0s && obj.grace > 0s) {
			set req.http.X-Cache = "hit graced";
		} else {
			set req.http.X-Cache = "hit";
		}
		return (deliver);
	}
}

# Record the cache miss.
sub vcl_miss {
	set req.http.X-Cache = "miss";
}

# Record the cache pass.
sub vcl_pass {
	set req.http.X-Cache = "pass";
}

# Record pipelined uncachable request.
sub vcl_pipe {
	set req.http.X-Cache = "pipe uncacheable";
}

# Record synthetic responses
sub vcl_synth {
	set req.http.X-Cache = "synth synth";

	# Show the information in the response.
	set resp.http.X-Cache = req.http.X-Cache;
}

# Change the HTTP method to HEAD when probing the backend
# FreeBSD package mirrors for the latest ETag.
sub vcl_backend_fetch {
	if (bereq.http.X-State == "backend_check") {
		set bereq.method = "HEAD";
		set bereq.http.method = "HEAD";
	}
}

# Evict invalidated cache entries.
sub vcl_backend_response {
	# Is this the response to the HTTP HEAD probing request?
	if (bereq.http.X-State == "backend_check") {
		# Evict objects that failed ETag validation.
		if (bereq.http.etag != beresp.http.etag) {
			ban("obj.http.etag == " + bereq.http.etag);
		}
	# Otherwise cache successful responses.
	} else if (beresp.status == 200) {
		# The FreeBSD package mirrors return "Cache-Control: max-age=0, private"
		# which would prevent caching. Drop it and set our own TTL instead.
		unset beresp.http.cache-control;
		set beresp.ttl = 7d;

		# Keep the response in cache for 7 days if the response has validating headers.
		if (beresp.http.ETag || beresp.http.Last-Modified) {
			set beresp.keep = 7d;
		}
	}
}

# Make sure to only deliver real responses.
sub vcl_deliver {
	# The client wants the real response not the response to the probe
	# for the latest ETag so restart (again).
	if (req.http.X-State == "backend_check") {
		set req.http.X-State = "valid";
		return (restart);
	}

	# Append cachability to the X-Cache header.
	if (obj.uncacheable) {
		set req.http.X-Cache = req.http.X-Cache + " uncacheable";
	} else {
		set req.http.X-Cache = req.http.X-Cache + " cached";
	}

	# Show the information in the response.
	set resp.http.X-Cache = req.http.X-Cache;
}

The Varnish package installs two services: varnishd and varnishlog. The latter consumes the logs from an in-memory buffer and writes them to the file system. Enable both services using service varnishd enable; service varnishlog enable and start them with service varnishd start; service varnishlog start.

Put this in /usr/local/etc/newsyslog.conf.d/varnish.conf to enable log rotation via newsyslog(8):

/var/log/varnish.log varnishlog:varnish 640 7 * @T00 B /var/run/

To enjoy your new cache, put this in /usr/local/etc/pkg/repos/FreeBSD.conf:

FreeBSD {
	url         = "http://localhost/${ABI}/latest"
	mirror_type = "NONE"
}

Replace localhost with your cache's resolvable hostname or IP address as needed.


IPv6 has the concept of link scope. From IPv6's point of view a bridge interface is a single link (just like multiple hosts connected to a physical Ethernet switch), but if there are IP addresses configured on the member interfaces of a FreeBSD bridge, the kernel considers these interfaces their own links with associated link scope. This will cause IPv6 to break. The only correct configuration is to have no IP addresses configured on the member interfaces. The IP addresses belong exclusively on the bridge interface itself. The member interfaces should be treated as pure Ethernet (OSI layer 2) interfaces instead of both OSI layer 2 (Ethernet) and OSI layer 3 (IP).

A further complication is that the bridge has to have unmodified access to the Ethernet frames, but most 1 Gb/s and faster NICs as well as virtual network interfaces have offloading features like TSO and LRO that rewrite the small (by modern standards) 1500 byte Ethernet frames into “fake” larger frames to reduce the CPU overhead of processing the packet inside each frame. While useful to IP hosts, these offloading features have to be disabled to bridge Ethernet frames or to route and filter the IP packets inside them.
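Which ifconfig arguments you need depends on the capabilities shown in the options=...<...> line of your interface. As a sketch (the options line below is a made-up sample, and the capability-to-flag mapping covers only the TSO/LRO variants mentioned here), the flags to disable can be derived mechanically:

```shell
# Sample options line as captured from `ifconfig ix1` (hypothetical values).
line='options=e53fbb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO>'

# Map offload capabilities to the ifconfig arguments that disable them.
echo "$line" | tr ',<>' '\n\n\n' | awk '
    /^TSO[46]?$/   { print "-tso" }
    /^LRO$/        { print "-lro" }
    /^VLAN_HWTSO$/ { print "-vlanhwtso" }
' | sort -u
```

The resulting list is exactly what gets appended to the ifconfig_<ifn> entry in /etc/rc.conf.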

⚠️ Warning ⚠️ It's easy to lock yourself out of your system by migrating from using its NICs directly to adding them to a bridge. Don't continue unless you have a usable out of band console (keyboard and video console, configured serial port, IPMI SOL/KVM, etc.).

A high-level overview of the configuration.

  1. Change the default behaviour of the bridge driver.
  2. Prepare first bridge member interface.
  3. Create the bridge interface adding its first member interface.
  4. Configure the bridge interface.
  5. Add or remove other member interfaces as needed.

Change the default behaviour of the bridge driver.

The best way I've found to assign a stable MAC address to the bridge is to add to /etc/sysctl.conf and run sysctl -f /etc/sysctl.conf. This causes bridge interfaces to inherit the MAC address of their first member interface instead of generating a badly randomised one. I find this preferable to manually assigning each bridge a stable MAC address because the bridge MAC address will be the one expected by whoever provisioned the member interface (e.g. your hosting provider expecting your system to acquire its IP address via DHCP).

Prepare first bridge member interface.

The first bridge member interface connects the bridge interface itself and its other member interfaces to the network outside your FreeBSD system. Network interfaces have to be “up” to forward traffic. Most types of network interfaces become implicitly “up” by assigning IP addresses to them, but bridge member interfaces must NOT have IP addresses configured on them. The member interfaces must still be brought up. Assuming your first bridge member interface is ix1 add ifconfig_ix1="up" to /etc/rc.conf using sysrc ifconfig_ix1=up.

In the case of my test system I also had to disable the TSO offloading functionality. I carefully inspected the options=...<...> line from the ifconfig ix1 output looking for TSO and LRO offloading features to disable and added the corresponding ifconfig arguments to /etc/rc.conf like this: sysrc ifconfig_ix1+=' -tso -vlanhwtso'.

Make sure there are no other interfering ifconfig_<ifn>_* entries left in /etc/rc.conf at this point e.g. using grep ix1 /etc/rc.conf.

Create the bridge interface adding its first member interface.

The netif rc.d script handles interface configuration and, in the case of cloned (pseudo-)interfaces, also creates them. Use sysrc cloned_interfaces+=bridge0 to add bridge0 to the list of interfaces to be cloned. The rc.d script can also pass further arguments to the ifconfig invocation used to create the interfaces. These are taken from the create_args_<ifn> variables. Bridge interfaces don't by default automatically generate the link-local address required for correct IPv6 operation from their MAC address. Run sysrc create_args_bridge0='inet6 auto_linklocal -ifdisabled addm ix1' to prepare the bridge for IPv6 and add its first member interface as early as possible.

Configure the bridge interface.

Configure IPv4 through the ifconfig_bridge0 entry in /etc/rc.conf e.g. by running sysrc ifconfig_bridge0='up DHCP'.

Configure IPv6 through the ifconfig_bridge0_ipv6 entry in /etc/rc.conf e.g. by running sysrc ifconfig_bridge0_ipv6='inet6 accept_rtadv'. It's also a good idea to enable rtsold and configure it to use the bridge interface e.g. using sysrc rtsold_enable=YES rtsold_flags='-i -m bridge0'.

Add or remove other member interfaces as needed.

The bridge is now connected to the outside by its first member interface and configured as an IPv4 and IPv6 enabled network interface. You can now add additional members e.g. tap/vmnet interfaces for bhyve guests or one end of an epair for vnet enabled jails.

TL;DR: copy-pasta is my favourite food and damn the consequences

sysrc ifconfig_ix1='up -tso -vlanhwtso'
sysrc create_args_bridge0='inet6 auto_linklocal -ifdisabled addm ix1'
sysrc cloned_interfaces+='bridge0'
sysrc ifconfig_bridge0='up DHCP'
sysrc ifconfig_bridge0_ipv6='inet6 accept_rtadv'
sysrc rtsold_flags='-i -m bridge0'
sysrc rtsold_enable=YES 
shutdown -r now


I want to have my cake and eat it too.

The configuration described in the last post works, but lacks the comfort the wg-quick(8) script brings, and wg-quick(8) doesn't integrate well with FreeBSD's existing rc.d scripts. To get the best of both worlds I wrote the “missing” WireGuard rc.d script that handles the basic wg-quick(8) features (inner tunnel addresses, DNS configuration using resolvconf(8), PreUp/PostUp/PreDown/PostDown hooks, MTU) and auto-detects the presence of WireGuard configurations in /etc/wireguard (at least by default). It also makes an effort to clean up after failures instead of leaking partially configured network interfaces.

The rc.d script lets the kernel pick the next available unit number for a WireGuard tunnel interface and renames the interface in a single ifconfig(8) invocation like this: ifconfig wg create name $name instead of asking for a specific unit number like this ifconfig wg$unit create. The rc.d script runs after netif allowing users to pre-reserve specific unit numbers by adding them to the cloned_interfaces rc.conf variable.

With all the error recovery code and support for verbose logging, it has the dubious distinction of being longer than any of the rc.d scripts shipped with FreeBSD 13.2.

The resulting WireGuard rc.d script written in FreeBSD sh(1) is available here. Please read it before feeding it to your root shells.

TL;DR: I love copy & paste and blindly trust random people on the Internet.

# Download the WireGuard rc.d script into /tmp.
fetch -o /tmp/

# Install the WireGuard rc.d script into the /etc/rc.d directory.
install -S -m 555 -o root -g wheel /tmp/ /etc/rc.d/wireguard

# Delete the temporary file.
rm /tmp/

# Create the WireGuard configuration directory.
install -d -m 750 -o root -g wheel /etc/wireguard

# Configure a WireGuard interface.
$EDITOR /etc/wireguard/wg-foo.conf

# Start the WireGuard rc.d service.
service wireguard start

Here is an example WireGuard configuration as starting point:

[Interface]           # wg-foo
PrivateKey            = cElrYhZSY8znrhGdn5c/oXrTvuesYJnVsPBXR+56snc=
# PublicKey           = hIrvK/JVH3+CyPmhvh2w/+eN00KfSN+Fro/t4U592h8=
ListenPort            = 51820
MTU                   = 1400
DNS                   =,
DNS                   =
Address               = 2001:db8::1/64,
PreUp                 = logger -t "wireguard" -- "PreUp    : %i"
PostUp                = logger -t "wireguard" -- "PostUp   : %i"
PreDown               = logger -t "wireguard" -- "PreDown  : %i"
PostDown              = logger -t "wireguard" -- "PostDown : %i"

[Peer]                # Restrict AllowedIPs for point to multi-point.
Endpoint              =
PublicKey             = +lvewJa4CBEUlCOXLv0D+vXFB5mYQTzY6iRmz0zI6zg=
PresharedKey          = aWVYfsvLR1egBz4zPlHPy+UqgkZAAxhjkjEdwDcArAM=
AllowedIPs            = ::0/0,
PersistentKeepalive   = 25


FreeBSD 13.2 imported WireGuard into the base system, but so far the official documentation on it is limited to the man pages wg(4) and wg(8). The invasive wg-quick(8) script used in most examples wasn't imported, breaking those examples. Despite this, FreeBSD 13.2 has everything needed to comfortably use WireGuard except for the documentation needed to make it accessible to new users. This article is an attempt to fix that.

How the parts fit together

Most existing examples configure everything about a WireGuard tunnel using a single WireGuard configuration file per tunnel interface, processed by the wg-quick(8) script. Instead, the setup described in this article uses the WireGuard configuration only for those parameters directly understood by wg setconf. The remaining network interface configuration of the WireGuard tunnel interfaces is left to the existing FreeBSD rc.d scripts. WireGuard tunnel interfaces have to be explicitly created as they do not correspond to any physical network interface that could be discovered by enumerating the installed hardware. The recommended way to have the rc.d scripts create tunnel interfaces is to add them to the (space separated) cloned_interfaces list in /etc/rc.conf. Variables in /etc/rc.conf are used to have the netif and routing rc.d scripts configure the interface IP addresses, MTU, and static routes as required. Which variables are recognised by the base system rc.d scripts, and a short description of their semantics, can be looked up in the rc.conf(5) man page.

The /etc/rc.conf configuration is a shell script sourced by the various rc.d scripts to obtain their configuration. As long as /etc/rc.conf contains only variable assignments it can be queried and updated like a key-value store using sysrc(8). Getting too clever here will cause (a lot) more pain than gain over time. Tested that for you, sigh.
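The key-value nature of rc.conf can be demonstrated with a throwaway file; this mimics roughly what sysrc -n does (the file path and keys below are just examples, not the real /etc/rc.conf):

```shell
# /etc/rc.conf is only sh variable assignments, so any sh can query it.
cat > /tmp/rc.conf.demo <<'EOF'
cloned_interfaces="bridge0 wg0"
ifconfig_wg0="up"
EOF

# Query a key roughly the way `sysrc -n cloned_interfaces` would,
# in a subshell so the demo file doesn't pollute our environment.
( . /tmp/rc.conf.demo && printf '%s\n' "$cloned_interfaces" )
```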

As part of the boot process FreeBSD runs its rc.d service scripts one after the other. For this article it's enough to know that devd is started before netif configures network interfaces allowing us to rely on devd(8) to load the rest of the WireGuard tunnel configuration on demand.

Prepare the system to load WireGuard configurations on demand.

  • Configure devd(8) to load WireGuard configuration files on demand by placing this devd.conf(5) configuration snippet in /etc/devd/wireguard.conf:

    notify 0 {
        match "system"     "IFNET";
        match "type"       "LINK_UP";
        match "media-type" "unknown";

        action ". /etc/rc.subr; . /etc/network.subr; load_rc_config network; if autoif $subsystem && [ -r /etc/wireguard/$subsystem.conf ]; then grep -vE '^[[:space:]]*(Address|DNS|MTU|Table|PreUp|PostUp|PreDown|PostDown)[[:space:]]*=' /etc/wireguard/$subsystem.conf | wg setconf $subsystem /dev/stdin; fi";
    };

  • Restart devd(8) to apply the new configuration: service devd restart.

  • Create a /etc/wireguard directory to hold the WireGuard configuration files: install -d -m 700 -o root -g wheel /etc/wireguard (only accessible by root).

Create your first WireGuard interface.

  • Pick a free interface name. This article assumes WireGuard interfaces to be created under a name starting with “wg” followed by an index. The potential consequences of renaming network interfaces aren't covered in this article. The interface will be referred to as $WG. Run read WG to have the shell perform the substitution for you.
  • Write the WireGuard tunnel configuration into /etc/wireguard/${WG}.conf.
    • Start with just a new private key: (echo '[Interface]' && echo -n 'PrivateKey = ' && wg genkey) >/etc/wireguard/${WG}.conf.
    • Write the rest of the configuration as needed: ${EDITOR:-vi} /etc/wireguard/${WG}.conf. Tunnels without at least one configured peer are of little use.
  • Configure the $WG interface in rc.conf(5).
    • Set the interface MTU: sysrc "create_args_${WG}=1400" (optional, defaults to 1420).
    • Assign a human friendly description to the interface: sysrc "ifconfig_${WG}_descr=first_tunnel" (optional).
    • Bring the interface up: sysrc "ifconfig_${WG}=up" (required).
    • Waste no time on futile IPv6 duplicate address detection: sysrc "ifconfig_${WG}_ipv6=no_dad" (required).
    • Configure the interface IPv6 address (and prefix length): sysrc "ifconfig_${WG}_alias0=2001:DB8:1::1/64" (required).
    • Configure the interface IPv4 address (and prefix length): sysrc "ifconfig_${WG}_alias1=" (optional).
    • Add the tunnel to the list of cloned interfaces: sysrc "cloned_interfaces+=${WG}" (required).
  • Create the new interface without rebooting: service netif start $WG.
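Taken together (and assuming $WG expanded to wg0 into an otherwise empty configuration), the sysrc calls above leave /etc/rc.conf with entries along these lines:

```shell
# Hypothetical resulting /etc/rc.conf fragment for a wg0 tunnel.
create_args_wg0="1400"
ifconfig_wg0_descr="first_tunnel"
ifconfig_wg0="up"
ifconfig_wg0_ipv6="no_dad"
ifconfig_wg0_alias0="2001:DB8:1::1/64"
cloned_interfaces="wg0"
```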

Inspect the result.

  • Use ifconfig -l | tr ' ' '\n' | grep $WG to check if the interface was created.
  • Use ifconfig $WG to print the FreeBSD interface configuration.
  • Use wg show $WG to print the WireGuard tunnel state. An active session to a peer should have a latest handshake: … below two minutes.
  • Use wg showconf $WG to dump the running WireGuard interface configuration. The private key is omitted unless the command is executed as root.
  • Use netstat -rn | grep $WG to find routes going through the tunnel.