
I got invited to participate on a small lab network.

A few years ago, I had a presence on DN42, which is an incredible resource for learning internet routing concepts. Sadly, when I dismantled my homelab, my connection to DN42 went with it.

The only surviving piece of my homelab network is a single router that keeps my ASN/IPv6 block visible on the Internet.

LARPANET

LARPANET is a hobbyist mesh VPN connecting multiple labs.

— xylobol

While I don't have much of a "lab" anymore these days, I do have a router that can provide more resiliency to the network and serve as another endpoint to connect in through.

Architecture

When I decommissioned my homelab, I had one VPS serving as a router that I kept online to continue advertising my IPv6 prefix. This router is named vultr-rtr01.

vultr-rtr01 is a VPS hosted by Vultr running Rocky Linux. The router has a number of connections, mostly tunnels, to various peers and endpoints.

Diagram of router showing BGP connections

I won't be covering the details of the peerings with Hurricane Electric and Vultr here; look for a different blog post for that. However, I will refer to them as we walk through the routing configuration for LARPANET.

Isolation

As vultr-rtr01 is built to route public Internet IPv6 traffic, adding a private peering network to this router necessitated isolation, both to ensure routes don't leak between the two systems and to keep their routing decisions separate. For this, I utilized Linux VRFs.

Virtual Routing and Forwarding (VRF) refers to layer 3 separation of routing. In essence, we're creating multiple routing tables and assigning interfaces to specific tables.
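
Under the hood, a Linux VRF is just a network device bound to a routing table; any interface enslaved to it has its traffic looked up in that table instead of the main one. A minimal iproute2 sketch of the concept (the device name and table number here are illustrative, not the ones used below):

```shell
# Create a VRF device bound to routing table 100 (illustrative number)
ip link add vrf-demo type vrf table 100
ip link set vrf-demo up

# Enslave an interface; its routes and lookups now use table 100
ip link set eth1 master vrf-demo

# Inspect the VRF's routing table
ip route show vrf vrf-demo
```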

BGP

For dynamic routing, BGP is the obvious protocol to use. I'm using the bird daemon, which I have a lot of experience with. It's a very capable routing daemon supporting many routing protocols.

For LARPANET specifically, we'll be setting up peerings with multiple other labs and exchanging routes in the 10.240.0.0/16 IP space.

Configuration

Walking through the configuration for this router is relatively straightforward. I'll cover each piece and any specifics needed.

VRF

The VRF is relatively simple to define using nmcli. We establish a routing table number (I picked 2488) and the name of the VRF.

# /etc/NetworkManager/system-connections/vrflarpa.nmconnection
[connection]
id=vrflarpa
uuid=22664a22-6e92-4617-b031-b3a1a5387644
type=vrf
interface-name=vrflarpa

[vrf]
table=2488

[ipv4]
method=disabled

[ipv6]
addr-gen-mode=default
method=disabled

[proxy]
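
For reference, roughly the same profile can be created in one shot from the command line; nmcli generates the UUID itself (this is a sketch of the equivalent command, not how I originally created the profile):

```shell
# Create a VRF connection profile bound to table 2488
nmcli connection add type vrf ifname vrflarpa con-name vrflarpa \
    table 2488 ipv4.method disabled ipv6.method disabled
```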

Dummy interface

In order to have an endpoint on my network that can respond, I will create a loopback or "dummy" interface.

# /etc/NetworkManager/system-connections/dum1.nmconnection
[connection]
id=dum1
uuid=85934dd8-db0e-462a-b32b-b8edd74b84e2
type=dummy
interface-name=dum1
master=vrflarpa # This interface will be within the 'vrflarpa' VRF
slave-type=vrf

[dummy]

[ipv4]
address1=10.240.88.1/32 # A /32 within our IP space
method=manual

[ipv6]
addr-gen-mode=default
method=disabled

[proxy]
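
The equivalent nmcli invocation would look something like this (again a sketch; master and slave-type enslave the interface to the VRF):

```shell
# Create a dummy interface inside the vrflarpa VRF with our /32
nmcli connection add type dummy ifname dum1 con-name dum1 \
    master vrflarpa slave-type vrf \
    ipv4.method manual ipv4.addresses 10.240.88.1/32 ipv6.method disabled
```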

WireGuard

For setting up the WireGuard tunnels, I decided to use nmcli as well. I used to use wg-quick, but this gives me a common configuration interface across the different setups I have.

Overall, the configuration of the tunnel is straightforward. This is an example for one of the connections.

# /etc/NetworkManager/system-connections/wg_larpa-77.nmconnection
[connection]
id=wg_larpa-77
uuid=780b7515-05fc-4288-a0f6-6d94b98f805b
type=wireguard
interface-name=wg_larpa-77
master=vrflarpa # This interface will be within the 'vrflarpa' VRF
slave-type=vrf

[wireguard]
ip4-auto-default-route=0 # We disable default route handling
ip6-auto-default-route=0
listen-port=51823
peer-routes=false # Also disable peer routes, since we'll be handling routing with BGP
private-key=<snip>

[wireguard-peer.<snip>]
endpoint=<snip>
allowed-ips=10.240.0.0/16;10.77.88.0/31; # We allow the valid IPv4 range for LARPANET

[ipv4]
address1=10.77.88.1/31 # The agreed upon peer network
method=manual

[ipv6]
addr-gen-mode=default
method=disabled

[proxy]

Bird

As I have an existing bird setup for public routing, I had to add some functions for filtering and establish templates for the new peerings.

First, I added some functions to my functions.conf file.

# /etc/bird.d/functions.conf
function is_larpa_net_v4() {
    return net ~ [10.240.0.0/16+]; # Match 10.240.0.0/16 and any more-specific prefix within it
}

function is_larpa_self_v4() {
    return net ~ [10.240.88.0/24]; # Look for explicitly our own LARPANET network
}

Then we establish a template for our peerings.

# /etc/bird.d/templates/bgp.larpa-peer.conf
template bgp larpapeer {
    local as 396088; # We are AS396088

    path metric 1;

    ipv4 {
        table larpa4;
        import filter {
            if is_larpa_self_v4() then reject; # We want to reject our own route (this is protection against misconfig elsewhere as this shouldn't happen)
            if is_larpa_net_v4() then accept; # We accept all routes in 10.240.0.0/16
            reject; # Reject all others
        };

        export filter {
            if is_larpa_self_v4() then accept; # We announce our own route
            if is_larpa_net_v4() then accept; # And accept all 10.240.0.0/16 routes
            reject; # Reject all others
        };

    };

};

Next, we need to define our interaction with the kernel routing table.

# /etc/bird.d/protocols.conf
ipv4 table larpa4;

protocol kernel vrflarpa {
    kernel table 2488; # The 2488 table we defined
    ipv4 {
        table larpa4; # Export routes from the bird 'larpa4' table to the kernel routing table
        export all;
    };
}

protocol static { # We define a static route protocol to import our IP space as a route
    route 10.240.88.0/24 unreachable;
    ipv4 {
        table larpa4;
        import all;
    };
}

Finally, we configure the peer.

# /etc/bird.d/peers/larpa-as4200000077.conf
protocol bgp AS4200000077 from larpapeer {
    neighbor 10.77.88.0 as 4200000077;
}

With that configuration done, we should be up and running after a simple birdc configure.
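
If you want to be careful, birdc can also parse a new configuration without applying it first:

```shell
# Validate the configuration without applying it
birdc configure check
# Then apply it for real
birdc configure
```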

Verifying Operation

WireGuard tunnel

We can check the status of the WireGuard tunnels pretty simply with the wg command.

# wg
interface: wg_larpa-77
  public key: <snip>
  private key: (hidden)
  listening port: 51823

peer: <snip>
  endpoint: <snip>
  allowed ips: 10.240.0.0/16, 10.77.88.0/31
  latest handshake: 6 seconds ago
  transfer: 26.67 MiB received, 26.02 MiB sent

Bird protocol

In bird, we can show the status of the peering easily with the birdc command line.

bird> show pro AS4200000077
Name       Proto      Table      State  Since         Info
AS4200000077 BGP        ---        up     17:58:57.103  Established 

And more details with the all keyword.

bird> show pro all AS4200000077
Name       Proto      Table      State  Since         Info
AS4200000077 BGP        ---        up     17:58:57.103  Established   
  BGP state:          Established
    Neighbor address: 10.77.88.0
    Neighbor AS:      4200000077
    Local AS:         396088
    Neighbor ID:      10.240.77.1
    Local capabilities
      Multiprotocol
        AF announced: ipv4
      Route refresh
      Graceful restart
      4-octet AS numbers
      Enhanced refresh
      Long-lived graceful restart
    Neighbor capabilities
      Multiprotocol
        AF announced: ipv4
      Route refresh
      Extended message
      Graceful restart
      4-octet AS numbers
      ADD-PATH
        RX: ipv4
        TX:
      Enhanced refresh
      Long-lived graceful restart
        LL stale time: 0
        AF supported:
        AF preserved: ipv4
      Hostname: bella
    Session:          external AS4
    Source address:   10.77.88.1
    Hold timer:       7.446/9
    Keepalive timer:  1.966/3
  Channel ipv4
    State:          UP
    Table:          larpa4
    Preference:     100
    Input filter:   (unnamed)
    Output filter:  (unnamed)
    Routes:         4 imported, 3 exported, 2 preferred
    Route change stats:     received   rejected   filtered    ignored   accepted
      Import updates:             76          0          0          0         76
      Import withdraws:           46          0        ---          8         61
      Export updates:            123         52          0        ---         71
      Export withdraws:           42        ---        ---        ---         48
    BGP Next hop:   10.77.88.1

Routing tables

There are two places for us to check our routing table and make sure they agree.

First, in bird:

bird> show route table larpa4
Table larpa4:
10.240.42.0/24       unicast [AS4200000042 09:02:35.981] * (100) [AS4200000042i]
	via 10.88.42.0 on wg_larpa-42
                     unicast [AS4200000077 17:58:58.403] (100) [AS4200000042i]
	via 10.77.88.0 on wg_larpa-77
10.240.9.0/24        unicast [AS4200000077 17:59:21.132] * (100) [AS4200000009i]
	via 10.77.88.0 on wg_larpa-77
                     unicast [AS4200000042 17:59:21.232] (100) [AS4200000009i]
	via 10.88.42.0 on wg_larpa-42
10.240.77.0/24       unicast [AS4200000077 17:58:58.403] * (100) [AS4200000077i]
	via 10.77.88.0 on wg_larpa-77
                     unicast [AS4200000042 17:58:57.206] (100) [AS4200000077i]
	via 10.88.42.0 on wg_larpa-42
10.240.95.0/24       unicast [AS4200000042 09:02:35.981] * (100) [AS4200000095i]
	via 10.88.42.0 on wg_larpa-42
                     unicast [AS4200000077 17:58:58.403] (100) [AS4200000095i]
	via 10.77.88.0 on wg_larpa-77
10.240.88.0/24       unreachable [static2 2026-03-11] * (200)

As well as in the kernel table:

# ip route show vrf vrflarpa
10.77.88.0/31 dev wg_larpa-77 proto kernel scope link src 10.77.88.1 metric 50 
10.88.42.0/31 dev wg_larpa-42 proto kernel scope link src 10.88.42.1 metric 51 
10.240.9.0/24 via 10.77.88.0 dev wg_larpa-77 proto bird metric 32 
10.240.42.0/24 via 10.88.42.0 dev wg_larpa-42 proto bird metric 32 
10.240.77.0/24 via 10.77.88.0 dev wg_larpa-77 proto bird metric 32 
unreachable 10.240.88.0/24 proto bird metric 32 
10.240.95.0/24 via 10.88.42.0 dev wg_larpa-42 proto bird metric 32 

Ping

Finally, we can execute a ping from within the VRF as simply as this (adding -I 10.240.88.1 would source it from the dum1 address):

# ip vrf exec vrflarpa ping 10.240.77.1
PING 10.240.77.1 (10.240.77.1) 56(84) bytes of data.
64 bytes from 10.240.77.1: icmp_seq=1 ttl=64 time=28.1 ms
64 bytes from 10.240.77.1: icmp_seq=2 ttl=64 time=28.1 ms
64 bytes from 10.240.77.1: icmp_seq=3 ttl=64 time=27.8 ms

Conclusion

I've participated in private networks like DN42 for a while, and it's a lot of fun.

There are some optimizations I still need to make, notably tuning MTU and TCP MSS, but for now this operates as expected.
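
A sketch of what that tuning might look like, assuming an existing nftables inet table named filter with a forward chain (names are illustrative; wireguard.mtu is a standard NetworkManager property):

```shell
# Pin the tunnel MTU explicitly; 1420 is a common value for WireGuard over IPv4
nmcli connection modify wg_larpa-77 wireguard.mtu 1420

# Clamp TCP MSS to the path MTU for traffic forwarded across the tunnels
nft add rule inet filter forward tcp flags syn tcp option maxseg size set rt mtu
```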

Additional configuration can also be done to support different types of peers. At the moment, LARPANET only really has full tables exchanged between all peers. This is fine when all peers agree to route traffic for all other participants, but it's also possible to provide transit to a node by either sending full tables to them and only accepting their route, or by sending them a summary/default route.
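
As a sketch of that last idea, a hypothetical downstream template might export only the summary (this assumes a static 10.240.0.0/16 summary route exists in larpa4, which my current config doesn't have):

```
template bgp larpadownstream {
    local as 396088;

    ipv4 {
        table larpa4;
        import filter {
            if is_larpa_net_v4() then accept; # in practice, restrict to the downstream's own prefix
            reject;
        };
        export filter {
            if net = 10.240.0.0/16 then accept; # send only the summary route
            reject;
        };
    };
};
```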

For the time being, this configuration will do.
