In the previous articles, I ran into an issue with Mac/OrbStack’s networking that prevented my localhost from communicating with the GNS3 appliances. Since I didn’t have enough time to troubleshoot it myself, I decided to give PNETLab a try.

This time, I’m leveraging my GCP free trial to deploy the PNETLab server, with Terraform automating the provisioning. Then, I’ll use Tailscale to securely access the lab without exposing it to the Internet.

Goals

  • Automate PNETLab Deployment with Terraform.
  • Secure access to the lab without exposing it to the public internet.
  • Configure a domain for easy access to the lab.

Prerequisites

  • GCP CLI installed.
  • Terraform installed.
  • Tailscale account (free).
  • Domain name.

Network Diagram Overview

Here’s an overview of what we’re going to do.

[Network diagram]

Preparing GCP project

Before we begin, make sure that you’ve authenticated to GCP.

$ gcloud auth application-default login

Once authenticated, create a project for our PNETLab deployment.

$ PROJECT_ID=pnetlab-$RANDOM
$ echo $PROJECT_ID
pnetlab-xxx
$ gcloud projects create $PROJECT_ID --set-as-default

Then, link the project to your billing account.

$ gcloud billing accounts list              
ACCOUNT_ID            NAME                OPEN  MASTER_ACCOUNT_ID
AAAAAA-BBBBBB-CCCCCC  My Billing Account  True
$ gcloud billing projects link $PROJECT_ID --billing-account=AAAAAA-BBBBBB-CCCCCC

Last, enable the Compute Engine API.

$ gcloud services enable compute.googleapis.com
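If you want to double-check that the API is active before moving on, you can list the project’s enabled services:

$ gcloud services list --enabled | grep compute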

Server Deployment

Preparing Terraform Code

We’ll use the following folder structure.

$ tree
.
├── README.MD
├── ansible
└── terraform
    ├── terraform.tfvars #sensitive file (project_id, region, user/keys)
    ├── main.tf 
    └── variables.tf

3 directories, 5 files

Within the terraform folder, initialize the Terraform working directory.

$ terraform init

terraform.tfvars

terraform.tfvars contains the following (adjust the values to match your environment):

project_id      = ""
region          = ""
zone            = ""
os_image        = "projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20240116a"
ssh_user        = "username"
ssh_pub_user    = "ssh-ed25519 AAAAC3N... root"
ssh_pub_ansible = "ssh-ed25519 AAAAC3N... ansible"

main.tf

main.tf file contains the following:

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.5"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

# SETUP NETWORK #
resource "google_compute_network" "pnetlab_vpc" {
  name                    = "pnetlab-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "pnetlab_subnet" {
  name          = "pnetlab-subnet"
  network       = google_compute_network.pnetlab_vpc.id
  ip_cidr_range = "10.10.1.0/24"
}

resource "google_compute_firewall" "allow_ssh_access" {
  name    = "allow-ssh"
  network = google_compute_network.pnetlab_vpc.id

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = [ "0.0.0.0/0" ]
}

# NAT
resource "google_compute_router" "pnetlab_router" {
  name = "pnetlab-router"
  region = var.region
  network = google_compute_network.pnetlab_vpc.id

}

resource "google_compute_router_nat" "pnetlab_router_nat" {
  name = "pnetlab-router-nat"
  region = var.region
  router = google_compute_router.pnetlab_router.name
  nat_ip_allocate_option = "AUTO_ONLY" # use dynamic public ip for nat
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES" # nat any sources ip range available within the vpc

}

# SETUP EXT DISK #
resource "google_compute_disk" "additional_disk" {
  name   = "pnetlab-additional-disk"
  type   = "pd-standard"  # Choose disk type (pd-standard, pd-ssd, etc.)
  size = 100 
  lifecycle {
    prevent_destroy = true
  }
}

# SETUP INSTANCES #
# Image: ubuntu-1804-bionic-v20240116a (project: ubuntu-os-cloud, family: ubuntu-1804-lts)
resource "google_compute_instance" "pnetlab_server" {
  name         = "pnetlab-server"
  machine_type = "n2-standard-2"
  tags         = ["pnetlab"]

  boot_disk {
    initialize_params {
      image = var.os_image # cloud image
      size  = 20           # 20 GB boot disk
      type  = "pd-standard"
    }
  }
  
  # external disk 100gb
  attached_disk {
    source = google_compute_disk.additional_disk.id
  }

  network_interface {
    network    = google_compute_network.pnetlab_vpc.id
    subnetwork = google_compute_subnetwork.pnetlab_subnet.id

    # Disable public IP
    # access_config {
    #  network_tier = "STANDARD"
    # }
  }

  scheduling {
    automatic_restart           = true
    on_host_maintenance         = "MIGRATE"
    preemptible                 = false
    provisioning_model          = "STANDARD"
  }

  advanced_machine_features {
    enable_nested_virtualization = true
  }
  
  metadata = {
    ssh-keys = "${var.ssh_user}:${var.ssh_pub_user} \n${var.ssh_user}:${var.ssh_pub_ansible}"
  }
}
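
Optionally (this file isn’t part of my setup, just a convenience), an outputs.tf can expose the instance’s internal IP after apply, which is handy when pointing Ansible or SSH at the server later:

# outputs.tf (optional)
output "pnetlab_internal_ip" {
  description = "Internal IP address of the PNETLab server"
  value       = google_compute_instance.pnetlab_server.network_interface[0].network_ip
}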

variables.tf

variables.tf defines the variables used in the tfvars file. It contains:

variable "project_id" {
  type        = string
  description = "Project ID"
}

variable "region" {
  type        = string
  description = "Project region"
  default     = "us-central"
}

variable "zone" {
  type        = string
  description = "Project zone"
  default     = "us-central1-a"
}

variable "os_image" {
  type        = string
  description = "OS family"
}

variable "ssh_user" {
  type        = string
  description = "The sudo user"
}

variable "ssh_pub_user" {
  type        = string
  description = "The public key used for authentication to an instance. Format: [public key] [username]"
}

variable "ssh_pub_ansible" {
  type        = string
  description = "The public key used for configuration management. Format: [public key] [username]"
}

Apply Configuration

With all the resource files defined, we will run the following commands in the terraform directory.

$ terraform fmt         # Format your Terraform files
$ terraform validate    # Check for syntax errors
$ terraform plan -out pnetlab-deploy    # Preview the infrastructure changes
$ terraform apply pnetlab-deploy        # Deploy the resources (a saved plan applies without prompting)

After it finishes, check if the instance is running:

$ gcloud compute instances list --filter="name=pnetlab-server" --project="$PROJECT_ID"
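
Since the instance has no public IP, one way to reach it from your own terminal (besides the browser SSH used below) is IAP TCP forwarding; the allow-ssh rule above already admits IAP’s 35.235.240.0/20 range because it allows 0.0.0.0/0. Adjust the zone to match your tfvars:

$ gcloud compute ssh pnetlab-server --zone=us-central1-a --tunnel-through-iap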

Installing PNETLab

Most of this step is copied from the official installation guide on the PNETLab website.

Add PNETLab Repository

SSH into the server using the Google Cloud console’s browser SSH (or the IAP tunnel shown earlier). Once logged in, add the PNETLab repository and install the pnetlab package.

$ sudo su -
$ echo "deb [trusted=yes] http://repo.pnetlab.com ./" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install pnetlab

Create Swap Memory

Create a swap file, sized according to your needs.

root@pnetlab-server:~# fallocate -l 1G /swapfile
root@pnetlab-server:~# chmod 600 /swapfile
root@pnetlab-server:~# mkswap /swapfile
root@pnetlab-server:~# swapon /swapfile
root@pnetlab-server:~# cp /etc/fstab{,.bak}
root@pnetlab-server:~# echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
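
Confirm the swap is active:

root@pnetlab-server:~# swapon --show
root@pnetlab-server:~# free -h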

Configure PNETLab DNS

Edit /etc/network/interfaces and add the line dns-nameservers 8.8.8.8 to the configuration of the pnet0 interface.

...
# The primary network interface
iface eth0 inet manual
auto pnet0
iface pnet0 inet dhcp
    dns-nameservers 8.8.8.8
    bridge_ports eth0
    bridge_stp off
...

And restart the networking service.

root@pnetlab-server:~# sudo service networking restart
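
A quick way to confirm that both DNS and the Cloud NAT gateway work is to resolve and reach an external host:

root@pnetlab-server:~# ping -c 2 google.com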

Configure Additional Disk

Time to set up the additional disk we defined in the Terraform resources. This disk will store all the PNETLab project files as well as the images later.

In my case, the additional disk is available at /dev/sdb.

root@pnetlab-server:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0 40.4M  1 loop /snap/snapd/20671
loop1     7:1    0  368M  1 loop /snap/google-cloud-cli/203
loop2     7:2    0 63.9M  1 loop /snap/core20/2105
sda       8:0    0   20G  0 disk 
├─sda1    8:1    0 19.9G  0 part /
├─sda14   8:14   0    4M  0 part 
└─sda15   8:15   0  106M  0 part /boot/efi
sdb       8:16   0  100G  0 disk 

We’ll create an LVM volume on it.

root@pnetlab-server:~# vgcreate /dev/vg01 /dev/sdb
  Volume group "vg01" successfully created
root@pnetlab-server:~# lvcreate -l 100%FREE --name lv01 vg01 
  Logical volume "lv01" created.
root@pnetlab-server:~# mkfs.ext4 /dev/vg01/lv01 
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: done                            
Creating filesystem with 26213376 4k blocks and 6553600 inodes
Filesystem UUID: f1173542-f07c-493f-89e6-188439880465
Superblock backups stored on blocks: 
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
  4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

Then we create a mount point and mount the volume we just created.

root@pnetlab-server:~# mkdir /mnt/unetlab
root@pnetlab-server:~# mount /dev/vg01/lv01 /mnt/unetlab

Edit the /etc/fstab file and add the following line to mount /mnt/unetlab on boot:

/dev/vg01/lv01  /mnt/unetlab  ext4  defaults  0  0
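
To validate the fstab entry before the next reboot, remount everything and check the mount point:

root@pnetlab-server:~# mount -a
root@pnetlab-server:~# df -h /mnt/unetlab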

Now we’ll create new directories for the PNETLab project files and image files on our newly mounted disk.

root@pnetlab-server:~# mkdir /mnt/unetlab/addons
root@pnetlab-server:~# mkdir /mnt/unetlab/labs

Next, we’ll remove the old directories for project files and image files, and replace them with symbolic links pointing to the new directories on the mounted disk.

root@pnetlab-server:~# rm -rf /opt/unetlab/addons/ 
root@pnetlab-server:~# rm -rf /opt/unetlab/labs/
root@pnetlab-server:~# ln -s /mnt/unetlab/addons/ /opt/unetlab/
root@pnetlab-server:~# ln -s /mnt/unetlab/labs/ /opt/unetlab/
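
A quick listing confirms the links point at the new disk:

root@pnetlab-server:~# ls -l /opt/unetlab/ | grep mnt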

Change ownership of the new labs directory to ensure PNETLab’s services can access it.

root@pnetlab-server:~# chown -R www-data:www-data /mnt/unetlab/labs/

Configure Tailscale VPN

We will register the server as a Tailscale node, so make sure you’ve already generated an auth key in the Tailscale admin console.

First, install Tailscale by executing the install script.

root@pnetlab-server:~# curl -fsSL https://tailscale.com/install.sh | sh 

We will configure the server as a subnet router. This allows direct access to the PNETLab QEMU/Docker appliances from your machine via the Tailscale network. To achieve this, we must enable IP forwarding on the server.

root@pnetlab-server:~# echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
root@pnetlab-server:~# echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
root@pnetlab-server:~# sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

Next, to register the server as a node and subnet router in your Tailscale network, run the following command, replacing tskey-auth-XXXXX with your actual auth key and 10.0.10.0/24 with your PNETLab local network:

root@pnetlab-server:~# tailscale up --auth-key=tskey-auth-XXXXX --advertise-routes=10.0.10.0/24

In my case, 10.0.10.0/24 is my Docker network, so I can communicate with all the Docker appliances in this network via the PNETLab server, which acts as a gateway.
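
You can confirm the node is connected and note its Tailscale IP; we’ll need that IP for the DNS record later:

root@pnetlab-server:~# tailscale status
root@pnetlab-server:~# tailscale ip -4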

Revisit your Tailscale admin console. You should see a Subnets label on the pnetlab node.


Open the node settings, select “Edit route settings…”, and tick your network there.


Save and exit.

Update Grub Config

Remove the default cloud-image GRUB settings, then regenerate the GRUB config and reboot.

root@pnetlab-server:~# rm /etc/default/grub.d/50-cloudimg-settings.cfg
root@pnetlab-server:~# update-grub
root@pnetlab-server:~# reboot
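
After the reboot, you can verify that nested virtualization is exposed to the VM; a non-zero count means the CPU virtualization flags are present:

root@pnetlab-server:~# egrep -c '(vmx|svm)' /proc/cpuinfo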

Update Version (Optional)

After the installation completes, we can upgrade PNETLab to the latest version. Since it’s optional, I’ll just link the official guide on how to upgrade the lab version: go here.

Configure Domain Name

I’d like to get rid of the self-signed certificate warning and use a custom domain name when accessing PNETLab. To do so, we will obtain a Let’s Encrypt certificate to replace the self-signed one and then configure a new DNS record, pnetlab.yourdomain.com, pointing to the Tailscale node’s IP.

For this step, I will assume that you already own a domain.

Use Let’s Encrypt SSL on PNETLab

On the pnetlab server, install certbot and prepare the working folders.

root@pnetlab-server:~# sudo apt install -y certbot
root@pnetlab-server:~# mkdir -p letsencrypt/work letsencrypt/config letsencrypt/logs

Run the following command to obtain a Let’s Encrypt certificate, completing the ACME DNS challenge with your DNS provider/host (Namecheap, Hostinger, etc.):

root@pnetlab-server:~# certbot certonly --agree-tos --email admin@yourdomain.com --manual --preferred-challenges=dns -d \*.yourdomain.com --config-dir ./letsencrypt/config --work-dir ./letsencrypt/work --logs-dir ./letsencrypt/logs

The certificate should be available at ./letsencrypt/config/live/yourdomain.com.
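
You can double-check what was issued with certbot itself, pointing it at the same custom config directory:

root@pnetlab-server:~# certbot certificates --config-dir ./letsencrypt/config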

Next, edit /etc/apache2/sites-available/pnetlabs.conf and point these directives at your certificate files.

SSLCertificateFile   /path/to/your/cert/cert1.pem
SSLCertificateKeyFile /path/to/your/certkey/privkey1.pem

Restart the Apache service.

root@pnetlab-server:~# systemctl restart apache2
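
To verify Apache is now serving the Let’s Encrypt certificate instead of the self-signed one:

root@pnetlab-server:~# echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -issuer -dates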

Create DNS A record

Go to your DNS hosting provider and add a new record with the following data:

  • Type: A
  • Hostname/Domain Name: pnetlab.yourdomain.com
  • IPv4: pnetlab-tailscale-ip

Here’s an example of using Cloudflare DNS management for your reference.

[Screenshot: Cloudflare DNS management]

100.126.1.2 is the Tailscale node IP of my pnetlab server.

Verify if your domain is mapped correctly using the dig command or Google Toolbox.

$ dig pnetlab.fahmifj.space
...[SNIP]...
;; QUESTION SECTION:
;pnetlab.fahmifj.space.	IN	A

;; ANSWER SECTION:
pnetlab.fahmifj.space. 300	IN	A	100.126.1.2
...[SNIP]...

Once you’re done and connected to the Tailscale network, you should be able to access PNETLab at pnetlab.yourdomain.com.

Conclusion

We’ve completed all the necessary steps to deploy a PNETLab server on Google Cloud. With this setup, we can run our networking labs safely without exposing the server to the internet. But of course it doesn’t stop here; the next step is importing appliances into the lab.

That’s all, see you in the next post!

Troubleshooting

Cannot install packages

Try commenting out the whole repository URL line in /etc/apt/sources.list.

#deb [trusted=yes] http://i-share.top/repo ./
#deb [trusted=yes] http://repo.pnetlab.com ./

Cloud Appliance: DHCP not working

Verify that the udhcpd service is running and enabled.

$ service udhcpd status
$ cat /etc/default/udhcpd | grep ENABLED

Restart it and update the config if necessary, then check whether DHCP works; a minimal fix sketch follows.
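
If ENABLED is set to “no”, a minimal fix (assuming the stock /etc/default/udhcpd layout) looks like this:

$ sudo sed -i 's/ENABLED="no"/ENABLED="yes"/' /etc/default/udhcpd
$ sudo service udhcpd restart
$ service udhcpd status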
