In the previous article, I ran into an issue with either macOS's or OrbStack's networking that prevented my localhost from communicating with the GNS3 appliances. Instead of spending more time troubleshooting, I decided to give PNETLab a try: a web-based network emulator that can run entirely on a cloud server.

In this article, I'll show how to deploy PNETLab on Google Cloud, using Terraform to automate the setup and Tailscale to securely access the lab without exposing it to the public internet.

Goals

  • Automate PNETLab server deployment with Terraform.
  • Secure access to the lab without exposing it to the public internet.
  • Configure a domain for easy access to the lab.

Prerequisites

  • GCP & Tailscale accounts.
  • gcloud CLI installed.
  • Terraform installed.
  • A domain name (optional).

Network Diagram Overview

Here’s an overview of what we’re going to build:

[Image: network diagram of the PNETLab deployment on GCP, accessed via Tailscale]

Prepare the GCP Project

Create a Project

Before we begin, make sure you've authenticated to GCP. Once done, create a project for our PNETLab deployment.

$ PROJECT_ID=pnetlab-$RANDOM
$ echo $PROJECT_ID
pnetlab-xxx
$ mkdir $PROJECT_ID
$ cd $PROJECT_ID
$ gcloud projects create $PROJECT_ID
$ gcloud config set project $PROJECT_ID

Then, link the project to your billing account.

$ gcloud billing accounts list              
ACCOUNT_ID            NAME                OPEN  MASTER_ACCOUNT_ID
AAAAAA-BBBBBB-CCCCCC  My Billing Account  True
$ gcloud billing projects link $PROJECT_ID --billing-account=AAAAAA-BBBBBB-CCCCCC
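If you want to double-check that the link took effect, gcloud can describe the project's billing info; look for billingEnabled: true in the output (on older gcloud versions this command may live under gcloud beta billing instead).

$ gcloud billing projects describe $PROJECT_ID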

Enable the Compute API

Before we can use Google Cloud's compute resources, we need to enable the Compute Engine API, which allows the project to create and manage VM instances.

$ gcloud services enable compute.googleapis.com
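To confirm the API is enabled, list the project's enabled services and filter for compute:

$ gcloud services list --enabled | grep compute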

Note: I'm using GCP's free trial credits for this setup; if you're on a paid account, the steps remain the same.

Instance Deployment

Preparing Terraform Code

We’ll use the following folder structure.

$ tree
.
├── README.MD
└── terraform
    ├── main.tf          # defines the GCP resources
    ├── terraform.tfvars # sensitive information goes here (project_id, region, user/keys)
    └── variables.tf

1 directory, 4 files

Within the terraform folder, initialize the Terraform working directory.

$ terraform init

terraform.tfvars

terraform.tfvars contains the following (adjust the values to your environment):

project_id      = ""
region          = ""
zone            = ""
os_image        = "projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20240116a"
ssh_user        = "username"
ssh_pub_user    = "ssh-ed25519 AAAAC3N... root"
ssh_pub_ansible = "ssh-ed25519 AAAAC3N... ansible"

main.tf

The main.tf file contains the following:

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.5"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

# SETUP NETWORK #
resource "google_compute_network" "pnetlab_vpc" {
  name                    = "pnetlab-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "pnetlab_subnet" {
  name          = "pnetlab-subnet"
  network       = google_compute_network.pnetlab_vpc.id
  ip_cidr_range = "10.10.1.0/24"
}

resource "google_compute_firewall" "allow_ssh_access" {
  name    = "allow-ssh"
  network = google_compute_network.pnetlab_vpc.id

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = [ "0.0.0.0/0" ]
}

# NAT
resource "google_compute_router" "pnetlab_router" {
  name = "pnetlab-router"
  region = var.region
  network = google_compute_network.pnetlab_vpc.id

}

resource "google_compute_router_nat" "pnetlab_router_nat" {
  name = "pnetlab-router-nat"
  region = var.region
  router = google_compute_router.pnetlab_router.name
  nat_ip_allocate_option = "AUTO_ONLY" # use dynamic public ip for nat
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES" # nat any sources ip range available within the vpc

}

# SETUP EXT DISK #
resource "google_compute_disk" "additional_disk" {
  name   = "pnetlab-additional-disk"
  type   = "pd-standard"  # Choose disk type (pd-standard, pd-ssd, etc.)
  size = 100 
  lifecycle {
    prevent_destroy = true
  }
}

# SETUP INSTANCES #
# OS image: ubuntu-1804-bionic-v20240116a (project: ubuntu-os-cloud, family: ubuntu-1804-lts)
resource "google_compute_instance" "pnetlab_server" {
  name         = "pnetlab-server"
  machine_type = "n2-standard-2"
  tags         = ["pnetlab"]

  boot_disk {
    initialize_params {
      image = var.os_image # cloud image
      size  = 20           # 20 GB boot disk
      type  = "pd-standard"
    }
  }
  
  # external disk 100gb
  attached_disk {
    source = google_compute_disk.additional_disk.id
  }

  network_interface {
    network    = google_compute_network.pnetlab_vpc.id
    subnetwork = google_compute_subnetwork.pnetlab_subnet.id

    # Disable public IP
    # access_config {
    #  network_tier = "STANDARD"
    # }
  }

  scheduling {
    automatic_restart           = true
    on_host_maintenance         = "MIGRATE"
    preemptible                 = false
    provisioning_model          = "STANDARD"
  }

  advanced_machine_features {
    enable_nested_virtualization = true
  }
  
  metadata = {
    ssh-keys = "${var.ssh_user}:${var.ssh_pub_user}\n${var.ssh_user}:${var.ssh_pub_ansible}"
  }
}

variables.tf

variables.tf defines the variables used in the tfvars file. It contains:

variable "project_id" {
  type        = string
  description = "Project ID"
}

variable "region" {
  type        = string
  description = "Project region"
  default     = "us-central1"
}

variable "zone" {
  type        = string
  description = "Project zone"
  default     = "us-central1-a"
}

variable "os_image" {
  type        = string
  description = "OS family"
}

variable "ssh_user" {
  type        = string
  description = "The sudo user"
}

variable "ssh_pub_user" {
  type        = string
  description = "The public key used for authentication to an instance. Format: [public key] [username]"
}

variable "ssh_pub_ansible" {
  type        = string
  description = "The public key used for configuration management. Format: [public key] [username]"
}

Run Code

With all the resource files defined, run the following commands from the terraform directory.

$ terraform fmt         # Format your Terraform files
$ terraform validate    # Check for syntax errors
$ terraform plan -out pnetlab-deploy  # Preview the infrastructure changes
$ terraform apply pnetlab-deploy      # Deploy the resources (a saved plan applies without prompting)
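Since the instance has no public IP, it can be handy to have Terraform print the internal address after apply. An optional sketch you could drop into a new outputs.tf file (not part of the structure above):

# outputs.tf (optional): print the instance's internal IP after apply
output "pnetlab_internal_ip" {
  description = "Internal IP of the PNETLab server"
  value       = google_compute_instance.pnetlab_server.network_interface[0].network_ip
}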

Verify Deployment

After it finishes, check if the instance is running:

$ gcloud compute instances list --filter="name=pnetlab-server" --project="$PROJECT_ID"

PNETLab Installation

Install PNETLab

SSH into the instance using the gcloud CLI.

$ gcloud compute ssh INSTANCE_NAME --zone=YOUR_ZONE

Add the PNETLab repository with the following command.

$ echo "deb [trusted=yes] http://repo.pnetlab.com ./" | sudo tee -a /etc/apt/sources.list

Update the package lists and install pnetlab.

root@pnetlab-server:~# apt-get update
root@pnetlab-server:~# apt-get install pnetlab

Note: I switched to the root user with $ sudo su -.

Configure the Instance

Swap

Create a swap file, allocating the size according to your needs.

root@pnetlab-server:~# fallocate -l 1G /swapfile
root@pnetlab-server:~# chmod 600 /swapfile
root@pnetlab-server:~# mkswap /swapfile
root@pnetlab-server:~# swapon /swapfile
root@pnetlab-server:~# cp /etc/fstab{,.bak}
root@pnetlab-server:~# '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

DNS

Edit the file /etc/network/interfaces and add the line dns-nameservers 8.8.8.8 to the configuration of the pnet0 interface.

...
# The primary network interface
iface eth0 inet manual
auto pnet0
iface pnet0 inet dhcp
    dns-nameservers 8.8.8.8
    bridge_ports eth0
    bridge_stp off
...

And restart the networking service.

root@pnetlab-server:~# service networking restart
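A quick check that name resolution now works (getent is available on any glibc system, so nothing extra needs installing):

root@pnetlab-server:~# getent hosts google.com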

Disk

Time to set up the additional disk we defined in the Terraform resources. This disk will store all the PNETLab project files, as well as the images later.

In my case, the additional disk is available at /dev/sdb.

root@pnetlab-server:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0 40.4M  1 loop /snap/snapd/20671
loop1     7:1    0  368M  1 loop /snap/google-cloud-cli/203
loop2     7:2    0 63.9M  1 loop /snap/core20/2105
sda       8:0    0   20G  0 disk 
├─sda1    8:1    0 19.9G  0 part /
├─sda14   8:14   0    4M  0 part 
└─sda15   8:15   0  106M  0 part /boot/efi
sdb       8:16   0  100G  0 disk 

We'll create an LVM volume out of it.

root@pnetlab-server:~# vgcreate /dev/vg01 /dev/sdb
  Volume group "vg01" successfully created
root@pnetlab-server:~# lvcreate -l 100%FREE --name lv01 vg01 
  Logical volume "lv01" created.
root@pnetlab-server:~# mkfs.ext4 /dev/vg01/lv01 
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: done                            
Creating filesystem with 26213376 4k blocks and 6553600 inodes
Filesystem UUID: f1173542-f07c-493f-89e6-188439880465
Superblock backups stored on blocks: 
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
  4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

Then we set a mount point for the volume we just created.

root@pnetlab-server:~# mkdir /mnt/unetlab
root@pnetlab-server:~# mount /dev/vg01/lv01 /mnt/unetlab

Edit the /etc/fstab file and add the following line to mount /mnt/unetlab on boot:

/dev/vg01/lv01  /mnt/unetlab  ext4  defaults  0  0
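You can test the fstab entry without rebooting by unmounting the volume and letting mount -a remount everything listed in fstab:

root@pnetlab-server:~# umount /mnt/unetlab
root@pnetlab-server:~# mount -a
root@pnetlab-server:~# df -h /mnt/unetlab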

Now we'll create new directories for the PNETLab project files and image files on our newly mounted disk.

root@pnetlab-server:~# mkdir /mnt/unetlab/addons
root@pnetlab-server:~# mkdir /mnt/unetlab/labs

Next, we’ll remove the old directories for project files and image files, and replace them with symbolic links pointing to the new directories on the mounted disk.

root@pnetlab-server:~# rm -rf /opt/unetlab/addons/ 
root@pnetlab-server:~# rm -rf /opt/unetlab/labs/
root@pnetlab-server:~# ln -s /mnt/unetlab/addons/ /opt/unetlab/
root@pnetlab-server:~# ln -s /mnt/unetlab/labs/ /opt/unetlab/
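A quick sanity check that the symbolic links resolve to the mounted disk:

root@pnetlab-server:~# ls -ld /opt/unetlab/addons /opt/unetlab/labs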

Change ownership of the new labs directory to ensure PNETLab's services can access it.

root@pnetlab-server:~# chown -R www-data:www-data /mnt/unetlab/labs/

Grub

Remove the default cloud-image GRUB config, then regenerate GRUB and reboot.

root@pnetlab-server:~# rm /etc/default/grub.d/50-cloudimg-settings.cfg
root@pnetlab-server:~# update-grub
root@pnetlab-server:~# reboot

Restart and Verify

Once all the steps above are complete, restart the server and verify that the PNETLab services are running using the following command. Check these ports: 80 and 443 for the PNETLab web services, and 4822 for the Guacamole proxy.

root@pnetlab-server:~# netstat -tlpn  
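To narrow the output down to just those ports:

root@pnetlab-server:~# netstat -tlpn | grep -E ':(80|443|4822) '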

Update Version (Optional)

After the installation is complete, we can upgrade PNETLab directly to the latest version. This step is optional, so I'll just link the official guide on how to upgrade the lab version: go here.

Setup Tailscale VPN

For this step, I'll assume that you have already generated an auth key in the Tailscale admin console.

Install Tailscale

First, install Tailscale by executing the install script.

root@pnetlab-server:~# curl -fsSL https://tailscale.com/install.sh | sh 

Configure Subnet Router

We'll configure the instance as a subnet router, effectively turning the server into a router for its local subnets. This enables direct access between the PNETLab appliance network and our local network via the Tailscale network.

To achieve that, we first need to enable IP forwarding on the server.

root@pnetlab-server:~# echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
root@pnetlab-server:~# echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
root@pnetlab-server:~# sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
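Verify that forwarding is now on:

root@pnetlab-server:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1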

To register your instance as a node in your Tailscale network and enable it to act as a subnet router, run the following command:

root@pnetlab-server:~# tailscale up --auth-key=tskey-auth-XXXXX --advertise-routes=10.0.10.0/24

Note:

  • Replace tskey-auth-XXXXX with your actual auth key and 10.0.10.0/24 with your designated local network on the PNETLab server.

  • The --advertise-routes=10.0.10.0/24 option tells the instance to advertise the subnet 10.0.10.0/24 to your Tailscale network. Once advertised, other devices in your Tailscale network can route traffic to this subnet through this instance.
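Before heading to the admin console, you can confirm the node joined the tailnet from the server itself:

root@pnetlab-server:~# tailscale status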

Revisit the Tailscale admin console. You should see a Subnets label on the pnetlab node.

[Image: Tailscale admin console showing the Subnets label on the pnetlab node]

Open the node settings, select "Edit route settings…", and tick your network there.

[Image: the Edit route settings dialog with the advertised subnet approved]

Verify Routing

To verify, you can simply ping any interface IP of the instance that you advertised. The image below shows an example of mine, where I'm pinging the Docker interface.

[Image: successful ping to the Docker interface IP over Tailscale]
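For example, from a machine connected to the tailnet (172.17.0.1 below is the default Docker bridge IP, assuming that subnet was among the advertised routes; substitute an IP from your own advertised range):

$ ping -c 3 172.17.0.1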

Configure Domain Name

We can use a custom domain name to access PNETLab. To do so, we'll obtain a Let's Encrypt certificate to replace the self-signed one and then configure a new DNS record, pnetlab.yourdomain.com, pointing to the Tailscale IP of our instance.

For this step, I will assume that you already own a domain and know how to complete an ACME DNS challenge (proving control over a domain).

Obtain Let's Encrypt SSL

Within the instance, install certbot and prepare the work folders.

root@pnetlab-server:~# apt install -y certbot
root@pnetlab-server:~# mkdir -p letsencrypt/work letsencrypt/config letsencrypt/logs

We can obtain a Let’s Encrypt certificate with the following command.

root@pnetlab-server:~# certbot certonly --agree-tos --email admin@yourdomain.com --manual --preferred-challenges=dns -d \*.yourdomain.com --config-dir ./letsencrypt/config --work-dir ./letsencrypt/work --logs-dir ./letsencrypt/logs

Complete the ACME DNS challenge with your DNS provider (Namecheap, Hostinger, etc.). After successful validation, the CA (Let's Encrypt) will issue the certificate, which should be available under ./letsencrypt/config/live/yourdomain.com.

Next, edit /etc/apache2/sites-available/pnetlabs.conf and point these directives at your certificate files.

SSLCertificateFile   /path/to/your/cert/cert1.pem
SSLCertificateKeyFile /path/to/your/certkey/privkey1.pem
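Assuming certbot was run from /root as above, the paths would look something like the following (certbot's live/ directory holds fullchain.pem and privkey.pem symlinks; yourdomain.com is a placeholder):

SSLCertificateFile   /root/letsencrypt/config/live/yourdomain.com/fullchain.pem
SSLCertificateKeyFile /root/letsencrypt/config/live/yourdomain.com/privkey.pem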

Restart the Apache service.

root@pnetlab-server:~# systemctl restart apache2
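You can confirm Apache is now serving the new certificate by inspecting its issuer and validity dates:

root@pnetlab-server:~# echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -issuer -dates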

Create DNS A record

Go to your DNS hosting provider and add a new record with the following data:

  • Type: A
  • Hostname/Domain Name: pnetlab.yourdomain.com
  • IPv4: pnetlab-tailscale-ip

Here’s an example of mine, where I’m using Cloudflare for DNS management.

[Image: Cloudflare DNS dashboard showing the A record for pnetlab]

100.126.1.2 is the Tailscale node IP of my PNETLab server.

Verify that your domain is mapped correctly using the dig command or Google Toolbox.

$ dig pnetlab.fahmifj.space
...[SNIP]...
;; QUESTION SECTION:
;pnetlab.fahmifj.space.	IN	A

;; ANSWER SECTION:
pnetlab.fahmifj.space. 300	IN	A	100.126.1.2
...[SNIP]...

Once you're done and connected to the Tailscale network, you should be able to access PNETLab at pnetlab.yourdomain.com.

Troubleshoot

If you run into these errors, try the following solutions.

Cannot install packages

Try disabling (commenting out) the failing repository URL line, like so:

#deb [trusted=yes] http://i-share.top/repo ./
#deb [trusted=yes] http://repo.pnetlab.com ./

Cloud Appliance: DHCP not working

Verify that the udhcpd service is running and enabled.

$ service udhcpd status
$ cat /etc/default/udhcpd | grep ENABLED

Set it to enabled, restart the service, and check whether DHCP works again; see the sketch below.
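A minimal sketch of the fix, assuming the default file contains DHCPD_ENABLED="no" (check the exact variable name on your system first):

$ sudo sed -i 's/^DHCPD_ENABLED="no"/DHCPD_ENABLED="yes"/' /etc/default/udhcpd
$ sudo service udhcpd restart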

Conclusion

We've completed all the necessary steps to deploy a PNETLab server on Google Cloud. With this setup, we can run our network labs safely without exposing them to the internet.

That’s all, see you in the next post!
