vBNG & Bison BNG PPPoE Setup Guide
Overview
This guide covers Virtual BNG (Broadband Network Gateway) and Bison BNG setup for PPPoE with Zal Ultra RADIUS integration. Virtual BNG solutions provide cost-effective, scalable alternatives to hardware BNG systems.
What is vBNG?
vBNG = Virtual Broadband Network Gateway
✅ Software-based BNG running on x86 servers
✅ Scalable and cost-effective alternative to hardware
✅ Supports PPPoE, IPoE, L2TP
✅ RADIUS AAA integration
✅ High performance with DPDK/VPP
Supported Platforms:
- 🔧 Accel-PPP - Popular open-source PPPoE server
- 🚀 DPDK-based vBNG - High-performance packet processing
- 📡 VPP (Vector Packet Processing) - Cisco FD.io project
- 🦬 Bison BNG - Open-source BNG solution
- 🐧 Linux PPPoE Server - Built-in Linux PPP daemon
Table of Contents
- Accel-PPP Setup
- DPDK vBNG Setup
- VPP BNG Setup
- Bison BNG Setup
- Performance Tuning
- Monitoring & Troubleshooting
Accel-PPP Setup
What is Accel-PPP?
Accel-PPP = Accelerated PPP daemon
✅ High-performance PPPoE/L2TP/PPTP server
✅ RADIUS AAA support
✅ Built-in DHCP server
✅ Traffic shaping (tc, tbf)
✅ VLAN support
✅ IPv6 support
✅ Can handle 10,000+ sessions on commodity hardware
Step 1: Install Accel-PPP
bash
# Update system
apt-get update
apt-get upgrade -y
# Install dependencies
apt-get install -y build-essential cmake gcc linux-headers-$(uname -r)
apt-get install -y libpcre3-dev libssl-dev liblua5.1-0-dev
apt-get install -y git
# Clone Accel-PPP
cd /opt
git clone https://github.com/xebd/accel-ppp.git
cd accel-ppp
# Build and install
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr \
-DKDIR=/usr/src/linux-headers-$(uname -r) \
-DLUA=TRUE \
-DSHAPER=TRUE \
-DRADIUS=TRUE \
-DNETSNMP=TRUE \
..
make
make install
# Load kernel modules
modprobe pppoe
modprobe ppp_mppe
# Enable on boot
echo "pppoe" >> /etc/modules
echo "ppp_mppe" >> /etc/modulesStep 2: Configure Accel-PPP
Step 2: Configure Accel-PPP
bash
# Create configuration file
cat > /etc/accel-ppp.conf << 'EOF'
[modules]
log_file
log_syslog
pppoe
auth_pap
auth_chap_md5
auth_mschap_v1
auth_mschap_v2
radius
ippool
shaper
net-snmp
connlimit
sigchld
[core]
log-error=/var/log/accel-ppp/core.log
thread-count=4
[common]
single-session=replace
sid-case=upper
[ppp]
verbose=1
min-mtu=1280
mtu=1492
mru=1492
accomp=deny
pcomp=deny
ccp=0
check-ip=0
single-session=replace
mppe=prefer
ipv4=require
ipv6=deny
ipv6-intf-id=0:0:0:1
ipv6-peer-intf-id=0:0:0:2
ipv6-accept-peer-intf-id=1
lcp-echo-interval=30
lcp-echo-failure=3
lcp-echo-timeout=0
unit-cache=1
[pppoe]
verbose=1
interface=eth1
ac-name=ISP-vBNG
service-name=ISP-PPPoE
called-sid=mac
tr101=0
pado-delay=0
ifname=pppoe%d
sid-case=upper
vlan-mon=eth1,100-200
vlan-timeout=60
vlan-name=%I.%N
[radius]
nas-identifier=vBNG-01
nas-ip-address=192.168.1.1
gw-ip-address=10.10.0.1
auth-server=192.168.1.100:1812,YourSecretKey123
acct-server=192.168.1.100:1813,YourSecretKey123
dae-server=192.168.1.100:3799,YourSecretKey123
verbose=1
timeout=3
max-try=3
acct-timeout=120
acct-delay-time=0
acct-interim-interval=300
[ip-pool]
gw-ip-address=10.10.0.1
10.10.1.2-254
10.10.2.2-254
10.10.3.2-254
[dns]
dns1=8.8.8.8
dns2=8.8.4.4
[shaper]
verbose=1
attr=Filter-Id
down-burst-factor=0.1
up-burst-factor=1.0
latency=50
mpu=0
r2q=10
quantum=1500
moderate-quantum=1
cburst=1534
ifb=ifb0
[cli]
telnet=127.0.0.1:2000
tcp=127.0.0.1:2001
password=admin
[snmp]
master=0
agent-name=accel-ppp
[connlimit]
limit=10/min
burst=3
timeout=60
[log]
log-file=/var/log/accel-ppp/accel-ppp.log
log-emerg=/var/log/accel-ppp/emerg.log
log-fail-file=/var/log/accel-ppp/auth-fail.log
log-debug=/var/log/accel-ppp/debug.log
copy=1
level=3
EOF
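Before starting the daemon, it helps to confirm that the RADIUS server in Zal Ultra answers from this host. A minimal sketch using radtest from freeradius-utils; testuser/testpass are placeholder credentials that must exist as a subscriber in Zal Ultra:
bash
# Sanity-check RADIUS reachability and the shared secret
apt-get install -y freeradius-utils
radtest testuser testpass 192.168.1.100 0 YourSecretKey123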
Step 3: Configure Network Interface
bash
# Configure subscriber interface (no IP)
cat > /etc/network/interfaces.d/eth1 << 'EOF'
auto eth1
iface eth1 inet manual
up ip link set $IFACE up
down ip link set $IFACE down
EOF
# Bring up interface
ip link set eth1 up
# Create IFB interface for shaping
modprobe ifb numifbs=1
ip link set ifb0 up
Step 4: Configure NAT
bash
# Enable IP forwarding
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
# Configure iptables NAT
iptables -t nat -A POSTROUTING -s 10.10.0.0/16 -o eth0 -j MASQUERADE
iptables -A FORWARD -i pppoe+ -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o pppoe+ -m state --state RELATED,ESTABLISHED -j ACCEPT
# Save iptables rules
apt-get install -y iptables-persistent
netfilter-persistent save
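Depending on the Accel-PPP version, a source build may not install a systemd unit, in which case the systemctl commands in the next step will fail. A minimal unit sketch (the accel-pppd path matches the /usr install prefix used above):
bash
# Minimal systemd unit for a source-built accel-ppp (skip if one already exists)
cat > /etc/systemd/system/accel-ppp.service << 'EOF'
[Unit]
Description=Accel-PPP PPPoE/L2TP/PPTP server
After=network.target

[Service]
Type=forking
ExecStart=/usr/sbin/accel-pppd -d -c /etc/accel-ppp.conf -p /run/accel-pppd.pid
PIDFile=/run/accel-pppd.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload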
Step 5: Start Accel-PPP
bash
# Create log directory
mkdir -p /var/log/accel-ppp
# Start service
systemctl start accel-ppp
systemctl enable accel-ppp
# Check status
systemctl status accel-ppp
# View logs
tail -f /var/log/accel-ppp/accel-ppp.log
Step 6: Verify Configuration
bash
# Connect to CLI
telnet 127.0.0.1 2000
# Password: admin
# Show sessions
show sessions
# Show statistics
show stat
# Show radius
show radius
# Exit
exit
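With the server running, an end-to-end test from a separate Linux host on the same L2 segment confirms PADO replies and authentication. A sketch (the client interface name and credentials are examples):
bash
# On a test client - discovery should list AC-Name "ISP-vBNG"
apt-get install -y ppp pppoe
pppoe-discovery -I eth0
# Bring up a test session via pppd's kernel-mode PPPoE plugin
pppd plugin rp-pppoe.so eth0 user "testuser" password "testpass" \
    noipdefault defaultroute usepeerdns debug nodetach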
DPDK vBNG Setup
What is DPDK?
DPDK = Data Plane Development Kit
✅ High-performance packet processing framework
✅ Bypasses kernel network stack
✅ Direct NIC access (poll mode drivers)
✅ Can handle millions of packets per second
✅ Used by major vendors (Intel, Cisco, etc.)
Architecture
Subscriber → NIC (DPDK PMD, kernel bypass) → DPDK vBNG (user-space processing) → RADIUS (Zal Ultra)
Installation
bash
# Install DPDK from distribution packages (or skip this and use the
# source build below - pick one approach to avoid mixed versions)
apt-get install -y dpdk dpdk-dev
# Install build tools
apt-get install -y build-essential meson ninja-build python3-pyelftools
# Clone DPDK
cd /opt
git clone https://github.com/DPDK/dpdk.git
cd dpdk
# Build DPDK
meson build
cd build
ninja
ninja install
ldconfig
# Setup hugepages
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
# Make permanent
echo "vm.nr_hugepages=1024" >> /etc/sysctl.conf
echo "nodev /mnt/huge hugetlbfs defaults 0 0" >> /etc/fstabConfigure DPDK vBNG
Configure DPDK vBNG
bash
# Bind NIC to DPDK
dpdk-devbind.py --status
dpdk-devbind.py --bind=vfio-pci 0000:01:00.0
# Configure vBNG (example using VPP)
# See VPP section below
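Keep in mind that dpdk-devbind bindings do not survive a reboot. If your distribution ships the driverctl package, it can persist the override (the PCI address is the example one used above):
bash
# Optional: make the vfio-pci binding persistent across reboots
apt-get install -y driverctl
driverctl set-override 0000:01:00.0 vfio-pci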
VPP BNG Setup
What is VPP?
VPP = Vector Packet Processing (FD.io)
✅ Cisco's open-source high-performance framework
✅ Built on DPDK
✅ Supports PPPoE, L2TP, IPoE
✅ Advanced routing and switching
✅ Can handle 100+ Gbps on commodity hardware
Installation
bash
# Add VPP repository
curl -s https://packagecloud.io/install/repositories/fdio/release/script.deb.sh | bash
# Install VPP
apt-get install -y vpp vpp-plugin-core vpp-plugin-dpdk
# Start VPP
systemctl start vpp
systemctl enable vpp
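For DPDK-backed NICs, VPP takes its interfaces and worker cores from /etc/vpp/startup.conf rather than the runtime CLI. A sketch of the relevant stanzas; the PCI addresses and core numbers are examples, and the sections should be merged into the existing file rather than duplicated:
bash
# Example startup.conf stanzas - adjust to your hardware
cat >> /etc/vpp/startup.conf << 'EOF'
cpu {
  main-core 0
  corelist-workers 1-3
}
dpdk {
  dev 0000:01:00.0
  dev 0000:02:00.0
}
EOF
systemctl restart vpp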
Configure VPP for PPPoE
bash
# Connect to the VPP CLI
# (the plugin CLI syntax below is illustrative and varies between VPP
# releases; check "help" in your build)
vppctl
# Configure interfaces
create host-interface name eth1
set interface state host-eth1 up
set interface ip address host-eth1 192.168.1.1/24
create host-interface name eth0
set interface state host-eth0 up
# Configure PPPoE
pppoe create server host-eth1
# Configure RADIUS
radius add server 192.168.1.100 key YourSecretKey123
# Configure IP pool
ip pool add 10.10.1.2 - 10.10.1.254
# Enable NAT
nat44 add interface address host-eth0
set interface nat44 in host-eth1 out host-eth0
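Basic state can then be verified from the shell (output and plugin command names vary between VPP releases):
bash
# Interface, address, and NAT state
vppctl show interface
vppctl show interface address
vppctl show nat44 sessions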
Bison BNG Setup
What is Bison BNG?
Bison BNG = Open-source BNG solution
✅ Modern BNG implementation
✅ Supports PPPoE, IPoE
✅ RADIUS AAA integration
✅ High performance
✅ Container-ready
Installation
bash
# Install dependencies
apt-get install -y golang-go git
# Clone Bison BNG
cd /opt
git clone https://github.com/bison-bng/bison-bng.git
cd bison-bng
# Build
make build
# Install
make install
Configure Bison BNG
bash
# Create configuration (create the directory first in case the
# install step did not)
mkdir -p /etc/bison-bng
cat > /etc/bison-bng/config.yaml << 'EOF'
server:
  interfaces:
    - name: eth1
      type: pppoe
radius:
  servers:
    - address: 192.168.1.100
      port: 1812
      secret: YourSecretKey123
      accounting_port: 1813
      coa_port: 3799
nas:
  identifier: BISON-BNG-01
  ip_address: 192.168.1.1
pools:
  - name: default
    range: 10.10.1.2-10.10.1.254
    gateway: 10.10.0.1
pppoe:
  service_name: ISP-PPPoE
  ac_name: ISP-BISON-BNG
  mtu: 1492
  authentication:
    - chap
    - pap
dns:
  primary: 8.8.8.8
  secondary: 8.8.4.4
EOF
# Start Bison BNG
systemctl start bison-bng
systemctl enable bison-bng
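Standard systemd tooling gives a quick health check; assuming the service logs to the journal like most systemd services, RADIUS and session activity should be visible there:
bash
# Confirm the service started and watch its log output
systemctl status bison-bng
journalctl -u bison-bng -f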
Performance Tuning
System Optimization
bash
# Increase file descriptors
echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
# Kernel tuning
cat >> /etc/sysctl.conf << 'EOF'
# Network performance
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_rmem=4096 87380 67108864
net.ipv4.tcp_wmem=4096 65536 67108864
net.core.netdev_max_backlog=5000
net.ipv4.tcp_congestion_control=bbr
# Connection tracking
net.netfilter.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_tcp_timeout_established=7200
# IP forwarding
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
EOF
sysctl -p
CPU Affinity
bash
# Pin Accel-PPP to specific CPUs
taskset -cp 0-3 $(pidof accel-pppd)
# For DPDK/VPP, configure in startup config
# Isolate CPUs for packet processing: isolcpus must go on the kernel
# command line (GRUB_CMDLINE_LINUX), not on a line of its own
sed -i 's/^GRUB_CMDLINE_LINUX="/&isolcpus=1-7 /' /etc/default/grub
update-grub
reboot
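After the reboot, the kernel exposes the isolated set, so the grub change is easy to confirm:
bash
# Should print the isolated CPU list (e.g. 1-7)
cat /sys/devices/system/cpu/isolated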
Monitoring & Troubleshooting
Accel-PPP Monitoring
bash
# CLI commands
telnet 127.0.0.1 2000
show sessions
show sessions ifname pppoe0
show stat
show radius stat
# Log monitoring
tail -f /var/log/accel-ppp/accel-ppp.log
tail -f /var/log/accel-ppp/auth-fail.log
# Session count
echo "show sessions" | nc 127.0.0.1 2001 | grep -c "pppoe"Performance Monitoring
bash
# CPU usage
top -p $(pidof accel-pppd)
# Network statistics
ifconfig pppoe0
ip -s link show pppoe0
# Connection tracking
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
Common Issues
Issue 1: High CPU Usage
bash
# Check thread count
grep thread-count /etc/accel-ppp.conf
# Increase threads to match CPU cores: edit /etc/accel-ppp.conf, [core] section
#   thread-count=8
# Restart service
systemctl restart accel-ppp
Issue 2: Session Limit Reached
bash
# Check current sessions
echo "show stat" | nc 127.0.0.1 2001
# Raise the file-descriptor limit for the current shell
# (persistent limits belong in /etc/security/limits.conf, see Performance Tuning)
ulimit -n 65535
# Check conntrack
cat /proc/sys/net/netfilter/nf_conntrack_max
Issue 3: RADIUS Timeout
bash
# Check RADIUS connectivity
ping 192.168.1.100
# Increase the timeout in /etc/accel-ppp.conf, [radius] section
#   timeout=5
# Check RADIUS logs in Zal Ultra
# Network → RADIUS Logs
Best Practices
Security
✅ Use strong RADIUS secret
✅ Restrict management access (CLI, SNMP)
✅ Enable firewall rules
✅ Use single-session=replace to prevent duplicates
✅ Monitor failed authentication attempts
✅ Regular security updates
Performance
✅ Use DPDK/VPP for high-capacity networks
✅ Tune kernel parameters
✅ Use CPU affinity for packet processing
✅ Monitor system resources
✅ Use appropriate thread count
✅ Enable hardware offloading if available
Reliability
✅ Use systemd for automatic restart
✅ Monitor service health
✅ Set up redundant RADIUS servers
✅ Regular backups of configuration
✅ Test failover scenarios
✅ Monitor logs for errors
Capacity Planning
Hardware Recommendations
Small ISP (< 1,000 users):
- CPU: 4 cores
- RAM: 8 GB
- NIC: 1 Gbps
- Storage: 50 GB SSD
- Software: Accel-PPP
Medium ISP (1,000 - 10,000 users):
- CPU: 8-16 cores
- RAM: 16-32 GB
- NIC: 10 Gbps
- Storage: 100 GB SSD
- Software: Accel-PPP or VPP
Large ISP (10,000+ users):
- CPU: 16+ cores (high clock speed)
- RAM: 64+ GB
- NIC: 10/25/40 Gbps with DPDK support
- Storage: 200 GB NVMe SSD
- Software: DPDK + VPP or Bison BNG
Related Documentation
- 📘 PPPoE Overview - MikroTik setup
- 📗 Cisco PPPoE - Cisco IOS/IOS-XE
- 📙 Juniper PPPoE - JunOS setup
- 🔐 RADIUS Setup - FreeRADIUS configuration
Summary
✅ vBNG & Bison BNG Setup Complete!
What We Covered:
- ✅ Accel-PPP installation and configuration
- ✅ DPDK high-performance setup
- ✅ VPP BNG configuration
- ✅ Bison BNG setup
- ✅ Performance tuning
- ✅ Monitoring and troubleshooting
Key Points:
✅ Accel-PPP: Best for small-medium ISPs
✅ DPDK/VPP: Best for high-capacity networks
✅ Bison BNG: Modern, container-ready solution
✅ Proper tuning essential for performance
✅ Monitor system resources regularly
Your virtual BNG is ready for Zal Ultra! 🚀
