# How to test Vhost-user net with OpenVSwitch/DPDK

The purpose of this document is to illustrate how to test vhost-user-net in cloud-hypervisor with OVS/DPDK as the backend.

## Framework

It's a simple test to validate the communication between two virtual machines, each connected to one of the vhost-user ports provided by `OVS/DPDK`.
```
   +----+----------+          +-------------+-----------+-------------+          +----------+----+
   |    |          |          |             |           |             |          |          |    |
   |    |vhost-user|----------| vhost-user  |    ovs    | vhost-user  |----------|vhost-user|    |
   |    |net device|          |   port 1    |           |   port 2    |          |net device|    |
   |    |          |          |             |           |             |          |          |    |
   |    +----------+          +-------------+-----------+-------------+          +----------+    |
   |               |          |                                       |          |               |
   | vm1           |          |                 dpdk                  |          |           vm2 |
   |               |          |                                       |          |               |
+--+---------------------------------------------------------------------------------------------+--+
|  |                                          hugepages                                          |  |
|  +---------------------------------------------------------------------------------------------+  |
|                                                                                                   |
|                                               host                                                |
|                                                                                                   |
+---------------------------------------------------------------------------------------------------+
```
## Prerequisites

Prior to running the test, the following steps need to be performed.
- Enable hugepages
- Install DPDK
- Install OVS

Here are some good references detailing these steps.
- Red Hat
  * https://wiki.qemu.org/Documentation/vhost-user-ovs-dpdk
- Ubuntu server
  * https://help.ubuntu.com/lts/serverguide/DPDK.html
  * https://software.intel.com/en-us/articles/set-up-open-vswitch-with-dpdk-on-ubuntu-server

## Test

The test runs with multiple queue (MQ) support enabled, using 2 pairs of TX/RX queues defined for both OVS and the virtual machine. Here are the details on how the test can be run.

_Setup OVS_

`ovs_test.sh` is created to set up and start OVS. OVS will provide the `dpdkvhostuser` backend running in server mode.
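The script below assumes hugepages have already been reserved and mounted on the host (see Prerequisites). A quick sanity check before running it (the exact counts and mount point are host-specific):

```shell
# HugePages_Total should be non-zero if hugepages were reserved
grep Huge /proc/meminfo
# There should be at least one hugetlbfs mount (e.g. /dev/hugepages)
mount -t hugetlbfs
```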
```bash
mkdir -p /var/run/openvswitch
modprobe openvswitch
killall ovsdb-server ovs-vswitchd
rm -f /var/run/openvswitch/vhost-user*
rm -f /etc/openvswitch/conf.db
export DB_SOCK=/var/run/openvswitch/db.sock
ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
ovs-vsctl --no-wait init
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xf
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xf
ovs-vswitchd unix:$DB_SOCK --pidfile --detach --log-file=/var/log/openvswitch/ovs-vswitchd.log
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
ovs-vsctl set Interface vhost-user1 options:n_rxq=2
ovs-vsctl set Interface vhost-user2 options:n_rxq=2
```
_Run ovs_test.sh_
```bash
./ovs_test.sh
```

_Launch the VMs_

VMs run in client mode. They connect to the socket created by the `dpdkvhostuser` backend.
```bash
# From one terminal. We need to give the cloud-hypervisor binary the NET_ADMIN
# capability for it to set TAP interfaces up on the host.
./cloud-hypervisor \
    --cpus boot=2 \
    --memory size=512M,file=/dev/hugepages \
    --kernel vmlinux \
    --cmdline "reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
    --disk path=clear-kvm.img \
    --net "mac=52:54:00:02:d9:01,vhost_user=true,socket=/var/run/openvswitch/vhost-user1,num_queues=4"

# From another terminal.
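# Optional sanity check before launching the second VM: verify that OVS
# created both vhost-user sockets (paths taken from the OVS script above).
ls /var/run/openvswitch/vhost-user* 2>/dev/null || echo "vhost-user sockets not found; is OVS running?"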
# We need to give the cloud-hypervisor binary the NET_ADMIN
# capability for it to set TAP interfaces up on the host.
./cloud-hypervisor \
    --cpus boot=2 \
    --memory size=512M,file=/dev/hugepages \
    --kernel vmlinux \
    --cmdline "reboot=k panic=1 nomodules i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd root=/dev/vda3" \
    --disk path=clear-kvm.img \
    --net "mac=52:54:20:11:C5:02,vhost_user=true,socket=/var/run/openvswitch/vhost-user2,num_queues=4"
```

_Setup VM1_
```bash
# From inside the guest
sudo ip addr add 172.100.0.1/24 dev enp0s3
```

_Setup VM2_
```bash
# From inside the guest
sudo ip addr add 172.100.0.2/24 dev enp0s3
```

_Ping VM1 from VM2_
```bash
# From inside the guest
sudo ping 172.100.0.1
```

_Ping VM2 from VM1_
```bash
# From inside the guest
sudo ping 172.100.0.2
```

__Result:__ At this point, VM1 and VM2 can ping each other successfully. We can now run the `iperf3` test.

_Run VM1 as server_
```bash
# From inside the guest
iperf3 -s -p 4444
```

_Run VM2 as client_
```bash
# From inside the guest
iperf3 -c 172.100.0.1 -t 30 -p 4444 &
```
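Since the VMs are launched with `num_queues=4` (i.e. 2 RX/TX queue pairs), you can also confirm from inside the guest that the multiqueue configuration took effect. A minimal check, assuming the interface is named `enp0s3` as in the steps above:

```shell
# From inside the guest. Each queue pair appears as an rx-N/tx-N directory
# pair under sysfs; with 2 queue pairs we expect rx-0 rx-1 tx-0 tx-1.
ls /sys/class/net/enp0s3/queues/
```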