
Maximizing Throughput on Linux Devices using the RF24 communication Stack


How to achieve peak performance via TCP/IP connections

With the recent changes to the core RF24 driver improving stability, I've begun more thorough investigation and testing of maximum throughput and speed on Linux devices.

What I've found is that the typical TCP/IP stack on Linux is designed to maximize throughput for high-speed, bidirectional communication devices, whereas the RF24 Comm Stack is built around the nRF24 and nRF52 radio devices, which are half-duplex: they can only send or receive at any given time, not both. To work around this mismatch, users are encouraged to shrink the TCP/IP window sizes. This causes the system to send smaller payloads, one at a time, which increases throughput and the overall speed of communication when using the RF24Gateway layers for Linux-to-Linux device communication.

To configure the window sizes for RF24Gateway, just run the following commands:

sudo sysctl net.ipv4.tcp_wmem="1500 1500 1500"
sudo sysctl net.ipv4.tcp_rmem="1500 1500 1500"

This pins the minimum, default, and maximum buffer sizes to 1500 bytes, which is right around the MAX_PAYLOAD_SIZE configured in the RF24Network layer for Linux devices.
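The change can be verified by reading the values back with sysctl; each variable reports its three values (minimum, default, and maximum, in bytes):

sysctl net.ipv4.tcp_wmem
sysctl net.ipv4.tcp_rmem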

 Results can be tested by running the following commands before and after making this change:

iperf3 -c 10.1.3.134 -4 -t 60 - For no window limit

iperf3 -c 10.1.3.134 -4 -t 60 -w 1500 - With a 1500-byte TCP/IP window

These changes are only temporary (they reset on reboot), but users can edit /etc/sysctl.conf to make them permanent.
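For example, adding the following lines to /etc/sysctl.conf preserves the settings across reboots; they can then be applied immediately with sudo sysctl -p:

net.ipv4.tcp_wmem = 1500 1500 1500
net.ipv4.tcp_rmem = 1500 1500 1500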

Note: These changes can severely impact or disable other network systems and services. Users are advised to put these commands into a script, so that enhanced RF24 throughput can be enabled and disabled on demand, as sketched below.
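A minimal sketch of such a script, assuming a POSIX shell and root privileges; the script name and state-file path are hypothetical. It saves the current values on enable so that disable can restore them exactly:

#!/bin/sh
# rf24-throughput.sh (hypothetical name): toggle reduced TCP window sizes
# for RF24 traffic. Run as root, e.g. sudo ./rf24-throughput.sh enable
STATE=/tmp/rf24-tcp-defaults

case "$1" in
enable)
    # Save the current values so they can be restored later
    sysctl -n net.ipv4.tcp_wmem > "$STATE"
    sysctl -n net.ipv4.tcp_rmem >> "$STATE"
    sysctl -w net.ipv4.tcp_wmem="1500 1500 1500"
    sysctl -w net.ipv4.tcp_rmem="1500 1500 1500"
    ;;
disable)
    # Restore the previously saved values
    sysctl -w net.ipv4.tcp_wmem="$(sed -n 1p "$STATE")"
    sysctl -w net.ipv4.tcp_rmem="$(sed -n 2p "$STATE")"
    ;;
*)
    echo "usage: $0 enable|disable" >&2
    exit 1
    ;;
esac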

With the RF24Gateway ncurses interrupt example, I'm achieving speeds up to 150-175Kbps, or 20-25KB/s, over TCP/IP, and up to around 30KB/s using UDP. To do this, one needs to modify the line gw.poll(2); in the example and change it to gw.poll(1);. This reduces the delay in handling incoming data, allowing maximum throughput.

[Image: iperf3 results over TCP/IP]
 
This works out to around 100-150Kbps on average, or 12.5-19KB/s, over TCP/IP, which includes the RF24Mesh node periodically renewing its address and briefly interrupting communication.

 
[Image: iperf3 results over UDP]
 
The results with UDP (shown above) vary depending on the bit-rate you set; in this case, a bit-rate of 130Kbps (about 16.25KB/s) was chosen.
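For reference, a fixed bit-rate UDP test can be run with iperf3's -u and -b options (server address as in the earlier examples):

iperf3 -c 10.1.3.134 -4 -u -b 130K -t 60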
 
 
[Image: More iperf3 results over UDP]

Here, a slightly higher bit-rate of 145Kbps was chosen; however, there was a small amount of loss (1 of 14500 datagrams).
 

