With a GL-AR750 lying around, I wondered what fun project I could do with it. With a USB 2.0 port AND a microSD slot, I figured it could make a small NAS to hold a duplicate of some backups!
This could be the perfect device for an off-site backup. Even though I already knew it would not be blazing fast, I wondered what kind of performance this small package could handle!
The specifications of this little router look like this :
- A small Linux OS (OpenWrt)
- 650 MHz single-core processor (QCA9531)
- 16 MB of flash storage!
- 128 MB of DDR2 RAM
- 802.11a/b/g/n/ac
- 3 Ethernet ports
- One micro-USB port
- One USB 2.0 port
Yes, quite the little device!
I began by benchmarking the SD card on its own, to know what performance it could reach before the AR750 itself became the limiting factor: about 22 MB/s read speed and 14 MB/s write speed.
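The exact benchmark commands aren't shown here, but a quick way to get comparable raw numbers is dd. This is a hypothetical sketch, not the commands from the original run: TARGET would normally point at the mounted card (e.g. /mnt/sda1/ddtest.bin); /tmp is used only so the snippet runs anywhere.

```shell
# Hypothetical dd-based benchmark (not the original commands from this post).
# Point TARGET at the SD card mount (e.g. /mnt/sda1/ddtest.bin) to measure it.
TARGET=/tmp/ddtest.bin

# Write speed: conv=fdatasync makes dd wait until the data reaches the
# device, so the page cache doesn't inflate the number.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync

# Read speed: drop the page cache first (needs root), otherwise Linux
# serves the file straight from RAM:
#   echo 3 > /proc/sys/vm/drop_caches
dd if="$TARGET" of=/dev/null bs=1M

rm "$TARGET"
```

dd reports the achieved throughput on stderr after each transfer.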
Before testing the SD card over the network, I ran a quick iperf3 benchmark just to make sure the device could handle a synthetic load on its network interface.
mathieu@LINUXPUTER:~$ iperf3 -c 192.168.0.89 -f K
Connecting to host 192.168.0.89, port 5201
[ 5] local 192.168.0.31 port 48898 connected to 192.168.0.89 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 12.6 MBytes 12877 KBytes/sec 0 471 KBytes
[ 5] 1.00-2.00 sec 11.5 MBytes 11736 KBytes/sec 0 471 KBytes
[ 5] 2.00-3.00 sec 11.4 MBytes 11675 KBytes/sec 0 471 KBytes
[ 5] 3.00-4.00 sec 11.4 MBytes 11674 KBytes/sec 0 471 KBytes
[ 5] 4.00-5.00 sec 10.5 MBytes 10702 KBytes/sec 0 471 KBytes
[ 5] 5.00-6.00 sec 11.4 MBytes 11674 KBytes/sec 0 471 KBytes
[ 5] 6.00-7.00 sec 11.4 MBytes 11675 KBytes/sec 0 471 KBytes
[ 5] 7.00-8.00 sec 10.5 MBytes 10701 KBytes/sec 0 471 KBytes
[ 5] 8.00-9.00 sec 11.4 MBytes 11675 KBytes/sec 0 471 KBytes
[ 5] 9.00-10.00 sec 11.4 MBytes 11674 KBytes/sec 0 471 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 113 MBytes 11606 KBytes/sec 0 sender
[ 5] 0.00-10.04 sec 111 MBytes 11364 KBytes/sec receiver
iperf Done.
That looks good. Since I was using the 100 Mbit/s Ethernet port, capping out around that speed was expected. While running the iperf3 benchmark, the CPU was pinned at 100%… so yeah, I wasn’t expecting to reach that kind of speed with the SD card.
First try : scp
I began by running the most basic scp command to get a first impression of the kind of performance I could expect.
For the write speed test I did :
scp -v linuxmint-20.2-mate-64bit.iso root@192.168.0.89:/mnt/sda1/download/
...
linuxmint-20.2-mate-64bit.iso 2% 56MB 4.4MB/s 07:34 ETA
For the read speed test I ran the inverse command and got :
scp -v root@192.168.0.89:/mnt/sda1/download/linuxmint-20.2-mate-64bit.iso ./testiso.iso
...
linuxmint-20.2-mate-64bit.iso 2% 60MB 5.0MB/s 06:37 ETA
What I noticed in both tests was that dropbear, not the scp process, was using most of the CPU. I also wondered why the read speed was hitting a ceiling at 5.0 MB/s.
I assumed the high CPU usage of the dropbear process was due to encryption, and wondered if I could get a bit more speed out of a faster SSH cipher. Sadly, the dropbear build shipped with OpenWrt doesn’t offer many ciphers, so I didn’t get better results. I then wondered whether another program could yield quicker transfers.
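For reference, the OpenSSH client lets you force a specific cipher with scp's -c flag; here is a sketch of how one would try this (the cipher name below is just an example, and it must also be one the router's dropbear build supports):

```shell
# List the ciphers the local OpenSSH client supports:
ssh -Q cipher

# Then force one that the router's dropbear also offers, e.g.
# (aes128-ctr is an assumption, not a measured recommendation):
# scp -c aes128-ctr linuxmint-20.2-mate-64bit.iso root@192.168.0.89:/mnt/sda1/download/
```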
Second try : rsync
I then wondered if I could gain any speed by simply using rsync instead of scp, so I swapped my basic scp command for the equivalent basic rsync one.
For the write speed I got :
$rsync --progress linuxmint-20.2-mate-64bit.iso root@192.168.0.89:/mnt/sda1/download/rsynctest.iso
sending incremental file list
linuxmint-20.2-mate-64bit.iso
221,904,896 10% 3.26MB/s 0:09:35
And for the read speed I got :
$rsync --progress root@192.168.0.91:/mnt/sda1/download/linuxmint-20.2-mate-64bit.iso ./testfile.iso
linuxmint-20.2-mate-64bit.iso
336,035,840 15% 3.74MB/s 0:07:52
This was actually quite underwhelming in terms of speed. I was expecting at least the same speed as scp, maybe even a slight advantage, but I did not expect a downgrade in performance. Fortunately, the time spent researching rsync made me stumble onto something really interesting: rsync has its own TCP protocol for talking from an rsync client to an rsync daemon.
Third try : The rsync protocol
On the server I installed the rsync daemon using the OpenWrt package manager. I then modified the /etc/rsyncd.conf file so that it contained :
/etc/rsyncd.conf
# Minimal configuration for rsync daemon
# Next line required for init script
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
# Basic shared folder
[test_dir]
path = /mnt/sda1/download
read only = false
An rsync client can then access the test_dir folder and read from or write to it.
On the client I then ran a similar command as before, but this time using the rsync:// protocol :
$rsync --progress rsync://192.168.2.42:/test_dir/linuxmint-20.2-mate-64bit.iso ./testiso.iso
linuxmint-20.2-mate-64bit.iso
667,385,856 31% 6.08MB/s 0:03:56
Success!! This time I nearly doubled my previous download speed! One thing to know: by default this protocol isn’t really secured, so you need to add a password to your network share.
The biggest win of all is that the CPU on our small server wasn’t even pinned at 100%, so there must be another bottleneck now!
Just for fun I tested the write speed to our little server, and it was more than decent… and I honestly don’t understand how this is possible.
rsync --progress ./linuxmint-20.2-cinnamon-64bit.iso rsync://192.168.2.42:/test_dir/write_iso.iso
linuxmint-20.2-cinnamon-64bit.iso
213,155,840 9% 9.16MB/s 0:03:27
We were almost reaching the maximum write speed of the SD card, and writes came out faster than reads, which doesn’t really make sense… I’ll have to dig a little deeper to explain why. The good news is that those speeds were far better than what we started with, so I consider this a win!
The key takeaway is that on a small embedded device, there is a good performance boost to be gained by bypassing the SSH protocol. Just don’t forget to secure your network share if you end up using the rsync daemon!
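For completeness, here is a minimal sketch of what password-protecting the module could look like (the user name and password are placeholders, not values from this setup):

```
# /etc/rsyncd.conf -- add authentication to the module:
[test_dir]
path = /mnt/sda1/download
read only = false
auth users = backupuser
secrets file = /etc/rsyncd.secrets

# /etc/rsyncd.secrets -- one "user:password" per line; rsync insists
# this file is not world-readable (chmod 600):
# backupuser:changeme
```

The client then connects as rsync://backupuser@192.168.2.42/test_dir and supplies the password interactively, through the RSYNC_PASSWORD environment variable, or with --password-file. Keep in mind the plain rsync protocol doesn’t encrypt traffic, so this controls access but doesn’t protect the data in transit.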