Hello all,
I have been working on a PowerPC board running kernel 2.6.14 (I know it is old, but I cannot move to a more recent kernel in my embedded application at this point). My receiving video application can sustain a continuous 40 Mbps over UDP, but not over TCP sockets: when streaming at only 2 Mbps over TCP, the overall system CPU load already exceeds 80%.
Things I have tried:
- Increasing the IP and TCP receive (rmem) buffers to 6 MB. In the case shown below, the application also calls setsockopt() with SO_RCVBUF (see the first sketch after this list). The current sysctl settings are:
net.ipv4.tcp_rmem = 4096 87380 6619136
net.ipv4.tcp_mem = 6619136 6619136 6619136
- The system is not running out of memory (see the 'top' output below).
- I have tried both the reno and bic congestion control algorithms (a per-socket sketch also follows this list).
- I have enabled "Explicit Congestion Notification".
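For reference, the receive-buffer enlargement in the application looks roughly like the sketch below. This is a minimal sketch, not the real etrans code; the helper name and the exact 6 MB request are illustrative. Note that Linux caps the value granted by SO_RCVBUF at net.core.rmem_max and reports back roughly double the requested size from getsockopt().

/* Minimal sketch of the SO_RCVBUF call mentioned above -- illustrative
 * only, not the real application code.  The 6 MB request matches the
 * tcp_rmem maximum configured via sysctl. */
#include <stdio.h>
#include <sys/socket.h>

static int enlarge_rcvbuf(int sock_fd)
{
    int requested = 6 * 1024 * 1024;   /* 6 MB */
    int effective = 0;
    socklen_t len = sizeof(effective);

    /* Set this before connect()/accept(): the TCP window scale is
     * negotiated when the connection is established, and the value
     * granted here is capped by net.core.rmem_max. */
    if (setsockopt(sock_fd, SOL_SOCKET, SO_RCVBUF,
                   &requested, sizeof(requested)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }

    /* Read it back; Linux reports roughly double the requested size
     * to account for its own bookkeeping overhead. */
    if (getsockopt(sock_fd, SOL_SOCKET, SO_RCVBUF, &effective, &len) == 0)
        printf("SO_RCVBUF effective size: %d bytes\n", effective);

    return 0;
}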
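For the congestion control experiments, the algorithm can also be pinned per socket with the TCP_CONGESTION socket option (added in kernel 2.6.13, so it should be available on 2.6.14). The sketch below is illustrative only; the fallback #define of the option's numeric value is an assumption for toolchains whose netinet/tcp.h predates the option.

/* Sketch only: selecting the congestion control algorithm per socket.
 * TCP_CONGESTION exists since kernel 2.6.13; the fallback #define
 * (value 13, from the kernel's tcp.h) is an assumption for older
 * toolchain headers that do not define it. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_CONGESTION
#define TCP_CONGESTION 13
#endif

static int set_congestion_algo(int sock_fd, const char *algo)
{
    char current[16] = "";
    socklen_t len = sizeof(current);

    /* algo would be "reno" or "bic" for the experiments described above. */
    if (setsockopt(sock_fd, IPPROTO_TCP, TCP_CONGESTION,
                   algo, strlen(algo)) < 0) {
        perror("setsockopt(TCP_CONGESTION)");
        return -1;
    }

    /* Read it back to confirm which algorithm the socket ended up with. */
    if (getsockopt(sock_fd, IPPROTO_TCP, TCP_CONGESTION, current, &len) == 0)
        printf("congestion control now: %s\n", current);

    return 0;
}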
I am aware that TCP is inherently more expensive than UDP, but I would not expect the difference to be this large.
Below are some statistics taken while running my receiving video application, "etrans". Basically, it seems the system/kernel itself is taking the hit: system CPU load reaches 50.7%, far more than the roughly 3% the application uses when receiving over UDP.
root@RD-60-011D6F:~# top
7:55pm up 42 min, 0 users, load average: 3.32, 3.34, 3.08
80 processes: 79 sleeping, 1 running, 0 zombie, 0 stopped
CPU states: 40.4% user, 50.7% system, 0.0% nice, 8.9% idle
Mem: 125868K total, 101248K used, 24620K free, 11688K buffers
Swap: 0K total, 0K used, 0K free, 30276K cached
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
1502 root 25 0 2404 1100 876 R 29.9 0.8 0:17 top.old
1210 root 16 0 56260 15M 4284 S 24.2 12.7 10:49 dtrans
988 root 15 0 56260 15M 4284 S 3.4 12.7 2:20 dtrans
1102 root 15 0 10620 1804 1384 S 3.3 1.4 1:36 pard
961 root 15 0 10608 3116 2708 S 3.1 2.4 1:27 vtosfifo
1119 root 16 0 16476 4448 1600 S 3.1 3.5 0:45 snmpd
889 root 16 0 19192 2140 1644 S 1.9 1.7 0:44 cmdld
990 root 15 0 56260 15M 4284 S 1.5 12.7 0:26 dtrans
733 root 15 0 0 0 0 SW 1.4 0.0 0:25 kjournald
957 root 11 -5 0 0 0 SW< 1.2 0.0 0:32 lnbpwr_wq/0
974 root 15 0 56260 15M 4284 S 1.1 12.7 0:19 dtrans
788 root 10 -5 0 0 0 SW< 1.0 0.0 0:26 fec_wq/0
1032 avahi 15 0 3584 1624 1384 S 1.0 1.2 0:52 avahi-daemon
937 root 16 0 8692 2436 1896 S 0.7 1.9 0:06 sshd
842 syslogd 15 0 2000 668 572 S 0.6 0.5 0:28 syslogd
1066 root 16 0 11580 2148 1272 S 0.4 1.7 0:03 httpd
927 root 15 0 20292 3188 2448 S 0.1 2.5 0:16 menud
958 root 15 0 11788 5048 3732 S 0.1 4.0 3:02 dcmd
1 root 16 0 1664 548 484 S 0.0 0.4 0:02 init
2 root 34 19 0 0 0 SWN 0.0 0.0 0:00 ksoftirqd/0
3 root 10 -5 0 0 0 SW< 0.0 0.0 0:01 events/0
4 root 12 -5 0 0 0 SW< 0.0 0.0 0:00 khelper
5 root 10 -5 0 0 0 SW< 0.0 0.0 0:00 kthread
21 root 10 -5 0 0 0 SW< 0.0 0.0 0:00 khubd
23 root 10 -5 0 0 0 SW< 0.0 0.0 0:00 kblockd/0
69 root 20 0 0 0 0 SW 0.0 0.0 0:00 pdflush
70 root 15 0 0 0 0 SW 0.0 0.0 0:00 pdflush
72 root 20 -5 0 0 0 SW< 0.0 0.0 0:00 aio/0
71 root 25 0 0 0 0 SW 0.0 0.0 0:00 kswapd0
73 root 15 0 0 0 0 SW 0.0 0.0 0:00 cifsoplockd
606 root 11 -5 0 0 0 SW< 0.0 0.0 0:00 kseriod
638 root 11 -5 0 0 0 SW< 0.0 0.0 0:00 ata/0
658 root 16 0 0 0 0 SW 0.0 0.0 0:00 kjournald
690 root 15 0 0 0 0 SW 0.0 0.0 0:02 kjournald
783 root 10 -5 0 0 0 SW< 0.0 0.0 0:16 frpanel_wq/0
833 root 16 0 5988 1104 808 S 0.0 0.8 0:00 sshd
852 root 16 0 1668 388 320 S 0.0 0.3 0:00 klogd
865 root 18 0 11024 2008 1472 S 0.0 1.5 0:00 fregd
884 root 16 0 19192 2140 1644 S 0.0 1.7 0:00 cmdld
885 root 15 0 19192 2140 1644 S 0.0 1.7 0:00 cmdld
892 root 16 0 20292 3188 2448 S 0.0 2.5 0:24 menud
898 root 16 0 10632 1896 1456 S 0.0 1.5 0:00 pnpd
905 root 16 0 18824 2464 1992 S 0.0 1.9 0:01 xcpServerd
907 root 17 0 18824 2468 2012 S 0.0 1.9 0:00 xcpClientd
917 root 16 0 20560 3948 3132 S 0.0 3.1 0:18 sysd
924 root 15 0 18824 2464 1992 S 0.0 1.9 0:01 xcpServerd
925 root 16 0 18824 2464 1992 S 0.0 1.9 0:00 xcpServerd
926 root 16 0 20292 3188 2448 S 0.0 2.5 0:01 menud
928 root 15 0 18824 2468 2012 S 0.0 1.9 0:01 xcpClientd
933 root 16 0 18824 2468 2012 S 0.0 1.9 0:00 xcpClientd
934 root 15 0 20560 3948 3132 S 0.0 3.1 0:01 sysd
935 root 15 0 20560 3948 3132 S 0.0 3.1 0:10 sysd
948 root 11 -5 0 0 0 SW< 0.0 0.0 0:00 dvblite_wq/0
962 root 16 0 11788 5048 3732 S 0.0 4.0 0:01 dcmd
973 root 15 0 2676 1420 1208 S 0.0 1.1 0:00 sh
981 root 16 0 56260 15M 4284 S 0.0 12.7 0:00 dtrans
*************************** Output from netstat ***************************
root@RD-60-011D6F:~# netstat -set
Tcp:
5 active connections openings
317 passive connection openings
0 failed connection attempts
16 connection resets received
3 connections established
532287 segments received
262156 segments send out
0 segments retransmited
0 bad segments received.
6133 resets sent
TcpExt:
27823 packets pruned from receive queue because of socket buffer overrun
210 TCP sockets finished time wait in fast timer
122 delayed acks sent
860 delayed acks further delayed because of locked socket
1 packets directly queued to recvmsg prequeue.
307805 packets header predicted
1223 acknowledgments not containing data received
1378 predicted acknowledgments
0 TCP data loss events
58319048 packets collapsed in receive queue due to low socket buffer
1 connections reset due to early user close
*************************** Output from the /proc/net file system ***************************
root@RD-60-011D6F:~# cat /proc/net/sockstat
sockets: used 74
TCP: inuse 9 orphan 0 tw 12 alloc 9 mem 2562
UDP: inuse 9
RAW: inuse 0
FRAG: inuse 0 memory 0
root@RD-60-011D6F:~# cat /proc/net/softnet_stat
00089ccf 00000000 000065eb 00000000 00000000 00000000 00000000 00000000 00000000
root@RD-60-011D6F:~# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:002B 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 1351 1 c6ea8ba0 3000 0 0 2 -1
1: 00000000:07D0 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 3548 1 c6ea8420 3000 0 0 2 -1
2: 00000000:0050 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 1444 1 c6ea87e0 3000 0 0 2 -1
3: 00000000:0015 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 1350 1 c76f8400 3000 0 0 2 -1
4: 00000000:0016 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 939 1 c76f8b80 3000 0 0 2 -1
5: 00000000:0017 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 1041 1 c76f87c0 3000 0 0 2 -1
6: 0A00076F:07D0 0A000770:0418 01 00000000:00961F00 00:00000000 00000000 0 0 3549 3 c6ea8060 201 40 0 2 -1
7: C0A8076F:0050 C0A80AE7:E5F0 06 00000000:00000000 03:0000090A 00000000 0 0 0 2 c3e2fc20
8: C0A8076F:0050 C0A80AE7:E5FA 06 00000000:00000000 03:00001052 00000000 0 0 0 2 c3e2f260
9: C0A8076F:0050 C0A80AE7:E5FD 06 00000000:00000000 03:0000114C 00000000 0 0 0 2 c3e2f500
10: C0A8076F:0050 C0A80AE7:E5FF 06 00000000:00000000 03:00001287 00000000 0 0 0 2 c3e2f560
11: C0A8076F:0050 C0A80AE7:E5EC 06 00000000:00000000 03:0000051C 00000000 0 0 0 2 c3e2f320
12: C0A8076F:0050 C0A80AE7:E5EE 06 00000000:00000000 03:0000073C 00000000 0 0 0 2 c3e2f680
13: C0A8076F:0050 C0A80AE7:E601 01 00000000:00000000 02:000AF8AE 00000000 72 0 12752 2 c4843bc0 234 40 8 3 -1
14: C0A8076F:0016 C0A80AE7:A668 01 00000000:00000000 02:00067189 00000000 0 0 1139 2 c76f8040 224 40 3 3 -1
I would greatly appreciate any help or pointers that might get me closer to resolving the poor TCP performance on this system.
Thanks,