On 7/21/15 3:51 PM, Brian Kantor wrote:
> On Tue, Jul 21, 2015 at 02:26:36PM -0400, Bryan Fields wrote:
>> What is the configuration of the UCSD gateway?
> I answered that in a previous email earlier today. Again: it's a
> dual-core 3.2 GHz Xeon processor with two 1 GbE ports. Port 'em0' is
> connected to a 1G switch which is in turn connected at 10 GbE to the
> building infrastructure switch/router. Port 'em1' is output-only to
> the network 'telescope'. The system never swaps or pages.
That's a partial answer. What version of FBSD is it running, how much RAM
does it have, what southbridge, and what chipset are the NICs? All of
these matter immensely in a software router.
> It does all the packet filtering, selection, and diversion using
> kernel-mode 'ipfw'. The very few packets which are destined for
> legitimate AMPR hosts are forwarded and encapsulated by a user-mode
> program. That program consumes almost no resources because there are
> so few packets headed to or from legitimate AMPR hosts, and that's
> all it's given to handle.
Cool. Where is the source of the gateway program? If it's not open
source, why not?
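For anyone curious what that user-mode encapsulation amounts to: IPIP (IP protocol 4, RFC 2003) just wraps the original datagram in a second IPv4 header addressed to the far tunnel endpoint. A minimal sketch in Python follows; the helper names and the addresses (RFC 5737 documentation blocks) are mine, not anything from the actual gateway code:

```python
import socket
import struct

def ipv4_header(src: str, dst: str, payload_len: int, proto: int = 4) -> bytes:
    """Build a minimal 20-byte IPv4 header. Protocol 4 = IPIP (RFC 2003).
    The checksum is left at 0 here; this is only an illustration, and on
    raw sockets many kernels fill it in anyway."""
    ver_ihl = (4 << 4) | 5            # IPv4, IHL = 5 32-bit words, no options
    total_len = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        ver_ihl, 0, total_len,        # version/IHL, TOS, total length
        0, 0,                         # identification, flags/fragment offset
        64, proto, 0,                 # TTL, protocol, checksum (0 = unfilled)
        socket.inet_aton(src),
        socket.inet_aton(dst),
    )

def encapsulate(inner_packet: bytes, tunnel_src: str, tunnel_dst: str) -> bytes:
    """Wrap an already-complete IPv4 datagram in an outer IPIP header."""
    return ipv4_header(tunnel_src, tunnel_dst, len(inner_packet)) + inner_packet

# Hypothetical: wrap a 40-byte inner datagram for delivery to a tunnel endpoint.
inner = bytes(40)
outer = encapsulate(inner, "192.0.2.1", "203.0.113.10")
```

Per-packet, that's all the user-mode daemon has to do on the forwarding path, which is consistent with it consuming almost no resources at the low legitimate-traffic rates described.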
It would also be cool to have some NetFlow or real-time metering of the
legitimate AMPRnet traffic over the gateway; something like an AMPRnet
dashboard.
> Statistics and experiments show that the bottleneck is the IP input
> routines processing the ipfw rules. Since this is single-threaded
> inside the kernel, more cores over the effective 4 we have now will
> probably not help. As you can see from the snapshot below, the task
> queue for the input interface is full and that is where the packets
> are being dropped.
>                 /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
> root  em0 taskq  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
> root  idle: cpu2 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
> root  idle: cpu3 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
> root  idle: cpu1 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
> root  idle: cpu0 X
> root  em1 taskq  X
> root  ipipd      X
> <snip>
> Turning off the filtering/diversion ('ipfw disable firewall') ends the
> congestion almost immediately, with the em0 taskq sitting below 50%,
> and packets are no longer dropped. Turning it back on resumes the
> problem. Of course, when it's off, no IPIP is processed.
So this is interesting. FBSD had some issues with single-threaded packet
processing in older releases, which is why knowing which release it's
running would help. In the newer releases the netperf team has really
improved ipfw performance via parallelization. My first-hand FBSD
experience is a bit lacking; it's been a few years since I've touched a
FBSD box other than as an Olive.

TBQH, Linux has the better networking stack in terms of performance now.
Most of my experience has been with the newer Linux kernels, and they've
been able to handle 4x QSFP in and out (160G full duplex) in the internal
testing I've seen at work. The iptables filter scales nicely across SMP
too.

I'm not sure about DPDK on FBSD either, but Linux is able to make use of
it for packet filtering now.
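For reference on what the per-packet work being parallelized actually looks like: a net-44 gateway's filtering/diversion in ipfw is typically only a handful of rules along these lines. This is a hypothetical sketch using standard ipfw syntax; the actual UCSD ruleset hasn't been posted in this thread, and the divert port number and rule numbers are invented:

```
# Hypothetical ipfw sketch of a net-44 gateway -- NOT the actual UCSD rules.
# Divert inbound traffic destined for the AMPR block to a user-mode daemon
# listening on a divert socket; pass traffic out the telescope port;
# drop anything else aimed at the block.
ipfw add 100 divert 4444 ip from any to 44.0.0.0/8 in recv em0
ipfw add 200 allow ip from any to any out xmit em1
ipfw add 300 deny ip from any to 44.0.0.0/8
```

Every inbound packet traverses these rules in kernel context on the em0 taskq thread, which matches the single-threaded bottleneck described above.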
73's
--
Bryan Fields
727-409-1194 - Voice
727-214-2508 - Fax
http://bryanfields.net