Hi Assi
Results for UDP:
------------------------------------------------------------
Client connecting to kk7kx.ampr.org, UDP port 7000
Sending 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 44.131.243.3 port 35310 connected with 44.8.0.160 port 7000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 8.00 GBytes 6.86 Gbits/sec
[ 3] Sent 893 datagrams
[ 3] WARNING: did not receive ack of last datagram after 10 tries.
For some reason that I can’t work out I get no response from the TCP test.
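[A quick way to check whether the TCP listener is reachable at all, before blaming iperf — a sketch only, assuming netcat and iperf2 are installed on the client and that the server mentioned below is still up:]

```shell
# Probe the TCP port without sending data: -z just scans, -v reports the result
nc -vz kk7kx.ampr.org 7000

# If that connects, rerun the TCP test with per-second interval reports
# so a mid-test stall (rather than a failed connect) becomes visible
iperf -c kk7kx.ampr.org -p 7000 -t 10 -i 1
```

[If nc cannot connect, the problem is path or filtering, not iperf itself.]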
Regards
Andy Brittain
G0HXT
g0hxt(a)greatbrittain.co.uk
On 23 Jul 2015, at 16:50, Assi Friedman <assi(a)kiloxray.com> wrote:
(Please trim inclusions from previous messages)
_______________________________________________
Folks:
I'll leave the iperf server running on kk7kx.ampr.org for the benefit of
the community. I combined both to operate on port 7000. To connect, use
the following basic syntax:
TCP: iperf -c kk7kx.ampr.org -p 7000
UDP: iperf -u -c kk7kx.ampr.org -p 7000
If you do use it, please be considerate.
Thank you,
Assi kk7kx/4x1kx
-----Original Message-----
From: 44net-bounces+assi=kiloxray.com(a)hamradio.ucsd.edu
[mailto:44net-bounces+assi=kiloxray.com@hamradio.ucsd.edu] On Behalf Of Will Gwin
Sent: Wednesday, July 22, 2015 10:43 PM
To: 44net(a)hamradio.ucsd.edu
Subject: Re: [44net] Packet loss through UCSD?
On 7/22/15 10:27 PM, David Ranch wrote:
> Linux's ipfwadm and ipchains were stateless "packet filters" but iptables
> has been fully stateful for many many years. We are now at the cusp of
> nftables on Linux which makes things even more programmable though I don't
> know about the performance.
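[For readers who haven't seen it, a stateful rule in nftables looks roughly like this — a minimal sketch with arbitrary table/chain names, assuming a reasonably current nft:]

```shell
# Create a table and a drop-by-default input chain (names are illustrative)
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'

# Stateful part: accept packets belonging to connections already tracked
nft add rule inet filter input ct state established,related accept

# A per-service exception, here new inbound SSH, as an example
nft add rule inet filter input tcp dport 22 ct state new accept
```

[The "more programmable" aspect is that sets, maps and counters can be built into rules and updated atomically, rather than regenerating a flat rule list as with iptables.]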
My mistake. It's been years since I was actively comparing the different
fire-walling methods in use by Linux. I went hardware for years and only
within the last few years went to software, when I moved to OpenBSD due to
its native IPsec support as well as pf.
> Filtering at a router is a sure fire way to bring throughput to a crawl.
> Proper campus routers are designed with ASICs optimized for routing in
> hardware, and fire-walling is done in software.

> Modern ASIC based firewalls can handle 100,000s of stateless filters on a
> per-interface basis.
Note I said 'router', not 'firewall'. Routers are designed from the silicon
up to forward packets, reduce broadcast domains and connect networks.
Firewalls are designed from the silicon up to restrict the flow of packets.
Yes, firewalls will forward packets from one network to another, but their
primary purpose is inspection and restriction.
> I have seen enterprise small office routers handle 450~500 Mbps of
> straight routing but max out around 40 Mbps when fire-walling because
> it's CPU bound. The results are similar when stepping up to large
> chassis routers.

> It depends on the class of devices you're buying. There are many
> inexpensive enterprise-grade firewalls (always stateful) that can run
> many 100s of Megabits, and a few thousand dollars will get you into the
> 10G+ range.
Again, please note that I said 'router', not 'firewall'. The type of router
I was referring to in that specific example was a Cisco enterprise branch
router. Campus and data center grade routers do minimal traffic filtering,
if any, due to the CPU hit they incur, hence why large hardware firewalls
exist. Proper tool for the job.
> Yeah.. but we don't need that throughput or scale.
The current configuration was choking, hence the discussion. Brian has
worked with CAIDA and resolved the congestion for now.
> Just statelessly filtering at the border edge with a modern router would
> solve much of these issues.
Please note that a router and a firewall are not the same thing. Either can
do the other's job, but not as effectively as the device purpose-built for
it.
Also, Brian already stated:

> The port 'em0' is connected to a 1G switch which is in turn connected at
> 10GbE to the building infrastructure switch/router.

and

> to do so requires administrative access to the campus border router that
> we don't have.
Fire-walling is done at the AMPR edge, but traffic was overwhelming the
current configuration. Moving filtering to the provider router is
technologically improper and operationally restricted, hence my suggestion
to split filtering and tunneling onto separate machines to increase
capacity.
The suggestion Tom made of running an IGP to selectively advertise only
subnets which have valid destinations via the tunnels would also restrict
the amount of traffic that will ultimately be blocked from reaching the
firewall. This type of routing combined with a large null route is a common
practice in large enterprise networks. Reducing the amount of traffic that
is going to get blocked from reaching the AMPR edge will help system load
but won't help with the timeouts due to slow [or down] tunnel peers.
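[The "selective advertisement plus a covering null route" pattern described above can be sketched in Linux terms — the prefixes and device name here are illustrative, not AMPR's actual configuration:]

```shell
# Blackhole the covering aggregate: traffic to any 44.x destination with no
# more-specific route is silently discarded at this router
ip route add blackhole 44.0.0.0/8

# More-specific routes, e.g. learned via an IGP for subnets with working
# tunnel peers, win by longest-prefix match and override the blackhole
ip route add 44.131.243.0/24 dev tunl0
```

[Traffic to dead or unadvertised subnets is thus dropped one hop earlier and never loads the filtering box.]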
As this thread has demonstrated, there are a few different ways to increase
capacity of the AMPR gateway. While it may not be necessary at this time,
it's still useful information to have for whoever is going to be responding
next time there is an issue.
--
Will
_________________________________________
44Net mailing list
44Net(a)hamradio.ucsd.edu
http://hamradio.ucsd.edu/mailman/listinfo/44net