The top 5 misconfigured gateways: These account for more dropped packets
than all the rest of them.
These are stats from midnight local to about 20 hours later.
The fifth entry is kind of interesting; it has an inner source address
of amprgw itself, which is clearly wrong. I don't quite see how that
could be happening.
These AREN'T causing any noticeable problems for amprgw or anyone else,
but a lot of things must not be working properly for those gateway operators.
I mention them here because the gateway operators may be scratching
their heads over why some connections don't seem to be working.
- Brian
Last update at Thu May 4 20:15:01 2017
gateway inner src #errs indx error type
---------------- ---------------- ----- ---- -------------------------------
77.138.34.39 192.168.1.180 26168 [19] dropped: non-44 inner source address
174.97.179.219 44.92.21.35 24627 [ 8] dropped: encap to encap
23.30.150.141 23.30.150.141 17839 [19] dropped: non-44 inner source address
59.167.198.158 44.225.125.2 8137 [ 8] dropped: encap to encap
85.234.252.133 169.228.66.251 6361 [19] dropped: non-44 inner source address
[etc]
> Oops; 23.30.150.141 is me. I think I've mitigated it via firewall rules now.
This is often caused by traffic originating from the gateway system itself, where
the system has decided to use the internet source address rather than the desired
44-net address.
On your system this is more likely to happen because your internet address is
numerically lower than 44.0.0.0. (When there is a complete tie on which address
to use, one of the last tie-breakers is sometimes to pick the lowest address.)
You can often fix this by setting a preferred source address on some route(s), in
this case probably a default route in the AMPRnet-specific route table pointing
to amprgw.
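For completeness, here is a minimal C sketch of the same source-address-selection
issue seen from the application side (placeholder address and a hypothetical helper
name; this only illustrates the mechanism and is not a substitute for the
route-level fix above): a program that bind()s its socket to its 44-net address
before connecting never leaves the choice to the kernel.

/* Illustration only: placeholder address, hypothetical helper name.
 * By bind()ing to the 44-net address before connect(), the application
 * fixes its source address instead of letting the kernel pick one.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_from_44net(const char *dst_ip, unsigned short dst_port)
{
    struct sockaddr_in src, dst;
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    memset(&src, 0, sizeof(src));
    src.sin_family = AF_INET;
    src.sin_port = 0;                                  /* any local port */
    inet_pton(AF_INET, "44.60.44.10", &src.sin_addr);  /* your own 44-net address */
    if (bind(s, (struct sockaddr *)&src, sizeof(src)) < 0) {
        close(s);
        return -1;
    }

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(dst_port);
    inet_pton(AF_INET, dst_ip, &dst.sin_addr);
    if (connect(s, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        close(s);
        return -1;
    }
    return s;
}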
Rob
On Wed, 26 Apr 2017, Brian Kantor wrote:
> A few times a minute, a host claiming to be ke6jjj-8 (44.4.39.8)
> is sending an encapped packet that is peculiar: it is either 40 or 44
> bytes long, but the length field in the IP header is set to a varying but
> very large packetsize (for example, 61,683 bytes) and the Don'tFragment
> bit is set so the amprgw IP kernel sending routine can't break it up
> into MTU-sized fragments - thus it gets a transmit failure and isn't
> sent anywhere.
I think I'm closer to finding out why this happens. I use a firewall rule
(using FreeBSD ipf(8)) to make sure that traffic leaving my network with a
source of 44.0/8 is redirected to the tunneling interface:
# Make certain AMPR/44 hosts reaching outwards will tunnel through the
# AMPR gateway
pass out quick on vlan0 to gif0:44.0.0.1 from 44/8 to any
I believe, however, that in picking up and rerouting the packet this way, I am
perhaps taking a packet that was destined for some interface acceleration (such
as offloaded checksumming and the like) and placing it on an interface queue
that has no such optimizations.
Still more research required, but I see the problem now.
-J
Hi there
Long ago, wasn't AMPRNet defined such that both IP PID 94 and PID 4 would work?
What is the situation today? Will it still work with PID 94?
If possible, can someone explain why we moved to PID 4? I wasn't active on
AMPRNet during that period.
Thanks for any info
Ronen - 4Z4ZQ
http://www.ronen.org
2) Why is there no option to add attachments (or graphics) to the list? Is it
because of disk space?
From time to time it is useful to have a screen capture or log of
network-related issues.
The ipip router at UCSD is not very busy (99% idle), but I am surprised
at the number of dropped packets. It's higher than I would have hoped,
suggesting there are a significant number of misconfigured routers on
the network.
- Brian
------------------------------------------------------------------
started at Tue May 2 20:49:25 2017
snapshot at Tue May 2 22:00:00 2017
uptime: 0+01:10:35 (4235 seconds)
idle: 4194.615196 secs (99%)
packets/bytes
---------/---------
181908/50252930 ipip encapped input
170575/45757610 forwarded out unencapsulated
1/56 dropped: no source gateway
1666210/125325363 unencapsulated input
1666196/125324029 encapped out
0/0 dropped: no destination gateway
5994/447291 dropped: encap to encap
0/0 ttl exceeded
0/0 icmp sent
0/0 dropped: packet too large
0/0 dropped: zero outer source address
0/0 dropped: broadcast outer destination address
0/0 dropped: packet too short
0/0 dropped: zero inner source address
481/66757 dropped: broadcast inner destination address
13/676 dropped: multicast inner destination address
4781/337160 dropped: non-44 inner source address
0/0 dropped: embedded encap protocol
35/1400 dropped: ip_len > MTU and DF
0/0 dropped: ip_len != packet size
0/0 dropped: output packet too short
0/0 dropped: output packet too long
4181/347878 dropped: output blocked by firewall
1/40 dropped: kernel send error
0/0 dropped: multicast inner source address
1171/1756463 outgoing encapped IPIP packet will be fragmented by kernel
2019413 route lookups took 1782499 microseconds (0.88 usec/lookup)
2 route table updates took 0.008852 seconds (4 msec each avg)
> - Are you saying that some AMPRNet OPs simply forward packets to their
> WAN interface...WITH INVALID SRC IPs from their 44.0.0.0/8 RANGE?!?!
Some do that, yes. The source IPs are not invalid, but they are not the address
their ISP has allocated to them.
Some poorly run ISPs allow that (no BCP38 source address filtering).
However, the reason we are on the IPIP network is to allow others who are there
to communicate with us without doing that: they can send their traffic in a
proper IPIP tunnel instead. That would not have been possible when we were only
on BGP.
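For anyone unfamiliar with what "a proper IPIP tunnel" means on the wire, here is
a rough C sketch of the encapsulation (function and field values are illustrative,
not taken from any particular gateway implementation): the inner 44-net packet is
carried unchanged behind an outer IPv4 header whose protocol field is 4.

/* Rough sketch of IPIP (IP protocol 4) encapsulation; illustrative only.
 * The outer checksum (ip_sum) is left zero here; a real sender must fill
 * it in or let a raw socket with IP_HDRINCL do so.
 */
#include <netinet/in.h>
#include <netinet/ip.h>
#include <stdint.h>
#include <string.h>

/* Wrap 'inner' (a complete IPv4 packet, inner_len bytes) in an outer header.
 * 'buf' must have room for sizeof(struct ip) + inner_len bytes.
 * Returns the total encapsulated length.
 */
size_t ipip_encap(unsigned char *buf, const unsigned char *inner, size_t inner_len,
                  struct in_addr outer_src, struct in_addr outer_dst)
{
    struct ip outer;

    memset(&outer, 0, sizeof(outer));
    outer.ip_v   = 4;
    outer.ip_hl  = sizeof(struct ip) / 4;   /* 20-byte header, no options */
    outer.ip_ttl = 64;
    outer.ip_p   = IPPROTO_IPIP;            /* protocol 4: IP-in-IP */
    outer.ip_len = htons((uint16_t)(sizeof(struct ip) + inner_len));
    outer.ip_src = outer_src;               /* this gateway's public address */
    outer.ip_dst = outer_dst;               /* remote gateway's public address */

    memcpy(buf, &outer, sizeof(outer));
    memcpy(buf + sizeof(outer), inner, inner_len);
    return sizeof(outer) + inner_len;
}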
Rob
> There are actually
> several instances where there is an encap route for a /16 and
> then there are some /28s or /29s nested inside it.
> BGP-only subnets aren't in the encap file.
Our BGP-routed /16 is in the encap file as well, for compatibility with
gateway stations that cannot send their net-44 traffic over the internet due
to source address filtering, or that are otherwise configured in such a way
that net-44 traffic is never sent to the internet directly.
So we are one of those /16 networks with several small networks and single
addresses inside it routed to different gateways. I don't know if the other
instances are a result of the same setup.
Good to hear that those are properly handled.
Rob
> The delay is entirely caused by clearing the array. I'm using 'bzero()'
> to do that, as that's the fastest way I know of to zero an existing
> array, but it still takes 25ms to zero a vector of 16 million shorts.
> (Stepping through it with a for loop takes roughly 5 times as long.)
Well, this is where you could gain some time using mmap, depending on various
factors. When you do a fresh mmap of /dev/zero (or MAP_ANON) every time you need
a new clear array, that executes much more quickly than clearing all that space,
because in fact it only sets up some page table entries that all point to the
same already-zeroed memory block.
Then, when you start populating it, you of course lose some of that advantage,
as each write into a page causes a page fault and a new memory block to be
allocated, zeroed, and inserted into the page table.
Only experimentation can show what the total time of the mmap + page COW operations
is when compared to the bzero. It will depend on the density of the routed AMPRnet
space.
So, you would change the array[2][2**24] into a *array[2] (2 pointers to 16M entries)
and mmap/munmap them every time you need them cleared.
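A minimal sketch of that change (the uint16_t entry type and the names are
assumptions based on the "16 million shorts" mentioned above, not amprgw's
actual code):

/* A fresh anonymous mapping is zero-filled by the kernel, so "clearing"
 * becomes unmap + remap instead of bzero().
 */
#include <stdint.h>
#include <sys/mman.h>

#define ADDRS   (1u << 24)                  /* one entry per address in 44/8 */
#define TBLSIZE (ADDRS * sizeof(uint16_t))  /* 32 MB per table */

static uint16_t *fresh_table(void)
{
    void *p = mmap(NULL, TBLSIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    return (p == MAP_FAILED) ? NULL : (uint16_t *)p;
}

static void discard_table(uint16_t *t)
{
    munmap(t, TBLSIZE);
}

/* Instead of bzero(addrs, TBLSIZE), do:
 *     discard_table(addrs);
 *     addrs = fresh_table();
 * Pages are then allocated and zeroed lazily, on first write, which is the
 * trade-off described above.
 */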
Rob
> Deletions zero the corresponding entries in the addrs table, so if the
> deletion count was non-zero, then when you're through deleting all the
> expired entries, you run through the subnets table and load the remaining
> routes into the addr table.
Ok that sounds good, it should save a lot of time because most of the table
is zero and never touched. I presume you do the reloading in order of
decreasing subnet size so "nested" subnets are properly handled.
(of course nested subnets primarily occur in the case of BGP routed networks
and in that case it does not matter too much because the traffic is not
supposed to go via amprgw anyway)
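In case it helps, a sketch of the reload order I mean (purely illustrative: the
struct, field names, and helper are hypothetical, not Brian's actual code);
larger subnets are written first so that smaller nested ones overwrite their
slice of the flat per-address table afterwards:

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct subnet {
    uint32_t net;       /* network base within 44/8, host byte order, aligned */
    int      prefix;    /* prefix length, 8..32 */
    uint16_t gw_index;  /* index into the gateway table */
};

static int by_increasing_prefix(const void *a, const void *b)
{
    return ((const struct subnet *)a)->prefix - ((const struct subnet *)b)->prefix;
}

/* addrs has one uint16_t entry per address in 44/8, indexed by the low 24 bits */
static void reload(uint16_t *addrs, struct subnet *subs, size_t nsubs)
{
    qsort(subs, nsubs, sizeof(subs[0]), by_increasing_prefix);

    for (size_t i = 0; i < nsubs; i++) {
        uint32_t count = 1u << (32 - subs[i].prefix);  /* addresses covered */
        uint32_t start = subs[i].net & 0x00ffffffu;    /* offset within 44/8 */
        for (uint32_t j = 0; j < count; j++)
            addrs[start + j] = subs[i].gw_index;
    }
}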
Rob
> Replacing the earlier sequential search through the routing
> table with a binary search has significantly sped up lookups,
> which can happen up to twice per packet forwarded (once for
> source address, once for destination address).
With "only" 16 million addresses in AMPRnet, why don't you use a 16-million entry array
holding the next hop for each IP in 44.x.x.x or zero for those addresses that have no tunnel?
Then you only need a single index operation to get the next hop for a packet.
That requires 64MB of memory to store, hardly a significant amount today.
And it can double up as the filter to forward traffic only to/from registered addresses.
Of course the update operation becomes more expensive, but it could probably be done in-place
without disrupting packet forwarding. Or you could build a second table and then switch
to it after the build is complete (requiring 128MB).
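A sketch of what the lookup then looks like (hypothetical names; entries here are
4-byte next-hop addresses, matching the 64MB figure, with zero meaning "no tunnel"):

#include <stdint.h>

#define AMPR_NET  0x2c000000u   /* 44.0.0.0 in host byte order */
#define AMPR_MASK 0xff000000u

extern uint32_t nexthop[1u << 24];   /* one entry per address in 44/8 */

/* Return the tunnel endpoint for an inner address (host byte order),
 * or 0 if it is outside 44/8 or has no registered gateway.
 */
static inline uint32_t lookup(uint32_t addr)
{
    if ((addr & AMPR_MASK) != AMPR_NET)
        return 0;
    return nexthop[addr & 0x00ffffffu];
}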
Rob