Hi there
Long ago, wasn't AMPRNet defined so that both IP protocol 94 and protocol 4 worked?
What is the situation today? Will it still work with protocol 94?
If possible, can someone explain why we moved to protocol 4? I wasn't active on AMPRNet at that time.
Thanks for any info
Ronen - 4Z4ZQ
http://www.ronen.org
2) Why is there no option to add attachments (or graphics) to the list? Is it because of disk space?
From time to time it is useful to have a screen capture or a log of network-related issues.
The ipip router at UCSD is not very busy (99% idle), but I am surprised
at the number of dropped packets. It's higher than I would have hoped,
suggesting there are a significant number of misconfigured routers on
the network.
- Brian
------------------------------------------------------------------
started at Tue May 2 20:49:25 2017
snapshot at Tue May 2 22:00:00 2017
uptime: 0+01:10:35 (4235 seconds)
idle: 4194.615196 secs (99%)
packets/bytes
---------/---------
181908/50252930 ipip encapped input
170575/45757610 forwarded out unencapsulated
1/56 dropped: no source gateway
1666210/125325363 unencapsulated input
1666196/125324029 encapped out
0/0 dropped: no destination gateway
5994/447291 dropped: encap to encap
0/0 ttl exceeded
0/0 icmp sent
0/0 dropped: packet too large
0/0 dropped: zero outer source address
0/0 dropped: broadcast outer destination address
0/0 dropped: packet too short
0/0 dropped: zero inner source address
481/66757 dropped: broadcast inner destination address
13/676 dropped: multicast inner destination address
4781/337160 dropped: non-44 inner source address
0/0 dropped: embedded encap protocol
35/1400 dropped: ip_len > MTU and DF
0/0 dropped: ip_len != packet size
0/0 dropped: output packet too short
0/0 dropped: output packet too long
4181/347878 dropped: output blocked by firewall
1/40 dropped: kernel send error
0/0 dropped: multicast inner source address
1171/1756463 outgoing encapped IPIP packet will be fragmented by kernel
2019413 route lookups took 1782499 microseconds (0.88 usec/lookup)
2 route table updates took 0.008852 seconds (4 msec each avg)
> - Are you saying that some AMPRNet OPs simply forward packets to their
> WAN interface...WITH INVALID SRC IPs from their 44.0.0.0/8 RANGE?!?!
Some do that, yes. The SRC IPs are not invalid, but they are not the addresses their ISP allocated to them.
Some poor ISPs allow that (they lack BCP38 source address filtering).
However, the reason we are on the IPIP network is to allow the others on it to communicate
with us without doing that: they can send their traffic in a proper IPIP tunnel. That would not
be possible if we were only on BGP.
Rob
> There are actually
> several instances where there is an encap route for a /16 and
> then there are some /28s or /29s nested inside it.
> BGP-only subnets aren't in the encap file.
Our BGP-routed /16 is in the encap file as well, for compatibility with
gateway stations that cannot send their net-44 traffic over the internet due
to source address filtering, or that are otherwise configured so that
net-44 traffic is never sent to the internet directly.
So we are one of those /16 networks with several smaller networks and single
addresses inside it, routed to different gateways. I don't know whether the other
instances are the result of the same setup.
Good to hear that those are properly handled.
Rob
> The delay is entirely caused by clearing the array. I'm using 'bzero()'
> to do that, as that's the fastest way I know of to zero an existing
> array, but it still takes 25ms to zero a vector of 16 million shorts.
> (Stepping through it with a for loop takes roughly 5 times as long.)
Well, this is where you could gain some time using mmap, depending on various
factors. When you do a fresh mmap of /dev/zero (or an anonymous mapping) every time you need a
newly cleared array, that will execute much more quickly than clearing all that space,
because it only sets up some page table entries that all point to
the same already-zeroed memory block.
Then, when you start populating it, you of course lose some of that advantage, as
each first write into a page causes a page fault and a new memory block to be allocated,
zeroed, and inserted into the page table.
Only experimentation can show what the total time of the mmap + page COW operations
is when compared to the bzero. It will depend on the density of the routed AMPRnet
space.
So you would change the array[2][2**24] into *array[2] (two pointers to 16M entries each)
and mmap/munmap them every time you need them cleared.
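A minimal sketch of that approach, assuming MAP_ANONYMOUS is available and that the
table holds 16M 16-bit next-hop indices (the names are illustrative, not the actual
amprgw code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define TABLE_ENTRIES (1u << 24)                 /* one slot per host address in 44/8 */
#define TABLE_BYTES   (TABLE_ENTRIES * sizeof(uint16_t))

/* Get a zero-filled table without actually touching the memory: the kernel
 * backs every page with its shared zero page until the first write (COW). */
static uint16_t *table_alloc(void)
{
    void *p = mmap(NULL, TABLE_BYTES, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    return p;
}

/* "Clearing" is just unmapping; the next table_alloc() comes back zeroed. */
static void table_free(uint16_t *t)
{
    munmap(t, TABLE_BYTES);
}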
Rob
> Deletions zero the corresponding entries in the addrs table, so if the
> deletion count was non-zero, then when you're through deleting all the
> expired entries, you run through the subnets table and load the remaining
> routes into the addr table.
OK, that sounds good; it should save a lot of time because most of the table
is zero and never touched. I presume you do the reloading in order of
decreasing subnet size, so that "nested" subnets are properly handled.
(of course nested subnets primarily occur in the case of BGP routed networks
and in that case it does not matter too much because the traffic is not
supposed to go via amprgw anyway)
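For illustration, that reload order could look roughly like this (field names and the
16-bit gateway index are assumptions of mine, not the actual amprgw data structures):
sort the subnet list so the largest subnets come first, then write them into the flat
per-address table so that nested, more specific subnets overwrite the enclosing one.

#include <stdint.h>
#include <stdlib.h>

struct subnet {
    uint32_t base;      /* network address, e.g. 44.137.0.0 as a 32-bit int */
    int      prefixlen; /* e.g. 16, 28, 29 */
    uint16_t gwindex;   /* index into the gateway table, 0 = no tunnel */
};

/* Shorter prefix = larger subnet; sort those first. */
static int by_decreasing_size(const void *a, const void *b)
{
    return ((const struct subnet *)a)->prefixlen -
           ((const struct subnet *)b)->prefixlen;
}

/* Repopulate the 16M-entry addrs table from the subnet list.  Larger
 * subnets are written first, so nested more specific subnets simply
 * overwrite the entries of the subnet that contains them. */
static void reload_addrs(uint16_t *addrs, struct subnet *subnets, size_t n)
{
    qsort(subnets, n, sizeof(*subnets), by_decreasing_size);
    for (size_t i = 0; i < n; i++) {
        uint32_t first = subnets[i].base & 0x00ffffff;   /* index within 44/8 */
        uint32_t count = 1u << (32 - subnets[i].prefixlen);
        for (uint32_t j = 0; j < count; j++)
            addrs[first + j] = subnets[i].gwindex;
    }
}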
Rob
> Replacing the earlier sequential search through the routing
> table with a binary search has significantly sped up lookups,
> which can happen up to twice per packet forwarded (once for
> source address, once for destination address).
With "only" 16 million addresses in AMPRnet, why don't you use a 16-million entry array
holding the next hop for each IP in 44.x.x.x or zero for those addresses that have no tunnel?
Then you only need a single index operation to get the next hop for a packet.
That requires 64MB of memory to store, hardly a significant amount today.
And it can double-up as the filter to forward traffic only to/from registered addresses.
Of course the update operation becomes more expensive, but it could probably be done in-place
without disrupting packet forwarding. Or you could build a second table and then switch
to it after the build is complete (requiring 128MB).
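As a sketch of what such a lookup could look like (a minimal illustration under the same
assumptions, where addrs[] is the 16M-entry table and gws[] a list of tunnel endpoint
addresses; a zero entry doubles as the "not registered" filter):

#include <stdint.h>

/* One array index per lookup: return the tunnel endpoint for a 44.x.y.z
 * address, or 0 if the address is outside net-44 or has no registered
 * tunnel (i.e. the packet should not be forwarded). */
static inline uint32_t nexthop_for(const uint16_t *addrs,
                                   const uint32_t *gws, uint32_t dst)
{
    if ((dst >> 24) != 44)              /* not an AMPRNet address */
        return 0;
    uint16_t idx = addrs[dst & 0x00ffffff];
    return idx ? gws[idx] : 0;          /* 0 = drop / not registered */
}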
Rob
> Using a simple global array to be the list of addresses and pointers,
> full load time went from about 3ms to 15ms. But it works just fine and
> the lookups are so fast the microsecond timer registers zero.
Great! I would think that the gain in routing efficiency more than
offsets the extra setup processing. You could consider making it multithreaded,
but I don't think anyone would notice an occasional 15ms delay and/or some drops;
on our radio network such behaviour is quite normal.
Another possibility would be to first check for the "common case" where all subnets
and gateways remain the same and only the endpoint address of one or two
gateways changes, and in that case skip the entire table setup and only patch the
next-hop addresses of those gateways.
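A rough sketch of that fast path, assuming the per-address table stores indices into a
gateway (next-hop) list so that only that list needs patching (names are illustrative):

#include <stddef.h>
#include <stdint.h>

/* If only the endpoint addresses of a few gateways changed, patch the
 * gateway list in place and leave the 16M-entry index table untouched.
 * Returns the number of entries rewritten; a real implementation would
 * have to consider atomicity if the router reads the list concurrently. */
static int patch_gateways(uint32_t *gws, const uint32_t *newgws, size_t ngw)
{
    int changed = 0;
    for (size_t i = 0; i < ngw; i++) {
        if (gws[i] != newgws[i]) {
            gws[i] = newgws[i];
            changed++;
        }
    }
    return changed;
}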
Rob
Hello all, and thank you for your assistance. I have 44.10.10.0/24
allocated and announced via BGP. The subnet terminates at an Ubuntu
server in a data center. I want to allocate addresses from this subnet
via tunnels to other locations. For example, I would like to assign an
address or a block of addresses to my home location (Cisco 1900 router)
from this subnet. Is this possible, or do I need to look at a different
option? Thank you!
--
73 de Phil Pacier, AD6NH
APRS Tier2 Network Coordinator
http://www.aprs2.net
I've made some changes to the amprgw routing mechanism.
Replacing the earlier sequential search through the routing
table with a binary search has significantly sped up lookups,
which can happen up to twice per packet forwarded (once for
source address, once for destination address).
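In outline, a binary search over a table sorted by subnet base address looks something
like the following sketch (simplified, with illustrative field names, not the actual
amprgw source; more specific nested entries are assumed to sort after their enclosing
subnet, so the first match found walking backwards is the most specific one):

#include <stddef.h>
#include <stdint.h>

struct route { uint32_t base; uint32_t mask; uint32_t gw; };

/* Find the most specific route containing addr.  'tab' is sorted by
 * base address, with nested (more specific) subnets sorting after the
 * subnet that contains them. */
static const struct route *route_lookup(const struct route *tab,
                                        size_t n, uint32_t addr)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {                   /* count entries with base <= addr */
        size_t mid = lo + (hi - lo) / 2;
        if (tab[mid].base <= addr)
            lo = mid + 1;
        else
            hi = mid;
    }
    while (lo-- > 0)                    /* walk back to the first match */
        if ((addr & tab[lo].mask) == tab[lo].base)
            return &tab[lo];
    return NULL;                        /* no route: do not forward */
}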
Three times an hour, a background process fetches the encap
routing table from the portal, and if it has changed, signals
the router process to update its routing table. The routing
table update seems to take about 3 msec during which packets
can't be looked up and so are not forwarded.
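Schematically, the change check amounts to something like this (the file names, pid
file and the choice of SIGHUP here are purely illustrative): compare the freshly
fetched encap file against the previous copy and signal the router only if it differs.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Byte-by-byte compare of the previous and freshly fetched encap files. */
static int file_changed(const char *oldpath, const char *newpath)
{
    FILE *a = fopen(oldpath, "rb"), *b = fopen(newpath, "rb");
    if (!a || !b) { if (a) fclose(a); if (b) fclose(b); return 1; }
    int ca, cb, diff = 0;
    do {
        ca = fgetc(a);
        cb = fgetc(b);
        if (ca != cb) { diff = 1; break; }
    } while (ca != EOF);
    fclose(a);
    fclose(b);
    return diff;
}

int main(void)
{
    if (file_changed("encap.old", "encap.new")) {
        rename("encap.new", "encap.old");      /* keep for the next compare */
        FILE *pf = fopen("router.pid", "r");   /* pid of the router process */
        if (pf) {
            int pid;
            if (fscanf(pf, "%d", &pid) == 1 && pid > 0)
                kill((pid_t)pid, SIGHUP);      /* ask it to reload its table */
            fclose(pf);
        }
    }
    return 0;
}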
This fetch also updates the data source for the rip sender,
so you'll receive updates more quickly now.
- Brian