> Using a simple global array to be the list of addresses and pointers,
> full load time went from about 3ms to 15ms. But it works just fine and
> the lookups are so fast the microsecond timer registers zero.
Great! I would think that the gain in efficiency of the routing more than
offsets the extra processing for setup. You could consider making it multithreaded
but I don't think anyone would notice an occasional 15ms delay and/or some drops.
On our radio network such behaviour is quite normal.
Another possibility would be to first scan for the "common case" of all subnets
and gateways remaining the same but only the endpoint address of one or two
gateways changing, and in that case omit the entire table setup and only patch the
nexthop addresses of those gateways.
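The common-case check described above could be sketched like this (a rough illustration; the `struct route` layout and function name are assumptions, not amprgw's actual code):

```c
/* Hypothetical route entry: subnet, prefix length, gateway endpoint. */
struct route {
    unsigned int net;      /* network address, host byte order */
    unsigned char bits;    /* prefix length */
    unsigned int nexthop;  /* gateway endpoint address */
};

/* Return 1 and patch the live table in place when the old and new tables
 * differ only in nexthop addresses; return 0 when the subnets themselves
 * changed and a full rebuild of the lookup table is needed. */
int patch_if_common_case(struct route *live, const struct route *fresh,
                         int n_live, int n_fresh)
{
    if (n_live != n_fresh)
        return 0;
    for (int i = 0; i < n_live; i++)
        if (live[i].net != fresh[i].net || live[i].bits != fresh[i].bits)
            return 0;                        /* a subnet changed: rebuild */
    for (int i = 0; i < n_live; i++)
        live[i].nexthop = fresh[i].nexthop;  /* only endpoints changed */
    return 1;
}
```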
Rob
Hello all, and thank you for your assistance. I have 44.10.10.0/24
allocated and announced via BGP. The subnet terminates to an Ubuntu
server in a data center. I want to allocate addresses from this subnet
via tunnels to other locations. For example, I would like to assign an
address or a block of addresses to my home location (Cisco 1900 router)
from this subnet. Is this possible, or do I need to look at a different
option? Thank you!
--
73 de Phil Pacier, AD6NH
APRS Tier2 Network Coordinator
http://www.aprs2.net
I've made some changes to the amprgw routing mechanism.
Replacing the earlier sequential search through the routing
table with a binary search has significantly sped up lookups,
which can happen up to twice per packet forwarded (once for
source address, once for destination address).
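A binary search over a sorted route table might look roughly like this (an illustrative sketch, not amprgw's actual code; it assumes entries are sorted by network address and do not overlap, which is the case a plain binary search handles cleanly):

```c
/* Hypothetical route entry; field names are illustrative. */
struct route {
    unsigned int net;   /* network address, host byte order */
    unsigned int mask;  /* netmask */
    unsigned int gw;    /* gateway endpoint */
};

/* Binary search for the entry whose network contains addr.
 * Returns the gateway, or 0 when no route matches. */
unsigned int lookup(const struct route *tab, int n, unsigned int addr)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (tab[mid].net <= addr && (addr & tab[mid].mask) == tab[mid].net)
            return tab[mid].gw;
        if (addr < tab[mid].net)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return 0;
}
```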
Three times an hour, a background process fetches the encap
routing table from the portal, and if it has changed, signals
the router process to update its routing table. The routing
table update seems to take about 3 msec during which packets
can't be looked up and so are not forwarded.
This fetch also updates the data source for the RIP sender,
so you'll receive updates more quickly now.
- Brian
> I think the most efficient technique is to make the per-address lookup
> table an mmap'd array of pointers to entries in the existing route table.
> That makes it effectively addrtable[2][2**24], right?
Yes. Of course a single pointer would be 8 bytes when compiled for 64 bits,
but it need not be. When your existing route table is an array
rather than a collection of malloc'ed objects linked by pointers, the "pointer"
from the address lookup table into the route table could be the smaller index
into the route table (that would easily fit in an unsigned short integer, allowing
for 65535 gateways).
Or, in 32 bit mode a simple pointer can be used (4 bytes per entry).
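As a size check, the index variant could be declared like this (a sketch; the name `addrtable` is an assumption, and the arithmetic follows the sizes discussed above):

```c
/* One entry per address in the 2**24 host space.  An 8-byte pointer per
 * entry would cost 128 MB per table on a 64-bit build; a 2-byte index
 * into the route-table array costs 32 MB and still allows for 65535
 * gateways, with index 0 meaning "no route". */
#define ADDRS (1u << 24)

unsigned short addrtable[ADDRS];   /* index into the route table */
```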
Rob
> You're absolutely right. I dropped some decimal places. Thanks! However,
> gateway addresses don't fit in a byte, they're stored in an unsigned long,
> which is 4 bytes each. 4 * 16 million, right? And I think you need
> counters and flags. So it's an array of structs of some modest size each.
My estimate of 128MB was based on having 4 bytes per entry and 2 tables for
convenient updating (update one table then toggle a single indicator or pointer
to make the updated table active).
Of course when you require more bytes per entry the table will expand, but
8-16 bytes per entry should still fit comfortably in a modern machine.
> But I see a complication: You will have to have n entries per route,
> which will make loading the table a little less straightforward.
Well, searching a data structure for the correct route isn't straightforward
either... remember, when there is a tunnel to e.g. 44.137.0.0/16 and another
one to 44.137.1.80/28 (an example from the current table), then any traffic to
44.137.1.81 should go to the tunnel for 44.137.1.80/28, not something that
you can easily do with a binary search. However, a lookup in the table
populated with entries for 44.137.0.0/16 and then overwritten with entries
for 44.137.1.80/28 is easy. It will find the correct gateway with a single
array index operation. So you only need to build the table from the subnets
sorted by increasing number of subnet bits (least specific first) to get it right.
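The build order could be sketched as follows (illustrative names throughout; it assumes all routes fall within 44.0.0.0/8 so the low 24 bits index the table, and that gateway numbers start at 1 with 0 meaning "no route"):

```c
#include <stdlib.h>
#include <string.h>

struct route { unsigned int net; int bits; unsigned short gw; };

static int by_bits(const void *a, const void *b)
{
    return ((const struct route *)a)->bits - ((const struct route *)b)->bits;
}

/* Fill the per-address table: write the least specific subnets first so
 * that more specific ones (more subnet bits) overwrite them.  After this,
 * routing is a single array index per address. */
void build_table(unsigned short *tab, struct route *routes, int n)
{
    memset(tab, 0, (1u << 24) * sizeof *tab);
    qsort(routes, n, sizeof *routes, by_bits);
    for (int i = 0; i < n; i++) {
        unsigned int first = routes[i].net & 0xFFFFFF;   /* low 24 bits */
        unsigned int count = 1u << (32 - routes[i].bits);
        for (unsigned int a = 0; a < count; a++)
            tab[first + a] = routes[i].gw;
    }
}
```

With the 44.137.0.0/16 and 44.137.1.80/28 example above, the /16 is written first and the /28 then overwrites its 16 addresses, so 44.137.1.81 resolves to the more specific tunnel.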
Indeed it is best to make the program multithreaded, or you could put the
lookup table in shared memory and have a process doing the routing and another
process doing the updating. I would still opt for having a "current" and
a "next" table where the routing code always has a working table and the
switch to the next version is instantaneous.
Of course you can also do a quick check before starting the laborious update
process, to see if the new encap table is different from the previous one.
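The current/next switch could be sketched with C11 atomics (a minimal illustration, not amprgw's actual code; names are assumptions):

```c
#include <stdatomic.h>

/* Two 16M-entry tables; the router always reads through "active". */
unsigned short table_a[1u << 24], table_b[1u << 24];
static _Atomic(unsigned short *) active = table_a;

/* Updater: returns the inactive table to rebuild at leisure. */
unsigned short *begin_update(void)
{
    return (atomic_load(&active) == table_a) ? table_b : table_a;
}

/* The switch to the new table is a single atomic store, so the
 * routing code never sees a half-built table. */
void commit_update(unsigned short *next)
{
    atomic_store(&active, next);
}

unsigned short lookup_active(unsigned int addr)
{
    return atomic_load(&active)[addr & 0xFFFFFF];
}
```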
Rob
> My estimate of 128MB was based on having 4 bytes per entry and 2 tables for
> convenient updating (update one table then toggle a single indicator or pointer
> to make the updated table active).
> Of course when you require more bytes per entry the table will expand, but
> 8-16 bytes per entry should still fit comfortably in a modern machine.
Another possibility would be to have a 16-million entry array of short integers
holding a "gateway number" (starting at 1) for each IP address, and a separate
table of gateways holding all the other info you want to keep per gateway.
(e.g. counters)
Then the processing of a packet would first index the destination IP in the
first array, retrieving the gateway number (0 means drop the packet), then
use that number as an index in the gateway table to access the per-gateway
data including the endpoint address and the counters. This would require only
slightly more than 32MB of memory for the tables, which should be no problem without
any tricks.
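The two-step lookup could be sketched like this (field and table names are illustrative assumptions):

```c
#include <stdint.h>

/* Per-gateway record: endpoint address plus whatever per-gateway
 * data is kept (counters, flags, ...). */
struct gateway {
    uint32_t endpoint;   /* tunnel endpoint address */
    uint64_t packets;    /* per-gateway counter */
};

#define MAXGW 65536
uint16_t gwnum[1u << 24];     /* 32 MB: gateway number per address, 0 = drop */
struct gateway gwtab[MAXGW];  /* small side table with the per-gateway data */

/* Returns the tunnel endpoint for dst, or 0 to drop the packet. */
uint32_t route(uint32_t dst)
{
    uint16_t g = gwnum[dst & 0xFFFFFF];  /* first index: gateway number */
    if (g == 0)
        return 0;                        /* no route: drop */
    gwtab[g].packets++;                  /* second index: per-gateway data */
    return gwtab[g].endpoint;
}
```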
Rob
> 128MB is only 3% of 4GB. What would be the problem reserving that for a lookup table?
BTW, a nice trick for such tables is to do a mmap of /dev/zero with the proper size as a
starting point, then write only the entries that are nonzero.
That way the vast spaces that are only zeroes will not occupy physical memory, and when
they are read by the forwarding code they read back as zeroes.
Linux also has a MAP_ANONYMOUS flag to obtain the same result, I don't know if BSD has that.
Rob
> Interesting concept. Someday if we have enough memory I may try it, but
> right now amprgw (an old machine) has only 4 GB of memory. It'd die swapping.
128MB is only 3% of 4GB. What would be the problem reserving that for a lookup table?
Rob
On Sun, Apr 30, 2017 at 5:31 AM, Marc <monsieurmarc(a)btinternet.com> wrote:
> Maybe we should start sharing block lists.
Marc,
HamWAN has a public blacklist system. Feel free to subscribe to it. It
does not publish a full list, but rather sends addresses one by one,
instantly*, as they are blocked.
*It takes about 1.5 seconds for a report of a hack attempt to propagate
to our logging system, pass analysis, and be published to our edge
routers' firewall.
Here is the code behind the system (including a Mikrotik script you
can use to subscribe):
https://github.com/kd7lxl/blacklist-service
Anything blocked by the HamWAN network will be published here:
http://monitoring.hamwan.net/blacklist
If it seems like it's not responding, that's normal. It is an HTTP
longpoll service, so it will hang until there is data to be published,
then that data is sent immediately. This mechanism allows pushing data
(in this case, a blacklisted address) to a Mikrotik router without
having to store admin credentials of that router on the blacklist
system. Since it uses a standard protocol, it can be adapted for other
platforms.
Tom KD7LXL