> * Some of you query the NTP server with your public IP thru AMPRGW instead of directly to me thru the mesh (just a note to make your SRC IP an AMPR address, not your public IP - I may disable your access thru AMPRGW in the future, as it's announced as "AMPR-only")
Unfortunately it is quite common for gateway stations to send tunneled traffic towards 44net addresses (via IPIP) with a public IP as the source.
I normally block all such traffic, except when the public IP is the gateway's registered public address (I got tired of trying to reach sysops where this error was present).
People *should really use* proper source address selection and policy routing, and NOT send tunneled traffic to any gateway station unless both the inner source and destination are 44net addresses, except by prior agreement.
(e.g. AMPRGW can send traffic from a 44net address to public destination addresses, and so can some gateways)
To make life easier, DO NOT TRY to set up a gateway on the same system where your applications are also running, unless you have good knowledge of networking configuration and know about such concepts as policy routing (ip rule, multiple route tables) and setting a preferred source address in a route; see the sketch below.
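As an illustration only (addresses and the table number are hypothetical, not a recipe for any specific gateway), policy routing and a preferred source address on a Linux gateway could look like:

    # route traffic sourced from the local 44net subnet via its own table
    ip rule add from 44.137.0.0/24 table 44
    ip route add default via 44.137.0.254 table 44
    # give routes towards 44net a preferred 44net source address, so that
    # locally generated traffic does not pick the public IP as its source
    ip route add 44.0.0.0/9 via 44.137.0.254 src 44.137.0.1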
When you use a separate router and application machine, such errors are much less likely to occur, and configuring the firewall is also much easier.
Get a separate Pi or MikroTik or whatever to run your gateway, and then have a PC or another system to run your BBS or conference or whatever you want to run on AMPRnet!
Rob
> Those with a dynamic address CAN participate, as their public gateway
> can now be an FQDN for their dynamic service. I have some within the
> New York State subnet (44.68/16).
The issue is that IPIP tunnels have to be validated against their external address for at least SOME security, and this means that when the address changes there is nothing else we can do than drop their packets until the change comes through the portal and RIP system.
With a system where those dynamic-address stations connect to only one or two VPN routers in a secure manner (e.g. L2TP/IPsec), we would not have that problem.
Also, those address changes would not be important to other systems on the network.
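Purely as an illustration of such a building block (not a worked-out design): an unmanaged L2TPv3 tunnel can be created with iproute2, with IPsec protection of UDP port 1701 configured separately. All addresses, IDs and the MTU below are hypothetical:

    ip l2tp add tunnel tunnel_id 10 peer_tunnel_id 10 encap udp \
        local 192.0.2.1 remote 198.51.100.1 udp_sport 1701 udp_dport 1701
    ip l2tp add session tunnel_id 10 session_id 1 peer_session_id 1 name l2tp-ampr
    ip link set l2tp-ampr mtu 1400 up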
Out of the 561 registered gateways, we only ever receive traffic from 73 of them.
(the others could be either inactive or not sending traffic to the Netherlands)
Their address changes would not be important to us.
Rob
For a long time now, I have been allowing IPIP only from registered gateways and disallowing nested IPIP. Indeed, I have seen in the past that IPIP packets were sent with the intention of being forwarded through "allow trusted subnets" rules and then maybe back out to internet hosts that were targeted for DDoS or similar.
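As a minimal sketch of that kind of filtering on a Linux gateway (not necessarily my exact rules; the ipset name is hypothetical and would have to be kept in sync with the registered gateway list):

    # accept IPIP (IP protocol 4) only from registered gateway addresses
    ipset create ampr-gw hash:ip
    iptables -A INPUT -p 4 -m set --match-set ampr-gw src -j ACCEPT
    iptables -A INPUT -p 4 -j DROP
    # disallow nested IPIP: drop decapsulated packets arriving on the
    # tunnel interface that are themselves IPIP again
    iptables -A FORWARD -i tunl0 -p 4 -j DROP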
When looking in the logs of those rules, I usually see dropped packets from hosts that are apparently on dynamic addresses and have changed address, but this change has not yet reached me through the ampr-rip announcements.
However, there are indeed also instances of apparently unrelated intrusion attempts.
It remains my position that we should move from this IPIP mesh to a more modern VPN system, in which stations with dynamic addresses participate through a local VPN server. Those servers would use standard protocols to form a dedicated AMPRnet tunnel network with automatic routing, one that can be used by standard equipment and can be made more resilient against unwanted use (e.g. by using GRE/IPsec and L2TP/IPsec tunnels instead of the traditional IPIP).
However, I no longer want to beat a dead horse. We had that discussion a while ago, but unfortunately it was then redirected onto a separate mailing list.
Rob
Greetings to all the other colleagues and friends, I am Gabriel Medinas, YV5KXE, from Caracas, Venezuela.
For 20 years I have been the coordinator of the AMPRnet network for Venezuela, with the assignment 44.152.0.0/16, trying to assign addresses and maintain the network of radio amateurs on network 44.
For some unknown reason I no longer have access to this coordination, and I am using this route to try to reach people who can help me restart the AMPRnet network through my gateway in Caracas, yv5kxe.org. Since the passing of Brian Kantor, communication has been more complicated for me.
I have registered on the AMPRnet Portal and have registered my gateway yv5kxe.org (JNOS Linux 2.0j, telnet port 2332, Convers, DXCluster, NET/ROM link to the USA (laxnet)), in Caracas.
I appreciate any help in solving this problem, in the sense of being able to restore service over the local Packet Radio, CONV, and NET/ROM networks.
Thank you.
Gabriel
YV5KXE
4M5G
gmedinas.com
With the shutdown of the WA7V system after a long and dedicated stretch,
Hub_NA of the WWconvers needed a new home. With the help of WA7V and
testing with KD6OAT, Hub_NA is still functioning but with a new IP address
of 44.68.41.2 (gw.n2nov.ampr.org) on port 3600. The software is also capable of accepting IRC clients, support that we might get included in a future version of JNOS. All BBS sysops in North America are welcome
to connect their chat clients and servers to Hub_NA and join the rest of
the WWconvers network. There is plenty of room for specialized convers
channels. For the 44Net allocations in NY State (44.68/16) I suggest
a common channel of #4468 to chat among ourselves.
--
Charles J. Hargrove - N2NOV
NYC-ARECS/RACES Citywide Radio Officer/Skywarn Coord.
NYC-ARECS/RACES Nets 441.100/136.5 PL
ARnewsline Broadcast Mon. @ 8:00PM
NYC-ARECS Weekly Net Mon. @ 8:30PM
http://www.nyc-arecs.org
NY-NBEMS Net Saturdays @ 10AM & USeast-NBEMS Net Wednesdays @ 7PM
on 7.036 MHz USB (alt 3.536)/1500 Hz waterfall spot; MFSK-16 or 32
"Information is the oxygen of the modern age. It seeps through the walls
topped
by barbed wire, it wafts across the electrified borders." - Ronald Reagan
"The more corrupt the state, the more it legislates." - Tacitus
"Molann an obair an fear" - Irish Saying
(The work praises the man.)
"No matter how big and powerful government gets, and the many services it
provides, it can never take the place of volunteers." - Ronald Reagan
In our local network we have several different kinds of tunnels, with different header overhead.
As the usual MTU on an internet connection is 1500 (the ethernet MTU), the typical MTU for an IPIP tunnel is 1480, for GRE it is 1476, for GRE6 it is 1454, etc.
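The arithmetic behind these numbers is simply the underlying MTU minus the encapsulation headers, e.g. (assuming no optional GRE key/checksum fields):

    IPIP: 1500 - 20 (outer IPv4)            = 1480
    GRE : 1500 - 20 (outer IPv4) - 4 (GRE)  = 1476
    GRE6: the same, but with a 40-byte outer IPv6 header instead of 20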
However, not everyone has a 1500-byte internet MTU. Some people have PPPoE connections to the internet with an MTU of typically 1492, sometimes 1480. So the effective MTU of the mentioned (and other) tunnel types becomes 8 or 20 bytes less. Some people get a fixed-address subnet from their ISP, delivered as some tunnel with an MTU of 1456 (quite common here).
This results in a wide variety of MTU values in our network.
Frequently, issues arise for new connections where the chosen MTU for some tunnel turns out to be too large, and full-size packets are dropped. And in an environment where tunneled packets encounter a point where the outer packet is too large for the interface MTU, the usual mechanism of returning "ICMP destination unreachable, fragmentation required" does not work very well, because the ICMP is returned to the router that encapsulated the packet, not to the original source of the traffic. And I have never seen an encapsulating router that translated the ICMP into a new ICMP packet referring to the inner addresses and sent it back to the original source.
Also, there are sometimes issues when routes are changed by BGP. Of course, many routers have TCP MSS clamping configured, where the TCP MSS is reduced whenever the TCP SYN passes through a place with a lower MTU, but this happens only at the initial connection setup. When the MTU later decreases due to a route change, the connection still fails.
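For reference, on a Linux router this is the standard iptables TCPMSS target (nothing AMPRnet-specific):

    # rewrite the MSS option in passing TCP SYNs to match the path MTU
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
             -j TCPMSS --clamp-mss-to-pmtu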
I wonder if other gateway operators have done something to alleviate this problem.
Solutions that can be considered:
- ignore DF. Much of the current TCP traffic has DF (don't fragment) set, but this often causes communications to break unnecessarily. Without DF, packets would be fragmented as originally designed in the IP protocol. Sending everything with DF and interpreting the ICMP responses is the mechanism behind "Path MTU Discovery", which was designed to avoid fragmentation and the overhead it causes in routers; however, on AMPRnet we seldom encounter so much traffic that CPU load on the routers is an issue. (see the sketch after this list)
- standardize on a "default MTU" whenever we cannot offer a 1500-byte MTU. This does not solve all problems, but at least it solves some of them.
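On a Linux gateway, the closest stock knob for the DF question sits on the tunnel itself (interface name and addresses hypothetical): with nopmtudisc, DF is not copied or set on the outer header, so the encapsulated packet may itself be fragmented along the path. Note that nopmtudisc requires the default inherited TTL; a fixed TTL forces PMTU discovery:

    ip tunnel add ampr0 mode ipip local 192.0.2.1 remote 198.51.100.1 nopmtudisc
    ip link set ampr0 mtu 1480 up

Ignoring DF on the inner packet (fragmenting it anyway) is, as far as I know, not a stock Linux feature and typically needs a patched kernel or special router software.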
Note that most routers fragment packets in a particularly inefficient way. When a packet a few bytes too large for the next hop has to be forwarded (and DF is not set), they will not split the packet into two approximately equal halves; rather, they send a first fragment as large as the outgoing MTU can accept, then a small fragment with the remainder of the original packet. This can result in multiple fragmentations along the way: first the packet has to be fragmented to fit the 1480-byte MTU of an IPIP tunnel, then further on it has to be fragmented again to fit a GRE or L2TP/IPsec tunnel with a smaller MTU. No further fragmentation would have been required had it been split into equal halves the first time.
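A worked example of that (assuming 20-byte IPv4 headers and remembering that non-final fragment payloads must be multiples of 8):

    a 1500-byte packet (20 + 1480 payload) into a 1480 MTU:
        fragment 1: 20 + 1456 = 1476 bytes
        fragment 2: 20 +   24 =   44 bytes
    fragment 1 later crosses a 1456 MTU and is fragmented again:
        20 + 1432 = 1452 bytes and 20 + 24 = 44 bytes
    an equal split at the first hop (20 + 744 = 764 and 20 + 736 = 756)
    would have passed the 1456 MTU without further fragmentation.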
So, I wonder what others do (if anything) to avoid the problems caused by oversized packets, and maybe to avoid fragmentation. For some time I have experimented with "ignore DF", and of course it keeps traffic flowing, but it is unclear whether it causes problems for some users.
Next, I would consider using a standard MTU value on all tunnels, so that mostly two MTU values are left in the network: 1500 and that smaller, to-be-determined value.
Of course the MTU should not be so low that it causes terrible overhead. In the past we had a 256-byte MTU on AX.25 packet radio (or even 216 when it ran over NET/ROM), but that caused about 15% header overhead and made us very unpopular amongst plain AX.25 users. Fortunately the WiFi links we use today allow 1500-byte packets :-)
The minimal required MTU for IPv6 is 1280. The maximal MTU we can accommodate with the worst-case tunnel headers is about 1400. So the preferable default MTU would be somewhere between 1280 and 1400.
Are people even using 256-byte MTU links today? Would it be worthwhile to select an MTU value that can be fragmented efficiently into 256-byte packets? Or is there another small MTU size that would be a candidate for such considerations?
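A quick calculation (assuming 20-byte IPv4 headers): a 256-byte MTU leaves 236 payload bytes per fragment, rounded down to a multiple of 8 that is 232. So only MTUs of the form 20 + n x 232 fragment into n equal-sized pieces with no runt tail:

    20 + 5 x 232 = 1180   (below the 1280 IPv6 minimum)
    20 + 6 x 232 = 1412   (above the ~1400 worst-case tunnel MTU)

i.e. no value inside the 1280-1400 window fragments cleanly into 256-byte packets anyway.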
So again, I wonder what others have done w.r.t. this matter. Are admins of gateways that offer many different kinds of tunnels using a standard MTU in their systems, or just the maximum MTU that each tunnel technology allows?
Do you copy DF from the inner to the outer packet in a tunnel? Do you ignore DF?
What would be your position on establishing a standard MTU for tunnels, and what size would you propose?
Rob PE1CHL
All,
I have a question to ponder again. In preparation for emergencies, I wanted to consider some of the following.
- Passing traffic through another GW; we could use the test/example subnet
- If AMPRGW at UCSD is unreachable, could we have other capable devices that elect between themselves and announce the routes from Chris' server?
- Can we test the redundancy of Chris' route server, from which they would elect and receive routes?
- I'm not sure if anyone is doing AX.25 to IP or vice versa, but I would like to try
- I'm curious whether anyone has compiled kissattach and all the related utilities on OpenWrt to connect a TNC and radio. I want to do an end-to-end test (I recall someone offered me a makefile for some libraries) and to migrate my base APRS radio to the router. I recall I have libax25 compiled... (see the sketch after this list)
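Not OpenWrt-specific, but for reference, attaching a KISS TNC once libax25/ax25-tools are built looks like this minimal sketch (port name, device, timing values and the 44net address are all hypothetical; the port must exist in /etc/ax25/axports):

    # /etc/ax25/axports:
    # radio   KB3VWG-1   9600   256   2   APRS base radio
    kissattach /dev/ttyUSB0 radio 44.60.44.1
    kissparms -p radio -t 300 -s 200 -r 64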
73,
- Lynwood
KB3VWG