I use IPIP behind a NAT by forwarding all IPIP (protocol 4) traffic to a particular host
- since it is a separate IP protocol, that is actually quite easy to do.
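For reference, on a Linux-based NAT router that forward could look something like the sketch below (eth0 and 192.168.1.10 are only placeholders for the WAN interface and the LAN address of the IPIP gateway; a consumer router usually has an equivalent "protocol forwarding" or DMZ setting instead):

    # hand all incoming IPIP (IP protocol 4) to the inside gateway
    iptables -t nat -A PREROUTING -i eth0 -p 4 -j DNAT --to-destination 192.168.1.10
    iptables -A FORWARD -i eth0 -p 4 -d 192.168.1.10 -j ACCEPT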
On Mon, 20 May 2019 at 17:18, Steve L via 44Net <44net(a)mailman.ampr.org>
wrote:
IPIP requires Protocol 4 forwarding (or DMZ) at the firewall to the gateway.
OpenVPN exchanges keepalive traffic roughly every 5 seconds between the client and
server. The client creates and maintains an active connection to the
server at all times, which allows the server (and any NAT along the path)
to keep a return path back to the client.
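In OpenVPN terms that behaviour comes from the keepalive/ping options; a minimal sketch (client.ovpn is a placeholder for an existing config, and the 5/30 values just mirror the interval mentioned above):

    # ping the server every 5 seconds, assume the link is down after 30 seconds of silence
    openvpn --config client.ovpn --keepalive 5 30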
Since we are decentralized, meaning we don't all reach each other through
a central server, we would have to maintain that handshaking with every other
AMPR gateway. I forget the number of IPIP gateways Brian last quoted,
but that would obviously be a lot of traffic, and thus not practical.
The only other VPN-like architecture I know of that works the way we do
is tinc, as it supports mesh routing too. But I haven't played with it yet.
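For anyone curious, a tinc node is configured roughly as in the sketch below (untested here; the net name "hamnet", the node names, the example hostname and the 44.0.0.0/24 subnet are all made-up placeholders, and a tinc-up script is still needed to configure the interface):

    mkdir -p /etc/tinc/hamnet/hosts
    cat > /etc/tinc/hamnet/tinc.conf <<'EOF'
    Name = mygw
    Mode = router
    ConnectTo = othergw          # bootstrap the mesh from at least one known node
    EOF
    cat > /etc/tinc/hamnet/hosts/mygw <<'EOF'
    Address = gw.example.org     # public address other nodes can reach
    Subnet = 44.0.0.0/24         # placeholder for the subnet this node announces
    EOF
    tincd -n hamnet -K           # generate this node's key pair
    tincd -n hamnet              # start the daemon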
Your other option is to set up a VPS, bring in a subnet via BGP, and
then use whatever method you like (OpenVPN, etc.) to carry it from the
VPS to your firewall-restricted gateway - a solution that John, K7VE,
has been pointing out (https://groups.io/g/net-44-vpn)
Steve
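One common way to do that VPS-to-gateway leg is an OpenVPN point-to-point tunnel with a static key; a sketch only (the hostname, key path, the 192.168.44.x link addresses and <your-44-subnet> are placeholders):

    # generate the shared key once:  openvpn --genkey --secret /etc/openvpn/ampr.key
    # on the VPS:
    openvpn --dev tun44 --proto udp --lport 1194 --secret /etc/openvpn/ampr.key \
            --ifconfig 192.168.44.1 192.168.44.2 --daemon
    ip route add <your-44-subnet> via 192.168.44.2
    # on the NATed home gateway (it initiates outbound, so the firewall/NAT is no obstacle):
    openvpn --dev tun44 --proto udp --remote vps.example.net 1194 \
            --secret /etc/openvpn/ampr.key --ifconfig 192.168.44.2 192.168.44.1 --daemon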
On Mon, May 20, 2019 at 1:41 AM R P via 44Net <44net(a)mailman.ampr.org> wrote:
> > Subject: UCSD tunnel behind NAT and firewall settings?
> Hi there
> > I know that a VPN can be run behind a firewall/NAT (from the client side).
> > Can IPIP be made to work (from the gateway side) behind a firewall (that allows any traffic outbound) and a NAT?
> > Until a few months ago my gateway sat in the DMZ and it worked.
> > But I have since changed the DMZ to point to another IP and it seems the IPIP tunnel still works. I wonder whether that is a router quirk or whether IPIP can pass through NAT the way a VPN can.
> > Thanks for any info.
> ronen- 4Z4ZQ
To whom it may concern:
It looks like the system at 44.170.109.92, DNS name 9a5c-webcam.ampr.org, has been infected by a worm of some kind.
It is scanning the IP space to find new victims.
Another thing is that the routing towards it is strange: the address appears to be in BGP-routed space, but the traffic is received via IPIP tunnel
(so it is being rejected anyway, but that is how I came across it in the logs).
Rob
Brian,
Thanks for noticing that block. Unfortunately, something is still blocking Google. Their “Live Test” still comes back with a crawl anomaly.
In their documentation they state that their bots can come from a wide array of IP addresses and that they do not publish them. Is it possible
that another IP or block has been blocked off that could be opened, at least long enough to see whether that fixes the problem?
Google won’t say what all their googlebot IPs are but I found this:
Known Googlebots:
64.233.160.0 - 64.233.191.255
66.102.0.0 - 66.102.15.255
66.249.64.0 - 66.249.95.255
72.14.192.0 - 72.14.255.255
74.125.0.0 - 74.125.255.255
209.85.128.0 - 209.85.255.255
216.239.32.0 - 216.239.63.255
Google owns these blocks (possibly also Googlebots):
64.18.0.0/20 64.18.0.0 - 64.18.15.255
64.233.160.0/19 64.233.160.0 - 64.233.191.255
66.102.0.0/20 66.102.0.0 - 66.102.15.255
66.249.80.0/20 66.249.80.0 - 66.249.95.255
72.14.192.0/18 72.14.192.0 - 72.14.255.255
74.125.0.0/16 74.125.0.0 - 74.125.255.255
108.177.8.0/21 108.177.8.0 - 108.177.15.255
172.217.0.0/19 172.217.0.0 - 172.217.31.255
173.194.0.0/16 173.194.0.0 - 173.194.255.255
207.126.144.0/20 207.126.144.0 - 207.126.159.255
209.85.128.0/17 209.85.128.0 - 209.85.255.255
216.58.192.0/19 216.58.192.0 - 216.58.223.255
216.239.32.0/19 216.239.32.0 - 216.239.63.255
2001:4860:4000::/36 2001:4860:4000:0:0:0:0:0 - 2001:4860:4fff:ffff:ffff:ffff:ffff:ffff
2404:6800:4000::/36 2404:6800:4000:0:0:0:0:0 - 2404:6800:4fff:ffff:ffff:ffff:ffff:ffff
2607:f8b0:4000::/36 2607:f8b0:4000:0:0:0:0:0 - 2607:f8b0:4fff:ffff:ffff:ffff:ffff:ffff
2800:3f0:4000::/36 2800:3f0:4000:0:0:0:0:0 - 2800:3f0:4fff:ffff:ffff:ffff:ffff:ffff
2a00:1450:4000::/36 2a00:1450:4000:0:0:0:0:0 - 2a00:1450:4fff:ffff:ffff:ffff:ffff:ffff
2c0f:fb50:4000::/36 2c0f:fb50:4000:0:0:0:0:0 - 2c0f:fb50:4fff:ffff:ffff:ffff:ffff:ffff
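One way to check whether a particular crawler address really is Googlebot (rather than relying on published ranges) is the reverse-then-forward DNS test Google recommends; a quick sketch, using an example address:

    host 66.249.66.1                        # the PTR should be something like crawl-66-249-66-1.googlebot.com
    host crawl-66-249-66-1.googlebot.com    # and the forward lookup should point back at the same address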
I will certainly be very respectful of the bandwidth. As I said before, we really don’t get a lot of hits and the site is more for our members than anyone else (plus the occasional new person wanting to join).
Someone mentioned that my page size is a bit large. Yes, I do have some JavaScript and it does make it appear as though the page size is 3 MB, but that is misleading. That 3 MB includes the jQuery library, which most people already have in their cache, since so many sites use jQuery. In actual fact, if jQuery is already cached on your machine (likely), the real page size is in the mid-kilobytes. A quote from the jQuery documentation:
"If you serve jQuery from a popular CDN such as Google's Hosted Libraries or cdnjs, it won't be redownloaded if your visitor has been on a site that referenced it, from the same source (as long as the cached version has not expired).”
Thanks for trying to help me resolve this.
Roger
VA7LBB
> On May 14, 2019, at 12:00 PM, 44net-request(a)mailman.ampr.org wrote:
> Today's Topics:
>
> 1. Portal API (Nate Sales)
> 2. Re: Google indexing (Brian Kantor)
> 3. Re: Google indexing (Rob Janssen)
>
> From: Nate Sales <nate.wsales(a)gmail.com>
> Subject: Portal API
> Date: May 13, 2019 at 2:22:55 PM PDT
> To: AMPRNet working group <44net(a)mailman.ampr.org>
>
>
> Hello,
> Is there any plan to make the API more complete? It would be really cool to
> be able to update gateways and such programmatically.
> 73,
> -Nate
>
>
>
>
> From: Brian Kantor <Brian(a)bkantor.net>
> Subject: Re: [44net] Google indexing
> Date: May 13, 2019 at 2:25:38 PM PDT
> To: AMPRNet working group <44net(a)mailman.ampr.org>
>
>
> On Mon, May 13, 2019 at 11:58:18AM -0700, Roger wrote:
>> I wanted to thank everyone for their help with the Google issue I’m having. It is not resolved, but I’ve made some discoveries. It looks like a fair number of the ampr.org sites that come up on Google may in fact be reached via BGP. Rob’s is, and the others that I did a traceroute on terminate on an address that is not in net 44.
>> But that said, I now think this is 100% a Google issue. I don’t know what kind of stupidity they are up to, but Yandex and Bing have no problems indexing my site. I have read of others having similar issues. Bing and Yandex actually use the same verification system as Google, and they crawl just fine.
>>
>> 73
>> Roger
>> VA7LBB
>
> After Roger mentioned that AMPRNet BGP-advertised web sites were
> getting indexed, but not very many others, and then someone posted
> that Google's indexing bots often run in the IP address range
> 66.249.x.x, I took a look at the ingress filter in amprgw.
>
> 66.249.90.x and 66.249.91.x were indeed blocked.
>
> I have unblocked them. Roger, you may see Google crawling your web
> site from addresses in those subnets now. If you have some way to
> stimulate them to do so, you might want to try that.
>
> I don't know by which of many possible ways those addresses got
> onto the blocking list, as it was too long ago for the current logs
> to reflect it.
> - Brian
>
>
>
>
>
>
> From: Rob Janssen <pe1chl(a)amsat.org>
> Subject: Re: [44net] Google indexing
> Date: May 14, 2019 at 11:01:09 AM PDT
> To: "44net(a)mailman.ampr.org" <44net(a)mailman.ampr.org>
>
>
>> 66.249.90.x and 66.249.91.x were indeed blocked.
>
> Ahh... that explains a lot!
>
>> I don't know how among many possible ways that those addresses got
>> on the blocking list, as it was too long ago for the current logs
>> to reflect it.
>
> Maybe there was "a lot" of traffic? Possibly "a lot" only by the standards of those days.
>
> But of course everyone running a website on an IPIP-tunneled ampr.org address has some
> responsibility in this. When you have areas with lots of data, make sure those large
> files are not indexed. This can be done using robots.txt files, meta tags or headers in the page
> content, etc.
>
> E.g. you run a site with equipment schematics: some text pages with indexes
> and a lot of huge PDF files with the scanned schematics themselves. It is not difficult
> to make Google (and other crawlers) index only the text index pages and not the PDFs.
>
> Or you have a local amateur group site with lots of photographs and maybe even
> video of the field day or other events. It is possible to keep the huge 30-megapixel
> photographs and the video from being indexed and to index only the text content and maybe
> the thumbnails.
>
> When this is done in a responsible manner, indexing the websites that are behind IPIP
> tunnels should not cause much more "useless traffic" than there already is due to
> jerks like shodan.io, stretchoid.com and the like.
> (those are scanning the entire IP range, not just websites that have been announced
> to Google or are linked from other sites)
>
> Rob
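As a concrete illustration of Rob's suggestion above, a minimal robots.txt for such a site could look something like the sketch below (the web root and directory names are made-up placeholders):

    cat > /var/www/html/robots.txt <<'EOF'
    # let crawlers index the text pages, keep them out of the bulky content
    User-agent: *
    Disallow: /schematics/pdf/
    Disallow: /photos/full/
    Disallow: /video/
    EOF

For individual pages, a <meta name="robots" content="noindex"> tag or an X-Robots-Tag response header achieves the same per file.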
On May 9, 2019, at 02:09, Rob Janssen <pe1chl(a)amsat.org> wrote:
>> Now that I know where to look... PMTU has caused me a lot of headache
>> lately. I believe it could be the problem. Sending large packets to
>> 44.135.179.28 yields no reply. tracepath does get a need-to-fragment message
>> back, but only when the TTL expires at amprgw.ucsd.edu. I believe amprgw.ucsd.edu
>> should send back need-to-frag for higher TTLs as well.
>
> That is always a bit tricky, often those packets *are* sent back but they
> are blocked somewhere closer to the client, and/or the TCP stack of the
> system does not process them in a reasonable way.
>
> It is possible to work around that by adjusting the MSS of a TCP SYN
> as it passes the point where the outgoing MTU is smaller than the incoming MTU
> (incidentally something that I invented and implemented in NET in 1995;
> later almost every router and routing software started to support it).
> As a result the TCP segments sent by the endpoints will be smaller and
> won't need to be fragmented.
>
> Roger can do that on his own server, e.g. like this:
>
> iptables -t mangle -A INPUT -p tcp --syn -j TCPMSS --set-mss 1400
> iptables -t mangle -A OUTPUT -p tcp --syn -j TCPMSS --set-mss 1400
>
> Or on a router/gateway along the path (using FORWARD instead of INPUT/OUTPUT).
>
> However, I'm not convinced that this is the problem, as the site works OK
> for me over the internet. Why wouldn't it work for Google then?
>
> Rob
>
>
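For the router/gateway case Rob mentions, the usual variant is to clamp the MSS to the path MTU rather than a fixed value; a sketch (tunl0 is only an example of the tunnel interface name):

    iptables -t mangle -A FORWARD -o tunl0 -p tcp --tcp-flags SYN,RST SYN \
             -j TCPMSS --clamp-mss-to-pmtu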
> we have a very similar system here in Italy.
Nice!
> looking at the high number of access requests, in the end
> the actual number of OMs who are using the system is very low.
> Many of them lose interest or request the access just as a "nice to
> have".
That is always a bit of a problem. People ask for a certificate, connect to
the server, and then they think "well, what do I do next?" or "what can I do
now that I cannot do on the internet?".
There are some information sites with small lists of sites that can be
visited, and some search engines. I have proposed setting up a more standardized
and automated website to find running services, with a description of what they offer,
but I do not consider it my task (as an address coordinator and admin for part of
the systems) to build and maintain that. Others can pick up that part of the work.
Without a clear description of what we can do with the network, it is not
surprising that not everybody keeps using it.
But it looks like you have quite a few connected users too! Good!
Rob
Thanks everyone for the MTU discussion and suggestions on the Google issue. I’ll try adjusting the MTU.
The HTTPS issue only just started. I had a Let’s Encrypt certificate, auto-renew screwed up, and I haven’t had a chance to fix it. That wasn’t a problem when I started this Google thread, so it won’t be the main issue with Google.
Roger.
VA7LBB
On May 9, 2019, at 04:35, Scott Nicholas <scott.nicholas(a)scottn.us> wrote:
>> However, I'm not convinced that this is the problem as the site works OK
>> for me over internet. Why wouldn't it work for Google then?
>
> We have to speculate somewhat... I don't know what TCP stack the crawler
> uses. I know each OS is different in the way it deals with MTU
> and black holes.
> I did a Wireshark capture on my Windows desktop and saw a lot of
> black/red. I lowered my MTU by 40, re-tried, and there was lots of
> green.
> The TCP MSS coming from the web server was already lowered by 20. I
> don't know what it looks like on the far end. I'm also speculating
> that it's not getting ICMP or seeing a lower MSS for me.
>
> Another reason I saw red -- a few links point to HTTPS and it is not
> enabled. Fix that, and set a lower MTU on the host, and we'll at
> least fix *some* problem. Maybe not the Google indexing one. :)
>
> Regards,
> Scott
>
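A quick way to reproduce the kind of test Scott describes from a Linux host (a sketch; the target is the address from earlier in the thread, and the sizes and interface name are just examples):

    # send pings with the DF bit set and shrink the payload until replies come back;
    # 1472 bytes of payload corresponds to a full 1500-byte IP packet
    ping -M do -s 1472 44.135.179.28
    ping -M do -s 1372 44.135.179.28
    # or temporarily lower the interface MTU by 40, as Scott did
    ip link set dev eth0 mtu 1460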