I'm currently trying to get upstream to fix a CSTP restart problem that happens every 30 minutes with 4.07. It does a CSTP restart every 30 minutes, which then leaves a partially working connection (ping works, TCP packets don't).

Reproducible: Always

Steps to Reproduce:
1. /etc/init.d/openconnect.vpn0 start
2. Verify all sockets operate properly on the VPN (ping, svn, http, etc.)
3. Wait 30 minutes
4. Syslog will show a reconnect
5. Verify ping works
6. Try another service (e.g. svn up)
7. svn up will fail to connect

Actual Results: After 30 minutes the VPN reconnects but only has a partially working connection.

Expected Results: After 30 minutes the VPN should reconnect and all sockets should continue working.
Workaround in the meantime:

Restart the service every 30 minutes manually (you can use the init script)

--or--

Copy the 4.07 ebuild to your local portage overlay and rename it to 3.20, then change these lines:

    $(use_enable nls ) \
    $(use_with openssl ) \
    $(use_with gnutls )

to:

    $(use_enable nls )

Note that when downgrading, you will lose gnutls support and must use openssl.
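For the first workaround, the periodic restart can be automated instead of done by hand. A hypothetical /etc/crontab entry, assuming the init script name from the steps above (adjust the path and interval to your setup):

```shell
# Sketch only: restart the tunnel every 30 minutes so the broken
# CSTP reconnect never gets a chance to leave a half-working link.
# /etc/crontab format (includes the user field).
*/30 * * * * root /etc/init.d/openconnect.vpn0 restart
```

This is a blunt instrument (it drops the tunnel briefly on every restart), but it keeps TCP services usable between restarts.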
It appears the default MTU is set too low. Overriding the default and setting it to --mtu 1406 seems to keep the connection more stable. I'm still working with upstream to see what they can do about this.
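One way to sanity-check an MTU value like 1406 is to probe the path with don't-fragment pings once the tunnel is up. This is a sketch, not part of the original report: the in-tunnel host is a placeholder, and the payload arithmetic assumes IPv4 (20-byte IP header + 8-byte ICMP header).

```shell
# Probe whether a candidate MTU survives the path unfragmented.
# ICMP payload = MTU - 28 (20-byte IPv4 header + 8-byte ICMP header).
MTU=1406
PAYLOAD=$((MTU - 28))
echo "probing with a ${PAYLOAD}-byte payload"
# Requires a live tunnel; 10.0.0.1 is a hypothetical in-tunnel host:
# ping -M do -c 3 -s "$PAYLOAD" 10.0.0.1
```

If the ping fails with a "message too long" error, the path MTU is lower than the candidate value and a smaller --mtu is needed.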
I never had this problem; I've been using openconnect for more than 12 months, several hours a week, some days 6 hours and longer, without interruption. The MTU of the tun0 interface is 1300 for my config. I don't know how to set this, so I assume it's configuration information coming from the server side. Just for information: the underlying ppp0 or eth0 interfaces (depending on the internet connection type) both have MTU 1500.
(In reply to comment #3)
> I never had this problem, I'm using openconnect more than 12 months several
> hours a week, some days 6 hours and longer without interruption.
> The MTU of the tun0 interface is 1300 for my config, I don't know how to set
> this so I assume it's a configuration information coming from the server
> side.
> Just for information: the underlying ppp0 or eth0 interfaces (depending of
> the internet connection type) have both MTU 1500.

My MTU defaults to 951 if I don't set it. The MTU for the VPN is set on the tun interface. When you want to override the MTU, you just set it in vpnopts_(vpn tunnel name) like this:

    vpnopts_vpn0="--mtu 1406 ...other flags"

1406 seems to be the maximum MTU allowed. I haven't had any problems when I override the MTU and set it to this when testing. I'm still trying to get information from upstream as to why the default MTU is so low.
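Put together, the init-script config file might look like the sketch below. This assumes Gentoo's conf.d layout and the vpnopts_ variable naming from the comment above; the "...other flags" from the original are left as a placeholder since I don't know what the reporter's full option set was.

```shell
# /etc/conf.d/openconnect.vpn0 -- sketch, not a complete config.
# Force the MTU instead of accepting the (too-low) negotiated default.
# Append your existing openconnect flags after --mtu 1406.
vpnopts_vpn0="--mtu 1406"
```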
Is this still present with 4.08 or 5.01?
I'm no longer able to test this since I don't have access to an AnyConnect VPN anymore, and since nobody else has reported this problem, I'm closing this bug.