TLS handshake error
zqWu opened this issue
Help wanted! This happens on both the master branch and 0.4.4.
Server config:
```
server 10.140.0.0 255.255.255.0
topology subnet
verb 3
# Filled by Secrets object. Use generic names
key /etc/openvpn/pki/private.key
ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/certificate.crt
dh none
ecdh-curve secp256k1
key-direction 0
keepalive 10 60
persist-key
persist-tun
push "block-outside-dns"
proto tcp
cipher AES-256-CBC
tls-cipher TLS-ECDHE-RSA-WITH-AES-128-GCM-SHA256:TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256:TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384
# Rely on scheduler to do port mapping, internally always 1194
port 1194
dev tun0
user nobody
group nogroup
push "dhcp-option DOMAIN svc.cluster.local"
push "dhcp-option DNS 10.96.0.10"
```
Client config:
```
client
nobind
dev tun
key-direction 1
remote-cert-tls server
remote xxxx 1194 tcp
<key>...</key>
<cert>...</cert>
<ca>...</ca>
<tls-auth>...</tls-auth>
key-direction 1
tls-cipher TLS-ECDHE-RSA-WITH-AES-128-GCM-SHA256:TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256:TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256
cipher AES-256-CBC
```
Server log:
```
MULTI: multi_create_instance called
Re-using SSL/TLS context
Control Channel MTU parms [ L:1623 D:1182 EF:68 EB:0 ET:0 EL:3 ]
Data Channel MTU parms [ L:1623 D:1450 EF:123 EB:406 ET:0 EL:3 ]
Local Options String (VER=V4): 'V4,dev-type tun,link-mtu 1559,tun-mtu 1500,proto TCPv4_SERVER,keydir 0,cipher AES-256-CBC,auth SHA1,keysize 256,tls-auth,key-method 2,tls-server'
Expected Remote Options String (VER=V4): 'V4,dev-type tun,link-mtu 1559,tun-mtu 1500,proto TCPv4_CLIENT,keydir 1,cipher AES-256-CBC,auth SHA1,keysize 256,tls-auth,key-method 2,tls-client'
TCP connection established with [AF_INET]10.244.0.0:2530
TCPv4_SERVER link local: (not bound)
TCPv4_SERVER link remote: [AF_INET]10.244.0.0:2530
R 10.244.0.0:2530 TLS: Initial packet from [AF_INET]10.244.0.0:2530, sid=b13fc9ef 56a51211
WRR 10.244.0.0:2530 OpenSSL: error:141F7065:SSL routines:final_key_share:no suitable key share
10.244.0.0:2530 TLS_ERROR: BIO read tls_read_plaintext error
10.244.0.0:2530 TLS Error: TLS object -> incoming plaintext read error
10.244.0.0:2530 TLS Error: TLS handshake failed
10.244.0.0:2530 Fatal TLS error (check_tls_errors_co), restarting
10.244.0.0:2530 SIGUSR1[soft,tls-error] received, client-instance restarting
TCP/UDP: Closing socket
```
Client log:
```
Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
TCP/UDP: Preserving recently used remote address: [AF_INET]xxxxx:1194
Socket Buffers: R=[131072->131072] S=[131072->131072]
Attempting to establish TCP connection with [AF_INET]xxxxx:1194 [nonblock]
MANAGEMENT: >STATE:1604551160,TCP_CONNECT,,,,,,
TCP connection established with [AF_INET]xxxxx:1194
TCP_CLIENT link local: (not bound)
TCP_CLIENT link remote: [AF_INET]xxxxx:1194
MANAGEMENT: >STATE:1604551162,WAIT,,,,,,
MANAGEMENT: >STATE:1604551162,AUTH,,,,,,
TLS: Initial packet from [AF_INET]xxxxx:1194, sid=5e5bd90d 77e70dff
Connection reset, restarting [0]
SIGUSR1[soft,connection-reset] received, process restarting
MANAGEMENT: >STATE:1604551162,RECONNECTING,connection-reset,,,,,
MANAGEMENT: CMD 'hold release'
```
I managed to resolve this by replacing the ECDH curve secp256k1 with secp521r1.
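For anyone who wants the why: the server-side `final_key_share:no suitable key share` error comes from OpenSSL's TLS 1.3 handshake. TLS 1.3 only negotiates key shares over a fixed set of groups (secp256r1, secp384r1, secp521r1, x25519, x448, and the ffdhe groups), and secp256k1 is not among them, so a TLS 1.3-capable client can never agree on a curve with this server. A minimal sketch of the fix in the server config (presumably generated via openvpn.tmpl, as discussed below):

```
# secp256k1 is not a TLS 1.3 group, so modern clients fail the handshake;
# switch to a curve that TLS 1.3 supports:
ecdh-curve secp521r1
```

You can list every curve your OpenVPN build accepts for --ecdh-curve with `openvpn --show-curves`.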
@mtsgrd did you change the openvpn.tmpl and recreate the Docker image? Or how did you fix it?
@Edelf I came across this issue and my approach was to create a ConfigMap from the openvpn.tmpl in this repository, after editing it:
```sh
kubectl -n openvpn create cm openvpn-tmpl --from-file ./openvpn.tmpl
```
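To sanity-check the ConfigMap before wiring it into the deployment (optional, just a sketch):

```sh
# Verify the edited template made it into the ConfigMap
kubectl -n openvpn get configmap openvpn-tmpl -o yaml | head -n 20
```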
Then change ./kube/deployment.yaml by adding these lines under spec.template.spec.containers[openvpn].volumeMounts:
```yaml
- mountPath: /etc/openvpn/templates/openvpn.tmpl
  name: openvpn-tmpl
  subPath: openvpn.tmpl
```
and under spec.template.spec.volumes:
```yaml
- name: openvpn-tmpl
  configMap:
    name: openvpn-tmpl
    defaultMode: 0400
```
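For context, here's a sketch of how those two fragments sit in the deployment manifest (the container name openvpn and the surrounding fields are assumptions; match them to the actual deployment.yaml):

```yaml
# Sketch only: container name is a placeholder for whatever
# the repo's deployment.yaml actually uses.
spec:
  template:
    spec:
      containers:
        - name: openvpn
          volumeMounts:
            - mountPath: /etc/openvpn/templates/openvpn.tmpl
              name: openvpn-tmpl
              subPath: openvpn.tmpl
      volumes:
        - name: openvpn-tmpl
          configMap:
            name: openvpn-tmpl
            defaultMode: 0400
```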
I didn't bother editing the whole YAML, so I had to change the final line in ./kube/deploy.sh from `create` to `apply` (see the sketch below) and then update the deployment by running ./kube/deploy.sh again, but it works :) Hopefully this helps!