ifupdown-ng / ifupdown-ng

flexible ifup/ifdown implementation

ifup: skipping auto interface XX (already configured), use --force to force configuration

EasyNetDev opened this issue

Hi,

I've written an executor for teaming, and when I try to bring up my teaming interface I get this error at the end:

root@R02:/opt/Kitts/ifupdown-ng# ifup -v po1
ifupdown: lo: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: lo: attempting to run loopback executor for phase depend
ifupdown: lo: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: po1.650: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ IF_VLAN_RAW_DEVICE=po1
+ IF_VLAN_ID=650
+ return 0
+ echo po1
ifupdown: po1.650: attempting to run vlan executor for phase depend
ifupdown: po1.650: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo internet
ifupdown: po1: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: po1: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po1
+ [ -n te0-0 ]
+ get_depend_list te0-0
+ local MEMBERS_LIST
+ [ te0-0 ]
+ MEMBERS_LIST= te0-0
+ shift
+ [ ]
+ echo te0-0
ifupdown: po1: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: te0-0: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-0: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase depend
ifupdown: internet: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: internet: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: po1: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: po1: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po1
+ [ -n te0-0 ]
+ get_depend_list te0-0
+ local MEMBERS_LIST
+ [ te0-0 ]
+ MEMBERS_LIST= te0-0
+ shift
+ [ ]
+ echo te0-0
ifupdown: po1: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: te0-0: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-0: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase depend
ifupdown: te0-0: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-0: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase depend
ifupdown: po2: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: po2: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po2
+ [ -n te0-1 ]
+ get_depend_list te0-1
+ local MEMBERS_LIST
+ [ te0-1 ]
+ MEMBERS_LIST= te0-1
+ shift
+ [ ]
+ echo te0-1
ifupdown: po2: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: te0-1: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-1: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-1: attempting to run post executor for phase depend
ifupdown: te0-1: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-1: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-1: attempting to run post executor for phase depend
ifupdown: te0-3: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-3: attempting to run post executor for phase depend
ifupdown: te0-3: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: gi2-0: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: gi2-0: attempting to run post executor for phase depend
ifupdown: gi2-0: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: internet: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: internet: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: servers: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: servers: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: servers2: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: servers2: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: vpn: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: vpn: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: vpn2: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: vpn2: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: mgmt: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: mgmt: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: esxi: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: esxi: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: iptv: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: iptv: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifup: acquiring lock on /run/ifstate.po1.lock
ifup: skipping auto interface po1 (already configured), use --force to force configuration

Which is weird, because I'm not trying to acquire the /run/ifstate.po1.lock file myself. At this point the team executor has not been able to create the logical interface at all.
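
As far as I can tell, the lock line is just ifup serializing access to its state; the "already configured" decision seems to come from what is recorded in /run/ifstate. A minimal way to check why ifup considers po1 configured even though the team interface was never created (whether ifquery accepts --state may depend on the ifupdown-ng version):

# The lock file appears to exist only to serialize ifup/ifdown runs; the
# "already configured" check seems to be against the recorded state, so inspect it:
grep po1 /run/ifstate

# ifquery should report the same thing (flag support may vary by version)
ifquery --state po1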

If I use "--force", the interface does get created in the system:

root@R02:/usr/libexec/ifupdown-ng# ifup --force -v po1
ifupdown: lo: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: lo: attempting to run loopback executor for phase depend
ifupdown: lo: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: po1.650: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ IF_VLAN_RAW_DEVICE=po1
+ IF_VLAN_ID=650
+ return 0
+ echo po1
ifupdown: po1.650: attempting to run vlan executor for phase depend
ifupdown: po1.650: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo internet
ifupdown: po1: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: po1: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po1
+ [ -n te0-0 ]
+ get_depend_list te0-0
+ local MEMBERS_LIST
+ [ te0-0 ]
+ MEMBERS_LIST= te0-0
+ shift
+ [ ]
+ echo te0-0
ifupdown: po1: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: te0-0: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-0: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase depend
ifupdown: internet: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: internet: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: po1: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: po1: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po1
+ [ -n te0-0 ]
+ get_depend_list te0-0
+ local MEMBERS_LIST
+ [ te0-0 ]
+ MEMBERS_LIST= te0-0
+ shift
+ [ ]
+ echo te0-0
ifupdown: po1: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: te0-0: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-0: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase depend
ifupdown: te0-0: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-0: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase depend
ifupdown: po2: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: po2: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po2
+ [ -n te0-1 ]
+ get_depend_list te0-1
+ local MEMBERS_LIST
+ [ te0-1 ]
+ MEMBERS_LIST= te0-1
+ shift
+ [ ]
+ echo te0-1
ifupdown: po2: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: te0-1: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-1: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-1: attempting to run post executor for phase depend
ifupdown: te0-1: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-1: attempting to run team executor for phase depend
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-1: attempting to run post executor for phase depend
ifupdown: te0-3: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: te0-3: attempting to run post executor for phase depend
ifupdown: te0-3: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: gi2-0: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: gi2-0: attempting to run post executor for phase depend
ifupdown: gi2-0: attempting to run mpls executor for phase depend
/usr/libexec/ifupdown-ng/mpls
+ [ depend != pre-up ]
+ exit 0
ifupdown: internet: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: internet: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: servers: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: servers: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: servers2: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: servers2: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: vpn: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: vpn: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: vpn2: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: vpn2: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: mgmt: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: mgmt: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: esxi: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: esxi: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifupdown: iptv: attempting to run link executor for phase depend
/usr/libexec/ifupdown-ng/link
+ is_vlan
+ [ -z  ]
+ return 1
+ [  = veth -a  ]
ifupdown: iptv: attempting to run vrf executor for phase depend
/usr/libexec/ifupdown-ng/vrf
+ echo
ifup: acquiring lock on /run/ifstate.po1.lock
ifup: changing state of interface po1 to 'up'
ifupdown: changing state of dependent interface te0-0 (of po1) to up
ifupdown: te0-0: attempting to run link executor for phase create
/usr/libexec/ifupdown-ng/link
+ [  = dummy ]
+ [  = veth ]
+ is_vlan
+ [ -z  ]
+ return 1
ifupdown: te0-0: attempting to run team executor for phase create
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase create
ifupdown: te0-0: attempting to run link executor for phase pre-up
/usr/libexec/ifupdown-ng/link
ifupdown: te0-0: attempting to run team executor for phase pre-up
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase pre-up
/bin/run-parts /etc/network/if-pre-up.d
ifupdown: te0-0: attempting to run link executor for phase up
/usr/libexec/ifupdown-ng/link
+ IF_LINK_OPTIONS=
+ [ -n  ]
+ [ -n  ]
+ ip link set up dev te0-0
+ [  ]
ifupdown: te0-0: attempting to run team executor for phase up
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase up
/bin/run-parts /etc/network/if-up.d
ifupdown: te0-0: attempting to run link executor for phase post-up
/usr/libexec/ifupdown-ng/link
ifupdown: te0-0: attempting to run team executor for phase post-up
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z  ]
+ exit 0
ifupdown: te0-0: attempting to run post executor for phase post-up
/sbin/ip link set $IFACE alias Ten0/0
ifupdown: po1: attempting to run link executor for phase create
/usr/libexec/ifupdown-ng/link
+ [  = dummy ]
+ [  = veth ]
+ is_vlan
+ [ -z  ]
+ return 1
ifupdown: po1: attempting to run team executor for phase create
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po1
+ [ -d /sys/class/net/po1 ]
+ ip link add po1 type team
+ prepare_config_file
+ write_config_file_top
+ echo {
+ echo -n       "device": "po1"
+ prepare_runner_config
+ local runner
+ local notify_peers
+ local mcast_rejoin
+ local link_watch
+ local tx_balancer
+ local tx_hash
+ local json_OPT_1
+ local json_OPT_2
+ local json_OPT_3
+ local json_OPT_4
+ local json_OPT_5
+ local json_OPT_6
+ local json_OPT_7
+ local json_OPT_8
+ echo+ tr [:upper:] [:lower:]
 lacp
+ IF_TEAM_RUNNER=lacp
+ [ -z l4 l3 vlan eth ]]
+ get_options runner.tx_hash l4 l3 vlan eth
+ local srch_opt=runner.tx_hash
+ local cval=l4 l3 vlan eth
+ sub_opt=tx_hash
+ build_tx_hash l4 l3 vlan eth
+ local tx_hash
+ local list_hashes
+ local list_hashes_json=
+ tx_hash="tx_hash": [
+ echo l4 l3 vlan eth
+ tr + \n
+ hashes_list=l4 l3 vlan eth
+ list_hashes_json=, "l4"
+ list_hashes_json=, "l4", "l3"
+ list_hashes_json=, "l4", "l3", "vlan"
+ list_hashes_json=, "l4", "l3", "vlan", "eth"
+ tx_hash="tx_hash": ["l4", "l3", "vlan", "eth"]
+ echo "tx_hash": ["l4", "l3", "vlan", "eth"]
+ tx_hash="tx_hash": ["l4", "l3", "vlan", "eth"]
+ get_options runner.tx_balancer.name basic
+ local srch_opt=runner.tx_balancer.name
+ local cval=basic
+ sub_opt=tx_balancer.name
+ echo "name": "basic"
+ json_OPT_1="name": "basic"
+ get_options runner.tx_balancer.balancing_interval 50
+ local srch_opt=runner.tx_balancer.balancing_interval
+ local cval=50
+ sub_opt=tx_balancer.balancing_interval
+ [ -z basic ]
+ [ -z 50 ]
+ convert_to_int 50
+ printf %d\n 50
+ cval=50
+ [ 50 -ge 0 ]
+ echo "balancing_interval": 50
+ json_OPT_2="balancing_interval": 50
+ get_options runner.active true
+ local srch_opt=runner.active
+ local cval=true
+ sub_opt=active
+ [ -z true ]
+ truefalse true
+ echo true
+ echo "active": true
+ json_OPT_3="active": true
+ get_options runner.fast_rate true
+ local srch_opt=runner.fast_rate
+ local cval=true
+ sub_opt=fast_rate
+ [ -z true ]
+ truefalse true
+ echo true
+ echo "fast_rate": true
+ json_OPT_4="fast_rate": true
+ get_options runner.sys_prio
+ local srch_opt=runner.sys_prio
+ local cval=
+ sub_opt=sys_prio
+ [ -z  ]
+ cval=65535
+ [ 65535 -lt 0 -a 65535 -gt 65535 ]
+ echo "sys_prio": 65535
+ json_OPT_5="sys_prio": 65535
+ get_options runner.min_ports 1
+ local srch_opt=runner.min_ports
+ local cval=1
+ sub_opt=min_ports
+ [ -z 1 ]
+ convert_to_int 1
+ printf %d\n 1
+ cval=1
+ [ 1 -lt 1 -a 1 -gt 255 ]
+ echo "min_ports": 1
+ json_OPT_6="min_ports": 1
+ get_options runner.agg_select_policy
+ local srch_opt=runner.agg_select_policy
+ local cval=
+ sub_opt=agg_select_policy
+ echo "agg_select_policy": "lacp_prio"
+ json_OPT_7="agg_select_policy": "lacp_prio"
+ build_tx_balancer "name": "basic" "balancing_interval": 50
+ local tx_balancer_name="name": "basic"
+ local tx_balancer_int="balancing_interval": 50
+ [ -z basic ]
+ local tx_balancer_json="tx_balancer": {
+ local tx_balancer_opt_json=
+ [ -n "name": "basic" ]
+ tx_balancer_opt_json="name": "basic"
+ [ -n "balancing_interval": 50 ]
+ tx_balancer_opt_json="name": "basic", "balancing_interval": 50
+ tx_balancer_json="tx_balancer": {"name": "basic", "balancing_interval": 50}
+ echo "tx_balancer": {"name": "basic", "balancing_interval": 50}
+ tx_balancer="tx_balancer": {"name": "basic", "balancing_interval": 50}
+ build_runner "tx_balancer": {"name": "basic", "balancing_interval": 50} "tx_hash": ["l4", "l3", "vlan", "eth"] "active": true "fast_rate": true "sys_prio": 65535 "min_ports": 1 "agg_select_policy": "lacp_prio"
+ local tx_balancer="tx_balancer": {"name": "basic", "balancing_interval": 50}
+ local tx_hash="tx_hash": ["l4", "l3", "vlan", "eth"]
+ shift
+ shift
+ local runner_json=
+ runner_json="runner": {"name": "lacp"
+ [ -n "active": true ]
+ [ -n "active": true ]
+ runner_json="runner": {"name": "lacp", "active": true
+ shift
+ [ -n "fast_rate": true ]
+ [ -n "fast_rate": true ]
+ runner_json="runner": {"name": "lacp", "active": true, "fast_rate": true
+ shift
+ [ -n "sys_prio": 65535 ]
+ [ -n "sys_prio": 65535 ]
+ runner_json="runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535
+ shift
+ [ -n "min_ports": 1 ]
+ [ -n "min_ports": 1 ]
+ runner_json="runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1
+ shift
+ [ -n "agg_select_policy": "lacp_prio" ]
+ [ -n "agg_select_policy": "lacp_prio" ]
+ runner_json="runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio"
+ shift
+ [ -n  ]
+ [ -n "tx_balancer": {"name": "basic", "balancing_interval": 50} ]
+ runner_json="runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}
+ [ -n "tx_hash": ["l4", "l3", "vlan", "eth"] ]
+ runner_json="runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}, "tx_hash": ["l4", "l3", "vlan", "eth"]
+ runner_json="runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}, "tx_hash": ["l4", "l3", "vlan", "eth"] }
+ echo "runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}, "tx_hash": ["l4", "l3", "vlan", "eth"] }
+ runner="runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}, "tx_hash": ["l4", "l3", "vlan", "eth"] }
+ echo ethtool
+ tr [:upper:] [:lower:]
+ IF_TEAM_LINK_WATCH=ethtool
+ IF_TEAM_LINK_WATCH=ethtool
+ get_options link_watch.delay_up 250
+ local srch_opt=link_watch.delay_up
+ local cval=250
+ sub_opt=delay_up
+ [ -z 250 ]
+ convert_to_int 250
+ printf %d\n 250
+ cval=250
+ [ 250 -lt 0 ]
+ echo "delay_up": 250
+ json_OPT_1="delay_up": 250
+ get_options link_watch.delay_down 0
+ local srch_opt=link_watch.delay_down
+ local cval=0
+ sub_opt=delay_down
+ [ -z 0 ]
+ convert_to_int 0
+ printf %d\n 0
+ cval=0
+ [ 0 -lt 0 ]
+ echo "delay_down": 0
+ json_OPT_2="delay_down": 0
+ build_linkwatch "delay_up": 250 "delay_down": 0
+ local linkwatch_json=
+ local linkwatch_options=
+ linkwatch_json="link_watch": {"name": "ethtool"
+ linkwatch_options=
+ [ -n "delay_up": 250 ]
+ linkwatch_options=, "delay_up": 250
+ shift
+ [ -n "delay_down": 0 ]
+ linkwatch_options=, "delay_up": 250, "delay_down": 0
+ shift
+ [ -n  ]
+ linkwatch_json="link_watch": {"name": "ethtool", "delay_up": 250, "delay_down": 0}
+ echo "link_watch": {"name": "ethtool", "delay_up": 250, "delay_down": 0}
+ link_watch="link_watch": {"name": "ethtool", "delay_up": 250, "delay_down": 0}
+ get_options notify_peers.count
+ local srch_opt=notify_peers.count
+ local cval=
+ sub_opt=count
+ [ -z  ]
+ return
+ json_OPT_1=
+ get_options notify_peers.interval
+ local srch_opt=notify_peers.interval
+ local cval=
+ sub_opt=interval
+ [ -z  ]
+ return
+ json_OPT_2=
+ build_notifypeers_mcastrejoin notify_peers
+ local data_name=notify_peers
+ shift
+ local data_json=
+ local data_options=
+ [ -z  ]
+ return
+ notify_peers=
+ get_options mcast_rejoin.count
+ local srch_opt=mcast_rejoin.count
+ local cval=
+ sub_opt=count
+ [ -z  ]
+ return
+ json_OPT_1=
+ get_options mcast_rejoin.interval
+ local srch_opt=mcast_rejoin.interval
+ local cval=
+ sub_opt=interval
+ [ -z  ]
+ return
+ json_OPT_2=
+ build_notifypeers_mcastrejoin mcast_rejoin
+ local data_name=mcast_rejoin
+ shift
+ local data_json=
+ local data_options=
+ [ -z  ]
+ return
+ mcast_rejoin=
+ write_config_runner "link_watch": {"name": "ethtool", "delay_up": 250, "delay_down": 0}   "runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}, "tx_hash": ["l4", "l3", "vlan", "eth"] }
+ local link_watch="link_watch": {"name": "ethtool", "delay_up": 250, "delay_down": 0}
+ local mcast_rejoin=
+ local notify_peers=
+ local runner="runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}, "tx_hash": ["l4", "l3", "vlan", "eth"] }
+ [ -n "link_watch": {"name": "ethtool", "delay_up": 250, "delay_down": 0} ]
+ echo ,
+ echo -n       "link_watch": {"name": "ethtool", "delay_up": 250, "delay_down": 0}
+ [ -n  ]
+ [ -n "runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}, "tx_hash": ["l4", "l3", "vlan", "eth"] } ]
+ echo ,
+ echo -n       "runner": {"name": "lacp", "active": true, "fast_rate": true, "sys_prio": 65535, "min_ports": 1, "agg_select_policy": "lacp_prio", "tx_balancer": {"name": "basic", "balancing_interval": 50}, "tx_hash": ["l4", "l3", "vlan", "eth"] }
+ [ -n  ]
+ prepare_ports_member_config
+ local PORT
+ local json_port_opts=
+ local json_port=
+ local is_first_port=1
+ local port_member
+ local port_member_conf
+ local port_member_opts
+ write_config_port_start
+ echo ,
+ echo -n       "ports": {
+ get_port_options_json te0-0
+ local PORT=te0-0
+ local PORT_OPT_VAL=
+ local json_OPT_1=
+ local json_OPT_2=
+ local json_OPT_3=
+ local json_OPT_4=
+ local json_OPT_5=
+ local json_OPT_6=
+ local json_OPT_7=
+ local json_OPT_8=
+ local json_OPT_9=
+ local json_OPT_10=
+ local json_OPT_11=
+ local json_port_opts=
+ ifquery -p team-port-lacp-prio te0-0
+ PORT_OPT_VAL=
+ get_options ports.lacp_prio
+ local srch_opt=ports.lacp_prio
+ local cval=
+ sub_opt=lacp_prio
+ [ -z  ]
+ cval=0
+ [ 0 -lt 0 ]
+ echo "lacp_prio": 0
+ json_OPT_1="lacp_prio": 0
+ ifquery -p team-port-lacp-key te0-0
+ PORT_OPT_VAL=20
+ get_options ports.lacp_key 20
+ local srch_opt=ports.lacp_key
+ local cval=20
+ sub_opt=lacp_key
+ [ -z 20 ]
+ convert_to_int 20
+ printf %d\n 20
+ cval=20
+ [ 20 -lt 0 ]
+ echo "lacp_key": 20
+ json_OPT_2="lacp_key": 20
+ ifquery -p team-link-watch te0-0
+ PORT_OPT_VAL=
+ get_options link_watch.name
+ local srch_opt=link_watch.name
+ local cval=
+ sub_opt=name
+ json_OPT_3=
+ ifquery -p team-link-watch-delay-up te0-0
+ PORT_OPT_VAL=
+ get_options link_watch.delay_up
+ local srch_opt=link_watch.delay_up
+ local cval=
+ sub_opt=delay_up
+ [ -z  ]
+ cval=0
+ [ 0 -lt 0 ]
+ echo "delay_up": 0
+ json_OPT_4="delay_up": 0
+ ifquery -p team-link-watch-delay-down te0-0
+ PORT_OPT_VAL=
+ get_options link_watch.delay_down
+ local srch_opt=link_watch.delay_down
+ local cval=
+ sub_opt=delay_down
+ [ -z  ]
+ cval=0
+ [ 0 -lt 0 ]
+ echo "delay_down": 0
+ json_OPT_5="delay_down": 0
+ build_port_options_json "lacp_prio": 0 "lacp_key": 20  "delay_up": 0 "delay_down": 0
+ local json_port_opts=
+ [ -n "lacp_prio": 0 ]
+ [ -n "lacp_prio": 0 ]
+ json_port_opts=, "lacp_prio": 0
+ shift
+ [ -n "lacp_key": 20 ]
+ [ -n "lacp_key": 20 ]
+ json_port_opts=, "lacp_prio": 0, "lacp_key": 20
+ shift
+ [ -n  ]
+ echo "lacp_prio": 0, "lacp_key": 20
+ json_port_opts="lacp_prio": 0, "lacp_key": 20
+ json_port="te0-0": {"lacp_prio": 0, "lacp_key": 20}
+ write_config_port 1 "te0-0": {"lacp_prio": 0, "lacp_key": 20}
+ local is_first_port=1
+ [ -n "te0-0": {"lacp_prio": 0, "lacp_key": 20} ]
+ [ 1 -eq 1 ]
+ echo
+ echo -n               "te0-0": {"lacp_prio": 0, "lacp_key": 20}
+ [ 1 = 1 ]
+ is_first_port=0
+ write_config_port_end
+ echo
+ echo -n       }
+ write_config_file_bottom
+ echo
+ echo }
+ start_teamd_service
+ is_systemd
+ systemctl is-system-running
+ systemd_status=running
+ return 0
+ start_daemon_via_systemd
+ systemctl is-enabled teamd@po1.service
+ service_status=
+ add_members
+ local port_conf
+ local port_member
+ port_member=te0-0
+ ip link set te0-0 down
+ ip link set te0-0 master po1
+ ip link set te0-0 up
+ exit 0
ifupdown: po1: attempting to run mpls executor for phase create
/usr/libexec/ifupdown-ng/mpls
+ [ create != pre-up ]
+ exit 0
ifupdown: po1: attempting to run link executor for phase pre-up
/usr/libexec/ifupdown-ng/link
ifupdown: po1: attempting to run team executor for phase pre-up
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po1
+ exit
ifupdown: po1: attempting to run mpls executor for phase pre-up
/usr/libexec/ifupdown-ng/mpls
+ [ pre-up != pre-up ]
+ [ yes ]
+ yesno yes
+ echo 1
+ value=1
+ [ 1 = 1 ]
+ modprobe mpls_iptunnel
+ [ -f /proc/sys/net/mpls/conf/po1/input -o  ]
+ /bin/sh -c echo 1 > /proc/sys/net/mpls/conf/po1/input
/bin/run-parts /etc/network/if-pre-up.d
ifupdown: po1: attempting to run link executor for phase up
/usr/libexec/ifupdown-ng/link
+ IF_LINK_OPTIONS=
+ [ -n 9000 ]
+ IF_LINK_OPTIONS= mtu 9000
+ [ -n  ]
+ ip link set up dev po1 mtu 9000
+ [  ]
ifupdown: po1: attempting to run team executor for phase up
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po1
ifupdown: po1: attempting to run mpls executor for phase up
/usr/libexec/ifupdown-ng/mpls
+ [ up != pre-up ]
+ exit 0
/bin/run-parts /etc/network/if-up.d
ifupdown: po1: attempting to run link executor for phase post-up
/usr/libexec/ifupdown-ng/link
ifupdown: po1: attempting to run team executor for phase post-up
/usr/libexec/ifupdown-ng/team
+ CONFIG_FILE=/run/ifteaming
+ [ -z lacp ]
+ CONFIG_FILE=/run/ifteaming.po1
ifupdown: po1: attempting to run mpls executor for phase post-up
/usr/libexec/ifupdown-ng/mpls
+ [ post-up != pre-up ]
+ exit 0

Found the issue. Sometimes interfaces remain recorded in /run/ifstate and you can't reconfigure them. After deleting the interface's entry from /run/ifstate I could rerun ifup/ifdown without issues.
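
In case someone else hits this, here is roughly how I recover (a minimal sketch; I'm assuming the state file keeps one "name=name" style line per configured interface, so double-check with cat before editing):

# Show what ifupdown-ng still records as configured
cat /run/ifstate

# Remove the stale po1 record so ifup will configure it again
# (assumes a "po1=po1"-style line; adjust the pattern to what cat shows)
sed -i '/^po1=/d' /run/ifstate

# Or let ifupdown-ng clean up after itself, then bring the interface back up
ifdown --force po1
ifup -v po1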