jwoglom / tconnectsync

Syncs data from Tandem Source (formerly t:connect) to Nightscout for the t:slim X2 and Mobi insulin pumps

Crash with "TypeError: get() got an unexpected keyword argument 'timeout'"

bcleonard opened this issue

I've been running v0.8.3 for about 3 weeks now. I have tconnectsync running in Docker through systemd. I get data, but it crashes after every run.

The end of the docker logs:

2022-09-17 14:11:06 DEBUG split_therapy_events: 0 bolus, 0 CGM, 0 BG
2022-09-17 14:11:06 WARNING No last CGM reading is able to be determined from CIQ
2022-09-17 14:11:06 WARNING Downloading t:connect CSV data
2022-09-17 14:11:06 WARNING Falling back on WS2 CSV data source because BOLUS is an enabled feature and CIQ bolus data was empty!!
2022-09-17 14:11:06 WARNING <!!> The WS2 data source is unreliable and may prevent timely synchronization
2022-09-17 14:11:06 DEBUG Instantiating new WS2Api
Traceback (most recent call last):
  File "/home/appuser/main.py", line 5, in <module>
    main()
  File "/home/appuser/tconnectsync/__init__.py", line 87, in main
    sys.exit(u.process(tconnect, nightscout, time_start, time_end, args.pretend, features=args.features))
  File "/home/appuser/tconnectsync/autoupdate.py", line 48, in process
    added = process_time_range(tconnect, nightscout, time_start, time_end, pretend, features=features)
  File "/home/appuser/tconnectsync/process.py", line 102, in process_time_range
    csvdata = tconnect.ws2.therapy_timeline_csv(time_start, time_end)
  File "/home/appuser/tconnectsync/api/ws2.py", line 83, in therapy_timeline_csv
    req_text = self.get('therapytimeline2csv/%s/%s/%s?format=csv' % (self.userGuid, startDate, endDate), timeout=10)
TypeError: get() got an unexpected keyword argument 'timeout'
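
For context, the exception itself is a plain Python signature mismatch: therapy_timeline_csv passes timeout=10 to an internal get() wrapper whose signature does not accept that keyword. Below is a minimal sketch of that pattern; the class body, base URL, and method signatures are illustrative assumptions, not the actual tconnectsync code.

import requests

class WS2Api:
    # Placeholder base URL for illustration only; not the real WS2 endpoint.
    BASE_URL = "https://example.invalid/ws2/"

    def get(self, endpoint):
        # The wrapper only accepts the endpoint fragment, so any extra keyword
        # such as timeout=10 raises:
        #   TypeError: get() got an unexpected keyword argument 'timeout'
        return requests.get(self.BASE_URL + endpoint).text

    def therapy_timeline_csv(self, start, end):
        # The caller passes timeout=10, which the wrapper above cannot accept.
        return self.get('therapytimeline2csv/%s/%s?format=csv' % (start, end), timeout=10)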

I'm trying to pull the BASAL and BOLUS data.

40aea81 might fix it. The reason this didn't happen until now is that the standard Tandem APIs likely either failed or timed out, which resulted in falling back on the WS2 API.
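
For anyone hitting this on older versions, the usual remedy for this kind of mismatch is to let the wrapper accept and forward extra keyword arguments to requests.get. The sketch below shows that pattern under the same illustrative assumptions as above; the actual change in 40aea81 may be structured differently.

import requests

class WS2Api:
    # Placeholder base URL for illustration only; not the real WS2 endpoint.
    BASE_URL = "https://example.invalid/ws2/"

    def get(self, endpoint, **kwargs):
        # Extra keywords (e.g. timeout=10) now pass straight through to
        # requests.get, so a slow WS2 request fails fast instead of hanging.
        response = requests.get(self.BASE_URL + endpoint, **kwargs)
        response.raise_for_status()
        return response.text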

Closing this issue since v0.8.4 and above should resolve it going forward.