hrkfdn / ncspot

Cross-platform ncurses Spotify client written in Rust, inspired by ncmpc and the likes.

ncspot hangs after a period of inactivity

mcookAmazon opened this issue

Describe the bug

I am running ncspot 1.0.0 (312f9ff) on a Raspberry Pi:

Linux raspberrypi 5.10.63-v7l+ #1496 SMP Wed Dec 1 15:58:56 GMT 2021 armv7l GNU/Linux

I control ncspot via MPRIS.

An example command I send to play a track is:
dbus-send --print-reply --dest=org.mpris.MediaPlayer2.ncspot /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.OpenUri "string:https://open.spotify.com/track/3MZn1thdvB7QCYXY1AiTjG"

When I start ncspot, it works fine. If I leave it running for some period of time*, it becomes unresponsive to requests to play tracks. The UI still allows navigation. I'm able to move from tab to tab and I'm able to launch the queue view and go back to the library view. However, if I attempt to run the above dbus-send command, it times out with an error:

pi@raspberrypi:~ $ dbus-send --print-reply --dest=org.mpris.MediaPlayer2.ncspot /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.OpenUri "string:https://open.spotify.com/track/3MZn1thdvB7QCYXY1AiTjG"
Error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

The UI is also unable to play. If I hit ENTER on a track in the library, it does not play the track.

I have also tried to control it via nc:

pi@raspberrypi:~ $ nc -U /run/user/1000/ncspot/ncspot.sock
{"mode":"Stopped","playable":{"type":"Track","id":"4okLKeB83VLZFGGtrMQxpd","uri":"spotify:track:4okLKeB83VLZFGGtrMQxpd","title":"Under Pressure","track_number":13,"disc_number":1,"duration":237520,"artists":["Queen","David Bowie"],"artist_ids":["1dfeR4HaWDbWqFHLkxsg1d","0oSGxfWSnnOXhD2fKuz2Gy"],"album":"Best of Bowie","album_id":"1jdQFC3s8PZUc5i7vovZTv","album_artists":["David Bowie"],"cover_url":"https://i.scdn.co/image/ab67616d0000b273a47e80463147d1877608d56b","url":"https://open.spotify.com/track/4okLKeB83VLZFGGtrMQxpd","added_at":null,"list_index":0,"is_local":false,"is_playable":true}}
play
While ncspot responds with its current state, entering the 'play' command does not return any data.
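
For completeness, here is a rough Rust sketch of the same check without nc. It assumes the socket path above and that ncspot writes its current state as a single JSON line when a client connects, matching the session shown here:

use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixStream;

fn main() -> std::io::Result<()> {
    // Socket path taken from the nc invocation above.
    let mut stream = UnixStream::connect("/run/user/1000/ncspot/ncspot.sock")?;

    // Read the state line ncspot sends on connect.
    let mut reader = BufReader::new(stream.try_clone()?);
    let mut state = String::new();
    reader.read_line(&mut state)?;
    println!("current state: {state}");

    // Send the same plain-text IPC command; during the hang this has no visible effect.
    stream.write_all(b"play\n")?;
    Ok(())
}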

*I have not nailed down the exact time period yet, but I know this occurs within a couple of hours.

Restarting ncspot allows the application to return to working as expected.

To Reproduce
See above.

Expected behavior
The application should continue to play songs after being inactive for arbitrary periods of time.

Kernel, OS, and Hardware information
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 5.10.63-v7l+ #1496 SMP Wed Dec 1 15:58:56 GMT 2021 armv7l GNU/Linux

pi@raspberrypi:~ $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

pi@raspberrypi:~ $ cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 3 (v7l)
BogoMIPS : 270.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3

processor : 1
model name : ARMv7 Processor rev 3 (v7l)
BogoMIPS : 270.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3

processor : 2
model name : ARMv7 Processor rev 3 (v7l)
BogoMIPS : 270.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3

processor : 3
model name : ARMv7 Processor rev 3 (v7l)
BogoMIPS : 270.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3

Hardware : BCM2711
Revision : c03114
Serial : 100000002a538533
Model : Raspberry Pi 4 Model B Rev 1.4

Backtrace/Debug log
Please attach a debug log and backtrace if ncspot has crashed.

Please see the attached stack trace. I believe the crash was induced when I tried to get a song to play by clicking on it with the mouse.

backtrace.log

Instructions on how to capture debug logs: https://github.com/hrkfdn/ncspot#debugging

For backtraces, make sure you run a debug build of ncspot, e.g. by running the command mentioned in the compilation instructions. You can find the latest backtrace at ~/.cache/ncspot/backtrace.log.

Additional context

Most probably a dup of #1257.

Could be a byproduct of cb96f46.

I managed to reproduce it on fresh code: the player becomes completely unresponsive after being paused for a while. One of the threads is blocked reading the token:

(lldb) thread backtrace
  thread #2, name = 'tokio-runtime-w'
    frame #0: 0x00007f10f3fdf629 libc.so.6`syscall + 25
    frame #1: 0x0000562b2d8c5ec4 ncspot`std::thread::park [inlined] std::sys::unix::futex::futex_wait at futex.rs:62:21
    frame #2: 0x0000562b2d8c5e7c ncspot`std::thread::park [inlined] std::sys_common::thread_parking::futex::Parker::park at futex.rs:52:13
    frame #3: 0x0000562b2d8c5e70 ncspot`std::thread::park at mod.rs:1070:9
    frame #4: 0x0000562b2bd35e07 ncspot`std::sync::mpmc::context::Context::wait_until(self=0x00007f10f39f1c78, deadline=core::option::Option<std::time::Instant>::None @ 0x00007f10f39f1a88) at context.rs:139:17
    frame #5: 0x0000562b2b91711a ncspot`std::sync::mpmc::list::Channel<T>::recv::{{closure}}(cx=0x00007f10f39f1c78) at list.rs:444:27
    frame #6: 0x0000562b2be4c79c ncspot`std::sync::mpmc::context::Context::with::{{closure}} at context.rs:50:13
    frame #7: 0x0000562b2be4c6bf ncspot`std::sync::mpmc::context::Context::with::{{closure}}(cell=0x00007f10f39fc678) at context.rs:58:31
    frame #8: 0x0000562b2c020627 ncspot`std::thread::local::LocalKey<T>::try_with(self=0x0000562b2e0d6428, f=std::sync::mpmc::context::{impl#0}::with::{closure_env#1}<std::sync::mpmc::list::{impl#3}::recv::{closure_env#1}<core::option::Option<librespot_core::keymaster::Token>>, ()> @ 0x00007f10f39f1e18) at local.rs:270:16
    frame #9: 0x0000562b2be4a08a ncspot`std::sync::mpmc::context::Context::with(f=<unavailable>) at context.rs:53:9
    frame #10: 0x0000562b2b916fe6 ncspot`std::sync::mpmc::list::Channel<T>::recv(self=0x00007f10ec3dc100, deadline=core::option::Option<std::time::Instant>::None @ 0x00007f10f39f1f20) at list.rs:434:13
    frame #11: 0x0000562b2bdf62dc ncspot`std::sync::mpmc::Receiver<T>::recv(self=0x00007f10f39f2348) at mod.rs:307:43
    frame #12: 0x0000562b2c0449a6 ncspot`std::sync::mpsc::Receiver<T>::recv(self=0x00007f10f39f2348) at mod.rs:849:9
    frame #13: 0x0000562b2bdb10ca ncspot`ncspot::spotify_api::WebApi::update_token(self=0x00007f109800ab70) at spotify_api.rs:96:32
    frame #14: 0x0000562b2baee1d1 ncspot`ncspot::spotify_api::WebApi::api_with_retry(self=0x00007f109800ab70, cb=ncspot::spotify_api::{impl#1}::track::{closure_env#0} @ 0x00007f10f39f2c10) at spotify_api.rs:140:29
    frame #15: 0x0000562b2bdb2f45 ncspot`ncspot::spotify_api::WebApi::track(self=0x00007f109800ab70, track_id="79ZkdfOTnhPbtsODTpxZQ6") at spotify_api.rs:295:9
    frame #16: 0x0000562b2bfec8d3 ncspot`ncspot::mpris::MprisPlayer::metadata::{{closure}}(p=ncspot::model::playable::Playable::Track @ 0x00007f10f39f4420) at mpris.rs:135:21
    frame #17: 0x0000562b2bba1476 ncspot`core::option::Option<T>::and_then(self=core::option::Option<ncspot::model::playable::Playable>::Some @ 0x00007f10f39f4608, f=ncspot::mpris::{impl#4}::metadata::{closure_env#0} @ 0x00007f10f39f4540) at option.rs:1411:24
    frame #18: 0x0000562b2bf951e7 ncspot`ncspot::mpris::MprisPlayer::metadata(self=0x00007f109800ab10) at mpris.rs:129:29
    frame #19: 0x0000562b2bfee3fb ncspot`ncspot::mpris::MprisPlayer::metadata_changed::{{closure}}((null)=0x00007f10f39fa3b0) at mpris.rs:75:1
    frame #20: 0x0000562b2bfea5b0 ncspot`ncspot::mpris::MprisManager::serve::{{closure}}((null)=0x00007f10f39fa3b0) at mpris.rs:518:48
    frame #21: 0x0000562b2bfe9202 ncspot`ncspot::mpris::MprisManager::new::{{closure}}((null)=0x00007f10f39fa3b0) at mpris.rs:487:86
    frame #22: 0x0000562b2bf54b79 ncspot`tokio::runtime::task::core::Core<T,S>::poll::{{closure}}(ptr=0x0000562b2fb43230) at core.rs:328:17
    frame #23: 0x0000562b2bf4f08b ncspot`tokio::runtime::task::core::Core<T,S>::poll [inlined] tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut(self=0x0000562b2fb43230, f=tokio::runtime::task::core::{impl#6}::poll::{closure_env#0}<ncspot::mpris::{impl#0}::new::{async_block_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> @ 0x00007f10f39fa400) at unsafe_cell.rs:16:9
    frame #24: 0x0000562b2bf4f065 ncspot`tokio::runtime::task::core::Core<T,S>::poll(self=0x0000562b2fb43220, cx=core::task::wake::Context @ 0x00007f10f39fa3b0) at core.rs:317:13
    frame #25: 0x0000562b2b98efd1 ncspot`tokio::runtime::task::harness::poll_future::{{closure}} at harness.rs:485:19
    frame #26: 0x0000562b2bc8c4d3 ncspot`<core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once(self=core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<ncspot::mpris::{impl#0}::new::{async_block_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>> @ 0x00007f10f39fa488, (null)=() @ 0x00007f10f39fa487) at unwind_safe.rs:272:9
    frame #27: 0x0000562b2bf1c755 ncspot`std::panicking::try::do_call(data=0x00007f10f39fa528) at panicking.rs:552:40
    frame #28: 0x0000562b2bf2805b ncspot`__rust_try + 27
    frame #29: 0x0000562b2bf15698 ncspot`std::panicking::try(f=core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<ncspot::mpris::{impl#0}::new::{async_block_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>> @ 0x00007f10f39fa548) at panicking.rs:516:19
    frame #30: 0x0000562b2bb7a92a ncspot`std::panic::catch_unwind(f=core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<ncspot::mpris::{impl#0}::new::{async_block_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>> @ 0x00007f10f39fa588) at panic.rs:142:14
    frame #31: 0x0000562b2b988d5e ncspot`tokio::runtime::task::harness::poll_future(core=0x0000562b2fb43220, cx=core::task::wake::Context @ 0x00007f10f39fa698) at harness.rs:473:18
    frame #32: 0x0000562b2b98fd1f ncspot`tokio::runtime::task::harness::Harness<T,S>::poll_inner(self=0x00007f10f39fa740) at harness.rs:208:27
    frame #33: 0x0000562b2b99da83 ncspot`tokio::runtime::task::harness::Harness<T,S>::poll(self=tokio::runtime::task::harness::Harness<ncspot::mpris::{impl#0}::new::{async_block_env#0}, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> @ 0x00007f10f39fa740) at harness.rs:153:15
    frame #34: 0x0000562b2b926d7b ncspot`tokio::runtime::task::raw::poll(ptr=core::ptr::non_null::NonNull<tokio::runtime::task::core::Header> @ 0x00007f10f39fa768) at raw.rs:271:5
    frame #35: 0x0000562b2d1662e7 ncspot`tokio::runtime::task::raw::RawTask::poll(self=tokio::runtime::task::raw::RawTask @ 0x00007f10f39fa788) at raw.rs:201:18
    frame #36: 0x0000562b2d18abe2 ncspot`tokio::runtime::task::LocalNotified<S>::run(self=tokio::runtime::task::LocalNotified<alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> @ 0x00007f10f39fa7a8) at mod.rs:416:9
    frame #37: 0x0000562b2d13d77d ncspot`tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}} at worker.rs:576:13
    frame #38: 0x0000562b2d13d5c4 ncspot`tokio::runtime::scheduler::multi_thread::worker::Context::run_task at coop.rs:107:5
    frame #39: 0x0000562b2d13d52d ncspot`tokio::runtime::scheduler::multi_thread::worker::Context::run_task [inlined] tokio::runtime::coop::budget(f=tokio::runtime::scheduler::multi_thread::worker::{impl#1}::run_task::{closure_env#0} @ 0x00007f10f39fa9d0) at coop.rs:73:5
    frame #40: 0x0000562b2d13d48b ncspot`tokio::runtime::scheduler::multi_thread::worker::Context::run_task(self=0x00007f10f39fadc8, task=tokio::runtime::task::Notified<alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> @ 0x00007f10f39fa970, core=0x0000562b2fad18f0) at worker.rs:575:9
    frame #41: 0x0000562b2d13cc65 ncspot`tokio::runtime::scheduler::multi_thread::worker::Context::run(self=0x00007f10f39fadc8, core=0x0000562b2fad18f0) at worker.rs:526:24
    frame #42: 0x0000562b2d13c8a9 ncspot`tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}} at worker.rs:491:21
    frame #43: 0x0000562b2d1852d0 ncspot`tokio::runtime::context::scoped::Scoped<T>::set(self=0x00007f10f39fc3e0, t=0x00007f10f39fadc0, f=tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure_env#0} @ 0x00007f10f39fac38) at scoped.rs:40:9
    frame #44: 0x0000562b2d19e40b ncspot`tokio::runtime::context::set_scheduler::{{closure}}(c=0x00007f10f39fc3a8) at context.rs:176:26
    frame #45: 0x0000562b2d16c1f2 ncspot`std::thread::local::LocalKey<T>::try_with(self=0x0000562b2e18af48, f=tokio::runtime::context::set_scheduler::{closure_env#0}<(), tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure_env#0}> @ 0x00007f10f39fad68) at local.rs:270:16
    frame #46: 0x0000562b2d16a4cb ncspot`std::thread::local::LocalKey<T>::with(self=0x0000562b2e18af48, f=<unavailable>) at local.rs:246:9
    frame #47: 0x0000562b2d19e384 ncspot`tokio::runtime::context::set_scheduler(v=0x00007f10f39fadc0, f=tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure_env#0} @ 0x00007f10f39fad88) at context.rs:176:9
    frame #48: 0x0000562b2d13c7b1 ncspot`tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}((null)=0x00007f10f39faf20) at worker.rs:486:9
    frame #49: 0x0000562b2d171518 ncspot`tokio::runtime::context::runtime::enter_runtime(handle=0x00007f10f39fafc8, allow_block_in_place=true, f=tokio::runtime::scheduler::multi_thread::worker::run::{closure_env#0} @ 0x00007f10f39faec0) at runtime.rs:65:16
    frame #50: 0x0000562b2d13c53c ncspot`tokio::runtime::scheduler::multi_thread::worker::run(worker=strong=1, weak=0) at worker.rs:478:5
    frame #51: 0x0000562b2d13c3ab ncspot`tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}} at worker.rs:447:45
    frame #52: 0x0000562b2d17324e ncspot`<tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll(self=core::pin::Pin<&mut tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>> @ 0x00007f10f39fb028, _cx=0x00007f10f39fb150) at task.rs:42:21
    frame #53: 0x0000562b2d17a41c ncspot`tokio::runtime::task::core::Core<T,S>::poll::{{closure}}(ptr=0x0000562b2faedb28) at core.rs:328:17
    frame #54: 0x0000562b2d179f0f ncspot`tokio::runtime::task::core::Core<T,S>::poll [inlined] tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut(self=0x0000562b2faedb28, f=tokio::runtime::task::core::{impl#6}::poll::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> @ 0x00007f10f39fb1a0) at unsafe_cell.rs:16:9
    frame #55: 0x0000562b2d179ee5 ncspot`tokio::runtime::task::core::Core<T,S>::poll(self=0x0000562b2faedb20, cx=core::task::wake::Context @ 0x00007f10f39fb150) at core.rs:317:13
    frame #56: 0x0000562b2d156635 ncspot`tokio::runtime::task::harness::poll_future::{{closure}} at harness.rs:485:19
    frame #57: 0x0000562b2d18c0f4 ncspot`<core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once(self=core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>> @ 0x00007f10f39fb228, (null)=() @ 0x00007f10f39fb227) at unwind_safe.rs:272:9
    frame #58: 0x0000562b2d14fd36 ncspot`std::panicking::try::do_call(data=0x00007f10f39fb2c8) at panicking.rs:552:40
    frame #59: 0x0000562b2d15187b ncspot`__rust_try + 27
    frame #60: 0x0000562b2d14e408 ncspot`std::panicking::try(f=core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>> @ 0x00007f10f39fb2e8) at panicking.rs:516:19
    frame #61: 0x0000562b2d1218ab ncspot`std::panic::catch_unwind(f=core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>> @ 0x00007f10f39fb328) at panic.rs:142:14
    frame #62: 0x0000562b2d15608f ncspot`tokio::runtime::task::harness::poll_future(core=0x0000562b2faedb20, cx=core::task::wake::Context @ 0x00007f10f39fb438) at harness.rs:473:18
    frame #63: 0x0000562b2d152739 ncspot`tokio::runtime::task::harness::Harness<T,S>::poll_inner(self=0x00007f10f39fb4e0) at harness.rs:208:27
    frame #64: 0x0000562b2d152417 ncspot`tokio::runtime::task::harness::Harness<T,S>::poll(self=tokio::runtime::task::harness::Harness<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> @ 0x00007f10f39fb4e0) at harness.rs:153:15
    frame #65: 0x0000562b2d16662d ncspot`tokio::runtime::task::raw::poll(ptr=core::ptr::non_null::NonNull<tokio::runtime::task::core::Header> @ 0x00007f10f39fb508) at raw.rs:271:5
    frame #66: 0x0000562b2d1662e7 ncspot`tokio::runtime::task::raw::RawTask::poll(self=tokio::runtime::task::raw::RawTask @ 0x00007f10f39fb528) at raw.rs:201:18
    frame #67: 0x0000562b2d18aca7 ncspot`tokio::runtime::task::UnownedTask<S>::run(self=tokio::runtime::task::UnownedTask<tokio::runtime::blocking::schedule::BlockingSchedule> @ 0x00007f10f39fb558) at mod.rs:453:9
    frame #68: 0x0000562b2d1498f7 ncspot`tokio::runtime::blocking::pool::Task::run(self=tokio::runtime::blocking::pool::Task @ 0x00007f10f39fb588) at pool.rs:159:9
    frame #69: 0x0000562b2d14d549 ncspot`tokio::runtime::blocking::pool::Inner::run(self=0x0000562b2fadcdb0, worker_thread_id=0) at pool.rs:513:17
    frame #70: 0x0000562b2d14d274 ncspot`tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}} at pool.rs:471:13
    frame #71: 0x0000562b2d145216 ncspot`std::sys_common::backtrace::__rust_begin_short_backtrace(f=<unavailable>) at backtrace.rs:154:18
    frame #72: 0x0000562b2d122cd2 ncspot`std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}} at mod.rs:529:17
    frame #73: 0x0000562b2d18c552 ncspot`<core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once(self=<unavailable>, (null)=() @ 0x00007f10f39fb8b7) at unwind_safe.rs:272:9
    frame #74: 0x0000562b2d14f8f3 ncspot`std::panicking::try::do_call(data=0x00007f10f39fb960) at panicking.rs:552:40
    frame #75: 0x0000562b2d15187b ncspot`__rust_try + 27
    frame #76: 0x0000562b2d14f551 ncspot`std::panicking::try(f=<unavailable>) at panicking.rs:516:19
    frame #77: 0x0000562b2d122adf ncspot`std::thread::Builder::spawn_unchecked_::{{closure}} at panic.rs:142:14
    frame #78: 0x0000562b2d122ace ncspot`std::thread::Builder::spawn_unchecked_::{{closure}} at mod.rs:528:30
    frame #79: 0x0000562b2d122f8f ncspot`core::ops::function::FnOnce::call_once{{vtable.shim}}((null)=0x0000562b2faedbd0, (null)=() @ 0x00007f10f39fbb8f) at function.rs:250:5
    frame #80: 0x0000562b2d8d9865 ncspot`std::sys::unix::thread::Thread::new::thread_start [inlined] <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once at boxed.rs:2007:9
    frame #81: 0x0000562b2d8d985d ncspot`std::sys::unix::thread::Thread::new::thread_start [inlined] <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once at boxed.rs:2007:9
    frame #82: 0x0000562b2d8d9856 ncspot`std::sys::unix::thread::Thread::new::thread_start at thread.rs:108:17
    frame #83: 0x00007f10f3f6dff9 libc.so.6`___lldb_unnamed_symbol3612 + 697
    frame #84: 0x00007f10f3fe1688 libc.so.6`___lldb_unnamed_symbol4006 + 7

Partial log output:

[2024-01-23][13:03:25] [ureq::unit] [DEBUG] response 401 to GET https://api.spotify.com/v1/tracks/79ZkdfOTnhPbtsODTpxZQ6?market=from_token
[2024-01-23][13:03:25] [ncspot::spotify_api] [DEBUG] http error: StatusCode(Response[status: 401, status_text: Unauthorized, url: https://api.spotify.com/v1/tracks/79ZkdfOTnhPbtsODTpxZQ6?market=from_token])
[2024-01-23][13:03:25] [ncspot::spotify_api] [DEBUG] token unauthorized. trying refresh..
[2024-01-23][13:03:25] [ncspot::spotify_api] [INFO] Token will expire in -PT396.007409498S, renewing

cc @ThomasFrans

I don't know what would cause this bug in the commit where I changed the channel. Functionally it should be the same. That commit was also made after the original two issues, so there might be another bug in the token refresh code. I'll have more time to look into this in a few days.

From a quick glance at the thread backtrace, it does seem like it's blocked trying to receive the token. That would mean the worker doesn't send the token somehow, maybe because there is an issue getting a new one?

I added some logging and I can see that the worker never picks up the command. It stops picking up anything from the queue once update_token is stuck.

As an experiment, I changed token_rx.recv to recv_timeout, and the sequence becomes the following (a small self-contained sketch of the change is shown after the log):

[2024-01-24][16:28:13] [ncspot::spotify_api] [DEBUG] token unauthorized. trying refresh..
[2024-01-24][16:28:13] [ncspot::spotify_api] [INFO] Token will expire in -PT2204.902881682S, renewing
[2024-01-24][16:28:13] [ncspot::spotify_api] [INFO] RequestToken sent to worker, reading response
KS: 5 seconds timeout here. Worker doesn't pick it up
[2024-01-24][16:28:18] [ncspot::spotify_api] [ERROR] Timeout reading token response from the worker!
KS: now worker wakes up and notices the command, but too late!
[2024-01-24][16:28:18] [ncspot::spotify_worker] [INFO] worker: token requested
[2024-01-24][16:28:18] [ncspot::spotify_worker] [INFO] worker: requesting token from "hm://keymaster/token/authenticated?client_id=xxx&scope=user-read-private,playlist-read-private,playlist-read-collaborative,playlist-modify-public,playlist-modify-private,user-follow-modify,user-follow-read,user-library-read,user-library-modify,user-top-read,user-read-recently-played"
[2024-01-24][16:28:18] [ncspot::spotify_worker] [INFO] new token received: Token { access_token: "xxx", expires_in: 3600, token_type: "Bearer", scope: ["user-read-private", "playlist-read-private", "playlist-read-collaborative", "playlist-modify-public", "playlist-modify-private", "user-follow-modify", "user-follow-read", "user-library-read", "user-library-modify", "user-top-read", "user-read-recently-played"] }
KS: worker attempts to send the token back to an already disconnected sender
[2024-01-24][16:28:18] [ncspot::spotify_worker] [ERROR] can't send result to the sender: SendError { .. }
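
For reference, here is a self-contained sketch of roughly what that experiment does and why the worker then hits the SendError. The identifiers (token_rx, token_tx) and the 6-second delay are placeholders that simulate a worker waking up too late; this is not the actual ncspot code:

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (token_tx, token_rx) = mpsc::channel::<String>();

    // Simulated worker: it only gets around to answering after the caller has
    // already given up, as in the log above.
    let worker = thread::spawn(move || {
        thread::sleep(Duration::from_secs(6));
        if let Err(e) = token_tx.send("fresh-token".into()) {
            eprintln!("can't send result to the sender: {e:?}");
        }
    });

    // update_token with recv_timeout instead of recv: give up after 5 seconds
    // instead of parking the thread forever.
    match token_rx.recv_timeout(Duration::from_secs(5)) {
        Ok(token) => println!("new token received: {token}"),
        Err(e) => eprintln!("Timeout reading token response from the worker! ({e})"),
    }
    drop(token_rx); // the caller moves on; the late send above now fails with SendError

    worker.join().unwrap();
}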

This is one of those issues I'd like to bisect, but the problem is that I don't know of a way to reliably trigger a token refresh. Something is clearly going wrong with the token refresh in the worker thread, yet when looking at the diff of cb96f46 I don't see at all what might have introduced a bug. I double-checked online, and a std::sync::mpsc channel should work for sending data from an asynchronous context to a synchronous one. I still don't fully understand Rust async, but I'm wondering whether blocking on the receiving end of the asynchronous channel caused the Tokio runtime to wake up and perform one 'step' of the worker thread (the select!() at src/spotify_worker.rs:99). I somehow feel like, for some reason, the asynchronous runtime is not waking up when block_on() is not used.
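
To make that suspicion concrete, here is a minimal, self-contained sketch of the hazard. It is not ncspot's actual code: the channel layout and names are made up, and it uses a current_thread runtime so the stall is deterministic, whereas the backtrace above shows a multi_thread scheduler where it would depend on scheduling:

use std::sync::mpsc as std_mpsc;
use tokio::sync::mpsc as tokio_mpsc;

fn main() {
    let rt = tokio::runtime::Builder::new_current_thread()
        .build()
        .unwrap();

    // Commands flow over an async channel (API -> worker); the reply flows back
    // over a synchronous std channel, mirroring the request/response shape.
    let (cmd_tx, mut cmd_rx) = tokio_mpsc::unbounded_channel::<std_mpsc::Sender<String>>();

    rt.block_on(async {
        // "Worker": would answer a RequestToken-style command with a fresh token.
        tokio::spawn(async move {
            while let Some(reply_tx) = cmd_rx.recv().await {
                let _ = reply_tx.send("fresh-token".into());
            }
        });

        // "update_token": send the request, then block the runtime thread on the
        // synchronous reply. The worker task above never gets polled, so this
        // recv() never returns -- the same shape as the hang in the backtrace.
        let (reply_tx, reply_rx) = std_mpsc::channel::<String>();
        cmd_tx.send(reply_tx).unwrap();
        let token = reply_rx.recv();
        println!("got {token:?}"); // never reached
    });
}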

I looked in my cache directory and, sure enough, I have the same backtrace, even though I don't remember ncspot crashing in the last few days. I have removed the backtrace file and I'll see what happens today after suspending a bunch. If the same backtrace is generated again, I'll investigate further. Otherwise it might be smart to revert that commit for now, as I have no clue what is causing this.

I found that I'm running a version older than the one that includes cb96f46, and I also have the hanging issue.

$ ncspot --version
version: ncspot 1.0.0 (92e0852)
(lldb) thread backtrace
* thread #1, name = 'ncspot', stop reason = signal SIGSTOP
  * frame #0: 0x000077e68b9ea73d libc.so.6`syscall + 29
    frame #1: 0x0000586513ecf969 ncspot`parking_lot::condvar::Condvar::wait_until_internal::h35520569ad03c231 + 601
    frame #2: 0x0000586513b5b8a0 ncspot`ncspot::spotify_api::WebApi::update_token::h9077df7685138a36 + 3792
    frame #3: 0x0000586513bc3b6f ncspot`ncspot::ui::search_results::SearchResultsView::new::h7cf3acbb6c9e0cc7 + 3215
    frame #4: 0x0000586513bc2d59 ncspot`ncspot::ui::search::SearchView::new::_$u7b$$u7b$closure$u7d$$u7d$::hd5ce9cfb44cb6530 + 169
    frame #5: 0x0000586513ce75ee ncspot`cursive_core::cursive::Cursive::on_event::he296aaff3a0b2024 + 3358
    frame #6: 0x00005865138b619b ncspot`cursive_core::cursive_run::CursiveRunner$LT$C$GT$::step::h743facc17529f4a9 + 155
    frame #7: 0x0000586513bdbdfb ncspot`ncspot::main::hbf528be32758e872 + 12091
    frame #8: 0x00005865139208ef ncspot`std::sys_common::backtrace::__rust_begin_short_backtrace::h10db1421e64c3eef + 15
    frame #9: 0x0000586513c07bdf ncspot`main + 1119
    frame #10: 0x000077e68b903cd0 libc.so.6`___lldb_unnamed_symbol3187 + 128
    frame #11: 0x000077e68b903d8a libc.so.6`__libc_start_main + 138
    frame #12: 0x00005865138722f5 ncspot`_start + 37

That more or less rules out the channel change as the cause. I remember having these hangs before as well; I just thought they were caused by my VPN. I'll try to look further into this. Maybe @hrkfdn has an idea about what could be going wrong with the worker and the RequestToken command, as I don't fully understand the web API and token refresh code.

For the record, there's no crash (and no backtraces); the worker just hangs. I have a pretty reliable way to reproduce it, so let me know if you have any experimental patches or need extra logging.