gklijs / schema_registry_converter

A crate to convert bytes to something more usable, and the other way around, in a way compatible with the Confluent Schema Registry. Supports Avro, Protobuf, and JSON Schema, with both async and blocking APIs.

panicked when working with tokio async

undeflife opened this issue

When using reqwest to request the schema registry with tokio async, like in my code below, the warp main thread (or an async block created with tokio::spawn) will exit:

use warp::Filter;
use schema_registry_converter::schema_registry::{SrSettings};

#[tokio::main]
async fn main() {
    let hello = warp::path!("hello" / String)
        .map(|name| format!("Hello, {}!", name));
    let sr_settings = SrSettings::new(String::from("http://localhost:8081/"));

    warp::serve(hello)
        .run(([127, 0, 0, 1], 3030))
        .await;
}
thread 'main' panicked at 'Cannot drop a runtime in a context where blocking is not allowed. This happens when a runtime is dropped from within an asynchronous context.', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.22/src/runtime/blocking/shutdown.rs:49:21
stack backtrace:
   0:        0x1083da9ee - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h72782cdbf82d2e78
   1:        0x1083fc1ac - core::fmt::write::h0f2c225c157771c1
   2:        0x1083d52d9 - std::io::Write::write_fmt::h6d219fc26cb45a24
   3:        0x1083dc865 - std::panicking::default_hook::{{closure}}::hde29d026f53869b1
   4:        0x1083dc5a2 - std::panicking::default_hook::h5de23f27de9ce8ce
   5:        0x1083dcdc5 - std::panicking::rust_panic_with_hook::h720143ee15fc80ba
   6:        0x10840664e - std::panicking::begin_panic::h80d64999a84f0366
   7:        0x108297874 - tokio::runtime::blocking::shutdown::Receiver::wait::hc15bfd5c68b76ca0
   8:        0x1082cbfc5 - tokio::runtime::blocking::pool::BlockingPool::shutdown::h702cd79db6adf80b
   9:        0x1082cc07d - <tokio::runtime::blocking::pool::BlockingPool as core::ops::drop::Drop>::drop::h0ab515030d41ffa8
  10:        0x1082d6215 - core::ptr::drop_in_place::h1df0de4e6c94e3fe
  11:        0x1082d8252 - core::ptr::drop_in_place::h94c49217ab7681ef
  12:        0x10801a738 - reqwest::blocking::wait::enter::he7b4b5e4e35343bf
  13:        0x108019bd7 - reqwest::blocking::wait::timeout::h7234a2ddbbcb0534
  14:        0x10803b703 - reqwest::blocking::client::ClientHandle::new::hd976e58a3aac69e4
  15:        0x10803ae1d - reqwest::blocking::client::ClientBuilder::build::h18af5d75937efe22
  16:        0x10803aece - reqwest::blocking::client::Client::new::h09aac586c8993277
  17:        0x107eff6ea - schema_registry_converter::schema_registry::SrSettings::new::h3b74a91814783f96
  18:        0x107d0a3f3 - my_web::main::{{closure}}::hbb912dd7df4aeb4d
  19:        0x107ca4120 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::ha2815f8e7aca5dc1
  20:        0x107c8865d - tokio::runtime::enter::Enter::block_on::{{closure}}::h6b5e9b23e5c4d655
  21:        0x107c8e506 - tokio::coop::with_budget::{{closure}}::hb18b9d07d0e693a7
  22:        0x107cebdee - std::thread::local::LocalKey<T>::try_with::ha9811a7840d57628
  23:        0x107ceb4ac - std::thread::local::LocalKey<T>::with::hcb4e45ba7e69edaa
  24:        0x107c884ad - tokio::runtime::enter::Enter::block_on::h8dea6c389ed18880
  25:        0x107d11315 - tokio::runtime::thread_pool::ThreadPool::block_on::h274b9930f19cc162
  26:        0x107ca76e8 - tokio::runtime::Runtime::block_on::{{closure}}::hf60396b6d86ee761
  27:        0x107cc90b8 - tokio::runtime::context::enter::hcb90097146788eb6
  28:        0x107c653db - tokio::runtime::handle::Handle::enter::h7963a1ebbf377697
  29:        0x107ca763d - tokio::runtime::Runtime::block_on::h61ecdca8fd87f99b
  30:        0x107cb24cc - my_web::main::h372f3d2471b960b7
  31:        0x107d1171e - std::rt::lang_start::{{closure}}::h6532f1318acae0d5
  32:        0x1083dd14f - std::rt::lang_start_internal::hbbd10965adc92ae7
  33:        0x107d11701 - std::rt::lang_start::ha5adc3b371471675
  34:        0x107cb2532 - main

Currently it's all blocking/sync. Mainly because caching will be hard. Also, I don't have much experience using async yet.

So the reqwest client inside SrSettings is the blocking one. I don't really understand what you're trying to do.

Something I could do, at least for the schema_registry module, is offer async calls by using the async reqwest client.

My main concern is with the converters. Kafka is really fast, so if I get a million messages with the same id in a second, and the SR calls also take a second, I don't want to do that call a million times.

Most likely there are solutions to this, I just haven't taken a look at them yet. I'm trying to finalise the 2.0.0 release, so it's probably good to mention the current version is blocking.
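The repeated-lookup concern above can be addressed with a simple per-id cache: the first lookup for an id does the registry call, every later lookup for the same id is served from a map. A minimal std-only sketch of the idea (the `Schema` and `SchemaCache` types here are illustrative stand-ins, not the crate's actual API):

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Mutex;

// Stand-in for a schema fetched from the registry.
#[derive(Clone)]
struct Schema {
    id: u32,
    definition: String,
}

// Per-id cache: only a cache miss triggers the (simulated) registry call.
struct SchemaCache {
    schemas: Mutex<HashMap<u32, Schema>>,
    fetches: AtomicU32, // counts how often the "registry" was actually hit
}

impl SchemaCache {
    fn new() -> Self {
        SchemaCache {
            schemas: Mutex::new(HashMap::new()),
            fetches: AtomicU32::new(0),
        }
    }

    fn get(&self, id: u32) -> Schema {
        let mut map = self.schemas.lock().unwrap();
        map.entry(id)
            .or_insert_with(|| {
                // In the real crate this would be the HTTP call to Schema Registry.
                self.fetches.fetch_add(1, Ordering::SeqCst);
                Schema {
                    id,
                    definition: format!("schema-{}", id),
                }
            })
            .clone()
    }
}

fn main() {
    let cache = SchemaCache::new();
    // A burst of messages that all reference the same schema id...
    for _ in 0..1_000_000 {
        let _schema = cache.get(42);
    }
    // ...only hits the "registry" once.
    println!("registry fetches: {}", cache.fetches.load(Ordering::SeqCst));
}
```

An async version would need a little more care (e.g. not holding a lock across an `.await`, and deduplicating concurrent misses for the same id), but the basic shape is the same.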

I'd love to hear how you think about those things.

What I'm trying to do is build an http server using warp that puts all requests into a channel, and then, with an async task, produce those requests into Kafka.

Ok, but are you using Schema Registry? In that case it would be nice if you could use this library for some of the stuff related to that.

Just to be sure: this library is not sending anything to Kafka, you need either https://crates.io/crates/rdkafka or https://crates.io/crates/Kafka for that. Now that rdkafka is async by default, it would probably make sense to have async converters as well. But at this point I really don't know how much work that would be.

Sure. I'm trying to use this crate to encode messages to Avro.

Ok, I can't promise anything, and I don't think I want it to be part of 2.0.0. But it seems like a valid use case with rdkafka being async by default. And since the http calls already use reqwest, I don't see any problems besides the cache. But that seems like a problem that has likely been solved already, so I'm definitely going to look into it.

That's great, and thanks for your work.

Thanks. At the very least I want to be confident enough that I won't need to break the api to implement it. So I'm thinking of adding async support to at least the schema_registry module, and having the converters return an error when used with async SrSettings. The api would then be something like:

let sr_settings = SrSettings::new_builder(String::from("https://condluent.cloud.url"))
    .async()
    .build()
    .unwrap();

to create the async version. I'm not convinced async as the default is a good idea, but I'll give it some thought.

I decided to bite the bullet and split the code into blocking and async, much like reqwest has done. I'm also going to default to async, as that seems to make the most sense since both reqwest and rdkafka are async by default. I'm not sure how much will be available as async when 2.0.0 comes out, but at least this way I don't need breaking changes when adding more async features later on.

So unlike my last message, it will probably be the case that you need to add the feature "blocking" to get access to the blocking api, and everything else will be async. But like I said, the initially available async api will likely be minimal, since I really want to have 2.0.0 out before Kafka Summit.

@undeflife Could you maybe check whether you can use it async now? I just merged the async Avro encoder/decoder. It should be async by default, so you should only need the feature avro, and then be able to use the AvroEncoder and AvroDecoder with async implementations.
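Based on the feature names mentioned in this thread ("avro" for the async encoder/decoder, "blocking" for the blocking api), the dependency declaration would presumably look something like:

```toml
[dependencies]
# "avro" enables the (async by default) AvroEncoder/AvroDecoder;
# add "blocking" only if you also need the blocking api.
schema_registry_converter = { version = "2.0.0", features = ["avro"] }
```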

Works as expected! Thanks.