Telnyx's repositories
telnyx-node
Node SDK for the Telnyx API
telnyx-python
Python SDK for the Telnyx API
libcluster_consul
Consul strategy for libcluster
telnyx-ruby
Telnyx API Ruby Client
telnyx-webrtc-ios
The Telnyx iOS WebRTC Client SDK provides all the functionality you need to start making voice calls from an iPhone.
telnyx-webrtc-android
Telnyx Android WebRTC SDK - Enables real-time communication with WebRTC and Telnyx
flutter-voice-sdk
Telnyx Flutter WebRTC SDK - Enables real-time communication with WebRTC and Telnyx
telnyx-java
Java SDK for the Telnyx API
freeswitch
FreeSWITCH is a Software Defined Telecom Stack enabling the digital transformation from proprietary telecom switches to a versatile software implementation that runs on any commodity hardware. From a Raspberry Pi to a multi-core server, FreeSWITCH can unlock the telecommunications potential of any device.
telnyx-dotnet
.NET SDK for the Telnyx API
demo-node-telnyx
Samples & Examples with Telnyx-Node
telnyx-video-android
Telnyx Android Video SDK - Enables real-time video and audio communication with WebRTC and Telnyx
ai-chatbot
A Node.js backend with the essential building blocks for developing robust AI-powered chatbots, utilizing the Telnyx Inference and Storage API products.
erlang-dirent
An iterative directory listing library for Erlang
telnyx-meet-android
Telnyx Meet Android App
Telnyx-Android-Jetpack-Compose-WebRTC-Sample
A sample Jetpack Compose integration of the Telnyx Android WebRTC library
telnyx-android-video-java
A simple Java Demo App for Telnyx Video
telnyx-prism-mock
A simple service that modifies and routes requests originally intended for telnyx-mock so they can be processed by the Prism mock server
cluster-api-provider-proxmox
Cluster API Provider for Proxmox VE (CAPMOX)
infra-oci-calico-upstream
Cloud native networking and network security
janus-message-sdk
A Kotlin Multiplatform library for Janus messages, used by both Android and iOS
janus-message-sdk-android
An Android library that packages the AAR generated by the janus-message-sdk
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs