
A High-Performance Logs, Metrics, & Events Router

Website: https://vector.dev

Chat/Forum  •  Mailing List  •  Install


Vector

Vector is a high-performance observability data router. It makes collecting, transforming, and sending logs, metrics, and events easy. It decouples data collection & routing from your services, giving you control and data ownership, among many other benefits.

Built in Rust, Vector places a high value on performance, correctness, and operator friendliness. It compiles to a single static binary and is designed to be deployed across your entire infrastructure, serving both as a lightweight agent and a highly efficient service, making the process of getting data from A to B simple and unified.
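
To make the pipeline model concrete, here is a minimal sketch of a Vector configuration wiring a source through a transform into a sink. The file path, component names, bucket, and region are illustrative assumptions, not values from this document; consult the documentation for each component's full option set.

```toml
# /etc/vector/vector.toml — a hypothetical minimal pipeline.
# Run with: vector --config /etc/vector/vector.toml

# Tail application log files and emit one log event per line.
[sources.app_logs]
  type    = "file"
  include = ["/var/log/app/*.log"]

# Parse each event's "message" field as JSON into structured fields.
[transforms.parsed]
  type         = "json_parser"
  inputs       = ["app_logs"]
  drop_invalid = true   # discard events that are not valid JSON

# Batch the structured events into an S3 bucket (name is illustrative).
[sinks.archive]
  type   = "aws_s3"
  inputs = ["parsed"]
  bucket = "my-log-archive"
  region = "us-east-1"
```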

Features

Performance

| Test | Vector | Filebeat | FluentBit | FluentD | Logstash | Splunk UF | Splunk HF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TCP to Blackhole | 86 MiB/s | n/a | 64.4 MiB/s | 27.7 MiB/s | 40.6 MiB/s | n/a | n/a |
| File to TCP | 76.7 MiB/s | 7.8 MiB/s | 35 MiB/s | 26.1 MiB/s | 3.1 MiB/s | 40.1 MiB/s | 39 MiB/s |
| Regex Parsing | 13.2 MiB/s | n/a | 20.5 MiB/s | 2.6 MiB/s | 4.6 MiB/s | n/a | 7.8 MiB/s |
| TCP to HTTP | 26.7 MiB/s | n/a | 19.6 MiB/s | <1 MiB/s | 2.7 MiB/s | n/a | n/a |
| TCP to TCP | 69.9 MiB/s | 5 MiB/s | 67.1 MiB/s | 3.9 MiB/s | 10 MiB/s | 70.4 MiB/s | 7.6 MiB/s |

To learn more about our performance tests, please see the Vector test harness.

Correctness

The test harness also evaluates each of the same tools for correctness across the following scenarios:

- Disk buffer persistence
- File rotation (create)
- File rotation (copytruncate)
- File truncation
- Process restarts (SIGHUP)
- Wrapped JSON parsing

To learn more about our correctness tests, please see the Vector test harness.

Installation

Run the following in your terminal, then follow the on-screen instructions.

```sh
curl https://sh.vector.dev -sSf | sh
```

Or view platform-specific installation instructions.

Sources

| Name | Description |
| --- | --- |
| file | Ingests data through one or more local files and outputs log events. |
| journald | Ingests data through log records from journald and outputs log events. |
| kafka | Ingests data through Kafka 0.9 or later and outputs log events. |
| statsd | Ingests data through the StatsD UDP protocol and outputs metric events. |
| stdin | Ingests data through standard input (STDIN) and outputs log events. |
| syslog | Ingests data through the Syslog protocol (RFC 5424) and outputs log events. |
| tcp | Ingests data through the TCP protocol and outputs log events. |
| udp | Ingests data through the UDP protocol and outputs log events. |
| vector | Ingests data through another upstream Vector instance and outputs log and metric events. |

+ request a new source
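
As an example of how a source is declared, the sketch below configures a syslog listener. The component name, mode, and address are illustrative assumptions.

```toml
# Hypothetical syslog source accepting RFC 5424 messages over TCP.
[sources.edge_syslog]
  type    = "syslog"
  mode    = "tcp"
  address = "0.0.0.0:514"
```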

Transforms

| Name | Description |
| --- | --- |
| add_fields | Accepts log events and allows you to add one or more log fields. |
| add_tags | Accepts metric events and allows you to add one or more metric tags. |
| coercer | Accepts log events and allows you to coerce log fields into fixed types. |
| field_filter | Accepts log and metric events and allows you to filter events by a log field's value. |
| grok_parser | Accepts log events and allows you to parse a log field value with Grok. |
| json_parser | Accepts log events and allows you to parse a log field value as JSON. |
| log_to_metric | Accepts log events and allows you to convert logs into one or more metrics. |
| lua | Accepts log events and allows you to transform events with a full embedded Lua engine. |
| regex_parser | Accepts log events and allows you to parse a log field's value with a regular expression. |
| remove_fields | Accepts log events and allows you to remove one or more log fields. |
| remove_tags | Accepts metric events and allows you to remove one or more metric tags. |
| sampler | Accepts log events and allows you to sample events with a configurable rate. |
| split | Accepts log events and allows you to split a field's value on a given separator and zip the tokens into ordered field names. |
| tokenizer | Accepts log events and allows you to tokenize a field's value by splitting on whitespace, ignoring special wrapping characters, and zipping the tokens into ordered field names. |

+ request a new transform
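
To illustrate the shape of a transform, here is a hedged sketch of a regex_parser pulling fields out of a plain-text line. The input name, field, and pattern are assumptions chosen for the example.

```toml
# Hypothetical: extract "level" and "message" from lines like "INFO some text".
# Named capture groups become fields on the log event.
[transforms.level_parser]
  type   = "regex_parser"
  inputs = ["app_logs"]
  field  = "message"
  regex  = '^(?P<level>\w+) (?P<message>.*)$'
```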

Sinks

| Name | Description |
| --- | --- |
| aws_cloudwatch_logs | Batches log events to AWS CloudWatch Logs via the PutLogEvents API endpoint. |
| aws_kinesis_streams | Batches log events to AWS Kinesis Data Streams via the PutRecords API endpoint. |
| aws_s3 | Batches log events to AWS S3 via the PutObject API endpoint. |
| blackhole | Streams log and metric events to a blackhole that simply discards data, designed for testing and benchmarking purposes. |
| clickhouse | Batches log events to ClickHouse via the HTTP interface. |
| console | Streams log and metric events to the console (STDOUT or STDERR). |
| elasticsearch | Batches log events to Elasticsearch via the _bulk API endpoint. |
| file | Streams log events to a file. |
| http | Batches log events to a generic HTTP endpoint. |
| kafka | Streams log events to Apache Kafka via the Kafka protocol. |
| prometheus | Exposes metric events for scraping by a Prometheus server. |
| splunk_hec | Batches log events to a Splunk HTTP Event Collector. |
| tcp | Streams log events to a TCP connection. |
| vector | Streams log events to another downstream Vector instance. |

+ request a new sink
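
The sketch below shows a sink declaration using the elasticsearch sink; the input name, host, and index are illustrative assumptions.

```toml
# Hypothetical: batch parsed events into a local Elasticsearch node.
# The index is expanded per event, so "vector-%F" yields one index per day.
[sinks.search]
  type   = "elasticsearch"
  inputs = ["level_parser"]
  host   = "http://localhost:9200"
  index  = "vector-%F"
```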

License

Copyright 2019, Vector Authors. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


Developed with ❤️ by Timber.io
