epacke / logstash-pipeline-tester

Tool for testing logstash pipelines

Home Page: https://loadbalancing.se/2020/03/11/logstash-pipeline-tester/

pipeline to pipeline returns no output in the UI

anubisg1 opened this issue · comments

I am testing a pipeline-to-pipeline scenario.

These are the two pipelines:

Input pipeline (filter section omitted for brevity)

input {
  udp {
    port => "8515"
    type => "syslog"
  }
  
  tcp {
    port => "8515"
    type => "syslog"
  }
}

filter { ... }

output {
  if "cisco_router" in [tags] {
    pipeline {
      send_to => [cisco_router]
    }
  }

  if "cisco_switch" in [tags] {
    pipeline {
      send_to => [cisco_switch]
    }
  }

  if "cisco_nexus" in [tags] {
    pipeline {
      send_to => [cisco_nexus]
    }
  }
}

processing pipeline (filter section omitted for brevity)

input {
  pipeline {
    address => cisco_switch
  }
}

filter { ... }

output {
  http {
    format => "json"
    http_method => "post"
    url => "http://pipeline-ui:8080/api/v1/receiveLogstashOutput"
  }
  #pipeline { send_to => [enrichments] }
}

This is pipelines.conf

- pipeline.id: classification
  path.config: "/usr/share/logstash/config/inputs/main-class/classification.conf"
- pipeline.id: cisco_switch
  path.config: "/usr/share/logstash/config/processors/syslog_audit_cisco.switch.conf"
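
Note that the classification pipeline above also sends to the cisco_router and cisco_nexus addresses, but only cisco_switch is registered here. Since the pipeline output defaults to ensure_delivery => true, events routed to an address with no running downstream pipeline will make the sender block. The missing entries would look roughly like this (the path.config values are hypothetical placeholders, not taken from the actual setup):

```yaml
# Hypothetical additional entries for pipelines.conf.
# The path.config values below are placeholders.
- pipeline.id: cisco_router
  path.config: "/usr/share/logstash/config/processors/cisco_router.conf"
- pipeline.id: cisco_nexus
  path.config: "/usr/share/logstash/config/processors/cisco_nexus.conf"
```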

In the docker-compose file I am mounting ./logstash/logstash-config/inputs:/usr/src/pipeline as a volume (as I understand it, the UI looks for pipelines only inside the /usr/src/pipeline folder).
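
For reference, the relevant part of the compose file looks roughly like this (the service name is an assumption based on the description above):

```yaml
# Sketch of the compose volume mount — service name is an assumption.
services:
  logstash:
    volumes:
      # The UI only discovers pipelines mounted under /usr/src/pipeline
      - ./logstash/logstash-config/inputs:/usr/src/pipeline
```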

This is the behaviour I see:

  1. If logs are sent to Logstash externally, for example from a real device instead of the UI, then the UI shows the output of the second/final pipeline. This behaviour is, in my opinion, expected.
  2. If I send logs from the UI, I see nothing in the web UI, even though /api/v1/sendLogLines returns 200.

I then tested the following:

  1. Added the http output below to the first pipeline as an additional output:
  http {
    format => "json"
    http_method => "post"
    url => "http://pipeline-ui:8080/api/v1/receiveLogstashOutput"
  }
  2. Sent logs externally: in the UI I see the output of the first pipeline AND of the second pipeline (again, this is expected).
  3. Sent logs from the UI: I see ONLY the output from the first pipeline. It looks like the UI only listens to messages coming from the first pipeline, and not from the second, when messages are sent from the UI. I can't see a specific reason why.

I'm on the clock for a customer now so I can't help. I will look at this sometime this week. I think the backend could use some love in terms of verbosity; it pretty much always returns a 200. 😆

Check out the new UI btw?

You just pushed it, that's not fair :D :D So let me do that.

It's fixed with the new UI.. closing as fixed!

Actually, I might need to re-open this. It might be related to TCP vs UDP input in the first pipeline, specifically the fact that when using UDP a host.ip field is generated, while when using TCP it isn't. This would cause the first pipeline to have no "output".

This might be a Logstash problem, or a backend socket problem, so I am providing more info now.

The full first pipeline is the following:

input {
  udp {
    port => "8515"
    type => "syslog"
  }
  
  tcp {
    port => "8515"
    type => "syslog"
  }
}


filter {

  # a. lookup DB
  translate {
    id => "filter-ingress"
    source => "[host][ip]"
    target => "[tmp][device_info]"
    dictionary_path => "/usr/share/logstash/config/inventory.json"
    refresh_interval => 3000
    fallback => '{"key1":"not_found"}'
  }  

  # b. handle failures first
  # because a "fallback" value can only contain a string, additional processing is done to ensure that failed lookups store values in proper fields
  if [tmp][device_info] == '{"key1":"not_found"}' {
    json { 
      source => "[tmp][device_info]" 
      target => "[tmp][device_info]"
    }
    mutate {
      remove_field => ["[tmp]"]
      add_tag => "unknown_device"
    }
  }


  # c. add proper fields from the translated [tmp] object
  if [tmp][device_info] {
    mutate {
      add_field => { "[observer][hostname]" => "%{[tmp][device_info][hostname]}" }
      add_field => { "[cloud][account][name]" => "%{[tmp][device_info][customer]}" }
      add_field => { "[geo][city_name]" => "%{[tmp][device_info][site]}" }
      add_field => { "[geo][country_name]" => "%{[tmp][device_info][country]}" }
    }

  # d. tag device types
    if [tmp][device_info][dev_type] == "cisco_router" {
      mutate {
        remove_field => ["[tmp][device_info][dev_type]"]
        add_tag => "cisco_router"
      }
    }

    if [tmp][device_info][dev_type] == "cisco_switch" {
      mutate {
        remove_field => ["[tmp][device_info][dev_type]"]
        add_tag => "cisco_switch"
      }
    }
 
    if [tmp][device_info][dev_type] == "cisco_nexus" {
      mutate {
        remove_field => ["[tmp][device_info][dev_type]"]
        add_tag => "cisco_nexus"
      }
    }

    if [tmp][device_info][dev_type] == "f5_ltm" {
      mutate {
        remove_field => ["[tmp][device_info][dev_type]"]
        add_tag => "f5_ltm"
      }
    }
    if [tmp][device_info][dev_type] == "checkpoint_fw" {
      mutate {
        remove_field => ["[tmp][device_info][dev_type]"]
        add_tag => "checkpoint_fw"
      }
    }
    if [tmp][device_info][dev_type] == "linux" {
      mutate {
        remove_field => ["[tmp][device_info][dev_type]"]
        add_tag => "linux"
      }
    }

    # e. remove the temporary [tmp] object and the [event] field
    mutate {
      remove_field => [ "[tmp]", "[event]" ]
    }
  } # if tmp.device_info exists

} # filter

output {

  # Enable for debug only
  http {
    format => "json"
    http_method => "post"
    url => "http://pipeline-ui:8080/api/v1/receiveLogstashOutput"
  }

  if "cisco_router" in [tags] {
    pipeline {
      send_to => [cisco_router]
    }
  }

  if "cisco_switch" in [tags] {
    pipeline {
      send_to => [cisco_switch]
    }
  }

  if "cisco_nexus" in [tags] {
    pipeline {
      send_to => [cisco_nexus]
    }
  }

  if "f5_ltm" in [tags] {
    pipeline {
      send_to => [f5_ltm]
    }
  }

  if "checkpoint_fw" in [tags] {
    pipeline {
      send_to => [checkpoint_fw]
    }
  }
}
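
For context, an inventory.json entry consistent with the translate lookups above would look roughly like this (the values are reconstructed from the UDP output shown next, so treat this as a sketch rather than the real file):

```json
{
  "172.20.0.2": {
    "hostname": "Andrea_VM",
    "customer": "Andrea",
    "site": "Brno",
    "country": "Czech Republic",
    "dev_type": "cisco_switch"
  }
}
```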

When sending to the UDP input I get the following back:

{
  "message": "test\n",
  "tags": [
    "cisco_switch"
  ],
  "cloud": {
    "account": {
      "name": "Andrea"
    }
  },
  "@timestamp": "2022-11-22T12:24:23.174359146Z",
  "host": {
    "ip": "172.20.0.2"
  },
  "observer": {
    "hostname": "Andrea_VM"
  },
  "type": "syslog",
  "@version": "1",
  "geo": {
    "city_name": "Brno",
    "country_name": "Czech Republic"
  }
}

When sending to the TCP input I get instead:

{
  "@version": "1",
  "message": "test",
  "@timestamp": "2022-11-22T12:27:39.880650921Z",
  "type": "syslog",
  "event": {
    "original": "test"
  }
}
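
A quick way to compare what each input actually produces, including hidden fields, is a temporary debug output that also prints @metadata (with ECS compatibility enabled, the TCP input stores the peer address under [@metadata][input][tcp][source] rather than in [host]):

```conf
output {
  # Temporary debug output: metadata => true makes the normally
  # hidden @metadata fields visible in the printed event.
  stdout { codec => rubydebug { metadata => true } }
}
```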

It's a pipeline problem then ...

Fixed with:

filter {
  # Fix for the TCP input plugin: copy the peer IP from @metadata into [host][ip]
  if [@metadata][input][tcp][source] and ![host] {
    mutate {
      copy => {
        "[@metadata][input][tcp][source][ip]"   => "[host][ip]"
      }
    }
  }
}

The best bugs are those that resolve themselves. Nice troubleshooting, buddy!

thank you for the updated UI :)

No problem! One more update coming tonight. Then I think I'm done for a while longer. :)