unfetter-discover / unfetter-analytic


Some CAR rules fail because of the use of the nonexistent `exe` field

2xyo opened this issue · comments

commented

Some CAR rules fail because they reference the nonexistent exe field:

analytic-system    | Reg.exe called from Command Shell
analytic-system    | 
analytic-system    | CAR Number: CAR_2013_03_001
analytic-system    | 
analytic-system    | 
analytic-system    | Registry modifications are often essential in establishing persistence via known Windows mechanisms. Many legitimate modifications are done graphically via regedit.exe or by using the corresponding channels, or even calling the Registry APIs directly.  The built-in utility reg.exe provides a command-line interface to the registry, so that queries and modifications can be performed from a shell, such as cmd.exe. When a user is responsible for these actions, the parent of cmd.exe will likely be explorer.exe. Occasionally, power users and administrators write scripts that do this behavior as well, but likely from a different process tree. These background scripts must be learned so they can be tuned out accordingly.
analytic-system    | 
analytic-system    | 
analytic-system    | 
analytic-system    | 17/11/24 21:13:19 ERROR Executor: Exception in task 2.0 in stage 1.0 (TID 3)
analytic-system    | org.apache.spark.api.python.PythonException: Traceback (most recent call last):
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 171, in main
analytic-system    |     process()
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 166, in process
analytic-system    |     serializer.dump_stream(func(split_index, iterator), outfile)
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
analytic-system    |     vs = list(itertools.islice(iterator, batch))
analytic-system    |   File "/usr/share/unfetter/src/CAR_2013_03_001.py", line 55, in <lambda>
analytic-system    |     'exe': item[1]["data_model"]["fields"]["exe"],
analytic-system    | KeyError: 'exe'
analytic-system    | 
analytic-system    |    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
analytic-system    |    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
analytic-system    |    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
analytic-system    |    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
analytic-system    |    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
analytic-system    |    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
analytic-system    |    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
analytic-system    |    at org.apache.spark.scheduler.Task.run(Task.scala:99)
analytic-system    |    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
analytic-system    |    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
analytic-system    |    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
analytic-system    |    at java.lang.Thread.run(Thread.java:748)
analytic-system    | 17/11/24 21:13:19 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
analytic-system    | org.apache.spark.api.python.PythonException: Traceback (most recent call last):
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 171, in main
analytic-system    |     process()
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 166, in process
analytic-system    |     serializer.dump_stream(func(split_index, iterator), outfile)
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
analytic-system    |     vs = list(itertools.islice(iterator, batch))
analytic-system    |   File "/usr/share/unfetter/src/CAR_2013_03_001.py", line 55, in <lambda>
analytic-system    |     'exe': item[1]["data_model"]["fields"]["exe"],
analytic-system    | KeyError: 'exe'
analytic-system    | 
analytic-system    |    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
analytic-system    |    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
analytic-system    |    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
analytic-system    |    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
analytic-system    |    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
analytic-system    |    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
analytic-system    |    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
analytic-system    |    at org.apache.spark.scheduler.Task.run(Task.scala:99)
analytic-system    |    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
analytic-system    |    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
analytic-system    |    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
analytic-system    |    at java.lang.Thread.run(Thread.java:748)
analytic-system    | 17/11/24 21:13:19 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
analytic-system    | Traceback (most recent call last):
analytic-system    |   File "/usr/share/unfetter/src/run_unfetter_analytic.py", line 212, in <module>
analytic-system    |     rdd = analytic.analyze(rdd, args.begin, args.end)
analytic-system    |   File "/usr/share/unfetter/src/CAR_2013_03_001.py", line 74, in analyze
analytic-system    |     guid_list = reg_cmd_rdd.map(lambda item: (item[1]['process_guid'])).collect()
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 808, in collect
analytic-system    |   File "/usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
analytic-system    |   File "/usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
analytic-system    | py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
analytic-system    | : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 171, in main
analytic-system    |     process()
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 166, in process
analytic-system    |     serializer.dump_stream(func(split_index, iterator), outfile)
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
analytic-system    |     vs = list(itertools.islice(iterator, batch))
analytic-system    |   File "/usr/share/unfetter/src/CAR_2013_03_001.py", line 55, in <lambda>
analytic-system    |     'exe': item[1]["data_model"]["fields"]["exe"],
analytic-system    | KeyError: 'exe'
analytic-system    | 
analytic-system    |    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
analytic-system    |    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
analytic-system    |    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
analytic-system    |    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
analytic-system    |    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
analytic-system    |    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
analytic-system    |    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
analytic-system    |    at org.apache.spark.scheduler.Task.run(Task.scala:99)
analytic-system    |    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
analytic-system    |    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
analytic-system    |    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
analytic-system    |    at java.lang.Thread.run(Thread.java:748)
analytic-system    | 
analytic-system    | Driver stacktrace:
analytic-system    |    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
analytic-system    |    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
analytic-system    |    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
analytic-system    |    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
analytic-system    |    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
analytic-system    |    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
analytic-system    |    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
analytic-system    |    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
analytic-system    |    at scala.Option.foreach(Option.scala:257)
analytic-system    |    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
analytic-system    |    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
analytic-system    |    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
analytic-system    |    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
analytic-system    |    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
analytic-system    |    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
analytic-system    |    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
analytic-system    |    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
analytic-system    |    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1954)
analytic-system    |    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1968)
analytic-system    |    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
analytic-system    |    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
analytic-system    |    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
analytic-system    |    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
analytic-system    |    at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
analytic-system    |    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
analytic-system    |    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
analytic-system    |    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
analytic-system    |    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
analytic-system    |    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
analytic-system    |    at java.lang.reflect.Method.invoke(Method.java:498)
analytic-system    |    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
analytic-system    |    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
analytic-system    |    at py4j.Gateway.invoke(Gateway.java:280)
analytic-system    |    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
analytic-system    |    at py4j.commands.CallCommand.execute(CallCommand.java:79)
analytic-system    |    at py4j.GatewayConnection.run(GatewayConnection.java:214)
analytic-system    |    at java.lang.Thread.run(Thread.java:748)
analytic-system    | Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 171, in main
analytic-system    |     process()
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 166, in process
analytic-system    |     serializer.dump_stream(func(split_index, iterator), outfile)
analytic-system    |   File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
analytic-system    |     vs = list(itertools.islice(iterator, batch))
analytic-system    |   File "/usr/share/unfetter/src/CAR_2013_03_001.py", line 55, in <lambda>
analytic-system    |     'exe': item[1]["data_model"]["fields"]["exe"],
analytic-system    | KeyError: 'exe'
analytic-system    | 
analytic-system    |    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
analytic-system    |    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
analytic-system    |    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
analytic-system    |    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
analytic-system    |    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
analytic-system    |    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
analytic-system    |    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
analytic-system    |    at org.apache.spark.scheduler.Task.run(Task.scala:99)
analytic-system    |    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
analytic-system    |    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
analytic-system    |    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
analytic-system    |    ... 1 more
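On the analytic side, a defensive workaround (a hypothetical sketch, not the project's actual code) would be to fall back to the basename of `image_path` when the event carries no `exe` key, since `image_path` is present in the available fields shown below:

```python
import ntpath

def extract_exe(item):
    """Return the exe field of a Sysmon event tuple, falling back to the
    basename of image_path when the event does not carry an exe key."""
    fields = item[1]["data_model"]["fields"]
    return fields.get("exe") or ntpath.basename(fields.get("image_path", ""))

event = (0, {"data_model": {"fields": {"image_path": "C:\\Windows\\System32\\PING.EXE"}}})
print(extract_exe(event))  # PING.EXE
```

This avoids the `KeyError: 'exe'` raised by the lambda in `CAR_2013_03_001.py`, at the cost of silently tolerating events where neither field is set.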

Available fields:

    "data_model": {
      "fields": {
        "severity": "Information",
        "src_tid": 1868,
        "keywords": -9223372036854776000,
        "log_name": "Microsoft-Windows-Sysmon/Operational",
        "record_number": 5325,
        "pid": 1504,
        "parent_image_path": "C:\\Windows\\System32\\cmd.exe",
        "uuid": "S-1-5-18",
        "ppid": "3484",
        "integrity_level": "High",
        "logon_guid": "{6B166207-852C-5A18-0000-00200D6D0100}",
        "hostname": "IE11Win7",
        "logon_id": "0x16d0d",
        "process_guid": "{6B166207-8930-5A18-0000-0010D3420600}",
        "utc_time": "2017-11-24T21:03:44Z",
        "event_code": 1,
        "image_path": "C:\\Windows\\System32\\PING.EXE",
        "terminal_session_id": "\"C:\\Windows\\system32\\cmd.exe\" ",
        "hashes": "SHA1=6AC7947207D999A65890AB25FE344955DA35028E",
        "parent_process_guid": "{6B166207-8562-5A18-0000-0010E6B20300}",
        "user": "IE11WIN7\\IEUser",
        "command_line": "ping  google.com"
      }
    }

This must be because of the changes to 3.sysmon.conf: the ruby filter lines that populated the exe field were commented out after the old event API syntax was dropped in newer Logstash versions.

You can update the last few lines of 3.sysmon.conf for Logstash 5.x compatibility, which will get rid of those exe field errors:

if ([data_model][fields][image_path]) {
  ruby {
    code => 'event.set("[data_model][fields][exe]", event.get("[data_model][fields][image_path]"))'
  }
}
if ([data_model][fields][parent_image_path]) {
  ruby {
    code => 'event.set("[data_model][fields][parent_exe]", event.get("[data_model][fields][parent_image_path]"))'
  }
}
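In effect, the filter copies `image_path` into `exe` (and `parent_image_path` into `parent_exe`) so that analytics expecting the old field names keep working. A minimal Python sketch of the same mapping (illustrative only; the real transformation runs inside Logstash):

```python
def add_exe_fields(event):
    """Mirror the Logstash ruby filter above: copy the *_image_path
    fields into the exe/parent_exe fields the CAR analytics expect."""
    fields = event["data_model"]["fields"]
    if "image_path" in fields:
        fields["exe"] = fields["image_path"]
    if "parent_image_path" in fields:
        fields["parent_exe"] = fields["parent_image_path"]
    return event

evt = {"data_model": {"fields": {"image_path": "C:\\Windows\\System32\\PING.EXE",
                                 "parent_image_path": "C:\\Windows\\System32\\cmd.exe"}}}
print(add_exe_fields(evt)["data_model"]["fields"]["exe"])  # C:\Windows\System32\PING.EXE
```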