google / grr

GRR Rapid Response: remote live forensics for incident response

Home Page: https://grr-doc.readthedocs.io/


Launching binaries fails with "Error 1406: Data too long for column 'data_value' at row 1"

certxlm opened this issue · comments

Environment

  • GRR installed from DEB
  • tested on both 3.4.5.1 and 3.4.6.0
  • server is Ubuntu 18.04
  • clients are Windows 10 and Ubuntu 18.04

Describe the issue
When starting the flow "LaunchBinary" (3.4.5.1) or "execute binary hack" (3.4.6.0), the flow fails before it is even picked up by the client, stopping after a long while with the error given below.

The result is the same on Linux clients with a relatively simple binary (cp) and on Windows clients with a CLI binary that works perfectly when launched manually. Note that this is a fresh install of GRR, not an upgrade.

Error logs

<_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNKNOWN
	details = "Error 1406: Data too long for column 'data_value' at row 1"
	debug_error_string = "{"created":"@1654186737.047671253","description":"Error received from peer ipv4:172.17.84.36:4444","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Error 1406: Data too long for column 'data_value' at row 1","grpc_status":2}"
> (from http://lucxu20.int.excellium.lu/api/v2/clients/C.0ea9ee0871fa3889/flows)

Additional context
Modifying the broadcasts and pending_messages tables as shown below fixes the issue, but since I'm not sure of the root cause, this may not be the best course of action:

ALTER TABLE pending_messages
  MODIFY COLUMN data_value LONGBLOB;
ALTER TABLE broadcasts
  MODIFY COLUMN data_value LONGBLOB;
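For context, the error is consistent with MySQL's BLOB size limits: a BLOB column holds at most 65,535 bytes, so a serialized message even slightly larger than that triggers "Data too long" under strict SQL mode. The sketch below (plain Python; the limits are the documented MySQL maxima, while the 70 KB payload size is a made-up example, not taken from this issue) shows which column type a given payload would need.

```python
# Maximum sizes of MySQL blob types, in bytes (per MySQL documentation).
BLOB_LIMITS = {
    "TINYBLOB": 2**8 - 1,     # 255 B
    "BLOB": 2**16 - 1,        # 65,535 B (~64 KB)
    "MEDIUMBLOB": 2**24 - 1,  # 16,777,215 B (~16 MB)
    "LONGBLOB": 2**32 - 1,    # ~4 GB
}


def smallest_fitting_type(payload_size: int) -> str:
    """Return the smallest blob type that can store a payload of this size."""
    for name, limit in BLOB_LIMITS.items():
        if payload_size <= limit:
            return name
    raise ValueError("payload exceeds LONGBLOB limit")


# A hypothetical serialized message just over the BLOB limit,
# as appears to happen in this issue.
print(smallest_fitting_type(70 * 1024))  # MEDIUMBLOB
```

Any payload between 64 KB and 16 MB would hit Error 1406 on the original BLOB columns but fit in MEDIUMBLOB, which matches the behavior reported above.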

I have reproduced the issue and the suggested fix seems reasonable. The only thing is that I think MEDIUMBLOB should be enough: MEDIUMBLOB holds up to 16 MB (whereas BLOB holds only 64 KB), and since we chunk the data, we never send more than 16 MB at a time. I will prepare the fix and update Fleetspeak's code.
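The chunking argument above can be sketched as follows (plain Python; the helper name and the use of the MEDIUMBLOB limit as the chunk size are illustrative, not Fleetspeak's actual code). If a binary is split into pieces no larger than the MEDIUMBLOB maximum, each stored row is guaranteed to fit the column:

```python
MEDIUMBLOB_MAX = 2**24 - 1  # 16,777,215 bytes (~16 MB)


def chunk(data: bytes, chunk_size: int = MEDIUMBLOB_MAX) -> list[bytes]:
    """Split data into consecutive pieces no larger than chunk_size."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


# Illustrative payload: 40 MiB of zero bytes splits into three chunks,
# each small enough for a MEDIUMBLOB column, and reassembles losslessly.
payload = bytes(40 * 1024 * 1024)
chunks = chunk(payload)
assert all(len(c) <= MEDIUMBLOB_MAX for c in chunks)
assert b"".join(chunks) == payload
print(len(chunks))  # 3
```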