Receiving reports on a higher level and not already parsed
AlesSturala opened this issue · comments
Hello,
About a year ago, I asked about the possibility of receiving data in a higher-level format than the already-parsed DTO objects. The use case is that the GAQL definition is dynamic; it is not known at compile time. We just want to take whatever GAQL comes our way, execute it, process it dynamically, and save it to a file (AVRO in our case).
The proposed solution was to serialize it to JSON and then parse it dynamically:
string rowJson = JsonFormatter.Default.Format(googleAdsRow);
GoogleAdsRow parsedRow = GoogleAdsRow.Parser.ParseJson(rowJson);
This is becoming more and more of an issue because it is very inefficient. Basically, at the moment:
- The library receives the data and parses it into objects
- We take those objects and serialize them into a string
- Then we parse the string dynamically into memory
- Then we save what is in memory to a file
This is very inefficient, and for large reports of several GB we run into memory issues with some very large accounts. Is there a way to skip step 1, which the library does implicitly, and just get access to the raw response (protobuf, I believe) so we can process it the way we want rather than how the library does?
Hi @AlesSturala, it isn't clear why you do steps 2 and 3; they seem redundant to me and point to an issue with your overall application architecture. I'm assuming there isn't a way to fix that part, at least in the short term, so I won't get into those details.
There isn't a way to do this with the library, unless you want to get into the business of doing HTTP/2 and parsing binary streams yourself (which I don't recommend). However, you can swap out the call to GoogleAdsService.SearchStream with a call to the REST endpoint. You get back a gzipped JSON file which, when decompressed, gives you the input to step 3.
https://developers.google.com/google-ads/api/rest/common/search has an example towards the end. You can get the access token and developer token from an instance of GoogleAdsConfig. An example is given below:
Note: the problem of decompressing a large gzip file and then parsing the huge JSON still remains. There's no workaround for that unless you reduce the size of your reports. You also need to remember to update the endpoint every time there's an API version migration.
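That said, the parsing part does not have to hold the whole report in memory. Since the REST searchStream response is a single top-level JSON array of batches, you can decompress and enumerate it incrementally. A minimal, self-contained sketch (assuming .NET 6+ for JsonSerializer.DeserializeAsyncEnumerable; the class and method names are illustrative, not part of the client library):

```csharp
using System.IO;
using System.IO.Compression;
using System.Text.Json;
using System.Threading.Tasks;

class StreamingReportReader
{
    // Reads a gzipped JSON array (the shape returned by googleAds:searchStream
    // over REST) one element at a time, so the whole report is never resident
    // in memory at once.
    public static async Task<int> CountBatchesAsync(Stream gzippedJson)
    {
        int count = 0;
        using var gzip = new GZipStream(gzippedJson, CompressionMode.Decompress);
        await foreach (JsonElement batch in
            JsonSerializer.DeserializeAsyncEnumerable<JsonElement>(gzip))
        {
            // Each element is one searchStream response batch; process it here
            // (e.g. append its rows to your AVRO file) and let it be collected.
            count++;
        }
        return count;
    }
}
```

You could pass the raw response stream from the REST call straight into a method like this instead of calling ReadToEnd(), keeping memory use roughly constant regardless of report size.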
private static void DownloadReportUsingRest(GoogleAdsConfig config, string customerId)
{
    string query =
        $@"SELECT
            ad_group.id,
            ad_group.status,
            ad_group_criterion.criterion_id,
            ad_group_criterion.keyword.text,
            ad_group_criterion.keyword.match_type
        FROM ad_group_criterion
        WHERE ad_group_criterion.type = 'KEYWORD'
            AND ad_group.status = 'ENABLED'
            AND ad_group_criterion.status IN ('ENABLED', 'PAUSED')";
    string url = $"https://googleads.googleapis.com/v9/customers/{customerId}/googleAds:searchStream";
    string accessToken = config.OAuth2AccessToken;
    for (int i = 0; i < 10; i++)
    {
        WebRequest req = HttpUtilities.BuildRequest(url, "POST", config);
        req.Headers.Add("developer-token", config.DeveloperToken);
        req.Headers.Add("Authorization", $"Bearer {accessToken}");
        req.Headers.Add("login-customer-id", config.LoginCustomerId);
        req.ContentType = "application/json";
        req.Headers.Add("Accept-Encoding", "gzip, compress");
        using (StreamWriter writer = new StreamWriter(req.GetRequestStream()))
        {
            // Raw newlines are not valid inside a JSON string literal, so
            // collapse the multi-line GAQL query onto a single line first.
            string singleLineQuery = query.Replace("\r", "").Replace("\n", " ");
            writer.Write($"{{ \"query\": \"{singleLineQuery}\" }}");
        }
        try
        {
            using (WebResponse response = req.GetResponse())
            using (StreamReader rdr = new StreamReader(response.GetResponseStream()))
            {
                // The report body; if the response is gzip-encoded, decompress
                // it before parsing.
                string contents = rdr.ReadToEnd();
            }
        }
        catch (WebException ex)
        {
            using (StreamReader rdr = new StreamReader(ex.Response.GetResponseStream()))
            {
                // The error details returned by the API.
                string contents = rdr.ReadToEnd();
            }
        }
        Thread.Sleep(30000);
    }
}
private static GoogleAdsConfig GetConfig()
{
    return new GoogleAdsConfig()
    {
        DeveloperToken = "***",
        OAuth2Mode = OAuth2Flow.APPLICATION,
        OAuth2ClientId = "****",
        OAuth2ClientSecret = "****",
        OAuth2RefreshToken = "****",
        LoginCustomerId = "9185018835"
    };
}
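One detail worth calling out: building the JSON request body by hand with string interpolation is error-prone, because characters such as raw newlines and quotes inside the GAQL string are not legal in a JSON string literal. Letting a serializer produce the body sidesteps the escaping entirely. A minimal sketch using System.Text.Json (the class name is illustrative):

```csharp
using System.Text.Json;

class RequestBodyBuilder
{
    // Serializes the GAQL query into a valid JSON request body, with newlines,
    // quotes and other special characters escaped automatically.
    public static string Build(string query) =>
        JsonSerializer.Serialize(new { query });
}
```

With this in place, the StreamWriter line in the example above becomes writer.Write(RequestBodyBuilder.Build(query)), and the multi-line query can be sent as-is.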