Github Script can't use outputs - github

I am trying to create a PR message when an action fails, but the call always errors out saying the message body is "blank".
This is the code I'm running in my action; all of those console.logs print empty values and I can't understand why.
Thanks for any help
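Since the original code isn't shown here, below is a minimal sketch of the pattern this usually takes, assuming actions/github-script v6 or later; the step id, output name, and env variable are hypothetical:

// The earlier workflow step is assumed to expose the message as an output,
// e.g.  echo "error_message=compile failed" >> "$GITHUB_OUTPUT",
// and the github-script step forwards it through env:
//   env:
//     ERROR_MESSAGE: ${{ steps.build.outputs.error_message }}
// Step outputs are not visible as JavaScript variables inside the script,
// which is a common reason for empty console.log output and a blank body.
const body = process.env.ERROR_MESSAGE || '';
console.log('comment body:', body);
if (!body) {
  core.setFailed('No error message was passed to the script');
} else {
  await github.rest.issues.createComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    body,
  });
}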

Related

a call to query consoleText returns with HttpStatus 100 - how to deal with that?

I am working on a program that launches Jenkins jobs using the REST API. After the job has completed, I'd like to get its log, so I call http://jenkins.domain.com/job/my_job_name/#/consoleText in my code.
In 75% of the cases that works and I get the text in return. But there are some cases where it comes back with HttpStatus 100 and no text. (Opening the URL in the browser then shows the text, so clearly there is something to return.) (I haven't found any pattern that would explain it, like "exceptionally large log" or so.)
I found no documentation about calls returning 100 and have no idea how to proceed. Simply repeating the call gives the same result. So how can I get the expected result?
Surprisingly, "exceptionally large" was the answer. This caused a timeout (followed by some inappropriate handling) in the library I used to handle the HttpGet. (Fortunately it was fixed very quickly.)
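For reference, a minimal sketch of fetching the console log with an explicit, generous timeout, assuming a Node.js 18+ client; the build number and timeout value are placeholders:

// Abort the request only if the (possibly very large) log takes too long to arrive.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 120000);
try {
  const response = await fetch('http://jenkins.domain.com/job/my_job_name/42/consoleText', {
    signal: controller.signal,
  });
  const log = await response.text();
  console.log('log length:', log.length);
} finally {
  clearTimeout(timer);
}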

Debugging Transformer Errors in Mirth Connect Server Log

Fairly new to Mirth, so I'm looking for advice on debugging/getting more information from errors reported in the Server Log in Mirth Connect. I know which channel this is originating from, but that's about it. This error is received 10 times for each message coming through. It should be noted that the channel is working properly except for this error cluttering up the logs.
The Error:
ERROR (transformer:?): TypeError: undefined is not an xml object.
What I've Tried:
Ruled out Channel Map variables (mappers): they don't have null default values and they match up with variables in the incoming XML message. I even switched to JavaScript transformers and modified the catch block to try to narrow down the issue, but no luck.
Modified the external JavaScript source files to include more error handling (wrapped each file in a try/catch that would log with identifying info), but this didn't change the result at all.
Added a new Alert to send info if errors are received, but this alert never fired.
Anything else to try? Thanks for any/all help!
This is a Rhino message that happens when you use an E4X operator on a variable that isn't an XML object. The following two samples will both throw the same error you are seeing when obj is undefined. Otherwise, 'undefined' in your error will be replaced with the result of obj.toString().
// Putting a dot between the variable and () indicates an xml filter
// instead of a function call
obj.('test');
// Two consecutive dots returns all xml descendant elements of obj
// named test instead of retrieving a property named test from obj.
obj..test;
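If it helps to track down which value is not XML, here is a rough sketch of a defensive check that could go in a transformer step; the channel map key and descendant name are made up:

// Only apply E4X operators after confirming the value really is XML.
var node = channelMap.get('somePayload');
if (node instanceof XML || node instanceof XMLList) {
    var matches = node..test;   // safe to use E4X operators here
} else {
    // Logs which value was not XML instead of throwing the TypeError.
    logger.error('Expected XML but got ' + typeof node + ': ' + node);
}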

Is the problem about Datastage Message Handler?

I have a problem with a message handler in IBM DataStage. I am trying to suppress this message: "no more conversion warnings will be issued". I added it to the message handler at the project level, but when I run the job, the warning message still appears.
Thank you in advance.
Message handlers only work for parallel jobs - what is your job type?
Have you added the message handler in the administrator client?
If yes follow these steps:
In the job log, check the second entry, "Attached Message Handlers:", and verify that the message handler is active.
Open the message with the warning, check the message ID, and compare it to the one in your message handler - they need to be identical.

GoodData Export Reports API Call results in incomplete file

I've developed a method that does the following steps, in this order:
1) Get a report's metadata via /gdc/md//obj/
2) From that, get the report definition and use that as payload for a call to /gdc/xtab2/executor3
3) Use the result from that call as payload for a call to /gdc/exporter/executor
4) Perform a GET on the returned URI to download the generated CSV
So this all works fine, but the problem is that I often get back a blank CSV or an incomplete CSV. My workaround has been to put a sleep() in between getting the URI back and actually calling a GET on the URI. However, as our data grows, I have to keep increasing the delay, and even then there is no guarantee that I get complete data. Is there a way to make sure that the report has finished exporting data to the file before calling the URI?
The problem is that the export runs as an asynchronous task - the result at the URL returned in the payload of the POST to /gdc/exporter/executor (in the form /gdc/exporter/result/{project-id}/{result-id}) is only available after the exporter task finishes its job.
If the task has not finished yet, a GET to /gdc/exporter/result/{project-id}/{result-id} should return status code 202, which means "we are still exporting, please wait".
So you should periodically poll the result URL until it returns status 200 with the payload (or a 40x/50x status if something went wrong).
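A minimal polling sketch, assuming a Node.js client where resultUri is the /gdc/exporter/result/{project-id}/{result-id} URI returned by the POST to /gdc/exporter/executor; the interval and attempt limit are arbitrary, and authentication headers are omitted:

async function downloadExport(resultUri, intervalMs = 2000, maxAttempts = 60) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(resultUri);
    if (response.status === 200) {
      return await response.text(); // the finished CSV
    }
    if (response.status === 202) {
      // Still exporting - wait and poll again.
      await new Promise(resolve => setTimeout(resolve, intervalMs));
      continue;
    }
    throw new Error('Export failed with HTTP status ' + response.status);
  }
  throw new Error('Export did not finish within the polling window');
}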

fields parameter not working with graph batch requests

I have moved my graph requests to a batch, and noticed that when more than one request parameter is passed, the request fails with error 400.
For example, this works when not batched:
$facebook->api('/me/friends?limit=5000&fields=id')
But when the same graph url is moved to a batch request, I get a 400 error.
When I remove one of the parameters (either fields or limit), it works:
/me/friends?fields=id
/me/friends?limit=10
Anyone knows if this is a bug or should be like this for some reason?
Finally found the answer to my problem. It appears that the & char must be escaped with %26 in order for this to work.
So my code sample should be:
$facebook->api('/me/friends?limit=5000%26fields=id')
Wish this was documented...