Why do I get a "Post request fails. Cannot get predictions" error when calling gcloud AIP (unified) batch prediction?

Calling a batch prediction for a model deployed on AIP (unified) results in an empty results file and a single error file with the following content:
('Post request fails. Cannot get predictions. Error: Unexpected response- {"predictions":[{"data":[{"715":{"0":1}}]}]}.', 1)
This is strange, since the error message contains the correct output: {"predictions":[{"data":[{"715":{"0":1}}]}]}, which means the prediction ran successfully and the receiving end was able to read it.
The input is in JSONL format and includes only a single instance for test purposes.
Any idea?
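For illustration, a single-instance JSONL input file looks something like the Python sketch below produces (the feature names and values are placeholders, not the real input schema of this model):
import json

# Hypothetical single instance; the real field names and values depend on the model's input schema.
instance = {"feature_a": 1.0, "feature_b": "some value"}

# JSONL means one JSON object per line; a single-instance test file therefore has exactly one line.
with open("input.jsonl", "w") as f:
    f.write(json.dumps(instance) + "\n")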

Related

How to get the server's error message when `Net::LDAP::schema` fails?

I wrote a Perl program that uses $ldap->schema to get the server's schema.
So far the servers I used returned a schema.
However, one server just returns undef, so I tried passing dn => 'CN=Schema,CN=Configuration,...' to schema().
Unfortunately I still get an undef result.
Trying to get the schema using ldapsearch, I see an error message from the server like:
text: 000004DC: LdapErr: DSID-0C090A71, comment: In order to perform this operation a successful bind must be completed on the connection., ...
I'd like to get the server's error message from the $ldap->schema method.
How can I do that using the 0.44 version of Net::LDAP (I know it's a bit old by now)?
The schema method doesn't report any details. But you could trace through it with the debugger, or make a copy of the module and add debugging statements.

How do I translate the following POST request into ESP8266 AT-command format?

I've got a working local website that takes in HTML form data.
The fields are:
Temperature
Humidity
The server successfully receives the data and spits out a graph updated with the new entries.
Using a browser tool, I was able to capture the actual POST request as follows:
http://127.0.0.1:5000/add_data
Temperature=25.4&Humidity=52.2
Content-Length:30
Now, I want to migrate from using the human interface browser with manual entries to an ESP01 device using AT commands.
According to the ESP AT-commands documentation, a POST request is performed using the following command:
AT+HTTPCPOST=
Find the link below for the full description of the command.
I cannot seem to get this POST request working. The ESP01 device immediately returns an "ERROR" message without any delay, as though it did not even try to send the request, which suggests the syntax might be wrong.
Among many variations, the following is my best attempt:
AT+HTTPCPOST="http://MYIPADDR:5000/add_data",30,2,"Temperature: 25.4","Humidity: 52.2"
With MYIPADDR above replaced with my IP address.
How do I translate a post request into ESP01 AT command format, and are there any prerequisites needed to be in place to perform such a request?
I did connect the ESP01 device to the WiFi network.
Here's the link to the POST AT command description:
https://docs.espressif.com/projects/esp-at/en/release-v2.2.0.0_esp8266/AT_Command_Set/HTTP_AT_Commands.html#cmd-httpcpost
The documentation says:
AT+HTTPCPOST=<"url">,<length>[,<http_req_header_cnt>][,<http_req_header>..<http_req_header>]
Response:
OK
The symbol > indicates that AT is ready for receiving serial data, and you can enter the data now. When the requirement of message length determined by the parameter <length> is met, the transmission starts.
...
Parameters
<"url">: HTTP URL.
<length>: HTTP data length to POST. The maximum length is equal to the system allocable heap size.
<http_req_header_cnt>: the number of <http_req_header> parameters.
[<http_req_header>]: you can send more than one request header to the server.
You're sending:
AT+HTTPCPOST="http://MYIPADDR:5000/add_data",30,2,"Temperature: 25.4","Humidity: 52.2"
The length is 30. The problem is that everything after the length is HTTP header fields; you need to send the variables in the body. So the command is:
AT+HTTPCPOST="http://MYIPADDR:5000/add_data",30
followed, after the ESP-01 sends the > character, by the body on the next line:
Temperature=25.4&Humidity=52.2
Because you passed 30 as the body length, the ESP-01 will read exactly 30 characters after the end of the AT command and send that data as the post body. If the size of that data changes (for instance, maybe the temperature is 2.2, so one digit less), you'll need to send the new length rather than 30.
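If it helps, here is a small Python sketch (run on a PC, not on the ESP-01 itself; the URL and values are the ones from the question) showing how to build the body and derive a length argument that always matches it:
# Build the same POST body the browser sent and compute its exact length,
# so the <length> argument of AT+HTTPCPOST matches the bytes that follow.
from urllib.parse import urlencode

temperature = 25.4  # hypothetical sensor readings
humidity = 52.2

body = urlencode({"Temperature": temperature, "Humidity": humidity})
# body == "Temperature=25.4&Humidity=52.2", len(body) == 30
command = f'AT+HTTPCPOST="http://MYIPADDR:5000/add_data",{len(body)}'

print(command)  # send this first and wait for the ">" prompt...
print(body)     # ...then send exactly len(body) bytes of body data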

Debugging Transformer Errors in Mirth Connect Server Log

Fairly new to Mirth, so I'm looking for advice on debugging/getting more information from errors reported in the Server Log in Mirth Connect. I know which channel this is originating from, but that's about it. This error is received 10 times for each message coming through. It should be noted that the channel is working properly except for this error cluttering up the logs.
The Error:
ERROR (transformer:?): TypeError: undefined is not an xml object.
What I've Tried:
Ruled out Channel Map variables (mappers): they don't have null default values and they match up with variables in the incoming XML message. I even changed them to JavaScript transformers and modified the catch block to try to narrow down the issue, but no luck.
Modified external JavaScript source files to include more error handling (wrapped each file in a try/catch that logs with identifying info), but this didn't change the result at all.
Added a new Alert to send info if errors are received, but this alert never fired.
Anything else to try? Thanks for any/all help!
This is a Rhino error message that occurs when you use an E4X operator on a variable that isn't an XML object. The following two samples will both throw the same error you are seeing when obj is undefined. Otherwise, 'undefined' in your error would be replaced with the result of obj.toString().
// Putting a dot between the variable and () indicates an xml filter
// instead of a function call
obj.('test');
// Two consecutive dots returns all xml descendant elements of obj
// named test instead of retrieving a property named test from obj.
obj..test;

Using variables in SOAP request file in JMeter

In a JMeter (v2.13) test plan I have a SOAP/XML-RPC sampler. The SOAP request itself is loaded from a random file.
Sample request
<mySoapRequest>
<value>555</value>
</mySoapRequest>
This works fine.
I would now like to replace this fixed value with a variable which is defined in JMeter, i.e.
<mySoapRequest>
<value>${someValue}</value>
</mySoapRequest>
It seems as if JMeter does not resolve this variable. The actual SOAP request sent to the service does not contain 555 but ${someValue}. Is there any workaround so that I could use variables in the file?
That can be done using the __FileToString and __eval functions.
For this XML,
<mySoapRequest>
<value>${someValue}</value>
</mySoapRequest>
In the SOAP/XML RPC request Data section, use the functions as shown below to get the value replaced at run time.
${__eval(${__FileToString(C:\users\me\desktop\soap.xml)})}
__FileToString - The FileToString function can be used to read an entire file. Each time it is called it reads the entire file.
__eval - The eval function returns the result of evaluating a string expression.
So at run time __FileToString reads the raw file content (still containing ${someValue}), and __eval then evaluates that string, substituting the current value of the JMeter variable before the request is sent.

GoodData Export Reports API Call results in incomplete file

I've developed a method that does the following steps, in this order:
1) Get a report's metadata via /gdc/md//obj/
2) From that, get the report definition and use that as payload for a call to /gdc/xtab2/executor3
3) Use the result from that call as payload for a call to /gdc/exporter/executor
4) Perform a GET on the returned URI to download the generated CSV
So this all works fine, but the problem is that I often get back a blank or incomplete CSV. My workaround has been to put a sleep() between getting the URI back and actually calling GET on it. However, as our data grows, I have to keep increasing the delay, and even then there is no guarantee that I get complete data. Is there a way to make sure that the report has finished exporting data to the file before calling the URI?
The problem is that the export runs as an asynchronous task. The result at the URL returned in the payload of the POST to /gdc/exporter/executor (in the form /gdc/exporter/result/{project-id}/{result-id}) is only available after the exporter task finishes its job.
If the task has not finished yet, a GET to /gdc/exporter/result/{project-id}/{result-id} should return status code 202, which means "we are still exporting, please wait".
So you should periodically poll the result URL until it returns status 200 with the payload (or a 4xx/5xx if something went wrong).
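A minimal polling sketch in Python with the requests library (the session and authentication setup are omitted, and the poll interval and timeout are arbitrary assumptions):
import time
import requests

def download_export(session: requests.Session, result_url: str,
                    poll_interval: float = 2.0, timeout: float = 300.0) -> bytes:
    # Poll the /gdc/exporter/result/{project-id}/{result-id} URL until the export is ready:
    # 202 means the exporter task is still running, 200 carries the finished CSV payload.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = session.get(result_url)
        if response.status_code == 200:
            return response.content          # finished: CSV bytes
        if response.status_code == 202:
            time.sleep(poll_interval)        # still exporting, wait and retry
            continue
        response.raise_for_status()          # 4xx/5xx: something went wrong
    raise TimeoutError(f"Export did not finish within {timeout} seconds")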