Tutorials for Natural Language Classifier - 500 error occurred - ibm-cloud

I wanted to create a classifier, so following the tutorial I did the following:
curl -i -u "apikey:{apikey}" \
-F training_data=@{train.csv} \
-F training_metadata="{\"language\":\"en\",\"name\":\"TutorialClassifier\"}" \
"{url}/v1/classifiers"
But, the following 500 error occurs:
HTTP/1.1 200 Connection established
HTTP/1.1 100 Continue
X-EdgeConnect-MidMile-RTT: 0
X-EdgeConnect-Origin-MEX-Latency: 113
HTTP/1.1 500 Internal Server Error
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 176
Expires: Wed, 11 Mar 2020 04:44:04 GMT
Date: Wed, 11 Mar 2020 04:44:04 GMT
Connection: close
<HTML><HEAD><TITLE>Error</TITLE></HEAD><BODY>
An error occurred while processing your request.<p>
Reference #179.35e52e17.1583901844.39c6106
</BODY></HTML>
What is causing this error?
Thanks!

I followed the instructions in the tutorial and was able to create a classifier without any error. Here are the request and response:
Request:
curl -i -u "apikey:API_KEY" \
-F training_data=@weather_data_train.csv \
-F training_metadata="{\"language\":\"en\",\"name\":\"TutorialClassifier\"}" \
"URL/v1/classifiers"
Response:
HTTP/1.1 100 Continue
X-EdgeConnect-MidMile-RTT: 230
X-EdgeConnect-Origin-MEX-Latency: 95
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 449
X-XSS-Protection: 1
Content-Security-Policy: default-src 'none'
X-Content-Type-Options: nosniff
Cache-Control: no-cache, no-store
Pragma: no-cache
Expires: 0
strict-transport-security: max-age=31536000; includeSubDomains;
x-global-transaction-id: xxxxxxx
X-DP-Watson-Tran-ID: xxxxxx
X-EdgeConnect-MidMile-RTT: 230
X-EdgeConnect-Origin-MEX-Latency: 2000
X-EdgeConnect-MidMile-RTT: 230
X-EdgeConnect-Origin-MEX-Latency: 95
Date: Wed, 11 Mar 2020 07:05:02 GMT
Connection: keep-alive
{
"classifier_id" : "xxxxx",
"name" : "TutorialClassifier",
"language" : "en",
"created" : "2020-03-11T07:05:01.126Z",
"url" : "URL/v1/classifiers/xxx",
"status_description" : "The classifier instance is in its training phase, not yet ready to accept classify requests",
"status" : "Training"
}%
All I did was:
Create the Natural Language Classifier service.
Download the .csv and .json files.
In a terminal or command prompt, change to the folder containing the downloaded .csv and .json files.
From the service credentials page of the service, copy the apikey and url values and replace the placeholders ({apikey}, {url}) in the curl command.
Execute the command to see the above response.
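Note that curl's `-F field=@file` syntax is what actually uploads the file contents; a malformed training file is a common source of errors from the classifier endpoint. Before uploading, the CSV can be sanity-checked locally. The sketch below assumes the tutorial's training-data shape (a text phrase in the first column, one or more class labels after it); the helper name and checks are illustrative, not part of the NLC API:

```python
import csv
import io

def validate_training_csv(text):
    """Check that every row has a non-empty phrase plus at least one class label.

    Mirrors the shape of the tutorial's weather_data_train.csv; this is
    a local sanity check, not a call to the NLC service.
    """
    problems = []
    for lineno, row in enumerate(csv.reader(io.StringIO(text)), start=1):
        if not row:
            continue  # blank lines are skipped
        if not row[0].strip():
            problems.append((lineno, "empty text field"))
        elif len(row) < 2 or not any(c.strip() for c in row[1:]):
            problems.append((lineno, "no class label"))
    return problems

sample = "How hot is it today?,temperature\nIs it raining?,conditions\n"
print(validate_training_csv(sample))            # []
print(validate_training_csv("no label here\n"))  # [(1, 'no class label')]
```

An empty result means every row at least has the right shape; the service still enforces its own limits (row counts, field lengths) on upload.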

Related

Flutter Dio - extraneous request to server

I am debugging HTTP requests to our server and decided to try the Dio Dart package. After some trials (with no difference in results from the standard http package), I decided to stop using Dio.
I then happened to notice extraneous requests from a random location (traced back to China Telecom). Considering we are only trying to set up the server, and the requests started showing up only after I used Dio in my Flutter app - is Dio snooping on my server?
Seen on Server
X-Forwarded-Protocol: https
X-Real-Ip: 183.136.225.35
Host: 0.0.0.0:5002
Connection: close
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36 QIHU 360SE
Accept: */*
Referer: ******
Accept-Encoding: gzip
2022-10-06 15:06:06,768 [DEBUG] root:
X-Forwarded-Protocol: https
X-Real-Ip: 45.79.204.46
Host: 0.0.0.0:5002
Connection: close
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Accept: */*
Referer: *****
Accept-Encoding: gzip
Traceroute on IP
4 142 ms 141 ms 153 ms 116.119.68.60
5 140 ms 138 ms 140 ms be6391.rcr21.b015591-1.lon13.atlas.cogentco.com [149.14.224.161]
6 139 ms 139 ms 139 ms be2053.ccr41.lon13.atlas.cogentco.com [130.117.2.65]
7 144 ms 142 ms 142 ms 154.54.61.158
8 191 ms 190 ms 190 ms chinatelecom.demarc.cogentco.com [149.14.81.226]
9 299 ms * 299 ms 202.97.13.18
10 * 316 ms * 202.97.90.30
11 * 317 ms * 202.97.24.141
12 * * * Request timed out.
13 317 ms 308 ms 320 ms 220.191.200.166
14 334 ms 354 ms * 115.233.128.133
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
19 * * * Request timed out.
20 * * * Request timed out.
21 325 ms 325 ms 333 ms 183.136.225.35
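One way to triage traffic like this is to tally the source IPs from the header dumps and see whether the requests correlate with your own app's activity at all; any server exposed on a public address receives some amount of unsolicited scanner traffic regardless of which client library you use. A small hedged sketch, assuming a log shaped like the excerpt above (the helper name and regex are illustrative):

```python
import re
from collections import Counter

def count_source_ips(log_text):
    """Tally X-Real-Ip values from a header-dump log like the one above.

    The regex and log shape are assumptions based on the excerpt;
    adapt to your actual logging format.
    """
    ips = re.findall(r"^X-Real-Ip:\s*(\S+)", log_text, flags=re.MULTILINE)
    return Counter(ips)

log = """X-Real-Ip: 183.136.225.35
Host: 0.0.0.0:5002
X-Real-Ip: 45.79.204.46
X-Real-Ip: 183.136.225.35
"""
print(count_source_ips(log).most_common())
# [('183.136.225.35', 2), ('45.79.204.46', 1)]
```

If the top talkers are IPs that never appear in your app's traffic, that points away from the client library and toward background scanning of the open port.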

Troubleshooting connectivity from a pod in kubernetes

I have created a pod and a service called node-port.
root@hello-client:/# nslookup node-port
Server: 10.100.0.10
Address: 10.100.0.10#53
Name: node-port.default.svc.cluster.local
Address: 10.100.183.19
I can get a shell inside a pod and see the name resolution happening.
However, the TCP connection never completes:
root@hello-client:/# curl --trace-ascii - http://node-port.default.svc.cluster.local:3050
== Info: Trying 10.100.183.19:3050...
What are the likely factors contributing to the failure?
What are some suggestions for troubleshooting this?
On a working node/cluster, I expect it to work like this:
/ # curl --trace-ascii - node-port:3050
== Info: Trying 10.100.13.83:3050...
== Info: Connected to node-port (10.100.13.83) port 3050 (#0)
=> Send header, 78 bytes (0x4e)
0000: GET / HTTP/1.1
0010: Host: node-port:3050
0026: User-Agent: curl/7.83.1
003f: Accept: */*
004c:
== Info: Mark bundle as not supporting multiuse
<= Recv header, 17 bytes (0x11)
0000: HTTP/1.1 200 OK
<= Recv header, 38 bytes (0x26)
0000: Server: Werkzeug/2.2.2 Python/3.8.13
<= Recv header, 37 bytes (0x25)
0000: Date: Fri, 26 Aug 2022 04:34:48 GMT
<= Recv header, 32 bytes (0x20)
0000: Content-Type: application/json
<= Recv header, 20 bytes (0x14)
0000: Content-Length: 25
<= Recv header, 19 bytes (0x13)
0000: Connection: close
<= Recv header, 2 bytes (0x2)
0000:
<= Recv data, 25 bytes (0x19)
0000: {. "hello": "world".}.
{
"hello": "world"
}
== Info: Closing connection 0
/ #
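The nslookup-then-curl sequence above is really two separate checks: does the service name resolve, and does anything accept a TCP connection on the resolved address. The two failure modes have different causes (CoreDNS/kube-dns problems vs. missing endpoints, wrong targetPort, or network policy), so it helps to test them independently. A hedged sketch of that split, with an illustrative helper name and return shape:

```python
import socket

def probe(host, port, timeout=2.0):
    """Separate DNS resolution from TCP connectivity, mirroring the
    nslookup-then-curl steps above."""
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        return ("dns-failure", None)   # name does not resolve at all
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return ("ok", ip)          # something accepted the connection
    except OSError:
        return ("tcp-failure", ip)     # resolved, but connect refused/timed out

# Example: the name resolves but nothing listens on the port
print(probe("localhost", 1))  # likely ("tcp-failure", "127.0.0.1")
```

In the failing case above, DNS already works (`nslookup` returned 10.100.183.19), so the equivalent probe would report `tcp-failure`, which directs attention at the service's endpoints and port mapping rather than at cluster DNS.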

How can I make HTTP::Proxy work with HTTPS URLs?

In the following code sample, I start a proxy server using HTTP::Proxy and attempt to use it to request an HTTPS URL, but the proxy server either doesn't actually make the request or doesn't return the response. However, if the URL uses plain HTTP (not secure), the request succeeds. I've installed both IO::Socket::SSL and LWP::Protocol::https (yay secret deps!), but I am still unable to get HTTPS requests to go through the proxy. How can I get HTTP::Proxy to work with HTTPS URLs?
Here's my code:
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Printer;
use HTTP::Proxy ':log';
use Mojo::UserAgent ();
my $URL = 'https://www.yahoo.com';
my $PROXY_PORT = 8667;
my $pid = fork();
if ($pid) {    # I am the parent
    print "Press ^c to kill proxy server...\n";
    my $proxy = HTTP::Proxy->new( port => $PROXY_PORT );
    $proxy->logmask(ALL);
    $proxy->via(q{});
    $proxy->x_forwarded_for(0);
    $proxy->start;
    waitpid $pid, 0;
}
elsif ($pid == 0) {    # I am the child
    sleep 3;    # Allow the proxy server to start
    my $ua = Mojo::UserAgent->new;
    $ua->proxy
       ->http("http://127.0.0.1:$PROXY_PORT")
       ->https("http://127.0.0.1:$PROXY_PORT");
    my $tx = $ua->get($URL);
    if ($tx->error) {
        p $tx->error;
    }
    else {
        print "Success!\n";
    }
}
else {
    die 'Unknown result after forking';
}
Saving the above script as testcase-so.pl and running it:
$ MOJO_CLIENT_DEBUG=1 ./testcase-so.pl
Press ^c to kill proxy server...
-- Blocking request (https://www.yahoo.com)
-- Connect c66a92739c09c76fa24029e8079808c7 (https://www.yahoo.com:443)
-- Client >>> Server (https://www.yahoo.com)
CONNECT www.yahoo.com:443 HTTP/1.1\x0d
User-Agent: Mojolicious (Perl)\x0d
Content-Length: 0\x0d
Host: www.yahoo.com\x0d
Accept-Encoding: gzip\x0d
\x0d
-- Client >>> Server (https://www.yahoo.com)
[Tue Oct 9 12:02:54 2018] (12348) PROCESS: Forked child process 12352
[Tue Oct 9 12:02:54 2018] (12352) SOCKET: New connection from 127.0.0.1:45312
[Tue Oct 9 12:02:54 2018] (12352) REQUEST: CONNECT www.yahoo.com:443
[Tue Oct 9 12:02:54 2018] (12352) REQUEST: Accept-Encoding: gzip
[Tue Oct 9 12:02:54 2018] (12352) REQUEST: Host: www.yahoo.com
[Tue Oct 9 12:02:54 2018] (12352) REQUEST: User-Agent: Mojolicious (Perl)
[Tue Oct 9 12:02:54 2018] (12352) REQUEST: Content-Length: 0
[Tue Oct 9 12:02:54 2018] (12352) RESPONSE: 200 OK
[Tue Oct 9 12:02:54 2018] (12352) RESPONSE: Date: Tue, 09 Oct 2018 12:02:54 GMT
[Tue Oct 9 12:02:54 2018] (12352) RESPONSE: Transfer-Encoding: chunked
[Tue Oct 9 12:02:54 2018] (12352) RESPONSE: Server: HTTP::Proxy/0.304
-- Client <<< Server (https://www.yahoo.com)
HTTP/1.1 200 OK\x0d
Date: Tue, 09 Oct 2018 12:02:54 GMT\x0d
Transfer-Encoding: chunked\x0d
Server: HTTP::Proxy/0.304\x0d
\x0d
[Tue Oct 9 12:03:14 2018] (12352) CONNECT: Connection closed by the client
[Tue Oct 9 12:03:14 2018] (12352) PROCESS: Served 1 requests
[Tue Oct 9 12:03:14 2018] (12352) CONNECT: End of CONNECT proxyfication
\ {
message "Proxy connection failed"
}
[Tue Oct 9 12:03:15 2018] (12348) PROCESS: Reaped child process 12349
[Tue Oct 9 12:03:15 2018] (12348) PROCESS: 1 remaining kids: 12352
[Tue Oct 9 12:03:15 2018] (12348) PROCESS: Reaped child process 12352
[Tue Oct 9 12:03:15 2018] (12348) PROCESS: 0 remaining kids:
^C[Tue Oct 9 12:04:04 2018] (12348) STATUS: Processed 2 connection(s)
$
And with the $URL switched to not use https:
$ MOJO_CLIENT_DEBUG=1 ./testcase-so.pl
Press ^c to kill proxy server...
-- Blocking request (http://www.yahoo.com)
-- Connect f792ee97a0362ab493575d8116e69e59 (http://127.0.0.1:8667)
-- Client >>> Server (http://www.yahoo.com)
GET http://www.yahoo.com HTTP/1.1\x0d
Accept-Encoding: gzip\x0d
Content-Length: 0\x0d
Host: www.yahoo.com\x0d
User-Agent: Mojolicious (Perl)\x0d
\x0d
[Tue Oct 9 12:09:38 2018] (12656) PROCESS: Forked child process 12659
-- Client >>> Server (http://www.yahoo.com)
[Tue Oct 9 12:09:38 2018] (12659) SOCKET: New connection from 127.0.0.1:58288
[Tue Oct 9 12:09:38 2018] (12659) REQUEST: GET http://www.yahoo.com
[Tue Oct 9 12:09:38 2018] (12659) REQUEST: Accept-Encoding: gzip
[Tue Oct 9 12:09:38 2018] (12659) REQUEST: Host: www.yahoo.com
[Tue Oct 9 12:09:38 2018] (12659) REQUEST: User-Agent: Mojolicious (Perl)
[Tue Oct 9 12:09:38 2018] (12659) REQUEST: Content-Length: 0
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: 301 Moved Permanently
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Cache-Control: no-store, no-cache
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Date: Tue, 09 Oct 2018 14:10:01 GMT
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Transfer-Encoding: chunked
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Via: http/1.1 media-router-fp1006.prod.media.bf1.yahoo.com (ApacheTrafficServer [c s f ])
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Location: https://www.yahoo.com/
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Server: ATS
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Content-Language: en
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Content-Length: 8
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Content-Type: text/html
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: Content-Security-Policy: sandbox allow-forms allow-same-origin allow-scripts allow-popups allow-popups-to-escape-sandbox allow-presentation; report-uri https://csp.yahoo.com/beacon/csp?src=ats&site=frontpage&region=US&lang=en-US&device=desktop&yrid=&partner=;
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: X-Frame-Options: SAMEORIGIN
[Tue Oct 9 12:09:38 2018] (12659) RESPONSE: X-XSS-Protection: 1; report="https://csp.yahoo.com/beacon/csp?src=fp-hpkp-www"
-- Client <<< Server (http://www.yahoo.com)
HTTP/1.1 301 Moved Permanently\x0d
Cache-Control: no-store, no-cache\x0d
Date: Tue, 09 Oct 2018 14:10:01 GMT\x0d
Transfer-Encoding: chunked\x0d
Via: http/1.1 media-router-fp1006.prod.media.bf1.yahoo.com (ApacheTrafficServer [c s f ])\x0d
Location: https://www.yahoo.com/\x0d
Server: ATS\x0d
Content-Language: en\x0d
Content-Length: 8\x0d
Content-Type: text/html\x0d
Content-Security-Policy: sandbox allow-forms allow-same-origin allow-scripts allow-popups allow-popups-to-escape-sandbox allow-presentation; report-uri https://csp.yahoo.com/beacon/csp?src=ats&site=frontpage&region=US&lang=en-US&device=desktop&yrid=&partner=;\x0d
X-Frame-Options: SAMEORIGIN\x0d
X-XSS-Protection: 1; report="https://csp.yahoo.com/beacon/csp?src=fp-hpkp-www"\x0d
\x0d
-- Client <<< Server (http://www.yahoo.com)
8\x0d
redirect\x0d
0\x0d
\x0d
Success!
[Tue Oct 9 12:09:38 2018] (12659) SOCKET: Getting request failed: Client closed
[Tue Oct 9 12:09:39 2018] (12656) PROCESS: Reaped child process 12657
[Tue Oct 9 12:09:39 2018] (12656) PROCESS: 1 remaining kids: 12659
[Tue Oct 9 12:09:39 2018] (12656) PROCESS: Reaped child process 12659
[Tue Oct 9 12:09:39 2018] (12656) PROCESS: 0 remaining kids:
^C[Tue Oct 9 12:09:45 2018] (12656) STATUS: Processed 2 connection(s)
$
There is a bug in HTTP::Proxy in that it returns the wrong response to a CONNECT request:
-- Client <<< Server (https://www.yahoo.com)
HTTP/1.1 200 OK\x0d
Date: Tue, 09 Oct 2018 12:02:54 GMT\x0d
Transfer-Encoding: chunked\x0d
Server: HTTP::Proxy/0.304\x0d
\x0d
The response to a CONNECT request can have no body, which means it must not include an HTTP header announcing a body, as Transfer-Encoding: chunked does. This bug affects all clients that issue a CONNECT request using HTTP/1.1. If the CONNECT is instead done with HTTP/1.0, the problem vanishes, since Transfer-Encoding: chunked is not defined for HTTP/1.0 and thus HTTP::Proxy does not send it.
The same problem occurs when using curl with HTTP::Proxy, so this is not a problem solely of Mojo::UserAgent. I've made a patch to HTTP::Proxy to respond properly. See this pull request for the details and for the (small) diff you need to apply.
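The rule described above is mechanical enough to check automatically: a successful reply to CONNECT starts a raw tunnel, so any header that announces a message body (Transfer-Encoding, Content-Length) has no business being there. A hedged sketch of such a check, using the buggy response from the log; the function is illustrative, not the HTTP::Proxy patch itself:

```python
def connect_response_violations(raw):
    """Flag body-announcing headers in a reply to a CONNECT request.

    A 2xx CONNECT reply switches the connection to a raw tunnel, so
    Transfer-Encoding / Content-Length headers there confuse clients.
    """
    head = raw.split("\r\n\r\n", 1)[0]
    bad = []
    for line in head.split("\r\n")[1:]:       # skip the status line
        name = line.split(":", 1)[0].strip().lower()
        if name in ("transfer-encoding", "content-length"):
            bad.append(line.strip())
    return bad

buggy = ("HTTP/1.1 200 OK\r\n"
         "Date: Tue, 09 Oct 2018 12:02:54 GMT\r\n"
         "Transfer-Encoding: chunked\r\n"
         "Server: HTTP::Proxy/0.304\r\n\r\n")
print(connect_response_violations(buggy))  # ['Transfer-Encoding: chunked']
```

Run against the log excerpt above, it flags exactly the header the patch removes; a minimal correct reply such as `HTTP/1.1 200 Connection established` followed by a blank line passes clean.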

Responding with a stream sometimes result in "Connection reset by peer" error

In our application we have routes that are streaming JSON documents. Here is an example:
/** GET api/1/tenant/(tenantId)/ads/ */
def getAllAdsByOwner(advertiserId: AdvertiserId): Route =
  get {
    httpRequiredSession { username =>
      getAllTenantAds(username, advertiserId) { (adSource: Source[AdView, Any]) =>
        complete(adSource)
      }
    }
  }
Most of the time it works as expected, but sometimes, especially when there are many simultaneous requests, the server starts resetting the connection just after the headers have been sent.
I tested with a script that requests this route with curl in a loop, aborting if the request fails. It ran for about 2 minutes before stopping. The trace when the request fails is the following:
<= Recv header, 17 bytes (0x11)
0000: HTTP/1.1 200 OK
<= Recv header, 54 bytes (0x36)
0000: Access-Control-Allow-Origin: https://<...>
<= Recv header, 135 bytes (0x87)
0000: Access-Control-Expose-Headers: Content-Type, Authorization, Refr
0040: esh-Token, Set-Authorization, Set-Refresh-Token, asset-content-l
0080: ength
<= Recv header, 40 bytes (0x28)
0000: Access-Control-Allow-Credentials: true
<= Recv header, 24 bytes (0x18)
0000: Content-Encoding: gzip
<= Recv header, 23 bytes (0x17)
0000: X-Frame-Options: DENY
<= Recv header, 33 bytes (0x21)
0000: X-Content-Type-Options: nosniff
<= Recv header, 26 bytes (0x1a)
0000: Content-Security-Policy: .
<= Recv header, 20 bytes (0x14)
0000: default-src 'self';.
<= Recv header, 63 bytes (0x3f)
0000: style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;.
<= Recv header, 59 bytes (0x3b)
0000: font-src 'self' 'unsafe-inline' https://fonts.gstatic.com;.
<= Recv header, 99 bytes (0x63)
0000: script-src 'self' 'unsafe-inline' 'unsafe-eval' https://*.google
0040: apis.com https://maps.gstatic.com;.
<= Recv header, 69 bytes (0x45)
0000: img-src 'self' data: https://*.googleapis.com https://*.gstatic.
0040: com;.
<= Recv header, 8 bytes (0x8)
0000:
<= Recv header, 26 bytes (0x1a)
0000: Server: akka-http/10.1.3
<= Recv header, 37 bytes (0x25)
0000: Date: Wed, 27 Jun 2018 15:20:24 GMT
<= Recv header, 28 bytes (0x1c)
0000: Transfer-Encoding: chunked
<= Recv header, 32 bytes (0x20)
0000: Content-Type: application/json
<= Recv header, 2 bytes (0x2)
0000:
== Info: Recv failure: Connection reset by peer
== Info: stopped the pause stream!
== Info: Closing connection 0
curl: (56) Recv failure: Connection reset by peer
The same request inspected in Wireshark:
(screenshot omitted)
Reading the logs didn't give any hint about the probable source of the problem. The response is logged as successful:
[27-06-2018 19:44:52.837][INFO] access: 'GET /api/1/tenant/ca764a91-8616-409c-8f08-c64a40d3fc07/ads' 200 596ms
Versions of used software:
Scala: 2.11.11
akka: 2.5.13
akka-http: 10.1.3
Configuration:
akka.conf
akka-http-core.conf
I tried increasing akka.http.host-connection-pool.max-connections to 128, but it didn't help. Does anyone have an idea whether this is a bug in akka-http or a configuration problem?
If there is no I/O on your open connection for the idle-timeout period, Akka will close the connection, which often appears as a "connection reset by peer" error. Try increasing the akka.http.server.idle-timeout value.
Because your akka.http.server.request-timeout value is the same as akka.http.server.idle-timeout, there is a race over which timeout fires first when there is no I/O. Sometimes you will see a 503; other times you will get a connection reset error.
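Following that advice, one way to avoid the race is to keep the server's idle-timeout comfortably larger than its request-timeout, so a stalled stream fails deterministically with a 503 rather than sometimes racing into a reset. A sketch of the relevant settings in akka.conf; the values are illustrative, not recommendations for any particular workload:

```
akka.http.server {
  # Keep idle-timeout well above request-timeout so a quiet streaming
  # response hits the request timeout (503) rather than a raced reset.
  idle-timeout    = 120 s
  request-timeout = 20 s
}
```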

Zend FW response object and image data - adding linefeed?

I have run into a problem using PHP in Zend Framework to dynamically scale images for return as MIME type image/jpeg.
The problem manifests itself as Firefox reporting 'cannot display image because it contains errors'. This is the same problem reported in: return dynamic image zf2
To replicate the problem, I removed any file I/O and copied code from a similar Stack Overflow example verbatim (in a Zend FW action controller):
$resp = $this->getResponse();
$myImage = imagecreate(200,200);
$myGray = imagecolorallocate($myImage, 204, 204, 204);
$myBlack = imagecolorallocate($myImage, 0, 0, 0);
imageline($myImage, 15, 35, 120, 60, $myBlack);
ob_start();
imagejpeg($myImage);
$img_string = ob_get_contents();
$scaledSize = ob_get_length();
ob_end_clean();
imagedestroy($myImage);
$resp->setContent($img_string);
$resp->getHeaders()->addHeaders(array(
'Content-Type' => $mediaObj->getMimeType(),
'Content-Transfer-Encoding' => 'binary'));
$resp->setStatusCode(Response::STATUS_CODE_200);
return $resp;
When I use wget to capture the JPEG response, I notice a '0A' as the first byte of the output rather than the expected 'FF' that begins the JPEG SOI marker. There is no such '0A' in the data captured in the buffer, nor in the response's content member. Attempting to open the wget output with GIMP fails unless I remove the 0A. I am guessing that Zend FW is using the line feed as a separator between the response headers and the content, but I'm not sure if that is the problem, or if it is, how to fix it.
My response fields look OK:
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Sat, 02 Jun 2018 23:30:09 GMT
Server: Apache/2.4.7 (Ubuntu) OpenSSL/1.0.1f
Set-Cookie: PHPSESSID=nsgk1o5au7ls4p5g6mr9kegoeg; path=/; HttpOnly
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Transfer-Encoding: binary
Content-Length: 1887
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: image/jpeg
Here is a dump of the first few bytes of the wget output for the JPEG stream that fails:
00000000 0a ff d8 ff e0 00 10 4a 46 49 46 00 01 01 01 00 |.......JFIF.....|
00000010 60 00 60 00 00 ff fe 00 3e 43 52 45 41 54 4f 52 |......>CREATOR|
Any idea where the '0A' is coming from? I am running Zend Framework 2.5.1 and PHP 7.2.2.
Thank you, Tim Fountain.
I did finally find the offending file, buried in some Doctrine entities I had created. Sure enough, a stray "?>" followed by an empty line.
Much appreciated
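A stray "?>" followed by a newline, as found above, is easy to reintroduce, because PHP emits anything after the closing tag (even a single line feed) as output, which then lands in front of the JPEG bytes. Below is a hedged heuristic for scanning source files for the two usual culprits: a leading BOM and whitespace or content after the last closing tag. The helper and its checks are illustrative; it does not parse PHP, so a `?>` inside a string or heredoc will confuse it:

```python
def stray_output_risks(src):
    """Heuristics for PHP source that can emit bytes before headers/content:
    a UTF-8 BOM, or whitespace/content after the last closing '?>' tag.
    Sketch only - not a real PHP parser.
    """
    risks = []
    if src.startswith("\ufeff"):
        risks.append("leading BOM")
    if "?>" in src:
        tail = src.rsplit("?>", 1)[1]
        if tail and not tail.strip():
            risks.append("whitespace after closing '?>'")
        elif tail.strip():
            risks.append("content after closing '?>'")
    return risks

print(stray_output_risks("<?php class Ad {} ?>\n"))
# ["whitespace after closing '?>'"]
```

A common convention that sidesteps the whole class of bug is to omit the closing `?>` entirely in files that contain only PHP.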