I have created a pod and a service called node-port.
root@hello-client:/# nslookup node-port
Server: 10.100.0.10
Address: 10.100.0.10#53
Name: node-port.default.svc.cluster.local
Address: 10.100.183.19
I can exec into a pod and see name resolution working.
However, the TCP connection itself is never established.
root@hello-client:/# curl --trace-ascii - http://node-port.default.svc.cluster.local:3050
== Info: Trying 10.100.183.19:3050...
What are the likely factors contributing to failure?
What are some suggestions to troubleshoot this?
On a working cluster, I expect it to work like this.
/ # curl --trace-ascii - node-port:3050
== Info: Trying 10.100.13.83:3050...
== Info: Connected to node-port (10.100.13.83) port 3050 (#0)
=> Send header, 78 bytes (0x4e)
0000: GET / HTTP/1.1
0010: Host: node-port:3050
0026: User-Agent: curl/7.83.1
003f: Accept: */*
004c:
== Info: Mark bundle as not supporting multiuse
<= Recv header, 17 bytes (0x11)
0000: HTTP/1.1 200 OK
<= Recv header, 38 bytes (0x26)
0000: Server: Werkzeug/2.2.2 Python/3.8.13
<= Recv header, 37 bytes (0x25)
0000: Date: Fri, 26 Aug 2022 04:34:48 GMT
<= Recv header, 32 bytes (0x20)
0000: Content-Type: application/json
<= Recv header, 20 bytes (0x14)
0000: Content-Length: 25
<= Recv header, 19 bytes (0x13)
0000: Connection: close
<= Recv header, 2 bytes (0x2)
0000:
<= Recv data, 25 bytes (0x19)
0000: {. "hello": "world".}.
{
"hello": "world"
}
== Info: Closing connection 0
/ #
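A few factors commonly cause exactly this pattern (DNS resolves, TCP never connects): the Service has no endpoints because its selector matches no ready pods, the Service's port/targetPort doesn't match what the container listens on, the app is bound to 127.0.0.1 instead of 0.0.0.0, or a NetworkPolicy blocks the traffic. A troubleshooting sketch, assuming the service is `node-port` in the default namespace (the deployment name `hello` and the pod IP are placeholders):

```shell
# 1. Does the Service have endpoints? "<none>" means the selector
#    matches no ready pods, so connections have nowhere to go.
kubectl get endpoints node-port

# 2. Compare the Service's port/targetPort with the containerPort
#    the pod actually exposes.
kubectl describe service node-port
kubectl get pods -o wide

# 3. Inside the pod, check the app is listening on 0.0.0.0, not
#    127.0.0.1 (ss may be missing in slim images; netstat -tlnp is
#    an alternative if it is installed).
kubectl exec deploy/hello -- ss -tlnp

# 4. Bypass the Service: curl the pod IP directly. If this works,
#    the problem is in the Service definition, not the application.
curl -v http://<pod-ip>:3050/
```

If step 1 shows endpoints and step 4 succeeds, the mismatch is almost always between the Service's targetPort and the port the container listens on.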
I am debugging HTTP requests to our server and decided to try the Dio Dart package. After some trials (with no difference in results from the standard http package), I decided to stop using Dio.
I then happened to notice extraneous requests from random locations (traced back to China Telecom). Considering we are only setting up the server, and the requests started showing up only after I used Dio in my Flutter app: is Dio snooping on my server?
Seen on Server
X-Forwarded-Protocol: https
X-Real-Ip: 183.136.225.35
Host: 0.0.0.0:5002
Connection: close
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36 QIHU 360SE
Accept: */*
Referer: ******
Accept-Encoding: gzip
2022-10-06 15:06:06,768 [DEBUG] root:
X-Forwarded-Protocol: https
X-Real-Ip: 45.79.204.46
Host: 0.0.0.0:5002
Connection: close
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Accept: */*
Referer: *****
Accept-Encoding: gzip
Traceroute to the IP
4 142 ms 141 ms 153 ms 116.119.68.60
5 140 ms 138 ms 140 ms be6391.rcr21.b015591-1.lon13.atlas.cogentco.com [149.14.224.161]
6 139 ms 139 ms 139 ms be2053.ccr41.lon13.atlas.cogentco.com [130.117.2.65]
7 144 ms 142 ms 142 ms 154.54.61.158
8 191 ms 190 ms 190 ms chinatelecom.demarc.cogentco.com [149.14.81.226]
9 299 ms * 299 ms 202.97.13.18
10 * 316 ms * 202.97.90.30
11 * 317 ms * 202.97.24.141
12 * * * Request timed out.
13 317 ms 308 ms 320 ms 220.191.200.166
14 334 ms 354 ms * 115.233.128.133
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
19 * * * Request timed out.
20 * * * Request timed out.
21 325 ms 325 ms 333 ms 183.136.225.35
I wanted to create a classifier, so following the tutorial I did the following:
curl -i -u "apikey:{apikey}" \
-F training_data=@{train.csv} \
-F training_metadata="{\"language\":\"en\",\"name\":\"TutorialClassifier\"}" \
"{url}/v1/classifiers"
But the following 500 error occurs:
HTTP/1.1 200 Connection established
HTTP/1.1 100 Continue
X-EdgeConnect-MidMile-RTT: 0
X-EdgeConnect-Origin-MEX-Latency: 113
HTTP/1.1 500 Internal Server Error
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 176
Expires: Wed, 11 Mar 2020 04:44:04 GMT
Date: Wed, 11 Mar 2020 04:44:04 GMT
Connection: close
<HTML><HEAD><TITLE>Error</TITLE></HEAD><BODY>
An error occurred while processing your request.<p>
Reference #179.35e52e17.1583901844.39c6106
</BODY></HTML>
What is causing this error?
Thanks!
I followed the instructions in the tutorial and was able to create a classifier without any error. Here are the request and response.
Request:
curl -i -u "apikey:API_KEY" \
-F training_data=@weather_data_train.csv \
-F training_metadata="{\"language\":\"en\",\"name\":\"TutorialClassifier\"}" \
"URL/v1/classifiers"
Response:
HTTP/1.1 100 Continue
X-EdgeConnect-MidMile-RTT: 230
X-EdgeConnect-Origin-MEX-Latency: 95
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 449
X-XSS-Protection: 1
Content-Security-Policy: default-src 'none'
X-Content-Type-Options: nosniff
Cache-Control: no-cache, no-store
Pragma: no-cache
Expires: 0
strict-transport-security: max-age=31536000; includeSubDomains;
x-global-transaction-id: xxxxxxx
X-DP-Watson-Tran-ID: xxxxxx
X-EdgeConnect-MidMile-RTT: 230
X-EdgeConnect-Origin-MEX-Latency: 2000
X-EdgeConnect-MidMile-RTT: 230
X-EdgeConnect-Origin-MEX-Latency: 95
Date: Wed, 11 Mar 2020 07:05:02 GMT
Connection: keep-alive
{
"classifier_id" : "xxxxx",
"name" : "TutorialClassifier",
"language" : "en",
"created" : "2020-03-11T07:05:01.126Z",
"url" : "URL/v1/classifiers/xxx",
"status_description" : "The classifier instance is in its training phase, not yet ready to accept classify requests",
"status" : "Training"
}
All I did was:
Create the Natural Language Classifier service.
Download the .csv and .json files.
On a terminal or command prompt, point to the folder where I downloaded the .csv and .json files.
From the service credentials page of the service, copy the apikey and url values and replace the placeholders ({apikey}, {url}) in the curl command.
Execute the command to see the above response.
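One detail worth double-checking in the failing command: with curl's -F option, a leading @ tells curl to read the part's value from a file; without it, the literal text is sent instead of the CSV contents, and the service has nothing to train on. A sketch keeping the tutorial's {apikey} and {url} placeholders (the filename is illustrative):

```shell
# '@train.csv' uploads the file's contents as the training_data part.
curl -i -u "apikey:{apikey}" \
  -F "training_data=@train.csv" \
  -F 'training_metadata={"language":"en","name":"TutorialClassifier"}' \
  "{url}/v1/classifiers"
```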
So I have deployments exposed behind a GCE ingress.
On the deployment, I implemented a simple readinessProbe on a working path, as follows:
readinessProbe:
failureThreshold: 3
httpGet:
path: /claim/maif/login/?next=/claim/maif
port: 8888
scheme: HTTP
initialDelaySeconds: 20
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 1
Everything works well; the first health check comes 20 seconds later and returns 200:
{address space usage: 521670656 bytes/497MB} {rss usage: 107593728 bytes/102MB} [pid: 92|app: 0|req: 1/1] 10.108.37.1 () {26 vars in 377 bytes} [Tue Nov 6 15:13:41 2018] GET /claim/maif/login/?next=/claim/maif => generated 4043 bytes in 619 msecs (HTTP/1.1 200) 7 headers in 381 bytes (1 switches on core 0)
But just after that, I get tons of other requests from other health checks, on /:
{address space usage: 523993088 bytes/499MB} {rss usage: 109850624 bytes/104MB} [pid: 92|app: 0|req: 2/2] 10.132.0.14 () {24 vars in 277 bytes} [Tue Nov 6 15:13:56 2018] GET / => generated 6743 bytes in 53 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 515702784 bytes/491MB} {rss usage: 100917248 bytes/96MB} [pid: 93|app: 0|req: 1/3] 10.132.0.20 () {24 vars in 277 bytes} [Tue Nov 6 15:13:56 2018] GET / => generated 1339 bytes in 301 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 518287360 bytes/494MB} {rss usage: 103759872 bytes/98MB} [pid: 93|app: 0|req: 2/4] 10.132.0.14 () {24 vars in 277 bytes} [Tue Nov 6 15:13:58 2018] GET / => generated 6743 bytes in 52 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 518287360 bytes/494MB} {rss usage: 103837696 bytes/99MB} [pid: 93|app: 0|req: 3/5] 10.132.0.21 () {24 vars in 277 bytes} [Tue Nov 6 15:13:58 2018] GET / => generated 6743 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 523993088 bytes/499MB} {rss usage: 109875200 bytes/104MB} [pid: 92|app: 0|req: 3/6] 10.132.0.4 () {24 vars in 275 bytes} [Tue Nov 6 15:13:58 2018] GET / => generated 6743 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
As I understand it, the documentation says that
The Ingress controller looks for a compatible readiness probe first, if it finds one, it adopts it as the GCE loadbalancer's HTTP(S) health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. This is an example of an Ingress that adopts the readiness probe from the endpoints as its health check.
But I don't understand this behaviour.
How can I limit the health checks to just the one I defined on my deployment?
Thanks,
You need to define ports in your deployment.yaml for the port numbers used in the readinessProbe, like:
ports:
- containerPort: 8888
name: health-check-port
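Putting it together, a minimal sketch of the container spec with the probe port declared (the name and image values are illustrative):

```yaml
containers:
- name: web                # illustrative
  image: my-app:latest     # illustrative
  ports:
  - containerPort: 8888
    name: health-check-port
  readinessProbe:
    httpGet:
      path: /claim/maif/login/?next=/claim/maif
      port: 8888
      scheme: HTTP
    initialDelaySeconds: 20
    periodSeconds: 60
    successThreshold: 1
    failureThreshold: 3
    timeoutSeconds: 1
```

With the probe's port declared as a containerPort, the GCE Ingress controller can adopt the readinessProbe as the load balancer's health check instead of falling back to /.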
In our application we have routes that are streaming JSON documents. Here is an example:
/** GET api/1/tenant/(tenantId)/ads/ */
def getAllAdsByOwner(advertiserId: AdvertiserId): Route =
get {
httpRequiredSession { username =>
getAllTenantAds(username, advertiserId) { (adSource: Source[AdView, Any]) =>
complete(adSource)
}
}
}
Most of the time it works as expected, but sometimes, especially when there are many simultaneous requests, the server starts resetting the connection just after the headers have been sent.
I tested with a script that requests this route with curl in a loop, aborting if a request failed; it ran for about 2 minutes before stopping. The trace when the request fails is the following:
<= Recv header, 17 bytes (0x11)
0000: HTTP/1.1 200 OK
<= Recv header, 54 bytes (0x36)
0000: Access-Control-Allow-Origin: https://<...>
<= Recv header, 135 bytes (0x87)
0000: Access-Control-Expose-Headers: Content-Type, Authorization, Refr
0040: esh-Token, Set-Authorization, Set-Refresh-Token, asset-content-l
0080: ength
<= Recv header, 40 bytes (0x28)
0000: Access-Control-Allow-Credentials: true
<= Recv header, 24 bytes (0x18)
0000: Content-Encoding: gzip
<= Recv header, 23 bytes (0x17)
0000: X-Frame-Options: DENY
<= Recv header, 33 bytes (0x21)
0000: X-Content-Type-Options: nosniff
<= Recv header, 26 bytes (0x1a)
0000: Content-Security-Policy: .
<= Recv header, 20 bytes (0x14)
0000: default-src 'self';.
<= Recv header, 63 bytes (0x3f)
0000: style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;.
<= Recv header, 59 bytes (0x3b)
0000: font-src 'self' 'unsafe-inline' https://fonts.gstatic.com;.
<= Recv header, 99 bytes (0x63)
0000: script-src 'self' 'unsafe-inline' 'unsafe-eval' https://*.google
0040: apis.com https://maps.gstatic.com;.
<= Recv header, 69 bytes (0x45)
0000: img-src 'self' data: https://*.googleapis.com https://*.gstatic.
0040: com;.
<= Recv header, 8 bytes (0x8)
0000:
<= Recv header, 26 bytes (0x1a)
0000: Server: akka-http/10.1.3
<= Recv header, 37 bytes (0x25)
0000: Date: Wed, 27 Jun 2018 15:20:24 GMT
<= Recv header, 28 bytes (0x1c)
0000: Transfer-Encoding: chunked
<= Recv header, 32 bytes (0x20)
0000: Content-Type: application/json
<= Recv header, 2 bytes (0x2)
0000:
== Info: Recv failure: Connection reset by peer
== Info: stopped the pause stream!
== Info: Closing connection 0
curl: (56) Recv failure: Connection reset by peer
The same request inspected in Wireshark:
(screenshot omitted)
Reading the logs didn't give any hint about the source of the problem. The response was logged as successful:
[27-06-2018 19:44:52.837][INFO] access: 'GET /api/1/tenant/ca764a91-8616-409c-8f08-c64a40d3fc07/ads' 200 596ms
Versions of used software:
Scala: 2.11.11
akka: 2.5.13
akka-http: 10.1.3
Configuration:
akka.conf
akka-http-core.conf
I tried increasing akka.http.host-connection-pool.max-connections to 128, but it didn't help. Does anyone have an idea whether this is a bug in akka-http or a configuration problem?
If there is no I/O on your open connection for the idle-timeout period, Akka will close the connection, which often appears as a "connection reset by peer" error. Try increasing the akka.http.server.idle-timeout value.
Because your akka.http.server.request-timeout value is the same as akka.http.server.idle-timeout, it is a race condition between which timeout will occur first when there is no I/O. Sometimes, you will see a 503; other times, you will experience a connection reset error.
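A sketch of the corresponding settings in application.conf; the values are illustrative, the point is to keep request-timeout comfortably below idle-timeout so the two timeouts can never race:

```hocon
akka.http.server {
  idle-timeout    = 120 s  # close connections only after 2 minutes without I/O
  request-timeout = 60 s   # time out slow requests well before idle-timeout fires
}
```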
I have run into a problem using PHP in Zend Framework to dynamically scale images for return as MIME type image/jpeg.
The problem manifests itself as Firefox reporting 'cannot display image because it contains errors'. This is the same problem reported in: return dynamic image zf2
To replicate the problem, I removed any file I/O and copied code from a similar Stack Overflow example verbatim (in a Zend FW action controller):
$resp = $this->getResponse();
$myImage = imagecreate(200,200);
$myGray = imagecolorallocate($myImage, 204, 204, 204);
$myBlack = imagecolorallocate($myImage, 0, 0, 0);
imageline($myImage, 15, 35, 120, 60, $myBlack);
ob_start();
imagejpeg($myImage);
$img_string = ob_get_contents();
$scaledSize = ob_get_length();
ob_end_clean();
imagedestroy($myImage);
$resp->setContent($img_string);
$resp->getHeaders()->addHeaders(array(
'Content-Type' => $mediaObj->getMimeType(),
'Content-Transfer-Encoding' => 'binary'));
$resp->setStatusCode(Response::STATUS_CODE_200);
return $resp;
When I use wget to capture the JPEG response, I notice a '0A' as the first byte of the output rather than the 'FF' that starts a JPEG stream. There is no such '0A' in the data captured in the buffer, nor in the response's content member. Attempting to open the wget output with GIMP fails unless I remove the 0A. I am guessing that Zend FW is using the line feed as a separator between the response headers and the content, but I'm not sure if that is the problem, or if it is, how to fix it.
My response headers look OK:
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Sat, 02 Jun 2018 23:30:09 GMT
Server: Apache/2.4.7 (Ubuntu) OpenSSL/1.0.1f
Set-Cookie: PHPSESSID=nsgk1o5au7ls4p5g6mr9kegoeg; path=/; HttpOnly
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Transfer-Encoding: binary
Content-Length: 1887
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: image/jpeg
Here is the dump of the first few bytes of the wget with the jpeg stream that fails:
00000000 0a ff d8 ff e0 00 10 4a 46 49 46 00 01 01 01 00 |.......JFIF.....|
00000010 60 00 60 00 00 ff fe 00 3e 43 52 45 41 54 4f 52 |......>CREATOR|
Any idea where the '0A' is coming from? I am running Zend Framework 2.5.1 and PHP 7.2.2.
Thank you, Tim Fountain.
I finally found the offending file buried in some Doctrine entities I had created. Sure enough, there was a stray "?>" followed by an empty line.
Much appreciated
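For anyone hunting the same bug: any included PHP file that ends with ?> followed by stray whitespace will flush that whitespace into the response before the image bytes. A standalone sketch of scanning for closing tags (the sample file and path are illustrative; the usual fix is to omit ?> from class files entirely):

```shell
# Create a sample file reproducing the pattern: a closing tag followed
# by a blank line (this is what leaks the 0x0A into the response).
mkdir -p /tmp/zf-scan
printf '<?php\nclass Foo {}\n?>\n\n' > /tmp/zf-scan/Entity.php

# List every PHP closing tag in the tree; files listed here are
# candidates for leaking whitespace.
grep -RIn '?>' /tmp/zf-scan
```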