So I have deployments exposed behind a GCE ingress.
On the deployment, I implemented a simple readinessProbe on a working path, as follows:
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /claim/maif/login/?next=/claim/maif
    port: 8888
    scheme: HTTP
  initialDelaySeconds: 20
  periodSeconds: 60
  successThreshold: 1
  timeoutSeconds: 1
Everything works well: the first health check comes 20 seconds later and answers 200:
{address space usage: 521670656 bytes/497MB} {rss usage: 107593728 bytes/102MB} [pid: 92|app: 0|req: 1/1] 10.108.37.1 () {26 vars in 377 bytes} [Tue Nov 6 15:13:41 2018] GET /claim/maif/login/?next=/claim/maif => generated 4043 bytes in 619 msecs (HTTP/1.1 200) 7 headers in 381 bytes (1 switches on core 0)
But just after that, I get tons of other requests from other health checks, on /:
{address space usage: 523993088 bytes/499MB} {rss usage: 109850624 bytes/104MB} [pid: 92|app: 0|req: 2/2] 10.132.0.14 () {24 vars in 277 bytes} [Tue Nov 6 15:13:56 2018] GET / => generated 6743 bytes in 53 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 515702784 bytes/491MB} {rss usage: 100917248 bytes/96MB} [pid: 93|app: 0|req: 1/3] 10.132.0.20 () {24 vars in 277 bytes} [Tue Nov 6 15:13:56 2018] GET / => generated 1339 bytes in 301 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 518287360 bytes/494MB} {rss usage: 103759872 bytes/98MB} [pid: 93|app: 0|req: 2/4] 10.132.0.14 () {24 vars in 277 bytes} [Tue Nov 6 15:13:58 2018] GET / => generated 6743 bytes in 52 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 518287360 bytes/494MB} {rss usage: 103837696 bytes/99MB} [pid: 93|app: 0|req: 3/5] 10.132.0.21 () {24 vars in 277 bytes} [Tue Nov 6 15:13:58 2018] GET / => generated 6743 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
{address space usage: 523993088 bytes/499MB} {rss usage: 109875200 bytes/104MB} [pid: 92|app: 0|req: 3/6] 10.132.0.4 () {24 vars in 275 bytes} [Tue Nov 6 15:13:58 2018] GET / => generated 6743 bytes in 50 msecs (HTTP/1.1 200) 4 headers in 124 bytes (1 switches on core 0)
As I understand it, the documentation says:
The Ingress controller looks for a compatible readiness probe first, if it finds one, it adopts it as the GCE loadbalancer's HTTP(S) health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. This is an example of an Ingress that adopts the readiness probe from the endpoints as its health check.
But I don't understand this behaviour.
How can I limit the health checks to just the one I defined on my deployment?
Thanks,
You need to define ports in your deployment.yaml for the port numbers used in the readinessProbe, like:
ports:
  - containerPort: 8888
    name: health-check-port
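Put together, the relevant part of the pod spec looks roughly like this (the container name and image below are placeholders; the probe is the one from the question):

containers:
  - name: web                      # placeholder name
    image: my-app:latest           # placeholder image
    ports:
      - containerPort: 8888        # must match the readinessProbe port
        name: health-check-port
    readinessProbe:
      httpGet:
        path: /claim/maif/login/?next=/claim/maif
        port: 8888
        scheme: HTTP
      initialDelaySeconds: 20
      periodSeconds: 60
      successThreshold: 1
      failureThreshold: 3
      timeoutSeconds: 1

With the port declared on the container, the ingress controller can associate the probe with the backend port and adopt it as the load balancer's health check instead of falling back to /.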
I have created a pod and a service called node-port.
root@hello-client:/# nslookup node-port
Server: 10.100.0.10
Address: 10.100.0.10#53
Name: node-port.default.svc.cluster.local
Address: 10.100.183.19
I can enter inside a pod and see the resolution happening.
However, the TCP connection is not happening from a node.
root@hello-client:/# curl --trace-ascii - http://node-port.default.svc.cluster.local:3050
== Info: Trying 10.100.183.19:3050...
What are the likely factors contributing to failure?
What are some suggestions to troubleshoot this?
On a working node/cluster, I expect this to work like this:
/ # curl --trace-ascii - node-port:3050
== Info: Trying 10.100.13.83:3050...
== Info: Connected to node-port (10.100.13.83) port 3050 (#0)
=> Send header, 78 bytes (0x4e)
0000: GET / HTTP/1.1
0010: Host: node-port:3050
0026: User-Agent: curl/7.83.1
003f: Accept: */*
004c:
== Info: Mark bundle as not supporting multiuse
<= Recv header, 17 bytes (0x11)
0000: HTTP/1.1 200 OK
<= Recv header, 38 bytes (0x26)
0000: Server: Werkzeug/2.2.2 Python/3.8.13
<= Recv header, 37 bytes (0x25)
0000: Date: Fri, 26 Aug 2022 04:34:48 GMT
<= Recv header, 32 bytes (0x20)
0000: Content-Type: application/json
<= Recv header, 20 bytes (0x14)
0000: Content-Length: 25
<= Recv header, 19 bytes (0x13)
0000: Connection: close
<= Recv header, 2 bytes (0x2)
0000:
<= Recv data, 25 bytes (0x19)
0000: {. "hello": "world".}.
{
"hello": "world"
}
== Info: Closing connection 0
/ #
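For reference, the generic checks that usually narrow this kind of thing down look like the following (a sketch assuming kubectl access and the default namespace, not output from the cluster above):

kubectl get svc node-port           # service type, cluster IP and the 3050 port mapping
kubectl get endpoints node-port     # an empty ENDPOINTS column means the selector matches no ready pod
kubectl describe svc node-port      # compare TargetPort with the port the container actually listens on
kubectl get pods -o wide            # confirm the backing pod is Running and Ready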
I'm using imaplib, and trying to fetch emails with a certain subject header value.
My code:
res, tmp = self.mail.uid('search', None, 'HEADER Subject "SUBJECT_HERE"')
print(tmp)
print(res)
print("test 2 goes:")
rr, tt = self.mail.search(None, 'HEADER Subject "SUBJECT_HERE"')
print(tt)
print(rr)
Result:
[b'225 232 323 324 346 366 382 419 420 425 450 463 517 607 670 751 833
911 1043 1129 1133 1134 1287 1350 1799 1854 1957 1960 1962 1991 2005
2040 2071 2110 2119 2121 2153 2158 2182 2188 2189 2228 2230 2239 2249
2334 2335 2372 2378 2396 2435 2497 2567 2568 2573 2574 2575 2632 2633
2634 2648 2649 2709 2785 2819 2821 2828 2829 2868 2885 2895 2902 2906
2920 2993 2997 2998 3000 3001 3009'] OK
test 2 goes:
[b'220 227 318 319 340 360 376 413 414 419 444 457 511 601 664 745 827
905 1037 1123 1127 1128 1281 1344 1793 1848 1951 1954 1956 1985 1999
2034 2065 2104 2113 2115 2147 2152 2176 2182 2183 2222 2224 2233 2243
2328 2329 2366 2372 2390 2429 2491 2561 2562 2567 2568 2569 2625 2626
2627 2641 2642 2702 2778 2812 2814 2821 2822 2861 2878 2888 2895 2899
2913 2986 2990 2991 2993 2994 3002'] OK
I thought those two commands would produce the same results.
But as shown above, the two seem to fetch different emails.
What is the difference?
One (SEARCH) returns message sequence numbers (MSNs), which are numbered from 1 to N and change as messages are added and deleted. A message that is number 5 now could be number 4 tomorrow if you delete a message before it.
The other (UID SEARCH) returns UIDs, which do not change as messages are deleted. They're two completely different sets of identifiers. A message with UID 5 will remain UID 5 until it is deleted (or moved, etc.).
A UID will never be reused in that incarnation of a folder. If the folder is deleted and recreated, or the mail server is rebuilt, the folder's UIDVALIDITY should change so you can detect that your cache is no longer valid.
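A short imaplib sketch of the distinction (host, credentials and subject below are placeholders): fetching the first hit by its sequence number and by its UID should return the same message, even though the two numbers differ.

import imaplib

mail = imaplib.IMAP4_SSL("imap.example.com")        # placeholder host
mail.login("user@example.com", "password")          # placeholder credentials
mail.select("INBOX")

# SEARCH returns message sequence numbers (1..N, renumbered on expunge)
typ, data = mail.search(None, 'HEADER Subject "SUBJECT_HERE"')
msns = data[0].split()

# UID SEARCH returns UIDs (stable until the message itself is removed)
typ, data = mail.uid('search', None, 'HEADER Subject "SUBJECT_HERE"')
uids = data[0].split()

# Same message, addressed two different ways
typ, by_msn = mail.fetch(msns[0].decode(), '(BODY.PEEK[HEADER.FIELDS (MESSAGE-ID)])')
typ, by_uid = mail.uid('fetch', uids[0].decode(), '(BODY.PEEK[HEADER.FIELDS (MESSAGE-ID)])')
print(by_msn[0][1])
print(by_uid[0][1])

The offset between the two lists in the question (225 vs 220, and so on) is consistent with a few earlier messages having been expunged, which shifts sequence numbers down while UIDs stay put.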
I'm using uWSGI to run two PSGI Perl Dancer apps.
Is it normal for uwsgi vassals to repeatedly announce their loyalty to the Emperor, upon nearly every request?
Here's a small portion of my uwsgi.log file:
announcing my loyalty to the Emperor...
Mon Aug 17 20:51:59 2015 - [emperor] vassal www.ini is now loyal
[pid: 1713|app: 0|req: 4/11] 0.0.0.0 () {44 vars in 873 bytes} [Mon Aug 17 20:52:12 2015] GET /sitemap-index.xml => generated 284 bytes in 7 msecs (HTTP/1.1 200) 4 headers in 146 bytes (0 switches on core 0)
[pid: 1706|app: 0|req: 2/12] 0.0.0.0 () {42 vars in 808 bytes} [Mon Aug 17 20:52:22 2015] GET / => generated 113840 bytes in 207 msecs (HTTP/1.1 200) 4 headers in 143 bytes (0 switches on core 0)
[pid: 1709|app: 0|req: 1/13] 0.0.0.0 () {42 vars in 844 bytes} [Mon Aug 17 20:52:35 2015] GET /about => generated 124031 bytes in 1325 msecs (HTTP/1.1 200) 4 headers in 143 bytes (0 switches on core 0)
announcing my loyalty to the Emperor...
Mon Aug 17 20:52:36 2015 - [emperor] vassal www.ini is now loyal
[pid: 1713|app: 0|req: 5/14] 0.0.0.0 () {44 vars in 865 bytes} [Mon Aug 17 20:52:38 2015] GET / => generated 113840 bytes in 129 msecs (HTTP/1.1 200) 4 headers in 143 bytes (0 switches on core 0)
These announcements of loyalty to the Emperor appear to occur with nearly every request.
Are these poor vassals trying to kiss up to the Emperor for special favors, or (more likely) is there something wrong with my configuration?
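For context, the setup is the usual Emperor/vassal arrangement: one emperor process watching a vassal directory, with one .ini file per Dancer app. Roughly (the paths, port and names below are placeholders, not my exact files):

; emperor configuration (sketch)
[uwsgi]
emperor = /etc/uwsgi/vassals

; /etc/uwsgi/vassals/www.ini (sketch) -- one such file per app
[uwsgi]
plugins     = psgi
psgi        = /var/www/www/bin/app.psgi
http-socket = :8080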
GWT RPC requests fail in Opera when the site has an internationalized domain name.
The error message is as follows:
Unable to initiate the asynchronous service invocation
(GreetingService_Proxy.getDate) -- check the network connection
(DOMException) code: 12 INDEX_SIZE_ERR: 1 DOMSTRING_SIZE_ERR: 2
HIERARCHY_REQUEST_ERR: 3 WRONG_DOCUMENT_ERR: 4 INVALID_CHARACTER_ERR:
5 NO_DATA_ALLOWED_ERR: 6 NO_MODIFICATION_ALLOWED_ERR: 7 NOT_FOUND_ERR:
8 NOT_SUPPORTED_ERR: 9 INUSE_ATTRIBUTE_ERR: 10 INVALID_STATE_ERR: 11
SYNTAX_ERR: 12 INVALID_MODIFICATION_ERR: 13 NAMESPACE_ERR: 14
INVALID_ACCESS_ERR: 15 VALIDATION_ERR: 16 TYPE_MISMATCH_ERR: 17
SECURITY_ERR: 18 NETWORK_ERR: 19 ABORT_ERR: 20 URL_MISMATCH_ERR: 21
QUOTA_EXCEEDED_ERR: 22 TIMEOUT_ERR: 23 INVALID_NODE_TYPE_ERR: 24
DATA_CLONE_ERR: 25: SYNTAX_ERR
How could this be solved?
I have some weird problems with Sphinx.
Here's the query log:
[Mon Jan 31 05:43:21.362 2011] 0.158 sec [any/0/ext 511 (0,2000)] [_file] superman
[Mon Jan 31 05:43:51.739 2011] 0.143 sec [any/0/ext 952 (0,2000)] [_file] superman
[Mon Jan 31 05:44:22.042 2011] 0.003 sec [any/0/ext 952 (0,2000)] [_file] superman
[Mon Jan 31 05:44:52.313 2011] 0.003 sec [any/0/ext 952 (0,2000)] [_file] superman
[Mon Jan 31 05:45:22.553 2011] 0.003 sec [any/0/ext 952 (0,2000)] [_file] superman
As you can see, the result count returned is 511 the first time, then 952 (the correct count) for the rest. I've tried searching with different terms and the behaviour is the same.
Some observations:
1) If there are fewer than 511 results, the count returned is always correct. It's only when the count is greater than 511 and less than the max that it is wrong.
2) If there are more results than the max, the returned count is the max (correct).
3) The rest of the results are usually correct, up until the Sphinx index is rebuilt. Then we get 511 again.
I tried it on a different Sphinx installation and got the same result.
My client code:
$cl = new SphinxClient();   // instantiation implied by the calls below
$cl->setServer("localhost", 3312);
$cl->setMaxQueryTime(10);
$cl->SetLimits(0, 2000, 2000);
$cl->setMatchMode(SPH_MATCH_ANY);
$cl->setSortMode(SPH_SORT_EXTENDED, '#id DESC');
$result = $cl->query('superman', '_file');
sphinx.conf:
index download_file
{
source = file
path = /disk1/data/sphinx/file
morphology = stem_en
enable_star = 1
min_word_len = 3
min_prefix_len = 0
min_infix_len = 3
}
searchd
{
max_matches = 100000
port = 3312
log = /var/log/searchd/searchd.log
query_log = /var/log/searchd/query.log
pid_file = /var/log/searchd/searchd.pid
}
indexer
{
max_iops = 40
mem_limit = 128M
}