What is a valid HTTP request that can be used within a preStop hook? - Kubernetes

The documentation below states: "HTTP - Executes an HTTP request against a specific endpoint on the Container."
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-implementations
Using a preStop hook, I tried to run the following script with curl, but it returns nothing. Is the preStop hook limited to HTTP requests within the container, i.e. localhost?
echo "test curl" > /proc/1/fd/1
echo $(curl -s -o /dev/null http://google.com) > /proc/1/fd/1
echo $(curl -s -o /dev/null -w "%{http_code}" http://google.com) > /proc/1/fd/1

No, as far as I know you are not limited to using preStop's httpGet only within the container. Your container just needs network access to the requested URL, so in your case you should be able to reach Google.
May I ask what exactly you want to achieve? Are you trying to redirect curl's output to the process with PID 1?
Your command works perfectly in containers (that have curl installed) when I redirect to stdout, i.e. /proc/self/fd/1:
kubectl exec -ti curl -- bash
root@curl:/# echo $(curl -s -o /dev/null -w "%{http_code}" http://google.com) > /proc/self/fd/1
301
By the way, you can use exec instead of httpGet in preStop, where you can combine echo and curl. The YAML will look similar to this:
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "curl -X POST -s http://google.com > /proc/1/fd/1"]
Please play with the command and adjust it to your needs. I haven't tested it; I wrote it in flight.
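For completeness, here is a minimal, untested sketch of a full Pod manifest using such an exec preStop hook, applied via a heredoc. The pod name, the curlimages/curl image, and the target URL are assumptions for illustration, not from the question:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: curlimages/curl
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # On pod deletion, fetch the URL and write the HTTP status code
          # to PID 1's stdout so it lands in the container log.
          command: ["/bin/sh", "-c", "curl -s -o /dev/null -w '%{http_code}' http://google.com > /proc/1/fd/1"]
EOF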

Related

Running “kubectl proxy” for a single request

Is there any way to run kubectl proxy, giving it a command as input, and shutting it down when the response is received?
I'm imagining something with the -u (unix socket) flag, like this:
kubectl proxy -u - < $(echo "GET /api/v1/namespaces/default")
I don't think it's possible, but maybe my socket fu just isn't strong enough.
You don't need a long-running kubectl proxy for this.
Try this:
kubectl get --raw=/api/v1/namespaces/default
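Since kubectl get --raw returns raw JSON, you can also pretty-print the response by piping it through jq (assuming jq is installed; this is my own addition):
kubectl get --raw=/api/v1/namespaces/default | jq .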
kubectl proxy won't give you any way to run a one-off request and terminate the proxy.
A generic way to start a command in the background, run another command, and finally terminate the initially started command would be a bash script like:
#!/usr/bin/env bash
set -eu

kubectl proxy &
proxy_pid=$!
echo "$proxy_pid"

# Kill the proxy on exit, even if a later step fails.
function cleanup {
  echo "killing kubectl proxy" >&2
  kill "$proxy_pid"
}
trap cleanup EXIT

# Wait until the proxy answers on its default port (8001).
until curl -fsSL http://localhost:8001/ > /dev/null; do
  echo "waiting for kubectl proxy" >&2
  sleep 5
  # TODO add max retries so you can break out of this
done

curl http://localhost:8001/api/v1/namespaces/default
If you actually want to use sockets:
Start the proxy on a unix domain socket, e.g. kubectl proxy -u ./foo.sock
Make sure your curl supports unix domain sockets and call curl --unix-socket ./foo.sock http://localhost/api/v1/namespaces/default etc.
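Putting both together, a rough, untested sketch of a one-off request over the socket (the socket path and the fixed sleep are assumptions; a readiness loop like the one above would be more robust):
kubectl proxy -u ./foo.sock &
proxy_pid=$!
sleep 1   # crude wait for the socket to appear
curl --unix-socket ./foo.sock http://localhost/api/v1/namespaces/default
kill "$proxy_pid"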

Wget, preventing session log out

I'm trying to crawl a website that requires login with wget, but it stops every time it finds the logout URL (https://example.com/logout/).
I've tried excluding the directories, but without success.
This is my command:
wget --content-disposition --header "Cookie: session_cookies" -k -m -r -E -p --level=inf --retry-connrefused -D site.com -X */logout/*,*/settings/* -o log.txt https://example.com/
I've tried with -R option instead of -X but that didn't work.
This can be solved with the --reject-regex option, e.g. --reject-regex logout; see the wget documentation for details.
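Applied to the command above, it would look something like this (untested; the regex skips any URL containing /logout/ or /settings/):
wget --content-disposition --header "Cookie: session_cookies" -k -m -r -E -p \
     --level=inf --retry-connrefused -D site.com \
     --reject-regex '/(logout|settings)/' \
     -o log.txt https://example.com/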

How to save Charles session to a file from command line?

I wonder how to save a Charles session to a file.
Consider the following script:
open -ga Charles --args -headless -config charles.xml results.chls
#...some interactions here
pgrep -f Charles | xargs kill
I'm expecting to see something in results.chls, but the file is empty.
I figured it out myself. It seems the only way is to enable web control access in Charles and use HTTP, like this:
curl --silent -x localhost:8888 http://control.charles/session/export-har -o "${EXPORT_FILE}" > /dev/null
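Putting it together with the original script, the full headless flow might look like this (an untested sketch; it assumes web control access is enabled and Charles is proxying on localhost:8888):
EXPORT_FILE="results.har"
open -ga Charles --args -headless -config charles.xml
# ...some interactions here...
curl --silent -x localhost:8888 http://control.charles/session/export-har -o "${EXPORT_FILE}"
pgrep -f Charles | xargs kill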

How to confirm Solr is running from the command line?

We have a few servers that are going to be rebooted soon and I may have to restart Apache Solr manually.
How can I verify (from the command line) that Solr is running?
The proper way is to use Solr's STATUS command. You could parse its XML response, but as long as it returns something to you with an HTTP status of 200, it should be safe to assume it's running. You can perform an HTTP HEAD request using curl with:
curl -s -o /dev/null -I -w '%{http_code}' http://example.com:8983/solr/admin/cores?action=STATUS
Note: you can also add -m <seconds> to the command to wait at most that many seconds for a response.
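For example, with a five-second timeout:
curl -s -o /dev/null -I -m 5 -w '%{http_code}' "http://example.com:8983/solr/admin/cores?action=STATUS"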
This will make a request to the Solr admin interface and print 200 on success, which can be used from a bash script such as:
RESULT=$(curl -s -o /dev/null -I -w '%{http_code}' "http://example.com:8983/solr/admin/cores?action=STATUS")
if [ "$RESULT" -eq 200 ]; then
  echo "Solr is running"       # put your success handling here
else
  echo "Solr is not running"   # put your failure handling here
fi
If you are on the same machine where Solr is running then this is my favourite:
$> solr status

How to set up cron using curl command?

After an Apache rebuild, my cron jobs stopped working.
I used the following command:
wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
Now my DC support suggests that I change the wget method to curl. What would be the correct equivalent in this case?
-O - is equivalent to curl's default behavior, so that's easy.
-q is curl's -s (or --silent)
--retry N will substitute for wget's -t N
All in all:
curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl
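For example, a crontab entry running this every five minutes might look like the following (the schedule is an assumption; use the binary's full path, since cron runs with a minimal PATH):
*/5 * * * * /usr/bin/curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl > /dev/null 2>&1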
Try running it with the full path to wget:
/usr/bin/wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
you can find the full path with:
which wget
Also, check whether you can reach the destination domain with ping or other methods:
ping example.com
Update:
Based on the comments, this seems to be caused by this line in /etc/hosts:
127.0.0.1 example.com #change example.com to the real domain
Your options are limited: on the server where the cron job runs, the domain is pinned to 127.0.0.1, but the virtual host configuration does not work with that address.
What you can do is let wget connect by IP but send the Host header so that virtual host matching works:
wget -O - -q -t 1 --header 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl
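The curl equivalent of that wget command would be something like (untested; the IP placeholder is kept from above):
curl -s --retry 1 -H 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl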
Update:
Also, you probably don't need to run this through the web server at all, so why not just run:
perl /path/to/your/script/autobonus.pl