When my computer powers down, I would like it to run a script that sends an HTTP request to the IoT devices on my network, telling them that the computer has shut down. I think I have everything needed to make this happen; all that's left is integrating the pieces.
Windows 10 already has a feature that lets me run scripts during power-up/shutdown: gpedit.msc -> Computer Configuration -> Windows Settings -> Scripts -> Shutdown.
Windows 10 also has curl built in, so I am planning to use that for the HTTP requests. I need to make 5 GET requests before shutdown:
curl --silent --output nul --show-error --fail 192.168.0.10/shutdown
curl --silent --output nul --show-error --fail 192.168.0.11/shutdown
curl --silent --output nul --show-error --fail 192.168.0.12/shutdown
curl --silent --output nul --show-error --fail 192.168.0.13/shutdown
curl --silent --output nul --show-error --fail 192.168.0.14/shutdown
In the gpedit.msc shutdown script settings there are two choices, an ordinary script or a PowerShell script; which one is better? Although I have the curl commands ready, I don't know how to save them. Do I just create a file containing the commands above? What should the file extension be?
Batch files are the easiest.
You need two batch files:
One to run the curl commands
One to open the CMD window and call the first batch file
Example: shutdown.bat is the batch file that is run on shutdown.
shutdown.bat opens the CMD window and runs shutdowncurl.bat.
shutdowncurl.bat
curl --silent --output nul --show-error --fail 192.168.0.10/shutdown
curl --silent --output nul --show-error --fail 192.168.0.11/shutdown
curl --silent --output nul --show-error --fail 192.168.0.12/shutdown
curl --silent --output nul --show-error --fail 192.168.0.13/shutdown
curl --silent --output nul --show-error --fail 192.168.0.14/shutdown
shutdown.bat
cmd /k shutdowncurl
pause
In this case the two batch files are in the same folder.
You could use full path names as well.
To test, double-click shutdown.bat in Windows Explorer, or right-click it and choose Open.
After testing, remove the pause from shutdown.bat.
I tested this where the shutdowncurl.bat looked like this:
curl --help
curl --help
curl --help
curl --help
PUBLIC_DNS=$(aws ec2 describe-instances --region ${AWS_DEFAULT_REGION} --filters 'Name=tag:Name,Values=udapeople-backend-ec2-*' --query "Reservations[*].Instances[0].PublicDnsName" --output text)
echo ${PUBLIC_DNS}
curl -H "Content-Type: text/plain" \
-H "token: ${CIRCLE_WORKFLOW_ID}" \
--request PUT \
--data ${PUBLIC_DNS} \
https://api.memstash.io/values/public_dns
curl: no URL specified!
curl: try 'curl --help' or 'curl --manual' for more information
Exited with code exit status 2
CircleCI received exit code 2
Your error isn't with CircleCI but with your curl command. The error message is saying that curl doesn't have a URL to PUT to. I do see that you included a URL in your curl command, so the problem may be in your line endings (the backslash continuations). Try removing the line breaks and running your CircleCI job again. You can also try running the command from your local command line.
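For example, a minimal sketch of the same request on a single line with the variable quoted (the URL and variable names are taken from the question; untested):
curl -H "Content-Type: text/plain" -H "token: ${CIRCLE_WORKFLOW_ID}" --request PUT --data "${PUBLIC_DNS}" https://api.memstash.io/values/public_dns
Quoting ${PUBLIC_DNS} also prevents curl from treating anything after a space in its value as a separate argument.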
This is because memstash.io is no longer working as a website or web service; there is no issue in your code. memstash worked as a memory cache for CD jobs, so you can either find another caching service for your CD pipeline or, a good option, use CircleCI's own caching; search the CircleCI docs for details.
According to the documentation below, one of the hook handler implementations is described as: "HTTP - Executes an HTTP request against a specific endpoint on the Container."
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-implementations
Using the preStop hook, I tried to run the following curl script, but it returns nothing. Is the preStop hook limited to making HTTP requests within the container, i.e. to localhost?
echo "test curl" > /proc/1/fd/1
echo $(curl -s /dev/null http://google.com) > /proc/1/fd/1
echo $(curl -s -o /dev/null -w "%{http_code}" http://google.com) > /proc/1/fd/1
No, as far as I know you are not limited to using preStop's httpGet only within the container. Your container just needs access to the requested URL, so in your case you should be able to reach Google.
May I know what exactly you want to achieve? Are you trying to redirect the curl output to the process with PID 1?
Your command works perfectly in containers that have curl installed, when I redirect to STDOUT, i.e. /proc/self/fd/1:
kubectl exec -ti curl -- bash
root@curl:/# echo $(curl -s -o /dev/null -w "%{http_code}" http://google.com) > /proc/self/fd/1
301
By the way, you can use exec instead of httpGet in preStop, where you can combine echo and curl.
The YAML will be similar to:
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "curl -X POST -s http://google.com > /proc/1/fd/1"]
Note that the redirection only works through a shell, which is why the command is wrapped in /bin/sh -c. Please play with the command and adjust it to your needs; I haven't tested it, I wrote it on a flight.
I'm trying to crawl a website that requires me to be logged in, using wget, but it stops every time it finds a logout URL (https://example.com/logout/).
I've tried excluding the directories but without success.
This is my command:
wget --content-disposition --header "Cookie: session_cookies" -k -m -r -E -p --level=inf --retry-connrefused -D site.com -X */logout/*,*/settings/* -o log.txt https://example.com/
I've tried the -R option instead of -X, but that didn't work.
This can be solved with the "--reject-regex" option, like this: "--reject-regex logout". See the wget-dev tips for details.
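For example, a sketch of the command from the question with the -X exclusions replaced by a regex (the placeholders are the question's; the alternation syntax assumes wget's default POSIX regex type):
wget --content-disposition --header "Cookie: session_cookies" -k -m -r -E -p --level=inf --retry-connrefused -D site.com --reject-regex "(logout|settings)" -o log.txt https://example.com/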
I get a WAR file daily and deploy it on a GlassFish server via a Remote Desktop connection (Windows server). I want to automate the deployment: I just put the WAR file in some predefined location, run a script, and that script deploys the latest WAR and restarts the GlassFish server as well.
So what should I do? A batch script, or some other kind of script?
You can put your WAR file in the auto deploy folder of GlassFish:
as-install/domains/domain1/autodeploy
Or you can use a script to deploy with the asadmin tool.
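A minimal sketch of such a script (the WAR path and domain name are assumptions, not from the question; asadmin ships in the GlassFish bin directory):
#!/bin/bash
# Redeploy the latest WAR (hypothetical path) and restart the domain.
asadmin deploy --force=true /path/to/application.war
asadmin restart-domain domain1
On Windows the same two asadmin commands can go into a batch script instead.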
Or you can use the REST interface to deploy:
curl -s -S \
-H 'Accept: application/json' -X POST \
-H 'X-Requested-By: dummy' \
-F id=@/path/to/application.war \
-F force=true http://localhost:4848/management/domain/applications/application
I have a job created using Spoon and imported into the DI repository.
Without scheduling it via the PDI job scheduler, how can I run a PDI job on a Data Integration Server using REST web services, so that I can call it whenever I want?
Before beginning these steps, please make sure that your Carte server (or Carte server embedded in the DI server) is configured to connect to the repository for REST calls. The process and description can be found on the wiki page. Note that the repositories.xml needs to be defined and in the appropriate location for the DI Server as well.
Method 1 (run the job and continue, no status checks):
Start a PDI Job (/home/admin/Job 1):
curl -L "http://admin:password#localhost:9080/pentaho-di/kettle/runJob?job=/home/admin/Job%201" 2> /dev/null | xmllint --format -
Method 2 (run the job and poll its status regularly):
Generate a login cookie:
curl -d "j_username=admin&j_password=password&locale=en_US" -c cookies.txt http://localhost:9080/pentaho-di/j_spring_security_check
Check DI Server status:
curl -L -b cookies.txt http://localhost:9080/pentaho-di/kettle/status?xml=Y | xmllint --format -
Result:
<?xml version="1.0" encoding="UTF-8"?>
<serverstatus>
  <statusdesc>Online</statusdesc>
  <memory_free>850268568</memory_free>
  <memory_total>1310720000</memory_total>
  <cpu_cores>4</cpu_cores>
  <cpu_process_time>22822946300</cpu_process_time>
  <uptime>100204</uptime>
  <thread_count>59</thread_count>
  <load_avg>-1.0</load_avg>
  <os_name>Windows 7</os_name>
  <os_version>6.1</os_version>
  <os_arch>amd64</os_arch>
  <transstatuslist>
    <transstatus>
      <transname>Row generator test</transname>
      <id>de44a94e-3bf7-4369-9db1-1630640e97e2</id>
      <status_desc>Waiting</status_desc>
      <error_desc/>
      <paused>N</paused>
      <stepstatuslist>
      </stepstatuslist>
      <first_log_line_nr>0</first_log_line_nr>
      <last_log_line_nr>0</last_log_line_nr>
      <logging_string><![CDATA[]]></logging_string>
    </transstatus>
  </transstatuslist>
  <jobstatuslist>
  </jobstatuslist>
</serverstatus>
Start a PDI Job (/home/admin/Job 1):
curl -L -b cookies.txt "http://localhost:9080/pentaho-di/kettle/runJob?job=/home/admin/Job%201" | xmllint --format -
Result:
<webresult>
  <result>OK</result>
  <message>Job started</message>
  <id>dd419628-3547-423f-9468-2cb5ffd826b2</id>
</webresult>
Check the job's status:
curl -L -b cookies.txt "http://localhost:9080/pentaho-di/kettle/jobStatus?name=/home/admin/Job%201&id=dd419628-3547-423f-9468-2cb5ffd826b2&xml=Y" | xmllint --format -
Result:
<?xml version="1.0" encoding="UTF-8"?>
<jobstatus>
  <jobname>Job 1</jobname>
  <id>dd419628-3547-423f-9468-2cb5ffd826b2</id>
  <status_desc>Finished</status_desc>
  <error_desc/>
  <logging_string><![CDATA[H4sIAAAAAAAAADMyMDTRNzDUNzJSMDSxMjawMrZQ0FXwyk9SMATSwSWJRSUK+WkKWUCB1IrU5NKSzPw8LiPCmjLz0hVS80qKKhWiXUJ9fSNjSdQUXJqcnFpcTEibW2ZeZnFGagrEgahaFTSKUotLc0pso0uKSlNjNckwCuJ0Eg3yQg4rhTSosVwABykpF2oBAAA=]]></logging_string>
  <first_log_line_nr>0</first_log_line_nr>
  <last_log_line_nr>13</last_log_line_nr>
  <result>
    <lines_input>0</lines_input>
    <lines_output>0</lines_output>
    <lines_read>0</lines_read>
    <lines_written>0</lines_written>
    <lines_updated>0</lines_updated>
    <lines_rejected>0</lines_rejected>
    <lines_deleted>0</lines_deleted>
    <nr_errors>0</nr_errors>
    <nr_files_retrieved>0</nr_files_retrieved>
    <entry_nr>0</entry_nr>
    <result>Y</result>
    <exit_status>0</exit_status>
    <is_stopped>N</is_stopped>
    <log_channel_id/>
    <log_text>null</log_text>
    <result-file/>
    <result-rows/>
  </result>
</jobstatus>
Get the status description from the jobStatus API:
curl -L -b cookies.txt "http://localhost:9080/pentaho-di/kettle/jobStatus?name=/home/admin/Job%201&id=dd419628-3547-423f-9468-2cb5ffd826b2&xml=Y" 2> /dev/null | xmllint --xpath "string(/jobstatus/status_desc)" -
Result:
Finished
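If you want to poll regularly rather than check once, here is a minimal Bash sketch (the job name, id, and the "Finished" status string are taken from the output above; other terminal states such as errors would need extra handling):
#!/bin/bash
# Poll the jobStatus endpoint every 10 seconds until the job reports Finished.
JOB_ID="dd419628-3547-423f-9468-2cb5ffd826b2"
URL="http://localhost:9080/pentaho-di/kettle/jobStatus?name=/home/admin/Job%201&id=${JOB_ID}&xml=Y"
STATUS=""
while [ "${STATUS}" != "Finished" ]; do
  sleep 10
  STATUS=$(curl -s -L -b cookies.txt "${URL}" | xmllint --xpath "string(/jobstatus/status_desc)" -)
  echo "Job status: ${STATUS}"
done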
PS: curl and libxml2-utils were installed via apt-get.
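For example, on a Debian/Ubuntu host (assumed here) both can be installed with:
sudo apt-get install -y curl libxml2-utils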
The libxml2-utils package is optional and is used solely for formatting the XML output from the DI Server. The steps above show how to start a PDI job from a Bash shell.
Supported in version 5.3 and later.