How to run the aria2c RPC server as a daemon?

I wish to run the aria2c RPC server as a daemon so that I can schedule download jobs from my own client over the RPC interface.

If you need an aria2c instance quickly, run the following command:
aria2c --enable-rpc --rpc-listen-all
This command tells aria2c to enable RPC mode (i.e. act as a daemon) and to listen on all network interfaces, which is not ideal for a public-facing server.
You may want to add options such as --rpc-user and --rpc-passwd (together), or the newer --rpc-secret, to run aria2c more securely.
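Once the daemon is up, scheduling jobs from your own client means talking to aria2's JSON-RPC endpoint, which by default lives at http://localhost:6800/jsonrpc. Here is a minimal sketch of building such a request in Python; the secret value and the download URL are placeholders, and the commented-out POST assumes the default port:

```python
import json

def build_add_uri(secret, uris, request_id="1"):
    """Build an aria2 JSON-RPC payload for aria2.addUri.

    When aria2c was started with --rpc-secret, the token must be the
    first positional parameter, formatted as "token:<secret>".
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "aria2.addUri",
        "params": ["token:" + secret, uris],
    })

payload = build_add_uri("s3cret", ["https://example.com/file.iso"])
print(payload)

# POST this to the daemon, e.g. with urllib.request:
# import urllib.request
# req = urllib.request.Request("http://localhost:6800/jsonrpc",
#                              data=payload.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```

The response contains a GID you can pass to methods like aria2.tellStatus to poll progress.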

Related

How to create an event handler to restart the Apache service

I want to create a Nagios Core event handler for whenever I stop the Apache service.
The Nagios log shows that the event handler script is being invoked, but the script is not actually executing.
I am following these documents.
These are the Nagios logs:
SERVICE ALERT: tecmint;HTTP load;CRITICAL;HARD;4;connect to address <ip> and port 80: Connection refused
[1607493385] SERVICE EVENT HANDLER: tecmint;HTTP load;CRITICAL;HARD;4;restart-httpd
Why is Apache not restarting?
If you want to monitor and restart Apache on a remote server, then you need to use SSH or NRPE; NRPE is preferred in this case as it is faster and doesn't require SSH key pair exchange.
Briefly, you would have one master Nagios server and one or more Nagios agents.
The master runs check_nrpe with arguments that ask an agent to check a service and, optionally, to run an event handler (script),
like this:
/usr/local/nagios/libexec/check_nrpe -H agent_IP_Address -c command
where command is something like check_http, which is installed on the agent as a plugin.
The master should have Nagios Core installed.
The agent should have the NRPE agent and the plugins (libexec) installed,
as in this manual:
https://assets.nagios.com/downloads/nagiosxi/docs/Installing_The_XI_Linux_Agent.pdf
Command, Hosts, and Services definitions will stay in the master
The script that restart Apache (the event handler) should be in the agent
This is a full reference on how to install and configure the NRPE master-agent model:
https://assets.nagios.com/downloads/nagioscore/docs/nrpe/NRPE.pdf
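As for why the handler runs but Apache doesn't restart: event handlers receive the service state macros as arguments, and the sample handler in the Nagios Core docs only restarts on the 3rd SOFT CRITICAL attempt or on a HARD CRITICAL state. A sketch of that decision logic (the argument order must match the $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$ order in your command definition, and the systemctl unit name is a placeholder):

```python
#!/usr/bin/env python3
"""Sketch of a restart-httpd event handler following the logic of the
sample handler in the Nagios Core documentation."""
import subprocess
import sys

def should_restart(state, state_type, attempt):
    """Restart only on the 3rd SOFT CRITICAL check or a HARD CRITICAL."""
    if state != "CRITICAL":
        return False
    if state_type == "HARD":
        return True
    return state_type == "SOFT" and int(attempt) == 3

if __name__ == "__main__" and len(sys.argv) >= 4:
    state, state_type, attempt = sys.argv[1:4]
    if should_restart(state, state_type, attempt):
        # The nagios/NRPE user usually needs a sudoers entry for this.
        subprocess.run(["sudo", "systemctl", "restart", "httpd"],
                       check=False)
```

A common failure mode is exactly the sudo step: the handler is invoked, but the unprivileged nagios user is not allowed to restart the service, so nothing happens.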

Connection to Google Cloud SQL via proxy works in all scenarios except via socket in Docker container

Hopefully I'm doing something wrong; I've read all the documentation and scoured forums but can't seem to get to the bottom of an issue I'm experiencing. I'm on macOS (OS X), by the way.
Things that are working:
Connect to cloud SQL from local OS using proxy via either TCP or Socket
Connect to cloud SQL from local OS using proxy in container via TCP
Connect to cloud SQL from GKE using proxy in the same pod via TCP
Things that are not working:
Connect to cloud SQL from local OS using proxy in container via socket
Connect to cloud SQL from GKE using proxy in the same pod via socket
I suspect both of these problems are actually the same problem. I'm using this command to run the proxy inside of the container:
docker run -v [PATH]:/cloudsql \
gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy -dir=/cloudsql \
-instances=[INSTANCE_CONNECTION_NAME] -credential_file=/cloudsql/[FILE].json
And the associated socket is being generated with the directory. However when I attempt to connect I get the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/cloudsql/node-sql:us-central1:nodedb' (61)
The proxy doesn't print a new log line when I try to connect, which makes me think it isn't receiving the request; it simply says "Ready for new connections" and waits.
Any idea what's going wrong, or how I could troubleshoot this further?
For "Connect to cloud SQL from GKE using proxy in the same pod via socket" can you please follow the tutorial at https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine? We have a working WordPress example there that has the cloudsql-proxy as a sidecar container (i.e. in the same Pod, but over TCP).
I don't think you can do "in the same pod via socket" unless you’re running multiple processes in a single container (which you shouldn’t as a best practice). If you do a sidecar container, you can use TCP, so you don’t need a unix socket (moreover, I'm not sure how you’d share files between containers of a Pod).
Also, docker run -v /local.sock:/remote.sock (I think) will create a file/directory locally as /local.sock and make it available inside the container as /remote.sock. This might not work because the docker engine doesn't know that /local.sock is meant to be a Unix socket, so it creates a regular file.
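One quick way to verify that last point is to stat the path on the host (or inside the container) and check whether it is actually a Unix domain socket rather than a regular file. A small sketch:

```python
import os
import socket
import stat
import tempfile

def is_unix_socket(path):
    """Return True if path exists and is a Unix domain socket."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    return stat.S_ISSOCK(mode)

# Demo: compare a real socket with a plain file that merely looks like one.
d = tempfile.mkdtemp()
sock_path = os.path.join(d, "real.sock")
s = socket.socket(socket.AF_UNIX)
s.bind(sock_path)          # bind() creates the socket file on disk

file_path = os.path.join(d, "plain.sock")
open(file_path, "w").close()

print(is_unix_socket(sock_path))   # True
print(is_unix_socket(file_path))   # False: a regular file named *.sock
s.close()
```

If the MySQL client's error 2002 points at a path where is_unix_socket returns False, the mount gave you a regular file and the connection can never succeed over it.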

How to restart operating systems via an XMPP/ejabberd server using Python

I want to create an XMPP server on my network and then send messages to it with Python, so that this server can restart a target computer on the network. How can I set up this server, and how can I do the rest of the process?
Thanks.
I am not sure in what context you are trying to do this, but XMPP has been used in contexts outside the usual chat and instant messengers (e.g. load balancers, RPC, ...).
There can be several ways of doing this. One way I can think of right now is using Jabber-RPC (XEP-0009), which says:
This specification defines an XMPP protocol extension for
transporting XML-RPC encoded requests and responses between two XMPP entities.
The protocol supports all syntax and semantics of XML-RPC except that
it uses XMPP instead of HTTP as the underlying transport.
Workflow-wise, here is how you can make this work:
You will need a Jabber server which is up and running, say on host-A.
You will need to configure a startup service on the other hosts in the network (say on host-B, host-C, host-D). This startup service is nothing but an XMPP client daemon which starts in the background whenever the host is started.
These XMPP clients configured as startup services are special in the sense that they accept incoming RPC calls (support for XEP-0009) and execute the received commands on the host.
Received RPC commands can be synonymous with shutdown, kill -9 xxxx, and so on, depending upon your specific needs.
Finally, an XMPP client on host-C can send one or more commands wrapped inside an iq stanza to the XMPP client running on host-B.
You can use one of the existing Python XMPP client libraries and simply extend their working examples for your use case. You will also need to check how to configure a startup service for your operating system (e.g. update-rc.d on Ubuntu or sc.exe on Windows).
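To make the XEP-0009 idea concrete, here is a sketch of how such a stanza can be assembled with only the standard library; the JIDs and the method name are placeholders, and a real client library (slixmpp, for instance) would handle the actual XMPP stream for you:

```python
import xmlrpc.client

def build_rpc_iq(to_jid, from_jid, iq_id, method, *args):
    """Wrap an XML-RPC methodCall in the <query/> child of an <iq/>
    stanza, as described by XEP-0009 (Jabber-RPC)."""
    call = xmlrpc.client.dumps(args, methodname=method)
    # dumps() prepends an XML prolog; drop it, since the methodCall is
    # embedded inside an already-open XML stream.
    if call.startswith("<?xml"):
        call = call.split("\n", 1)[1]
    return (
        "<iq type='set' to='{to}' from='{frm}' id='{id}'>"
        "<query xmlns='jabber:iq:rpc'>{call}</query>"
        "</iq>"
    ).format(to=to_jid, frm=from_jid, id=iq_id, call=call)

stanza = build_rpc_iq("host-b@example.net/daemon",
                      "controller@example.net/cli",
                      "rpc1", "system.restart", "now")
print(stanza)
```

The receiving daemon would parse the methodCall out of the query element, map the method name onto a local action (e.g. a shutdown command), and reply with an iq of type result.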

supervisord with haproxy, paster, and node js

I have to run paster serve for my app and Node.js for my real-time requirements; both are fronted by HAProxy. The catch is that I need to run HAProxy as root (to bind port 80) and the other processes as a normal user. How do I do that? I tried different ways, with no luck. I tried this command:
command=sudo haproxy
I think this is not the way we should do this. Any ideas?
You'll need to run supervisord as root, and configure it to run your various services under non-privileged users instead.
[program:paster]
# other configuration
user = wwwdaemon
In order for this to work, you cannot set the user option in the [supervisord] section (otherwise the daemon cannot restart your haproxy server). You therefore do want to make sure your supervisord configuration is only writeable by root so no new programs can be added to a running supervisord daemon, and you want to make sure the XML-RPC server options are well protected.
The latter means you need to review any [unix_http_server], [inet_http_server] and [rpcinterface:x] sections you have configured to be properly locked down. For example, use the chown and chmod options for the [unix_http_server] section to limit access to the socket file to privileged users only.
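Putting that together, a sketch of what the relevant configuration could look like (user names, paths, and program names are placeholders for your setup):

```ini
; /etc/supervisord.conf -- owned by root, not writeable by anyone else
[unix_http_server]
file = /var/run/supervisor.sock
chmod = 0700                ; socket usable by root only
chown = root:root

[supervisord]
; note: no "user =" here, so the daemon stays root and can
; (re)start the port-80 listener

[program:haproxy]
command = /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -db
; no "user =": runs as root so it can bind port 80

[program:paster]
command = /usr/local/bin/paster serve /srv/app/production.ini
user = wwwdaemon            ; dropped to an unprivileged account

[program:node]
command = /usr/bin/node /srv/app/realtime.js
user = wwwdaemon
```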
Alternatively, you can run a lightweight front-end server with minimal configuration to proxy port 80 to a non-privileged port, and keep this minimal server out of your supervisord setup. nginx is an excellent server for this, installed for example via the native packaging system of your server (e.g. apt-get on Debian or Ubuntu).

Access running mono application via command line

What is the best way to access a running mono application via the command line (Linux/Unix)?
Example: a mono server application is running and I want to send commands to it using the command line in the lightest/fastest way possible, causing the server to send back a response (e.g. to stdout).
I would say make a small, simple controller program that takes in your required command line arguments and uses remoting to send the messages to the running daemon.
This would be similar to the tray icon controller program talking to the background service that is prevalent in most Windows service patterns.
Mono's gsharp tool is a graphical REPL that lets you Attach to Process.
#Rich B: This is definitely a suitable solution, which I had already implemented; however, on the server I have to use, the remoting approach takes around 350 ms for a single request.
I've measured the time on the server side of the request handling - the request is handled in less than 10ms, so it has to be the starting of the client program and the tcp connection, that takes up the time.
Hence the hope that I can find another way to post the requests to the server application.
You can use the System.Net.Sockets abstractions to create a service on a TCP port, and then telnet to that port.
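The pattern itself is language-agnostic: a daemon listens on a TCP port, reads one command per line, and writes a one-line reply that any lightweight client (or telnet) can consume. Here is a minimal Python sketch of that pattern; a Mono version would use System.Net.Sockets.TcpListener the same way, and the ping/quit commands are just illustrative:

```python
import socket
import socketserver
import threading

class CommandHandler(socketserver.StreamRequestHandler):
    """Read one command per line and write a one-line response."""
    def handle(self):
        for raw in self.rfile:
            cmd = raw.decode().strip()
            if cmd == "ping":
                self.wfile.write(b"pong\n")
            elif cmd == "quit":
                self.wfile.write(b"bye\n")
                break
            else:
                self.wfile.write(b"unknown command\n")

# Port 0 asks the OS for any free port; a real daemon would pin one.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), CommandHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Lightweight client, equivalent to: telnet 127.0.0.1 <port>
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"ping\nquit\n")
    reply = c.makefile().read()
print(reply)
server.shutdown()
```

Because the server process stays resident, each request only pays for a TCP connect rather than client start-up plus remoting handshake, which is what the 350 ms above was mostly spent on.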
Check the library status page; Mono's coverage here is a bit patchy.