pactl called from systemd service always reports "pa_context_connect() failed connection refused" - pulseaudio

I've set up a systemd service file to perform some pactl operations at system startup for a test process. The commands work fine when run from a terminal, but I always get "pa_context_connect() failed connection refused" when the same script runs from the systemd service by starting the service. I'm also using the 'User=' directive in the service file to ensure that the service runs as the same user as the auto-login user.
I've read that this is somehow related to the PulseAudio session not being valid in the environment-less context of the systemd service, but I haven't been able to figure out more than that.

Although it might be a bit late for whatever project you were working on, here's what I found out.
The regular systemd instance, running as PID 1, indeed cannot access the environment variables of the current user when launching a service. Since pactl relies on those variables to find which instance of PulseAudio it needs to connect to, it is unable to do so when launched through a system service. I'm sure there's a fairly dirty workaround for this, but I found something better.
Most systems have a second instance of systemd running in userspace (accessible through systemctl --user when not connected as root). This instance can indeed access all the userspace environment variables, and I found that pactl doesn't return any errors when called this way, either directly or through a script.
All you need to do is put your services in either /usr/lib/systemd/user/, /etc/systemd/user/, or ~/.config/systemd/user/, remove the User= directive from your service file and run systemctl --user daemon-reload as a regular user to make sure they've been detected.
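For illustration, here is a minimal sketch of such a user unit; the unit name audio-setup.service and the script path are hypothetical, so substitute your own. Saved as ~/.config/systemd/user/audio-setup.service:

[Unit]
Description=Run pactl setup commands at login

[Service]
Type=oneshot
# %h expands to the user's home directory in user units
ExecStart=%h/bin/audio-setup.sh

[Install]
WantedBy=default.target

Then, as the regular user:
systemctl --user daemon-reload
systemctl --user enable --now audio-setup.service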


MongoDB: Error connecting to 127.0.0.1:27017, No connection could be made because the target machine actively refused it

I've been using Mongo for a while now, and I never had any kind of errors. But today, I tried running the mongo command in my terminal and I got the following error:
Error connecting to 127.0.0.1:27017 :: caused by :: No connection could be made because the target machine actively refused it. :
I have my PATH variable for Mongo properly configured in my environment variables as follows:
C:\Program Files\MongoDB\Server\4.4\bin
so I doubt that is the issue. I remember going through my Task Manager yesterday and accidentally terminating a background program related to Mongo, but I can't remember exactly what it was called. I really think that's the root of my problem, because before I terminated that Mongo program I had never run across this connection problem.
By terminating a program in the background, I'm going to assume you didn't just end the process, otherwise a simple computer restart would fix your issue, and in some cases that same program would have relaunched when you launched MongoDB. But if you disabled a service and need to find out which service must be running for you to connect to your MongoDB, then I would suggest going through your Windows Services list, seeing which ones are disabled, and looking for one relating to TCP or SNMP.
This is because the MongoDB Wire Protocol is a simple socket-based, request-response style protocol: you communicate with the database server through a regular TCP/IP socket. Since you can't remember which one you "terminated", and any number of networking-related services can cause a dependency to be absent, I can't be more specific in helping you determine which one to turn back on; you'll have to do it through trial and error, but I can at least offer some guidance.
Specifically, you can either:
1) Run System Configuration by typing msconfig in a Run box, navigate to the Services tab, and order the list by Date Disabled to find a service whose disable date correlates with when you went snooping through Task Manager, or
2) Run Task Manager and navigate to the Services tab, then Open Services, order the services by Status or by Name, and look for any disabled service that mentions TCP/IP, COM+, port direction, etc.; change its startup type to anything but Disabled, start it manually (see the command sketch after this list), and run MongoDB again.
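Equivalently, once you've identified the disabled service, you can re-enable and start it from an elevated command prompt; MongoDB below is an assumed service name (the default when MongoDB is installed as a Windows service), so substitute the one you find. Note the mandatory space after start=:
sc config MongoDB start= auto
sc start MongoDB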
That's about as specific as I can get without knowing anything more than that you terminated some program running in the background, but I hope it helps.
The background process (daemon) for MongoDB is called mongod. It's an executable in the bin directory inside your MongoDB installation. You can just execute it in the terminal.
Run:
"C:\Program Files\MongoDB\Server\4.4\bin\mongod.exe"
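The path contains spaces, so it needs the quotes. If mongod complains about a missing data directory, you can point it at one explicitly; C:\data\db is the documented default location, so adjust it if your installation uses a different path:
"C:\Program Files\MongoDB\Server\4.4\bin\mongod.exe" --dbpath "C:\data\db"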

Apache CloudStack: No templates showing when adding instance

I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine. The management server is running without problems. I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, no templates show up in the second step, as you can see in the screenshot.
However, I am able to see two templates under Templates link in web UI.
But when I select the template and navigate to the Zone tab, I see "Timeout waiting for response from storage host" and the Ready field shows "No".
When I check the management server logs, it seems there is an error when CloudStack tries to mount the secondary storage for use. The segment below from the cloudstack-management.log file shows this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer": {"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.16.10.2/export/secondary failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6 due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)","wait":0}}] }
Can anyone please guide me on how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed further.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP by making the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would cut off my internet access. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static using the same configuration settings as above and ran the CloudStack management server. Everything worked fine until I bumped into this template problem. Please help me figure out what might have gone wrong.
I know I'm late, but for anyone trying this in the future, here it goes:
I hope you successfully added a host as described in the Quick Install Guide before you changed your IP to static, as that step autoconfigures VLANs for the different traffic types and creates two bridges, generally named 'cloud' or 'cloudbr'. CloudStack uses the Secondary Storage System VM (SSVM) for all storage-related operations in each zone and cluster. What seems to be the problem is that the SSVM is not able to communicate with the management server on port 8250. If that's not it, try manually mounting the NFS server's mount points from the SSVM shell. You can ssh into the SSVM using the command below:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<Private or link-local IP address of SSVM>
I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh after sshing into the SSVM (assuming it is running and has its private, public, and link-local IP addresses). If that doesn't help much, take a look at the secondary storage troubleshooting docs for CloudStack.
If anyone runs into similar issues in the future, I would further recommend the following:
1) Check that the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you can open a console session to it from the browser.
2) If that works, run the ssvm-check.sh script mentioned above, which systematically checks every point of operation the SSVM depends on. Even if a console session cannot be opened, you can still ssh in using the SSVM's link-local IP address, which is shown in the SSVM's details, and then execute the script.
3) If the script says it cannot communicate with the management server on port 8250, check the iptables rules on the management server and make sure all traffic is allowed on port 8250. A quick way to test this is nc -v <mngmnt-server-ip> 8250; see the sketch after this list for opening the port if it is closed.
4) You mentioned you are using CentOS 6.8, which ships older NFS versions, so execute exportfs -a on your NFS server to make sure all the NFS shares are properly exported and there are no errors.
5) Wait for the built-in "CentOS 5.5 no GUI KVM" template to finish downloading and show a Ready status of "Yes" before you start importing your own templates and ISOs to run on VMs.
6) Finally, if ssvm-check.sh reports that everything is good and the download still does not start, run service cloud restart inside the SSVM and check that the service actually got a PID with service cloud status; older system VM templates sometimes require a manual service cloud start even after the restart command. Restarting the cloud service in the SSVM triggers the download of all remaining templates and ISOs.
Side note: the system VMs use a Debian kernel, in case you want to do more troubleshooting. Hope this helps.
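As a sketch of the port check and firewall fix from step 3 (assuming the management server uses iptables, as a stock CentOS 6 install does):
# from inside the SSVM: verify the management server port is reachable
nc -v <mngmnt-server-ip> 8250
# on the management server: allow inbound TCP 8250 and persist the rule
iptables -I INPUT -p tcp --dport 8250 -j ACCEPT
service iptables save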

DBus how to start service

I am curious how to start my own service for DBus.
On the official site I found a lot of information about working with DBus services from the client's point of view, but not enough about how to start and develop a service:
1) Where should the interface file ServiceName.xml be located?
2) Where should the service file ServiceName.service be located?
3) How can I launch the service manually, rather than at system start?
Can anybody help me or provide some useful links?
Make a service that is started by the service manager of the OS (init, systemd, etc.). In that program, instantiate the server-side object using the D-Bus library.
Normally you'll configure the service to start on boot, but with systemd it's also possible to start it when something connects to a specific socket or when something tries to use a specific device or bus name. These mechanisms are called 'socket activation' and 'D-Bus activation' (see the current systemd docs).
If you want to start the service manually instead, run systemctl disable <service-name> so it no longer starts on boot. To start it manually: systemctl start <service-name>.
The *.xml interface files aren't obligatory. Maybe look into other packages to see where they put these files.
The systemd unit files belong in one of the usual locations (see the systemd docs), such as /usr/lib/systemd/system.
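As a concrete sketch of the D-Bus side (the bus name org.example.MyService and the binary path are hypothetical), a session-bus activation file would go in /usr/share/dbus-1/services/org.example.MyService.service:

[D-BUS Service]
Name=org.example.MyService
Exec=/usr/bin/my-service

With this in place, dbus-daemon starts /usr/bin/my-service the first time a client requests the name org.example.MyService. For the system bus, the file goes in /usr/share/dbus-1/system-services/ instead and additionally requires a policy file under /etc/dbus-1/system.d/; an optional SystemdService= key can hand the actual start-up over to a matching systemd unit.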

Cannot disable systemd serial-getty service

On a Raspberry Pi with Arch Linux there is an active service called serial-getty@ttyAMA0.
The unit file is /usr/lib/systemd/system/serial-getty@.service.
As root I can invoke
systemctl stop serial-getty@ttyAMA0
systemctl disable serial-getty@ttyAMA0
But after reboot the service is enabled and running again.
Why is the service enabled again after a reboot, even though I disabled it? How can I disable it permanently?
UPDATE
systemd uses generators: at /usr/lib/systemd/system-generators/ there is a binary called systemd-getty-generator. It runs at system start and adds the symlink serial-getty@ttyAMA0.service to /run/systemd/generator/getty.target.wants.
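You can confirm this by listing the generated symlinks after boot:
ls -l /run/systemd/generator/getty.target.wants/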
I eventually found a dirty solution: I commented out all actions in /usr/lib/systemd/system/serial-getty@.service. The service still appeared to start, but without blocking ttyAMA0.
The correct way to stop a service ever being enabled again is to use:
systemctl mask serial-getty@ttyAMA0.service
(using ttyAMA0 as the example in this case). This adds a symlink to /dev/null for that service's entry, so it can never be started or enabled again.
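To verify the mask took effect, the unit should now report as masked and its entry in /etc/systemd/system should point at /dev/null:
systemctl status serial-getty@ttyAMA0.service
ls -l /etc/systemd/system/serial-getty@ttyAMA0.service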
Try this code:
system("systemctl stop serial-getty#ttyAMA0.service");
system("systemctl disable serial-getty#ttyAMA0.service");
I use it, and it works well.

Server restart: daemon cannot bind socket (address already in use), but a manual daemon restart works

On Solaris 10, when the server restarts, the backup daemon (tina_daemon) does not work properly when it calls another executable to shut down the application services.
However, when the backup daemon is restarted manually, the call to the executable works fine.
I found the following lines in the log-file:
8|9|tina_daemon|780|1|3|1373964994|1373964994|10739|tina_daemon_1|<host name deleted>|~|root|~|backup_poc_tina|backup-poc|Event handler daemon started|0|~|~|~|~|~|~|
8|9|tina_daemon|880|256|3|1373964994|1373964994|10739|tina_daemon_1|<host name deleted>|~|root|~|backup_poc_tina|backup-poc|Service opened, host "<host name deleted>", Time Navigator Enterprise Edition Version 4.2.8.8 P4680|0|~|~|~|~|~|~|
8|14|TNUnixSocketImpl::initNetServiceTcp|11|1|1|1373965001|1373965001|10963|tina_daemon_1|<host name deleted>|~|root|~|backup_poc_tina|backup-poc|Unable to bind 5 socket, retos=125 "Address already in use"|0|~|~|~|~|~|~|
8|14|vos_init_net_service_tcp|1|1|4|1373965001|1373965001|10963|tina_daemon_1|<host name deleted>|~|root|~|backup_poc_tina|backup-poc|Unable to initialize network service of TCP type, retex=TN_ERR_CONFLICT_RESS (conflict in access to the same resource)|0|~|~|~|~|~|~|
Could this be caused by insufficient shared memory? The tina_daemon uses two ports that are already defined in /etc/services.
I find it very strange, because when root manually restarts the tina_daemon, everything works fine. When the server restarts with the startup script (which I changed to be similar to the manual restart), the call to the other executable to shut down the application does not work properly: it shuts the application down halfway and leaves it hung (ultimately requiring an application restart).
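To narrow this down, one option is to find out which process is still holding the port when the startup script runs. Solaris 10 has no netstat -p, but you can scan process file tables with pfiles; the port number below is a placeholder, so substitute whichever tina_daemon port from /etc/services appears in the bind error:
PORT=2525   # placeholder: use the real tina_daemon port from /etc/services
for p in /proc/[0-9]*; do
  pid=`basename $p`
  # pfiles lists each open socket with its bound "port: N"
  if pfiles $pid 2>/dev/null | grep "port: $PORT" >/dev/null; then
    ps -p $pid -o pid,args   # show which process holds the port
  fi
done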