systemd service inside Kubernetes is unable to get the env

I set up a CentOS systemd service inside the pod, but I'm not able to read the Kubernetes environment variables. If I run bash inside the pod I can see the variables (such as _UI_SERVICE_PORT_TCP_443=443, KUBERNETES_PORT_443_TCP_ADDR=10.202.0.1 or container=docker), but not when I execute a bash script as a service inside the container.
I also tried Type=forking and ExecStart=/bin/bash, expecting the executed bash to inherit the Kubernetes environment, but it is empty:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
LANG=en_US.UTF-8
SHLVL=1
_=/bin/printenv
[Unit]
Description=script after boot on k8s
After=e.service

[Service]
Type=forking
ExecStart=#BINDIR#/virtual_service.py

Your problem is related to how systemd handles environment variables for services. From my understanding, the environment variables are stripped when a process runs as a systemd service, so your script won't have access to what bash sees when you run it interactively in the pod.
This answer provides a good description and some workarounds.
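For illustration, one common workaround is to declare the variables explicitly in the unit. The values and the /etc/service.env path below are placeholders (the file could, for example, be written out by an entrypoint running printenv > /etc/service.env before the service starts):
[Service]
# hard-code the values the script needs (illustrative values)
Environment=KUBERNETES_PORT_443_TCP_ADDR=10.202.0.1
# or load them from a file with one KEY=VALUE per line (the leading '-' makes the file optional)
EnvironmentFile=-/etc/service.env
ExecStart=#BINDIR#/virtual_service.py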
Hope this helps!

I found the answer to this.
/proc/1/environ contains the environment of PID 1, and I managed to read the variables from it while running as a service.
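As a sketch of that approach (not the exact script from the question): the entries in /proc/1/environ are NUL-separated, so they need to be split before use.
# print PID 1's environment as one KEY=VALUE per line
tr '\0' '\n' < /proc/1/environ

# or import the variables into the current bash script
while IFS= read -r -d '' kv; do
    export "$kv"
done < /proc/1/environ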
Hope this will help someone in the future.

Related

What difference does specifying the user root make in a systemd service?

Which user does a systemd service run as by default on CentOS if no user is explicitly specified?
My assumption was that the service would then run as root. However, there seem to be differences in permissions if you explicitly specify User=root.
[Unit]
Requires=docker.service
After=docker.service
[Service]
User=root
...
ExecStartPre=/usr/local/bin/docker-compose down -v --remove-orphans
ExecStartPre=/usr/local/bin/docker-compose rm -fv
ExecStart=/usr/local/bin/docker-compose up --remove-orphans
[Install]
WantedBy=multi-user.target
In this specific case, a docker-compose up is executed in the systemd service. The Docker images are pulled from ECR, and the credentials for this are provided by the amazon-ecr-credential-helper.
When trying to pull the image from ECR, the error message is "no basic auth credentials".
But since everything works as desired if you specify User=root in the systemd service, I assume that the amazon-ecr-credential-helper configuration works with Docker, and that the problem is to be found in the systemd context.
Does anyone have an idea what explicitly specifying User=root does?
From man systemd.exec:
User=, Group=
Set the UNIX user or group that the processes are executed as, respectively. Takes a single user or group name, or a numeric ID as argument. For system services (services run by the system service manager, i.e. managed by PID 1) and for user services of the root user (services managed by root's instance of systemd --user), the default is "root", but User= may be used to specify a different user...
The default is already root. Specifying User=root does not change anything, except perhaps to be explicit so a reader understands that the service really runs as root. It makes no difference to systemd.
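If you want to verify this on a concrete unit (my-compose.service here is a placeholder name), something along these lines should show the effective user:
# what systemd has configured (an empty value means the default, i.e. root for system services)
systemctl show -p User my-compose.service

# which user the main process is actually running as
ps -o user= -p "$(systemctl show -p MainPID --value my-compose.service)"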

Location of Kubernetes config directory with Docker Desktop on Windows

I am running a local Kubernetes cluster through Docker Desktop on Windows. I'm attempting to modify my kube-apiserver config, and all of the information I've found has said to modify /etc/kubernetes/manifests/kube-apiserver.yaml on the master. I haven't been able to find this file, and am not sure what the proper way is to do this. Is there a different process because the cluster is through Docker Desktop?
Is there a different process because the cluster is through Docker Desktop?
You can get access to kube-apiserver.yaml with Kubernetes running on Docker Desktop, but only in a "hacky" way. I've included the explanation below.
For setups that require such reconfiguration, I encourage you to use a different solution, for example minikube.
Minikube has a feature that allows you to pass additional options to the Kubernetes components. You can read more about the --extra-config option in this documentation:
Minikube.sigs.k8s.io: Docs: Commands: Start
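For example (an illustrative invocation; the admission plugin here is just a stand-in for whatever apiserver flag you actually need):
# pass a flag straight to the apiserver component when the cluster is created
minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision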
As for the reconfiguration of kube-apiserver.yaml with Docker Desktop:
You need to run the following command:
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
The above command will allow you to run:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
This lets you edit the API server configuration. The Pod running kube-apiserver will be restarted with the new parameters.
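Purely as an illustration of what such an edit looks like (your manifest will contain different flags and values), you would add or change an entry in the container's command list:
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.65.3        # existing flags; values will differ
    - --service-node-port-range=20000-32767   # example of an added option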
You can check below StackOverflow answers for more reference:
Stackoverflow.com: Answer: Where are the Docker Desktop for Windows kubelet logs located?
Stackoverflow.com: Answer: How to change the default nodeport range on Mac (docker-desktop)?
I used that approach without the screen command and was able to reconfigure kube-apiserver on Docker Desktop on Windows.

How to run systemctl in a pod

I'm getting an access denied error while running the systemctl command in a pod.
Whenever I try to start any service, for example MySQL or a Tomcat server, in a pod, it gives an access denied error.
Is there any way I can run systemctl within a pod?
This is a problem related to Docker, not Kubernetes.
According to the page Run multiple services in a container in the Docker docs:
It is generally recommended that you separate areas of concern by using one service per container
However, if you really want to use a process manager, you can try supervisord, which allows you to use supervisorctl commands, similar to systemctl. The page above explains how to do that:
Here is an example Dockerfile using this approach, which assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as your Dockerfile.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
That's a rather short question. The systemctl command tries to talk to the systemd daemon, which is not running in a pod by default (it could be, however). Running multiple services is yet another question about service management. In both cases it could help to use a tool like docker-systemctl-replacement, overwriting /usr/bin/systemctl and registering it as the init CMD of the container.
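A rough sketch of that approach (assuming you have downloaded the project's standalone script into the build context as systemctl3.py; adapt the base image and packages to your case):
FROM centos:7
RUN yum install -y python3
# overwrite the real systemctl with the standalone replacement script
COPY systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
# run it as the container's init process so units can be started without a real systemd
CMD ["/usr/bin/systemctl"]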

Best way to run a pyramid pserve server as daemon

I used to run my pyramid server as a daemon with the pserve --daemon command.
Given that it's deprecated, I'm looking for the best replacement. This link recommends running it with screen or tmux, but that seems too heavy just to run a web server. Another idea would be to launch it with setsid.
What would be a good way to run it?
Create a service file in /etc/systemd/system. Here is an example (pyramid.service):
[Unit]
Description=pyramid_development
After=network.target
[Service]
# your Working dir
WorkingDirectory=/srv/www/webgis/htdocs/app
# your pserve path with ini
ExecStart=/srv/www/app/env/bin/pserve /srv/www/app/development.ini
[Install]
WantedBy=multi-user.target
Enable the service:
systemctl enable pyramid.service
Start/Stop/Restart the service with:
systemctl start pyramid.service
systemctl restart pyramid.service
systemctl stop pyramid.service
The simplest option is to install supervisord and set up a conf file for the service. The program would just be env/bin/pserve production.ini. There are countless examples online of how to do this.
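A minimal program section for that (the paths are assumptions; adjust to your project layout) might look like:
[program:pyramid]
; directory the command is run from (assumed path)
directory=/srv/www/app
command=/srv/www/app/env/bin/pserve production.ini
autostart=true
autorestart=true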
The best option is to integrate with your system's process manager (systemd usually, but maybe also upstart or sysvinit or openrc). It is very easy to write a systemd unit file for starting pserve and then it will be started/stopped along with the rest of your system. Log files are even handled automatically in these cases.

How do you pass an environment variable to Solr running inside Docker when the environment variable only exists inside the container?

I need to do a data import from a PostgreSQL container running inside Docker to a Solr server also running inside Docker.
In my docker run command I specify the --link option, which creates the environment variable $POSTGRESQL_PORT_5432_TCP_ADDR inside the Solr Docker container, and I need to pass this into Solr to use in my solrconfig.xml file.
I've heard that this is possible by passing system properties to the JVM in the Solr startup command, but docker run starts Solr automatically. The only workaround I've found is doing something like:
docker run --name solr -d -p 8983:8983 --link postgresql --volumes-from solr_cores makuk66/docker-solr /bin/true
Starting the container with /bin/true so it does nothing, and then
docker exec -it solr /bin/bash
to get into the container, and finally running the Solr startup command myself with the flag
-Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR
However, this is an involved manual process, and I'm wondering if there's a better way.
Looking at the page Taking Solr to Production, you see:
The bin/solr script simply passes options starting with -D on to the JVM during startup. For running in production, we recommend setting these properties in the SOLR_OPTS variable defined in the include file. Keeping with our soft-commit example, in /var/solr/solr.in.sh, you would do:
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
So all you need to do is edit the SOLR_OPTS environment variable in solr.in.sh.
It's a bit different for Docker because you don't directly have access to solr.in.sh, but after some trial and error, it was as easy as adding this to my Dockerfile:
RUN echo 'SOLR_OPTS="$SOLR_OPTS -Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR"' >> /opt/solr/bin/solr.in.sh
Then you can use it in the solrconfig.xml file as
${solr.database.ip}
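For instance, a hedged sketch of how it could be wired into a DataImportHandler entry in solrconfig.xml (the handler defaults and the db-data-config.xml name are assumptions, not from the original question):
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">db-data-config.xml</str>
    <!-- resolved from the -Dsolr.database.ip system property set via SOLR_OPTS above -->
    <str name="db.host">${solr.database.ip}</str>
  </lst>
</requestHandler>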
An important thing to note is that you can call the system property whatever you want, as long as you make sure not to overwrite anything important. I could have called it
-Dsolr.potato
if I wanted to.
For some reason the solr.in.cmd file looks exactly the same as solr.in.sh, which confused me about how to set variables there. In Windows containers, the command to accomplish the same thing from a Dockerfile would be:
RUN Add-Content C:\solr\bin\solr.in.cmd 'set SOLR_OPTS=%SOLR_OPTS% -Dsolr.database.ip=%POSTGRESQL_PORT_5432_TCP_ADDR%'