How does openfaas solve the time zone problem of the container in the pod? - kubernetes

I am currently deploying OpenFaaS on my local virtual machine's Kubernetes cluster. I found that the time zone of the container started after publishing the function is inconsistent with that of the host machine. How should I solve this problem?
[root@k8s-node-1 ~]# date
# Host time
Wed Jun  9 11:24:40 CST 2021
[root@k8s-node-1 ~]# docker exec -it 5410c0b41f7a date
# Container time
Wed Jun  9 03:24:40 UTC 2021

As @coderanger pointed out in the comments section, the timezone difference is not related to OpenFaaS.
It depends on the image you are using; most images use the UTC timezone.
Normally this shouldn't be a problem, but in some special cases you may want to change this timezone.
As described in this article, you can use the TZ environment variable to set the timezone of a container (there are also other ways to change the timezone).
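The effect of TZ is easy to see even outside Docker, since date (and most glibc-based tools) honor it directly:

```shell
# TZ selects the timezone used when formatting times
TZ="UTC" date            # current time in UTC
TZ="Europe/Warsaw" date  # the same instant in Warsaw local time
TZ="UTC" date +%Z        # prints the timezone abbreviation: UTC
```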
If you have your own Dockerfile, you can use the ENV instruction to set this variable:
NOTE: The tzdata package has to be installed in the container for setting the TZ variable.
$ cat Dockerfile
FROM nginx:latest
RUN apt-get update && apt-get install -y tzdata
ENV TZ="Europe/Warsaw"
$ docker build -t mattjcontainerregistry/web-app-1 .
$ docker push mattjcontainerregistry/web-app-1
$ kubectl run time-test --image=mattjcontainerregistry/web-app-1
pod/time-test created
$ kubectl exec -it time-test -- bash
root@time-test:/# date
Wed Jun 9 17:22:03 CEST 2021
root@time-test:/# echo $TZ
Europe/Warsaw
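If rebuilding the image is not an option, the same variable can also be set in the pod spec itself; a sketch assuming the image built above (tzdata still has to be present in the image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: time-test
spec:
  containers:
  - name: web-app-1
    image: mattjcontainerregistry/web-app-1
    env:
    - name: TZ               # read by glibc at runtime
      value: "Europe/Warsaw"
```

Apply it with kubectl apply -f time-test.yaml and verify with kubectl exec time-test -- date.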


selinux not working under containerd with selinux-enable=true

I have two k8s clusters, one using Docker and the other using containerd directly, both with SELinux enabled.
But I found that SELinux is not actually working on the containerd one, although the two clusters have the same versions of containerd and runc.
Did I miss some setting with containerd?
Docker: the file label is container_file_t and the process runs as container_t; SELinux works fine.
K8s version: 1.17
Docker version: 19.03.6
Containerd version: 1.2.10
SELinux enabled by adding "selinux-enabled": true to /etc/docker/daemon.json
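For reference, that daemon.json would look like the following (a minimal fragment; merge it with any options you already have):

```json
{
  "selinux-enabled": true
}
```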
// create pod using tomcat official image then check the process and file label
# kubectl exec tomcat -it -- ps -eZ
LABEL PID TTY TIME CMD
system_u:system_r:container_t:s0:c655,c743 1 ? 00:00:00 java
# ls -Z /usr/local/openjdk-8/bin/java
system_u:object_r:container_file_t:s0:c655,c743 /usr/local/openjdk-8/bin/java
Containerd: the file label is container_var_lib_t and the process runs as spc_t; SELinux has no effect.
K8s version: 1.15
Containerd version: 1.2.10
SELinux enabled by setting enable_selinux = true in /etc/containerd/config.toml
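In containerd 1.2.x that setting lives under the CRI plugin section of the config; a minimal fragment (section naming may differ in later containerd versions):

```toml
# /etc/containerd/config.toml
[plugins.cri]
  enable_selinux = true
```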
// create pod using tomcat official image then check the process and file label
# kubectl exec tomcat -it -- ps -eZ
LABEL PID TTY TIME CMD
system_u:system_r:spc_t:s0 1 ? 00:00:00 java
# ls -Z /usr/local/openjdk-8/bin/java
system_u:object_r:container_var_lib_t:s0 /usr/local/openjdk-8/bin/java
// running as spc_t seems to be expected, given the type transition:
# sesearch -T -t container_var_lib_t | grep spc_t
type_transition container_runtime_t container_var_lib_t : process spc_t;
From this issue we can read:
Containerd includes minimal support for SELinux. More accurately, it contains support to run ON systems using SELinux, but it does not make use of SELinux to improve container security. All containers run with the system_u:system_r:container_runtime_t:s0 label, but no further segmentation is made.
So there is no full support in containerd for what you are trying to do. Your approach is correct; the problem is the lack of support for this functionality.
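For comparison, on a runtime that does consume SELinux labels (Docker, or CRI-O), Kubernetes can request them per pod through securityContext.seLinuxOptions; a hypothetical sketch (the type and level values here are illustrative, not prescriptive):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
spec:
  containers:
  - name: tomcat
    image: tomcat
    securityContext:
      seLinuxOptions:
        type: container_t      # the confined container domain
        level: "s0:c655,c743"  # MCS categories separating this pod from others
```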

The connection to the server xxxx:6443 was refused - did you specify the right host or port?

I followed this guide to install Kubernetes on my cloud server.
When I run the command kubectl get nodes I get this error:
The connection to the server localhost:6443 was refused - did you specify the right host or port?
How can I fix this?
If you followed only the docs mentioned, it means that you have only installed kubeadm, kubectl and kubelet.
If you want to run kubeadm properly you need to do three more steps.
1. Install docker
Install the Ubuntu version of Docker. If you are using another system, choose it from the left-side menu.
Why:
If you do not install Docker you will receive an error like the one below:
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
2. Initialization of kubeadm
You have properly installed kubeadm and Docker, but now you need to initialize kubeadm. Docs can be found here.
In short, you have to run the command:
$ sudo kubeadm init
After initialization you will receive information to run commands like:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
and a token to join another VM to the cluster. It looks like:
kubeadm join 10.166.XX.XXX:6443 --token XXXX.XXXXXXXXXXXX \
--discovery-token-ca-cert-hash sha256:aXXXXXXXXXXXXXXXXXXXXXXXX166b0b446986dd05c1334626aa82355e7
If you want to run some special action in the init phase, please check these docs.
3. Change node status to Ready
After the previous step you will be able to execute:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-kubeadm NotReady master 4m29s v1.16.2
But your node will be in NotReady status. If you describe it ($ kubectl describe node) you will see an error:
Ready False Wed, 30 Oct 2019 09:55:09 +0000 Wed, 30 Oct 2019 09:50:03 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It means that you have to install one of the CNI plugins. A list of them can be found here.
EDIT
One more thing comes to mind: sometimes after turning a VM off and on you need to restart the kubelet and docker services. You can do it using:
$ service docker restart
$ systemctl restart kubelet
Hope it helps.
Looks like the kubeconfig file is missing. Did you copy the admin.conf file to ~/.kube/config?
Verify whether any proxies such as http_proxy or https_proxy are set; these are usually set as environment variables. If so, remove the proxies and it should work for you.
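A quick way to check for and clear them in the current shell session (a sketch; your environment may also use uppercase variants or NO_PROXY):

```shell
# List any proxy-related variables that are currently set
env | grep -i _proxy || echo "no proxy variables set"

# Unset the common ones for this shell session
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
```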
I did the following two steps, and kubectl works now:
$ service docker restart
$ systemctl restart kubelet

Docker Postgres Container keeps respawning on macOS

I created a trivial Dockerfile to build an image based on the official Docker PostgreSQL image:
FROM postgres
As far as I can tell, I did not even start it explicitly; I only ran docker build . on it.
Now, whenever I try to remove the container, it keeps getting recreated and restarted:
hostname:~ username$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
postgres <none> e84edf994e8b 3 weeks ago 234MB
hostname:~ username$ date && docker ps -a
Thu May 24 13:26:31 CEST 2018
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6482553729a4 postgres:latest "docker-entrypoint.s…" 6 minutes ago Up 6 minutes 5432/tcp some-postgres.1.a0udiazm08y67gcxhnxbhinh8
hostname:~ username$ date && docker stop 6482553729a4 && docker rm 6482553729a4
Thu May 24 13:26:47 CEST 2018
6482553729a4
6482553729a4
hostname:~ username$ date && docker ps -a
Thu May 24 13:26:52 CEST 2018
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7d180c7a4532 postgres:latest "docker-entrypoint.s…" 4 seconds ago Created some-postgres.1.jlqe02b1zt9o77gh8ky4zhzr9
hostname:~ username$ date && docker ps -a
Thu May 24 13:27:01 CEST 2018
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7d180c7a4532 postgres:latest "docker-entrypoint.s…" 13 seconds ago Up 7 seconds 5432/tcp some-postgres.1.jlqe02b1zt9o77gh8ky4zhzr9
I tried drastic measures, too:
hostname:~ username$ docker kill $(docker ps -q) && docker rm -f $(docker ps -a -q) && docker rmi -f $(docker images -q)
7d180c7a4532
7d180c7a4532
Untagged: postgres@sha256:1c2cc88d0573332ff1584f72f0cf066b1db764166786d85f5541b3fc1e362aee
Deleted: sha256:e84edf994e8bc77bf6c60970a2bd32c905ed8782296e67aa46c949a4b47cb678
hostname:~ username$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
70f3de1d4b8a e84edf994e8b "docker-entrypoint.s…" 45 seconds ago Up 40 seconds 5432/tcp some-postgres.1.alq5qjn7adyjvbjeo023kx2fq
Apparently, that container doesn't even need a local image to run:
hostname:~ username$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
I tried rebooting and restarting the Docker daemon, and I searched the container image's Dockerfile and docker-entrypoint.sh for some keep-this-container-running-at-all-costs-even-if-it-pisses-off-the-user option, but nothing there strikes me as related.
I did notice there is a /Applications/Docker.app/Contents/MacOS/com.docker.supervisor -watchdog fd:0 process running which possibly keeps restarting containers, but that thought is pure speculation and also, I can't seem to find anything that tells Docker to keep restarting that container and not any of the others.
I'm not a Docker expert, but I have used a number of public / official containers and also created some of my own and I have never seen this problem before.
What is going on here ?
There is a hard way to reset Docker in such (usually) one-time situations. I've had a few situations like this before, and the problem never persisted after forcibly purging Docker.
You should go to:
navigation bar -> docker -> preferences -> Reset
And do: Reset to factory defaults
Another less drastic way:
docker stop $(docker ps -a -q) &&
docker update --restart=no $(docker ps -a -q)

docker postgres failed to start with specified port

I'm new to docker and I'm trying to run a postgres database docker with the following command :
docker run --name rva-db -e POSTGRES_PASSWORD=rva -e POSTGRES_DB=rva-db -d postgres -p 5432:5432
If I run it without the -p option it seems to work fine, but I can't reach it from my local pgAdmin; I thought I needed to add the port mapping to reach it.
Anyway, the container always crashes after a few seconds, and when I try to start it with the start command I get the following output:
docker start -a rva-db
FATAL: invalid value for parameter "port": "5432:5432"
What did I miss?
FYI, I'm running it on a MacOS with the following docker version :
$ docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.7.1
Git commit: 6f9534c
Built: Thu Sep 8 10:31:18 2016
OS/Arch: darwin/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 17:52:38 2016
OS/Arch: linux/amd64
Run the container with the -p option before the image name:
docker run --name rva-db -e POSTGRES_PASSWORD=rva -e POSTGRES_DB=rva-db -d -p 5432:5432 postgres
As per the Docker run reference, docker run has this format:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Options must come before the image name. Everything after the image name is passed to the container's entrypoint as its command and arguments (when they differ from the Dockerfile defaults), which is why postgres interpreted -p 5432:5432 as a value for its own port parameter.

Share host timezone with docker container

I'm trying to sync the timezone of a Docker container with my host. My host is using IST and the Docker container (using a Tomcat image) uses UTC by default. I've read that we should mount a volume to share the timezone of the host:
$ docker run -t -i -p 8080:8080 -p 8090:8090 -v /etc/localtime:/etc/localtime:ro tomcat:7.0.69-jre8 /bin/bash
After that I can check that the date retrieved is the same as the host:
$ date
Fri Jul 22 13:53:45 IST 2016
When I deploy my application and try to update a date, I can see that the date 22/07/2016 is using my browser timezone, which is the same as that of the host where the Docker container is running. But debugging the server side of the app, I can see that the date is converted into the UTC timezone. This means that the Docker container is not really using the host volume I mounted.
Am I missing anything?
Another way I tried and did work was updating the timezone in the docker container:
$ dpkg-reconfigure tzdata  # selecting the corresponding options afterwards
This way I can see the same timezone in both: client side and server side of my app.
After debugging and reading about dates and times, I think it makes sense that the backend stores the date and time in UTC/GMT; that way storage in the DB is independent of the client's timezone. So it wouldn't be good practice to change the Tomcat server timezone to match the host (it shouldn't really matter).
The issue I had was that the front end was using a date and time (UTC/GMT+1) with the time set to 00:00h; when it reached the back end, the date and time were converted to UTC/GMT, which made it 23:00 of the previous day. The persistence layer was storing only the date, which is wrong, as we lose data (the time); when we try to retrieve that record from the DB we get the previous date without the time, which is not the result we would expect.
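The off-by-one-day behaviour can be reproduced with GNU date (assuming a GNU/Linux host; the timestamp is just an illustrative date matching this question):

```shell
# Midnight local time at UTC+1 is 23:00 of the previous day in UTC
TZ="UTC" date -d "2016-07-22 00:00:00 +01:00" "+%Y-%m-%d %H:%M"
# → 2016-07-21 23:00
```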
I hope my explanation makes sense.