I am provisioning a cluster of CoreOS machines, but I am having trouble downloading the Kubernetes tarball because of its size (~450MB). I have managed to use this same technique to download the latest etcd2, fleet and flannel, but when downloading a file as big as Kubernetes my service fails or stops without any stack trace. I think it is related to the fact that systemd is neither waiting for nor restarting the service as I would expect. This is my service file:
[Unit]
Description=updates kubernetes v1.2
[Service]
Type=oneshot
User=root
WorkingDirectory=/home/core
ExecStart=/usr/bin/mkdir -p /opt/bin
ExecStart=/usr/bin/mkdir -p /home/core/kubernetes
ExecStart=/bin/wget https://github.com/kubernetes/kubernetes/releases/download/v1.2.0/kubernetes.tar.gz
ExecStart=/usr/bin/tar zxf /home/core/kubernetes.tar.gz -C /home/core/kubernetes --strip-components=1
ExecStart=/usr/bin/mv kubernetes/platforms/linux/amd64/kubectl /opt/bin/kubectl
ExecStart=/usr/bin/tar zxf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
ExecStart=/usr/bin/chmod a+x kubernetes/server/bin/*
ExecStart=/usr/bin/mv kubernetes/server/bin/* /opt/bin
ExecStart=/usr/bin/rm -rf /home/core/kubernetes /home/core/kubernetes.tar.gz
I bet you need to set or increase the TimeoutStartSec= parameter, which probably defaults to something short like 30 or 90 seconds on your system.
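Something along these lines in the [Service] section should give the download enough time (a sketch only; the exact default, and whether 0 or "infinity" disables the timeout, depends on your systemd version):

[Service]
Type=oneshot
# allow up to 15 minutes for the ~450MB download
TimeoutStartSec=900

After editing the unit, run systemctl daemon-reload before starting it again.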
Is there a possibility to change the container restart policy using podman? We can set the policy when creating a container with podman run --restart always, but how can it be changed once the container has been created?
Using Docker we have the docker update command which allows us to do so. Unfortunately there is no podman update command. Can it be done? Or do I need to create a new container?
When using podman you should create a systemd service that will manage the podman container.
Create the systemd unit file "/etc/systemd/system/containername.service":
[Unit]
Description=your container
[Service]
Restart=always
ExecStart=/usr/bin/podman start -a containername
ExecStop=/usr/bin/podman stop -t 2 containername
[Install]
WantedBy=multi-user.target
Run:
systemctl daemon-reload
Enable the service to start at boot:
systemctl enable containername.service
Restart (or start) the service:
systemctl restart containername.service
You can also add some other systemd restart parameters, for example:
RestartSec= (the time to sleep before restarting the service), StartLimitInterval= (the window, in seconds, within which restarts are counted) and StartLimitBurst= (how many starts are allowed within that window).
For more details check the man pages: "man systemd.service"
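For example, the restart settings could be combined like this (a sketch only; on newer systemd the start-limit settings are spelled StartLimitIntervalSec= and live in the [Unit] section instead):

[Service]
Restart=always
# wait 10 seconds between restart attempts
RestartSec=10
# allow at most 5 (re)starts within a 300 second window
StartLimitInterval=300
StartLimitBurst=5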
I have been trying to install Entando 6 on my Mac following the instructions on http://docs.entando.com, however when deploying to Kubernetes I get an error with quickstart-kc-deployer. Has anyone managed to successfully go through with the installation?
(screenshot: deployment failure)
I am also new to Kubernetes and trying to get at the logs, but so far I have not been able to access them to understand what the root cause of the failure is. Help on that is more than welcome as well.
Thanks.
If you're in a local development environment the best bet would be to try the new instructions at dev.entando.org. If you're installing on a cloud Kubernetes provider try the updated instructions here.
I've reproduced them here for completeness:
Install Multipass (https://multipass.run/#install)
Launch VM
multipass launch --name ubuntu-lts --cpus 4 --mem 8G --disk 20G
Open a shell
multipass shell ubuntu-lts
Install k3s
curl -sfL https://get.k3s.io | sh -
Download Entando custom resource definitions
curl -L -C - https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/custom-resources.tar.gz | tar -xz
Create custom resources
sudo kubectl create -f dist/crd
Create namespace
sudo kubectl create namespace entando
Download Helm chart
curl -L -C - -O https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/entando.yaml
Configure access to your cluster
IP=$(hostname -I | awk '{print $1}')
sed -i "s/192.168.64.25/$IP/" entando.yaml
If you want to deploy on a cloud provider (EKS, AKS, GKE) then there are new instructions under the Configuration and Operations section at
https://dev.entando.org/next/tutorials
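As for digging into the logs, a few generic kubectl commands are usually enough to see why a pod such as the quickstart-kc-deployer one is failing. This is only a sketch: the entando namespace and the pod name placeholder are assumptions you will need to adapt to your install.

# list the pods and their status in the namespace used by the install
sudo kubectl get pods -n entando
# show events and container state for the failing pod
sudo kubectl describe pod <pod-name> -n entando
# print the logs of that pod (add --previous if it keeps crashing and restarting)
sudo kubectl logs <pod-name> -n entando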
I'd created three CephFS file systems and tried to mount them on a client node, but didn't find any way to mount one specific CephFS. I tried
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
But it fails. Is there any other way to mount multiple file systems on a client node using the kernel driver, mount.ceph or ceph-fuse?
It is possible to select a specific CephFS with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... ceph-fuse
I am pretty sure that -o mds_namespace did not work due to an old kernel version. If you are using CentOS 7, please test it with ceph-fuse 12.2.4 or a later version (with --client_mds_namespace). It worked fine in my environment.
If you are using a Debian-based system, you can install the ceph-fs-common package with apt, like: apt-get install -y ceph-fs-common.
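Put together, the two variants look roughly like this. Treat it as a sketch: the file system name webfs comes from the question, and the monitor address and secret/keyring paths are assumptions.

# kernel driver: pick the file system via mds_namespace
mount -t ceph mon-node:/ /mnt/apachefs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs
# ceph-fuse: pick the file system via --client_mds_namespace
ceph-fuse -n client.admin -m mon-node:6789 /mnt/apachefs --client_mds_namespace=webfs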
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
#/etc/fstab
### one
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### two
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=okd-admin,noatime,_netdev 0 2
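After adding the entries, mounting everything from fstab and checking the result is just (a sketch):

# mount all fstab entries and list the CephFS mounts
sudo mount -a
df -h -t ceph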
I'm starting to work with Docker to automate environments, so I'm trying to build a simple LAMP stack and the Dockerfile is the following:
FROM centos:7
ENV container=docker
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
RUN yum -y update && yum clean all
RUN yum -y install firewalld httpd mariadb-server mariadb php php-mysql php-gd php-pear php-xml php-bcmath php-mbstring php-mcrypt php-php-gettext
#Enable services
RUN systemctl enable httpd.service
RUN systemctl enable mariadb.service
#start services
RUN systemctl start httpd.service
RUN systemctl start mariadb.service
#Open firewall ports
RUN firewall-cmd --permanent --add-service=http
RUN firewall-cmd --permanent --add-service=https
RUN firewall-cmd --reload
EXPOSE 80
CMD ["/usr/sbin/init"]
So when I build the image with
docker build -t myimage .
I get the following error:
The command '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1
When I enter interactive mode (skipping the commands after RUN systemctl start httpd.service and rebuilding the image):
docker run -t -i myimage /bin/bash
and then try to start the httpd service manually, I get the following error:
Failed to get D-Bus connection: No connection to service manager.
So, what am I doing wrong?
First of all, welcome to Docker! :-) Loads of Docker tutorials and docs are written around Ubuntu containers, but I like Centos too.
Ok, there are a couple of things to talk about here:
You're running up against a known issue with systemd-based Docker containers: they seem to need extra privileges to run, and even then a lot of extra configuration is required to get them working. The Red Hat team is experimenting with some fixes (mentioned in the comments), but I'm not sure where that's at.
If you wish to try getting it working, these are the best instructions I've found, but I've played with this several times in the last couple of weeks and not got it working yet.
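(Note that a RUN systemctl start ... line can never work during docker build anyway, and neither can starting httpd from a plain /bin/bash shell: no systemd process is running as PID 1 in either case, which is exactly where the "Failed to get D-Bus connection" message comes from.) For reference, the usual pattern from those write-ups looks roughly like this; a sketch only, since the exact flags vary between Docker and systemd versions:

# drop the RUN systemctl start ... and RUN firewall-cmd ... lines from the Dockerfile, then:
docker build -t myimage .
# run with the host's cgroup filesystem mounted so systemd inside the container can start
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 --name lamp myimage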
What people might say is "the real issue" here is that a Docker container should not be thought of as a "mini Virtual Machine". Docker is designed to run one "root" process per container, and the container system makes it easy to compose multiple containers together - they are small on disk, light on memory usage and easy to network together.
Here's a blog post from Docker which gives some background on this. There's also the "Docker Fundamentals" docs on Dockerizing applications and Working with containers.
So arguably the best way to proceed with the setup you're attempting to create here (though it might sound more complicated at the beginning) is to break your "stack" up into the services you need, and then use a tool like docker-compose (introduction, documentation) to create single-purpose Docker containers as required.
In your case above, you have two services, a web server and a database server. Therefore two Docker containers should work well, connected together by the database network connection. Here are some examples:
example with Symfony app, nginx and MariaDB
example with MariaDB + NodeJS
If you run one service per Docker container, you don't need to use systemd to manage them, as the Docker daemon manages each container sort of like it is a Unix process. When the process dies, the Docker container dies, and this is important because the Docker server monitors containers and can restart them automatically, or notify you.
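To make that concrete, a minimal docker-compose sketch for the two services above could look like the following; the image tags, the php:apache choice, the ./src path and the passwords are assumptions, not something from the original post:

version: "3"
services:
  web:
    # Apache + PHP in one container; mount your code into the web root
    image: php:7.4-apache
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html
    depends_on:
      - db
  db:
    # MariaDB in its own container, reachable from "web" under the host name "db"
    image: mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: app
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:

Bring it up with docker-compose up -d; inside the web container your PHP code then connects to the database host "db" instead of localhost.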
This looks more like a perfect example of where my docker-systemctl-replacement would fit in. It can easily interpret "systemctl start httpd.service" without an active systemd around. I have done the same for some database services, but not specifically mariadb.service - maybe you could give it a try.
I've installed the latest stable version of Debian (Jessie) and /etc/inittab doesn't exist. I have read that the new init system is called SysV.
I need to launch a service with a parameter. I used to add a line to inittab like
u1:23:respawn:/etc/init.d/my_service foreground
I'm trying to add this with sysv-rc-conf -p but I don't know how...
How can I do that without inittab?
Thank you so much.
Found this question via Google; maybe someone else finds this useful: the new init system for Debian Jessie is systemd. The old way in Debian Wheezy was SysV with /etc/inittab.
To create a respawning service with systemd, just create a file in /etc/systemd/system/, e.g. mplayer2.service:
[Unit]
Description=mplayer with systemd, respawn
After=network.target
[Service]
ExecStart=/usr/bin/mplayer -nolirc -ao alsa -vo null -really-quiet http://stream.sunshine-live.de/hq/mp3-128/Facebook-og-audio-tag/
Restart=always
[Install]
WantedBy=multi-user.target
and activate it
systemctl enable mplayer2.service
reboot or start it manually
systemctl daemon-reload
systemctl start mplayer2.service
If you reboot or kill the process, it will be restarted automatically some seconds later.
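Applied to the service from the question, a roughly equivalent unit could look like this; it is only a sketch, and the file name and paths simply mirror the old inittab line, so adjust them to your setup.

/etc/systemd/system/my_service.service:

[Unit]
Description=my_service in the foreground, respawned like the old inittab entry
After=network.target

[Service]
# same command and argument as the old respawn line
ExecStart=/etc/init.d/my_service foreground
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

Then enable and start it the same way: systemctl enable my_service.service and systemctl start my_service.service.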