I have a service installed as an init.d (System V) service on CentOS 7.
I would like the service to start automatically after a reboot.
How can I do it?
Thanks. I have tried:
1- /sbin/chkconfig --add my-service
2- /sbin/chkconfig my-service on
3- chkconfig --list shows: my-service 0:off 1:off 2:on 3:on 4:on 5:on 6:off
4- /sbin/service my-service start
5- reboot the CentOS 7 machine
6- /sbin/service my-service status reports: Not running
The service does not start automatically.
The correct method with systemd is:
systemctl enable myservice
If it's a SysV init service you can still use chkconfig (you may need to install it first).
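On CentOS 7, systemd also generates wrapper units for scripts under /etc/init.d, so both routes are available. A minimal sketch, assuming the script is /etc/init.d/my-service and carries the usual "# chkconfig:" and "# description:" header lines (chkconfig --add fails without them):

# SysV route: register the script and turn it on for its default runlevels
sudo /sbin/chkconfig --add my-service
sudo /sbin/chkconfig my-service on
/sbin/chkconfig --list my-service

# systemd route: the generated wrapper unit carries the same name
sudo systemctl enable my-service
sudo systemctl start my-service
systemctl status my-service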
My local Kubernetes cluster is running via Rancher Desktop:
% kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
I have created a very basic Job that telnets to localhost on port 6443, to see whether the connection is reachable from a job pod running in the cluster:
apiVersion: batch/v1
kind: Job
metadata:
  name: telnet-test
spec:
  template:
    spec:
      containers:
      - name: test-container
        image: getting-started:latest
        imagePullPolicy: IfNotPresent
        command: ["/usr/bin/telnet"]
        args: ["127.0.0.1", "6443"]
      restartPolicy: Never
  backoffLimit: 4
The Docker image is also basic, just installing telnet:
# Download base image ubuntu 16.04
FROM ubuntu:16.04
# Update software repository
RUN apt-get update && apt-get upgrade -y
# Install telnet from the ubuntu repository
RUN apt-get install -y telnet
CMD ["which","telnet"]
EXPOSE 6443
When I run this job, I get connection refused:
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Any idea what I could be missing here?
"kubectl cluster-info" shows you on which NODE and port your Kubernetes api-server is Running. So these are processes running on either a virtual machine or on a physical machine.
IP address 127.0.0.1 is also known as the localhost address, and belong to the local network adapter. Hence it is NOT a real IP that you can call from any other machine.
When you test 127.0.0.1:6443 inside your container image running as a Pod or with "docker run", you are not trying to call the NODE on port 6443. Instead you are trying to call the localhost address on port 6443 INSIDE the container.
When you install Kubernetes, it would be better if you configure the cluster address as the :6443 or :6443 instead of using a localhost address.
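From inside the cluster, the API server is also reachable through the built-in "kubernetes" Service, so a quick reachability test can target that name instead of 127.0.0.1. A minimal sketch (the busybox image, the throwaway pod name, and anonymous access to /version are assumptions, not from the question):

kubectl run api-test --rm -it --restart=Never --image=busybox -- \
  wget --no-check-certificate -qO- https://kubernetes.default.svc.cluster.local/version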
I'm on an EC2 instance trying to get my cluster created. I have kubectl already installed, and here are my services and workloads YAML files.
services.yaml
apiVersion: v1
kind: Service
metadata:
  name: stockapi-webapp
spec:
  selector:
    app: stockapi
  ports:
  - name: http
    port: 80
  type: LoadBalancer
workloads.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stockapi
spec:
  selector:
    matchLabels:
      app: stockapi
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: stockapi
    spec:
      containers:
      - name: stock-api
        image: public.ecr.aws/u1c1h9j4/stock-api:latest
When I try to run
kubectl apply -f workloads.yaml
I get this as an error
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I also tried changing the port in my services.yaml to 8080, and that didn't fix it either.
This error comes when you don't have a ~/.kube/config file present, or it is not configured correctly, on the client where you run the kubectl command.
kubectl reads the cluster info and which port to connect to from the ~/.kube/config file.
If you are using EKS, here's how you can create the config file:
see the AWS documentation on creating a kubeconfig file for an Amazon EKS cluster.
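In practice the AWS CLI can generate that file for you. A minimal sketch, assuming the CLI is already configured with credentials; <region> and <cluster-name> are placeholders for your own values:

aws eks update-kubeconfig --region <region> --name <cluster-name>
kubectl get nodes   # should now reach the EKS endpoint instead of localhost:8080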
I encountered the exact same error in my cluster when I executed the "kubectl get nodes" command.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I ran the following commands on the master node and they fixed the error.
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-get update && apt-get install -y containerd.io
Configure containerd:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
I was following the instructions on AWS.
In my case I was on a Mac with Docker Desktop installed, which seemed to bring its own kubectl alongside the Homebrew one.
I traced it down to a link in /usr/local/bin and renamed it to kubectl-old.
Then I reinstalled kubectl, put it on my path, and everything worked.
I know this is very specific to my case, but it may help others.
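The answer doesn't give the exact commands; a rough sketch of that kind of cleanup, assuming the replacement kubectl comes from Homebrew, might look like:

# move the stale binary/symlink aside, then reinstall and verify which one is picked up
mv /usr/local/bin/kubectl /usr/local/bin/kubectl-old
brew install kubectl
which kubectl && kubectl version --client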
I found out how to solve this issue. Run the commands below:
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
Then you can run kubectl get nodes again.
Cheers!
In my case I had a problem with the certificate authority. I found out by checking the kubectl config:
kubectl config view
The clusters section was null, instead of containing something similar to:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
It was not parsed because of a time difference between my machine and the server (several seconds was enough).
Running
sudo apt-get install ntp
sudo apt-get install ntpdate
sudo ntpdate ntp.ubuntu.com
solved the issue.
My intention is to create my own bare-metal Kubernetes cloud with up to six nodes. I immediately ran into the intermittent issue below.
"The connection to the server 192.168.1.88:6443 was refused - did you specify the right host or port?"
I have performed a two node installation (master and slave) about 20 different times using a multitude of “how to” sites. I would say I have had the most success with the below link…
https://www.knowledgehut.com/blog/devops/install-kubernetes-on-ubuntu
…however every install results in the intermittent issue above.
Given this issue is intermittent, I have to assume the necessary folder structure and permissions exist, the necessary application prerequisites exist, services/processes are starting under the correct context, and the installation was performed properly (using sudo at the appropriate times).
Again, the problem is intermittent. It will work, then stop, and then start again. A reboot sometimes corrects the issue.
I am using Ubuntu ubuntu-22.04.1-desktop-amd64.
I have read a lot of comments online concerning this issue, and a majority of the recommended fixes deal with creating directories and installing packages under the correct user context.
I do not believe this is the problem I am having, given the issue is intermittent.
Linux is not my strong suit… and I would rather not go the Windows route. I am doing this to learn something new and would like to experience the entire enchilada (from OS install, to Kubernetes cluster install, and then to Docker deployments).
It could be said that problems are the best teachers, and I would agree with that. However, I would like to know for certain that I am starting with a stable, working set of instructions before devoting endless hours to troubleshooting bad/incorrect documentation.
Any ideas on how to proceed with correcting this problem?
d0naldashw0rth(at)yahoo(dot)com
I got the same error, and after switching from the root user to a regular user (ubuntu, etc.) my problem was fixed.
I have exactly the same issue as Donald described.
I've tried all the suggestions listed above:
sudo systemctl stop kubelet
sudo systemctl start kubelet
strace -eopenat kubectl version
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
The cluster crashes intermittently. Sometimes it works, sometimes it doesn't.
The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
Any other ideas? Thank you!
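None of the answers here pin down the intermittent failure. The following is a generic diagnostic sketch (my own suggestions, not from the original posts) for when the API server on port 6443 drops in and out; a common cause is the kubelet or the kube-apiserver container crash-looping, for example because swap comes back after a reboot:

# keep swap off across reboots (kubelet refuses to run with swap enabled by default)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# check whether the control-plane components are restarting
sudo systemctl status kubelet
sudo crictl ps -a | grep kube-apiserver
sudo journalctl -u kubelet --since "10 min ago" | tail -n 50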
Problem
To make a multi-node k8s dev environment, I was trying to use an NFS persistent volume in minikube with multiple nodes, and I cannot run pods properly. It seems there's something wrong with the NFS setup, so I ran minikube ssh and tried to mount the NFS volume manually first with the mount command, but it doesn't work, which brought me here.
When I run
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in the minikube master node, the output is
mount.nfs: requested NFS version or transport protocol is not supported
Some relevant info:
NFS client: minikube nodes
NFS server: my Mac PC
minikube driver: docker
The cluster comprises 3 nodes (1 master and 2 worker nodes).
Currently there are no k8s resources (such as deployments, PVs and PVCs) in the cluster.
The minikube nodes' OS is Ubuntu, so I guess "nfs-utils" is not relevant and is not installed. "nfs-common" is preinstalled in minikube.
Please see the following sections for more detail.
Goal
The goal is for the mount command in the minikube nodes to succeed and the NFS share on my Mac to mount properly.
What I've done so far:
On the NFS server side,
I created an /etc/exports file on the Mac. The content is like
/PATH/TO/EXPORTED/DIR/ON/MACPC -mapall=user:group 192.168.xx.xx(=the output of "minikube ip")
and ran nfsd update; then showmount -e outputs
Exports list on localhost:
/PATH/TO/EXPORTED/DIR/ON/MACPC 192.168.xx.xx(=the output of "minikube ip")
rpcinfo -p shows rpcbind (= portmapper on Linux), status, nlockmgr, rquotad, nfs and mountd are all up over tcp and udp.
ping 192.168.xx.xx(=the output of "minikube ip") says
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
and continues.
It seems I can't reach minikube from the host.
On the NFS client side,
I started the nfs-common and rpcbind services with systemctl in all minikube nodes. By running sudo systemctl status rpcbind and sudo systemctl status nfs-common, I confirmed rpcbind and nfs-common are running.
minikube ssh output
Last login: Mon Mar 28 09:18:38 2022 from 192.168.xx.xx(=I guess my macpc's IP seen from minikube cluster)
so I ran
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in minikube master node.
The output is
mount.nfs: requested NFS version or transport protocol is not supported
rpcinfo -p shows only portmapper and status running. I am not sure whether this is OK.
ping 192.168.xx.xx(=macpc's IP) works properly.
ping host.minikube.internal works properly.
nc -vz 192.168.xx.xx(=macpc's IP) 2049 outputs connection refused
nc -vz host.minikube.internal 2049 outputs succeeded!
Thanks in advance!
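One detail worth noting (my own suggestion, not part of the original post): the port check to host.minikube.internal:2049 succeeded while the one to the Mac's IP was refused, and macOS's nfsd serves NFSv3 by default while Linux's mount.nfs asks for v4 first. A hedged sketch of a mount attempt based on those two observations:

# inside the minikube node: use the name that answered on 2049 and force NFSv3
sudo mount -t nfs -o vers=3,nolock,tcp host.minikube.internal:/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE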
I decided to use another type of volume instead.
I am learning Kubernetes at the moment. I have built a simple Python application that uses Flask to expose REST APIs. Flask by default uses port 5000 to run the server. My API looks like
http://0.0.0.0:5000/api
The application is built into a Docker image:
FROM python:3.8.6-alpine
COPY . /app
WORKDIR /app
RUN apk add --no-cache python3 postgresql-libs && \
    apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev postgresql-dev && \
    python3 -m pip install -r requirements.txt --no-cache-dir && \
    apk --purge del .build-deps
ENTRYPOINT ["python3"]
CMD ["app.py"]
I deploy this in a Kubernetes pod with this pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: python-webapp
  labels:
    type: web
    use: internal
spec:
  containers:
  - name: python-webapp
    image: repo/python-webapp:latest
Everything works fine, and I am able to access the API on the pod directly and through a Kubernetes Service. I am puzzled: how does the pod know that the application in the container is running on port 5000? Where is the mapping from a port on the container to a port on the pod?
How does the pod know that the application in the container is running on port 5000?
The pod does not know that. The app in the container, in the pod, can respond to requests on any port.
But to expose this outside the cluster, you will likely forward traffic from a specific port to a specific port on your app, via a Service (which can map between different ports) and a load balancer.
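In other words, the port-to-port mapping lives in the Service, not in the pod. A minimal sketch, assuming the pod from the question and that the Flask app listens on 5000; the Service name is a placeholder of my own:

# expose the pod on port 80, forwarding traffic to the container's port 5000
kubectl expose pod python-webapp --name=python-webapp-svc --port=80 --target-port=5000
kubectl get svc python-webapp-svc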
You can use Network Policies to restrict traffic in your cluster, e.g. to specific ports or services.
I have run an Ubuntu container and installed RabbitMQ inside it. How can I access RabbitMQ from the internet?
I tried publishing the port with "-p 15672:15672" but I still can't access it.
[dephi#boolatnya.xyz]$ docker run -dit --name ubuntu-rab -p 15672:15672 ubuntu:latest
[dephi#boolatnya.xyz]$ docker exec -it ubuntu-rab /bin/bash
# apt update && apt upgrade -y
# apt-get install rabbitmq-server -y
# service rabbitmq-server start
When I access my-public-ip:15672 it can't be reached; it should show the RabbitMQ login page.
RabbitMQ listens on port 5672 after you start rabbitmq-server.
The management UI listens on port 15672.
You need to enable the management plugin before you can access it. Try running this inside the container and check whether you can reach the login page:
rabbitmq-plugins enable rabbitmq_management
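Putting it together from the host, a minimal sketch (the container name ubuntu-rab is taken from the question; whether the port also needs to be opened in your server's firewall depends on your setup):

docker exec -it ubuntu-rab rabbitmq-plugins enable rabbitmq_management
docker exec -it ubuntu-rab service rabbitmq-server restart
# then browse to http://<your-public-ip>:15672 (note: the default guest/guest user
# only works for connections from localhost unless reconfigured)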