I'm experimenting with kompose on k3s to turn a Compose file into Kubernetes manifests, but when I run kompose up it asks me to enter a username and password, and I don't know what to enter.
The specific output is as follows:
# kompose up
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
Please enter Username: test
Please enter Password: test
FATA Error while deploying application: Get https://127.0.0.1:6443/api: x509: certificate signed by unknown authority
However, the kompose convert command executed successfully.
I would appreciate it if you could tell me how to solve this.
The kompose version is 1.21.0 (992df58d8), installed via curl and chmod.
The k3s version is v1.17.3+k3s1 (5b17a175), installed via the install.sh script.
The OS is Ubuntu 18.04.3 LTS.
I seem to have found my problem: because I installed k3s with the default install.sh script, it stores the kubeconfig at /etc/rancher/k3s/k3s.yaml instead of the usual ~/.kube/config.
This caused kompose up to fail to obtain the certs.
You can copy /etc/rancher/k3s/k3s.yaml to ~/.kube/config:
mkdir -p ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
Then kompose up executes successfully.
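Alternatively, instead of copying the file, you can point clients at the k3s kubeconfig through the standard KUBECONFIG environment variable:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml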
The answer that the OP gave is for a different issue than the "certificate signed by unknown authority" error they posted. That certificate issue is almost certainly caused by a self-signed cert. For that, you have to get your workstation's OS to trust the cert. For Linux, I use:
# Extract the API server's certificate chain and install it as a trusted CA cert
openssl s_client -showcerts -connect 127.0.0.1:6443 2>/dev/null </dev/null | \
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | \
sudo tee /usr/local/share/ca-certificates/k8s.crt
# Rebuild the system CA bundle and restart Docker so it picks up the new cert
sudo update-ca-certificates
sudo systemctl restart docker
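To verify, a plain curl against the API endpoint should now fail with a JSON authorization error from the API server rather than an x509 error:
curl https://127.0.0.1:6443/api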
I installed the Bitnami PostgreSQL Helm chart, using the example shown in the README:
helm install my-db \
--namespace dar \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
bitnami/postgresql
Then, following the instructions shown in the blurb that prints after a successful installation, I forward the port to local port 5432 (the command is reproduced below) and try to connect.
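(The port-forward command from the blurb is along these lines; the service name my-db-postgresql derives from the release name and is an assumption here:)
kubectl port-forward --namespace dar svc/my-db-postgresql 5432:5432 &
Then the connection attempt: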
PGPASSWORD="secretpassword" psql --host 127.0.0.1 -U postgres -d my-database -p 5432
But I get the following error:
psql: error: could not connect to server: FATAL: password authentication failed for user "postgres"
How can this be? Is the Helm chart buggy?
Buried deep in the stable/postgresql issue tracker is the source of this very-hard-to-debug problem.
When you run helm uninstall ..., it errs on the side of caution and doesn't delete the storage associated with the database you got when you first ran helm install ....
This means that once you've installed Postgres via Helm, the password baked into the reused storage stays in effect across subsequent installs, regardless of what the post-installation blurb tells you.
To fix this, you have to manually remove the persistent volume claim (PVC) which will free up the database storage.
kubectl delete pvc data-my-db-postgresql-0
(Or whatever the PVC associated with your initial Helm install was named.)
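If you're not sure of the name, you can list the PVCs in the namespace (assuming the dar namespace from the question):
kubectl get pvc --namespace dar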
Now a subsequent helm install ... will create a brand-new PVC and login can proceed as expected.
This is my first time setting up Kubernetes on Google Cloud Platform.
These are the steps I followed:
I created an account on Google Cloud Platform and spun up a new instance:
https://console.cloud.google.com/compute
Installed the gcloud SDK:
curl https://sdk.cloud.google.com | bash
Configured my Google Cloud Platform account information:
gcloud auth login
Installed the latest version of Kubernetes:
curl -sS https://get.k8s.io | bash
Launched a new cluster:
kubernetes/cluster/kube-up.sh
Confirmed that my configuration, along with the cluster management credentials, is stored in:
sudo nano /home/promisepreston/.kube/config
Installed kubectl on the server:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
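(To confirm the install, the client binary can be checked with:)
kubectl version --client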
Ran the command below, which output the URLs for the master services, including DNS, UI, and monitoring:
kubectl cluster-info
Deployed the Dashboard UI by running the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
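(A quick sanity check that the Dashboard pods came up, using standard kubectl:)
kubectl get pods -n kubernetes-dashboard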
And finally, I tried accessing the Dashboard by running the following command:
kubectl proxy
Which should make the Dashboard available at:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
However, when I visit that URL I get the error:
Unable to connect
And even when I try the command below:
curl http://localhost:8001/api
I get the error below:
curl: (7) Failed to connect to localhost port 8001: Connection refused
I have looked through a lot of documentation and tried multiple solutions, but none seems to work for me.
Installed kubectl on the server
You need kubectl on the machine from which you're going to access your cluster. If you installed it on the server and ran kubectl proxy on the server, then you can access the proxy only from that server (depending on your network config).
If you run curl http://localhost:8001/api on the server, it will work.
So you need to install kubectl on your own machine, set up the k8s context for it, and then run kubectl proxy; after that, all requests to the proxy will be forwarded to your cluster.
Every request to the k8s API server must be authenticated; when you run kubectl proxy, the proxy takes care of authentication and the SSL/TLS-related details.
Read this for more info: Use an HTTP Proxy to Access the Kubernetes API
and The Kubernetes API
Configure Access to Multiple Clusters - may also be useful
Basically you need to do the following, directly on your local machine (not on the server or in a terminal connected to the server):
Install the gcloud SDK:
# Add the Cloud SDK distribution URI as a package source
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
# Import the Google Cloud public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
# Update the package list and install the Cloud SDK
sudo apt-get update && sudo apt-get install google-cloud-sdk
Configure your Google Cloud Platform account information:
gcloud auth login
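Optionally, also set a default project (my-project-id is a placeholder):
gcloud config set project my-project-id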
Install kubectl, the Kubernetes command-line tool:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Install Minikube, which you will use to run Kubernetes on your local machine:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
Start Minikube to pull the latest Kubernetes image onto your local system and configure it with kubectl:
minikube start
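You can check the cluster's state at any time with:
minikube status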
If you already have kubectl set up, you can now use it to access your shiny new cluster:
kubectl get po -A
Minikube bundles the Kubernetes Dashboard, allowing you to get easily acclimated to your new environment:
minikube dashboard
I have been trying to install Entando 6 on my Mac, following the instructions at http://docs.entando.com, but when deploying to Kubernetes I get an error with quickstart-kc-deployer. Has anyone managed to complete the installation successfully?
(screenshot: deployment failure)
I am also new to Kubernetes and have been trying to access the logs, but so far I have not managed to, so help understanding the root cause of the failure is more than welcome as well.
Thanks.
If you're in a local development environment, the best bet is to try the new instructions at dev.entando.org. If you're installing on a cloud Kubernetes provider, try the updated instructions here.
I've reproduced them here for completeness:
Install Multipass (https://multipass.run/#install)
Launch a VM:
multipass launch --name ubuntu-lts --cpus 4 --mem 8G --disk 20G
Open a shell:
multipass shell ubuntu-lts
Install k3s:
curl -sfL https://get.k3s.io | sh -
Download Entando custom resource definitions
curl -L -C - https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/custom-resources.tar.gz | tar -xz
Create custom resources
sudo kubectl create -f dist/crd
Create namespace
sudo kubectl create namespace entando
Download Helm chart
curl -L -C - -O https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/entando.yaml
Configure access to your cluster
IP=$(hostname -I | awk '{print $1}')
sed -i "s/192.168.64.25/$IP/" entando.yaml
If you want to deploy on a cloud provider (EKS, AKS, GKE) then there are new instructions under the Configuration and Operations section at
https://dev.entando.org/next/tutorials
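As for reading the logs, a generic approach with plain kubectl works (the entando namespace comes from the steps above; replace the pod-name placeholder with an actual pod from the listing):
sudo kubectl get pods -n entando
sudo kubectl logs <pod-name> -n entando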
The error below is triggered when executing kubectl -n gitlab-managed-apps logs install-helm.
I've tried regenerating the certificates and bypassing the certificate check. Somehow it is using my internal certificate instead of the certificate of the source.
root@dev # kubectl -n gitlab-managed-apps logs install-helm
+ helm init --tiller-tls --tiller-tls-verify --tls-ca-cert /data/helm/helm/config/ca.pem --tiller-tls-cert /data/helm/helm/config/cert.pem --tiller-tls-key /data/helm/helm/config/key.pem
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: x509: certificate is valid for *.tdebv.nl, not kubernetes-charts.storage.googleapis.com
What might be the issue here? The screenshot below shows the error GitLab is giving me (not much information either).
After hitting the same issue, I finally found the solution:
In the /etc/resolv.conf file on your master and worker nodes, find and remove the search XYZ.com entry.
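For example, to drop the search line in one go (adjust the pattern to your environment):
sudo sed -i '/^search /d' /etc/resolv.conf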
If you are using Jelastic, you have to remove this entry after every restart; it gets added back by Jelastic automatically. I have already contacted them, so maybe they will fix it soon.
Creating ~/.helm/repository/repositories.yaml with the following content solved the problem:
cat << EOF >> ~/.helm/repository/repositories.yaml
apiVersion: v1
repositories:
- caFile: ""
  cache: ~/.helm/repository/cache/stable-index.yaml
  certFile: ""
  keyFile: ""
  name: stable
  password: ""
  url: https://kubernetes-charts.storage.googleapis.com
  username: ""
- caFile: ""
  cache: ~/.helm/repository/cache/local-index.yaml
  certFile: ""
  keyFile: ""
  name: local
  password: ""
  url: http://127.0.0.1:8879/charts
  username: ""
EOF
Then run:
helm init
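To check that the stable repo is now reachable, you can refresh the repo index (Helm 2):
helm repo update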
I experienced the same issue on Kubernetes with the Calico network stack under Debian Buster.
After checking a lot of configs and parameters, I finally got it to work by changing the policy for the FORWARD chain to ACCEPT. This made it clear that the issue was somewhere around the firewall.
Running iptables -L gave me the following revealing warning: # Warning: iptables-legacy tables present, use iptables-legacy to see them
The output of the list command did not contain any Calico rules, while iptables-legacy -L did show them, so it seems obvious now why it didn't work: Calico uses the legacy interface.
The issue is Debian's switch to iptables-nft in the alternatives system; you can check via:
ls -l /etc/alternatives | grep iptables
Doing the following:
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy
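You can verify which backend is active: with the legacy alternative selected, the version string reports (legacy):
iptables -V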
Now it all works fine! Thanks to Long on the Kubernetes Slack channel for pointing the way to the solution.
We currently have a Docker registry set up with security enabled. Normally, from a developer's perspective, in order to access it I have to log in with docker login --username=someuser --password=somepassword --email user@domain.com https://docker-registry.domain.com.
However, since I am currently trying to automate the deployment of a Docker container in the cloud, the docker pull step fails because the login was not performed (it works if I add the login to the template, but that's bad).
It was suggested that I use a certificate (.crt file) to allow the pull to work. I tried installing the certificate using the steps explained here: https://www.linode.com/docs/security/ssl/ssl-apache2-centos
But it does not seem to work: I still have to log in manually in order to perform my docker pull from the registry.
Is there a way I can replace the login command by the use of the certificate?
As I see it, that's the wrong setup for TLS authentication between the Docker engine and a private registry server.
You can follow this:
Running a domain registry
While running on localhost has its uses, most people want their registry to be more widely available. To do so, the Docker engine requires you to secure it using TLS, which is conceptually very similar to configuring your web server with SSL.
Get a certificate
Assuming that you own the domain myregistrydomain.com, and that its DNS record points to the host where you are running your registry, you first need to get a certificate from a CA.
Create a certs directory:
mkdir -p certs
Then move and/or rename your crt file to: certs/domain.crt, and your key file to: certs/domain.key.
Make sure you stopped your registry from the previous steps, then start your registry again with TLS enabled:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
You should now be able to access your registry from another docker host:
docker pull ubuntu
docker tag ubuntu myregistrydomain.com:5000/ubuntu
docker push myregistrydomain.com:5000/ubuntu
docker pull myregistrydomain.com:5000/ubuntu
Gotcha
A certificate issuer may supply you with an intermediate certificate. In this case, you must combine your certificate with the intermediate's to form a certificate bundle. You can do this using the cat command:
cat domain.crt intermediate-certificates.pem > certs/domain.crt
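As a side note for the original pull problem: to make a Docker client trust an internal CA for a specific registry without touching the system trust store, you can drop the CA cert into Docker's per-registry certs directory (documented Docker behavior; the host and port here assume myregistrydomain.com:5000):
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp ca.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt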