Internal certificate used when installing Helm Tiller on Kubernetes

The error below is triggered when executing kubectl -n gitlab-managed-apps logs install-helm.
I've tried regenerating the certificates, and bypassing the certificate check. Somehow it is using my internal certificate instead of the certificate of the source.
root@dev # kubectl -n gitlab-managed-apps logs install-helm
+ helm init --tiller-tls --tiller-tls-verify --tls-ca-cert /data/helm/helm/config/ca.pem --tiller-tls-cert /data/helm/helm/config/cert.pem --tiller-tls-key /data/helm/helm/config/key.pem
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: x509: certificate is valid for *.tdebv.nl, not kubernetes-charts.storage.googleapis.com
What might be the issue here? The screenshot below is the error GitLab is giving me (not much information there either).

After having the same issue, I finally found the solution:
In the /etc/resolv.conf file on your master and worker nodes, find and remove the search XYZ.com entry.
If you are using Jelastic, you have to remove this entry after every restart; it gets added back by Jelastic automatically. I have already contacted them, so maybe they will fix it soon.
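For reference, a quick way to inspect and strip that entry on each node could look like this (a sketch, assuming the offending line starts with search; on managed nodes, e.g. under Jelastic, it may be added back on restart as noted above):
grep '^search' /etc/resolv.conf                 # show the current search domains
sudo sed -i '/^search /d' /etc/resolv.conf      # remove the search entry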

Creating "~/.helm/repository/repositories.yaml" with the following content solved the problem.
cat << EOF >> ~/.helm/repository/repositories.yaml
apiVersion: v1
repositories:
- caFile: ""
  cache: ~/.helm/repository/cache/stable-index.yaml
  certFile: ""
  keyFile: ""
  name: stable
  password: ""
  url: https://kubernetes-charts.storage.googleapis.com
  username: ""
- caFile: ""
  cache: ~/.helm/repository/cache/local-index.yaml
  certFile: ""
  keyFile: ""
  name: local
  password: ""
  url: http://127.0.0.1:8879/charts
  username: ""
EOF
#helm init
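As a quick sanity check afterwards (assuming a Helm v2 client, as in the log above), both repositories from the file should now be listed:
helm repo list     # should show the stable and local repos defined above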

I experienced the same issue on Kubernetes with the Calico network stack under Debian Buster.
After checking a lot of configs and parameters, I finally got it to work by changing the policy for the FORWARD chain to ACCEPT. This made it clear that the issue was somewhere around the firewall.
Running iptables -L gave me the following revealing warning: # Warning: iptables-legacy tables present, use iptables-legacy to see them
The output of the list command did not contain any Calico rules, while running iptables-legacy -L showed them, so it became obvious why it didn't work: Calico uses the legacy iptables interface.
The root cause is Debian Buster's switch to iptables-nft in the alternatives system, which you can check via:
ls -l /etc/alternatives | grep iptables
Doing the following:
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy
Now everything works fine! Thanks to Long on the Kubernetes Slack channel for pointing the way to the solution.
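As a quick verification after switching the alternatives (a hedged check; Calico's chain names are assumed to start with cali):
iptables -L -n | grep -i cali    # the Calico chains should now be visible via the default iptables command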

Related

Kubernetes: do we need to set http_proxy and no_proxy in the apiserver manifest?

My cluster is behind a corporate proxy, and I have manually set http_proxy=myproxy, https_proxy=myproxy and no_proxy=10.96.0.0/16,10.244.0.0/16,<nodes-ip-range> in the three Kubernetes core manifests (kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml). Now I want to upgrade Kubernetes with kubeadm. But I know kubeadm will regenerate these manifests from the kubeadm-config configmap when upgrading, and therefore without these environment variables. I can't find an extraEnvs key in the kubeadm-config configmap (like extraArgs and extraVolumes).
Do I really need to set these variables in all Kubernetes manifests? If not, I think kubeadm will throw a warning because all communications will use the proxy (and I don't want that).
How can I pass these variables to kubeadm when upgrading ?
There are no such flags available for kubeadm at the moment. You may want to open a GitHub feature request for it.
You can use the approach described here or here and export the variables:
$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
And then use sudo -E bash so that the current environment variables are preserved:
$ sudo -E bash -c "kubeadm init... "
An alternative way is to set those variables inline on the command, as shown here:
NO_PROXY=master-ip,node-ip,127.0.0.1 HTTPS_PROXY=http://proxy-ip:port/ sudo kubeadm init --pod-network-cidr=192.168.0.0/16...
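For the upgrade case asked about here, the same pattern should carry over (a sketch; the proxy address, node IPs and target version are placeholders):
export http_proxy=http://proxy-ip:port/
export https_proxy=http://proxy-ip:port/
export no_proxy=master-ip,node-ip,127.0.0.1,10.96.0.0/16,10.244.0.0/16
sudo -E bash -c "kubeadm upgrade plan"
sudo -E bash -c "kubeadm upgrade apply <target-version>"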

Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)

Please see the command below:
helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
which I got from here: https://github.com/helm/charts/tree/master/stable/mssql-linux
After just one month it appears --name is no longer needed, so I now have (see: Helm install unknown flag --name):
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
The error I see now is:
Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)
What is the problem?
Update
Following on from the answers; the command above now works, however I cannot connect to the database using SQL Studio Manager from my local PC. The additional steps I have followed are:
1) kubectl expose deployment mymssql-mssql-linux --type=NodePort --name=mymssql-mssql-linux-service
2) kubectl get service - the below service is relevant here
mymssql-mssql-linux-service NodePort 10.107.98.68 1433:32489/TCP 7s
3) Then try to connect to the database using SQL Studio Manager 2019:
Server Name: localhost,32489
Authentication: SQL Server Authentication
Login: sa
Password: I have tried: b64enc quote and MyStrongPassword1234
I cannot connect using SQL Studio Manager.
Check whether the stable repo is added:
helm repo list
If not, then add it:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
And then run the command below to install mssql-linux:
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
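To confirm the chart can now be resolved before installing (assuming a Helm v3 client):
helm repo list                  # stable should appear here
helm search repo mssql-linux    # the chart should be listed as stable/mssql-linux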
Try:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
and then run your helm command.
Explanation:
Helm version 3 does not have any repository added by default (Helm v2 had the stable repository added by default), so you need to add it manually.
Update:
First of all, if you are using Helm, keep everything in Helm values; it makes things cleaner and easier to find later, rather than mixing kubectl and Helm - I am referring to exposing the service via kubectl.
Regarding steps 1 and 2: you have to read some docs to understand Kubernetes Services.
With the expose command and type NodePort you are exposing your MSSQL server on port 32489 on the Kubernetes nodes. You can check the IPs of the Kubernetes nodes with kubectl get nodes -owide, so your database is available on <NodeIP>:32489. This approach is very tricky; it might work fine for PoC purposes, but it is not a recommended way, especially on cloud-hosted Kubernetes. You can achieve the same result by appending --set service.type=NodePort to your helm command.
Regarding step 3: for debugging purposes you can use kubectl port-forward to forward traffic from the container to your local machine. kubectl port-forward deployment/mymssql-mssql-linux 1433 should do the trick, and you should be able to connect to the database on localhost:1433.
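On the password part of the question: the chart stores the sa password in a Kubernetes Secret, so reading it back is usually safer than guessing. A hedged sketch; the secret name and key below are assumptions derived from the release name and may differ for your chart version:
kubectl get secrets                                # find the secret created by the release
kubectl get secret mymssql-mssql-linux-secret -o jsonpath='{.data.sapassword}' | base64 --decode    # assumed secret/key names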
In case the chart you want to use is not published to a chart repository, you can install the package directly using the path to the unpacked chart directory.
For example (works for helm v3.2.4):
git clone https://github.com/helm/charts/
cd charts/stable
helm install mymssql ./mssql-linux --set acceptEula.value=Y --set edition.value=Developer

How to deploy a release after changing the configurations?

I have successfully released jhub in my cluster. I then changed the config to pull another Docker image, as stated in the documentation.
This time, while running the same old command:
# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.8.2 \
--values jupyter-hub-config.yaml
where the jupyter-hub-config.yaml file is:
proxy:
  secretToken: "<a secret token>"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
I get the following problem:
UPGRADE FAILED
ROLLING BACK
Error: "jhub" has no deployed releases
Error: UPGRADE FAILED: "jhub" has no deployed releases
I then deleted the namespace via kubectl delete ns/jhub and the release via helm delete --purge jhub. I tried the same command again, in vain: the same error appeared.
I read a few GitHub issues and found that either the YAML file was invalid or the --force flag helped. However, in my case, neither of these applies.
I expect to be able to make this release and also to learn how to edit current releases.
Note: As you would find in the aforementioned documentation, there is a pvc created.
I had the same issue when I was trying to update my config.yaml file in GKE. What worked for me was to redo these steps:
run curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --service-account tiller --history-max 100 --wait
[OPTIONAL] helm version to verify that you get output similar to the documentation
Add repo
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
Run upgrade
RELEASE=jhub
NAMESPACE=jhub
helm upgrade $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
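If the error comes back, it can also help to check whether a failed release record is still stored before retrying (assuming a Helm v2 client, where Tiller keeps FAILED releases that block helm upgrade):
helm list --all              # look for jhub in a FAILED or DELETED state
helm delete --purge jhub     # only if a stale record is still listed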
After changes to my kubeconfig, the following solution worked for me:
helm init --tiller-namespace=<ns> --upgrade
This works with kubectl 1.10.0 and helm 2.3.0. I guess this upgrades Tiller to a compatible Helm version.
Don't forget to set the KUBECONFIG variable before using this command - this step by itself may solve your issue if you didn't do it after changing your kubeconfig.
export KUBECONFIG=<*.kubeconfig>
In my case the cluster.server field in the config had changed, but I left the context.name and current-context fields the same as in the previous config; I am not sure if that matters. I faced the same issue on the first try to deploy a new release with helm, but after the first successful deploy it is enough to change the KUBECONFIG variable.
I hope it helps.
I added the following to my gCloud setup. I run it every time I update my config.yaml file. Make sure you are connected to the correct Kubernetes cluster before running it.
update.sh
# Installs Helm.
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
# Make Helm aware of the JupyterHub Helm chart repo.
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
# Re-installs the chart configured by your config.yaml.
RELEASE=jhub
JUPYTERHUB_VERSION=0.9.0
helm upgrade $RELEASE jupyterhub/jupyterhub \
--version=${JUPYTERHUB_VERSION} \
--values config.yaml
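A possible way to invoke it (an assumption about usage; run it from the directory that contains config.yaml):
bash update.sh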

Getting an error while trying to set up Kubernetes on Debian using Helm

While running helm init I was getting this error:
Error: error installing: the server could not find the requested resource (post deployments.extensions)
But I solved it by running:
helm init --client-only
But when I run:
helm upgrade --install --namespace demo demo-databases-ephemeral charts/databases-ephemeral --wait
I'm getting:
Error: serializer for text/html; charset=utf-8 doesn't exist
I found nothing convincing as a solution and I'm not able to proceed forward in the setup.
Any help would be appreciated.
Check if your ~/.kube/config exists and is properly set up. If not, run the following command:
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
Now check that kubectl is properly set up using:
kubectl version
This answer is specific to the issue you are getting. If this does not resolve the issue, please provide more error log.
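For reference, the usual kubeadm steps to set up a kubeconfig for a non-root user look like this (based on the standard kubeadm instructions; paths assume a kubeadm-provisioned control plane):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config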
Apparently, your kube-dns pod is not able to find the API server, so it returns text/html rather than JSON.
1) Check for errors in the DNS container apart from Error: serializer for text/html; charset=utf-8 doesn't exist:
kubectl logs <kube-dns-pod> -n kube-system kubedns
2) Update your DNS pod config with the following flags:
--kubecfg-file=~/.kube/config    # path to your kube config file
--kube-master-url=https://0.0.0.0:3000    # address of your master node
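A hedged sketch of where those flags would go, assuming a kube-dns Deployment in kube-system (the container name and exact args layout can differ between versions):
kubectl -n kube-system edit deployment kube-dns
# then, in the kubedns container's args, add lines such as:
#   - --kubecfg-file=/path/to/kube-config
#   - --kube-master-url=https://<master-ip>:<port>
# and watch the kube-dns pods roll over:
kubectl -n kube-system get pods -w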

How do I change Spinnaker configs after an installation with helm?

I'm new to using Spinnaker and Halyard. I'm following this guide by Google.
When installing Spinnaker, they use Helm and pass in a spinnaker-config.yaml file that looks like this:
./helm install -n cd stable/spinnaker -f spinnaker-config.yaml --timeout 600 \
--version 1.1.6 --wait
spinnaker-config.yaml:
export SA_JSON=$(cat spinnaker-sa.json)
export PROJECT=$(gcloud info --format='value(config.project)')
export BUCKET=$PROJECT-spinnaker-config
cat > spinnaker-config.yaml <<EOF
gcs:
  enabled: true
  bucket: $BUCKET
  project: $PROJECT
  jsonKey: '$SA_JSON'
dockerRegistries:
- name: gcr
  address: https://gcr.io
  username: _json_key
  password: '$SA_JSON'
  email: 1234@5678.com
# Disable minio as the default storage backend
minio:
  enabled: false
# Configure Spinnaker to enable GCP services
halyard:
  spinnakerVersion: 1.10.2
  image:
    tag: 1.12.0
  additionalScripts:
    create: true
    data:
      enable_gcs_artifacts.sh: |-
        \$HAL_COMMAND config artifact gcs account add gcs-$PROJECT --json-path /opt/gcs/key.json
        \$HAL_COMMAND config artifact gcs enable
      enable_pubsub_triggers.sh: |-
        \$HAL_COMMAND config pubsub google enable
        \$HAL_COMMAND config pubsub google subscription add gcr-triggers \
          --subscription-name gcr-triggers \
          --json-path /opt/gcs/key.json \
          --project [project_guid] \
          --message-format GCR
EOF
I need to add another pubsub subscription with a different name than gcr-triggers, and I noticed that anything I try to add in a pipeline won't persist. I suspect this is because it needs to be added with hal, like so:
Note: I've already created and verified the gcloud subscriptions and add-iam-policy-binding.
hal config pubsub google subscription add [new_trigger] \
--subscription-name [new_trigger] \
--json-path /opt/gcs/key.json \
--project $PROJECT \
--message-format GCR
I suspect installing Spinnaker like this is kind of unconventional (correct me if I'm wrong). I've never run a hal binary from the master machine where kubectl is run, and this was not necessary in the guide. Spinnaker's architecture has a bunch of pods that I can see. I've poked around in them and didn't see hal.
My question is: with this guide, how am I supposed to hal config new things? What's the normal way this is done?
Helm is a package manager, similar to apt in some Linux distros.
Because this is a microservice architecture running in Kubernetes, you must access the Halyard pod (it is actually deployed as a StatefulSet).
Get the Halyard pod:
export HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
Access the Halyard pod in the Spinnaker namespace: kubectl -n spinnaker exec -it ${HALYARD} /bin/bash
Test access by running the command hal config; you should get the full config of Spinnaker.
After you make the changes you need, don't forget to run hal deploy apply.
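Putting it together for the pubsub change asked about above (a sketch; the subscription name and project are placeholders, and the commands mirror the ones already shown in this thread):
export HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
kubectl -n spinnaker exec -it ${HALYARD} /bin/bash
# inside the Halyard pod:
hal config pubsub google subscription add my-new-trigger \
  --subscription-name my-new-trigger \
  --json-path /opt/gcs/key.json \
  --project <your-project> \
  --message-format GCR
hal deploy apply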