How to install latest Kong chart with ingress controller? - kubernetes

I have a problem installing Helm chart for Kong and not sure if I'm doing something wrong or it's a chart issue.
If I run:
helm upgrade kong stable/kong \
--install \
--set ingressController.enabled=true
Everything is installed, but on the second run (and I would like to always execute the same command in CI/CD) it fails because the migration jobs use the wrong password.
If I try to hardcode the password:
helm upgrade kong stable/kong \
--install \
--set ingressController.enabled=true,env.pg_password=<hardcoded-string>
It fails even on the first run with a password authentication error.
What is the expected way to deploy Kong as an ingress controller in a continuous manner?
Edit 1
Versions:
rafal.wrzeszcz@devel0:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
rafal.wrzeszcz@devel0:~$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Edit 2
kong-postgresql proof of existence:
rafal.wrzeszcz@devel0:~$ kubectl get secret kong-postgresql -o yaml --namespace <my-namespace>
apiVersion: v1
data:
postgresql-password: <base64-encoded-string>
…
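Given that secret, one way to make the upgrade idempotent is to read the existing password back out of the cluster and pass it explicitly on every run. This is a sketch, not the chart's documented workflow: it assumes the secret is named kong-postgresql as shown above, and that the bundled PostgreSQL subchart accepts the password under postgresql.postgresqlPassword.

```shell
# Reuse the password already stored in the kong-postgresql secret so the
# migration jobs and the database agree on every upgrade.
PG_PASS=$(kubectl get secret kong-postgresql \
  --namespace <my-namespace> \
  -o jsonpath='{.data.postgresql-password}' | base64 --decode)

helm upgrade kong stable/kong \
  --install \
  --set ingressController.enabled=true \
  --set postgresql.postgresqlPassword="$PG_PASS"
```

This keeps the CI/CD command identical on every run, at the cost of one extra kubectl call before the upgrade.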

Related

Unable to implement multi-cluster federation using kubefed

I am trying to federate multiple clusters (minikube on mac) using kubefed https://github.com/kubernetes-sigs/kubefed.
With the latest version of Helm (Helm 3) and Kubernetes, none of the available examples work.
1. https://itnext.io/a-kubefed-tutorial-to-synchronise-k8s-clusters-86108194ed79
2. https://betterprogramming.pub/build-a-federation-of-multiple-kubernetes-clusters-with-kubefed-v2-8d2f7d9e198a
3. https://github.com/runyontr/kubefed-demo
Since Tiller is gone in Helm 3, all of the mentioned demos fail when deploying kubefed using the Helm chart.
helm --kube-context cluster1 install kubefed-charts/kubefed --name kubefed --version=0.1.0-rc3 --namespace kube-federation-system
Is there a working example/demo available that can be used as a reference for Helm 3 and the latest Kubernetes?
My env details:
OS: Mac
Helm Version:
version.BuildInfo{Version:"v3.8.2", GitCommit:"6e3701edea09e5d55a8ca2aae03a68917630e91b", GitTreeState:"clean", GoVersion:"go1.18.1"}
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
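For reference, the Helm 3 equivalent of the command in the question drops the --name flag (the release name is now a positional argument) and must create the target namespace itself. A sketch, assuming the same chart repo and version from the question:

```shell
# Helm 3: release name is positional; --name no longer exists, and
# --create-namespace replaces the implicit namespace creation of Helm 2.
helm --kube-context cluster1 install kubefed kubefed-charts/kubefed \
  --version=0.1.0-rc3 \
  --namespace kube-federation-system \
  --create-namespace
```

Whether that chart version works against a current Kubernetes server is a separate question; this only fixes the CLI syntax.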

Problem with Krew “Error: flags cannot be placed before plugin name”

I have a local minikube cluster (version: v1.21.0) with kubectl:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:32:49Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
I installed krew according to the documentation: https://krew.sigs.k8s.io/docs/user-guide/setup/install/
Then, when I try to execute any command this is the result:
Error: flags cannot be placed before plugin name: --cluster
For example:
minikube kubectl krew version
Error: flags cannot be placed before plugin name: --cluster
Why are you running minikube before kubectl in the command?
minikube kubectl krew version
You can set and use the Kubernetes context using this command:
kubectl config use-context CONTEXT_NAME
Using kubectl directly, you can access Krew and install plugins:
kubectl krew install access-matrix
For example:
kubectl access-matrix
Read more at: https://krew.sigs.k8s.io/docs/user-guide/quickstart/
https://github.com/kubernetes/kubectl/issues/884

Kubernetes deployment Error: unknown flag: --replicas issue

While creating a deployment using command
kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
I am getting error Error: unknown flag: --replicas
controlplane $ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
controlplane $ kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
Error: unknown flag: --replicas
See 'kubectl create deployment --help' for usage.
Could anyone please help me with the reason for this, as this command works on other Kubernetes clusters?
You may try putting a blank space between -- and the flag name.
For example:
kubectl create deploy nginx --image=nginx:1.7.8 -- replicas=2
It works for me.
It looks like the --replicas and --port flags were added in version 1.19, based on the v1.19 release notes, and that's why you are seeing the error.
So you need at least version 1.19 to be able to use the replicas and port flags as part of the kubectl create deployment command.
You can, however, use the kubectl scale/expose commands after creating the deployment.
Relevant PR links for replicas and port.
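The scale/expose workaround can be sketched as three commands on the older client (the nginx name and image are taken from the question):

```shell
# kubectl < 1.19: create the deployment first (1 replica by default),
# then scale it up and expose it on a port.
kubectl create deployment nginx --image=nginx:1.7.8
kubectl scale deployment nginx --replicas=2
kubectl expose deployment nginx --port=80
```
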
If you are trying to update the replica parameter inside the helm upgrade command in an Azure release pipeline, then refer to the following link:
Overriding Helm chart values
It explains that you can override the parameters inside the values.yaml file with --set, like this:
helm upgrade $(RELEASE_ENV) --install \
infravc/manifests/helm/web \
--set namespace=$(NAMESPACE) \
--set replicas=$(replicas) \
--set replicasMax=$(replicasMax) \
--set ingress.envSuffix=$(envSuffix) \
--set ENV.SECRET=$(appSecretNonprod) \
--set ENV.CLIENT_ID=$(clientIdNonprod)

kubectl suddenly asking for username

My last deployment pipeline stage on GitLab suddenly started asking for a username when running the "kubectl apply" command to deploy my application in my cluster in GKE (version 1.18.12-gke.1201, also tested on 1.17). I'm using the image dtzar/helm-kubectl to run kubectl commands in the pipeline (older versions of this image were tested as well). This is the log generated when running the stage:
$ kubectl config set-cluster $ITINI_STAG_KUBE_CLUSTER --server="$ITINI_STAG_KUBE_URL" --insecure-skip-tls-verify=true
Cluster "itini-int-cluster" set.
$ kubectl config set-credentials gitlab --token="$ITINI_STAG_KUBE_TOKEN"
User "gitlab" set.
$ kubectl config set-context default --cluster=$ITINI_STAG_KUBE_CLUSTER --user=gitlab
Context "default" created.
$ kubectl config use-context default
Switched to context "default".
$ kubectl apply -f deployment.yml
Please enter Username: error: EOF
I reproduced the same commands locally, and it's working fine.
When running "kubectl version" in the pipeline, this is the output:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
When running it locally (with exact same environment variables and tokens configured in the pipeline), this is the output:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.12-gke.1201", GitCommit:"160736941f00e54acb1c0a4647166b6f6eb211d9", GitTreeState:"clean", BuildDate:"2020-12-08T19:38:26Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
What could be causing this problem?
The problem was actually in GitLab. My token variable was marked as "protected" in the GitLab settings, and it was being replaced by an empty string when running the config command. So, the problem was solved by unmarking it as protected.
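One cheap guard against this class of failure is to fail the job as soon as a required variable expands to the empty string, before any kubectl config command runs. A sketch: the require_var helper below is hypothetical (not part of any image), and ITINI_STAG_KUBE_TOKEN is the variable from the question.

```shell
# Fail fast when a required pipeline variable is empty, e.g. a "protected"
# variable that arrives blank on an unprotected branch or tag.
require_var() {
  # $1 is the variable *name*; eval expands it indirectly (POSIX sh safe).
  if [ -z "$(eval "printf '%s' \"\$$1\"")" ]; then
    echo "$1 is empty - check the variable's protected/masked settings" >&2
    return 1
  fi
}

# Call before configuring credentials, e.g.:
# require_var ITINI_STAG_KUBE_TOKEN
```
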

Why do I get schema error creating a Minikube pod with kubectl?

I am new to Kubernetes and have successfully set up minikube, kubectl and Docker on Windows 10 Pro with Hyper-V.
I am now trying to create a Pod using kubectl apply -f first-pod.yaml.
Here is a copy of my .yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
    - name: webapp
      image: richardchesterwood/k8s-fleetman-webapp-angular:release0
A number of Stack Overflow posts recommend checking the kubectl version. I have done that and believe it is correct. I am running the latest versions of kubectl and Kubernetes.
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
SchemaError(io.k8s.api.admissionregistration.v1beta1.RuleWithOperations): invalid object doesn't have additional properties
I have also stopped and restarted both minikube and docker. Any other ideas?
My version of kubectl was conflicting with the one provided by Docker Desktop. Solved it by:
1. Running Get-Command kubectl in PowerShell, which returned C:\Program Files\Docker\Docker\Resources\bin
2. Going to Environment Variables and moving C:\kube above C:\Program Files\Docker\Docker\Resources\bin
3. Restarting PowerShell