I have an ingress controller up and running in the default namespace. My other namespaces have their own ingress YAML files. Whenever I try to deploy one of them, I get the following error:
Error from server (InternalError): error when creating "orchestration-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: x509: certificate is valid for ingress-nginx-controller-admission, ingress-nginx-controller-admission.ingress-nginx.svc, not ingress-nginx-controller-admission.default.svc
This solved my error: I removed the previous version of the ingress controller and deployed this one instead.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml
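For anyone who cannot simply redeploy: the error means the admission webhook's Service reference points at the default namespace while its TLS certificate was issued for the ingress-nginx namespace. A rough workaround sketch, assuming the usual resource name ingress-nginx-admission:

```
# Inspect where the validating webhook points (service name/namespace).
kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfiguration ingress-nginx-admission -o yaml

# Last-resort workaround: delete the webhook configuration so Ingress
# objects are no longer validated by the controller's admission webhook.
kubectl delete validatingwebhookconfiguration ingress-nginx-admission
```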
Related
I recently updated my EC2 instances to use IMDSv2 but had to roll back because of the following issue:
It looks like after the upgrade my init containers started failing, and I saw the following in the logs:
time="2022-01-11T14:25:01Z" level=info msg="PUT /latest/api/token (403) took 0.753220 ms" req.method=PUT req.path=/latest/api/token req.remote=XXXXX res.duration=0.75322 res.status=403 time="2022-01-11T14:25:37Z" level=error msg="Error getting instance id, got status: 401 Unauthorized"
We are using kube2iam for this. Any advice on what changes need to be made on the kube2iam side to support IMDSv2? Below is some info from my kube2iam DaemonSet:
EKS = 1.21
image = "jtblin/kube2iam:0.10.9"
I'm setting up a GitLab runner in my Kubernetes cluster.
The runner is properly deployed and running. However, when I trigger any pipeline, it fails during the prepare stage with an authentication error while pulling from my private Docker registry:
Preparing the "kubernetes" executor 00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image myprivaterepo.com/terraform:light ...
Using attach strategy to execute scripts...
Preparing environment 00:04
Waiting for pod gitlab-runner/runner-d8cjrcgf-project-2156-concurrent-0nhsjb to be running, status is Pending
ContainersNotInitialized: "containers with incomplete status: [init-permissions]"
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
WARNING: Failed to pull image with policy "": image pull failed: rpc error: code = Unknown desc = failed to pull and unpack image "myprivaterepo.com/terraform:light": failed to resolve reference "myprivaterepo.com/terraform:light": pulling from host myprivaterepo.com failed with status code [manifests light]: 401 Unauthorized
ERROR: Job failed: prepare environment: waiting for pod running: pulling image "myprivaterepo.com/terraform:light": image pull failed: rpc error: code = Unknown desc = failed to pull and unpack image "myprivaterepo.com/terraform:light": failed to resolve reference "myprivaterepo.com/terraform:light": pulling from host myprivaterepo.com failed with status code [manifests light]: 401 Unauthorized. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
I already tried adding imagePullSecrets (kubernetes.io/dockerconfigjson) to the runner deployment, and also setting GitLab -> Settings -> CI/CD -> Variables -> DOCKER_AUTH_CONFIG, but neither worked.
Where is the correct place to add it? I'm using the Helm chart.
My .gitlab-ci.yaml:
.base-terraform:
  image:
    name: myprivaterepo.com/terraform:light
In my DOCKER_AUTH_CONFIG I had the domain with a different port than the actual one. GitLab CI picks up this project environment variable automatically, as it should.
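For reference, a sketch of the two places credentials can come from; the registry host (and port, if any) must match the image reference exactly, and regcred, gitlab-runner, myuser and mypassword below are example names:

```
# DOCKER_AUTH_CONFIG must reference the registry host exactly as it appears
# in the image name, e.g. "myprivaterepo.com" vs "myprivaterepo.com:5000".
# The "auth" field is base64 of "user:password":
echo -n 'myuser:mypassword' | base64
# -> paste into: {"auths":{"myprivaterepo.com":{"auth":"<base64 value>"}}}

# Alternative: create an image pull secret in the runner's namespace and
# reference it from the runner configuration.
kubectl create secret docker-registry regcred \
  --docker-server=myprivaterepo.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  --namespace=gitlab-runner
```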
I am new to AWS and kubectl, and I need to deploy an app to AWS. After deploying it to the EKS cluster, I edited the ingress with kubectl, but unfortunately it returned 404 Not Found. (I am pretty sure the new service container works fine.)
After running kubectl describe ingress, here are some of the event reports:
Warning FailedBuildModel 40m ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-4a93-4e27-9d6b-xxxxxxxx
Warning FailedBuildModel 22m ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-5368-41e1-8a4d-xxxxxxxx
Warning FailedBuildModel 5m8s ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-20ea-4bd0-b1cb-xxxxxxxx
Does anyone have ideas about this issue?
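InvalidIdentityToken from AssumeRoleWithWebIdentity usually means the cluster's OIDC issuer is not registered with IAM (or the controller's IAM role trusts the wrong issuer). A sketch of how to check and fix that, assuming the ingress controller uses IRSA; my-cluster and the region are placeholders:

```
# Check the cluster's OIDC issuer URL and whether IAM knows about it.
aws eks describe-cluster --name my-cluster --region us-east-1 \
  --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers

# If the issuer is missing, register it so AssumeRoleWithWebIdentity
# can verify the service account token.
eksctl utils associate-iam-oidc-provider \
  --cluster my-cluster --region us-east-1 --approve
```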
I use Rancher to access the cluster, but the access failed and an error was reported:
Cluster health check failed: Failed to communicate with API server: Get "https://172.20.0.1:443/api/v1/namespaces/kube-system?timeout=45s": context deadline exceeded.
Error Get "https://172.20.0.1:443/api/v1/namespaces?timeout=45s": waiting for cluster [c-9dbht] agent to connect.
I found that cattle-cluster-agent is in CrashLoopBackOff state and reports the following errors:
error msg="failed to unmarshal https://releases.rancher.com/index.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type repo.IndexFile".
level=error msg="error syncing 'rancher-latest': handler helm-clusterrepo-download: failed to parse response from https://releases.rancher.com/index.yaml, requeuing".
Observed a panic: runtime.boundsError{x:3, y:0, signed:true, code:0x2} (runtime error: slice bounds out of range [:3] with capacity 0).
The cattle-cluster-agent container is constantly being restarted.
Does anyone know how to solve this?
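The unmarshal error suggests the agent is receiving something other than a Helm repository index from that URL (for example an HTML error page injected by a proxy or captive DNS). A quick in-cluster check, using a throwaway curl pod (the pod name and curlimages/curl image are just conveniences):

```
# Fetch the same URL from inside the cluster and inspect the first lines;
# valid output should be Helm repository index YAML, not an HTML error page.
kubectl run curl-check --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sS https://releases.rancher.com/index.yaml | head -n 5
```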
I recently installed FluxCD 1.19.0 on an Azure AKS Kubernetes cluster using fluxctl install. We use a private Git server (self-hosted Bitbucket) which Flux is able to reach and check out from.
Now Flux is not applying anything, with the following error messages:
ts=2020-06-10T09:07:42.7589883Z caller=loop.go:133 component=sync-loop event=refreshed url=ssh://git#bitbucket.some-private-server.com:7999/infra/k8s-gitops.git branch=master HEAD=7bb83d1753a814c510b1583da6867408a5f7e21b
ts=2020-06-10T09:09:00.631764Z caller=sync.go:73 component=daemon info="trying to sync git changes to the cluster" old=7bb83d1753a814c510b1583da6867408a5f7e21b new=7bb83d1753a814c510b1583da6867408a5f7e21b
ts=2020-06-10T09:09:01.6130559Z caller=sync.go:539 method=Sync cmd=apply args= count=3
ts=2020-06-10T09:09:20.2097034Z caller=sync.go:605 method=Sync cmd="kubectl apply -f -" took=18.5965923s err="running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding" output=
ts=2020-06-10T09:09:38.7432182Z caller=sync.go:605 method=Sync cmd="kubectl apply -f -" took=18.5334244s err="running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding" output=
ts=2020-06-10T09:09:57.277918Z caller=sync.go:605 method=Sync cmd="kubectl apply -f -" took=18.5346491s err="running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding" output=
ts=2020-06-10T09:09:57.2779965Z caller=sync.go:167 component=daemon err="<cluster>:namespace/dev: running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding; <cluster>:namespace/prod: running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding; dev:service/hello-world: running kubectl: error: unable to recognize \"STDIN\": an error on the server (\"\") has prevented the request from succeeding"
ts=2020-06-10T09:09:57.2879489Z caller=images.go:17 component=sync-loop msg="polling for new images for automated workloads"
ts=2020-06-10T09:09:57.3002208Z caller=images.go:27 component=sync-loop msg="no automated workloads"
From what I understand, Flux passes the resource definitions to kubectl, which then applies them?
The way I interpret the error, kubectl isn't being passed anything at all. However, I opened a shell in the container and confirmed that Flux was in fact checking something out, which it was.
I tried raising the verbosity to 9, but it didn't return anything I deemed relevant (just detailed output of the HTTP requests and responses against the Kubernetes API).
So what is happening here?
The problem was the version of kubectl bundled in the 1.19 Flux release, so I fixed it by using a prerelease image: https://hub.docker.com/r/fluxcd/flux-prerelease/tags
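For anyone hitting the same thing, a sketch of swapping in the prerelease image without reinstalling; the flux namespace and deployment name are assumptions based on a default fluxctl install, and <tag> is a placeholder for a tag picked from the page above:

```
# Point the existing Flux deployment at a prerelease image that ships a
# newer kubectl. Substitute <tag> with a real tag from Docker Hub.
kubectl -n flux set image deployment/flux flux=fluxcd/flux-prerelease:<tag>
```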