Jenkins X Nexus pod is going into "CrashLoopBackOff" state with "Login to nexus failed with the default password and the provided password secret file" - kubernetes

I am trying to create a pipeline in Jenkins X on an OpenShift cluster. The pipeline is created successfully, but the Spring Boot application build fails with a "Nexus 401 authentication error".
So I restarted all the pods in the jx namespace. All the pods came back up and running except the Nexus pod.
The nexus pod is going into "CrashLoopBackOff" error with the following error:
"ERROR: Login to nexus failed. Tried both the default password and the provided password secret file"
I have observed that there is a nexus secret containing the username and password, but I am unable to log in to the Nexus dashboard using those credentials.
Credentials:
username: admin
password: hUw3nNQ!0eD,uuBS9qm9
I also suspect that the issue might be caused by the exclamation mark in the password.
Please let me know if there is any way to update the secret so that I can log in to Nexus.
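One option is to inspect the secret and set it to a known value, then restart the pod. This is a sketch only: the secret name (`nexus`), key (`password`), and pod label are assumptions to verify with `kubectl get secrets -n jx` first, and if Nexus itself was already initialised with a different admin password, the secret must be set to match that password rather than a new one.

```shell
# Decode the password currently stored in the secret (secret name and key
# are assumptions; check `kubectl describe secret nexus -n jx` first)
kubectl get secret nexus -n jx -o jsonpath='{.data.password}' | base64 --decode; echo

# Set a new password, avoiding characters like "!" that shells may expand
NEW_PASS=$(printf '%s' 'S3cureNexusPass' | base64)
kubectl patch secret nexus -n jx -p "{\"data\":{\"password\":\"$NEW_PASS\"}}"

# Delete the pod so its controller recreates it with the updated secret
kubectl delete pod -n jx -l app=nexus
```

Base64-encoding the value yourself sidesteps any shell-quoting surprises from special characters such as "!".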

Related

Pods deployed in OKD4.8 are going to "ImagePullBackOff" state with "unauthorized: authentication required" error

I have successfully installed OKD 4.8 and am able to deploy applications. When I try to deploy a new application many days after the installation, the pods go into the "ImagePullBackOff" state with an "unauthorized: authentication required" error.
To reproduce: install OKD 4.8, deploy a few applications, leave the setup for some days, and then deploy a new application; the pod goes into the "ImagePullBackOff" state with the "unauthorized: authentication required" error.
Log bundle
$ sudo podman pull image-registry.openshift-image-registry.svc:5000/frontend/nginx:v1.0
WARN[0000] Failed to decode the keys ["storage.options.override_kernel_check"] from "/etc/containers/storage.conf".
Trying to pull image-registry.openshift-image-registry.svc:5000/frontend/nginx:v1.0...
Error: initializing source docker://image-registry.openshift-image-registry.svc:5000/frontend/nginx:v1.0: unable to retrieve auth token: invalid username/password: unauthorized: authentication required
I think this might be happening because the oc login session has expired, after which podman fails to authenticate against the default internal registry using the old token. Kindly share any process to avoid this behaviour.
Client Version: 4.8.0-0.okd-2021-11-14-052418
Server Version: 4.8.0-0.okd-2021-11-14-052418
Kubernetes Version: v1.21.2-1555+9e8f924492b7d7-dirty
I have installed OKD4.8 on VMWare machine using the UPI method.
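If the stale-token theory above is right, one plausible workaround is to refresh the oc session and re-run podman's login against the internal registry with the newly issued token. This is a sketch; the API server URL and username are placeholders, and tokens issued by `oc login` do expire (24 hours by default).

```shell
# Renew the OpenShift session (the old token cached by podman has expired)
oc login https://api.okd.example.com:6443 -u myuser

# Re-authenticate podman with the freshly issued token
podman login -u "$(oc whoami)" -p "$(oc whoami -t)" \
  image-registry.openshift-image-registry.svc:5000

# Retry the pull that previously failed
podman pull image-registry.openshift-image-registry.svc:5000/frontend/nginx:v1.0
```

Note that pods pulling from the internal registry normally authenticate via their service account's image pull secret, which does not expire; the expiry only affects manual pulls done with your personal session token.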

Error in add-iam-policy-binding to ESP endpoint service in GCloud

I am trying to create an endpoint for an API to be deployed into an existing GKE cluster by following the instructions in Getting started with Cloud Endpoints for GKE with ESPv2.
I cloned the sample code in the repo and modified the content of openapi.yaml:
# [START swagger]
swagger: "2.0"
info:
  description: "A simple Google Cloud Endpoints API example."
  title: "Endpoints Example"
  version: "1.0.0"
host: "my-api.endpoints.my-project.cloud.goog"
I then deployed it via the command:
endpoints/getting-started (master) $ gcloud endpoints services deploy openapi.yaml
Now I can see that it has been created:
$ gcloud endpoints services list
NAME TITLE
my-api.endpoints.my-project.cloud.goog
I also have postgreSQL service account:
$ gcloud iam service-accounts list
DISPLAY NAME EMAIL DISABLED
my-postgresql-service-account my-postgresql-service-acco@my-project.iam.gserviceaccount.com False
In the section Endpoint Service Configuration of documentation it says to add the role to the attached service account for the endpoint service as follows, but I get this error:
$ gcloud endpoints services add-iam-policy-binding my-api.endpoints.my-project.cloud.goog \
--member serviceAccount:my-postgresql-service-acco@my-project.iam.gserviceaccount.com \
--role roles/servicemanagement.serviceController
ERROR: (gcloud.endpoints.services.add-iam-policy-binding) User [myusername@mycompany.com] does not have permission to access services instance [my-api.endpoints.my-project.cloud.goog:getIamPolicy] (or it may not exist): No access to resource: services/my-api.my-project.cloud.goog
The previous lines show that the service exists, I guess? Now I am not sure how to resolve this. What permissions do I need? Who can grant me those permissions, and what permissions do they need in order to do so? How can I check? Is there any other solution?
The issue was resolved after I was assigned the "Project Admin" role. This was not ideal, as it gave me far more permission than necessary. I also tried the role roles/endpoints.portalAdmin, but it did not help.
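A narrower grant than Project Admin may be enough, assuming the missing piece is really just the getIamPolicy/setIamPolicy permission on the managed service: have a project owner grant the caller the Service Management Administrator role, then retry the binding. The account names below mirror the question and are placeholders.

```shell
# Grant the caller rights to read/set IAM policy on managed services
# (run by someone who already has owner-level access on the project)
gcloud projects add-iam-policy-binding my-project \
  --member user:myusername@mycompany.com \
  --role roles/servicemanagement.admin

# Retry the binding on the endpoint service itself
gcloud endpoints services add-iam-policy-binding my-api.endpoints.my-project.cloud.goog \
  --member serviceAccount:my-postgresql-service-acco@my-project.iam.gserviceaccount.com \
  --role roles/servicemanagement.serviceController
```

roles/endpoints.portalAdmin governs the Endpoints developer portal, which is why it did not help here.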

OpenShift, how do I give myself cluster-admin?

I just started using OpenShift and have permissions problems. I am on the free trial for OpenShift 4.3.3 and cannot get my containers to run as root. I am the only user on my instance and I have admin, but it says I need cluster-admin to run the containers as root?
I tried running:
oc policy add-role-to-group cluster-admin anyuid
and that returned:
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io "cluster-admin" is forbidden: user "hustlin" (groups=["system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
{APIGroups:["*"], Resources:["*"], Verbs:["*"]}
{NonResourceURLs:["*"], Verbs:["*"]}
Going through OpenShift Online -> Administrator view -> User Management -> Roles -> cluster-admin -> Role Bindings, it states:
Restricted Access
You don't have access to this section due to cluster policy.
Error details
rolebindings.rbac.authorization.k8s.io is forbidden: User "hustlin" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
I feel like it should not be this difficult for me to run a container as root. Just testing out OpenShift and I haven't been able to successfully run a single container on the platform, they all eventually go to CrashLoopBackOff.
Yes, I did try the:
oc login -u system:admin
command and it prompted me for my password before returning:
error: username system:admin is invalid for basic auth
I even tried following this guide from the OpenShift blog, but it would not recognize the oadm command.
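Two notes, offered as a sketch rather than a definitive fix: the old oadm binary was folded into `oc adm` in OpenShift 3.x, which is why the guide's commands are not recognized; and on OpenShift Online's free tier you cannot obtain cluster-admin at all, since the platform operator holds it. On a cluster you control, the usual commands look like this (user names and the API URL are placeholders):

```shell
# Log in as the kubeadmin user created at install time
# (OpenShift Online does not expose this account)
oc login -u kubeadmin https://api.cluster.example.com:6443

# Grant cluster-admin to your own user
oc adm policy add-cluster-role-to-user cluster-admin hustlin

# To let pods in one project run as root, grant the anyuid SCC to that
# project's default service account instead of handing out cluster-admin
oc adm policy add-scc-to-user anyuid -z default -n myproject
```

The anyuid SCC route is the conventional way to run containers as root on OpenShift; the `oc policy add-role-to-group cluster-admin anyuid` attempt failed because it tried to bind the cluster-admin role (which you cannot grant without already holding it) to a group named anyuid, rather than granting the anyuid security context constraint.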

ArgoCD can't connect to a github account with 2FA

I'm trying to use CircleCI + ArgoCD for CI/CD on a DigitalOcean Kubernetes cluster. Is there a way to connect ArgoCD to a GitHub account that has 2FA enabled? Every time I go to the connect repo section it gives me "Unable to connect repository: authentication required", even though the credentials are correct.
Try using a personal access token; see https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line
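With a personal access token (assuming a classic token with the repo scope), the token goes in the password field; 2FA does not apply to token-based HTTPS authentication. A sketch with placeholder repo URL, username, and token:

```shell
# Register the repository in ArgoCD over HTTPS, using the token as password
argocd repo add https://github.com/myorg/myapp.git \
  --username myuser \
  --password ghp_examplePersonalAccessToken
```

The same username/token pair also works in the "Connect repo" form in the ArgoCD UI.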

Using KeyCloak Gateway in a K8S Cluster

I have KeyCloak Gateway running successfully locally providing Google OIDC authentication for the Kubernetes dashboard. However using the same settings results in an error when the app is deployed as a pod in the cluster itself.
The error I see when the Gateway is running in a K8S pod is:
unable to exchange code for access token {"error": "invalid_request: Credentials in post body and basic Authorization header do not match"}
I'm calling the gateway with the following options:
--enable-logging=true
--enable-self-signed-tls=true
--listen=:443
--upstream-url=https://mydashboard
--discovery-url=https://accounts.google.com
--client-id=<client id goes here>
--client-secret=<secret goes here>
--resources=uri=/*
With these settings applied to a container in a pod I can browse to the Gateway, am redirected to Google to log in, and then am redirected back to the Gateway where the error above is generated.
What could account for the difference between running the application locally and running it in a pod that would generate the above error?
This turned out to be a copy/paste fail in the end, with the client secret being incorrect. The error message wasn't much help here, but at least it was a simple fix.
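One way to catch this class of copy/paste error early is to decode the value the pod actually receives and compare it against the value in the OIDC provider's console. Assuming the client secret is stored in a Kubernetes secret (the secret name, key, and namespace below are hypothetical):

```shell
# Decode the client secret as deployed to the cluster
DEPLOYED=$(kubectl get secret keycloak-gateway -n kube-system \
  -o jsonpath='{.data.client-secret}' | base64 --decode)

# Compare it with the value copied from the provider's console
EXPECTED='paste-secret-from-google-console-here'
[ "$DEPLOYED" = "$EXPECTED" ] && echo "secrets match" || echo "MISMATCH"
```

A mismatch here would have surfaced the problem without decoding the rather opaque "Credentials in post body and basic Authorization header do not match" error.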