Connecting to the Codefresh image registry via kubectl, problem with secrets

I am trying to deploy an application via kubectl using an image stored on Codefresh. It runs perfectly when I place the image on a public registry.
The problem is that when I apply the deployment.yaml I get an "ImagePullBackOff" error on the pods. I assume, I think correctly, that this is because I need a secret to be able to access my Codefresh image.
This is the container part of my current deployment.yaml:
spec:
  containers:
  - name: dockapp
    # States the image that will be put inside the pod. The secret granting access is declared below.
    # registry.hub.docker.com/jamiedovu/dockapp:latest
    image: r.cfcr.io/jamiew87/my-app-image:master
    ports:
    - containerPort: 8080
      name: http
  imagePullSecrets:
  - name: regcred
My question is: what do I need to put into the secret "regcred" to be able to connect to this private registry? The Kubernetes documentation only demonstrates how to do this for Docker Hub.

I think it's explained in the docs.
export DOCKER_REGISTRY_SERVER=r.cfcr.io
export DOCKER_USER=YOUR_USERNAME
export DOCKER_PASSWORD=YOUR_REGISTRY_PASSWORD
export DOCKER_EMAIL=YOUR_EMAIL
kubectl create secret docker-registry cfcr \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL
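One detail to watch: this creates a secret named cfcr, while the deployment in the question references regcred under imagePullSecrets. The two names must match, so either create the secret as regcred instead, or point the deployment at cfcr:

  imagePullSecrets:
  - name: cfcr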

For people in the future with this problem:
The Codefresh registry is an actual Docker registry; not knowing this was what caused my problems.
So in the docker-username and related fields you put your Codefresh credentials, and instead of your account password you use a token that you generate within Codefresh. This gives you access to the r.cfcr.io registry.
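Putting it together, the command from the first answer with the Codefresh-specific values would look something like this sketch (the username, token, and email values are placeholders, and the token is the one generated in Codefresh rather than your account password):

kubectl create secret docker-registry regcred \
  --docker-server=r.cfcr.io \
  --docker-username=<codefresh-username> \
  --docker-password=<codefresh-generated-token> \
  --docker-email=<your-email>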

Related

unable to deploy local container image to k8s cluster

I have tried to deploy one of the local container images I created but keep getting the below error:
Failed to pull image "webrole1:dev": rpc error: code = Unknown desc =
Error response from daemon: pull access denied for webrole1,
repository does not exist or may require 'docker login': denied:
requested access to
I followed an article to containerize my application and was able to complete this successfully, but when I try to deploy it to a k8s pod I don't succeed.
My pod.yaml looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: learnk8s
spec:
  containers:
  - name: webrole1dev
    image: 'webrole1:dev'
    ports:
    - containerPort: 8080
[screenshots from the PowerShell session omitted]
I am new to Docker and k8s, so thanks for the help in advance; I would appreciate a detailed response.
When you're working locally, you can use an image name like webrole1, but that name doesn't tell Docker where the image came from (because it didn't come from anywhere; you built it locally). When you start working with multiple hosts, you need to push images to a Docker registry. For local Kubernetes experiments you can also change your config so you build your image in the same Docker environment Kubernetes is using, though the specifics of that depend on how you set up both Docker and Kubernetes.
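A sketch of both approaches (the registry name is a placeholder; the second option assumes a local minikube cluster):

# Option 1: push the image somewhere the cluster can pull from
docker tag webrole1:dev <your-registry>/webrole1:dev
docker push <your-registry>/webrole1:dev

# Option 2 (minikube): build inside the cluster's own Docker daemon
eval $(minikube docker-env)          # in PowerShell: minikube docker-env | Invoke-Expression
docker build -t webrole1:dev .

With option 2, also set imagePullPolicy: Never on the container so Kubernetes uses the locally built image instead of trying to pull it.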

Secret appregistry-mw-proxy-secret not found after deploying HCL Connections Customizer Helm chart

I'm installing the Component Pack 6.5.0.0 for HCL Connections. Orient Me works, but after deploying the Customizer, my mw-proxy pods got stuck at ContainerCreating. They show the following error in the event log:
MountVolume.SetUp failed for volume "appregistry-mw-proxy-secret-vol" : secrets "appregistry-mw-proxy-secret" not found
I had never heard of this secret and looked inside the chart. mw-proxy-cloud-deployment.yaml tries to mount it:
volumes:
- name: nfs
  persistentVolumeClaim:
    claimName: customizernfsclaim
- name: appregistry-mw-proxy-secret-vol
  secret:
    secretName: appregistry-mw-proxy-secret
The problem is that I could not find any information about what this secret is for and how it should be populated. The documentation just requires the bootstrap, connections-env and infrastructure charts, and all of them were installed. So I just tried creating a secret from an arbitrary file:
echo Test123 > pwd-test
k create secret generic appregistry-mw-proxy-secret --from-file=pwd-test
After deleting all the pods, they came up running. But I don't know what this secret is for and what the Customizer expects; maybe this breaks some functionality of the application.
My questions are:
What is this secret for?
How do I create it correctly? (User, password, certificate, whatever)
Is there any documentation about it?
Have you tried adding the parameter
env.force_regenerate=true
to the bootstrap Helm chart?
There is also createSecret=true in the connections-env Helm chart.
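Since the charts are already installed in this case, a sketch of setting both flags with Helm 2 syntax (the release names and chart archive names are placeholders):

helm upgrade bootstrap bootstrap-0.1.0.tgz --set env.force_regenerate=true
helm upgrade connections-env connections-env-0.1.0.tgz --set createSecret=true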
If you used this documentation, the order of the Helm deployments is wrong.
The infrastructure deployment creates the secret "appregistry-mw-proxy-secret". So deploy infrastructure first and mw-proxy after that, and the pods will start.
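Under that assumption, the order would look roughly like this sketch (the chart archive names are placeholders; the point is only that infrastructure runs before the Customizer's mw-proxy):

helm install bootstrap-0.1.0.tgz --name bootstrap
helm install connections-env-0.1.0.tgz --name connections-env
helm install infrastructure-0.1.0.tgz --name infrastructure   # creates appregistry-mw-proxy-secret
helm install mw-proxy-0.1.0.tgz --name mw-proxy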

Standard way of keeping Dockerhub credentials in Kubernetes YAML resource

I am currently implementing a CI/CD pipeline using Docker, Kubernetes and Jenkins for my microservices deployment, and I am testing the pipeline using a public repository that I created on Docker Hub. When I tried the deployment using a Kubernetes Helm chart, I was able to add all my credentials in the values.yaml file - the default file for configuration when creating a Helm chart.
Confusion
Now I have removed my Helm chart, and I am only using plain deployment and service YAML files. So how can I add my Docker Hub credentials here?
Do I need to use an environment variable? Or do I need to create a separate YAML file for the credentials and reference it in the Deployment.yaml file?
If I am using the imagePullSecrets way, how can I create a separate YAML file for the credentials?
From the Kubernetes point of view (see Pull an Image from a Private Registry in the docs), you can create a secret and add the necessary information to your YAML (Pod/Deployment).
Steps:
1. Create a Secret by providing credentials on the command line:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-name> \
  --docker-password=<your-pword> \
  --docker-email=<your-email>
2. Create a Pod that uses your Secret (example pod):
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
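For the "separate YAML file" part of the question: the secret created above is itself a regular Kubernetes resource, so it can live in its own YAML file and be applied like any other manifest. A sketch (the data value is the base64-encoded content of a Docker config.json; running the kubectl create secret command above with --dry-run=client -o yaml will generate such a file for you):

apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config.json>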
You can pass the Docker Hub credentials as environment variables in Jenkins itself, and the imagePullSecrets are to be made as per the Kubernetes docs; since they are a one-time thing, you can add them directly to the required clusters.
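For example, with the user name and password exposed to the build as environment variables (the variable names are placeholders you would configure in Jenkins):

echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin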

Kubernetes image pull from nexus3 docker host fails

We have created a Nexus 3 private Docker registry on a CentOS machine, and the same IP details are configured in daemon.json under the docker folder.
docker pull and docker push work fine.
But deploying the same image to Kubernetes fails at the image pull stage:
$ kubectl run deployname --image=nexus3privaterepo:port/image
Beforehand we created secret entries via $ kubectl create secret with the same user ID and password that work for docker login -u userid -p passwd.
My problem is that the image pull from the Nexus 3 Docker host is failing.
Please suggest how to verify the login via a Kubernetes command and how to resolve this image pull issue.
Looking forward to your suggestions, thanks in advance.
So when pulling from private repos you need to specify an imagePullSecret, like so:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  # Specify the secret with your user's credentials
  imagePullSecrets:
  - name: regcred
You would then use the kubectl apply -f functionality. I am not actually sure you can do this in the imperative CLI way of running a deployment, but all the documentation on this can be found here.
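A sketch of creating and then verifying such a secret against the Nexus registry (the server, user, and password values are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=nexus3privaterepo:port \
  --docker-username=userid \
  --docker-password=passwd

# inspect what was stored
kubectl get secret regcred --output=yaml

The .dockerconfigjson field in that output is base64-encoded; decoding it lets you confirm that the server, user ID, and password match the ones that work for docker login.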

Error while trying to do a kubernetes deployment from a private registry

I am trying to deploy a Docker image from a private repository using Kubernetes and I am seeing the below error:
Waiting: CrashLoopBackOff
You need to pass an image pull secret to Kubernetes:
1. Get the docker login JSON.
2. Create a k8s secret with this JSON (commands for steps 1 and 2 are sketched after the docs link below).
3. Reference the secret from a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: k8s-secret-name
Docs: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
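Steps 1 and 2 as commands, roughly as the linked docs describe them (the registry address is a placeholder):

docker login <your-registry>   # writes the auth JSON to ~/.docker/config.json
kubectl create secret generic k8s-secret-name \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson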
Usually, the bad state caused by a failed image pull is called ImagePullBackOff (not CrashLoopBackOff), so I suggest kubectl get events to check the root cause.
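For example (the pod name is a placeholder):

kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe pod <pod-name>   # the Events section shows why the pull failed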
The issue got resolved.
I deleted the registry, re-created it, and tried deploying a different Docker image. I could successfully deploy and also test the deployed application.