I am trying to automate build and deployment using GitLab CI. For this I have added a few stages: build, test, quality checks, review and deployment. Currently I am facing an issue on deployment: I build the Docker image and push it to Azure Container Registry, and from there I try to deploy to Azure Kubernetes Service using Helm. I have also added an Ingress for the release. But due to some issue Kubernetes is not able to pull the image and throws the error below, while my GitLab CI pipeline still reports success.
This is my deployment function, as written in the .gitlab-ci.yml file:
You need to grant the AKS service principal the AcrPull permission. That will allow it to authenticate to the ACR silently, without you doing anything else (you don't even need to create a Docker pull secret in Kubernetes).
AKS_RESOURCE_GROUP=myAKSResourceGroup
AKS_CLUSTER_NAME=myAKSCluster
ACR_RESOURCE_GROUP=myACRResourceGroup
ACR_NAME=myACRRegistry
# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)
# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)
# Create role assignment
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID
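If you prefer to let the CLI do this wiring for you, attaching the registry at the cluster level creates the same AcrPull role assignment under the hood (a sketch using the variables defined above; requires a reasonably recent az CLI):
# Equivalent shortcut: attach the ACR to the cluster so AKS can pull from it
az aks update --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --attach-acr $ACR_NAME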
https://learn.microsoft.com/bs-latn-ba/azure/container-registry/container-registry-auth-aks
Looking for the best way to integrate ACR with AKS for a production environment. There seem to be multiple ways: during installation, after installation, using a service principal, using an image pull secret, etc.
So for our production environment I am looking for the most recommended option, where the requirements are as follows:
Is it mandatory to attach the ACR during AKS creation itself?
What is the advantage of integrating the ACR as part of the AKS installation itself? (It seems we don't need to pass an image pull secret in the pod spec in that case, while for the other options we do.)
What is another way to integrate ACR with AKS? Will the az aks update command help here? If yes, how does it differ from the previous method of integrating during AKS installation?
If I want to set up a secondary AKS cluster in another region but need to connect to the geo-replicated instance of the primary ACR, how can I get that done? In that case, is it mandatory to attach the ACR during AKS installation, or is attaching it later, post-installation, also fine?
IMHO the best way is Azure RBAC. You don't need to attach the ACR while creating the AKS cluster. You can leverage Azure RBAC and assign the role "AcrPull" to the kubelet identity of your node pool. This can be done for every ACR you have:
export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export ACR_ID=$(az acr show -g <resource group> -n <acr name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "AcrPull" --scope $ACR_ID
Terraform:
resource "azurerm_role_assignment" "example" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
}
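With the role assignment in place, the pod spec needs no imagePullSecrets entry; pods simply reference the image by its full registry name. A quick sanity check, with a hypothetical image name, is to run a pod straight from the registry:
# Should reach Running without ErrImagePull / ImagePullBackOff
kubectl run acr-pull-test --image=<acr name>.azurecr.io/myapp:latest --restart=Never
kubectl get pod acr-pull-test -w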
I'm configuring CI/CD in OpenShift (Dev > Stage > Prod) and I'm facing some issues reaching the Dev ImageStream from Stage. The whole setup looks like this:
Dev - runs a Tekton pipeline; the last task triggers a BuildConfig > the Build outputs a new image to an ImageStream > the new ImageStream tag triggers a DeploymentConfig > deployment happens
Stage - I'd like to reach the tag in the Dev ImageStream so I can build and deploy the application in Stage.
I'm using the OpenShift internal registry image-registry.openshift-image-registry.svc:port
In Stage, what I've done is add one Task to the Pipeline that executes an image-pull command:
oc import-image image-registry.openshift-image-registry.svc:port/namespace/name:version --confirm
but I get the following error:
Error from server (Forbidden): imagestreams.image.openshift.io "name" is forbidden:
User "system:serviceaccount:namespace:sa" cannot get resource "imagestreams" in API group "image.openshift.io" in the namespace "namespace"
I have the same serviceAccount sa in both Dev and Stage, which only has a github-secret.
According to some examples, like the OpenShift documentation on cluster role bindings:
$ oc adm policy add-cluster-role-to-user <role> <username>
Binds a given role to specified users for all projects in the cluster.
This applies within the boundaries of the same cluster.
And a previous Stack Overflow post:
oc policy add-role-to-user \
system:image-puller system:serviceaccount:testing2:default \
--namespace=testing1
Your project testing2 will be able to access images from project testing1 in your OpenShift.
This applies between projects (good), but within the same cluster (I need a different cluster).
Is there a way to set a role binding to be able to reach an ImageStream from a different cluster? Or a cluster role? Or is there another way to achieve this?
Any help is appreciated
You need a service account with the system:image-puller role in the namespace that holds your ImageStream; then get the token from this service account and use it as a pull secret from your other cluster.
I would recommend making a mirror ImageStream in your pulling cluster to manage the link.
CLUSTER_TARGET=cluster-b
CLUSTER_PULLING=cluster-a
C_B_NAMESPACE=Y
C_B_SERVICEACCOUNT_FOR_PULL=${CLUSTER_PULLING}-sa
C_B_REGISTRY=image-registry.cluster-b.com:5000
IMAGE_ID=image:tag
# Run these with oc logged in to Cluster B
oc create sa $C_B_SERVICEACCOUNT_FOR_PULL -n $C_B_NAMESPACE
oc policy add-role-to-user system:image-puller system:serviceaccount:$C_B_NAMESPACE:$C_B_SERVICEACCOUNT_FOR_PULL -n $C_B_NAMESPACE
SA_TOKEN=$(oc sa get-token $C_B_SERVICEACCOUNT_FOR_PULL -n $C_B_NAMESPACE)
# Run these with oc logged in to Cluster A
C_A_NAMESPACE=X
# The docker config "auth" field is base64 of "<user>:<token>"
SECRET="{\"auths\":{\"$C_B_REGISTRY\":{\"auth\":\"$(echo -n "$C_B_SERVICEACCOUNT_FOR_PULL:$SA_TOKEN" | base64 -w0)\",\"email\":\"you@example.com\"}}}"
oc create secret generic ${CLUSTER_TARGET}-pullsecret \
  --from-literal=.dockerconfigjson="$SECRET" \
  --type=kubernetes.io/dockerconfigjson -n $C_A_NAMESPACE
oc secrets link default ${CLUSTER_TARGET}-pullsecret --for=pull -n $C_A_NAMESPACE
oc tag --source=docker $C_B_REGISTRY/${C_B_NAMESPACE}/${IMAGE_ID} ${C_A_NAMESPACE}/${IMAGE_ID} --scheduled -n $C_A_NAMESPACE
# Now you have a scheduled import from Cluster B into your local ImageStream in Cluster A.
# If you want to use it from another namespace in Cluster A:
oc create namespace Z
oc policy add-role-to-user system:image-puller system:serviceaccount:Z:default -n $C_A_NAMESPACE
echo "now pods in Z can reference image-registry.openshift-image-registry.svc/${C_A_NAMESPACE}/${IMAGE_ID}"
Check out the documentation on pull secrets here.
For Tekton, I'm not sure, but basically you need:
a pull secret to pull from the external repo (see the sketch after this list)
a service account with image-puller to pull locally (hence the local mirroring ImageStream, to make your life easier)
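For the Tekton side, a minimal sketch, assuming your TaskRuns use OpenShift Pipelines' default pipeline service account in namespace X (adjust to whatever service account your pipeline actually runs as):
# Let TaskRuns running as the "pipeline" service account use the pull secret created above
oc secrets link pipeline ${CLUSTER_TARGET}-pullsecret --for=pull -n $C_A_NAMESPACE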
I am trying to access the Kubernetes dashboard (Azure AKS) by using the command below, but I get the error attached.
az aks browse --resource-group rg-name --name aks-cluster-name --listen-port 8851
Please read the AKS documentation on how to authenticate to the dashboard at the link. It also explains how to enable the add-on for newer Kubernetes versions.
Pasting here for reference:
Use a kubeconfig
For both Azure AD enabled and non-Azure AD enabled clusters, a kubeconfig can be passed in. Ensure access tokens are valid; if your tokens are expired, you can refresh them via kubectl.
Set the admin kubeconfig with az aks get-credentials -a --resource-group <RG_NAME> --name <CLUSTER_NAME>
Select Kubeconfig and click Choose kubeconfig file to open file selector
Select your kubeconfig file (defaults to $HOME/.kube/config)
Click Sign In
Use a token
For a non-Azure AD enabled cluster, run kubectl config view and copy the token associated with the user account of your cluster.
Paste it into the token option at sign-in.
Click Sign In
For Azure AD enabled clusters, retrieve your AAD token with the following command. Validate you've replaced the resource group and cluster name in the command.
kubectl config view -o jsonpath='{.users[?(@.name == "clusterUser_<RESOURCE GROUP>_<AKS_NAME>")].user.auth-provider.config.access-token}'
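For reference, on cluster versions where the dashboard add-on is still available, enabling it (as the linked documentation describes) is a single command; a sketch using the same resource group and cluster name as in the question:
az aks enable-addons --addons kube-dashboard --resource-group rg-name --name aks-cluster-name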
Try to run this:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
You will get values for several keys such as Name, Labels, ..., token. The important one is the token related to your user: copy that token and paste it in.
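If you only want the raw token value (same admin-user secret as above), jsonpath avoids copying it out of the describe output by hand:
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode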
I am trying to follow this tutorial to back up a persistent volume in Azure AKS:
https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv
I can see the volumes by running
az aks get-credentials --resource-group MYRESOURCEGROUP --name HIDDEN --subscription MYSUBSCRIPTION
kubectl get pv
(Both disk and file, managed-premium and standard storage classes)
but then I do:
az disk list --resource-group MYRESOURCEGROUP --subscription MYSUBSCRIPTION
and I get an empty list, so I can't find the full source path to perform the snapshot.
Am I missing something?
Upgrade your az CLI version.
I was getting this issue with az CLI 2.0.75, which returned an empty array for the disk list of an AKS PV.
After upgrading to az CLI 2.9.1, the same command worked.
That happens because AKS creates a separate node resource group holding the cluster's infrastructure resources; it is named something like MC_%AKS-resource-group-name%_%AKS-name%_%region% (not configurable at the time of writing). You should list the disks in that resource group to see them.
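A sketch of looking that resource group up directly and listing its disks, reusing the names from the question:
# The node resource group is exposed on the cluster object
NODE_RG=$(az aks show --resource-group MYRESOURCEGROUP --name HIDDEN --subscription MYSUBSCRIPTION --query nodeResourceGroup -o tsv)
az disk list --resource-group $NODE_RG --subscription MYSUBSCRIPTION -o table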
I deploy apps to Kubernetes running on Google Cloud from CI. CI makes use of a kubectl config which contains auth information (either stored directly in version control or templated from env vars during the build).
CI has a separate Google Cloud service account, and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets the kubectl config, but the token expires in a few hours.
What are my options for having a 'permanent' kubectl config, other than providing CI with the key file during the build and running gcloud container clusters get-credentials?
You should look into RBAC (role-based access control): authenticate with a role-bound Kubernetes identity, which avoids the expiration you currently hit with certificates/tokens, as mentioned.
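A minimal sketch of that approach, with a hypothetical namespace my-app and service account ci-deployer (on newer clusters you may need to mint the token explicitly, e.g. with kubectl create token):
# Dedicated service account for CI in the target namespace
kubectl create serviceaccount ci-deployer -n my-app
# Allow it to manage workloads in that namespace only
kubectl create rolebinding ci-deployer-edit --clusterrole=edit --serviceaccount=my-app:ci-deployer -n my-app
# On the CI side, use the service account token instead of a gcloud-issued user token
kubectl config set-credentials ci-deployer --token="<service account token>"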
For those asking the same question and upvoting, this is my current solution:
For some time I treated key-file.json as an identity token: I put it in the CI config and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config - the same approach used for GCP Container Registry login.
This works fine, but using kubectl in CI is kind of an antipattern. I switched to deploying based on container registry push events, which is relatively easy to do in Kubernetes with Keel, Flux, etc. CI then only has to push the Docker image to the registry, and its job ends there; the rest is taken care of within Kubernetes itself, so there is no need for kubectl and its config in the CI jobs.
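A sketch of what the push-only CI step reduces to under that model (hypothetical project, image name, and $GIT_SHA variable; the in-cluster operator picks up the new tag and rolls out the deployment):
# Build and push; tag with the commit SHA provided by your CI
docker build -t gcr.io/my-project/my-app:$GIT_SHA .
docker push gcr.io/my-project/my-app:$GIT_SHA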