Why can't imagePullPolicy be changed to anything other than Always in Kubernetes?

I have a Kubernetes cluster set up and I would like to use local images. I have configured my .yaml file so that it contains "imagePullPolicy: Never" in the containers section, like this:
spec:
  containers:
  - image: <name>:<version>
    name: <name>
    imagePullPolicy: Never
    resources: {}
  restartPolicy: Always
I have deployed this service to Kubernetes, but the image cannot be pulled (I get an ImagePullBackOff error when viewing pods with kubectl get pod), since the image cannot be found in any internet registry and, for some unknown reason, imagePullPolicy is set to Always. This can be seen e.g. in /var/log/messages:
"spec":{"containers":[{"image":"<name>","imagePullPolicy":"Always","name":"<name>","
So my question is: why is imagePullPolicy set to Always even though I have set it to Never in my .yaml file (which has, of course, been applied)? Is there some default value for imagePullPolicy that overrides the value in the .yaml file?
My environment is CentOS 7 and I'm using Kontena Pharos 2.2.0 (which uses e.g. Docker 1.13.1 (Apache License 2.0) and Kubernetes 1.13.2 (Apache License 2.0)).
I expected that when I set "imagePullPolicy: Never" in the .yaml file, the value would be Never (and not Always).
Thank you so much for helping!

Welcome to Stack Overflow.
This happens because your Kubernetes cluster presumably has an admission control plugin called 'AlwaysPullImages' enabled in the API server; its role is to overwrite (mutate) objects before they are stored in the Kubernetes data store, etcd.
This is the default behavior of clusters bootstrapped with Kontena Pharos since version v2.0.0-alpha.2.
You can disable this admission plugin in your main cluster.yml config file:
...
addons:
  ingress-nginx:
    enabled: true
admission_plugins:
  - name: AlwaysPullImages
    enabled: false
...
You should then expect to see pods failing with a different status reason if the image is not found in the local registry:
client-deployment-99699599d-lfqmr 0/1 ErrImageNeverPull 0 42s
Please read more about using Admission Controllers here.
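If you want to confirm what the API server actually stored after the mutation, you can inspect the live pod spec (the pod name is illustrative):
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].imagePullPolicy}'
With the plugin disabled, this should print Never for your pod.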

Related

GKE AppArmor profile is unconfined even though the node has it defined and working

I am trying to load an AppArmor profile I created, using GKE and the instructions below.
To apply the created AppArmor profile I followed these instructions:
https://cloud.google.com/container-optimized-os/docs/how-to/secure-apparmor#creating_a_custom_security_profile
which is just the apparmor parser applied to the node(s), plus some follow-up instructions to re-apply the same profile when the node restarts.
Basically it runs the following line:
/sbin/apparmor_parser --replace --write-cache /etc/apparmor.d/no_raw_net
and tests that a container with this profile is secured as expected.
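(To double-check that the parser actually loaded the profile on the node, one option is to list the loaded profiles via securityfs; the name shown is the profile name declared inside the file, not necessarily the filename:
sudo cat /sys/kernel/security/apparmor/profiles
)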
As a second step, I set the AppArmor profile name in an environment variable of the pod, as explained here:
https://cloud.google.com/migrate/anthos/docs/troubleshooting/app-armor-profile
Basically the pod is defined like this:
spec:
  containers:
  - image: gcr.io/my-project/my-container:v1.0.0
    name: my-container
    env:
    - name: HC_APPARMOR_PROFILE
      value: "apparmor-profile-name"
    securityContext:
      privileged: true
On the host itself the AppArmor profile works as expected, but I cannot get this profile applied to the container.
I also tried removing the securityContext section of the pod, which is set to privileged: true in the GKE documentation.
Last but not least, I tried the Kubernetes pod annotation, a Kubernetes feature to set a profile for a given container, as explained here:
https://kubernetes.io/docs/tutorials/security/apparmor/
With this, the pod looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor-2
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-allow-write
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
but I had no luck applying the given profile with this either.
I also tried applying a user-data config as custom metadata for the node instance's cloud-init, so it would add the profile I created to AppArmor as well and rule out profile creation as the issue. However, editing the cluster metadata is disabled after the cluster is created, and creating a new cluster node with user-data is not allowed, because user-data is reserved for the Container-Optimized OS user data defined by Google.
No matter what I do, I always end up with either an unconfined profile for the container or "cri-containerd.apparmor.d (enforce)", depending on whether the security context is set to privileged or not...
Do you have any advice on how I can apply the given profile to a pod in GKE?
If I understood the question correctly, it seems like you are mixing up the profile's filename with the profile name.
annotations:
  container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/<profile-name>
Here, <profile-name> is the name of the profile; it is not the same as the filename of the profile. E.g., in the example below the filename is no_raw_net and the profile name is no-ping.
cat > /etc/apparmor.d/no_raw_net <<EOF
#include <tunables/global>

profile no-ping flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network inet tcp,
  network inet udp,
  network inet icmp,

  deny network raw,
  deny network packet,
  file,
  mount,
}
EOF
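With that distinction in mind, a pod referencing the profile above might look roughly like this (assuming the no-ping profile has been loaded on the node; the pod and container names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: hello-no-ping
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/no-ping
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]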
As mentioned, I had mixed up the naming. Besides that, I would also like to mention one more alternative: https://github.com/kubernetes-sigs/security-profiles-operator, a set of Kubernetes CRDs that allows integration with AppArmor, seccomp, and SELinux.
Some of the implementation, like the AppArmor support, still looks like a work in progress at the time of this writing, and I hope this initiative moves forward.

Installing a custom Bitnami WordPress image with Helm: can't pull image from private Docker Hub repository

I'm trying to deploy a WordPress instance with custom plugins and theme on Minikube.
First, I've created a custom WordPress Docker Image based on Bitnami's Image. I've pushed it to Docker Hub and made the repository private.
Now, I'm trying to deploy the Image using Bitnami's WordPress Helm Chart. For this, I:
Created a secret regcred in the same namespace as the deployment, as described in Kubernetes Docs.
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1 --docker-username=USERNAME --docker-password=PWORD --docker-email=EMAIL
Changed the chart's values-production.yaml (here) to the following:
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
  imageRegistry: docker.io
  imagePullSecrets:
    - regcred
  # storageClass: myStorageClass

## Bitnami WordPress image version
## ref: https://hub.docker.com/r/bitnami/wordpress/tags/
##
image:
  registry: docker.io
  repository: MYUSERNAME/PRIVATEIMAGE
  tag: latest
  ## Specify a imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  pullSecrets:
    - regcred
  ## Set to true if you would like to see extra information on logs
  ##
  debug: true
...
I would expect the pod to be able to pull from the private repository, but it never can. Its status is stuck at Waiting: ImagePullBackOff.
What am I doing wrong? I'm following this tutorial, btw. Also, I'm running this on Windows 10 through WSL2 (Ubuntu distro).
Hello Gonçalo Figueiredo,
Reproducing this issue, it worked just fine when I used kubectl create secret docker-registry and deployed the chart in the GKE cluster on my Linux VM.
When I tried using Minikube on my local machine, I did come across an ImagePullBackOff error. The problem was that I had recently changed my Docker Hub password and the credentials on my host machine were obsolete. Doing a new docker login solved the issue.
Not sure if something similar could be happening on your end. If not, could you please check that your private repo is up and try using another method to create the Secret containing the Docker credentials?
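For example, one alternative described in the Kubernetes docs is to build the Secret from an existing Docker config file (the path assumes you have already run docker login on that machine):
kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson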
Found the solution. I only had to change the server property in the secret from https://index.docker.io/v1 to docker.io.
I'm now facing another issue, but I think this one is fixed.
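For reference, the corrected version of the command from step 1 would then look something like this (same placeholders as above):
kubectl create secret docker-registry regcred --docker-server=docker.io --docker-username=USERNAME --docker-password=PWORD --docker-email=EMAIL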

How to install the JProfiler agent in a Kubernetes container?

What do I have to put into a container to get the agent to run? Just libjprofilerti.so on its own doesn't work, I get
Could not find agent.jar. The agentpath parameter must point to
libjprofilerti.so in an unmodified JProfiler installation.
which sounds like obvious nonsense to me - surely I can't have to install over 137.5 MB of files, 99% of which will be irrelevant, in each container in which I want to profile something?
-agentpath:/path/to/libjprofilerti.so=nowait
One approach is to use an Init Container.
The idea is to have an image for JProfiler that is separate from the application's image. The JProfiler image is used for an Init Container; the Init Container copies the JProfiler installation to a volume shared between that Init Container and the other containers started in the Pod. That way, the JVM can reference the JProfiler agent from the shared volume at startup time.
It goes something like this (more details are in this blog article):
Define a new volume:
volumes:
  - name: jprofiler
    emptyDir: {}
Add an Init Container:
initContainers:
  - name: jprofiler-init
    image: <JPROFILER_IMAGE:TAG>
    command: ["/bin/sh", "-c", "cp -R /jprofiler/ /tmp/"]
    volumeMounts:
      - name: jprofiler
        mountPath: "/tmp/jprofiler"
Replace /jprofiler/ above with the correct path to the installation directory in the JProfiler image. Notice that the copy command will create a /tmp/jprofiler directory, under which the JProfiler installation will go - that directory is used as the mount path.
Define volume mount:
volumeMounts:
  - name: jprofiler
    mountPath: /jprofiler
Add to the JVM startup arguments JProfiler as an agent:
-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849
Notice that there isn't a "nowait" argument. That will cause the JVM to block at startup and wait for a JProfiler GUI to connect. The reason is that with this configuration the profiling agent will receive its profiling settings from the JProfiler GUI.
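Putting steps 3 and 4 together, the application container section might look roughly like this (the image name is illustrative, and passing the agent flag via JAVA_TOOL_OPTIONS is just one option, assuming your JVM honors that variable; you can equally append it to your existing java command line):
containers:
  - name: my-app
    image: <APP_IMAGE:TAG>
    env:
      - name: JAVA_TOOL_OPTIONS   # picked up by the JVM at startup
        value: "-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849"
    volumeMounts:
      - name: jprofiler
        mountPath: /jprofiler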
Change the application deployment to start with only one replica. Alternatively, start with zero replicas and scale to one when ready to start profiling.
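For example (the deployment name is illustrative):
kubectl -n <namespace> scale deployment/<deployment-name> --replicas=1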
To connect from the JProfiler's GUI to the remote JVM:
Find out the name of the pod (e.g. kubectl -n <namespace> get pods) and set up port forwarding to it:
kubectl -n <namespace> port-forward <pod-name> 8849:8849
Start JProfiler up locally and point it to 127.0.0.1, port 8849.
Change the local port 8849 (the number to the left of :) if it isn't available; then, point JProfiler to that different port.
Looks like you are missing the general concept here.
The reasons for using containers are nicely explained in the official documentation.
The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.
Of course you don't need to install the libraries in each container separately.
Kubernetes is using Volumes to share files between Containers.
So you can create a local type of Volume with the JProfiler libs inside.
A local volume represents a mounted local storage device such as a disk, partition or directory.
You also need to keep in mind that if you share the Volume between Pods, those Pods will not know about the JProfiler libs being attached. You will need to configure the Pod with the correct environment variables/files through the use of Secrets or ConfigMaps.
You can configure your Pod to pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: jp-pod
  name: jp-pod
spec:
  containers:
    - image: k8s.gcr.io/busybox
      name: jp
      envFrom:
        - secretRef:
            name: jp-secret
jp-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: jp-secret
type: Opaque
stringData:  # stringData accepts plain text; Kubernetes stores it base64-encoded under data
  JPAGENT_PATH: "-agentpath:/usr/local/jprofiler10/bin/linux-x64/libjprofilerti.so=nowait"
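Equivalently, the same Secret could be created from the command line (a sketch; the secret and key names match the YAML above):
kubectl create secret generic jp-secret --from-literal=JPAGENT_PATH="-agentpath:/usr/local/jprofiler10/bin/linux-x64/libjprofilerti.so=nowait"
Note that the application's start command still has to pass $JPAGENT_PATH to the java invocation for the agent to actually load.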
I hope this helps you.

How to deploy new app versions in kubernetes

In this Stack Overflow question: kubernetes Deployment. how to change container environment variables for rolling updates?
The asker mentions that he edited the deployment to change the version to v2. What's the workflow for automated deployments of a new version, assuming the container image v2 already exists? How do you deploy it without manually editing the deployment config or checking in a new version of the YAML?
If you change the underlying container image (like v1 -> another version also tagged v1), will Kubernetes deploy the new one or the old one?
If you don't want to:
check in a new YAML version
manually update the config
then you can update the deployment either through:
A REST call to the deployment in question, patching/putting your new image as a resource modification, i.e. PUT /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name} -d {... deployment with v2...}
Setting the image: kubectl set image deployment/<DEPLOYMENT_NAME> <CONTAINER_NAME>=<IMAGE_NAME>:v2
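For example, with illustrative names:
kubectl set image deployment/my-app my-app=myregistry/my-app:v2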
Assuming v1 is already running and you try to deploy v1 again with the same environment variable values etc., then k8s will not see any difference between your current and updated deployment resource.
Without a diff, the k8s scheduler assumes that the desired state is already reached and won't schedule any new pods, even when imagePullPolicy: Always is set. The reason is that imagePullPolicy only has an effect on newly created pods: if a new pod is being scheduled, then k8s will always pull the image again. Still, without any diff in your deployment, no new pod will be scheduled in the first place.
For my deployments I always set a dummy environment variable, like a deploy timestamp DEPLOY_TS, e.g.:
containers:
  - name: my-app
    image: my-app:{{ .Values.app.version }} ## value dynamically set by my deployment pipeline
    env:
      - name: DEPLOY_TS
        value: "{{ .Values.deploy_ts }}" ## value dynamically set by my deployment pipeline
The value of DEPLOY_TS is always set to the current timestamp - so it is always a different value. That way k8s will see a diff on every deploy and schedule a new pod - even if the same version is being re-deployed.
(I am currently running k8s 1.7)

Error when deploying kube-dns: No configuration has been provided

I have just installed a basic Kubernetes cluster the manual way, to better understand the components and to automate the installation later. I followed this guide: https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
The cluster is completely empty, without addons, after this. I've already deployed kubernetes-dashboard successfully; however, when trying to deploy kube-dns, it fails with the log:
2017-01-11T15:09:35.982973000Z F0111 15:09:35.978104 1 server.go:55]
Failed to create a kubernetes client:
invalid configuration: no configuration has been provided
I used the following yaml template for kube-dns without modification, only filling in the cluster IP:
https://coreos.com/kubernetes/docs/latest/deploy-addons.html
What did I do wrong?
Experimenting with the kubedns arguments, I added --kube-master-url=http://mykubemaster.mydomain:8080 to the yaml file, and suddenly it reported green.
How did this solve it? Was the container not aware of the master for some reason?
In my case, I had to use a numeric IP in "--kube-master-url=http://X.X.X.X:8080". It goes in the YAML file of the ReplicationController (RC), like this:
...
spec:
  containers:
  - name: kubedns
    ...
    args:
    # command = "/kube-dns"
    - --domain=cluster.local
    - --dns-port=10053
    - --kube-master-url=http://192.168.99.100:8080
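After updating the args, you can check whether kube-dns comes up (assuming the addon uses the usual k8s-app=kube-dns label):
kubectl -n kube-system get pods -l k8s-app=kube-dns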