Can anyone please guide me on how to pull private Docker images in Kubernetes?

kubectl create secret docker-registry private-registry-key --docker-username="devopsrecipes" --docker-password="xxxxxx" --docker-email="username@example.com" --docker-server="https://index.docker.io/v1/"
secret "private-registry-key" created
This command is used for accessing private Docker repos, as referenced here: http://blog.shippable.com/kubernetes-tutorial-how-to-pull-private-docker-image-pod
But I am still not able to pull the image.
When I try to access "https://index.docker.io/v1/", it gives a page not found error.
Please guide me.

You also need to reference the imagePullSecrets in the pod / deployment spec you create:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: private-registry-key
Read more about imagePullSecrets in the Kubernetes documentation: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
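If you don't want to repeat imagePullSecrets in every pod spec, another documented option is to attach the secret to the service account the pods run under. A minimal sketch, assuming the default service account in the default namespace and the secret created above:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "private-registry-key"}]}'
Pods created with that service account will then use the secret automatically.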

I just tried creating the same on my cluster.
kubectl create secret docker-registry private-registry-key --docker-username="xx" --docker-password="xx" --docker-email="xx" --docker-server="https://index.docker.io/v1/"
Output:
secret/private-registry-key created
My YAML file looks like:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: vaibhavjain882/ubuntubase:latest
    command: ["sleep", "30"]
  imagePullSecrets:
  - name: private-registry-key
NAME READY STATUS RESTARTS AGE
private-reg 1/1 Running 0 35s
Note: Just verify that you are passing the correct Docker image name. In my case it's "vaibhavjain882/ubuntubase:latest".
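If the pod still ends up in ImagePullBackOff, the pod events usually show whether it is a credentials problem or a wrong image name. A quick check, using the pod and secret names from above:
kubectl describe pod private-reg
kubectl get secret private-registry-key -o yaml
Look at the Events section of the describe output for the exact pull error, and confirm the secret exists with type kubernetes.io/dockerconfigjson.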


Creating "imagePullSecret" to work with the deployment.yaml file which contains an image from the private repository

I have created a deployment YAML file with an image name from my private Docker repository.
When running the command:
kubectl get pods
I see that the status of the pod created from that deployment file is ImagePullBackOff. From what I have read, this is because I am pulling the image from a private registry without an imagePullSecret.
How do I create an "imagePullSecret" as a YAML file to work with the deployment.yaml file which contains an image from the private repository? Or is it a field which should be part of the deployment file?
There is a field in the pod spec called imagePullSecrets, which you can define in your deployment YAML:
imagePullSecrets:
- name: registry-secret
Then you can define the missing "imagePullSecret" as a YAML file:
apiVersion: v1
kind: Secret
metadata:
  name: registry-secret
  namespace: xxx-namespace
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
The value of the .dockerconfigjson data field can be created as described in the Kubernetes docs.
First, create the imagePullSecret from your existing Docker config file:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
Alternatively, you can create the imagePullSecret by providing the required arguments on the command line:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then use the imagePullSecret name in the pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
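Since the question is about a deployment.yaml rather than a bare Pod, the same imagePullSecrets field goes under the pod template's spec. A minimal sketch, where the deployment name, labels and image are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <your-private-image>
      imagePullSecrets:
      - name: regcred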
Reference:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

Executing a Script using a Cronjob Kubernetes Cluster

I have a 3-node K8s v1.21 cluster in AWS and am looking for a SOLID config to run a script using a CronJob. I have seen many documents on here and Google using CronJobs with hostPath, Persistent Volumes/Claims, ConfigMaps; the list goes on.
I keep getting "Back-off restarting failed container/CrashLoopBackOff" errors.
Any help is much appreciated.
cronjob.yaml
The script I am trying to run is basic, for testing only:
#!/bin/bash
kubectl create deployment nginx --image=nginx
Still getting the same error.
kubectl describe pod/xxxx
This hostPath pod, in an AWS cluster created using eksctl, works:
apiVersion: v1
kind: Pod
metadata:
  name: redis-hostpath
spec:
  containers:
  - image: redis
    name: redis-container
    volumeMounts:
    - mountPath: /test-mnt
      name: test-vol
  volumes:
  - name: test-vol
    hostPath:
      path: /test-vol
UPDATE
I tried running your config in GCP on a fresh cluster. The only thing I changed was /home/script.sh to /home/admin/script.sh.
Did you test this on your cluster?
Warning FailedPostStartHook 5m27s kubelet Exec lifecycle hook ([/home/mchung/script.sh]) for Container "busybox" in Pod "dumb-job-1635012900-qphqr_default(305c4ed4-08d1-4585-83e0-37a2bc008487)" failed - error: rpc error: code = Unknown desc = failed to exec in container: failed to create exec "0f9f72ccc6279542f18ebe77f497e8c2a8fd52f8dfad118c723a1ba025b05771": cannot exec in a deleted state: unknown, message: ""
Normal Killing 5m27s kubelet FailedPostStartHook
Assuming you're running it in a remote multi-node cluster (since you mentioned AWS in your question), hostPath is NOT an option there for a volume mount. Your best choice would be to use a ConfigMap and mount it as a volume.
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-script
data:
  script.sh: |
    # write down your script here
And then:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: redis-job
spec:
  schedule: '*/5 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: redis-container
            image: redis
            args:
            - /bin/sh
            - -c
            - /home/user/script.sh
            volumeMounts:
            - name: redis-data
              mountPath: /home/user/script.sh
              subPath: script.sh
          volumes:
          - name: redis-data
            configMap:
              name: redis-script
              defaultMode: 0755   # make the mounted script executable
          restartPolicy: OnFailure   # Job pods must use Never or OnFailure
Hope this helps. Let me know if you face any difficulties.
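If the CronJob was created but you are not sure whether the script actually ran, the job and pod logs are the quickest check (the job/pod names below are whatever kubectl get jobs / kubectl get pods show for the latest run):
kubectl get cronjob redis-job
kubectl get jobs --watch
kubectl logs job/<job-name>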
Update:
I think you're doing something wrong. kubectl isn't something you should run from another container / pod, because it requires the kubectl binary to exist inside that container and an appropriate context to be set. I'm putting a working manifest below for you to understand the whole concept of running a script as part of a cron job:
apiVersion: v1
kind: ConfigMap
metadata:
  name: script-config
data:
  script.sh: |-
    name=StackOverflow
    echo "I love $name <3"
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dumb-job
spec:
  schedule: '*/1 * * * *' # every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: busybox
            image: busybox:stable
            lifecycle:
              postStart:
                exec:
                  command:
                  - /home/script.sh
            volumeMounts:
            - name: some-volume
              mountPath: /home/script.sh
          volumes:
          - name: some-volume
            configMap:
              name: script-config
          restartPolicy: OnFailure
What it'll do is print some text to STDOUT every minute. Please note that I have only put commands the container is capable of executing, and kubectl is certainly not one of the binaries that exists in that container out of the box. I hope that is enough to answer your question.
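If the original goal really is to run kubectl itself on a schedule, the usual pattern is to use an image that ships the kubectl binary and give the job's service account the RBAC permissions it needs. A rough sketch, not a drop-in solution: the image (bitnami/kubectl), the names, the schedule and the assumption of the default namespace are all placeholders to adjust:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-bot
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deploy-bot
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-bot
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deploy-bot
subjects:
- kind: ServiceAccount
  name: deploy-bot
  namespace: default
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kubectl-job
spec:
  schedule: '*/5 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deploy-bot
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest   # any image that contains kubectl works here
            command:
            - kubectl
            - create
            - deployment
            - nginx
            - --image=nginx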

How does K8S handle multiple remote docker registries in a POD definition using the imagePullSecrets list?

I would like to access multiple remote registries to pull images.
In the k8s documentation they say:
(If you need access to multiple registries, you can create one secret
for each registry. Kubelet will merge any imagePullSecrets into a
single virtual .docker/config.json)
and so the POD definition should be something like this:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: ...
  imagePullSecrets:
  - name: secret1
  - name: secret2
  - ....
  - name: secretN
Now I am not sure how K8S will pick the right secret for each image. Will all secrets be verified one by one each time? How will K8S handle the failed retries? And could a specific number of unauthorized retries lead to some lock state in k8s or the docker registries?
Thanks
You can use the following script to add two authentications in one secret:
#!/bin/bash
u1="user_1_here"
p1="password_1_here"
auth1=$(echo -n "$u1:$p1" | base64 -w0)
u2="user_2_here"
p2="password_2_here"
auth2=$(echo -n "$u2:$p2" | base64 -w0)
cat <<EOF > docker_config.json
{
  "auths": {
    "repo1_name_here": {
      "auth": "$auth1"
    },
    "repo2_name_here": {
      "auth": "$auth2"
    }
  }
}
EOF
base64 -w0 docker_config.json > docker_config_b64.json
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: specify_secret_name_here
  namespace: specify_namespace_here
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: $(cat docker_config_b64.json)
EOF
Kubernetes isn't going to try all the secrets until it finds the correct one. When you create the secret, you are declaring that it is for a docker registry:
$ kubectl create secret docker-registry user1-secret --docker-server=https://index.docker.io/v1/ --docker-username=user1 --docker-password=PASSWORD456 --docker-email=user1@email.com
$ kubectl create secret docker-registry user2-secret --docker-server=https://index.docker.io/v1/ --docker-username=user2 --docker-password=PASSWORD123 --docker-email=user2@email.com
$ kubectl get secrets user1-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-01-13T13:15:52Z"
  name: user1-secret
  namespace: default
  resourceVersion: "1515301"
  selfLink: /api/v1/namespaces/default/secrets/user1-secret
  uid: d2f3bb0c-3606-11ea-a202-42010a8000ad
type: kubernetes.io/dockerconfigjson
As you can see, the type kubernetes.io/dockerconfigjson tells Kubernetes to treat this secret differently.
So, when you reference the address of your container as magic.example.com/magic-image on your yaml, Kubernetes will have enough information to connect the dots and use the right secret to pull your image.
apiVersion: v1
kind: Pod
metadata:
  name: busyboxes
  namespace: default
spec:
  imagePullSecrets:
  - name: user1-secret
  - name: user2-secret
  containers:
  - name: jenkins
    image: user1/jenkins
    imagePullPolicy: Always
  - name: busybox
    image: user2/busybox
    imagePullPolicy: Always
So as this example describes, it's possible to have 2 or more docker registry secrets with the same --docker-server value. Kubernetes will manage to take care of it seamlessly.
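If you ever want to double-check which registries a given secret actually covers, you can decode it and look at the keys under "auths", which are what the kubelet matches against the registry part of the image name:
$ kubectl get secret user1-secret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode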

How to update a container only?

I am working locally with minikube, and every time I make a change to the code I delete the service (and the deployment) and create a new one.
This operation generates a new IP for each container, so I also need to update my frontend, and also to insert new data into my db container, since I lose all my data every time I delete the service.
It's way too much wasted time to work efficiently.
I would like to know if there is a way to update a container without generating new IPs, and without deleting the pod (because I don't want to delete my db container every time I update the backend code).
It's easy to update an existing Deployment with a new image, without any need to delete it.
Imagine we have a YAML file with the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
To run this deployment, run the following command:
$ kubectl create -f nginx-deployment.yaml --record
(--record - appends the current command to the annotations of the created or updated resource. This is useful for future reviews, such as investigating which commands were executed in each Deployment revision, and for making a rollback.)
To see the Deployment rollout status, run
$ kubectl rollout status deployment/nginx-deployment
To update the nginx image version, just run the command:
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Or you can edit existing Deployment with the command:
$ kubectl edit deployment/nginx-deployment
To see the status of the Deployment update process, run the command:
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
or
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 36s
Each time you update the Deployment, it updates the Pods by creating a new ReplicaSet, scaling it to 3 replicas, and scaling the old ReplicaSet down to 0. If you update the Deployment again while the previous update is still in progress, it starts creating a new ReplicaSet immediately, without waiting for the previous update to complete.
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-1180356465 3 3 3 4s
nginx-deployment-2538420311 0 0 0 56s
If you made a typo while editing the Deployment (for example, nginx:1.91) you can roll it back to the previous good version.
First, check the revisions of this deployment:
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create -f nginx-deployment.yaml --record
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.91
Because we recorded the command while creating this Deployment using --record, we can easily see the changes we made in each revision.
To see the details of each revision, run:
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: <none>
No volumes.
Now you can rollback to the previous version using command:
$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
Or you can rollback to a specific version:
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back
For more information, please read the part of the Kubernetes documentation related to Deployments.
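For the minikube workflow specifically, you can also build the image directly against minikube's Docker daemon and then roll the Deployment to the new tag; a sketch, where the image and deployment names are placeholders:
$ eval $(minikube docker-env)          # point the local docker CLI at minikube's daemon
$ docker build -t my-backend:v2 .
$ kubectl set image deployment/my-backend my-backend=my-backend:v2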
First of all, in your front-end, use DNS names instead of IP addresses to reach your backend. This will save you from rebuilding your front-end app every time you deploy your backend.
That being said, there is no need to delete your service just to deploy a new version of your backend. In fact, you just need to update your deployment, making it refer to the new docker image you have built using the latest code for your backend.
Finally, as far as I understand, you have both your application and your database inside the same Pod. This is not a good practice; you should separate them so that you don't cause downtime in your database when you deploy a new version of your code.
As a sidenote, not sure if this is the case, but if you are using minikube as a development environment you're probably doing it wrong. You should use docker alone with volume binding, but that's out of scope of your question.
Use kops and create a production-like cluster in AWS on the free tier.
In order to fix this you need to make sure you use a load balancer for your frontends. Create a service for your db container exposing the port so your frontends can reach it, and put that in your manifest for your frontends so it's static. Service discovery will take care of the IP address, and your containers will automatically connect to the ports. You can also set up persistent storage for your DBs. When you update your frontend code, use this to update your containers so nothing will change.
kubectl set image deployment/helloworld-deployment basicnodeapp=buildmystartup/basicnodeapp:2
Here is how I would do a stateful app in production on AWS, using WordPress as an example.
###############################################################################
#
# Creating a stateful app with persistent storage and front end containers
#
###############################################################################
* Here is how you create a stateful app using volumes and persistent storage for production.
* To start off we can automate the storage volume creation for our mysql server with a storage object and persistent volume claim like so:
$ cat storage.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1b
$ cat pv-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-storage
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
* Lets go ahead and create these so they are ready for our deployment of mysql
$ kubectl create -f storage.yml
storageclass "standard" created
$ kubectl create -f pv-claim.yml
persistentvolumeclaim "db-storage" created
* Lets also create our secrets file that will be needed for mysql and wordpress
$ cat wordpress-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: wordpress-secrets
type: Opaque
data:
  db-password: cGFzc3dvcmQ=
  # random sha1 strings - change all these lines
  authkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OA==
  loggedinkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OQ==
  secureauthkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MQ==
  noncekey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MA==
  authsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mg==
  secureauthsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mw==
  loggedinsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NA==
  noncesalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NQ==
$ kubectl create -f wordpress-secrets.yml
* Take note of the names we assigned. We will need these for the mysql deployment
* We created the storage in us-east-1b so lets set a node label for our node in that AZ so our deployment is pushed to that node and can attach our volume.
$ kubectl label nodes ip-172-20-48-74.ec2.internal storage=mysql
node "ip-172-20-48-74.ec2.internal" labeled
* Here is our mysql pod definition. Notice at the bottom we use a nodeSelector
* We will need to use that same one for our deployment so it can reach us-east-1b
$ cat wordpress-db.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-db
spec:
  replicas: 1
  selector:
    app: wordpress-db
  template:
    metadata:
      name: wordpress-db
      labels:
        app: wordpress-db
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        args:
          - "--ignore-db-dir=lost+found"
        ports:
          - name: mysql-port
            containerPort: 3306
        env:
          - name: MYSQL_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: wordpress-secrets
                key: db-password
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: mysql-storage
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: db-storage
      nodeSelector:
        storage: mysql
* Before we go on to the deployment lets expose a service on port 3306 so wordpress can connect.
$ cat wordpress-db-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-db
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: wordpress-db
  type: NodePort
$ kubectl create -f wordpress-db-service.yml
service "wordpress-db" created
* Now lets work on the deployment. We are going to use EFS to save all our pictures and blog posts so lets create that on us-east-1b also
* So first lets create our EFS NFS share
$ aws efs create-file-system --creation-token 1
{
    "NumberOfMountTargets": 0,
    "SizeInBytes": {
        "Value": 0
    },
    "CreationTime": 1501863105.0,
    "OwnerId": "812532545097",
    "FileSystemId": "fs-55ed701c",
    "LifeCycleState": "creating",
    "CreationToken": "1",
    "PerformanceMode": "generalPurpose"
}
$ aws efs create-mount-target --file-system-id fs-55ed701c --subnet-id subnet-7405f010 --security-groups sg-ffafb98e
{
    "OwnerId": "812532545097",
    "MountTargetId": "fsmt-a2f492eb",
    "IpAddress": "172.20.53.4",
    "LifeCycleState": "creating",
    "NetworkInterfaceId": "eni-cac952dd",
    "FileSystemId": "fs-55ed701c",
    "SubnetId": "subnet-7405f010"
}
* Before we launch the deployment lets make sure our mysql server is up and connected to the volume we created
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
db-storage Bound pvc-82c889c3-7929-11e7-8ae1-02fa50f1a61c 8Gi RWO standard 51m
* ok status bound means our container is connected to the volume.
* Now lets launch the wordpress frontend of two replicas.
$ cat wordpress-web.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4-php7.0
        # uncomment to fix perm issue, see also https://github.com/kubernetes/kubernetes/issues/2630
        # command: ['bash', '-c', 'chown', 'www-data:www-data', '/var/www/html/wp-content/upload', '&&', 'apache2', '-DFOREGROUND']
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: db-password
        - name: WORDPRESS_AUTH_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: authkey
        - name: WORDPRESS_LOGGED_IN_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: loggedinkey
        - name: WORDPRESS_SECURE_AUTH_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: secureauthkey
        - name: WORDPRESS_NONCE_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: noncekey
        - name: WORDPRESS_AUTH_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: authsalt
        - name: WORDPRESS_SECURE_AUTH_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: secureauthsalt
        - name: WORDPRESS_LOGGED_IN_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: loggedinsalt
        - name: WORDPRESS_NONCE_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: noncesalt
        - name: WORDPRESS_DB_HOST
          value: wordpress-db
        volumeMounts:
        - mountPath: /var/www/html/wp-content/uploads
          name: uploads
      volumes:
      - name: uploads
        nfs:
          server: us-east-1b.fs-55ed701c.efs.us-east-1.amazonaws.com
          path: /
* Notice we put together a string for the NFS share.
* AZ.fs-id.Region.amazonaws.com
* Now lets create our deployment.
$ kubectl create -f wordpress-web.yml
$ cat wordpress-web-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - port: 80
    targetPort: http-port
    protocol: TCP
  selector:
    app: wordpress
  type: LoadBalancer
* And now the load balancer for our two nodes
$ kubectl create -f wordpress-web-service.yml
* Now lets find our ELB and create a Route53 DNS name for it.
$ kubectl get services
$ kubectl describe service wordpress
Name: wordpress
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=wordpress
Type: LoadBalancer
IP: 100.70.74.90
LoadBalancer Ingress: acf99336a792b11e78ae102fa50f1a61-516654231.us-east-1.elb.amazonaws.com
Port: <unset> 80/TCP
NodePort: <unset> 30601/TCP
Endpoints: 100.124.209.16:80,100.94.7.215:80
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
38m 38m 1 service-controller Normal CreatingLoadBalancer Creating load balancer
38m 38m 1 service-controller Normal CreatedLoadBalancer Created load balancer
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
wordpress-deployment 2 2 2 2 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sysdig-agent-4sxv2 1/1 Running 0 3d
sysdig-agent-nb2wk 1/1 Running 0 3d
sysdig-agent-z42zj 1/1 Running 0 3d
wordpress-db-79z87 1/1 Running 0 54m
wordpress-deployment-2971992143-c8gg4 0/1 ContainerCreating 0 1m
wordpress-deployment-2971992143-h36v1 1/1 Running 0 1m
I think you actually need to solve 2 issues:
Do not restart the service. Restart only your pod. Then the service won't change its IP address.
Database and your application don't need to be 2 containers in the same pod. They should be 2 separate pods. Then you need another service to expose the database to your application.
So the final solution should be like this:
database pod - runs once, never restarted during development.
database service - created once, never restarted.
application pod - this is the only thing you will reload when the application code is changed. It needs to access the database, so you write literally "database-service:3306" or something like this in your application. "database-service" here is the name of the service you created in (2).
application service - created once, never restarted. You access the application from outside of minikube by using IP address of this service.
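For illustration, a minimal sketch of the database service from point (2), assuming the database pod carries the label app: database and listens on 3306; the application then connects to database-service:3306 as described above:
apiVersion: v1
kind: Service
metadata:
  name: database-service
spec:
  selector:
    app: database
  ports:
  - port: 3306
    targetPort: 3306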

Secret volumes do not work on multinode docker setup

I have set up a multinode Kubernetes 1.0.3 cluster using the instructions from https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode.md.
I create a secret volume using the following spec in the myns namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: myns
  labels:
    name: mysecret
data:
  myvar: "bUNqVlhCVjZqWlZuOVJDS3NIWkZHQmNWbXBRZDhsOXMK"
Create secret volume:
$ kubectl create -f mysecret.yml --namespace=myns
Check to see if secret volume exists:
$ kubectl get secrets --namespace=myns
NAME TYPE DATA
mysecret Opaque 1
Here is the Pod spec of the consumer of the secret volume:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: myns
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    name: busybox
    volumeMounts:
    - name: mysecret
      mountPath: /etc/mysecret
      readOnly: true
  volumes:
  - name: mysecret
    secret:
      secretName: mysecret
Create the Pod
kubectl create -f busybox.yml --namespace=myns
Now if I exec into the docker container and inspect the contents of the /etc/mysecret directory, I find it to be empty.
What namespace are your pod and secret in? They must be in the same namespace. Would you post a gist or pastebin of the Kubelet log? That contains information that can help us diagnose this.
Also, are you running the Kubelet on your host directly or in a container?
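For reference, a quick way to confirm the mount really is empty and to collect the kubelet log on this docker-multinode setup (the container name/id is an assumption, adjust it to whatever docker ps shows on your worker node):
$ kubectl exec busybox --namespace=myns -- ls -la /etc/mysecret
$ docker ps | grep kubelet            # on the worker node, the kubelet runs as a container in this setup
$ docker logs <kubelet-container-id> > kubelet.log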