How To Add GCP Service Account To Kubernetes Workload/Project

I am attempting to use a GCP service account Y that has access to a specific storage bucket named 'security-keychain'. I'm trying to figure out what configuration or changes are necessary so that my current project can use that service account and, through it, access the bucket.
I did look over https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ but didn't feel like it provided much insight.
Here is my current GCP Kubernetes config for this project. My project will be an nginx reverse proxy, in case you're wondering, and the bucket holds all the SSL certificates and keys I need.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: my-project
  name: my-project
spec:
  replicas: 4
  selector:
    matchLabels:
      run: my-project
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: my-project
    spec:
      containers:
      - image: gcr.io/my-company/my-project
        imagePullPolicy: Always
        name: my-project
        resources:
          limits:
            cpu: "800m"
            memory: 4Gi
          requests:
            cpu: "500m"
            memory: 2Gi
        ports:
        - containerPort: 80
          protocol: TCP
        - containerPort: 443
          protocol: TCP
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        lifecycle:
          postStart:
            exec:
              command: ["gcsfuse", "-o", "nonempty", "security-keychain", "/mnt/security-keychain"]
          preStop:
            exec:
              command: ["fusermount", "-u", "/mnt/security-keychain"]
service.yml
apiVersion: v1
kind: Service
metadata:
  name: my-project-service # service name
spec:
  type: LoadBalancer # gives your app a public IP so the outside world can get to it
  loadBalancerIP: 99.99.99.99 # declared in VPC Networks > External IP Addresses in GCP Console
  ports:
  - port: 80 # port the service listens on
    targetPort: 80 # port the app listens on
    protocol: TCP
    name: http
  - port: 443 # port the service listens on
    targetPort: 443 # port the app listens on
    protocol: TCP
    name: https
  selector:
    run: my-project

You can follow this guide to achieve your goal if your architecture requires that you use a service account. Keep in mind that you can also mount a Kubernetes pod/container directly onto a Google Cloud Storage bucket by following this other guide.
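For what it's worth, the pre-Workload-Identity approach that matches the gcsfuse setup above is to export a JSON key for service account Y, store it in a Kubernetes Secret, and let gcsfuse pick it up through Application Default Credentials. This is only a minimal sketch; the Secret name, mount path and service-account email below are placeholders, not names from your project:
# One-time setup (placeholders):
#   gcloud iam service-accounts keys create key.json \
#     --iam-account=Y@my-company.iam.gserviceaccount.com
#   kubectl create secret generic gcs-sa-key --from-file=key.json
#
# Then, in the Deployment's pod template, mount the Secret and point
# gcsfuse at the key via GOOGLE_APPLICATION_CREDENTIALS:
spec:
  template:
    spec:
      containers:
      - name: my-project
        image: gcr.io/my-company/my-project
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS  # read by gcsfuse / Google client libraries
          value: /var/secrets/google/key.json
        volumeMounts:
        - name: gcs-sa-key
          mountPath: /var/secrets/google
          readOnly: true
      volumes:
      - name: gcs-sa-key
        secret:
          secretName: gcs-sa-key
On GKE you could instead grant the node pool's service account access to the bucket, or on newer clusters use Workload Identity, which avoids exporting key files altogether.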

Related

Access pod from another pod with kubernetes url

I have two pods, each created with a deployment and a service. My problem is as follows: the pod "my-gateway" accesses the URL "http://127.0.0.1:3000/adm-contact", which should reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images built from the Dockerfile declare EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
      - name: my-adm-contact
        image: my-contact-adm
        imagePullPolicy: Never
        ports:
        - containerPort: 8879
          hostPort: 8879
          name: admcontact8879
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
  - name: 8879-my-adm-contact
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
      - name: my-gateway
        image: api-gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
          hostPort: 3000
          name: home
        #- containerPort: 8879
        #  hostPort: 8879
        #  name: adm
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
            path: /
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
  - name: 3000-my-gateway
    port: 3000
    protocol: TCP
    targetPort: 3000
  - name: 8879-my-gateway
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s-cluster environment are you running this in? I ask because a Service of type LoadBalancer is a special kind: when the Service is created, your cloud provider's controller will spot this and provision a load balancer for it. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, your services won't do anything.
I had a quick look at your SO profile and - sorry if this is presumptuous, I don't mean to be - it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it, e.g. curl http://nginx-service.default.svc. In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached by its own name from pods in its namespace:
so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Swap in the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
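To make that concrete, here is a quick check you could run from a throwaway pod, sticking with the hypothetical service foo in namespace bar (swap in your own names):
# resolve the FQDN from anywhere in the cluster
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup foo.bar.svc.cluster.local
# the short name works from a pod in the same namespace
kubectl run -it --rm web-test --image=busybox --restart=Never -n bar -- wget -qO- http://foo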
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster, e.g. one created by kubectl run bb --image=busybox -it -- sh, would be able to resolve and ping gateway-service, but pinging gateway-service from your desktop will fail because they're not both seeing the same DNS.
The api-gateway container will be able to connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could retry kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000, and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
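If you first want to confirm the in-cluster connectivity described above, a throwaway busybox pod is enough; the /adm-contact path here is just taken from your readinessProbe, so adjust it to whatever your app actually serves:
# temporary shell inside the cluster
kubectl run bb --rm -it --image=busybox --restart=Never -- sh
# then, from that shell:
wget -qO- http://gateway-service:3000/
wget -qO- http://my-adm-contact-service:8879/adm-contact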
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
  - image: my-contact-adm
    imagePullPolicy: Never
    name: my-adm-contact
    ports:
    - containerPort: 8879
      protocol: TCP
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
  - port: 8879
    protocol: TCP
    targetPort: 8879
    name: adm8879
  - port: 3000
    protocol: TCP
    targetPort: 3000
    name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
  - image: api-gateway
    imagePullPolicy: Never
    name: my-gateway
    ports:
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP

Openshift - accessing non http port

I have a fairly simple (Spring Boot) app that listens on the following ports:
8080 - HTTP (Swagger page)
1141 - non-HTTP traffic. This is the FIX (https://en.wikipedia.org/wiki/Financial_Information_eXchange) port, i.e. a direct socket-to-socket TCP/IP port. The FIX engine used is QuickfixJ.
I'm trying to deploy this app on an OpenShift cluster. The configuration looks like this:
Here are the YAMLs I have:
Deployment config:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: pricing-sim-depl
  name: pricing-sim-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    app: pricing-sim-depl
  strategy:
    resources:
      limits:
        cpu: 200m
        memory: 1024Mi
      requests:
        cpu: 100m
        memory: 512Mi
    type: Recreate
  template:
    metadata:
      labels:
        app: pricing-sim-depl
    spec:
      containers:
      - image: >-
          my-docker-registry/alex/pricing-sim:latest
        name: pricing-sim-pod
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 1141
          protocol: TCP
        tty: true
        resources:
          limits:
            cpu: 200m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 512Mi
Then I created a ClusterIP service for accessing the HTTP Swagger page:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pricing-sim-sv
  name: pricing-sim-service
  namespace: my-namespace
spec:
  ports:
  - name: swagger-port
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: pricing-sim-depl
  type: ClusterIP
and also the Route for accessing it:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: pricing-sim-tn-swagger
  name: pricing-sim-tunnel-swagger
  namespace: my-namespace
spec:
  host: pricing-sim-swagger-my-namespace.apps.cpaas.service.test
  port:
    targetPort: swagger-port
  to:
    kind: Service
    name: pricing-sim-service
    weight: 100
  wildcardPolicy: None
The last component is a NodePort service to access the FIX port:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pricing-sim-esp-service
  name: pricing-sim-esp-service
  namespace: my-namespace
spec:
  type: NodePort
  ports:
  - port: 1141
    protocol: TCP
    targetPort: 1141
    nodePort: 30005
  selector:
    app: pricing-sim-depl
So far, the ClusterIP service & Route work fine. I can access the Swagger page at
http://fxc-fix-engine-swagger-my-namespace.apps.cpaas.service.test
However, I'm not sure how I can access the FIX port (defined by the NodePort service above). First, I can't use a Route, as it is not an HTTP endpoint (and that's why I defined it as a NodePort).
Looking at the OpenShift page, I can see the following for 'pricing-sim-esp-service':
Selectors:
app=pricing-sim-depl
Type: NodePort
IP: 172.30.11.238
Hostname: pricing-sim-esp-service.my-namespace.svc.cluster.local
Session affinity: None
Traffic (one row)
Route/Node Port: 30005
Service Port: 1141/TCP
Target Port: 1141
Hostname: none
TLS Termination: none
BTW, I'm following the suggestion in this Stack Overflow post: OpenShift :: How do we enable traffic into pod on a custom port (non-web / non-http)
I've also tried using the LoadBalancer service type, which actually shows an external IP on the service page above. But that 'external IP' doesn't seem to be accessible from my local PC either.
The version of OpenShift we are running is:
OpenShift Master: v3.11.374
Kubernetes Master: v1.11.0+d4cacc0
OpenShift Web Console: 3.11.374-1-3365aaf
Thank you in advance!

Kubernetes: The service manifest doesn't provide an endpoint to access the application

This YAML tries to deploy a simple ArangoDB architecture in k8s. I know there are operators for ArangoDB, but this is a simple PoC to understand the k8s pieces and iterate on this DB with other apps.
The problem is that the YAML file applies correctly, but I don't get any IP:PORT to connect to; however, when I run the Docker image locally it works.
# create: kubectl apply -f ./arango.yaml
# delete: kubectl delete -f ./arango.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nms
  name: arangodb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arangodb-pod
  template:
    metadata:
      labels:
        app: arangodb-pod
    spec:
      containers:
      - name: arangodb
        image: arangodb/arangodb:3.5.3
        env:
        - name: ARANGO_ROOT_PASSWORD
          value: "pass"
        ports:
        - name: http
          containerPort: 8529
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  namespace: nms
  name: arangodb-svc
spec:
  type: LoadBalancer
  selector:
    app: arangodb-pod
  ports:
  - targetPort: 8529
    protocol: TCP
    port: 8529
    targetPort: http
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: nms
  name: arango-storage
  labels:
    app: arangodb-pod
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
Description seems clear:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
arangodb-svc LoadBalancer 10.0.150.245 51.130.11.13 8529/TCP 14m
I am executing kubectl apply -f arango.yaml against AKS but I cannot access any IP:8529. Any recommendations?
I would like to simulate these commands:
docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD=pass -d --name arangodb-instance arangodb/arangodb:3.5.3
docker start arangodb-instance
You must allow NodePort 31098 at the NSG level in your VNet configuration and attach that NSG rule to the AKS cluster.
Also, please update the service manifest with the changes discussed in the comments.
- targetPort: 8529
  protocol: TCP
  port: 8529
  targetPort: http   # <-- this duplicate field is wrong; the manifest won't be parsed as intended
The above manifest is wrong. For NodePort (--service-node-port-range=30000-32767), the manifest should look something like this:
spec:
  type: NodePort
  selector:
    app: arangodb-pod
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
  - name: http
    port: 8529
    targetPort: 8529
    # Optional field
    nodePort: 31044
You can connect at public-NODE-IP:NodePort from outside AKS.
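As a rough illustration, and assuming your nodes actually have public IPs and the NSG rule mentioned below is in place, the check from your machine would look something like this (31044 is the example nodePort above; the node IP is a placeholder):
# the EXTERNAL-IP column shows each node's public IP, if any
kubectl get nodes -o wide
curl http://<node-external-ip>:31044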
For service type loadbalancer, your manifest should look like:
spec:
  type: LoadBalancer
  selector:
    app: arangodb-pod
  ports:
  - name: http
    protocol: TCP
    port: 8529
    targetPort: 8529
For LoadBalancer you can connect with LoadBalancer-External-IP:external-port
However, in both of the above cases an NSG whitelist rule must be in place. You should whitelist your local machine's IP, or the IP of whichever machine you are accessing it from.
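If it helps, such a whitelist rule can be created with the Azure CLI along these lines; the node resource group, NSG name and priority below are placeholders you would need to look up for your own cluster:
az network nsg rule create \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --nsg-name aks-agentpool-12345678-nsg \
  --name allow-arangodb-8529 \
  --priority 1001 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes <your-public-ip>/32 \
  --destination-port-ranges 8529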
You have to use an ingress controller, or you could also go with a LoadBalancer-type service and assign a static IP of your choice. Both will work.
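For the LoadBalancer-with-static-IP option, a sketch on AKS might look roughly like this; the IP must be a public IP you have already reserved in Azure, and the annotation is only needed if that IP lives outside the AKS node resource group (both values are placeholders):
apiVersion: v1
kind: Service
metadata:
  namespace: nms
  name: arangodb-svc
  annotations:
    # only needed if the static IP is in a different resource group than the nodes
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-ip-resource-group
spec:
  type: LoadBalancer
  loadBalancerIP: 20.10.10.10   # placeholder: your reserved static public IP
  selector:
    app: arangodb-pod
  ports:
  - name: http
    protocol: TCP
    port: 8529
    targetPort: 8529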

defining 2 ports in deployment.yaml in Kubernetes

I have a Docker image which I am running with:
docker run --name test -h test -p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:install
I am trying to put this into a Kubernetes deployment file, and I have this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: websphere
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: websphere
    spec:
      containers:
      - name: websphere
        image: ibmcom/websphere-traditional:install
        ports:
        - containerPort: 9443
        resources:
          requests:
            memory: 500Mi
            cpu: 0.5
          limits:
            memory: 500Mi
            cpu: 0.5
        imagePullPolicy: Always
my service.yaml
apiVersion: v1
kind: Service
metadata:
  name: websphere
  labels:
    app: websphere
spec:
  type: NodePort # Exposes the service as a node port
  ports:
  - port: 9443
    protocol: TCP
    targetPort: 9443
  selector:
    app: websphere
May I have guidance on how to map 2 ports in my deployment file?
You can add as many ports as you need.
Here is your deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: websphere
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: websphere
    spec:
      containers:
      - name: websphere
        image: ibmcom/websphere-traditional:install
        ports:
        - containerPort: 9043
        - containerPort: 9443
        resources:
          requests:
            memory: 500Mi
            cpu: 0.5
          limits:
            memory: 500Mi
            cpu: 0.5
        imagePullPolicy: IfNotPresent
Here is your service.yml:
apiVersion: v1
kind: Service
metadata:
  name: websphere
  labels:
    app: websphere
spec:
  type: NodePort # Exposes the service as a node port
  ports:
  - port: 9043
    name: hello
    protocol: TCP
    targetPort: 9043
    nodePort: 30043
  - port: 9443
    name: privet
    protocol: TCP
    targetPort: 9443
    nodePort: 30443
  selector:
    app: websphere
Check your Kubernetes API server configuration for the nodePort range (usually 30000-32767, but it's configurable).
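If you want to check that range yourself, on a kubeadm-style control plane the flag is visible in the API server's static pod manifest (the path below assumes kubeadm; other installers differ):
# on a control-plane node
grep service-node-port-range /etc/kubernetes/manifests/kube-apiserver.yaml
# or inspect the running process
ps aux | grep kube-apiserver | grep -o 'service-node-port-range=[^ ]*'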
EDIT
If I remove the resources section from deployment.yml, it starts correctly (after about 5 minutes).
Here a snippet of the logs:
[9/10/18 8:08:06:004 UTC] 00000051 webcontainer I com.ibm.ws.webcontainer.VirtualHostImpl addWebApplication SRVE0250I: Web Module Default Web Application has been bound to default_host[:9080,:80,:9443,:5060,:5061,:443].
Problems arise when connecting to it (I use ingress with Traefik), because of certificates (I suppose):
[9/10/18 10:15:08:413 UTC] 000000a4 SSLHandshakeE E SSLC0008E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired. Exception is javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
To solve that (I didn't go further) this may help: SSLHandshakeE E SSLC0008E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired
Trying to connect with port-forward, and using the browser to connect, I land on this page:
In Kubernetes you can define your ports using the port field, which sits under the ports configuration of your deployment or service. You can define as many ports as you wish. The following example shows how to define two ports.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  - name: https
    protocol: TCP
    port: 443
    targetPort: 9377

Kubernetes rollout give 503 error when switching web pods

I'm running this command:
kubectl set image deployment/www-deployment VERSION_www=newImage
Works fine. But there's a 10 second window where the website is 503, and I'm a perfectionist.
How can I configure kubernetes to wait for the image to be available before switching the ingress?
I'm using the nginx ingress controller from here:
gcr.io/google_containers/nginx-ingress-controller:0.8.3
And this yaml for the web server:
# Service and Deployment
apiVersion: v1
kind: Service
metadata:
  name: www-service
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  selector:
    app: www
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: www-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - image: myapp/www
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: http-port
        name: www
        ports:
        - containerPort: 80
          name: http-port
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/env-volume
          name: config
          readOnly: true
      imagePullSecrets:
      - name: cloud.docker.com-pull
      volumes:
      - name: config
        secret:
          defaultMode: 420
          items:
          - key: www.sh
            mode: 256
            path: env.sh
          secretName: env-secret
The Docker image is based on a node.js server image.
/healthz is a file in the webserver which returns ok. I thought that the liveness probe would make sure the server was up and ready before switching to the new version.
Thanks in advance!
within the Pod lifecycle it's defined that:
The default state of Liveness before the initial delay is Success.
To make sure you don't run into issues, it's better to also configure a readinessProbe for your Pods and to consider configuring .spec.minReadySeconds for your Deployment.
You'll find the details in the Deployment documentation.
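To make that concrete, here is a rough sketch of what those two additions could look like on the www-deployment above; the probe reuses your existing /healthz endpoint, and the timing values are placeholders to tune:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: www-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # a new pod must stay Ready this long before it counts as available
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - image: myapp/www
        name: www
        readinessProbe:        # gates Service endpoints, unlike the livenessProbe
          httpGet:
            path: /healthz
            port: http-port
          initialDelaySeconds: 5
          periodSeconds: 5
        ports:
        - containerPort: 80
          name: http-port
          protocol: TCP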