How do I publish .NET Core to Digital Ocean Kubernetes - kubernetes

I am trying to publish a .NET Core Web App and a .NET Core API.
I have been googling and can't find a way to deploy one, let alone two, .NET Core apps to a DigitalOcean Kubernetes cluster. I have 2 nodes, have created a valid manifest, and have built a Docker image locally, and it seems to pass validation. But I can't actually deploy it. I'm new to Kubernetes, and anything I find seems to be related to Google's Kubernetes or Azure Kubernetes.
I don't, unfortunately, have more information than this.

I have one running. The odd thing is that DigitalOcean is actually smart not to have its own docs, since it doesn't have to: you can recycle Google's and Azure's Kubernetes documentation and it will work on your DO cluster. The key difference is only in the naming, I suppose; there could be more differences, but so far I haven't hit a single problem while applying instructions from GCP's docs.
https://nozomi.one is running on DO's k8 cluster.
Here's an awesome-dotnetcore-digitalocean-k8 for you.
Errors you may/will face:
Kubernetes - Error message ImagePullBackOff when deploy a pod
Utilising .NET Core appsettings in docker k8
Push the secret file up like this (recommended only for staging or below, unless you have a super secret way to deploy this):
kubectl create secret generic secret-appsettings --from-file=./appsettings.secrets.json
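To sanity-check that the secret actually landed in the cluster (my addition, plain kubectl, not part of the original steps):
kubectl get secret secret-appsettings
kubectl describe secret secret-appsettings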
And then create a deployment configuration similar to this. Notice that we've added the appsettings at the last few lines:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: xxxxx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xxxxx
    spec:
      containers:
      - name: xxxxx
        image: xxxxx/xxxxxx:latest
        ports:
        - containerPort: 80
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "Production"
        volumeMounts:
        - name: secrets
          mountPath: /app/secrets
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: secret-appsettings
Deploying this manifest is as simple as:
kubectl create -f deployment.yaml
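Optionally, verify the rollout afterwards (my addition, standard kubectl; substitute your real deployment name for xxxxx):
kubectl rollout status deployment/xxxxx
kubectl get pods -l app=xxxxx    # should be Running, not ImagePullBackOff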
And if you want to test locally in docker first:
docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/test-app:v1
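If you are building your own image instead of the gcr.io sample, a rough sketch of building and pushing it to a registry the cluster can pull from (placeholder names matching the deployment above) is:
docker build -t xxxxx/xxxxxx:latest .
docker push xxxxx/xxxxxx:latest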
All in all, everything above will help you deploy your pods.
You need to understand that deploying a new project/app works in this systematic way:
Create a deployment, which pulls the image for you and creates pods that will be scheduled onto the nodes.
Create a service, which points the proper ports (and more, though I've never tried to do more) at your app(s).
This is what a service looks like:
apiVersion: v1
kind: Service
metadata:
  name: nozweb
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  selector:
    app: nozweb
Always ensure that spec:selector:app in the service specifically matches:
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xxxxx
in your deployment configuration. That's how the two are linked together.
Create an ingress (optional) that acts as a reverse proxy in front of your .NET Core app/project. This is optional because we've got Kestrel running!
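For reference, a minimal ingress sketch in the same extensions/v1beta1 style as the manifests above (the name and host here are my placeholders; point serviceName at your own service, e.g. nozweb):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nozweb-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nozweb
          servicePort: 80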

Related

Cannot deploy simple “Hello World” application on Kubernetes Cluster

I have been following this tutorial on creating a hello-world app
https://medium.com/@bhargavshah2011/hello-world-on-kubernetes-cluster-6bec6f4b1bfd
First I create a cluster on gcloud (called hello-world2)
then connect to it locally:
Next I do a git clone of the project listed in the article.
git clone https://github.com/skynet86/hello-world-k8s.git
cd hello-world-k8s/
Inside the directory I find this hello-world.yaml.
It basically lists a deployment and a service (I have renamed everything from hello-world to hello-world2):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world2-deployment
  labels:
    app: hello-world2
spec:
  selector:
    matchLabels:
      app: hello-world2
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-world2
    spec:
      containers:
      - name: hello-world2
        image: bhargavshah86/kube-test:v0.1
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: 256Mi
            cpu: "250m"
          requests:
            memory: 128Mi
            cpu: "80m"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world2
spec:
  selector:
    app: hello-world2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30081
  type: NodePort
I apply this file by running :
kubectl create -f hello-world.yaml
Next I run kubectl get all and all my services are nicely listed.
Now, the article claims, I can see my new application by going to this URL: http://localhost:30081.
But I get nothing when I navigate to this link.
What am I doing wrong here?
Also, another question (sorry). How do I connect this service/deployment to my cluster? Is there some kind of "apply service to cluster" command which I need to run? Is my service implicitly already connected?
You need to port-forward the service to your localhost. The most handy way to do this is using https://github.com/txn2/kubefwd. This will batch-forward all services in a namespace and will even make the DNS names work locally. Very useful when you are debugging one service locally from the IDE while everything else stays in the cloud.
Cluster DNS. Every Service object gets a DNS record inside the cluster's DNS server automatically: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services. From inside the cluster (i.e. from other pods), you call your service using the service_name.service_namespace.svc.cluster.local:service_port notation.
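For example (assuming the hello-world2 Service above, living in the default namespace), another pod in the cluster could reach it with:
curl http://hello-world2.default.svc.cluster.local:80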
You can do port-forwarding using
kubectl port-forward <pod_name> 8080:80
and then go to http://localhost:8080 in your browser.
ABOUT YOUR PROBLEM
Well, in the link you shared, the application is deployed on localhost, not on Google Cloud. So if you want to deploy on Google Cloud and access the application in the web browser, then you have to create an ingress.
But you can still check whether your application responds: just SSH into one of the VMs in your Kubernetes cluster and run curl http://localhost:30081. These NodePorts are exposed internally, not to the outside world; that's the reason you cannot reach them through a web browser. For that you need some kind of proxy or ingress. I hope this clears things up.

How to access app once deployed via Kubernetes?

I have a very simple Python app that works fine when I execute uvicorn main:app --reload. When I go to http://127.0.0.1:8000 on my machine, I'm able to interact with the API. (My app has no frontend, it is just an API built with FastAPI). However, I am trying to deploy this via Kubernetes, but am not sure how I can access/interact with my API.
Here is my deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
When I enter kubectl describe deployments my-deployment in the terminal, I get back a print out of the deployment, the namespace it is in, the pod template, a list of events, etc. So, I am pretty sure it is properly deployed.
How can I access the application? What would the url be? I have tried a variety of localhost + port combinations to no avail. I am new to kubernetes so I'm trying to understand how this works.
Update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: site
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
Again, when I use the k8s CLI, I'm able to see my deployment, yet when I hit localhost:30001, I get an Unable to connect message.
You have given containerPort: 80, but if your app listens on port 8080, change it to 8080.
There are different ways to access an application deployed on Kubernetes:
Port-forward using kubectl port-forward deployment/my-deployment 8080:8080
Create a NodePort service and use http://<NODEIP>:<NODEPORT> (see the sketch after this list)
Create a LoadBalancer service. This works only in supported cloud environments such as AWS, GKE etc.
Use an ingress controller such as nginx to expose the application.
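A short NodePort sketch tied to the manifests in this question (my example, not from the answer; 30001 is the nodePort from the app-entrypoint Service above, and <node-IP> is whatever kubectl reports for your nodes):
kubectl get nodes -o wide        # note the node's INTERNAL-IP / EXTERNAL-IP
curl http://<node-IP>:30001      # nodePort defined in the app-entrypoint Service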
By default, Kubernetes applications are exposed only within the cluster. If you want to access them from outside of the cluster, you can select any of the below options:
Expose the Deployment as a NodePort service (kubectl expose deployment my-deployment --name=my-deployment-service --type=NodePort), describe the service and get the node port assigned to it (kubectl describe svc my-deployment-service). Then try http://<node-IP>:<node-port>/
For a production-grade cluster the best practice is to use the LoadBalancer type (kubectl expose deployment my-deployment --name=my-deployment-service --type=LoadBalancer --target-port=8080); as part of this service you get an external IP which can be used to access your service: http://<EXTERNAL-IP>:8080/
You can also see the details about the endpoints using kubectl get ep
Thanks,

Cannot access a LoadBalancer service at Kubernetes

I managed to deploy a Python app to the Kubernetes cluster. The Python app image is stored in AWS ECR (Elastic Container Registry).
My deployment is:
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS             IMAGES                                                               SELECTOR
charting-rest-server   1/1     1            1           33m   charting-rest-server   *****.dkr.ecr.eu-west-2.amazonaws.com/charting-rest-server:latest   app=charting-rest-server
And my service is:
NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP                           PORT(S)          AGE    SELECTOR
charting-rest-server-service   LoadBalancer   10.100.4.207   *******.eu-west-2.elb.amazonaws.com   8765:32735/TCP   124m   app=charting-rest-server
According to this AWS guide, when I do curl *****.us-west-2.elb.amazonaws.com:80 I should be able to reach the load balancer externally, and it will route me to my pod's IP.
But all I get is
(6) Could not resolve host: *******.eu-west-2.elb.amazonaws.com
And come to think of it, if I want to have access to my pod and send some requests, I should have an external IP like 111.111.111.111 (obviously an example).
EDIT
The deployment's YAML is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: charting-rest-server
spec:
  selector:
    matchLabels:
      app: charting-rest-server
  replicas: 1
  template:
    metadata:
      labels:
        app: charting-rest-server
    spec:
      containers:
      - name: charting-rest-server
        image: *****.eu-west-2.amazonaws.com/charting-rest-server:latest
        ports:
        - containerPort: 5000
The service's YAML:
apiVersion: v1
kind: Service
metadata:
  name: charting-rest-server-service
spec:
  type: LoadBalancer
  selector:
    app: charting-rest-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
I already tried the suggestions from the comments, using an Ingress resource, but I only ended up spending a huge amount of time trying to understand how they work and whether I was doing something wrong.
I will put the YAML file I used here, but it made no difference, since my ADDRESS field was empty: there was no IP to use.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: charting-rest-server-ingress
spec:
  rules:
  - host: charting-rest-server-service
    http:
      paths:
      - path: /
        backend:
          serviceName: charting-rest-server-service
          servicePort: 80
I have been stuck on this problem for a long time, so I would appreciate some help.
You already created a Service with type LoadBalancer, but it looks like you have incorrect ports configured.
Your Deployment is created with containerPort: 5000 and your Service is pointing to targetPort: 9376. Those need to match for the Deployment to be exposed.
If you are having a hard time writing the YAML for the Service, you can expose the Deployment using the following kubectl command:
kubectl expose --namespace=tick deployment charting-rest-server --type=LoadBalancer --port=8765 --target-port=5000 --name=charting-rest-server-service
Once you fix those ports you will be able to access the service from outside using its hostname:
status:
  loadBalancer:
    ingress:
    - hostname: aba02b223436111ea85ea06a051f04d8-1294697222.eu-west-2.elb.amazonaws.com
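If it helps, you can also pull that hostname straight out of the Service with standard kubectl (just a convenience, not part of the original answer):
kubectl get svc charting-rest-server-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'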
I also recommend this guide Tutorial: Expose Services on your AWS Quick Start Kubernetes cluster.
If you need more control over the HTTP rules, please consider using an ingress; you can read more in ALB Ingress Controller on Amazon EKS and also Using a Network Load Balancer with the NGINX Ingress Controller on Amazon EKS.

Setting environment variables based on services in Kubernetes

I am running a simple app based on an api and a web interface in Kubernetes. However, I can't seem to get the api to talk to the web interface. In my local environment, I just define a variable API_URL in the web interface with e.g. localhost:5001 and the web interface correctly connects to the api. As the api and web are running in different pods, I need to make them talk to each other via services in Kubernetes. So far, this is what I am doing, but without any luck.
I set up a deployment for the API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
      - name: api
        image: gcr.io/myproject-22edwx23/api:latest
        ports:
        - containerPort: 5001
I attach a service to it:
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: api
  ports:
  - port: 5001
    targetPort: 5001
and then create a web deployment that should connect to this api.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: web
        image: gcr.io/myproject-22edwx23/web:latest
        ports:
        - containerPort: 5000
        env:
        - name: API_URL
          value: http://api-cluster-ip-service:5001
Afterwards, I add a service for the web interface plus an ingress etc., but that seems irrelevant to the issue. I am wondering whether the API_URL setting correctly picks up the host of the api via http://api-cluster-ip-service:5001?
Or can I not rely on Kubernetes resolving the appropriate DNS for the api, and should the web app call the api via the public internet?
If you want to check the value of the API_URL variable, simply run
kubectl exec -it web-deployment-pod env | grep API_URL
The kube-dns service listens for service and endpoint events from the Kubernetes API and updates its DNS records as needed. These events are triggered when you create, update or delete Kubernetes services and their associated pods.
kubelet sets each new pod's search option in /etc/resolv.conf
Still, if you want to make HTTP calls from one pod to another via a cluster service, it is recommended to refer to the service by its cluster DNS name, as follows:
api-cluster-ip-service.default.svc.cluster.local
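As a minimal sketch (assuming the default namespace and the service/port names from the manifests above), the web deployment's env section could use that fully qualified name:
env:
- name: API_URL
  value: http://api-cluster-ip-service.default.svc.cluster.local:5001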
You should already have the service's IP assigned to environment variables within your web pod, so there's no need to re-invent it:
sukhoversha@sukhoversha:~/GCP$ kubectl exec -it web-deployment-675f8fcf69-xmqt8 env | grep -i service
API_CLUSTER_IP_SERVICE_PORT=tcp://10.31.253.149:5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP=tcp://10.31.253.149:5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP_PORT=5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP_ADDR=10.31.253.149
To read more, see DNS for Services. A service also defines environment variables naming the host and port.
If you want to use environment variables you can do the following:
Example in Python:
import os
API_URL = os.environ['API_CLUSTER_IP_SERVICE_SERVICE_HOST'] + ":" + os.environ['API_CLUSTER_IP_SERVICE_SERVICE_PORT']
Notice that the environment variable is based on your service name. If you want to check all environment variables available in a Pod:
kubectl get pods #get {pod name}
kubectl exec -it {pod_name} printenv
P.S. Be careful: a Pod gets its environment variables at creation time, so it will not see variables for services created after it.

Cloud Composer unable to connect to Cloud SQL Proxy service

We launched a Cloud Composer cluster and want to use it to move data from Cloud SQL (Postgres) to BQ. I followed the notes about doing this mentioned at these two resources:
Google Cloud Composer and Google Cloud SQL
https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
We launch a pod running the cloud_sql_proxy and a service to expose the pod. The problem is that Cloud Composer cannot see the service; attempting to test with an ad-hoc query gives the error:
could not translate host name "sqlproxy-service" to address: Name or service not known
Trying the service IP address instead results in the page timing out.
The -instances flag passed to cloud_sql_proxy works when used in a local environment or Cloud Shell. The log files seem to indicate no connection is ever attempted:
me@cloudshell:~ (my-proj)$ kubectl logs -l app=sqlproxy-service
2018/11/15 13:32:59 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2018/11/15 13:32:59 using credential file for authentication; email=my-service-account@service.iam.gserviceaccount.com
2018/11/15 13:32:59 Listening on 0.0.0.0:5432 for my-proj:my-ds:my-db
2018/11/15 13:32:59 Ready for new connections
I see a comment here https://stackoverflow.com/a/53307344/1181412 that possibly this isn't even supported?
YAML
apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-service
  namespace: default
  labels:
    app: sqlproxy
spec:
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: sqlproxy
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqlproxy
  labels:
    app: sqlproxy
spec:
  selector:
    matchLabels:
      app: sqlproxy
  template:
    metadata:
      labels:
        app: sqlproxy
    spec:
      containers:
      - name: cloudsql-proxy
        ports:
        - containerPort: 5432
          protocol: TCP
        image: gcr.io/cloudsql-docker/gce-proxy:latest
        imagePullPolicy: Always
        command: ["/cloud_sql_proxy",
                  "-instances=my-proj:my-region:my-db=tcp:0.0.0.0:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
The information you found in the answer you linked is correct - ad-hoc queries from the Airflow web server to cluster-internal services within the Composer environment are not supported. This is because the web server runs on App Engine flex using its own separate network (not connected to the GKE cluster), which you can see in the Composer architecture diagram.
Since that is the case, your SQL proxy must be exposed on a public IP address for the Composer Airflow web server to connect to it. For any services/endpoints listening on RFC1918 addresses within the GKE cluster (i.e. not exposed on a public IP), you will need additional network configuration to accept external connections.
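One hedged sketch of that (my addition, not from the original answer, and not something to do without locking it down): switch sqlproxy-service to a LoadBalancer so it gets a public IP, and restrict the allowed source ranges to whatever egress addresses the Composer web server uses (the CIDR below is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-service
  namespace: default
  labels:
    app: sqlproxy
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24   # placeholder; replace with the Composer web server's egress range
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: sqlproxy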
If this is a major blocker for you, consider running a self-managed Airflow web server. Since this web server would run in the same cluster as the SQL proxy you set up, there would no longer be any issues with name resolution.