I have a few microservices running in a Kubernetes cluster. Each service has its own APIs and a corresponding swagger.json.
I'm considering deploying a swagger-ui pod inside Kubernetes to display these swagger.json files and execute the APIs.
I tried it and realized that:
swagger-ui can't find the services' swagger.json files. Even though the swagger-ui pod can resolve the services' DNS names, my browser can't. It seems the code runs in the browser instead of in the pod.
For the same reason (the browser runs the code), swagger-ui can't be used to execute the APIs, since the services are not reachable from outside Kubernetes.
So my questions are:
Is there a way to let swagger-ui run code inside the pod, so that it can reach the services and execute their APIs?
Is there ANY way to execute the Kubernetes services' APIs via a web UI, if we don't use swagger-ui?
Thanks a lot!
is there a way to let swagger-ui run code inside the pod? so that it
can reach the services and execute their apis?
One possible way is to expose or share the JSON files with the swagger-ui pod, so that swagger-ui can serve those files itself. However, it's not a good idea to set up a ReadWriteMany volume by running NFS inside Kubernetes just to share pods' file systems with each other.
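As a sketch of that file-sharing approach without any shared NFS volume, a spec could be placed in a ConfigMap and mounted into the swagger-ui pod (all names below are assumptions; the mount path matches the default nginx web root used by the swaggerapi/swagger-ui image):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-a-spec
data:
  example-swagger.json: |
    { "openapi": "3.0.0", "info": { "title": "service-a", "version": "1.0" }, "paths": {} }
---
# In the swagger-ui pod spec, mount the ConfigMap where the UI can serve it:
#   volumeMounts:
#   - name: specs
#     mountPath: /usr/share/nginx/html/specs
#   volumes:
#   - name: specs
#     configMap:
#       name: service-a-spec
```

The spec would then be reachable by the UI at the relative URL /specs/example-swagger.json.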
is there ANY way to execute the kubernetes services' apis via webui,
if we don't use swagger-ui?
You can also try ReDoc: https://github.com/Redocly/redoc
which is similar to swagger-ui. The GitHub repo below also has examples for both.
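For illustration, a minimal ReDoc deployment might look like the following (a sketch; the image, SPEC_URL value, and port are assumptions to adapt to your setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redoc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redoc
  template:
    metadata:
      labels:
        app: redoc
    spec:
      containers:
      - name: redoc
        image: redocly/redoc  # serves a static ReDoc page for the given spec
        env:
        - name: SPEC_URL
          value: https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v3.0/petstore.yaml
        ports:
        - containerPort: 80
```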
To run swagger-ui inside Kubernetes:
swagger-ui can load spec files in two ways, either from the file system or from a URL.
For example, to run a deployment of swagger-ui:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swagger-ui
  labels:
    app: swagger-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: swagger-ui
  template:
    metadata:
      labels:
        app: swagger-ui
    spec:
      containers:
      - name: swagger-ui
        image: swaggerapi/swagger-ui # build a new image to add a local swagger file
        ports:
        - containerPort: 8080
        env:
        - name: BASE_URL
          value: /
        - name: API_URLS
          value: >-
            [
              {url:'https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v3.0/petstore.yaml',name:'Pet Store Example'},
              {url:'https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v3.0/uspto.yaml',name:'USPTO'},
              {url:'/example-swagger.yaml',name:'Local Example'}
            ]
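To reach the UI from outside the cluster, a Service could expose the deployment above (a sketch; the service type and port choice are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: swagger-ui
spec:
  type: NodePort       # or LoadBalancer, or place it behind an Ingress
  selector:
    app: swagger-ui
  ports:
  - port: 80
    targetPort: 8080   # the swagger-ui container port from the deployment above
```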
https://github.com/harsh4870/Central-API-Documentation-kubernetes
You can also read about Newman: https://azevedorafaela.com/2018/12/18/how-to-test-internal-microservices-in-a-kubernetes-cluster/
I solved it here: https://stackoverflow.com/a/68798178/12877180
Executing Kubernetes services by their names via a web UI can be achieved using the nginx reverse-proxy mechanism.
https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
With this mechanism, API requests invoked from the browser go to the nginx server running inside the cluster, which forwards them to the internal cluster service name.
An example nginx configuration
server {
    listen 80;

    location /api/ {
        proxy_pass http://<service-name>.<namespace>:<port>;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html;
        add_header 'Access-Control-Allow-Origin' '*';
        try_files $uri $uri/ /index.html =404;
    }
}
In the above example, all your .../api/... calls will be redirected to the http://<service-name>.<namespace>:<port>/api/... endpoint.
Unfortunately I don't know how to achieve the same goal with Swagger-UI.
NOTE
If the proxy_pass URL ends with '/', the location path (e.g. '/api/') is not appended to the target URL.
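To illustrate the trailing-slash behaviour (a sketch; the service name and port are placeholders):

```nginx
# /api/users  ->  http://backend.default:8080/api/users  (location prefix kept)
location /api/ {
    proxy_pass http://backend.default:8080;
}

# /api/users  ->  http://backend.default:8080/users  (location prefix replaced by '/')
location /api/ {
    proxy_pass http://backend.default:8080/;
}
```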
Related
I have an API and a GUI application.
The API is a simple Python FastAPI app with one route. The GUI is a React app that is served via nginx.
So I have a working local cluster with those two apps. Each has a Service; the GUI additionally has an Ingress. Using k3d, I am able to get to the GUI app because I start the cluster with k3d cluster create -p "8081:80@loadbalancer", so I can go to localhost:8081 and see it. The Ingress config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-gui-ingress
  annotations:
    external-dns/manage-entries: "true"
    external-dns.alpha.kubernetes.io/ttl: "300"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-gui-service
            port:
              number: 80
The Service configurations are pretty much the same, just with different names:
apiVersion: v1
kind: Service
metadata:
  name: foo-gui-service
  labels:
    app.kubernetes.io/name: foo-gui
    app.kubernetes.io/component: server
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/part-of: foo-gui
spec:
  selector:
    app: foo-gui
    component: foo-gui
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
GUI makes calls to the API. The API Service is called foo-api-service, so I want to make calls to http://foo-api-service:80/<route>. It works in a temporary pod that I install for debugging network issues (I can curl the API). It even works when I attach to the GUI Pod.
The problem is: it doesn't work from the GUI. I receive the following error in the dev console in the browser: POST http://foo-api-service/test net::ERR_NAME_NOT_RESOLVED. It looks like http://foo-api-service is being resolved by my local network, not inside the cluster.
My question is: Is there a way to overcome that? To have http://foo-api-service calls to resolve this name inside the local cluster? Or should I provide Ingress for the API as well and use the domain name from API's Ingress as a URL in the GUI app?
I'm fairly new to Kubernetes and I have played around with it for a few days now to get a feeling for it. Trying out to set up an Nginx Ingress controller on the google-cloud platform following this guide, I was able to set everything up as written there - no problems, I got to see the hello-app output.
However, when I tried replicating this in a slightly different way, I encountered a weird behavior that I am not able to resolve. Instead of using the image --image=gcr.io/google-samples/hello-app:1.0 (as done in the tutorial), I wanted to deploy a standard nginx container with a custom index page to see if I understood things correctly. As far as I can tell, all the steps should be the same except for the exposed port: while the hello-app exposes port 8080, the standard port for the nginx container is 80. So, naively, I thought exposing (i.e., creating a service) with this altered command should do the trick:
kubectl expose deployment hello-app --port=8080 --target-port=80
where instead of having target-port=8080 as for the hello-app, I put target-port=80. As far as I can tell, all other things should stay the same, right? In any case, this does not work, and when I try to access the page I get a "404 - Not Found", although the container is definitely running and serving the index page (I checked by using port forwarding from the Google Cloud console, which apparently makes the page directly accessible for dev purposes). In fact, I also tried several other combinations of ports (although I believe the above one should be the correct one) to no avail. Can anyone explain to me why the routing does not work here?
If you look at the tutorial, the ingress configuration uses path: "/hello":
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "34.122.88.204.nip.io"
    http:
      paths:
      - pathType: Prefix
        path: "/hello"
        backend:
          service:
            name: hello-app
            port:
              number: 8080
You may have updated the port number and service name in the config, but with path /hello the request reaches the nginx container, which has no page at /hello, so it returns a 404.
You hit the endpoint IP/hello (goes to the Nginx ingress controller) -->
the controller matches path /hello and forwards the request to the service -->
hello-app (the service forwards the request to the Pods) --> Nginx Pod (it
has nothing at path /hello, so 404)
The 404 is returned by Nginx; in your case, either by the Nginx ingress controller or by the container (Pod) itself.
So try your ingress config with path: "/" instead and hit the endpoint; you should then see the output from Nginx.
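A sketch of the adjusted ingress, identical to the tutorial's except for the path:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "34.122.88.204.nip.io"
    http:
      paths:
      - pathType: Prefix
        path: "/"          # route everything to the service; nginx serves its index page
        backend:
          service:
            name: hello-app
            port:
              number: 8080
```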
My goal is this:
A single public DNS record for my entire cluster
A single ingress for my entire cluster
I don't want to have to update the ingress for new deployments/services- it has a way of automatically routing to them
I'm using gke: https://cloud.google.com/kubernetes-engine/docs/concepts/service-discovery
So by default a service gets an internal DNS record like my-svc.my-namespace.svc.my-zone
What I want is to be able to hit that service like this: http://myapps.com/my-svc.my-namespace.svc.my-zone/someEndpoint
Is this possible? It would mean I could deploy new deployments and services and they would immediately be accessible to consumers outside the cluster.
Do you have to associate an ingress with a backend service? I don't want to do that because it means I'll need to update it to add every new deployment/service and I want to make it dynamic. Can you have an ingress use internal DNS for routing to services?
I think there are several ways to accomplish this. One way would be to not use the Ingress resources at all and instead, put an Nginx proxy in front of your services. Then configure it to proxy all requests. A configuration like this should work.
location ~ ^\/(.+?)\/(.*)$ {
    proxy_pass http://$1/$2;
}
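The regex above captures the first path segment as $1 (the service name) and the rest as $2 (the remaining path). The same pattern can be checked in Python (illustrative only; the request path is a made-up example):

```python
import re

# Mirrors the nginx location regex: first path segment -> $1, remainder -> $2
pattern = re.compile(r"^/(.+?)/(.*)$")

def split_path(request_uri):
    """Return (service, remainder), i.e. nginx's $1 and $2, or None if no match."""
    m = pattern.match(request_uri)
    if m is None:
        return None
    return m.group(1), m.group(2)

print(split_path("/my-svc.my-namespace/someEndpoint"))
```

So a request for /my-svc.my-namespace/someEndpoint would be proxied to http://my-svc.my-namespace/someEndpoint.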
This is probably best achieved using a combination of nginx proxy and Ingress.
First you create an ingress with your desired host name and add a single backend that points to your nginx proxy.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: youringress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - yourhostname
    secretName: yourtlssecret
  rules:
  - host: yourhostname
    http:
      paths:
      - path: /your-app-path
        backend:
          serviceName: nginxservice
          servicePort: nginxserviceport
The path value can be a generic slash, which routes all incoming requests to the nginx proxy server first; nginx then forwards them to the respective service based on the path. You wouldn't need to update this ingress every time a new service is deployed, as its main duty is just to get the request to the nginx proxy server.
Then you create your nginx proxy server with multiple location blocks that routes the requests to the respective services in your cluster based on the kube-dns names.
server {
    listen 1080;
    server_name localhost $hostname;

    location /path-to-the-service1 {
        proxy_pass http://name-of-service1;
    }
    location /path-to-the-service2 {
        proxy_pass http://name-of-service2;
    }
    location /path-to-the-service3 {
        proxy_pass http://name-of-service3;
    }
}
In your cluster, if you create a service with the name frontendservice, say, the nginx proxy server can reach it using its kube-dns name:
location /path-to-the-frontendservice {
    proxy_pass http://frontendservice;
}
You can keep adding up your new deployments as a separate location block in the nginx proxy server which will take care of the routing for you.
You should then be able to hit any service using a URL like:
http://my-app-url/path-to-the-service/
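Putting it together, the proxy itself could be deployed with its configuration in a ConfigMap (a sketch; all names and the image tag are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-conf
data:
  default.conf: |
    server {
      listen 1080;
      location /path-to-the-service1 {
        proxy_pass http://name-of-service1;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-proxy
  template:
    metadata:
      labels:
        app: nginx-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 1080
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/conf.d  # nginx loads *.conf from here
      volumes:
      - name: conf
        configMap:
          name: nginx-proxy-conf
```

Updating the ConfigMap (and rolling the deployment) is then the only change needed when a new service is added.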
I have an isomorphic JavaScript app that uses Vue's SSR plugin running on K8s. This app can either be rendered server-side by my Express server with Node, or it can be served straight to the client as with Nginx and rendered in the browser. It works pretty flawlessly either way.
Running it in Express with SSR is a much higher resource use however, and Express is more complicated and prone to fail if I misconfigure something. Serving it with Nginx to be rendered client side on the other hand is dead simple, and barely uses any resources in my cluster.
What I want to do is have a few replicas of a pod running my Express server that's performing the SSR, but if for some reason these pods go down, I want a fallback service on the ingress that will serve from a backup pod with just Nginx serving the client-renderable code.
Setting up the pods is easy enough, but how can I tell an ingress to serve from a different service than normal if the normal service is unreachable and/or responding too slowly to requests?
The easiest way to set up NGINX Ingress to meet your needs is by using the default-backend annotation.
This annotation is of the form
nginx.ingress.kubernetes.io/default-backend: <svc name> to specify
a custom default backend. This <svc name> is a reference to a
service inside of the same namespace in which you are applying this
annotation. This annotation overrides the global default backend.
This service will handle the response when the service in the
Ingress rule does not have active endpoints. It will also handle the
error responses if both this annotation and the custom-http-errors
annotation are set.
Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '404'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: custom-http-backend
          servicePort: 80
In this example NGINX serves custom-http-backend as the primary resource and, if this service fails, redirects the end user to default-http-backend.
You can find more details on this example here.
I have been trying to create HTTPS endpoint in Google Cloud K8s environment.
I have built a flask application in Python that is served by waitress (a production WSGI server) on port 5000.
serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30)
I created a docker file and pushed this to the google cloud repository. Then, created a Kubernetes cluster with one workload containing this image. After, I exposed this via external IP by creating LoadBalancer. (After pushing the image to the Google repository, everything is managed through the Google Cloud Console. I do not have any configuration file, it should be through the Google Cloud Console.)
Now, I do have an exposed IP and port number to access my application. Let's say this IP address and the port is: 11.111.11.222:1111. Now, I can access this IP via Postman and get a result.
My goal is, if it is possible, to expose this IP address via HTTPS as well, using any Google Cloud resources (redirection, creating an ingress, etc.).
So, in the end, I want to reach the application through both http://11.111.11.222:1111 and https://11.111.11.222:1111.
Any suggestions?
A LoadBalancer translates to a network load balancer. You can configure multiple ports for this e.g. 80 and 443. Then your application must handle the TLS part.
The ingress resource creates an HTTP(S) LB
From the GKE perspective you can try to configure Ingress resource with HTTPS enabled:
Steps:
Create a basic flask app inside a pod (for example purposes only)
Expose the app via a Service object of type NodePort
Create a certificate
Create an Ingress resource
Test
Create a basic flask app inside a pod (for example purposes only)
Below is a flask script which will respond with <h1>Hello!</h1>:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "<h1>Hello!</h1>"

if __name__ == "__main__":
    from waitress import serve
    serve(app, host="0.0.0.0", port=8080)
By default it will respond on port 8080.
Link to an answer with above script.
Expose the app via a Service object of type NodePort
Assuming that the deployment is configured correctly with a working app inside, you can expose it via a Service object of type NodePort with the following YAML definition:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort
  selector:
    app: ubuntu
  ports:
  - name: flask-port
    protocol: TCP
    port: 80
    targetPort: 8080
Please make sure that:
selector is configured correctly
targetPort points to the port the app is running on
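For completeness, a deployment whose pod labels match the selector above might look like this (a sketch; the deployment name and image are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu          # must match the Service selector above
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: flask
        image: your-registry/flask-app:latest  # hypothetical image with the flask script
        ports:
        - containerPort: 8080                  # matches the Service targetPort
```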
Create a certificate
For the Ingress object to work with HTTPS you will need to provide a certificate. You can create one by following the official GKE documentation: Cloud.google.com: Managed certificates
Be aware that you will need a domain name to do that.
Create an Ingress resource
Below is an example Ingress resource which will point your requests to your flask application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-ingress
  annotations:
    networking.gke.io/managed-certificates: flask-certificate
    kubernetes.io/ingress.global-static-ip-name: flask-static-ip
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: flask-service
          servicePort: flask-port
Please take a specific look at the part of the YAML definition below and change it according to your case:
networking.gke.io/managed-certificates: flask-certificate
kubernetes.io/ingress.global-static-ip-name: flask-static-ip
Please wait for everything to configure correctly.
After that you will have access to your application at DOMAIN.NAME on ports:
80 (HTTP)
443 (HTTPS)
Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
-- Kubernetes.io: Ingress TLS
Test
You can check if above steps are configured correctly by:
entering https://DOMAIN.NAME in your web browser and checking if it responds with Hello over HTTPS
using a tool such as curl -v https://DOMAIN.NAME.
Please let me know if this solution works for you.
Additional information (added by EDIT)
You can try to configure a service object of type LoadBalancer, which will operate at layer 4, as @Florian said in his answer.
Please refer to official documentation: Kubernetes.io: Create external load balancer
You can also use Nginx Ingress controller and either:
Expose a TCP/UDP service by following Kubernetes.github.io: Ingress nginx: Exposing tcp udp services, which operates at L4.
Create an Ingress resource that will have SSL Passthrough configured by following: Kubernetes.github.io: Ingress nginx: Ssl passthrough
After researching, I found the answer in Google Cloud Run. It is very simple to deploy an HTTP-based flask app in a container, e.g. serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30) (no need for a certificate or HTTPS at this point; just make sure the HTTP app works), and then push it to Cloud Run.
Adjust the service parameters depending on how many resources you need to run it. In the machine settings, map the port that you are using in the docker container; in my case, it is 5000. When you create the service, Google provides you with a domain address served over HTTPS. You can use that URL to access your resources.
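A minimal Dockerfile for such a flask + waitress app might look like this (a sketch; the file names and requirements list are assumptions):

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt  # e.g. flask, waitress
COPY . .
EXPOSE 5000
CMD ["python", "main.py"]  # main.py calls serve(app, host='0.0.0.0', port=5000)
```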
That's it!
For more information on Cloud Run:
https://cloud.google.com/serverless-options
The differences between computing platforms: https://www.signalfx.com/blog/gcp-serverless-comparison/