How are Kubernetes Services implemented?

OK, this may be a tough one for a K8s neophyte like me, so I'll rely on the many experts who lurk in the shadows here, because I cannot get an answer elsewhere.
I would like to know whether K8s implements ClusterIP services using Nginx, and if not, whether the implementation is similar. The reason for my suspicion is that I'm getting 499 status codes from one internal microservice at the application's gateway, but both implementations are ASP.Net, and ASP.Net does not use status code 499.
The Internet says status code 499 is only used by Nginx.
So here I am, completely confused by the facts:
Client HTTP request reaches the gateway.
Gateway routes to internal HTTP server.
Internal HTTP server, an ASP.Net server, allegedly responds with status code 499.
So, if 499 is not coming from the ASP.Net server, who's sending it? I can only conclude that the sender is K8s itself, the Service ClusterIP part, right? From the gateway's point of view there would be no difference, I suppose.
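For what it's worth, 499 is not a registered HTTP status code, which fits the "nginx only" theory; for example, Go's standard library has no name for it (a throwaway check, shown in Go simply because its net/http package tracks the registered codes):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// 499 is not a registered HTTP status code, so the standard library
	// has no text for it, while 500 does.
	fmt.Printf("499: %q\n", http.StatusText(499)) // prints ""
	fmt.Printf("500: %q\n", http.StatusText(500)) // prints "Internal Server Error"
}
```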
Thanks! Let me know if I should clarify anything.

Related

How to serve gRPC and REST with TLS enabled on the same port

I'm trying to create a gRPC/REST PoC written in Go.
I would like to serve gRPC and REST on the same port, over a TLS connection. I'm also serving metrics.
When accessing my service through HTTP/1, all is working as expected
When accessing the /metrics URL, all is working as expected
When accessing the gRPC service directly with a client, I receive a connection-closed response
I do not know how to debug this kind of error.
I have created a repo on https://github.com/lrobinot/grpc-poc to reproduce the issue.
Can someone give me pointers to some resources or show me my enormous error :)
Thanks in advance!
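For reference, a common way to serve gRPC and plain HTTP (REST, /metrics) on a single TLS port in Go is to branch on the protocol version and Content-Type before dispatching; a minimal sketch, with handler contents and certificate file names purely illustrative and not taken from the linked repo:

```go
package main

import (
	"net/http"
	"strings"

	"google.golang.org/grpc"
)

// grpcOrRestHandler sends HTTP/2 requests carrying the gRPC content type to
// the gRPC server and everything else (REST, /metrics) to the regular mux.
func grpcOrRestHandler(grpcServer *grpc.Server, restMux http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
			grpcServer.ServeHTTP(w, r)
			return
		}
		restMux.ServeHTTP(w, r)
	})
}

func main() {
	grpcServer := grpc.NewServer() // register your gRPC services here

	restMux := http.NewServeMux()
	restMux.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n")) // placeholder for a real metrics handler
	})

	srv := &http.Server{
		Addr:    ":8443",
		Handler: grpcOrRestHandler(grpcServer, restMux),
	}
	// ListenAndServeTLS negotiates HTTP/2 via ALPN, which gRPC requires;
	// a listener stuck on HTTP/1 would be one explanation for closed connections.
	if err := srv.ListenAndServeTLS("cert.pem", "key.pem"); err != nil {
		panic(err)
	}
}
```

Serving gRPC through net/http like this is slower than grpc's native listener, but it keeps everything on one port; if the gRPC client still sees closed connections, checking whether it actually negotiates HTTP/2 over TLS is a reasonable first debugging step.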

Returning HTTP 502 error code in RESTful api when upstream provider fails

I'm busy building an API, and some parts of it depend heavily on a third party.
When we are unable to connect to the 3rd party, or the connection fails, I simply returned an error 500. However, I was wondering if it wouldn't make more sense to return a 502 Bad Gateway or a 504 Gateway Timeout?
However, my interpretation is that those codes are only relevant for proxies, not for APIs?
In that case I would suggest using 503 Service Unavailable and setting the Retry-After header to specify how long the client should wait before retrying.
When it comes to RESTful APIs I always check this super-complete guide, which contains answers to all the questions you could ever imagine.
Service Unavailable - service is (temporarily) not available (e.g. if a required component or downstream service is not available) — client retry may be sensible. If possible, the service should indicate how long the client should wait by setting the Retry-After header.
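To make that concrete, a minimal sketch of the 503-plus-Retry-After approach (written in Go here rather than any particular framework; callThirdParty and the 30-second value are placeholders):

```go
package main

import (
	"errors"
	"net/http"
)

// callThirdParty stands in for the real call to the upstream provider.
func callThirdParty() error {
	return errors.New("upstream unreachable") // pretend the provider is down
}

func handler(w http.ResponseWriter, r *http.Request) {
	if err := callThirdParty(); err != nil {
		// The required downstream component is unavailable: return 503 and
		// hint at when a retry is reasonable.
		w.Header().Set("Retry-After", "30") // seconds; example value
		http.Error(w, "upstream provider unavailable", http.StatusServiceUnavailable)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/resource", handler)
	http.ListenAndServe(":8080", nil)
}
```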

REST API with Single Page Application over HTTPS on Firefox only

I am developing a web service using a REST API. This REST API runs on port 6443 for HTTPS. The client is going to be a single-page application running on port 443 for HTTPS on the same machine. The problem I am facing is:
When I hit a URL such as https://mymachine.com/new_ui, I get a certificate exception for an invalid certificate because I use a self-signed one, so mymachine.com:443 gets added to the server exceptions. But requests still don't reach the REST API, because it runs on https://mymachine.com:6443/restservice. If I manually add mymachine.com:6443 to the server exceptions in Firefox it works, but that won't be acceptable in production for customers.
Some options I have thought of:
1. Show another pop-up and ask the user to add an exception for the REST server on port 6443 as well. But this doesn't feel right: why should an end user accept the cert for the same domain twice? Also, the REST API server port can change.
2. Programmatically add exceptions for the domain and both ports in one shot, of course with the consent of the user. Is that even possible?
3. Use a reverse proxy. But then it's going to have a memory footprint on our system, and it will be time consuming (a minimal sketch of this option follows below).
Please suggest some options. How should I deal with this? Thank you
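For option 3, the reverse proxy does not have to be heavyweight; a minimal sketch in Go, with the ports and backend path taken from the question and the certificate file names and SPA directory assumed:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Backend REST API from the question.
	apiURL, err := url.Parse("https://mymachine.com:6443")
	if err != nil {
		log.Fatal(err)
	}
	apiProxy := httputil.NewSingleHostReverseProxy(apiURL)
	// The backend also uses a self-signed certificate, so this sketch skips
	// verification; in production, trust or pin that certificate instead.
	apiProxy.Transport = &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}

	mux := http.NewServeMux()
	mux.Handle("/restservice/", apiProxy)                  // API calls go to :6443
	mux.Handle("/", http.FileServer(http.Dir("./new_ui"))) // SPA assets

	// The browser only ever sees this one port and one certificate,
	// so a single exception (or a single proper certificate) suffices.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux))
}
```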

Health check route organisation in microservice(ish) setup behind AWS ALB

How to name health check routes among several services behind ALB?
I'm moving my API and database to AWS. Before moving I split up my monolith REST API into four services:
public API (to which apps and websites connects)
admin API (for admin web site)
messaging API (web socket server for realtime communication with apps)
workers (queue based task processors)
I'm now trying to figure out a good organisation of the routes. At first I created two subdomains, api.mydomain.com and www.mydomain.com.
I directed the api subdomain to my ALB which routed traffic based on the path only, like this:
"/sockets" -> messaging-api
"/admin" -> admin-api
"/" -> public-api
Now I'm trying to implement the health check routes. I'd like to name them "/health". But the health checks need to be directed to each target group, and since the ALB only routes based on the path, I cannot have /health on more than one server.
Possible solutions:
1. Separate the services via subdomains
I could create a subdomain for each service like:
- api.mydomain.com
- sockets.mydomain.com
- admin.mydomain.com
With this setup I could have a /health in each service without collisions.
2. Separate the health check routes via naming
I could name the health check route differently for each service like:
api.mydomain.com/health-public-api
api.mydomain.com/health-messaging-api
api.mydomain.com/health-admin-api
Suggestions?
Both of the above solutions seem viable, but I'd like to know whether one of them will bite me later, for example when more services are added, or when I add a GraphQL API later on.
Edit:
I just bumped into one drawback with solution #1. My local dev environment is set up with a Docker image for each service and nginx for routing the requests. On top of this I use ngrok to be able to reach the dev environment from the Internet.
I think it would be hard to replicate the subdomain-based service separation locally, but I don't really need the /health routes in the dev environment, so I guess I could just pretend they are not there.
Answering my own question as documentation and possibly some input to others.
tl;dr:
I went with a third option, separating all services via the first level in the path. The main difference from my previous structure is that my main API (aka public-api) has moved from the root to a subpath called /app. I also renamed it to app-api.
api.mydomain.com/app/..
api.mydomain.com/admin/..
api.mydomain.com/sockets/..
api.mydomain.com/auth/..
www.mydomain.com/..
This solution gives me several pros and no cons (I think).
Pros:
Easy to route requests both in the ALB and in the local dev environment via nginx, without the extra work needed for SNI
subdomains separate the API from the web sites very clearly
/health routes get unique names by default since they live under separate paths.
the apps (web and smartphone) can use a common API URL (api.mydomain.com/) and still reach all services, i.e. they don't need to store several differently initialized Axios instances. No biggie, but still...
I also opted to make /health a little more future-proof and standardized on the following structure in each service.
api.mydomain.com/servicename/health
api.mydomain.com/servicename/health/is-up
api.mydomain.com/servicename/health/is-ready
up = responding to requests, ready = all dependencies are connected (i.e. databases, etc)
/health returns status 200 along with a json object describing the readiness.
/health/is-up responds with 200 or nothing (i.e. not reachable at all)
/health/is-ready responds with 200 if all dependencies are ready, otherwise 500.
The target groups in AWS will use /health/is-ready for health checks, but for now it behaves the same as /health/is-up since I haven't implemented the readiness tests yet.
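To illustrate, a minimal Go sketch of the three endpoints described above; checkDependencies is a placeholder for real database/queue checks, and the /servicename prefix is assumed to be stripped by the ALB or nginx before the request reaches the service:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// checkDependencies is a placeholder for real readiness probes
// (database ping, queue connection, and so on).
func checkDependencies() map[string]bool {
	return map[string]bool{"database": true, "queue": true}
}

func main() {
	mux := http.NewServeMux()

	// /health/is-up: 200 as long as the process answers at all.
	mux.HandleFunc("/health/is-up", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// /health/is-ready: 200 only if every dependency is connected, else 500.
	mux.HandleFunc("/health/is-ready", func(w http.ResponseWriter, r *http.Request) {
		for _, ok := range checkDependencies() {
			if !ok {
				w.WriteHeader(http.StatusInternalServerError)
				return
			}
		}
		w.WriteHeader(http.StatusOK)
	})

	// /health: 200 plus a JSON object describing readiness.
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(checkDependencies())
	})

	http.ListenAndServe(":8080", mux)
}
```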

Exe as Webservice Endpoint

I've got a webservice endpoint and I'm stumped on how to correctly implement it.
It seems to be a parameterized exe file which returns an XML reply.
There is no documentation.
I am used to SOAP, WCF and REST, but this is completely unknown to me. Does anyone have a guide or a best practice for how to implement such a service?
I can consume it with an HTTP GET, but there are still some questions left for me:
I know the questions are quite broad... But I could not find anything about it in the interwebz.
Is there a secure way to publish exe files as webservice?
Are there any critical downsides implementing such an interface?
Am I making a fool of myself, and this is just an alias?
Example Url:
http://very.exhausting.company/Version/SuperStrange.exe?parameter=String
Web servers
What you call a webservice endpoint is nothing more than a web server listening on some host (normally 0.0.0.0) and some port on a physical or virtual machine, and responding with some HTTP response to HTTP requests sent to that host, port and the URIs that the web server cares to process.
Any web server is itself an application or a static or dynamic component of an application as the following examples illustrate:
JBoss, Glassfish, Tomcat etc. are applications, known as application servers, into which containers/servlets/plugins implementing web servers and corresponding endpoints are deployed. These listen on some port, exposing generic web servers that route requests to those containers and their servlets;
a fat jar started with java -jar on a JVM which deploys a vert.x verticle featuring a vert.x HttpServer listening on some port is nothing else than a web server;
an interpreter such as node.js parsing and executing JavaScript code based on the express module will most likely deploy a web server on some port;
finally, a statically or dynamically linked application written in languages such as C++ or Go can expose a web server listening on some port.
All of the above cases feature different deployment mechanisms, but what they deploy is essentially the same: a piece of software that listens for HTTP requests on some port, executes some logic based on the request and returns HTTP responses to the caller.
Your Windows exe file is most likely a statically linked application that provides a web server.
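To make that concrete, here is a minimal Go program that, compiled into a single binary, already qualifies as such a "webservice endpoint"; the URL path and XML shape simply mirror the example from the question and are not the real service:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// A single statically linked exe that answers HTTP GETs with an XML
	// reply, much like the SuperStrange.exe example above.
	http.HandleFunc("/Version/SuperStrange.exe", func(w http.ResponseWriter, r *http.Request) {
		param := r.URL.Query().Get("parameter")
		w.Header().Set("Content-Type", "application/xml")
		// No escaping here; this is only an illustration.
		fmt.Fprintf(w, "<reply><echo>%s</echo></reply>", param)
	})
	http.ListenAndServe(":8080", nil)
}
```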
Protocols
So we know you have a web server, since it reacts to an HTTP GET. How does it relate to REST, SOAP etc.? Effectively, REST, SOAP and everything else you mention are higher-level protocols: TCP is the low-level protocol, HTTP is built on top of that, and your server supports HTTP; the higher-level protocols are in turn based, among other things, on HTTP. So all you know is that your application (web server) speaks HTTP, but you do not know which higher-level data exchange protocol it implements. It definitely implements some protocol, at least a custom one that its author came up with to exchange data between a client and this application.
You can try to reverse engineer it, but it is not clear how you would find out about all possible endpoints, arguments, payload structures, accepted headers etc. Essentially, you have a web server publishing some sort of API, but there is no generic way of telling what that API is.
Security
The world around you does not have to know how the API is published. You can put any of the above four web server implementations behind exactly the same firewall, or behind a reverse proxy with SSL termination exposing just one host and port over SSL. So with respect to the outside world there is no difference in security whether you deploy it as an exe or as a WAR into JBoss. This is not to say that your exe file is secure: depending on how it is implemented it may allow all sorts of attacks, but again, that is equally true for any of the other mechanisms.