How to view network request logs in GKE? - kubernetes

How does one view request headers, payloads, and response headers from an application that's deployed in GKE? For example, if a Node API invokes a function cleanData() that makes an API request to the Intuit API, how does one capture the network request/response headers in GCP logs?
I tried going to the GCP Console logs, but it's all just the application's console.log output and no network logs.
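Worth noting: Cloud Logging on GKE only picks up what the containers write to stdout/stderr, so outbound calls never appear unless something logs them. If you want to look at the traffic itself, one option is an ephemeral debug container attached to the pod. A rough sketch (the pod name and target host are placeholders; and since the call to the external API is almost certainly HTTPS, the headers are encrypted on the wire, so for readable request/response headers you would normally log them from the application code and let that output flow into Cloud Logging):
# attach a throwaway debug container to the pod and watch its traffic
kubectl debug -it my-api-pod --image=nicolaka/netshoot -- tcpdump -A -i any host api.intuit.com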

Related

502 Bad Gateway using OpenShift (Kubernetes)

I have an OpenShift 4.6 platform running an application pod.
We use Postman to send requests to the pod.
The application pod returns a 200 HTTP response code, but Postman gets a 502.
So there is some intermediate component inside OpenShift/K8s that turns the 200 into a 502.
Is there a way to debug/trace more information on the egress path?
Thanks
Nicolas
The HTTP 502 error is likely returned by the OpenShift Router that is forwarding your request to your application.
In practice this often means that the OpenShift Router (HAProxy) forwards the request to your application but receives either no answer or an unexpected one back.
So I would recommend checking your application's logs for errors and verifying that your application returns a valid HTTP answer. You can test this by running curl localhost:<port> from inside your application Pods to see whether a response is returned.
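For example, something along these lines (pod name and port are placeholders here, and curl has to be present in the container image):
oc rsh my-app-pod-7d9f8
curl -v http://localhost:8080/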

Questions about istio external authorization

Problem statement:
My goal is to run Istio with an external authorization service (ideally HTTP; if that is not possible, gRPC would do as well). There is a requirement to be able to control exactly which status code the authorization service returns to the client. That latter requirement is the most problematic part.
My research
I have read the Istio documentation on the external authorizer.
I have made a prototype with an HTTP auth service, but whatever non-200 status code I return from the auth service, the client always receives 403 Forbidden.
In the mesh config specification the only option I see is statusOnError, but it is only used when the auth service is unreachable, and it cannot be changed dynamically.
Also, the Envoy documentation for the gRPC service shows the possibility of setting a custom status via the HTTP attributes of a denied response:
{
"status": "{...}",
"headers": [],
"body": "..."
}
Questions:
Is having a custom status possible only with a gRPC auth service?
Is Istio using the Envoy v3 API or the v2 API?
Any suggestions on how to set up Istio with an external authorizer and custom status codes?
I made the gRPC auth service prototype and found the answer. It is counter-intuitive, but the gRPC external auth service really is more flexible than the HTTP one, and it does allow setting an arbitrary status code.
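For reference, the denied response from the gRPC auth service can carry the attributes from the snippet above filled in. A rough example of what the denied HTTP response could look like (the status code, header name and body are just illustrative values, not taken from the Istio or Envoy docs):
{
  "status": { "code": 429 },
  "headers": [
    { "header": { "key": "x-denied-reason", "value": "quota exceeded" } }
  ],
  "body": "Too many requests"
}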

How do I prevent anonymous requests to a REST API / NGINX server while allowing authenticated requests to endpoints?

Initial disclosure:
I’m new to nginx and reverse proxy configuration in general.
Background
I have a Swagger-derived, FOSS, https-accessible REST API [written by another party] running on a certain port of an EC2 CentOS 7 instance behind an nginx 1.16.1 reverse proxy (to path https://foo_domain/bar_api/); for my purposes, this API needs to be reachable from a broad variety of services not all of which publish their IP ranges, i.e., the API must be exposed to traffic from any IP.
Access to the API’s data endpoints (e.g., https://foo_domain/bar_api/resource_id) is controlled by a login function located at
https://foo_domain/bar_api/foobar/login
supported by token auth, which is working fine.
Problem
However, the problem is that an anonymous user is able to GET
https://foo_domain/bar_api
without logging in, which results in potentially sensitive data about the API server configuration being returned, such as the API's true port, server version, some of the available endpoints and parameters, etc. This is not acceptable from a security standpoint.
Question
How do I prevent anonymous GET requests to the /bar_api/ endpoint, while allowing login and authenticated data requests to endpoints beyond /bar_api/ to proceed unhindered? Or, otherwise, how do I prevent any data from being returned upon such requests?
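Not a full answer, but one possible direction in nginx itself is to use exact-match locations to shut off the bare /bar_api index while the prefix location keeps proxying the deeper paths (the upstream address and return code below are placeholders, untested):
location = /bar_api { return 401; }
location = /bar_api/ { return 401; }
location /bar_api/ {
    proxy_pass https://127.0.0.1:8443;   # existing proxy rule; login and data paths still pass through
}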

How do you send GRPC metadata through HTTP REST when transcoding is used?

I have a gRPC API running in Google Cloud. I'm using Google's Extensible Service Proxy to connect it to a Google Endpoints Service. Then I enabled transcoding in the ESP so that a REST API is offered as well as a gRPC one. One thing that is important in my API is that each request is user-authenticated. In plain gRPC I send the user token in the metadata of each request, along with the API key.
My question is how does this work with the transcoded REST API. How can I get the user token sent with each request?
I see that the API key, which is processed by the ESP, gets added to the request URL as a parameter, but what about my custom metadata, how does that get through?
I've figured it out. I just need to put the metadata in the request headers.
curl -H "authorization: Bearer token-goes-here" "https://api.domain/path?key=api-key"

Request behind load-balancer

I have an nginx server and have information about connected users.
I can also send a GET request with a connection id to the server to disconnect users. BUT I can't send the request to a selected pod, because the load balancer redirects the request to another pod. I decided to just send the request to all pods, but how can I do it?
Perhaps you'll need to change your approach and use a kind of control plane to which the pods subscribe to receive control messages. As you write, the load balancer redirects each request to a different pod, since that is its primary function.
Regards.
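If you do want to try the "hit every pod" workaround from the question, a rough sketch run from inside the cluster could look like this (the label selector, port, path and connection id are made-up placeholders; the pod IPs must be reachable from wherever this runs):
for ip in $(kubectl get pods -l app=my-nginx -o jsonpath='{.items[*].status.podIP}'); do
  curl "http://${ip}:8080/disconnect?connection_id=123"
done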