How to access model microservice deployed behind Istio and Dex? - kubernetes

I built a deployment pipeline to serve ML models using Kubeflow (v0.6) and Seldon Core, but now that the models are deployed I can't figure out how to get past the auth layer and consume the services.
My Kubernetes cluster runs on bare metal and the setup is identical to this: https://www.kubeflow.org/docs/started/getting-started-k8s/
I was able to follow these instructions to launch the example-app and issue an ID token for a static client, but when I pass the token as an 'Authorization: Bearer' header I get redirected to the Dex login page.
Part of the Dex ConfigMap:
staticClients:
- id: kubeflow-authservice-oidc
  redirectURIs:
  # After authenticating and giving consent, dex will redirect to
  # this url for the specific client.
  - https://10.50.11.180/login/oidc
  name: 'Kubeflow AuthService OIDC'
  secret: [secret]
- id: model-consumer-1
  secret: [secret]
  redirectURIs:
  - 'http://127.0.0.1:5555/callback'
When I try to access the service:
curl -H "Authorization: Bearer $token" -k https://10.50.11.180/seldon/kubeflow/machine-failure-classifier-6e462a70-a995-11e9-b30b-080027dfd9f4/api/v0.1/predictions
Found.
What am I missing? :(

I found out that serving Seldon models with Istio worked better when they were in a namespace other than 'kubeflow'.
I followed these instructions: https://docs.seldon.io/projects/seldon-core/en/latest/examples/istio_canary.html (created a new gateway and namespace, sketched below) and was able to bypass Dex.
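For reference, a minimal sketch of what that setup can look like, assuming a dedicated namespace with sidecar injection and a gateway bound to Istio's default ingress gateway (the names below are illustrative, not taken from the linked example):
apiVersion: v1
kind: Namespace
metadata:
  name: seldon-models          # illustrative namespace outside 'kubeflow'
  labels:
    istio-injection: enabled   # enable sidecar injection for the new namespace
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: seldon-gateway         # illustrative name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway      # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
Traffic entering through this new gateway does not go through the Dex auth service configured for the Kubeflow gateway, which appears to be why requests could bypass Dex.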

Have you tried VirtualService?
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: <name-of-your-choice>
spec:
  gateways:
  - <your-gateway>
  hosts:
  - <your-host>
  http:
  - match:
    - uri:
        prefix: "<your-api-path-uri>"
    rewrite:
      uri: "<your-rewrite-logic>"
    route:
    - destination:
        host: <name-of-your-service>.<namespace>.svc.<cluster-domain>
        port:
          number: <port-of-the-service>
The VirtualService will route traffic as specified.
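For example, a hedged sketch of how this might look for the Seldon deployment from the question (the gateway name, service name, and port below are assumptions, not values from the question):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: machine-failure-classifier
  namespace: kubeflow
spec:
  gateways:
  - kubeflow-gateway                  # assumed gateway name
  hosts:
  - "*"
  http:
  - match:
    - uri:
        prefix: /seldon/kubeflow/machine-failure-classifier/   # path prefix exposed to clients
    rewrite:
      uri: /                          # strip the prefix before it reaches the service
    route:
    - destination:
        host: machine-failure-classifier.kubeflow.svc.cluster.local   # assumed service name
        port:
          number: 8000                # assumed service port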

I'm three years too late, but: try to get your cookie from the dashboard using the browser developer console:
document.cookie
Replace XXX with your cookie.
curl -k https://10.50.11.180/seldon/kubeflow/machine-failure-classifier-6e462a70-a995-11e9-b30b-080027dfd9f4/api/v0.1/predictions --data-urlencode 'json={"data":{"ndarray":[["try to stop flask from using multiple threads"]]}}' -H "Cookie: authservice_session=XXX" -v

Related

ArgoCD CLI login with server running with --insecure

So I have installed ArgoCD onto my cluster. I then patched it with:
kubectl -n argocd patch deployment argocd-server --type json -p='[ { "op": "replace", "path":"/spec/template/spec/containers/0/command","value": ["argocd-server","--insecure"] }]'
so that I can host it with Contour handling the TLS/SSL certificate. Here's the config for the ingress / Contour:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: argocd
  namespace: argocd
spec:
  virtualhost:
    fqdn: argo.xxx.com
    tls:
      secretName: default/cert
  routes:
  - requestHeadersPolicy:
      set:
      - name: l5d-dst-override
        value: argocd-server.argocd.svc.cluster.local:443
    services:
    - name: argocd-server
      port: 443
    conditions:
    - prefix: /
    loadBalancerPolicy:
      strategy: Cookie
But now I can't log in to the Argo server with the CLI, not even with port-forward (which worked before I patched the server with the 'insecure' flag).
When trying to use port-forward access, I get this:
error creating error stream for port 8080 -> 8080: EOF
Using,
kubectl port-forward svc/argocd-server -n argocd 8080:443
So I have tried as many option/flag combinations as I can think of to log in via the ingress / Contour URL:
argocd login argo.xxx.com --plaintext --insecure --grpc-web
argocd login argo.xxx.com --plaintext --insecure
argocd login argo.xxx.com --plaintext
argocd login argo.xxx.com --insecure --grpc-web
I either get back a 404 or a 502, and sometimes an empty error:
FATA[0007] rpc error: code = Unavailable desc =
FATA[0003] rpc error: code = Unknown desc = POST http://argo.xxx.com:443/session.SessionService/Create failed with status code 502
FATA[0002] rpc error: code = Unknown desc = POST https://argo.xxx.com:443/argocd/session.SessionService/Create failed with status code 404
Without any flags added to login, this is the error I get back:
FATA[0007] rpc error: code = Internal desc = transport: received the unexpected content-type "text/plain; charset=utf-8"
It has been a while and this may already have been solved, but I had a similar issue.
With ArgoCD version 2.4.14 I was able to resolve it using the command:
argocd login --insecure --port-forward --port-forward-namespace=argocd --plaintext
Username: admin
Password:
'admin:login' logged in successfully
Context 'port-forward' updated
I allowed the CLI to use its selector app.kubernetes.io/name=argocd-server to find the service in the argocd namespace.
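For completeness, a hedged sketch of the full port-forward login flow (this assumes a recent ArgoCD install where the argocd-initial-admin-secret still exists; if it was deleted, use whatever admin password you set):
# fetch the initial admin password (assumption: the secret has not been deleted)
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
# log in through the CLI's built-in port-forward, bypassing the ingress entirely
argocd login --insecure --plaintext --port-forward --port-forward-namespace=argocd --username admin
Because the CLI port-forwards straight to the argocd-server service, the Contour/HTTPProxy configuration is taken out of the equation while debugging.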

403 forbidden when login in harbor 2.0.1 in kubernetes cluster

I installed Harbor v2.0.1 on Kubernetes v1.18. Now when I log in to Harbor, it gives me this message:
{"errors":[{"code":"FORBIDDEN","message":"CSRF token invalid"}]}
This is my Traefik 2.2.1 ingress config (this is the doc I am referencing):
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    match: Host(`harbor-portal.dolphin.com`) && PathPrefix(`/c/`)
    services:
    - name: harbor-harbor-core
      port: 80
  - kind: Rule
    match: Host(`harbor-portal.dolphin.com`)
    services:
    - name: harbor-harbor-portal
      port: 80
I checked the Harbor core logs and they only show ping success messages. Should I be using HTTPS? I am learning on a local machine; is HTTPS mandatory? I searched the internet and found very little discussion about it. What should I do to make this work?
I read the source code and tried this from inside the Harbor core pod:
harbor [ /harbor ]$ curl --insecure -w '%{http_code}' -d 'principal=harbor&password=Harbor123456' http://localhost:8080/c/login
{"errors":[{"code":"FORBIDDEN","message":"CSRF token invalid"}]}
My expose type is nodePort. Modify the values.yaml file and change "externalURL" from https to http:
before: externalURL: https://10.240.11.10:30002
after: externalURL: http://10.240.11.10:30002
and then reinstall Harbor.
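For reference, a minimal sketch of the relevant values.yaml section (the Helm release name, namespace, and repo alias mentioned afterwards are assumptions):
# values.yaml (relevant part)
expose:
  type: nodePort                          # Harbor exposed via NodePort
externalURL: http://10.240.11.10:30002    # http, since no TLS is terminated in front of Harbor
Then reinstall the chart, e.g. helm uninstall harbor -n harbor && helm install harbor harbor/harbor -n harbor -f values.yaml, with the release, namespace, and repo names adjusted to your setup.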

How do I segregate internal and external loads using Istio Ingress?

On my Kubernetes cluster I would like to segregate access to internal and external apps. In my example below I have app1 and app2 both exposed to the internet, but I would like only app1 exposed to the internet and app2 available only to users on the internal vnet.
My initial thought was to just make a new service (blue box), use the "internal=true" attribute, and have my cloud provider create the internal IP. The issue is the gateway points to the deployment (pods), so it seems like to create an internal ingress I need to copy all 3 blue boxes.
Is there an easy way to tie in a new service and gateway without a new deployment (blue boxes) or maybe restrict external access via policy?
Based on my knowledge, you can create a VirtualService to do that.
The reserved word mesh is used to imply all the sidecars in the mesh. When this field is omitted, the default gateway (mesh) will be used, which would apply the rule to all sidecars in the mesh. If a list of gateway names is provided, the rules will apply only to the gateways. To apply the rules to both gateways and sidecars, specify mesh as one of the gateway names.
You can check my other answer on Stack Overflow; there is a full reproduction of someone's problem where I made a VirtualService with a gateway for access from outside (in the example, just a curl). If you want to make the service reachable only inside the mesh, delete that gateway and leave only the mesh one, like in the example below.
Specifically, the VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh # inside cluster
  hosts:
  - nginx.default.svc.cluster.local # inside cluster
  http:
  - name: match-myuid
    match:
    - uri:
        prefix: /
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
And some external and internal tests
External
with additional gateway to allow external traffic
curl -v -H "host: nginx.com" loadbalancer_istio_ingress_gateway_ip/
HTTP/1.1 200 OK
without additional gateway to allow external traffic, just the mesh one
curl -v -H "host: nginx.com" loadbalancer_istio_ingress_gateway_ip/
HTTP/1.1 404 Not Found
Internal
I created a basic Ubuntu pod for the tests:
kubectl exec -ti ubu1 -- /bin/bash
With mesh gateway
curl -v nginx/
HTTP/1.1 200 OK
Without mesh gateway
curl -v nginx/
HTTP/1.1 404 Not Found
Based on that, you can use the "mesh" gateway, which will work only inside the mesh and won't allow external requests.
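For reference, the "additional gateway" used in the external test above could look roughly like this (a sketch under my assumptions: Istio's default ingressgateway selector and the nginx.com host; to use it, you would also list its name next to mesh in the VirtualService's gateways field):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-gateway            # illustrative name
  namespace: default
spec:
  selector:
    istio: ingressgateway        # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "nginx.com"                # host used in the external curl test
I can provide a pack of YAMLs if you want to test it yourself.
Let me know if that answers your question or if you have any more questions.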

How do I forward headers to different services in Kubernetes (Istio)

I have a sample application (web-app, backend-1, backend-2) deployed on minikube, all under a JWT policy, and they all have proper destination rules, Istio sidecars, and mTLS enabled in order to secure the east-west traffic.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: oidc
spec:
  targets:
  - name: web-app
  - name: backend-1
  - name: backend-2
  peers:
  - mtls: {}
  origins:
  - jwt:
      issuer: "http://myurl/auth/realms/test"
      jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
When I run the following command I receive a 401 Unauthorized response when requesting data from the backend, which is due to $TOKEN not being forwarded to backend-1 and backend-2 in the headers of the HTTP request.
$> curl http://minikubeip/api -H "Authorization: Bearer $TOKEN"
Is there a way to forward http headers to backend-1 and backend-2 using native kubernetes/istio? Am I forced to make application code changes to accomplish this?
Edit:
This is the error I get after applying my oidc policy. When I curl web-app with the auth token I get
{"errors":[{"code":"APP_ERROR_CODE","message":"401 Unauthorized"}
Note that when I curl backend-1 or backend-2 with the same auth-token I get the appropriate data. Also, there is no other destination rule/policy applied to these services currently, policy enforcement is on, and my istio version is 1.1.15.
This is the policy I am applying:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
#  peers:
#  - mtls: {}
  origins:
  - jwt:
      issuer: "http://10.148.199.140:8080/auth/realms/test"
      jwksUri: "http://10.148.199.140:8080/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
should the token be propagated to backend-1 and backend-2 without any other changes?
Yes, the policy should transfer the token to both backend-1 and backend-2.
There is a GitHub issue where users had the same issue as you.
A few pieces of information from there:
The JWT is verified by an Envoy filter, so you'll have to check the Envoy logs. For the code, see https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth
Pilot retrieves the JWKS to be used by the filter (it is inlined into the Envoy config), you can find the code for that in pilot/pkg/security
And another problem with that on Stack Overflow,
where the accepted answer is:
The problem was resolved with two options: 1. Replace Service Name and port by external server ip and external port (for issuer and jwksUri) 2. Disable the usage of mTLS and its policy (Known issue: https://github.com/istio/istio/issues/10062).
From the Istio documentation:
For each service, Istio applies the narrowest matching policy. The order is: service-specific > namespace-wide > mesh-wide. If more than one service-specific policy matches a service, Istio selects one of them at random. Operators must avoid such conflicts when configuring their policies.
To enforce uniqueness for mesh-wide and namespace-wide policies, Istio accepts only one authentication policy per mesh and one authentication policy per namespace. Istio also requires mesh-wide and namespace-wide policies to have the specific name default.
If a service has no matching policies, both transport authentication and origin authentication are disabled.
Istio now supports header propagation; it probably didn't when this thread was created.
You can allow the original header to be forwarded by using forwardOriginalToken: true in JWTRules or forward a valid JWT payload using outputPayloadToHeader in JWTRules.
Reference: ISTIO JWTRule documentation
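A minimal sketch of how that can look with the newer RequestAuthentication API (the issuer and jwksUri are carried over from the question; the resource name and the selector label are assumptions):
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-forward
  namespace: default
spec:
  selector:
    matchLabels:
      app: web-app                 # assumed label on the web-app pods
  jwtRules:
  - issuer: "http://myurl/auth/realms/test"
    jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
    forwardOriginalToken: true     # keep the Authorization header on the request forwarded upstream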

How to know the Ambassador service prefix at runtime in my microservice in Kubernetes

Is there a way to learn the Ambassador service prefix at runtime in my microservice in Kubernetes?
Taking this config example:
---
apiVersion: ambassador/v1
kind: Mapping
name: myservice_get_mapping
prefix: /myprefix/
service: myservice
From within my Docker container, I would like to get the '/myprefix/' value, either via some environment variable on the deployment or programmatically if it can't be done with an environment variable.
Thanks.
Given that an Ambassador Mapping resource associates REST resources with Kubernetes services, you can fetch the annotation metadata via JSONPath and then parse the prefix: field, if I understand your question correctly.
Example for k8s service from Ambassador documentation:
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-ui_mapping
      prefix: /test/
      service: http://tour
spec:
  ports:
  - name: httpbin
    port: 80
Command line to fetch prefix: field value:
$ kubectl get svc httpbin -o jsonpath='{.metadata.annotations}'| grep -w "prefix:"| awk '{print $2}'
/test/
Update:
Alternatively, you might also consider retrieving the same result through a direct call to the REST API using Bearer token authentication:
curl -k -XGET -H "Authorization: Bearer $MY_TOKEN" 'https://<API-server_IP>/api/v1/namespaces/default/services/httpbin' -H "Accept: application/yaml" | grep -w "prefix:" | awk '{print $2}'
The $MY_TOKEN variable has to be supplied with an appropriate token that is entitled to perform the above query against the REST API, as I've already pointed out in my former answer.
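A minimal sketch of doing the same thing from inside the pod, assuming the pod's service account is allowed to GET services in its namespace (the service name and namespace are taken from the httpbin example above):
# read the mounted service account token and query the API server in-cluster
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/yaml" \
  https://kubernetes.default.svc/api/v1/namespaces/default/services/httpbin \
  | grep -w "prefix:" | awk '{print $2}'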