I'm using MinIO in Kubernetes and it works great. However, I can't seem to change the domain and protocol for a pre-signed URL. MinIO keeps giving me http://minio.test.svc:9000/delivery/ whereas I want https://example.com/delivery. I've tried setting MINIO_DOMAIN in the pod but it seems to have no effect; I think I'm misusing this variable anyway.
It all depends on how you create your MinIO client instance. Specifying host and port as below will make MinIO resolve your domain to an IP address and use the IP rather than the domain. Sample JavaScript code:
import { Client as MinioClient } from 'minio';
const client = new MinioClient({
  endPoint: 'yourdomain.com',
  port: 9000,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
  useSSL: false
});
If you create your MinIO client like above, your domain will be resolved to its corresponding IP address, and thus MinIO will work with http://x.x.x.x:9000 as opposed to https://yourdomain.com.
Also note that if your client is configured as above, trying to use useSSL: true will throw an SSL error like the one below:
write EPROTO 140331355002752:error:1408F10B:SSL routines:ssl3_get_record:wrong
version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332
For MinIO to use your domain as https://yourdomain.com, you need a web server like nginx to proxy your requests to your MinIO server. MinIO has documented how you can achieve this here. Add SSL to your domain as documented here, then proceed to create your MinIO client as below:
import { Client as MinioClient } from 'minio';
const client = new MinioClient({
  endPoint: 'yourdomain.com',
  port: 443,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
  useSSL: true
});
Note the change in the port and useSSL parameters.
MinIO will now use https://yourdomain.com in all cases. Pre-signed URLs will also use HTTPS.
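Since the question runs MinIO inside Kubernetes, the nginx proxy can also take the form of an Ingress in front of the MinIO service. A minimal sketch, assuming an ingress-nginx controller, a Service named minio listening on 9000 and a TLS secret example-com-tls for example.com (all of these names are placeholders, not taken from the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minio-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # MinIO objects can be large, so lift ingress-nginx's default 1m body limit
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls   # certificate + key for example.com
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: minio     # the in-cluster MinIO Service
              servicePort: 9000

With TLS terminated at the ingress, create the client with endPoint: 'example.com', port: 443 and useSSL: true as shown above, and the pre-signed URLs it generates will start with https://example.com.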
I bashed my head on this problem for a couple of days and managed to resolve it with NGINX in my Kubernetes cluster.
NGINX controller Kubernetes: need to change Host header within ingress
You use the ingress annotations to change the Host header of all incoming traffic to your MinIO ingress so that it is always the same host name.
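For example, with ingress-nginx the upstream-vhost annotation sets the Host header that nginx passes on to the backend; a sketch (the host value is a placeholder):

metadata:
  annotations:
    # Rewrite the Host header sent to the MinIO backend so that pre-signed
    # URL signatures are always computed against the same host name
    nginx.ingress.kubernetes.io/upstream-vhost: "example.com"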
I have an ingress-nginx controller exposed via a private NLB (Network Load Balancer). I want to enable host whitelisting on ingress-nginx.
My use case is to allow requests from VPC1 to VPC2, and only requests coming from VPC1 should be allowed through this private nginx. For this I used the annotation below:
nginx.ingress.kubernetes.io/whitelist-source-range
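For example (the CIDR is a placeholder standing in for VPC1's range):

metadata:
  annotations:
    # Only allow clients whose source IP falls inside VPC1's CIDR block
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16"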
The problem I got from this is that ingress-nginx was not receiving the client's real IP. After doing some research I found out that I have to enable proxy protocol on the NLB. To do this I added the following annotations and configuration.
annotations:
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
  service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
  service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
  service.beta.kubernetes.io/aws-load-balancer-type: nlb
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true

metrics:
  enabled: true

config:
  use-proxy-protocol: "true"
  real-ip-header: "proxy_protocol"
To be precise, I added only this part:

config:
  use-proxy-protocol: "true"
  real-ip-header: "proxy_protocol"

and this annotation:

service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
I also tried this annotation with the same config:
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
The error I'm receiving is
broken header: "" while reading PROXY protocol, client: xx.xx.xx.xx
I'm not able to figure out what I'm doing wrong. Any help is greatly appreciated.
Update 1:
I checked in the AWS console; proxy protocol was not enabled by this annotation. When I enabled it manually, everything worked. But I don't understand why this isn't working. Is it related to the version of ingress-nginx I'm using?
To enable proxy protocol and make the annotation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true work,
we have to use the AWS Load Balancer Controller. With the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb, Kubernetes uses its own in-tree load balancer controller, which does not support this annotation.
To use the AWS Load Balancer Controller we need to add these annotations:
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance or ip (depending on your use case)
For more details, please refer to this documentation.
If you want to dive deeper and check the Kubernetes codebase, please follow this link.
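A sketch of how the controller Service annotations could look once the AWS Load Balancer Controller is in charge (the target type and scheme values are assumptions matching the private-NLB use case described above):

annotations:
  service.beta.kubernetes.io/aws-load-balancer-type: external
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip          # or "instance"
  service.beta.kubernetes.io/aws-load-balancer-scheme: internal             # private NLB
  service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true

The ingress-nginx side (use-proxy-protocol: "true", real-ip-header: "proxy_protocol") stays the same as in the question.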
I have a website running inside a Kubernetes cluster.
I can access it locally, but I want to make it available over the internet (I have a registered domain). However, the external IP keeps pending.
I worked with this instruction: https://dev.to/peterj/expose-a-kubernetes-service-on-your-own-custom-domain-52dd
This is the code for the service and ingress
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: website
  ports:
    - name: http
      protocol: TCP
      port: 3000
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: www.carina.bernrieder.de
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service
              servicePort: 3000
So I'm using Helm to install the nginx controller, but after that, kubectl get all shows that the EXTERNAL-IP of the nginx controller keeps pending.
EXTERNAL-IP is expected to be pending in a non-cloud environment such as Minikube. You should be able to access the application using curl www.carina.bernrieder.de.
Here is a guide on using nginx ingress to expose an application on Minikube.
As @Arghya Sadhu mentioned, in a local environment this is the expected behaviour. Maybe it will be easier to understand when you look a bit more deeply at how it works in cloud environments. Without going into details, if you apply an Ingress resource on GKE, EKS or AKS, a few more things happen "under the hood". A load balancer with an external IP is automatically created so your ingress can use it to forward external traffic to Pods deployed on your Kubernetes cluster.
Minikube doesn't have such capabilities, as it cannot call any API to have additional infrastructure resources created, as happens in cloud environments.
But let's start from the beginning. You didn't mention anything in your question about your external IP or domain configuration. If you don't have an external static IP to which your domain points, it has no chance of working anyway.
As to this point, I won't fully agree:
You should be able to access the application using curl
www.carina.bernrieder.de
Yes, you will be able to access it via your domain (actually via any domain that you don't even need to own), provided you add the following entry to your /etc/hosts file, so that DNS isn't used and the name is resolved based on this locally defined mapping:
172.17.0.15 www.carina.bernrieder.de
As you can read here:
Note: If you are running Minikube locally, use minikube ip to get the
external IP. The IP address displayed within the ingress list will be
the internal IP.
But keep in mind that both of those IPs will be private IPs. The one displayed in the ingress list will be the internal cluster IP, and the other one is external only from your Minikube cluster's perspective. It is still an IP in your local network, assigned to your Minikube VM.
And as you said in your question, you want to make it available over the Internet. As you can see, it has no chance of working without additional configuration.
Another important thing: you didn't mention where your Minikube is actually installed, so I guess you set it up on your local computer and most probably you're behind a NAT router. If that's your case, it won't be so easy to expose it on the public internet. You will need to configure proper port forwarding rules on your router, and of course you need a static IP, or you need to configure dynamic DNS to be able to reach your computer on the Internet via your dynamic public IP.
Minikube was designed mainly for experimenting with Kubernetes locally, not for production environments. Of course you can use it to run your small app, but then you may want to think about installing it on a VM in a cloud environment or on some sort of VPS.
I have been trying to create an HTTPS endpoint in a Google Cloud K8s environment.
I have built a Flask application in Python that is served by the Waitress production server on port 5000.
serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30)
I created a Dockerfile and pushed the image to the Google Cloud repository. Then I created a Kubernetes cluster with one workload containing this image. After that, I exposed it via an external IP by creating a LoadBalancer. (After pushing the image to the Google repository, everything is managed through the Google Cloud Console. I do not have any configuration file; it should all be done through the Google Cloud Console.)
Now, I do have an exposed IP and port number to access my application. Let's say this IP address and the port is: 11.111.11.222:1111. Now, I can access this IP via Postman and get a result.
My goal, if it is possible, is to also expose this IP address via HTTPS, by using any Google Cloud resources (redirection, creating an ingress, etc.).
So, in the end, I want to reach the application through both http://11.111.11.222:1111 and https://11.111.11.222:1111.
Any suggestions?
A LoadBalancer translates to a network load balancer. You can configure multiple ports for it, e.g. 80 and 443, but then your application must handle the TLS part.
The Ingress resource creates an HTTP(S) load balancer.
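A minimal sketch of that multi-port LoadBalancer Service (names and the second target port are illustrative; with this approach the container itself has to terminate TLS):

apiVersion: v1
kind: Service
metadata:
  name: flask-lb
spec:
  type: LoadBalancer
  selector:
    app: flask              # must match your pod labels
  ports:
    - name: http
      port: 80
      targetPort: 5000      # plain HTTP served by waitress
    - name: https
      port: 443
      targetPort: 5443      # hypothetical port where the app itself serves TLS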
From the GKE perspective you can try to configure Ingress resource with HTTPS enabled:
Steps:
Create a basic flask app inside a pod (for example purposes only)
Expose the app via a Service object of type NodePort
Create a certificate
Create an Ingress resource
Test
Additional information (added by EDIT)
Create a basic flask app inside a pod (for example purposes only)
Below is a flask script which will respond with <h1>Hello!</h1>:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "<h1>Hello!</h1>"

if __name__ == "__main__":
    from waitress import serve
    serve(app, host="0.0.0.0", port=8080)
By default it will respond on port 8080.
Link to an answer with above script.
Expose the app via a Service object of type NodePort
Assuming that the deployment is configured correctly with a working app inside, you can expose it via a Service object of type NodePort with the following YAML definition:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort
  selector:
    app: ubuntu
  ports:
    - name: flask-port
      protocol: TCP
      port: 80
      targetPort: 8080
Please make sure that:
the selector is configured correctly
targetPort points to the port that the app is running on
Create a certificate
For the Ingress object to work with HTTPS you will need to provide a certificate. You can create it by following the official GKE documentation: Cloud.google.com: Managed certificates
Be aware that you will need a domain name to do that.
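A sketch of such a certificate object (the resource comes from the GKE managed-certificates CRD; the apiVersion may differ between GKE versions, and DOMAIN.NAME is a placeholder):

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: flask-certificate    # referenced by the Ingress annotation below
spec:
  domains:
    - DOMAIN.NAME            # must be a domain you control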
Create an Ingress resource
Below is an example Ingress resource which will point your requests to your flask application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-ingress
  annotations:
    networking.gke.io/managed-certificates: flask-certificate
    kubernetes.io/ingress.global-static-ip-name: flask-static-ip
spec:
  rules:
    - host: DOMAIN.NAME
      http:
        paths:
          - path: /
            backend:
              serviceName: flask-service
              servicePort: flask-port
Please take a specific look at the part of the YAML definition below and change it according to your case:
networking.gke.io/managed-certificates: flask-certificate
kubernetes.io/ingress.global-static-ip-name: flask-static-ip
Please wait for everything to be configured correctly.
After that you will have access to your application via your domain name on ports:
80 (HTTP)
443 (HTTPS)
Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
-- Kubernetes.io: Ingress TLS
Test
You can check if the above steps are configured correctly by:
entering https://DOMAIN.NAME in your web browser and checking if it responds with Hello over HTTPS
using a tool like curl: curl -v https://DOMAIN.NAME
Please let me know if this solution works for you.
Additional information (added by EDIT)
You can try to configure a Service object of type LoadBalancer, which will operate at layer 4, as @Florian said in his answer.
Please refer to the official documentation: Kubernetes.io: Create external load balancer
You can also use the NGINX Ingress controller and either:
Expose a TCP/UDP service by following: Kubernetes.github.io: Ingress nginx: Exposing tcp udp services, which will operate at L4 (a sketch follows after this list).
Create an Ingress resource that has SSL passthrough configured by following: Kubernetes.github.io: Ingress nginx: Ssl passthrough
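A rough sketch of the first option: ingress-nginx reads a ConfigMap (referenced by the controller's --tcp-services-configmap flag) that maps an exposed port to namespace/service:port. The flask-service mapping below is illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # expose port 5000 on the controller and forward it to flask-service port 80
  "5000": "default/flask-service:80"

The controller's own Service also has to expose the same port for traffic to reach it.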
After researching, I found the answer in Google Cloud Run. It is very simple to deploy an HTTP-based Flask app in a container, e.g. serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30) (no need for a self-signed certificate or HTTPS in this part, just make sure the HTTP app works), and then push it to Cloud Run.
Adjust the service parameters depending on how many resources you need to run it. In the machine settings, set the port that you are using in the Docker container so it gets mapped; for instance, in my case it is 5000. When you create the service, Google provides you with a domain address with HTTPS. You can use that URL to access your resources.
That's it!
For more information on Cloud Run:
https://cloud.google.com/serverless-options
The differences between computing platforms: https://www.signalfx.com/blog/gcp-serverless-comparison/
I am exploring the Istio service mesh on my k8s cluster hosted on EKS (Amazon).
I tried deploying istio-1.2.2 on a new k8s cluster with the demo.yml file used for the bookapp demonstration, and I understood most of the use cases properly.
Then I deployed Istio using the Helm default profile (recommended for production) on my existing dev cluster with hundreds of microservices running, and what I noticed is that my services can call HTTP endpoints but are not able to call external secure endpoints (https://www.google.com, etc.).
I am getting:
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong
version number
Though I am able to call external HTTPS endpoints from my testing cluster.
To verify, I checked the egress policy, and it is mode: ALLOW_ANY in both clusters.
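For reference, by "egress policy" I mean this fragment of the mesh configuration (held in the istio ConfigMap in the istio-system namespace):

# ALLOW_ANY lets the sidecars pass traffic to unknown external hosts
# through without requiring a ServiceEntry
outboundTrafficPolicy:
  mode: ALLOW_ANY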
Now, I removed Istio completely from my dev cluster and installed the demo.yml to test, but now this is also not working.
I tried to relate my issue to this, but didn't get any success:
https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044
I don't understand what I am missing or what I am doing wrong.
Note: I am referring to this setup: https://istio.io/docs/setup/kubernetes/install/helm/
This is most likely a bug in Istio (see for example istio/istio#14520): if you have any Kubernetes Service object, anywhere in your cluster, that listens on port 443 but whose port name starts with http (not https), it will break all outbound HTTPS connections.
The instance of this I've hit involves configuring an AWS load balancer to do TLS termination. The Kubernetes Service needs to expose port 443 to configure the load balancer, but it receives plain unencrypted HTTP.
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
    - name: http-ssl # <<<< THIS NAME MATTERS
      port: 443
      targetPort: http
When I've experimented with this, changing that name: to either https or tcp-https seems to work. Those name prefixes are significant to Istio, but I haven't immediately found any functional difference between telling Istio the port is HTTPS (even though it doesn't actually serve TLS) vs. plain uninterpreted TCP.
You do need to search your cluster and find every Service that listens on port 443, and make sure its port name doesn't start with http-....
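For illustration, the corrected port block from the Service above (either rename works, per the experiments described):

ports:
  - name: tcp-https   # "https" also works; it is the "http-" prefix that breaks things
    port: 443
    targetPort: http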
I've configured an NGINX Ingress to use SSL. This works fine, but I'm trying to use hostAliases to route all requests from one domain back to my cluster, and it's failing with the following error:
Error: unable to verify the first certificate
My alias:
hostAliases:
  - ip: "MY.CLUSTER.IP"
    hostnames:
      - "my.domain.com"
Is there a way to use this aliasing and still get SSL working?
According to this, a host alias is just a record in the /etc/hosts file, which overrides the "usual" DNS resolution. If I understand your issue correctly, in this case you just need to have a valid TLS certificate for "my.domain.com" installed on MY.CLUSTER.IP.
Hope it helps.
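A minimal sketch of what that looks like with an NGINX Ingress terminating TLS for that hostname (the secret and backend names are placeholders; the secret must be of type kubernetes.io/tls and hold a certificate valid for my.domain.com):

spec:
  tls:
    - hosts:
        - my.domain.com
      secretName: my-domain-com-tls   # certificate + private key for my.domain.com
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service   # placeholder backend Service
              servicePort: 80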