I have SonarQube running in an Azure Container Instance that is not in a container registry. I'm trying to change the FQDN from HTTP to HTTPS; however, all of the examples I see only provide instructions for using a registry. Is there a way to do this without using an ACR?
Yes, this is possible without using ACR, by enabling SSL connections in a sidecar container.
ACI does not have built-in support for HTTPS. To enable an SSL connection, you need a web server in your container group with the required certs (see the sidecar sketch below), or you can front your container with an Application Gateway. You can also consider using App Service or Kubernetes to achieve this.
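For illustration, here is a minimal sketch of an ACI container-group YAML where an nginx sidecar terminates TLS and proxies to SonarQube over localhost. The names, image tags, and the secret volume contents are assumptions; you would base64-encode your own nginx.conf, certificate, and key.

# Sketch only: ACI container group with a TLS-terminating nginx sidecar.
# All names and values are illustrative placeholders.
apiVersion: 2019-12-01
location: eastus
name: sonarqube-tls-group
properties:
  containers:
  - name: sonarqube
    properties:
      image: sonarqube:lts
      ports:
      - port: 9000              # reached by the sidecar via localhost
      resources:
        requests:
          cpu: 2.0
          memoryInGB: 4.0
  - name: nginx-tls-sidecar
    properties:
      image: nginx              # assumed image
      ports:
      - port: 443
      resources:
        requests:
          cpu: 0.5
          memoryInGB: 0.5
      volumeMounts:
      - name: nginx-config      # holds nginx.conf, cert, and key
        mountPath: /etc/nginx
  ipAddress:
    type: Public
    ports:
    - protocol: TCP
      port: 443
    dnsNameLabel: my-sonarqube  # your existing FQDN label
  osType: Linux
  volumes:
  - name: nginx-config
    secret:
      nginx.conf: <base64 of an nginx.conf proxying 443 -> localhost:9000>
      ssl.crt: <base64 of your certificate>
      ssl.key: <base64 of your private key>
type: Microsoft.ContainerInstance/containerGroups

With this shape, only port 443 is exposed publicly; the HTTP listener on 9000 stays inside the container group.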
For security guidance, see the Azure security baseline for Container Instances.
See also a similar question on Stack Overflow, which has more information.
Reference : https://learn.microsoft.com/en-us/answers/questions/50827/container-instance-dns-using-http-and-not-https.html
Does anyone know the pros and cons for installing the CloudSQL-Proxy (that allows us to connect securely to CloudSQL) on a Kubernetes cluster as a service as opposed to making it a sidecar against the application container?
I know that it is mostly used as a sidecar. I have used it both ways (in non-production environments), but I never understood why the sidecar is preferable to a service. Can someone enlighten me please?
The sidecar pattern is preferred because it is the easiest and most secure option. Traffic to the Cloud SQL Auth proxy is not encrypted or authenticated, and relies on the user to restrict access to the proxy (typically by running on localhost).
When you run the Cloud SQL proxy, you are essentially saying "I am user X and I'm authorized to connect to the database". When you run it as a service, anyone who connects through it is connecting authorized as "user X".
You can see this warning in the Cloud SQL proxy example running as a service in k8s, or watch this video on Connecting to Cloud SQL from Kubernetes which explains the reason as well.
The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP. This is because the Cloud SQL Auth proxy provides strong encryption and authentication using IAM, which can help keep your database secure.
When you connect using the Cloud SQL Auth proxy, the Cloud SQL Auth proxy is added to your pod using the sidecar container pattern. The Cloud SQL Auth proxy container is in the same pod as your application, which enables the application to connect to the Cloud SQL Auth proxy using localhost, increasing security and performance.
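As a concrete illustration, a pod spec with the proxy sidecar might look like the following sketch. The image tag, the PROJECT:REGION:INSTANCE string, and the application details are placeholders, not values from this thread.

# Sketch: application container plus Cloud SQL Auth proxy sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:latest          # your application image (placeholder)
    env:
    - name: DB_HOST
      value: "127.0.0.1"          # talk to the proxy over localhost
    - name: DB_PORT
      value: "5432"
  - name: cloud-sql-proxy
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0  # assumed tag
    args:
    - "--port=5432"
    - "<PROJECT:REGION:INSTANCE>" # placeholder connection name
    securityContext:
      runAsNonRoot: true

Because the proxy only listens on the pod's loopback interface, nothing outside the pod can reach it, which is exactly the access restriction the warning above is about.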
A sidecar is a container that runs in the same Pod as the application container; because it shares the same volumes and network as the main container, it can "help" or enhance how the application operates. In Kubernetes, a pod is a group of one or more containers with shared storage and network, and a sidecar is a utility container in a pod that is loosely coupled to the main application container.
Sidecar pros: scales indefinitely as you increase the number of pods; can be injected automatically; already used by service meshes.
Sidecar cons: a bit difficult to adopt, as developers can't just deploy their app but must deploy a whole stack in a deployment; it consumes more resources; and it is harder to secure, because every Pod must run its own copy of the sidecar (for example, a log aggregator pushing logs to a database or queue).
Refer to the documentation for more information.
I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
The tutorial is very poor in explanations. I'm stuck at this step. Can someone tell me what these fields are and what I should put in them?
I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
For the domain, you can likely use whatever you'd like; just be aware that when GitLab generates links that are designed to point to itself, they won't resolve. You can work around that with something like dnsmasq or by editing /etc/hosts, if it's important to you.
For the externalIP, that will be what minikube ip emits, and it is the IP through which you will communicate with GitLab (since you will not be able to use the Pods' IP addresses outside of minikube). If GitLab does not use a Service of type NodePort, you're in for some more hoop-jumping to expose those ports via minikube's IP.
The certmanager-issuer.email you can just forget about, because it 100% will not get you a Let's Encrypt cert while running on minikube, unless they have fixed cert-manager to use the dns01 challenge. For Let's Encrypt to issue you a cert, they have to connect to the web server for which they are issuing the cert, and (as you might guess) they will not be able to connect to your minikube IP. If you want to experience SSL on your GitLab instance, issue the instance a self-signed cert and call it a draw.
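Putting those three together, a minimal values sketch for the chart might look like this; the domain, IP, and email are placeholders, and the IP should be whatever minikube ip prints on your machine.

# Sketch of Helm values for the GitLab chart on minikube; all values illustrative.
global:
  hosts:
    domain: gitlab.example.local   # any domain; self-referencing links won't resolve
    externalIP: 192.168.99.100     # substitute the output of `minikube ip`
certmanager-issuer:
  email: you@example.com           # required by the chart, but Let's Encrypt
                                   # cannot reach minikube, so no cert is issued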
The tutorial is very poor in explanations.
That's because what you are trying to do is perilous; minikube is not designed to run an entire gitlab instance, for the above and tens of other reasons. Google Cloud Platform offers generous credits to kick the tires on kubernetes, and will almost certainly have all the things you would need to make that stuff work.
The goal is to enable the Kubernetes API server to connect to resources on the internet when it is on a private network from which internet resources can only be accessed through a proxy.
Background:
A Kubernetes cluster is spun up using Kubespray, containing two API server instances that run on two VMs and are controlled via a manifest file. Azure AD is used as the identity provider for authentication. For this to work, the API server needs to initialize its OIDC component by connecting to Microsoft and downloading keys that are used to verify tokens issued by Azure AD.
Since the Kubernetes cluster is on a private network and needs to go through a proxy to reach the internet, one approach was to set https_proxy and no_proxy in the Kubernetes API server container environment by adding them to the manifest file. The problem with this approach is that, when using Istio to manage access to APIs, no_proxy needs to be updated whenever a new service is added to the cluster. One solution could have been to add a suffix to every service name and set *.suffix in no_proxy. However, it appears that wildcards in the no_proxy configuration are not supported.
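For reference, the proxy approach described above amounts to something like this fragment of the kube-apiserver static pod manifest; the proxy address and the exclusion list are placeholders.

# Fragment of a kube-apiserver static pod manifest (sketch; values illustrative).
spec:
  containers:
  - name: kube-apiserver
    env:
    - name: HTTPS_PROXY
      value: "http://proxy.corp.example:3128"   # internet-connected proxy
    - name: NO_PROXY
      # every in-cluster destination must be listed explicitly, which is
      # the maintenance burden described above
      value: "127.0.0.1,10.233.0.0/16,kubernetes.default.svc"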
Is there any alternate way for the Kubernetes API server to reach Microsoft without interfering with other functionality?
Please let me know if any additional information or clarifications are needed.
I'm not sure how you would have Istio manage the egress traffic for your Kubernetes masters where your kube-apiservers run, so I wouldn't recommend it. As far as I understand, Istio is generally used to manage (ingress/egress/lb/metrics/etc) actual workloads in your cluster and these workloads generally run on your nodes, not masters. I mean the kube-apiserver actually manages the CRDs that Istio uses.
Most people run Docker on their masters, so you can use the proxy environment variables for your containers as you mentioned.
We tried a couple of solutions to avoid having to set http(s)_proxy and no_proxy env variables in the kube-apiserver and constantly whitelist new services in the cluster...
Option 1: introduce a self-managed proxy server that determines which traffic is forwarded to an internet-connected proxy and which traffic is not proxied:
Squid proxy seemed to do the trick by defining some ACLs. One issue we had was that node names were not resolved by kube-dns, so we had to add manual entries to the containers' hosts files (not sure how these were handled by default).
We also tried writing a proxy using Node, but it had trouble with HTTPS in some scenarios.
Option 2: introduce a self-managed identity provider between Azure and our k8s cluster, configured to use the internet-connected proxy, thus avoiding having to configure the proxy in the kube-apiserver.
We ended up going with option 2, as it gave us more flexibility in the long term.
I am trying to run ejabberd on Google Kubernetes Engine. As I am using a DaemonSet as the Kubernetes resource to deploy and manage the ejabberd pods, I need to set up a custom health check (which must receive status code 200 to be successful) for the ejabberd container. (:5280/admin doesn't work because of the basic auth there; :5222 and :5269 send responses that libcurl cannot parse, so neither works.)
I tried to configure the ejabberd API and point a custom health check at an API URL, but that isn't secure and needs more configuration.
Has anyone run into this problem, and what can be done to solve it?
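For context, a non-HTTP container health check is sometimes expressed as an exec probe against ejabberdctl. The fragment below is a sketch only (timings are assumptions), and it satisfies a Kubernetes probe rather than a load-balancer check that insists on an HTTP 200.

# Sketch: exec-based liveness probe for an ejabberd container spec.
livenessProbe:
  exec:
    command: ["ejabberdctl", "status"]   # exits non-zero if the node is down
  initialDelaySeconds: 60
  periodSeconds: 30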
I created an ACS (Azure Container Service) cluster using Kubernetes by following this link: https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-windows-walkthrough and deployed my .NET 4.5 app by following this link: https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-ui . My app needs to access Azure SQL and other resources that are part of other resource groups in my account, but my container is not able to make any outbound network calls, either inside Azure or to the internet. I opened some ports to allow outbound connections; that is not helping either.
When I create an ACS cluster, does it come with a gateway, or should I create one? How can I configure ACS so that it allows outbound network calls?
Thanks,
Ashok.
Outbound internet access works from an Azure Container Service (ACS) Kubernetes Windows cluster if you are connecting to IP addresses outside the range 10.0.0.0/16 (that is, you are not connecting to another service on your VNET).
Before Feb 22, 2017, there was a bug where internet access was not available.
Please try the latest deployment from ACS-Engine: https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.windows.md, and open an issue there if you still see this, and we (Azure Container Service) can help you debug.
For communication with services running inside the cluster, you can use kube-dns, which allows you to access a service by its name. You can find more details at https://kubernetes.io/docs/admin/dns/
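For example, a Service defined like the sketch below becomes resolvable in-cluster at name.namespace.svc.cluster.local; the names and ports here are illustrative.

# Sketch: a ClusterIP Service named "backend" in namespace "default"
# is resolvable by pods as backend.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080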
For external communication (internet), there is no need to create any gateway. By default, your containers inside a pod can make outbound connections. To verify this, you can run PowerShell in one of your containers and try to run
wget http://www.google.com -OutFile testping.txt
Get-Content testping.txt
and see if it works.
To run PowerShell, SSH to your master node (instructions here) and run:
kubectl exec -it <pod_name> -- powershell