Why do I get a "Forbidden" status when trying to redirect calls, with HAProxy, to CRC installed in a CentOS Docker container?

I have this scenario:
a Host machine running Debian that runs Docker containers.
a CentOS Docker container that has CodeReady Containers (CRC) installed on it. CRC works inside the container, via the command line, without problems.
I want to access, from the Host machine, the CRC web console that runs on https://console-openshift-console.apps-crc.testing (on a specific IP in the container's hosts file).
I found this RedHat guide for accessing CRC remotely.
Applied to Docker containers, I made the following changes to haproxy.conf:
global
    log 127.0.0.1 local0
    debug

defaults
    log global
    mode http
    timeout connect 5000
    timeout check 5000
    timeout client 30000
    timeout server 30000

frontend apps
    bind CONTAINER_IP:80
    bind CONTAINER_IP:443
    option tcplog
    mode tcp
    default_backend apps

backend apps
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check

frontend api
    bind CONTAINER_IP:6443
    option tcplog
    mode tcp
    default_backend api

backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check
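Before (re)starting HAProxy it is worth validating the file with HAProxy's built-in config check; a quick sanity test, assuming the file is installed as /etc/haproxy/haproxy.cfg:
$ haproxy -c -f /etc/haproxy/haproxy.cfg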
I enabled forwarding for the container:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
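To keep forwarding enabled across restarts, one option is persisting the setting; a sketch, assuming a standard sysctl setup:
$ echo 'net.ipv4.conf.all.forwarding=1' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p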
and also started CRC behind a proxy:
$ crc config set http-proxy http://example.proxy.com:<port>
$ crc config set https-proxy http://example.proxy.com:<port>
$ crc config set no-proxy <comma-separated-no-proxy-entries>
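Purely as an illustration (these values are assumptions, not taken from the guide), the no-proxy list would typically cover at least the local addresses and the CRC test domains:
$ crc config set no-proxy localhost,127.0.0.1,.crc.testing,.apps-crc.testing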
I can successfully call the URL https://console-openshift-console.apps-crc.testing from the Host machine (which has dnsmasq properly configured as its DNS resolver)!!!
But I get this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Notes:
when CRC starts I get a warning: WARN Wildcard DNS resolution for apps-crc.testing does not appear to be working (see the dnsmasq sketch after these notes)
even trying to log in with oc from the Host machine via the command line fails with a "Forbidden" status error: Error from server (InternalError): Internal error occurred: unexpected response: 403.
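For completeness, the wildcard resolution that warning refers to is provided on the Host by dnsmasq entries of this shape (CONTAINER_IP being the same placeholder as in the HAProxy config above):
address=/apps-crc.testing/CONTAINER_IP
address=/api.crc.testing/CONTAINER_IP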
Where is the problem? I can't figure it out.
For those interested, this is the project's Git repository on GitHub.

This message means that the user "system:anonymous" does not have permission to access the cluster. Have you logged in to the CRC cluster as described in the documentation?
3.3. Accessing the OpenShift cluster
oc login -u developer https://api.crc.testing:6443

This is the final message when you run crc start:
To access the cluster, first set up your environment by following 'crc oc-env' instructions.
Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p xxxx-xxxx-xxxxx-xxxx https://api.crc.testing:6443'.
To access the cluster, first set up your environment by following 'crc oc-env' instructions.
Therefore, to have the oc client available on the command line, you first have to run:
crc oc-env
Then you have to log in with the oc client. In my installation it was:
oc login -u developer https://api.crc.testing:6443
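Putting it together, the whole flow looks like this (eval is the usual way to apply what crc oc-env prints; the developer password comes from the crc start message above):
$ eval $(crc oc-env)
$ oc login -u developer -p developer https://api.crc.testing:6443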

Related

Unable to access application through minikube tunnel

I'm currently using minikube and I'm trying to access my application by utilizing the minikube tunnel since the service type is LoadBalancer.
I'm able to obtain an external IP when I execute minikube tunnel; however, when I try to open it in the browser it doesn't work. I've also tried Postman and curl; they both don't work.
To add to this, if I shell into the pod I can use curl and it does work. Furthermore, I executed kubectl port-forward and I was able to access my application through localhost.
Does anyone have any idea as to why I'm not being able to access my application even though everything seems to be running correctly?
Your service is probably bound to localhost. Minikube starts the cluster in a VM or in Docker (depending on the driver you are using) that is bound to an external IP, $(minikube ip).
When you run minikube tunnel you are tunneling from the minikube cluster's external IP to the internal IP of the load balancer; the External IP of the LoadBalancer service in Kubernetes goes from "Pending" to an actual internal IP, and something like this should work:
curl -H 'Host: localhost' -v $(minikube ip)
However, it doesn't work in the browser, since in the above command you are sending the request to minikube's IP, not localhost. What I do to make this work is an SSH tunnel like this one:
ssh -i $(minikube ssh-key) docker@$(minikube ip) -L 8008:localhost:80
This maps the LB listener on port 80, in minikube's cluster, to 8008 on localhost. The external IP of the service remains pending, but it works since the kube controller can still find it. If you want to map to local port 80 you will need sudo.
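With the tunnel up, the app should answer on the local port, e.g.:
curl -v http://localhost:8008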
If the version of ssh on your system (the one in your PATH) is less than 8.0, minikube tunnel will silently fail to create the SSH tunnel for some port forwards (e.g. privileged ports).
Open a command prompt as administrator and type where.exe ssh. Navigate to that location in Windows Explorer and right-click on ssh.exe. Choose Properties->Details to see the version.
If this is less than version 8.0, you must upgrade to at least version 8.0 to prevent this silent failure of ssh by minikube tunnel.
After upgrading ssh, ensure that the newer version is the one that will be executed by running where.exe again. If there are two on your system, reorder the paths in your PATH environment variable. Restart your shell or (better) reboot the system so that all processes pick up the PATH changes.
Then try minikube tunnel again. When it is working, you should see an ssh instance in Task Manager for each tunnel that minikube creates.
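A quicker way to check the version from the same prompt is OpenSSH's standard version flag:
ssh -V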
In my case minikube service <serviceName> solved this issue.
For further details, look here in the minikube docs.
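For example, to print the reachable URL without opening a browser (the service name here is hypothetical; --url is a standard minikube flag):
minikube service my-service --url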

Openshift: oc login failed

There is an OpenShift cluster in our organization, and I always get an error when using "oc login" on my computer. However, I can log in successfully from others' computers. The error is the following:
oc login
error: dial tcp: i/o timeout - verify you have provided the correct host and port and that the server is currently running
Thanks
Please make sure you have kubectl installed on macOS first.
Try this:
oc login <web interface url>
for example:
oc login https://192.168.1.100:8443
OR
oc login https://myopenshift.com:8443
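Since dial tcp: i/o timeout is a network-level failure, it can also help to check first whether the API port is reachable at all from your machine; using the example URL above:
curl -kv https://myopenshift.com:8443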

kubectl : Unable to connect to the server : dial tcp 192.168.214.136:6443: connect: no route to host

I recently installed Kubernetes on VMware and configured a few pods. While configuring them, it automatically used the IP of the VMware VM, and I was able to access the application at that time. But then I rebooted the VM and the machine hosting it; during this, the IP of the VM changed, I guess, and now I get the error below when using the command kubectl get pod -n <namespaceName>:
userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
kubectl cluster-info as well as other related commands give the same output.
In VMware Workstation settings, we are using a network adapter with the "share the host IP address" setting. We are not sure if it has any impact.
We also tried adding the entries below to /etc/hosts; it is not working.
127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu
I expect the pods to run again so I can access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround to get the pods back to the running state.
If you use minikube, sometimes all you need is just to restart minikube.
Run:
minikube start
I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.
Below are the steps I took to fix it.
1 ) Check IP of api-server.
This can be verified via the .kube/config file (under the server field) or with: kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system.
2 ) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.
3 ) If port 6443 is open, you should get something related to the certificate, like:
curl: (60) SSL certificate problem: unable to get local issuer certificate
4 ) If port 6443 is not open:
4.A ) SSH into master node.
4.B ) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).
4.C ) Run sudo firewall-cmd --reload.
4.D ) Run sudo firewall-cmd --list-all and you should see port 6443 is updated:
public
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: dhcpv6-client ssh
  ports: 6443/tcp <---- Here
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
The common practice is to copy the config file to the home directory:
sudo cp /etc/kubernetes/admin.conf ~/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also, make sure that the api-server address is valid:
server: https://<master-node-ip>:6443
If not, you can manually edit it using any text editor.
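Alternatively, kubectl can rewrite that field for you; a sketch, assuming the cluster entry is named kubernetes (the kubeadm default):
kubectl config set-cluster kubernetes --server=https://<master-node-ip>:6443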
You need to export the admin.conf file as KUBECONFIG before running the kubectl commands. You may set it as an environment variable:
export KUBECONFIG=<path>/admin.conf
After this you should be able to run kubectl commands. I am hoping that your K8S cluster setup is correct.
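To make this persist across shells, a common pattern (assuming bash, and an admin.conf copied to your home directory; the path is hypothetical) is:
echo 'export KUBECONFIG=$HOME/admin.conf' >> ~/.bashrc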
Last night I had the exact same error installing Kubernetes using this Puppet module: https://forge.puppet.com/puppetlabs/kubernetes
It turns out that an incorrect iptables setting on the master blocks all non-local requests towards the API.
The way I solved it (brute-force solution) was to:
completely remove all installed k8s-related software (also all config files, etcd data, Docker images, mounted tmpfs filesystems, ...)
wipe the iptables completely: https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
reinstall
This is what solved the problem in my case.
There is probably a much nicer and cleaner way to do this (i.e. simply changing the iptables rules to allow access).
If you are getting the error below, then also check the token validity.
Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host
Check your token validity with the command kubeadm token list. If your token has expired, you have to reset the cluster using kubeadm reset and then initialize it again using kubeadm init --token-ttl 0.
Then check the status of the token again using kubeadm token list. Note that here the TTL value will be <forever> and the Expires value will be <never>.
Example:
[root@master1 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9 <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
[Screenshot: Ubuntu 22.04 LTS]
Select the docker-desktop context and run your command again, e.g. kubectl apply -f <myimage.yaml>
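The command-line equivalent of that selection (docker-desktop is the context name Docker Desktop registers) is:
kubectl config use-context docker-desktop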
Run the minikube start command.
The reason behind this is that your minikube cluster with the docker driver stopped when you shut down the system.
To all those who are trying to learn and experiment with Kubernetes using Ubuntu on Oracle VM:
The IP address is assigned to the guest OS/VM based on the network adapter selection. Based on that selection, you need to configure the settings in the Oracle VM network section or your router settings.
See this link for the most common Oracle VM network adapters:
https://www.nakivo.com/blog/virtualbox-network-setting-guide/
I was using a bridged adapter, which puts the VM and the host OS in parallel. So my router was randomly assigning an IP to my VM after every restart, my cluster stopped working, and I got the same exact error message posted in the question.
$ k get pods -A
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
$ systemctl status kubelet
........
........ "Error getting node" err="node \"node\" not found"
The cluster started working again after reserving a static IP address for my VM in the router settings. (If you are using a NAT adapter, you should configure it in the VM network settings.)
When reserving the IP address for your VM, make sure to assign the same old IP address that was used when configuring kubelet.
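One way to see which API server address the client and the kubelet were configured with (kubeadm writes the kubelet's copy to /etc/kubernetes/kubelet.conf):
grep server ~/.kube/config
grep server /etc/kubernetes/kubelet.conf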

dotnet core app api do not keep running on kubernetes

I'm deploying a dotnet core app to a Kubernetes cluster and I'm getting the error "Unable to start Kestrel".
The Dockerfile works fine on my local machine.
Unable to start Kestrel.
System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found.
To generate a developer certificate run 'dotnet dev-certs https'. To trust the certificate (Windows and macOS only) run 'dotnet dev-certs https --trust'.
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.
   at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions)
   at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions, Action`1 configureOptions)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`2 createBinding)
   at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
Unhandled Exception: System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found.
My Dockerfile:
[...build step]
FROM microsoft/dotnet:2.1-aspnetcore-runtime
COPY --from=build-env /app/out ./app
ENV PORT=5000
ENV ASPNETCORE_URLS=http://+:${PORT}
WORKDIR /app
EXPOSE $PORT
ENTRYPOINT [ "dotnet", "Gateway.dll" ]
I expected the application to start successfully, but I'm getting the error "Unable to start Kestrel".
[UPDATE]
I've removed the HTTPS port from the app and tried again without HTTPS, but now the application just starts and stops without any error or warning. Container log below:
Running locally using dotnet run, or building the image and running it in a container, everything works. The application just shuts down in Kubernetes.
I am using dotnet core 2.2.
[UPDATE]
I've generated a cert, added it to the project, and set it up in Kestrel, and I got the same result. On localhost using the Docker image it works, but in Kubernetes (Google Cloud) it just shuts down immediately after starting.
localhost:
$ docker run --rm -it -p 5000:5000/tcp -p 5001:5001/tcp juooo:latest
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {f7808ac5-0a0d-47d0-86cb-c605c2db84a3} may be persisted to storage in unencrypted form.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Overriding address(es) 'https://+:5001, http://+:5000'. Binding to endpoints defined in UseKestrel() instead.
Hosting environment: Production
Content root path: /app
Now listening on: https://0.0.0.0:5001
Application started. Press Ctrl+C to shut down.
I found an event log with a Kubernetes error saying that Kubernetes was unable to hit (:5000/). So I created a controller serving the application root (because it's an API, it didn't have a root route like a web app) and it worked.
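For reference, a health check of that shape usually comes from a liveness/readiness probe in the Deployment (or from the GCP load balancer); a minimal probe sketch, where everything except the path and port from the question is hypothetical:
livenessProbe:
  httpGet:
    path: /      # the path Kubernetes was hitting
    port: 5000   # the app port exposed in the Dockerfile
  initialDelaySeconds: 10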
The problem seems to be that the SSL certificate is not correctly configured when creating the Docker image. On the dev machine it uses the developer certificate; on other machines, however, it should be stored somewhere. Check this.
I am pretty sure you need to open up a firewall rule to run on any port other than 80, something like:
gcloud compute firewall-rules create test-node-port --allow tcp:5000
Taken from the Kubernetes how-to located here: https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps

exposing api via secure gateway

I want to expose one blue zone API to external customers via Secure Gateway. I am using Docker as the client, but I always get the errors below (the API server is in the DST environment). Can anyone help me with this? I have added the host name and port to the ACL file. Also, I tried adding --allow when I run Docker; it disables 'deny all'.
[INFO] (Client ID d83dty5MIJA_rVI) Connection #2 is being established to ralbz001234.cloud.dst.ibm.com:8888
[2017-09-06 20:59:19.210] [ERROR] (Client ID d83dty5MIJA_rVI) Connection #1 to destination ralbz001234.cloud.dst.ibm.com:8888 had error: EHOSTUNREACH
When I added the Secure Gateway, in the "Resource Located" field I chose On-Premises. Is this correct?
EHOSTUNREACH is an issue with the underlying system not being able to find a route to the host you've provided. From the machine hosting the docker client, are you able to access the resource located at ralbz001234.cloud.dst.ibm.com:8888? If the host is able to connect, then you could try adding --net=host to the docker run command:
docker run --net=host -it ibmcom/secure-gateway-client <gatewayID> -t <security_token> --allow
If the host is unable to connect as well, then this post may shed more light on routing.
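To test reachability from the Docker host first, a quick check (assuming netcat is available):
nc -vz ralbz001234.cloud.dst.ibm.com 8888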