ArgoCD CLI login with server running with --insecure - kubernetes

So I have installed ArgoCD onto my cluster. I then patched it with:
kubectl -n argocd patch deployment argocd-server --type json -p='[ { "op": "replace", "path":"/spec/template/spec/containers/0/command","value": ["argocd-server","--insecure"] }]'
so that I can host it with Contour handling the TLS / SSL certificate. Here's the config for the ingress / Contour:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: argocd
  namespace: argocd
spec:
  virtualhost:
    fqdn: argo.xxx.com
    tls:
      secretName: default/cert
  routes:
  - requestHeadersPolicy:
      set:
      - name: l5d-dst-override
        value: argocd-server.argocd.svc.cluster.local:443
    services:
    - name: argocd-server
      port: 443
    conditions:
    - prefix: /
    loadBalancerPolicy:
      strategy: Cookie
But now I can't log in to the Argo server with the CLI, not even using port-forward (which worked before I patched the server with the 'insecure' flag).
When trying to use port-forward access, I get this:
error creating error stream for port 8080 -> 8080: EOF
Using:
kubectl port-forward svc/argocd-server -n argocd 8080:443
So I have tried as many options / flags as I can think of to log in via the ingress / Contour URL:
argocd login argo.xxx.com --plaintext --insecure --grpc-web
argocd login argo.xxx.com --plaintext --insecure
argocd login argo.xxx.com --plaintext
argocd login argo.xxx.com --insecure --grpc-web
I either get back a 404 or a 502, and sometimes an empty error code:
FATA[0007] rpc error: code = Unavailable desc =
FATA[0003] rpc error: code = Unknown desc = POST http://argo.xxx.com:443/session.SessionService/Create failed with status code 502
FATA[0002] rpc error: code = Unknown desc = POST https://argo.xxx.com:443/argocd/session.SessionService/Create failed with status code 404
Without any flags added to login, this is the error I get back:
FATA[0007] rpc error: code = Internal desc = transport: received the unexpected content-type "text/plain; charset=utf-8"

It might have been a while and this may already be solved, but I had a similar issue.
With ArgoCD version 2.4.14 I was able to resolve it using the command:
argocd login --insecure --port-forward --port-forward-namespace=argocd --plaintext
Username: admin
Password:
'admin:login' logged in successfully
Context 'port-forward' updated
I allowed the CLI to use its selector app.kubernetes.io/name=argocd-server to find the service in the argocd namespace.
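If you want to do the same thing non-interactively, a minimal sketch (assuming the default argocd-initial-admin-secret has not been deleted or rotated) would be:
# Read the initial admin password (assumes the default argocd-initial-admin-secret still exists)
argoPass=$(kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d)
# Log in through the CLI's own port-forward, bypassing the Ingress entirely
argocd login --insecure --plaintext --port-forward --port-forward-namespace=argocd \
  --username admin --password "$argoPass"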

How can I fix 'failed calling webhook "webhook.cert-manager.io"'?

I'm trying to set up a K3s cluster. When I had a single-master-and-agent setup, cert-manager had no issues. Now I'm trying a 2-master setup with embedded etcd. I opened TCP ports 6443 and 2379-2380 for both VMs and did the following:
VM1: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --cluster-init
VM2: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --server https://MASTER_IP:6443
# k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
VM1 Ready control-plane,etcd,master 130m v1.22.7+k3s1
VM2 Ready control-plane,etcd,master 128m v1.22.7+k3s1
Installing cert-manager works fine:
# k3s kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
# k3s kubectl get pods --namespace cert-manager
NAME READY STATUS
cert-manager-b4d6fd99b-c6fpc 1/1 Running
cert-manager-cainjector-74bfccdfdf-gtmrd 1/1 Running
cert-manager-webhook-65b766b5f8-brb76 1/1 Running
My manifest has the following definition:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: info@example.org
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - selector: {}
      http01:
        ingress: {}
Which results in the following error:
# k3s kubectl apply -f manifest.yaml
Error from server (InternalError): error when creating "manifest.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded
I tried disabling both firewalls, waiting a day, and resetting and re-setting everything up, but the error persists. Google hasn't been much help either. The little info I can find mostly goes over my head, and no tutorial seems to do any extra steps.
Try to specify the proper ingress class name in your Cluster Issuer, like this:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: info@example.org
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
Also, make sure that you have the cert-manager annotation and the TLS secret name specified in your Ingress, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  ...
spec:
  tls:
  - hosts:
    - domain.com
    secretName: letsencrypt-account-key
A good starting point for troubleshooting issues with the webhook can be found in the docs, e.g. there is a section for problems on GKE private clusters.
In my case, however, this didn't really solve the problem. For me the issue was that while playing around with cert-manager I happened to install and uninstall it multiple times. It turned out that just removing the namespace, e.g. kubectl delete namespace cert-manager, didn't remove the webhooks and other non-obvious resources.
Following the official guide for uninstalling cert-manager and applying the manifests again solved the issue.
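Before reinstalling, you can check whether such leftover cluster-scoped resources are actually present; a quick sketch of what to look for:
# Cluster-scoped pieces that deleting the cert-manager namespace does not remove
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep cert-manager
kubectl get apiservices | grep cert-manager
kubectl get crds | grep cert-manager.io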
I did this, and it worked for me:
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.8.0 \
  --set webhook.securePort=10260
source: https://hackmd.io/#maelvls/debug-cert-manager-webhook

Access ArgoCD server

I am following the Installation Instructions from https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server
and even though the service type has been changed to LoadBalancer, I cannot manage to log in.
The information I have is:
$ oc describe svc argocd-server
Name: argocd-server
Namespace: argocd
Labels: app.kubernetes.io/component=server
app.kubernetes.io/name=argocd-server
app.kubernetes.io/part-of=argocd
Annotations: <none>
Selector: app.kubernetes.io/name=argocd-server
Type: LoadBalancer
IP: 172.30.70.178
LoadBalancer Ingress: a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30942/TCP
Endpoints: 10.128.3.91:8080
Port: https 443/TCP
TargetPort: 8080/TCP
NodePort: https 30734/TCP
Endpoints: 10.128.3.91:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
If I do:
$ oc login https://a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
The server is using a certificate that does not match its hostname: x509: certificate is valid for localhost, argocd-server, argocd-server.argocd, argocd-server.argocd.svc, argocd-server.argocd.svc.cluster.local, not a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
error: Seems you passed an HTML page (console?) instead of server URL.
Verify provided address and try again.
I managed to successfully log in to the argocd-server with the following:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
argoPass=$(kubectl -n argocd get secret argocd-initial-admin-secret \
-o jsonpath="{.data.password}" | base64 -d)
argocd login --insecure --grpc-web k3s_master:32761 --username admin \
--password $argoPass
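If you want to log in through the LoadBalancer address instead of a node IP and port, a hedged sketch of looking it up (the field is .hostname on AWS ELBs, .ip on some other providers) would be:
# Find the external address the cloud provider assigned to the argocd-server service
argoURL=$(kubectl get svc argocd-server -n argocd \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
argocd login --insecure --grpc-web "$argoURL" --username admin --password "$argoPass"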
If you wish to expose the ArgoCD server via an ingress, you can disable TLS by patching the argocd-server deployment:
no-tls.yaml
spec:
  template:
    spec:
      containers:
      - name: argocd-server
        command:
        - argocd-server
        - --insecure
kubectl patch deployment -n argocd argocd-server --patch-file no-tls.yaml
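After patching, it can help to wait for the new pods to roll out before retrying the login, for example:
# Wait until the patched argocd-server deployment is fully rolled out
kubectl rollout status deployment argocd-server -n argocd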
Check your service in the argocd namespace:
kubectl get svc -n argocd
If the type is NodePort then that is fine; if not, here is the conversion method:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
Then, to access the ArgoCD server in a browser, use the IP address of the VM plus the port that the NodePort service provides.
Once run, your ArgoCD cluster will be available at https://<hosted-node-ip>:<NodePort>
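If you prefer the CLI over the dashboard for finding that port, a sketch for reading it (assuming the HTTPS port is the one named "https") is:
# Print the NodePort assigned to the https port of the argocd-server service
kubectl get svc argocd-server -n argocd \
  -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'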
By default, the argocd-server runs with self-signed TLS enabled. You are trying to access the argocd-server at a different URL with TLS enabled. The API server should be run with TLS disabled. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command.
containers:
- command:
  - argocd-server
  - --staticassets
  - /shared/app
  - --insecure
I was able to log in to argocd-server using the command below:
argocd login --insecure --grpc-web test-server.companydomain.com --grpc-web-root-path /argocd
Username: admin
Password:
'admin:login' logged in successfully
I am using Traefik as the ingress controller. My server can be accessed in a browser at https://test-server.companydomain.com/argocd
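For this to work, the argocd-server itself usually also has to know it is being served under a sub-path. A hedged sketch of the extra flag (alongside --insecure from the other answers; verify the flag name against argocd-server --help for your version) looks like:
containers:
- command:           # sketch only; check 'argocd-server --help' for your version
  - argocd-server
  - --insecure
  - --rootpath        # assumption: serve the API/UI under the /argocd sub-path
  - /argocd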

Using a custom certificate for the Kubernetes api server with minikube

I have been trying to find out how to do this but so far have found nothing. I am quite new to Kubernetes, so I might just have overlooked it. I want to use my own certificate for the Kubernetes API server; is this possible? And if so, can someone perhaps give me a link?
Ok, so here is my idea. We know we cannot change the cluster certs, but there is another way to do it: we should be able to proxy through an Ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key, we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what the k8s docs say about CN and FQDN:
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a service account token, but feel free to experiment with others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow these instructions: access-cluster-api from docs to try to access the cluster with sa token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
note: I am testing it with invalid/self-signed certificates and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust it. If you have a domain, you should be able to access it directly (no $(minikube ip) necessary).
As you should see, it worked! We got a valid response from api server.
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
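Note: on Kubernetes 1.24+ a long-lived token secret is no longer created automatically for service accounts, so the TOKEN lookup above may come back empty. In that case you can request a token explicitly (a sketch, using the cadmin service account from above):
# Kubernetes 1.24+: mint a (time-limited) token for the service account directly
TOKEN=$(kubectl create token cadmin)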
Yes, you can use your own certificates and set them in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific directory on the node:
{
  sudo mkdir -p /var/lib/kubernetes/
  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
You can then create the API server service and configure it with these settings.
Note: the above example specifically assumes GCP instances, so you might have to change some commands, like:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IP(s) manually instead of getting them from the GCP instance metadata API if you are not using GCP.
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
There you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, along with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way
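As a rough sketch of what that guide ends up with (paths from the step above; only an excerpt, not a complete systemd unit), the API server is started pointing at those certificate files:
# Excerpt: the kube-apiserver flags that wire in the custom certificates
# (a real invocation needs more flags, e.g. --etcd-servers; see the linked guide)
kube-apiserver \
  --advertise-address=${INTERNAL_IP} \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml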

kubectl: error You must be logged in to the server (Unauthorized)

I've created a service account for CI purposes and am testing it out. Upon trying any kubectl command, I get the error:
error: You must be logged in to the server (Unauthorized)
Below is my .kube/config file
apiVersion: v1
clusters:
- cluster:
    server: <redacted>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: bamboo
  name: default
current-context: 'default'
kind: Config
preferences: {}
users:
- name: bamboo
  user:
    token: <redacted>
The service account exists and has the edit cluster role attached via a cluster role binding.
What am I doing wrong?
I can reproduce the error if I copy the token directly without decoding it. I then applied the following steps to decode and set the token, and it works as expected.
$ TOKENNAME=`kubectl -n <namespace> get serviceaccount/<serviceaccount-name> -o jsonpath='{.secrets[0].name}'`
$ TOKEN=`kubectl -n <namespace> get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode`
$ kubectl config set-credentials <service-account-name> --token=$TOKEN
So, I think it might be your case.
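To double-check the RBAC side independently of the kubeconfig, you can also ask the API server what the service account is allowed to do, e.g.:
# Verify that the service account can actually list pods
kubectl auth can-i list pods --as=system:serviceaccount:<namespace>:<serviceaccount-name>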

cert-manager-webhook: FailedDiscoveryCheck, namespace hangs in termination

I deleted a namespace that had a service exposed with nginx-ingress and a Let's Encrypt certificate controlled by cert-manager. Deletion of the namespace is hanging with status Terminating.
It is likely a problem with the internal API as explained here. When I run:
kubectl api-resources
it reports that the cert-manager webhook API isn't reachable:
error: unable to retrieve the complete list of server APIs: webhook.certmanager.k8s.io/v1beta1: the server is currently unable to handle the request
When I run kubectl get apiservices v1beta1.webhook.certmanager.k8s.io -o yaml to check its status conditions:
...
  service:
    name: cert-manager-webhook
    namespace: nginx-ingress
    port: 443
  version: v1beta1
  versionPriority: 15
status:
  conditions:
  - lastTransitionTime: "2020-01-21T15:02:23Z"
    message: 'failing or missing response from https://10.24.32.6:10250/apis/webhook.certmanager.k8s.io/v1beta1:
      bad status from https://10.24.32.6:10250/apis/webhook.certmanager.k8s.io/v1beta1:
      404'
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available
All nginx-ingress and cert-manager pods are in good health. I updated cert-manager between deploying and deleting this namespace, which might explain the issue. How can this problem be solved?
versions:
Kubernetes: v1.15.4-gke.22
cert-manager v0.12.0
nginx-ingress: 1.29.3
A simple solution to the issue is presented here, but it does not describe how such a problem arises or how it can be prevented.
Create a temporary JSON file that describes the terminating namespace:
kubectl get namespace <terminating-namespace> -o json >tmp.json
Edit the file tmp.json by removing the kubernetes value from the finalizers field and save the file.
Set a temporary proxy IP and port:
kubectl proxy
From a new terminal window, make an API call with your temporary proxy IP and port:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/<terminating-namespace>/finalize
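The same steps can be collapsed into one pipeline with jq while kubectl proxy is running; a hedged sketch:
# Strip the finalizers and PUT the result straight back to the /finalize endpoint
kubectl get namespace <terminating-namespace> -o json \
  | jq '.spec.finalizers = []' \
  | curl -k -H "Content-Type: application/json" -X PUT --data-binary @- \
    http://127.0.0.1:8001/api/v1/namespaces/<terminating-namespace>/finalize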