Webserver 404 with Airflow Deployed on Minikube - kubernetes

I followed the instructions in the official repo on installing on kubernetes, however I get a 404 when I try to use the UI. Could anyone tell me what the issue might be?
Repo:
https://github.com/apache/incubator-airflow/tree/master/scripts/ci/kubernetes
To clarify, the instructions I followed were:
Point kubectl to the local minikube cluster (v1.10.0)
Clone repo (commit 89c1f530da04088300312ad3cec9fa74c3703176)
cd incubator-airflow/scripts/ci/kubernetes
./docker/build.sh
./kube/deploy.sh
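For anyone reproducing this, it may be worth confirming the pods actually came up before debugging the UI (the pod name below is a placeholder):
kubectl get pods --all-namespaces                  # the airflow pods should be Running
kubectl port-forward <airflow-web-pod> 8080:8080   # placeholder name; add -n <namespace> if needed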

Never mind... I must have missed the memo that the default username/password is airflow/airflow, even though I thought authenticate was set to False.
Solution:
Go to localhost:8080/login and enter username/password airflow/airflow.
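For context, webserver authentication in that era of Airflow is controlled in airflow.cfg (shipped via the configmap in that repo); a minimal sketch, assuming the contrib password backend:
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.password_auth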

Related

Failed to pull image with policy "always": Error response from daemon

I am facing the error Failed to pull image with policy "always": Error response from daemon: Get https://registry-1.docker.io/v2/library/docker/manifests/20.10.17-dind: unauthorized: incorrect username or password (manager.go:203:0s) while trying to run my pipeline to push my code to Docker Hub.
I have tried different solutions, but every time I get the same error. I am using my Docker Hub username rather than my email, but I face the same issue. A friend told me it may be a dind issue and that I have to pin the docker and dind image and service to the latest version tags, but it's still the same issue. Please help me; I really appreciate your efforts in advance.
Please check the code screenshot attached.
Check whether this is an authentication issue similar to this one:
We noticed that the job succeeds if we delete the Docker config file ~/.docker/config.json, which contains credentials from previous CI jobs.
That is the reason why you should always run docker logout <registry> if the job runs in a non-disposable environment.
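A minimal sketch of that advice, assuming this is GitLab CI with docker:dind (the manager.go line suggests GitLab Runner); DOCKER_HUB_USER and DOCKER_HUB_TOKEN are illustrative variable names:
before_script:
  - docker logout registry-1.docker.io || true   # drop stale credentials from earlier jobs
  - echo "$DOCKER_HUB_TOKEN" | docker login -u "$DOCKER_HUB_USER" --password-stdin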

Why am I getting this "unauthorized" error when trying to mirror OKD installation images from Quay.io?

I have been working on an installation of OKD on an air-gapped environment. The first major step has been mirroring the OKD images so that they can be moved over to the new environment and pulled locally. I've been following a combination of the OpenShift documentation and this article, as well as this resource for getting my certificates set up. I have been making slow but consistent progress.
However, I am now having trouble when attempting to actually mirror the files using
oc adm -a ${LOCAL_SECRET_JSON} release mirror \
--from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
--to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
--to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}
I get the following encouraging response:
info: Mirroring 120 images to host.okd-registry.dns:5000/ocp4/openshift4 ...
followed by blobs: and manifests: lines, and finally the line
stats: shared=0 unique=7 size=105.3MiB ratio=1.00
I then get about 50 lines stating
error: unable to retrieve source image quay.io/openshift-release-dev/ocp-v4.0-art-dev manifest
sha256:{some value}: unauthorized: access to the requested resource is not authorized
I have a Quay account, but even after my research I am not sure whether one is required, and if it is, where or how I would log into it. I have attempted doing so using oc login followed by various addresses within the release structure, but if this is the solution, I may be using the wrong arguments, as I have not been able to find any instructions on doing this.
I have also tried the command with sudo. I doubt that is an issue but I tried it anyway.
I suppose the issue could be with my certificates, but I am not sure how to determine if this is the case.
Any guidance or suggestions would be much appreciated.
It has been determined that the OKD documentation is inaccurate as of the time I am posting this answer: it was instructing readers to pull from the OCP image repository rather than the OKD repository, which apparently requires additional credentials. A bug has been logged, and the documentation will hopefully be updated soon.
The correct environment variables and full command to mirror the images are as follows:
LOCAL_REGISTRY=localhost:5000   # or your registry's local domain name and port
LOCAL_REPOSITORY=okd
LOCAL_SECRET_JSON=<full path to your pull secret>
OCP_RELEASE=4.5.0-0.okd-2020-10-15-235428
PRODUCT_REPO=openshift
RELEASE_NAME=okd
ARCHITECTURE=not-used-in-okd
oc adm -a ${LOCAL_SECRET_JSON} release mirror \
--from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
--to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
--to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE} --dry-run
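Note that the command as written ends in --dry-run, so nothing is actually copied; once the output looks right, rerun it without that flag. To confirm the images landed, the registry's v2 API can be queried (credentials are placeholders):
curl -u <user>:<password> https://${LOCAL_REGISTRY}/v2/${LOCAL_REPOSITORY}/tags/list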

Creating Kubernetes Endpoint in VSTS generates error

When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message "The Kubconfig does not contain user field. Please check the kubeconfig." is always displayed.
I have tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and all end up with the same issue. Has anyone been successful in creating a new endpoint?
This is followed up in TsuyoshiUshio/KubernetesTask issue 35.
I tried to reproduce this, but I couldn't.
I can't be sure, but my guess is a version mismatch between the cluster, the kubectl binary fetched by the download task, and the kubeconfig.
A workaround might look like this:
Run kubectl version on your local machine and check the current server/client versions.
Specify the same version as the server in the download task (by default it is 1.5.2).
Check the log of your failing release pipeline to see which kubectl command was executed, and run the same command on your local machine, adapted to your local environment.
The point is: before going to VSTS, download kubectl yourself.
Then put the kubeconfig in the default location, ~/.kube/config, or set the KUBECONFIG environment variable to the file's path.
Then execute kubectl get nodes and make sure it works.
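A minimal sketch of that local sanity check (paths are illustrative):
kubectl version                        # compare the client and server versions
export KUBECONFIG=$HOME/.kube/config   # or place the file at the default path
kubectl get nodes                      # should list the cluster's nodes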
My kubeconfig has a different format from yours. If you use AKS, use the az aks install-cli and az aks get-credentials commands.
Please refer to https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough .
If it works locally, the config file should work in the VSTS task environment (otherwise this task or VSTS has a bug).
I had the same problem on VSTS.
Here is my workaround to get a Service Connection working (in my case to GCloud):
Switch Authentication to "Service Account".
Run the two commands suggested by the info icon next to the Token and Certificate fields: "Token to authenticate against Kubernetes. Use the 'kubectl get serviceaccounts -o yaml' and 'kubectl get secret -o yaml' commands to get the token."
kubectl get secret -o yaml > kubectl-secret.yaml
Search inside the file kubectl-secret.yaml for the values ca.crt and token.
Enter those values into the required fields in VSTS.
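A sketch of those two steps on the command line, assuming the default service account in the current namespace (on clusters of that era, the token secret is auto-created and listed under .secrets):
SECRET_NAME=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 --decode
kubectl get secret "$SECRET_NAME" -o jsonpath='{.data.ca\.crt}' | base64 --decode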
The generated config I was using had a duplicate line; removing it corrected the issue for me.
users:
- name: cluster_stuff_here
- name: cluster_stuff_here
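After removing the duplicate, the users section should name each user once, in the usual kubeconfig shape (credentials redacted):
users:
- name: cluster_stuff_here
  user:
    token: <redacted>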

Issue when deploying basic auth in the Kubernetes dashboard

I am trying to add basic authentication to my Kubernetes cluster by changing the file
/etc/kubernetes/manifest/kubernetes-apiserver.yaml
There I add 3 flags:
--basic-auth-file=/etc/kubernetes/basic-auth.csv
--authorization-mode=ABAC
--authentication-mode=basic
But when I add those lines and restart my system, my Kubernetes freezes and won't start. Is this the right way to add flags to an already running Kubernetes cluster? Is this the right way to add basic authentication to the Kubernetes dashboard?
I used this tutorial for the basic authentication: https://github.com/kubernetes/dashboard/wiki/Access-control#basic
Conceptually you are doing everything right, but the problem is that for modern Kubernetes versions, at least 1.9, authentication-mode is not a valid CLI flag for the API server. All available flags are listed in the documentation.
The documentation in the repo is a bit outdated. Basic authentication is actually enabled as soon as you provide the basic-auth-file option.
So just remove the authentication-mode flag and use only basic-auth-file and authorization-mode. That should help.
To enable user/password authorization, based on the Dashboard documentation, you need to add the authentication-mode CLI arg to the Dashboard itself, not the API server.
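For reference, a sketch of where those flags sit in the API server's static pod manifest (surrounding fields abbreviated; only the two flags come from this answer):
spec:
  containers:
  - command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/basic-auth.csv
    - --authorization-mode=ABAC
    # note: ABAC mode also expects --authorization-policy-file=<path to a policy file>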

Deploy API REST IBM Hyperledger Composer Blockchain (bad flag in substitute command: 'U' ERROR)

I'm getting this error when trying to deploy a card to a working blockchain on the cloud; any idea? Thanks in advance. I'm using a Mac and following this guide (Kubernetes installed/configured correctly, I think):
https://ibm-blockchain.github.io/interacting/
./create/create_composer-rest-server.sh --paid --business-network-card /Users/sm/jsblock/tutorial-network/PeerAdmin#fabric-network.card
Configured to setup a paid storage on ibm-cs
Preparing yaml file for create composer-rest-server
sed: 1: "s/%COMPOSER_CARD%//User ...": bad flag in substitute command: 'U'
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
error: no objects passed to create
Composer rest server created successfully
There is an error in that document, and you are also specifying the Peer Admin card when you need to use a Network Admin card.
There are 2 sets of 'parallel' documents for paid and free clusters. The command you are using has --paid in it in error. If you remove --paid and use the Network Admin card, I think it will solve the problem.
Your command will look something like this: ./create/create_composer-rest-server.sh --business-network-card admin#YOUR-network
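As an aside, the sed error itself has a mechanical cause: the script substitutes the card path with an expression like s/%COMPOSER_CARD%/<path>/, so a value beginning with /Users/... terminates the pattern early and sed reads the following U as a flag. If you ever patch such a script yourself, the usual fix is a delimiter that cannot occur in the path (CARD and template.yaml are placeholders here):
sed "s|%COMPOSER_CARD%|${CARD}|g" template.yaml   # '|' tolerates slashes in the substituted value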
I tried the install process again (the deployed blockchain instance seemed to work well): https://ibm-blockchain.github.io/simple/
But I noticed that the script:
./create_all.sh
...never ended (mostly internal Kubernetes problems, I guess). So I reset the installation:
./delete_all.sh
./create_all.sh
I tried again, and now everything was OK. I could get my awesome REST API talking to my Hyperledger blockchain, deployed on IBM Cloud. Ready to develop something amazing.