Creating Kubernetes Endpoint in VSTS generates error

When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message "The Kubconfig does not contain user field. Please check the kubeconfig." is always displayed.
I have tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and all end up with the same issue. Has anyone been successful in creating a new endpoint?

This is also tracked in TsuyoshiUshio/KubernetesTask issue 35.
I tried to reproduce this, but I can't.
I'm not sure, but my guess is a version mismatch between the cluster, the kubectl binary fetched by the download task, and the kubeconfig.
A workaround might look like this:
Run kubectl version on your local machine and check the current server/client versions.
Specify the same version as the server in the download task (by default it is 1.5.2).
Check the log of your failing release pipeline to see which kubectl command was executed, then run the same command on your local machine, adapted to your local PC's environment.
The point is: before going to VSTS, download kubectl yourself.
Then put the kubeconfig in the default location (~/.kube/config) or point the KUBECONFIG environment variable at the config file.
Then execute kubectl get nodes and make sure it works.
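Putting those steps together, a minimal local check might look like this (a sketch, assuming a Linux amd64 machine and the task's default kubectl v1.5.2):
# Download the same kubectl version the VSTS download task would use (v1.5.2 assumed)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl
chmod +x kubectl
# Point KUBECONFIG at the exact config file you plan to paste into VSTS
export KUBECONFIG=$HOME/.kube/config
./kubectl get nodes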
My kubeconfig is in a different format from yours. If you use AKS, use the az aks install-cli and az aks get-credentials commands.
Please refer to https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough .
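For example (the resource group and cluster names here are placeholders):
az aks install-cli
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes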
If it works locally, the config file should also work in the VSTS task environment (or else this task or VSTS has a bug).

I had the same problem on VSTS.
Here is my workaround to get a Service Connection working (in my case to GCloud):
Switched Authentication to "Service Account"
Run the two commands described by the info icon next to the Token and Certificate fields: "Token to authenticate against Kubernetes. Use the ‘kubectl get serviceaccounts -o yaml’ and ‘kubectl get secret -o yaml’ commands to get the token."
kubectl get secret -o yaml > kubectl-secret.yaml
Search inside the file kubectl-secret.yaml for the values ca.crt and token.
Enter those values into the required fields in VSTS.
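As a sketch, the two values can also be pulled out directly with jsonpath (the secret name here is hypothetical, and the data fields are base64-encoded):
# Print the decoded token from the service account's secret
kubectl get secret my-sa-token -o jsonpath='{.data.token}' | base64 --decode
# Print the decoded CA certificate
kubectl get secret my-sa-token -o jsonpath='{.data.ca\.crt}' | base64 --decode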

The generated config I was using had a duplicate line; removing it corrected the issue for me.
users:
- name: cluster_stuff_here
- name: cluster_stuff_here
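After deduplication, the users section should list the name once, together with its credentials; a sketch (your config may use a token or client certificates instead, and the base64 values are placeholders):
users:
- name: cluster_stuff_here
  user:
    client-certificate-data: <base64-encoded cert>
    client-key-data: <base64-encoded key>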

Related

Why am I getting this "unauthorized" error when trying to mirror OKD installation images from Quay.io?

I have been working on an installation of OKD on an air-gapped environment. The first major step has been mirroring the OKD images so that they can be moved over to the new environment and pulled locally. I've been following a combination of the OpenShift documentation and this article, as well as this resource for getting my certificates set up. I have been making slow but consistent progress.
However, I am now having trouble when attempting to actually mirror the files using
oc adm -a ${LOCAL_SECRET_JSON} release mirror \
--from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
--to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
--to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}
I get the following encouraging response:
info: Mirroring 120 images to host.okd-registry.dns:5000/ocp4/openshift4 ...
followed by blobs: and manifests: lines, and finally the line
stats: shared=0 unique=7 size=105.3MiB ratio=1.00
I then get about 50 lines stating
error: unable to retrieve source image quay.io/openshift-release-dev/ocp-v4.0-art-dev manifest
sha256:{some value}: unauthorized: access to the requested resource is not authorized
I have a Quay account, but even after my research I am not sure whether one is required, and if it is, where or how I would log into it. I have attempted doing so using oc login followed by various addresses within the release structure, but if this is the solution, I may be using the wrong arguments, as I have not been able to find any instructions on doing this.
I have also tried the command with sudo. I doubt that is an issue but I tried it anyway.
I suppose the issue could be with my certificates, but I am not sure how to determine if this is the case.
Any guidance or suggestions would be much appreciated.
It has been determined that, as of this answer, the OKD documentation is inaccurate: it was instructing readers to pull from the OCP image repository rather than the OKD repository, and the former apparently requires additional credentials. A bug has been logged and the documentation will hopefully be updated soon.
The correct environment variables and full command to mirror the images are as follows:
LOCAL_REGISTRY=localhost:5000 (or your local domain name and port for the registry)
LOCAL_REPOSITORY=okd
LOCAL_SECRET_JSON=<full path to your pull secret>
OCP_RELEASE=4.5.0-0.okd-2020-10-15-235428
PRODUCT_REPO=openshift
RELEASE_NAME=okd
ARCHITECTURE=not-used-in-okd
oc adm -a ${LOCAL_SECRET_JSON} release mirror \
--from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
--to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
--to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE} --dry-run
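The trailing --dry-run only validates the mirror without pushing anything; once it completes cleanly, the same command without that flag performs the actual mirror:
oc adm -a ${LOCAL_SECRET_JSON} release mirror \
--from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
--to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
--to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}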

Error in Google Cloud Shell Commands while working on the lab (Securing Google Cloud with CFT Scorecard)

I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First I have to run the following two commands to set environment variables
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above, I don't know what to replace with my own credentials. Maybe that is the reason I am getting an error.
Now I have to enable the cloudasset.googleapis.com gcloud service. For this they gave the following command:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is shown in the attached screenshot:
(screenshot: error in the service enabling command)
Next step is to clone the policy: The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this:
(screenshot: error on running the directory command)
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to put my own credentials; for example, in place of the project ID I wrote my own project ID.
Also, I don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches on how to solve it.
Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
Manually set the project ID in that variable. This is far from ideal because Qwiklabs creates a new temporary project for every lab, so this would only work while you are still on that project. The project ID can be seen in both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
Avoid the --project argument. According to the documentation, it is optional, and if it is omitted the command will use the default project from the configuration settings. You can get the current project by using this:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
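Putting those checks together, here is a quick way to verify what is set before exporting (a sketch; the ${VAR:-default} syntax just falls back to the gcloud default when DEVSHELL_PROJECT_ID is empty):
# Show what each candidate variable currently holds
echo "DEVSHELL_PROJECT_ID=${DEVSHELL_PROJECT_ID}"
gcloud config get-value project
# Fall back to the configured project if the Cloud Shell variable is unset
export GOOGLE_PROJECT="${DEVSHELL_PROJECT_ID:-$(gcloud config get-value project)}"
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}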
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the directory policy-library/samples. There is a trace that it was once there, but it isn't anymore; they should probably update the lab.
About your credentials confusion: you don't have to use your own project ID, just the one given in your lab. If I recall properly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged into your temporary project if you are accessing it from Cloud Shell, which is where you should be doing all of this.
Adding this for later versions: in the Cloud Shell you can set a temporary variable for the current project ID with
PROJECT_ID="$(gcloud config get-value project)"
and then use it like
--project ${PROJECT_ID}
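Applied to the lab's own commands, that pattern would look something like this (the bucket name follows the lab's cai- convention):
PROJECT_ID="$(gcloud config get-value project)"
gcloud services enable cloudasset.googleapis.com --project ${PROJECT_ID}
gsutil mb -l us-central1 -p ${PROJECT_ID} gs://cai-${PROJECT_ID}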

Azure DevOps Pipelines "Waiting for console output from an agent..."

I require something from the output of a running release task in order for it to complete (an authentication code). But the console is now not updating; all I get is "Waiting for console output from an agent..."
This happens on both our self-hosted agents (Linux or Windows) and on the Hosted Ubuntu 1604 agent.
The step in question is the standard Kubernetes task: https://github.com/Microsoft/azure-pipelines-tasks/tree/master/Tasks/KubernetesV1
This was not always happening.
To rule out the possibility of kubectl awaiting console input (as has been discussed above), you could try
kubectl apply --dry-run=client [other args]
or
kubectl apply --dry-run=server [other args]
This could give you guidance as to how to proceed, perhaps with --force or --overwrite flags if needed.
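For instance, a concrete dry-run invocation might look like this (the manifest file name is a placeholder):
kubectl apply -f deployment.yaml --dry-run=client -o yaml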
I have the same issue. After troubleshooting and canceling the task, I noticed that the agent was waiting for a response from the user.
In my case, I was trying to unzip a file where the destination folder already existed with content, so the system was asking the user whether to replace the destination folder's content; that's why the agent was waiting.
2020-03-23T04:14:57.8941954Z unzip /home/azure-deploy-test/AutoEcole.zip -d /home/test-deployment/
2020-03-23T04:14:57.9086229Z Archive: /home/azure-deploy-test/AutoEcole.zip
2020-03-23T04:14:57.9087639Z
2020-03-23T04:14:57.9136932Z ##[error]replace /home/test-deployment/AutoEcole? [y]es, [n]o, [A]ll, [N]one, [r]ename:
2020-03-23T04:53:12.1979529Z ##[error]The operation was canceled.
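In cases like this, making the command non-interactive avoids the hang; for unzip, the -o flag overwrites existing files without prompting (paths taken from the log above):
unzip -o /home/azure-deploy-test/AutoEcole.zip -d /home/test-deployment/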
This was an issue with Microsoft's Azure DevOps Services that has been acknowledged and rectified by Microsoft.
This issue was reported as an issue with the "Liveness in Release Management UI".
All you have to do is access your project using the URL below:
https://dev.azure.com/{your organization}/{your project}.
This is an official solution provided by Microsoft. This resolved the issue for me.
Please share more details in the comments section if you still face the issue.

installing kubernetes on coreos with rkt and automated script

I'm trying to install Kubernetes with rkt on my real (not virtual) CoreOS servers at home using the scripts at https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic and I have some questions.
My etcd2 is using TLS keys; I can't see anywhere in the scripts where I can define where the certificates are located.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
thanks!
Update
Rob, thank you so much for your response. I wasn't clear enough regarding etcd2: I already have etcd2 with TLS installed and properly configured on my CoreOS servers, so I configured my etcd servers in the controller-install.sh file:
export ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
but when I run the controller-install.sh script, it repeats the following output:
Waiting for etcd...
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
...
so I was guessing it's stuck at that phase because I didn't define the etcd-related TLS certificates in the controller script.
On my MacBook Pro laptop I have the following alias configured:
alias myetcdctl="~/apps/etcd-v3.0.8-darwin-amd64/etcdctl --endpoint=https://coreos-2.tux-in.com:2379 --ca-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/ca.pem --cert-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1.pem --key-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1-key.pem --timeout=10s"
so when I run myetcdctl member list I get:
8832ce6a269a7dac: name=ccff826d5f564c67abf35467306f80a0 peerURLs=https://coreos-3.tux-in.com:2380 clientURLs=https://coreos-3.tux-in.com:2379 isLeader=true
a2c0ac9708ef90fc: name=dc38bc8f20e64940b260d3f7b260430d peerURLs=https://coreos-2.tux-in.com:2380 clientURLs=https://coreos-2.tux-in.com:2379 isLeader=false
so I'm guessing that I don't really have a problem there.
Any ideas?
Thanks!
My etcd2 is using TLS keys; I can't see anywhere in the scripts where I can define where the certificates are located.
These scripts don't start an etcd server. You will need to set one up manually, and you can use TLS and as many nodes as you would like. This isn't clear in the current form of the document; I will attempt a PR to fix it.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
Only CONTROLLER_ENDPOINT can be a domain name.
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
These scripts include/start the rkt API service. As you can see below, it also has a Restart parameter set (source):
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service
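To confirm the API service is actually running on a node, a check along these lines should work (the unit name rkt-api.service is an assumption based on the linked scripts):
systemctl is-active rkt-api.service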

Failure: GCE credentials requested outside a GCE instance

When I try to copy my files to Google Cloud Storage using
gsutil cp file.gz gs://somebackup
Get this error:
Your "GCE" credentials are invalid. For more help, see "gsutil help creds", or re-run the gsutil config command (see "gsutil help config").
Failure: GCE credentials requested outside a GCE instance.
BTW, this was working until yesterday.
Just ran into this as well and contacted Google support. It's occurring because the instance was created with its Storage permission set to Read Only, which is visible on the instance details page.
Apparently this can't be changed after the instance is created (!). Our solution was to mount a temp disk, copy the file there, unmount it and then remount it on a second instance (with proper Storage permissions) and do the gsutil copy from there.
Try doing a "gcloud auth login" from the command line.
I've had this before, and my problem was that I had set the wrong project.
Make sure you set the project ID (and not the project name) when you run
gcloud config set project <projectID>
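Putting the thread's suggestions together, a typical recovery sequence looks like this (the project ID is a placeholder; the file and bucket names are taken from the question):
gcloud auth login
gcloud config set project my-project-id   # the project ID, not the display name
gsutil cp file.gz gs://somebackup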