OpenShift REST API equivalent of oc tag

What is the OpenShift REST API equivalent of oc tag some.docker.registry some-image-tag:latest?
From the help page:
Tag existing images into image streams
The tag command allows you to take an existing tag or image from an image stream, or a Docker image pull spec, and set
it as the most recent image for a tag in 1 or more other image streams. It is similar to the 'docker tag' command, but
it operates on image streams instead.
Pass the --insecure flag if your external registry does not have a valid HTTPS certificate, or is only served over HTTP.
Pass --scheduled to have the server regularly check the tag for updates and import the latest version (which can then
trigger builds and deployments). Note that --scheduled is only allowed for Docker images.
Usage:
oc tag [--source=SOURCETYPE] SOURCE DEST [DEST ...] [flags]

Answering my own question for future reference:
Run the command as you would on the CLI, but add --loglevel=8 (i.e. oc tag some.docker.registry some-image-tag:latest --loglevel=8).
Then you can see exactly what API call was used and exactly what params were used and you can replicate that! :)
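For reference, a rough sketch of the kind of request --loglevel=8 reveals. Tagging an external Docker image into an image stream typically goes through the image.openshift.io ImageStreamTag API; the API host, project, and image names below are placeholders:
TOKEN=$(oc whoami -t)
curl -k -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  https://api.my-cluster.example.com:6443/apis/image.openshift.io/v1/namespaces/my-project/imagestreamtags \
  -d '{
    "apiVersion": "image.openshift.io/v1",
    "kind": "ImageStreamTag",
    "metadata": {"name": "some-image:latest"},
    "tag": {"from": {"kind": "DockerImage", "name": "some.docker.registry/some-image:latest"}}
  }'
Running the oc command with --loglevel=8 on your own cluster remains the authoritative way to see the exact verb, path, and payload used.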

Related

GitHub actions - use a custom container registry

When using GitHub Actions (for example CodeQL code scanning) you can specify a container image in which the job runs; see https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#jobsjob_idcontainerimage
In the docs it says:
The Docker image to use as the container to run the action. The value can be the Docker Hub image name or a registry name.
I need to specify an image which resides on a private registry, not in (the public) Docker Hub. The docs seem to suggest it is possible ("or a registry name") but I am not sure if and how I could specify a private image (I refer to the private image in docker as https://my.server.com:1234/dir/image-name:latest).
Is it possible? If yes, how?
I've tested the option that you've mentioned but unfortunately it is not working.
Planning to get support from GitHub to try and figure out if it's possible. (Have a corporate account)
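For what it's worth, GitHub's workflow syntax documentation now describes a credentials field under jobs.<job_id>.container for authenticating against a private registry. A minimal sketch, assuming the registry host from the question and hypothetical secret names (note the image reference carries no https:// scheme):
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: my.server.com:1234/dir/image-name:latest
      credentials:
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}
    steps:
      - run: echo "running inside the private image"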

Why am I getting this "unauthorized" error when trying to mirror OKD installation images from Quay.io?

I have been working on an installation of OKD in an air-gapped environment. The first major step has been mirroring the OKD images so that they can be moved over to the new environment and pulled locally. I've been following a combination of the OpenShift documentation and this article, as well as this resource for getting my certificates set up. I have been making slow but consistent progress.
However, I am now having trouble when attempting to actually mirror the files using
oc adm -a ${LOCAL_SECRET_JSON} release mirror \
--from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
--to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
--to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}
I get the following, encouraging response:
info: Mirroring 120 images to host.okd-registry.dns:5000/ocp4/openshift4 ...
followed by blobs: and manifests: lines, and finally the line
stats: shared=0 unique=7 size=105.3MiB ratio=1.00
I then get about 50 lines stating
error: unable to retrieve source image quay.io/openshift-release-dev/ocp-v4.0-art-dev manifest sha256:{some value}: unauthorized: access to the requested resource is not authorized
I have a Quay account, but even after my research I am not sure whether one is required, and if it is, where or how I would log in. I have attempted to do so using oc login followed by various addresses within the release structure, but if this is the solution, I may be using the wrong arguments, as I have not been able to find any instructions for doing this.
I have also tried the command with sudo. I doubt that is an issue but I tried it anyway.
I suppose the issue could be with my certificates, but I am not sure how to determine if this is the case.
Any guidance or suggestions would be much appreciated.
It has been determined that, as of the time I am posting this answer, the OKD documentation is inaccurate: it instructs readers to pull from the OCP image repository, which apparently requires additional credentials, rather than from the OKD repository. A bug has been logged and the documentation will hopefully be updated soon.
The correct environment variables and full command to mirror the images are as follows:
LOCAL_REGISTRY=localhost:5000    # or your registry's domain name and port
LOCAL_REPOSITORY=okd
LOCAL_SECRET_JSON=<full path to your pull secret>
OCP_RELEASE=4.5.0-0.okd-2020-10-15-235428
PRODUCT_REPO=openshift
RELEASE_NAME=okd
ARCHITECTURE=not-used-in-okd
oc adm -a ${LOCAL_SECRET_JSON} release mirror \
--from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
--to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
--to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE} --dry-run
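Note the trailing --dry-run: it validates the mirror plan without actually copying anything. Once the dry run completes cleanly, run the same command without the flag to perform the mirror:
oc adm -a ${LOCAL_SECRET_JSON} release mirror \
  --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}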

Creating Kubernetes Endpoint in VSTS generates error

When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message "The Kubconfig does not contain user field. Please check the kubeconfig." is always displayed.
I have tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and all end up with the same issue. Has anyone been successful in creating a new endpoint?
This is tracked in TsuyoshiUshio/KubernetesTask issue 35.
I tried to reproduce this, but I couldn't.
I'm not sure, but my guess is a version mismatch between your cluster, the kubectl binary fetched by the download task, and your kubeconfig.
A workaround might look like this:
Run kubectl version on your local machine and check the current server/client versions.
Specify the same version as the server in the download task (by default it is 1.5.2).
Check the log of the failing release pipeline to see which kubectl command was executed, then run the same command on your local machine, adjusted to your local environment.
The point is: before going to VSTS, download kubectl yourself.
Then put the kubeconfig in the default location (~/.kube/config) or set the KUBECONFIG environment variable to point to it.
Then execute kubectl get nodes and make sure it works.
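A minimal sketch of that local check (paths are examples):
kubectl version                        # compare client and server versions
export KUBECONFIG=$HOME/.kube/config   # point at the kubeconfig you will give to VSTS
kubectl get nodes                      # should list the cluster nodes if the config is valid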
My kubeconfig has a different format from yours. If you use AKS, use the az aks install-cli and az aks get-credentials commands.
Please refer to https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough .
If it works locally, the config file should also work in the VSTS task environment (or this task or VSTS has a bug).
I had the same problem on VSTS.
Here is my workaround to get a Service Connection working (in my case to GCloud):
Switched Authentication to "Service Account"
Run the two commands suggested by the info icon next to the Token and Certificate fields: "Token to authenticate against Kubernetes. Use the ‘kubectl get serviceaccounts -o yaml’ and ‘kubectl get secret -o yaml’ commands to get the token."
kubectl get secret -o yaml > kubectl-secret.yaml
Search inside the file kubectl-secret.yaml for the values ca.crt and token
Enter the values inside VSTS to the required fields
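A hypothetical shortcut for the same extraction (replace my-sa-token-xxxxx with your secret's name; the values stored in the secret are base64-encoded, so decode them if the form expects plain text):
kubectl get secret my-sa-token-xxxxx -o jsonpath='{.data.token}' | base64 -d
kubectl get secret my-sa-token-xxxxx -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt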
The generated config I was using had a duplicate line; removing it corrected the issue for me:
users:
- name: cluster_stuff_here
- name: cluster_stuff_here
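Note that the duplicated entries above also lack a user field, which matches the error message. For comparison, a minimal sketch of a well-formed entry (the token value is a placeholder):
users:
- name: cluster_stuff_here
  user:
    token: <your-token-here>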

How do I upload a file with metadata to JFrog Artifactory with curl

I upload a file like this:
curl -u ${CREDS} --upload-file ${file} ${url}
Is there a way to add a body or headers that will set some metadata for the file? Like build number.
You can actually deploy artifacts with properties to Artifactory OSS using matrix parameters, for example:
curl -uadmin:password -T file.tar "http://localhost:8081/artifactory/generic-local/file.tar;foo=bar;"
And get the artifact properties using REST API, for example:
curl -uadmin:password "http://localhost:8081/artifactory/api/storage/generic-local/file.tar?properties"
Viewing properties from the UI and other features are limited to the Pro edition.
It seems this is a Pro feature. Documentation: Set Item Properties
PUT /api/storage/{repoKey}{itemPath}?properties=p1=v1[,v2][|p2=v3][&recursive=1]
Not helping me :-/
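For completeness, a sketch of that documented Pro call applied to the upload in the question (repo, path, and property name are examples):
curl -uadmin:password -X PUT \
  "http://localhost:8081/artifactory/api/storage/generic-local/file.tar?properties=build.number=42"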

How to change a buildConfig in OpenShift 3

I'm working through the sample tutorial on OpenShift 3. I created the example application nodejs-mongodb-example. But in the "edit parameters" step, I put the wrong URL to my forked repository, and I get a failed build.
I thought maybe I'd be able to extract the buildConfig file (is that a template?) on the command line, but I haven't found a way to do that.
Is there a way to edit and replace this bad buildConfig without deleting all of the application objects and starting over?
You can use the oc edit command to edit an existing object. For example, oc edit buildconfig/myapp to edit the BuildConfig named myapp.
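If you would rather work with a file than the interactive editor, a sketch of the same fix (the BuildConfig name comes from the tutorial; spec.source.git.uri is the standard location of the repository URL):
# dump the BuildConfig to a file
oc get buildconfig nodejs-mongodb-example -o yaml > bc.yaml
# edit spec.source.git.uri in bc.yaml to point at the correct fork, then:
oc replace -f bc.yaml
# trigger a new build with the corrected source:
oc start-build nodejs-mongodb-example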
To add to @ncdc's answer, here are the docs for oc:
https://docs.openshift.com/enterprise/3.0/cli_reference/basic_cli_operations.html
and specifically for oc edit:
https://docs.openshift.com/enterprise/3.0/cli_reference/basic_cli_operations.html#application-modification-cli-operations