How to push a Docker image via the REST API, given a config

I want to create a new image in a remote Docker registry by providing only partial data.
According to the docs
https://docs.docker.com/registry/spec/api/#pushing-an-image
in order to push a Docker image, I can:
* post a tar layer that I have, and
* post a manifest
and the registry will support my new image.
For example:
* I have a Java app in a tar layer locally.
* The remote Docker registry already has a java8 base image.
* I want to upload the tar layer plus a manifest that references the java8 base image, and have the Docker registry support the new image for my app.
(I get the layer tar from a third-party build tool called Bazel, if anyone cares.)
From the docs I gather that I can take the existing java8 image manifest, download it, append (or prepend) my new layer to the layers section, and voilà.
Looking at the manifest spec
https://docs.docker.com/registry/spec/manifest-v2-2/#image-manifest-field-descriptions
I see there's a "config object" section with a digest that references a config file. This makes sense; I may need to redefine the entrypoint, for example. So suppose I also have a Docker config in a file, which I guess I need to let the registry know about somehow.
Nowhere (that I can see) does the API state where or how to upload the config, or whether I need to do that at all - maybe it's included in the layer tar or something.
Do I upload the config as a layer? Is it included in the tar? If not, why do I reference it by digest?
The best answer I could hope for would be a sequence of HTTP calls to a Docker registry that do what I'm trying to do. Alternatively, just explaining what the config is and how to go about it would be very helpful.

I found the solution here:
https://www.danlorenc.com/posts/containers-part-2/
Very detailed, great answer. I don't know who you are, but I love you!
From inspecting some configs from existing images, Docker seems to require a few fields:
{
  "architecture": "amd64",
  "config": {},
  "history": [
    {
      "created_by": "Bash!"
    }
  ],
  "os": "linux",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:69e4bd05139a843cbde4d64f8339b782f4da005e1cae56159adfc92311504719"
    ]
  }
}
The config section can contain environment variables, the default CMD and ENTRYPOINT of your container, and a few other settings. The rootfs section contains a list of layers and diff_ids that look pretty similar to our manifest. Unfortunately, the diff_ids are slightly different from the digests contained in our manifest: they're actually a sha256 of the 'uncompressed' layers.
We can create one with this script:
cat <<EOF > config.json
{
  "architecture": "amd64",
  "config": {},
  "history": [
    {
      "created_by": "Bash!"
    }
  ],
  "os": "linux",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:$(gunzip layer.tar.gz --to-stdout | shasum -a 256 | cut -d' ' -f1)"
    ]
  }
}
EOF
Config Upload
Configs are basically stored by the registry as normal blobs. They get referenced differently in the manifest, but they’re still uploaded by their digest and stored normally.
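The upload script below references a $config_digest variable that the post never defines. Assuming the blob digest is simply "sha256:" plus the SHA-256 of the exact bytes of config.json (and using shasum to match the tool used earlier), it can be computed like this:

```shell
# "$config_digest" is "sha256:" + the hex SHA-256 of config.json's exact bytes.
# The stand-in file below is only created if a real config.json isn't present.
[ -f config.json ] || printf '{"os":"linux"}' > config.json
config_digest="sha256:$(shasum -a 256 config.json | cut -d' ' -f1)"
echo "$config_digest"
```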
The same type of script we used for layers will work here:
returncode=$(curl -w "%{http_code}" -o /dev/null \
  -I -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  https://gcr.io/v2/$PROJECT/hello/blobs/$config_digest)
if [[ $returncode -ne 200 ]]; then
  # Start the upload and get the location header.
  # The HTTP response seems to include carriage returns, which we need to strip.
  location=$(curl -i -X POST \
    https://gcr.io/v2/$PROJECT/hello/blobs/uploads/ \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -d "" | grep Location | cut -d" " -f2 | tr -d '\r')
  # Do the upload.
  curl -X PUT $location\?digest=$config_digest \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    --data-binary @config.json
fi
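One step the excerpt stops short of is pushing the manifest itself, which is what finally ties the config blob and the layer blob(s) into an image. Below is a minimal sketch assuming the same registry, repository, and auth as above; the digests and sizes are placeholders you must fill in from your real blobs, and the curl is shown commented out since it needs network access and credentials:

```shell
# Placeholders: real values come from hashing and measuring the actual blobs.
config_digest="sha256:<digest-of-config.json>"
layer_digest="sha256:<digest-of-layer.tar.gz>"

# A schema-2 manifest references the config blob and each (compressed) layer
# by digest and size in bytes. Sizes 123/456 below are placeholders.
cat <<EOF > manifest.json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 123,
    "digest": "$config_digest"
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 456,
      "digest": "$layer_digest"
    }
  ]
}
EOF

# The tag is the trailing path segment of the manifest URL ("latest" here):
# curl -X PUT https://gcr.io/v2/$PROJECT/hello/manifests/latest \
#   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#   -H "Content-Type: application/vnd.docker.distribution.manifest.v2+json" \
#   --data-binary @manifest.json
```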

Related

Google Cloud Container Registry - Check image size

I sent a Dockerfile to Google Cloud Build, and the build completed successfully.
The artifact URL is:
gcr.io/XXX/api/v1:abcdef017e651ee2b713828662801b36fc2c1
How can I check the image size (MB/GB)?
There isn't an API for this, but I have a workaround: this Linux command line:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://gcr.io/v2/XXX/api/v1/manifests/abcdef017e651ee2b713828662801b36fc2c1 2>/dev/null | \
jq ".layers[].size" | \
awk '{s+=$1} END {print s}'
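To see what the jq/awk part of that command does in isolation, here is the same pipeline run against a stub manifest, so no registry or token is needed (the layer sizes are made up):

```shell
# Stub manifest with three fake layer sizes; a real one comes from the registry.
cat <<'EOF' > /tmp/stub-manifest.json
{"layers": [{"size": 100}, {"size": 250}, {"size": 50}]}
EOF
# Extract every layer size, then sum them with awk.
jq ".layers[].size" /tmp/stub-manifest.json | awk '{s+=$1} END {print s}'
# prints 400
```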
In detail, line by line:
* Make the curl request with an access token obtained from the gcloud CLI.
* Fetch the image manifest, which describes all the layers and their sizes.
* Extract only the layers' sizes.
* Sum the sizes.
The result is in bytes.
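Since the question asked for MB/GB, one more awk stage converts the byte total; a small illustration with a hypothetical byte count:

```shell
# Convert a byte count to MiB with one decimal place (1 MiB = 1024*1024 bytes).
echo 123456789 | awk '{printf "%.1f MiB\n", $1/1024/1024}'
# prints 117.7 MiB
```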

How to get raw content directly from api.github.com (or raw.githubusercontent.com)

First of all, please take note of the new API changes:
https://developer.github.com/changes/2020-02-10-deprecating-auth-through-query-param/
The problem seems to be that I have to exchange a GitHub personal access token for a temporary token in order to read from raw.githubusercontent.com.
I have this request info:
set -e
export github_personal_access_token=a8f464fdxxxxxxxxxxxxxxxxxxxxxxfb89e6be
export file_url="https://api.github.com/repos/oresoftware/live-mutex/contents/package.json?ref=master"
mkdir tmp && cd tmp
curl -H "Authorization: token $github_personal_access_token" "$file_url" 2> err.log > output.json
the output.json looks like:
{
"name": "package.json",
"path": "package.json",
"sha": "6a2d55983bb641ff217d822d8e60dbb6c8f85ea3",
"size": 1343,
"url": "https://api.github.com/repos/ORESoftware/live-mutex/contents/package.json?ref=master",
"html_url": "https://github.com/ORESoftware/live-mutex/blob/master/package.json",
"git_url": "https://api.github.com/repos/ORESoftware/live-mutex/git/blobs/6a2d55983bb641ff217d822d8e60dbb6c8f85ea3",
"download_url": "https://raw.githubusercontent.com/ORESoftware/live-mutex/master/package.json",
"type": "file",
"content": "ewogICJuYW1lIjogImxpdmUtbXV0ZXgiLAogICJ2ZXJzaW9uIjogIjAuMi4y\nNCIsCiAgImRlc2NyaXB0aW9uIjogIlNpbXBsZSBtdXRleCB0aGF0IHVzZXMg\nYSBUQ1Agc2VydmVyOyB1c2VmdWwgaWYgeW91IGNhbm5vdCBpbnN0YWxsIFJl\nZGlzLCBldGMuIiwKICAibWFpbiI6ICJkaXN0L21haW4uanMiLAogICJ0eXBp\nbmdzIjogImRpc3QvbWFpbi5kLnRzIiwKICAidHlwZXMiOiAiZGlzdC9tYWlu\nLmQudHMiLAogICJiaW4iOiB7CiAgICAibG14X2FjcXVpcmVfbG9jayI6ICJh\nc3NldHMvY2xpL2FjcXVpcmUuanMiLAogICAgImxteF9yZWxlYXNlX2xvY2si\nOiAiYXNzZXRzL2NsaS9yZWxlYXNlLmpzIiwKICAgICJsbXhfaW5zcGVjdF9i\ncm9rZXIiOiAiYXNzZXRzL2NsaS9pbnNwZWN0LmpzIiwKICAgICJsbXhfbGF1\nbmNoX2Jyb2tlciI6ICJhc3NldHMvY2xpL3N0YXJ0LXNlcnZlci5qcyIsCiAg\nICAibG14X3N0YXJ0X3NlcnZlciI6ICJhc3NldHMvY2xpL3N0YXJ0LXNlcnZl\nci5qcyIsCiAgICAibG14X2xzIjogImFzc2V0cy9jbGkvbHMuanMiLAogICAg\nImxteCI6ICJhc3NldHMvbG14LnNoIgogIH0sCiAgInNjcmlwdHMiOiB7CiAg\nICAidGVzdCI6ICIuL3NjcmlwdHMvdGVzdC5zaCIsCiAgICAicG9zdGluc3Rh\nbGwiOiAiLi9hc3NldHMvcG9zdGluc3RhbGwuc2giCiAgfSwKICAicjJnIjog\newogICAgInRlc3QiOiAiLi90ZXN0L3NldHVwLXRlc3Quc2ggJiYgc3VtYW4g\nLS1kZWZhdWx0IgogIH0sCiAgInJlcG9zaXRvcnkiOiB7CiAgICAidHlwZSI6\nICJnaXQiLAogICAgInVybCI6ICJnaXQraHR0cHM6Ly9naXRodWIuY29tL09S\nRVNvZnR3YXJlL2xpdmUtbXV0ZXguZ2l0IgogIH0sCiAgImF1dGhvciI6ICJP\nbGVnemFuZHIgVkQiLAogICJsaWNlbnNlIjogIk1JVCIsCiAgImJ1Z3MiOiB7\nCiAgICAidXJsIjogImh0dHBzOi8vZ2l0aHViLmNvbS9PUkVTb2Z0d2FyZS9s\naXZlLW11dGV4L2lzc3VlcyIKICB9LAogICJob21lcGFnZSI6ICJodHRwczov\nL2dpdGh1Yi5jb20vT1JFU29mdHdhcmUvbGl2ZS1tdXRleCNyZWFkbWUiLAog\nICJkZXBlbmRlbmNpZXMiOiB7CiAgICAiQG9yZXNvZnR3YXJlL2pzb24tc3Ry\nZWFtLXBhcnNlciI6ICIwLjAuMTI0IiwKICAgICJAb3Jlc29mdHdhcmUvbGlu\na2VkLXF1ZXVlIjogIjAuMS4xMDYiLAogICAgImNoYWxrIjogIl4yLjQuMiIs\nCiAgICAidGNwLXBpbmciOiAiXjAuMS4xIiwKICAgICJ1dWlkIjogIl4zLjMu\nMiIKICB9LAogICJkZXZEZXBlbmRlbmNpZXMiOiB7CiAgICAiQHR5cGVzL25v\nZGUiOiAiXjEwLjEuMiIsCiAgICAiQHR5cGVzL3RjcC1waW5nIjogIl4wLjEu\nMCIsCiAgICAiQHR5cGVzL3V1aWQiOiAiXjMuNC4zIgogIH0KfQo=\n",
"encoding": "base64",
"_links": {
"self": "https://api.github.com/repos/ORESoftware/live-mutex/contents/package.json?ref=master",
"git": "https://api.github.com/repos/ORESoftware/live-mutex/git/blobs/6a2d55983bb641ff217d822d8e60dbb6c8f85ea3",
"html": "https://github.com/ORESoftware/live-mutex/blob/master/package.json"
}
}
but I just want the raw file content, not the metadata. The metadata does give me a link to the raw content:
https://raw.githubusercontent.com/ORESoftware/live-mutex/master/package.json
but for private repos, it requires an access token. So is there an easier way to do this than the following?
curl -H "Authorization: token $github_personal_access_token" "$file_url" |
jq -r '.content' | base64 -d > output.json
Like I said, the biggest problem is that I don't have a valid access token in hand; I can get one to download the file from the download_url, but that requires extra scripting steps. I'm looking for a single command. In other words, I don't want to have to install jq in a Docker image if possible.
GitHub supports different media types to indicate what the client wishes to accept. In your case, you can get the raw file like this:
curl -H "Accept: application/vnd.github.v3.raw" \
-H "Authorization: token $github_personal_access_token" \
"$file_url" 2> err.log > output.json

SumoLogic dashboards - how do I automate?

I am getting some experience with SumoLogic dashboards and alerting. I would like to have all possible configuration in code. Does anyone have experience with automation of SumoLogic configuration? At the moment I am using Ansible for general server and infra provisioning.
Thanks for all info!
Best Regards,
Rafal.
(The dashboards, alerts, etc. are referred to as Content in Sumo Logic parlance)
You can use the Content Management API, especially the content-import-job. I am not an expert in Ansible, but I am not aware of any way to plug that API into Ansible.
Also, there's a community Terraform provider for Sumo Logic, and it supports content:
resource "sumologic_content" "test" {
  parent_id = "%s"
  config = {
    "type": "SavedSearchWithScheduleSyncDefinition",
    "name": "test-333",
    "search": {
      "queryText": "\"warn\"",
      "defaultTimeRange": "-15m",
      [...]
Disclaimer: I am currently employed by Sumo Logic
Below is a shell script to import dashboards. This example uses the Sumo Logic AU instance (e.g. https://api.au.sumologic.com/api); change the endpoint based on your region.
Note: you can export all of your dashboards as JSON files.
#!/usr/bin/env bash
set -e

# If you are using AWS Parameter Store:
# accessKey=$(aws ssm get-parameter --name path_to_your_key --with-decryption --query 'Parameter.Value' --region=ap-southeast-2 | tr -d \")
# accessSecret=$(aws ssm get-parameter --name path_to_your_secret --with-decryption --query 'Parameter.Value' --region=ap-southeast-2 | tr -d \")
# yourDashboardFolderName="xxxxx" # the folder id in Sumo Logic where you want to create dashboards

# If you are using just a key and secret:
accessKey="your_sumologic_key"
accessSecret="your_sumologic_secret"
yourDashboardFolderName="xxxxx" # the folder id in Sumo Logic

# Place all the dashboard JSON files in the ./Sumologic/Dashboards folder.
for f in $(find ./Sumologic/Dashboards -name '*.json'); do
  curl -X POST https://api.au.sumologic.com/api/v2/content/folders/$yourDashboardFolderName/import \
    -H "Content-Type: application/json" \
    -u "$accessKey:$accessSecret" \
    -d @$f
done

Upload secret file credentials to Jenkins with REST / CLI

How can I create a Jenkins Credential via REST API or Jenkins CLI? The credential should be of type "secret file", instead of a username / password combination.
The question is similar to this question, but not the same or a duplicate.
You can do it as follows:
curl -X POST \
  https://jenkins.local/job/TEAM-FOLDER/credentials/store/folder/domain/_/createCredentials \
  -F secret=@/Users/maksym/secret \
  -F 'json={"": "4", "credentials": {"file": "secret", "id": "test",
      "description": "HELLO-curl", "stapler-class":
      "org.jenkinsci.plugins.plaincredentials.impl.FileCredentialsImpl",
      "$class":
      "org.jenkinsci.plugins.plaincredentials.impl.FileCredentialsImpl"}}'
I just finished writing this up today: https://www.linkedin.com/pulse/upload-jenkins-secret-file-credential-via-api-maksym-lushpenko/?trackingId=RDcgSk0KyvW5RxrBD2t1RA%3D%3D
To create Jenkins credentials via the CLI you can use the create-credentials-by-xml command:
java -jar jenkins-cli.jar -s <JENKINS_URL> create-credentials-by-xml system::system::jenkins _ < credential-name.xml
The best way to know the syntax of this is to create a credential manually, and then dump it:
java -jar jenkins-cli.jar -s <JENKINS_URL> get-credentials-as-xml system::system::jenkins _ credential-name > credential-name.xml
Then you can use this XML example as a template, it should be self-explanatory.
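For orientation, a dump for a secret-file credential looks roughly like the snippet below. The field names follow the FileCredentialsImpl plugin class, but treat this as an illustration only and prefer a real dump from your own instance; on a real dump, secretBytes contains the Jenkins-encrypted payload rather than a readable placeholder:

```xml
<org.jenkinsci.plugins.plaincredentials.impl.FileCredentialsImpl>
  <scope>GLOBAL</scope>
  <id>test</id>
  <description>HELLO-cli</description>
  <fileName>secret</fileName>
  <!-- placeholder: Jenkins stores the encrypted file contents here -->
  <secretBytes>{ENCRYPTED_FILE_CONTENTS}</secretBytes>
</org.jenkinsci.plugins.plaincredentials.impl.FileCredentialsImpl>
```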
If you want to update an existing secret file, the simplest way I found was to delete and re-create it.
A delete request, extending @lumaks' answer (i.e. with the same hostname, folder name, and credentials id), looks like:
curl -v -X POST \
  -u "user:password" \
  https://jenkins.local/job/TEAM-FOLDER/credentials/store/folder/domain/_/credential/test/doDelete
This will return HTTP status 302 Found if the credentials file exists, or 404 Not Found if it does not.

Visual Recognition error 400: Cannot execute learning task no classifier name given

I am using Visual Recognition curl command to add a classification to an image:
curl -u "user":"password" \
-X POST \
-F "images_file=@image0.jpg" \
-F "classifier_ids=classifierlist.json" \
"https://gateway.watsonplatform.net/visual-recognition-beta/api/v2/classifiers?version=2015-12-02"
json file:
{
  "classifiers": [
    {
      "name": "tomato",
      "classifier_id": "tomato_1",
      "created": "2016-03-23T17:43:11+00:00",
      "owner": "xyz"
    }
  ]
}
(I also tried without the classifiers array and got the same error.)
and getting an error:
{"code":400,"error":"Cannot execute learning task : no classifier name given"}
Is something wrong with the json?
To specify the classifiers you want to use you need to send a JSON object similar to:
{"classifier_ids": ["Black"]}
An example using Black as classifier id in CURL:
curl -u "user":"password" \
  -X POST \
  -F "images_file=@image0.jpg" \
  -F "classifier_ids={\"classifier_ids\":[\"Black\"]}" \
  "https://gateway.watsonplatform.net/visual-recognition-beta/api/v2/classify?version=2015-12-02"
If you want to list the classifier ids in a JSON file then:
curl -u "user":"password" \
  -X POST \
  -F "images_file=@image0.jpg" \
  -F "classifier_ids=@classifier_ids.json" \
  "https://gateway.watsonplatform.net/visual-recognition-beta/api/v2/classify?version=2015-12-02"
Where classifier_ids.json has:
{
  "classifier_ids": ["Black"]
}
You can test the Visual Recognition API in the API Explorer.
Learn more about the service in the documentation.
The model schema you are referencing, and what is listed in the API reference, is the format of the response json. It is an example of how the API will return your results.
The format of the json that you use to specify classifiers should be a simple json object, as German suggests. In a file, it would be:
{
  "classifier_ids": ["tomato_1"]
}
You also need to use < instead of @ for the service to read the contents of the JSON file correctly. (And you might need to quote the < character on the command line, since it has special meaning there: input redirection.) So your curl would be:
curl -u "user":"password" \
  -X POST \
  -F "images_file=@image0.jpg" \
  -F "classifier_ids=<classifier_ids.json" \
  "https://gateway.watsonplatform.net/visual-recognition-beta/api/v2/classify?version=2015-12-02"