Artifactory API to upload directory with contents - rest

I was looking into the Artifactory REST API docs, but I could not find any API that allows us to upload a directory with its contents (which can be files or subdirectories). There is only an API to upload an individual file. I also found this, where it is stated that it can be done through the GUI or the JFrog CLI. I wonder if I misread some documents and if there actually is a way to do that with the REST API.

In order to upload (a.k.a. "deploy") a directory with its contents via the REST API, you can use the Deploy Artifacts from Archive endpoint.
You'll need to create an archive file (zip, tar, tar.gz) with the directory and its contents, and call the upload API with the target folder, specifying the X-Explode-Archive: true request header.
E.g. (simplified - omitted auth, etc.):
curl -X PUT https://jfrog.foo.bar/artifactory/my-repo-local/ \
-H "X-Explode-Archive: true" \
-T my-file.tar.gz
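For completeness, a minimal sketch of creating the archive first (the directory name is just a placeholder):
# pack the directory and its contents; zip, tar and tar.gz are all accepted
tar -czf my-file.tar.gz my-directory/
Artifactory extracts the archive server-side, so the directory structure inside it is recreated under the target folder in my-repo-local.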

Related

download github artifact from url using wget

I am trying to follow these docs to download an artifact from GitHub using GitHub's API:
https://docs.github.com/en/rest/actions/artifacts#download-an-artifact
I ran the curl command given in the docs, and it gave me the following url from which to download the artifact (I have replaced the specifics with ...)
https://pipelines.actions.githubusercontent.com/serviceHosts/..../_apis/pipelines/1/runs/16/signedartifactscontent?artifactName=my-artifact&urlExpires=....&urlSigningMethod=HMACV2&urlSignature=....
I am able to download the artifact by putting the URL into my browser (it automatically downloads when the URL is visited) however I tried to use wget to download it via console and got this error:
wget https://pipelines.actions.githubusercontent.com/... # the command I ran
HTTP request sent, awaiting response... 400 Bad Request # the error I got
How can I download a zip file to console? Should I use something other than wget?
I'd like to clarify that viewing this link in the browser is possible even when not logged in to github (or when in private browsing). Also, I can download the zip file at the link as many times as I would like before the link expires after 1 minute. Also my repo is private, which is necessary for my work. I need to use an access token when doing the curl command as described in the docs, however the link that is returned to me does not require any authentication when accessed via a browser.
The API docs seem a bit ambiguous here. It is possible that the redirect URL can only be accessed a single time, in which case you should try generating a fresh redirect URL and fetching it with wget right away. You can then unzip the file using the unzip command.
If that is not the case I believe this statement in the api docs is key:
Anyone with read access to the repository can use this endpoint. If the repository is private you must use an access token with the repo scope. GitHub Apps must have the actions:read permission to use this endpoint.
My guess is that your repository is private and you are logged in on the browser to Github which allows you to be authenticated hence why you are able to download from the redirect link. I would suggest trying from incognito mode to test this.
Making the repository public would allow you to bypass this issue. Alternatively, you can pass the authentication token as a header to wget like so, in order to authenticate with the server and pull the file.
header='--header=Authorization: token <TOKEN>'
wget "$header" https://pipelines.actions.githubusercontent.com/... -O output_file
The problem was that I didn't put quotes around my URL. Without quotes, the shell treats each & in the query string as a command separator, so only a truncated URL gets requested (hence the 400 Bad Request). I needed to do this:
wget "https://pipelines.actions.githubusercontent.com/serviceHosts/..../_apis/pipelines/1/runs/16/signedartifactscontent?artifactName=my-artifact&urlExpires=....&urlSigningMethod=HMACV2&urlSignature=...."

How to copy a file from AWS rest API gateway to s3 bucket?

Using an API gateway, I created an S3 bucket to copy an image (image/jpg). This website describes how I can upload images using Amazon's API gateway: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-upload-image-s3/.
When I type the URL by adding the bucket and object name, I get the following error:
{"message":"Missing Authentication Token"}
I would like to know, first of all, where the API can find my image. How can I verify that the image is a local file as stated in the introduction? In addition to that, should I use the curl command to complete the transformation? I am unable to use Postman.
I have included a note from the link, how can I change the type of header in a put request?
What is the best way to copy files from the API gateway to S3 without using the Lambda function?
Assuming you set up your API and bucket following the instructions in the link, you can upload a local file to your bucket with a curl command like the one below (the request body carries the file contents):
curl -X PUT -H "Accept: image/jpeg" --data-binary "@my-local-file.jpg" https://my-api-id.execute-api.us-east-2.amazonaws.com/my-stage-name/my-bucket-name/my-local-file.jpg
Note that the header indicates it will accept jpeg files only. Depending on the file you are uploading to S3, you will change or add this header accordingly.
To answer the questions directly:
What is the best way to copy files from the API gateway to S3 without using the Lambda function? - follow steps in this link https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-upload-image-s3/
where API can find my image? - your image is located on your local host/computer. You use the curl command to upload it via the API that you created
should I use the curl command to complete the transformation? - not sure what you meant by transformation, but you use the curl command to upload the image file to your S3 bucket via API Gateway
how can I change the type of header in a put request? - you use -H to add headers in your curl command
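As a hypothetical example of changing that header for another file type (the file and object names are placeholders, and the new type must also be configured as a binary media type in your API settings):
curl -X PUT -H "Accept: image/png" --data-binary "@my-diagram.png" https://my-api-id.execute-api.us-east-2.amazonaws.com/my-stage-name/my-bucket-name/my-diagram.png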

Can't download GSuite exported data using gsutil

I am trying to download the exported data from my GSuite (Google Workspace) account. I ran the data export tool and it is sitting in a bucket. I want to download all of the files, but it says that the only way I can download multiple files is to use the gsutil utility.
I installed it using pip install -U gsutil.
I tried running the following command:
gsutil cp -r \
gs://takeout-export-3ba9a6a2-c080-430a-bece-6f830889cc83/20201202T070520Z/ \
gs://takeout-export-3ba9a6a2-c080-430a-bece-6f830889cc83/Status\ Report.html \
.
...but it failed with an error:
ServiceException: 401 Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.
I suppose that is because I am not authenticated. I tried going through the motions with gsutil config, but it is now asking me for a "Project ID", which I cannot find anywhere in the cloud storage web page showing the bucket with the exported files.
I tried following the top answer for this question, but the project ID does not appear to be optional anymore.
How do I download my files?
The project ID is "optional" in the sense that it's only used for certain scenarios, e.g. when you want to create a bucket without explicitly specifying a project for it to live in, in which case that default project is used as its parent. For most things, like your scenario of copying existing GCS objects to your local filesystem, your default project ID doesn't matter; you can type whatever you want for the project ID in order to generate your boto file for authentication.
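In other words, something along these lines should work; the project ID entered at the prompt is effectively a throwaway value:
gsutil config   # complete the OAuth flow, then enter any value at the project ID prompt
gsutil cp -r \
  gs://takeout-export-3ba9a6a2-c080-430a-bece-6f830889cc83/20201202T070520Z/ \
  gs://takeout-export-3ba9a6a2-c080-430a-bece-6f830889cc83/Status\ Report.html \
  .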

How to upload a credential.json file to Hasura cluster without adding it to a git remote repository

I am supposed to use a credvalue.json file in an API used by the program, but I don't want to upload these credentials to the GitHub repository while still being able to use them in the microservice.
I tried adding it to .gitignore and copying it to the src folder in Docker, but that results in file not found; if I remove it from .gitignore it works well.
I can't use Hasura secrets as plain values; it's the credvalue.json file itself that the library requires.
Also, for my use case, the API requires me to specify the path of this JSON file as an environment variable, so what should the path of the uploaded JSON file be?
You should be able to use Hasura secrets with files as well, using the -f flag.
Here is the link to the docs: https://docs.hasura.io/0.15/manual/project/secrets/mounting-secret-as-file.html
You can basically create a secret from a file and then mount that secret as a file in your microservice container.

Can I download Bamboo built artifacts using Bamboo Rest - API?

This page states:
Bamboo's REST APIs provide the following capabilities:
Retrieve the artifacts for a build.
and here I see the documentation:
http://myhost.com:8085/bamboo/rest/api/latest/plan/{projectKey}-{buildKey}/artifact
[GET]
When I try this link with the bamboo server I have, like:
https://my.bamboo.server/rest/api/latest/plan/MY-PLAN/artifact
All I get is:
<artifacts expand="artifacts">
<link href="http://my.bamboo.server/rest/api/latest/plan/MY-PLAN/artifact" rel="self"/>
<artifacts start-index="0" max-result="0" size="0"/>
</artifacts>
So am I understanding the REST documentation completely wrong, or is there possibly something wrong with MY-PLAN, and is this link supposed to provide me a war file as I expect?
I'm afraid you are misunderstanding the REST documentation; by "Retrieve the artifacts for a build", it means "retrieves information about the build artifacts defined for a given plan". As you have already seen, all you get back is an XML or JSON document describing the artifacts defined.
If you want to download an actual build artifact, you'll need to write a script that uses /rest/api/latest/result/ to get the latest successful build info and, from that, form an actual download link to the artifact.
There are some issues related to your question: https://jira.atlassian.com/browse/BAM-11706
and BAM-16315 (which was deleted, because it contained customer details)
Here is the REST API documentation:
https://docs.atlassian.com/atlassian-bamboo/REST/latest
Search for the "/latest/result" documentation:
http://myhost.com:8085/rest/api/latest/result/{projectKey}-{buildKey}-{buildNumber : ([0-9]+)|(latest)} [GET]
Example xml request
https://bamboo.server.com/rest/api/latest/result/projectKey-buildKey-buildNumber?expand=artifacts
Example json request
https://bamboo.server.com/rest/api/latest/result/projectKey-buildKey-buildNumber.json?expand=artifacts
Parse the artifacts node in the response. Each artifact should have an href property.
Pass the href to curl to download the artifact. You will probably need to set up a Bamboo token for REST API authentication.
Example curl request
curl -X GET -H "Authorization: Bearer ${BAMBOO_TOKEN}" $ARTIFACT_HREF
Spent a long time looking for this answer. Ended up having to piece together information and here is what I got. As #oallauddin mentions above, you need to get the url of the file from the xml.
curl -H "Authorization: Bearer <KEY>" https://bamboo.server.com/rest/api/latest/result/projectKey-buildKey-buildNumber?expand=artifacts
Alternatively, you can get it in JSON format
curl -H "Authorization: Bearer <KEY>" https://bamboo.server.com/rest/api/latest/result/projectKey-buildKey-buildNumber.json?expand=artifacts
I think where a lot of people are getting stuck is trying to use the Authorization header to download it after. BAM-20764 shows that this is NOT possible. You have to remove the header and use basic authentication with -u. Not both. Only the -u by itself.
curl -u username:password https://bamboo.server.com/browse/projectKey-build-Key-buildNumber/artifact/shared/<ARTIFACT_NAME>/<filename> --remote-name
The --remote-name/-O flag or --remote-header-name/-J is preferred here if you want to download the file with the default original name.
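Putting the pieces above together, a rough sketch of a download script could look like this; it assumes jq is available and that the JSON shape is artifacts.artifact[].link.href, so adjust it to what your server actually returns:
# fetch the build result with artifact metadata using the REST token
curl -s -H "Authorization: Bearer ${BAMBOO_TOKEN}" \
  "https://bamboo.server.com/rest/api/latest/result/projectKey-buildKey-buildNumber.json?expand=artifacts" \
  | jq -r '.artifacts.artifact[].link.href' \
  | while read -r href; do
      # per BAM-20764, the artifact download itself needs basic auth, not the Bearer header
      curl -u username:password -O -J "$href"
    done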
You have the link:
<link href="http://my.bamboo.server/rest/api/latest/plan/MY-PLAN/artifact" rel="self"/>
Using curl you can download the artifact.
curl --user ${username}:${password} http://my.bamboo.server/rest/api/latest/plan/MY-PLAN/artifact