I am trying to install Argo CLI by following this (https://github.com/argoproj/argo-workflows/releases) documentation.
# Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.1.3/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
mv ./argo-linux-amd64 /usr/local/bin/argo
# Test installation
argo version
The above instructions are not working. So, I followed the answer to this (How to update Argo CLI?) question.
curl -sLO https://github.com/argoproj/argo/releases/download/v2.12.0-rc2/argo-linux-amd64
chmod +x argo-linux-amd64
./argo-linux-amd64
But I am getting the following error:
./argo-linux-amd64: line 1: Not: command not found
I also tried moving the argo-linux-amd64 binary to /usr/local/bin/argo, but I'm still getting the same error (as expected).
Is there any solution to this?
Thank you.
The download links on the Releases page are incorrect. Try this one:
curl -sLO https://github.com/argoproj/argo-workflows/releases/download/v3.1.3/argo-linux-amd64.gz
I've submitted an issue to get the links fixed.
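For completeness, here is the full install sequence with the corrected URL (a sketch, assuming the same v3.1.3 release and that you can write to /usr/local/bin):
# Download the binary from the argo-workflows repository
curl -sLO https://github.com/argoproj/argo-workflows/releases/download/v3.1.3/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path (sudo may be needed depending on your permissions)
sudo mv ./argo-linux-amd64 /usr/local/bin/argo
# Test installation
argo version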
I'm trying to download a webpage from archive.org (i.e. http://wayback.archive.org/web/20110410223952id_/http://www.goldalert.com/gold-price-hovers-at-1460-as-ecb-hikes-rates-2/ ) with wget. I want to download it to /00001/index.html. How would I go about doing this?
I tried wget -p -k http://wayback.archive.org/web/20110410223952id_/http://www.goldalert.com/gold-price-hovers-at-1460-as-ecb-hikes-rates-2/ -O 00001/index.html but that didn't work. I then cd'd into the directory and removed the 00001 from the -O flag. That didn't work either. I then just removed the -O flag. That worked, but I get the whole archive.org directory structure (i.e. a wayback.archive.org directory, then a web directory, and so on) and the filename isn't changed :(
What do I do?
Sorry for the obviously noob question.
wget http://wayback.archive.org/web/20110410223952id_/http://www.goldalert.com/gold-price-hovers-at-1460-as-ecb-hikes-rates-2/ -O 00001/index.html
Solved my own question. So simple.
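One thing to watch out for: wget's -O won't create the 00001 directory for you, so if it doesn't exist yet the command fails with "No such file or directory". Creating it first would look like:
# Create the target directory, then download into it
mkdir -p 00001
wget http://wayback.archive.org/web/20110410223952id_/http://www.goldalert.com/gold-price-hovers-at-1460-as-ecb-hikes-rates-2/ -O 00001/index.html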
I've spent far more time on this than I care to admit. I am trying to just deploy one file into my Artifactory server from the command line. I'm doing this using gradle because that is how we manage our java builds. However, this artifact is an NDK/JNI build artifact, and does not use gradle.
So I just need the simplest gradle script to do the deploy. Something equivalent to:
scp <file> <remote>
I am currently trying to use the artifactory plugin, and am having little luck in locating a reference for the plugin.
curl POST did not work for me. PUT worked correctly. The usage is:
curl -X PUT $SERVER/$PATH/$FILE --data-binary @localfile
Example:
$ curl -v --user username:password --data-binary @local-file -X PUT "http://<artifactory-server>/artifactory/abc-snapshot-local/remotepath/remotefile"
Instead of using the curl command, I recommend using the jfrog CLI.
Download it from here: https://www.jfrog.com/getcli/ and use the following command (make sure the file is executable):
./jfrog rt u <file-name> <upload-path>
Here is a simple example:
./jfrog rt u sample-service-1.0.0.jar libs-release-local/com/sample-service/1.0.0/
You will be prompted for credentials and the repo URL the first time.
You can do lots of other stuff with this CLI tool. Check out the detailed instructions here - https://www.jfrog.com/confluence/display/RTF/JFrog+CLI.
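If you'd rather not be prompted, the upload command also accepts the connection details as flags; a sketch (the server URL, user, and password below are placeholders):
./jfrog rt u --url=https://<artifactory-server>/artifactory --user=<username> --password=<password> sample-service-1.0.0.jar libs-release-local/com/sample-service/1.0.0/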
The documentation for the artifactory plugin can be found, as expected, in Artifactory User Guide.
Please note that it is advised to use the newer plugin, artifactory-publish, which supports the new Gradle publishing model.
Regarding uploading from the command line, you really don't need gradle for that. You can execute a simple PUT query using CURL or any other tool.
And of course if you just want to get your file into Artifactory, you can always deploy it via the UI.
Take a look at the Artifactory REST API. In most cases you can't use the scp command; instead, use the curl command against the REST API.
$ curl -X POST $SERVER/$PATH/$FILE --data @localfile
Typically it looks like:
$ curl -X POST http://localhost:8081/artifactory/abc-snapshot-local/remotepath/remotefile --data @localfile
The scp command is only needed if you really want to access the internal folder that is managed by Artifactory.
$ curl -v -X PUT \
--user username:password \
--upload-file <path to your file> \
http://localhost:8080/artifactory/libs-release-local/my/jar/1.0/jar-1.0.jar
Ironically, I'm answering my own question. After a couple more hours working on the problem, I found a sample project on github: https://github.com/JFrogDev/project-examples
The project even includes a straightforward bash script for doing the exact deploy/copy from the command line that I was looking for, as well as a couple of less straightforward gradle scripts.
As per the official docs, you can upload any file using the following command:
curl -u username:password -T <PATH_TO_FILE> "https://<ARTIFACTORY_SERVER>/<REPOSITORY_PATH>/<TARGET_FILE>"
Note: The user should have write access to this path.
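For example, with placeholder values filled in (the server, repository path, and credentials here are just illustrative):
curl -u admin:password -T target/my-app-1.0.0.jar "https://artifactory.example.com/artifactory/libs-release-local/com/example/my-app/1.0.0/my-app-1.0.0.jar"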
I was wondering if there was a command to download the contents of a remote folder, i.e all the files contained within that specific folder.
For instance, if we take the URL http://plugins.svn.wordpress.org/hello-dolly/trunk/ - How would it be possible to download the two files contained within the trunk onto my local machine without having to download each file manually?
Also, if there is a way to download all contents including both files AND any listed subdirectories that would be great.
If you ever need to download an entire Web site, perhaps for off-line viewing, wget can do the job.
For example:
$ wget \
--recursive \
--no-clobber \
--page-requisites \
--html-extension \
--convert-links \
--restrict-file-names=windows \
--domains wordpress.org \
--no-parent \
http://plugins.svn.wordpress.org/hello-dolly/trunk/
This command downloads the Web site http://plugins.svn.wordpress.org/hello-dolly/trunk/
The options are:
--recursive: download the entire Web site.
--domains wordpress.org: don't follow links outside wordpress.org.
--no-parent: don't follow links outside the directory hello-dolly/trunk/.
--page-requisites: get all the elements that compose the page (images, CSS and so on).
--html-extension: save files with the .html extension.
--convert-links: convert links so that they work locally, off-line.
--restrict-file-names=windows: modify filenames so that they will work in Windows as well.
--no-clobber: don't overwrite any existing files (used in case the download is interrupted and resumed).
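For reference, the same command with wget's short options would look roughly like this (--restrict-file-names has no single-letter form):
wget -r -nc -p -E -k -np -D wordpress.org --restrict-file-names=windows http://plugins.svn.wordpress.org/hello-dolly/trunk/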