What is the command to download an entire container using the swift client?

I am using the following example command to download a container:
swift -A https://127.0.0.1:443/auth/v1.0 -U swiftops:swiftops -K KEY download container_name
But I am not able to download the entire container; only the first 10000 objects come down. Is there any way to download everything automatically? I heard about the --all parameter, but I do not understand its usage.
swift -A https://127.0.0.1:443/auth/v1.0 -U swiftops:swiftops -K KEY download container_name -a
That gave me an invalid parameter usage exception.
Kindly help me with this.

Using the argument -m with the container name downloads the entire container. Below is the updated command.
swift -A https://127.0.0.1:443/auth/v1.0 -U swiftops:swiftops -K KEY download container_name -m container_name
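If -m alone still stops at the listing limit, a workaround is to list every object and download each one individually. A minimal sketch, assuming swift list pages through the full container listing internally (KEY is again a placeholder for the real account key):

swift -A https://127.0.0.1:443/auth/v1.0 -U swiftops:swiftops -K KEY list container_name | while read -r OBJ; do
    swift -A https://127.0.0.1:443/auth/v1.0 -U swiftops:swiftops -K KEY download container_name "$OBJ"
done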

Related

Locust docker container cannot find the locust file

I am trying to run a locustfile with the locustio/locust Docker image, but it cannot find the locustfile, even though the file exists in the mounted locust directory.
docker run -p 8089:8089 -v $PWD:/locust locustio/locust locust -f /locust/locustfile.py
Could not find any locustfile! Ensure file ends in '.py' and see --help for available options.
(I'm reposting this question as my own, because the original poster deleted it immediately after getting an answer!)
Remove the extra "locust" from your command, so that it becomes:
docker run ... locustio/locust -f /locust/locustfile.py
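The image's entrypoint already invokes locust, so the extra locust argument gets passed to it as if it were an option. Reassembling the original command with the fix (same flags and mount as above):

docker run -p 8089:8089 -v $PWD:/locust locustio/locust -f /locust/locustfile.py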

One command to restore local PostgreSQL dump into kubectl pod?

I'm just seeing if there is one command to restore a local backup to a pod running postgres. I've been unable to get it working with:
kubectl exec -it <pod> -- <various commands>
So I've just resorted to:
kubectl cp <my-file> <my-pod>:<my-file>
Then restoring it.
Thinking there is likely a better way, so thought I'd ask.
You can run pg_restore directly in the pod, streaming your local dump file to it over stdin (connection options may vary depending on the image you're using), e.g.:
kubectl exec -i POD_NAME -- pg_restore -U USERNAME -C -d DATABASE < dump.sql
If the file were in S3 or some other location reachable from the pod, you could instead bake a script into the container that downloads the file and performs the restore, all in one bash file.
That should allow you to perform the restore in a single command.
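A minimal sketch of such a script, assuming the AWS CLI is available in the image; the bucket, file names, and connection values are placeholders:

#!/bin/bash
# restore.sh - fetch the dump, then restore it into the pod's database
set -e
aws s3 cp s3://MY_BUCKET/dump.sql /tmp/dump.sql
pg_restore -U USERNAME -C -d DATABASE /tmp/dump.sql

With that baked into the image, the whole restore becomes:
kubectl exec POD_NAME -- /restore.sh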
cat mybackup.dmp | kubectl exec -i ... -- pg_restore ...
Or something like that.

wget doesn't see/download certain files?

I am trying to download all files starting with traceroute from https://data-store.ripe.net/datasets/atlas-daily-dumps/ via wget.
I am running the following command:
wget -A traceroute* -m -np https://data-store.ripe.net/datasets/atlas-daily-dumps/ --no-check-certificate
It creates the directories, fetches the index.html files, and then stops within 5 minutes, without downloading any traceroute files.
When I try another type of file via
wget -A connection* -m -np https://data-store.ripe.net/datasets/atlas-daily-dumps/ --no-check-certificate
it downloads the connection files no problem. What can be the issue?
You probably have a local file that matches the glob traceroute*; you need to put single quotes around the pattern so the shell passes it to wget unexpanded:
wget -A 'traceroute*' -m -np https://data-store.ripe.net/datasets/atlas-daily-dumps/ --no-check-certificate
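To see the difference, compare what wget actually receives when an unquoted pattern matches a local file (the file name below is hypothetical):

touch traceroute-20170101.bz2
echo wget -A traceroute* -m -np https://data-store.ripe.net/datasets/atlas-daily-dumps/

The echo prints wget -A traceroute-20170101.bz2 ..., because the shell expands the glob before the command runs; wget then only accepts files matching that one literal name.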
Specifying traceroute*.bz2 seems to have fixed the problem.

How to fill in a Docker mongodb image

I don't really understand how I can populate my MongoDB image with a simple script like demoInstall.js.
I have my image and I can launch a MongoDB instance, but I cannot get at the Docker image's mongo shell to feed it the script.
I tried this command:
sudo docker run --entrypoint=/bin/cat mongo/ubuntu /tmp/devInstall.js | mongo --host IPAdress
But it's using the local mongo and not the one in the image :/
Finally, my aim is simple: I need to pull my image onto a fresh server and launch a basic bash script that fills some information into the Dockerized DB.
Your command pipes the output of docker run into your local mongo. You can run the whole pipeline inside the container with an explicit bash -c instead:
sudo docker run -it --rm mongo/ubuntu /bin/bash -c '/bin/cat /tmp/devInstall.js | mongo --host IPAdress'
I am not sure the IPAdress placeholder will be reachable from inside the container, though. You might want to pass it via an environment variable or use container linking.
I would mount a volume with this argument:
-v /local_init_script_folder:/bootstrap
Then, with a command line similar to the one you proposed, you can access the contents of the folder as /bootstrap from within the container.
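Putting the volume mount and the in-container pipeline together, a minimal sketch (the local folder, script name, and IPAdress are placeholders, and the mongo shell is assumed to exist in the image):

sudo docker run -it --rm -v /local_init_script_folder:/bootstrap mongo/ubuntu /bin/bash -c 'mongo --host IPAdress /bootstrap/devInstall.js'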

How to customize heroku CLI?

I need to download my database from Heroku. How can I add these flags: -a (data only), -x (no privileges), -O (no owner) to the CLI?
Currently I use:
heroku pgbackups:capture
curl -o latest.dump `heroku pgbackups:url`
It doesn't seem like you can pass flags to pgbackups:capture. You can, however, use pg_dump directly.
pg_dump DATABASE_NAME -h DATABASE_HOST -p PORT -U USER -F c -a -x -O -f db.dump
You can get the database values by running heroku pg:credentials DATABASE_URL. You can also use the plugin a colleague and I wrote: parse_db_url. It lets you run a command like heroku pg:parse_db_url --format pg_dump and get a usable pg_dump command as output.
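Since the dump above is written with pg_dump's custom format (-F c), a hedged example of loading it into a local database afterwards (LOCAL_DB is a placeholder):

pg_restore -d LOCAL_DB db.dump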