Riak-KV: how to create a bucket in a docker-compose file?

I'm trying to use the original riak-kv image with docker-compose, and I want to add one bucket on init, but docker-compose up won't start. How can I edit volumes/schemas to add a bucket on init?
Does the original image allow adding a riak.conf file through docker-compose? If yes, how can I do that?

Creating a bucket type with a custom datatype
I assume you want to create a bucket type when starting your container. You have to create a file in the /etc/riak/schemas directory with the bucket's name, like bucket_name.dt. The file should contain a single line with the type you would like to create (e.g. counter, set, map, hll).
You can also use the following command to create the file:
echo "counter" > schemas/bucket_name.dt
After that, you just have to mount the schemas folder containing the file to the /etc/riak/schemas directory in the container:
docker run -d -P -v $(pwd)/schemas:/etc/riak/schemas basho/riak-ts
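Since the question is specifically about docker-compose, the same mount can be declared in a compose file. A minimal sketch, assuming the basho/riak-kv image and a ./schemas folder next to the compose file (adjust the image, ports and paths to your setup):
version: "3"
services:
  riak:
    image: basho/riak-kv        # or basho/riak-ts, as in the docker run example above
    ports:
      - "8087:8087"             # Protocol Buffers API
      - "8098:8098"             # HTTP API
    volumes:
      - ./schemas:/etc/riak/schemas
With that in place, docker-compose up should create the bucket types listed in the schemas folder the same way the docker run example does.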
Creating a bucket type with default datatype
Currently, creating a bucket type with the default datatype is only possible if you add a custom post-start script under the /etc/riak/poststart.d directory.
Create a shell script with the command you would like to run; an example can be found here, and a sketch is shown below.
You have to mount it as a read-only file into the /etc/riak/poststart.d folder:
docker run -d -P -v $(pwd)/poststart.d/03-bootstrap-my-datatype.sh:/etc/riak/poststart.d/03-bootstrap-my-datatype.sh:ro basho/riak-ts
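For reference, here is a minimal sketch of such a post-start script, assuming you want a bucket type named my_type with the default properties; riak-admin bucket-type create and riak-admin bucket-type activate are the standard commands for this:
#!/bin/bash
# 03-bootstrap-my-datatype.sh (the file name used in the mount above)
# Create a bucket type with default properties, then activate it.
riak-admin bucket-type create my_type '{"props":{}}'
riak-admin bucket-type activate my_type
In a docker-compose file the script can be mounted the same way, by listing it under the service's volumes section with the :ro flag.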
References
See the full documentation for the Docker images here; the rest can be found on GitHub.
Also, the available datatypes can be found here.

Related

docker-compose cannot find the yaml file

I've placed a docker compose file project.yaml at the location /etc/project/project.yaml
The file as well as the project directory have the same file permissions, i.e. -rwxrwxrwx,
but when I run docker-compose
sudo docker-compose -f ./project.yaml up -d
it errors out with the following:
Cannot find the file ./project.yaml
I have checked several times and it seems there is no permission issue. Can anyone tell me why this happens and what the solution would be?
Besides using the full path, as commented by quoc9x, double-check your current working directory when you call a command with a relative path like ./project.yaml.
If you are not in the right folder, that would explain the error message.
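For example, with the paths from the question, either of these should work (assuming the file really is at /etc/project/project.yaml):
# Option 1: run the command from the directory that contains the file
cd /etc/project
sudo docker-compose -f project.yaml up -d
# Option 2: pass the absolute path, so the current directory no longer matters
sudo docker-compose -f /etc/project/project.yaml up -d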

"No such file or directory" when importing local file into docker container using docker exec

When running a shell command in docker exec with a local file as an argument, it fails with
-bash:  docker/mongo.archive: No such file or directory
$ docker exec -i 4cb4a63af40c sh -c 'mongorestore --archive' < 'docker/mongo.archive'
-bash:  docker/mongo.archive: No such file or directory
However, the file clearly exists at the given location:
$ ls docker/mongo.archive
docker/mongo.archive
I remember using the exact same command and it worked. Also, I tried calling the command from within its directory (./docker) as well as from outside, using relative paths. Using the absolute path fails as well. Any ideas?
Remark: 4cb4a63af40c is a mongodb container.
Adjust the quotes so that the redirection is part of the command run inside the container; that way the path is resolved by the shell inside the container rather than by your local shell:
docker exec -i 4cb4a63af40c sh -c 'mongorestore --archive < docker/mongo.archive'

Unable to copy hidden files using gcloud scp in cloud build - remote builder

I'm running Cloud Build with a remote builder and am able to copy all files in the workspace to my own VM, but unable to copy hidden files.
Command used to copy files
gcloud compute scp --compress --recurse '/workspace/*' [username]@[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a
This copies only non-hidden files.
I also used the dot operator to copy hidden files:
gcloud compute scp --compress --recurse '/workspace/.' [username]@[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a
Still not able to copy; I got the error below:
scp: error: unexpected filename: .
Can anyone suggest how to copy hidden files to the VM using gcloud scp?
Thanks in advance.
If you remove the trailing character after the slash (i.e. pass '/workspace/' instead of '/workspace/*' or '/workspace/.'), it may work. For example, this worked for me:
gcloud compute scp --compress --recurse 'test/' [username]@[instance_name]:/home/myfolder
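Applied to the command from the question, that would be something like the following (a sketch; the key file, zone and placeholders are copied from the question):
gcloud compute scp --compress --recurse '/workspace/' [username]@[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a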

Sparkleformation: No output received from `sfn print`

I am attempting to try out SparkleFormation following the documentation, but I do not receive any output from the command bundle exec sfn print --file compute. I have also tried different paths to my compute file with no change. I would like to view the CloudFormation JSON template that this is supposed to create. How can I do that?
The guide is somewhat incomplete and I needed to add a name for my stack and the exact path to the template file into the command: bundle exec sfn print my-stack-name --file sparkleformation/compute.rb

GCS - multiple credentials in a single boto file

New to GCS (just got started with it today). Looks very promising.
Is there any way to use multiple S3 (or GCS) accounts in a single boto file? I only see the option to assign keys to one S3 and one GCS account in a single file. I'd like to use multiple credentials.
We'd like to copy from S3 to S3, or GCS to GCS, with each of those buckets using different keys.
You should be able to set up multiple profiles within your .boto file.
You could add something like:
[profile prod]
gs_access_key_id=....
gs_secret_access_key=....
[profile dev]
gs_access_key_id=....
gs_secret_access_key=....
And then from your code you can add a profile_name= parameter to the connection call:
import boto
conn = boto.connect_gs(profile_name="dev")
You can definitely use multiple boto files; just make sure that the credentials in each of them are valid. Every time you need to switch between them, run the following command with the right path:
$ BOTO_CONFIG=/path/to_boto gsutil cp SOME_FILE gs://bucket
Example:
BOTO_CONFIG=/etc/boto.cfg gsutil -m cp text.txt gs://bucket
Additionally, you can have aliases for your different profiles. Just create an alias for each command and you are set!
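For example, a sketch of such aliases, assuming one boto file per environment (the paths and alias names are made up):
alias gsutil_prod='BOTO_CONFIG=/path/to/boto_prod gsutil'
alias gsutil_dev='BOTO_CONFIG=/path/to/boto_dev gsutil'
# then, for example:
gsutil_dev cp text.txt gs://bucket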