I uploaded a folder structure with a single file inside to an existing gcloud storage bucket.
C:\Users\Administrator\Desktop>gcloud alpha storage cp -r testfolder gs://auction-engine-upload
Copying file://testfolder\testSubfolder\MAXPOWER.png to gs://auction-engine-upload/testfolder/testSubfolder/MAXPOWER.png
Completed files 1/1 | 10.0kiB/10.0kiB
Then I tried to verify the file was uploaded by using the ls command:
gcloud alpha storage ls gs://auction-engine-upload
This lists about 40 directories, none of which is /testfolder, so I tried a few different ways to list only /testfolder:
gcloud alpha storage ls gs://auction-engine-upload/testfolder
gcloud alpha storage ls gs://auction-engine-upload/testfolder/
gcloud alpha storage ls gs://auction-engine-upload/testfolder/*
But I keep getting this error:
ERROR: (gcloud.alpha.storage.ls) One or more URLs matched no objects.
Am I screwing up the syntax, or is the file actually not uploaded?
I don't have access to change the permissions in the bucket, so I had to have the account owner create another bucket and give me permission to create the file there.
I would like to move files that are available in the system working directory in the Azure pipeline to a Kubernetes pod.
Method one (Kubectl cp command)
kubectl cp D:\a\r1\a\test-files\westus\test.txt /test-745f6564dd:/inetpub/wwwroot/
D:\a\r1\a\test-files\westus\test.txt -- my system working directory file location
(name-space)/test-745f6564dd:/inetpub/wwwroot/ -- kubernetes pod location
I have tried to use the kubectl cp command but am facing an error.
error: one of src or dest must be a local file specification
Method two (command line tool in Azure DevOps)
I also tried using the command line to copy files from one directory to another.
cd C:\inetpub\wwwroot>
copy C:\inetpub\wwwroot\west\test.txt C:\inetpub\wwwroot\
When this task is executed in the Azure pipeline, it throws an error:
The syntax of the command is incorrect.
Method three (Azure CLI)
I have tried using the Azure CLI to log in to Kubernetes and ran the commands below. No errors are thrown, but the file is not copied either.
az aks get-credentials --resource-group test --name test-dev
cd C:\inetpub\wwwroot
dir
copy C:\inetpub\wwwroot\west\test.txt C:\inetpub\wwwroot\
Is there any way to do this operation?
For the first error:
error: one of src or dest must be a local file specification
Try running the kubectl cp command from the same directory where your file is, and instead of giving the whole path, try something like this:
kubectl cp test.txt /test-745f6564dd:/inetpub/wwwroot/test.txt
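If the pod lives in a specific namespace, a fuller sketch (the namespace my-namespace is a placeholder; the pod name and path are taken from the question) would be:
kubectl cp test.txt my-namespace/test-745f6564dd:/inetpub/wwwroot/test.txt
Note that kubectl cp relies on a tar binary being present inside the container, so it may not work with every Windows image.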
I'm trying to use the original riak-kv image in docker-compose, and I want to add one bucket on init, but docker-compose up won't start. How can I edit volumes.schemas to add a bucket on init?
Does the original image allow adding a riak.conf file in docker-compose? If yes, how can I do that?
Creating a bucket type with a custom datatype
I assume you want to create a bucket type when starting your container. You have to create a file in the /etc/riak/schemas directory with the bucket's name, like bucket_name.dt. The file should contain a single line with the type you would like to create (e.g. counter, set, map, hll).
You can also use the following command to create the file:
echo "counter" > schemas/bucket_name.dt
After that, you just have to mount the schemas folder containing the file to the /etc/riak/schemas directory in the container:
docker run -d -P -v $(pwd)/schemas:/etc/riak/schemas basho/riak-ts
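Since the question is about docker-compose, here is a minimal sketch of the same mount expressed as a compose file (the service name, the published ports, and the basho/riak-kv image tag are assumptions, not taken from the question):
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  riak:
    image: basho/riak-kv
    ports:
      - "8087:8087"   # protocol buffers API
      - "8098:8098"   # HTTP API
    volumes:
      - ./schemas:/etc/riak/schemas   # schemas/bucket_name.dt ends up in /etc/riak/schemas
EOF
docker-compose up -d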
Creating a bucket type with default datatype
Currently, creating a bucket type with a default datatype is only possible if you add a custom post-start script under the /etc/riak/poststart.d directory.
Create a shell script with the command you would like to run. An example can be found here.
You have to mount it as a read-only file into the /etc/riak/poststart.d folder:
docker run -d -P -v $(pwd)/poststart.d/03-bootstrap-my-datatype.sh:/etc/riak/poststart.d/03-bootstrap-my-datatype.sh:ro basho/riak-ts
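As a rough sketch (the bucket type name my_type and its empty properties are placeholders, not taken from the linked example), such a post-start script could simply call riak-admin:
#!/bin/bash
# poststart.d/03-bootstrap-my-datatype.sh (hypothetical contents)
# create a bucket type with default properties, then activate it
riak-admin bucket-type create my_type '{"props":{}}'
riak-admin bucket-type activate my_type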
References
See the whole documentation for the Docker images here. The rest can be found on GitHub.
Also, the available datatypes can be found here.
I'm running Cloud Build using a remote builder and am able to copy all files in the workspace to my own VM, but I am unable to copy hidden files.
Command used to copy files
gcloud compute scp --compress --recurse '/workspace/*' [username]@[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a
So this copies only non-hidden files.
I also used the dot operator to try to copy hidden files:
gcloud compute scp --compress --recurse '/workspace/.' [username]@[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a
Still not able to copy, and I got the error below:
scp: error: unexpected filename: .
Can anyone suggest how to copy hidden files to the VM using gcloud compute scp?
Thanks in advance
If you remove the trailing character after the slash, it may work. For example, this worked for me:
gcloud compute scp --compress --recurse 'test/' [username]@[instance_name]:/home/myfolder
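If that still skips dotfiles in your setup, another workaround (a sketch I have not verified against Cloud Build, reusing the key and zone flags from the question) is to archive the workspace first so hidden files are included, then copy and unpack the archive:
tar -czf /tmp/workspace.tgz -C /workspace .
gcloud compute scp --compress /tmp/workspace.tgz [username]@[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a
gcloud compute ssh [username]@[instance_name] --ssh-key-file=my-key --zone=us-central1-a --command='tar -xzf /home/myfolder/workspace.tgz -C /home/myfolder'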
For backups, I have set versioning in GCS.
Then I created a folder and put a file in it. After that, I deleted the folder.
Then I used the gsutil ls -alr command, but I cannot find the file in the bucket.
I found the folder, but I cannot restore the file in the folder.
When I delete a folder, why can't I restore a file in that folder, even though versioning is set on the GCS bucket?
Files in the Google Cloud Storage bucket that are archived and NOT live at the time the folder is deleted remain in the archived list and can be retrieved.
For example you can:
Create a folder in the bucket using the Google Cloud Console: gs://[BUCKET_NAME]/example
Put a file in the folder using the Google Cloud Console: gs://[BUCKET_NAME]/example/file_1.txt
Put another file in the folder using the Google Cloud Console: gs://[BUCKET_NAME]/example/file_2.txt
Using the Google Cloud Console, delete file_1.txt
Using the Google Cloud Console, delete the folder example
Run the command gsutil ls -alr gs://[BUCKET_NAME]/example
You will see a result as follows:
$ gsutil ls -alr gs://[BUCKET_NAME]/example
gs://[BUCKET_NAME]/example/:
11 2019-02-27T11:48:54Z gs://[BUCKET_NAME]/example/#1551268... metageneration=1
14 2019-02-27T11:49:49Z gs://[BUCKET_NAME]/example/file_1.txt#1551268189... metageneration=1
TOTAL: 2 objects, 25 bytes (25 B)
You will notice that only file_1.txt is available for retrieval, since it is the one that was archived and NOT LIVE when the folder was deleted.
Also, to list all the archived objects of the bucket you can run gsutil ls -alr gs://[BUCKET_NAME]/**.
So if your files were archived (deleted) before the folder was deleted, you can list them using gsutil ls -alr gs://[BUCKET_NAME]/** and retrieve them by copying the archived version back over the live name. For more info, visit the Using Object Versioning > Copying archived object versions documentation.
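As a sketch, with [GENERATION] standing in for the full generation number that gsutil ls -a prints after the # sign, restoring an archived version is just a copy back to the live object name:
gsutil cp gs://[BUCKET_NAME]/example/file_1.txt#[GENERATION] gs://[BUCKET_NAME]/example/file_1.txt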
So my problem is that I have a few files not showing up in gcsfuse when mounted. I see them in the online console and if I 'ls' with gsutil.
Also, if I manually create the folder in the bucket, I can then see the files inside it, but I need to create it first. Any suggestions?
gs://mybucket/
  dir1/
    ok.txt
    dir2
      lafu.txt
If I mount mybucket with gcsfuse and do 'ls' it only returns dir1/ok.txt.
Then I'll create the folder dir2 inside dir1 at the root of the mounting point, and suddenly 'lafu.txt' shows up.
By default, gcsfuse won't show a directory that is only "implicitly" defined by a file with a slash in its name. For example, if your bucket contains an object named dir/foo.txt, you won't be able to find it unless there is also an object named dir/.
You can work around this by setting the --implicit-dirs flag, but there are good reasons why this is not the default. See the documentation for more information.
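For example, a mount with the flag enabled (the local mount point path here is a placeholder) would look like:
gcsfuse --implicit-dirs mybucket ~/mnt/mybucket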
Google Cloud Storage doesn't have folders. The various interfaces use different tricks to pretend that folders exist, but ultimately there's just an object whose name contains a bunch of slashes. For example, "pictures/january/0001.jpg" is the full name of a single object.
If you need to be sure that a "folder" exists, put an object inside it.
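As a hedged illustration of that advice (the .keep name is just a convention, not anything GCS requires), you can drop a zero-byte object under the prefix:
touch .keep
gsutil cp .keep gs://mybucket/dir1/dir2/.keep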
@Brandon Yarbrough suggests creating needed directory entries in the GCS bucket. This avoids the performance penalty described by @jacobsa.
Here is a bash script for doing so:
#!/bin/bash
# 1. Mount $BUCKET_NAME at $MOUNT_PT
# 2. Run this script
MOUNT_PT=${1:-$HOME/mnt}   # local mount point (default: $HOME/mnt)
BUCKET_NAME=$2             # GCS bucket name, without the gs:// prefix
DEL_OUTFILE=${3:-y}        # Set to y or n to delete the temp file at the end
echo "Reading objects in $BUCKET_NAME"
OUTFILE=dir_names.txt
gsutil ls -r gs://$BUCKET_NAME/** | while read BUCKET_OBJ
do
dirname "$BUCKET_OBJ"
done | sort -u > $OUTFILE
echo "Processing directories found"
cat $OUTFILE | while read DIR_NAME
do
LOCAL_DIR=`echo "$DIR_NAME" | sed "s=gs://$BUCKET_NAME/==" | sed "s=gs://$BUCKET_NAME=="`
#echo $LOCAL_DIR
TARG_DIR="$MOUNT_PT/$LOCAL_DIR"
if ! [ -d "$TARG_DIR" ]
then
echo "Creating $TARG_DIR"
mkdir -p "$TARG_DIR"
fi
done
if [ "$DEL_OUTFILE" = "y" ]
then
rm $OUTFILE
fi
echo "Process complete"
I wrote this script, and have shared it at https://github.com/mherzog01/util/blob/main/sh/mk_bucket_dirs.sh.
This script assumes that you have mounted a GCS bucket locally on a Linux (or similar) system. The script first specifies the GCS bucket and location where the bucket is mounted. It then identifies all "directories" in the GCS bucket which are not visible locally, and creates them.
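For example, assuming the bucket is already mounted, an invocation (the mount point and bucket name are placeholders) might look like:
./mk_bucket_dirs.sh "$HOME/mnt/mybucket" mybucket y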
This (for me) fixed the issue with folders (and associated objects) not showing up in the mounted folder structure.