Moving files from one folder to an archive folder in the same remote directory - powershell

I am trying to move remote files from one folder to another but keep getting a failure error.
Code:
$command = @("mv /specific/directory/path/source/* /specific/directory/path/destination", "ls /specific/directory/path/source")
$psftpPath = "local/path/to/psftp.exe"
$command | & $psftpPath -pw $password "$User@$Host" -be
Error:
mv /user/specific/directory/path/source/file.extension /user/specific/directory/path/destination/file.extension: failure
The ls command does show all the right files in source.

The Failure error refers to SFTP status code 4, which has the following common causes:
Renaming a file to a name of already existing file.
Creating a directory that already exists.
Moving a remote file to a different filesystem (HDD).
Uploading a file to a full filesystem (HDD).
Exceeding a user disk quota.
I haven't tried your script, but I think you can follow the list above as a checklist.
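If the first cause applies (a file with the same name already exists in the destination), one possible workaround is to delete the destination copies before the move. This is only a sketch using the placeholder paths from the question, and it assumes overwriting the archived copies is acceptable:
rm /specific/directory/path/destination/*
mv /specific/directory/path/source/* /specific/directory/path/destination
ls /specific/directory/path/source
Because the batch runs with -be, psftp continues with the remaining commands even if the rm finds nothing to delete.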

Related

How to download all bucket files (an issue with gsutil's -m flag)

I am trying to copy all files from a Cloud Storage bucket recursively, and my investigation suggests the problem lies with the -m flag.
The command that I am running
gsutil -m cp -r gs://{{ src_bucket }} {{ bucket_backup }}
I am getting something like this:
CommandException: 1 file/object could not be transferred.
where the number of files/objects differs every time.
After investigating, I tried reducing the number of threads/processes used with the -m option, but this has not helped, so I am looking for advice. I have 170 MiB of data in the bucket, which is approximately 300k files, and I need to download them as fast as possible.
UPD:
Logs with -L flag
[Errno 2] No such file or directory: '<path>/en_.gstmp' -> '<path>/en'
6 errors like that.
The root of the issue might be that both a directory and a file of the same name exist in the GCS bucket. Try executing the command with the -L flag; this gives you additional logs on the execution, so you can find the file that is causing the error.
I would suggest you delete that file, make sure there is no directory of that name in the bucket, and then upload the file to the bucket again.
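With gsutil cp, -L takes a manifest file name and records the outcome of every transfer. A sketch based on the question's command (the bucket names are the question's own placeholders):
gsutil -m cp -r -L manifest.log gs://{{ src_bucket }} {{ bucket_backup }}
Each manifest row includes the object name, the result, and any error description, which makes the offending files easy to pick out.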
Also check whether any directories were created with the JAR name; delete them and proceed with copying the files again.
Also check whether the required file already exists at the destination; if so, delete the file at the destination and execute the copy again.
There are alternatives to cp; for example, it is possible to transfer files using gsutil rsync, as described here.
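For example, a sketch with the question's placeholder names:
gsutil -m rsync -r gs://{{ src_bucket }} {{ bucket_backup }}
Since rsync only transfers missing or changed objects, a failed run can simply be re-executed to pick up the files that did not make it.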
You can also check similar threads: thread1, thread2 & thread3

install4j: Installation doesn't create an alternativeLogfile

When I invoke the installer with:
installerchecker_windows-x64_19_2_1_0-SNAPSHOT.exe
-q
-c
-varfile install.varfile
-Dinstall4j.alternativeLogfile=d:/tmp/logs/installchecker.log
-Dinstall4j.logToStderr=true
it creates and writes the standard log file installation.log in the .install4j directory, but doesn't create my custom log in d:/tmp/logs. As configured, there is an additional error.log with the correct content.
The installation.log shows the command-line config: install4j.alternativeLogfile=d:/tmp/logs/installchecker.log
The directory d:/tmp/logs has full access.
Where is the failure in my config?
The alternative log file is intended to debug situations where the installer fails. To avoid moving the log file to its final destination in .install4j/installation.log, the VM parameter
-Dinstall4j.noPermanentLogFile=true
can be specified.
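For example, combining the invocation from the question with this parameter (a sketch; the paths and file names are the question's own):
installerchecker_windows-x64_19_2_1_0-SNAPSHOT.exe -q -c -varfile install.varfile "-Dinstall4j.alternativeLogfile=d:/tmp/logs/installchecker.log" -Dinstall4j.noPermanentLogFile=true
With install4j.noPermanentLogFile set, the log is not moved to .install4j/installation.log at the end of the installation, so the alternative log file in d:/tmp/logs should remain in place.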

Folders not showing up in Bucket storage

So my problem is that I have a few files not showing up in gcsfuse when the bucket is mounted. I see them in the online console and if I 'ls' with gsutil.
Also, if I manually create the folder in the bucket, I can then see the files inside it, but I need to create it first. Any suggestions?
gs://mybucket/
    dir1/
        ok.txt
    dir2/
        lafu.txt
If I mount mybucket with gcsfuse and do 'ls' it only returns dir1/ok.txt.
Then I create the folder dir2 at the root of the mounting point, and suddenly 'lafu.txt' shows up.
By default, gcsfuse won't show a directory that is only "implicitly" defined by a file with a slash in its name. For example, if your bucket contains an object named dir/foo.txt, you won't be able to find it unless there is also an object named dir/.
You can work around this by setting the --implicit-dirs flag, but there are good reasons why this is not the default. See the documentation for more information.
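For example (a sketch; the mount point is arbitrary):
gcsfuse --implicit-dirs mybucket /path/to/mount
Listing implicit directories requires extra object lookups, which is part of the performance trade-off behind the default behaviour.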
Google Cloud Storage doesn't have folders. The various interfaces use different tricks to pretend that folders exist, but ultimately there's just an object whose name contains a bunch of slashes. For example, "pictures/january/0001.jpg" is the full name of a single object.
If you need to be sure that a "folder" exists, put an object inside it.
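For example, a placeholder object makes the "folder" visible in the console and to gsutil (a sketch; the file name is arbitrary):
touch placeholder.txt
gsutil cp placeholder.txt gs://mybucket/dir2/placeholder.txt
Note that for gcsfuse specifically this is not enough on its own: as described above, the mount also needs an object named dir2/ (or the --implicit-dirs flag) before the directory appears.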
@Brandon Yarbrough suggests creating the needed directory entries in the GCS bucket. This avoids the performance penalty described by @jacobsa.
Here is a bash script for doing so:
# Usage:
#   1. Mount $BUCKET_NAME at $MOUNT_PT
#   2. Run this script
MOUNT_PT=${1:-$HOME/mnt}   # where the bucket is mounted (default: $HOME/mnt)
BUCKET_NAME=$2
DEL_OUTFILE=${3:-y}        # delete the intermediate file when done? (y or n)

echo "Reading objects in $BUCKET_NAME"
OUTFILE=dir_names.txt
# Record the unique "directory" prefix of every object in the bucket
gsutil ls -r "gs://$BUCKET_NAME/**" | while read BUCKET_OBJ
do
    dirname "$BUCKET_OBJ"
done | sort -u > "$OUTFILE"

echo "Processing directories found"
cat "$OUTFILE" | while read DIR_NAME
do
    # Strip the gs://<bucket>/ prefix to get a path relative to the mount point
    LOCAL_DIR=$(echo "$DIR_NAME" | sed "s=gs://$BUCKET_NAME/==" | sed "s=gs://$BUCKET_NAME==")
    TARG_DIR="$MOUNT_PT/$LOCAL_DIR"
    if ! [ -d "$TARG_DIR" ]
    then
        echo "Creating $TARG_DIR"
        mkdir -p "$TARG_DIR"
    fi
done

# Clean up the intermediate file unless asked to keep it
if [ "$DEL_OUTFILE" = "y" ]
then
    rm "$OUTFILE"
fi
echo "Process complete"
I wrote this script, and have shared it at https://github.com/mherzog01/util/blob/main/sh/mk_bucket_dirs.sh.
This script assumes that you have mounted a GCS bucket locally on a Linux (or similar) system. It takes the mount point and the GCS bucket name as parameters, then identifies all "directories" in the GCS bucket that are not visible locally, and creates them.
This (for me) fixed the issue with folders (and associated objects) not showing up in the mounted folder structure.
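For example, if mybucket from the question is mounted at $HOME/mnt, a run might look like this (the final argument tells the script to delete its intermediate dir_names.txt):
./mk_bucket_dirs.sh "$HOME/mnt" mybucket y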

Robocopy Error 3 "System cannot find the file specified" for 1 file in the folder

I am trying to copy a folder from a network path to my machine, but while copying, robocopy suddenly gets stuck on one file in the folder and does not proceed. I am seeing ERROR 3 (0x00000003), "The system cannot find the path specified.", even though the file does exist in the source directory.
The command that I use is
ROBOCOPY source destination /MIR /Z /Log+:logs.txt
I am seeing this issue when my executable is triggered by the Task Scheduler. The issue does not happen when I run the exe directly. Any idea why this could be happening, and how to fix it?
When robocopy claims it cannot find a file, it often means it is running into permission issues. If your script works from the command line but not from the Task Scheduler, make sure the task runs with your user credentials. You can set them in the task properties, on the General tab under Security options.
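One way to set the account from the command line is schtasks; a hypothetical sketch (the task name, schedule, and script path are placeholders):
schtasks /Create /TN "NightlyMirror" /TR "C:\scripts\mirror.cmd" /SC DAILY /ST 02:00 /RU MYDOMAIN\myuser /RP *
The /RU option sets the account the task runs under, and /RP * prompts for the password rather than putting it on the command line.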

I want to prompt a user for the first 5 characters of a file then have it search for the files

I'm trying to write a script that will prompt the user for the first 5 characters of a file name, then search the directories for any files that start with those characters. I then want it to check whether a folder named after the files exists; if not, create one and move the files there. If the directory already exists, just move the files into it.
Break it down step by step:
"prompt the user for the first 5 characters of a file name" -- you can use the shell read command to get the data. Try a simple shell script:
#!/bin/bash
read foo
echo "foo = $foo"
"if a folder is created with the file names" -- you can use find to see if a file exists. for example:
find . -name abcde\*
"But if there is a directory for it already then to just move the files too the folder." -- the command mkdir takes a -p option so that, if the directory already exists, it won't do anything.