Midnight Commander copies file locally before actual transfer to remote folder

Using mc, every time I move a file to a remote location, mc seems to cache the file or copy it somewhere temporary before making the actual transfer to the remote folder. Is it possible to force mc not to do that before making the actual transfer over the network?


Export a CSV file from AS400 to my PC through a CL program

I want to export a database file, created through a query, from the AS400 machine to my PC as a CSV file.
Is there a way to create that connection between the AS400 and my PC through a CL program?
An idea of what I want to do can be derived from the following code:
CLRPFM DTABASENAME
RUNQRY QRY(QRYTEST1)
CHGVAR VAR(&PATH) VALUE('C:\TESTS')
CHGVAR VAR(&PATH1) VALUE('C:\TESTS')
CHGVAR VAR(&CMD) VALUE(%TRIM(&PATH) *CAT '/DTABASENAME.CSV' !> &PATH !> &PATH1)
STRPCO PCTA(*YES)
STRPCCMD PCCMD(&CMD) PAUSE(*YES)
where I somehow get my database file, give the path that I want it saved to on my PC, and lastly run the PC command accordingly.
Take a look at Copy From Query File (CPYFRMQRYF), which will allow you to create a database physical file from the query.
You may also want to look at Copy To Import File (CPYTOIMPF), which will copy data from a database physical file to an Integrated File System (IFS) stream file (such as a .CSV); those are the type of files you'd find on a PC.
ex:
CPYTOIMPF FROMFILE(MYLIB/MYPF) TOSTMF('/home/myuser/DTABASENAME.CSV') RCDDLM(*CRLF) DTAFMT(*DLM) STRDLM(*DBLQUOTE) STRESCCHR(*STRDLM) RMVBLANK(*TRAILING) FLDDLM(',')
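For reference, with those options the stream file comes out as plain delimited text: character fields wrapped in double quotes, fields separated by commas, and records terminated by CRLF. A quick sketch (with made-up data) of checking such a file once it reaches the PC:

```python
import csv
import io

# Hypothetical sample of what CPYTOIMPF emits with DTAFMT(*DLM),
# STRDLM(*DBLQUOTE), FLDDLM(',') and RCDDLM(*CRLF): character fields
# are double-quoted, numeric fields are bare, records end in CRLF.
sample = '"SMITH","JOHN",1250.50\r\n"DOE","JANE",980.00\r\n'

# Any standard CSV parser handles this layout directly.
rows = list(csv.reader(io.StringIO(sample)))
print(rows)
```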
However, there's no single command to transfer data to your PC. Well, technically I suppose that's not true: if you configure an SMB or NFS file share on your PC and configure the IBM i SMB or NFS client, you could in fact CPYTOIMPF directly to that file share, or use the Copy Object (CPY) command to copy from the IFS to the network share.
If your PC has an FTP server available, you could send the data via the IBM i's FTP client. Similarly, if you have an SSH server on your PC, OpenSSH is available via PASE, and SFTP or SCP could be used. You could also email the file from the i.
Instead of trying to send the file to your PC from the i, an easier solution would be to kick off a process on the PC that runs the download. My preference would be an Access Client Solutions (ACS) data transfer.
You configure and save the transfer (as a .dtfx file), then you can kick it off with:
STRPCCMD cmd('java -jar C:\ACS\acsbundle.jar /plugin=download C:\testacs.dtfx')
More detailed information can be found in the Automating ACS Data Transfer document
The ACS download component is SQL based, so you could probably remove the need to use Query/400 at all.
Assuming that you have your IFS QNTC mapped to your network domain, you could use the command CPYTOIMPF to copy the data directly from an IBM i DB2 file to a network directory.
This sample would result in a CSV file.
CPYTOIMPF FROMFILE(file) TOSTMF('//QNTC/servername or ip/path/filename.csv') STMFCCSID(*PCASCII) RCDDLM(*CRLF) STRDLM(*NONE)
Add the FLDDLM(';') option to produce semicolon-separated values; omit it to use a comma as the value separator.

Watchman doesn't notice changes on network folder

I am trying to get watchman running in order to monitor an NFS mounted folder.
I was able to get everything running within the local file system.
Now, I have changed the config to monitor a network folder from my NAS. It is locally mounted.
Watchman server is running on the Linux client.
All watchman commands are run on the Linux client:
watchman watch
watchman -- trigger /home/karsten/CloudStation/karsten/CloudStation/Karsten_NAS/fotos/zerene 'photostack' -- /home/karsten/bin/invoke_rawtherapee.sh
Folder is located on the NAS, according to
mtab:
192.168.xxx.xxx:/volume1/homes /home/karsten/CloudStation nfs rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.xxx.xxx,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.xxx.xxx 0 0
If I move files into the folder on the local machine they get recognized and watchman triggers the actions.
BUT if I move files into the same folder from a remote client connected to the same NAS folder nothing happens.
Any idea what I need to change to make watchman recognize the files dropped from another client into that folder?
Many thanks in advance
Karsten
Unfortunately, it is not possible.
From https://facebook.github.io/watchman/docs/install.html#system-requirements:
Watchman relies on the operating system facilities for file notification, which means that you will likely have very poor results using it on any kind of remote or distributed filesystem.
NFS doesn't tie into the inotify layer in any kernel (the protocol simply doesn't support this sort of change notification), so you will only be able to be notified of changes that are made to the mounted volume by the client (because those are looped back through inotify locally), not for changes made anywhere else.
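Since inotify can't see changes made server-side, the usual fallback on NFS is to poll the directory yourself. A minimal sketch of that idea (the mount path and interval in the usage comment are hypothetical):

```python
import os
import time

def poll_new_files(path, seen, on_new):
    """One polling pass: invoke on_new() for entries not seen before."""
    current = set(os.listdir(path))
    for name in sorted(current - seen):
        on_new(os.path.join(path, name))
    return current

# Hypothetical usage: check an NFS-mounted folder every 5 seconds.
# seen = set()
# while True:
#     seen = poll_new_files("/mnt/nas/fotos/zerene", seen,
#                           lambda p: print("new file:", p))
#     time.sleep(5)
```

Polling is slower and heavier than inotify, but it sees files dropped by any client, because each pass re-reads the directory over the wire.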

Talend: Using tfilelist to access files from a shared network path

I have a Talend job that searches a directory and then uploads it to our database.
It's something like this: dbconnection>twaitforfile>tfilelist>fileschema>tmap>db
I have an OnSubjobOk link that then commits the data into the table, iterates through the directory, and moves files to another folder.
Recently I was instructed to change the directory to a shared network path, using the same components as before (I originally thought of changing components to tFTPFileList, etc.).
My question is how to direct it to the shared network path. I was able to get it to connect using double backslashes (\\), but it won't read any of the new files arriving.
Thanks!
I suppose that if you use tWaitForFile on the local filesystem, Talend/Java can hook into the folder and get a notification when a new file is put into it.
Now, since you are on a network drive, first of all this is out of reach of the component; second, the OS behind the network drive could be different.
I understand your job runs all the time, listening. You could change the behaviour by putting a tLoop first, which would check the file system for new files and then proceed. There must be some delta check in how the new files get recognized.
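That delta check can be as simple as remembering the newest modification time from the previous loop pass and only picking up files modified after it. A sketch of the idea outside Talend, as plain Python:

```python
import os

def files_changed_since(path, last_mtime):
    """Return (newest mtime seen, names of files modified after last_mtime)."""
    changed = []
    newest = last_mtime
    for entry in os.scandir(path):
        if entry.is_file():
            mtime = entry.stat().st_mtime
            if mtime > last_mtime:
                changed.append(entry.name)
            newest = max(newest, mtime)
    return newest, sorted(changed)
```

Each pass feeds the returned mtime back in as last_mtime, so a file is reported exactly once. Note that mtime resolution on network shares can be coarse, so a filename-set comparison is a safer delta on some filers.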

Powershell script run against share on server

I'm running a PowerShell script that's on my local PC against a file share that's on a server. I had code in the script to let the user choose to delete something permanently (using Remove-Item) or to send it to the Recycle Bin using this code:
[Microsoft.VisualBasic.FileIO.Filesystem]::DeleteFile($file.fullname,'OnlyErrorDialogs','SendToRecycleBin')
When run locally (either from my desktop, or from the server) against a folder that's local to that respective location, it works fine. A file that is identified gets deleted & immediately shows up in the recycle bin.
However, if run from my desktop against the file share, it deletes the file, but it doesn't show up in the server's recycle bin or the local one. I've tried UNC naming and mapped-drive naming, and have come to believe this may be by design.
Is there a workaround for this?
Only files deleted from redirected folders end up in the recycle bin. If you want to be able to undelete files deleted across the network then you need to use a third-party utility.

What is the best way to transfer files between servers constantly?

I need to transfer files between servers ... not a one-off move, but continuous moves. What is the best way?
scp, ftp, rsync ... other?
Of course, if it's a "common" method (like FTP) I would restrict it to work only between the servers' IPs.
I also need a SECURE way to transfer files ... I mean, to be totally sure that the files have been moved successfully.
Does rsync have a way to know the files were moved successfully? Maybe an option to check size or checksum or whatever?
Note: The servers are in different location
Try rsync, a utility that keeps copies of a file synchronized between two computer systems.
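On the verification question: rsync's -c/--checksum option compares file checksums instead of just size and modification time, and a non-zero exit status signals a failed transfer. If you want an independent end-to-end check after the copy, comparing digests of the source and destination does the same job; a sketch:

```python
import hashlib

def sha256_of(path):
    """SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_verified(src, dst):
    """True only if source and destination have identical contents."""
    return sha256_of(src) == sha256_of(dst)
```

In practice you would run sha256_of on each side (e.g. over SSH) and compare the hex strings, rather than reading the remote file over the mount.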