How to stop a wormhole command in progress - file-transfer

I am trying to use the magic-wormhole package to transfer a file.
I initiated the transfer with wormhole send filename
However, I discovered that the computer to which I was trying to send the file does not have wormhole.
Is there any way I can kill this command so that no one can download this file? Right now, to the best of my understanding, this command is floating out there wasting memory in some way.
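For what it's worth: if the wormhole send command is still attached to your terminal, pressing Ctrl+C is enough, since the code it printed only works while the sending process is alive. If it was left running in the background, a rough sketch like the following (Python with the third-party psutil package; how the command appears in the process list is an assumption to verify on your system) can find and terminate it:
import psutil  # third-party: pip install psutil
# Look for lingering "wormhole send" processes and terminate them.
# Once the sending process is gone, the transfer code can no longer be redeemed.
for proc in psutil.process_iter(["pid", "cmdline"]):
    cmdline = proc.info["cmdline"] or []
    if "wormhole" in " ".join(cmdline) and "send" in cmdline:
        print("Terminating PID", proc.info["pid"], ":", " ".join(cmdline))
        try:
            proc.terminate()  # sends SIGTERM; use proc.kill() if it refuses to exit
        except psutil.Error:
            pass  # process already gone or not ours to kill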

Related

Upload and Download in Studio 5000

I want to go online with my PLC, but it seems I need to download or upload the program. I didn't make any changes to my program, but I'm worried that I could make a mess of the machinery that is connected to the PLC. If I upload the program, could something happen?
I'm trying not to make a mess of the machinery that is connected to the PLC.
In Studio 5000:
Upload means copying the program that is currently running on your PLC to your computer.
Download means deleting the program and all data values currently on your PLC and replacing them with the offline program from your computer.
In order to go online, the program you have open on your computer needs to match the one in the PLC. If it will not let you go online, it means that the programs do not match.
In general, if you are in doubt, upload is the one you want to choose, as this preserves the program that is currently running. You do a download only when you know that you have made changes that need to be transferred to the PLC.

Schedule script to attach CSV file report to a data source in servicenow

I need a scheduled script that automatically attaches a CSV file report to a data source in ServiceNow.
How can we achieve this scenario?
Well, this can be achieved in multiple ways. It's a bit of a vague description you have there, so I'll just drop a few general ideas for you:
If you don't mind turning things around, you could have an external program push the file directly to ServiceNow and then run the associated TransformMap:
https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/import-sets/task/t_PostCSVOrExcelFilesToImportSet.html
If you have an FTP, you can have a scheduled script that will fetch the file from the FTP and run the transform:
https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/import-sets/task/t_ScheduleADataImport.html
You could use the MID Server application to run your own custom logic for retrieving the file data. This is probably the most complex option to set up, but it also gives you the biggest advantages, such as keeping your file encrypted. Basically, the MID Server checks every couple of seconds for a piece of code to be executed (called a probe); for example, you could use it to trigger a PowerShell script sitting on your server.
I'm sure there's other options as well. Good luck!
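For the first option, the page linked above describes posting the CSV straight to an import set table. Below is a rough Python sketch of that idea; the instance name, credentials, staging table name, and the multipart field name are all placeholders/assumptions to check against that documentation.
import requests
INSTANCE = "https://yourinstance.service-now.com"   # placeholder instance
IMPORT_SET_TABLE = "u_csv_report_import"            # placeholder staging table
# sysparm_transform_after_load=true asks ServiceNow to run the transform map
# as soon as the rows have been loaded into the import set.
url = (INSTANCE + "/sys_import.do"
       + "?sysparm_import_set_tablename=" + IMPORT_SET_TABLE
       + "&sysparm_transform_after_load=true")
with open("report.csv", "rb") as f:
    resp = requests.post(
        url,
        auth=("import.user", "secret"),                       # placeholder credentials
        files={"uploadFile": ("report.csv", f, "text/csv")},  # field name: an assumption
    )
resp.raise_for_status()
print(resp.status_code)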

Determining a specific application's network transfer speed via command prompt?

My goal is to create a network activity light for a specific program. I'm a mechanical guy, so I can figure out the hardware and the logic, but I have no clue where to start with the coding. Ultimately this needs to run without user interaction, so I figured some sort of script would be a good place to start. I tried looking at netstat -e, but I didn't see any obvious way to determine the bandwidth a specific application was using. Thoughts? I'm using Windows 7.
I accomplished my goal using the "typeperf" command:
typeperf "\Process(FlashMediaLiveEncoder)\IO Data Bytes/sec"

How can my Perl script use Amazon clouds?

I want my Perl script to be able to handle a large number of users.
I'm going to run the script on Amazon cloud servers.
This is my understanding of how the cloud works.
At first, the script instances run on a single server.
Then, when the server gets overloaded by too many users, a second server is added to run script instances.
Do I understand the cloud right?
Do I have to do anything special to make this process work?
Or maybe everything is run seamlessly and the only thing I have to do is to upload the script to the image?
That is a bit too narrow a definition of cloud computing, but probably close enough for the purposes of this question. The process isn't seamless: you have to actually detect that you're running too hot for the single machine and add another instance. You can do this from Perl using the API. It does, however, take real time to spin up another instance, so it makes more sense to distribute your task from the start.
If your Perl script is something that can already run cleanly in parallel, then you don't have to make many changes. Just shove it onto a number of instances and away you go.
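The question is about Perl, but as an illustration of "detect that you're running too hot and add another instance via the API", here is a rough sketch using Python and boto3; the load threshold, region, AMI ID, and instance type are placeholders, and equivalent calls exist in the Perl AWS libraries:
import os
import boto3
def overloaded(threshold=4.0):
    # Very crude check: 1-minute load average on this (Unix) host.
    return os.getloadavg()[0] > threshold
if overloaded():
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image with your script baked in
        InstanceType="t3.micro",          # placeholder size
        MinCount=1,
        MaxCount=1,
    )
    print("Started", resp["Instances"][0]["InstanceId"])
In practice you would more likely hand this off to an Auto Scaling group with a scaling policy, so that AWS does the detection and launching for you.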

Detect a file in transit?

I'm writing an application that monitors a directory for new input files by polling the directory every few seconds. New files may often be several megabytes, and so take some time to fully arrive in the input directory (eg: on copy from a remote share).
Is there a simple way to detect whether a file is currently in the process of being copied? Ideally any method would be platform and filesystem agnostic, but failing that specific strategies might be required for different platforms.
I've already considered taking two directory listings separated by a few seconds and comparing file sizes, but this introduces a time/reliability trade-off that my superiors aren't happy with unless there is no alternative.
For background, the application is being written as a set of Matlab M-files, so no JRE/CLR tricks I'm afraid...
Edit: files are arriving in the input directory by a straight move/copy operation, either from a network drive or from another location on a local filesystem. This copy operation will probably be initiated by a human user rather than another application.
As a result, it's pretty difficult to place any responsibility on the file provider to add control files or use an intermediate staging area...
Conclusion: it seems like there's no easy way to do this, so I've settled for a belt-and-braces approach - a file is ready for processing if:
its size doesn't change in a certain period of time, and
it's possible to open the file in read-only mode (some copying processes place a lock on the file).
Thanks to everyone for their responses!
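For reference, the belt-and-braces check described in the conclusion looks roughly like this in Python (the application itself is Matlab, so treat this purely as an illustration; the poll interval is the usual trade-off mentioned in the question):
import os
import time
def is_ready(path, interval=5):
    # A file is treated as ready if its size has not changed over `interval`
    # seconds and it can be opened for reading (some copy processes hold a
    # lock on the file until they are finished).
    try:
        size_before = os.path.getsize(path)
        time.sleep(interval)
        if os.path.getsize(path) != size_before:
            return False  # still growing
        with open(path, "rb"):
            pass          # open succeeded, so no exclusive lock is held
        return True
    except OSError:
        return False      # vanished, locked, or otherwise unreadable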
The safest method is to have the application(s) that put files in the directory first put them in a different, temporary directory, and then move them to the real one (which should be an atomic operation even when using FTP or file shares). You could also use naming conventions to achieve the same result within one directory.
Edit:
It really depends on the filesystem, on whether its copy functionality even has the concept of a "completed file". I don't know the SMB protocol well, but if it has that concept, you could write an app that exposes an SMB interface (or patch Samba) and an API to get notified for completed file copies. Probably a lot of work though.
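Coming back to the temporary-directory-plus-move suggestion above: on the writer's side that convention is only a few lines. A sketch with placeholder paths, assuming the staging directory is on the same filesystem as the watched directory so that the final rename is atomic:
import os
import shutil
STAGING = "/data/incoming/.staging"  # placeholder paths
WATCHED = "/data/incoming"
def deliver(src_path):
    name = os.path.basename(src_path)
    tmp_path = os.path.join(STAGING, name)
    shutil.copy2(src_path, tmp_path)                    # the slow copy happens out of sight
    os.replace(tmp_path, os.path.join(WATCHED, name))   # atomic rename into the watched directory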
This is a middleware problem as old as the hills, and the short answer is: no.
The two 'solutions' put the onus on the file uploader: (1) upload the file to a staging directory and then move it into the destination directory; (2) upload the file, and then create/upload a 'ready' file that indicates that the content file is complete.
The first one is better, but both are inelegant. The truth is that better communication media exist than the filesystem. Consider using some IPC that involves only a push or a pull (and not both, as the filesystem does), such as an HTTP POST, a JMS or MSMQ queue, etc. Furthermore, this can also be synchronous, allowing the process receiving the file to acknowledge the content, even check it for worthiness, and hand the client a receipt - this is the righteous road to non-repudiation. Follow this, and you will never suffer arguments over whether a file was or was not delivered to your server for processing.
M.
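As a concrete illustration of that push-style alternative (the endpoint and field name are placeholders; this uses Python's requests library), the provider would POST the file and get an acknowledgement back in the same call:
import requests
with open("payload.dat", "rb") as f:
    resp = requests.post(
        "https://ingest.example.com/upload",  # placeholder endpoint
        files={"file": ("payload.dat", f)},
    )
resp.raise_for_status()
# A synchronous receipt from the server is what gives you non-repudiation.
print("Server receipt:", resp.text)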
One simple possibility would be to poll at a fairly large interval (2 to 5 minutes) and only acknowledge the new file the second time you see it.
I don't know of a way in any OS to determine whether a file is still being copied, other than maybe checking if the file is locked.
How are the files getting there? Can you set an attribute on them as they are written and then change the attribute when write is complete? This would need to be done by the thing doing the writing ... which sounds like it isn't an option.
Otherwise, caching the listing and treating a file as new if it has the same file size for two consecutive listings is the best way I can think of.
Alternatively, you could use the modified time on the file - the file has to be new and have a modified time that is at least x in the past. But I think this will be about equivalent to caching the listing.
If you are polling the folder every few seconds, it's not much of a time penalty, is it? And it's platform agnostic.
Also, Linux only: http://www.linux.com/feature/144666
Like cron but for files. Not sure how it deals with your specific problem - but may be of use?
What is your OS? On Unix you can use the "lsof" utility to determine whether a user has the file open for write. Apparently the same functionality exists somewhere in the MS Windows Process Explorer.
Alternatively, you could just try an exclusive open on the file and bail out if this fails. But this can be a little unreliable, and it's easy to tread on your own toes.
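As a sketch of the lsof approach (Unix only; note that this reports the file being open by any process, not specifically open for write):
import subprocess
def someone_has_it_open(path):
    # lsof exits with 0 when it finds at least one process holding the file
    # open, and non-zero otherwise.
    result = subprocess.run(["lsof", path],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0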