I am using a tWaitForFile component in a Talend Studio project, and I want to know whether there is a way to make sure a file only triggers the event once it has been fully written to disk.
I tried setting the advanced property "Wait the file to be released", but it seems to be useless: the file triggers the component even though it has not finished being transmitted.
Does anybody see the same behaviour, and is there a solution to fix it?
The version of TOS is 4.2.3.
The advanced setting "Wait for file to be released" only works on Windows. It has no effect on Unix, which probably explains why it did not work for you.
It is generally difficult, or even impossible, for a Unix process to figure out if a file has been written completely or not. Consequently, there is no easy way to do this in Talend, either.
(For example, if you wanted to wait until the file size does not change anymore -- how long do you wait?)
A common solution involves the process that writes the file: create the file under a different name first, and when it has been written completely, rename it to the name that the other process expects. That way, it will show up at its full size immediately.
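If you control the writing process, a minimal sketch of that pattern might look like this (Perl here, purely for illustration; the path and the placeholder data are invented):

    use strict;
    use warnings;

    my $final = '/data/incoming/report.csv';  # the name tWaitForFile watches for
    my $tmp   = "$final.part";                # temporary name the watcher ignores

    my @rows = ("id;value\n", "1;42\n");      # placeholder data for the example

    open my $fh, '>', $tmp or die "Cannot open $tmp: $!";
    print {$fh} @rows;                        # write everything under the temporary name
    close $fh or die "Cannot close $tmp: $!";

    # rename() is atomic within one filesystem, so the file only appears
    # under its final name once it is complete.
    rename $tmp, $final or die "Cannot rename $tmp to $final: $!";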
I've written a simple script that has multiple custom functions stored as modules. I have done it this way because I have always been told that if your function can be reused by other things, it should be a module and not a .\ source include. I'm starting to think that mantra isn't right in my current scenario. I am trying to convert the script to a single .exe so that I can install it as a Windows service.
I should probably acknowledge that I understand why you wouldn't want to include system modules like Active Directory or IIS management, given the obvious issues that could lead to, but I'm only trying to include custom functions in a single, distributable, non-editable way.
I have used PowerGUI in the past, but I can't find any valid .exe downloads for it since Dell removed it, and from memory, I don't think I've ever used it with a module.
I've tried PS2EXE-GUI and PS2EXE. Both of these build the exe, and everything works fine as long as the modules are present. However, as soon as I put the exe on a server that doesn't have the modules deployed to it, it fails to run. I thought the compilation followed all the dependencies and included them in the single exe as part of the build? That appears not to be the case.
I've also tried PowerShell Studio 2018 by Sapien, but based on their forums you can't include modules in the compiled exe, which again feels wrong if they are really just custom functions, but that's the way they've written it.
I see https://poshtools.com/docs/posh-pro-tools/merge-script/ would possibly do what I need, but it's a paid product, and it looks like it actually merges all the content back into a single file. Given the time pressure, I'm starting to think I'll have to pay if there really are no better options. I just don't have time to join everything together manually, and I can't help thinking there is a better way that I'm missing!
Can anybody please suggest other options?
Could I also get clarification around my original mantra (functions go in modules...)?
"No, never!" or "Yes, always!" or "It's just wrong in this scenario."
I'm building a program that acts on files that it has to download from one of my company's servers. We have several million of these. For instance, my normal invocation could be:
python my_script.py file-id
And then my_script.py will go download file-id and do its work on it.
It's useful to be able to specify one fixed file to download and act on while I make changes to our code, but when it comes to testing at scale, I'll usually find out that maybe a dozen files couldn't be processed correctly, and I need to go and debug our program with each of them.
For this purpose, editing the settings.json file works, but it's kind of cumbersome that I have to change the parameter, save, run, and revert every time I just want to test a new input.
Is there a way that I pass an argument to a debug configuration as I start debugging, instead of having to change the settings.json file?
I am working through the tutorial files included with the ACT-R Standalone Windows distribution. This isn't part of any academic assignment; I'm working on this to learn cognitive modeling and to practice writing production systems. I am using Lispbox, an Emacs/SLIME/Lisp bundle, to write my cognitive models. The distro and Lispbox reside on my flash drive. Finally, the distro uses Clozure Common Lisp.
The problem is that whenever I try to reload a model after making changes, ACT-R gives me this error:
Error Reloading:
#|warning: no load file recorded |#
#|warning: cannot use reload |#
It only does this for my Unit 2 assignment model, not for any other model, including the one I have written for Unit 1.
Now this is a big issue for me - instead of simply pressing "reload" on ACT-R's GUI, I'm forced to close ACT-R entirely and open it again every time I want to reload the model.
I'm thinking this is a problem with EMACS. I have tried reinstalling ACT-R, and deleting any .lisp~ files or anything else that Emacs has saved in addition to the file I wrote. I still get this error.
Could you please help me understand what's going on and how I can fix this if it ever arises again in the future? I would like to get back to working on my assignment as soon as possible.
I have emailed the creator of ACT-R; he told me that I must include the statement
(clear-all)
at the beginning of every file, so that the software uses the most up-to-date file when reloading.
I am using Eclipse Kepler Service Release 2, EPIC 0.5.46, and Strawberry Perl 5 version 18 for Perl programming. For debugging I am using the Eclipse debugger and PadWalker.
I have an interactive Perl program that writes to files based on the answers users provide to multiple prompts. While debugging, every time I change a single line of code I have to rerun the whole program and provide inputs to every prompt, which is really time consuming.
Is there a way to make changes to the code in a subroutine in the middle of a debugging session, such that the instruction pointer resets itself to the first line of that subroutine? That way I would not have to restart the session to recompile the new code.
I appreciate your inputs and suggestions. Thank you!
What you want to do can be done, and I've done it many times in Perl myself. For example, see this.
However, although what you describe may work (and is a bit dangerous), the way it is generally done is a bit different and safer.
First, one has to assume a regular kind of command structure, like a command processor or, say, a web server.
In a command processor or web server, you read a command (or get a web request), perform an action, then read another command, perform another action and so on. From your description, it sounds like you have such a structure.
In my case, I have each debugger command stored in its own Perl file. This is helpful not only for facilitating this task, but also for understanding, testing, and changing the code.
Given this kind of program structure, instead of trying to change the program counter, you let the current command complete, and at the point where you are about to read a new command, you make the change and then reload the file, which changes the code.
The specific Perl construct to do this is called do. Don't use require or use, which load a Perl file only if that file or module hasn't been loaded before. In your situation, you want to reload the file even if it has been loaded before.
So now, how do you get to be able to issue a do command? As you suggest, you could do it through a debugger. Assuming you have the overall program structure described above, you put the breakpoint at a common point in the caller that loops over the things to process, rather than trying to change things in individual commands.
And you don't even need a debugger to do this! Many web frameworks, like Ruby on Rails, have a "development" mode where they record timestamps on the files that implement functionality. If a file has changed, they issue the equivalent of the "do" before running the request.
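To make the idea concrete, here is a minimal sketch of such a command loop in Perl. The file name my_command.pl and the subroutine run_command() are made up for the example; the command file is assumed to define run_command() and to end with a true value (1;):

    use strict;
    use warnings;

    my $cmd_file   = 'my_command.pl';  # hypothetical file defining run_command()
    my $last_mtime = 0;
    $| = 1;                            # flush the prompt immediately

    while (defined(my $line = prompt())) {
        my $mtime = (stat $cmd_file)[9];
        if ($mtime && $mtime != $last_mtime) {
            # do() re-reads the file even if it was loaded before,
            # unlike require or use.
            do $cmd_file or warn "Could not reload $cmd_file: $@ $!";
            $last_mtime = $mtime;
        }
        run_command($line);            # uses whatever definition was just loaded
    }

    sub prompt {
        print 'command> ';
        my $in = <STDIN>;
        return unless defined $in;
        chomp $in;
        return $in;
    }

A debugger breakpoint placed on the run_command() line would serve the same purpose as the timestamp check: you pause there, edit the file, issue the do by hand, and continue.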
I've been searching for this for a while now, and I am not sure if I am just not using the correct search terms or if the answer is really that hard to find.
What I am trying to do is to create a new Windows service for a game server from a batch file, and then have a task run another batch file every 30 minutes or more that would run two commands on the game server's command line and do some file work.
Specifically, I am running a Minecraft server using Bukkit for a gaming community I help run, and I want to make sure that the thing is always up unless I specifically tell it to stop (like a service). Bukkit is run directly from a batch file and has its own command-line console running on it.
I am told that you CAN run this type of thing as a service, but the command line will be hidden from view and/or interaction. This is the second part of my query. I have a handy little backup.bat file that copies all the world files and userdata files into a backup directory, 7zips it, and deletes the directory. The only thing is that Minecraft likes to keep the worlds' region files open and being written to at all times, meaning that it could cause map corruption if I just run the backup straight off. To compensate, I need to run the command "save-off" on the server to disable the file hooks temporarily, run the backup, and as soon as it finishes, run "save-on" so that the game can continue without lost data.
What I would like to know about this second part is whether it is possible to interface with the game service through a batch file, or whether I need to create an application to do that. If the latter, how exactly does one go about doing that? I have moderate C++ knowledge (up through my second OO C++ course in college), and can possibly learn another language if absolutely necessary.
So, in short, two questions:
1. Is it possible to run a BAT file as a Windows service, and if so, how?
2. How do I interface with said service via BAT files? If that is not possible, what kind of application do I need to write (pointing me to a tutorial, or writing one, works for me)?
Thank you in advance for any and all help!
Old question, user account doesn't seem active on SO anymore, but hey, if you stumble upon this because you have a similar problem:
Since we are speaking about a Bukkit Minecraft server, turn to the "Essentials" plugin for Bukkit.
It now includes a Backup function that does exactly what the OP asks for: it turns world saving off so the files can be manipulated without corruption, launches a script, and then turns saving back on.
The script can be a backup script (examples are provided on the linked page), but it can be used to run any operation on the world's files.