I have two scripts that do the same thing but for different companies, and during the process they both use the same tables.
It's imperative that only one script runs at a time: the timings sometimes vary greatly, and the scripts are deliberately scheduled close together. My question is, what is the best method to ensure these scripts do not run concurrently? I tried a global field, set to 1 at the beginning of the script and 0 at the end, so that when the second script runs it can exit if the field is 1.
This did not work: both scripts are scheduled server-side, and I have read that globals are local to each session in this situation.
I assume we are talking about FileMaker Server schedules.
A global will be reset every time you run a scheduled script; every script runs in its own session. You cannot use globals to ensure the scripts do not clash.
As far as I know, FileMaker Server does not run two schedules at the same time. The second script will be delayed until the first one finishes.
FileMaker Server can run simultaneous schedules if they are script schedules, thus an overlap can occur.
What you need to do is set a field that is not a global, so that the schedules can check against the value of that field.
A single record table would be ideal for this.
Make sure that you commit after setting the field, or you may get record locking issues.
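A minimal sketch of such a lock script (the ScriptLock table, Locked field, and layout name are assumptions):
# Check and set the lock in a one-record utility table
Go to Layout [ "ScriptLock" ]
If [ ScriptLock::Locked = 1 ]
    Exit Script [ ]    # the other script is still running
End If
Set Field [ ScriptLock::Locked ; 1 ]
Commit Records/Requests [ With dialog: Off ]    # release the record lock
# ... the actual processing steps ...
Set Field [ ScriptLock::Locked ; 0 ]
Commit Records/Requests [ With dialog: Off ]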
Create an OS-level script that uses the fmsadmin command line to run one script, then run the second.
Set the FM Server schedule to run the OS script (which then runs the PSoS scripts).
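A sketch of such a wrapper (the schedule IDs, credentials, and delay are placeholders; note that fmsadmin run schedule returns as soon as the schedule starts, so this relies on a generous delay rather than true blocking):
fmsadmin run schedule 3 -u admin -p secret    # first company's script
sleep 600                                     # crude wait for the first run to finish
fmsadmin run schedule 4 -u admin -p secret    # second company's script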
In my locustfile I defined test_start and test_stop event listeners to read a file needed for the test and to write detailed statistics to a CSV at the end of the test. When running in distributed mode, these events fire on the master, not on the workers. I am assembling a list of detailed stats for each task in a task sequence, and writing a CSV file when the test stops. I found a Stack Overflow question which references setup and teardown methods; I added these to my class User(HttpUser): but they appear not to be executed.
How can I mimic these events when the test is running on a worker in distributed mode?
Is there a better way?
I am already using the User on_start and on_stop methods: my on_start calls a function to select a random user from a list which was created when the @events.test_start.add_listener fired. That only happens on the master and not on the workers, so the workers don't have any user login data.
It seems counterproductive to open the file, read it, select a user at random, and close it every time the User on_start method is called. on_start also sets up the iteration list [] where I store the times per task.
When the task sequence is done, meaning the last task has executed, I do a self.interrupt(), which runs on_stop; that is where I take the iteration times and put them into a second list, which is later written out using the csv module. Maybe it would be better to just write the data to the CSV during on_stop.
The setup/teardown for individual Users has been removed (because they were confusing: they ran on the first instance of that User class, and when people set properties on that instance they got very confused by the fact that later instances didn't get them). Tbh, I wish they had just been replaced by class methods...
The User still has on_start/on_stop methods though, and if you combine those with a flag you may be able to do what you want. Something like this:
class MyUser(HttpUser):
    stopped = False
    ...

    def on_stop(self):
        if not MyUser.stopped:
            MyUser.stopped = True
            # write your csv
            # this doesn't guarantee that all your Users are finished though
https://docs.locust.io/en/stable/writing-a-locustfile.html#on-start-and-on-stop-methods
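To avoid re-reading the credentials file in every on_start, one option (a minimal sketch; users.csv, its columns, and the host/URL are assumptions) is to load the list once per worker process at module import time, since each worker imports the locustfile itself:
import csv
import random

from locust import HttpUser, task

# Read once per worker process when the locustfile is imported
# (assumed file "users.csv" with a header row of credential columns).
with open("users.csv", newline="") as f:
    USERS = list(csv.DictReader(f))

class MyUser(HttpUser):
    host = "https://example.test"  # placeholder

    def on_start(self):
        self.creds = random.choice(USERS)  # no file I/O per simulated user

    @task
    def visit_home(self):
        self.client.get("/")  # placeholder task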
I have a program that loads a few tasks from a file prepared by the user and starts executing them according to the scheduling shown in the file.
Example: taskFile.txt
Task1: run every hour
Task2: run every 2 seconds
...
TaskN: run every monday at 10:00
This first part is OK; I solved it by using ScheduledExecutorService and I am very satisfied. The tasks are loaded and run as they should.
Now, let's imagine that the user, via the GUI (at runtime), decides that Task2 should run every minute, and that he wants to remove Task3.
I cannot find any way to access one specific task in the pool in order to remove/modify it.
So I cannot update tasks at runtime. When the user changes a task, I can only modify taskFile.txt and restart the application, in order to reload all tasks according to the newly updated taskFile.txt.
Do you know any way to access a single task in order to modify/delete it?
Or even a way to remove one given task, so I can insert a new one in the pool with the modifications the user wants.
Thanks
This is not elegant, but it works.
Let's suppose you need 10 threads, and sometimes you need to manage a specific thread.
Instead of having one pool with 10 threads, use 10 pools with one thread each, keep them in your favourite data structure, and act on pool_1 when you want to modify thread_1.
That way it's possible to remove the old Runnable from its pool and put in a new one with the needed changes.
Otherwise, anything put into a shared pool becomes anonymous and is not directly manageable.
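A minimal sketch of this idea (the map, the task name, and the task2 Runnable are illustrative):
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TaskPools {
    public static void main(String[] args) {
        Map<String, ScheduledExecutorService> pools = new HashMap<>();
        Runnable task2 = () -> System.out.println("running Task2");

        // Schedule Task2 in its own single-thread pool, keyed by name.
        pools.put("Task2", Executors.newSingleThreadScheduledExecutor());
        pools.get("Task2").scheduleAtFixedRate(task2, 0, 2, TimeUnit.SECONDS);

        // Later, when the user changes Task2 to run every minute:
        pools.get("Task2").shutdownNow(); // stop the old schedule
        ScheduledExecutorService fresh = Executors.newSingleThreadScheduledExecutor();
        fresh.scheduleAtFixedRate(task2, 0, 60, TimeUnit.SECONDS);
        pools.put("Task2", fresh);
    }
}
Alternatively, a single pool is enough if you keep the ScheduledFuture that scheduleAtFixedRate returns in the map instead, and call cancel() on it to remove just that one task before rescheduling.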
If somebody has a better solution...
I am building a system to restart computers for patching purposes. Most of the skeleton is there and working; I use workflows and some functions that let me capture errors and reboot the systems in a number of ways in case of failures.
One thing I am not sure of is how to set up the timing. I am working on a web interface where people can schedule their reboots, either dynamic (one-time) or regularly scheduled (monthly). The server info and times for the reboots are stored in a SQL database.
The part that I am missing is how to trigger the reboots when scheduled. All I can think of is allowing for whole hour increments, and run a script hourly checking to see if any servers in the db have a reboot time that "matches" the current time. This will likely work, but is somewhat inflexible.
Can anyone think of a better way? Some sort of daemon?
For instance, user X has 300 servers assigned to him. He wants 200 rebooted at 10 PM on each Friday, and 50 once a month on Saturday at 11 PM. There will be over a dozen users rebooting 3000-4000 computers, sometimes multiple times monthly.
OK, let's say you have a script that takes a date and time as an argument and looks up which computers to reboot based on that specific date and time, or schedule, or whatever it is you're storing in your SQL db that specifies how often to reboot things. For the sake of me not really knowing SQL that well, we'll pretend this is functional (it would require the PowerShell Community Extensions snapin for the Invoke-AdoCommand cmdlet):
[cmdletbinding()]
Param([string]$RebootTime)
$ConStr = 'Data Source=SQLSrv01;Database=RebootTracking;Integrated Security=true;'
$Query = "Select * from Table1 Where Schedule = '$RebootTime'"
$Data = Invoke-AdoCommand -ProviderName SqlClient -ConnectionString $ConStr -CommandText $Query
$Data | ForEach-Object { <# do things to shut down $_.ServerName #> }
You said you already have things set up to reboot the servers, so I didn't really try there. Then all you have to do is set up a scheduled job for whenever any server is supposed to be rebooted:
Register-ScheduledJob -FilePath \\Srv01\Scripts\RebootServers.ps1 -Trigger @{Frequency='Weekly'; At='10:00PM'; DaysOfWeek='Saturday'; Interval=4} -ArgumentList 'Day=Saturday;Weeks=4;Time=22:00'
That's just an example, but you could work with it to accomplish your needs. It will run once every 4 Saturdays (about monthly). That one scheduled task queries the SQL server to match a string against a field, and you can format that string any way you want to make the match as specific or general as desired. That way one task could reboot those 200 servers, and another could reboot the other 50, all depending on the users' requests.
I wrote a PowerCLI script that can automatically deploy a new VM with some given parameters.
In a few words, the script connects to a given VC and starts the deployment from an existing template.
Can I regulate the number of instances of my script that run on the same computer?
Can I regulate the number of instances of my script that run on different computers but are connected to the same VC?
To resolve the issue I thought of developing a server-side application that each instance of my script connects to, with the server then handling all the instances, but I am not sure whether such a thing is possible in PowerCLI/PowerShell.
Virtually anything is poshable, or so they say. What you're describing may be overkill, however, depending on your scenario. Multiple instances of the same script will each run in their own PowerShell process, and Virtual Center allows hundreds of simultaneous connections. Of course, the content or context of your script might dictate that it shouldn't run in simultaneous instances. I haven't experimented, but there seem to be ways to determine the names of running PowerShell scripts. So if you keep the script name consistent on each computer, you could probably build in some checks along the lines of the linked answer.
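For the per-computer case, one well-known pattern (a sketch; the mutex name is an assumption) is a named system mutex, which only one process on the machine can hold at a time:
# Try to acquire a machine-wide named mutex; bail out if another
# instance of the script already holds it.
$mutex = New-Object System.Threading.Mutex($false, 'Global\DeployVMScript')
if (-not $mutex.WaitOne(0)) {
    Write-Warning 'Another instance of this script is already running; exiting.'
    exit 1
}
try {
    # ... connect to the VC and deploy the VM ...
}
finally {
    $mutex.ReleaseMutex()
    $mutex.Dispose()
}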
But depending on your particulars, it might be easier to go a different way. For example, if you don't want the script to run simultaneously because you have hard-coded the name of a New-OSCustomizationSpec, a simple/kludgey solution might be to check for that new spec and disconnect/exit/roll back if it exists. A better solution might be to give the new spec a unique name. But the devil is in the details. Hope that helps a bit.
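A minimal sketch of that check (the $SpecName variable is an assumption):
# Bail out if the customization spec already exists, i.e. another
# instance is presumably mid-deployment against this VC.
if (Get-OSCustomizationSpec -Name $SpecName -ErrorAction SilentlyContinue) {
    Write-Warning "Spec '$SpecName' already exists; exiting."
    Disconnect-VIServer -Confirm:$false
    exit 1
}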
I have a bunch of TSQL change scripts, all named appropriately in sequence.
I want to combine these into one big script with a few twists. I include a version number function in the script that I update for each script so that once change 1 is run it returns 1, once 2 is run it returns 2 and so on. This function remains in the database and always returns the version of the schema/database.
I want to wrap each change script with a few lines that prevent a script that has already been run from running again; likewise, they should prevent a change script from running on a schema version that is too "low". This allows me to bundle all the change scripts into one, and only the missing change scripts will be applied when it all runs.
This is all fine, but I can't find a way to make osql / Query Analyzer / SQL Server Management Studio skip the parts that have already been run.
GOTO won't work across batches (the scripts contain "GO")
IF BEGIN END won't work either, likewise because of GO
Update: To reiterate, I don't need help remembering the current version number, I need a way to skip parts of the script to prevent already applied updates from reapplying.
I have tried a number of methods:
I can wrap each batch in an sp_executesql call or EXECUTE, but this leads to scoping problems.
I can wrap each batch in an IF dbo.DB_VERSION() <> <required version> BEGIN ... END construct, but this is messy and makes handling errors difficult.
Encountering situations where a change should not be applied is expected and is not an exceptional situation, so simply RETURNing when not applicable is not OK.
Any other suggestions?
You can keep the version number in a table, and use the function to check the value in the table and compare it in an IF statement to the version of the change you wish to make.
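One pattern that does survive GO (a sketch, reusing the dbo.DB_VERSION() function from the question; the version numbers are illustrative) is SET NOEXEC, which stays in effect across batches on the same connection and makes the server compile but not execute them:
-- Guard for change script 4: execute it only when the schema is at version 3.
IF dbo.DB_VERSION() <> 3
    SET NOEXEC ON;
GO

-- ... the GO-separated batches of change script 4 go here,
-- including the statement that bumps the version to 4 ...
GO

SET NOEXEC OFF;  -- re-enable execution before the next guard
GO
Note that batches are still compiled while NOEXEC is on, so statements that reference objects which don't exist yet can still raise compile-time errors.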