I have a notebook instance with a notebook file. I use the instance's startup script to run this notebook file using papermill.
I want the notebook file to be run only when I start the instance remotely, and not when it is started from the Google Cloud Console.
I'd like to know if one of these is possible, or if there's another solution:
1 - The script will detect that the instance was started from the dashboard.
2 - I will remove the startup script and use another script that can be run by a remote command.
3 - The shutdown script will remove the startup script.
The startup script definition is kept under /var/run/google.startup.script. For the scenarios you mention:
It seems complicated to detect whether the instance was started from the dashboard.
It is feasible to delete the startup script mentioned above and run your own script remotely.
To do that, just delete the script.
Before deleting the script, I suggest you make a backup of it in case any issue arises from deleting it. Also keep in mind that AI Platform Notebooks is a managed service, so any admin-level configuration change could cause issues in your instance; be careful when deleting or modifying the startup script.
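If you go down that route, here is a rough sketch of what it could look like from the machine you start the instance from, assuming the Cloud SDK (gcloud) is installed there; the instance name, zone and notebook paths are placeholders, and this variant removes the startup-script metadata key rather than the local copy of the file:

# One-time: remove the startup-script metadata so nothing runs automatically at boot.
gcloud compute instances remove-metadata my-notebook-instance `
    --zone us-central1-a --keys startup-script

# Remote start: boot the instance, then run the notebook yourself with papermill over SSH.
gcloud compute instances start my-notebook-instance --zone us-central1-a
gcloud compute ssh my-notebook-instance --zone us-central1-a `
    --command "papermill /home/jupyter/input.ipynb /home/jupyter/output.ipynb"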
My advice would actually be to run the notebook via a scheduler system; I posted the multiple options here: GCP run a prediction of a model every day
Related
I accidentally screwed up my darktable configuration, so I reloaded it from scratch. To avoid losing all the recorded changes I have made to my pictures, I wrote a PowerShell backup script for the darktable database. I want to launch this script from the Windows Task Scheduler whenever I launch darktable. I have found the event ID in the security log which indicates that a new process has been created, which I should be able to use to automatically launch my backup script from Task Scheduler. I want to add code to the script to check whether darktable is actually running and only perform the backup if it is. Does anyone know how I can identify this?
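A minimal sketch of that kind of guard, assuming the process is simply named "darktable" (the actual executable name on your system may differ):

# Only back up the database if darktable is currently running.
$darktable = Get-Process -Name "darktable" -ErrorAction SilentlyContinue
if ($null -ne $darktable) {
    Write-Output "darktable is running, performing database backup."
    # ... existing backup logic goes here ...
}
else {
    Write-Output "darktable is not running, skipping backup."
}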
Before explaining what my problem is, please know that I have looked for solutions on similar topics, but none of them seems to work or even to correspond to my problem.
What I am trying to do:
I have some Python code spread over multiple files that I run with Flask using the following command:
python -m flask run --host=0.0.0.0
So far, everything works, but I would like this code to run automatically every time the computer boots. In the future this will be used on mini PCs without any graphical interface or human intervention.
Since I need to do some configuration checks before running the web server, I've created a PowerShell script that ends with Flask running (using the previous command).
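Something along these lines, where the application path, the FLASK_APP value and the configuration check are simplified placeholders:

# Move to the application folder and point Flask at the app.
Set-Location "C:\myapp"
$env:FLASK_APP = "app.py"

# Example configuration check: make sure a working folder exists.
if (-not (Test-Path "C:\myapp\data")) {
    New-Item -ItemType Directory -Path "C:\myapp\data" | Out-Null
}

# Final step: start the web server (this call blocks while Flask is running).
python -m flask run --host=0.0.0.0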
So far, everything works too. Now we're coming to the problem:
I'd like this script to run when I boot the machine. One constraint: everything needs to run with Administrator privileges, on the local system, without any interaction.
I've tried scheduled tasks, but Flask won't run even though the rest of the script works (creating folders and other things).
OK, it's not a big deal, I have other ways to do it, so I've created a Windows service in C# to run the script at startup on the local system.
The script works, I've checked the privileges too, everything's fine, but when it reaches the flask command that is supposed to start the server, nothing happens.
It's the same thing if I run Flask using "pythonw", which is supposed to run Python as a background process.
What the problem seems to be:
Well, as long as I run Flask and I have either a command prompt or a PowerShell terminal open, everything works great. But if, in one way or another, I run the script as a background process, it won't work.
Normally it takes around 30 seconds for Flask to start up. Here, if I try to create a folder right after the Flask startup command (as a test), I can see the folder is created almost instantly, which means the process is killed immediately.
The problem doesn't seem to come from the service itself; it really looks like Windows kills the process, and I don't know why.
I'm running out of ideas, so if you have anything I could try, it would really help me.
I have a simple python script that is hosted on Heroku and I'm using the Heroku Scheduler to run the script every hour/day. The script will possibly update a simple .txt file (could also be a config var if possible) when it runs. When it does run and conditions are met, I need that value stored and used when the next scheduled script runs. The value changed is simply a date.
However, since the app is containerized based on the most recent code I have on GitHub, it doesn't store those changes anywhere to be used again. Is there any way I can update the file and reuse it every time the script runs? Are there any simple add-ons or other solutions I can use?
Heroku dynos have an ephemeral local file system that does not survive an application restart or redeployment, so it cannot be used to persist data.
Typically you have 2 options:
use a database: on Heroku you can use Postgres (there is also a Free tier)
save the file on external storage (S3, Dropbox, even GitHub). See Files on Heroku for details and examples
I have to think this is a solved issue, but I am just not getting it to work. So I have come to you, Stack Overflow, with this issue:
I have a windows server 2016 machine running in amazon ec2. I have a machine.ps1 script in a config directory.
I create an image of the box. (I have tried both checking and unchecking No reboot.)
When I create a new instance of the image I want it to run machine.ps1 at launch to set the computer name and then set routes and some config settings for the box. The goal is to do this without logging into the box.
I have read and tried:
Running Powershell scripts at Start up
and used this to ensure user data was getting passed in:
EC2 Powershell Launch Tools
I have tried setting up a scheduled task that runs the machine.ps1 on start up (It just hangs)
I see the initializeInstance.ps1 startup task and have even tried to co-opt it, replacing the line that runs user data with the line that runs my script. Nothing.
If I log into the box and run machine.ps1, it will restart the computer and set the computer name and then I need to run it once more to set routes. This works manually. I just need to find a way to do it automagically.
I want to launch these instances from powershell not with launch configurations and auto scale.
You can use User data
Whenever you deploy a new server, workstation or virtual machine there is nearly always a requirement to make final changes to the system before it's ready for use. Typically this is done with a post-deployment script that might be triggered manually on start-up, or it might be a final step in a Configuration Manager task sequence, or if you are using Azure you may use the Custom Script Extension. So how do you achieve similar functionality using EC2 instances in Amazon Web Services (AWS)? If you've created your own Amazon Machine Image (AMI) you can set the script to run from the RunOnce registry key, but that can be a cumbersome approach, particularly if you want to make changes to the script and it's been embedded into the image. AWS offers a much more dynamic method of injecting a script to run upon start-up through a feature called user data.
Please refer to the following link for the same:
PowerShell User data
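As a rough sketch, for a Windows Server 2016 instance (which uses EC2Launch) the user data could look something like this; the script path is a placeholder for wherever machine.ps1 lives in the image, and the optional persist tag makes the script run on every boot rather than only the first one, which matters here because machine.ps1 reboots the instance partway through:

<powershell>
# Run the baked-in configuration script at boot.
& "C:\config\machine.ps1"
</powershell>
<persist>true</persist>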
Windows typically won't let a PowerShell script call another PowerShell script unless it is being run as Administrator. It is a weird 'safety' feature. But it is perfectly okay to load the ps1 files and use any functions inside them.
The UserData script is typically run as "system". You would THINK that would pass muster. But it fails...
The SOLUTION: Make ALL of your scripts into powershell functions instead.
In your machine.ps1 - wrap the contents with function syntax
function MyDescriptiveName { <original script contents> }
Then in UserData - use the functions like this
# To use a relative path
Set-Location -Path <my location>
# Load script file into process memory
. <full-or-relpath>/machine.ps1
# Call function
MyDescriptiveName <params-if-applicable>
If the function needs to call other functions (aka scripts), you'll need to make those scripts into functions and load the script file into process memory in UserData also.
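A hypothetical illustration of that nested case, with routes.ps1 and Set-MyRoutes as made-up names: machine.ps1 calls Set-MyRoutes, which lives in routes.ps1, so both files are wrapped as functions and both are dot-sourced in UserData before anything is called.

Set-Location -Path "C:\config"
# Load both script files into process memory.
. .\routes.ps1      # defines Set-MyRoutes
. .\machine.ps1     # defines MyDescriptiveName, which calls Set-MyRoutes
# Now call the top-level function.
MyDescriptiveName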
I am using PuTTY to transfer files from my Windows machine to a Linux machine.
I am able to transfer files when I run the script manually, and also if I run the same script as a scheduled task under my own credentials.
If I schedule the task to run using the system account (SYSTEM) or another user account, the file transfer does not happen.
Do I need to save any session values?
PuTTY saves session information in the registry for the current user only, so that information is simply not available to the other accounts you mentioned. You either need to provide it, by exporting your sessions and importing them into the other users' accounts, or simply provide everything needed on the shell command invoked to copy your files. The latter sounds much easier to me, in combination with a little script that gets invoked by the Task Scheduler.
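A rough sketch of that second option, passing everything on the command line so no per-user saved session is needed; the host, key and file paths are placeholders, and -batch disables interactive prompts so the task cannot hang. If the host key is not cached for the account running the task, pscp's -hostkey option can be used to accept it non-interactively.

# Copy the file with pscp using explicit connection details instead of a saved session.
& "C:\Program Files\PuTTY\pscp.exe" -batch -i "C:\keys\transfer.ppk" `
    "C:\data\export.csv" "user@linux-host:/home/user/incoming/"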