Using PowerShell to create automated systems

I'm looking to develop an automated notification and log-off system that notifies users and logs accounts off a computer. The example I have planned so far: when a class is scheduled, every account except those registered for that class gets notified a certain period of time before the class starts and is logged off just before the class begins. Alternatively, the system could limit their access once the class has started, for example to the printer.
So my question is: can I use PowerShell to develop this project? How far can it take me, or should I think about using Python instead?
Thanks, fellas!

I'm not sure PowerShell brings anything special to the party. What you are talking about would require a PowerShell session running in the background, perhaps tying into some sort of eventing, for example with the timer class. It might be just as easy to automate something using the Task Scheduler: at the appointed time, check the logged-on users and, if they don't meet the requirement, log them off. You could use PowerShell to create the tasks and handle the processing, or really any other language.
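For example, here's a rough sketch of the kind of script the scheduled task could run at class time (the file paths are made up, and it leans on the built-in quser, msg and logoff tools, so treat it as a starting point rather than a finished solution):

    # warn-and-logoff.ps1 - log off everyone who isn't registered for the upcoming class
    $allowedUsers = Get-Content 'C:\ClassSchedule\allowed-users.txt'

    # Parse quser output (header skipped) into user name / session ID pairs
    $sessions = quser 2>$null | Select-Object -Skip 1 | ForEach-Object {
        $parts = ($_ -replace '^>', ' ').Trim() -split '\s+'
        # Disconnected sessions have no SESSIONNAME column, so the ID shifts left by one
        $id = if ($parts[2] -match '^\d+$') { $parts[2] } else { $parts[1] }
        [pscustomobject]@{ User = $parts[0]; SessionId = $id }
    }

    $targets = $sessions | Where-Object { $allowedUsers -notcontains $_.User }

    # Warn everyone first, then log them off after the grace period
    foreach ($t in $targets) {
        msg $t.SessionId "A class starts on this machine shortly. You will be logged off in 5 minutes. Please save your work."
    }
    Start-Sleep -Seconds 300
    foreach ($t in $targets) {
        logoff $t.SessionId
    }

You could then register it with something like Register-ScheduledTask -TaskName 'ClassLogoff' -Action (New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File C:\ClassSchedule\warn-and-logoff.ps1') -Trigger (New-ScheduledTaskTrigger -Once -At '2024-09-01 08:55'), or generate one trigger per class from your timetable.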

Related

Way to pull Exchange permissions

Maybe an easy question for someone who knows PowerShell and O365 well. Is there a way to configure things so that when a command is run, for example to pull all access to a shared mailbox, either a service account or the user running the script is granted permission each time just to pull that information? I looked at connecting a service account to the script, but giving it the specific permissions permanently would grant it too much access to O365. The idea is that the account is not permissioned for that access by default; every time the script/command is run, it is permissioned for that one inquiry, shows the result, and then loses access again until the next time it is called.
I'm looking to add this type of function to a script so that the helpdesk people only see the information when they run the script and the specific command in it.
Hopefully that's explained clearly enough :)
Thanks all.
I don't think there is a way to do that natively. You could fiddle something together with Azure PIM, but that's more for one-off operations than for small actions that are done often.
You could, however, work around that by building some sort of web interface that triggers commands on another server using a privileged service account and returns the output through the web interface. You can make it so that the interface can only request one specific command to be run; then the only thing you have to worry about is sanitizing your parameters well to avoid unwanted injection.
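As a rough illustration of that "one specific command" idea, the script the web interface calls on the back-end server (running as the privileged service account) could look something like this; the validation pattern, app registration details and tenant name are placeholders you'd replace with your own:

    # get-shared-mailbox-access.ps1 - the only thing this endpoint is allowed to do
    param(
        [Parameter(Mandatory)]
        [ValidatePattern('^[A-Za-z0-9._-]+@yourdomain\.com$')]   # whitelist the input format to block injection
        [string]$Mailbox
    )

    Import-Module ExchangeOnlineManagement
    Connect-ExchangeOnline -AppId $env:EXO_APP_ID -CertificateThumbprint $env:EXO_CERT_THUMBPRINT -Organization 'yourtenant.onmicrosoft.com' -ShowBanner:$false

    # The single query exposed to the helpdesk
    Get-MailboxPermission -Identity $Mailbox |
        Where-Object { $_.User -notlike 'NT AUTHORITY\*' } |
        Select-Object User, AccessRights

    Disconnect-ExchangeOnline -Confirm:$false

The helpdesk never touches the service account's credentials; they only see the filtered output the interface hands back.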
Alternatively, what are you trying to protect against by restricting access so much? Isn't it something that could be done more easily with a read-only account and some clearly defined policy? If your helpdesk people overstep their allowed scope, that's a management/HR problem as much as a technical one.

Can LSF be configured to restrict access to a queue based on executable or by passing a token

I'm a casual and mostly inexperienced LSF user, so please bear with...
I develop software in a corporate setting that submits jobs to LSF for processing. We have a set of machines that we want to use for a specific application but not open up to the public at large for any other usage. There is something in place now that allows a few specific users access to the machines. But we also want any user to be able to use them IF they are running a certain application (a shell script that runs a Perl script, in this case).
I suppose registering the application(s) would be one approach. Another might be to pass a secret/encrypted token or key. Or maybe there are other mechanisms for this.
Is there an LSF-based solution for this?
Thanks
There are a couple of LSF features that can help here. A queue or application profile can have dedicated hosts and users (the HOSTS and USERS parameters).
Queues can also have a job starter to check and reject invalid job commands.
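As a rough sketch (the queue name, host names, and script path are made up), the queue definition in lsb.queues might look like this:

    Begin Queue
    QUEUE_NAME   = special_app
    HOSTS        = hostA hostB hostC                      # the dedicated machines
    USERS        = all                                    # open to everyone; the job starter does the filtering
    JOB_STARTER  = /usr/local/lsf/scripts/only_report.sh
    End Queue

The job starter would be a small wrapper that inspects the job command line it is handed and refuses anything that isn't the approved shell script, so the queue is effectively locked to the application rather than to a list of users. Run badmin reconfig after editing the file.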

SCOM Rule for Fake Alerts

I am working on a tool to generate fake data for System Center Operations Manager for internal testing purposes. I wrote a script as part of a discovery that is able to create an instance of any class I want and make SCOM fake-discover it. Currently, I'm using a class for AD Printer. Now the next step is to somehow create alerts on behalf of the Printer. For this, I wrote a rule targeted at the AD Printer, which reads from the logs to detect when it should be fired. The logs are being written to from a PowerShell script. However, I see no results. But when I target the same rule to All Windows Computers, I see the alerts.
From what I understand the rule will run on all agents that have an instance of the target class. Since I fake-discovered the AD Printer on this agent (which also happens to be the Management Server), should the rule not run on this?
Any other suggestions on how I can achieve this are welcome as well.
PS. I probably cannot share any of my code as I am under an NDA, but I can clarify my approach further, if needed.
Yes, the PowerShell script should run on the agents which have instances of the AD Printer. I recommend checking the OperationsManager event log for script errors. The easiest way to generate (fake) alerts is to set up a simple, event-based text log monitor: one specific word triggers the unhealthy state (which in turn generates an alert), while another word resets the monitor to the healthy state. You can specify criteria for both events. Look at this blog post for further details.
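For the fake side of it, a couple of lines of PowerShell are enough to drive such a monitor; the log path and trigger words below are made up and need to match whatever you configure in the monitor:

    # Flip the fake printer to unhealthy (raises the alert)
    Add-Content -Path 'C:\FakeAlerts\printer.log' -Value "$(Get-Date -Format s) PRINTER_FAULT Toner empty on FakePrinter01"

    # Later, flip it back to healthy (closes the alert)
    Add-Content -Path 'C:\FakeAlerts\printer.log' -Value "$(Get-Date -Format s) PRINTER_OK FakePrinter01 back to normal"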

AS/400 End User - run keystrokes automatically

I'm a novice with AS/400. I have a bit of coding experience and know that there's always a way into the backend if you're clever enough. But the developers in my organisation said that it's hard to communicate with the server and make it run things remotely.
So I'm wondering if anyone's got any ideas on how I can schedule a simple task. I log in to "Personal Communications", which is the client app. Then I go to a certain menu, i.e. I543, enter the parameter "1", and press "ENTER" to run a report which has a file output.
I know there is that "Macro" function within Personal Communications. But that relies on sending keys, which does not work on a locked screen, and I don't want to trigger it manually, which really defeats the point of automation.
I was hoping I could schedule a simple call command somehow to activate some kind of procedure. I just need to know whether it's possible and where to start looking. Thanks.
Last millennium's AS/400 and today's IBM i both have a basic job scheduler built in.
From a command line, run WRKJOBSCDE.
You need to find out what happens when you take option 1 on menu I543. Assuming it's a simple CALL MYRPT or SBMJOB CMD(CALL MYRPT), adding a scheduled job to run the report is easy.
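As a hedged example, assuming the report is just a program called MYRPT in library MYLIB (the job name, frequency and time here are made up), the scheduled entry could be added with:

    ADDJOBSCDE JOB(RUNI543) CMD(CALL PGM(MYLIB/MYRPT)) FRQ(*WEEKLY) SCDDAY(*MON) SCDTIME(0600)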
However, you probably don't have the authority to do so. Nor should your developers necessarily be able to do so. Your system administrator is the right person. In a small shop, that might be the guy doing development. In a large one, it's another person or team.
But your developers should have at least pointed you toward the admin and the job scheduler.

How to handle large amounts of scheduled tasks on a web server?

I'm developing a website (using a LAMP stack) which must handle many user-made scheduled tasks. It works as follows: a user creates an event and sets a date, and other users (as many as 63) may join. A few hours before the set date, the system must email each user subscribed to that event. And that's it.
However, I have never handled scheduling, and the only tools I know (poorly) are cron and at. My plan is to create an at job for each event, which will call a script that gets all the subscribers' email addresses and mails them.
My question is: is my plan/design good? Is it scalable? Are there better options that I should be aware of?
Why a separate job for each event? I've done a similar thing for a newsletter with a cron job running just once per hour: if there are any newsletters to be sent, it handles them. In your case you'd have a script that runs once every hour and gets the list of users for the events that fall within the desired time window.
It will work. As far as scalability goes, at a minimum make sure that the script runs in its own process so it doesn't bog down the server unnecessarily.
Create a php-cli script, perhaps?
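For example, the crontab entry could be as simple as this (the script path is made up):

    0 * * * * php /var/www/app/cron/send_event_reminders.php >> /var/log/event_reminders.log 2>&1

The script would then look up the events starting within the next few hours that haven't been notified yet and send all the emails in one pass.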
I'm doing most of my work in Rails nowadays, and there's a wealth of background-processing libraries. One of them is Resque; it uses the Redis server to keep track of the jobs.
I found a PHP clone: https://github.com/chrisboulton/php-resque
It might be overkill for your use case, but give it a shot perhaps.
If you would consider a proper framework that uses an application server (rather than a simple web server), Spring has a task-scheduling layer that's simple to use. Scheduling jobs on the server really requires more than what a simple LAMP install can do, but I haven't used PHP in a while, so maybe there's an equivalent.
Here's an article that compares some of your options.