GLPI automatic action - server

I have a problem with an automatic action in GLPI. The "queuedmail" action, which according to my configuration sends the queued mail every minute, is not running, and the only way to make it work is to execute it manually. I don't know what to do.
The GLPI version I use is 9.1.6.

This is very little information.
You need to check:
Queuedmail status: is it set to Scheduled?
Queuedmail run mode:
If set to CLI, you need to add a cron job (this is recommended): http://wiki.glpi-project.org/doku.php?id=en:config:crontab. A sample crontab entry is sketched below.
Just for testing, change the run mode to GLPI; the action will then run whenever anyone uses GLPI.
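For the CLI run mode, a minimal crontab sketch, assuming GLPI is installed in /var/www/glpi, the PHP CLI is /usr/bin/php, and the web server runs as www-data (all three are assumptions to adapt):
# /etc/cron.d/glpi: run GLPI's internal task scheduler every minute as the web server user
* * * * * www-data /usr/bin/php /var/www/glpi/front/cron.php >/dev/null 2>&1
With this in place, each tick lets GLPI run whatever automatic actions are due according to their own configured frequencies (every 1 minute for queuedmail).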
Regards,

Related

How to track down an installer script that is not executing in a pipeline?

I'm new to the whole Azure DevOps world and just got transferred to a new team that does just that.
One of my assignments is to fix an issue with a pipeline where one of the steps runs a shell script that installs an application. Currently, the step seems to run without any issue shown on the log, but when we connect to the container's pod, the app is not there.
If we run the script directly inside the pod, the application is installed correctly. I'm not sure how to track this down. One of the things I've tried was checking the event log to see if there is any error while the installation is executed: Get-EventLog -LogName "Windows PowerShell" -Newest 20, but so far no luck there. Again, I'm kind of new at this and not sure what other tools are out there to track down why the script is not installing during the pipeline execution.
To troubleshoot your pipeline run, you can configure your pipeline logs to be more verbose.
1. To configure verbose logs for a single run, start a new build by choosing Run pipeline and selecting Enable system diagnostics, then Run.
2. To configure verbose logs for all runs, add a variable named system.debug and set its value to true (sketched below).
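For option 2, a minimal sketch of the variable block in the pipeline YAML (the file name azure-pipelines.yml is an assumption):
# azure-pipelines.yml: turn on verbose diagnostic logs for every run of this pipeline
variables:
  system.debug: 'true'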
You can also try logging into your agent server and checking the event log there. See this blog for how to view the event log on Windows.
The issue was related to how the task was awaited. Adding these parameters and piping to Wait-Process solved it for us:
RUN powershell C:\dev\myprocess.ps1 -PassThru | Wait-Process;

zlogin usage in a Rundeck task

As part of a Rundeck task I'm trying to log in to a global zone, use the command zoneadm list, and then log in to each of the local zones (to shut down various apps and to issue a reboot) using the command /usr/sbin/zlogin, executing the hostname command to verify that it did log in to the local zone.
However, this is not working.
Is there a better way to do this? Please advise.
Make sure that your job is dispatching to your remote node correctly: you can run the command from "Commands" (right panel) pointing to your node (referenced in the "Nodes" textbox); that way you can rule out possible path or user-rights issues. Take a look at this. Also, zlogin is an interactive shell, and as you can see, you need to use it in non-interactive mode.
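As a sketch of that flow, run from the global zone as root (the hostname command is a stand-in for your shutdown and reboot commands):
#!/bin/sh
# List the running non-global zones and run a command in each via zlogin's non-interactive mode
for zone in $(/usr/sbin/zoneadm list | grep -v '^global$'); do
    echo "=== ${zone} ==="
    /usr/sbin/zlogin "${zone}" hostname
done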

How to restart an exe when it exits in Windows 10?

I have a process in Windows that I run at startup. Now I need to restart it automatically if that process gets killed or stopped somehow in Windows 10.
Is there any way to do this? The process is an HTTP server, and if it is somehow stopped I need to restart it. I have tried writing a PowerShell script that checks the tasklist status of the process and restarts it if it is not found, but that is not a good approach. Please suggest a better way to do it.
I have a Go exe; under a particular scenario my process gets killed or stopped and I need to start it up again automatically, immediately after the exe is killed. What is the best way to achieve this?
I will give you a brief rundown. You can enable Audit Process Termination in the Local Group Policy of the machine; in your case, success audits are enough. Note that the exact location of the setting may change with the OS version.
Now, every time a process gets terminated, a success event is generated and written to the Security event log.
This allows you to create a Task Scheduler task that triggers on that event and calls a script that starts the process again. Simple, right?
Well, you might have some trouble setting that task up, especially when you want to pass details about the generating event to the script. This should help you get through that; a command-line sketch is below.
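As a sketch of that task from the command line (the task name and script path are assumptions; 4689 is the Security-log "a process has exited" audit event, and the /MO query could be narrowed further to match your exe's name in the event data):
rem Create a task that fires whenever a process-termination audit event is written to the Security log
schtasks /Create /TN "RestartMyServer" /RU SYSTEM ^
  /SC ONEVENT /EC Security /MO "*[System[(EventID=4689)]]" ^
  /TR "powershell.exe -File C:\scripts\restart-server.ps1"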
You can use Task Scheduler for this purpose. There is a "restart on failure" option that can be selected; whenever your process fails, it will be restarted.
Reference: https://social.technet.microsoft.com/Forums/windowsserver/en-US/4545361c-cc1f-4505-a0a1-c2dcc094109a/restarting-scheduled-task-that-has-failed?forum=winserverManagement

SuiteCRM notification not working

I am new to SuiteCRM and have successfully set it up; it is up and running.
The notification system is not working at all and is not showing any alerts.
Here is what I have done:
Successfully set up SuiteCRM and it is working well
Set up cron jobs on the server as mentioned in the admin/scheduler section
Repaired the scheduler after setting it up
Tested whether notifications work by setting the renewal reminder date fields in the Contracts module, but the notification does not show up
What am I missing or doing wrong? In the admin/scheduler settings I can see the list of schedulers. How do I know which scheduler serves my purpose? Is it possible to create a new scheduler? I cannot see any option to create one.
Make sure you have set the permissions properly on the server: 755 on all SuiteCRM files and 777 for cache and custom, as sketched below. After that, execute Repair and Rebuild and run any database mismatch queries it reports.
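A minimal sketch of those permission commands, assuming SuiteCRM is installed in /var/www/html/suitecrm and the web server runs as www-data (adjust both to your server):
cd /var/www/html/suitecrm
sudo chown -R www-data:www-data .
sudo chmod -R 755 .              # 755 on all SuiteCRM files, as described above
sudo chmod -R 777 cache custom   # more permissive on the cache and custom directories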
After that, make sure the "Optimise Advanced OpenDiscovery Index" scheduler is executing without any issue.

Scheduled Tasks for Web Applications

What are the different approaches for creating scheduled tasks for web applications, with or without a separate web/desktop application?
If we're talking Microsoft platform, then I'd always develop a separate Windows Service to handle such batch tasks.
You can always reference the same assemblies that are being used by your web application to avoid any nasty code duplication.
Jeff discussed this on the Stack Overflow blog -
https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
Basically, Jeff proposed using the CacheItemRemovedCallback as a timer for calling certain tasks.
I personally believe that automated tasks should be handled as a service, a Windows scheduled task, or a job in SQL Server.
Under Linux, checkout cron.
I think Stack Overflow itself is using an ApplicationCache expiration to run background code at intervals.
If you're on a Linux host, you'll almost certainly be using cron.
Under Linux you can use cron jobs (http://www.unixgeeks.org/security/newbie/unix/cron-1.html) to schedule tasks.
Use URL fetchers like wget or curl to make HTTP GET requests.
Secure your URLs with authentication so that no one can execute the tasks without knowing the user/password.
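For example, a crontab sketch along those lines (the URL, credentials, and 15-minute schedule are assumptions):
# Hit a password-protected task URL every 15 minutes; -f makes curl return a non-zero exit code on HTTP errors
*/15 * * * * curl -fsS -u taskrunner:secret https://example.com/tasks/run-cleanup >/dev/null 2>&1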
I think Windows' built-in Task Scheduler is the suggested tool for this job. That requires an outside application.
This may or may not be what you're looking for, but read this article, "Simulate a Windows Service using ASP.NET to run scheduled jobs". I think StackOverflow may use this method or it was at least talked about using it.
A very simple method that we've used where I work is this:
Set up a webservice/web method that executes the task. This webservice can be secured with username/pass if desired.
Create a console app that calls this web service. If desired, you can have the console app send parameters and/or get back some sort of metrics for output to the console or external logging.
Schedule this executable in the task scheduler of choice.
It's not pretty, but it is simple and reliable. Since the console app is essentially just a heartbeat to tell the app to go do its work, it does not need to share any libraries with the application. Another plus of this methodology is that it's fairly trivial to kick off manually when needed.
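Scheduling that heartbeat executable with the Windows Task Scheduler could look like this sketch (the task name, path, and schedule are assumptions):
rem Run the heartbeat console app every night at 02:00
schtasks /Create /TN "WebAppNightlyTasks" /SC DAILY /ST 02:00 /TR "C:\tools\TaskHeartbeat.exe"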
You can also tell cron to run PHP scripts directly, for example. You can set the permissions on the PHP file to prevent other people from accessing it, or better yet, keep these utility scripts out of any web-accessible directory.
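A crontab sketch of that approach (the script path is an assumption, kept outside the web root):
# Run a maintenance script directly with the PHP CLI every night at 01:30
30 1 * * * /usr/bin/php /home/appuser/scripts/cleanup.php >> /home/appuser/cleanup.log 2>&1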
Java and Spring -- Use quartz. Very nice and reliable -- http://static.springframework.org/spring/docs/1.2.x/reference/scheduling.html
I think there are easier ways than using cron (Linux) or Task Scheduler (Windows). You can build this into your web-app using:
(a) quartz scheduler,
or if you don't want to integrate another 3rd party library into your application:
(b) create a thread on startup which uses the standard Java 'java.util.Timer' class to run your tasks.
I recently worked on a project that does exactly this (obviously it is an external service but I thought I would share).
https://anticipated.io/
You can receive a webhook or an SQS event at a specific scheduled time. Dealing with these schedulers can be a pain, so I thought I'd share in case someone is looking to offload that concern.