Sitecore commands with Autofac

I have created a Sitecore command which triggers an index rebuild.
I would like to be able to inject services into it with Autofac.
Therefore I have followed this tutorial: http://maze-dev.blogspot.be/2014/03/dependency-injection-in-custom-sitecore.html
With everything in place, it seems that the Sitecore scheduled task creates a new instance of this command, even though the dependencies were already injected in the CommandConfigurator class.
Is there anything else that needs to be done?

The problem is that a Sitecore scheduled task runs on a separate thread, and since the command is registered as InstancePerLifetimeScope (if you followed the example in the linked blog post), Autofac will inject a new instance in the scheduled task.
Instead, your scheduled task should get the command from the CommandManager, using something like:
var command = CommandManager.GetCommand("mynamespace:mycategory:mycommand");
and then call Execute on the command.
Since the CommandConfigurator registers the resolved command instance in the static CommandManager at bootstrap time, that instance is effectively a singleton, and it will be available fully injected in the scheduled task (as long as the command is retrieved through the CommandManager). If the command is also executed from elsewhere in your Sitecore solution, that will most likely happen on another thread, so it is worth checking that your command implementation is thread safe.
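For reference, a minimal sketch of such a scheduled task, assuming the conventional Sitecore task method signature and the made-up command name from above; adjust the name to match your Commands.config entry:

using Sitecore.Data.Items;
using Sitecore.Shell.Framework.Commands;
using Sitecore.Tasks;

public class RebuildIndexTask
{
    // Sitecore calls this method when the schedule item fires.
    public void Run(Item[] items, CommandItem command, ScheduleItem schedule)
    {
        // Reuse the fully injected instance that CommandConfigurator
        // registered in the static CommandManager at startup, instead of
        // letting anything construct a fresh, un-injected command.
        var rebuildCommand = CommandManager.GetCommand("mynamespace:mycategory:mycommand");
        if (rebuildCommand != null)
        {
            rebuildCommand.Execute(new CommandContext());
        }
    }
}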

Get $LSB_JOBID on execution host

I'm having trouble accessing the job ID of a submitted, non-interactive job from within that job. With an interactive job I can read the job ID via $LSB_JOBID, but that variable does not seem to be propagated to the execution host.
Some sources state that LSB_JOBID is propagated, while others state that it isn't (look for -env). Are there any solutions to this? My system creates temp directories for each job, which are accessed via the job ID, so I definitely need it within the job.
Thanks in advance!
LSB_JOBID is set for non-interactive jobs. Have you asked your cluster admin about this? There are a few LSF features that could override the default behaviour, such as esub, a job starter, or eexec.
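A quick way to check what the job actually sees is a trivial batch job that records its environment (the script name and temp-dir path below are made up):

#!/bin/bash
# submitted with: bsub -o jobid.out ./check_jobid.sh
echo "LSB_JOBID=${LSB_JOBID:-<not set>}"   # should print the numeric job ID
mkdir -p "/tmp/myjob.${LSB_JOBID}"         # the per-job temp directory from the question

If LSB_JOBID comes back empty here but is set in interactive jobs, that points at something site-specific (esub, job starter, eexec) rewriting the environment.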

What events (.NET, WMI, etc.) can I hook to take an action when a PowerShell module is imported?

I want to create a listener in PowerShell that can take an action when an arbitrary PowerShell module is imported.
Is there any .NET event or WMI event that is triggered during module import (manual or automatic) that I can hook, and then take an action if the module being imported matches some criteria?
Things that I have found so far that might be components of a solution:
Module event logging
Runspace pool state changed
Triggering PowerShell when event log entry is created
Maybe not directly useful, but if we could hook the same event from within a running PowerShell process, that might help
Use PowerShell profile to load PowerShellConfiguration module
Create a proxy function for Import-Module to check whether the module being imported matches one that needs configuration loaded for it (a sketch follows this list)
In testing, Import-Module isn't called when module auto-loading imports a module, so this doesn't catch every imported module
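For the proxy-function idea, a deliberately simplified sketch (the real Import-Module has many more parameters and parameter sets; Test-NeedsConfiguration and Import-ModuleConfiguration are hypothetical helpers standing in for your own matching and loading logic):

# Functions take precedence over cmdlets, so this shadows Import-Module.
function Import-Module {
    param(
        [Parameter(Mandatory, Position = 0)]
        [string[]]$Name,
        [switch]$Force,
        [switch]$Global,
        [switch]$PassThru
    )
    # Forward to the real cmdlet with exactly the parameters we were given.
    Microsoft.PowerShell.Core\Import-Module @PSBoundParameters
    foreach ($moduleName in $Name) {
        if (Test-NeedsConfiguration -Name $moduleName) {    # hypothetical helper
            Import-ModuleConfiguration -Name $moduleName    # hypothetical helper
        }
    }
}

As noted above, module auto-loading bypasses this function, so it does not catch every import.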
Context
I want to push the limits of aspect-oriented programming / separation of concerns / DRY in PowerShell. Module state (API keys, API root URLs, credentials, database connection strings, etc.) would all be set via Set functions that only change in-memory, module-scoped internal variables. An external system could then pull those values from any arbitrary means of persistence (psd1, PSCustomObject, registry, environment variables, JSON, YAML, database query, etcd, web service call, or anything else that is appropriate to your specific environment).
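To illustrate the kind of module this describes, the module itself would expose nothing but Set/Get functions over in-memory, script-scoped variables (all names here are made up):

# MyApi.psm1
$script:ApiKey  = $null   # module-scoped state; the module never persists it
$script:ApiRoot = $null

function Set-MyApiConfiguration {
    param([string]$ApiKey, [string]$ApiRoot)
    # Only mutate in-memory state; persistence is an external concern.
    if ($PSBoundParameters.ContainsKey('ApiKey'))  { $script:ApiKey  = $ApiKey }
    if ($PSBoundParameters.ContainsKey('ApiRoot')) { $script:ApiRoot = $ApiRoot }
}

function Get-MyApiConfiguration {
    [pscustomobject]@{ ApiKey = $script:ApiKey; ApiRoot = $script:ApiRoot }
}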
This problem keeps coming up in the modules we write, and it is made even more painful when trying to support PowerShell Core cross-platform, where a given means of persistence might not be available (like the registry) yet may be the best option for some people in their environment (group policy pushing registry keys).
Supporting an infinitely variable means of persisting configuration inside each module is the wrong way to handle this, but it is what many modules do today. The result is varying levels of compatibility, not because the core functionality doesn't work, but simply because of how each module persists and retrieves configuration information.
The method of persisting and then loading some arbitrary module configuration should be independent of the module's implementation. To achieve that, I need a way to know when a module is loaded, so that I can trigger pulling the appropriate values from whatever the right persistence mechanism is in our particular environment and then configure the module with the appropriate state.
An example of how I think this might work: maybe there is a .NET event on the runspace object that is triggered when a module is loaded, perhaps tied to a WMI event that fires each time a PowerShell runspace is instantiated. If we had a PowerShellConfiguration module that knew which modules it had been set up to load configuration into, the WMI event could trigger the import of the PowerShellConfiguration module, which on import would start listening to the .NET event for module imports into the runspace and call a module's configuration-related Set functions when it sees that module imported.

Can I trap the Informatica "Amazon S3 bucket name doesn't match standards" error?

In Informatica we have mapping source qualifiers connecting to Amazon Web Services (AWS).
We often, and erratically, get a failure saying that our S3 bucket names do not comply with naming standards. When we restart the workflows, they continue successfully every time.
Is there a way to trap for this error specifically and then maybe call a command object to restart the workflow via pmcmd?
How are you starting the workflows in regular runs?
If you are using a shell script, you can add logic to restart the workflow if you see a particular error. I created a script a while ago to restart workflows for one particular error.
In a nutshell it works like this:
start the workflow (with pmcmd)
in case of an error, check the repository DB and get the error message
if the error is the S3 bucket-name error, restart the workflow
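In script form, it could look roughly like this (connection details, workflow name, log location, and the grep pattern for the error text are all placeholders for your environment):

#!/bin/bash
PMCMD_ARGS="-sv IntSvc -d Domain -u admin -p secret -f MyFolder"
WORKFLOW="wf_load_s3"
LOG="/opt/informatica/logs/${WORKFLOW}.log"   # wherever your workflow log lands

pmcmd startworkflow $PMCMD_ARGS -wait "$WORKFLOW"
if [ $? -ne 0 ]; then
    # Restart only when the failure is the S3 bucket-name error.
    if grep -qi "does not match naming standards" "$LOG"; then
        pmcmd startworkflow $PMCMD_ARGS -wait "$WORKFLOW"
    fi
fi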
Well... it's possible, for example, to have one workflow (W1):
your_session --> cmd_touch_file_if_session_failed
and another workflow (W2), running continuously:
event_wait_for_W1_file --> pmcmd_restart_W1 --> delete_watch_file
Although it would be a lot better to nail down the cause of your failures and get it resolved.

How to make a self-updating pipeline in Concourse

I would like to make a pipeline whose first step checks the pipeline's own configuration and updates it if needed.
What tool / API should I use for this? Is there a Docker image that has this installed for the correct Concourse version? And what is the advised way to authenticate with Concourse from such a task?
Regarding the previous answer suggesting the fly binary, see the Fly resource.
However, having a similar requirement, I am going to try the Pipeline resource instead. It seems more specific and has variable injection solved directly through parameters.
I still have to try it out, but it seems to me that it would be more efficient to have a single pipeline which updates all pipelines, rather than inserting this job into every one of your pipelines.
Also, a specific pipeline should not be concerned with itself, just with the source code it builds (or whatever it does). If you want to start a pipeline when its config file changes, this can be done by modifying a triggering resource, e.g. pushing an empty "pipeline changed" commit.
Naively, it would be a task which gets the repo the pipeline is committed to and runs fly set-pipeline to update the configuration. However, there are a few gotchas here (a sketch of such a task script follows this list):
The fly binary. You'll want the fly executable to be available in the container which runs this task, and it should be the same version of fly as the Concourse being targeted. That probably means downloading it directly from the host via curl.
Authenticating with the Concourse server. You'll need to provide credentials for fly to use, probably via parameters.
Parameter updates. If new parameters become needed, you'll want some kind of single source for all the parameters that need to be set, and to use --load-vars-from rather than individual --var flags. My group uses LastPass notes with a bunch of variables saved in them, downloaded via the lpass tool, but that gets hard if you use 2FA or similar.
Moving the server. You will need the external address of the Concourse server to be injected as a parameter as well, if you want to be resilient to it changing.
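A minimal sketch of such a task script under the assumptions above (target name, URL, credential parameters, and paths are all placeholders; the pipeline repo is assumed to be fetched by a git resource named repo):

#!/bin/sh
set -e

# Download fly from the Concourse host itself so the versions match.
curl -sSL "${CONCOURSE_URL}/api/v1/cli?arch=amd64&platform=linux" -o /usr/local/bin/fly
chmod +x /usr/local/bin/fly

# Log in non-interactively; these variables arrive as task params.
fly -t self login -c "$CONCOURSE_URL" -u "$CONCOURSE_USER" -p "$CONCOURSE_PASS"

# Update the pipeline from its committed config, non-interactively.
fly -t self set-pipeline -n \
  -p my-pipeline \
  -c repo/ci/pipeline.yml \
  --load-vars-from repo/ci/vars.yml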

Scheduled Tasks for Web Applications

What are the different approaches for creating scheduled tasks for web applications, with or without a separate web/desktop application?
If we're talking about the Microsoft platform, then I'd always develop a separate Windows Service to handle such batch tasks.
You can always reference the same assemblies that are being used by your web application to avoid any nasty code duplication.
Jeff discussed this on the Stack Overflow blog:
https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
Basically, Jeff proposed using a CacheItemRemovedCallback as a timer for calling certain tasks.
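The trick, roughly: insert a cache item with a fixed expiration and do the work in its removal callback, then re-insert the item so the cycle repeats. A condensed sketch (DoWork and the five-minute interval are placeholders):

using System;
using System.Web;
using System.Web.Caching;

public static class BackgroundTask
{
    private const string Key = "BackgroundTaskTrigger";

    public static void Start()
    {
        HttpRuntime.Cache.Insert(Key, DateTime.UtcNow, null,
            DateTime.UtcNow.AddMinutes(5),      // fires roughly every 5 minutes
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnRemoved);
    }

    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        DoWork();   // placeholder for the actual background job
        Start();    // re-insert so the callback fires again
    }

    private static void DoWork() { }
}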
I personally believe that automated tasks should be handled as a service, a Windows scheduled task, or a job in SQL Server.
Under Linux, check out cron.
I think Stack Overflow itself is using an ApplicationCache expiration to run background code at intervals.
If you're on a Linux host, you'll almost certainly be using cron.
Under Linux you can use cron jobs (http://www.unixgeeks.org/security/newbie/unix/cron-1.html) to schedule tasks.
Use URL fetchers like wget or curl to make HTTP GET requests against your application.
Secure your URLs with authentication so that no one can execute the tasks without knowing the user/password.
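For example, a crontab entry along these lines (URL and credentials are placeholders):

# m h dom mon dow  command
0 2 * * * curl -fsS -u taskuser:secret https://example.com/tasks/nightly > /dev/null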
I think Windows' built-in Task Scheduler is the suggested tool for this job, though it requires an outside application.
This may or may not be what you're looking for, but read the article "Simulate a Windows Service using ASP.NET to run scheduled jobs". I think Stack Overflow may have used this method, or at least it was discussed.
A very simple method that we've used where I work is this:
Set up a web service/web method that executes the task. This web service can be secured with username/password if desired.
Create a console app that calls this web service. If desired, you can have the console app send parameters and/or get back some sort of metrics for output to the console or external logging.
Schedule this executable in the task scheduler of choice.
It's not pretty, but it is simple and reliable. Since the console app is essentially just a heartbeat to tell the app to go do its work, it does not need to share any libraries with the application. Another plus of this methodology is that it's fairly trivial to kick off manually when needed.
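A sketch of the heartbeat console app under this scheme (endpoint URL and credentials are placeholders):

using System;
using System.Net;

class TaskHeartbeat
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // The web service does the real work; this app just pokes it.
            client.Credentials = new NetworkCredential("taskuser", "secret");
            string result = client.DownloadString("https://example.com/tasks/run-nightly");
            Console.WriteLine(result);   // metrics/output for logging
        }
    }
}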
You can also tell cron to run PHP scripts directly, for example. You can set the permissions on the PHP file to prevent other people from accessing it, or better yet, keep these utility scripts out of any web-accessible directory.
Java and Spring: use Quartz. Very nice and reliable: http://static.springframework.org/spring/docs/1.2.x/reference/scheduling.html
I think there are easier ways than using cron (Linux) or Task Scheduler (Windows). You can build this into your web app using:
(a) the Quartz scheduler,
or, if you don't want to integrate another third-party library into your application:
(b) a thread created on startup which uses the standard Java java.util.Timer class to run your tasks.
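Option (b) can be as small as this (the interval and task body are placeholders):

import java.util.Timer;
import java.util.TimerTask;

public class StartupScheduler {
    public static void start() {
        Timer timer = new Timer(true);   // daemon thread, dies with the app
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                System.out.println("running scheduled task");   // the recurring work
            }
        }, 0, 5 * 60 * 1000);   // start immediately, repeat every 5 minutes
    }
}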
I recently worked on a project that does exactly this (obviously it is an external service, but I thought I would share):
https://anticipated.io/
You can receive a webhook or an SQS event at a specific scheduled time. Dealing with these schedulers can be a pain, so I thought I'd share in case someone is looking to offload that concern.