Can I trap the Informatica "Amazon S3 bucket name doesn't match standards" error? - amazon-redshift

In Informatica we have mapping source qualifiers connecting to Amazon Web Services (AWS).
We often, and erratically, get a failure saying that our S3 bucket names do not comply with naming standards. When we restart the workflows, they complete successfully every time.
Is there a way to trap this specific error and then perhaps call a command object to restart the workflow via pmcmd?

How are you starting the workflows in regular runs?
If you are using a shell script, you can add logic to restart the workflow when you see a particular error. I created a script a while ago to restart workflows on a specific error.
In a nutshell, it works like this:
start the workflow (with pmcmd)
# in case of an error
check the repository DB and get the error
if the error is specific to the S3 bucket name
restart the workflow
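A minimal sketch of that flow, assuming a Unix shell; the integration service, domain, folder, and workflow names are placeholders, authentication options are omitted, and the repository query is a hypothetical helper you would implement against your own repository database:
#!/bin/sh
# Start the workflow and block until it finishes; with -wait, pmcmd
# returns the workflow's outcome as its exit code.
pmcmd startworkflow -sv $INT_SERVICE -d $DOMAIN -f MyFolder -wait wf_my_workflow
if [ $? -ne 0 ]; then
    # Hypothetical helper: fetch the last error text from the repository DB
    ERROR_MSG=$(get_last_error_from_repository)
    case "$ERROR_MSG" in
        *"naming standards"*)
            # The S3 bucket name error: restart once, as it clears on rerun
            pmcmd startworkflow -sv $INT_SERVICE -d $DOMAIN -f MyFolder -wait wf_my_workflow
            ;;
    esac
fi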

Well... it's possible, for example, to have one workflow (W1):
your_session --> cmd_touch_file_if_session_failed
and another workflow (W2), running continuously:
event_wait_for_W1_file --> pmcmd_restart_W1 --> delete_watch_file
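The command tasks behind those steps could be as simple as this sketch (the watch-file path and workflow names are placeholders):
# In W1's cmd_touch_file_if_session_failed command task:
touch /shared/watch/w1_failed
# In W2, after event_wait_for_W1_file fires:
pmcmd startworkflow -sv $INT_SERVICE -d $DOMAIN -f MyFolder W1
rm -f /shared/watch/w1_failed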
Although it would be a lot better to nail down the cause of your failures and get it resolved.

Related

Windows Services - How can I find the darktable instance in Windows Services

I accidentally screwed up my darktable configuration, so I reloaded it from scratch. To avoid losing all the changes I have made to my pictures, I wrote a PowerShell backup script for the darktable database. I want to launch this script from the Windows Task Scheduler whenever I launch darktable. I have found the event ID in the security log that indicates a new process has been created, which I should be able to use to automatically launch my backup script from Task Scheduler. I want to add code to the script to check the services to see if darktable is actually running, and only perform the backup if it is. Does anyone know how I can identify this?

How to track down why an installer script in a pipeline is not executing?

I'm new to the whole Azure DevOps world and just got transferred to a new team that works on exactly that.
One of my assignments is to fix an issue with a pipeline where one of the steps runs a shell script that installs an application. Currently, the step seems to run without any issues showing in the log, but when we connect to the container's pod, the app is not there.
If we run the script directly inside the pod, the application is installed correctly. I'm not sure how to track this down. One of the things I've tried was to check the event log to see if there's any error while the installation is executed: Get-EventLog -LogName "Windows PowerShell" -Newest 20, but so far no luck there. Again, I'm kind of new at this, and not sure what other tools are out there to track down the reason why the script is not installing during the pipeline execution.
To troubleshoot your pipeline run, you can configure your pipeline logs to be more verbose.
1. To configure verbose logs for a single run, you can start a new build by choosing Run pipeline and selecting Enable system diagnostics, then Run.
2. To configure verbose logs for all runs, you can add a variable named system.debug and set its value to true.
You can also try logging into your agent server and checking the event log. See this blog for how to view the event log on Windows.
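If you queue runs from a script, the system.debug variable can also be set at queue time; for example, with the Azure DevOps CLI extension (the pipeline name is a placeholder, organization/project defaults are assumed to be configured, and the variable must be marked settable at queue time):
# Queue a run with verbose diagnostics enabled
az pipelines run --name MyPipeline --variables system.debug=true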
The issue was related to how the process was awaited. Piping it through Wait-Process solved the issue:
RUN powershell C:\dev\myprocess.ps1 -PassThru | Wait-Process;

Jenkins - How to schedule a Jenkins job when another job completes on a remote machine

I have two remote machines: A and B. Both have Jenkins installed.
A: This will build from trunk.
B: To trigger automation.
How can I configure the Jenkins job on machine B to run when the build is successful on machine A?
I had the same requirement, because one of the servers I was using belonged to a different company; while altering their Jenkins set-up would have been possible, it was clearly going to take a long time to get buy-in, even though I was allowed to monitor it and its outputs. If you don't have these restrictions, you should definitely follow the usual master-slave configuration to address this. That said, here is the solution I came up with; I've explained why this was a genuine requirement, and I hope to go down the master-slave route myself when possible.
Install the ScriptTrigger plug-in for Jenkins, and you can then watch the remote Jenkins instance with a script similar to the following:
# Fetch the number of the last successful build of the remote (upstream) job
LAST_SUCCESSFUL_UPSTREAM_BUILD=`curl http://my.remote.jenkins.instance.com:8080/job/remoteJobName/lastSuccessfulBuild/buildNumber`
# Read the last build number we acted on (default to 0 on the first poll)
LAST_KNOWN_UPSTREAM_BUILD=`cat $WORKSPACE/../lastKnownUpstreamBuild || echo 0`
echo $LAST_SUCCESSFUL_UPSTREAM_BUILD > $WORKSPACE/../lastKnownUpstreamBuild
# Exit 1 (i.e. trigger a build) only if a newer successful build has appeared
exit $(( $LAST_SUCCESSFUL_UPSTREAM_BUILD > $LAST_KNOWN_UPSTREAM_BUILD ))
Get the ScriptTrigger to schedule a build whenever the exit code is '1'. Set up a suitable polling interval and there you have it.
This will obviously only schedule a build if the upstream job succeeds. Use "lastBuild" or "lastFailedBuild" instead of "lastSuccessfulBuild" in the URL above as your requirements dictate.
NOTE: Implemented using a BASH shell. May work in other UNIX shells, won't work in Windows.

Get Chef to execute a mongodb script after mongodb has started

We're currently using Chef to provision our servers, and we want our recipe/cookbook to automatically add some data to the Mongo database once it's installed and running.
This is where we start to run into problems. We were using an execute resource to run the mongo script like this:
execute "install-mongodb-config" do
command "mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} \"#{node[:mongodb][:mongo_add_config_script]}\""
action :run
end
This part of the recipe always failed no matter what we tried! I won't get into the details of everything we tried here (unless I need to), but let's just say that I've exhausted all the possibilities of subscribes and notifies (I think).
The problem originates from the fact that we are using mongodb::10gen_repo to install MongoDB. That recipe returns as soon as apt-get installs the package, and Chef then continues on to execute more resources.
We have tried executing the above resource directly after mongodb::10gen_repo, but it doesn't seem like MongoDB is available yet, and the mongo shell cannot connect and run the script. The error we see looks like this:
MongoDB shell version: 2.0.2
Thu Sep 6 18:40:45 ReferenceError: setTimeout is not defined mongotest.js:2
failed to load: mongoAddConfig.js
Nothing we have tried has been able to get around this in a clean Chef way. What we resorted to was replacing the execute resource with the following:
execute "install-mongodb-config" do
command "sleep 60; mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} \"#{node[:mongodb][:mongo_add_config_script]}\""
action :run
end
This just makes the command sleep for 60 seconds before the mongo script is run. I know this isn't the Right way to do this, but it works for now.
Can anyone suggest the Right way to do this? I have a feeling that I will need to talk to the guys that created the mongodb chef script and request a feature!
First of all, remove this "sleep 60"; this can be handled by Chef itself. All resources have common attributes, and "retries" and "retry_delay" are among them. So the easiest way would be:
execute "install-mongodb-config" do
command "mongo some_command"
action :run
retries 6
retry_delay 10
end
If you have more than 2-3 places where you have to run some command on the Mongo database, consider creating an LWRP similar to the one in this mongodb cookbook (in particular, check the libraries/mongodb.rb file). You can hide the logic that waits for the server to respond there.
Is it important that the same Chef run that installs the software also injects the initial configuration? The 'chefly' method of constructing cookbooks and recipes is to make them idempotent, in order to ensure that they can be run over and over again without producing unintended results.
In this particular case, I would limit the first recipe to only just installing and starting up MongoDB. This recipe would do nothing if it saw that MongoDB was already running on the host. Then I'd have another recipe that would run only if it saw that Mongo had been set up and was running. It would query MongoDB to see if the initial configuration had been done. If so, it would simply return. If not, it would run your configuration routine.
In this way, these recipes could run all the time, anytime, on your machine. Even if someone uninstalled MongoDB, Chef would get around to ensuring that it was set back up again and pristine.
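As a sketch of that check, assuming a hypothetical marker collection named app_config: a small shell test like the one below exits 0 once the initial configuration exists, so it could serve as the not_if guard on the configuration resource.
# Exits 0 (skip the config step) once the initial config documents exist
count=$(mongo mydb --quiet --eval 'db.app_config.count()')
[ "$count" -gt 0 ]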
So, I don't know much at all about Chef, but your problem seems to be that you try to connect immediately after bringing the server up.
Servers are not immediately available when you bring them up, since there is a bit of overhead that goes into electing a primary, getting all the server statuses, etc.
You can recreate this without Chef by bringing up a replica set and immediately trying to connect to it in a simple script, so it's not Chef-specific.
Not sure if there is a way around the server startup lag since bringing up a primary is expected to be a relatively infrequent occurrence compared to just adding nodes to a set.
The only cleaner potential solution I see is configuring a longer timeout for the connection to be established. You can find how to do this in the MongoDB documentation here: http://www.mongodb.org/display/DOCS/Connections
The flag of interest for you is likely connectTimeoutMS
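For example, a connection string that raises the connect timeout to 30 seconds might look like this (host and database names are placeholders; see the linked page for the full option list):
mongodb://db.example.com:27017/mydb?connectTimeoutMS=30000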

Azure deployment with PowerShell, "New-AzureDeployment : There was no endpoint listening at https://management.core.windows.net/..."

Following the guide and PowerShell script from this article,
https://www.windowsazure.com/en-us/develop/net/common-tasks/continuous-delivery/
I've run into an extremely odd error:
9/4/2012 9:02 PM - Creating New Deployment: In progress
New-AzureDeployment : There was no endpoint listening at https://management.core.windows.net/5921d8af-88a1-4f63-9673-5e1ae1df7e8a/services/storageservices/Build_2012-09-04_02-27.1/dist/LNEC_Admin.Azure.cspkg/keys that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.
It's odd because we're on build "Build_2012-09-04_08-16.1", not the one mentioned in the URL above (which no longer even exists on the filesystem). This is under Jenkins CI, which runs under the NETWORK SERVICE account. If I run it by hand with my own account, the same error results, but with lnecint in place of the build directory: https://management.core.windows.net/5921d8af-88a1-4f63-9673-5e1ae1df7e8a/services/storageservices/lnecint/keys
The keyword "lnecint" isn't mentioned anywhere in any config (I've searched every file on the entire machine and the TFS server). It was the name of a storage account, but it was deleted long ago.
VS 2012, Azure SDK 1.7.1
There's definitely an issue with your endpoint. Can you check what parameters you're passing to the New-AzureDeployment cmdlet?