Exception when exiting - chef-recipe

I'm writing a Chef recipe as shown below. I want the recipe to stop executing the resources that follow this block, but without raising an exception.
Do you have any ideas for doing this, other than calling exit(0)?
ruby_block "verify #{current_container_name}" do
block do
require "docker"
begin
container = Docker::Container.get(current_container_name)
rescue Docker::Error::NotFoundError => exception
container = nil
end
if container.nil?
exit(0)
end
end
end

You could use ignore_failure true on this ruby_block instead of handling the exception. That way it would still output the error messages, but wouldn't treat them as a failure, so subsequent resources would continue to execute.
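For illustration, a minimal sketch of that approach (reusing the current_container_name variable from the question):

ruby_block "verify #{current_container_name}" do
  block do
    require "docker"
    # Raises Docker::Error::NotFoundError when the container does not exist;
    # ignore_failure logs the error but lets the run carry on.
    Docker::Container.get(current_container_name)
  end
  ignore_failure true
end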

If you want to abort a chef run under a special circumstance - like the current Docker container not being available - this is not possible. The solution is to rethink your problem: you want some code to run only when a special condition is met.
You do this by either leaving the recipe early (with a return true), wrapping your configuration steps in a conditional (like if my_container.nil? then ... end), or using node attributes to step through conditions.
Let's say your cookbook x relies on three recipes: 1, 2 and 3. If you'd like recipes 2 and 3 to run only if recipe 1 was successful, you can write the state of the first recipe into the node attributes (e.g. node.normal['recipe1'] = 'successful').
In the other recipes you then define an entry gate like:
return true if node['recipe1'] != 'successful'
But be aware: if you use node attributes, you will (mostly) need to set them in a ruby_block resource at the end of your first recipe, because bare Ruby code is evaluated and run during resource compilation, which takes place before the converge run.
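Putting that together, a rough sketch of the attribute-gate pattern (recipe and attribute names are placeholders):

# recipes/recipe1.rb - set the flag at converge time, as the last resource
ruby_block 'mark recipe1 successful' do
  block do
    node.normal['recipe1'] = 'successful'
  end
end

# recipes/recipe2.rb - entry gate in bare Ruby, evaluated at compile time,
# i.e. before recipe1's ruby_block converges, so within a single chef-client
# run it sees the value saved by previous runs
return true if node['recipe1'] != 'successful'

package 'nginx'   # placeholder for the resources you only want to run conditionally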

Failed to expand block containing connect

I'm trying to set up a connection with a conditional on the temperature, to represent a temperature-sensitive charging pipe that works inside a stratified heat storage tank, with the following for loop iterating over the volumes of the tank:
for i in 2:nSeg loop
  if (sensor_T_inflow.T > vol[i].T) and (sensor_T_inflow.T < vol[i-1].T) then
    connect(feedPort, vol[i].ports[3])
      annotation (Line(points={{-50,-60},{6,-60},{6,-16},{16,-16}}, color={0,127,255}));
  end if;
end for;
However, I get errors such as this one for volume 2:
Failed to expand block containing connect:
if (sensor_T_inflow.T > vol[2].T and sensor_T_inflow.T < vol[1].T) then connect(feedPort, vol[2].ports[3]); end if;
The model contained invalid connect statements.
Check aborted.
This might have a silly solution, but I checked the names of all my elements and their temperature properties and they match this code snippet, and I confirmed the error comes specifically from the if conditional. Here is my model, if you could take a look at it.
I tried commenting out the conditional and the check was successful, except that I have an imbalance of about 40 variables to equations. If you have any advice on how to solve those imbalances, I'd be very grateful as well.
Thanks a lot in advance.
Connect statements may not depend on time-varying variables.
(The technical statement is first in the list in https://specification.modelica.org/master/connectors-and-connections.html#restrictions-of-connections-and-connectors )
A solution would be to turn that into an array of valve components and control them in a similar way.
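A rough sketch of that idea - the valve model, medium and nominal values below are placeholders, not taken from your model, and in practice you would probably want some hysteresis on the condition to avoid chattering:

  Modelica.Fluid.Valves.ValveLinear valve[nSeg - 1](
    redeclare each package Medium = Medium,
    each dp_nominal = 1000,
    each m_flow_nominal = 0.1);
equation
  for i in 2:nSeg loop
    // the connections are static and always present ...
    connect(feedPort, valve[i - 1].port_a);
    connect(valve[i - 1].port_b, vol[i].ports[3]);
    // ... and only the valve opening depends on time-varying variables
    valve[i - 1].opening = if (sensor_T_inflow.T > vol[i].T and
                               sensor_T_inflow.T < vol[i - 1].T) then 1 else 0;
  end for;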

How to perform a conditional test within a Talend job?

I am looking to trigger a series of processes, and I want to tell if each one succeeds or fails before starting the subsequent ones.
I am using tSSH (on Talend 6.4.1) to trigger a process and I only want the job to continue if it is a success. The tSSH "component" doesn't appear to fail if it receives a non-zero return code, so I have tried using an assert. However, even if the assert fails, it doesn't appear to prevent the component and subjob from being "OK", which is a bit odd, so I can't use on-(component|subjob)-ok to link to the next job.
I don't seem to be able to find any conditional evaluation components which will allow me to stop the continuation of the job or subjob based on the evaluation result.
The only way I can find is to have
tSSH1 --IF globalMap.get("tSSH_1_EXIT_CODE").equals(0)--> tSSH2...
--IF !globalMap.get("tSSH_1_EXIT_CODE").equals(0)--> (failure logging subjob)
which means coding the test twice with negation.
Am I missing something, or are there no such conditional components?
You can put an If condition on the tSSH component's outgoing links for success/failure using the component's global variables, i.e.
((String)globalMap.get("tSSH_1_STDERR")) and ((String)globalMap.get("tSSH_1_STDOUT")).
The condition you can check is:
if ((String)globalMap.get("tSSH_1_STDERR")) != null, then call the error-logging subjob,
else call tSSH2.
Hope this helps...
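For reference, the two Run if conditions on the trigger links could look like this (this assumes the failing command actually writes something to STDERR, which depends on what you run):

// tSSH1 --If--> error-logging subjob: something arrived on STDERR
((String) globalMap.get("tSSH_1_STDERR")) != null && !((String) globalMap.get("tSSH_1_STDERR")).isEmpty()

// tSSH1 --If--> tSSH2: STDERR is empty
((String) globalMap.get("tSSH_1_STDERR")) == null || ((String) globalMap.get("tSSH_1_STDERR")).isEmpty()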

In k8s reflector, DeltaFiFo.Replace function

In the reflector framework, a 'LIST' request is executed first and then changes are watched. The list result is put into a queue (such as DeltaFIFO) by calling the Replace function; however, when computing an item's key fails, Replace just returns the error, and ListAndWatch ends and starts the next round of ListAndWatch.
(source on Github)
One drop of poison infects the whole tun of wine.
I don't know why it is handled this way. When one of the items is in a wrong state (so computing its key will always fail), ListAndWatch will always fail. I think logging the error instead of returning it would be better.
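For illustration, this is roughly what I mean - a simplified, self-contained sketch, not the actual client-go code:

package main

import "fmt"

// keyOf stands in for DeltaFIFO.KeyOf; here it simply fails on empty names.
func keyOf(item string) (string, error) {
	if item == "" {
		return "", fmt.Errorf("empty name")
	}
	return item, nil
}

// replaceSkippingBadItems logs and skips items whose key cannot be computed,
// instead of aborting the whole Replace call the way the current code does.
func replaceSkippingBadItems(list []string) []string {
	var keys []string
	for _, item := range list {
		key, err := keyOf(item)
		if err != nil {
			fmt.Printf("skipping item %q: %v\n", item, err)
			continue
		}
		keys = append(keys, key)
	}
	return keys
}

func main() {
	fmt.Println(replaceSkippingBadItems([]string{"pod-a", "", "pod-b"}))
}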
Thanks.

Can one dynamically add sections to a matlab publish script?

I have a MATLAB script where I would like to dynamically create sections in my MATLAB publish output.
At present, the only way I know to create a section break is to put code like this in my script:
%% This is a section break
I'd like to run publish on my script and have the section breaks added as part of the publish. For instance, say I had the following script:
breaks(1).name = 'This is section break 1.';
breaks(2).name = 'This is section break 2.';
for ix = 1 : numel(breaks)
    functionThatInsertsSectionBreakTitle(breaks(ix).name);
    fprintf('Some random processing associated with break %d.\n', ix);
end
I would like to call publish on that script, and end up with a document that looks something like:
This is section break 1.
Some random processing associated with break 1.
This is section break 2.
Some random processing associated with break 2.
Obviously I could do this by writing a script that writes a script that then gets executed by publish, but I was hoping for something a bit more direct. I am aware of the MATLAB Report Generator toolbox, which I would hope would cleanly handle this type of scenario. Alternatively, if the new (as of R2016a) Live Script handles this use case, that's a fine answer as well.
One way to address this problem is by displaying HTML code in the command output (documented here).
In your example, the code would look like this:
breaks(1).name = 'This is section break 1.';
breaks(2).name = 'This is section break 2.';
for ix = 1 : numel(breaks)
    disp(['<html><h2>' breaks(ix).name '</h2></html>']);
    fprintf('Some random processing associated with break %d.\n', ix);
end
This is incredibly useful when you want results displayed with a custom layout, such as a table. And it avoids the need for a MATLAB Report Generator license...
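To see the result, publish the script as usual, e.g. publish('breaksDemo.m', 'html') (the file name here is just a placeholder); the disp'd headings then appear as h2 headings in the generated HTML.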

Can a list of strings be supplied to Capistrano tasks?

I have a task whose command in 'run' is the same except for a single value, and this value would come from a list of potential values. What I would like to do is create a task which would use this list of values to define the task and then use that same value in the command defined in 'run'. The point is that it would be great to define the task in such a way that I don't have to repeat nearly identical task definitions for each value.
For example: I want a task that will get the status of a single program from a list of programs that I have defined in an array. I would like to define the task to be something like this:
set programs = %w["postfix", "nginx", "pgpool"]

programs.each do |program|
  desc "#{program} status"
  task :#{program} do
    run "/etc/init.d/#{program} status"
  end
end
This obviously doesn't work, but hopefully it shows what I am attempting here.
Thoughts?
Well, I answered my own question... with a little trial and error. I also did the same thing with namespace so the control of services is nice and elegant. It works quite nicely!
set :programs, %w[postfix nginx pgpool]
set :init_commands, %w[status start stop]

# init.d service control
init_commands.each do |init_command|
  namespace :"#{init_command}" do
    programs.each do |program|
      desc "#{program} #{init_command}"
      task :"#{program}" do
        run "/etc/init.d/#{program} #{init_command}"
      end
    end
  end
end
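With those definitions in place, the generated tasks can be invoked like any other, e.g. cap status:postfix or cap stop:nginx.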