What's the best method to test for truthiness in CoffeeScript?

Is it acceptable to just test for "truthy" in CoffeeScript? I'm looking for a best practice to soak up empty strings in object attributes.
Given:
obj = {
  "isNull": null,
  "isEmptyString": "",
  "isZero": 0
}
## coffeescript
# obj.isNull? === true
# obj.isEmptyString? === false
# obj.isZero? === false
Which is safer or preferable?
# obj.isEmptyString == "truthy"
# !!obj.isEmptyString === true

I believe !! is the accepted method:
obj = {"isNull": null, "isEmptyString": "", "isZero": 0, "isValue": 1}
!!obj.isNull # false
!!obj.isEmptyString # false
!!obj.isZero # false
!!obj.isValue # true
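As a side note (a quick sketch, not from the original answer): the existential operator ? only guards against null and undefined, so it does not soak up empty strings or zeros; plain truthiness (or !!) does:
obj = {"isNull": null, "isEmptyString": "", "isZero": 0}
obj.isNull?         # false -- null fails the existence check
obj.isEmptyString?  # true  -- an empty string still exists
obj.isZero?         # true  -- and so does zero
# Plain truthiness treats null, "", and 0 alike:
name = obj.isEmptyString or "fallback"  # name is "fallback"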
EDIT: Possible duplicate: Easiest way to check if string is null or empty

Related

Terraform enable or disable a resource conditionally

My requirement is to create or delete a resource by specifying an enable flag as true or false (in case of false the resource should be deleted; in case of true it should be created).
Kindly refer to the code below - here I am creating a "confluent topic" resource and invoking it dynamically using a for_each condition.
Confluent Topic creation
topic.tf file:
resource "confluent_kafka_topic" "topic" {
kafka_cluster {
id = confluent_kafka_cluster.dedicated.id
}
for_each = { for t in var.topic_name : t.topic_name => t }
topic_name = each.value["topic_name"]
partitions_count = each.value["partitions_count"]
rest_endpoint = confluent_kafka_cluster.dedicated.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
}
Variable declared as:
variable "topic_name" {
type = list(map(string))
default = [{
"topic_name" = "default_topic"
}]
}
And finally executing it through DEV.tfvars file:
topic_name = [
  {
    topic_name       = "avro-topic-1"
    partitions_count = "6"
  },
  {
    topic_name       = "json-topic-1"
    partitions_count = "8"
  },
]
The above code execution works fine and I am able to create and delete multiple resources. I want to modify it further and add a flag/toggle to create or delete a resource.
Example as shown below:
topic_name = [
  {
    topic_name       = "avro-topic-1"
    partitions_count = "6"
    enable           = true # this flag will create the resource
  },
  {
    topic_name       = "json-topic-1"
    partitions_count = "8"
    enable           = false # this flag will delete the resource
  },
]
Kindly suggest how this can be achieved, and whether there is a different approach to follow.
As mentioned in my comment, I think this can be achieved with the following change:
resource "confluent_kafka_topic" "topic" {
for_each = { for t in var.topic_name : t.topic_name => t if t.enable }
kafka_cluster {
id = confluent_kafka_cluster.dedicated.id
}
topic_name = each.value["topic_name"]
partitions_count = each.value["partitions_count"]
rest_endpoint = confluent_kafka_cluster.dedicated.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
}
Additionally, for_each should probably be at the top of the resource block so it is immediately visible to the reader. The if t.enable part ensures that for_each creates a resource only for entries that have enable = true.
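One caveat worth hedging (my own assumption, not part of the question): since the variable is typed list(map(string)), every value, including enable, is coerced to a string, and the key may be absent from some entries. A defensive variant of the filter could default and convert it, for example:
resource "confluent_kafka_topic" "topic" {
  # Sketch: default a missing "enable" key to "true" and coerce the string
  # "true"/"false" back to a bool before filtering.
  for_each = {
    for t in var.topic_name : t.topic_name => t
    if tobool(lookup(t, "enable", "true"))
  }
  # ... remaining arguments as in the block above ...
}
Alternatively, on newer Terraform versions, declaring the variable as a list of objects with an optional bool attribute avoids the string coercion entirely.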

Isolation between kubernetes.admission policies in OPA

I use the vanilla Open Policy Agent as a deployment on Kubernetes for handling admission webhooks.
The behavior when multiple policies are evaluated is not clear to me; see this example:
## policy-1.rego
package kubernetes.admission

check_namespace {
    # evaluates to true
    namespaces := {"namespace1"}
    namespaces[input.request.namespace]
}

check_user {
    # evaluates to false
    users := {"user1"}
    users[input.request.userInfo.username]
}

allow["yes - user1 and namespace1"] {
    check_namespace
    check_user
}
.
## policy-2.rego
package kubernetes.admission

check_namespace {
    # evaluates to false
    namespaces := {"namespace2"}
    namespaces[input.request.namespace]
}

check_user {
    # evaluates to true
    users := {"user2"}
    users[input.request.userInfo.username]
}

allow["yes - user2 and namespace2"] {
    check_namespace
    check_user
}
.
## main.rego
package system

import data.kubernetes.admission

main = {
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "response": response,
}

default uid = ""

uid = input.request.uid

response = {
    "allowed": true,
    "uid": uid,
} {
    reason = concat(", ", admission.allow)
    reason != ""
} else = {"allowed": false, "uid": uid}
.
## example input
{
    "apiVersion": "admission.k8s.io/v1beta1",
    "kind": "AdmissionReview",
    "request": {
        "namespace": "namespace1",
        "userInfo": {
            "username": "user2"
        }
    }
}
.
## Results
"allow": [
    "yes - user1 and namespace1",
    "yes - user2 and namespace2"
]
It seems that all of my policies are being evaluated as if they were one flat file, but I would expect each policy to be evaluated independently of the others.
What am I missing here?
Files don't really mean anything to OPA, but packages do. Since both of your policies are defined in the kubernetes.admission package, they'll essentially be appended together as one. This works in your case only because one of check_user and check_namespace respectively evaluates to undefined given your input. If they hadn't, you would see an error about a conflict, since complete rules can't evaluate to different results (i.e. allow can't be both true and false).
If you'd rather use a separate package per policy, like, say, kubernetes.admission.policy1 and kubernetes.admission.policy2, this would not be a concern. You'd need to update your main policy to collect an aggregate of the allow rules from all of your policies though. Something like:
reason = concat(", ", [a | a := data.kubernetes.admission[policy].allow[_]])
This would iterate over all the sub-packages in kubernetes.admission and collect the allow rule result from each. This pattern is called dynamic policy composition, and I wrote a longer text on the topic here.
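As a rough sketch of that layout (file and package names assumed, not from the question), each policy keeps its own copies of the helper rules without colliding:
## policy-1.rego (sketch)
package kubernetes.admission.policy1

check_namespace {
    namespaces := {"namespace1"}
    namespaces[input.request.namespace]
}

check_user {
    users := {"user1"}
    users[input.request.userInfo.username]
}

allow["yes - user1 and namespace1"] {
    check_namespace
    check_user
}
policy-2.rego would declare package kubernetes.admission.policy2 in the same way, and the comprehension above would then pick up the allow sets from both sub-packages.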
(As a side note, you probably want to aggregate deny rules rather than allow. As far as I know, clients like kubectl won't print the reason from the response unless the request is actually denied... and it's generally less useful to know why something succeeded than why it failed. You'll still have the OPA decision logs to consult if you want to know more about why a request succeeded or failed later.)
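A minimal sketch of how main.rego's response rule could be rewritten to aggregate deny reasons instead (again assuming the per-policy package layout, and reusing the uid rule from main.rego above):
## main.rego (sketch)
# Collect deny reasons from every policy sub-package.
deny_reasons = [d | d := data.kubernetes.admission[policy].deny[_]]

response = {
    "allowed": false,
    "uid": uid,
    "status": {"message": concat(", ", deny_reasons)},
} {
    count(deny_reasons) > 0
} else = {"allowed": true, "uid": uid}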

Rundeck stop running steps based on global variable

I have a Rundeck job that executes multiple steps, each of which are Job References to other small jobs. The first step selects a server to upgrade, and sets a global variable with the server name. The remaining steps perform upgrade tasks. It is possible though for the first step to return NONE as the server name, and if that's the case I would like to halt execution right there without running the remaining steps, and I'd like the whole job to be marked as Successful.
I could just make that first job exit with an error code, but then the whole job looks failed, and it looks like there is something wrong with it, even though it successfully ran and found there was nothing to upgrade.
Any ideas? I'm finding "use a flow control step" everywhere, but I can't see how to make that work for my use case.
The best way to create complex workflows depending on some output value is to use the Ruleset Strategy (Rundeck Enterprise). Take a look at this.
On the community version you can save the result of the first step on a key-value variable and do some "script-fu" in the following steps:
Step 1: print the status and save it on a data variable using the key-value data log filter.
Steps 2,3,4: capture the key-value data and then the step can continue or not.
I made an example easy to import to your instance for testing:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  options:
  - enforced: true
    name: opt1
    required: true
    value: 'true'
    values:
    - 'true'
    - 'false'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "url=${option.opt1}"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            name: result
            regex: .*=\s*(.+)$
          type: key-value-data
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "#data.result#" = "true" ]; then
          echo "step two"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "#data.result#" = "true" ]; then
          echo "step three"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "#data.result#" = "true" ]; then
          echo "step four"
        fi
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
MegaDrive68k's answer is the best you can do with the basic open source version, or if you have the Enterprise version.
But you can also create your own plugin, or fork an existing one.
That's what I did with the official flow control plugin, adding conditions to it.
You can fork that plugin and add two new @PluginProperty fields in the Java code (each adds a new field to the plugin's parameters in the Rundeck interface) and then compare their values.
Example:
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
Comparison of string values (which is your case):
if (value1.equals(value2)) { ... }
Comparison of numeric values (the properties arrive as strings, so parse them first):
if (Integer.parseInt(value1) == Integer.parseInt(value2)) { ... }
If you want to stop the job as successful (this stops only the current job, not the parent):
context.getFlowControl().Halt(true);
If you want to stop the job with a failed status:
context.getFlowControl().Halt(false);
If you want to stop the job with a customized status:
context.getFlowControl().Halt("MY CUSTOM STATUS");
And finally, if you want to continue and not stop:
context.getFlowControl().Continue();
So a complete example (add this to your public class):
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;

@Override
public void executeStep(final PluginStepContext context, final Map<String, Object> configuration)
        throws StepException {
    if (value1.equals(value2)) {
        // Halt the current job without marking it as failed
        context.getFlowControl().Halt(true);
    } else {
        // Continue to the next step
        context.getFlowControl().Continue();
    }
}
Then build your jar file and place it in the libext folder.
Now you can add your custom step. Put your global variable in the first field and "NONE" in the second field.
If the global variable contains "NONE", the job stops successfully at this step.
If you call a job containing this step from another (parent) job, the parent job continues.
If you want, you can use this forked plugin, which already includes these modifications.

Terraform 0.14: validate a variable based on the value of another variable

Is there a way of implementing logic where I check a condition on both variables, i.e. env and subscription_id? It should stop execution for the dev env and continue for stg and prod. I was trying the code below:
locals {
  validate_env_code_cnd = var.env == "dev" && var.subscription_id == "XXX"
  validate_env_code_msg = "The environment should not dev for given sub"
  validate_env_code_chk = regex(
    "^${local.validate_env_code_msg}$",
    (!local.validate_env_code_cnd
      ? local.validate_env_code_msg
      : ""))
}
I am getting an error like:
Error: Error in function call

  on vars.tf line 20, in locals:
  20: validate_env_code_chk = regex("^${local.validate_env_code_msg}$",
        (!local.validate_env_code_cnd ? local.validate_env_code_msg : "") )
    |----------------
    | local.validate_env_code_cnd is true
    | local.validate_env_code_msg is "The dev environment not allowed for given sub"

Call to function "regex" failed: pattern did not match any part of the given string.
The reason it's not working is that validate_env_code_cnd returns true or false, but you want the actual values. This should work for your use case:
variable "env" {
default = "dev"
}
variable "subscription_id" {
default = "XXX"
}
locals {
validate_env_code_msg = "The environment should not dev for given sub"
validate_env_code_chk = length(regexall(var.env, local.validate_env_code_msg)) > 0 && length(regexall(var.subscription_id, local.validate_env_code_msg)) > 0
validate_env_code_updated = var.env == "dev" && var.subscription_id == "XXX" ? local.validate_env_code_msg : ""
}
output "test" {
value = local.validate_env_code_chk
}
output "test_updated" {
value = local.validate_env_code_updated
}
I am returning true or false here, but you can return anything, or add further checks and continue.
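If the goal is instead to make the plan fail outright for the invalid combination (an alternative sketch, not part of the answer above), a common Terraform 0.14 trick is to invert the original regex so the pattern only matches the empty string; the plan then errors out and the message shows up in the error output:
locals {
  # Sketch: regex("^$", ...) matches only the empty string, so passing the
  # message when the combination is invalid forces a plan-time error.
  validate_env_code_chk = regex(
    "^$",
    var.env == "dev" && var.subscription_id == "XXX" ? local.validate_env_code_msg : "")
}
(Cross-variable validation blocks weren't available in Terraform 0.14, which is why this workaround pattern exists.)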

How to round trip ruamel.yaml strings like "on"

When using ruamel.yaml to round-trip some YAML I see the following issue. Given this input:
root:
  matchers:
    - select: "response.body#state"
      test: all
      expected: "on"
I see this output:
root:
  matchers:
  - select: response.body#state
    test: all
    expected: on
Note that in YAML, on parses as a boolean true value while off parses as false.
The following code is used to read/write:
# Use the default (round-trip) settings.
yaml = YAML()
if args.source == '-':
src = sys.stdin
else:
src = open(args.source)
doc = yaml.load(src)
process(args.tag, set(args.keep.split(',')), doc)
if args.destination == '-':
dest = sys.stdout
else:
dest = open(args.destination, 'w')
yaml.dump(doc, dest)
The process function is not modifying values. It only removes things with a special tag in the input after crawling the structure.
How can I get the output to be a string rather than a boolean?
You write that:
Note that in YAML, on parses as a boolean true value while off parses as false.
That statement is not true (or better: has not been true for ten years). If you have an unquoted on in your YAML, like in your output, that is obviously not the case when using ruamel.yaml:
import sys
import ruamel.yaml

yaml_str = """\
root:
  matchers:
  - select: response.body#state
    test: all
    expected: on
"""

yaml = ruamel.yaml.YAML()
data = yaml.load(yaml_str)
expected = data['root']['matchers'][0]['expected']
print(type(expected), repr(expected))
which gives:
<class 'str'> 'on'
This is because in the YAML 1.2 spec on/off/yes/no are no longer mentioned as having the same meaning as true resp. false. They are mentioned in the YAML 1.1 spec, but that was superseded in 2009. Unfortunately there are YAML libraries out in the wild that have not been updated since then.
What is actually happening is that the superfluous quotes in your input are automatically discarded by the round-trip process. You can also see that happen for the value "response.body#state". Although the character that starts comments (#) is included there, to actually start a comment that character has to be preceded by whitespace, and since it isn't, the quotes are not necessary.
So your output is fine, but if you are in the unfortunate situation where you have to deal with other programs relying on outdated YAML 1.1, then you can e.g. specify that you want to preserve your quotes on round-trip:
yaml_str = """\
root:
matchers:
- select: "response.body#state"
test: all
expected: "on"
"""
yaml = ruamel.yaml.YAML()
yaml.indent(sequence=4, offset=2)
yaml.preserve_quotes = True
data = yaml.load(yaml_str)
yaml.dump(data, sys.stdout)
as this gives your exact input:
root:
  matchers:
    - select: "response.body#state"
      test: all
      expected: "on"
However, maybe the better option would be to specify that your YAML is, and has to conform to, the YAML 1.1 specification, making your intentions, and the output document, explicit:
yaml_str = """\
root:
matchers:
- select: response.body#state
test: all
expected: on
"""
yaml_in = ruamel.yaml.YAML()
yaml_out = ruamel.yaml.YAML()
yaml_out.indent(sequence=4, offset=2)
yaml_out.version = (1, 1)
data = yaml_in.load(yaml_str)
yaml_out.dump(data, sys.stdout)
Notice that the "unquoted" YAML 1.2 input gives output where on is quoted:
%YAML 1.1
---
root:
  matchers:
    - select: response.body#state
      test: all
      expected: 'on'