For one of my requirements, I need to call a specific task based on whether a PACKAGECONFIG variable is defined in another recipe or not.
For example:
We have a recipe under recipes-crypto where, in the .bb file, we have:
PACKAGECONFIG[veritysetup] = "--enable-veritysetup,--disable-veritysetup"
BBCLASSEXTEND = "native nativesdk"
Then, in my meta-qti-bsp/classes, I have qimage.bbclass, where I wanted to do something like this:
if ${@bb.utils.contains('PACKAGECONFIG', 'veritysetup', 'true', 'false', d)}; then
    # Call some function
fi
But it gives errors:
ERROR: ParseError at /local/mnt/workspace/PINTU/WORK/Y2021/NAD-CORE-WORK/NEW_C10_30Nov/poky/meta-qti-bsp/classes/qimage.bbclass:102: unparsed line: 'if ${@bb.utils.contains('PACKAGECONFIG', 'veritysetup', 'true', 'false', d)}; then'
How can I make the veritysetup variable get recognised in my class file?
I saw some examples and added this at the top:
PACKAGECONFIG_append_class-native = " veritysetup"
But with this too, it gives the same error.
I am using the veritysetup command only at build time, so I want to execute it if and only if this PACKAGECONFIG variable is defined.
What is the best way to do it?
veritysetup is not a value of PACKAGECONFIG, but a flag. PACKAGECONFIG has many flags, and each flag has its own value.
For more information about variable flags, check this link.
So, here is an example of how to check if that flag is activated:
verity-example.bb
LICENSE = "CLOSED"
PACKAGECONFIG[veritysetup] = "--enable-veritysetup,--disable-veritysetup"
do_check_verity(){
    if [ -n "${@d.getVarFlag('PACKAGECONFIG', 'veritysetup', False) or ''}" ]; then
        bbwarn "veritysetup is activated with value: ${@d.getVarFlags('PACKAGECONFIG').get('veritysetup')}"
    else
        bbwarn "veritysetup is not activated."
    fi
}
addtask do_check_verity
If you run:
bitbake verity-example -c check_verity
You will get the warning:
WARNING: verity-example-1.0-r0 do_check_verity: veritysetup is activated
with value: --enable-veritysetup,--disable-veritysetup
Actually, I did it this way and it worked for me.
The following is already defined in recipes-crypto like this:
PACKAGECONFIG[veritysetup] = "--enable-veritysetup,--disable-veritysetup"
Now, in our .bbclass, I just added this:
DEPENDS += "cryptsetup-native openssl-native"
PACKAGECONFIG_append = " veritysetup"
**==> This is the main part: how we can check whether a PACKAGECONFIG variable is enabled elsewhere or not**
Then I can check the condition like this:
if not bb.utils.contains('PACKAGECONFIG', 'veritysetup', True, False, d):
    # do something
else:
    # do something else
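For completeness, here is a minimal sketch of how that check could sit inside a Python task in the .bbclass; the task name and the bb.note messages are illustrative, not taken from the original setup:
python do_make_verity() {
    # bb.utils.contains returns its third/fourth argument, here the Python booleans True/False
    if bb.utils.contains('PACKAGECONFIG', 'veritysetup', True, False, d):
        bb.note("veritysetup is enabled; running the verity-specific steps")
        # call the verity-specific function here
    else:
        bb.note("veritysetup is not enabled; skipping the verity-specific steps")
}
addtask do_make_verity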
I have a Rundeck job that executes multiple steps, each of which is a Job Reference to another small job. The first step selects a server to upgrade and sets a global variable with the server name. The remaining steps perform upgrade tasks. It is possible, though, for the first step to return NONE as the server name, and if that's the case I would like to halt execution right there without running the remaining steps, and I'd like the whole job to be marked as Successful.
I could just make that first job exit with an error code, but then the whole job looks failed, and it looks like there is something wrong with it, even though it successfully ran and found there was nothing to upgrade.
Any ideas? I'm finding "use a flow control step" everywhere, but I can't see how to make that work for my use case.
The best way to create complex workflows depending on some output value is to use the Ruleset Strategy (Rundeck Enterprise). Take a look at this.
On the community version you can save the result of the first step on a key-value variable and do some "script-fu" in the following steps:
Step 1: print the status and save it on a data variable using the key-value data log filter.
Steps 2,3,4: capture the key-value data and then the step can continue or not.
I made an example that is easy to import into your instance for testing:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  options:
  - enforced: true
    name: opt1
    required: true
    value: 'true'
    values:
    - 'true'
    - 'false'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "url=${option.opt1}"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            name: result
            regex: .*=\s*(.+)$
          type: key-value-data
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step two"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step three"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step four"
        fi
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
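For the original use case (a first step that may return NONE), each later script step can also just exit early when the captured value is NONE. A sketch only, reusing the result variable captured by the log filter in the job above:
#!/bin/bash
# skip the upgrade work when the first step found nothing to upgrade
if [ "@data.result@" = "NONE" ]; then
  echo "No server to upgrade, skipping this step."
  exit 0
fi
# ...upgrade work for the selected server goes here...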
MegaDrive68k's answer is the best you can do with the basic open source version, or if you have the Enterprise version.
But you can also create your own plugin, or make a fork of an existing one.
That's what I did with the official Flow Control plugin: I added conditions.
You can fork this plugin and add two new @PluginProperty annotations in the Java code (these add two new fields to the plugin's parameters in the Rundeck interface) and compare their values.
Example:
#PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;
#PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
Comparison of Strings values (in your case it is)
if (value1.equals(value2)) {...}
Comparison of Numeric values
if (value1 == value2) {...}
If you want to stop the job with a successful status (it does not stop the parent job, just the current one):
context.getFlowControl().Halt(true);
If you want to stop the job with a failed status:
context.getFlowControl().Halt(false);
If you want to stop the job with a customized status:
context.getFlowControl().Halt("MY CUSTOM STATUS");
And finally, if you want to continue and not stop:
context.getFlowControl().Continue();
So a complete example (add this to your public class):
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;

@Override
public void executeStep(final PluginStepContext context, final Map<String, Object> configuration)
        throws StepException {
    if (value1.equals(value2)) {
        // Halt the current job without marking it failed
        context.getFlowControl().Halt(true);
    } else {
        // Continue
        context.getFlowControl().Continue();
    }
}
Then create your jar file and place it in the libext folder.
Now you can add your custom step. Put your global var in the first field and "NONE" in the second field.
If the global var contains "NONE", the job stops with a successful status at this step.
If you call a job with this step from another (parent) job, the parent job continues.
If you want, you can use this fork of the plugin, which already includes these modifications.
I am trying to pass Firebase environment variables for deployment with Now.
I have encoded these variables manually with base64 and added them to Now with the following command:
now secrets add firebase_api_key_dev "mybase64string"
The encoded string was placed within quotation marks ("").
These are in my CLI tool and I can see them all using the list command:
now secrets ls
> 7 secrets found under project-name [499ms]

  name                        created
  firebase_api_key_dev        6d ago
  firebase_auth_domain_dev    6d ago
  ...
In my firebase config, I am using the following code:
const config = {
  apiKey: Buffer.from(process.env.FIREBASE_API_KEY, "base64").toString(),
  authDomain: Buffer.from(process.env.FIREBASE_AUTH_DOMAIN, "base64").toString(),
  ...
}
In my now.json file I have the following code:
{
  "env": {
    "FIREBASE_API_KEY": "@firebase_api_key_dev",
    "FIREBASE_AUTH_DOMAIN": "@firebase_auth_domain_dev",
    ...
  }
}
Everything works fine in my local environment (when I run next), as I also have a .env file with these variables, yet when I deploy my code, I get the following error in my Now console:
TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined
Does this indicate that my environment variables are not being read? What's the issue here? It looks like they don't exist at all.
The solution was to replace my existing now.json with:
{
  "build": {
    "env": {
      "FIREBASE_API_KEY": "@firebase_api_key",
      "FIREBASE_AUTH_DOMAIN": "@firebase_auth_domain",
      "FIREBASE_DATABASE_URL": "@firebase_database_url",
      "FIREBASE_PROJECT_ID": "@firebase_project_id",
      "FIREBASE_STORAGE_BUCKET": "@firebase_storage_bucket",
      "FIREBASE_MESSAGING_SENDER_ID": "@firebase_messaging_sender_id",
      "FIREBASE_APP_ID": "@firebase_app_id",
      "FIREBASE_API_KEY_DEV": "@firebase_api_key_dev",
      "FIREBASE_AUTH_DOMAIN_DEV": "@firebase_auth_domain_dev",
      "FIREBASE_DATABASE_URL_DEV": "@firebase_database_url_dev",
      "FIREBASE_PROJECT_ID_DEV": "@firebase_project_id_dev",
      "FIREBASE_STORAGE_BUCKET_DEV": "@firebase_storage_bucket_dev",
      "FIREBASE_MESSAGING_SENDER_ID_DEV": "@firebase_messaging_sender_id_dev",
      "FIREBASE_APP_ID_DEV": "@firebase_app_id_dev"
    }
  }
}
I was missing the "build" key.
I had to contact ZEIT support to help me identify this issue.
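For anyone wondering why the "build" key matters: in now.json, entries under "env" are injected while the deployment is running, whereas entries under "build.env" are injected while the project is being built, and Next.js reads process.env during next build. A rough sketch of the two sections side by side (the runtime entry and its secret name are purely illustrative; the build entry reuses a secret from above):
{
  "build": {
    "env": {
      "FIREBASE_API_KEY": "@firebase_api_key_dev"
    }
  },
  "env": {
    "SOME_RUNTIME_ONLY_VAR": "@some_runtime_only_secret"
  }
}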
I have some tests I marked with an appropriate marker. If I run pytest, by default they run, but I would like to skip them by default. The only option I know of is to explicitly say "not marker" at pytest invocation, but I would like them not to run by default unless the marker is explicitly requested on the command line.
A slight modification of the example in Control skipping of tests according to command line option:
# conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    keywordexpr = config.option.keyword
    markexpr = config.option.markexpr
    if keywordexpr or markexpr:
        return  # let pytest handle this

    skip_mymarker = pytest.mark.skip(reason='mymarker not selected')
    for item in items:
        if 'mymarker' in item.keywords:
            item.add_marker(skip_mymarker)
Example tests:
import pytest

def test_not_marked():
    pass

@pytest.mark.mymarker
def test_marked():
    pass
Running the tests with the marker:
$ pytest -v -k mymarker
...
collected 2 items / 1 deselected / 1 selected
test_spam.py::test_marked PASSED
...
Or:
$ pytest -v -m mymarker
...
collected 2 items / 1 deselected / 1 selected
test_spam.py::test_marked PASSED
...
Without the marker:
$ pytest -v
...
collected 2 items
test_spam.py::test_not_marked PASSED
test_spam.py::test_marked SKIPPED
...
Instead of explicitly saying "not marker" at pytest invocation, you can add the following to pytest.ini:
[pytest]
addopts = -m "not marker"
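With that in place, the marked tests can still be run on demand, because a -m given on the command line is parsed after addopts and takes precedence. It can also help to register the marker so pytest does not warn about an unknown mark; using the mymarker name from the example above, a possible pytest.ini would be:
[pytest]
addopts = -m "not mymarker"
markers =
    mymarker: tests skipped by default; select them with -m mymarker
Then a plain pytest run skips them, while pytest -m mymarker runs only the marked tests.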
I am trying to check out code from an SVN repo, for which I am accepting the URL as an argument. I have quoted the URL as shown below because it contains spaces. I also checked the parameter by redirecting $svn_url to a file (shown below). If I pick the URL from the file and pass it as-is on the command line to the given script, it works fine, but somehow when invoked from Puppet, it's not working.
Puppet manifests:
repo_checkout.pp:
define infra::svn::repo_checkout ($svn_url_params) {
  $svn_url = $svn_url_params[svn_url]

  include infra::params
  $repo_checkout_ps = $infra::params::repo_checkout_ps

  file { $repo_checkout_ps:
    ensure => file,
    source => 'puppet:///modules/infra/repo_checkout.ps1',
  }

  util::executeps { 'Checking out repo':
    pspath   => $repo_checkout_ps,
    argument => "\'\"$svn_url\"\'",
  }
}
params.pp:
$repo_checkout_ps = 'c:/scripts/infra/repo_checkout.ps1',
site.pp:
$svn_url_ad = {
  svn_url => 'https:\\\\some_repo.abc.com\svn\dir with space\util',
}

infra::svn::repo_checkout { "Checking out code in C:\build":
  svn_url_params => $svn_url_ad
}
executeps.pp:
define util::executeps ($pspath, $argument) {
  $powershell = 'C:/Windows/System32/WindowsPowerShell/v1.0/powershell.exe -NoProfile -NoLogo -NonInteractive'

  exec { "Executing PS file \"$pspath\" with argument \"$argument\"":
    command => "$powershell -file $pspath $argument",
    timeout => 900,
  }
}
PowerShell code:
$svn_url = $args[0]
Set-Location C:\build
echo "svn co --username user --password xxx --non-interactive '$svn_url'" | Out-File c:\svn_url
svn co --username user --password xxx --non-interactive '$svn_url'
Puppet output on agent node:
Util::Executeps[Checking out repo]/Exec[Executing PS file "c:/scripts/infra/repo_checkout.ps1" with argument "'"https:\\some_repo.abc.com\svn\dir with space\util"'"]/returns: executed successfully
Notice: Applied catalog in 1.83 seconds
Content of c:\svn_url:
'https:\\\\some_repo.abc.com\svn\dir with space\util'
UPDATE: Sorry for the confusion, but I was trying out several permutations and combinations, and in doing so I forgot to mention that when $svn_url contains a backslash (\), it does NOT work on the command line either if I copy the SVN URL from the text file where I am redirecting the echo output.
Based on @Ansgar's suggestion, I changed '$svn_url' to "$svn_url" in the PowerShell code, but the output in the text file then contained the ' quote twice around the URL. So I changed the argument parameter from "\'\"$svn_url\"\'" to "\"$svn_url\"". Now the output file had only a single quote around the URL. I copied only the URL (along with the single quotes around it) from the output file and tried passing it to the PowerShell script. I now get the following error:
svn: E020024: Error resolving case of 'https:\\some_repo.abc.com\svn\dir with space\util'
Another thing to note is that if I change the backslashes in the URL to forward slashes, it works fine on the command line. Invoking it from Puppet still doesn't work.
Posting the final configuration that worked for me, based on @AnsgarWiechers' suggestion.
[tom@pe-server] cat repo_checkout.pp
define infra::svn::repo_checkout ($svn_url_params) {
  $svn_url = $svn_url_params[svn_url]
  ...
  ...
  util::executeps { 'Checking out repo':
    pspath   => $repo_checkout_ps,
    argument => "\"$svn_url\"",
  }
}

[tom@pe-server] cat repo_checkout.ps1
$svn_url = $args[0]
Set-Location C:\build
svn co --username user --password xxx --non-interactive "$svn_url"

[tom@pe-server] cat params.pp
$repo_checkout_ps = 'c:/scripts/infra/repo_checkout.ps1',

[tom@pe-server] cat site.pp
$svn_url_ad = {
  svn_url => 'https://some_repo.abc.com/svn/dir with space/util',
}

infra::svn::repo_checkout { "Checking out code in C:\build":
  svn_url_params => $svn_url_ad
}
Thanks a lot @AnsgarWiechers! :)
Note:
In site.pp: used forward slashes (/) when specifying svn_url.
In repo_checkout.ps1: changed '$svn_url' to "$svn_url".
In repo_checkout.pp: changed the double-nested (' and ") quoting in argument to single (") nesting, i.e. from "\'\"$svn_url\"\'" to "\"$svn_url\"".
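As a side note, the PowerShell detail behind the '$svn_url' to "$svn_url" change is that variables are not expanded inside single quotes, only inside double quotes. A quick illustration (the value is just an example):
$svn_url = 'https://some_repo.abc.com/svn/dir with space/util'
Write-Output 'single quotes: $svn_url'   # prints the literal text $svn_url
Write-Output "double quotes: $svn_url"   # prints the expanded URL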
So I'm using r.js to build a bunch of my files, some of which are CoffeeScript. I am using the RequireJS plugin require-cs to handle this.
Here is a look at my Require.js config, a la rjs:
rjs.optimize({
    baseUrl: SRC_PATH,
    include: channelMap[channel],
    optimize: 'none',
    stubModules: ['cs', 'tpl', 'less', 'text'],
    exclude: ['normalize', 'coffee-script', 'underscore'],
    CoffeeScript: {
        header: false,
        // since we use AMD, there's no need for IIFE
        bare: true
    },
    separateCSS: true,
    skipModuleInsertion: true,
    // If something needs to be present for tests too and not only for
    // the build step, then add it to tools/karma-amd.js instead
    paths: _.extend({
        'less-builder': 'vendor/require-less/less-builder',
        'normalize': 'vendor/require-less/normalize'
    }, rjsPaths),
    wrap: true,
    less: {
        paths: [path.join(BASE_SHOP_FOLDER, 'static', 'zalando', 'css', channel)]
    },
    out: path.join(BUILD_PATH, channel, BUILD_BASE_FILE_NAME + '.js')
}, function () {
    // this needs to be async because the less builder uses
    // process.nextTick() to write the file
    process.nextTick(done);
});
Even the simplest .coffee file seems to fail violently. E.g.:
define [], ->
  foo = "hello world"
  return foo
throws the following error:
the variable "foo" can't be assigned with undefined because it has not been declared before
foo = "hello world"
^^^
When I replace require-cs's coffee-script.js with the older version 1.6.3, everything works just fine.
Your code compiles, BTW. Go to the CoffeeScript website, click on TRY COFFEESCRIPT, and you will see that it is valid code.
From the define [], () -> code..., I assume you are using the CoffeeScript plugin with require.js. I am ready to bet your issue is in the require.js configuration (which should be your main.js file, or whatever you named it), since the error you get looks oddly like the JavaScript interpreter trying to run the invalid code you wrote (invalid for JavaScript, that is :)). Meaning, your plugin is not there at all.
If you give me your require configuration maybe I can edit this answer and help you more.
Cheers!
EDIT
I see you edited your question, but you provided the wrong file. What you showed me was the r.js optimizer configuration, instead of the main.js which specifies how cs.js and coffee-script.js are loaded. The error might be in your optimizer config, but I can't know without seeing the other one.
To reiterate: show me the entry point of your program, the data-main that is loaded in your HTML.
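For reference, a minimal main.js that wires up the require-cs plugin usually looks roughly like this (the paths and module names are illustrative, not taken from the question):
// main.js (illustrative sketch)
require.config({
    paths: {
        cs: 'vendor/require-cs/cs',
        'coffee-script': 'vendor/require-cs/coffee-script'
    }
});

// CoffeeScript modules are then loaded through the plugin prefix:
require(['cs!app/some-module'], function (someModule) {
    console.log(someModule);
});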
I was unable to recreate the issue:
$ cat ./etc/temp1.coffee
define [], ->
  foo = "hello world"
  return foo
$ coffee --version
CoffeeScript version 1.7.1
$ which coffee
/home/dev/.nvm/v0.10.23/bin/coffee
$ coffee -cp ./etc/temp1.coffee
// Generated by CoffeeScript 1.7.1
(function() {
  define([], function() {
    var foo;
    foo = "hello world";
    return foo;
  });
}).call(this);
$ coffee -cpb ./etc/temp1.coffee
// Generated by CoffeeScript 1.7.1
define([], function() {
  var foo;
  foo = "hello world";
  return foo;
});
Turns out the problem was with my copy of 1.7.1: someone had beautified it and broke everything. Everything works as advertised when I go out of my way to get coffee-script.js from http://coffeescript.org/extras/coffee-script.js