Add a variable to the Yocto SDK environment setup script

How do I add an environment variable to the Yocto SDK environment setup script?
export CODE_ARGS="${SAMPLE_ARGS}"
I want to add this line to the SDK environment setup script.

These steps might depend on which Yocto release you are using, but the general idea should be the same.
Steps for Yocto kirkstone:
Looking at the end of the environment setup script, you should see something like:
# Append environment subscripts
if [ -d "$OECORE_TARGET_SYSROOT/environment-setup.d" ]; then
    for envfile in $OECORE_TARGET_SYSROOT/environment-setup.d/*.sh; do
        . $envfile
    done
fi
if [ -d "$OECORE_NATIVE_SYSROOT/environment-setup.d" ]; then
    for envfile in $OECORE_NATIVE_SYSROOT/environment-setup.d/*.sh; do
        . $envfile
    done
fi
and, for example, the openssl recipe leverages this functionality:
do_install:append:class-nativesdk () {
    mkdir -p ${D}${SDKPATHNATIVE}/environment-setup.d
    install -m 644 ${WORKDIR}/environment.d-openssl.sh ${D}${SDKPATHNATIVE}/environment-setup.d/openssl.sh
    sed 's|/usr/lib/ssl/|/usr/lib/ssl-3/|g' -i ${D}${SDKPATHNATIVE}/environment-setup.d/openssl.sh
}
So install shell scripts either under SDKPATHNATIVE or SDKPATH, depending on whether the variable is used for all targets (SDKPATHNATIVE) or for a single target (SDKPATH).
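As a concrete sketch (untested; the recipe name nativesdk-sdk-extra-env, the script name sdk-extra-env.sh and the layer path are made up for illustration), a small nativesdk recipe in your own layer could install a script that exports the variable from the question:

# recipes-core/sdk-extra-env/nativesdk-sdk-extra-env.bb (hypothetical)
SUMMARY = "Extra environment settings for the SDK"
LICENSE = "MIT"
# Checksum of the MIT text shipped in oe-core; verify against your release.
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://sdk-extra-env.sh"
S = "${WORKDIR}"

inherit nativesdk

do_install() {
    install -d ${D}${SDKPATHNATIVE}/environment-setup.d
    install -m 0644 ${WORKDIR}/sdk-extra-env.sh ${D}${SDKPATHNATIVE}/environment-setup.d/sdk-extra-env.sh
}

FILES:${PN} += "${SDKPATHNATIVE}/environment-setup.d"

where sdk-extra-env.sh contains the line from the question:

export CODE_ARGS="${SAMPLE_ARGS}"

The package then needs to be pulled into the SDK, for example with TOOLCHAIN_HOST_TASK:append = " nativesdk-sdk-extra-env" in your SDK/image configuration.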

CPack: restrict which OS version a package can be installed on

I create packages for several OS versions, including RHEL7 & RHEL8 (or, mostly equivalently, CentOS7 & 8).
It is possible to install a package built for .el7. on .el8., but it will typically not work (for example due to undefined symbols etc.).
Ideally I would like to make the installation fail with an error message like "this package is only intended for RHEL7/CentOS7".
How can I do this?
More specifically how can I do this with CPack/CMake?
Bonus points if you can also give an explanation suitable for Debian versions.
Here are some ideas I have so far:
Use dist tags somehow, see:
https://serverfault.com/questions/283330/rpm-spec-conditional-require-based-of-distro-version
Check uname -r at install time in a pre-install script
Part of that answer is here:
How to check os version in rpmbuild spec file
https://unix.stackexchange.com/questions/9296/how-can-i-specify-os-conditional-build-requirements-in-an-rpm-spec-file
I'm not quite sure how to do that using CPack. I do not want to generate a custom spec file, as the build machinery is already complex enough.
Another option would be to add a Requires: on a package that only exists on RHEL7 but not RHEL8, or vice versa. That package would also need to exist on CentOS and not change in a way that would make the installation fail if it is upgraded. Does anyone know a suitable package to depend on?
For example:
>rpm -q --whatprovides /etc/redhat-release
centos-release-8.2-2.2004.0.1.el8.x86_64
This looks like a good candidate, but if I add a dependency on centos-release-8.2 and they later upgrade to centos-release-8.3, or use Red Hat instead, this will not work.
I did this before by having a stanza in %pre that stopped it:
if [ -n "%{dist}" ]; then
PKG_VER=`echo %{dist} | perl -ne '/el(\d)/ && print $1'`
THIS_VER=`perl -ne '/release (\d)/ && print $1' /etc/redhat-release`
if [ -n "${PKG_VER}" -a -n "${THIS_VER}" ]; then
if [ ${PKG_VER} -ne ${THIS_VER} ]; then
for i in `seq 20`; do echo ""; done
echo "WARNING: This RPM is for CentOS${PKG_VER}, but you seem to be running CentOS${THIS_VER}" >&2
echo "You might want to uninstall these RPMs immediately and get the CentOS${THIS_VER} version." >&2
for i in `seq 5`; do echo "" >&2; done
fi
fi
fi
Remember: you cannot have any user interaction during RPM installation. You could have it fail instead of warn, though; that's up to you.
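If you want to drive this from CPack rather than a hand-written spec file, the RPM and DEB generators both let you attach pre-install scripts. The following is only a sketch: the packaging/ paths are placeholders, and the version check reuses the /etc/redhat-release parsing idea from above.

# CMakeLists.txt (sketch)
# RPM: this file becomes the package's %pre scriptlet.
set(CPACK_RPM_PRE_INSTALL_SCRIPT_FILE "${CMAKE_CURRENT_SOURCE_DIR}/packaging/preinstall.sh")
# DEB: ship a maintainer 'preinst' script via the control-extra mechanism.
set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA "${CMAKE_CURRENT_SOURCE_DIR}/packaging/preinst")
include(CPack)

where packaging/preinstall.sh could look something like this (EXPECTED_VER would have to be baked in at configure time, e.g. via configure_file(); for the Debian preinst you would check /etc/os-release instead):

#!/bin/sh
# Abort the installation if the running major release does not match.
EXPECTED_VER=7
THIS_VER=$(sed -n 's/.*release \([0-9]*\).*/\1/p' /etc/redhat-release 2>/dev/null)
if [ -n "$THIS_VER" ] && [ "$THIS_VER" -ne "$EXPECTED_VER" ]; then
    echo "This package is only intended for RHEL${EXPECTED_VER}/CentOS${EXPECTED_VER}." >&2
    exit 1
fi

Exiting non-zero from %pre (or from a Debian preinst) makes the installation fail, which is the behaviour asked for in the question.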

How can I install a VSIX file based extension in a remote container via devcontainer.json?

In the context of VS Code Remote Development inside a container I can see that extensions to install can be specified in the devcontainer.json file, as shown in the samples in the vscode-dev-containers repo, like this example:
"extensions": [
"dbaeumer.vscode-eslint"
]
I have a VSIX file based extension locally that I'd also like to specify here so that it gets installed into the container. But I'm not sure how best to declare it here, path-wise.
I looked in the output of the container build step and noticed that the local project directory is mounted into the container (linebreaks added for readability):
Run: docker run -a STDOUT -a STDERR -p 127.0.0.1:4004:4004
-v /Users/dj/local/projects/test1:/workspaces/test1
-v /Users/dj/.gitconfig:/root/.gitconfig
-l vsch.quality=insider
-l vsch.local.folder=/Users/dj/local/projects/test1
-l vsch.remote.devPort=9753
vsc-test1-304320e2e9560b5557f6f7871801047f
/bin/sh -c echo Container started ; while sleep 1; do :; done
so I placed my VSIX file in the root of the project (/Users/dj/local/projects/test1/vscode-cds-1.1.4.vsix) and this was then available in the container. Adding the fully qualified path to this file in the container to the extensions property thus:
"extensions": [
"dbaeumer.vscode-eslint",
"/workspaces/test1/vscode-cds-1.1.4.vsix"
]
did indeed result in a successful installation of this extension into the container:
Installing extensions...
Installing extension 'dbaeumer.vscode-eslint' v1.8.2...
Extension 'dbaeumer.vscode-eslint' v1.8.2 was successfully installed.
Extension 'vscode-cds-1.1.4.vsix' was successfully installed. <----
Great!
But this hack requires me to hard code the name of the directory in which the .devcontainer/ directory is (i.e. test1/), which of course I want to avoid.
Is there a way of doing this without hard coding the whole project directory name in the devcontainer.json file?
Thank you.
About 10 mins after asking this question I thought of a different approach which is perhaps still a hack, but it avoids having to use the project directory name. I put the VSIX file in the .devcontainer/ directory, and then added a COPY command to the end of my Dockerfile thus:
COPY vscode-cds-1.1.4.vsix /tmp/
and could then specify this neutral path in the extensions property thus:
"extensions": [
"dbaeumer.vscode-eslint",
"/tmp/vscode-cds-1.1.4.vsix"
]
This works. Wondering if there's a better way though.
devcontainer.json has access to the following variables:
https://containers.dev/implementors/json_reference/#variables-in-devcontainerjson
So in your example it would be:
"extensions": [
"dbaeumer.vscode-eslint",
"${containerWorkspaceFolder}/vscode-cds-1.1.4.vsix"
]
Additional variable support was added in this PR
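Putting it together, a minimal devcontainer.json along these lines (a sketch reusing the file names from the question; the "build" section depends on how your container is actually defined) avoids hard coding the project directory name:

{
    // Comments are allowed; devcontainer.json is parsed as JSON with comments.
    "name": "test1",
    "build": { "dockerfile": "Dockerfile" },
    "extensions": [
        "dbaeumer.vscode-eslint",
        "${containerWorkspaceFolder}/vscode-cds-1.1.4.vsix"
    ]
}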

scons: how to define command/target that only takes place during 'scons -c'?

Before building the targets I wish to create some directory structures. I know I can use:
env = Environment()
env.Execute('mkdir -p xxx')
But this will cause 'mkdir -p' to be executed even when I do a cleanup:
scons -c
The env.Execute call still gets run.
I wish there were some command or target that only takes place when I execute 'scons -c'.
How to achieve that?
Thanks.
The -c option is a built-in SCons option, and you can check whether it was set with GetOption('clean').
You could then call the commands conditionally based on the value of the 'clean' option. Here is an example:
env = Environment()
if not GetOption('clean'):
    env.Execute('mkdir -p xxx')
else:
    env.Execute('echo "Cleaning up..."')
More info on the other built in options can be found here:
https://scons.org/doc/production/HTML/scons-user.html#sect-command-line-option-strings
https://scons.org/doc/production/HTML/scons-man.html#options
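As a variation on the same idea, SCons also provides portable action factories (Mkdir, Delete), so a sketch like the one below (directory name taken from the question) avoids shelling out entirely:

# SConstruct sketch using built-in action factories instead of shell commands.
env = Environment()

if not GetOption('clean'):
    # Mkdir() creates the directory tree, similar to 'mkdir -p'.
    Execute(Mkdir('xxx'))
else:
    # This branch only runs for 'scons -c'.
    Execute(Delete('xxx'))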

Scala project: getting "GC overhead limit exceeded" error when running the sbt test command

I'm new to Scala programming and I'm getting a "GC overhead limit exceeded" error when I execute the sbt test command in a big Scala project. Does anyone know how I can solve this?
I got help from my friends :)
Increase the memory by running sbt with the -mem option, for example:
sbt -mem 2048 test
Other options:
For Mac & Linux user:
if we need to execute this a lot. We can update the .bash_profile file and add below command:
export SBT_OPTS="-Xmx2G"
Another solution (works on Windows as well):
There's also a specific sbtopts file where you can persist this memory setting:
Find the file on Mac/Linux:
/usr/local/etc/sbtopts
Or on Windows:
C:\Program Files (x86)\sbt\conf
and add the configuration below:
# set memory options
#
-mem 2048
Hopefully one of these tips will help someone with this problem.
EDIT:
If anyone is using IntelliJ IDEA like me, you can increase the sbt memory usage via the VM parameters field in the sbt settings, for example:
-Xmx4G
Having a look at the launcher script for running sbt, which on my system resides at /usr/share/sbt/bin/sbt, we see the following:
declare -r sbt_opts_file=".sbtopts"
declare -r etc_sbt_opts_file="/etc/sbt/sbtopts"
declare -r dist_sbt_opts_file="${sbt_home}/conf/sbtopts"
...
# Here we pull in the default settings configuration.
[[ -f "$dist_sbt_opts_file" ]] && set -- $(loadConfigFile "$dist_sbt_opts_file") "$#"
# Here we pull in the global settings configuration.
[[ -f "$etc_sbt_opts_file" ]] && set -- $(loadConfigFile "$etc_sbt_opts_file") "$#"
# Pull in the project-level config file, if it exists.
[[ -f "$sbt_opts_file" ]] && set -- $(loadConfigFile "$sbt_opts_file") "$#"
# Pull in the project-level java config, if it exists.
[[ -f ".jvmopts" ]] && export JAVA_OPTS="$JAVA_OPTS $(loadConfigFile .jvmopts)"
run "$#"
Thus we can put configuration settings in:
.jvmopts
.sbtopts
/etc/sbt/sbtopts
${sbt_home}/conf/sbtopts
For example, the typelevel/cats project uses .jvmopts to set -Xmx3G. Alternatively, we could do:
echo "-mem 2048" >> .sbtopts
Regarding environment variables, sbt -h documents that:
JAVA_OPTS    environment variable, if unset uses "$java_opts"
.jvmopts     if this file exists in the current directory, its contents are appended to JAVA_OPTS
SBT_OPTS     environment variable, if unset uses "$default_sbt_opts"
.sbtopts     if this file exists in the current directory, its contents are prepended to the runner args
For example,
export JAVA_OPTS=-Xmx2G
sbt
should run sbt with 2G of memory.
Note that if you are running tests in a forked JVM, then you can increase memory via the javaOptions setting in build.sbt like so:
Test / fork := true
Test / javaOptions ++= Seq("-Xmx4G")
VisualVM is a useful tool to see what settings were passed to a JVM process when experimenting with different ways of configuring SBT.
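From a plain terminal, something like the following (assuming a JDK's jps is on the PATH) also shows which -Xmx the running sbt JVM actually picked up:

jps -lvm | grep -i xmx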

Removing files with Build Phase?

Is it possible to remove a file using a build phase in Xcode 4, based on whether it is a release or dev build?
If so, has anyone got an example?
I have tried:
if [ "${CONFIGURATION}" = "Debug" ]; then
find "$TARGET_BUILD_DIR" -name '*-live.*' -print0 | xargs -0 rm
fi
This prints:
CopyStringsFile "build/Debug-iphonesimulator/Blue Sky.app/PortalText-live.strings" CDL/PortalText-live.strings
cd "/Users/internet/Desktop/iPhone Template/iPhonePortalTemplate/CDL.Labs"
setenv PATH "/Developer/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
builtin-copyStrings --validate --inputencoding utf-8 --outputencoding binary --outdir "/Users/internet/Desktop/iPhone Template/iPhonePortalTemplate/CDL.Labs/build/Debug-iphonesimulator/Blue Sky.app" -- CDL/PortalText-live.strings
But it doesn't actually remove the file from the bundle.
The only way I've ever had different files is by having a separate target and only including certain files in certain targets.
EDIT WITH AN EXAMPLE
Ok, I've done exactly the same in another project. We had a DefaultProperties.plist file, which was included in the target.
We then had 3 copies of this, NOT included in the target: ProdProperties.plist, TestProperties.plist, and UatProperties.plist.
We built for environments on the command line, using xcodebuild, as it was built using an automated build server (Bamboo).
Prior to executing xcodebuild, we would run this:
cp -vf "./Properties/Environments/${environment}Properties.plist" ./Properties/shared/DefaultProperties.plist
touch Properties/shared/DefaultProperties.plist
with ${environment} being passed into the script.
You could do something like this with a Run Script phase in Xcode.
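For example, a Run Script phase could perform the same copy driven by the build configuration. This is only a sketch: the mapping of configurations to environments and the ${SRCROOT}-relative paths are assumptions carried over from the example above.

# Run Script build phase (sketch): pick the properties file for this configuration.
if [ "${CONFIGURATION}" = "Release" ]; then
    environment="Prod"
else
    environment="Test"
fi
cp -vf "${SRCROOT}/Properties/Environments/${environment}Properties.plist" \
       "${SRCROOT}/Properties/shared/DefaultProperties.plist"
touch "${SRCROOT}/Properties/shared/DefaultProperties.plist"

Make sure the script phase is ordered before the Copy Bundle Resources phase, so that the file that ends up in the product is the one you just copied.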