Pipeline fails when scanning Scala files - scala

I created a GitLab CI pipeline which sends all files in the project folder to SonarQube for scanning. It works perfectly with Python files, but it fails as soon as I add Scala files. My GitLab CI:
image: testimage

variables:
  SONARQUBE_URL: https://sonarqube.com

stages:
  - PyLint

pylint:
  stage: PyLint
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - apt-get install scala -y
    - cd project
    - scalac *.scala
    - ls
    - cd ..
    - sed -i 's/PROJECT-NAME/'"$CI_PROJECT_NAME"'/g' sonar-project.properties
    - sonar-scanner -Dsonar.login=$SONAR_TOKEN -Dsonar.qualitygate.wait=true -Dsonar.projectVersion=${CI_PIPELINE_ID}
    - echo 'Repository Link:' "$SONARQUBE_URL${CI_PROJECT_NAME}"
Some lines from logs:
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 13.545s
INFO: Final Memory: 16M/136M
INFO: ------------------------------------------------------------------------
ERROR: Error during SonarQube Scanner execution
java.lang.IllegalStateException: Can not execute Findbugs
at org.sonarsource.scanner.cli.Main.main(Main.java:61)
Caused by: java.lang.IllegalStateException: One (sub)project contains Java source files that are not compiled (/builds/scala/myrepo).
Property sonar.java.binaries was not set, it is required to locate the compiled .class files. For instance set the property to: sonar.java.binaries=target/classes
Sonar JavaResourceLocator.classpath was empty
Sonar JavaResourceLocator.classFilesToAnalyze was empty
at org.sonar.plugins.findbugs.FindbugsConfiguration.buildMissingCompiledCodeException(FindbugsConfiguration.java:154)
at org.sonar.plugins.findbugs.FindbugsConfiguration.initializeFindbugsProject(FindbugsConfiguration.java:124)
at org.sonar.plugins.findbugs.FindbugsExecutor.execute(FindbugsExecutor.java:117)
... 31 more
ERROR:
ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging.
Cleaning up file based variables
00:01
ERROR: Job failed: command terminated with exit code 1

My solution was to add this line to the sonar-project.properties file:
sonar.java.binaries=.
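
For context, this is roughly what the relevant part of sonar-project.properties could look like. The value of sonar.java.binaries below is an assumption on my part: it presumes the compile step is changed to something like scalac -d project/classes *.scala so the .class files land in a known directory, whereas the sonar.java.binaries=. shown above simply lets the scanner search the whole checkout for compiled classes.

# sonar-project.properties (illustrative sketch; adjust values to your project)
sonar.projectKey=PROJECT-NAME
sonar.sources=.
# Directory containing the compiled .class files needed by the Java/Findbugs analyzers
sonar.java.binaries=project/classes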

Related

elastic beanstalk error when use sh ./platform/hooks/prebuild : no such file or directory

I'm trying to deploy a simple front-end web app with Elastic Beanstalk.
When I deploy my app with eb deploy or eb create --elb-type application --instance-type t3.micro, an error occurs while executing a shell script that I added to prevent Out Of Memory.
2022/12/10 08:20:43.618277 [INFO] Executing instruction: StageApplication
2022/12/10 08:20:43.620012 [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2022/12/10 08:20:43.620027 [INFO] Running command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/
2022/12/10 08:20:43.646596 [INFO] finished extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/ successfully
2022/12/10 08:20:43.647365 [INFO] Executing instruction: RunAppDeployPreBuildHooks
2022/12/10 08:20:43.647379 [INFO] Executing platform hooks in .platform/hooks/prebuild/
2022/12/10 08:20:43.647409 [INFO] Following scripts will be executed in order: [00_test.sh 01_configure_swap_space.sh]
2022/12/10 08:20:43.647413 [INFO] Try add execution permission
2022/12/10 08:20:43.647418 [INFO] Adding execute mode to file, original mode is 420
2022/12/10 08:20:43.647428 [INFO] Running script: .platform/hooks/prebuild/00_test.sh
2022/12/10 08:20:43.647663 [ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPreBuildHooks]. Stop running the command. Error: Command .platform/hooks/prebuild/00_test.sh failed with error fork/exec .platform/hooks/prebuild/00_test.sh: no such file or directory
2022/12/10 08:20:43.647668 [INFO] Executing cleanup logic
2022/12/10 08:20:43.647752 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1670660443647,"severity":"ERROR"}]}]}
2022/12/10 08:20:43.648454 [INFO] Platform Engine finished execution on command: app-deploy
I created a 00_test.sh that echoes a simple statement in exactly the same location and ran chmod 777 on it, but it still doesn't work.
One post (/.platform/hooks/': No such file or directory when deploying Django App to AWS Elastic Beanstalk) said it depends on the OS, so I also tried the whole process from within VS Code.
If anyone has experienced the same situation, I'd like to know why this happens and how to solve it.
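
For reference, a minimal sketch of what the 00_test.sh hook could contain; the shebang line and the exact echo text are assumptions of mine, not taken from the original post:

#!/bin/bash
# .platform/hooks/prebuild/00_test.sh -- minimal prebuild hook that only prints a message
echo "prebuild hook 00_test.sh ran"

(On Linux the script needs a valid interpreter line and Unix LF line endings to be exec'd; a CRLF-terminated shebang is one common way to get a fork/exec "no such file or directory" error, though I can't confirm that is the cause here.)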

NixOS - Issue packaging neovim plugin from github

I'm trying to add a Neovim plugin that doesn't exist in nixpkgs yet (modes.nvim), but I'm having trouble getting it to work.
I'm using NixOS 22.05 with Home Manager, and I am using the following to build this plugin:
pkgs.vimUtils.buildVimPlugin {
  name = "modes-nvim";
  src = pkgs.fetchFromGitHub {
    owner = "mvllow";
    repo = "modes.nvim";
    rev = "3188692abf02a8838ec75e59d68c2ce3e4323f5c";
    sha256 = "sha256-2QDpwQ9+F5t5gTR1KLVzRrvriwo5JUHatZEJnc0ojV8=";
  };
}
Initially, I used lib.pkgs.fakeSha256 to get a "mismatched hash" error and copy/pasted the hash from that error message, so I think it's correct, though other examples I've seen on the internet show the SHA-256 as a hex string, so I'm not sure what's going on there.
When I run home-manager switch, I get the following error:
these 12 derivations will be built:
/nix/store/pna2lzjc3q56z59b2kfazzxi8m6swp8d-vimplugin-modes-nvim.drv
/nix/store/mnj1d881d8s57yj3y8wjy7i9nl3m089f-vimplugin-modes-nvim.drv
/nix/store/5l8vqvx2bbawkkj92s8qd0p5hw16pcmq-vim-pack-dir.drv
/nix/store/jgqf68sd50s79pydzs9154p509l5109v-hm_nviminit.vim.drv
/nix/store/rq8f75qr9ahrfr0hvp51inn19088bz5p-manifest.vim.drv
/nix/store/hbzm2xr6nv8mr2l9nrlf742fqdmw9nv3-neovim-0.7.2.drv
/nix/store/zks47ifk7njz1s8y7hvq357ac8z6azkd-neovim-0.7.2-fish-completions.drv
/nix/store/rkaa094vvyjcyy4v1zkh4f8xz64vqxas-cameron-fish-completions.drv
/nix/store/0073403x9b4wv13gm7a6bqy4765pi8g5-home-manager-files.drv
/nix/store/lxwpbhb6ryhwrff4cjyniff11843cf9x-home-manager-path.drv
/nix/store/xbbn44253wjh90md4hqrrc2wfpafkc55-activation-script.drv
/nix/store/ijwmv897xc2wlr1ij4a30vk4z154ikbf-home-manager-generation.drv
building '/nix/store/pna2lzjc3q56z59b2kfazzxi8m6swp8d-vimplugin-modes-nvim.drv'...
Sourcing vim-command-check-hook.sh
Using vimCommandCheckHook
Sourcing vim-gen-doc-hook
unpacking sources
unpacking source archive /nix/store/fcxyif8piqar9w9ynmi6ym71hw6zsy7a-source
source root is source
patching sources
configuring
no configure script, doing nothing
building
build flags: SHELL=/nix/store/iffl6dlplhv22i2xy7n1w51a5r631kmi-bash-5.1-p16/bin/bash
test -r dependencies/pack/vendor/start/plenary.nvim || git clone --depth=1 https://github.com/nvim-lua/plenary.nvim.git dependencies/pack/vendor/start/plenary.nvim
/nix/store/iffl6dlplhv22i2xy7n1w51a5r631kmi-bash-5.1-p16/bin/bash: line 1: git: command not found
make: *** [Makefile:7: install_dependencies] Error 127
error: builder for '/nix/store/pna2lzjc3q56z59b2kfazzxi8m6swp8d-vimplugin-modes-nvim.drv' failed with exit code 2;
last 10 log lines:
> unpacking source archive /nix/store/fcxyif8piqar9w9ynmi6ym71hw6zsy7a-source
> source root is source
> patching sources
> configuring
> no configure script, doing nothing
> building
> build flags: SHELL=/nix/store/iffl6dlplhv22i2xy7n1w51a5r631kmi-bash-5.1-p16/bin/bash
> test -r dependencies/pack/vendor/start/plenary.nvim || git clone --depth=1 https://github.com/nvim-lua/plenary.nvim.git dependencies/pack/vendor/start/plenary.nvim
> /nix/store/iffl6dlplhv22i2xy7n1w51a5r631kmi-bash-5.1-p16/bin/bash: line 1: git: command not found
> make: *** [Makefile:7: install_dependencies] Error 127
For full logs, run 'nix log /nix/store/pna2lzjc3q56z59b2kfazzxi8m6swp8d-vimplugin-modes-nvim.drv'.
error: 1 dependencies of derivation '/nix/store/mnj1d881d8s57yj3y8wjy7i9nl3m089f-vimplugin-modes-nvim.drv' failed to build
error: 1 dependencies of derivation '/nix/store/5l8vqvx2bbawkkj92s8qd0p5hw16pcmq-vim-pack-dir.drv' failed to build
error: 1 dependencies of derivation '/nix/store/jgqf68sd50s79pydzs9154p509l5109v-hm_nviminit.vim.drv' failed to build
error: 1 dependencies of derivation '/nix/store/rq8f75qr9ahrfr0hvp51inn19088bz5p-manifest.vim.drv' failed to build
error: 1 dependencies of derivation '/nix/store/0073403x9b4wv13gm7a6bqy4765pi8g5-home-manager-files.drv' failed to build
error: 1 dependencies of derivation '/nix/store/hbzm2xr6nv8mr2l9nrlf742fqdmw9nv3-neovim-0.7.2.drv' failed to build
error: 1 dependencies of derivation '/nix/store/ijwmv897xc2wlr1ij4a30vk4z154ikbf-home-manager-generation.drv' failed to build
It appears to be failing because git is not found. Of course, I have git installed on my system, but perhaps this command is being run in an environment that doesn't have access to my installed packages.
I'm very new to Nix and NixOS, so I'm not really sure how to begin fixing this issue. I've searched online for answers, but haven't found anything about this issue. Any advice is much appreciated.
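
I can't verify a fix, but as an illustration, here is a sketch of two things one might try (assuming buildVimPlugin forwards extra attributes to the underlying mkDerivation): providing git inside the build sandbox via nativeBuildInputs, or skipping the Makefile-driven build phase entirely with dontBuild. Note that the Nix sandbox normally has no network access, so even with git present the clone may still fail; since the failing target only clones plenary.nvim (per the log above), skipping the build step may be the more workable route. Both attributes are assumptions, not something the original post confirms.

pkgs.vimUtils.buildVimPlugin {
  name = "modes-nvim";
  src = pkgs.fetchFromGitHub {
    owner = "mvllow";
    repo = "modes.nvim";
    rev = "3188692abf02a8838ec75e59d68c2ce3e4323f5c";
    sha256 = "sha256-2QDpwQ9+F5t5gTR1KLVzRrvriwo5JUHatZEJnc0ojV8=";
  };
  # Assumption: make git available inside the build sandbox.
  nativeBuildInputs = [ pkgs.git ];
  # Assumption: alternatively, skip the Makefile build phase altogether,
  # since a Lua plugin usually needs no compilation step.
  # dontBuild = true;
}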

AWS CodeBuild S3 cache for swift lambda

So I have some AWS Swift lambdas that I deployed via sam deploy, and that works fine.
The Swift lambda looks like this: aws-samples
I am currently in the middle of building a CI/CD pipeline using CodePipeline and CodeBuild. My CodeBuild project executes the following buildspec.yml and is configured to cache to S3:
version: 0.2
phases:
  build:
    commands:
      - sam build
      - sam package -t template.yml --s3-bucket bucketName --output-template-file packaged.yaml
artifacts:
  files:
    - packaged.yaml
cache:
  paths:
    - ".aws-sam/**/*"
To build the Swift lambdas, sam build executes the following Makefile, because the function in template.yml is set to BuildMethod: makefile.
Makefile:
### Add functions here and link them to builder-bot format MUST BE "build-FunctionResourceName in template.yaml"
build-ExpiredMediaItemProcessorLambda: builder-bot

builder-bot:
	$(eval $@PRODUCT = $(subst build-,,$(MAKECMDGOALS)))
	$(eval $@BUILD_DIR = $(PWD)/.aws-sam/build-$($@PRODUCT))
	$(eval $@STAGE = $($@BUILD_DIR)/lambda)
	$(eval $@ARTIFACTS_DIR = $(PWD)/.aws-sam/build/$($@PRODUCT))
	# prep directories
	mkdir -p $($@BUILD_DIR)/lambda $($@ARTIFACTS_DIR)
	# Compile application
	swift build --product $($@PRODUCT) -c release --build-path $($@BUILD_DIR)
	# copy deps
	ldd '/$($@BUILD_DIR)/release/$($@PRODUCT)' | grep swift | cut -d' ' -f3 | xargs cp -Lv -t /$($@BUILD_DIR)/lambda
	# copy binary to stage
	cp $($@BUILD_DIR)/release/$($@PRODUCT) $($@BUILD_DIR)/lambda/bootstrap
	# copy app from stage to artifacts dir
	cp $($@STAGE)/* $($@ARTIFACTS_DIR)
I got the Makefile from the aws-samples project linked above, which I modified slightly to work on CodeBuild.
Now to my question: how do I get the CodeBuild S3 cache to work with Swift lambdas?
When I cache the .aws-sam/**/* folder, Swift cannot compile because the build paths differ between build machines. The error looks like this:
swift build --product LambdaName -c release --build-path /codebuild/output/src409372700/src/.swift-build/build-LambdaName
[1/863] Compiling CSotoExpat xmltok_impl.c
[2/863] Compiling CSotoExpat xmltok_ns.c
[3/865] Compiling INIParser INIParser.swift
<unknown>:0: error: PCH was compiled with module cache path '.aws-sam/build-LambdaName/x86_64-unknown-linux-gnu/release/ModuleCache/1LD7OVICEM9JB', but the path is currently '/codebuild/output/src409372700/src/.aws-sam/build-LambdaName/x86_64-unknown-linux-gnu/release/ModuleCache/1LD7OVICEM9JB'
<unknown>:0: error: missing required module 'SwiftShims'
[4/865] Compiling Logging Locks.swift
<unknown>:0: error: PCH was compiled with module cache path '.aws-sam/build-LambdaName/x86_64-unknown-linux-gnu/release/ModuleCache/1LD7OVICEM9JB', but the path is currently '/codebuild/output/src409372700/src/.aws-sam/build-LambdaName/x86_64-unknown-linux-gnu/release/ModuleCache/1LD7OVICEM9JB'
<unknown>:0: error: missing required module 'SwiftShims'
As far as I understand, this happens because swift build cannot cope with a changing or relative build path.
After some digging I found a Stack Overflow post describing a similar problem. I tried their solution and the Swift compile error is gone, but after a rebuild Swift compiles the entire project again even when no code was changed.
Any help is much appreciated!
Thanks!

Run QEMU from within Eclipse External tools

I'm setting up the Eclipse IDE for Yocto application development and I'm stuck starting QEMU from within Eclipse.
I have a working QEMU image, for example:
ubuntu@ubuntu:~/work/community/build-x11$ runqemu qemuarm
tmp/deploy/images/qemuarm/zImage-qemuarm.bin
tmp/deploy/images/qemuarm/fsl-image-multimedia-full-qemuarm.ext4
Within Eclipse I follow
https://www.yoctoproject.org/docs/2.5/sdk-manual/sdk-manual.html#oxygen-starting-qemu-in-user-space-nfs-mode
but after configuring "External Tools" and trying to run QEMU I get the following:
runqemu - INFO - Running MACHINE=qemuarm bitbake -e...
ERROR: Unable to find conf/bblayers.conf or conf/bitbake.conf. BBAPTH is unset and/or not in a build directory?
runqemu - WARNING - Couldn't run 'bitbake -e' to gather environment information:
runqemu - WARNING - Can't find qemuboot conf file, DEPLOY_DIR_IMAGE is NULL!
runqemu - INFO - Running MACHINE=qemuarm bitbake -e...
ERROR: Unable to find conf/bblayers.conf or conf/bitbake.conf. BBAPTH is unset and/or not in a build directory?
runqemu - WARNING - Couldn't run 'bitbake -e' to gather environment information:
runqemu - INFO - Setting STAGING_DIR_NATIVE to OECORE_NATIVE_SYSROOT (/home/ubuntu/work/community/build-x11/tmp/work/armv5e-fslc-linux-gnueabi/meta-ide-support/1.0-r3/recipe-sysroot-native)
runqemu - INFO - Setting STAGING_BINDIR_NATIVE to /home/ubuntu/work/community/build-x11/tmp/work/armv5e-fslc-linux-gnueabi/meta-ide-support/1.0-r3/recipe-sysroot-native/usr/bin
runqemu - INFO - QB_MEM is not set, use 512M by default
runqemu - INFO - Continuing with the following parameters:
KERNEL: [/home/ubuntu/work/community/build-x11/tmp/deploy/images/qemuarm/zImage-qemuarm.bin]
MACHINE: [qemuarm]
FSTYPE: [nfs]
NFS_DIR: [/home/ubuntu/work/community/build-x11/MY_QEMU_ROOTFS]
CONFFILE: []
/bin/sh: 1: stty: not found
Traceback (most recent call last):
File "/home/ubuntu/work/community/sources/poky/scripts/runqemu", line 1270, in main
config.setup_network()
File "/home/ubuntu/work/community/sources/poky/scripts/runqemu", line 997, in setup_network
self.saved_stty = subprocess.check_output("stty -g", shell=True).decode('utf-8')
File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
**kwargs).stdout
File "/usr/lib/python3.5/subprocess.py", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'stty -g' returned non-zero exit status 127
Cleanup
Command 'lesspipe' is available in the following places
* /bin/lesspipe
* /usr/bin/lesspipe
The command could not be located because '/bin:/usr/bin' is not included in the PATH environment variable.
lesspipe: command not found
Command 'dircolors' is available in '/usr/bin/dircolors'
The command could not be located because '/usr/bin' is not included in the PATH environment variable.
dircolors: command not found
ubuntu@ubuntu:~/eclipse/cpp-oxygen/eclipse$
I wonder if anyone has experienced such a problem when setting up "External Tools" in Eclipse?
Thank you
Navigating to the build directory first and then triggering the command will work.
OR
Run source oe-init-build-env <build path> before invoking runqemu.
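
One way to do that from an Eclipse "External Tools" entry is to point it at a small wrapper script. This is only a sketch: the paths are copied from the logs above and the build directory name (build-x11) from the shell prompt shown earlier, so adjust them to your layout.

#!/bin/bash
# Wrapper for the Eclipse "External Tools" entry: set up the BitBake environment
# first so runqemu can find conf/bblayers.conf, then launch the image.
cd /home/ubuntu/work/community
source sources/poky/oe-init-build-env build-x11
runqemu qemuarm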

Hudson failing build w/o revealing cause

Every build has failed as of Tuesday. I'm not exactly sure what happened. The Phing targets (clean/prepare) are being executed properly. Additionally, the unit tests are passing with flying colors, with only a warning for duplicate code (not a reason for a fail). I tried removing the phpDoc target to see if that was causing the error, but the build still failed.
Started by user chris
Updating file://localhost/projects/svn/ips-com/trunk
At revision 234
no change for file://localhost/projects/svn/ips-com/trunk since the previous build
[trunk] $ /opt/phing/bin/phing clean prepare -logger phing.listener.NoBannerLogger
Buildfile: /var/lib/hudson/.hudson/jobs/IPS/workspace/trunk/build.xml

IPS > clean:
    [echo] Clean...
  [delete] Deleting directory /var/lib/hudson/.hudson/jobs/IPS/workspace/build

IPS > prepare:
    [echo] Prepare...
   [mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build
   [mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs
   [mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage
   [mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage-html
   [mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/docs
   [mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/app

BUILD FINISHED

Total time: 1.0244 second

[workspace] $ /bin/bash -xe /tmp/hudson3259012225710915845.sh
+ cd trunk/tests
+ /usr/local/bin/phpunit --verbose -d memory_limit=512M --log-junit ../../build/logs/phpunit.xml --coverage-clover ../../build/logs/coverage/clover.xml --coverage-html ../../build/logs/coverage-html/
PHPUnit 3.5.0 by Sebastian Bergmann.

IPS
  Default_IndexControllerTest .
  Default_AuthControllerTest ......
  Manage_UsersControllerTest .....
    testDeleteInvalidUserId ..
    testGetPermissionsForInvalidUserId ..
  Audit_OverviewControllerTest ............

Time: 14 seconds, Memory: 61.00Mb

OK (28 tests, 198 assertions)

Writing code coverage data to XML file, this may take a moment.
Generating code coverage report, this may take a moment.

Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0

[workspace] $ /bin/bash -xe /tmp/hudson1439023061736436000.sh
+ /usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk
phpcpd 1.3.2 by Sebastian Bergmann.

Found 1 exact clones with 6 duplicated lines in 2 files:

  library/Ips/Form/Decorator/SplitInput.php:8-14
  library/Ips/Form/Decorator/FeetInches.php:10-16

0.04% duplicated lines out of 16585 total lines of code.

Time: 4 seconds, Memory: 19.50Mb

[DRY] Skipping publisher since build result is FAILURE
Publishing Javadoc
[xUnit] [INFO] - Starting to record.
[xUnit] [WARNING] - Can't create the path /var/lib/hudson/.hudson/jobs/IPS/workspace/generatedJUnitFiles. Maybe the directory already exists.
[xUnit] [INFO] - Processing PHPUnit-3.4 (default)
[xUnit] [INFO] - [PHPUnit-3.4 (default)] - 1 test report file(s) were found with the pattern 'build/logs/phpunit.xml' relative to '/var/lib/hudson/.hudson/jobs/IPS/workspace' for the testing framework 'PHPUnit-3.4 (default)'.
[xUnit] [INFO] - Converting '/var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/phpunit.xml' .
[xUnit] [INFO] - Stopping recording.
Publishing Clover coverage report...
Publishing Clover XML report...
Publishing Clover coverage results...
Finished: FAILURE
What changed since Tuesday? Try to manually run exactly the same commands that Hudson runs, from the same directory Hudson starts them in (usually the job's workspace directory), and of course with the user account that Hudson runs under.
There are several possibilities, ranging from group ownership of a directory, to permissions, to other things outside of Hudson. Was Hudson upgraded? Was a plugin upgraded? Was the OS or PHP upgraded? Was there a change in the default or user .profile or .env (or the equivalent files)? Does another process access the workspace? ...
Once I had the problem that all of a sudden my deployment scripts did not run anymore. The mystery was that I could still run the script from the command line with the Hudson user account. The reason was simple but took a while to uncover: Java had been upgraded from 5 to 6, and both versions were available. After comparing the environment variables, there was a difference in the PATH. The new path was set in the global .profile, but Hudson does not open an interactive shell, so the .profile is never executed. If you have a problem like this, you can put the initialization in the .env file (or whatever the file is called on your system), because that is run regardless of whether the shell is interactive. Alternatively, you can configure Hudson to set the variable at the master or node/slave level.
If you want a command not to mark the build as a failure, add a #! line (for example #!/bin/sh) at the top of the shell build step; this prevents Hudson from running the script with the -xe flags that produce this behaviour.
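
For illustration, an "Execute shell" build step of this shape shows the idea. The phpcpd command is taken from the log above; the trailing echo is my addition so that the step's exit status is not taken from phpcpd itself:

#!/bin/sh
# Custom shebang: Hudson no longer injects its default -xe flags,
# so an intermediate non-zero exit code does not abort the step.
/usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk
echo "phpcpd exited with status $?"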