I have a Python script on server_A that connects to server_B via SSH and calls a local rsync command to reset a directory on B with a fresh set of files. The script on A then rsyncs an additional set of files over to B. My hope was to run this on a schedule in Rundeck; however, every run fails with the output below. What am I doing wrong?
Remote command failed with exit status 1
Failed: NonZeroResultCode: Remote command failed with exit status 1
Execution failed: 9 in project Test: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [server_A: NonZeroResultCode: Remote command failed with exit status 1]}, Node failures: {server_A=[NonZeroResultCode: Remote command failed with exit status 1]}, flow control: Continue, status: failed]
Exit status 1 was returned by the command you called. What are you running?
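A quick way to answer that is to run the same two stages by hand on server_A and print each exit code. The sketch below is only an assumption about what your job does (the hostname and paths are placeholders, not from your post); note that ssh returns the exit status of the remote command, and rsync's own exit code 1 means a syntax or usage error.
#!/bin/bash
# Hedged sketch of the two stages described above; server_B, /srv/baseline,
# /srv/target and /data/extra are placeholders, not taken from the question.

# Stage 1: reset the directory on server_B with a remote rsync
ssh server_B 'rsync -a --delete /srv/baseline/ /srv/target/'
rc=$?
if [ $rc -ne 0 ]; then
    echo "remote reset failed with exit code $rc" >&2
    exit $rc
fi

# Stage 2: push the additional files from server_A to server_B
rsync -a /data/extra/ server_B:/srv/target/
rc=$?
if [ $rc -ne 0 ]; then
    echo "push of additional files failed with exit code $rc" >&2
    exit $rc
fi
Whichever stage prints a non-zero code is the one Rundeck is reporting; running it interactively, outside the scheduler's environment (different PATH, SSH keys, agent), usually surfaces the underlying rsync or ssh error message.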
We are trying to move gitlab-runners from standard CentOS VMs to Kubernetes.
But after setup and registration, the pipeline fails with an unknown error:
Running with gitlab-runner 15.7.0 (259d2fd4)
on Kubernetes-local JXRw3mH1
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image gitlab-test.domain:5005/image:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-jxrw3mh1-project-290-concurrent-0dpd88 to be running, status is Pending
Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
Getting source from Git repository
00:00
error: could not lock config file /root/.gitconfig: Read-only file system
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Inside the log of the job pod we found:
helper Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/prepare_script"}
helper error: could not lock config file /root/.gitconfig: Read-only file system
helper
helper {"command_exit_code": 1, "script": "/scripts-290-207166/get_sources"}
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/cleanup_file_variables"}
Inside the log of the gitlab-runner pod we found:
Starting in container "helper" the command ["gitlab-runner-build" "<<<" "/scripts-290-207167/get_sources" "2>&1 | tee -a /logs-290-207167/output.log"] with script: #!/usr/bin/env bash
if set -o | grep pipefail > /dev/null; then set -o pipefail; fi; set -o errexit
set +o noclobber
: | eval $'export FF_CMD_DISABLE_DELAYED_ERROR_LEVEL_EXPANSION=$\'false\'\nexport FF_NETWORK_PER_BUILD=$\'false\'\nexport FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=$\'false\'\nexport FF_USE_DIRECT_DOWNLOAD
exit 0
job=207167 project=290 runner=JXRw3mH1
Remote process exited with the status: CommandExitCode: 1, Script: /scripts-290-207167/get_sources job=207167 project=290 runner=JXRw3mH1
Container "helper" exited with error: command terminated with exit code 1 job=207167 project=290 runner=JXRw3mH1
notes:
the error "error: could not lock config file /root/.gitconfig: Read-only file system" is due to the current user inside container is different by root
the file /logs-290-207167/output.log contains the log of the job pod
From a shell inside the job pod we also tested some git commands and successfully performed fetch and clone using our personal credentials (the same user that runs the pipeline from the GitLab GUI).
We think the problem may be related to the gitlab-ci-token, but we have run out of ideas... :frowning:
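A hedged sketch of what you can check from a debug shell in the job pod; using /builds as the writable path is an assumption based on the default build directory, not something taken from your runner configuration:
# run inside the job pod's build container
id                          # confirm the effective user is not root
echo "$HOME"                # git writes its global config to $HOME/.gitconfig
touch "$HOME/.gitconfig"    # reproduces "Read-only file system" when HOME points at /root on a read-only rootfs

# pointing git at a writable location avoids the failing write; the exact path is an example
export HOME=/builds
# or, with git >= 2.32:
export GIT_CONFIG_GLOBAL=/builds/.gitconfig
git config --global user.name test    # a global config write that now succeeds
If that works, exporting HOME (or GIT_CONFIG_GLOBAL) as a job or runner environment variable is a common workaround when the image runs as a non-root user on a read-only root filesystem, and the failure would then be unrelated to the gitlab-ci-token itself.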
My Kubernetes cluster is ARM-based, running Kubernetes 1.22.4.
My faas-cli command:
faas-cli build -f ./add.yml
The output:
[0] > Building add.
Clearing temporary build folder: ./build/add/
Preparing: ./add/ build/add/function
Building: myw/add:latest with python template. Please wait..
Sending build context to Docker daemon 8.192kB
Step 1/31 : FROM --platform=${TARGETPLATFORM:-linux/amd64} ghcr.io/openfaas/classic-watchdog:0.2.0 as watchdog
---> 6f97aa96da81
Step 2/31 : FROM --platform=${TARGETPLATFORM:-linux/amd64} python:2.7-alpine
---> 8579e446340f
Step 3/31 : ARG TARGETPLATFORM
---> Using cache
---> a75f5a062540
Step 4/31 : ARG BUILDPLATFORM
---> Using cache
---> c90a8309e851
Step 5/31 : ARG ADDITIONAL_PACKAGE
---> Using cache
---> 4ee3e6fab2a3
Step 6/31 : COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
---> Using cache
---> 33d972637c65
Step 7/31 : RUN chmod +x /usr/bin/fwatchdog
---> Running in 6204e8546454
standard_init_linux.go:219: exec user process caused: exec format error
The command '/bin/sh -c chmod +x /usr/bin/fwatchdog' returned a non-zero code: 1
[0] < Building add done in 0.54s.
[0] Worker done.
Total build time: 0.54s
Errors received during build:
- [add] received non-zero exit code from build, error: The command '/bin/sh -c chmod +x /usr/bin/fwatchdog' returned a non-zero code: 1
Can someone tell me the correct way to build a function for ARM in OpenFaaS, and why this happened? Did I do something wrong?
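The build steps above default to --platform linux/amd64 (see the TARGETPLATFORM fallback in Step 1 and Step 2), so an amd64 base image and fwatchdog binary end up being executed on your ARM Docker daemon, which is exactly what "exec format error" means. A hedged sketch of a cross-platform build, assuming Docker buildx is available and your nodes are arm64 (the builder name and platform string are examples, adjust them to your architecture):
# one-time: create and select a buildx builder that can cross-build
docker buildx create --use --name multiarch

# build and push an ARM image instead of the default linux/amd64 one
faas-cli publish -f ./add.yml --platforms linux/arm64

# deploy the published image to the cluster
faas-cli deploy -f ./add.yml
faas-cli publish drives docker buildx, so the template's base images are pulled for the requested platform and the RUN steps execute natively on your ARM host instead of failing with an exec format error.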
I can't install nodejs using the meta-nodejs layer on qemux86-64.
bitbake nodejs gives the following error:
Initialising tasks: 100% |########################################|
Time: 0:00:05
Sstate summary: Wanted 7 Found 0 Missed 7 Current 780 (0% match, 99% complete)
NOTE: Executing Tasks
ERROR: nodejs-7.10.0-r1.4 do_configure: Execution of '/home/user/poky/build/tmp/work/core2-64-poky-linux/nodejs/7.10.0-r1.4/temp/run.do_configure.68465' failed with exit code 127:
/usr/bin/env: ‘python’: No such file or directory
WARNING: exit code 127 from a shell command.
ERROR: Logfile of failure stored in: /home/user/poky/build/tmp/work/core2-64-poky-linux/nodejs/7.10.0-r1.4/temp/log.do_configure.68465
Log data follows:
| DEBUG: Executing shell function do_configure
| /usr/bin/env: ‘python’: No such file or directory
| WARNING: exit code 127 from a shell command.
| ERROR: Execution of '/home/user/poky/build/tmp/work/core2-64-poky-linux/nodejs/7.10.0-r1.4/temp/run.do_configure.68465' failed with exit code 127:
| /usr/bin/env: ‘python’: No such file or directory
| WARNING: exit code 127 from a shell command.
| ERROR: Task (/home/user/poky/meta-openembedded/meta-nodejs/recipes-devtools/nodejs/nodejs_7.10.0.bb:do_configure) failed with exit code '1'
NOTE: Tasks Summary: Attempted 2022 tasks of which 2016 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
  /home/user/poky/meta-openembedded/meta-nodejs/recipes-devtools/nodejs/nodejs_7.10.0.bb:do_configure
Summary: There was 1 WARNING message shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
I installed Python on both the host and the target.
Can someone help me?
meta-nodejs is outdated; use the nodejs recipe from meta-oe instead.
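A minimal sketch of the switch, assuming a standard poky checkout with meta-openembedded next to it; the layer paths are examples, and releases older than honister use the IMAGE_INSTALL_append spelling instead of IMAGE_INSTALL:append:
# from the build directory
bitbake-layers remove-layer meta-nodejs
bitbake-layers add-layer ../meta-openembedded/meta-oe    # meta-oe carries a maintained nodejs recipe

# add nodejs to the image (older releases: IMAGE_INSTALL_append = " nodejs")
echo 'IMAGE_INSTALL:append = " nodejs"' >> conf/local.conf

bitbake nodejs
The original failure comes from the old 7.10.0 recipe invoking a bare python (Python 2) during do_configure; the current nodejs recipes in meta-oe build with python3, so the host-side "‘python’: No such file or directory" error does not occur.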
On job failure (exit code > 0) Rundeck automatically adds detailed status information to the notification attachment:
Failed: NonZeroResultCode: Remote command failed with exit status 1
Execution failed: 3709 in project test_project_1: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [host1: NonZeroResultCode: Remote command failed with exit status 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:host1)=BaseDataContext{{exec={exitCode=0}}}, ContextView(node:host1)=BaseDataContext{{exec={exitCode=0}}}}, base=null)} ]}, Node failures: {host1=[NonZeroResultCode: Remote command failed with exit status 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:host1)=BaseDataContext{{exec={exitCode=0}}}, ContextView(node:host1)=BaseDataContext{{exec={exitCode=0}}}}, base=null)} ]}, status: failed]
Can this message be disabled/hidden so that only the script output is sent, like in the attachment of a successful job run?
You can force the "exit 0" in your step wrapping it on some inline script like this
#!/bin/bash
touch /root/test 2> /dev/null
if [ $? -eq 0 ]
then
    # whatever you want on success
    echo "Successfully created file"
    exit 0
else
    echo "Could not create file" >&2
    exit 1
fi
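If the goal is only to stop Rundeck from appending the NonZeroResultCode details to the notification, a hedged variant is to run the real command inside the wrapper and always return 0, logging the failure yourself (/path/to/your_actual_script.sh is a placeholder):
#!/bin/bash
/path/to/your_actual_script.sh
rc=$?
if [ $rc -ne 0 ]; then
    # keep the failure visible in the log/attachment without failing the step
    echo "script failed with exit code $rc (suppressed for notification)" >&2
fi
exit 0
The trade-off is that Rundeck will report the execution as successful, so job status, retries, and any on-failure notifications no longer reflect the real outcome.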
We use gridengine (exactly Open Grid Scheduler 2011.11.p1) as our batch-queuing system. I just added an execd host named host094, but when jobs are submitted there they fail with status Eqw, and the log in $SGE_ROOT/default/spool/host094/messages says:
shepherd of job 119232.1 exited with exit status = 26
can't open usage file active_jobs/119232.1/usage for job 119232.1: No such file or directory
What does this mean?
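The message itself says the shepherd could not open the job's usage file under the execd spool directory, which usually points at the spool directory for the new host missing or not being writable by the SGE admin user. A hedged set of checks to run on host094 (the admin account name sgeadmin is an example; use the admin user configured for your cell):
# on host094
ls -ld $SGE_ROOT/default/spool/host094
ls -ld $SGE_ROOT/default/spool/host094/active_jobs
# verify the admin user can write there (sgeadmin is an example name)
sudo -u sgeadmin touch $SGE_ROOT/default/spool/host094/active_jobs/.write_test

# once the spool directory is fixed, clear the Eqw error state so the job can be rescheduled
qmod -cj 119232
If the spool directory lives on NFS, also check that root squashing or export options are not preventing execd from creating the per-job active_jobs subdirectory.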