I have written a Python script that parses a scheduler report and prints the data shown below.
My script:
import os
import os.path
import re
import smtplib
from email.mime.text import MIMEText

infile = r"D:\i2Build\i2SchedulerReport.txt"

if os.path.isfile(infile) and os.access(infile, os.R_OK):
    print "Scheduler report exists and is readable"
else:
    print "Scheduler report is missing or is not readable"

sreport = {}
keep_phrases = ["Scheduler Running is failed"]

with open(infile) as f:
    f = f.readlines()

for line in f:
    for phrase in keep_phrases:
        if phrase in line:
            key, val = line.split(":")
            sreport[key] = val.strip()
            break

for k, v in sreport.items():
    print k, '', v
in2npdvlnx45 => Scheduler Running is failed
bnaxpd01 => Scheduler Running is failed
md1npdaix15 => Scheduler Running is failed
bnaxpd04 => Scheduler Running is failed
bnwspd03 => Scheduler Running is failed
md1npdsun10 => Scheduler Running is failed
bn2kpd14 => Scheduler Running is failed
md1npdvbld02 => Scheduler Running is failed
bnhppd05 => Scheduler Running is failed
dlaxpd02 => Scheduler Running is failed
cmwspd02 => Scheduler Running is failed
I want the above data printed in a tabular format like the one below, and then sent by email using the email MIME modules (or something similar). I see that importing pandas could help, but I have been unable to get it working.
Expected output:
in2npdvlnx45 Scheduler Running is failed
bnaxpd01 Scheduler Running is failed
md1npdaix15 Scheduler Running is failed
bnaxpd04 Scheduler Running is failed
bnwspd03 Scheduler Running is failed
md1npdsun10 Scheduler Running is failed
bn2kpd14 Scheduler Running is failed
md1npdvbld02 Scheduler Running is failed
bnhppd05 Scheduler Running is failed
dlaxpd02 Scheduler Running is failed
cmwspd02 Scheduler Running is failed
You can use format. Instead of:
print k,'',v
Use:
print('{:14} {}'.format(k, v))
More about this in the Python documentation on format string syntax.
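To then send the table by email, the smtplib and MIMEText modules your script already imports are enough. A minimal sketch, continuing from your sreport dict (the SMTP host and the addresses are placeholders you would fill in for your environment):

table = "\n".join('{:14} {}'.format(k, v) for k, v in sreport.items())

msg = MIMEText(table)
msg['Subject'] = 'Scheduler report'
msg['From'] = 'sender@example.com'       # placeholder address
msg['To'] = 'recipient@example.com'      # placeholder address

# Placeholder SMTP host; replace with your mail server.
server = smtplib.SMTP('smtp.example.com')
server.sendmail(msg['From'], [msg['To']], msg.as_string())
server.quit()

If you want an HTML table instead of plain text, pass the markup as MIMEText(table, 'html'); pandas is not needed for output this simple.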
We are trying to move our gitlab-runners from standard CentOS VMs to Kubernetes.
But after setup and registration, the pipeline fails with an unknown error:
Running with gitlab-runner 15.7.0 (259d2fd4)
on Kubernetes-local JXRw3mH1
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image gitlab-test.domain:5005/image:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-jxrw3mh1-project-290-concurrent-0dpd88 to be running, status is Pending
Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
Getting source from Git repository
00:00
error: could not lock config file /root/.gitconfig: Read-only file system
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Inside the log of the job pod we found:
helper Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/prepare_script"}
helper error: could not lock config file /root/.gitconfig: Read-only file system
helper
helper {"command_exit_code": 1, "script": "/scripts-290-207166/get_sources"}
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/cleanup_file_variables"}
Inside the log of the gitlab-runner pod we found:
Starting in container "helper" the command ["gitlab-runner-build" "<<<" "/scripts-290-207167/get_sources" "2>&1 | tee -a /logs-290-207167/output.log"] with script: #!/usr/bin/env bash
if set -o | grep pipefail > /dev/null; then set -o pipefail; fi; set -o errexit
set +o noclobber
: | eval $'export FF_CMD_DISABLE_DELAYED_ERROR_LEVEL_EXPANSION=$\'false\'\nexport FF_NETWORK_PER_BUILD=$\'false\'\nexport FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=$\'false\'\nexport FF_USE_DIRECT_DOWNLOAD
exit 0
job=207167 project=290 runner=JXRw3mH1
Remote process exited with the status: CommandExitCode: 1, Script: /scripts-290-207167/get_sources job=207167 project=290 runner=JXRw3mH1
Container "helper" exited with error: command terminated with exit code 1 job=207167 project=290 runner=JXRw3mH1
Notes:
the error "error: could not lock config file /root/.gitconfig: Read-only file system" occurs because the current user inside the container is different from root
the file /logs-290-207167/output.log contains the log of the job pod
Inside the job pod shell we also tested some git commands and successfully performed fetch and clone using our personal credentials (the same user that runs the pipeline from the GitLab GUI).
We think the problem may be related to the gitlab-ci-token, but our investigation has reached a dead end... :frowning:
I have a Python script on server_A that connects to server_B via SSH and calls a local rsync command to reset a directory on B with a fresh set of files. Then the script on A proceeds to rsync over an additional set of files to B. My hope was to run this on a schedule in Rundeck. However, it errors every time during the run with this output. What am I doing wrong?
Remote command failed with exit status 1
Failed: NonZeroResultCode: Remote command failed with exit status 1
Execution failed: 9 in project Test: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [server_A: NonZeroResultCode: Remote command failed with exit status 1]}, Node failures: {server_A=[NonZeroResultCode: Remote command failed with exit status 1]}, flow control: Continue, status: failed]
Exit status 1 was returned by the command you called. What are you running?
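If the script shells out with subprocess, a quick way to find out is to capture stderr and print it before failing, so the real error shows up in the Rundeck job log. A minimal sketch; the ssh/rsync command line here is a placeholder for whatever your script actually runs:

import subprocess
import sys

# Placeholder command line; substitute your real ssh/rsync invocation.
cmd = ["ssh", "server_B", "rsync", "-a", "--delete", "/source/", "/target/"]

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
if proc.returncode != 0:
    # Write the underlying error to the Rundeck job log, then exit with
    # the same status so Rundeck still marks the step as failed.
    sys.stderr.write("command failed with exit status %d:\n" % proc.returncode)
    sys.stderr.write(err.decode("utf-8", "replace"))
    raise SystemExit(proc.returncode)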
I am getting an unregistered task error when I run the worker to take jobs from a queue.
This is what I am doing:
celery -A Tasks beat
The above command will schedule a job at a specific time.
After that, the task is added to the default queue. Now I run the celery worker in another terminal as below:
celery worker -Q default
But I am getting the following error:
[2014-08-19 19:34:02,466: ERROR/MainProcess] Received unregistered task of type 'TasksReg.vodafone_v2'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'utc': False, 'chord': None, 'args': [[u'Kerala,Karnataka']], 'retries': 0, 'expires': None, 'task': 'TasksReg.vodafone_v2', 'callbacks': None, 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, 'id': 'd4390336-9110-4e47-9e3a-017017cb509c'} (244b)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 455, in on_task_received
strategies[name](message, body,
KeyError: 'TasksReg.vodafone_v2'
You should make sure the module that defines the task vodafone_v2 gets loaded by the celery worker.
You normally get that for free via task autodiscovery if you follow the suggested module layout.
I guess this is not working because you start the worker without the -A Tasks parameter.
Perhaps you can try
celery -A Tasks worker -Q default -l info
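If that alone does not fix it, make sure the app actually imports the module that defines the task. A minimal sketch, assuming a Tasks.py app module and a TasksReg.py task module (the module names come from your error message; the broker URL is a placeholder):

# Tasks.py -- the Celery app; the broker URL is a placeholder
from celery import Celery

app = Celery('Tasks', broker='amqp://guest@localhost//')
# Explicitly import the module that defines the tasks so the worker
# registers them under the name 'TasksReg.vodafone_v2'.
app.conf.CELERY_IMPORTS = ('TasksReg',)

# TasksReg.py -- defines the task your beat schedule refers to
from Tasks import app

@app.task
def vodafone_v2(regions):
    # placeholder body; your real task logic goes here
    print(regions)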
We have a Scala project and we use SBT as its build tool.
Our CI tool is TeamCity, and we build the project using the command line custom script option with the following command:
call %system.SBT_HOME%\bin\sbt clean package
The build process works fine when the build succeeds. However, when compilation fails, TeamCity thinks that the script exited with exit code 0 rather than the expected 1, which causes the TeamCity build to succeed although the compilation failed.
When we run the same command in a local cmd we see that the ERRORLEVEL is 1.
The relevant part of the build log:
[11:33:44][Step 1/3] [error] trait ConfigurationDomain JsonSupport extends CommonFormats {
[11:33:44][Step 1/3] [error] ^
[11:33:44][Step 1/3] [error] one error found
[11:33:45][Step 1/3] [error] (compile:compile) Compilation failed
[11:33:45][Step 1/3] [error] Total time: 12 s, completed Jan 9, 2014 11:33:45 AM
[11:33:45][Step 1/3] Process exited with code 0
How can we make TeamCity recognize the failure of the build?
Try exiting explicitly with:
call %system.SBT_HOME%\bin\sbt clean package
echo the exit code is %errorlevel%
rem propagate sbt's exit code so TeamCity sees the failure
exit /b %errorlevel%
If you can't get the process to output a non-zero exit code, you could instead use a build failure condition based on specific text in the build log. See the TeamCity documentation on build failure conditions, but in essence you can make the build fail if it finds the text "error found" in the build log.
We use gridengine (exactly: Open Grid Scheduler 2011.11.p1) as our batch-queuing system. I just added an execd host named host094, but when jobs were submitted there, errors were issued and the job status is Eqw. The logs in $SGE_ROOT/default/spool/host094/messages say:
shepherd of job 119232.1 exited with exit status = 26
can't open usage file active_jobs/119232.1/usage for job 119232.1: No such file or directory
What does this mean?