When I display a MATLAB (2011b or 2013a) Parallel Computing Toolbox job at the console like this:
>> findResource.jobs(1)
ans =
Job ID 17 Information
=====================
UserName : jgleixner
State : finished
SubmitTime : Sat Aug 03 05:02:59 CEST 2013
StartTime : Sat Aug 03 05:03:12 CEST 2013
Running Duration : 0 days 3h 37m 9s
- Data Dependencies
FileDependencies : {}
PathDependencies : {}
- Associated Task(s)
Number Pending : 0
Number Running : 0
Number Finished : 120
TaskID of errors : [1x94 double]
MATLAB shows an array of the IDs of tasks that threw errors. However, if this array is too long, the values are not printed (as in the example above).
How do I access this array programmatically?
The result is an instance of the parallel.Job class, so have a look at the documentation here. You can get an array of all parallel.Task objects by fetching the Tasks property, and they contain information about any errors which occurred.
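For example, a minimal sketch of collecting those IDs programmatically (this assumes the task objects expose the ID and ErrorMessage properties described in the linked documentation; double-check against your release):
job = findResource.jobs(1);              % the job displayed above
tasks = job.Tasks;                       % array of task objects for this job
failedIds = [];
for k = 1:numel(tasks)
    if ~isempty(tasks(k).ErrorMessage)   % this task raised an error
        failedIds(end+1) = tasks(k).ID;  %#ok<AGROW>
    end
end
disp(failedIds)                          % the same IDs as "TaskID of errors"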
I have got the response from the ServiceNow API:
$geturl = $ParentURL + "/table/sc_req_item?sysparm_query=number=" + $RITMNumber + "&sysparm_display_value=all&sysparm_fields=number,short_description,requested_for.first_name,requested_for.last_name,requested_for.user_name"
number : @{display_value=RITM2519394; value=RITMXXXX}
short_description : @{display_value=Login As ABC xyz on 19 Feb. Creating ticket for tracking purposes.; value=Login As ABC xyz on 19 Feb. Creating ticket for tracking purposes.}
requested_for.first_name : @{display_value=abc; value=abc}
requested_for.user_name : @{display_value=Exxxx; value=Exxxx}
requested_for.last_name : @{display_value=xyz; value=xyz}
I am able to get the short description using $ShortDescription = $($response.result.short_description.value), which returns:
Login As ABC xyz on 19 Feb. Creating ticket for tracking purposes.
But when I try to get the requested_for first name and last name with the following, I get blank output:
$RequestFor = $($response.result.requested_for.first_name)
Could you please help me sort this out?
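In case it is useful: judging from the output shown above, the property names returned for those fields literally contain a dot (e.g. requested_for.first_name), so one assumption-based sketch would be to quote the property name when accessing it:
$RequestForFirstName = $($response.result.'requested_for.first_name'.value)
$RequestForLastName  = $($response.result.'requested_for.last_name'.value)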
I have a question: how can I set up the telegraf.conf file to collect logs from the zimbra.log file?
I tried the config below, but it does not work :(
I want to send these logs to Grafana.
One of the lines of zimbra.log, for example:
Oct 1 10:20:46 webmail postfix/smtp[7677]: BD5BAE9999: to=user#mail.com, relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)
And I do not understand exactly how "grok_patterns =" works.
[[inputs.tail]]
files = ["/var/log/zimbra.log"]
from_beginning = false
grok_patterns = ['%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}']
name_override = "zimbra_access_log"
grok_custom_pattern_files = []
grok_custom_patterns = '''
TS_UNIX %{MONTH}%{SPACE}%{MONTHDAY}%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}
TS_CUSTOM %{MONTH}%{SPACE}%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}
'''
grok_timezone = "Local"
data_format = "grok"
I have copied your example line into a log file called Prueba.txt, which contains the following lines:
Oct 3 00:52:32 webmail postfix/smtp[7677]: BD5BAE9999: to=user#mail.com, relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, dsn=2.0.0, status=sent (250 2.0$
Oct 13 06:25:01 webmail systemd-logind[949]: New session 229478 of user zimbra.
Oct 13 06:25:02 webmail zmconfigd[27437]: Shutting down. Received signal 15
Oct 13 06:25:02 webmail systemd-logind[949]: Removed session c296.
Oct 13 06:25:03 webmail sshd[28005]: Failed password for invalid user julianne from 120.131.2.210 port 10570 ssh2
I have been able to parse the data with this configuration of the tail.input plugin:
[[inputs.tail]]
files = ["Prueba.txt"]
from_beginning = true
data_format = "grok"
grok_patterns = ['%{TIMESTAMP_ZIMBRA} %{GREEDYDATA:source} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}']
grok_custom_patterns = '''
TIMESTAMP_ZIMBRA (\w{3} \d{1,2} \d{2}:\d{2}:\d{2})
'''
name_override = "log_frames"
You need to match the input string with regular expressions. For that there are some predefined patterns, such as GREEDYDATA = .*, that you can use to match your input (other examples are NUMBER = (?:%{BASE10NUM}) and BASE16NUM = (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))). You can also define your own patterns in grok_custom_patterns. Take a look at this website with some patterns: https://streamsets.com/documentation/datacollector/latest/help/datacollector/UserGuide/Apx-GrokPatterns/GrokPatterns_title.html
In this case I defined a TIMESTAMP_ZIMBRA pattern that matches inputs such as Oct 3 00:52:32 and Oct 03 00:52:33 alike.
Here is the collected metric by Prometheus:
# HELP log_frames_delay Telegraf collected metric
# TYPE log_frames_delay untyped
log_frames_delay{delays="0.09/0.01/0.58/0.19",dsn="2.0.0",host="localhost.localdomain",message="BD5BAE9999:",path="Prueba.txt",program="postfix/smtp",relay="mo94.cloud.mail.com[92.97.907.14]:25",source="webmail",status="sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)",to="user#mail.com"} 0.73
P.S.: Ensure that Telegraf has access to the log files.
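As a side note, a quick way to check the parsing before sending anything to Grafana is to run Telegraf once in test mode against the config (assuming the [[inputs.tail]] block above lives in telegraf.conf and telegraf is on the PATH); it prints the parsed metrics to stdout instead of writing to an output plugin:
telegraf --config telegraf.conf --input-filter tail --test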
I need to extract alerts from a REST API and send them to a file with PowerShell.
I was able to extract the alert output by looping over the XML:
foreach ($c in $temp){$c.timeOfAlertFormatted,$c.parent,$c.child,$c.category,$c.servicePlanDisplayName,$c.message}
Thu 09/19/2019 12:00:19 AM
IL
Servername
Phase Failure
Gold
One or more source luns do not have a remote target specified/mapped.
Wed 09/18/2019 02:18:25 PM
IL
Server2
Phase Failure
Gold
One or more source luns do not have a remote target specified/mapped
I am new to PowerShell. What I want to achieve is to add a descriptive string to each field, i.e.:
Time: Thu 09/19/2019 12:00:19 AM
Country: IL
Server: servername
etc., for the rest of the fields.
I tried:
foreach ($c in $temp){Write-Host "Time : $($c.timeOfAlertFormatted)"}
Time :
Time :
Time :
Time :
Time :
Time :
Time :
Time :
Time :
Time :
Time :
Time :
Time : Thu 09/19/2019 12:00:19 AM
It prints empty "Time" fields.
Here is an example of the XML:
It looks like you have already loaded the xml and filtered out the properties you need in a variable $temp.
I think what you want can be achieved by doing:
$temp | Select-Object @{Name = 'Time'; Expression = {$_.timeOfAlertFormatted}},
@{Name = 'Country'; Expression = {$_.parent}},
@{Name = 'ServerName'; Expression = {$_.child}},
Category, ServicePlanDisplayName, Message
The above should output something like
Time : Thu 09/19/2019 12:00:19 AM
Country : IL
ServerName : Servername
Category : Phase Failure
ServicePlanDisplayName : Gold
Message : One or more source luns do not have a remote target specified/mapped.
Time : Wed 09/18/2019 02:18:25 PM
Country : IL
ServerName : Server2
Category : Phase Failure
ServicePlanDisplayName : Gold
Message : One or more source luns do not have a remote target specified/mapped.
If your variable $temp is NOT what I suspect it to be, please edit your question and show us the XML as well as the code you use to extract the alerts from it.
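And since the original goal was to send the alerts to a file, the same Select-Object output can simply be piped on to Out-File; a minimal sketch (the file name alerts.txt is only an example):
$temp | Select-Object @{Name = 'Time'; Expression = {$_.timeOfAlertFormatted}},
        @{Name = 'Country'; Expression = {$_.parent}},
        @{Name = 'ServerName'; Expression = {$_.child}},
        Category, ServicePlanDisplayName, Message |
    Format-List |
    Out-File -FilePath 'alerts.txt'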
I have TORQUE installed on Ubuntu 16.04, and I am having trouble because my jobs hang. I have a test script test.pbs:
#PBS -N test
#PBS -l nodes=1:ppn=1
#PBS -l walltime=0:01:00
cd $PBS_O_WORKDIR
touch done.txt
echo "done"
And I run it with
qsub test.pbs
The job writes done.txt and echoes "done" just fine, but the job hangs in the C state.
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
46.localhost test wlandau 00:00:00 C batch
Edit: some diagnostic info on another job from qstat -f 55
qstat -f 55
Job Id: 55.localhost
Job_Name = test
Job_Owner = wlandau@localhost
resources_used.cput = 00:00:00
resources_used.mem = 0kb
resources_used.vmem = 0kb
resources_used.walltime = 00:00:00
job_state = C
queue = batch
server = haggunenon
Checkpoint = u
ctime = Mon Oct 30 07:35:00 2017
Error_Path = localhost:/home/wlandau/Desktop/test.e55
exec_host = localhost/2
Hold_Types = n
Join_Path = n
Keep_Files = n
Mail_Points = a
mtime = Mon Oct 30 07:35:00 2017
Output_Path = localhost:/home/wlandau/Desktop/test.o55
Priority = 0
qtime = Mon Oct 30 07:35:00 2017
Rerunable = True
Resource_List.ncpus = 1
Resource_List.nodect = 1
Resource_List.nodes = 1:ppn=1
Resource_List.walltime = 00:01:00
session_id = 5115
Variable_List = PBS_O_QUEUE=batch,PBS_O_HOST=localhost,
PBS_O_HOME=/home/wlandau,PBS_O_LANG=en_US.UTF-8,PBS_O_LOGNAME=wlandau,
PBS_O_PATH=/home/wlandau/bin:/home/wlandau/.local/bin:/usr/local/sbin
:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/ga
mes:/snap/bin,PBS_O_SHELL=/bin/bash,PBS_SERVER=localhost,
PBS_O_WORKDIR=/home/wlandau/Desktop
comment = Job started on Mon Oct 30 at 07:35
etime = Mon Oct 30 07:35:00 2017
exit_status = 0
submit_args = test.pbs
start_time = Mon Oct 30 07:35:00 2017
Walltime.Remaining = 60
start_count = 1
fault_tolerant = False
comp_time = Mon Oct 30 07:35:00 2017
And a similar tracejob -n2 62:
/var/spool/torque/server_priv/accounting/20171029: No matching job records located
/var/spool/torque/server_logs/20171029: No matching job records located
/var/spool/torque/mom_logs/20171029: No matching job records located
/var/spool/torque/sched_logs/20171029: No matching job records located
Job: 62.localhost
10/30/2017 17:20:25 S enqueuing into batch, state 1 hop 1
10/30/2017 17:20:25 S Job Queued at request of wlandau@localhost, owner =
wlandau@localhost, job name = jobe945093c2e029c5de5619d6bf7922071,
queue = batch
10/30/2017 17:20:25 S Job Modified at request of Scheduler@Haggunenon
10/30/2017 17:20:25 S Exit_status=0 resources_used.cput=00:00:00 resources_used.mem=0kb
resources_used.vmem=0kb resources_used.walltime=00:00:00
10/30/2017 17:20:25 L Job Run
10/30/2017 17:20:25 S Job Run at request of Scheduler@Haggunenon
10/30/2017 17:20:25 S Not sending email: User does not want mail of this type.
10/30/2017 17:20:25 S Not sending email: User does not want mail of this type.
10/30/2017 17:20:25 M job was terminated
10/30/2017 17:20:25 M obit sent to server
10/30/2017 17:20:25 A queue=batch
10/30/2017 17:20:25 M scan_for_terminated: job 62.localhost task 1 terminated, sid=17917
10/30/2017 17:20:25 A user=wlandau group=wlandau
jobname=jobe945093c2e029c5de5619d6bf7922071 queue=batch
ctime=1509398425 qtime=1509398425 etime=1509398425 start=1509398425
owner=wlandau@localhost exec_host=localhost/0 Resource_List.ncpus=1
Resource_List.neednodes=1 Resource_List.nodect=1
Resource_List.nodes=1 Resource_List.walltime=01:00:00
10/30/2017 17:20:25 A user=wlandau group=wlandau
jobname=jobe945093c2e029c5de5619d6bf7922071 queue=batch
ctime=1509398425 qtime=1509398425 etime=1509398425 start=1509398425
owner=wlandau@localhost exec_host=localhost/0 Resource_List.ncpus=1
Resource_List.neednodes=1 Resource_List.nodect=1
Resource_List.nodes=1 Resource_List.walltime=01:00:00 session=17917
end=1509398425 Exit_status=0 resources_used.cput=00:00:00
resources_used.mem=0kb resources_used.vmem=0kb
resources_used.walltime=00:00:00
EDIT: jobs now hanging in the E state
After some tinkering, I am now using these settings. I have moved on to this tiny pipeline workflow, where some TORQUE jobs wait for other TORQUE jobs to finish. Unfortunately, all the jobs hang in the E state, and any jobs beyond the first 4 just stay queued. To keep things from hanging indefinitely, I have to sudo qdel -p each one, which I think is causing real problems with the project's file system, as well as being an inconvenience.
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
113.localhost ...b73ec2cda6dca wlandau 00:00:00 E batch
114.localhost ...b6c8e6da05983 wlandau 00:00:00 E batch
115.localhost ...9123b8e20850b wlandau 00:00:00 E batch
116.localhost ...e6d49a3d7d822 wlandau 00:00:00 E batch
117.localhost ...8c3f6cb68927b wlandau 0 Q batch
118.localhost ...40b1d0cab6400 wlandau 0 Q batch
qmgr -c "list server" shows
Server haggunenon
server_state = Active
scheduling = True
max_running = 300
total_jobs = 5
state_count = Transit:0 Queued:1 Held:0 Waiting:0 Running:1 Exiting:3
acl_hosts = localhost
managers = root@localhost
operators = root@localhost
default_queue = batch
log_events = 511
mail_from = adm
query_other_jobs = True
resources_assigned.ncpus = 4
resources_assigned.nodect = 4
scheduler_iteration = 600
node_check_rate = 150
tcp_timeout = 6
mom_job_sync = True
pbs_version = 2.4.16
keep_completed = 0
submit_hosts = SERVER
allow_node_submit = True
next_job_number = 119
net_counter = 118 94 93
And qmgr -c "list queue batch"
Queue batch
queue_type = Execution
total_jobs = 5
state_count = Transit:0 Queued:1 Held:0 Waiting:0 Running:0 Exiting:4
max_running = 300
resources_max.ncpus = 4
resources_max.nodes = 2
resources_min.ncpus = 1
resources_default.ncpus = 1
resources_default.nodect = 1
resources_default.nodes = 1
resources_default.walltime = 01:00:00
mtime = Wed Nov 1 07:40:45 2017
resources_assigned.ncpus = 4
resources_assigned.nodect = 4
keep_completed = 0
enabled = True
started = True
The C state means the job has completed and its status is kept in the system. Usually the status is kept after job completion for a period of time specified by the keep_completed parameter. However, certain types of failure may result in the job being kept in this state to provide the information necessary to examine the cause of the failure.
Check the output of qstat -f 46 to see if there is anything indicating an error.
To tune the keep_completed parameter you can execute the following command to check the value of this parameter on your system.
qmgr -c "print queue batch keep_completed"
If you have administrative privileges on the Torque server you could also change this value with
qmgr -c "set queue batch keep_completed=120"
to keep jobs in the completed state for 2 minutes (120 seconds) after completion.
In general having keep_completed set is a useful feature. Advanced schedulers use the information on completed jobs to schedule around failures.
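For reference, the qmgr listings above show keep_completed = 0 at both the server and the queue level, so as a sketch (with the same administrative privileges, values in seconds) both could be raised together:
qmgr -c "set server keep_completed = 120"
qmgr -c "set queue batch keep_completed = 120"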
I've got an app that has lots of sensor_events being saved; I'd like to get results for a given date and then map those into 15-minute chunks. This is not the same as doing a GROUP BY in Postgres, as that would only return something averaged, and I need the specific events.
What I'm thinking is: given a day, I get the beginning_of_day and split it up into 15-minute chunks as keys to a hash of arrays, i.e.:
def return_time_chunk_hash
  t = Date.today
  st = t.beginning_of_day
  times = Hash.new
  while st < t.end_of_day
    times[st.to_formatted_s(:time)] = Array.new
    st = st + 15.minutes
  end
  return times
end
And from that I would compare each sensor_event's created_at time, find which bucket it belongs to, and drop it in there. Once I've got it organized that way, I know whether or not a chunk has any events (.count) and, if so, can do all the data manipulation on those specific events.
Does this seem nutty? Is there a simpler way I'm not seeing?
Update:
I liked the way jgraft was thinking, but I thought it wouldn't work as I'd have to do multiple queries based upon the group-column flag. Then I thought of Enumerable's group_by, so I tried something like this in the actual SensorEvent model:
def chunk
  Time.at((self.created_at.to_f / 15.minutes).floor * 15.minutes).to_formatted_s(:time)
end
This allows me to get all the sensor events I need as usual (i.e. @se = SensorEvent.where(sensor_id: 10)), but then I can do @se.group_by(&:chunk) and get those individual events grouped into a hash, i.e.:
{"13:30"=>
[#<SensorEvent:0x007ffac0128438
id: 25006,
force: 0.0,
xaccel: 502.0,
yaccel: 495.0,
zaccel: 616.0,
battery: 0.0,
position: 25.0,
created_at: Thu, 18 Jun 2015 13:33:37 EDT -04:00,
updated_at: Thu, 18 Jun 2015 15:51:32 EDT -04:00,
deviceID: "D36330135FE3D36",
location: 3,
sensor_id: 10>,
#<SensorEvent:0x007ffac0128140
id: 25007,
force: 0.0,
xaccel: 502.0,
yaccel: 495.0,
zaccel: 616.0,
battery: 0.0,
position: 27.0,
created_at: Thu, 18 Jun 2015 13:39:46 EDT -04:00,
updated_at: Thu, 18 Jun 2015 15:51:32 EDT -04:00,
deviceID: "D36330135FE3D36",
location: 3,
sensor_id: 10>,
.........
The trouble, of course, is that not every chunk of time gets created, since there may have been no event to spawn it; also, being a hash, it isn't sorted in any way:
res.keys
=> ["13:30",
"13:45",
"14:00",
"13:00",
"15:45",
"16:00",
"16:15",
"16:45",
"17:00",
"17:15",
"17:30",
"14:15",
"14:30",
I have to do calculations on the chunks of events; I might keep a master TIMECHUNKS array to compare / look up against in order...
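One way to handle both problems (chunks that never appear and the unordered keys) could be to merge the grouped events into the pre-built bucket hash; a minimal sketch, assuming the return_time_chunk_hash and chunk methods above:
all_chunks = return_time_chunk_hash                  # "00:00" => [], "00:15" => [], ... in time order
events = SensorEvent.where(sensor_id: 10)            # sensor_id 10 is just the example from above
filled = all_chunks.merge(events.group_by(&:chunk))  # buckets with events get filled, the rest stay empty
filled.each { |time, evts| puts "#{time}: #{evts.count} events" }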
Why not just add a column to the sensor_events table that specifies which block it belongs to? That would basically give you an array, as you could do a query like:
SensorEvent.where(date: Date.today, block: 1)
and return a relation of the data in an array-esque format.
You could just add an after_create callback to the SensorEvents model that sets the block column.
class SensorEvent < ActiveRecord::Base
  after_create :set_block

  private

  # 15-minute block index within the day: four blocks per hour plus the
  # (rounded-up) quarter-hour for the minute component
  def set_block
    value = ((4 * created_at.hour) + (created_at.min.to_f / 15).ceil).to_i
    self.update_columns(block: value)
  end
end