Adding messages to grib2 files

Context
I would like to add additional fields to a grib2 file, computed as a function of existing fields.
For example, I would like to add a wind-chill message, given by the formula:
35.74 + 0.6215 * T - 35.75 * V^0.16 + 0.4275 * T * V^0.16
where T and V are the temperature and wind speed fields that appear in the original grib2 file.
Question
I have searched for documentation on the subject, but failed to find any reference :(
Is there an easy way to do that (preferably using bash; other interfaces are also relevant...)?
Thanks :)

Multi-field messages are generally considered a bad idea. Regardless, to add or edit a message, you can use grib_api (now ecCodes) or wgrib2.
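For example, with wgrib2 this could be done in one pass (a sketch, not tested: the level strings, the message order, and the availability of exp/ln in wgrib2's RPN calculator are assumptions here; the NWS wind-chill formula expects T in deg F and V in mph, hence the unit conversions from K and m/s):

# register 0 <- 2 m temperature converted K -> deg F
# register 1 <- (10 m wind speed converted m/s -> mph) ^ 0.16, via exp(0.16*ln V)
# the final RPN program evaluates 35.74 + 0.6215*T - 35.75*P + 0.4275*T*P with P = V^0.16
# (assumes the TMP message precedes the WIND message in the file; -set_var WCF
#  relabels the output as wind chill factor -- adjust to whatever your tables support)
wgrib2 in.grb2 \
  -if ':TMP:2 m above ground:' -rpn '273.15:-:1.8:*:32:+:sto_0' -fi \
  -if ':WIND:10 m above ground:' \
    -rpn '2.23694:*:ln:0.16:*:exp:sto_1:rcl_0:0.6215:*:35.74:+:rcl_1:35.75:*:-:rcl_0:rcl_1:*:0.4275:*:+' \
    -set_var WCF -grib_out windchill.grb2 \
  -fi
# grib2 files are just concatenated messages, so append the new field:
cat in.grb2 windchill.grb2 > out.grb2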

Related

How to get data (PostgreSQL) in a DataCamp workspace

I want to get the complete data, but it says "truncated to 2222". My question is: why does that happen even though I don't use any truncating statement? My code is like below:
SELECT *
FROM cinema.films;
and I also ran another, more specific query, with the goal that all rows can be downloaded:
SELECT *
FROM cinema.films
LIMIT 4968;
The resulting table looks more or less like this; in the lower right corner it says "truncated to 2222". How do I avoid the truncation so that I can get the data as a whole?
I've searched on Google and on the Stack Overflow forums. I hope that someone can help solve my problem.

Parsing XML and retrieving attributes from (nested?) elements

I am trying to get specific data from an XML file, namely X, Y coordinates that appear, to my beginner's eyes, to be attributes of an element called "Point" in my file. I cannot get to that data with anything other than a sledgehammer approach and would gratefully accept some help.
I have used the following successfully:
for Shooter in root.iter('Shooter'):
    print(Shooter.attrib)
But if I try the same with "Point" (or "Points") there is no output. I cannot even see "Point" when I use the following:
for child in root:
    print(child.tag, child.attrib)
So: the sledgehammer
print([elem.attrib for elem in root.iter()])
Which gives me the attributes for every element. This file is a single collection of data and could contain hundreds of data points, so I would rather be a little more subtle and home in on exactly what I need.
My XML file
https://pastebin.com/abQT3t9k
UPDATE: Thanks for the answers so far. I tried the solution posted and ended up with 7000 lines of output, which wasn't quite what I was after. I should have explained in more detail. I also tried (as suggested):
def find_rec(node, element, result):
    for item in node.findall(element):
        result.append(item)
        find_rec(item, element, result)
    return result
print(find_rec(ET.parse(filepath_1), 'Shooter', [])) #Returns <Element 'Shooter' at 0x125b0f958>
print(find_rec(ET.parse(filepath_1), 'Point', [])) #Returns None
I admit I have never worked with XML files before, and I am new to Python (but enjoying it). I wanted to get the solution myself but I have spent days getting nowhere.
I perhaps should have just asked from the beginning how to extract the XY data for each ShotNbr (in this file there is just one), but I didn't want code written for me.
I've managed to get the XY from this file, but my code will never work if there is more than one shot, or if I want to look specifically at, say, shot number 20.
How can I find shot number 2 (ShotNbr="2") and extract only its XY data points?
Assuming that you are using xml.etree.ElementTree:
you are only looking at the direct children of root. You need to recurse into the tree to access elements lower in the hierarchy.
This seems to be the same problem as ElementTree - findall to recursively select all child elements
which has an excellent answer that I am not going to plagiarize.
Just apply it.
Alternatively,
import xml.etree.ElementTree as ET

# ET.parse returns an ElementTree; findall('.//Point') searches the whole tree
tree = ET.parse("file.xml")
print(tree.findall('.//Point'))
should work.
See: https://docs.python.org/2/library/xml.etree.elementtree.html#supported-xpath-syntax
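For the follow-up about shot number 2: ElementTree's limited XPath also supports attribute predicates, so something like the sketch below should get you started (the Shot/ShotNbr/Point/X/Y names are guesses from your description, since the pastebin file isn't reproduced here):

import xml.etree.ElementTree as ET

tree = ET.parse("file.xml")
# find the shot with ShotNbr="2" anywhere in the tree, then collect the
# X/Y attributes of every Point element beneath it
for shot in tree.iterfind('.//Shot[@ShotNbr="2"]'):
    points = [(float(p.get('X')), float(p.get('Y'))) for p in shot.iter('Point')]
    print(points)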

KDB: Trying to read multiple CSV files at a location

I am trying to run the code below to read all CSV files available at C:/q/BitCoin/Input. I am getting an error and don't know what the solution is. The CSV files are standard ones with three fields.
raze{[x]
inputdir:`:C:/q/BitCoin/Input;
filelist1:key inputdir;
filelist2:` sv' inputdir,'filelist1;
filelist3:string filelist2;
r:flip`Time`Qty`Price!("ZFF";",")0:x;
select from r
} each `$filelist3
Hard-coding the file names and running the code below works, but I don't want to hard-code:
raze {[x]
r:flip`Time`Qty`Price!("ZFF";",")0:x;
select from r
} each (`$"C:/q/BitCoin/Input/bitbayPLN.csv";`$"C:/q/BitCoin/Input/anxhkAUD.csv")
I am getting the error below:
An error occurred during execution of the query.
The server sent the response:
filelist3
Can someone help with this issue?
The reason that you are receiving the error 'filelist3 is that filelist3 is defined inside the lambda, so outside of the lambda it is not recognised or defined. There are various ways to overcome this, as outlined below.
Firstly, you can essentially take all of the work done inside the lambda and put it on the right side of the each.
raze{[x] r:flip`Time`Qty`Price!("ZFF";",")0:x; select from r
} each `$(string (` sv' `:C:/q/BitCoin/Input,'(key `:C:/q/BitCoin/Input)))
Or if you wanted to you could create a function which will generate filelist3 for you and use that on the right hand side of the each also.
f:{[inputdir] filelist1:key inputdir; filelist2:` sv' inputdir,'filelist1; filelist3:string filelist2; filelist3}
raze{[x] r:flip`Time`Qty`Price!("ZFF";",")0:x; select from r
} each `$f[`:C:/q/BitCoin/Input]
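As a side note, the string/`$ round trip is not strictly needed, since 0: accepts file symbols directly. A more compact variant (a sketch, assuming the directory contains only CSV files with this layout):

readcsv:{flip `Time`Qty`Price!("ZFF";",")0:x}
raze readcsv each ` sv' `:C:/q/BitCoin/Input,'key `:C:/q/BitCoin/Input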
I hope this helps.
Many thanks,
Joel

Grok filter for a time counter HH:MM

I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow the following pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier and will be used for grouping. I've called the field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok by simply applying %{GREEDYDATA:LogInfo}.
I notice that the number of minutes can be far higher than the standard 60 minutes within an hour, so I may be barking up the wrong tree trying to parse these with date patterns such as TIMESTAMP_ISO8601, but then, I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and was, as mentioned, able (by cutting away the square-bracket parts) to parse the log info text with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS! I also found %{NUMBER:duration}, but as far as I could tell it can only parse timestamps with a dot, not a colon.
A grok regex expression can help you solve the problem.
But first I want to make sure I understand: do you mean that [325:51] [326:49] [359:57] are the three components that you want to fetch, so that the result would look like:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I have got the point right, you can use one of the following suggestions:
define your own custom pattern file and add the pattern there (see the sketch below), or
just use the expression in the filter part of your logstash conf file.
Hope it helps.
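To illustrate the first suggestion, a custom pattern file plus filter might look like this (a sketch; MINSEC is an invented pattern name and the paths are placeholders):

# contents of a custom pattern file, e.g. ./patterns/extra
MINSEC %{NUMBER}:%{NUMBER}

# filter part of the logstash conf file
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{MINSEC:TimeSpent}\] \[%{MINSEC:TimeStarted}\] \[%{MINSEC:TimeSinceDown}\] %{GREEDYDATA:LogInfo}" }
  }
}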
Ah, there was a space... Actually, I was misleading myself and everybody in my question, as it was not actually that log line that was causing problems. I just took the first one, not realizing where the problem really was; the one causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts where there are two spaces, so the way I managed to solve this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but this I can also handle in a later stage.
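One way to do that later might be a ruby filter along these lines (a sketch, untested; assumes a recent Logstash with the event get/set API, and the field names follow the grok pattern above):

ruby {
  # grok captures are strings, so convert to integers before combining
  code => "event.set('TimeSpentTotalSeconds', event.get('TimeSpentMinutes').to_i * 60 + event.get('TimeSpentSeconds').to_i)"
}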
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)

IN selector with asterisk * not working in report selection

What is the proper way to search a table for every record that starts in a similar way? I have tried:
"THESE. WORDS" IN {example_one.job_title} and {example_two.status} = "A"
But I need all combinations, including "THESE. WORDS*". Adding the asterisk doesn't work, I guess because of how IN works.
To summarize the information in the comments:
to limit job_title by the list of values in "THESE. WORDS", you need your field on the left-hand side and the values on the right.
You may want {example_one.job_title} LIKE 'keyword*'.
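Put together with the status test, the record-selection formula might look like this (a sketch; in Crystal's LIKE, * matches any run of characters):

{example_one.job_title} LIKE "THESE. WORDS*" AND {example_two.status} = "A"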