Power BI: Combining two existing COUNTIF-style statements into one COUNTIFS-style measure

I have written two different COUNTIF-style statements in Power BI, and both work individually. However, I am having trouble combining them into one working COUNTIFS-style statement. Everything I've tried so far just seems to break the syntax. Here are the existing statements I've written:
_FI-Comp = COALESCE(CALCULATE(COUNT('All Surveys'[Further Investigation Status]),FILTER(ALL('All Surveys'[Further Investigation Status]),SEARCH("complete",'All Surveys'[Further Investigation Status],1,0))),0)
_FIA-Outage-Ladders = CALCULATE(COUNT('All Surveys'[Asset Type]),FILTER('All Surveys','All Surveys'[Asset Type] ="Ladder"))
I know there should probably be an && somewhere in there to merge them, but for the life of me I can't work out where. Thanks in advance!
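One way to merge them, assuming the goal is to count rows matching both conditions, is a single FILTER over the table with the two tests joined by && (a sketch reusing the names above; the measure name is a placeholder, SEARCH is compared to 0 to keep the condition explicitly boolean, and note it filters 'All Surveys' directly rather than ALL(...) as in the first measure):
_FI-Comp-Ladders = COALESCE(CALCULATE(COUNT('All Surveys'[Further Investigation Status]),FILTER('All Surveys',SEARCH("complete",'All Surveys'[Further Investigation Status],1,0) > 0 && 'All Surveys'[Asset Type] = "Ladder")),0)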

KDB: Trying to read multiple CSV files at a location

I am trying to run the code below to read all CSV files available at C:/q/BitCoin/Input. I am getting an error and don't know what the solution is. The CSV files are standard ones with three fields.
raze{[x]
inputdir:`:C:/q/BitCoin/Input;
filelist1:key inputdir;
filelist2:` sv' inputdir,'filelist1;
filelist3:string filelist2;
r:flip`Time`Qty`Price!("ZFF";",")0:x;
select from r
} each `$filelist3
Hard-coding the file names and running the code below works, but I don't want to hard-code them:
raze {[x]
r:flip`Time`Qty`Price!("ZFF";",")0:x;
select from r
} each (`$"C:/q/BitCoin/Input/bitbayPLN.csv";`$"C:/q/BitCoin/Input/anxhkAUD.csv")
I am getting the error below:
An error occurred during execution of the query.
The server sent the response:
filelist3
Can someone help with this issue?
The reason you are receiving the error 'filelist3 is that filelist3 is defined inside the lambda, and outside of the lambda it is not recognised or defined. There are various ways to overcome this, as outlined below.
Firstly, you can take all of the definition work done inside the lambda and put it on the right-hand side of the each:
raze{[x] r:flip`Time`Qty`Price!("ZFF";",")0:x; select from r
} each `$(string (` sv' `:C:/q/BitCoin/Input,'(key `:C:/q/BitCoin/Input)))
Alternatively, you could create a function that generates filelist3 for you and use that on the right-hand side of the each:
f:{[inputdir] filelist1:key inputdir; filelist2:` sv' inputdir,'filelist1; filelist3:string filelist2; filelist3}
raze{[x] r:flip`Time`Qty`Price!("ZFF";",")0:x; select from r
} each `$f[`:C:/q/BitCoin/Input]
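A third option, if you specifically want a variable named filelist3 to exist, is to hoist the definitions out of the lambda into the global scope, so they are defined by the time the each is evaluated (a sketch reusing the names from the question):
inputdir:`:C:/q/BitCoin/Input;
filelist1:key inputdir;
filelist2:` sv' inputdir,'filelist1;
filelist3:string filelist2;
raze {[x] r:flip`Time`Qty`Price!("ZFF";",")0:x; select from r} each `$filelist3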
I hope this helps.
Many thanks,
Joel

Grok filter for a time counter HH:MM

I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow the following pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier, and will be used for grouping. I've called the field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok with simply applying %{GREEDYDATA:LogInfo}.
I notice that the number of minutes can be far higher than the standard 60 minutes within an hour, so I may be barking up the wrong tree trying to parse it with date patterns such as TIMESTAMP_ISO8601; but then, I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and was, as mentioned, able (by cutting away the square-bracket parts) to parse the log info text with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS! I also found %{NUMBER:duration}, but as far as I could tell it only parses durations with a dot, not a colon.
A grok regex expression can help you solve the problem.
But first I want to make sure: do you mean that [325:51] [326:49] [359:57] are the three components you want to fetch? So that it returns a result like:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I have understood correctly, you can use one of the following suggestions:
define your own custom pattern file and add the pattern to it, or
just use the expression directly in the filter part of your Logstash conf file, as sketched below.
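For example (a sketch: MINSEC is a hypothetical custom pattern capturing a whole MM:SS token, defined inline via the grok filter's pattern_definitions option rather than in a separate pattern file):
filter {
  grok {
    pattern_definitions => { "MINSEC" => "%{NUMBER}:%{NUMBER}" }
    match => { "message" => "%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{MINSEC:TimeSpent}\] \[%{MINSEC:TimeStarted}\] \[%{MINSEC:TimeSinceDown}\] %{GREEDYDATA:LogInfo}" }
  }
}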
Hope this helps.
Ah, there was a space. Actually, I was misleading myself and everybody in my question, as it was not that log line that was causing problems. I just took the first one, not realizing where the problem really was; the line causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts with two spaces, so the way I solved this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but this I can also handle in a later stage.
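One way to do that merging in a later stage would be, for example, a small Logstash ruby filter folding each minutes/seconds pair into total seconds (a sketch using the Logstash 5+ event API; the target field name is made up):
ruby {
  code => "event.set('TimeSpentTotalSeconds', event.get('TimeSpentMinutes').to_i * 60 + event.get('TimeSpentSeconds').to_i)"
}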
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)

How to use a FOR EACH loop in Progress?

Basically I'm trying to do a simple join. I'm a beginner in Progress, and even though I keep reading the same things... my problem is still unresolved! :'(
I'm using unixODBC to communicate with my database, and it works like a charm when I'm using a simple command like: SELECT * FROM PUB."Art"
I understood I have to do something like this to join two tables:
FOR EACH PUB."Art" WHERE (PUB."Art".IdArt = 16969) ,
EACH PUB."ArtDet" WHERE (PUB."ArtDet".IdArt = PUB."Art".IdArt)
END
But this only returns [ISQL]ERROR: Could not SQLPrepare.
I then tried to simplify things with:
for each PUB."Art": display PUB."Art".IdArt end.
I tried putting a colon (or not) after the FOR EACH loop, ending with a period / comma, etc., but apparently I never hit the right syntax... or I'm missing something needed to execute this command!
Can anyone advise me?
Thanks a lot!
You appear to be mixing SQL and 4GL syntax.
"FOR EACH" is 4GL. The SQL equivalent is "SELECT".
(If you are using the 4GL you do not need the "PUB" prefix, and quoting table and field names will not work.)
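For reference, run from a Progress session (not over ODBC), a 4GL version of your join would look something like this sketch (table and field names taken from your question, without the schema prefix; NO-LOCK is idiomatic for reads):
FOR EACH Art NO-LOCK WHERE Art.IdArt = 16969,
    EACH ArtDet NO-LOCK WHERE ArtDet.IdArt = Art.IdArt:
    DISPLAY Art.IdArt ArtDet.IdArt.
END.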
To do a join with SQL (or the 4GL) use a "," between the table names. For SQL your syntax would look something like:
SELECT * from PUB."Art", PUB."ArtDet"
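With the filters from your question added, that becomes something like:
SELECT *
FROM PUB."Art", PUB."ArtDet"
WHERE PUB."Art".IdArt = 16969
AND PUB."ArtDet".IdArt = PUB."Art".IdArt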
Gory details regarding WHERE clauses, SQL INNER & OUTER joins etc. can be found in the online documentation:
https://community.progress.com/community_groups/openedge_general/w/openedgegeneral/1329.openedge-product-documentation-overview
You will want to navigate to your specific release and then find the "SQL" guide.

Creating a Drools decision table

So I wanted to try my hand at creating a decision table from a rule that I've already made in a .drl file, then convert it back to a .drl. I didn't see any nifty conversions from .drl to xls/csv, nor was the JBoss documentation comprehensive enough. It could be that the rule is too complicated for a simple decision table, but I was hoping this community could help me out.
Here is the .drl:
rule "Patient: Compute BMI"
when
$basic : BasicInfoModel(
notPresent('bmi'),
isPresent('height'),
isPresent('weight'),
$height : value('height', 0.0),
$weight : value('weight', 0.0))
then
modify($basic){
put('bmi', $weight / Math.pow($height,2))
};
end
This rule basically looks at an object's weight and height fields and then computes the BMI. I've tried taking what I have and putting it into the decision table format, but with little success. Nothing really parses (I'm just using droolsSpreadSheet.compile and printing out what I get, which is a bunch of empty rules). Any help would be appreciated!
Update:
This is what my excel sheet looks like
This is what my rule parses out to:
package DROOLS;
//generated from Decision Table
import basic.BasicInfoModel;
// rule values at A11, header at A6
rule "Computing BMI"
when
$patient:BasicInfoModel(notPresent('bmi'), isPresent('height'),isPresent('weight'), $height:value('height', 0.0), $weight:value('weight',0.0) == "20,4")
then
end
Update #2: I think I figured out my parse issues. Here is my new and improved spreadsheet. Basically, I found out that I cannot leave the Computing BMI data cell blank; there must be something in it for the rule to parse (which isn't entirely clear in the docs I read, though that could be because my experience with decision tables is, putting it lightly, novice).
So now the compile looks more like what I want:
// rule values at A11, header at A6
rule "Computing BMI"
when
$patient:BasicInfoModel(notPresent('bmi'), isPresent('height'), isPresent('weight') == "TRUE")
$weight:value('weight',0.0), $height:value('height', 0.0)
then
modify($patient){put('bmi', $weight / Math.pow($height,2))};
end
Can someone confirm that I have to have real, specific data in the rules in order for them to parse? Can I just use injection elsewhere? Perhaps I should ask a new question on this.
So the answer is yes, you do need parameters, but what I didn't know was that the data doesn't have to be hardcoded like in every example I've come across; thanks to stumbling onto this answer. So now the table looks like this. I hope this helps others who've come across this issue. Also, my recommendation is to just write your Drools rules in a .drl rather than go through the spreadsheet, unless you have a bunch of rules that are pretty much copy-and-paste replicas. That's my two cents anyway.
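For reference, the feature that makes this possible is the $param placeholder: the code-snippet row of a CONDITION or ACTION column can reference each data cell's value instead of hardcoding it. A rough text rendering of how such columns can be laid out (a sketch, not the actual sheet from the post; column titles are made up):
Column A (CONDITION):
  CONDITION
  $basic : BasicInfoModel
  notPresent($param), isPresent('height'), isPresent('weight'), $height : value('height', 0.0), $weight : value('weight', 0.0)
  Missing field          <- column title row
  bmi                    <- data cell; substituted for $param
Column B (ACTION):
  ACTION
  modify($basic){put('bmi', $weight / Math.pow($height, 2))};
  Compute BMI            <- column title row
  X                      <- any non-empty marker includes the action for this rule row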

DataFrame keying using the pandas groupby method

I'm new to pandas and trying to learn how to work with it. I'm having a problem when trying to use, on my data, an example I saw in one of Wes's videos and notebooks. I have a CSV file that looks like this:
filePath,vp,score
E:\Audio\7168965711_5601_4.wav,Cust_9709495726,-2
E:\Audio\7168965711_5601_4.wav,Cust_9708568031,-80
E:\Audio\7168965711_5601_4.wav,Cust_9702445777,-2
E:\Audio\7168965711_5601_4.wav,Cust_7023544759,-35
E:\Audio\7168965711_5601_4.wav,Cust_9702229339,-77
E:\Audio\7168965711_5601_4.wav,Cust_9513243289,25
E:\Audio\7168965711_5601_4.wav,Cust_2102513187,18
E:\Audio\7168965711_5601_4.wav,Cust_6625625104,-56
E:\Audio\7168965711_5601_4.wav,Cust_6073165338,-40
E:\Audio\7168965711_5601_4.wav,Cust_5105831247,-30
E:\Audio\7168965711_5601_4.wav,Cust_9513082770,-55
E:\Audio\7168965711_5601_4.wav,Cust_5753907026,-79
E:\Audio\7168965711_5601_4.wav,Cust_7403410322,11
E:\Audio\7168965711_5601_4.wav,Cust_4062144116,-70
I load it into a DataFrame and then group it by "filePath" and "vp"; the code is:
res = df.groupby(['filePath','vp']).size()
res.index
and the output is:
[E:\Audio\7168965711_5601_4.wav Cust_2102513187,
Cust_4062144116, Cust_5105831247,
Cust_5753907026, Cust_6073165338,
Cust_6625625104, Cust_7023544759,
Cust_7403410322, Cust_9513082770,
Cust_9513243289, Cust_9702229339,
Cust_9702445777, Cust_9708568031,
Cust_9709495726]
Now I'm trying to access the index like a dict, as I saw in examples, but when I'm doing
res['Cust_4062144116']
I get an error:
KeyError: 'Cust_4062144116'
I do succeed in getting a result when I put in the filePath, but as I understand it, and saw in previous examples, I should be able to use the vp keys as well, shouldn't I?
Sorry if it's a trivial one; I just can't understand why it works in one example but not in the other.
Rutger, you are not correct. It is possible to "partially" index a MultiIndex Series; I simply did it the wrong way.
The index's first level is the file name (e.g. E:\Audio\7168965711_5601_4.wav above) and the second level is vp, meaning for each file name I have multiple vps.
Now, this is correct:
res['E:\Audio\7168965711_5601_4.wav']
and will return:
Cust_2102513187 2
Cust_4062144116 8
....
but trying to index by the inner index (the Cust_ indexes) will fail.
You groupby two columns and therefore get a MultiIndex in return. This means you also have to slice using those two columns, not with a single index value.
Your .size() on the groupby object converts it into a Series. If you force it into a DataFrame you can use the .xs method to slice a single level:
res = pd.DataFrame(df.groupby(['filePath','vp']).size())
res.xs('Cust_4062144116', level=1)
That works. If you want to keep it as a Series, boolean indexing can help, something like:
res[res.index.get_level_values(1) == 'Cust_4062144116']
The last option is a bit less readable, but sometimes also more flexible; you could test for multiple values at once, for example:
res[res.index.get_level_values(1).isin(['Cust_4062144116', 'Cust_6073165338'])]
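Putting it together, a minimal self-contained sketch (with made-up file and customer names, purely illustrative):
import pandas as pd

# Toy data mimicking the CSV from the question
df = pd.DataFrame({
    'filePath': ['a.wav', 'a.wav', 'b.wav'],
    'vp': ['Cust_1', 'Cust_2', 'Cust_1'],
    'score': [-2, -80, 25],
})

# Series with a two-level MultiIndex (filePath, vp)
sizes = df.groupby(['filePath', 'vp']).size()

# Slice the inner (vp) level via a DataFrame and .xs:
print(pd.DataFrame(sizes).xs('Cust_1', level=1))

# Or keep the Series and use boolean indexing on the level values:
print(sizes[sizes.index.get_level_values(1) == 'Cust_1'])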