I am using FileMaker Web Publishing Engine version 12.0.2.228 and can't quite get the comparison operators to work.
For example, when I do
fmi/xml/fmresultset.xml?-db=MY_TESTING&-lay=MY_SUBJECTS&name_short=Yr10&-find
I get two results (as expected, because the default FileMaker find is a "begins with" match):
Record 1) name_short = Yr10
Record 2) name_short = Yr10-TO
However, when I execute with an eq comparison, I would expect only record 1 to be returned, yet I get the same two records back:
fmi/xml/fmresultset.xml?-db=MY_TESTING&-lay=MY_SUBJECTS&name_short=Yr10&name_short.op=eq&-find
results in
Record 1) name_short = Yr10
Record 2) name_short = Yr10-TO
Am I missing something?
Try the same search in FileMaker, and you'll see consistent results. The answer is to prefix your search term with "==" for an exact match.
fmi/xml/fmresultset.xml?-db=MY_TESTING&-lay=MY_SUBJECTS&name_short=%3D%3DYr10&-find
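If you build the request URL in code, note that the "==" prefix itself has to be URL-encoded, which is why it appears as %3D%3D above. A minimal C# sketch under that assumption (the host name is a placeholder and authentication is omitted):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class FileMakerExactMatch
{
    static async Task Main()
    {
        // "==" asks FileMaker for an exact field match; EscapeDataString encodes it as %3D%3D.
        string term = Uri.EscapeDataString("==Yr10");
        string url = "http://fms.example.com/fmi/xml/fmresultset.xml"
                   + "?-db=MY_TESTING&-lay=MY_SUBJECTS&name_short=" + term + "&-find";

        using var client = new HttpClient();
        string xml = await client.GetStringAsync(url); // fmresultset XML with the matching record(s)
        Console.WriteLine(xml);
    }
}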
We are evaluating NEsper. Our focus is to monitor data quality in an enterprise context. In an application we are going to log every change to a lot of fields - for example in an "order". So we have fields like:
Consignee name
Consignee street
Orderdate
... and a lot more fields. As you can imagine, the log files are going to grow big.
Because the data is sent by different customers and imported into the application, we want to analyze how many (and which) fields are updated from "no value" to "a value" (just as an example).
I tried to build a test case with just the fields:
order reference
fieldname
fieldvalue
For my test cases I added two statements with context information. The first one should just count the changes in general per order:
epService.EPAdministrator.CreateEPL("create context RefContext partition by Ref from LogEvent");
var userChanges = epService.EPAdministrator.CreateEPL("context RefContext select count(*) as x, context.key1 as Ref from LogEvent");
The second statement should count updates from "no value" to "a value":
epService.EPAdministrator.CreateEPL("create context FieldAndRefContext partition by Ref,Fieldname from LogEvent");
var countOfDataInput = epService.EPAdministrator.CreateEPL("context FieldAndRefContext SELECT context.key1 as Ref, context.key2 as Fieldname,count(*) as x from pattern[every (a=LogEvent(Value = '') -> b=LogEvent(Value != ''))]");
To read the test log file I use the CSVInputAdapter:
CSVInputAdapterSpec csvSpec = new CSVInputAdapterSpec(ais, "LogEvent"); // map the CSV input stream 'ais' to the LogEvent type
csvInputAdapter = new CSVInputAdapter(epService.Container, epService, csvSpec);
csvInputAdapter.Start(); // returns after the whole file has been fed into the engine
I do not want to use the update listener, because I am only interested in the result over all events (probably this is not possible, and that is my mistake).
So after reading the CSV (csvInputAdapter.Start() returns) I read all events, which are stored in the statement's NewEvents stream.
With 10 entries in the CSV file everything works fine. With 1 million lines it takes way too long. I tried it without any EPL statement (so just the CSV import) - it took about 5 seconds. With the first statement (not the complex pattern statement) I always stopped it after 20 minutes - so I am not sure how long it would take.
Then I changed the EPL of the first statement: I introduced a group by instead of the context:
select Ref,count(*) as x from LogEvent group by Ref
Now it is really fast - but I do not have any results in my NewEvents stream after the CSVInputAdapter returns...
My questions:
Is the way I want to use NEsper a supported use case or is this the root cause of my failure?
If this is a valid use case: Where is my mistake? How can I get the results I want in a performant way?
Why are there no NewEvents in my EPL-statement when using "group by" instead of "context"?
To 1): yes.
To 2): this is a valid use case, but your EPL design is probably a little inefficient. You would want to understand how patterns work: they use filter indexes and index entries, which are more expensive to create but are extremely fast at discarding unneeded events.
Read:
http://esper.espertech.com/release-7.1.0/esper-reference/html_single/index.html#processingmodel_indexes_filterindexes and also
http://esper.espertech.com/release-7.1.0/esper-reference/html_single/index.html#pattern-walkthrough
Try the "previous" perhaps. Measure performance for each statement separately.
Also I don't think the CSV adapter is optimized for processing a large file. I think CSV may not stream.
To 3) check your code? Don't use CSV file for large stuff. Make sure a listener is attached.
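On that last point, here is a minimal sketch of attaching a listener to the group-by variant, assuming the NEsper 7.x .NET API where EPStatement exposes an Events handler; epService and csvInputAdapter are the objects from the question:

var stmt = epService.EPAdministrator.CreateEPL(
    "select Ref, count(*) as x from LogEvent group by Ref");

// Collect output as it is published instead of polling the NewEvents stream afterwards.
stmt.Events += (sender, args) =>
{
    if (args.NewEvents == null) return;
    foreach (var row in args.NewEvents)
    {
        Console.WriteLine("{0}: {1}", row.Get("Ref"), row.Get("x"));
    }
};

csvInputAdapter.Start(); // the listener fires while the file is being read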
It seems that there is no standard support for the IN clause in Npgsql. I see posts that recommend using = ANY instead of IN, and this works great as a replacement for a standard IN clause. However, Postgres (pgsql) does not seem to have anything that allows you to do a NOT ANY or != ANY query. It does, however, support NOT IN, but it seems that Npgsql does not. Can someone help me understand how I might write an Npgsql-compatible query like this one:
select * from my_table where id NOT IN (1, 2, 3, 4)
First, this has nothing to do with Npgsql - it's a PostgreSQL question.
Second, PostgreSQL does have full standard support for IN clauses. It's important to understand the difference between IN and ANY: IN operates on rows, whereas ANY operates on arrays - the two definitely aren't the same, even though you can convert one into the other (e.g. see unnest). Read the docs carefully.
Finally, to answer your question... Saying WHERE x != ANY(some_array) means "where there's some element of some_array that isn't equal to x". This indeed isn't the same as what you want, which is "where none of some_array's elements are equal to x". You can achieve the latter with WHERE x != ALL(some_array): this checks x against each and every element, returning true only if all of them are unequal.
You can also use ANY with simple logical negation: WHERE NOT (x = ANY(SOME_ARRAY)).
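For completeness, a sketch of how this could look from Npgsql, passing the exclusion list as an array parameter (connection string and table name are placeholders; assumes an integer id column):

using System;
using Npgsql;

using var conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me;Password=secret");
conn.Open();

// != ALL(@ids) is true only when id differs from every element of the array.
using var cmd = new NpgsqlCommand("SELECT * FROM my_table WHERE id != ALL(@ids)", conn);
cmd.Parameters.AddWithValue("ids", new[] { 1, 2, 3, 4 });

using var reader = cmd.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine(reader[0]); // id column of each row that survived the exclusion
}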
SQLAlchemy generated the following query for me:
SELECT count(client.id = user_accounting_journal_entry.client_id) AS count_1
FROM client, user_accounting_journal_entry
WHERE user_accounting_journal_entry.kind = 'debit'
GROUP BY client.name = user_accounting_journal_entry.client_id
Note the part inside select: count(client.id = user_accounting_journal_entry.client_id).
Having mostly used MySQL, I am not familiar with this syntax, and have a hard time finding documentation.
You should be familiar with the syntax from MySQL, at least in this form:
select sum(client.id = user_accounting_journal_entry.client_id)
This would count the number of matches.
The count() version counts the number of times that the expression is not NULL. Or, equivalently, it counts the number of times that both values are not NULL... something that seems really strange. More commonly in Postgres, I would expect:
select sum((client.id = user_accounting_journal_entry.client_id)::int)
This converts the boolean to an integer and hence counts the number of matches. (For example, over rows where the comparison evaluates to true, false, and NULL, count() returns 2 but this sum() returns 1.)
The query itself is awful:
It doesn't use proper join syntax
The join conditions between the tables don't look correct (a name to an id)
It is grouping by a boolean condition
In addition, it doesn't look like it is doing anything really useful.
count(client.id = user_accounting_journal_entry.client_id) counts the number of times the expression is not null.
Using OrientDB 2.1.2, I was trying to use the built-in COALESCE function and ran into some strange results.
Goal: select the maximum value of a property based on certain conditions OR 0 if there is no value for that property given the conditions.
Here's what I tried to use to produce my results.
Attempt 1: Just selecting the Maximum value of a property based on some condition - This worked as I expected... a single result
Attempt 2: Same query as before but now I'm adding an extra condition that I know will cause no results to be returned - This also worked as I expected... no results found
Attempt 3: Using COALESCE to select 0 if the result from the second query returns no results - This is where the query fails (see below).
I would expect the second query to return no results, thereby qualifying as a "NULL" result, meaning that the COALESCE function should then go on to return 0. What happens instead is that COALESCE treats the result of the inner select (which, again, returns no results) as a valid non-null value, so it never returns the intended 0.
Two questions for those who are familiar with using the OrientDB API:
Do you think this functionality is working properly or should an issue be filed with the orientdb issue tracker?
Is there another way to achieve my goal without using COALESCE or by using COALESCE in a different way?
Try rather:
select coalesce($a, 0) from ... let $a = (subquery) where ...
Or try this variant, because the sub-select returns a result set while coalesce() expects a single value:
select coalesce($a[0], 0) from ... let $a = (subquery) where ...
I'm trying to query a Postgres DB and get a SQL error every time.
The query is correct, but why does it not work?
Query
Apireq::where('calls','<','maxcalls')->get();
Error
SQLSTATE[22P02]: invalid input syntax for integer
Field types are both set to bigint.
It's bizarre.
You need to use whereRaw instead of just where. It's weird, but it gets around the issue, which I think is a Laravel bug.
whereRaw('calls < maxcalls')
@Andrew's answer is totally correct, except that it is not a bug but expected behavior.
Here's an example why:
where('foo', '=', 'bar')
Now there are two possibilities for how Laravel could interpret (or misinterpret?) this:
1. Column named bar
"Sure you want to compare the column foo with bar. Here's your SQL:"
WHERE foo = bar
2. The value "bar"
"Well obviously you want all the records where foo equals "bar". Here you go:"
WHERE foo = "bar"
So Laravel has to make a decision. And because a computer (without artificial intelligence, at least) can't possibly know whether you want to compare with a value or with another column, the developers decided it should always compare with the value (probably because that is the functionality needed more often).
And as you already know, whereRaw is the solution:
whereRaw('calls < maxcalls')