I'm wondering if it is possible to create functions in Oracle SQL that can be used concisely inside of a WHERE clause.
I'd like to achieve something similar to the code below:
SELECT 'Hey'
FROM DUAL
WHERE TrimEquals(' hey', ' hey ')
I'm not sure if this is possible.
I've seen examples where the function TrimEquals returns a value (usually 0 or 1), and you then compare that value, as in the following example:
SELECT 'Hey'
FROM DUAL
WHERE TrimEquals(' hey', ' hey ') = 1
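(For reference, a sketch of what such a TrimEquals function might look like, assuming it simply compares the trimmed arguments and returns 1 or 0; the exact implementation isn't important to the question:)
CREATE OR REPLACE FUNCTION TrimEquals(p_left IN VARCHAR2, p_right IN VARCHAR2)
RETURN NUMBER
IS
BEGIN
  -- Return 1 when the trimmed values match, 0 otherwise.
  RETURN CASE WHEN TRIM(p_left) = TRIM(p_right) THEN 1 ELSE 0 END;
END TrimEquals;
/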
Maybe I'm just being too picky but I don't like putting in the relational operator every time.
So, is this something I can do or am I stuck using the style from the second example?
Thank you.
I am trying to sort the OUN.note column by using the OUN.outcomeKey, since
the way it is working right now is putting the notes in the wrong order (sorting alphabetically). Any idea on how to go about this? I've been trying to sort the data with another nested sub-query, but I haven't had much luck (I don't have a plethora of experience).
Here's my current query:
SELECT DISTINCT OC.outcomeKey [Outcome Key], OC.outcome [Result],
    STUFF((SELECT ',' + ' ' + OUN.note
           FROM Outcome AS OUT
           JOIN OutcomeNote AS OUN
               ON OUT.outcomeKey = OUN.outcomeKey
           WHERE OUN.outcomeKey = OC.outcomeKey
           GROUP BY OUN.note
           FOR XML PATH ('')), 1, 1, '') [Outcome Note]
FROM Outcome AS OC
Any help or tips would be greatly appreciated! Also, please let me know if any more info is needed.
You may replace the line
GROUP BY OUN.note
with the line
ORDER BY OUN.outcomeKey
Also, because the concatenated string starts with ', ', you may want to use 1, 2, '' as the additional arguments of the STUFF function. Otherwise, the values in your [Outcome Note] column will always start with a space.
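Putting both suggestions together, your query would then look roughly like this:
SELECT DISTINCT OC.outcomeKey [Outcome Key], OC.outcome [Result],
    STUFF((SELECT ',' + ' ' + OUN.note
           FROM Outcome AS OUT
           JOIN OutcomeNote AS OUN
               ON OUT.outcomeKey = OUN.outcomeKey
           WHERE OUN.outcomeKey = OC.outcomeKey
           ORDER BY OUN.outcomeKey
           FOR XML PATH ('')), 1, 2, '') [Outcome Note]
FROM Outcome AS OC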
Edit:
By the way, sorting the notes by outcomeKey in the subquery that generates the values for the [Outcome Note] column has no effect, since all the notes in each subquery result share the same outcomeKey value.
But you may of course sort on any column you want. Perhaps there are other columns in your OutcomeNote table that can serve as a useful sort key for your outcome notes.
If I misunderstood your question, please provide definitions of the Outcome and OutcomeNote tables, together with a sample population of those tables and the desired/expected query result.
Edit 2:
Starting with SQL Server 2017, Transact-SQL contains a function called STRING_AGG, which seems to be functionally equivalent (more or less) to MySQL's GROUP_CONCAT function. Using this function, your query would become something like this:
SELECT
OUN.outcomeKey [Outcome Key],
OC.outcome [Result],
STRING_AGG(OUN.[Note], ', ') WITHIN GROUP (ORDER BY OUN.outcomeKey) [Outcome Note]
FROM
Outcome AS OC
JOIN OutcomeNote AS OUN ON OUN.outcomeKey = OC.outcomeKey
GROUP BY
OUN.outcomeKey,
OC.outcome;
When using SQL Server 2017 or SQL Azure, this might be a more fitting choice, since it not only makes the query more readable, it also eliminates the (way less efficient) XML functions from your query.
I too have used the XML functionality for field concatenation (the way you use it) intensively in the past, but I noticed a considerable drop in the performance of my queries (which sometimes contained up to 10 columns with concatenated data). Since then, I tend to go for recursive common table expressions or scalar UDFs with recursion in pre-SQL Server 2017 environments.
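For illustration, a rough sketch of such a recursive CTE, using the table and column names from your question, could look like the following (treat it as a sketch rather than production code):
WITH NumberedNotes AS (
    -- number the notes within each outcomeKey (the sort column is arbitrary here)
    SELECT OUN.outcomeKey,
           OUN.note,
           ROW_NUMBER() OVER (PARTITION BY OUN.outcomeKey ORDER BY OUN.note) AS rn
    FROM OutcomeNote AS OUN
),
ConcatNotes AS (
    -- anchor: start with the first note of every outcomeKey
    SELECT outcomeKey,
           CAST(note AS VARCHAR(MAX)) AS notes,
           rn
    FROM NumberedNotes
    WHERE rn = 1
    UNION ALL
    -- recursive step: append the next note for the same outcomeKey
    SELECT n.outcomeKey,
           CAST(c.notes + ', ' + n.note AS VARCHAR(MAX)),
           n.rn
    FROM ConcatNotes AS c
    JOIN NumberedNotes AS n
        ON n.outcomeKey = c.outcomeKey
       AND n.rn = c.rn + 1
)
SELECT OC.outcomeKey [Outcome Key],
       OC.outcome [Result],
       c.notes [Outcome Note]
FROM Outcome AS OC
JOIN ConcatNotes AS c
    ON c.outcomeKey = OC.outcomeKey
WHERE c.rn = (SELECT MAX(n2.rn) FROM NumberedNotes AS n2 WHERE n2.outcomeKey = c.outcomeKey)
OPTION (MAXRECURSION 0); -- allow more than 100 notes per outcome if necessary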
I feel I am missing a simple T-SQL answer to this question. I have a Measurements table, and an Activity table related by MeasurementID column, and there are at least 3 activities (sometimes more) related to a single measurement. How do I construct a query such that the output would look like this:
Measurement ID   Activities
1                Running:Walking:Eating
2                Walking:Eating:Sleeping
I would also be satisfied if the output looked like this:
Measurement ID   Activity1   Activity2   Activity3
1                Running     Walking     Eating
Is there a simple single query way to do this, or must I use (shudder) cursors to do the trick?
Unfortunately, there is no GROUP_CONCAT() in T-SQL. There is a trick to simulate it, though:
SELECT
    MeasurmentID,
    Activities = REPLACE((SELECT Activity AS [data()]
                          FROM MeasurmentActivities
                          WHERE MeasurmentID = ma.MeasurmentID
                          FOR XML PATH('')), ' ', ':')
FROM
    MeasurmentActivities AS ma
GROUP BY
    MeasurmentID
If you know in advance the number of activities per measurement, you can try the PIVOT operator (a sketch follows below).
Otherwise there is no easy way to do it.
If you can, I would suggest that you "horizontalize" the rows on your application end.
If that is not an option, the only way I suppose would work in Transact-SQL is to convert your result to XML output and tweak it using XPath queries.
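For illustration, assuming a hypothetical MeasurementActivities(MeasurementID, Activity) table and exactly three activities per measurement, a PIVOT-based sketch for the fixed-column output could look like this:
SELECT MeasurementID,
       [1] AS Activity1,
       [2] AS Activity2,
       [3] AS Activity3
FROM (
    -- number the activities within each measurement so they can be pivoted
    SELECT MeasurementID,
           Activity,
           ROW_NUMBER() OVER (PARTITION BY MeasurementID ORDER BY Activity) AS rn
    FROM MeasurementActivities
) AS src
PIVOT (
    MAX(Activity) FOR rn IN ([1], [2], [3])
) AS p;
With more than three activities per measurement you would have to extend the IN list (or build it dynamically).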
I have a Postgres table with more than 8 million rows. Given the following two ways of doing the same query via DBD::Pg, I get wildly different results.
$q .= '%';
## query 1
my $sql = qq{
SELECT a, b, c
FROM t
WHERE Lower( a ) LIKE '$q'
};
my $sth1 = $dbh->prepare($sql);
$sth1->execute();
## query 2
my $sth2 = $dbh->prepare(qq{
SELECT a, b, c
FROM t
WHERE Lower( a ) LIKE ?
});
$sth2->execute($q);
Query 2 is at least an order of magnitude slower than query 1... it seems like it is not using the index, while query 1 is.
Would love to hear why.
With LIKE expressions, b-tree indexes can only be used if the search pattern is left-anchored, i.e. it starts with a constant prefix and the wildcard only appears later (as with your trailing %). More details in the manual.
Thanks to @evil otto for the link; it points to the current version.
Your first query provides this essential information at prepare time, so the query planner can use a matching index.
Your second query does not provide any information about the pattern at prepare time, so the query planner cannot use any indexes.
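To make "left-anchored" concrete, here is an illustrative sketch (the expression index and its name are my own assumptions, not part of your schema; text_pattern_ops is only needed when the database does not use the C locale):
-- An expression index that left-anchored LIKE patterns on Lower(a) can use.
CREATE INDEX t_lower_a_like_idx ON t (Lower(a) text_pattern_ops);

-- Left-anchored (constant prefix, trailing %): the planner can use the index.
SELECT a, b, c FROM t WHERE Lower(a) LIKE 'foo%';

-- Not left-anchored (leading wildcard): the index cannot help.
SELECT a, b, c FROM t WHERE Lower(a) LIKE '%foo';

-- With a placeholder (LIKE ?), the prepared plan cannot assume the pattern
-- will be left-anchored, which is why query 2 ends up scanning the table.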
I suspect that in the first case the query compiler/optimizer detects that the clause is a constant, and can build an optimal query plan. In the second it has to compile a more generic query because the bound variable can be anything at run-time.
Are you running both test cases from the same file, using the same $dbh object?
I think the reason for the increased speed in the second case would be that you are using a prepared statement which is already parsed (but maybe I'm wrong :)).
Ahh, I see. I will drop out after this comment since I don't know Perl. But I would trust that the editor is correct in highlighting the $q as a constant. I'm guessing that you need to concatenate the value into the string, rather than just directly referencing the variable. So, my guess is that, with . being Perl's string-concatenation operator, you would use something like:
my $sql = qq{
    SELECT a, b, c
    FROM t
    WHERE Lower( a ) LIKE '} . $q . qq{'};
(Note: unless the language is tightly integrated with the database, such as Oracle PL/SQL, you usually have to create a completely valid SQL string before submitting it to the database, instead of expecting the compiler to 'interpolate'/'substitute' the value of the variable.)
I would again suggest that you get the COUNT() of the statements, to make sure that you are comparing apples to apples.
I don't know Postgres at all, but I think in Line 7 (WHERE Lower( a ) LIKE '$q'), $q is actually a constant. It looks like your editor thinks so too, since it is highlighted in red. You probably still need to use the ? for the variable.
To test, do a COUNT(*) and make sure they match - I could be way off base.
I am working with SQLite on the iPhone platform and dealing with a database that has some strange entities. One of the columns in this DB, called type, allows multiple values (multiple types). The data in this column looks like this:
1|9|20|31|999
The table is like:
ID   TYPE
------------------------
1    1|9|20|31|999
2    5|13|15|30|990
3    6|7|45|46|57
When the user wants to select the data with type 9, the row above should be selected, because the entry contains a 9. I use the following statement:
SELECT id
FROM table
WHERE type LIKE '%' || ? || '%'
The problem is that rows with type 999 but without type 9 will also be selected. Likewise, if the user wants to select type 1, rows with types 11, 12, 13, etc. will also be selected.
I tried to use the statement:
SELECT id
FROM table
WHERE type LIKE '[^0123456789]' || ? || '[^0123456789]'
But it doesn't select any data.
What can I do to select the data with the correct type?
(The database cannot be changed because of company requirements.)
You'd need a regular expression that matches only 'yournumber|', '|yournumber|', '|yournumber', or a combination of the above. I see no way an optimizer could use an index on that kind of thing short of a specialized index that would work similarly to full-text indexes.
This is almost hopeless: unless you have a trivial amount of data (or way too many processing resources), it will not scale.
If the amount of data is small, consider just selecting like '%number%' and doing some post-processing in your application code.
Otherwise, normalize your data.
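For illustration, if changing the schema ever becomes an option, a normalized layout (hypothetical table and column names) would turn the lookup into an exact, indexable match:
-- One row per (id, type) pair instead of a pipe-delimited list.
CREATE TABLE item_type (
    id   INTEGER NOT NULL,   -- references the original row's ID
    type INTEGER NOT NULL
);
CREATE INDEX idx_item_type_type ON item_type(type);

-- Finding everything of type 9 becomes an exact match:
SELECT id FROM item_type WHERE type = 9;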
You could try this
SELECT id
FROM table
WHERE '|' + type + '|' LIKE '%|' + ? + '|%'
(I use the + operator here because it's easier to read but you could also use || instead)
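For completeness, the same predicate written with SQLite's || operator (keeping the question's placeholder table name) would be:
SELECT id
FROM table            -- placeholder name from the question
WHERE '|' || type || '|' LIKE '%|' || ? || '|%'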
It's an easy solution without regex. Indexes don't help here (the same is true for the regex solution), so the query will be slow with huge amounts of data.
I asked myself a similar question a while ago; also check its answer.
I'm not too familiar with SQLite, but what about:
SELECT id FROM table WHERE type REGEXP '(^9\|.*)|(.*\|9\|.*)|(.*\|9$)';
assuming that you are looking for type 9.
Then to look for type 999 it would be:
SELECT id FROM table WHERE type REGEXP '(^999\|.*)|(.*\|999\|.*)|(.*\|999$)';
Note that you might want to tweak the expression if a row can potentially hold a single type value, because in that case I'd suppose there wouldn't be any | character.
I'm doing a bit of work which requires me to truncate DB2 character-based fields. Essentially, I need to discard all text which is found at or after the first alphabetic character.
e.g.
102048994BLAHBLAHBLAH
becomes:-
102048994
In SQL Server, this would be a doddle - PATINDEX would swoop in and save the day. Much celebration would ensue.
My problem is that I need to do this in DB2. Worse, the result needs to be used in a join query, also in DB2. I can't find an easy way to do this. Is there a PATINDEX equivalent in DB2?
Is there another way to solve this problem?
If need be, I'll hardcode 26 chained LOCATE functions to get my result, but if there is a better way, I am all ears.
SELECT TRANSLATE(lower(column), ' ', 'abcdefghijklmnopqrstuvwxyz')
FROM table
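Building on that TRANSLATE idea, a sketch that actually truncates at the first letter could look like this (column and table are the placeholders from above; it assumes the marker character '@' never occurs in the data):
SELECT CASE
         WHEN LOCATE('@', TRANSLATE(LOWER(column),
                                    '@@@@@@@@@@@@@@@@@@@@@@@@@@',  -- 26 '@' characters, one per letter
                                    'abcdefghijklmnopqrstuvwxyz')) = 0
              THEN column   -- no letters at all: keep the value as is
         ELSE SUBSTR(column, 1,
                     LOCATE('@', TRANSLATE(LOWER(column),
                                           '@@@@@@@@@@@@@@@@@@@@@@@@@@',
                                           'abcdefghijklmnopqrstuvwxyz')) - 1)
       END
FROM table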
Write a small UDF (user-defined function) in C or Java that does your task.
Peter