I'm new to SQL and Postgres, so hopefully this isn't too hard to figure out for all of you.
I'm trying to use a Position function within a CASE statement, but I keep getting the error
"ERROR: Syntax error at or near ""Project"". LINE 2: CASE WHEN position('(' IN "Project") >0 THEN".
I've used this position function before and it worked fine, so I'm confused about what the problem is here. I've also tried the table-qualified name, xyztable.Project, and plain Project - both without quotation marks.
Here is the entire statement:
SELECT "Project",
CASE WHEN postion('(' IN "Project") >0 THEN
substring("Project",position('(' IN "Project")+1,position(')' IN "Project")-2)
CASE WHEN postion('('IN "Project") IS NULL THEN
"Project"
END
FROM "2015Budget";
As I haven't gotten past the second line of this statement, if anyone sees anything that would prevent this statement from running correctly, please feel free to point it out.
New Statement:
SELECT "Project",
CASE
WHEN position('(' IN "Project") >0 THEN
substring("Project",position('(' IN "Project")+1,position(')' IN "Project")-2)
WHEN position('(' IN "Project") IS NULL THEN
"Project"
END
FROM "2015Budget";
Thank you for your help!!
The error is due to a simple typo - postion instead of position.
You would generally get a much more comprehensible error message in situations like this (e.g. "function postion(text,text) does not exist"). However, the use of function-specific keywords as argument separators (as mandated by the SQL standard) makes this case much more difficult for the parser to cope with.
After fixing this, you'll run into another error. Note that the general form of a multi-branch CASE expression is:
CASE
WHEN <condition1> THEN <value1>
WHEN <condition2> THEN <value2>
...
END
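Putting both fixes together, a sketch of the corrected query. As a side note, this also computes the substring length relative to the opening parenthesis, so just the text between the parentheses is returned, and it uses ELSE rather than a second WHEN: position() returns 0, not NULL, when '(' is absent (it returns NULL only when "Project" itself is NULL), so ELSE covers both remaining cases.
SELECT "Project",
       CASE
           WHEN position('(' IN "Project") > 0 THEN
               substring("Project",
                         position('(' IN "Project") + 1,
                         position(')' IN "Project") - position('(' IN "Project") - 1)
           ELSE "Project"  -- rows without a '(' (position() = 0) and NULL rows
       END
FROM "2015Budget";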
I need to be able to use variables in table names - I basically have the same set of tables used for different types of data, so I would like to have just one dashboard and swap between all types, instead of always having to set up multiple identical dashboards.
My query is something like:
select * from table_$variable_name;
Where my list of possible variable is something like cat, dog, bird
I can't seem to make this work. If I just put the variable as shown above, I get the following error:
Error 1146: Table 'table_$variable_name' doesn't exist
If I enclose it in curly brackets, I get this error instead.
Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '{bird}' at line 1
(i.e. with the selected variable actually being visible this time)
I'm not sure if the issue is having underscores in the table names; I tried putting underscores around my variables as well, and had no luck with that.
Another thing I tried was gradually adding on to the table name, so e.g.
select * from table_$variable;
This still returns an error, but I can see the table name starting to form correctly:
Error 1146: Table 'table_bird_' doesn't exist
However, as soon as I add another underscore, the variable is not picked up anymore:
Error 1146: Table 'table_$variable_' doesn't exist
I'm sure it's something silly I am missing in the syntax of the query - does anyone have any suggestions?
Using this https://grafana.com/docs/grafana/latest/variables/templates-and-variables/ for reference
As @arturomp suggests, use
${var:raw}
At least in my case, this was the solution that worked.
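For example, assuming the dashboard variable from the question is named variable_name, the query would become:
select * from table_${variable_name:raw};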
I found double square brackets work, e.g.
Rather than
select * from table_$variable_name;
use
select * from table_[[variable_name]];
I'm looking at this example provided by MS as I'm trying to learn Try...Catch. I understand the syntax and Output (for the most part) but I have one question:
The output shows ERROR_LINE as '4'. This is fine, but if I remove the line break between GO and BEGIN TRY, it shows ERROR_LINE as '3'. I just want to understand the logic here.
What I imagine is happening is that SQL Server counts lines from the start of the batch, which begins immediately after GO, even if that line is blank, but I do not know this for certain. Can anyone clarify? If that theory is correct, wouldn't that make finding errors difficult if scripts are written with line breaks like this? (The no-blank-line variant is sketched after the example below.)
-- Verify that the stored procedure does not already exist.
IF OBJECT_ID ( 'usp_GetErrorInfo', 'P' ) IS NOT NULL
DROP PROCEDURE usp_GetErrorInfo;
GO
-- Create procedure to retrieve error information.
CREATE PROCEDURE usp_GetErrorInfo
AS
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
GO
--Line 1
BEGIN TRY --Line 2
-- Generate divide-by-zero error. --Line 3
SELECT 1/0; --Line 4
END TRY
BEGIN CATCH
-- Execute error retrieval routine.
EXECUTE usp_GetErrorInfo;
END CATCH;
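For comparison, the variant described above, with the blank line after GO removed, would number the batch like this, with ERROR_LINE() reporting 3:
GO
BEGIN TRY --Line 1
-- Generate divide-by-zero error. --Line 2
SELECT 1/0; --Line 3
END TRY
BEGIN CATCH
-- Execute error retrieval routine.
EXECUTE usp_GetErrorInfo;
END CATCH;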
You can't really rely on ERROR_LINE(), especially when the error is thrown in an internal stored procedure or a dynamic T-SQL statement is executed.
But do you really need the exact error line?
In real production code, the fix for the line causing the error may not be as obvious as in your example;
it will be better to debug the stored procedure or function with the corresponding input parameters in order to reproduce the error.
That way it will be easier to fix the issue. In order to debug a SQL routine:
just script it,
remove the drop and create stuff,
add DECLARE in front of the input parameters and initialize them with the values causing the error.
Basically, instead of the exact error line (which can easily be found once you have the correct input parameters and execute the routine), you may find two things useful:
which routine is causing the error (for example, you can add an additional parameter to the usp_GetErrorInfo SP so that it yields the SP name as well);
the input parameters which are causing the error (this can be done using a separate table for logging the errors in the CATCH clause - you simply insert the input parameters and information about the error into the table, as sketched below).
Having this information, it will be easy (in many cases) to reproduce and then fix the issue.
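For example, a minimal sketch of that logging idea; the ErrorLog table, usp_DoWork procedure, and @CustomerId parameter are hypothetical names, not part of the original example:
-- Hypothetical table for capturing failures and the inputs that caused them.
CREATE TABLE dbo.ErrorLog (
    LogId         int IDENTITY(1,1) PRIMARY KEY,
    ProcedureName sysname NULL,
    InputParams   nvarchar(max) NULL,
    ErrorNumber   int NULL,
    ErrorMessage  nvarchar(4000) NULL,
    LoggedAt      datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE PROCEDURE dbo.usp_DoWork @CustomerId int
AS
BEGIN TRY
    -- Placeholder for the real work; fails when @CustomerId = 0.
    SELECT 1 / @CustomerId;
END TRY
BEGIN CATCH
    -- Log which routine failed and the inputs that made it fail.
    INSERT INTO dbo.ErrorLog (ProcedureName, InputParams, ErrorNumber, ErrorMessage)
    VALUES (ERROR_PROCEDURE(),
            CONCAT('@CustomerId=', @CustomerId),
            ERROR_NUMBER(),
            ERROR_MESSAGE());
    THROW;  -- re-raise so the caller still sees the error
END CATCH;
GO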
I have a Python script that runs a pgSQL file through SQLAlchemy's connection.execute function. Here's the block of code in Python:
results = pg_conn.execute(sql_cmd, beg_date = datetime.date(2015,4,1), end_date = datetime.date(2015,4,30))
And here's one of the areas where the variable gets substituted into my SQL:
WHERE
( dv.date >= %(beg_date)s AND
dv.date <= %(end_date)s)
When I run this, I get a cryptic Python error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) argument formats can't be mixed
…followed by a huge dump of the offending SQL query. I've run this exact code with the same variable convention before. Why isn't it working this time?
I encountered a similar issue as Nikhil. I have a query with LIKE clauses which worked until I modified it to include a bind variable, at which point I received the following error:
DatabaseError: Execution failed on sql '...': argument formats can't be mixed
The solution is not to give up on the LIKE clause. It would be pretty crazy if psycopg2 simply didn't permit LIKE clauses. Rather, we can escape the literal % with %%. For example, the following query:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%';
would need to be modified to:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%%';
More details in the psycopg2 docs: http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
As it turned out, I had used a SQL LIKE operator in the new SQL query, and the % wildcard was messing with Python's parameter escaping. For instance:
dv.device LIKE 'iPhone%' or
dv.device LIKE '%Phone'
Another answer offered a way to un-escape and re-escape, which I felt would add unnecessary complexity to otherwise simple code. Instead, I used pgSQL's ability to handle regex to modify the SQL query itself. This changed the above portion of the query to:
dv.device ~ E'iPhone.*' or
dv.device ~ E'.*Phone$'
So for others: you may need to change your LIKE operators to regex '~' to get it to work. Just remember that it'll be WAY slower for large queries. (More info here.)
For me, it turned out I had a % in a SQL comment:
/* Any future change in the testing size will not require
a change here... even if we do a 100% test
*/
This works fine:
/* Any future change in the testing size will not require
a change here... even if we do a 100pct test
*/
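Presumably, based on the %% escaping described above, doubling the percent sign inside the comment should also work when parameters are being passed, since the driver unescapes %% back to %:
/* Any future change in the testing size will not require
   a change here... even if we do a 100%% test
*/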
I'm a total Erlang noob and I just want to see what's in a particular table I have. I want to just "select *" from a particular table to start with. The examples I'm seeing, such as the official documentation, all have column restrictions, which I don't really want. I don't really know how to form the MatchHead or Guard to match anything (aka "*").
A very simple primer on how to just get everything out of a table would be very appreciated!
For example, you can use qlc:
F = fun() ->
        Q = qlc:q([R || R <- mnesia:table(foo)]),
        qlc:e(Q)
    end,
mnesia:transaction(F).
The simplest way to do it is probably mnesia:dirty_match_object:
mnesia:dirty_match_object(foo, #foo{_ = '_'}).
That is, match everything in the table foo that is a foo record, regardless of the values of the fields (every field is '_', i.e. wildcard). Note that since it uses record construction syntax, it will only work in a module where you have included the record definition, or in the shell after evaluating rr(my_module) to make the record definition available.
(I expected mnesia:dirty_match_object(foo, '_') to work, but that fails with a bad_type error.)
To do it with select, call it like this:
mnesia:dirty_select(foo, [{'_', [], ['$_']}]).
Here, MatchHead is '_', i.e. match anything. The guards are [], an empty list, i.e. no extra limitations. The result spec is ['$_'], i.e. return the entire record. For more information about match specs, see the match specifications chapter of the ERTS user guide.
If an expression is too deep and gets printed with ... in the shell, you can ask the shell to print the entire thing by evaluating rp(EXPRESSION). EXPRESSION can either be the function call once again, or v(-1) for the value returned by the previous expression, or v(42) for the value returned by the expression preceded by the shell prompt 42>.
I'm wondering if this is possible. I have a messy set of first name data to work with, and it's shared by other applications, so I don't want to run any update statements. But the data will sometimes include an AKA, such as:
Robert (AKA Bob)
And I am trying to get clean data where it just says "Robert".
One way I thought of is to use a temp table, then CHARINDEX for ( and ), then REPLACE what's between ( and ). This seems like a long-winded way to do it.
Is there a smarter way?
EDIT: More examples of the data hell. Sometimes the parenthesis comes in the front or mixed up such as:
(Bill) William
Richard (Dick) Jr.
Untested:
FirstName = case when charindex('(AKA', FirstName) = 0
                 then FirstName
                 -- -1 excludes the '(' itself; rtrim drops the space before it
                 else rtrim(substring(FirstName, 1, charindex('(AKA', FirstName) - 1)) end
If the noisy string pattern is unpredictable, it is better to use a SQL CLR TVF (table-valued function) and utilize C# regex. It can take XML as an input parameter containing the data you need to process, and return a table with the processed data.
Pieter got the ball rolling for me! Thank you for getting me started on the right track!
SELECT
    CASE WHEN FirstName LIKE '%)%'
         THEN REPLACE(FirstName,
                      SUBSTRING(FirstName,
                                CHARINDEX('(', FirstName),
                                CHARINDEX(')', FirstName) - CHARINDEX('(', FirstName) + 1),
                      '')
         ELSE FirstName
    END
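A quick hypothetical check of that expression against the sample values from the question; LTRIM/RTRIM are added here because removing the parenthesized part leaves stray spaces at the edges:
SELECT FirstName,
       LTRIM(RTRIM(CASE WHEN FirstName LIKE '%)%'
                        THEN REPLACE(FirstName,
                                     SUBSTRING(FirstName,
                                               CHARINDEX('(', FirstName),
                                               CHARINDEX(')', FirstName) - CHARINDEX('(', FirstName) + 1),
                                     '')
                        ELSE FirstName END)) AS CleanedName
FROM (VALUES ('Robert (AKA Bob)'),
             ('(Bill) William'),
             ('Richard (Dick) Jr.')) AS t(FirstName);
Note that 'Richard (Dick) Jr.' still comes out with a doubled space in the middle ('Richard  Jr.'), which LTRIM/RTRIM won't fix; collapsing interior spaces would need an extra REPLACE.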