The syntax for SUBSTRING in PostgreSQL is SUBSTRING(<text_expr> FROM <i> FOR <j>). Any idea how to make SQLAlchemy Core generate that? I'm trying sqlalchemy.sql.expression.func, but that generates the usual comma-separated call notation, and I don't see a built-in function that addresses this. I'm not sure whether literal or text would work. Any thoughts?
Looking through the SQLAlchemy tests, I found that sqlalchemy.sql.expression.func.substring compiles to the SUBSTRING ... FROM ... FOR form for PostgreSQL:
def test_substring(self):
    self.assert_compile(
        func.substring("abc", 1, 2),
        "SUBSTRING(%(substring_1)s FROM %(substring_2)s "
        "FOR %(substring_3)s)",
    )
    self.assert_compile(
        func.substring("abc", 1),
        "SUBSTRING(%(substring_1)s FROM %(substring_2)s)",
    )
func.substring(str, from, [for]) is indeed what you want. It is comma-delimited because that's how Python functions take their arguments; the dialect still compiles it to the FROM ... FOR ... form, as the test above shows.
If you want to generate the SQL yourself, you could do something like text("SUBSTRING('foo' FROM 1 FOR 2)"), but I don't see why you would.
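To double-check the rendering in your own environment, here is a minimal sketch that compiles a substring() expression against the PostgreSQL dialect (the column name is just an illustration):

from sqlalchemy import column, func
from sqlalchemy.dialects import postgresql

# Compile a substring() call with the PostgreSQL dialect to see the
# FROM ... FOR ... rendering; "name" is a made-up column for illustration.
expr = func.substring(column("name"), 2, 3)
print(expr.compile(dialect=postgresql.dialect()))
# prints: SUBSTRING(name FROM %(substring_1)s FOR %(substring_2)s)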
In my Postgres database I have an array of JSON values (json[]) to keep track of errors in the form of JSON objects. When an update comes in, I want to append the latest error to that array but only keep the latest 5 errors (all SQL is simplified):
update dc
set
error_history =
array_append(dc.error_history[array_length(dc.error_history, 1) - 4:array_length(dc.error_history, 1)+1], cast('{"blah": 6}' as json))
where dc.id = 'f57520db-5b03-4586-8e77-284ed2dca6b1'
;
This works fine in native SQL. I tried to replicate it in jOOQ as follows:
.set(dc.ERROR_HISTORY,
field("array_append(dc.error_history[array_length(dc.error_history, 1) - 4:array_length(dc.error_history, 1) + 1], cast('{0}' as json))", dc.ERROR_HISTORY.getType(), latestError)
);
But it seems the : causes the library to think there is a bind parameter there. The generated SQL is:
array_append(dc.error_history[array_length(dc.error_history, 1) - 4?(dc.error_history, 1) + 1], cast('{0}' as json)
The error I get is:
nested exception is org.postgresql.util.PSQLException: ERROR: syntax error at or near "$5"
which I totally agree with :D
Is there some way to escape the : in the Java code, or is there a better way to do this?
Edit:
I have also tried simply removing the first element on update using the arrayRemove function, but that didn't work either, because it works by element rather than by index and Postgres doesn't know how to check JSON elements for equality.
OMG, the answer was really simple :D
field("array_append(dc.error_history[array_length(dc.error_history, 1) - 4 : array_length(dc.error_history, 1) + 1], cast({0} as json))", dc.ERROR_HISTORY.getType(), latestError)
Just add spaces around the colon and it works correctly. Note that I had made another mistake by putting single quotes around {0}: the latestError object is already of type JSON, so it should not be wrapped as a string literal.
I am using like with lowercaseString and Russian characters, but LOWER doesn't convert them to lowercase in the query. I tried to create my own function, but it didn't work for me. How can I solve this problem?
Having studied the SQLite documentation, I learned that you need to enable the ICU extension. How can this be done with this library?
Library: stephencelis/SQLite.swift (https://github.com/stephencelis/SQLite.swift)
Thanks for the help.
// value of "name" in the database: ПРИВЕТ
let search_name = "Привет"
user.filter(name.lowercaseString.like("%" + search_name.lowercased() + "%"))
SQLite's LOWER only works for ASCII. If you want case-insensitive matching for Russian (or any other non-ASCII characters), use FTS3/FTS4 https://www.sqlite.org/fts3.html (or FTS5 https://www.sqlite.org/fts5.html).
SQLite.swift has the corresponding full text search modules https://github.com/stephencelis/SQLite.swift/blob/master/Documentation/Index.md#full-text-search
To use it in your project with an existing database, you should connect to a virtual table via the FTS module and filter the query using .match:
// CREATE VIRTUAL TABLE IF NOT EXISTS "table" USING fts4("row0", "row1")
try db.run(table.create(.FTS4(row0, row1), ifNotExists: true))
// SELECT * FROM "table" WHERE "row0" MATCH 'textToMatch*'
try db.prepare(table.filter(row0.match("\(textToMatch)*")))
// SELECT * FROM "table" WHERE "any row" MATCH 'textToMatch*'
try db.prepare(table.match("\(textToMatch)*")))
I am getting an error while inserting data into PostgreSQL with .NET Core Entity Framework.
The error is: Input string was not in correct format
This is the query I am executing:
INSERT INTO public."MedQuantityVerification"("Id","MedId","ActivityBy","ActivityOn","Quantity","ActivityType","SupposedOn","Note") Values(7773866,248953,8887,'7/14/2018 10:43:43 PM','42.5 qty',5,NULL,'I counted forty two {point} five.')
However, when I run that query directly against PostgreSQL it works fine.
It looks like an issue on the C# side, but I don't know what.
Also, the issue seems to be with {point}.
This is how I execute the dynamic query:
db.Database.ExecuteSqlRaw(query);
You have to escape the curly brackets:
{point} should be {{point}}
ExecuteSqlRaw uses curly braces to parameterize the raw query, so if your query naturally includes them, as the OP's does, the function will try to parse them. Doubling up the braces, as in Koen Schepens' answer, acts as an escape sequence and tells the function not to parse them as a parameter.
The documentation for the function uses the following example to illustrate why it behaves this way:
var userSuppliedSearchTerm = ".NET";
context.Database.ExecuteSqlRaw("UPDATE Blogs SET Rank = 50 WHERE Name = {0}", userSuppliedSearchTerm);
Note that you'll want to use this to your advantage any time you're accepting user input and passing it to ExecuteSqlRaw. If the curly brace is in a parameter instead of the main string, it doesn't need to be escaped.
I have a Python script that runs a pgSQL file through SQLAlchemy's connection.execute function. Here's the block of code in Python:
results = pg_conn.execute(sql_cmd, beg_date = datetime.date(2015,4,1), end_date = datetime.date(2015,4,30))
And here's one of the places where the variable is used in my SQL:
WHERE
( dv.date >= %(beg_date)s AND
dv.date <= %(end_date)s)
When I run this, I get a cryptic Python error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) argument formats can't be mixed
…followed by a huge dump of the offending SQL query. I've run this exact code with the same variable convention before. Why isn't it working this time?
I encountered a similar issue to Nikhil's. I have a query with LIKE clauses which worked until I modified it to include a bind variable, at which point I received the following error:
DatabaseError: Execution failed on sql '...': argument formats can't be mixed
The solution is not to give up on the LIKE clause; it would be pretty crazy if psycopg2 simply didn't permit LIKE clauses. Instead, escape the literal % as %%. For example, the following query:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%';
would need to be modified to:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%%';
More details in the psycopg2 docs: http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
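To make that concrete, here is a minimal sketch of the escaped query run through psycopg2 directly (the connection string is a placeholder; the people table comes from the example above):

import datetime
import psycopg2

# The named bind parameter is passed separately, while the literal percent in
# the LIKE pattern is doubled so psycopg2 does not treat it as a placeholder.
conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
cur = conn.cursor()
cur.execute(
    """
    SELECT *
    FROM people
    WHERE start_date > %(beg_date)s
      AND name LIKE 'John%%'
    """,
    {"beg_date": datetime.date(2015, 4, 1)},
)
rows = cur.fetchall()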
As it turned out, I had used a SQL LIKE operator in the new SQL query, and the % wildcard was interfering with the parameter escaping. For instance:
dv.device LIKE 'iPhone%' or
dv.device LIKE '%Phone'
Another answer offered a way to un-escape and re-escape, which I felt would add unnecessary complexity to otherwise simple code. Instead, I used pgSQL's ability to handle regex to modify the SQL query itself. This changed the above portion of the query to:
dv.device ~ E'^iPhone.*' or
dv.device ~ E'.*Phone$'
So for others: you may need to change your LIKE operators to the regex operator ~ to get it to work. Just remember that it'll be WAY slower for large queries.
For me, it turned out I had a % in a SQL comment:
/* Any future change in the testing size will not require
a change here... even if we do a 100% test
*/
This works fine:
/* Any future change in the testing size will not require
a change here... even if we do a 100pct test
*/
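If you would rather keep the percent sign in the comment, the same %% escape from the psycopg2 docs should work there too, since the driver interpolates the whole string when bind parameters are passed. A sketch along the lines of the original call (pg_conn is the connection from the question; the table name is hypothetical):

import datetime

# Assumption: pg_conn and the bind parameters are the same as in the question.
# The doubled %% inside the comment collapses back to a single % during
# parameter interpolation, so the comment keeps its wording.
sql_cmd = """
/* Any future change in the testing size will not require
   a change here... even if we do a 100%% test */
SELECT dv.date
FROM device_visits dv  -- hypothetical table name
WHERE dv.date >= %(beg_date)s
  AND dv.date <= %(end_date)s
"""
results = pg_conn.execute(sql_cmd,
                          beg_date=datetime.date(2015, 4, 1),
                          end_date=datetime.date(2015, 4, 30))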
Basically I'm trying to do a simple join. I'm a beginner in Progress, and even though I keep reading the same things... my problem is still unresolved! :'(
I'm using unixODBC to communicate with my database, and it works like a charm when I use a simple command like: SELECT * FROM PUB."Art"
I understood that I have to do something like this to join 2 tables:
FOR EACH PUB."Art" WHERE (PUB."Art".IdArt = 16969) ,
EACH PUB."ArtDet" WHERE (PUB."ArtDet".IdArt = PUB."Art".IdArt)
END
But this only returns [ISQL]ERROR: Could not SQLPrepare.
I then tried to simplify things with:
for each PUB."Art": display PUB."Art".IdArt end.
I tried putting a colon (or not) after the for each loop, using a period / comma, etc., but apparently I never hit the right syntax... or I'm missing something needed to execute this command!
Can anyone advise me?
Thanks a lot!
You appear to be mixing SQL and 4GL syntax.
"FOR EACH" is 4GL. The SQL equivalent is "SELECT".
(If you are using the 4GL you do not need the "PUB" prefix, and quoting table and field names will not work.)
To do a join with SQL (or the 4GL) use a "," between the table names. For SQL your syntax would look something like:
SELECT * from PUB."Art", PUB."ArtDet"
Gory details regarding WHERE clauses, SQL INNER & OUTER joins etc. can be found in the online documentation:
https://community.progress.com/community_groups/openedge_general/w/openedgegeneral/1329.openedge-product-documentation-overview
You will want to navigate to your specific release and then find the "SQL" guide.