I'm learning LibreOffice Base (3.6.2). Unfortunately the documentation is pretty poor. The DB is in the ".odb" file format. Here's a simple multi-table query:
I'd like to merge the fields "refLogiciel.name" and "tblPosteLogiciel.version" into one field.
Thank you!
You will have to create the query in SQL for that one. I have the same problem. No answers yet, but I do at least have some headway in that area. Here's a link to my question, which might help:
Error in Querying Concatenation of 2 fields in LibreOffice Base SQL
EDIT: Oops, I didn't realize how old this post was. I hope you found out how to do it so you can share it with me. :)
Use as the field: CONCAT( "refLogiciel"."name" , "tblPosteLogiciel"."version")
If you want a space between the two, then use:
CONCAT( "refLogiciel"."name" , CONCAT( ' ', "tblPosteLogiciel"."version"))
Does anyone know how to use the to_tsquery() function of PostgreSQL in SQLAlchemy? I searched a lot on Google, but I didn't find anything I could understand. Please help.
I am hoping it is available in the filter function, like this:
session.query(TableName).filter(Table.column_name.to_tsquery(search_string)).all()
The expected SQL for the above query is something like this:
SELECT column_name
FROM table_name t
WHERE t.column_name @@ to_tsquery(:search_string)
The .op() method allows you to generate SQL for arbitrary operators.
from sqlalchemy import func  # provides func.to_tsquery for the SQL function call

session.query(TableName).filter(
    Table.c.column_name.op('@@')(func.to_tsquery(search_string))
).all()
For this type of arbitrary query, you can embed the SQL directly into your query:
session.query(TableName).\
filter("t.column_name ## to_tsquery(:search_string)").\
params(search_string=search_string).all()
You should also be able to parameterize t.column_name, but I can't find the docs for that just now.
This may have been added recently, but it is worth mentioning as a more standard solution.
query.filter(Model.attribute.match('your search string'))
does it for you, as it looks up the right operator available for your dialect.
See the official docs:
https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#full-text-search
Of course, this assumes the table you are querying is a view built with a to_tsvector column to apply the @@ operator to.
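For reference, the 1.3-era docs linked above describe .match() on the PostgreSQL dialect as rendering roughly the following (column name and search string here are just placeholders):
column_name @@ to_tsquery('your search string')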
My fifty cents in 2021, following the docs:
None of the previous answers mentions how to cast the text column on the Postgres side with to_tsvector('english', column). I have a text column indexed as a tsvector, and this is the way:
select(mytable.c.id).where(
func.to_tsvector('english', mytable.c.title )\
.match('somestring', postgresql_regconfig='english')
)
In my case, I didn't want to use to_tsquery, which ".match" forces on you. A more intuitive option is websearch_to_tsquery when you have a search input like Stack Overflow's. So I made a mix from jd's response.
I finally applied the following statement as a .filter():
func.to_tsvector('english', Table.column_name)\
    .op('@@')(func.websearch_to_tsquery("string to search",
                                        postgresql_regconfig='english'))
I think this is the general formula and it applies to to_tsquery too.
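Spelled out as plain SQL, that filter corresponds roughly to the following (column name and search string are placeholders; note that websearch_to_tsquery requires PostgreSQL 11 or later):
to_tsvector('english', column_name) @@ websearch_to_tsquery('english', 'string to search')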
I am currently using this SQLite query in my application. Two tables are used in this query:
UPDATE table1 SET visited = (SELECT COUNT(DISTINCT table1.itemId) FROM table2 WHERE table2.itemId = table1.itemId AND table2.sessionId = 'eyoge2avao');
It works correctly. My problem is that it takes around 10 seconds to execute this query and retrieve the result. I don't know what to do; almost everything else runs fine, so the problem seems to be with how this query is formed.
Please, can someone help me optimize this query?
Regards,
Brian
Make sure you have indexes on the following (combinations of) fields:
table1.itemId
(This will speed up the DISTINCT clause, since the itemId will already be in the correct order).
table2.itemId, table2.sessionId
This will speed up the WHERE clause of your SELECT statement.
How many rows are there in these tables?
Also try running EXPLAIN on your SELECT command to see if it gives you any helpful advice.
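A minimal sketch of the suggested indexes plus the EXPLAIN check (the index names are just placeholders):
-- speeds up the lookup/DISTINCT on table1.itemId
CREATE INDEX IF NOT EXISTS idx_table1_itemId ON table1(itemId);
-- speeds up the WHERE clause of the correlated subquery
CREATE INDEX IF NOT EXISTS idx_table2_itemId_sessionId ON table2(itemId, sessionId);

-- shows which indexes the statement actually uses
EXPLAIN QUERY PLAN
UPDATE table1 SET visited = (
    SELECT COUNT(DISTINCT table1.itemId)
    FROM table2
    WHERE table2.itemId = table1.itemId
      AND table2.sessionId = 'eyoge2avao'
);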
I'm doing a bit of work which requires me to truncate DB2 character-based fields. Essentially, I need to discard all text which is found at or after the first alphabetic character.
e.g.
102048994BLAHBLAHBLAH
becomes:-
102048994
In SQL Server, this would be a doddle - PATINDEX would swoop in and save the day. Much celebration would ensue.
My problem is that I need to do this in DB2. Worse, the result needs to be used in a join query, also in DB2. I can't find an easy way to do this. Is there a PATINDEX equivalent in DB2?
Is there another way to solve this problem?
If need be, I'll hardcode 26 chained LOCATE functions to get my result, but if there is a better way, I am all ears.
SELECT TRANSLATE(lower(column), ' ', 'abcdefghijklmnopqrstuvwxyz')
FROM table
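The TRANSLATE call above only blanks out the letters; to actually truncate at the first alphabetic character it can be combined with LOCATE and SUBSTR. A rough sketch, assuming the numeric prefix itself contains no blanks and only a-z/A-Z letters need stripping (column and table names are placeholders):
SELECT CASE
         -- no letter anywhere: keep the whole value
         WHEN LOCATE(' ', TRANSLATE(LOWER(mycol), ' ', 'abcdefghijklmnopqrstuvwxyz')) = 0
           THEN mycol
         -- otherwise cut just before the first letter
         ELSE SUBSTR(mycol, 1,
                     LOCATE(' ', TRANSLATE(LOWER(mycol), ' ', 'abcdefghijklmnopqrstuvwxyz')) - 1)
       END AS trimmed_value
FROM mytable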
Write a small UDF (user-defined function) in C or Java that does your task.
Peter
I have a select statement and a cursor to iterate over the rows I get. The problem is that I have many columns (more than 500), so "fetch .. into #variable" is impossible for me. How can I iterate over the columns (one by one; I need to process the data)?
Thanks in advance,
n.b
Two choices.
1/ Use SSIS or ADO.Net to pore through your dataset row by row.
2/ Consider what you actually need to achieve and find a set-based approach.
My preference is for option 2. Let us know what you need done and we'll find a way.
Rob
You can build a SQL string using sys.columns or INFORMATION_SCHEMA queries. Here's a post I wrote on that.
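As a rough illustration of that idea (the table name dbo.MyWideTable is hypothetical; the same pattern works with INFORMATION_SCHEMA.COLUMNS):
-- build a comma-separated, quoted column list from the catalog
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STUFF((
    SELECT ', ' + QUOTENAME(c.name)
    FROM sys.columns AS c
    WHERE c.object_id = OBJECT_ID('dbo.MyWideTable')
    ORDER BY c.column_id
    FOR XML PATH('')), 1, 2, '');

-- splice the list into a statement and execute it
SET @sql = N'SELECT ' + @cols + N' FROM dbo.MyWideTable;';
EXEC sys.sp_executesql @sql;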
Some time ago I happened to resolve a PG-related problem with this SO question of mine.
Basically it's about using row_number over a partition in 8.4.
Sadly, now I have to create the same thing for 8.2, since one of my customers is on
8.2 and needs it desperately.
What I do now (on 8.4) is the following:
SELECT custId, custName, 'xyz-' || row_number() OVER (PARTITION BY custId)
AS custCode
Basically I'm counting the occurrences of custId and assigning custCodes from that.
(Just an example to show what I do; of course the real query is way more complex.)
I looked at the solutions provided to the question mentioned above, but didn't get them
working, since there's one more hurdle to clear. I don't run the SQL directly; I have to
embed it into an XML-based config file which creates a certain XML format from the query
results. So creating temp objects or procedures is not really an option.
So here's the question: does anyone have an idea how to port that solution of
mine to PG 8.2?
TIA
K
Use depesz's solution: http://www.depesz.com/index.php/2007/08/17/rownum-anyone-cumulative-sum-in-one-query/
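If the temporary sequence used in depesz's write-up is also ruled out by the config-file constraint, the classic pre-8.4 fallback is a correlated COUNT(*), which needs some column that orders rows within each custId. A sketch where the table name customers and the unique column custPk are assumptions:
SELECT c1.custId,
       c1.custName,
       -- rows of the same custId at or before this one = the row number
       'xyz-' || (SELECT COUNT(*)
                  FROM customers c2
                  WHERE c2.custId = c1.custId
                    AND c2.custPk <= c1.custPk) AS custCode
FROM customers c1;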