FREETEXTTABLE always has a rank of 0 - tsql

I'm using SQL Server 2008, and if I perform the following query:
SELECT
*
FROM
FREETEXTTABLE(SomeTable, Name, 'a name that I know exists')
I get the rows back that I would expect, but the rank is always 0.
Searching for a solution to this problem, I found this question on the Microsoft ASP.NET forum, and sure enough if I add:
ALTER FULLTEXT CATALOG MyCatalog REBUILD
I start to get a rank - but only temporarily.
I don't want to have to rebuild my catalog every time I do a search, especially when I have lots of data in my database; and if I add the rebuild to my sproc directly before the query, the query returns no results anyway, presumably because the catalog hasn't finished being rebuilt. There seem to be other people having this and similar problems, but I have been unable to find a solution. Any ideas?

I am running into the same issue, and the currently accepted answer is not a solution for me.
Yes, the ranking is done as described in that answer, but that is in no way a reason for getting inconsistent results once some time has passed since the last catalog rebuild. Ranking should not change dramatically upon rebuild, and even less so a few minutes after a rebuild...
For me, there is a bug in FREETEXTTABLE ranking. (It is a bug which does not affect CONTAINSTABLE ranking: I have checked this myself with my own buggy catalog, and it is also reported in this Microsoft forum post.)
From another Microsoft forum post, it seems this bug occurs only in catalogs that have very few rows indexed. Adding data to the catalog makes the bug disappear.
So here is my answer, taken from Pavel Valenta in yet another Microsoft forum post:
If your real catalog is not going to have more than a few hundred rows indexed, add a dummy table to your catalog in order to have more rows indexed.
This will not pollute your results, due to the way the queries are built. Yes, this seems like quite a strange fix, but it is the only one that solved the trouble for me.
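As an illustration, here is a minimal sketch of that workaround, assuming a catalog named MyCatalog as in the question (the dummy table name, its contents, and the row count are my own placeholders):
CREATE TABLE DummyContent (
ID INT IDENTITY NOT NULL CONSTRAINT PK_DummyContent PRIMARY KEY,
Txt NVARCHAR(400) NOT NULL
);
-- Insert a few hundred filler rows so the catalog has more indexed data.
INSERT INTO DummyContent (Txt)
SELECT TOP (500) 'filler ' + CAST(object_id AS NVARCHAR(20))
FROM sys.all_objects;
-- Add the dummy table to the same full-text catalog.
CREATE FULLTEXT INDEX ON DummyContent (Txt) KEY INDEX PK_DummyContent ON MyCatalog;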
One final note: I had this problem with SQL Server 2005 SP4; I have not tested it with 2008. (The question is about 2008.)

The rank is relative to the other results returned by the query and is therefore only useful for sorting the returned rows by relevance. There is detailed information on the ranking method.
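For example, since the rank is only meaningful relative to the other rows in the same result set, the usual pattern is to join the FREETEXTTABLE results back to the base table and sort by rank. A sketch based on the query in the question (the ID join column is an assumption; [KEY] holds the indexed table's unique key):
SELECT st.*, ftt.[RANK]
FROM FREETEXTTABLE(SomeTable, Name, 'a name that I know exists') AS ftt
JOIN SomeTable AS st ON st.ID = ftt.[KEY] -- join on the full-text key column
ORDER BY ftt.[RANK] DESC; -- highest relevance first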

Related

Time of first occurrence of the query from pg_stat_statements. Possible to get?

I'm debugging a DB performance issue. There's a lead that the issue was introduced after a certain deploy, e.g. when the DB started to serve some new queries.
I'm looking to correlate deployment time with the performance issues, and would like to identify the queries that are causing this.
Using pg_stat_statements has been very handy so far. Unfortunately it does not store the time stamp of the first occurrence of each query.
Are there any auxiliary tables I could look into to see the time of first occurrence of queries?
Ideally, if this information were available in pg_stat_statements, I'd write a query like this:
select queryid from pg_stat_statements where date(first_run) = '2020-04-01';
Additionally, it'd be cool to see last_run as well, to filter out old queries that no longer execute at all but remain in pg_stat_statements. That's more of a nice-to-have than a necessity, though.
This information is not stored anywhere, and indeed it would not be very useful. If the problem statement is a new one, you can easily identify it in your application code. If it is not a new statement, but something made the query slower, knowing when the query was first executed won't help you.
Is your source code not under version control?
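If you do want this information going forward, one workaround is to record it yourself by periodically snapshotting pg_stat_statements (e.g. from cron) into a tracking table. A hedged sketch; the query_first_seen table is my own invention, not part of the extension:
-- One-time setup: track when each queryid was first and last seen.
CREATE TABLE query_first_seen (
queryid bigint PRIMARY KEY,
query text,
first_run timestamptz NOT NULL DEFAULT now(),
last_run timestamptz NOT NULL DEFAULT now()
);
-- Run periodically; new queryids get first_run, known ones get last_run updated.
-- DISTINCT ON because a queryid can appear once per user/database combination.
INSERT INTO query_first_seen (queryid, query)
SELECT DISTINCT ON (queryid) queryid, query
FROM pg_stat_statements
ORDER BY queryid
ON CONFLICT (queryid) DO UPDATE SET last_run = now();
With that in place, the query from the question becomes possible: select queryid from query_first_seen where date(first_run) = '2020-04-01';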

Unit tests producing different results when using PostgreSQL

I've been working on a module that works pretty well when using MySQL, but when I try to run the unit tests under PostgreSQL (using Travis), I get an error.
The module itself is here: https://github.com/silvercommerce/taxable-currency
An example failed build is here: https://travis-ci.org/silvercommerce/taxable-currency/jobs/546838724
I don't have a huge amount of experience using PostgreSQL, and I am not really sure why this might be happening. The only thing I can think of that might cause this is that I am manually setting the IDs in my fixtures file, and maybe PostgreSQL does not support this?
If that is not the case, does anyone have an idea what might be causing this issue?
Edit: I have looked into this again, and the errors appear to be caused by this assertion, which should find the Tax Rate vat but instead finds the Tax Rate reduced.
I am guessing there is an issue in my logic that is causing the incorrect rate to be returned, though I am unsure why...
In the end, it appears that Postgres has a different default sort order from MySQL (https://www.postgresql.org/docs/9.1/queries-order.html). The line of interest is:
The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on
In the end I didn't really need to test a list with multiple items, so instead I just removed the additional items.
If you are working on something that needs to support MySQL and Postgres though, you might need to consider defining a consistent sort order as part of your query.
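For example, a minimal sketch of making the order explicit (the table and column names here are placeholders, not taken from the module above):
-- Without ORDER BY, both MySQL and Postgres are free to return rows in any order;
-- spell out the order your test expects instead.
SELECT * FROM TaxRate ORDER BY Title ASC, ID ASC;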

Why is Crystal Reports Query so slow?

I have many Crystal Reports against the same database. Some execute quickly given the same date parameters, and many fields are the same, as are the tables they access. One of my reports that used to run quickly is now running very slowly, and I can see it looking through all the records - shown at the bottom as 0 of 100000 - until it finds records. I have no idea what I may have changed to make it do this. Some reports still run fast and some do not, and these findings are consistent across the reports I am talking about. Does anyone know what setting might be causing this?
I have tried looking for any subtle differences between them, and I cannot see anything. Many of them were clones of the original (which still works fast).
The performance section of my Crystal Reports book states that if the WHERE clause cannot be translated it will be ignored and all records will be processed - which is what this looks like - though I have a valid WHERE clause when I check it in the report.
'Use Indexes Or Server For Speed' is checked. All other settings in Report Options are identical.
Thanks
You can do some troubleshooting:
Try running your query directly against the database and see how long it takes.
Is there any business logic added in your report?
Maybe also try putting the same query in a fresh report and see if it takes a similar amount of time.
Also try debugging your application to see if some part of your code is making the report slow.
Are you running it against a local database or on some server?
Also, if you can share your query, I can take a look.
Let me know if you need more help.

Performance Tuning

How can I find queries that have been running for a long time, and what are the steps for tuning a query? (Oracle)
Run explain plan for select .... to see what Oracle is doing with your query.
Post your query here so that we can look at it and help you out.
Check out the Oracle Performance Tuning FAQ for some tricks-of-the-trade, if you will.
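For example, a minimal sketch (the query itself is a placeholder):
EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 10; -- placeholder query
-- Then display the plan that Oracle produced:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);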
You can capture the query by selecting from v$sql or v$sqltext.
If you are not familiar with it, look up 'Explain Plan' in the Oracle documentation. There should be plenty on it in the performance tuning guide.
Have a look at Quest Software's Toad for a third-party tool that helps in this area too.
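For instance, a hedged sketch of pulling expensive SQL from v$sql (the one-second threshold is an arbitrary assumption):
-- Top statements by elapsed time; elapsed_time is in microseconds.
SELECT sql_id, executions, elapsed_time, sql_text
FROM v$sql
WHERE elapsed_time > 1000000
ORDER BY elapsed_time DESC;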
K
Unfortunately your question is not expressed clearly. The other answers have already tackled the issue of tuning a known bad query, but another interpretation is that you want to monitor your database to find poorly performing queries.
If you don't have Enterprise Edition with the Diagnostics Pack - and not many of us do - your best bet is to run statspack snapshots on a regular basis. This will give you a lot of information about your system, including which queries take a long time to complete and which queries consume a lot of your system's resources. You can find out more about statspack here.
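Taking a snapshot is a single call, run as the PERFSTAT user and typically scheduled as a job:
EXEC statspack.snap; -- record a snapshot of current performance statistics
The spreport.sql script that ships with Oracle then builds a report between any two snapshots.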
If you do not want to use OEM, then you can find this out by querying the data dictionary.
First, find the long-running query. If it is currently executing, you can join gv$session to find which session has been running for a long time, then go to gv$sql for the SQL details; the column to look at is last_call_et. If the SQL executed some time in the past, you can use the dba_hist_snapshot, dba_hist_sqlstat, and dba_hist_sqltext views to find the offending SQL.
Once you have the query, you can check what plan it is picking, from the dba_hist_sql_plan view if the SQL executed in the past, or from gv$sql_plan if it is currently executing.
Now analyze the execution plan and see whether it is using the right indexes, joins, and so on; if not, tune those.
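As an illustration of the first step, a hedged sketch of the gv$session / gv$sql join (the 600-second threshold is an assumption):
-- Active sessions whose current call has been running a long time.
-- last_call_et is in seconds.
SELECT s.inst_id, s.sid, s.serial#, s.username, s.last_call_et, q.sql_id, q.sql_text
FROM gv$session s
JOIN gv$sql q
ON q.inst_id = s.inst_id
AND q.sql_id = s.sql_id
AND q.child_number = s.sql_child_number
WHERE s.status = 'ACTIVE'
AND s.username IS NOT NULL
AND s.last_call_et > 600
ORDER BY s.last_call_et DESC;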
Let me know which step you have a problem with; I can help with those.

How can I limit DataSet.WriteXML output to typed columns?

I'm trying to store a lightly filtered copy of a database for offline reference, using ADO.NET DataSets. There are some columns I need not to take with me. So far, it looks like my options are:
Put up with the columns
Get unmaintainably clever about the way I SELECT rows for the DataSet
Hack at the XML output to delete the columns
I've deleted the columns' entries in the DataSet designer. WriteXml still outputs them, to my dismay. If there's a way to limit WriteXml's output to typed rows, I'd love to hear it.
I tried to filter the columns out with careful SELECT statements, but ended up with a ConstraintException I couldn't solve. Replacing one table's query with SELECT * did the trick. I suspect I could solve the exception given enough time. I also suspect it could come back again as we evolve the schema. I'd prefer not to hand such a maintenance problem to my successors.
All told, I think it'll be easiest to filter the XML output. I need to compress it, store it, and later load, decompress, and read it back into a DataSet. Filtering the XML is only one more step, and, better yet, it will only need to happen once a week or so.
Can I change DataSet's behaviour? Should I filter the XML? Is there some fiendishly simple way I can query pretty much, but not quite, everything without running into ConstraintException? Or is my approach entirely wrong? I'd much appreciate your suggestions.
UPDATE: It turns out I copped ConstraintException for a simple reason: I'd forgotten to delete a strongly typed column from one DataTable. It wasn't allowed to be NULL. When I selected all the columns except that column, the value was NULL, and… and, yes, that's profoundly embarrassing, thank you so much for asking.
It's as easy as Table.Columns.Remove("UnwantedColumnName"). I got the lead from Mehrdad's wonderfully terse answer to another question. I was delighted when Table.Columns turned out to be malleable.
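For reference, a minimal C# sketch of that approach before serializing (the loader method, table name, and file name are placeholders):
using System.Data;

// Load the DataSet however you normally fill it (placeholder method).
DataSet ds = LoadFilteredDataSet();
// Drop the columns you don't want in the offline copy...
ds.Tables["SomeTable"].Columns.Remove("UnwantedColumnName");
// ...then WriteXml only serializes what's left.
ds.WriteXml("offline-copy.xml", XmlWriteMode.WriteSchema);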