Solr AutoCommit not working with PostgreSQL

I am using Solr 4.10.0 with PostgreSQL 9.3. I am able to configure my Solr core properly using data-config.xml and search across the different tables in the database. However, I am not able to set up the autoCommit feature. Whenever a row gets added to a table, I expect it to start appearing in the results after the maxTime (1 minute), but that doesn't happen. I have to explicitly rebuild the index by doing a full data-import, and then everything works fine.
My solrconfig.xml is:
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>true</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
  </autoSoftCommit>
</updateHandler>
Is there something extra that needs to be done to use autoCommit here? I checked my log files as well, but there is no error / exception. What am I missing?

Please see the link below; I think this is what is happening in your case:
SOLR: What does an autoSoftCommit maxtime of -1 mean?
First off, note the expression ${solr.autoSoftCommit.maxTime:-1} within the <maxTime> tag. This is Solr's variable substitution syntax, described in detail in the reference guide. If that variable has not been substituted by any of those means, the default after the colon, -1, is taken as the value for that setting.
Setting maxTime to -1 effectively turns autoSoftCommit off.
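To enable soft commits, either hard-code an interval or supply the property at startup. A minimal sketch of the first option, assuming a 60-second interval is acceptable (the value is illustrative):
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>
Alternatively, keep the placeholder and start Solr with -Dsolr.autoSoftCommit.maxTime=60000 so the substitution supplies the value. Note also that a commit only makes documents that have already reached Solr visible; Solr does not poll PostgreSQL, so new rows still have to be pushed into the index (for example via a DataImportHandler delta-import) before any commit can expose them.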

Related

How to check when a row was last selected in PostgreSQL?

It's literally as the title says.
I'm examining an old database left by an earlier developer; apparently, instead of creating a new "Master" table, he created a table that contains constants in the form of JSONs. Now I want to check whether a given row is still used, and when it was last used.
The developer didn't provide any documentation during the handover, so I have to work out on my own how things are supposed to work, and the code is really messy. Since I can't seem to find this on Google, it seemed worth asking.
You cannot log past events. PostgreSQL does not retain that information.
The best you can do is:
Set log_statement = 'all'
Examine the statements in the log.
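A minimal sketch of those two steps on PostgreSQL 9.4 or later (on older versions, set log_statement in postgresql.conf and reload; the table and log-file names below are hypothetical):
ALTER SYSTEM SET log_statement = 'all';  -- requires superuser
SELECT pg_reload_conf();                 -- apply without a restart
-- Then watch the server log for reads of the constants table, e.g.:
-- grep 'json_constants' /var/log/postgresql/postgresql-main.log
Be aware that logging every statement generates a lot of log volume on a busy server, so turn it back off once you have your answer.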

DBeaver will not display certain schemas correctly in the Database Navigator

I'm using DBeaver 5.2.5.201811181655 with IBM DB2/400 v7r3.
I'm trying to see a schema called WRKCERTO, but Database Navigator will not show it. The schema is there and I have rights to it, and I'm able to run SQL scripts with its objects, such as SELECT * FROM WRKCERTO.DAILYT and it works.
To make matters stranger, when WRKCERTO is the only schema in the filters, the contents of a schema which I cannot identify are shown under the connection as if the connection is their parent. It doesn't show any schema as a node in the tree between the connection & Tables, Views, etc. The tables are familiar, but I cannot determine their exact schema, and as such also cannot query any of them because DBeaver doesn't know what schema to use.
The behavior of the Projects window is the same.
If I connect with SquirrelSQL 3.8.1 everything looks ok. I can see WRKCERTO along with all my other schemas as if nothing is different.
The screenshot below shows the issue. The schema I use most is F_CERTOB, which is visible under the connection ASP7, which currently has two schema filters: F_CERTOB and WRKCERTO. But as shown, WRKCERTO...isn't.
The connection TEST is an exact copy of ASP7, but its only filter is WRKCERTO. And as mentioned above, the items under the connection name cannot be identified.
I've gone through the DBeaver settings, but I cannot find any way to change this behavior. AND...this is the first time I've tried to use WRKCERTO. I tried to access it for the first time only a couple days ago, so it seems unlikely there are bad bits of information about it floating around in my system, or in DBeaver.
What information can I provide to help diagnose this issue...?
Please check the URL below; a similar issue is mentioned there along with a possible solution. You may want to try it and report back whether it works:
https://dbeaver.io/forum/viewtopic.php?f=2&t=911

Can I debug a PostgreSQL query sent from an external source that I can't edit?

I see how to debug queries stored as Functions in the database. But my problem is with an external QGIS plugin that connects to my Postgres 10.4 via network and does a complex query and calculations, and stores the results back into PostGIS tables:
FOR r IN c LOOP
    SELECT
        (1 - ST_LineLocatePoint(path.geom, ST_Intersection(r.geom, path.geom))) * ST_Length(path.geom)
    INTO
        station
    (continues ...)
When it errors, it just returns that line number as the failing location, with no clue how far it was through the loop over hundreds of features. (And any features it has processed are not stored to the output tables when it fails.) I don't know nearly enough about the plugin or about SQL to hack the external query, and I suspect that if it were a reasonable task, the plugin author would have included more revealing debug messages.
So is there some way I could use pgAdmin4 (or anything) from the server side to watch the query process? Even being able to see if it fails the first time through the loop or later would help immensely. Knowing the loop count at failure would point me to the exact problem feature. Being able to see "station" or "r.geom" would make it even easier.
Perfectly fine if the process is miserably slow or interferes with other queries, I'm the only user on this server.
This is not actually a way to watch the RiverGIS query in action, but it is the best I have found. It extracts the failing ST_Intersects() call from the RiverGIS code and runs it under your control, where you can display any clues you want.
When you're totally mystified where the RiverGIS problem might be, run this SQL query:
SELECT
    xs."XsecID" AS "XsecID",
    xs."ReachID" AS "ReachID",
    xs."Station" AS "Station",
    xs."RiverCode" AS "RiverCode",
    xs."ReachCode" AS "ReachCode",
    ST_Intersection(xs.geom, riv.geom) AS "Fraction"
FROM
    "<your project name>"."StreamCenterlines" AS riv,
    "<your project name>"."XSCutLines" AS xs
WHERE
    ST_Intersects(xs.geom, riv.geom)
ORDER BY xs."ReachID" ASC, xs."Station" DESC
Obviously replace <your project name> with the QGIS project name.
Also works for the BankLines step if you replace "StreamCenterlines" with "BankLines". Probably could be adapted to other situations where ST_Intersects() fails without a clue.
You'll get a listing with shorter geometry strings for good cross sections and double-length strings for bad ones. You'll probably need to widen your display column a lot to see this.
This works for me in pgAdmin4, or in QGIS3 -> Database -> DB Manager -> (click the wrench icon). You could select only the bad lines (see the sketch below), but I find the background info helpful.
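If you do want only the suspect rows, here is a hedged variation on the same query: bad cross sections typically intersect the centerline in more than one point, so filtering on the geometry type of the intersection (PostGIS's GeometryType()) should isolate them. The placeholders follow the query above:
SELECT
    xs."XsecID" AS "XsecID",
    GeometryType(ST_Intersection(xs.geom, riv.geom)) AS "IntersectionType"
FROM
    "<your project name>"."StreamCenterlines" AS riv,
    "<your project name>"."XSCutLines" AS xs
WHERE
    ST_Intersects(xs.geom, riv.geom)
    AND GeometryType(ST_Intersection(xs.geom, riv.geom)) <> 'POINT'
ORDER BY xs."XsecID"
Anything this returns intersects the centerline as a MULTIPOINT (or other compound geometry), which matches the double-length geometry strings described above.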

How can I set sql_mode to a list of values

I am trying to use a second-generation Cloud SQL instance and would like to change the SQL mode. In the UI, I can only set sql_mode to a single value from a drop-down list, not to several of them (e.g., "STRICT_TRANS_TABLES,ALLOW_INVALID_DATES"). What would be the best way to accomplish that?
Cheers,
Andres
I know this post is a year old, but I stumbled upon it now when I had a problem with sql_mode while migrating a database from MySQL 5.5 to Google Cloud SQL on 5.7. Although you can SET GLOBAL sql_mode to any single valid value you want, it took me hours to conclude that you cannot set multiple values on Google Cloud SQL.
Google only allows one value to be set in the sql_mode flag for now. If your problem is removing ONLY_FULL_GROUP_BY (the OP does not mention why he wants to customize the values) without losing the rest of the strict behavior, setting the value TRADITIONAL in the Console, or running gcloud sql instances patch <instance_name> --database-flags sql_mode=TRADITIONAL, will drop ONLY_FULL_GROUP_BY while keeping a strict set of modes.
From MySQL 5.7 Documentation:
Before MySQL 5.7.4, and in MySQL 5.7.8 and later, TRADITIONAL is equivalent to STRICT_TRANS_TABLES, STRICT_ALL_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, and NO_ENGINE_SUBSTITUTION.
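To confirm what the flag actually expanded to on your instance, inspect the effective value after the change; a quick check (standard MySQL, nothing Cloud SQL specific):
SELECT @@GLOBAL.sql_mode;
On a 5.7 instance set to TRADITIONAL, this should list the individual modes quoted above rather than the literal word TRADITIONAL, since MySQL expands combination modes when storing them.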
I would have added this as a comment above, but I can't yet due to lacking reputation points.
This is not supported right now by Google Cloud SQL. You can only set one value.
Another potential solution is to set sql_mode to HIGH_NOT_PRECEDENCE.
Once set in Cloud SQL, the string for sql_mode becomes just:
HIGH_NOT_PRECEDENCE
All other flags are removed!
I was coming from an older project, so this solution might not work for everyone, but it seems to be working well for us, and it's something that can be tried quickly.
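It can be applied with the same gcloud pattern as in the earlier answer (a sketch; the instance name is a placeholder):
gcloud sql instances patch <instance_name> --database-flags sql_mode=HIGH_NOT_PRECEDENCE
Because Cloud SQL accepts only one value here, whichever single mode you pick this way becomes the entire sql_mode string.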

What can cause an inability to set QRYTIMLMT in DB2 from .NET?

We are using IBM's data provider from C# .NET 4.5 to query an iSeries DB2 database. Normally this works very well, but for some queries DB2 reports the error "SQL0666 - SQL query exceeds specified time limit or storage limit".
I have tried setting the command timeout to 0, but to no effect. I have also tried to execute, in the manner explained here, the CHGQRYA command to set the QRYTIMLMT value to *NOMAX (or some other large value), but seemingly to no effect. However, if I use the same command to set the QRYSTGLMT (storage limit), it takes effect. Thus, I know that I am using the command correctly, and that it gets interpreted and executed by the database.
So, what can cause my inability to set the QRYTIMLMT value?
Also, our "DBA" has set the limit to *NOMAX on his end, and for queries not running through the .NET provider, everything works fine.
We're using IBM's client tools version V6R1 with service pack SI42423.
OK, so after lots of testing, I found the problem.
We're using the DeriveParameters() method to set the parameter types correctly, and if it is called before CommandTimeout is set, the latter has no effect(!). The solution was to reverse the order of these two statements.
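A minimal sketch of the fixed ordering, assuming the iSeries provider (IBM.Data.DB2.iSeries) and its command builder's DeriveParameters helper; the connection string and procedure name are illustrative:
using System.Data;
using IBM.Data.DB2.iSeries;  // assumed provider namespace

using (var conn = new iDB2Connection(connectionString))
using (var cmd = new iDB2Command("MYLIB.MYPROC", conn))
{
    conn.Open();
    cmd.CommandType = CommandType.StoredProcedure;

    // Set the timeout BEFORE deriving parameters; per the finding above,
    // calling DeriveParameters() first causes CommandTimeout to be ignored.
    cmd.CommandTimeout = 0;  // 0 = no client-side time limit

    iDB2CommandBuilder.DeriveParameters(cmd);  // now safe to derive

    // ... assign parameter values and execute ...
}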