I'm new to Solr, and I've been struggling for a few days to run a full index on a PostgreSQL 9.4 DB for an entity with about 117,000,000 entries.
I'm using Solr 5.3.1 on Windows 7 x64 with 16 GB of RAM. I don't intend to use this machine as a server; it's just for prototyping.
I kept getting this error on the x86 JDK when starting Solr with a plain solr start and no options. Then I tried:
solr start -m 2g, which results in Solr not coming up at all
solr start -m 1g, which makes Solr start, but after indexing about 87,000,000 entries it dies with an out-of-memory error.
That is exactly the same point at which it dies without any options, although in the admin dashboard I can see that the JVM heap is full.
So, since Solr warns me anyway to use an x64 JDK, I switched and now use 8u65. I started Solr with a 4g heap and ran the full import again. Again it threw the same exception after 87,000,000 entries, but this time the heap isn't even full (42%), and neither RAM nor swap is exhausted.
Does anyone have an idea what could be the reason for this behaviour?
Here is my data-config
<dataConfig>
<dataSource
type="JdbcDataSource"
driver="org.postgresql.Driver"
url="jdbc:postgresql://localhost:5432/dbname"
user="user"
password="secret"
readOnly="true"
autoCommit="false"
transactionIsolation="TRANSACTION_READ_COMMITTED"
holdability="CLOSE_CURSORS_AT_COMMIT" />
<entity name="hotel"
query="select * from someview;"
deltaImportQuery="select * from someview where solr_id = '${dataimporter.delta.id}'"
deltaQuery="select * from someview where changed > '${dataimporter.last_index_time}';">
<field name="id" column="id"/>
... etc for all 84 columns
In solrconfig.xml I have defined an updateRequestProcessorChain to generate a unique key while indexing, which seems to work.
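A minimal sketch of such a chain, assuming the UUIDUpdateProcessorFactory approach and "id" as the unique key field (my actual chain may differ in details):

<updateRequestProcessorChain name="uuid">
  <!-- fill the unique key field with a generated UUID when it is empty -->
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

The chain then has to be referenced from the import request handler, e.g. via <str name="update.chain">uuid</str> in its defaults.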
In schema.xml there are again 84 fields with type, indexed and other attributes.
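Each of those field definitions looks more or less like this (the field types and flags shown here are illustrative, not my exact schema):

<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="changed" type="tdate" indexed="true" stored="true"/>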
Here is the exception I'm getting. The messages are in German: "FEHLER: Speicher aufgebraucht" means "ERROR: out of memory", and "Fehler bei Anfrage mit Größe 48" means "Failed on request of size 48".
getNext() failed for query 'select * from someview;':org.apache.solr.handler.dataimport.DataImportHandlerException: org.postgresql.util.PSQLException: FEHLER: Speicher aufgebraucht
Detail: Fehler bei Anfrage mit Größe 48.
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:62)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:416)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.access$500(JdbcDataSource.java:296)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:331)
at org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:132)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:74)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
Caused by: org.postgresql.util.PSQLException: FEHLER: Speicher aufgebraucht
Detail: Fehler bei Anfrage mit Größe 48.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:2113)
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1964)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:408)
... 12 more
Thank you in advance
As MatsLindh pointed out, it was a JDBC error. In the meantime I worked with Hibernate Search and ran into the same error at exactly the same point (around 87,000,000 indexed entities). The trick was to commit more often.
So in this case I tried several things at once, and it worked (I don't know which option exactly did the trick):
1. Set maxDocs for autoCommit in solrconfig.xml to 100,000. I believe the default is to commit roughly 15 seconds after no new documents have been added, which never happens here because documents are added all the time, so the heap keeps filling up. (A sketch of these settings follows below.)
2. Set batchSize for the PostgreSQL JDBC driver to 100 (the default is 500).
3. Changed the evil 'select * from table' to 'select c1, c2, ..., c85 from table'
4. Updated the JDBC Driver from 9.4.1203 to 9.4.1207
5. Updated Java to 1.8u74
I think it worked due to 1 and/or 3; I will do some further testing and update my post.
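For reference, points 1 and 2 boil down to configuration roughly like the following (the autoCommit block goes into solrconfig.xml, the batchSize attribute onto the dataSource in data-config.xml; the values are the ones listed above):

<!-- solrconfig.xml: hard commit at the latest after 100,000 added documents -->
<autoCommit>
  <maxDocs>100000</maxDocs>
</autoCommit>

<!-- data-config.xml: fetch rows from PostgreSQL in smaller batches -->
<dataSource type="JdbcDataSource"
            driver="org.postgresql.Driver"
            url="jdbc:postgresql://localhost:5432/dbname"
            batchSize="100"
            readOnly="true" />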
While I was trying the indexing with Hibernate Search, I could see that the RAM allocated by the PostgreSQL server was freed at each commit, so RAM never became an issue again. That didn't happen here, and the DB server ended up at 85 GB of RAM, but it kept on working.
Related
We are planning to migrate from EDB PostgreSQL to community PostgreSQL, so we have started using the Npgsql library instead of the EDB library.
We configured the same connection string settings as we have in the EDB connection strings:
<add name="ConnectionString"
connectionString="Host='XXXXX';port=9999;username='XXXX';password='XXXX';Database='dev_xxxx';Timeout=300;**ConnectionLifetime=2;MinPoolSize=5;MaxPoolSize=25;**CommandTimeout=300;" />
<add name="NPGSQLConnectionString"
connectionString="Host='XXXX';port=9999;username='XXXX';password='XXXX';Database='dev_xxxx';Timeout=300;Connection Idle Lifetime=300;Connection Pruning Interval = 10;**MinPoolSize=5;MaxPoolSize=25;**CommandTimeout=300;**ConnectionLifetime=2;**" />
But when I switch to the Npgsql library, we get the following error:
Npgsql.NpgsqlException (0x80004005): The connection pool has been exhausted, either raise MaxPoolSize (currently 25) or Timeout (currently 300 seconds) ---> System.TimeoutException: The operation has timed out.
I understand that I could increase the max pool size, but the same configuration works with EDB?
I tried increasing MaxPoolSize to 50, but it makes no difference, so I suspect a connection leak; however, I have no way to test or monitor it (see the query sketch below).
Npgsql version: 5.0.1.1
PostgreSQL version: 9.5
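One way to see whether connections are actually piling up or leaking on the server side is to watch pg_stat_activity on the PostgreSQL server while the application runs; a sketch (the database name is a placeholder):

-- count server-side connections per state for the application's database
SELECT state, count(*)
FROM pg_stat_activity
WHERE datname = 'dev_xxxx'
GROUP BY state;

-- list the connections that have been sitting idle the longest
SELECT pid, application_name, state, state_change
FROM pg_stat_activity
WHERE datname = 'dev_xxxx'
ORDER BY state_change;

A growing number of rows in state 'idle' that never shrinks back toward MinPoolSize would point to connections not being returned or disposed properly.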
I have a system where we perform a large number of insert and update queries (some upserts as well).
Occasionally I see an error in my logs that states:
PG::ObjectNotInPrerequisiteState: ERROR: attempted to delete invisible tuple
INSERT INTO call_records(plain_crn,efd,acd,slt,slr,ror,raw_processing_data,parsed_json,timestamp,active,created_at,updated_at) VALUES (9873,2016030233,'R',0,0,'PKC01','\x02000086000181f9000101007 ... ')
What I fail to understand is that no DELETE query is performed (the error appears on an INSERT statement), yet the error about deleting a tuple is thrown.
I have been googling around this issue, but have found no conclusive explanation of why this happens.
Version of Postgres:
database=# select version();
version
--------------------------------------------------------------------------------------------------------------
PostgreSQL 9.5.2 on x86_64-apple-darwin14.5.0, compiled by Apple LLVM version 7.0.0 (clang-700.1.76), 64-bit
(1 row)
Any clue?
I'm running a workflow in PowerCenter that is constantly getting an SQL1224N error.
This process executes a query against one table (POLIZA) with 800k rows; it retrieves the first 10k rows and then starts querying another table with 75M rows. At this moment an idle-thread error appears in DB2, but the PowerCenter process keeps running and retrieving the 75M rows. When that finishes (after 20 minutes), the errors related to the first table come up:
[IBM][CLI Driver] SQL1224N A database agent could not be started to service a request, or was terminated as a result of a database system shutdown or a force command. SQLSTATE=55032
sqlstate = 40003
[IBM][CLI Driver] SQL1224N A database agent could not be started to service a request, or was terminated as a result of a database system shutdown or a force command. SQLSTATE=55032
sqlstate = 40003
Database driver error...
Function Name : Fetch
SQL Stmt : SELECT POLIZA.BSPOL_BSCODCIA, POLIZA.BSPOL_BSRAMOCO
FROM POLIZA
WHERE
EXA01.POLIZA.BSPOL_IDEMPR='0015' for read only with ur
Native error code = -1224
DB2 Fatal Error].
I have a similar process running against the same two tables and it is working fine; the only difference I can see is that the DB2 user is different.
Any idea how I can fix this?
Regards
The common causes for -1224 are:
Your instance or database has crashed, or
Something/somebody is forcing off your application (FORCE APPLICATION or equivalent)
As for a crash, I think you would know by now, since it typically requires a database or instance restart. At any rate, can you please have a look into your DIAGPATH to check for any FODC* directories whose timestamps match the timestamps of the -1224 errors?
As for the FORCE case, you should find some evidence of the -1224 in db2diag.log. Try searching for the decimal -1224, but also for its hex representation (0xFFFFFB38).
I have followed the instructions from here and installed the updated plugin. The error has become:
Query error
Message: net.sf.jasperreports.engine.JRException:
Error executing SQL statement for : null Level: SEVERE Stack Trace:
Error executing SQL statement for : null com.jaspersoft.hadoop.hive.HiveFieldsProvider.getFields(HiveFieldsProvider.java:113)
com.jaspersoft.ireport.hadoop.hive.designer.HiveFieldsProvider.getFields(HiveFieldsProvider.java:32)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.readFields(HiveConnection.java:154)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
After downgrading to 4.5.0, the error became the following (the connection is verified and I am able to query the table from Hive):
Query error
Message: net.sf.jasperreports.engine.JRException: Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:14 Table not found 'panstats' Level:
SEVERE Stack Trace: Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:14 Table not found 'panstats'
com.jaspersoft.hadoop.hive.HiveFieldsProvider.getFields(HiveFieldsProvider.java:260)
com.jaspersoft.ireport.hadoop.hive.designer.HiveFieldsProvider.getFields(HiveFieldsProvider.java:32)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.readFields(HiveConnection.java:146)
com.jaspersoft.ireport.designer.wizards.ConnectionSelectionWizardPanel.validate(ConnectionSelectionWizardPanel.java:146)
org.openide.WizardDescriptor$7.run(WizardDescriptor.java:1357)
org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
I am using Hive 0.8.1 on OS X Lion 10.7.4.
Is your query as simple as select * from panstats? I suspect that the query is not the problem, but you'll want to confirm that first.
You could try querying that table from a tool like SQuirreL SQL. If that tool also cannot get the data, then it's probably a Hive issue. If it can... then it's probably an issue with iReport or the Hive plugin.
It sounds like Hive is not configured to share metadata. It uses the annoying default configuration with Derby, so outside connections don't get access to your panstats table. I came across this article about configuring Hive earlier this year. It documents using MySQL instead of Derby. If that's indeed the problem, then it's just a Hive configuration issue, and following that article would solve things both for SQuirreL and for iReport.
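If that is indeed the case, the metastore part of hive-site.xml ends up looking roughly like this (host, database name and credentials are placeholders):

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/hive_metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>secret</value>
</property>

With the metadata living in MySQL instead of an embedded Derby database, external tools such as SQuirreL and iReport can see the same tables.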
I'm working with Seam 2.2.2 + Hibernate + RichFaces + JBoss 5.1 + PostgreSQL.
I have a module which needs to load some data from the database. Easy. The problem is that in development it works fine, 100%, but when I deploy to my production server and try to get the data, an error is raised:
could not read column value from result set: fechahor9_504_; Bad value for type timestamp : [C#122e5cf
SQL Error: 0, SQLState: 22007
Bad value for type timestamp : [C#122e5cf
javax.persistence.PersistenceException: org.hibernate.exception.DataException: could not execute query
[more errors]
Caused by: org.postgresql.util.PSQLException: Bad value for type timestamp : [C#122e5cf
at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:232)
[more errors]
Caused by: java.lang.NumberFormatException: Trailing junk on timestamp: ''
at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:226)
I can't understand why it works on my machine (development) and not in production. Any clues? Has anyone run into the same problem? It is exactly the same build.
Stefano Travelli was right. I checked JBoss on production, and there was an old JDBC driver in [jboss_dir]/common/lib from an old Java Web Start application (not developed by me). After deleting that driver, it works fine. I should check whether the old application is still needed and, if so, whether it still works without that driver or with an upgraded version.
Not sure what the driver story is, but for me the problem showed up when JDBC tried to map a bigint from the DB to
myObject.setDate(Date date) {...}
while the other, "JDBC friendly" setter was ignored for some reason:
myObject.setDate(long date) {...}
So removing the Date setter and leaving only the long one resolves the problem.
This is a big workaround, but it may help someone out there :)