How to solve this com.orientechnologies.common.io.OIOException in OrientDB?

When I try to update a record with some data I'm getting this exception:
Caused by: com.orientechnologies.common.io.OIOException: Impossible to write a chunk of length:83644944 max allowed chunk length:16777216 see NETWORK_BINARY_MAX_CONTENT_LENGTH settings
at com.orientechnologies.orient.client.remote.OStorageRemote.handleIOException(OStorageRemote.java:321)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:296)
at com.orientechnologies.orient.client.remote.OStorageRemote.asyncNetworkOperation(OStorageRemote.java:163)
at com.orientechnologies.orient.client.remote.OStorageRemote.createRecord(OStorageRemote.java:564)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeSaveRecord(ODatabaseDocumentTx.java:2202)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveNew(OTransactionNoTx.java:241)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.saveRecord(OTransactionNoTx.java:171)
... 56 more
Caused by: com.orientechnologies.common.io.OIOException: Impossible to write a chunk of length:83644944 max allowed chunk length:16777216 see NETWORK_BINARY_MAX_CONTENT_LENGTH settings
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.writeBytes(OChannelBinary.java:273)
at com.orientechnologies.orient.enterprise.channel.binary.OChannelBinary.writeBytes(OChannelBinary.java:259)
at com.orientechnologies.orient.client.remote.OStorageRemote$5.execute(OStorageRemote.java:571)
at com.orientechnologies.orient.client.remote.OStorageRemote$1.execute(OStorageRemote.java:167)
at com.orientechnologies.orient.client.remote.OStorageRemote.baseNetworkOperation(OStorageRemote.java:252)
... 61 more
How and where do I need to increase this maxLength setting?
My OrientDB version is: 2.2.34
[Image of the table structure]
Here I am trying to add BINARY data to the screenshot column.

You can change this setting as follows:
In the orientdb-server-config.xml file, add or change the following entry:
<entry name="network.binary.maxLength" value="<a value in KB here>"/>
Or at startup, by specifying the following parameter on the command line:
-Dnetwork.binary.maxLength=<aValueInKb>
e.g.
-Dnetwork.binary.maxLength=32768
If you are running embedded, you can do the following before you start the server:
OGlobalConfiguration.NETWORK_BINARY_MAX_CONTENT_LENGTH.setValue(32768);
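For example, on a remote client this could look like the sketch below. This is only a sketch: the database URL, credentials, class and field names are placeholders, and it assumes OGlobalConfiguration's setValue(...) setter in 2.2.x. Note that the value is in KB and must be larger than the failing record (roughly 80 MB in the error above), and the server has to allow at least the same size:

import com.orientechnologies.orient.core.config.OGlobalConfiguration;
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class SaveLargeRecord {
    public static void main(String[] args) {
        // The stack trace shows the limit being enforced on the client side (OStorageRemote),
        // so raise it in the client JVM as well, before the connection is opened.
        // 131072 KB = 128 MB, comfortably above the ~80 MB chunk from the error message.
        OGlobalConfiguration.NETWORK_BINARY_MAX_CONTENT_LENGTH.setValue(131072);

        // Placeholder connection details; adjust to your environment.
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("remote:localhost/mydb").open("admin", "admin");
        try {
            byte[] screenshot = new byte[80 * 1024 * 1024]; // the large BINARY payload
            ODocument doc = new ODocument("MyTable"); // placeholder class name
            doc.field("screenshot", screenshot);
            doc.save();
        } finally {
            db.close();
        }
    }
}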

Related

Cryengine 3 CGF Upload Failed

I see this error when I stage an object that I imported from 3ds Max. The error I get is:
[Warning] CGF Upload failed : Directory stream 8 cannot be from 32-bit to 16-bit format because it contains directory 65535 [File=demo/3D/Sofa.cgf].
What method can I use to solve the problem?
If you transfer the object split into separate groups, the problem disappears. I'm open to other suggestions.

OrientDB multi-node configuration: hazelcast issues

I have an OrientDB 2.1.4 cluster of 3 nodes with a basic configuration. The only change I made in hazelcast.xml is to replace multicast with an explicit tcp-ip host list.
After a heavy request to the DB (a select without joins, about 300k rows in the result set), OrientDB stops responding to network connection attempts from the application (OrientDB Studio still works), and the following exceptions continuously appear in the logs:
On the master node:
2016-02-24 10:02:17:647 INFO [10.10.10.124]:2434 [zertodb] [3.3.5] Remaining migration tasks in queue => 1 [InternalPartitionService]
[10.10.10.124]:2434 [zertodb] [3.3.5] Received data format is invalid. (An old version of Hazelcast may be running here.)
com.hazelcast.nio.serialization.HazelcastSerializationException: java.io.UTFDataFormatException: Length check failed, maybe broken bytestream or wrong stream position
at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:354)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:341)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:454)
at com.hazelcast.cluster.MulticastService.receive(MulticastService.java:155)
at com.hazelcast.cluster.MulticastService.run(MulticastService.java:113)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.UTFDataFormatException: Length check failed, maybe broken bytestream or wrong stream position
at com.hazelcast.nio.UTFEncoderDecoder.readUTF0(UTFEncoderDecoder.java:505)
at com.hazelcast.nio.UTFEncoderDecoder.readUTF(UTFEncoderDecoder.java:77)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readUTF(ByteArrayObjectDataInput.java:450)
at com.hazelcast.cluster.ConfigCheck.readData(ConfigCheck.java:219)
at com.hazelcast.cluster.JoinMessage.readData(JoinMessage.java:80)
at com.hazelcast.cluster.JoinRequest.readData(JoinRequest.java:64)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:111)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:335)
... 4 more
On the other nodes:
[10.10.10.194]:2434 [zertodb] [3.3.5] Received data format is invalid. (An old version of Hazelcast may be running here.)
com.hazelcast.nio.serialization.HazelcastSerializationException: java.io.StreamCorruptedException: invalid type code: 00
at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:354)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:341)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:454)
at com.hazelcast.cluster.ConfigCheck.readData(ConfigCheck.java:215)
at com.hazelcast.cluster.JoinMessage.readData(JoinMessage.java:80)
at com.hazelcast.cluster.JoinRequest.readData(JoinRequest.java:64)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:111)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:335)
at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readObject(ByteArrayObjectDataInput.java:454)
at com.hazelcast.cluster.MulticastService.receive(MulticastService.java:155)
at com.hazelcast.cluster.MulticastService.run(MulticastService.java:113)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.StreamCorruptedException: invalid type code: 00
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1379)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at com.hazelcast.nio.serialization.DefaultSerializers$ObjectSerializer.read(DefaultSerializers.java:196)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.nio.serialization.SerializationServiceImpl.readObject(SerializationServiceImpl.java:335)
... 12 more
The same query with a smaller result set works fine.
I found this Hazelcast issue, which looks like your problem: https://github.com/hazelcast/hazelcast/issues/4327
Hope it helps.
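OrientDB reads its Hazelcast settings from hazelcast.xml, but as a rough illustration of the join setup you are after (multicast switched off entirely, members listed explicitly over tcp-ip), the equivalent Hazelcast 3.3 programmatic configuration would look something like the sketch below; the group name and member addresses are taken from your logs and are only examples:

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class TcpIpJoinSketch {
    public static void main(String[] args) {
        Config config = new Config();
        config.getGroupConfig().setName("zertodb");

        JoinConfig join = config.getNetworkConfig().getJoin();
        // Multicast has to be disabled explicitly; otherwise the node keeps listening
        // and can pick up stray packets from other Hazelcast clusters on the LAN,
        // which is the kind of situation the linked issue discusses.
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig()
                .setEnabled(true)
                .addMember("10.10.10.124")
                .addMember("10.10.10.194");

        // Started here only to illustrate; OrientDB starts its own instance from hazelcast.xml.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}

Your hazelcast.xml should express the same thing: the multicast element disabled and an explicit tcp-ip member list. The stack traces above still go through MulticastService, which suggests multicast traffic is still being received and deserialized.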

Writing from PIG to MongoDB - error 2116 - mongodb schema not found

I am using Hadoop on Windows Server 2008 (Hortonworks distribution).
We are using Pig and trying to write data into MongoDB. I am not able to read from or write to MongoDB, and I'm not sure what the issue is; we get error 2116, which states that the MongoDB schema is empty.
The commands used:
register 'D:\hdp\pig-0.12.1.2.1.1.0-1621\lib\mongo-hadoop-core-1.2.0.jar'
register 'D:\mongo-hadoop-2.2-1.2.0\mongo-hadoop-2.2-1.2.0\mongo-hadoop-1.2.0.jar'
register 'D:\mongo-hadoop-2.2-1.2.0\mongo-hadoop-2.2-1.2.0\mongo-hadoop-pig-1.2.0.jar'
register 'D:\hdp\hadoop-2.4.0.2.1.1.0-1621\lib\mongo-2.6.1.jar'
set mapred.map.tasks.speculative.execution false;
set mapred.reduce.tasks.speculative.execution false;
SET mapreduce.fileoutputcommitter.marksuccessfuljobs false;
SalesLoading = load 'mongodb://localhost/benvenuedb.SalesData' using com.mongodb.hadoop.pig.MongoLoader();
store SalesLoading into 'mongodb://localhost:27017/benvenuedb.SalesData1' using com.mongodb.hadoop.pig.MongoStorage();
Error Messages
Pig Stack Trace
---------------
ERROR 2116:
<line 5, column 0> Output Location Validation Failed for: 'mongodb://127.0.0.1:27017/benvenuedb.SalesData More info to follow:
The value of property mongo.pig.output.schema must not be null
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1002: Unable to store alias salesLoading
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1637)
at org.apache.pig.PigServer.registerQuery(PigServer.java:577)
at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1093)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:501)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:541)
at org.apache.pig.Main.main(Main.java:156)
Caused by: org.apache.pig.impl.plan.VisitorException: ERROR 2116:
<line 5, column 0> Output Location Validation Failed for: 'mongodb://127.0.0.1:27017/benvenuedb.SalesData More info to follow:
The value of property mongo.pig.output.schema must not be null
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:75)
at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:66)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:66)
at org.apache.pig.newplan.DepthFirstWalker.walk(DepthFirstWalker.java:53)
at org.apache.pig.newplan.PlanVisitor.visit(PlanVisitor.java:52)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator.validate(InputOutputFileValidator.java:45)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.compile(HExecutionEngine.java:303)
at org.apache.pig.PigServer.compilePp(PigServer.java:1382)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.execute(PigServer.java:1299)
at org.apache.pig.PigServer.access$400(PigServer.java:124)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1632)
... 8 more
Caused by: java.lang.IllegalArgumentException: The value of property mongo.pig.output.schema must not be null
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:971)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:953)
at com.mongodb.hadoop.pig.MongoStorage.setStoreLocation(MongoStorage.java:249)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:68)
... 20 more
I issued netstat -an to see the open ports.
The local address is 10.69.148.89; I do not see port 27017 open on this IP, but 127.0.0.1 has 27017 open. There is something simple we are overlooking.
We need some help; we have spent over two days on this with no resolution.
Have you tried setting the property it says is missing?
The value of property mongo.pig.output.schema must not be null
There are some issues writing to MongoDB from Pig, especially when you use the Hortonworks Windows distribution. I have broken this into three steps:
1. Write to the HDFS filesystem as a JSON file using JSONStorage();
2. Move the HDFS file to the Windows filesystem;
3. Load the JSON file into MongoDB (a sketch of this step follows below).
I am open to suggestions if anyone has attempted this in a different way.
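For step 3, a minimal sketch of loading a JSON-lines file into MongoDB with the Java driver is shown below; the host, database, collection and file names are placeholders, and it assumes a 2.x mongo-java-driver recent enough to provide MongoClient (with the older mongo-2.6.1.jar registered above, the com.mongodb.Mongo class is used the same way). mongoimport would be a command-line alternative:

import java.io.BufferedReader;
import java.io.FileReader;

import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.util.JSON;

public class LoadJsonIntoMongo {
    public static void main(String[] args) throws Exception {
        // Placeholders: adjust host, database, collection and file path to your setup.
        MongoClient mongo = new MongoClient("localhost", 27017);
        try {
            DB db = mongo.getDB("benvenuedb");
            DBCollection collection = db.getCollection("SalesData1");

            BufferedReader reader = new BufferedReader(new FileReader("salesdata.json"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Each line is expected to hold one JSON document, as written out by Pig.
                    DBObject doc = (DBObject) JSON.parse(line);
                    collection.insert(doc);
                }
            } finally {
                reader.close();
            }
        } finally {
            mongo.close();
        }
    }
}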

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run Pig locally (installed using Homebrew) to test a script. However, I get the following error when I attempt to run a simple dump from the interactive prompt pig -x local:
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both versions 0.9.2 and 0.10.0.
What am I missing?
Edit: I just checked a direct download (vs. Homebrew) and it doesn't seem to work either.
You should check that your Hadoop configuration files contain valid configuration data.
Have a look in your hadoop/conf directory, in particular inside:
hdfs-site.xml
mapred-site.xml
core-site.xml
I finally worked out what the problem was. I ended up having to use dtruss -p on the pig/java process, which revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, it all fell into place quickly.
It was picking up the proxy excludes from my network connections, which had, as far as I can tell, &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in it. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
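If you run into something similar, a small check like the sketch below shows which proxy values the JVM is picking up before they leak into the generated job.xml; the property names are the ones from the offending lines above, and the override at the end is only an example:

import java.util.Arrays;
import java.util.List;

public class ProxyPropertyCheck {
    public static void main(String[] args) {
        // The standard JVM proxy properties that ended up in the generated job.xml.
        List<String> keys = Arrays.asList("http.nonProxyHosts", "ftp.nonProxyHosts", "socksNonProxyHosts");
        for (String key : keys) {
            System.out.println(key + " = " + System.getProperty(key));
        }
        // Example only: overriding the value for this JVM run replaces whatever
        // invalid characters were inherited from the OS network preferences.
        System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1");
    }
}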

Seam 2.2GA + JBoss AS 5.1GA + Postgres 8.4

Sorry for the big wall of text, but it's mostly logs.
Thanks for any help with any of my problems.
I've been trying to get help on the Seam forums, but in vain.
I'm trying the setup mentioned in the title, but without success.
I have everything installed correctly, and the problems start with seam-gen.
This is my build.properties
#Generated by seam setup
#Sat Aug 29 19:12:18 BRT 2009
hibernate.connection.password=abc123
workspace.home=/home/rgoytacaz/workspace
hibernate.connection.dataSource_class=org.postgresql.ds.PGConnectionPoolDataSource
model.package=com.atom.Commerce.model
hibernate.default_catalog=PostgreSQL
driver.jar=/home/rgoytacaz/postgresql-8.4-701.jdbc4.jar
action.package=com.atom.Commerce.action
test.package=com.atom.Commerce.test
database.type=postgres
richfaces.skin=glassX
glassfish.domain=domain1
hibernate.default_schema=Core
database.drop=n
project.name=Commerce
hibernate.connection.username=postgres
glassfish.home=C\:/Program Files/glassfish-v2.1
hibernate.connection.driver_class=org.postgresql.Driver
hibernate.cache.provider_class=org.hibernate.cache.HashtableCacheProvider
jboss.domain=default
project.type=ear
icefaces.home=
database.exists=y
jboss.home=/srv/jboss-5.1.0.GA
driver.license.jar=
hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
hibernate.connection.url=jdbc\:postgresql\:Atom
icefaces=n
./seam create-project works okay, but when I try generate-entities, I get the following...
generate-model:
[echo] Reverse engineering database using JDBC driver /home/rgoytacaz/postgresql-8.4-701.jdbc4.jar
[echo] project=/home/rgoytacaz/workspace/Commerce
[echo] model=com.atom.Commerce.model
[hibernate] Executing Hibernate Tool with a JDBC Configuration (for reverse engineering)
[hibernate] 1. task: hbm2java (Generates a set of .java files)
[hibernate] log4j:WARN No appenders could be found for logger (org.hibernate.cfg.Environment).
[hibernate] log4j:WARN Please initialize the log4j system properly.
[javaformatter] Java formatting of 4 files completed. Skipped 0 file(s).
This is problem no. 1. How do I fix it, and what is it? I had to do this in Eclipse, and it worked.
Then I import the seam-gen-created project into Eclipse and deploy it to JBoss 5.1. While my servers start, I've noticed the following:
03:18:56,405 ERROR [SchemaUpdate] Unsuccessful: alter table PostgreSQL.atom.productsculturedetail add constraint FKBD5D849BC0A26E19 foreign key (culture_Id) references PostgreSQL.atom.cultures
03:18:56,406 ERROR [SchemaUpdate] ERROR: cross-database references are not implemented: "postgresql.atom.productsculturedetail"
03:18:56,407 ERROR [SchemaUpdate] Unsuccessful: alter table PostgreSQL.atom.productsculturedetail add constraint FKBD5D849BFFFC9417 foreign key (product_Id) references PostgreSQL.atom.products
03:18:56,408 ERROR [SchemaUpdate] ERROR: cross-database references are not implemented: "postgresql.atom.productsculturedetail"
03:18:56,408 INFO [SchemaUpdate] schema update complete
Problem no. 2: what are these cross-database references?
And what about this:
03:18:55,089 INFO [SettingsFactory] JDBC driver: PostgreSQL Native Driver, version: PostgreSQL 8.4 JDBC3 (build 701)
Problem no. 3: I told seam-gen in build.properties to use the JDBC4 driver, so I don't know why it insists on using the JDBC3 driver. Where do I change this?
When I go to http://localhost:5443/Commerce and try to browse the auto-generated CRUD UI, I get this error: Error reading 'resultList' on type com.atom.Commerce.action.ProductsList_$$_javassist_seam_2
And this is what shows up in my server logs:
03:34:00,828 INFO [STDOUT] Hibernate:
select
products0_.product_Id as product1_0_,
products0_.active as active0_
from
PostgreSQL.atom.products products0_ limit ?
03:34:00,848 WARN [JDBCExceptionReporter] SQL Error: 0, SQLState: 0A000
03:34:00,849 ERROR [JDBCExceptionReporter] ERROR: cross-database references are not implemented: "postgresql.atom.products"
Position: 81
03:34:00,871 SEVERE [viewhandler] Error Rendering View[/ProductsList.xhtml]
javax.el.ELException: /ProductsList.xhtml: Error reading 'resultList' on type com.atom.Commerce.action.ProductsList_$$_javassist_seam_2
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not execute query
Problem no. 4: what is going on here? Cross-database references again?
Thanks for any help with any of my problems.
You did receive a few answers on the Seam forums (here and here), but you didn't follow up. Anyway, all these are actually caused by one problem:
As Stuart Douglas told you, you shouldn't use a catalog when connecting to PostgreSQL. To fix this, replace the property "hibernate.default_catalog=PostgreSQL" in your properties file with the property "hibernate.default_catalog.null=", so that your file looks like this:
...
model.package=com.atom.Commerce.model
# The next line replaces the old hibernate.default_catalog property
hibernate.default_catalog.null=
driver.jar=/home/rgoytacaz/postgresql-8.4-701.jdbc4.jar
...
You should be able to use seam generate-entities fine afterwards (assuming the rest of your configuration is correct). I'd recommend doing the generation into a clean folder.
A cross-database reference occurs when a query tries to access two or more different databases. PostgreSQL does not support this and complains when there is more than one period in the table name, so in PostgreSQL.atom.productsculturedetail the PostgreSQL. prefix has to be removed. Hibernate adds this prefix when you tell it to use a default catalog, which we already fixed in step 1 above (by telling it not to use a catalog), so this problem should go away after you regenerate your entities.
(Note that this is effectively the same as what Stuart Douglas told you, that you should remove the catalog="PostgreSQL" attribute in the annotations on your entity classes.)
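For illustration only, a regenerated entity mapped without a catalog would look roughly like the sketch below; the class, table and column names are guesses based on the atom.products table in your logs, and the point is simply that only the schema is set, so Hibernate generates atom.products instead of PostgreSQL.atom.products:

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Example only: no catalog attribute, just the schema.
@Entity
@Table(name = "products", schema = "atom")
public class Products implements Serializable {
    @Id
    @Column(name = "product_Id")
    private Integer productId;

    @Column(name = "active")
    private boolean active;

    // getters, setters and the remaining fields as generated by seam-gen
}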
When you specified the postgresql-8.4-701.jdbc4.jar file in the properties file, this didn't mean that the driver supports JDBC4. Although the name of the file would suggest so, the driver's website clearly states that "The driver provides a reasonably complete implementation of the JDBC 3 specification". This shouldn't be a problem for you, as you're not using the driver directly (or at least you're not supposed to). The driver is sufficient for Hibernate to fulfill its requirements and provide the required functionality.
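If you want to verify at runtime which JDBC spec level the driver actually reports, a quick check like the sketch below works; the connection URL and credentials are the ones from your build.properties, and the explicit Class.forName is there only in case the jar does not auto-register the driver:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class JdbcVersionCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver"); // explicit registration, in case auto-loading is unavailable
        Connection conn = DriverManager.getConnection("jdbc:postgresql:Atom", "postgres", "abc123");
        try {
            DatabaseMetaData md = conn.getMetaData();
            System.out.println("Driver: " + md.getDriverName() + " " + md.getDriverVersion());
            // Prints the JDBC spec level the driver claims to implement.
            System.out.println("JDBC spec: " + md.getJDBCMajorVersion() + "." + md.getJDBCMinorVersion());
        } finally {
            conn.close();
        }
    }
}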
This issue is caused by the same problem described above: Hibernate is unable to read data from the database because of the incorrect query. Fixing the catalog problem should fix this issue as well.