titan core 2.0 bug : com.esotericsoftware.kryo.KryoException: Buffer too small: capacity: 2, required: 8 - titan

I have a Titan graph set up using the following versions of software:
Cassandra version: 2.0.6
Titan version: 0.4.2
5 Cassandra servers with replication factor 3.
Recently, all of a sudden, we have started getting the following error intermittently:
com.esotericsoftware.kryo.KryoException: Buffer too small: capacity: 2, required: 8
I searched the net and found that there is a bug filed relating to the same error. The bug pertains to the Kryo API: https://code.google.com/p/kryo/issues/detail?id=124
The fix for the above bug is not incorporated in the titan-0.4.2 version.
We have not run a repair on the Cassandra servers for a long time (6 months). Could the issue be inconsistent data on the Cassandra servers?
Can you please suggest the reason for this error and how to approach solving it?

Judging from this issue (which discusses Faunus, but is likely very related), I'm not so sure that there is a solution:
https://github.com/thinkaurelius/titan/issues/809
According to the issue, it looks like the 0.4.x solution was to change the hardcoded limit and recompile:
https://github.com/thinkaurelius/titan/blob/0.4.4/titan-core/src/main/java/com/thinkaurelius/titan/graphdb/database/serialize/kryo/KryoSerializer.java#L37
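For reference, the exception itself comes from Kryo when a value needs more bytes than the backing buffer can hold. Here is a minimal sketch with plain Kryo (not Titan's serializer, so the class and values below are only illustrative) that reproduces the same kind of message:

    import com.esotericsoftware.kryo.KryoException;
    import com.esotericsoftware.kryo.io.Input;

    public class KryoBufferTooSmallDemo {
        public static void main(String[] args) {
            // An Input backed by only 2 bytes of data...
            Input input = new Input(new byte[2]);
            try {
                // ...cannot satisfy a fixed 8-byte read, so Kryo throws a
                // KryoException along the lines of
                // "Buffer too small: capacity: 2, required: 8".
                input.readLong();
            } catch (KryoException e) {
                System.out.println(e.getMessage());
            }
        }
    }

In Titan's case the relevant buffer size appears to be the hardcoded limit in the KryoSerializer class linked above, which is why the 0.4.x workaround amounts to raising that limit and rebuilding.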

What is the alternative of PL/Java for PostgreSQL 11 and 12?

I understand from https://www.enterprisedb.com/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/11/EDB_Postgres_Advanced_Server_Guide.1.80.html
that PL/Java is deprecated in Advanced Server 11 and will be unavailable in server version 12 or later.
May I know:
What is the recommended replacement for PL/Java in PostgreSQL 12?
For my existing UDFs in PostgreSQL 9.6.x, etc., which use PL/Java, how could I move over to PostgreSQL 12?
Thanks in advance.
Edit: Come to think of it, there was an issue reported back in September 2020 (PL/Java issue 260) about failing to build against EDB PostgreSQL 11. It turned out that EDB had made an API-breaking change to upstream PostgreSQL, by changing an API function and leaving behind only a macro with the old name, instead of a (possibly inline-qualified) wrapper that could be addressed.
That ended up requiring an EDB-specific workaround to be shipped in PL/Java; the fix has been included since PL/Java 1.6.0 and PL/Java 1.5.6, both of which were released in October 2020.
I am sorry that I did not see this question earlier.
I maintain PL/Java, and I have had no notice from EDB concerning why they have deprecated it. Perhaps they are simply no longer providing a binary package prebuilt by them.
I know of PL/Java in use with PostgreSQL 12 and 13. I believe that to build it from source for use with EDB, it should be built with Visual Studio, following these instructions.
If you are able to learn anything more from EDB about the deprecation, or if you have any difficulty building from source, please feel free to open an issue. Thanks!
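For anyone else weighing the migration question above: the Java side of a PL/Java function is an ordinary public static method, so existing UDFs typically do not need rewriting when moving to a newer server; what changes is the PL/Java build installed into that server. A minimal sketch (all names here are made up for illustration):

    // A typical PL/Java UDF is just a public static method.
    public class HelloUdf {
        public static String hello(String name) {
            return "Hello, " + name;
        }
    }
    // After loading the jar with sqlj.install_jar and sqlj.set_classpath,
    // it is declared on the SQL side with something like:
    //   CREATE FUNCTION hello(varchar) RETURNS varchar
    //     AS 'HelloUdf.hello' LANGUAGE java;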

How to update Feature Compatibility Version for MongoDB

I am a beginner/self-taught developer and need some help. I created an app half a year ago and when I revisit it today, I noticed my MongoDB is not running as it used to.
One of the error log entries I got when I ran 'mongod' states:
** IMPORTANT: UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: BadValue: Invalid value for version, found 3.6, expected '4.2' or '4.0'. Contents of featureCompatibilityVersion document in admin.system.version: { _id: "featureCompatibilityVersion", version: "3.6" }. See http://dochub.mongodb.org/core/4.0-feature-compatibility.). If the current featureCompatibilityVersion is below 4.0, see the documentation on upgrading at http://dochub.mongodb.org/core/4.0-upgrade-fcv.
So I have 2 questions...
I don't recall ever updating my mongodb, so I am wondering why it worked before and not now.
I understand that I need to change CompatibilityVersion to 4.0, but MongoDB's documentation states you can only issue the setFeatureCompatibilityVersion against the admin database. What does the admin database mean and how do I access it?
Thank you for your help!
I don't recall ever updating my mongodb, so I am wondering why it worked before and not now.
Well, someone (or something) updated it. What version did you install originally and what is the current version?
I understand that I need to change CompatibilityVersion to 4.0, but MongoDB's documentation states you can only issue the setFeatureCompatibilityVersion against the admin database. What does the admin database mean and how do I access it?
Follow the upgrade guide you linked to. It spells out all the steps.
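To be concrete about the "admin database" part: it is a built-in database that every MongoDB deployment has for server-wide administrative commands, and in the mongo shell you would switch to it with use admin and then run db.adminCommand({ setFeatureCompatibilityVersion: "4.0" }). As a rough sketch of the same thing from code (the MongoDB Java driver and the localhost connection string are assumptions here; only run this at the point the upgrade guide calls for it):

    import org.bson.Document;

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoDatabase;

    public class SetFeatureCompatibilityVersion {
        public static void main(String[] args) {
            // Connect to the local mongod (adjust the connection string as needed).
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                // "admin" is the built-in administrative database; server-wide
                // commands such as setFeatureCompatibilityVersion run against it.
                MongoDatabase admin = client.getDatabase("admin");
                Document result = admin.runCommand(
                        new Document("setFeatureCompatibilityVersion", "4.0"));
                System.out.println(result.toJson());
            }
        }
    }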

Orange (data mining) psycopg2 connect fail "Unsupported frontend protocol" with postgresql 12.2

https://orange.biolab.si/
Connect to postgresql 12.2 fails with:
Unsupported frontend protocol 123[wraps]
I believe this is:
https://github.com/petere/homebrew-postgresql/issues/51
I'm on Windows 18363.778 using Orange 3.25.0 and psycopg2-2.8.5.
Is there a fix/workaround for this? Kind of annoying. It's not easy finding a visualization tool that works with localhost postgresql.
Tomorrow's bug-fix release (12.3) should contain a fix for this. (Although I thought this was only a problem on Macs, so maybe you are seeing a related but different problem.)

Issue while running offline compaction in AEM 6.1 SP2 CFP3

I am running offline compaction to reduce the AEM repository size, but it is throwing this error:
05:28:37.939 [main] ERROR o.a.j.o.p.segment.SegmentTracker - Segment not found: 3ff5d2ae-2b7f-412b-bfff-1dcdf0613315. Creation date delta is 15 ms.
org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 3ff5d2ae-2b7f-412b-bfff-1dcdf0613315 not found
Compaction was working fine earlier, when we were using AEM 6.1 with only Communities FP4 and Oak version 1.2.7.
The problem occurred after installing Communities FP5, FP6, Service Pack 2, and CFP3 in AEM 6.1; the Oak version is now upgraded to 1.2.18, and we are using the oak-run jar version 1.2.18 to perform the compaction.
When I googled this error, I found that our segments have been corrupted and we have to restore a segment from the last good state.
We then found this command [java -jar D:\aem\oakfile\oak-run-1.2.18.jar check -d1 --bin=-1 -p D:\aem\crx-quickstart\repository\segmentstore] to find the last good segment to which we can restore. But when we run this command to find the previous good segment, it keeps running indefinitely and never finishes.
Can anyone let me know how I can fix this?

Squeryl JDK 1.8

We recently hit the same issue as discussed here - Squeryl fails to reflect in debug mode only
It was also solved by switching to JDK 1.7.
As the Java 7 support life-cycle has ended, we would like to move to Java 8.
Does squeryl support Java 8?
Is there a solution to the 'error while reflecting on metadata' issue?
Are there any other migration considerations?
Thanks,
Brent
Maybe there is the same issue in salat and lift-json; see https://github.com/novus/salat/issues/133
I posted the same question on the Squeryl group and got:
Does squeryl support Java 8?
Yes; it should work on Java 8 (mostly I was looking to confirm that, in theory, it should).
Is there a solution to the 'error while reflecting on metadata' issue?
Kenji's link might be a lead to investigate this further, but so far I don't have an update.
Are there any other migration considerations?
None were mentioned on the group.