Can anyone please provide the steps to upgrade Apache Ignite from 1.8 to the latest version?
Actually, the best way in your case is to export your data in some format (CSV, for example) and then import it back after the upgrade.
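A minimal sketch of the export side, assuming a cache named "myCache" with String keys and values (the cache name, config path, and flat CSV layout are all placeholders for your actual data model):

```java
import java.io.PrintWriter;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CacheExport {
    public static void main(String[] args) throws Exception {
        // Start a node against the old 1.8 cluster using your existing config.
        try (Ignite ignite = Ignition.start("ignite-config.xml");
             PrintWriter out = new PrintWriter("myCache.csv")) {
            IgniteCache<String, String> cache = ignite.cache("myCache");
            // IgniteCache is Iterable, so this scans every entry in the cache.
            for (Cache.Entry<String, String> e : cache) {
                // Real values may need proper CSV quoting/escaping.
                out.println(e.getKey() + "," + e.getValue());
            }
        }
    }
}
```

The import side is the mirror image: start a node against the new cluster, read the CSV back in, and `cache.put(...)` each row.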
Log4j 1.x has reached End of Life in 2015 and is no longer supported. Vulnerabilities reported after August 2015 against Log4j 1.x were not checked and will not be fixed. Users should upgrade to Log4j 2 to obtain security fixes.
Kafka is software used by the application to communicate between microservices. Kafka on our JBoss servers uses Log4j 1.x, and we need to be able to use Log4j 2.x here.
Vulnerable software installed: Apache Log4j 1.2.17 (/apps/server/standalone/kafka/kafka_2.11-0.10.1.0/libs/log4j-1.2.17.jar)
Even the newest Kafka versions still use Log4j 1.2.17, so this needs to be remediated.
The JBoss version is jboss-eap-6.4.
What is the way forward?
Log4j 2 is not scheduled to ship with Kafka until Kafka 4.0 (KAFKA-9366).
Until then, you can try directly modifying the log4j jar yourself to remove vulnerable classes such as JMSAppender (see the command below), or replace it with reload4j, which Kafka itself only picked up in recent commits (Kafka 3.1.1 & 3.2) - https://github.com/apache/kafka/pull/11743
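If stripping the class in place is acceptable, the commonly cited mitigation is to delete the vulnerable entry from the jar, e.g. `zip -q -d log4j-1.2.17.jar org/apache/log4j/net/JMSAppender.class` (run it against a backup copy first). Note this only removes that one attack vector; it does not change Log4j 1.x's end-of-life status.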
Seeing as your JBoss is bundling a Kafka version that is several years old now, it might not be possible to upgrade Kafka directly without upgrading JBoss itself.
I am working on a ZooKeeper upgrade from 3.4.6 to 3.5.5.
Since the org.apache.zookeeper.data package seems to be removed in 3.5.5, I am looking for an alternative API.
The mvn compilation is failing for the two classes below:
org.apache.zookeeper.data.ACL and org.apache.zookeeper.data.Stat
Thanks in advance!!
If you have concluded that those two classes were removed in later versions of ZooKeeper, that is not the case. However, the generated documentation for later versions has no link for them, because they are now defined using Jute rather than pure Java. To find their documentation, check here:
https://zookeeper.apache.org/doc/r3.6.1/apidocs/zookeeper-jute/
Go into the 'zookeeper-jute' project and run `mvn generate-sources`.
(This assumes you're in Eclipse and opened the project from the parent folder.)
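In other words, code like the following still compiles against 3.5.5 as long as zookeeper-jute is on the classpath alongside zookeeper (a minimal sketch; the connection string and the /demo znode are placeholders):

```java
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;   // generated by Jute, lives in zookeeper-jute
import org.apache.zookeeper.data.Stat;  // generated by Jute, lives in zookeeper-jute

public class JuteCheck {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, event -> { });
        List<ACL> acls = ZooDefs.Ids.OPEN_ACL_UNSAFE;   // world:anyone ACL list
        zk.create("/demo", "hi".getBytes(), acls, CreateMode.PERSISTENT);
        Stat stat = new Stat();
        byte[] data = zk.getData("/demo", false, stat); // fills in the Stat
        System.out.println(new String(data) + " version=" + stat.getVersion());
        zk.close();
    }
}
```

For Maven builds, depending on org.apache.zookeeper:zookeeper 3.5.5 should pull in zookeeper-jute transitively, so compile errors on ACL and Stat usually mean the jute jar fell off the classpath.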
I have a jar file compiled with Scala 2.12, and now I want to run it on EMR 5.29.0. How do I run it, given that the default Scala version on EMR 5.29.0 is 2.11?
As per this thread in the AWS Forum, all Spark versions on EMR are built with Scala 2.11, as it's the stable version:
On EMR, Spark is built with Scala-2.11.x, which is currently the stable version. As per https://spark.apache.org/releases/spark-release-2-4-0.html , Scala-2.12 is still under experimental support. Our service team is already aware of this feature request, and they shall be adding Scala-2.12.0 support in coming releases, once it becomes stable.
So you'll have to wait until they add support in future EMR releases, or you may want to build Spark with Scala 2.12 yourself and install it on EMR. See Building and Deploying Custom Applications with Apache Bigtop and Amazon EMR and Building a Spark Distribution for EMR.
UPDATE:
Since Release 6.0.0, Scala 2.12 can be used with Spark on EMR:
From the release notes, under "Changes, Enhancements, and Resolved Issues" / Scala: Scala 2.12 is used with Apache Spark and Apache Livy.
Just an idea, if waiting is not an option!
Is it possible to package the required Scala jars with the application (with an appropriate Maven scope defined) and point at those packages via the Spark property
--properties spark.jars.repositories ?
Maybe you'll have to figure out a way to transfer the jars to the driver node; if S3 is an option, it can be used as intermediary storage, as sketched below.
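For example (bucket and jar names hypothetical), you could upload the extra jars to S3 and hand them to Spark at submit time with something like `spark-submit --jars s3://my-bucket/libs/extra.jar ...`; on EMR, Spark can read s3:// paths through EMRFS. Whether a newer scala-library can actually coexist with the cluster's built-in Scala 2.11 runtime this way is untested here, so treat it as an experiment.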
I would like to use an application developed with Apache Spark 2.0.0 (GitHub repo here), but I only have Spark 2.3.1 installed on my iMac (it seems to be the only version supported by Homebrew at the moment). I can successfully compile it with sbt assembly, but when I run the first example given here I get the following error:
java.lang.NoSuchMethodError: breeze.linalg.DenseVector$.canDotD()Lbreeze/generic/UFunc$UImpl2;
Is this a compatibility issue between the different versions of the Breeze library used by Spark 2.0.0 and Spark 2.3.1? Is there a way to easily change the code so it works with Spark 2.3.1? (I have never used Scala before.)
It probably is.
You can always install the required version of Apache Spark manually (not via Homebrew, but by downloading the tar.gz archive from the official page and just extracting it).
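Concretely (a hedged sketch; pick the Hadoop build that matches your setup): grab spark-2.0.0-bin-hadoop2.7.tgz from https://archive.apache.org/dist/spark/spark-2.0.0/, unpack it with `tar -xzf`, and point SPARK_HOME at the extracted directory so `spark-submit` picks up 2.0.0 instead of the Homebrew 2.3.1.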
I am using Grails 2.3.0 and want to know which version of the MongoDB GORM plugin is compatible with it.
The latest version of the MongoDB GORM plugin requires Grails 2.3.2 or later: http://grails.org/plugin/mongodb
My MongoDB server version is currently 2.6.0, but I suppose that should not matter.
It would also be great to know the GORM datastore dependency versions for Grails 2.3.0 and the MongoDB GORM plugin. Below are the ones for the latest Mongo GORM:
dependencies {
    …
    compile 'org.grails:grails-datastore-gorm:3.1.0.RELEASE'
    compile 'org.grails:grails-datastore-core:3.1.0.RELEASE'
    test 'org.grails:grails-datastore-simple:3.1.0.RELEASE'
}
I also tried going through all the MongoDB GORM releases here: https://github.com/grails/grails-data-mapping/releases but that was not much help.
In my Grails 2.3.11 app I'm using
compile ':mongodb:3.0.2'
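(In a Grails 2.x app that declaration goes in the plugins { } block of grails-app/conf/BuildConfig.groovy.)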
Before upgrading from 2.2.x to 2.3.x I was using :mongodb:3.0.1.
In another Grails 1.3.7 app I'm using :mongodb:1.0.0.RC4 with some hacks to enable aggregation.
I tried to find an in-between plugin version that would best match my Grails version, but it's not trivial: the branch names on GitHub have nothing to do with the released plugin versions.
BTW, early versions of the Grails 2.3.x line had severe problems (I don't remember which ones exactly, but they made the upgrade from 2.2.x impossible), so I'd recommend upgrading your Grails to the most recent 2.3.11.