I am using MyBatis 3.4.2 with Spring Boot.
I found that when I write **Mapper.xml files in my business JARs, others who want to override my **Mapper.xml in their Spring Boot applications cannot do it, because org.apache.ibatis.session.Configuration.mappedStatements uses a StrictMap, which does not allow overwriting the value for an existing key.
So I want to know whether there is some other goal behind this design.
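For reference, the duplicate-key check in Configuration.StrictMap behaves roughly like the sketch below (paraphrased from memory, not the exact MyBatis source):

    // Paraphrased sketch of MyBatis' StrictMap behavior: a HashMap that refuses
    // to overwrite an existing key, which is why a second mapped statement with
    // the same fully-qualified id is rejected instead of replacing the first one.
    import java.util.HashMap;

    public class StrictMapSketch<V> extends HashMap<String, V> {
        private final String name;

        public StrictMapSketch(String name) {
            this.name = name;
        }

        @Override
        public V put(String key, V value) {
            if (containsKey(key)) {
                throw new IllegalArgumentException(name + " already contains value for " + key);
            }
            return super.put(key, value);
        }
    }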
The problem is simple: I want to print all topics from Apache Kafka after installing a Kafka module on Karaf. I need to get properties from a .cfg file located in jbossfuse/etc and create a KafkaConsumer object. I want to implement a BundleActivator so that my start method runs at the moment the module is installed.
The question is: how can I get the properties from the config file?
I found a suggested solution that says "you can use the ConfigAdmin service from the OSGi spec." How can I use it? All examples with code are welcome.
Karaf uses Felix-FileInstall to read config files: http://felix.apache.org/documentation/subprojects/apache-felix-file-install.html
So if there is a file named kafka.cfg, it will pick it up and register a config with the ConfigAdmin-Service under the pid 'kafka'.
You can fetch the ConfigAdmin service and read the config from there using an Activator, but I strongly recommend using Declarative Services or Blueprint to interact with the OSGi framework instead; both support injection of the configuration when it is available (see the sketch after the list below).
Otherwise you have to deal with the following issues yourself:
there is no ConfigAdmin service (yet), maybe because your bundle starts earlier
the ConfigAdmin service changes (for example due to a package refresh or update)
the configuration is not yet registered (because Felix has not read the file yet)
the configuration gets updated (for example someone changes the file)
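With Declarative Services, a minimal sketch could look like this (assuming the DS annotations are on your compile path and that kafka.cfg contains a bootstrap.servers entry; the package, class, and property names are made up):

    // Hedged sketch: a DS component bound to the configuration registered under PID "kafka"
    // (i.e. etc/kafka.cfg). With a required configuration policy, DS only activates the
    // component once that configuration exists, so none of the races listed above have to
    // be handled by hand.
    package com.example.kafka; // hypothetical package

    import java.util.Map;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.ConfigurationPolicy;

    @Component(configurationPid = "kafka", configurationPolicy = ConfigurationPolicy.REQUIRE, immediate = true)
    public class KafkaTopicPrinter {

        @Activate
        void activate(Map<String, Object> config) {
            String bootstrapServers = (String) config.get("bootstrap.servers"); // assumed key in kafka.cfg
            System.out.println("Kafka bootstrap servers: " + bootstrapServers);
            // build the KafkaConsumer from these properties here and list the topics
        }
    }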
My Presto plugin has two components: some UDFs (for basic MD5 / SHA1 hashing) and an EventListener (for logging queries using the Fluentd logger).
During development (single-node Presto cluster), I added them under a single Plugin class, bundled a single JAR, and faced no problem.
During deployment I found a pitfall: the UDFs must be registered on all nodes, whereas (my particular) EventListener must be registered only on the master node.
Now I have two options:
1. Bundle them together in a single JAR
We can control registration of UDFs / EventListeners via an external config file (different configs for master and slave nodes). As more UDFs, EventListeners, and other SPIs are added, a single JAR paired with a tweaked config file will achieve the desired result.
2. Bundle them as separate JARs
We can create different Plugin classes for the UDFs / EventListener and provide the corresponding class names in the META-INF/services/com.facebook.presto.spi.Plugin file through Jenkins. We'll then have different JARs for different components: one JAR for all UDFs, one JAR for all EventListeners, etc. However, as more functionality is added in the future, we might end up with lots of different JARs.
My questions are:
What are the pros and cons of both techniques?
Is there an alternate approach?
I'm currently on Presto 0.194 but will soon be upgrading to Presto 0.206
Either way works. You can do whichever is easiest for you. There's actually a third option in the middle, which is to have multiple Plugin implementations in a single JAR (you would list all implementations in the META-INF/services file).
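For that third option, the provider-configuration file simply lists every implementation, one per line; the class names below are made up for illustration:

    # META-INF/services/com.facebook.presto.spi.Plugin
    com.example.presto.UdfPlugin
    com.example.presto.EventListenerPlugin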
EventListener is actually used on both the coordinator and the workers: query events happen on the coordinator and split events happen on the workers. However, if you only care about query events, you only need it on the coordinator.
You can deploy the event plugin on both the coordinator and the workers but only configure it on the coordinator. The code will only be used if you configure it by adding an event-listener.properties file with an event-listener.name property that matches the name returned by your EventListenerFactory.getName() method.
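As a hedged sketch of that wiring (the class name and the "query-logger" name are made up; only the SPI interfaces come from Presto), the factory might look like this, paired with event-listener.name=query-logger in etc/event-listener.properties on the coordinator:

    package com.example.presto; // hypothetical package

    import java.util.Map;
    import com.facebook.presto.spi.eventlistener.EventListener;
    import com.facebook.presto.spi.eventlistener.EventListenerFactory;

    public class LoggingEventListenerFactory implements EventListenerFactory {

        @Override
        public String getName() {
            // Must match the event-listener.name property in etc/event-listener.properties
            return "query-logger";
        }

        @Override
        public EventListener create(Map<String, String> config) {
            // Return your real listener here; EventListener's methods are default
            // methods, so this empty implementation compiles for the sketch.
            return new EventListener() { };
        }
    }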
I have the following configuration:
Spring-integration-kafka 1.3.1.RELEASE
I have a custom kafka-sink and a custom kafka-source
The configuration I want to have:
I'd like to keep using Spring-integration-kafka 1.3.1.RELEASE with my custom kafka-sink.
I'm changing my kafka-source logic to use Spring-integration-kafka 2.1.0.RELEASE. I noticed the way to implement a consumer/producer is very different from prior versions of Spring-integration-kafka.
My question is: could I face some compatibility issues?
I'm using Rabbit.
You should be OK then; it would probably work with the newer Kafka jars in the source's /lib directory, since each module is loaded in its own class loader, so there should be no clashes with the xd/lib jars.
However, you might have to remove the old kafka jars from the xd/lib directory (which is why I asked about the message bus).
I want to apply memcached to my Scala project, but I don't know how to apply it. My project takes too much time to retrieve the whole set of results from the database.
If anyone knows, please tell me the steps to apply it.
First look at using a local cache: http://www.playframework.com/documentation/2.2.x/ScalaCache
It uses EHCache by default but you can replace the cache plugin with one that wraps a memcached client, if you think it is necessary.
Example of how to write a cache plugin: https://github.com/mumoshu/play2-memcached
An example memcached client: https://github.com/Atry/memcontinuationed
I'm trying to create a jBPM human task web application and have deployed it in JBoss AS 7.
Adhering to the deployment structure, I have placed the orm.xml in the resources/META-INF folder along with persistence.xml, and it contains the required UnescalatedDeadlines named query. But I am still getting the exception:
Caused by: java.lang.IllegalArgumentException: Named query not found: UnescalatedDeadlines
at org.hibernate.ejb.AbstractEntityManagerImpl.createNamedQuery(AbstractEntityManagerImpl.java:108) [hibernate-entitymanager-3.4.0.GA.jar:]
at org.jbpm.task.service.TaskService.<init>(TaskService.java:109) [jbpm-human-task-5.1.0.Final.jar:]
at org.jbpm.task.service.TaskService.<init>(TaskService.java:92) [jbpm-human-task-5.1.0.Final.jar:]
at com.sample.taskserver.HumanTaskStartupServlet.init(HumanTaskStartupServlet.java:52) [classes:]
In a nutshell, the orm.xml file is not being identified by Hibernate.
What configuration am I missing, or what could be the problem?
Kindly help me in this regard.
It's probably best to add direct references to the orm files you're using, to make sure they are being picked up. For example, in your persistence.xml you could add the following:
<mapping-file>META-INF/Taskorm.xml</mapping-file>
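In context, that entry sits inside the persistence unit in persistence.xml, roughly like this (the unit name, provider, and data source below are placeholders for whatever your own file already uses):

    <persistence-unit name="org.jbpm.task" transaction-type="JTA">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <mapping-file>META-INF/Taskorm.xml</mapping-file>
        <!-- entity classes and Hibernate properties as in your existing persistence.xml -->
    </persistence-unit>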
Do you have multiple orm.xml files inside your application? JBoss AS is probably picking up only one of them. Usually, if you merge those files, it will work.
Cheers
I had the same problem, but I solved it.
My jBPM version is 5.3.
I deployed and set it up according to the guide.
I used two persistence units, one for jBPM/process and one for the task service.
I have the taskorm.xml, but in some situations jBPM will only search the jBPM/process persistence unit, while taskorm.xml is defined in the task unit.
You need to combine them into one.
In my case I am using jBPM 5.4.0.Final, but I had referred to an example whose bundled persistence.xml included only orm.xml.
Adding Taskorm.xml resolved the problem.