I am trying to change the JBoss Data Virtualization settings of my server to increase performance. I am working on the cache consideration properties. I have noticed that with these attributes
"preparedplan-cache-infinispan-container" => "teiid-cache",
"resultset-cache-infinispan-container" => "teiid-cache",
the value can be either teiid or teiid-cache. I would like to know the difference and how it would affect the performance of the server.
I would also like suggestions on how I can increase the performance of the server.
This option simply points to the Infinispan cache container settings in JBoss EAP, so in this case "teiid" is just a reference to another configuration section. Those settings typically deal with cache expiration, eviction, and the maximum number of entries.
Overall, resultset caching can improve your performance; however, the above settings only let you choose which cache container configuration those caches use. Look under the "infinispan" subsystem in the standalone-teiid.xml file to find the configuration for "teiid". BTW, there is no "teiid-cache" any more.
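For orientation, the container referenced above looks roughly like this in standalone-teiid.xml (an illustrative sketch; the cache names, eviction strategy, and the lifespan/max-entries values here are examples rather than recommendations, and the exact schema version depends on your EAP release):

```xml
<subsystem xmlns="urn:jboss:domain:infinispan:1.5">
    <cache-container name="teiid">
        <!-- backs "resultset-cache-infinispan-container" -->
        <local-cache name="resultset">
            <expiration lifespan="7200000"/> <!-- entries expire after 2h -->
            <eviction strategy="LIRS" max-entries="1024"/>
        </local-cache>
        <!-- backs "preparedplan-cache-infinispan-container" -->
        <local-cache name="preparedplan">
            <expiration lifespan="28800000"/> <!-- 8h -->
            <eviction strategy="LIRS" max-entries="512"/>
        </local-cache>
    </cache-container>
</subsystem>
```

Tuning the expiration and max-entries values here is where the actual performance impact lives; the *-infinispan-container attributes only select which container is used.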
Is it possible to programmatically create/update a cluster on a remote Artemis server?
I will have lots of docker instances and would rather configure on the fly than have to set in XML files if possible.
Ideally on app launch I'd like to check if a cluster has been set up and if not create one.
This would probably involve getting the current server configuration and updating it with the cluster details.
I see it's possible to create a Configuration.
However, I'm not sure how to get the remote server configuration, if it's at all possible.
Configuration config = new ConfigurationImpl();
ClusterConnectionConfiguration ccc = new ClusterConnectionConfiguration();
ccc.setAddress("231.7.7.7");
config.addClusterConfiguration(ccc);
// need a way to get and update the current server configuration
// ActiveMQServer.getConfiguration() is an instance method, so I'd need the
// server object itself, which I don't have for a remote broker
Any advice would be appreciated.
If it is possible, is this a good approach to take to configure on the fly?
Thanks
The org.apache.activemq.artemis.core.config.impl.ConfigurationImpl object can be used to programmatically configure the broker. The broker test-suite uses this object to configure broker instances. However, this object is not available in any remote sense.
Once the broker is started there is a rich management API you can use to add things like security settings, address settings, diverts, bridges, addresses, queues, etc. However, the changes made by most (although not all) of these operations are volatile which means many of them would need to be performed every time the broker started. Furthermore, there are no management methods to add cluster connections.
You might consider using a tool like Ansible to manage the configuration or even roll your own solution with a templating engine like FreeMarker to customize the XML and then distribute it to your Docker instances using some other technology.
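As a sketch of the templating route, a FreeMarker template for the clustering part of broker.xml might look like this (the ${...} variables, connector name, and multicast address are placeholders you would fill in per Docker instance):

```xml
<!-- broker.xml.ftl: render with each instance's host/IP, then start the broker -->
<connectors>
    <connector name="artemis">tcp://${hostIp}:61616</connector>
</connectors>
<broadcast-groups>
    <broadcast-group name="bg-group1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
        <connector-ref>artemis</connector-ref>
    </broadcast-group>
</broadcast-groups>
<discovery-groups>
    <discovery-group name="dg-group1">
        <group-address>231.7.7.7</group-address>
        <group-port>9876</group-port>
    </discovery-group>
</discovery-groups>
<cluster-connections>
    <cluster-connection name="my-cluster">
        <connector-ref>artemis</connector-ref>
        <discovery-group-ref discovery-group-name="dg-group1"/>
    </cluster-connection>
</cluster-connections>
```

Rendering this at container start (e.g. in the entrypoint) gives each instance its own configuration without hand-editing XML, and the cluster forms itself via UDP discovery once the brokers come up.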
We are considering using pgbouncer for our project, which includes dynamic db creation (i.e., a new db is created for each and every tenant that is added).
As far as I understand, pgbouncer takes a config file that maps the databases.
The question is: is there a way to add new databases to pgbouncer without restarting it (i.e., adding a new db row in the config.ini file)?
I was actually looking into this same issue. It doesn't seem to be possible by default right now (per this issue). The originator of that issue has a branch of his fork for dynamic pooling, but it doesn't seem that it will be merged. I wouldn't use it in production unless you're up for the additional work of maintaining a forked dependency for your project.
The current way is updating the .ini. However, in addition to the overhead of maintaining configuration in another place, this is further complicated because, based on the docs, the "online restart" capability of pgbouncer only works for non-TLS connections and only if pgbouncer is running with Unix sockets. So, depending on your system configuration, online restarts for the potentially frequent updates might be out of the question.
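Concretely, adding a tenant with the current approach means appending a line to the [databases] section (the hostnames and database names below are made up):

```ini
; pgbouncer.ini
[databases]
tenant_a = host=10.0.0.5 port=5432 dbname=tenant_a
; newly created tenant added here
tenant_b = host=10.0.0.5 port=5432 dbname=tenant_b
```

The admin console's RELOAD command (issued over a psql connection to the special pgbouncer admin database) makes pgbouncer re-read its configuration files, which is lighter-weight than a restart; whether a brand-new [databases] entry is picked up this way can depend on your pgbouncer version, so verify it against your deployment before relying on it.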
We need to front an existing data store (RDBMS) with Infinispan, with write-behind. We also don't want to run Infinispan in embedded mode, to avoid memory pressure on the existing application, which needs to serve high volume. We would like to run a cluster of Infinispan in server mode and connect our application using Hot Rod.
Since JDBCStore doesn't support a custom schema, the only option is to use the JPA-based cache store; however, it looks like there is a limitation that the JPA store only works in embedded mode. Is there any workaround?
The tricky part of using the JPA cache store with the server is how to plug user classes into the server. This is necessary since the JPA entities/collections need to be accessible to the JPA cache store. It might be possible to do this by creating a module, plugging it into the server modules, and having the Infinispan JPA cache store modules depend on it. This might be doable, but it is not something we encourage/support, and it requires a deep understanding of Infinispan Server's module architecture.
My suggestion would be to stick with the embedded use case and the JPA cache store. To avoid memory issues, you could configure eviction so that only recently used entries are kept in memory. You could also use the off-heap container in Infinispan to move data storage outside the Java heap and keep your application running smoothly.
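An illustrative embedded configuration along those lines (element and attribute names follow recent Infinispan schemas and vary between versions; the persistence unit, entity class, and sizes are hypothetical):

```xml
<local-cache name="entity-cache">
    <!-- keep at most ~10k recently used entries, stored off the Java heap -->
    <memory storage="OFF_HEAP" max-count="10000"/>
    <persistence>
        <!-- JPA store with asynchronous (write-behind) writes to the RDBMS -->
        <jpa-store xmlns="urn:infinispan:config:store:jpa:12.0"
                   persistence-unit="com.example.tenant-pu"
                   entity-class="com.example.CacheEntity">
            <write-behind modification-queue-size="1024"/>
        </jpa-store>
    </persistence>
</local-cache>
```

Eviction (max-count) plus off-heap storage bounds the memory the cache can take from the host application, while write-behind keeps RDBMS write latency off the request path.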
I have a proprietary CMS that keeps a lot of configuration (20k lines) in files on disk. I have quite a few nodes, all with the same configuration except for one or two elements that designate the node name and the IP.
Since this is proprietary I do not have a lot of leverage for going in and completely overhauling the configuration loading to look at an endpoint, though I might be able to be creative.
My questions are simple, but I do not know a better place to ask them:
Is this a use case for distributed configuration management like Zookeeper? Ideally I'd like to spin up a box and have it look for a service endpoint to load config files rather than have the config files deployed through source. This way I can update the configuration in one place, and have it replicate to all nodes without doing a full deployment.
Can Zookeeper (or an equivalent) mimic a file system? Could I mount an NFS point and have it expose the configuration as if it were files on the filesystem, even if these are symbolic constructs? Does this make sense?
Your configuration use case seems more like a job for Chef, Puppet, or a similar system. They will allow you to update the configuration in one place, keep it version controlled, and distribute it properly to all target nodes.
Zookeeper makes sense when your application/service needs to dynamically fetch fresh configuration data during live operation, and when multiple nodes in your system need the same consistent view of that data. If you don't have these requirements, Zookeeper is probably too much overhead for just laying down mostly static config files on disk.
As for mimicking a filesystem, there is zkfuse, which you could use to mount ZooKeeper as one. But again, it doesn't look like this is what you want. Zookeeper should not be used as an actual file system replacement or file distribution system; it is best for storing small bits of metadata that need to be consistent across your distributed system.
I have an AppFabric Cache installation (the cluster has just one node, viz. my local machine). I just saw the strangest behavior: I put something in the cache, then restarted the cache cluster using the AppFabric Cache PowerShell command Restart-CacheCluster. When I then try to retrieve the value from the cache, it's still there.
Has anyone seen this behavior? Is it due to some configuration setting I'm missing? This is not causing me problems, but the fact that it is not behaving the way I expect scares me in case other issues arise later.
The cache is using a SQL Server database for configuration (as opposed to an XML file).
As specified here:
The Stop-CacheCluster or Restart-CacheCluster cache cluster commands cause all data to be flushed from the memory of all cache hosts in the cluster.
No, it's not possible and I've never seen this kind of issue. I suggest you check the whole process.
Are you using a read-through provider? In that scenario, the cache detects the missing item and calls a specific provider to perform the data load. The item is then seamlessly returned to the cache client and will never be null.
Other things you may have to check:
Check the result of the Restart-CacheCluster cmdlet (success/failure)
Maybe a background task is still running, putting data into the cache
By using cmdlet Get-CacheStatistics, you can check how many items are really in the cache