Update of elasticsearch plugin

I have developed a couple of plugins for elasticsearch, and these may require (more or less) frequent updates.
My simple question is: is there a way of updating an elasticsearch plugin without having to remove the old version, delete the relevant indexes, install the new version and rebuild the indexes from scratch?
Thanks in advance.

There is no way to update an existing plugin. You need to delete the old version and install the new one.
I didn't get your question about indices though. A plugin doesn't necessarily work against data; it can just be a site plugin, a query parser, etc. If a plugin does work against indices and you want to upgrade it while the elasticsearch version stays the same, I don't see why you would need to reindex. The only case is if the plugin itself changed in a non-backwards-compatible way.

As of the latest version 2.4.1 (2016-10-18) there is still no way to do this easily, because the elasticsearch folks recommend manual plugin updates.
Expect that when you update Elasticsearch and start the service, you may end up with an error: the service won't start because a plugin (e.g. "license") is one minor version behind the ES version.
Go to your elasticsearch bin directory and run the following commands:
sudo ./plugin remove <plugin name>
sudo ./plugin install <plugin name>
You could even be so bold as to write an "update" shell script that does this for you.

Here is a bash script that does this:
#!/bin/bash
# Path to the elasticsearch plugin tool
plugin_bin=/usr/share/elasticsearch/bin/plugin
# List the installed plugins (lines look like "    - plugin-name"),
# then remove and reinstall each one.
for plugin in $("$plugin_bin" list | awk '/^[[:space:]]*-[[:space:]]/{print $2}')
do
    "$plugin_bin" remove "$plugin" && "$plugin_bin" install "$plugin"
done
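Save it as, say, update-es-plugins.sh (the filename is just a suggestion) and run it with root privileges:
sudo bash update-es-plugins.sh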

Related

Intermediate update step in my own-built RPM

If I build my own RPMs, is there a way to say that, before upgrading to the latest version, the system has to upgrade to a specific other version first?
For example, I once made a mistake in my %postun action so that it deletes a link. When I then update to a newer package that fixes the %postun action, the usual RPM behavior is that the new package is installed first, then the old package is uninstalled, which triggers the mistake in the old %postun action. So I would need to run a reinstall afterwards, or update again to an even later version, so that on the next update my %postun action is correct.
So I would imagine something like
UpdateRequires: MyPackage >= 1.1
There is no way to force an update chain like that. You could have the user reinstall the new package. Another option is to fix the symlinks in your %verify stanza and tell the user to run rpm -V on your RPMs.
It is possible to run a step after the old package's %postun by using %posttrans in the new package.
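As a rough illustration (the library path and symlink here are made up), a %posttrans scriptlet in the new package runs after the old package's %postun during an upgrade, so it can repair what the buggy %postun broke:
%posttrans
# Recreate the symlink if the old package's %postun removed it
if [ ! -e /usr/lib/mylib.so ]; then
    ln -s /usr/lib/mylib.so.1 /usr/lib/mylib.so
fi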

Mask package version in Yum on CentOS

I can't seem to find anything useful in the man pages etc for this, but it seems like it should be straightforward..?
Our servers are running CentOS 6.8 but also have the Atomic repository for some package versions. The most recent version of one of the packages that Atomic provides seems to be broken, so we've had to do a yum downgrade of that package.
The problem now is that we're running Plesk, which performs automatic Yum updates on a schedule, and the next time this happens the broken package will just get dragged back in again!
All I want to do is tell Yum to ignore this specific package version so that it updates the next time there's a newer version, but skips the current.
I found that I can add exclude= lines to yum.conf but I can't seem to find how to define a specific version number in this exclude. It looks like I can only exclude entire package names?
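This is the kind of line I mean in /etc/yum.conf (the package name here is purely illustrative), and as far as I can tell it only takes whole package names or globs:
exclude=somepackage*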
I'm more familiar with Gentoo where we can tell Portage to mask specific versions when problems like this occur. Is this not an option in CentOS?
Much appreciated.

Why compiled and installed gstreamer plugin from boilerplate code is not found by gst-inspect

I followed the instructions in GStreamer Plugin Writer's Guide (1.7.1.1):
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/pwg/html/index.html
in order to build a new gstreamer plugin. Basically I ran make_element and then edited Makefile.am as described. Amazingly make and make install worked and I ended up with:
/usr/local/lib/gstreamer-1.0/libgstframe_grabber.la
/usr/local/lib/gstreamer-1.0/libgstframe_grabber.so
As I understand it, gst-inspect should find this plugin automatically. The guide says that /usr/local/lib/gstreamer-1.0 needs to be added to GST_PLUGIN_PATH in order for plugins in this directory to be found. Another document states that this directory is searched automatically. I tried with and without the environment variable, but no luck.
Now I should say that I have just started to use gstreamer and I am suffering from total information overload. I have read so many documents, yet I don't even know whether I am building a gstreamer1.0 or a gstreamer0.10 plugin (I think the guide is for gstreamer1.0, since the guide's version is 1.7.1.1 but can't be sure).
Can anybody give me a clue here?
There are many possible reasons that can cause this issue.
First, check whether your plugin is blacklisted with the command gst-inspect-1.0 -b.
If your plugin shows up there, it really is blacklisted.
In that case, delete the directory ~/.cache/ and then run gst-inspect-1.0 again.
This will force GStreamer to re-scan the plugin list. If the cause of the blacklisting has not been fixed yet, gst-inspect will probably print out the reason for you.
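As a concrete sequence (the library name is the one from the question; the cached registry normally lives under ~/.cache/gstreamer-1.0/):
gst-inspect-1.0 -b                    # list blacklisted plugins
rm -rf ~/.cache/gstreamer-1.0         # drop the cached registry
gst-inspect-1.0 | grep frame_grabber  # re-scan everything and look for the new plugin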
Another possible (but less likely) reason is having GST_REGISTRY_UPDATE set to no, which forces GStreamer NOT to rescan the plugin directory, so new plugins are not found.
P.S.: The guide is for GStreamer 1.0.
If you've tried removing your plugin from the blacklist and it still doesn't show up, try this:
export GST_PLUGIN_PATH=/usr/local/lib/gstreamer-1.0
/usr/local/lib/gstreamer-1.0 is the default directory that make install uses for plugins. If you have defined a different directory, use that one.
Then run gst-inspect-1.0 and you'll find the newly compiled and installed plugin.
You'll need to perform the export in every new shell, whether you create a static pipeline with gst-launch-1.0 or run code of your own. I couldn't find any alternative for making it permanent other than adding an entry to the .bashrc file. If you know of one, please suggest it via the comments.
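For example, to make it stick for your user (assuming a bash shell that reads ~/.bashrc):
echo 'export GST_PLUGIN_PATH=/usr/local/lib/gstreamer-1.0' >> ~/.bashrc
source ~/.bashrc   # or simply open a new shell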
If you run ./configure --help in the gst-plugin directory you will see the following:
By default, `make install' will install all the files in
`/usr/local/bin', `/usr/local/lib' etc. You can specify
an installation prefix other than `/usr/local' using `--prefix',
for instance `--prefix=$HOME'.
If, after the original installation, you run sudo updatedb && locate libgst[NAME_OF_YOUR_PLUGIN].so, you should see where the library holding your plugin is located (in my case it is under /usr/local/lib/gstreamer-1.0/, as described by the configure help above).
Now, on my machine the GStreamer "official" plugins are installed under /usr/lib/i386-linux-gnu/gstreamer-1.0/. This is where the newly created plugin library should be stored.
To store the plugin at the right place, run configure with the following parameter:
./configure --libdir=/usr/lib/i386-linux-gnu followed by make && sudo make install
It is important to override with --libdir and NOT --prefix! Using --prefix will stick a /lib that we don't want under /usr/lib/i386-linux-gnu. The plugin will not be found by gst-inspect-1.0 if /lib is added to the path.
Extra note:
Even if the plugin is at the proper location, you may still see GStreamer blacklisting it when you run gst-inspect-1.0. One cause of blacklisting can be that shared libraries required by your plugin are not installed or not found on your platform. The ldd command can help figure out the dependencies your plugin may have. Just run ldd [YOUR_GSTREAMER_LIBRARY].so
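For instance, with the library name from the question (assuming the /usr/local install location mentioned above), missing dependencies show up as "not found":
ldd /usr/local/lib/gstreamer-1.0/libgstframe_grabber.so | grep "not found"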

How to continuously deploy a feature to karaf?

I want to continuously deploy a feature to ServiceMix 6.0, which is based on Karaf 3.0.4.
I first tried this using the karaf console. However, there are some problems. For a standard karaf installation it is not possible to determine on the karaf console whether a feature is already installed (see my other question on this). The other problem with the karaf console is that it doesn't support exit codes, so it is not possible to determine reliably whether a feature installation finished successfully.
I then installed hawtio and tried to use the exposed JMX beans via jolokia/REST, which is bundled with hawtio. The problem here is that karaf 3.0.x is unable to update a feature, so features must be uninstalled first. However, the FeatureService only offers the possibility to uninstall an explicitly specified feature. But when a previous version of the feature was installed, subfeatures were installed with it. They also need an upgrade and therefore have to be uninstalled first. So I would need a way to iterate over the subfeatures of a feature, which I do not have.
So how can continuous deployment of features to karaf 3.0.x be done?
Our first attempt is a bash script. The biggest problem is uninstalling the old version, so we have set up a naming convention for the feature and its subfeatures. That way we can use the following to find already installed features:
features=$(echo "feature:list" | ssh -p $smx_ssh_port $smx_user@$smx_host | grep -h "<feature-name-convention-regex>.*|.*x.*|" | cut -f1 -d" " | tr '\n' ' ')
This can then be passed to feature:uninstall and can also be used for detecting if features were installed after the call to feature:repo-add -i.
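A sketch of those follow-up steps (the feature repo URL is a placeholder; in reality it comes from our build):
echo "feature:uninstall $features" | ssh -p $smx_ssh_port $smx_user@$smx_host
echo "feature:repo-add -i mvn:com.example/my-features/1.0.0/xml/features" | ssh -p $smx_ssh_port $smx_user@$smx_host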
The remaining problem is that we are unable to reference 3rd-party subfeatures because they won't be uninstalled when an updated version needs to be installed and we can't be sure if all of the subfeatures have been successfully installed.
For karaf 3 there is no good way to update features.
This is a little better in karaf 4. It allows you to update a feature repo, and you can then simply install the feature again. It will detect that the feature has changed and make the necessary changes to the bundles.
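On a Karaf 4 console the steps would look roughly like this (the repo URL and feature name are placeholders):
feature:repo-refresh mvn:com.example/my-features/1.1.0/xml/features
feature:install my-feature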

Why can't I cabal install --only-dependencies with mongodb?

I have gone through the following steps:
$ mkdir mongoEg
$ cd mongoEg
$ cabal init
...
Configured to run as an executable. I add mongodb to the build-depends list. I make a dummy Main.hs file and put a basic hello world in there. I then do
$ cabal sandbox init
$ cabal install --only-dependencies
Which responds with:
Resolving dependencies...
cabal: Could not resolve dependencies:
trying: monogEg-0.1.0.0 (user goal)
next goal: mongodb (dependency of monogEg-0.1.0.0)
Dependency tree exhaustively searched.
Note: when using a sandbox, all packages are required to have consistent
dependencies. Try reinstalling/unregistering the offending packages or
recreating the sandbox.
I read up on other problems people are having, and remove ~/.ghc, remove my mongoEg directory, and repeat to get the same results. I try to run through the analogous steps at http://howistart.org/posts/haskell/1 and find that everything works just fine.
I then guess that something is wrong with the mongodb package itself. I seem to be able to cabal install mongodb in a global environment and use it outside of a sandbox without any issue. So why won't cabal sandboxes play with the mongodb package?
See this gist for details: https://gist.github.com/anonymous/e5a548cf7d9ec59bea31
After looking here
Cabal configure in a sandbox complains "At least the following dependencies are missing" on installed packages
I saw that the answer states that package names are case sensitive. So I tried changing mongodb to the way MongoDB spells it, namely MongoDB. This did not work, so I tried changing it to mongoDB, and finally there was joy.
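The build-depends entry that finally worked looked roughly like this (everything apart from the mongoDB line is illustrative boilerplate):
executable mongoEg
  main-is:          Main.hs
  build-depends:    base
                  , mongoDB
  default-language: Haskell2010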
So even though I can do cabal install mongodb I can't use that same spelling to install it from within a .cabal file, which is, obviously, completely stupid. I'm sure I'll find the right place to channel my rage about this kind of flagrant violation of the principle of least surprise, but for now I can say that to newcomers it is most needlessly confusing.