Restart requirement for Apache Ambari custom services

Currently, every time I change some property on a service (let's say, Hive), the Ambari UI asks me to restart the service and its components. However, the same doesn't happen with custom services. Is there some tag that needs to be added to a given property to trigger this behavior? Is there documentation somewhere about it?
I'm using Ambari 2.7.3.0 with HDP-3.1.0.0-78.

@Evandro Teixeira
Inside each custom service there should be a metainfo.xml file with the following value:
<restartRequiredAfterChange>true</restartRequiredAfterChange>
Depending on the service, there can be other references between different components, including the order in which they restart. There are also some larger stack configuration files (in the raw stack folder and/or in management packs) that control similar behaviors in Ambari. The documentation here is very limited, so I recommend comparing your custom services with existing ones. For example, I learned a ton about Management Packs by reverse engineering the NiFi Management Pack, and a lot more doing the same with the Hortonworks-created ELK Management Pack.
You can find most of my management pack and custom services work on my GitHub:
https://github.com/steven-matison?tab=repositories
The ELK Management Pack (elasticsearch_mpack-3.4.0.0-0/common-services/ELASTICSEARCH/6.3.2/metainfo.xml) is where I referenced the restartRequiredAfterChange.
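To show where that tag sits, here is a minimal sketch of a custom-service metainfo.xml; the service and component names are placeholders, and only the restartRequiredAfterChange element is the point of the example:

```xml
<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>MYSERVICE</name>
      <displayName>My Service</displayName>
      <version>1.0.0</version>
      <components>
        <component>
          <name>MYSERVICE_MASTER</name>
          <category>MASTER</category>
          <cardinality>1</cardinality>
          <commandScript>
            <script>scripts/master.py</script>
            <scriptType>PYTHON</scriptType>
          </commandScript>
        </component>
      </components>
      <!-- Tells Ambari to prompt for a restart when a config changes -->
      <restartRequiredAfterChange>true</restartRequiredAfterChange>
    </service>
  </services>
</metainfo>
```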

Related

How to manage dependencies between different applications services while deploying on Kubernetes?

There are two web applications, WebAppA and WebAppB. Each web application depends on a Postgres database. We want to ship these applications to a customer who will deploy them on their own k8s cluster.
We want to create three packages: "WebAppA", "WebAppB", and "DataStore". Each web app is itself made up of multiple services, omitted here for simplicity.
We want to provide an apt-get/brew/yum kind of experience, so that the customer can deploy one or both applications à la carte. Most importantly, the deployment should detect whether the common "DataStore" package is already running and not spin up yet another Postgres instance.
Is there any way to package applications for Kubernetes that makes installation easy with dependency handling?
Of course! One way to start would be using Helm charts. You can read more about them here.
Helm defines dependency relationships declaratively in charts, so you can manipulate and maintain dependencies simply by managing some YAML manifests. It also lets you host personalised chart repositories that your charts can be fetched from. It's really nice.
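As a sketch of what that looks like (the chart names, versions, and repository URL below are all illustrative), a chart can declare the shared database as a conditional dependency in its Chart.yaml:

```yaml
# Chart.yaml for WebAppA (names/versions/URL are illustrative)
apiVersion: v2
name: webapp-a
version: 0.1.0
dependencies:
  - name: datastore          # the shared Postgres package
    version: "1.x.x"
    repository: "https://charts.example.com/stable"
    condition: datastore.enabled
```

If the customer already has a DataStore release running, they can install with `--set datastore.enabled=false` so Helm skips the dependency instead of spinning up a second Postgres; otherwise the dependency is installed alongside the app.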

Is it possible to modify the test server configuration in each separate microservice project?

I am developing a number of microservices which will run on Open Liberty. I have set up a test server in my eclipse environment which is configured to use all the features required by all the services which I am currently working on.
Whilst this works, it seems a heavy-handed approach and it would be good to test each service in an environment which closely resembles the target server. The services can differ in the set of features they require as well as the JVM settings necessary.
Each service will run in its own docker container and the docker configuration is defined in each project.
Is there a way to better test these services without explicitly setting up a new server for each individual service?
I am not aware of any way to segment the Liberty runtime (its features) nor the jvm (for different jvm settings) for different applications running in a single Liberty instance.
You can set app specific variables and retrieve them using MP Config, but that's not the same as jvm settings and certainly not the same as trying to segment specific features of the runtime to a specific application.
However, in general when testing, I would highly recommend trying to mimic your production environment as much as possible. Since you're planning on deploying into docker, I would do the same locally when testing, and given Liberty's lightweight, composable nature, it's unlikely that you'll hit resource issues locally when doing this (you should only enable the features on each Liberty instance that your app is using to minimize the size of that instance). This approach is one of the big benefits/value provided by containers and Liberty.
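Concretely, each service's Docker image can carry its own minimal server.xml; the feature list below is purely illustrative and should be trimmed to what each service actually uses, and per-service JVM settings can live in a jvm.options file alongside it:

```xml
<!-- server.xml for one microservice's container (features are illustrative) -->
<server description="service-a test server">
  <featureManager>
    <feature>restfulWS-3.1</feature>
    <feature>jsonb-3.0</feature>
    <feature>mpConfig-3.1</feature>
  </featureManager>
  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>
</server>
```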
In other words, even if you could segment one Liberty instance per application, I would not recommend it for your testing because, as you said, "it would be good to test each service in an environment which closely resembles the target server".

Making Registry changes during startup of stateless service in Service Fabric

I am using a library which searches the registry for a DLL. That DLL can be installed by running an MSI on the Service Fabric cluster, which sets the registry path.
But I wanted to avoid installing the MSI on the cluster, so I provided the required DLLs in the package itself. During start-up of the service, I create the registry entry pointing at the location of the DLL in my package. Everything works as expected.
Is this approach ideal? Are we allowed to make changes to registry? If not, how do we solve this problem? Any pointers are appreciated.
If the library has to use the registry, there is nothing you can do about it other than register the values. The ideal solution would be to change the DLL to retrieve this information from a configuration file, if that were possible.
You can do this in SF; the right way is to use the SetupEntryPoint option of the ServiceManifest for these management tasks, and from the ApplicationManifest you can set policies to specify which user these tasks should run as. It is described here in more detail.
The main issue with this approach on SF is that your application might move around the cluster, so you have to register the entry on every node, and perhaps also remove it when the application no longer runs there to avoid leaving garbage in the registry.
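A minimal sketch of that wiring (the program names and the account are illustrative): a SetupEntryPoint in ServiceManifest.xml runs your registration script before the service's main entry point starts, and a RunAsPolicy under the service's ServiceManifestImport in ApplicationManifest.xml lets only that setup step run elevated:

```xml
<!-- ServiceManifest.xml fragment: code package with a setup entry point -->
<CodePackage Name="Code" Version="1.0.0">
  <SetupEntryPoint>
    <ExeHost>
      <!-- Writes the registry entry before the service starts -->
      <Program>RegisterDll.bat</Program>
    </ExeHost>
  </SetupEntryPoint>
  <EntryPoint>
    <ExeHost>
      <Program>MyStatelessService.exe</Program>
    </ExeHost>
  </EntryPoint>
</CodePackage>

<!-- ApplicationManifest.xml fragments: Principals at application level,
     RunAsPolicy inside the service's ServiceManifestImport/Policies -->
<Principals>
  <Users>
    <User Name="SetupAdminUser" AccountType="LocalSystem" />
  </Users>
</Principals>
<Policies>
  <RunAsPolicy CodePackageRef="Code" UserRef="SetupAdminUser" EntryPointType="Setup" />
</Policies>
```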

What's the anatomy of a Bluemix/Cloud Foundry Node-RED project?

There's lots of documentation and a kludgy console to set up continuous deployment in Cloud Foundry, but I haven't found any documentation on what the artifacts inside a repository need to be.
I don't want to cut-and-paste flows from the Node-RED editor. If that's the only way, then IBM is not ready for prime time. I'm also aware that most everything about my flows lives in the Cloudant nodered db.
A node red application is more than the flows though. What about my _design docs for my dbs?
I need device info and other stuff from the Watson console, Cloudant info and my flows packaged up into something deployable.
Has anyone scripted this?
What I mean by this is that I can clone a Docker project, an npm project, and all sorts of projects that implement a build->test->push mechanism. They employ a configuration script of some sort (e.g. package.json) and contain a bunch of source files for the actual application, test scripts, db scripts, whatever is necessary to deploy the application and its environment onto a host. I see lots of documentation on the toolchain and its features, but I'm not clear on whether it's possible to make use of it for my hosted Node-RED application, or whether I have to write the scripting mechanisms myself to offload flow info from the nodered db and query all my other dbs for their respective _design docs and all the other configuration information required to set up an IoT Node-RED application.
I forgot to mention that the copy/paste method loses information; you get no tab-level metadata. The only way to get all the flow data is to pull it from the nodered flow record.
Node-RED will release a new version in a couple of days that will introduce projects, so you'll be able to use GitHub and all the usual tools to handle your app: https://twitter.com/NodeRED/status/956934949784956931 and https://nodered.org/docs/user-guide/projects/
While it doesn't address your short-term needs, I think it's the best long-term solution. Hopefully that helps.

Service Fabric: Plugins vs. Application Types

I'm developing a Service Fabric-based trading platform that will host hundreds of different long-running trading algorithms, all of which conform to a common interface and share a good deal of common code but can be vastly different in their internal specifics. I could model each of the different algos as an application type (which I'd dynamically load), but given the large number of different algos I have to wonder if it makes more sense to create a single Plugin Runner application type and then implement the algos as plugins.
In a related question, I understand how to implement a plugin architecture, in general, but I'm not quite sure where one would place the actual plugins in order to be discoverable by an instance running on Service Fabric.
Anyway, thanks for your help....
Both approaches can work I think. Using lots of Application Types adds the (significant) overhead of running lots of processes, but allows you to use and upgrade multiple versions of the same algorithm running simultaneously.
Using the plugin approach requires you to deal with versioning yourself.
Using the Application approach probably requires some kind of request router, while the plugin service could make its own decisions (if it's stateless).
You can create a Stateful service that acts as the plugin repository, or mount a file share, or use a database, no restrictions from the platform here. You can use naming conventions to locate the proper plugin.
The following approach could work if an application upgrade is acceptable to you when changing the set of plugins needed for a given application instance.
Recall that Service Fabric apps must be packaged before deployment or upgrade. Using either msbuild tasks or Powershell, you could copy your plugin dlls to the plugin runner service's code package as a post-packaging step prior to the app upgrade. Then your plugin dlls would be available to the service at startup using Assembly.Load and the code package's path, available in your service implementation's Context.CodePackageActivationContext.GetCodePackageObject("Your-Code-Package-Name").Path property. The code package's name is defined in ServiceManifest.xml, and is named Code by default.
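A rough sketch of the startup side, assuming the plugin DLLs were copied into a "Plugins" subfolder of the default "Code" package before packaging; the folder name, the "*.Algo.dll" naming convention, and the class itself are illustrative, and this can only run inside the Service Fabric runtime:

```csharp
using System.Collections.Generic;
using System.Fabric;
using System.IO;
using System.Linq;
using System.Reflection;

public static class PluginLoader
{
    // Loads every plugin assembly found in the code package's Plugins folder.
    public static IEnumerable<Assembly> LoadPlugins(ServiceContext context)
    {
        // "Code" is the default code package name from ServiceManifest.xml
        string codePath = context.CodePackageActivationContext
            .GetCodePackageObject("Code").Path;
        string pluginDir = Path.Combine(codePath, "Plugins");

        // Naming convention (illustrative): only "*.Algo.dll" files are plugins
        return Directory.EnumerateFiles(pluginDir, "*.Algo.dll")
                        .Select(Assembly.LoadFrom)
                        .ToList();
    }
}
```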