Integrating RTC and Worklight, SCM Strategies

Is there any best practice for dividing a Worklight project into streams and components?

Silva,
Perhaps the following IBM Worklight user documentation topic will help you: Integrating with source control systems.
There is the following approach as well.

An IBM Worklight project should use a component for each coherent set of files, such as the Worklight project itself (with its application containers, HTML files, CSS, and so on).
Anything generated, such as an Android project generated to run or simulate the app, shouldn't be in a component.
You can start simple with a single stream (which groups all the components you need to work on), or with one stream per component (but that approach doesn't scale well past 10-20 components, since streams have no hierarchy).
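For illustration, a hypothetical stream and component layout could look like the following (all names invented); RTC's .jazzignore files can then keep the generated artifacts from ever being checked in:

```
Stream: MyApp Development
    Component: MyApp-Worklight      (apps/, adapters/, HTML, CSS, JS)
    Component: MyApp-Server-Config  (server settings, property files)
    Component: MyApp-Shared-Libs    (libraries shared across projects)

Not shared (generated on demand):
    Android/iPhone projects produced by the Worklight build
    bin/ and other build output
```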

Related

Enterprise Wide Cluster Streaming System

I'm interested in deploying an enterprise service bus on a fault-tolerant system with a variety of use cases, including tracking network traffic and analyzing social media data. I'd prefer to use a streaming application, but I'm open to the idea of micro-batching. The solution will need to be able to handle imports from and exports to a variety of sources (hence the bus).
I have been researching various types of stream-processing software platforms here:
https://thenewstack.io/apache-streaming-projects-exploratory-guide/
But I've been struggling with the fact that many (if not all) of these projects are open source, and I don't like the large implementation risk.
I have found Apache Kafka attractive because of the Confluent Platform built on top of it, but I can't seem to find anything similar to Confluent out there, and I want to know if there are any direct competitors built on top of another Apache project, or an entirely proprietary solution.
Any help would be appreciated! Thanks in advance!

Does Apache NiFi support version control

I am trying to explore Apache NiFi. So far I haven't seen any way to version control flows.
Is there a way to version control flows when multiple users are developing in the same instance?
What about merging changes from multiple users?
Any help in these areas will help me continue my exploration.
In addition to James's great answer, I'll also point out that this approach to flow management has leveraged external version control systems and put the task on the user to perform. What I mean is that users (or automated processes) could initiate the production of a template and then store that template in a VCS. This has worked well, but it is also insufficient.

The other direction is also important: given a versioned flow, one would like it to be automatically reflected on another cluster/system/environment. Think of the software development lifecycle one might go through when building flows in a development environment and proving/vetting them into and through production. Or think of a production case where behavior is not as expected. While NiFi offers a really powerful interactive command-and-control model, sometimes people want to be able to test new approaches and theories in another environment. As a result, we're working now on a really awesome capability.
Come join the conversation. We'd like to hear your thoughts.
Thanks
NiFi Templates are a great format for configuration management of a NiFi flow. You can define templates for everything from small example snippets up to large nested process-group structures, essentially your entire flow. Templates include processors, queues, and controller services, but do not contain sensitive values like passwords. Templates are stored as XML files that are friendly to source control (as of NiFi v1.0).
Templates provide a way for individual developers to separately build parts of a flow, then merge the parts together in a single NiFi. If you match templates with process groups, swapping out the old one with the new one can be fairly easy and intuitive.
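As a rough sketch of the export-to-VCS step described above, the snippet below downloads a template's XML over NiFi's REST API so it can be committed like any other source file. The endpoint path is taken from the NiFi 1.x REST API as I understand it, and the host and template ID are placeholders; verify both against the API documentation for your version:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class TemplateExport {

    public static void main(String[] args) throws Exception {
        // Placeholder template ID; list templates via GET /nifi-api/flow/templates.
        String templateId = "11111111-2222-3333-4444-555555555555";

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/nifi-api/templates/"
                        + templateId + "/download"))
            .GET()
            .build();

        // Write the template XML straight to a file in the local repository checkout.
        HttpResponse<Path> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofFile(Path.of("my-flow-template.xml")));

        System.out.println("Template written to " + response.body());
    }
}
```

From there, a plain git/svn commit of the XML file gives you a diffable history of the flow, even without NiFi Registry.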
The answer to this question is yes: you can use the NiFi Registry to version control your flows.
The project page is:
https://nifi.apache.org/registry.html

AEM6 CQ how to handle Component Development and Content Authoring happening at the same time?

I just started at my new job and found myself right in the middle of a big project using Adobe AEM CQ, which I've never used before. Currently there are developers creating and tweaking components while content authors are busy authoring about 65 pages of content using those components.
Obviously, every time a component changes, someone needs to update all the authored content to match. This is a huge time-waster, as the only way to do this seems to be a custom-made script that looks for nodes in the XML files and tries to convert them to the new component spec. Sometimes this is not even possible, and authors need to re-author tons of content and lose lots of time.
Can anyone with AEM experience please let me know if:
1) There is a more painless way to migrate authored content to new components?
2) There is a better way to have developers and authors work simultaneously?
I know that the ideal way is to develop components first and then author on top of them, but that seems unrealistic, especially on a big client project where things change all the time.
Thanks
Firstly, it sounds like a business-process problem. The components should be fully developed and fully tested before content is added by the authors. If the edits to components are so significant that you're having this problem, I would recommend having functional and technical requirements written before the build starts.
With that said, the Groovy console for AEM is an excellent tool for updating nodes and content within an AEM site. Take a look at it here: https://github.com/Citytechinc/cq-groovy-console
I would not agree that content production should only happen after all the components have been developed. It's beneficial, especially when content production will take a lot of time, to start it while development is still happening.
On the other hand, I completely agree with the other part of the answer. The Groovy Console is the way to go when dealing with content migration (both before go-live and afterwards, as part of the BAU process). The ideal situation is one where all the current content can be mapped to the data model of the new component version; then you should be able to migrate all the content with scripts. If that's not the case, you can't avoid having authors rework the content manually.
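To make that concrete, here is a minimal sketch of the kind of node-level rewrite such a migration script performs, expressed directly against the JCR API (the Groovy Console hands you the same session; the component paths and property names below are hypothetical):

```java
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class ComponentMigration {

    // Hypothetical migration: the component was renamed and its "title"
    // property became "jcr:title" in the new version.
    public static void migrate(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(
            "SELECT * FROM [nt:unstructured] AS n "
            + "WHERE ISDESCENDANTNODE(n, '/content/mysite') "
            + "AND n.[sling:resourceType] = 'mysite/components/oldTitle'",
            Query.JCR_SQL2);

        NodeIterator nodes = query.execute().getNodes();
        while (nodes.hasNext()) {
            Node node = nodes.nextNode();
            if (node.hasProperty("title")) {
                node.setProperty("jcr:title", node.getProperty("title").getString());
                node.getProperty("title").remove();
            }
            // Point the content at the new component implementation.
            node.setProperty("sling:resourceType", "mysite/components/title");
        }
        session.save();
    }
}
```

When the old content can't be mapped mechanically like this, you're back to manual re-authoring, which is exactly the situation described above.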
Components should definitely be fully developed before they are used.
But if you want to change something in a component that stays the same across the entire website, such as a logo or header component, you can look into the Design Dialog.
Its advantage is:
If you have already authored n pages and then change the component through its Design Dialog, the change is automatically reflected on every page where the component is used.
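For reference, a Design Dialog property can be read from a component's backing class. A sketch using the AEM6 Sightly use-API (the logoPath design property is a made-up example):

```java
import com.adobe.cq.sightly.WCMUsePojo;

public class LogoComponent extends WCMUsePojo {

    private String logoPath;

    @Override
    public void activate() {
        // getCurrentStyle() exposes the properties saved through the Design
        // Dialog; they are stored once per design, not once per page.
        // "logoPath" is a hypothetical property name.
        logoPath = getCurrentStyle().get("logoPath", "");
    }

    public String getLogoPath() {
        return logoPath;
    }
}
```

Because the value lives in the design rather than on each page, updating it once changes every page that uses the component, which is exactly the behaviour described above.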
AEM is a CMS where, to put it in simpler terms, content is your data. If your development process is such that the data is inconsistent with the UI after every release, then your delivery process might be at fault. You can use the following approaches to make things better:
Make components backward compatible with the data.
Make components versionable, i.e. new versions of components work with new models of data, and it is left to the author to adopt the new versions.
Provision for data or component migration in your project plan.
In practice, most AEM implementations make components backward compatible and provide an upgrade path to new versions. This is not a technical problem; it's more of a project-governance issue.
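To illustrate the first point, backward compatibility often amounts to a fallback when reading content properties. A minimal sketch, assuming a hypothetical component whose "title" property was renamed to "headline" in a newer version:

```java
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ValueMap;

public class TeaserHeadline {

    // New versions of the component write "headline"; content authored
    // against the old version still carries "title" (both names invented).
    public static String getHeadline(Resource resource) {
        ValueMap props = resource.getValueMap();
        // Prefer the new property, but fall back to the legacy one so
        // existing content keeps rendering without any migration.
        return props.get("headline", props.get("title", ""));
    }
}
```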
This post keeps resurfacing, so I don't want people to get the wrong idea from the current state of the answers (some of which should be comments, IMHO): how to deal with components and releases is, in general, not a technical problem of the platform.

Grails Multiple Applications as Plugins

I need a bit of clarity regarding what's possible with Grails plugins before painting myself into a corner a month or two down the line.
We have two applications built in Grails that share the same model; we are looking at creating a single application which will control the ACL and include the two Grails applications as plugins.
Now, the two applications are very extensive, and they have their own controllers, views and routing.
Is it still viable to integrate the two applications as Grails plugins, or is there a better way of doing it? In the past I have found that following a quick, simple guide/tutorial on creating a Grails plugin might not cover the issues I'll encounter when taking two big applications, which use plugins of their own, and converting them into plugins themselves...
Any heads up information would be appreciated.
Everyone's needs are different. I'll simply explain what we've done on a current project and then you can use that to help make your decision.
We have a "common" grails plugin. This plugin contains all of our domains, controllers, layouts, views, css, images, and js that are shared throughout our grails applications. The common plugin has the spring-security-core plugin installed since the security domains are, well, common to all the other applications. However, each application that uses are common plugin still specifies its own security. It uses the domains from common as well as the spring-security-core plugin installed in common, but each application can control its own access points and lock down the URLs that need locked down.
We have an admin application
We have a customer-facing application which has both secured and unsecured content.
And we have a couple other internal only applications that use our common plugin.
We've been at this for 6 months and haven't noticed any drawbacks to this approach.
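For illustration, the resulting structure is roughly the following (all names invented):

```
common-plugin/    - shared domains, controllers, layouts, views, CSS, JS;
                    spring-security-core installed here
admin-app/        - installs common-plugin; its own security rules
customer-app/     - installs common-plugin; secured and unsecured content
internal-app-1/   - installs common-plugin
internal-app-2/   - installs common-plugin
```

Each application installs the common plugin and layers its own security configuration on top, as described above.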

How to develop against a web-based product with built-in server (not ASP.NET project)?

We have an application at work which is web-based and comes with a bundled web server (Apache Tomcat); it is used for network monitoring/patch management. It allows for personalisation, all sorts of rules, custom UI design using proprietary components and a definition language, and even custom code fired on events (based on Java).
I am in a team of several developers, each of whom will be customising this app to meet various requirements. As it's a server app, not a codebase, what's the best way to set up a dev environment for more than one user?
If there is one single shared VM with this app, I don't know how well source control like TFS would work with this sort of system. I also think developers working on various parts of the project may need the same file at the same time (though TFS does support multiple check-outs).
What is the best way to develop against this sort of product? Bear in mind that even with personal VMs and an instance of the app, changes have to be merged into one central instance. Something keeps making me think that app-virtualisation could help with this.
Thanks
If it is just an instance of Tomcat (even though it was bundled) couldn't you put the whole Tomcat directory and all of its subdirectories under source control? You just need to check in the non-binary parts, so exclude all the .jar, .exe, .tar.gz and .dll files when you check in. That's what I would do, unless I misunderstood your question.
I'm not familiar with the source control software that you mentioned. I have been using SVN (which is free) and TortoiseSVN as a client (also free). Just an option if your software can't support what I've suggested.
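If you do go the SVN route, excluding the binaries is a one-time property setup. A minimal sketch, assuming the Tomcat directory has already been made a working copy (adjust the patterns to your app):

```
# Recursively ignore the binary artifacts mentioned above so they are
# never offered for check-in:
svn propset -R svn:ignore "*.jar
*.exe
*.tar.gz
*.dll" .
svn commit -m "Ignore bundled binary artifacts"
```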