Uninstallation of application leaves leftovers such as BLAs and CUs - deployment

I came across a problem with cyclic (deploy-undeploy) deployments to WebSphere 7, where an uninstalled application leaves a dirty workspace behind. IBM has a fix (PM20642) for it in cumulative updates starting from 7.0.15, but I see no difference: the orphaned folders for the business-level application (BLA) and composition unit (CU) are still present after undeployment. I'm using a JMX admin client to connect to the server.
Does anyone have any experience dealing with this issue?

If you're using IBM's fix and it still fails, I would say open a PMR with IBM to help you investigate. It could be that their fix didn't work as they expected, or maybe the fix pack was not applied correctly. In either case, you may want IBM's support to resolve this issue.

If you only have remote access via JMX, then you could try to use $AdminConfig deleteDocument in wsadmin to remove the files/folders from the configuration repository.
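For example, from a wsadmin session, something along the lines of $AdminConfig deleteDocument cells/<cellName>/blas/<blaName>/bver/BASE/bla.xml followed by $AdminConfig save would remove one leftover document. The path shown is only an illustration: deleteDocument works per document, so each orphaned file under the cell's blas/ and cus/ directories would need to be removed individually, and the exact document URIs depend on your cell, BLA and CU names.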

APIC Cannot GET /apim while updating schema

I am new to Stack Overflow, and I thought of sharing this regarding an IBM product called APIC.
I did the whole deployment correctly, as recommended by IBM, on an Ubuntu environment with MongoDB and MySQL, using an Azure virtual machine as the server. Whenever I try to update the schema of the database with the new models, I get an error saying:
Cannot GET /apim/dataSources/partials/dataSourceMigrate.html
Please help, or ask me anything in case you need more info, and tell me whether it's an error from Azure, IBM, or me.
Thanks
This exact thing happened to me once, and after several weeks of back and forth with IBM, it turned out to be a bug on their side, not on the cloud side or your side :)
Check it here: https://stackoverflow.com/a/40016171/4694311
In that case, go back to using StrongLoop until they get it fixed.
Note that this is a bug in the operating system itself; it works on iOS, but that would be useless in the cloud.

Getting started for team development

I want to start developing with a team using a Neo4j DB, a Spring Boot backend and an AngularJS frontend.
For that, I want to have a Maven repository and a Jenkins instance.
To enable my team to use this, I want to have some kind of server at home that can provide remote (secured) access to the Maven repo, Jenkins and the Neo4j DB, and that can host the AngularJS frontend communicating with the Spring backend.
I don't really know where to start. After some googling I found NAS devices, but I'm not sure whether they suit my requirements.
I've found tutorials for configuring a VPN, but there may be a simpler way.
What would you recommend?
So, after some more asking around and googling, I found two possible solutions that I want to try out in the future:
The first seems to be a NAS (I've only read about Synology), although it doesn't seem to be intended for my requirements. However, there are packages available in DiskStation OS that allow the installation of Jenkins, a Maven repository and Docker, which makes it possible to host a Neo4j DB. I was told I should be cautious, because only the x86 DiskStations support Docker. At this point I'm not entirely sure what that means, but since I'm posting an answer, I don't want to keep this knowledge to myself.
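As a side note on the Docker part: the official Neo4j image makes that piece fairly easy; something along the lines of docker run -d -p 7474:7474 neo4j (the image tag and published port are just an example) is enough to get a test instance running.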
I didn't really find anything on hosting applications.
The second solution seems to be to build a home server. In my current understanding, a spare PC at home should suffice for that. All the steps involved should be available here (German).
I didn't find anything about hosting applications here either, but since this is a "real" system, I'm pretty sure it's possible.
I'm going to try the second one out and will keep you updated, as long as I don't forget :)

jBPM Repositories disappear after Wildfly restart

Pardon if I can't give more pointers, but I'm really a noob at WildFly. I'm using version 9.0.2.
I have deployed jbpm-console, drools and dashboard - no problems here. I restart WildFly using the JBoss CLI, and when I log in again, the repositories no longer appear in the web interface or on disk (at least nothing that grep or find will show).
I'm using the H2 database. I'm not even sure where to look; does anyone have any idea?
Thanks in advance!
After reading through the docs for a while, it seems that it's necessary to configure jBPM to persist its runtime data. From the docs:
"By default, the engine does not save runtime data persistently. This means you can use the engine completely without persistence (so not even requiring an in memory database) if necessary, for example for performance reasons, or when you would like to manage persistence yourself. It is, however, possible to configure the engine to do use persistence by configuring it to do so. This usually requires adding the necessary dependencies, configuring a datasource and creating the engine with persistence configured."
https://docs.jboss.org/jbpm/v5.3/userguide/ch.core-persistence.html
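To make that a bit more concrete, here is a minimal sketch of what "creating the engine with persistence configured" can look like with the jBPM 5.x API those docs describe. The persistence unit name and the Bitronix transaction manager are the ones used in the documentation's examples; the class and method names here are just illustrative, and a jbpm-console setup on WildFly wires persistence up through its own configuration rather than code like this:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.persistence.jpa.JPAKnowledgeService;
import org.drools.runtime.Environment;
import org.drools.runtime.EnvironmentName;
import org.drools.runtime.StatefulKnowledgeSession;

import bitronix.tm.TransactionManagerServices;

public class PersistentSessionSketch {

    // Creates a stateful knowledge session whose runtime state is stored via JPA,
    // so process and session state can survive an engine or server restart.
    public static StatefulKnowledgeSession createPersistentSession(KnowledgeBase kbase) {
        // Persistence unit from the jBPM docs; persistence.xml points it at your datasource (e.g. H2).
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

        Environment env = KnowledgeBaseFactory.newEnvironment();
        env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
        // Bitronix is the standalone JTA transaction manager used in the docs' examples;
        // inside an application server you would use the container's transaction manager instead.
        env.set(EnvironmentName.TRANSACTION_MANAGER,
                TransactionManagerServices.getTransactionManager());

        // kbase contains the process definitions; null uses the default session configuration.
        return JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
    }
}

A session created this way can then be reloaded after a restart with JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env).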

Domino 8.5.3 - Create an organization extension library / codestore

This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because that is already in wide use in our environment. I have a plug-in and feature built just from the POI .jars that allows me to build our current POI applications as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an updatesite so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (that I wish existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
The caveat is that the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature into my Designer client from the Eclipse update site and it works great. However, the install fails once I import that update site into our updatesite.nsf database. This means that while the developers can all install from the update site if I put it on a network drive, it doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse updater just hangs. I've let it go for well over an hour, and eventually Notes becomes completely unresponsive.
So the question is: is there anything I might have done wrong, either in the development of the plug-in or in the server configuration, that might be causing this issue?
Additional Info
I'm looking at the OSGi console and it is largely unhelpful. I am getting the following error as I'm trying to install:
SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know that this post is over 5 years old, but...
For those who find this and are trying to resolve the error
SEVERE Could not access digest on the site: no protocol:
it is caused by the URL of the Domino updatesite.nsf not being added to the Archives tab of the update site project's site.xml.
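(For reference, the entries on that Archives tab end up as <archive path="..." url="..."/> elements in the site.xml source, with the url pointing at the updatesite.nsf on the Domino server.)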
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE); YMMV from Eclipse. So if anonymous connections are blocked on the Domino server, you will be out of luck.
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure the plug-in, feature and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) to the feature directory using a setting: add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site - it then picks up the features and plug-ins from there. After a build you would need to restart Designer/the server to activate the updated feature.
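If it helps, such a .link file is just a single line of the form path=C:/path/to/yourUpdateSite (the path here is only an example), pointing at the directory that holds your built features and plug-ins.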
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
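To my knowledge, the notes.ini setting meant here is OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf (the value being the path of your update site database), after which the HTTP task has to be restarted, e.g. with restart task http on the server console.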
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

What happens to existing workspaces after upgrading to TFS 2010

I was looking for some insight into what happens to existing workspaces and files that people already have checked out after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.)
I've checked the TFS installation guide and searched the web; all I could find were upgrade scenarios for the server side. Nobody even mentions what happens to source control clients.
I've created a virtual machine to test the upgrade process. The upgrade was successful and all my files and workspaces exist on the new server too. The problem is that the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade.
As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to some paper or blog post about it.
Thanks in advance...
When you do an upgrade, your server ID should stay the same. You may need to change it if you want to clone your environment.
In your test scenario you are creating a clone of the TFS server rather than a straight upgrade.
ChangeServerID
You are probably running into problems because ChangeServerID has been run on your test environment to facilitate it running on the same network as your production TFS server.
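For reference, re-stamping a cloned instance is typically done with the TFSConfig tool, along the lines of TFSConfig ChangeServerID /SQLInstance:<sqlServerName> /DatabaseName:<Tfs_ConfigurationDb> (the parameter values are placeholders), run against the cloned environment.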
All workspaces and shelvesets remain unchanged, and people will be able to continue working immediately. Even checked-out files are OK and will be picked up correctly.
I would recommend upgrading the server first and keeping the clients on 2008 (using the Forward Compatibility Pack), then upgrading the clients to 2010 as and when the projects are upgraded.