Jaspersoft: 4.2.1 upgrade creates issues with OLAP access grant schemas - jasperserver

We are in the process of developing all our domains, OLAP schemas, reports, etc., in preparation for a Q1 launch of Jasper replacing an older BI suite. We had been working in 4.1 and had a working environment: users had JIProfileAttributes, and we passed these attributes into filters for both Domains and OLAP connections via access grants. This was all working correctly in 4.1, applying data security where necessary.
We recently upgraded the server to 4.2.1, as there were some additional features we wanted to take advantage of for our development, but it appears the upgrade broke the security for OLAP. None of the profile attributes apply any filters within OLAP after the upgrade. They ARE still working with Domains; it's just OLAP that broke.
Wondering if anyone else has had a similar issue with 4.2.1. I have a ticket open with Jaspersoft support but have not gotten any feedback on it yet. Unfortunately it has stalled some of our development, as data security needs to be tested and this piece simply no longer works. I have tried re-doing the upgrade to make sure that was done correctly, and also tried simply reloading the OLAP schema, connection, and access grant, but it's still not working in 4.2.1. Any feedback would be appreciated. At this point I'd settle for at least knowing it's a known issue and will be addressed ASAP. Luckily we are still in development, else this would have been a major issue for us. Thanks.

It's a known issue and will be addressed ASAP.
You should hear back directly from Jaspersoft Technical Support as well; I expect they'll have more information about when a patch can be expected.

I came across an issue recently with roles and permissions behaving very strangely. I eventually found that the problem was that I had two JasperReports Server instances running on my development PC, and that JasperReports Server caches access control list (ACL) information, among other things, in files on disk. One instance of JRS was incorrectly picking up the other's ACL cache, causing all sorts of problems.
I found that bringing down each server, deleting the cache files, and then running only one server at a time (remembering to delete the files between bounces) solved all the issues.
Reading your problem, I'm thinking that maybe you've installed the upgrade either over the top of the existing install or in a different directory, but it is picking up the old cache files from the previous install, causing these problems.
As I'm developing on Windows, I found the cache files under C:\Users\my.profile\AppData\Local\Temp\ehcache and C:\Users\my.profile\AppData\Local\Temp\ehcache-hibernate. I don't know where these are stored on Linux/Unix, but I believe the location comes from the Java system property java.io.tmpdir.
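As a convenience, here is a minimal cross-platform sketch of that cleanup (a hedged example: it assumes the caches live in the system temp directory that java.io.tmpdir resolves to, and that the directory names match what I saw on Windows; run it only while every JRS instance is stopped):

    import shutil
    import tempfile
    from pathlib import Path

    # JasperReports Server's Ehcache disk stores normally end up under the
    # JVM's java.io.tmpdir, which usually maps to the OS temp directory.
    temp_dir = Path(tempfile.gettempdir())

    for name in ("ehcache", "ehcache-hibernate"):
        cache_dir = temp_dir / name
        if cache_dir.exists():
            shutil.rmtree(cache_dir)  # safe only while the servers are down
            print("removed stale cache:", cache_dir)
        else:
            print("no cache found at:", cache_dir)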
Hope this helps..

Related

How to synchronize deployments (especially of database object changes) across multiple environments

Here is my challenge: I am the DevOps engineer and a software engineer on a team where, months back, the developers moved from a central Oracle DB to having the DB on a CentOS VM on their individual laptops. The move away from a central DB was meant to reduce dependency on the DBAs and to eliminate issues that stemmed from inconsistent data.
The plan for sharing the database and keeping everyone in sync was that each person would share change scripts with everyone else. The problem is that we use Skype for communication (we just set up Slack but have yet to start using it fully), and although people sometimes post the text of DB change scripts, some of us can miss them. The other problem is that some developers forget to post their changes at all. Further, new releases are deployed to Production without being deployed to the Test and Demo environments.
This has posed a serious challenge for us, especially for me, since I recently became responsible for ensuring that our Demo deployments are in sync with the Production deployments.
Most of the synchronization issues come down to the database being out of sync due to missing change scripts or missing DB objects. Oracle is our DB of preference.
A typical deployment in the Demo environment is a very painful process: we test the application, and as issues occur due to missing DB table columns, functions, or stored procs, we have to look for the missing DB objects, apply them to the DB, and then continue until all issues are resolved.
How can I solve this problem to ensure smooth, painless and less time-consuming deployments? Can migrating our applications to Docker help with the DB synchronization issues and the associated lack of discipline of the developers? What process can we put into place to improve in this area?
Thank you very much in advance for your help.
Have a look at http://www.dbmaestro.com
I strongly recommend joining the live demo session.
DBmaestro TeamWork can help you merge changes from multiple DBs into a single shared DB, and safely move changes from one environment to the other.
Danny
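Beyond any specific tool, the usual fix for missed change scripts is a versioned-migration convention: every schema change is a numbered SQL file committed to source control, and each environment records which files it has already applied, so a deployment replays only what's missing. Off-the-shelf tools such as Flyway and Liquibase implement exactly this bookkeeping for Oracle. As a rough sketch of the idea (the schema_migrations table, file layout, and connection details below are invented for illustration, and it assumes the python-oracledb driver):

    import pathlib
    import oracledb  # python-oracledb; cx_Oracle works much the same way

    MIGRATIONS = pathlib.Path("db/migrations")  # e.g. 001_add_order_col.sql, 002_...

    conn = oracledb.connect(user="app", password="secret", dsn="demo-host/ORCLPDB1")
    cur = conn.cursor()

    # Bookkeeping table: one row per change script already applied here.
    cur.execute("""
        BEGIN
            EXECUTE IMMEDIATE 'CREATE TABLE schema_migrations (
                filename   VARCHAR2(255) PRIMARY KEY,
                applied_at TIMESTAMP DEFAULT SYSTIMESTAMP)';
        EXCEPTION
            WHEN OTHERS THEN
                IF SQLCODE != -955 THEN RAISE; END IF;  -- ORA-00955: table already exists
        END;""")

    cur.execute("SELECT filename FROM schema_migrations")
    applied = {row[0] for row in cur}

    # Apply every script this environment has not seen yet, in filename order.
    for script in sorted(MIGRATIONS.glob("*.sql")):
        if script.name in applied:
            continue
        for stmt in script.read_text().split(";"):  # naive split; fine for plain DDL
            if stmt.strip():
                cur.execute(stmt)
        cur.execute("INSERT INTO schema_migrations (filename) VALUES (:1)", [script.name])
        conn.commit()
        print("applied", script.name)

The value is less the script than the discipline it enforces: a change that is never committed as a migration file simply never reaches Demo or Production, which makes the gap visible immediately.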

How to use isolated development database in shared Visual Studio solution

I'm leading a small software development team (4 people), and have just broken ground on a source-controlled SQL Server 2008 database project, with isolated development databases for each developer. I'm still implementing this one step at a time, but I'm envisioning each developer having their own database, with a naming scheme something like <ProjectName>_DEVELOPMENT_<TFSUserName>. This was all recommended per the MSDN articles I've been reading, but someone let me know if that sounds way off.
Anyway, we have a shared application solution that we've been developing for some time. In the past we had no database version control; we just modified our database directly from SQL Server Management Studio when new reference data needed to be populated or when we were testing functionality -- one change immediately affected everyone else.
So with this new change, I'm wondering what the best way would be for each person to connect to their isolated development database from the application solution. Prior to isolated databases, our connection to the database was specified in our application's web.config as a connection string. If we're each going to have our own database, the only way I can see it working is for each developer to point the connection string in their local solution at their personal database. But changing web.config will check out that file in the solution, so developers will always have to specifically uncheck that file when checking in application changes to the baseline. Is there a less clunky way for each developer to use their isolated database when doing application testing?
I recommend that you not make the database names username-specific. Instead, make the database name the same for each developer and always reference it via localhost (localhost\<ProjectName>_DEVELOPMENT). Then the same connection string will work for every developer.
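For example (a sketch assuming SQL Server with Windows authentication; the project name is illustrative), every developer's web.config can then carry the identical connection string:

    Data Source=localhost;Initial Catalog=MyProject_DEVELOPMENT;Integrated Security=True

Since nothing in it is developer-specific, nobody needs to check out web.config just to point at their own database.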
MSDN's suggestion to use username-specific database names makes sense for a shared development environment, where everyone's databases live side by side; it's definitely not ideal for a localized environment like this.

Deploying .EXE to network drive? [closed]

What are the problems with deploying an .EXE to a network drive and having users execute the .EXE over the network?
The advantage is that upgrades only need to be made to the one location. What are the disadvantages?
I would instead consider creating an MSI (http://en.wikipedia.org/wiki/Windows_Installer) file for your application and a Group Policy to facilitate distribution throughout your company (http://support.microsoft.com/kb/816102).
There are a number of freeware MSI tools. Good ones that come to mind are http://www.advancedinstaller.com/ and http://wix.codeplex.com/
The EXE is one thing, but you also need to consider any DLLs and other shared resources that may be associated with the app.
Some DLLs may be shipped with the EXE - you'd have to put those on the remote drive with the EXE, which would cause additional network traffic whenever the app needs to load them.
Other DLLs may be part of Windows, but there could be versioning issues here if your workstations have different versions of Windows, or even different service packs or patches, while all running a common version of the app.
And what about licensing? Does the app's license actually allow you to install it on a network drive? Many software companies are very specific about this sort of thing, so you really need to be careful if you don't want to get caught out.
In short, it sounds like a good idea to get a quick win for your deployment management, but it probably causes far more issues than it solves.
If you really want to go down this path, you should maybe consider alternatives like remote desktop (e.g. Citrix or Terminal Server); there are much better ways of achieving your goals than just sticking everything on a network drive.
One problem is file locking. In a Windows environment, if a user executes the application directly from a network share, the application's files are locked. This prevents the application from being updated with a newer version if someone has left the application open.
You can get around this by disabling the network share, updating the app, and then re-enabling the share.
If you write your application using an Object Capability Security model, as defined in Mark S. Miller's Ph.D. thesis, Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control, then you will not have any security drawbacks.
On the other hand, the "disadvantage" is that you must now manage access control via the object graph. The application should only have access to whatever permissions you give it. As some have already mentioned, Windows has a basic protection policy that locks the application's files and thus prevents anyone from modifying the EXE until all instances of the application are closed.
Really, the key issue here is you have to ask yourself what authority the program and its component parts should have. If it requires local user permission, then you will either have to design around that or give the program permission.
Understanding the implications of this, and doing it well, is not an easy task.
For our program we decided against a shared EXE. We thought it would be harder to support (IT needs to kill user sessions to unlock files before updates, users won't know where the EXE is on the network, share/network file permissions need to be modified by IT, etc.) and that we should emulate the behavior of other programs where possible (client software is normally installed on the clients).
The main disadvantage would be the network drive being unavailable.
The language the EXE is written in (which you didn't specify) also matters: .NET, for example, has some security issues when running from a network drive.
It depends on what the application does. My application would be problematic as an over-the-network deployment because the configuration files it uses are all in the same folder as the EXE, or in a subfolder. If every user ran it off the network, they could potentially modify the configuration files and screw things up for everyone else.
Thankfully, my app is only going to be deployed on separate workstations. :)
They might not have all the files your app needs installed. If they don't, you'll need to create a setup. If they do and it works and everyone's drives are mapped correctly, you should be fine.
I run a vendor's app like this at work. They didn't design for it, but it works without an issue. I have all the shortcuts pointing to the UNC path. This particular app doesn't use files in the EXE directory, so file locking isn't an issue. It's also hooked up to SQL Server for the data, so the data store isn't an issue either. (It would be a major problem if the app used a local SQLite, Access, or some other file-based DB.)
If your app is a .NET app, this WILL NOT work without some major modifications to each machine's security settings, which is probably a bad idea anyway. If you're talking about a .NET app, you should use ClickOnce instead. I use it for a few apps at work as well, and it's great and easy to use.
The problem is there isn't a definitive answer to your question, just a bunch of "it depends" qualifications. The big issues, AFAIK, are using local files for data storage, be they text files or databases. It is awesome for updates, though, which is why the app mentioned above is run like this.
This is perfectly doable. Be sure to set the "Swap Run From Network" flag (the MSVC linker's /SWAPRUN option; I think the Visual Studio setting also covers CD-ROM) when compiling -- this prevents the image from being backed directly by the binary, so you can upgrade it while people are running it. I am not running Windows at the moment, so I can't check, but you may be able to set this flag for DLLs, too.
One problem with doing this is that if your program associates itself with file types, then when the network changes or computers are renamed, everybody's PC starts to run like a dog: Explorer has a tendency to query these things at funny times.
Another more serious problem is that if somebody accidentally deploys a broken version, it's not just the early adopters who get stuffed!
For an easy life, personally I recommend XCOPY deployment...
For .NET applications, we have observed BadImageFormatException, which we have come to believe comes from network glitches (or computers losing network connectivity at key moments, for example over WiFi) while the EXE or DLL files are being read.
IMHO this is a really bad design decision. We have a third-party application in our company that is designed exactly like this.
In order for the program to run properly, it requires full write access to that shared folder. In this case the worst part was that the program kept its freaking DATABASE in the same shared folder (yeah, I was shocked too when I found out)!!! It didn't take long until someone wiped every file that wasn't in use from that folder, including the database of course :)
I really recommend a client-server approach, even if you have to buy/build a smart installer with auto-update features to overcome deployment issues.

Will major config changes discourage users from deploying code?

I'm beginning development on a solution that will plug into an existing application. It will be made available for public use.
I have the option of using a newer technology that promotes better architecture, flexibility, speed, etc... or sticking with existing technology that is tried and tested which the application already uses.
The downside of going with the newer technology is that a major change to an essential config file is needed to support it. If the change goes wrong, the app will be out of service. Uninstalling is also an issue, as future custom code by other developers may require the newer tech, and there's no way this can be determined.
How important is this issue in considering an approach?
Will significant config changes put users off deploying code, or cause problems for them later?
Edit:
Intentionally not going into specifics about technologies here, to keep the question from being sidetracked.
Install/uninstall software can be provided, but there is some complexity involved, which may cause it to foul up on edge cases, resulting in a dead app. (A backup of the original config would be one way to mitigate that.) Also see the uninstall issue above, where I essentially can't provide one.
Yes, in my experience, any large amount of work will make users think twice about deploying or upgrading.
It's your standard cost/benefit analysis done by businesses with just about every decision. Will the expected benefits more than outweigh the potential costs?
When we release updates to our software, there's almost always a major component that's there just to assist the users to migrate.
An example (modified enough to protect the guilty): we have a product which generates reports on system performance and other things. But the reports aren't that pretty and the software for viewing them is tied to a specific platform.
We've leveraged BIRT to give us intranet-based reporting that looks much nicer and only needs the client to have a web browser (not some fat client).
Very few customers made the switch until we provided a toolset that would take their standard reports and turn them into BIRT reports. Once we supplied that, customers started taking it seriously - the benefit hadn't changed, but the cost had gone right down.
You've given us no detail, so we can't answer with any specificity. But if your question is whether a significant portion of your potential user base will be deterred from using your product if they have to do significant setup work, then the answer is yes. I've seen this time and time again, with my own products and with those I've installed myself, even when the only "config change" is an uninstall and reinstall. People don't like to do work.
You may want to devote more effort than you've considered so far to making the upgrade painless. Even if you're upgrading someone else's framework, you may find the effort worthwhile and reflected in an increased number of installs.
I have noticed that "power users" - developers, sysadmins, etc. - are willing to put up with more setup work.
I'm not sure what you mean by "major config change", but if you're talking about settings / configuration files, then I've been doing something like this:
An application always contains a default configuration which is useful for most users and which can't be replaced. Instead, users can override one or more of the default settings in their own, separate configuration file. When a new (major) version is released, most users don't need to reconfigure anything: their own custom settings are still taken from their own configuration file, and any newly required parameters are taken from the new release's default settings.
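A minimal sketch of that layering (the file name and keys here are invented for illustration): the defaults ship with the application, and the user's file contains only the settings they chose to override.

    import json
    from pathlib import Path

    # Defaults ship with the application and are never edited by users.
    DEFAULTS = {
        "port": 8080,
        "log_level": "INFO",
        "use_new_renderer": False,  # a new release can simply add defaults here
    }

    def load_config(user_file: str = "user-settings.json") -> dict:
        config = dict(DEFAULTS)
        path = Path(user_file)
        if path.exists():
            # The user file holds only explicit overrides, so an upgrade never
            # forces anyone to re-enter settings that were already right.
            config.update(json.loads(path.read_text()))
        return config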
It's obvious that most users don't want to waste their time adjusting settings that were already right - and quite rightly so.

Ideas on setting up a version control system

I've been tasked with setting up a version control for our web developers. The software, which was chosen for me because we already have other non-web developers using it, is Serena PVCS.
I'm having a hard time trying to decide how to set it up so I'm going to describe how development happens in our system, and hopefully it will generate some discussion on how best to do it.
We have 3 servers, Development, UAT/Staging, and Production. The web developers only have access to write and test their code on the Development server. Once they write the code, they must go through a certification process to get the code moved to UAT/Staging, then after the code is tested thoroughly there, it gets moved to Production.
It seems like making the developers use version control for code on Development, which they are constantly changing and testing, would be an annoyance. Normally only one developer works on a module at a time, so there isn't much, if any, risk of overwriting other people's work.
My thought was to have them only use version control when they are ready to go to UAT/Staging. This allows them to develop and test without constantly checking in their code.
The certification group could then use version control to see what changes had been made to a module and to make sure they were always getting the developer's latest revision to put on UAT/Staging (currently we rely on the developer zipping up their changed files and uploading them via a web request system).
This would take care of the file side of development, but leaves the whole database side out of version control. That's something else that I need to consider...
Any thoughts or ideas would be greatly appreciated. Thanks.
I would not treat source control as an annoyance. See Nick's answer for the reasons.
If I were you, I would not decide this on my own, because it is not a matter of setting up version control software on some server but a matter of changing and improving development procedures.
In your case, it might be worth explaining and discussing release branches with your developers and with quality assurance. This means that your developers decide which features to include in a release, and while the staging crew is busy testing the "staging" branch of the source, your developers can already work on the next release without interfering with the staging team.
You can also think about feature branches, meaning there is a new branch for every specific new feature of the web site. Those branches are merged back once the feature is implemented.
But again: make sure your teams agree to the new development process. Otherwise, you'll waste your time setting up a version control system.
The process should at least include:
When to commit.
When to branch/merge.
What/When to tag.
The overall work flow.
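To make that concrete, the written agreement can be as short as this (an invented example, not a prescription):

    Commit: at least daily, and always before handing code to certification.
    Branch: one release branch per certification cycle; day-to-day work stays on the trunk.
    Merge: fixes made on a release branch are merged back to the trunk immediately.
    Tag: every build delivered to UAT/Staging or Production gets a tag, so the certification group can diff exactly what changed between revisions.
    Workflow: develop on trunk -> branch for certification -> test on UAT/Staging -> tag and release to Production.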
I have used Serena, and it is indeed an annoyance. In addition to the unpleasant workflow overhead Serena puts on top of the check-in/check-out process, it is a real pain for anything beyond the simplest of tasks.
In Serena ChangeMan, all code on local machines is managed through a central server. This is a really bad design. It means a lot of the day-to-day branch maintenance work that would ordinarily be done by developers has to go through whoever has administrator privileges, making that person 1) a bottleneck and 2) embittered, because they have a soul-sucking job.
The centralized management also strictly limits what developers are able to do with the code on their own machine. For example, if you want to create a second copy of the code locally on your box, just to do a quick test or whatever, you have to get the administrator to set up a second repository on your box. When you limit developers like this, you limit the productivity and creativity of your team.
Also, the tools are bad and the user interface is horrendous. And you will never be able to find developers who are already trained to use it, because it's too obscure.
So, if another team says you have to use Serena, push back. That product is terrible.
Using source control isn't an annoyance; it's a tool. Having the benefits of branching and tagging is invaluable when working with new APIs and libraries.
And just a side note: a couple of months back, one of the devs' machines failed and he lost all his newest source. We asked when he had last committed code to source control, and it was two months earlier. Sometimes just having it to back up your work when you reach milestones is nice.
I usually commit to source control a couple of times a week, depending on whether I've hit a good stopping point and am about to move on to something different or bigger.
Following on from the last two good points, I would also ask your other non-web developers what development process they are using, so you won't have to create a new one. They will also have encountered many of the problems that occur in your environment, both technical (using the same OS and setup) and managerial.