SQL 2008 R2 - Reporting Services, using Data Model with Report Builder 3 over internet causes crash, any ideas?

Background:
I have a Windows 2008 R2 box set up with SQL 2008 R2, both the Database Engine and Reporting Services.
I have configured Reporting Services to use custom authentication (FormsAuthentication) that I wrote.
The custom authentication is told, via the Reporting Services configuration files, the name of a user to treat as the administrator (assuming they log in correctly).
When Reporting Services queries the custom authentication about the current user's permissions, it always returns true if the logged-in user is the one configured as the administrator.
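For context, a minimal sketch of the kind of authorization check this implies, assuming the standard SSRS IAuthorizationExtension model (the real interface has many CheckAccess overloads and members; only one is shown, and the admin lookup is simplified/hypothetical):

// Simplified sketch of an SSRS custom authorization extension.
using System;
using Microsoft.ReportingServices.Interfaces;

public class CustomAuthorization // : IAuthorizationExtension (other members omitted)
{
    // Admin user name as read from the RS configuration files (hypothetical).
    private readonly string adminUserName = "AdminUser";

    public bool CheckAccess(string userName, IntPtr userToken,
                            byte[] secDesc, CatalogOperation requiredOperation)
    {
        // The configured administrator is granted every permission.
        if (string.Equals(userName, adminUserName, StringComparison.OrdinalIgnoreCase))
            return true;

        // Everyone else would be checked against the security descriptor here.
        return false;
    }
}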
I have uploaded a Data Model to Reporting Services (using the built-in Report Manager app) which uses a Data Source I added (also via Report Manager) that connects to a database on the same box.
I have an ASP.NET MVC3 web app (also on the same box) that is configured to use the Reporting Services web service to do things like list existing reports, run existing reports, and provide a link to start Report Builder 3.
The ASP.NET MVC3 web app shares its user logins with Reporting Services, i.e. the custom authentication used by Reporting Services verifies user details against the same data as the web app.
The ASP.NET MVC3 web app is available externally.
Problem:
If I log into the web app remotely, start Report Builder 3 via the link, log in as UserA, use the Report Wizard with options >> Create dataset >> select the Data Model (see above) as the source of data >> choose one table of data (e.g. Organisations) >> click the button to preview data >> click next -- BANG - REPORT BUILDER 3 HANGS.
If I log into the web app locally on the server hosting everything, start Report Builder 3 via the link, log in as UserA, use the Report Wizard with options >> Create dataset >> select the Data Model (see above) as the source of data >> choose one table of data (e.g. Organisations) >> click the button to preview data >> click next >> choose row/column groups and values and a report style, and click finish. I can then run the report and save it. -- EVERYTHING WORKS!
Things I looked at:
If I do a SQL Profiler trace against both the Reporting Services database and the database that the Data Model uses, then in the case where it hangs it appears Reporting Services is getting into some kind of loop, continuously asking the Reporting Services databases if there are any running jobs. When it works, it never asks the Reporting Services databases about running jobs at this point.
If I use the Data Source that the Data Model uses as the data source in Report Builder 3, then it does work. HOWEVER, this then prompts for a SQL Server login and requires you to open the SQL Server port on both the server and the remote machine! Not good.
This is driving me crazy. Anybody with elite skills in the black magic of using Reporting Services 2008 R2 and Report Builder 3 who can help me figure this out will be deserving of every computing award going.
EDIT: I found this while Googling again (forum post, Google cache) and got really excited, but I couldn't quite make sense of what the poster was saying, and the changes I made trying to follow it just broke all Reporting Services access, so I rolled back the changes. Unfortunately the forum is archived (and a bit rubbish) so I can't contact or leave a message for the poster.

Peter, my experience with Reporting Services is extremely limited; I've been building reports for SQL Azure Reporting Services using BIDS.
I had a problem whereby BIDS was crashing when I attempted to preview a report, and it took me hours to work out that my reports weren't checked out of version control, so BIDS couldn't save them. Rather than giving me any kind of meaningful error message, Visual Studio simply crashed on me.
Maybe, just maybe, your issue is that Report Builder can't save the report and is crashing for a similar reason. However, from your description, I'd suggest that if that is the case, it's likely to be a permissions issue rather than a read-only one. I'd check that the security context for data access is what you expect when logging in remotely.
Sorry that I can't give you a definitive answer, but maybe it will suggest somewhere new to look.

It turns out that this is caused by a schoolboy error by Microsoft: using one connection and not opening/closing it appropriately. MS were able to give us this workaround when we called them:
Basically, after you've got the Report Builder 3 ClickOnce app installed, locate it on your computer and then add the tags below inside the system.net tag of its app.config file.
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="1024"/>
  </connectionManagement>
</system.net>
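As an aside, if you ever hit the same single-connection limit in .NET code you control (not Report Builder itself), the programmatic equivalent of that config setting is ServicePointManager.DefaultConnectionLimit; a minimal sketch:

using System.Net;

static class ConnectionLimitFix
{
    static void Apply()
    {
        // Equivalent of the <connectionManagement> setting above for your
        // own code: raise the per-host connection limit before any HTTP
        // connections are opened.
        ServicePointManager.DefaultConnectionLimit = 1024;
    }
}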

Related

Adding relying parties in ADFS using C# or Powershell

I need to add relying parties in ADFS every time a new client comes on board. I would like to automate this by specifying either the URL to the federation metadata or a file picker for the admin to load the federation metadata file.
I have been following the instructions on this site Adding a New Relying Party Trust
However, I get the following error:
ADMIN0120: The client is not authorized to access the endpoint
net.tcp://localhost:1500/policy.
The client process must be run with elevated administrative privileges.
I'm not sure what I am doing wrong. I guess the bigger question is: is this the best way to set up relying parties and claims using code, or should I use PowerShell commands?
This error doesn't mean you have a code issue; it's related to privileges. Test it by right-clicking the client and choosing "Run as administrator" to see if it goes through.
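If you want the client to fail fast with a clearer message, you can check for elevation before calling the ADFS endpoint; a minimal sketch using the standard WindowsPrincipal API:

using System;
using System.Security.Principal;

static class ElevationCheck
{
    // True if the current process runs as a member of the built-in
    // Administrators group (i.e. elevated under UAC).
    static bool IsElevated()
    {
        WindowsIdentity identity = WindowsIdentity.GetCurrent();
        WindowsPrincipal principal = new WindowsPrincipal(identity);
        return principal.IsInRole(WindowsBuiltInRole.Administrator);
    }

    static void Main()
    {
        if (!IsElevated())
        {
            Console.Error.WriteLine(
                "Run this tool elevated; the ADFS policy endpoint requires administrative privileges.");
            return;
        }
        // ...proceed with the ADFS calls...
    }
}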
As per your link, there are three ways:
Using the AD FS 2.0 Management console
Using the Windows PowerShell command-line interface
Programmatically using the AD FS 2.0 application programming interface (API)
All three are equally valid - the only difference is how much work you have to do for each e.g. the wizard is lots of mouse clicks.
What I do is set up the RP the first time via the wizard, save the setup using PowerShell (get the RP, get the claim rules, etc.), and then use those scripts to set up subsequent ones as you migrate from dev to test to staging etc.
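Since the question asks about C# or PowerShell, one middle ground is to drive the ADFS cmdlets from C# via System.Management.Automation. A rough sketch, assuming the AD FS 2.0 snap-in and its Add-ADFSRelyingPartyTrust cmdlet with a -MetadataUrl parameter (verify the exact cmdlet and parameter names on your version):

using System;
using System.Management.Automation;

static class AdfsAutomation
{
    // Sketch: create a relying party trust from its federation metadata URL
    // by invoking the AD FS 2.0 PowerShell cmdlets from C#.
    static void AddRelyingParty(string name, string metadataUrl)
    {
        using (PowerShell ps = PowerShell.Create())
        {
            // Load the AD FS 2.0 cmdlets.
            ps.AddScript("Add-PSSnapin Microsoft.Adfs.PowerShell");
            ps.AddScript(string.Format(
                "Add-ADFSRelyingPartyTrust -Name '{0}' -MetadataUrl '{1}'",
                name, metadataUrl));
            ps.Invoke();

            foreach (ErrorRecord err in ps.Streams.Error)
                Console.Error.WriteLine(err);
        }
    }
}

Remember this still has to run elevated, for the same reason as above.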

Insert message into a process running in gwt-console-server from external application?

I'm a jBPM noob running jBPM 5.4 in AS7. I have tried posting this question on the jBPM discussion board, but no luck, so I thought I'd try here on Stack Overflow.
My goal: create the process in Guvnor, run it in gwt-console-server, have my Java application feed information to the process, and follow the current state in the jBPM Console.
So far, I have installed the jBPM console and console server as well as Guvnor and the designer on JBoss AS7. I am able to create a process in Guvnor and run and monitor that process from the jBPM Console. The missing piece is that I do not understand how to externally insert messages into the running process.
Using Eclipse and the jBPM example, I can run a process and insert messages, but my goal is to use the jBPM Console to monitor the processes.
I assume I need to access the knowledge session running in the gwt-console-server, but I'm not sure how to do that. Is it safe to access/modify a session that is persisted out to a database (i.e., both gwt-console-server and my custom app would be able to modify it) and then have the jBPM Console read from it?
I see in the BPM Console reference (https://community.jboss.org/wiki/BPMConsoleReference) that there is an integration layer, but there is nothing about how to leverage it - and the link in the doc is broken :(
Can someone point me to an example of an external application feeding messages to a jbpm process that is being monitored by jbpm-console or suggest ways to accomplish this?
Thanks very much for any insight.
-J
PS. I have the new jBPM Developer's Guide, but can't find anything in it to help me with this (so if I am missing something, I can handle a reference back to that guide).
The jBPM console has a REST API that exposes a subset of the functionality. For example, if you model this feeding of information as the start of a process, or the sending of a signal, you could use the signal REST method to send this information to the console for processing.
It's also fine to use an external ksession to update a process instance. As long as they are using the same database to store the information, everything should be fine.
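For illustration, a rough sketch of calling such a signal endpoint from an external .NET app; the URL pattern and parameter names below are assumptions based on the description above, so verify them against your gwt-console-server's REST documentation:

using System;
using System.Collections.Specialized;
using System.Net;

static class JbpmSignal
{
    // Sketch: signal a running process instance through the console
    // server's REST layer. The endpoint path is hypothetical.
    static void SendSignal(long instanceId, string signalRef, string eventData)
    {
        string url = string.Format(
            "http://localhost:8080/gwt-console-server/rs/process/instance/{0}/signal",
            instanceId); // hypothetical endpoint path

        using (WebClient client = new WebClient())
        {
            client.Credentials = new NetworkCredential("admin", "admin"); // console login
            NameValueCollection form = new NameValueCollection();
            form["signal"] = signalRef;    // hypothetical parameter name
            form["eventData"] = eventData; // hypothetical parameter name
            client.UploadValues(url, "POST", form);
        }
    }
}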
It turns out that the console is just using the logs, so as long as you log to the same DB the console is using (with JPAWorkingMemoryDbLogger), everything pretty much automagically works. You can use either JBPMHelper.newStatefulKnowledgeSession(kbase) or JBPMHelper.loadStatefulKnowledgeSession(kbase, sessionId), depending on whether you want to use the knowledge session started from the console. Also, if you borrow the console's session, don't dispose it, of course.
I read somewhere that you can give the session a business id (and will soon be able to do the same from your own code, so that they automatically use the same session), but currently when I want to borrow the console's session I use a kludge that just assumes the highest session id is the one I want (it will be, as long as the console is already running).

SSRS 2008 R2 Change Shared Data Source for Production vs Test

I am trying to figure out the best way to mitigate this situation. My project team consists of 3 developers, each with their own instance of SSRS installed. We have 2 external SSRS servers that we must push updates to in order for the customer to review and for us to test, and there is a 3rd external server coming online that will not be administered by us.
I have been trying to find a way to set the Shared Data Source to the current environment regardless of the system it is on. I had thought that just a common naming convention for the ReportServer address would be fine, but we've already found them to be inconsistent on the production and test servers. My next attempt was to specify an ODBC connection and let each person create a system DSN with connection information, but after an entire day of messing with it and continually getting errors, I'm not convinced it's the way to go. (The most recent error being "The specified DSN contains an architecture mismatch between the Driver and Application", which usually indicates a 32-bit/64-bit mismatch between the ODBC driver and the application.) I have tried going through the Windows ODBC Data Source Administrator to create the DSN and I have tried using Report Builder 3.0 to create one, and neither seems to work.
So I guess at this point I just have to ask: is there a best practice for going about this? I'd like to do local development and testing via the "Run" button inside Report Builder, then upload the file to Report Manager and have it work regardless of the URL of the report server.
If the properties (connection string, etc.) for shared data sources don't change much on your servers, the following may work for you: in the properties for your project, set OverwriteDataSources = False for the appropriate configurations. Set it to True only temporarily to change the data source, if needed.
That way any dev can safely deploy to the servers without affecting the data source, even if they locally changed something (e.g. the connection string) to match a personal environment.
Not an optimal solution, but relatively easy to set up.
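For reference, that switch is stored per configuration in the Report Server project file (.rptproj); a hand-edited sketch (the output path and configuration name are illustrative):

<Configuration>
  <Name>Release</Name>
  <Options>
    <OutputPath>bin\Release</OutputPath>
    <!-- Leave the shared data sources already on the server untouched -->
    <OverwriteDataSources>false</OverwriteDataSources>
  </Options>
</Configuration>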

How To Deploy Web Application

We have an internal web system that handles the majority of our company's business. Hundreds of users use it throughout the day; it's very high priority and must always be running. We're looking at moving to ASP.NET MVC 2; at the moment we use Web Forms. The beauty of Web Forms is that we can instantaneously release a single web page, as opposed to deploying the entire application.
I'm interested to know how others are deploying their applications whilst still keeping them accessible to users. Using the deployment tool in Visual Studio would supposedly cause a halt. I'm looking for a method that's super quick.
If you had high-priority bug fixes, for example, would it be wise to mix Web Forms with MVC and replace the view with a code-behind web form until you make the next proper release, which isn't a web form?
I've also seen other solutions of having the same web application run side by side on the same server and either changing the root directory in IIS or changing the web.config to point to a different folder, but the problem with this is that you have to do an entire build and deploy even for a simple bug fix.
EDIT: To elaborate, how do you deploy the application without causing any disruption to users?
How is everyone else doing it?
I guess you can also run the MVC application uncompiled, and just replace .cs files/views and such on the fly.
A Web Setup uninstall/install is very quick, but it kills the application pool, which might cause problems, depending on how your site is built.
The smoothest way is to run it on two servers and store the sessions in SQL Server or shared state. Then you can bring S1 down and patch it, bring S1 back up again, bring S2 down, patch S2, and then bring it up again. Although this might not work if you make any major changes to the session-related parts of the code.
Have multiple instances of your website running on multiple servers. The best way to do it is to have a production environment, a test environment, and a development environment. You can create test cases and run the load every time you have a new build; if it can get through all the tests, move the version into production ;)
You could have two physical servers each running IIS and hosting a copy of the site. OR you could run two copies of the site under different IIS endpoints on the SAME server.
Either way you cut it you are going to need at least two copies of the site in production.
I call this an A<->B switch method.
Firstly, have each production site on a different IP address. In your company's DNS, add an entry set to one of the IPs and give it a really short TTL. Then you can update site B and also pre-test/warm up the site by hitting its IP address. When it's ready to go, switch your DNS to the new site B. Once your TTL has expired you can take down site A and update it.
Using a shared session state will help to minimise the transition of users between sites.
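If you go the shared-session route, SQL Server-backed session state is a web.config change; a minimal sketch (the server name and timeout are placeholders, and the session database must be provisioned first, e.g. with aspnet_regsql.exe -ssadd):

<!-- In <system.web>: store session state in SQL Server so both copies
     of the site (A and B) see the same sessions during the switch. -->
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=DBSERVER;Integrated Security=SSPI"
              cookieless="false"
              timeout="20" />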

Crystal Reports 9 Database Connection Issue

Crystal Reports 9 seems to save the database connection information inside the report file itself. I am having an issue changing that connection. I work with a team of developers who each have their own copy of a database on the same server. We are using trusted connections to the db. When we need to make changes to a Crystal report and we click the lightning bolt to execute the report, Crystal does not ask for login information to the database; it actually ends up connecting to the last database that was used when the report was last saved.
We came up with 2 workarounds:
Take the database that Crystal thinks it should connect to offline; then Crystal will ask for login info.
Remove permissions for the username that is making the Crystal change.
Neither of these is acceptable for us. Does anyone know how to remove the Crystal connection from the report file?
We have tried "Log Off Datasource Location" and all of the settings in the Database Expert.
UPDATE
I still have not found a solution that fits my case, but our newest workaround is to load up a Crystal report and, just before clicking the lightning bolt (to run the report against the database), unplug the Ethernet cable. Then, when Crystal cannot find the database, plug the Ethernet cable back in and it will allow you to choose a different database server and name.
You could use a .dsn datasource file in a user-specific location (i.e. the same path for every user, but a different physical location) and point Crystal Reports at that. For example, on everyone's C drive: C:\DSNs\db.dsn, or on a network drive that is mapped to a different location for each user.
You can get more info on .dsn files on MSDN:
http://msdn.microsoft.com/en-us/library/ms710900(VS.85).aspx
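For illustration, a file DSN is just an INI-style text file; a sketch for a trusted-connection SQL Server DSN (the server and database names are placeholders):

[ODBC]
DRIVER=SQL Server
SERVER=MYSERVER
DATABASE=MyDevDatabase
Trusted_Connection=Yes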
We use this approach (with SQL authentication, however):
open the report
Database > Log On Server
Database > Set Datasource Location
refresh/preview
You may disable your [domain user] access to the dev database; that should help too :)
I am probably answering too late to have any chance at the bounty, but I'll offer an answer anyway.
If you are running the Crystal Report directly or with Crystal Enterprise, then the only way I can think of to do this is by using a DSN, as paulmorriss mentions. The drawback to this is that you'd be using ODBC, which I believe is generally slower and thought of as outdated.
If you are using this in an application, then you can simply change the database connection settings in code. Then everyone can develop the report against their own test database, and you can point it at the production database at runtime (assuming the developer's database is up to date and contains the same fields as the production database).
To do this you should be able to use a function like the following:
private void SetDBLogonForReport(CrystalDecisions.Shared.ConnectionInfo connectionInfo,
                                 CrystalDecisions.CrystalReports.Engine.ReportDocument reportDocument)
{
    // Apply the new connection info to every table in the report so that
    // the login saved in the .rpt file is overridden at runtime.
    CrystalDecisions.CrystalReports.Engine.Tables tables = reportDocument.Database.Tables;
    foreach (CrystalDecisions.CrystalReports.Engine.Table table in tables)
    {
        CrystalDecisions.Shared.TableLogOnInfo tableLogonInfo = table.LogOnInfo;
        tableLogonInfo.ConnectionInfo = connectionInfo;
        table.ApplyLogOnInfo(tableLogonInfo);
    }
}
For this to work you need to pass in a ConnectionInfo object (which will contain all of your login information) and the report document to apply it to. Hope this helps.
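For example, a usage sketch (the server, database, credentials, and path are placeholders):

// Build the connection info and apply it to a loaded report.
CrystalDecisions.Shared.ConnectionInfo connectionInfo =
    new CrystalDecisions.Shared.ConnectionInfo();
connectionInfo.ServerName = "PRODSERVER";     // placeholder
connectionInfo.DatabaseName = "ProdDatabase"; // placeholder
connectionInfo.UserID = "reportUser";         // placeholder
connectionInfo.Password = "secret";           // placeholder

CrystalDecisions.CrystalReports.Engine.ReportDocument report =
    new CrystalDecisions.CrystalReports.Engine.ReportDocument();
report.Load(@"C:\Reports\MyReport.rpt");      // placeholder path
SetDBLogonForReport(connectionInfo, report);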
EDIT - Another option, which I can't believe I haven't thought of until now: if you are using SQL Server, you can make sure that all of the development database names are the same, then use "." or "(local)" for the server and integrated security, so that everyone effectively has the same connection info locally. I think this is probably the best way to go, assuming that you can get all of the developers to use the same setup.
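With the SetDBLogonForReport sketch above, that variant would look something like this (the database name is a placeholder):

// Identical connection info on every developer's machine: local server,
// a common database name, and Windows authentication.
CrystalDecisions.Shared.ConnectionInfo connectionInfo =
    new CrystalDecisions.Shared.ConnectionInfo();
connectionInfo.ServerName = "(local)";
connectionInfo.DatabaseName = "AppDatabase"; // common dev database name
connectionInfo.IntegratedSecurity = true;    // trusted connection, no stored login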
EDIT Again :)
After reading some of the comments on the other answers, I think I may have misunderstood the question. There is no reason I can think of why you wouldn't be able to do the steps in Arvo's answer, other than not having rights to edit the report; but I'm assuming you've been able to make other changes, so I doubt that's it. I assumed that to get the report to work for each developer you had been doing these steps all along.
Yeah, I agree that Crystal Reports is a pain. I have run into the same problem in the applications I've built that were forced to use it.
1 - Log off the server (inside Crystal, right-click the database and log off).
2 - Click on the database and change the database location.
If you are logged on and change the database location, it doesn't seem to stick.
You can set the logon at runtime. See this question...
How do I change a Crystal Report's ODBC database connection at runtime?
If you used ODBC, each dev could point their DSN at the appropriate database, essentially pushing the connection string into the DSN and out of the Crystal report.