run one crystal report multiple times with different parameters - crystal-reports

I am using the BusinessObjects Enterprise server and I have a report that uses "department" as a parameter field to control the selection of records. There are 20 different departments.
I want to schedule this report to run 20 times, with a different single department selected each time. Is there a way to do this without scheduling the report 20 times?
thanks for any help

Yes, you can. A bit of a process:
Create a Group for each department
Add users to groups as desired; ensure that they have an email address
Create a Profile; add a Profile Value for each Group (one Profile Value for each Group/Department ID combination); the Profile Values will be strings (important)
Create a Publication; add your report to the Source Document; add the Groups that you created earlier to the Enterprise-Recipient list
Now define the Personalization (the key part of this): either add a Filter, setting TABLE.FIELD or FORMULA to your Profile (the Report Field and Enterprise Recipient Mapping columns), OR set the Department ID parameter to the appropriate Enterprise Recipient Mapping value (your parameter needs to be a string for this to work; note the comment earlier).
Set the Destination to Email
Set other properties (e.g. Format) as desired
Save & Close
You can also schedule this Publication to occur on a recurring basis.
Notes:
This solution uses the Publication Job Server (runs the Publication), the Crystal Reports Job Server (runs the report), the Adaptive Processing Server (does the bursting), and the Destination Job Server (sends the email messages). You may want to create a separate set of these services and package them into their own server group, then force the Publications to use only this server group.
Related to the earlier point, you may want to create a server group just for scheduled reports and force recurring instances to use this server group. Why? Publications don't seem to do a good job of waiting for reports in a queue--if a Crystal Reports Job server isn't available, the Publication will fail. Forcing scheduled-report instances to generate on their own server group helps to eliminate this issue.
If you make significant changes to the report (e.g. add a parameter), you may need to remove and then re-add the report to the Source Documents list to ensure that it has the most recent definition; other changes to the report (e.g. adding a column) don't seem to require this attention. Your mileage may vary.

You can design the report with the department as a group.
Insert a new page after each group, and be sure to print the records from the department group section, not the Details section.
This assumes you are retrieving all the departments in your database fields.

Efficient Way To Design Database For My Specific Use Case

I am building a website where users can view emails that are fetched from my gmail account.
Users can read emails, change their labels & archive them. Each email has metadata associated with it, and users can search through the emails based on the metadata. Furthermore, each user is associated with an organization. Changes made to an email (e.g., if the email is archived, or if the tags are changed) by any one user gets reflected across the organization.
Right now, I store all emails in a single table along with their metadata. However, the problem is that I now have over 20,000 emails in the database, and searching through them based on the metadata takes too much time.
One way to optimize this would be for a search to only look through emails that are in the inbox and are not archived or deleted. The issue is that one organization might have archived an email while another has not, so I cannot create separate tables for Inbox and Archive. By default, emails also get auto-archived after some time (this option can be disabled), so the Inbox generally has around 4,000 emails, whereas the archive has many times that.
My question is: does it make sense to create separate Inbox and Archive tables for each organization and just copy all new incoming emails into them? Since organizations only join by invitation, I do not expect the total number to exceed 100. Or would this just explode and become too difficult to handle in the code later on, with so many tables?
I am using PostgreSQL for this.
If your operational workflow says "upon adding a new customer create such-and-such a table" then you have a serious database design problem. When you have more than about 50 customers things will slow down due to per-table overhead. In other words, when you start to succeed in business you will start to fail in performance. Not good.
You have a message entity. It, no doubt, contains the message's text, subject, timestamp, from, to, and other attributes that form part of the original message. Each message will have a unique (primary key) message_id. But the entity should not contain attributes like inbox and archive, because those attributes relate to the organization.
You need an org entity. Each organization has a unique org_id, a name, and other attributes of the organization.
Then you need an org_message table. Its primary key contains both org_id and message_id. It will contain Boolean attributes like archived and read, and a VARCHAR attribute naming the message's current folder. Each org's window into your message table is provided by org_message.
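A minimal DDL sketch of those three tables might look like the following; column names beyond the ones mentioned above (body, sent_at, from_addr, to_addr) are assumptions for illustration.
CREATE TABLE message (
    message_id  bigserial PRIMARY KEY,
    subject     text,
    body        text,          -- message text
    sent_at     timestamptz,   -- original message timestamp
    from_addr   text,
    to_addr     text
);
CREATE TABLE org (
    org_id  bigserial PRIMARY KEY,
    name    text NOT NULL UNIQUE
);
CREATE TABLE org_message (
    org_id      bigint      NOT NULL REFERENCES org (org_id),
    message_id  bigint      NOT NULL REFERENCES message (message_id),
    read        boolean     NOT NULL DEFAULT false,
    archived    boolean     NOT NULL DEFAULT false,
    folder      varchar(50) NOT NULL DEFAULT 'inbox',
    PRIMARY KEY (org_id, message_id)  -- one row per org/message the org has touched
);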
If you start with an organization named, for example, shipping, and you want to see all its messages, you use a query like this.
SELECT org.org_id, org.name,
       message.*,
       COALESCE(org_message.read, false) AS read,
       COALESCE(org_message.archived, false) AS archived,
       COALESCE(org_message.folder, 'inbox') AS folder
FROM org
CROSS JOIN message
LEFT JOIN org_message
       ON org_message.org_id = org.org_id
      AND org_message.message_id = message.message_id
WHERE org.name = 'shipping';
The CROSS JOIN pairs the org with every message, and the LEFT JOIN and COALESCEs supply the org's defaults for any message it hasn't touched yet: unread, not archived, and in the inbox folder. That way you don't have to create a row in org_message for each organization and each message until the org actually handles the message.
If you want to mark a message as read and archived for a particular org, you INSERT a row into org_message, using ON CONFLICT ... DO UPDATE:
INSERT INTO org_message (org_id, message_id, read, archived, folder)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT (org_id, message_id) DO UPDATE
SET read = EXCLUDED.read, archived = EXCLUDED.archived, folder = EXCLUDED.folder;
That either sets or updates the org's attributes for the message.
If you find that searching these tables is too slow, you'll need indexes. That's the subject of a different question.
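As a rough illustration only (the right indexes depend on your actual search predicates), indexes on the join keys and on whatever metadata you filter by are the usual starting point; the column names here follow the sketch above.
CREATE INDEX org_message_org_folder_idx ON org_message (org_id, folder) WHERE NOT archived;
CREATE INDEX message_sent_at_idx ON message (sent_at);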

Weird "data has been changed" issue

I'm experiencing a very weird issue with "data has been changed" errors.
I use MS Access as a frontend and PostgreSQL as the backend. The backend used to be in MS Access and there were no issues, then it was moved to SQL Server and there were no issues there either. The problem started when I moved to PostgreSQL.
I have a table called Orders and a table called Job. Each order has multiple jobs. I have two forms, one parent form for the Order and one subform for the Jobs (continuous form). I put the subform on a separate tab: the first tab contains general order information and the second tab has the Job information. Job is connected to Orders using a foreign key called OrderID; the Id of Orders is equal to OrderID in Job.
Here is my problem:
I enter some information in the first tab (customer name, dates, etc.), then move to the second tab, do nothing there, go back to the first one and change a date. I get the "The data has been changed" error.
I'm confused as to why this is happening. Now, why do I call this weird?
First, if I put the subform on the first tab, I can change all fields of Orders just fine. It's only if I put it on the second tab, add some info, change tabs, then go back and change an already existing value that I get the error.
Second, if I make the subform on the second tab unbound (so no ID - OrderID connection), I get the SAME error.
Third, the "usual" error number for "The data has been changed" is Runtime Error 440, but what I get is Runtime Error "-2147352567 (80020009)". Searching online for this error didn't help because it can mean a lot of different things, including "The value you entered isn't valid for this field", like here:
Access Run time error - '-2147352567 (80020009)': subform
or many different results for code 80020009, but none for "the data has been changed".
MS Access 2016, PostgreSQL 12.4.1
I'm guessing you are using ODBC to connect Access to PostgreSQL. If so, do you have timestamp fields in the data you are working with? I have seen the above happen because a Postgres timestamp can have higher precision than Access. This means that when you go to UPDATE, Access uses a truncated version of the timestamp, can't find the record, and you get the error. For this and other possible causes, see the "Microsoft Applications" section of the psqlODBC FAQ:
https://odbc.postgresql.org/faq.html#6.4
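If the timestamp precision mismatch does turn out to be the cause, one possible workaround (my own suggestion, not something from the FAQ) is to store those columns at whole-second precision so the value Access sends back matches what PostgreSQL has stored, for example:
-- hypothetical table and column names
ALTER TABLE orders
    ALTER COLUMN order_date TYPE timestamp(0);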

Email Triggered With Success Row Count in Target Using Informatica

I need to trigger an email with stats such as the count of rows successfully loaded into the target and the count of failed rows, using Informatica PowerCenter.
Where can I find this information for the Workflow, and how can I use that information to trigger the email to the respective people?
There is an Email task in Informatica which I am hoping I can use for this.
You can use built-in session parameters to collect session run details, e.g.:
$PMSessionName: Name of the Informatica session.
$PMSourceName#TableName: Name of the source table.
$PMTargetName#TableName: Name of the target table.
$PMSourceQualifierName#numAffectedRows: Number of records returned from the source.
$PMTargetName#numAffectedRows: Number of records inserted/updated in the target table.
$PMTargetName#numRejectedRows: Number of records that errored out in the target.
Here's more info: http://powercenternotes.blogspot.com/2014/01/an-etl-framework-for-operational.html
You are right, you can use the Email task. Or you can use the post-session email command to send mail with statistics like the output file, counts, etc.
I need to trigger an email with all the stats like Count of Rows which are successfully loaded in the target, Failed rows count with the help of Informatica Powercenter.
Since you are looking for session-level counts, you may want to look into using the post-session email and the email variables.
...trigger the email to respective people. There is an email task present in the Informatica which I am hoping I can use that.
If you need to use an Email task, you can double-click the link going to the Email task and set a link condition to control which Email task the workflow goes to.

How to import users in CRM 2011 with source GUID

We have three Organization tenants, Dev, Test and Live, all hosted on premise (CRM 2011 [5.0.9690.4376] [DB 5.0.9690.4376]).
Because of the way dialogs use GUIDs to reference records in Lookups, we aim to keep the GUIDs for static records the same across all three tenants.
While all other entities are working fine, I am failing to import USERS and maintain their GUIDs. I am using Export/Import to get the data from the master tenant (Dev) into the Test and Live tenants. It is very similar to what the 'configuration migration tool' does in CRM 2013.
The issue I am facing is that for all other entities I can see the GUID field and hence I map it during the import wizard, but no such field shows up for the SystemUser entity while running the import wizard. For example, with Account, I will export an Account, amend the CSV file and import it into the target tenant. When I do this, I map AccountId (from the target) to the Account of the source, and as a result this account's AccountId will be the same in both source and target.
At this point, I am about to give up, but that will mean all dialogs that use a User lookup will fail.
Thank you for your help,
Try the following steps. I would strongly recommend trying this on an old, out-of-use tenant before trying it on a live system. I am not sure if this is supported by MS, but it works for me. (Another thing: you will have to manually assign BUs and Roles following the import.)
Create an Advanced Find. Include all required fields for the SystemUser record. Add criteria that select the list of users you would like to move across.
Export
Save file as CSV (this will show the first few hidden columns in excel)
Rename the Primary Key field (in this case, User) and remove all the other "Do Not Modify" fields.
Import file and map this User column (with GUID) to the User from CRM
Import file and check GUIDs in both tenants.
Good luck.
My only suggestion is that you could try to write a small console application that connects to both your source and destination organisations.
Using that, you can duplicate the user records from the source to the destination, preserving the IDs in the process.
I can't say 100% it'll work, but I can't immediately think of a reason why it wouldn't. This is assuming all of the users you're copying over don't already exist in your target environments.
I prefer to resolve these issues by creating custom workflow activities. For example, you could create a custom workflow activity that returns a user record given an input domain name as a string.
This means your dialogs contain only shared configuration values, e.g. mydomain\james.wood, which are used to dynamically find the record you need. Your dialog is then linked to a specific record, but without having to encode the source GUID.

Accessing multi-level fields in a CRM 2011 Workflow

Sorry if this is sort of confusing, because I'm not sure how to word this. I am trying to create a workflow that runs off of Accounts in Microsoft CRM 2011. One part of this workflow requires me to retrieve a field contained in the Business Unit of the User in the Account's "Created By" field. However, the workflow will only allow me to access the Business Unit itself, but not any of its fields.
I'm wondering if there is a simple trick or work-around that will allow me access to this data.
Thanks!
For reference, the Account has a User, who has a Business Unit, and the Business Unit has a field I need to access. CRM, however, doesn't want to let me get more than 2 levels deep when accessing fields.
Clunky but do-able if you accept a bit of denormalisation (temporarily or otherwise). I'll assume for the sake of example you want to get at the "cost centre" field from the BU.
Add a field on the User entity to temporarily hold the value from the BU (make it the same type and length, text(100) in this case), and optionally put it on the form.
Create a child workflow for the User entity to update the user with the "cost centre" value from their BU. Make it only available to run as a child, not on demand or anything else. Activate it.
In your Account workflow, add a step to call the child workflow against the relevant user (e.g. Created By in your case).
Add a step to wait until the new cost centre field on the user record contains data.
Now do whatever you need to with the value from the user record, such as update the Account, or do some branched logic.
Whatever you do, once you have used the value, clear the field on the user record, or do this as the last step of the workflow.
Now, since Users don't change BU very often, you might actually just go ahead and keep that value on the User record permanently, and instead of a child workflow, simply run this on create of a new user, or on change of BU, and store the value permanently on the User record. Yes, it is 'denormalised' and not purest SQL design, but then you don't need a child workflow, you don't need a wait state and you don't have to clear the value at the end, or worry about what happens when two Accounts need to run their workflow at the same time. I include the more general approach above as this might apply to other records which do change their parent quite often.
Just an additional thought: you can access the "owning business unit" of the Account, but this will be the BU of the Owning User rather than the Created By; is your business process such that this would normally be the same person? (E.g. users only have Create privilege to "user owned" depth, so they can only create records they own.)
If so, then you could get at the BU directly from the Account, and then any fields on it too (in a condition or to update the Account)
An alternative which is less ideal but a similar approach: add a relationship from Account to BU (e.g. "created BU"). You can then update the Account with this by referring to the Created By user's BU, and in the next step reference this value from the Account. This is again denormalised, and less preferable since the number of Accounts is far greater than the number of users, so the level of duplicated information is much higher.
You can't get deeper with the standard steps of a workflow.
The solution is to create a custom workflow activity; you can start from this article:
http://msdn.microsoft.com/en-us/library/gg328515.aspx