Dynamic parameter in Crystal Reports with filter - crystal-reports

Crystal Reports makes it possible to use dynamic parameters: the list of choices for a parameter isn't fixed and typed into the report, but is taken from a database table. This is described, for example, here:
https://www.youtube.com/watch?v=kuHs89yyuEc
My problem is that a parameter created this way lets the user choose from ALL values in the table. I'd like to filter it according to the data in the report.
For example: my report represents an invoice, filtered to a single invoice by invoice id. A parameter allows the user to select the place of delivery for the invoice. But I don't want to choose from all places of delivery in the table; I'd like the parameter to display only the places of delivery for the customer on the invoice.
Let's say customer_id is a formula field in the report, and place of delivery is a table like
id customer_id street city ...
Is it possible to filter the dynamic parameter in the way I describe?
EDIT:
Maybe a simple example helps.
I've created a test database with two tables (I'm using Sql Server):
CREATE DATABASE TEST
USE TEST
CREATE TABLE [dbo].[DELIVERY_PLACE](
[ID_DELIVERY] [int] NULL,
[ID_CUSTOMER] [int] NULL,
[ADDRESS] [varchar](50) NULL
) ON [PRIMARY]
INSERT [dbo].[DELIVERY_PLACE] ([ID_DELIVERY], [ID_CUSTOMER], [ADDRESS]) VALUES (1, 1, N'Address A1')
INSERT [dbo].[DELIVERY_PLACE] ([ID_DELIVERY], [ID_CUSTOMER], [ADDRESS]) VALUES (2, 1, N'Address A2')
INSERT [dbo].[DELIVERY_PLACE] ([ID_DELIVERY], [ID_CUSTOMER], [ADDRESS]) VALUES (3, 2, N'Address B1')
INSERT [dbo].[DELIVERY_PLACE] ([ID_DELIVERY], [ID_CUSTOMER], [ADDRESS]) VALUES (4, 2, N'Address B2')
CREATE TABLE [dbo].[CUSTOMER](
[ID_CUSTOMER] [int] NULL,
[NAME] [varchar](20) NULL
) ON [PRIMARY]
INSERT [dbo].[CUSTOMER] ([ID_CUSTOMER], [NAME]) VALUES (1, N'Customer A')
INSERT [dbo].[CUSTOMER] ([ID_CUSTOMER], [NAME]) VALUES (2, N'Customer B')
And I have made a report using this database. You can get it here:
https://www.sendspace.com/file/907wq9
The report filters to ID_CUSTOMER=1.
The DELIVERY_PLACE table is linked in the report to the CUSTOMER table by the foreign key ID_CUSTOMER.
I have a dynamic parameter that takes ADDRESS from DELIVERY_PLACE,
but it shows all addresses, while I want it to show only the addresses for the current customer.

I should've posted this as a comment, but I don't have enough reputation points.
As I understand it, you need to join the tables on ID_CUSTOMER. This will automatically link the customer to its addresses. Secondly, you don't need a parameter to select the right customer's address; you should place the database field ADDRESS from DELIVERY_PLACE on the report instead. I don't have your database, so I can't tell whether both tables are already linked. Please right-click the Database Fields node, choose Show SQL Query, and post the result for better understanding.
You can also change your RecordSelection formula to {CUSTOMER.ID_CUSTOMER}=1 and {DELIVERY_PLACE.ID_CUSTOMER} = 1
This will show two addresses for each customer, as per the entries in the tables. If you want to select a single address from the addresses table at runtime, you need to put it in the record-selection formula, say:
{DELIVERY_PLACE.ID_DELIVERY} = {?DeliveryPlace}
Moreover, it is better to bind ID_DELIVERY to the parameter instead of the complete address.
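Putting the two parts of the answer together, the record-selection formula could look like this sketch ({?DeliveryPlace} is assumed here to be a number parameter carrying ID_DELIVERY):

```
// restrict the invoice to one customer, and the delivery place
// to one of that customer's addresses chosen at runtime
{CUSTOMER.ID_CUSTOMER} = 1 and
{DELIVERY_PLACE.ID_CUSTOMER} = {CUSTOMER.ID_CUSTOMER} and
{DELIVERY_PLACE.ID_DELIVERY} = {?DeliveryPlace}
```

Note that this filters the report's records; the parameter's pick list itself is still built from the whole DELIVERY_PLACE table unless its data source is a filtered view or command.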

Related

Postgresql: find all the allowed values for a domain

Suppose I created the following domain:
CREATE DOMAIN MY_STATUS AS VARCHAR NOT NULL DEFAULT 'STATUS0' CHECK(VALUE in ('STATUS1', 'STATUS2', 'STATUS3'));
As expected, in a column whose type is MY_STATUS, I can put only the values:
'STATUS0'
'STATUS1'
'STATUS2'
'STATUS3'
Now, let's suppose that I want to validate this column before sending an insert or update to my DB. I need to know which values are allowed so that, if I have status = STATUS4, I get an error before sending the insert to the DB and can handle it. Since the domain may change in the future, I need to select the allowed values from the DB rather than hardcode all possible values as constants.
In short: how do I write a query that selects all the possible values of the domain?
In my example, I would like to have a query that will return:
'STATUS0', 'STATUS1', 'STATUS2', 'STATUS3'
I would recommend that you use a foreign key reference rather than a type or check constraint. Although you can devise a complex query to parse the constraint, this is so much easier with a foreign key:
create table valid_domain_statuses (
status varchar(32) primary key
);
insert into valid_domain_statuses (status)
select 'STATUS0' union all
select 'STATUS1' union all
select 'STATUS2' union all
select 'STATUS3' ;
. . .
alter table t add constraint fk_my_status
foreign key (status) references valid_domain_statuses(status);
This has other advantages:
You can readily see the valid values by looking at the table.
You can add additional information about the statuses.
It is easy to add new status values.
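If you do need to stick with the domain, a starting point (a sketch only; you still have to parse the text it returns yourself) is to read the domain's CHECK constraint definition out of the PostgreSQL catalogs:

```sql
-- contypid links a constraint to the domain it belongs to;
-- the definition column contains the CHECK expression to parse
SELECT conname,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE contypid = 'my_status'::regtype;
```

Extracting the individual values from the returned expression is exactly the "complex query" problem mentioned above, which is why the lookup table is the easier route.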

Inserting multiple records and updating identity column in SQL

I have to insert multiple records into a table and, at the same time, insert the identity value generated by the first table into another table. Can I do it without a loop?
Edited
I have two tables, StudentMaster and StudentSujects.
First Table structure is (StudentID int Identity(1,1),StudentName varchar(100))
Second table structure is (SubjectID int Identity(1,1),StudentID int,SubjectName varchar(100)).
StudentID in the 'StudentSujects' table is the Identity column of first table 'StudentMaster'.
INSERT INTO StudentMaster
(
StudentName
)
SELECT StudentName
FROM OPENXML(@hDoc,'/XML/Students')
WITH( StudentName varchar(100) 'StudentName')
I am inserting multiple records into the first table using the above query. At the same time, I have to insert the identity value of each new row into the second table.
You can use the OUTPUT clause to output multiple columns/rows on an INSERT operation into a table variable.
Assuming your table that you're inserting into has an IDENTITY column called ID, you could have code something like this:
DECLARE @InsertedData TABLE (NewID INT, SomeOtherColumn.....)
INSERT INTO dbo.YourTable(Col1, Col2, ..., ColN)
OUTPUT Inserted.ID, Inserted.OtherColumn INTO @InsertedData(NewID, SomeOtherColumn)
VALUES(Val11, Val12, ..., Val1N),
(Val21, Val22, ..., Val2N),
....
(ValM1, ValM2, ..., ValMN)
Of course, you need to have something that allows you to identify which row in your second table to insert which value into - that's entirely dependent on your situation (and you didn't offer any explanation of that in your question).
But basically, using the OUTPUT clause, you can capture as much information as you need, including the newly assigned IDENTITY values, so that you can then do your second insert based on that information.
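For the two tables from the question, a hedged sketch could look like this (it assumes student names are unique within the batch so they can serve as the join key, and uses a hypothetical @Subjects table variable as the source of the student/subject pairs):

```sql
DECLARE @NewStudents TABLE (StudentID INT, StudentName VARCHAR(100));

-- Insert the students and capture the generated identity values
INSERT INTO dbo.StudentMaster (StudentName)
OUTPUT Inserted.StudentID, Inserted.StudentName
INTO @NewStudents (StudentID, StudentName)
SELECT StudentName
FROM OPENXML(@hDoc, '/XML/Students')
WITH (StudentName VARCHAR(100) 'StudentName');

-- Second insert: match the new identities back by StudentName
INSERT INTO dbo.StudentSujects (StudentID, SubjectName)
SELECT n.StudentID, s.SubjectName
FROM @Subjects s
JOIN @NewStudents n ON n.StudentName = s.StudentName;
```

If names are not unique, you would need some other natural key in the source data to match the captured identities back to their subjects.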

Access form to use SQL Server query/stored procedure to populate a field

I have a Microsoft SQL Server database with a table called tblCABLE with the following two relevant columns:
ID - identity
CableID - nchar(8) not null
I have written an After Insert trigger on that table, and it runs fine when all the columns of tblCABLE are entered.
I am trying to create a front-end form in Access for data entry into tblCABLE so that the trigger can run on new rows.
My problem is that I want the CableID column to be populated automatically on opening a new-record form in Access, but I don't know how to do this.
I have written some SQL code which will generate the new CableID as follows (the problem is how to make this run on a new-record form in Access):
declare @newcableID nchar(8)
declare @cableIDnum int
declare @maxcableID nchar(8)
set @maxcableID = (select max(cableid) from tblCable)
set @cableIDnum = (convert(int, substring(@maxcableID,3,8)))
set @cableIDnum = @cableIDnum + 1
set @newcableID = (select 'CA' + right('000000' + cast((@cableIDnum) as varchar),6))
Any help would be greatly appreciated.
Please don't do this! Using a SELECT MAX()+1 approach is inherently bad and it will break under load and produce duplicate values.
Let the SQL Server database handle this by using a column of type INT IDENTITY - something like:
CREATE TABLE dbo.tblCable
(ID INT IDENTITY(1, 1) NOT NULL
CONSTRAINT PK_tblCable PRIMARY KEY CLUSTERED,
...(other columns here).....
)
That way, SQL Server guarantees properly handled ID values - once you've inserted a new row, the ID will be a unique, valid number and you don't have to fiddle and mess around with creating that unique ID yourself.
If you need a column that concatenates the numeric ID value with a fixed prefix, use a computed column:
ALTER TABLE dbo.tblCable
ADD CableID AS 'CA' + RIGHT('000000' + CAST(ID AS VARCHAR(6)), 6) PERSISTED
and you're done! Whenever you insert a new row, SQL Server will give you a unique ID (e.g. 42), and your CableID column will automatically contain CA000042
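To see it working, a quick sketch (SomeColumn is a stand-in for whatever real data columns tblCable has):

```sql
-- Insert a row; ID and CableID are both generated by SQL Server
INSERT INTO dbo.tblCable (SomeColumn) VALUES ('example');

-- Read back the generated identity and the derived CableID
SELECT ID, CableID
FROM dbo.tblCable
WHERE ID = SCOPE_IDENTITY();
```

The Access form then only needs to display CableID after the insert; nothing has to be computed on the client side.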

SQL Server - How to find records in INSERTED when the database generates a primary key

I've never had to post a question on StackOverflow before because I can always find an answer here by just searching. Only this time, I think I've got a real stumper....
I'm writing code that automates the process of moving data from one SQL Server database to another. I have some pretty standard SQL Server databases with foreign key relationships between some of their tables. Straightforward stuff. One of my requirements is that each entire table needs to be copied in one fell swoop, without looping through rows or using a cursor. Another requirement is that I have to do this in SQL, with no SSIS or other external helpers.
For example:
INSERT INTO TargetDatabase.dbo.MasterTable
SELECT * FROM SourceDatabase.dbo.MasterTable
That's easy enough. Then, once the data from the MasterTable has been moved, I move the data of the child table.
INSERT INTO TargetDatabase.dbo.ChildTable
SELECT * FROM SourceDatabase.dbo.ChildTable
Of course, in reality I use more explicit SQL... like I specifically name all the fields and things like that, but this is just a simplified version. Anyway, so far everything's going alright, except ...
The problem is that the primary key of the master table is defined as an identity field. So, when I insert into the MasterTable, the primary key for the new table gets calculated by the database. So to deal with that, I tried using the OUTPUT INTO statement to get the updated values into a Temp table:
INSERT INTO TargetDatabase.dbo.MasterTable
OUTPUT INSERTED.* INTO #MyTempTable
SELECT * FROM SourceDatabase.dbo.MasterTable
So here's where it all falls apart. Since the database changed the primary key, how on earth do I figure out which record in the temp table matches up with the original record in the source table?
Do you see the problem? I know what the new ID is, I just don't know how to match it with the original record reliably. SQL Server lets me output the INSERTED values, but doesn't let me output the FROM TABLE values alongside the INSERTED values. I've tried it with triggers, I've tried it with an SP; I always have the same problem.
If I were just updating one record at a time, I could easily match up my INSERTED values with the original record I was trying to insert to see the old and new primary key values, but I have this requirement to do it in a batch.
Any Ideas?
PS: I'm not allowed to change the table structure of the target or source table.
You can use MERGE.
declare @Source table (SourceID int identity(1,2), SourceName varchar(50))
declare @Target table (TargetID int identity(2,2), TargetName varchar(50))
insert into @Source values ('Row 1'), ('Row 2')
merge @Target as T
using @Source as S
on 0=1
when not matched then
insert (TargetName) values (SourceName)
output inserted.TargetID, S.SourceID;
Result:
TargetID SourceID
----------- -----------
2 1
4 3
Covered in this blog post by Adam Machanic: Dr. OUTPUT or: How I Learned to Stop Worrying and Love the MERGE
To illustrate what I mentioned in the comment:
SET IDENTITY_INSERT TargetDatabase.dbo.MasterTable ON
INSERT INTO TargetDatabase.dbo.MasterTable (IdentityColumn, OtherColumn1, OtherColumn2, ...)
SELECT IdentityColumn, OtherColumn1, OtherColumn2, ...
FROM SourceDatabase.dbo.MasterTable
SET IDENTITY_INSERT TargetDatabase.dbo.MasterTable OFF
Okay, since that didn't work for you (pre-existing values in target tables), how about adding a fixed increment (offset) to the id values in both tables (use the current max id value). Assuming the identity column is "id" in both tables:
DECLARE @incr int
BEGIN TRAN
SELECT @incr = max(id)
FROM TargetDatabase.dbo.MasterTable AS m WITH (TABLOCKX, HOLDLOCK)
SET IDENTITY_INSERT TargetDatabase.dbo.MasterTable ON
INSERT INTO TargetDatabase.dbo.MasterTable (id{, othercolumns...})
SELECT id+@incr{, othercolumns...}
FROM SourceDatabase.dbo.MasterTable
SET IDENTITY_INSERT TargetDatabase.dbo.MasterTable OFF
INSERT INTO TargetDatabase.dbo.ChildTable (id{, othercolumns...})
SELECT id+@incr{, othercolumns...}
FROM SourceDatabase.dbo.ChildTable
COMMIT TRAN

How to use BULK INSERT when rows depend on foreign keys values?

My question is related to this one I asked on ServerFault.
Based on this, I've considered using BULK INSERT. I now understand that I have to prepare a file for each entity I want to save into the database. Still, I wonder whether BULK INSERT will avoid the memory issue on my system, as described in the referenced question on ServerFault.
As for the Streets table, it's quite simple! I have only two cities and five sectors to care about as the foreign keys. But then, how about the Addresses? The Addresses table is structured like this:
AddressId int not null identity(1,1) primary key
StreetNumber int null
NumberSuffix_Value int not null DEFAULT 0
StreetId int null references Streets (StreetId)
CityId int not null references Cities (CityId)
SectorId int null references Sectors (SectorId)
As I said on ServerFault, I have about 35,000 addresses to insert. Shall I memorize all the IDs? =P
And then, I now have the citizen people to insert who have an association with the addresses.
PersonId int not null identity(1,1) primary key
Surname nvarchar not null
FirstName nvarchar not null
IsActive bit
AddressId int null references Addresses (AddressId)
The only thing I can think of is to force the IDs to static values, but then I lose the flexibility I had with my former INSERT..SELECT strategy.
What are then my options?
If I force the IDs to always be the same, then I have to SET IDENTITY_INSERT ON so that I can force the values into the table; this way I always have the same IDs for each of my rows, just as suggested here.
How do I BULK INSERT with foreign keys? I can't find any docs on this anywhere. =(
Thanks for your kind assistance!
EDIT
I have edited to include the BULK INSERT SQL statement that finally worked for me!
I had my Excel workbook ready with the information I needed to insert. So I simply created a few supplemental worksheets and began writing formulas to "import" the data into these new sheets. I had one for each of my entities:
Streets;
Addresses;
Citizens.
As for the two other entities, it wasn't worth bulk-inserting them, as I had only two cities and five sectors (city subdivisions) to insert. Once both the cities and sectors were inserted, I noted their respective IDs and began to prepare my record sets for bulk insert. Using the power of Excel to compute the values and to "import" the foreign keys was a charm in itself, by the way. Afterwards, I saved each of the worksheets to a separate CSV file. My records were then ready to be bulk-inserted.
USE [DatabaseName]
GO
delete from Citizens
delete from Addresses
delete from Streets
BULK INSERT Streets
FROM N'C:\SomeFolder\SomeSubfolder\Streets.csv'
WITH (
FIRSTROW = 2
, KEEPIDENTITY
, FIELDTERMINATOR = N','
, ROWTERMINATOR = N'\n'
, CODEPAGE = N'ACP'
)
GO
FIRSTROW
Indicates the row number at which to begin the insert. In my situation, my CSVs contained column headers, so the second row was the one to begin with. That said, one could start anywhere in the file, say at the 15th row.
KEEPIDENTITY
Allows one to bulk-insert the entity IDs specified in the file even though the table has an identity column. This option is the equivalent of running SET IDENTITY_INSERT my_table ON before the insert, for when you wish to insert rows with precise ids.
As for the other parameters, they speak for themselves.
Now that this is explained, the same code was repeated for each of the two remaining entities to insert, Addresses and Citizens. And because KEEPIDENTITY was specified, all of my foreign keys remained intact, even though my primary keys were set as identities in SQL Server.
Only a few tweaks, though: do exactly what marc_s said in his answer and import your data as fast as you can into a staging table with no restrictions at all. That way you make your life much easier, while still following good practice. =)
The basic idea is to bulk insert your data into a staging table that doesn't have any restrictions, any constraints etc. - just bulk load the data as fast as you can.
Once you have the data in the staging table, then you need to start to worry about constraints etc. when you insert the data from the staging table into the real tables.
Here, you could e.g.
insert only those rows into your real work tables that match all the criteria (and mark them as "successfully inserted" in your staging table)
handle all rows that are left in the staging table that aren't successfully inserted by some error / recovery process - whatever that could be: printing a report with all the "problem" rows, tossing them into an "error bin" or whatever - totally up to you.
Key point is: the actual BULK INSERT should be into a totally unconstrained table - just load the data as fast as you can - and only then in a second step start to worry about constraints and lookup data and references and stuff like that
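The two-step approach could be sketched like this (hypothetical: the staging table, the CSV path, and the Cities lookup are placeholders modelled on the Addresses table from the question):

```sql
-- Step 1: load the raw CSV as fast as possible into an unconstrained staging table
CREATE TABLE dbo.Addresses_Staging (
    AddressId    INT,
    StreetNumber INT,
    StreetId     INT,
    CityId       INT,
    SectorId     INT
);

BULK INSERT dbo.Addresses_Staging
FROM N'C:\SomeFolder\Addresses.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = N',', ROWTERMINATOR = N'\n');

-- Step 2: move only rows whose foreign keys actually resolve,
-- keeping the file-supplied identity values
SET IDENTITY_INSERT dbo.Addresses ON;

INSERT INTO dbo.Addresses (AddressId, StreetNumber, StreetId, CityId, SectorId)
SELECT s.AddressId, s.StreetNumber, s.StreetId, s.CityId, s.SectorId
FROM dbo.Addresses_Staging s
WHERE EXISTS (SELECT 1 FROM dbo.Cities c WHERE c.CityId = s.CityId);

SET IDENTITY_INSERT dbo.Addresses OFF;
```

Rows left behind in the staging table after step 2 are exactly the "problem" rows to report on or route to an error bin.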