We have a SQL Server database designed on a snowflake schema (facts and dimensions). These tables don't have PK and FK relationships; instead, we maintain that information in metadata tables.
Is it possible to design reports in SSRS on these tables?
I want to design reports by combining different columns from tables.
Any help would be appreciated.
Assuming that the Fact and Dimension tables are held in a conventional relational database, you can access them in SSRS using conventional SQL queries.
If the database structure is OLAP, you'll need to query it with MDX instead of T-SQL.
Yes, of course you can!
It doesn't matter how your data is structured in your database: if you can build a query that returns the data, SSRS will be able to read it into a dataset and use it in the report.
A SELECT statement is sufficient, though a view or a stored procedure is preferred. It's hard to imagine a SQL database that doesn't support those options.
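For example, here is a minimal sketch of an SSRS dataset query, assuming hypothetical FactSales, DimProduct, and DimDate tables whose relationships are recorded only in your metadata tables:

-- Minimal sketch for an SSRS dataset; FactSales, DimProduct, and DimDate
-- are hypothetical names. Without PK/FK constraints, the join conditions
-- simply encode the relationships your metadata tables describe.
SELECT p.ProductName,
       d.CalendarDate,
       f.SalesAmount
FROM   FactSales  AS f
JOIN   DimProduct AS p ON p.ProductKey = f.ProductKey
JOIN   DimDate    AS d ON d.DateKey    = f.DateKey;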
Is it possible to get the table structure, as db2look does, from SQL? Or is the command line the only way? I could call db2look by wrapping it in an external stored procedure written in C, but that is not what I am looking for.
Clarification added later:
I want to know, from SQL, which tables have the not-logged option.
It is possible to reconstruct the table structure from regular SQL and the public DB2 catalog; however, it is complex and requires some deeper skills.
The metadata is available in the DB2 catalog views in the SYSCAT schema. For a regular table you would start by looking at the values in SYSCAT.TABLES and SYSCAT.COLUMNS. From there you would need to branch off to other views depending on which table and column options you are after, whether time-travel tables, special partitioning rules, or many other options are involved.
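As a hedged starting point, the column-level portion of that lookup might look like this (schema and table names are placeholders):

-- Sketch: list the column definitions for one regular table from the
-- DB2 catalog. 'MYSCHEMA' and 'MYTABLE' are placeholder names.
SELECT c.colname, c.typename, c.length, c.scale, c.nulls
FROM   syscat.columns c
JOIN   syscat.tables  t
       ON t.tabschema = c.tabschema
      AND t.tabname   = c.tabname
WHERE  t.type      = 'T'            -- regular tables only
  AND  c.tabschema = 'MYSCHEMA'
  AND  c.tabname   = 'MYTABLE'
ORDER BY c.colno;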
Serge Rielau published an article on developerWorks called Backup and restore SQL schemas for DB2 Universal Database that provides a set of stored procedures that will do exactly what you're looking for.
The article is quite old (2006), so you may need to put in some time updating the procedures to handle features added to DB2 since publication, but they may work for you as-is and are a nice jumping-off point.
I have a fairly simple question concerning database design, with a view to how the reports will look when the program is complete. I use Java, with JasperReports for my reporting needs.
Correct me if I'm wrong, but JasperReports does not make elements from different tables overlap so that they read together. For example, in my case I would like the sales and receipts of an item in a single report, ordered by date (sales and receipts interleaved). I have a sales table and a receipts table in the database.
The question is: should I redesign my database so that both sales and receipts are stored in the same table, or is there a way JasperReports can merge both tables and present them interleaved in tabular form?
You can write an SQL query from which the table in JasperReports will take its data.
In this SQL, you can write a join that will get you the data from both of the tables.
So you can leave the design of your tables as it is now.
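Since the goal here is to interleave both record types by date rather than match rows one-to-one, a UNION ALL (rather than a join) may be the more natural form. A minimal sketch, with hypothetical table and column names:

-- Sketch: combine sales and receipts into one result set ordered by date.
-- Table and column names (item_id, doc_date, amount) are hypothetical.
SELECT 'SALE'    AS doc_type, s.item_id, s.doc_date, s.amount FROM sales s
UNION ALL
SELECT 'RECEIPT' AS doc_type, r.item_id, r.doc_date, r.amount FROM receipts r
ORDER BY doc_date;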
As part of a requirement, I need to migrate a schema from an existing database to a new schema in a different database. Part of it is already done, and now I need to compare the two schemas and change the new schema based on the gaps found.
I am not using a tool, and was trying to work out the details using the SYSCAT catalog views, but without much success.
Any pointers on the best way to solve this?
Regards,
Ramakant
A tool really is the best way to solve this – IBM Data Studio is free and can compare schemas between databases.
Assuming you are using DB2 for Linux/UNIX/Windows, you can do a rudimentary compare by looking at selected columns in SYSCAT.TABLES and SYSCAT.COLUMNS (for table definitions), and SYSCAT.INDEXES (for indexes). Exporting this data to files and using diff may be the easiest method. However, doing this for more complex structures (tables with range or database partitioning, foreign keys, etc) will become very complex very quickly as this information is spread across a lot of different system catalog tables.
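A hedged sketch of that export-and-diff approach (the schema name is a placeholder): run each query against both databases, export the rows to files, and diff the files.

-- One ordered row per column, and one per index, for a given schema.
SELECT c.tabname, c.colno, c.colname, c.typename, c.length, c.scale, c.nulls
FROM   syscat.columns c
WHERE  c.tabschema = 'MYSCHEMA'
ORDER BY c.tabname, c.colno;

SELECT i.tabname, i.indname, i.uniquerule, i.colnames
FROM   syscat.indexes i
WHERE  i.tabschema = 'MYSCHEMA'
ORDER BY i.tabname, i.indname;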
An alternative method would be to extract DDL using the db2look utility. However, you can't specify the order that db2look outputs objects (db2look extracts DDL based on the objects' CREATE_TIME), so you can't extract DDL for an entire schema into a file and expect to use diff to compare. You would need to extract DDL into a separate file for each table.
Use SchemaCrawler for IBM DB2, a free open-source tool designed to produce text output meant to be diffed. You can get very detailed information about your schema, including view and stored procedure definitions. All of the information you need is output to a single file and can be compared very easily using a standard diff tool.
Sualeh Fatehi, SchemaCrawler
Unfortunately, per company policy, we cannot use these tools at this point in time. So I am writing a program using JDBC to fetch the details and do the comparison.
We create several Crystal Reports based on SQL Server, usually 2005 or 2008. Broadly, there are two kinds of reports:
a) tabular reports - which show some data in a table (for example, an invoice list)
b) document layouts - which show data in a specific format, usually from one or two main tables and several secondary tables (for example, an invoice)
We sometimes use tables directly in Crystal, or create a procedure in SQL Server and then use that procedure. One invoice could refer to around 10-12 tables, most of them linked to the primary invoice table using left outer joins.
Which option is better: using tables in Crystal (letting Crystal create and run the SQL query), or creating a query and then using that query in Crystal? Which one will give better performance?
There will be no difference in performance between a query generated by the 'Database Expert' and the same SQL added to a Command. One caveat: ensure that the record-selection formula can be parsed and sent to the database (a filter applied WhileReadingRecords will definitely be less efficient than a pure-SQL one).
Reasons to prefer the 'Database Expert':
prior to v 2008, Command objects didn't support a multivalued parameter
easier to manage (somewhat subjective)
Reasons to prefer a Command:
you can add hints
you have more finely-grained control over the SQL (e.g. in-line views, CTEs, more-complex JOINs, subselects); see the sketch below
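For instance, a hedged sketch of a Command that the 'Database Expert' could not build for you, using a CTE and an in-line view (all table and column names are hypothetical):

-- A CTE plus an in-line view in a single Command; invoice and
-- invoice_detail are hypothetical tables.
WITH recent_invoices AS (
    SELECT invoice_no, invoice_date
    FROM   invoice
    WHERE  invoice_date >= '2008-01-01'
)
SELECT r.invoice_no, r.invoice_date, t.total_amount
FROM   recent_invoices r
JOIN  (SELECT invoice_no, SUM(qty * unit_price) AS total_amount
       FROM   invoice_detail
       GROUP  BY invoice_no) t
  ON   t.invoice_no = r.invoice_no;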
Personally, I try to avoid stored procedures, as they offer minimal performance benefits but require a more significant investment in development and maintenance.
In the end, there is no substitute for measurement. Try your query both ways and compare the results.
Coding it yourself will almost invariably run faster -- after all, you know what your data looks like, and Crystal doesn't. Also, there are things you can do in manual queries (windowing functions, for example) that Crystal can't.
Crystal has a tendency to do some crazy stuff behind the scenes. You can use "Show SQL Query" under the Database menu to see what it creates. I find it easier to write the query in SQL, as I can optimize it myself much more easily. I also prefer to do any calculated/formula fields in SQL and just use Crystal as a display interface. If you do put logic in Crystal, remember that it runs that logic for every record returned; so if there are conditions that exclude a record from a formula, put them first to limit the time spent in the calculation.
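To illustrate pushing that logic into SQL (table and column names are hypothetical): both the calculated field and the filter run on the server, so Crystal only formats the rows it receives.

-- Sketch: calculation and filtering done server-side instead of in a
-- Crystal formula; invoice and invoice_detail names are hypothetical.
SELECT i.invoice_no,
       d.qty,
       d.unit_price,
       d.qty * d.unit_price AS line_total   -- formula field moved into SQL
FROM   invoice i
LEFT OUTER JOIN invoice_detail d
  ON   d.invoice_no = i.invoice_no
WHERE  i.invoice_date >= '2008-01-01';      -- exclusion applied before Crystal sees the row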
I really want to know about your experience working with ADO.NET DataSets (calling stored procedures from SQL Server) and Crystal Reports. I know about the 2-4 seconds it takes to
// Load the .rpt file from disk into a new report engine instance.
CrystalDecisions.CrystalReports.Engine.ReportDocument document = new CrystalDecisions.CrystalReports.Engine.ReportDocument();
document.Load(file);
but what about the load of each TableAdapter? Is there another way to work with Crystal Reports? Maybe with LINQ?
Thanks in advance
I have used DataSets with Crystal. In general, I do not like to let Crystal Reports fetch its own data, as we have had errors with it opening too many connections to the database. I usually create a DataSet and serialize it to XML with the schema, use the XML file as the ADO.NET "database" for design purposes, and then at runtime assign the DataSet to the report:
' Load the report and point its first table at the in-memory DataSet.
Dim rd As New ReportDocument
rd.Load("SomeReport.rpt")
rd.Database.Tables(0).SetDataSource(dataset)