I'm experiencing a problem when trying to link two tables in the Database Expert. The two fields that link the tables contain exactly the same information, except one table always has an additional space. For example:
Table 1 = Multivitamin/Tablets
Table 2 = Multivitamin//Tablets
Each '/' represents a space.
Formulas won't help (e.g. ExtractString etc.) as it's the tables themselves I need to link together.
This is preventing me from retrieving the information I need. Any advice on how I can get around this?
There are several ways to work around this:
Consider using a command as the data source instead of tables. When writing the command's query you can define the join condition yourself (see the sketch after this list).
If you have access to the data source, you could add a calculated field to the tables to contain the normalized field values and then use these for linking in CR.
Alternatively, you could create views in the database, either adding normalized "linking fields" or providing the joined tables' results.
If it's only a few rows in CR, you could consider using SQL fields or subreports to retrieve data from Table 2.
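As a rough sketch of the command approach (the table and column names here are made up, and the exact string functions available depend on your database), the join condition can normalize the extra space itself:

-- Join inside the command so the additional space no longer matters.
SELECT t1.*, t2.*
FROM   Table1 t1
JOIN   Table2 t2
       ON t1.LinkField = REPLACE(t2.LinkField, '  ', ' ')

The same idea carries over to the view option: expose REPLACE(LinkField, '  ', ' ') as a normalized "linking field" in a view over Table 2 and link that view to Table 1 in the Database Expert.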
Related
My Crystal Report pulls data about books, including an identifier (ISBN, ISSN, order number, etc.), author, and publisher.
The ID field stores multiple ways to identify the book. The report displays any of the identifiers for that record. If one book has two identifiers, say an ISSN and an order number, the report currently displays one of them apparently at random.
How can I make it prioritise which type to use based on a preset order? I figured some sort of filter on the field could work, but I haven't figured out how. I can't edit the table, but I can use SQL within the report.
If all the different types of ID are stored in a single field, your best bet is to use a SQL Command inside your report to separate them into multiple virtual fields.
Go to Database Fields / Database Expert, expand the connection you want to use, and pick Add Command. From here you can write a custom SQL statement to grab the information you're currently using and at the same time separate the ID field into multiple different fields (as far as the report is concerned, anyway; the table itself stays unchanged).
The trick is to figure out how to write your command to do the separation. We don't know what your data looks like, so you're on your own from here.
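Purely as an illustration, assuming the combined field is called identifier in a hypothetical books table and each value is tagged with its type, a command along these lines splits it into separate virtual fields:

-- Made-up names and patterns; adjust them to match how your IDs are actually stored.
SELECT
    b.title,
    b.author,
    b.publisher,
    CASE WHEN b.identifier LIKE 'ISBN%'  THEN b.identifier END AS isbn,
    CASE WHEN b.identifier LIKE 'ISSN%'  THEN b.identifier END AS issn,
    CASE WHEN b.identifier LIKE 'ORDER%' THEN b.identifier END AS order_number
FROM books b

Once the IDs are in separate fields, the preset priority can be a simple formula in the report, or something like COALESCE(issn, order_number, isbn) in the same command.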
Based on the very little information you have provided, and if I were to make a guess, I suggest you use a formula field in your report and then do something like this to accomplish your goal:
IF IsNull({first_priority_field_name}) OR {first_priority_field_name} = '' THEN
{second_priority_field_name}
ELSE
{first_priority_field_name}
Use nested IF statements in case there are more than two identifier fields.
The title pretty much says it all. I'm trying to return two tables by including the two table names in the SETOF. This didn't seem to work out. Is there another way I could include both tables in the return?
This is my first time using SQL at all, so this might sound basic. I'm making an iPhone app that creates and uses a sqlite3 database (I'm using the libsqlite3.dylib library as well as importing "sqlite3.h"). I've been able to correctly create the database and a table in it, but now I need to know the best way to get stuff back from it.
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table. What I want to do (if this helps) is get all the info from the various fields in a single row, put all that into one object, and then store the object in an array, and then do the same for the next row, and the next, etc. At the end, I should have an array with the same number of elements as I have rows in my sql table. Thank you.
My SQL is rusty, but I think you can use SELECT * FROM myTable and then iterate through the results. You can also use a LIMIT/OFFSET(1) structure if you do not want to retrieve all elements at once from your table (for example due to memory concerns).
(1) Note that this can perform unexpectedly badly, depending on your use case. Look here for more info...
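For example, a sketch using the hypothetical table name from above:

-- Fetch rows in fixed-size pages rather than all at once. Without an ORDER BY
-- the page boundaries are not guaranteed to be stable, and a large OFFSET
-- still has to walk past all the skipped rows.
SELECT * FROM myTable ORDER BY rowid LIMIT 50 OFFSET 100;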
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table.
That is not how SQL works. Rows are not kept in the table in a specific order as far as SQL is concerned. The order of rows returned by a query is determined by the ORDER BY clause in the query, e.g. ORDER BY DateCreated, or ORDER BY Price.
But SQLite has a rowid virtual column that can be used for this purpose. It reflects the sequence in which the rows were inserted, except that it might change after a VACUUM. If you make it an INTEGER PRIMARY KEY it should stay constant.
ORDER BY rowid
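A minimal sketch with a made-up table: declaring an INTEGER PRIMARY KEY makes that column an alias for rowid, so the insertion-order id survives a VACUUM.

CREATE TABLE items (
    id    INTEGER PRIMARY KEY,   -- alias for rowid
    name  TEXT,
    price REAL
);

-- Every row, in the order it was inserted.
SELECT id, name, price FROM items ORDER BY id;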
I have a large table that a form inserts data into. The problem is that when the user edits the table I have to:
run the query
use lots of lines like value="<cfoutput>#getData.firstname#</cfoutput>" in the input boxes.
Is there a way to bind the form input boxes to the database via a cfc or cfm file?
Many Thanks,
R
Query objects include the columnList, which is a comma-delimited list of returned columns.
If security and readability aren't an issue, you can always loop over this. However, it basically removes your opportunity to do things like locking certain columns, reduces your ability to do any validation, and means you either just label the form boxes with the column names or you find a way to store labels for each column.
You can then do an insert/update/whatever with them.
I don't recommend this, as it would be nearly impossible to secure, but it might get you where you are going.
If you are using CF 9 you can use the ORM (Object Relational Mapping) functionality (via CFCs)
as described in this online chapter
https://www.packtpub.com/sites/default/files/0249-chapter-4-ORM-Database-Interaction.pdf
(starting on page 6 of the pdf)
Take a look at <cfgrid>; it will be the easiest option if you're editing a table, and it can fire one update per row.
For security against XSS, you should use <input value="#XmlFormat(getData.firstname)#"> and minimize the number of <cfoutput> tags. XmlFormat() is not needed if you use <cfinput>.
If you are looking for an easy way to avoid specifying all the column names in the insert query, cfinsert will try to map all the form field names you submit to the database column names.
http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-7c78.html
This is indeed a very good question. I have no doubt that the answers given so far are helpful. I was faced with the same problem, though my table does not have that many fields.
Per the EntityNew() docs, the syntax shows that you can include the data when instantiating the object:
artistObj = entityNew("Artists",{FirstName="Tom",LastName="Ron"});
instead of having to instantiate and then add the data field by field. In my case all I had to do was:
artistObj = entityNew( "Artists", FORM );
EntitySave( artistObj );
ORMFlush();
NOTE
It does appear from your question that you may be running insert or update queries. When using ORM you do not need to do that. But I may be mistaken.
This may be a very simplistic question, so apologies in advance, but I am very new to database usage.
I'd like to have Postgres run its full text search across multiple joined tables. Imagine something like a model User, with related models UserProfile and UserInfo. The search would only be for Users, but would include information from UserProfile and UserInfo.
I'm planning on using a GIN index for the search. I'm unclear, however, on whether I'm going to need a separate tsvector column in the User table to hold the aggregated tsvectors from across the tables, and to set up triggers to keep it up to date; or whether it's possible to create an index without a tsvector column that will keep itself up to date whenever any of the relevant fields in any of the relevant tables change. Any tips on the syntax of the commands to create all this would be much appreciated as well.
Your best answer is probably to have a separate tsvector column in each table (with an index on it, of course). If you aggregate the data up to a shared tsvector, that'll create a lot of updates on that shared one whenever the individual ones update.
You will need one index per table. Then when you query it, obviously you need multiple WHERE clauses, one for each field. PostgreSQL will then automatically figure out which combination of indexes to use to give you the quickest results - likely using bitmap scanning. It will make your queries a little more complex to write (since you need multiple column matching clauses), but that keeps the flexibility to only query some of the fields in the cases where you want to.
You cannot create one index that tracks multiple tables. To do that you need the separate tsvector column and triggers on each table to update it.
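As a sketch under assumed names (a users table with a bio column and a user_profiles table with an about column), the per-table setup looks roughly like this:

-- One tsvector column per table, each with its own GIN index.
ALTER TABLE users         ADD COLUMN search_tsv tsvector;
ALTER TABLE user_profiles ADD COLUMN search_tsv tsvector;

CREATE INDEX users_search_idx         ON users         USING gin (search_tsv);
CREATE INDEX user_profiles_search_idx ON user_profiles USING gin (search_tsv);

-- The built-in trigger function keeps each column current on INSERT/UPDATE.
CREATE TRIGGER users_tsv_update
    BEFORE INSERT OR UPDATE ON users
    FOR EACH ROW EXECUTE PROCEDURE
    tsvector_update_trigger(search_tsv, 'pg_catalog.english', bio);

CREATE TRIGGER user_profiles_tsv_update
    BEFORE INSERT OR UPDATE ON user_profiles
    FOR EACH ROW EXECUTE PROCEDURE
    tsvector_update_trigger(search_tsv, 'pg_catalog.english', about);

-- One match clause per table; the planner decides how to combine the indexes.
SELECT u.id
FROM   users u
JOIN   user_profiles p ON p.user_id = u.id
WHERE  u.search_tsv @@ to_tsquery('english', 'searchterm')
  AND  p.search_tsv @@ to_tsquery('english', 'searchterm');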