Filtering fields containing foreign characters (Chinese) in Access 2010? - tsql

I have an Access 2010 database that contains foreign characters, and this data needs to be filtered, searched, etc.
The data is stored in a SQL database using NVARCHAR. Visually the characters look fine, but when I go to search on a field that contains them I am unable to use the LIKE function. I can only get an exact match.
I am able to search for the foreign characters in SQL.
In the Access 2010 front-end database that this is attached to, when I filter on these columns I am unable to get either an exact match or a LIKE match.
It looks like the sorting function works properly.
How can I get the filtering, sorting, etc. to function properly in Access 2010?
Thank you for your help.
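One detail worth checking, offered as a hedged aside rather than a confirmed fix: Access and T-SQL use different LIKE wildcards by default (* versus %), and in T-SQL a string literal needs the N prefix to remain Unicode, otherwise the Chinese characters are converted to the server's code page and typically become question marks. A pass-through query, which sends its text straight to SQL Server, bypasses Access's own LIKE handling; the table, column, and search text below are only placeholders:

-- Hypothetical pass-through query run from Access against SQL Server.
-- The N prefix keeps the Chinese characters in the literal as Unicode.
SELECT CustomerName
FROM dbo.Customers
WHERE CustomerName LIKE N'%北京%';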

Related

Is it safe to change the data source format to OLE DB database file when using an htm/html file as the datasource, to remove the limitation of 62 fields?

I have a mail merge datasource which is in htm/html format and it contains 70 fields, but there is a limitation of 62 fields for such datasources (Reference).
Is it safe to change the datasource type to OLE DB database file in the Confirm Data Source dialog when selecting the datasource?
When you choose the "All web pages" type (and this is the default type in the case of an HTML file), you are in essence choosing a Word internal file converter to retrieve your data. The reason you end up with the concatenated columns is because
The internal converter is not primarily designed to "read data sources". It's there to convert a document in HTML format into a document in Word format.
Your HTML file contains a table in HTML format, so naturally, the converter tries to convert that into a Word table
However, Word tables can only have 63 columns, whereas HTML tables can have more, so the converter has to deal with that somehow. In this case, it concatenates the column data so column 63 ends up containing all the remaining data in the row.
Once the document is converted, Word uses the converted document as the data source. It's really no different from the situation where it uses a Word document as the data source.
If your HTML file actually contained (say) 1 paragraph of 70 comma-delimited values for each row of data, rather than an HTML table row with td cells, Word would end up treating the data as 70 separate columns (but it would also probably ask for the column delimiter every time you used the file, and you would have to ensure that commas in the data were correctly quoted).
In general, when you choose the "OLE DB Database Files" option, Word either knows of an OLE DB Provider type that can read the specified type of file, or it won't be able to read the file. In this case, what it tries to do is read the file using the Jet OLE DB provider (or in recent versions of Word, the ACE OLE DB provider).
The Jet/ACE providers are one of the mechanisms used to read Access .mdb/.accdb data, but these providers can read a number of formats such as Excel workbook data and plain text file data, using a number of what Jet/ACE calls "Installable ISAMs" (IISAMs).
Since there is an IISAM for HTML format data, Word will try to get the data using that IISAM.
In that case, as long as the IISAM can actually read the HTML (it may not be able to read more modern versions of HTML very well) it works much more like the case where Word gets data from Excel. For example, if your HTML file contained two tables, you may get to choose which table to read, cf. an Excel workbook with multiple worksheets and perhaps named ranges.
Jet/ACE IISAMs generally do not support more than 255 columns, so 70 should be fine. However, you may need to verify what the HTML IISAM does about:
Columns with mixed data types (for example where some rows have numbers in them and others have text). When the Excel IISAM finds such data in the first 8 rows (by default) it tries to choose a format - sometimes that can mean that cells with text are read as if they contained "0". FWIW I do not think the HTML IISAM does that, but I would check anyway.
Columns with large amounts of text, particularly if there is more than one such column. The IISAM is quite likely to truncate such columns to 255 characters or even less.
Columns with non-ANSI data (non-ANSI Unicode text, e.g. Arabic, Hindi or Chinese text).
Other than delimited text files, which will let you go over that 255 limit if they are read by the internal converter, the only data source I know of that will let Word see thousands of columns is SQL Server. Other servers with OLE DB providers, such as MySQL, might allow that too. If you have to use a very large number of columns, be aware that you may not see all the available field names in the relevant dropdowns in Word, but you should be able to insert the MERGEFIELD codes manually in the usual way.
What is your current mailmerge connection method (OLEDB, DDE)? By switching to the OLEDB connection method - which is Word's default - you would not be changing the datasource type (only the connection method). Whether doing so will work with your datasource can easily be established by changing to OLEDB and leaving the datasource type alone. If it doesn't work, close the document without saving (or revert to the current connection method).
Regardless, the screen you're showing allows you to specify a datasource type, not the connection method. HTML files are not OLE DB database files and you'd be unlikely to find your datasource if you switched to that file type.
In any event, the 62-field limitation most likely only relates to the fields you can see via the GUI. If you know the field name, you can insert its reference via the keyboard. To do so, simply press Ctrl-F9 to create a pair of field braces (i.e. { }) and fill in between them with 'MERGEFIELD' and the field name, thus { MERGEFIELD FieldName }.

MS-Access 2010 Form: field doesn't accept data source with 2 hyphens

I have a form based on a multi-table query. As some fields from different tables have the same names, I must prefix them with the corresponding table's name. However, there are hyphens in the table names as well as in the field names (both inherited from foreign Excel tables).
In VBA there is no problem: [Table-1.Field-1] always works well (also in SQL queries). However, when I enter this in Design View as the field's data source on the form, Access "thinks" it is wrong and automatically replaces it with [[Table-1].[Field-1]] - with the result that the form then displays the error #Name?. I tried replacing the brackets with quotes, but without any success.
Note that there is no error when only the table or only the field has a hyphen: both MyTable.[Field-1] and [Table-1].Myfield are accepted by the form.
The correct syntax should be:
[Table-1].[Field-1]
Or, using bang notation:
[Table-1]![Field-1]
Meanwhile I have found not a true answer, but nevertheless a quite satisfactory workaround, by adding the following calculated field to the query:
MyWorkAround: [Table-1.Field-1]
Then I can simply refer to [MyWorkAround] in the corresponding form field to avoid the form's bug. But this isn't really very elegant!
Note that I always use [ … ] around field names, even where not necessary. This practice helps avoid a lot of errors.
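For what it's worth, the same workaround can also be expressed directly in the query's SQL view as an aliased column, rather than as a calculated field in the design grid; the table and field names here simply follow the example above:

SELECT [Table-1].[Field-1] AS MyWorkAround
FROM [Table-1];

The form field then binds to MyWorkAround just as before.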

Is it possible to create T-SQL file containing mixed Arabic (right-to-left) and English?

I have a database that caters for English and Arabic, and normally these are separate and cause no issues. I have a requirement to mix them - and this is not a problem in the database using nvarchar and when the user directly inputs the values.
However, we can't give the user direct access to the DB, so we wanted to write a simple insert script (xxx.sql file) and get them to edit the values...but we are struggling!
In Word, you can change the language/text direction fairly simply to get the correct message, but when it is copied and pasted into a .sql file (plain text) the direction and/or encoding get messed up :(
Does anyone know how to achieve this?
Let the user use any text editor that can produce a correct text file. This file should not be a SQL script but a file with comma-separated values. Then you will be able to bulk import these values correctly into any database structure you need.
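As a minimal sketch of that approach (the table name and file path are placeholders): if the user saves the comma-separated file as Unicode (UTF-16), a bulk import along these lines should keep the mixed Arabic and English text intact:

-- DATAFILETYPE = 'widechar' tells SQL Server the file is Unicode (UTF-16),
-- so the right-to-left Arabic text survives the import into NVARCHAR columns.
BULK INSERT dbo.Messages
FROM 'C:\import\messages.csv'
WITH (
    DATAFILETYPE = 'widechar',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);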

Must be possible to filter table names in a single database?

As far as I can tell, the search filter in the navigator will only search available database names, not table names.
If you click on a table name and start typing, it appears that a simple search can be performed beginning with the first letter of the tables.
I'm looking for a way to search all table names in a selected database. Sometimes there can be a lot of tables to sort through. It seems like a feature that should be there, but I can't find it.
Found out the answer...
If you type, for example, *.test_table (or the schema name instead of the asterisk), it will filter them. The key is that the schema/database must be specified in the search query. The asterisk notation works with the table names as well. For example, *.*test* will filter any table in any schema with test anywhere in the table name.
You can use the command
SHOW TABLES like '%%';
To have it always available, you can add it as a snippet to the SQL Additions panel on the right.
Then you can always either bring it into your editor and type your search key between the %%, or just execute it as it is (it will fetch all the tables of the database) and then filter using the "filter rows" input of the result set.
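If you would rather query than use the navigator's search box, the same filtering can be done against information_schema; the schema name here is only a placeholder:

-- List every table in the chosen schema whose name contains 'test'.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'my_database'
  AND table_name LIKE '%test%';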

coldfusion - bind a form to the database

I have a large table (form) which inserts data into the database. The problem is that when the user edits the table I have to:
run the query
use lots of lines like value="<cfoutput>#getData.firstname#</cfoutput>" in the input boxes.
Is there a way to bind the form input boxes to the database via a cfc or cfm file?
Many Thanks,
R
Query objects include the columnList, which is a comma-delimited list of returned columns.
If security and readability aren't an issue, you can always loop over this. However, it basically removes your opportunity to do things like locking certain columns, reduces your ability to do any validation, and means you either just label the form boxes with the column names or you find a way to store labels for each column.
You can then do an insert/update/whatever with them.
I don't recommend this, as it would be nearly impossible to secure, but it might get you where you are going.
If you are using CF 9 you can use the ORM (Object-Relational Mapping) functionality (via CFCs)
as described in this online chapter
https://www.packtpub.com/sites/default/files/0249-chapter-4-ORM-Database-Interaction.pdf
(starting on page 6 of the pdf)
Take a look at <cfgrid>; it will be the easiest if you're editing a table, and it can fire 1 update per row.
For security against XSS, you should use <input value="#xmlFormat(getData.firstname)#"> and minimize the number of <cfoutput> tags. XmlFormat() is not needed if you use <cfinput>.
If you are looking for an easy way to avoid specifying all the column names in the insert query, cfinsert will try to map all the form names you submit to the database column names.
http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-7c78.html
This is indeed a very good question. I have no doubt that the answers given so far are helpful. I was faced with the same problem, only my table does not have that many fields.
Per the EntityNew() docs, the syntax shows that you can include the data when instantiating the object:
artistObj = entityNew("Artists",{FirstName="Tom",LastName="Ron"});
instead of having to instantiate and then add the data field by field. In my case, all I had to do was:
artistObj = entityNew( "Artists", FORM );
EntitySave( artistObj );
ORMFlush();
NOTE
It does appear from your question that you may be running insert or update queries. When using ORM you do not need to do that. But I may be mistaken.