I have created a table 'User' with two fields: 'username' and 'password'.
But I want to import only passwords from a CSV file. How do I do this?
When selecting a file to import, you can specify which columns in the file should be imported and which FileMaker fields they map to. There's a column between the import file's fields on the left and the FileMaker fields on the right. If that column displays a right-pointing arrow, the field will be imported; if you click the arrow, it won't be. So match the password column in the CSV to the password field in FileMaker and make sure it has an arrow, but that there's no arrow next to the username field.
If you're actually creating the database from the CSV file by opening it directly in FileMaker, you can't control which fields are imported, but after they have been, you can remove the information imported for the username. Find all records in the User table (Records>Show All Records, or Ctrl/Cmd-J). Click into the username field while in Browse mode, delete any information there, and select Records>Replace Field Contents... (Ctrl/Cmd-=). Leave the option as Replace with: "", and click "Replace". Be very careful with this feature: it will set that field for every record in the found set, and there is no undo for it.
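If you end up doing this clean-up regularly, the same steps can be scripted. A minimal sketch, assuming a layout based on the User table from the question (the layout name is an assumption):

    Go to Layout [ "User" ]
    Show All Records
    # clear the imported usernames for the whole found set
    Replace Field Contents [ With dialog: Off ; User::username ; "" ]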
I have data in Excel, like below:
And I have a Microsoft Word document, like below:
How do I do a mail merge in Microsoft Word, one to many rows?
I want the result in Microsoft Word to look like below:
Thank you very much.
This can be done, but it's a bit complex. There are a number of possible approaches, outlined at my website.
Since you show the desired result as a table, using a Database field is probably the optimal way to go about it. Insert Database is an old command that's no longer exposed in the Word UI by default. You'll find it in File/Options/Customize Ribbon or Quick Access Toolbar, under All Commands.
The command inserts a field with the name Database, via a set of dialog boxes:
Get Data is the same as what you see in mail merge when selecting the data source. This uses any valid connection method (these days, ODBC or OLE DB - the latter is the default) to bind to the data source. Select the data source containing the "many" information. (Note: the "one" side should be only the unique "one" information; the "many" side should be in a separate data source containing the unique identifier from the "one" side for each item on the "many" side.)
Query options is for setting Query Options (filter/sort what comes in). On the left side of the "equation" you need to select the field that is the identifier in the data source for the one side of one-to-many. On the right side, enter a value you know is in the data so that there's a match.
Table AutoFormat can be used to select a built-in (or user-defined) Table Style.
Insert Data - This is important: activate the checkbox Insert Data as field. This is what dynamically links the inserted table to the data source and ties it into the merge information.
OK to insert the data / field.
Press Alt+F9 to view the underlying field codes.
Locate the query information (Select...) near the end of the field code. Change the right side of the Where clause to match the mergefield that provides the "one" side of one-to-many. For example: WHERE ((ID= 1)) would become WHERE ((ID= { Mergefield ID }))
If you don't want to see some of the fields (columns), such as an ID column (the "one" side of one-to-many), edit the list of fields at the beginning of the Select statement.
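For reference, a rough sketch of what the edited field code can end up looking like with Alt+F9 toggled on; the file path, sheet name, column names, and switch values are placeholders and will differ in your document:

    { DATABASE \d "C:\\Data\\Orders.xlsx" \s "SELECT Item, Qty, Price FROM `Sheet1$` WHERE ((ID = { MERGEFIELD ID }))" \l "16" \b "191" \h }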
The result will look something like the following:
With a few solutions I've worked with, I've created temp tables or history tables. Normally I script it to take a handful of fields from a main table and copy them over to the other table by setting a variable and then setting a field to that variable for each field in the new table / new record.
I have a situation now where I'm building a history table that needs to copy the current record as-is: a snapshot where all fields from that instance of the record are copied to the history table.
Rather than setting a variable and then setting a field to the variable, I'd like to get some input on a quicker way to get this done, where I can do this at the record level and not type it out field by field. Also, if fields are added to both tables, then I have to make sure my script gets updated.
I'll keep hunting around... appreciate any help.
-Rich
Do you have a sample of copying a record from one table to another, including all fields and setting some fields?
As I suggested in comments, use the Import Records[] script step, and select the same file as the source. If you choose Arrange by: [ matching names ] in the Import Field Mapping dialog, it will automatically map all source fields to their similarly named counterparts.
Note that you must establish a found set in the source table before importing.
For "setting some fields", you can define auto-enter options and activate them during the import, or run Replace Field Contents[] immediately after the import.
I'm using Crystal Reports with my ERP system. There are predefined reports I now want to change.
In the Field Explorer there are some tables which have been renamed for better readability. But those tables are missing some fields I want to use. If I connect the whole table again, all the fields are there. Is there a way to display all fields in the predefined tables?
I tried to refresh the database, but nothing changes. If I delete the predefined table and then rename the new one to the old name so I can use all the predefined formulas, all the fields used in the report get deleted. I would then need to recreate the whole report.
Thanks for the help
If it is truly the same table and is not showing all the fields then you need to do "Database > Verify Database". That will force CR to refresh the structure of the table (instead of just the data). If this doesn't add the missing fields then the table in the report is actually a different object.
To see what table/view the report is actually using, go to "Database > Set DataSource Location" and look at the properties node for that table. It will show whether it is a table/view/SP and what the true object name is.
If you want to replace the existing table with a different table you go to "Database > Set DataSource Location" again. Highlight the existing table in the top window, connect and highlight the replacement table in the bottom window. Then click update. Crystal will replace one table with the other and all of the fields in the report that exist in the new table will be mapped automatically. Note that the new table will keep the alias of the original table. If you are unsure if the table was updated you can look at the properties node in the top window to see the change.
I am currently using the Kentico Import Toolkit to create documents in the tree.
At this point, I have imported around 100 documents using the toolkit, and they are all located at the correct place in the tree. The issue is that since I imported these documents, my spreadsheet has been updated: extra fields and data were added. How do I go about importing this extra data into the existing documents? Also bear in mind I don't want other fields or data to be affected, as some of the documents were updated with other content by the content editors using CMS Desk, which isn't available in the spreadsheet.
Import toolkit is not the right tool to achieve this task. Even if you select "Import new and overwrite existing pages" it'll overwrite most of your columns. Actually it only preserves system and id columns from the existing documents - all other columns get overwritten.
Either you can write a piece of custom code, or you can try the following:
Open SSMS and navigate to the coupled table of your page type (something like CONTENT_MyDocType). This is where your custom columns are stored.
Right click -> Edit top 200 rows
Click "Show SQL Pane"
Adjust the columns, ORDER BY, and WHERE clause to match your Excel file, then re-run the query (see the sketch after these steps)
Select the desired rows in your Excel file and copy them to the clipboard
Paste the data into SSMS
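A hedged example of what the query in the SQL pane might look like after you adjust it; the table and column names below are placeholders for your own coupled table and custom fields:

    -- columns reordered to match the spreadsheet; add a WHERE clause if you only need a subset of rows
    SELECT MyDocTypeID, Title, NewColumn1, NewColumn2
    FROM CONTENT_MyDocType
    ORDER BY Title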
rocky is right, Import Toolkit is meant for importing complete objects, not partial/continuous update.
You could map the fields that you know are not changed in the spreadsheet to a SQL query selecting the value from the target database.
To achieve this, just insert #<target> at the beginning of the SQL select statement you will be mapping the field to.
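A minimal illustration of the idea; the table name, column name, and the keying condition are placeholders that depend on your page type and on how the rows are identified:

    #<target> SELECT MyUnchangedColumn FROM CONTENT_MyDocType WHERE MyDocTypeID = <current item ID>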
It will be rather laborious though and it also requires certain knowledge about the nature of the spreadsheet changes.
I'm trying to copy some field values to a duplicate database. One record at a time. This is used for history and so I can delete some records in the original database to keep it fast.
I don't want to manually save the values in a variable because there are hundreds of fields. So I want to go to the first field, save the field name and value and then go over to the other database and save the data. Then run a 'Go to Next Field' and loop through all the fields.
This works perfectly, but here is the problem: When a field is a calculation you cannot tab into it and therefore 'Go to Next Field' doesn't work. It skips it.
I thought of doing a 'Go to Object', but then I need to name all the objects, and I can't find a script to name objects.
Can anyone out there think of a solution?
Thanks!
This is one of those problems where I always found it easier to do an export/import.
Export all the data you want from the one database, and then import it into the other database. All you need to do is:
Manually specify which fields you want to copy
Map the data from the export to the right fields in the new database/table
You can even write a script to do these things for you.
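A rough sketch of such a script, assuming the source layout is named Main and the export goes through a merge (.mer) file so that field names travel with the data (all names are placeholders):

    # in the source database
    Go to Layout [ "Main" ]
    Show All Records
    Export Records [ With dialog: Off ; "export.mer" ]
    # then, in the history database
    Import Records [ With dialog: Off ; "export.mer" ; Add ; Matching names ]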
There are several ways to achieve this.
To make a "history file", I have found there are several cases out there, so let's take a look.
CASE ONE
Single file: I just want to "keep" a very large file with historical data, because I need to erase all data in my main file.
In this case, you should create a "clone" table (in the same file or in another file; it's the same). Then change any calculation field to the type of the calculation result (number, text, date, and so on). Remove any auto-entered value or calculation from any field (like auto number, auto creation date, etc.). You will have a "plain" table with no calculations or auto-entered data.
Then add a field to control duplicate data. If you have, let's say, an invoice number (unique) for each record, you can use that to achieve this task. But if you do not have a field that identifies the record as unique, then you have to create one...
To create such a field, I recommend adding a new field on the clone table, setting it as an auto-entered calculation, and making a field combination that is unique... something like this: invoiceNumber & "-" & lineNumber & "-" & date.
On the clone table, make sure that validation is set to "Always", that empty values are not allowed, and that the value must be unique.
Once you set up the clone table, you can import your records, making sure that the auto-enter option is on. You can do it as many times as you like; new records will be added and no duplicates.
If you want, you can make a script to move all the current records to the historical table before deleting them.
NOTE:
This technique works fine when the data you are trying to keep does not change over time. This means once the record is created, it has no changes.
CASE TWO
A historical table must be created but some fields are updated.
In the beginning I thought historical data never changes. In some cases I found this is not the case, like when I want to track historical invoices but at the same time keep track of whether they are paid or not...
In this case you may use the same technique above, but instead of importing data... you must update the data based on the "unique" fields that identify the record.
Hope this technique helps
FileMaker's FieldNames() function, along with GetField(), can give you a list of field names and then their values.
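A sketch of how those functions might be used in a script to walk every field on the current layout; the variable names and the commented write step are assumptions:

    Set Variable [ $fields ; Value: FieldNames ( Get ( FileName ) ; Get ( LayoutName ) ) ]
    Set Variable [ $i ; Value: 1 ]
    Loop
      Exit Loop If [ $i > ValueCount ( $fields ) ]
      Set Variable [ $name ; Value: GetValue ( $fields ; $i ) ]
      Set Variable [ $value ; Value: GetField ( $name ) ]
      # write $name / $value into the history record here, e.g. with Set Field By Name
      Set Variable [ $i ; Value: $i + 1 ]
    End Loop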