How can I edit table properties in a tabular model in Visual Studio? - ssas-tabular

I want to add new columns to two tables in a tabular model, but I ran into three problems in the process.
When I opened Table Properties, I found that the source query contains filter-rows commands. I tried to delete the filter-rows commands directly, but when I clicked Validate it said the credentials for this operation could not be validated. How can I refresh the SQL statement?
When I open Design and click Import, an error appears: "Cannot import the partition query because the set of columns in the partition definition does not match those in the table definition. The following required columns are missing."
The partition only filters on a datetime column, so I do not understand what the error means here.
When I opened Design in Table Properties and clicked Update, I got the error: "Cannot save changes because the partitions' schema has been changed. Please correct the schema and try again." But the table does not have any partitions. How can I fix this?

Related

Azure Data Factory - Data Flow - Derived Column Issue

I am using Azure Data Flow's DerivedColumn to create some new columns.
Example: this is my source, and I can preview the data.
But from DerivedColumn1 I cannot see these columns, even in the Expression Editor.
Expression Editor:
Has something changed in ADF, or am I doing something wrong?
According to your screenshot, the column names are being read as a data row. Please set "first row as header" in the Excel dataset, or you will get the error in the Sink column mapping.
If you don't check it, the column names will be treated as the first row of data.
For your issue, you could try the workarounds below:
Import the source schema in Projection, then delete the Derived Column activity and add it again.
Drop the data flow and create a new one. Sometimes a data flow can have bugs; refreshing the browser or simply recreating the data flow will solve it.

Copy Data - How to skip Identity columns

I'm designing a Copy Data task where the Sink SQL Server table contains an Identity column. The Copy Data task always wants me to map that column when, in my opinion, it should just not include the column in the list of columns to map. Does anyone know how I can get the ADF Copy Data task to ignore Sink Identity columns?
If you are using the Copy Data tool and the ID column in your SQL Server table is set as auto-increment (identity), it should not show up at the mapping step. Please tell us if that is not the case.
If you are creating the pipeline/dataset yourself, you can go to the sink dataset's schema tab and remove the ID column. Then go to the copy activity's mapping tab and click "Import schemas" again. The ID column should have disappeared now.
You could run a SET IDENTITY_INSERT ... ON statement for the given table before executing the copy step, and set it back to OFF after the copy completes.
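A minimal T-SQL sketch of that idea, using a hypothetical dbo.Customers sink table (the table name is a placeholder, not from the question):

    -- Run before the copy step: allow explicit values to be written into the identity column.
    SET IDENTITY_INSERT dbo.Customers ON;

    -- ... the Copy Data activity then loads rows, including the identity values ...

    -- Run after the copy completes: restore the default behaviour.
    SET IDENTITY_INSERT dbo.Customers OFF;

Note that SET IDENTITY_INSERT is session-scoped, so the ON statement only takes effect for inserts performed on the same connection/session that ran it.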

How to know in Talend if tMySQLInput will overwrite data?

I have an existing Talend Open Studio tMySQLInput component with some SQL code inside it that retrieves some joined columns, linked to a tMySQLOutput component (pointing to an already existing MySQL table) with a few records.
QUESTION:
Will the tMySQLInput component overwrite the already existing table data that the tMySQLOutput component relates to? I mean, is there an option to check in tMySQLInput or tMySQLOutput in order to say "overwrite each time this job is executed"?
Thank you all.
Yes, there is an option in tMySQLOutput where you can specify what action to perform on your table. Follow these steps:
Go to the Component tab of tMySQLOutput; it opens the basic settings of the component.
If you look closer you will find "Action on table". This is the action performed on the table that tMySQLOutput points to. It has options such as Default, Drop and create table, etc.
Then you have "Action on data". These are the operations performed on the data, such as Insert, Update, etc.
In your case I suppose you can choose "Action on table" as Default and "Action on data" as Insert. The Default action does nothing to the table, and Insert appends the records to the table. With Insert, however, if you have duplicate rows the job will stop the moment it finds one.

How do you change a table's schema?

We have a MySQL Workbench project with two tabs (two schemas/two databases).
If we create a table in the first tab, it's attached to the schema magikweb_dev_igcweb.
If we create a table in the second tab, it's attached to the schema magikweb_dev_igcweb_archive.
If we copy-paste/duplicate a table from the first tab to the second tab, the resulting table remains in the first schema. How can you change a table's schema?
Each schema is linked with a specific database, so when we use the "Synchronize Model..." feature, it links all the tables properly.
Use the Model tab. You can cut a table out of one schema tab and paste it into another.
The cut-and-paste method described in another answer works well for tables with no foreign keys, and for a reasonable number of tables.
An alternative that preserves foreign keys is to export the model as a SQL script, edit it, and then import the new script into a new model.
Using MySQL Workbench v6.3:
File -> Export -> Forward Engineer SQL Script
Carefully edit the SQL script, replacing references to one schema with the other for the tables you want to move. Do this both for CREATE TABLE statements and foreign key references (see the example at the end of this answer).
File -> New Model
File -> Import -> Reverse Engineer SQL Script
Unfortunately you will then need to recreate any diagrams, but that can be straightforward if you have the original diagram as a reference (take a screenshot or export it to PNG or PDF).
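For example, a statement in the forward-engineered script might originally look like this (the orders/customers tables and their columns are made up for illustration; only the schema names come from the question):

    CREATE TABLE `magikweb_dev_igcweb`.`orders` (
      `id` INT NOT NULL,
      `customer_id` INT NOT NULL,
      PRIMARY KEY (`id`),
      CONSTRAINT `fk_orders_customers`
        FOREIGN KEY (`customer_id`)
        REFERENCES `magikweb_dev_igcweb`.`customers` (`id`)
    );

To move the table, change the schema qualifier in both places before re-importing the script (assuming the referenced customers table moves too; otherwise leave its qualifier alone):

    CREATE TABLE `magikweb_dev_igcweb_archive`.`orders` (
      `id` INT NOT NULL,
      `customer_id` INT NOT NULL,
      PRIMARY KEY (`id`),
      CONSTRAINT `fk_orders_customers`
        FOREIGN KEY (`customer_id`)
        REFERENCES `magikweb_dev_igcweb_archive`.`customers` (`id`)
    );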
Follow these simple steps (never skip steps 4 and 5):
1. Open the Model tab.
2. Choose the source schema. In my case I want to copy the table users from schema abc_develop_v1 to schema abc_develop_v2 and then paste it into the diagram, so I choose schema abc_develop_v1, right-click the table users, then Copy 'users'.
3. Go to the target schema. In my case that is abc_develop_v2; right-click, then Paste 'users'.
4. Next, copy the table users from schema abc_develop_v2: right-click the table users, then Copy 'users'.
5. Go to your diagram and Paste 'users'.
That's all. Your table is ready in your diagram with the right schema :-)
Note: you can double-check by double-clicking the table in your diagram and looking at the right corner; it will show the schema name.
I found a less painful way to do this.
Save and back up your diagram and your schema.
Display the schema name before table names in the diagram. This will make the next step easier.
Right-click the tables that are in the wrong schema and select "Copy SQL to clipboard". Paste the script into a new SQL window. Repeat for each table you want to migrate.
Edit the script to change the schema name. Watch for any missed occurrences; the wrong schema might be referenced on any line. Mine was mydb, which I don't remember creating. Execute the script. Now you have the tables in the right schema.
Synchronize your model. Be sure to check "Update the model" for each missing table, otherwise the tables will be deleted from the schema :)
Drag and drop the newly created tables into the diagram, then remove the ones that use the wrong schema. Tip: tables that are not in the diagram won't display a dot next to their name.
Optionally, you can delete the faulty schema from the model so this never happens again. Be sure to know what you're doing first!

FileMaker Pro 14 history tables

With a few solutions I've worked with, I've created temp tables or history tables. Normally I script it to take the handful of fields needed from a main table and copy them over to the other table by setting a variable and then setting the field to the variable, for each field in the new table / new record.
I have a situation now where I'm building a history table that needs to copy the current record as-is: a snapshot where all fields from that instance of the record are copied to the history table.
Rather than setting a variable and then setting the field to the variable, I'd like some input on a quicker way to do this at the record level, without typing it out field by field. Also, if fields are added to both tables, I have to make sure my script gets updated.
I'll keep hunting around; I appreciate any help.
-Rich
Do you have a sample of copying a record from one table to another, including all fields, and setting some fields?
As I suggested in comments, use the Import Records[] script step, and select the same file as the source. If you choose Arrange by: [ matching names ] in the Import Field Mapping dialog, it will automatically map all source fields to their similarly named counterparts.
Note that you must establish a found set in the source table before importing.
For "setting some fields", you can define auto-enter options and activate them during the import, or run Replace Field Contents[] immediately after the import.