Is it possible to pass an argument to Oil that will allow the new table field to be null?
something like
oil g migration foo bar:string null baz:int
Thanks
Of course it is possible to pass most of the parameters used to create the table when generating a migration. The parameters must be separated by colons, like below:
oil g migration foo bar:string:null baz:int:unsigned
The short answer is "No".
What you are supposed to do is create your migrations using the allowed syntax and then edit the migration files located in
app/migrations.
After you have updated the migration file you can
run oil refine migrate
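For example, after generating the migration you could open the generated file in app/migrations (the file and class names below are just what oil typically generates; adjust to your project) and add 'null' => true to the column definition. A rough sketch:

<?php

namespace Fuel\Migrations;

class Create_foo
{
    public function up()
    {
        \DBUtil::create_table('foo', array(
            'id'  => array('constraint' => 11, 'type' => 'int', 'auto_increment' => true, 'unsigned' => true),
            // make the column nullable by adding 'null' => true
            'bar' => array('constraint' => 255, 'type' => 'varchar', 'null' => true),
            'baz' => array('constraint' => 11, 'type' => 'int'),
        ), array('id'));
    }

    public function down()
    {
        \DBUtil::drop_table('foo');
    }
}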
Related
I am using MDriven build 7.0.0.11347 for a DDD project and have a model designed in a .ecomdl file.
In this file I have a class Job with WorkDone as one of its properties. The backing SQL table has a WorkDone varchar(255) field. Now I wanted to increase the length of this field, and when I changed the WorkDone property length from 255 to 2000 it modified the code file, but when the application runs EvolveSchema the evolve process doesn't recognize this change, so no scripts are generated. In the end the database never gets the update.
Can you please help me get this change persisted to the database? I thought about increasing it manually in the SQL table, but then if the database changes in a new environment (QA, production) it has to be done every time, which I don't want to do.
In MDriven we don't evolve attribute changes - we only write a warning (255->2000: this change will not be evolved).
You should take steps to alter the column in the database yourself.
We should fix this in the future, but currently it is a limitation.
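For example, assuming SQL Server and that the backing table is simply named Job (adjust the table and column names to whatever MDriven generated for your model), the manual change could be:

ALTER TABLE Job ALTER COLUMN WorkDone varchar(2000) NULL; -- re-state NULL/NOT NULL to match the current column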
To expand on my comment: in older versions of MySQL, VARCHAR can only hold 0-255 chars
Using TEXT will allow for non-binary (character) strings and BLOBs will allow for binary (byte) strings
Your mileage may vary with this as to what you can do with them, as I am using MySQL knowledge and knowledgebases (since you don't specify your SQL type)
See below for explanations of the types:
char / varchar
blobs / text
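If you go the TEXT route in MySQL, the change would look something like this (table name taken from the question; adjust as needed):

ALTER TABLE Job MODIFY WorkDone TEXT;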
I am performing an ETL job via Pentaho 7.1.
The job is to populate the table 'PRO_T_TICKETS' in PostgreSQL 9.2 via Pentaho jobs and transformations.
I have mapped the table fields with respect to the stream fields
Mapped Fields
My Table PRO_T_TICKETS contains the Schema (Column Names) in UPPERCASE.
Is this the reason I can't populate the table PRO_T_TICKETS with my ETL Job?
I duplicated the step TABLE_OUTPUT to PRO_T_TICKETS and changed the Target table field to 'PRO_T_TICKETS2'. Pentaho created a new table with lowercase schema and populated the data in it.
But I want this data to be uploaded in the table PRO_T_TICKETS only and with the UPPERCASE schema if possible.
I am attaching the whole job here and the error thrown by Pentaho. Pentaho Error I have also tried my query by adding double quotes to the column names as you can see in the error. But it didn't help.
What do you think I should do?
When you create (or modify) the connection, select Advanced on the left panel and click on the Force to upper case or Force to lower case or, even better, Preserve case of reserved words.
To know which option to choose, copy the 4th line of your error log, the line starting with INSERT INTO "public"."PRO_T_TICKETS("OID"... in your SQL-developer tool and change the connection advanced parameters until it works.
Also, at debug time, don't use batch updates, don't use lazy conversion on previous steps, and try with one (1) field rather than all (25).
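As background on why those options matter: PostgreSQL folds unquoted identifiers to lowercase, so only quoted identifiers keep their uppercase spelling. A minimal illustration (the values are placeholders):

-- unquoted identifiers are folded to lowercase, so this looks for pro_t_tickets
INSERT INTO public.PRO_T_TICKETS (OID) VALUES ('1');
-- quoting preserves the case, so this targets the uppercase table and column
INSERT INTO public."PRO_T_TICKETS" ("OID") VALUES ('1');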
Just as a complement: it worked for me following the tips from AlainD and using specific configurations that I'd like to share with you. I have a transformation streaming data from MySQL to PostgreSQL using a Table Input and a Table Output. In both DBs I have uppercase objects.
I did the following steps to work in the right way:
In the table input (MySQL) the objects are uppercase too, but I typed them in lowercase and it worked; I didn't set any special option in the DB connection.
In the table output (PostgreSQL) I typed everything in uppercase (schema, table name and columns) and I also set "specify the database fields" (clicking on "Get fields").
In the target DB Connection (PostgreSQL) I put the options (in "Advanced" section): "Quote all in database" and "Preserve case of reserved words".
PS: Ah, the last option is because I found out there was one more problem with my fields: there was a column called "Admin" (yes guys, they created a camelcase column using a reserved word!) and for that reason I had to set "Preserve case of reserved words" and type it as "Admin" (without quotes and in camelcase) in the Table Output.
I have this simple flow in Talend DI 6 (simplified for posting on SO):
The last step crashes with a NullPointerException, because missing XML attributes are returned as null.
Is there a way to get empty string values instead of nulls?
For now I'm using a tReplace step to remove nulls as a work-around, but it's tedious and adds to the cost of maintenance by creating one more place where the list of attributes needs to be maintained.
In Talend DI 5.6.2 it is possible to add default data values to the schema. The column in the schema is called "Default". If you expect strings, you can set an empty string, which will be used when the column value is null:
Talend schema view with Default column
This also works for other data types. Talend DI 6 should still be able to do this, although the field might have been renamed.
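If the Default column is not available, a similar effect can be achieved with a per-column null check in a tMap expression (the row and column names below are placeholders):

row1.someAttr == null ? "" : row1.someAttr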
I've searched but haven't been able to find an answer to this question. Currently our DB uses prefixes on tables - e.g. tblUsers. I've updated the EF templates to remove the "tbl" from the generated class names. However I still can't figure out how to change the output file name to match.
Is it possible or am I asking for the moon? I’m using EF Power Tools Beta 3 in VS 2012. Any help would be GREATLY appreciated!
Patrick, what you need is to modify the T4 templates used by the EF Power Tools. When you want to create a code-first model with all the mappings, instead of the Reverse Engineer Code First option, choose Customize Reverse Engineer Template. You should get three files:
Context.tt
Entity.tt
Mapping.tt
For example, in Mapping.tt there are lines that read MetadataProperties from the TableSet and extract the table name. They look like this:
var tableSet = efHost.TableSet;
var tableName = (string)tableSet.MetadataProperties["Table"].Value ?? tableSet.Name;
This is where you need to make changes and do something like:
var newTableName = tableName.Replace("tbl", String.Empty);
Of course, you may want a more robust strategy and use the Substring method or a regular expression to remove only the leading three characters rather than every occurrence of "tbl". After that you have to go through the .tt file and apply your logic, deciding where to keep using the tableName variable and where to use newTableName. You will keep tableName where the mapping is done against the table in the database, and use newTableName where you want that name for your POCO classes and file names.
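For instance, either of these would strip only a leading "tbl" prefix (a sketch; adjust to your naming convention):

// Substring variant:
var newTableName = tableName.StartsWith("tbl") ? tableName.Substring(3) : tableName;
// Regex variant (equivalent, anchored to the start of the name):
// var newTableName = System.Text.RegularExpressions.Regex.Replace(tableName, "^tbl", String.Empty);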
Repeat the process for the other two files. For more information have a look at Rowan Miller's blog article. This should give you a pretty good idea how to proceed.
I thought this would be an easy task, but since I am new to PDI, I could not
find out so far which transform to choose to accomplish the following:
I am using Pentaho Data Integration (former Kettle), Community Edition, to map/copy values from one table ('tasksA') of one database 'A' to another table
'tasksB' in another database B. tasksA has a column 'description' and I want
to copy these values to the column 'taskName' in 'tasksB'.
Furthermore, I have to copy each value of 'description' several times, since
in 'tasksB', there are multiple lines for each value in 'taskName'.
Maybe this would be possible by direct SQL, but I wanted to try whether
I can define this more readable with PDI, especially because in the next step I will have to extend it to other tables involved.
So I have to specify which value of 'description' has to be mapped onto which value of 'taskName', and that it should be replaced in every tuple in the column 'taskName' that contains this value (well, sounds like a WHERE clause...).
My first experiments with the 'Table input' and 'Table output' steps did not work: I simply drew a hop between them and modified the 'Database fields' tab of the 'Table output' step, which generated 'drop column' statements in the resulting SQL, and that is not what I want. I don't want to modify the schema, just copy the values.
It would be great if someone could point me to the right steps/transforms needed. I worked through the first examples from the Pentaho Wiki and have the 'Pentaho Kettle Solutions' book by Casters et al., but could not find out how to solve this. Many thanks in advance for any help.
If I got this right, you should use a Table Input connected to an "Insert/Update" step.
On the Insert/Update step you need to specify the keys from tasksA that should be looked up in tasksB. Then define which fields on tasksB should be updated: description (as the stream field) -> taskName (as the table field).
Keep in mind that if the key is not found, a row will be inserted into tasksB. If that is not what you want, you'll need to build something like: Table Input -> Database Lookup -> Filter Rows -> Insert/Update
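Conceptually, for each incoming row the Insert/Update step behaves roughly like the SQL below (the key column taskId is only an assumption; use whatever links tasksA rows to tasksB rows):

-- look up the row(s) in tasksB matching the configured keys
SELECT taskName FROM tasksB WHERE taskId = :key;
-- if found, update them with the incoming 'description' value
UPDATE tasksB SET taskName = :description WHERE taskId = :key;
-- if not found, a new row is inserted instead
INSERT INTO tasksB (taskId, taskName) VALUES (:key, :description);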
@RFVoltolini has a good answer. Alternatively you could go
Table Input -> Update
And connect the error output to something else like a Text file output.