I am using Red-Gate Data Compare to synchronize two databases; let's call them DBSource and DBDestination. DBDestination has a table, TableA, with a field that has a NOT NULL constraint. TableA in DBSource has the same structure, apart from this new field.
When I synchronize using the Data Compare tool, it fails on this particular NOT NULL field, since there's no object mapping I can set up for it.
I wanted to know if there is a way of setting a default in the tool, since I can't alter the schema of the destination table and the file is too large to edit by hand.
The best way to handle this is to alter the column in the destination database and put a default value on it. There isn't anything you can do in the SQL Data Compare settings to make it replace illegal NULL data with a default value.
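If altering the destination ever becomes an option, a minimal T-SQL sketch of that (assuming a hypothetical INT column named NewField; adapt the names and type to the real table):

    -- Rows synchronized without a value for NewField then get 0
    -- instead of violating the NOT NULL constraint.
    ALTER TABLE dbo.TableA
        ADD CONSTRAINT DF_TableA_NewField DEFAULT (0) FOR NewField;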
When we use MyBatis, I know we need to set jdbcType in a <select>...</select> statement because the IN parameter may be null. But when I read the MyBatis documentation, I found jdbcType on <result>...</result> under resultMap. The documentation for
jdbcType in <result>...</result> says:
... The JDBC type is only required for nullable columns upon insert, update or delete. This is a JDBC requirement, not a MyBatis one. So even if you were coding JDBC directly, you'd need to specify this type – but only for nullable values.
The bold words say it is only required for nullable columns upon insert, update or delete.
But the result element is used in select, not in insert, update or delete.
So, is it necessary to use jdbcType in <result>...</result>?
Most of the time, no. Why? Read on.
If you want to use a null as a JDBC parameter value you need to specify the jdbcType. That's a restriction of the JDBC specification you can't avoid. Therefore, if there's even a remote possibility a JDBC parameter could have a null value, then yes, specify it.
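For example, a minimal sketch of a mapper statement with a nullable parameter (the statement id, table and column names are made up for illustration):

    <update id="updateName">
      UPDATE users SET name = #{name, jdbcType=VARCHAR}
      WHERE id = #{id}
    </update>

If name is null and the jdbcType were omitted, some drivers (notably Oracle's) would reject the statement at runtime.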
This does not apply to parameters preprocessed by MyBatis inside MyBatis tags, like the ones you use in the "test" attribute of the <if> tag. Those are not JDBC parameters.
Now, for the columns you read: these are the ones you are interested in. The thing is, most of the time you don't need them; MyBatis will pick the right JDBC type for you. Well... this has been the case for me 99.999% of the time.
What about the other 0.001%? For some exotic column types -- ones you rarely use -- MyBatis may pick the wrong JDBC type for you. The designers of MyBatis thought about this case and gave you the chance to override it. I think I remember an XML database column that MyBatis was unsuccessfully trying to read as a VARCHAR, but I don't remember which database.
Bottom line, don't use it when reading columns, unless MyBatis reads exotic data type columns (XML, UUID, POINT, etc.) the wrong way.
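When that happens, the override goes on the result mapping. A minimal sketch (the result map, type and column names are hypothetical):

    <resultMap id="documentMap" type="Document">
      <id property="id" column="id"/>
      <!-- force the JDBC type when MyBatis's guess (e.g. VARCHAR) fails -->
      <result property="payload" column="payload" jdbcType="SQLXML"/>
    </resultMap>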
I have this simple flow in Talend DI 6 (simplified for posting on SO).
The last step crashes with a NullPointerException, because missing XML attributes are returned as null.
Is there a way to get empty string values instead of nulls?
For now I'm using a tReplace step to remove nulls as a work-around, but it's tedious and adds to the cost of maintenance by creating one more place where the list of attributes needs to be maintained.
In Talend DI 5.6.2 it is possible to add default data values to the schema. The column in the schema is called "Default". If you expect strings, you can set an empty string there, which is used whenever the column value is null:
[Screenshot: Talend schema view with the Default column]
This also works for other data types. Talend DI 6 should still be able to do this, although the column might have been renamed.
I am using Slick with Play 2.
I have multiple fields in the database which are managed by the database itself. I don't want to create or update them; however, I do want to get them back when reading the values.
For example, suppose I have
case class MappedDummyTable(id: Int, .. 20 other fields, modified_time: Option[Timestamp])
which maps Dummy in the database. modified_time is managed by the database.
The problem is that during insert or update, I create an instance of MappedDummyTable without the modified time attribute and pass it to Slick for create/update, like
TableQuery[MappedDummyTable].insert(instanceOfMappedDummyTable)
For this, Slick creates a query like
INSERT INTO MappedDummyTable (id, ..., modified_time) VALUES (1, ..., null)
and sets modified_time to NULL, which I don't want. I want Slick to ignore those fields when creating and updating.
For updating, I can do
TableQuery[MappedDummyTable].map(fieldsToBeUpdated).update(values)
but this leads to 20-odd fields in the map method, which looks ugly.
Is there any better way?
Update:
The best solution I found was using multiple projections: one projection to read the values and another to insert and update the data.
Maybe you need to write some triggers on the table if you don't want to write code like row => (row.id, ...other 20 fields).
Or try using None instead of null?
I believe that the solution with mapping the non-default fields is the only way to do it with Slick. To make it less ugly you can define a function ignoreDefaults on MappedDummyTable that returns only the non-default values, and a function in the companion object of the MappedDummyTable case class that returns the projection:
TableQuery[MappedDummyTable].map(MappedDummyTable.ignoreDefaults).insert(instanceOfMappedDummyTable.ignoreDefaults)
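A minimal sketch of the two-projections idea (Slick 2.x lifted embedding; the table and all names here are hypothetical, with a single payload column standing in for the 20 fields):

    import java.sql.Timestamp
    import scala.slick.driver.PostgresDriver.simple._

    case class Dummy(id: Int, name: String, modifiedTime: Option[Timestamp])

    class DummyTable(tag: Tag) extends Table[Dummy](tag, "dummy") {
      def id = column[Int]("id", O.PrimaryKey)
      def name = column[String]("name")
      def modifiedTime = column[Option[Timestamp]]("modified_time")

      // Full projection, used for reading: includes the db-managed column.
      def * = (id, name, modifiedTime) <> (Dummy.tupled, Dummy.unapply)
    }

    val dummies = TableQuery[DummyTable]

    // Write through a narrower projection so the database fills in
    // modified_time itself (e.g. via a column default or trigger):
    //   dummies.map(t => (t.id, t.name)) += ((1, "example"))
    //   dummies.filter(_.id === 1).map(_.name).update("renamed")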
I have 2 fields that I'm adding to an existing database table with data in it. One is a bit and one is an int. If I am setting defaults for both, should I just set them to NOT NULL, since there is no case where they would be null?
If you will ever need to store data where you need the ability to indicate "we don't know", then you may consider allowing null values.
For example, I store data from remote sensors. When I am unable to retrieve the sensor data, like due to network problems, I use null.
If, however, you require that a value always be present, then you should use the NOT NULL constraint.
Yes, that would do the trick. Note that if you set those columns to NOT NULL without specifying a default value, you'll definitely get an error from the DB, since the existing rows have no value to put in them.
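A minimal sketch of that (SQL Server syntax, since bit suggests it; the table and column names are hypothetical). The defaults give the existing rows a value, so the NOT NULL constraints can be satisfied:

    ALTER TABLE dbo.MyTable
        ADD IsEnabled bit NOT NULL CONSTRAINT DF_MyTable_IsEnabled DEFAULT (0);
    ALTER TABLE dbo.MyTable
        ADD RetryCount int NOT NULL CONSTRAINT DF_MyTable_RetryCount DEFAULT (0);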
This is probably a super simple question, but I'm struggling to come up with the right keywords to find it on Google.
I have a Postgres table that has among its contents a column of type text named content_type, which stores what type of entry is stored in that row.
There are only about 5 different types, and I decided I want to change one of them to display as something else in my application (I had been directly displaying these).
It struck me as funny that my view is being dictated by my database model, so I decided I would convert the types stored in my database from strings to integers, and enumerate the possible types in my application with constants that map them to their display names. That way, if I ever get the urge to change any category names again, I can do it by altering a single constant. I also have a hunch that storing integers might be somewhat more efficient than storing text in the database.
First, a quick threshold question of, is this a good idea? Any feedback or anything I missed?
Second, and my main question, what's the Postgres command I could enter to make an alteration like this? I'm thinking I could start by renaming the old content_type column to old_content_type and then creating a new integer column content_type. However, what command would look at a row's old_content_type and fill in the new content_type column based off of that?
If you're finding that you need to change the display values, then yes, it's probably a good idea not to store them in the database. Integers are also more efficient to store and search, but I really wouldn't worry about that unless you've got millions of rows.
You just need to run an update to populate your new column:
UPDATE table_name
SET content_type = CASE WHEN old_content_type = 'a' THEN 1
                        WHEN old_content_type = 'b' THEN 2
                        ELSE 3 END;
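The rename-and-add steps described in the question would look something like this (a sketch using the question's own names):

    ALTER TABLE table_name RENAME COLUMN content_type TO old_content_type;
    ALTER TABLE table_name ADD COLUMN content_type integer;
    -- ...run the UPDATE above, then, once you're happy with the result:
    ALTER TABLE table_name DROP COLUMN old_content_type;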
If you're on Postgres 8.4 then using an enum type instead of a plain integer might be a good idea.
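That could look roughly like this (the enum labels are placeholders for the five stored strings; this converts the existing column in place instead of adding an integer one):

    CREATE TYPE content_kind AS ENUM ('article', 'photo', 'video');
    ALTER TABLE table_name
        ALTER COLUMN content_type TYPE content_kind
        USING content_type::content_kind;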
Ideally you'd have these fields refer to a table containing the definitions of each type, via a foreign key constraint. That way you know your database is clean and has no invalid values (i.e. referential integrity).
There are many ways to handle this:

1. Having a lookup table for each field that can take one of a set of values (i.e. like an enum) is the most obvious; it breaks down when you have a table that requires many such attributes. A sketch of this option follows below.
2. You can use the entity-attribute-value model, but beware that it is easy to abuse and causes problems when things grow.
3. You can use (or refer to) my implementation, PET (Parameter Enumeration Tables), which is a halfway house between 1 and 2.
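A minimal sketch of option 1 (the lookup table and constraint names are illustrative):

    CREATE TABLE content_types (
        id   integer PRIMARY KEY,
        name text NOT NULL UNIQUE
    );

    ALTER TABLE table_name
        ADD CONSTRAINT fk_content_type
        FOREIGN KEY (content_type) REFERENCES content_types (id);

Any attempt to write a content_type that doesn't exist in content_types is then rejected by the database.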