I'm wondering if there is a way to query a SQL Server database and format columns to omit commas in the data, if there are any.
The reason for asking is that I have 10,000+ records, and throughout the data the varchar columns have values like 3,25% and others like 1%.
I'd prefer not to alter the data in the original table, hence asking whether a SELECT with other functions would do the trick.
I have thought about selecting all the data into a temp table and stripping the commas, but that is a lot of work every time I run the query.
Any info on whether this is possible would be appreciated.
Take a look at the REPLACE function:
SELECT REPLACE(YourColumn, ',', '')
FROM YourTable
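Applied to the sample values from the question, a minimal sketch (the inline VALUES list, available in SQL Server 2008+, stands in for your table):

-- Strips commas only in the result set; the stored data is untouched.
SELECT REPLACE(v, ',', '') AS cleaned
FROM (VALUES ('3,25%'), ('1%')) AS t(v);
-- returns '325%' and '1%'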
Use SQL REPLACE:
REPLACE(YourField, ',', '')
I'm using PostgreSQL with a Go driver. Sometimes I need to query fields that may not exist, just to check whether something exists in the DB. Before querying, I can't tell whether a given field exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist; however, the size column could exist, and I could get some results.
Is it possible to handle such cases and return whatever is possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and are not created by me directly; I can only modify them.
That means the query can be simple like the previous example and can be much more complex like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, remove any part with missing columns, and in the end generate a new SQL query?
Any easier way?
I would check the information schema first:
select column_name from information_schema.columns where table_name = 'table_name';
And then build the query based on the result.
Why don't you get a list of columns that are in the table first? Like this
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except for constructing an SQL string from the list of available columns, which you can get by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation or short-circuiting, so you get an error if a non-existent column is referenced.
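A minimal PL/pgSQL sketch of that approach (the table name items and the two predicates are hypothetical stand-ins for your real query):

DO $$
DECLARE
    has_size   boolean;
    has_length boolean;
    predicates text := '';
BEGIN
    -- check which candidate columns actually exist
    SELECT count(*) > 0 INTO has_size
    FROM information_schema.columns
    WHERE table_name = 'items' AND column_name = 'size';

    SELECT count(*) > 0 INTO has_length
    FROM information_schema.columns
    WHERE table_name = 'items' AND column_name = 'length';

    -- keep only the predicates whose columns survived the check
    IF has_size THEN
        predicates := 'size = 10';
    END IF;
    IF has_length THEN
        predicates := predicates
            || CASE WHEN predicates <> '' THEN ' OR ' ELSE '' END
            || 'length = 10';
    END IF;

    IF predicates <> '' THEN
        EXECUTE 'CREATE TEMP TABLE matches AS SELECT * FROM items WHERE '
                || predicates;
    END IF;
END $$;

-- inspect whatever the surviving predicates returned
SELECT * FROM matches;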
I have the following query
SELECT
[DocID],
[Docunum],
[Comments] = REPLACE(REPLACE([Comments], CHAR(13), ''), CHAR(10), '')
FROM
[Billy].[dbo].[order]
WHERE
DocDate = '2017-12-20 00:00:00.000'
I was wondering if the REPLACE function actually changes the value in the database. My concern is that this is an ERP system and I do not want referential integrity problems. I only want to eliminate the carriage returns and line feeds from the NVARCHAR column to avoid spacing issues while pasting into Excel. I do not want any values changed in the database.
Any feedback would be appreciated. I have searched and did not find anything that answered this specifically. If I missed something, please post a link for reference if possible.
You are using REPLACE in a SELECT query, so it will not affect your database; it only changes the result set returned by the query. You are safe.
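A quick way to convince yourself, reusing the query from the question: project the stored value and the flattened copy side by side.

SELECT
    [Comments]                                                AS StoredValue,
    REPLACE(REPLACE([Comments], CHAR(13), ''), CHAR(10), '') AS FlattenedValue
FROM [Billy].[dbo].[order]
WHERE DocDate = '2017-12-20 00:00:00.000';
-- StoredValue keeps its line breaks; only FlattenedValue is altered,
-- and nothing is written back to the table.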
I have three different values in my database that represent a null: an actual null, an empty string, and the string {x:Null}. These values appear across multiple columns.
{x:Null} is normalized on the web front-end, so all these values look exactly the same, although they end up ordered differently in a sort. How can I write a query that will take these values and make them actual nulls across every column and every table?
Bonus points if you can tell me how to make sure these other empty values are always inserted as nulls going forward. (Disclaimer: I have no power to grant any actual bonus points. ;)
You can query the information_schema to get a list of all tables and columns with a string type.
SELECT table_name, column_name
FROM information_schema.columns
WHERE data_type IN ('text', 'character', 'character varying')
NOTE: double-check first what values data_type actually has; I'm not sure if it will be character or char or something else.
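A quick way to check (a small sketch; run it against your own database):

SELECT DISTINCT data_type
FROM information_schema.columns
ORDER BY data_type;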
Then I would write a small program to update each column in each table. Here it is sketched out in Perl.
while ( my ($table, $column) = $sth->fetchrow_array ) {
    # quote_identifier (not quote) is the correct call for table/column names
    my $q_table  = $dbh->quote_identifier($table);
    my $q_column = $dbh->quote_identifier($column);
    # qq[] (not q[]) so the variables interpolate
    $dbh->do(qq[
        UPDATE $q_table
        SET    $q_column = NULL
        WHERE  $q_column = '{x:Null}'
           OR  $q_column = ''
    ]);
}
Be sure to escape $table and $column with quote_identifier as in my sample; never interpolate raw names into SQL.
Going forward, you'll have to set constraints on each and every column. You can use information_schema.columns to do this as well. Something like:
ALTER TABLE $q_table ADD CHECK ($q_column NOT IN ('{x:Null}', ''))
You could use a trigger to change the values to NULL, but I don't like data stores that silently change basic data for application purposes.
For new columns and tables, you'll have to remember to add that constraint. Same caveats about data_type apply.
However, it's probably a bad idea to say that no column can ever be an empty string. You might want to be a bit more selective.
Another thing to note: NULL is a funny thing; it's not true and it's not false. You might be better off deciding that an empty string is the thing to set empty values to.
I don't think this approach is maintainable. It scribbles an application rule all over the data layer. What if you have some data that doesn't follow that rule? And it will have to be continuously maintained for any new data schema added. Perhaps instead you should put this at your ORM layer, or write a few stored procedures to take care of it.
Using the information_schema.columns table, write a procedural-language routine which iterates through all applicable tables and columns, executing UPDATE ... SET column = NULL WHERE column IN ('', '{x:Null}') for each eligible column, as in the sketch below.
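A minimal PL/pgSQL sketch of such a routine (assumes PostgreSQL and the public schema; adjust the data_type list to whatever your database actually uses):

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT table_name, column_name
        FROM information_schema.columns
        WHERE table_schema = 'public'
          AND data_type IN ('text', 'character varying', 'character')
    LOOP
        -- format's %I placeholder quotes the identifiers safely
        EXECUTE format(
            'UPDATE %I SET %I = NULL WHERE %I IN ('''', ''{x:Null}'')',
            r.table_name, r.column_name, r.column_name
        );
    END LOOP;
END $$;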
As for inserting these values as NULL going forward, you would have to set triggers on your tables to intercept these values and replace them with NULL; a sketch follows.
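For example, a hypothetical BEFORE INSERT/UPDATE trigger for one table and column (mytable and mycol are stand-in names; PostgreSQL 11+ syntax):

CREATE OR REPLACE FUNCTION normalize_nullish() RETURNS trigger AS $$
BEGIN
    -- rewrite the two "fake null" representations to a real NULL
    IF NEW.mycol IN ('', '{x:Null}') THEN
        NEW.mycol := NULL;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_normalize_nullish
BEFORE INSERT OR UPDATE ON mytable
FOR EACH ROW EXECUTE FUNCTION normalize_nullish();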
I don't think there is any single query that would do this for every table and every column. In principle, what you want to do is
UPDATE table SET column=NULL WHERE column='' OR column='{x:Null}';
You could try selecting data from the pg_attribute and pg_class catalogs to get the names of the tables and columns, and then generate the queries automatically. Be sure to select only those columns that contain textual data.
What if somebody has entered a genuine string '{x:Null}'? You would then change it into NULL.
However, you have made a real mistake by letting the situation get as bad as it currently is. You should always normalize data before putting it into a database.
I have two fields in my table:
one is a primary key auto-increment value, and the second is a text value.
Let's say: xyzId & xyz
So I can easily insert like this:
insert into abcTable(xyz) Values('34')
After performing the above query, it should insert this information:
xyzId=1 & xyz=34
and for retrieval I can do this:
select xyzId from abcTable
But for this I have to perform two operations. Can't I do it in a single query or subquery?
Thanks
If you are on SQL Server 2005 or later, you can use the OUTPUT clause to return the auto-created id.
Try this:
insert into abcTable(xyz)
output inserted.xyzId
values('34')
I don't think you can do an insert and a select in a single query.
You can use a stored procedure to execute the two instructions as an atomic operation, or you can build a query in code with the two instructions using ';' (semicolon) as a separator between instructions.
Anyway, for selecting identity values in SQL Server you should check @@IDENTITY, SCOPE_IDENTITY() and IDENT_CURRENT(). It's faster and cleaner than a select on the table.
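For instance, a minimal sketch of the two-statement approach with SCOPE_IDENTITY() (table and column names taken from the question):

INSERT INTO abcTable (xyz) VALUES ('34');
-- returns the identity generated by the INSERT above, in the same scope
SELECT SCOPE_IDENTITY() AS xyzId;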
I am working with third-party data which I need to load into my PostgreSQL database. I am running into problems where sometimes I get the time '24:00:30' when it actually should be '00:00:30'. This causes the data to be rejected.
I tried to cast, but it did not work:
insert into stop_times_test
select trip_id, cast(arrival_time as time), feed_id, status
from external_source;
Is there any way to convert to the correct one internally?
This may work for your case:
SELECT '0:0:0'::time + '24:00:30'::interval;
-- 00:00:30
Cast to interval, then cast to time:
SELECT '24:00:30'::interval::time
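Applied to the INSERT from the question, that could look like this (a sketch, assuming arrival_time in external_source is text; column names as given there):

INSERT INTO stop_times_test (trip_id, arrival_time, feed_id, status)
SELECT trip_id, arrival_time::interval::time, feed_id, status
FROM external_source;
-- '24:00:30' wraps around to '00:00:30'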
If you want to bulk load the data with COPY or a mass INSERT, make the target column's data type interval and convert it to time later. This works out of the box:
ALTER TABLE mytable ALTER col1 TYPE time;
No, there is no magic way of doing it. No cast will help you. 24:00:30 is an invalid time. Period.
You could load the value into a varchar column and then use regular expressions to fix the wrong values and insert them into the right columns. This sort of thing happens a lot when doing data transformation.