SQL query question - tsql

I have a table that has about six fields. I want to write a SQL statement that will return all records that do not have "England" in the country field, "English" in the language field, and "english" in the comments field.
What would the SQL query look like?

Well, your question depends a lot on what DBMS you're using and what your table setup looks like. This would be one way to do it in MySQL or T-SQL:
SELECT *
FROM tbl
WHERE country NOT LIKE '%England%' AND language NOT LIKE '%english%'
AND comments NOT LIKE '%english%';
The way you word your question makes it sound like all these fields could contain a lot of text, in which case the above query would be the way to go. However, more likely than not you'd be looking for exact matches in a real database:
SELECT *
FROM tbl
WHERE country != 'England' AND language != 'english'
AND comments NOT LIKE '%english%';

Start with this and modify as necessary:
SELECT *
FROM SixFieldTable
WHERE Country <> 'England'
AND language <> 'english'
AND comments NOT LIKE '%english%'
Hope this helps.

Are you wanting something like
select * from myTableOfMadness
where country <> 'England'
and language <> 'English'
and comments not like '%english%'
Not sure if you want 'and's or 'or's, or all 'not' comparisons. Your sentence structure is somewhat misleading.

The above solutions do not appear to account for possible NULLs in the columns. A condition like
Where country <> 'England'
will erroneously exclude entries where Country is NULL under default SQL Server connection settings, because NULL <> 'England' evaluates to UNKNOWN rather than TRUE.
Instead, you could try using
IsNull(Country, '') <> 'England'
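For example, a NULL-safe version of the query above might look like this (just a sketch; treating NULL as an empty string is an assumption about how you want those rows handled):
SELECT *
FROM SixFieldTable
WHERE IsNull(Country, '') <> 'England'
  AND IsNull(language, '') <> 'english'
  AND IsNull(comments, '') NOT LIKE '%english%'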

To ignore case:
SELECT *
FROM SixFieldTable
WHERE LOWER(Country) <> 'england' AND
LOWER(language) <> 'english' AND
LOWER(comments) NOT LIKE '%english%'

Try this:
Select * From table
Where Country Not Like '%England%'
And Language Not Like '%English%'
And comments Not Like '%English%'

Related

Smart way to filter out unnecessary rows from Query

So I have a query that shows a huge amount of mutations in Postgres. The quality of the data is bad and I have "cleaned" it as much as possible.
To make my report as user-friendly as possible I want to filter out some rows that I know the customer doesn't need.
I have the following columns: id, change_type, atr, module, value_old and value_new.
For change_type = 'update' I always want to show every row.
For the rest of the rows I want to build some kind of logic with a combination of atr and module.
For example, if the change_type <> 'update' and the concatenation of atr and module is 'weightperson', then I don't want to show that row.
In this case ids 3 and 11 are worthless and should not be shown.
Is this the best way to solve this, or does anyone have another idea?
select * from t1
where concat(atr,module) not in ('weightperson','floorrentalcontract')
In the end my "not in" part will be filled with over 100 combinations and the query will not look good. Maybe a solution with a CTE would make it look prettier, and I'm also concerned about the performance.
CREATE TABLE t1(id integer, change_type text, atr text, module text, value_old text, value_new text) ;
INSERT INTO t1 VALUES
(1,'create','id','person',null ,'9'),
(2,'create','username','person',null ,'abc'),
(3,'create','weight','person',null ,'60'),
(4,'update','id','order','4231' ,'4232'),
(5,'update','filename','document','first.jpg' ,'second.jpg'),
(6,'delete','id','rent','12' ,null),
(7,'delete','cost','rent','600' ,null),
(8,'create','id','rentalcontract',null ,'110'),
(9,'create','tenant','rentalcontract',null ,'Jack'),
(10,'create','rent','rentalcontract',null ,'420'),
(11,'create','floor','rentalcontract',null ,'1')
You could put the list of combinations in a separate table and join with that table, or have them listed directly in a with-clause like this:
with combinations_to_remove as (
  select *
  from (values
    ('weight', 'person'),
    ('floor', 'rentalcontract')
  ) as t (atr, module)
)
select t1.*
from t1
left join combinations_to_remove using (atr, module)
where combinations_to_remove.atr is null
I guess it would be cleaner and easier to maintain if you put them in a separate table!
Read more on with-queries if that sounds strange to you.
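If you do move the combinations into a real table, a minimal sketch might look like this (the table name and its contents are only illustrative):
-- lookup table holding the (atr, module) pairs that should be hidden
create table combinations_to_remove (atr text, module text);
insert into combinations_to_remove values
  ('weight', 'person'),
  ('floor', 'rentalcontract');

select t1.*
from t1
left join combinations_to_remove c using (atr, module)
where c.atr is null;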

postgres query a string not starting with a given set of characters using LIKE or ILIKE keywords

Is it possible in PostgreSQL to query a string that does not start with vowels (or a given set of characters) using only the LIKE or ILIKE keyword?
For example I was playing with this sample DB and was trying to extract from Customer the first_names that do not start with vowels:
SELECT first_name
FROM Customer
WHERE first_name NOT ILIKE '[aeiou]%'
ORDER BY first_name;
This query, however, does not work, and I get results like Aaron, Adam, etc. The problem is in the square brackets, because the condition NOT ILIKE 'a%' works.
I know there are previous answers to similar questions but 1) they are not in postgresql and 2) they use regular expressions or substringing whereas I'd like to know if a solution using LIKE/ILIKE exists
AFAIK, LIKE/ILIKE do not support regex syntax, which is what you're trying to use in the provided example. LIKE/ILIKE only support the % and _ special symbols, and those don't do what you need them to in a single expression.
You could probably get away with something like
SELECT first_name
FROM Customer
WHERE first_name NOT ILIKE 'a%' AND first_name NOT ILIKE 'e%' AND first_name NOT ILIKE 'i%' AND first_name NOT ILIKE 'o%' AND first_name NOT ILIKE 'u%'
ORDER BY first_name;
...but it'd take a very good reason to do it that way, rather than using regex.
In addition, you can use the ^ symbol (which anchors the match to the beginning of the string):
SELECT first_name
FROM Customer
WHERE not first_name ~* '^[aeiou]'
ORDER BY first_name;
LIKE (and its Postgres extension ILIKE) does not support regular expressions in SQL.
If you want to use a regex, you can use the standard-compliant SIMILAR TO or the Postgres-specific ~* operator:
SELECT first_name
FROM Customer
WHERE not first_name ~* '^[aeiou]'
ORDER BY first_name;
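For completeness, the SIMILAR TO variant mentioned above could be written like this (a sketch; SIMILAR TO is case-sensitive, hence the lower()):
SELECT first_name
FROM Customer
WHERE lower(first_name) NOT SIMILAR TO '[aeiou]%'
ORDER BY first_name;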

Exclude a word in sql query?

I'm trying to write a query in SQL to exclude a keyword.
It's a list of cities written out (e.g. AnnArbor-MI). In the list there are duplicates because some have the word 'badsetup' after the city, and these need to be discarded. How would I write something to exclude any city with 'badsetup' after it?
Your question title and content appear to be asking for two different things ...
Query cities while excluding the trailing 'badsetup':
SELECT regexp_matches(citycolumn, '(.*)badsetup')
FROM mytable;
Query cities that don't have the trailing 'badsetup':
SELECT citycolumn
FROM mytable
WHERE citycolumn NOT LIKE '%badsetup';
In PostgreSQL, to select rows excluding those with the word 'badsetup' you can use the following:
SELECT * FROM table_name WHERE column NOT LIKE '%badsetup%';
In this case the '%' indicates that any sequence of characters (including none) can appear in that position. So this query will find any instance of the phrase 'badsetup' in your column, regardless of the characters before or after it.
You can find more information in section 9.7.1 here: https://www.postgresql.org/docs/8.3/static/functions-matching.html

Find strings in PostgreSQL and order by distance to beginning of string

I am trying to write a query that looks up a partial string, but orders the results by proximity to the front of the string. It's for a typeahead application.
Here's my query currently:
SELECT DISTINCT ON (full_name) full_name
FROM users
WHERE (lower(full_name) LIKE 'q%' OR lower(full_name) LIKE '%q%')
LIMIT 10
But it does not seem to order in the way I would expect.
So if I search for 'pet' I would like to return peter smith and petra glif before abigail peters.
Is it possible to write that where clause in this way? We don't currently have any fuzzy text search modules installed in the database so I would like to avoid doing that if possible.
You can use the position(substring in string) function for this:
order by position(lower('pet') in lower(full_name))
http://www.postgresql.org/docs/9.1/static/functions-string.html
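Combined with your original query, that might look roughly like this (a sketch, using 'pet' as the search term; the subquery keeps the de-duplication separate from the final ordering):
SELECT full_name
FROM (
  SELECT DISTINCT full_name
  FROM users
  WHERE lower(full_name) LIKE '%pet%'
) AS matches
ORDER BY position('pet' in lower(full_name)), full_name
LIMIT 10;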
You can use the boolean expression full_name ILIKE 'q%' as a sort key.
SELECT *
FROM (
  SELECT DISTINCT full_name
  FROM users
  WHERE full_name ILIKE '%q%'
) alias
ORDER BY full_name ILIKE 'q%' DESC, full_name
LIMIT 10
Note that there is an ILIKE operator for case-insensitive LIKE.
You may also be interested in Full Text Search.

Dynamic number of fields in table

I have a problem with T-SQL. I have a number of tables, and each table contains a different number of fields with different names.
I need to dynamically take all these tables, read all records, and turn each record into a string in which the values are separated by commas, and then do something with that string.
I think I need to use CURSORS, but I can't FETCH them without knowing the concrete number of fields along with their names and types. Maybe I can create a table variable with a dynamic number of fields?
Thanks a lot!
Makarov Artem.
I would repurpose one of the many T-SQL scripts written to generate INSERT statements. They do exactly what you require, namely:
Reverse engineer a given table to determine column names and types
Generate a delimited string of values
The most complete example I've found is here
But just a simple Google search for "INSERT STATEMENT GENERATOR" will yield several examples that you can repurpose to fit your needs.
Best of luck!
SELECT
ORDINAL_POSITION
,COLUMN_NAME
,DATA_TYPE
,CHARACTER_MAXIMUM_LENGTH
,IS_NULLABLE
,COLUMN_DEFAULT
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'MYTABLE'
ORDER BY
ORDINAL_POSITION ASC;
from http://weblogs.sqlteam.com/joew/archive/2008/04/27/60574.aspx
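Building on that metadata, a rough dynamic-SQL sketch for producing one comma-separated string per row might look like this (it assumes SQL Server 2017+ for STRING_AGG, and 'MYTABLE' is just a placeholder table name):
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- build "ISNULL(CAST(col AS NVARCHAR(MAX)), '')" for every column, joined with + ',' +
SELECT @cols = STRING_AGG(
         'ISNULL(CAST(' + QUOTENAME(COLUMN_NAME) + ' AS NVARCHAR(MAX)), '''')',
         ' + '','' + ') WITHIN GROUP (ORDER BY ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MYTABLE';

-- execute the generated SELECT: one comma-separated string per row
SET @sql = N'SELECT ' + @cols + N' AS RowAsCsv FROM MYTABLE;';
EXEC sp_executesql @sql;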
Perhaps you can do something with this.
select T2.X.query('for $i in *
                   return concat(data($i), ",")'
                 ).value('.', 'nvarchar(max)') as C
from (
  select *
  from YourTable
  for xml path('Row'), elements xsinil, type
) as T1(X)
cross apply T1.X.nodes('/Row') T2(X)
It will give you one row for each row in YourTable, with all the values of that row separated by commas in the column C.
This builds an XML document for the entire table and then parses that XML, so it might get you into trouble if you have tables with a lot of rows.
BTW: I saw from a comment that you can "use only pure SQL". I really don't think this qualifies as "pure SQL" :).