MySQL order by clause with expression - sql-injection

I am going through some starter material on SQL injection and, while fuzzing, one SQL query I concocted was:
SELECT * FROM users ORDER by 3 and 0;
Shouldn't the expression 3 AND 0 evaluate to 0? But that doesn't happen: the query returns all users, ordered by the first column. The DB I am using is MySQL.
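To illustrate what I mean, the behaviour differs from ordering by a bare column position (as far as I can tell, MySQL treats only a lone integer literal as a column position, and anything else as an expression):

SELECT * FROM users ORDER BY 3;         -- bare integer literal: sorts by the third column
SELECT * FROM users ORDER BY 3 AND 0;   -- parsed as the expression (3 AND 0): a constant sort key for every row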

Related

When to use a direct SQL query vs type-safe Slick

I am not sure when Slick runs the SQL query and returns the result set. I want to run different queries, but my table is very big. The SQL command itself takes a few seconds to run. I am afraid that if I use Slick to return the entire table and then do the selection or group by, it will consume a lot of memory and make everything slow.
Here is one example to get the count of customers per day:
sql"""
select timestamp::date, count(distinct id) from tenant GROUP BY timestamp::date ORDER BY timestamp::date asc;
""".as[(Date, Int)]
My table has multiple columns other than the timestamp and id. If I bring back the entire table and then do the group by and map in Scala, it takes a lot of memory.
If I use the above command, it is prone to errors.
Is there a way to return only id and date from the SQL database? And what is then the best way to do the group by?
I have tried the above SQL command but am not sure how to use Slick better.

How to reproducibly query random rows with SQLAlchemy from PostgreSQL?

I am trying to pseudo-randomly select rows from a PostgreSQL table using SQLAlchemy, but I need to use a seed to guarantee reproducibility of the query. The specific case concerns a publication being submitted, tied to a codebase.
This answer does a great job introducing how one can leverage the following one-liner to select a single random row:
select.order_by(func.random()) # for PostgreSQL, SQLite
Further, one can select many pseudo-random rows via:
select.order_by(func.random()).limit(n)
But how do I ensure that I select the same pseudo-random rows every time I run the above query?
You can leverage the PostgreSQL setseed(n) function. Using SQLAlchemy and ChEMBL as our sample DB, the full solution looks like this:
from sqlalchemy import create_engine, func, select

# MoleculeRecord is an ORM model mapped to the ChEMBL molecule table (mapping not shown here)
query = select([MoleculeRecord.molregno]).order_by(func.random()).limit(500)

SEED = 0.5  # any float between -1.0 and 1.0; change it to get different query results
e = create_engine("postgresql:///chembl_25")
conn = e.connect()

# setseed() must be issued again before each query, because every random() call advances the generator state
conn.execute(f"SELECT setseed({SEED})")
firstQueryResults = [x.molregno for x in conn.execute(query)]
conn.execute(f"SELECT setseed({SEED})")
secondQueryResults = [x.molregno for x in conn.execute(query)]
assert firstQueryResults == secondQueryResults
This returns 500 random rows, with the invariant that these 500 rows will always be the same each time the query is executed. To select different random rows, change the SEED variable to something different between -1.0 and 1.0.
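The same mechanism can be seen in plain SQL, since setseed() only affects random() calls issued afterwards in the same session (a sketch; molecule_dictionary is just an illustrative table holding molregno):

SELECT setseed(0.5);   -- seed the session's random number generator

SELECT molregno
FROM molecule_dictionary
ORDER BY random()
LIMIT 500;             -- rerunning both statements together returns the same 500 rows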

Convert T-SQL Cross Apply to Redshift

I am converting the following T-SQL statement to Redshift. The purpose of the query is to convert a column containing a comma-delimited string of up to 60 values into multiple rows with one value per row.
SELECT
id_1
, id_2
, value
into dbo.myResultsTable
FROM myTable
CROSS APPLY STRING_SPLIT([comma_delimited_string], ',')
WHERE [comma_delimited_string] is not null;
In SQL Server this processes 10 million records in just under 1 hour, which is fine for my purposes. Obviously a direct conversion to Redshift isn't possible, since Redshift has neither CROSS APPLY nor STRING_SPLIT, so I built a solution using the process detailed here (Redshift. Convert comma delimited values into rows), which uses split_part() to split the comma-delimited string into multiple columns, and then another query that unions everything to get the final output into a single column. But the typical run takes over 6 hours to process the same amount of data.
I wasn't expecting to run into this issue, given the power difference between the machines. The SQL Server I was using for the comparison test was a simple server with 12 processors and 32 GB of RAM, while the Redshift cluster is based on dc1.8xlarge nodes (I don't know the total count). The cluster is shared with other teams, but when I look at the performance information there are plenty of available resources.
I'm relatively new to Redshift, so I'm assuming I'm not understanding something, but I have no idea what I am missing. Are there things I need to check to make sure the data is loaded in an optimal way (I'm not an admin, so my ability to check this is limited)? Are there other Redshift query options that are better than the example I found? I've searched for other methods and optimizations, but apart from cross joins, which I'd like to avoid (and when I tried to talk to the DBAs running the Redshift cluster about that option, their response was a flat "No, can't do that."), I'm not even sure where to go at this point, so any help would be much appreciated!
Thanks!
I've found a solution that works for me.
You need to JOIN against a numbers table, for which you can use any table as long as it has more rows than the number of fields you need. You need to make sure the numbers are integers by casting the type. Using the function regexp_count on the column to be split in the ON clause, to count the number of fields (delimiters + 1), generates one output row per repetition.
Then you use the split_part function on the column, using the numbers.num value to extract a different part of the text for each of those rows.
SELECT
    comma_delimited_string,
    numbers.num,
    REGEXP_COUNT(comma_delimited_string, ',') + 1 AS nfields,
    SPLIT_PART(comma_delimited_string, ',', numbers.num) AS field
FROM mytable
JOIN (
    SELECT (ROW_NUMBER() OVER (ORDER BY 1))::int AS num
    FROM mytable
    LIMIT 15 -- max number of fields
) AS numbers
    ON numbers.num <= REGEXP_COUNT(comma_delimited_string, ',') + 1
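To mirror the SELECT ... INTO from the T-SQL version and materialize the result, the same query can be wrapped in a CREATE TABLE ... AS in Redshift (a sketch only; myResultsTable and the id columns are carried over from the original statement):

CREATE TABLE myResultsTable AS
SELECT
    id_1,
    id_2,
    SPLIT_PART(comma_delimited_string, ',', numbers.num) AS value
FROM mytable
JOIN (
    SELECT (ROW_NUMBER() OVER (ORDER BY 1))::int AS num
    FROM mytable
    LIMIT 15 -- max number of fields per string
) AS numbers
    ON numbers.num <= REGEXP_COUNT(comma_delimited_string, ',') + 1
WHERE comma_delimited_string IS NOT NULL;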

Update from existing table in Redshift

I would like to update a value in a Redshift table from the results of another table. I'm trying to run the following query but received an error.
update section_translate
set word=t.section_type
from (
select distinct section_type from mr_usage where section_type like '%sディスコ')t
where word = '80sディスコ'
The error I received:
ERROR: Target table must be part of an equijoin predicate
Can't understand what is incorrect in my query.
You need to turn the uncorrelated subquery into a correlated subquery:
update section_translate
set word=t.section_type
from (
select distinct section_type,'80sディスコ' as word from mr_usage where section_type like '%sディスコ')t
where section_translate.word = t.word
Otherwise, every record of the target table is eligible for the update, and the query engine rejects it. The way Postgres (and thus Redshift) evaluates uncorrelated subqueries in an UPDATE is slightly different from SQL Server, Oracle, etc.
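For reference, the shape Redshift expects for an UPDATE ... FROM is an equality join between the target table and the source, roughly like this (a generic sketch with made-up table and column names):

update target_table
set col = s.col
from source_table s
where target_table.join_key = s.join_key;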

Why does this Oracle 10g SQL run slow only when I query a subquery with a where clause?

I can't paste in the entire SQL for various reasons, so consider this example:
select *
from
(select nvl(get_quantity(1), 10) available_qty
from dual)
where available_qty > 30;
get_quantity is a function that makes a calculation based on the ID of a record that's passed to it. If it returns null, I use nvl() to force it to 10.
The query runs very slowly when I use the WHERE clause in the parent query. When I comment out the WHERE clause, however, it runs very fast. What I don't get is why it can display the data very fast but can't filter it just as fast. I am querying the results of a subquery, too; I was under the impression that subqueries return a "rendered" dataset. It's almost as if filtering on the available_qty identifier causes it to reference something within the subquery.
That is why I don't think the contents of the get_quantity function are relevant here, so I didn't bother posting them. Instead, I think it's a misunderstanding on my part of how Oracle handles subqueries.
Do any of you Oracle gurus have any idea what I am doing wrong?
Afterthought: as I was entering tags for this question, the tag "correlated subquery" came up. In doing some quick research, it seems that this type of subquery somewhat depends on the outer query. Could this be related to my problem?
Let's try an experiment. First we'll run the following query:
select lvl, rnd
from (select level as lvl from dual connect by level <= 5) a,
(select dbms_random.value() rnd from dual) b;
The "a" subquery will return 5 rows with values from 1 to 5. The "b" subquery will return one row with a random value. If the function is run before the two tables are join (by Cartesian), the same random value will be returned for each row. The actual results:
       LVL        RND
---------- ----------
         1 .417932089
         2 .963531718
         3 .617016889
         4 .128395638
         5 .069405568

5 rows selected.
Clearly the function was run for each of the joined rows, not once for the subquery before the join. This is a result of Oracle's optimizer deciding that the best path for the query is to do things in that order. To prevent this, we have to add something to the second subquery that will make Oracle run the subquery in its entirety before performing the join. We'll add rownum to the subquery, since Oracle knows rownum would change if it were computed after the join. The following query demonstrates this:
select lvl, rnd from (
select level as lvl from dual connect by level <= 5) a,
(select dbms_random.value() rnd, rownum from dual) b;
As you can see from the results, the function was only run once in this case:
       LVL        RND
---------- ----------
         1 .028513902
         2 .028513902
         3 .028513902
         4 .028513902
         5 .028513902

5 rows selected.
In your case, it seems likely that the filter provided by the where clause is making the optimizer take a different path, where it's running the function repeatedly, rather than once. By making Oracle run the subquery as written, you should get more consistent run-times.
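Applied to the query from the question, the same rownum trick would look something like this (a sketch only, keeping the posted nvl() call and filter as they are):

select *
from
  (select nvl(get_quantity(1), 10) available_qty, rownum rn
     from dual)
where available_qty > 30;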