Basically, what I want in my stored procedure is to return a list of tables and store this list in a variable; I need to go through every item in the list and recursively call this stored procedure. In the end I need an overall listOfTables built up by this recursion.
Any help would be most appreciated.
You should take a look at Common Table Expressions if you're on SQL Server 2005 or higher (I'm not sure whether they help in your specific situation, but they're an important alternative to most recursive queries). Recursive procedures cannot nest more than 32 levels deep and are not very elegant.
You can use CTEs:
WITH q (column1, column2) AS (
SELECT *
FROM table
UNION ALL
SELECT *
FROM table
JOIN q
ON …
)
SELECT *
FROM q
However, there are some limitations: you cannot use aggregates, analytic functions, the TOP clause, etc. in the recursive part.
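To make that concrete, here is a minimal sketch of a recursive CTE over a hypothetical employee table (the table and column names are assumptions, not from your schema):
-- hypothetical table: employee (id, name, manager_id)
WITH subordinates (id, name, manager_id) AS (
SELECT id, name, manager_id
FROM employee
WHERE manager_id IS NULL -- anchor member: start at the top of the hierarchy
UNION ALL
SELECT e.id, e.name, e.manager_id
FROM employee e
JOIN subordinates s
ON e.manager_id = s.id -- recursive member: walk down one level
)
SELECT *
FROM subordinates
Note that SQL Server caps CTE recursion depth at 100 by default; you can change this with OPTION (MAXRECURSION n), where 0 means unlimited.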
Are you after recursion or just a loop through all tables? If you are using SQL Server 2005 and want to loop through all tables, you can use a table variable in your SP; try something along these lines:
declare @TableList as table (
ID int identity (1,1),
TableName varchar(500)
)
insert into @TableList (TableName)
select name
from sys.tables
declare @count int
declare @limit int
declare @TableName varchar(500)
set @count = 1
select @limit = max(ID) from @TableList
while @count <= @limit
begin
select @TableName = TableName from @TableList where ID = @count
print @TableName --replace with call to SP
set @count = @count + 1
end
Replace the print @TableName with the call to the SP. If you don't want this to run on every table in the DB, change the select name from sys.tables query to return only the tables you are after.
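For instance, the swap might look like this (dbo.ProcessTable is a made-up procedure name that takes the table name as a parameter):
--inside the loop, instead of: print @TableName
exec dbo.ProcessTable @TableName = @TableName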
Most likely a CTE would answer your requirement.
If you really must use a stored procedure rather than a query, then all you have to do is iterate through the table list with your code of choice and call the procedure for each entry (Macros already posted how to do that as I was typing). And as Mehrdad already told you, there is a limit on the number of nested call levels SQL Server allows, and it is rather shallow. I'm not convinced from your explanation that you need a recursive call; it looks more like a simple iteration over a list. But if you do indeed need recursion, then remember the CS 101 lesson: any recursive algorithm can be transformed into a non-recursive one by using a loop and an explicit stack.
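To sketch that last point: assuming a hypothetical dbo.TableHierarchy (TableName, ParentTableName) table, the depth-first recursion can be replaced by a loop over an explicit stack held in a table variable:
declare @stack table (
ID int identity (1,1),
TableName varchar(500)
)
declare @ID int
declare @TableName varchar(500)
--push the roots
insert into @stack (TableName)
select TableName from dbo.TableHierarchy where ParentTableName is null
while exists (select 1 from @stack)
begin
--pop the most recently pushed entry
select top 1 @ID = ID, @TableName = TableName from @stack order by ID desc
delete from @stack where ID = @ID
print @TableName --process here instead of recursing
--push the children
insert into @stack (TableName)
select TableName from dbo.TableHierarchy where ParentTableName = @TableName
end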
Stored procedures are very useful. BUT.
I recently had to work on a system that was heavily dependent on stored procedures. It was a nightmare. Half the business logic was in one language (Java, in this case), and the other half was in the database in stored procedures. Worse yet, half the application was under source code control and the other half was one database crash from being lost forever (bad backup processes). Plus, all those lovely little tools I have for scanning, analyzing and maintaining source code can't work with sources inside the database.
I'm not inherently anti-stored-procedure, but oh, how they can be abused. Stored procedures are excellent for when you need to enforce rules against data coming from a multiplicity of sources, and there's no better way to offload heavy-duty record access off the webservers (and onto the DBMS server). But for the most part, I'd rather use a View than a Stored Procedure and an application programming language for the business logic. I know it makes some things a little more complex. But it can make life a whole lot easier.
So I know there's already similar questions on this, but most of them are very old, or they have non-answers, like "why would you even want to do this?", or "table types aren't performant and we don't want them here", or even "you need to rethink your whole approach".
So what I would ideally want to do is to declare a user-defined table type like this:
CREATE TYPE my_table AS TABLE (
a int,
b date);
Then use this in a procedure as a parameter, like this:
CREATE PROCEDURE my_procedure (
my_table_parameter my_table)
Then be able to do stuff like this:
INSERT INTO
my_temp_table
SELECT
m.a,
m.b,
o.useful_data
FROM
my_table m
INNER JOIN my_schema.my_other_table o ON o.a = m.a;
This is for a billing system; let's make it a mobile phone billing system (it isn't, but it's similar enough to work). Here are several ways I might call my procedure:
I sometimes want to call my procedure for one row, to create an adhoc bill for one customer. I want to do this while they are on the phone and get them an immediate result. Maybe I just fixed something wrong with their bill, and they're angry!
I sometimes want to bill everyone who's due a bill on a specific date. Maybe this is their preferred billing date, and they're on a monthly billing cycle?
I sometimes want to bill in bulk, but my data comes from a CSV file. Maybe this is a custom billing run that I have no way of understanding the motivation for?
Maybe I want to final bill customers who recently left?
Sometimes I might need to rebill customers because a mistake was made. Maybe a tariff rate was uploaded incorrectly, and everyone on that tariff needs their bill regenerating?
I want to split my code up into modules; it's easier to work like this, and it allows a large degree of reusability. So what I don't want to do is write n billing systems, where each one handles one of the use cases above. I want a generic billing engine that is a stored procedure, uses set-based queries where possible, and works just about as well for one customer as it does for 1,000,000. If anything I want it optimised for lots of customers, as long as it only takes a few seconds to run for one customer.
If I had SQL Server I would create a user-defined table type, and this would contain a list of customers, the date they need billing to, and maybe anything else that would be useful. But let's just leave it at the simplest case possible, an integer representing a customer and a date to say what date I want to bill them up to, like my example above.
I've spent some days now looking at the options available in PostgreSQL, and these are the conclusions I have reached. I would be extremely grateful for any help with this, correcting my incorrect assumptions, or telling me of another way I have overlooked.
Process-Keyed Table
Create a table that looks like this:
CREATE TABLE customer_list (
process_key int,
customer_id int,
bill_to_date date);
When I want to call my billing system I get a unique key (probably from a sequence), load the rows with my list of customers/dates to bill them to, and add the unique key to every row. Now I can simply pass the unique key to my billing engine, and it can scoop up the data at the other side.
This seems the most workable way to proceed, but it's clunky, like something I would have done in SQL Server 20 years ago when there weren't better options. It's prone to leaving data lying around, and it doesn't seem like it would be optimal, as the data has to be written to physical storage and read back into memory.
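Here is roughly what I have in mind; the sequence, the customer selection, and the engine's entry point (billing.bill_customer_run) are placeholders:
CREATE SEQUENCE billing_run_seq;
DO $$
DECLARE
run_key int := nextval('billing_run_seq');
BEGIN
-- load this run's rows under a fresh key
INSERT INTO customer_list (process_key, customer_id, bill_to_date)
SELECT run_key, customer_id, current_date
FROM billing.customer
WHERE billed = false;
CALL billing.bill_customer_run(run_key); -- hypothetical engine entry point
-- clean up so no data is left lying around
DELETE FROM customer_list WHERE process_key = run_key;
END;
$$;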
Use a Temporary Table
So I'm thinking that I create a temporary table, call it customer_temp, and make it ON COMMIT DROP. When I call my stored procedure to bill customers it picks the data out of the temporary table, does what it needs to do, and then when it ends the table is vacuumed away.
But this doesn't work if I call the billing engine more than once at a time. So I need to give my temporary tables unique names, and also pass this name into my billing engine, which has to use some vile dynamic SQL to get the data into some usable area (probably another temporary table?).
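For illustration, the basic shape would be as below (note that PostgreSQL temporary tables are private to the creating session, so two concurrent sessions can both use the name customer_temp; the collision risk only arises within a single session):
BEGIN;
CREATE TEMPORARY TABLE customer_temp (
customer_id int,
bill_to_date date
) ON COMMIT DROP;
INSERT INTO customer_temp VALUES (1, current_date);
-- CALL billing_engine(); -- hypothetical: reads from customer_temp
COMMIT; -- customer_temp is vacuumed away here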
Use a TYPE
When I first saw this I thought I had the answer, but it turns out to not work for multidimensional arrays (or I'm doing something wrong). I quickly learned that for a single dimensional array I could get this working by just pretending that a PostgreSQL TYPE was a user defined table type. But it obviously isn't.
So passing in an array of integers, e.g. customer_list int[]; works fine, and I can use the ARRAY command to populate that array from a query, and then it's easy to access it with =ANY(customer_list) at the other side. It's not ideal, and I bet it's awful for large data sets, but it's neat.
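For reference, the single-dimensional version that does work looks roughly like this (the procedure name is made up; the billing tables match my example further down):
CREATE PROCEDURE bill_customers (customer_list int[])
LANGUAGE plpgsql
AS $$
BEGIN
UPDATE billing.customer
SET billed = true
WHERE customer_id = ANY(customer_list); -- unpack the array
END;
$$;
-- populate the array from a query with the ARRAY constructor
CALL bill_customers(ARRAY(SELECT customer_id FROM billing.pending_bill));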
However, it doesn't work for multidimensional arrays, the ARRAY command can't cope with a query that has more than one column in it, and the other side becomes more awkward, needing an UNNEST command to unpack the data into a format where it's usable.
I defined my type like this:
CREATE TYPE customer_list AS (
customer_id int,
bill_to_date date);
...and then used it in my procedure parameter list as customer_list[], which seems to work, but I have no nice way to populate this structure from the calling procedure.
I feel I'm missing something here, as I never got it to work properly as a prototype, but I also feel this is a potential dead end anyway, as it won't cope with large numbers of rows in a performant way, and this isn't what arrays are meant for. The first thing I do with the array at the other side is unpack it back into a table again, which seems counterintuitive.
Ref Cursors
I read that you can use REF CURSORs, but I'm not quite sure how this works. It seems that you open a cursor in one procedure, and then pass a handle to it to another procedure. It doesn't seem like this is going to be set-based, but I could be wrong, and I just haven't found a way to convert a cursor back into a table again?
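From what I can tell the pattern would be something like the sketch below (names invented); the consumer still has to FETCH row by row, which is why I doubt it's set-based:
CREATE PROCEDURE open_pending (INOUT cur refcursor)
LANGUAGE plpgsql
AS $$
BEGIN
OPEN cur FOR SELECT customer_id, bill_to_date FROM customer_list;
END;
$$;
CREATE PROCEDURE consume_pending ()
LANGUAGE plpgsql
AS $$
DECLARE
cur refcursor := 'pending_cur'; -- gives the underlying portal a fixed name
rec record;
BEGIN
CALL open_pending(cur);
LOOP
FETCH cur INTO rec;
EXIT WHEN NOT FOUND;
RAISE NOTICE 'would bill customer %', rec.customer_id; -- one row at a time
END LOOP;
CLOSE cur;
END;
$$;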
Write Everything as One Massive Procedure
I'm not ruling this out, as PostgreSQL seems to be leading me this way. If I write one enormous billing engine that copes with every eventuality, then my only issue will be when this is called using an externally provided list. Every other issue can be solved by just not having to pass data between procedures.
I can probably cope with this by loading the data into a batch table, and feeding this in as one of the options. It's awful, and it's like going back to the 1990s, but if this is what PostgreSQL wants me to do, then so be it.
Final Thoughts
I'm sure I'm going to be asked for code examples, which I will happily provide, but I avoided them because this post is already uber-long, and what I'm trying to achieve is actually quite simple, I feel.
Having typed all of this out, I'm still feeling that there must be a way of working around the "temporary table names must be unique" issue, as this approach would work nicely if I found a way to let it be called in a multithreaded way.
Okay, taking the bits I was missing I came up with this, which seems to work:
CREATE TYPE bill_list AS (
customer_id int,
bill_date date);
CREATE TABLE IF NOT EXISTS billing.pending_bill (
customer_id int,
bill_date date);
CREATE TABLE IF NOT EXISTS billing.customer (
customer_id int,
billed boolean,
last_billed date);
INSERT INTO billing.customer
VALUES
(1, false, NULL::date),
(2, false, NULL::date),
(3, false, NULL::date);
INSERT INTO billing.pending_bill
VALUES
(1, '20210108'::date),
(2, '20210105'::date),
(3, '20210104'::date);
CREATE OR REPLACE PROCEDURE billing.bill_customer_list (
pending bill_list[])
LANGUAGE PLPGSQL
AS
$$
BEGIN
UPDATE
billing.customer c
SET
billed = true,
last_billed = p.bill_date
FROM
UNNEST(pending) p
WHERE
p.customer_id = c.customer_id;
END;
$$;
CREATE OR REPLACE PROCEDURE billing.test ()
LANGUAGE PLPGSQL
AS
$$
DECLARE pending bill_list[];
BEGIN
pending := ARRAY(SELECT p FROM billing.pending_bill p);
CALL billing.bill_customer_list (pending);
END;
$$;
Your select in the procedure returns multiple columns. But you want to create an array of a custom type. So your SELECT list needs to return the type, not *.
You don't need the bill_list type either, as every table has a corresponding type and you can simply pass an array of the table's type.
So you can use the following:
CREATE PROCEDURE bill_customer_list (
pending pending_bill[])
LANGUAGE PLPGSQL
AS
$$
BEGIN
UPDATE
customer c
SET
billed = true
FROM unnest(pending) p --<< treat the array as a table
WHERE
p.customer_id = c.customer_id;
END;
$$
;
CREATE PROCEDURE test ()
LANGUAGE PLPGSQL
AS
$$
DECLARE
pending pending_bill[];
BEGIN
pending := ARRAY(SELECT p FROM pending_bill p);
CALL bill_customer_list (pending);
END;
$$
;
The select p returns a record (of the same type as the table) as a single column in the result.
The := is the assignment operator in PL/pgSQL and typically much faster than a SELECT .. INTO variable. Although in this case the performance difference wouldn't matter much I guess.
Online example
If you do want to keep the extra type bill_list around because it e.g. contains fewer columns than pending_bill, you need to select only those columns that match the type's columns and create a record by enclosing them in parentheses: (a,b) is a single column with an anonymous record type (and two fields), whereas a,b are two distinct columns.
CREATE PROCEDURE test ()
LANGUAGE PLPGSQL
AS
$$
DECLARE
pending bill_list[];
BEGIN
pending := ARRAY(SELECT (customer_id, bill_date) FROM pending_bill p);
CALL bill_customer_list (pending);
END;
$$
;
You should also note that DECLARE starts a block in PL/pgSQL where multiple variables can be defined. There is no need to write one DECLARE for each variable (your formatting of the DECLARE block leads me to think that you assumed you need one DECLARE per variable, as is the case in T-SQL).
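For illustration, one DECLARE section holding several variables (bill_count is just an invented example):
CREATE OR REPLACE PROCEDURE test ()
LANGUAGE PLPGSQL
AS
$$
DECLARE -- one DECLARE, several variables
pending bill_list[];
bill_count int := 0;
BEGIN
pending := ARRAY(SELECT (customer_id, bill_date) FROM pending_bill);
bill_count := coalesce(array_length(pending, 1), 0);
RAISE NOTICE '% bills pending', bill_count;
CALL bill_customer_list (pending);
END;
$$
;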
I am new to using cursors for looping through a set of rows, but so far I have always had prior knowledge of which columns I am about to read.
E.g.
DECLARE db_cursor CURSOR FOR
SELECT Column1, Column2
FROM MyTable
DECLARE @ColumnOne VARCHAR(50), @ColumnTwo VARCHAR(50)
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @ColumnOne, @ColumnTwo
...
But the tables I am about to read into my key/value table have no specific structure and I should be able to process them one row at a time. How, using a nested cursor, can I loop through all the columns of the fetched row and process them according to their type and name?
T-SQL cursors are not really designed to read data from tables of unknown structure. The two possibilities I can think of to achieve something in that direction are:
First read the column names of an unknown table from the Information Schema Views (see System Information Schema Views (Transact-SQL)). Then use dynamic SQL to create the cursor; a sketch of this follows below.
If you simply want to get any columns as a large string value, you might also try a simple SELECT * FROM TABLE_NAME FOR XML AUTO and further process the retrieved data for your purposes (see FOR XML (SQL Server)).
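Here is a rough sketch of that first option (MyTable stands in for your actual table; STRING_AGG requires SQL Server 2017 or later, use FOR XML PATH concatenation on older versions):
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX)
-- read the column names from the Information Schema Views
SELECT @cols = STRING_AGG(QUOTENAME(COLUMN_NAME), N', ')
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'MyTable'
-- declare the cursor dynamically; GLOBAL so it survives the EXEC scope
SET @sql = N'DECLARE db_cursor CURSOR GLOBAL FOR SELECT ' + @cols + N' FROM dbo.MyTable'
EXEC sp_executesql @sql
OPEN db_cursor
-- FETCH NEXT FROM db_cursor INTO ... as usual, then CLOSE and DEALLOCATE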
SQL is not very good at dealing with sets generically. In most cases you must know column names, data types and much more in advance. But there is XQuery. You can transform any SELECT into XML rather easily and use its mighty abilities to deal with generic structures there. I would not recommend this, but it might be worth a try:
CREATE PROCEDURE dbo.Get_EAV_FROM_SELECT
(
@SELECT NVARCHAR(MAX)
)
AS
BEGIN
DECLARE @tmptbl TABLE(TheContent XML);
DECLARE @cmd NVARCHAR(MAX)= N'SELECT (' + @SELECT + N' FOR XML RAW, ELEMENTS XSINIL);';
INSERT INTO @tmptbl EXEC(@cmd);
SELECT r.value('*[1]/text()[1]','nvarchar(max)') AS RowID
,c.value('local-name(.)','nvarchar(max)') AS ColumnKey
,c.value('text()[1]','nvarchar(max)') AS ColumnValue
FROM @tmptbl t
CROSS APPLY t.TheContent.nodes('/row') A(r)
CROSS APPLY A.r.nodes('*[position()>1]') B(c)
END;
GO
EXEC Get_EAV_FROM_SELECT @SELECT='SELECT TOP 10 o.object_id,o.* FROM sys.objects o';
GO
--Clean-Up for test purpose
DROP PROCEDURE Get_EAV_FROM_SELECT;
The idea in short:
The SELECT is passed into the procedure as a string. Within the SP we build a statement dynamically and create XML from it.
The very first column is taken to be the row's ID; if it isn't (as in sys.objects), we can write the SELECT to force it that way.
The inner SELECT reads each row and returns a classical EAV list.
I have found solutions (I think) to the problem I'm about to ask for on Oracle and SQL Server, but can't seem to translate this into a Postgres solution. I am using Postgres 9.3.6.
The idea is to be able to generate "metadata" about the table content for profiling purposes. This can only be done (AFAIK) by having queries run for each column so as to find out, say... min/max/count values and such. In order to automate the procedure, it is preferable to have the queries generated by the DB, then executed.
With an example salesdata table, I'm able to generate a select query for each column, returning the min() value, using the following snippet:
SELECT 'SELECT min('||column_name||') as minval_'||column_name||' from salesdata '
FROM information_schema.columns
WHERE table_name = 'salesdata'
The advantage being that the db will generate the code regardless of the number of columns.
Now there are myriad places I had in mind for storing these queries, either a variable of some sort or a table column, the idea being to then have these queries execute.
I thought of storing the generated queries in a variable and then executing them using the EXECUTE (or EXECUTE IMMEDIATE) statement, which is the approach employed here (see right pane), but Postgres won't let me declare a variable outside a function, and I've been scratching my head over how this would fit together, whether that's even the direction to follow, or whether there's something simpler.
Would you have any pointers? I'm currently trying something like this, inspired by this other question, but I have no idea whether I'm headed in the right direction:
CREATE OR REPLACE FUNCTION foo()
RETURNS void AS
$$
DECLARE
dyn_sql text;
BEGIN
dyn_sql := (
SELECT 'SELECT min('||column_name||') from salesdata'
FROM information_schema.columns
WHERE table_name = 'salesdata'
);
EXECUTE dyn_sql;
END;
$$ LANGUAGE PLPGSQL;
System statistics
Before you roll your own, have a look at the system table pg_statistic or the view pg_stats:
This view allows access only to rows of pg_statistic that correspond
to tables the user has permission to read, and therefore it is safe to
allow public read access to this view.
It might already have some of the statistics you are about to compute. It's populated by ANALYZE, so you might run that for new (or any) tables before checking.
-- ANALYZE tbl; -- optionally, to init / refresh
SELECT * FROM pg_stats
WHERE tablename = 'tbl'
AND schemaname = 'public';
Generic dynamic plpgsql function
You want to return the minimum value for every column in a given table. This is not a trivial task, because a function (like SQL in general) demands to know the return type at creation time - or at least at call time with the help of polymorphic data types.
This function does everything automatically and safely. Works for any table, as long as the aggregate function min() is allowed for every column. But you need to know your way around PL/pgSQL.
CREATE OR REPLACE FUNCTION f_min_of(_tbl anyelement)
RETURNS SETOF anyelement
LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY EXECUTE (
SELECT format('SELECT (t::%2$s).* FROM (SELECT min(%1$s) FROM %2$s) t'
, string_agg(quote_ident(attname), '), min(' ORDER BY attnum)
, pg_typeof(_tbl)::text)
FROM pg_attribute
WHERE attrelid = pg_typeof(_tbl)::text::regclass
AND NOT attisdropped -- no dropped (dead) columns
AND attnum > 0 -- no system columns
);
END
$func$;
Call (important!):
SELECT * FROM f_min_of(NULL::tbl); -- tbl being the table name
db<>fiddle here
Old sqlfiddle
You need to understand these concepts:
Dynamic SQL in plpgsql with EXECUTE
Polymorphic types
Row types and table types in Postgres
How to defend against SQL injection
Aggregate functions
System catalogs
Related answer with detailed explanation:
Table name as a PostgreSQL function parameter
Refactor a PL/pgSQL function to return the output of various SELECT queries
Postgres data type cast
How to set value of composite variable field using dynamic SQL
How to check if a table exists in a given schema
Select columns with particular column names in PostgreSQL
Generate series of dates - using date type as input
Special difficulty with type mismatch
I am taking advantage of Postgres defining a row type for every existing table. Using the concept of polymorphic types I am able to create one function that works for any table.
However, some aggregate functions return related but different data types as compared to the underlying column. For instance, min(varchar_column) returns text, which is bit-compatible, but not exactly the same data type. PL/pgSQL functions have a weak spot here and insist on data types exactly as declared in the RETURNS clause. No attempt to cast, not even implicit casts, not to speak of assignment casts.
That should be improved. Tested with Postgres 9.3. Did not retest with 9.4, but I am pretty sure, nothing has changed in this area.
That's where this construct comes in as workaround:
SELECT (t::tbl).* FROM (SELECT ... FROM tbl) t;
By casting the whole row to the row type of the underlying table explicitly we force assignment casts to get original data types for every column.
This might fail for some aggregate functions. sum() returns numeric for a sum(bigint_column) to accommodate a sum overflowing the base data type. Casting back to bigint might fail ...
@Erwin Brandstetter, many thanks for the extensive answer. pg_stats does indeed provide a few things, but what I really need to draw a complete profile is a variety of things: min and max values, counts, counts of nulls, means, etc., so a bunch of queries have to be run for each column, some with GROUP BY and such.
Also, thanks for highlighting the importance of data types; I was sort of expecting this to throw a spanner in the works at some point. My main concern was with how to automate the query generation and its execution, the latter especially.
I have tried the function you provide (I probably will need to start learning some plpgsql), but get an error at the SELECT (t::tbl):
ERROR: type "tbl" does not exist
BTW, what is the (t::abc) notation referred to as? In Python this would be a list slice, but that's probably not the case in PL/pgSQL.
Using Postgres 9.3:
I am attempting to automatically populate a table when an insert is performed on another table. This seems like a good use for rules, but after adding the rule to the first table, I am no longer able to perform inserts into the second table using the writable CTE. Here is an example:
CREATE TABLE foo (
id INT PRIMARY KEY
);
CREATE TABLE bar (
id INT PRIMARY KEY REFERENCES foo
);
CREATE RULE insertFoo AS ON INSERT TO foo DO INSERT INTO bar VALUES (NEW.id);
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a
When this is run, I get the error "ERROR: WITH cannot be used in a query that is rewritten by rules into multiple queries".
I have searched for that error string, but am only able to find links to the source code. I know that I can perform the above using row-level triggers instead, but it seems like I should be able to do this at the statement level. Why can I not use the writable CTE, when queries like this can (in this case) be easily re-written as:
INSERT INTO foo SELECT * FROM (VALUES (1), (2)) a
Does anyone know of another way that would accomplish what I am attempting to do other than 1) using rules, which prevents the use of "with" queries, or 2) using row-level triggers? Thanks,
TL;DR: use triggers, not rules.
Generally speaking, prefer triggers over rules, unless rules are absolutely necessary. (Which, in practice, they never are.)
Using rules introduces heaps of problems which will needlessly complicate your life down the road. You've run into one here. Another (major) one is, for instance, that the number of affected rows will correspond to that of the very last query -- if you're relying on FOUND somewhere and your query is incorrectly reporting that no rows were affected by a query, you'll be in for painful bugs.
Moreover, there's occasional talk of deprecating Postgres rules outright:
http://postgresql.nabble.com/Deprecating-RULES-td5727689.html
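For the example in the question, the row-level trigger equivalent would look roughly like this (function and trigger names are made up):
CREATE FUNCTION foo_to_bar() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
INSERT INTO bar VALUES (NEW.id);
RETURN NEW;
END;
$$;
CREATE TRIGGER insert_foo
AFTER INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE foo_to_bar();
-- the writable CTE from the question now works unchanged
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a;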
Like the other answer, I definitely recommend using INSTEAD OF triggers rather than RULEs.
However, if for some reason you don't want to change existing VIEW RULEs and still want to use WITH, you can do so by wrapping the VIEW in a stored procedure:
create function insert_foo(int) returns void as $$
insert into foo values ($1)
$$ language sql;
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
SELECT insert_foo(a.column1) from a;
This could be useful when using some legacy db through some system that wraps statements with CTEs.
Is it possible to declare a variable within a View? For example:
Declare @SomeVar varchar(8) = 'something'
gives me the syntax error:
Incorrect syntax near the keyword 'Declare'.
You are correct. Local variables are not allowed in a VIEW.
You can set a local variable in a table valued function, which returns a result set (like a view does.)
http://msdn.microsoft.com/en-us/library/ms191165.aspx
e.g.
CREATE FUNCTION dbo.udf_foo()
RETURNS @ret TABLE (col INT)
AS
BEGIN
DECLARE @myvar INT;
SELECT @myvar = 1;
INSERT INTO @ret SELECT @myvar;
RETURN;
END;
GO
SELECT * FROM dbo.udf_foo();
GO
You could use WITH to define your expressions. Then do a simple Sub-SELECT to access those definitions.
CREATE VIEW MyView
AS
WITH MyVars (SomeVar, Var2)
AS (
SELECT
'something' AS 'SomeVar',
123 AS 'Var2'
)
SELECT *
FROM MyTable
WHERE x = (SELECT SomeVar FROM MyVars)
EDIT: I tried using a CTE in my previous answer, which was incorrect, as pointed out by @bummi. This option should work instead:
Here's one option using a CROSS APPLY, to kind of work around this problem:
SELECT st.Value, Constants.CONSTANT_ONE, Constants.CONSTANT_TWO
FROM SomeTable st
CROSS APPLY (
SELECT 'Value1' AS CONSTANT_ONE,
'Value2' AS CONSTANT_TWO
) Constants
@datenstation had the correct concept. Here is a working example that uses a CTE to cache the variables' names:
CREATE VIEW vwImportant_Users AS
WITH params AS (
SELECT
varType='%Admin%',
varMinStatus=1)
SELECT status, name
FROM sys.sysusers, params
WHERE status > varMinStatus OR name LIKE varType
SELECT * FROM vwImportant_Users
also via JOIN
WITH params AS ( SELECT varType='%Admin%', varMinStatus=1)
SELECT status, name
FROM sys.sysusers INNER JOIN params ON 1=1
WHERE status > varMinStatus OR name LIKE varType
also via CROSS APPLY
WITH params AS ( SELECT varType='%Admin%', varMinStatus=1)
SELECT status, name
FROM sys.sysusers CROSS APPLY params
WHERE status > varMinStatus OR name LIKE varType
Yes, this is correct: you can't have variables in views (there are other restrictions too).
Views can be used for cases where the result can be replaced with a select statement.
Using functions, as spencer7593 mentioned, is a correct approach for dynamic data. For static data, a more performant approach that is consistent with SQL data design (versus the anti-pattern of writing massive procedural code in sprocs) is to create a separate table with the static values and join to it. This is extremely beneficial from a performance perspective, since the SQL engine can build effective execution plans around a JOIN, and you have the potential to add indexes as well if needed.
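To make that concrete, here is a hedged sketch of the static-table approach, reusing names from the earlier answers (dbo.Constants is invented):
CREATE TABLE dbo.Constants (
SomeVar varchar(8) NOT NULL,
Var2 int NOT NULL
)
INSERT INTO dbo.Constants VALUES ('something', 123)
GO
CREATE VIEW MyView
AS
SELECT t.*
FROM MyTable t
INNER JOIN dbo.Constants c ON t.x = c.SomeVar -- the engine can plan and index this join
GO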
The disadvantage of using functions (or any inline calculated values) is that the callout happens for every potential row returned, which is costly. Why? Because SQL has to first create a full dataset with the calculated values and then apply the WHERE clause to that dataset.
Nine times out of ten you should not need dynamically calculated cell values in your queries. It's much better to figure out what you will need, then design a data model that supports it, populate that data model with semi-dynamic data (via batch jobs, for instance), and use the SQL engine to do the heavy lifting via standard SQL.
What I do is create a view that performs the same select as the table variable, and link that view into the second view. So a view can select from another view. This achieves the same result.
How often do you need to refresh the view? I have a similar case where the new data comes once a month; then I have to load it, and during the loading process I have to create new tables. At that moment I alter my view to consider the changes.
I used as base the information in this other question:
Create View Dynamically & synonyms
In there, it is proposed to do it in two ways:
Using synonyms.
Using dynamic SQL to create view (this is what helped me achieve my result).
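For what it's worth, a sketch of the dynamic-SQL variant I used (view and table names are assumptions; CREATE OR ALTER needs SQL Server 2016 SP1 or later):
DECLARE @newTable sysname = N'Sales_2024_06' -- the table created by this month's load
DECLARE @sql NVARCHAR(MAX) =
N'CREATE OR ALTER VIEW dbo.CurrentSales AS SELECT * FROM dbo.' + QUOTENAME(@newTable) + N';'
EXEC sp_executesql @sql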