Different results with cfstoredproc and cfquery - tsql

When I execute a stored proc via cfstoredproc, I get a different result than when calling that stored proc via cfquery. I am passing exactly the same parameter values to each call. Also, when I run the stored proc in SQL Server Management Studio, I get the correct results (the same as the cfquery).
Here's the cfstoredproc call
<cfstoredproc datasource="#request.mainDSN#" debug="#request.debug#" procedure="rankingresults">
<cfprocparam type="in" value="8652" CFSQLType="CF_SQL_INTEGER">
<cfprocparam type="in" value="50" CFSQLType="CF_SQL_INTEGER">
<cfprocparam type="in" value="53" CFSQLType="CF_SQL_INTEGER">
<cfprocresult name="local.listing">
</cfstoredproc>
Here is the cfquery call
<cfquery datasource="#request.mainDSN#" name="rankings">
EXEC rankingresults
@CityZipId = 8652,
@distance = 50,
@sic = 53
</cfquery>
The results are completely different. It's not even close. I've been banging my head against this for several hours, and I can't figure out why it is doing what it is doing.
UPDATE
The stored proc is massive (and one that I inherited), so I'm not going to paste it all here: http://pastebin.com/EtufPWXf

(From the comments)
Looks like it does have optional parameters. So your cfstoredproc call may not be passing in the values you think it is. Based on the order, it looks like it is actually passing in values for: @CityZipID, @Sic, @lastRank. As Dan mentioned (and I hinted at), cfstoredproc uses positional notation for parameters (the dbVarName attribute is deprecated). You need to supply all of the parameter values in the correct order.
Update:
FWIW, if you create a shell procedure, you would see that the cfstoredproc and cfquery calls are actually invoking the procedure with different parameters/values. (See below.)
You would definitely see a difference in results if you invoked the procedure without the named parameters, as @Dan suggested, i.e. exec rankingresults 8652, 50, 53. (I know you said there was "no change", but there was probably just an error in your test.)
CFSTOREDPROC
@ATTRCODES | @CITYZIPID | @DISTANCE | @HASURL   | @ISFEATURED | @LASTRANK | @PHOTOCOUNT | @REVIEWCOUNT | @SIC | @SICBUDGETIDS
(nothing)  | 8652       | (nothing) | (nothing) | (nothing)   | 53        | (nothing)   | (nothing)    | 50   | (nothing)
CFQUERY
@ATTRCODES | @CITYZIPID | @DISTANCE | @HASURL   | @ISFEATURED | @LASTRANK | @PHOTOCOUNT | @REVIEWCOUNT | @SIC | @SICBUDGETIDS
(nothing)  | 8652       | 50        | (nothing) | (nothing)   | 0         | (nothing)   | (nothing)    | 53   | (nothing)
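Given those defaults, here is a hedged sketch of a workaround, reusing the question's values (cfqueryparam added for safety); naming each T-SQL parameter sidesteps the positional-ordering problem, and any parameter not named keeps its procedure default:
<cfquery datasource="#request.mainDSN#" name="rankings">
    EXEC rankingresults
        @CityZipId = <cfqueryparam value="8652" cfsqltype="CF_SQL_INTEGER">,
        @distance  = <cfqueryparam value="50" cfsqltype="CF_SQL_INTEGER">,
        @sic       = <cfqueryparam value="53" cfsqltype="CF_SQL_INTEGER">
</cfquery>
Alternatively, keep cfstoredproc but supply a cfprocparam for every parameter, in the exact order they appear in the procedure definition.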

If you run this directly on the SQL Server, how many result sets does it return? The procedure could be returning multiple result sets, which would explain the difference in behavior.

Related

Splunk: How to get two searches in one timechart/graph?

I have two queries which look like this:
source="/log/ABCD/cABCDXYZ/xyz.log" doSomeTasks| timechart partial=f span=1h count as "#XYZ doSomeTasks" | fillnull
source="/log/ABCD/cABCDXYZ/xyz.log" doOtherTasks| timechart partial=f span=1h count as "#XYZ doOtherTasks" | fillnull
I now want to get these two searches into one graph (I do not want to sum the numbers I get per search into one value).
I saw that there is the possibility to use appendcols, but my attempts to use this command were unsuccessful.
I tried this, but it did not work:
source="/log/ABCD/cABCDXYZ/xyz.log" doSomeTasks|timechart partial=f span=1h count as "#XYZ doSomeTasks" appendcols [doOtherTasks| timechart partial=f span=1h count as "#XYZ doOtherTasks" | fillnull]
Thanks to PM 77-1, the issue is solved.
This command works:
source="/log/ABCD/cABCDXYZ/xyz.log" doSomeTasks|timechart partial=f span=1h count as "#XYZ doSomeTasks" | appendcols[search source="/log/ABCD/cABCDXYZ/xyz.log" doOtherTasks| timechart partial=f span=1h count as "#XYZ doOtherTasks" | fillnull]
Note: You do not have to mention the source in the second search command if it is the same source as the first one.
General solution
Generate each data column by using a subsearch query in the following form:
|appendcols[search (myquery) |timechart count]
Additional steps
The list of one or more query columns needs to be preceded by a generated column which establishes the timechart rows (and gives appendcols something to append to):
|makeresults |timechart count |eval count=0
Note: It isn't strictly required to start with a generated column, but I've found this to be a clean and robust approach. Notably, it avoids problems that may occur in the special case of "No results found", which can otherwise confuse the visualization rendering. Plus it's more uniform and, as a result, easier to work with.
Finally, specify each of the fields to be charted, with _time as the x-axis:
|fields _time, myvar1, myvar2, myvar3
Complete example
|makeresults |timechart span=5m count |eval count=0
|appendcols[search (myquery1) |timechart span=5m count as myvar1]
|appendcols[search (myquery2) |timechart span=5m count as myvar2]
|appendcols[search (myquery3) |timechart span=5m count as myvar3]
|fields _time, myvar1, myvar2, myvar3
Be careful to use the same span throughout.
Other hints
When comparing disparate data on the same chart, perhaps to evaluate their relative timing, it's common to have differences in type or scale that can render the overlaid result nearly useless. For cases like this, don't neglect the 'Log' format option for the Y-Axis.
In some cases, it may even be worthwhile to employ data hacks with eval to massage the values into a visually comparable state. For example, appending |eval myvar1=if(myvar1=0,0,1) flattens every non-zero count to 1 when used following timechart count. Here are some relevant docs:
Mathematical functions
Comparison and Conditional functions

How to insert similar value into multiple locations of a psycopg2 query statement using dict? [duplicate]

I have a Python script that runs a pgSQL file through SQLAlchemy's connection.execute function. Here's the block of code in Python:
results = pg_conn.execute(sql_cmd, beg_date = datetime.date(2015,4,1), end_date = datetime.date(2015,4,30))
And here's one of the areas where the variable gets inputted in my SQL:
WHERE
( dv.date >= %(beg_date)s AND
dv.date <= %(end_date)s)
When I run this, I get a cryptic Python error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) argument formats can't be mixed
…followed by a huge dump of the offending SQL query. I've run this exact code with the same variable convention before. Why isn't it working this time?
I encountered a similar issue as Nikhil. I have a query with LIKE clauses which worked until I modified it to include a bind variable, at which point I received the following error:
DatabaseError: Execution failed on sql '...': argument formats can't be mixed
The solution is not to give up on the LIKE clause; it would be pretty crazy if psycopg2 simply didn't permit LIKE clauses. Rather, we can escape the literal % with %%. For example, the following query:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%';
would need to be modified to:
SELECT *
FROM people
WHERE start_date > %(beg_date)s
AND name LIKE 'John%%';
More details in the psycopg2 docs: http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
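To illustrate, here's a minimal runnable sketch using psycopg2 directly (the connection string and people table are hypothetical):
import datetime
import psycopg2

# Hypothetical connection; any psycopg2 connection works the same way.
conn = psycopg2.connect("dbname=mydb user=me")

# The doubled %% survives parameter interpolation as a literal % wildcard,
# so it can coexist with the %(beg_date)s placeholder in the same statement.
query = """
    SELECT *
    FROM people
    WHERE start_date > %(beg_date)s
      AND name LIKE 'John%%';
"""

with conn.cursor() as cur:
    cur.execute(query, {"beg_date": datetime.date(2015, 4, 1)})
    rows = cur.fetchall()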
As it turned out, I had used a SQL LIKE operator in the new SQL query, and its % wildcard was colliding with the driver's parameter escaping. For instance:
dv.device LIKE 'iPhone%' or
dv.device LIKE '%Phone'
Another answer offered a way to un-escape and re-escape, which I felt would add unnecessary complexity to otherwise simple code. Instead, I used pgSQL's ability to handle regex to modify the SQL query itself. This changed the above portion of the query to:
dv.device ~ E'iPhone.*' or
dv.device ~ E'.*Phone$'
So for others: you may need to change your LIKE operators to regex '~' to get it to work. Just remember that it'll be WAY slower for large queries. (More info here.)
For me, it turned out I had a % in a SQL comment:
/* Any future change in the testing size will not require
a change here... even if we do a 100% test
*/
This works fine:
/* Any future change in the testing size will not require
a change here... even if we do a 100pct test
*/

Use consistency level in Phantom-dsl and Cassandra

Currently using --
cqlsh> show version
[cqlsh 4.1.1 | Cassandra 2.0.17 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Using phantom-dsl 1.12.2, Scala 2.10.
I can't figure out how to set consistency levels on queries.
There are predefined functions insert(), select() as part of CassandraTable. How can I pass the consistency level to them?
insert.value(....).consistencyLevel_=(ConsistencyLevel.QUORUM)
does not work and fails with an error (probably because this appends "USING CONSISTENCY QUORUM" to the end of the query). Here's the actual exception I get:
com.datastax.driver.core.exceptions.SyntaxError: line 1:424 no viable alternative at input 'CONSISTENCY'
at com.datastax.driver.core.Responses$Error.asException(Responses.java:122) ~[cassandra-driver-core-2.2.0-rc3.jar:na]
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:120) ~[cassandra-driver-core-2.2.0-rc3.jar:na]
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186) ~[cassandra-driver-core-2.2.0-rc3.jar:na]
at com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:45) ~[cassandra-driver-core-2.2.0-rc3.jar:na]
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:754) ~[cassandra-driver-core-2.2.0-rc3.jar:na]
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:576) ~[cassandra-driver-core-2.2.0-rc3.jar:na]
I see from the documentation and the discussion on this pull request that I could do a setConsistencyLevel(ConsistencyLevel.QUORUM) on a SimpleStatement, but I would prefer not to rewrite all the different insert statements.
UPDATE
Just to close the loop on this issue. I worked around this by creating a custom InsertQuery and then using that instead of the one provided by final def insert in CassandraTable
def qinsert()(implicit keySpace: KeySpace) = {
  val table = this.asInstanceOf[T]
  new InsertQuery[T, M, Unspecified](table,
    CQLQuery("INSERT into keyspace.tablename"),
    consistencyLevel = ConsistencyLevel.QUORUM)
}
First of all, there is no setValue method inside phantom, and the API method you are using is missing an = at the end.
The correct structure is:
Table.insert
.value(_.name, "test")
.consistencyLevel_=(ConsistencyLevel.Quorum)
As you are on Stack Overflow, an error stack trace and specific details of what doesn't work are generally preferable to "does not work".
I have finally figured out how to properly set the consistency level using phantom-dsl.
Using a statement you can do the following:
statement.setConsistencyLevel(ConsistencyLevel.QUORUM)
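For example, a minimal sketch with the plain Datastax Java driver (the contact point, keyspace, and table names are hypothetical):
import com.datastax.driver.core.{Cluster, ConsistencyLevel, SimpleStatement}

// Hypothetical cluster and keyspace; any connected Session works the same way.
val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
val session = cluster.connect("my_keyspace")

// Consistency is set per statement, before executing it.
val stmt = new SimpleStatement("INSERT INTO users (id, name) VALUES (1, 'test')")
stmt.setConsistencyLevel(ConsistencyLevel.QUORUM)
session.execute(stmt)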
Also, take a look at the test project I've been working on to help folks like you with examples using phantom-dsl:
https://github.com/iamthiago/cassandra-phantom

SQL query with XML parameter

EDIT: I have already found a relevant answer on Stack Overflow here:
XQuery [value()]: 'value()' requires a singleton (or empty sequence), found operand of type 'xdt:untypedAtomic *'
I have not dealt with XML in T-SQL before. I am modifying an existing legacy stored proc, picking most of it up through trial and error.
However, I have hit a problem where trial and error is proving fruitless and very slow. I think it's time to appeal to the Stack Overflow gurus!
Here is some XML
<?xml version=\"1.0\"?>
<Notification xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">
<NotificationId>0</NotificationId>
<UserNotifications>
<UserNotification>
<UserNotificationId>0</UserNotificationId>
<NotificationId>0</NotificationId>
<UserId>13514</UserId>
<MessageTypeId>1</MessageTypeId>
</UserNotification>
<UserNotification>
<UserNotificationId>0</UserNotificationId>
<NotificationId>0</NotificationId>
<UserId>13514</UserId>
<MessageTypeId>2</MessageTypeId>
</UserNotification>
</UserNotifications>
</Notification>
The Stored Proc in question accepts the above XML as a parameter:
CREATE PROCEDURE [dbo].[Notification_Insert]
@ParametersXml XML
AS
BEGIN
The XML contains child "UserNotification" elements. I would like to select the UserId, MessageTypeId of each UserNotification, into a table like this
UserId | MessageTypeId
13514 | 1
13514 | 2
Obviously the size of the collection is not fixed.
My current attempt (which doesn't work) is along these lines:
DECLARE @UserDetails TABLE ( UserId INT, MessageTypeId INT);
INSERT INTO @UserDetails (UserId, MessageTypeId)
SELECT Tab.Col.value('@UserId','INT'),
       Tab.Col.value('@MessageTypeId','INT')
FROM @ParametersXml.nodes('/Notification/UserNotifications[not(@xsi:nil = "true")][1]/UserNotification') AS Tab(Col)
But this never inserts anything.
I have been playing around with this for a while now and have not had any joy :(
I would suggest going through the links below. I found them short and quick to go through:
http://blog.sqlauthority.com/2009/02/12/sql-server-simple-example-of-creating-xml-file-using-t-sql/
http://blog.sqlauthority.com/2009/02/13/sql-server-simple-example-of-reading-xml-file-using-t-sql/
I found the solution to this problem through further searching stack overflow.
The query I need (thanks to XQuery [value()]: 'value()' requires a singleton (or empty sequence), found operand of type 'xdt:untypedAtomic *')
INSERT INTO @UserDetails (UserId, MessageTypeId)
SELECT UserNotification.value('UserId[1]','INT'),
       UserNotification.value('MessageTypeId[1]','INT')
FROM @ParametersXml.nodes('//Notification/UserNotifications') AS x(Coll)
CROSS APPLY @ParametersXml.nodes('//Notification/UserNotifications/UserNotification') AS un(UserNotification)
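For anyone wanting to verify this quickly, here's a self-contained sketch using the sample XML from the question (trimmed of the namespace declarations, which the elements don't use); it takes a single nodes() path, a slightly simpler form of the same query:
DECLARE @ParametersXml XML = N'<Notification>
    <NotificationId>0</NotificationId>
    <UserNotifications>
        <UserNotification>
            <UserId>13514</UserId>
            <MessageTypeId>1</MessageTypeId>
        </UserNotification>
        <UserNotification>
            <UserId>13514</UserId>
            <MessageTypeId>2</MessageTypeId>
        </UserNotification>
    </UserNotifications>
</Notification>';

DECLARE @UserDetails TABLE (UserId INT, MessageTypeId INT);

-- Each UserNotification element becomes one row
INSERT INTO @UserDetails (UserId, MessageTypeId)
SELECT un.UserNotification.value('UserId[1]', 'INT'),
       un.UserNotification.value('MessageTypeId[1]', 'INT')
FROM @ParametersXml.nodes('/Notification/UserNotifications/UserNotification') AS un(UserNotification);

SELECT * FROM @UserDetails;  -- expect (13514, 1) and (13514, 2)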

How to get list of available functions and their parameters in KDB/Q?

How would I go about getting a list of available functions and their parameters in a given namespace?
http://code.kx.com/q/ref/syscmds/#f-functions
\f .
\f .namespace
For functions, you will have to check parameters individually; just giving the name of the function
.n.function
will give you not only the parameters but the whole function definition.
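For example, with a hypothetical namespace .ns:
q).ns.f:{x+y}
q)\f .ns
,`f
q).ns.f
{x+y}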
This can surely be improved upon, but I thought I'd share it as a quick way to get the ball rolling. This will retrieve every global user-defined function in every namespace and create a dictionary of namespaces to functions to parameters.
q)getparams:{k!{n[w]!#'[;1] value each f w:where 100h=type each f:get each ".",/:"." sv/:string x,/:n:y x}[;m] each key m:k!system each "f .",/:string k:key `}
q)f1:{x+y+z}
q).n1.f2:{x*x}
q).n1.a:2
q).n2.f3:{y+y}
q)show r:getparams[]
q | `aj`aj0`asc`asof`avgs`cols`cor`cov`cross`cut`desc`dev`each`ej`except`fby`..
Q | `Cf`IN`L`S`V`addmonths`bv`chk`cn`d0`dd`def`dpft`dpt`dsftg`dt`en`f`fc`ff`f..
h | `cd`code`data`eb`ec`ed`edsn`es`fram`ha`hb`hc`hn`hr`ht`hta`htac`htc`html`h..
n1| (,`f2)!,,`x
n2| (,`f3)!,`x`y
q)r[`n1;`f2]
,`x
[EDIT] The original function was wrong. It missed the global namespace (`) and didn't capture compositions or functions defined with an adverb. The version below corrects this, but seems overly convoluted. I'll still leave it here, though, in case anyone wants to post a nicer solution (so that I too can learn from it):
getparams:{k!{n[w][w2]!#'[;1] v w2:where 0h=type each v:value/[{type[x] in y}[;t]; ] each f:f w:where in[ ;(t:"h"$100,105+til 7)] type each f:get each `$".",/:"." sv/:string x,/:n:y x}[;m] each key m:k!system each "f .",/:string k:`,key `}
In addition to Naveen's answer, you can call value functionName, which will give you a list of items, e.g. the parameter names and the compiled byte code.
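A short sketch (hypothetical function; item 1 of the list returned by value is the parameter names):
q)f:{x+y+1}
q)(value f)1
`x`y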