I have an XML document saved in a DB2 table with the XML datatype, and I want to update the value of a node. I tried this:
XQUERY replace value of node db2-fn:sqlquery('select my_xml_column from myTable where someId = someValue')/some/xpath/with/@attribute with "foobar"
(I also tried several variants, everything that Google hinted might do the job.)
But unfortunately I just get error messages, such as:
SQL16002N An XQuery expression has an unexpected token "value" following "replace ". Expected tokens may include: "
What am I doing wrong?
update myTable SET myXmlColumn = XMLQUERY('
transform copy $copy := $original
modify do replace value of $copy/some/xpath/with/@attribute with "FOOBAR"
return $copy
'
PASSING myXmlColumn AS "original"
) WHERE someId = someValue
This works and has the desired effect. I had hoped for somebody to come up with a pure XQuery solution, but the problem is solved...
I'm trying to use query variables in Grafana; the panel's query source is PostgreSQL (for QuestDB).
I have added the variable without any issue, but I'm unable to use it in the panel query because the variable values contain spaces (SENSOR01 ON_OFF), and I also can't figure out how to add single-quote escaping.
Following are the scenarios I tried:
Scenario 1: this indicates that, due to the space in the variable value, on_off is treated as a separate word
where sensor_name = $sensor
db query error: pq: unexpected token: on_off
Scenario 2: tried adding single quotes explicitly around the variable value, but there is a generic error from the source DB (QuestDB)
where sensor_name = concat('''', $sensor, '''')
db query error: pq: dangling expression
When I tried the Scenario 2 approach directly in the variable's query, I got the same error.
Scenario 3: hard-coded the variable value with the space and with single quotes, but this gives me an error on the first part of the variable; it looks like the hard-coded single quotes are not passed through!
Is there any way/workaround to tackle this issue?
Could you just add the quotes directly in the query?
where sensor_name = '$sensor'
I have a similar Grafana panel querying a QuestDB database using a variable, and it works for me. This is my query:
select device_type, avg(duration_ms) as avg_duration_ms, avg(speed) as avg_speed, avg(measure1) as avg_m1, avg(measure2) as avg_m2 from ilp_test
WHERE
$__timeFilter(timestamp) and device_type = '$deviceType'
A rather hacky workaround would be to do:
where sensor_name = concat(cast(cast('&' as int) + 1 as char), $sensor, cast(cast('&' as int) + 1 as char))
This should work: cast('&' as int) is 38, adding 1 gives 39 (the character code for a single quote), and casting that back to char yields a literal quote on each side of the variable value. But I'm pretty sure there is a better solution. Let me find it and get back to you.
Update: we may support Postgres syntax (which is '' escaping for a single quote char) in one of the upcoming versions. For now, you'd have to use the above workaround.
I am trying to convert the following SQL Server query to Postgresql
select CAST(CAST('<IncludeSettle/><StartTime value="2019-03-26 08:45:48.780"></StartTime>' as XML).value('(//StartTime/@value)[1]', 'datetime') AS varchar(40)) + ''')';
I tried converting it to the query below and got the following error.
select unnest(xpath('//StartTime/@value', xmlparse(document '<IncludeSettle/><StartTime value="2019-03-26 08:45:48.780"></StartTime>')))
Error:
ERROR: invalid XML document
DETAIL: line 1: Extra content at the end of the document
<IncludeSettle/><StartTime value="2010-03-26 08:45:48.780"></StartTim
^
SQL state: 2200M
As a hack, I made the following change to get it to work.
select unnest(xpath('//StartTime/@value', xmlparse(document '<tempzz>'||'<IncludeSettle/><StartTime value="2019-03-26 08:45:48.780"></StartTime>'||'</tempzz>')))
Output for Postgresql:
2019-03-26 08:45:48.780
I am looking for a better solution. Any help really appreciated.
You can process that by adding a dummy root element, as you did. The value is already formatted as an ISO timestamp, so you can simply cast it to a timestamp.
But as there is no direct cast from xml to timestamp, you need to cast the result of xpath() to text first:
with data (input) as (
values ('<IncludeSettle/><StartTime value="2019-03-26 08:45:48.780"></StartTime>')
)
select (xpath('//StartTime/@value', ('<dummy_root>'||input||'</dummy_root>')::xml))[1]::text::timestamp
from data
In PowerShell, I am doing a SQL query for a single row of data. Let's call it $data for example.
The response from the query is a System.Data.DataSet type. Within it, there is a Tables property that has the data I need.
$data.Tables
ServerName : Server15
SamAccount : Admin-Server15
LastPWDReset : 1/15/2019 12:00:00 AM
LastPWDResetAttempt :
I don't intend to write this data back out anywhere. Instead, I want to display it and convert the empty "LastPWDResetAttempt" to "NONE" where it is blank.
I thought it would be done like this:
$data.Tables.lastPWDResetAttempt = "None"
but that gives me an error: "The property 'lastPWDResetAttempt' cannot be found on this object. Verify that the property exists and can be set."
I can't help but think I am missing some conversion from "Dataset" to "String".
I've tried ToString(), but in doing so I ended up with just a string of data and not the headings; nothing I could update or easily use to build my eventual table.
My work around:
$webdata = "" | select ServerName,SamAccount,LastPWDReset,LastPWDResetAttempt
$webdata.ServerName = $data.tables.servername
$webdata.SamAccount = $data.tables.samaccount
$webdata.LastPWDReset = $data.tables.LastPWDReset
$webdata.LastPWDResetAttempt = $data.tables.LastPWDResetAttempt
$webdata.LastPWDResetAttempt = "Never"
works. I just can't believe there isn't an easier way, nor do I understand why I can view the data but then not be able to set it.
I think it is because Tables is an array and does not have the property LastPWDResetAttempt.
You can try the following instead:
$data.Tables[0].Rows[0].LastPWDResetAttempt = "None"
Also, I think your workaround, though it contains more lines of code, is actually the better solution.
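If the goal is mainly display, another option is to copy the row into a custom object and substitute "None" only when the value is blank. This is just a sketch, assuming PowerShell 3.0 or later (for [PSCustomObject]) and the column names shown above:
$row = $data.Tables[0].Rows[0]
# DBNull and empty strings both count as blank once cast to [string]
$lastAttempt = if ([string]::IsNullOrWhiteSpace([string]$row.LastPWDResetAttempt)) { 'None' } else { $row.LastPWDResetAttempt }
$display = [PSCustomObject]@{
    ServerName          = $row.ServerName
    SamAccount          = $row.SamAccount
    LastPWDReset        = $row.LastPWDReset
    LastPWDResetAttempt = $lastAttempt
}
$display | Format-List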
This should be a simple thing, but I've spent hours on it to no avail. Basically, I need to look up a salesrep # in a SQL database using the user's Windows user id. The format of the user id is
"Norstar\kjones" and I need the "kjones" portion of it.
Using the split function, I am able to pull out just the 'kjones' part:
split(User!UserID,"\").GetValue(1)
I've created a parameter called SlsmnNum and created a dataset to be used to look up the salesrep # using the user id (the slsm_num field is a varchar, not an integer):
select slsm_num from Salesman_Msid where slsm_msid = ''' + split(User!UserID,"\").GetValue(1) + '''
However, I get no results. How can I get the select to work?
Alternatively, I tried the following:
in parameter SlsmnNum, I set the default to an expression using:
split(User!UserID,"\").GetValue(1) and this returns 'kjones', as expected.
I created a SECOND parameter (positioned BELOW the SlsmnNum parameter), SlsmnNum2, whose default (and available) values come from a query: a dataset containing the following select statement:
select slsm_num from Salesman_Msid where slsm_msid = (@SlsmnNum)
When I run the query on the Data tab and type 'kjones' into the parameter box, it returns '1366', the salesrep # I'm expecting.
But when I preview the report, all I get in the SlsmnNum2 box is "Select a Value" and nothing is there (it should return '1366').
Any help would be greatly appreciated!
Try your first approach, but set the dataset's query text to an expression (the leading = makes SSRS evaluate it, so the split result is concatenated into the SQL string with quotes around it):
="select slsm_num from Salesman_Msid where slsm_msid = '" & split(User!UserID,"\").GetValue(1) & "'"
I am learning Python (I have a C/C++ background).
I need to write something practical in Python though, whilst learning. I have the following pseudocode (my first attempt at writing a Python script, since reading about Python yesterday). Hopefully, the snippet details the logic of what I want to do. BTW I am using python 2.6 on Ubuntu Karmic.
Assume the script is invoked as: script_name.py directory_path
import csv, sys, os, glob
# Can I declare that the function accepts a dictionary as first arg?
def getItemValue(item, key, defval)
    return !item.haskey(key) ? defval : item[key]
dirname = sys.argv[1]
# declare some default values here
weight, is_male, default_city_id = 100, true, 1
# fetch some data from a database table into a nested dictionary, indexed by a string
curr_dict = load_dict_from_db('foo')
#iterate through all the files matching *.csv in the specified folder
for infile in glob.glob( os.path.join(dirname, '*.csv') ):
    #get the file name (without the '.csv' extension)
    code = infile[0:-4]
    # open file, and iterate through the rows of the current file (a CSV file)
    f = open(infile, 'rt')
    try:
        reader = csv.reader(f)
        for row in reader:
            #lookup the id for the code in the dictionary
            id = curr_dict[code]['id']
            name = row['name']
            address1 = row['address1']
            address2 = row['address2']
            city_id = getItemValue(row, 'city_id', default_city_id)
            # insert row to database table
    finally:
        f.close()
I have the following questions:
Is the code written in a Pythonic enough way (is there a better way of implementing it)?
Given a table with a schema like the one shown below, how can I write a Python function that fetches data from the table and returns it in a dictionary indexed by string (name)?
How can I insert the row data into the table? (Ideally I would like to use a transaction and commit just before the file is closed.)
Table schema:
create table demo (id int, name varchar(32), weight float, city_id int);
BTW, my backend database is postgreSQL
[Edit]
Wayne et al:
To clarify, what I want is a set of rows. Each row can be indexed by a key (so that means the rows container is a dictionary, right?). Now, once we have retrieved a row by its key, I also want to be able to access the 'columns' in the row, meaning that the row data itself is also a dictionary. I don't know if Python supports multidimensional array syntax when dealing with dictionaries, but the following statement will help explain how I intend to use the data returned from the db conceptually. A statement like dataset['joe']['weight'] will first fetch the row data indexed by the key 'joe' (which is a dictionary) and then index that dictionary with the key 'weight'. I want to know how to build such a dictionary of dictionaries from the retrieved data in a Pythonic way, like you did before.
A simplistic way would be to write something like:
import pyodbc
mydict = {}
cnxn = pyodbc.connect(params)
cursor = cnxn.cursor()
cursor.execute("select user_id, user_name from users")
for row in cursor:
    mydict[row.user_id] = row
Is this correct/can it be written in a more pythonic way?
To get a value from a dictionary with a default, you can use the .get method of the dict:
>>> d = {1: 2}
>>> d.get(1, 3)
2
>>> d.get(5, 3)
3
This will remove the need for the getItemValue function. I won't comment on the existing syntax since it's clearly alien to Python. The correct syntax for the ternary in Python is:
true_val if true_false_check else false_val
>>> 'a' if False else 'b'
'b'
But as I'm saying below, you don't need it at all.
If you're using Python 2.6 or later, you should use the with statement instead of try-finally:
with open(infile) as f:
    reader = csv.reader(f)
    ... etc
Seeing that you want to have row as a dictionary, you should be using csv.DictReader and not a plain csv.reader. That also makes the separate local variables unnecessary in your case: your SQL query could just be constructed to access the fields of the row dict directly, so you wouldn't need to create separate items for city_id, name, etc. To add a default city_id to the row when it doesn't exist, you can use the .setdefault method (a combined sketch follows the examples below):
>>> d
{1: 2}
>>> d.setdefault(1, 3)
2
>>> d
{1: 2}
>>> d.setdefault(3, 3)
3
>>> d
{1: 2, 3: 3}
and for id, simply row['id'] = curr_dict[code]['id']
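A minimal sketch of the DictReader plus setdefault combination, assuming your CSV files have a header row with the column names used in your snippet:
with open(infile) as f:
    reader = csv.DictReader(f)
    for row in reader:
        # row is already a dict keyed by the header names
        row.setdefault('city_id', default_city_id)
        row['id'] = curr_dict[code]['id']
        # row can now be passed straight to the insert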
When slicing, you could skip 0:
>>> 'abc.txt'[:-4]
'abc'
Generally, Python database libraries provide fetchone, fetchmany and fetchall methods on the cursor, which return row objects that may support dict-like access or may be plain tuples; it depends on the particular module you're using.
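For the dict-of-dicts access you described (dataset['joe']['weight']), a rough sketch using psycopg2 for your PostgreSQL backend (the connection parameters are placeholders) might look like:
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder connection parameters
cur = conn.cursor()
cur.execute("select id, name, weight, city_id from demo")
dataset = {}
for id_, name, weight, city_id in cur.fetchall():
    # outer dict keyed by name; each row stored as an inner dict keyed by column
    dataset[name] = {'id': id_, 'weight': weight, 'city_id': city_id}
print dataset['joe']['weight']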
It looks mostly Pythonic enough for me.
The ternary operation should look like this though (I think this will return the result you expect):
return defval if key not in item else item[key]
Yeah, you can pass a dictionary (or any other value) as basically any parameter. The only restriction is if you use *args and **kwargs (named by convention; technically you can use any names you want), which must come last, as the final one or two parameters.
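For instance, a tiny sketch (hypothetical function and names) showing a dict as the first parameter, with the catch-all parameters declared last:
def describe(settings, *args, **kwargs):
    # settings can be a dict (or anything else); *args and **kwargs must come last
    print settings, args, kwargs

describe({'weight': 100}, 1, 2, city_id=1)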
For inserting into a DB you can use the odbc module:
import odbc
conn = odbc.odbc('servernamehere')
cursor = conn.cursor()
cursor.execute("INSERT INTO mytable VALUES (42, 'Spam on Eggs', 'Spam on Wheat')")
conn.commit()
You can read up or find plenty of examples on the odbc module - I'm sure there are other modules as well, but that one should work fine for you.
For retrieval you would use
cursor.execute("SELECT * FROM demo")
#Reads one record - returns a tuple
print cursor.fetchone()
#Reads the rest of the records - a list of tuples
print cursor.fetchall()
to make one of those records into a dictionary:
record = cursor.fetchone()
# Use the 2nd element (at index 1, the name) as the key and store the rest of the record as the value
mydict[record[1]] = record[:1] + record[2:]
Though that practically screams for a generator expression if you want the whole shebang at once:
mydict = dict((record[1], record[:1] + record[2:]) for record in cursor.fetchall())
which should give you all of the records packed up neatly in a dictionary, using the name as a key.
HTH
a colon is required after def:
def getItemValue(item, key, defval):
...
boolean operators: in Python, ! -> not, && -> and, and || -> or (see http://docs.python.org/release/2.5.2/lib/boolean.html for boolean operators). There's no ?: operator in Python; there is a (x) if (condition) else (y) expression, although I personally rarely use it in favour of plain ifs.
booleans/None: True, False and None are written with initial capitals.
checking types of arguments: In python, you generally don't declare types of function parameters. You could go e.g. assert isinstance(item, dict), "dicts must be passed as the first parameter!" in the function although this kind of "strict checking" is often discouraged as it's not always necessary in python.
python keywords: default isn't a reserved Python keyword and is acceptable as an argument or variable name (just for reference).
style guidelines: PEP 8 (the Python style guideline) states that module imports should generally be one per line, though there are some exceptions (I have to admit I often don't put import sys and import os on separate lines, though I usually follow it otherwise).
file open modes: rt isn't documented in Python 2.x; it will work, though the t will be ignored. See also http://docs.python.org/tutorial/inputoutput.html#reading-and-writing-files. It is valid in Python 3 though, so I don't think it'd hurt if you want to make the text-mode intent explicit (use rb if you need to read the file as raw binary data).
working with dictionaries: Python used to use dict.has_key(key) but you should use key in dict now (which has largely replaced it, see http://docs.python.org/library/stdtypes.html#mapping-types-dict.)
split file extensions: code = infile[0:-4] could be replaced with code = os.path.splitext(infile)[0] (os.path.splitext returns e.g. ('root', '.ext'), with the dot kept in the extension; see http://docs.python.org/library/os.path.html#os.path.splitext).
EDIT: removed the part about multiple variable declarations on a single line and added some formatting. Also corrected the claim that rt isn't a valid mode in Python; in Python 3 it is.
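Putting a few of those points together, a minimal corrected sketch of the helper from the question (purely illustrative; as noted in the other answer, dict.get makes it unnecessary):
def getItemValue(item, key, defval):
    # 'key in item' replaces has_key; there is no ?: operator, so use a conditional expression
    return item[key] if key in item else defval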