I have three arguments, --a, --b and --c, and I want my command to accept at least one of them; all combinations of a/b/c are also valid, e.g.:
command.py --a
command.py --a --b
command.py --a --b --c
...
but not without arguments:
command.py
Thanks!
I want my command to accept at least one of them
You can do the following:
>>> from docopt import docopt
>>> u = '''usage: command.py --a [--b --c]
... command.py --b [--a --c]
... command.py --c [--a --b]'''
>>> docopt(u, ['--a'])
{'--a': True,
'--b': False,
'--c': False}
>>> docopt(u, ['--b'])
{'--a': False,
'--b': True,
'--c': False}
>>> docopt(u, ['--c'])
{'--a': False,
'--b': False,
'--c': True}
>>> docopt(u, [])
usage: command.py --a [--b --c]
command.py --b [--a --c]
command.py --c [--a --b]
Although this might not be the most user-friendly command-line interface. Maybe you could explain your interface in more detail, and I can advise you on how to implement it (possibly not only with options, but also with commands and positional arguments).
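For reference, the same usage string can be dropped into a script's docstring and fed to docopt(__doc__) (a minimal sketch; the script name is assumed to be command.py):
"""usage: command.py --a [--b --c]
          command.py --b [--a --c]
          command.py --c [--a --b]
"""
from docopt import docopt

if __name__ == '__main__':
    # docopt parses sys.argv against the usage patterns above and prints the
    # usage message (and exits) when no pattern matches, e.g. when run with no options
    args = docopt(__doc__)
    print(args)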
When I encounter a panic error in a select query, how do I narrow down which expression(s) are the offending ones?
Edit:
Ideally simply by looking at the logs or setting a parameter, without having to rewrite the code into a sequential for loop or binary search.
pl.select([
    expr1, expr2, expr3, ...
])
Where you have:
pl.select([
    expr1, expr2, expr3, ...
])
isolate the list:
exprlist = [expr1, expr2, expr3, ...]
Then do something like:
for i, expr in enumerate(exprlist):
    try:
        pl.select(expr)
    except BaseException as exc:  # broad catch: a panic may not be a plain Exception
        print(f"expression {i} is bad: {exc}")
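For a concrete illustration, here is a self-contained sketch of the same loop against a small DataFrame (the column names and expressions are made up, and it uses df.select rather than pl.select because the expressions reference frame columns; the middle expression fails on purpose):
import polars as pl

df = pl.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

exprlist = [
    pl.col("a") * 2,             # fine
    pl.col("b").cast(pl.Int64),  # fails: "x" cannot be cast to Int64
    pl.col("a").sum(),           # fine
]

for i, expr in enumerate(exprlist):
    try:
        df.select(expr)
    except BaseException as exc:  # broad catch: a panic may not be a plain Exception
        print(f"expression {i} failed: {exc}")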
I tried a whole range of patterns with the to_char() function but cannot find the right one.
to_char(price, '99999990D00')
I have two test numbers 0 and 399326, I want 0 to become '0.00' and 399326 to become '399326.00'.
I found out that I need to add a '9' to my pattern for every digit I expect, which is my first concern. When I supply '999999990D99' I get an error message; I suppose the pattern is too long, but this limits my numbers and will be a problem. Supplying '9990D99' as a pattern for 399326 results in '####.'.
Second, I cannot find out how to get the two trailing zeros behind the large number, though it works for my 0 test number. I tried '999999990D99', '999999990D09' and '999999990D00', but none of them seem to work.
UPDATE
Laurenz Albe's solution works with integers; look at my two examples below:
SELECT
    to_char(0, '99999999990D00FM'),
    to_char(1, '99999999990D00FM'),
    to_char(11, '99999999990D00FM'),
    to_char(111, '99999999990D00FM'),
    to_char(1111, '99999999990D00FM'),
    to_char(11111, '99999999990D00FM'),
    to_char(111111, '99999999990D00FM'),
    to_char(1111111, '99999999990D00FM'),
    to_char(11111111, '99999999990D00FM')
WHERE 1=1
outputs:
"0.00"; "1.00"; "11.00"; "111.00"; "1111.00"; "11111.00"; "111111.00"; "1111111.00"; "11111111.00"
As expected.
SELECT
    to_char(0::real, '99999999990D00FM'),
    to_char(1::real, '99999999990D00FM'),
    to_char(11::real, '99999999990D00FM'),
    to_char(111::real, '99999999990D00FM'),
    to_char(1111::real, '99999999990D00FM'),
    to_char(11111::real, '99999999990D00FM'),
    to_char(111111::real, '99999999990D00FM'),
    to_char(1111111::real, '99999999990D00FM'),
    to_char(11111111::real, '99999999990D00FM')
WHERE 1=1
outputs:
"0.00"; "1.00"; "11.00"; "111.00"; "1111.00"; "11111.0"; "111111"; "1111111"; "11111111"
And this is strange; according to the documentation it should also work for the real data type. Is this a bug in Postgres?
Cast the reals to numeric and use the FM modifier:
SELECT to_char((REAL '123456789')::numeric, '99999999990D00FM');
to_char
--------------
123457000,00
(1 row)
This will cut off all positions that exceed real's precision.
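If you are building the query from Python, the same cast-then-format approach applies unchanged (a minimal sketch assuming psycopg2; the DSN is a placeholder):
import psycopg2

conn = psycopg2.connect("dbname=test")  # placeholder DSN
with conn.cursor() as cur:
    # Cast real -> numeric before formatting, as shown above
    cur.execute("SELECT to_char((%s::real)::numeric, '99999999990D00FM')", (399326,))
    print(cur.fetchone()[0])  # something like '399326.00' (decimal mark depends on locale)
conn.close()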
So I've got the following migration:
create table(:things, primary_key: false) do
  add :id, :uuid, primary_key: true
  add :state, :string
  timestamps()
end
Which has the following schema:
@primary_key {:id, Ecto.UUID, autogenerate: true}
@derive {Phoenix.Param, key: :id}
schema "things" do
  field :state, :string
  timestamps()
end
And upon trying the following query in the REPL, I get an Ecto.Query.CastError:
iex(8)> q = from s in Thing, where: s.id == "ba34d9a0-889f-4999-ac23-f04c7183f2ba", select: s
#Ecto.Query<from o in App.Thing,
where: o.id == "ba34d9a0-889f-4999-ac23-f04c7183f2ba", select: o>
iex(9)> Repo.get!(Thing, q)
** (Ecto.Query.CastError) /project/deps/ecto/lib/ecto/repo/queryable.ex:341: value `#Ecto.Query<from o in App.Thing, where: o.id == "ba34d9a0-889f-4999-ac23-f04c7183f2ba", select: o>` in `where` cannot be cast to type Ecto.UUID in query:
from o in App.Thing,
where: o.id == ^#Ecto.Query<from o in App.Thing, where: o.id == "ba34d9a0-889f-4999-ac23-f04c7183f2ba", select: o>,
select: o
(elixir) lib/enum.ex:1811: Enum."-reduce/3-lists^foldl/2-0-"/3
(elixir) lib/enum.ex:1357: Enum."-map_reduce/3-lists^mapfoldl/2-0-"/3
(elixir) lib/enum.ex:1811: Enum."-reduce/3-lists^foldl/2-0-"/3
(ecto) lib/ecto/repo/queryable.ex:124: Ecto.Repo.Queryable.execute/5
(ecto) lib/ecto/repo/queryable.ex:37: Ecto.Repo.Queryable.all/4
(ecto) lib/ecto/repo/queryable.ex:78: Ecto.Repo.Queryable.one!/4
(stdlib) erl_eval.erl:670: :erl_eval.do_apply/6
(iex) lib/iex/evaluator.ex:219: IEx.Evaluator.handle_eval/5
(iex) lib/iex/evaluator.ex:200: IEx.Evaluator.do_eval/3
(iex) lib/iex/evaluator.ex:178: IEx.Evaluator.eval/3
I'm not sure this is the proper way of using UUIDs with Ecto; I've looked around, but several people are doing it differently. I'm using Ecto 2.1 and Postgres 10.0 with postgrex.
Any tips on how to get Ecto querying correctly and casting the UUID?
EDIT: The ID I'm giving in the query is an actual existing ID I copied from the database.
The problem you're experiencing is a bit confusing at first, but I think Ecto works quite correctly here; the problem is in how you're querying the database.
If you have your query ready, you need to use Repo.all(query), so your code should be:
q = from s in Thing, where: s.id == "ba34d9a0-889f-4999-ac23-f04c7183f2ba", select: s
Repo.all(q)
You can also query the database for a specific record by its primary key, and this is when you can use Repo.get!/3.
Try this:
Repo.get!(Thing, "ba34d9a0-889f-4999-ac23-f04c7183f2ba")
When you pass your q as the second argument to get!, Ecto will try to cast it to a UUID, but a query is not castable, which is explained in the exception:
** (Ecto.Query.CastError) /project/deps/ecto/lib/ecto/repo/queryable.ex:341: value `#Ecto.Query<from o in App.Thing, where: o.id == "ba34d9a0-889f-4999-ac23-f04c7183f2ba", select: o>` in `where` cannot be cast to type Ecto.UUID in query:
Can you see that the whole #Ecto.Query<> is enclosed in backticks as the value that failed to cast?
You can find more in the documentation:
Repo.all/2
Repo.get!/3
Hope that helps!
The question says it all. Using the Python API as an example:
import ibm_db
# setup stuff
conn = ibm_db.connect(DATABASE, user, password)
stmt = ibm_db.exec_immediate(conn, 'DELETE FROM sometable')
print(ibm_db.num_rows(stmt))  # prints -1
Why doesn't it print the actual count of rows deleted?
This is not really an answer, just a demonstration that the function does work and that SQLCODEs cause Python exceptions. The fact that num_rows() does not work for you may indicate that the functionality is not supported by the database you're connected to. You may want to describe your environment in detail: the DB2 server version and platform, whether it's a local or remote database, the DB2 client version (if different from the server), etc.
>>> import ibm_db
>>> conn = ibm_db.connect('TEST',user,password)
>>> stmt = ibm_db.exec_immediate(conn, 'create table t (f int)')
>>> print ibm_db.num_rows(stmt) # DDL statement - num_rows not applicable
-1
>>> stmt = ibm_db.exec_immediate(conn,'delete from t')
>>> print ibm_db.num_rows(stmt) # 0 rows deleted
0
>>> stmt = ibm_db.exec_immediate(conn,'delete from x') # nonexistent table - exception
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Exception: [IBM][CLI Driver][DB2/LINUXX8664] SQL0204N "USER.X" is an undefined name. SQLSTATE=42704 SQLCODE=-204
>>> stmt = ibm_db.exec_immediate(conn,'insert into t(f) values (1),(2),(3)')
>>> print ibm_db.num_rows(stmt) # 3 rows inserted
3
>>> stmt = ibm_db.exec_immediate(conn,'delete from t')
>>> print ibm_db.num_rows(stmt) # 3 rows deleted
3
>>> ibm_db.close(conn)
True
>>>
A negative SQLCODE in DB2 means an error:
SQL codes
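If you want to read the error programmatically rather than from the traceback, you can catch the exception and ask the driver for the message, which contains the SQLCODE and SQLSTATE (a minimal sketch; the connection string and table name are placeholders):
import ibm_db

conn = ibm_db.connect('DATABASE=TEST;HOSTNAME=localhost;PORT=50000;UID=user;PWD=password;', '', '')
try:
    stmt = ibm_db.exec_immediate(conn, 'DELETE FROM sometable')
    print(ibm_db.num_rows(stmt))   # rows affected, or -1 if not reported
except Exception:
    # The SQLCODE/SQLSTATE are embedded in the driver's error message
    print(ibm_db.stmt_errormsg())
finally:
    ibm_db.close(conn)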
Is there a magic function or operator to ignore some tokens?
select to_tsvector('the quick. brown fox') @@ 'brown' -- returns true
select to_tsvector('the quick,brown fox') @@ 'brown' -- returns true
select to_tsvector('the quick.brown fox') @@ 'brown' -- returns false, should return true
select to_tsvector('the quick/brown fox') @@ 'brown' -- returns false, should return true
I'm afraid that you are probably stuck. If you run your terms through ts_debug you will see that 'quick.brown' is parsed as a host name and 'quick/brown' as a filesystem path. Sadly, the parser really isn't that clever.
My only suggestion is that you preprocess your texts to convert these tokens to spaces. You could easily create a function in plpgsql to do that (or do it on the client side; see the sketch after the ts_debug output below).
nicg=# select ts_debug('the quick.brown fox');
ts_debug
---------------------------------------------------------------------
(asciiword,"Word, all ASCII",the,{english_stem},english_stem,{})
(blank,"Space symbols"," ",{},,)
(host,Host,quick.brown,{simple},simple,{quick.brown})
(blank,"Space symbols"," ",{},,)
(asciiword,"Word, all ASCII",fox,{english_stem},english_stem,{fox})
(5 rows)
As you can see from the above, you don't get separate tokens for quick and brown.
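For completeness, the preprocessing can also happen on the client side before the text ever reaches to_tsvector (a minimal Python sketch; the set of punctuation to replace is just an example):
import re

def preprocess(text):
    # Replace '.', ',' and '/' with spaces so the text-search parser
    # no longer sees host names or file paths
    return re.sub(r"[./,]+", " ", text)

print(preprocess("the quick.brown fox"))   # the quick brown fox
print(preprocess("the quick/brown fox"))   # the quick brown fox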