Basic Grako example gives IndexError - grako

I'd like to get started with Grako (3.6.6), and as a first experience with parsers I wanted to generate an HTML table from a custom syntax. The following basic test:
import grako
grammar = """table = { row }+ ;
row = (cell1:cell "|" cell2:cell) "\n";
cell = /[a-z]+/ ;
"""
model = grako.genmodel("model", grammar)
ast = model.parse(
    """a | b
c | d
""", "table")
print(ast)
results in an error:
File "test.py", line 13, in <module>
""", "table")
File "grako\grammars.py", line 790, in grako.grammars.Grammar.parse (grako\grammars.c:27773)
File "grako\grammars.py", line 97, in grako.grammars.GrakoContext.parse (grako\grammars.c:4391)
File "grako\contexts.py", line 180, in grako.contexts.ParseContext.parse (grako\contexts.c:4313)
File "grako\grammars.py", line 594, in grako.grammars.Rule.parse (grako\grammars.c:22253)
File "grako\grammars.py", line 597, in grako.grammars.Rule._parse_rhs (grako\grammars.c:22435)
File "grako\contexts.py", line 399, in grako.contexts.ParseContext._call (grako\contexts.c:10088)
File "grako\contexts.py", line 433, in grako.contexts.ParseContext._invoke_rule (grako\contexts.c:11135)
File "grako\grammars.py", line 435, in grako.grammars.PositiveClosure.parse (grako\grammars.c:17285)
File "grako\contexts.py", line 695, in grako.contexts.ParseContext._positive_closure (grako\contexts.c:19286)
File "grako\contexts.py", line 696, in grako.contexts.ParseContext._positive_closure (grako\contexts.c:19240)
File "grako\grammars.py", line 435, in grako.grammars.PositiveClosure.parse.lambda10 (grako\grammars.c:17195)
File "grako\grammars.py", line 547, in grako.grammars.RuleRef.parse (grako\grammars.c:20774)
File "grako\grammars.py", line 594, in grako.grammars.Rule.parse (grako\grammars.c:22253)
File "grako\grammars.py", line 597, in grako.grammars.Rule._parse_rhs (grako\grammars.c:22435)
File "grako\contexts.py", line 399, in grako.contexts.ParseContext._call (grako\contexts.c:10088)
File "grako\contexts.py", line 433, in grako.contexts.ParseContext._invoke_rule (grako\contexts.c:11135)
File "grako\grammars.py", line 326, in grako.grammars.Sequence.parse (grako\grammars.c:11582)
File "grako\grammars.py", line 268, in grako.grammars.Token.parse (grako\grammars.c:9463)
File "grako\contexts.py", line 543, in grako.contexts.ParseContext._token (grako\contexts.c:13772)
File "grako\buffering.py", line 301, in grako.buffering.Buffer.match (grako\buffering.c:9168)
IndexError: string index out of range
The failing line is: partial_match = (token[0].isalpha() and token.isalnum() and self.is_name_char(self.current()))
Despite being new to parsers, and despite the somewhat sparse documentation, I'd like to stick with Grako.
Can you help me set up a basic example which outputs the HTML for a table?

Grako is not seeing the "\n" in the grammar correctly because newlines are not allowed in tokens, and the \n is being evaluated in the context of the outer, triple-quoted (""") string. Things work fine if you use /\n/ instead.
Also note that if \n will be part of the language, then you should probably write a ##whitespace clause so the parser doesn't skip over the character:
##whitespace :: /[\t ]+/
This is the correct grammar for your language:
grammar = """
##whitespace :: /[\t ]+/
table = { row }+ ;
row = (cell1:cell "|" cell2:cell) "\\n";
cell = /[a-z]+/ ;
"""
I'm currently patching Grako to detect and report errors like the one in your grammar. The changes are already in the Bitbucket repository. I'll make a release after I finish testing.
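To go from the parsed AST to the HTML table the question asks for, one option is to iterate over the rows and emit tr/td markup. A minimal sketch, assuming the corrected grammar above and that Grako's AST allows dict-style access to the named cell1/cell2 elements:
import grako

grammar = """
##whitespace :: /[\t ]+/
table = { row }+ ;
row = (cell1:cell "|" cell2:cell) "\\n";
cell = /[a-z]+/ ;
"""

model = grako.genmodel("model", grammar)
ast = model.parse("a | b\nc | d\n", "table")

# Each entry in the table AST should be one parsed row exposing the
# named cell1/cell2 elements (assumption: Grako ASTs are dict-like).
rows = "".join(
    "<tr><td>{}</td><td>{}</td></tr>".format(row["cell1"], row["cell2"])
    for row in ast
)
print("<table>{}</table>".format(rows))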

Related

SQLFluff Lint not working for multilevel subdirectory sql file

We are trying to use SQLFluff in our project to catch SQL parser errors before deployment.
In our case we have subdirectories which contain SQL files.
During development we run the sqlfluff lint command from the root directory, and we found that it works for a SQL file one subdirectory deep, like below:
sqlfluff lint demo/complexquery.sql --dialect snowflake
But when we try a SQL file two subdirectories deep, the lint command fails with the error below. Could you please let me know whether I am missing some syntax?
sqlfluff lint SQLScript/demo/complexquery.sql --dialect snowflake
Traceback (most recent call last):
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\toml\decoder.py", line 395, in loads
value, vtype = decoder.load_value(multilinestr)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\toml\decoder.py", line 849, in load_value
raise ValueError("Found tokens after a closed " +
ValueError: Found tokens after a closed string. Invalid TOML.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files\Python310\lib\runpy.py", line 86, in run_code
exec(code, run_globals)
File "C:\Users\ar\AppData\Roaming\Python\Python310\Scripts\sqlfluff.exe_main.py", line 7, in
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 1130, in call
return self.main(*args, **kwargs)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\cli\commands.py", line 549, in lint
config = get_config(
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\cli\commands.py", line 361, in get_config
return FluffConfig.from_root(
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 613, in from_root
c = loader.load_config_up_to_path(
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 414, in load_config_up_to_path
[self.load_config_at_path(p) for p in config_paths]
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 414, in
[self.load_config_at_path(p) for p in config_paths]
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 338, in load_config_at_path
configs = self.load_config_file(p, fname, configs=configs)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 305, in load_config_file
elems = self._get_config_elems_from_toml(file_path)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 191, in _get_config_elems_from_toml
config = toml.load(fpath)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\toml\decoder.py", line 134, in load
return loads(ffile.read(), _dict, decoder)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\toml\decoder.py", line 397, in loads
raise TomlDecodeError(str(err), original, pos)
toml.decoder.TomlDecodeError: Found tokens after a closed string. Invalid TOML. (line 54 column 1 char 4192)
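Judging from the traceback alone, SQLFluff is loading every config file it finds on the way down to the linted file (load_config_up_to_path), and it is the toml library that rejects an invalid pyproject.toml at line 54, not the SQL file itself. A small diagnostic sketch (assuming the same toml package is installed) to locate the offending file:
import pathlib
import toml  # the library that raises TomlDecodeError in the traceback

# Try to parse every pyproject.toml under the project root; report failures.
for path in pathlib.Path(".").rglob("pyproject.toml"):
    try:
        toml.load(str(path))
    except toml.TomlDecodeError as err:
        print("Invalid TOML in {}: {}".format(path, err))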

Read GCS file into dask dataframe

I want to read a CSV file stored in Google Cloud Storage using a dask dataframe.
I have installed gcsfs and dask in the conda env on Windows.
import dask.dataframe as dd
import gcsfs
project_id = 'my-project'
token_file = 'C:\\path\to\credentials.json'
fs = gcsfs.GCSFileSystem(project=project_id)
gcs_bucket_name = 'my_bucket'
df = dd.read_csv('gs://'+gcs_bucket_name+'/my_file.csv',storage_options={'token': token_file, 'project': project_id})
I know I'm not providing the key file correctly, as per https://gcsfs.readthedocs.io/en/latest/, but I'm not sure how to do it. Can anyone please help?
The error I'm getting:
File "<ipython-input-XXXXXXXX>", line 1, in <module>
runfile('C:/path/to/scripts/my_python_script.py', wdir='C:/path/to/scripts')
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\my-rdkit-env\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\my-rdkit-env\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/path/to/scripts/my_python_script.py", line 28, in <module>
df = dd.read_csv('gcs://'+gcs_bucket_name+'/my_file.csv',storage_options={'token': token_file, 'project': project_id})
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\my-rdkit-env\lib\site-packages\dask\dataframe\io\csv.py", line 578, in read
**kwargs
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\my-rdkit-env\lib\site-packages\dask\dataframe\io\csv.py", line 444, in read_pandas
head = reader(BytesIO(b_sample), **kwargs)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\my-rdkit-env\lib\site-packages\pandas\io\parsers.py", line 685, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\my-rdkit-env\lib\site-packages\pandas\io\parsers.py", line 463, in _read
data = parser.read(nrows)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\my-rdkit-env\lib\site-packages\pandas\io\parsers.py", line 1154, in read
ret = self._engine.read(nrows)
File "C:\Users\AppData\Local\Continuum\anaconda3\envs\my-rdkit-env\lib\site-packages\pandas\io\parsers.py", line 2059, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 881, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 896, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 950, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 937, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2132, in pandas._libs.parsers.raise_parser_error
ParserError: Error tokenizing data. C error: Expected 4 fields in line 3, saw 134
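No answer is recorded here, but two details stand out. The token_file assignment mixes escaped and unescaped backslashes ('C:\\path\to\credentials.json'), so the \t is interpreted as a tab character before gcsfs ever sees the path; a raw string avoids that. And the gcsfs documentation describes token as accepting a path to a service-account JSON file, so the storage_options shape is right. A minimal sketch with hypothetical bucket/project/file names:
import dask.dataframe as dd

# Hypothetical names; the raw string keeps Windows backslashes intact.
token_file = r"C:\path\to\credentials.json"

df = dd.read_csv(
    "gcs://my_bucket/my_file.csv",
    storage_options={"token": token_file, "project": "my-project"},
)
print(df.head())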

"InterfaceError: connection already closed" when using multiprocessing.Pool on black box function that queries PostgreSQL database

I've been given a Python (2.7) function that takes 3 strings as arguments and returns a list of dictionaries. Due to the nature of the project, I can't alter the function, which is quite complex, calling several other non-standard Python modules and querying a PostgreSQL database using psycopg2. I think it's the Postgres functionality that's causing me problems.
I want to use the multiprocessing module to speed up calling the function hundreds of times. I've written a "helper" function so that I can use multiprocessing.Pool (whose map passes a single argument) with my function:
from function_script import function
def function_helper(args):
    return function(*args)
And my main code looks like this:
from helper_script import function_helper
from multiprocessing import Pool
argument_a = ['a0', 'a1', ..., 'a99']
argument_b = ['b0', 'b1', ..., 'b99']
argument_c = ['c0', 'c1', ..., 'c99']
input = zip(argument_a, argument_b, argument_c)
p = Pool(4)
results = p.map(function_helper, input)
print results
What I'm expecting is a list of lists of dictionaries; however, I get the following errors:
Traceback (most recent call last):
File "/local/python/2.7/lib/python2.7/site-packages/variantValidator/variantValidator.py", line 898, in validator
vr.validate(input_parses)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 33, in validate
return self._ivr.validate(var, strict) and self._evr.validate(var, strict)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 69, in validate
(res, msg) = self._ref_is_valid(var)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 89, in _ref_is_valid
var_x = self.vm.c_to_n(var) if var.type == "c" else var
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/variantmapper.py", line 223, in c_to_n
tm = self._fetch_TranscriptMapper(tx_ac=var_c.ac, alt_ac=var_c.ac, alt_aln_method="transcript")
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/decorators/lru_cache.py", line 176, in wrapper
result = user_function(*args, **kwds)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/variantmapper.py", line 372, in _fetch_TranscriptMapper
self.hdp, tx_ac=tx_ac, alt_ac=alt_ac, alt_aln_method=alt_aln_method)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/transcriptmapper.py", line 69, in __init__
self.tx_identity_info = hdp.get_tx_identity_info(self.tx_ac)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/decorators/lru_cache.py", line 176, in wrapper
result = user_function(*args, **kwds)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 353, in get_tx_identity_info
rows = self._fetchall(self._queries['tx_identity_info'], [tx_ac])
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 216, in _fetchall
with self._get_cursor() as cur:
File "/local/python/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 529, in _get_cursor
cur.execute("set search_path = " + self.url.schema + ";")
File "/local/python/2.7/lib/python2.7/site-packages/psycopg2/extras.py", line 144, in execute
return super(DictCursor, self).execute(query, vars)
DatabaseError: SSL error: decryption failed or bad record mac
And:
Traceback (most recent call last):
File "/local/python/2.7/lib/python2.7/site-packages/variantValidator/variantValidator.py", line 898, in validator
vr.validate(input_parses)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 33, in validate
return self._ivr.validate(var, strict) and self._evr.validate(var, strict)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 69, in validate
(res, msg) = self._ref_is_valid(var)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 89, in _ref_is_valid
var_x = self.vm.c_to_n(var) if var.type == "c" else var
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/variantmapper.py", line 223, in c_to_n
tm = self._fetch_TranscriptMapper(tx_ac=var_c.ac, alt_ac=var_c.ac, alt_aln_method="transcript")
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/decorators/lru_cache.py", line 176, in wrapper
result = user_function(*args, **kwds)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/variantmapper.py", line 372, in _fetch_TranscriptMapper
self.hdp, tx_ac=tx_ac, alt_ac=alt_ac, alt_aln_method=alt_aln_method)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/transcriptmapper.py", line 69, in __init__
self.tx_identity_info = hdp.get_tx_identity_info(self.tx_ac)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/decorators/lru_cache.py", line 176, in wrapper
result = user_function(*args, **kwds)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 353, in get_tx_identity_info
rows = self._fetchall(self._queries['tx_identity_info'], [tx_ac])
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 216, in _fetchall
with self._get_cursor() as cur:
File "/local/python/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 526, in _get_cursor
conn.autocommit = True
InterfaceError: connection already closed
Does anybody know what might cause the Pool function to behave like this, when it seems so simple to use in other examples that I've tried? If this isn't enough information to go on, can anyone advise me on a way of getting to the bottom of the problem (this is the first time I've worked with someone else's code)? Alternatively, are there any other ways that I could use the multiprocessing module to call the function hundreds of times?
Thanks
I think what may be happening is that your connection object is shared across all the workers. When one worker completes all its tasks, it closes the connection while the other workers are still running, so when one of them next tries to use the database, the connection is already closed.
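If that diagnosis is right, the usual remedy is to make each worker process open its own connection instead of inheriting the parent's. Since the function is a black box, one hedged sketch is to defer the import of function_script (and, presumably, its connection setup) to a Pool initializer that runs once per child process; the argument lists are shortened from the question:
from multiprocessing import Pool

_function = None

def _init_worker():
    # Import inside the child so any database connection the module opens
    # belongs to that worker alone, not to the parent process.
    global _function
    from function_script import function
    _function = function

def function_helper(args):
    return _function(*args)

if __name__ == '__main__':
    argument_a = ['a0', 'a1']  # shortened from the question
    argument_b = ['b0', 'b1']
    argument_c = ['c0', 'c1']
    p = Pool(4, initializer=_init_worker)
    print(p.map(function_helper, zip(argument_a, argument_b, argument_c)))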

Problems with MySQL encoding

I have a serious problem with my populate script: characters are not stored correctly. My code:
def _create_Historial(self):
    datos = [self.DB_HOST, self.DB_USER, self.DB_PASS, self.DB_NAME]
    conn = MySQLdb.connect(*datos)
    cursor = conn.cursor()
    cont = 0
    with open('principal/management/commands/Historial_fichajes_jugadores.csv', 'rv') as csvfile:
        historialReader = csv.reader(csvfile, delimiter=',')
        for row in historialReader:
            if cont == 0:
                cont += 1
            else:
                #unicodedata.normalize('NFKD', unicode(row[4], 'latin1')).encode('ASCII', 'ignore'),
                cursor.execute('''INSERT INTO principal_historial(jugador_id, temporada, fecha, ultimoClub, nuevoClub, valor, coste) VALUES (%s,%s,%s,%s,%s,%s,%s)''',
                               (round(float(row[1]))+1, row[2], self.stringToDate(row[3]), unicode(row[4], 'utf-8'), row[5], self.convertValue(row[6]), str(row[7])))
    conn.commit()
    cursor.close()
    conn.close()
The error is the following:
Traceback (most recent call last):
File "/home/tfg/pycharm-2016.3.2/helpers/pycharm/django_manage.py", line 41, in <module>
run_module(manage_file, None, '__main__', True)
File "/usr/lib/python2.7/runpy.py", line 188, in run_module
fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 82, in _run_module_code
mod_name, mod_fname, mod_loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/tfg/TrabajoFinGrado/demoTFG/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist- packages/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist -packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/home/tfg/TrabajoFinGrado/demoTFG/principal/management/commands/populate_db.py", line 230, in handle
self._create_Historial()
File "/home/tfg/TrabajoFinGrado/demoTFG/principal/management/commands/populate_db.py", line 217, in _create_Historial
(round(float(row[1]))+1,row[2], self.stringToDate(row[3]), unicode(row[4],'utf-8'), row[5], self.convertValue(row[6]), str(row[7])))
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 187, in execute
query = query % tuple([db.literal(item) for item in args])
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py", line 278, in literal
return self.escape(o, self.encoders)
File "/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py", line 208, in unicode_literal
return db.literal(u.encode(unicode_literal.charset))
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 6-7: ordinal not in range(256)
The characters are shown as follows: Nicolás Otamendi, Gaël Clichy ....
When I print the characters in the Python shell, they are shown correctly.
Sorry for my English :(
Ok, I'll keep this brief.
You should convert encoded data (str) to Unicode early in your code; don't inline .decode()/.encode()/unicode() calls.
When you open a file with open() in Python 2.7, you get bytes (str). Use io.open(filename, encoding='utf-8') instead, which reads the file as text and decodes it from UTF-8 to Unicode.
The Python 2.7 csv module is not Unicode compatible. You should install backports.csv (https://github.com/ryanhiebert/backports.csv).
You need to tell the MySQL driver that you're going to pass Unicodes and use UTF-8 for the connection. This is done by adding the following to your connection string:
charset='utf8',
use_unicode=True
Pass Unicode strings to MySQL. Use the u'' prefix to avoid troublesome implicit conversion.
With those changes, all your CSV data will already be Unicode str, so there's no need to convert it field by field.
Putting it all together, your code will look like:
from backports import csv
import io

datos = [self.DB_HOST, self.DB_USER, self.DB_PASS, self.DB_NAME]
conn = MySQLdb.connect(*datos, charset='utf8', use_unicode=True)
cursor = conn.cursor()
cont = 0
with io.open('principal/management/commands/Historial_fichajes_jugadores.csv', 'r', encoding='utf-8') as csvfile:
    historialReader = csv.reader(csvfile, delimiter=',')
    for row in historialReader:
        if cont == 0:
            cont += 1
        else:
            cursor.execute(u'''INSERT INTO principal_historial(jugador_id, temporada, fecha, ultimoClub, nuevoClub, valor, coste) VALUES (%s,%s,%s,%s,%s,%s,%s)''',
                           (round(float(row[1]))+1, row[2], self.stringToDate(row[3]), row[4], row[5], self.convertValue(row[6]), row[7]))
conn.commit()
cursor.close()
conn.close()
You may also want to look at https://stackoverflow.com/a/35444608/1554386, which covers what Python 2.7 Unicodes are.

can not rename column using alter_column

I have an existing project that is based on Pylons and SQLAlchemy. I have not used Alembic or any wrapper in this project yet.
I migrated the project to Flask and SQLAlchemy.
I took the database of the old version 1.5 and generated the first migration script with Flask-Migrate. Hm, I realized I will have to do many things manually.
I am already stuck when renaming columns. I am doing this with an SQLite database.
The old project had a table "Token", which was renamed to "token". Obviously there are some databases that do not care about upper or lower case.
When I try to rename the first column like this:
op.alter_column('token', 'privacyIDEATokenId', new_column_name='id')
I get this error:
sqlalchemy.exc.OperationalError: (OperationalError) near \
""privacyIDEATokenId"": syntax error u'ALTER TABLE token RENAME \
"privacyIDEATokenId" TO id' ()
The column Token.privacyIDEATokenId is the primary key that should be token.id in the new version.
The same happens to columns that are not the primary key.
Full trace:
Traceback (most recent call last):
File "./manage.py", line 107, in <module>
manager.run()
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/flask_migrate/__init__.py", line 98, in upgrade
command.upgrade(config, revision, sql = sql, tag = tag)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/command.py", line 129, in upgrade
script.run_env()
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/script.py", line 208, in run_env
util.load_python_file(self.dir, 'env.py')
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/util.py", line 230, in load_python_file
module = load_module_py(module_id, path)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/compat.py", line 63, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "migrations/env.py", line 72, in <module>
run_migrations_online()
File "migrations/env.py", line 65, in run_migrations_online
context.run_migrations()
File "<string>", line 7, in run_migrations
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/environment.py", line 696, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/migration.py", line 266, in run_migrations
change(**kw)
File "migrations/versions/112475c7f45_.py", line 103, in upgrade
_upgrade_token_table()
File "migrations/versions/112475c7f45_.py", line 49, in _upgrade_token_table
op.alter_column('token', 'privacyIDEATokenId', new_column_name='id')
File "<string>", line 7, in alter_column
File "<string>", line 1, in <lambda>
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/util.py", line 353, in go
return fn(*arg, **kw)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/operations.py", line 329, in alter_column
existing_autoincrement=existing_autoincrement
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 131, in alter_column
existing_nullable=existing_nullable,
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 81, in _exec
conn.execute(construct, *multiparams, **params)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
return meth(self, multiparams, params)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 69, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 783, in _execute_ddl
compiled
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
context)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1159, in _handle_dbapi_exception
exc_info
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
context)
File "/home/cornelius/src/privacyidea/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (OperationalError) near ""privacyIDEATokenId"": syntax error u'ALTER TABLE token RENAME "privacyIDEATokenId" TO id' ()
Nowadays you can use "batch migration" to enable ALTER-ing columns under SQLite:
with op.batch_alter_table('table_name') as bop:
    bop.alter_column('old_column_name', new_column_name='new_name')
Obviously SQLite cannot rename a column. So finally I ended up creating new tables with the new columns and shoving the data from one to the other.
This is OK, since I realized I also have to mangle the data, adapting and modifying it before writing it to the new table again.
The script I ended up with is here:
https://github.com/privacyidea/privacyidea/blob/version2/migrations/versions/4f32a4e1bf33_.py#L241
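For a shorter illustration of the same create-copy-drop-rename approach, here is a minimal sketch of such a migration; everything beyond the privacyIDEATokenId column named in the question is hypothetical:
import sqlalchemy as sa
from alembic import op

def upgrade():
    # New table with the new column names (only the primary key shown here).
    op.create_table(
        'token_new',
        sa.Column('id', sa.Integer, primary_key=True),
    )
    # Copy (and, if needed, transform) the rows from the old table.
    op.execute('INSERT INTO token_new (id) SELECT "privacyIDEATokenId" FROM "Token"')
    # Drop the old table and move the new one into its place.
    op.drop_table('Token')
    op.rename_table('token_new', 'token')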