I use Jupyter notebooks in VS Code all the time. I noticed a day or two ago that I could no longer run code in cells. It always displayed:
Failed to start the Kernel. OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '"c:'. View Jupyter log for further details.
The logs also show:
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '"c:'... View Jupyter [log](command:jupyter.viewOutput) for further details.
at ChildProcess.<anonymous> (c:\Users\XY_User\.vscode\extensions\ms-toolsai.jupyter-2022.6.1201981810\out\extension.node.js:24:230120)
at ChildProcess.emit (node:events:402:35)
at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)] {
category: 'kerneldied',
kernelConnectionMetadata: {
kind: 'startUsingPythonInterpreter',
kernelSpec: {
specFile: 'c:\\Users\\XY_User\\.vscode\\extensions\\ms-toolsai.jupyter-2022.7.1001951036\\temp\\jupyter\\kernels\\python383jvsc74a57bd0ad2bdc8ecc057115af97d19610ffacc2b4e99fae6737bb82f5d7fb13d2f2c186\\kernel.json',
interpreterPath: 'c:\\ProgramData\\Anaconda3\\python.exe',
isRegisteredByVSC: 'registeredByNewVersionOfExt',
name: 'python383jvsc74a57bd0ad2bdc8ecc057115af97d19610ffacc2b4e99fae6737bb82f5d7fb13d2f2c186',
argv: [Array],
language: 'python',
executable: 'python',
display_name: "Python 3.8.3 ('base')",
metadata: [Object],
env: {}
},
interpreter: {
id: 'C:\\PROGRAMDATA\\ANACONDA3\\PYTHON.EXE',
sysPrefix: 'C:\\ProgramData\\Anaconda3',
envType: 'Conda',
envName: 'base',
envPath: [w],
architecture: 3,
sysVersion: '3.8.3 (default, Jul 2 2020, 17:30:36) [MSC v.1916 64 bit (AMD64)]',
version: [Object],
companyDisplayName: 'ContinuumAnalytics',
displayName: "Python 3.8.3 ('base')",
detailedDisplayName: "Python 3.8.3 ('base': conda)",
uri: [w]
},
id: '.jvsc74a57bd0ad2bdc8ecc057115af97d19610ffacc2b4e99fae6737bb82f5d7fb13d2f2c186.c:\\ProgramData\\Anaconda3\\python.exe.c:\\ProgramData\\Anaconda3\\python.exe.-m#ipykernel_launcher'
},
exitCode: 1,
stdErr: 'Traceback (most recent call last):\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel\\kernelapp.py", line 248, in init_connection_file\r\n' +
" self.connection_file = filefind(self.connection_file, ['.', self.connection_dir])\r\n" +
' File "c:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipython_genutils\\path.py", line 71, in filefind\r\n' +
' raise IOError("File %r does not exist in any of the search paths: %r" %\r\n' +
"OSError: File 'c:\\\\Users\\\\XY_User\\\\AppData\\\\Roaming\\\\jupyter\\\\runtime\\\\kernel-v2-7868dLFyv3ry3NTk.json' does not exist in any of the search paths: ['.', 'C:\\\\Users\\\\XY_User\\\\AppData\\\\Roaming\\\\jupyter\\\\runtime']\r\n" +
'\r\n' +
'During handling of the above exception, another exception occurred:\r\n' +
'\r\n' +
'Traceback (most recent call last):\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\runpy.py", line 194, in _run_module_as_main\r\n' +
' return _run_code(code, main_globals, None,\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\runpy.py", line 87, in _run_code\r\n' +
' exec(code, run_globals)\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py", line 16, in <module>\r\n' +
' app.launch_new_instance()\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\site-packages\\traitlets\\config\\application.py", line 663, in launch_instance\r\n' +
' app.initialize(argv)\r\n' +
' File "<decorator-gen-125>", line 2, in initialize\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\site-packages\\traitlets\\config\\application.py", line 87, in catch_config_error\r\n' +
' return method(app, *args, **kwargs)\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel\\kernelapp.py", line 565, in initialize\r\n' +
' self.init_connection_file()\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel\\kernelapp.py", line 252, in init_connection_file\r\n' +
' ensure_dir_exists(os.path.dirname(self.abs_connection_file), 0o700)\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipython_genutils\\path.py", line 167, in ensure_dir_exists\r\n' +
' os.makedirs(path, mode=mode)\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\os.py", line 213, in makedirs\r\n' +
' makedirs(head, exist_ok=exist_ok)\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\os.py", line 213, in makedirs\r\n' +
' makedirs(head, exist_ok=exist_ok)\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\os.py", line 213, in makedirs\r\n' +
' makedirs(head, exist_ok=exist_ok)\r\n' +
' [Previous line repeated 3 more times]\r\n' +
' File "c:\\ProgramData\\Anaconda3\\lib\\os.py", line 223, in makedirs\r\n' +
' mkdir(name, mode)\r\n' +
`OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '"c:'\r\n`,
vslsStack: [ CallSite {}, CallSite {}, CallSite {} ]
}
info 2:51:27.573: Process Execution: > c:\ProgramData\Anaconda3\python.exe -c "import ipykernel"
> c:\ProgramData\Anaconda3\python.exe -c "import ipykernel"
This topic seems to recur a lot, so I found and tried the following, but no dice:
python -m ipykernel install --user
checking the kernel.json file in the anaconda3/share/jupyter/kernels/python3/ directory
switching between the pre-release and release versions of the Jupyter extension
Any help would be appreciated.
It turns out this can be caused by a mismatch between the version of ipykernel installed by Anaconda and the base requirements of the Jupyter extension for VS Code.
My case was solved with the following steps.
Launch an instance of Anaconda Prompt.
Type pip install -U ipykernel --user and hit Enter.
Switch to VS Code, press Ctrl + Shift + P, and run Developer: Reload Window.
Try executing a cell in your .ipynb file.
Hope this helps someone out there.
We are trying to use SQLFluff in our project to catch SQL parser errors before deployment.
In our case we have subdirectories which contain SQL files.
During development we run the sqlfluff lint command in the root directory. We found that the lint command works for a one-level subdirectory SQL path, like below:
sqlfluff lint demo/complexquery.sql --dialect snowflake
But when we try a two-level subdirectory SQL file path, the lint command fails with the error below. Could you please let me know whether I am missing some syntax?
sqlfluff lint SQLScript/demo/complexquery.sql --dialect snowflake
Traceback (most recent call last):
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\toml\decoder.py", line 395, in loads
value, vtype = decoder.load_value(multilinestr)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\toml\decoder.py", line 849, in load_value
raise ValueError("Found tokens after a closed " +
ValueError: Found tokens after a closed string. Invalid TOML.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\ar\AppData\Roaming\Python\Python310\Scripts\sqlfluff.exe\__main__.py", line 7, in <module>
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\cli\commands.py", line 549, in lint
config = get_config(
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\cli\commands.py", line 361, in get_config
return FluffConfig.from_root(
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 613, in from_root
c = loader.load_config_up_to_path(
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 414, in load_config_up_to_path
[self.load_config_at_path(p) for p in config_paths]
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 414, in <listcomp>
[self.load_config_at_path(p) for p in config_paths]
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 338, in load_config_at_path
configs = self.load_config_file(p, fname, configs=configs)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 305, in load_config_file
elems = self._get_config_elems_from_toml(file_path)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\sqlfluff\core\config.py", line 191, in _get_config_elems_from_toml
config = toml.load(fpath)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\toml\decoder.py", line 134, in load
return loads(ffile.read(), _dict, decoder)
File "C:\Users\ar\AppData\Roaming\Python\Python310\site-packages\toml\decoder.py", line 397, in loads
raise TomlDecodeError(str(err), original, pos)
toml.decoder.TomlDecodeError: Found tokens after a closed string. Invalid TOML. (line 54 column 1 char 4192)
I got the following error when running the command to create a new Airflow admin user. Any ideas what may have caused this? The error doesn't seem to be related to the create command that I ran.
(sandbox) airflow@airflowvm:~/airflow$ airflow users create -u admin -p admin -r Admin -f admin -l admin -e admin@airflow.com
Traceback (most recent call last):
File "/home/airflow/sandbox/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 47, in command
func = import_string(import_path)
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/cli/commands/user_command.py", line 29, in <module>
from airflow.www.app import cached_app
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/www/app.py", line 38, in <module>
from airflow.www.extensions.init_views import (
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/www/extensions/init_views.py", line 29, in <module>
from airflow.www.views import lazy_add_provider_discovered_options_to_connection_form
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/www/views.py", line 2836, in <module>
class ConnectionFormWidget(FormWidget):
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/www/views.py", line 2839, in ConnectionFormWidget
field_behaviours = json.dumps(ProvidersManager().field_behaviours)
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/providers_manager.py", line 397, in field_behaviours
self.initialize_providers_manager()
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/providers_manager.py", line 129, in initialize_providers_manager
self._discover_all_providers_from_packages()
File "/home/airflow/sandbox/lib/python3.8/site-packages/airflow/providers_manager.py", line 149, in _discover_all_providers_from_packages
self._provider_schema_validator.validate(provider_info)
File "/home/airflow/sandbox/lib/python3.8/site-packages/jsonschema/validators.py", line 353, in validate
raise error
jsonschema.exceptions.ValidationError: Additional properties are not allowed ('logo' was unexpected)
Failed validating 'additionalProperties' in schema['properties']['integrations']['items']:
{'additionalProperties': False,
'properties': {'external-doc-url': {'description': 'URL to external '
'documentation for '
'the integration.',
'type': 'string'},
'how-to-guide': {'description': 'List of paths to '
'how-to-guide for the '
'integration. The path '
'must start with '
"'/docs/'",
'items': {'type': 'string'},
'type': 'array'},
'integration-name': {'description': 'Name of the '
'integration.',
'type': 'string'},
'tags': {'description': 'List of tags describing the '
"integration. While we're "
'using RST, only one tag is '
'supported per integration.',
'items': {'enum': ['apache',
'aws',
'azure',
'gcp',
'gmp',
'google',
'protocol',
'service',
'software',
'yandex'],
'type': 'string'},
'maxItems': 1,
'minItems': 1,
'type': 'array'}},
'required': ['integration-name', 'external-doc-url', 'tags'],
'type': 'object'}
On instance['integrations'][0]:
{'external-doc-url': 'https://www.postgresql.org/',
'how-to-guide': ['/docs/apache-airflow-providers-postgres/operators/postgres_operator_howto_guide.rst'],
'integration-name': 'PostgreSQL',
'logo': '/integration-logos/postgres/Postgres.png',
'tags': ['software']}
Expected outcome:
Admin user admin created
When I run the command, airflow db check, I can connect successfully with INFO - Connection successful.
I believe you are using Airflow 2.0.0 with a non-compatible provider (likely forced when you installed it). Please upgrade Airflow to 2.1+ if you want to use the Postgres provider, which has an Airflow >= 2.1 requirement.
See the comment in the changelog here: https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/index.html#id1
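To confirm the mismatch described above, it is enough to compare the installed versions; a small sketch using only the standard library (the distribution names are the ones published on PyPI):

```python
from importlib.metadata import PackageNotFoundError, version

# Distribution names as published on PyPI; adjust if you use other providers.
for pkg in ("apache-airflow", "apache-airflow-providers-postgres"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")
```

If apache-airflow reports 2.0.x next to a 2.x provider, upgrading Airflow (pip install --upgrade "apache-airflow>=2.1", ideally with the official constraints file) resolves the provider's requirement.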
I am a beginner with the Yocto Project. I am trying to build the image for the BeagleBone Black board with the command line bitbake core-image-sato -c populate_sdk, and I got an error (details below) in the last task.
Build environment: Ubuntu 16.04 LTS, using the Bash shell instead of the Dash shell.
I tried to build again many times but still face the same error. Can anybody help me fix it?
Log file:
NOTE: Executing create_sdk_files ...
DEBUG: Executing shell function create_sdk_files
DEBUG: Shell function create_sdk_files finished
NOTE: Executing check_sdk_sysroots ...
DEBUG: Executing python function check_sdk_sysroots
DEBUG: Python function check_sdk_sysroots finished
NOTE: Executing archive_sdk ...
DEBUG: Executing shell function archive_sdk
/home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392: line 106: 11617 Broken pipe tar --owner=root --group=root -cf - .
11618 Killed | xz -T 0 -9 > /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/x86_64-deploy-core-image-sato-populate-sdk/poky-glibc-x86_64-core-image-sato-armv7at2hf-neon-beaglebone-toolchain-3.0.tar.xz
WARNING: /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392:1 exit 137 from 'xz -T 0 -9 > /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/x86_64-deploy-core-image-sato-populate-sdk/poky-glibc-x86_64-core-image-sato-armv7at2hf-neon-beaglebone-toolchain-3.0.tar.xz'
ERROR: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_populate_sdk(d)
0003:
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/classes/populate_sdk_base.bbclass', lineno: 169, function: do_populate_sdk
0165:
0166: populate_sdk(d)
0167:
0168:fakeroot python do_populate_sdk() {
*** 0169: populate_sdk_common(d)
0170:}
0171:SSTATETASKS += "do_populate_sdk"
0172:SSTATE_SKIP_CREATION_task-populate-sdk = '1'
0173:do_populate_sdk[cleandirs] = "${SDKDEPLOYDIR}"
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/classes/populate_sdk_base.bbclass', lineno: 166, function: populate_sdk_common
0162: manifest_type=Manifest.MANIFEST_TYPE_SDK_HOST)
0163: create_manifest(d, manifest_dir=d.getVar('SDK_DIR'),
0164: manifest_type=Manifest.MANIFEST_TYPE_SDK_TARGET)
0165:
*** 0166: populate_sdk(d)
0167:
0168:fakeroot python do_populate_sdk() {
0169: populate_sdk_common(d)
0170:}
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/lib/oe/sdk.py', lineno: 413, function: populate_sdk
0409: env_bkp = os.environ.copy()
0410:
0411: img_type = d.getVar('IMAGE_PKGTYPE')
0412: if img_type == "rpm":
*** 0413: RpmSdk(d, manifest_dir).populate()
0414: elif img_type == "ipk":
0415: OpkgSdk(d, manifest_dir).populate()
0416: elif img_type == "deb":
0417: DpkgSdk(d, manifest_dir).populate()
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/lib/oe/sdk.py', lineno: 60, function: populate
0056: self.sysconfdir, "ld.so.cache")
0057: self.mkdirhier(os.path.dirname(link_name))
0058: os.symlink("/etc/ld.so.cache", link_name)
0059:
*** 0060: execute_pre_post_process(self.d, self.d.getVar('SDK_POSTPROCESS_COMMAND'))
0061:
0062: def movefile(self, sourcefile, destdir):
0063: try:
0064: # FIXME: this check of movefile's return code to None should be
File: '/home/huongnguyen/Desktop/poky/openembedded-core/meta/lib/oe/utils.py', lineno: 260, function: execute_pre_post_process
0256: for cmd in cmds.strip().split(';'):
0257: cmd = cmd.strip()
0258: if cmd != '':
0259: bb.note("Executing %s ..." % cmd)
*** 0260: bb.build.exec_func(cmd, d)
0261:
0262:# For each item in items, call the function 'target' with item as the first
0263:# argument, extraargs as the other arguments and handle any exceptions in the
0264:# parent thread
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/build.py', lineno: 249, function: exec_func
0245: with bb.utils.fileslocked(lockfiles):
0246: if ispython:
0247: exec_func_python(func, d, runfile, cwd=adir)
0248: else:
*** 0249: exec_func_shell(func, d, runfile, cwd=adir)
0250:
0251: try:
0252: curcwd = os.getcwd()
0253: except:
File: '/usr/lib/python3.5/contextlib.py', lineno: 77, function: __exit__
0073: # Need to force instantiation so we can reliably
0074: # tell if we get the same exception back
0075: value = type()
0076: try:
*** 0077: self.gen.throw(type, value, traceback)
0078: raise RuntimeError("generator didn't stop after throw()")
0079: except StopIteration as exc:
0080: # Suppress StopIteration *unless* it's the same exception that
0081: # was passed to throw(). This prevents a StopIteration
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/utils.py', lineno: 431, function: fileslocked
0427: if files:
0428: for lockfile in files:
0429: locks.append(bb.utils.lockfile(lockfile))
0430:
*** 0431: yield
0432:
0433: for lock in locks:
0434: bb.utils.unlockfile(lock)
0435:
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/build.py', lineno: 249, function: exec_func
0245: with bb.utils.fileslocked(lockfiles):
0246: if ispython:
0247: exec_func_python(func, d, runfile, cwd=adir)
0248: else:
*** 0249: exec_func_shell(func, d, runfile, cwd=adir)
0250:
0251: try:
0252: curcwd = os.getcwd()
0253: except:
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/build.py', lineno: 450, function: exec_func_shell
0446: with open(fifopath, 'r+b', buffering=0) as fifo:
0447: try:
0448: bb.debug(2, "Executing shell function %s" % func)
0449: with open(os.devnull, 'r+') as stdin, logfile:
*** 0450: bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
0451: finally:
0452: os.unlink(fifopath)
0453:
0454: bb.debug(2, "Shell function %s finished" % func)
File: '/home/huongnguyen/Desktop/poky/bitbake/lib/bb/process.py', lineno: 182, function: run
0178: if not stderr is None:
0179: stderr = stderr.decode("utf-8")
0180:
0181: if pipe.returncode != 0:
*** 0182: raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
0183: return stdout, stderr
Exception: bb.process.ExecutionError: Execution of '/home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392' failed with exit code 137:
/home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392: line 106: 11617 Broken pipe tar --owner=root --group=root -cf - .
11618 Killed | xz -T 0 -9 > /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/x86_64-deploy-core-image-sato-populate-sdk/poky-glibc-x86_64-core-image-sato-armv7at2hf-neon-beaglebone-toolchain-3.0.tar.xz
WARNING: /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/run.archive_sdk.4392:1 exit 137 from 'xz -T 0 -9 > /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/x86_64-deploy-core-image-sato-populate-sdk/poky-glibc-x86_64-core-image-sato-armv7at2hf-neon-beaglebone-toolchain-3.0.tar.xz'
ERROR: Logfile of failure stored in: /home/huongnguyen/Desktop/poky/build/tmp/work/beaglebone-poky-linux-gnueabi/core-image-sato/1.0-r0/temp/log.do_populate_sdk.4392
ERROR: Task (/home/huongnguyen/Desktop/poky/openembedded-core/meta/recipes-sato/images/core-image-sato.bb:do_populate_sdk) failed with exit code '1'
Exit code 137 means something killed xz during the build. You may be running out of memory: check dmesg after this happens; there may be a log line from the out-of-memory killer.
Had the same problem and could make it go away with XZ_MEMLIMIT="75%" bitbake image-name -c do_populate_sdk. The bitbake.conf in my version of Yocto defaults XZ_MEMLIMIT to 50%.
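The same limit can be made persistent in conf/local.conf instead of on the command line (the variable comes from bitbake.conf, as noted above; the value is an example, trade it off against your available RAM):

```
XZ_MEMLIMIT = "75%"
```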
Had the same problem, and none of the usual methods, like deleting the hidden repo file, worked.
I then cleaned the build using bitbake -c clean mybuildname and built again, and it worked flawlessly. I hope this helps someone.
I have installed Anaconda on Windows 10, and every time I open Jupyter Notebook I get the error below. Can someone please help me understand the issue and its resolution?
http://localhost:8888/?token=7902567fdc4d1d33959bd34f85ce21f842677e1efd65ea20
[I 11:35:11.235 NotebookApp] Accepting one-time-token-authenticated connection from ::1
[W 11:35:14.680 NotebookApp] Error loading kernelspec 'pyspark2.2'
Traceback (most recent call last):
File "D:\Anaconda3\envs\pythonREnv\lib\site-packages\jupyter_client\kernelspec.py", line 258, in get_all_specs
spec = self._get_kernel_spec_by_name(kname, resource_dir)
File "D:\Anaconda3\envs\pythonREnv\lib\site-packages\jupyter_client\kernelspec.py", line 201, in _get_kernel_spec_by_name
return self.kernel_spec_class.from_resource_dir(resource_dir)
File "D:\Anaconda3\envs\pythonREnv\lib\site-packages\jupyter_client\kernelspec.py", line 47, in from_resource_dir
kernel_dict = json.load(f)
File "D:\Anaconda3\envs\pythonREnv\lib\json\__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "D:\Anaconda3\envs\pythonREnv\lib\json\__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "D:\Anaconda3\envs\pythonREnv\lib\json\decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 15 (char 14)
Output of jupyter kernelspec list:
Traceback (most recent call last):
File "D:\Anaconda3\Scripts\jupyter-kernelspec-script.py", line 10, in <module>
sys.exit(KernelSpecApp.launch_instance())
File "D:\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspecapp.py", line 273, in start
return self.subapp.start()
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspecapp.py", line 44, in start
specs = self.kernel_spec_manager.get_all_specs()
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspec.py", line 224, in get_all_specs
} for kname in d}
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspec.py", line 224, in <dictcomp>
} for kname in d}
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspec.py", line 192, in _get_kernel_spec_by_name
return self.kernel_spec_class.from_resource_dir(resource_dir)
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspec.py", line 40, in from_resource_dir
kernel_dict = json.load(f)
File "D:\Anaconda3\lib\json\__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "D:\Anaconda3\lib\json\__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "D:\Anaconda3\lib\json\decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 15 (char 14)
I had created a new file under C:\Users\myusername\.ipython\kernels\pyspark2.2, a folder I created in order to install Apache Spark and enable Pixiedust on Jupyter Notebook, but that is also not working.
I referred to the following link to create the kernel.json file below: https://github.com/pixiedust/pixiedust/wiki/Setup:-Install-and-Configure-pixiedust
{
  "display_name": "pySpark (Spark 2.3.1) Python 3",
  "language": "python",
  "argv": [ "D:\Anaconda3\", "-m", "ipykernel", "-f", "{connection_file}" ],
  "env": {
    "SPARK_HOME": "D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\",
    "PYTHONPATH": "D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\python\:D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip",
    "PYTHONSTARTUP": "D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\python\pyspark\shell.py",
    "PYSPARK_SUBMIT_ARGS": "--driver-class-path D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\data\mllib\* --master local[10] pyspark-shell",
    "SPARK_DRIVER_MEMORY": "10G",
    "SPARK_LOCAL_IP": "127.0.0.1"
  }
}
Thanks
Ganesh Bhat
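A note on diagnosing this: "Extra data: line 1 column 15 (char 14)" means the parser finished a valid JSON value after 14 characters and then found more text; the bare string "display_name" (with quotes) is exactly 14 characters, which suggests the pyspark2.2 kernel.json may simply be missing its opening { (this is an inference from the error position, not confirmed). One way to find every broken spec is to parse each kernel.json directly; a minimal sketch (the search path is an assumption taken from the question, check jupyter kernelspec list for the real directories):

```python
import json
from pathlib import Path


def broken_kernel_specs(root: Path):
    """Yield (path, error) for each kernel.json under root that is not valid JSON."""
    for spec in root.glob("*/kernel.json"):
        try:
            json.loads(spec.read_text(encoding="utf-8"))
        except json.JSONDecodeError as err:
            yield spec, err


# Directory from the question (an assumption; other kernels live elsewhere):
for spec, err in broken_kernel_specs(Path.home() / ".ipython" / "kernels"):
    print(f"{spec}: {err}")
```

Fixing or deleting whichever kernel.json this flags (and remembering that backslashes must be escaped as \\ in JSON strings) should let jupyter kernelspec list run again.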
I'm using Odoo 11 on localhost, and recently I restored a database from pgAdmin 4; there it completed successfully. But when I choose it from the Odoo login screen, the screen goes blank and does not respond (see the attached picture).
I tried this to reset the JavaScript in the browser:
localhost:8069/web?debug=
but it is still not working.
Here are the logs:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_cron.py", line 92, in _callback
self.env['ir.actions.server'].browse(server_action_id).run()
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_actions.py", line 536, in run
res = func(action, eval_context=eval_context)
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_actions.py", line 417, in run_action_code_multi
safe_eval(action.sudo().code.strip(), eval_context, mode="exec", nocopy=True) # nocopy allows to return 'action'
File "C:\Odoo 11.0\server\odoo\tools\safe_eval.py", line 370, in safe_eval
pycompat.reraise(ValueError, ValueError('%s: "%s" while evaluating\n%r' % (ustr(type(e)), ustr(e), expr)), exc_info[2])
File "C:\Odoo 11.0\server\odoo\tools\pycompat.py", line 85, in reraise
raise value.with_traceback(tb)
File "C:\Odoo 11.0\server\odoo\tools\safe_eval.py", line 347, in safe_eval
return unsafe_eval(c, globals_dict, locals_dict)
File "", line 1, in <module>
File "C:\Odoo 11.0\server\odoo\addons\mail\models\ir_autovacuum.py", line 13, in power_on
return super(AutoVacuum, self).power_on(*args, **kwargs)
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_autovacuum.py", line 36, in power_on
self._gc_transient_models()
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_autovacuum.py", line 20, in _gc_transient_models
model._transient_vacuum(force=True)
File "C:\Odoo 11.0\server\odoo\models.py", line 4048, in _transient_vacuum
self._transient_clean_rows_older_than(self._transient_max_hours * 60 * 60)
File "C:\Odoo 11.0\server\odoo\models.py", line 4009, in _transient_clean_rows_older_than
self.sudo().browse(ids).unlink()
File "C:\Odoo 11.0\server\odoo\models.py", line 2857, in unlink
cr.execute(query, (sub_ids,))
File "C:\Odoo 11.0\server\odoo\sql_db.py", line 155, in wrapper
return f(self, *args, **kwargs)
File "C:\Odoo 11.0\server\odoo\sql_db.py", line 232, in execute
res = self._obj.execute(query, params)
ValueError: <class 'psycopg2.IntegrityError'>: "null value in column "wizard_id" violates not-null constraint
DETAIL: Failing row contains (1, null, 8, null, null, 1, 2018-01-01 03:32:24.944104, 1, 2018-01-01 03:32:25.077112).
CONTEXT: SQL statement "UPDATE ONLY "public"."change_password_user" SET "wizard_id" = NULL WHERE $1 OPERATOR(pg_catalog.=) "wizard_id""
" while evaluating
'model.power_on()'
I think some method is not found and your Odoo source code is old, so get the latest code from the Odoo GitHub: https://github.com/odoo/odoo
Then, from the terminal, update all the modules like:
./odoo-bin -d your_database_name --db-filter your_database_name --addons-path your_all_addons_path_name -u all
This is a helpful tip.
Maybe you have some missing files; try to also restore the folder named filestore.
That folder can be found in:
/home/$User/.local/share/Odoo/filestore
Replace $User with your Ubuntu username.