How to embed a qt-console with IPython

Whenever I try to use IPython with Qt as GUI support, it gives me this:
$ ipython --gui=qt
Python 2.7.3 (default, Apr 20 2012, 22:39:59)
Type "copyright", "credits" or "license" for more information.
IPython 0.12.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: Got bus address: "unix:abstract=/tmp/dbus-q1DAvsew5j,guid=a3ed4bb7c5723eeff9aaed690000006e"
Connected to accessibility bus at: "unix:abstract=/tmp/dbus-q1DAvsew5j,guid=a3ed4bb7c5723eeff9aaed690000006e"
Registered DEC: true
Registered event listener change listener: true
Registered event listener change listener: true
Registered event listener change listener: true
Registered event listener change listener: true
Registered event listener change listener: true
Registered event listener change listener: true
Registered event listener change listener: true
My questions are:
What is this, and why won't it stop until I send a keyboard interrupt?
How do I embed a qt-console in ipython?

This problem just started happening to me too. I am running Ubuntu 12.04. Removing the Qt accessibility package (qt-at-spi) fixed it for me.
Here is the command to run from the terminal:
sudo apt-get remove --purge qt-at-spi
Source:
http://blog.koppi.me/2012/01/howto-fix-sni-qt19799-warn-024248-774-void-statusnotifieritemfactoryconnecttosnw-invalid-interface-to-snw_service-error-message-on-ubuntu-11-10/

You can in theory accomplish the same thing (stopping the Qt accessibility messages) without purging the package by setting the environment variable QT_ACCESSIBILITY, i.e., add
export QT_ACCESSIBILITY=0
to your shell and/or system startup file (such as ~/.bashrc).
You can look at the README for the Qt accessibility package at
/usr/share/doc/qt-at-spi/README

Related

bitbake error in do_rootfs: systemd depends on update-rc.d

I got a bit stuck debugging a Yocto build problem. I encountered this while updating from Yocto warrior (2.7) to Yocto dunfell (3.1). The build fails while building the rootfs; all earlier steps seem to work:
ERROR: my-project-develop-1.0-r0 do_rootfs: Could not invoke dnf. Command '/shared/build/tmp/work/raspberrypi_cm3-poky-linux-gnueabi/my-project-develop/1.0-r0/recipe-sysroot-native/usr/bin/dnf -v --rpmverbosity=info -y -c /shared/build/tmp/work/raspberrypi_cm3-poky-linux-gnueabi/my-project-develop/1.0-r0/rootfs/etc/dnf/dnf.conf --setopt=reposdir=/shared/build/tmp/work/raspberrypi_cm3-poky-linux-gnueabi/my-project-develop/1.0-r0/rootfs/etc/yum.repos.d --installroot=/shared/build/tmp/work/raspberrypi_cm3-poky-linux-gnueabi/my-project-develop/1.0-r0/rootfs --setopt=logdir=/shared/build/tmp/work/raspberrypi_cm3-poky-linux-gnueabi/my-project-develop/1.0-r0/temp --repofrompath=oe-repo,/shared/build/tmp/work/raspberrypi_cm3-poky-linux-gnueabi/my-project-develop/1.0-r0/oe-rootfs-repo --nogpgcheck install base-version-develop bash cairo cantarell-fonts cellular-geolocation commit-hashes-develop crda curl disable-airplane-mode disable-power-saving-for-some-devices disconnect-wifi-without-connectivity dnsmasq dosfstools e2fsprogs e2fsprogs-resize2fs firmware-develop fit-conf gbs-overlay geofencing-db hostapd htop i2c-tools iw jq lateswap libgpiod libgpiod-tools linux-firmware-rtl8192cu matlab-develop modemmanager mosquitto mosquitto-clients nano network-configuration networkmanager openmoji-fonts os-release ostree ostree-devicetrees ostree-initramfs ostree-kernel packagegroup-base packagegroup-base-extended packagegroup-core-boot packagegroup-core-ssh-openssh parted psplash-raspberrypi pstree raspi-gpio rtwpriv run-postinsts set-modes-and-bands source-han-sans-jp-fonts special-shadow sqlite3tzdata u-boot-fw-utils userland weston weston-init wifi-configurator-frontend-develop wifilm811 wifilm843 wpa-supplicant locale-base-en-us' returned 1:
DNF version: 4.2.2
cachedir: /shared/build/tmp/work/raspberrypi_cm3-poky-linux-gnueabi/my-project-develop/1.0-r0/rootfs/var/cache/dnf
Added oe-repo repo from /shared/build/tmp/work/raspberrypi_cm3-poky-linux-gnueabi/my-project-develop/1.0-r0/oe-rootfs-repo
repo: using cache for: oe-repo
not found other for:
not found modules for:
not found deltainfo for:
not found updateinfo for:
oe-repo: using metadata from Tue 16 Feb 2021 08:59:38 AM UTC.
No module defaults found
--> Starting dependency resolution
--> Finished dependency resolution
Error:
Problem 1: package packagegroup-core-boot-1.0-r17.raspberrypi_cm3 requires systemd, but none of the providers can be installed
- conflicting requests
- nothing provides update-rc.d needed by systemd-1:244.5-r0.cortexa7t2hf_neon_vfpv4
Problem 2: package packagegroup-distro-base-1.0-r83.raspberrypi_cm3 requires packagegroup-core-boot, but none of the providers can be installed
- package packagegroup-base-1.0-r83.raspberrypi_cm3 requires packagegroup-distro-base, but none of the providers can be installed
- package packagegroup-core-boot-1.0-r17.raspberrypi_cm3 requires systemd, but none of the providers can be installed
- conflicting requests
- nothing provides update-rc.d needed by systemd-1:244.5-r0.cortexa7t2hf_neon_vfpv4
Problem 3: package packagegroup-base-1.0-r83.raspberrypi_cm3 requires packagegroup-distro-base, but none of the providers can be installed
- package packagegroup-distro-base-1.0-r83.raspberrypi_cm3 requires packagegroup-core-boot, but none of the providers can be installed
- package packagegroup-base-extended-1.0-r83.raspberrypi_cm3 requires packagegroup-base, but none of the providers can be installed
- package packagegroup-core-boot-1.0-r17.raspberrypi_cm3 requires systemd, but none of the providers can be installed
- conflicting requests
- nothing provides update-rc.d needed by systemd-1:244.5-r0.cortexa7t2hf_neon_vfpv4
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
It seems that systemd-1:244.5 depends on update-rc.d. This doesn't make a lot of sense to me, since I don't need those scripts anymore when using systemd - maybe they are there for compatibility reasons? Puzzled by this, I checked the reference manual, and it seems that I have the right settings to use systemd exclusively:
$ bitbake -e exos-develop | grep "^DISTRO_FEATURES"
DISTRO_FEATURES="acl alsa argp bluetooth ext2 ipv4 ipv6 largefile pcmcia usbgadget usbhost wifi xattr nfs zeroconf pci 3g nfc x11 vfat largefile opengl ptest multiarch wayland vulkan systemd weston wayland sota usrmerge systemd systemd pulseaudio gobject-introspection-data ldconfig"
DISTRO_FEATURES_BACKFILL="pulseaudio sysvinit gobject-introspection-data ldconfig"
DISTRO_FEATURES_BACKFILL_CONSIDERED="sysvinit sysvinit"
DISTRO_FEATURES_DEFAULT="acl alsa argp bluetooth ext2 ipv4 ipv6 largefile pcmcia usbgadget usbhost wifi xattr nfs zeroconf pci 3g nfc x11 vfat"
DISTRO_FEATURES_FILTER_NATIVE="api-documentation"
DISTRO_FEATURES_FILTER_NATIVESDK="api-documentation"
DISTRO_FEATURES_NATIVE="x11 ipv6 xattr sota"
DISTRO_FEATURES_NATIVESDK="x11"
During debugging I also saw that Poky's systemd recipe uses update-rc.d.bbclass. From what I can see, it only becomes active when DISTRO_FEATURES contains sysvinit, which is apparently not the case here. Maybe some caching issue?
Any ideas how I can debug this further?
I found it out myself (interesting how asking questions helps you think...):
The issue was in the systemd recipe itself and related to the systemd-compat-units recipe. I was able to fix it with this in my layer's recipes-core/systemd/systemd_%.bbappend:
# Disable all relations to update-rc.d:
PACKAGECONFIG_remove = "sysvinit"
RRECOMMENDS_${PN}_remove = "systemd-compat-units"
I'm still wondering how this issue came to be, though.
It would be great if somebody could explain why it happened at all.

I'd like to reduce telemetry in VSCode

When VSCode is running, I see
[master *%]> ps aux | grep enableTelemetry
pl 29331 4.8 3.7 1326800 223568 ? Sl Mar10 2:07 /usr/share/code/code --max-old-space-size=3072 /usr/share/code/resources/app/extensions/node_modules/typescript/lib/tsserver.js --useInferredProjectPerProjectRoot --enableTelemetry --cancellationPipeName /tmp/vscode-typescript1000/a21f3a40b2e3452a6c26/tscancellation-31b196e0b1a09b5f8b22.tmp* --globalPlugins typescript-vscode-sh-plugin --pluginProbeLocations /usr/share/code/resources/app/extensions/typescript-language-features --locale en --noGetErrOnBackgroundUpdate --validateDefaultNpmLocation
pl 29366 0.2 1.4 573640 85360 ? Sl Mar10 0:05 /usr/share/code/code /usr/share/code/resources/app/extensions/node_modules/typescript/lib/typingsInstaller.js --globalTypingsCacheLocation /home/pl/.cache/typescript/3.8 --enableTelemetry --typesMapLocation /usr/share/code/resources/app/extensions/node_modules/typescript/lib/typesMap.json --validateDefaultNpmLocation
while my settings are
~/.config/Code/User/settings.json:
32: "telemetry.enableCrashReporter": false,
33: "telemetry.enableTelemetry": false,
It doesn't really scare me, I just don't like that they hide it.
The enableTelemetry command-line flag you see being used for TypeScript does not mean any telemetry is being uploaded; it only makes the TypeScript server that powers VS Code's JS/TS IntelliSense send telemetry data back to the main VS Code process. Depending on your settings, VS Code itself may then upload this telemetry data.
Whether or not any telemetry is uploaded by VS Code or its built-in extensions is controlled by the normal VS Code telemetry settings. Again though, the TypeScript server will always run with --enableTelemetry regardless of any user settings, because the flag is independent of telemetry being uploaded.
You can check this by building VS Code from source. Network monitors will show that no telemetry is being sent from your build, but the --enableTelemetry flag will still be used.
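As a side note, if you want to double-check what your user settings actually contain, a small sketch along these lines works. It assumes the Linux settings path quoted in the question, and it simply greps the raw text rather than parsing it, since settings.json may contain comments that a strict JSON parser would reject:
import os

# Print the telemetry-related lines from the VS Code user settings file.
# The path is the Linux default quoted in the question; adjust it elsewhere.
settings_path = os.path.expanduser("~/.config/Code/User/settings.json")
with open(settings_path) as settings_file:
    for number, line in enumerate(settings_file, start=1):
        if "telemetry" in line.lower():
            print("%d: %s" % (number, line.rstrip()))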

PKCS#11 CKR_DEVICE_REMOVED error logging in to HSM

I have the SmartCard-HSM USB token plugged into my laptop. I can see it when I run a command through an application using the PKCS#11 API:
Slot 0
Slot info:
Description: Identiv uTrust 3512 SAM slot Token [CCID Interface] (55511725602
Manufacturer ID: Identiv
Hardware version: 2.2
Firmware version: 0.0
Token present: yes
Token info:
Manufacturer ID: www.CardContact.de
Model: PKCS#15 emulated
Hardware version: 24.13
Firmware version: 2.5
Serial number: DECC0300697
Initialized: yes
User PIN init.: yes
Label: UserPIN (SmartCard-HSM)
It has been initialized with an SO-PIN and a USER-PIN.
When I try to log in to the HSM using C_Login, I get a CKR_DEVICE_REMOVED error back. The USB HSM is still plugged in. I have googled the error, but nothing fruitful came up.
login_token -LOGIN user -SLOT 0 -UPIN user-pin
EROR: rv=0x00000032: Could not log in on the token.
How can I log in to the HSM?
The following is the description of the CKR_DEVICE_REMOVED error from the PKCS#11 v2.20 specification:
CKR_DEVICE_REMOVED: The token was removed from its slot during the
execution of the function.
If you did not attach/detach a reader and did not insert/remove the smartcard after the PKCS#11 library was loaded, then I don't see any obvious reason why you are receiving this error.
However, you are using the PKCS#11 library provided by the OpenSC project, so you can enable its debugging via an environment variable or the configuration file. You may be able to find the cause of the error by exploring the debug output yourself. If not, then your best bet is to open a new OpenSC issue and discuss your problem with the OpenSC project members.
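For illustration, here is a minimal sketch of that approach. It is not the login_token tool from the question; it assumes the OpenSC PKCS#11 module at /usr/lib/opensc-pkcs11.so and the PyKCS11 Python bindings, and the module path and PIN are placeholders you would adjust for your setup:
import os
from PyKCS11 import PyKCS11Lib, PyKCS11Error, CKF_SERIAL_SESSION, CKF_RW_SESSION

# Ask OpenSC for verbose debug output (it goes to stderr by default)
# before its PKCS#11 module is loaded.
os.environ["OPENSC_DEBUG"] = "9"

pkcs11 = PyKCS11Lib()
pkcs11.load("/usr/lib/opensc-pkcs11.so")  # module path may differ on your system

slot = pkcs11.getSlotList(tokenPresent=True)[0]
session = pkcs11.openSession(slot, CKF_SERIAL_SESSION | CKF_RW_SESSION)
try:
    session.login("user-pin")  # C_Login with the user PIN
    print("Login OK")
except PyKCS11Error as e:
    # CKR_DEVICE_REMOVED and similar errors surface here; the debug
    # output should show what happened at the reader/card level.
    print("Login failed: %s" % e)
finally:
    session.closeSession()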

OpenOCD reports "Failed with code (1)" for Eclipse debugging an STM32F429 Discovery board

When I start a debugging session under Eclipse (Luna) for my STM32F429 Discovery board, I get the following error:
OpenOCD failed with code (1).
The information in the console pane is:
Open On-Chip Debugger 0.9.0-dev-00223-g1567cae (2015-01-12-13:43)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.sourceforge.net/doc/doxygen/bugs.html
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
srst_only separate srst_nogate srst_open_drain connect_deassert_srst
Started by GNU ARM Eclipse
Info : clock speed 2000 kHz
Error: open failed
in procedure 'init'
in procedure 'ocd_bouncer'
in procedure 'transport'
in procedure 'init'
The "board" file being referenced in the debug setup is: stm32f429discovery.cfg
I did have this working for another ST-Micro board, and I could do a full debug session with no problems. Suddenly it just stopped being able to access the board, and I now get the same errors for it as I get with this board.
I was hoping to use purely open-source software running on Linux to work with these boards. I'm hoping that someone can get me out of this situation.
Thanks in advance.
Cheers!!
Which command and debugger are you using?
Try:
openocd -f interface/jlink.cfg -f target/stm32f429discovery.cfg

Can I register event callbacks using the libvirt Python module with a QEMU backend?

I would like to write some code to monitor events for domains running under QEMU, managed by libvirt. However, trying to register an event handler yields the following error:
>>> import libvirt
>>> conn = libvirt.openReadOnly('qemu:///system')
>>> conn.domainEventRegister(callback, None)
libvir: Remote error : this function is not supported by the connection driver: no event support
("callback" in this case is a stub function that simply prints its arguments.)
The examples I've been able to find regarding libvirt's event handling don't seem to be specific as to which backend hypervisors support which features. Is this expected to work for QEMU backends?
I'm running a Fedora 16 system, which includes libvirt 0.9.6 and qemu-kvm 0.15.1.
For folks finding themselves here via <searchengine>:
UPDATE 2013-10-04
Many months and a few Fedora releases later, the event-test.py code in the libvirt git repository runs correctly on Fedora 19.
Make sure you have registered the default libvirt event loop implementation (or set up your own event loop) before registering for events.
There is a nice example of event handling shipped with the libvirt source (the file is called event-test.py). I'm attaching an example based on that code:
import libvirt
import time
import threading

def callback(conn, dom, event, detail, opaque):
    print "EVENT: Domain %s(%s) %s %s" % (dom.name(),
                                          dom.ID(),
                                          event,
                                          detail)

eventLoopThread = None

def virEventLoopNativeRun():
    # Dispatch libvirt events forever; requires the default event loop
    # implementation to have been registered first.
    while True:
        libvirt.virEventRunDefaultImpl()

def virEventLoopNativeStart():
    global eventLoopThread
    # Register libvirt's default event loop implementation and run it in a
    # daemon thread so it doesn't block the main program.
    libvirt.virEventRegisterDefaultImpl()
    eventLoopThread = threading.Thread(target=virEventLoopNativeRun,
                                       name="libvirtEventLoop")
    eventLoopThread.setDaemon(True)
    eventLoopThread.start()

if __name__ == '__main__':
    # The event loop must be running before domainEventRegister is called.
    virEventLoopNativeStart()
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegister(callback, None)
    conn.setKeepAlive(5, 3)
    while conn.isAlive() == 1:
        time.sleep(1)
Good luck!
//Seto