AML-S905X-CC (Le Potato): How to enable and use the UART on the 40-pin header as a TTY (overlay)

I want to use the UART on the 40-pin header (pins 8 and 10) of the AML-S905X-CC (Le Potato).
I have tried a couple of OSes, such as Raspbian Stretch Headless and Armbian, and while they boot and work fine, there seems to be no support for the on-board UART on the 40-pin header.
I am happy to use any OS that can provide this.
Do I need to use device tree overlays to enable this?
If so, where can I download the device tree overlay package, and is there a tutorial or some documentation on the process?
If not, how can I use this on-board UART?
Thanks

This is what I do.
Armbian only:
wget https://raw.githubusercontent.com/libre-computer-project/libretech-overlays/for-4.13.y/overlays/meson-gxl-s905x-libretech-cc-i2c-ao.dts
sudo armbian-add-overlay meson-gxl-s905x-libretech-cc-i2c-ao.dts
If Ubuntu:
git clone https://github.com/libre-computer-project/libretech-wiring-tool.git
cd libretech-wiring-tool
sudo make

This works for me:
wget https://raw.githubusercontent.com/libre-computer-project/libretech-overlays/for-4.13.y/overlays/meson-gxl-s905x-libretech-cc-uarta.dts
sudo armbian-add-overlay meson-gxl-s905x-libretech-cc-uarta.dts
python3 -m serial.tools.miniterm
--- Available ports:
--- Enter port index or full name:
No port ever appears in the list, but uarta is working as /dev/ttyAML6.
python3
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import serial
>>> ser = serial.Serial()
>>> ser.baudrate = 9600
>>> ser.port = '/dev/ttyAML6'
>>> ser.open()
>>> ser.write(str.encode('test'))
4
>>>
Serial comm over the port /dev/ttyAML6 works!
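For reference, here is a minimal pyserial sketch for talking to that UART once the overlay is loaded, assuming the device shows up as /dev/ttyAML6 (the name may differ on your image). Note that pyserial's attribute is baudrate, not baud:
import serial  # pip install pyserial

# Open the on-board UART exposed by the uarta overlay.
# /dev/ttyAML6 is what appeared on my system; adjust to match yours.
ser = serial.Serial(port='/dev/ttyAML6', baudrate=9600, timeout=1)

ser.write(b'test\r\n')   # send a few bytes
reply = ser.read(64)     # read up to 64 bytes, or give up after the 1 s timeout
print(reply)
ser.close()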

Related

Cisco IOS-XR, Python 3.7: not able to run commands like 'ls' and 'df' on a Cisco router

On the Cisco IOS-XR router, using the CLI:
RP/0/RP0#show version
Thu Nov 25 07:53:59.103 UTC
Cisco IOS XR Software, Version 6.5.32.11I
Copyright (c) 2013-2020 by Cisco Systems, Inc.
RP/0/RP0#run
Thu Nov 25 07:54:05.231 UTC
[xr-vm_node0_RP0_CPU0:~]$df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 3966080 1332040 2412860 36% /
76892 11848320 43% /mnt/ecu/vdd
[xr-vm_node0_RP0_CPU0:~]$
Using Python, I am able to run show commands using ConnectHandler's send_command:
from netmiko import ConnectHandler
import subprocess
Network_Device = {
    "host": "10.111.22.333",
    "username": "USER123",
    "password": "Pass123",
    "device_type": "cisco_xr",
}
Connect = ConnectHandler(**Network_Device)
Connect.enable()
version1 = "show version"
print(Connect.send_command(version1))
But I am not able to run 'df' or 'ls', because I cannot reach the bash prompt that I normally reach by issuing the 'run' command on the router.
I tried:
disk1files = subprocess.run("df", stdout=subprocess.PIPE)
print(disk1files.stdout.decode())
But that seems wrong (it runs df on my local machine, not on the router). Please suggest the right library or code I can use here.
This is my first question here, so please bear with any silly questions or mistakes in the code.
If by 'df' you are referring to "Don't Fragment", then it is possible to send it like:
Connect.send_command("ping 192.168.10.10 df-bit size 1600")
where 1600 represents the MTU, and for 'ls' there are the link commands:
Connect.send_command("ls-active")
Connect.send_command("ls-active-enabled")
But if you are referring to df and ls in Linux (disk free and list files), then you can use the os module to send commands:
import os
os.system("ls -l")
or use call from the subprocess module:
from subprocess import call
call(["ls", "-l"])
If you need to access the Cisco bash shell:
switch# configure terminal
switch(config)# feature bash-shell
switch# run?
run Execute/run program
run-script Run shell scripts
switch# run bash?
bash Linux-bash
switch# run bash
bash-4.2$ whoami
admin
bash-4.2$ pwd
/bootflash/home/admin
bash-4.2$
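If the goal is to run df/ls on the router itself from Python (rather than on the machine running the script), one approach with netmiko is send_command_timing(), which does not wait for the usual IOS-XR prompt and so can be used to step into the shell and back out. This is only a sketch, assuming the router drops into the bash shell after 'run' exactly as in the CLI session above:
from netmiko import ConnectHandler

Network_Device = {
    "host": "10.111.22.333",
    "username": "USER123",
    "password": "Pass123",
    "device_type": "cisco_xr",
}
Connect = ConnectHandler(**Network_Device)

# send_command_timing() reads output for a while instead of expecting
# a specific prompt, so the changed bash prompt does not break it.
Connect.send_command_timing("run")          # enter the underlying shell
print(Connect.send_command_timing("df"))    # runs on the router, not locally
print(Connect.send_command_timing("ls"))
Connect.send_command_timing("exit")         # back to the IOS-XR prompt
Connect.disconnect()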

GKE - Unable to make cuda work with pytorch

I have set up a Kubernetes node with an NVIDIA Tesla K80 and followed this tutorial to try to run a PyTorch Docker image with the NVIDIA and CUDA drivers working.
My NVIDIA and CUDA drivers are all accessible inside my pod at /usr/local:
$> ls /usr/local
bin cuda cuda-10.0 etc games include lib man nvidia sbin share src
And my GPU is also recognized by my image nvidia/cuda:10.0-runtime-ubuntu18.04:
$> /usr/local/nvidia/bin/nvidia-smi
Fri Nov 8 16:24:35 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79 Driver Version: 410.79 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 73C P8 35W / 149W | 0MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
But after installing PyTorch 1.3.0, I'm not able to make PyTorch recognize my CUDA installation, even with LD_LIBRARY_PATH set to /usr/local/nvidia/lib64:/usr/local/cuda/lib64:
$> python3 -c "import torch; print(torch.cuda.is_available())"
False
$> python3
Python 3.6.8 (default, Oct 7 2019, 12:59:55)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print ('\t\ttorch.cuda.current_device() =', torch.cuda.current_device())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 386, in current_device
_lazy_init()
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 192, in _lazy_init
_check_driver()
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 111, in _check_driver
of the CUDA driver.""".format(str(torch._C._cuda_getDriverVersion())))
AssertionError:
The NVIDIA driver on your system is too old (found version 10000).
Please update your GPU driver by downloading and installing a new
version from the URL: http://www.nvidia.com/Download/index.aspx
Alternatively, go to: https://pytorch.org to install
a PyTorch version that has been compiled with your version
of the CUDA driver.
The error above is strange, because the CUDA version of my image is 10.0 and Google GKE mentions that:
The latest supported CUDA version is 10.0
Also, it is GKE's DaemonSet that automatically installs the NVIDIA drivers:
After adding GPU nodes to your cluster, you need to install NVIDIA's device drivers to the nodes.
Google provides a DaemonSet that automatically installs the drivers for you.
Refer to the section below for installation instructions for Container-Optimized OS (COS) and Ubuntu nodes.
To deploy the installation DaemonSet, run the following command:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
I have tried everything I could think of, without success...
I resolved my problem by downgrading my PyTorch version, building my Docker images from pytorch/pytorch:1.2-cuda10.0-cudnn7-devel.
I still don't really know why it was not working before, other than guessing that PyTorch 1.3.0 is not compatible with CUDA 10.0.
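For anyone debugging a similar mismatch, a quick sanity check from inside the container is to compare the CUDA version the PyTorch wheel was built against with what the driver actually provides; a minimal sketch:
import torch

print("torch version: ", torch.__version__)
print("built for CUDA:", torch.version.cuda)   # CUDA version the wheel was compiled against
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:        ", torch.cuda.get_device_name(0))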

Failed to change font size in ipython qtconsole

I tried to change the ipython qtconsole font size following this answer on Stack Overflow; however, the font size refused to change no matter how I changed ~/.ipython/profile_default/ipython_config.py.
➜ ~ ipython profile locate
/home/nick/.ipython/profile_default
➜ ~ head .ipython/profile_default/ipython_config.py
# Configuration file for ipython.
c = get_config()
c.IPythonWidget.font_size = 16
c.IPythonWidget.font_family = 'Source Code Pro'
➜ ~ uname -a
Linux nick-thinkpad 4.2.5-1-ARCH #1 SMP PREEMPT Tue Oct 27 08:13:28 CET 2015 x86
_64 GNU/Linux
➜ ~ ipython --version
4.0.0
To my surprise, ipython qtconsole --ConsoleWidget.font_size=16 works. What's wrong with my configuration?
From version 4.0 on, ipython qtconsole is deprecated (because of the big split). Instead, use jupyter qtconsole. You can set the font size by adding c.ConsoleWidget.font_size = 12 to ~/.jupyter/jupyter_qtconsole_config.py (this also sets the font size for ipython qtconsole).
Be aware of a bug in jupyter that does not allow you to automatically create a default config file. For now, you just have to create that file manually.
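For completeness, this is roughly what ~/.jupyter/jupyter_qtconsole_config.py could look like once created by hand; the font_family line is an extra assumption, so drop it if your qtconsole version does not expose that option:
# ~/.jupyter/jupyter_qtconsole_config.py -- create this file manually
c = get_config()

# Font settings for the Qt console widget
c.ConsoleWidget.font_size = 12
c.ConsoleWidget.font_family = 'Source Code Pro'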

Installing Raspbian 3.18 + additional packages on a Raspberry Pi

I've just installed Raspbian 3.18 and the following packages:
wget http://mirrordirector.raspbian.org/raspbian/pool/main/b/bluez/bluez_4.99-2_armhf.deb
wget http://mirrordirector.raspbian.org/raspbian/pool/main/libc/libcap-ng/libcap-ng0_0.6.6-2_armhf.deb
wget http://mirrordirector.raspbian.org/raspbian/pool/main/r/radvd/radvd_1.8.5-1_armhf.deb
wget -O kernel.zip http://www.nordicsemi.com/eng/nordic/download_resource/41602/5/28710770
unzip kernel.zip
sudo dpkg -i radvd_1.8.5-1_armhf.deb
sudo dpkg -i libcap-ng0_0.6.6-2_armhf.deb
sudo dpkg -i bluez_4.99-2_armhf.deb
sudo dpkg -i linux-image-3.17.4-release+_1_armhf.deb
sudo dpkg -i linux-headers-3.17.4-release+_1_armhf.deb
sudo nano /boot/config.txt
Add the following line to config.txt:
kernel=vmlinuz-3.17.4-release+
Save and exit.
sudo reboot
and when I restart I get a screen more or less like the attached screenshot. Any idea?
One thing is sure: the rainbow screen means the GPU firmware is loaded, but there is a problem with the kernel image. Which problem? Impossible to say from here. Perhaps the kernel is not found, or it is corrupt. It might be that the kernel you got from www.nordicsemi.com is broken. It might be that you have a typo somewhere. But it can also be a faulty SD card, or the wrong power supply. According to Google:
In some cases (Stuck on the Rainbow Screen), freezing at this point has been fixed by adding "boot_delay=1" to the config.txt file.
If nothing helps, you probably have to go back to the default Raspbian kernel. If you need a more recent kernel than the default Raspbian one, you can switch to Raspbian testing. The testing kernel should be a bit more recent... and it definitely works for me.
This might also help you (https://www.raspberrypi.org/forums/viewtopic.php?t=58151)
Error ACT LED patterns
While booting the ACT LED should blink in an irregular pattern, indicating it is reading from the card. If it starts blinking in a regular (Morse code like) pattern then it is signaling an error.
When it blinks just once: possibly you have an RPi from Micron. Take a good look at the processor: if it says M with an orbit around it, then using the latest software (after Sept 2013) will solve your problem. Also make sure you have a 4 GB SD card: a 2 GB card doesn't work in this particular case.
Other patterns that might appear during a failed boot mean:
3 flashes: start.elf not found
4 flashes: start.elf not launch-able (corrupt)
7 flashes: kernel.img not found
8 flashes: SDRAM not recognized. You need newer bootcode.bin/start.elf firmware, or your SDRAM is damaged
Firmware before 20th October 2012 required loader.bin, and the meaning of the flashes was slightly different:
3 flashes: loader.bin not found
4 flashes: loader.bin not launch-able (corrupt)
5 flashes: start.elf not found
6 flashes: start.elf not launch-able
7 flashes: kernel.img not found
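Purely as a convenience, the current-firmware flash codes above (the first list) boil down to a small lookup table, e.g. in Python:
# ACT LED flash counts for firmware after 20 October 2012 (see the list above)
ACT_LED_ERRORS = {
    3: "start.elf not found",
    4: "start.elf not launch-able (corrupt)",
    7: "kernel.img not found",
    8: "SDRAM not recognized; newer bootcode.bin/start.elf needed, or SDRAM is damaged",
}

flashes = 7
print(ACT_LED_ERRORS.get(flashes, "unknown pattern"))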

ipython style tab complete with ipdb for imported modules

I'm trying to get ipython-style tab completion with pdb by using ipdb.
On a clean Ubuntu 14.04 install (a new AWS instance) I run:
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install python-setuptools
sudo easy_install pip
sudo pip install ipython
sudo pip install ipdb
sudo pip install boto
Then I boot up ipython and try:
ubuntu#ip-10-0-0-244:~$ ipython
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
Type "copyright", "credits" or "license" for more information.
IPython 2.3.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import ipdb
In [2]: import boto
In [3]: ipdb.set_trace()
--Call--
> /usr/local/lib/python2.7/dist-packages/IPython/core/displayhook.py(234)__call__()
233
--> 234 def __call__(self, result=None):
235 """Printing with history cache management.
ipdb> str.
str.capitalize str.encode str.format str.isdigit str.isupper str.lstrip str.rfind str.rsplit str.startswith str.translate
str.center str.endswith str.index str.islower str.join str.mro str.rindex str.rstrip str.strip str.upper
str.count str.expandtabs str.isalnum str.isspace str.ljust str.partition str.rjust str.split str.swapcase str.zfill
str.decode str.find str.isalpha str.istitle str.lower str.replace str.rpartition str.splitlines str.title
ipdb> boto.
boto.[tab] just sits there. If I'm reading the docs right this should work, but maybe I have misunderstood something.
If I define a simple script test.py
import boto
print(boto.__version__)
Then call:
ubuntu#ip-10-0-0-244:~$ python -m ipdb test.py
> /home/ubuntu/test.py(1)<module>()
----> 1 import boto
2
3 print(boto.__version__)
ipdb> n
> /home/ubuntu/test.py(3)<module>()
1 import boto
2
----> 3 print(boto.__version__)
ipdb> boto.
boto.BUCKET_NAME_RE boto.connect_autoscale boto.connect_emr boto.connect_s3 boto.os
boto.BotoConfigLocations boto.connect_beanstalk boto.connect_euca boto.connect_sdb boto.perflog
boto.BucketStorageUri boto.connect_cloudformation boto.connect_fps boto.connect_ses boto.platform
boto.Config boto.connect_cloudfront boto.connect_glacier boto.connect_sns boto.plugin
boto.ENDPOINTS_PATH boto.connect_cloudsearch boto.connect_gs boto.connect_sqs boto.pyami
boto.FileStorageUri boto.connect_cloudsearch2 boto.connect_ia boto.connect_sts boto.re
boto.GENERATION_RE boto.connect_cloudtrail boto.connect_iam boto.connect_support boto.regioninfo
boto.InvalidUriError boto.connect_cloudwatch boto.connect_kinesis boto.connect_swf boto.resultset
boto.NullHandler boto.connect_cognito_identity boto.connect_logs boto.connect_vpc boto.s3
boto.TOO_LONG_DNS_NAME_COMP boto.connect_cognito_sync boto.connect_mturk boto.connect_walrus boto.set_file_logger
boto.UserAgent boto.connect_directconnect boto.connect_opsworks boto.datetime boto.set_stream_logger
boto.VERSION_RE boto.connect_dynamodb boto.connect_rds boto.exception boto.storage_uri
boto.Version boto.connect_ec2 boto.connect_rds2 boto.handler boto.storage_uri_for_key
boto.boto boto.connect_ec2_endpoint boto.connect_redshift boto.init_logging boto.sys
boto.compat boto.connect_elastictranscoder boto.connect_route53 boto.log boto.urlparse
boto.config boto.connect_elb boto.connect_route53domains boto.logging boto.vendored
I get the behavior I'd like.
Does anyone know how to make the tab-complete functionality work for the set_trace() case?
Thanks