I am using AWS CDK inside VS Code. My project layout is such that a single directory contains multiple top-level stacks.
I would like to be able to share code between Stacks. To be specific, it is code and not cross-stack references or resources.
For instance, I have a tagging function that I want to be able to share and use across all stacks:
import aws_cdk as cdk  # CDK v2; on v1 this would be `from aws_cdk import core as cdk`

def default_tag_stacks(*args: cdk.Stack):
    for stack in args:
        cdk.Tags.of(stack).add("OU", "PlatformInfrastructure")
        cdk.Tags.of(stack).add("GovernanceLevel", "Production")
        cdk.Tags.of(stack).add("Owner", "Platform")
My editor sees the library and picks up the import, but when running
cdk synth
I get an import error. Is there a path variable or another project structure I can follow so I can share code that is not part of a particular stack?
The project structure is as follows:
/
lib/tagging.py
stack-1/app.py
stack-2/app.py
The imports are as follows:
from lib.tagging import tag_all_app_stacks_default
cd stack-1
cdk synth
gives
Traceback (most recent call last):
File "app.py", line 7, in <module>
from lib.tagging import tag_all_app_stacks_default
ModuleNotFoundError: No module named 'lib'
You should decide what the top-level project directory for imports is. If it is /, then app.py and cdk.json should live in /, and cdk synth will work from there. Otherwise (i.e. if your working directory for the project is stack-1), the import should be from tagging import tag_all_app_stacks_default. You can read more about how Python module imports work at https://docs.python.org/3/tutorial/modules.html.
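If you want to keep a separate app.py per stack directory, another option (not part of the answer above, just a hedged sketch) is to put the project root on sys.path before the shared import. This assumes app.py sits one level below the directory that contains lib/, and uses the default_tag_stacks function shown in the question:

# stack-1/app.py -- a sketch; adds the project root to sys.path so `lib` becomes importable
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent))

from lib.tagging import default_tag_stacks  # now resolves against the project root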
Related
I'm working on a pybind11 extension written in C++, but I'm having a hard time understanding how it should be distributed.
The project links to a number of third party libraries (e.g. libpng, glew etc.).
The project builds fine with CMake and generates a .so file. Now I am not sure what the right way of installing this extension is. The extension seems to work: if I copy the file into the Python lib directories it is picked up (I can import it, and it works correctly). However, this is clearly not the way to go, I think.
I also tried the setuptools route (from https://pybind11.readthedocs.io/en/stable/compiling.html) by creating a setup.py file like this:
import sys

# Available at setup time due to pyproject.toml
from pybind11 import get_cmake_dir
from pybind11.setup_helpers import Pybind11Extension, build_ext
from setuptools import setup
from glob import glob

files = sorted(glob("*.cpp"))

__version__ = "0.0.1"

ext_modules = [
    Pybind11Extension("mylib",
        files,
        # Example: passing in the version to the compiled code
        define_macros = [('VERSION_INFO', __version__)],
    ),
]

setup(
    name="mylib",
    version=__version__,
    author="fab",
    author_email="fab#fab",
    url="https://github.com/pybind/python_example",
    description="mylib",
    long_description="",
    ext_modules=ext_modules,
    extras_require={"test": "pytest"},
    cmdclass={"build_ext": build_ext},
    zip_safe=False,
    python_requires=">=3.7",
)
and now I can build the extension by simply calling
pip3 install .
however, it looks like all the links are broken: whenever I try importing the extension in Python I get linkage errors, as if setuptools does not correctly link the extension with the third-party libs. For instance, errors in linking with libpng as in:
>>> import mylib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so: undefined symbol: png_sig_cmp
However, I have no clue how to add this link info to setuptools, and I don't even know if that's possible (it would be the setuptools equivalent of CMake's target_link_libraries).
I am really at a loss after weeks of reading documentation, forum threads and failed attempts. If anyone is able to point me in the right way or to clear some of the fog it would be really appreciated!
Thanks!
Fab
/home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so: undefined symbol: png_sig_cmp
This line pretty much says it clearly. Your local shared object file .so can't find the libpng.so against which it is linked.
You can confirm this by running:
ldd /home/fabrizio/.local/lib/python3.8/site-packages/mylib.cpython-38-x86_64-linux-gnu.so
There is no equivalent of target_link_libraries() in setuptools, because that wouldn't make any sense here: the library is already built and you've already linked it. This is your system more or less telling you that it can't find the libraries it needs, and those most likely need to be installed.
This is also one of the reasons why Linux distributions provide their own package managers and why you should use the developer packages provided by said distributions.
So how do you fix this? Well, your .so file needs to find the other .so files against which you linked; to understand how this works, I will refer you to this link.
My main guess, based on the fact that it works when you manually copy the files, is that during the build process you probably specify the rpath to a local directory. Hence, what you most likely need to do is tell setuptools that it needs to copy those files when installing.
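If the undefined symbol means the setuptools build never linked against libpng at all (rather than the library simply being missing at runtime), the link information can be expressed on the Extension itself. This is only a sketch, not part of the answer above; the library names and directories are assumptions that depend on where the development packages live on your system:

ext_modules = [
    Pybind11Extension(
        "mylib",
        sorted(glob("*.cpp")),
        define_macros=[("VERSION_INFO", __version__)],
        libraries=["png", "GLEW"],                       # assumption: equivalent of -lpng -lGLEW
        # library_dirs=["/opt/thirdparty/lib"],          # hypothetical non-standard install prefix
        # runtime_library_dirs=["/opt/thirdparty/lib"],  # embeds an rpath so the loader can find the .so at import time
    ),
]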
I'm using Thonny version 3.3.13 on Windows 10 to program Raspberry Pi Pico.
The main program is main.py. I have no issues with it (the examples are working), except for local imports.
I'm following this tutorial.
It is not a duplicate, as I've searched and tested many versions of the import on Stack Overflow and many other websites for hours.
My file structure:
sd_card_read
|-main.py
|-lib
|-__init__.py
|-SDCard.py
My main.py file:
import sys
print(sys.path)
import SDCard
#... the rest of the code
The error I'm getting is:
['', '.frozen', '/lib']
Traceback (most recent call last):
File "<stdin>", line 10, in <module>
ImportError: no module named 'SDCard'
How can I solve the import?
Notes:
I tried appending '.' and '/' to sys.path; it does not work, e.g. sys.path.append('/')
I tried different versions of the import, with no luck, e.g. from lib import SDCard
While Thonny allows you to run a file opened from your local computer, it ONLY allows importing modules from the Pico's own internal storage.
For me, this is confusing.
I ran "Save copy..." on all my module files, chose "Raspberry Pi Pico" and entered the filename manually.
Maybe there is another way of doing this in Thonny, as this is my first time using MicroPython on RPi Pico.
Try
from lib import SDCard
It works like that because lib is the package, not SDCard; SDCard is just a module file inside the package.
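For example, assuming lib/SDCard.py on the Pico defines a class named SDCard (the class name is an assumption based on the file name), main.py could look like this:

# main.py -- a sketch of the corrected import
from lib import SDCard            # imports the module lib/SDCard.py
# or, to use the class directly:
# from lib.SDCard import SDCard   # assumes the module defines a class named SDCard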
I have had a look at several different topics on this matter but can't work out how to apply them to my situation. I don't have an __init__.py in my test folder, and I have tried to use conftest. I have a directory structure like this:
--app
--app.py
--src
--__init__.py
--module1.py
--module2.py
--module3.py
--configs
--config.json
--non-default-config.json
--tests
--test1.py
--conftest.py
where app.py imports module1, which then imports modules 2 & 3 (using import src.module2). I load up config.json in all the module files (and app.py) using:
with open('configs/config.json') as f:
    CFG = json.load(f)
This works when I run app.py from the app directory. However, when I run pytest (which I believe should also be referencing from the app directory, since conftest.py is in the app directory) and it imports module1 (using import src.module1), it cannot find configs/config.json, but will find app/configs/config.json. I cannot use this as it will cause my app to break when I run app.py. However, Pytest can find the imports from within the src folder, even though this is on the same level as the configs folder.
If I move the conftest.py outside of the app directory and import module1 using import app.src.module1 then this import succeeds, but the import of module2 inside module1 then fails.
How can I resolve this issue? And is there a better way of structuring my project?
Solved this by running pytest from inside the app folder instead of from the base directory.
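Running pytest from inside app works because the relative path configs/config.json is resolved against the current working directory. If you would rather not depend on the working directory at all, a hedged alternative (a sketch, assuming module1.py lives in app/src/ and the configs live in app/configs/) is to build the path from the module's own location:

# app/src/module1.py -- sketch; resolves the config path relative to this file
import json
from pathlib import Path

CONFIG_PATH = Path(__file__).resolve().parent.parent / "configs" / "config.json"

with open(CONFIG_PATH) as f:
    CFG = json.load(f)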
Can anyone please suggest how we can execute Robot Framework test cases and files via the command line?
My Robot Framework Directory Location is as follows :
/Users/tanyagrover/Desktop/Robot Files/Charcoal PreProd
I've tried :
robot -L debug Charcoal preprod.robot
and got this error:
File "/usr/local/bin/robot", line 6, in <module>
from robot.run import run_cli
ModuleNotFoundError: No module named 'robot'
I'm using ride.py to create my test cases, and they run fine when I'm using the RIDE UI. But I want to run my test cases using the Robot CLI. Whenever I execute my .robot file using the robot command, I get the following error:
robot Login.robot
Traceback (most recent call last):
File "/usr/local/bin/robot", line 6, in
from robot.run import run_cli
ModuleNotFoundError: No module named 'robot'
Thank You
The reason you are getting the error below is that you need to use the same virtualenv/interpreter where you have robotframework installed (the one you have configured your Eclipse to run on). Otherwise, you will get this error:
File "/usr/local/bin/robot", line 6, in <module>
from robot.run import run_cli
ModuleNotFoundError: No module named 'robot'
Steps to remedy:
You need to use the same virtualenv/interpreter,
then make sure you have robotframework installed,
and then invoke robot; only then is it going to work.
APPROACH#0
Assuming that you have created a virtualenv/interpreter with robotframework installed successfully, then you need to just
cd to that specific directory and
then execute robot as mentioned below.
If you want to run all the test cases from all the files and folders under PreProd:
cd /Users/tanyagrover/Desktop/Robot\ Files/Charcoal\ PreProd
robot *.robot
NOTE: a few users are confused about cd when the directory name contains a space. I have created a simple folder named "sample sd" (there is a space in the folder name).
On Mac, this works:
cd sample\ sd/
06:30 PM##~/sample sd::>
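If you want to be sure the run uses the interpreter where robotframework is actually installed, you can also bypass the /usr/local/bin/robot script and call Robot Framework's Python API directly. A minimal sketch, assuming the interpreter running it has robotframework installed and that Login.robot is the suite to execute:

# run_login.py -- sketch; runs the suite through the robot.run API
from robot import run

# equivalent to: robot --loglevel DEBUG Login.robot
rc = run("Login.robot", loglevel="DEBUG")
raise SystemExit(rc)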
First, make sure you have Robot Framework installed and that it can be found via the PYTHONPATH environment variable.
For executing the tests, there are many ways to do this.
Option #1:
Go to the Charcoal PreProd folder and just run robot Suites
Option #2:
Go to the Suites folder and just run robot .
Option #3:
If you want to run only the Login test suite, run robot Login.robot in the Charcoal PreProd folder (assuming the file extension for the Login file is .robot).
Also note that the last argument cannot have spaces, as it does in Charcoal preprod.robot. In this case, you should use quotes: 'Charcoal preprod.robot'.
Setting the default Python version to 3.6 worked for me.
I have two modules, both named connection.py in two separate environments listed below. Both of the folders containing connection.py are in my PYTHONPATH system environment variable.
However, if that of spec is not placed above that of bvbot, spec's test_connection.py attempts to import from the connection.py of bvbot.
In cmd, I can resolve this by moving the path of spec above that of bvbot. But in Visual Studio Code, spec's test_connection.py still imports from bvbot's connection.py.
The two environments of interest are:
C:\Users\You_A\Desktop\2016Coding\VirtualEnviroments\spec\spec_trading
C:\Users\You_A\Desktop\2016Coding\VirtualEnviroments\bvbot\Legacy_bvbot
Structure of the spec path above:
src/
spec_trading/
__init__.py
connection.py
tests/
__init__.py
connection.py
spec test_connection.py:
import pytest
from connection import Connection, OandaConnection
class TestConnection:
    def test_poll_timeout(self):
        connection = Connection()
        timeout = 10.0
        connection.set_poll_timeout(timeout)
        assert connection.poll_timeout == timeout
What am I doing wrong here? How can I resolve this without manually faffing with my system's environment variables, and how can I fix the VS Code issue?
The easiest solution is to not use implicit relative imports (I assume this is Python 2.7). Basically, use explicit relative imports and make sure the imports resolve within the package they are contained in, instead of Python having to search sys.path for the module.
And if you are using Python 2.7, put from __future__ import absolute_import at the top of the file.
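A minimal sketch of what that looks like in the test file, assuming tests/ sits inside the spec_trading package next to connection.py and both directories keep their __init__.py files:

# spec_trading/tests/test_connection.py -- sketch of an explicit relative import
from __future__ import absolute_import  # only needed on Python 2.7

from ..connection import Connection, OandaConnection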