Problem with "No access to /dev/mem."
I have HA running in a Docker container on a Raspberry Pi 4.
I can read the temperature from a 1-Wire sensor on GPIO4:
sensor:
  - platform: onewire
    names:
      28-3c01f09519d1: Sensor1
But when I want to control a GPIO pin, I get an error.
switch:
  - platform: rpi_gpio
    ports:
      16: light
Error:
"Error while setting up rpi_gpio platform for switch.
RuntimeError: No access to /dev/mem. Try running as root!"
In Docker I tried setting:
privileged: true
command: ["--privileged"]
devices:
  - /dev/mem:/dev/mem
  - /dev/gpiomem:/dev/gpiomem
volumes:
  - /home/pi/homeassistant:/config
  - /dev/gpiomem:/dev/gpiomem
  - /dev/mem:/dev/mem
And I added the user pi to the gpio group.
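For reference, here is everything above combined into a single compose sketch (the service name and image are assumptions on my part). Note that command: ["--privileged"] only passes --privileged as an argument to the container's entrypoint rather than to Docker itself, so privileged: true is the setting that actually matters:

services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable  # assumed image
    privileged: true                # implies access to /dev/mem
    devices:
      - /dev/mem:/dev/mem
      - /dev/gpiomem:/dev/gpiomem
    volumes:
      - /home/pi/homeassistant:/config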
But nothing helps ...
Anyone know how to solve this?
I am trying to set up a local Beam runner for easier testing/developing.
I'd like to be able to test a Python pipeline that uses Kafka IO locally on my Mac.
Here's my current docker-compose, which reflects my current plan for the entire framework:
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - "9092:9092"
  jobmanager:
    image: flink_image
    command: ['jobmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\nparallelism.default: 2"
    ports:
      - "8081:8081"
  taskmanager:
    image: flink_image
    scale: 1
    depends_on:
      - jobmanager
    command: ['taskmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2\nparallelism.default: 2"
  beam-jobserver:
    image: flink_image
    ports:
      - "8097:8097"
      - "8098:8098"
      - "8099:8099"
    entrypoint:
      - java
      - -cp
      - /target/flink/flink-web-upload/beam-runner.jar
      - org.apache.beam.runners.flink.FlinkJobServerDriver
      - --flink-master=jobmanager
      - --job-host=0.0.0.0
And my pipeline looks like this:
import logging

import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service
from apache_beam.options.pipeline_options import PipelineOptions

LOCAL_ARGS = [
    '--streaming',
    '--runner=portableRunner',
    '--environment_type=LOOPBACK',
    '--job_endpoint=localhost:8099',
    '--artifact_endpoint=localhost:8098',
    '--defaultEnvironmentType=EXTERNAL',
    '--defaultEnvironmentConfig=host.docker.internal:5000',
]

with beam.Pipeline(options=PipelineOptions(LOCAL_ARGS)) as pipeline:
    result = (
        pipeline
        | "Kafka Read" >> ReadFromKafka(
            consumer_config={
                "bootstrap.servers": "kafka:9092",
                "auto.offset.reset": "earliest",
            },
            topics=["test.topic"],
            with_metadata=False,
            expansion_service=default_io_expansion_service(
                append_args=[
                    '--defaultEnvironmentType=PROCESS',
                    '--defaultEnvironmentConfig={"command":"/opt/apache/beam/java_boot"}',
                    '--experiments=use_deprecated_read',
                ]
            )
        )
        | "logging" >> beam.Map(lambda x: logging.info(f"logged: {x}"))
    )
However, it looks like LOOPBACK opens a port on my host machine and asks the task manager to talk back to it via localhost:<randomPort>, which is not accessible from inside the container.
Unfortunately, host networking is not supported by Docker on Mac, so I need to find a way to override the LOOPBACK settings so that it connects to host.docker.internal:<dedicated_pool> instead of a random port on my host machine. Or are there other suggested workarounds? Thanks!
(The entire infra can be found here: https://gist.github.com/lydian/0db7614652c2ccdc733884134bf67f9b)
It looks like this is not supported. LOOPBACK mode mostly targets very simple setups.
You could come close by starting the worker manually, e.g.
python -m apache_beam.runners.worker.worker_pool_main --service_port=PORT
and then passing --environment_type=EXTERNAL --environment_config=host.docker.internal:PORT.
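Concretely, with the pipeline from the question, which already points at host.docker.internal:5000, that could look like (the port is simply the one the question already uses):

python -m apache_beam.runners.worker.worker_pool_main --service_port=5000

with LOCAL_ARGS changed to use:

'--environment_type=EXTERNAL',
'--environment_config=host.docker.internal:5000',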
I was facing similar struggles recently. Luckily there are two environment variables that facilitate testing on Docker for Mac. Unfortunately, there's not much documentation around them currently.
DOCKER_MAC_CONTAINER=1 limits the ports used for communication with SDK workers to the range 8100-8200 instead of using random ports. Ports in that range are used in a round-robin fashion and have to be published.
BEAM_WORKER_POOL_IN_DOCKER_VM=1 tells an SDK worker to communicate with a runner node via the Docker host (host.docker.internal) instead of localhost.
Here's an example of how to use these with Spark, but Flink shouldn't be any different.
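A sketch of how these might map onto the compose file from the question. Where exactly each variable has to be set is an assumption on my part (the taskmanager is the container that talks to the SDK workers in this Flink setup), and the published 8100-8200 range is the fixed worker port range mentioned above:

taskmanager:
  image: flink_image
  depends_on:
    - jobmanager
  command: ['taskmanager']
  environment:
    FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2\nparallelism.default: 2"
    DOCKER_MAC_CONTAINER: "1"
    BEAM_WORKER_POOL_IN_DOCKER_VM: "1"
  ports:
    - "8100-8200:8100-8200"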
I'm getting an error when the configuration file is set.
My host is Ubuntu 22.04.
Inside the Docker container the user is rabbitmq; running id -u rabbitmq shows the UID is 999.
I changed the file's ownership using: chown 999 advanced.config
But the same error still persists:
Failed to load advanced configuration file "/etc/rabbitmq/advanced.config": unknown POSIX error
Error during startup: {error,failed_to_read_advanced_configuration_file}
version: "3.2"
services:
rabbitmq2:
image: rabbitmq:3-management
hostname: rabbitmq2
container_name: 'rabbitmq2'
ports:
- "5672:5672"
- "15672:15672"
- "5552:5552"
volumes:
- ./advanced/rabbitmq2/advanced.config:/etc/rabbitmq/advanced.config
# or using:
# - type: bind
# source: $PWD/advanced/rabbitmq2/advanced.config
# target: /etc/rabbitmq/advanced.config
environment:
- RABBITMQ_ADVANCED_CONFIG_FILE=/etc/rabbitmq/advanced.config
If I put the file in another place, or use another file name, the container runs, but RabbitMQ doesn't load the configuration file.
I changed the content of the file and it didn't work (RabbitMQ can't load the file). I tried a blank file, and also some actual configuration, for example:
[
  %% 4 replicas by default, only makes sense for nine node clusters
  {rabbit, [{quorum_cluster_size, 4},
            {quorum_commands_soft_limit, 512}]}
]
Be sure the format is correct:
[
  %% 4 replicas by default, only makes sense for nine node clusters
  {rabbit, [{quorum_cluster_size, 4},
            {quorum_commands_soft_limit, 512}]}
].
Note the trailing period.
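A quick way to check that the file parses as Erlang terms from inside the container (assuming the standard image, which ships erl): file:consult succeeds only when every term is terminated by a period.

docker exec rabbitmq2 erl -noshell -eval 'io:format("~p~n", [file:consult("/etc/rabbitmq/advanced.config")]), halt().'

Without the trailing period this prints an {error, ...} tuple instead of {ok, [...]}.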
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I'm using a PFC200 from Wago to automate my home. The blinds and lights are working fine, and I have a lot of scenes implemented in the PLC. Now I would like to visualize the state of the lights and be able to turn them on/off from HA. HA will act purely as a visualization tool; everything else stays on the PLC.
I wanted to connect to my PLC using Modbus, but I'm having a lot of problems with the configuration.
First of all, I'm always getting the following error:
2021-05-19 08:23:41 ERROR (SyncWorker_1) [homeassistant.components.modbus.modbus] Pymodbus: Modbus Error: [Connection] ModbusTcpClient(192.168.10.60:502): Connection unexpectedly closed 0.000071 seconds into read of 8 bytes without response from unit before it closed connection
The data is actually read, but this error is constantly spamming the logs.
The second problem is the data refresh time. I investigated the PLC side, and Modbus works fine there, updating the data immediately, but the change is not reflected in HA even though I have scan_interval: 2.
Here is my configuration:
modbus:
  - name: PLC
    type: tcp
    host: 192.168.10.60
    port: 502

switch:
  - platform: modbus
    scan_interval: 2
    coils:
      - name: fake_switch
        hub: PLC
        slave: 1
        coil: 0
      - name: switch.bedroom_main_light
        hub: PLC
        slave: 1
        coil: 1
      - name: switch.bedroom_wardrobe_light
        hub: PLC
        slave: 1
        coil: 2
      - name: switch.bathroom_main_light
        hub: PLC
        slave: 1
        coil: 4
      - name: switch.bathroom_mirror_light
        hub: PLC
        slave: 1
        coil: 5

light:
  - platform: switch
    name: Bedroom
    entity_id: switch.bedroom_main_light
  - platform: switch
    name: Wardrobe
    entity_id: switch.bedroom_wardrobe_light
  - platform: switch
    name: Bathroom
    entity_id: switch.bathroom_main_light
  - platform: switch
    name: Bathroom - Mirror
    entity_id: switch.bathroom_mirror_light
Maybe I should go with registers instead of coils? Can someone share a working configuration?
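For reference, the register-based variant of one of these switches would look something like this with the modbus switch platform (the register number and command values are placeholders, and the option names should be checked against the docs for your HA version):

switch:
  - platform: modbus
    scan_interval: 2
    registers:
      - name: switch.bedroom_main_light
        hub: PLC
        slave: 1
        register: 1
        command_on: 1
        command_off: 0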
How to set up the Prometheus node-exporter to collect host metrics in Docker Swarm
version: '3.3'
services:
  node-exporter:
    image: prom/node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
      - '--collector.textfile.directory=/etc/node-exporter/'
      - '--collector.enabled="conntrack,diskstats,entropy,filefd,filesystem,loadavg,mdadm,meminfo,netdev,netstat,stat,textfile,time,vmstat,ipvs"'
    ports:
      - 9100:9100
I am getting this error: node_exporter: error: unknown long flag '--collector.enabled', try --help
What's wrong with the last line under the command section of this docker-compose file, and if it is set or passed incorrectly, how do I pass it correctly?
Try using --collector.[collector_name] flags (e.g. --collector.diskstats) instead of --collector.enabled, which no longer works as of version 0.15.
For multiple collectors in version 0.15 or later, list them individually:
--collector.processes --collector.ntp ... and so on
In versions older than 0.15, specific collectors were enabled with:
--collectors.enabled meminfo,loadavg,filesystem
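Applied to the compose file from the question, the command section could look like the sketch below. Note that most of the collectors in that list (diskstats, meminfo, netdev, ...) are enabled by default in current releases, so they usually need no flag at all; the per-collector flags here just illustrate the syntax:

command:
  - '--path.procfs=/host/proc'
  - '--path.sysfs=/host/sys'
  - --collector.filesystem.ignored-mount-points
  - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
  - '--collector.textfile.directory=/etc/node-exporter/'
  - '--collector.conntrack'
  - '--collector.diskstats'
  - '--collector.ipvs'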
I accidentally disabled SWD and JTAG, so I can't download a new program to the development board via J-Link. Then I tried using J-Flash ARM to erase the chip, and the error comes up like this:
Connecting ...
- Connecting via USB to J-Link device 0
- J-Link firmware: V1.20 (J-Link ARM V8 compiled Dec 1 2009 11:42:48)
- JTAG speed: 2000 kHz (Auto)
- Initializing CPU core (Init sequence) ...
- Executing Reset (0, 0 ms)
- Initialized successfully
- JTAG speed: 2000 kHz (Auto)
- Connected successfully
Reading entire flash chip ...
- 64 sectors, 1 range, 0x8000000 - 0x800FFFF
- ERROR: RAM check failed @ address 0x20000000.
- ERROR: Write: 0x03020100 07060504
- ERROR: Read: 0xAAAAAAAA AAAAAAAA
- ERROR: (0 bytes of RAM have been checked successfully)
- ERROR: Failed to read back target memory
Disconnecting ...
- Disconnected
I don't know how to use BOOT0 and BOOT1 to get into ISP mode. BOOT0 is connected to GND.
Post some information about your environment.
Are you using IAR EWARM? If you're not, you should download the size-limited trial version. Then, load one of the basic program examples, and try to flash it to your board.
What board are you using? And what do you mean you "closed" SWD and JTAG? I'm not sure what that refers to: jumpers? An options window?
Help us out here.