I’m using a PFC200 from Wago to automate my home. The blinds and lights are working fine, and I have a lot of scenes implemented in the PLC. Now I would like to visualize the state of the lights and be able to turn them on/off from HA. HA will serve only as a visualization tool; everything else stays on the PLC.
I want to connect to my PLC using Modbus, but I’m having a lot of problems with the configuration.
First of all, I’m always getting the following error:
2021-05-19 08:23:41 ERROR (SyncWorker_1) [homeassistant.components.modbus.modbus] Pymodbus: Modbus Error: [Connection] ModbusTcpClient(192.168.10.60:502): Connection unexpectedly closed 0.000071 seconds into read of 8 bytes without response from unit before it closed connection
The data is actually read, but this error keeps spamming the logs.
The second problem is the data refresh time. I investigated the PLC side and Modbus is working fine there, updating the data immediately, but the change is not reflected in HA even though I have scan_interval: 2.
Here is my configuration:
modbus:
  - name: PLC
    type: tcp
    host: 192.168.10.60
    port: 502

switch:
  - platform: modbus
    scan_interval: 2
    coils:
      - name: fake_switch
        hub: PLC
        slave: 1
        coil: 0
      - name: switch.bedroom_main_light
        hub: PLC
        slave: 1
        coil: 1
      - name: switch.bedroom_wardrobe_light
        hub: PLC
        slave: 1
        coil: 2
      - name: switch.bathroom_main_light
        hub: PLC
        slave: 1
        coil: 4
      - name: switch.bathroom_mirror_light
        hub: PLC
        slave: 1
        coil: 5

light:
  - platform: switch
    name: Bedroom
    entity_id: switch.bedroom_main_light
  - platform: switch
    name: Wardrobe
    entity_id: switch.bedroom_wardrobe_light
  - platform: switch
    name: Bathroom
    entity_id: switch.bathroom_main_light
  - platform: switch
    name: Bathroom - Mirror
    entity_id: switch.bathroom_mirror_light
Maybe I should go with registers instead of coils? Can someone share a working configuration?
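For reference, a hedged sketch of the newer Home Assistant Modbus layout, where switches are declared under the hub itself rather than through a separate switch platform. This assumes a recent HA release; option names such as switches, write_type and verify come from the newer integration and may not exist in the version in use:

modbus:
  - name: PLC
    type: tcp
    host: 192.168.10.60
    port: 502
    switches:
      # one entry per coil; address is the coil number used above
      - name: bedroom_main_light
        slave: 1
        address: 1
        write_type: coil
        verify:    # poll the coil back so PLC-side changes show up in HA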
I am trying to set up a local Beam runner for easier testing/development. I'd like to be able to test a Python pipeline that uses Kafka IO locally on my Mac.
Here's what my current plan for the entire framework looks like. This is my current docker-compose:
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - "9092:9092"
  jobmanager:
    image: flink_image
    command: ['jobmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\nparallelism.default: 2"
    ports:
      - "8081:8081"
  taskmanager:
    image: flink_image
    scale: 1
    depends_on:
      - jobmanager
    command: ['taskmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2\nparallelism.default: 2"
  beam-jobserver:
    image: flink_image
    ports:
      - "8097:8097"
      - "8098:8098"
      - "8099:8099"
    entrypoint:
      - java
      - -cp
      - /target/flink/flink-web-upload/beam-runner.jar
      - org.apache.beam.runners.flink.FlinkJobServerDriver
      - --flink-master=jobmanager
      - --job-host=0.0.0.0
And my pipeline looks like this:
import logging

import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service
from apache_beam.options.pipeline_options import PipelineOptions

LOCAL_ARGS = [
    '--streaming',
    '--runner=portableRunner',
    '--environment_type=LOOPBACK',
    '--job_endpoint=localhost:8099',
    '--artifact_endpoint=localhost:8098',
    '--defaultEnvironmentType=EXTERNAL',
    '--defaultEnvironmentConfig=host.docker.internal:5000',
]

with beam.Pipeline(options=PipelineOptions(LOCAL_ARGS)) as pipeline:
    result = (
        pipeline
        | "Kafka Read" >> ReadFromKafka(
            consumer_config={"bootstrap.servers": "kafka:9092", 'auto.offset.reset': 'earliest'},
            topics=["test.topic"],
            with_metadata=False,
            expansion_service=default_io_expansion_service(
                append_args=[
                    '--defaultEnvironmentType=PROCESS',
                    "--defaultEnvironmentConfig={\"command\":\"/opt/apache/beam/java_boot\"}",
                    '--experiments=use_deprecated_read',
                ]
            )
        )
        | "logging" >> beam.Map(lambda x: logging.info(f"logged: {x}"))
    )
However, it looks like LOOPBACK opens a port on my host machine and asks the task manager to talk back to it via localhost:<randomPort>, which is not accessible from inside the container.
Unfortunately, host networking is not supported by Docker on Mac, so I need a way to override the LOOPBACK settings so that it connects to host.docker.internal:<dedicated_pool> instead of a random port on my host machine. Or is there another suggested workaround? Thanks!
(The entire infra can be found here: https://gist.github.com/lydian/0db7614652c2ccdc733884134bf67f9b)
It looks like this is not supported; LOOPBACK mode mostly targets very simple setups.
You could come close by starting the worker manually, e.g.
python -m apache_beam.runners.worker.worker_pool_main --service_port=PORT
and then passing --environment_type=EXTERNAL --environment_config=host.docker.internal:PORT.
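In the docker-compose setup above, that manually started worker pool could also run as its own service under services:. A rough sketch; the beam-worker-pool service name and image tag are illustrative assumptions, and port 5000 simply matches the defaultEnvironmentConfig already used in LOCAL_ARGS:

  beam-worker-pool:
    image: apache/beam_python3.8_sdk   # assumption: any image with the Beam Python SDK installed
    entrypoint:
      - python
      - -m
      - apache_beam.runners.worker.worker_pool_main
      - --service_port=5000
    ports:
      - "5000:5000"                    # published so host.docker.internal:5000 reaches it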
I was facing similar struggles recently. Luckily, there are two environment variables that facilitate testing on Docker for Mac. Unfortunately, there's not much documentation around them currently.
DOCKER_MAC_CONTAINER=1 limits the ports for communication with SDK workers to the range 8100-8200 instead of using random ports. Ports in that range are used in a round-robin fashion and have to be published.
BEAM_WORKER_POOL_IN_DOCKER_VM=1 tells an SDK worker to communicate with a runner node via the Docker host (host.docker.internal) instead of localhost.
Here's an example of how to use these with Spark, but Flink shouldn't be any different.
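A hedged sketch of how these variables might be wired into the Flink compose setup above. Exactly which service each variable belongs on depends on the setup, and the worker-pool service and published port range here are assumptions rather than part of the original answer:

  taskmanager:
    image: flink_image
    # (existing FLINK_PROPERTIES omitted for brevity)
    environment:
      DOCKER_MAC_CONTAINER: "1"            # restrict SDK worker ports to 8100-8200
    ports:
      - "8100-8200:8100-8200"              # publish the round-robin port range
  beam-worker-pool:
    image: apache/beam_python3.8_sdk       # assumption
    environment:
      BEAM_WORKER_POOL_IN_DOCKER_VM: "1"   # reach the runner via host.docker.internal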
Problem with "No access to /dev/mem."
I have HA in a Docker container on a Raspberry Pi 4.
I can read the temperature from the 1-Wire sensor (GPIO4).
sensor:
  - platform: onewire
    names:
      28-3c01f09519d1: Sensor1
But when I want to control a GPIO pin, I get an error.
switch:
  - platform: rpi_gpio
    ports:
      16: light
Error:
"Error while setting up rpi_gpio platform for switch.
RuntimeError: No access to /dev/mem. Try running as root!"
In docker I tried to set:
privileged: true
command: ["--privileged"]
devices:
  - /dev/mem:/dev/mem
  - /dev/gpiomem:/dev/gpiomem
volumes:
  - /home/pi/homeassistant:/config
  - /dev/gpiomem:/dev/gpiomem
  - /dev/mem:/dev/mem
And I added the user pi to the gpio group.
But nothing helps ... 🙁
Anyone know how to solve this?
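For reference, a consolidated sketch of the privileged/devices/volumes options mentioned above as they would sit in a single docker-compose service (untested; the service name, image tag, and config path are illustrative):

services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable   # illustrative tag
    privileged: true            # gives the container full device access
    devices:
      - /dev/mem:/dev/mem
      - /dev/gpiomem:/dev/gpiomem
    volumes:
      - /home/pi/homeassistant:/config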
I am attempting to port the Hyperledger Fabric Getting Started example to Kubernetes, but I am struggling to get the peer1s to deploy. If I enable CORE_PEER_GOSSIP_BOOTSTRAP, I receive the error "Received AliveMessage from a peer with the same PKI-ID as myself".
How can I debug a peer reportedly having the same PKI-ID as another?
Using this as a starting point:
https://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
I am able to create:
- orderer and cli pods in the default namespace
- peer0s, one in each of the org1 and org2 namespaces
- peer1s, but only if I disable (comment out) CORE_PEER_GOSSIP_BOOTSTRAP
If I enable CORE_PEER_GOSSIP_BOOTSTRAP for the peer1s, I receive the following warning and error:
[gossip/gossip#10.0.0.10:7051] NewGossipService -> WARN 01c External endpoint is empty, peer will not be accessible outside of its organization
...
[gossip/discovery#10.0.0.10:7051] handleAliveMessage -> ERRO 02a Bad configuration detected: Received AliveMessage from a peer with the same PKI-ID as myself: tag:EMPTY alive_msg:<membership:<pki_id:"[[REDACTED]]" > timestamp:<inc_number:1495468533769417608 seq_num:416 > >
In order to map the orderer and peers to DNS names more cleanly, I'm using Kubernetes namespaces and this configuration:
OrdererOrgs:
  - Name: Orderer
    Domain: default.svc.cluster.local
    Specs:
      - Hostname: orderer

PeerOrgs:
  - Name: Org1
    Domain: org1.svc.cluster.local
    Template:
      Count: 2
    Users:
      Count: 2
  - Name: Org2
    Domain: org2.svc.cluster.local
    Template:
      Count: 2
    Users:
      Count: 2
In order to expose the peer0s to the other peers in their org and to expose the orderer, I have ClusterIP services for the peer0s (selecting only the peer0s) and the orderer. It's inelegant, but I'm trying to get it working at all before I make it work more elegantly.
I am able to resolve orderer.default.svc.cluster.local, peer0.org1.svc.cluster.local, and peer0.org2.svc.cluster.local using nslookup from within a pod deployed to default on the cluster.
Absent a curl-like tool for gRPC, I am able to open sockets against these endpoints on 7051 and 7053.
First, make sure you are using the right certificates.
Second, verify that your environment/configuration for gossip is set correctly
environment:
  - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:8051
  - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
  - CORE_PEER_GOSSIP_ENDPOINT=peer0.org1.example.com:7051
or in core.yaml:
peer:
  gossip:
    bootstrap: peer0.org1.example.com:7051
    externalEndpoint: peer1.org1.example.com:8051
    endpoint: peer0.org1.example.com:7051
Edit: also make sure that you have set up your CA properly.
Hope this helps; it worked for me, and I was able to connect the peers successfully.
If the peers are started from the same node, it's possible that you are mounting the same crypto material (the path to the mspconfig directory) for both peers. If that is the case, separate the directory structures for the two peers, keep their respective certificates in them, update the respective msp paths in the docker-compose file, and try again.
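A hedged sketch of that separation in a docker-compose file, reusing the cryptogen output layout implied by the crypto-config above (the service names and mount paths are illustrative):

services:
  peer0-org1:
    image: hyperledger/fabric-peer
    environment:
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp
    volumes:
      # each peer mounts only its own MSP directory
      - ./crypto-config/peerOrganizations/org1.svc.cluster.local/peers/peer0.org1.svc.cluster.local/msp:/etc/hyperledger/fabric/msp
  peer1-org1:
    image: hyperledger/fabric-peer
    environment:
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp
    volumes:
      - ./crypto-config/peerOrganizations/org1.svc.cluster.local/peers/peer1.org1.svc.cluster.local/msp:/etc/hyperledger/fabric/msp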
Has anyone encountered an infinite loop of "discovering any new versions" when building? I suspect there's an issue with one of the workers; does this worker config look OK? I'm installing from the binary, and my worker config is shown below. The fly workers list returns nothing.
concourse:
  worker:
    config:
      name: ci_worker01
      bind-ip: 0.0.0.0
      bind-port: 7777
      tsa-host: 127.0.0.1
      tsa-port: 2222
      tsa-public-key: /opt/concourse/.ssh/id_web_rsa.pub
      tsa-worker-private-key: /opt/concourse/.ssh/id_worker_rsa
    service: True
I'm using Salt for my deployments and have the following question.
Is there any mechanism to retry a command?
For instance, I have something like this:
platform_deps_git:
  git.latest:
    - name: ...
    - rev: master
    - target: ...
    - user: ...
    - identity: ...
But sometimes the network may fail. Is there any way to retry the platform_deps_git instruction?
The next version of Salt (2014.7.0) will have an "onfail" requisite. This will allow you to take another action if something fails.
The docs are here:
http://docs.saltstack.com/en/latest/ref/states/requisites.html#onfail
What I do is grep through the Salt output whenever I run a highstate, and if it shows any failures I rerun the highstate.
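A hedged sketch of what the onfail requisite described above could look like for this state; the repository URL, target path, and the notify_platform_deps_failure state are hypothetical placeholders:

platform_deps_git:
  git.latest:
    - name: https://example.com/platform_deps.git   # placeholder
    - rev: master
    - target: /srv/platform_deps                    # placeholder

notify_platform_deps_failure:
  cmd.run:
    - name: echo "platform_deps_git failed, check the network and rerun"
    - onfail:
      - git: platform_deps_git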
There's a first-class retry mechanism for states that was added in 2017:
platform_deps_git:
  git.latest:
    - name: ...
    - rev: master
    - target: ...
    - user: ...
    - identity: ...
    - retry:
        attempts: 5
        until: True
        interval: 60
        splay: 10
The retry option supports a few different arguments (attempts, until, interval, splay) for controlling its behavior.