SignalR with a Redis backplane; what am I doing wrong? - asp.net-core-signalr

I am new to both SignalR and Redis. I have an ASP.NET Core SignalR app, and I am trying to do a proof of concept on using Redis as a backplane when it is scaled out, as described here: https://learn.microsoft.com/en-us/aspnet/core/signalr/scale?view=aspnetcore-2.2#redis-backplane
To test this on a small scale, I created two separate projects of the demo SignalR chat application described here: https://learn.microsoft.com/en-us/aspnet/core/tutorials/signalr?view=aspnetcore-2.2&tabs=visual-studio
I opened two instances (clients) of each demo app and verified that each app sends messages back and forth between its own clients.
(Screenshot: pre-Redis, the two demo apps each working with their own clients)
Next, I installed a local Redis database, using version 3.0.504 of the Windows MSI file found here: https://github.com/microsoftarchive/redis/releases
Using redis-cli.exe, I see that I can connect to the local Redis instance:
127.0.0.1:6379> CLIENT SETNAME 'MyLocalConnection'
OK
127.0.0.1:6379> CLIENT LIST
id=22 addr=127.0.0.1:57283 fd=9 name=MyLocalConnection age=158 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
Next, I updated both of my demo apps, based on these instructions: https://learn.microsoft.com/en-us/aspnet/core/signalr/redis-backplane?view=aspnetcore-2.2
I installed the NuGet package Microsoft.AspNetCore.SignalR.StackExchangeRedis v1.1.5 and updated Startup.cs:
//services.AddSignalR();
services.AddSignalR().AddStackExchangeRedis("localhost");
I started both apps, and using redis-cli.exe, I verified that both seem to be connecting properly:
127.0.0.1:6379> CLIENT LIST
id=29 addr=127.0.0.1:53692 fd=13 name=DESKTOP-ALLBLN9 age=11 idle=10 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=30 addr=127.0.0.1:53693 fd=11 name=DESKTOP-ALLBLN9 age=11 idle=9 flags=N db=0 sub=5 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=subscribe
id=31 addr=127.0.0.1:53695 fd=10 name= age=10 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
id=32 addr=127.0.0.1:53696 fd=9 name=DESKTOP-ALLBLN9 age=10 idle=9 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=33 addr=127.0.0.1:53697 fd=12 name=DESKTOP-ALLBLN9 age=10 idle=8 flags=N db=0 sub=5 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=subscribe
127.0.0.1:6379>
At this point, I open two clients for each app again, expecting that a message sent by any one of them will reach all four clients. But it still only reaches the two clients of that specific app.
(Screenshot: after adding Redis, clients still only talk to their own app)
Can someone help me understand what my mistake is here? Is there more I need to add to get both applications to "see" each other? Or am I misunderstanding how the Redis backplane is supposed to work?

The names of the projects should be the same. The Redis backplane builds its channel names from the hub's fully qualified type name (which includes the project's default namespace), so hubs from differently named projects publish and subscribe on different channels and never see each other's messages.
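To make this concrete, here is a minimal sketch of what ConfigureServices could look like in both projects. It assumes a hub class kept under an identical namespace and name in both apps; the ChannelPrefix value "MyChatApp" is just an illustrative name, not something from the original question:
// In BOTH projects: the hub's fully qualified type name must match,
// because the backplane derives its Redis channel names from it.
services.AddSignalR()
    .AddStackExchangeRedis("localhost", options =>
    {
        // Optional: prefix every channel so multiple SignalR deployments can
        // share one Redis server; both apps must use the same prefix.
        options.Configuration.ChannelPrefix = "MyChatApp";
    });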

Related

Connect Retroflag GPi (Raspberry Pi Zero W) to WPA2 Enterprise

I just got my Retroflag GPi case working and set up. I have one small problem though: I can't connect my Pi to my WPA2-Enterprise network. I've tried a bunch of settings in wpa_supplicant.conf but can't get it to work.
Pi Model or other hardware: Raspberry Pi Zero W & Retroflag GPi Case
Power Supply used: Retroflag GPi's inbuilt.
RetroPie Version Used: 4.6.1
Built From: https://github.com/RetroPie/RetroPie-Setup/releases/download/4.6/retropie-buster-4.6-rpi1_zero.img.gz
USB Devices connected: Retroflag GPi
Controller used: Retroflag GPi
Error messages received:
I can't see any error messages and don't know where they would appear. It just says "IP address: Unknown" under Show IP.
Guide used: Several on Google. This one among others: https://gist.github.com/elec3647/1e223c02ef2a9a3f836db7984011b53b.
This one for documentation: https://w1.fi/cgit/hostap/plain/wpa_supplicant/wpa_supplicant.conf
File: /etc/wpa_supplicant/wpa_supplicant.conf
Attachment of config files: (wpa_supplicant.conf)
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=SE
ap_scan=1
network={
ssid="Wifi-Name"
scan_ssid=1
identity="myusername"
password="mypassword"
key_mgmt=WPA-EAP
eap=TTLS
phase1="peapver=0 peaplabel=1"
phase2="autheap=MSCHAPV2"
}
I actually solved it just now.
For anyone wondering: I managed to connect to the network on another machine (Ubuntu) using NetworkManager. I then checked its log with the command:
journalctl -u NetworkManager
This gave me this config for wpa_supplicant.conf:
country=SE
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
ap_scan=1
network={
ssid="Wifi-Name"
scan_ssid=1
bgscan="simple:30:-65:300"
key_mgmt=WPA-EAP WPA-EAP-SHA256
password="password"
eap=PEAP
fragment_size=1266
phase2="auth=MSCHAPV2"
identity="username"
proactive_key_caching=1
}
This exact config might not work for everyone, since the settings differ for every network, so be sure to follow the same steps I did if you can't get it working.
So glad I finally got this, haha.

kubernetes 1.2 to 1.3 upgrade on CoreOS

There seem to be several issues in going from 1.2 to 1.3 that make it impossible to upgrade in place.
Is this correct?
When upgrading one worker node to 1.3.4 while the rest of the cluster runs 1.2.2, the node never becomes ready.
I get lots of 415 errors (Unsupported Media Type) from the kubelet, which seems to indicate an incompatible request format.
kubelet[2927]: E0804 01:55:13.794921 2927 event.go:198] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:".146777d057f9b62b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"198.245.63.87", UID:"xxxxxxxx", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk", Message:"Node xxxxxxxx status is now: NodeHasSufficientDisk", Source:api.EventSource{Component:"kubelet", Host:"xxxxxxxxxx"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63605872340, nsec:72642091, loc:(*time.Location)(0x45be3e0)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63605872513, nsec:790683013, loc:(*time.Location)(0x45be3e0)}}, Count:29, Type:"Normal"}': 'the server responded with the status code 415 but did not return more information (post events)' (will not retry!)
I'd like to understand whether it's a setup issue or a real breaking change that prevents in-place upgrades...
Thanks
I've done an upgrade of 1.2.2 to 1.3.3 on CoreOS. This involved first upgrading the master server, then doing the nodes....
All went surprisingly smoothly...
Basically I followed:
https://coreos.com/kubernetes/docs/latest/kubernetes-upgrade.html

Wildfly 9 - mod_cluster on TCP

We are currently testing a move from Wildfly 8.2.0 to Wildfly 9.0.0.CR1 (or CR2 built from snapshot). The system is a cluster using mod_cluster and runs on VPSes, which in practice prevents it from using multicast.
On 8.2.0 we have been using the following mod_cluster configuration, which works well:
<mod-cluster-config proxy-list="1.2.3.4:10001,1.2.3.5:10001" advertise="false" connector="ajp">
<dynamic-load-provider>
<load-metric type="cpu"/>
</dynamic-load-provider>
</mod-cluster-config>
Unfortunately, on 9.0.0 proxy-list has been deprecated and the server fails to start with an error. Documentation is sorely lacking, but after a couple of tries I discovered that proxy-list was replaced by proxies, which is a list of outbound-socket-binding names. Hence, the configuration now looks like the following:
<mod-cluster-config proxies="mc-prox1 mc-prox2" advertise="false" connector="ajp">
<dynamic-load-provider>
<load-metric type="cpu"/>
</dynamic-load-provider>
</mod-cluster-config>
And the following should be added into the appropriate socket-binding-group (full-ha in my case):
<outbound-socket-binding name="mc-prox1">
<remote-destination host="1.2.3.4" port="10001"/>
</outbound-socket-binding>
<outbound-socket-binding name="mc-prox2">
<remote-destination host="1.2.3.5" port="10001"/>
</outbound-socket-binding>
So far so good. After this, the httpd cluster starts registering the nodes. However, I am getting errors from the load balancer. When I look at /mod_cluster-manager, I see a couple of Node REMOVED lines, and there are also many errors like:
ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to node1/1.2.3.4:10001, configuration will be reset: MEM: Can't read node
In the log of mod_cluster there are the equivalent warnings:
manager_handler STATUS error: MEM: Can't read node
As far as I understand, the problem is that although wildfly/mod_cluster is able to connect to httpd/mod_cluster, it does not work the other way around. Unfortunately, even after extensive effort I am stuck.
Could someone help with setting up mod_cluster for Wildfly 9.0.0 without advertising? Thanks a lot.
I ran into the Node REMOVED issue too.
I managed to solve it by using the following as the instance-id:
<subsystem xmlns="urn:jboss:domain:undertow:2.0" instance-id="${jboss.server.name}">
I hope this will help someone else too ;)
There is no need for any unnecessary effort or uneasiness about static proxy configuration. Each WildFly distribution comes with XSD schemas that describe the XML subsystem configuration. For instance, with WildFly 9.x it's:
WILDFLY_DIRECTORY/docs/schema/jboss-as-mod-cluster_2_0.xsd
It says:
<xs:attribute name="proxies" use="optional">
<xs:annotation>
<xs:documentation>List of proxies for mod_cluster to register with defined by outbound-socket-binding in socket-binding-group.</xs:documentation>
</xs:annotation>
<xs:simpleType>
<xs:list itemType="xs:string"/>
</xs:simpleType>
</xs:attribute>
The following setup works out of the box:
Download wildfly-9.0.0.CR1.zip or build with ./build.sh from sources
Let's assume you have 2 boxes: an Apache HTTP Server with mod_cluster acting as a load-balancing proxy, and a WildFly server acting as a worker. Make sure both servers can reach each other on both the MCMP-enabled VirtualHost's address and port (Apache HTTP Server side) and on the WildFly AJP and HTTP connector side. The common mistake is to bind WildFly to localhost; it then reports its address as localhost to the Apache HTTP Server residing on a different box, which makes it impossible for Apache to contact the WildFly server back. The communication is bidirectional.
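To sidestep the localhost pitfall, you can bind the worker explicitly when starting it; a sketch (the address 10.10.2.5 and the chosen config file are placeholders, not values from this setup):
# bind the public and private interfaces to a routable address, not localhost
./bin/standalone.sh -c standalone-full-ha.xml \
    -b 10.10.2.5 -bprivate=10.10.2.5 \
    -Djboss.node.name=worker-1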
This is my configuration diff from the default wildfly-9.0.0.CR1.zip.
328c328
< <mod-cluster-config advertise-socket="modcluster" connector="ajp" advertise="false" proxies="my-proxy-one">
---
> <mod-cluster-config advertise-socket="modcluster" connector="ajp">
384c384
< <subsystem xmlns="urn:jboss:domain:undertow:2.0" instance-id="worker-1">
---
> <subsystem xmlns="urn:jboss:domain:undertow:2.0">
435c435
< <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:102}">
---
> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
452,454d451
< <outbound-socket-binding name="my-proxy-one">
< <remote-destination host="10.10.2.4" port="6666"/>
< </outbound-socket-binding>
456c453
< </server>
---
> </server>
Changes explanation
proxies="my-proxy-one", outbound socket binding name; could be more of them here.
instance-id="worker-1", the name of the worker, a.k.a. JVMRoute.
offset -- you could ignore, it's just for my test setup. Offset does not apply to outbound socket bindings.
<outbound-socket-binding name="my-proxy-one"> - IP and port of the VirtualHost in Apache HTTP Server containing EnableMCPMReceive directive.
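For reference, a sketch of the matching VirtualHost on the Apache HTTP Server side, assuming Apache 2.4 access-control syntax (the address, port and allowed network are placeholders consistent with the example above):
# httpd.conf / mod_cluster.conf on the balancer box
Listen 10.10.2.4:6666
<VirtualHost 10.10.2.4:6666>
    # accept MCMP registration messages from the WildFly workers
    EnableMCPMReceive
    <Location /mod_cluster-manager>
        SetHandler mod_cluster-manager
        Require ip 10.10.0.0/16
    </Location>
</VirtualHost>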
Conclusion
Generally, these MEM read / node error messages are related to network problems, e.g. WildFly can contact Apache, but Apache cannot contact WildFly back. Last but not least, it could happen that the Apache HTTP Server's configuration uses the PersistSlots directive and some substantial environment change took place, e.g. a switch from mpm_prefork to mpm_worker. In that case, the MEM read error messages are not related to WildFly but to the cached slotmem files in HTTPD/cache/mod_cluster, which need to be deleted.
I'm certain it's network in your case though.
After a couple of weeks I got back to the problem and found the solution. The problem was, of course, in the configuration and had nothing to do with the particular version of Wildfly. More specifically:
There were three nodes in the domain and three servers in each node. All nodes were launched with the following property:
-Djboss.node.name=nodeX
...where nodeX is the name of the particular node. However, this meant that all three servers in a node got the same name, which is exactly what confused the load balancer.
As soon as I removed this property, everything started to work.
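For anyone hitting the same thing: in domain mode each server already gets a unique name (and therefore a unique JVMRoute) from the <servers> section of host.xml, so there is no need to force jboss.node.name. A sketch of that section; the server and group names are placeholders:
<!-- host.xml on each node: every server entry has its own unique name -->
<servers>
    <server name="node1-server-one" group="main-server-group"/>
    <server name="node1-server-two" group="main-server-group"/>
    <server name="node1-server-three" group="main-server-group"/>
</servers>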

AppFabric Cache service crashes when accessed from a different machine

Having persuaded it all to work in my development environment, and then on a test server, I'm trying to set up AppFabric Cache on a server that will be in production.
So I have the AppFabricCache Service installed on what I'll call "cacheserver1" and a client installed on what I'll call "webserver1". When I connect to the cache from a client installed on cacheserver1, it works. When I connect to the cache from a client installed on webserver1 - or from my development machine, having opened the firewall - the cache service crashes.
Application: DistributedCacheService.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.Runtime.CallbackException
Stack:
at System.Runtime.AsyncResult.Complete(Boolean)
at System.ServiceModel.Channels.ConnectionStream+IOAsyncResult.OnAsyncIOComplete(System.Object)
at System.ServiceModel.Channels.SocketConnection.OnSendAsync(System.Object, System.Net.Sockets.SocketAsyncEventArgs)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
at System.Net.Sockets.SocketAsyncEventArgs.FinishOperationSuccess(System.Net.Sockets.SocketError, Int32, System.Net.Sockets.SocketFlags)
at System.Net.Sockets.SocketAsyncEventArgs.CompletionPortCallback(UInt32, UInt32, System.Threading.NativeOverlapped*)
at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*)
The "hosts" section of the ClusterConfig.xml reads:
<hosts>
<host replicationPort="22236" arbitrationPort="22235" clusterPort="22234"
hostId="990129399" size="849" leadHost="true" account="WIN-BLABLABLA\AppFabricCache"
cacheHostName="AppFabricCachingService" name="WIN-BLABLABLA"
cachePort="22233" />
</hosts>
The firewall is configured so that cacheserver1 accepts incoming connections on port 22233 from webserver1. They're both AWS machines, not on the same domain or anything; I had assumed that I could just talk over TCP/IP.
How is a simple request crashing the whole service? This doesn't make me feel reassured about the robustness of the cache service.
Is this configuration how a Distributed Cache is meant to be used?
What do I need to do to make it work / get a more helpful error out of it?
Got it!
I needed to disable cache security (which is fine; I'm just going to manage access with the firewall).
I needed to add the equivalent security-disabling element to the dataCacheClient section of the config file on the client (webserver) (a sketch follows below), and
<advancedProperties>
<securityProperties mode="None" protectionLevel="None">
<authorization>
<allow users="Everyone" />
</authorization>
</securityProperties>
</advancedProperties>
to ClusterConfig.xml on the cacheserver.
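For completeness, the client-side counterpart lives in the dataCacheClient section of web.config on the webserver; a sketch, assuming the standard AppFabric client configuration section (host name and port mirror the cluster config above):
<dataCacheClient>
  <hosts>
    <host name="cacheserver1" cachePort="22233" />
  </hosts>
  <!-- match the cluster: no security, no protection level -->
  <securityProperties mode="None" protectionLevel="None" />
</dataCacheClient>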

Azure deployment is cycling for a long time

12:08:41 - Preparing deployment for Windows Azure MSDN - Visual Studio Ultimate with
Subscription ID: xxxxxxxxxxxxxxxxxxxx...
12:08:41 - Connecting...
12:08:41 - Verifying storage account 'cck'...
12:08:43 - Uploading Package...
12:41:00 - Creating...
12:57:14 - Created Deployment ID: xxxxxxxxxxxxxxxxx.
12:57:14 - Starting...
12:57:55 - Initializing...
12:57:55 - Instance 0 of role RIS2048.ConsultaClick.Web is in an unknown state
12:59:03 - Instance 0 of role RIS2048.ConsultaClick.Web is starting the virtual machine
13:00:40 - Instance 0 of role RIS2048.ConsultaClick.Web is in an unknown state
13:01:48 - Instance 0 of role RIS2048.ConsultaClick.Web is busy
13:06:12 - Instance 0 of role RIS2048.ConsultaClick.Web is cycling
It's now 13:31, more than an hour since the deployment started, and for the last 25 minutes the instance has been cycling (I don't know what that means). Will it finish? When?
Here's part of my ServiceDefinition.csdef file with 2 VirtualApplications:
<Site name="Web">
<VirtualApplication name="CCKPt"
physicalDirectory=".">
<VirtualDirectory name="images"
physicalDirectory="..\RIS2048.ConsultaClick.WWWPacientes\imgpt" />
</VirtualApplication>
<VirtualApplication name="CCKRo"
physicalDirectory=".">
<VirtualDirectory name="images"
physicalDirectory="..\RIS2048.ConsultaClick.WWWPacientes\imgro" />
</VirtualApplication>
<Bindings>
<Binding name="Endpoint1" endpointName="Endpoint1" />
</Bindings>
</Site>
Is this a new app you just built? And does it run locally in the emulator? The first thing I always check is the diagnostics connection string, along with the session state provider. By default, your diagnostics are written to development storage (the local storage emulator, backed by SQL Express on your machine), and SQL Express is used for session state; neither exists in Windows Azure. You'd need to point diagnostics at a real storage account and switch session state to the Cache or SQL Azure provider.
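For example, the diagnostics connection string lives in ServiceConfiguration.cscfg; a sketch of what it should point at once you move off the emulator (the account name reuses the 'cck' storage account from your log, the key is a placeholder):
<ConfigurationSettings>
  <!-- UseDevelopmentStorage=true only works in the local emulator -->
  <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
           value="DefaultEndpointsProtocol=https;AccountName=cck;AccountKey=YOUR_KEY" />
</ConfigurationSettings>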
Cycling likely means there's a problem. Sometimes, if you're lucky, you can squeeze a little more info about where exactly it's bombing from IntelliTrace:
http://blogs.msdn.com/b/jnak/archive/2010/06/07/using-intellitrace-to-debug-windows-azure-cloud-services.aspx