I am trying to access keyspace notifications in a .NET application using ServiceStack.Redis. I am new to Redis.
I enabled event notifications on the cache with the command:
CONFIG SET notify-keyspace-events KEs
I am subscribing to the channel "__key*__:*" in .NET. The following is my code:
const string ChannelName = "__key*__:*";
using (var redisConsumer = new RedisClient("localhost:6379"))
using (var subscription = redisConsumer.CreateSubscription())
{
    subscription.OnSubscribe = channel =>
    {
        Console.WriteLine(String.Format("Subscribed to '{0}'", channel));
    };
    subscription.OnUnSubscribe = channel =>
    {
        Console.WriteLine(String.Format("UnSubscribed from '{0}'", channel));
    };
    subscription.OnMessage = (channel, msg) =>
    {
        Console.WriteLine(String.Format("Received '{0}' from channel '{1}'", msg, channel));
    };

    Console.WriteLine(String.Format("Started Listening On '{0}'", ChannelName));
    subscription.SubscribeToChannels(ChannelName); // blocking
}
From another .NET application, I am adding new data to the cache and expecting to receive an event (in OnMessage). The application is not capturing any event when a new item is added to the cache.
But when I run the command "psubscribe '__key*__:*'" in redis-cli.exe, it captures the events properly (when I add a new item to the cache, it displays the event details in the console window).
I am unable to capture the same in my application. Am I missing anything here?
Use subscription.SubscribeToChannelsMatching(ChannelName); instead. The channel name is a glob pattern, so it has to be subscribed as a pattern (PSUBSCRIBE, which is what redis-cli's psubscribe does and what SubscribeToChannelsMatching issues); SubscribeToChannels performs a literal SUBSCRIBE and never matches the keyspace-notification channels.
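A minimal sketch of the corrected consumer, assuming the same ServiceStack.Redis setup as above (only the subscribe call changes):

using (var redisConsumer = new RedisClient("localhost:6379"))
using (var subscription = redisConsumer.CreateSubscription())
{
    subscription.OnMessage = (channel, msg) =>
    {
        // For keyspace notifications, channel is e.g. "__keyspace@0__:mykey"
        // and msg is the name of the command that touched the key, e.g. "set".
        Console.WriteLine("Received '{0}' from channel '{1}'", msg, channel);
    };

    // Issues PSUBSCRIBE, so the glob pattern is honoured.
    subscription.SubscribeToChannelsMatching("__key*__:*"); // blocking
}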
I wrote the following Node/Express/Mongo script:
const { MongoClient } = require("mongodb");
const stream = require("stream");

async function main() {
    // CONNECTING TO LOCALHOST (REPLICA SET)
    const client = new MongoClient("mongodb://localhost:27018");
    try {
        // CONNECTION
        await client.connect();
        // EXECUTING MY WATCHER
        console.log("Watching ...");
        await myWatcher(client, 15000);
    } catch (e) {
        // ERROR MANAGEMENT
        console.log(`Error > ${e}`);
    } finally {
        // CLOSING CLIENT CONNECTION ???
        await client.close(); // << ????
    }
}

main().catch(console.error);
// MY WATCHER. LISTENING FOR CHANGES ON MY DATABASE
async function myWatcher(client, timeInMs, pipeline = []) {
    // TARGET TO WATCH
    const watching = client.db("myDatabase").collection("myCollection").watch(pipeline);
    // WATCHING CHANGES ON TARGET
    watching.on("change", (next) => {
        console.log(JSON.stringify(next));
        console.log(`Doing my things...`);
    });
    // CLOSING THE WATCHER ???
    closeChangeStream(timeInMs, watching); // << ????
}
// CHANGE STREAM CLOSER
function closeChangeStream(timeInMs = 60000, watching) {
    return new Promise((resolve) => {
        setTimeout(() => {
            console.log("Closing the change stream");
            watching.close();
            resolve();
        }, timeInMs);
    });
}
So, the goal is to keep the myWatcher function always active, watching for any database change and, for example, sending a user notification when an update is detected. The closeChangeStream function closes myWatcher X seconds after any database change. So, to keep myWatcher always active, do you recommend not using the closeChangeStream function?
Another thing: with the same goal in mind, if I keep the await client.close(); my code emits the error Topology is closed, whereas when I omit await client.close() my code works perfectly. Do you recommend not calling await client.close() so that myWatcher stays active?
I'm a newbie in these topics! Thanks for the advice and the help!
MongoDB change streams are implemented in a pub/sub paradigm.
Send your application to a friend in Sudan. Have both you and your friend run the application (the one that has the change stream implemented). If you open up mongosh and run db.getCollection('myCollection').updateOne({_id: ObjectId("6220ee09197c13d24a7997b7")}, {$set: {FirstName: "Bob"}});, both you and your friend will get the console.log from the change stream.
This is assuming you're not running on localhost, but you can simulate it with two copies of the application locally.
The issue comes when you go into production and suddenly have 200 load-balanced instances, 5 developers, etc. running, and your watch fires for a ton of writes around the globe.
I believe the practice is to wrap it in a function: fire the function when you're about to do a write, and close the stream after you've done your associated writes.
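As a rough sketch of that pattern (written in C# with the MongoDB .NET driver purely for illustration, since most of this page is .NET; the collection and the write below are hypothetical):

using System;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public static class ScopedWatchExample
{
    // Open the change stream only around the writes that need it, then close it,
    // instead of keeping a global watcher alive forever.
    public static async Task WriteWithScopedWatchAsync(IMongoCollection<BsonDocument> collection)
    {
        // Start watching just before the writes we care about.
        using (var cursor = await collection.WatchAsync())
        {
            // ...perform the associated write(s); this insert is only an example...
            await collection.InsertOneAsync(new BsonDocument("FirstName", "Bob"));

            // Consume one batch of change events (a real implementation would loop).
            if (await cursor.MoveNextAsync())
            {
                foreach (var change in cursor.Current)
                {
                    Console.WriteLine(change.OperationType);
                }
            }
        }
        // Leaving the using block disposes the cursor, which closes the change stream.
    }
}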
I'm using Google Cloud Storage for storing objects, with the bucket associated with a topic and subscription ID. The flow is such that a Java application requests the upload link(s) and uploads object(s) using those links. I also have a Pub/Sub listener implemented in Java, which receives the upload notification message and does something on every successful upload. This is the snippet that handles the event listening.
public void eventListener() {
    MessageReceiver messageReceiver = (message, consumer) -> {
        final Map<String, Object> uploadMetaDataMap = getUploadDataMap(message);
        LOGGER.info("Upload event detected => {} ", uploadMetaDataMap);
        // do something
        consumer.ack();
    };

    Subscriber subscriber = null;
    Subscriber finalSubscriber = subscriber;
    /* To ensure that any messages already being handled by receiveMessage run to completion */
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            finalSubscriber.stopAsync().awaitTerminated();
        }
    });

    try {
        subscriber = Subscriber.newBuilder(subscription, messageReceiver)
                .setCredentialsProvider(FixedCredentialsProvider.create(creds)).build();
        subscriber.addListener(new Subscriber.Listener() {
            @Override
            public void failed(Subscriber.State from, Throwable failure) {
                // Handle failure. This is called when the Subscriber encountered a fatal error and is shutting down.
                LOGGER.error(String.valueOf(failure));
            }
        }, MoreExecutors.directExecutor());
        subscriber.startAsync().awaitRunning();
        subscriber.awaitTerminated();
    } finally {
        if (subscriber != null) {
            subscriber.stopAsync().awaitTerminated();
        }
    }
}
I'm storing the objects in the format bucket/uuid/objectName.extension, and on every successful upload LOGGER.info("Upload event detected => {} ", uploadMetaDataMap); logs messages like this:
2020-08-03 16:12:14,686 [Gax-1] INFO listener.AsynchronousPull - Upload event detected => {size=85, uuid=6dff9a20-3995-4f28-93e9-79e6c3cf613d, bucket=bucketName}
The issue I'm facing now is that not all successful upload events send out a notification message. I can see the folder structure created in GCS with the respective object inside it, but the notification related to that upload is nowhere to be found in the logs printed by the Pub/Sub listener. It's been bothering me for a while now, and I could really use some help with this.
I am trying to follow the hello world example. With regular ActiveMQ it works, but ActiveMQ Artemis is giving me headaches. I guess there is some configuration I am not doing correctly. The address is created, but it is created using multicast routing, and I think I need anycast (queue routing).
The snippet below does not work with the Artemis version of ActiveMQ. Is what I am trying to do possible? I would like to auto-create a durable queue.
public class SimpleAmqpTest
{
    [Fact]
    public async Task TestHelloWorld()
    {
        Address address = new Address("amqp://guest:guest@localhost:5672");
        Connection connection = await Connection.Factory.CreateAsync(address);
        Session session = new Session(connection);
        Message message = new Message("Hello AMQP");

        var target = new Target
        {
            Address = "simple-queue",
            Durable = 1,
        };
        SenderLink sender = new SenderLink(session, "sender-link", target, null);
        await sender.SendAsync(message);

        ReceiverLink receiver = new ReceiverLink(session, "receiver-link", "simple-queue");
        message = await receiver.ReceiveAsync();
        receiver.Accept(message);

        await sender.CloseAsync();
        await receiver.CloseAsync();
        await session.CloseAsync();
        await connection.CloseAsync();
    }
}
Finally found out what I was doing wrong. Since AMQP itself has no way to configure queues versus topics, this can be expressed through the link's Capabilities. For some reason Artemis creates addresses as topics (multicast) by default; if you need anycast you can say so using Capabilities = new Symbol[] { new Symbol("queue") }. The full test fact:
public async Task TestHelloWorld()
{
    //strange, works using regular activeMQ and the amqp test broker from here: http://azure.github.io/amqpnetlite/articles/hello_amqp.html
    //but this does not work in ActiveMQ Artemis
    Address address = new Address("amqp://guest:guest@localhost:5672");
    Connection connection = await Connection.Factory.CreateAsync(address);
    Session session = new Session(connection);
    Message message = new Message("Hello AMQP");

    Target target = new Target
    {
        Address = "q1",
        Capabilities = new Symbol[] { new Symbol("queue") }
    };
    SenderLink sender = new SenderLink(session, "sender-link", target, null);
    sender.Send(message);

    Source source = new Source
    {
        Address = "q1",
        Capabilities = new Symbol[] { new Symbol("queue") }
    };
    ReceiverLink receiver = new ReceiverLink(session, "receiver-link", source, null);
    message = await receiver.ReceiveAsync();
    receiver.Accept(message);

    await sender.CloseAsync();
    await receiver.CloseAsync();
    await session.CloseAsync();
    await connection.CloseAsync();
}
I'm using MassTransit with Reactive Extensions to stream messages from the queue in batches. Since the behaviour isn't the same as a normal consumer, I need to be able to send a message to the error queue if it fails x number of times.
I've looked through the MassTransit source code, posted on the Google group, and can't find an answer.
Is this available on the ConsumeContext interface? Or is this even possible?
Here is my code. I've removed some of it to make it simpler.
_busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost/"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.UseInMemoryScheduler();

    cfg.ReceiveEndpoint(host, "customer_update_queue", e =>
    {
        var _observer = new ObservableObserver<ConsumeContext<Customer>>();
        _observer.Buffer(TimeSpan.FromMilliseconds(1000)).Subscribe(OnNext);
        e.Observer(_observer);
    });
});
private void OnNext(IList<ConsumeContext<Customer>> messages)
{
    foreach (var consumeContext in messages)
    {
        Console.WriteLine("Content: " + consumeContext.Message.Content);
        if (consumeContext.Message.RetryCount > 3)
        {
            // I want to be able to send to the error queue
            consumeContext.SendToErrorQueue();
        }
    }
}
I've found a workaround by mixing the RabbitMQ client with MassTransit. Since I can't throw an exception when using an Observable, no error queue is created automatically, so I create it manually using the RabbitMQ client as below.
ConnectionFactory factory = new ConnectionFactory();
factory.HostName = "localhost";
factory.UserName = "guest";
factory.Password = "guest";

using (IConnection connection = factory.CreateConnection())
{
    using (IModel model = connection.CreateModel())
    {
        string exchangeName = "customer_update_queue_error";
        string queueName = "customer_update_queue_error";
        string routingKey = "";

        model.ExchangeDeclare(exchangeName, ExchangeType.Fanout);
        model.QueueDeclare(queueName, false, false, false, null);
        model.QueueBind(queueName, exchangeName, routingKey);
    }
}
The sending part then sends the message directly to that error queue if it fails x number of times, like so:
consumeContext.Send(new Uri("rabbitmq://localhost/customer_update_queue_error"), consumeContext.Message);
Hopefully the batch feature will be implemented soon and I can use that instead.
https://github.com/MassTransit/MassTransit/issues/800
I am trying to implement the Facebook X-FACEBOOK-PLATFORM SASL mechanism so I can integrate Facebook Chat into my application over XMPP.
Here is the code:
var ak = "my app id";
var sk = "access token";
var aps = "my app secret";

using (var client = new TcpClient())
{
    client.Connect("chat.facebook.com", 5222);

    using (var writer = new StreamWriter(client.GetStream()))
    using (var reader = new StreamReader(client.GetStream()))
    {
        // Write for the first time
        writer.Write("<stream:stream xmlns=\"jabber:client\" xmlns:stream=\"http://etherx.jabber.org/streams\" version=\"1.0\" to=\"chat.facebook.com\"><auth xmlns=\"urn:ietf:params:xml:ns:xmpp-sasl\" mechanism=\"X-FACEBOOK-PLATFORM\" /></stream:stream>");
        writer.Flush();
        Thread.Sleep(500);

        // I am pretty sure the following works, or at least it's not what causes the error
        var challenge = Encoding.UTF8.GetString(Convert.FromBase64String(XElement.Parse(reader.ReadToEnd()).Elements().Last().Value)).Split('&').Select(s => s.Split('=')).ToDictionary(s => s[0], s => s[1]);
        var response = new SortedDictionary<string, string>() { { "api_key", ak }, { "call_id", DateTime.Now.Ticks.ToString() }, { "method", challenge["method"] }, { "nonce", challenge["nonce"] }, { "session_key", sk }, { "v", "1.0" } };
        var responseString1 = string.Format("{0}{1}", string.Join(string.Empty, response.Select(p => string.Format("{0}={1}", p.Key, p.Value)).ToArray()), aps);

        byte[] hashedResponse1 = null;
        using (var prov = new MD5CryptoServiceProvider()) hashedResponse1 = prov.ComputeHash(Encoding.UTF8.GetBytes(responseString1));

        var builder = new StringBuilder();
        foreach (var item in hashedResponse1) builder.Append(item.ToString("x2"));

        var responseString2 = Convert.ToBase64String(Encoding.UTF8.GetBytes(string.Format("{0}&sig={1}", string.Join("&", response.Select(p => string.Format("{0}={1}", p.Key, p.Value)).ToArray()), builder.ToString().ToLower())));

        // Write for the second time
        writer.Write(string.Format("<response xmlns=\"urn:ietf:params:xml:ns:xmpp-sasl\">{0}</response>", responseString2));
        writer.Flush();
        Thread.Sleep(500);

        MessageBox.Show(reader.ReadToEnd());
    }
}
I shortened the code as much as possible, because I think my SASL implementation (whether it works or not, I haven't had a chance to test it yet) is not what causes the error.
I get the following exception thrown at my face: Unable to read data from the transport connection: An established connection was aborted by the software in your host machine (error code 10053, System.Net.Sockets.SocketError.ConnectionAborted).
It happens every time I try to read from the client's stream for the second time. As you can see, I pause the thread here so the Facebook server has enough time to answer me; I used an asynchronous approach before and encountered exactly the same thing, so I decided to try it synchronously first. In any case, the actual SASL mechanism implementation really shouldn't cause this: if I don't try to authenticate right away, but instead first ask which mechanisms the server supports and select the mechanism in another round of reading and writing, it fails there; when I send the mechanism-selection XML right away, it works and then fails on whatever I send second.
So the conclusion is the following: I open the socket connection, write to it, read from it (the first read works both sync and async), write to it a second time, try to read from it a second time, and here it always fails. Clearly, then, the problem is with the socket connection itself. I tried using a new StreamReader for the second read, but to no avail. This is rather unpleasant, since I would really like to implement a facade over NetworkStream with a "Received" event, or something like Send(string data, Action<string> responseProcessor), to make working with the stream more comfortable; I already had that implementation, but it also failed on the second read.
Thanks for your suggestions.
Edit: here is the code for the facade over NetworkStream. The same thing happens with this asynchronous approach, although a couple of hours ago it worked, except that the second response returned the same string as the first. I can't figure out what I changed in the meantime.
public void Send(XElement fragment)
{
    if (Sent != null) Sent(this, new XmppEventArgs(fragment));

    byte[] buffer = new byte[1024];
    AsyncCallback callback = null;
    callback = (a) =>
    {
        var available = NetworkStream.EndRead(a);
        if (available > 0)
        {
            StringBuilder.Append(Encoding.UTF8.GetString(buffer, 0, available));
            NetworkStream.BeginRead(buffer, 0, buffer.Length, callback, buffer);
        }
        else
        {
            var args = new XmppEventArgs(XElement.Parse(StringBuilder.ToString()));
            if (Received != null) Received(this, args);
            StringBuilder = new StringBuilder();
            // NetworkStream.BeginRead(buffer, 0, buffer.Length, callback, buffer);
        }
    };
    NetworkStream.BeginRead(buffer, 0, buffer.Length, callback, buffer);

    NetworkStreamWriter.Write(fragment);
    NetworkStreamWriter.Flush();
}
The reader.ReadToEnd() call consumes everything until end-of-stream, i.e. until the TCP connection is closed. XMPP keeps a single long-lived connection open, so instead of waiting for the stream to end you should read whatever bytes have arrived and parse them incrementally.
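A minimal sketch of reading in chunks instead of with ReadToEnd(), assuming the same TcpClient as above (the helper name and the DataAvailable polling are only for illustration; a real client would feed the bytes to an incremental XML parser):

// Reads whatever the server has sent so far without waiting for the connection to close.
static string ReadAvailable(NetworkStream stream)
{
    var buffer = new byte[4096];
    var builder = new StringBuilder();

    do
    {
        int read = stream.Read(buffer, 0, buffer.Length); // blocks until at least some bytes arrive
        if (read == 0) break;                             // 0 means the server closed the connection
        builder.Append(Encoding.UTF8.GetString(buffer, 0, read));
    } while (stream.DataAvailable);                       // keep reading while more bytes are already queued

    return builder.ToString();
}

You would then call ReadAvailable(client.GetStream()) after each write, instead of reader.ReadToEnd().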