MassTransit messages not picked up - msmq

I'm using MassTransit version 2.7.0 to communicate between a web service and a web site. The web service publishes messages and the website subscribes to them.
Publisher service:
ServiceBus = ServiceBusFactory.New(sbc =>
{
    sbc.UseMsmq();
    sbc.VerifyMsmqConfiguration();
    sbc.ReceiveFrom("msmq://localhost/publisher");
    sbc.UseMulticastSubscriptionClient();
});
Subscriber website:
ServiceBus = ServiceBusFactory.New(sbc =>
{
    sbc.UseMsmq();
    sbc.VerifyMsmqConfiguration();
    sbc.ReceiveFrom("msmq://localhost/website");
    sbc.UseMulticastSubscriptionClient();
    sbc.Subscribe(s =>
    {
        s.Handler<SomethingHappened>(HandleSomethingHappened);
    });
});
Everything works fine on my dev machine but not on the live site. The messages show up in the website_subscriptions MSMQ queue, but they aren't getting picked up by the subscribed website. The server does have multiple NICs, so I added the registry settings to support that. I don't get any errors; it just doesn't pick up the messages from the queue.
Am I missing some configuration or permissions problem? Is there some way to see why it isn't working?
[edit]
I just noticed that the website_subscriptions queue contains only framework messages: AddPeerMessage and AddPeerSubscriptionMessage. All the messages from the service (SomethingHappened in the example above) are in the error queue.

You might try setting up the Log4Net or NLog integration to see what is happening in MassTransit. They did a great job of logging lots of useful troubleshooting information.
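For example, to route MassTransit's internal log output to log4net, something along these lines should work (a sketch assuming the MassTransit.Log4NetIntegration package and its UseLog4Net() configurator extension; verify the exact method name against your 2.7.0 build):
log4net.Config.XmlConfigurator.Configure(); // load the log4net settings from web.config/app.config

ServiceBus = ServiceBusFactory.New(sbc =>
{
    sbc.UseLog4Net();              // assumed extension from MassTransit.Log4NetIntegration
    sbc.UseMsmq();
    sbc.VerifyMsmqConfiguration();
    sbc.ReceiveFrom("msmq://localhost/website");
    sbc.UseMulticastSubscriptionClient();
});
With a DEBUG-level appender configured, the log should show why the SomethingHappened messages end up in the error queue (typically a deserialization failure or an exception thrown by the handler).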

Related

Connection to external Kafka Server using confluent-kafka-dotnet fails

I need to read Kafka messages with .NET from an external server. As the first step, I installed Kafka on my local machine and then wrote the .NET code. It worked as expected. Then I moved to the cloud, but the code did not work. Here is the setup that I have.
I have a Kafka Server deployed on a Windows VM (VM1: 10.0.0.4) on Azure. It is up and running. I have created a test topic and produced some messages with cmd. To test that everything is working I have opened a consumer with cmd and received the generated messages.
Then I have deployed another Windows VM (VM2, 10.0.0.5) with Visual Studio. Both of the VMs are deployed on the same virtual network so that I do not have to worry about opening ports or any other network configuration.
Then I copied my Visual Studio project code and changed the IP address of the bootstrap server to point to the Kafka server. It did not work. Then I read that I have to change the server configuration of Kafka, so I opened server.properties and modified the listeners property to listeners=PLAINTEXT://10.0.0.4:9092. It still does not work.
I have searched online and tried many of the tips, but it does not work. I think that, first of all, I need to provide credentials for the external server (VM1), and probably some other configuration. Unfortunately, the official Confluent documentation is very short, with very few examples. There is also no example for my case on the official GitHub. I have played with the "Sasl" properties in the ConsumerConfig class, but with no success.
The error message is:
%3|1622220986.498|FAIL|rdkafka#consumer-1| [thrd:10.0.0.4:9092/bootstrap]: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 1/1 brokers are down
Here is my .NET Core code:
static void Main(string[] args)
{
    string topic = "AzureTopic";
    var config = new ConsumerConfig
    {
        BootstrapServers = "10.0.0.4:9092",
        GroupId = "test",
        //SecurityProtocol = SecurityProtocol.SaslPlaintext,
        //SaslMechanism = SaslMechanism.Plain,
        //SaslUsername = "[User]",
        //SaslPassword = "[Password]",
        AutoOffsetReset = AutoOffsetReset.Latest,
        //EnableAutoCommit = false
    };
    int x = 0;

    using (var consumer = new ConsumerBuilder<Ignore, string>(config)
        .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
        .Build())
    {
        consumer.Subscribe(topic);
        var cancelToken = new CancellationTokenSource();

        while (true)
        {
            // some tasks
        }

        consumer.Close();
    }
}

If you set listeners to a hard-coded IP, the server will only bind to and accept traffic on that IP.
And your listener isn't defined as SASL, so I'm not sure why you've tried using that in the client. While using credentials is strongly encouraged when sending data to cloud resources, it's not required to fix a network connectivity problem. You definitely shouldn't send credentials over plaintext, however.
Start with these settings
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.4:9092
That alone should work within the VM shared network. You can use the console tools included with Kafka to test it.
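From the .NET side on VM2, a quick way to confirm basic reachability (independent of the consumer loop) is to request broker metadata. A minimal sketch using the Confluent.Kafka AdminClient, reusing the bootstrap address from the question:
using System;
using Confluent.Kafka;

class ConnectivityCheck
{
    static void Main()
    {
        var config = new AdminClientConfig { BootstrapServers = "10.0.0.4:9092" };

        using (var admin = new AdminClientBuilder(config).Build())
        {
            // Requests cluster metadata; this throws a KafkaException if the broker cannot be reached in time.
            var metadata = admin.GetMetadata(TimeSpan.FromSeconds(10));
            foreach (var broker in metadata.Brokers)
                Console.WriteLine($"Broker {broker.BrokerId}: {broker.Host}:{broker.Port}");
        }
    }
}
If that call times out, the problem is still at the listener/network level rather than in the consumer configuration.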
And if that still doesn't work from your local client, it's because the 10.0.0.0/8 address space is considered a private network, and you must advertise the VM's public IP and allow TCP traffic on port 9092 through the Azure firewall. It would also make sense to expose multiple listeners: one for the internal Azure network and one for external, forwarded network traffic.
The details here discuss AWS and Docker, but the basics still apply.
Overall, I think it would be easier to set up Azure Event Hubs with Kafka support.

Log all exceptions in service fabric application

I have a bunch of backend services in Azure Service Fabric, and I want to log any uncaught exceptions to App Insights, along with all my other logs. Is there any way in an Azure Service Fabric app to catch all uncaught exceptions and log them before re-throwing them?
You're using .NET, so you have access to the standard AppDomain way of handling all uncaught exceptions: use this event.
Add the following lines to your Program.cs, with your logging code inside the handler:
AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    // log exception
};
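If Application Insights is the target, the handler can pass the exception straight to a TelemetryClient. A minimal sketch, assuming the standard Microsoft.ApplicationInsights package is referenced and the instrumentation key is configured elsewhere:
// requires: using Microsoft.ApplicationInsights; at the top of Program.cs
var telemetryClient = new TelemetryClient();

AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    // e.ExceptionObject is typed as object, but for CLR exceptions it is an Exception instance.
    telemetryClient.TrackException(e.ExceptionObject as Exception);
    telemetryClient.Flush(); // the process is about to terminate, so push the telemetry out now
};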
For sending application/service telemetry to Application Insights, I strongly recommend you have a look at App Insights Service Fabric. It works great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
If you're also interested in monitoring the overall health of your cluster (e.g. CPU/memory and when nodes go up/down), have a look at EventFlow or this GitHub project

How to define contracts for both messaging and HTTP API using spring-cloud-contract

I have a situation where there are 2 services. Service A exposes a query API through an HTTP endpoint and also listens for incoming asynchronous command messages (service A owns both of the CQRS contracts).
Service B is using both endpoints of service A: to GET data and to invoke commands.
While implementing the contract (stub and tests) for the HTTP flow is quite simple, configuring the messaging part is tricky for me, and I've actually become stuck on it.
The docs say there is publisher-side test generation, which suits the case of publishing an event where the publisher owns the contract.
But how do you make it work in a situation where the message consumer owns the contract?
I can't figure out a solution for this, as I need a stub used in service A to verify that service A properly consumes command messages, and I also need generated tests on service B that verify that service B produces compliant command messages.
I'd appreciate any help.
Many thanks in advance.
Service A is the producer of the API and the consumer of messages; it owns only the contracts for HTTP. The messaging contracts are owned by Service B, which is the producer of messages. You should have an HTTP contract defined on the Service A side and a Stub Runner test to check that it can receive the message sent by Service B. Service B should have the messaging contract to assert whether the message is properly sent, plus a Stub Runner test for HTTP.
That might lead to a dependency cycle. If you have a cycle between your apps then, yeah, what you have to do is ignore a Stub Runner test on one side until the jars get uploaded.
You've asked about storing contracts in a separate repository. You can do it - here are the docs https://cloud.spring.io/spring-cloud-static/Edgware.SR3/multi/multi__spring_cloud_contract_faq.html#_common_repo_with_contracts and here is an example https://github.com/spring-cloud-samples/spring-cloud-contract-samples/tree/master/beer_contracts
You've asked about not generating the tests for some reason (IMO that's the wrong thing to do). In Maven you can skip <extensions>true</extensions> and instead manually specify which goals you want to execute (omitting test generation). In Gradle, just disable the generateContractTests task, AFAIR.

Service Fabric and Application Insights

I am new to Service Fabric and trying to integrate my Windows service application into Service Fabric. For logging, we are planning to use Application Insights, but the events are not logged if I send them through my SF application. At the same time, through a normal console/Windows application, I am able to log messages to Application Insights and view them there.
Then I created a VM in the Azure environment, created an SF application there, sent the log information to AI, and it worked successfully. I copied the same codebase onto my local machine and ran it; it's not working. I am not sure whether it's related to any firewall or proxy settings. Can anyone help with this?
I have used the NuGet package to install the Microsoft.ApplicationInsights dll on my machine. The version that I used is 2.2.0, and I am using .NET Framework 4.6.1.
You could look at EventFlow to help you capture Service Fabric ETW events from your SF services and send them to Application Insights.
It's easy enough to set up: simply add the Microsoft.Diagnostics.EventFlow.ServiceFabric NuGet package to your Service Fabric service project and then set up a pipeline:
public static void Main(string[] args)
{
    try
    {
        using (var diagnosticsPipeline = ServiceFabricDiagnosticPipelineFactory.CreatePipeline("MyApplication-MyService-DiagnosticsPipeline"))
        {
            ServiceRuntime.RegisterServiceAsync("MyServiceType", ctx => new MyService(ctx)).Wait();
            ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id, typeof(MyService).Name);
            Thread.Sleep(Timeout.Infinite);
        }
    }
    catch (Exception e)
    {
        ServiceEventSource.Current.ServiceHostInitializationFailed(e.ToString());
        throw;
    }
}
In your eventflow.config you can then set up Application Insights as an output:
{
  "inputs": [
    {
      "type": "EventSource",
      "sources": [
        { "providerName": "Your-Service-EventSource" }
      ]
    }
  ],
  "filters": [
    {
      "type": "drop",
      "include": "Level == Verbose"
    }
  ],
  "outputs": [
    // Please update the instrumentationKey.
    {
      "type": "ApplicationInsights",
      "instrumentationKey": "00000000-0000-0000-0000-000000000000"
    }
  ],
  "schemaVersion": "2016-08-11",
  "extensions": []
}
An alternative to the EventFlow approach suggested by yoape would be Azure Diagnostics (WAD).
Setup WAD in SF VMSS
When you're running an Azure Service Fabric cluster, it's a good idea to collect the logs from all the nodes in a central location. Having the logs in a central location helps you analyze and troubleshoot issues in your cluster, or issues in the applications and services running in that cluster. One way to upload and collect logs is to use the Windows Azure Diagnostics (WAD) extension, which uploads logs to Azure Storage, and also has the option to send logs to Azure Application Insights or Event Hubs. You can also use an external process to read the events from storage and place them in an analysis platform product, such as OMS Log Analytics or another log-parsing solution.
Setup AI upload in WAD
Cloud services, Virtual Machines, Virtual Machine Scale Sets and Service Fabric all use the Azure Diagnostics extension to collect data. Azure Diagnostics sends data to Azure Storage tables. However, you can also pipe all or a subset of the data to other locations using Azure Diagnostics extension 1.5 or later. This article describes how to send data from the Azure Diagnostics extension to Application Insights.
The nice thing about it is that it's completely managed by Azure, and you don't need to change anything in your project.
You can adapt the watchdog service from the Microsoft samples. The watchdog service is a generic, standalone Service Fabric stateful service that logs data into Application Insights.
Add the watchdog app and watchdog service to your solution.
Add your Azure App ID in the WatchdogService PackageRoot/Config/ServiceManifest.xml.
In the service that you need monitored, add the following lines to the RunAsync method (the example is from the test stateless service provided in the link below).
Code:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    // Register the health check and metrics with the watchdog.
    bool healthRegistered = await this.RegisterHealthCheckAsync(cancellationToken);
    bool metricsRegistered = await this.RegisterMetricsAsync(cancellationToken);

    while (true)
    {
        // Report some fake metrics to Service Fabric.
        this.ReportFakeMetrics(cancellationToken);
        await Task.Delay(TimeSpan.FromSeconds(30), cancellationToken);

        // Check that registration was successful. Could also query the watchdog for additional safety.
        if (false == healthRegistered)
        {
            healthRegistered = await this.RegisterHealthCheckAsync(cancellationToken);
        }

        if (false == metricsRegistered)
        {
            metricsRegistered = await this.RegisterMetricsAsync(cancellationToken);
        }
    }
}
Copy the RegisterHealthCheckAsync, RegisterMetricsAsync and ReportFakeMetrics methods, as is, into your service.cs file.
That should be it! It uses Azure Storage optionally. I did not have to implement that to get the watchdog up and running.
Here is the link: https://github.com/Azure-Samples/service-fabric-watchdog-service
For sending application/service telemetry to Application Insights, I strongly recommend you have a look at App Insights Service Fabric. It works great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
One thing however that the above won't solve is providing overall cluster health information - e.g. when/how often nodes go up/down, how much CPU/Memory and disk IO is consumed on individual nodes.
When running in Azure, the above should be fairly simple and I recommend you start here.
Doing this on-premise is not quite as simple. For this you could try MS EventFlow, or some of the other solutions already mentioned above.
Personally, I ended up creating a simple custom Windows service that uses the standard App Insights NuGet packages to report the following info (a rough sketch follows the list):
Cluster and Node ETW events from the Service Fabric Operational ETW channel
Performance counters (configurable via app insights config file)
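If you want to roll something similar yourself, here is a minimal sketch using only the standard App Insights SDK; the metric name and performance counter below are illustrative, not taken from that service:
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.ApplicationInsights;

class NodeMetricsReporter
{
    static void Main()
    {
        // Assumes the Microsoft.ApplicationInsights package with the instrumentation key configured elsewhere.
        var telemetryClient = new TelemetryClient();

        using (var cpuCounter = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        {
            cpuCounter.NextValue();   // the first read of a PerformanceCounter is always 0, so prime it
            Thread.Sleep(1000);

            // Push one node-level metric sample; a real watchdog would do this on a timer.
            telemetryClient.TrackMetric("NodeCpuPercent", cpuCounter.NextValue());
            telemetryClient.Flush();
        }
    }
}
Reading the Service Fabric operational ETW channel is more involved (it needs an ETW listener such as the TraceEvent library), which is why EventFlow or WAD is usually the simpler route for that part.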

spring cloud auto refresh config server property

I have configured Spring Cloud Config, which picks up properties from GitHub. If I POST to /refresh, I am also able to get the updated value in my application.
Now I want to get properties updated automatically. That means I don't want to have to hit the refresh API to get the changes from the GitHub property file reflected in my application.
Do I need to implement RabbitMQ and Cloud Bus for this, or is there another simple way to do it?
Also, the documentation says that we need to add a dependency on the spring-cloud-config-monitor library for push notifications.
http://projects.spring.io/spring-cloud/spring-cloud.html#_push_notifications_and_spring_cloud_bus
But I did not find any such dependency in Maven to add. I am not sure if my understanding is wrong. Please help.
You would need a Config server with Spring Cloud Bus and RabbitMQ (or Kafka or Redis) support.
RabbitMQ with the following exchange:
name: springCloudBus
type: topic
durable: true
autoDelete: false
internal: false
The config server would send data to the topic once it receives push events from Git (GitHub, Bitbucket, GitLab) via a webhook to http://<config-server>/monitor.
And a client application with the Config and RabbitMQ libraries, subscribed to the previous exchange to receive messages about the properties that need to be refreshed.
More can be found in my blog post at http://tech.asimio.net/2017/02/02/Refreshable-Configuration-using-Spring-Cloud-Config-Server-Spring-Cloud-Bus-RabbitMQ-and-Git.html, with a brief explanation of the configuration, logs, and full source code for the Config server and client app.
Those artifacts are not generally available yet. You need to add http://repo.spring.io/milestone/ as a Maven repository and use a milestone release.