I'm trying to make a Kafka producer and consumer, but my project is on .NET Core 2.0 and it doesn't seem to work well with Kafka. This is the proof-of-concept I've come up with. I'm using Visual Studio 2017 with the kafka-net NuGet package:
using System;
using System.Collections.Generic;
using System.Text;
using KafkaNet;
using KafkaNet.Model;
using KafkaNet.Protocol;
Producer:
static void Main(string[] args)
{
    string payload = "Welcome to Kafka!";
    string topic = "IDGTestTopic";
    Message msg = new Message(payload);
    Uri uri = new Uri("localhost:9092");
    var options = new KafkaOptions(uri);
    var router = new BrokerRouter(options);
    var client = new Producer(router);
    client.SendMessageAsync(topic, new List<Message> { msg }).Wait();
    Console.ReadLine();
}
Consumer:
static void Main(string[] args)
{
    string topic = "IDGTestTopic";
    Uri uri = new Uri("http://localhost:9092");
    var options = new KafkaOptions(uri);
    var router = new BrokerRouter(options);
    var consumer = new Consumer(new ConsumerOptions(topic, router));
    foreach (var message in consumer.Consume())
    {
        Console.WriteLine(Encoding.UTF8.GetString(message.Value));
    }
    Console.ReadLine();
}
When I try to run the producer first, I get an error message on the BrokerRouter:
$exception {System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
Parameter name: port
   at System.Net.IPEndPoint..ctor(IPAddress address, Int32 port)
   at KafkaNet.DefaultKafkaConnectionFactory.Resolve(Uri kafkaAddress, IKafkaLog log)
   at KafkaNet.Model.KafkaOptions.<get_KafkaServerEndpoints>d__0.MoveNext()
   at KafkaNet.BrokerRouter..ctor(KafkaOptions kafkaOptions)
   at SampleKafkaProducer.Program.Main(String[] args) in C:\v4target\SampleKafka\SampleKafkaProducer\SampleKafkaProducer\Program.cs:line 18} System.ArgumentOutOfRangeException
How is a port of 9092 out of range? My Visual Studio projects are running on ports in the 55000s. Multiple sources I've researched use 9092 as the Kafka port.
Does anyone understand the error message? Is part of the problem that I'm using a version of Kafka not compatible with .NET Core?
The problem is with the URI:
Uri uri = new Uri("localhost:9092");
Without a scheme, "localhost" is parsed as the scheme and no port is recognized: if you print out uri.Port, it's -1. Hence the ArgumentOutOfRangeException.
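You can see this behavior on its own with a minimal standalone sketch (nothing Kafka-specific here, just System.Uri):

using System;

class UriPortDemo
{
    static void Main()
    {
        var bad = new Uri("localhost:9092");      // "localhost" is parsed as the scheme, not a host
        Console.WriteLine(bad.Port);              // -1: no port was recognized

        var good = new Uri("http://localhost:9092");
        Console.WriteLine(good.Port);             // 9092
    }
}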
Try this instead:
Uri uri = new Uri("http://localhost:9092");
From the KafkaNet repository, this is how they set up the URI:
var options = new KafkaOptions(new Uri("http://CSDKAFKA01:9092"), new Uri("http://CSDKAFKA02:9092"))
{
    Log = new ConsoleLog()
};
I have created a device that delivers messages. Now I want to send them to an Azure IoT Hub using MQTTnet version 2.4.0, because the project's .NET target framework is version 4.5 and it is not my decision to change it.
My questions:
Is there any other or better method to do this?
Which MqttClientOption works best?
Which parameters do I set, and to which values, to connect to the hub?
I have tried almost every combination of values for the ClientId/UserName/Password as described here: https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-mqtt-support#using-the-mqtt-protocol-directly-as-a-device, but none of them worked for me.
I have also built a similar device outside the project, on the current framework, and it worked perfectly with the newer version of MQTTnet.
Sadly I don't get any kind of error message, only a MqttCommunicationTimedOutException after about 10 seconds.
Thanks for your help; I have been stuck on this problem for almost a week.
The following code snippet is a working example of a simulated device1 using the MQTT protocol directly against the Azure IoT Hub via the MQTTnet version 2.4.0 library:
using MQTTnet;
using MQTTnet.Core;
using MQTTnet.Core.Client;
using MQTTnet.Core.Packets;
using MQTTnet.Core.Protocol;
using System;
using System.Text;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var options = new MqttClientTcpOptions()
            {
                Server = "myIoTHub.azure-devices.net",
                Port = 8883,
                ClientId = "device1",
                UserName = "myIoTHub.azure-devices.net/device1/api-version=2018-06-30",
                // Device-scoped SAS token; the signature is masked here.
                Password = "SharedAccessSignature sr=myIoTHub.azure-devices.net%2Fdevices%2Fdevice1&sig=****&se=1592830262",
                ProtocolVersion = MQTTnet.Core.Serializer.MqttProtocolVersion.V311,
                TlsOptions = new MqttClientTlsOptions() { UseTls = true },
                CleanSession = true
            };

            var factory = new MqttClientFactory();
            var mqttClient = factory.CreateMqttClient();

            // handlers
            mqttClient.Connected += delegate (object sender, EventArgs e)
            {
                Console.WriteLine("Connected");
            };
            mqttClient.Disconnected += delegate (object sender, EventArgs e)
            {
                Console.WriteLine("Disconnected");
            };
            mqttClient.ApplicationMessageReceived += delegate (object sender, MqttApplicationMessageReceivedEventArgs e)
            {
                Console.WriteLine(Encoding.ASCII.GetString(e.ApplicationMessage.Payload));
            };

            mqttClient.ConnectAsync(options).Wait();

            // subscribe on the topics
            var topicFilters = new[] {
                new TopicFilter("devices/device1/messages/devicebound/#", MqttQualityOfServiceLevel.AtLeastOnce),
                new TopicFilter("$iothub/twin/PATCH/properties/desired/#", MqttQualityOfServiceLevel.AtLeastOnce),
                new TopicFilter("$iothub/methods/POST/#", MqttQualityOfServiceLevel.AtLeastOnce)
            };
            mqttClient.SubscribeAsync(topicFilters).Wait();

            // publish message (awaited so a failure surfaces instead of being swallowed)
            var topic = "devices/device1/messages/events/$.ct=application%2Fjson&$.ce=utf-8";
            var payload = Encoding.ASCII.GetBytes("Hello IoT Hub");
            var message = new MqttApplicationMessage(topic, payload, MqttQualityOfServiceLevel.AtLeastOnce, false);
            mqttClient.PublishAsync(message).Wait();

            Console.Read();
        }
    }
}
A screenshot (omitted here) showed an example of the output for updating a desired twin property color and receiving a C2D message.
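The Password field in the options above is a device-scoped SAS token. As a rough sketch of how such a token can be generated with the HMAC-SHA256 scheme from the Microsoft docs linked in the question (SasTokenDemo and its parameter names are illustrative, and the key shown is a dummy; use the device's real base64 primary key):

using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

static class SasTokenDemo
{
    // Builds "SharedAccessSignature sr=...&sig=...&se=..." for a device.
    static string Generate(string resourceUri, string deviceKeyBase64, int ttlSeconds)
    {
        // Expiry in Unix seconds, computed manually to stay .NET 4.5-friendly.
        long se = (long)(DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds + ttlSeconds;
        string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + se;
        using (var hmac = new HMACSHA256(Convert.FromBase64String(deviceKeyBase64)))
        {
            string sig = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            return "SharedAccessSignature sr=" + WebUtility.UrlEncode(resourceUri)
                 + "&sig=" + WebUtility.UrlEncode(sig) + "&se=" + se;
        }
    }

    static void Main()
    {
        // Dummy key for illustration only.
        string deviceKey = Convert.ToBase64String(Encoding.UTF8.GetBytes("not-a-real-key"));
        Console.WriteLine(Generate("myIoTHub.azure-devices.net/devices/device1", deviceKey, 3600));
    }
}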
I am trying to follow the hello world example. With regular ActiveMQ it works, but ActiveMQ Artemis is giving me headaches. I guess there is some configuration I am not doing correctly: the address is created, but it is created with multicast routing, and I think I need unicast (queue) routing.
The snippet below does not work against the Artemis version of ActiveMQ. Is what I am trying to do possible? I would like to auto-create a durable queue.
public class SimpleAmqpTest
{
    [Fact]
    public async Task TestHelloWorld()
    {
        Address address = new Address("amqp://guest:guest@localhost:5672");
        Connection connection = await Connection.Factory.CreateAsync(address);
        Session session = new Session(connection);

        Message message = new Message("Hello AMQP");
        var target = new Target
        {
            Address = "simple-queue",
            Durable = 1,
        };
        SenderLink sender = new SenderLink(session, "sender-link", target, null);
        await sender.SendAsync(message);

        ReceiverLink receiver = new ReceiverLink(session, "receiver-link", "simple-queue");
        message = await receiver.ReceiveAsync();
        receiver.Accept(message);

        await sender.CloseAsync();
        await receiver.CloseAsync();
        await session.CloseAsync();
        await connection.CloseAsync();
    }
}
Finally found out what I was doing wrong. Since AMQP itself does not carry the configuration of queues and topics, it can be expressed in the link's capabilities. For some reason Artemis creates topics (multicast) by default; if you need anycast you can say so using Capabilities = new Symbol[] { new Symbol("queue") }. The full test fact:
[Fact]
public async Task TestHelloWorld()
{
    // Strange: works with regular ActiveMQ and the AMQP test broker from
    // http://azure.github.io/amqpnetlite/articles/hello_amqp.html,
    // but did not work against ActiveMQ Artemis until the "queue" capability was added.
    Address address = new Address("amqp://guest:guest@localhost:5672");
    Connection connection = await Connection.Factory.CreateAsync(address);
    Session session = new Session(connection);

    Message message = new Message("Hello AMQP");
    Target target = new Target
    {
        Address = "q1",
        Capabilities = new Symbol[] { new Symbol("queue") }  // force anycast (queue) routing
    };
    SenderLink sender = new SenderLink(session, "sender-link", target, null);
    sender.Send(message);

    Source source = new Source
    {
        Address = "q1",
        Capabilities = new Symbol[] { new Symbol("queue") }  // match the anycast address on receive
    };
    ReceiverLink receiver = new ReceiverLink(session, "receiver-link", source, null);
    message = await receiver.ReceiveAsync();
    receiver.Accept(message);

    await sender.CloseAsync();
    await receiver.CloseAsync();
    await session.CloseAsync();
    await connection.CloseAsync();
}
I am developing a REST client using the JBoss app server and RESTEasy 2.3.6. I've included the following line at the beginning of my code:
RegisterBuiltin.register(ResteasyProviderFactory.getInstance());
Here's the rest of the snippet:
RegisterBuiltin.register(ResteasyProviderFactory.getInstance());

DefaultHttpClient httpclient = new DefaultHttpClient();
httpclient.getCredentialsProvider().setCredentials(
        new AuthScope(host, port, AuthScope.ANY_REALM),
        new UsernamePasswordCredentials(userid, password));
ClientExecutor executor = createAuthenticatingExecutor(httpclient, host, port);

String uriTemplate = "http://myhost:8080/webapp/rest/MySearch";
ClientRequest request = new ClientRequest(uriTemplate, executor);
request.accept("application/json").queryParameter("query", searchArg);

ClientResponse<SearchResponse> response = null;
List<MyClass> values = null;
try
{
    response = request.get(SearchResponse.class);
    if (response.getResponseStatus().getStatusCode() != 200)
    {
        throw new Exception("REST GET failed");
    }
    SearchResponse searchResp = response.getEntity();
    values = searchResp.getValue();
}
catch (ClientResponseFailure e)
{
    log.error("REST call failed", e);
}
finally
{
    response.releaseConnection();
}

private ClientExecutor createAuthenticatingExecutor(DefaultHttpClient client, String server, int port)
{
    // Create AuthCache instance
    AuthCache authCache = new BasicAuthCache();

    // Generate BASIC scheme object and add it to the local auth cache
    BasicScheme basicAuth = new BasicScheme();
    HttpHost targetHost = new HttpHost(server, port);
    authCache.put(targetHost, basicAuth);

    // Add AuthCache to the execution context
    BasicHttpContext localContext = new BasicHttpContext();
    localContext.setAttribute(ClientContext.AUTH_CACHE, authCache);

    // Create the ClientExecutor backed by the authenticating HttpClient
    ApacheHttpClient4Executor executor = new ApacheHttpClient4Executor(client, localContext);
    return executor;
}
The above is a fairly simple client that employs the ClientRequest/ClientResponse<T> technique, as documented here. The above code does work (I've only left out some trivial variable declarations like host and port). It is unclear to me from the JBoss documentation whether I need to run RegisterBuiltin.register first. If I remove the line completely, my code still functions. Do I really need to include the register call given the approach I have taken? The docs say I need to run it once per VM. Secondly, if I am required to call it, is it safe to call it more than once in the same VM?
NOTE: I do understand there are newer versions of RESTEasy for JBoss; we are not there yet.
I'm trying to integrate Kafka (Kafka_2.10, version 0.8.2.1) with Storm (version 0.9.3) in a Cloudera environment, and I have written some producer/consumer code. I am able to run the producer code separately with Kafka and see that it works with my consumer code (on the console). I then wrote some code using KafkaSpout and HdfsBolt to write data into HDFS. With this code I am able to create a topology (and see it in the UI), but the KafkaSpout is not receiving any messages from the producer.
My code snippet is shown below:
public class LoadingData {
    public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException {
        String kafkaTopic = "test";
        SpoutConfig spoutConfig = new SpoutConfig(new ZkHosts("localhost:2181"),
                kafkaTopic, "/kafkastorm", "KafkaSpout");

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("KafkaSpout", new KafkaSpout(spoutConfig), 4);

        RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter(",");
        SyncPolicy syncPolicy = new CountSyncPolicy(10);
        FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB);
        FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/stormstuff");
        builder.setBolt("stormbolt", new HdfsBolt()
                .withFsUrl("hdfs://localhost:8020")
                .withSyncPolicy(syncPolicy)
                .withRecordFormat(format)
                .withRotationPolicy(rotationPolicy)
                .withFileNameFormat(fileNameFormat), 1
        ).shuffleGrouping("KafkaSpout");

        String topologyName = "EmployeeTopology";
        Config config = new Config();
        config.setNumWorkers(1);
        StormSubmitter.submitTopology(topologyName, config, builder.createTopology());
    }
}
Any ideas/suggestions on what I might be doing wrong? I really appreciate your help! Please let me know if you need any more details.
Could someone explain why this HttpUnit test case keeps failing in wc.getResponse with "bad file descriptor"? I added the is.close() as a guess and moved it before and after the failure, but that had no effect. This test PUTs a request to a Dropwizard app.
public class TestCircuitRequests
{
    static WebConversation wc = new WebConversation();
    static String url = "http://localhost:8888/funl/circuit/test.circuit1";

    @Test
    public void testPut() throws Exception
    {
        InputStream is = new FileInputStream("src/test/resources/TestCircuit.json");
        WebRequest rq = new PutMethodWebRequest(url, is, "application/json");
        wc.setAuthentication("FUNL", "foo", "bar");
        WebResponse response = wc.getResponse(rq);
        is.close();
    }
}
No responses? Then I'll answer it myself, based on what I learned fighting this.
HttpUnit is an old familiar tool that I'd use if I could, but it hasn't been updated in more than two years, so I gather its support for @PUT requests isn't right.
So I converted to the Jersey client instead. After a bunch of struggling I wound up with this code, which does seem to work:
@Test
public void testPut() throws Exception
{
    InputStream is = new FileInputStream("src/test/resources/TestCircuit.json");
    String circuit = StreamUtil.readFully(is);
    is.close();

    Authenticator.setDefault(new MyAuthenticator());
    ClientConfig config = new DefaultClientConfig();
    Client client = Client.create(config);
    com.sun.jersey.api.client.WebResource service = client.resource(url);
    Builder builder = service.accept(MediaType.APPLICATION_JSON);
    builder.entity(circuit, MediaType.APPLICATION_JSON);
    builder.put(String.class, circuit);
    return;
}
This intentionally avoids JAX-RS automatic construction of beans from JSON strings.