How to connect Vertx RedisClient in cluster mode with Elasticache - vert.x

I am using the Vert.x Redis client from the package io.vertx.rxjava.redis.RedisClient to connect to ElastiCache Redis.
It does connect, but shows an error:
io.vertx.redis.client.impl.types.ErrorType: MOVED 4985 xxx.xxx.xxx.xxx:63791
After reading about the error, I found that it happens because the data is sharded across multiple nodes and the client is not able to follow the redirection to the right shard.
From the library, I am not able to figure out which method to use to connect in cluster mode.

Here is an example of how to connect and send commands in cluster mode.
Define options:
// Requires io.vertx.redis.client.RedisOptions, RedisClientType and RedisSlaves
final RedisOptions options = new RedisOptions()
  .setType(RedisClientType.CLUSTER)
  .setUseSlave(RedisSlaves.SHARE)
  .setMaxWaitingHandlers(128 * 1024)
  .addEndpoint("redis://127.0.0.1:7000")
  .addEndpoint("redis://127.0.0.1:7001")
  .addEndpoint("redis://127.0.0.1:7002")
  .addEndpoint("redis://127.0.0.1:7003")
  .addEndpoint("redis://127.0.0.1:7004")
  .addEndpoint("redis://127.0.0.1:7005");
Connect and send a command:
// Requires io.vertx.redis.client.Redis plus the static imports
// io.vertx.redis.client.Request.cmd and io.vertx.redis.client.Command.SET
Redis.createClient(vertx, options).connect(onCreate -> {
  final Redis cluster = onCreate.result();
  // SET takes both a key and a value
  cluster.send(cmd(SET).arg("key").arg("value"), set -> {
    System.out.println(set.result());
  });
});
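A GET along the same lines, inside the same connect callback, might look like this (a minimal sketch; it additionally assumes the static import io.vertx.redis.client.Command.GET):
cluster.send(cmd(GET).arg("key"), get -> {
  if (get.succeeded()) {
    // prints the value stored under "key" by the SET above
    System.out.println(get.result());
  }
});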
Tip: if you are unsure how to use a library, or its documentation is not clear enough, you can always check out the project's tests (if it has them), see how they are implemented, and use examples from there.

Related

How to disable sniffing of elasticsearch in Monstache

I'm getting this error while using Monstache:
Unable to create Elasticsearch client: health check timeout: no Elasticsearch node available
I added these lines to the Monstache configuration:
elasticsearch-validate-pem-file = false
elasticsearch-healthcheck-timeout-startup = 200
elasticsearch-healthcheck-timeout = 200
However, I still encounter the mentioned error. When I searched for it, I found that the problem is caused by sniffing in the Elasticsearch client, but I don't know where and how exactly to change that.
I should note that I studied this tutorial for this problem, but I am still unsure.
The problem was solved when I installed Monstache on the same local server where the ELK stack was installed. Also, the MongoDB database on the remote server was changed to a single-node replica set so that Monstache could connect to it.
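For anyone needing the same MongoDB change, converting a standalone mongod into a single-node replica set is roughly the following (a sketch; rs0 is an arbitrary replica set name, and your mongod may need its usual --dbpath and --bind_ip flags):
mongod --replSet rs0
# then, once, in the mongo shell:
rs.initiate()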
Let's try disabling sniffing with
elastic.SetSniff(false)
(an option of the Go Elasticsearch client that Monstache uses internally).

Run Arango Shell (Arangosh) on a Kubernetes pod

I have set up an Arango instance on Kubernetes nodes, which were installed on a VM, as described in the ArangoDB docs ArangoDB on Kubernetes. Keep in mind, I skipped the ArangoLocalStorage and ArangoDeploymentReplication steps. I can see 3 pods each for agents, coordinators, and dbservers in kubectl get pods.
The arango-cluster-ea service, however, shows the external IP as pending. I can use the master node's IP address and the service port to access the web UI, connect to the DB, and make changes. But I am not able to access the Arango shell, nor am I able to use my Python code to connect to the DB. I am using the master node's IP and the service port shown for arango-cluster-ea in services to try to make the Python code connect to the DB. Similarly, for arangosh, I am trying:
kubectl exec -it *arango-cluster-crdn-pod-name* -- arangosh --server.endpoint tcp://masternodeIP:8529
In the case of Python, since the Connection class call is in a try block, it goes to the except block. In the case of arangosh, it opens the Arango shell with the error:
Cannot connect to tcp://masternodeIP:port
thus not connecting to the DB.
Any leads about this would be appreciated.
Posting this community wiki answer to point to the GitHub issue where this question was resolved.
Feel free to edit/expand.
Link to GitHub:
GitHub.com: ArangoDB: kube-arangodb: Issues: 734
Here's how my issue got resolved:
To connect with arangosh, what worked for me was to use the ssl:// scheme with the localhost:8529 host-port combination in --server.endpoint. Here's the command that worked:
kubectl exec -it _arango_cluster_crdn_podname_ -- arangosh --server.endpoint ssl://localhost:8529
For web browser, since my external access was based on NodePort type, I put in the master node's IP and the 30000-level port number that was generated (in my case, it was 31200).
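If you need to look up that generated port, a quick check like the following shows it (the service name is taken from the setup above; the 8529:31200/TCP mapping reflects my case):
kubectl get service arango-cluster-ea
The PORT(S) column shows the mapping, e.g. 8529:31200/TCP.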
For Python, in case of PyArango's Connection class, it worked when I used the arango-cluster-ea service. I put in the following line in the connection call:
conn = Connection(arangoURL='https://arango-cluster-ea:8529', verify=False, username='root', password='XXXXX')
The verify=False flag is important to skip SSL certificate validation; otherwise it will throw an error again.
Hopefully this helps somebody else facing a similar issue.
I've tested the following solution and managed to successfully connect to the database via:
arangosh from localhost:
Connected to ArangoDB 'http+ssl://localhost:8529, version: 3.7.12 [SINGLE, server], database: '_system', username: 'root'
Python code
from pyArango.connection import *
conn = Connection(arangoURL='https://ABCD:8529', username="root", password="password", verify=False)
db = conn.createDatabase(name="school")
Additional resources:
Arangodb.com: Tutorials: Tutorial Python
Arangodb.com: Docs: Stable: Tutorials Kubernetes

Connection to external Kafka Server using confluent-kafka-dotnet fails

I need to read Kafka messages with .NET from an external server. As a first step, I installed Kafka on my local machine and then wrote the .NET code. It worked as expected. Then I moved to the cloud, but the code did not work. Here is the setup that I have.
I have a Kafka Server deployed on a Windows VM (VM1: 10.0.0.4) on Azure. It is up and running. I have created a test topic and produced some messages with cmd. To test that everything is working I have opened a consumer with cmd and received the generated messages.
Then I have deployed another Windows VM (VM2, 10.0.0.5) with Visual Studio. Both of the VMs are deployed on the same virtual network so that I do not have to worry about opening ports or any other network configuration.
Then I copied my Visual Studio project code and changed the IP address of the bootstrap server to point to the Kafka server. It did not work. Then I read that I have to change the Kafka server configuration, so I opened server.properties and modified the listeners property to listeners=PLAINTEXT://10.0.0.4:9092. It still does not work.
I have searched online and tried many of the tips, but it does not work. I think that, first of all, I need to provide credentials for the external server (VM1), and probably some other configuration. Unfortunately, the official Confluent documentation is very short, with very few examples. There is also no example for my case in the official GitHub repository. I have played with the Sasl properties in the ConsumerConfig class, but also with no success.
The error message is:
%3|1622220986.498|FAIL|rdkafka#consumer-1| [thrd:10.0.0.4:9092/bootstrap]: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 1/1 brokers are down
Here is my .Net core code:
static void Main(string[] args)
{
    string topic = "AzureTopic";
    var config = new ConsumerConfig
    {
        BootstrapServers = "10.0.0.4:9092",
        GroupId = "test",
        //SecurityProtocol = SecurityProtocol.SaslPlaintext,
        //SaslMechanism = SaslMechanism.Plain,
        //SaslUsername = "[User]",
        //SaslPassword = "[Password]",
        AutoOffsetReset = AutoOffsetReset.Latest,
        //EnableAutoCommit = false
    };
    using (var consumer = new ConsumerBuilder<Ignore, string>(config)
        .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
        .Build())
    {
        consumer.Subscribe(topic);
        var cancelToken = new CancellationTokenSource();
        while (!cancelToken.IsCancellationRequested)
        {
            // some tasks, e.g. var result = consumer.Consume(cancelToken.Token);
        }
        consumer.Close();
    }
}
If you set listeners to a hard-coded IP, the broker will only bind to, and accept traffic on, that IP.
And your listener isn't defined as SASL, so I'm not sure why you've tried using that in the client. While using credentials is strongly encouraged when sending data to cloud resources, it's not required to fix a network connectivity problem. You definitely shouldn't send credentials over plaintext, however.
Start with these settings
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.4:9092
That alone should work within the VM shared network. You can use the console tools included with Kafka to test it.
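For example, from the Kafka installation directory on the VM (the topic name is taken from your code; on Linux the equivalent .sh script applies):
bin\windows\kafka-console-consumer.bat --bootstrap-server 10.0.0.4:9092 --topic AzureTopic --from-beginning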
And if that still doesn't work from your local client, then it's because the 10.0.0.0/8 address space is a private network, and you must advertise the VM's public IP and allow TCP traffic on port 9092 through the Azure firewall. It would also make sense to expose multiple listeners, one for internal Azure network traffic and one for external, forwarded traffic, as sketched below.
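A minimal sketch of such a dual-listener setup (the listener names and the public IP placeholder are illustrative assumptions, not taken from your setup):
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://10.0.0.4:9092,EXTERNAL://<vm-public-ip>:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL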
Details here discuss AWS and Docker, but the basics still apply.
Overall, I think it would be easier to set up Azure Event Hubs with Kafka support.
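If you go that route, the client-side settings are roughly these (librdkafka-style property names; the namespace and connection string are placeholders for your own):
bootstrap.servers=<namespace>.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.username=$ConnectionString
sasl.password=<event-hubs-connection-string>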

openshift 3.12 websocket ERR_CONNECTION_ABORTED

I would like to open WebSocket connections (ws://whatever) in OpenShift, but somehow they always end with ERR_CONNECTION_ABORTED immediately (new WebSocket('ws://whatever')).
First I thought that the problem was in our application, but I created a minimal example and got the same result.
First I created a pod and started this minimal Python websocket server:
import asyncio
import websockets

async def hello(websocket, path):
    name = await websocket.recv()
    print(f"< {name}")
    greeting = f"Hello {name}!"
    await websocket.send(greeting)
    print(f"> {greeting}")

start_server = websockets.serve(hello, "0.0.0.0", 8000)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
Then I created a service (TCP 8000) and a route, and I got the same result.
I also tried using a different port or different targets (e.g. /ws), without success.
This minimal script was able to respond to a simple HTTP request, but it cannot serve the WebSocket connection.
Do you have any idea what could be the problem?
(according to the documentation, these connections should work as they are)
Should I try to play with some routing environment variables or are there any limitations which are not mentioned in the documentation?
Posting Károly Frendrich's answer as community wiki:
Finally we realized that TLS termination is required to be set on the route.
It can be done using Secured Routes.
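A minimal sketch of such an edge-terminated secured route (the route and service names are assumptions based on the setup described above); the client then connects with wss:// instead of ws://:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: websocket-example
spec:
  to:
    kind: Service
    name: websocket-service
  port:
    targetPort: 8000
  tls:
    termination: edge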

Not able to get the Metric charts displayed using Hystrix dashboard in Bluemix

I am trying to use Hystrix to implement a service proxy with the circuit breaker pattern. I implemented the Hystrix commands and also packaged the Hystrix servlet to provide the Hystrix stream. To monitor the services, I am using Hystrix Dashboard 1.5.0. Everything works fine on a local Tomcat server; I am able to see the metric charts.
However, when I deploy the same application on Bluemix, the dashboard does not show the charts. Instead it says 'Unable to connect to Command Metric Stream.'. I also checked the stream using the Chrome browser; I am able to see messages like the ones below:
ping:
data:
{
"type":"HystrixCommand",
"name":"GetAllContactsCommand",
"group":"GetAllContactsService",
"currentTime":1464714539673,
"isCircuitBreakerOpen":false,
"errorPercentage":0,
"errorCount":0,
"requestCount":0,
"rollingCountBadRequests":0,
"rollingCountCollapsedRequests":0,
"rollingCountEmit":0,
"rollingCountExceptionsThrown":0,
"rollingCountFailure":0,
"rollingCountEmit":0,
"rollingCountFallbackFailure":0,
"rollingCountFallbackRejection":0,
"rollingCountFallbackSuccess":0,
"rollingCountResponsesFromCache":0,
"rollingCountSemaphoreRejected":0,
"rollingCountShortCircuited":0,
"rollingCountSuccess":0,
"rollingCountThreadPoolRejected":0,
"rollingCountTimeout":0,
"currentConcurrentExecutionCount":0,
"rollingMaxConcurrentExecutionCount":0,
"latencyExecute_mean":0,
"latencyExecute":{"0":0,
"25":0,
"50":0,
"75":0,
"90":0,
"95":0,
"99":0,
"99.5":0,
"100":0
},
"latencyTotal_mean":0,
"latencyTotal":
{ "0":0,
"25":0,
"50":0,
"75":0,
"90":0,
"95":0,
"99":0,
"99.5":0,
"100":0
},
"propertyValue_circuitBreakerRequestVolumeThreshold":20,
"propertyValue_circuitBreakerSleepWindowInMilliseconds":5000,
"propertyValue_circuitBreakerErrorThresholdPercentage":50,
"propertyValue_circuitBreakerForceOpen":false,
"propertyValue_circuitBreakerForceClosed":false,
"propertyValue_circuitBreakerEnabled":true,
"propertyValue_executionIsolationStrategy":"THREAD",
"propertyValue_executionIsolationThreadTimeoutInMilliseconds":1000,
"propertyValue_executionTimeoutInMilliseconds":1000,
"propertyValue_executionIsolationThreadInterruptOnTimeout":true,
"propertyValue_executionIsolationThreadPoolKeyOverride":null,
"propertyValue_executionIsolationSemaphoreMaxConcurrentRequests":10,
"propertyValue_fallbackIsolationSemaphoreMaxConcurrentRequests":10,
"propertyValue_metricsRollingStatisticalWindowInMilliseconds":10000,
"propertyValue_requestCacheEnabled":true,
"propertyValue_requestLogEnabled":true,
"reportingHosts":1
}
data:
{
"type":"HystrixThreadPool",
"name":"GetAllContactsService",
"currentTime":1464714539673,
"currentActiveCount":0,
"currentCompletedTaskCount":3,
"currentCorePoolSize":10,
"currentLargestPoolSize":3,
"currentMaximumPoolSize":10,
"currentPoolSize":3,
"currentQueueSize":0,
"currentTaskCount":3,
"rollingCountThreadsExecuted":0,
"rollingMaxActiveThreads":0,
"rollingCountCommandRejections":0,
"propertyValue_queueSizeRejectionThreshold":5,
"propertyValue_metricsRollingStatisticalWindowInMilliseconds":10000,
"reportingHosts":1
}
Any idea why the dashboard is not able to connect to the stream when deployed on Bluemix? Any help is appreciated.
I have the exact same problem trying to run on Bluemix; it also runs fine locally using Spring Tool Suite. Has there been a resolution to this problem?
My situation:
I used the Spring Initializr to create a Spring Cloud application (Eureka, Hystrix, REST controller). I have deployed this to Bluemix (Cloud Foundry). Everything works fine except the Hystrix dashboard; I get 'Unable to connect to Command Metric Stream.' on the dashboard.
I can curl the stream URL; it takes a really long time, but data does come back.
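For reference, checking the stream by hand looks something like this (the host is a placeholder for your Bluemix app route, assuming the Hystrix servlet is mapped at /hystrix.stream):
curl -H "Accept: text/event-stream" https://<your-app>.mybluemix.net/hystrix.stream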