How to connect to an OrientDB cluster with pyorient in Python?
My OrientDB cluster has two hosts on one server; the hosts' ports are 2425 and 2426.
I do not know if it will be useful to you, but at this link there is an example of using OrientDB with Python.
You can use two clients, one for each cluster.
import pyorient

client1 = pyorient.OrientDB("localhost", 2425)
# I do not know why yet, but you need to connect as root first.
session1_id = client1.connect('root', root_password)
cluster1_info = client1.db_open(db_name, db_username, user_password)
Then your other client
client2 = pyorient.OrientDB("localhost", 2426)
session2_id = client2.connect('root', root_password)
cluster2_info = client2.db_open(db_name, db_username, user_password)
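Putting the two snippets together, here is a minimal self-contained sketch; the port numbers match the ones above, while db_name, the usernames, and the passwords are placeholders you would replace with your own values:

import pyorient

def open_host(port, root_password, db_name, db_username, user_password):
    # One client per host; connect as root first, then open the database.
    client = pyorient.OrientDB("localhost", port)
    client.connect('root', root_password)
    client.db_open(db_name, db_username, user_password)
    return client

client1 = open_host(2425, 'root_pwd', 'mydb', 'admin', 'admin_pwd')
client2 = open_host(2426, 'root_pwd', 'mydb', 'admin', 'admin_pwd')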
Hope this will help ;)
I have set up an Arango instance on Kubernetes nodes, which were installed on a VM, as mentioned in the ArangoDB docs ArangoDB on Kubernetes. Keep in mind, I skipped the ArangoLocalStorage and ArangoDeploymentReplication steps. I can see 3 pods each for agents, coordinators, and dbservers in kubectl get pods.
The arango-cluster-ea service, however, shows the external IP as pending. I can use the master node's IP address and the service port to access the Web UI, connect to the DB, and make changes. But I am not able to access the Arango shell, nor am I able to use my Python code to connect to the DB. I am using the master node IP and the service port shown for arango-cluster-ea in services to try to make the Python code connect to the DB. Similarly, for arangosh, I am trying the command:
kubectl exec -it *arango-cluster-crdn-pod-name* -- arangosh --server.endpoint tcp://masternodeIP:8529
In the case of Python, since the Connection class call is in a try block, it goes to the except block. In the case of arangosh, it opens the Arango shell with the error:
Cannot connect to tcp://masternodeIP:port
thus not connecting to the DB.
Any leads about this would be appreciated.
Posting this community wiki answer to point to the GitHub issue where this question was resolved.
Feel free to edit/expand.
Link to GitHub:
Github.com: Arangodb: Kube-arangodb: Issues: 734
Here's how my issue got resolved:
To connect with arangosh, what worked for me was to use ssl:// instead of tcp:// with the localhost:8529 ip-port combination in --server.endpoint. Here's the command that worked:
kubectl exec -it *arango-cluster-crdn-pod-name* -- arangosh --server.endpoint ssl://localhost:8529
For the web browser, since my external access was based on the NodePort service type, I put in the master node's IP and the 30000-range port number that was generated (in my case, it was 31200).
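If you're unsure which NodePort was assigned, something like this should show it (service name as above):

kubectl get svc arango-cluster-ea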
For Python, in the case of PyArango's Connection class, it worked when I used the arango-cluster-ea service. I used the following line for the connection call:
conn = Connection(arangoURL='https://arango-cluster-ea:8529', verify=False, username='root', password='XXXXX')
The verify=False flag is important; it tells PyArango to skip SSL certificate validation, which would otherwise throw an error again.
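Presumably, the same call from outside the cluster would target the master node IP and the NodePort instead; a hypothetical variant (the IP and port below are placeholders):

conn = Connection(arangoURL='https://masternodeIP:31200', verify=False, username='root', password='XXXXX')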
Hopefully this helps anybody else who faces a similar issue.
I've tested the following solution and managed to successfully connect to the database via:
arangosh from localhost:
Connected to ArangoDB 'http+ssl://localhost:8529, version: 3.7.12 [SINGLE, server], database: '_system', username: 'root'
Python code
from pyArango.connection import *
conn = Connection(arangoURL='https://ABCD:8529', username="root", password="password", verify=False)
db = conn.createDatabase(name="school")
Additional resources:
Arangodb.com: Tutorials: Tutorial Python
Arangodb.com: Docs: Stable: Tutorials Kubernetes
I need to read Kafka messages with .NET from an external server. As the first step, I installed Kafka on my local machine and then wrote the .NET code. It worked as intended. Then I moved to the cloud, but the code did not work. Here is the setup that I have.
I have a Kafka Server deployed on a Windows VM (VM1: 10.0.0.4) on Azure. It is up and running. I have created a test topic and produced some messages with cmd. To test that everything is working I have opened a consumer with cmd and received the generated messages.
Then I have deployed another Windows VM (VM2, 10.0.0.5) with Visual Studio. Both of the VMs are deployed on the same virtual network so that I do not have to worry about opening ports or any other network configuration.
Then I copied my Visual Studio project code and changed the IP address of the bootstrap server to point to the Kafka server. It did not work. Then I read that I have to change the server configuration of Kafka, so I opened server.properties and modified the listeners property to listeners=PLAINTEXT://10.0.0.4:9092. It still does not work.
I have searched online and tried many of the tips, but it does not work. I think that, first of all, I need to provide credentials to the external server (VM1), and probably some other configuration. Unfortunately, the official Confluent documentation is very short, with very few examples. There is also no example for my case in the official GitHub repo. I have played with the "Sasl" properties in the ConsumerConfig class, but with no success.
The error message is:
%3|1622220986.498|FAIL|rdkafka#consumer-1| [thrd:10.0.0.4:9092/bootstrap]: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 1/1 brokers are down
Here is my .NET Core code:
// requires: using System; using System.Threading; using Confluent.Kafka;
static void Main(string[] args)
{
    string topic = "AzureTopic";
    var config = new ConsumerConfig
    {
        BootstrapServers = "10.0.0.4:9092",
        GroupId = "test",
        //SecurityProtocol = SecurityProtocol.SaslPlaintext,
        //SaslMechanism = SaslMechanism.Plain,
        //SaslUsername = "[User]",
        //SaslPassword = "[Password]",
        AutoOffsetReset = AutoOffsetReset.Latest,
        //EnableAutoCommit = false
    };
    using (var consumer = new ConsumerBuilder<Ignore, string>(config)
        .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
        .Build())
    {
        consumer.Subscribe(topic);
        var cancelToken = new CancellationTokenSource();
        while (true)
        {
            // some tasks
        }
        consumer.Close();
    }
}
If you set listeners to a hard-coded IP, the broker will only bind to that IP and accept traffic addressed to it.
And your listener isn't defined as SASL, so I'm not sure why you've tried using that in the client. While using credentials is strongly encouraged when sending data to cloud resources, they're not required to fix a network connectivity problem. You definitely shouldn't send credentials over plaintext, however.
Start with these settings
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.4:9092
That alone should work within the VMs' shared network. You can use the console tools included with Kafka to test it, for example as shown below.
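A quick smoke test from VM2 (the broker address and topic match the question; the path to the Kafka scripts depends on where you installed it):

bin\windows\kafka-console-consumer.bat --bootstrap-server 10.0.0.4:9092 --topic AzureTopic --from-beginning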
And if that still doesn't work from your local client, then it's because 10.0.0.0/8 address space is considered a private network and you must advertise the VM's public IP and allow TCP traffic on port 9092 through Azure Firewall. It'd also make sense to expose multiple listeners for internal Azure network and external, forwarded network traffic
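A sketch of what that multi-listener setup could look like in server.properties (the listener names and the <public-ip> placeholder are illustrative, not required values):

listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://10.0.0.4:9092,EXTERNAL://<public-ip>:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL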
Details here discuss AWS and Docker, but the basics still apply
Overall, I think it'd be easier to set up Azure Event Hubs with Kafka support.
I am using Vertx Redis client from the package io.vertx.rxjava.redis.RedisClient to connect to Elasticache Redis.
It does connect but shows an error,
io.vertx.redis.client.impl.types.ErrorType: MOVED 4985 xxx.xxx.xxx.xxx:63791
After reading about the error, I found it's because the cluster is sharded and the client isn't able to connect to all of the shards.
From the library, I am not able to figure out which method to use to connect in cluster mode.
Here is an example of how to connect and send a SET command in cluster mode.
Define options:
final RedisOptions options = new RedisOptions()
.setType(RedisClientType.CLUSTER)
.setUseSlave(RedisSlaves.SHARE)
.setMaxWaitingHandlers(128 * 1024)
.addEndpoint("redis://127.0.0.1:7000")
.addEndpoint("redis://127.0.0.1:7001")
.addEndpoint("redis://127.0.0.1:7002")
.addEndpoint("redis://127.0.0.1:7003")
.addEndpoint("redis://127.0.0.1:7004")
.addEndpoint("redis://127.0.0.1:7005");
Connect and send command:
// cmd and SET are static imports from io.vertx.redis.client.Request and io.vertx.redis.client.Command
Redis.createClient(vertx, options).connect(onCreate -> {
    final Redis cluster = onCreate.result();
    cluster.send(cmd(SET).arg("key").arg("value"), set -> {
        System.out.println(set.result());
    });
});
Tip: If you are unsure how to use a library, or its documentation is not clear enough, you can always check out the project's tests, if it has them. You can see how things are implemented and borrow examples from there.
It seems to be a common and safer practice to host the database separately from Meteor apps. That is to say, have an EC2 instance for your Meteor app, and an EC2 instance for your MongoDB, and make them talk to one another.
From what I understand, people do this because it's more secure, and it allows them to deploy newer versions of their app without touching the database.
I'd like to do this with Amazon EC2 alone, as opposed to using another 3rd party service, like Compose.io.
How can I host a Meteor app and its database separately on two EC2 instances, and have them communicate with one another?
It is common practice, and people mostly do it because it offers you the ability to scale them both independently.
As to the how, you'll want to obviously configure each of your Amazon EC2 instances, installing meteor on one, and MongoDB on the other. You'll also need to configure your VPC (Amazon Virtual Private Cloud) so that your MongoDB instance accepts incoming connections on whatever port you specify (default is 27017), so that your Meteor Application can connect.
After that it's just a matter of telling your Meteor app where to go to get the database connection. The most secure way of doing this is to set a few environment variables, named e.g. MONGODBSERVER, MONGODBPORT, DBUSER, and DBPASSWORD, as in the example below.
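For instance, on the Meteor instance (placeholder values; the names are just the ones used in the code that follows):

export MONGODBSERVER=10.0.0.12
export MONGODBPORT=27017
export DBUSER=appuser
export DBPASSWORD=secret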
You'll then want to read those variables in your server-side Meteor code, using something like:
Meteor.startup(function() {
var DbUser = process.env.DBUSER;
var DbPassword = process.env.DBPASSWORD;
var MongoDBServer = process.env.MONGODBSERVER;
var MongoDBPort = process.env.MONGODBPORT;
});
And if you're using the native MongoDB Driver, connecting becomes trivial:
var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://' + DbUser + ':' + DbPassword + '@' + MongoDBServer + ':' + MongoDBPort + '/databasename', function(err, db) {
...
});
Then it's just a matter of constructing your Mongo models using something like:
Temperatures = new Mongo.Collection('temperatures');
Temperatures._ensureIndex({temp: 1, time: 1});
And then taking action on those models in regard to the database:
Temperatures.insert({temp: ftemp, time: Math.floor(Date.now() / 1000)});
I'll also mention that http://modulus.io is a really decent Meteor hosting solution. I'd recommend them, unless you are stuck on using Amazon EC2 instances, which is fine, but more complicated for a simple application.
You need to set the MONGO_URL environment variable to point at wherever Mongo is hosted:
MONGO_URL
mongodb://<user>:<password>@hostingproviderurl:port/xxx?autoReconnect=true&connectTimeoutMS=60000
The correct mongodb:// URL string will be provided by your MongoDB hosting provider.
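For example, when starting the app (made-up credentials and host):

MONGO_URL='mongodb://appuser:secret@ds12345.examplehost.com:27017/mydb?autoReconnect=true&connectTimeoutMS=60000' meteor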
I currently have my app running on my local machine, in Boot.scala I have:
MongoDB.defineDb(
DefaultMongoIdentifier,
MongoAddress(MongoHost("127.0.0.1", 27017), "platform")
)
I've successfully deployed the app to a cloud provider, and am in the process of setting up a database at mongohq.com.
What would I need to change to enable the app to connect? I've taken a look here:
https://www.assembla.com/wiki/show/liftweb/Mongo_Configuration
But I am a little confused by the connection details provided by MongoHQ; all they provide is:
Mongo URI
mongodb://<user>:<password>@<host>:<port>/<my_account_name>
Thanks in advance for any help, much appreciated :)
I am not familiar with MongoHQ in particular, but you should be able to put something in Boot like this:
MongoDB.defineDbAuth(
  DefaultMongoIdentifier,
  new Mongo(new ServerAddress("<host>", <port>)),
  "<my_account_name>",
  "<user>",
  "<pass>"
)
Where the <*> placeholders are the corresponding parts of the connection URI that were provided to you when you signed up for MongoHQ. For example, if the URI were mongodb://user123:s3cret@flame.mongohq.com:27087/myaccount (made-up values), <host> would be flame.mongohq.com, <port> would be 27087, <my_account_name> would be myaccount, <user> would be user123, and <pass> would be s3cret.