I have written a small application running OrientDB embedded. It works well: I can read and write to the database from the application using a plocal connection.
Now I am trying to access the same database from a remote OrientDB client (from another PC).
I am getting an error message telling me that the database is locked and can't be accessed.
Is there a workaround for this, or am I doing something wrong?
Using Java and OrientDB 2.2.12.
You can try this code for the connection:
private static final String dbUrl = "remote:localhost/databaseName";
private static final String dbUser = "admin";
private static final String dbPassword = "admin";

public static void createDBIfDoesNotExist() throws IOException {
    // Connect to the server (not to a database) with the server credentials
    OServerAdmin server = new OServerAdmin(dbUrl).connect(dbUser, dbPassword);
    if (!server.existsDatabase("plocal")) {
        // Create a graph database using plocal storage
        server.createDatabase("graph", "plocal");
    }
    server.close();
}

public static void connectToDBIfExists() throws IOException {
    OServerAdmin server = new OServerAdmin(dbUrl).connect(dbUser, dbPassword);
    // some code
    server.close();
}
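Note that a plocal database is locked by the process that opened it, so a second process cannot open the same files, and the remote URL above only works if an OrientDB server is actually running on that host. One workaround is to start an embedded OServer inside your application, so that both your own code and remote clients go through the same server process. A minimal sketch, assuming a standard orientdb-server-config.xml (the config path below is an assumption):

import com.orientechnologies.orient.server.OServer;
import com.orientechnologies.orient.server.OServerMain;

public class EmbeddedOrientDbServer {
    public static void main(String[] args) throws Exception {
        // Start an embedded OrientDB server inside this JVM
        OServer server = OServerMain.create();
        server.startup(new java.io.File("config/orientdb-server-config.xml"));
        server.activate();

        // The application can now open the database through this server,
        // e.g. with the URL "remote:localhost/databaseName", and so can
        // clients on other machines.

        // Call server.shutdown() when the application exits.
    }
}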
Here is my logstash.conf file. (Apologies for not pasting the code here directly; StackOverflow does not allow posts exceeding a certain code-to-text ratio.)
My remote VM, which also hosts my ElasticSearch and LogStash servers, listens on port 8080.
On my local machine, I periodically send zipped folders (containing JSON documents) over TCP to my remote server, which receives the data into a memory stream, unzips the folders, and sends the contents to LogStash. LogStash in turn forwards the data to ElasticSearch.
I am currently testing the workflow with some dummy data.
On my remote server, here is the method for receiving data over TCP:
private static void ReceiveAndUnzipElasticSearchDocumentFolder(int numBytesExpectedToReceive)
{
    int numBytesLeftToReceive = numBytesExpectedToReceive;
    using (MemoryStream zippedFolderStream = new MemoryStream(new byte[numBytesExpectedToReceive]))
    {
        while (numBytesLeftToReceive > 0)
        {
            // Receive data in small packets
        }
        zippedFolderStream.Unzip(afterReadingEachDocument: LogStashDataSender.Send);
    }
}
Here is the code for unzipping the received folder:
public static class StreamExtensions
{
    public static void Unzip(this Stream zippedElasticSearchDocumentFolderStream, Action<ElasticSearchJsonDocument> afterReadingEachDocument)
    {
        JsonSerializer jsonSerializer = new JsonSerializer();
        foreach (ZipArchiveEntry entry in new ZipArchive(zippedElasticSearchDocumentFolderStream).Entries)
        {
            using (JsonTextReader jsonReader = new JsonTextReader(new StreamReader(entry.Open())))
            {
                dynamic jsonObject = jsonSerializer.Deserialize<ExpandoObject>(jsonReader);

                string jsonIndexId = jsonObject.IndexId;
                string jsonDocumentId = jsonObject.DocumentId;

                afterReadingEachDocument(new ElasticSearchJsonDocument(jsonObject, jsonIndexId, jsonDocumentId));
            }
        }
    }
}
And here is the method for sending data to LogStash:
public static async void Send(ElasticSearchJsonDocument document)
{
    HttpResponseMessage response =
        await httpClient.PutAsJsonAsync(
            IsNullOrWhiteSpace(document.DocumentId)
                ? $"{document.IndexId}"
                : $"{document.IndexId}/{document.DocumentId}",
            document.JsonObject);

    try
    {
        response.EnsureSuccessStatusCode();
    }
    catch (Exception exception)
    {
        Console.WriteLine(exception.Message);
    }

    Console.WriteLine($"{response.Content}");
}
The httpClient referenced in the public static async void Send(ElasticSearchJsonDocument document) method was created using the following code:
private const string LogStashHostAddress = "http://127.0.0.1";
private const int LogStashPort = 31311;
httpClient = new HttpClient { BaseAddress = new Uri($"{LogStashHostAddress}:{LogStashPort}/") };
httpClient.DefaultRequestHeaders.Accept.Clear();
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
When I run a new debug instance, the program runs smoothly but dies immediately after executing await httpClient.PutAsJsonAsync for each of the documents contained inside the zipped folder: response.EnsureSuccessStatusCode(); is never hit, and neither is Console.WriteLine(exception.Message); nor Console.WriteLine($"{response.Content}");.
Here is an example of an ElasticSearchJsonDocument that is passed to the public static async void Send(ElasticSearchJsonDocument document) method:
When I ran the same PUT request using cURL, the Book index was successfully created, and I could then issue a GET request to retrieve the data from ElasticSearch.
My questions are:
Why did the program die immediately (with no visible exception messages) after executing await httpClient.PutAsJsonAsync(...) for each of the JSON documents inside the received zipped folder?
What changes should I make to ensure that I can make successful PUT requests to LogStash using an HttpClient instance?
I changed my httpClient instantiation code from
httpClient = new HttpClient { BaseAddress = new Uri($"{LogStashHostAddress}:{LogStashPort}/") };
httpClient.DefaultRequestHeaders.Accept.Clear();
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
to
httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Accept.Clear();
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
And I changed await httpClient.PutAsJsonAsync(...) to
HttpResponseMessage response =
    await httpClient.PutAsJsonAsync(
        IsNullOrWhiteSpace(document.DocumentId)
            ? $"{LogStashHostAddress}:{LogStashPort}/{document.IndexId}"
            : $"{LogStashHostAddress}:{LogStashPort}/{document.IndexId}/{document.DocumentId}",
        document.JsonObject);
response.EnsureSuccessStatusCode();
It turns out that the BaseAddress field in HttpClient is extremely user-unfriendly (for relative URIs to combine as you would expect, the base address must end with a trailing slash and the relative URI must not start with one), so instead of wasting more time on it, I decided to eliminate it entirely.
I'm having trouble setting up a connection with my database here and was wondering what I may be doing wrong.
The error is as follows:
Exception in thread "main" java.sql.SQLException: No suitable driver found for jdbc:postgresql://168.16.1.128:5432/dbname
    at java.sql.DriverManager.getConnection(DriverManager.java:689)
    at java.sql.DriverManager.getConnection(DriverManager.java:270)
    at sample.DbConnect.getConnection(DbConnect.java:21)
    at sample.UserTest.main(UserTest.java:41)
My connection class looks as follows:
public class DbConnect {
    public java.sql.Connection getConnection() throws SQLException, IllegalAccessException, ClassNotFoundException {
        java.sql.Connection conn = null;
        String url = "jdbc:postgresql://168.16.1.128:5432/dbname";
        conn = DriverManager.getConnection(url);
        System.out.println("Connected to database");
        return conn;
    }
}
and here's where it gets called:
public class UserTest {
    public static void main(String[] args) throws JSONException, SQLException, InstantiationException, IllegalAccessException, ClassNotFoundException {
        DbConnect db = new DbConnect();
        db.getConnection();
    }
}
I have a feeling the error may come from the way the URL is written. If that is the case, can someone please explain to me how to write the URL properly?
This database doesn't require a username and password to connect. I hope someone can be so kind as to help me.
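For reference, from what I've read, "No suitable driver found" usually means the driver jar is not on the runtime classpath rather than that the URL is malformed. Below is a minimal sketch of my understanding of the URL format and driver registration (this assumes the PostgreSQL JDBC driver jar is on the classpath; with a JDBC 4 driver the Class.forName call should not even be needed):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DbConnectSketch {
    public static Connection open() throws SQLException, ClassNotFoundException {
        // Force driver registration; only needed for pre-JDBC-4 driver jars,
        // but harmless, and it fails fast if the jar is missing.
        Class.forName("org.postgresql.Driver");
        // Format: jdbc:postgresql://<host>:<port>/<database>
        return DriverManager.getConnection("jdbc:postgresql://168.16.1.128:5432/dbname");
    }
}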
Thanks!
Background
I have a remote hosted server that's running a Java VM with custom server code for a multiplayer real-time quiz game. The server deals with matchmaking, rooms, lobbies, etc. I'm also using a MongoDB instance in the same space, which holds all the questions for the mobile phone quiz game.
This is my first attempt at such a project, and although I'm competent in Java, my Mongo skills are novice at best.
Client Singleton
My server contains a static singleton of the Mongo client:
public class ClientSingleton
{
    // volatile is required for double-checked locking to be safe
    private static volatile ClientSingleton uniqueInstance;

    // The MongoClient class is designed to be thread safe and shared among threads.
    // We create only 1 instance for our given database cluster and use it across
    // our application.
    private MongoClient mongoClient;
    private MongoClientOptions options;
    private MongoCredential credential;

    private final String password = "xxxxxxxxxxxxxx";
    private final String host = "xx.xx.xx.xx";
    private final int port = 38180;

    private ClientSingleton()
    {
        // Setup client credentials for DB connection (user, db name & password)
        credential = MongoCredential.createCredential("XXXXXX", "DBName", password.toCharArray());

        options = MongoClientOptions.builder()
                .connectTimeout(25000)
                .socketTimeout(60000)
                .connectionsPerHost(100)
                .threadsAllowedToBlockForConnectionMultiplier(5)
                .build();

        try
        {
            // Create client (server address(host,port), credential, options)
            mongoClient = new MongoClient(new ServerAddress(host, port),
                    Collections.singletonList(credential),
                    options);
        }
        catch (UnknownHostException e)
        {
            e.printStackTrace();
        }
    }

    /**
     * Double-checked locking to initialise our client singleton class.
     */
    public static ClientSingleton getInstance()
    {
        if (uniqueInstance == null)
        {
            synchronized (ClientSingleton.class)
            {
                if (uniqueInstance == null)
                {
                    uniqueInstance = new ClientSingleton();
                }
            }
        }
        return uniqueInstance;
    }

    /**
     * @return our mongo client
     */
    public MongoClient getClient() {
        return mongoClient;
    }
}
Notes here:
The Mongo client is new to me, and I understand that failure to properly utilise connection pooling is one major "gotcha" that greatly impacts MongoDB performance. I also understand that creating new connections to the db is expensive, and I should try to re-use existing connections.
I've not left the socket timeout and connect timeout at their defaults (i.e. infinite); if the connection hangs for some reason, I think it would otherwise get stuck forever!
I set the number of milliseconds the driver will wait before a connection attempt is aborted; for connections made through a Platform-as-a-Service (where the server is hosted), a higher timeout (e.g. 25 seconds) is advised. I also set the number of milliseconds the driver will wait for a response from the server for all types of requests (queries, writes, commands, authentication, etc.). Finally, I set threadsAllowedToBlockForConnectionMultiplier to 5, i.e. up to 500 connection requests can wait, in a FIFO queue, for their turn on the db.
Server Zone
The zone gets a game request from the client and receives the metadata string for the quiz type, in this case "Episode 3". The zone creates a room for the user, or allows the user to join a room with that property.
Server Room
The room then establishes a db connection to the Mongo collection for the quiz type:
// Get client & collection
mongoDatabase = ClientSingleton.getInstance().getClient().getDB("DBName");
mongoColl = mongoDatabase.getCollection("GOT");
// Query mongo db with meta data string request
queryMetaTags("Episode 3");
Notes here:
Following a game, or rather after a room idle timeout, the room gets destroyed; this idle time is currently set to 60 minutes. I believe that if connectionsPerHost is set to 100, then an idle room would be holding on to valuable connection resources.
Question
Is this a good way to manage my client connections?
If I have several hundred concurrently connected games, each accessing the db to pull the questions, should I perhaps free up the client connection after that request so other rooms can use it? How should this be done? I'm concerned about possible bottlenecks here!
Mongo Query FYI
// Query our collection documents metaTag elements for a matching string
// @SuppressWarnings("deprecation")
public void queryMetaTags(String query)
{
    // Query to search all documents in current collection
    List<String> continentList = Arrays.asList(new String[]{query});

    DBObject matchFields = new BasicDBObject("season.questions.questionEntry.metaTags",
            new BasicDBObject("$in", continentList));
    DBObject groupFields = new BasicDBObject("_id", "$_id").append("questions",
            new BasicDBObject("$push", "$season.questions"));
    //DBObject unwindshow = new BasicDBObject("$unwind", "$show");
    DBObject unwindsea = new BasicDBObject("$unwind", "$season");
    DBObject unwindepi = new BasicDBObject("$unwind", "$season.questions");
    DBObject match = new BasicDBObject("$match", matchFields);
    DBObject group = new BasicDBObject("$group", groupFields);

    @SuppressWarnings("deprecation")
    AggregationOutput output = mongoColl.aggregate(unwindsea, unwindepi, match, group);

    String jsonString = null;
    JSONObject jsonObject = null;
    JSONArray jsonArray = null;
    ArrayList<JSONObject> ourResultsArray = new ArrayList<JSONObject>();

    // Loop for each document in our collection
    for (DBObject result : output.results())
    {
        try
        {
            // Parse our results so we can add them to an ArrayList
            jsonString = JSON.serialize(result);
            jsonObject = new JSONObject(jsonString);
            jsonArray = jsonObject.getJSONArray("questions");
            for (int i = 0; i < jsonArray.length(); i++)
            {
                // Put each of our returned questionEntry elements into an ArrayList
                ourResultsArray.add(jsonArray.getJSONObject(i));
            }
        }
        catch (JSONException e1)
        {
            e1.printStackTrace();
        }
    }
    pullOut10Questions(ourResultsArray);
}
The way I've done this is to use Spring to create a MongoClient Bean. You can then autowire this bean wherever it is needed.
For example:
MongoConfig.java
import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import com.tescobank.insurance.telematics.data.connector.config.DatabaseProperties;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.net.UnknownHostException;

@Configuration
public class MongoConfig {

    private @Autowired DatabaseProperties properties;

    @Bean
    public MongoClient fooClient() throws UnknownHostException {
        return mongo(properties.getFooDatabaseURI());
    }

    // Helper to build a client from a URI string
    private MongoClient mongo(String databaseURI) throws UnknownHostException {
        return new MongoClient(new MongoClientURI(databaseURI));
    }
}
Class requiring a MongoDB connection:
@Component
public class DatabaseUser {

    private MongoClient mongoClient;

    ....

    @Autowired
    public DatabaseUser(MongoClient mongoClient) {
        this.mongoClient = mongoClient;
    }
}
Spring will then create the connection and wire it where required. What you've done seems very complex and perhaps tries to recreate functionality you would get for free by using a tried and tested framework such as Spring. I'd also generally try to avoid singletons where I can. I've had no performance issues using MongoDB connections like this.
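Note also that MongoClient maintains its own internal connection pool, so the handles returned by getDB()/getCollection() are lightweight and an idle room holding one is not pinning a socket. As a rough sketch of how a room might fetch its collection through the autowired client (this repository class and its names are my own illustration, not code from your question; it uses the same legacy DB/DBCollection API):

import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class QuestionRepository {

    private final MongoClient mongoClient;

    @Autowired
    public QuestionRepository(MongoClient mongoClient) {
        this.mongoClient = mongoClient;
    }

    public DBCollection questions() {
        // getDB()/getCollection() return lightweight handles; the client's
        // internal pool manages the actual sockets per operation.
        DB db = mongoClient.getDB("DBName");
        return db.getCollection("GOT");
    }
}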
I'm using google-api-java-client to access the Google Cloud Storage API. It works fine from my local machine and from some servers, but it fails with a 400 Bad Request (error: invalid_grant) on the production server. I had a similar problem with the Google AdWords API, which was solved by the Google API AdWords support; I think they just whitelisted the IPs of the servers I used.
Is it possible that I need to add permissions in the Google Cloud Storage API? That would be strange, because it works from my local PC.
public class StorageObj {

    private static final String KEY_FILE_NAME = "key.p12";
    private static final String APPLICATION_NAME = "application_name";
    private static final String BUCKET_NAME = "bucket_name";
    private static final String SERVICE_ACCOUNT_ACCOUNT_EMAIL = "service_account_email";
    private static final String STORAGE_OAUTH_SCOPE = "https://www.googleapis.com/auth/devstorage.read_write";
    private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();

    private final Storage storageObject;

    public StorageObj() throws GeneralSecurityException, IOException {
        HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
        GoogleCredential credential = new GoogleCredential.Builder()
                .setTransport(httpTransport)
                .setJsonFactory(JSON_FACTORY)
                .setServiceAccountId(SERVICE_ACCOUNT_ACCOUNT_EMAIL)
                .setServiceAccountScopes(Collections.singleton(STORAGE_OAUTH_SCOPE))
                .setServiceAccountPrivateKeyFromP12File(new File(KEY_FILE_NAME))
                .build();
        this.storageObject = new Storage.Builder(httpTransport, JSON_FACTORY, credential)
                .setApplicationName(APPLICATION_NAME)
                .build();
    }

    public void insert(String folder, String file) throws IOException {
        try (FileInputStream inputStream = new FileInputStream(new File(folder + file))) {
            InputStreamContent mediaContent = new InputStreamContent("application/octet-stream", inputStream);
            mediaContent.setLength(inputStream.available());
            Storage.Objects.Insert insertObject = storageObject.objects()
                    .insert(BUCKET_NAME, null /* obj-meta-data */, mediaContent)
                    .setName(file);
            int _2MB = 2 * 1000 * 1000;
            if (mediaContent.getLength() > 0 && mediaContent.getLength() <= _2MB) {
                insertObject.getMediaHttpUploader().setDirectUploadEnabled(true);
            }
            insertObject.execute();
        }
    }

    public static void main(String[] args) throws IOException, GeneralSecurityException {
        new StorageObj().insert(args[0], args[1]);
    }
}
My first guess is that your service account might not have access to the cloud bucket's ACL: you might have the proper ACL for certain buckets but not for others.
Also, when you say "local", do you mean the devserver? Google Cloud Storage behaves differently on the devserver, so this might be explainable just by how the devserver works.
In my answer I also assumed that by "servers" you meant different buckets on Google Cloud Storage.
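If you want to verify what the service account can actually see, you could list a bucket's ACL with the same generated Storage client you already build in your constructor. A rough sketch (the bucket name is a placeholder):

import com.google.api.services.storage.Storage;
import com.google.api.services.storage.model.BucketAccessControl;
import java.io.IOException;
import java.util.List;

public class AclCheck {
    // List the ACL entries on a bucket to see whether the service account's
    // entity appears with the role you expect.
    public static void printBucketAcl(Storage storage, String bucketName) throws IOException {
        List<BucketAccessControl> items = storage.bucketAccessControls()
                .list(bucketName)
                .execute()
                .getItems();
        if (items != null) {
            for (BucketAccessControl acl : items) {
                System.out.println(acl.getEntity() + " -> " + acl.getRole());
            }
        }
    }
}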
I'm new to web services. I'm trying to generate a unique session ID for every login a user performs, in web services.
What I thought of doing is:
1. Write a Java file which has the login and logout methods.
2. Generate a WSDL file for it.
3. Generate the web service client (using the Eclipse IDE) from the WSDL file I generated.
4. Use the generated package (client stub) and call the methods.
Please let me know if there are any flaws in my way of implementation.
1. Java file with the needed methods
private final Map<String, String> userSessionHashMap = new HashMap<String, String>();

public String login(String userID, String password) {
    String sid = null;
    if (userID.equalsIgnoreCase("sadmin")
            && password.equalsIgnoreCase("sadmin")) {
        System.out.println("Valid user");
        sid = generateUUID(userID);
    } else {
        System.out.println("Auth failed");
    }
    return sid;
}

private String generateUUID(String userID) {
    // Create a random session id and remember it for this user
    String sid = UUID.randomUUID().toString();
    userSessionHashMap.put(userID, sid);
    return sid;
}

public void logout(String userID) {
    // Remove the user's session entry if present
    userSessionHashMap.remove(userID);
}
2. Generated WSDL file
3. Developed the web service client from the WSDL.
4. Using the developed client stub.
public static void main(String[] args) throws Exception {
    ClientWebServiceLogin objClientWebServiceLogin = new ClientWebServiceLogin();
    objClientWebServiceLogin.invokeLogin();
}

public void invokeLogin() throws Exception {
    String endpoint = "http://schemas.xmlsoap.org/wsdl/";
    String username = "sadmin";
    String password = "sadmin";
    String targetNamespace = "http://WebServiceLogin";
    try {
        WebServiceLoginLocator objWebServiceLoginLocator = new WebServiceLoginLocator();
        java.net.URL url = new java.net.URL(endpoint);

        Iterator ports = objWebServiceLoginLocator.getPorts();
        while (ports.hasNext())
            System.out.println("ports Iterator size-->" + ports.next());

        WebServiceLoginPortType objWebServiceLoginPortType =
                objWebServiceLoginLocator.getWebServiceLoginHttpSoap11Endpoint();
        String sid = objWebServiceLoginPortType.login(username, password);
        System.out.println("sid--->" + sid);
    } catch (Exception exception) {
        System.out.println("AxisFault at creating objWebServiceLoginStub" + exception);
        exception.printStackTrace();
    }
}
On running this file, I get the following error.
AxisFault
faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException
faultSubcode:
faultString: java.net.ConnectException: Connection refused: connect
faultActor:
faultNode:
faultDetail:
{http://xml.apache.org/axis/}stackTrace:java.net.ConnectException: Connection refused: connect
Can anyone suggest an alternate way of handling this task? And what could be the reason for this error?
Web services are supposed to be stateless, so having "login" and "logout" web service methods doesn't make much sense.
If you want to secure web service calls, unfortunately you have to code security into every call. In your case, this means passing the userID and password to every method.
Or consider adding a custom handler for security. Read more about handlers here.
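For illustration, here is a rough sketch of the shape such a handler could take with JAX-WS (Axis has its own handler mechanism, so treat this purely as an outline; the credential extraction and validation are left out):

import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

public class SecurityHandler implements SOAPHandler<SOAPMessageContext> {

    @Override
    public boolean handleMessage(SOAPMessageContext context) {
        Boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (Boolean.FALSE.equals(outbound)) {
            // Inbound request: inspect the SOAP header for credentials here
            // and return false (or throw) to reject unauthenticated calls.
        }
        return true; // continue processing the message
    }

    @Override
    public boolean handleFault(SOAPMessageContext context) {
        return true;
    }

    @Override
    public void close(MessageContext context) {
    }

    @Override
    public Set<QName> getHeaders() {
        return null; // no header QNames to declare for this sketch
    }
}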