DynamoDB - Eclipse Error - Titan

Firstly, I am using:
Titan Graph 1.0.0
Backend storage = DynamoDB Local
I am deleting a graph created and stored on my local PC.
My code is below:
import org.apache.commons.configuration.BaseConfiguration;
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;
import com.thinkaurelius.titan.core.util.TitanCleanup;

public class deleteGraph {
    public static void main(String[] args) {
        BaseConfiguration conf = new BaseConfiguration();
        conf.setProperty("storage.backend", "com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager");
        conf.setProperty("storage.dynamodb.client.endpoint", "http://localhost:4567");
        conf.setProperty("storage.dynamodb.enable-parallel-scan", "true");
        conf.setProperty("ids.flush", "false");
        conf.setProperty("faunus.output.titan.storage.index.search.backend", "elasticsearch");
        conf.setProperty("faunus.graph.output.titan.storage.index.search.hostname", "/tmp/searchindex");
        conf.setProperty("faunus.graph.output.titan.storage.index.search.elasticsearch.client-only", "false");
        conf.setProperty("faunus.graph.output.titan.storage.index.search.elasticsearch.local-mode", "true");

        TitanGraph graph = TitanFactory.open(conf);
        graph.close();               // the graph must be closed before it can be cleared
        TitanCleanup.clear(graph);   // drops all data/tables belonging to this graph
        System.out.println("graph delete");
        System.exit(0);
    }
}
When I run this code, it throws the following error:
Exception in thread "main" com.thinkaurelius.titan.core.TitanException: Could not initialize backend
at com.thinkaurelius.titan.diskstorage.Backend.initialize(Backend.java:301)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1806)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:123)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:74)
at deleteGraph.main(deleteGraph.java:56)
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: CreateTable_titan_graphindex Cannot increase provisioned throughput to more than 80,000 units per account (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 34d7515c-d628-4224-bca3-b75acb936c71)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.processDynamoDBAPIException(DynamoDBDelegate.java:215)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.createTable(DynamoDBDelegate.java:702)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.createTableAndWaitForActive(DynamoDBDelegate.java:838)
at com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore.ensureStore(AbstractDynamoDBStore.java:92)
at com.amazon.titan.diskstorage.dynamodb.MetricStore.ensureStore(MetricStore.java:47)
at com.amazon.titan.diskstorage.dynamodb.TableNameDynamoDBStoreFactory.create(TableNameDynamoDBStoreFactory.java:52)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.openDatabase(DynamoDBStoreManager.java:202)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.openDatabase(DynamoDBStoreManager.java:57)
at com.thinkaurelius.titan.diskstorage.Backend.initialize(Backend.java:235)
... 5 more
Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Cannot increase provisioned throughput to more than 80,000 units per account (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 34d7515c-d628-4224-bca3-b75acb936c71)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:1835)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1811)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.createTable(AmazonDynamoDBClient.java:640)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.createTable(DynamoDBDelegate.java:700)
... 12 more
I don't know how to solve this error, and I can't find any reference for this exception.
Can anyone help?

You appear to be running the stack in us-east-1, where the maximum provisioned throughput per account is 80,000 units. You omitted the edgestore/graphindex provisioning configuration, but I suspect you tried to provision more than 80,000 units of throughput. To get more throughput, you need to file a request through AWS Support to have the per-table and per-region throughput limits raised.
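Alternatively, you can lower how much capacity Titan requests when it creates its tables. A minimal sketch in the style of the question's code; the per-store property names used here (capacity-read/capacity-write) are assumptions and should be verified against the dynamodb-titan-storage-backend documentation for your version:
// Hypothetical example: request modest provisioned throughput per store so the
// account-level 80,000-unit cap is never hit (property names assumed; verify them).
conf.setProperty("storage.dynamodb.stores.edgestore.capacity-read", "10");
conf.setProperty("storage.dynamodb.stores.edgestore.capacity-write", "10");
conf.setProperty("storage.dynamodb.stores.graphindex.capacity-read", "10");
conf.setProperty("storage.dynamodb.stores.graphindex.capacity-write", "10");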

Related

How to optimize Keycloak config

I have deployed Keycloak on Kubernetes and am having a performance issue with it, as follows:
I run 6 Keycloak pods in standalone-HA mode using KUBE_PING on Kubernetes, with an HPA that auto-scales when CPU goes over 80%. When I load-test logins, the threshold is only about 150 concurrent users; above that threshold, errors occur. The Keycloak pod logs show timeouts like the ones below:
ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p16-t1) ISPN000136: Error executing command RemoveCommand on Cache 'authenticationSessions', writing keys [f85ac151-6196-48e9-977c-048fc8bcd975]: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 2312955 from keycloak-9b6486f7-bgw8s"
ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p16-t1) ISPN000136: Error executing command ReplaceCommand on Cache 'loginFailures', writing keys [LoginFailureKey [ realmId=etc-internal. userId=a76c3578-32fa-42cb-88d7-fcfdccc5f5c6 ]]: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 2201111 from keycloak-9b6486f7-bgw8s"
ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p20-t1) ISPN000136: Error executing command PutKeyValueCommand on Cache 'sessions', writing keys [6a0e8cde-82b7-4824-8082-109ccfc984b4]: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 2296440 from keycloak-9b6486f7-9l9lq"
I can see that Keycloak's RAM and CPU usage stays very low, under 20%, so the HPA never scales up. I therefore think the current Keycloak configuration is not optimized, e.g. the number of CACHE_OWNERS, Access Token Lifespan, SSO Session Idle, SSO Session Max, etc.
I want to know which settings to configure so that Keycloak can handle 500 concurrent users with a response time of about 3 s. Please help if you know about this!
In standalone-ha.xml, I have only changed the datasource configuration, as in the image below.
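For reference, the CACHE_OWNERS value mentioned above corresponds to the owners attribute on Keycloak's distributed caches in the Infinispan subsystem of standalone-ha.xml. A minimal sketch, assuming the default Keycloak cache-container layout (verify the exact elements against your Keycloak version):
<cache-container name="keycloak">
    <!-- owners = how many nodes keep a copy of each entry; the default is 1 -->
    <distributed-cache name="sessions" owners="2"/>
    <distributed-cache name="authenticationSessions" owners="2"/>
    <distributed-cache name="loginFailures" owners="2"/>
</cache-container>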

pulumi times out importing gcp record set

I have some DNS records I'm trying to import with Pulumi, and they fail somewhat bluntly with this error:
Diagnostics:
gcp:dns:RecordSet (root/my.domain./NS):
error: Preview failed: refreshing urn:pulumi:root::root::gcp:dns/recordSet:RecordSet::root/my.domain./NS: Error when reading or editing DNS Record Set "my.domain.": Get "https://www.googleapis.com/dns/v1beta2/projects/root-280012/managedZones/root/rrsets?alt=json&name=my.domain.&prettyPrint=false&type=NS": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
pulumi:pulumi:Stack (root-root):
error: preview failed
I'm just getting started with Pulumi, so I have no real sense of whether this is a GCP-specific problem or a more general Pulumi one; apologies if this is in the wrong place.
Is this just a case of increasing a timeout limit? Is this a problem with the CLI? Why would this particular request time out? (It times out on every attempt.)
Appreciate any advice!
I solved this by using customTimeouts: https://www.pulumi.com/docs/intro/concepts/programming-model/#customtimeouts
When creating your resource, you can pass an options object as the last parameter, and in it you can add your customTimeouts configuration, e.g. (in TypeScript):
new gcp.dns.RecordSet("root/my.domain./NS", {
    /* ...record set arguments... */
}, {
    customTimeouts: {
        create: '5m',
        delete: '5m',
        update: '5m'
    }
});

CEPH S3 Exception while Listing Blobs

I have created an S3 bucket backed by Ceph, and through the Java S3 client (via the S3 object gateway) I am listing a directory in a paginated fashion. The listing randomly fails, sometimes after listing 1100 blobs in batches, sometimes after 2000, and I am not able to figure out how to debug this issue. Below is the exception I am getting. Notice there is a request ID in the exception; I think I can filter the logs based on it, but where can I find those logs? I have checked the S3 gateway pod logs but couldn't find anything matching there, so please let me know where I should look.
com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 500; Error Code: UnknownError; Request ID: tx00000000000000000e7df-005e626049-1146-rook-ceph-store; S3 Extended Request ID: 1146-rook-ceph-store-rook-ceph-store), S3 Extended Request ID: 1146-rook-ceph-store-rook-ceph-store
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)
And this is my code to iterate through the blobs. This is the non-paginated version; both the paginated and non-paginated versions throw the same exception after listing a few hundred blobs.
ObjectListing objects = conn.listObjects(bucket.getName());
while (true) {
    for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
        System.out.println(objectSummary.getKey() + "\t" +
                objectSummary.getSize() + "\t" +
                StringUtils.fromDate(objectSummary.getLastModified()));
    }
    // Check before fetching the next batch so the final page is printed too.
    if (!objects.isTruncated()) {
        break;
    }
    objects = conn.listNextBatchOfObjects(objects);
}
So, any pointers on how to debug this would be helpful. Thanks.
Try ListObjectsV2.
Returns some or all (up to 1,000) of the objects in a bucket.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
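A minimal sketch of the same listing loop using ListObjectsV2 with the AWS SDK for Java v1, written as a fragment in the same style as the question's code (conn and bucket are the same objects; the classes are ListObjectsV2Request and ListObjectsV2Result from com.amazonaws.services.s3.model):
ListObjectsV2Request request = new ListObjectsV2Request()
        .withBucketName(bucket.getName())
        .withMaxKeys(1000);              // at most 1,000 keys are returned per call
ListObjectsV2Result result;
do {
    result = conn.listObjectsV2(request);
    for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
        System.out.println(objectSummary.getKey() + "\t" + objectSummary.getSize());
    }
    // Continue the listing from where the previous page left off.
    request.setContinuationToken(result.getNextContinuationToken());
} while (result.isTruncated());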

OrientDB - REST API - You have reached maximum pool size for given partition

I used Apache JMeter to call the OrientDB REST API to test the server workload.
I tested with 50 concurrent users and saw that roughly 30%-45% of requests failed, with the response message below:
{
    "errors": [
        {
            "code": 400,
            "reason": "Bad request",
            "content": "Error on parsing parameters from request body\u000d\u000a\u0009DB name=\"data_dev\"\u000aYou have reached maximum pool size for given partition"
        }
    ]
}
I checked and found no errors on the server.
I have tried changing
script.pool.maxSize to 200 and db.pool.max to 200,
but the issue still occurs.
Any suggestions?
UPDATED
This issue has already been reported on GitHub here.
Thanks.
Try to increase the maximum number of instances in the pool of script engines:
script.pool.maxSize
Ref: OrientDB documentation.
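For example, a sketch assuming a standard OrientDB server install: global settings such as this one can generally be passed to the server JVM as system properties (check the startup script of your distribution for where extra JVM options go):
java -Dscript.pool.maxSize=200 -Ddb.pool.max=200 ... (rest of the usual OrientDB server command line)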

LOAD Runner Internal server 500 issue (REST API)

I am trying to call a REST API from LoadRunner but am unable to do so. Every time, it throws the exception below:
Action.c(4): Error -26612: HTTP Status-Code=500 (Internal Server Error) for "http://ipaddress/LoyaltyApi/api1/loyaltycard/linkcard", Snapshot Info [MSH 1 1] [MsgId: MERR-26612]
My code:
Action()
{
lr_think_time(10);
web_custom_request("LinkCards",
"URL=http://ipaddress/LoyaltyApi/api1/loyaltycard/linkcard",
"Method=POST",
"Resource=0",
"EncType=application/json",
"Mode=HTTP",
"BodyFilePath=linkcards.json",
LAST);
return 0;
}
I have tested the same URL with the POST parameters in Postman and it works fine without any issue.
I am very new to this technology, so I am unable to solve the issue. Please help.
"I am very new in this technology...."
Assuming your management has moved you into this role, have they provided you with training on the tool and a mentor for a period of time? If not, they have set you up for failure.