The documentation says
The JDBC Client API created in Vert.x 3 is now deprecated and instead the new SQL Client API should be used. It will remain supported for the lifetime of Vert.x 4 to allow applications to be migrated to the new SQL Client API.
It seems that this class works in auto-commit mode. If I have several database calls within one service, how does this work with transaction consistency? Is it planned that "commit" and "rollback" will also be available, as they are in SQLConnection?
Thx
You can take a look at the Javadoc of the new client's transaction API at https://vertx.io/docs/apidocs/io/vertx/sqlclient/Pool.html#withTransaction-java.util.function.Function-. The JDBCClient will execute the block, starting by disabling auto-commit mode and ending with either a commit or a rollback.
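For illustration, a minimal sketch of withTransaction with the new client (the JDBC URL, pool options, and table names are made-up placeholders):

import io.vertx.core.Vertx;
import io.vertx.jdbcclient.JDBCConnectOptions;
import io.vertx.jdbcclient.JDBCPool;
import io.vertx.sqlclient.PoolOptions;

Vertx vertx = Vertx.vertx();
JDBCPool pool = JDBCPool.pool(vertx,
    new JDBCConnectOptions().setJdbcUrl("jdbc:h2:mem:test"),
    new PoolOptions().setMaxSize(5));

// withTransaction acquires a connection, disables auto-commit, runs the block,
// then commits if the returned Future succeeds and rolls back if it fails.
pool.withTransaction(conn ->
    conn.query("INSERT INTO orders (id) VALUES (1)").execute()
        .flatMap(r -> conn.query("UPDATE stock SET qty = qty - 1 WHERE item = 1").execute())
).onSuccess(r -> System.out.println("committed"))
 .onFailure(err -> System.out.println("rolled back: " + err.getMessage()));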
I'm using ServiceDiscovery.registerService() to register a service with a ZooKeeper server; should I call ServiceDiscovery's close() method right after I register a service?
Curator's documentation suggests reusing ServiceProvider objects; should I reuse ServiceDiscovery objects too?
The Curator version is 2.8.0. Thanks!
(note: I'm the main author of Curator)
No, you should not call close() after registering. close() is to be used when your application is shutting down. It closes all caches, unregisters services, etc.
You only need one ServiceDiscovery instance per application.
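For reference, a rough sketch of that lifecycle with Curator's service discovery (the ZooKeeper address, base path, and service name are placeholders):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;

CuratorFramework client = CuratorFrameworkFactory.newClient(
    "localhost:2181", new ExponentialBackoffRetry(1000, 3));
client.start();

// One ServiceDiscovery per application, kept open for the application's lifetime
ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.builder(Void.class)
    .client(client)
    .basePath("/services")
    .build();
discovery.start();

ServiceInstance<Void> instance = ServiceInstance.<Void>builder()
    .name("my-service")
    .port(8080)
    .build();
discovery.registerService(instance);

// ... application runs ...

// Only on shutdown: unregisters instances and closes caches
discovery.close();
client.close();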
Here goes - bear with me:
Two Autofac 4.2.1 Containers:
One in an Asp.NET 4.6.1 WebApi project
One in an NServiceBus 6 host
Both possess an IJobService reference to the JobService (which saves jobs to DynamoDB).
Run the project in Visual Studio...
If I make a WebApi request into the first JobService, it succeeds, inserts a record into DynamoDB, and drops a command on the bus for NServiceBus to pick up.
During the processing of the Saga, NServiceBus makes a call to JobService again (presumably on the second container) to save progress. This second call fails to insert into DynamoDB because the lifetime scope has been disposed. If I try to create anything from IComponentContext I get:
Instances cannot be resolved and nested lifetimes cannot be created from this LifetimeScope as it has already been disposed.
The NServiceBus host is running AsA_Server and I register the container in the Customize method of IConfigureThisEndpoint.
Any pointers on how to see where the lifetime is getting dumped or if it's mysteriously picking the wrong IJobService somehow?
Just to close this one out - we ended up redesigning the solution and moving any web service calls out to their own handlers. That was based on the advice found here: http://docs.particular.net/nservicebus/sagas. That change resolved the issue one way or another.
Specifically, this guidance:
Other than interacting with its own internal state, a saga should not access a database, call out to web services, or access other resources - neither directly nor indirectly by having such dependencies injected into it.
When Spark is deployed in YARN cluster mode, how should I issue calls to the Spark monitoring REST API (http://spark.apache.org/docs/latest/monitoring.html)?
Does YARN have an API that takes a REST call, for example (I already know the app-id):
http://localhost:4040/api/v1/applications/[app-id]/jobs
proxies it to the correct driver port, and returns the JSON back to me? By "me" I mean my client.
Assume (or take it as already the case by design) that I cannot talk directly to the driver machine for security reasons.
Please have a look at the Spark docs:
- REST API
Yes, with the latest API it's available.
From this article:
It turns out there is a third surprisingly easy option which is not documented. Spark has a hidden REST API which handles application submission, status checking and cancellation.
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for both running applications, and in the history server. The endpoints are mounted at /api/v1. Eg., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
These are the other options available:
Livy jobserver
Submit Spark jobs remotely to an Apache Spark cluster on Linux using Livy
Other options include
Triggering spark jobs with REST
This is what worked for me:
In the YARN Resource Manager UI, click on the link of the "application manager" for the running application and note the URL that it directs to.
For me the link was something like
http://RM:20888/proxy/application_1547506848892_0002/
Append "api/v1/applications/application_1547506848892_0002" to the URL for the api.
For above case the api url is
curl "http://RM:20888/proxy/application_1547506848892_0002/api/v1/applications/application_1547506848892_0002"
I just attempted a seamless upgrade of a service in a test setup. The service is being accessed by a Feign client, and naively I was under the impression that, with multiple instances of the service available, the client would retry another instance if it failed to connect to one.
That, however, did not happen. I cannot find any mention of how Feign in Spring Cloud is supposed to be configured to do this, although I have seen mentions of it being supported (as opposed to using RestTemplate, where you would use something like Spring Retry).
If you are using Ribbon you can set properties similar to the following (substituting your service id for "localapp"):
localapp.ribbon.MaxAutoRetries=5
localapp.ribbon.MaxAutoRetriesNextServer=5
localapp.ribbon.OkToRetryOnAllOperations=true
PS: underneath, Feign has a Retryer interface, which was made to support things like Ribbon.
https://github.com/Netflix/feign/blob/master/core/src/main/java/feign/Retryer.java
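If you want to configure retries through Feign itself rather than Ribbon, a sketch of a custom Retryer bean (the class name and backoff values are arbitrary examples, not a prescribed setup):

import feign.Retryer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignRetryConfig {

    // period and maxPeriod in milliseconds, plus maxAttempts; Feign backs off between attempts
    @Bean
    public Retryer retryer() {
        return new Retryer.Default(100, 1000, 3);
    }
}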
See if this property works: OkToRetryOnAllOperations: true
You can refer to this sample application:
https://github.com/spencergibb/spring-cloud-sandbox/blob/master/spring-cloud-sandbox-sample-frontend/src/main/resources/application.yml
Spencer was quick... I was late by a few minutes :-)
I'm observing weird behavior for WebSphere 7.0.0.21:
Architecture:
Simple EJB bean with @Local and @Remote interfaces and a transactional method marked as @Required
Standalone command-line client that looks up the remote "jta/usertransaction" and the transactional EJB method. The client code starts a user transaction, executes the method and then tries to roll it back.
Expected behavior (I see it on JBoss): rollback of the DB transaction
Observed behavior (on WAS 7.0.0.21): commit of the DB transaction
I see that the client transaction changes from STATUS_NO_TRANSACTION (6) to STATUS_ACTIVE (0) and then back to STATUS_NO_TRANSACTION (6) after the rollback.
I tried to Google it but didn't find any results.
Any ideas on this scenario? I'm pretty much ready to file an issue with IBM.
Thanks,
UPDATE:
Finally, after a long wait and interactions with IBM support, I got it resolved:
There are no problems with the IBM JRE.
For the Sun/Oracle JRE it requires extra ORB configuration, e.g.
jndiProperties.put("java.naming.corba.orb", com.ibm.CORBA.iiop.ORB.init((String[])null, orbProperties));
and the orb.properties from the WAS or AppClient JRE needs to be provided as "orbProperties".
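Putting the pieces together, a rough sketch of the standalone client setup (the host name, port, and orb.properties path are placeholders, and the EJB call itself is omitted):

import java.io.FileInputStream;
import java.util.Hashtable;
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

// orb.properties copied from the WAS (or Application Client) JRE
Properties orbProperties = new Properties();
try (FileInputStream in = new FileInputStream("orb.properties")) {
    orbProperties.load(in);
}

Hashtable<String, Object> jndiProperties = new Hashtable<>();
jndiProperties.put(Context.INITIAL_CONTEXT_FACTORY,
        "com.ibm.websphere.naming.WsnInitialContextFactory");
jndiProperties.put(Context.PROVIDER_URL, "corbaloc:iiop:washost:2809");
// Pass an IBM ORB initialized with the WAS orb.properties to JNDI;
// this was the extra configuration needed on the Sun/Oracle JRE
jndiProperties.put("java.naming.corba.orb",
        com.ibm.CORBA.iiop.ORB.init((String[]) null, orbProperties));

InitialContext ctx = new InitialContext(jndiProperties);
UserTransaction utx = (UserTransaction) ctx.lookup("jta/usertransaction");

utx.begin();
// ... call the transactional EJB method here ...
utx.rollback();   // with the IBM ORB in place this rolls back as expected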