I'm writing some bytes to GCS and would like to use the JSON API wrappers provided by Google, but with a timeout. Currently I have this:
storage = new Storage.Builder(GoogleNetHttpTransport...)
StorageObject storageObject = new StorageObject().setBucket(bucket).setName(path);
Storage.Objects.Insert insertObject =
        storage.objects().insert(bucket, storageObject, content).setName(path);
insertObject.execute();
Is there a simple way to add a timeout to either CloudStorage, StorageObject or the .execute?
It turns out that the Storage abstraction (import com.google.api.services.storage.Storage) has a way to set timeouts at initialization time, with an HttpRequestInitializer separate from your credentials.
If you have a MyGCSAbstraction that you create for each GCS operation, you can do the following:
private static HttpRequestInitializer setHttpTimeout(final HttpRequestInitializer requestInitializer) {
    return new HttpRequestInitializer() {
        @Override
        public void initialize(HttpRequest httpRequest) throws IOException {
            requestInitializer.initialize(httpRequest);
            httpRequest.setConnectTimeout(1000); // ms
            httpRequest.setReadTimeout(1000); // ms
        }
    };
}
MyGCSAbstraction(String applicationName, Credential credential) throws GeneralSecurityException, IOException {
    Builder builder = new Storage.Builder(GoogleNetHttpTransport.newTrustedTransport(), JacksonFactory.getDefaultInstance(), setHttpTimeout(credential));
    builder.setApplicationName(applicationName);
    storage = builder.build();
}
Here is my logstash.conf file. (Apologies for not pasting the code here directly; StackOverflow does not allow posts exceeding a certain code-to-text ratio.)
My remote VM, which also hosts my ElasticSearch and LogStash servers, listens on port 8080.
On my local machine, I periodically send zipped folders (containing JSON documents) over TCP to my remote server, which receives the data into a memory stream, unzips the folders, and sends the contents to LogStash. LogStash in turn forwards the data to ElasticSearch.
I am currently testing the workflow with some dummy data.
On my remote server, here is the method for receiving data over TCP:
private static void ReceiveAndUnzipElasticSearchDocumentFolder(int numBytesExpectedToReceive)
{
    int numBytesLeftToReceive = numBytesExpectedToReceive;
    using (MemoryStream zippedFolderStream = new MemoryStream(new byte[numBytesExpectedToReceive]))
    {
        while (numBytesLeftToReceive > 0)
        {
            // Receive data in small packets
        }
        zippedFolderStream.Unzip(afterReadingEachDocument: LogStashDataSender.Send);
    }
}
Here is the code for unzipping the received folder:
public static class StreamExtensions
{
    public static void Unzip(this Stream zippedElasticSearchDocumentFolderStream, Action<ElasticSearchJsonDocument> afterReadingEachDocument)
    {
        JsonSerializer jsonSerializer = new JsonSerializer();
        foreach (ZipArchiveEntry entry in new ZipArchive(zippedElasticSearchDocumentFolderStream).Entries)
        {
            using (JsonTextReader jsonReader = new JsonTextReader(new StreamReader(entry.Open())))
            {
                dynamic jsonObject = jsonSerializer.Deserialize<ExpandoObject>(jsonReader);
                string jsonIndexId = jsonObject.IndexId;
                string jsonDocumentId = jsonObject.DocumentId;
                afterReadingEachDocument(new ElasticSearchJsonDocument(jsonObject, jsonIndexId, jsonDocumentId));
            }
        }
    }
}
And here is the method for sending data to LogStash:
public static async void Send(ElasticSearchJsonDocument document)
{
    HttpResponseMessage response =
        await httpClient.PutAsJsonAsync(
            IsNullOrWhiteSpace(document.DocumentId)
                ? $"{document.IndexId}"
                : $"{document.IndexId}/{document.DocumentId}",
            document.JsonObject);

    try
    {
        response.EnsureSuccessStatusCode();
    }
    catch (Exception exception)
    {
        Console.WriteLine(exception.Message);
    }

    Console.WriteLine($"{response.Content}");
}
The httpClient referenced in the public static async void Send(ElasticSearchJsonDocument document) method was created using the following code:
private const string LogStashHostAddress = "http://127.0.0.1";
private const int LogStashPort = 31311;
httpClient = new HttpClient { BaseAddress = new Uri($"{LogStashHostAddress}:{LogStashPort}/") };
httpClient.DefaultRequestHeaders.Accept.Clear();
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
When I step into a new debug instance, the program runs smoothly but dies immediately after executing await httpClient.PutAsJsonAsync for each of the documents contained inside the zipped folder: response.EnsureSuccessStatusCode(); is never hit, and neither is Console.WriteLine(exception.Message); nor Console.WriteLine($"{response.Content}");.
Here is an example of ElasticSearchJsonDocument that is passed to the public static async void Send(ElasticSearchJsonDocument document) method:
When I ran the same PUT request using cURL, the Book index was successfully created, and I could then issue a GET request to retrieve the data from ElasticSearch.
My questions are:
Why did the program die immediately (with no visible exception messages) after executing await httpClient.PutAsJsonAsync(...) for each of the JSON documents inside the received zipped folder?
What changes should I make to ensure that I can make successful PUT requests to LogStash using an HttpClient instance?
I changed my httpClient instantiation code from
httpClient = new HttpClient { BaseAddress = new Uri($"{LogStashHostAddress}:{LogStashPort}/") };
httpClient.DefaultRequestHeaders.Accept.Clear();
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
to
httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Accept.Clear();
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
And I changed await httpClient.PutAsJsonAsync(...) to
HttpResponseMessage response =
    await httpClient.PutAsJsonAsync(
        IsNullOrWhiteSpace(document.DocumentId)
            ? $"{LogStashHostAddress}:{LogStashPort}/{document.IndexId}"
            : $"{LogStashHostAddress}:{LogStashPort}/{document.IndexId}/{document.DocumentId}",
        document.JsonObject);
response.EnsureSuccessStatusCode();
It turns out that the BaseAddress field in HttpClient is extremely user-unfriendly (among other quirks, the base address must end with a slash while the relative URI must not start with one, or path segments get silently dropped), so instead of wasting more time on it, I decided to just eliminate it entirely.
I have been migrating an existing application over to Spring Cloud's service discovery, Ribbon load balancing, and circuit breakers. The application already makes extensive use of RestTemplate, and I have been able to successfully use the load-balanced version of the template. However, I have been testing the situation where there are two instances of a service and I drop one of those instances out of operation. I would like the RestTemplate to fail over to the next server. From the research I have done, it appears that the fail-over logic exists in the Feign client and when using Zuul, but the load-balanced RestTemplate does not have logic for fail-over. In diving into the code, it looks like RibbonClientHttpRequestFactory is using the Netflix RestClient (which appears to have logic for doing retries).
So where do I go from here to get this working?
I would prefer to not use the Feign client because I would have to sweep A LOT of code.
I had found this link that suggested using the @Retryable annotation along with @HystrixCommand, but this seems like something that should be a part of the load-balanced RestTemplate.
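For reference, that suggestion amounts to something like the following sketch (my own illustration, assuming spring-retry and the Hystrix Javanica annotations are on the classpath; the service and method names are made up):

@Service
public class RemoteCallService {

    @Autowired
    private RestTemplate restTemplate;

    // Retry the call a few times; Hystrix wraps it in a circuit breaker.
    @Retryable(maxAttempts = 3)
    @HystrixCommand
    public String fetch(String url) {
        return restTemplate.getForObject(url, String.class);
    }
}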
I did some digging into the code for RibbonClientHttpRequestFactory.RibbonHttpRequest:
protected ClientHttpResponse executeInternal(HttpHeaders headers) throws IOException {
    try {
        addHeaders(headers);
        if (outputStream != null) {
            outputStream.close();
            builder.entity(outputStream.toByteArray());
        }
        HttpRequest request = builder.build();
        HttpResponse response = client.execute(request, config);
        return new RibbonHttpResponse(response);
    }
    catch (Exception e) {
        throw new IOException(e);
    }
}
It appears that if I override this method and change it to use client.executeWithLoadBalancer(), I might be able to leverage the retry logic that is built into the RestClient. I guess I could create my own version of RibbonClientHttpRequestFactory to do this?
Just looking for guidance on the best approach.
Thanks
To answer my own question:
Before I get into the details, a cautionary tale:
Eureka's self-preservation mode sent me down a rabbit hole while testing the fail-over on my local machine. I recommend turning self-preservation mode off while doing your testing. Because I was dropping nodes at a regular rate and then restarting (with a different instance ID using a random value), I tripped Eureka's self-preservation mode. I ended up with many instances in Eureka that pointed to the same machine and port. The fail-over was actually working, but the next node chosen happened to be another dead instance. Very confusing at first!
I was able to get fail-over working with a modified version of RibbonClientHttpRequestFactory. Because RibbonAutoConfiguration creates a load-balanced RestTemplate with this factory, rather than injecting that RestTemplate, I create a new one with my modified version of the request factory:
protected RestTemplate restTemplate;

@Autowired
public void customizeRestTemplate(SpringClientFactory springClientFactory, LoadBalancerClient loadBalancerClient) {
    restTemplate = new RestTemplate();
    // Use a modified version of the http request factory that leverages the load balancing in Netflix's RestClient.
    RibbonRetryHttpRequestFactory lFactory = new RibbonRetryHttpRequestFactory(springClientFactory, loadBalancerClient);
    restTemplate.setRequestFactory(lFactory);
}
The modified Request Factory is just a copy of RibbonClientHttpRequestFactory with two minor changes:
1) In createRequest, I removed the code that was selecting a server from the load balancer because the RestClient will do that for us.
2) In the inner class, RibbonHttpRequest, I changed executeInternal to call "executeWithLoadBalancer".
The full class:
@SuppressWarnings("deprecation")
public class RibbonRetryHttpRequestFactory implements ClientHttpRequestFactory {

    private final SpringClientFactory clientFactory;
    private LoadBalancerClient loadBalancer;

    public RibbonRetryHttpRequestFactory(SpringClientFactory clientFactory, LoadBalancerClient loadBalancer) {
        this.clientFactory = clientFactory;
        this.loadBalancer = loadBalancer;
    }

    @Override
    public ClientHttpRequest createRequest(URI originalUri, HttpMethod httpMethod) throws IOException {
        String serviceId = originalUri.getHost();
        IClientConfig clientConfig = clientFactory.getClientConfig(serviceId);
        RestClient client = clientFactory.getClient(serviceId, RestClient.class);
        HttpRequest.Verb verb = HttpRequest.Verb.valueOf(httpMethod.name());
        return new RibbonHttpRequest(originalUri, verb, client, clientConfig);
    }

    public class RibbonHttpRequest extends AbstractClientHttpRequest {

        private HttpRequest.Builder builder;
        private URI uri;
        private HttpRequest.Verb verb;
        private RestClient client;
        private IClientConfig config;
        private ByteArrayOutputStream outputStream = null;

        public RibbonHttpRequest(URI uri, HttpRequest.Verb verb, RestClient client, IClientConfig config) {
            this.uri = uri;
            this.verb = verb;
            this.client = client;
            this.config = config;
            this.builder = HttpRequest.newBuilder().uri(uri).verb(verb);
        }

        @Override
        public HttpMethod getMethod() {
            return HttpMethod.valueOf(verb.name());
        }

        @Override
        public URI getURI() {
            return uri;
        }

        @Override
        protected OutputStream getBodyInternal(HttpHeaders headers) throws IOException {
            if (outputStream == null) {
                outputStream = new ByteArrayOutputStream();
            }
            return outputStream;
        }

        @Override
        protected ClientHttpResponse executeInternal(HttpHeaders headers) throws IOException {
            try {
                addHeaders(headers);
                if (outputStream != null) {
                    outputStream.close();
                    builder.entity(outputStream.toByteArray());
                }
                HttpRequest request = builder.build();
                HttpResponse response = client.executeWithLoadBalancer(request, config);
                return new RibbonHttpResponse(response);
            }
            catch (Exception e) {
                throw new IOException(e);
            }
            //TODO: fix stats, now that execute is not called
            // use execute here so stats are collected
            /*
            return loadBalancer.execute(this.config.getClientName(), new LoadBalancerRequest<ClientHttpResponse>() {
                @Override
                public ClientHttpResponse apply(ServiceInstance instance) throws Exception {}
            });
            */
        }

        private void addHeaders(HttpHeaders headers) {
            for (String name : headers.keySet()) {
                // apache http RequestContent pukes if there is a body and
                // the dynamic headers are already present
                if (!isDynamic(name) || outputStream == null) {
                    List<String> values = headers.get(name);
                    for (String value : values) {
                        builder.header(name, value);
                    }
                }
            }
        }

        private boolean isDynamic(String name) {
            return name.equals("Content-Length") || name.equals("Transfer-Encoding");
        }
    }

    public class RibbonHttpResponse extends AbstractClientHttpResponse {

        private HttpResponse response;
        private HttpHeaders httpHeaders;

        public RibbonHttpResponse(HttpResponse response) {
            this.response = response;
            this.httpHeaders = new HttpHeaders();
            List<Map.Entry<String, String>> headers = response.getHttpHeaders().getAllHeaders();
            for (Map.Entry<String, String> header : headers) {
                this.httpHeaders.add(header.getKey(), header.getValue());
            }
        }

        @Override
        public InputStream getBody() throws IOException {
            return response.getInputStream();
        }

        @Override
        public HttpHeaders getHeaders() {
            return this.httpHeaders;
        }

        @Override
        public int getRawStatusCode() throws IOException {
            return response.getStatus();
        }

        @Override
        public String getStatusText() throws IOException {
            return HttpStatus.valueOf(response.getStatus()).name();
        }

        @Override
        public void close() {
            response.close();
        }
    }
}
I had the same problem but then, out of the box, everything was working (using a @LoadBalanced RestTemplate). I am using the Finchley version of Spring Cloud, and I think my problem was that I was not explicitly adding spring-retry to my pom configuration. I'll leave here my spring-retry related yml configuration (remember this only works with a @LoadBalanced RestTemplate, Zuul, or Feign):
spring:
  # Ribbon retries on
  cloud:
    loadbalancer:
      retry:
        enabled: true

# Ribbon service config
my-service:
  ribbon:
    MaxAutoRetries: 3
    MaxAutoRetriesNextServer: 1
    OkToRetryOnAllOperations: true
    retryableStatusCodes: 500, 502
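For completeness, the @LoadBalanced RestTemplate this configuration applies to is just a plain bean carrying the qualifier; spring-retry itself is the org.springframework.retry:spring-retry artifact. A minimal sketch (the configuration class name is mine):

@Configuration
public class RestClientConfig {

    // Marking the bean @LoadBalanced tells Spring Cloud to wire Ribbon
    // (and, with spring-retry on the classpath, retry support) into it.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}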
I'm using google-api-java-client to access the Google Cloud Storage API. It works fine from my local machine and from some servers, but it fails with a 400 Bad Request (error: invalid_grant) on the production server. I had a similar problem with the Google AdWords API, which was solved by the Google AdWords API support; I think they just whitelisted the IPs of the servers I used.
Is it possible that I need to add permissions in the Google Cloud Storage API? That would be strange, because it works from my local PC.
public class StorageObj {

    private static final String KEY_FILE_NAME = "key.p12";
    private static final String APPLICATION_NAME = "application_name";
    private static final String BUCKET_NAME = "bucket_name";
    private static final String STORAGE_OAUTH_SCOPE = "https://www.googleapis.com/auth/devstorage.read_write";
    private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();

    private final Storage storageObject;

    public StorageObj() throws GeneralSecurityException, IOException {
        HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
        GoogleCredential credential = new GoogleCredential.Builder().setTransport(httpTransport)
                .setJsonFactory(JSON_FACTORY)
                .setServiceAccountId(SERVICE_ACCOUNT_ACCOUNT_EMAIL)
                .setServiceAccountScopes(Collections.singleton(STORAGE_OAUTH_SCOPE))
                .setServiceAccountPrivateKeyFromP12File(new File(KEY_FILE_NAME))
                .build();
        this.storageObject = new Storage.Builder(httpTransport, JSON_FACTORY, credential)
                .setApplicationName(APPLICATION_NAME)
                .build();
    }

    public void insert(String folder, String file) throws IOException {
        try (FileInputStream inputStream = new FileInputStream(new File(folder + file))) {
            InputStreamContent mediaContent = new InputStreamContent("application/octet-stream", inputStream);
            mediaContent.setLength(inputStream.available());
            Storage.Objects.Insert insertObject = storageObject.objects()
                    .insert(BUCKET_NAME, null /* obj-meta-data */, mediaContent)
                    .setName(file);
            int _2MB = 2 * 1000 * 1000;
            if (mediaContent.getLength() > 0 && mediaContent.getLength() <= _2MB) {
                insertObject.getMediaHttpUploader().setDirectUploadEnabled(true);
            }
            insertObject.execute();
        }
    }

    public static void main(String[] args) throws IOException, GeneralSecurityException {
        new StorageObj().insert(args[0], args[1]);
    }
}
My first guess is that your service account might not have access to the cloud bucket's ACL. Maybe you have access to one bucket but not to all, so you might have the proper ACL for certain buckets but not for others.
Also, when you say "local", do you mean the devserver? Google Cloud Storage behaves differently on the devserver, so this might be explainable just by how the devserver works.
In my answer I also assumed that by "servers" you meant different buckets on Google Cloud Storage.
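If you want to rule the ACL theory in or out, the same JSON API client can list a bucket's ACL. A minimal diagnostic sketch (my own addition, assuming the storageObject client from the question and permission to read the ACL resource):

// Hypothetical diagnostic: print the bucket ACL to check whether the
// service account actually appears with a READER/WRITER/OWNER role.
BucketAccessControls controls =
        storageObject.bucketAccessControls().list(BUCKET_NAME).execute();
for (BucketAccessControl control : controls.getItems()) {
    System.out.println(control.getEntity() + " -> " + control.getRole());
}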
Okay, so I have made the basic REST example and now I wanted to take it a step further by using authentication (user login) in my example.
I am only using Java Collections for my data. NO DATABASE !!
I am storing the user data in a Map where email is the key to the password !!
But I am getting stuck at the basic authentication part, where a form request is being posted to my REST POST method, which takes the values from the users... something like this:
@POST
@Produces(MediaType.TEXT_HTML)
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
public void newUser(
        @FormParam("email") String email,
        @FormParam("password") String password,
        @Context HttpServletResponse servletResponse) throws IOException {
    // Form Processing algo
    if (emailexists) {
        servletResponse.sendRedirect("http://localhost:8080/xxx/LoginFailed.html");
    }
    else {
        servletResponse.sendRedirect("http://localhost:8080/xxx/UserHomPage.html");
    }
}
Don't know what I am doing wrong ..
Also, only Java Collections are to be used (like Lists, Maps, etc.).
Am I using the right technique here, or does anyone have a better one at their disposal?
Any help would be appreciated !
I am on Windows using Apache Tomcat 6..
AND A TOTAL NOOB AT THIS THING !!
To save persistent data (like usernames and passwords) without a database, you should consider saving the data in a text file server-side and reading the data back into a map in your constructor.
However, the more data you have, the more expensive this process gets. If you have a large number of users you really should consider using a database; databases are more organized, more efficient, and far easier to use.
@Path("myPath")
public class MyResource {

    private static final String FILE_PATH = "my/path/to/userdata.txt";
    private HashMap<String, String> _userData;

    public MyResource() {
        // Read the existing user data (one "email,password" pair per line)
        // back into the map on construction.
        _userData = new HashMap<String, String>();
        try (Scanner scanner = new Scanner(new File(FILE_PATH))) {
            while (scanner.hasNext()) {
                String[] line = scanner.nextLine().split(",");
                _userData.put(line[0].trim(), line[1].trim());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Response addNewUser(@FormParam("email") String email,
                               @FormParam("password") String password)
            throws IOException {
        int statusCode = 200;
        // If that email already exists, don't write it to the file
        if (_userData.containsKey(email)) {
            statusCode = 400;
        } else {
            // Open the file in append mode so existing users are kept
            try (PrintWriter writer = new PrintWriter(new FileWriter(FILE_PATH, true))) {
                writer.println(email + "," + password);
            }
            _userData.put(email, password);
        }
        return Response.status(statusCode).build();
    }
}
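For what it's worth, the userdata.txt format this sketch assumes is one comma-separated email/password pair per line, e.g.:

alice@example.com,secret1
bob@example.com,secret2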
I am new to web services. I am trying to generate a unique session id for every login that a user does, in web services.
What I thought of doing is:
Write a java file which has the login and logout method.
Generate WSDL file for it.
Then generate the web service client (using the Eclipse IDE) from the WSDL file that I generate.
Use the generated package (client stub) and call the methods.
Please let me know if there are any flaws in my way of implementation.
1. Java file with the needed methods
private Map<String, String> userSessionHashMap = new HashMap<String, String>();
private String sid;

public String login(String userID, String password) {
    if (userID.equalsIgnoreCase("sadmin")
            && password.equalsIgnoreCase("sadmin")) {
        System.out.println("Valid user");
        sid = generateUUID(userID);
    } else {
        System.out.println("Auth failed");
    }
    return sid;
}

private String generateUUID(String userID) {
    UUID uuID = UUID.randomUUID();
    sid = uuID.toString();
    // Note: the map must not be re-created here, or earlier sessions are lost.
    userSessionHashMap.put(userID, sid);
    return sid;
}

public void logout(String userID) {
    // Remove the user's session entry if present.
    if (userSessionHashMap.containsKey(userID)) {
        userSessionHashMap.remove(userID);
    }
}
2. Generated WSDL file
3. Developed the web service client from the WSDL.
4. Using the developed client stub.
public static void main(String[] args) throws Exception {
    ClientWebServiceLogin objClientWebServiceLogin = new ClientWebServiceLogin();
    objClientWebServiceLogin.invokeLogin();
}

public void invokeLogin() throws Exception {
    String endpoint = "http://schemas.xmlsoap.org/wsdl/";
    String username = "sadmin";
    String password = "sadmin";
    String targetNamespace = "http://WebServiceLogin";
    try {
        WebServiceLoginLocator objWebServiceLoginLocator = new WebServiceLoginLocator();
        java.net.URL url = new java.net.URL(endpoint);
        Iterator ports = objWebServiceLoginLocator.getPorts();
        while (ports.hasNext())
            System.out.println("ports Iterator size-->" + ports.next());
        WebServiceLoginPortType objWebServiceLoginPortType =
                objWebServiceLoginLocator.getWebServiceLoginHttpSoap11Endpoint();
        String sid = objWebServiceLoginPortType.login(username, password);
        System.out.println("sid--->" + sid);
    } catch (Exception exception) {
        System.out.println("AxisFault at creating objWebServiceLoginStub" + exception);
        exception.printStackTrace();
    }
}
On running this file, I get the following error:
AxisFault
faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException
faultSubcode:
faultString: java.net.ConnectException: Connection refused: connect
faultActor:
faultNode:
faultDetail:
{http://xml.apache.org/axis/}stackTrace:java.net.ConnectException: Connection refused: connect
Can anyone suggest an alternate way of handling this task? And what could be the reason for this error?
Web services are supposed to be stateless, so having "login" and "logout" web service methods doesn't make much sense.
If you want to secure web service calls, unfortunately you have to code security into every call. In your case, this means passing the userId and password to every method.
Or consider adding a custom handler for security. Read more about handlers here.
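For illustration, a handler in JAX-WS terms looks roughly like this (a sketch, not Axis-specific; the class name and token check are made up):

import javax.xml.namespace.QName;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;
import java.util.Set;

// Hypothetical security handler: inspects every inbound SOAP message
// before it reaches the service method.
public class TokenHandler implements SOAPHandler<SOAPMessageContext> {

    @Override
    public boolean handleMessage(SOAPMessageContext context) {
        boolean inbound = !(Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (inbound) {
            // Extract and validate a credential/token header here;
            // return false (or throw) to reject the call.
        }
        return true;
    }

    @Override
    public boolean handleFault(SOAPMessageContext context) {
        return true;
    }

    @Override
    public void close(MessageContext context) {
    }

    @Override
    public Set<QName> getHeaders() {
        return null;
    }
}

This keeps the security check in one place instead of repeating it in every service method.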