I'm using the google-cloud-cpp SDK and I'm trying to test an incorrect-endpoint scenario: when I pass in an incorrect endpoint URL, I expect an error within 100 milliseconds, per the policy I set up. However, my test hangs for a long time instead. For this particular case I need to override the policies of an existing gcs::Client for ListObjects only, and I want to reuse the client I have already created. I wrote a small program that simulates the behavior of my actual issue in the codebase. I'm not sure why the LimitedTimeRetryPolicy is not getting forwarded to the new client I created. I'd appreciate any help and/or examples.
#include "google/cloud/storage/client.h"

#include <chrono>
#include <iostream>
#include <memory>
#include <stdexcept>

using namespace google::cloud::storage;
using ::google::cloud::StatusOr;

// Ex: ./wrongEndpoint
int main(int argc, char* argv[]) {
  auto options = ClientOptions::CreateDefaultClientOptions();
  options.value().set_enable_http_tracing(true);
  options.value().set_enable_raw_client_tracing(true);
  options.value().set_endpoint("https://somegarbage.com");
  options.value().set_download_stall_timeout(std::chrono::seconds(1));

  // Original client in the codebase.
  Client clientX(*options);

  // Create a new client for ListObjects from the raw_client, with a retry policy.
  std::shared_ptr<internal::RawClient> Rclient = clientX.raw_client();
  Client client(Rclient, LimitedTimeRetryPolicy(std::chrono::milliseconds(100)));

  try {
    for (auto&& object_metadata : client.ListObjects("march30")) {
      if (!object_metadata) {
        throw std::runtime_error(object_metadata.status().message());
      }
      std::cout << "bucket_name=" << object_metadata->bucket()
                << ", object_name=" << object_metadata->name() << "\n";
    }
  } catch (std::exception& ex) {
    std::cout << ex.what() << "\n";
  }
}
To understand why this does not work, I need to talk about the implementation internals, sorry.
In your code clientX contains a stack of RawClient objects, more or less as follows: clientX -> LoggingClient -> RetryClient(DefaultRetryPolicy) -> CurlClient. Each element in that list is a decorator: it performs one function (say, logging) and delegates the rest of the work to the next element.
When you create client in your code, it contains a new stack: client -> RetryClient(LimitedTimeRetryPolicy(100ms)) -> LoggingClient -> RetryClient(DefaultRetryPolicy) -> CurlClient.
That means every request is now subject to both the original retry policy and your new retry policy. The new retry policy is much shorter than the original one, so (most likely) it just fails immediately.
Rebuilding the stack of internal::RawClient objects is not something we ever thought about. I can describe how to do it, but I will note that you would be using classes in the google::cloud::internal:: namespace, which may change at any time without notice.
Basically you would need to use RTTI to discover the type of each element in this stack; depending on the type, they have a client() member function that gives you the next element. Once you discover the right one to replace, you would create a new one, preserving all the downstream elements in the stack. Then you would recreate a Client using the NoDecorations constructor.
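To make that concrete, here is a rough, unsupported sketch of what such a rebuild could look like. Everything below is an assumption rather than documented API: the internal header paths, the client() accessors, and the RetryClient constructor signature may all differ in your version of the library.
// UNSUPPORTED SKETCH: these types live in the internal namespace and may
// change without notice; header names and constructors are assumptions.
#include "google/cloud/storage/client.h"
#include "google/cloud/storage/internal/logging_client.h"
#include "google/cloud/storage/internal/retry_client.h"
#include <chrono>
#include <memory>

namespace gcs = google::cloud::storage;

gcs::Client RebuildWithShortRetry(gcs::Client const& original) {
  std::shared_ptr<gcs::internal::RawClient> raw = original.raw_client();
  // Peel off decorators with RTTI until we reach the RetryClient layer.
  if (auto logging =
          std::dynamic_pointer_cast<gcs::internal::LoggingClient>(raw)) {
    raw = logging->client();  // next element in the stack
  }
  if (auto retry =
          std::dynamic_pointer_cast<gcs::internal::RetryClient>(raw)) {
    raw = retry->client();  // now the transport, e.g. CurlClient
  }
  // Recreate the retry decorator with the desired policy, then wrap it in
  // a Client without adding any further decorations.
  auto rebuilt = std::make_shared<gcs::internal::RetryClient>(
      raw, gcs::LimitedTimeRetryPolicy(std::chrono::milliseconds(100)));
  return gcs::Client(rebuilt, gcs::Client::NoDecorations{});
}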
Related
I am trying to add a new method to the ServiceEventSource class in a Service Fabric service (web app, Web API, or stateless service) to log warnings and exceptions separately from information-type messages.
When I add the new method to the ServiceEventSource class, it does not output any message and this.IsEnabled() returns false. Out of the box, and if I remove the newly added method, ServiceEventSource outputs messages as expected and this.IsEnabled() returns true.
I am following the "Using EventSource generically" sample.
For example, just adding the following code will cause ServiceEventSource to stop logging:
private const int ErrorEventId = 7;

[Event(ErrorEventId, Level = EventLevel.Error, Message = "Error: {0} - {1}")]
public void Error(string error, string msg)
{
    WriteEvent(ErrorEventId, error, msg);
}
I've looked everywhere and can't find any reference to this unexpected behaviour.
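One hedged way to see why an EventSource like the one above has silently disabled itself (assuming .NET Framework 4.6 or later, or the EventSource NuGet package, where the ConstructionException property is available): when validation of the event methods fails, the source reports IsEnabled() == false and stores the reason.
// Diagnostic sketch: ServiceEventSource.Current is the singleton from the
// standard Service Fabric template; ConstructionException holds the
// validation error (if any) that silently disabled the EventSource.
var eventSource = ServiceEventSource.Current;
if (!eventSource.IsEnabled() && eventSource.ConstructionException != null)
{
    Console.WriteLine(eventSource.ConstructionException);
}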
I am trying to list blobs in my Azure storage account using the cpprest SDK, and this is my code:
#include <cpprest/http_client.h>
#include <iostream>
#include <sstream>

using namespace web::http;
using namespace web::http::client;

pplx::task<void> HTTPRequestCustomHeadersAsync()
{
    http_client client(L"https://<account-name>.blob.core.windows.net/?comp=list");

    // Manually build up an HTTP request with headers and a request URI.
    http_request request(methods::GET);
    request.headers().add(L"Authorization", L"Sharedkey <account-name>:<account-key>");
    request.headers().add(L"x-ms-date", L"Thu, 08 Feb 2018 20:31:55 GMT");
    request.headers().add(L"x-ms-version", L"2017-07-29");

    return client.request(request).then([](http_response response)
    {
        // Print the status code.
        std::wostringstream ss;
        ss << L"Server returned status code " << response.status_code() << L"." << std::endl;
        std::wcout << ss.str();
    });
    /* Sample output:
    Server returned status code 200.
    */
}
I keep getting status code 403 back. Can someone please let me know if I am doing this right?
Please note that you're not using the SDK in the correct way: what the code above does is invoke the Azure Storage REST API directly (and incorrectly), without going through the Azure Storage SDK at all.
The account key in the Authorization header of an Azure Storage REST API request is not plain text. Instead, it is a signature calculated through the fairly involved steps described in Authentication for the Azure Storage Services, for a number of security reasons. Fortunately, all of that logic has been wrapped by the Azure Storage C++ SDK (which is built on top of cpprest-sdk), so you don't need to understand how it works internally:
#include <was/storage_account.h>
#include <was/blob.h>

// Define the connection string with your values.
const utility::string_t storage_connection_string(U("DefaultEndpointsProtocol=https;AccountName=your_storage_account;AccountKey=your_storage_account_key"));

// Retrieve the storage account from the connection string.
azure::storage::cloud_storage_account storage_account =
    azure::storage::cloud_storage_account::parse(storage_connection_string);

// Create the blob client.
azure::storage::cloud_blob_client blob_client = storage_account.create_cloud_blob_client();
I'd suggest reading How to use Blob Storage from C++ first, before going further with the SDK.
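Since the original goal was to list blobs, here is a minimal sketch in the style of that guide, continuing from the blob_client created above; the container name my-sample-container is a placeholder:
// Get a reference to a previously created container (placeholder name).
azure::storage::cloud_blob_container container =
    blob_client.get_container_reference(U("my-sample-container"));

// Enumerate the blobs (and virtual directories) in the container.
azure::storage::list_blob_item_iterator end_of_results;
for (auto it = container.list_blobs(); it != end_of_results; ++it)
{
    if (it->is_blob())
    {
        std::wcout << U("Blob: ") << it->as_blob().uri().primary_uri().to_string() << std::endl;
    }
    else
    {
        std::wcout << U("Directory: ") << it->as_directory().uri().primary_uri().to_string() << std::endl;
    }
}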
I have a ServiceWorker registered on my page and want to pass some data to it so it can be stored in an IndexedDB and used later for network requests (it's an access token).
Is the correct thing just to use network requests and catch them on the SW side using fetch, or is there something more clever?
Note for future readers wondering the same thing:
Setting properties on the SW registration object does not work. For example, setting self.registration.foo to a function within the service worker and then doing the following in the page:
navigator.serviceWorker.getRegistration().then(function(reg) { reg.foo(); })
results in TypeError: reg.foo is not a function. I presume this has to do with the lifecycle of a ServiceWorker, meaning you can't modify it and expect those modifications to be accessible later, so any interface with a SW likely has to be postMessage-style. Perhaps just using fetch is the best way to go...?
So it turns out that you can't actually call a method inside a SW from your app (due to lifecycle issues), so you have to use the postMessage API to pass serialized JSON messages around (no passing callbacks, etc.).
You can send a message to the controlling SW with the following app code:
navigator.serviceWorker.controller.postMessage({'hello': 'world'})
Combined with the following in the SW code:
self.addEventListener('message', function (evt) {
    console.log('postMessage received', evt.data);
})
Which results in the following in my SW's console:
postMessage received Object {hello: "world"}
So by passing in a message (a JS object) that indicates the function and arguments I want to call, my event listener can receive it and call the right function in the SW. To return a result to the app code, you also need to pass a MessageChannel port in to the SW and then respond via postMessage. For example, in the app you'd create a MessageChannel and send the data along with one of its ports:
var messageChannel = new MessageChannel();
messageChannel.port1.onmessage = function(event) {
    console.log(event.data);
};
// This sends the message data as well as transferring messageChannel.port2 to the service worker.
// The service worker can then use the transferred port to reply via postMessage(), which
// will in turn trigger the onmessage handler on messageChannel.port1.
// See https://html.spec.whatwg.org/multipage/workers.html#dom-worker-postmessage
navigator.serviceWorker.controller.postMessage(message, [messageChannel.port2]);
and then you can respond via it in your Service Worker within the message handler:
evt.ports[0].postMessage({'hello': 'world'});
To pass data to your service worker, the approach mentioned above is a good one. But in case someone is still having a hard time implementing it, there is another hack for it:
1. Append your data as query parameters to the service worker URL when you register it (e.g., from sw.js to sw.js?a=x&b=y&c=z).
2. In the service worker, read that data back from self.location.search.
Note that this is only beneficial if the data you pass does not change very often for a particular client; otherwise the service worker URL keeps changing for that client, and every time the client reloads or revisits, a new service worker is installed.
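A minimal sketch of that hack; the parameter name token and its value are placeholders:
// Page: encode the data into the service worker URL at registration time.
navigator.serviceWorker.register('/sw.js?token=abc123');

// sw.js: read the data back out of the script's own URL.
const params = new URLSearchParams(self.location.search);
const token = params.get('token'); // 'abc123'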
I need to communicate between multiple clients. When I run the program in multiple terminals, I get the same identity, so I let the ROUTER socket set the UUID automatically. But what I found is that I cannot use that identity to store at the server for routing between multiple clients.
How would I handle multiple client IDs?
I am trying to build an asynchronous chat server. My approach is that each client, with a DEALER socket, connects to the server (ROUTER-type socket). The server then extracts the client IDs (set manually), reads the message, and routes accordingly.
#include "zhelpers.hpp"
#include <iostream>
#include <string>
int main(void) {
zmq::context_t context(1);
zmq::socket_t backend (context, ZMQ_DEALER);
backend.setsockopt( ZMQ_IDENTITY, "mal2", 4);
backend.connect("tcp://localhost:5559");
std::string input;
std::cout <<"you are joinning" << std::endl;
while(1){
getline (std::cin, input);
s_send (backend, input);
zmq::pollitem_t items [] = {
{ backend, 0, ZMQ_POLLIN, 0 }
};
zmq::poll (items, 1, -1);
if (items [0].revents & ZMQ_POLLIN) {
std::string identity = s_recv (backend);
std::string request = s_recv (backend);//receive reply back from router which might be other client
std::cout<<"identity="<<identity<<"reques="<<request<<std::endl;
} //ending if
}//ending while
return 0;
}
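The question shows only the client side. For context, here is a minimal server-side sketch of the ROUTER pattern described above; the port, frame layout, and fan-out policy are assumptions (not the asker's actual code), and the s_recv/s_send/s_sendmore helpers come from the same zhelpers.hpp the client uses.
// Server-side sketch: a ROUTER socket prepends each incoming message with
// the sender's identity frame; storing those frames is what lets the
// server route between clients.
#include "zhelpers.hpp"
#include <set>
#include <string>

int main(void) {
    zmq::context_t context(1);
    zmq::socket_t frontend(context, ZMQ_ROUTER);
    frontend.bind("tcp://*:5559");

    std::set<std::string> known_clients;
    while (1) {
        // ROUTER delivers [identity][payload] for every client message.
        std::string identity = s_recv(frontend);
        std::string payload = s_recv(frontend);
        known_clients.insert(identity);

        // Route to every other known client. The first frame sent is the
        // destination identity (consumed by the ROUTER); the remaining
        // frames are what the peer's DEALER receives: [sender][payload],
        // matching the two s_recv calls in the client above.
        for (const std::string& peer : known_clients) {
            if (peer == identity) continue;
            s_sendmore(frontend, peer);
            s_sendmore(frontend, identity);
            s_send(frontend, payload);
        }
    }
    return 0;
}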
Memento
Decades of practice have shown that it is better to design a system based on all the collected requirements, i.e. before coding, not vice versa.
This way your analysis will list all the requirements before you decide on the messaging and signalling layer, avoiding situations like "And I also want to add this and that ...".
UUID
If you decide to create disposable or persistent UUIDs on each client side, you face the challenge of ensuring both temporal uniqueness and randomisation.
If you decide to assign UUIDs from the ChatSERVER side, you need an additional signalling layer besides the chat transport.
"How would I handle multiple client IDs?" is not answerable without the other requirements you are sure to have (or will sooner or later realise you face, on the fly).
Time is money
As said above, good projects start with proper and thorough requirements engineering & validation. The product coding then takes a minimal amount of time, compared to "Aha-based" trouble-shooting-responsive engineering.
I have a C# service that runs continuously with user credentials (i.e. not as LocalSystem; I can't change this, though I want to). For the most part the service seems to run OK, but every so often it bombs out and restarts for no apparent reason (the service manager is set to restart the service on crash).
I am doing substantial event logging, and I have a layered approach to exception handling that I believe makes at least some sort of sense:
Essentially I have top-level handlers for generic exceptions, null exceptions, and startup exceptions.
Then I have various handlers at the "command level" (i.e. the specific actions that the service runs).
Finally, I handle a few exceptions at the class level.
I have been looking at whether any resources aren't being properly released, and I am starting to suspect my mailing code (sending email). I noticed that I was not calling Dispose on the MailMessage object, and I have now rewritten the SendMail code as illustrated below.
The basic questions are:
Will this code properly release all resources used to send mails?
I don't see a way to dispose of the SmtpClient object?
(For the record: I am not using an object initializer, to make the sample easier to read.)
private static void SendMail(string subject, string html)
{
    try
    {
        using (var m = new MailMessage())
        {
            m.From = new MailAddress("service@company.com");
            m.To.Add("user@company.com");
            m.Priority = MailPriority.Normal;
            m.IsBodyHtml = true;
            m.Subject = subject;
            m.Body = html;
            var smtp = new SmtpClient("mailhost");
            smtp.Send(m);
        }
    }
    catch (Exception ex)
    {
        throw new MyMailException("Mail error.", ex);
    }
}
I know this question is pre-.NET 4, but version 4 now supports a Dispose method that properly sends a QUIT to the SMTP server. See the MSDN reference and a newer Stack Overflow question.
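For reference, a minimal .NET 4+ variant of the SendMail method above that disposes both objects; the host name and addresses are the question's own placeholders:
private static void SendMail(string subject, string html)
{
    try
    {
        using (var m = new MailMessage())
        using (var smtp = new SmtpClient("mailhost"))
        {
            m.From = new MailAddress("service@company.com");
            m.To.Add("user@company.com");
            m.Priority = MailPriority.Normal;
            m.IsBodyHtml = true;
            m.Subject = subject;
            m.Body = html;
            // Dispose on SmtpClient (available from .NET 4) sends QUIT
            // and closes the connection cleanly.
            smtp.Send(m);
        }
    }
    catch (Exception ex)
    {
        throw new MyMailException("Mail error.", ex);
    }
}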
There are documented issues with the SmtpClient class. I recommend buying a third party control since they aren't too expensive. Chilkat makes a decent one.