Backbone.js tries to fetch model after creating - REST

I'm working on a contact form which is sent via Backbone.js:
r = new ContactModel(); // a simple model
r.save(data)
After saving the model on the server, Backbone tries to fetch it again via a GET request, which I've forbidden.
What can I do to override this behavior?

It turned out to be backbone-tastypie's fault.
I fixed it by restoring the old Backbone.sync:
Backbone.sync = Backbone.oldSync;

Related

Not receiving latest data using Npgsql LISTEN/NOTIFY

I'm using a .NET Core app with a PostgreSQL database (with Npgsql) combined with SignalR to receive real-time data and the latest data entries. However, I am not receiving the latest entry, and sometimes the Clients.All.SendAsync method sends more than one entry to the client. Here is my code:
Hub method that sends new data to client:
public async Task SendForexAsync(string name)
{
    var product = GetForex(name);
    await Clients.All.SendAsync("CurrentData", product);
    using (var conn = new NpgsqlConnection(ApplicationDbContext.GetConnectionString()))
    {
        conn.Open();
        var cmd = new NpgsqlCommand("LISTEN new_forex", conn).ExecuteNonQuery();
        conn.Notification += async (o, e) =>
        {
            var newProduct = GetForex(name);
            await Clients.All.SendAsync("NewData", newProduct);
        };
        while (true)
        {
            await conn.WaitAsync();
        }
    }
}
Console app that periodically polls for new data from an API:
var addedStocksDJI = FetchNewStocks("DJI");
if (addedStocksAAPL > 0 || addedStocksDJI > 0)
{
    using (var conn = new NpgsqlConnection(ApplicationDbContext.GetConnectionString()))
    {
        conn.Open();
        var cmd = new NpgsqlCommand("NOTIFY new_stocks", conn).ExecuteNonQuery();
    }
}
The rest of the app's code is most definitely correct, because I was receiving new and correct data before I tried implementing the LISTEN/NOTIFY feature. But now I get one (or more) entries of newProduct on my client, and it is the "old" product; that is, the query does not return the latest entries, but only the old ones, via SignalR. When I refresh the page manually, though, the new data is displayed correctly.
I believe it has something to do with a single connection being kept open, so I constantly receive only the "old" set of data. But even if that is the case, I am unable to figure out why I sometimes get more than one packet of data, even though I am only trying to send one and I am calling NOTIFY only once.
I figured it out. Hopefully this will help someone else who gets stuck with this in the future!
The issue was that I was getting my dbContext via .NET Core's dependency injection in my Hub class, which created the context only once for that class, and therefore once per page or WebSocket connection. That is why I was unable to get the latest data, I assume: the dbContext was "old" and unaware of the changes.
I fixed the problem by creating a dbContext in a using block inside my methods: twice in my SendForexAsync method (once per call of the GetForex function), as well as in the GetForex function itself. That way a dbContext is created and disposed of immediately, so the next time I poll the database for new data via the GetForex function (when I get a notification from the database due to the NOTIFY from the console app), a new dbContext instance is created, which can see the new data.
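For reference, a minimal sketch of that per-call pattern. Only GetConnectionString comes from the post; ApplicationDbContext's constructor, the Forex entity and the Name/Timestamp columns are assumed names used for illustration:
// Sketch only: assumes an EF Core ApplicationDbContext exposing a Forex DbSet.
// Requires Microsoft.EntityFrameworkCore, Npgsql.EntityFrameworkCore.PostgreSQL and System.Linq.
private Forex GetForex(string name)
{
    var options = new DbContextOptionsBuilder<ApplicationDbContext>()
        .UseNpgsql(ApplicationDbContext.GetConnectionString())
        .Options;

    using (var db = new ApplicationDbContext(options))
    {
        // A fresh context has no stale tracked entities, so this query always
        // sees rows inserted after the hub/connection was created.
        return db.Forex
            .Where(f => f.Name == name)
            .OrderByDescending(f => f.Timestamp)
            .FirstOrDefault();
    }
}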

Service Fabric ServicePartitionResolver ResolveAsync

I am currently using the ServicePartitionResolver to get the http endpoint of another application within my cluster.
var resolver = ServicePartitionResolver.GetDefault();
var partition = await resolver.ResolveAsync(serviceUri, partitionKey ?? ServicePartitionKey.Singleton, CancellationToken.None);
var endpoints = JObject.Parse(partition.GetEndpoint().Address)["Endpoints"];
return endpoints[endpointName].ToString().TrimEnd('/');
This works as expected, however if I redeploy my target application and its port changes on my local dev box, the source application still returns the old endpoint (which is now invalid). Is there a cache somewhere that I can clear? Or is this a bug?
Yes, they are cached. If you know that the partition is no longer valid, or if you receive an error, you can call resolver.ResolveAsync() using the overload that takes the earlier ResolvedServicePartition (previousRsp), which triggers a refresh.
This api-overload is used in cases where the client knows that the
resolved service partition that it has is no longer valid.
See this article too.
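For example, a rough sketch building on the code in the question (httpClient, endpointName and the use of HttpRequestException are placeholders, not part of the original code):
var resolver = ServicePartitionResolver.GetDefault();
var partition = await resolver.ResolveAsync(
    serviceUri, partitionKey ?? ServicePartitionKey.Singleton, CancellationToken.None);
var address = JObject.Parse(partition.GetEndpoint().Address)["Endpoints"][endpointName].ToString();

try
{
    await httpClient.GetStringAsync(address);   // hypothetical call to the resolved endpoint
}
catch (HttpRequestException)
{
    // The cached endpoint is stale: pass the previously resolved partition back
    // in; this overload bypasses the cache and forces a fresh lookup.
    partition = await resolver.ResolveAsync(partition, CancellationToken.None);
    address = JObject.Parse(partition.GetEndpoint().Address)["Endpoints"][endpointName].ToString();
    await httpClient.GetStringAsync(address);
}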
Yes, they are cached. There are two ways to overcome this.
The simplest code change is to replace var resolver = ServicePartitionResolver.GetDefault(); with var resolver = new ServicePartitionResolver();. This forces the service to create a new ServicePartitionResolver object every time, whereas GetDefault() returns the cached object.
[Recommended] The right way of handling this is to implement a custom CommunicationClientFactory that derives from CommunicationClientFactoryBase, then initialize a ServicePartitionClient and call InvokeWithRetryAsync. It is documented clearly in the Communication clients and factories section of the Service Communication article.
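Roughly, that pattern has this shape (a sketch based on the docs article, not the authoritative version: the plain HttpClient wrapper, the "api/values" path, and the assumption that the endpoint string is directly usable as a URI, rather than the JSON blob parsed in the question, are all simplifications):
using System;
using System.Fabric;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;
using Microsoft.ServiceFabric.Services.Communication.Client;

class HttpCommunicationClient : ICommunicationClient
{
    public HttpCommunicationClient(HttpClient http) { Http = http; }
    public HttpClient Http { get; }

    // Required by ICommunicationClient; populated by the framework after resolution.
    public ResolvedServicePartition ResolvedServicePartition { get; set; }
    public string ListenerName { get; set; }
    public ResolvedServiceEndpoint Endpoint { get; set; }
}

class HttpCommunicationClientFactory : CommunicationClientFactoryBase<HttpCommunicationClient>
{
    protected override Task<HttpCommunicationClient> CreateClientAsync(string endpoint, CancellationToken cancellationToken)
        => Task.FromResult(new HttpCommunicationClient(new HttpClient { BaseAddress = new Uri(endpoint) }));

    protected override bool ValidateClient(HttpCommunicationClient client) => true;

    protected override bool ValidateClient(string endpoint, HttpCommunicationClient client)
        => client.Http.BaseAddress == new Uri(endpoint);

    protected override void AbortClient(HttpCommunicationClient client)
        => client.Http.Dispose();
}

// Usage: InvokeWithRetryAsync transparently re-resolves the partition when the
// cached endpoint is no longer reachable, which is exactly the scenario above.
var partitionClient = new ServicePartitionClient<HttpCommunicationClient>(
    new HttpCommunicationClientFactory(), serviceUri, partitionKey);
var response = await partitionClient.InvokeWithRetryAsync(
    c => c.Http.GetStringAsync("api/values"), CancellationToken.None);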

Chromecast send metadata to receiver

I need help: in my custom receiver Chromecast app I cannot fetch the media metadata with which the app was initialized.
I loaded the media like this, after a successful session request:
var mediaInfo = new chrome.cast.media.MediaInfo('https://wse-wowaza01.playne.tv:443/webdrmorigin/1042a2W.smil/manifest.mpd');
mediaInfo.customData = {
    "userId": "mislav",
    "sessionId": "39BE906248F9F5C4A93C7",
    "merchant": "playnr"
};
mediaInfo.metadata = new chrome.cast.media.MovieMediaMetadata();
mediaInfo.metadata.metadataType = chrome.cast.media.MetadataType.MOVIE;
var img = new chrome.cast.Image('https://ottservice.playnr.tv/OTTranscoderHttps/get?url=http%asd9.168%2f20664_5b8df65c-67ff-4f13-b90d-b28c37f2310c.jpg&w=224&h=126');
mediaInfo.metadata.images = [img];
mediaInfo.contentType = 'video/mp4';
var request = new chrome.cast.media.LoadRequest(mediaInfo);
//this.playerState = this.PLAYER_STATE.LOADING;
this.session.loadMedia(request,
    this.onLoadMediaSuccess.bind(this, 'loadMedia'),
    this.onLoadMediaFailure.bind(this)
);
How can I access that metadata in the receiver app? I tried
cast.receiver.MediaManager.getInstance()
but no luck. Are there any steps I need to code on the receiver beforehand to make the data available?
Thank you for the help, it pointed me in the right direction.
Got it working; this was the problem: I am using a 3rd-party DRM JavaScript plugin for content protection. It encapsulates the cast_receiver and had already instantiated a MediaManager and a ReceiverManager, which I hadn't noticed. I then instantiated a new MediaManager, but it wasn't bound to any data. Pause/play events were all handled by the plugin's MediaManager instance, so my instance was useless. As soon as I referenced the already instantiated MediaManager, the data was there and its events worked. Same with the ReceiverManager: I started an instance that was already started, and that caused problems. So the conclusion is that I don't need to instantiate either one; the DRM plugin takes care of everything, and I just need to override its event handlers.
It depends on where on the receiver you want to access that info. For example, in a number of callbacks you have an "event" of type cast.receiver.MediaManager.Event, from which you can get, for example, a cast.receiver.MediaManager.LoadRequestData object via event.data. That data object then has your customData (data.customData).

How to grab claims serialized in an HTTP request in code using WIF?

ADFS 2.0, WIF (WS-Federation), ASP.NET: there are no HTTP modules or any IdentityFoundation configuration defined in web.config (as most WIF SDK samples show); instead everything is done manually in code using the WSFederationAuthenticationModule, ServiceConfiguration and SignInRequestMessage classes. I do the HTTP redirect to ADFS in code and it seems to work fine, returning claims and redirecting the user back to my web site with serialized claims in the HTTP request. So the question is: how do I parse this request using WIF classes, properties and methods and extract the claim values from there? Thanks
Just in case, I want to share my experience; it might help somebody in the future. The solution I finally came to looks like this:
var message = SignInResponseMessage.CreateFromFormPost(Request) as SignInResponseMessage;
var rstr = new WSFederationSerializer().CreateResponse(
    message,
    new WSTrustSerializationContext(SecurityTokenHandlerCollectionManager.CreateDefaultSecurityTokenHandlerCollectionManager()));
var issuers = new ConfigurationBasedIssuerNameRegistry();
issuers.AddTrustedIssuer("630AF999EA69AF4917362D30C9EEA00C22D9A343", @"http://MyADFSServer/adfs/services/trust");
var tokenHandler = new Saml11SecurityTokenHandler { CertificateValidator = X509CertificateValidator.None };
var config = new SecurityTokenHandlerConfiguration
{
    CertificateValidator = X509CertificateValidator.None,
    IssuerNameRegistry = issuers
};
config.AudienceRestriction.AllowedAudienceUris.Add(new Uri("MyUri"));
tokenHandler.Configuration = config;

SecurityToken token;
using (var reader = XmlReader.Create(new StringReader(rstr.RequestedSecurityToken.SecurityTokenXml.OuterXml)))
{
    token = tokenHandler.ReadToken(reader);
}
ClaimsIdentityCollection claimsIdentity = tokenHandler.ValidateToken(token);
I found some similar code that uses SecurityTokenServiceConfiguration (it contains the token handlers) instead of Saml11SecurityTokenHandler to read and parse the token; however, it did not work for me because of a certificate validation failure. Setting SecurityTokenServiceConfiguration.CertificateValidator to X509CertificateValidator.None did not help, because the security token handler classes use their own handler configuration and ignore the STS configuration values, at least if you specify the configuration parameters through code like I did; it works fine when the configuration is defined in web.config.
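For completeness, once ValidateToken has returned the identities, the individual claim values can be read along these lines (the claim type URI below is just the standard e-mail address claim, used as an example):
// claimsIdentity is the ClaimsIdentityCollection returned by ValidateToken above.
// Requires System.Linq and Microsoft.IdentityModel.Claims.
IClaimsIdentity identity = claimsIdentity[0];
string email = identity.Claims
    .Where(c => c.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")
    .Select(c => c.Value)
    .FirstOrDefault();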

ASP.NET MVC Azure "Error accessing the data store!"

I've started using the AspProviders code to store my session data in table storage.
I'm sporadically getting the following error:
Description: Exception of type 'System.Web.HttpException' was thrown. INNER_EXCEPTION:Error accessing the data store! INNER_EXCEPTION:An error occurred while processing this request. INNER_EXCEPTION: ConditionNotMet The condition specified using HTTP conditional header(s) is not met. RequestId:0c4239cc-41fb-42c5-98c5-7e9cc22096af Time:2010-10-15T04:28:07.0726801Z
StackTrace:
System.Web.SessionState.SessionStateModule.EndAcquireState(IAsyncResult ar)
System.Web.HttpApplication.AsyncEventExecutionStep.OnAsyncEventCompletion(IAsyncResult ar) INNER_EXCEPTION:
Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider.ReleaseItemExclusive(HttpContext context, String id, Object lockId) in \Azure\AspProviders\TableStorageSessionStateProvider.cs:line 484
System.Web.SessionState.SessionStateModule.GetSessionStateItem()
System.Web.SessionState.SessionStateModule.PollLockedSessionCallback(Object state) INNER_EXCEPTION:
Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteImplWithRetry[T](Func`2 impl, RetryPolicy policy)
Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider.ReleaseItemExclusive(TableServiceContext svc, SessionRow session, Object lockId) in \Azure\AspProviders\TableStorageSessionStateProvider.cs:line 603
Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider.ReleaseItemExclusive(HttpContext context, String id, Object lockId) in \Azure\AspProviders\TableStorageSessionStateProvider.cs:line 480 INNER_EXCEPTION:
System.Data.Services.Client.DataServiceContext.SaveResult.d__1e.MoveNext()
Anyone run into this? The only useful information I've found is this, which I'm hesitant to do:
If you want to bypass the validation, you can open TableStorageSessionStateProvider.cs, find ReleaseItemExclusive, and modify the code from:
svc.UpdateObject(session);
to:
svc.Detach(session);
svc.AttachTo("Sessions", session, "*");
svc.UpdateObject(session);
from here
Thanks!
So I decided to change this:
svc.UpdateObject(session);
svc.SaveChangesWithRetries();
to this:
try
{
    svc.UpdateObject(session);
    svc.SaveChangesWithRetries();
}
catch
{
    svc.Detach(session);
    svc.AttachTo("Sessions", session, "*");
    svc.UpdateObject(session);
    svc.SaveChangesWithRetries();
}
So, I'll see how that works...
I've encountered this problem as well, and after some investigation it seems to happen more often when you have more than one instance and you make calls in rapid succession in the same session (e.g. an autocomplete box making AJAX calls on each key press).
This occurs because when you try to access the session data, the web server first takes out a lock on that session, and when the request is complete it releases the lock. With the table storage provider, it updates this lock status by updating a field in the table. What I think is happening is that Instance1 loads the session row, then Instance2 loads the session row, Instance1 saves the updated lock status, and when Instance2 attempts to save the lock status it gets an error because the object isn't in the same state as when it was loaded (the ETag no longer matches).
This is why the fix that you found stops the error from occurring: by specifying "*" in the AttachTo, when Instance2 attempts to save the lock it turns off the ETag check (and overwrites the changes made by Instance1).
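To illustrate the race described above (a rough fragment, not the provider's actual code: assume ctx1 and ctx2 are two TableServiceContext instances, each already tracking the same SessionRow as session1 and session2):
// Both instances loaded the row while its ETag was A.
ctx1.UpdateObject(session1);
ctx1.SaveChangesWithRetries();            // sends If-Match: A, succeeds, row's ETag becomes B

ctx2.UpdateObject(session2);
ctx2.SaveChangesWithRetries();            // still sends If-Match: A -> 412 ConditionNotMet

// The quoted workaround re-attaches with "*", i.e. If-Match: *, so the ETag
// check is skipped and Instance1's update is silently overwritten:
ctx2.Detach(session2);
ctx2.AttachTo("Sessions", session2, "*");
ctx2.UpdateObject(session2);
ctx2.SaveChangesWithRetries();            // succeeds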
In our situation we have altered the provider so that we can turn off session state for certain paths (the AJAX call that was giving us our problems didn't need access to session data, and neither did the loading of images), which may be an option for you depending on what is causing your problem.
Unfortunately the TableStorageSessionStateProvider is part of the sample projects and so isn't (as far as I'm aware, but I'll happily be told otherwise) officially supported by Microsoft. It has other issues too, like the fact that it doesn't clean up its session data once a session expires, so you will end up with lots of junk in the session table and blob container that you'll have to clean up some other way.