Working with Google Cloud Print

I have begun digging around in the Google Cloud Print project in hopes of creating a custom application for my network. I have a Windows print server running with a printer queue I wish to submit jobs to. I set up Google Cloud Print with the Chrome browser and was able to submit and print jobs just fine. However, my end goal is a little more complicated.
I need to customize access control from the server-side client endpoint once a job reaches my network. That is, once a job is submitted, I need to be able to check the username of the job's owner and process it accordingly. From the looks of it, the /fetch interface does not report the original owner of the job, just the owner of the queue it ends up on. For example: User A has the Google Cloud Printer linked to their account and has shared it with User B. User B submits a job to the shared queue. When I run /fetch on the shared printerID, the user reported is User A.
Has anyone else dabbled with this?
Thanks

Take a look at the ownerId of the job.
The /fetch call does in fact return a user field containing the owner of the printer (User A), but each job returned contains an ownerId field whose value is the user that submitted the print job (User B).
Hope that helps.
Following is a partial response from a /fetch call, including ownerId of the job. Keep in mind that this could be one of many jobs that are returned in the jobs array:
...
updateTime: "1403628993840",
status: "QUEUED",
ownerId: "rpreeves#gmail.com",
rasterUrl: "https://www.google.com/cloudprint/download?id=5ca7b1e4-c533-c42b-7d2b-efb862c4215a&forcepwg=1",
ticketUrl: "https://www.google.com/cloudprint/ticket?format=ppd&output=json&jobid=5ca7b1e4-c633-c42b-782b-efb862c4215a",
printerid: "f33c6ff8-fc25-7075-249b-ab65c3e2354e",
...
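If it helps, here is a minimal sketch of how your server-side client could branch on the submitter rather than the printer owner. It assumes an HttpClient that already carries a valid Cloud Print OAuth token; the access-control step is just a placeholder for your own rules.

    // Minimal sketch: call /fetch for the shared printer and branch on each job's ownerId
    // (the submitter, e.g. User B) rather than the printer owner in the top-level "user" field.
    // Assumes "client" is an HttpClient with a valid Cloud Print OAuth token attached.
    using System;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    static class CloudPrintQueue
    {
        public static async Task ProcessAsync(HttpClient client, string printerId)
        {
            string json = await client.GetStringAsync(
                "https://www.google.com/cloudprint/fetch?printerid=" + Uri.EscapeDataString(printerId));

            using JsonDocument doc = JsonDocument.Parse(json);
            foreach (JsonElement job in doc.RootElement.GetProperty("jobs").EnumerateArray())
            {
                string submitter = job.GetProperty("ownerId").GetString();
                // Apply your own access-control rules here before releasing the job
                // to the Windows print queue.
                Console.WriteLine($"Job submitted by {submitter}");
            }
        }
    }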

Related

PATCH when the resource location is not known

I am building a REST API and want to implement PATCH for a job manager monitoring tool.
All my clients know the job manager's job ID, which is not unique. The job manager resets the job IDs from time to time (starting from 1 again); this happens at random intervals (months or days) for different reasons.
I want to let the job manager send me updates about a job, but I don't want it to first do a GET to find out the job's unique ID (let's say DBid) and then do PATCH /jobs/:DBid. This is for performance and slow-network reasons: having to wait for the GET could block the job manager, which is critical.
Selecting the latest job with that job manager ID will return the right job, but how do I model this in a REST API?
You need the job manager's id to be kept in a 'pot' on the server. I've worked on this type of thing where a client needs to know the internal id of a resource and that id does change, but very infrequently. The solution there was a cache service; for this, it sounds like you need a 'pot' on the server where the id is stored.
The endpoint handling the PATCH knows where the pot is on the server, so it can load the id from it. A simple flat file would do.
If the manager needs to change the id, whatever process handles that change also updates what's in the pot, so the PATCH endpoint always gets the correct id.
You can add synchronisation for accessing the pot in case a PATCH comes in while the manager is updating the id.
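A minimal sketch of that idea, assuming a flat file as the pot (the file path and type names here are hypothetical):

    // Hypothetical sketch of the "pot": the current internal id lives in a flat file on the
    // server, guarded by a lock so a PATCH can read it safely while the process that handles
    // a job-manager id reset rewrites it.
    using System.IO;

    public static class JobIdPot
    {
        private static readonly object Gate = new object();
        private const string PotPath = "current-job-id.txt";   // hypothetical location of the pot

        // Called by the PATCH endpoint: resolve the request to the internal id without a GET.
        public static string Read()
        {
            lock (Gate) { return File.ReadAllText(PotPath).Trim(); }
        }

        // Called by whatever process handles the id change.
        public static void Write(string internalId)
        {
            lock (Gate) { File.WriteAllText(PotPath, internalId); }
        }
    }

The PATCH handler (for example a route like PATCH /jobs/current) would then call JobIdPot.Read() and apply the patch to that internal id directly.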

How to get job id of existing job in XTRF smart project

I would like to update the status of existing jobs in XTRF smart projects using the XTRF Home Portal API. The API call requires a job ID, but I don't know where to find this ID.
Endpoint:
.../v2/jobs/{jobId}/status
Following the solution in a similar post, I have defined a view with a list of jobs that require updating. However, there seems to be no column that holds the {jobId} required for the API. There is a column called "Internal ID" that contains a 4-digit number, but when I use that number in the API call, there's an error:
"Invalid Job ID of a Smart Job. Use new form of Job ID for Smart Jobs (e.g. 2QROVSCO3ZG3NM6KAZZBXH5HMI)."
So apparently, there is a new form for the job ID. Is there a specific column for the view that I should use, or is there another way to retrieve this job ID?
The job ID can be found in the URL (after clicking on a job):
https://[your xtrf url]/xtrf/faces/projectAssistant/projects/project.seam?assistedProjectId=5GB3QLPO2QROVSCOR55O3WJVU2Y#/project?jobs=DZAGF2QROVSCOVBJPG2UVBCJZ4II
The Job ID is DZAGF2QROVSCOVBJPG2UVBCJZ4II
Another way is to retrieve the jobs via the API itself; this can be done for a quote, but also for a project:
Endpoint: /v2/quotes/{quoteId}/jobs
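A rough sketch of that approach is below. The relative paths follow the endpoints above, but the auth header, the PUT verb for the status update, and the field names in the response and payload are assumptions; check the XTRF Home Portal API documentation for the exact contract.

    // Rough sketch: list the jobs of a quote, take a smart job id from the response, and
    // update that job's status. Property names and the PUT verb are assumptions.
    using System.Net.Http;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    static class XtrfJobs
    {
        public static async Task UpdateFirstJobStatusAsync(HttpClient client, string quoteId, string newStatus)
        {
            // client.BaseAddress is assumed to point at the Home Portal API root, with the
            // access token already set as a default request header.
            string jobsJson = await client.GetStringAsync($"v2/quotes/{quoteId}/jobs");

            using JsonDocument doc = JsonDocument.Parse(jobsJson);
            // Each returned job should carry the new-style id (e.g. "2QROVSCO...") rather than
            // the 4-digit "Internal ID" shown in the view; the property name is an assumption.
            string jobId = doc.RootElement[0].GetProperty("id").GetString();

            var body = new StringContent(JsonSerializer.Serialize(new { status = newStatus }),
                                         Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PutAsync($"v2/jobs/{jobId}/status", body);
            response.EnsureSuccessStatusCode();
        }
    }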

Batch status - 'validating' after failed creation

Hi, I'm trying to work with shipping batches. After I create a batch like this:
{"default_carrier_account":"9348***********50","default_servicelevel_token":"usps_priority","metadata":"test","label_filetype":"PDF_4x6","batch_shipments":[{"carrier_account":"93********************","servicelevel_token":"usps_priority","shipment":"c8c411c2ad8b497eb583decf7c3c614d","metadata":1},{"carrier_account":"9348ce6eecf**********ab850","servicelevel_token":"usps_priority","shipment":"768ae43826b04040b32490a6f069fa4f","metadata":2}]}
I get a notification like this:
batch 0f0b69ae42bc475ab3c1421edddeb4fc creation failed
After this I try to make an API request to get the batch data (status, messages, etc.). I did a POST request to http://api.goshippo.com/batches/0f0b69ae42bc475ab3c1421edddeb4fc?page=1
and get this response:
{
  "object_id": "0f0b69ae42bc475ab3c1421edddeb4fc",
  "object_owner": "info@skumatrix.com",
  "status": "VALIDATING",
  "object_created": "2017-04-16T16:35:24.925Z",
  "object_updated": "2017-04-16T16:35:27.143Z",
  "metadata": "test",
  "default_carrier_account": "9***************b850",
  "default_servicelevel_token": "usps_priority",
  "label_filetype": "PDF_4x6",
  "batch_shipments": {
    "count": 0,
    "next": null,
    "previous": null,
    "results": {
    }
  },
  "object_results": {
    "purchase_succeeded": 0,
    "purchase_failed": 0,
    "creation_failed": 0,
    "creation_succeeded": 0
  },
  "label_url": {
  }
}
What I don't understand is: why is the status still VALIDATING, and why are there no error messages?
For starters, the default status of a Batch object in Shippo is VALIDATING, which is why it persists in that state, although it might be a little confusing when there is an unexpected failure (which is what appears to have happened here).
As mentioned in the comments, this failure occurred because the batch purchase was attempted using a collection of Shipment object_ids. The Batch endpoint is actually meant to let you create a collection of Shipment objects en masse, and then later batch-purchase the labels for your desired rates on those Shipment objects.
Rate retrieval is generally the more time-consuming process, depending on how many connected shipping accounts you have, so Batch creation is intended to let Shippo retrieve rates for a lot of packages while you simply check on them once they are done (or get notified of their completion via Shippo's webhooks).
So moving forward, make sure that you first create the Batch with a collection of Shipment objects (see here). Then you can proceed to create the labels for those shipments like so.
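For reference, here is a rough sketch of a batch creation call where each batch_shipments entry embeds a full shipment (addresses and parcel) instead of an existing Shipment object_id. The address and parcel field names follow Shippo's public docs as I recall them, and the carrier account, addresses and parcel values are placeholders, so double-check against the current documentation.

    // Rough sketch: create the Batch with embedded Shipment objects rather than object_ids.
    // Carrier account, addresses and parcel values are sample placeholders.
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    static class ShippoBatch
    {
        public static async Task CreateAsync(HttpClient client, string apiToken)
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("ShippoToken", apiToken);

            var payload = new
            {
                default_carrier_account = "9348...b850",
                default_servicelevel_token = "usps_priority",
                label_filetype = "PDF_4x6",
                batch_shipments = new[]
                {
                    new
                    {
                        // A full shipment object, not the object_id of an existing Shipment.
                        shipment = new
                        {
                            address_from = new { name = "Sender", street1 = "215 Clayton St.",
                                                 city = "San Francisco", state = "CA", zip = "94117", country = "US" },
                            address_to = new { name = "Recipient", street1 = "965 Mission St.",
                                               city = "San Francisco", state = "CA", zip = "94103", country = "US" },
                            parcels = new[] { new { length = "5", width = "5", height = "5",
                                                    distance_unit = "in", weight = "2", mass_unit = "lb" } }
                        },
                        metadata = "order 1"
                    }
                }
            };

            var body = new StringContent(JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PostAsync("https://api.goshippo.com/batches/", body);
            response.EnsureSuccessStatusCode();
        }
    }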

How continuous Azure Web Jobs can be idempotent and send email?

After reading tons of information on the web about Azure WebJobs, I see that the documentation says a job should be idempotent; on the other hand, blogs say they use WebJobs for actions such as "charging a customer" or "sending an e-mail".
This documentation says that running a continuous WebJob on multiple instances with a queue could result in the job being called more than once. Do people really ignore the fact that they could charge their customer twice, or send an e-mail twice?
How can I make sure I can run a WebJob with a queue on a scaled web app and messages are processed only once?
I do this using a database, an UPDATE query with a row lock, and a TransactionScope object.
In your Order table, create a column that tracks the state of the action your WebJob performs, e.g. EmailSent.
In the QueueTrigger function, begin a transaction, then execute an UPDATE on the customer's order with ROWLOCK that sets EmailSent = 1 with a WHERE EmailSent = 0 clause. If the rows-affected count returned by SqlCommand is 0, exit the function: another WebJob instance has already sent the email. Otherwise, send the email and, if it is sent successfully, call Complete() on the TransactionScope object.
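A minimal sketch of that pattern follows. The table and column names ("Order", "EmailSent"), the queue name and SendEmail are placeholders for your own schema and mail code.

    // Minimal sketch of the claim-then-act pattern described above.
    using System.Data.SqlClient;
    using System.Transactions;
    using Microsoft.Azure.WebJobs;

    public class Functions
    {
        public static void ProcessOrderMessage([QueueTrigger("order-emails")] string orderId)
        {
            using (var scope = new TransactionScope())
            using (var conn = new SqlConnection("<your connection string>"))
            {
                conn.Open();

                // Claim the row: only one instance can flip EmailSent from 0 to 1.
                var cmd = new SqlCommand(
                    @"UPDATE [Order] WITH (ROWLOCK)
                      SET EmailSent = 1
                      WHERE OrderId = @id AND EmailSent = 0", conn);
                cmd.Parameters.AddWithValue("@id", orderId);

                if (cmd.ExecuteNonQuery() == 0)
                    return;               // another instance already sent this email

                SendEmail(orderId);       // placeholder for your email-sending code
                scope.Complete();         // commit the EmailSent flag only after a successful send
            }
        }

        private static void SendEmail(string orderId) { /* ... */ }
    }

If SendEmail throws, the scope is never completed, the UPDATE rolls back, and the message becomes available to be retried.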
That should provide the idempotency you want.
Hope that helps.

CQRS - When a command cannot resolve to a domain

I'm trying to wrap my head around CQRS. I'm drawing from the code example provided here. Please be gentle, I'm very new to this pattern.
I'm looking at a logon scenario. I like this scenario because it's not really demonstrated in any examples I've read. In this case I do not know what the aggregate id of the user is, or even whether there is one, as all I start with is a username and password.
In the fohjin example, events are always fired from the domain (if needed) and the command handler calls some method on the domain. However, if a user's logon is invalid, I have no domain object to call anything on. Also, most, if not all, of the base Command/Event classes defined in the fohjin project pass around an aggregate id.
In the case of the event LogonFailure I may want to update a LogonAudit report.
So my question is: how to handle commands that do not resolve to a particular aggregate? How would that flow?
public void Execute(UserLogonCommand command)
{
    // User looked up by username somehow -- should I query the report database
    // to resolve the username to an id?
    User user = null;

    if (user == null || user.Password != command.Password)
    {
        // What to do here? I want to raise an event that doesn't target a specific user.
    }
    else
    {
        user.LogonSuccessful();
    }
}
You should take into account that in most cases CQRS and DDD are suitable for just some parts of a system. It is very uncommon to model an entire system with CQRS concepts; it fits best in the parts with a complex business domain, and I wouldn't call logging a user in a particularly complex business scenario. In fact, in most cases it's not business-related at all. The actual business domain starts once the user is already identified.
Another thing to remember is that, due to eventual consistency, it is extremely beneficial to check as much as we can using only the query side, without even creating any commands/events.
Assuming, however, that the information about successful/failed user log-ins is meaningful, I'd model your scenario with the following steps (a rough sketch in code follows the list):
User provides a name and password
The name/password is validated against some kind of query database
If the provided credentials are valid, RegisterValidUserCommand(userId) is executed, which results in the proper event
If the provided credentials are not valid, RegisterInvalidCredentialsCommand(providedUserName) is executed, which results in the proper event
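A rough sketch of those steps; the bus and query-side interfaces here are hypothetical and not taken from the fohjin sample:

    // Rough sketch: validation uses the query side only, then a command is dispatched.
    public class LogonService
    {
        private readonly IUserQueries _queries;  // query side: read-only credential lookup
        private readonly ICommandBus _bus;       // dispatches commands to their handlers

        public LogonService(IUserQueries queries, ICommandBus bus)
        {
            _queries = queries;
            _bus = bus;
        }

        public void Logon(string userName, string password)
        {
            // Validation happens entirely on the query side; no aggregate is involved yet.
            var user = _queries.FindByUserName(userName);

            if (user != null && user.Password == password)
                _bus.Send(new RegisterValidUserCommand(user.Id));
            else
                _bus.Send(new RegisterInvalidCredentialsCommand(userName));
        }
    }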
The point is that checking user credentials is not necessarily part of the business domain.
That said, there is a related concept in which not every command or event needs to be business-related, and thus it is possible to handle commands that don't require an aggregate to be loaded.
For example, you want to change data that is informational only and in no way affects the business concepts of your system, like information about a person's sex (once again, assuming it has no business meaning).
In that case, when you handle SetPersonSexCommand there's actually no need to load an aggregate, as that information doesn't even have to live on an entity; instead you create a PersonSexSetEvent, register it, and publish it so the query side can project it to the screen/report.
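A handler for such a command might look roughly like this; the store and publisher interfaces are hypothetical:

    // Rough sketch: no aggregate is loaded; the event is simply registered and published so
    // the query side can project it. IEventStore and IEventPublisher are hypothetical.
    public class SetPersonSexCommandHandler
    {
        private readonly IEventStore _store;
        private readonly IEventPublisher _publisher;

        public SetPersonSexCommandHandler(IEventStore store, IEventPublisher publisher)
        {
            _store = store;
            _publisher = publisher;
        }

        public void Handle(SetPersonSexCommand command)
        {
            var evt = new PersonSexSetEvent(command.PersonId, command.Sex);
            _store.Append(evt);       // register the event
            _publisher.Publish(evt);  // query side projects it to the screen/report
        }
    }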