How can we replace the production exchange in an activity with a technosphere exchange already present in this activity?
In an activity act, I tried to do it by deleting the existing production exchange existing_prod_exc and then creating a new production exchange with new_prod_exc = act.new_exchange(input=act.key, output=act.key, type='production'), but I don't know how to "copy" all the exchange characteristics of the existing exchange techno_exc into new_prod_exc.
Thanks for helping me.
You can simply directly change the exchange you are interested in and save it:
exc_to_become_production['type'] = 'production'
exc_to_become_production['input'] = exc_to_become_production['output']
exc_to_become_production.save()
Doing just this will result in two production exchanges. You can then delete the existing production exchange if you want.
Changing the exchange type can be accomplished in several ways; you could directly edit the type column in the SQLite database, or work with the ExchangeDataset objects, but I guess you want to use the main interface. In this case, changing the exchange type is easy:
some_exc = next(iter(some_activity.technosphere()))
some_exc['type'] = 'production'
some_exc.save()
Recall that the only difference between an input and an output is the sign of the value in the technosphere matrix; you can therefore accomplish the same thing by multiplying the exchange amount by -1.
I guess you would also want to delete the existing production exchange (you should do this first, for obvious reasons!):
for exc in some_activity.production():
    exc.delete()
As with any destructive operation, it is best to try this first on a copy of the actual data; you can create a test project quickly using projects.copy_project().
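Putting the two answers together, a minimal sketch of the whole operation might look like this (the database name and activity code are invented for the example):

import brightway2 as bw

# Work on a throwaway copy of the project first, so mistakes are cheap.
bw.projects.copy_project("sandbox")

act = bw.Database("example_db").get("some_activity_code")  # hypothetical names

# 1) Delete the existing production exchange(s).
for exc in act.production():
    exc.delete()

# 2) Turn an existing technosphere exchange into the production exchange.
exc = next(iter(act.technosphere()))
exc['type'] = 'production'
exc['input'] = exc['output']
exc.save()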
The Problem
Say I've got a cool REST resource /account.
I can create new accounts
POST /account
{accountName:"matt"}
which might produce some json response like:
{account:"/account/matt", accountName:"matt", created:"November 5, 2013"}
and I can look up accounts created within a date range by calling:
GET /account?created-range-start="June 01, 2013"&created-range-end="December 25, 2013"
which might also produce something like:
{accounts: [{account:"/account/matt", accountName:"matt", created:"November 5, 2013"}, {...}, ...]}
Now, let's say I want to set up some sample data and write some tests against the GET /account resource within some specified creation date range.
For example, I want to somehow insert the following accounts into the system:
name=account1, created=January 1, 2010
name=account2, created=January 2, 2010
name=account3, created=December 29, 2010
name=account4, created=December 30, 2010
then call
GET /account?created-range-start="January 2, 2010"&created=range-end="December 29,2010"
and verify that only accounts 2 and 3 are returned.
How should I insert these sample accounts to write my tests?
Possible Solutions
1) I could use inversion of control and allow the user to specify the creation date for new accounts.
POST /account
{account:"matt", created="June 01, 2013"}
However, even if the created field were optional, I don't like this approach because I may not want to allow my users to set the creation date of their account. I certainly need to be able to do it for testing, but having that functionality as part of the public API seems wrong to me. Maybe I want to give a $5 credit to anyone who joined prior to some particular day; if users can specify their creation date, they can game the system. Not good.
2) I could add one or more testing configuration resources
PUT /account/creationDateTimestampProvider
{provider="DefaultProvider"}
or
PUT /account/creationDateTimestampProvider
{provider="FixedDateProvider", date="June 01, 2013"}
This approach affords me the ability to lock down these resources with security constraints so that only my test context can call them, but it also necessarily has side effects on the system that may become a pain to manage, especially if I have a bunch of backdoor configuration resources.
3) I could interact directly with the database, circumventing the REST API altogether, to set up my sample data.
INSERT INTO ACCOUNTS ...
GET /account?...
However, this can put the system into states that the REST API itself would never allow, and as the DB model evolves, maintaining these SQL scripts might also become a pain.
So... how do I test my GET /account resource? Is there a more elegant way that I'm not thinking of?
There are a lot of ways to do this, and you've come up with some solid (though maybe not perfect for your situation) solutions.
In the setup for the test, I would spin up an in-memory database like HSQLDB (there are others) and do the inserts. The test configuration will inject the appropriate database configuration into your service provider class. Run the tests, and then shut the database down on teardown.
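The setup/teardown pattern looks roughly like the sketch below; it uses Python and an in-memory SQLite database purely as a stand-in for HSQLDB, and the table layout is invented:

import sqlite3
import unittest

class AccountQueryTest(unittest.TestCase):

    def setUp(self):
        # Spin up a throwaway in-memory database and seed the sample accounts.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (name TEXT, created TEXT)")
        self.db.executemany(
            "INSERT INTO accounts VALUES (?, ?)",
            [("account1", "2010-01-01"), ("account2", "2010-01-02"),
             ("account3", "2010-12-29"), ("account4", "2010-12-30")])
        # In the real test, the service under test would be configured to use
        # this connection instead of the production datasource.

    def test_created_range_returns_accounts_2_and_3(self):
        rows = self.db.execute(
            "SELECT name FROM accounts WHERE created BETWEEN ? AND ?",
            ("2010-01-02", "2010-12-29")).fetchall()
        self.assertEqual([r[0] for r in rows], ["account2", "account3"])

    def tearDown(self):
        # Shut the database down when the test finishes.
        self.db.close()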
This post provides a good example at least for the persistence side of things.
Incidentally, do not change the API of your service just to facilitate a test. Maybe I misunderstood and you aren't planning to anyway, but I thought I would mention it just in case.
Hope that helps.
For what it's worth, these days I'm primarily using the second approach for most of my system level (black box) tests.
I create backdoor admin/test APIs with security requirements that only my system tests can satisfy. These superpowered APIs allow me to seed data. I try to limit their scope as much as possible so they are not overly coupled to specific implementation details, while staying flexible enough to specify whatever the desired seed data needs.
The reason I prefer this approach to the database solution that Vidya provided is that my tests aren't coupled to the specific data storage technology. If I decide to switch from Mongo to Dynamo or something like that, using an admin API frees me from having to update all of my tests; instead I only need to update the admin API and its implementation.
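For illustration, a seeding helper in such a test suite can be as small as this (the endpoint, auth header, and token handling are invented):

import os
import requests  # assumes the 'requests' package is available

ADMIN_TOKEN = os.environ["ADMIN_API_TOKEN"]   # secret only the system tests hold
BASE_URL = "https://api.example.test"         # invented host

def seed_account(name, created):
    # Backdoor admin endpoint, locked down so only the test context can call it.
    resp = requests.post(
        f"{BASE_URL}/admin/test/accounts",
        json={"accountName": name, "created": created},
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"})
    resp.raise_for_status()
    return resp.json()

# Seed the fixtures, then exercise the public GET /account query under test.
for name, created in [("account1", "2010-01-01"), ("account2", "2010-01-02"),
                      ("account3", "2010-12-29"), ("account4", "2010-12-30")]:
    seed_account(name, created)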
I'm connecting to a third-party web service to retrieve rows from the underlying database. I can optionally pass a parameter like this:
http://server.com/resource?createdAfter=[yyyy-MM-dd hh:ss]
to get only the rows created after a given date.
This means I have to store the current timestamp (using #[function:datestamp:...], no problem) in one message scope and then retrieve it in another.
It also implies the timestamp should be preserved in case of an outage.
Obviously, I could use a subflow containing a file endpoint, saving to a designated file on a path. But, intuitively, based on my (very!) limited experience, that feels hackish.
What's the correct idiom to solve this?
Thanks!
The Object Store Module is designed just for that: to allow you to save bits of information from your flows.
See:
http://mulesoft.github.io/mule-module-objectstore/mule/objectstore-config.html
https://github.com/mulesoft/mule-module-objectstore/
Baseline info:
I'm using an external OAuth provider for login. If the user logs in with the external OAuth provider, they are allowed into my system. It's not really a technology issue, but I'm using JOliver EventStore, for what it's worth.
Logic:
I'm not given a guid for new users; I just have an email address.
I check my read model before sending a command: if the user email exists, I issue a Login command with the ID; if not, I issue a CreateUser command with a generated ID. My issue is in the case of a new user.
A save occurs in the event store with the new ID.
Issue:
Assume two create commands are somehow issued before the read model is updated, due to a browser refresh or some other anomaly that occurs before consistency with the read model is achieved. That's OK; that's not my problem.
What Happens:
Because the new ID is a Guid comb, there's no chance the event store will know that these two CreateUser commands represent the same user. By the time they get to the read model, the read model will know (because they have the same email) and can merge the two records or take some other compensating action. But now my read model is out of sync with the event store which still thinks these are two separate entities.
Perhaps it doesn't matter because:
Replaying the events will have the same effect on the read model, so that should be OK.
Because both commands are duplicate "Create" commands, they should contain identical information, so it's not like I'm losing anything in the event store.
Can anybody illuminate how they handled similar issues? If some compensating action needs to occur, does the read model service issue some kind of compensation command when it realizes it has a duplicate entry? Is there a simpler methodology I'm not considering?
You're very close to what I'd consider a proper possible solution. The scenario, if I may summarize, is somewhat like this:
Perform the OAuth-entication.
Using the read model decide between a recurring visitor and a new visitor, based on the email address.
In case of a new visitor, send a RegisterNewVisitor command message that gets handled and stored in the eventstore.
Assume there is some concurrency going on that, for the same email address, causes two RegisterNewVisitor messages, each containing what the system thinks is the key associated with the email address. These keys (guids) are different.
Detect this duplicate key issue in the read model and merge both read model records into one record.
Now, instead of merging the records in the read model, why not send a ResolveDuplicateVisitorEmailAddress { Key1, Key2 } command to your domain model, leaving it up to the domain model (the codified form of the business decision to be taken) to resolve the issue? You could even have a dedicated read model to deal with these kinds of issues; the other read model will just get a kind of DuplicateVisitorEmailAddressResolved event and project it into the proper records.
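A rough sketch of that flow, with invented message names and infrastructure (the real implementation would of course live in your .NET domain model):

from dataclasses import dataclass

@dataclass
class VisitorRegistered:                      # hypothetical event from RegisterNewVisitor
    key: str
    email: str

@dataclass
class ResolveDuplicateVisitorEmailAddress:    # command sent to the domain model
    key1: str
    key2: str

@dataclass
class DuplicateVisitorEmailAddressResolved:   # event the other read models project
    surviving_key: str
    retired_key: str

def project_visitor_registered(event, read_model, command_bus):
    # Read-model projection: detect the duplicate and hand the decision to the domain.
    existing = read_model.find_by_email(event.email)   # read_model is hypothetical
    if existing is not None and existing.key != event.key:
        command_bus.send(ResolveDuplicateVisitorEmailAddress(existing.key, event.key))
    else:
        read_model.insert(event.key, event.email)

def handle_resolve_duplicate(command, event_store):
    # Domain decision codified here: keep the first aggregate, retire the second.
    event_store.append(command.key1, DuplicateVisitorEmailAddressResolved(
        surviving_key=command.key1, retired_key=command.key2))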
Word of warning: you've asked a technical question and I gave you a technical, possible solution. In general, I would not apply this technique unless I had some business indicator that it is worth investing in (how often does a user really log in concurrently for the first time?); maybe solving it this way is just a way of ignoring the root cause (flaky OAuth, no register-new-visitor process in place, etc.). There are other technical solutions to this problem, but I wanted to give you the one closest to what you already have in place. They range from registering new visitors sequentially to keeping an in-memory projection of the visitors not yet in the read model.
The repository in the CommonDomain only exposes GetById(). So what should I do if my handler needs a list of Customers, for example?
Taking your question at face value: if you needed to perform operations on multiple aggregates, you would just provide the IDs of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer I see what you are actually referring to is set based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is: how do I check that the username hasn't already been used when processing my CreateUserCommand? I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created, the UserCreatedEvent will be raised and handled by the query side. Here, the insert query will fail (either because of a check or a unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume that the client has done the check. I know this approach is difficult to grasp at first, but it's the nature of eventual consistency.
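A sketch of that compensating flow on the query side (the table name, command shapes, and the read_db/command_bus objects are invented):

import sqlite3

def on_user_created(event, read_db, command_bus):
    # The read model enforces uniqueness with a UNIQUE constraint on username.
    try:
        read_db.execute(
            "INSERT INTO users (user_id, username) VALUES (?, ?)",
            (event["userId"], event["username"]))
        read_db.commit()
    except sqlite3.IntegrityError:
        # Username already taken: compensate by deleting the new aggregate
        # and letting the user know.
        command_bus.send({"type": "DeleteUser", "userId": event["userId"]})
        command_bus.send({"type": "NotifyUsernameTaken", "userId": event["userId"]})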
Also you might want to read this other question which is similar, and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like get all customers would be carried out by a separate query handler which listens to all events in the domain and builds a query model to satisfy the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer created and customer name change events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
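A minimal sketch of such a projection (the event shapes are invented):

# Keeps a simple last-name -> customer-ids lookup that the query side can serve.
last_name_index = {}

def apply(event):
    if event["type"] == "CustomerCreated":
        last_name_index.setdefault(event["lastName"], set()).add(event["customerId"])
    elif event["type"] == "CustomerNameChanged":
        last_name_index.get(event["oldLastName"], set()).discard(event["customerId"])
        last_name_index.setdefault(event["newLastName"], set()).add(event["customerId"])

def customers_with_last_name(last_name):
    return sorted(last_name_index.get(last_name, set()))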
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the id of the aggregate root it should operate on.
This id will be looked up by the client sending the command, using a view in your read model. That view will be populated with data from the events that your AR emits.
I'm making a site for a writers management company. They get tons of script submissions every day from prospective and often unsolicited writers. The new site will allow a prospective writer to submit a short logline / sample of his or her idea. This idea gets sent to an email account at the management group. If the management group likes what they see, they want to be able to approve that submission from within the email and have a unique link dispatched to the submitter to upload their full script. This link would either only work once, or only for a certain amount of time so that only the intended recipient could use it.
So, can anyone point me in the direction of some sort of (I'm assuming PHP + MySQL) CMS or framework that could accomplish this? I've searched a lot, but I can't seem to figure out the right way to phrase this query for a search engine.
I have moderate programming experience, but not much with PHP outside of some simple Wordpress hacks.
Thanks!
I will just give you general guidelines on a simple way to construct such a system.
I assume that the Writer is somehow registered in the system, and that his/her profile contains a valid email address.
So, when he submits the sample, you would create an entry on the "Sample" table. Then you would mail a Manager with the sample and a link. This link would point to a script giving the database "id" of the sample as a parameter (this script should verify that the manager is logged on -- if not, show the login screen and after successful login redirect him back).
This script would then be aware of the Manager's intention to allow the Writer to submit his work. Now the fun begins.
There are many possibilities:
You can create an entry in an appropriate "SubmitAuthorizations" DB table containing the id of the Writer and the date this authorization was given (ie, the date when the row was added to your DB). Then you simply send a mail to the Writer with a link like "upload.php?id=42", where the id is the authorization id. This script would check if the logged user is the correct Writer, and if he is within the allowed timeframe (by comparing the stored "authorization date" and the current date).
The next one is the one I prefer: no special table just for handling something trivial (let's say you will never want to "edit" an authorization or "cancel" it, but it may still "expire"). You simply give the Writer a link with 2 parameters: the date the authorization was given and an authorization key, like "upload.php?authDate=20091030&key=87a62d726ef7..."
Let me explain how it works.
The script would first verify if the Writer is logged on (if not, show the login page with a redirection after successful login).
So, now it's time to validate the request: that is, check that this is not a "forged" link. How to do this? It's just a matter of constructing the authorization key in a "smart" way.
You can do something like:
key = hash(concat(userId, ";", authDate, ";", seed));
Here, hash() is what we call a "one-way function", like MD5, SHA1, etc. concat() is simply a string concatenation function. Finally, seed is something like a "master password": completely random and never changing (if you change it, all the issued links stop working), there just to increase security -- say a hacker correctly guesses you are using MD5 (which is easy) and then tries to attack your system by hashing combinations of the username and the date; without the seed, he cannot reproduce a valid key.
Also, for a request to be valid, it must be in the correct time frame.
So, if the key is valid and the date is within the time frame, you can accept the upload.
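To make it concrete, here is a minimal sketch of the key generation and validation in Python rather than PHP (the seed, the window length, and the parameter handling are all invented):

import hashlib
import hmac
from datetime import datetime, timedelta

SEED = "long random string that never changes"   # the "master password"
UPLOAD_WINDOW = timedelta(days=14)                # invented time frame

def make_key(user_id, auth_date):
    # key = hash(concat(userId, ";", authDate, ";", seed))
    return hashlib.sha1(f"{user_id};{auth_date};{SEED}".encode()).hexdigest()

def build_upload_link(user_id):
    auth_date = datetime.utcnow().strftime("%Y%m%d")
    return f"upload.php?authDate={auth_date}&key={make_key(user_id, auth_date)}"

def is_authorized(user_id, auth_date, key):
    # Reject forged keys (constant-time compare) and expired authorizations.
    if not hmac.compare_digest(make_key(user_id, auth_date).encode(), key.encode()):
        return False
    issued = datetime.strptime(auth_date, "%Y%m%d")
    return datetime.utcnow() - issued <= UPLOAD_WINDOW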
Some points to note:
This is a very simple system, but might be exactly what you need.
You should avoid MD5 for the hashing function, take something like SHA1 instead.
For the link sent to the Writer, you could "obfuscate" the parameter names, ie, call them "k" for the "key" and "d" for the "authDate".
For the date, you could choose another, more "cryptic" format, like the Unix epoch.
Finally, you can encode the parameters with something like base64 (or simply apply some character-replacing function like rot13, but one that takes digits into account as well), just to make them harder to guess.
Just for completeness: in the validation script you can also check whether the Writer has already sent a file in the time frame, making it impossible for him to send many files with the same authorization.
I have recently implemented something like this twice at the company I work for, for two completely different uses. Once you get the idea, it is extremely simple to implement -- maybe less than 10 lines of code for the whole key-generation and validation process.
In one of them, the agent equivalent to your Writer had no account in the system (it was actually his first contact with the system) -- there was only his "profile", managed by someone else. In that case, you would have to include the Writer's id in the parameters to the upload script as well.
I hope this helps, and that it was clear enough. If I find the time, I will blog about it with a working example in some language.