Mailchimp: re-using the trigger for the "subscribe new member" email

Scenario: A new member joins list group 'A' (the trigger detects this and sends an email) [workflow A]. Then, according to a condition, the member leaves group 'A' and joins group 'B' (the trigger detects this and sends an email) [workflow B]. After some time, the member is re-added to group 'A' (the trigger does nothing).
Questions: Is it possible to resend the same email using the same workflow and trigger, given that the trigger condition is fulfilled in both cases of the scenario? (If yes, how?)
Basically, can the same email address be triggered twice (if the trigger condition is fulfilled both times)?
I tried a few operations:
Removing subscribers from the workflow => it does not work
Removing subscribers from the list and re-adding them => it does not work

The answer here is no. A subscriber can only be added to a workflow once.
This scenario sounds like it might be transactional in nature, though, and you may find more utility in a transactional mail service (like Mandrill) as opposed to MailChimp, which is really meant as a one-to-many service.
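If you do go the transactional route, each send is just an API call, so re-sending to the same address any number of times is not a problem. A minimal sketch against Mandrill's messages/send endpoint (the API key, addresses, and copy are placeholders):

import requests

# Mandrill's transactional send endpoint; "key" is your API key.
resp = requests.post(
    "https://mandrillapp.com/api/1.0/messages/send.json",
    json={
        "key": "YOUR_API_KEY",
        "message": {
            "from_email": "lists@example.com",
            "to": [{"email": "member@example.com"}],
            "subject": "Welcome back to group A",
            "text": "You have been re-added to group A.",
        },
    },
)
resp.raise_for_status()
# The response is a JSON list with one send status per recipient.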

Event Sourcing (pubsub), dependent events, chaining/queueing the events properly

Say I have the following model:
Channel [channel_id]
Group [group_id]
Channel [channel_id] 1:1 [channel_id] ChannelGroupReference [group_id] 1:1 [group_id] Group
Each of the above, including references, runs as a separate microservice.
This may not be obvious, so I should mention that a channel does not necessarily have a group connected to it. There can be channels with groups and channels without groups.
Next.
Group creation events and Channel creation events are triggered from different places:
In case we create a Group: this is handled by GroupApp, then a Channel is created afterwards, then a Group:Channel relationship is created;
In case we create a Channel (without a group): the channel is created by ChannelApp and that's it (no further references).
The way I see it
I create a Group // handled by GroupApp
I fire the GroupCreated event // fired by GroupApp (contains group_id)
ChannelApp handles the GroupCreated event
ChannelApp fires the ChannelCreated event (contains both group_id and channel_id)
GroupChannelReferenceApp handles the ChannelCreated event and, based on the group_id field value, decides whether the reference should be created (so for channels without a group, no reference is created)
The problem I see here is that ChannelApp becomes a kind of proxy, knowing and caring too much about the Group entity, while the only app I would like to care about both Group and Channel is [3] GroupChannelReferenceApp. ChannelApp should care only about channels and fire only channel-related events.
I need some sort of chain/queue (a saga won't work because of the distribution), which first stores 'knowledge' about the group event, then about the channel created for that group (based on the events fired), and then fires the proper GroupChannelWhateverCreated event, so GroupChannelReferenceApp can handle it and store the relationship.
This is a simplified example of a 3rd app handling separate events fired by the 1st and 2nd apps. The chain/queue might be longer (group membership, channel membership, etc.). Some steps have to wait for side services to process earlier events first.
Question
In general, how should I handle events for the third app, which needs data from both the Group and Channel events, given that those two events are fired by different sources and I need both IDs to store the reference? By the time I create a reference, both the Group and the Channel should exist (or one might not), but the point is that I do not want apps [1] and [2] to know about each other.
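To make the chain I have in mind concrete, here is a sketch of such a coordinator in Python (publish() is a stand-in for whatever bus is used; all names are illustrative, not an existing API):

def publish(name, payload): ...   # placeholder: your event bus

pending = {}   # group_id -> the 'knowledge' collected so far

def on_group_created(event):
    # GroupCreated, fired by GroupApp (contains group_id)
    pending.setdefault(event["group_id"], {})["group"] = event
    maybe_fire(event["group_id"])

def on_channel_created(event):
    # ChannelCreated, fired by ChannelApp (contains channel_id and,
    # for channels created for a group, the group_id)
    gid = event.get("group_id")
    if gid is None:
        return   # channel without a group: no reference to create
    pending.setdefault(gid, {})["channel"] = event
    maybe_fire(gid)

def maybe_fire(gid):
    parts = pending[gid]
    if "group" in parts and "channel" in parts:
        # both halves seen: fire the combined event so that
        # GroupChannelReferenceApp can store the relationship
        publish("GroupChannelCreated", {
            "group_id": gid,
            "channel_id": parts["channel"]["channel_id"],
        })
        del pending[gid]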

Mixpanel: Merge duplicate people profiles and also merge events

I have duplicate profiles due to a switch of the identifier in the code. I would like to merge the duplicate profiles now, and also merge the events / activity feed.
I got the API working; by calling
deduplicate_people(prop_to_match='$email', merge_props=True, case_sensitive=False, backup=True, backup_file=None)
the duplicates are in fact removed, but the events / activity feed is not merged, so I would lose many events.
Is there a way to remove duplicates and merge the events / activity feed at the same time?
The duplicates exist because, due to the identifier change, some people have an ID and others an email as their distinct_id. The events are tied to the corresponding person by that ID or email.
So here is what I ended up doing to re-create the identity mapping for people and their events:
I used Mixpanel's API (export_people / export_events) to create a backup of people and events. Then I wrote a script that creates a "distinct_id <-> email" mapping for people whose distinct_id is an actual ID rather than an email (each person has an $email field regardless of the content of $distinct_id).
Then I went over all exported events. For each event that had an ID as distinct_id, I used the mapping to change that distinct_id to the email, and saved the updated events in a JSON file. This re-creates the reference from events to the person via the email distinct_id -- the events that would otherwise get lost.
Then I used Mixpanel's de-duplication API to delete all duplicates -- thus losing some events. Finally, I imported the events from the previous step, which gave me back those missing events.
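A condensed sketch of that script, assuming the exports are JSON Lines files in Mixpanel's usual export layout (file names are placeholders):

import json

mapping = {}   # distinct_id (actual ID) -> email

# Step 1: build the mapping from the people export;
# people records look like {"$distinct_id": ..., "$properties": {...}}
with open("people_backup.json") as f:
    for line in f:
        person = json.loads(line)
        distinct_id = person["$distinct_id"]
        email = person["$properties"].get("$email")
        if email and "@" not in distinct_id:
            mapping[distinct_id] = email

# Step 2: rewrite event distinct_ids so the events point at the
# email-based profile; event records look like
# {"event": ..., "properties": {"distinct_id": ..., ...}}
with open("events_backup.json") as f, open("events_remapped.json", "w") as out:
    for line in f:
        event = json.loads(line)
        did = event["properties"]["distinct_id"]
        if did in mapping:
            event["properties"]["distinct_id"] = mapping[did]
            out.write(json.dumps(event) + "\n")

# Step 3: run deduplicate_people(...) as above, then re-import
# events_remapped.json to restore the otherwise-lost events.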
Three open points to consider before using this approach:
I believe events are not actually deleted on deduplication. So by importing them again, there are probably duplicate events in the system that are simply not referenced to a person and that may show up at some point.
The deduplication by $email kept the people that use an email as distinct_id and removed the ones with the actual ID. I don't know whether this is always the case or was a coincidence here. My approach will fail for people that still end up with an ID as distinct_id.
I suppose it's generally discouraged to hack around the distinct_id like this, because a mistake may result in data loss. So make sure to get it right.

DB relationship: implementing a conversation

I want to implement a simple conversation feature, where each conversation is a set of messages between two users. My question is: given that I have a reference from a message to its conversation, should I also have a reference the other way around?
Right now, each message has a conversationId. To retrieve all the messages that belong to a certain conversation, I do Message.find({conversationId: ..}). If I instead stored an array of messages in the conversation object, I could do conversation.messages.
Which way is the convention?
It all depends on usage patterns. First, you normalize: 1 conversation has many messages, 1 message belongs to 1 conversation. That means you've got a 1-to-many (1:M) relationship between conversations and messages.
In a 1:M relationship, the SQL standard is to assign the "1" as a foreign key to each of the "M". So, each message would have the conversationId.
In Mongo, you have the option of doing the opposite via arrays. Like you said, you could store an array of messageIds in the conversation. This gets pretty messy because for every new message, you have to edit the conversation doc. You're essentially doubling your writes to the DB & keeping the 2 writes in sync is completely on you (e.g. what if the user deletes a message & it's not deleted from the conversation?).
In Mongo, you also have to consider the difference between 1:M and 1:F (one-to-few). It's often advantageous to nest 1:F relationships, i.e. make the "F" a subdoc of the "1". There is a limit: each doc cannot exceed 16MB (this limit may be raised in future versions). The advantage of nesting subdocs is that updates are atomic, because it's all the same doc, not to mention that subscriptions in a pub/sub setup are easier. This may work here, but if you've got a group chat with 20 friends that's been going on for the last 4 years, you might have to get clever (cap it, start a new conversation, etc.).
Nesting would be my recommendation, although your original idea of assigning a conversationId to each message works too (make sure to index it!).
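To illustrate both options, a quick sketch in PyMongo (collection and field names are just illustrative):

from pymongo import MongoClient

db = MongoClient()["chat"]
conv_id = "conversation-1"   # placeholder IDs
alice = "alice"

# Option 1: SQL-style reference, each message carries the conversationId.
# Index it so Message.find({conversationId: ..}) stays cheap.
db.messages.create_index("conversationId")
db.messages.insert_one({"conversationId": conv_id, "from": alice, "text": "hi"})
msgs = list(db.messages.find({"conversationId": conv_id}))

# Option 2: nest messages as subdocs of the conversation (one-to-few).
# One atomic write per message, but the whole doc must stay under 16MB.
db.conversations.update_one(
    {"_id": conv_id},
    {"$push": {"messages": {"from": alice, "text": "hi"}}},
    upsert=True,
)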

How to push tables through socket

Considering the setup of kdb+ tick, how do the tables get pushed through the sockets?
In tick, a process (let's call it 'a') can subscribe to the tickerplant, which will then push the data for the subscribed 'tickers' to 'a' as new data arrives.
I would like to do the same, but I was wondering how. As far as I know, inter-process communication between q processes is just the ability to send commands from one process to another, so that the commands are executed on the other side.
So how is it then possible to transport a complete table between processes?
I know the functions that do this in tick are .u.pub and .u.sub, but it's not clear to me how the tables are transported between the processes.
So I have two questions:
How does kdb+ tick do this?
How can I push a table from one process to the other in general?
Let's walk through a simple version of this process:
We have one server 'S' and one client 'C'. When 'C' calls the .u.sub function, that function's code connects to 'S' using its host and port and calls a specific function on 'S' (let's say 'request') with the subscription parameters.
On getting this request, the 'request' function on 'S' adds the following entries to the subscription table it maintains:
-> Host and port of the client (from the incoming request)
-> Subscription params (for example, the client sends sym `VOD.L for subscription)
Now when 'S' gets a data update from the feed, it goes through its subscription table and checks for entries whose subscription-param column value (sym in our case) matches the incoming data. It then connects to each matching subscriber using the host and port from the table and calls their 'upd' function with the new data.
The only requirement is that the client has an 'upd' function defined on its side.
This is a very basic version of the process. kdb+ tick uses it with extra optimizations and features: a more optimized structure for the subscription table, log maintenance, log replay, unsubscription, recovery logic, a publish timer, and more.
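Here is the same mechanism sketched in Python rather than q, just to show the moving parts (a real tickerplant keeps open IPC handles instead of these callables; in q the push is roughly neg[h](`upd; `trade; data)):

subscriptions = []   # the table 'S' maintains: one (send, syms) row per client

def request(send, syms):
    # what .u.sub ends up calling on 'S'; 'send' stands in for the
    # client's host/port (an open IPC handle in the real thing)
    subscriptions.append((send, set(syms)))

def on_feed_update(table_name, rows):
    # on every feed update, push matching rows to each subscriber by
    # invoking the client's upd function with the new data
    for send, syms in subscriptions:
        matching = [r for r in rows if r["sym"] in syms]
        if matching:
            send(("upd", table_name, matching))

# the client's side: it just needs an upd function defined
def upd(table_name, rows):
    print("received", len(rows), "rows for", table_name)

# e.g.: request(lambda msg: upd(msg[1], msg[2]), ["VOD.L"])
#       on_feed_update("trade", [{"sym": "VOD.L", "price": 101.5}])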
For more details, you can check the definitions of the functions in the .u namespace.

What is the relationship between the FIX Protocol's OrdID, ClOrdID, OrigClOrdID?

I'm pretty new to the FIX protocol and was hoping someone could help clarify some terms.
In particular could someone explain (perhaps with an example) the flow of NewOrderSingle, ExecutionReport, CancelReplaceRequest and how the fields ClOrdID, OrdID, OrigClOrdID are used within those messages?
A quick note about the usage of these fields. My experience is that many who implement FIX do it slightly differently, so be aware that, though I am trying to explain correct usage, you may find differences between implementations. When I connect to a new broker, I get a FIX specification which details exactly how they use the protocol, and I have to be very careful to note where they have deviated from other implementations.
That said, I will give you a rundown of what you have asked for.
There are more complicated order types, but NewOrderSingle is the one most used. It allows you to create a trade for any asset. You create a new order using this message type and send it through your session using sendToTarget(). You can still modify the message after this point in the toApp() method, assuming your application implements the quickfix.Application interface.
The broker (or whoever you are connected to) will send you a reply in the form of an ExecutionReport. Using QuickFIX, that reply will enter your application through the fromApp() callback. From there, the best approach is to have your app inherit from the MessageCracker class (or implement the equivalent elsewhere); calling crack() from MessageCracker will dispatch to the relevant onMessage() overload. You will need to implement a number of these onMessage() methods (which ones depends on exactly what you are doing), the main one being onMessage(ExecutionReport msg, SessionID session). This method is called by the message cracker when you receive an ExecutionReport from the broker, which is the standard reply to a new order.
From there you handle the reply as required.
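For a feel of the shape of that code, here is a stripped-down sketch using the QuickFIX Python binding rather than Java (same concepts; fromApp here dispatches on MsgType by hand instead of via MessageCracker, and the symbol, IDs, and prices are placeholders):

import quickfix as fix
import quickfix44 as fix44

class MyApp(fix.Application):
    def onCreate(self, sessionID): pass
    def onLogon(self, sessionID): self.sessionID = sessionID
    def onLogout(self, sessionID): pass
    def toAdmin(self, message, sessionID): pass
    def fromAdmin(self, message, sessionID): pass

    def toApp(self, message, sessionID):
        pass   # last chance to modify an outgoing application message

    def fromApp(self, message, sessionID):
        # poor man's crack(): dispatch on MsgType; 35=8 is ExecutionReport
        msg_type = fix.MsgType()
        message.getHeader().getField(msg_type)
        if msg_type.getValue() == "8":
            self.on_execution_report(message)

    def on_execution_report(self, message):
        cl_ord_id, order_id = fix.ClOrdID(), fix.OrderID()
        message.getField(cl_ord_id)   # our ID, echoed back
        message.getField(order_id)    # broker-assigned ID
        print("report for", cl_ord_id.getValue(), "broker id", order_id.getValue())

def send_new_order(session_id):
    # NewOrderSingle: the client picks the ClOrdID
    order = fix44.NewOrderSingle(
        fix.ClOrdID("ORD-1"), fix.Side(fix.Side_BUY),
        fix.TransactTime(), fix.OrdType(fix.OrdType_LIMIT))
    order.setField(fix.Symbol("VOD.L"))
    order.setField(fix.OrderQty(100))
    order.setField(fix.Price(1.23))
    fix.Session.sendToTarget(order, session_id)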
Some orders do not get filled immediately like Limit orders. They can be changed. For that you will need the CancelReplaceRequest. Your broker will give you details of how to do this specifically for them (again there are differences and not everyone does it the same). You will have to have done a NewOrderSingle first and then you will use this MsgType to update it.
ClOrdID is an ID that the client uses to identify the order. It is sent with the NewOrderSingle and echoed back in the ExecutionReport. OrderID is a tag in the ExecutionReport message; it is the ID that the broker uses to identify the order. OrigClOrdID is used to identify the order being amended when you do an update (using CancelReplaceRequest): it is supposed to contain the ClOrdID of the original order. Some brokers want the ClOrdID of the very first order only; others want the ClOrdID of the last update. So the first OrigClOrdID will always be the ClOrdID of the NewOrderSingle; on subsequent updates to the same order, the latter kind of broker expects the ClOrdID from the last CancelReplaceRequest. Some brokers want the last OrderID and not the ClOrdID. Note that the CancelReplaceRequest requires a new ClOrdID as well.
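To make the chain concrete, here is a hypothetical sequence for one order that gets amended twice (all IDs invented for illustration):

NewOrderSingle:              ClOrdID=A1
ExecutionReport (ack):       ClOrdID=A1, OrderID=B1           <- broker assigns B1
CancelReplaceRequest #1:     ClOrdID=A2, OrigClOrdID=A1
ExecutionReport (replaced):  ClOrdID=A2, OrigClOrdID=A1, OrderID=B1
CancelReplaceRequest #2:     ClOrdID=A3, OrigClOrdID=A2       <- or A1, broker-dependent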