Can Sync Services add a column on the central table?

Is it possible to have Sync Services for ADO.NET read data from a table on multiple devices and insert it into a central SQL Server, having an additional column in the central table with the origin of the row data?
Let's say I have equipped door-to-door sales people with a device where they register sales. The local table would contain rows with sales information, and the central database would contain the same data + a column with the ID of the sales person.
Is that possible, or would I need the sales person's ID in the local database too?

Sync Framework identifies each client with a GUID (see: How To: Use Session Variables), and you can use that to map a particular client to a particular salesperson (see: Identifying Which Client Made a Data Change in either How to: Use a Custom Change Tracking System or How to: Use SQL Server Change Tracking).
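For illustration, the server-side insert command for the sync adapter could populate the extra column from the client GUID. A minimal sketch, assuming a hypothetical SalesPerson table that maps each client GUID to a salesperson (the Sales columns are illustrative too); @sync_client_id is the Sync Framework session variable holding the client's GUID:
-- Hypothetical server-side InsertCommand for the Sales sync adapter.
-- The salesperson ID is resolved from the syncing client's GUID.
INSERT INTO Sales (SaleId, ProductId, Quantity, SoldAt, SalesPersonId)
SELECT @SaleId, @ProductId, @Quantity, @SoldAt, sp.SalesPersonId
FROM SalesPerson sp
WHERE sp.ClientId = @sync_client_id;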
Or try the approach here for intercepting the change dataset and inserting/substituting the salesperson value: Part 1 – Upload Synchronization where the Client and Server Primary Keys are different

Related

Efficient Way To Design Database For My Specific Use Case

I am building a website where users can view emails that are fetched from my gmail account.
Users can read emails, change their labels & archive them. Each email has metadata associated with it, and users can search through the emails based on the metadata. Furthermore, each user is associated with an organization. Changes made to an email (e.g., if the email is archived, or if the tags are changed) by any one user gets reflected across the organization.
Right now, I store all emails in a single table along with their metadata. However, the problem is that I now have over 20,000 emails in the database, and searching through them based on the metadata takes too much time.
One way to optimize this is that when a user runs a search, the system should only search through emails that are in the inbox and not archived or deleted. But the issue is that where one organization might have archived an email, another might not have, so I cannot create separate tables for Inbox & Archive. By default, emails also get auto-archived after some time (this option can also be disabled), so the inbox generally has around 4,000 emails, whereas the archive has many times that.
My question is: does it make sense to create separate Inbox & Archive tables for each organization and just copy all new incoming emails to those tables? Since organizations only join by invitation, I do not expect the total number to cross 100. Or would this just explode and become too difficult to handle in the code later on, with so many tables?
I am using PostgreSQL for this.
If your operational workflow says "upon adding a new customer create such-and-such a table" then you have a serious database design problem. When you have more than about 50 customers things will slow down due to per-table overhead. In other words, when you start to succeed in business you will start to fail in performance. Not good.
You have a message entity. It, no doubt, contains the message's text, subject, timestamp, from, to, and other attributes that form part of the original message. Each message will have a unique (primary key) message_id. But the entity should not contain attributes like inbox and archive, because those attributes relate to the organization.
You need an org entity. Each organization has a unique org_id, a name, and other attributes of the organization.
Then you need an org_message table. Its primary key contains both org_id and message_id, and it will contain Boolean attributes like archived and read, plus a VARCHAR attribute naming the message's current folder. Each org's window into your message table is provided by org_message.
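A minimal sketch of that schema in PostgreSQL (column types and the message attributes are assumptions):
-- Messages as originally received; no per-org state here.
CREATE TABLE message (
    message_id BIGSERIAL PRIMARY KEY,
    subject    TEXT,
    body       TEXT,
    sent_at    TIMESTAMPTZ,
    from_addr  TEXT,
    to_addr    TEXT
);
CREATE TABLE org (
    org_id BIGSERIAL PRIMARY KEY,
    name   TEXT NOT NULL
);
-- One row per (org, message) pair, created lazily when an org first
-- acts on a message.
CREATE TABLE org_message (
    org_id     BIGINT NOT NULL REFERENCES org,
    message_id BIGINT NOT NULL REFERENCES message,
    read       BOOLEAN NOT NULL DEFAULT FALSE,
    archived   BOOLEAN NOT NULL DEFAULT FALSE,
    folder     VARCHAR(64) NOT NULL DEFAULT 'inbox',
    PRIMARY KEY (org_id, message_id)
);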
If you start with an organization named, for example, shipping, and you want to see all its messages, you use a query like this.
SELECT org.org_id, org.name,
       message.*,
       COALESCE(org_message.read, FALSE) AS read,
       COALESCE(org_message.archived, FALSE) AS archived,
       COALESCE(org_message.folder, 'inbox') AS folder
FROM org
CROSS JOIN message
LEFT JOIN org_message
       ON org_message.org_id = org.org_id
      AND org_message.message_id = message.message_id
WHERE org.name = 'shipping';
The CROSS JOIN pairs every org with every message, and the LEFT JOIN and COALESCEs set each org's defaults for each message: unread, not archived, and in the 'inbox' folder. That way you don't have to create a row in org_message for each organization and each message until the org handles the message.
If you want to mark a message as read and archived for a particular org, you INSERT a row into org_message, using ON CONFLICT ... DO UPDATE (an upsert):
INSERT INTO org_message (org_id, message_id, read, archived, folder)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT (org_id, message_id) DO UPDATE
SET read = EXCLUDED.read,
    archived = EXCLUDED.archived,
    folder = EXCLUDED.folder;
That either sets or updates the org's attributes for the message.
If you find that searching these tables is too slow, you'll need indexes. That's the subject of a different question.

Entity Framework - How to manage suburb and state data across multiple databases

I have a SaaS application in the pipeline.
One of the things that has me a bit confused is the best way to manage the stable set of Australian suburb and state data across multiple databases (this applies to any country, as each country has a list like this).
For example, in Australia you have the Australian postcode list that links all the postcodes to the suburbs, and you can use that to create dropdowns for state, suburb, postcode, etc.
An example CSV of Australian postcodes can be found HERE.
So you can upload a CSV file, for example, but the problem remains:
What's the best way to hold this data? It's common to all databases where you have a person, client, employee, etc.
Do you replicate it in each database? Is there a better way than having redundant stores of data?
What's the best way to implement it?
There are several options I would weigh for this problem. Some considerations:
Number of address rows expected
Whether a client database is concerned with prefill/validated international addresses
Whether the client system is web connected or can operate in isolation
Are these databases/systems hosted by you or distributed to individual clients? (SaaS implies "web" and "hosted by you" for points 3 and 4.)
How critical address integrity is.
For smaller systems, a simple option is to denormalize the address data (state, postcode, suburb) and use a central lookup database/service, either under your own control or from a third party. The denormalized address table would contain text fields for the state, postcode, suburb, etc. rather than FK values (StateId, SuburbId, etc.). This avoids needing to store lookup tables in every client DB: you keep just one lookup DB, or leave that to a third-party service.
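For illustration, the denormalized shape might look like this (a sketch; the table and column names are assumptions):
-- Denormalized client-side address table: plain text fields, no FKs.
CREATE TABLE Address (
    AddressId INT IDENTITY PRIMARY KEY,
    Street    NVARCHAR(200),
    Suburb    NVARCHAR(100),
    State     NVARCHAR(50),
    PostCode  NVARCHAR(10)
);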
The advantage of a third-party lookup is that keeping it up to date with new areas and changes is handled for you. Third-party services require a web connection, and you have to factor in the risk of their service being down or a web connection being unavailable. Larger systems with millions of addresses might benefit from normalizing the address table, where the "cost" of replicating suitable address lookup tables might be worthwhile. You can still use a central service to look up addresses, then resolve whether the client DB already has a StateId, SuburbId, etc. for the respective state/suburb for that postcode, inserting one if necessary. (This cuts the rows in each client DB down to the address values that are actually used.)
In that last example you might have lookup tables for State and Suburb linked to postcodes, linked to Country. Country would default to the target country and could be an optional selection for international addresses. The user provides a postcode to the service, which returns suburbs, and they select a suburb. The address validation service could go as far as validating the street address. When you're happy an address is "valid" and ready to be saved, you search your local State, Suburb (even Street) tables for matches for that postcode; if found, use those FKs, otherwise insert new entries and link the FKs.
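A rough sketch of that resolve-or-insert step in SQL (the lookup tables and column names here are hypothetical):
-- Insert the suburb for a validated (postcode, suburb, state) combination
-- only if this client DB has not seen it yet, then fetch its key.
INSERT INTO Suburb (PostCode, Name, StateId)
SELECT @PostCode, @SuburbName, s.StateId
FROM State s
WHERE s.Code = @StateCode
  AND NOT EXISTS (SELECT 1 FROM Suburb
                  WHERE PostCode = @PostCode AND Name = @SuburbName);

SELECT SuburbId FROM Suburb
WHERE PostCode = @PostCode AND Name = @SuburbName;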
Using a separate service or services would be my choice, especially if you need to support validating/storing international addresses, for instance if the client is in Australia but regularly has address information for New Zealand. Storing entire address validation tables could get rather large if clients could be resolving addresses for many countries (e.g. European countries and their neighbours). You can write a façade service to support different third-party address validation providers and/or homemade implementations behind a standard interface.
If a system has to operate in isolation from an internet connection, then you'll probably be stuck with each database having one or more local data sources to resolve address information.
Data integrity of address information is a separate concern you might want to consider. In some systems you need to validate that an address is recognized, and you don't want to allow invalid combinations or miss unexpected changes. Services that validate a particular address can provide unique IDs for an address that you can store as part of your address information. (These often tie into geo-coordinate solutions where you want to quickly direct a map service to a particular location.) Alternatively, once you have looked up an address and validated that it is correct, even if just the country, postcode, and suburb, you can create and store a hash of those values to check for tampering (i.e. if someone or some system changed a field and made the address invalid, the combined address won't match the stored hash). Addresses can be checked before use and flagged if not valid.
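As a sketch of the hash idea in T-SQL (HASHBYTES is SQL Server's hashing function; the Address table and column names are assumptions):
-- Store a hash of the validated fields alongside the address...
UPDATE Address
SET ValidationHash = HASHBYTES('SHA2_256',
        CONCAT(CountryCode, '|', PostCode, '|', Suburb))
WHERE AddressId = @AddressId;

-- ...and later flag rows whose fields no longer match the stored hash.
SELECT AddressId
FROM Address
WHERE ValidationHash <> HASHBYTES('SHA2_256',
        CONCAT(CountryCode, '|', PostCode, '|', Suburb));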

Tableau Dashboard Templating

How can we make Tableau dashboards templated? We would like to create just the template/wireframe of our reports, and as a client requests a report we should be able to fetch that client's data, generate the report, and display it to the client in an embedded Tableau web view.
There isn't a good way to do this, but there are some hacky workarounds.
Option 1: Separate DB Servers for each Client, Same Schema
If each client has a separate database server with the same schema, you can use the Tableau Server REST API to duplicate the workbook and data source for each client, then use the Update Data Source Connection endpoint to point the duplicated data source at that client's database server.
Option 2: Same Database Server and Schema
Create a column in your database table named 'client' and set it to the client's ID or client's name in all of your rows
Create a parameter in your Tableau workbook named "Client"
When connecting to the database and table in Tableau, you can use a custom SQL statement such as:
SELECT * FROM table WHERE client=<Parameters.Client>
Once you have the workbook loaded, you can use the JS API method Workbook.changeParameterValueAsync() to set the Client parameter to the appropriate client ID.
This has some critical security issues: if a user is able to figure out the client ID of another client, they can get that client's data. They can also brute-force this by calling changeParameterValueAsync themselves.

Master data services 2016 data validation

All models and attributes are created as per my requirements using Master Data Services 2016. I am working on data validation.
The requirement is that we have to display a custom message to users when they try to enter duplicate data in the combination of 3 columns (a composite primary key), and the duplicate should not be inserted into the database. I tried using triggers in the MDS database.
Please suggest the best way to do this.
You can add a business rule of Must be unique and then select Must be unique in combination with the following attributes. I think we cannot add a custom message.
The message will be shown as Column A, B, C must be unique in combination with .....

CQRS Command and domain state

I am new to CQRS and confused about how a command will write an address change to a customer object.
Let's say I have divided customer information into two tables:
Customer (domain database): Active, Preferred
Customer_Read database: Name, Address, Phone, Email
The user modifies the address of the customer. The address fields are all in the read database, and there may be 3 or more query-friendly tables keeping address information.
If I understand the CQRS sample implementations, the Customer domain (removed Aggregate root) should publish an event about the address change that is handled by multiple handlers to update each of the tables.
How do I implement this when I won't be changing the state of the customer object?
Does the domain have to know that its address is in another database?
Update: After going through more posts on the net, I am assuming that if the state is not changed by the command, no event will be generated to save the domain itself, but events will be applied to change the address in the query/view-model-friendly tables.
You still need to persist some domain data somewhere on the write side. That way the address is stored in that persistence store, and the event is published after changing it.
This way:
if there was no change, we can skip publishing the event;
the domain does not need to know anything about objects that may (or may not) be subscribed to its events.
This logic applies both to persistence in relational DBs (MS SQL with NHibernate, for example) and to an event sourcing approach.
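As an illustration, the write side can store the address and queue the event in the same transaction. This is a transactional-outbox sketch in T-SQL; the table and column names are hypothetical:
BEGIN TRANSACTION;

-- Persist the address on the write side; no-op if nothing changed.
UPDATE Customer
SET Address = @NewAddress
WHERE CustomerId = @CustomerId
  AND Address <> @NewAddress;

IF @@ROWCOUNT > 0
    -- Queue the event only when state actually changed; read-model
    -- handlers consume it and update each query-friendly table.
    INSERT INTO EventOutbox (EventType, CustomerId, Payload)
    VALUES ('CustomerAddressChanged', @CustomerId, @NewAddress);

COMMIT TRANSACTION;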