Real-time database synchronization [closed]

I have two computers connected to each other via serial communication.
The main computer holds a DB (about 10K words) and runs at a 20 Hz rate.
I need real-time synchronization of the DB on the other computer: if data is added, deleted, or updated, I want the other computer to see or get the changes in real time.
If I transfer the whole DB periodically, it takes about 5 seconds to update the other side, which is not acceptable.
Does anyone have an idea?

As you said, the other computer has to get the changes (i.e. insert, delete, update) via the serial link.
The easiest way to do this (though maybe impossible, if you can't change certain things) is to extend the database-change methods (or, if that's not possible, every call site) to send an insert/delete/update datagram with all required data over the serial link, which has to be robust against packet loss (i.e. error detection, retransmission, etc.).
On the other end you have to implement a semantically equivalent database where you replay all the received changes.
Of course you still have to synchronize the databases at startup/initialization or maybe periodically (e.g. once per day).
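
For illustration, here is a minimal C# sketch of the sender side of that idea. It assumes the main computer runs .NET, that the database code exposes a point where every change can be intercepted, and that keys and values fit in 16 bits; the frame layout, baud rate and XOR checksum are arbitrary choices, and the ACK/retransmission logic and the initial full synchronization are left out.

using System.IO;
using System.IO.Ports;

enum ChangeOp : byte { Insert = 1, Update = 2, Delete = 3 }

class ChangeSender
{
    private readonly SerialPort _port;

    public ChangeSender(string portName)
    {
        _port = new SerialPort(portName, 115200, Parity.None, 8, StopBits.One);
        _port.Open();
    }

    // Call this from the database-change methods on the main computer.
    public void SendChange(ChangeOp op, ushort key, ushort value)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write((byte)0x02);   // STX: start-of-frame marker
            w.Write((byte)5);      // payload length: op (1) + key (2) + value (2)
            w.Write((byte)op);
            w.Write(key);
            w.Write(value);
            w.Flush();

            byte[] frame = ms.ToArray();
            byte checksum = 0;
            foreach (byte b in frame) checksum ^= b;   // simple XOR checksum

            _port.Write(frame, 0, frame.Length);
            _port.Write(new[] { checksum }, 0, 1);
            // A robust version would wait for an ACK from the receiver and
            // retransmit on timeout or checksum failure.
        }
    }
}

The receiver parses the same frames, verifies the checksum, and replays each insert/update/delete against its local copy of the database.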

Related

Firebird gfix sweep doesn't clean [closed]

I've got a problem with the gfix sweep command: it doesn't clean up the garbage. What could the problem be? The database backup is 900 MB smaller than the database itself. What is the problem if a manually started gfix sweep doesn't work?
A backup is smaller because it doesn't contain indexes, but just the database data itself, and it only contains data of the latest committed transaction, no earlier record versions. In addition, the storage format of the backup is more efficient, because it is written and read serially and doesn't need the more complex layout used for the database itself.
In other words, in almost all cases a backup will be smaller than the database itself, sometimes significantly smaller (if you have a lot of indexes or a lot of transaction churn, or a lot of blobs).
Garbage collection in Firebird will remove old record versions; sweep will also clean up transaction information. Neither will release allocated pages, that is: the database file will not shrink. See Firebird for the Database Expert: Episode 4 - OAT, OIT, & Sweep
If you want to shrink a database, you need to backup and restore it, but generally there is no need for that: Firebird will re-use free space on its data pages automatically.
See also Firebird for the Database Expert: Episode 6 - Why can't I shrink my databases.
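
If you do go the backup-and-restore route, a minimal sketch driven from C# might look like the following; it assumes the gbak command-line tool is on the PATH, and the file names and credentials are placeholders. gbak -b writes a backup and gbak -c restores it into a new, compacted database file.

using System.Diagnostics;

class FirebirdBackupRestore
{
    static void Run(string args)
    {
        var psi = new ProcessStartInfo("gbak", args) { UseShellExecute = false };
        using (var p = Process.Start(psi))
        {
            p.WaitForExit();   // wait for gbak to finish before continuing
        }
    }

    static void Main()
    {
        // Back up the live database, then restore it into a fresh file.
        Run("-b -user SYSDBA -password masterkey mydb.fdb mydb.fbk");
        Run("-c -user SYSDBA -password masterkey mydb.fbk mydb_restored.fdb");
    }
}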

Change a 1500 column data-set for easier front-end manipulation [closed]

I have a data-set that consists of 1500 columns and 6500 rows, and I am trying to figure out the best way to shape the data for web-based interactive visualizations.
What I am trying to do is make the data more interactive and create an admin console that allows anyone to filter the data visually.
The front-end could potentially be based on Crossfilter, D3 and DC.js and give the user basically endless filtering possibilities (date, value, country). In addition there will be some predefined views like top and bottom 10 values.
I have seen and tested some great examples like this one, but after testing it did not really fit the large number of columns I had, and it was based on a full JSON dump from MongoDB. This resulted in very long loading times and a loss of full interactivity with the data.
So in the end my question is: what is the best approach (starting with normalization) to get the data into the right shape so it can be manipulated from a front-end? Reducing the number of columns is a priority.
A quick look at the piece of data that you shared suggests that the dataset is highly denormalized. To allow for querying and visualization from a database backend, I would suggest normalizing. This is no small bit of software work, but in the end you will have relational data, which is much easier to deal with.
It's hard to guess where you would start, but from the bit of data you showed there would be a country table, an event table of some sort, and probably some tables of enumerated values.
In any case, you will have a hard time finding a DB engine that allows that many columns. The row count is not a problem. I think in the end you will want a DB with dozens of tables.
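
As a rough, hypothetical sketch (only a fragment of the data was shown, so the table and property names below are guesses), the normalized shape could look something like these entity classes: most of the 1500 columns become rows in a narrow observations table keyed by country, date and measure.

using System;

// One row per country.
public class Country
{
    public int CountryId { get; set; }
    public string IsoCode { get; set; }
    public string Name { get; set; }
}

// One row per former column (e.g. "population", "gdp").
public class Measure
{
    public int MeasureId { get; set; }
    public string Name { get; set; }
}

// One row per (country, date, measure) value.
public class Observation
{
    public int CountryId { get; set; }
    public DateTime Date { get; set; }
    public int MeasureId { get; set; }
    public double Value { get; set; }
}

The front end then filters and aggregates Observation rows (by date, value, country, top/bottom 10) instead of loading a 1500-column document.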

Is mongodb a no-go for this application? [closed]

Good sirs.
I've just started planning a new project, and it seems that I should stick with a relational database, (even though I want to play with mongo). Tell me if I'm mistaken!
There will be box models, each of which can contain hundreds to thousands of items.
At any time, the user can move an item to another box.
for example, using some Railsy pseudocode...
item = Item(5676)
item.box // returns 24
item.update(box:25)
item.box // returns 25
This sounds like a simple SQL join table to me, but an expensive array manipulation operation for mongodb.
Or is removing an object out of one (huge) array and inserting it in another (huge) array not a big problem for mongo?
Thanks for any wisdom. I've only just started with mongo.
If you want to use big arrays, stay away from MongoDB; I say this from personal experience. There are two big problems with arrays. If they start to grow, the document grows and needs to be moved on disk, which is a very, very slow operation. And if you need to scan an array to get to the 10,000th element, that will also be very slow, because the 9,999 elements before it have to be checked first.
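
To make the alternative concrete: instead of embedding items in a per-box array, store one document per item with a box field that references its box, so moving an item is a single small update on one document. A short sketch with the official MongoDB .NET driver follows; the database, collection and field names are invented for the example.

using MongoDB.Bson;
using MongoDB.Driver;

class MoveItemExample
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var items = client.GetDatabase("warehouse")
                          .GetCollection<BsonDocument>("items");

        // The question's item.update(box: 25), expressed as a single-field
        // update on the item document (no huge array is touched).
        var filter = Builders<BsonDocument>.Filter.Eq("_id", 5676);
        var update = Builders<BsonDocument>.Update.Set("box", 25);
        items.UpdateOne(filter, update);
    }
}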

Hashing Algorithm Usage in Searching [closed]

I want to know about the usage of hashing in searching. For example, do Google or Yahoo use hash algorithms? Do big companies use hashing algorithms?
Yes. Refer to the book PageRank and Beyond; there you will find that Google uses hashing. Hashing keeps your complexity very low for operations like searching and adding. Consider a situation: suppose you are building an online chat website and have to handle a million users. You could use linear search, which in the worst case takes about 1 million times the time needed to fetch one element, so the user will have to wait a long time on the client side, but you save money because you use no extra space. If instead you use hashing, the time taken will be roughly the time to fetch a single element, but the system will cost more, because you have to pay for extra storage (1 million data records stored with a good hash function). The challenge is to find a hash function that causes a minimum of collisions when storing elements. Hashing is a big topic that I cannot explain briefly; refer to these links:
What is a good Hash Function?
http://en.wikipedia.org/wiki/Hash_function
http://www.cs.cmu.edu/~clo/www/CMU/DataStructures/Lessons/lesson11_2.htm
http://www.tutorialspoint.com/dbms/dbms_hashing.htm
http://www.internetlivestats.com/total-number-of-websites/
Google indexes an enormous number of pages; there are on the order of 1,156,000,000 websites in total (see the last link above). Let us assume it takes 1 millisecond to get one page from the DB. In the worst case a linear scan would take around 1,156,000,000 × 1 ms = 1,156,000 seconds, which is roughly 13 days; no user is going to wait that long for a search, so this cannot be done with simple linear search. Google has its own complex algorithms (you can find more in the book above) and its own servers to store the hashing records, from which records are fetched using hash functions. I don't have much insight into how Google works; what I know is that Google uses probability a lot. You can find more about how Google works in this book: http://langvillea.people.cofc.edu/UIUC.pdf
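
As a toy illustration (this is of course not how a search engine is built), here is the difference between a linear scan and a hash-based lookup in C#; HashSet<T> and Dictionary<TKey, TValue> are hash tables, so a membership check costs roughly O(1) instead of O(n).

using System;
using System.Collections.Generic;
using System.Linq;

class HashLookupDemo
{
    static void Main()
    {
        const int n = 1000000;
        var users = Enumerable.Range(0, n)
                              .Select(i => "user" + i)
                              .ToList();

        // Linear search: up to n comparisons in the worst case.
        bool foundLinear = users.Contains("user999999");

        // Hash table: built once, then each lookup is ~O(1).
        var index = new HashSet<string>(users);
        bool foundHashed = index.Contains("user999999");

        Console.WriteLine($"{foundLinear} {foundHashed}");
    }
}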

Entity Framework or SqlDataReader [closed]

I appreciate there are one or two similar questions on SO, but we are a few years on and I know EF's speed and general performance have been enhanced, so those may be out of date.
I am writing a new webservice to replace an old one. Replicating the existing functionality it needs to do just a handful of database operations. These are:
Call existing stored procedures to get data (2)
Send SQL to the database to be executed (should be stored procedures I know) (5)
Update records (2)
Insert records (1)
So 10 operations in total. The database is HUGE but I am only dealing with 3 tables directly (stored procedures do some complex JOINs).
When getting the data I build an array of objects (e.g. Employees) which then get returned by the web service.
From my experience with Entity Framework, and because I'm not doing anything clever with the data, I believe EF is not the right tool for my purpose and SqlDataReader is better (I imagine it is going to be lighter and faster).
Entity Framework focuses mostly on developer productivity - easy to use, easy to get things done.
EF does add some abstraction layers on top of "raw" ADO.NET. It's not designed for large-scale, bulk operations, and it will be slower than "raw" ADO.NET.
Using a SqlDataReader will be faster - but it's also a lot more (developer's) work, too.
Pick whichever is more important to you - getting things done quickly and easily (as a developer), or getting top speed by doing it "the hard way".
There's really no good "one single answer" to this "question" ... pick the right tool / the right approach for the job at hand and use it.
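
For what it's worth, here is a rough side-by-side sketch of the two options. The Employee entity, the CompanyContext and the dbo.GetEmployeesByDepartment stored procedure are hypothetical stand-ins (the question doesn't show the real schema), and the EF code is written EF6-style; the EF version is a few lines, while the SqlDataReader version gives full control over the SQL and the mapping.

using System.Collections.Generic;
using System.Data;
using System.Data.Entity;          // EF6
using System.Data.SqlClient;
using System.Linq;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int DepartmentId { get; set; }
}

// Hypothetical EF context for the example.
public class CompanyContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }
}

public static class EmployeeData
{
    // Option 1: Entity Framework - EF writes the SQL and maps the rows.
    public static List<Employee> GetWithEf(int departmentId)
    {
        using (var ctx = new CompanyContext())
        {
            return ctx.Employees
                      .Where(e => e.DepartmentId == departmentId)
                      .ToList();
        }
    }

    // Option 2: raw ADO.NET - call a stored procedure and map by hand.
    public static List<Employee> GetWithReader(string connectionString, int departmentId)
    {
        var result = new List<Employee>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetEmployeesByDepartment", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@DepartmentId", departmentId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(new Employee
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1),
                        DepartmentId = departmentId
                    });
                }
            }
        }
        return result;
    }
}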