I am absolutely new to the elastic stack.
My problem space: I have a utility which runs on client machines (thousands of them), and a few logs are generated on each of these machines. So there are three data sources: CSV files, log files generated by my application, and the Windows event log. I want to combine these three and generate some useful information out of them, and also build a dashboard with some graphs that will be used by managers.
I have zeroed in on the ELK stack. The idea is to install Beats on the client machines, push the data to Elasticsearch, and then use Kibana for visualization. Since I might have thousands of clients pushing data to the Elasticsearch server, it might not be feasible to keep this data on the server forever, but I need up-to-date visualizations to be available at all times. So I was planning to run periodic queries against the indexed data in Elasticsearch and save the results (which are the real information I need) back into Elasticsearch in a separate index; the Kibana visualizations would be set up on that index, and all the original data could then be cleared. This way I extract and keep the real information and delete what is unnecessary.
My questions to the experts are:
Is my thinking/design correct (with respect to the ELK stack) given the problem statement?
Is it feasible with the ELK stack, and are there any examples or utilities to achieve this?
Thanks
Gaurav
Saving the results of your aggregations back into ElasticSearch is a perfectly valid option. You should also consider Cold storage as an option for storing large amounts of data with long retention.
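Very roughly, that periodic "aggregate, save the summary, drop the raw index" loop could look like the sketch below using the Python client. The index names, fields, and the aggregation itself are placeholders, not a prescription.

```python
# A minimal sketch of the "summarize, then delete the raw data" idea with
# elasticsearch-py. Endpoint, index names, and fields are assumptions.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")   # assumed endpoint

RAW_INDEX = "logs-2024.01.01"     # hypothetical daily raw index
SUMMARY_INDEX = "logs-summary"    # long-lived index that Kibana reads

# 1. Run the periodic aggregation over the raw data.
resp = es.search(
    index=RAW_INDEX,
    size=0,
    aggs={
        "per_host": {
            "terms": {"field": "host.keyword", "size": 1000},
            "aggs": {"errors": {"filter": {"term": {"level": "ERROR"}}}},
        }
    },
)

# 2. Write the aggregated rows into the summary index.
actions = (
    {
        "_index": SUMMARY_INDEX,
        "_source": {
            "day": "2024-01-01",
            "host": bucket["key"],
            "events": bucket["doc_count"],
            "errors": bucket["errors"]["doc_count"],
        },
    }
    for bucket in resp["aggregations"]["per_host"]["buckets"]
)
helpers.bulk(es, actions)

# 3. Once the summary is safely stored, the raw index can be dropped.
es.indices.delete(index=RAW_INDEX)
```

Newer Elasticsearch releases can also do much of this natively with transforms and index lifecycle management (ILM), so it is worth checking those before scripting your own roll-up.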
You tagged logz.io in your question, so it's worth mentioning that there is a logz.io feature called 'Timeless accounts' which uses Optimizers to define query results that should be saved for longer than the retention periods of the underlying logs.
For the record, I work at logz.io
I have a mailer system wherein we send 1-2 lakh mails every day and then store all the click/open actions for those mails.
This is currently working fine in MySQL.
But now, with increasing traffic, we are facing some performance issues with MySQL.
So we are thinking of shifting to Elastic / Cassandra / Mongo.
My possible queries include
a) Getting the users who have (or have not) opened/clicked a specific mail.
b) Calculating the open rate / click rate for a mail.
I think Cassandra might not fit perfectly here, as it is well suited to applications with highly concurrent writes but fewer read queries.
Here there can be many types of read queries, so it will be difficult to decide on a partition key / clustering columns, and too many aggregations would end up running on Cassandra.
What should we use in this case and why?
We are in any case designing data models for both Elastic and Mongo and will then run some benchmarks against them.
The ELK stack (Elasticsearch, Logstash, Kibana) is the best solution for this. In my experience with the ELK stack, it is fast for log processing.
Cassandra is definitely not the right option.
You can use MongoDB since most of the queries are GET queries.
But I have a few points on why Elasticsearch wins out over Mongo for log processing.
Full-text search: Elasticsearch implements a lot of features, such as customized splitting of text into words, customized stemming, faceted search, etc.
Fuzzy searching: a fuzzy search is good for spelling errors; you can find what you are searching for even though you have a spelling mistake (see the sketch below).
Speed: Elasticsearch is able to execute complex queries extremely fast.
As the name itself suggests, Elasticsearch is made for searching, and searching in Mongo is not as fast as in Elasticsearch.
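To illustrate the fuzzy-search point, here is a minimal query with the Python client; the index and field names ("mails", "subject") are placeholders.

```python
# A fuzzy match: the query term still matches documents that contain a
# slightly misspelled variant of it. Index/field names are made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="mails",
    query={"match": {"subject": {"query": "newsletter", "fuzziness": "AUTO"}}},
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["subject"])
```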
But maintaining Elasticsearch also has its own problems.
Refer to:
https://apiumhub.com/tech-blog-barcelona/elastic-search-advantages-books/
https://interviewbubble.com/elasticsearch-pros-and-cons-advantages-and-disadvantages-of-elasticsearch/
Thanks, I think this will help.
Looking at your data structure and data-access pattern, it seems you'll have a message ID for each message, its contents, and alongside that a lot of counters which get updated each time a person opens it, plus perhaps some information like the user ID/email of the people who have opened it.
Since these records are updated on each open of an email, I believe the number of writes is reasonably high. Assuming each mail gets opened on average 10 times/day, that is 10-20 lakh writes per day for 1-2 lakh emails.
Comparing this with reads, I am not sure of your read pattern, but if it's being used for analytics or shown in some dashboard, it will be read maybe a few times a day. Basically, reads are significantly lower than writes.
That being said, if your read query pattern always queries by message ID, then Cassandra/HBase are the best choices you have.
If that's not the case and you have different kinds of queries, or you want to do a lot of analytics, then I would prefer MongoDB.
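To make that concrete, the two queries from the question could be expressed as a single MongoDB aggregation. The collection and field names (mail_events, mail_id, action) are assumptions about how the events might be modelled.

```python
# A hedged sketch of the open-rate / click-rate query in MongoDB, assuming
# one document per sent/open/click event in a "mail_events" collection.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["mailer"]["mail_events"]

pipeline = [
    {"$group": {
        "_id": "$mail_id",
        "sent":    {"$sum": {"$cond": [{"$eq": ["$action", "sent"]}, 1, 0]}},
        "opened":  {"$sum": {"$cond": [{"$eq": ["$action", "open"]}, 1, 0]}},
        "clicked": {"$sum": {"$cond": [{"$eq": ["$action", "click"]}, 1, 0]}},
    }},
    {"$project": {
        "open_rate":  {"$divide": ["$opened", {"$max": ["$sent", 1]}]},
        "click_rate": {"$divide": ["$clicked", {"$max": ["$sent", 1]}]},
    }},
]
for row in events.aggregate(pipeline):
    print(row)
```

Query (a), listing the users who opened or clicked a given mail, would then be a plain indexed find on mail_id and action.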
Elasticsearch is not really a database; it's more of a query engine, and there have been plenty of instances of data loss in ES. If you are planning to keep this as your primary data store, then Elasticsearch/ELK is not a good choice.
You could look at this video to help decide which DB is best in which scenarios.
Alternatively, a summary is available on CodeKarle's website.
I just need a bit more clarity around Tableau extract vs. live. I have 40 people who will use Tableau and a bunch of custom SQL scripts. If we go down the extract path, will the custom SQL queries only run once, with all instances of Tableau using a single result set, or will each instance of Tableau run the custom SQL separately and only cache those results locally?
There are some aspects of your configuration that aren't completely clear from your question. Tableau extracts are a useful tool: they are essentially a temporary, but persistent, cache of query results. They act like a materialized view in many respects.
You will usually want to employ your extract in a central location, often on Tableau Server, so that it is shared by many users. That's typical. With some work, you can make each individual Tableau Desktop user have a copy of the extract (say by distributing packaged workbooks). That makes sense in some environments, say with remote disconnected users, but is not the norm. That use case is similar to sending out data marts to analysts each month with information drawn from a central warehouse.
So the answer to your question is that Tableau provides features that you can employ as you choose to best serve your particular use case: either replicated or shared extracts. The trick is then just to learn how extracts work and employ them as desired.
The easiest way to have a shared extract is to publish it to Tableau Server, either embedded in a workbook or separately as a data source (which is then referenced by workbooks). The easiest way to replicate extracts is to export your workbook as a packaged workbook, after first making an extract.
A Tableau data source is the metadata that references an original source, e.g. CSV, database, etc. A Tableau data source can optionally include an extract that shadows the original source. You can refresh or append to the extract to see new data. If published to Tableau Server, you can have the refreshes happen on a schedule.
Storing the extract centrally on Tableau Server is beneficial, especially for data that changes relatively infrequently. You can capture the query results, offload work from the database, reduce network traffic and speed your visualizations.
You can further improve performance by filtering (and even aggregating) extracts to have only the data needed to display your viz. This is very useful for large data sources like web server logs, where the aggregation can be done once at extract creation time. Extracts can also just capture the results of long-running SQL queries instead of repeating them at visualization time.
If you do make aggregated extracts, just be careful that any further aggregation you do in the visualization makes sense. SUMs of SUMs and MINs of MINs are well defined; averages of averages, etc., are not always meaningful.
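A tiny made-up illustration of that pitfall:

```python
# Two hypothetical regions with pre-aggregated (averaged) order values.
region_a = [10, 10, 10, 10]   # average 10, four orders
region_b = [100]              # average 100, one order

avg_of_avgs = (10 + 100) / 2               # 55.0 -- misleading
true_avg = sum(region_a + region_b) / 5    # 28.0 -- the real average

# Sums behave: the sum of the per-region sums equals the overall sum.
assert sum(region_a) + sum(region_b) == sum(region_a + region_b)
```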
If you use an extract, it will behave like a materialized SQL table, so anything upstream of the Tableau extract will not influence the result until the extract is refreshed.
An extract is used when the data needs to be processed very fast. In this case, a copy of the source data is stored in Tableau's in-memory engine, so query execution is very fast compared to live. The only problem with this method is that the data won't automatically update when the source data is updated.
A live connection is used when handling real-time data. Here each query runs against the source data, so performance won't be as good as with an extract.
If you need to work on a static database, use an extract; otherwise, use live.
I sense from your question that you are worried about performance issues, which is why you are wondering whether your users should use a Tableau extract or a live connection.
In my opinion, for both cases (live vs. extract) it all depends on your infrastructure and the size of the table. It makes no sense to make an extract of a huge table that would take hours to download (for example, 1 billion rows and 400 columns).
If all your users connect directly to a database (not a Tableau Server), you may run into different issues. If the tables they are connecting to are relatively small and your database handles multiple concurrent users well, that may be OK. But if your database has to run many resource-intensive queries in parallel, on big tables, on a database that is not optimized for many simultaneous users and is located in a different time zone with high latency, finding a solution will be a nightmare. In the worst case you may have to change your data structure and upgrade your infrastructure to allow 40 users to access the data simultaneously.
Up to this point, I have been using MongoDB (Node.js + Mongoose) to save posts which belong to a user, so that I can later retrieve them to display in a stream (just like Facebook, Twitter, etc.)
It recently became necessary to allow the user to deeply search his stream; MongoDB's search was insufficient, so I implemented ElasticSearch on my servers (Amazon EC2 m1.large instances running CentOS, FWIW).
My question: I'm now in a position that I'm duplicating the data between MongoDB (where the user's stream is cached) and ElasticSearch (where it is searched).
Is there any disadvantage to moving my cache ENTIRELY into ElasticSearch, getting rid of MongoDB altogether? It seems a waste to double the storage, and there's no other place that I'm accessing this data (it is only used when presenting/searching the stream of posts).
Specifically, I want to make sure I'm not overlooking anything re: performance. I like the idea of reducing MongoDB as a bottleneck, yet I worry about the memory overhead of ElasticSearch. MongoDB runs on its own server in my cloud setup, whereas ElasticSearch is running on the same instances as node.js. This means I would have MORE ElasticSearch servers (the node.js servers are in an auto-scaling array), but they each are not DEDICATED servers (unlike MongoDB).
The only big obstacle to using ES as a "primary datasource" is that there isn't a good backup mechanism right now. The ES team is working on it and expects it to be out by the end of the year, but in the meantime you'll have to implement your own backup scripts.
As far as performance, it's really hard to say because almost every situation is unique. ES benefits from memory - so more is always better. In particular, sorts/filters/facets/geo all like to eat memory. If you aren't doing much in the way of faceting, for example, you may be fine with less memory.
ES doesn't need to run on a dedicated node...but it will happily use as many resources as you give it.
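(For what it's worth, the backup mechanism mentioned above did eventually ship as the snapshot/restore API. A minimal sketch against the REST interface, with the repository path and names made up:)

```python
# Register a shared-filesystem snapshot repository and take a snapshot via
# the REST API. The path must also be whitelisted with path.repo in
# elasticsearch.yml; names and locations here are assumptions.
import requests

ES = "http://localhost:9200"

requests.put(f"{ES}/_snapshot/my_backup", json={
    "type": "fs",
    "settings": {"location": "/mnt/es_backups"},
})

requests.put(
    f"{ES}/_snapshot/my_backup/snapshot_1",
    params={"wait_for_completion": "true"},
)
```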
Another option is to use just the Elasticsearch indexes. You can choose not to store the data in a readable form in ES, so you search in ES and then retrieve the matching documents from MongoDB for your user as needed.
The question below touches on exactly that:
Storing only selected fields and not storing _all in pyes/elasticsearch
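A rough sketch of that split, assuming the searchable text lives in a "body" field and the Mongo _id doubles as the ES document id (all names here are placeholders):

```python
# Index only what must be searchable in ES (with _source disabled, ES keeps
# just the inverted index, not a readable copy); store the full document in
# MongoDB and hydrate search hits from there.
from bson import ObjectId
from elasticsearch import Elasticsearch
from pymongo import MongoClient

es = Elasticsearch("http://localhost:9200")
posts = MongoClient("mongodb://localhost:27017")["app"]["posts"]

es.indices.create(
    index="posts",
    mappings={
        "_source": {"enabled": False},
        "properties": {"body": {"type": "text"}},
    },
)

# Saving a post: full document to Mongo, searchable text to ES, keyed by _id.
doc = {"body": "a post about elasticsearch", "user_id": "u123"}
mongo_id = posts.insert_one(doc).inserted_id
es.index(index="posts", id=str(mongo_id), document={"body": doc["body"]})
es.indices.refresh(index="posts")   # make the new doc searchable right away

# Searching: hit ES first, then fetch the full documents from MongoDB.
resp = es.search(index="posts", query={"match": {"body": "elasticsearch"}})
ids = [ObjectId(hit["_id"]) for hit in resp["hits"]["hits"]]
results = list(posts.find({"_id": {"$in": ids}}))
```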
At the moment, we store a huge amount of logs (30 GB/day x 3 machines = roughly 100 GB on average) from a filer. The logs are zipped.
The current tool for searching those logs finds the relevant log files (according to a time range), copies them locally, unzips them, and searches the XML for information to display.
We are studying the possibility of building a Splunk-like tool to search those logs (they are the output of the message bus: XML messages sent to other systems).
What are the advantages of relying on a Mongo-like DB instead of querying the zipped log files directly?
We could also index some data in a DB and let the program search only the targeted zip files...
What more does MongoDB... or Hadoop bring?
I have worked on MongoDB and am currently working on Hadoop, so I can list some differences that you might find interesting.
MongoDB will need you to store your files as documents (instead of raw text data). HDFS can store them as files and allows you to use custom MapReduce programs to process them.
MongoDB will require you to choose a good shard key in order to distribute the load efficiently across the cluster. Since you are storing log files, this might be difficult.
If you can store the logs as structured documents in MongoDB, it will allow you to query the data with very low latency across huge amounts of logs. My last project had built-in logging based on MongoDB, and analysis was extremely fast compared to MapReduce analysis of raw text logs. But the logging has to be done that way from the ground up.
In Hadoop you have technologies like Hive, HBase, and Impala which will help you analyze text-format logs, but the latency of MapReduce needs to be kept in mind (there are ways to optimize the latency, though).
To summarize: if you can implement MongoDB-based logging across the entire stack, go for MongoDB; but if you already have text-format logs, go for Hadoop. If you can convert your XML data into MongoDB documents in real time, you can get a very efficient solution.
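Sketching that last point with an invented XML layout (the element names here are placeholders, not your real message schema):

```python
# Turn one XML bus message into a queryable MongoDB document.
import xml.etree.ElementTree as ET
from datetime import datetime
from pymongo import MongoClient

messages = MongoClient("mongodb://localhost:27017")["bus"]["messages"]

def ingest(xml_text: str) -> None:
    root = ET.fromstring(xml_text)
    messages.insert_one({
        "msg_id": root.get("id"),
        "ts": datetime.fromisoformat(root.findtext("timestamp")),
        "destination": root.findtext("destination"),
        "raw": xml_text,   # keep the original payload if disk space allows
    })

ingest('<message id="42"><timestamp>2013-01-01T12:00:00</timestamp>'
       '<destination>billing</destination></message>')
```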
My knowledge of Hadoop is limited, so I will focus on MongoDB.
You could store each log entry in MongoDB. When you create an index on the time field, you can easily get a specific time range. MongoDB will have support for full-text search in version 2.4, which would certainly be an interesting feature for your use case, but it isn't production-ready yet. Until then, searching for substrings is a very slow operation. So you would have to convert the XML trees relevant to your searches into MongoDB objects and create indices on the most-searched fields.
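For instance, the time-field index and range lookup could look like this with pymongo (collection and field names are assumptions):

```python
from datetime import datetime
from pymongo import ASCENDING, MongoClient

entries = MongoClient("mongodb://localhost:27017")["logs"]["entries"]
entries.create_index([("ts", ASCENDING)])

# Fetch one hour of log entries; the query is served from the ts index.
window = entries.find({
    "ts": {"$gte": datetime(2013, 1, 1, 12), "$lt": datetime(2013, 1, 1, 13)},
})
for doc in window:
    print(doc["ts"])
```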
But you should be aware that storing your logs in MongoDB means you will need a lot more hard drive space. MongoDB does not compress the payload data and also adds some metadata overhead of its own, so it will require even more disk space than the unzipped logs. Also, when you use the new text search feature, it will take even more disk space. In a presentation I saw, the text index was twice as large as the data it was indexing. Sure, this feature is still a work in progress, but I wouldn't bet on that shrinking much in the final version.
As part of my work we get approximately 25 TB worth of log files annually; currently they are saved on an NFS-based filesystem. Some are archived as zipped/tar.gz files while others reside in plain text format.
I am looking for alternatives to the NFS-based system. I looked at MongoDB and CouchDB. The fact that they are document-oriented databases seems to make them the right fit. However, the log file content would need to be converted to JSON to be stored in the DB, something I am not willing to do. I need to retain the log file content as-is.
As for usage, we intend to put a small REST API in front and allow people to get file listings, the latest files, and the ability to fetch a file.
The proposed solutions/ideas should be some form of distributed database or application-level filesystem where one can store log files and scale horizontally and effectively by adding more machines.
Ankur
Since you don't want querying features, you can use Apache Hadoop.
I believe HDFS and HBase will be a nice fit for this.
You can see a lot of huge-storage stories on the Hadoop PoweredBy page.
Take a look at Vertica, a columnar database supporting parallel processing and fast queries. Comcast used it to analyze about 15 GB/day of SNMP data, running at an average rate of 46,000 samples per second, using five quad-core HP ProLiant servers. I heard some Comcast operations folks rave about Vertica a few weeks ago; they still really like it. It has some nice data compression techniques and "k-safety redundancy", so they could dispense with a SAN.
Update: One of the main advantages of a scalable analytics database approach is that you can do some pretty sophisticated, quasi-real time querying of the log. This might be really valuable for your ops team.
Have you tried looking at Gluster? It is scalable, provides replication and many other features. It also gives you standard file operations, so there is no need to implement another API layer.
http://www.gluster.org/
I would strongly recommend against using a key/value or document-based store for this data (Mongo, Cassandra, etc.). Use a file system. This is because the files are so large and the access pattern is going to be linear scans. One problem that you will run into is retention: most "NoSQL" storage systems use logical deletes, which means that you have to compact your database to remove deleted rows. You'll also have a problem if your individual log records are small and you have to index each one of them; your index will be very large.
Put your data in HDFS with 2-3 way replication in 64 MB chunks in the same format that it's in now.
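For example, with the WebHDFS-based hdfs Python package (namenode URL, user, and paths are assumptions; normally you would set dfs.replication and dfs.blocksize cluster-wide in hdfs-site.xml rather than per file):

```python
# Push one zipped log file into HDFS with explicit replication and block size.
# Namenode address, user, and paths are made up for the example.
from hdfs import InsecureClient

client = InsecureClient("http://namenode:9870", user="loguser")

with open("app-2013-01-01.log.gz", "rb") as f:
    client.write(
        "/logs/2013/01/01/app.log.gz",
        f,
        overwrite=True,
        replication=3,                # 2-3 way replication, as suggested above
        blocksize=64 * 1024 * 1024,   # 64 MB blocks
    )
```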
If you are to choose a document database:
On CouchDB you can use the _attachments API to attach a file as-is to a document; the document itself could contain only metadata (like timestamp, locality, etc.) for indexing. Then you will have a REST API for the documents and the attachments.
A similar approach is possible with Mongo's GridFS, but you would have to build the API yourself.
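A small GridFS sketch along those lines (database name, filenames, and metadata fields are made up):

```python
# Store a zipped log file untouched in GridFS, with a little metadata for
# listing, and stream it back out (e.g. behind the small REST API mentioned
# in the question).
import gridfs
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["logstore"]
fs = gridfs.GridFS(db)

with open("filer-2013-01-01.log.gz", "rb") as f:
    file_id = fs.put(f, filename="filer-2013-01-01.log.gz",
                     metadata={"host": "filer01", "day": "2013-01-01"})

data = fs.get(file_id).read()
```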
Also HDFS is a very nice choice.