Our architecture consists of long-term and short-term analytics.
Today we aggregate all our data using Dropwizard and put it into InfluxDB. Influx is easy to query since it's a time-series database.
The thing is that "today's" data gets old, and therefore keeping it at millisecond (ms) sampling in Influx is a waste.
Influx introduced a downsampling feature where you can roll ms-level records up into weekly/monthly buckets, etc. The problem is that we would then need to expand our queries to hit both the "downsampled" series and the non-downsampled ones.
We thought about an idea where we keep Influx for the short-term queries and push the downsampled data into Redshift as long-term persistent data.
What do you think? Or is that overkill?
As requested, adding more details:
How many rows of data: we expect around 1,000 requests per second, and each request would be a record.
How complex are the queries: pretty straightforward (e.g. how many transactions in the last day/week/year, how many distinct users have logged in, etc.).
How many concurrent users: pretty small, around 50-100.
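For context, a minimal sketch of what the Influx downsampling side could look like on InfluxDB 1.x, assuming a hypothetical "metrics" database and "requests" measurement (the retention-policy names and the field are made up):

```python
# Sketch only: InfluxDB 1.x retention policies + a continuous query for downsampling.
# Database "metrics", measurement "requests", and field "duration_ms" are hypothetical.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="metrics")

# Keep raw (per-request) points for 7 days only.
client.query('CREATE RETENTION POLICY "raw_week" ON "metrics" '
             'DURATION 7d REPLICATION 1 DEFAULT')

# Keep downsampled data for roughly two years.
client.query('CREATE RETENTION POLICY "two_years" ON "metrics" '
             'DURATION 104w REPLICATION 1')

# Continuous query: roll raw requests up into hourly counts in the long-lived policy.
client.query('CREATE CONTINUOUS QUERY "cq_requests_1h" ON "metrics" BEGIN '
             'SELECT count("duration_ms") AS "requests" '
             'INTO "two_years"."requests_1h" FROM "requests" '
             'GROUP BY time(1h) END')
```

Short-term queries would then hit the raw policy, and the downsampled series ("two_years"."requests_1h") would be what gets exported to Redshift for the long-term side.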
Related
I host a popular website and want to store certain user events to analyze later. Things like: clicked on item, added to cart, removed from cart, etc. I imagine about 5,000,000+ new events would be coming in every day.
My basic idea is to take the event, and store it in a row in Postgres along with a unique user id.
What are some strategies to handle this much data? I can't imagine one giant table is realistic. I've had a couple of people recommend things like dumping the tables into Amazon Redshift at the end of every day, Snowflake, Google BigQuery, or Hadoop.
What would you do?
I would partition the table, and as soon as you don't need the detailed data in the live system, detach a partition and export it to an archive and/or aggregate it and put the results into a data warehouse for analysis.
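A minimal sketch of that detach-and-archive step on PostgreSQL 10+, assuming a hypothetical events table partitioned by day (all names are made up):

```python
# Sketch: detach yesterday's daily partition and dump it to a CSV archive (PostgreSQL 10+).
# Table names (events, events_2019_05_01) and columns are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=app")
conn.autocommit = True
cur = conn.cursor()

# Parent table, created once up front, partitioned by day on event_time:
#   CREATE TABLE events (event_time timestamptz, user_id bigint, event_type text)
#     PARTITION BY RANGE (event_time);

# Detach the old daily partition so the live system no longer sees it.
cur.execute("ALTER TABLE events DETACH PARTITION events_2019_05_01")

# Export the detached partition to an archive file, then drop it.
with open("/archive/events_2019_05_01.csv", "w") as f:
    cur.copy_expert("COPY events_2019_05_01 TO STDOUT WITH CSV HEADER", f)
cur.execute("DROP TABLE events_2019_05_01")
```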
We have a similar use case with PostgreSQL 10 and 11. We collect different metrics from customers' websites.
We have several partitioned tables for different data, and together we collect more than 300 million rows per day, i.e. 50-80 GB of data daily. On some special days even 2x-3x more.
The collecting database keeps data for the current and previous day (because, especially around midnight, there can be a big mess with timestamps from different parts of the world).
On the previous PG 9.x versions we transferred data once per day to our main PostgreSQL warehouse DB (currently 20+ TB). Now we have implemented logical replication from the collecting database into the warehouse, because syncing whole partitions had become really heavy and slow.
Besides that, we copy new data daily to BigQuery for really heavy analytical processing, which would take 24+ hours on PostgreSQL (real-life results, trust me). On BQ we get results in minutes, but sometimes pay a lot for it...
So daily partitions are a reasonable segmentation, and especially with logical replication you do not need to worry. From our experience I would recommend not doing any exports to BQ etc. from the collecting database, only from the warehouse.
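For the daily BigQuery copy, a hedged sketch of what the load step can look like with the google-cloud-bigquery client, assuming a CSV extract taken from the warehouse (project/dataset/table names are placeholders):

```python
# Sketch: load a daily CSV extract into BigQuery. Dataset/table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,                 # the CSV was exported with a header row
    autodetect=True,                     # or pass an explicit schema in production
    write_disposition="WRITE_APPEND",    # append each day's extract
)

with open("/archive/events_2019_05_01.csv", "rb") as f:
    job = client.load_table_from_file(
        f, "my-project.analytics.events", job_config=job_config
    )
job.result()  # wait for the load job to finish
```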
Hi
I am in the process of adding analytics to my SaaS app, and I'd love to hear other people's experiences doing this.
Currently I see two different approaches:
Do most of the data handling at the DB level, building and aggregating data into materialized views for a performance boost (see the sketch below). This way the data stays normalized.
Have different cronjobs/processes that run at different intervals (10 min, 1 hour, etc.), query the database, and insert the aggregated results into a new table. In this case, the metrics/analytics are denormalized.
Which approach makes the most sense, or maybe something completely different?
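For concreteness, a minimal sketch of option 1 (a materialized view refreshed on a schedule), using a hypothetical orders table; the unique index is what allows a non-blocking refresh:

```python
# Sketch of option 1: aggregate into a materialized view and refresh it on a schedule.
# Table/column names (orders, created_at, amount) are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=app")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS daily_revenue AS
    SELECT date_trunc('day', created_at) AS day,
           count(*)                      AS orders,
           sum(amount)                   AS revenue
    FROM orders
    GROUP BY 1
""")
# Unique index so the view can be refreshed without locking out readers.
cur.execute("CREATE UNIQUE INDEX IF NOT EXISTS daily_revenue_day ON daily_revenue (day)")

# This part would run from a cron job / scheduler:
cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY daily_revenue")
```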
On really big data, the cronjob or ETL is the only option. You read the data once, aggregate it and never go back. Querying aggregated data is then relatively cheap.
Views will go through tables. If you use "explain" for a view-based query, you might see the data is still being read from tables, possibly using indexes (if corresponding indexes exist). Querying terabytes of data this way is not viable.
The only problem with the cronjob/ETL approach is that it's a PITA to maintain. If you find a bug in the production environment, you are screwed: you might spend days or weeks fixing it and recalculating the aggregations. Simply said, you have to get it right the first time :)
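A minimal sketch of the cronjob/ETL variant on PostgreSQL, assuming hypothetical events and hourly_events tables; making the job an idempotent upsert at least means a buggy interval can be recalculated by simply re-running it:

```python
# Sketch of the cron/ETL approach: roll the last hour of events into an aggregate table.
# Table/column names (events, hourly_events, occurred_at) are hypothetical, and
# hourly_events is assumed to have a unique constraint on (hour, event_type).
import psycopg2

conn = psycopg2.connect("dbname=app")
cur = conn.cursor()

cur.execute("""
    INSERT INTO hourly_events (hour, event_type, event_count)
    SELECT date_trunc('hour', occurred_at), event_type, count(*)
    FROM events
    WHERE occurred_at >= date_trunc('hour', now()) - interval '1 hour'
      AND occurred_at <  date_trunc('hour', now())
    GROUP BY 1, 2
    ON CONFLICT (hour, event_type)
    DO UPDATE SET event_count = EXCLUDED.event_count
""")
conn.commit()
```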
Task
Hi, I have 2-3 thousand users online. I also have groups, teams and a few (2-3) other entities which have users. So about every 10 seconds I want to show online statistics (querying various parameters of users and the other entities). Every 5-30 seconds, I believe, a user can change his status, and every hour or so move to another group or team or whatever. Which NoSQL database should I use? I don't have experience with them, I just know NoSQL is quite fast, and have read a little about Redis, MongoDB and Cassandra.
Of course, I store this data model in an RDBMS (except online status and statistics).
I'm thinking about the following solution:
Store all data as JSON in Redis, with an id prefix on the key (e.g. 'user_' + userId):
user_id:{"status":"123", "group":"group_id", "team":"team_id", "firstname":"firstname", "lastname":"lastname", ... other attributes]}
group_id:{users:[user_id,user_id,...], ... other group attributes}
team_id:{users:[user_id,user_id,...], ... other team attributes}
...
What would you recommend or propose? Will it be convenient to query such data?
Maybe I can use some popular standard algorithms to compute the statistics (e.g. a Monte Carlo algorithm for percentage statistics, I dunno). Thanks
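One possible variant of that model with redis-py, sketched under the assumption that a hash per user and a set per group/team is used instead of a single JSON blob, so a status change is a cheap partial update (key names are only examples):

```python
# Sketch: hash per user, set of member ids per group/team. Key names are examples.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def set_user(user_id, status, group_id, team_id, firstname, lastname):
    r.hset(f"user:{user_id}", mapping={
        "status": status, "group": group_id, "team": team_id,
        "firstname": firstname, "lastname": lastname,
    })
    r.sadd(f"group:{group_id}:users", user_id)
    r.sadd(f"team:{team_id}:users", user_id)

def update_status(user_id, status):
    # Cheap partial update; no need to rewrite a whole JSON document.
    r.hset(f"user:{user_id}", "status", status)

def group_online_count(group_id):
    # Example stat: how many members of a group are currently "online".
    members = r.smembers(f"group:{group_id}:users")
    return sum(1 for uid in members if r.hget(f"user:{uid}", "status") == "online")
```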
You could use Redis HyperLogLog, a feature added in Redis 2.8.9.
This blog post describes how to very efficiently calculate some statistics that look quite similar to the ones you need.
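For the distinct-count style statistics specifically, a small redis-py sketch of that HyperLogLog approach (the key layout is made up for illustration); PFCOUNT returns an approximate distinct count using only a few KB per key:

```python
# Sketch: approximate distinct "users seen" counters with Redis HyperLogLog.
# Key names (online:<day>, online:group:<id>:<day>) are made up for illustration.
import redis

r = redis.Redis()

def mark_seen(user_id, group_id, day):
    r.pfadd(f"online:{day}", user_id)                    # distinct users that day
    r.pfadd(f"online:group:{group_id}:{day}", user_id)   # distinct users per group

def distinct_users(day):
    return r.pfcount(f"online:{day}")   # approximate, ~0.81% standard error

def distinct_users_over(days):
    # Merge several daily keys and count the union.
    r.pfmerge("online:merged", *[f"online:{d}" for d in days])
    return r.pfcount("online:merged")
```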
I am currently testing Redshift for a SaaS near-realtime analytics application.
Query performance is fine on a 100M-row dataset.
However, the concurrency limit of 15 queries per cluster will become a problem when more users are using the application at the same time.
I cannot cache all aggregated results, since we allow customizing the filters on each query (ad-hoc querying).
The requirements for the application are:
queries must return results within 10s
ad-hoc queries with filters on more than 100 columns
From 1 to 50 clients connected to the application at the same time
dataset growing at a rate of 10M rows/day
typical queries are SELECTs with aggregate functions (COUNT, AVG) and 1 or 2 joins
Is Redshift not correct for this use case? What other technologies would you consider for those requirements?
This question was also posted on the Redshift Forum: https://forums.aws.amazon.com/thread.jspa?messageID=498430
I'm cross-posting my answer for others who find this question via Google. :)
In the old days we would have used an OLAP product for this, something like Essbase or Analysis Services. If you want to look into OLAP there is a very nice open-source implementation called Mondrian that can run over a variety of databases (including Redshift, AFAIK). Also check out Saiku for an OSS browser-based OLAP query tool.
I think you should test the behaviour of Redshift with more than 15 concurrent queries. I suspect that it will not be noticeable to users, as the queries will simply queue for a second or two.
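A rough way to check that queueing behaviour is to fire more simultaneous queries than the slot limit and time them; a sketch along these lines (connection string and query are placeholders):

```python
# Rough sketch: fire N concurrent queries at Redshift and see how much queueing adds.
# The DSN and the query are placeholders for your own cluster and workload.
import time
from concurrent.futures import ThreadPoolExecutor
import psycopg2

DSN = "host=my-cluster.example.redshift.amazonaws.com port=5439 dbname=analytics user=... password=..."
QUERY = "SELECT count(*), avg(amount) FROM events WHERE event_date > current_date - 7"

def timed_query(_):
    conn = psycopg2.connect(DSN)
    try:
        start = time.time()
        with conn.cursor() as cur:
            cur.execute(QUERY)
            cur.fetchall()
        return time.time() - start
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=30) as pool:   # well above the 15-slot limit
    durations = list(pool.map(timed_query, range(30)))
print(sorted(durations))  # the tail shows how long queries sat in the queue
```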
If you prove that Redshift won't work you could test Vertica's free 3-node edition. It's a bit more mature than Redshift (i.e. it will handle more concurrent users) and much more flexible about data loading.
Hadoop/Impala is overly complex for a dataset of your size, in my opinion. It is also not designed for a large number of concurrent queries or short duration queries.
Shark/Spark is designed for the case where your data is arriving quickly and you have a limited set of metrics that you can pre-calculate. Again, this does not seem to match your requirements.
Good luck.
Redshift is very sensitive to the keys used in joins and group by/order by. There are no dynamic indexes, so usually you define your structure to suit the tasks.
What you need to ensure is that your joins match the structure 100%. Look at the explain plans: you should not have any redistribution or broadcasting, and no leader-node activities (such as sorting). That sounds like the most critical requirement considering the number of queries you are going to have.
The requirement to be able to filter/aggregate on any of 100 columns can be a problem as well. If the structure (dist keys, sort keys) doesn't match the columns most of the time, you won't be able to take advantage of Redshift's optimisations. However, these are scalability problems: you can increase the number of nodes to match your performance, you just might be surprised by the cost of the optimal solution.
This may not be a serious problem if the number of projected columns is small; otherwise Redshift will have to hold large amounts of data in memory (and eventually spill) while sorting or aggregating (even in a distributed manner), and that can again impact performance.
Beyond scaling, you can always implement sharding or mirroring to overcome some queue/connection limits, or contact AWS support to have some limits lifted.
You should also consider pre-aggregation. Redshift can scan billions of rows in seconds as long as it does not need to do transformations like reordering. And it can store petabytes of data, so it's OK if you store data in excess.
So in summary, I don't think your use case is unsuitable based just on the definition you provided. It might require work, and the details depend on the exact usage patterns.
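For reference, a sketch of what "matching the structure" can look like at the DDL level: distribute on the usual join key and sort on the usual time filter (table and column names are hypothetical):

```python
# Sketch: Redshift table whose dist/sort keys match the typical join + time filter.
# Table/column names and the connection string are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect("host=my-cluster... port=5439 dbname=analytics user=... password=...")

ddl = """
CREATE TABLE events (
    event_time  timestamp     NOT NULL,
    user_id     bigint        NOT NULL,
    event_type  varchar(64),
    amount      decimal(12,2)
)
DISTSTYLE KEY
DISTKEY (user_id)        -- same key as the table it is usually joined to
SORTKEY (event_time);    -- typical range filter, lets Redshift skip blocks
"""
with conn.cursor() as cur:
    cur.execute(ddl)
conn.commit()
```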
I have 10 different queries and a total of 40 columns.
I'm looking for solutions among the available big-data NoSQL databases that can handle read- and write-intensive jobs (multiple queries with an SLA).
I tried HBase, but it's fast only for row-key (scan) searches; for other queries (not running on the row key) the response time is quite high. Duplicating the data with different row keys is the only option for quick responses, but for 10 queries, creating 10 different tables is not a good idea.
Please suggest alternatives.
Have you tried Druid? It is inspired by Dremel, the precursor of Google BigQuery.
From the documentation:
Druid is a good fit for products that require real-time data ingestion of a single, large data stream. Especially if you are targeting no-downtime operation and are building your product on top of a time-oriented summarization of the incoming data stream. When talking about query speed it is important to clarify what "fast" means: with Druid it is entirely within the realm of possibility (we have done it) to achieve queries that run in less than a second across trillions of rows of data.
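To give a feel for the query side, Druid's native API is JSON over HTTP against the broker; a hedged sketch of a simple timeseries aggregation (datasource, interval, and broker address are placeholders):

```python
# Sketch: Druid native "timeseries" query via the broker's HTTP API.
# Datasource, interval, metric name, and broker URL are placeholders.
import json
import requests

query = {
    "queryType": "timeseries",
    "dataSource": "events",
    "granularity": "hour",
    "aggregations": [
        {"type": "count", "name": "rows"},
        {"type": "longSum", "name": "amount_total", "fieldName": "amount"},
    ],
    "intervals": ["2019-05-01/2019-05-02"],
}

resp = requests.post(
    "http://druid-broker:8082/druid/v2/",
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
    timeout=30,
)
for bucket in resp.json():
    print(bucket["timestamp"], bucket["result"])
```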