Telnet memcached stats - number of keys read in a time duration - memcached

Is it possible to connect to a memcached server using telnet and fetch the number of keys read in the last 24 hours (or any other time duration)?
Thanks

The standard memcached server doesn't provide that information out of the box. It's easy enough, though, to interrogate the daemon at regular intervals and store the relevant counters in order to produce stats and graphs. An easy example of that approach is shown by scripts such as memcache-stats.sh.
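For example, a minimal sketch of the idea (assuming memcached is listening on localhost:11211 and nc is available; the counters are cumulative since the server started, so the number of reads in the last 24 hours is the difference between two samples taken 24 hours apart):

printf 'stats\nquit\n' | nc localhost 11211 | grep -E 'cmd_get|get_hits|get_misses'

Run that from cron, store the output with a timestamp, and subtract yesterday's values from today's. Note that these counters count get operations, not distinct keys.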


Saving query results back into elastic stack

I am absolutely new to the Elastic Stack.
My problem space is this: I have a utility that runs on client machines (thousands of them), and a few logs are generated on each of these machines. So I have three data sources - CSV files, log files generated by my application, and the Windows event log. I want to combine these three and generate some useful information out of them, and also build a dashboard with some graphs that will be used by managers.
I have zeroed in on the ELK stack. The idea is to install Beats on the client machines, push the data to Elasticsearch, and then use Kibana for visualization. Since I might have thousands of clients pushing data to the Elasticsearch server, it might not be feasible to keep this data on the server forever, but I need up-to-date visualizations to always be available. So I was planning to run periodic queries on the indexed data in Elasticsearch and save the generated result (which is the real information I need) back into Elasticsearch in a separate index, with the Kibana visualizations set up on top of this index. All the original data can then be cleared. This way I extract the real information, keep it, and delete what's unnecessary.
My questions to the experts are:
Is my thinking/design correct (with respect to the ELK stack) given the problem statement?
Is it feasible in the ELK stack, and are there any examples or utilities to achieve this?
Thanks
Gaurav
Saving the results of your aggregations back into Elasticsearch is a perfectly valid option. You should also consider cold storage as an option for retaining large amounts of data for a long time.
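As a rough sketch of what such a periodic job could look like (the index names app-logs and daily-summaries, the field host.keyword, and the localhost:9200 address are only illustrative assumptions; jq is used to pull out the aggregation result), you could run an aggregation over the raw data and index the result into a separate summary index:

curl -s 'localhost:9200/app-logs/_search' -H 'Content-Type: application/json' -d '{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-1d/d", "lt": "now/d" } } },
  "aggs": { "per_host": { "terms": { "field": "host.keyword" } } }
}' | jq '.aggregations' > summary.json

curl -s -XPOST 'localhost:9200/daily-summaries/_doc' -H 'Content-Type: application/json' --data-binary @summary.json

Once the summary document is indexed, the raw indices for that period can be deleted or expired via an index lifecycle policy.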
You tagged logz.io in your question, so it's worth mentioning that there is a logz.io feature called 'Timeless accounts' which uses Optimizers to define query results that should be saved for longer than the retention periods of the underlying logs.
For the record, I work at logz.io

How can I calculate the exact difference between a server's time and GMT time?

How can I calculate the exact difference between a server's time and GMT time?
I want to send a POST request to a server at a specific time, but I know the server's clock is not accurate, so I want to calculate the inaccuracy of the server clock (in milliseconds) in order to send my request on time. (If I send the request too early, the server will block me.)
I tried this command on Ubuntu, but it only shows the server time:
curl -I --silent www.server.com | grep "^Date"
If I can calculate the difference between my PC's clock and the server's clock, it would be very helpful.
There are many options, of course. Here’s a suggestion.
Write a shell script or .bat file that runs your curl and grep commands and feeds the result into a program that you write.
Write the program in Java or another language that runs on the Java Virtual Machine, since these probably have the best support for handling date and time, though all of the major programming languages can do it.
In your program you may use DateTimeFormatter.RFC_1123_DATE_TIME for parsing the server time into an OffsetDateTime. Get the PC time as another OffsetDateTime and use Duration.between() for finding the difference (positive or negative).
Beware that there will be a delay between reading the server time and reading the PC time, so the result will not be perfectly accurate. If you want a better estimate of the upper and lower bounds, read the PC time both before and after reading the server time.
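A minimal sketch of such a program could look like this (the class name ClockSkew is just an example; it assumes the Date header is piped in on standard input, for instance from the curl/grep command above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.time.Duration;
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class ClockSkew {
    public static void main(String[] args) throws Exception {
        // Read a line like "Date: Tue, 03 Jun 2025 10:15:30 GMT" from standard input
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String header = in.readLine().replaceFirst("^Date:\\s*", "").trim();

        // Parse the server time, then read the local clock right afterwards
        OffsetDateTime serverTime = OffsetDateTime.parse(header, DateTimeFormatter.RFC_1123_DATE_TIME);
        OffsetDateTime localTime = OffsetDateTime.now();

        // Positive result means the server clock is ahead of the local clock
        Duration skew = Duration.between(localTime, serverTime);
        System.out.println("Server clock minus local clock: " + skew.toMillis() + " ms");
    }
}

You could then call it like curl -I --silent www.server.com | grep "^Date" | java ClockSkew. Note that the HTTP Date header only has whole-second resolution, so the result can never be more precise than about a second.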
Links
Oracle tutorial: Date Time explaining how to use java.time.
Documentation:
DateTimeFormatter.RFC_1123_DATE_TIME
OffsetDateTime
Duration.between()

Is it possible to see the incoming queries in mongodb to debug/trace issues?

I have MongoDB running on my MacBook (OS X).
Is it possible to run some kind of a 'monitor' that will display any incoming requests to my MongoDB?
I need to check whether the queries my application sends are formatted correctly.
You will find the following tools (or utilities) useful for monitoring as well as diagnostic purposes. All of them except mtools are packaged with the MongoDB server (though they are sometimes installed separately).
1. Database Profiler
The profiler stores every CRUD operation coming into the database; it is off, by default. Having it on is quite expensive; it turns every read into a read+insert, and every write into a write+insert. CAUTION: Keeping it on can quickly overpower the server with incoming operations - saturating the IO.
But, it is a very useful tool when used for a short time to find what is going on with database operations. It is recommended to be used in development environments.
The current profiler setting can be read with the command db.getProfilingLevel(). To activate the profiler, use the db.setProfilingLevel(level) command. Verify what is captured by the profiler in the db.system.profile collection; you can query it like any other collection using the find or aggregate methods. The db.system.profile document field op specifies the type of database operation; e.g., for queries it is "query".
The profiler has three levels:
0 - captures nothing (the profiler is off; this is the default).
1 - captures every operation that takes longer than 100 ms.
2 - captures every operation; this can be used to find the actual load that is coming in.
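For example, a quick session in the mongo shell might look like this (the database name mydb is only illustrative):

use mydb
db.setProfilingLevel(2)            // capture every operation
// ... run the queries from your application ...
db.system.profile.find({ op: "query" }).sort({ ts: -1 }).limit(5).pretty()
db.setProfilingLevel(0)            // turn the profiler off again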
2. mongoreplay
mongoreplay is a traffic capture and replay tool for MongoDB that you can use to inspect and record commands sent to a MongoDB instance, and then replay those commands back onto another host at a later time. NOTE: Available for Linux and macOS.
3. mongostat
The mongostat command-line utility provides a quick overview of the status of a currently running mongod instance.
You can view the incoming operations in real time. The statistics are displayed every second by default; there are various options to customize the output, the time interval, etc.
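For example (assuming mongod is running locally on the default port), this prints one line of counters - inserts, queries, updates, and so on - every 5 seconds:

mongostat 5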
4. mtools
mtools is a collection of helper scripts to parse, filter, and visualize (through graphs) MongoDB log files.
You will find the mlogfilter script useful; it reduces the amount of information from MongoDB log files using various command options. For example, mlogfilter mongod.log --operation query filters the log by query operations only.

SqlBase and Gupta windows to the sky

Can anybody advise, or does anyone have experience with, running an SqlBase database in a cloud environment with a Gupta application that is stored on local PCs?
Thanks.
We have some experience running a SQL database (Oracle, SQL Server, SqlBase) on a remote server connected over a WAN. Most often data access is very slow, and you have to write your application carefully.
The reason for the slowness is usually not the bandwidth but the number of hops an IP packet takes. Each hop adds some milliseconds of delay, which often sums up to a painful experience. So it's fine to fetch one big blob from the database, and it's also fine to fetch large result sets. But when there are a lot of smaller queries, it gets very slow.
There are two solutions to this problem:
1) Use a dedicated line from client to server if possible.
2) Write your application in a way that minimizes the number of queries.

Architecture to create an uptime monitor in Node.js

What's the best solution for using Node.js and Redis to create an uptime monitoring system? Can I use Redis as a queue, even though it is probably not the best way to save the information (maybe MongoDB is)?
It seems pretty simple, but needing more than one server to confirm that a monitored host is really down, and making everything work together, is not so easy.
To monitor uptime, you would use a cron job on the system. On each run, you would check whether the host is up and how long the check takes, and in that script you would save your data in Redis.
To do this in Node.js, you would create a script that checks the status of the server: just make an HTTP request to the server (or a ping, whatever) and record whether it fails or not, then write the result to Redis. How you do it does not matter much, because the script (if you run the cron job every 30 seconds) has 30 seconds before the next run, so you don't have to worry about the request to the server taking too long. How you save your data is up to you; in this case even MySQL would work (if you are only monitoring a small number of sites). A minimal sketch is shown after the link below.
More on Cron # Wikipedia
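Here is a minimal sketch of such a check script (assumptions: Node.js with the 'redis' npm package installed, a Redis server on localhost, and https://www.example.com/ standing in for the site you monitor):

// uptime-check.js - run this from cron (or setInterval); each run stores one sample in Redis
const https = require('https');
const { createClient } = require('redis');

const TARGET = 'https://www.example.com/';   // stand-in for the site you monitor

async function check() {
  const redis = createClient();              // defaults to localhost:6379
  await redis.connect();

  const started = Date.now();
  const sample = await new Promise((resolve) => {
    const req = https.get(TARGET, (res) => {
      res.resume();                          // drain the body; only the status matters
      resolve({ ts: started, up: res.statusCode < 500, ms: Date.now() - started });
    });
    req.on('error', () => resolve({ ts: started, up: false, ms: Date.now() - started }));
  });

  await redis.rPush('uptime:' + TARGET, JSON.stringify(sample));
  await redis.quit();
  console.log(sample);
}

check().catch(console.error);

Each run appends one sample to a Redis list, which you can later read back to compute uptime percentages and response-time graphs.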
Can I use Redis as a queue, even though it is probably not the best way to save the information (maybe MongoDB is)?
You can (and should) use Redis as your queue. It is going to be extremely fast.
I also think saving the information inside Redis is a very good option. Unfortunately, Redis does not do any timing (yet). I think you could/should use Beanstalkd to put messages on the queue that get delivered when needed (every x seconds). I also think cron is not a very good idea, because you would need a lot of cron jobs, and when using a queue you can also do your work faster (share the load among multiple processes).
Also, I don't think you need that much memory to keep everything in memory (which makes the site fast), because the dataset is going to be relatively simple. Even if you aren't able to fit the entire dataset in memory (it would be smart to get more memory, if you ask me), you can rely on Redis's virtual memory.
It seems pretty simple, but needing more than one server to confirm that a monitored host is really down, and making everything work together, is not so easy.
Sharding/replication is what I think you should read up on to solve this (hard) problem. Luckily, Redis supports replication (sharding can also be achieved). MongoDB supports sharding/replication out of the box. To be honest, I don't think you need sharding yet, and your dataset is rather simple, so Redis is going to be faster:
http://redis.io/topics/replication
http://www.mongodb.org/display/DOCS/Sharding+Introduction
http://www.mongodb.org/display/DOCS/Replication
http://ngchi.wordpress.com/2010/08/23/towards-auto-sharding-in-your-node-js-app/
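For the Redis side, turning a second instance into a replica is a one-line configuration change. A sketch, where 192.168.1.10 stands in for your master's address (the directive is spelled slaveof in the older releases current at the time of the links above, and replicaof from Redis 5 on):

# redis.conf on the replica
replicaof 192.168.1.10 6379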