Limit Element Search to driver RFT - ui-automation

Is there a way to limit the search to a single process or driver in Rational Functional Tester? If I have two instances up at the same time, it doesn't know which one to look at and crashes.

If you are using descriptive programming, you could limit your search based on the process ID. Have you tried that?


ElasticSearch vs MongoDB vs Cassandra for mailer logs

I have a mailer system wherein we send 1-2 lakh mails every day and then store all the click / open actions on those mails.
This is currently working fine in MySQL.
But now, with increasing traffic, we are facing some performance issues with MySQL.
So we are thinking of shifting to Elastic / Cassandra / Mongo.
My possible queries include
a) Getting the users who have (or have not) opened / clicked a specific mail.
b) Calculating the open rate / click rate for a mail.
I think Cassandra might not fit here perfectly, as it is well suited for applications with high concurrent writes but fewer read queries.
Here there can be many types of read queries, so it will be difficult to decide on the partitioning / clustering key, and too many aggregations would be running on Cassandra.
What should we use in this case and why?
In any case, we are designing data models for both Elastic and Mongo and will then run some benchmarks on them.
The ELK stack (Elasticsearch, Logstash, Kibana) is the best solution for this. In my experience with the ELK stack, it is fast for log processing.
Cassandra is definitely not the right option.
You can use MongoDB since most of the queries are GET queries.
But here are a few points on why Elasticsearch has the edge over Mongo for log processing.
Full-text search: Elasticsearch implements a lot of features, such as customized splitting of text into words, customized stemming, faceted search, etc.
Fuzzy searching: a fuzzy search is good for spelling errors. You can find what you are searching for even if you have made a spelling mistake.
Speed: Elasticsearch is able to execute complex queries extremely fast.
As the name itself suggests, Elasticsearch is made for searching, and searching in Mongo is not as fast as in Elasticsearch.
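To make the fuzzy-search point concrete, here is a minimal sketch using the official elasticsearch Python client (7.x-style body argument); the mails index and subject field are assumptions for illustration, not something from the question:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Fuzzy matching tolerates small spelling mistakes in the search term.
result = es.search(
    index="mails",                     # hypothetical index with one document per mail
    body={
        "query": {
            "fuzzy": {
                "subject": {               # hypothetical text field
                    "value": "newsleter",  # misspelled on purpose
                    "fuzziness": "AUTO",
                }
            }
        }
    },
)
print(result["hits"]["total"])
```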
But maintaining Elasticsearch also has its own problems.
refer:
https://apiumhub.com/tech-blog-barcelona/elastic-search-advantages-books/
https://interviewbubble.com/elasticsearch-pros-and-cons-advantages-and-disadvantages-of-elasticsearch/
Thanks, I think this will help.
If I look at your data structure and data access pattern, it looks like you'll have a message id for each message, its contents, and then, along with it, a lot of counters which get updated each time a person opens it, and maybe some information like the user id / email of the people who have opened it.
Since these records are updated on each open of an email, I believe the number of writes is reasonably high. Assuming each mail gets opened on average 10 times/day, that is 10-20 lakh writes per day for 1-2 lakh emails.
Comparing this with reads, I am not sure of your read pattern, but if it's being used for analytics purposes, or to show in some dashboard, it'll maybe be read a few times a day. Basically, reads are significantly lower than writes.
That being said, if your read query pattern is of the form where you always query by message id, then Cassandra/HBase are the best choices you have.
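For the always-query-by-message-id case, here is a hedged sketch of what such a Cassandra model could look like, using the cassandra-driver Python package; the keyspace, table, and column names are made up for illustration:

```python
import uuid
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("mailer")  # hypothetical keyspace

# Partitioning by message_id keeps all events for one mail in one partition,
# so "who opened / clicked mail X" is a single-partition read.
session.execute("""
    CREATE TABLE IF NOT EXISTS mail_events (
        message_id uuid,
        event_time timestamp,
        user_id    text,
        event_type text,      -- 'open' or 'click'
        PRIMARY KEY (message_id, event_time, user_id)
    )
""")

some_message_id = uuid.uuid4()
rows = session.execute(
    "SELECT user_id, event_type FROM mail_events WHERE message_id = %s",
    (some_message_id,),
)
for row in rows:
    print(row.user_id, row.event_type)
```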
If that's not the case and you have different kinds of queries, or you want to do a lot of analytics, then I would prefer MongoDB.
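And for the varied-queries case, a minimal sketch of counting opens and clicks per mail with MongoDB's aggregation pipeline via pymongo; the mail_events collection and its fields are assumptions about the data model:

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mailer"]

# Assumes one event document per open/click:
# {"mail_id": ..., "user_id": ..., "type": "open" | "click"}
pipeline = [
    {"$group": {
        "_id": "$mail_id",
        "opens":  {"$sum": {"$cond": [{"$eq": ["$type", "open"]}, 1, 0]}},
        "clicks": {"$sum": {"$cond": [{"$eq": ["$type", "click"]}, 1, 0]}},
    }},
]
for doc in db.mail_events.aggregate(pipeline):
    # Open/click *rates* would divide these counts by the number of mails sent,
    # tracked elsewhere.
    print(doc["_id"], doc["opens"], doc["clicks"])
```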
Elasticsearch is not really a database; it's more of a query engine, and there are a lot of instances where data loss happens in ES. If you are planning to keep this as your primary data store, then Elasticsearch/ELK is not a good choice.
You could look at this video to help come to a conclusion on which DB is best for which scenarios.
Alternatively, a summary is available on CodeKarle's website.

Will this implementation affect the user experience?

I have been assigned the task of implementing functionality to shorten typed text.
For example, if I type text like "you" and highlight it, it has to change to "u".
I will have a table which holds a list of words, with the longer version of the text and the text it should be replaced with. So whenever a user types a word and highlights it, I want to query the db for a match; if a match is found, I want to replace the word with the shortened word.
This is not my idea; I have simply been assigned this implementation.
I think this functionality will slow down the app's responsiveness, and it has some disadvantages for the user-friendliness of the application.
So I'd like to hear your opinions on what disadvantages it has and how I can implement this in a better manner. Or is it OK to have this kind of functionality? Won't it affect the app speed?
It's hard to imagine that you'll see a noticeable decrease in performance. Even the iPhone 3G's processor runs at around 400MHz, and someone typing really fast on an iPhone might get four or five characters entered in a second. A simple implementation of the sort of thing you're talking about would involve a lookup in a data structure such as a dictionary, tree, or database, and you ought to be able to do that pretty quickly.
Why not try it? Implement the simplest thing you can think of and measure its performance. For the purpose of measuring, you might want to use a loop to repeatedly look up words from an array. Count the number of lookups you can do in, say, 20 seconds, and divide by 20 to get the average number per second.
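As a rough sketch of that measurement (in Python here purely to illustrate the idea; on the device you would run the same loop in your app's own language under a profiler, and the word table is made up):

```python
import time

# Hypothetical abbreviation table; in the real app this could be an in-memory
# dictionary loaded once from the database at startup.
shortforms = {"you": "u", "are": "r", "for": "4", "to": "2", "please": "pls"}

words = ["you", "are", "hello", "for", "please", "unknown"] * 1000

start = time.perf_counter()
hits = 0
for w in words:
    if w in shortforms:  # one dictionary lookup per highlighted word
        hits += 1
elapsed = time.perf_counter() - start

print(f"{len(words)} lookups, {hits} matches, "
      f"{len(words) / elapsed:.0f} lookups/second")
```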
I don't think it will cost much performance. In any case, you can use the profiler to check how long every method is taking. As for the functionality, I believe you should give the user the opportunity to "undo" and keep their own word (same as Apple's auto-correction).

Is there a way to get around space usage issues when using long field names in MongoDB?

It looks like having descriptive field names (the ones I like the most) can take up a lot of space for big collections. I don't like the idea of giving them short and cryptic names to save memory, nor do I like the idea of translating field names to shortened fields somewhere in the application.
Is there a way to tell mongo not to store every field name as text?
For now the only thing you can do is vote and wait for SERVER-863 to be resolved. After almost a year of discussion, the status of this issue has been changed to "planned but not scheduled"...
The workaround is to use document mapping libraries like Spring Data Document or Morphia (in the Java world) and work with nicely named objects. But the underlying database names are still cryptic.
If you are using an "object-document mapper" library to access MongoDB, many of them provide facilities for using descriptive names within your application code, but storing short names in the database. If your application has a data access layer, it may be possible for you to implement this logic in your application code, as well.
Since you haven't said what language you're using, or whether you're using an ODM at all, I can't provide any more guidance on which ODMs might fit your needs.
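As an illustration only: if you happened to be in Python, MongoEngine's db_field option keeps descriptive names in application code while storing short names in the database (a minimal sketch with a made-up model and field names):

```python
from mongoengine import Document, StringField, IntField, connect

connect("mydb")

class Person(Document):
    # Descriptive attribute names in application code,
    # one-letter field names in the stored documents.
    full_name = StringField(db_field="n")
    email_address = StringField(db_field="e")
    login_count = IntField(db_field="c", default=0)

Person(full_name="Ada Lovelace", email_address="ada@example.com").save()
# Stored roughly as: {"n": "Ada Lovelace", "e": "ada@example.com", "c": 0}
```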

Performance of repeatedly executing the same JavaScript on MongoDB?

I'm looking at using some JavaScript in a MongoDB query. I have a couple of choices:
db.system.js.save the function in the db then execute it
db.myCollection.find with a $where clause and send the JS each time
exec_js in MongoEngine (which I imagine uses one of the above)
I plan to use the JavaScript in a regularly used query that's executed as part of a request to a site or API (i.e. not a batch administrative job), so it's important that the query executes with reasonable speed.
I'm looking at a 30ish line function.
Is the Javascript interpreted fresh each time? Will the performance be ok? Is it a sensible basis upon which to build queries?
Is the Javascript interpreted fresh each time?
Pretty much. MongoDB only has one "javascript instance" per running instance of MongoDB. You'll notice this if you try to run two different Map/Reduces at the same time.
Will the performance be ok?
Obviously, there are different definitions of "OK" here. The $where clause cannot use indexes, though you can combine that clause with another indexed query. In either case each object will need to be pushed from BSON over to the JavaScript run-time and then acted on inside the run-time.
The process is definitely not what you would call "performant". Of course, by that measure Map/Reduce is also not very performant and people use that on production systems.
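As a concrete sketch of combining $where with an indexed filter (pymongo; the collection, index, and predicate are made up for illustration):

```python
from pymongo import MongoClient

db = MongoClient()["mydb"]
db.myCollection.create_index("status")  # ordinary B-tree index

# The indexed {"status": "active"} clause narrows the candidate set first;
# only the surviving documents are handed to the JavaScript engine for $where.
cursor = db.myCollection.find({
    "status": "active",
    "$where": "this.credits - this.debits > 0",  # a 30-line function works the same way, just slower
})
for doc in cursor:
    print(doc["_id"])
```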
Is it a sensible basis upon which to build queries?
The real barrier here isn't the number of lines in the code, it's the number of possible documents this code will interpret. Even though it's "server-side" JavaScript, it's still a bunch of work that the server has to do (in one thread, in an interpreted environment).
If you can test it and scope it correctly, it may well work out. Just don't expect miracles.
What is your point here? Write a JS script and call it regularly through cron. What would be the problem with that?

Best data store w/full text search for lots of small documents? (e.g. a Splunk-like system)

We are speccing out a system that will index and store zillions of syslog messages. These are text messages, with a few attributes (system name, date/time, message type, message body), that are typically 100 to 1500 bytes each.
We generate 2 to 10 gb of these messages per day, and need to retain at least 30 days of them.
The splunk system has a really great indexing and document compression system.
What to use?
I thought of mongodb, but it seems inappropriate for documents of this small size.
SQL Server is a possibility, but seems perhaps not super efficient for this purpose.
Text files with lucene?
-- The Windows file system doesn't always like dirs with zillions of files
Suggestions?
Thanks!
I thought of mongodb, but it seems inappropriate for documents of this small size
There's a company called Boxed Ice that actually builds a server monitoring system using MongoDB. I would argue that it's definitely appropriate.
These are text messages, with a few attributes (system name, date/time, message type, message body), that are typically 100 to 1500 bytes each.
From a MongoDB perspective, we would say that you are storing lots of small documents with a few attributes. In a case like this, MongoDB has several benefits.
It can handle changing attributes seamlessly.
It will flexibly handle different types.
We generate 2 to 10 gb of these messages per day, and need to retain at least 30 days of them.
This is well within the type of data range that MongoDB can handle. There are several different methods of handling the 30-day retention period. These will depend on your reporting needs. I would poke around on the groups for ideas here.
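As one hedged example of handling retention in current MongoDB versions, a TTL index expires old documents automatically (a pymongo sketch; the collection and field names are assumptions):

```python
import datetime
from pymongo import MongoClient

db = MongoClient()["logs"]

# Documents whose "ts" value is older than 30 days are removed automatically
# by MongoDB's background TTL monitor.
db.syslog.create_index("ts", expireAfterSeconds=30 * 24 * 3600)

db.syslog.insert_one({
    "ts": datetime.datetime.utcnow(),
    "host": "web-01",
    "severity": "warning",
    "message": "disk usage at 85%",
})
```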
Based on the people I've worked with, this type of insert-heavy logging is one of the places where Mongo tends to be a very good fit.
Graylog2 is an open-source log management tool that is built on top of MongoDB. I believe Loggy, a logging-as-a-service provider, also uses MongoDB as their backend store. So there are quite a few products using MongoDB for logging.
It should be possible to store the n-grams returned by a Lucene analyzer for better text searching. I'm not sure about the feasibility, though, given the large number of documents. What is the primary reporting use case?
It seems that you would want something like a MongoDB full-text search server, which would enable you to search on different attributes without losing performance. You may try MongoLantern: http://sourceforge.net/projects/mongolantern/. Though it's in the alpha stage, it has given me very good results with 5M records.
Let me know whether this serves your purpose.
I would strongly consider using something like Lucene or Solr.
Lucene is built specifically for full-text search and provides a ton of additional features that you may find useful in your application. As a bonus, Solr is dead simple to set up and configure. (And it's super fast for searching.)
They do not keep a file per entry, so you shouldn't have to worry much about zillions of files.
None of the free database options specialize in full-text search; don't try to force them to do what you want.
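If you do go the Solr route, querying it from application code is simple as well; a minimal sketch with the pysolr Python package (the core name and fields are hypothetical):

```python
import pysolr

# Points at a hypothetical "syslog" core on a local Solr instance.
solr = pysolr.Solr("http://localhost:8983/solr/syslog", timeout=10)

# Full-text search over the message body, newest entries first.
results = solr.search('message:"connection timeout"', sort="date desc", rows=20)
for doc in results:
    print(doc.get("date"), doc.get("message"))
```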
I think you should deploy your own (intranet-wide) stack of Grafana, Logstash + Elasticsearch.
Once set up, you have a flexible schema, retention, and a wonderful UI for your data with Grafana.