Warning: Memcache::get() [memcache.get]: Node no longer exists - memcached

I am new to memcached, but I need to fix this error on the website quickly. I don't know where to start.
Is there anything I can do to find out which node or key memcached failed to get? Are there any log files I can look into?

This occurs when you store an object which has references to resources such as file descriptors or database connections. It can also occur if the object you get is of a class which isn't loaded at the moment you fetch it from memcached.
To find out which memcached key fails, you could set a custom error handler with access to the memcached key just before the call to Memcached::get, and restore it afterwards. Then you can log the warning together with the key.
[Edit] Here's an example:
<?php
class MyMemcachedWrapper {
    private $memcached;
    private $key;

    public function __construct(Memcached $memcached) {
        $this->memcached = $memcached;
    }

    public function get($key) {
        // Save the key in an instance variable so it will be available in
        // the error handler
        $this->key = $key;
        set_error_handler(array($this, 'handleError'));
        $value = $this->memcached->get($key);
        restore_error_handler();
        return $value;
    }

    public function handleError($errno, $errstr) {
        // Here we have both the key and the error message from memcached
        $message = "Memcached error '$errstr' while fetching key '{$this->key}'\n";
        // ... and we can log it to a file or db or something
        file_put_contents("memcached-errors.log", $message, FILE_APPEND);
    }
}

// Then use it like this, passing in your configured Memcached instance
$memcached_wrapper = new MyMemcachedWrapper($memcached);
$value = $memcached_wrapper->get('xyz');

DxlImporter inside a loop throws error "DXL importer operation failed"

I have a Java agent which loops through a view and gets the attachment from each document. The attachment is nothing but a .dxl file containing the document's XML data. I extract the file to a temp directory and try to import the extracted .dxl as soon as it has been extracted.
But the problem is that it only imports the first document's attachment in the loop and then throws this error in the Java debug console:
NotesException: DXL importer operation failed
at lotus.domino.local.DxlImporter.importDxl(Unknown Source)
at JavaAgent.NotesMain(Unknown Source)
at lotus.domino.AgentBase.runNotes(Unknown Source)
at lotus.domino.NotesThread.run(Unknown Source)
My Java agent code is:
public class JavaAgent extends AgentBase {
    static DxlImporter importer = null;

    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            // (Your code goes here)
            // Get current database
            Database db = agentContext.getCurrentDatabase();
            View v = db.getView("DXLProcessing_mails");
            DocumentCollection dxl_tranfered_mail = v.getAllDocumentsByKey("dxl_tranfered_mail");
            Document dxlDoc = dxl_tranfered_mail.getFirstDocument();
            while (dxlDoc != null) {
                RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
                Vector allObjects = rt.getEmbeddedObjects();
                System.out.println("File name is " + allObjects.get(0));
                EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
                if (eo.getFileSize() > 0) {
                    eo.extractFile(System.getProperty("java.io.tmpdir") + eo.getName());
                    System.out.println("Extracted File to " + System.getProperty("java.io.tmpdir") + eo.getName());
                    String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
                    Stream stream = session.createStream();
                    if (stream.open(filePath) & (stream.getBytes() > 0)) {
                        System.out.println("In If " + System.getProperty("java.io.tmpdir"));
                        importer = session.createDxlImporter();
                        importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                        System.out.println("Break Point");
                        importer.importDxl(stream, db);
                        System.out.println("Imported Successfully");
                    } else {
                        System.out.println("In else " + stream.getBytes());
                    }
                }
                dxlDoc = dxl_tranfered_mail.getNextDocument();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The code executes until it prints "Break Point" and then throws the error, but the attachment does get imported the first time.
On the other hand, if I hard-code the filePath for a specific .dxl file from the file system, it imports the DXL as a document into the database with no errors.
I am wondering if the issue is that the stream that is passed in doesn't get completed before the next loop iteration executes.
Any kind of suggestion would be helpful.
I can't see any part where your while loop would move on from the first document.
Usually you would have something like:
Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
dxlDoc.recycle();
dxlDoc = nextDoc;
Near the end of the loop to advance it to the next document. As your code currently stands, it looks like it would never advance and would always stay on the first document.
If you do not know about the need to 'recycle' Domino objects, I suggest you search for some blog posts or articles that explain why it is necessary.
It is a little complicated, but basically the Java objects are just a 'wrapper' for the objects in the C API.
Whenever you create a Domino object (such as a Document, View, DocumentCollection, etc.) a memory handle is allocated in the underlying 'C' layer. This needs to be released (or recycled). It will eventually be released when the session is recycled, but when you are processing in a loop it is much more important to recycle, because you can easily exhaust the available memory handles and cause a crash.
Also, it's possible you may need to close (and recycle) each Stream after you are finished importing each file.
Lastly, double-check that the extracted file causing the exception is definitely a valid DXL file; it could simply be that some of the attachments are not valid DXL and will always throw an exception.
You could put a try/catch within the loop to handle that scenario (and report the problem files), which will allow the agent to continue without halting. All of these suggestions are combined in the sketch below.
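Putting those suggestions together, here is a minimal sketch of how the loop could look, with getNextDocument(doc) plus recycle() to advance, the Stream closed and recycled after each import, and a per-document try/catch. It assumes the same view and collection setup as the question, and the helper name importAll is mine, so treat it as an illustration rather than drop-in code.

import java.util.Vector;
import lotus.domino.*;

void importAll(Session session, Database db, DocumentCollection coll) throws NotesException {
    Document dxlDoc = coll.getFirstDocument();
    while (dxlDoc != null) {
        Stream stream = null;
        DxlImporter importer = null;
        try {
            RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
            Vector allObjects = rt.getEmbeddedObjects();
            EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
            if (eo != null && eo.getFileSize() > 0) {
                String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
                eo.extractFile(filePath);
                stream = session.createStream();
                if (stream.open(filePath) && stream.getBytes() > 0) {
                    importer = session.createDxlImporter();
                    importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                    importer.importDxl(stream, db);
                }
            }
        } catch (NotesException e) {
            // One invalid DXL file should not halt the whole agent
            System.out.println("Import failed for " + dxlDoc.getUniversalID() + ": " + e.text);
        } finally {
            if (stream != null) {
                stream.close();   // release the file before the next iteration
                stream.recycle();
            }
            if (importer != null) {
                importer.recycle();
            }
        }
        Document nextDoc = coll.getNextDocument(dxlDoc); // advance explicitly
        dxlDoc.recycle();                                // release the C-layer handle
        dxlDoc = nextDoc;
    }
}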

Random "404 Not Found" error on PasrseObject SaveAsynch (on Unity/Win)

I started using Parse on Unity for a Windows desktop game.
I save very simple objects in a very simple way.
Unfortunately, 10% of the time I randomly get a 404 error from the SaveAsync method :-(
This happens on different kinds of ParseObject.
I also isolated the run context to avoid any external interference.
I checked the object ID when this error happens and everything looks OK.
The strange thing is that 90% of the time I save these objects without an error (during the same application run).
Has anyone else run into this problem?
Just in case, here is my code (but there is nothing special about it, I think):
{
    encodedContent = Convert.ToBase64String(ZipHelper.CompressString(jsonDocument));
    mLoadedParseObject[key]["encodedContent"] = encodedContent;
    mLoadedParseObject[key].SaveAsync().ContinueWith(OnTaskEnd);
}
....
void OnTaskEnd(Task task)
{
    if (task.IsFaulted || task.IsCanceled)
        OnTaskError(task); // print error ....
    else
        mState = States.SUCEEDED;
}

private void DoTest()
{
    StartCoroutine(DoTestAsync());
}

private IEnumerator DoTestAsync()
{
    yield return 1;
    Terminal.Log("Starting 404 Test");
    var obj = new ParseObject("Test1");
    obj["encodedContent"] = "Hello World";
    Terminal.Log("Saving");
    obj.SaveAsync().ContinueWith(OnTaskEnd);
}

private void OnTaskEnd(Task task)
{
    if (task.IsFaulted || task.IsCanceled)
        Terminal.LogError(task.Exception.Message);
    else
        Terminal.LogSuccess("Thank You !");
}
I was not able to replicate this; I got a "Thank You !". Could you try updating your Parse SDK?
EDIT
A 404 can also be caused by a bad username/password match. The exception messages are not the best.
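To see what is actually behind the 404, it can help to unwrap the AggregateException carried by the task and log the Parse error code rather than just the message. Here is a hedged sketch, assuming the standard Parse .NET SDK (where ParseException exposes a Code property) and reusing the Terminal logger from the question:

using System.Threading.Tasks;
using Parse;

void OnTaskEnd(Task task)
{
    if (task.IsFaulted && task.Exception != null)
    {
        // Flatten the AggregateException and report each Parse error code
        foreach (var inner in task.Exception.Flatten().InnerExceptions)
        {
            var parseEx = inner as ParseException;
            if (parseEx != null)
                Terminal.LogError("Parse error " + parseEx.Code + ": " + parseEx.Message);
            else
                Terminal.LogError(inner.Message);
        }
    }
    else if (task.IsCanceled)
    {
        Terminal.LogError("Save was canceled");
    }
    else
    {
        Terminal.LogSuccess("Thank You !");
    }
}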

How to use withTransaction in Grails to insert a list of objects?

I am using the Grails dbm (database migration) plugin to insert records within a transaction so that I don't need to use 'flush:true'.
But somehow it hangs and does not go forward. This used to work fine without withTransaction and with 'flush:true'.
Can you please let me know what is wrong with this code?
grailsChange {
    change {
        def list1 = [record1, record2]
        list1.each {
            DomainClass.withTransaction {
                new DomainClass(it).save(failOnError: true)
            }
        }
    }
}
The console log says:
Starting dbm-update for database test # jdbc:postgresql://localhost:5432/test
& does not go forward...& after long time I get this exception
iquibase.exception.LockException: Could not acquire change log lock. Currently locked by ... since ....

Passing Data to a Queue Class Using Laravel 4.1 and Beanstalkd

I have never set up a queueing system before. I decided to give it a shot. The queueing system itself seems to be working perfectly. However, the data doesn't seem to be sent correctly. Here is my code:
...
$comment = new Comment(Input::all());
$comment->user_id = $user->id;
$comment->save();

if ($comment->isSaved())
{
    $voters = $comment->argument->voters->unique()->toArray();

    Queue::push('Queues\NewComment',
        [
            'comment' => $comment->load('argument', 'user')->toArray(),
            'voters' => $voters
        ]
    );

    return Response::json(['success' => true, 'comment' => $comment->load('user')->toArray()]);
}
...
The class that handles this looks like this:
class NewComment {

    public function fire($job, $data)
    {
        $comment = $data['comment'];
        $voters = $data['voters'];

        Log::info($data);

        foreach ($voters as $voter)
        {
            if ($voter['id'] != $comment['user_id'])
            {
                $mailer = new NewCommentMailer($voter, $comment);
                $mailer->send();
            }
        }

        $job->delete();
    }

}
This works beautifully on my local server using the synchronous queue driver. However, on my production server I'm using Beanstalkd. The queue fires as it is supposed to, but I'm getting an error like this:
[2013-12-19 10:25:02] production.ERROR: exception 'ErrorException' with message 'Undefined index: voters' in /var/www/mywebsite/app/queues/NewComment.php:10
If I print out the $data variable passed into the NewComment queue handler, I get this:
[2013-12-19 10:28:05] production.INFO: {"comment":{"incrementing":true,"timestamps":true,"exists":true}} [] []
I have no clue why this is happening. Does anyone have an idea how to fix this?
So $voters apparently isn't being put into the queue as part of the payload. I'd build the payload array outside of the Queue::push() call, log its contents, and see exactly what is being put in.
I've found if you aren't getting something out that you expect, chances are, it's not being put in like you expect either.
While you are at it, make sure that the beanstalkd system hasn't got old data stuck in it that is incorrect. You could add a timestamp into the payload to help make sure it's the latest data, and arrange to delete or bury any jobs that don't have the appropriate information - checked before you start to process them. Just looking at a count of items in the beanstalkd tubes should make it plain if there are stuck jobs.
I've not done anything with Laravel, but I have written many tasks for other Beanstalkd and SQS-backed systems, and the hard part is when the job fails, and you have to figure out what went wrong, and how to avoid just redoing the same failure over and over again.
What I ended up doing was sticking with simple values: I only stored the comment's ID on the queue and then did all the processing in my queue handler class. That was the easiest way to do it, along the lines of the sketch below.
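For illustration, here is a minimal sketch of that ID-only approach. It reuses the Comment and NewCommentMailer classes from the question; the eager-load paths are assumptions about the relations involved:

// Push only the comment ID; the payload stays trivially serializable
Queue::push('Queues\NewComment', ['comment_id' => $comment->id]);

class NewComment {

    public function fire($job, $data)
    {
        // Reload the comment and its relations inside the worker instead
        // of serializing them into the payload
        $comment = Comment::with('argument.voters', 'user')->find($data['comment_id']);

        if ($comment)
        {
            foreach ($comment->argument->voters->unique() as $voter)
            {
                if ($voter->id != $comment->user_id)
                {
                    $mailer = new NewCommentMailer($voter->toArray(), $comment->toArray());
                    $mailer->send();
                }
            }
        }

        $job->delete();
    }

}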
You will get the data as expected in the handler by wrapping it in an array:
array(
    array(
        'comment' => $comment->load('argument', 'user')->toArray(),
        'voters' => $voters
    )
)

How can MongoDB java driver determine if replica set is in the process of automatic failover?

Our application is built upon a mongodb replica set.
I'd like to catch all exceptions thrown during the time frame when the replica set is in the process of automatic failover.
I will make the application retry or wait for the failover to complete, so that the failover won't affect users.
I found a document describing the behavior of the Java driver here: https://jira.mongodb.org/browse/DOCS-581
I wrote a test program to find all possible exceptions; they are all MongoException but with different messages:
MongoException.Network: "Read operation to server /10.11.0.121:27017 failed on database test"
MongoException: "can't find a master"
MongoException: "not talking to master and retries used up"
MongoException: "No replica set members available in [ here is replica set status ] for { "mode" : "primary"}"
Maybe more...
I'm confused and not sure whether it is safe to make this determination by error message.
Also, I don't want to catch all MongoExceptions.
Any suggestions?
Thanks
I am now of the opinion that Mongo in Java is particularly weak in this regard. I don't think your strategy of interpreting the error messages scales well or will survive driver evolution. This is, of course, opinion.
The good news is that the Mongo driver provides a way to get the status of a ReplicaSet: http://api.mongodb.org/java/2.11.1/com/mongodb/ReplicaSetStatus.html. You can use it directly to figure out whether there is a master visible to your application. If that is all you want to know, http://api.mongodb.org/java/2.11.1/com/mongodb/Mongo.html#getReplicaSetStatus() is all you need. Grab it, check for a non-null master, and you are on your way.
ReplicaSetStatus rss = mongo.getReplicaSetStatus();
boolean driverInFailover = rss.getMaster() == null;
If what you really need is to figure out if the ReplSet is dead, read-only, or read-write, this gets more difficult. Here is the code that kind-of works for me. I hate it.
@Override
public ReplSetStatus getReplSetStatus() {
    ReplSetStatus rss = ReplSetStatus.DOWN;
    MongoClient freshClient = null;
    try {
        if ( mongo != null ) {
            ReplicaSetStatus replicaSetStatus = mongo.getReplicaSetStatus();
            if ( replicaSetStatus != null ) {
                if ( replicaSetStatus.getMaster() != null ) {
                    rss = ReplSetStatus.ReadWrite;
                } else {
                    /*
                     * When mongo.getReplicaSetStatus().getMaster() returns null, it takes a
                     * fresh client to assert whether the ReplSet is read-only or completely
                     * down. I freaking hate this, but take it up with 10gen.
                     */
                    freshClient = new MongoClient( mongo.getAllAddress(), mongo.getMongoClientOptions() );
                    replicaSetStatus = freshClient.getReplicaSetStatus();
                    if ( replicaSetStatus != null ) {
                        rss = replicaSetStatus.getMaster() != null ? ReplSetStatus.ReadWrite : ReplSetStatus.ReadOnly;
                    } else {
                        log.warn( "freshClient.getReplicaSetStatus() is null" );
                    }
                }
            } else {
                log.warn( "mongo.getReplicaSetStatus() returned null" );
            }
        } else {
            throw new IllegalStateException( "mongo is null?!?" );
        }
    } catch ( Throwable t ) {
        log.error( "Ignore unexpected error", t );
    } finally {
        if ( freshClient != null ) {
            freshClient.close();
        }
    }
    log.debug( "getReplSetStatus(): {}", rss );
    return rss;
}
I hate it because it doesn't follow the Mongo Java driver convention that your application only needs a single Mongo instance, through which you reach the rest of the Mongo data structures (DB, Collection, etc.). I have only been able to observe this working by new'ing up a second Mongo during the check, so that I can rely upon the ReplicaSetStatus null check to discriminate between "ReplSet-DOWN" and "read-only".
What is really needed in this driver is some way to ask direct questions of the Mongo to see if the ReplSet can be expected at this moment to support each of the WriteConcerns or ReadPreferences. Something like...
/**
 * @return true if current state of Client can support readPreference, false otherwise
 */
boolean mongo.canDoRead( ReadPreference readPreference )

/**
 * @return true if current state of Client can support writeConcern; false otherwise
 */
boolean mongo.canDoWrite( WriteConcern writeConcern )
This makes sense to me because it acknowledges the fact that the ReplSet may have been great when the Mongo was created, but conditions right now mean that Read or Write operations of a specific type may fail due to changing conditions.
In any event, maybe http://api.mongodb.org/java/2.11.1/com/mongodb/ReplicaSetStatus.html gets you what you need.
When Mongo is failing over, there are no nodes in a PRIMARY state. You can just get the replica set status via the replSetGetStatus command and look for a master node. If you don't find one, you can assume that the cluster is in a failover transition state, and you can retry as desired, checking the replica set status on each failed connection. A sketch of that wait-and-retry idea follows.
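As a minimal sketch of that approach, using the same 2.x driver API (Mongo.getReplicaSetStatus()) discussed above; the timeout and polling interval are arbitrary assumptions:

import com.mongodb.Mongo;
import com.mongodb.ReplicaSetStatus;

// Poll until a master is visible again (or a deadline passes) before
// retrying the operation that failed during failover.
public static boolean waitForMaster(Mongo mongo, long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
        ReplicaSetStatus rss = mongo.getReplicaSetStatus();
        if (rss != null && rss.getMaster() != null) {
            return true;  // a primary is visible; failover has completed
        }
        Thread.sleep(500);  // election still in progress; poll again shortly
    }
    return false;  // no primary within the deadline; escalate or keep waiting
}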
I don't know the Java driver implementation itself, but I'd catch all MongoExceptions, then filter them on a getCode() basis. If the error code does not apply to replica set failures, I'd rethrow the MongoException.
The problem is that, to my knowledge, there is no error code reference in the documentation. Well, there is a stub here, but it is fairly incomplete. The only way is to read the code of the Java driver to know which codes it uses…
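For illustration, a hedged sketch of that getCode() filtering. The specific "not master" codes below come from reading the server sources, not from any official reference, so treat them as assumptions to verify against your driver version:

import com.mongodb.MongoException;

public final class FailoverErrors {

    private FailoverErrors() {}

    // Classify a MongoException as a likely failover symptom by its code;
    // callers should rethrow anything for which this returns false.
    public static boolean isLikelyFailover(MongoException e) {
        switch (e.getCode()) {
            case 10107: // not master
            case 13435: // not master and slaveOk=false
            case 13436: // not master or secondary
                return true;
            default:
                return false; // unrelated error: rethrow
        }
    }
}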