Construct a McRouter instance in Hack - memcached

I am new to Hack and am experimenting with connecting to a Memcached instance running on my local machine. I am following the official documentation here, which advises using the MCRouter class in the HHVM standard library to connect to Memcached.
My code looks very similar to what is there in the documentation:
private static function createMemcacheConnection(): void {
  $servers = Vector {"127.0.0.1:11211"};
  MemcacheConnector::$mcRouter = \MCRouter::createSimple($servers);
}
When I run this piece of code, I get this error:
Fatal error: Uncaught Error: Class undefined: MCRouter
I tried to see whether there is a separate library that I need to add to my composer file, but that does not seem to be the case. What am I doing wrong?
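For what it's worth, MCRouter is an HHVM built-in rather than a Composer package, so "Class undefined" usually means the HHVM build itself lacks mcrouter support. A minimal sketch, assuming that is the cause, which guards the call so the failure mode is explicit:
private static function createMemcacheConnection(): void {
  // MCRouter ships inside HHVM (no Composer dependency); it is only
  // defined when HHVM was compiled with mcrouter support.
  invariant(
    \class_exists(\MCRouter::class),
    'This HHVM build was compiled without mcrouter support',
  );
  $servers = Vector {"127.0.0.1:11211"};
  MemcacheConnector::$mcRouter = \MCRouter::createSimple($servers);
}
If the invariant fires, the fix is on the HHVM side (install or build a version with mcrouter enabled), not in the Hack code.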

Related

MongoDart Authentication format issue

I'm trying to use MongoDart with Dart as a backend. It was working fine until I switched from a serverless to a dedicated instance. This is the issue I see:
MongoDartError (MongoDart Error: Authentication failed.)
I'm using a dedicated cluster with 3 nodes; the deployment name is com-new and the db name is main:
var db = await Db.create('mongodb+srv://com-admin:*****#com-new.iv00u.mongodb.net/main');
await db.open(); // MongoDart Error
This works correctly in MongoDB Compass and Mongoose, but mongo_dart fails with the auth failed error. I also tried this:
mongodb+srv://com-admin:*****#com-new.iv00u.mongodb.net/main?authSource=admin
and this:
mongodb+srv://com-admin:*****#com-new.iv00u.mongodb.net/main/?authSource=admin
This exact connection string works with the old serverless db instance, but I want to use change streams, which are not available unless we use a dedicated or shared cluster.
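One thing worth checking: in a mongodb+srv URI the credentials must be separated from the host by @, and any reserved characters in the password (#, @, /, :) must be percent-encoded, or the driver will misparse the host and fail authentication. A minimal sketch under those assumptions, with MY_PASSWORD as a placeholder for the real credential:
import 'package:mongo_dart/mongo_dart.dart';

Future<Db> connect() async {
  // Percent-encode the password so URI-reserved characters survive parsing.
  // 'MY_PASSWORD' is a placeholder, not the real credential.
  final password = Uri.encodeComponent('MY_PASSWORD');
  final db = await Db.create(
      'mongodb+srv://com-admin:$password@com-new.iv00u.mongodb.net/main'
      '?authSource=admin');
  await db.open();
  return db;
}
If Compass accepts the same string, comparing the exact URI it reports (including the encoded password) against the one passed to Db.create is a quick way to spot the difference.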

Error Could not initialize class net.sf.jasperreports.engine.util.JRStyledTextParser on Kubernetes container

After deploying JasperReports 6.0.0 on Kubernetes, I am getting the error below:
Could not initialize class net.sf.jasperreports.engine.util.JRStyledTextParser
I have tried multiple solutions, but the error is still NOT resolved.
Attempt 1: setting -Djava.awt.headless=true
I set this property in the YAML file as mentioned in
how to add CATALINA_OPTS=-Djava.awt.headless=true this property in Kubernetes configuration
but it did not work.
I also tried setting it as a system property, System.setProperty("java.awt.headless", "true"), but this did not work either.
Attempt 2: including the missing xml-apis dependency, based on this thread. It also did not work.
Can someone please suggest a solution? The report works absolutely fine in the local environment but fails on the server. The application is deployed in a Kubernetes container.
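A common cause is that the headless flag is applied only after AWT has already been initialized: JRStyledTextParser touches AWT classes, and java.awt.headless is read when AWT first loads. A minimal sketch of setting it early via a static initializer on the entry-point class (the class and method names are illustrative, not from the post); alternatively, setting the JAVA_TOOL_OPTIONS=-Djava.awt.headless=true environment variable in the container spec makes the JVM pick the flag up at startup without touching CATALINA_OPTS:
public class ReportBootstrap {

    static {
        // Must run before any java.awt class is loaded; a static initializer
        // on the entry-point class is one way to guarantee the ordering.
        System.setProperty("java.awt.headless", "true");
    }

    public static void main(String[] args) {
        // Sanity check: should print true before any report is filled.
        System.out.println("headless = " + java.awt.GraphicsEnvironment.isHeadless());
    }
}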

How to connect Vertx RedisClient in cluster mode with Elasticache

I am using the Vertx Redis client from the package io.vertx.rxjava.redis.RedisClient to connect to Elasticache Redis.
It does connect, but shows an error:
io.vertx.redis.client.impl.types.ErrorType: MOVED 4985 xxx.xxx.xxx.xxx:63791
After reading about the error, I found it's because the cluster is sharded and the client is not connecting to all of the shards.
From the library, I am not able to figure out which method to use to connect in cluster mode.
Here is an example of how to connect and send a command in cluster mode.
Define options:
final RedisOptions options = new RedisOptions()
  .setType(RedisClientType.CLUSTER)
  .setUseSlave(RedisSlaves.SHARE)
  .setMaxWaitingHandlers(128 * 1024)
  .addEndpoint("redis://127.0.0.1:7000")
  .addEndpoint("redis://127.0.0.1:7001")
  .addEndpoint("redis://127.0.0.1:7002")
  .addEndpoint("redis://127.0.0.1:7003")
  .addEndpoint("redis://127.0.0.1:7004")
  .addEndpoint("redis://127.0.0.1:7005");
Connect and send a command:
// Assumes the matching imports from the Vert.x Redis client:
// import io.vertx.redis.client.Redis;
// import static io.vertx.redis.client.Command.SET;
// import static io.vertx.redis.client.Request.cmd;
Redis.createClient(vertx, options).connect(onCreate -> {
  final Redis cluster = onCreate.result();
  // SET takes both a key and a value.
  cluster.send(cmd(SET).arg("key").arg("value"), set -> {
    System.out.println(set.result());
  });
});
Tip: If you are unsure how to use a library, or its documentation is not clear enough, you can always check out the project's tests, if it has them. You can see how things are implemented there and borrow working examples.

numba caching issue: cannot cache function / no locator available for file

I am trying to deploy a codebase that has a number of numba.njit functions with cache=True.
It works fine running locally (Mac OS X 10.12.3), but on the remote machine (Ubuntu 14.04 on AWS) I am getting the following error:
RuntimeError at /portal/
cannot cache function 'filter_selection':
no locator available for file:
'/srv/run/miniconda/envs/mbenv/lib/python2.7/site-packages/mproj/core_calcs/filter.py'
I looked through the numba codebase, and I saw this file: https://github.com/numba/numba/blob/master/numba/caching.py
It appears that the following call is returning None instead of a locator, which is why this exception is raised:
cls.from_function(py_func, source_path)
My guess is that this is a permissions issue with writing the __pycache__ folders, but I didn't see a way in the numba docs to specify the cache folder location (CACHE_DIR).
Has anyone hit this before, and if so, what is the suggested work-around?
Setting sys.frozen = True before the for cls in self._locator_classes: loop in caching.py can eliminate the issue.
I have no idea whether such a setting will impact performance.
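A sketch of applying that workaround from application code instead of patching numba's caching.py; the assumption (based on the linked caching.py) is that a cache locator falls back to accepting the file when sys.frozen is set. Newer numba releases also document a NUMBA_CACHE_DIR environment variable for relocating the cache, which may be the cleaner fix if your version supports it.
import sys

# Workaround sketch: pretend to be a frozen app so numba's cache locator
# accepts files it would otherwise reject (assumption: your numba version
# consults sys.frozen, as the linked caching.py does).
sys.frozen = True

from numba import njit

@njit(cache=True)
def add(a, b):
    return a + b

# Compilation happens on first call, after the flag is set, so the locator sees it.
print(add(1, 2))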

ImproperlyConfigured at / port must be an instance of int django mongodb

Trying to move a Django-MongoDB development environment to production.
I keep getting the following error from the web interface:
ImproperlyConfigured at /
port must be an instance of int
In the terminal, if I run
python manage.py syncdb
File "/home/user/lib/python-environments/djangomongo/local/lib/python2.7/site-packages/pymongo/connection.py", line 209, in __init__
raise TypeError("port must be an instance of int")
django.core.exceptions.ImproperlyConfigured: port must be an instance of int
Double-check how you set up your pymongo Connection object:
http://api.mongodb.org/python/current/api/pymongo/connection.html
Judging from just the error message, you seem to have an incorrect port parameter. I'm also guessing that if it worked in development and not in production, you are missing some configuration values in prod.
I don't have a 100% working system yet, but the issue here is an incompatibility between your versions of django-nonrel and pymongo. For the django-nonrel branch based on Django 1.3, I needed to use pymongo 1.11 (I'm not sure whether any 2.x version of pymongo would work).
I faced this while using environment variables. They are injected as strings, so you need to cast the port to int to get going.
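A minimal sketch of that cast in a Django settings file, assuming the port arrives through an environment variable; MONGO_HOST, MONGO_PORT, and the backend name are placeholders, not values from the original post:
import os

# Environment variables are always strings; pymongo requires an int port,
# so cast before handing it to the database settings.
DATABASES = {
    'default': {
        'ENGINE': 'django_mongodb_engine',  # placeholder backend name
        'NAME': 'main',
        'HOST': os.environ.get('MONGO_HOST', 'localhost'),
        'PORT': int(os.environ.get('MONGO_PORT', '27017')),
    }
}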