ServiceStack.Redis GetValue gives junk values

I am getting junk values such as 'OK', '0', or other numeric strings when reading values from Redis. This happens when reading both normal keys and hash keys. We have upgraded all the ServiceStack components and are still facing the issue. The component details and a C# code snippet are below.
Our environment: we have set up three Sentinel instances and three Redis server instances, with each Sentinel paired with a Redis instance. We were using a read-only client for reading values and a read-write client for writing values to Redis. Even using the read-write client for both reading and writing gives the same junk-value problem.
Components:
ServiceStack.Common v 5.1.0.0
ServiceStack.Redis v 5.1.0.0
ServiceStack.Interfaces v 5.1.0.0
ServiceStack.Text v 5.1.0.0
Redis server v 3.0.503
OS: Windows server 2012 R2
Code snippet:
private static IRedisClientsManager m_redisManager;

private static void InitializeRedis()
{
    if (m_redisManager == null)
    {
        var sentinel = new RedisSentinel(
            new[] { "193.168.1.1:16380", "193.168.1.2:16380", "193.168.1.3:16380" },
            "testmaster")
        {
            RefreshSentinelHostsAfter = TimeSpan.FromSeconds(10) // TimeSpan property
        };
        sentinel.RedisManagerFactory = (master, slaves) => new RedisManagerPool(master);
        m_redisManager = sentinel.Start();
    }
}
public string GetValue(string key)
{
    string val;
    using (var client = m_redisManager.GetClient())
    {
        val = client.GetValue(key);
    }
    return val;
}
Note:
1. m_redisManager is declared static so that initialization runs only once; every call shares this manager.
2. The client is disposed after each call to get a value.
3. My application is multi-threaded, so reads from multiple threads may happen at the same time. The application also runs as multiple instances, on the same machine and on different machines.
4. The code above is from the component that interacts with Redis.
5. Callers invoke the GetValue function.
What could be the problem? Can someone help?
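One thing worth double-checking given note 3: the null check in InitializeRedis() is not guarded, so two threads hitting it at the same time could each build their own RedisSentinel and manager. A minimal thread-safe sketch (the use of Lazy<T> here is an assumption, not part of the original code):

using System;
using ServiceStack.Redis;

public static class RedisConfig
{
    // Lazy<T> runs the factory exactly once, even when several
    // threads trigger the first access concurrently.
    private static readonly Lazy<IRedisClientsManager> s_manager =
        new Lazy<IRedisClientsManager>(() =>
        {
            var sentinel = new RedisSentinel(
                new[] { "193.168.1.1:16380", "193.168.1.2:16380", "193.168.1.3:16380" },
                "testmaster");
            sentinel.RedisManagerFactory = (master, slaves) => new RedisManagerPool(master);
            return sentinel.Start();
        });

    public static IRedisClientsManager Manager => s_manager.Value;
}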

Related

500 Internal Server Error - Azure Function through Data Factory using Postgres DB

I am trying to edit someone else's Azure Function app. When I run theirs, it works fine and connects to their DB successfully. When I try to change the connection string to my DB, it gives me the error
HTTP response code: 500 Internal Server Error
without any other information.
Even if I just change the one line of code that defines the DB connection, it doesn't work. It works when I try it on my local machine; it just doesn't work in Azure Functions.
Their original code (which works):
postgresql = os.environ.get('POSTGRES_SQL')
cnxn = psycopg2.connect(postgresql)
vs. mine (which doesn't work):
postgresql = 'postgresql://sqladmin:{my-password}#{db-connection-string}?sslmode=require'
cnxn = psycopg2.connect(postgresql)
I am also not sure where their DB connection comes from via ".get('POSTGRES_SQL')", as they don't pass that parameter in anywhere: not in the Azure Data Factory call that triggers the function, nor anywhere in the function itself.
Even if I try just a bare-bones block of code, as seen below, it gives me the same error.
import azure.functions as func
import psycopg2

def main(req: func.HttpRequest) -> func.HttpResponse:
    postgresql = 'postgresql://sqladmin:{my-password}#{db-connection-string}?sslmode=require'
    cnxn = psycopg2.connect(postgresql)
    cursor = cnxn.cursor()
    # hyphenated identifiers must be double-quoted in Postgres
    cursor.execute('CREATE TABLE staging.test ("some-column" varchar(100) null)')
    cnxn.commit()  # commit is a method call; a bare "cnxn.commit" does nothing
    cursor.close()
    return func.HttpResponse("This HTTP triggered function executed successfully.")
Please let me know what I'm missing, or if you need any other info. I have already tried all other responses to similar StackOverflow questions.
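As for where POSTGRES_SQL comes from: in Azure Functions, os.environ.get() reads the function app's Application Settings, which the platform injects as environment variables, so nothing needs to be passed in as a parameter. A minimal sketch of that pattern (the error handling is illustrative, not from the original code):

import os
import psycopg2

def get_connection():
    # In Azure this is set under the Function App's Configuration >
    # Application settings; for local runs it can live in
    # local.settings.json under "Values". Either way it surfaces
    # as an ordinary environment variable.
    conn_str = os.environ.get('POSTGRES_SQL')
    if conn_str is None:
        raise RuntimeError("POSTGRES_SQL application setting is not configured")
    return psycopg2.connect(conn_str)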

Setting up and accessing Flink Queryable State (NullPointerException)

I am using Flink v1.4.0 and I have set up two distinct jobs. The first is a pipeline that consumes data from a Kafka topic and stores it in a Queryable State (QS). Data are keyed by date. The second submits a query to the QS job and processes the returned data.
Both jobs were working fine with Flink v1.3.2, but with the new update everything has broken. Here is part of the code for the first job:
private void runPipeline() throws Exception {
    StreamExecutionEnvironment env = configurationEnvironment();
    QueryableStateStream<String, DataBucket> dataByDate = env.addSource(sourceDataFromKafka())
        .map(NewDataClass::new)
        .keyBy(data -> data.date)
        .asQueryableState("QSName", reduceIntoSingleDataBucket());
}
and here is the code on the client side:
QueryableStateClient client = new QueryableStateClient("localhost", 6123);

// the state descriptor of the state to be fetched.
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
        "QSName",
        TypeInformation.of(new TypeHint<DataBucket>() {}));

JobID jobId = JobID.fromHexString("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
String key = "2017-01-06";

CompletableFuture<ValueState<DataBucket>> resultFuture = client.getKvState(
        jobId,
        "QSName",
        key,
        BasicTypeInfo.STRING_TYPE_INFO,
        descriptor);

try {
    ValueState<DataBucket> valueState = resultFuture.get();
    DataBucket bucket = valueState.value();
    System.out.println(bucket.getLabel());
} catch (IOException | InterruptedException | ExecutionException e) {
    throw new RuntimeException("Unable to query bucket key: " + key, e);
}
I have followed the instructions in the following link:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/state/queryable_state.html
making sure to enable queryable state on my Flink cluster by copying flink-queryable-state-runtime_2.11-1.4.0.jar from the opt/ folder of the Flink distribution into the lib/ folder, and I checked that it runs in the task manager.
I keep getting the following error:
Exception in thread "main" java.lang.NullPointerException
at org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:84)
at org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:253)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:210)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:174)
at com.company.dept.query.QuerySubmitter.main(QuerySubmitter.java:37)
Any idea what is happening? I think my requests don't reach the QS at all... I really don't know what, if anything, I should change. Thanks.
So, as it turned out, two things were causing this error. The first was the use of the wrong constructor for creating the descriptor on the client side. Rather than the one that only takes a name for the QS and a TypeHint, I had to use the one where a value serializer along with a default value are provided, as below:
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
        "QSName",
        TypeInformation.of(new TypeHint<DataBucket>() {}).createSerializer(new ExecutionConfig()),
        DataBucket.emptyBucket()); // or anything that can be used as a default value
The second was related to the host and port values. The port has changed from v1.3.2 and is now 9069 by default, and the host was also different in my case. You can verify both by checking the logs of any task manager for the line:
Started the Queryable State Proxy Server @ ...
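Putting both fixes together, the client setup ends up looking like this (a sketch; "taskmanager-host" stands in for whatever host your task manager log reports):

// Port 9069 is the default queryable state proxy port in Flink 1.4; the host
// must match the task manager that logged the proxy server line above.
QueryableStateClient client = new QueryableStateClient("taskmanager-host", 9069);

ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
        "QSName",
        TypeInformation.of(new TypeHint<DataBucket>() {}).createSerializer(new ExecutionConfig()),
        DataBucket.emptyBucket());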
Finally, in case you are here because you are looking to allow port-range for queryable state client proxy, I suggest you follow the respective issue (FLINK-7788) here: https://issues.apache.org/jira/browse/FLINK-7788.

What causes MARSHALLINGERROR when creating a znode?

I am doing a simple createAsync() with the ZooKeeperNetEx NuGet package and it is throwing an exception which is triggered by a MARSHALLINGERROR.
Here is the two-line summary (between these lines, the connection to ZooKeeper is successfully confirmed):
var Zoo = new ZooKeeper("localhost:50002", 5000, new ClusterWatcher());
. . .
var parentNode = Zoo.createAsync("/election", null, null, CreateMode.PERSISTENT).Result;
I do not get it. ClusterWatcher is my own class derived from Watcher, of course. Yes, I am writing this in C#, but this is such a simple matter that I would not think it mattered. The host machine is running Windows 10 Pro, if that matters.
This exception can be triggered by not specifying an ACL (you seem to pass null). In Java you can pass the predefined list ZooDefs.Ids.OPEN_ACL_UNSAFE (for example, or one of the others in that class); for C# there will probably be a similarly named constant.
In the Java client library this is a convenience constant that is defined as:
/**
 * This is a completely open ACL.
 */
public final ArrayList<ACL> OPEN_ACL_UNSAFE = new ArrayList<ACL>(
        Collections.singletonList(new ACL(Perms.ALL, ANYONE_ID_UNSAFE))
);
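In ZooKeeperNetEx the constant does exist under the same name, so the failing call above becomes something like this (a sketch; verify the exact namespace against your package version):

using org.apache.zookeeper;

// ZooDefs.Ids.OPEN_ACL_UNSAFE mirrors the Java constant: full permissions
// for everyone, no authentication required.
var parentNode = Zoo.createAsync(
    "/election",
    null,                        // no initial data
    ZooDefs.Ids.OPEN_ACL_UNSAFE, // instead of passing a null ACL list
    CreateMode.PERSISTENT).Result;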

Scala Netty: is there any way to share a ReplayingDecoder?

I am looking to open multiple connections using a Netty client bootstrap in order to parse messages coming from multiple sources. The messages all have the same format; however, due to the amount of data that needs to be processed, I must run each connection on a separate thread. (This assumes Netty creates a thread per client channel, which I couldn't find a reference for; if that's not the case, how would this be achieved?)
This is the code that I use to connect to the data server:
var b = new Bootstrap()
.group(group)
.channel(classOf[NioSocketChannel])
.handler(RawFeedChannelInitializer)
var ch1 = b.clone().connect(host, port).sync().channel();
var ch2 = b.clone().connect(host, port).sync().channel();
The initializer calls RawPacketDecoder, which extends ReplayingDecoder, and is defined here.
The code works well without @Sharable when opening a single connection, but for the purposes of my application I must connect to the same server multiple times.
This results in the runtime error @Sharable annotation is not allowed, pointing to my RawPacketDecoder class.
I am not entirely sure how to get past this issue, short of reimplementing in Scala an instantiable decoder class based directly on ByteToMessageDecoder.
Any help would be greatly appreciated.
Note: I am using Netty 4.0.32.Final.
I found the solution in this StackExchange answer.
My issue was that I was using an object-based ChannelInitializer (a singleton), and neither ReplayingDecoder nor ByteToMessageDecoder is sharable.
My initializer was created as a Scala object, so only a single instance was allowed. Changing the initializer to a Scala class and instantiating it for each bootstrap clone solved the problem. I modified the bootstrap code above as follows:
var b = new Bootstrap()
.group(group)
.channel(classOf[NioSocketChannel])
//.handler(RawFeedChannelInitializer)
var ch1 = b.clone().handler(new RawFeedChannelInitializer()).connect(host, port).sync().channel();
var ch2 = b.clone().handler(new RawFeedChannelInitializer()).connect(host, port).sync().channel();
I am not sure whether this ensures the multithreading I wanted, but it does allow splitting the data access into multiple connections to the feed server.
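For reference, the class-based initializer now constructs a fresh decoder for every channel, which is what makes the non-sharable ReplayingDecoder safe across connections (a sketch; the pipeline contents are an assumption based on the description above):

import io.netty.channel.ChannelInitializer
import io.netty.channel.socket.SocketChannel

// A class rather than a Scala object: each bootstrap clone gets its own
// instance, and each channel gets its own RawPacketDecoder, so no handler
// state is shared between connections.
class RawFeedChannelInitializer extends ChannelInitializer[SocketChannel] {
  override protected def initChannel(ch: SocketChannel): Unit = {
    ch.pipeline().addLast(new RawPacketDecoder())
  }
}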
Edit update: after performing additional research on the subject, I determined that Netty does in fact create a thread per channel; I verified this by printing to the console after the creation of each channel:
println("No. of active threads: " + Thread.activeCount());
The output shows an incremental number as channels are created and associated with their respective threads.
By default, NioEventLoopGroup uses 2 * (number of CPU cores) threads, as defined here:
DEFAULT_EVENT_LOOP_THREADS = Math.max(1, SystemPropertyUtil.getInt(
        "io.netty.eventLoopThreads",
        Runtime.getRuntime().availableProcessors() * 2));
This value can be overridden by setting
val group = new NioEventLoopGroup(16)
and then using the group to create/set up the bootstrap.

Unity Entity Framework within ASP.NET WebAPI 2

I have a very weird problem with Unity here. I have the following:
public class UnityConfig
{
    public static void RegisterTypes(IUnityContainer container)
    {
        container.RegisterType<IDBContext, MyDbContext>(new PerThreadLifetimeManager());
        container.RegisterType<IUserDbContext>(new PerThreadLifetimeManager(), new InjectionFactory(c =>
        {
            var tenantConnectionString = c.Resolve<ITenantConnectionResolver>().ResolveConnectionString();
            return new UserDbContext(tenantConnectionString);
        }));
    }
}
and then in the WebApiConfig.cs file, within the Register method:
var container = new UnityContainer();
UnityConfig.RegisterTypes(container);
config.DependencyResolver = new UnityResolver(container);
Basically, what I want to happen in the above code is this: on every request to the API, I want Unity to new up a UserDbContext based on the user (a multi-tenant kind of environment). The TenantConnectionResolver is responsible for figuring out the connection string, and I then use that connection string to new up the UserDbContext.
Also note (not shown above) that TenantConnectionResolver takes an IDBContext in its constructor because it needs that to figure out the connection string based on user information in that database.
But for some reason, the code within the InjectionFactory runs at seemingly random times. For example, when I call //mysite.com/controller/action/1 repeatedly from a browser, the code in the InjectionFactory occasionally runs, but not on each request.
Am I incorrectly configuring Unity? Has anybody encountered anything similar to this?
Thanks in advance
The problem is very likely related to the LifetimeManager you are using. PerThreadLifetimeManager is not suited to a web context, as threads are pooled and a single thread will serve multiple requests in sequence.
PerRequestLifetimeManager is probably what you want to use.
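Applied to the registration above, that would look like the sketch below (assumptions: PerRequestLifetimeManager comes from the Unity ASP.NET bootstrapper packages, e.g. Unity.AspNet.WebApi, and requires the UnityPerRequestHttpModule to be active):

public static void RegisterTypes(IUnityContainer container)
{
    // One instance per HTTP request: every resolve during the same request
    // returns the same context, and the next request gets a fresh one.
    container.RegisterType<IDBContext, MyDbContext>(new PerRequestLifetimeManager());
    container.RegisterType<IUserDbContext>(new PerRequestLifetimeManager(), new InjectionFactory(c =>
    {
        var tenantConnectionString = c.Resolve<ITenantConnectionResolver>().ResolveConnectionString();
        return new UserDbContext(tenantConnectionString);
    }));
}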