Is there a working example of Akka.net Persistence with MongoDb out there anywhere?

I am attempting to configure Akka.Net with journal persistence to MongoDb but it is throwing an exception that I can't quite figure out. Is there a reference example out there anywhere I can look at to see how this is supposed to work? I would have expected the examples in the unit tests to fill this need for me but the tests are missing for the MongoDb implementation of persistence. :(
Here's the error I am getting:
Akka.Actor.ActorInitializationException : Exception during creation --->
System.TypeLoadException : Method 'ReplayMessagesAsync' in type
'Akka.Persistence.MongoDb.Journal.MongoDbJournal' from assembly
'Akka.Persistence.MongoDb, Version=1.0.5.2, Culture=neutral, PublicKeyToken=null'
does not have an implementation.
and here is my HOCON for this app:
---Edit--- Thanks for the tip, Horusiath; based on that I updated to this HOCON. The Sqlite provider works, but the MongoDb one is still giving an error.
akka {
  actor {
    provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
  }
  remote {
    helios.tcp {
      port = 9870 # bound to a specific port
      hostname = localhost
    }
  }
  persistence {
    publish-plugin-commands = on
    journal {
      #plugin = "akka.persistence.journal.sqlite"
      plugin = "akka.persistence.journal.mongodb"
      mongodb {
        class = "Akka.Persistence.MongoDb.Journal.MongoDbJournal, Akka.Persistence.MongoDb"
        connection-string = "mongodb://localhost/Akka"
        collection = "EventJournal"
      }
      sqlite {
        class = "Akka.Persistence.Sqlite.Journal.SqliteJournal, Akka.Persistence.Sqlite"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "FullUri=file:Sqlite-journal.db?cache=shared;"
        connection-timeout = 30s
        schema-name = dbo
        table-name = event_journal
        auto-initialize = on
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
      }
    }
    snapshot-store {
      #plugin = "akka.persistence.snapshot-store.sqlite"
      plugin = "akka.persistence.snapshot-store.mongodb"
      mongodb {
        class = "Akka.Persistence.MongoDb.Snapshot.MongoDbSnapshotStore, Akka.Persistence.MongoDb"
        connection-string = "mongodb://localhost/Akka"
        collection = "SnapshotStore"
      }
      sqlite {
        class = "Akka.Persistence.Sqlite.Snapshot.SqliteSnapshotStore, Akka.Persistence.Sqlite"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "FullUri=file:Sqlite-journal.db?cache=shared;"
        connection-timeout = 30s
        schema-name = dbo
        table-name = snapshot_store
        auto-initialize = on
      }
    }
  }
}
So, back to my original question: Is there a working MongoDb sample that I can examine to learn how this is intended to work?

Configuration requires providing fully qualified type names with assemblies. Try specifying the class as "Akka.Persistence.MongoDb.Journal.MongoDbJournal, Akka.Persistence.MongoDb" (you probably don't need the double quotes either, as it's not an inline string).

An old thread, but here's a large sample I put together years ago using Akka.Persistence.MongoDb + Clustering: https://github.com/Aaronontheweb/InMemoryCQRSReplication
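In the meantime, a minimal persistent actor is usually enough to smoke-test the journal configuration posted above. The sketch below is only an illustration, not a verified sample: the CounterActor/CounterEvent names and the akka.hocon file name are made up, and it assumes the Akka, Akka.Persistence and Akka.Persistence.MongoDb package versions are compatible with each other.
using System;
using System.IO;
using Akka.Actor;
using Akka.Configuration;
using Akka.Persistence;

// Hypothetical event type, used only to exercise the journal.
public class CounterEvent
{
    public CounterEvent(int delta) { Delta = delta; }
    public int Delta { get; private set; }
}

public class CounterActor : PersistentActor
{
    private int _count;

    public override string PersistenceId { get { return "counter-1"; } }

    protected override bool ReceiveRecover(object message)
    {
        var evt = message as CounterEvent;
        if (evt != null) { _count += evt.Delta; return true; }
        return false;
    }

    protected override bool ReceiveCommand(object message)
    {
        if (message is int)
        {
            // Persist writes to whatever journal plugin the HOCON selects (MongoDb here).
            Persist(new CounterEvent((int)message), evt =>
            {
                _count += evt.Delta;
                Sender.Tell(_count);
            });
            return true;
        }
        return false;
    }
}

class Program
{
    static void Main()
    {
        // Load the HOCON above (assumed to be saved as akka.hocon) and run the actor.
        var config = ConfigurationFactory.ParseString(File.ReadAllText("akka.hocon"));
        using (var system = ActorSystem.Create("MySystem", config))
        {
            var counter = system.ActorOf(Props.Create(() => new CounterActor()), "counter");
            counter.Tell(1);
            counter.Tell(2);
            Console.ReadLine();
        }
    }
}
After restarting the process, recovery should replay the persisted CounterEvent entries from the EventJournal collection; if the journal plugin is misconfigured, the failure shows up as soon as the actor is created.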

Related

Using terraform to fetch entity name under alias

I am trying to fetch all the entity names using the data source vault_identity_entity; however, I am unable to fetch the name of the entity located under aliases.
Sample code:
data "vault_identity_group" "group" {
  group_name = "vaultadmin"
}

data "vault_identity_entity" "entity" {
  for_each  = toset(data.vault_identity_group.group.member_entity_ids)
  entity_id = each.value
}

data "null_data_source" "values" {
  for_each = data.vault_identity_entity.entity
  inputs = {
    ssh_user_details = lookup(jsondecode(data.vault_identity_entity.entity[each.key].data_json), "name", {})
  }
}
"data_json": "{\"aliases\":[{\"canonical_id\":\"37b4c764-a4ec-dcb7-c3c7-31cf9c51e456\",\"creation_time\":\"2022-07-20T08:53:36.553988277Z\",\"custom_metadata\":null,\"id\":\"59fb8a9c-1c0c-0591-0f6e-1a153233e456\",\"last_update_time\":\"2022-07-20T08:53:36.553988277Z\",\"local\":false,\"merged_from_canonical_ids\":null,\"metadata\":null,\"mount_accessor\":\"auth_approle_12d1d8af\",\"mount_path\":\"auth/approle/\",\"mount_type\":\"approle\",\"name\":\"name.user#test.com\"}],\"creation_time\":\"2022-07-20T08:53:36.553982983Z\",\"direct_group_ids\":[\"e456cb46-2b51-737c-3277-64082352f47e\"],\"disabled\":false,\"group_ids\":[\"e456cb46-2b51-737c-3277-64082352f47e\"],\"id\":\"37b4c764-a4ec-dcb7-c3c7-31cf9c51e456\",\"inherited_group_ids\":[],\"last_update_time\":\"2022-07-20T08:53:36.553982983Z\",\"merged_entity_ids\":null,\"metadata\":null,\"name\":\"entity_ec5c123\",\"namespace_id\":\"root\",\"policies\":[]}",
The above script returns the entity id entity_ec5c123. Any suggestions on how to retrieve the name field under aliases, which holds the user's email id?
Maybe something like this?
data "vault_identity_group" "group" {
  group_name = "vaultadmin"
}

data "vault_identity_entity" "entity" {
  for_each  = toset(data.vault_identity_group.group.member_entity_ids)
  entity_id = each.value
}

locals {
  mount_accessor = "auth_approle_12d1d8af"
  # mount_path = "auth/approle/"
  aliases = { for k, v in data.vault_identity_entity.entity : k => jsondecode(v.data_json)["aliases"] }
}

data "null_data_source" "values" {
  for_each = data.vault_identity_entity.entity
  inputs = {
    ssh_user_details = lookup({ for alias in lookup(local.aliases, each.key, "ent_missing") : alias.mount_accessor => alias.name }, local.mount_accessor, "ent_no_alias_on_auth_method")
  }
}
Basically you want to do a couple of lookups here. You can simplify this if you can guarantee that each entity will only ever have a single alias, but otherwise you should probably look up the alias for a specific mount_accessor and discard the other entries.
I haven't done much testing with this code, but you should be able to run terraform console after doing an init on your workspace and figure out what the data structures look like if you run into issues.

How to make sure locks are released with EF Core and Postgres?

I have a console program that moves data between two different servers (DatabaseA and DatabaseB).
Database B is a Postgres server.
It calls a lot of stored procedures and other raw queries.
I use ExecuteSqlRaw a lot.
I also use NpsqlBulk.EfCore.
The program uses the same context instance for DatabaseB during the whole run it takes to finish.
Somehow I get locks on some of my tables on DatabaseB that never get released.
This always happens on my table mytable_fromdatabase_import.
The code that runs on it is the following:
protected override void AddIdsNew()
{
    var toAdd = IdsNotInDatabaseB();
    var newObjectsToAdd = GetByIds(toAdd).Select(Converter.ConvertAToB);

    DatabaseBContext.Database.ExecuteSqlRaw("truncate mytable_fromdatabase_import; ");

    var uploader = new NpgsqlBulkUploader(DatabaseBContext);
    uploader.Insert(newObjectsToAdd); // inserts data into mytable_fromdatabase_import

    DatabaseBContext.Database.ExecuteSqlRaw("call insert_myTable_from_importTable();");
}
After I run it, the whole table is not accessible anymore, and when I query the locks on the server I can see there is a process holding them.
How can I make sure this process always closes and releases its locks on the tables?
I thought EF Core would do that automatically.
-----------Edit-----------
I just wanted to add that this is not a temporary problem during the run of the console program. When I run this code and it is finished, my table is still locked and nothing can access it. My understanding was that the EF Core context would release everything after it is disposed (whether by error or by finishing).
The problem had nothing to do with EF Core but with a wrongly configured backup script. The program is now running with no changes to it and works fine.
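For anyone who lands here with the same symptom and whose locks really do come from the importer, one way to make the lock lifetime explicit is to scope the three steps in a single transaction and dispose the context when the run is finished. The following is only a sketch of the method from the question, reusing its DatabaseBContext, NpgsqlBulkUploader and helper names; whether the bulk uploader enlists in the EF Core transaction depends on the bulk library, so treat it as an illustration rather than a verified fix.
protected override void AddIdsNew()
{
    var toAdd = IdsNotInDatabaseB();
    var newObjectsToAdd = GetByIds(toAdd).Select(Converter.ConvertAToB);

    // TRUNCATE takes an ACCESS EXCLUSIVE lock that is held until the transaction ends,
    // so keep the whole import inside one explicit transaction and always commit or roll back.
    using (var transaction = DatabaseBContext.Database.BeginTransaction())
    {
        DatabaseBContext.Database.ExecuteSqlRaw("truncate mytable_fromdatabase_import;");

        var uploader = new NpgsqlBulkUploader(DatabaseBContext);
        uploader.Insert(newObjectsToAdd); // assumed to run on the context's current connection/transaction

        // Note: this assumes insert_myTable_from_importTable does not do its own transaction control.
        DatabaseBContext.Database.ExecuteSqlRaw("call insert_myTable_from_importTable();");

        transaction.Commit(); // the truncate lock is released here (or on rollback/dispose if something throws)
    }

    // When the console run is over, dispose the context (or wrap it in a using block at the caller)
    // so the underlying Npgsql connection is closed or returned to the pool instead of lingering.
}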
For a concrete task you need the right tools. You probably get locks when you retrieve the Ids and also when you try not to load already imported records. These steps are slow!
I would suggest using linq2db (disclaimer: I'm a co-author of this library).
Create two projects with models from the different databases:
Source.Model.csproj - install linq2db.SQLServer
Destination.Model.csproj - install linq2db.PostgreSQL
Follow the instructions in the T4 templates on how to generate a model from the two databases. It is easy, and you can ask questions on linq2db's GitHub site.
I'll post the helper class which I used for transferring tables on a previous project. It additionally uses the CodeJam library for mapping, but in your project you can of course use AutoMapper.
public class DataImporter
{
    private readonly DataConnection _source;
    private readonly DataConnection _destination;

    public DataImporter(DataConnection source, DataConnection destination)
    {
        _source = source;
        _destination = destination;
    }

    private long ImportDataPrepared<TSource, TDest>(IOrderedQueryable<TSource> source, Expression<Func<TSource, TDest>> projection) where TDest : class
    {
        var destination = _destination.GetTable<TDest>();
        var tableName = destination.TableName;

        var sourceCount = source.Count();
        if (sourceCount == 0)
            return 0;

        var currentCount = destination.Count();
        if (currentCount > sourceCount)
            throw new Exception($"'{tableName}' what happened here?.");
        if (currentCount >= sourceCount)
            return 0;

        IQueryable<TSource> sourceQuery = source;
        if (currentCount > 0)
            sourceQuery = sourceQuery.Skip(currentCount);

        var projected = sourceQuery.Select(projection);

        var copied =
            _destination.BulkCopy(
                new BulkCopyOptions
                {
                    BulkCopyType = BulkCopyType.MultipleRows,
                    RowsCopiedCallback = (obj) => RowsCopiedCallback(obj, currentCount, sourceCount, tableName)
                }, projected);

        return copied.RowsCopied;
    }

    private void RowsCopiedCallback(BulkCopyRowsCopied obj, int currentRows, int totalRows, string tableName)
    {
        var percent = (currentRows + obj.RowsCopied) / (double)totalRows * 100;
        Console.WriteLine($"Copied {percent:N2}% \tto {tableName}");
    }

    public class ImporterHelper<TSource>
    {
        private readonly DataImporter _improrter;
        private readonly IOrderedQueryable<TSource> _sourceQuery;

        public ImporterHelper(DataImporter improrter, IOrderedQueryable<TSource> sourceQuery)
        {
            _improrter = improrter;
            _sourceQuery = sourceQuery;
        }

        public long To<TDest>() where TDest : class
        {
            var mapperBuilder = new MapperBuilder<TSource, TDest>();
            return _improrter.ImportDataPrepared(_sourceQuery, mapperBuilder.GetMapper().GetMapperExpressionEx());
        }

        public long To<TDest>(Expression<Func<TSource, TDest>> projection) where TDest : class
        {
            return _improrter.ImportDataPrepared(_sourceQuery, projection);
        }
    }

    public ImporterHelper<TSource> ImprortData<TSource>(IOrderedQueryable<TSource> source)
    {
        return new ImporterHelper<TSource>(this, source);
    }
}
Now begin transferring. Note that I have used OrderBy/ThenBy to specify the Id order so that already transferred records are not imported again; importantly, the order fields should be a unique key combination. This makes the sample reentrant, so it can be re-run when the connection is lost.
var sourceBuilder = new LinqToDbConnectionOptionsBuilder();
sourceBuilder.UseSqlServer(SourceConnectionString);

var destinationBuilder = new LinqToDbConnectionOptionsBuilder();
destinationBuilder.UsePostgreSQL(DestinationConnectionString);

using (var source = new DataConnection(sourceBuilder.Build()))
using (var destination = new DataConnection(destinationBuilder.Build()))
{
    var dataImporter = new DataImporter(source, destination);

    dataImporter.ImprortData(source.GetTable<Source.Model.FirstTable>()
            .OrderBy(e => e.Id1)
            .ThenBy(e => e.Id2))
        .To<Dest.Model.FirstTable>();

    dataImporter.ImprortData(source.GetTable<Source.Model.SecondTable>().OrderBy(e => e.Id))
        .To<Dest.Model.SecondTable>();
}
For sure, the boring part with the OrderBy calls can be generated automatically, but that would blow up this already not-so-short answer.
Also play with BulkCopyOptions: the native Npgsql COPY may fail, in which case the multiple-rows variant should be used.

Execute gatling scenarios based on a boolean flag

Is it possible in Gatling to execute scenarios based on a boolean flag from a properties file?
application.conf
config {
  isDummyTesting = true,
  Test {
    baseUrl = "testUrl"
    userCount = 1
    testUser {
      CustomerLoginFeeder = "CustomerLogin.getLogin()"
      Navigation = "Navigation.navigation"
    }
  },
  performance {
    baseUrl = "testUrl"
    userCount = 100
    testUser {
      CustomerLoginFeeder = "CustomerLogin.getLogin()"
    }
  }
}
and in my simulation file
var flowToTest = ConfigFactory.load().getObject("config.performance.testUser").toConfig
if (ConfigFactory.load().getBoolean("config.isDummyTesting")) {
  var flowToTest = ConfigFactory.load().getObject("config.Test.testUser").toConfig
}
While executing the flow, I am running the below code:
scenario("Customer Login").exec(flowToTest)
and I am getting this error:
ERROR : io.gatling.core.structure.ScenarioBuilder
cannot be applied to (com.typesafe.config.Config)
I want it so that if the flag is true, it executes both scenarios; otherwise only the other one.
I think you're making a mistake in trying to have the flow defined in the config, rather than just the flag. You can load the value of the isDummyTesting flag and have it passed into a session variable. From there, you can use the standard Gatling doIf construct to include Navigation.navigation if specified.
So in your simulation file you can have
private val isDummyTesting = java.lang.Boolean.getBoolean("isDummyTesting")
and then in your scenario
.exec(session => session.set("isDummyTesting", isDummyTesting))
...
.exec(CustomerLogin.getLogin())
.doIf("${isDummyTesting}") {
exec(Navigation.navigation)
}

MongoDB 'upsert' from Grails

I'm trying to implement a simple "insert or update" (so-called 'upsert') method in Grails / GORM / mongodb plug-in / MongoDB.
The approach I used with Hibernate (using merge) fails with a duplicate key error. I presume merge() perhaps isn't a supported operation in the MongoDB GORM, so I tried to get to the native upsert method through GMongo.
I finally have a version that works (as posted below), but it is probably not the best way, as adding any fields to the object being saved will break the code silently.
public void upsertPrefix(p) {
    def o = new BasicDBObject()
    o.put("_id", p.id)
    o.put("someValue", p.someValue)
    o.put("otherValue", p.otherValue)
    // DBObject o = p as DBObject // No signature of method: mypackage.Prefix.keySet() is applicable for argument types: () values: []

    db.prefix.update([_id : p.id], o, true, false)
    // I actually would want to pass p instead of o here, but that fails with:
    // No signature of method: com.gmongo.internal.DBCollectionPatcher$__clinit__closure2.doCall() is applicable for argument types: (java.util.ArrayList) values: [[[_id:keyvalue], mypackage.Prefix : keyvalue, ...]]

    /* All of these other more "Hibernatesque" approaches fail:
    def existing = Prefix.get(p.id)
    if (existing != null) {
        p.merge(flush:true)   // E11000 duplicate key error
        // existing.merge(p)  // Invocation failed: Message: null
        // Prefix.merge(p)    // Invocation failed: Message: null
    } else {
        p.save(flush:true)
    }
    */
}
I guess I could introduce another POJO-DbObject mapping framework to the mix, but that would complicate things even more, duplicate what GORM is already doing and may introduce additional meta-data.
Any ideas how to solve this in the simplest fashion?
Edit #1: I now tried something else:
def existing = Prefix.get(p.id)
if (existing != null) {
    // existing.properties = p.properties // E11000 duplicate key error...
    existing.someValue = p.someValue
    existing.otherValue = p.otherValue
    existing.save(flush:true)
} else {
    p.save(flush:true)
}
Once again, the uncommented version works but is not very maintainable. The commented-out version, which I'd like to make work, fails.
Edit #2:
Version which works:
public void upsertPrefix(p) {
    def o = new BasicDBObject()
    p.properties.each {
        if (! (it.key in ['dbo'])) {
            o[it.key] = p.properties[it.key]
        }
    }
    o['_id'] = p.id
    db.prefix.update([_id : p.id], o, true, false)
}
Version which never seems to insert anything:
def upsertPrefix(Prefix updatedPrefix) {
    Prefix existingPrefix = Prefix.findOrCreateById(updatedPrefix.id)
    updatedPrefix.properties.each { prop ->
        if (! prop.key in ['dbo', 'id']) { // You don't want to re-set the id, and dbo is r/o
            existingPrefix.properties[prop.key] = prop.value
        }
    }
    existingPrefix.save() // Never seems to insert anything
}
Version which still fails with duplicate key error:
def upsertPrefix(p) {
    def existing = Prefix.get(p.id)
    if (existing != null) {
        p.properties.each { prop ->
            print prop.key
            if (! prop.key in ['dbo', 'id']) {
                existingPrefix.properties[prop.key] = prop.value
            }
        }
        existing.save(flush:true) // Still fails with duplicate key error
    } else {
        p.save(flush:true)
    }
}
Assuming you have either an updated version of the object, or a map of the properties you need to update with their new values, you could loop over those and apply the updates for each property.
Something like this:
def upsert(Prefix updatedPrefix) {
    Prefix existingPrefix = Prefix.findOrCreateById(updatedPrefix.id)
    updatedPrefix.properties.each { prop ->
        if (prop.key != 'id') { // You don't want to re-set the id
            existingPrefix.properties[prop.key] = prop.value
        }
    }
    existingPrefix.save()
}
How I exclude updating the ID may not be quite correct, so you might have to play with it a bit. You might also consider only updating a property if its corresponding new value is different from the existing one, but that's essentially just an optimization.
If you have a map, you might also consider doing the update the way the default controller scaffolding does:
prefixInstance.properties = params
MongoDB has native support for upsert. See the findAndModify Command with upsert parameter true.

Filter SELECT and INSERT/UPDATE/DELETE queries in Zend Framework

I am using the multidb pattern in Zend Framework.
Typically I will be using a master/slave MySQL architecture.
So my question is: what should I do to execute SELECT queries against the slave database and INSERT/UPDATE/DELETE queries against the master database?
My application.ini looks like
resources.multidb.primary.adapter = PDO_MYSQL
resources.multidb.primary.host = localhost
resources.multidb.primary.username = root
resources.multidb.primary.password = 123456
resources.multidb.primary.dbname = tubaah_zend
resources.multidb.primary.default = true
resources.multidb.secondary.adapter = PDO_MYSQL
resources.multidb.secondary.host = localhost
resources.multidb.secondary.username = root
resources.multidb.secondary.password = 123456
resources.multidb.secondary.dbname = tubaah
So I want to run all SELECT queries on the secondary database and all INSERT/UPDATE/DELETE queries on the primary database.
I believe insert/update/delete should work just fine, i.e.:
My_Model_DbTable_MyTable.php:
function myFunction() {
    $this->insert();
    $this->update();
    $this->delete();
}
However, if you wish to use the secondary database, you may be unable to use the typical $this->select() method:
My_Model_DbTable_MyTable.php
// Override getAdapter() function to be able to obtain the secondary database
function getAdapter($name = 'primary') {
    $resource = $this->getPluginResource('multidb');
    $resource->init();

    // Ensure only primary and secondary are allowed
    if ($name == 'secondary' || $name == 'primary') {
        return $resource->getDb($name);
    } else {
        return $this->_db;
    }
}

function selectFromSecondary() {
    $db = $this->getAdapter('secondary');
    $select = $this->select(true);
    return $db->fetchAll($select); // normally this is $this->fetchAll()
}
Again, by overriding getAdapter() as shown above, you won't need to make any changes whatsoever when accessing the primary database. If you need the secondary one, obtain the secondary adapter via $this->getAdapter('secondary'), store it in a variable, e.g. $db, and then call the select/insert/update/delete methods on the $db object.
EDIT: Slight modification to the code above. You should try to utilize $this->_db by default for getAdapter(), and $db-> replaces $this-> for fetch(), update(), insert(), delete(), etc., not for select().