In our project we need to change the collection name suffix every day based on the date.
So one day the collection is named:
samples_22032019
and the next day it is:
samples_23032019
Every day I need to change the suffix and recompile the Spring Boot application because of this. Is there any way I can change this so the collection/table name can be calculated dynamically based on the current date? Any advice for MongoRepository?
Assuming the below is your bean, you can use the @Document annotation with a Spring Expression Language (SpEL) expression to resolve the suffix at runtime, as shown below:

@Document(collection = "samples_#{T(com.yourpackage.Utility).getDateSuffix()}")
public class Samples {

    private String id;
    private String name;
}
Now have your date logic in a utility method which Spring can resolve at runtime. SpEL is handy in such scenarios.

package com.yourpackage;

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class Utility {

    public static String getDateSuffix() {
        // Add your real logic here; the below is for representational purposes only.
        // It formats today's date as ddMMyyyy, e.g. 22032019.
        return LocalDate.now().format(DateTimeFormatter.ofPattern("ddMMyyyy"));
    }
}
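For completeness, nothing special is needed on the repository side. A minimal sketch, assuming standard Spring Data MongoDB (the interface name SamplesRepository is illustrative, not from the original answer):

import org.springframework.data.mongodb.repository.MongoRepository;

// Spring re-evaluates the SpEL expression in @Document when it resolves the
// collection name, so reads and writes should follow getDateSuffix() at runtime.
public interface SamplesRepository extends MongoRepository<Samples, String> {
}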
HTH!
Make a cron job run daily to generate the new name for your collection and execute the code below. Here I am getting the collection using MongoDatabase; then, using a MongoNamespace, we can rename the collection.
To get the old/new collection names you can write a separate method.
import com.mongodb.MongoClient;
import com.mongodb.MongoNamespace;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class RenameCollectionTask {

    @Scheduled(cron = "${cron}")
    public void renameCollection() {
        // creating the mongo client object
        final MongoClient client = new MongoClient(HOST_NAME, PORT);
        // selecting the mongo database
        final MongoDatabase database = client.getDatabase("databaseName");
        // selecting the mongo collection
        final MongoCollection<Document> collection = database.getCollection("oldCollectionName");
        // creating the new namespace
        final MongoNamespace newName = new MongoNamespace("databaseName", "newCollectionName");
        // renaming the collection
        collection.renameCollection(newName);
        System.out.println("Collection has been renamed");
        // closing the client
        client.close();
    }
}
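For the separate naming method mentioned above, a hypothetical helper (not from the original answer, reusing the samples_ddMMyyyy format from the question) could look like this:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

class CollectionNames {

    // Builds samples_ddMMyyyy for the given date, e.g. samples_22032019.
    static String forDate(LocalDate date) {
        return "samples_" + date.format(DateTimeFormatter.ofPattern("ddMMyyyy"));
    }

    // Yesterday's name is the collection to rename; today's is the target.
    static String oldName() { return forDate(LocalDate.now().minusDays(1)); }
    static String newName() { return forDate(LocalDate.now()); }
}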
To assign the collection name dynamically you can refer to the SpEL-based approach above, so that a restart will not be required every time.
The renameCollection() method has the following limitations:
1) It cannot move a collection between databases.
2) It is not supported on sharded collections.
3) You cannot rename views.
Refer to the MongoDB documentation for details.
I'm writing a pipeline to replicate data from one source to another. Info about the data sources is stored in a db (BQ). How can I use this data to build read/write endpoints dynamically?
I tried to pass the Pipeline object to my custom DoFn, but it can't be serialized. Later I tried to call getPipeline() on a passed view, but that doesn't work either -- which is actually expected.
I can't know all the tables I need to replicate in advance, so I have to read them from the db (or any other source).
// builds some random view
PCollectionView<IdWrapper> idView = ...;

// reads table meta and replicates data for each table
pipeline.apply(getTableMetaEndpoint().read())
        .apply(ParDo.of(new MyCustomReplicator(idView)).withSideInputs(idView));

private static class MyCustomReplicator extends DoFn<TableMeta, TableMeta> {

    private final PCollectionView<IdWrapper> idView;

    private MyCustomReplicator(PCollectionView<IdWrapper> idView) {
        this.idView = idView;
    }

    // TableMeta {string: sourceTable, string: destTable}
    @ProcessElement
    public void processElement(@Element TableMeta tableMeta, ProcessContext ctx) {
        long id = ctx.sideInput(idView).getValue();
        // builds a read endpoint which depends on the table meta
        // updates entities
        // stores entities using another endpoint
        idView
            .getPipeline()
            .apply(createReadEndpoint(tableMeta).read())
            .apply(ParDo.of(new SomeFunction(tableMeta, id)))
            .apply(createWriteEndpoint(tableMeta).insert());

        ctx.output(tableMeta);
    }
}
I expect it to replicate the data specified by TableMeta, but I can't use the pipeline within the DoFn object because it can't be serialized/deserialized.
Is there any way to implement the intended behavior?
Okay, so at work we are developing a system using MVC (C#) & MongoDB. When first developing we decided it would probably be a good idea to follow the repository pattern (what a pain in the ass!). Here is the code to give an idea of what is currently implemented.
The MongoRepository class:
public class MongoRepository { }

public class MongoRepository<T> : MongoRepository, IRepository<T>
    where T : IEntity
{
    private MongoClient _client;
    private IMongoDatabase _database;
    private IMongoCollection<T> _collection;

    public string StoreName {
        get {
            return typeof(T).Name;
        }
    }

    public MongoRepository() {
        _client = new MongoClient(ConfigurationManager.AppSettings["MongoDatabaseURL"]);
        _database = _client.GetDatabase(ConfigurationManager.AppSettings["MongoDatabaseName"]);
        /* misc code here */
        Init();
    }

    public void Init() {
        _collection = _database.GetCollection<T>(StoreName);
    }

    public IQueryable<T> SearchFor() {
        return _collection.AsQueryable<T>();
    }
}
The IRepository interface:
public interface IRepository { }

public interface IRepository<T> : IRepository
    where T : IEntity
{
    string StoreNamePrepend { get; set; }
    string StoreNameAppend { get; set; }

    IQueryable<T> SearchFor();
    /* misc code */
}
The repository is then instantiated using Ninject, but without that it would look something like this (just to keep the example simple):
MongoRepository<Client> clientCol = new MongoRepository<Client>();
Here is the code used for the search pages; it feeds into a controller action which outputs JSON for a table for DataTables to read. Please note that the following uses Dynamic LINQ so that the LINQ can be built from string input:
tmpFinalList = clientCol
    .SearchFor()
    .OrderBy(tmpOrder)  // tmpOrder = "ClientDescription DESC"
    .Skip(Start)        // Start = 99900
    .Take(PageLength)   // PageLength = 10
    .ToList();
Now the problem: if the collection has a lot of records (99,905 to be exact), everything works fine as long as the data in the sorted field isn't very large. For example, our Key field is a 5-character fixed-length string, and I can Skip and Take fine using this query. However, with something longer like ClientDescription, I can sort fine and take fine from the front of the query (i.e. page 1), but when I page to the end with Skip = 99900 & Take = 10 it gives the following memory error:
An exception of type 'MongoDB.Driver.MongoCommandException' occurred in MongoDB.Driver.dll but was not handled in user code

Additional information: Command aggregate failed: exception: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.
Okay, so that is easy enough to understand. I have had a look online, and mostly everything suggested is to use aggregation with "allowDiskUse: true". However, since I use IQueryable in IRepository, I cannot start using IAggregateFluent<>, because that would mean exposing MongoDB-related classes to IRepository, which would go against IoC principles.
Is there any way to force IQueryable to use this, or does anyone know of a way for me to access IAggregateFluent without going against IoC principles?
One thing of interest to me is why the sort works for page 1 (Start = 0, Take = 10) but then fails when I skip to the end. Surely everything must be sorted for me to be able to get the items in order for page 1, so shouldn't (Start = 99900, Take = 10) need the same amount of sorting, with MongoDB just sending me the last few records? Why doesn't this error happen when both sorts are done?
ANSWER
Okay, so with the help of @craig-wilson, upgrading to the newest version of the MongoDB C# drivers and changing the following in MongoRepository fixed the problem:
public IQueryable<T> SearchFor() {
    return _collection.AsQueryable<T>(new AggregateOptions { AllowDiskUse = true });
}
I was getting a System.MissingMethodException, but this was caused by other copies of the MongoDB drivers that needed updating as well.
When creating the IQueryable from an IMongoCollection, you can pass in the AggregateOptions which allow you to set AllowDiskUse.
https://github.com/mongodb/mongo-csharp-driver/blob/master/src/MongoDB.Driver/IMongoCollectionExtensions.cs#L53
I am using Spring Data's Elasticsearch module, but I am having trouble building a query. It is a very simple query, though.
My document looks as follows:
@Document(indexName = "triber-sensor", type = "event")
public class EventDocument implements Event {

    @Id
    private String id;

    @Field(type = FieldType.String)
    private EventMode eventMode;

    @Field(type = FieldType.String)
    private EventSubject eventSubject;

    @Field(type = FieldType.String)
    private String eventId;

    @Field(type = FieldType.Date)
    private Date creationDate;
}
And the Spring Data repository looks like:
public interface EventJpaRepository extends ElasticsearchRepository<EventDocument, String> {
    List<EventDocument> findAllOrderByCreationDateDesc(Pageable pageable);
}
So I am trying to get all events ordered by creationDate, with the newest event first. However, when I run the code I get an exception (also in STS):
Caused by: org.springframework.data.mapping.PropertyReferenceException: No property desc found for type Date! Traversed path: EventDocument.creationDate.
So it seems that it is not picking up the 'OrderBy' part? However, a query with a findBy clause (e.g. findByCreationDateOrderByCreationDateDesc) seems to be okay. A findAll without ordering works as well.
Does this mean that the Elasticsearch module of Spring Data doesn't allow findAll with ordering?
Try adding By to the method name:

findAllByOrderByCreationDateDesc
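In full, the repository from the question would then look like this (only the method name changes):

import java.util.List;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

public interface EventJpaRepository extends ElasticsearchRepository<EventDocument, String> {

    // "findAll" + "By" (with no criteria) + "OrderByCreationDateDesc": the By
    // keyword tells the query parser where the predicate ends and ordering begins.
    List<EventDocument> findAllByOrderByCreationDateDesc(Pageable pageable);
}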
I am implementing a custom IBsonSerializer with the official MongoDB driver (C#). I am in a situation where I must serialize and deserialize a Guid.
If I implement the Serialize method as follow, it works:
public void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
{
    BsonBinaryData data = new BsonBinaryData((Guid)value, GuidRepresentation.CSharpLegacy);
    bsonWriter.WriteBinaryData(data);
}
However, I don't want the Guid representation to be CSharpLegacy; I want to use the standard representation. But if I change the Guid representation in that code, I get the following error:
MongoDB.Bson.BsonSerializationException: The GuidRepresentation for the writer is CSharpLegacy, which requires the subType argument to be UuidLegacy, not UuidStandard.
How do I serialize a Guid value using the standard representation?
Old question, but in case someone finds it on Google like I did...

Do this once:

BsonDefaults.GuidRepresentation = GuidRepresentation.Standard;

For example, in a web application / Web API, your Global.asax.cs file is the best place to add it once:
public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        BsonDefaults.GuidRepresentation = GuidRepresentation.Standard;
        // Other code...below
    }
}
If you don't want to modify the global setting BsonDefaults.GuidRepresentation (and you shouldn't, because modifying globals is a bad pattern), you can specify the setting when you create your collection:
IMongoDatabase db = ???;
string collectionName = ???;

var collectionSettings = new MongoCollectionSettings {
    GuidRepresentation = GuidRepresentation.Standard
};
var collection = db.GetCollection<BsonDocument>(collectionName, collectionSettings);
Then any GUIDs written to the collection will be in the standard format.
Note that when you read records from the database, you will get a System.FormatException if the GUID format in the database is different from the format in your collection settings.
It looks like what's happening is that when you don't explicitly pass a GuidRepresentation to the BsonBinaryData constructor, it defaults to GuidRepresentation.Unspecified, which ultimately maps to the legacy representation (see this line in the source).
So you need to explicitly pass the guidRepresentation as a third argument to BsonBinaryData, set to GuidRepresentation.Standard.
Edit: as was later pointed out, you can set BsonDefaults.GuidRepresentation = GuidRepresentation.Standard if that's what you always want to use.
I've got a model defined like the following...
@MongoEntity
public class Ent extends MongoModel {

    public Hashtable<Integer, CustomType> fil;
    public int ID;

    public Ent() {
        fil = new Hashtable<Integer, CustomType>();
    }
}
CustomType is a datatype I've created which basically holds a list of items (among other things). At some point in my web application I update the hashtable from a controller and then read back the size of the item I just updated, like the following...
public static void addToHash(CustomType type, int ID, int key) {
    // First I add an element to the list I'm storing in CustomType.
    Ent ent = Ent.find("byID", ID).first();
    CustomType element = ent.fil.get(key);
    if (element == null) element = new CustomType();
    element.add(type);
    ent.save();

    // Next I reset the variables and read back the value I just stored.
    ent = null;
    ent = Ent.find("byID", ID).first();
    element = ent.fil.get(key);
    System.out.println("SIZE = " + element.size()); // null pointer here
}
As you can see in the above example, I add the element, save the model, and then attempt to read back what I have just added, and it has not been saved. The Ent model above is a minimal version of the model I'm actually using. All other values in the model, including Lists, Strings, Integers, etc., update correctly when they're changed, but this Hashtable isn't stored. Why is this happening and how can I correct it?
You should probably post on the Play framework forum for better help.
Alternatives for a MongoDB framework are Morphia and Spring Data, which have good documentation.
Not sure how Play maps a hashtable to a document value, but it seems it cannot update just the hashtable using a Mongo operator.
You should be able to mark the whole document for update instead; that would work, but more slowly.
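As a hedged sketch of that whole-document approach, reusing the names from the question and assuming MongoModel.save() persists the full document, the key change would be to put the element back into the hashtable before saving:

public static void addToHash(CustomType type, int ID, int key) {
    Ent ent = Ent.find("byID", ID).first();
    CustomType element = ent.fil.get(key);
    if (element == null) element = new CustomType();
    element.add(type);
    // Re-attach the element so the map entry is part of the document state
    // that save() writes back; a brand-new element is otherwise never stored.
    ent.fil.put(key, element);
    ent.save();
}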