RubyMine code completion of model object

Our Rails project involves two databases: Postgres and MS SQL Server.
In the single-database case:
When a variable holds a model object, RubyMine offers code completion for the generated dynamic finders, e.g. find_by_name, find_by_phone, and so on.
But in a model like:
class Customer < ActiveRecord::Base
  establish_connection :sqlacc
  self.table_name = 'ACC_CODE'
  ...
the model object from Customer does not get code completion for the generated dynamic finders, e.g. find_by_code.
Would somebody know how to get this to work?
Thanks

This seems to be a known limitation in RubyMine, unfortunately. See these two open issues and, if you can, vote for them.
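In the meantime, a possible stopgap (my sketch, not from the linked issues; it assumes a code column on ACC_CODE) is to declare the finders you actually need explicitly, so RubyMine has a real method to index:

class Customer < ActiveRecord::Base
  establish_connection :sqlacc
  self.table_name = 'ACC_CODE'

  # Explicit equivalent of the dynamic finder; RubyMine can complete
  # this even when it cannot infer the columns of the second database.
  def self.find_by_code(code)
    where(code: code).first
  end
end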


CQRS - How to handle if a command requires data from db (query)

I am trying to wrap my head around the best way to approach this problem.
I am importing a file that contains a bunch of users, so I created a handler called
ImportUsersCommandHandler, and my command is ImportUsersCommand, which has List<User> as one of its parameters.
In the handler, for each user that I need to import, I have to make sure that the UserType is valid; this is where the confusion comes in. I need to run a query against the database to get a list of all possible user types, and then, for each user I am importing, verify that the user type id in the import matches one that is in the db.
I have 3 options:
1. Create a query, GetUserTypesQuery, get its result, and then pass it to the ImportUsersCommand as a list and verify inside the command handler.
2. Call GetUserTypesQuery from the command handler itself instead of passing the list in (a command calling another query).
3. Do not create a GetUserTypesQuery at all and just run the query directly inside the command handler (still a query, but with no query/handler pair involved).
I feel like all these are dirty solutions and not the correct way to apply CQRS.
I agree option 1 sounds best, but I would suggest adding a pre-handler to validate your input.
That way ImportUsersCommandHandler deals with importing your data (and only that), and a handler that runs before it does the validation (in your example, checking the user types and maybe other things) and bails out if validation does not pass. So the pre-handler queries the db, checks the user types, and does whatever it needs to on failure; otherwise it just passes control down to your business handler (ImportUsersCommandHandler).
I am used to using MediatR in .NET Core, where this pattern works well (it is what we do), so sorry if it does not fit your environment/setup!
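For illustration, here is a minimal sketch of such a pre-handler as a MediatR pipeline behavior. The IUserTypeReader abstraction, the Users/UserTypeId property names, and the MediatR 9-style Handle signature are assumptions, not from the question:

using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class ImportUsersValidationBehavior : IPipelineBehavior<ImportUsersCommand, Unit>
{
    // Hypothetical query-side abstraction over the user type table.
    private readonly IUserTypeReader _userTypes;

    public ImportUsersValidationBehavior(IUserTypeReader userTypes)
    {
        _userTypes = userTypes;
    }

    public async Task<Unit> Handle(
        ImportUsersCommand request,
        CancellationToken cancellationToken,
        RequestHandlerDelegate<Unit> next)
    {
        // One query fetches all valid user type ids up front.
        var validTypeIds = await _userTypes.GetAllUserTypeIdsAsync(cancellationToken);

        // Check every imported user against the set from the db.
        var unknown = request.Users.Where(u => !validTypeIds.Contains(u.UserTypeId)).ToList();
        if (unknown.Count > 0)
            throw new ValidationException(unknown.Count + " user(s) have an unknown UserTypeId.");

        // Validation passed; hand off to ImportUsersCommandHandler.
        return await next();
    }
}

Registered with your container, this runs before ImportUsersCommandHandler for every ImportUsersCommand, so the import handler stays focused on importing.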

Strapi: Initialize / populate database

When I deploy Strapi to a new server, I want to create and populate the database tables (PostgreSQL), particularly categories. How do I access production config, and create tables and category entries?
A hint on how to approach this would be much appreciated!
I know this is an old question, but I recently came upon the same issue.
Basically you should create the collections first, which results in the creation of models. Of course, you could also create the models manually.
The recent documentation has a section about a bootstrap function:
docs bootstrap
The function is called at the start of the server.
The docs list the following use cases:
Create an admin user if there isn't one.
Fill the database with some necessary data.
Load some environment variables.
The bootstrap function can be synchronous or asynchronous.
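For example, here is a minimal seeding sketch; it assumes Strapi v4's ./src/index.js bootstrap, an existing api::category.category collection type, and made-up category names:

module.exports = {
  async bootstrap({ strapi }) {
    // Fill the database with necessary data, but only when the table is empty.
    const count = await strapi.db.query('api::category.category').count();
    if (count === 0) {
      await strapi.db.query('api::category.category').createMany({
        data: [{ name: 'News' }, { name: 'Guides' }],
      });
    }
  },
};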
A great example can be found in the plugin strapi-plugin-users-permissions.
You can implement a new service or override a function of an existing plugin.
The function initialize is implemented here: async initialize
and is called in the bootstrap function here:
await ...initialize()
The initialize function is used to populate the database with the two roles
Authenticated and Public.
Hope that helps whoever stumbles upon this question.

Meteor : Create user specific collections when user registers at both client and server ends [duplicate]

Normally, MongoDB Collections are defined like this:
DuckbilledPlatypi = new Mongo.Collection("duckbilledplatypi");
I want, though, to dynamically generate Collections based on user input. For example, I might want it to be:
RupertPupkin20151212_20151218 = new Mongo.Collection("rupertPupkin20151212_20151218");
It would be easy enough to build up the Collection name:
var dynCollName = username + begindate + '_' + enddate;
...and then pass dynCollName to Mongo.Collection:
= new Mongo.Collection(dynCollName);
...but what about the Collection instance name - how can that be dynamically generated? I would need something like:
"RupertPupkin20151212_20151218".ToRawName() = new Mongo.Collection(dynCollName);
-or:
"RupertPupkin20151212_20151218".Unstringify() = new Mongo.Collection(dynCollName);
...but AFAIK, there's no such thing...
On a single client instance, yes, and you could dynamically reference it. However in the general case (using it to sync data between the server and all connected clients), no.
I address this point in the Dynamically created collections section of common mistakes in a little detail, but the fundamental problem is that it would be highly complex to get all connected clients to agree on a dynamically generated set of collections.
It's much more likely that a finite set of collections, where some have a flexible schema, is actually what you want; a sketch of that approach follows below. As Andrew Mao points out in the answer to this related question, partitioner is another tool available to help address some cases which give rise to this question.
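For example, here is a quick sketch of that idea (collection and field names are made up, not from the question): one fixed collection, with the would-be collection name turned into a partition key:

// One fixed collection that the server and all clients agree on.
ReportWindows = new Mongo.Collection('reportWindows');

// What would have been RupertPupkin20151212_20151218 becomes keyed documents.
ReportWindows.insert({
  username: 'rupertPupkin',
  beginDate: '20151212',
  endDate: '20151218',
  payload: { /* flexible schema lives here */ }
});

// ...and the dynamic "collection" is just a query:
var pupkinWindow = ReportWindows.find({
  username: 'rupertPupkin',
  beginDate: '20151212',
  endDate: '20151218'
});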

Is there a way to manage part of an Entity Framework object using EF and the remainder using ADO?

I have to maintain an application that creates and manages quotations. (A quotation is a single class containing all information needed for pricing).
Most of the time, creating a quotation means adding a couple of lines to a couple of tables, which is pretty fast. Sometimes, however, the user attaches a large history of claims to the quotation, and tens of thousands of lines must be created in the database. Using EF, it takes forever.
So I've tried to use SqlBulkCopy to bulk insert the claims while using EF to manage the remainder of the quotation, but the way I figured out how to achieve this is really, really cumbersome: I had to clone the quotation, detach the histories, delete the claims from the database, save the quotation, get the new foreign keys, bulk create the claims, attach the histories back to the quotation, etc.
Is there another way to achieve this?
Note: I could separate the claim history from the Quotation class and manage the former using ADO and the latter using EF, but a lot of existing processes need the actual class design (not to mention that the user can actually attach many claim histories, which, of course, are sub-collections of sub-collections of sub-collections buried deep in the object tree...).
Many thanks in advance,
Sylvain.
I found a simple way to accomplish this:
// We will need a quotation Id (see below). Make sure we have one
SaveQuotation( myQuotation );
// Read the large claim history
var claimCollection = ImportClaimsFromExcelFile( fileName );
// Save the claim history without using EF. The quotation Id is needed to
// link the history to the quotation.
SaveClaimCollectionUsingSqlBulkCopy( claimCollection, myQuotation.Id );
// Now, ask EF to reload the quotation.
LoadQuotation( myQuotation.Id );
With a history of 60,000 claims, this code runs in 10 seconds. Using myObjectContext.SaveChanges(), 10 minutes was not even enough...
Thanks for your suggestions!
Note: here is the code I used to bulk insert the claims:
using (var connection = new SqlConnection(constring))
{
    connection.Open();
    using (var copy = new SqlBulkCopy(connection))
    {
        copy.DestinationTableName = "ImportedLoss";
        copy.ColumnMappings.Add("ImporterId", "ImporterId");
        copy.ColumnMappings.Add("Loss", "Loss");
        copy.ColumnMappings.Add("YearOfLoss", "YearOfLoss");
        copy.BatchSize = 1000;
        // dt is the DataTable holding the claim rows to import.
        copy.WriteToServer(dt);
    }
    // Disposing the connection closes it; an explicit Close() is redundant.
}
Because of the many round-trips made to the db in order to persist an entity, EF is not the best choice for bulk operations. That said, it looks like the EF team is looking into improving this: http://entityframework.codeplex.com/workitem/53
Also, have a look here : http://elegantcode.com/2012/01/26/sqlbulkcopy-for-generic-listt-useful-for-entity-framework-nhibernate/.
Another possible solution could be to add all your inserts within a single ExecuteSqlCommand, but then you will be losing all the advantages of using an ORM; a rough sketch follows.
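Here is what that single-command idea could look like, assuming an EF6 DbContext and the ImportedLoss table from the answer above; the claim property names are guesses. Note that SQL Server caps a command at 2,100 parameters, so a large history would still need to be sent in chunks:

var sql = new StringBuilder();
var parameters = new List<object>();
var i = 0;
foreach (var claim in claimCollection)
{
    sql.AppendFormat(
        "INSERT INTO ImportedLoss (ImporterId, Loss, YearOfLoss) VALUES (@p{0}, @p{1}, @p{2});",
        i, i + 1, i + 2);
    parameters.Add(new SqlParameter("@p" + i++, myQuotation.Id));
    parameters.Add(new SqlParameter("@p" + i++, claim.Loss));
    parameters.Add(new SqlParameter("@p" + i++, claim.YearOfLoss));
}
// One round-trip instead of one per row, but no change tracking.
context.Database.ExecuteSqlCommand(sql.ToString(), parameters.ToArray());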

How to inspect every query going to DB from Zend Framework

I have a complex reporting application that allows clients to login and view reports for their client data. There are several sections of the application where there are database calls, using various controllers. I need to make sure that client A doesn't get client B's information via header manipulation.
The system authenticates users and assigns them a clientID and a roleID. If your roleID > 1, that means you work for the company hosting the data, and you can see all client info. I want to create a catch-all that basically works like this:
if ($roleID > 1) {
    // ...send query to database
} else {
    if (/* ...does this query select a record with a clientID other than my $auth->clientID? */) {
        // do not execute query
    } else {
        // execute query
    }
}
The problem is, I want this to run for every query that goes to the server... how can I place this code as a "roadblock" between the application and the DB? I already use Zend_Profiler to look at queries, so I know it is somehow possible, but cannot discern this from the Profiler code...
I can always write an authentication function and pass selected queries that way, but this catch-all would be easier to implement across all of the calls and would be future proof. Any help is appreciated.
It's an application design fault.
You should use a 'service architecture': a single entry point for queries, namely a service, with all the checks inside it.
If this is something you want run on every query, I'd suggest extending Zend_Db_Select and overriding either the query() or assemble() function to add in your logic; a rough sketch follows below. You'll also want to add a way for it to be aware of your $auth object.
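Something like this (ZF1 assumed; the subclass name is made up, and it naively assumes every table carries a clientID column):

class My_Db_Select extends Zend_Db_Select
{
    public function query($fetchMode = null, $bind = array())
    {
        $identity = Zend_Auth::getInstance()->getIdentity();
        if ($identity && $identity->roleID <= 1) {
            // Non-staff users: scope every query to their own client's rows.
            $this->where('clientID = ?', $identity->clientID);
        }
        return parent::query($fetchMode, $bind);
    }
}

You would then instantiate new My_Db_Select($db) instead of calling $db->select(), or extend the adapter so that select() returns this subclass.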
Another option is to extend your database adapter so you can intercept the queries directly. IMO, you should try and do this at the application level though.
Depending on your database server, you can put a trace on the DB side.
Here's an example for Oracle:
http://orafaq.com/wiki/SQL_Trace