I have an SDU model created and I need to import this model into another collection. The documentation explains that it is necessary to add one document and then import the model:
https://cloud.ibm.com/docs/services/discovery?topic=discovery-sdu#import
The problem is that my collection is connected to the Object Storage service with more than 1000 documents, so it is not possible to add only one document and then import the model.
I imported the model, but SDU doesn't recognize it. Is it possible to import a model with this type of connection?
Thanks.
After doing the initial crawl of Object Storage, you should get the option to import the model. If you are getting an error when importing, please post it here for further debugging. (I am an IBM Watson employee.)
Is there a way to import a CSV file into an existing realm? I know Realm Browser can import CSV, but it imports it into a new realm copy, which I don't want.
I have an idea to create my own converter class, which will read the CSV and dump the data into the existing realm. But if there is an easier way to do it, please do suggest.
Currently, I have an existing realm in my app at the default location.
Slick 3 has an "import api" pattern to use a specific database driver, e.g.
import slick.driver.H2Driver.api._
...DAO implementation...
or
import slick.driver.PostgresDriver.api._
...DAO implementation...
How do I use PostgreSQL in production and H2 in unit tests?
Use DatabaseConfig instead. As the Slick documentation states:
On top of the configuration syntax for Database, there is another layer in the form of DatabaseConfig which allows you to configure a Slick driver plus a matching Database together. This makes it easy to abstract over different kinds of database systems by simply changing a configuration file.
Instead of importing database-specific drivers, first obtain a DatabaseConfig:
import slick.backend.DatabaseConfig, slick.driver.JdbcProfile
val dbConfig = DatabaseConfig.forConfig[JdbcProfile]("<db_name>")
And then import the api from it:
import dbConfig.driver.api._
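For completeness, here is a sketch of what the matching configuration file could look like, following the Slick 3.x DatabaseConfig format (the entry names, URLs, and credentials below are assumptions for illustration, not from the original answer):
# application.conf (hypothetical entries)
pg {
  driver = "slick.driver.PostgresDriver$"
  db {
    driver = "org.postgresql.Driver"
    url = "jdbc:postgresql://localhost/mydb"
    user = "myuser"
    password = "secret"
  }
}
h2 {
  driver = "slick.driver.H2Driver$"
  db {
    driver = "org.h2.Driver"
    url = "jdbc:h2:mem:test"
    connectionPool = disabled
  }
}
In production you would then call DatabaseConfig.forConfig[JdbcProfile]("pg") and in unit tests DatabaseConfig.forConfig[JdbcProfile]("h2"); the DAO code that only imports dbConfig.driver.api._ stays unchanged.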
I am using Telosys Tools for code generation. It's a very nice tool and has helped me a lot.
But there is one problem: it provides database schema information that I can access in templates (the templates are Velocity templates), which is good, but how can I get the selected entity's data from the database? I cannot find any way to get the data of the selected table.
Please provide a solution if there is one, or suggest an alternate way to do it.
Thank you!
Telosys Tools is designed to retrieve the model from the database, not the data stored in the tables.
But it lets you create your own specific tooling classes usable in the templates, so it is possible to create a specific Java class to retrieve the data from the database.
There's an example of this kind of specific class in the "database-doc" bundle: https://github.com/telosys-tools/database-doc-bundle-TT210 (in the classes folder).
To simplify loading, the simplest approach is to create the class in the "default package" (no Java package).
NB: the problem is that the jar containing the JDBC driver is not accessible to the generator's class-loader, so you will have to use a specific class-loader and connect directly with the JDBC driver.
Here is an example: https://gist.github.com/l-gu/ed0c8726807e5e8dd83a
Don't use it as-is (the connection is never closed), but it can be easily adapted.
All,
I'm importing a group of SQLAlchemy classes from a separate file. They define the tables in my DB (inheriting from declarative_base()), and were originally located in the same file as my engine and metadata creation.
Since I have quite a few tables, and each of them is complex, I don't want them located in the same file I'm using them in. It makes working in the file more unwieldy, and I want a clearer delineation, since the classes document the current schema.
I refactored them into their own file, and suddenly the metadata does not find them automatically. Following this link, I found that it was because my main file declares a Base:
from tables import address, statements
Base = declarative_base()
metadata = MetaData()
Base.metadata.create_all()
And so does my tables file:
Base = declarative_base()
class address(Base):
...
So, as far as I can tell, they get separate "Bases", which is why the metadata can't find and create the declared tables. I've done some googling and it looks like this should be possible, but there isn't an obvious way to go about it.
How do I import tables defined in a separate file?
Update:
I tried this, and it sort of functions.
In the tables file, declare a Base for the table classes to import:
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
Then in the main file, import the preexisting base and give its metadata to a new Base:
from sqlalchemy.ext.declarative import declarative_base
from tables import Base as tableBase
Base = declarative_base(metadata=tableBase.metadata)
After some more testing, I've found this approach leaves out important information. I've gone back to one file with everything in it, since that does work correctly. I'll leave the question open in case someone can come up with an answer that does work correctly, or can point to the appropriate docs.
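For reference, here is a minimal sketch of the layout that is usually recommended, where declarative_base() is called exactly once and every other module imports that single Base (the column and engine URL below are made-up placeholders, not from the question):
# tables.py -- the one and only Base lives next to the table classes
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    street = Column(String)  # hypothetical column
# main.py -- import the same Base instead of creating a second one
from sqlalchemy import create_engine
from tables import Base, address
engine = create_engine('sqlite:///app.db')  # assumption: any engine URL works
Base.metadata.create_all(engine)  # sees address via the shared metadata
Because both files share one Base (and therefore one MetaData), create_all() finds every declared table; there is no need to copy metadata between two separate declarative_base() calls.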
I am using a custom database (MongoDB) with TG 2.1, and I am wondering where the proper place to store the PyMongo connection/database instances would be.
E.g., at the moment they are created inside my inherited instance of AppConfig. Is there a standard location to store this? Would putting the variables into project.model.__init__ be the best location, given that under SQLAlchemy the database is commonly retrieved via:
from project.model import DBSession, metadata
Anyway, just curious what the best practice is.
As of TurboGears 2.1.3, MongoDB support is integrated via the Ming ORM. I would look at a quickstarted project using the --ming option to get best practices if you want to do some customization: http://turbogears.org/2.1/docs/main/Ming.html
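If you stay on raw PyMongo instead of Ming, one common pattern is to mirror the SQLAlchemy convention and expose the handles from project.model. This is a sketch under assumptions, not an official TurboGears API: the URI, database name, and module layout are made up, and very old PyMongo releases spell the client class Connection rather than MongoClient.
# project/model/__init__.py -- hypothetical layout
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017/')  # assumed local MongoDB
db = client['project_db']                           # hypothetical database name
Controllers can then do from project.model import db, just like the from project.model import DBSession, metadata import shown above for SQLAlchemy.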