Since GraphQL queries are text, I'd like to just store them in a separate file, but I'm not finding any language support. .gql files are for schemas, not queries.
What's the VS Code support for GraphQL queries?
What file extension is the convention if one is going to store queries in a file?
Is query syntax highlighting / IntelliSense (based on the schema) possible?
Can I import the query?
What I tried:
I put the query in a .gql file, no luck.
The file extension is .graphql, and the loading can be handled by webpack.
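For illustration, here's a sketch of that convention; the file name and query below are made up, and graphql-tag/loader is just one common webpack loader that can compile such a file so it can be imported as a parsed query document:

# getUser.graphql (hypothetical example query)
query GetUser($id: ID!) {
  user(id: $id) {
    name
    email
  }
}

GraphQL editor extensions for VS Code can then give such files syntax highlighting, and schema-aware IntelliSense once they are pointed at your schema.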
I am trying to implement full-text search (PostgreSQL) in Kotlin using Exposed. I have my queries in raw SQL, but I cannot write queries containing to_tsvector() or to_tsquery() in Kotlin, and I couldn't find anything similar anywhere. After a bit of reading, I understood that complex queries could be written as raw SQL here (I couldn't get that working either), but that opens up the chance of SQL injection. Is there a way to tackle this?
I am not posting any code since what I've tried is just trial and error; the methods are not even available in the IDE. Any help is appreciated. My DB is PostgreSQL.
SQLAlchemy 1.4 ORM using an AsyncSession, Postgres backend, Python 3.6
I am trying to create a custom aggregate function using the SQLAlchemy ORM. The SQL query would look something like:
COUNT({group_by_function}), {group_by_function} AS {aggregated_field_name}
I've been searching for information on this.
I know this can be created internally within the Postgres db first, and then used by SA, but this will be problematic for the way the codebase I'm working with is set up.
I know SQLAlchemy-Utils has functionality for this, but I would prefer not to use an external library.
The most direct post on this topic I can find says "The creation of new aggregate functions is backend-dependent, and must be done directly with the API of the underlying connection." But that is from quite a few years ago, and I thought there might have been updates since.
Am I missing something in the SA ORM docs that discusses this or is this not supported by SA, full stop?
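For concreteness, here is a minimal sketch of the "create it in Postgres first, then use it from SA" route mentioned above. Everything here is illustrative: product is a made-up aggregate name, Model/metadata stand in for your mapped class and its MetaData, and numeric_mul is a built-in Postgres function.

from sqlalchemy import event, func, select, text

# Run the DDL whenever the schema is created (metadata = Model.metadata).
@event.listens_for(metadata, "after_create")
def _create_product_aggregate(target, connection, **kw):
    # "product" multiplies values via Postgres's built-in numeric_mul
    connection.execute(text(
        "CREATE AGGREGATE product(numeric) "
        "(SFUNC = numeric_mul, STYPE = numeric)"
    ))

# func is generic, so the new aggregate is callable like any built-in one:
stmt = select(Model.id, func.product(Model.number)).group_by(Model.id)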
You can try something like this query:
from sqlalchemy import func  # SQL function generator (sum, count, ...)

# One row per Model.id, with the per-group sum labelled total_sum
query = (
    db.session.query(Model)
    .with_entities(Model.id, func.sum(Model.number).label('total_sum'))
    .group_by(Model.id)
)
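Calling query.all() on this returns (id, total_sum) tuples, one per group.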
I am new to Accumulo. We are migrating from MongoDB to Accumulo, and we got a file with all the table information from MongoDB. Is there any option available in Accumulo to import the file and create the tables on its own? From the API documentation I learned that we can create tables through the shell and also programmatically. Can anyone tell me whether there is an import option available for Accumulo to import the file and create the tables?
There is no native way to do that. MongoDB deals with JSON documents, which have an entirely different layout/schema than the way Accumulo does things. You could try using something like http://gora.apache.org/index.html, but that requires you to change the format of your MongoDB data. If you can't do that, then you'll more than likely have to do this programmatically yourself.
I am looking at this example:
https://github.com/playframework/play-slick/tree/master/samples/computer-database
https://github.com/playframework/play-slick/blob/master/samples/computer-database/app/dao/CompaniesDAO.scala
How can I generate the SQL (DDL) file for applying to the database for this example?
I tried this in application.conf:
ebean.default="models.daos.*"
This didn't help. Also, I don't see any examples within that GitHub repo, https://github.com/playframework/play-slick/blob/master/samples/, which generate SQL from the models.
Is there any tool to quickly convert DB2 table rows into a collection of XML documents that we can load into MarkLogic?
DB2 supports the SQL/XML publishing extensions that were introduced in SQL:2003. These functions include XMLSERIALIZE, XMLELEMENT, XMLATTRIBUTES, and XMLFOREST, and are easily added to a SQL SELECT statement to produce a simple, well-formed XML document for each row in the result set. By writing queries that retrieve the table names and column layouts from DB2's catalog views, it is possible to automate the creation of the XML-publishing SELECT statements for a large number of tables.
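For illustration, a publishing query of that shape against the EMPLOYEE table from DB2's SAMPLE database (the table and column names come from that sample schema) could look like this, producing one small XML document per row:

SELECT XMLSERIALIZE(CONTENT
         XMLELEMENT(NAME "employee",
           XMLATTRIBUTES(e.empno AS "id"),
           XMLFOREST(e.firstnme AS "firstName",
                     e.lastname AS "lastName",
                     e.salary AS "salary"))
         AS CLOB(1M))
FROM employee e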
One way of doing this would be to use the MLSQL toolkit (http://developer.marklogic.com/code/mlsql). It allows accessing relational databases from within your XQuery code in MarkLogic. I'm not sure what the returned data actually looks like, but it should be easy to process it within XQuery and insert your data as XML into MarkLogic.
Just make sure not to try to load a million records in one statement; instead, spawn batches of, let's say, 1,000 records at a time. Spawning will also allow the work to be handled by multiple threads, so it should be faster for that reason too.
HTH!
Do you need to stream from DB2 to MarkLogic? Or can you temporarily dump all the documents to an intermediary filesystem and then read them in? If you can dump, then simply use some DB2 tooling (like #Fred's answer above) to export the rows to a bunch of XML documents in a filesystem, and use one of the many methods for reading a directory full of XML files into MarkLogic (such as Information Studio (UI or APIs), RecordLoader, and so on).
If you don't want to store them in the filesystem as an intermediary, then you could write an Information Studio plugin for MarkLogic that will pull out each row and insert a document into MarkLogic. You'd likely need some web service or REST endpoint that the plugin could call to extract the document data from DB2.
Alternatively, I suspect you could use the DB2 tooling (described by #Fred) that will let you execute some code per row of your table. If you can do that in Java (or .NET), then pull in the MarkLogic XCC APIs, which will give you the ability to write documents into MarkLogic.