I'm working with neo4j-community-3.4.7 and Eclipse Oxygen. I've installed the Neo4j Java driver and added the Neo4j Community Edition lib files to my Eclipse build path. I'm trying to import a CSV file located in the import folder of the neo4j-community directory by passing a Cypher query through a transaction. Passing other Cypher queries this way has successfully created nodes and relationships in the database. However, when I try to use "USING PERIODIC COMMIT" to load a large CSV, I get an error message saying "cannot use periodic commit on a non-updating query."
I've attached my code with the other lines of the transaction commented out; those should work once the CSV file is successfully loaded.
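For reference, a minimal sketch of the workaround I'm considering: since "USING PERIODIC COMMIT" is rejected inside an explicit transaction, the LOAD CSV statement can be sent as an auto-commit query via session.run() instead of tx.run(). The URI, credentials, file name, and label below are placeholders, not my actual code:

import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;

public class LoadCsvExample {
    public static void main(String[] args) {
        Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
        try (Session session = driver.session()) {
            String cypher = "USING PERIODIC COMMIT 1000 "
                    + "LOAD CSV WITH HEADERS FROM 'file:///data.csv' AS row "
                    + "CREATE (:Record {name: row.name})";
            // Auto-commit query: not wrapped in beginTransaction() or
            // writeTransaction(), which is what periodic commit requires.
            session.run(cypher).consume();
        }
        driver.close();
    }
}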
Entity Framework / .NET Core error when adding migrations
I am running a migration from the Package Manager Console, after setting up the dependencies in the corresponding files of my .NET Core based project, and I receive the error below after running this command:
Add-Migration "InitialCreate"
Error received:
Unable to create an object of type 'PaymentDetailContext'. For the different patterns supported at design time, see https://go.microsoft.com/fwlink/?linkid=851728
I have tried different ways to troubleshoot this issue in order to create the migration file: I checked the corresponding files for spelling mistakes in the connection string, and I added the corresponding packages via the Browse option and installed them. I am logging in with Windows authentication; kindly help me resolve this issue in Visual Studio 2022.
I am expecting this error to be resolved after executing the command, thereby generating the migration script inside the Web API project.
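One pattern commonly suggested for this design-time error is a dedicated factory that tells the tooling how to construct the context. A minimal C# sketch, assuming PaymentDetailContext exposes a constructor taking DbContextOptions<PaymentDetailContext>; the connection string is a placeholder for the real Windows-authentication one:

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

// Hypothetical factory; adjust names and connection string to the project.
public class PaymentDetailContextFactory : IDesignTimeDbContextFactory<PaymentDetailContext>
{
    public PaymentDetailContext CreateDbContext(string[] args)
    {
        var options = new DbContextOptionsBuilder<PaymentDetailContext>()
            .UseSqlServer("Server=.;Database=PaymentDetailDB;Trusted_Connection=True;")
            .Options;

        return new PaymentDetailContext(options);
    }
}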
I'm getting a provision error when using PostgreSQL over JDBC in GraphDB. I created a connection between PostgreSQL and GraphDB via a virtual repository, and I made an OBDA file which includes the RDF mapping information.
Expected: Normally I can browse the PostgreSQL data hierarchy and so on in GraphDB.
Error: The connection was fine and the repository was created successfully, but when I tried to browse the data, I got this error:
Actions that I took: I went into /opt/graphdb-free/app/lib/plugins/dependencies-plugin/dependencies-plugin.jar to modify the dependency parameter, but it didn't change anything. I also checked the syntax of the OBDA file and I don't see anything wrong there.
Has anyone been through this? Was I in the right place to modify the dependency, or is it something else?
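As a further sanity check, here is a minimal sketch for verifying the same PostgreSQL JDBC connection outside GraphDB (URL and credentials are placeholders for the ones the virtual repository uses; the PostgreSQL JDBC driver must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;

public class PgCheck {
    public static void main(String[] args) throws Exception {
        // Same placeholder URL/credentials as configured in the repository
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            System.out.println("Connected to: "
                    + c.getMetaData().getDatabaseProductVersion());
        }
    }
}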
I've put together a PowerShell script to deploy some reports and corresponding datasets and datasources, as well as link the embedded dataset references to the shared datasets, but I'm getting stuck at specifying the shared datasource for the shared dataset.
Initially I had to rename the .rds to .rsds for it to show up as a selectable datasource via the SharePoint UI. However, when I set the DataSource for the DataSet, either programmatically or manually via the UI, I get an error saying the schema is wrong. So I ran Build->Deploy from BIDS and then downloaded the .rsds to see the difference. It turns out the locally built version looks like this:
<?xml....?>
<RptDataSource...>
  <ConnectionProperties>
    <Extension>SHAREPOINTLIST</Extension>
    <ConnectionString>...my sharepoint site url...</ConnectionString>
    <IntegratedSecurity>true</IntegratedSecurity>
  </ConnectionProperties>
  <DataSourceID>...some guid...</DataSourceID>
</RptDataSource>
whereas a Build->Deploy from BIDS produces this on SharePoint:
<?xml....?>
<DataSourceDefinition>
  <Extension>SHAREPOINTLIST</Extension>
  <ConnectionString>...my sharepoint site url...</ConnectionString>
  <CredentialRetrieval>Integrated</CredentialRetrieval>
  <Enabled>True</Enabled>
</DataSourceDefinition>
So, is there a built-in way (either in BIDS or an existing PowerShell module/script) to generate the deployed format when building locally, rather than running a Deploy? Or am I going to have to run some XSLT to transform it (or just copy an existing source file, replace the connection string since it's the only thing that matters, and rename it) as a post-build step?
Roighto! I found that there's a way to create a datasource via the ReportService2010.asmx service, so I'm using that and ignoring the .rds written when building the project in BIDS.
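A rough sketch of the call (site URL, folder, and item names are placeholders, and the proxy types come from the namespace New-WebServiceProxy generates):

$uri = "http://mysharepoint/_vti_bin/ReportServer/ReportService2010.asmx?wsdl"
$rs  = New-WebServiceProxy -Uri $uri -UseDefaultCredential
$ns  = $rs.GetType().Namespace

# Build the definition in the schema the server expects
$def = New-Object "$ns.DataSourceDefinition"
$def.Extension           = "SHAREPOINTLIST"
$def.ConnectString       = "http://mysharepoint/sites/mysite"   # placeholder site URL
$def.Enabled             = $true
$def.EnabledSpecified    = $true
$def.CredentialRetrieval = [Enum]::Parse([type]"$ns.CredentialRetrievalEnum", "Integrated")

# CreateDataSource(name, parent folder, overwrite, definition, properties)
$rs.CreateDataSource("MyDataSource.rsds",
                     "http://mysharepoint/sites/mysite/DataSources",
                     $true, $def, $null)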
I've been using Liquibase for several years and it's extremely helpful for me as an application developer to keep source code and database in sync, so thank you to all contributors for this tool.
During my daily work, I usually start Liquibase from the command line in order to test the changesets and database operations. If everything is wired right, I start my application (Spring Boot) and the Liquibase setup within the application performs all those sync steps. This setup works perfectly unless my changelog file contains changesets with loadData that populate data from CSV files into the database. Every application start then fails with liquibase.exception.ValidationFailedException: Validation Failed:
change sets check sum
The reason seems to be the different file locations for the CSV files referenced in loadData, which are part of the checksum computation. If started from the application, the changeset info looks like this:
classpath:liquibase/changelog.xml: classpath:liquibase/changelog.xml::loadDefaultRolePermissions::dominik
But if started from the command line, there is no way to use classpath resources, and the changeset info looks like this:
liquibase: src/main/resources/liquibase/changelog.xml: src/main/resources/liquibase/changelog.xml::loadDefaultRolePermissions::dominik
The two values differ and lead to different checksums.
If you look into liquibase.integration.commandline.Main.java, there is no classpath resource accessor used:
FileSystemResourceAccessor fsOpener = new FileSystemResourceAccessor();
CommandLineResourceAccessor clOpener = new CommandLineResourceAccessor(classLoader);
CompositeResourceAccessor fileOpener = new CompositeResourceAccessor(fsOpener, clOpener);
from liquibase.integration.commandline.Main.java
Is there any way to make Liquibase interoperable between command-line and application startup runs?
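One workaround sketch I considered (untested; URL and credentials are placeholders): point the CLI's --classpath at the resources directory and reference the changelog by the same relative path, so relative resource paths resolve the same way as on the application classpath:

liquibase --classpath=src/main/resources \
          --changeLogFile=liquibase/changelog.xml \
          --url=jdbc:postgresql://localhost:5432/mydb \
          --username=user \
          --password=secret \
          update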
Thanks in advance
Dominik
RESOLVED by updating from liquibase 3.3.1 to 3.5.3
I am new to Phoenix. I have connected Apache Phoenix to HBase and it is all going well through the terminal, but I need to perform some Java operations on the tables. Can you help me connect Phoenix to the Eclipse IDE so that I can run operations from Phoenix against the HBase table and update that table in HBase?
This is how I set it up using Eclipse Data Tools.
Install Eclipse Data Tools
Create a Generic JDBC 1.x driver in Window > Preferences > Data Management > Connectivity > Driver Definitions:
Select Add
In the JAR list, point to phoenix-<version>-client.jar
Next, in the properties tab, select the URL for the connection. The format is:
jdbc:phoenix:zookeepers:/hbase-unsecure
The ZooKeeper node in my case is /hbase-unsecure because I set it up in Ambari and that is the default there. It could be just /hbase in your setup; this is set in hbase-site.xml.
For the driver class, I selected org.apache.phoenix.jdbc.PhoenixDriver
Next step, open Data Source Explorer view and connect to your database.
Now, when you open a SQL file, specify your connection details, i.e., the Generic JDBC 1.x type, the connection name, and the database.
To run your SQL script, simply right-click in the editor and select "Execute current text".
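If you also want to run queries from Java code rather than the SQL editor, here is a minimal JDBC sketch using the same URL format as the Data Tools connection above (ZooKeeper host, port, and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixQuery {
    public static void main(String[] args) throws Exception {
        // phoenix-<version>-client.jar must be on the classpath
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181:/hbase-unsecure");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}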
Behdad
ExaPackets LLC