Import CSV file into PostgreSQL using Scala

I am new to Scala and need to write code that can import a CSV file into a PostgreSQL table using Scala. Can anyone help with this?
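One common approach is to stream the file into Postgres through the JDBC driver's CopyManager, which wraps the server-side COPY command. Below is a minimal sketch, assuming the PostgreSQL JDBC driver is on the classpath and that a table whose columns match the CSV already exists; the connection URL, credentials, table name, and file path are all placeholders:
import java.io.FileReader
import java.sql.DriverManager
import org.postgresql.copy.CopyManager
import org.postgresql.core.BaseConnection

object CsvToPostgres {
  def main(args: Array[String]): Unit = {
    // placeholder connection details -- adjust for your setup
    val conn = DriverManager.getConnection(
      "jdbc:postgresql://localhost:5432/mydb", "user", "password")
    try {
      // CopyManager streams the file through Postgres' COPY,
      // which is much faster than row-by-row INSERTs
      val copy = new CopyManager(conn.asInstanceOf[BaseConnection])
      val reader = new FileReader("/path/to/data.csv")
      val rows = copy.copyIn(
        "COPY mytable FROM STDIN WITH (FORMAT csv, HEADER true)", reader)
      reader.close()
      println(s"Imported $rows rows")
    } finally {
      conn.close()
    }
  }
}
Alternatively, you can insert row by row with a PreparedStatement, but COPY is the usual choice for bulk loads.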

Related

Can we paste multiline code into the PySpark shell

I am trying to copy code from PyCharm into a pyspark shell. Even a simple copy of two import statements leads to an error. Please see the code snippet below.
Can someone please point out what I am doing wrong here? It would be very helpful if I could copy whole snippets of code into the shell (e.g. pasting the contents of an entire Python file). Is this meant to work?
>>> import subprocess
import pickle
File "<stdin>", line 1
import pickle
^
SyntaxError: multiple statements found while compiling a single statement
It sounds like you want to run this:
PYTHONSTARTUP=code.py pyspark
It will run your script when the pyspark shell starts.
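For example, assuming the statements you want to paste are saved in a file named code.py in the directory you launch the shell from:
# code.py -- executed by the shell on startup, before the prompt appears
import subprocess
import pickle
PYTHONSTARTUP=code.py pyspark
Both imports are then already available in the interactive session.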
Usually when I paste into pyspark the issue is the whitespace.

How to export Spark data frames into Excel sheets in PySpark

I have an output data frame that I want to export to an Excel sheet, so I used xlsxwriter to export it, but when I import the module it shows an error saying No module named xlsxwriter. Is there any alternative library for converting a data frame to Excel sheets?
Note: I am using Databricks Community Edition with PySpark.
There is some helpful information here:
https://xlsxwriter.readthedocs.io/getting_started.html
Try to import the package first; if it isn't available, install it and restart your notebook:
pip install --user xlsxwriter

# note the lowercase module name
import xlsxwriter

# create a workbook with one worksheet, write a cell, and save
workbook = xlsxwriter.Workbook('hello.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write('A1', 'Hello world')
workbook.close()
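If the end goal is just to get a Spark data frame into an Excel file, another option is to convert it to pandas and use DataFrame.to_excel, which can use xlsxwriter as its engine. A minimal sketch, assuming df is your Spark data frame and that it is small enough to fit in driver memory:
# toPandas() collects the whole frame to the driver, so keep it small
pdf = df.toPandas()
pdf.to_excel("output.xlsx", sheet_name="Sheet1", engine="xlsxwriter", index=False)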

Import a .db file into PostgreSQL using the terminal

I am on Ubuntu and have PostgreSQL 13 installed. I have a file named order.db, which is actually a database with a few tables in it. I want to import the whole of order.db into Postgres, with the same database name and of course the data. Is there any way to do it using the terminal? Any command?
The order.db file was created with sqlite3 working together with C++; order.db had nothing to do with PostgreSQL in the earlier phases of my project. I just want this order.db file to be imported as a database into my Postgres.
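One tool that handles this in a single step is pgloader, which reads an SQLite file and recreates its tables and data in Postgres. A sketch, assuming pgloader is installed; the target database name (order_db here) is a placeholder:
createdb order_db
pgloader ./order.db postgresql:///order_db
pgloader infers the column types from the SQLite schema and creates the matching tables before copying the rows.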

Neo4j Import Tool for Dummies

I am new to neo4j and have "0" coding background (although I am trying to learn some). I understand the basic functionality and am also able to import nodes and relationships using LOAD CSV. However, I absolutely cannot make the neo4j-admin import tool work.
I created a new database, placed the simplest possible CSV file in the import folder, and tried the following (I will have to explain in the simplest terms, so don't laugh :))
The name of the file is test.csv. Its content:
PropertyTest,:LABEL
proptest,TEST
I tried running the neo4j-import executable by opening it directly; a black window opens up and immediately disappears.
I then tried:
bin/neo4j-admin import --id-type=STRING \
--nodes:TEST=test.csv \
--nodes="test.csv" \
Could someone please explain to me in the simplest terms what the steps would be to import this?
Thank you.
The import folder under your Neo4j installation is fine to use but just bear in mind that the dbms.directories.import setting in neo4j.conf is just for the LOAD CSV command, not for neo4j-admin import.
Since your current directory in the command prompt is the bin folder, running the import command with import/movies.csv implies that the CSV file is in a folder called import under the current directory, i.e. under the bin folder.
If you run the command this way it should find the CSV files:
neo4j-admin import --nodes=../import/movies.csv --nodes=../import/actors.csv --relationships=../import/roles.csv
.. means the parent directory, so running the command this way goes up to the parent directory and then into the import directory under it.
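Applied to your test.csv (which already carries its label in the :LABEL column), the command run from the bin folder would look something like this; note this is a sketch, and that the file will also need an :ID column if you later want to connect these nodes with relationships:
neo4j-admin import --id-type=STRING --nodes=../import/test.csv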

Spark SQL: set the avroSchema option using the SQL API

I'm trying to read Avro data from Spark SQL using the SQL API.
Example:
CREATE TEMPORARY TABLE episodes
USING com.databricks.spark.avro
OPTIONS (path "/tmp/episodes.avro")
Is it possible to set the avroSchema option (an .avsc file) as in the Scala API?
Example:
spark
.read
.format("com.databricks.spark.avro")
.option("avroSchema", new Schema.Parser().parse(new File("user.avsc")).toString)
.load("/tmp/episodes.avro").show()
I think my answer might be helpful to someone working on a local machine, or to learners new to PySpark. If you are working in the PyCharm IDE, it doesn't offer you a way to include Scala or Java dependencies, and spark-avro doesn't come bundled with Apache Spark, so we need to add it through Spark's configuration.
Go to the folder where Spark is installed, open conf/spark-defaults.conf, and add the line below at the bottom of the file:
spark.jars.packages org.apache.spark:spark-avro_2.11:2.4.5
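As for the original question: entries in the SQL OPTIONS clause are passed through to the data source just like .option() calls, so it should be possible to set avroSchema there as well, with the contents of the .avsc file inlined as a string. A sketch (untested; the schema string is a placeholder):
CREATE TEMPORARY TABLE episodes
USING com.databricks.spark.avro
OPTIONS (
  path "/tmp/episodes.avro",
  avroSchema '{"type": "record", "name": "episodes", "fields": [...]}'
)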