FATAL: database does not exist - postgresql

I'm having a difficult time setting up Django with Postgres.
Here is my settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': 'localhost',
        'NAME': 'collector',
        'USER': 'collector_user',
        'PASSWORD': 'collector'
    }
}
I created the user collector_user with the password collector, following the Postgres "First steps" page, and also created the collector schema:
postgres=# select nspname from pg_catalog.pg_namespace;
nspname
--------------------
pg_toast
pg_temp_1
pg_toast_temp_1
pg_catalog
public
information_schema
collector
(7 rows)
And here is what Django has to say about it:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/migrate.py", line 89, in handle
executor = MigrationExecutor(connection, self.migration_progress_callback)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/executor.py", line 20, in __init__
self.loader = MigrationLoader(self.connection)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/loader.py", line 49, in __init__
self.build_graph()
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/loader.py", line 176, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/recorder.py", line 65, in applied_migrations
self.ensure_schema()
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/recorder.py", line 52, in ensure_schema
if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/base/base.py", line 231, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/base/base.py", line 204, in _cursor
self.ensure_connection()
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/postgresql/base.py", line 175, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: FATAL: database "collector" does not exist
I also tried dropping the DB and the user and creating them again, but that didn't work either. What might be the problem? Is there something I didn't do? I'm new to Postgres; I normally work with MySQL.

In PostgreSQL, a schema is not a database. You created a schema named 'collector' inside the database named 'postgres', but you are trying to connect to a database named 'collector', which does not exist because you never created it. You should create a database named 'collector' and just leave the schema alone (i.e. let it default to 'public').
Creating a new schema in the 'postgres' database is bad practice anyway, as that database should generally be reserved for system maintenance tasks.
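For example, from a superuser psql session (a minimal sketch, assuming the collector_user role already exists as described above):
postgres=# CREATE DATABASE collector OWNER collector_user;  -- matches NAME and USER in settings.py
or from the shell:
createdb -O collector_user collector
After that, the NAME setting above points at a real database and ./manage.py migrate should be able to connect.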

Related

Adding default to binary type in Ecto for Postgres [Elixir]

I'm having a frustrating issue trying to set a default during an Ecto migration.
In the migration, the code looks like the following:
def encode(binary) do
  "\\x" <> Base.encode16(binary, case: :lower)
end

Logger.debug("admin.id = #{inspect admin.id}")
Logger.debug("admin.id = #{inspect UUID.string_to_binary!(admin.id)}")
Logger.debug("admin.id = #{inspect encode(admin.id)}")

alter table(:questions) do
  add :owner_id, references(:users, on_delete: :nothing, type: :binary_id), null: false, default: admin.id
end
You can see the attempts I tried in the Logger calls above.
I get this error:
default values are interpolated as UTF-8 strings and cannot contain null bytes. `<<209, 241,
149, 133, 44, 81, 70, 164, 181, 120, 214, 0, 253, 191, 198, 214>>` is invalid. If you want
to write it as a binary, use "\xd1f195852c5146a4b578d600fdbfc6d6", otherwise refer to
PostgreSQL documentation for instructions on how to escape this SQL type
Any help would be great, thanks.
When using :binary_id with Postgres, Ecto expects you to pass UUIDs as strings. Your error message implies you tried to pass it as a binary, so you should first convert it to a string:
add :owner_id, references(:users, on_delete: :nothing, type: :binary_id), null: false, default: UUID.binary_to_string!(admin.id)

pyspark DataFrame selectExpr is not working for more than one column

We are trying Spark DataFrame selectExpr; it works for one column, but when I add more than one column it throws an error.
The first statement below works; the second one throws the error.
Code sample:
df1.selectExpr("coalesce(gtr_pd_am,0 )").show(2)
df1.selectExpr("coalesce(gtr_pd_am,0),coalesce(prev_gtr_pd_am,0)").show()
Error log:
>>> df1.selectExpr("coalesce(gtr_pd_am,0),coalesce(prev_gtr_pd_am,0)").show()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.6.5.0-292/spark2/python/pyspark/sql/dataframe.py", line 1216, in selectExpr
jdf = self._jdf.selectExpr(self._jseq(expr))
File "/usr/hdp/2.6.5.0-292/spark2/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
File "/usr/hdp/2.6.5.0-292/spark2/python/pyspark/sql/utils.py", line 73, in deco
raise ParseException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.ParseException: u"\nmismatched input ',' expecting <EOF>(line 1, pos 21)\n\n== SQL ==\ncoalesce(gtr_pd_am,0),coalesce(prev_gtr_pd_am,0)\n---------------------^^^\n"
Check this:
df1.selectExpr("coalesce(gtr_pd_am,0)", "coalesce(prev_gtr_pd_am,0)").show()
You need to specify the columns individually, as separate string arguments, rather than as one comma-separated expression string.
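If you also want readable column names in the result, each expression can carry its own alias (a minimal sketch; the alias names are only illustrative):
# Pass each expression as a separate argument to selectExpr; the AS aliases are optional.
df1.selectExpr(
    "coalesce(gtr_pd_am, 0) AS gtr_pd_am",
    "coalesce(prev_gtr_pd_am, 0) AS prev_gtr_pd_am"
).show()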

Join two tables using pyspark hive context

I am seeing the error below when joining two Hive tables using the PySpark HiveContext.
error:
File "/usr/hdp/2.3.4.7-4/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 552, in sql
File "/usr/hdp/2.3.4.7-4/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/hdp/2.3.4.7-4/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 36, in deco
File "/usr/hdp/2.3.4.7-4/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o41.sql.
: org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
EX:
lsf.registerTempTable('temp_table')
out = hc.sql(
"""INSERT OVERWRITE TABLE AAAAAA PARTITION (day ='2017-09-20')
SELECT tt.*,ht.id
FROM temp_table tt
JOIN hive_table ht
ON tt.id = ht.id
""")
Also, how do I parameterize day?
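On parameterizing day: hc.sql() just takes a string, so one common approach (a sketch, not part of the original post) is to build the statement with Python string formatting before passing it in:
# Hypothetical: the partition date could come from a command-line argument or be computed.
day = '2017-09-20'
query = """
INSERT OVERWRITE TABLE AAAAAA PARTITION (day = '{day}')
SELECT tt.*, ht.id
FROM temp_table tt
JOIN hive_table ht
  ON tt.id = ht.id
""".format(day=day)
out = hc.sql(query)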

RelStorage zodbpack error

Whenever I try to run zodbpack I get the error:
psycopg2.IntegrityError: null value in column "zoid" violates not-null constraint
Any ideas what is causing this and how to fix it?
RelStorage 1.5.1, Postgres 8, Plone 4.2.1.1
2012-12-03 13:18:03,485 [zodbpack] INFO Opening storage (RelStorageFactory)...
2012-12-03 13:18:03,525 [zodbpack] INFO Packing storage (RelStorageFactory).
2012-12-03 13:18:03,533 [relstorage] INFO pack: beginning pre-pack
2012-12-03 13:18:03,533 [relstorage] INFO pack: analyzing transactions committed Mon Nov 26 12:31:54 2012 or before
2012-12-03 13:18:03,536 [relstorage.adapters.packundo] INFO pre_pack: start with gc enabled
2012-12-03 13:18:03,759 [relstorage.adapters.packundo] INFO analyzing references from objects in 97907 new transaction(s)
2012-12-03 13:18:03,761 [relstorage.adapters.scriptrunner] WARNING script statement failed: '\n INSERT INTO object_refs_added (tid)\n VALUES (%(tid)s)\n '; parameters: {'tid': 0L}
2012-12-03 13:18:03,761 [relstorage.adapters.packundo] ERROR pre_pack: failed
Traceback (most recent call last):
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/packundo.py", line 486, in pre_pack
conn, cursor, pack_tid, get_references)
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/packundo.py", line 580, in _pre_pack_with_gc
self.fill_object_refs(conn, cursor, get_references)
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/packundo.py", line 387, in fill_object_refs
self._add_refs_for_tid(cursor, tid, get_references)
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/packundo.py", line 459, in _add_refs_for_tid
self.runner.run_script_stmt(cursor, stmt, {'tid': tid})
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/scriptrunner.py", line 52, in run_script_stmt
cursor.execute(stmt, generic_params)
IntegrityError: null value in column "zoid" violates not-null constraint
Traceback (most recent call last):
File "zodbpack.py", line 86, in <module>
main()
File "zodbpack.py", line 78, in main
skip_prepack=options.reuse_prepack)
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/storage.py", line 1114, in pack
adapter.packundo.pre_pack(tid_int, get_references)
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/packundo.py", line 486, in pre_pack
conn, cursor, pack_tid, get_references)
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/packundo.py", line 580, in _pre_pack_with_gc
self.fill_object_refs(conn, cursor, get_references)
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/packundo.py", line 387, in fill_object_refs
self._add_refs_for_tid(cursor, tid, get_references)
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/packundo.py", line 459, in _add_refs_for_tid
self.runner.run_script_stmt(cursor, stmt, {'tid': tid})
File "/usr/local/lib64/python2.6/site-packages/RelStorage-1.5.1-py2.6.egg/relstorage/adapters/scriptrunner.py", line 52, in run_script_stmt
cursor.execute(stmt, generic_params)
psycopg2.IntegrityError: null value in column "zoid" violates not-null constraint
Running zodbpack.py with GC disabled (thanks Martijn) worked and packed the database down from 3.8 GB to 887 MB.
So my zodbpack-conf.xml looks like:
<relstorage>
pack-gc false
<postgresql>
dsn dbname='zodb' user='user' host='host' password='password'
</postgresql>
</relstorage>
and i run it with:
python zodbpack.py -d 7 zodbpack-conf.xml
Note: after the pack completes you still need to vacuum the database to get the space back. I run this from the command line as:
psql zodb postgresuser
zodb=# SELECT pg_database_size('zodb');
zodb=# vacuum full;
zodb=# SELECT pg_database_size('zodb');
Interestingly, when I run the pack command from the ZMI Control Panel, I still get the same error as before:
Site Error
An error was encountered while publishing this resource.
Error Type: IntegrityError
Error Value: null value in column "zoid" violates not-null constraint
So I am assuming the ZMI pack runs with GC enabled, and that there is still an issue with null values in my site database...
Should I try running some SQL to clean it out? If an object has no 'zoid' value, is it effectively inaccessible junk? A SQL example for this would be great too :)

Cassandra errors when trying the new CQL 3

I downloaded Cassandra 1.1.1 and launched cqlsh using CQL version 3.
I tried to create a new column family:
CREATE TABLE stats (
    pid blob,
    period int,
    targetid blob,
    sum counter,
    PRIMARY KEY (pid, period, targetid)
);
But I got this:
Traceback (most recent call last):
File "./cqlsh", line 908, in perform_statement
self.cursor.execute(statement, decoder=decoder)
File "./../lib/cql-internal-only-1.0.10.zip/cql-1.0.10/cql/cursor.py", line 117, in execute
response = self.handle_cql_execution_errors(doquery, prepared_q, compress)
File "./../lib/cql-internal-only-1.0.10.zip/cql-1.0.10/cql/cursor.py", line 132, in handle_cql_execution_errors
return executor(*args, **kwargs)
File "./../lib/cql-internal-only-1.0.10.zip/cql-1.0.10/cql/cassandra/Cassandra.py", line 1583, in execute_cql_query
self.send_execute_cql_query(query, compression)
File "./../lib/cql-internal-only-1.0.10.zip/cql-1.0.10/cql/cassandra/Cassandra.py", line 1593, in send_execute_cql_query
self.oprot.trans.flush()
File "./../lib/thrift-python-internal-only-0.7.0.zip/thrift/transport/TTransport.py", line 293, in flush
self._trans.write(buf)
File "./../lib/thrift-python-internal-only-0.7.0.zip/thrift/transport/TSocket.py", line 117, in write
plus = self.handle.send(buff)
error: [Errno 32] Broken pipe
And on the server console:
Error occurred during processing of message.
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:247)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:51)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:60)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getString(AbstractCompositeType.java:140)
at org.apache.cassandra.config.CFMetaData.validate(CFMetaData.java:929)
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:131)
at org.apache.cassandra.cql3.statements.CreateColumnFamilyStatement.announceMigration(CreateColumnFamilyStatement.java:83)
at org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:99)
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:108)
at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:121)
at org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1237)
at org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3542)
at org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:3530)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
I'd suggest reporting bugs at https://issues.apache.org/jira/browse/CASSANDRA.