Firebird: UDF function STRTOINT is not defined in query even though it's listed in RDB$FUNCTIONS

I'm trying to run a SQL query in my Firebird database. I've restored a backup in my local development environment.
My error is the following:
echo "SELECT STRTOINT('3') FROM MyTable;" | isql-fb /var/lib/firebird/3.0/data/dbname.fdb
Statement failed, SQLSTATE = 39000
invalid request BLR at offset 36
-function STRTOINT is not defined
-module name or entrypoint could not be found
To check whether the function exists, I run this:
echo 'SELECT RDB$FUNCTION_NAME FROM RDB$FUNCTIONS;' | isql-fb /var/lib/firebird/3.0/data/dbname.fdb | grep STRTOINT
STRTOINT
How do I call the function correctly? Any hints are welcome!

The fact that the UDF is defined in the database does not mean the UDF actually exists. UDFs require both a definition in the database and a native library (.dll/.so) on disk that contains the code for the UDF.
The error means that one of the following is true:
The UDF does not exist in any of the libraries found,
Firebird has no permission to read the library,
The folder containing the library is not defined in firebird.conf (setting UdfAccess), or
The UDF library is 32-bit and Firebird is 64-bit (or vice versa).
See also the question "invalid request BLR at offset 163".
Firebird itself has no UDF called STRTOINT, so you need to find out which third-party library this is, and install it correctly.
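To find out which library and entry point the database expects, you can read the declaration back from the system tables: RDB$MODULE_NAME names the library file and RDB$ENTRYPOINT the exported symbol. A minimal sketch using the fdb Python driver (the DSN and SYSDBA credentials are placeholders; a plain isql query works just as well):

import fdb

# Connect to the restored database; dsn/user/password are placeholders.
con = fdb.connect(dsn='/var/lib/firebird/3.0/data/dbname.fdb',
                  user='SYSDBA', password='masterkey')
cur = con.cursor()
cur.execute("""
    SELECT RDB$FUNCTION_NAME, RDB$MODULE_NAME, RDB$ENTRYPOINT
    FROM RDB$FUNCTIONS
    WHERE RDB$FUNCTION_NAME = 'STRTOINT'
""")
for name, module, entrypoint in cur.fetchall():
    # RDB$MODULE_NAME is the .so/.dll Firebird will search for,
    # RDB$ENTRYPOINT is the function it expects to find inside it.
    print(name.strip(), module, entrypoint)
con.close()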

Have you tried the following?
CAST(MyVarcharCol AS INT)
Read more: http://www.firebirdfaq.org/faq139/

Related

Synapse Spark exception handling - Can't write to log file

I have written PySpark code to hit a REST API, extract the contents in XML format, and later write them to Parquet in a data lake container.
I am trying to add logging functionality where I write out not only errors but also updates on the actions/processes we execute.
I am comparatively new to Spark, so I have been relying on online articles and samples. They all explain error handling and logging through "1/0" examples and save logs in the default folder structure (not in an ADLS account/container/folder), which does not help at all. Most of the sample code, written in pure Python, doesn't run as-is.
Could I get some assistance with setting up the following:
Pushing errors to a log file under a designated folder sitting in a data lake storage account/container/folder hierarchy.
Catching REST-specific exceptions.
This is a sample of what I have written:
LogFilepath = "abfss://raw#.dfs.core.windows.net/Data/logging/data.log"
#LogFilepath2 = "adl://.azuredatalakestore.net/raw/Data/logging/data.log"
print(LogFilepath)
try:
    1/0
except Exception as e:
    print('My Error...' + str(e))
    with open(LogFilepath, "a") as f:
        f.write("An error occurred: {}\n".format(e))
I have tried both the ABFSS and the ADL file path, with no luck. The log file already exists in the storage account/container/folder.
I have reproduced the above using the abfss path in the open() function, but it gave me the below error.
FileNotFoundError: [Errno 2] No such file or directory: 'abfss://synapsedata@rakeshgen2.dfs.core.windows.net/datalogs.logs'
As per this documentation, we can use open() on an ADLS file with a path like /synfs/{jobId}/mountpoint/{filename}.
For that, we first need to mount the ADLS container.
Here I have mounted it using an ADLS linked service; you can mount it with a storage account access key or SAS instead, as per your requirement.
mssparkutils.fs.mount(
    "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net",
    "/mountpoint",
    {"linkedService": "<ADLS linked service name>"}
)
Now use the below code to achieve your requirement.
from datetime import datetime

currentDateAndTime = datetime.now()
jobid = mssparkutils.env.getJobId()
LogFilepath = '/synfs/' + jobid + '/synapsedata/datalogs.log'
print(LogFilepath)
try:
    1/0
except Exception as e:
    print('My Error...' + str(e))
    with open(LogFilepath, "a") as f:
        f.write("Time : {} - Error : {}\n".format(currentDateAndTime, e))
Here I am writing the date and time along with the error, and there is no need to create the log file first; the above code will create it and append each error.
If you want to generate the logs daily, you can build date-based log file names as per your requirement; a sketch follows below.
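A minimal sketch of such a date-based file name (the naming pattern is an assumption; it reuses jobid from the snippet above):

from datetime import datetime

# e.g. datalogs_2023-01-31.log -- one file per day
logname = 'datalogs_{}.log'.format(datetime.now().strftime('%Y-%m-%d'))
LogFilepath = '/synfs/' + jobid + '/synapsedata/' + logname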
My execution: I have executed it twice.

postgresql – No crypt function on Debian stretch

I have a PostgreSQL 9.6 installation on Debian Stretch (9). When I want to use the crypt() or gen_salt() functions, it says:
ERROR: function gen_salt(unknown, integer) does not exist
LINE 1: select gen_salt('bf', 8)
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
How can I get these functions working?
Installed postgresql packages
You have to enable it using SQL:
CREATE EXTENSION IF NOT EXISTS pgcrypto;
You have to do this in each database that uses the pgcrypto functions.

ActiveRecord: "uuid-ossp" extension added, but no uuid functions available

Using rails-5.0.7.1 (according to bundle show rails)
I wrote a migration which adds the "uuid-ossp" extension, and the SQL gets executed, and the extension shows up when I type \dx in the psql console. However, the functions that this extension provides (such as uuid_generate_v4) do NOT show up when I type \df, and so any attempt to use the functions that should be added fails.
When I take the SQL from the ActiveRecord migration and copy+paste it into the psql console directly, everything works as expected - extension is added, and functions are available.
Here is my migration code:
class EnableUuidOssp < ActiveRecord::Migration[5.0]
  def up
    enable_extension "uuid-ossp"
  end

  def down
    disable_extension "uuid-ossp"
  end
end
Here is the output:
$ bundle exec rake db:migrate
== 20190308113821 EnableUuidOssp: migrating ==============================
-- enable_extension("uuid-ossp")
-> 0.0075s
== 20190308113821 EnableUuidOssp: migrated (0.0076s) =====================
^ This all appears to run successfully, but no functions are enabled. That means future SQL that includes statements such as ... SET uuid = uuid_generate_v4() ... fails with this error: HINT: No function matches the given name and argument types. You might need to add explicit type casts.
What does work
Going directly into psql and typing:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
^ This installs the extension and makes the functions available.
And yet, what doesn't work
Okay, so if I take the above SQL and rewrite my migration this way:
...
def up
  execute <<~SQL
    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
  SQL
end
...
^ This migration runs without error, yet it still does not make the functions available.
So, the same copy+paste SQL that works in psql doesn't work via the ActiveRecord execute method, which really puzzles me. I'm not sure what piece I'm missing that's causing this to fail.
I assume that the schema where the extension is installed is not on your search_path.
You can see that schema with
\dx "uuid-ossp"
Try to qualify the functions with the schema, like in public.uuid_generate_v4(). A programmatic check is sketched below.
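If you would rather check from code which schema the extension landed in, here is a minimal diagnostic sketch using Python's psycopg2 driver (the driver choice and connection details are assumptions; the catalog query itself is standard PostgreSQL):

import psycopg2

# Hypothetical connection details; point this at the app's database.
conn = psycopg2.connect(dbname="myapp_development", user="postgres")
with conn.cursor() as cur:
    cur.execute("""
        SELECT e.extname, n.nspname
        FROM pg_extension e
        JOIN pg_namespace n ON n.oid = e.extnamespace
        WHERE e.extname = 'uuid-ossp';
    """)
    print(cur.fetchall())  # e.g. [('uuid-ossp', 'public')]
conn.close()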

Create DataSource for DB2 Provider using wsadmin (Websphere Application Server 8.5)

I have a script which creates a Data Source using a DB2 JDBC Provider in WebSphere Application Server 8.5. I am fighting an error while running the script and need some help, please.
My script:
def createDB2(list):
    print 'Creating DB2 Data Source...'
    for dataSource in list:
        datasourceName = dataSource[0]
        dsJNDIName = dataSource[1]
        compAuthAlias = dataSource[2]
        providerName = dataSource[3]
        dataStoreHelperClassName = dataSource[4]
        description = dataSource[5]
        serverName = dataSource[6]
        databaseMaxConnections = dataSource[7]
        databaseMinConnections = dataSource[8]
        databaseconnTimeout = dataSource[9]
        databasereapTime = dataSource[10]
        databaseunusedTimeout = dataSource[11]
        databaseagedTimeout = dataSource[12]
        # Create the data source
        dataSourceId = AdminJDBC.createDataSourceAtScope(scope, providerName, datasourceName, dsJNDIName, dataStoreHelperClassName, serverName, [['componentManagedAuthenticationAlias', compAuthAlias], ['containerManagedPersistence', 'true'], ['description', description]])
        connectionPoolList = AdminConfig.list('ConnectionPool', dataSourceId)
        connectionPoolList = AdminUtilities.convertToList(connectionPoolList)
        connectionPoolId = connectionPoolList[0]
        AdminConfig.modify(connectionPoolId, [["maxConnections", databaseMaxConnections], ["minConnections", databaseMinConnections], ["connectionTimeout", databaseconnTimeout], ["reapTime", databasereapTime], ["unusedTimeout", databaseunusedTimeout], ["agedTimeout", databaseagedTimeout]])
    print 'Saving configuration...'
    AdminConfig.save()
    print "Configuration saved."
My input list:
[datasourceName, JNDIName, AuthAlias, providerName, dataStoreHelperClassName, description, srvName, maxConnections, minConnections, connTimeout, reapTime, unusedTimeout, agedTimeout]
I am using the same script to create an Oracle Data Source with no errors. The only difference I know of between these processes is the serverName: for DB2 it is a server name, and for Oracle it is a URL. Is there another difference that I don't know about? Does anyone see an error or a mistake in my code?
My error:
Exception: com.ibm.ws.scripting.ScriptingException com.ibm.ws.scripting.ScriptingException: com.ibm.ws.scripting.ScriptingException: WASX8018E: Cannot find a match for option value [databaseName, java.lang.String, TestSRV] for step configureResourceProperties
WASX7017E: Exception received while running file "createDataSource.py"; exception information: com.ibm.ws.scripting.ScriptingException: WASX8018E: Cannot find a match for option value [databaseName, java.lang.String, TestSRV] for step configureResourceProperties
If you need more information, please leave a comment. Thanks in advance!
EDIT 03.03.2015
I found some examples in an IBM Redbook.
Example scripts for the DB2 database type:
The following example script includes optional attributes in string format:
AdminJDBC.createDataSourceAtScope("Cell=IBM-F4A849C57A0Cell01,Node=IBM-F4A849C57A0Node01,Server=server1", "MyTestJDBCProviderName", "newds2", "newds2/jndi", "com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper", "db1", " category=myCategory, componentManagedAuthenticationAlias=CellManager01/AuthDataAliase, containerManagedPersistence=true, description=’My description’, xaRecoveryAuthAlias=CellManager01/xaAliase", "serverName=localhost, driverType=4,portNumber=50000")
The following example script includes optional attributes in a list format:
AdminJDBC.createDataSourceAtScope("Cell=IBM-F4A849C57A0Cell01,Node=IBM-F4A849C57A0Node01,Server=server1", "MyTestJDBCProviderName", "newds2", "newds2/jndi", "com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper", "db1", [[’category’, ’myCategory’], [’componentManagedAuthenticationAlias’, ’CellManager01/AuthDataAliase’], [’containerManagedPersistence’, ’true’], [’description’, ’My description’], [’xaRecoveryAuthAlias’, ’CellManager01/xaAliase’]] , [[’serverName’, ’localhost’], [’driverType’, 4], [’portNumber’, 50000]])
EDIT 16.04.2015
I am using the built-in function createDataSourceAtScope; here is its definition:
def createDataSourceAtScope( scope, JDBCName, datasourceName, jndiName, dataStoreHelperClassName, dbName, otherAttrsList=[], resourceAttrsList=[], failonerror=AdminUtilities._BLANK_ ):
I have to call the function like above; below is a hedged sketch of how my variables might map onto that signature. Did anyone see the problem? :)
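Based on the Redbook list-format example, the DB2 connection properties (serverName, driverType, portNumber) seem to belong in the separate resourceAttrsList argument, while the positional parameter after the helper class is the database name. This is untested; the databaseName variable and the driverType/portNumber values are assumptions:

# Hedged sketch: the database name goes in the positional dbName slot,
# and the connection properties move into resourceAttrsList.
dataSourceId = AdminJDBC.createDataSourceAtScope(
    scope, providerName, datasourceName, dsJNDIName,
    dataStoreHelperClassName,
    databaseName,  # the DB2 database name, not the server name
    [['componentManagedAuthenticationAlias', compAuthAlias],
     ['containerManagedPersistence', 'true'],
     ['description', description]],
    [['serverName', serverName], ['driverType', 4], ['portNumber', 50000]])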
The built-in scripts are in: dmgrProfile/scriptLibraries/resources/JDBC/V70
I still don't know how to fix my problem. If anyone has an idea, please leave a comment or an answer. Thank you very much!
I know it's too late, but I struggled with the same problem with WebSphere on Docker, so I would like to share my solution.
Command to debug the scripts on ibmcom/websphere-traditional:8.5.5.18
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -conntype None -f exportConfig.py
Jython script
import os
import sys
newjdbc = AdminConfig.getid('/JDBCProvider:"DB2 Universal JDBC Driver Provider"/')
ds = AdminTask.createDatasource(newjdbc, '[-name NameDataSource -jndiName jdbc/NameDataSource -description "DB2 Universal Driver Datasource" -dataStoreHelperClassName com.ibm.websphere.rsadapter.DB2UniversalDataStoreHelper -containerManagedPersistence true -componentManagedAuthenticationAlias db2inst1 -configureResourceProperties [[databaseName java.lang.String SAMPLE][portNumber java.lang.Integer 50000][serverName java.lang.String 172.17.0.3]]]')
AdminConfig.create('MappingModule', ds, '[[authDataAlias db2inst1] [mappingConfigAlias "DefaultPrincipalMapping"]]')
AdminConfig.save()

PostgreSQL pgp_sym_encrypt() broken in version 9.1

The following works in PostgreSQL 8.4:
insert into credentials values('demo', pgp_sym_encrypt('password', 'longpassword'));
When I try it in version 9.1 I get this:
ERROR: function pgp_sym_encrypt(unknown, unknown) does not exist
LINE 1: insert into credentials values('demo', pgp_sym_encrypt('pass...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
*** Error ***
ERROR: function pgp_sym_encrypt(unknown, unknown) does not exist
SQL state: 42883
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Character: 40
If I try some explicit casts like this
insert into credentials values('demo', pgp_sym_encrypt(cast('password' as text), cast('longpassword' as text)))
I get a slightly different error message:
ERROR: function pgp_sym_encrypt(text, text) does not exist
I have pgcrypto installed. Does anyone have pgp_sym_encrypt() working in PostgreSQL 9.1?
One explanation could be that the module was installed into a schema that is not in your search path, or into the wrong database.
Diagnose your problem with this query and report back the output:
SELECT n.nspname, p.proname, pg_catalog.pg_get_function_arguments(p.oid) as params
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE p.proname ~~* '%pgp_sym_encrypt%'
AND pg_catalog.pg_function_is_visible(p.oid);
This finds functions in all schemas of your database. It is similar to the psql meta-command:
\df *pgp_sym_encrypt*
Make sure you install the extension in the desired schema.
sudo -i -u postgres
psql $database
CREATE EXTENSION pgcrypto;
OK, problem solved.
I was creating the pgcrypto extension as the first operation in the script. Later in the script, I dropped and re-created the VGDB database. That's why pgcrypto was there immediately after creating it, but didn't exist when running the SQL later in the script or when I opened pgAdmin.
This script is meant for setting up new databases and if I had tried it on a new database the create extension would have failed right away.
My bad. Thanks for the help, Erwin.
Just mention the schema where pgcrypto is installed, like this:
@ColumnTransformer(forColumn = "TEST",
    read = "public.pgp_sym_decrypt(TEST, 'password')",
    write = "public.pgp_sym_encrypt(?, 'password')")
@Column(name = "TEST", columnDefinition = "bytea", nullable = false)
private String test;
I ran my (python) script again and the CREATE EXTENSION ran without error. The script also executes this command
psql -d VGDB -U postgres -c "select * from pg_available_extensions order by name"
which includes the following in the result set:
pgcrypto | 1.0 | 1.0 | cryptographic functions
So psql believes that it has installed pgcrypto.
Later in the same script when I execute
psql -d VGDB -U postgres -f sql/Create.Credentials.table.sql
where sql/Create.Credentials.table.sql includes this
insert into credentials values('demo', pgp_sym_encrypt('password', 'longpassword'));
I get this
psql:sql/Create.Credentials.table.sql:31: ERROR: function pgp_sym_encrypt(unknown, unknown) does not exist
LINE 1: insert into credentials values('demo', pgp_sym_encrypt('pass...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
When I open pgadmin it does not show pgcrypto in either the VGDB or postgres databases even though the query above called by psql shows that pgcrypto is installed.
Could there be an issue with needing to commit after using psql to execute the "create extension ..." command? None of my other DDL or SQL statements require a commit when they get executed with psql.
It's starting to look like psql is just flaky. Is there another way to call "create extension pgcrypto" - e.g. with Python's database support classes - or does that have to be run through psql?
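For reference, a minimal sketch of issuing the statement through Python's psycopg2 driver instead of psql (the driver choice and connection details are assumptions; note that psycopg2 opens a transaction implicitly, so you need either an explicit commit() or autocommit):

import psycopg2

# Hypothetical connection details; adjust to your setup.
conn = psycopg2.connect(dbname="VGDB", user="postgres")
conn.autocommit = True  # avoid leaving CREATE EXTENSION in an uncommitted transaction
with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgcrypto;")
    # Verify the functions are now visible from this database:
    cur.execute("SELECT count(*) FROM pg_proc WHERE proname = 'pgp_sym_encrypt';")
    print(cur.fetchone())
conn.close()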