vertx-db2-client returns error codes when querying data with Chinese characters - db2

(The error output was shown in a screenshot in the original post.)
Are there any options to control the encoding (switch between UTF-8 and GBK)?
I found a DEFAULT_CHARSET property (its value is utf8) in io.vertx.db2client.DB2ConnectOptions, but no code uses it.
Is there a way to pass a custom charset to a DB2ConnectOptions instance?
I tried to set the charset directly in the config file (the connect part is parsed into a SqlConnectOptions instance), but it seems the client still uses UTF-8:
connect:
  host: 192.168.0.100
  port: 50000
  database: test
  charset: GBK
  pipelined: true
  user: test
  password: test
pool:
  name: test
  shared: true
Result of select name,value from sysibmadm.dbcfg where name in ('codepage','codeset','territory','country'):
[
  {
    "NAME": "codepage",
    "VALUE": "1386"
  },
  {
    "NAME": "codeset",
    "VALUE": "GBK"
  },
  {
    "NAME": "country",
    "VALUE": "86"
  },
  {
    "NAME": "territory",
    "VALUE": "CN"
  }
]

Related

Azure DevOps KeyVault Linked Variable group "Value cannot be null. Parameter name: variableGroupParameters" - how do I fix this?

I need to automate the creation of KeyVault-linked variable groups in ADO as part of a pipeline task. After a bit of experimenting, I can actually create the var groups. However, using the az devops invoke method one is unable to specify the Azure subscription, so this has to be done manually afterwards; when I do that in the web interface and attempt to save, I get the error:
Value cannot be null. Parameter name: variableGroupParameters
This means any subsequent editing of the created KeyVault-linked var group is pointless, as it is unable to save.
The JSON I am submitting is as follows:
{
  "authorized": true,
  "description": "$description",
  "name": "$name",
  "type": "AzureKeyVault",
  "variableGroupProjectReferences": [{
    "projectReference": {
      "id": "$adoProjectID",
      "name": "$ProjectName"
    },
    "name": "$name",
    "description": "$description"
  }],
  "providerData": {
    "serviceEndpointId": "$AdoSvcConnId",
    "vault": "$KeyVaultNM"
  },
  "variables": {
    "SOMETHINGSECRET": {
      "isSecret": true,
      "value": null,
      "enabled": true,
      "contentType": ""
    },
    "variables": {
      "ANOTHERSECRET": {
        "isSecret": true,
        "value": null,
        "enabled": true,
        "contentType": ""
      }
    }
  }
}
where the $tokenized values are replaced during the PowerShell/az CLI task.
The command (which works, but results in a broken var group that can't be edited or saved) is as follows:
az devops invoke --http-method post --area distributedtask --resource variablegroups --in-file "vgroup_azure_rm.json" --encoding utf-8 --route-parameters project=$ProjectName --api-version 5.0-preview
Does anyone have insight into how to fix this, please?
I was experiencing the same problem, and the message had nothing to do with the real issue.
I had the following variables:
"variables": {
  "SECRET_1": {
    "contentType": "",
    "isSecret": true,
    "value": "",
    "expires": null,
    "enabled": true
  },
  "SECRET_2": {
    "contentType": "",
    "isSecret": true,
    "value": "",
    "expires": null,
    "enabled": true
  },
  "SECRET_3": {
    "contentType": "",
    "isSecret": true,
    "value": "",
    "expires": null,
    "enabled": true
  }
}
And I was receiving the same error as you. I removed the expires property and the issue was solved! So my thinking is that the validation checks for any null value and returns the error you are receiving.
Try changing the property value to an empty string ("") instead of null and see whether that solves the problem.
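If the payload is generated dynamically, the nulls can be normalized before submission. A minimal sketch (the payload below is a cut-down illustration of the JSON from the question, not the service's full schema):

```javascript
// Recursively replace null values with "" so the variable-group
// validation does not reject the payload.
function stripNulls(obj) {
  for (const [key, value] of Object.entries(obj)) {
    if (value === null) {
      obj[key] = '';
    } else if (typeof value === 'object') {
      stripNulls(value); // recurse into nested objects/arrays
    }
  }
  return obj;
}

const payload = {
  variables: {
    SOMETHINGSECRET: {
      isSecret: true,
      value: null,
      expires: null,
      enabled: true,
      contentType: '',
    },
  },
};

stripNulls(payload);
// payload.variables.SOMETHINGSECRET.value and .expires are now ""
```

Run this over the in-memory object before serializing it to the file passed to az devops invoke.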

keycloak and postgresql on openshift

I tried to deploy Keycloak with PostgreSQL on OpenShift. I used the jboss/keycloak-openshift image for Keycloak and rhscl/postgresql-95-rhel7 for PostgreSQL.
I then added these environment variables to the Keycloak deployment:
DB_DATABASE : keycloak
DB_USER : postgresl-secret-database-user
DB_PASSWORD : postgresl-secret-database-password
DB_VENDOR : POSTGRES
I thought this was all I needed to do to make Keycloak work with PostgreSQL. These are the errors and warnings I am seeing in the pod logs:
IOException occurred while connecting to postgres:5432: java.net.UnknownHostException: postgres
Connection error: : org.postgresql.util.PSQLException: The connection attempt failed.
But it is not working this way; the Keycloak pod fails. Do I need to do anything else as well?
What is the name of your database service?
If it is not the default of postgres that Keycloak expects, you need to set DB_ADDR. I use the following in my template:
{
  "name": "KEYCLOAK_USER",
  "value": "${KEYCLOAK_USER}"
},
{
  "name": "KEYCLOAK_PASSWORD",
  "value": "${KEYCLOAK_PASSWORD}"
},
{
  "name": "DB_VENDOR",
  "value": "postgres"
},
{
  "name": "DB_ADDR",
  "value": "${KEYCLOAK_NAME}-db"
},
{
  "name": "DB_PORT",
  "value": "5432"
},
{
  "name": "DB_DATABASE",
  "value": "keycloak"
},
{
  "name": "DB_USER",
  "value": "keycloak"
},
{
  "name": "DB_PASSWORD",
  "value": "${DATABASE_PASSWORD}"
},
I suggest setting them all.
Further details in:
https://github.com/jupyter-on-openshift/poc-hub-keycloak-auth/blob/master/templates/jupyterhub.json

MongoDB define collection that has an array as document

I'm a newbie to MongoDB and I'm trying to create my first database.
This is my scenario: a user can connect to an FTP location and get/put some files. Each user can have more than one FTP storage that they can access.
These are the fields required for the user document:
username: String
password: String
And these are the fields required for the ftp document:
host: String
port: Number
user: String
pass: String
And here comes my question... can I have a single collection that contains both documents mentioned above?
More precisely, I need to know how I can get a record like this and how my database should look:
{
  "username": "User",
  "password": "test",
  [
    {
      "host": "first-host",
      "port": 21,
      "user": "defined-user",
      "pass": "defined-pass"
    },
    {
      "host": "second-host",
      "port": 21,
      "user": "defined-user",
      "pass": "defined-pass"
    }
  ]
}
Not sure if my question is clear enough, but please let me know if I need to provide additional information to get some answers.
Mongo uses the JSON format, so you can have something like this:
"username": "User",
"password": "test",
"storage": [
  {
    "host": "first-host",
    "port": 21,
    "user": "defined-user",
    "pass": "defined-pass"
  },
  {
    "host": "second-host",
    "port": 21,
    "user": "defined-user",
    "pass": "defined-pass"
  }
]
And your schema should be defined like this:
username: String
password: String
storage : Array
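In plain JS terms, the embedded-array layout looks like this (field names follow the question; addStorage is a hypothetical helper, not a Mongoose API):

```javascript
// A user document with FTP credentials embedded as a "storage" array.
const user = {
  username: 'User',
  password: 'test',
  storage: [],
};

// Adding an FTP location is just pushing a subdocument into the array.
function addStorage(doc, { host, port, user: ftpUser, pass }) {
  doc.storage.push({ host, port, user: ftpUser, pass });
  return doc;
}

addStorage(user, { host: 'first-host', port: 21, user: 'defined-user', pass: 'defined-pass' });
addStorage(user, { host: 'second-host', port: 21, user: 'defined-user', pass: 'defined-pass' });
```

In Mongoose, storage would be declared as an array (or an array of subdocuments) in the schema definition.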
BSON documents are basically JSON/JS objects (with a few additional types), so your MongoDB document must be a valid JSON/JS object. The desired document shape from your question is not a valid JS object and, as such, is impossible in MongoDB.
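This is easy to check: JSON.parse rejects an object that contains a bare, unnamed array, as in the desired shape from the question:

```javascript
// The shape requested in the question: an array with no field name.
const desired = '{ "username": "User", "password": "test", [ {"host": "first-host"} ] }';

let valid = true;
try {
  JSON.parse(desired); // throws: an object member must be a "key": value pair
} catch (e) {
  valid = false;
}
console.log(valid); // false — the array needs a key, e.g. "storage": [...]
```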

Not able to set up my loopback model. Error: Persisted model has not been correctly attached to a DataSource

restraunt.json file
{
  "name": "restraunt",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "name": {
      "type": "string",
      "required": true
    },
    "location": {
      "type": "string",
      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {}
}
restraunt.js file
module.exports = function (Restraunt) {
  // find() takes an (err, data) callback
  Restraunt.find({ where: { id: 1 } }, function (err, data) {
    console.log(data);
  });
};
model-config.json file
"restraunt": {
  "dataSource": "restrauntManagement"
}
datasources.json file
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "restrauntManagement": {
    "host": "localhost",
    "port": 0,
    "url": "",
    "database": "restraunt-management",
    "password": "restraunt-management",
    "name": "restrauntManagement",
    "user": "rohit",
    "connector": "mysql"
  }
}
I am able to GET, PUT and POST from the explorer, which means the SQL DB has been set up properly, but I am not able to call find from the restraunt.js file. It throws an error:
"Error: Cannot call restraunt.find(). The find method has not been setup. The PersistedModel has not been correctly attached to a DataSource"
Besides executing code in the boot folder, there is also the possibility of using the event emitted after the model is attached.
You can write your code right in model.js, not in the boot folder.
It looks like:
Model.once("attached", function () {})
where Model = Restraunt, Accounts, etc.
I know, this is an old topic, but maybe this helps someone else.
Try installing the mysql connector again:
npm i -S loopback-connector-mysql
Take a look at your datasources.json, because mysql's port might be wrong (the default port is 3306); you could also try changing localhost to 0.0.0.0.
"restrauntManagement": {
  "host": "localhost", /* if you're using docker, set it to 0.0.0.0 instead of localhost */
  "port": 0, /* default port is 3306 */
  "url": "",
  "database": "restraunt-management",
  "password": "restraunt-management",
  "name": "restrauntManagement",
  "user": "rohit",
  "connector": "mysql"
}
model-config.json must be:
"restraunt": {
  "dataSource": "restrauntManagement" /* this must match the object key in datasources.json (restrauntManagement), not the connector name (mysql) */
}
You also need to run the migration for the restraunt model. Create migration.js at /server/boot and add this:
'use strict';
module.exports = function (server) {
  // Use the datasource key from datasources.json (restrauntManagement),
  // not the connector name.
  var ds = server.dataSources.restrauntManagement;
  ds.autoupdate('restraunt');
};
You need to migrate every single model you'll use, and you also need to migrate the built-in models (ACL, AccessToken, etc.) if you're going to attach them to a datasource.
Also, the docs say you can't perform any operation inside the model.js file, because at that point the system is not fully loaded. Any operation you need to execute must be inside a .js file in the /boot directory, where the system is completely loaded. You can also perform operations inside remote methods, because the system is loaded there as well.

Loopback - GET model using custom String ID from MongoDB

I'm developing an API with LoopBack. Everything worked fine until I decided to change the ids of my documents in the database; now I don't want them to be auto-generated.
Now that I'm setting the id myself, I get an "Unknown id" 404 whenever I hit this endpoint: GET properties/{id}
How can I use custom IDs with loopback and mongodb?
Whenever I hit the endpoint http://localhost:5000/api/properties/20020705171616489678000000, I get this error:
{
  "error": {
    "name": "Error",
    "status": 404,
    "message": "Unknown \"Property\" id \"20020705171616489678000000\".",
    "statusCode": 404,
    "code": "MODEL_NOT_FOUND"
  }
}
This is my model.json, just in case:
{
  "name": "Property",
  "plural": "properties",
  "base": "PersistedModel",
  "idInjection": false,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "id": { "id": true, "type": "string", "generated": false },
    "photos": {
      "type": [
        "string"
      ]
    },
    "propertyType": {
      "type": "string",
      "required": true
    },
    "internalId": {
      "type": "string",
      "required": true
    },
    "flexCode": {
      "type": "string",
      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}
Your model setup (with idInjection: true or false) did work when I tried it with a PostgreSQL DB set up with a text id field, for smaller numbers.
Running a LoopBack application with DEBUG=loopback:connector:* node . outputs the database queries being run in the terminal. I tried it with the id value you are using, and the parameter value was [2.002070517161649e+25], so the size of the number is the issue.
You could try raising it as a bug in LoopBack, but JS is bad at dealing with large numbers, so you may be better off not using such large numbers as identifiers anyway.
It does work if the id is an alphanumeric string over 16 characters, so there might be a workaround for you (use ObjectId?), depending on what you are trying to achieve.
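The precision loss is easy to reproduce in plain Node, using the id from the failing request:

```javascript
// A 26-digit id cannot be represented exactly as a JS number
// (IEEE 754 doubles carry only ~15-17 significant decimal digits).
const raw = '20020705171616489678000000';
const asNumber = Number(raw);

console.log(asNumber > Number.MAX_SAFE_INTEGER); // true
console.log(String(asNumber) === raw);           // false — digits were lost
```

This is why the connector ends up querying for 2.002070517161649e+25 instead of the exact id, and why a string id (or ObjectId) avoids the problem.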