How can we read an environment variable in PostgreSQL?

I have a PostgreSQL function in which I want to use a user-defined environment variable. Is there any built-in function to get the environment variable directly?

Since the function runs on the database server, you would get the environment of the database server process (I wonder if that is what you want). Such a function should be restricted to superusers (or perhaps the pg_execute_server_program role).
There is no standard function for this, probably because it is mostly useless, but it should be easy to write a function in C, PL/PerlU or PL/PythonU to do that.
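For example, a minimal PL/Python sketch (assuming the plpython3u extension is available; the function name get_env is just for illustration, and since it reads the server process's environment it should only be granted to trusted roles):

CREATE EXTENSION IF NOT EXISTS plpython3u;

CREATE FUNCTION get_env(var_name text) RETURNS text AS $$
# reads from the environment of the backend (server) process, not the client
import os
return os.environ.get(var_name)
$$ LANGUAGE plpython3u;

-- usage:
SELECT get_env('PGDATA');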

No, there is no SQL function available to do this.
But it should not be complicated to do using https://www.postgresql.org/docs/12/xfunc-c.html.


Is there a way to perform a procedure in NetLogo knowing only its name?

I am developing a NetLogo extension and I want to add a command where the user will give me a list of names of procedures that take no parameters.
Later I will run these procedures, but the only thing I will know is the name of each procedure that was passed to me earlier.
The command that the user will use to inform the name of the procedures is the following:
qlearningextension:actions ["procedure1" "procedure2" "procedure3"]
Later the extension will run these procedures. I want to know if there is a way to get a procedure from only its name.
My recommendation would be to change the syntax of your primitive from taking in a list of strings to taking in a repeatable number of anonymous commands. You can do this by setting the syntax to CommandType | RepeatableType. A good reference is the ControlFlow extension (cf), which uses a similar technique to accept at least one, but possibly many, boolean/command combinations for its variadic cf:ifelse primitive.
The anonymous commands provided will be checked for correctness at compile time, meaning you won't have to rely on the extension user typing the procedure names correctly or remembering to update them if a name changes. The commands will also be easily executable by your extension primitive at runtime; you won't have to "search" for the right procedure to execute (again, see the cf example).
Users of your extension will need to wrap your prim in parens when using the "more than 1" repeatable syntax: (qlearningextension:actions [procedure1] [procedure2] [procedure3])
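A rough Scala sketch of such a primitive against the NetLogo 6 extensions API is below (hedged: method names such as commandSyntax, getCommand and AnonymousCommand.perform are from memory, so verify them against the API version you build with; the object name ActionsPrim is made up):

import org.nlogo.api.{ Argument, Command, Context }
import org.nlogo.core.Syntax

// Accepts one or more anonymous commands:
// (qlearningextension:actions [procedure1] [procedure2] ...)
object ActionsPrim extends Command {
  override def getSyntax: Syntax =
    Syntax.commandSyntax(right = List(Syntax.CommandType | Syntax.RepeatableType))

  override def perform(args: Array[Argument], context: Context): Unit = {
    // Each argument is an anonymous command already checked by the compiler.
    val actions = args.map(_.getCommand).toList
    // Store the list; later, run one with: action.perform(context, Array.empty[AnyRef])
  }
}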

How to share data between cmdlets in a module?

I'm currently working on a module in PowerShell which uses a standard REST API in the background. For that, I wrote a Connect-Server cmdlet that retrieves an auth key for later calls.
My question is: Is there any best practice regarding sharing the data with other cmdlets? I know I could easily just return it from the Connect function and pass it to the following cmdlet, but that's not what I'm looking for.
Until now, I've been using global variables for that exchange of data. But as I've read in some best-practice guidelines, you should try not to pollute the global scope.
Other solutions I've seen use Get and Set cmdlets, but I don't think that's the best PowerShell way of doing it.
So are there any other ways of solving that?
The normal way is to return data from one cmdlet and store it either in a variable or forward it to the pipeline. Another way of sharing data might be serializing it (ConvertTo-Json, ConvertTo-Csv, ...) to a file (located in e.g. $env:TEMP, or created via New-TemporaryFile) and deserializing it back again in another cmdlet (at the cost of disk I/O). Personally, I always store the result in a variable for later usage and inject it into the next cmdlet (or use the pipeline).
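For instance, a small sketch of the temp-file approach (variable names are illustrative):

# Producer side: serialize to a temporary JSON file
$file = New-TemporaryFile
$session | ConvertTo-Json | Set-Content -Path $file.FullName

# Consumer side: deserialize it again
$session = Get-Content -Path $file.FullName -Raw | ConvertFrom-Json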
Using global variables is not the best idea, since it hides which inputs your cmdlet/function actually depends on.
So, as the folks at PoshCode stated, the best way to do such a thing is to use a variable in script scope, since that is available to all cmdlets in the module but not visible to users.
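As a minimal sketch, inside the module's .psm1 (the function and variable names here are made up for illustration):

# Module-private state, shared by all functions in the module
$script:AuthToken = $null

function Connect-Server {
    param(
        [Parameter(Mandatory)][string]$Server,
        [Parameter(Mandatory)][pscredential]$Credential
    )
    # Call the REST API here and keep the key in script scope,
    # visible to every cmdlet in the module but not to the caller.
    $script:AuthToken = 'key-returned-by-the-api'
}

function Get-ServerItem {
    if (-not $script:AuthToken) {
        throw 'Not connected. Run Connect-Server first.'
    }
    # Use $script:AuthToken, e.g. in an Authorization header for Invoke-RestMethod.
}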

How can you call programs from a stored procedure?

I have followed other examples on stack exchange to call a built-in program from a stored procedure, but continue to get an error.
Working off this example (Looking for a working example of any OS/400 API wrapped in an external SQL stored procedure wrapped in a user defined SQL function), I built the following to attempt to create a wrapper that lets me change object ownership (my issue is that objects I create, regardless of library, are not always accessible to others, so I must manually assign them to a common security group).
CREATE OR REPLACE PROCEDURE XX.TST( IN XOBJ CHAR(32), IN XOBJTYPE CHAR(10), IN XNEWOWN CHAR(10))
LANGUAGE CL
SPECIFIC XX.TST
NOT DETERMINISTIC
NO SQL
CALLED ON NULL INPUT
EXTERNAL NAME 'QSYS/CHGOBJOWN'
PARAMETER STYLE GENERAL;
CALL XX . TST('XX/TBL1','*FILE','GRPFRIENDS');
I get the following error:
External program CHGOBJOWN in QSYS not found
But I have confirmed that going to a CL command line in the terminal emulator and typing QSYS/CHGOBJOWN takes me into the parameter input screen.
You are trying to define a command as a program, and that just won't work. A command object (*CMD) and a program object (*PGM) are two different things, and cannot be invoked the same way. All is not lost though. There is a DB2 service that allows you to execute commands. You just have to build the proper command string.
Instead of defining a stored procedure, you can call the existing DB2 service like this:
call qsys2.qcmdexc('CHGOBJOWN OBJ(XX/TBL1) OBJTYPE(*FILE) NEWOWN(GRPFRIENDS)');
There is a whole list of services. Documentation can be found here.
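If you still want a parameterized wrapper like the original procedure, an untested SQL-language sketch (the procedure name XX.CHGOWN is made up for illustration) could build the command string and hand it to QCMDEXC:

CREATE OR REPLACE PROCEDURE XX.CHGOWN (
    IN XOBJ     VARCHAR(32),
    IN XOBJTYPE VARCHAR(10),
    IN XNEWOWN  VARCHAR(10))
LANGUAGE SQL
BEGIN
  -- Assemble the CL command string and let QCMDEXC run it
  CALL QSYS2.QCMDEXC(
    'CHGOBJOWN OBJ(' CONCAT XOBJ CONCAT
    ') OBJTYPE(' CONCAT XOBJTYPE CONCAT
    ') NEWOWN(' CONCAT XNEWOWN CONCAT ')');
END;

CALL XX.CHGOWN('XX/TBL1', '*FILE', 'GRPFRIENDS');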

FileMaker MissingFunction

Set Variable [$Write; Value: <Function Missing>("filepath";$inputedText)]
I'm trying to determine what the missing function is. I'm trying to write data to an external file with this script, and this is one line of code from the script. I can't post the rest of the code for security reasons. Any direction as to what the missing function would be would be greatly appreciated.
The <Function Missing> message means that this code was written with the expectation that a now-missing plugin would be present. To resolve this, you'll need to determine which plugin it is and install it on your development machine (and likely on all machines that need to use this script, unless you choose to run it as a PSOS script on the server).
My best guess based on functionality and the arguments being passed is that the missing plugin may be the Monkeybread Plugin.
It's the Write To File function in ScriptMaster

COBOL -> COBOL/DB2 -> COBOL -> COBOL/DB2 pgm call

Let's say PGM1 (COBOL) calls PGM2 (COBOL-DB2), which calls PGM3 (COBOL), which calls PGM4 (COBOL-DB2).
1Q. PGM3, which is a purely COBOL program, is modified. Do we compile only PGM3 and promote it to production, or should we do a BIND again since it is called by and calls COBOL-DB2 programs?
2Q. If PGM4 is modified, what has to be done? (I'm using the PACKAGE -> PLAN concept.)
Also, can anyone please explain the bind-with-package concept to me when we have a COBOL to COBOL-DB2 call?
Ashok,
It's definitely a question of how you are making the calls. A call can be static or dynamic.
With a dynamic call you do not need to recompile the main program if the subprogram changes.
But with a static call you need to recompile the main program too.
Ans 1: If static calls are used everywhere - yes, you must compile all the programs.
If dynamic calls are used - just compile the subprogram.
Ans 2: See the full details below for the package and plan concept.
If you bound the old versions of the DBRMs directly into your plan:
· Identify all the DBRMs that are bound directly into that plan, for both the changed programs and any unchanged programs, and bind them all into the plan again.
· While you are binding the DBRMs into the plan, applications cannot use the plan to access DB2.
If you bound the old versions of the DBRMs for the changed application programs into packages:
· You do not need to bind any other packages or directly-bound DBRMs into the plan again.
· You simply bind the new versions of the DBRMs for the changed application programs into packages with the same names as the old versions (see the example after this list).
· You do not need to bind the plan again; it locates the new versions of the packages.
· While you are changing the packages, application programs can still use the other packages and directly-bound DBRMs in the plan.
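For example, rebinding only the changed program's package might look like this DSN subcommand (a sketch; the collection, member and DBRM library names are placeholders):

BIND PACKAGE(MYCOLLID) MEMBER(PGM4) LIBRARY('MY.DBRMLIB') ACTION(REPLACE) VALIDATE(BIND) ISOLATION(CS)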
Hope this helps!
As a rule of thumb, if the "consistency token" changes, you should rebind. That is to say, if a new DBRM is produced. Draw a picture; it will help. Linking is really a red herring here. If you don't know what a consistency token is, you will after your first -805. Ask a peer for help (in the first instance).
Also ask your peers about impact analysis. (What else am I not recompiling that I should be?)
If the subroutine contains static SQL statements then it will produce a DBRM when compiled. This changes the consistency token and thus requires the module to be rebound to the database to avoid an 818 consistency token error. If the subroutine contains no SQL then it does not ever need to be bound to the database because no DBRM is ever created for it.
Even a program that contains only dynamic SQL will still create a DBRM that must be bound to the database. The DBRM itself will be pretty much empty apart from the consistency token.
This holds true regardless of whether this is mainframe COBOL or distributed COBOL using DB2 or LUW.
It's been a while since I had to write any COBOL, but we always had two relevant rules of thumb.
Only use static calls: your code should be performance tested, and unless there is a need for a dynamic call for a very specific purpose, avoid it at all costs.
Rebind everything when something is changed, and check the access paths created PRIOR to putting it live (see the query sketch after this list).
If you need to wait for an outage window to complete the task and flip the updated code into production, I would be patient, plan one in, and complete the bind then... or get a DBA to do it and have them confirm it was successful within your outage window, or roll back immediately.
If your development environment is sufficiently sophisticated, complete the bind in a lower pre-production environment using the statistics for the DB2 tables from production (copy the data in if you can, or get a DBA to do it), and check that none of the access paths for any of the DB2 calls have changed.
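As a rough sketch of that access-path check, if the bind was done with EXPLAIN(YES) you can query your PLAN_TABLE, for example (column names from the standard DB2 for z/OS plan table; qualify PLAN_TABLE with the owning schema as appropriate):

SELECT PROGNAME, QUERYNO, QBLOCKNO, PLANNO, METHOD,
       ACCESSTYPE, ACCESSNAME, TNAME, INDEXONLY
FROM   PLAN_TABLE
WHERE  PROGNAME = 'PGM4'
ORDER BY QUERYNO, QBLOCKNO, PLANNO;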
Hope this helps
First, use this DB2 SQL to get the CONTOKENs:
SELECT SUBSTR(COLLID,1,12) AS COLLID ,
SUBSTR(NAME,1,8) AS NAME ,
HEX(CONTOKEN) AS CONTOKEN ,
SUBSTR(OWNER,1,8) AS OWNER ,
SUBSTR(CREATOR,1,8) AS CREATOR ,
PDSNAME ,BINDTIME
FROM SYSIBM.SYSPACKAGE
WHERE NAME= 'program name';
Get the DB2 CONTOKEN (example below):
1ADB70E30768F694
0768F6941ADB70E3 (the reversed CONTOKEN: the two 4-byte halves swapped)
Check #1: search the load library using the reversed token.
Use token 0768F6941ADB70E3 (reversed) against CONTROL.???????.CICSLIB
It should be found.
Check #2: search the DBRM library using the non-reversed token.
Use token 1ADB70E30768F694 against CONTROL.????????.CIC.DBRMLIB
It should be found.
If both are found, then your bind is good.