I see a lot of info regarding serializing tables on kdb, but is there a suggested best practice for getting functions to persist on a kdb server? At present, I am loading a number of .q files in my startup q.q on my local machine, and I have duplicated those .q files on the server for when it reboots.
As I edit, add, and change functions, I do so on my local dev machine in a number of .q files, all referencing the same context. I then push them one by one to the server using code similar to the below, which works well for now, but I am pushing the functions over IPC and then manually copying each .q file and manually editing the q.q file on the server.
\p YYYY
h:hopen `:XXX.XXX.XX.XX:YYYY;
funcs: raze read0[`$"./funcs/funcsAAA.q"];
funcs,: raze read0[`$"./funcs/funcsBBB.q"];
funcs,: raze read0[`$"./funcs/funcsCCC.q"];
h funcs;
I'd like to serialize them on the server (and, conversely, get them back when the system reboots). I've dabbled with this on my local machine, and it seems to work when I put these in my startup q.q:
`.AAA set get `:/q/AAAfuncs
`.BBB set get `:/q/BBBfuncs
`.CCC set get `:/q/CCCfuncs
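To create those files in the first place, the save side on the server would be something along these lines (a sketch, assuming the same paths and contexts):
/ serialize each context dictionary to disk so it can be loaded back at startup
`:/q/AAAfuncs set get `.AAA
`:/q/BBBfuncs set get `.BBB
`:/q/CCCfuncs set get `.CCC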
My questions are:
Is there a more elegant solution to serialize and call the functions on the server?
Is there a clever way to edit the q.q on the server to add the `.AAA set get `:/q/AAAfuncs lines?
Am I thinking about this correctly? I recognize this could be dangerous in a prod environment.
References: KDB Workspace Organization
In my opinion (and experience) all q functions should be in scripts that the (production) kdb instance can load directly using either \l /path/to/script.q or system"l /path/to/script.q", either from local disk or from some shared mount. All scripts/functions should ideally be loaded on startup of that instance. Functions should never have to be defined on the fly, or defined over IPC, or written serialised and loaded back in, in a production instance.
Who runs this kdb instance you're interacting with? Who is the admin? You should reach out to the admins of the instance to have them set up a mechanism for having your scripts loaded into the instance on startup.
An alternative, if you really can't have your functions defined server side, is to define your functions in your local instance on startup and then send the function calls over IPC, e.g.
system"l /path/to/myscript.q"; /make this load every time on startup
/to have your function executed on the server without it being defined on the server
h:hopen `:XXX.XXX.XX.XX:YYYY;
res:h(myfunc1;`abc);
This loads the functions in your local instance but sends the function (by value) to the remote server for evaluation, along with the input parameter `abc.
Edit: Some common methods for "loading every time on startup" include:
Loading a script from the startup command line, e.g.
q myscript.q -p 1234 -w 10000
You could have a master script which loads subscripts (see the sketch after this list).
Loading a database or a directory containing scripts from the startup command line, e.g.
q /path/to/db -p 1234 -w 10000
Jeff Borror mentions this here: https://code.kx.com/q4m3/14_Introduction_to_Kdb%2B/#14623-scripts and here: https://code.kx.com/q4m3/14_Introduction_to_Kdb%2B/#14636-scripts
Like you say, you can have a q.q script in your QHOME
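For instance, a minimal master-script sketch (the paths here are hypothetical) could look like this, started with q master.q -p 1234:
/ master.q - load each function script on startup
\l /q/funcs/funcsAAA.q
\l /q/funcs/funcsBBB.q
\l /q/funcs/funcsCCC.q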
My integration tests for my asp.net core application require a connection to a PostgreSQL database. In my deployment pipeline I only want to deploy if my integration tests pass.
How do I supply a working connection string inside the Microsoft build agent?
I looked under service connections and couldn't see anything related to a database.
If you are using a Microsoft-hosted agent, then your database needs to be accessible from the internet.
Otherwise, you need to run the tests on a self-hosted agent that can access your database.
I assume the default connection string is in appsettings.json. You could store the actual database connection string in a secret variable, then update the appsettings.json file with that variable's value through a task (e.g. Set Json Property) or programmatically (e.g. a PowerShell script) before running the web app and starting the tests during the build.
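As a rough sketch of the PowerShell route (the file path, the ConnectionStrings.Default property, and the DB_CONNECTION_STRING variable are assumptions; a secret pipeline variable has to be mapped into the step's environment explicitly):
# hypothetical example: patch the connection string before the tests run
$path = "$(Build.SourcesDirectory)/src/MyApp/appsettings.json"
$json = Get-Content $path -Raw | ConvertFrom-Json
$json.ConnectionStrings.Default = $env:DB_CONNECTION_STRING
$json | ConvertTo-Json -Depth 10 | Set-Content $path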
If you can use any PostgreSQL database, you can use a service container with a docker image that has a PostgreSQL database (e.g. postgres); see the sketch below.
For a classic pipeline, you could call the docker command to run the image.
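For a YAML pipeline, a sketch of the service-container approach might look like this (the image tag, password, and connection-string key are assumptions; ASP.NET Core reads ConnectionStrings__Default from the environment as ConnectionStrings:Default):
resources:
  containers:
  - container: postgres
    image: postgres:13
    env:
      POSTGRES_PASSWORD: example
    ports:
    - 5432:5432
jobs:
- job: integration_tests
  pool:
    vmImage: ubuntu-latest
  services:
    postgres: postgres
  steps:
  - script: dotnet test
    env:
      ConnectionStrings__Default: "Host=localhost;Port=5432;Username=postgres;Password=example"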
I would recommend using runsettings, which you can override in the test task. That way you will keep your connection string out of source control. Please check this link. And in terms of service connections, you don't need one; all you need is a proper connection string.
Since I don't know the details of how you connect to your DB, I can't give you more info. If you provide an example of how you already connect to the database, I can try to provide a better answer.
I need my docker containers to connect to different PostgreSQL servers, depending on the environment (test & production). What I want is to test my application locally against a local database instance and push the fixes afterwards. From what I read, PostgreSQL's default connection parameters can be set through environment variables, so I think writing two different environment-variable files for test/production and passing the desired one in with the --env-file option of docker run would do the trick.
Is this a suitable way to test & deploy a web application? If not, what would be a better solution?
Yes, in general this is the approach you should take when using Docker. Store your DB connection parameters (URL, username, password) in environment variables. There is no real need to use an environment file unless you have a ton of environment variables; you could also pass an arbitrary number of -e parameters to docker run instead. This is closer to how services like Amazon's ECS will expect you to pass parameters.
If you're going to write those to a file, make sure that the file is encrypted/encoded somehow - storing database passwords in a file in plaintext is not a great security practice.
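A minimal sketch of both approaches (the image name, host, and variable values are placeholders; PGHOST/PGUSER/PGPASSWORD are libpq's standard connection variables):
# test: point the container at a local database via individual variables
docker run -e PGHOST=testdb.local -e PGUSER=app -e PGPASSWORD=secret myapp:latest
# production: same image, different parameters from an env file
docker run --env-file prod.env myapp:latest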
I am trying to export the data of a table on AS400 to another machine through iSeries commands, but I am stuck in the middle of the process. I have a stored procedure in which I create the CSV file, but after completion I need to transfer this file to another machine (which is of course connected to the AS400).
In the stored procedure, I used the CPYTOIMPF command to export table data to CSV and I wrote the file on the AS400 file system. I don't know if there is an option to write the file directly to another machine.
CALL QSYS2.QCMDEXC(
'CPYTOIMPF FROMFILE(LIBRARY/TABLE) TOSTMF(''/QIBM/UserData/TestFolder/2.CSV'') STMFCODPAG(*PCASCII) RCDDLM(*CRLF)'
);
This step is completed and the file is written on that directory.
Now I need to transfer this file to a web server that is connected to the AS400, without manual intervention, after the above command is completed.
How can I do that?
You can use FTP on IBM i. Here is a modified example from the IBM Knowledge Center
This is a CL program
PGM
OVRDBF FILE(INPUT) TOFILE(MYLIB/QCLSRC) MBR(FTPCMDS)
OVRDBF FILE(OUTPUT) TOFILE(MYLIB/QCLSRC) MBR(OUT)
FTP RMTSYS(SYSxxx)
ENDPGM
This program overrides the input of the FTP client to a script in MYLIB/QCLSRC member FTPCMDS, and the output of the FTP client to MYLIB/QCLSRC member OUT. The first line of the script must contain the userid and password for the remote location.
Here is a sample script from the same Knowledge Center reference:
ITSO ITSO <== This is the user id and password
CD ITSOLIB1
SYSCMD CHGCURLIB ITSOLIB2
GET QCLSRC.BATCHFTP QCLSRC.BATCHFTP (REPLACE
QUIT
This should not be used outside your network, as the user id and password are sent in plain text. It can also be a security risk, as production user ids and passwords must be stored in plain text in the source file.
In addition, Scott Klement has a presentation on how to use ssh, scp, and sftp on IBM i. This is quite a long thing, so you might want to read about it here.
A short summary is that scp might be the easiest way to go, but you will need to:
Make sure that OpenSSH option of the OS is installed.
Make sure your Windows server is set up as an ssh server.
Set up a digital key to use for the transfer. Private key goes on client side, and public key goes on server side.
Use scp fromfile user@host:tofile in PASE to transfer the file.
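For example (the host and target path are placeholders), from a PASE shell:
# copy the generated CSV to the web server over ssh, authenticating with the key pair
scp /QIBM/UserData/TestFolder/2.CSV webuser@webserver.example.com:/var/www/data/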
When a PHP application makes a database connection it of course generally needs to pass a login and password. If I'm using a single, minimum-permission login for my application, then the PHP needs to know that login and password somewhere. What is the best way to secure that password? It seems like just writing it in the PHP code isn't a good idea.
Several people misread this as a question about how to store passwords in a database. That is wrong. It is about how to store the password that lets you get to the database.
The usual solution is to move the password out of source-code into a configuration file. Then leave administration and securing that configuration file up to your system administrators. That way developers do not need to know anything about the production passwords, and there is no record of the password in your source-control.
If you're hosting on someone else's server and don't have access outside your webroot, you can always put your password and/or database connection in a file and then lock the file using a .htaccess:
<files mypasswdfile>
order allow,deny
deny from all
</files>
The most secure way is to not have the information specified in your PHP code at all.
If you're using Apache, that means setting the connection details in your httpd.conf or virtual hosts file. If you do that, you can call mysqli_connect() with no parameters, which means PHP will never output your information.
This is how you specify these values in those files:
php_value mysqli.default_user myusername
php_value mysqli.default_pw mypassword
php_value mysqli.default_host server
Then you open your mysql connection like this:
<?php
$db = mysqli_connect();
Or like this:
<?php
$db = mysqli_connect(ini_get("mysqli.default_host"),
ini_get("mysqli.default_user"),
ini_get("mysqli.default_pw"));
Store them in a file outside web root.
For extremely secure systems we encrypt the database password in a configuration file (which itself is secured by the system administrator). On application/server startup the application then prompts the system administrator for the decryption key. The database password is then read from the config file, decrypted, and stored in memory for future use. Still not 100% secure since it is stored in memory decrypted, but you have to call it 'secure enough' at some point!
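A rough sketch of that startup flow (the file location, cipher, and key handling are assumptions, not the exact scheme described):
<?php
// the administrator types the decryption key once, on the console, at startup
echo "Decryption key: ";
$key = trim(fgets(STDIN));
// the config file holds base64(iv) and base64(ciphertext) prepared by the admin
$cfg = json_decode(file_get_contents('/etc/myapp/db.json'), true);
$dbPassword = openssl_decrypt(
    base64_decode($cfg['ciphertext']),
    'aes-256-cbc',
    $key,
    OPENSSL_RAW_DATA,
    base64_decode($cfg['iv'])
);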
This solution is general, in that it is useful for both open and closed source applications.
Create an OS user for your application. See http://en.wikipedia.org/wiki/Principle_of_least_privilege
Create a (non-session) OS environment variable for that user, with the password
Run the application as that user
Advantages:
You won't check your passwords into source control by accident, because you can't
You won't accidentally screw up file permissions. Well, you might, but it won't affect this.
Can only be read by root or that user. Root can read all your files and encryption keys anyways.
If you use encryption, how are you storing the key securely?
Works x-platform
Be sure to not pass the envvar to untrusted child processes
This method is suggested by Heroku, who are very successful.
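Reading the variable in PHP is then a one-liner (the variable name and DSN here are examples):
<?php
// DB_PASSWORD lives in the OS user's environment, never in the codebase
$db = new PDO('mysql:host=localhost;dbname=mydb', 'me', getenv('DB_PASSWORD'));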
If it is possible, create the database connection in the same file where the credentials are stored, and inline the credentials in the connect statement:
mysql_connect("localhost", "me", "mypass");
Otherwise it is best to unset the credentials after the connect statement, because credentials that are not in memory can't be read from memory ;)
include("/outside-webroot/db_settings.php");
mysql_connect("localhost", $db_user, $db_pass);
unset ($db_user, $db_pass);
If you are using PostgreSQL, then it looks in ~/.pgpass for passwords automatically. See the manual for more information.
Previously we stored DB user/pass in a configuration file, but have since hit paranoid mode -- adopting a policy of Defence in Depth.
If your application is compromised, the user will have read access to your configuration file and so there is potential for a cracker to read this information. Configuration files can also get caught up in version control, or copied around servers.
We have switched to storing user/pass in environment variables set in the Apache VirtualHost. This configuration is only readable by root -- hopefully your Apache user is not running as root.
The con with this is that the password is now in a global PHP variable.
To mitigate this risk we have the following precautions:
The password is encrypted. We extend the PDO class to include logic for decrypting the password (a sketch follows this list). If someone reads the code where we establish a connection, it won't be obvious that the connection is being established with an encrypted password rather than the password itself.
The encrypted password is moved from the global variables into a private variable. The application does this immediately, to reduce the window during which the value is available in the global space.
phpinfo() is disabled. PHPInfo is an easy target to get an overview of everything, including environment variables.
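A sketch of that PDO pattern (the class name, env-var format, cipher, and key storage are assumptions, not the poster's actual code):
<?php
class SecurePDO extends PDO
{
    public function __construct(string $dsn, string $user)
    {
        // encrypted password arrives as "base64(iv):base64(ciphertext)" via SetEnv
        $blob = getenv('DB_PASSWORD_ENC');
        putenv('DB_PASSWORD_ENC'); // drop it from the environment immediately
        parent::__construct($dsn, $user, self::decrypt($blob));
    }

    private static function decrypt(string $blob): string
    {
        [$iv, $ct] = array_map('base64_decode', explode(':', $blob, 2));
        // key file readable only by the application user
        $key = file_get_contents('/etc/myapp/db.key');
        return openssl_decrypt($ct, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    }
}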
Your choices are kind of limited since, as you say, you need the password to access the database. One general approach is to store the username and password in a separate configuration file rather than the main script, and be sure to store that outside the main web tree. That way, if there is a web configuration problem that leaves your php files being displayed as text rather than executed, you haven't exposed the password.
Other than that you are on the right lines with minimal access for the account being used. Add to that
Don't use the combination of username/password for anything else
Configure the database server to only accept connections from the web host for that user (localhost is even better if the DB is on the same machine); see the example after this list. That way, even if the credentials are exposed, they are of no use to anyone unless they have other access to the machine.
Obfuscate the password (even ROT13 will do). It won't put up much defense if someone does get access to the file, but at least it will prevent casual viewing of it.
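For example, in MySQL that host restriction looks like this (the user and database names are placeholders):
-- minimal-permission account that can only connect from the web host itself
CREATE USER 'webapp'@'localhost' IDENTIFIED BY 'mypass';
GRANT SELECT, INSERT, UPDATE ON mydb.* TO 'webapp'@'localhost';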
Peter
We have solved it in this way:
Use memcache on the web server, with an open connection from a separate password server.
Save the password (or even the whole password.php file, encrypted) to memcache, plus the decryption key.
The web site calls the memcache key holding the password file's passphrase and decrypts all the passwords in memory.
The password server sends a new encrypted password file every 5 minutes.
If you are using an encrypted password.php in your project, put an audit in place that checks whether this file was touched externally - or viewed. When that happens, you can automatically clean the memory and close the server to access.
Put the database password in a file, make it read-only to the user serving the files.
Unless you have some means of only allowing the php server process to access the database, this is pretty much all you can do.
If you're talking about the database password, as opposed to the password coming from a browser, the standard practice seems to be to put the database password in a PHP config file on the server.
You just need to be sure that the php file containing the password has appropriate permissions on it. I.e. it should be readable only by the web server and by your user account.
An additional trick is to use a separate PHP configuration file that looks like this:
<?php exit() ?>
[...]
Plain text data including password
This does not exempt you from setting access rules properly, but in case your web site is hacked, a "require" or an "include" will just exit the script at the first line, so it's even harder to get at the data.
Nevertheless, do not ever leave configuration files in a directory that can be accessed through the web. You should have a "web" folder containing your controller code, CSS, pictures, and JS. That's all. Anything else goes in offline folders.
Just putting it into a config file somewhere is the way it's usually done. Just make sure you:
disallow database access from any servers outside your network,
take care not to accidentally show the password to users (in an error message, or through PHP files accidentally being served as HTML, etcetera.)
Best way is to not store the password at all!
For instance, if you're on a Windows system, and connecting to SQL Server, you can use Integrated Authentication to connect to the database without a password, using the current process's identity.
If you do need to connect with a password, first encrypt it, using strong encryption (e.g. using AES-256, and then protect the encryption key, or using asymmetric encryption and have the OS protect the cert), and then store it in a configuration file (outside of the web directory) with strong ACLs.
Actually, the best practice is to store your database credentials in environment variables, because:
These credentials are dependent on the environment, which means you won't have the same credentials in dev and prod. Storing them in the same file for all environments is a mistake.
Credentials are not related to business logic, which means login and password have no place in your code.
You can set environment variables without creating any business-code class file, which means you will never make the mistake of adding the credentials file to a commit in Git.
Environment variables are superglobals: you can use them everywhere in your code without including any file.
How to use them?
Using the $_ENV array:
Setting: $_ENV['MYVAR'] = $myvar;
Getting: echo $_ENV['MYVAR'];
Using the PHP functions:
Setting with the putenv function: putenv("MYVAR=$myvar");
Getting with the getenv function: getenv('MYVAR');
You can also set them in vhost files and .htaccess, but this is not recommended, since it puts them in yet another file and doesn't really solve the problem.
You can easily drop a file such as envvars.php with all the environment variables inside, execute it (php envvars.php), and then delete it. It's a bit old school, but it still works: you don't keep any file with your credentials on the server, and there are no credentials in your code. Since it's a bit laborious, frameworks do it better.
Example with Symfony (OK, it's not only PHP)
Modern frameworks such as Symfony recommend using environment variables and storing them in an uncommitted .env file, or setting them directly on the command line, which means you can do either of:
With the CLI: symfony var:set FOO=bar --env-level
With .env or .env.local: FOO="bar"