Remote Informix 11.5 Command Line Client

Does a command line tool ship with Informix 11.5 similar to SQLCMD for SQL Server?
If yes, how do I connect to a remote server and perform regular SELECT/INSERT/UPDATE queries using it?

As Michal Niklas says, the standard tool provided with IBM Informix Dynamic Server (colloquially IDS or even just Informix) is DB-Access. However, it is distributed only with IDS itself, not with the Informix Client SDK (CSDK) or Informix Connect (I-Connect) products.
If you want to access IDS from a machine that does not have IDS installed, you need either CSDK or I-Connect on the machine, plus some other software - perhaps the original (pre-dating Microsoft's by a decade and more) version of SQLCMD. This is what I use - and have used in various versions over the last (cough, splutter, ouch) twenty-two years or so; I wrote it because I didn't like the command line behaviour of a program called isql (part of the product Informix SQL), which was the precursor to DB-Access. (Lots of history - not too important to you.)
Usage - SQLCMD has more options than you know what to do with. The basics are simple, though:
sqlcmd -d dbname@dbserver -e 'select * from table' -x -f file.sql
This connects to a database called 'dbname' at the database server known as 'dbserver', as specified in the sqlhosts file (normally $INFORMIXDIR/etc/sqlhosts). The '-e' option introduces an SQL expression - a select statement; the results are printed to standard output in a strict format (Informix UNLOAD format), one logical line per record. The '-x' option turns on trace mode; the '-f' option means read the named file for further commands. The '.sql' extension is not mandatory (beware: DB-Access requires the '.sql' extension and will add it for you). (Arguments not prefixed by either '-e' or '-f' are interpreted heuristically: if an argument contains spaces, it is SQL; if it does not, it is a filename.)
The '-H' option prints column headings (labels) before a result set; the '-T' option prints the column types (after the headings, before the results). The '-B' option runs in benchmark mode: it turns on trace, prints the statement and the time when the statement started, and times how long the statement took. (Knowing when the statement started is helpful if the SQL takes many minutes to run - as it can in benchmarking scenarios.)
There are controls over the output format (including CSV and even a variant of XML - though not XML that uses namespaces), the date format, and so on. There are 'built-in' commands to redirect input, output, and errors; most command line options can also be used in the interpreter, etc. SQLCMD also provides a history mechanism: it saves SQL statements, and you can view, edit, or rerun them. In conjunction with output redirection, you can save off a list of the statements executed, etc.
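For example, a hypothetical invocation pulling a few of those options together (the database, server, table, and column names here are invented):

$ sqlcmd -d stores@demo_server -H -T -e 'select fname, lname from customer'

That would print the column headings, then the column types, then the rows in UNLOAD format.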
The only gotcha with SQLCMD is that it is not currently ported to Windows. It did work on Windows once upon a time, about 6 or 7 years ago. Since then, Microsoft's compilers have gotten antsy about non-MS API functions, insisting that even if I ask for them by name (by requesting POSIX functionality), the functions must be prefixed by an underscore, and deprecating a bunch of functions that can be used safely if you pay attention to what you are doing (but which, regrettably, can be abused by those who are not paying attention - and there seem to be more inattentive than attentive coders around) - I mean functions like 'strcpy()', which can be used perfectly safely if you know the size of the source and destination strings before you call it. A Windows port is on my list of things to do - it just hasn't been done yet because it isn't my itch.
There is also another Open Source tool called SQSL that you can consider. It has some advantages over SQLCMD (conditional logic, etc) but I think SQLCMD has some advantages over SQSL.
You could also consider whether Perl + DBI + DBD::Informix + dbish would work for you.
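If the Perl route appeals, the connection itself is only a few lines; here is a minimal sketch (the database, server, table, and column names are placeholders):

use strict;
use warnings;
use DBI;

# DBD::Informix accepts the same dbname@dbserver notation used in sqlhosts
my $dbh = DBI->connect('dbi:Informix:dbname@dbserver', '', '',
                       { RaiseError => 1 });
my $rows = $dbh->selectall_arrayref('select fname, lname from customer');
print join('|', @$_), "\n" for @$rows;
$dbh->disconnect;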

Try DB-Access
...
DB-Access provides a user interface for entering, executing, and debugging Structured Query Language (SQL) statements and Stored Procedure Language (SPL) routines...

Related

How could I run a single line of code (not script) from command prompt?

Simple question here, I just can't seem to phrase it in a way Google can understand.
Say I wanted to execute a line of actual programming code (C++ or Java or Python... etc.), like SetCursorPos or printf, from the command prompt. I vaguely imagine I would have to invoke the compiler and pass the command to it as a parameter, from where it would then be converted into machine language and passed to... where exactly?
Okay so that was kind of two questions.
How to run actual code from the command line and
what exactly is happening when a fully compiled program, or converted line of code (presuming these are essentially binary containers at that point), is executed?
Question one takes priority, obviously. Unfortunately, I cannot find any documentation on it - just a bunch of stuff vaguely related to it.
How to run actual code from the command line
Without delving into the vast amounts of blurriness between them, there are two major categories of language implementations: interpreters and compilers.
With many interpreters (or implementations with implicit compilation, such as V8 JavaScript's JIT compiler, or pretty much anything with a REPL), running a single line from the command line should be fairly trivial. CPython (the standard implementation of Python) has the -c command option:
$ python -c 'print("Hello, world!")'
Hello, world!
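Perl, to take another example, provides the same facility through its -e option:

$ perl -e 'print "Hello, world!\n"'
Hello, world!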
Language implementations with explicit compilation steps tend to be decidedly less simple. In particular, the compiler would need to accept source either directly from the argument list or from standard input (via piping or redirection). On the output side, your compiler would have to support immediately executing that program, or writing it to standard output, so that an operating system feature (if one exists) can execute it from a pipe.
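Some compilers do accept source on standard input if you name the language explicitly - GCC, for instance - though you still end up with an executable on disk that has to be run as a second step:

$ echo 'int main(void){puts("hi");return 0;}' | gcc -include stdio.h -x c - -o /tmp/hello && /tmp/hello
hi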
To my knowledge, most explicit compilers are not designed with such usage in mind. In such cases, your best bet is to see if there is a REPL available for the language in question, preferably one as compatible with your compiler as possible, or to create (or find) a wrapper that makes it look like your language has a REPL. The wrapper would:
Accept input along the lines of CPython above.
Create a temporary source file behind the scenes with the code to be run and any necessary boilerplate.
Pass that file to the compiler.
Automatically run the resulting executable.
Delete the source file and executable. These may be cleaned up by the operating system later instead, if they're in a temp directory.
From the point of view of the user, this should look pretty similar to the CPython example, as they wouldn't have to interact with or see the compiler or temporary files.
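As a rough illustration, here is a minimal sketch of such a wrapper in Perl for C code (it assumes gcc is on your PATH; the boilerplate and file names are invented for the example):

use strict;
use warnings;
use File::Temp qw(tempdir);

my $code = shift @ARGV or die "usage: $0 'C statements'\n";
my $dir  = tempdir(CLEANUP => 1);          # temp dir removed when the script exits
my ($src, $bin) = ("$dir/snippet.c", "$dir/snippet");

open my $fh, '>', $src or die "cannot write $src: $!";
print $fh "#include <stdio.h>\nint main(void) { $code; return 0; }\n";
close $fh;

# compile, then immediately run the result
system('gcc', '-o', $bin, $src) == 0 or die "compile failed\n";
system($bin);

Invoked as, say, perl runline.pl 'printf("Hello, world!\n")', this looks to the user much like the python -c example above.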

avoiding exploit in perl variable extrapolation from file

I am optimizing a very time- and memory-consuming program by running it over a dataset and under multiple parameters. For each "run", I have a CSV file, "setup.csv", set up with "runNumber","Command" for each run. I then import this into a Perl script to read the command for the run number I would like, extrapolate the variables, then execute it on the system via the system command. Should I be worried about the potential for this to be exploited (I am worried right now)? If so, what can I do to protect our server? My plan now is to change the file permissions of "setup.csv" to read-only and ownership to root, then go in as root whenever I need to append another run to the list.
Thank you very much for your time.
Run your code in taint mode with -T. That will force you to carefully launder your data. Only pass through strings that are ones you are expecting. Do not launder with .*, but rather check against a list of good strings.
Ideally, there is a list of known acceptable values, and you validate against that.
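To make that concrete, a minimal laundering sketch under taint mode (the acceptable command names are invented for the example):

#!/usr/bin/perl -T
use strict;
use warnings;

my %allowed = map { $_ => 1 } qw(run_simulation run_benchmark);  # hypothetical whitelist

my $raw = shift @ARGV // die "no command given\n";
# capturing through a tight regex is what untaints the value - never use .*
my ($cmd) = $raw =~ /\A(\w+)\z/
    or die "malformed command\n";
die "unexpected command: $cmd\n" unless $allowed{$cmd};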
Either way, you want to avoid the shell by using the multi-argument form of system or by using IPC::System::Simple's systemx.
If you can't avoid the shell, you must properly convert the text you pass to the command into shell literals.
Even then, you have to be careful of values that start with -. Lots of tools accept -- to denote the end of options, allowing other values to be passed safely.
Finally, you might want to make sure the args don't contain the NUL character (\0).
systemx('tool', '--', @args);
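Pulled together, a minimal sketch (the tool name is illustrative; the checks mirror the advice above):

use strict;
use warnings;
use IPC::System::Simple qw(systemx);

my @args = @ARGV;
die "NUL byte in argument\n" if grep { /\0/ } @args;
systemx('tool', '--', @args);   # list form: no shell, '--' ends option parsing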
Note: Passing arbitrary strings is not possible in Windows. Extra validation is required.

Hitting ORA-01461 when inserting multibyte characters from perl into oracle

I have a Perl script that is inserting records from a text file into our database. Whenever a record has a multibyte character, like "RODR_Í_GUEZ", I receive the error ORA-01461, even though I'm nowhere near the 4000 characters at which I'd have to switch from VARCHAR2 to LONG.
setting:
$ENV{NLS_CHARACTERSET} = 'AL32UTF8';
before connecting doesn't seem to help.
Using a Java client (SQuirreL SQL) and manually writing the INSERT INTO statement inserts the record just fine, so I'm sure it's not how the database is configured.
Any thoughts?
You probably want to set the NLS_LANG environment variable. For Unix-ish systems, there is a script supplied in $ORACLE_HOME/server/bin called nls_lang.sh to output a reasonable value for your system, based on the LANG environment variable.
e.g. for my system (LANG=en_GB.UTF-8) the equivalent Oracle setting is
NLS_LANG=ENGLISH_UNITED KINGDOM.AL32UTF8
More info: http://forums.oracle.com/forums/thread.jspa?threadID=381531
Sergiusz's post there says practically all you need to know: I'll just add that the Perl DBD::Oracle driver is OCI-based, and the pure-Java JDBC driver isn't, hence they work differently in the same environment.
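From Perl, that amounts to something like this minimal sketch (the connection details and table are placeholders; the point is that NLS_LANG is set before DBD::Oracle connects):

use strict;
use warnings;
use DBI;

# LANGUAGE_TERRITORY.CHARSET - use the value nls_lang.sh reports for your locale
$ENV{NLS_LANG} = 'ENGLISH_UNITED KINGDOM.AL32UTF8';

my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'password',
                       { RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO people (surname) VALUES (?)');
$sth->execute('RODRÍGUEZ');   # multibyte value inserted with the right charset
$dbh->disconnect;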

How to discover command line options (if any) for an undocumented executable of unknown origin?

Take an undocumented executable of unknown origin. Trying /?, -h, --help from the command line yields nothing. Is it possible to discover if the executable supports any command line options by looking inside the executable? Possibly reverse engineering? What would be the best way of doing this?
I'm talking about a Windows executable, but would be interested to hear what different approaches would be needed with another OS.
In Linux, step one would be to run strings your_file, which dumps all the strings of printable characters in the file. Any constant strings will thus be shown, including any "usage" instructions.
The next step could be to run ltrace on the file. This shows all the library calls the program makes. If they include getopt (or similar), that is a sure sign the program is processing input parameters. In fact, you should be able to see exactly what options the program expects, since the option string is the third parameter to the getopt function.
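For instance (the binary name is hypothetical):

$ strings ./mystery | grep -i -E 'usage|option'
$ ltrace ./mystery --foo 2>&1 | grep getopt

ltrace writes its trace to standard error by default, hence the redirection before grep.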
For Windows, you can see this question about decompiling Windows executables. It should be relatively easy to at least discover the options (what they actually do is a different story).
If it's a .NET executable try using Reflector. This will convert the MSIL code into the equivalent C# code which may make it easier to understand. Unfortunately private and local variable names will be lost, as these are not stored in the MSIL but it should still be possible to follow what's going on.

How can I control an interactive Unix application programmatically through Perl?

I have inherited a 20-year-old interactive command-line unix application that is no longer supported by its vendor. We need to automate some tasks in this application.
The most troublesome of these is creating thousands of new records with slightly different parameters (e.g. different identifiers, different names). The records have to be created in sequence, one at a time, which would take many months (and therefore dollars) to do manually. In most cases, creating a record has a very predictable pattern of keying in commands, reading responses, keying in further commands, etc. However, some record creation operations will result in error conditions ('record with this identifier already exists') that require a different set of commands to exit gracefully.
I can see a few different ways to do this:
Named pipes. Write a Perl script that runs the target application with STDIN and STDOUT set to named pipes, then sends the target application the sequence of commands to create a record with the required parameters, and then instructs the target application to exit and shut down. We then run the script as many times as required with different parameters.
Application. Find another Unix tool that can be used to script interactive programs. The only ones I have been able to find, though, are expect, which does not seem to be maintained, and chat, which I recall from ages ago and which seems to do more-or-less what I want, but appears to be only for controlling modems.
One more potential complication: I think the target application was written for a VT100 terminal and it uses some sort of escape sequences to do things like provide highlighting.
My question is: what approach should I take? One of these, or something completely different? I quite like the idea of using named pipes and then having a Perl script that opens the FIFOs and reads and writes as required, as it provides a lot of flexibility, but from what I have read it seems there are a lot of potential problems if I go down this path.
Thanks in advance.
I'd definitely stick to Perl for the extra flexibility, as chaos suggested. Are you aware of the Expect Perl module? It's a lot nicer than the named pipe approach.
Note also with named pipes, you can't force the output coming back from your legacy application to be unbuffered, which could be annoying. I think Expect.pm uses pseudo-ttys to get around this problem, but I'm not sure. See the discussion in perlipc in the section "Bidirectional Communication with Another Process" for more details.
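To give a flavour of what that looks like, here is a minimal sketch with Expect.pm (the application path, prompts, and error text are all invented - yours will differ):

use strict;
use warnings;
use Expect;

my $exp = Expect->spawn('/path/to/legacy_app')
    or die "cannot spawn application: $!";

$exp->expect(10,                          # 10-second timeout per step
    [ qr/Identifier:/ => sub {
          my $e = shift;
          $e->send("A1234\n");
          exp_continue;                   # keep matching further prompts
      } ],
    [ qr/already exists/ => sub {
          my $e = shift;
          $e->send("cancel\n");           # take the error-recovery path
      } ],
);
$exp->soft_close();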
expect is a lot more solid than you're probably giving it credit for, but if I were you I'd still go with the Perl option, wanting to have a full and familiar programming language for managing the process and having confidence that whatever weird issues arise, there will be ways of addressing them.
Expect, either with the Tcl or Perl implementations, would be my first attempt. If you are seeing odd sequences in the output because it's doing odd terminal things, just filter those from the output before you do your matching.
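A crude but usually sufficient way to do that filtering is a regex that strips VT100/ANSI CSI sequences before matching:

$output =~ s/\e\[[0-9;]*[A-Za-z]//g;   # drop colour/cursor escape sequences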
With named pipes, you're going to end up reinventing Expect anyway.