I am currently working on PostgreSQL backup and restore functionality for my project. I read the article http://www.codeproject.com/Articles/37154/PostgreSQL-PostGis-Operations and followed that approach. It was working fine, but recently I changed the PostgreSQL authentication method to password in the pg_hba.conf file, so psql.exe, pg_dump.exe, and pg_restore.exe now prompt for a password whenever I execute them. To supply the password from my project I used "RedirectStandardInput", but it did not work and psql/pg_dump still prompt for the password. "RedirectStandardOutput" and the error redirection, however, work fine.
I went through the PostgreSQL source code and found that GetConsoleMode and SetConsoleMode are used to turn off echoing. I suspect (though I am not sure) that this is the reason I am unable to redirect the input.
This is the PostgreSQL source code that prompts for the password:
simple_prompt(const char *prompt, int maxlen, bool echo)
{
    int         length;
    char       *destination;
    FILE       *termin,
               *termout;

#ifdef HAVE_TERMIOS_H
    struct termios t_orig,
                   t;
#else
#ifdef WIN32
    HANDLE      t = NULL;
    LPDWORD     t_orig = NULL;
#endif
#endif

    destination = (char *) malloc(maxlen + 1);
    if (!destination)
        return NULL;

    /*
     * Do not try to collapse these into one "w+" mode file. Doesn't work on
     * some platforms (eg, HPUX 10.20).
     */
    termin = fopen(DEVTTY, "r");
    termout = fopen(DEVTTY, "w");
    if (!termin || !termout
#ifdef WIN32
    /* See DEVTTY comment for msys */
        || (getenv("OSTYPE") && strcmp(getenv("OSTYPE"), "msys") == 0)
#endif
        )
    {
        if (termin)
            fclose(termin);
        if (termout)
            fclose(termout);
        termin = stdin;
        termout = stderr;
    }

#ifdef HAVE_TERMIOS_H
    if (!echo)
    {
        tcgetattr(fileno(termin), &t);
        t_orig = t;
        t.c_lflag &= ~ECHO;
        tcsetattr(fileno(termin), TCSAFLUSH, &t);
    }
#else
#ifdef WIN32
    if (!echo)
    {
        /* get a new handle to turn echo off */
        t_orig = (LPDWORD) malloc(sizeof(DWORD));
        t = GetStdHandle(STD_INPUT_HANDLE);

        /* save the old configuration first */
        GetConsoleMode(t, t_orig);

        /* set to the new mode */
        SetConsoleMode(t, ENABLE_LINE_INPUT | ENABLE_PROCESSED_INPUT);
    }
#endif
#endif

    if (prompt)
    {
        fputs(_(prompt), termout);
        fflush(termout);
    }

    if (fgets(destination, maxlen + 1, termin) == NULL)
        destination[0] = '\0';

    length = strlen(destination);
    if (length > 0 && destination[length - 1] != '\n')
    {
        /* eat rest of the line */
        char        buf[128];
        int         buflen;

        do
        {
            if (fgets(buf, sizeof(buf), termin) == NULL)
                break;
            buflen = strlen(buf);
        } while (buflen > 0 && buf[buflen - 1] != '\n');
    }

    if (length > 0 && destination[length - 1] == '\n')
        /* remove trailing newline */
        destination[length - 1] = '\0';

#ifdef HAVE_TERMIOS_H
    if (!echo)
    {
        tcsetattr(fileno(termin), TCSAFLUSH, &t_orig);
        fputs("\n", termout);
        fflush(termout);
    }
#else
#ifdef WIN32
    if (!echo)
    {
        /* reset to the original console mode */
        SetConsoleMode(t, *t_orig);
        fputs("\n", termout);
        fflush(termout);
        free(t_orig);
    }
#endif
#endif

    if (termin != stdin)
    {
        fclose(termin);
        fclose(termout);
    }

    return destination;
}
Please help me here: how can I send the password to psql or pg_dump from C# code?
Since this is local to the application, the best thing to do is to set the PGPASSWORD environment variable for the child process; psql and pg_dump will then not ask for a password. In C# you can do this through ProcessStartInfo.EnvironmentVariables. A .pgpass file could be used if you wanted to avoid supplying a password at all. In general you do not want to set environment variables from the command line, since they may be visible to other users, but that is not a concern here because the variable only exists in the environment of the process you start.
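For illustration, a minimal C# sketch (the path, arguments, and password below are placeholders, not taken from the original project) might look like this:

using System.Diagnostics;

class PgDumpRunner
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            // Placeholder path and arguments; adjust for your installation and database.
            FileName = @"C:\Program Files\PostgreSQL\bin\pg_dump.exe",
            Arguments = "-h localhost -U postgres -F c -f backup.dump mydb",
            UseShellExecute = false,        // required in order to modify EnvironmentVariables
            RedirectStandardError = true
        };

        // The variable exists only in the child process environment, so pg_dump
        // reads the password from it and never prompts.
        psi.EnvironmentVariables["PGPASSWORD"] = "secret";

        using (var proc = Process.Start(psi))
        {
            string errors = proc.StandardError.ReadToEnd();
            proc.WaitForExit();
        }
    }
}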
I'm using boost::asio with ncurses for a command-line game. The game needs to draw on the screen at a fixed time interval, and other operations (e.g. networking or file operations) are also executed whenever necessary. All these things can be done with async_read()/async_write() or equivalent on boost::asio.
However, I also need to read keyboard input, which (I think) comes from stdin. The usual way to read input in ncurses is to call getch(), which can be configured either in blocking mode (wait until there is a character available for consumption) or in non-blocking mode (return a sentinel value if there are no characters available).
Using blocking mode would require running getch() on a separate thread, which doesn't play well with ncurses. Using non-blocking mode, however, would cause my application to burn CPU time spinning in a loop until the user presses a key. I've read this answer, which suggests adding stdin to the list of file descriptors in a select() call, which would block until one of the file descriptors has new data.
Since I'm using boost::asio, I can't use select() directly. I also can't call async_read, because that would consume the character, leaving getch() with nothing to read. Is there something in boost::asio like async_read that merely checks for the existence of input without consuming it?
I think you should be able to use the posix stream descriptor to watch for input on file descriptor 0:
ba::posix::stream_descriptor d(io, 0);

input_loop = [&](error_code ec) {
    if (!ec) {
        program.on_input();
        d.async_wait(ba::posix::descriptor::wait_type::wait_read, input_loop);
    }
};
There, Program::on_input() calls getch() until it returns ERR (the constructor sets timeout(0), so getch() never blocks):
struct Program {
    Program() {
        initscr();
        ESCDELAY = 0;
        timeout(0);
        cbreak();
        noecho();
        keypad(stdscr, TRUE); // receive special keys

        clock   = newwin(2, 40, 0, 0);
        monitor = newwin(10, 40, 2, 0);

        syncok(clock, true);     // automatic updating
        syncok(monitor, true);
        scrollok(monitor, true); // scroll the input monitor window
    }

    ~Program() {
        delwin(monitor);
        delwin(clock);
        endwin();
    }

    void on_clock() {
        wclear(clock);

        char buf[32];
        time_t t = time(NULL);
        if (auto tmp = localtime(&t)) {
            if (strftime(buf, sizeof(buf), "%T", tmp) == 0) {
                strncpy(buf, "[error formatting time]", sizeof(buf));
            }
        } else {
            strncpy(buf, "[error getting time]", sizeof(buf));
        }

        wprintw(clock, "Async: %s", buf);
        wrefresh(clock);
    }

    void on_input() {
        for (auto ch = getch(); ch != ERR; ch = getch()) {
            wprintw(monitor, "received key %d ('%c')\n", ch, ch);
        }
        wrefresh(monitor);
    }

    WINDOW *monitor = nullptr;
    WINDOW *clock   = nullptr;
};
With the following main program you'd run it for 10 seconds (because Program doesn't yet know how to exit):
int main() {
    Program program;

    namespace ba = boost::asio;
    using boost::system::error_code;
    using namespace std::literals;

    ba::io_service io;
    std::function<void(error_code)> input_loop, clock_loop;

    // Reading input when ready on stdin
    ba::posix::stream_descriptor d(io, 0);
    input_loop = [&](error_code ec) {
        if (!ec) {
            program.on_input();
            d.async_wait(ba::posix::descriptor::wait_type::wait_read, input_loop);
        }
    };

    // For fun, let's also update the time
    ba::high_resolution_timer tim(io);
    clock_loop = [&](error_code ec) {
        if (!ec) {
            program.on_clock();
            tim.expires_from_now(100ms);
            tim.async_wait(clock_loop);
        }
    };

    input_loop(error_code{});
    clock_loop(error_code{});

    io.run_for(10s);
}
This works:
Full Listing
#include <boost/asio.hpp>
#include <boost/asio/posix/descriptor.hpp>
#include <iostream>
#include "ncurses.h"
#define CTRL_R 18
#define CTRL_C 3
#define TAB 9
#define NEWLINE 10
#define RETURN 13
#define ESCAPE 27
#define BACKSPACE 127
#define UP 72
#define LEFT 75
#define RIGHT 77
#define DOWN 80
struct Program {
    Program() {
        initscr();
        ESCDELAY = 0;
        timeout(0);
        cbreak();
        noecho();
        keypad(stdscr, TRUE); // receive special keys

        clock   = newwin(2, 40, 0, 0);
        monitor = newwin(10, 40, 2, 0);

        syncok(clock, true);     // automatic updating
        syncok(monitor, true);
        scrollok(monitor, true); // scroll the input monitor window
    }

    ~Program() {
        delwin(monitor);
        delwin(clock);
        endwin();
    }

    void on_clock() {
        wclear(clock);

        char buf[32];
        time_t t = time(NULL);
        if (auto tmp = localtime(&t)) {
            if (strftime(buf, sizeof(buf), "%T", tmp) == 0) {
                strncpy(buf, "[error formatting time]", sizeof(buf));
            }
        } else {
            strncpy(buf, "[error getting time]", sizeof(buf));
        }

        wprintw(clock, "Async: %s", buf);
        wrefresh(clock);
    }

    void on_input() {
        for (auto ch = getch(); ch != ERR; ch = getch()) {
            wprintw(monitor, "received key %d ('%c')\n", ch, ch);
        }
        wrefresh(monitor);
    }

    WINDOW *monitor = nullptr;
    WINDOW *clock   = nullptr;
};
int main() {
    Program program;

    namespace ba = boost::asio;
    using boost::system::error_code;
    using namespace std::literals;

    ba::io_service io;
    std::function<void(error_code)> input_loop, clock_loop;

    // Reading input when ready on stdin
    ba::posix::stream_descriptor d(io, 0);
    input_loop = [&](error_code ec) {
        if (!ec) {
            program.on_input();
            d.async_wait(ba::posix::descriptor::wait_type::wait_read, input_loop);
        }
    };

    // For fun, let's also update the time
    ba::high_resolution_timer tim(io);
    clock_loop = [&](error_code ec) {
        if (!ec) {
            program.on_clock();
            tim.expires_from_now(100ms);
            tim.async_wait(clock_loop);
        }
    };

    input_loop(error_code{});
    clock_loop(error_code{});

    io.run_for(10s);
}
I use the Tcl shellicon command to extract icons. As mentioned on its wiki page, it has some problems with international characters, so I wrote some code to test this, but it doesn't work. Could anyone help me correct it?
/*
 * testdll.c
 * gcc compile: gcc testdll.c -ltclstub86 -ltkstub86 -IC:\Users\L\tcc\include -IC:\Users\L\tcl\include -LC:\Users\L\tcl\lib -LC:\Users\L\tcc\lib -DUSE_TCL_STUBS -DUSE_TK_STUBS -shared -o testdll.dll
 */
#include <windows.h>
#include <tcl.h>
#include <stdlib.h>

int TestdllCmd(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj * CONST objv[]) {
    char *path;
    Tcl_DString ds;

    if (objc > 2) {
        Tcl_SetResult(interp, "Usage: testdll ?path?", NULL);
        return TCL_ERROR;
    }
    if (objc == 2) {
        path = Tcl_GetString(objv[objc-1]);
        path = Tcl_TranslateFileName(interp, path, &ds);
        if (path != TCL_OK) {
            return TCL_ERROR;
        }
    }
    Tcl_AppendResult(interp, ds, NULL);
    return TCL_OK;
}

int DLLEXPORT Testdll_Init(Tcl_Interp *interp) {
    if (Tcl_InitStubs(interp, "8.5", 0) == NULL) {
        return TCL_ERROR;
    }
    Tcl_CreateObjCommand(interp, "testdll", TestdllCmd, NULL, NULL);
    Tcl_PkgProvide(interp, "testdll", "1.0");
    return TCL_OK;
}
I compile it with:
gcc testdll.c -ltclstub86 -ltkstub86 -IC:\Users\USERNAME\tcc\include -IC:\Users\USERNAME\tcl\include -LC:\Users\USERNAME\tcl\lib -LC:\Users\USERNAME\tcc\lib -DUSE_TCL_STUBS -DUSE_TK_STUBS -shared -o testdll.dll
and run it from the Windows cmd shell with "tclsh testdll.tcl", where testdll.tcl contains:
load testdll
puts [testdll C:/Users/L/桌面]
The output is (note that the first line of the output is an empty line):
while executing
"testdll 'C:/Users/L/桌面'"
invoked from within
"puts [testdll 'C:/Users/L/桌面']"
(file "testdll.tcl" line 2)
In fact, I want to print a line whose content is "C:/Users/L/桌面".
I wrote this DLL to work out how to replace Tcl_GetString and Tcl_TranslateFileName with Tcl_FSGetNormalizedPath and Tcl_FSGetNativePath; I hope that is clear.
Thank you!
Remove this:
if (path != TCL_OK) {
    return TCL_ERROR;
}
You are comparing a char * to an int: Tcl_TranslateFileName returns a string pointer (or NULL if an error occurred), not a Tcl status code, so test the result against NULL instead.
The manual page for Tcl_TranslateFileName says:
However, with the advent of the newer Tcl_FSGetNormalizedPath
and Tcl_FSGetNativePath, there is no longer any need to use this
procedure.
You should probably switch to the more modern API calls; a sketch of what that could look like is below.
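For illustration only, here is a minimal, untested sketch of how the command body could use Tcl_FSGetNormalizedPath instead of Tcl_GetString/Tcl_TranslateFileName. It simply sets the normalized path object as the command result, which avoids pushing raw char buffers through the result string:

int TestdllCmd(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj * CONST objv[]) {
    Tcl_Obj *normalized;

    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "path");
        return TCL_ERROR;
    }

    /* Returns a normalized, absolute path object (or NULL on error). */
    normalized = Tcl_FSGetNormalizedPath(interp, objv[1]);
    if (normalized == NULL) {
        return TCL_ERROR;
    }

    /* If the native (platform-encoded) representation is needed, e.g. for a
     * Win32 API call, Tcl_FSGetNativePath(normalized) would supply it. */
    Tcl_SetObjResult(interp, normalized);
    return TCL_OK;
}

With that, puts [testdll C:/Users/L/桌面] should print the normalized path with the non-ASCII characters intact.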
In OpenJDK 8, the JNI function Java_java_net_PlainSocketImpl_socketSetOption contains:
/*
 * SO_TIMEOUT is a no-op on Solaris/Linux
 */
if (cmd == java_net_SocketOptions_SO_TIMEOUT) {
    return;
}
file: openjdk7/jdk/src/solaris/native/java/net/PlainSocketImpl.c
Does this mean that on Linux, setOption for SO_TIMEOUT will be ignored?
I can't find the JNI code specific to Linux, but the Solaris code seems to be used for Linux as well.
No, it just means it isn't implemented as a socket option. Some platforms don't support it; on those platforms select() or friends are used instead.
The source inside the solaris folder is also used for Linux.
SO_TIMEOUT is ignored in Java_java_net_PlainSocketImpl_socketSetOption0, but the timeout is kept as a field when AbstractPlainSocketImpl.setOption is called:
case SO_TIMEOUT:
    if (val == null || (!(val instanceof Integer)))
        throw new SocketException("Bad parameter for SO_TIMEOUT");
    int tmp = ((Integer) val).intValue();
    if (tmp < 0)
        throw new IllegalArgumentException("timeout < 0");
    // Saved for later use
    timeout = tmp;
    break;
And the timeout is used when reading in SocketInputStream:
public int read(byte b[], int off, int length) throws IOException {
    return read(b, off, length, impl.getTimeout());
}
I am trying to erase one page in flash on an STM32F103RB like so:
FLASH_Unlock();
FLASH_ClearFlag(FLASH_FLAG_BSY | FLASH_FLAG_EOP | FLASH_FLAG_PGERR | FLASH_FLAG_WRPRTERR | FLASH_FLAG_OPTERR);
FLASHStatus = FLASH_ErasePage(Page);
However, FLASH_ErasePage fails, producing FLASH_ERROR_WRP.
Manually enabling/disabling write protection in the stm32-linker tool doesn't fix the problem.
Basically, FLASH_ErasePage fails with a WRP error, without attempting anything, if a previous WRP error is still set in the status register.
As for your FLASH_ClearFlag call, at least FLASH_FLAG_BSY will cause assert_param(IS_FLASH_CLEAR_FLAG(FLASH_FLAG)); to fail (though I'm not really sure what happens in that case):
#define IS_FLASH_CLEAR_FLAG(FLAG) ((((FLAG) & (uint32_t)0xFFFFC0FD) == 0x00000000) && ((FLAG) != 0x00000000))
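For illustration (an untested sketch assuming the STM32F10x Standard Peripheral Library, reusing the variables from the question), clearing only the clearable error/end-of-operation flags before the erase would look like this:

FLASH_Unlock();

/* Clear stale error flags from a previous operation; FLASH_FLAG_BSY is a
 * read-only status bit and is not a valid argument to FLASH_ClearFlag(). */
FLASH_ClearFlag(FLASH_FLAG_EOP | FLASH_FLAG_PGERR | FLASH_FLAG_WRPRTERR);

FLASHStatus = FLASH_ErasePage(Page);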
What is your page address? Which address are you trying to erase?
For instance, this example has been tested on an STM32F100C8, not only for erasing but also for writing data correctly:
http://www.ozturkibrahim.com/TR/eeprom-emulation-on-stm32/
If you are using the HAL driver, your code might look like this (cut-and-paste from a real project):
static HAL_StatusTypeDef Erase_Main_Program ()
{
    FLASH_EraseInitTypeDef ins;
    uint32_t sectorerror;

    ins.TypeErase = FLASH_TYPEERASE_SECTORS;
    ins.Banks = FLASH_BANK_1;   /* Do not care, used for mass-erase */
#warning We currently erase from sector 2 (only keep 64KB of flash for boot)
    ins.Sector = FLASH_SECTOR_4;
    ins.NbSectors = 4;
    ins.VoltageRange = FLASH_VOLTAGE_RANGE_3; /* voltage-range defines how big blocks can be erased at the same time */

    return HAL_FLASHEx_Erase (&ins, &sectorerror);
}
The internal function in the HAL driver that actually does the work:
void FLASH_Erase_Sector(uint32_t Sector, uint8_t VoltageRange)
{
    uint32_t tmp_psize = 0U;

    /* Check the parameters */
    assert_param(IS_FLASH_SECTOR(Sector));
    assert_param(IS_VOLTAGERANGE(VoltageRange));

    if(VoltageRange == FLASH_VOLTAGE_RANGE_1)
    {
        tmp_psize = FLASH_PSIZE_BYTE;
    }
    else if(VoltageRange == FLASH_VOLTAGE_RANGE_2)
    {
        tmp_psize = FLASH_PSIZE_HALF_WORD;
    }
    else if(VoltageRange == FLASH_VOLTAGE_RANGE_3)
    {
        tmp_psize = FLASH_PSIZE_WORD;
    }
    else
    {
        tmp_psize = FLASH_PSIZE_DOUBLE_WORD;
    }

    /* If the previous operation is completed, proceed to erase the sector */
    CLEAR_BIT(FLASH->CR, FLASH_CR_PSIZE);
    FLASH->CR |= tmp_psize;
    CLEAR_BIT(FLASH->CR, FLASH_CR_SNB);
    FLASH->CR |= FLASH_CR_SER | (Sector << POSITION_VAL(FLASH_CR_SNB));
    FLASH->CR |= FLASH_CR_STRT;
}
A second thing to check: are interrupts enabled, and is there any hardware access to the flash between the unlock call and the erase call?
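For example, a sketch only (assuming CMSIS intrinsics are available; 'ins' and 'sectorerror' are the variables from Erase_Main_Program above, and 'status' is illustrative) of keeping the unlock/erase sequence free of interrupt-driven flash access:

__disable_irq();                                 /* no interrupt handler may touch the flash controller here */
HAL_FLASH_Unlock();
HAL_StatusTypeDef status = HAL_FLASHEx_Erase(&ins, &sectorerror);
HAL_FLASH_Lock();
__enable_irq();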
I hope this helps
EDIT: I have created a ticket for this, which has details on an alternative to this way of doing things.
I have updated the code in an attempt to use MY_CXT's callback, since gcxt was not being stored across threads. However, this segfaults at ENTER.
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#ifndef aTHX_
#define aTHX_
#endif
#ifdef USE_THREADS
#define HAVE_TLS_CONTEXT
#endif
/* For windows */
#ifndef SDL_PERL_DEFINES_H
#define SDL_PERL_DEFINES_H
#ifdef HAVE_TLS_CONTEXT
PerlInterpreter *parent_perl = NULL;
extern PerlInterpreter *parent_perl;
#define GET_TLS_CONTEXT parent_perl = PERL_GET_CONTEXT;
#define ENTER_TLS_CONTEXT \
PerlInterpreter *current_perl = PERL_GET_CONTEXT; \
PERL_SET_CONTEXT(parent_perl); { \
PerlInterpreter *my_perl = parent_perl;
#define LEAVE_TLS_CONTEXT \
} PERL_SET_CONTEXT(current_perl);
#else
#define GET_TLS_CONTEXT /* TLS context not enabled */
#define ENTER_TLS_CONTEXT /* TLS context not enabled */
#define LEAVE_TLS_CONTEXT /* TLS context not enabled */
#endif
#endif
#include <SDL.h>
#define MY_CXT_KEY "SDL::Time::_guts" XS_VERSION
typedef struct {
void* data;
SV* callback;
Uint32 retval;
} my_cxt_t;
static my_cxt_t gcxt;
START_MY_CXT
static Uint32 add_timer_cb ( Uint32 interval, void* param )
{
ENTER_TLS_CONTEXT
dMY_CXT;
dSP;
int back;
ENTER; //SEGFAULTS RIGHT HERE!
SAVETMPS;
PUSHMARK(SP);
XPUSHs(sv_2mortal(newSViv(interval)));
PUTBACK;
if (0 != (back = call_sv(MY_CXT.callback,G_SCALAR))) {
SPAGAIN;
if (back != 1 ) Perl_croak (aTHX_ "Timer Callback failed!");
MY_CXT.retval = POPi;
} else {
Perl_croak(aTHX_ "Timer Callback failed!");
}
FREETMPS;
LEAVE;
LEAVE_TLS_CONTEXT
dMY_CXT;
return MY_CXT.retval;
}
MODULE = SDL::Time PACKAGE = SDL::Time PREFIX = time_
BOOT:
{
MY_CXT_INIT;
}
SDL_TimerID
time_add_timer ( interval, cmd )
Uint32 interval
void *cmd
PREINIT:
dMY_CXT;
CODE:
MY_CXT.callback=cmd;
gcxt = MY_CXT;
RETVAL = SDL_AddTimer(interval,add_timer_cb,(void *)cmd);
OUTPUT:
RETVAL
void
CLONE(...)
CODE:
MY_CXT_CLONE;
This segfaults as soon as I go into ENTER for the callback.
use SDL;
use SDL::Time;
SDL::init(SDL_INIT_TIMER);
my $time = 0;
SDL::Timer::add_timer(100, sub { $time++; return $_[0]} );
sleep(10);
print "Never Prints";
The output is:
$
It should be:
$ Never Prints
Quick comments:
Do not use Perl structs (SV, AV, HV, ...) outside of the context of a Perl interpreter object, i.e. do not use them as C-level static data. It will blow up in a threading context. Trust me, I've been there.
Check out the "Safely Storing Static Data in XS" section in the perlxs manpage.
Some of that stuff you're doing looks rather non-public from the point of view of the perlapi. I'm not quite certain, though.
$time needs to be a shared variable - otherwise perl works with separate copies of the variable.
My preferred way of handling this is storing the data in the PL_modglobal hash; it's automatically tied to the current interpreter. A rough sketch follows below.
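As an illustration only (a minimal, untested sketch; the key name and helper functions are made up for this example), storing the callback in PL_modglobal rather than in a C-level static my_cxt_t could look like this:

/* Hypothetical helpers: keep the timer callback in PL_modglobal, which is a
 * per-interpreter HV, instead of in C-level static data. */
#define TIMER_CB_KEY "SDL::Time::timer_cb"

static void save_timer_callback(pTHX_ SV *cb)
{
    /* newSVsv() makes a copy owned by the current interpreter */
    (void) hv_store(PL_modglobal, TIMER_CB_KEY, sizeof(TIMER_CB_KEY) - 1,
                    newSVsv(cb), 0);
}

static SV *fetch_timer_callback(pTHX)
{
    SV **svp = hv_fetch(PL_modglobal, TIMER_CB_KEY, sizeof(TIMER_CB_KEY) - 1, 0);
    return svp ? *svp : NULL;
}

The timer callback would then fetch the SV from the interpreter it is running in instead of reaching into gcxt.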
We have found a solution to this using Perl interpreter threads and threads::shared. Please see Time.xs.
Also, here is an example of a script using this code: TestTimer.pl