I found this when reading the warp source code.
-- | Run an 'Application' with the given 'Settings'.
runSettings :: Settings -> Application -> IO ()
runSettings set app = withSocketsDo $
    bracket
        (bindPortTCP (settingsPort set) (settingsHost set))
        sClose
        (\socket -> do
            setSocketCloseOnExec socket
            runSettingsSocket set socket app)
-- Copied from: https://github.com/mzero/plush/blob/master/src/Plush/Server/Warp.hs
setSocketCloseOnExec :: Socket -> IO ()
#if WINDOWS
setSocketCloseOnExec _ = return ()
#else
setSocketCloseOnExec socket =
    setFdOption (fromIntegral $ fdSocket socket) CloseOnExec True
#endif
Sockets used by the web server need to be flagged so that they are not leaked into processes fork/exec'd by the shell.
However, acceptFork in the Network.Simple.TCP module does not set the CloseOnExec option. Shouldn't this option be the default?
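As a workaround, the same trick warp uses can be applied by hand to each connection that acceptFork hands you. A minimal sketch, assuming the classic network API where fdSocket :: Socket -> CInt (newer versions expose withFdSocket instead); the port "8000" is arbitrary:

```haskell
import Control.Monad (forever)
import Network.Simple.TCP (HostPreference (HostAny), acceptFork, listen)
import Network.Socket (fdSocket)
import System.Posix.IO (FdOption (CloseOnExec), setFdOption)

main :: IO ()
main =
    listen HostAny "8000" $ \(lsock, _) ->
        forever $ acceptFork lsock $ \(csock, _) -> do
            -- Mark the accepted socket close-on-exec so it is not
            -- leaked into processes fork/exec'd later.
            setFdOption (fromIntegral (fdSocket csock)) CloseOnExec True
            -- ... handle the connection as usual ...
            return ()
```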
I was tasked with implementing TLS support in one of our legacy Perl scripts. It should be fairly simple, but I failed (I am not able to connect to the queue manager through an encrypted channel). Could anyone tell me what I am missing?
My code in TestConnect.pl:
#!/usr/bin/perl
use strict;
use MQSeries;
use MQSeries::QueueManager;
my $qmgr = MQSeries::QueueManager->new(
    QueueManager => $ENV{MQMANAGER},
    AutoConnect  => 0,
    ClientConn   => {
        ChannelName    => $ENV{MQCHANNEL},
        TransportType  => 'TCP',
        ConnectionName => $ENV{MQHOSTNAME} . "(" . $ENV{MQPORT} . ")",
        SSLCipherSpec  => $ENV{MQ_CIPH},
        Version        => MQCD_VERSION_7
    },
    SSLConfig => {
        KeyRepository    => $ENV{MQKEYR},
        CertificateLabel => $ENV{MQ_C_LABEL},
        Version          => MQSCO_VERSION_5
    }
);
if (!$qmgr->Connect()) {
    print STDERR "Unable to connect to queue manager\nCompCode => " . $qmgr->CompCode() . "\nReason => " . $qmgr->Reason() . " (" . MQReasonToText($qmgr->Reason()) . ")\n";
}
else {
    print STDERR "\n\nALL OK!!\n\n";
}
Similar code in C++:
#pragma GCC diagnostic ignored "-Wwrite-strings"
#include <iostream>
#include <cstring>
#include <cmqc.h>
#include <cmqxc.h>
#include <cmqstrc.h>
#include <cstdlib>
#include <cstdio>
int main(int argc, char** argv) {
    MQHCONN hconn;
    MQLONG lCompCode = (MQLONG)0;
    MQLONG lReason = (MQLONG)0;
    MQCNO connectOpts = {MQCNO_DEFAULT};
    MQCD clientConn = {MQCD_CLIENT_CONN_DEFAULT};
    MQSCO sslConfig = {MQSCO_DEFAULT};
    connectOpts.Version = MQCNO_VERSION_5;
    strncpy(connectOpts.StrucId, MQCNO_STRUC_ID, 4);
    strcpy((char*)connectOpts.ConnTag, (const char*)MQCT_NONE);
    connectOpts.ClientConnPtr = &clientConn;
    connectOpts.SSLConfigPtr = &sslConfig;
    strcpy(clientConn.ChannelName, std::getenv("MQCHANNEL"));
    clientConn.TransportType = MQXPT_TCP;
    sprintf(clientConn.ConnectionName, "%s(%s)", std::getenv("MQHOSTNAME"), std::getenv("MQPORT"));
    strcpy(clientConn.SSLCipherSpec, std::getenv("MQ_CIPH"));
    clientConn.Version = MQCD_VERSION_7;
    strcpy(sslConfig.KeyRepository, std::getenv("MQKEYR"));
    strcpy(sslConfig.CertificateLabel, std::getenv("MQ_C_LABEL"));
    sslConfig.Version = MQSCO_VERSION_5;
    MQCONNX(std::getenv("MQMANAGER"), &connectOpts, &hconn, &lCompCode, &lReason);
    std::cout << "connection result " << MQCC_STR(lCompCode) << " with reason " << MQRC_STR(lReason) << "\n";
}
Please disregard the use of strcpy instead of strncpy, the fact that the connection is never closed, and other minor issues - I wanted the code to be as simple as possible (for the sake of testing).
All the environment variables above are set by me to proper values during execution, so that I can test the same connection details through Perl and through C++. I'm deliberately using non-standard names (e.g. avoiding $MQSSLKEYR) so as not to pollute my environment with MQM variables, thus eliminating any possible side effects.
There are two tests. In the first, $MQCHANNEL points to a non-encrypted channel and $MQ_CIPH is empty: this works from both Perl and C++.
In the second, I set $MQCHANNEL to point to a TLS channel with $MQ_CIPH set accordingly: this works like a charm from C++, but in Perl it shows:
MQCONN failed (Reason = 2393) (SSL initialization error.) at ./TestConnect.pl line 25.
Unable to connect to queue manager
CompCode => 2
Reason => 2393 (SSL initialization error.)
Unfortunately, the documentation is really vague, and the error logs on the MQ server say that there might be something wrong with the keys or certificate. But I know there's nothing wrong with these, because I tested the C++ solution on multiple queue managers with TLS channels. The Perl solution does not work in the exact same circumstances.
There are additional settings in connectOpts (the MQCNO struct) which I wasn't able to replicate in Perl (so I assume they are not as important there). Also, I hope that the positional nature of the MQCD and MQSCO structs has no impact on Perl's handling of the hashes.
I don't see any typo in my connection options (and if there was any, Params::Validate takes care of the validation in QueueManager).
If it's of any use, the configuration of the channel is:
CHANNEL(CHANNEL_TLS) CHLTYPE(SVRCONN)
ALTDATE(2020-12-07) ALTTIME(09.30.54)
CERTLABL( ) COMPHDR(NONE)
COMPMSG(NONE)
DESCR()
DISCINT(0) HBINT(300)
KAINT(AUTO) MAXINST(300)
MAXINSTC(999999999) MAXMSGL(16777216)
MCAUSER(mqm) MONCHL(QMGR)
RCVDATA( ) RCVEXIT( )
SCYDATA( ) SCYEXIT( )
SENDDATA( ) SENDEXIT( )
SHARECNV(40) SSLCAUTH(REQUIRED)
SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256)
SSLPEER( ) TRPTYPE(TCP)
The non-TLS channel simply has SSLCIPH empty.
PS: This is the response in /var/mqm/errors/AMQERR01.LOG:
01/19/2021 08:33:51 AM - Process(24037.1) User(mqm) Program(TestConnect.pl)
Host(localhost.localdomain) Installation(Installation1)
VRMF(9.1.0.1)
Time(2021-01-19T08:33:51.725Z)
RemoteHost(10.132.4.9(1414))
CommentInsert1(CHANNEL_TLS)
CommentInsert2(host1 (10.132.4.9)(1414))
AMQ9642E: No SSL or TLS certificate for channel 'CHANNEL_TLS'.
EXPLANATION:
The channel 'CHANNEL_TLS' did not supply a certificate to use during SSL or
TLS handshaking, but a certificate is required by the remote queue manager.
The remote host is 'host1 (10.132.4.9)(1414)'.
The channel did not start.
ACTION:
Ensure that the key repository of the local queue manager or MQ client contains
a certificate which is associated with the queue manager or client. If you have
configured a certificate label, check that the certificate exists.
Alternatively, if appropriate, change the remote channel definition so that its
SSLCAUTH attribute is set to OPTIONAL and it has no SSLPEER value set.
----- cmqxrfpt.c : 690 --------------------------------------------------------
I'm trying to write a very basic webserver in Haskell. This is my code:
{-# LANGUAGE OverloadedStrings #-}
import Network (withSocketsDo, listenOn, PortID(..))
import Network.Socket (Socket, accept, close, setSocketOption, SocketOption(..))
import Network.Socket.ByteString (send, sendAll, recv)
import Control.Concurrent.Async (async)
import Control.Monad (forever)
import Data.ByteString.Char8 (unpack)
import Request
main = withSocketsDo $ do
    sock <- listenOn $ PortNumber 3000
    putStrLn "Listening on port 3000..."
    forever $ do
        (conn, _) <- accept sock
        async $ handleAccept conn

handleAccept :: Socket -> IO ()
handleAccept sock = do
    putStrLn "Connected!"
    rawReq <- recv sock 4096
    let req = parseRawRequest $ unpack rawReq -- returns Maybe Request
    print req
    handleRequest sock req

handleRequest :: Socket -> Maybe Request -> IO ()
handleRequest sock Nothing =
    putStrLn "Closing..."
handleRequest sock req =
    sendAll sock "In handleRequest!" -- Doesn't appear until server is killed.
This is what I expected to happen:
Start server.
"Listening on port 3000..." is printed on server-side.
Do curl localhost:3000
"Connected!" is printed server-side.
The request is printed server-side.
"In handleRequest!" is printed.
What actually happens:
Start server.
"Listening on port 3000..." is printed on server-side.
Do curl localhost:3000
"Connected!" is printed server-side.
The request is printed server-side.
I wait patiently
I kill the server with CTRL+C
"In handleRequest!" prints client-side.
I suspect this has something to do with possible laziness in recv, although I use the value immediately afterwards (I parse the raw request into a Request type), so in theory it should be evaluated.
If I put sendAll sock "Yadda yadda" at the end of handleAccept, everything works fine. It's when I move this behaviour into a new function, handleRequest, that things go wonky.
Any thoughts? I'm new-ish to Haskell, so I'd appreciate any comments on the issue, or on my code generally.
Cheers.
EDIT:
This is super weird! I "fixed" it, but I have no idea why this occurs.
This is the line that only appeared after I killed the server:
handleRequest sock req = do
    sendAll sock "In handleRequest!" -- Doesn't appear until server is killed.
If I intentionally close the socket after sending, it works:
handleRequest sock req = do
    sendAll sock "In handleRequest!" -- Now appears without killing the server
    close sock
So it sends when the connection is closed. This is consistent with previous behaviour, since the connection automatically closes when the server is killed.
Now for the confusing bit. If I replace it with:
handleRequest sock req = do
    sendAll sock "In handleRequest!\n" -- Works perfect
This works without closing the connection! It does what I expected, just by adding a newline. Why does this occur?
What on earth? Is it a printing problem with my terminal, not the code? (OSX iTerm2)
EDIT 2:
Was asked to provide the code for my Request module:
import Data.List (isInfixOf)
import Data.List.Split (splitOn)
data RequestType = GET | PUT
    deriving Show

data Request = Request
    { reqType :: RequestType
    , path    :: String
    , options :: [(String, String)]
    } deriving Show

-- Turn a raw HTTP request into a request object.
parseRawRequest :: String -> Maybe Request
parseRawRequest rawReq =
    Request <$> parseRawRequestType rawReq
            <*> parseRawRequestPath rawReq
            <*> parseRawRequestOps rawReq

-- Turn an (entire) raw HTTP request into just the request type.
parseRawRequestType :: String -> Maybe RequestType
parseRawRequestType rawReq =
    case typ of
        "GET" -> Just GET
        "PUT" -> Just PUT
        _     -> Nothing
  where
    typ = (head . words . head . lines) rawReq

-- Turn an (entire) raw HTTP request into just the path.
parseRawRequestPath :: String -> Maybe String
parseRawRequestPath = Just . (!! 1) . words . head . lines

-- Turn an (entire) raw HTTP request into just a lookup table of their options.
parseRawRequestOps :: String -> Maybe [(String, String)]
parseRawRequestOps rawReq = Just [("One", "Two")] -- Test impl
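For a quick sanity check, the parser can be exercised in GHCi on a hand-written request (hypothetical session; the record output comes from the derived Show instance, and the options field is the hard-coded test value):

```
ghci> parseRawRequest "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
Just (Request {reqType = GET, path = "/index.html", options = [("One","Two")]})
```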
I have one answer and one suggestion.
The suggestion is to turn off the Nagle algorithm after accept:
setSocketOption conn NoDelay 1
The answer is that your sendAll is sending the data, but curl is not printing it. You can confirm this with netcat, for example. I commented out your Nothing case so that no matter what I typed in netcat I was sure to get the "In handleRequest!" message back:
server:
% ghc so.hs && ./so
Listening on port 3000...
Connected!
Nothing
client:
% nc localhost 3000
test ; My input, with a newline
In handleRequest! ; Printed out, no newline
Alternatively, you can use curl's -N option to disable buffering.
% curl -N localhost:3000
In handleRequest!
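A more robust fix on the server side is to answer with a complete HTTP response (status line, Content-Length, and a blank line) and then close the socket, so that clients like curl know the body is finished regardless of their buffering. A minimal sketch of a replacement handleRequest, assuming pack is also imported from Data.ByteString.Char8 alongside the imports in the question:

```haskell
handleRequest :: Socket -> Maybe Request -> IO ()
handleRequest sock Nothing = do
    putStrLn "Closing..."
    close sock
handleRequest sock (Just _) = do
    let body = "In handleRequest!\n"
    -- A complete response: status line, headers, blank line, body.
    sendAll sock $ pack $
        "HTTP/1.1 200 OK\r\n" ++
        "Content-Length: " ++ show (length body) ++ "\r\n" ++
        "Connection: close\r\n\r\n" ++
        body
    close sock
```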
While debugging with Wireshark, I see the following error when sending UDP packets on localhost:
Destination Unreachable (Port Unreachable)
Checksum: 0x0000 (Illegal)
I first construct my server on a port between 10000 and 15000 using
startServer :: Port -> IO Server
startServer port = withSocketsDo $ do
    -- Look up the server address and port information.
    addrs <- getAddrInfo (Just $ defaultHints { addrFlags = [AI_PASSIVE] }) Nothing (Just port)
    let serverAddress = head addrs
    -- Bind to the socket.
    sock <- socket (addrFamily serverAddress) Datagram defaultProtocol
    bindSocket sock (addrAddress serverAddress)
    -- Create the server and run the client send and receive threads.
    clients <- newMVar createEmptyClients
    let server = Server sock port clients
    _ <- forkIO $ forever $ receiveClientJoin server
    return server
I am listening for new clients connecting via UDP using
-- | Connect a client to the server.
receiveClientJoin :: Server -> IO ()
receiveClientJoin server = do
    print "Receiving"
    (msg, _, clSockAddr) <- recvFrom (sSocket server) 4096
    print $ "Server received client join message: " ++ msg
And I am connecting to the server with clients using
connectToServer port = do
    -- Get the server's address and port information.
    addrInfo <- getAddrInfo Nothing (Just "localhost") (Just port)
    let serverAddr = head addrInfo
    sock <- socket (addrFamily serverAddr) Datagram defaultProtocol
    sendTo sock "Hello from this client!" (addrAddress serverAddr)
Why are my clients' packets not finding the server?
The problem is that you are listening on an IPv4 address and trying to connect to an IPv6 address. This is a fairly common problem. For example, I ran across this issue when working with commsec.
Consider the fragments where you discover your AddrInfo:
import Network.Socket
main :: IO ()
main = do
    let port = "2474"
    addrs <- getAddrInfo (Just $ defaultHints { addrFlags = [AI_PASSIVE] }) Nothing (Just port)
    let serverAddress = head addrs
    print serverAddress
    addrInfo <- getAddrInfo Nothing (Just "localhost") (Just port)
    let serverAddr = head addrInfo
    print serverAddr
Now the output will vary by machine, but on one of my CentOS systems with both IPv4 and IPv6 addresses the output clearly shows the second (connect) address is IPv6 while the first (listen) address is IPv4:
AddrInfo {addrFlags = [AI_PASSIVE], addrFamily = AF_INET, addrSocketType = Stream, addrProtocol = 6, addrAddress = 0.0.0.0:2474, addrCanonName = Nothing}
AddrInfo {addrFlags = [AI_ADDRCONFIG,AI_V4MAPPED], addrFamily = AF_INET6, addrSocketType = Stream, addrProtocol = 6, addrAddress = [::1]:2474, addrCanonName = Nothing}
One solution is to force a particular version of IP via a hint or an address (e.g. an IPv4 address, as in my comment). The hint solution is probably more desirable:
-- For servers:
addrs <- getAddrInfo (Just defaultHints { addrFamily = AF_INET6
                                        , addrFlags = [AI_PASSIVE] })
                     Nothing (Just port)

-- For clients:
addrInfo <- getAddrInfo (Just defaultHints { addrFamily = AF_INET6 })
                        (Just "localhost") (Just port)
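Alternatively, instead of forcing one family through hints, you can pick a getAddrInfo result whose family matches what the other side uses. A minimal sketch (assuming you want IPv4 on both ends; the port is arbitrary):

```haskell
import Network.Socket

main :: IO ()
main = do
    let port = "2474"
    addrs <- getAddrInfo Nothing (Just "localhost") (Just port)
    -- Keep only IPv4 results so client and server agree on the family.
    case filter ((== AF_INET) . addrFamily) addrs of
        []         -> putStrLn "no IPv4 address found for localhost"
        (addr : _) -> print addr
```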
I’m trying to make a simple web application using Snap. I want to have a global MongoDB connection which I can use across handlers.
I open this connection in appInit, which works.
data App = App
    { _mongoDB :: Pipe -- connection
    }
appInit :: SnapletInit App App
appInit = makeSnaplet "site" "My Site" Nothing $ do
    db <- liftIO $ do
        pipe <- runIOE $ connect (host "127.0.0.1")
        return pipe
    return $ App db
However, I have no idea how to access this connection (_mongoDB) from a handler. I tried several things, including the following, but I get type errors every time.
watchHandler :: Handler App App ()
watchHandler = do
    res <- liftIO $ do
        pipe <- gets _mongoDB -- type error
        results <- access pipe master "db" (find $ select [] "movies")
        return results
    writeBS "test"
It gives me the following error:
No instance for (MonadState App IO)
arising from a use of `gets'
Possible fix: add an instance declaration for (MonadState App IO)
In a stmt of a 'do' block: pipe <- gets _mongoDB
In the second argument of `($)', namely
`do { pipe <- gets _mongoDB;
results <- access pipe master "db" (find $ select [] "movies");
return results }'
In a stmt of a 'do' block:
res <- liftIO
$ do { pipe <- gets _mongoDB;
results <- access pipe master "db" (find $ select [] "movies");
return results }
It confuses the hell out of me. How can I access my MongoDB connection from a handler?
The liftIO block is just for IO operations; accessing the application state needs to be done in the Handler monad itself.
watchHandler :: Handler App App ()
watchHandler = do
    pipe <- gets _mongoDB
    res <- liftIO $ do
        results <- access pipe master "db" (find $ select [] "movies")
        return results
    writeBS "test"
Also, binding a value and then immediately returning it is redundant. You can just write:
watchHandler :: Handler App App ()
watchHandler = do
    pipe <- gets _mongoDB
    res <- liftIO $ access pipe master "db" (find $ select [] "movies")
    writeBS "test"
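If you want to serve the actual query result rather than the placeholder, you can drain the cursor and render it. A sketch, assuming rest (from Database.MongoDB) and pack (from Data.ByteString.Char8) are in scope; note that the exact return type of access varies between driver versions, so treat this only as an outline:

```haskell
watchHandler :: Handler App App ()
watchHandler = do
    pipe <- gets _mongoDB
    -- Run the query and fetch every matching document from the cursor.
    docs <- liftIO $ access pipe master "db" (find (select [] "movies") >>= rest)
    writeBS $ pack $ show docs
```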
I'm messing around with an echo server I found online, trying to get a feel for network programming with Haskell, and I'm hitting a stumbling block. I can't seem to figure out how to send data to the server (via another program or any other means). My current attempt is as follows:
import Network (connectTo, Socket, PortID(..))
import System.IO (hPutStrLn, hClose, hSetBuffering, BufferMode(..))
main :: IO ()
main = do
    handle <- connectTo "127.0.0.1" (PortNumber 5555)
    hSetBuffering handle LineBuffering
    hPutStrLn handle "echo hello, world!"
    hPutStrLn handle "add 1 2"
    hClose handle
When I run main, I get the error "Server.hs: : hPutChar: resource vanished (Broken pipe)" in the terminal in which the server is running. The server code is as follows:
import Network (listenOn, accept, withSocketsDo, PortID(..), Socket)
import System (getArgs)
import System.IO (hSetBuffering, hGetLine, hPutStrLn, BufferMode(..), Handle)
import Control.Concurrent (forkIO)
main :: IO ()
main = withSocketsDo $ do
    args <- getArgs
    let port = fromIntegral (read $ head args :: Int)
    sock <- listenOn $ PortNumber port
    putStrLn $ "Listening on " ++ show port
    sockHandler sock

sockHandler :: Socket -> IO ()
sockHandler sock = do
    (handle, _, _) <- accept sock
    hSetBuffering handle NoBuffering
    forkIO $ commandProcessor handle
    sockHandler sock

commandProcessor :: Handle -> IO ()
commandProcessor handle = do
    line <- hGetLine handle
    let cmd = words line
    case head cmd of
        "echo" -> echoCommand handle cmd
        "add"  -> addCommand handle cmd
        _      -> hPutStrLn handle "Unknown command."
    commandProcessor handle

echoCommand :: Handle -> [String] -> IO ()
echoCommand handle cmd =
    hPutStrLn handle (unwords $ tail cmd)

addCommand :: Handle -> [String] -> IO ()
addCommand handle cmd =
    hPutStrLn handle $ show $ (read $ cmd !! 1) + (read $ cmd !! 2)
How do I go about fixing this? I want to get to extending the server so I can learn some more. Thanks!
In this instance, the problem is simple. You write an echo command to the server, then an add command, and then disconnect. The server then processes the echo command and tries to write the reply back to the client, but the client has already disconnected! Hence the exception.
The client shouldn't disconnect until it has read back the responses from the server -- and the server should handle exceptions so that a client disconnect doesn't kill it.
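Concretely, a client that reads one reply per command before closing avoids the broken pipe. A minimal sketch based on the client code above (one hGetLine per command, matching the one-line-per-reply protocol of the echo server):

```haskell
import Network (connectTo, PortID (PortNumber))
import System.IO (BufferMode (LineBuffering), hClose, hGetLine,
                  hPutStrLn, hSetBuffering)

main :: IO ()
main = do
    handle <- connectTo "127.0.0.1" (PortNumber 5555)
    hSetBuffering handle LineBuffering
    hPutStrLn handle "echo hello, world!"
    hGetLine handle >>= putStrLn  -- read the echo reply before sending more
    hPutStrLn handle "add 1 2"
    hGetLine handle >>= putStrLn  -- read the sum reply before disconnecting
    hClose handle
```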