Remote logging: rsyslog vs. RESTful API

I'm working with Django and Loggly, and I need to decide between using Loggly with rsyslog or with its RESTful API. For the second option, I'd use grequests, sending one request at a time (i.e. just making the calls non-blocking; I wouldn't send requests in bulk).
What are the advantages of using rsyslog over the RESTful API and vice versa?
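For context, a minimal sketch of what the grequests option could look like (the Loggly endpoint URL and token below are placeholders, not values from the question):

    import grequests

    # Hypothetical Loggly HTTP event endpoint; replace the token and tag.
    LOGGLY_URL = "https://logs-01.loggly.com/inputs/YOUR-TOKEN/tag/django/"
    _pool = grequests.Pool(1)  # at most one in-flight request at a time

    def log_event(payload):
        # Build one async request and hand it to gevent without waiting
        # for the response, so the calling view is not blocked.
        req = grequests.post(LOGGLY_URL, json=payload, timeout=5)
        grequests.send(req, _pool)

    log_event({"level": "ERROR", "message": "payment failed", "user_id": 42})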

I haven't tested it yet, but the syslog approach has several advantages:
- You can centralize logs at the system level, without any particular configuration in the Django app.
- Logging is decoupled from the Django app: you can log to a file, a remote syslog server, or Loggly without touching the Django app.
- It should be faster if you use UDP.
- If you use a centralized syslog server, you only have to set up the Loggly agent there.
On the other hand, using the REST API couples the app to the Loggly implementation, and it can itself raise errors while trying to report errors (DNS resolution failures, network problems, etc.).
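To make the decoupling concrete, here is a sketch of the syslog option in Django settings, assuming a local rsyslog daemon on the default /dev/log socket (for a remote syslog server over UDP, use a (host, 514) tuple as the address instead). Only settings.py changes; the application code keeps using the standard logging module:

    # settings.py (sketch)
    LOGGING = {
        "version": 1,
        "disable_existing_loggers": False,
        "handlers": {
            "syslog": {
                "class": "logging.handlers.SysLogHandler",
                "address": "/dev/log",      # or ("syslog.example.com", 514) for UDP
                "facility": "local7",
            },
        },
        "loggers": {
            "django": {"handlers": ["syslog"], "level": "INFO"},
        },
    }

rsyslog (or the Loggly agent) then decides where those messages end up, without any further change to the Django app.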

Related

SOAP Web Services with load balancing

My customer has two Windows Server 2019 machines.
On both of them, an instance of a SOAP Web Service is running.
URLs:
https://host1.domainname.com/SOAPService
and
https://host2.domainname.com/SOAPService
Now, the requirement of the customer is to provide a single, unique URL that the clients can use to consume the SOAP WebService(s).
I read through several websites, and if I got it right, I need a tool called a "reverse proxy"... Using this tool, clients can access the webservice via a URL such as https://host.domainname.com/SOAPService and the tool will automatically route the request to the available webservice.
Correct?
I also have an architectural question:
On which machine do I have to run such a reverse proxy?
Is it on host1 or host2, or do I need a dedicated machine (like a supervisor)?
If it is a dedicated machine, how can I make this reverse proxy itself highly available? E.g. is it possible to run two reverse proxies in parallel on different machines? Which tool can provide this?
Thanks
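For illustration only, here is a toy Python sketch of what a reverse proxy does: it listens on the single public address and forwards each request to one of the two backend hosts from the question. In practice you would use a dedicated product (nginx, HAProxy, IIS ARR, etc.) rather than code like this, and for high availability you would typically run two proxies behind a failover mechanism such as a virtual IP or load-balanced DNS.

    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The two real SOAP hosts from the question; selection here is naive
    # round-robin, with no health checking or failover.
    BACKENDS = itertools.cycle([
        "https://host1.domainname.com",
        "https://host2.domainname.com",
    ])

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_POST(self):  # SOAP calls arrive as HTTP POSTs
            backend = next(BACKENDS)
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            req = urllib.request.Request(
                backend + self.path,
                data=body,
                headers={"Content-Type": self.headers.get("Content-Type", "text/xml")},
            )
            # No error handling: a failing backend simply surfaces as an exception.
            with urllib.request.urlopen(req) as resp:
                payload = resp.read()
                status = resp.status
                content_type = resp.headers.get("Content-Type", "text/xml")
            self.send_response(status)
            self.send_header("Content-Type", content_type)
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        # Clients would call https://host.domainname.com/SOAPService, which
        # resolves to the machine running this proxy (TLS termination omitted).
        HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()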

Exe as Webservice Endpoint

I got a webservice endpoint and I am stumbling over how to correctly implement it.
It seems to be a parameterized exe file which returns an XML reply.
There is no documentation.
I am used to SOAP, WCF and REST, but this is completely unknown to me. Does anyone have a guide or a best practice for how to implement such a service?
I can consume it with an HTTP GET, but there are some questions left:
I know the questions are quite broad... but I could not find anything about this on the web.
Is there a secure way to publish exe files as a webservice?
Are there any critical downsides to implementing such an interface?
Am I making a fool of myself, and is this just an alias?
Example Url:
http://very.exhausting.company/Version/SuperStrange.exe?parameter=String
Web servers
What you call a webservice endpoint is nothing other than a web server listening on some host (normally 0.0.0.0) and some port on a physical or virtual machine, and responding with an HTTP response to HTTP requests sent to that host, port and the URIs that the web server cares to process.
Any web server is itself an application or a static or dynamic component of an application as the following examples illustrate:
JBoss, GlassFish, Tomcat etc. are applications, known as application servers, into which containers/servlets/plugins implementing web servers and the corresponding endpoints are deployed. These application servers listen on some port, exposing a generic web server that routes requests to those containers and their servlets;
a fat jar started with java -jar on a JVM which deploys a vert.x verticle featuring a vert.x HttpServer listening on some port is nothing else than a web server;
an interpreter such as node.js parsing and executing JavaScript code based on the express module will most likely deploy a web server on some port;
finally, a statically or dynamically linked application written in languages such as C++ or Go can expose a web server listening on some port.
All of the above cases feature different deployment mechanisms, but what they deploy is essentially the same: a piece of software that listens for HTTP requests on some port, executes some logic based on the request, and returns HTTP responses to the caller.
Your Windows exe file is most likely a statically linked application that provides a web server.
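To make that concrete, here is a minimal Python sketch of what such a program amounts to: something that listens on a port and answers HTTP GETs with XML. The path and parameter handling are made up for illustration; the real exe presumably does the same kind of thing internally.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs
    from xml.sax.saxutils import escape

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Read ?parameter=... from the query string and echo it back as XML.
            query = parse_qs(urlparse(self.path).query)
            value = query.get("parameter", [""])[0]
            body = f"<reply><echo>{escape(value)}</echo></reply>".encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Package this (or its C++/Go/C# equivalent) into a single binary
        # and you have an "exe as webservice".
        HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()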
Protocols
So we know you have a web server, since it reacts to an HTTP GET. How does it relate to REST, SOAP etc.? Effectively, REST, SOAP etc. are higher-level protocols. TCP is the low-level transport, HTTP is built on top of it, and your server supports HTTP. REST, SOAP and everything else that you mention are higher-level protocols that are built, among others, on top of HTTP. So all you know is that your application (web server) supports HTTP, but you do not know which higher-level data exchange protocol it implements. It definitely implements some protocol, at least a custom one that its author came up with to exchange data between a client and this application.
You can try to reverse engineer it, but it is not clear how you would find out about all possible endpoints, arguments, payload structures, accepted headers, etc. Essentially, you have a web server publishing some sort of an API, but there is no generic way of telling what that API is.
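A practical starting point is simply to probe it and inspect what comes back. A sketch with Python requests against the example URL from the question (the reply structure is unknown, so this only prints whatever is returned):

    import requests
    import xml.etree.ElementTree as ET

    resp = requests.get(
        "http://very.exhausting.company/Version/SuperStrange.exe",
        params={"parameter": "String"},
        timeout=10,
    )
    print(resp.status_code, resp.headers.get("Content-Type"))

    # Fails fast if the reply is not well-formed XML.
    root = ET.fromstring(resp.content)
    print(root.tag, root.attrib, list(root))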
Security
The world around you does not have to know how the API is published. You can put any of the above four web server implementations behind exactly the same firewall, or behind a reverse proxy with SSL termination exposing just one host and port over SSL. So there is no difference in security, with respect to the outside world, whether you deploy it as an exe or as a WAR in JBoss. This is not to say that your exe file is secure: depending on how it is implemented, it may allow all sorts of attacks, but again, this is equally true for any deployment mechanism.

Securing access to REST API of Kafka Connect

The REST API for Kafka Connect is not secured or authenticated.
Since it is not authenticated, the configuration of a connector or its tasks is easily accessible to anyone. Since these configurations may contain details about how to access the source system (in the case of a SourceConnector) or the destination system (in the case of a SinkConnector), is there a standard way to restrict access to these APIs?
In Kafka 2.1.0, it is possible to configure HTTP basic authentication for the REST interface of Kafka Connect without writing any custom code.
This became possible due to the implementation of the REST extension mechanism (see KIP-285).
In short, the configuration procedure is as follows:
1. Add the extension class to the worker configuration file:
rest.extension.classes = org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension
2. Create a JAAS config file (e.g. connect_jaas.conf) for the application name 'KafkaConnect':
KafkaConnect {
org.apache.kafka.connect.rest.basic.auth.extension.PropertyFileLoginModule required
file="/your/path/rest-credentials.properties";
};
3. Create the rest-credentials.properties file in the above-mentioned directory:
user=password
4. Finally, point the JVM at your JAAS config file, for example by adding this command-line property:
-Djava.security.auth.login.config=/your/path/connect_jaas.conf
After restarting Kafka Connect, you will be unable to use the REST API without basic authentication.
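A quick way to verify the setup from a client, assuming the worker's REST interface listens on localhost:8083 and the user=password entry from rest-credentials.properties above:

    import requests

    base = "http://localhost:8083"

    # Without credentials the request should now be rejected (HTTP 401).
    print(requests.get(f"{base}/connectors").status_code)

    # With the configured credentials the listing works again.
    print(requests.get(f"{base}/connectors", auth=("user", "password")).json())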
Please keep in mind that the classes used here are examples rather than production-ready features.
Links:
Connect configuration
BasicAuthSecurityRestExtension
JaasBasicAuthFilter
PropertyFileLoginModule
This is a known area in need of improvement, but for now you should use a firewall on the Kafka Connect machines and either an API management tool (Apigee, etc.) or a reverse proxy (HAProxy, nginx, etc.) to ensure that HTTPS is terminated at an endpoint on which you can configure access control rules, and then have the firewall accept connections only from the secure proxy. With some products, the firewall, access control, and SSL/TLS termination functions can all be handled by a smaller number of products.
As of Kafka 1.1.0, you can set up SSL and SSL client authentication for the Kafka Connect REST API. See KIP-208 for the details.
Now you are able to enable certificate-based authentication for client access to the REST API of Kafka Connect.
There is an example here: https://github.com/sudar-path/kc-rest-mtls
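From the client side, a call against such a mutual-TLS endpoint might look like the following sketch (Python requests; the host, port and certificate paths are placeholders):

    import requests

    resp = requests.get(
        "https://connect.example.com:8443/connectors",
        cert=("client.crt", "client.key"),  # client certificate and key for mutual TLS
        verify="ca.crt",                    # CA that signed the Connect REST certificate
        timeout=10,
    )
    print(resp.json())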

Trace HTTPS Web API calls from iPhone App

I am working with an iPhone application which interacts with a Web API. Since the endpoints use HTTPS, the data exchanged between the device and the Web API is supposed to be encrypted.
I need to find every endpoint and the data exchanged (headers, body content) for each business scenario and for negative testing flows.
Since the transmitted data is encrypted, I was unable to trace it with Fiddler, which I tried while following several online tutorials.
(The reason I need this is that I have been assigned to build an API automation tool to simulate all the testing scenarios: happy path, negative test cases, etc.)
Is there any better approach I can take to trace these API calls?
Or is there a tool I can use to trace the Web API calls that the iPhone sends and receives?
TIA
I managed to get the certificates for the HTTPS endpoints and added them to the Certificate Manager (on a Windows PC). Afterwards I configured the proxy ports with the Fiddler echo service from the mobile device and was able to trace the HTTPS calls.
With the certificates installed, intercepting the HTTPS traffic is possible.
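For the automation tool itself, the same interception setup can be reused by routing the tool's own requests through the proxy. A sketch with Python requests, assuming Fiddler listens on its default port 8888 and its root certificate has been exported in PEM format (URL and file name are placeholders):

    import requests

    proxies = {
        "http": "http://127.0.0.1:8888",
        "https": "http://127.0.0.1:8888",
    }

    resp = requests.get(
        "https://api.example.com/v1/some-endpoint",
        proxies=proxies,
        verify="fiddler-root.pem",  # trust Fiddler's root cert so HTTPS interception works
        timeout=10,
    )
    print(resp.status_code, resp.text[:200])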

How to configure Rhino ESB with multiple servers

I'm working on a web application that will use Rhino Service Bus to send messages that are then consumed by a Windows service on the app server. I've been able to test this on my machine (hosting the web app and the Windows service) and it works fine. I was also able to test this in our dev environment, which has one web server and one app server, without any problems. However, our staging environment has two web servers and two app servers, so I'm not sure how to configure the endpoint to which the messages are sent.
I know I can edit the config section for each web server to point to one of the app servers. I can also put the windows service on only one machine and send everything to a queue on that machine. Neither of these sounds like a good option. What's the best practice in a scenario like this?
Any help would be appreciated.
It depends on which transport you're using. If you're using Rhino.Queues, you can leverage hardware-based load balancing plus DNS. If you're using MSMQ, you would need to use the MSMQ load balancer in RSB. You can find tests in the source that demonstrate this. The workarounds you mentioned would also work.