Multiple instances of httpd24 RHSCL - Red Hat

I need to create a second Apache instance (same version) on Red Hat 7.9. The reason is that I want a second development environment where restarting Apache will not affect the other instance. I am using httpd24-httpd 2.4.34 from RHSCL and I am not able to find any related documentation.
Do you know whether multiple Apache instances are supported for httpd24-httpd RHSCL on RHEL 7.9, and whether there is any documentation I can follow?
Thank you in advance

It's a bit of a guess, to be honest, but what about running the second instance in a container? See https://catalog.redhat.com/software/containers/rhscl/httpd-24-rhel7/57ea8d049c624c035f96f42e
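For example, a minimal sketch of that container route (the image path comes from the catalog page above; the host port, container name, and mounted document root are assumptions to adapt and verify against the image documentation):

    # Pull the RHSCL httpd 2.4 image referenced above
    docker pull registry.access.redhat.com/rhscl/httpd-24-rhel7

    # Run a second, independent Apache instance; the image serves on port 8080
    # inside the container, mapped here to host port 8081 (adjust as needed)
    docker run -d --name httpd-dev2 \
        -p 8081:8080 \
        -v /srv/dev2/html:/var/www/html:Z \
        registry.access.redhat.com/rhscl/httpd-24-rhel7

    # Restarting this instance leaves the RPM-installed httpd24 service alone
    docker restart httpd-dev2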

Related

Can I use different versions of Cassandra in a cluster?

Can I use different versions of Cassandra in a single cluster? My goal is to transfer data from one DC (A) to a new DC (B) and then decommission DC (A), but DC (A) is on version 3.11.3 and DC (B) is going to be 3.11.7 or later.*
* I want to use a K8ssandra deployment with metrics and other extras, and the K8ssandra project cannot deploy Cassandra versions older than 3.11.7.
Thank you!
K8ssandra is purposefully an "opinionated" stack, which is why it only supports certain recent Cassandra versions that are not known to have major issues.
But if you already have an existing cluster, that doesn't mean you can't migrate between them. Check out this blog post for an example of doing exactly that: https://k8ssandra.io/blog/tutorials/cassandra-database-migration-to-kubernetes-zero-downtime/
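At a high level, that migration comes down to adding the new DC, rebuilding it from the old one, and decommissioning the old DC. A rough sketch, with placeholder keyspace name, DC names, and replication factors (follow the blog post for the full, safe procedure):

    # 1. Once the DC_B nodes have joined the cluster, replicate to both DCs
    cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'DC_A': 3, 'DC_B': 3};"

    # 2. On each DC_B node, stream the existing data over from DC_A
    nodetool rebuild -- DC_A

    # 3. After clients have switched to DC_B, drop DC_A from replication ...
    cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'DC_B': 3};"

    # 4. ... and retire the DC_A nodes one at a time
    nodetool decommission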

How to connect Kafka with OPC?

I need to put data from OPC UA into a Kafka topic.
I tried searching for a connector or technology to accomplish this, but I didn't find anything.
You can try Visual Logger for OPC: https://onewayautomation.com/visual-logger
Its community edition is free. It can be deployed in a Docker container and also runs on Windows and Linux.

Apache Kylin and PostgreSQL

I'm a student working on my final-year project, which is about data warehousing, BI, and so on.
I have been asked to work with Apache Kylin.
I did some research about it and learned the basics.
I also looked into whether it is possible to use PostgreSQL as the data warehouse and have it communicate with Apache Kylin to build cubes,
but found nothing...
So would you please answer the following question:
Is it possible to make Apache Kylin communicate with a PostgreSQL DWH?
And if there is any documentation on this, would you please share it?
Time is running short and I really appreciate your answers and guidance.
Thanks in advance.
Khalil
It's doable. Kylin provides a data source adapter framework for JDBC sources, and PostgreSQL can be implemented as one of those adapters; MySQL is supported by default. You can check this link to learn more: http://kylin.apache.org/development/datasource_sdk.html
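For a rough idea of what the JDBC route looks like, here is a sketch: the property names follow the JDBC data source documentation linked above but vary between Kylin versions, PostgreSQL may first require implementing a dialect/adapter with that SDK, and the driver path, connection details, and credentials are placeholders, so verify everything against your version's docs:

    # Make the PostgreSQL JDBC driver visible to Kylin (path/version are examples)
    cp postgresql-42.2.x.jar $KYLIN_HOME/ext/

    # Point Kylin at a JDBC source instead of Hive
    cat >> $KYLIN_HOME/conf/kylin.properties <<'EOF'
    kylin.source.default=8
    kylin.source.jdbc.connection-url=jdbc:postgresql://dwh-host:5432/warehouse
    kylin.source.jdbc.driver=org.postgresql.Driver
    kylin.source.jdbc.dialect=postgresql
    kylin.source.jdbc.user=kylin
    kylin.source.jdbc.pass=secret
    EOF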

Deploying Kubernetes on bare metal rather than VM

Stupid question, but right now I'm deploying my Kubernetes cluster inside a VM. Is there a way to deploy it directly onto my machine?
I'm sure there has to be an easy fix, but many of the docs I've read have focused on deploying it inside a VM.
I am assuming you are using some flavor of Linux; otherwise the information below won't be useful to you.
The easiest way to do a bare-metal deployment ("onto your machine") is with kubeadm. The documentation for it is excellent.
(If you need help with it, reply with your exact OS flavor and version and I can edit this answer to reflect that specific situation.)
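For reference, a minimal kubeadm bootstrap on the machine itself looks roughly like this (assuming a container runtime is installed, swap is disabled, and the Kubernetes package repo is configured; the pod CIDR depends on the CNI plugin you choose):

    # Install the tooling (package names shown for RHEL/CentOS-style distros)
    sudo yum install -y kubeadm kubelet kubectl

    # Initialise the control plane directly on this machine, no VM involved
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Let your user talk to the cluster with kubectl
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Install a CNI network add-on of your choice (see the kubeadm docs), then,
    # for a single-machine cluster, allow workloads on the control-plane node
    # (the taint is named node-role.kubernetes.io/master- on older releases)
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-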

Configure JDBC driver in JBoss 7 - as a deployment OR as a module?

As mentioned in the article https://community.jboss.org/wiki/DataSourceConfigurationInAS7, JBoss 7 provides two main ways to configure a data source.
What is the best practice for configuring a data source in JBoss AS 7? Is it:
As a module?
As a deployment?
(The same question has been asked in the thread https://community.jboss.org/thread/198023, but no one has provided an acceptable answer yet.)
The guide JBoss AS7 DS configuration says the recommended way is to configure the datasource as a deployment.
But according to the JBoss Community discussion Jboss 7 DS configuration, page 54 of that guide says the recommended way to deploy the JDBC driver is the modular approach.
Personally, I would say the better (not the best) approach for configuring the JDBC driver is to use modules, for three reasons:
The JDBC driver will generally not change.
Reusability: you can use the same module across various applications instead of deploying the jar along with each application, which prevents duplication.
Space efficiency: the module approach reduces the size of your EAR/WAR because you do not need to ship the jar inside the package.
Hence I would argue that the better of the two approaches is via modules.
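For anyone who wants to see what the module approach actually involves, here is a rough sketch using a PostgreSQL driver; the module name, jar version, and driver name are examples to adapt:

    # Create the module directory under JBOSS_HOME and drop the driver jar in it
    mkdir -p $JBOSS_HOME/modules/org/postgresql/main
    cp postgresql-42.2.x.jar $JBOSS_HOME/modules/org/postgresql/main/

    # Write the module descriptor
    cat > $JBOSS_HOME/modules/org/postgresql/main/module.xml <<'EOF'
    <module xmlns="urn:jboss:module:1.1" name="org.postgresql">
        <resources>
            <resource-root path="postgresql-42.2.x.jar"/>
        </resources>
        <dependencies>
            <module name="javax.api"/>
            <module name="javax.transaction.api"/>
        </dependencies>
    </module>
    EOF

    # Register the driver with the datasources subsystem via the CLI
    $JBOSS_HOME/bin/jboss-cli.sh --connect \
        --command="/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql)"

    # Datasources defined in standalone.xml or via the admin console can now
    # reference the driver by name ("postgresql") from any application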
@Mukul Goel
It's not necessary to include it in the EAR of your application; it's sufficient to put the .jar inside the deployments folder, so:
no need to embed it in the EAR,
no need to create a module.
Just deploy it in the deployments folder or via the admin console.
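In other words, the deployment route is as simple as this (the jar name and path are examples):

    # Hot-deploy the JDBC driver like any other artifact; JBoss AS 7 detects
    # JDBC 4 compliant drivers automatically and registers them by file name
    cp postgresql-42.2.x.jar $JBOSS_HOME/standalone/deployments/

    # Datasources (e.g. a *-ds.xml or one created in the admin console) can
    # then reference the driver as "postgresql-42.2.x.jar"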