I have 8 servers connected together into a kind of cluster. On one of the servers I have DirectAdmin, which controls all the configs (the config files are mounted over NFS). Everything works correctly except restarting services after config updates.
How to force service restart on all 8 servers?
To restart services on a server you need to log in to it. I suggest creating a bash script on one of your servers and running it from there. Set up SSH keys so that this server can log in to the other servers; the script can then restart the services on all of them.
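A minimal sketch of such a script, assuming key-based SSH access as root is already in place; the hostnames and the service name below are placeholders for your own:

    #!/bin/bash
    # Restart a service on every cluster member over SSH.
    # HOSTS and SERVICE are placeholders -- substitute your own
    # server names/IPs and the service managed by DirectAdmin.
    HOSTS="web1 web2 web3 web4 web5 web6 web7 web8"
    SERVICE="httpd"

    for host in $HOSTS; do
        echo "Restarting $SERVICE on $host"
        ssh -o BatchMode=yes "root@$host" "systemctl restart $SERVICE" \
            || echo "Restart failed on $host" >&2
    done

If your DirectAdmin setup lets you hook into config updates, you could trigger this script from there so the restarts happen automatically after a change.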
For auditing purposes, how can we record all the Linux commands that have been executed inside a Kubernetes container?
This is possible using eBPF. There are a few Kubernetes tools that can do session auditing; one of them is Teleport, which acts as a bastion host for services and has the capability to record commands run in a pod's shell (bash/ash etc.).
https://goteleport.com/
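If you only need the raw command executions rather than full session recording, a minimal eBPF-based sketch is to trace execve() on the worker node with bpftrace (this is independent of Teleport and assumes bpftrace is installed on the node):

    # Print every command executed on the node (process name -> argv).
    # Container processes show up here too, since they share the node's kernel.
    bpftrace -e 'tracepoint:syscalls:sys_enter_execve { printf("%s -> ", comm); join(args->argv); }'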
We've just shipped a standalone Service Fabric cluster to a customer site with a misconfiguration. Our setup:
Service Fabric 6.4
2 Windows servers, each running 3 Hyper-V virtual machines that host the cluster
We configured the cluster locally using static IP addresses for the nodes. When the servers arrived, the IP addresses of the Hyper-V machines were changed to conform to the customer's available IP addresses. Now we can't connect to the cluster, since every IP in the clusterConfig is wrong. Is there any way we can recover from this without re-installing the cluster? We'd prefer to keep the new IPs assigned to the VMs if possible.
I've only tested this in my test environment (I've never done this in production before, so do it at your own risk), but since you can't connect to the cluster anyway, I think it's worth a try.
Connect to each virtual machine that is part of the cluster and follow these steps:
Locate the Service Fabric cluster files (usually C:\ProgramData\SF\{nodeName}\Fabric)
Copy the ClusterManifest.current.xml file to a temp folder (for example C:\temp)
Go to the Fabric.Data subfolder and copy the InfrastructureManifest.xml file to the same temp folder
In each file you copied, change the node IP addresses to the correct values
Stop the FabricHostSvc service by running net stop FabricHostSvc in PowerShell
After it stops successfully, run this PowerShell (admin mode) command to update the node's cluster configuration:
New-ServiceFabricNodeConfiguration -ClusterManifestPath C:\temp\ClusterManifest.current.xml -InfrastructureManifestPath C:\temp\InfrastructureManifest.xml
Once the config is updated, start FabricHostSvc again with net start FabricHostSvc
Do this for each node and pray for the best.
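Put together, the stop/reconfigure/start sequence on a single node looks roughly like this, run from an elevated PowerShell prompt and using the manifests you copied to C:\temp:

    # Stop the Service Fabric host service
    net stop FabricHostSvc

    # Point the node at the corrected manifests
    New-ServiceFabricNodeConfiguration `
        -ClusterManifestPath C:\temp\ClusterManifest.current.xml `
        -InfrastructureManifestPath C:\temp\InfrastructureManifest.xml

    # Start the host service again
    net start FabricHostSvc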
I followed the instructions here to create a local 3-node secure cluster.
I got the Go example app running with the following DB connection string to connect to the secure cluster:
sql.Open("postgres", "postgresql://root#localhost:26257/dbname?sslmode=verify-full&sslrootcert=<location of ca.crt>&sslcert=<location of client.root.crt>&sslkey=<location of client.root.key>")
CockroachDB worked well locally, so I decided to move the DB (the DB solution, not the actual data) to GCP Kubernetes Engine using the instructions here.
Everything worked fine: the pods were created and I could use the built-in SQL client from the cloud console.
Now I want the previous example app to connect to this new cloud DB. I created a load balancer using the kubectl expose command and got a public IP to use in the code.
How do I get the new ca.crt, client.root.crt, client.root.key files to use in my connection string for the DB running on GCP?
We have 5+ developers and the idea is to have them write code on their local machines and connect to the cloud db using the connection strings and the certificates.
Or is there a better way to let 5+ developers use a single DEV DB cluster running on GCP?
The recommended way to run against a Kubernetes CockroachDB cluster is to have your apps run in the same cluster. This makes certificate generation fairly simple. See the built-in SQL client example and its config file.
The config above uses an init container to send a CSR for client certificates and makes them available to the container (in this case just the cockroach sql client, but it could be anything else).
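For example, launching the built-in secure client inside the cluster could look like this; the file, pod and service names are the ones used in the CockroachDB Kubernetes example and may differ in your deployment:

    # Start the secure client pod defined in the example config
    kubectl create -f client-secure.yaml

    # Open a SQL shell against the cluster's public service,
    # using the certificates the init container requested
    kubectl exec -it cockroachdb-client-secure \
        -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public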
If you wish to run a client outside the Kubernetes cluster, the simplest way is to copy the generated certs directly from the client pod. It's recommended to use a non-root user (see the sketch after this list):
create the user through the SQL command
modify the client-secure.yaml config for your new user and start the new client pod
approve the CSR for the client certificate
wait for the pod to finish initializing
copy the ca.crt, client.<username>.crt and client.<username>.key from the pod onto your local machine
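A sketch of those steps from a local machine; the pod name (cockroachdb-client-secure), certificates directory (/cockroach-certs), CSR name and user name (myuser) are assumptions based on the example config, so adjust them to your setup:

    # 1. Create the SQL user from an existing SQL shell
    kubectl exec -it cockroachdb-client-secure \
        -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public \
        -e "CREATE USER myuser;"

    # 3. Approve the CSR created by the new client pod's init container
    #    (the CSR name follows the <namespace>.client.<username> pattern in the example)
    kubectl get csr
    kubectl certificate approve default.client.myuser

    # 5. Copy the certificates out of the new client pod onto your machine
    #    (pod name assumes you reused cockroachdb-client-secure in your modified config)
    mkdir -p certs
    kubectl cp cockroachdb-client-secure:/cockroach-certs/ca.crt certs/ca.crt
    kubectl cp cockroachdb-client-secure:/cockroach-certs/client.myuser.crt certs/client.myuser.crt
    kubectl cp cockroachdb-client-secure:/cockroach-certs/client.myuser.key certs/client.myuser.key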
Note: the public DNS or IP address of your kubernetes cluster is most likely not included in the node certificates. You either need to modify the list of hostnames/addresses before bringing up the nodes, or change your connection URL to sslmode=verify-ca (see client connection parameters for details).
Alternatively, you could use password authentication in which case you would only need the CA certificate.
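With the copied files, the connection string from the original example could then look something like this, using verify-ca as mentioned above; the load-balancer IP, user name and certificate paths are placeholders:

    sql.Open("postgres", "postgresql://myuser@<load balancer ip>:26257/dbname?sslmode=verify-ca&sslrootcert=certs/ca.crt&sslcert=certs/client.myuser.crt&sslkey=certs/client.myuser.key")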
I want to use the following deployment architecture:
One machine running my web server (nginx)
Two or more machines running uwsgi
PostgreSQL as my DB on another server
All three are different host machines on AWS. During development I used Docker and was able to run all three on my local machine, but now I'm clueless about how to split them onto three separate hosts and run it. Any guidance, clues, or references will be greatly appreciated. I'd prefer to do this using Docker.
If you're really adamant about keeping the services on individual hosts, nothing stops you from still running your containers on Docker-equipped EC2 hosts for nginx/uwsgi. You could even use a CoreOS AMI, which comes with a secure Docker instance pre-loaded (https://coreos.com/os/docs/latest/booting-on-ec2.html).
For the database, use PostgreSQL on AWS RDS.
If you're running containers you can also look at AWS ECS, Amazon's container service, which would be my initial recommendation, but I saw that you wanted all these services on individual hosts.
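A rough sketch of what that could look like on the two Docker EC2 hosts; the image names, ports, config paths and the RDS endpoint are all placeholders:

    # On the uwsgi host: run the application container, pointed at RDS
    docker run -d --name app -p 8000:8000 \
        -e DATABASE_URL="postgresql://appuser:secret@mydb.xxxxxx.us-east-1.rds.amazonaws.com:5432/appdb" \
        myorg/my-uwsgi-app

    # On the nginx host: run nginx with a config that proxies to the uwsgi host
    docker run -d --name web -p 80:80 \
        -v /srv/nginx/conf.d:/etc/nginx/conf.d:ro \
        nginx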
You can use docker stack to deploy the application in swarm mode.
Join the other two hosts as workers and use the placement option below:
https://docs.docker.com/compose/compose-file/#placement
deploy:
  placement:
    constraints:
      - node.role == manager
Change the constraint per service (for example node.role == manager, or node.hostname == worker1) to restrict each service to a particular host.
You can also make this more secure by using a VPN if you wish.
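A minimal stack file sketch along those lines; the image names and node hostnames are placeholders, and each service is pinned to its own host with a placement constraint:

    version: "3.7"
    services:
      nginx:
        image: nginx
        ports:
          - "80:80"
        deploy:
          placement:
            constraints:
              - node.hostname == web-host
      uwsgi:
        image: myorg/my-uwsgi-app
        deploy:
          placement:
            constraints:
              - node.hostname == app-host
      db:
        image: postgres:11
        volumes:
          - dbdata:/var/lib/postgresql/data
        deploy:
          placement:
            constraints:
              - node.hostname == db-host
    volumes:
      dbdata:

You would then deploy it from the manager with docker stack deploy -c docker-compose.yml myapp.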
While going through the Red Hat Fuse ESB documentation, I found a mention of Fabric containers as something different from standalone containers. Are Fabric containers virtual/logical containers?
Link : https://access.redhat.com/documentation/en-US/Fuse_ESB_Enterprise/7.1/html/Deploying_into_the_Container/files/FESBLocateFabric.html
Fabric containers are real JVMs that are started and controlled by Fabric servers. They are not 'virtual' containers but are real JVM processes.
Standalone containers are single JVMs that monitor their "deploy" folder by default to look for artifacts to deploy. You can start a standalone Fuse server by simply running bin/fuse. This server will not contact any other Fuse servers.
A Fabric is a clustered group of Fuse instances. Because the cluster needs to distribute its artifacts according to some configuration it doesn't look at its deploy folder anymore (it ignores the contents) but uses "profiles" which are stored on the Fabric servers.
If you were to create a cluster of 3 hardware servers, you would run 3 Fabric servers on them.
On the first server, you start Fuse by running bin/start.
Then run bin/client -r 10 to connect to the server.
You now still have a standalone instance. To turn it into a Fabric server run fabric:create --clean --wait-for-provisioning
On the other two servers, you start Fuse the same way, but instead of running fabric:create you run fabric:join with the relevant arguments to have them connect to the first server.
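In command form the whole sequence looks roughly like this; the exact fabric:join arguments (ZooKeeper password and the first server's address) depend on your setup and are placeholders here:

    # On the first server: start Fuse, connect, and create the Fabric
    bin/start
    bin/client -r 10
    # inside the shell:
    fabric:create --clean --wait-for-provisioning

    # On each of the other two servers: start Fuse, connect, and join the Fabric
    bin/start
    bin/client -r 10
    # inside the shell (password and host:port are placeholders):
    fabric:join --zookeeper-password admin server1:2181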
You'll notice that when you look at the administration console of the first server, you'll see the other two servers as well, and you'll be able to start Fabric containers on any of those three servers. You can also attach profiles to those containers.