Add endpoint in Cloud Integration service - ibm-cloud

I failed to add an endpoint in the Cloud Integration service, following the steps below:
Log in to Bluemix.
Create a Cloud Integration service.
Click on the Secure Connections tab.
Download the connector and install it on the controller node.
Provide the public key.
Refresh the connection; it shows as connected.
Try to add the endpoint; it gives the error "fail to connect endpoint".

Each time that you create a new basic connection, a new installation .tar file is created specific to that installation. They all have unique /home/nativeapiadmin/mgmt.tunnel files that are configured specifically for that connector. If you want to reuse an existing copy of the installation, you must edit the mgmt.tunnel file with the correct host name or IP address, and port numbers. Then, restart the connector.
If the above does not resolve your problem, run the following procedure to clean up and recreate the endpoint (a command sketch for the user and known_hosts cleanup follows the list):
Delete your basic connector from Cloud Integration.
Create a new basic connector with a new name.
Download the Linux 64-bit installer and make sure the file size is around 844,128 bytes.
Remove the older connector from the system.
Delete the "nativeapiadmin" user and the user's directory.
Delete the known_hosts file in the /root/.ssh directory.
Reinstall the connector; please read the INSTALL_README file that is included in the zip.
Upload the id_rsa.pub key from the same machine where the connector was installed.
Create the endpoint.


Moodle on AWS ECS

I am looking to host Moodle LMS on AWS ECS.
I have seen some resources like Bitnami's Docker images for Moodle, but I am not sure if that is the best or the easiest option.
Is there any other good documentation or steps that I can follow to set it up?
After you have created an AWS account, log in to your account:
Search for EC2 and, under the search results, click on EC2.
Create and then launch an EC2 instance [recommendation: Ubuntu].
You are given public and private IP addresses and a DNS name.
An access key file [anyname.pem] will be provided; save that file because you will need it to connect with the SFTP file manager client.
Wait for the instance state to show running, then connect to that EC2 instance (a minimal SSH example follows).
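For example, assuming an Ubuntu AMI and a key file named anyname.pem, connecting over SSH could look like this (the public IP address is a placeholder):

chmod 400 anyname.pem
ssh -i anyname.pem ubuntu@<public-ip-address>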
Uploading the Moodle files to the EC2 instance with a file manager
Install the FileZilla client and launch it.
Go to the Edit menu and click on Settings.
In the Settings dialog, select SFTP, then add the access key file downloaded from EC2; save and close that dialog.
Then, under Host, enter your public IP address, enter the username [ubuntu], set the port to 22, and click Quickconnect.
When the file directory loads, you need to set up Apache virtual hosts.
Check this documentation to create Apache virtual hosts:
https://www.digitalocean.com/community/tutorials/how-to-set-up-apache-virtual-hosts-on-ubuntu-18-04-quickstart
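For reference, a minimal virtual host could look like the following (the server name and document root are placeholders; adjust them to wherever you upload Moodle). Save it as /etc/apache2/sites-available/moodle.conf, then enable it with sudo a2ensite moodle.conf and sudo systemctl reload apache2:

<VirtualHost *:80>
    ServerName moodle.example.com
    DocumentRoot /var/www/html/moodle
    ErrorLog ${APACHE_LOG_DIR}/moodle-error.log
    CustomLog ${APACHE_LOG_DIR}/moodle-access.log combined
</VirtualHost>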
After setting up the virtual hosts, follow the Moodle docs for the step-by-step installation instructions for Moodle on Ubuntu; they cover all the standard dependencies. You can find the instructions here:
https://docs.moodle.org/310/en/Step-by-step_Installation_Guide_for_Ubuntu
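As a rough sketch of the dependency installation on Ubuntu (the package list roughly follows the Moodle 3.10 guide and may differ on your release):

sudo apt update
sudo apt install apache2 mysql-server php libapache2-mod-php graphviz aspell ghostscript clamav php-pspell php-curl php-gd php-intl php-mysql php-xml php-xmlrpc php-ldap php-zip php-soap php-mbstring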
Otherwise, you can certainly use third party images such as Bitnami's docker images, but obviously the instructions for getting those working, and maintaining them going forward, would need to come from the organisation providing the image.

How to read a file on a remote server from openshift

I have an app (java, Spring boot) that runs in a container in openshift. The application needs to go to a third-party server to read the logs of another application. How can this be done? Can I mount the directory where the logs are stored to the container? Or do I need to use some Protocol to remotely access the file and read it?
The remote server is a normal Linux server. It runs an old application as a jar, which writes logs to a local folder. The application that runs in a pod (on Linux) needs to read this file and parse it.
There are multiple ways to do this.
If continuous access is needed:
A watcher with change events (the WatchService API)
A stream buffer
A file Observable with RxJava
In that case, creating an NFS share that exposes the remote logs and mounting it into the pod as a persistent volume is the better fit for this approach; a sketch of watching a mounted directory follows.
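If the logs end up mounted into the pod this way, a minimal WatchService sketch could look like the following (the /mnt/remote-logs mount path is an assumption; note that change notifications do not always propagate over NFS, so a periodic rescan may be needed as a fallback):

import java.io.IOException;
import java.nio.file.*;

public class LogWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Assumed mount point of the NFS-backed persistent volume inside the pod
        Path logDir = Paths.get("/mnt/remote-logs");

        WatchService watchService = FileSystems.getDefault().newWatchService();
        logDir.register(watchService,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);

        while (true) {
            WatchKey key = watchService.take(); // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = logDir.resolve((Path) event.context());
                System.out.println(event.kind() + ": " + changed);
                // parse the changed log file here
            }
            if (!key.reset()) {
                break; // the directory is no longer accessible
            }
        }
    }
}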
Otherwise, if the access is based on polling the logs at, for example, a certain time during the day, then a solution consists of using an FTP client such as Apache Commons Net's FTPClient, or an SSH client with an SFTP implementation such as JSch, which is a pure-Java library.
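If you go the SFTP route instead, a minimal JSch sketch could look like this (the host, user, key path, and log path are placeholders for illustration):

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RemoteLogReader {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/secrets/ssh/id_rsa"); // private key mounted into the pod, e.g. from a Secret

        Session session = jsch.getSession("loguser", "legacy-server.example.com", 22);
        session.setConfig("StrictHostKeyChecking", "no"); // use a known_hosts file in production
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(sftp.get("/opt/legacy-app/logs/app.log")))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // parse each log line here
            }
        } finally {
            sftp.disconnect();
            session.disconnect();
        }
    }
}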

Having a problem adding local hasura to Google Cloud Run

Do you have some information or a tutorial on adding local hasura to Google Cloud Run?
I already successfully set up hasura on Google Cloud Run, but it seems I have a problem connecting it with our local database in hasura.
I got this error:
ERROR: (gcloud.builds.submit) Unable to read file [cloudbuild.yaml]: [Errno 2] No such file or directory: 'cloudbuild.yaml'
Is there something that is not configured yet?
Best
Zaid
Your question is vague.
The error you reference is from Google Cloud Build and suggests that you're running gcloud builds submit ... and that this is failing because the command is unable to find a cloudbuild.yaml file. It's entirely probable that you want to do the deployment using Cloud Build, but you'll need to create the cloudbuild.yaml file for this to work.
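A minimal cloudbuild.yaml sketch that builds a container image and pushes it to Container Registry could look like the following (the hasura image name is an assumption, and a Dockerfile is expected in the same directory):

steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/hasura', '.']
images:
  - 'gcr.io/$PROJECT_ID/hasura'

Run gcloud builds submit --config cloudbuild.yaml . from the directory that contains the file.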
For those of us unfamiliar with "hasura", do you mean hasura.io?
This appears to require a container image running that defaults to port :8080 (which is good, as that's the default assumed by Cloud Run) and a connection to a PostgreSQL database.
If you're using Cloud SQL to run PostgreSQL, you can follow the instructions here
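As a hedged sketch, deploying that image to Cloud Run with a Cloud SQL instance attached could look like this (the service name, project, instance connection name, and credentials are all placeholders):

gcloud run deploy hasura \
  --image gcr.io/PROJECT_ID/hasura \
  --add-cloudsql-instances PROJECT_ID:REGION:INSTANCE \
  --set-env-vars HASURA_GRAPHQL_DATABASE_URL="postgres://user:password@/dbname?host=/cloudsql/PROJECT_ID:REGION:INSTANCE" \
  --port 8080 \
  --allow-unauthenticated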

Hyperledger composer rest server not updating

Can someone help me with deploying a REST server? When I add or edit participants and assets in my business model and I use
composer archive create -t dir -n . and deploy it with composer-rest-server,
my http://localhost:3000/explorer does not show the changes I made to my business model; it is still the same as before I made the changes.
Thank you to those who can help me.
This doc explains how to update a network definition with a new .bna and shows you how to change the version number:
https://hyperledger.github.io/composer/latest/business-network/upgrading-bna
Your problem is most likely the version number, which you left unchanged.
Once you manage to update your network definition, don't forget to regenerate the REST service.
Your REST service probably runs on the default port 3000. Kill the process using something like:
sudo kill $(sudo lsof -t -i:3000)
where 3000 is the port number it runs on, then run the composer-rest-server command again. It will see the new definition and it will recreate the endpoints correctly.
You can also update your network definition using the Playground if you prefer; you can upload your .bna that way and update it using the UI, which makes it easier if you run a development setup.
Any time you change your model or .js files, remember to go into your package.json and update the version number. Then deploy the new .bna file. (This file will have the new version number.)
When you start the Composer REST server, the first thing it does is "discover" the network and build the endpoints. It does this only when the REST server starts. So if you change your model and upgrade the network, you will need to stop the REST server and start it again for it to do a new discovery and build the new endpoints. (You also need to refresh the page in the browser if you are using the Explorer through a browser window.)
You have to install the network again after updating your BNA file.
Follow these steps (a hedged command sketch follows the list):
1) install the network again
2) start the network
3) ping the network with your card
Then start the composer-rest-server.
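For reference, a minimal command sketch, assuming a business network named tutorial-network being upgraded to version 0.0.2 with the PeerAdmin@hlfv1 and admin@tutorial-network cards (all of these names are assumptions; on an already-running network the upgrade command takes the place of start):

composer archive create -t dir -n . -a tutorial-network@0.0.2.bna
composer network install -c PeerAdmin@hlfv1 -a tutorial-network@0.0.2.bna
composer network upgrade -c PeerAdmin@hlfv1 -n tutorial-network -V 0.0.2
composer network ping -c admin@tutorial-network
composer-rest-server -c admin@tutorial-network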

How can I set up a cell and collective in Bluemix

I'm trying to setup a cell and a collective in a WAS for bluemix service. I've found a few steps online for generic liberty setup, but nothing specific for a bluemix collective or cell. Can someone point me in the right direction?
At a high level, you should be able to do the following for a Cell:
Log in to the Admin Console as wsadmin.
Create a server.
Open all the ports on each host for each server created by running the openFirewallPorts.sh script. Below, you will find the standard ports for a new server, given that only one server exists on each host. You may need to open more ports for additional servers on the same host, since ports can be unique per server. Try the following:
cd WAS_HOME/virtual/bin
export serverPorts=2810:TCP,2810:UDP,8880:TCP,8880:UDP,9101:TCP,9101:UDP,9061:TCP,9061:UDP,9080:TCP,9080:UDP,9354:TCP,9354:UDP,9044:TCP,9044:UDP,9443:TCP,9443:UDP,5060:TCP,5060:UDP,5061:TCP,5061:UDP,11005:TCP,11005:UDP,11007:TCP,11007:UDP,9633:TCP,9633:UDP,7276:TCP,7276:UDP,7286:TCP,7286:UDP,5558:TCP,5558:UDP,5578:TCP,5578:UDP
sudo ./openFirewallPorts.sh -ports $serverPorts -persist true
Start your server.
Deploy your application.
There are a few slight differences for a Liberty Collective, but again, at a high level, you should be able to try the following:
Switch your user to wsadmin or ssh to your host using wsadmin / password
On each host, create a server and join it to the collective. Be sure to use the full host name of the controller for the --host parameter.
cd WAS_HOME/bin
./server create server
./collective join server --host=yourhostname --port=9443 --user=wsadmin --password=xxxxxxxx --keystorePassword=yyyyyyyy
Accept the chain certificate (y/n) y
Save the output from each join so you can paste it into each host's application server.xml file before deploying your application.
Install the features required by your application on each host. The features listed below are an example.
cd /opt/IBM/WebSphere/Liberty/bin
./featureManager install --acceptLicense ejblite-3.2 websocket-1.0 jsp-2.3 jdbc-4.1 jaxrs-2.0 cdi-1.2 beanValidation-1.1
NOTE: Output from this command will contain messages similar to:
chmod: changing permissions of `/opt/IBM/WebSphere/Liberty/bin/featureManager': Operation not permitted
This is OK. You should see this message upon completion:
Product validation completed successfully.
Update your application's server.xml file with the information saved in Step 2.
Start your server.
Deploy your application.
Verify your application is reachable at http://yourhostname:9080/appname.