Create new OpenShift application - Eclipse

When I try to create a new OpenShift application in Eclipse, it shows me this error:
The authenticity of host 'droptest-daasenv.rhcloud.com' can't be established. RSA key fingerprint is cf:ee:....:a7.
When I click Next, it shows this message:
Could not clone the repository. Authentication failed.
Please make sure that you added your private key to the SSH preferences.
ssh://56a4d5fb7628e14ac3000081@droptest-daasenv.rhcloud.com/~/git/droptest.git/: Session.connect: java.io.IOException: End of IO Stream Read
Can you tell me what I should do?

I assume you already have an OpenShift Online (Red Hat cloud) account.
In that case you just need to configure your SSH keys for OpenShift Online:
http://tools.jboss.org/documentation/howto/openshift_configssh.html
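If the keys are not set up yet, here is a minimal sketch of what that configuration boils down to (this assumes the rhc client is installed; the key path, the key name "mykey", and the UUID@host taken from your git URL are placeholders from this question):
# Generate a key pair if you do not have one yet
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
# Upload the public key to your OpenShift Online account
rhc sshkey add mykey ~/.ssh/id_rsa.pub
# Verify the key is accepted by opening a shell on the gear
ssh -i ~/.ssh/id_rsa 56a4d5fb7628e14ac3000081@droptest-daasenv.rhcloud.com
Eclipse (JBoss Tools) must then point at the same private key in the SSH2 preferences (General > Network Connections > SSH2).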
A second option: take a look at this page, where I put a 4-minute (muted) video showing how to use JBoss Tools to connect to OpenShift Online:
http://www.asegno.com/jboss-dev/


Allow a self-signed certificate in ownCloud on a Synology

I have ownCloud version 9.1.8 running on a Synology. I have now installed ONLYOFFICE on a local server with a self-signed certificate. It is important to know that the ONLYOFFICE server runs only in the local network, so I cannot use something like Let's Encrypt: I only have a local server name, not a public one, and Let's Encrypt therefore cannot verify the server. (The server itself can access the internet, though, if that helps with a solution.)
Now I have the problem that ownCloud gives me the following error message
"Error while downloading the document file to be converted."
when I save the URL in the ONLYOFFICE configuration in ownCloud. I guess the problem is that I am using a self-signed certificate. Do you know what I can do? Google does not really help me.
"Error while downloading the document file to be converted."
means that Document Server cannot validate your storage's self-signed certificate (ownCloud in your case).
There are two possible workarounds (see the sketch after this list):
1) Change "rejectUnauthorized" to false in the /etc/onlyoffice/documentserver/default.json config file.
2) Change the default Node.js CA store:
Edit the files:
/etc/supervisor/conf.d/onlyoffice-documentserver-converter.conf
/etc/supervisor/conf.d/onlyoffice-documentserver-docservice.conf
and add the flag --use-openssl-ca to the parameters line in each.
Then add your certificate to the default CA store and restart the ONLYOFFICE services:
supervisorctl restart all
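A minimal sketch of both workarounds (the config paths are the defaults named above; the certificate filename and the Debian-style CA commands are assumptions, adjust them for your distribution):
# Workaround 1: turn off certificate validation in Document Server
sed -i 's/"rejectUnauthorized": true/"rejectUnauthorized": false/' /etc/onlyoffice/documentserver/default.json
supervisorctl restart all
# Workaround 2: keep validation, but trust your certificate system-wide
# (after adding --use-openssl-ca to both supervisor config files)
cp /path/to/owncloud-selfsigned.crt /usr/local/share/ca-certificates/
update-ca-certificates
supervisorctl restart all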

Unable to fetch data from T24 (TAFJ R18) when working with Design Studio

I get the error below when importing T24 applications in Design Studio. The T24 server (TAFJ R18) I am trying to connect to is up (JBoss is running), but I still see this issue:
Unable to fetch data from T24. Check your connection details and if T24 is up and running.
Subroutine:
Return Code: FAILURE
Response size: 1
Response 1 ->Response Code: EB-SECURITY.VIOLATION,Response Type: NON_FATAL_ERROR,Response Text: Please check your Login Credential and/or access rights,Response Info: 98748ebf-f73d-4e86-8506-950b2fd0b5d2,
It looks like the username and password you provided in t24-server/config/server.properties are not correct. Make sure you can log in to T24 (Browser or Classic) with the T24 user provided in these settings:
#T24 User name used for introspection and deployment (TAFJ)
username=INPUTT
#T24 Encrypted password used for introspection and deployment (TAFJ)
password={encoded}gXhuXZkbBuL09T8WFlRR+w==
Other important settings in this file:
#T24 host name to connect to (IP address or Domain name)
host=localhost
#T24 Web service (TAFJ) port number to connect
ws.port=8080
#Protocol: ftp, sftp or local (TAFC & TAFJ: used for *.b and *.d file transfer)
protocol=ws
#context for web-service
context=axis2
Also check basic connectivity, and whether anyone is restarting JBoss while you are importing.
Check that the server status is "active" in Design Studio, or re-establish the server connection.
And if you are using a VPN to reach the database, make sure it is still active (see the quick check below).
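As a quick sanity check from the Design Studio machine, you can probe the host and web-service port from server.properties (localhost:8080 and the axis2 context in this example; expect an HTTP status code rather than a timeout if the TAFJ web service is reachable):
# Is the host reachable at all?
ping -c 3 localhost
# Does the web service answer on the configured port and context?
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/axis2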

Problem accessing orion-psb-image-R5.4 on FIWARE Lab using ssh

These are the steps I followed:
1. Created a keypair.
2. Downloaded the keypair and used PuTTYgen to generate a private key.
3. Created a new instance using the orion-psb-image-R5.4 image for a context broker.
4. Created a security group and added a rule that opens the SSH port.
5. Associated a floating IP with that instance.
6. Tried to access the instance from PuTTY using the floating IP and the private key generated in step 2.
PuTTY gives me this error:
Disconnected: No supported authentication methods available (server sent: publickey).
I would like to know how to solve this issue and understand the reason for it.
Update: screenshots (not included here):
1. Loading the downloaded keypair into PuTTYgen.
2. The downloaded keypair file from FIWARE Lab (keypair.pem) and the generated private key.
3. Entering the floating IP for the context broker instance.
4. Loading the generated private key to use during connection establishment.
5. The error message when I try to connect.
This seems to be a problem with key generation or PuTTY configuration. Unfortunately, the question doesn't include enough detail to give a more precise answer.
I'd suggest editing your question to include full details of each step you performed (including screenshots as you go).
EDIT: use centos as the login user instead of root.
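For reference, the equivalent OpenSSH invocation with the centos user looks like this (the floating IP is a placeholder; in PuTTY, set the auto-login username to centos under Connection > Data and load the .ppk key under Connection > SSH > Auth):
# The key file must not be world-readable
chmod 600 keypair.pem
# Log in as centos, not root
ssh -i keypair.pem centos@<floating-ip>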

Bluemix dashboard: Unable to update route (BXNUI0030E)

This seems like a Bluemix internal error to me.
I tried to add another route to one of my Liberty apps using the new Bluemix dashboard, and the error message I get is:
BXNUI0030E: The 'xxxxxx.au-syd.mybluemix.net' route wasn't mapped to the 'xxxxx-arya' app because a problem occurred contacting Cloud Foundry. Try again later. If you see this message again, go to the Bluemix status page to check whether a service or component has an issue. If the problem continues, click the Account and Support icon in the top menu bar, click Get help, and search for help or get support.
The error message doesn't clearly identify the cause of the problem. It only tells you that the Cloud Foundry command behind the web interface failed to create the route mapping, not what exactly happened. Run the commands below to see what happens during map-route creation and fix the problem:
Log in to the space with the cf CLI (the credential, org, and space values were elided in the original post; placeholders shown here):
cf login -a api.au-syd.bluemix.net -u USERNAME -p PASSWORD -o ORG -s SPACE
List all routes in the space and make sure the app and the route mapping are there:
cf routes
Create the route mapping with:
cf map-route APPNAME au-syd.mybluemix.net --hostname NEW-HOSTNAME
Check the error message and fix the problem.
*) The most common problem is a duplicate route: the hostname is already used in another space. If you cannot find the hostname in any of your organizations, contact Bluemix Support with your new hostname. If it turns out to be in one of your own spaces, you can free it and retry, as sketched below.
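A minimal sketch of that retry, assuming the hostname is mapped in another of your own spaces (ORG, the space names, APPNAME, and NEW-HOSTNAME are placeholders):
# Free the hostname in the space that currently holds it
cf target -o ORG -s OTHER-SPACE
cf delete-route au-syd.mybluemix.net --hostname NEW-HOSTNAME
# Switch back and map the route to your app
cf target -o ORG -s ORIGINAL-SPACE
cf map-route APPNAME au-syd.mybluemix.net --hostname NEW-HOSTNAME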

Not able to connect to cluster. Facing Certificate signed by unknown authority

I am not sure whether what I am trying to do is possible or the correct approach.
One of my colleagues spun up a Kubernetes GCE cluster (with 1 master and 4 minions) in a project that is shared with me with owner access.
After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the Kubernetes workspace.
I also set the project name as the default project in gcloud config, and now I can connect to the master and slaves using 'gcutil ssh --zone us-central1-b kubernetes-master'.
But when I try to list the existing pods using 'cluster/kubecfg.sh list pods',
I see
"F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe")
I tried to debug from my side but could not reach any conclusion. Any sort of clue will be helpful.
You can copy the cert files off of the master again. They are located in /usr/share/nginx on the master.
It is probably due to a not-yet-implemented feature; see this issue:
https://github.com/GoogleCloudPlatform/kubernetes/issues/1886
You can copy the files from /usr/share/nginx/... on the master into your home dir and try again, for example as below.
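A hedged sketch of that copy, piping through the same gcutil ssh command used above (the instance name and zone come from this question; the exact filenames under /usr/share/nginx are assumptions, so list the directory first if they differ):
# Pull each cert off the master into the filenames the client expects
gcutil ssh --zone us-central1-b kubernetes-master sudo cat /usr/share/nginx/kubecfg.crt > ~/.kubecfg.crt
gcutil ssh --zone us-central1-b kubernetes-master sudo cat /usr/share/nginx/kubecfg.key > ~/.kubecfg.key
gcutil ssh --zone us-central1-b kubernetes-master sudo cat /usr/share/nginx/ca.crt > ~/.kubecfg.ca.crt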
I figured out a workaround: set the -insecure_skip_tls_verify option.
In kubecfg.sh, change the code near the bottom to:
else
  auth_config=(
    "-insecure_skip_tls_verify"
  )
fi
Obviously this is insecure, and you are putting yourself at risk of a man-in-the-middle attack, etc.