Syndesis (Fuse-online) integration build failed for unknown host "repo1.maven.org" - kubernetes

We installed fuse-online 7.4 on OpenShift 3.11. We created an integration containing an OpenApiProvider connection and an SQL connection.
When we publish the integration, the build fails with the following error:
"repo1.maven.org: Name or service not known: Unknown host repo1.maven.org: Name or service not known"
OpenShift is installed behind an enterprise HTTP proxy.
The image registry.access.redhat.com/fuse7/fuse-ignite-s2i is pulled correctly, since Docker is configured with the proxy.
The syndesis-server DeploymentConfig has been set with the proxy environment variables.
I suppose that, since the BuildConfig for the integration is created dynamically, it is not possible to inject the HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables into the build pod.
We read https://docs.openshift.com/container-platform/3.11/install_config/http_proxies.html#s2i-builds but since we don't have the rights to modify the s2i image we cannot proceed.
Is there any way to provide proxy information during a fuse-online integration build?

In the end we succeeded in injecting the HTTP proxy settings into the dynamically created build pods.
We modified the syndesis-server-config config map, adding the proxy settings to the mavenOptions key like this:
mavenOptions: "-XX:+UseG1GC -XX:+UseStringDeduplication -Xmx310m -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts="
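For anyone who needs to do the same, the change can be applied by editing the config map in place; a rough sketch, assuming the usual Fuse Online namespace (the namespace and the proxy host/port values below are placeholders for your environment):
$ oc edit configmap syndesis-server-config -n fuse-online
Then locate the mavenOptions entry and append the proxy system properties, for example:
-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080 -Dhttp.nonProxyHosts=localhost|*.svc|*.cluster.local
After saving, you will likely need to restart the syndesis-server pod so the new configuration is picked up, and then republish the integration.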
Thanks for the support
Let me know if you have any other ideas for resolving the issue.

Can you check the DNS settings of your network connection? Not sure why, but sometimes I have to switch to one of the "reliable" DNS servers on my machine (like Google's 8.8.8.8) to make sure repo1.maven.org is reachable.
You can check whether this is the problem by trying a simple
$ ping repo1.maven.org
If that doesn't work, you have to check your DNS.
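If the ping fails, it can also help to compare what your configured resolver returns against a public DNS server, for example (assuming dig is installed):
$ dig repo1.maven.org
$ dig repo1.maven.org @8.8.8.8
If only the second query resolves, your local DNS configuration is the problem.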

Related

Fabric8/JBoss Fuse SSH container creation

I'm trying to create a new SSH container in an existing fabric, and it was successfully created using this command:
fabric:container-create-ssh --proxy-uri http://"root-container":8181/maven/download/ --jvm-opts "-Xms1024m -XX:MaxPermSize=1024m -Xmx2014m -Djavax.net.debug=ssl" --path /app/testing/ --host "testIP" --private-key ~/.ssh/id_rsa --profile default --profile anotherprofile --resolver localhostname --zookeeper-password "zookeepr pass" testing-container
The problem is that once I created this new container, all the existing SSH containers changed their maven download/upload proxy to the new container's IP,
so instead of using http://"currentroot":8181/maven/download/ they now use http://testIP:8181/maven/download/.
I tried a lot to change the maven proxy from the "root-container" fabric profile and the default profile, but still couldn't reach a solution.
Is there a missing step I should take so that I can add a new SSH container without changing the existing maven proxy?
It depends on what anotherprofile is.
http://host:port/maven/download/ URIs are registered in the Zookeeper registry used by the fabric environment.
The fabric-maven-proxy feature, which is declared in the fabric profile, is responsible for starting a maven proxy inside any container that has this feature/profile installed. Please check whether your new container has this feature installed. Maybe anotherprofile has the fabric profile set as a parent?
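You can check this from the Karaf shell of the root container; roughly (profile-display lists a profile's parents and features, and anotherprofile is just your example name):
fabric:profile-display anotherprofile
fabric:profile-display fabric
If fabric (and with it fabric-maven-proxy) shows up in the new container's profile chain, that explains the extra maven proxy being registered.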
How containers use remote repositories
Each container, when provisioned by the fabric-agent, uses the io.fabric8.agent PID configuration (OSGi Configuration Admin); the relevant property is org.ops4j.pax.url.mvn.repositories. It contains the list of remote repositories that are searched for artifacts to install in the container.
But there's some dynamism involved too. The fabric agent always searches the Zookeeper registry and finds URIs that are registered by other containers running the above-mentioned feature (fabric-maven-proxy). All such discovered URIs are prepended to the list found in the org.ops4j.pax.url.mvn.repositories property.
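To inspect or change the static part of that list you can edit the agent PID in a profile; something like the following (the repository URL and the default profile are only examples, not specific to your setup):
fabric:profile-edit --pid io.fabric8.agent/org.ops4j.pax.url.mvn.repositories='https://repo1.maven.org/maven2@id=central' default
fabric:profile-display default
The maven-proxy URIs discovered in Zookeeper are still prepended to whatever you configure there.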
How to check maven problem in logs
If you add the karaf profile to a container, you'll have the logging configuration available in the org.ops4j.pax.logging PID - you can nicely configure it in hawtio. By default, there's a commented section like this:
# help with identification of maven-related problems with fabric-maven
#log4j.logger.org.eclipse.aether = TRACE
#log4j.logger.org.apache.http.headers = DEBUG
#log4j.logger.io.fabric8.maven.util = TRACE
#log4j.logger.io.fabric8.maven.url = TRACE
#log4j.logger.io.fabric8.agent.download = DEBUG
You can uncomment these to see (much) more information about how maven repositories are used.
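Instead of editing the file by hand, you can also turn these loggers on per profile; roughly (using the karaf profile here because that's where the logging PID lives in this setup, adjust the profile name as needed):
fabric:profile-edit --pid org.ops4j.pax.logging/log4j.logger.org.eclipse.aether=TRACE karaf
fabric:profile-edit --pid org.ops4j.pax.logging/log4j.logger.io.fabric8.agent.download=DEBUG karaf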

Issue connecting composer to Blockchain on Bluemix - identity or token does not match

I have fabric composer 0.7.2 installed on my Mac, and I was able to follow this thread to get it connected to my Blockchain (Fabric v0.6.1) on Bluemix.
fabric-composer-integration-with-bluemix-blockchain-service
Now I am trying to build an Ubuntu (16.04) Docker container and run composer-rest-server there. When I try to connect to my blockchain service from my Docker container (using the same id, WebAppAdmin, that I used on my Mac) I get an error:
Discovering types from business network definition ...
Connection fails: Error: Identity or token does not match.
It will be retried for the next request.
{ Error: Identity or token does not match.
at /home/composer/.nvm/versions/node/v6.10.3/lib/node_modules/composer-rest-server/node_modules/grpc/src/node/src/client.js:417:17 code: 2, metadata: Metadata { _internal_repr: {} } }
I tried copying the cert from my mac to my docker container:
/home/composer/.composer-credentials/member.WebAppAdmin
but when I did that I got a different error that says "signature does not verify". I did some additional testing, and I discovered that if I used an id that I had not previously used with composer (i.e. user_type1_0) then I could connect, and I could see a new cert in my .composer-credentials directory.
When I tried deleting that container and building a new one (I dorked something else up), I could not use that same userid again.
Does anybody know how security and these certs are supposed to work? It would seem as though something to do with certificate generation/validation is tied to the client (i.e. hardware address), such that if I try to re-use an id on a different machine, the certs or keys or something don't match. I have a way to make things work, but it doesn't seem like it's the right way if I can't use the same id from different machines.
Thanks!
Hi, I tried to recreate this by having blockchain running on a Unix machine; I then copied my connection profile and certificate to my Mac and edited the connection profile to update the IP address and key store. I then did a composer network ping and it worked fine.
I am using composer v0.7.4, so you could try that.
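For reference, the ping I ran was roughly the following (the profile and network names are placeholders, and the exact flags may differ between composer versions):
composer network ping -p bluemixProfile -n my-network -i WebAppAdmin -s <secret>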
I have also faced this issue, and concluded that:
There is inconsistent behavior while deploying a network using composer on cloud environments, including Bluemix. The problem is not with composer, but with fabric 0.6.
I am assuming that this issue is also indirectly related to the following known bugs in fabric 0.6, which will not be fixed in fabric 0.6.
ERROR:
"
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:217:12)
"
So far, we have understood that the following three JIRA issues are the root cause: essentially, the cloud networking layer ends up killing the idle event hub connection after a period of inactivity, and the fabric SDK cannot handle this.
https://jira.hyperledger.org/browse/FAB-4002
https://jira.hyperledger.org/browse/FAB-3310
https://jira.hyperledger.org/browse/FAB-2787
Conclusion:
There is no alternative way of fixing this issue with Bluemix or any other cloud environment on fabric 0.6.
You may not experience this issue with Fabric 1.0, but it is still possible, as all of the above-mentioned defects are not fixed yet.

JBoss connectivity issue

I am getting the following error when trying to connect my application to jboss:
WARN | ISPN004022: Unable to invalidate transport for server:
/127.0.0.1:12222 ERROR | ISPN004017: Could not fetch transport
org.infinispan.client.hotrod.exceptions.TransportException:: Could not
connect to server: /127.0.0.1:12222
Tried searching a lot for a solution. It would be great if someone could help me out with this. Thanks.
You should check the following:
Make sure that your webapp is using the same port as defined in the socket-binding definition for hotrod in the standalone.xml in the JDG configuration folder (see the configuration sketch further below);
Make sure that your webapp is using the proper injection annotations for your RemoteCacheManager (remember to use the @ApplicationScoped annotation on the class definition and on any additional methods used to get the cache instance);
If you are using JBoss and JDG on the same host, check the declaration of the JBOSS_HOME environment variable. This variable must point to the JDG installation home directory and not the JBoss EAP home (also check the port-offset settings at startup if you're using a custom shell script);
If you are not using both products on the same host, check firewall and network settings;
Remember to re-deploy the application always after every modification and check both EAP and JDG console output for warnings and/or errors.
The following errors are related (for example):
14:38:42,610 WARN [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (http-127.0.0.1:8080-1) ISPN004022:
Unable to invalidate transport for server: /127.0.0.1:11322
14:38:42,610 ERROR [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (http-127.0.0.1:8080-1) ISPN004017:
Could not fetch transport: java.lang.IllegalStateException: Pool not open
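As an illustration of the first point, the port the client connects to has to match the hotrod socket-binding on the server; a rough sketch (11222 is the usual JDG default, your installation may use a different port or a port-offset):
<socket-binding name="hotrod" port="11222"/>
and on the client side, for example in hotrod-client.properties:
infinispan.client.hotrod.server_list=127.0.0.1:11222
If the errors show the client trying a port (11322, 12222, ...) that nothing is listening on, the mismatch is usually in one of these two places.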

installing kubernetes on coreos with rkt and automated script

I'm trying to install kubernetes with rkt on my real (not virtual) CoreOS servers at home using the scripts at https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic and I have some questions.
My etcd2 setup uses TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
When I tried to install kubernetes manually, I needed to start the rkt API service. The documents don't state that it is needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing in the documents?
thanks!
Update
Rob, thank you so much for your response. I wasn't clear enough regarding etcd2. I already have etcd2 with TLS installed and properly configured on my CoreOS servers, so I configured my etcd servers in the controller-install.sh file:
export ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
but when I run the controller-install.sh script, it repeats the following output:
Waiting for etcd...
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
...
So I was guessing it's because I didn't define the etcd-related TLS certificates in the controller script, and that is why it gets stuck at that phase.
On my MacBook Pro laptop I have the following alias configured:
alias myetcdctl="~/apps/etcd-v3.0.8-darwin-amd64/etcdctl --endpoint=https://coreos-2.tux-in.com:2379 --ca-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/ca.pem --cert-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1.pem --key-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1-key.pem --timeout=10s"
So when I run myetcdctl member list I get:
8832ce6a269a7dac: name=ccff826d5f564c67abf35467306f80a0 peerURLs=https://coreos-3.tux-in.com:2380 clientURLs=https://coreos-3.tux-in.com:2379 isLeader=true
a2c0ac9708ef90fc: name=dc38bc8f20e64940b260d3f7b260430d peerURLs=https://coreos-2.tux-in.com:2380 clientURLs=https://coreos-2.tux-in.com:2379 isLeader=false
So I'm guessing that I don't really have a problem there.
Any ideas?
Thanks!
My etcd2 setup uses TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
These scripts don't start an etcd server. You will need to set one up manually, and you will be able to use TLS and as many nodes as you would like. This isn't clear in the current form of the document; I will attempt a PR to fix it.
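As a quick sanity check from the controller node itself, you can hit the etcd health endpoint with the same certificates you use in your etcdctl alias; a sketch with placeholder certificate paths:
$ curl --cacert /path/to/ca.pem --cert /path/to/etcd1.pem --key /path/to/etcd1-key.pem https://coreos-2.tux-in.com:2379/health
If that returns a healthy response but the install script still loops on "Waiting for etcd...", the script's own etcd check is most likely being made without those certificates.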
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
Only CONTROLLER_ENDPOINT can be a domain name.
When I tried to install kubernetes manually, I needed to start the rkt API service. The documents don't state that it is needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing in the documents?
These scripts include/start the rkt API service. As you can see below, it also has a Restart parameter set (source):
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service

Configuring Application Endpoints in AppFabric

I am using AppFabric 1.1 in IIS 7.5 on a Windows 7 machine to host my workflows as services. Though I am able to see the system endpoints and the application default endpoints in the AppFabric dashboard in IIS, I am not able to see the endpoints that I defined in the Web.config file of the application. Also, when I add a service reference in my client projects, I can only see the default endpoint configuration values provided by AppFabric. It appears that AppFabric is ignoring the <service> tag values in the application's web.config file. What could be the reason? Is there something I might have missed? Any suggestions are greatly appreciated.
Thanks
I found the answer. When I changed the service name (in the service tag element) to exactly match the name shown in the AppFabric dashboard for the service, the application endpoints showed up.
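For anyone hitting the same thing: the name attribute of the service element has to match the full service name that the dashboard displays (typically the namespace-qualified name of the workflow service); a sketch of the relevant web.config fragment, with placeholder names:
<system.serviceModel>
  <services>
    <service name="MyCompany.Workflows.OrderService">
      <endpoint address="" binding="basicHttpBinding" contract="IOrderService" />
    </service>
  </services>
</system.serviceModel>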