I'm trying to get Eclipse-Che to run on EC2, and I'm running into a few issues.
I can get the Eclipse-Che server to start if I only port-map 8080, but then I cannot connect to any workspaces, presumably because I'm missing ports 32768-32788.
If I map ports 32768-32788 in addition to 8080, then I cannot connect to the che server at all.
I've been reading the Eclipse-Che docker usage documentation and I can't figure out how to set the -it flags when I define my task on EC2.
I'm new to both Docker and EC2, so it isn't clear to me whether those flags are important, or whether they could be causing the behavior I'm seeing.
Any help would be appreciated. Thanks.
It seems that you are attempting to launch Che from a Docker container? There are two ways to launch it: natively, or within a container. I recommend the native approach. You then just need to make sure that your EC2 node has ports 8080 and 32768-32788 open to the outside world. If you have more questions, you can post an issue at github.com/eclipse/che, and an engineer will respond.
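For the "ports opened" part, a sketch of the security-group change with the AWS CLI. The group ID is a placeholder, and the commands are only printed here for review; run them with your real group ID to apply:

```shell
# Sketch: open the Che ports in the instance's security group via the AWS CLI.
# SG_ID is a placeholder -- substitute the security group of your EC2 node.
SG_ID="${SG_ID:-sg-0123456789abcdef0}"

# Build the commands first so they can be reviewed before running them.
cmds=$(for range in 8080 32768-32788; do
  echo "aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port $range --cidr 0.0.0.0/0"
done)

printf '%s\n' "$cmds"   # review, then run the lines you are happy with
```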
I am new to Kubernetes and trying to throw together a quick learning project; however, I am confused about how to connect a pod to a local service.
I am storing some config in a ZooKeeper instance that I am running on my host machine, and am trying to connect a pod to it to grab config.
However, I cannot get it to work. I've tried the magic "10.0.2.2" that I've read about, but that did not work. I also tried creating a service and endpoint, but again to no avail. Any help would be appreciated, thanks!
Also, for background I'm using minikube on macOS with the hyperkit vm-driver.
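For reference, the selector-less Service plus Endpoints pair I tried looked roughly like this (the IP is my guess at the host address as seen from the hyperkit VM; 10.0.2.2 is a VirtualBox convention and wouldn't apply here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
    - port: 2181
      targetPort: 2181
---
apiVersion: v1
kind: Endpoints
metadata:
  name: zookeeper      # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.64.1   # host IP as seen from the hyperkit VM (assumption)
    ports:
      - port: 2181
```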
I'm a complete newbie with Kubernetes, and have been trying to get secure CockroachDB running. I'm using the instructions and preconfigured .yaml files provided by Cockroach. https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html
I'm using the Cloud Shell in my Google Cloud console to set everything up. Everything goes well, and I can do local SQL tests and load data. Monitoring the cluster by proxying to localhost with the command below starts off serving as expected:
kubectl port-forward cockroachdb-0 8080
However, when using cloud shell web preview on port 8080 to attach to localhost, the browser session returns "too many redirects".
My next challenge will be to figure out how to expose the cluster on a public address, but for now I'm stuck on what seems to be a fairly basic problem. Any advice would be greatly appreciated.
Just to make sure this question has an answer, the problem was that the question asker was running port-forward from the Google Cloud Shell rather than from his local machine. This meant that the service was not accessible to his local machine's web browser (because the Cloud Shell is running on a VM in Google's datacenters).
The ideal solution is to run the kubectl port-forward command from his own computer.
Or, barring that, to expose the cockroachdb pod externally using kubectl expose pod cockroachdb-0 --port=8080 --type=LoadBalancer, as suggested in the comments.
I want to use the following deployment architecture:
One machine running my web server (nginx)
Two or more machines running uWSGI
PostgreSQL as my DB on another server
All three are separate host machines on AWS. During development I used Docker and was able to run all three on my local machine, but I am clueless now that I want to split them across three separate hosts. Any guidance, clues, or references would be greatly appreciated. I'd prefer to do this using Docker.
If you're really adamant about keeping the services separate on individual hosts, there's nothing stopping you from still running your nginx/uWSGI containers on an EC2 host with Docker installed. You could even use a CoreOS AMI, which comes with a nicely secured Docker instance pre-loaded (https://coreos.com/os/docs/latest/booting-on-ec2.html).
For the database, use PostgreSQL on AWS RDS.
If you're running containers, you can also look at AWS ECS, Amazon's container service, which would be my initial recommendation, but I saw that you wanted all these services on individual hosts.
You can use docker stack to deploy the application to a swarm: join the other two hosts as workers and use the placement option (https://docs.docker.com/compose/compose-file/#placement):

deploy:
  placement:
    constraints:
      - node.role == manager

Changing the constraint per service (node.role == manager vs. node.role == worker, or custom node labels) restricts each service to running on a particular host.
You can also make this more secure by using a VPN if you wish.
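As a concrete sketch of the label-based variant (service names, images, and labels are placeholders, not from the original setup), a stack file pinning each service to its own labelled host might look like:

```yaml
# docker-stack.yml -- deploy with: docker stack deploy -c docker-stack.yml myapp
version: "3.7"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      placement:
        constraints:
          - node.labels.tier == web   # label a host first: docker node update --label-add tier=web <node>
  app:
    image: myapp/uwsgi:latest         # placeholder image
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.labels.tier == app
```

Custom node labels avoid tying placement to the manager/worker role, so any host can be designated for any service.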
I'm running a small OpenShift cluster and would like to provide our developers with a hosted instance of Mongo on it, which they would connect to externally.
Which is easy enough, I thought. Sadly, it still looks like all traffic has to go over HAProxy and is limited to HTTP/HTTPS, but my developers need transparent access to the standard Mongo port, 27017.
Is there some way to expose the internal pod port to the outside world without knowing which pod it runs on?
Right now our dirty workaround is:
oc port-forward mongodb-1-2n1ov 27017:27017
and then the client does SSH forwarding from their machine to this.
Instead, we would rather have an automated solution that allows TCP forwarding for defined virtual hostnames.
Could anyone point me in the right direction, please?
You are right. We had a similar issue, and the only other way we thought of was to update the serviceCIDR to a range routable within our network; we did not go that route, though. HAProxy only handles HTTP/HTTPS, while the services themselves support TCP/UDP, and MongoDB on 27017 uses TCP.
I too would like to know more about this if anyone else can share.
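One direction that may be worth investigating (untested here; the service name, label selector, and port values are assumptions) is a NodePort service, which exposes the TCP port on every cluster node directly and bypasses the HTTP router entirely:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-external
spec:
  type: NodePort
  selector:
    name: mongodb        # assumption: the mongo pods carry this label
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 30017    # must fall within the cluster's NodePort range
```

Clients would then connect to any node's address on port 30017, with no per-pod knowledge required.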
I've built a Container that leverages a CF app that's bound to a service, Cloudant to be specific.
When I run the container locally I can connect to my Cloudant service.
When I build and run my image in the Bluemix container service, I can no longer connect to my Cloudant service. I did use --bind to bind my app to the container, and I have verified that the VCAP_SERVICES info is propagating to my container successfully.
To narrow the problem down further, I tried just doing an
ice run --name NAME IMAGE_NAME ping CLOUDANT_HOST
and I found I was getting an unknown host.
So I then tried to just ping the IP, and got Network is unreachable.
If we cannot resolve Bluemix services over the network, how can we leverage them? Is this just a temporary problem, or am I missing something?
Again, runs fine locally but fails when hosted in the container service.
It has been my experience that networking is not reliable in IBM Containers for about 5 seconds at startup. Try adding a "sleep 10" to your CMD or ENTRYPOINT. Or set it up to retry for X seconds before giving up.
Once the networking comes up, it has been reliable for me, but the first few seconds of a container's life have had trouble with DNS, binding, and outgoing traffic.
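As a sketch of the retry approach (the wrapper script, the CLOUDANT_HOST variable, and the /app/start.sh path are assumptions, not part of the original setup):

```shell
#!/bin/sh
# Hypothetical ENTRYPOINT wrapper: wait for container networking before starting
# the app, instead of a fixed "sleep 10".

wait_for() {
  # usage: wait_for <max_tries> <command...>
  # retries the command up to <max_tries> times, sleeping 1s between attempts
  tries=$1; shift
  n=0
  until "$@" >/dev/null 2>&1; do
    n=$((n + 1))
    [ "$n" -ge "$tries" ] && return 1
    sleep 1
  done
  return 0
}

# Real usage would block until the Cloudant host resolves, then start the app:
# wait_for 30 getent hosts "$CLOUDANT_HOST" && exec /app/start.sh
# Harmless demonstration:
wait_for 3 true && echo "ready"
```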
Looking at your problem, it could be related to a network error in the container when it runs on Bluemix.
Try to access your container through a shell when it is on Bluemix (using cf ic console, or the Docker equivalent) and check whether the network has been brought up correctly and whether its network interface(s) have an IP assigned.