Why can't I deploy additional Node-RED nodes anymore? I can add them to an existing, small flow, but they keep their little blue dot, even when I click DEPLOY.
It seems the nodes do get deployed; the blue dots just don't disappear for some reason, even after a restart.
Related
I had created a Node-RED resource in IBM Cloud; after loading, it shows as dull (greyed out).
I tried different browsers and cleared my cache, and I tried deleting the resource and adding another, but it didn't help.
I have Joomla installed as one of half a dozen local WampServer projects (I use Joomla as a web-page management portal to task-manage everything I do).
Joomla works, so phpMyAdmin is working and PHP must be too.
But I continually get "2 of 3 services running".
I've tried the "mysqlold" solution in Task Manager and nothing changes.
Given the above and what is working, which service isn't running, and what can I do to make it so?
I apologise for the newbie nature of the question, but I've only just started using WampServer.
Well, the 3 services that WampServer is talking about are:
Apache
MySQL Server
MariaDB Server
Now it's quite legitimate to have only Apache and MySQL running, or Apache and MariaDB.
If you have stopped, let's say, MariaDB, then you would see the tooltip
"2 of 3 services running"
but as you only want Apache and MySQL, that would be fine. NOTE: if you have stopped one of the services, the WampServer icon will be orange instead of green.
It's rare that you want MySQL and MariaDB running at the same time; it's unlikely one site would require both, so it is possible to STOP one of the databases. If you have, intentionally or accidentally, done this, you will see a black square beside the Services Administration wampmariadb64 menu line,
and if the service is started you will see a green tick.
It is also possible to turn off one of the database servers completely.
Using the wampmanager menu
**right click**->Wamp Settings->Allow MariaDB
You should see 2 lines
Allow MySQL
Allow MariaDB
If these 2 lines have a green tick beside them, then that database's service should be installed; if not, then that database service has been uninstalled. Clicking "Allow XXXX" toggles that service between installed and uninstalled. If you uninstall one, the tooltip you get when hovering over the wampmanager icon will say
"2 of 2 services started"
assuming there are no other issues :)
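If you prefer to check the underlying Windows services directly, you can query them from an elevated Command Prompt. A sketch: the wampmariadb64 name comes from the menu line above; the other two service names are assumptions and may differ on your install (check the Services Administration menu for the exact names).

```shell
:: Check whether the WampServer services are installed and running.
:: wampapache64 / wampmysqld64 are assumed names; wampmariadb64 is
:: the one shown in the wampmanager menu.
sc query wampapache64
sc query wampmysqld64
sc query wampmariadb64

:: Start a stopped service, e.g. MariaDB:
sc start wampmariadb64
```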
I was facing the same issue and tried a lot of things. I finally fixed it with these steps:
Uninstall WAMP and delete the wamp directory
Install the VC 16 package
Install WAMP again and it will work like a charm
I have an HTTP application (Odoo). This app supports installing/updating modules (addons) dynamically.
I would like to run this app in a Kubernetes cluster, and I would like to dynamically install/update the modules.
I have 2 solutions for this problem; however, I was wondering if there are other solutions.
Solution 1:
Include the custom modules with the app in the Docker image
Every time I make a change in a custom module, I push it to a Git repository. Jenkins pulls the changes, creates a new image, and then applies the new changes to the Kubernetes cluster.
Advantages: I can manage the Docker image version and restart an image if something happens.
Drawbacks: this solution is not bad for production; however, the list of all custom module repositories has to be included in the Dockerfile. Suppose I have two custom modules, each in its own repository: a change to either one leads to a rebuild of the whole Docker image.
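As a sketch, the image for solution 1 might look like this. The base tag and addon path are assumptions based on the official odoo image; the COPY lines are what forces a full rebuild on any module change.

```dockerfile
# Sketch only: base image tag and paths are assumptions.
FROM odoo:14

# Every custom-module repository must be baked into the image,
# so a change to any one of them rebuilds the whole image.
COPY ./custom_module_a /mnt/extra-addons/custom_module_a
COPY ./custom_module_b /mnt/extra-addons/custom_module_b
```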
Solution 2:
Have a persistent volume that contains only the custom modules.
If a change is made to a custom module, it is updated in the persistent volume.
The changes then need to be applied to each pod running the app (I don't know, maybe by doing a restart).
Advantages: small changes don't trigger an image build, and we don't need to recreate the pods each time.
Drawbacks: controlling the version of each update is difficult (I don't know if Kubernetes has version control for persistent volumes).
Questions:
Is there another solution to solve this problem?
For both methods, there is a command that should be executed for the module changes to be taken into consideration: odoo --update "module_name". This command takes the module name. For solution 2, how do I execute a command in each pod?
For solution 2, is it better to restart the app service (odoo) instead of restarting all the pods? Meaning, if we can execute a command on each pod, we can just restart the service of the app.
Thank you very much.
You will probably be better off with your first solution, especially if you already have the whole toolchain to rebuild and deploy images. It will be easier for you to roll back to previous versions and also to troubleshoot, since you know exactly which version is running in each pod.
There is an alternative solution that is sometimes used to provision static assets on web servers: you can add an emptyDir volume and a sidecar container to the pod. The sidecar pulls the changes from your plugin repositories into the emptyDir at a fixed interval. Finally, your app container, sharing the same emptyDir volume, will have access to the plugins.
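The sidecar pattern could be sketched roughly like this; the image names, repository URL, mount paths and sync interval are all hypothetical and would need adapting to your setup:

```yaml
# Sketch of the emptyDir + sidecar pattern; names are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: odoo
spec:
  volumes:
    - name: addons
      emptyDir: {}
  containers:
    - name: app
      image: odoo:14
      volumeMounts:
        - name: addons
          mountPath: /mnt/extra-addons
    - name: addons-sync
      image: alpine/git
      # Pull plugin changes into the shared emptyDir at a fixed interval.
      command: ["sh", "-c"]
      args:
        - |
          git clone https://example.com/custom-addons.git /addons || true
          while true; do
            git -C /addons pull
            sleep 300
          done
      volumeMounts:
        - name: addons
          mountPath: /addons
```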
In any case, running the command to update the plugins is going to be complicated. You could do it at a fixed interval, but your app might not like it.
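Running the update command in every pod could be sketched with kubectl exec; the app=odoo label is an assumption about how your pods are labelled, and the odoo binary is assumed to be on PATH inside the container:

```shell
# Run the Odoo update command in every matching pod.
# Assumes pods carry the label app=odoo; adjust to your cluster.
for pod in $(kubectl get pods -l app=odoo -o name); do
  kubectl exec "$pod" -- odoo --update "module_name"
done
```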
Earlier today I started the trial period with Jelastic, mainly to see if I can use it to deploy my projects online.
After I upload my .war file in the dashboard, though, the "Deploy to" drop-down doesn't appear. The same thing happens with the sample project as well; the drop-down is absent.
I have tried three different browsers; same weird "bug".
Any suggestions?
Turns out, deploying to a cluster is different from deploying to a single instance. So yes, I had to create my own environment (GlassFish and PostgreSQL), and the deployment manager works as intended :)
I'm wondering which options there are for Docker container deployment in production. I have separate APP and DB server containers, plus data-only containers: one holding deployables and the other holding database files.
I have just one server for now, which I would like to "Docker-enable", but what is the best way to deploy to it (remotely would be the best option)?
I just want to hit a button and have some tool take care of stopping, starting, and exchanging all the needed Docker containers.
There is a myriad of tools (Fleet, Flocker, Docker Compose, etc.), and I'm overwhelmed by the choices.
The only thing I'm clear about is that I don't want to build images with code from a Git repo; I would like to have Docker images as wrappers for my releases. Have I grasped the Docker idea from the wrong end?
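For a single-server setup like the one described, a minimal docker-compose.yml could look like the sketch below. The service names, image tags, registry and volume names are all assumptions; the point is that the app image is pulled as a versioned release from a registry rather than built from source, matching the "images as wrappers for releases" idea:

```yaml
# Sketch only: names and tags are hypothetical.
version: "2"
services:
  app:
    image: registry.example.com/myapp:1.0.0   # release image, not built from a Git repo
    depends_on:
      - db
    volumes:
      - deployables:/opt/app/deployables
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  deployables:
  dbdata:
```

Pointing the Docker client at the remote host (via the DOCKER_HOST environment variable) then lets a plain docker-compose pull followed by docker-compose up -d act as the "one button" deploy.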
My team recently built a Docker continuous deployment system and I thought I'd share it here since you seem to have the same questions we had. It pretty much does what you asked:
"hit a button and some tool will take care of stopping, starting, exchanging all needed docker containers"
We had the challenge that our Docker deployment scripts were getting too complex. Our containers depend on each other in various ways to make up the full system, so when we deployed, dependency issues would often crop up.
We built a system called "Skopos" to resolve these issues. Skopos inspects the current state of your running system, detects any changes being made, and then automatically plans and deploys the update into production. It creates deployment plans dynamically for each deployment, based on a comparison of current state and desired state.
It can help you continuously deploy your application or service to production using tags in your repository to automatically roll out the right version to the right platform while removing the need for manual procedures or scripts.
It's free, check it out: http://datagridsys.com/getstarted/
You can import your system in 3 ways:
1. If you have a Docker Compose file, we can import it and start working with it.
2. If your app is running, we can scan it and then start working with it.
3. If you have neither, you can create a quick descriptor file in YAML and then we can understand your current state.
I think most people start their container journey using tools from Docker Toolbox. Those tools provide a good start and work as promised, but you'll end up wanting more. With these tools you are missing, for example, integrated overlay networking, DNS, load balancing, aggregated logging, VPN access and a private image repository, which are crucial for most container workloads.
To solve these problems we started to develop Kontena, a Docker container orchestration platform. While Kontena works great for all types of businesses and may be used to run containerized workloads at any scale, it's best suited for start-ups and small to medium sized businesses who require a worry-free and simple-to-use platform for running containerized workloads.
Kontena is an open source project and you can view it on GitHub.