I was trying to install Orion through Chef recipes in FIWARE Lab (creating a new template), but I couldn't find the package in the list.
Also, when trying to run it by cloning a blueprint template that already exists, it returns an error (the image can't be found). I've also realised that in the blueprint template the Orion Context Broker version is outdated (0.13.0).
Could somebody perform these actions without errors? Is it under maintenance?
Yes, it was under maintenance. Now you should be able to create templates with Orion and clone the predefined templates.
Sorry for any inconvenience.
I am in the process of migrating Atlassian Confluence from on-prem to Kubernetes. I found the official Docker image for Confluence and was able to spin up the application. I need to configure SSL and I already have the key and certificate. I tried to import the certificates, updated server.xml, and restarted, but it is not working. Has anyone worked on a Confluence migration from on-prem to Kubernetes/Docker? If anyone can provide a link or related experience, it would be helpful.
Regards,
John
It's certainly possible. The health check might be tricky, because as far as I'm aware there is no automated install once the application comes up, meaning there will always be a manual configuration stage.
You're best off looking at some package manager examples for this, which for Kubernetes means Helm. Helm lets you iterate and roll back quickly.
Have a look at this example, which is for Jira, but the same flow should apply. Confluence and Jira are heavily related, so it should be relevant.
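As a rough illustration only, here is the general shape of a Helm-based install; the chart repository, chart name, and values file below are assumptions, so substitute whichever chart you settle on:

helm repo add atlassian-data-center https://atlassian.github.io/data-center-helm-charts
helm repo update
# install Confluence from the chart with your own values (TLS keystore, ingress, resources, etc.)
helm install confluence atlassian-data-center/confluence -n confluence --create-namespace -f values.yaml
# roll back to revision 1 if an upgrade misbehaves
helm rollback confluence 1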
Similar to this lonely questioner, I'm trying to install a Python package from a private PyPI repo such that it's available to our Google Cloud Composer Airflow instance.
I've followed these instructions but Airflow continues not to know about my package:
No module named 'foopackage'
I can't find any reference to my pip.conf in any logs anywhere so I'm not sure whether the file is in the right place, or has the right contents.
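For reference, the file's contents look roughly like this; the index URL below is a redacted placeholder, not my real one:

[global]
extra-index-url = https://<user>:<token>@my.private.pypi.example.com/simple/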
How can I proceed with debugging this problem?
The Cloud Composer environment logs show that there was a problem with copying pip.conf from the bucket, but don't give any other details:
{
insertId: "16qa4c8g540zxs3"
logName: "projects/{my-env}/logs/composer-agent"
receiveTimestamp: "2020-02-06T15:59:03.164564368Z"
resource: {…}
severity: "ERROR"
textPayload: "Copying gs://{my-bucket}/config/pip/pip.conf...
"
timestamp: "2020-02-06T15:59:00.857642186Z"
}
I first thought this might be a permissions issue, but the file seems to have the same set of permissions as other files in this bucket.
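(For what it's worth, this is roughly how I compared them; treat it as a sketch rather than a definitive check:)

gsutil ls -L gs://{my-bucket}/config/pip/pip.conf
gsutil acl get gs://{my-bucket}/config/pip/pip.conf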
Where can I get more detailed information on what went wrong when copying that file?
update
I'm on composer-1.7.2-airflow-1.10.2.
update
The service account for my Composer environment already has the project.editor role.
This is an indicator that the Docker image used for the web server failed to build. To find the root cause, please view the Cloud Build logs in your project.
The reason for this is a failed or long-running operation that timed out on Composer's backend. In some cases these errors persist in the backend, blocking future attempts. You can try re-enabling the API:
The first solution that comes to my mind is running the following commands in Cloud Shell:
gcloud services disable composer.googleapis.com
gcloud services enable composer.googleapis.com
After enabling the API, please update your Composer environment as usual.
When you install packages, the Composer environment re-creates the Docker containers for the Airflow workers and scheduler, then performs a rolling update within the GKE cluster to update the workers while keeping them available. You can check Kubernetes Engine > Workloads to see whether your environment timed out waiting for the scheduler and workers to come back online.
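If you prefer the command line, a rough way to inspect the same thing (the cluster name, zone, and pod names are placeholders you have to look up for your environment):

gcloud container clusters get-credentials <composer-gke-cluster> --zone <zone>
# list the Airflow pods and see whether workers or the scheduler are stuck or restarting
kubectl get pods --all-namespaces | grep airflow
kubectl describe pod <airflow-worker-pod> --namespace <composer-namespace>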
When the Composer environment uses a custom service account that does not have IAM access to use Cloud Build, builds will fail immediately, so please check that. You can diagnose this by going to Cloud Build > History; when you see builds without a log, it means the build failed before it even tried to build a container.
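If that is the case, granting a build role to the environment's service account looks roughly like this; the project ID, service account, and the exact role to use are assumptions you should verify against the IAM documentation:

gcloud projects add-iam-policy-binding <project-id> \
  --member "serviceAccount:<composer-sa>@<project-id>.iam.gserviceaccount.com" \
  --role "roles/cloudbuild.builds.editor"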
When your package implements bindings, it will fail at runtime if the shared libraries don't exist on the system. This makes it incompatible with Cloud Composer, because getting shared libraries into the build environment is not currently supported.
Another thing: make sure your project is packaged in the correct way.
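A quick way to sanity-check both the packaging and the private index before involving Composer (the index URL and package name are placeholders):

pip install --extra-index-url https://<user>:<token>@my.private.pypi.example.com/simple/ foopackage
python -c "import foopackage"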
I hope you find the above pieces of information useful.
We have created a process template at the enterprise level on the Microsoft Azure DevOps platform. We were looking to export the process template so that it can be imported into another organization. However, we do not find an option to do so. Can anyone help?
The only way I've found so far to export inherited processes to other organizations is to use the process-migrator tool on GitHub, made by Microsoft. There are some wonky things about it that don't totally work, but it should hopefully be a good start:
https://github.com/Microsoft/process-migrator
You download the tool and install its dependencies, then you can run migrate or export/import (I think I usually do export/import).
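From memory the flow looks roughly like this; the command names and flags may have changed, so verify them against the repository's README before relying on them:

npm install process-migrator -g
# the first run generates a configuration file to fill in with source/target organization URLs and PAT tokens
process-migrator --mode=export --config=configuration.json
process-migrator --mode=import --config=configuration.json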
I think it works okay as-is, except that work item rules of type CurrentUserIsMemberOfGroup and picklists don't export correctly, so you'll want to do some testing of the tooling first. I also found out recently that this tooling uses an old SDK/API version (API v4.1), so hopefully it will be updated soon.
I am not sure about the Azure DevOps UI, but there are methods in the Azure DevOps Services REST API:
Export Process Template REST API Documentation
Import Process Template REST API Documentation
The parameters are pretty straightforward and well explained in the MS documentation.
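As a rough illustration of the shape of such a call (the route below is written from memory; take the exact path and api-version from the documentation linked above):

curl -u :<PAT> "https://dev.azure.com/<organization>/_apis/work/processadmin/processes/export/<processTypeId>?api-version=5.1"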
There's lots of documentation and a kludgy console to set up continuous deployment in Cloud Foundry, but I haven't found any documentation on what the artifacts inside a repository need to be.
I don't want to cut-and-paste flows from the Node-RED editor. If that's the only way, then IBM is not ready for prime time. I'm also aware that most everything about my flows is in the Cloudant nodered db.
A Node-RED application is more than the flows, though. What about my _design docs for my dbs?
I need device info and other stuff from the Watson console, Cloudant info and my flows packaged up into something deployable.
Has anyone scripted this?
What I mean by this is that I can clone a Docker project, an npm project, and all sorts of projects that implement a build -> test -> push mechanism. They employ a configuration script of some sort (e.g. package.json) and contain a bunch of source files for the actual application, test scripts, db scripts, and whatever else is necessary to deploy the application and its environment onto a host.
I see lots of documentation on the toolchain and its features, but I'm not clear on whether it's possible to make use of it for my hosted Node-RED application, or whether I have to write the scripting mechanisms to offload flow info from the nodered db and query all my other dbs for their respective _design docs and all the other configuration information required to set up an IoT Node-RED application.
I forgot to mention, the copy/paste method loses information; you get no tab-level metadata. The only way to get all the flow stuff is to pull it from the nodered flow record.
Node-RED will release a new version in a couple of days that will introduce projects, so you'll be able to use GitHub and all the usual tools to handle your app: https://twitter.com/NodeRED/status/956934949784956931 and https://nodered.org/docs/user-guide/projects/
While it doesn't address your short-term needs, I think it's the best long-term solution. Hopefully that helps.
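Once projects are available, the day-to-day flow should look like ordinary git usage; this is only a sketch, and the repository URL and file names are assumptions based on how the projects feature stores a flow:

git clone https://github.com/<you>/<your-node-red-project>.git
cd <your-node-red-project>
# a project typically holds flow.json, flow_cred.json, package.json and a settings file
git add flow.json
git commit -m "Update flows"
git push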
At the latest since this check-in, the service-connector cf plugin seems to be gone for SwisscomDev.
The official link to the plugin simply returns a 404.
Isn't it supported anymore? What's the alternative? And did I miss communication about it?
Isn't it supported anymore? What's the alternative?
Yes. Our proprietary CF CLI client plugin has been phased out. The alternative is cf ssh (from upstream). See Accessing Services with SSH on docs.developer.swisscom.com.
In Migrate from legacy MariaDB to MariaDB Ent there is a step-by-step how-to for cf ssh. Please adapt it for your service and port.
cf ssh proy-app -L 13000:<old-db-host>:<old-db-port>
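For completeness, once that tunnel is open you can point a client at the local end; the client, credentials, and database name below are placeholders rather than part of the official how-to:

mysql -h 127.0.0.1 -P 13000 -u <user> -p
mysqldump -h 127.0.0.1 -P 13000 -u <user> -p <database> > dump.sql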
Here on #swisscomdev there is also a posting, Alternative to Swisscom CF plugin named Service Connector, with a MongoDB Ops Manager example.
We have an edge case for an enterprise customer that may still need a cf sc feature (not available in cf ssh). Investigation ongoing.
And did I miss communication about it?
Sorry, we failed in communication.
Sorry that you noticed this change in our GitHub repo first. We wanted to update the docs first and then communicate the EOL. We somehow forgot it in yesterday's newsletter.