I'm running a Node.js app on Bluemix Dedicated. What is odd is that it seems to create a new app called "Delivery Pipeline". They both appear as Node.js apps in my dashboard, and they both appear to share the same actual delivery pipeline.
What is also odd is that the "Delivery Pipeline" app appears to be the one that is actually running and owns the route.
Just seems really odd to me...
I guess you have to check the Delivery Pipeline of your toolchain.
Within the Delivery Pipeline of your toolchain you can define the target application. In your case you might have two Delivery Pipelines, or you may simply have defined the wrong application name.
So, to be more clear: we have some infrastructure- and application-related alerts set up in Prometheus, which is running on cluster A. We also have two teams, one DevOps and one app team. I would like to make sure that the DevOps team only receives the alerts related to infrastructure and the app team only receives the alerts related to the application.
Is there a way to achieve this?
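For context, the kind of split I'm hoping for would look something like Alertmanager's routing tree; a minimal sketch, assuming our alert rules attach a team label (the label values, receiver names, and addresses below are placeholders):

```yaml
# alertmanager.yml (sketch) -- assumes alert rules carry a "team" label
route:
  receiver: devops-team          # default receiver for anything unmatched
  routes:
    - match:
        team: infra              # alerts labelled team=infra -> DevOps
      receiver: devops-team
    - match:
        team: app                # alerts labelled team=app -> app team
      receiver: app-team

receivers:
  - name: devops-team
    email_configs:
      - to: devops@example.com
  - name: app-team
    email_configs:
      - to: app@example.com
```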
Due to hardware restrictions, we are unable to retrieve the current status of many of our lights (their color/brightness/etc.).
In the QA test cases spreadsheet found here, at the bottom under Deploying, a number of QUERY intents are listed to be tested. Does this mean our Smart Home application will not be able to pass certification?
Thank you for reading.
There is some expectation from the user to be able to know the status of their house at any time. If you cannot retrieve the state directly from the devices, you should be able to use your cloud provider to store a virtual equivalent of each device. Then, instead of querying the device directly, you can return the state of the virtual device.
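A minimal sketch of that idea (the handler and storage names are hypothetical; in production the Map would be your cloud datastore, and the handler would sit behind your smart home fulfillment endpoint):

```typescript
// Keep a "virtual device" record and answer QUERY from it instead of polling hardware.
type LightState = { on: boolean; brightness: number; online: boolean };

// Stand-in for Firestore/DynamoDB/etc.
const virtualDevices = new Map<string, LightState>();

// Update the record every time you *send* a command, since you can't read state back.
function recordCommand(deviceId: string, newState: Partial<LightState>): void {
  const current = virtualDevices.get(deviceId) ?? { on: false, brightness: 100, online: true };
  virtualDevices.set(deviceId, { ...current, ...newState });
}

// QUERY handler: return the last known (virtual) state for each requested device.
function handleQuery(requestId: string, deviceIds: string[]) {
  const devices: Record<string, LightState> = {};
  for (const id of deviceIds) {
    devices[id] = virtualDevices.get(id) ?? { on: false, brightness: 100, online: false };
  }
  return { requestId, payload: { devices } };
}
```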
If anything, just try to be honest with the review team and they will keep certain limitations in mind.
When submitting for review, make sure you provide them with a perfectly working test environment. So if some of your lights don't function the way you want them to and you can't get their info, don't provide them for testing.
I'm not familiar with the review process for Smart Home applications, but if you provide the review team with the right information about which hardware is and isn't supported, I'm sure they won't reject your application outright because of it.
The server holds the logic, the iOS/Android app holds the UI. A common case.
How am I supposed to deploy new features in this case with a continuous deployment methodology?
I assume that a server-side deploy looks like this:
I trigger a new feature deployment, and the load balancer starts redirecting 1% of all users to the server instance with the new feature. If everything goes smoothly, the load balancer then redirects 10%, 30%, etc., up to 100%.
The same can be done for client apps, using, say, CodePush.
So, if I deploy the server without the app, the new feature won't be used and therefore the deployment itself certainly causes no problems.
So I probably have to deploy the app first and put in some kind of server version check: if the server has the API for the new feature, the UI for that feature is shown, and if the app is connected to the wrong server, the new UI is hidden.
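For illustration, a minimal sketch of such a check (the endpoint, response fields, and feature name are all hypothetical):

```typescript
// Client-side feature gate: ask the server what it supports, show the new UI
// only when the feature is available.
type ServerInfo = { apiVersion: number; features: string[] };

async function fetchServerInfo(baseUrl: string): Promise<ServerInfo> {
  const res = await fetch(`${baseUrl}/api/server-info`); // hypothetical endpoint
  return res.json() as Promise<ServerInfo>;
}

async function shouldShowNewFeature(baseUrl: string): Promise<boolean> {
  try {
    const info = await fetchServerInfo(baseUrl);
    return info.features.includes("new-checkout-flow"); // hypothetical feature flag
  } catch {
    return false; // if we can't tell, hide the new UI
  }
}
```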
That seems primitive. I need to persist the socket connection to the same server to avoid hitting the wrong server, right? And what if an instance/zone/region goes down and the user is suddenly redirected to another zone/region whose server doesn't have the new feature API? Probably my assumption is wrong.
So, how am I supposed to deploy new features in this case with a continuous deployment methodology?
I would say that your question is more about version compatibility of the server/client API than about CD itself. We have a similar requirement where a server and its clients communicate and both are constantly enhanced with features. I don't know your production software architecture, which might change the needs accordingly, but I'll try to come up with some ideas.
I'm going to describe two cases which might apply for you.
First case:
Things are easier when new client versions never need to communicate with old server versions. The new server version is deployed first and old clients simply do not use the new feature, as you've already pointed out. In this situation my recommendation is to deploy the server app first and then start rolling out the new client apps. If that's possible, I would do that. It applies only when the new feature doesn't force you to break the API.
Second case:
In the case that new client app versions need to talk to an old server app, which I would try to avoid at all costs, the new client needs some switch inside to deactivate a feature (e.g. feature B) when it's talking to an old server that doesn't support it. An API version counter could be the solution, but it requires the client to be able to distinguish between server versions. In REST you often see the .../v1/.. inside the URL, but this could be solved differently as well. Hopefully the API provides some mechanism to get the version the server speaks.
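A minimal sketch of that idea, assuming an Express-style Node server (the routes and response fields are made up):

```typescript
// URL-based API versioning: old clients keep calling /v1/..., new clients call
// /v2/... once they know the server supports it.
import express from "express";

const app = express();

// v1: the old contract, kept alive for old clients
app.get("/v1/orders/:id", (req, res) => {
  res.json({ id: req.params.id, total: 42 });
});

// v2: the new contract with the extra field the new feature needs
app.get("/v2/orders/:id", (req, res) => {
  res.json({ id: req.params.id, total: 42, currency: "EUR" });
});

// A cheap way to let clients discover which versions the server speaks
app.get("/api/version", (_req, res) => {
  res.json({ supported: ["v1", "v2"] });
});

app.listen(3000);
```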
We faced both cases at the same time: the protocol changed over time, including breaking changes, so we needed to implement an API version negotiation mechanism.
My house has a home automation system from the 1960s that I have managed to tap into. I've been able to set up an interface which allows me to write adapters for various technologies such as Node-RED, Alexa, and now Google Assistant.
Given that this will only ever work with my house, I see no reason to make my Smart Home Actions public. On Alexa's side, I can let these services stay in a Development state indefinitely, which has worked great for the last 6 months. On Google's side, however, the FAQ says (https://developers.google.com/actions/smarthome/faq):
Q: How often do I need to run gactions test?
A: gactions test needs to be refreshed every 3 days. After 3 days the test agent will disappear from mobile-HomeControl settings. If you run into this, just run gactions test again.
Therefore, I was wondering what the best way is to make a PERSONAL Google Actions service. Of course, the obvious method would be to script and schedule the gactions call to keep testing alive, but I would hope there is a better way to support this!
Additional details: I'm using Amazon's OAuth service for sign-in. This way, I can validate the Amazon ClientID, UserID, etc. through the AccessToken Google passes in for authorization. Therefore, I could theoretically run this publicly without any issues, but I would need to figure out how Google could review it for testing purposes! I don't need some Google employee turning my lights on and off while the Google Maps car drives by to verify the change... ;)
I would just use a script to call gactions periodically.
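A minimal sketch of such a refresher, assuming a Node script (the gactions arguments are placeholders for whatever invocation you already run by hand):

```typescript
// Re-run `gactions test` every two days so the test agent never hits the 3-day expiry.
import { execSync } from "child_process";

const TWO_DAYS_MS = 2 * 24 * 60 * 60 * 1000;

function refreshTestAgent(): void {
  try {
    // Placeholder arguments -- substitute your own action package and project id.
    execSync("gactions test --action_package action.json --project my-project-id", {
      stdio: "inherit",
    });
    console.log(`[${new Date().toISOString()}] test agent refreshed`);
  } catch (err) {
    console.error("gactions test failed:", err);
  }
}

refreshTestAgent();
setInterval(refreshTestAgent, TWO_DAYS_MS);
```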
Publishing it would unnecessarily pollute the Actions directory. Also, they'll make you jump through hoops for "brand verification" and the other restrictions they have around naming and invocation terms.
If you did publish it, you would give them a temporary account for verification purposes and disable the account once it's published. They would be randomly controlling the lights during the verification period though, which can be up to a week!
I want to write a workflow application that routes a link to a document. The routing is based upon machines not users because I don't know who will ever be at a given post. For example, I have a form. It is initially filled out in location A. I now want it to go to location B and have them fill out the rest. Finally, it goes to location C where a supervisor will approve it.
None of these locations has a known user. That is, I don't know who it will be. I only know that whoever it is is authorized (they are assigned to the workstation and are approved to be there).
Will Microsoft Windows Workflow do this or do I need to build my own workflow based on SQL Server, IP Addresses, and so forth?
Also, how would the user at a workstation be notified that a document had been sent to their machine?
Thanks for any help.
I think if I were approaching this problem, Workflow would do the job. What you want is a state machine with three states (roughly sketched after the list):
A Start
B Completing
C Approving
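A rough, language-agnostic sketch of those states and which workstation owns each one (shown in TypeScript purely for brevity; in Windows Workflow this would be a State Machine workflow, and the names are illustrative):

```typescript
// Three-state routing machine: each state belongs to a workstation and knows
// which state the document moves to next.
type State = "Start" | "Completing" | "Approving" | "Done";

const transitions: Record<State, { station?: string; next?: State }> = {
  Start:      { station: "A", next: "Completing" },
  Completing: { station: "B", next: "Approving" },
  Approving:  { station: "C", next: "Done" },
  Done:       {},
};

function advance(current: State): State {
  const next = transitions[current].next;
  if (!next) throw new Error(`No transition out of ${current}`);
  return next;
}
```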
However, Workflow needs to run in one central place (trust me on this: you only want one workflow runtime running at once, otherwise the same bit of work can be done multiple times; see our questions on the MSDN forum). So a central server running the workflow is the answer.
How you present this to the users can be done in multiple ways. Dave suggested using an ASP.NET site to identify the machines that are doing the work, which is probably how I would do it. However, you could also write a Windows Forms client that would do the same thing. This would require using something like SOAP/WCF to facilitate communication between the client form applications and the central workflow service. This would have the advantage that you could use a system tray icon to alert the user.
You might also want to look at human workflow engines, as they are designed to do things like this (and more). I'm most familiar with PNMsoft's Sequence.
You can design a generic "routing" workflow that will cause data to go to a workstation. The easiest way to do this would be to embed the workflow in an ASP.NET application. Each workstation should visit the application with a workstation ID in the querystring:
http://myapp/default.aspx?wid=01
When the form is filled out at workstation A, the workflow running in the web app can enter it into the "work bin" of the next workstation. Anyone sitting at the computer for which the form is destined will see it appear in their list of forms to review. You can use AJAX to make it slick and auto-updating.
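If it helps to see the shape of it, here is a minimal sketch of the work-bin idea (written in TypeScript/Express purely for brevity; the answer above assumes ASP.NET, and all names here are made up):

```typescript
// Each workstation polls its own bin using the wid it passes in the querystring.
import express from "express";

const app = express();
app.use(express.json());

// wid -> list of pending form links
const workBins = new Map<string, string[]>();

// Workstation page: /bin?wid=01 shows everything waiting for that station.
app.get("/bin", (req, res) => {
  const wid = String(req.query.wid ?? "");
  res.json(workBins.get(wid) ?? []);
});

// Called by the workflow when a form moves to the next station.
app.post("/route", (req, res) => {
  const { wid, formUrl } = req.body as { wid: string; formUrl: string };
  const bin = workBins.get(wid) ?? [];
  bin.push(formUrl);
  workBins.set(wid, bin);
  res.sendStatus(204);
});

app.listen(8080);
```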