I would like to make a Worker Role in Azure that handles some behind-the-scenes processing for a Web Role. In the Web Role I would like to upload a plugin (most likely a DLL) which becomes available for the Worker Role to use.
What about security? If I were to let third parties upload a DLL to my Azure Worker Role, can I do anything to limit what it can do? It would not be nice if they could take control of the Management API or something like that.
I am new to Azure and am exploring whether it's the right platform for this project.
Last question: I noticed that I can Remote Desktop into my cloud service. Could I upload binary programs that way and call them from the Worker Role as well (another kind of plugin)?
There are a few things you might want to look at. Let's assume your Worker Role is an empty shell. After starting the Worker Role you could start a timer that runs every X minutes to fetch the latest assemblies from a blob storage container, for example.
You can download these assemblies to a folder and use MEF to scan them and import all objects implementing IWorkerRolePlugin, for example (this would be a custom interface you create). MEF is the best choice when you want to work with plugins. You could even create a custom catalog that links directly to a blob storage container.
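To make that concrete, here is a minimal sketch of the MEF part, assuming a hypothetical IWorkerRolePlugin contract and that the timer has already downloaded the assemblies from blob storage into a local folder:

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    // Hypothetical plugin contract -- each uploaded assembly exports
    // one or more implementations of this interface.
    public interface IWorkerRolePlugin
    {
        void Execute();
    }

    public class PluginRunner
    {
        // Scans the folder the blobs were downloaded to and runs
        // every plugin MEF can find there.
        public void RunPlugins(string pluginFolder)
        {
            using (var catalog = new DirectoryCatalog(pluginFolder))
            using (var container = new CompositionContainer(catalog))
            {
                foreach (IWorkerRolePlugin plugin in
                         container.GetExportedValues<IWorkerRolePlugin>())
                {
                    plugin.Execute();
                }
            }
        }
    }

    // A plugin author would mark an implementation like this:
    [Export(typeof(IWorkerRolePlugin))]
    public class HelloPlugin : IWorkerRolePlugin
    {
        public void Execute() { Console.WriteLine("Hello from a plugin!"); }
    }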
Now, about the security part: in your Worker Role you could, for example, create a restricted AppDomain to make sure these plugins can't do anything harmful. This code should get you started: Restricted AppDomain example
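The linked example essentially boils down to creating the AppDomain with a minimal permission set. Roughly, and assuming the plugins live in a local folder:

    using System;
    using System.Security;
    using System.Security.Permissions;

    class Sandbox
    {
        static void Main()
        {
            // Grant only the right to execute code -- no file, network
            // or registry access for anything loaded in this domain.
            var permissions = new PermissionSet(PermissionState.None);
            permissions.AddPermission(
                new SecurityPermission(SecurityPermissionFlag.Execution));

            var setup = new AppDomainSetup
            {
                ApplicationBase = @"C:\plugins" // hypothetical plugin folder
            };

            AppDomain sandbox = AppDomain.CreateDomain(
                "PluginSandbox", null, setup, permissions);

            // Plugin types must derive from MarshalByRefObject so calls
            // are proxied across the AppDomain boundary, e.g.:
            // var plugin = (IWorkerRolePlugin)sandbox.CreateInstanceAndUnwrap(
            //     "MyPluginAssembly", "MyPlugin.PluginType");

            AppDomain.Unload(sandbox);
        }
    }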
Try the Azure Plugin Library by Richard Astbury!
Sounds like Lokad.Cloud is just what you need.
It has an execution framework part which consists of worker roles capable of running what they have named a Cloud Service. It comes with a web console which allows you to add new CloudService implementations by uploading assemblies, and if you configure it to allow for Azure self management you can also adjust the number of worker instances through the web console.
I have a .NET Core application used by many clients. Each client has its own subdomain. Each client has its own database. All databases are identical. Different data, same schema. The subdomains and database names look something like this.
client1.myapp.com Client1Db
client2.myapp.com Client2Db
client3.myapp.com Client3Db
I use Visual Studio Web Deploy to publish. For each client, I have a set of publish settings.
Each set of publish settings specifies the proper subdomain, the database name for that subdomain, and the Entity Framework migration information for that subdomain.
I end up with a list of .pubxml files in my VisualStudioSolution\Properties\PublishProfiles folder that looks kind of like this...
client1.myapp.com.pubxml
client1.myapp.com.pubxml.user
client2.myapp.com.pubxml
client2.myapp.com.pubxml.user
client3.myapp.com.pubxml
client3.myapp.com.pubxml.user
It was not a big deal to publish these individually when I only had a few subdomains to publish, but now I'm getting enough clients that it is becoming a pain.
Can someone recommend a way to automate this process? I'm not even sure where to start. Thanks!
You could use a DevOps pipeline, with for example "Jenkins X", to deploy a single app to multiple servers.
You can start with: Jenkins jobs in multiple servers
There are other CD (continuous delivery) platforms and services, but this question would find a better home on devops.stackexchange.
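That said, whichever CD platform you pick, the job it runs can be as simple as looping over the publish profiles and invoking msbuild once per profile. A rough sketch in C# (the paths are placeholders, and credentials are assumed to come from the .pubxml.user files or the command line):

    using System;
    using System.Diagnostics;
    using System.IO;

    class PublishAll
    {
        static void Main()
        {
            // Placeholder paths -- adjust to your solution layout.
            const string project = @"C:\src\MyApp\MyApp.csproj";
            const string profiles = @"C:\src\MyApp\Properties\PublishProfiles";

            foreach (var pubxml in Directory.GetFiles(profiles, "*.pubxml"))
            {
                var profile = Path.GetFileNameWithoutExtension(pubxml);
                var psi = new ProcessStartInfo
                {
                    FileName = "msbuild",
                    Arguments = $"\"{project}\" /p:DeployOnBuild=true " +
                                $"/p:PublishProfile={profile}",
                    UseShellExecute = false
                };
                // One Web Deploy publish per client profile.
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                    Console.WriteLine($"{profile}: exit code {p.ExitCode}");
                }
            }
        }
    }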
I have a bit of a problem understanding how to design a system that communicates using the Kerberos protocol. Let's imagine I have an application instance with a large number of plugins that need to communicate with different services. For example, one plugin is responsible for working with Postgres and another for working with Windows AD. But I need these plugins not to have access to each other's services: the Postgres plugin should not be able to reach the Windows AD service and vice versa. And if I have multiple instances of the Postgres plugin running, each of them should have its own service access.
The actual question: how do I store keytabs and/or ccaches so that each plugin has its own access, isolated from the others? Say the pgx library requires that a TGT (ccache) already exist when connecting, and its location can only be changed through an environment variable that applies to the whole application. What should I do if I need to create another connection in the same application, but with a different TGT? It would be nice if the pgx library could take a keytab and generate the TGT automatically with every connection, but unfortunately it doesn't know how to do this.
I just don't understand how I could organize multiple connections from my application, given that every plugin must have different access, and that several plugins can connect either to the same service or to different ones.
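One common pattern here, independent of the client library: keep one keytab and one ccache file per plugin, and refresh each ccache with kinit so every plugin authenticates as its own principal. A rough sketch of the refresh step, in C# to match the rest of this page (the principal and paths are made up):

    using System.Diagnostics;

    static class Krb
    {
        // Refreshes a plugin-specific credential cache from that
        // plugin's keytab: kinit -k -t <keytab> -c <ccache> <principal>
        public static void RefreshCcache(
            string keytab, string ccache, string principal)
        {
            var psi = new ProcessStartInfo
            {
                FileName = "kinit",
                Arguments = $"-k -t {keytab} -c FILE:{ccache} {principal}",
                UseShellExecute = false
            };
            using (var p = Process.Start(psi)) { p.WaitForExit(); }
        }
    }

    // e.g. Krb.RefreshCcache("/etc/app/postgres-plugin.keytab",
    //                        "/var/run/app/postgres-plugin.cc",
    //                        "pgplugin@EXAMPLE.COM");

If the client library only honors the process-wide KRB5CCNAME variable (as you describe for pgx), the usual escape hatch is to isolate each plugin in its own process, each started with its own KRB5CCNAME pointing at its own cache file.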
I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run. So how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model: for example, at boot pull your code from a git/mercurial repository and then execute scripts to set up your instance. The scripts set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client if you want to pull your code using SSH, although you could also do it over HTTPS.
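As a sketch of that pull model, here is how you might launch such an instance with the AWS SDK for .NET; the AMI id and the bootstrap script contents are placeholders:

    using System;
    using System.Text;
    using System.Threading.Tasks;
    using Amazon.EC2;
    using Amazon.EC2.Model;

    class LaunchWorker
    {
        static async Task Main()
        {
            // Placeholder bootstrap: clone over HTTPS and run a setup
            // script -- no SSH session is ever opened by us.
            const string bootstrap =
                "#!/bin/bash\n" +
                "git clone https://github.com/example/myapp.git /opt/myapp\n" +
                "/opt/myapp/deploy/setup.sh\n";

            var ec2 = new AmazonEC2Client();
            var response = await ec2.RunInstancesAsync(new RunInstancesRequest
            {
                ImageId = "ami-12345678",          // placeholder AMI
                InstanceType = InstanceType.T2Micro,
                MinCount = 1,
                MaxCount = 1,
                // EC2 expects the user-data script base64-encoded.
                UserData = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes(bootstrap))
            });

            Console.WriteLine(response.Reservation.Instances[0].InstanceId);
        }
    }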
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially your node/server will pull all your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client then performs all the configuration/deployment/monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times whether something is wrong with one of your servers, and in the event something goes wrong, discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.
I've deployed a website made with Umbraco in Windows Azure, using the
Windows Azure Accelerator for Umbraco.
For development and testing I used a test hostname. Now it's time to switch to the official DNS hostname.
How can I change the current hostname?
Actually, I configured the hostname at deployment time (the only way I know of to do this), but I can't deploy again, since many files have been changed while working on the website on Azure.
EDIT
Let me explain: at the prompt during deployment I used "test.mywebsite.com" as the Domain Name, and configured the real DNS accordingly.
Now the website is configured, so I'd like to make mywebsite.com point to that site.
But it isn't enough just to configure the mywebsite.com DNS! Shall I deploy again? And will I lose any of the changes I made?
I'd like to make two comments on your question:
1) In order to host your Azure application under a custom host name, you will need to sign up with a DNS provider that supports CNAME records (most do). I suggest someone like GoDaddy.com because by default CNAME records can only resolve your "www.domainname.com" records and cannot do anything for queries where "www." is dropped from the URL. DNS providers like GoDaddy also have an option to redirect all traffic destined for "domainname.com" to a URL of your choice. This is a huge deal for Azure apps. Frankly speaking, it is somewhat disappointing that for all the PaaS and IaaS features of Azure, DNS was not included in the overall package.
2) I am a little worried when you say that you can no longer redeploy your app due to the changes made. Can you elaborate on that? Have you made changes to the application's code running on VM's in Azure without going through redeployment process? If so, this is a huge no-no. Your VM's running in Azure are not "permanent". Microsoft and your redeployment process can (and will) re-stage those VM's to the original package at any given time. Microsoft will re-image your VM's at least once a month during their monthly OS upgrades. But they can also do so when they need to move your VM to another rack, etc. Whatever changes that you make to your app must be either stored in source-control before deployment or in a permanent storage facility like SQL Azure, Azure Storage, etc.
HTH
Finally, I think the answers to my questions are:
- Shall I deploy again? Yes, I must deploy again.
- Will I lose any of the changes I made? Many changes will be kept, since they are stored in the DB. But I have to do a lot of work to make the new website function!
This answer confirms my theory:
In my case, I created and uploaded a site with a name, let's say http://www.contoso.com, and then bought a domain from a registrar, let's say http://www.example.com. When I mapped http://MyAcceleratorsService.cloudapp.net/ to my new domain (http://www.example.com) and tried to open that domain, I got the home page of the Accelerator and not the uploaded site.
I had to upload the site again to Azure (using UploadUmbracoSite.cmd from the Accelerator application) and, when uploading, enter the same domain name as the one I registered: http://www.example.com. Then I was able to browse my uploaded site as expected.
As for your question: upload the site again using UploadUmbracoSite.cmd (it is in the Setup folder) and enter the new domain name when requested.
Exactly what I was trying to avoid... but the only solution, I suppose.
Well, it was not easy to publish again; I got errors of many types (I suppose tied to some components that I installed after deployment and that are not installed in the newly deployed website). I'm going to work through them.
Edit
Completed my work:
- loads of different attempts; none worked
- CTP backup of DB
- deleted DB and website
- new full deploy of Umbraco
- CTP restore of DB
Finally:
- all work on content is OK
- all work on styles, pages, and templates is lost
Changing the hostname is hard; don't use a test hostname, use the definitive hostname from the beginning.
If anyone has a suggestion, I'll be pleased to test it, anyway.
This is not really an answer to your question, but it might be a solution to your problem: use a CNAME record to make the production DNS name point to your development name, e.g. www.productionname.com will then point to www.testname.com. I am not sure if everything will just work out of the box, but it seems worth a try.
This requires that your hosting provider allows you to set up CNAME records.
http://en.wikipedia.org/wiki/CNAME_record
We use Windows Azure Cloud services to host our application. One of the great features of Windows Azure is the Production/Staging model. You can have the clients of your application routed to your production server, while you can test your new code running on a staging server. For example, you can configure Azure to point a production server to http://www.coolapp.com while designating a staging server for the same app to something like this: http://7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net.
Physically both of these servers are publicly facing. If you were to know the cryptic URL of a staging server you would be able to browse to the app just as easily as you would browse to www.coolapp.com. However, the presence of a GUID in the URL makes it virtually impossible for someone to guess it, thus making the staging server "private". This gives a nice mechanism to the developers of an application to deploy and test the new bits on a staging server before releasing them to public. Once they make sure that things look good, with a flip of a switch they swap the two servers, making staging server a production server and vice versa.
This model creates a small problem for us in relation to Facebook integration. To be able to integrate Facebook plugins you have to register your app with them. FB will then issue AppId and AppSecret keys. These keys are tied to the URL of your application. So in order for my app to work with FB plugins I need to obtain one set of keys that is tied to 7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net, and another set that is tied to www.coolapp.com.
When I read about Windows Azure, they really urge developers to treat staging vs. production servers as the same. The only difference between them should be the URL. In other words, Azure does not recommend basing your app logic on which server the code happens to be running on as Azure has no inherent knowledge of this. Staging vs. production is just a handy "abstraction" if you will. I guess you see the problem here. In our example above, I have to use one set of keys issued by FB versus another depending on which URL (production vs. staging) my app is running at. I assume I am not the first one running into this problem. What are the correct ways of handling this? One obvious way is to sniff the URL property of the Request object and branch my logic that way. However, intuition tells me this is a hack. Any other ideas?
Regards,
Archil
The mechanisms I know of are:
using "production" within a totally separate service account to "testing" - this leaves "staging" within the production service to be used as an area for "deployment candidates" and provides a separate clean testing domain with a non-changing URL for earlier "dev and test" work.
using different .cscfg files for staging and production - and being careful to update the .cscfg before you do any live switching (see the sketch after this list).
sniffing the incoming URL - as you suggest
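For the second technique, the app never hard-codes the Facebook keys; it reads them from the service configuration, so the staging and production deployments each carry a .cscfg with the matching pair. A minimal sketch (the setting names are made up):

    using Microsoft.WindowsAzure.ServiceRuntime;

    // Staging's .cscfg holds the keys registered for the
    // *.cloudapp.net URL; production's holds the www.coolapp.com keys.
    public static class FacebookSettings
    {
        public static string AppId
        {
            get { return RoleEnvironment.GetConfigurationSettingValue("FacebookAppId"); }
        }

        public static string AppSecret
        {
            get { return RoleEnvironment.GetConfigurationSettingValue("FacebookAppSecret"); }
        }
    }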
Personally, I use the first of these techniques - it's easy and it helps prevent nasty accidents.
As an aside, one of the techniques we've used for "removing" the Guid from staging is to CNAME the Guid with a really short TTL on the DNS - this allows us to quickly and automatically update the CNAME record for the staging server when we deploy.