Registration of COM Components on Team Foundation Services (Azure) - azure-devops

One of my projects requires that a COM server be registered on the build machine. My first (and only) lame attempt was a simple pre-build step, but I assumed that would not work in the cloud, and I was correct. The problem is that I need to use this component, I only have a binary, and I'm a bit stumped as to what to do.
The error message is predictable:
The command "regsvr32 /s "path_to_dll" exited with code 5. Please verify that you have sufficient rights to run this command.
TFS Azure is in preview at the moment, so I'm not sure how many people have experience with it yet. I posted the same question on the official forums and have not yet received a response. Searching did not help either.

Silly me: I just needed to reference the interop assembly instead of directly referencing the native DLL. Problem solved.
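If an interop assembly doesn't ship with the component, one can usually be generated from the COM binary's type library with tlbimp from the Windows SDK and then referenced like any other assembly, so the build step never has to call regsvr32 at all. A rough sketch; the file names are placeholders:
rem generate an interop assembly from the COM server's type library (placeholder names)
tlbimp.exe MyComServer.dll /out:Interop.MyComServer.dll
After that, add a plain assembly reference to Interop.MyComServer.dll in the project.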

Related

AWS CodeArtifact, NuGet and Linux

I'm wondering if anybody has worked with AWS CodeArtifact on Linux.
For Method #1, I installed the CodeArtifact Credential Provider per this page, which seemed to have no effect.
For Method #2, I installed NuGet (and Mono) and tried the aws codeartifact login command, but I get a "nuget was not found" message even though I can execute nuget from the shell.
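For reference, what I'm running looks roughly like this; the domain, owner, and repository values are placeholders, and the manual fallback at the end is just an idea I may try next rather than anything AWS documents specifically for Mono:
# Method #2: have the AWS CLI configure NuGet (this is what reports "nuget was not found" for me)
aws codeartifact login --tool nuget --domain my-domain --domain-owner 111122223333 --repository my-repo
# possible manual fallback: fetch a token and endpoint, then register the source with nuget directly
TOKEN=$(aws codeartifact get-authorization-token --domain my-domain --domain-owner 111122223333 --query authorizationToken --output text)
REPO=$(aws codeartifact get-repository-endpoint --domain my-domain --domain-owner 111122223333 --repository my-repo --format nuget --query repositoryEndpoint --output text)
nuget sources add -Name codeartifact -Source "${REPO}v3/index.json" -Username aws -Password "$TOKEN" -StorePasswordInClearText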
Before I bang my head against any further walls, I'm wondering if this is a Windows-only type of implementation from AWS. Getting answers from them is always difficult, so I was hoping somebody around here has already walked this minefield.
Thanks

Domino 8.5.3 - Create an organization extension library / codestore

This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because that is already in wide use in our environment. I have a plug-in and feature built just from the POI .jars that allows me to build our current POI applications as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an update site so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (which I wish had existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
The caveat is that the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature into my Designer client from the Eclipse update site and it works great. However, the install fails once I import that site into our updatesite.nsf database. This means that while the developers can all install from the update site if I put it on a network drive, it doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse Updater just hangs. I've let it go for well over an hour and eventually Notes becomes completely unresponsive.
So the question is: is there anything I might have done wrong, either in the development of the plug-in or in the server configuration, that could be causing this issue?
Additional Info
I'm looking at the OSGi console and that is largely unhelpful. I am getting the following error as I'm trying to install: SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know that this post is over 5 years old, but for those who find it and are trying to resolve the error
SEVERE Could not access digest on the site: no protocol:
the cause is that the URL of the Domino updatesite.nsf has not been added to the Archives tab of the update site project's site.xml.
I found that the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE); YMMV from Eclipse. So if anonymous connections are blocked on the Domino server, you will be out of luck.
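In site.xml terms, the Archives tab fix boils down to an archive entry pointing at the NSF-hosted site. This is only a rough sketch; the server name, jar path, and feature id are placeholders, and the exact URL depends on how your updatesite.nsf exposes its files:
<site>
   <archive path="plugins/com.example.mylib_1.0.0.jar" url="http://yourdominoserver/updatesite.nsf/"/>
   <feature url="features/com.example.mylib.feature_1.0.0.jar" id="com.example.mylib.feature" version="1.0.0"/>
</site>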
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it has a handy "Build All" button that makes sure the plug-in, feature, and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) at the feature directory using a setting: add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site, and it then picks up the features and plug-ins from there. After a build you need to restart Designer or the server to activate the updated feature.
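A .link file is just a one-line text file; the directory it names is typically expected to contain an eclipse folder holding your features and plugins (the path below is only an example):
path=C:/work/myUpdateSite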
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
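The notes.ini setting I mean is the dynamic-bundles entry pointing at the update site database, followed by restarting HTTP on the console; a sketch, with the database name as an example:
notes.ini:
OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf
Domino console:
restart task http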
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

Deploying Meteor App to own server

I have a completed Meteor project, and it is currently deployed on the Meteor website. I would like to move it to my own website, which is currently hosted by GoDaddy.
How do I install Node and Mongo on my server (Linux) and then run my Meteor project? I have SSH access to the server, so I assume I can do this, but I'm just not sure how.
So how exactly do I proceed?
Additional Info:
I'm not exactly sure what flavor of Linux it is. On GoDaddy, it simply says Linux.
When I ssh, it shows me:
-bash-3.2$:
Also, having my website simply show the myapp.meteor.com webpage would work too. An explanation of how to do that would also work.
Discover Meteor has a chapter on deployment which helps to answer this question. For Ubuntu-based servers they recommend meteor-up. I haven't used it, but it's probably worth checking out. Previous versions of the book recommended meteoric.
I wrote my own set of bash scripts using a few ideas from meteoric, but I already had a lot of experience doing deployment scripting. Frankly there's nothing quite like figuring it all out yourself, but doing sysadmin tasks doesn't appeal to everyone and it can be hard to pick up in a hurry.
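If you do want to do it by hand, the rough shape is below. This assumes a reasonably recent Meteor release (older ones used meteor bundle instead of meteor build) plus Node and MongoDB already installed on the server; every path, host, and app name is a placeholder:
# on your workstation: build a plain Node bundle and copy it up
meteor build ../output --architecture os.linux.x86_64
scp ../output/myapp.tar.gz user@yourserver:
# on the server: unpack, install the server dependencies, set the environment, run it
tar -xzf myapp.tar.gz
(cd bundle/programs/server && npm install)
export MONGO_URL='mongodb://localhost:27017/myapp'
export ROOT_URL='http://www.example.com'
export PORT=3000
node bundle/main.js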

Publish-AzureServiceProject is not updating files on the cloud

I have a PHP Azure project which I have to manage with PowerShell cmdlets. One of these, Publish-AzureServiceProject, doesn't seem to be detecting file changes, so these are not updated in the cloud (even though no errors are displayed).
I have remote-desktopped into the machines, and the code there has definitely not been updated in weeks.
If I deploy to the local emulator, it is fine, but that is much more obvious because it displays "removing old package" and "creating local package". The cloud package definitely contains the latest files, so the packaging is working fine.
Can anyone tell me how to force the publish to update the files on the cloud and more importantly, why this is not happening? Also, if I force the update, will it deploy to a new box and get a new IP Address?
Thanks.
It seems to work now.
I removed and reinstalled the Azure libraries on my machine, created a new project from scratch, and copied the original files over into it. I have not included diagnostics (not sure if that's an issue), and I have modified the Publish-AzureServiceProject script to select the subscription each time before it publishes.
It is possible that the subscription confusion was not helping (I have two Azure subscriptions, and it might have used the wrong one at some point and done something weird). It is also possible there was some conflict between various versions of the Azure SDK, since I have been using it for over six months, but at the moment all is good.
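For reference, the subscription-selection step before publishing looks roughly like this; the cmdlet names are from the Azure PowerShell module of that era, and the subscription and service names are placeholders:
# make sure the intended subscription is active before every publish
Select-AzureSubscription -SubscriptionName "My Primary Subscription"
Publish-AzureServiceProject -ServiceName "myphpservice" -Launch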
A related article on my blog here: Problems with PHP Azure
Thanks for the interest

.net application throwing TypeLoadExceptions or saying that side-by-side configuration is invalid, etc

I post this merely as a reference for others who might end up in the same situation. Since I spent almost 3 days trying to figure out the root cause of the problem, I thought it would be a good idea to post the solution here.
My situation was as follows:
I tried to build a deployment package for a .NET application and got TypeLoadExceptions, FileNotFoundExceptions (regarding DLLs), side-by-side configuration errors, etc., once I tried to run it on a vanilla test machine.
[edit]: Stack Overflow won't let me answer my own question within 8 hours of it being posted; the answer follows in ~8 hours ;)
The problem was that one of my application's dependency projects was set to a "Debug" build in the Visual Studio Configuration Manager, so the debug DLL of that dependency ended up being used for release builds as well. On any development machine this was no problem, since all the debug runtimes were available.
On the vanilla test machine, however, only the release runtimes were present, which caused me a lot of trouble and produced meaningless exceptions that led me in many wrong directions via Google.
In my case it was SlimDX that was set to produce a debug build in the VS Configuration Manager, even when doing release builds. Since SlimDX makes use of the VC runtimes, I got the above problem, but this could happen with any .NET assembly that uses the VC runtimes.
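A quick way to catch this on the built output is to check which C runtime a suspect assembly links against; a sketch, assuming the Visual Studio tools are on the PATH (debug runtimes show up with a trailing D, e.g. MSVCR90D.dll):
dumpbin /dependents SlimDX.dll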
I hope this will eventually save someone some hours ;)