AWS MobileAnalyticsManager access to folder 'AWS Mobile Services\M4SP' is denied

I am trying to add the AWSSDK DLL to my C# code to collect event data and pass it to an AWS bucket. The project was created with the Visual Studio SharePoint template and contains WSP files. The following code shows how I use the AWSSDK:
using Amazon;
using Amazon.CognitoIdentity;
using Amazon.MobileAnalytics.MobileAnalyticsManager;

CognitoAWSCredentials credentials = new CognitoAWSCredentials(
    "us-east-1:xxxxxx",      // Identity pool ID
    RegionEndpoint.USEast1
);

Amazon.AWSConfigs.ApplicationName = "M4SP";
AWSConfigs.LoggingConfig.LogMetrics = true;
AWSConfigs.LoggingConfig.LogResponses = ResponseLoggingOption.Always;
AWSConfigs.LoggingConfig.LogMetricsFormat = LogMetricsFormatOption.JSON;

MobileAnalyticsManager manager = MobileAnalyticsManager.GetOrCreateInstance(
    "xxxxxxxxxxxxxxxxxxx",   // App ID
    credentials,
    RegionEndpoint.USEast1   // Region
);

CustomEvent customEvent = new CustomEvent("TestRecordEvent");
customEvent.AddAttribute("label", "M4SP");
customEvent.AddAttribute("action", "invoke");
customEvent.AddAttribute("details", "run the workflow test");
manager.RecordEvent(customEvent);
I found that the code inside the AWSSDK DLL tries to log the event data to a local folder before passing it to AWS. The folder is located at C:\Users\[userid]\AppData\Roaming\AWS Mobile Services.
There is no problem in a standalone project, since it always runs under the current user's identity and therefore has access to that folder. But because of the authentication mechanism of SharePoint solutions, the code runs under the Application Pool identity instead; that identity is denied access to the folder and the whole process fails.
Here is the error:
"Access to the path 'AWS Mobile Services\M4SP' is denied."
I modified the access rights for the SharePoint Application Pool identity (in my case, the "Network Service" account), but it still can't access the folder.
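To double-check which account is actually doing the writing, a minimal diagnostic sketch like the following can be called from the same code path that records the event; the scratch-file path is only an illustration:

using System;
using System.IO;
using System.Security.Principal;

public static class IdentityCheck
{
    // Writes the effective Windows identity to a scratch file so you can see
    // exactly which account needs rights on the AWS Mobile Services folder.
    public static void LogCurrentIdentity()
    {
        string name = WindowsIdentity.GetCurrent().Name;   // e.g. "NT AUTHORITY\NETWORK SERVICE"
        File.AppendAllText(@"C:\Temp\identity-check.log",
            DateTime.Now.ToString("o") + " running as " + name + Environment.NewLine);
    }
}

Whatever account this reports is the one that must be granted rights on the AWS Mobile Services folder.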
Does anyone have a solution for this issue? Thanks very much for the help!!

Related

Error opening MDB file on network location with EntityFrameworkCore.jet.oledb 3.1

I have a project that opens MS Access DBs in a network folder. The project is a .NET Core 3.1 Web API.
EDIT: I'm using EntityFrameworkCore.Jet.OleDb v3.1 with the provider in the connection string: Provider=Microsoft.ACE.OLEDB.12.0.
It very simply replaces a list of boards with new ones:
public void SyncBoards(List<Board> boards)
{
    // Replace the incoming boards: delete them by key, then re-insert them.
    _cutriteDbContext.RemoveRange(boards);
    _cutriteDbContext.SaveChanges();
    _cutriteDbContext.AddRange(boards);
    _cutriteDbContext.SaveChanges();
}
I'm getting the error (sanitized):
System.Data.OleDb.OleDbException (0x80004005): The Microsoft Access database engine cannot open or write to the file '\\{SHAREDFOLDER}\{PATH_TO_FILE}\imatv11.mdb'. It is already opened exclusively by another user, or you need permission to view and write its data.
This works fine in IIS Express when debugging from VS 2019. I believe the failure is because the API doesn't have the credentials to access the file. The DBs are not password protected. Is there a way to provide credentials for the file?
I had to set the identity in the IIS application pool, under advanced settings.
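Once the application pool runs under an account that can reach the share, the context only needs the Jet/ACE connection string. For reference, a minimal sketch of that wiring, assuming the UseJet extension from EntityFrameworkCore.Jet and a hypothetical CutriteDbContext (the path placeholders are kept from the error above):

using Microsoft.EntityFrameworkCore;

// Hypothetical context; the connection string values are placeholders.
public class CutriteDbContext : DbContext
{
    public DbSet<Board> Boards { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // EntityFrameworkCore.Jet.OleDb uses the ACE OLE DB provider under the hood.
        optionsBuilder.UseJet(
            @"Provider=Microsoft.ACE.OLEDB.12.0;" +
            @"Data Source=\\{SHAREDFOLDER}\{PATH_TO_FILE}\imatv11.mdb");
    }
}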

How to read a local csv file using Azure Data Factory and a self-hosted runtime?

I have a Windows Server VM with the ADF Integration Runtime installed running under a local account called deploy. This account is a member of the local admins group. The server is not domain-joined.
I created a new linked service (File System) and pointed it to a CSV file in the root of the C: drive as a test. When I test the connection I get "Connection failed":
Error occurred when trying to access the file in Folder 'C:\etr.csv', File filter: ''. The directory name is invalid. Activity ID: 1b892702-7cc3-48d5-83c7-c680d6d15afd.
Any ideas on a fix?
The linked service needs to point to a folder on the target machine. In your linked service configuration, change C:\etr.csv to C:\ and then define a new dataset that uses the linked service and selects etr.csv.
The dataset represents the structure of the data within the linked data store, while the linked service defines the connection to the data source. So the linked service should point to the folder instead of the file: it should be C:\ rather than C:\etr.csv.

How to share information across notebooks in a DSX project

Is it possible to share information (such as credentials) across multiple notebooks in a DSX project, e.g. with environment variables?
For example, a Cloud Foundry application in Bluemix has a control setting where environment variables can be defined. Is there a similar concept for a DSX project? (I couldn't see anything in the various project-level settings.)
Separate notebooks have separate runtimes in the background, and at the moment it is not possible to share credentials among notebooks by defining environment variables. But there are helper methods for the most common credential requirements in a project; this is called the "Insert to code" method.
For example, if you have an object store associated with your project:
1. Select the "Data" tab in the top bar.
2. Add a file to the object store by browsing or simple drag-and-drop.
3. Insert the credentials of that object store container into your notebook by selecting the "Insert credentials" option, right beside your file in the right-hand panel.
You can then directly insert those credentials (step 3) into any other notebook in that project.
Besides "Insert to code" there are other helper functions like "Insert SparkR dataframe", "Pandas dataframe" etc. to speed up the analytics process of data scientists. Hope that was a bit helpful.
FYI - I've added a feature request on UserVoice to allow Bluemix services to be bound to a project so that credentials can be accessed the same way a Bluemix application accesses its credentials. Please vote if you think this would be useful.
Currently, one pattern I use quite a lot is to create a notebook in my project that is used to save credentials to a file on DSX:
! echo '{ "username": "xxxx", "password": "xxxx", ... }' > cloudant_creds.json
That file is now available to all of the notebooks in the project. NOTE: the file is saved on the Spark service file system, so if you use the same Spark service in other DSX projects, they will also be able to access the file.
The credentials for Cloudant normally include other fields such as host; I haven't shown them here to keep the example simple, and have indicated the omitted fields with the '...'. I normally copy this JSON from the Bluemix service credentials field.
In your other notebooks, you would read the credentials something like this:
import json

with open('cloudant_creds.json') as data_file:
    sourceDB = json.load(data_file)
You can then refer the credentials like this:
# json.load returns a dict, so use key access rather than attribute access
dfReader = sqlContext.read.format("com.cloudant.spark")
dfReader.option("cloudant.host", sourceDB['host'])
if sourceDB['username']:
    dfReader.option("cloudant.username", sourceDB['username'])
if sourceDB['password']:
    dfReader.option("cloudant.password", sourceDB['password'])
df = dfReader.load(sourceDB['database']).cache()

How to deploy with Release Management to remote datacenter

We are running TFS and Release Management on premises, and I want to deploy my applications to a remote datacenter.
Access is over the internet, so no Windows shares are available.
I am using the vNext templates, and as far as I know RM only supports UNC paths over Windows shares.
How can I use Release Management to deploy software to this datacenter?
I'm working on this solution:
Use WebDAV on an IIS server located inside the datacenter.
The RM server and the target can use the WebDAV client built into Windows and access it via a UNC path.
I haven't gotten this to work yet, as RM won't use the correct credentials to log on to the WebDAV server.
Updated with my solution:
This is only a proof of concept and is not production tested.
Set up a WebDAV site accessible from both the RM server and the target server.
Install the "Desktop Experience" feature on both servers.
Build the following DLL:
using System;
using System.ComponentModel.Composition;
using System.Diagnostics;
using System.IO;
using Microsoft.TeamFoundation.Release.Common.Helpers;
using Microsoft.TeamFoundation.Release.Composition.Definitions;
using Microsoft.TeamFoundation.Release.Composition.Services;
namespace DoTheNetUse
{
    [PartCreationPolicy(CreationPolicy.Shared)]
    [Export(typeof(IThreadSafeService))]
    public class DoTheNetUse : BaseThreadSafeService
    {
        public DoTheNetUse() : base("DoTheNetUse")
        {}

        protected override void DoAction()
        {
            Logger.WriteInformation("DoAction: [DoTheNetUse]");
            try
            {
                Logger.WriteInformation("# DoTheNetUse.Start #");
                Logger.WriteInformation("{0}, {1}", Environment.UserDomainName, Environment.UserName);
                {
                    // Map the WebDAV share with explicit credentials for the session the service runs under.
                    Logger.WriteInformation("Net use std");
                    var si = new ProcessStartInfo("cmd.exe", @"/c ""net use \\sharedwebdavserver.somewhere\DavWWWRoot\ /user:webdavuser webdavuserpassword""");
                    si.UseShellExecute = false;
                    si.RedirectStandardOutput = true;
                    si.RedirectStandardError = true;
                    var p = Process.Start(si);
                    p.WaitForExit();
                    Logger.WriteInformation("Net use output std:" + p.StandardOutput.ReadToEnd());
                    Logger.WriteInformation("Net use output err:" + p.StandardError.ReadToEnd());
                }
                //##########################################################
                Logger.WriteInformation("# Done #");
            }
            catch (Exception e)
            {
                Logger.WriteError(e);
            }
        }
    }
}
Name it "ReleaseManagementMonitor2.dll"
Place it in a subfolder of the "ReleaseManagementMonitor" service folder.
Configure the shared path as the solution below states.
DO NOT OVERWRITE THE EXISTING "ReleaseManagementMonitor2.dll"
The reason that this works is MEF.
The ReleaseManagementMonitor service tries to load the dll "ReleaseManagementMonitor2.dll" from all subfolders.
This dll implements a service interface that RM recognises.
It the runs "net use" to apply the credentials to the session that the service runs under, and thereby grants access to the otherwise inaccessible webdav server.
This solution is certified "Works on my machine"
RM does indeed work only with UNC paths; you are right about that.
You can leverage that to make your scenario work -
In Theory
Create a boundary machine on the RM domain, where your drops can be copied.
The deploy action running on your datacenter can then copy bits from this boundary machine, using credentials that have access on that domain. (These credentials are provided by you in the WPF console)
How this works
1. Have a dedicated machine on the RM server domain (say D1) that will be used as a boundary machine.
2. Define this machine as a boundary machine in RM by specifying a shared path that will be used by your data centre. Go to the settings tab in your WPF console and create a new variable: { Key = RMSharedUNCPath, Value = \\BoundaryMachine\DropsLocation }. RM now understands you want to use this machine as your boundary machine.
3. Make sure you take care of these permissions:
RM Server should have write permissions on the \\BoundaryMachine\DropsLocation share.
Pass down credentials of domain D1 to the target machine in the data centre (Domain D2), that can be used to access the share.
4. Credentials can be passed down from the WPF console; you will have to define the following two config variables in the settings tab again.
Key = RMSharedUNCPathUser ; Value = domain D1 user name
Key = RMSharedUNCPathPwd ; Value = password for the user defined above.
PS - Variable names are case sensitive.
Also, to let RM know that you want to use the shared UNC mechanism, check the corresponding checkbox for the RM server and connect to it via IP rather than DNS name, since these are in different domains.
Try using Get-Content on the local server and then Set-Content on the remote server, passing the file contents over.
You could package everything into an archive of some kind.
Release Management copies VisualStudioRemoteDeployer.exe to the C:\Windows\DtlDownloads\VisualStudioRemoteDeployer folder on the target server, then copies the scripts from the specified location to the target server using robocopy.
So you have to grant the target server permissions on your scripts location.
Release Management update 4 supports "Build drops stored on TFS servers"
http://blogs.msdn.com/b/visualstudioalm/archive/2014/11/11/what-s-new-in-release-management-for-vs-2013-update-4.aspx

Enterprise Library for Logging to Flat File Trace Listener

What permissions need to be given to the Error.log file on the server (running IIS 7.5) so that errors are written to it?
I have created a directory named ErrorLog and set up the listener in web.config like below.
The Windows identity in play depends on how you authenticate:
var principal = ClaimsPrincipal.Current; //normally this reverts to Thread.CurrentPrincipal, but can be changed
return principal.Identity.Name;
or
var windowsIdentity = WindowsIdentity.GetCurrent();
if (windowsIdentity != null)
{
    return windowsIdentity.Name;
}
The executing Windows identity will need create permission on the directory and write permission to the file specified in your EL config.
You can place the file anywhere you like using the config. The default for a file without a path is the start project directory containing the assembly DLL.
In IIS, check the application pool used by the website; the pool determines the Windows identity. See the advanced settings tab for the pool, where the identity approach to be used is defined.
I don't see your configuration example, but..
I find that it is best not to use Flat File outside of development. The Event Log or Database listeners are more suitable for a web application in production.
If you must use Flat File, you will need to give the account (for the application pool assigned to the web application) write access to the directory the file should be created in. By default I think that is the web application's bin directory, unless you include a path in the file name in your configuration.
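If you prefer to script that permission grant rather than set it by hand in Explorer, a minimal sketch along these lines works on .NET Framework. The folder path and account name below are assumptions; replace them with your ErrorLog directory and the actual identity of your application pool:

using System.IO;
using System.Security.AccessControl;

class GrantLogFolderAccess
{
    static void Main()
    {
        // Assumed values: adjust the path and the app pool account for your site.
        string logFolder = @"C:\inetpub\wwwroot\MyApp\ErrorLog";
        string appPoolAccount = @"NT AUTHORITY\NETWORK SERVICE";

        DirectoryInfo dir = new DirectoryInfo(logFolder);
        DirectorySecurity security = dir.GetAccessControl();

        // Allow the app pool identity to create and write files in the folder and below it.
        security.AddAccessRule(new FileSystemAccessRule(
            appPoolAccount,
            FileSystemRights.Write | FileSystemRights.CreateFiles,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));

        dir.SetAccessControl(security);
    }
}

Running this requires administrative rights; the same effect can be achieved once by hand through the folder's Security tab.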
It probably depends on what context your app is operating in. The app pool thread is responsible for writing the log, so give read/write folder permissions to the same identity as your app pool. I think this is NETWORK SERVICE by default.