ADPlus dump analysis: resolving the call stack when the PDB file is not present in production

I am analyzing an application that hangs randomly in production, and I want to generate dumps with ADPlus/DebugDiag to investigate. When I generate dumps from the application running in 'debug' mode, I see a proper call stack with full function names. But when I generate dumps from the application running in 'release' mode, the call stack is not resolved properly, because no PDB file is associated with the release build.
I may not be allowed to copy PDB files into the production environment. So is there any other way to resolve the call stack?
Please let me know if you need any more details.
Thanks, all.

If you have access to the release PDBs, you should be able to point your symbol path at that folder when loading the dump. Note that the PDBs only need to be available on the machine where you analyze the dump, not on the production server itself. The easiest way is to set the _NT_SYMBOL_PATH environment variable to the path of those PDBs; then, when loading the dump, the debugger should be able to load them.
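As a rough sketch (all paths here are placeholders), pointing the debugger at the release PDBs might look like this:

```text
rem Before launching WinDbg (cmd.exe); the srv* part also pulls OS symbols:
set _NT_SYMBOL_PATH=C:\ReleasePdbs;srv*C:\SymCache*https://msdl.microsoft.com/download/symbols

rem Or inside WinDbg/cdb, after opening the dump:
.sympath C:\ReleasePdbs
.reload /f
```

If a stack still doesn't resolve, running !sym noisy before .reload makes the debugger print why each PDB load fails.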

Related

Warning: Your deployed application may error out because file or folder paths not present

I get the following warning when deploying an application with matlab:
[Warning: Your deployed application may error out because file or folder paths
not present in the deployed environment may be included in your MATLAB startup
file. Use the MATLAB function "isdeployed" in your MATLAB startup file to
determine the appropriate execution environment when including file and folder
paths, and recompile your application.
]
I have tried to reduce my application to merely a program that creates a figure, nothing more, and I still get the message.
Note: When I start my application, I get the splash screen and then it crashes.
I have tried deploying with R2016b, R2017a, and R2017b. How do I get around this? I have tried using my own startup file, and I have used isdeployed as suggested. Nothing seems to work.
All help appreciated!
Type which startup.m -all in your Command Window. If a startup.m is found, remove it from the path, then perform the build process again. It might work.
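For reference, the guard the warning is asking for might look like this in startup.m (the path below is a placeholder; only the isdeployed check matters):

```matlab
% startup.m - only touch local paths when running inside MATLAB,
% not inside the compiled/deployed application
if ~isdeployed
    addpath('C:\work\my_project\helpers');  % placeholder dev-only path
end
```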

Elasticsearch Script File in Local Mode

Using ElasticSearch, one can place scripts of various languages in ElasticSearch's /config/scripts directory, and they will be automatically loaded for use in Update requests and other types of operations. In my production environment, I was able to accomplish this and run a successful Update using the script.
So far, however, I've been unsuccessful in getting this feature to work when running a node in local mode for integration tests. I assumed that, since one can configure the Elasticsearch node with an elasticsearch.yml on the classpath, one should also be able to add a scripts directory and place the desired script there, causing it to be loaded into the local node. That doesn't seem to be the case: when I try to execute an Update that uses the script, it cannot be found.
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: Unable to find on disk script scripts.my_script
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:269)
at org.elasticsearch.script.ScriptService.executable(ScriptService.java:417)
at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:194)
... 6 more
Does anyone know the proper way to do automatic script loading into a local ElasticSearch node for testing?
I am using the basic ElasticSearch client included in "org.elasticsearch:elasticsearch:1.5.2".
After perusing the source code, I discovered that the reason my script was not being picked up by Elasticsearch's directory watcher was that it was watching user.dir, the default configuration directory. The scripts/ subdirectory would have had to be under that directory for the node to pick up my script and load it into the ScriptService for use during updates.
The configuration directory can be overridden in your elasticsearch.yml with the key path.conf. Setting that to somewhere in your project would allow you to load scripts during testing and add those scripts to version control as well. Make sure that under that directory is a scripts/ directory; that is where your scripts will be loaded from.
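As a sketch of what that looks like in a test setup (the directory name is an assumption; any project-relative path works, and the .groovy extension assumes the default script language for this Elasticsearch version):

```yaml
# elasticsearch.yml used by the local test node
path.conf: src/test/resources/elasticsearch
```

with the script itself placed at src/test/resources/elasticsearch/scripts/my_script.groovy, next to that elasticsearch.yml. Both files can then live in version control alongside the tests.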

Is it possible to save settings and load resources when compiling to just one standalone exe?

If I compile a script for distribution as a standalone exe, is there any way I can store settings within the exe itself, to save having to write to an external file? The main incentive for this is to save having to develop an installation process. I only need to store a few bytes.
Also, can resources such as images be compiled into the exe?
Using alternate data streams opens up a can of worms, so I wouldn't go that way. Writing config data back into the exe itself won't work, as the file is locked for write access while it is executing.
What I usually do is store config data under %A_AppData%\%A_ScriptName%\%A_ScriptName%.ini.
When the script starts, I use IniRead, which also provides a default value if the key isn't found - which is the case the first time the script runs.
The complementary IniWrite calls in an OnExit subroutine/function will create the ini file if necessary.
This way no installation is needed, and the config is stored in the proper, familiar place.
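A minimal sketch of that pattern (the section and key names are just examples):

```autohotkey
; Read config at startup; DefaultUser is returned on the very first run,
; before the ini file exists
iniFile := A_AppData . "\" . A_ScriptName . "\" . A_ScriptName . ".ini"
IniRead, userName, %iniFile%, General, UserName, DefaultUser
OnExit, SaveConfig
return

SaveConfig:
; Create the folder (and thereby the ini file) if they don't exist yet
FileCreateDir, %A_AppData%\%A_ScriptName%
IniWrite, %userName%, %iniFile%, General, UserName
ExitApp
```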
The AutoHotkey forum has dealt with this question before. In that case, the user didn't want extra files - period. The method was to use the file system to save alternate data. Unfortunately I can't find the post.
A simpler method is to use the FileInstall command. When the script is compiled, the external file is stored within the exe. When the compiled exe executes the same command, the file is copied to the same directory as the running script. It is a simple yet effective 'install'.
With a little testing for the config file, the FileInstall command can be skipped; skipping it allows changes made to the configuration after 'installation' to be preserved.
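A sketch of that test-and-skip pattern (the file name is an example):

```autohotkey
; settings.ini is baked into the exe at compile time; only extract it
; when no config exists yet, so post-'install' edits are preserved
configPath := A_ScriptDir . "\settings.ini"
if !FileExist(configPath)
    FileInstall, settings.ini, %configPath%
```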
I have not tried saving settings within the compiled exe file, but I have included resources. I'm not sure which version of AHK you're using or how you are compiling, but I can right-click my scripts to compile, and there is a "compile with options" choice that lets you include resources in your compiled exe.

Named Config files - Hot or Not?

I've chosen a solution to our config file process and need to justify it. I'd appreciate insight and criticism. Thanks!
Problem:
Our web applications have a single application.cfm file containing variables and conditionals
<cfif url = "*dev*" or "jargon123">
this is a dev environment, do devy things
</cfif>
So a dev new to the application will deploy a local instance, fire it up and start poking around. The problem is that the config file contains production values: it starts hitting production data and sending production emails. Also, since the URL they are hitting is http://App_name:PORT or http://localhost, the dev conditionals are never triggered, so even more production behaviour happens in dev.
What other co-workers want:
A switch statement. The app.cfm would lead with an environment variable set to "development", then declare general variables, then go into a switch statement and declare environment-specific variables. I disagree with this method because some of our config files are 100-250 lines; that can be a massive switch statement I don't want to muck around in.
My chosen solution:
App.cfm has been deleted and removed from version control. We now have multiple Application.Environment.cfm files, i.e. Application.Prod.cfm, Application.Dev.cfm, Application.MyName.cfm, etc. Each file contains all of the environment-specific data. I moved production-specific settings out of conditionals and into App.Prod.cfm. Deployment to a new environment is now: 1. Duplicate App.Dev.cfm as App.Me.cfm and commit it. 2. Update all variables to my personal data (email, login, etc.). 3. Duplicate App.Me.cfm as App.cfm and use it as the config file.
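As a sketch, setting up a new dev box then comes down to two copies and an edit (file names as in the question; Windows cmd syntax):

```text
copy Application.Dev.cfm Application.MyName.cfm
rem edit Application.MyName.cfm: personal email, login, etc., then commit it
copy Application.MyName.cfm Application.cfm
rem Application.cfm itself stays out of version control
```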
I won't go into why I'm not doing the other solutions, but here are my reasons for this one:
It forces the deployment engineer to select the right config file for the environment - the app won't work without an App.cfm.
It limits the potential for user error. The scenario would be a user copying data into a new environment and accidentally copying production content.
It's cleaner and easier to work with - config values are completely compartmentalized from each other.
I've found a lot of articles on working with environment-specific config files, but not on why they are better. That's the motivation behind this post.
I would also delete the production config and provide only development versions of the config file. Reasons:
a config file could contain security-relevant data
many developers are just lazy; if the application runs, they don't care about the config
if developers do not use the currently provided mechanisms (the dev URL), how can you be sure they set the environment variable?
using the live config during testing could result in active debug options on production later (forgotten when cleaning up the configuration)
You (development) need to be able to switch between different configurations for different versions of your software at any time. If each setup has its own configuration file, this is a lot easier than if they all share the same file.
If you have all configuration in a single file, you have to read the whole big file and decide which parts to ignore. This is messier than simply reading a small, dedicated file in its entirety.
(I assume that you can have multiple versions of the software installed concurrently in different locations on the same machine. If you can't, you have a bigger problem. But even so, having separate configuration files is beneficial.)
Those are strong 'pros' for separate configuration files - they outweigh the minor 'con' that you have to identify which configuration file to use by some mechanism or another. It might be via an environment variable or via a command-line option, with a suitable default if neither is specified; the command line should override the environment.

Packaging with NAnt, how to handle different environments

I'm using NAnt to build an ASP.NET MVC project.
The NAnt script then creates a zip package, containing a deploy script and all the necessary files.
The deploy script backs up the current running website, sets up the newer version of the website and updates the DB.
This works fine for a single environment.
However, we're asked more and more to set up a Staging/Acceptance environment next to the production. These environments, of course, differ in file structure, DB server, config settings etc.
How can I best handle this in the deploy scripts? I don't want to create separate variables for each environment, distinguishable by name only.
Providing defaults and providing the variables in separate files seems more logical.
Does anyone have practical experiences with this?
Store the things that you think are likely to change between environments in config files.
Visual Studio can do the heavy lifting here if you like; you can create settings and specify default values from the Settings tab of a Visual Studio project's properties.
This will create the config file for you and provide strongly-typed access through Properties.Settings.Default.
As for handling multiple environments through your build, I've seen some people recommend maintaining multiple copies of the config files - one per environment, for example - and others recommend using NAnt to modify the config files during the build or deployment phase. You can use a property passed to NAnt on the command line (for example) to select which environment you are building (or deploying, depending on how you're doing it).
I don't recommend either of these approaches because:
They both require changes to your build to support new environments.
If you change a setting in a deployed environment and forget to update the build then the next deployment will reset the change (somewhat defeating the point of config settings).
If someone creates a new environment (let's say they want to explore issues arising from upgrading to a new version of SQL Server) and doesn't fancy creating all-new config files in the build system, they might decide to just reuse an existing environment's settings. Let's say they choose to deploy using the live settings and forget to change something afterwards - your new 'test' environment could now be pointing at live kit.
I create a copy of each config file (called web.config.example, for example) and comment out the settings within them (unless they have meaningful defaults). I check these in and have them deployed instead of the real web.config - that is, web.config is NOT deployed automatically; web.config.example is deployed as web.config.example, and the admin of the new environment has to copy and rename the file to web.config and provide meaningful values. I also put all calls to the settings behind my own wrapper class - if a mandatory setting is missing, I throw an exception.
The build and my environments no longer depend on each other - one build can be deployed to any environment.
If a setting is missing (a new environment or a new setting in an existing environment) then you get a nice clear exception raised to tell the admin what to do.
Existing settings are not altered after an upgrade because only the .example files were updated. It's an admin task to compare the current settings with the latest example and revise if necessary.
To configure the deployment, you could put all the environmental settings (install paths, etc.) into NAnt properties and move them into a separate file (settings.build, for example), then use the NAnt include task to include that file at the top of your deployment file (deploy.build, for example). You can then deploy a new version of deploy.build without overwriting your config changes, as they are in settings.build. If a new property is introduced into deploy.build, NAnt will fail with a nice message telling you that you haven't set that property.
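A sketch of that split (the file, property, and target names here are all assumptions):

```xml
<!-- settings.build: environment-specific values, edited per environment -->
<project name="settings">
  <property name="install.dir" value="D:\sites\myapp" />
  <property name="db.server"   value="STAGING-SQL01" />
</project>

<!-- deploy.build: pulls the settings in at the top, then uses them -->
<project name="deploy" default="deploy">
  <include buildfile="settings.build" />
  <target name="deploy">
    <echo message="Deploying to ${install.dir} (db: ${db.server})" />
  </target>
</project>
```

Shipping a new deploy.build then leaves settings.build - and with it the environment's configuration - untouched.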