How to set WinDbg cache* local symbol path - windbg

This is the target of my WinDbg launch shortcut.
From
"E:\software\Windows Kits\10\Debuggers\x86\windbg.exe" -y SRV*E:\symbol*http://msdl.microsoft.com/download/symbols -b -k com:port=//./pipe/com_1,baud=115200,pipe
to
"E:\software\Windows Kits\10\Debuggers\x86\windbg.exe" -y SRV*[cache*]E:\symbol;D:\projects*http://msdl.microsoft.com/download/symbols -b -k com:port=//./pipe/com_1,baud=115200,pipe
My local symbol directory is D:\projects. The local PDB file is always locked.

You are mixing the HTTP server syntax SRV* with the cache syntax cache*. All in all, I wonder why you actually need a cache; it doesn't look like you want one. You may have a larger misunderstanding of how a symbol path works, and this answer will not go into every detail either.
Microsoft symbols
Let's begin with the Microsoft symbol server:
SRV*E:\symbol*http://msdl.microsoft.com/download/symbols
srv says that this is an HTTP server.
E:\symbol says where those symbols shall be stored
http://... says where to get the symbols from
the individual parts of that definition are separated by *
Your own symbols
What you probably want is to have your local symbols (PDB files on your disk) available. You do that with just
D:\projects
and nothing else, where D:\projects is a directory which directly contains the PDB files, which is often the case when you build the project locally on your machine.
If your company has a network share, you simply add the network share:
srv to say it's an online resource
C:\netsymbols as your local directory
\\ourserver\symbols for the network share
the individual parts of that definition are separated by * (like before)
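Putting those parts together (the directory and share names are just examples), the entry looks like
srv*C:\netsymbols*\\ourserver\symbols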
If you have a company symbol server via HTTP (like TFS offers), you would use
srv to say it's an HTTP server.
E:\oursymbols says where those symbols shall be stored (don't put that directory near your source code, e.g. don't use D:\projects, because that likely contains your projects, not symbols)
http://tfs.example.com/myproject for your company's server.
the individual parts of that definition are separated by * (like before)
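Put together (again, the server and directory names are only examples), that gives
srv*E:\oursymbols*http://tfs.example.com/myproject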
Combination of different symbol paths
You can combine different symbol paths using ;. You typically want to do that in the order of latency and throughput, i.e.
Your local hard disk (like D:\projects)
Your local network (like srv*C:\netsymbols*\\ourserver\symbols or local HTTP servers)
Internet (like Microsoft HTTP server)
D:\projects;srv*C:\netsymbols*\\ourserver\symbols;srv*E:\oursymbols*http://tfs.example.com/myproject;SRV*E:\symbol*http://msdl.microsoft.com/download/symbols
The cache
Now, the symbol cache is a harder concept to grasp. It is defined by
cache*E:\symbolcache
cache is the indicator that you want a cache
E:\symbolcache is where you want the cache to be on hard disk
the individual parts of that definition are separated by * (like before)
The cache will store everything that is to the right of it. So typically you put it first, giving
cache*E:\symbolcache;D:\projects;srv*C:\netsymbols*\\ourserver\symbols;srv*E:\oursymbols*http://tfs.example.com/myproject;SRV*E:\symbol*http://msdl.microsoft.com/download/symbols
I never used a cache, because I prefer to have individual locations for the different symbols. The cache may be useful if you don't specify HDD locations for each individual part.
Commands
If you're not sure how to construct a symbol path, take a look at .symfix and .sympath+. These will help you get a correct Microsoft symbol server as well as combine other paths correctly. See this answer for more examples on symbol paths and how they work.
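For example (the cache and project directories are placeholders for your own paths), you can let WinDbg construct the Microsoft part and then append your local directory from within the debugger:
.symfix E:\symbolcache
.sympath+ D:\projects
.reload
.symfix points the debugger at the Microsoft symbol server with the given local store, .sympath+ appends a path without overwriting the existing one, and .reload makes the debugger pick up the new path.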

Related

customize the location of save files (apkovl)

To be concise: I would like Alpine to save the backups made with lbu ci in a subdirectory of the bootable disk, whereas the default behavior is to put them in its root.
Insight
I have searched the internet and tried various things but they all failed.
The documentation talks about the boot parameter of syslinux.conf:
A relative path, interpreted relative to the root of the alpine_dev.
This is my append inside syslinux.conf
This boot parameter should specify where the backups are found at startup, while the place they should be saved to by lbu ci should be set in /etc/lbu/lbu.conf.
However, I don't understand how to use these variables there either, although it should be clear.

Firebase hosting: The remote web server hosts what may be a publicly accessible .bash_history file

We host our website on firebase. We fail a security check due to the following reason:
The remote web server hosts publicly available files whose contents may be indicative of a typical bash history. Such files may contain sensitive information that should not be disclosed to the public.
The following .bash_history files are available on the remote server:
- /.bash_history
- /cgi-bin/.bash_history
- /scripts/.bash_history
Each is accompanied by the same note: the file is being flagged because the scan is set to 'Paranoid'; the contents of the detected file have not been inspected to see whether they contain any of the common Linux commands one might expect to see in a typical .bash_history file.
The problem is that we don't have an easy way to get access to the hosting machine and delete these files.
Anybody knows how it can be solved?
If you are using Firebase Hosting, you should check the directory (usually public) that you are uploading via the firebase deploy command. Hosting serves only those files (plus a couple of auto-generated ones under the reserved __/ path for auto-configuration).
If you have a .bash_history, cgi-bin/.bash_history or scripts/.bash_history in that public directory, then it will be uploaded to and served by Hosting. There are no automatically served files with those names.
You can check your public directory, and update the list of files to ignore on the next deploy using the firebase.json file (see this doc). You can also download all the files that Firebase Hosting is serving for you using this script.
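As a sketch, the relevant part of firebase.json could look like the following; the default config generated by firebase init already excludes dotfiles such as .bash_history via the **/.* pattern, so you only need to adjust it if you changed that list:
{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}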

Docker and sensitive information used at run-time

We are dockerizing an application (written in Node.js) that will need to access some sensitive data at run-time (API tokens for different services) and I can't find any recommended approach to deal with that.
Some information:
The sensitive information is not in our codebase, but it's kept on another repository in encrypted format.
On our current deployment, without Docker, we update the codebase with git, and then we manually copy the sensitive information via SSH.
The Docker images will be stored in a private, self-hosted registry.
I can think of some different approaches, but all of them have some drawbacks:
Include the sensitive information in the Docker images at build time. This is certainly the easiest option; however, it makes that information available to anyone with access to the image (I don't know if we should trust the registry that much).
Like 1, but having the credentials in a data-only image.
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now. This is very convenient too, but then we can't spin up new servers easily (maybe we could use something like etcd to synchronize them?)
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
PS: I've done some research but couldn't find anything similar to my problem. Other questions (like this one) were about sensitive information needed at build time; in our case, we need the information at run time.
I've used your options 3 and 4 to solve this in the past. To rephrase/elaborate:
Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now.
I use config management (Chef or Ansible) to set up the credentials on the host. If the app takes a config file needing API tokens or database credentials, I use config management to create that file from a template. Chef can read the credentials from an encrypted data bag or attributes, set up the files on the host, then start the container with a volume just like you describe.
Note that in the container you may need a wrapper to run the app. The wrapper copies the config file from whatever the volume is mounted to wherever the application expects it, then starts the app.
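A minimal sketch of that setup, assuming a host directory /etc/myapp populated by config management and a Node.js entry point (all names here are made up, not from the question). The wrapper baked into the image:
#!/bin/sh
# copy the mounted config to where the application expects it, then start the app
cp /secrets/config.json /app/config/config.json
exec node /app/server.js
And on the host, mount the directory read-only when starting the container:
$ docker run -d -v /etc/myapp:/secrets:ro myorg/myapp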
Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
Yes, it's cumbersome to pass a bunch of environment variables using the -e key=value syntax, but this is how I prefer to do it. Remember that the variables are still exposed to anyone with access to the Docker daemon. If your docker run command is composed programmatically, it's easier.
If not, use the --env-file flag as discussed here in the Docker docs. You create a file with key=value pairs, then run a container using that file.
$ cat >> myenv << END
FOO=BAR
BAR=BAZ
END
$ docker run --env-file myenv alpine env   # "alpine env" is just an example image and command
That myenv file can be created using chef/config management as described above.
If you're hosting on AWS you can leverage KMS here. Keep either the env file or the config file (that is passed to the container in a volume) encrypted via KMS. In the container, use a wrapper script to call out to KMS, decrypt the file, move it in to place and start the app. This way the config data is not exposed on disk.
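A rough sketch of such a wrapper, assuming the encrypted file ships at /secrets/config.json.enc, the AWS CLI is present in the image, and the container's IAM role is allowed to call kms:Decrypt (paths and the Node.js entry point are illustrative only):
#!/bin/sh
# decrypt the KMS-encrypted config, place it where the app expects it, then start the app
aws kms decrypt \
    --ciphertext-blob fileb:///secrets/config.json.enc \
    --output text --query Plaintext | base64 -d > /app/config/config.json
exec node /app/server.js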

WebApp configuration in mod_perl 2 environment

I have a web app I'm writing in mod_perl 2. (It's a custom handler module, not registry or perlrun scripts.) There are several configuration options I'd like to have set at server initialization, preferably from a configuration file. The problem I'm having is that I haven't found a good place to pass a filename for my app's config file.
I first tried loading "./app.conf" but the current directory isn't the location of the modules, so it's unpredictable and error-prone. Or, I have to assume some path -- relative or absolute. This is inflexible and could be problematic if the host OS distribution is changed. I don't want to hard-code a path (though, something in /etc may be acceptable if there's just no better way).
I also tried PerlSetVar, but the value isn't available until request time. While this is workable, it means I'm potentially reading a config file from disk at least once per child (thread) init. I would rather load at server init and have an immutable static hash that is part of the spawned environment when a child is created.
I considered using a config.pl, but this means I either have a config.pl with one option to configure where to find the app.conf file, or I move the options themselves into config.pl and require end-users to respect Perl syntax when setting options. Future users will be internal admins, so that's not unreasonable, but it's more complicated than I'd like.
So what am I missing? Any good alternatives?
Usually a top priority is to avoid having configuration files amongst your executables; otherwise a server misconfiguration could accidentally expose your private configuration info to the world. I put everything the app needs under /srv/app0, with a cfg subdirectory that is a sibling of the directories containing the executables.
If you're pre-loading modules via PerlPostConfigRequire startup.pl (pointing at mod/startup.pl in that tree), then that's the best place to put the configuration file location, ../cfg/app.cnf, and you have complete flexibility in how to store the configuration in memory. An alternative is to PerlModule your modules and load the configuration (with a relative path as above) in a BEGIN block within one of them.
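A minimal sketch of that startup.pl approach, assuming the /srv/app0 layout above; the MyApp::Config package name and the simple key=value file format are made up for illustration:
# httpd.conf
PerlPostConfigRequire /srv/app0/mod/startup.pl

# /srv/app0/mod/startup.pl -- runs once at server startup
package MyApp::Config;
use strict;
use warnings;
use File::Basename qw(dirname);
use File::Spec;

# resolve ../cfg/app.cnf relative to this file, not the current directory
my $cnf = File::Spec->catfile(dirname(__FILE__), '..', 'cfg', 'app.cnf');

our %CFG;
open my $fh, '<', $cnf or die "cannot read $cnf: $!";
while (my $line = <$fh>) {
    next if $line =~ /^\s*(?:#|$)/;                        # skip comments and blank lines
    my ($key, $value) = $line =~ /^\s*(\S+)\s*=\s*(.*?)\s*$/ or next;
    $CFG{$key} = $value;
}
close $fh;

1;
Handlers can then read %MyApp::Config::CFG; because the file is processed once at server start, every spawned child inherits the hash.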
Usually processing a configuration file doesn't take appreciable time, so a popular option is to lazy-load: if the code detects that the configuration is missing, it loads it before continuing. That's no use if the code needs to know the configuration earlier than that, but it avoids a lot of problems, especially when migrating code to a non-mod_perl environment.

MS Source Server: significance of srcsrv.ini variable

The MS source server technology uses an initialization file named srcsrv.ini. One of the values identifies the source server location(s), e.g.,
MYSERVER=\\machine\foobar
The docs leave much unanswered about this value. To start with, I haven't been able to find the significance of the value name, i.e., what's on the left side--and I don't see it used anywhere else. Hewardt & Pravat in Advanced Windows Debugging say "The left side ... represents the project name", but that doesn't seem to jibe with MS's "MYSERVER" example.
What is the significance of the left side? Where else is it used? Does the value reference a server or a project, and is there one per server, or one per project?
For anyone looking into this in the future, I received the following information from MS:
The name on the left side is the logical name of a version control server. The name is also used in the source-indexed symbol files (.pdb). For example, a symbol file may contain this string value:
MYSERVER=mymachine1.sys-mygroup.corp.microsoft.com:2003
and the source files are referenced like this in the .pdb:
*MYSERVER*/base/myfolder/mycode.c
When SrcSrv starts, it looks at Srcsrv.ini for values; these values override the information contained in the .pdb file:
"MYSERVER=mymachine.sys-mygroup.corp.microsoft.com:1666" overrides "MYSERVER=mymachine1.sys-mygroup.corp.microsoft.com:2003"
This enables users to configure a debugger to use an alternative source control server at debug time. The info is documented at http://msdn.microsoft.com/en-us/library/ms680641.aspx.
So, it is a logical name for a source server, and its value can be changed at debug time to reference a different server than the one originally used when the PDBs were created.
The way the debugger retrieves your source is that SrcSrv invokes a command-line utility. The utility program itself and the command line used vary depending on which type of repository hosts your code. One of the issues preventing retrieval is that this command-line program fails when it is invoked.
To find out why, use the command !sym noisy in WinDbg. It is mostly helpful in diagnosing symbol server issues, but for a source-indexed PDB it will also show the actual command line WinDbg used. Copy the command from the command log window and run it in CMD.EXE to get more details on the failure.
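For example (the prompt and module name are just placeholders), turn on verbose output before triggering the symbol and source load so the activity is written to the command log:
0:000> !sym noisy
0:000> .reload /f mymodule.dll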