Microsoft Access Form Update System

I'm working on creating a front end for multiple users of an MS Access application, and I have come up with a method to update their forms when changes need to be made. Basically, what my VBA code does is delete the old forms and import the new ones, if there are any (approximately 10 forms).
There is one issue with my process: every time a form is imported, the user is asked to accept a security warning. With so many forms this gets annoying, and it can be a lengthy process to sit and wait for each import and click Accept every time.
Is there a more logical way to do this? Does Access have a built-in function that will detect any changes to a form and update it from a separate database?
Private Function PullNewForms()
    DoCmd.TransferDatabase acImport, "Microsoft Access", _
        "LOCATION", _
        acForm, "frmLogin", "frmLogin"
    DoCmd.TransferDatabase acImport, "Microsoft Access", _
        "LOCATION", _
        acForm, "frmNewUser", "frmNewUser"
    DoCmd.TransferDatabase acImport, "Microsoft Access", _
        "LOCATION", _
        acForm, "frmOptionsMenu", "frmOptionsMenu"
    DoCmd.TransferDatabase acImport, "Microsoft Access", _
        "LOCATION", _
        acForm, "frmResetPassword", "frmResetPassword"
    DoCmd.TransferDatabase acImport, "Microsoft Access", _
        "LOCATION", _
        acForm, "frmVendorMainForm", "frmVendorMainForm"
End Function
Can this be consolidated to one line of code?
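(Not quite one line, but the five calls differ only in the form name, so they could at least be collapsed into a loop. A minimal sketch, assuming the same "LOCATION" placeholder source path applies to every form:)
Private Function PullNewForms()
    ' Sketch: loop over the form names instead of repeating the call.
    ' "LOCATION" is the same placeholder source path as above.
    Dim formNames As Variant
    Dim i As Integer
    formNames = Array("frmLogin", "frmNewUser", "frmOptionsMenu", _
                      "frmResetPassword", "frmVendorMainForm")
    For i = LBound(formNames) To UBound(formNames)
        DoCmd.TransferDatabase acImport, "Microsoft Access", _
            "LOCATION", acForm, formNames(i), formNames(i)
    Next i
End Function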

Reconsider this current setup. You should not be importing application objects such as forms, reports, or even modules on the fly like this, as corruption and crashing may occur. You need a more stable version control system among your user base. Only data should ever really be imported.
Consider the following when deploying an MS Access FrontEnd application to multiple users:
Give each user their own FrontEnd to run on their local machine.
Keep a Master FrontEnd on a shared network location that they can all access but never directly use, still maintaining the split architecture.
Give each user their own batch file (.bat) that they can double-click from their desktop (or wherever) to copy the latest Master FrontEnd over their older version.
Keep for yourself, the developer, a development FrontEnd copy; once it is tested, debugged, and ready for production, deploy it as the new Master FrontEnd. This is hard at first, but try not to make development changes in the Master or local copies, since you may overwrite your own changes.
Finally, with every new form/report/macro/module change, inform every user that a new FrontEnd is available and have them simply click their batch file to replace the previous version.
Batch file
(Save the text below in Notepad as a .bat file, not the default .txt; this makes it a double-clickable script with a gear icon. Give each user their own batch file, tailoring the local paths accordingly, and save it to their desktop or wherever their FrontEnd is located.)
@echo off
Copy "\\Server\Path\To\MasterFrontEnd.accdb" "C:\Users\JaneDoe\LocalFrontEnd.accdb" /Y
start "" cmd /c "echo UPDATE COMPLETE!&echo(&pause"


conda init powershell - no action taken - operation failed

In both cmd and PowerShell, when I run conda init powershell, it always fails as follows:
(ifcmapping) C:\Windows\system32>conda init powershell
no change C:\Users\haoli\anaconda3\Scripts\conda.exe
no change C:\Users\haoli\anaconda3\Scripts\conda-env.exe
no change C:\Users\haoli\anaconda3\Scripts\conda-script.py
no change C:\Users\haoli\anaconda3\Scripts\conda-env-script.py
no change C:\Users\haoli\anaconda3\condabin\conda.bat
no change C:\Users\haoli\anaconda3\Library\bin\conda.bat
no change C:\Users\haoli\anaconda3\condabin\_conda_activate.bat
no change C:\Users\haoli\anaconda3\condabin\rename_tmp.bat
no change C:\Users\haoli\anaconda3\condabin\conda_auto_activate.bat
no change C:\Users\haoli\anaconda3\condabin\conda_hook.bat
no change C:\Users\haoli\anaconda3\Scripts\activate.bat
no change C:\Users\haoli\anaconda3\condabin\activate.bat
no change C:\Users\haoli\anaconda3\condabin\deactivate.bat
no change C:\Users\haoli\anaconda3\Scripts\activate
no change C:\Users\haoli\anaconda3\Scripts\deactivate
no change C:\Users\haoli\anaconda3\etc\profile.d\conda.sh
no change C:\Users\haoli\anaconda3\etc\fish\conf.d\conda.fish
no change C:\Users\haoli\anaconda3\shell\condabin\Conda.psm1
no change C:\Users\haoli\anaconda3\shell\condabin\conda-hook.ps1
no change C:\Users\haoli\anaconda3\Lib\site-packages\xontrib\conda.xsh
no change C:\Users\haoli\anaconda3\etc\profile.d\conda.csh
needs sudo C:\Users\haoli\OneDrive\??\WindowsPowerShell\profile.ps1
No action taken.
Operation failed.
I already ran it as Admin. How can I solve this? Thanks!
In this line:
needs sudo C:\Users\haoli\OneDrive\??\WindowsPowerShell\profile.ps1
I assume the ?? are non-English characters; you need to change them to English.
Check OneDrive online: the folder name was in English on my local PC but not in English in the cloud. I changed the folder name to 'Documents' (where my PowerShell profile is) in the cloud and synced the change to local. Worked for me.
On Windows, when you let OneDrive back up your Documents folder, the folder points to the Documents folder in the cloud, and its actual name depends on the language of your system, which I assume is "文档" in your case.
I just disabled the Documents backup in the OneDrive settings and re-ran conda init, which gave me this result:
modified C:\Users\username\Documents\WindowsPowerShell\profile.ps1
If you'd prefer not to disable the OneDrive backup, you can try Zixin's answer; I am just not sure whether you should mess with the default settings of the backup folder and whether it will cause problems.
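As a quick check, PowerShell's built-in $PROFILE variable shows the per-user profile path, which should correspond to the profile.ps1 that conda init powershell is trying to edit:
# In PowerShell: print the profile path; if the Documents segment is a
# localized folder name, that is the file conda is failing to write to
$PROFILE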

Blast+ Local Configuration: How to configure nt and nr databases?

I am configuring BLAST+ on my Mac (OS Sierra) and am having trouble configuring the nr and nt databases that I downloaded locally. I am trying to follow NCBI's instructions here, and am getting hung up on the Configuration and Example Execution steps.
They say to change my .bash_profile so that it says:
export PATH=$PATH:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/ncbi-blast-2.6.0+/bin
That works fine, and they say to configure a path for BLASTDB "similarly", but pointing to where my DB files will be, so I have done this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/nt.00
which specifies the exact folder that I got when I unzipped the nt tar file from their FTP. With this path, if I run the command...
blastn -query test_query.fa -db nt.00 -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
then it runs successfully and I get results, but I am worried that these are only being checked against the nt.00 volume rather than the entire nt database, especially because if I run my test_query.fa sequence on the Web BLAST, I get different results.
Also, their instructions say that the path only needs to point to the folder that contains the database folder nt.00 from the tar I unzipped, not to nt.00 itself, which in my case would just be "blastdb/" (as opposed to "blastdb/nt.00/", which contains nt.00.nhd, nt.00.nal, etc.). That makes sense, because when I am working I want to be able to run blastn against the nt database but also blastp against nr, etc., just by changing the -db flag on my command, and there shouldn't be a problem with having them all in this folder, right? But if I must specify the BLASTDB path with the nt.00 DB appended to the end, how could I ever use nr.00 in the same folder (blastdb/)? Essentially, I want to do as the instructions say and just have this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/
And then depending on what database I want to use I could just say so after the -db flag on my command. But when I make the path like that above, it gives me this error:
BLAST Database error: No alias or index file found for nucleotide database [nt] in search path [/Users/LJStout::/Users/LJStout/Documents/Luke/Research/Pedulla 17-18/blast/blastdb:]
I have tried running that same blastn command from above with "nt" swapped out for "nt.00", and have tried these commands with the BLASTDB path ending in "blastdb/", "blastdb/nt", and of course "blastdb/nt.00", which is the only one that runs without errors.
Here's an example of another thread I read where the OP is worried about his executions not checking the entire nt database; that was a different problem from mine, however.
Thanks for your help!
This whole problem came down to having the nt.00 and nr.00 folders (the original folders that result from unzipping their respective .tar.gz files) in the same parent folder, when it should be their contents that share the same parent folder. I simply deleted the folders they came in and copied their contents over to my new, single parent. I was somewhat misled by the instructions; it was a simple mistake. Now I have one folder, blastdb/, that contains the contents of every database I plan on using, including nt, nr, and refseq.
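With that layout, the alias files (e.g. nt.nal, nr.nal) sit directly in blastdb/ and tie their volumes together, so the bare database names resolve. A minimal sketch of the resulting setup, reusing the paths from the question (test_protein.fa is an illustrative file name):
# in ~/.bash_profile: point BLASTDB at the single parent folder
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb
# the bare names now work; BLAST follows each .nal alias across its volumes
blastn -query test_query.fa -db nt -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
blastp -query test_protein.fa -db nr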

AHK code to locate the residing folder and highlight the active file

While working in different applications, one often needs to quickly locate the folder containing the currently open file and select that file there, e.g. to find related files in the same folder, rename the open file or its folder, or move the file to a different folder. The usual options require navigating through loads of folders to find the specific one where the file is buried among a bunch of similar files (like finding a needle in a haystack). The Microsoft Office suite has a built-in feature named "Document Location" which can be added to the Quick Access Toolbar, but it only shows the folder location or full path; AFAIK there is no single-click command or key to conveniently jump to that folder and highlight the open file so that further operations (e.g. rename, move) can be done on it. The same holds for other applications, where the native program can show the full path but offers no way to jump to the specific file/folder. Taking a Microsoft Office application (e.g. Word) as the test case, the process I imagine is as follows:
1 Get the full path (D:\Folder\Subfolder\Mywordfile.docx) of the currently open Word document
2 Close the file
3 Run an Explorer command to select and highlight the file, given the full path from step 1
4 Perform the desired operation on the file/folder manually, then double-click to return to the originating application (e.g. Word)
In my assessment, the above tasks could be implemented as follows:
Task 1: Microsoft Word has a built-in function called "Document Location" to get the full path of the open document, and it is currently possible to copy the file path to the clipboard.
Task 2: Close the file (Ctrl+W or Ctrl+F4).
Task 3: AHK code for an Explorer command to select the file for a given full path (available from Task 1).
I am facing difficulties with Task 3. I tried each of these, but so far no luck:
Clipboard := “fullpath” ; Full path (D:\Folder\Subfolder\Mywordfile.docx ) copied from Word
Run explorer /e, “Clipboard”
Run %COMSPEC% /c explorer.exe /select`, "%clipboard%"
So far the above Explorer commands only take me to my Documents folder, not to the specific folder location (the path from Task 1). I am curious to know what the right Explorer invocation is to select the file for a given full path in the clipboard. I'd appreciate a working AHK snippet or a better way to do this task. Thanks in advance.
I'm not clear on why your sample code doesn't work. I suspect it's because of the extra characters.
After running this command, Windows Explorer will open with the desired file selected (if it exists).
FullPathFilename := "e:\temp\test.csv"
Explorer := "explorer /select," . FullPathFilename
Run, %Explorer%
I don't know if you tried the other approach, but I think this is simpler and shorter:
1) Store the full path of the document in a string: oldfile = ActiveDocument.FullName
2) SaveAs the document with ActiveDocument.SaveAs
3) Delete the old file with Kill oldfile
All this is from VBA directly, no need to use Explorer shell. The same exists for the other applications.
Here is a fully working code for the Word Documents:
Sub RenameActiveDoc()
    Dim oldfile As String
    Dim myDoc As Document
    Set myDoc = ActiveDocument
    ' 1) Store the current file's full path
    oldfile = myDoc.FullName
    ' 2) Save the active document under a new name (prompt the user)
    myDoc.SaveAs FileName:=InputBox("Enter new name", "Rename current document", myDoc.Name)
    ' 3) Delete the old file
    On Error GoTo FileLocked
    Kill oldfile
    On Error GoTo 0
    Exit Sub
FileLocked:
    MsgBox "Could not delete " & oldfile, vbInformation + vbOKOnly, "File is locked"
End Sub
With the contribution of Ro Yo Mi, I was able to come up with the following solution. However, I assume there might be a better solution to this task.
;;; Customize: add Document Location (choose from All Commands) to the Quick Access Toolbar and note its position (#4 in my case)
#If WinActive("ahk_class OpusApp") || WinActive("ahk_class XLMAIN") || WinActive("PPTFrameClass")
#p:: ;Close Word/Excel/PowerPoint Document and Locate in Explorer Folder Location
clipboard = ;empty the clipboard
Send !4 ; Select full path while document location at #4 position in Quick Access toolbar
Send ^c ; copy the full path
ClipWait ; waits for the clipboard to have content
Send {esc}
Send, ^{f4} ; Close opened document only but keep Word/Excel/PPT program running
Explorer := "explorer /select," . clipboard
Run, %Explorer%
return

Where in the user account directory does Crystal Reports store temp files?

There's an error in Crystal Reports that says access to a report file is denied because "another program may be using it". This is commonly cited as being resolved by giving the proper permissions on the "C:\Windows\Temp" directory.
However, I've also encountered a permutation during local debugging in Visual Studio in which the error had to be resolved by setting permissions on a folder somewhere under the "C:\Users\[Username]" directory. I figured it out once or twice, but under circumstances in which I didn't take note of the directory name for later reference.
Can someone tell me where Crystal Reports stores its temporary files for individual user accounts?
Crystal Reports saves its temporary files in a directory determined by the operating system's environment variables.
Usually the default directory for Windows 7 is C:\Users\[Username]\AppData\Local\Temp but there are better ways to determine it dynamically.
Go to Computer → Properties → Advanced system settings → Advanced → Environment Variables and find the TEMP variable in the User variables for [Username], or:
Run cmd.exe, type echo %temp%, and hit Enter.
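Programmatically, the same per-user location can be resolved from .NET — a minimal sketch in the same language as the viewer example below:
' Returns the current user's temp directory (the same path echo %temp% prints)
Dim tempPath As String = System.IO.Path.GetTempPath()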
To test the path, we can generate temp files from the Crystal Reports Engine by just connecting a report file to a Crystal Reports Viewer and running the code. This process will generate temp files in the temp path.
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
Dim Report1 As New CrystalReport1
CrystalReportViewer1.ReportSource = Report1
End Sub
The temp files will look something like the following:
temp_0194c263-1a68-493f-94f1-9c3911cb0c7d {8D3CD485-167C-4DDB-AD91-A8586B36459A}.rpt
temp_0194c263-1a68-493f-94f1-9c3911cb0c7d.rpt
~cpe{F9155453-1E39-42B6-846D-07C8497B0373}.tmp
~DF0DC28410DCDF26A9.TMP

dpkg: How to use trigger?

I wrote a little CDN server that rebuilds its registry pool when new pool-content-packages are installed into that registry pool.
Instead of having each pool-content-package call the init.d of the cdn-server, I'd like to use triggers. That way it would restart the server only once at the end of an installation run, after all packages were installed.
What do I have to do to use triggers in my packages with debhelper support?
What you are looking for is dpkg-triggers.
One solution, using debhelper to build the Debian packages, is this:
Step 1)
Create file debian/<serverPackageName>.triggers (replace <serverPackageName> with name of your server package).
Step 1a)
Define a trigger that watches the directory of your pool. The content of the file would be:
interest /path/to/my/pool
Step 1b)
Alternatively, you can define a named trigger, which has to be fired explicitly (see step 3).
Content of the file:
interest cdn-pool-changed
The trigger name cdn-pool-changed is arbitrary; you can choose whatever you want.
Step 2)
Add a handler for the trigger to the file debian/<serverPackageName>.postinst (replace <serverPackageName> with the name of your server package).
Example:
#!/bin/sh
set -e

case "$1" in
    configure)
    ;;
    triggered)
        # here is the handler
        /etc/init.d/<serverPackageName> restart
    ;;
    abort-upgrade|abort-remove|abort-deconfigure)
    ;;
    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#
exit 0
Replace <serverPackageName> with name of your server package.
Step 3) (only for named triggers, step 1b) )
Add to every content package the file debian/<contentPackageName>.triggers (replace <contentPackageName> with the name of each content package).
Content of the file:
activate cdn-pool-changed
Use the same trigger name you defined in step 1b.
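As a hedged aside (not part of the original answer, but using the standard dpkg-trigger(1) tool): a content package could also fire the named trigger from its own postinst instead of shipping a triggers file:
# in a content package's postinst (sketch)
dpkg-trigger cdn-pool-changed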
More detailed Information
The best description of dpkg triggers I could find is "How to use dpkg triggers". You can get the corresponding git repository with examples here:
git clone git://anonscm.debian.org/users/seanius/dpkg-triggers-example.git
I had a need for triggers and read and re-read the docs many times. The process, or rather what goes where, is not clearly explained, so here I hope to clarify the use of Debian package triggers.
Service with Configuration Directory
A service reading its settings in a specific directory can mark that directory as being of interest.
Say I create a new service which reads settings from /usr/share/my-service/config/...
That service gets two additions:
In its debian directory I add my-service.triggers
And here are the contents:
# my-service.triggers
interest /usr/share/my-service/config
This means if any other package installs or removes a file from that directory, the trigger enters its "needs to be run" state.
In its debian directory I also add my-service.postinst
And I have a script as follows to check whether the trigger happened and run a process as required:
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "/usr/share/my-service/config" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
That's it.
Now packages adding extensions to your service can add their own configuration file(s) under /usr/share/my-service/config (or a directory under /etc/my-service/my-service.d/... or /var/lib/my-service/..., although that last one should be reserved for dynamic files, not files installed from a package) and dpkg automatically calls your postinst script with:
postinst triggered /usr/share/my-service/config
# where /usr/share/my-service/config is your <interest-path>
This call happens only once, and only after all the packages were installed, hence the advantage of having a trigger in the first place. This way each package does not need to know that it has to restart my-service, and the restart does not happen more than once, which could cause all sorts of side effects (e.g. the service tries to listen on a TCP port and gets "error: address already in use").
IMPORTANT: keep in mind that the postinst should include a line with #DEBHELPER#.
So you do not have to do anything special in other packages. Only make sure to install the configuration files in the correct directory and dpkg picks things up from there (i.e. in my example, under /usr/share/my-service/config); see the sketch below.
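For instance, an extension package built with debhelper could ship its file into the watched directory with a one-line debian/<extensionPackageName>.install entry — a minimal sketch (the package and file names are illustrative):
# other-package.install (sketch): dh_install places extension.conf
# into the directory the trigger is interested in
config/extension.conf usr/share/my-service/config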
I have an extension to BIND9 called ipmgr which makes use of .ini files saved in a specific folder. It uses the files to generate DNS zones (far fewer errors that way!), and it includes support for getting Let's Encrypt certificates and settings for DMARC/DKIM. This package uses exactly this case: a simple directory where configuration files get installed. Other packages do not need to do anything other than install files in the right place (/usr/share/ipmgr/zones, for this package).
Service without a Configuration Folder
In some (rare?) cases, you may need to trigger something in a service which is not driven by the installation of a new configuration file.
In this case, you can use an arbitrary name (it should include your package name to make sure it is unique since this name is global to the entire Debian/Ubuntu system).
To make this one work, you need three files, one of which is a triggers file in each of the other packages.
State the Interest
As above, we have an interest. In this case, the interest is stated as a name on its own. The dpkg system distinguishes between a name and a path because a name cannot include a slash (/) character. Names are limited to ASCII, excluding control characters and spaces. I would suggest you stick to a-z, 0-9, and dashes (-).
# my-service.triggers
interest my-service-settings
This is useful if you cannot simply track a folder. For example, the settings could come from a network connection that a package offers once installed.
Listen for the Triggers
Again, as above, you need a postinst script in your Service Package. This captures the trigger and allows you to run a command. The script is the same, only you test for the name instead of the folder (note that you can have any number of triggers, so you could also have both: a folder as above and a special name as here).
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "my-service-settings" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
The Trigger
As mentioned above, we need a third file. An arbitrary name is not going to be triggered automatically by dpkg; it has no way of knowing that your other package needs to fire a trigger just like that (although the whole mechanism is already fairly automated).
So in other packages, you create a trigger file which looks like this:
# other-package.triggers
activate my-service-settings
Note that the name is the same as the one stated in the interest above.
In other words, if the trigger needs to run for something other than just the installation of files in a given location, use a special name and add this triggers file with the activate keyword.
Other Features
I have not tested the other features of the dpkg-trigger(1) tool. There are other keywords supported in the triggers files:
interest
interest-await
interest-noawait
activate
activate-await
activate-noawait
The deb-triggers manual page has additional information about those. I am not too sure what await/noawait imply, other than that the trigger may happen at any time when noawait is used.
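For example (a hedged sketch based on my reading of deb-triggers(5)): the directory interest from the first example could be declared noawait, so that packages activating it are not held in a triggers-awaited state:
# my-service.triggers, noawait variant (sketch)
interest-noawait /usr/share/my-service/config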
Automatic Trigger Added
The build system on Ubuntu (probably Debian too) automatically adds a triggers file with the following when your package includes a library:
$ cat triggers
# Triggers added by dh_makeshlibs/11.1.6ubuntu2
activate-noawait ldconfig
I suggest you exercise caution if your package includes libraries. If you have your own triggers file, I do not know whether this addition will still happen automatically.
This also shows us a special case where noawait is wanted. If I understand correctly, the ldconfig trigger has to run ASAP so your commands work as expected right after the unpack; otherwise ldd would not know anything about your newly installed library.