I'm looking into a way to improve the autocomplete in GPS for Ada
(version: GPS 6.0.1 with GNAT Pro 6.4.2).
GPS autocomplete searches for matches which begin with the text you entered.
I would like to match my string anywhere in the text.
Currently the regex would be something like:
/myString.*/i
I would like it to be:
/.*myString.*/i
Is there an option I've missed to do this?
Does anyone know of a GPS plugin that does this?
I've also looked into writing this plugin myself. The documentation at http://docs.adacore.com/gps-docs/users_guide/_build/html/GPS.html#GPS.Completion references "completion.py", which I haven't been able to find; I'm guessing it may only have been included with later versions of GPS.
You could indeed write this yourself (recent developments of GPS do not include this feature, which I believe was never requested before).
The goal is to define an action, which you can then bind to a key shortcut. So for instance the plug-in would start with something like:
import GPS, gps_utils

@gps_utils.interactive(name='My Completion', filter='Source editor')
def my_completion():
    buffer = GPS.EditorBuffer.get()        # the current editor
    loc = buffer.current_view().cursor()   # the current location
    start = loc.forward_word(-1)           # beginning of word
    end = loc.forward_word(1)              # end of word
    text = buffer.get_chars(start, end)    # the text the user is currently typing

    # then search in current buffer (or elsewhere) for matching text
    match = buffer.beginning_of_buffer().search(text)
    if match:
        match_start, match_end = match
        match_text = buffer.get_chars(match_start, match_end)
        # then go back to the initial location, remove the typed text and replace it with the match
        buffer.delete(start, end)
        buffer.insert(start, match_text)
This is a rough outline; there are likely hundreds of details that I did not look at. It should get you started, though.
So I have created a huge screen that essentially just shows the robot status for every robot in this factory (individually)… At the very end of the project, they decided they want one object on the screen that blinks if any of the 300 robots fault. I am trying to think of a way to make this work. Maybe a global script of some kind? Problem is, I do not do much scripting in Cimplicity, so any help is appreciated.
All the points that are currently used on this screen (to indicate a fault) have very similar names… as in, the beginning is the same… so I was thinking of a script that could recognize whether a bit is high based on PART of its name. The end will change a little each time, but I am sure there is a way to look for only part of a string and ignore the rest. If the end has to be hard-coded, that's fine.
You can use a Python script in Cimplicity.
I will not go into detail on the use of Python in Cimplicity, which is well described in the Cimplicity documentation.
Here's an example of what can be done. Note that I don't have a way to test it, and it assumes that your robot points follow the naming format Robot_1, Robot_2, Robot_3 ... Robot_10 ... Robot_300. It also depends on the name and type of the fault variable; since you didn't define it, I assume it is an integer, with zero indicating no error. If you use something other than that, you can easily change it.
import cimplicity

# (...)

OneRobotWithFault = False

# Here you get the values and check for a fault
for i in range(1, 301):  # Robot_1 .. Robot_300
    pointName = f'MyFactory.Robot_{i}.FaultCode'
    robotFaultCode = cimplicity.point_get(pointName)
    if robotFaultCode > 0:
        OneRobotWithFault = True
        break

# Set the status to the variable "WeHaveRobotWithFault"
cimplicity.point_set("WeHaveRobotWithFault", OneRobotWithFault)
I am currently working on a plugin which will take voice commands and, upon receiving a response from the server, act accordingly. For example, if I say "delete word", the last word will be deleted. I want to be able to move the pointer left/right/up one line/down one line. Does the Word JavaScript API provide a way to achieve this?
There are no cursor movement APIs. But there are methods on the Paragraph object for getting the previous paragraph and the next paragraph. There are also ways to move among Ranges, if you can get a collection or an array of the ranges that you want to move around. And you can find the AdjacentAfter and AdjacentBefore ranges using the Range.compareLocationWith method. The Range object also has getNextRange and getNextRangeOrNullObject methods. Finally, Range.select("Start") and Range.select("End") will put the cursor just before/after the currently selected range.
I am using DeepSpeech for speech-to-text. Up to 0.8.1, when I ran transcriptions like:
byte_encoding = subprocess.check_output(
"deepspeech --model deepspeech-0.8.1-models.pbmm --scorer deepspeech-0.8.1-models.scorer --audio audio/2830-3980-0043.wav", shell=True)
transcription = byte_encoding.decode("utf-8").rstrip("\n")
I would get back results that were pretty good. But since 0.8.2, where the scorer argument was removed, my results are rife with misspellings that make me think I am now getting a character-level model where I used to get a word-level model. The errors are in a direction that suggests the model isn't correctly specified somehow.
Now when I call:
byte_encoding = subprocess.check_output(
['deepspeech', '--model', 'deepspeech-0.8.2-models.pbmm', '--audio', myfile])
transcription = byte_encoding.decode("utf-8").rstrip("\n")
I now see errors like
endless -> "endules"
service -> "servic"
legacy -> "legaci"
earning -> "erting"
before -> "befir"
I'm not 100% sure that it is related to removing the scorer from the API, but it is one thing I see changing between releases, and the documentation mentioned accuracy improvements in particular.
Short: The scorer matches letter output from the audio to actual words. You shouldn't leave it out.
Long: If you leave out the scorer argument, you won't be able to detect real-world sentences, because the scorer matches the output of the acoustic model to words and word combinations present in the textual language model that is part of the scorer. And bear in mind that each scorer ships with specific lm_alpha and lm_beta values that make the search even more accurate.
The 0.8.2 version should still accept the scorer argument; otherwise, update to 0.9.0, which has it as well. Maybe your environment has changed in some way; I would start fresh in a new directory and virtual environment.
Assuming you are using Python, you could add this to your code:
ds.enableExternalScorer(args.scorer)
ds.setScorerAlphaBeta(args.lm_alpha, args.lm_beta)
And check the example script.
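For completeness, here is a minimal sketch of the same transcription done through the deepspeech Python package directly rather than by shelling out to the CLI. The file names are simply the ones from your original command and stand in for whichever model and scorer version you actually have installed:

import wave
import numpy as np
from deepspeech import Model

# File names taken from the question; swap in your own model/scorer version.
ds = Model("deepspeech-0.8.1-models.pbmm")
ds.enableExternalScorer("deepspeech-0.8.1-models.scorer")
# Optionally tune the language-model weights released with the scorer:
# ds.setScorerAlphaBeta(lm_alpha, lm_beta)

with wave.open("audio/2830-3980-0043.wav", "rb") as wav:
    # The released English models expect 16-bit mono PCM at 16 kHz.
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(ds.stt(audio))

With the scorer enabled, the output should again be whole dictionary words instead of the phonetic spellings you are seeing.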
DM scripting beginner here, almost no programming skills.
I would like to know the commands to access all the metadata of DM images/spectra.
I realized that all my STEM images at 80 kV taken between two dates (let's say 02.11.2017-05.04.2019) have the scale calibration wrong by the same factor (the scale of all such images needs to be multiplied by 1.21).
I would like to write a script that multiplies the scale value by a factor, but only for images taken in scanning mode at 80 kV during that period, applied either to all images in a folder (including subfolders) or to all images open in DM, and then saves the new scale value.
I checked this website http://digitalmicrograph-scripting.tavernmaker.de/other%20resources/Old-DMHelp/AllFunctions.html but only found how to read the scale value (ImageGetDimensionCalibration). I have a general idea of how to write the script based on other scripts, if I can find out how to access the metadata.
If anyone can write the whole script for me I would greatly appreciate your effort.
All general meta-data is organized in the image tag-structure
You can see this if you open the Image Display Info of an image (via the menu, or by pressing CTRL + D) and then browse to the "Tags" section:
All the information on the right consists of image tags, and they are organized in a hierarchical tree.
What this tree looks like, and what information is written where, is completely open and will depend on which GMS version you are using, how the hardware is configured, etc. Custom scripts might also alter this information.
So for a scripting start, open the data you want to modify and have a look in this tree.
Hint: The following mini-script can be useful. It opens a tag-browsing window for the front-most image, but as a modeless dialog (i.e. you can keep it open while interacting with other parts of the application):
GetFrontImage().ImageGetTagGroup().TagGroupOpenBrowserWindow(0)
The information you need to check against is most probably found in the Microscope Info sub-tree. Here, usually all information gathered from the microscope during acquisition is stored. What is there, will depend on your system and how it is set up.
The information of the STEM image acquisition - as far as the scanning engine and detector is concerned - is most probably in the DigiScan sub-tree.
The Data Bar sub-tree usually contains date and time of creation etc.
Calibration values are not stored in the image tag-structure
What you will not find in this tag-structure is the image calibration, i.e. the values actually used by DM to display calibrated values. These values are stored "one level up", so to speak.
This is important to know for your script, because you will need different commands for the "meta-data" from the tags and for the "calibration" you want to change.
Accessing meta-data by script
The script commands you need to read from the tags are all described in the F1 help documentation.
Essentially, you need a command to get the "root" TagGroup of an image, which is ImageGetTagGroup() and then you traverse within this tree.
This might seem confusing - because there are a lot of slightly different commands for the different types of stored tags - but the essential bits are easy:
All "Paths" through the tree are just the individual names (typed exactly)
For each "branch" you have to use a single colon :
The commands to set/get a tag-value all require as input the "root" tagGroup object and the "path" as a string. The get commands require a variable of matching type to store the value in; the set commands need the value that should be written.
The get commands themselves return true or false depending on whether or not the tag-path could be found and the value read.
So the following script would read the "Imaging Mode" from the tags of the image shown as an example above:
string mode
GetFrontImage().ImageGetTagGroup().TagGroupGetTagAsString( "Microscope Info:Imaging Mode", mode )
OKDialog( "Mode: " + mode )
and in a little more verbose form:
string mode                              // variable to hold the value
image img                                // variable for the image
string path                              // variable to specify the tag path
TagGroup tg                              // variable to hold the "tagGroup" object

img := GetFrontImage()                   // use the selected image
tg = img.ImageGetTagGroup()              // from the image, get the tags (root)
path = "Microscope Info:Imaging Mode"    // specify the path

if ( tg.TagGroupGetTagAsString( path, mode ) )
    OKDialog( "Mode: " + mode )
else
    Throw( "Tag not found" )
If the tag is not a string but a number, you will need the corresponding commands, e.g. TagGroupGetTagAsNumber().
I have many spreadsheets with travel information on them amongst other things.
I need to extract start and end locations where the row describes travel, and one or two more things from the row, but what those extra fields are shouldn't be important.
There is no known list of all locations and no fixed pattern of text; all I can look for is location names.
The field I'm searching has 0-2 locations, and sometimes locations have aliases.
The Problem
If we have this:
00229 | 445 | RTF | Jan | trn_rtn_co | Chicago to Base1
00228 | 445 | RTF | Jan | train | Metroline to home coming from Base1
00228 | 445 | RTF | Jan | train_s | Standard train journey to Friends
I, for instance (though it will vary), will want this:
RTF|Jan|Chicago |Base1
RTF|Jan|Home |Base1
RTF|Jan|NULL |Friends
And then to go through, look up what Base1 and Friends mean for that person (whose unique ID is RTF) and replace them with sensible locations (assuming they only have one set of 'friends'):
RTF|Jan|Chicago |Rockford
RTF|Jan|Home |Rockford
RTF|Jan|NULL |Milwaukee
What I need
I need a way to pick out key words from the final column, such as: Metroline to home coming from Base1.
There are three types of words I'm looking for:
Home Locations: These are known and limited; I can get these from a list.
Home Aliases: These are known and limited; I can get these from a list.
Away Locations: These are unknown, but they are cities/towns/etc. in the UK. I don't know how to recognize these in the string; this is my main problem.
My Ideas
My go-to program was awk, but I don't know if I can reliably search to find where a proper noun (i.e. a location name) is used.
Is there a package, library or dictionary of standard locations?
Can I get a program to scour the spreadsheets and 'learn' the names of locations?
This seems like a problem that would have been solved already (i.e. find words in a string of text), but I'm not certain what I'm doing, and I'm only a novice programmer.
Any help on what I can do would be appreciated.
Edit:
Any answer such as "US_Locations_Cities is something you could check against", "Check for strings mentioned in a file in awk using ...", "There is a library for language X that will let a program learn to recognise location names, it's not RegEx, but it might work", or "There is a dictionary of location names here" would be fine.
Ultimately anything that helps me do what I want to do (i.e get the location names!) would be excellent.
Sorry to tell you, but I think this is not 100% programmable.
The best bet would be to define some standard searches:
Chicago to Base1
[WORD] to [WORD]:
where "to" is fixed and you look for exactly one word before and after. the word before then is your source and word after your target
Metroline to home coming from Base1
[WORD] to [WORD] coming from [WORD]:
where "to" and "coming from" is fixed and you look for three words in the appropriate slots.
etc
if you can match a source and target -> ok
if you cannot match something then throw an error for that line and let the user decide or even better implement an appropiate correction and let the program automatically reevaluate that line.
These are non-trivial goals. Consider:
Cities outside the US
Non-English text entries
Abbreviations
For automatic error correction, try to match the found [WORD]s against a list of US or other cities.
If a city is not found, throw an error; when reviewing that error, either add the missing city to your city list or translate the name to its publicly known (official) name. A minimal sketch of this approach follows below.
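To make this concrete, here is a minimal sketch of the pattern-based approach in Python; the alias table, the city list and the exact patterns are assumptions you would replace with your own data:

import re

# Known aliases for this person (RTF); replace with your real lookup table.
ALIASES = {"base1": "Rockford", "friends": "Milwaukee", "home": "Home"}
# Known away locations; replace with a proper city list / gazetteer.
KNOWN_CITIES = {"Chicago", "Milwaukee", "Rockford"}

PATTERNS = [
    # "... to <dst> coming from <src>"
    re.compile(r"to\s+(?P<dst>\w+)\s+coming\s+from\s+(?P<src>\w+)", re.I),
    # "<src> to <dst>"
    re.compile(r"(?P<src>\w+)\s+to\s+(?P<dst>\w+)", re.I),
]

def resolve(word):
    # Map a raw token to a preferred location name, or flag it for review.
    if word.lower() in ALIASES:
        return ALIASES[word.lower()]
    if word.capitalize() in KNOWN_CITIES:
        return word.capitalize()
    return f"UNKNOWN({word})"  # unmatched token: let a human decide

def parse(description):
    # Return (source, destination) extracted from the free-text column.
    for pattern in PATTERNS:
        m = pattern.search(description)
        if m:
            return resolve(m.group("src")), resolve(m.group("dst"))
    return "NULL", "NULL"

print(parse("Chicago to Base1"))                     # ('Chicago', 'Rockford')
print(parse("Metroline to home coming from Base1"))  # ('Rockford', 'Home')
print(parse("Standard train journey to Friends"))    # ('UNKNOWN(journey)', 'Milwaukee')

Anything that comes back as UNKNOWN(...) is exactly the set of lines you would hand back to the user for correction, as described above.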
The best I can suggest is that, as long as your locations are all US cities, you can use a database of zip codes such as this one.
I don't know how you expect any program to pick up things like Friends or Base1
I have to agree with hacktick that as it stands now, it is not programmable. It seems that the only solution is to invent a language or protocol.
I think an easy implementation follows:
In this language you have two keywords: to and from (you could possibly also allow at as a keyword synonym for from).
These keywords define the portion of the string that follows them as a "scan area" for recognizing names.
I'm only planning on implementing the simplest scan, but as indicated at the end of the post, this allows you to add your own fallbacks.
In the implementation you have a "Preferred Name" hash, where you define the names that you want displayed for things that appear there.
my %preferred_title
    = ( Base1   => 'Rockford'
      , Friends => 'Milwaukee'
      # ...
      );
You could split your sentences by chunks of text between the keywords, using the following rules:
1. The first chunk, if not a keyword, is taken as the value of 'from'.
2. On this or any subsequent chunk, if it is a keyword, then save the next chunk after it as that keyword's value.
3. Each value is "scanned" for a preferred phrase before being stored as the value.
use feature 'say';

# $note holds the free-text travel column, e.g. "Metroline to home coming from Base1"
my @chunks
    = grep {; defined and ( s/^\s+//, s/\s+$//, length ) }
      split /\b(from|to)\s+/i, $note
    ;

my %parts = ( to => '', from => '' );
my $key;

while ( @chunks ) {
    my $chunk = shift @chunks;
    if ( $key ) {
        $parts{ $key } = $preferred_title{ $chunk } // $chunk;
        $key = '';
    }
    elsif ( exists $parts{ lc $chunk } ) {
        $key = lc $chunk;
    }
    elsif ( !$parts{from} ) {
        $parts{from} = $preferred_title{ $chunk } // $chunk;
    }
}

say join( '|', $note, @parts{ qw<from to> } );
At the very least, collecting these values and printing them out gives you a sieve to decide on further courses of action. It will show you, for instance, that 'home coming' gets captured as a location value, as does 'Standard train journey'.
You could fix the 'home coming' case by amending the regex thusly:
/\b(?:(?:coming )?(from)|(to))\s+/i
And we could add the following key-value pair to our preferred_title hash:
home => 'Home'
We could simply define 'Standard train journey' => '', or we could create a list of rejection patterns, where we reject a string as a meaningful value if it fits one of those patterns.
But these steps allow you to dump out a list of values and refine your scan of the data. Another idea: you seem pretty consistent in your use of capitals for places (except for 'home'), so we could increase our odds of finding the right string by matching the chunk against
/\b(home|\p{Upper}.*)/
Note that this still considers 'Standard train journey' a proper location. So this would still need to be handled by rejection rules.
Here I reiterate that this can be a minimal approach to scanning the data, to the point where you can make sense of what this system takes to be locations and "80/20" it down: that is, hopefully those rules handle 80 percent of the cases, you can tune the algorithm to handle 80 percent of the remaining 20, and you iterate to the point where you simply have to fix a handful of entries at worst.
Then you have a specification that you would need to follow when creating travel notes from then on. You could even scan the notes as they are entered and raise an alert like
'No destination found in note!'.