I have the following Apex trigger, which should prevent a parent Case from closing if it has open child Cases. Please help me troubleshoot; the trigger is not firing.
trigger CaseTriggerCloseChild on Case ( before update ) {
    Set<Id> setCaseIds = new Set<Id>();
    Map<Id, Integer> mapOpenCaseCount = new Map<Id, Integer>();
    for ( Case objCase : Trigger.new ) {
        if ( objCase.Status == 'Closed' ) {
            setCaseIds.add( objCase.Id );
        }
    }
    for ( Case objCase : [ SELECT ParentId FROM Case WHERE ParentId IN :setCaseIds AND IsClosed = false ] ) {
        if ( mapOpenCaseCount.containsKey( objCase.ParentId ) ) {
            mapOpenCaseCount.put( objCase.ParentId, mapOpenCaseCount.get( objCase.ParentId ) + 1 );
        } else {
            mapOpenCaseCount.put( objCase.ParentId, 1 );
        }
    }
    for ( Case objCase : Trigger.new ) {
        if ( objCase.Status == 'Closed' ) {
            if ( mapOpenCaseCount.containsKey( objCase.Id ) ) {
                objCase.addError( 'You cannot close this Case. It has ' + mapOpenCaseCount.get( objCase.Id ) + ' open Child Cases.' );
            }
        }
    }
}
It looks like your map of open cases is keyed on the parent ID, while you are using the current case's ID to look up open cases. You'll need to restructure it somewhat.
Update #2: link to the TIFF file (the needed row is 135): https://drive.google.com/file/d/1g3e3xenm5b-awQwpvfZyhfXIqSVGGw2t/view
(Update #1)
Dear Stack Overflow users,
First of all, I should mention that I am an amateur at programming, but I admire what coding makes possible and try to exploit it in my work.
In our laboratory we acquire images using Zeiss microscopes, which save images in TIFF format with the acquisition details attached as text (visible if the file is viewed in Notepad). Thus, when you open such a TIFF image in Digital Micrograph it is not calibrated, and you have to calibrate each image manually.
On the other hand, these TIFF files contain a row (its number is fixed but differs from microscope to microscope) with the pixel size, looking like this:
"Pixel Size = 0.6 nm"
My thought was that if I could extract the number from this row and put it into a script declaring the dimensions for every open image, it would save a lot of time.
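The extraction step described here can be sketched outside DM-script, for instance in Python (illustrative only; the function name and the exact line format are assumptions based on the quoted row):

```python
import re

def pixel_size_from_text(tiff_text, unit="nm"):
    """Pull the numeric pixel size out of a line like 'Pixel Size = 0.6 nm'."""
    m = re.search(r"Pixel Size\s*=\s*([0-9.]+)\s*" + re.escape(unit), tiff_text)
    return float(m.group(1)) if m else None

print(pixel_size_from_text("...\nPixel Size = 0.6 nm\n..."))  # 0.6
```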
Here is what I have so far:
Image img
img.GetFrontImage()
While ( img.ImageIsValid() )
{
    Number pixelsize
    pixelsize = 0.5
    img.ImageSetDimensionScale( 0, pixelsize )
    img.ImageSetDimensionScale( 1, pixelsize )
    img.ImageSetDimensionUnitString( 0, "nm" )
    img.ImageSetDimensionUnitString( 1, "nm" )
    ImageDisplay imgdisp = img.ImageGetImageDisplay( 0 )
    imgdisp.ApplyDataBar( 2 )
    img.SetName( img.GetName() + "*" )
    img := FindNextImage( img )
}
As I understand it:
I used a script from the "DM script book" for applying an action to all opened images;
I create a number variable called pixelsize;
I set it manually from the file;
I put it into the dimension scale in x and y;
I set the dimension units to "nm";
I also create a scale bar.
So could you please tell me:
Is it possible to set pixelsize equal to a number in a row in a TIFF file?
Sincerely,
Shadowbane
Yes, it is doable, but not trivial. I've found an old script of mine in which I was doing exactly that. I'm no longer in a position to test it, so I'm not 100% certain it will work.
Also, it is a bit too long for posting here, so I've also put it under this pastebin link.
// Some constants (See TIFF Format)
number kLittleEndianTIFF = 0x4949
number kBigEndianTIFF = 0x4D4D
number kMagicIDNumberTIFF = 42
number FALSE = 0
number TRUE = 1
Interface I_ImportTiffWithTags
{
// Aux. method to simplify single value read from stream
number ReadValueOfType( object self, string type );
// Aux. method to strip multiple pre- and post- string occurances from a string
string ClipText( object self, string input, string leadIn, string leadOut, number doMultiple );
string ClipTextWhiteSpaces( object self, string input );
// Aux. method to convert ASCII text of specific format into a TagStructure
TagGroup CreateTagsFromString( object self, string input, string entrySep, string keyToValueSep, string GroupLeadIn, string GroupLeadOut );
TagGroup CreateTagsFromString_FEI_Default( object self, string input );
TagGroup CreateTagsFromString_ZEISS_Default( object self, string input );
// Get all TIFF tags from file in path as tagGroup
TagGroup GetTIFFTagsAsTG( object self, string path, number doTextFields, number doSingleValues, number includedSkipped );
TagGroup GetTIFFTagsAsTG( object self, string path, number includedSkipped ); // doTextField = true, doSingleValues = true
TagGroup GetTIFFTagsAsTG( object self, string path ); // doTextField = true, doSingleValues = true, includedSkipped = false
// Get all ASCII TIFF tags as single string
String GetTIFFAscii( object self, string path, number splitSeparateTags );
String GetTIFFAscii( object self, string path); // splitSeparateTags = true
// Import Image and append TIFF tags as TagGroup "TIFF Tags" to image
image OpenTIFFWithTags( object self, string path );
// Import TIFF Image stored by FEI SEMs, import info from ASCII in image, calibrate if possible
image OpenFEI_TIFF( object self, string path, number withTiffTags );
image OpenFEI_TIFF( object self, string path );
// Import TIFF Image stored by ZEISS SEMs, import info from ASCII in image, calibrate if possible
image OpenZEISS_TIFF( object self, string path, number withTiffTags );
image OpenZEISS_TIFF( object self, string path);
// Aux. to find calibration from the tags, format specific
number GetCalibrationFromTags_FEI( object self, tagGroup FEItgs, number &sx, number &ox, string &ux, number &sy, number &oy, string &uy );
number GetCalibrationFromTags_ZEISS( object self, tagGroup ZEISStags, number &sx, number &ox, string &ux, number &sy, number &oy, string &uy );
}
Class CImportTIFFWithTags
{
object fStream
number byteOrder
number verbose
number kMaxTextShow
CImportTIFFWithTags( object self )
{
verbose = FALSE
kMaxTextShow = 20
}
number ReadValueOfType( object self, string type )
{
if ( !fStream.ScriptObjectIsValid() ) Throw( "Invalid file stream." )
number val = 0
TagGroup tg = NewTagGroup()
if ( type == "bool" )
{
tg.TagGroupSetTagAsBoolean( type, 0 )
tg.TagGroupReadTagDataFromStream( type, fstream, byteOrder )
tg.TagGroupGetTagAsBoolean( type, val )
}
else if ( type == "uint8" )
{
string str = fStream.StreamReadAsText(0,1)
val = asc(str)
}
else if ( type == "uint16" )
{
tg.TagGroupSetTagAsUInt16( type, 0 )
tg.TagGroupReadTagDataFromStream( type, fstream, byteOrder )
tg.TagGroupGetTagAsUInt16( type, val )
}
else if ( type == "uint32" )
{
tg.TagGroupSetTagAsUInt32( type, 0 )
tg.TagGroupReadTagDataFromStream( type, fstream, byteOrder )
tg.TagGroupGetTagAsUInt32( type, val )
}
else if ( type == "float" )
{
tg.TagGroupSetTagAsFloat( type, 0 )
tg.TagGroupReadTagDataFromStream( type, fstream, byteOrder )
tg.TagGroupGetTagAsFloat( type, val )
}
else if ( type == "double" )
{
tg.TagGroupSetTagAsDouble( type, 0 )
tg.TagGroupReadTagDataFromStream( type, fstream, byteOrder )
tg.TagGroupGetTagAsDouble( type, val )
}
else Throw("Invalid read-type:"+type)
return val
}
string ClipText( object self, string input, string leadIn, string leadOut, number doMultiple )
{
string work = input
if ( len(leadIn) <= len(work) )
{
while ( leadIn == left(work,len(leadIn)) )
{
work = right( work, len(work) - len(leadIn) )
if ( !doMultiple ) break
if ( len(work) < len(leadin) ) break
}
}
if ( len(leadOut) <= len(work) )
{
while ( leadOut == right(work,len(leadout)) )
{
work = left( work, len(work) - len(leadOut) )
if ( !doMultiple ) break
if ( len(work) < len(leadOut) ) break
}
}
return work
}
string ClipTextWhiteSpaces( object self, string input )
{
return self.ClipText( input, " ", " ", TRUE );
}
TagGroup CreateTagsFromString( object self, string input, string entrySep, string keyToValueSep, string GroupLeadIn, string GroupLeadOut )
{
TagGroup tg = NewTagGroup()
number bCheckGroup = ( len(GroupLeadIn) && len(GroupLeadOut) ) // true if groupLeads are specified
string work = input
string groupName = ""
if ( "" == entrySep )
entrySep = "\n"
number pos = find( work, entrySep )
number skipped = 0
while( -1 != pos )
{
//OKDialog("work:"+work)
//OKDialog("Entry:"+len(entrySep) )
string entry = left( work, pos )
work = right( work, len(work) - len(entry) - len(entrySep) + 1)
number sep = find( entry, keyToValueSep )
if( -1 < sep )
{
// Entry matches the format "KEY=VALUE", with '=' being the defined keyToValueSep
string key = left( entry, sep )
string value= right( entry, len(entry) - sep - len(keyToValueSep) )
// Truncate trailing white spaces
key = self.ClipTextWhiteSpaces(key)
value = self.ClipTextWhiteSpaces(value)
string tagPath = groupName + ( "" == groupName ? "" : ":" ) + key
tg.TagGroupSetTagAsString( tagPath, value )
}
else if ( bCheckGroup )
{
// Entry does not match format "KEY=VALUE", check if it is a groupname
number leadIn = find( entry, GroupLeadIn )
number leadOut = find( entry, GroupLeadOut )
if ( ( -1 < leadIn ) && ( -1 < leadOut ) && ( leadIn < leadOut ) ) // Is it a new group? "[GROUPNAME]"
{
groupName = mid( entry, leadIn + len(GroupLeadIn), leadOut - leadIn - len(GroupLeadOut) )
}
else
skipped++
}
else
{
skipped++
}
// Find next entry
pos = find( work, entrySep )
}
if ( 0 != len(work) )
skipped++
if ( verbose )
{
if ( skipped )
Result( "\nText To Tag skipped " + skipped + " lines.\n" )
}
return tg
}
TagGroup CreateTagsFromString_FEI_Default( object self, string input )
{
return self.CreateTagsFromString( input, "\n" , "=", "[", "]" )
}
TagGroup CreateTagsFromString_ZEISS_Default( object self, string input )
{
return self.CreateTagsFromString( input, "\n" , "=", "", "" )
}
TagGroup GetTIFFTagsAsTG( object self, string path, number doTextFields, number doSingleValues, number includedSkipped )
{
if ( !DoesFileExist( path ) ) Throw( "File not found.\n"+path)
// Open Stream
number fileID = OpenFileForReading( path )
fStream = NewStreamFromFileReference(fileID,1)
// Find Byte Order
number val
byteOrder = 0
val = self.ReadValueOfType( "uint16" )
byteOrder = ( kLittleEndianTIFF == val ) ? 2 : ( kBigEndianTIFF == val ? 1 : 0 )
// Verify TIFF image
val = self.ReadValueOfType( "uint16" )
if ( val != kMagicIDNumberTIFF ) Throw( "Not a valid TIFF image" )
// Find first directory start
number offset = self.ReadValueOfType( "uint32" )
// Browse all directories
number nIFD = 0
TagGroup AllTiffTagsTGlist = NewTagList()
TagGroup TiffTGs
while( 0 != offset )
{
// Set to start of directory and read it in
nIFD++
TiffTGs = NewTagGroup()
if ( verbose ) Result( "Reading IFD #" + nIFD + " at \t" + offset )
fStream.StreamSetPos( 0, offset )
number nEntries = self.ReadValueOfType( "uint16" )
for ( number e=0; e<nEntries; e++ )
{
number tag = self.ReadValueOfType( "uint16" )
string tagStr = "ID("+tag+")"
number typ = self.ReadValueOfType( "uint16" )
number count = self.ReadValueOfType( "uint32" )
number dataOffset = self.ReadValueOfType( "uint32" )
if ( verbose )
Result( "\n DirEntry # "+Format(e,"%3.0f") + ": TiffIDTag: " + tag + "\ttyp: " + typ + "\tcount: " + count )
// Read only, if it is either a single value or a string
if ( ( doTextFields ) && ( 2 == typ ) )// ASCII
{
number currentPos = fStream.StreamGetPos()
fStream.StreamSetPos( 0, dataOffset )
string textField = fStream.StreamReadAsText( 0, count )
fStream.StreamSetPos( 0, currentPos )
if ( verbose ) Result( "\t value: " + left( textField, min(len(textField), kMaxTextShow ) ) )
TiffTGs.TagGroupSetTagAsString( tagStr, textField )
}
else if ( ( doSingleValues ) && ( 1 == count ) ) // Single Value.
{
// Note that for data fitting in 4 bytes, the 'dataOffset' 4-byte field already holds the value itself!
// Otherwise, it specifies the starting offset of the value
if ( 1 == typ ) // uInt8
{
TiffTGs.TagGroupSetTagAsShort( tagStr, dataOffset )
if ( verbose ) Result("\t value:" + dataOffset )
}
else if ( 3 == typ ) // uInt16
{
TiffTGs.TagGroupSetTagAsUInt16( tagStr, dataOffset )
if ( verbose ) Result("\t value:" + dataOffset )
}
else if ( 4 == typ ) // uInt32
{
TiffTGs.TagGroupSetTagAsUInt32( tagStr, dataOffset )
if ( verbose ) Result("\t value:" + dataOffset )
}
else if ( 5 == typ ) // uInt32 / uInt32 (rational)
{
number currentPos = fStream.StreamGetPos()
number val1 = self.ReadValueOfType( "uint32" )
number val2 = self.ReadValueOfType( "uint32" )
TiffTGs.TagGroupSetTagAsLongPoint( tagStr, val1, val2 )
fStream.StreamSetPos( 0, currentPos )
if ( verbose ) Result("\t value:" + val1 + "/" + val2 )
}
else if ( 11 == typ ) // float
{
TiffTGs.TagGroupSetTagAsFloat( tagStr, dataOffset )
if ( verbose ) Result("\t value:" + dataOffset )
}
else if ( 12 == typ ) // double
{
number currentPos = fStream.StreamGetPos()
number val = self.ReadValueOfType( "double" )
TiffTGs.TagGroupSetTagAsDouble( tagStr, val )
fStream.StreamSetPos( 0, currentPos )
if ( verbose ) Result("\t value:" + val )
}
else if ( includedSkipped )
{
if ( verbose ) Result("\t value: SKIPPED DATA TYPE" )
TiffTGs.TagGroupSetTagAsString( tagStr, "Not imported, TagType (TIFF "+typ+")" )
}
}
else if ( includedSkipped )
{
// Multiple value entries
if ( verbose ) Result("\t value: Multiple values. Arrays are not read." )
TiffTGs.TagGroupSetTagAsString( tagStr, "Not imported, TagType (TIFF "+typ+") "+count+" values" )
}
}
// Read next directory offset.
// This is 0000 for the last directory according to spec
offset = self.ReadValueOfType( "uint32" )
AllTiffTagsTGlist.TagGroupAddTagGroupAtEnd(TiffTGs)
}
// Return the list of entries, or just a single group if there is only one directory
if ( 1 < nIFD )
return AllTiffTagsTGlist
else
return TiffTGs
}
TagGroup GetTIFFTagsAsTG( object self, string path, number includedSkipped )
{
return self.GetTIFFTagsAsTG( path, TRUE, TRUE, includedSkipped )
}
TagGroup GetTIFFTagsAsTG( object self, string path )
{
return self.GetTIFFTagsAsTG( path, TRUE, TRUE, FALSE )
}
String GetTIFFAscii( object self, string path, number splitSeparateTags )
{
String allText
TagGroup tgs = self.GetTIFFTagsAsTG(path, TRUE, FALSE, FALSE )
number nTags = tgs.TagGroupCountTags()
for ( number n=0; n<nTags; n++ )
{
string label = tgs.TagGroupGetTagLabel(n)
if ( splitSeparateTags )
{
if ( 1<nTags )
allText += (n?"\n":"")+"TIFF Tag " + label + ":\n"
}
string text
if ( tgs.TagGroupGetTagAsString( label, text ) )
allText += text + "\n"
}
return allText
}
String GetTIFFAscii( object self, string path)
{
return self.GetTIFFAscii( path, true )
}
image OpenTIFFWithTags( object self, string path )
{
image Imported := OpenImage( path )
TagGroup TIFFTags = self.GetTIFFTagsAsTG(path, TRUE, TRUE, TRUE)
Imported.ImageGetTagGroup().TagGroupSetTagAsTagGroup( "TIFF Tags", TIFFTags )
return Imported
}
image OpenFEI_TIFF( object self, string path, number withTiffTags )
{
image Imported := OpenImage( path )
if ( withTiffTags )
{
TagGroup TIFFTags = self.GetTIFFTagsAsTG(path, TRUE, TRUE, FALSE )
Imported.ImageGetTagGroup().TagGroupSetTagAsTagGroup( "TIFF Tags", TIFFTags )
}
string ASCIITags = self.GetTIFFAscii(path)
TagGroup FEITags = self.CreateTagsFromString_FEI_Default( ASCIITags )
Imported.ImageGetTagGroup().TagGroupSetTagAsTagGroup( "Info", FEITags )
number sx, sy, ox, oy
string ux, uy
if ( self.GetCalibrationFromTags_FEI(FEITags, sx, ox, ux, sy, oy, uy ) )
{
Imported.ImageSetDimensionCalibration(0,ox,sx,ux,0)
Imported.ImageSetDimensionCalibration(1,oy,sy,uy,0)
}
else
{
if ( verbose )
Result( "\n FEI calibration not found. Image remains uncalibrated." )
}
return Imported
}
number GetCalibrationFromTags_FEI( object self, tagGroup FEItags, number &sx, number &ox, string &ux, number &sy, number &oy, string &uy )
{
if ( !FEItags.TagGroupIsValid() ) return FALSE
number foundSomething = FALSE
ox=0
oy=0
sx=1
sy=1
ux=""
uy=""
number value
if ( FEITags.TagGroupGetTagAsNumber( "Scan:PixelWidth", value ) )
{
foundSomething = TRUE
// [value] is in [m], scale to nm or um
if ( value < 1e-6 )
{
if ( value < 1e-9 )
{
ux = "nm"
sx = value * 1e9
}
else
{
ux = "µm"
sx = value * 1e6
}
}
}
if ( FEITags.TagGroupGetTagAsNumber( "Scan:PixelHeight", value ) )
{
foundSomething = TRUE
// [value] is in [m], scale to nm or um
if ( value < 1e-6 )
{
if ( value < 1e-9 )
{
uy = "nm"
sy = value * 1e9
}
else
{
uy = "µm"
sy = value * 1e6
}
}
}
return foundSomething
}
image OpenFEI_TIFF( object self, string path )
{
return self.OpenFEI_TIFF( path, FALSE )
}
image OpenZEISS_TIFF( object self, string path, number withTiffTags )
{
image Imported := OpenImage( path )
if ( withTiffTags )
{
TagGroup TIFFTags = self.GetTIFFTagsAsTG(path, TRUE, TRUE, FALSE )
Imported.ImageGetTagGroup().TagGroupSetTagAsTagGroup( "TIFF Tags", TIFFTags )
}
string ASCIITags = self.GetTIFFAscii(path)
TagGroup ZEISSTags = self.CreateTagsFromString_ZEISS_Default( ASCIITags )
Imported.ImageGetTagGroup().TagGroupSetTagAsTagGroup( "Info", ZEISSTags )
number sx, sy, ox, oy
string ux, uy
if ( self.GetCalibrationFromTags_ZEISS(ZEISSTags, sx, ox, ux, sy, oy, uy ) )
{
// "scale" gives full FOV!
if ( sx!= 1)
Imported.ImageSetDimensionCalibration(0,ox,sx/Imported.ImageGetDimensionSize(0),ux,0)
if ( sy!= 1)
Imported.ImageSetDimensionCalibration(1,oy,sy/Imported.ImageGetDimensionSize(1),uy,0)
}
else
{
if ( verbose )
Result( "\n ZEISS calibration not found. Image remains uncalibrated." )
}
return Imported
}
image OpenZEISS_TIFF( object self, string path )
{
return self.OpenZEISS_TIFF( path, FALSE )
}
number GetCalibrationFromTags_ZEISS( object self, tagGroup ZEISStags, number &sx, number &ox, string &ux, number &sy, number &oy, string &uy )
{
if ( !ZEISStags.TagGroupIsValid() ) return FALSE
number foundSomething = FALSE
ox=0
oy=0
sx=1
sy=1
ux=""
uy=""
string value
if ( ZEISStags.TagGroupGetTagAsString( "Width", value ) )
{
number pos = find( value, " " ) // separate unit!
if ( -1 < pos )
{
sx = val( left(value,pos) ) // FOV width not scale!!
ux = right( value, len(value)-pos-1 )
foundSomething = TRUE
}
}
if ( ZEISStags.TagGroupGetTagAsString( "Height", value ) )
{
number pos = find( value, " " ) // separate unit!
if ( -1 < pos )
{
sy = val( left(value,pos) ) // FOV height, not scale!!
uy = right( value, len(value)-pos-1 )
foundSomething = TRUE
}
}
return foundSomething
}
}
///////////////////
// CALL EXAMPLES //
///////////////////
number option = 1
string msg
msg += "IMPORT TIFF - EXAMPLE\n"
msg += "Select what you want to do:\n"
msg += "(1)\t Import and calibrate FEI TIFF.\n"
msg += "(2)\t Import and calibrate ZEISS TIFF.\n"
msg += "(3)\t Show text content of TIFF.\n"
msg += "(4)\t Show TIFF tags in browser.\n"
while ( 4<option || 1>option )
{
if ( !GetNumber(msg,option,option)) exit(0)
}
string path = GetApplicationDirectory("open_save",0)
if (!OpenDialog(NULL,"Select TIFF file",path, path)) exit(0)
if ( 1 == option )
Alloc(CImportTIFFWithTags).OpenFEI_TIFF(path).ShowImage()
else if ( 2 == option )
Alloc(CImportTIFFWithTags).OpenZEISS_TIFF(path).ShowImage()
else if ( 3 == option )
Result( "\n\n\n\n_____________________________\n" +Alloc(CImportTIFFWithTags).GetTIFFAscii(path) )
else if ( 4 == option )
Alloc(CImportTIFFWithTags).GetTIFFTagsAsTG(path,true,true,true).TagGroupOpenBrowserWindow(path,0)
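For readers who want to sanity-check the TIFF header logic used above outside of DigitalMicrograph, the same IFD walk can be sketched in a few lines of Python (a rough sketch of the format only; names are my own, and real files carry many more entries):

```python
import io
import struct

def read_tiff_ifds(stream):
    """Walk a TIFF stream's IFD chain; return raw (tag, type, count, value/offset) tuples per IFD."""
    header = stream.read(8)
    # Byte-order mark: b"II" = little endian (0x4949), b"MM" = big endian (0x4D4D)
    byte_order = {b"II": "<", b"MM": ">"}[header[:2]]
    magic, first_ifd_offset = struct.unpack(byte_order + "HI", header[2:8])
    assert magic == 42, "not a TIFF file"
    ifds, offset = [], first_ifd_offset
    while offset:  # a zero next-IFD offset marks the last directory, per the spec
        stream.seek(offset)
        (n_entries,) = struct.unpack(byte_order + "H", stream.read(2))
        ifds.append([struct.unpack(byte_order + "HHII", stream.read(12))
                     for _ in range(n_entries)])
        (offset,) = struct.unpack(byte_order + "I", stream.read(4))
    return ifds

# Tiny hand-built single-IFD TIFF: one entry, tag 256 (ImageWidth), type 3 (uint16), count 1, value 640
tiff = (b"II" + struct.pack("<HI", 42, 8)
        + struct.pack("<H", 1)
        + struct.pack("<HHII", 256, 3, 1, 640)
        + struct.pack("<I", 0))
print(read_tiff_ifds(io.BytesIO(tiff)))  # [[(256, 3, 1, 640)]]
```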
So what I want to be able to do is take a field value (a date field), add a set period of time to it, and then turn that into a merge tag which I can add back into that value or use elsewhere.
I know how to make a new merge tag; that's not the issue. My question is: how do I get a field value to use in that calculation?
add_filter( 'gform_replace_merge_tags', 'new_date_plus_30', 10, 7 );
function new_date_plus_30( $text, $form, $entry, $url_encode, $esc_html, $nl2br, $format ) {
$merge_tag = '{date_plus_30}';
$new_date = date('m/d/Y', strtotime('+30 days'));
return str_replace( $merge_tag, $new_date, $text );
}
So where I do the new date calculation, I need to be able to pull in a field from that post and use it.
I was also thinking of an if/else approach, where I would do the date calculation based on what was set in the form. So if a user said to repeat this every 15 days, I would have something like:
add_filter( 'gform_replace_merge_tags', 'new_date_plus_30', 10, 7 );
function new_date_plus_30( $text, $form, $entry, $url_encode, $esc_html, $nl2br, $format ) {
    if ( $form_id == 34 && $field_id == 2 && $value == 'add 30 days' ) {
        $merge_tag = '{date_plus_30}';
        $new_date  = date( 'm/d/Y', strtotime( '+30 days' ) );
    }
    else if ( $form_id == 34 && $field_id == 2 && $value == 'first of month' ) {
        $merge_tag = '{first_of_month}';
        $new_date  = date( 'm/d/Y', strtotime( 'first of next month' ) );
    }
    return str_replace( $merge_tag, $new_date, $text );
}
But my issue is still the same. Also, how can I use two filters at the same time? I assume I need to use gform_get_input_value. Please review my code and give feedback; is there another way to do this?
Or maybe something like this...
add_filter( 'gform_merge_tag_filter', function ( $value, $merge_tag, $options, $field, $raw_value ) {
    if ( $field->id == '2' && $value == 'first of the month' ) {
        $merge_tag = '{the_next_date}';
        $thedatetochange = 'Not sure how to get the date value here...';
        $value = date( 'm/d/Y', strtotime( $thedatetochange . 'first of the next month' ) );
        return $value;
    }
    else if ( $field->id == '2' && $value == 'the 15th' ) {
        $merge_tag = '{the_next_date}';
        $thedatetochange = 'Not sure how to get the date value here...';
        $the_first_date = date( 'm/d/Y', strtotime( $thedatetochange . 'first of the next month' ) );
        $value = date( 'm/d/Y', strtotime( $the_first_date . '+15 days' ) );
        return $value;
    }
}, 10, 5 );
So after doing more digging, would I be able to use something like this to get the value of the field?
$theDateToChange = rgar( $entry, '3' );
This assumes that field 3 is a date value. Would this work for retrieving the date from the current entry?
The $entry is passed through the gform_replace_merge_tags filter. You can fetch any field value from the $entry by its field ID. For example, if your field ID was 1:
$value = $entry[1];
Alternately, if you're open to capturing this modified date as a secondary Date field in your form, we have a snippet that can handle the functionality for you.
https://gravitywiz.com/populate-dates-gravity-form-fields/
new GW_Populate_Date( array(
'form_id' => 1,
'target_field_id' => 2,
'modifier' => '+30 days'
) );
So here is my current working code...
add_action( 'gform_admin_pre_render', 'add_merge_tags' );
function add_merge_tags( $form ) {
    ?>
    <script type="text/javascript">
        gform.addFilter('gform_merge_tags', 'add_merge_tags');
        function add_merge_tags(mergeTags, elementId, hideAllFields, excludeFieldTypes, isPrepop, option){
            mergeTags["custom"].tags.push({ tag: '{the_next_date}', label: 'The Next Date' });
            return mergeTags;
        }
    </script>
    <?php
    // return the form object from the PHP hook
    return $form;
}
add_action('wp', 'add_merge_tags');
/** MY MERGE TAGS HERE */
add_filter( 'gform_replace_merge_tags', 'new_date', 10, 7 );
function new_date( $value, $merge_tag, $options, $field, $raw_value, $entry, $text, $form, $url_encode, $esc_html, $nl2br, $format ) {
    $pmoptions = $entry[7];
    if ( $pmoptions == 'Monthly' ) {
        $merge_tag = '{the_next_date}';
        $old_date  = $entry[2];
        $new_date  = date( 'm/d/Y', strtotime( $old_date . '+1 month' ) );
        return str_replace( $merge_tag, $new_date, $text );
    }
    else if ( $pmoptions == 'Quarterly' ) {
        $merge_tag = '{the_next_date}';
        $old_date  = $entry[2];
        $new_date  = date( 'm/d/Y', strtotime( $old_date . '+3 month' ) );
        return str_replace( $merge_tag, $new_date, $text );
    }
}
apply_filters( 'gform_replace_merge_tags', $value, $merge_tag, $options, $field, $raw_value, $entry, $text, $form, $url_encode, $esc_html, $nl2br, $format );
I currently have a controller that pulls objects from the Pimcore Objects exactly how the sample data demonstrated.
What we need to accomplish with this build is to allow a "Featured" category to be assigned to any NewsArticle or EventsArticle object. We need to pull a combined list of NewsArticle and EventsArticle objects that have the Featured category assigned to them, keep track of all the IDs returned in this list, and exclude them from the single-track lists so that they aren't displayed twice on the same page.
These are our two single track lists which work as expected, limited by custom properties that live on the document.
Requirements:
Filter by featured category.
Prevent any post from being listed twice.
Able to sort by date asc or desc.
// TODO: List Featured News and Events Objects...
// $this->view->featured = $featuredList->getObjects();
// List News Objects...
$newsList = new Object\NewsArticle\Listing();
$newsList->setOrderKey( "date" );
$newsList->setOrder( "DESC" );
$newsList->setLimit( $this->document->getProperty( 'newsLimit' ) );
// TODO: Exclude any IDs in $this->view->featured
$this->view->news = $newsList->getObjects();
// List Events Objects...
$eventsList = new Object\EventsArticle\Listing();
$eventsList->setOrderKey( "date" );
$eventsList->setOrder( "DESC" );
$eventsList->setLimit( $this->document->getProperty( 'eventsLimit' ) );
// TODO: Exclude any IDs in $this->view->featured
$this->view->events = $eventsList->getObjects();
This approach will give you the featured list:
$featuredListObj = \Pimcore\Model\Object::getByPath("featured-list");
$featuredListObjId = $featuredListObj->getId();
$eventClassId = \Pimcore\Model\Object\ClassDefinition::getByName("event")->getId();
$newsClassId = \Pimcore\Model\Object\ClassDefinition::getByName("news")->getId();
$combinedListing = new \Pimcore\Model\Object\Listing();
$combinedListing->setCondition("o_className IN ('event','news') AND o_id IN (
SELECT o_id FROM object_$eventClassId WHERE categories LIKE '%,$featuredListObjId,%'
UNION SELECT o_id FROM object_$newsClassId WHERE categories LIKE '%,$featuredListObjId,%'
)");
foreach ($combinedListing as $item) {
echo get_class($item) . "<br>";
}
After some toil I've figured out a way to do this. It may not be optimal, but it works and gets the job done.
The reason I've found this way to be most effective is that setCondition will also check non-target classes for columns they might not have, causing the listing to fail.
#
# Hybridized Featured Articles List
#
# Get News and Events Objects...
$hybridList = new Object\Listing();
$hybridList->setCondition( "o_className IN ( 'newsArticle', 'eventsArticle' )" );
# Get a list of IDs for News and Events that have the "featured" category...
$featuredList = array();
foreach( $hybridList->getObjects() as $obj ) {
foreach( $obj->categories as $obj_cat ) {
if( $obj_cat->o_key == 'featured' ) {
$key = strtotime( $obj->date->date );
$key .= str_pad( $obj->o_id, 8, "0", STR_PAD_LEFT );
$featuredList[ $key ] = $obj;
break;
}
}
}
# Sort and Slice the list...
if( $this->document->getProperty( 'featuredSort' ) == 'asc' ) {
ksort( $featuredList ); // Oldest First
} else {
krsort( $featuredList ); // Newest First
}
$this->view->featured = array_slice( $featuredList, 0, $this->document->getProperty( 'featuredLimit' ) );
#
# Audit the Hybridized Featured Articles List for IDs
#
$block_ids = array();
foreach( $this->view->featured as $featured ) {
$block_ids[] = (int)$featured->o_id;
}
#
# News Articles List...
#
$newsList = new Object\NewsArticle\Listing();
$newsList->setOrderKey( "date" );
$newsList->setOrder( $this->document->getProperty( 'newsSort' ) == 'asc' ? 'asc' : 'desc' );
$newsList->setCondition( 'o_id NOT IN ('.implode( ',', $block_ids ).')' );
$newsList->setLimit( $this->document->getProperty( 'newsLimit' ) );
$this->view->news = $newsList->getObjects();
#
# Events Articles List...
#
$eventsList = new Object\EventsArticle\Listing();
$eventsList->setOrderKey( "date" );
$eventsList->setOrder( $this->document->getProperty( 'eventsSort' ) == 'asc' ? 'asc' : 'desc' );
$eventsList->setCondition( 'o_id NOT IN ('.implode( ',', $block_ids ).')' );
$eventsList->setLimit( $this->document->getProperty( 'eventsLimit' ) );
$this->view->events = $eventsList->getObjects();
I'm trying to write a macro for destructuring BSON data which looks like this:
let bson: Document = ...;
let (id, hash, name, path, modification_time, size, metadata, commit_data) = bson_destructure! {
get id = from (bson), optional, name ("_id"), as ObjectId;
get hash = from (bson), as String, through (|s| ContentHash::from_str(&s));
get name = from (bson), as String;
get path = from (bson), as Bson, through (PathBuf::from_bson);
get modification_time = from (bson), as UtcDatetime, through (FileTime);
get size = from (bson), as I64, through (|n| n as u64);
get metadata = from (bson), as Document, through (Metadata::from_bson);
get commit_data = from (bson), optional, as Document, through (CommitData::from_bson);
ret (id, hash, name, path, modification_time, size, metadata, commit_data)
};
I've written the following macro (pretty large) for it:
macro_rules! bson_destructure {
// required field
(
#collect req,
[$target:ident, $source:expr, $field:expr, Bson, $f:expr],
[];
$($rest:tt)*
) => {{
let $target = try!(match $source.remove($field) {
Some(v) => $f(v),
None => Err(BsonDestructureError::MissingField {
field_name: $field,
expected: "Bson"
}),
});
bson_destructure!($($rest)*)
}};
(
#collect req,
[$target:ident, $source:expr, $field:expr, $variant:ident, $f:expr],
[];
$($rest:tt)*
) => {{
let $target = try!(match $source.remove($field) {
Some(v) => match v {
::ejdb::bson::Bson::$variant(v) => $f(v),
v => Err(BsonDestructureError::InvalidType {
field_name: $field,
expected: stringify!($variant),
actual: v
})
},
None => Err(BsonDestructureError::MissingField {
field_name: $field,
expected: stringify!($variant)
}),
});
bson_destructure!($($rest)*)
}};
// optional field
(
#collect opt,
[$target:ident, $source:expr, $field:expr, Bson, $f:expr],
[];
$($rest:tt)*
) => {{
let $target = try!(match $source.remove($field) {
Some(v) => $f(v).map(Some),
None => Ok(None),
});
bson_destructure!($($rest)*)
}};
(
#collect opt,
[$target:ident, $source:expr, $field:expr, $variant:ident, $f:expr],
[];
$($rest:tt)*
) => {{
let $target = try!(match $source.remove($field) {
Some(v) => match v {
::ejdb::bson::Bson::$variant(v) => $f(v).map(Some),
v => Err(BsonDestructureError::InvalidType {
field_name: $field,
expected: stringify!($variant),
actual: v
})
},
None => Ok(None),
});
bson_destructure!($($rest)*)
}};
// change variant name
(
#collect $k:tt,
[$target:ident, $source:expr, $field:expr, $variant:ident, $f:expr],
[as $nv:ident, $($word:ident $arg:tt),*];
$($rest:tt)*
) => {
bson_destructure!(
#collect $k,
[$target, $source, $field, $nv, $f],
[$($word $arg),*];
$($rest)*
)
};
// change final mapping function
(
#collect $k:tt,
[$target:ident, $source:expr, $field:expr, $variant:ident, $f:expr],
[through ($nf:expr), $($word:ident $arg:tt),*];
$($rest:tt)*
) => {
bson_destructure!(
#collect $k,
[$target, $source, $field, $variant, $nf],
[$($word $arg),*];
$($rest)*
)
};
// change field name
(
#collect $k:tt,
[$target:ident, $source:expr, $field:expr, $variant:ident, $f:expr],
[name ($nn:expr), $($word:ident $arg:tt),*];
$($rest:tt)*
) => {
bson_destructure!(
#collect $k,
[$target, $source, $nn, $variant, $f],
[$($word $arg),*];
$($rest)*
)
};
// main forms
(get $target:ident = from ($source:expr), $($word:ident $arg:tt),*; $($rest:tt)*) => {
bson_destructure!(
#collect req,
[$target, $source, stringify!($target), Bson, Ok],
[$($word $arg),*];
$($rest)*
)
};
(get $target:ident = from ($source:expr), optional, $($word:ident $arg:tt),*; $($rest:tt)*) => {
bson_destructure!(
#collect opt,
[$target, $source, stringify!($target), Bson, Ok],
[$($word $arg),*];
$($rest)*
)
};
// final form
(ret $e:expr) => { $e }
}
However, the first example above results in the following compilation error:
src/db/data.rs:345:22: 345:25 error: no rules expected the token `opt`
src/db/data.rs:345 #collect opt,
^~~
I'm somewhat surprised that it doesn't show the error location as usual (that is, there is no indication of where the expansion happens); however, the error vanishes when I comment out the piece of code which uses the macro.
I can't see why it says that no rules expected this token, because there is such a rule, but maybe I'm missing something.
I'm pretty sure that this is possible, because it's roughly what the quick_error crate does, but it seems that my macro-writing skills are still lacking.
How should I fix the macro so it works as I expect?
For completeness, the following is the definition of BsonDestructureError:
#[derive(Debug, Clone)]
pub enum BsonDestructureError {
InvalidType {
field_name: &'static str,
expected: &'static str,
actual: Bson
},
InvalidArrayItemType {
index: usize,
expected: &'static str,
actual: Bson
},
MissingField {
field_name: &'static str,
expected: &'static str
}
}
I'm also using bson crate reexported from ejdb crate. Here is a minimal example, runnable with cargo script on stable Rust.
cargo script, a recursive muncher, and my favourite internal rule syntax all in one question; how can I not?
First, the exact problem can be identified by running cargo rustc -- -Z trace-macros. This will output each rule as it gets expanded, giving us a "backtrace" which, after some manual reformatting, comes out looking like so:
bson_destructure! {
get id = from ( bson ) , optional , name ( "_id" ) , as ObjectId ;
get hash = from ( bson ) , as String ;
get name = from ( bson ) , as String ;
get path = from ( bson ) , as Bson ;
get modification_time = from ( bson ) , as UtcDatetime ;
get size = from ( bson ) , as I64 , through ( | n | n as u64 ) ;
get metadata = from ( bson ) , as Document ;
get commit_data = from ( bson ) , optional , as Document ;
ret ( id , hash , name , path , modification_time , size , metadata , commit_data )
}
bson_destructure! {
# collect opt ,
[ id , bson , stringify ! ( id ) , Bson , Ok ] ,
[ name ( "_id" ) , as ObjectId ] ;
get hash = from ( bson ) , as String ;
get name = from ( bson ) , as String ;
get path = from ( bson ) , as Bson ;
get modification_time = from ( bson ) , as UtcDatetime ;
get size = from ( bson ) , as I64 , through ( | n | n as u64 ) ;
get metadata = from ( bson ) , as Document ;
get commit_data = from ( bson ) , optional , as Document ;
ret ( id , hash , name , path , modification_time , size , metadata , commit_data )
}
bson_destructure! {
# collect opt ,
[ id , bson , "_id" , Bson , Ok ] , [ as ObjectId ] ;
get hash = from ( bson ) , as String ;
get name = from ( bson ) , as String ;
get path = from ( bson ) , as Bson ;
get modification_time = from ( bson ) , as UtcDatetime ;
get size = from ( bson ) , as I64 , through ( | n | n as u64 ) ;
get metadata = from ( bson ) , as Document ;
get commit_data = from ( bson ) , optional , as Document ;
ret ( id , hash , name , path , modification_time , size , metadata , commit_data )
}
A careful perusal of the rules in bson_destructure! shows the issue: there is no rule which matches the third expansion. macro_rules! is, frankly, rubbish at reporting sane error locations when it comes to recursive rules; that it's pointing to the opt token is irrelevant. The real problem is that it couldn't find a matching rule.
In particular, the offending rule is this one:
// change variant name
(
#collect $k:tt,
[$target:ident, $source:expr, $field:expr, $variant:ident, $f:expr],
[as $nv:ident, $($word:ident $arg:tt),*];
$($rest:tt)*
) => {
...
};
Note the presence of a comma immediately after $nv:ident. Also note that there is no such comma in the input. This can be solved by moving the comma inside the repetition, like so:
// change variant name
(
#collect $k:tt,
[$target:ident, $source:expr, $field:expr, $variant:ident, $f:expr],
[as $nv:ident $(, $word:ident $arg:tt)*];
$($rest:tt)*
) => {
...
};
Another alternative (and the one I usually go with) is to simply mutate the input when it is first encountered, to make sure there is always a trailing comma in place.
The code won't actually compile on my machine, due to a native dependency, but I did verify that making this change (both here, and to the other rules with a similar issue) allows it to complete macro expansion. You can check the output looks correct using cargo rustc -- -Z unstable-options --pretty=expanded.
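The comma pitfall is easy to reproduce in isolation. Here is a minimal sketch (names invented for the demo, not taken from the original macro):

```rust
// Minimal reproduction of the trailing-comma pitfall.
macro_rules! count_args {
    // BROKEN variant (not compiled): `[$first:ident, $($rest:ident),*]`
    // requires a comma after `$first` even when nothing follows, so
    // `count_args!([a])` would fail with "no rules expected the token `]`".

    // FIXED variant: the comma lives inside the repetition, so it is
    // only required when another identifier actually follows.
    ([$first:ident $(, $rest:ident)*]) => {
        1usize $(+ { let _ = stringify!($rest); 1 })*
    };
}

fn main() {
    assert_eq!(count_args!([a]), 1);        // matches without a trailing comma
    assert_eq!(count_args!([a, b, c]), 3);  // and with further items
    println!("ok");
}
```

With the comma outside the repetition, the single-item invocation fails to match, which is exactly the class of error seen above.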