I am just getting started with Meteor and have encountered something that isn't necessarily an issue but something that I just don't understand. I have the following code in a file called chat.coffee...
Meteor.setInterval ( ->
  console.log "Hello " + roomName
  Meteor.call('keepAlive', Meteor.user(), roomName)
  return
), 5000
I was originally under the impression that CoffeeScript files only ran alongside their associated HTML files. That doesn't seem to be the case here, as this code runs on every single page regardless of the file name. Is this how things are intended to work, and if so, is there a way to ensure that only certain code runs on certain pages?
One thing to mention is that this code is located in the client-side folder.
On the client side, Meteor associates your templates with their JavaScript functions and helpers based on shared template names; that association is not inherently tied to your file names.
By way of example, if you have a template named "chat" in an html file as follows:
<template name="chat"></template>
Meteor will run scripts such as Template.chat.helpers({}) or Template.chat.events({}) only in connection with the "chat" template. But that is not dependent on your file naming conventions. It could be placed in a file named chat.js for organization and convention, but could equally well reside in a file named client.js or any other arbitrarily named .js file.
Similarly, your <template name="chat"> could reside in a file named chat.html, or client.html, or an arbitrary name of your choosing.
Your setInterval function is not tied to a specific template so it will run on every page, even if it resides in a file named chat.js.
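If you want that keep-alive to run only while the chat page is actually on screen, one common approach is to start the interval in the template's rendered callback and clear it in destroyed. A minimal sketch, assuming a "chat" template exists and that roomName comes from your own code:

keepAliveHandle = null

Template.chat.rendered = ->
  # start pinging only once the chat template is on the page
  keepAliveHandle = Meteor.setInterval (->
    Meteor.call 'keepAlive', Meteor.user(), roomName
  ), 5000

Template.chat.destroyed = ->
  # stop the timer when the user navigates away from the chat template
  Meteor.clearInterval(keepAliveHandle) if keepAliveHandle?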
Correct.
Meteor merges all of your JavaScript (compiled from CoffeeScript) and all of your HTML, which it stores in its own special way. It combines the contents of every head and body into a single page, serves that up, and then renders templates as you specify.
To have a more "page"-oriented app you can use something like Iron Router.
I am attempting to remove, in VS Code, all references to a managed package that is going to be uninstalled; the references span the whole code base.
I have been using a search to find the field permissions, but I am wondering whether there is a way to find the references without specifying the exact field name, i.e., by matching any field that contains "agf", since they all use it.
Below is the search query:
<fieldPermissions>
    <editable>false</editable>
    <field>User.agf_Certified_Product_Owner__c</field>
    <readable>false</readable>
</fieldPermissions>
If the field matches "agf" in any combination, I want to be able to find and delete the five associated lines from multiple files. Something like the below:
<fieldPermissions>
    <editable>false</editable>
    <field>agf</field>
    <readable>false</readable>
</fieldPermissions>
With any combination of agf in the field, delete all from any file it appears in.
Not an answer but too long for a comment
You don't have to. Profiles/permission sets don't block a package's delete. Probably neither do reports.
You'd use your time better by searching for all instances of agf__ (that's with a double underscore), which should find fields, objects... used in classes, flows, page layouts, etc. And searching for agf. (with a dot) should find all instances where your Apex code calls their classes marked as global.
Alternatively, Apex classes / VF pages with dependencies on the package will have it listed in their "meta.xml", for example:
<?xml version="1.0" encoding="UTF-8"?>
<ApexClass xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>54.0</apiVersion>
    <packageVersions>
        <majorNumber>236</majorNumber>
        <minorNumber>1</minorNumber>
        <namespace>SBQQ</namespace>
    </packageVersions>
    <status>Active</status>
</ApexClass>
Last but not least - why not just spawn a dev sandbox and attempt the delete there? If it succeeds - great. If not - it'll list the dependencies that blocked the delete. It'll be "the real thing": it'll smite you even if your VS Code project doesn't contain all flows and layouts and could thus lull you into a false sense of security. I'd seriously do it in a sandbox and then run all tests for good measure, just in case there are some dynamic SOQL queries that don't count as hard, delete-blocking references.
After the delete's done - fetch Profiles / Permission Sets from this org and the field references will be gone from the XML.
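For the mechanical part of the original question - deleting the whole <fieldPermissions> block wherever the field contains agf - a multi-line regex in VS Code's search-and-replace (regex mode, empty replacement) might look roughly like the sketch below. It is untested, and note that VS Code only switches to multi-line matching when the pattern contains \n:

<fieldPermissions>\n\s*<editable>.*</editable>\n\s*<field>[^<]*agf[^<]*</field>\n\s*<readable>.*</readable>\n\s*</fieldPermissions>\n?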
I have a Windows Forms .NET application in Visual Studio. Making a form "Localizable" adds a Form1.resx file nested below the form. I also want to add a separate .resx file for each form (Form1Resources.resx). This is used for custom form-specific resources, e.g. messages generated using the code behind.
This is set up as follows:
It would be tidier to nest the custom .resx file beneath the form (see this question for details about how to do this), as follows:
However, this results in the following error when I build the application:
Two output file names resolved to the same output path:
"obj\Debug\WindowsFormsApp1.Form1.resources" WindowsFormsApp1
I'm guessing that MSBuild uses some logic to find nested .resx files and generates the .resources file name based on the parent. Is there any way that this can be resolved?
Note that it is not possible to add custom messages to the Form1.resx file - this is for design-specific resources only and any resources that you add get overwritten when you save changes in design mode.
The error comes from the GenerateResource task because the two .resx files (EmbeddedResource items in MSBuild) both end up with the same ManifestResourceName metadata value. That value gets created by the CreateManifestResourceNames task and, presumably, when it sees an EmbeddedResource that has the DependentUpon metadata set (to Form1.cs in your case) it always generates something of the form '$(RootNamespace).%(DependentUpon)': both of your .resx files end up with WindowsFormsApp1.Form1 as their ManifestResourceName. Which could arguably be treated as the reason why having all .resx files under Form1 is not tidier: it's not meant for that, it requires extra fiddling, and it could confuse others, since they'd typically expect a .resx file nested beneath a form to contain what it always does.
Anyway: there are at least two ways to work around this:
there's a Target called CreateCustomManifestResourceNames which is meant to be used for custom ManifestResourceName creation. Probably a bit too much work for your case; just mentioning it for completeness.
manually declare a ManifestResourceName yourself which doesn't clash with the other(s); if the metadata is already present it won't get overwritten by the CreateManifestResourceNames task.
Generic code sample:
<EmbeddedResource Include="Form1Resources.resx">
  <DependentUpon>Form1.cs</DependentUpon>
  <ManifestResourceName>$(RootNamespace).%(FileName)</ManifestResourceName>
  ...
</EmbeddedResource>
I have a document management system which stores files in a MS Word format. In my application, I would like to be able to open that document in Word.
I would like Word to handle all of the file system access out of the content management system. What I need to do is the following:
1) Create a new document based off a template, and then provide information that can be parsed and placed into specific fields.
I see I can do this as follows:
Runtime.getRuntime().exec("C:/Program Files (x86)/Microsoft Office/Office15/winword.exe /ttemplate_name");
My assumption here is that the template is installed on the local drive. However, I would also like to provide some data so that fields can be prepopulated, and I am not sure how to do that.
2) I would like to be able to run a macro to open the document directly from the content management system. I think I can run a macro as follows:
Runtime.getRuntime().exec("C:/Program Files (x86)/Microsoft Office/Office15/winword.exe /mmacro_name");
However, in this case, I would need to provide the document id from the content management system so that it can retrieve it and open it.
I am unsure what switch or parameter I can use to provide the additional data for Word.
Thanks!
Word provides no command-line facility to pass arguments or data when opening or creating a document.
As long as macro code is available, the macro can read data that's stored somewhere, such as in an XML file. But the file path would need to be hard-coded or derivable from a known location (path).
You don't necessarily need to call a macro in a document (or template attached to the document). If the macro is named AutoNew or AutoOpen it will execute automatically when a document is created from the template or, respectively, when a document is opened.
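Putting those pieces together, one workable pattern - sketched here in Java, with the handoff file location, template name, and Word install path all being assumptions rather than anything Word prescribes - is to write the data to a file at an agreed-upon path, launch Word against the template, and let an AutoNew macro in the template read that file and populate the fields:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

public class OpenWordFromTemplate {
    public static void main(String[] args) throws IOException {
        // Hypothetical handoff file at a location the AutoNew macro also knows about.
        Path handoff = Paths.get(System.getenv("TEMP"), "cms_handoff.txt");
        Files.write(handoff, Arrays.asList("documentId=12345", "customerName=Acme"));

        // Launch Word against the template; the template's AutoNew macro can then
        // read the handoff file and fill in the document fields.
        new ProcessBuilder(
                "C:/Program Files (x86)/Microsoft Office/Office15/winword.exe",
                "/ttemplate_name")
            .start();
    }
}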
Apologies for the lack of precision in the question, but I'm not completely sure which of possibly many things I'm doing wrong here.
I'm relatively new to Coffeescript and also geo applications in general, but here goes:
I've got a working (simple) Meteor (0.7.0.1) application using CoffeeScript on both client and server. The issue I'm having occurs when attempting to use TopoJSON-encoded files to create a layer of US congressional districts. (The purpose of the app is to help highlight voter suppression in the US.)
So, a few things: Typically in a non-Meteor app, I would just load the topoJSON file like so:
$.getJSON('./data/us-congress-113.json', function (data) {
  var congress_geojson = topojson.feature(data, data.objects.districts);
  congress_layer.addData(congress_geojson);
});
Now of course this won't work in Meteor because it's not asynchronous.
One of the things that was recommended here on SO was to not worry about reading the file, and to instead change the .json file to .js and then set the contents (which are of course just an object) equal to a variable.
Here's what I did:
First, I changed the .json file to a .js file in the server directory, and added the "congress =" to the beginning of the file. It's a huge file so forgive me for omitting the whole object.
congress = {"type":"Topology",
"objects":
{"districts":
{"type":"GeometryCollection","geometries":[{"type":"Polygon"
Now here's where everything starts to give me issues:
In the server.coffee, I've created a variable like so to reference the congress object:
#congress_geojson = topojson.feature(congress, congress.objects.districts)
Notice how I'm putting the # symbol there? I've been told this allows a variable in Coffeescript to be globally scoped? I tried to also use a Meteor feature called "share" where I declare the variable as "share.congress_geojson". That led to the same issues which I will describe below.
Now in the client.coffee file, I'm trying to call this variable to load into a Leaflet map.
congress_layer = L.geoJson(null,
  style:
    color: "#DE0404"
    weight: 2
    opacity: 0.4
    fillOpacity: 0.1
)
congress_layer.addData(#congress_geojson)
This isn't working, and specifically (despite attempts to find other ways), the errors I'm getting in the console are:
Exception from Deps afterFlush function: TypeError: Cannot read property 'features' of undefined
at o.GeoJSON.o.FeatureGroup.extend.addData (http://localhost:3000/packages/leaflet.js?ad7b569067d1f68c7403ea1c89a172b4cfd68d85:39:16471)
at Object.Template.map.rendered (http://localhost:3000/client/client.coffee.js?37b1cdc5945f3407f2726a5719e1459f44d1db2d:213:18)
I have no doubt that I'm missing something stupidly obvious here. Any suggestions or tips for what I'm doing completely wrong would be appreciated. Is it a case where an object globally declared in a .js file isn't available to code in a .coffee file? Maybe I'm doing something wrong on the Meteor side?
Thanks!
Edit:
So I was able to get things working by putting the .js file containing the congress object in a root /lib folder, causing the object to load first, and then calling the congress object from the client. However, I still want to know how I could simply share this object from the server. What is the "Meteor way" here?
If you are looking for the Meteor way to order the loading of files, use the Meteor.startup function and put the initialization code there. That function is the $.ready of the Meteor world, i.e., it will execute only after all your files have been successfully loaded on the client.
So in your case:
Meteor.startup ->
  # by the time this runs, all client files have loaded, so congress_geojson is defined
  congress_layer.addData(congress_geojson)
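As for the edit's question about sharing the object from the server, one Meteor-style option (just a sketch - the congressTopology method name is made up here) is to expose the data through a Meteor method and fetch it from the client inside Meteor.startup:

# server-side file
Meteor.methods
  congressTopology: -> congress   # 'congress' is the object defined in your server .js file

# client-side file
Meteor.startup ->
  Meteor.call 'congressTopology', (err, topology) ->
    return console.error(err) if err
    congress_layer.addData topojson.feature(topology, topology.objects.districts)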
Is there a way to add/initiate a comment (e.g. $dom->createComment ...) such that it comments out an entire block of XML tags? Basically I want to turn off the content inside the comment.
For example, it would look like this:
<TT>
    <AA>keep</AA>
    <!-- comment to blocking
    <BB>hideme1</BB>
    <CC>hideme2</CC>
    -->
    <DD>d's content is good</DD>
</TT>
Actually, this question is a precursor to my attempt to figure out a method to mark up/label/identify the changes made to an XML file in support of new client software functionality, while keeping the ability to remove / back out those XML changes in the rare event the client needs to fall back to the previous software version (and no, I can't simply point back to the original XML file, because the client is allowed to make minor modifications to existing node text values). This is all going to be controlled via a Perl script and LibXML's core modules (I can't use modules the client doesn't have).
So basically I've identified three possible types of XML changes resulting from new client software functionality:
1.) ADD new element node(s) (typically to support new sw functionality)
2.) DELETE element node(s), or blocks of them (would be rare, but nevertheless a possibility)
3.) CHANGE node text values (rare, but the new sw may require a new value)
For all three types, the client needs the ability to back out the changes. One thing I was thinking of using is ATTRIBUTES, since the existing XML files don't use them. For example, for an ADD change type, I could include an attribute like ADD="sw version 4.1". That way, if it needs to be removed, I could simply have the Perl script find those attribute strings and delete the nodes (using LibXML methods). Same thing with the CHANGE type: I could use an attribute like CHG="newvalue_oldvalue", then use straight Perl (or LibXML) to switch the value back based on the contents of the attribute. The DELETE change type is giving me a problem though (as well as the others, lol!). I want to be able to "keep" the deleted lines in the XML file solely for the case where the software falls back a version (at some later point the Perl script could eventually clean them up / delete them).
I know this is a lot; I'm new to LibXML (but not to Perl). I was just wondering if any of you have thoughts on how to go about it, or have seen anything resembling this kind of request... I'd be grateful for any kind of advice! Thank you...
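For what it's worth, the block-comment part on its own is doable with XML::LibXML: serialize the nodes you want to hide, wrap that text in a comment node, insert the comment where the block was, and remove the originals. A rough sketch (the file name, XPath expressions, and comment label are placeholders):

use strict;
use warnings;
use XML::LibXML;

my $dom  = XML::LibXML->load_xml(location => 'example.xml');
my ($tt) = $dom->findnodes('/TT');

# Nodes to "turn off" - here <BB> and <CC> from the example above.
my @hide = $dom->findnodes('/TT/BB | /TT/CC');
my $text = join "\n", map { $_->toString } @hide;

# Caveat: the serialized text must not contain "--", which is illegal inside an XML comment.
my $comment = $dom->createComment(" DELETED sw version 4.1\n$text\n");

$tt->insertBefore($comment, $hide[0]);   # put the comment where the block was
$tt->removeChild($_) for @hide;          # drop the now-commented-out nodes

print $dom->toString(1);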