Missing timestamps for uploaded images - mongodb

I have the mongoid and carrierwave-mongoid gems in my project (for avatars in the user model) and need timestamps in the uploaded images' URLs. I know there is a wiki page for this (https://github.com/jnicklas/carrierwave/wiki/How-to%3A-Use-a-timestamp-in-file-names), but it carries the note "This does not seem to be reliable. I'd strongly recommend saving the timestamp to the database and reading it from the model to generate the filename instead of using this method." and I'm not sure how to do that.
I thought Rails would generate timestamps for image URLs automatically, or from a specific attribute, right? So I'm not sure what the right name for this attribute is, or what the right approach is with Mongoid and carrierwave-mongoid.
Could you please give me some info, a link where I can find out more about this, or the solution itself?
Because my GitHub repo is private, I created this gist (https://gist.github.com/2355128) where you can see my user model and avatar uploader.
Thanks for your help.

This may be more generic than you want for an answer, but hopefully it will get you to an appropriate solution. The unique ObjectId used for the default _id field in MongoDB has a timestamp "built in"; BSON::ObjectId is what you are looking for in Mongoid to extract it.
There is also a write-up on it here:
http://www.mongodb.org/display/DOCS/Optimizing+Object+IDs#OptimizingObjectIDs-Extractinsertiontimesfromidratherthanhavingaseparatetimestampfield.
Hence, you could use a wrapper around the ObjectID to construct your URLs and avoid the use of an extra timestamp entirely.
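If it helps to see the idea concretely: an ObjectId stores its creation time (seconds since the epoch) in its first four bytes. In Mongoid/Ruby that value is available as model.id.generation_time; the sketch below shows the same thing with the Node.js bson package in TypeScript, purely as an illustration, and the avatar filename scheme is my own made-up example, not anything CarrierWave prescribes.

    import { ObjectId } from "bson";

    // Every ObjectId embeds its creation time, so no extra timestamp field is needed.
    const id = new ObjectId();
    const createdAt: Date = id.getTimestamp();

    // Hypothetical filename built from the embedded timestamp; the Mongoid
    // equivalent of getTimestamp() is BSON::ObjectId#generation_time.
    const filename = `avatar_${Math.floor(createdAt.getTime() / 1000)}.png`;
    console.log(filename);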

Related

Kentico Import Tool inconsistent/buggy when updating documents

I've had a number of problems using the provided Kentico Import Toolkit, namely when using the "Import new and overwrite existing pages" option to update my existing/already imported pages. I'm using a custom SQL query to import, and I have a profile saved for each import I've needed (the client has an article-based site, so there are a few tables of similar information) to try to keep each import as consistent as possible.
Here are the problems I've encountered thus far (in no particular order):
The tool tries to guess which fields from the query correlate to the fields of the page type in Kentico for you, which is a nice idea, but it seems poorly implemented. If I'm not very careful and don't reload the profiles every time I import, I've had instances where fields changed inexplicably when testing imports because the tool thought it knew which field I wanted.
(This is mostly a problem when importing/reimporting multiple times in a session and choosing to go back and load the same profile without reloading it.)
The NodeAlias field seemingly is only required on update/reimport rather than on initial import. I'm sure there's an internal cleaning of the document's title to generate a NodeAlias, and this is generated fine when importing documents while NOT providing the NodeAlias. After importing the items initially and wishing to update, however, the NodeAlias is seemingly required, as you'll get errors asking for it to be included. This implies to me that there's matching on the NodeAlias along with the given ID field, which should be fine in theory but isn't specifically mentioned anywhere in the tool as best I can tell.
I've had instances where reimporting items will change/strip their NodeAliasPath. I've gotten around this by specifically setting the NodeAliasPath (which only shows after selecting "Show Advanced Columns"), but like NodeAlias before it, I'd think the tool should be smart enough to keep the path if it isn't specifically given for updated items.
It seems very odd that in order to match on ID for previous items you have to provide the name of the new column instead of the old one. My example: the client was using a field named 'id' and the new one is 'OriginalID', to clearly differentiate it from the Kentico-derived ID fields. To match the items I have to use 'OriginalID' rather than 'id'.
A couple of notes/niceties or potential updates along with the above:
It would be nice if there were some way to select whether the page should be published or not through a single query. Currently, having the "Automatically publish pages under workflow" toggle checked seems to always publish the items. I have an instance where the client has old documents in the provided DB dump that they don't want visible on the site but want preserved in the DB in case they change their mind later. Currently I have to perform two imports, one for the unpublished items and a second for the published ones, to accommodate this, which is quite cumbersome.
I'll likely edit/add to this as I get responses. This isn't really one specific problem (I managed a workaround to the NodeAliasPath stripping issue, which inspired this post initially); it's more me asking whether these are bugs, whether I'm not using the software as intended, etc.
You've stated all the problems you're having/experiencing and possibilities for why they are happening, but you didn't ask a particular question. If you suspect they are bugs, then I'd go directly to Kentico Support and report the issues there, since these are things that have been part of the KIT for as long as I have worked with it.

Word API Custom Properties

I need some help with Word add-ins.
I will be programmatically creating a document, and as part of that I need to add a custom property (Pub_Doc_ID) to the document, as in the picture below.
I am using the Word JavaScript APIs and could not find a way to do this. The workflow I am targeting is very simple: create a document, get the Pub_Doc_ID (which is a primary key) from the DB, and assign it to the document. The primary key is then attached to the document, so it lives with the document.
Some more background:
As I mentioned earlier, I am using the Word JavaScript APIs. I am adding text, sections, images, etc. Now I need one connector (Pub_Doc_ID) between the doc and the DB, so I wanted to use custom properties. If there is a better way to do it, let me know.
I know how to do this in VSTO; I am looking for the Word JavaScript API equivalent.
I will then use this Pub_Doc_ID to call APIs and to load the task pane.
Thanks, I really appreciate any help on this.
*Pub_Doc_ID: Publishing Document ID.
Read/write access to custom properties is something my team is working on and will be delivered towards the end of the year.
It seems that for your scenario you don't necessarily need to store that information as a custom property, and you have a couple of alternatives in the meantime:
You can add your own customXmlPart to the doc to store this information. Here is a great example of how to use this: https://github.com/OfficeDev/Word-Add-in-Work-with-custom-XML-parts/tree/master/C%23/CustomXMLAppWeb/App
You could also store it as a setting of your add-in. Check out the settings object and how to store and retrieve settings: https://dev.office.com/reference/add-ins/shared/document.settings
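A minimal sketch of the settings approach (TypeScript against the shared Office.js API; the "Pub_Doc_ID" key and the hard-coded value are just stand-ins for whatever you fetch from your database):

    Office.onReady(() => {
        const settings = Office.context.document.settings;

        // Store the id fetched from the database against this document.
        settings.set("Pub_Doc_ID", 12345);

        // Nothing is persisted into the file until saveAsync completes.
        settings.saveAsync((result) => {
            if (result.status === Office.AsyncResultStatus.Succeeded) {
                console.log("Pub_Doc_ID saved with the document");
            } else {
                console.error(result.error.message);
            }
        });

        // Later, e.g. when the task pane loads, read it back:
        const pubDocId = settings.get("Pub_Doc_ID");
        console.log(pubDocId);
    });

Because the value travels inside the document itself, any user who reopens the file with your add-in can read the same id back and use it to call your APIs and load the task pane.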
Hope this helps!!
Thanks
You cannot presently access custom properties via the JavaScript API. They are currently working on it and have put information about the proposed APIs on GitHub.

Symfony2 - How to populate a select field with a very large amount of data (nearly 40,000)

I use Symfony 2. I have two bundles so far. The first bundle, called UserBundle, is built using FOSUserBundle. The second bundle, called GeoBundle, contains one entity called France, whose table contains almost 40,000 records. Each record refers to a city with a postal code, regional code, and so on. Basically, I use this entity in my user registration form so that the user can select an appropriate city from the list. By the way, I use an entity field type to do that.
My problem is that everything works fine with just a few records in the table, but with almost 40,000 records the page where my form is doesn't even open. I already raised the memory limit in php.ini to 256M and beyond, but the page still doesn't open.
So my question is simple: what would be the best solution to populate a select field with that many records? I am of course open to other solutions. Thank you in advance. Cheers. Marc
You're probably best using an autocomplete field. You can find plenty of solutions on Google. Or try a bundle like https://github.com/shtumi/ShtumiUsefulBundle
Simply don't load the full list; that would be bad for your server, your bandwidth, and for the memory required by the client (if it's a mobile browser, forget it!).
I suggest using jQuery UI Autocomplete, a jQuery plug-in that can load the required elements via Ajax from your server, so you don't have to load the whole thing. It's also well tested and easy to implement.
http://jqueryui.com/autocomplete/#remote-jsonp
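A rough client-side sketch of that idea in TypeScript, assuming jQuery and jQuery UI (plus their type definitions) are loaded; the field ids and the /geo/cities route are made up for this example, and the route would be a small Symfony controller that runs a LIKE query against the France entity and returns a short JSON list such as [{"label": "Paris (75001)", "value": "Paris", "id": 123}]:

    $("#registration_city").autocomplete({
        source: "/geo/cities", // jQuery UI appends ?term=<typed text> for you
        minLength: 2,          // don't query the server before 2 characters are typed
        select: (_event, ui) => {
            // ui.item carries whatever your JSON returned; keep the chosen
            // record's id in a hidden field that is submitted with the form.
            $("#registration_city_id").val(ui.item.id);
        }
    });

Server-side, cap the result (e.g. LIMIT 20) so each keystroke returns a handful of rows instead of the whole 40,000-row table.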
Try using this Symfony2 bundle.
https://github.com/genemu/GenemuFormBundle/blob/master/Resources/doc/jquery/autocomplete/text.md
This is very helpful and made for Symfony2. It will save you time; have a look at its other extensions as well.

How to pre-populate custom field in signup form for secure zone in Adobe Business Catalyst?

I have created a signup form for a secure zone in Business Catalyst. I want to give the user access to that form in order to update the fields. I have created the page and it's working; the only problem is that there is no way to pre-populate custom fields in the form. I talked to their support and researched a lot, but all in vain. This is a very basic thing that BC is missing. Is there a hack for it or some alternative?
Just adding, for anyone finding this, that you can also populate fields that have been created and extended in the CRM if they're stored against the customer record.
{module_customerfield,crmextformID,FieldID}
eg
{module_customerfield,7470,82256}
More info in the forums.
Good news: Business Catalyst now supports this feature. For more information, read:
Allowing Customers to view and update CRM details
I have had this same issue, and hopefully there will be some way they can fix it in the future. What I did for the time being is use other tags that I wasn't otherwise using. For example, module_workcity was not and won't be used by me, so I put the data I needed the custom tag for into that field. Here is a screenshot of what I am referring to: http://screencast.com/t/b3pvuOcTi (one thing to note here: the screen name is different from the username for this site).
Note: when the user signs up, I have them fill out a field for the "workcity" and just change the labeling.
Not sure if this is an option for you, and it can take some work, but it might help.
Hope this helps. Another note: for BC-related questions you will get quicker answers on the forums there - http://businesscatalyst.com/support/forums

How do websites change content daily?

I just started learning HTML and CSS, with no knowledge of other languages such as JavaScript, PHP, and so forth. Websites like Refdesk.com boast fresh content every day; there has to be some way they are able to have new content every day other than changing it by hand. Some Google searches turned up nothing but RSS feeds.
How is this done?
Thanks for the helpful answers; they answer half of my question. But does this also mean that the owner has to manually add the web page each day for new content, or can they, say, add in the content for a few days and have it displayed day after day automatically?
Most dynamic websites derive their page content from a database. Change the content in the database, and the content on the pages changes to follow suit.
Likely they have some form of content management system which allows non-technical users to update the site. In some systems, the content manager itself can get quite advanced. Here's a description of the latest version of the one used at the BBC, CPS, which drives the many BBC websites and more.
They most probably use a database where they store the content, and the newest entries are retrieved from this database and displayed. This requires a server-side language like PHP, Java, or Python.
The HTML is generated dynamically.
The answers about databases combined with a server-side language like PHP are pretty good and very direct, but depending on how new you are to web development they might not be conceptual enough.
The first thing you need to understand is that a database is a collection of tables, each like any you might be familiar with in Excel.
For example, one table in your database might be named "daily_links", and it might have two columns, one named "Date" and one named "Link". So every time you want to publish a new link, you just add a new row.
So now you are halfway there.
Now what the server-side scripting language is able to do is go to the database, look at your table "daily_links", and bring back all the information it finds there.
From there it can do anything with that information, like make a new anchor tag in HTML for each row it found and give it an href of the data found in the "Link" column.
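To make that concrete, here is a tiny sketch in TypeScript (the same shape works in PHP, Java, or Python); the daily_links rows below simply stand in for what a database query such as SELECT Date, Link FROM daily_links would return:

    // Each row mirrors one record in the hypothetical daily_links table.
    type DailyLink = { date: string; link: string };

    function renderDailyLinks(rows: DailyLink[]): string {
        // One anchor tag per database row, as described above.
        return rows
            .map((row) => `<li><a href="${row.link}">Link for ${row.date}</a></li>`)
            .join("\n");
    }

    // Pretend these two rows came back from the database for today.
    const todaysRows: DailyLink[] = [
        { date: "2012-04-10", link: "https://example.com/first-article" },
        { date: "2012-04-10", link: "https://example.com/second-article" },
    ];

    console.log(`<ul>\n${renderDailyLinks(todaysRows)}\n</ul>`);

Change the rows in the database and the generated HTML changes with them; nobody edits the page by hand.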
That is the rough idea in (very) general terms.
I hope that is easy to understand.