How to use only the TinyMCE WYSIWYG Core Plan - tinymce

I have registered at https://www.tiny.cloud/, but by default it gave me a one-month free trial of the premium features. I don't want to use any of these features. If I implement them, my client will start using them, and after the month is up he will ask for them. How can I convert my free trial to the basic Tiny core plan, which is open source forever?

The API key will convert to the free version automatically. If you are just using the community features you will be good to go! No need to change or do anything.
Of course, you might want to try the premium features during the 30-day trial to see if they are useful and valuable to your use case.
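As a concrete illustration, here is a minimal sketch of an editor config that sticks to community features only. It assumes TinyMCE is already loaded on the page (via the cloud script tag with your API key, or a self-hosted copy); the selector ID and plugin list are just examples, so check the current open-source plugin list in the TinyMCE docs before relying on it.

```typescript
// Minimal sketch: an editor config that uses only community (open-source) features.
// Assumes the TinyMCE script is already loaded on the page, exposing a global `tinymce`.
declare const tinymce: {
  init(config: Record<string, unknown>): Promise<unknown>;
};

tinymce.init({
  selector: '#editor',                      // the textarea/div to turn into an editor (illustrative ID)
  plugins: 'lists link image table code',   // community plugins only; no premium plugin names
  toolbar: 'undo redo | bold italic | bullist numlist | link image | table | code',
});
```

As long as no premium plugin names appear in the plugins or toolbar settings, nothing trial-only is in use, and the editor keeps working after the 30 days are over.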

You should go to GitHub and clone/download the open-source project.

Related

How to create a C++ library to sell, with a demo/trial mode

I have created some C++ classes.
I want to sell them as a library.
What should I do so that no one can see how I implemented them, but they can still use them?
Also, I want to create a demo/trial version of the library that expires after some time. How can I do this?
Thanks
Tarun
If you distribute them compiled (as .dll or .so files, for example), then it won't be easy to see how you implemented them.
I gave an answer here that might be helpful for creating a trial version with time expiration.
If you're really paranoid, you can run your code through a C++ obfuscator.
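One common approach to the expiration part is to bake an expiry date into the build and have every public entry point refuse to run once that date has passed. Below is a rough sketch of that check, written in TypeScript purely to illustrate the logic; in the actual C++ library you would do the same comparison with std::chrono::system_clock::now() against a hard-coded date. The names and the date are made up for illustration.

```typescript
// Rough sketch of a time-limited trial check (hypothetical names; illustrative only).
// In a real C++ library the expiry date would be compiled into the binary and checked
// before any public API call does real work.

// Expiry date baked in at build time (illustrative value).
const TRIAL_EXPIRY_UTC = Date.UTC(2025, 0, 31); // 31 Jan 2025; months are 0-based

/** Throws if the trial period is over; call this at the start of every public entry point. */
function assertTrialActive(): void {
  if (Date.now() > TRIAL_EXPIRY_UTC) {
    throw new Error('Trial period has expired. Please purchase a license.');
  }
}

/** Example public API function gated by the trial check. */
export function doSomethingUseful(input: string): string {
  assertTrialActive();
  return input.toUpperCase(); // placeholder for the real library logic
}
```

Keep in mind that a check against the system clock is trivial to defeat by setting the clock back, so it only deters casual users; combine it with compiled distribution (and obfuscation if you like) rather than relying on it alone.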

Create Task Report from Mylyn?

Is there a way to create a task/activity report (say, a weekly report) from tasks managed with Mylyn? I've been using Rachota TimeTracker, which allows me to create reports (in HTML format):
http://rachota.sourceforge.net/en/demo.html
I've just started using Mylyn (our company uses Embarcadero JBuilder, which is based on Eclipse), but I don't see anything in the Eclipse or Embarcadero docs about reporting capabilities.
Is it possible? Can I query activities worked on during a prior week and report statistics from them (management-style reports, you know)? I'm sure it is, but I haven't been able to find anything by googling.
Thanks.
You're in luck: Tasktop Pro (the supported version of Mylyn) has reporting. It allows you to:
View all task activity times for the previous day, week, and month
Manually adjust times as necessary to account for meetings and discussions
Submit your adjusted times, on tasks you select, to your task repository
Create reports in various formats
I'd recommend this short video which explains the reporting features in about 6 minutes.
David Shepherd
Tasktop Technologies
As you already know by now, the reporting functionality is included in the commercial Tasktop product, which is developed by the same people who created Mylyn. So, obviously, they are not interested in including those features in the free version. Now you have two options: either buy Tasktop, or develop your own extension for Mylyn. The task data is stored in a reasonably simple XML file, so you don't necessarily have to create an Eclipse plugin.
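If you go the do-it-yourself route, a small script that simply dumps the element names and attributes found in the task list file is a reasonable first step, so you can see which fields are worth reporting on. A Node/TypeScript sketch follows; the file path and the assumption that the task list is plain (unzipped) XML are things to verify against your own workspace, since the location and format vary by Mylyn version.

```typescript
// Quick-and-dirty inspection of a Mylyn task list file (assumed to be plain XML).
// This is NOT a robust XML parser; it just lists the distinct element names and their
// attribute keys so you can decide what to feed into a weekly report. Verify the actual
// file path and schema in your own workspace before building on this.
import * as fs from 'node:fs';

const file = process.argv[2] ?? 'tasks.xml'; // path to an unzipped task list (assumption)
const xml = fs.readFileSync(file, 'utf8');

const attrsByTag = new Map<string, Set<string>>();

// Match opening tags like <Task kind="..." label="..."> and collect attribute names.
for (const match of xml.matchAll(/<([A-Za-z][\w.-]*)((?:\s+[\w.:-]+="[^"]*")*)\s*\/?>/g)) {
  const tag = match[1];
  const attrs = attrsByTag.get(tag) ?? new Set<string>();
  for (const attr of match[2].matchAll(/([\w.:-]+)="[^"]*"/g)) {
    attrs.add(attr[1]);
  }
  attrsByTag.set(tag, attrs);
}

for (const [tag, attrs] of attrsByTag) {
  console.log(`${tag}: ${[...attrs].join(', ')}`);
}
```

From there it is a short step to filter tasks on a date attribute for the week in question and total up activity times.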
The reporting feature was stripped from the project back when it was still called Mylar, in 2007, and since the project went commercial it has never come back to the open-source Mylyn, for obvious reasons.
I found this simple Perl script, which outputs a pretty basic text-only report; good enough for me.
http://rachaelandtom.info/mylyn-report
No takers? Not surprised, since I can't find anything on the subject. For what it's worth, there is an experimental task/activity report available for Mylyn via the sandbox JAR. However, I could not get mine to work, as I'm tied to a JBuilder installation behind a firewall (and I can't download anything on the corporate network that hasn't been pre-evaluated... it sucks, I know).
I'm going to have to experiment with the Mylyn sandbox at home, but it would be great if someone knows of an easier, more stable alternative.

Documentation and version control

For a project I'm about to start, there will be documentation produced.
What is the best practice for this?
Should the documents live with the code and assets or should there be a separate documentation store?
Edit
I'd like a wiki, but I will need to print the documents, etc. It's a university project.
It really depends on your team. Where I work, we keep documentation in a wiki which is linked in with our team website. For the purposes of shipping documentation, the wiki can be exported and we run it through a parser that "fancifies" the look and feel of the documentation for customer purposes.
Storing the documentation with the code (typically in your source repository) is not a bad idea. Just make sure to keep them separated. For example, keep a docs folder at the same level as your src folder in your repository. This way, you can quickly ship the current documentation, you can easily track revisions, and anybody new to the project can immediately jump in without having to go to multiple locations for information.
Storing it in source control is fine.
This is an interesting question. Basically, what others are saying about generated documentation is right: the source files and templates should be stored in source control, and the output should be generated during your build process.
As far as requirements/specs/etc. documentation, I have worked both ways, and I very much prefer using SharePoint or a Wiki/document portal that is designed for document sharing/versioning. The reason is, most non-developer folks aren't comfortable working with source control systems, and you don't gain any of the advantages of intelligent merging if you are using a binary format like Word. Plus it's nice to have internet-based access so you can reference and work on the docs in a distributed team without people having to install extra software.
Here's a 2017 summary of the options and my experience:
(extreme 1) Completely external (e.g. a wiki, Google Docs, LaTeX, MS Word, MS OneDrive)
People aren't bothered about keeping it up to date (half of them don't even know where to find the page that needs updating, since it's so far out of the trenches).
Wiki platforms are “captive user interfaces”: your data gets stored in their proprietary schemas and is not easy to examine with a simple text editor (Confluence is even worse, in that you no longer have any access to the plaintext content at all).
(extreme 2) Completely internal (e.g. javadoc)
Pollutes the source code, and is usually too low-level to be of any use. Well-written source code is still the best form of low-level documentation.
However, I feel package-info.java files are underutilized.
(balance) Colocated documentation (e.g. README.md)
A good halfway solution, with the benefits of version control. If a single README.md file is not enough, consider a doc/ folder. The only drawback I've seen is deciding whether to put helpful graphics (e.g. PNG files) under source control and risk bloating the repo.
One interesting way to avoid this problem is to use plaintext diagram tools (I find Grapheasy and Text Diagram to be a breath of fresh air).
Plaintext can be easily read even if your rendering engine changes as the years go by.
GitHub's success is in no small part thanks to the README.md located in the root of the project.
One tiny disadvantage of this approach, though, is that your continuous integration system will trigger a new build each time you edit the README.md file.
If you are writing versioned user documentation associated with each release of the product, then it makes sense to put the documentation in source control along with its associated product release.
If you are writing internal developer documentation, use automated internal source code documentation (javadoc, doxygen, .net annotations, etc) for source level documentation and a project wiki for design level documentation.
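To make the source-level half concrete, this is the kind of structured doc comment those tools extract. The example below uses the JSDoc/TSDoc flavour (javadoc and doxygen follow essentially the same @param/@return convention), and the function itself is just an illustrative placeholder.

```typescript
/**
 * Computes the gross price for an order line.
 *
 * @param unitPrice - Net price of a single unit, in the invoice currency.
 * @param quantity  - Number of units ordered; must be a non-negative integer.
 * @param vatRate   - VAT rate as a fraction, e.g. 0.2 for 20%.
 * @returns The gross (VAT-inclusive) line total.
 */
export function grossLineTotal(unitPrice: number, quantity: number, vatRate: number): number {
  return unitPrice * quantity * (1 + vatRate);
}
```

The documentation generator turns comments like this into browsable API pages as part of the build, so the low-level docs can never drift far from the code they describe.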
I think most of us in the industry are not really following best practices, and of course it also depends a lot on your situation.
In an agile environment, where you have a very iterative release process, you will want to "travel light". In this particular case, Jason's suggestion of a separate wiki really works great.
In a waterfall/big-bang model, you will have a better opportunity to produce a decent documentation update with each new release. You will also need to clearly document which version of the requirements was agreed on, and keep extensive documentation for every tiny change you make to the requirements (because of the effects it has on subsequent stages). In that case it is often best if the documentation can live together with the version-controlled source code.
Are you using any sort of auto-documentation or is it completely manual? Assuming that you are using an auto-documentation system, the documentation is more or less generated on the fly, and would be part of the code itself.
To me (assuming that it's possible with whatever code you are using), this would be the preferred method of handling it, as you wouldn't need to maintain the documentation source at all.

Is gchart safe to use?

The home page for gchart, a client side charting add-in for Google Web Toolkit (GWT), has a long screed about how the project's only maintainer thinks his Google account has been hacked and because of that he will be "disavowing/abandoning my own project and Google account". Does that mean the project is an orphan? Is somebody taking it over?
There is always a risk in basing your project on somebody else's code, because they may stop supporting it or abandon it during your project's lifetime, but it seems to me that with the fast evolution of Java and GWT, using gchart in a new project may be a big mistake. Am I right?
I've released Client-side GChart 2.7 in a brand-new Google Code project (untainted by the previous rootkitting of my laptop) that you can find here:
http://clientsidegchart.googlecode.com
For details on the new security-related improvements I've instituted in an effort to prevent a future breach, follow the "release notes" link on the home page to the GChart 2.7 release notes.
I wish it had not taken me so long to re-release. I was attempting to correct the part of the problem that was under my direct control: my deep ignorance of all things related to computer security and systems administration.
I encourage you to give the re-released, better-secured and administered, Client-side GChart 2.7 a second look.
John C. Gunther, Client-side GChart author
I would have to say so. If the only maintainer of the project has lost control of his account, using any subsequent versions of gchart could mean you're unknowingly incorporating malicious code.
Unless he spins up another project to move the codebase forward, I'd avoid it.

PowerBuilder 11.5 & Version Control

What is the best version control system to implement with PowerBuilder 11.5?
If you have examples of how you have done branching/trunks/tags, that would be awesome. We have tried to wrap our heads around it a few times and always run into problems, because we use shared libraries such as PFC/PFE in multiple applications.
Right now we are only using PBNative, and it sucks.
Agent SVN is an MS-SCCI Subversion plug-in that works with PowerBuilder.
Here is a link that describes how to setup Agent SVN to work with PowerBuilder and Subversion.
We currently use Perforce and its P4SCC plugin, which works very well. In fact, I'm sure I read somewhere that the guys at Sybase who write PowerBuilder actually use Perforce themselves.
So, to be fair, let's start out by saying that while you're asking about version control, PBNative is source control. If you compare something that is intended to have more features than just keep two developers from editing the same piece of source, then yes, PBNative will suck. The Madone SL may be an incredible bicycle, but if you're trying to take a couple of laps around an Indy track, it will suck.
"Best" is a pretty subjective word. There are lots of features available in version control and configuration management tools. You can get tons of features, but you'll pay through the nose. StarTeam has some nice features like being able to trace a client change request or bug report all the way through to the changed code, and being able to link in a customized diff tool (which is particularly useful in PB). Then again, if cost is your key criteria rather than features, there are lots of free options that will get the job done. As long as the tool supports the Microsoft SCC interface, you should be OK.
There is a relatively active NNTP newsgroup that focuses on source control with PowerBuilder, which you can also access via the web. You can probably find some already-posted opinions there.
Many years ago I used StarTeam to control PB applications. PowerBuilder, needless to say, is an outdated bear, and it has to export each and every object from its "libraries" into source control.
Currently our legacy PB apps have their libraries saved whole into Subversion, without any support for diffs, etc.
We use Visual SourceSafe. We don't use PFC, but we do have libraries that are shared among several projects. Until now, each project was developed separately from the others, and so the shared libraries were duplicated. To keep them synchronized, they were all shared at the VSS level. Lately we've reorganized our sources so that all projects sit near each other, and there's only one instance of the shared libraries.
VSS is definitely not the best source control system, to say the least, but it integrates into PB without the need for any bridges. PB has an inherent problem working with source control, so it probably won't make much of a difference which system you work with (at least from the PB point of view).
Now, on a personal note, I'd like to say PB 11.5 is a piece of sh*t. It constantly crashes, is full of unbelievable UI nuisances, and just brings productivity to its knees. It's probably the worst IDE ever created. Stay away if possible.
FYI: the new PB12 (PB.NET) will integrate with SCC systems, so you can easily choose which source control system you want to use. Since we have basically dropped PBLs (they are now directories), files can be checked in/out individually, even with a plain vanilla editor, since files are now normal (Unicode) text files.
StarTeam integrates so beautifully with the PB IDE. I used that combination at my previous company (PB9 and ST5.x) for several years. You should be managing your code at the object level - don't log the entire PBL into ST...
If you're having problems with that setup, hit me up offline. phoran at sybase dot com.
We use Merant Version Manager for older projects and TFS for newer work. The only issues we have are that TFS does not support keyword expansion, and changing the 'read the flowerbox comments' attitude people have. Some folks are nervous about losing the inline versioning history.
We use StarTeam and have been very pleased with it. It combines bug tracking with version control. Unfortunately, though, we don't store our files at the object level; we just store the PBL files directly in source control. Anything that supports the SCC interface should theoretically work correctly with PowerBuilder.
PB9: We used PVCS, but had stability problems with PBL corruption, and also problems co-existing with later versions of Crystal Reports (a DLL conflict), so now we use PB9 with Dynamsoft's Source Anywhere Standalone. This system is more primitive; it is missing the more advanced features for promotion levels and for pulling an older milestone version of all objects to make a patch build.
What we are looking for now is something that will allow more advanced "change management", supporting promotion levels at the change level (rather than at the object level). Would it be better to use Perforce, StarTeam, (Harvest Change Manager + HarPB), or something else? Any advice on these combinations would be greatly appreciated.
You can always use Plastic SCM with PowerBuilder through SCC. Plastic is pretty advanced in terms of graphics, tools, replicas and so on, so it's always a good choice to keep in mind.