The ADO.NET implementation in .NET Framework 2.0 beta 1 included a writable resultset, which was removed in beta 2. The obvious reason for this is that keeping updatable cursors open on the server would hog server resources. However, it would be useful when used correctly, specifically when updating or inserting large numbers of records.
I'm aware of the bulk copy API now available, but there are still many uses for a writable resultset, so I am interested in researching how it was implemented in beta 1 to determine the feasibility of re-implementing it in a new library.
I'm interested in hearing suggestions on how to do something similar (create a writable resultset) or how to research this further, with or without access to the old framework.
Or, is there a way I can obtain the old framework so I can look into seeing how this was done?
The license for .NET 2 beta 1 and beta 2 (any beta, including RCs) expired many years ago. You cannot legally use it anymore. Even if you find it, you will be violating the law if you use it.
Microsoft's beta licenses are always structured such that they expire when the final gold version ships. They typically give you a grace period to upgrade, but 6 years is well beyond any grace period.
You should update the app to stop using the deprecated API from that beta .NET Framework. Typically, deprecated APIs have equivalent and often better ways of doing the same thing.
Beta releases exist to get feedback (issues, etc.) back to the framework team, but also to give users a preview of what is coming so they can update their apps. You should avoid depending on a deprecated API from an old beta framework for that long; betas are, by definition, in transition.
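For the large-insert case the question mentions, the released framework's bulk copy API covers much of what a writable resultset was used for. A minimal sketch using SqlBulkCopy (the table name, column layout, and connection string are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    public static class BulkInsertExample
    {
        // Insert a large number of rows without holding an updatable
        // server-side cursor open; rows are streamed in batches instead.
        public static void InsertItems(DataTable items, string connectionString)
        {
            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "Items"; // placeholder table name
                bulk.BatchSize = 5000;               // rows per round trip
                bulk.WriteToServer(items);
            }
        }
    }

For large batched updates (rather than inserts), SqlDataAdapter's UpdateBatchSize property serves a similar purpose without requiring a server-side cursor.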
I tried to learn this from the Wikipedia article on software versioning, but I couldn't understand it very well. I know the last number means that many bugs were fixed, and I already got something out of the wiki.
But what is the main difference between app versions like 1.2.1 and 2.4.5? Is there any source with a quick explanation? When I update my app on the Play Store, how should I choose the version? If I change the API, should I change the last digit of the version number from 1.1.2 to 1.1.3, or must I change the first number, from 1.1.2 to 2.1.2?
Thanks.
Semantic Versioning (Major.Minor.Build)
The answer depends on the development team's choice of versioning scheme. The most common scheme in my experience is Semantic Versioning, in which the three numbers each have a semantic value attached to them.
What is the main difference between app versions like 1.2.1 and 2.4.5?
This would indicate that the newer software, 2.4.5, contains at least one breaking change relative to 1.x and could cause problems for you or any software that consumes that code.
If I change the API, should I change the last digit of the version number from 1.1.2 to 1.1.3, or must I change the first number, from 1.1.2 to 2.1.2?
In this case you should choose 2.1.2 (strictly, SemVer would reset the lower numbers to give 2.0.0) if the change risks breaking other code that consumes the API; bump the minor version, e.g. 1.2.0, if your change adds to the API without taking away from its existing functionality or interface; and bump only the last digit, 1.1.3, for a pure bug fix.
When I update my app on the Play Store, how should I choose the version?
Choose what makes sense to you or your team. If versioning is controlled by the platform, you may not be able to choose, but lean toward conforming to the environment you are in; that is, follow the versioning conventions of the platform you are developing for.
Major Version
The first number in the sequence (1.x.x) is the major version, which semantically means the software has a breaking change that could affect any other software depending on it. For example, an API could completely change its URI paths in an upgrade from 1.x.x to 2.x.x.
Minor Version
Minor versions are changes that do not introduce breaking changes but are significant enough to warrant a version increase. More often than not this means additions that extend functionality without breaking it. So if you added a new endpoint to an existing API and kept all other endpoints the same, the API's version could be increased from x.1.x to x.2.x.
Bug/Build Version
The last number in the scheme stands for the bug or build version, depending on how you want to look at it. On teams like mine, the third number is incremented automatically on every CI/CD run that publishes a version artifact to our repository. It can also be used for bug fixes and hotfixes that come up during the lifecycle of your application, for example x.x.1 to x.x.2.
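To make those three roles concrete, here is a minimal C# sketch of a version bump following the rules above. The ChangeKind names are my own, for illustration only, not part of any standard library:

    using System;

    public enum ChangeKind { Breaking, Feature, Fix }

    public static class SemVer
    {
        // Bump a "major.minor.build" string according to the kind of change.
        public static string Bump(string version, ChangeKind change)
        {
            var parts = version.Split('.');
            int major = int.Parse(parts[0]);
            int minor = int.Parse(parts[1]);
            int build = int.Parse(parts[2]);

            switch (change)
            {
                case ChangeKind.Breaking: major++; minor = 0; build = 0; break;
                case ChangeKind.Feature:  minor++; build = 0; break;
                case ChangeKind.Fix:      build++; break;
            }
            return major + "." + minor + "." + build;
        }
    }

    // SemVer.Bump("1.1.2", ChangeKind.Fix)      -> "1.1.3"
    // SemVer.Bump("1.1.2", ChangeKind.Feature)  -> "1.2.0"
    // SemVer.Bump("1.1.2", ChangeKind.Breaking) -> "2.0.0"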
Leaving Thoughts
This is not the only versioning scheme out there, and it won't be the last one invented. However, this scheme has real traction in the industry at the moment and is worth learning. Plus, it enables some nice automation tricks in your CI/CD pipeline when version numbers carry meaning and your commit standards align with them (see Conventional Commits).
I'm trying to find some information about Cognos Analytics, but without success.
I'm trying to find out what the latest available version of Cognos 11 is, and when it was released.
How many updates does this release have already?
How many bugs have been discovered?
Is the upgrade from Cognos 10 to 11 smooth?
Thanks a lot.
While the question is a bit broad, I will try to answer the parts I can understand; hopefully it will assist the OP.
These are the approximate release dates for the Cognos Analytics 11 that have been released as of the time of this post:
11.0.0: December 23, 2015
11.0.1: March 29, 2016
11.0.2: May 6, 2016
Interestingly, while the 11.0.1 and 11.0.2 releases appear to include fixes only, IBM does not seem to be using Fix Packs in the same way as in prior releases of Cognos BI. These are full releases that install and upgrade the same way as an upgrade from BI 10.2.1 to 10.2.2 would. There have been at least two interim fixes (IFs) released as well, one for 11.0.0 and one for 11.0.2, both related to security, if I recall correctly.
While I don't think there is any official statistic on how many bugs were discovered, fix lists can be found for the released versions here:
11.0.0
11.0.1
11.0.2
The upgrade process from Cognos 10 to 11 is smooth in the sense that the overall process is similar to upgrades in past releases. There are some architectural changes for multi-node environments that change the process for installing subsequent nodes. That said, there are some very important deprecations and feature changes/removals that you will want to learn about, not to mention the new navigation, authoring, and content consumption interfaces.
There are a lot of facets to the release that need considering for any production upgrade -- I would definitely jump into the documentation and, assuming you are a current customer, set up a sandbox to start testing functionality before I made any hard plans for moving a production system forward. If you want more very high level feature discussion, a quick Google search for "Cognos 11 new features" or similar will give you a lot of helpful information.
To follow the announcements on the latest releases, you can subscribe to the IBM Analytics blog:
https://www.ibm.com/communities/analytics/cognos-analytics/blog/
or periodically check the product page:
https://www.ibm.com/analytics/us/en/technology/products/cognos-analytics/
For bug fixes, you should refer to the product release notes:
http://www-01.ibm.com/support/docview.wss?uid=swg27047187
though the list of bugs seems to be less granular than what you would normally see for other IBM products. The list does not appear to be broken down by fix pack, either.
I do not think the upgrade is a smooth one, since it is not an in-place upgrade like you would have from, say, 11.0.1 to 11.0.6. I also could not find a clear statement about the upgrade from 10.x in the installation guide, so it is unclear (to me, as of now) whether the process involves the usual backup of the content manager database from the original version and restore of that backup image to a new database used by the new version.
I tried exporting the contents from the 10.x installation via the Cognos Admin console and then importing them into the 11.0.x release, but 3 out of 5 of my reports simply hang on launch, even after performing a report upgrade operation via the Admin console.
I'm currently using Firebase as a prototyping tool to showcase a front end design for a documentation tool. In the process we've come to really like the real-time power of Firebase, and are exploring the potential to use it for our production instance of an open source/community version.
The first challenge is version control. Our legacy project used Hibernate/Envers in a Java stack, and we were previously looking at Gitlab as a way to move into a more "familiar" git environment.
This way?
Is there a way to timestamp and version control the data being saved? Any thoughts on how best to recall this data without reinventing the wheel (e.g., any open source modules)?
The real-time aspect of something like Firepad is great for documentation, but we require the means to commit or otherwise distinctly timestamp the state or save of a document.
Or?
Or is it best to use Firebase only for the real-time functionality, and use GitLab to commit the instance to a non-realtime database? In other words, abstracting the version control out entirely to a more traditional database relationship?
Thoughts?
Both options you offer are valid and feasible. In general, I'd suggest you use Firebase only as your real-time stack (data sync) and connect it to your own backend (GitLab or a custom DB).
I've gone down that path, and I find the best solution is to integrate your own backend DB with Firebase on top. Depend on Firebase exclusively for everything, and you'll hit walls sooner or later.
The best solution is to keep full control of your data structure, security, and access models, and use Firebase where needed to keep clients in sync (online and offline). The integration is simple.
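To sketch that hybrid design in C#: commits go to your own versioned store with a timestamp, and Firebase only carries the latest state for real-time clients. The IDocumentStore and IRealtimeSync interfaces below are hypothetical stand-ins for your backend layer and the Firebase client, not real APIs:

    using System;
    using System.Threading.Tasks;

    // Hypothetical abstraction over your own versioned store (SQL, Git-backed, etc.).
    public interface IDocumentStore
    {
        Task SaveRevisionAsync(string docId, string content, DateTime timestampUtc);
    }

    // Hypothetical abstraction over the Firebase client, used purely for live sync.
    public interface IRealtimeSync
    {
        Task PushLatestAsync(string docId, string content);
    }

    public class DocumentService
    {
        private readonly IDocumentStore store;
        private readonly IRealtimeSync sync;

        public DocumentService(IDocumentStore store, IRealtimeSync sync)
        {
            this.store = store;
            this.sync = sync;
        }

        // A "commit": a distinctly timestamped revision goes to your own backend,
        // while Firebase only ever holds the latest state for real-time clients.
        public async Task CommitAsync(string docId, string content)
        {
            await store.SaveRevisionAsync(docId, content, DateTime.UtcNow);
            await sync.PushLatestAsync(docId, content);
        }
    }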
My boss asked me to develop an order system for our company's salesmen. Our company has almost 100,000 items for sale. To improve performance, we will ask the salesmen to download all the data from SQL Server to the iPhone's local SQLite database once per week, and build an index.
I'm a Windows Mobile developer, and it's very easy to use RDA to download data from SQL Server to a local SQL CE database. The size on the Windows Mobile device is about 20 MB. Now I need to do the same thing on the iPhone.
I'm a newbie at iPhone development. Please give me some ideas about this project. Any input will be appreciated.
Here is some information on using SQLite in iOS: iOS offline data storage tutorial
You'll probably need to export the DB as SQL and download it from the server, then import the SQL into SQLite.
As another answerer suggested, you could expose a REST interface on the server, assuming your server is set up to export the contents of the entire product database. Then there are any number of third-party tools for importing JSON data (e.g., via REST) into Core Data. Or, if your REST data isn't too complicated, it's not hard to parse it and add it to Core Data directly.
I personally recommend Core Data rather than using SQLite directly; iOS makes it very easy to do so. But it's also a matter of personal choice, and I know lots of people prefer to use SQLite directly (especially if they want to build some cross-platform code, e.g. an Android version which shares the same DB schema and logic).
There are probably many ways to do this, but I would go with building a REST API for the server data. Then, on the iPhone side of things, make a network call to access the data.
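Since the poster's data already lives in SQL Server, the server side of that REST suggestion could look something like the following ASP.NET Web API sketch. The Product shape, table, and connection string are placeholders for illustration:

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Web.Http;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductsController : ApiController
    {
        // GET /api/products: returns the full catalog as JSON for the weekly sync.
        public IEnumerable<Product> Get()
        {
            var products = new List<Product>();
            using (var conn = new SqlConnection("<your connection string>"))
            using (var cmd = new SqlCommand("SELECT Id, Name, Price FROM Products", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        products.Add(new Product
                        {
                            Id = reader.GetInt32(0),
                            Name = reader.GetString(1),
                            Price = reader.GetDecimal(2)
                        });
                    }
                }
            }
            return products;
        }
    }

On the iPhone side, the JSON response can then be parsed and inserted into SQLite or Core Data in a background import.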
I've been developing modules for DNN since version 2, and back then I was easily able to run my module in my environment as I developed it, and still easily deploy the module as a DLL.
When version 4 came out and used the Web Site solution (rather than the Web Application solution), it seems something was lost. I can continue to develop in my test environment and immediately see changes as I make them, but releasing has become a headache for me.
I mostly do development for one site in particular, and I have just been using FTP deployment of the modules to the main site after making changes.
I'd like to set up a good environment for multiple developers to be able to work on the module(s).
When adding things to source control, do people generally put all of DNN into source control so they can bring the whole solution down to work on, or just their module, with each person setting up their own dev DNN environment?
I'd like to start getting my modules projects organized so more people could work on them and I feel a bit lost for some best practices both in doing this and deploying those changes to a live site.
I have a few detailed blog postings about this on my blog site, mitchelsellers.com.
I personally use the WAP development model, and I do NOT check the DNN solution or any core files into source control, as I do NOT modify the core for any of my clients. When working with multiple people, we create a similar environment for each person and can still work on each of our individual projects. At times we have completely isolated dev environments with individual databases and code; at other times I have worked with a shared dev database to resolve dev module installation issues.
With the WAP model, I use a post-build event to dynamically create my installation packages on each project build, and I keep a test installation that I use to validate the packages. Debugging is then done via Attach to Process.
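As an illustration of that packaging step, here is a minimal C# sketch of a small console tool that zips a module's staged files into an install package. This is a sketch under assumptions, not Mitchel's actual setup; the tool name and paths are placeholders, and it would be invoked from the project's post-build event:

    using System;
    using System.IO;
    using System.IO.Compression;

    // Minimal packaging tool: zip the staged module files (manifest, DLLs,
    // .ascx files, etc.) into a DNN install package.
    // Example post-build event (paths are placeholders):
    //   PackageModule.exe "$(ProjectDir)Package" "$(TargetDir)MyModule_Install.zip"
    public static class PackageModule
    {
        public static void Main(string[] args)
        {
            string stagingDir = args[0];  // folder holding the files to package
            string packagePath = args[1]; // output install zip

            if (File.Exists(packagePath))
            {
                File.Delete(packagePath); // CreateFromDirectory fails if the target exists
            }
            ZipFile.CreateFromDirectory(stagingDir, packagePath);
            Console.WriteLine("Created install package: " + packagePath);
        }
    }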
I would suggest Mitchel's book if you need some reference material: Professional DotNetNuke Module Programming (Wrox), by Mitchel Sellers.