Restriction regarding multiple namespaces in trial version of CodeFluent?

I have not yet bought the Ultimate edition, but I'm really considering it; however, I have run into a problem.
I have the Ultimate trial version, and it seems that it is not possible to use multiple namespaces.
Is there a restriction due to the trial version?
When using multiple namespaces, I get compiler errors in the "BOM" library project.

Related

How to generate a Dart client from Swagger JSON with null safety turned on?

I have a backend service that produces a swagger.json describing its interface.
I want to connect to that service, and so far the easiest way has been to use a generator (for example swagger-gen or the OpenAPI variant) to generate the client code. For Dart you even have a choice of generators based on which HTTP client you want. Great. Worked well.
Now, Flutter is only my hobby; I use other languages and frameworks at work. I loved the idea of null safety, and I really liked seeing it rolled out into production the way it was. People are picking it up, packages are being adapted. I love it.
However, it seems that after months and months of null safety on beta, master, dev, and stable, there is no movement: no GitHub issues, no pull requests to get those generators to produce null-safe code. I have seen the packages that make working with the generators easier adopt null safety for their own code, but the generated code still fails on recent Dart versions. It's still Dart 2.0 from 2018.
So what am I missing here? Surely there are production apps out there that connect to backends? Did I miss a secret switch that has been there all the time? Is Swagger maybe not used anymore in the Flutter/Dart ecosystem; do professional app developers all use something else nowadays?
I'm used to Swagger/OpenAPI, but I have full control of the backend; if something else is now the best way of doing it, I can change the backend.
How do I generate null-safe client Dart code from a backend service interface file?
You can use swagger_dart_code_generator (as of now 2.1.2) from https://github.com/epam-cross-platform-lab/swagger-dart-code-generator, which supports null safety.
The only issue I have observed with it so far is that all properties are generated as nullable, so the client code ends up bloated with null checks, exactly what null safety was meant to avoid.
I'm not sure whether this comes from the generator itself or from the chopper_generator (as of now 4.0.1) it uses. I also came to this thread trying to find out how to solve this inconvenience.
You'll only be able to generate a null-safe client once the generator is updated to emit null-safe code.
As you mentioned, there are no issues on the repository, so a good place to start would be to create one and find out the author's inclination towards updating the code to be null safe.
Null safety was in beta for a long time and has only recently landed in stable; the general practice has been to migrate to null safety gradually, as the change has repercussions throughout a codebase. For example, build_config: ^1.0.0, which is published by the core Dart team, added support for null safety on April 20th, 2021, only a few days before this post.

SAP Commerce / Hybris upgrade multiple versions

Which is the more feasible strategy for version upgrades when you are multiple versions behind, for example going from 6.4 to 2005?
Should we really do a version-by-version upgrade as SAP suggests? I understand it's the recommended way, but still.
Can anyone share their experience regarding this?
What difficulties could be faced when migrating across multiple versions directly?
Thanks!
There are several approaches you can take. Which one you choose depends on the knowledge your team has and on the amount of customization you have already performed.
Step by Step
This is the way recommended by SAP. It is the safer strategy, where it is very clear what changed between different versions. With every version, you will experience build failures, startup failures and possibly even data issues that need to be migrated, but it is very clear which version caused those issues. With the SAP help and the upgrade notes, you should be able to easily find what was changed and how to fix it. The disadvantage of this approach is that you need to download, unzip and build for every version, and that takes time. Sometimes you even need to fix the same code twice, when the implementation was changed multiple times.
One Shot
With this approach, you go straight to the latest version: you put your custom code into the latest version and see what build failures you get.
With this approach, it will be harder to figure out which exact version upgrade caused a specific issue. You should still check all the upgrade notes to make sure that no migrations are needed. The advantage is that you only perform everything once. If you have an experienced team, this is a feasible approach. If you have a new team, be careful with this approach: you might encounter some difficult errors where you won't be sure which version caused them, so finding info in the SAP help might be harder.
Hybrid approach
A third option is a hybrid approach, where you upgrade several versions at once (for example, to the versions that contain big changes, like the addition of Backoffice in 6.3). This makes it easier to apply the changes required by those big releases, while you don't have to go through every version one by one.
Conclusion
I've tried all of these approaches in the past. The Step by Step approach takes a lot of time, but makes the changes easier and clearer. With the One Shot approach, you only need to download the latest version, but it might be somewhat harder to find the bugs. If you have an experienced team, you should go for the One Shot approach. When you are a lot of versions behind and there were big changes, you could go for the hybrid approach.
I had a similar requirement, upgrading from version 6.2 to 2005. I went with the One Shot approach as explained by Yoni, and the biggest challenge I faced was due to the Java version change.
I believe the One Shot and Step by Step approaches take a similar amount of time for a major version upgrade; the Step by Step approach is safer but redundant. My personal favorite is One Shot.
I recently did a platform upgrade from hybris 6.7 to 2005 and did it step by step, mainly because of the Java version change; besides that, there were certain migration steps in each intermediate version that needed to be done. Also, the customer had a lot of custom promotion rules, and they needed some special care.
In my case, the process in each step was this:
Upgrade to the new version - there is a help.sap.com page for each step; I recommend you follow it and go through each section to see what applies to your project, e.g. Upgrading Platform from 6.7 to 1808.
Compile the project - some deprecated things are removed in some steps, and you have to refactor where needed. This step took me the most time.
Start the hybris server - after you finish the refactoring and your project builds successfully with ant clean all, the platform may still fail to start due to some (now) incorrect XML config. The "good" part here is that you can see in the console what the problem is, so the fix should go faster than the previous step.
Perform the necessary upgrade steps - here is the tricky part: once your platform starts, you have to perform the necessary upgrade steps for each extension and add-on that needs them; otherwise you risk working with broken business logic. You need to do some regression tests and check that everything works as it should.
All in all, an upgrade takes time and depends on how many versions you have to go through, but I think taking it step by step is the most efficient way to accomplish it.

ADO.NET Provider for SAP HANA - Version mismatch issue

I have a WPF application that uses the ADO.NET client for SAP HANA (version 1.0.9.0) in my local development environment, and I have added a reference to SAP.DATA.HANA.v4.5.dll in my project. The connection works fine.
When I try to run the same application on a server that has a different version of the ADO.NET client, it throws an error.
Shouldn't it refer to the client from the install location (C:\Program Files\sap\hdbclient\ado.net\v4.5) instead of by version number?
Can someone please explain if I am doing something wrong.
You're not doing anything wrong. But I think you're missing a key piece of knowledge about assembly references. And there are ways to deal with it, both for you and SAP.
When .NET references an assembly, it does so against the exact version used at compile time, which means that if the version number is different at runtime, the load fails by default. Differences in the signing key used for strong-named assemblies or in culture can also cause assembly loading to fail. This previous answer discusses that and suggests an approach to dealing with it using the AssemblyResolve event. You might have a lot more to do if your application has a complex plugin-loading mechanism, but if it's really simple, you can get it updated with minimal effort. Of course, you'll still have issues if the client DB makes a breaking change, but they'll be different issues.
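As a rough, minimal sketch (not SAP's guidance), a handler along these lines hooks AssemblyResolve and loads whichever client version sits in the hdbclient folder mentioned in the question; the class name, path, and registration point are illustrative assumptions you'd adapt to your setup:

using System;
using System.IO;
using System.Reflection;

static class HanaAssemblyResolver
{
    // Call this once, early in application startup, before any HANA type is touched.
    public static void Register()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);

            // Only intervene for the HANA client; let the default binder handle everything else.
            if (!requested.Name.Equals("SAP.DATA.HANA.v4.5", StringComparison.OrdinalIgnoreCase))
                return null;

            // Assumed install location, taken from the question; adjust per machine.
            var path = Path.Combine(@"C:\Program Files\sap\hdbclient\ado.net\v4.5",
                                    "SAP.DATA.HANA.v4.5.dll");

            return File.Exists(path) ? Assembly.LoadFrom(path) : null;
        };
    }
}

For a simple WPF application, calling HanaAssemblyResolver.Register() in the App constructor, before the first HANA call, should be early enough.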
Another approach is to edit your application's configuration file to redirect assembly versions yourself. Sometimes NuGet packages are even set up to insert these redirects into the application configuration file automatically. Unfortunately, this isn't an easy strategy with HANA given how frequently the versions change, and you need to know the exact version that's present to do it. So if you're deploying your application to multiple clients with different HANA versions, it could be a bit of a nightmare.
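For reference, a manual redirect in the application's App.config looks roughly like this; it's only a sketch, where the publicKeyToken is a placeholder to be copied from the installed SAP.DATA.HANA.v4.5 assembly and the version numbers must match what is actually deployed on the target machine:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- publicKeyToken is a placeholder; take the real value from the installed assembly -->
        <assemblyIdentity name="SAP.DATA.HANA.v4.5" publicKeyToken="PLACEHOLDER" culture="neutral" />
        <!-- redirect the compile-time version to whichever version the server has installed -->
        <bindingRedirect oldVersion="1.0.9.0" newVersion="1.0.120.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>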
The sad thing is that there's already a mechanism in place for the vendor who publishes a library to deal with exactly this sort of thing. It's called a publisher policy assembly, but SAP botched its delivery of the policy assembly by including (and embedding) the most useless binding redirect ever conceived.
Normally, the purpose of a binding redirect is to allow seamless upgrading of an assembly to a newer version as long as the types haven't changed their public interfaces significantly. All you do is specify, in the XML, the range of old versions that the current version can stand in for.
SAP, however, has this as their binding redirect element:
<bindingRedirect oldVersion="1.0.120.0-1.0.120.0" newVersion="1.0.120.0" />
This does you the wonderful favor of allowing only the currently installed version of their DB client (in this case, 1.0.120.0). And no, that DLL doesn't change its ADO.NET classes enough to warrant this sort of strict versioning. In my case, everything about the different assembly versions was exactly the same.
Had I the ability to get in touch with someone sane on the HANA team with enough clout to fix this, I'd suggest they set the oldVersion attribute properly to the oldest upgradable version and do the same for every subsequent driver package. So if you've installed 1.0.120.0, it'd look something more like this:
<bindingRedirect oldVersion="1.0.9.0-1.0.120.0" newVersion="1.0.120.0" />
And then it could be used by any software on the machine without extra effort. I mean, it's in the GAC. It wouldn't be that much effort for them to make this significantly easier. Although they'd need to patch all versions of the HANA client, as many applications (even those shipped by SAP) are tied to specific HANA versions. If they only fixed it in the latest version, we wouldn't see the benefits for several years.

Better documentation and instructions for Mirth upgrades?

Several days ago, Mirth Connect version 3.3.0 was released. Noting the great new features, we decided to upgrade immediately (just days after the initial release). We followed these Upgrade Guide instructions during the upgrade. However, the specifics of upgrading from 3.2 to 3.3 are missing from that Upgrade Guide, so we did not expect much to change in the way Mirth should be implemented...
During this process, we ran into a handful of issues that caused our production channels to go down for several hours (†).
It would have been really nice to have specific information for this upgrade. Some issues that would have been really useful to know beforehand (just examples, no need to actually answer):
Are you changing the default toString() method for objects/arrays to return JSON representations?
Does this upgrade include a DB migration, meaning we can't revert to the previous version once upgraded?
Because code templates are now children of "libraries", will we need to access the code template through the library, or will we be able to call it directly (as it was in 3.2)?
Solid documentation like this would have allowed us to understand the full gravity of what needs to be accounted for when upgrading. Typically, Mirth has some documentation for each minor release, but even then, the documentation is very terse. Would it be possible for the Mirth team to start being very explicit about what an upgrade entails?
The Rails Upgrade Guide (obviously a much larger team, so they can spend more bandwidth on this) provides a really great example of what an upgrade guide should contain.
† Yes, yes, I learned my lesson; I won't upgrade production immediately anymore.
The Release Notes page will give the lowest-level view of changes to the application, but you're right that better documentation is needed.

Jaspersoft: 4.2.1 upgrade creates issues with OLAP access grant schemas

We are in the process of developing all our domains, OLAP schemas, reports, etc., in preparation for a Q1 launch of Jasper replacing an older BI suite. We had been working in 4.1 and had a working environment with users that had JIProfileAttributes, passing these attributes in filters for both Domains and OLAP connections via access grants. This was all working correctly in 4.1, applying data security where necessary.
We recently upgraded the server to 4.2.1, as there were some additional features we wanted to take advantage of for our development, but it appears the upgrade broke the security for OLAP. None of the profile attributes are applying any filters within OLAP after the upgrade. They ARE still working with Domains; it is just the OLAP part that broke. Wondering if anyone else has had a similar issue with 4.2.1.
We have a ticket open with Jaspersoft support but have not gotten any feedback on it yet. Unfortunately this has stalled some of our development, as data security needs to be tested and this piece simply no longer works. I have tried redoing the upgrade to make sure it was done correctly, and also tried simply reloading the OLAP schema, connection and access grant, but it is still not working in 4.2.1.
Any feedback would be appreciated. At this point I'd settle for at least knowing it's a known issue and will be addressed ASAP. Luckily we are still in development, else this would have been a major issue for us. Thanks.
It's a known issue and will be addressed ASAP.
You should hear back directly from Jaspersoft Technical Support as well. I suppose they'll have more info about when a patch is expected.
I came across an issue recently with roles and permissions behaving very strangely. Eventually I found out the problem was down to the fact that I had two JasperReports Server instances running on my development PC, and that JasperReports Server actually stores information about access control lists (as well as other things) in cache files. One instance of JRS was incorrectly picking up the ACL cache of the other, causing all sorts of problems.
I found that bringing down each server, deleting the cache files, and then only running one server at a time (remembering to delete the files between bouncing) solved all issues.
I'm just thinking, reading your problem, that maybe you've installed the upgrade either over the top of the existing install or in a different directory, but it is picking up the old cache files from the previous install, causing these problems.
As I'm developing on Windows, I found the cache files under C:\Users\my.profile\AppData\Local\Temp\ehcache and C:\Users\my.profile\AppData\Local\Temp\ehcache-hibernate. I don't know where they may be stored on Linux/Unix, but I think the location comes from the Java system property java.io.tmpdir.
Hope this helps.