Difference between Word.CustomXmlPart and Office.CustomXmlParts

Word.CustomXmlPart is still in preview and not recommended for use (link), but I can use Office.CustomXmlParts to store any data (link). I can't find any documentation on the difference between these two. What, basically, is the difference between them, and which should I use?

They do essentially the same things. The Word-specific APIs perform much better because they use batching of commands, whereas the Common APIs make a round trip between the add-in and the document with every command. For that reason, we recommend that when the Word-specific APIs are released, you use them. Until then, for production add-ins, use the Common APIs (Office.CustomXmlParts).

Related

Is it possible to have nested applications in the single-spa framework?

What I am looking at is having an Angular microfrontend inside a React microfrontend. Is this something we can achieve with single-spa?
Yes, this can be done two ways. It depends on the frameworks being used by your applications.
Option 1: Cross microfrontend imports
See the Single-spa documentation on cross microfrontend imports here.
This option is ideal if your applications use the same framework, and it has the simplicity of normal import statements.
Option 2: Single-spa Parcels
See the Single-spa documentation on Parcels here. This option is ideal if you need cross-framework support, but Parcels are harder to use and understand, so we generally don't recommend them unless you're sure they're needed.
(As an aside, many people think they need to embed one microfrontend in another, but this isn't always true, and you might be able to solve your requirement in a different way; it depends on your use case.)

REST APIs in Go - using net/http vs. a library like Gorilla

I see that Go itself has a package, net/http, which is adequate for providing everything you need to get your own REST APIs up and running. However, there are a variety of frameworks, perhaps the most popular being Gorilla.
Considering that one of the main things I need to do going forward is build REST APIs that will access some back-end storage (databases, caches, etc.) to perform CRUD operations, is it good to go with Go's standard library itself, or should I consider using a framework?
Normally, people write a new library or framework to solve a problem present in an existing library. But a lot of frameworks also tend to make things worse when the actual demands are simple.
So I have a few questions:
Is Go's standard library good enough to support basic to moderate REST functionality?
If I do end up using the built-in library and tomorrow have to change to some framework (like Gorilla), how difficult/costly would that be?
Are frameworks really addressing the problems, or just making simple problems complex?
I would be extremely grateful if someone who has been through making this choice themselves would share their thoughts here while I research more on my own.
The net/http package is probably sufficient for most scenarios, but if you want to ease your development, you should use a third-party package, such as Gorilla.
For example, net/http's ServeMux does a great job of routing incoming requests for fixed URL paths, but for pretty paths that use variables you would have to implement a custom multiplexer, whereas with Gorilla you get this for free.
Another example: if you want to specify RESTful resources with proper HTTP methods, it is hard to work with the standard http.ServeMux, while with Gorilla's mux package, requests can be matched based on URL host, path, path prefix, schemes, header and query values, and HTTP methods.
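To make that concrete, here is a minimal sketch (the route and handler names are invented for illustration) of a path variable combined with method matching in gorilla/mux:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/mux"
)

// getBook is a hypothetical handler; mux.Vars extracts the {id}
// path variable from the matched route.
func getBook(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "book %s\n", mux.Vars(r)["id"])
}

func main() {
	r := mux.NewRouter()
	// One registration gives us both a path variable and an
	// HTTP-method restriction - the standard ServeMux would
	// need hand-written matching code for this.
	r.HandleFunc("/books/{id}", getBook).Methods("GET")
	http.ListenAndServe(":8080", r)
}
```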
One of the great benefits of Gorilla is that it is fully compatible with the net/http package and can be substituted in the future.
I totally encourage you to use Gorilla's toolkit to develop REST services.
The built-in net/http package is sufficient to build a complete REST API. However, some of the libraries can make building an API slightly easier, particularly if the REST API is complex. Changing from the built-in facilities to any decent framework is relatively straightforward - they generally accept handlers of the http.Handler type.
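As a rough sketch of why the switch is cheap (the route and handler here are hypothetical): both the standard ServeMux and Gorilla's Router implement http.Handler, so the rest of the server never notices which one it is talking to.

```go
package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	// A single handler, registered identically on both routers.
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})

	std := http.NewServeMux() // standard library
	std.Handle("/hello", hello)

	gor := mux.NewRouter() // Gorilla - same registration call
	gor.Handle("/hello", hello)

	// Both satisfy http.Handler, so swapping routers later is a
	// one-line change at the server root.
	var root http.Handler = std
	root = gor
	http.ListenAndServe(":8080", root)
}
```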
In the end, though, this is an extremely situational choice. The best thing you can do is examine each available solution, contrast and compare, and build a proof of concept with the top options if you possibly can. First-hand experience will guide you best.

Mimic SAP Transaction in RFC

How would one go about creating an SAP RFC that runs a transaction with parameters and returns its data?
I have seen someone use a PERFORM BDC_DYNPRO, and when I run the code through the debugger it seems to run the actual transaction screens. How do you go about setting this up?
There are plenty of RFCs in SAP systems that do exactly that - they're called BAPI functions. Filling their parameters can be tricky sometimes, and the documentation for some of them is not really helpful. Take a look at transaction BAPI to see a list.
You can also create documents in transactions through code using IDocs, which should be sent using the built-in IDoc RFCs.
BDCs are not really recommended for what you're trying to achieve, as they simulate the screen flow inside the system, and that can consume a lot of resources for some simple tasks (like adding a new item to a document). BDCs also depend on positional references, and that can be a pain to implement and maintain. BAPIs are always preferred over BDCs; however, in some cases there is no BAPI for a transaction, and there's no other solution than using BDCs.
Finally, as I said, some BAPIs can be really tricky to implement, so an RFC "wrapper" could be a way of simplifying the integration process.

Communication between applications written in different languages

I am looking at linking a few applications together (all written in different languages like C#, C++, Python) and I am not sure how to go about it.
What do I mean by linking? The system I am working on consists of small programs, each responsible for a particular processing task. I need to be able to transfer a data set from one application to another easily (the data set in question is not huge, probably a few megabytes), and I also need some way to control the current state of the operation (this is where a client-server model rings a bell).
It seems like sockets or maybe SOAP would be a universal solution, but I just wanted to get some opinions on what people think about this subject.
Comments/suggestions will be appreciated, thanks!
I'm personally partial to ØMQ. It's a library with a familiar BSD-sockets-like interface for passing messages, but you'll find it implements interesting patterns for distributing tasks.
It sounds like you want to arrange several processes in a pipeline. ØMQ allows you to do that using push and pull sockets. (And afterwards, you'll find it's even possible to scale up across multiple processes and machines with little effort.) Take a look at the guide to get started, and at the zmq_socket(3) manpage specifically for how push and pull work.
Bindings are available for all the languages you mention.
As for the contents of the messages, ØMQ doesn't concern itself with that; they are just blocks of raw data. You can use any format that suits you, such as JSON, or perhaps Protocol Buffers.
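As a rough sketch of the pipeline idea - written here in Go with the pebbe/zmq4 binding (the Task type, port, and field names are invented for illustration; the C#, C++, and Python bindings look very similar):

```go
package main

import (
	"encoding/json"
	"fmt"

	zmq "github.com/pebbe/zmq4"
)

// Task is a hypothetical unit of work, serialized as JSON so
// every language in the pipeline can read it.
type Task struct {
	ID      int    `json:"id"`
	Payload string `json:"payload"`
}

func main() {
	// Upstream stage: a PUSH socket hands tasks out.
	sender, _ := zmq.NewSocket(zmq.PUSH) // error handling elided for brevity
	defer sender.Close()
	sender.Bind("tcp://*:5557")

	// Downstream stage (normally a separate process): a PULL socket receives them.
	receiver, _ := zmq.NewSocket(zmq.PULL)
	defer receiver.Close()
	receiver.Connect("tcp://localhost:5557")

	msg, _ := json.Marshal(Task{ID: 1, Payload: "a few megabytes fit fine"})
	sender.SendBytes(msg, 0)

	raw, _ := receiver.RecvBytes(0)
	var task Task
	json.Unmarshal(raw, &task)
	fmt.Printf("received task %d\n", task.ID)
}
```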
What I'm not sure about is the ‘controlling state’ you mention. Are you interested in, for example, cancelling a job halfway through?
For C# to C# you can use Windows Communication Foundation. You may be able to use it with Python and C++ as well.
You may also want to check out named pipes.
I would think about moving to a model where you eliminate the issue by having centralized data that all of the applications look at. Keep "one source of the truth" so to speak.
Most outside software has trouble linking against C++ code due to the name-mangling algorithm it uses for its symbols. For that reason, when interfacing with programs written in other languages, it is often best to declare wrappers as extern "C" or put them inside an extern "C" { ... } block.
I need to be able to transfer a data set from one application to another easily (the data set in question is not huge, probably a few megabytes)
Use the file system.
and I also need some form of way to control the current state of the operation
Again, use the file system. A "current_state.json" file with a JSON-serialized object is perfect for multiple languages to work with.
It seems like sockets or maybe SOAP would be a universal solution.
Perhaps. But it's overkill for this kind of thing. Your OS already has all the facilities you need. Just use the file system. It's very simple and very reliable.
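For instance, a minimal sketch in Go (the State fields are invented for illustration; every other language involved reads the same file with its own JSON library):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// State is a hypothetical shape for the shared state file.
type State struct {
	Stage    string `json:"stage"`
	Progress int    `json:"progress"`
}

func main() {
	// Writer side: serialize the current state to the shared file.
	// (In production, write to a temp file and rename it so
	// readers never see a half-written file.)
	data, _ := json.Marshal(State{Stage: "processing", Progress: 40})
	os.WriteFile("current_state.json", data, 0o644)

	// Reader side (typically another process): load and inspect it.
	raw, _ := os.ReadFile("current_state.json")
	var s State
	json.Unmarshal(raw, &s)
	fmt.Printf("stage=%s progress=%d%%\n", s.Stage, s.Progress)
}
```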
There are many ways to do interprocess communication. As you said, sockets may be a universal solution. SOAP, I think, is overkill. You may also use mailslots; I wrote a C++ application using them a couple of years ago. Named pipes could also be a solution, but if you are coding on Windows, it may be difficult.
In my opinion, sockets and mailslots are the best candidates.

What level of complexity requires a framework?

At what level of complexity is it mandatory to switch to an existing framework for web development?
What measurement of complexity is practical for web development? Code length? Feature list? Database Size?
If you work on several different sites then by using a common framework across all of them you can spend time working on the code rather than trying to remember what is located where and why.
I'd always use a framework of some sort, even if it's your own, as the uniformity will help you structure your project - unless it's a one-page static HTML project.
There is no mandatory limit, however.
I don't think there is a level of complexity that necessitates a framework. For me, whenever I am writing a dynamic site, I immediately consider a framework, and if it will save me time, I use it (it almost always does, and I almost always do).
Consider that the question may be faulty. Many of the most complex websites don't use any popular, preexisting framework. Google has its own web server and its own custom way of doing things, as does Amazon, and probably lots of other sites.
If a framework makes your task easier or provides added value, go for it. However, when you adopt that framework you are tied to a new dependency. I'm essentially starting to recreate a Joel on Software post, so I will redirect you here for more on adding unneeded dependencies to your code:
http://www.joelonsoftware.com/articles/fog0000000007.html
All factors matter. You should measure how much time you can save by using a third-party framework and compare it to the risks of using someone else's code.
Never "mandatory." Some problems are not well solved by any framework. It would be suggestible to switch to a framework when most of the code you are implementing has already be implemented by the framework in question in a way that suits your particular application. This saves you time, energy, and will most likely be more stable than the fresh code you would have written.
This is really two questions, you realize. :-) The answer to the first one is that it's never mandatory, but honestly, parsing HTTP request parameters directly is pretty horrible right from the start. I don't want to do it even once, so I tend to reach for a framework relatively early on.
As far as what measurement is practical, well, what are you worried about? All of the descriptions that you list have value. Database size matters primarily for scaling, in my opinion (you can write a very simple app if you have a very simple schema, even if there are hundreds of thousands of rows in the database). The feature list will probably determine the number and complexity of UI pages, which will in turn help to dictate the code length.
There are frameworks for getting moving very quickly with a simple blog (Django or RoR), all the way up to enterprise full-stack applications (Zope). And so as not to be tied only to the buzzwords, you also have ASP.NET, J2EE, etc.
All frameworks and libraries are tools at your disposal. Determine which ones will make your life easier for your given project and use them.
I would say the reverse is true. At some point, your project gets so expansive that you actually get slowed down by the shortcomings of the framework. For sufficiently large projects you may, in fact, be better off developing your own framework to meet your own needs. I have seen many cases where people were held back in the decisions they could make, or the work they could produce, because they were trying to do something the framework didn't anticipate - and doing things the framework doesn't anticipate can be very troublesome. The nice thing about making your own framework is that it can evolve with your project, to be a help to your system instead of a hindrance.
So, to conclude: small projects should use existing frameworks; large projects should contain their own framework.