Multiple users working remotely on Tally ERP9

I am not sure whether this is the right forum to ask this question.
We have a Tally ERP9 server with multiple licenses. Three of our users now work remotely on the same data. We have set up Google Drive for data syncing, but most of the time it causes problems because of the synchronisation process.
What would be the best solution so that multiple users can work on the same data from remote locations?

This is the answer: http://mirror.tallysolutions.com/Downloads/TallyTips/GettingStartedwithDataSynchronisation.pdf
Thanks to @MitaleeRao. The relevant points are summarised here for brevity.
These are two points I've noted regarding the Tally architecture:
The database is a flat file in a tree structure, and there are numerous checkpoints at each level for maintaining this inheritance (e.g., a voucher has inventory entries, which have stock items, which have units, etc.).
The SOAP XML protocol that Tally uses does not have multi-threading capabilities - i.e., the Tally server will only accept one request and give a response at a time.
The data synchronisation that Tally has introduced probably automates exporting the XML of all masters/vouchers and importing them into the central database (whether on the Tally.NET server or on a local computer with a static IP). I'm not sure how the Google Drive client works, but I assume it is a variation of the same (i.e., an XML-based data export followed by an import onto a main computer).
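For a concrete picture of what such an export/import round trip could look like, here is a minimal Python sketch, assuming Tally's HTTP/XML interface is reachable on its default port 9000. The server addresses, report name, and envelope details are placeholders, not something taken from the linked guide, and should be checked against Tally's integration documentation.

# Hypothetical sketch: pull an XML export from a branch Tally instance and post
# it to a central one over Tally's HTTP/XML interface. The envelope contents
# (request type, report name) are assumptions -- verify them against Tally's docs.
import requests  # pip install requests

SOURCE_TALLY = "http://192.168.1.10:9000"   # placeholder address of a branch Tally server
CENTRAL_TALLY = "http://203.0.113.5:9000"   # placeholder address of the central Tally server

EXPORT_REQUEST = """<ENVELOPE>
  <HEADER>
    <TALLYREQUEST>Export Data</TALLYREQUEST>
  </HEADER>
  <BODY>
    <EXPORTDATA>
      <REQUESTDESC>
        <REPORTNAME>Day Book</REPORTNAME>
      </REQUESTDESC>
    </EXPORTDATA>
  </BODY>
</ENVELOPE>"""

def sync_once() -> None:
    """Export vouchers from the source instance and send them to the central one."""
    # Tally handles one request at a time, so the calls are deliberately sequential.
    exported_xml = requests.post(SOURCE_TALLY, data=EXPORT_REQUEST, timeout=60).text
    # Simplification: the exported XML may need reshaping into an import envelope
    # before the central instance will accept it.
    response = requests.post(CENTRAL_TALLY, data=exported_xml, timeout=60)
    print(response.status_code, response.text[:200])

if __name__ == "__main__":
    sync_once()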

Related

Working with multiple data warehouses in dbt

I'm building an application where each of our clients needs their own data warehouse (for security, compliance, and maintainability reasons). For each client we pull in data from multiple third party integrations and then merge them into a unified view, which we use to perform analytics and report metrics for the data across those integrations. These transformations and all relevant schemas are the same for all clients. We would need this to scale to 1000s of clients.
From what I gather dbt is designed so each project corresponds with one warehouse. I see two options:
Use one project and create a separate environment target for each client (and maybe a single dev environment). Given that environments aren't designed for this, are there any catches to this? Will scheduling, orchestrating, or querying the outputs be painful or unscalable for some reason?
profiles.yml:
example_project:
  target: dev
  outputs:
    dev:
      type: redshift
      ...
    client_1:
      type: redshift
      ...
    client_2:
      type: redshift
      ...
    ...
Create multiple projects, and create a shared dbt package containing most of the logic. This seems very unwieldy, since it requires maintaining a separate repo for each client, and it is less developer friendly.
profiles.yml:
client_1_project:
  target: dev
  outputs:
    client_1:
      type: redshift
      ...
client_2_project:
  target: dev
  outputs:
    client_2:
      type: redshift
      ...
Thoughts?
I think you captured both options.
If you have a single database connection, and your client data is logically separated in that connection, I would definitely pick #2 (one package, many client projects) over #1. Some reasons:
Selecting data from a different source (within a single connection) depending on the target is a bit hacky, and wouldn't scale well to 1000s of clients.
The developer experience for packages isn't so bad. You will want a developer data source, but depending on your business you could maybe get away with using one client's data (or an anonymized version of that). It will be good to keep this developer environment logically separate from any individual client's implementation, and packages allow you to do that.
I would consider generating the client projects programmatically, probably using a Python CLI to set up, dbt run, and tear down the required files for each client project (I'm assuming you're not going to use dbt Cloud and have another orchestrator or compute environment that you control). It's easy to write YAML from Python with pyyaml (each file is just a dict), and your individual projects probably only need separate profiles.yml, sources.yml, and (maybe) dbt_project.yml files. I wouldn't check these generated files for each client into source control -- just check in the script and generate the files you need with each invocation of dbt.
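As a rough illustration of that generation step, here is a minimal Python sketch using pyyaml and the dbt CLI. The client names, Redshift connection details, and file layout are placeholders rather than anything from a real setup, and each generated dbt_project.yml would need to reference the matching profile name.

# Hypothetical sketch: write a throwaway profiles.yml per client and run dbt with it.
import subprocess
import tempfile
from pathlib import Path

import yaml  # pip install pyyaml

CLIENTS = ["client_1", "client_2"]  # placeholder; would come from your own client registry

def build_profile(client: str) -> dict:
    """Return a dbt profiles.yml structure for a single client."""
    return {
        f"{client}_project": {  # must match the `profile:` in that client's dbt_project.yml
            "target": client,
            "outputs": {
                client: {
                    "type": "redshift",
                    "host": f"{client}.example.redshift.amazonaws.com",  # placeholder host
                    "user": "dbt",
                    "password": "{{ env_var('DBT_PASSWORD') }}",  # keep secrets out of the file
                    "port": 5439,
                    "dbname": "analytics",
                    "schema": client,
                    "threads": 4,
                }
            },
        }
    }

def run_for_client(client: str) -> None:
    """Generate the profile, invoke dbt, and let the temp dir clean itself up."""
    with tempfile.TemporaryDirectory() as profiles_dir:
        (Path(profiles_dir) / "profiles.yml").write_text(yaml.safe_dump(build_profile(client)))
        subprocess.run(
            ["dbt", "run", "--profiles-dir", profiles_dir, "--target", client],
            check=True,
        )

if __name__ == "__main__":
    for client in CLIENTS:
        run_for_client(client)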
On the other hand, if your clients each have their own physical database with separate connections and credentials, and those databases are absolutely identical, you could get away with #1 (one project, many profiles). The "hardest" parts of that approach would likely be managing secrets and generating/maintaining a list of targets that you could iterate over (ideally in a parallel fashion).

PowerApps datasource to overcome 500 visible or searchable items limit

For PowerApps, what data sources, other than SharePoint lists, are accessible via PowerShell?
There are actually two issues that I am dealing with. The first is dynamic updating, and the second is the 500-item limit that SharePoint lists are subject to.
I need to dynamically update my data source, which I am currently doing with PowerShell. My data source is not static, and updating records by hand is time-consuming and error prone. The driving force behind my question is that while the SharePoint list view threshold is 5,000 records, you are limited to 500 visible and searchable records when using SharePoint lists in the gallery view, and my data source contains more than 500 but fewer than 1,000 records. Any items beyond the 500th record that should match the filter criteria will not be found. So SharePoint lists are not an option for me until that limitation is remedied.
Reference: https://powerapps.microsoft.com/en-us/tutorials/function-filter-lookup/
To your first question, PowerShell can be used for almost anything on the Microsoft stack. You could use SQL Server, Dynamics 365, SharePoint, Azure, and in the future there will be an SDK for the Common Data Service. There are a lot of connectors, and PowerShell can work with a good majority of them.
Note that working with these data structures through PowerShell is independent of PowerApps; PowerApps just takes the data that the data connector gives it. If you have something updating the data in the background (PowerShell, a cron job, etc.), you can use a Timer control and a Refresh function on your data source to update the list every ~5-20 seconds and keep it dynamic.
To your second question about SharePoint, there is an article that came out around the time you asked this about working with large lists. I wouldn't say it completely solves your problem, but it suggests that using the "Filter" function on basic column types may work for you:
...if you’d like to filter the set of items that you are showing in the gallery control, you will make use of a “Filter” expression, rather than the “Search” expression, which is the default that existing apps used. With our changes, SharePoint connector now supports “equals” type of queries on columns that support filtering (Single line of text, choice, numbers, dates and people), so make sure that the columns and the expressions you use are supported and watch for the same warning to avoid reverting back to the top 500 items.
It also notes that if you want to pull from a list larger than the 5k threshold, you would need to use indexes. I have not fully tested this yet, but it seems that it could potentially solve your problem.

Billing by tag in Google Compute Engine

Google Compute Engine allows for a daily export of a project's itemized bill to a storage bucket (.csv or .json). In the daily file I can see X-number of seconds of N1-Highmem-8 VM usage. Is there a mechanism for further identifying costs, such as per tag or instance group, when a project has many of the same resource type deployed for different functional operations?
As an example, say ten N1-Highmem-8 VMs are deployed to a region in a project. In the daily bill they just display as X seconds of N1-Highmem-8.
Functionally:
2 VMs might run a database 24x7
3 VMs might run a batch analytics operation averaging 2-5 hrs each night
5 VMs might perform a batch operation that runs in sporadic 10-minute intervals throughout the day
The final operation writes data to a specific GCS bucket; the other operations read/write to different buckets.
How might costs be broken out across these four operations each day?
The usage logs do not provide per-tag granularity at this time, and they can be a little tricky to work with, but here is what I recommend to break them down and get better information out of them.
Your usage logs provide the following fields:
Report Date
MeasurementId
Quantity
Unit
Resource URI
ResourceId
Location
If you look at the MeasurementId, you can filter by the type of image you want to verify. For example, VmimageN1Standard_1 represents an n1-standard-1 machine type.
You can then use the MeasurementId in combination with the Resource URI to find out what your usage is on a more granular (per-instance) scale. For example, the Resource URI for my test machine would be:
https://www.googleapis.com/compute/v1/projects/MY_PROJECT/zones/ZONE/instances/boyan-test-instance
*Note: I've replaced "MY_PROJECT" and "ZONE" here; those, along with the name of the instance, would be specific to your output.
If you look at the end of the URI, you can clearly see which instance that is for. You could then use this to look for a specific instance you're checking.
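As one way to automate that idea, here is a small Python sketch that totals usage per instance from a daily usage export. The file name and exact header spellings are assumptions based on the field list above, so adjust them to match your actual CSV.

# Hypothetical sketch: sum usage quantities per instance from a GCE usage export.
import csv
from collections import defaultdict

USAGE_FILE = "usage_gce.csv"          # placeholder name for the daily usage export
MEASUREMENT = "VmimageN1Standard_1"   # machine type to filter on

def usage_by_instance(path: str, measurement: str) -> dict:
    """Group the Quantity column by instance name, filtered to one MeasurementId."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["MeasurementId"].endswith(measurement):
                # The instance name is the last path segment of the Resource URI.
                instance = row["Resource URI"].rsplit("/", 1)[-1]
                totals[instance] += float(row["Quantity"])
    return dict(totals)

if __name__ == "__main__":
    for instance, quantity in usage_by_instance(USAGE_FILE, MEASUREMENT).items():
        print(f"{instance}: {quantity}")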
If you are better skilled with Excel or other spreadsheet/analysis software, you may be able to do even better; this is just one idea of how you could use the logs. At that point it becomes somewhat a question of creativity, and I am sure you can find good ways to work with the data you get from an export.
9/2017 update.
It is now possible to add user-defined labels and then track usage and billing by these labels for Compute Engine and GCS.
Additionally, by enabling the billing export to BigQuery, it is possible to create custom views or query BigQuery from a tool friendlier to finance people, such as Google Docs, Data Studio, or anything that can connect to BigQuery. Here is a great example of labels across multiple projects being used to split costs into something friendlier to organizations, in this case a Data Studio report.
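For instance, once the export lands in BigQuery, a query along these lines can total cost per label value. This is only a sketch: the project, dataset, table name, and label key are placeholders, and the column names should be checked against your own export's schema.

# Hypothetical sketch: total cost per label value from a billing export table.
from google.cloud import bigquery  # pip install google-cloud-bigquery

QUERY = """
SELECT l.value AS label_value, SUM(cost) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX` AS b,  -- placeholder table
     UNNEST(b.labels) AS l
WHERE l.key = 'operation'  -- placeholder label key used to tag the VMs
GROUP BY label_value
ORDER BY total_cost DESC
"""

def main() -> None:
    client = bigquery.Client()  # uses your default GCP credentials
    for row in client.query(QUERY).result():
        print(f"{row.label_value}: {row.total_cost:.2f}")

if __name__ == "__main__":
    main()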

Split MS Access DB Needs Compact/Repair as well as Re Link on Front and Backend, Why?

I have an ACCDB that I split a while ago. It contains many forms with subforms (based on tables), over two hundred tables in the BE (almost all are small lookup tables for vehicle objects), and 400+ queries. There is also another ACCDB, with a single table of 6.5M rows of basic history info, that the FE links to. The two back ends do not link to each other in any way. The FE is 14MB, the BE is 1.2GB, and the single-table DB is 900MB, all with primary keys and indexes set up appropriately. The DB is 100% normalized. Both BEs grow 5% every month. The DB is currently slated to be migrated to an Oracle 11g environment later this year.
Question:
I found out recently that if I compact and repair the back end or front end, none of the forms containing subforms open; the whole FE just freezes to white. Even if all 3 files are repaired, I still have issues. BUT if I compact/repair all 3 and also relink the entire front end to the two back ends, the forms all of a sudden start working. It was only recently that this behavior began.
Why do I have to relink to make the forms work again?
You should not have to re-link anything here at all after a C+R.
The only thing that comes to mind is that the user doing the C+R has restricted rights in the folder or directory where the C+R occurs.
Remember, when the user does the C+R, a COPY of the file is created, and so the NEW file can inherit the CURRENT user's rights when it is created. So it sounds like some permissions issue exists on the folder, or the user doing the C+R has some special (different) rights (perhaps inherited rights due to membership in some security group).
Of course one should ensure that you are using UNC path names, and of course the front end needs to be placed on each machine.
Perhaps the user doing the C+R has "different" drive mappings, and thus the links to the back-end databases are wrong because of a different drive letter. So if you are not doing so already, as a general rule I would STRONGLY avoid drive letters and use UNC path names.
If you are already using UNC path names, then the likely issue is permissions.
There is also a possibility that the user doing the C+R is running the front end from a non-trusted location.
Also, the table of 6.5 million rows seems a bit large, and I assume the 1.2 GB size is RIGHT AFTER a C+R? (But that issue is for another post.)
This suggests a drive mapping issue, a permissions issue, or perhaps the user launching the application is messing up references. I would Shift-bypass into the application and ensure that the user doing the C+R can compile the application, and from the VBA editor take CAREFUL note that, say, Office 14 references are not being hijacked to Office 15 references.
You're reaching the "hassle-free" viable (as opposed to "documented") limits of Access as a database. Remember, the queries need to be compiled, which means resolving all the table links and verifying existing indexes and other metadata. It's possible that simply overwriting this information by manually using the Linked Table Manager, as you have, may be more efficient.
Here are a few prescribed tips which might help you out:
http://office.microsoft.com/en-gb/access-help/improve-performance-of-an-access-database-HP005187453.aspx
And some more...
http://www.fmsinc.com/MicrosoftAccess/Performance.html#Linked%20Tables
And a related thread from this site:
Proper way to program a Microsoft Access Backend Database in a Multiuser Environment
Issues which may not be helping you:
queries which don't restrict the dataset sufficiently, particularly those that return a dynaset
back-end database files sitting too low in the Windows folder structure (the higher the better)
As the 2nd link suggests, the truth is there are so many variables at work that resolving this will require some tinkering, with trial & error playing a major part.
All that, or you can upsize to SQL Server Express :)
http://office.microsoft.com/en-gb/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx

SSRS won't allow parameters in embedded dataset based on data source

Whenever I construct a report that uses an embedded dataset and try to use a parameter (such as @StartDate and @EndDate), I receive an error stating that I must declare the scalar variable. However, this error only comes up if I set a data source that uses the "credentials stored securely in the report server" option. If I set the data source to use "Windows integrated security," I do not receive the error.
I am at a complete loss. These reports need to be accessed by a large number of people. We have granted them "Browser" privileges through an Active Directory group in SSRS, including on the data sources.
What is the best way to proceed? Is there an easy fix?
I generally deploy with the option already set, by going into the data source, choosing the 'Log on to SQL Server' section > 'Use SQL Server Authentication', and setting the user and settings there. When you use a Windows user as your main user after you deploy, there can be issues.
The other question would be: does this work correctly at all times in Business Intelligence Development Studio (BIDS) and just not on the server? It is very interesting that a permission issue alone would cause a scalar error to return. Generally, when users have to get to the report, not storing the credentials merely prompts them for credentials; they may still get the error. It would also help to know the datasets and what they are returning or are supposed to return. A Start and End parameter are typically defined as 'DateTime' in SSRS and used in a predicate like 'WHERE thing BETWEEN @Start AND @End', with the dates chosen from a calendar by the user. If you are binding them to other datasets and there is the possibility of a user selecting multiple values, that could present an issue.
I took a look at the data source that had been set up by our DBA. It was set up as an ODBC connection. I changed it to Microsoft SQL Server, and it works now. I do not understand why, and I would appreciate it if a more seasoned individual could explain.