How to force Windows Indexing "activity" [closed] - windows-xp

The Windows Indexing Service pauses itself when it detects the "user is active." Is there a registry entry or something to make it continue indexing regardless of user activity?
Clarification: in Windows XP

Microsoft Indexing Service optimization is controlled by a set of registry entries under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ContentIndex.
Follow these steps to tune the Indexing Service so that it indexes documents immediately:
Open the Computer Management control panel.
Expand the "Services and Applications" item.
Stop Indexing Service (right click -> Stop).
Right click Indexing Service -> All Tasks -> Tune Performance.
Select the Customize radio button and click the "Customize" button.
Select "Instant" indexing performance.
Click OK, then OK again.
Start Indexing Service (right click -> Start).
Note that the Indexing Service will now index documents as soon as the file system notifies it of any changes. WARNING: This setting applies to all catalogs, and it could cause system slowdown due to the number of documents being indexed in the background.
The indexing settings can be tuned further by selecting other items in the Tune Performance dialog. Each of these tunings corresponds to a set of values written to the ContentIndex registry key, and these values can be tweaked to find the best performance balance.
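If you want to see exactly what the dialog changed, you can dump the key before and after applying a setting and diff the output. A minimal sketch, assuming a Node/TypeScript environment with reg.exe on PATH (the specific value names the dialog writes are not listed here, so the script simply prints whatever is under the key):

```typescript
// Dump the Indexing Service tuning values so they can be compared
// before and after switching to the "Instant" setting.
// Assumes Windows with reg.exe available; run with ts-node or compile with tsc.
import { execSync } from "child_process";

const KEY = "HKLM\\SYSTEM\\CurrentControlSet\\Control\\ContentIndex";

function dumpContentIndexKey(): string {
  // `reg query` lists every value stored under the key.
  return execSync(`reg query "${KEY}"`, { encoding: "utf8" });
}

console.log(dumpContentIndexKey());
```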

Right-click on the Index service icon in the system tray (the magnifying glass), and click "Index Now." I know it sounds like an action that will only happen once, but this is in fact a toggle that, when turned on, does exactly what you are asking.

Related

What causes cold start in serverless [closed]

I have read plenty of papers on serverless cold start, but I have not found a clear explanation of what actually causes it. Could you explain it from both the commercial and the open-source platforms' points of view?
Commercial platforms such as AWS Lambda or Azure Functions: I know they are more of a black box to us.
There are open-source platforms such as OpenFaaS, Knative, or OpenWhisk. Do those platforms also have a cold start issue?
My initial understanding of cold start latency is that it is the time spent spinning up a container. Once the container is up, it can be reused if it has not been killed yet, which gives a warm start. Is this understanding really true? I have tried running a container locally from an image, and no matter how large the image is, the latency is close to none.
Is image download time also part of cold start? But no matter how many cold starts happen on one node, only one image download is needed, so that alone does not seem to explain it.
Maybe a different question: I also wonder what happens when we instantiate a container from an image. Are the executable and its dependent libraries (e.g., Python libraries) copied from disk into memory during this stage? What if there are multiple containers based on the same image? I guess there would be multiple copies from disk into memory, because each container is an independent process.
There are many levels of "cold start", and they all add latency. The hottest of the hot paths is when the container is still running and additional requests can be routed to it. The coldest is a brand-new node, which has to pull the image, start the container, register with service discovery, wait for the serverless control plane's routing to update, and probably a few more steps if you dig deep enough. Some of those can happen in parallel, but most can't. If the pod was shut down because it wasn't being used and the next run is scheduled on the same machine, then yes, the kubelet usually skips pulling the image (unless imagePullPolicy: Always is forced somewhere), so you get a somewhat faster launch. Kubernetes' scheduler doesn't generally optimize for that, though.
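One way to observe the difference empirically is to time back-to-back requests from the client side: the first call after an idle period pays the cold-start cost, while the following calls usually hit a warm instance. A minimal sketch, assuming Node 18+ for the built-in fetch; the endpoint URL is a placeholder for your own deployed function:

```typescript
// Time successive invocations of a function endpoint to see
// cold vs. warm latency. FUNCTION_URL is a hypothetical placeholder.
const FUNCTION_URL = "https://example.com/my-function";

async function timeInvocation(label: string): Promise<void> {
  const start = Date.now();
  await fetch(FUNCTION_URL);
  console.log(`${label}: ${Date.now() - start} ms`);
}

async function main(): Promise<void> {
  await timeInvocation("call 1 (likely cold)");
  for (let i = 2; i <= 4; i++) {
    await timeInvocation(`call ${i} (likely warm)`);
  }
}

main().catch(console.error);
```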

Is there a way for an application or a system to update without shutting down? [closed]

I work in a hospital where the system shuts down while updating, leaving all orders hanging with no approvals or modifications possible. Considering it's a hospital, this is a huge problem. So my question is: how can we update the system without shutting it down? I'm most interested in rolling updates where there is no downtime.
This is a very broad question, but generally, yes, it is perfectly possible to update a system without shutting it down.
The simplest possible solution is to have a duplicate system. Let's say you are currently working with System A. When you want to do an update, you update System B. The update can take as long as it needs, since you are not using System B. There will be no impact at all.
Once the update is finished, you can test the hell out of System B to make sure the update didn't break anything. Again, this has no impact on working with the system. Only after you are satisfied that the update didn't break anything, do you switch over to using System B.
This switchover is near instantaneous.
If you discover later that there are problems with the update, you can still switch back to System A which is still running the old version.
For the next update, you again update the system which is currently not in use (in this case System A) and follow all the same steps.
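To make the switchover concrete, here is a minimal sketch of the idea: a tiny reverse proxy forwards all traffic to whichever system is currently "live", and flipping one variable moves new requests from System A to System B. This uses only Node's built-in http module; the ports and the /flip admin endpoint are assumptions for illustration, not a production design.

```typescript
// Blue/green switchover sketch: clients talk to this proxy, which forwards
// to whichever backend is live. Updating means patching and testing the
// idle backend, then flipping `live`. Ports and /flip are illustrative.
import * as http from "http";

const SYSTEM_A = { host: "127.0.0.1", port: 8081 }; // currently live
const SYSTEM_B = { host: "127.0.0.1", port: 8082 }; // being updated/tested

let live = SYSTEM_A;

const proxy = http.createServer((req, res) => {
  // Out-of-band switch: hit /flip once the idle system has been verified.
  if (req.url === "/flip") {
    live = live === SYSTEM_A ? SYSTEM_B : SYSTEM_A;
    res.end(`now routing to port ${live.port}\n`);
    return;
  }

  // Forward the request to the live backend and stream the response back.
  const upstream = http.request(
    { host: live.host, port: live.port, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  upstream.on("error", () => {
    res.writeHead(502);
    res.end("backend unavailable\n");
  });
  req.pipe(upstream);
});

proxy.listen(8080, () => console.log("proxy listening on 8080"));
```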
You can do the same if you have a backup system. Update the backup system, then fail over, then update the main system. Just be aware that while the update is happening, you do not have a backup system. So, if the main system crashes during the update process, you are in trouble. (Thankfully, this is not quite as bad as it sounds, because at least you will already have a qualified service engineer on the system who can immediately start working on either pushing the update forward to get the backup online or fixing the problem with the main system.)
The same applies when you have a redundant system. You can temporarily disable redundancy, then update the disabled system, flip over, do it again. Of course, just like in the last option, you are operating without a safety net while the update process is ongoing.
If your system is a cluster system, it's even easier. If you have enough resources, you can take one machine out of the cluster, update it, then add it back into the cluster again, then do the next machine, and so on. (This is called a "rolling update", and is how companies like Netflix, Google, Amazon, Microsoft, Salesforce, etc. are able to never have any downtime.)
If you don't have enough resources, you can add a machine to the cluster just for the update, and then you are back to the situation that you do have enough resources.
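A rolling update is essentially a loop over the cluster: take one node out of rotation, update it, verify it, put it back, and only then move on. The sketch below illustrates that loop; all four helper functions are hypothetical stand-ins for whatever your load balancer and update tooling actually expose.

```typescript
// Rolling update sketch: one node at a time leaves the rotation, gets
// updated and health-checked, then rejoins before the next node is touched.
// The helpers below are hypothetical placeholders, not a real API.

async function removeFromLoadBalancer(node: string): Promise<void> { /* placeholder */ }
async function applyUpdate(node: string): Promise<void> { /* placeholder */ }
async function healthCheck(node: string): Promise<boolean> { return true; }
async function addToLoadBalancer(node: string): Promise<void> { /* placeholder */ }

async function rollingUpdate(nodes: string[]): Promise<void> {
  for (const node of nodes) {
    await removeFromLoadBalancer(node); // the remaining nodes keep serving traffic
    await applyUpdate(node);
    if (!(await healthCheck(node))) {
      // Halt the rollout: the rest of the cluster is still on the old version.
      throw new Error(`update failed on ${node}; halting rollout`);
    }
    await addToLoadBalancer(node);
  }
}

rollingUpdate(["node-1", "node-2", "node-3"]).catch(console.error);
```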
Yes.
Every kind of component can be updated without a reboot.
On Windows you can always postpone reboots.

Check if value already exists while typing? [closed]

In Meteor, what is the most efficient way to check the database to see if something exists while the user is typing?
For example, I'm trying to check if the username exists in database while the user is typing his/her desired name to register an account.
I could create a keydown event to check on every keystroke, or I could use setInterval, but that feels like overkill.
Is there a built in method in Meteor to do something like this?
I didn't see anything like that, so you'll have to build it yourself.
Security
Showing which usernames are taken while the user is typing makes it very easy to retrieve a list of existing users. This could be okay if the user list is available to the public anyway (for example in a forum), but in most applications you should avoid it.
Waiting until the user stops typing
Users probably type faster than the service is able to check the database, so checking on every keystroke would cause a lot of unnecessary service calls. You should at least implement a delay, or wait until the field loses focus.
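A minimal sketch of that delay in TypeScript: a debounce wrapper only fires the server check once the user has paused typing. The `usernameExists` call is a hypothetical stand-in for whatever RPC your app exposes (in Meteor it might be a Meteor method backed by a server-side users lookup).

```typescript
// Debounced availability check: wait ~300 ms after the last keystroke
// before asking the server whether the username is taken.
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

async function usernameExists(name: string): Promise<boolean> {
  // Placeholder: e.g. a Meteor method that does a server-side lookup
  // of the username. Replace with your real server call.
  return false;
}

const checkUsername = debounce(async (name: string) => {
  if (name.length < 3) return; // skip very short inputs
  const taken = await usernameExists(name);
  console.log(taken ? `"${name}" is already taken` : `"${name}" is available`);
}, 300);

// Wire it to the input's event, for example:
// input.addEventListener("input", e => checkUsername((e.target as HTMLInputElement).value));
```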
Foreseeing the next character
You should try to minimize service calls. For example, if someone types "Mic", then besides checking the exact name, the response could also report that "Mick" and "Mic1" are already taken. A further optimization would be to predict more than one character based on common names, but that will probably never be needed.
Reusing Autocomplete Code
You could reuse some code from an autocomplete component, for example the logic that decides when to trigger a service call. Most of the code you can't reuse, though, because the user interface is very different.
You might find this smart package useful.
https://github.com/mizzao/meteor-autocomplete

Best database for a Statistics System [closed]

I need to build a statistics system but I don't know if MongoDB would be the best solution. The system needs to track a couple of things and then display the information. As an example of something similar: a site where every user who visits for the first time adds a row with information about him. The system needs to store the data as fast as possible and, for example, create a chart of the growth of users viewing the page with Google Chrome. Also, if a user visits again, a field in the user's existing row is updated (say a field called "Days").
The system needs to handle 200,000 new visits a day (new records), 20,000,000 repeat visits (updates) a day, and 800,000,000 DB records in total. It also needs to output the data fast, for example to create a chart of how many users visit each day from England, using Google Chrome, etc.
So what would be the best DB to handle this data? Would MongoDB handle this fine?
Thanks!
MongoDB allows atomic updates and scales very well; that's exactly what it's designed for. But keep two things in mind: watch the disk space, since it can run out very quickly, and if you need quick stats (like region coverage, traffic sources, etc.), you have to precompute them. The fastest way is to build a simple daemon that keeps all the numbers in memory and saves them hourly/daily.
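As a rough sketch of the atomic-update pattern with the official Node mongodb driver (connection string, database, collection, and field names are placeholders, not part of the original answer):

```typescript
// Per-visitor upsert: the first visit inserts the row, repeat visits
// just increment the "days" counter, all in one server-side operation.
import { MongoClient } from "mongodb";

async function recordVisit(userId: string, browser: string, country: string): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017");
  try {
    await client.connect();
    const visitors = client.db("stats").collection("visitors");

    await visitors.updateOne(
      { userId },                                          // one document per user
      {
        $setOnInsert: { browser, country, firstSeen: new Date() },
        $inc: { days: 1 },                                 // bumped on every visit
        $set: { lastSeen: new Date() },
      },
      { upsert: true }
    );
  } finally {
    await client.close();
  }
}

recordVisit("user-123", "Chrome", "England").catch(console.error);
```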
Redis is a very good choice for it, provided you have a lot of RAM or a strategy to shard the data over multiple nodes. It's good because:
it is in memory, so you can do real-time analytics (I think bit.ly's real-time stats use it); in fact, it was originally created for that.
it is very, very fast and can do hundreds of thousands of updates a second with ease.
it has atomic operations.
it has sorted sets, which are great for time series.
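A minimal sketch of the sorted-set idea, using the ioredis Node client; the key names and members are made up for illustration:

```typescript
// Daily counters as Redis sorted sets: one key per metric per day,
// member = category (browser, country, ...), score = count.
import Redis from "ioredis";

const redis = new Redis(); // connects to localhost:6379 by default

async function trackVisit(day: string, browser: string, country: string): Promise<void> {
  // ZINCRBY is atomic, so concurrent visits never lose counts.
  await redis.zincrby(`visits:browser:${day}`, 1, browser);
  await redis.zincrby(`visits:country:${day}`, 1, country);
}

async function topCountries(day: string): Promise<string[]> {
  // Highest-scoring members first, with their scores.
  return redis.zrevrange(`visits:country:${day}`, 0, 9, "WITHSCORES");
}

async function main(): Promise<void> {
  await trackVisit("2012-06-01", "Chrome", "England");
  console.log(await topCountries("2012-06-01"));
  redis.disconnect();
}

main().catch(console.error);
```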
RDM Workgroup is a database management system for desktop and server environments and allows in-memory speed as well.
You can also use its persistence feature, where you manage data in memory and then transfer it to disk when the application shuts down, so there is no data loss.
It is based on the network model with an intuitive interface, so its scalability is top-notch and it will be able to handle the large load of new visitors you are expecting.

Version Control for Virtual Appliances [closed]

My understanding of a virtual appliance is 1+ pre-configured VM(s) designed to work with one another and each with a pre-configured:
Virtual hardware configuration (disks, RAM, CPUs, etc.)
Guest OS
Installed & configured software stack
Is this (essentially) the gist of what an appliance is? If not please correct me and clarify!
Assuming that my understanding is correct, it raises the question: what are the best ways to back up an appliance? Obviously an SCM like SVN would not be appropriate, because an appliance isn't source code; it's an enormous binary file representing an entire machine or even a set of machines.
So how does SO keep "backups" of appliances? How does SO imitate version control for appliance configurations?
I'm using VBox so I'll use that in the next example, but this is really a generic virtualization question.
If I develop/configure an appliance and label it as the "1.0" version, and deploy that appliance to a production server running the VBox hypervisor, then I'll use software terms and call that a "release". What happens if I find a configuration issue with the guest OS of that appliance and need to release a 1.0.1 patch?
Thanks in advance!
From what I've seen and used, appliances are released with the ability to restore their default VM, probably from a ghost partition of some kind (I'm thinking about Comrex radio STL units I've worked with). Patches can be applied to the appliance, with the latest patch usually containing all the previous patches (if needed).
A new VM means a new appliance - Comrex ACCESS 2.0 or whatever - and 1.0 patches don't work on it. It's never backed up; rather, it can just be restored to a factory state. The Comrex units store connection settings, static IP configuration, all that junk, but resetting kills all of that and it has to be re-entered (which I've had to do before).