Why does the Kubernetes revisionHistoryLimit default to 2?

Kubernetes' .spec.revisionHistoryLimit is used to keep track of changes to the Deployment definition. Since these definitions are small YAML files, it doesn't seem like much (given the typical conditions of a modern cloud server) to keep hundreds or more of these definitions around if necessary.
According to the documentation, it is set to 2 (instead of unlimited). Why is this?

Keeping it unlimited would eventually clog up etcd, and as far as I understand, etcd isn't designed for big-data usage. The Kubernetes control plane is also syncing and downloading this data regularly, which would mean a lot of unnecessary data to process. Just imagine a service with daily deployments that runs for over a year.
Setting this to a high-ish number like 100 also seems arbitrary to me. Why not 1000 or 9001? Besides that, I cannot imagine anyone wanting to roll back something a hundred versions.
Anyway, we are only talking about a default setting, so you can set it to a very high number if your use case requires it.
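If you do need a longer history, it is just one field on the Deployment spec. As a rough sketch (the deployment name and namespace below are placeholders), here is how you might raise it with the official Kubernetes Python client:

from kubernetes import client, config

# Assumes a working kubeconfig; adjust name/namespace to your Deployment.
config.load_kube_config()
apps = client.AppsV1Api()

# Keep 25 old ReplicaSets instead of the default, so older revisions stay
# available for "kubectl rollout undo".
patch = {"spec": {"revisionHistoryLimit": 25}}
apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)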

Related

What impact does increasing the number of days historical events are stored have on overall Tableau Server performance

I'm looking at increasing the number of days that historical events are stored in the Tableau Server database from the default 183 to 365+ days, and I'm trying to understand what the performance impact on Tableau Server itself would be, since the database and backup sizes also start increasing. Would it cause Tableau Server (running 2019.1.1) to slow to a crawl over time, or begin to have a noticeable performance impact?
I think the answer here depends on some unknowns and makes it pretty subjective:
How much empty space is on your Postgres node.
How many events typically occur on your server in a 6-12 month period.
Maybe more important than a yes or a no (which should be taken with a grain of salt) are the things to consider prior to making the change.
Have you found value in the default 183 days? Is it worth the risk of adding 365? It sounds like you might be doing some high-level auditing, and a longer period is becoming a requirement. If that's the case, the answer is that you have no choice but to go ahead with the change. See the steps below.
Make sure that the change is made in a non-prod environment first, ideally one with high traffic. Even though you will not get an exact replica, it would certainly be worth the practice of switching it over. You also want to make sure that the non-prod and prod environments match exactly.
Make sure the change is very well documented. For example, if you were to change departments without anyone having knowledge of a non-standard config setting, it might make for a difficult situation if Support is ever needed or if there is a question as to what might be causing slow behavior.
Things to consider after a change:
Monitor the size of backups.
Monitor the size of the historical table(s) (see the Data Dictionary for table names if they're not already known).
Be ready to roll back the config change if the above starts to inflate.
Overall:
I have not personally seen much troubleshooting value from these tables going back beyond a certain number of days (i.e., if there is trouble on the server, it is usually investigated immediately rather than by referencing data from 365+ days ago). Perhaps the value for you lies in determining the amount of usage/expansion on Tableau Server.
I have not seen this table get so large that it brings down a server or slows it down, especially if the server is sized appropriately.
If you're regularly/heavily working with and examining Postgres data, it may be wise to extract at a low-traffic time of day. This will prevent excess load from outside sources during peak times. Remember that ad hoc querying of Postgres is technically unsupported, which leads to awkward situations if things go awry.
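As a rough illustration of the monitoring points above (a sketch only: the table name comes from the Data Dictionary, and the host, port, and read-only credentials are assumptions about your environment), you could track the size of the historical tables directly from the repository:

import psycopg2  # connects to the Tableau repository ("workgroup" Postgres database)

conn = psycopg2.connect(host="tableau-server", port=8060, dbname="workgroup",
                        user="readonly", password="<password>")
with conn, conn.cursor() as cur:
    # Total on-disk size of the historical_events table, including indexes.
    cur.execute("SELECT pg_size_pretty(pg_total_relation_size('historical_events'))")
    print("historical_events size:", cur.fetchone()[0])
conn.close()

Watching that number (and the backup size) before and after the retention change gives you an early warning before anything starts to inflate.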

EC2 Linux MarkLogic9 start service failed

I added an instance running RedHat Linux 64-bit and installed the JDK successfully. Then I used SSH to copy the MarkLogic 9 installation package to the instance, and the install finished. When I start the MarkLogic service, the following messages appear. (P.S.: this is my first time installing MarkLogic.)
Instance is not managed
Waiting for device mounted to come online : /dev/xvdf
Volume /dev/sdf has failed to attach - aborting
Warning: ec2-startup did not complete successfully
Check the error logs for details
Starting MarkLogic: [FAILED]
And following is log info:
2017-11-27 11:16:39 ERROR [HandleAwsError # awserr.go.48] [instanceID=i-06sdwwa33d24d232df [HealthCheck] error when calling AWS APIs. error details - NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Using the Source of Infinite Wisdom, I googled for "Install MarkLogic ec2 aws".
Not far down I found https://docs.marklogic.com/guide/ec2.pdf
Good document to read.
If you choose to ignore the (literally "STOP" in all caps) "STOP: Before you do Anything!" suggestion on the first page, you can go further and find that ML needs a data volume, and that using the root volume is A Bad Idea (it's too small and will crash your system when it fills up, not to mention vanish if your instance terminates). So if you choose not to use the recommended CloudFormation script for your first experience, you will need to manually create and attach a data volume, among other things.
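For reference, creating and attaching that data volume by hand is a few calls with boto3 (a rough sketch, not MarkLogic's official tooling; region, availability zone, size, instance ID, and device name are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a dedicated EBS volume so MarkLogic data does not live on the root volume.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# Attach it to the instance; it typically appears as /dev/xvdf inside the guest.
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")

You still need to create a filesystem on it and mount it where MarkLogic expects its data, which is exactly the kind of detail the CloudFormation templates handle for you.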
The size and compute power of the host systems running ML are orthogonal to the deployment and orchestration methods.
Entirely different issues. Yes, you should start with the sample CloudFormation scripts... but not due to size and performance, due to
the fact that they were built to make a successful first-time experience as painless as possible. You would have had your ML server up and running in less time than it took to post a question to Stack Overflow asking why it wasn't.
Totally unrelated - except for the built-in set of instance types for the AMIs (1).
What configurations are possible vs. recommended vs. supported is
all largely dependent on workload and performance expectations.
MarkLogic can and does run on resource-constrained systems — whether and how well it works requires the same methodology to answer for micro and mega systems alike: workload, data size and format, query and data processing code used, performance requirements, working set, hw, sw, vm, networking, storage ... While designed to support large enterprise workloads well,
there are also very constrained platforms and workloads in use in production systems. A typical low-end laptop can run ML fine ... for some use cases, where others may need a cluster of a dozen or a hundred high-end monsters.
(1) 'Supported instance types' with Marketplace AMIs ...
Yes, these do NOT include entry-level EC2 instance types, last I looked.
The rationale is similar to why the standard scripts make it hard to abuse the root volume as a data volume — not because it cannot be done,
rather it is an attempt to provide the best chance of a successful first-time experience for the targeted market segment ... constrained by having only one chance to do it and knowing nothing at all about the intended use ... an educated guess coupled with a lot of testing and support history about how people get things wrong no matter how much you lead them.
While 'micro' systems can be made to work successfully in some specialized use cases, they usually don't do as well, as easily, or as reliably, and don't handle as large a variety of whatever you throw at them without careful workload-specific tuning and sophisticated application code —
similarly ... there is a reason the docs make it as clear as humanly possible, even annoyingly so, that you should start with the CloudFormation templates —
short of refusing to run without them.
Can ML run on platform X with Y memory, Z hypervisor, on Docker or VMware or VirtualBox or a brand-acme RAID controller ...?
Very likely — with some definition of 'run' and configured for those exact constraints.
Very unlikely for arbitrary definitions of 'run' with no thought or effort to match the deployment to the environment.
Will it be easy to set up for someone who has never done it before, run 'my program' at 'my required speeds' out of the box with no problems, no optimizations, performance analysis, data refactoring, or custom queries?
For a reasonably large set of initial use cases — at least for a reasonable and quick POC — very likely, if you follow the installation guide, with perhaps a few parameter adjustments.
Is that the best it can do? Absolutely not.
But it's very close, given absolutely no knowledge of the user's actual application, technical experience, workloads, budget, IT staff, dev and QA team, requirements, business policies, future needs, or phase of the moon.
Recommendation: read the EC2 docs.
Do what they say.
Try it out with a realistic set of data and applications for your use case.
Test, measure, experiment, learn.
THEN and ONLY THEN worry about whether it will work on a t2.micro or an m4.64xlarge (or clusters thereof).
That is the beginning, not the end.
The end is never: you can and should treat continual analysis and improvement of IT configurations as part of ongoing operating procedures.
Minimizing cost is a systemic problem with many dimensions —
and on AWS it's FREE to change. It's EXPENSIVE not to plan for change.
Change is cheap.
Experimentation is cheap.
Choose instance types, storage, networking, etc. last, not first.
Consider TCO. Question requirements ... do you NEED that dev system running Sunday at 3am? Can QA tolerate occasional failures in exchange for 90% cost savings? Can you avoid over-commitment with auto scaling?
Do you need five 9's, or are three 9's enough? Can ingest be offloaded to non-production systems with cheaper storage? Can a middle tier be added ... or removed, to move work to the most cost-effective components? Is labor or IT more costly?
Instance type is actually one of the least relevant components of TCO.

Proper way to scale out build agents on a VM for VS Team Services

What's the best practice for scaling out VSTS build agents on a single VM?
I've read through this and this, but I'm still not sure how to do it correctly.
I'm thinking maybe a folder called c:\builds, then extracting the agent bits to a folder under that for each agent (e.g. c:\builds\1 or c:\builds\agent01). Not sure about that, though.
Just register multiple agents using whatever directory scheme your heart desires. On Windows, shorter paths are better because of the maddening legacy 260-character limit on file paths.
Also keep in mind that builds are typically I/O limited. Putting multiple agents on the same physical hard drive isn't going to gain you much unless you're using SSDs. I wouldn't bother with more than 2 agents per disk, although your mileage may vary depending on disk speed, memory, etc. It's something worth profiling. Past a certain point, your builds will still run in parallel, but each one will actually run slower.
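For what it's worth, the layout in the question works fine: one extracted agent folder per agent, each registered with its own name against the same pool. Roughly (a sketch from memory; the exact config.cmd flags depend on your agent version, so check its --help):

c:\builds\agent01\config.cmd --url https://myaccount.visualstudio.com --auth pat --token <PAT> --pool Default --agent agent01
c:\builds\agent02\config.cmd --url https://myaccount.visualstudio.com --auth pat --token <PAT> --pool Default --agent agent02

Configure each one to run as a Windows service so they come back after a reboot.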

Difference between Memcached and Hadoop?

What is the basic difference between Memcached and Hadoop? Microsoft seems to do memcached with the Windows Server AppFabric.
I know memcached is a giant key-value store that hashes keys across multiple servers. What is Hadoop, and how is Hadoop different from memcached? Is it used to store data? Objects? I need to save giant in-memory objects, but it seems like I need some kind of way of splitting these giant objects into "chunks" like people are talking about. When I look into splitting the object into bytes, it seems like Hadoop keeps popping up.
I have a giant class in memory, upwards of 100 MB. I need to replicate this object and cache it in some fashion. When I look into caching this monster object, it seems like I need to split it the way Google does. How does Google do this? How can Hadoop help me in this regard? My objects are not simple structured data; they have references up and down the classes inside, etc.
Any idea, pointers, thoughts, guesses are helpful.
Thanks.
memcached [ http://en.wikipedia.org/wiki/Memcached ] is a single-purpose distributed caching technology.
apache hadoop [ http://hadoop.apache.org/ ] is a framework for distributed data processing - targeted at Google/Amazon scale, many terabytes of data. It includes sub-projects for the different areas of this problem - a distributed database, an algorithm for distributed processing, reporting/querying, and a data-flow language.
The two technologies tackle different problems. One is for caching (small or large items) across a cluster. And the second is for processing large items across a cluster. From your question it sounds like memcached is more suited to your problem.
Memcached won't work due to its limit on the size of stored values.
See the memcached FAQ. I read somewhere that this limit can be increased to 10 MB, but I am unable to find the link.
For your use case I suggest giving MongoDB a try.
See the MongoDB FAQ. MongoDB can be used as an alternative to memcached. It provides GridFS for storing large files in the DB.
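If you do end up on memcached anyway, the "chunks" idea from the question can be as crude as slicing a pickled blob across several keys to stay under the default ~1 MB value limit. A rough Python sketch (the key scheme and chunk size are made up, and it ignores expiry and partial-failure handling):

import pickle
import memcache  # python-memcached client

CHUNK = 900 * 1024  # stay below the 1 MB default value limit
mc = memcache.Client(["127.0.0.1:11211"])

def put_large(key, obj):
    # Serialize the object and spread it across numbered keys.
    blob = pickle.dumps(obj)
    parts = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
    mc.set(key + ":count", len(parts))
    for i, part in enumerate(parts):
        mc.set("%s:%d" % (key, i), part)

def get_large(key):
    # Reassemble the pieces and deserialize.
    count = mc.get(key + ":count")
    blob = b"".join(mc.get("%s:%d" % (key, i)) for i in range(count))
    return pickle.loads(blob)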
You need to use pure Hadoop for what you need (no HBase, Hive, etc.). The MapReduce mechanism will split your object into many chunks and store them in Hadoop. The tutorial for MapReduce is here. However, don't forget that Hadoop is, first and foremost, a solution for massive compute and storage. In your case I would also recommend checking out Membase, which is an implementation of Memcached with additional storage capabilities. You will not be able to run MapReduce with memcached/Membase, but they are still distributed, and your object may be cached in a cloud fashion.
Picking a good solution depends on requirements of the intended use, say the difference between storing legal documents forever to a free music service. For example, can the objects be recreated or are they uniquely special? Would they be requiring further processing steps (i.e., MapReduce)? How quickly does an object (or a slice of it) need to be retrieved? Answers to these questions would affect the solution set widely.
If objects can be recreated quickly enough, a simple solution might be to use Memcached as you mentioned across many machines totaling sufficient RAM. For adding persistence later, Couchbase (formerly Membase) is worth a look and is used in production for very large game platforms.
If objects CANNOT be recreated, first determine whether S3 and other cloud file providers would meet your requirements for now. For high-throughput access, consider one of the several distributed, parallel, fault-tolerant filesystem solutions: DDN (has GPFS and Lustre gear), Panasas (pNFS). I've used DDN gear and it had a better price point than Panasas. Both provide good solutions that are much more supportable than a DIY Backblaze.
There are some mostly free implementations of distributed, parallel filesystems such as GlusterFS and Ceph that are gaining traction. Ceph touts an S3-compatible gateway and can use BTRFS (future replacement for Lustre; getting closer to production-ready). Ceph architecture and presentations. Gluster's advantage is the option for commercial support, although there could be a vendor supporting Ceph deployments. Hadoop's HDFS may be comparable, but I have not evaluated it recently.

Long term source code archiving: Is it possible?

I'm curious about keeping source code around reliably and securely for several years. From my research/experience:
Optical media, such as burned DVD-Rs, lose bits of data over time. After a couple of years, I can't get all the files off that I put on them. Read errors, etc.
Hard drives are mechanical and subject to failure/obsolescence, with expensive data recovery fees that hardly keep your data private (you send the drive away to some company).
Magnetic tape storage: see #2.
Online storage is subject to the whim of some data storage center, the security or lack of security there, and the possibility that the company folds, etc. Plus it's expensive, and you can't guarantee that they aren't peeking in.
I've found over time that I've lost source code to old projects I've done due to these problems. Are there any other solutions?
Summary of answers:
1. Use multiple methods for redundancy.
2. Print out your source code either as text or barcode.
3. RAID arrays are better for local storage.
4. Open sourcing your project will make it last forever.
5. Encryption is the answer to security.
6. Magnetic tape storage is durable.
7. Distributed/guaranteed online storage is cheap and reliable.
8. Use source control to maintain history, and backup the repo.
The best answer is "in multiple places". If I were concerned about keeping my source code for as long as possible I would do:
1) Backup to some optical media on a regular basis, say burn it to DVD once a month and archive it offsite.
2) Back it up to multiple hard drives on my local machines
3) Back it up to Amazon's S3 service. They have guarantees, it's a distributed system so no single points of failure and you can easily encrypt your data so they can't "peek" at it.
With those three steps your chances of losing data are effectively zero. There is no such thing as too many backups for VERY important data.
Based on your level of paranoia, I'd recommend a printer and a safe.
More seriously, a RAID array isn't so expensive anymore, and so long as you continue to use and monitor it, a properly set-up array is virtually guaranteed never to lose data.
Any data you want to keep should be stored in multiple places on multiple formats. While the odds of any one failing may be significant, the odds of all of them failing are pretty small.
I think you'd be surprised how reasonably priced online storage is these days. Amazon S3 (Simple Storage Service) is $0.10 per gigabyte per month, with upload costs of $0.10 per GB and downloads costing $0.17 per GB maximum.
Therefore, if you stored 20GB for a month, uploaded 20GB and downloaded 20GB, it would cost you $8.40 (slightly more expensive in the European data center at $9).
That's cheap enough to store your data in both US and EU data centers AND on DVD - the chances of losing all three are slim, to say the least.
There are also front-ends available, such as JungleDisk.
http://aws.amazon.com
http://www.jungledisk.com/
http://www.google.co.uk/search?q=amazon%20s3%20clients
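If you go the S3 route, the upload side is only a few lines with boto3. A sketch (bucket and file names are placeholders; encrypt the archive locally first, e.g. with GPG, if you don't want the provider to be able to read it):

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "myproject-archive.tar.gz.gpg",              # locally encrypted archive
    "my-code-archive-bucket",                    # placeholder bucket name
    "backups/myproject-archive.tar.gz.gpg",
    ExtraArgs={"StorageClass": "STANDARD_IA",    # cheaper storage class for cold data
               "ServerSideEncryption": "AES256"},
)

Front-ends like JungleDisk wrap the same API if you'd rather not script it yourself.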
The best way to back up your projects is to make them open source and famous. That way there will always be people with a copy of it and able to send it to you.
After that, just take care of the magnetic/optical media: continued renewal of it, and multiple copies (online as well; remember you can encrypt them) on multiple media (including, why not, RAID sets).
If you want to archive something for a long time, I would go with a tape drive. They may not hold a whole lot, but they are reliable and pretty much the storage medium of choice for data archiving. I've never personally experienced data loss on a tape drive, though.
Don't forget to use Subversion (http://subversion.tigris.org/). I keep my whole life in Subversion (it's awesome).
The best home-usable solution I've seen was printing out the backups using a 2D barcode - the data density was fairly high, it could be re-scanned fairly easily (presuming a sheet-feeding scanner), and it moved the problem from the digital domain back into the physical one - which is fairly easily met by something like a safe deposit box, or a company like Iron Mountain.
The other answer is 'all of the above'. Redundancy always helps.
For my projects, I use a combination of 1, 2, & 4. If it's really important data, you need to have multiple copies in multiple places. My important data is replicated to 3-4 locations every night.
If you want a simpler solution, I recommend you get an online storage account from a well known provider which has an insured reliability guarantee. If you are worried about security, only upload data inside TrueCrypt encrypted archives. As far as cost, it will probably be pricey... But if it's really that important the cost is nothing.
For regulatory mandated archival of electronic data, we keep the data on a RAID and on backup tapes in two separate locations (one of which is Iron Mountain). We also replace the tapes and RAID every few years.
If you need to keep it "forever", probably the safest way is to print out the code and stick it in a plastic envelope to keep it safe from the elements. I can't tell you how much code I've lost to backup media that are no longer accessible... I don't have a paper card reader to read my old COBOL deck, no drive for my 5 1/4" floppies, or my 3 1/2" floppies. And yet the printout that I made of my first big project still sits there, readable... even after my once-3-year-old decided that it would make a good coloring book.
When you state "back up source code", I hope you include in your meaning the backing up of your version control system too.
Backing up your current source code (to multiple places) is definitely critical, but backing up your history of changes as preserved by your VCS is paramount in my opinion. It may seem trivial, especially when we are always "living in the present, looking towards the future". However, there have been way too many times when we have wanted to look backward to investigate an issue, review the chain of changes, see who did what, or check whether we can roll back to a previous build/version. All the more important if you practise heavy branching and merging. Archiving a single trunk will not do.
Your version control system may come with documentation and suggestions on backup strategies.
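Concretely, most version control systems have a first-class way to dump the full history into a single file that can go into the same off-site rotation as the rest of your backups; for example (commands from memory, check your VCS documentation):

svnadmin dump /var/svn/myrepo | gzip > myrepo-full.dump.gz
git bundle create myrepo.bundle --all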
One way would be to periodically recycle your storage media, i.e. read data off the decaying medium and write it to a fresh one. There exist programs to assist you with this, e.g. dvdisaster. In the end, nothing lasts forever. Just pick the least annoying solution.
As for #2: you can store data in encrypted form to prevent data recovery experts from making sense of it.
I think Option 2 works well enough if you have the right backup mechanisms in place. They need not be expensive ones involving a third party, either (except for disaster recovery).
A RAID 5 configured server would do the trick. If a hard drive fails, replace it. It is HIGHLY unlikely that all the hard drives will fail at the same time. Even a mirrored RAID 1 drive would be good enough in some cases.
If option 2 still seems like a crappy solution, the only other thing I can think of is to print out hard-copies of the source code, which has many more problems than any of the above solutions.
Online storage is subject to the whim of some data storage center, the security or lack of security there, and the possibility that the company folds, etc. Plus it's expensive,
Not necessarily expensive (see rsync.net for example), nor insecure. You can certainly encrypt your stuff too.
and you can't guarantee that they aren't peeking in.
True, but there's probably much more interesting stuff to peek at than your source-code. ;-)
More seriously, a RAID array isn't so expensive anymore
RAID is not backup.
I was just talking with a guy who is an expert in microfilm. While it is an old technology, for long-term storage it is one of the most enduring forms of data storage if properly maintained. It doesn't require sophisticated equipment to read (a magnifying lens and a light), although storing it may take some work.
Then again, as was previously mentioned, if you are only talking about spans of a few years instead of decades, printing it off to paper and storing it in a controlled environment is probably the best way. If you want to get really creative, you could laminate every sheet!
Drobo for local backup
DVD for short-term local archiving
Amazon S3 for off-site, long-term archiving