How can I round up the MyBB view count format?

I'm using MyBB for my forum.
My post views always display in full, e.g. 31344, 211313.
How can I round them up to something like 31k, 21k?
And when they reach millions or billions they should show as 31M, 21B.
You get the idea? Please, how can I fix that?
Remember, I'm using MyBB.

For that, you need to edit the my_number_format function, which in MyBB is defined in inc/functions.php.
Its responsibility is to format a number for display.
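As a rough starting point, a helper along these lines (just a sketch; the function name and the whole-number rounding are my own choices, not MyBB core code) could produce the abbreviated format:

function format_abbreviated($number)
{
    // Abbreviate large numbers with k/M/B suffixes, e.g. 31344 -> "31k".
    if ($number >= 1000000000) {
        return floor($number / 1000000000).'B';
    }
    if ($number >= 1000000) {
        return floor($number / 1000000).'M';
    }
    if ($number >= 1000) {
        return floor($number / 1000).'k';
    }
    return (string)$number;
}

// format_abbreviated(31344)    -> "31k"
// format_abbreviated(211313)   -> "211k"
// format_abbreviated(31000000) -> "31M"

You could then call something like this from my_number_format (or only at the template spots that show view counts) instead of returning the fully formatted number.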


How to find the column of the number that you're looking for?

After I watched a video about finding your way back to a number using random numbers (video link, if you want to know), I tried to make a Sheets program for it. When I was trying to find out how many tries it takes to get back to a number, I had no idea which function to use. I tried =FIND and =HLOOKUP, and they failed because there are multiple identical numbers.
Example:
1 needs 3 tries to get back to 1, so the result should be 3. Same thing with 2, but it needs 17 tries.
Here's the link to the Google Sheet.
try something like this:
=IFERROR(IF($B7=INDEX($B$7:$B, MATCH(B7, ROW($A$7:$A)-(ROW($A$7)-1), 0)),,
INDEX($B$7:$B, MATCH(B7, ROW($A$7:$A)-(ROW($A$7)-1), 0))))
spreadsheet demo

How do I reduce the number of 301 redirect entries using wildcards and variables in Squarespace?

I recently renamed all of the URLs that make up my blog and have written redirects for almost every page, using wildcards where I can. Keep in mind that all I know at this time is the * wildcard.
Here is an example of what I have:
/season-1/2017/1/1/snl-s01e01-host-george-carlin -> /season-1/snl-s01e01-george-carlin 301
I want to write a catch-all that will redirect all 38 seasons of reviews with one redirect entry, but I can't figure out how to get rid of just the word "host" between s01e01- and -george-carlin. I was thinking it would work something like this:
/season-*/*/*/*/snl-s*e*-host-*-* -> /season-*/snl-s*e*[code to remove the word "host"]-*-* 301
Is that even close to being correct? Do I need that many *s?
Thanks in advance for any help.
Unfortunately, you won't be able to reduce the number of individual redirect entries using the redirect features that Squarespace has to offer, namely the wildcard (*) and a single variable ([name]). Multiple variables would be needed, but only [name] is supported.
The closest you can get is:
/season-1/*/*/*/snl-s01e01-host-[name] -> /season-1/snl-s01e01-[name] 301
But, if I'm understanding things, while the above redirect appears more general, it would still need to be copy/pasted for each post individually. So although it demonstrates the best that could be achieved, it is not a technical improvement.
Therefore, there are only two alternatives:
Create a Google Sheet (or other spreadsheet) where the old URLs are copy/pasted into column one, a formula using ARRAYFORMULA and regular expressions to parse the old URL and generate the new URL is added in column two, and in column three a formula is written to join the two cells with -> and 301. With that done, you could click, drag and highlight all cells in column three, copy, and paste them into the "URL Shortcuts" text area in Squarespace. It can be quite time consuming to figure out, write and test the correct formula, but it does avoid having to manually type out every redirect. Whether it is less time/effort in total depends on the number of redirects and one's proficiency with writing spreadsheet formulas. It could be that using the redirect code above would simplify the formula that'd need to be written in the spreadsheet, which may save some time.
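For illustration only (the column layout, ranges and the exact regular expression are assumptions based on the example URL in the question), the column-two and column-three formulas might look something like:
=ARRAYFORMULA(IF(A2:A="",, REGEXREPLACE(A2:A, "^/season-(\d+)/\d+/\d+/\d+/(snl-s\d+e\d+)-host-(.*)$", "/season-$1/$2-$3")))
=ARRAYFORMULA(IF(A2:A="",, A2:A & " -> " & B2:B & " 301"))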
Another alternative would be to remove your redirects and instead handle the redirect via JavaScript added to the 404/Page-not-found page. Because it sounds like you already have all of the redirects in place but are simply trying to reduce the overall number, I wouldn't recommend changing to a JavaScript-based approach. There are other drawbacks to using JavaScript, in any case.

Grok filter for a time counter HH:MM

I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow the following pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier and will be used for grouping. I've called the field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok with simply applying %{GREEDYDATA:LogInfo}.
I notice that the number of minutes can be far higher than the standard 60 minutes within an hour, so I may be barking up the wrong tree trying to parse it with date patterns such as TIMESTAMP_ISO8601, but then, I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and was, as mentioned, able (by cutting away the square bracket parts) to parse the log info text with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS! I also found %{NUMBER:duration}, but as far as I could tell it can only parse timestamps with a dot, not a colon.
A grok expression can help you solve the problem.
But first I want to make sure I understand: do you mean that [325:51] [326:49] [359:57] are the three components you want to fetch, and that the result should look like this:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I've got that right, you can do it in one of the following ways:
define your own custom pattern file and add the pattern there, or
just use the expression directly in the filter part of your logstash conf file (see the sketch below).
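As a rough sketch of the second option (the pattern is only a guess based on the sample line in the question, so adjust it to your real log lines):
filter {
  grok {
    match => { "message" => "%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{NUMBER:TimeSpentMinutes}:%{NUMBER:TimeSpentSeconds}\] \[%{NUMBER:TimeStartedMinutes}:%{NUMBER:TimeStartedSeconds}\] \[%{NUMBER:TimeSinceDownMinutes}:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogInfo}" }
  }
}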
Hope it helps you.
Ah, there was a space.. Actually, I was misleading myself and everybody in my question, as it was not actually that log line that was causing problems. I just took the first one, not realizing where the problem really was, but the one causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts where there are two spaces, so the way I managed to solve this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but this I can also handle in a later stage.
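If it helps, one way to do that merge later in the pipeline is a small ruby filter that computes total seconds. This is just a sketch; the target field name is made up, and the source fields are the ones from the pattern above:
ruby {
  code => "event.set('TimeSpentTotalSeconds', event.get('TimeSpentMinutes').to_i * 60 + event.get('TimeSpentSeconds').to_i)"
}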
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)

Scala Gatling keep inserting user until end of feed file is reached

I'm trying to inject users into a scenario in such a way that it will keep inserting users until every single entry of the feed file is used, since the feed file contains login information. I would like all the users in the feed file to log in. Right now, all I can think of is two possible approaches.
Here I insert the number of rows in the feedfile at once.
scenario("Verified_Login")
.exec(LoginScenario.scn)
.inject(atOnceUsers(number_of_entries_in_feedfile))
Here I inject over a very long duration, for example 100 seconds, and then make the feed file circular.
scenario("Verified_Login")
.exec(LoginScenario.scn)
.inject(atOnceUsers(1), constantUsersPerSec(1) during (100 seconds))
The problem with the first approach is that I have to find the number of entries in the feed file, which can be tedious as there could be thousands of them. The problem with the second is that entries could, and probably will, be repeated. So is there a way to keep injecting users until the feed file runs out of entries?
According to this source from last year, Stéphane Landelle, the lead contributor of Gatling, says that you must provide enough data for a simulation to complete when using this method.
The post I linked from Stéphane does suggest simply reading the length of the file and using that to drive the number of users, as you have already mentioned in your question.
I suggest you read the post, as it will give you an alternate method for achieving what you want. It seems to be as close as you will ever get, unless things have changed.
Here is their code.
val systemsIdentifier = jdbcFeeder(databaseUrl, databaseUser, databasePassword, sql_systemsIdentifier)
val count = systemsIdentifier.records.size // number of rows available in the feeder

val comScn = scenario("My scenario")
  .repeat(systemsIdentifier.records.size / count) {
    feed(systemsIdentifier)
      .exec(performActionsChain)
  }

setUp(comScn.inject(rampUsers(count) over (60 seconds))).protocols(httpConf)
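Applied to the scenario in your question, a minimal sketch (assuming the credentials live in a CSV feed file, here called logins.csv, and that you are on the same Gatling 2.x DSL as the code above) would be:

val loginFeeder = csv("logins.csv")       // the feed file with the login data (name assumed)
val userCount = loginFeeder.records.size  // one injected user per row

setUp(
  scenario("Verified_Login")
    .exec(LoginScenario.scn)
    .inject(atOnceUsers(userCount))
)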

what is the difference between doc.Content.Text and doc.Range(start, end).Text

Could you please explain the difference between doc.Content.Text and doc.Range(start, end).Text?
Actually, if I extract a string like
doc.Content.Text.SubString(start, lenofText)
and if I do the same with
doc.Range(start, start + lenofText)
I get the correct result for doc.Content.Text but an incorrect result with doc.Range ... do you know the reason? I need to find a piece of text and then convert it to a hyperlink, but doc.Range does not give me the correct results...
Your description is a little vague (for instance, how is it not the correct result?), but a document is actually made up of as many as 17 story parts (which include things like the main story [the document area], footers, headers, footnotes, and comments). Content refers specifically to the main text story. Doc.Range is broader and can include more than one story. If the results are not correct because the text looks offset by a certain number of characters, it may be counting other stories. If you want to limit the results to the body text, specify one of the following:
doc.Content
doc.StoryRanges(wdMainTextStory)
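For example, a rough sketch in C# with the Word interop (assuming doc is an open Document and start/lenofText are character offsets within the main text story, as in the question; the address is just a placeholder):

using Word = Microsoft.Office.Interop.Word;

// Work only within the main text story, as suggested above.
Word.Range body = doc.StoryRanges[Word.WdStoryType.wdMainTextStory];
Word.Range target = body.Duplicate;
target.SetRange(start, start + lenofText);

// Turn the found text into a hyperlink.
doc.Hyperlinks.Add(target, "https://example.com", TextToDisplay: target.Text);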