rpmlint: "W: no-version-in-last-changelog" - rpm-spec

When building RPMs in SLES12, I use rpmlint to check the spec files and packages.
I always get the warning W: no-version-in-last-changelog, but I cannot explain it:
The last lines of my %changelog read like
* Tue Jul 11 2011 My Self <my.email#address.here>
- Release 5.2.5-0.0: Initial release.
So I guess that's the "last" changelog the warning refers to, and there's clearly a version in it.
Even if it referred to the first (latest) entry instead, it wouldn't make a difference, as that looks very much the same:
%changelog
* Thu Apr 8 2021 My Self <my.email#address.here>
- Release 5.28.0-0.0: Updated to...
more text.
* Thu Jul 5 2018 ...next entry using just the same format...

If we take a look at the openSUSE RPM packaging guidelines on creating the changes file (https://en.opensuse.org/openSUSE:Creating_a_changes_file_(RPM)), they suggest using the following format:
Tue Apr 22 20:54:26 UTC 2013 - your#email.com - x.y.z
- Update to new upstream release x.y.z:
* bling and changes from upstream for that version
* just the relevant parts, no info about other OS
* and keep it as short as possible
So I think it's complaining because you ought to use a format that looks something like this:
* Tue Jul 11 2011 My Self <my.email#address.here> - 5.2.5-0.0
- Initial release.
I'm pretty sure you could also use the following if it's important to you to have the version number on the first line of the change list instead:
* Tue Jul 11 2011 My Self <my.email#address.here>
- 5.2.5-0.0
- Initial release.
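Either way, you can verify quickly by re-running rpmlint after editing the most recent %changelog entry; the file and package names here are just placeholders for your own:

# re-check the spec file after reformatting the newest %changelog entry
rpmlint mypackage.spec
# or check the built binary package directly
rpmlint RPMS/x86_64/mypackage-5.28.0-0.0.x86_64.rpm

If the header-line format really is the cause, the warning should disappear once the entry carries the "name <email> - version-release" form.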

Related

CPAN installation prompting "waiting for read lock"

I just installed Perl and am trying to install CPAN modules without root privileges.
When I run cpan App::cpanminus, I get into an infinite loop echoing Waiting for read lock on '~/.cpan/FTPstats.yml'. I tried removing the file, but the error persists.
Any solution would be highly appreciated; thanks in advance!
A snapshot of the full log follows.
Loading internal null logger. Install Log::Log4perl for logging messages
Reading '/home/louis/.cpan/sources/authors/01mailrc.txt.gz'
............................................................................DONE
Reading '/home/louis/.cpan/sources/modules/02packages.details.txt.gz'
Database was generated on Tue, 01 Mar 2022 04:17:03 GMT
.............
New CPAN.pm version (v2.29) available.
[Currently running version is v2.16]
You might want to try
install CPAN
reload cpan
to both upgrade CPAN.pm and run the new version without leaving
the current session.
...............................................................DONE
Fetching with LWP:
http://www.cpan.org/modules/03modlist.data.gz
Tue Mar 1 13:33:09 2022: waiting for read lock on '/home/louis/.cpan/FTPstats.yml' (since Tue Mar 1 13:32:59 2022)
Tue Mar 1 13:33:12 2022: waiting for read lock on '/home/louis/.cpan/FTPstats.yml' (since Tue Mar 1 13:32:59 2022)
And the access rights on FTPstats.yml:
-rw-r--r-- 1 louis louis 0 Mar 1 13:32 /home/louis/.cpan/FTPstats.yml
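The "waiting for read lock" message comes from CPAN.pm trying to take a shared flock on FTPstats.yml, so a loop like this usually means another process is holding (or never released) a lock on the file, or that flock itself hangs on the filesystem (which can happen if ~/.cpan lives on NFS). A rough way to investigate, assuming standard Linux tools and the paths from your log:

# see whether another process currently has the stats file open/locked
fuser -v /home/louis/.cpan/FTPstats.yml
lsof /home/louis/.cpan/FTPstats.yml
# look for leftover cpan/perl processes from earlier attempts and stop them
ps -fu "$USER" | grep -i cpan
# check whether ~/.cpan sits on NFS, where flock() can block indefinitely
df -T /home/louis/.cpan

If nothing else holds the file and ~/.cpan turns out to be on NFS, pointing cpan_home at a local directory from the cpan shell (o conf cpan_home /some/local/dir, then o conf commit) is a workaround worth trying.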

Error message in Talend tool connecting with server - How to resolve this issue

Execution failed : java.security.cert.CertificateExpiredException: NotAfter: Sun Jan 17 05:36:12 IST 2021
[NotAfter: Sun Jan 17 05:36:12 IST 2021]
You're most likely using a subscription product that comes with support. You can find the required steps here:
https://community.talend.com/s/article/FAQ-for-REQUIRED-by-Jan-17-2021-Mandatory-Talend-Certificate-update-for-Talend-On-premises-and-cloud?language=en_US
Applying the latest cumulative patch should fix your problem.

Why can't Linux read the hwclock at certain month shifts?

We have a Linux system that we are building with Yocto.
We can read our hardware clock after reboots and change both the system time and the hardware time without any error (most of the time). However, at certain month transitions, in every year we have tried, we run into this error: "hwclock: RTC_RD_TIME: Invalid argument".
Example 1:
root#:~# date
Thu Apr 30 23:59:50 UTC 2020
root#:~# hwclock
Thu Apr 30 23:59:52 2020 0.000000 seconds
root#:~#
root#:~#
root#:~# date
Fri May 1 00:00:10 UTC 2020
root#:~# hwclock
hwclock: RTC_RD_TIME: Invalid argument
root#:~#
This does not happen at every month transition: if I do the same test in January, Linux can read the hwclock without any issues. It also does not matter whether the unit is powered or not. If I set the hwclock to May 1st 00:00:00, it keeps track of the time from there without problems.
The same error occurs on the following month shift:
Feb (it does not matter if it is leap year or not) -> Mar
Apr -> May
Jun -> Jul
Sep -> Oct
Nov -> Dec
Dec (Not sure because of new year or new month) -> Jan
My understanding is that this happens because rtc-lib.c cannot validate the time correctly.
I have tried this on multiple different hardware platforms.
Does anyone have any idea what might cause this?
Solution:
The fault was not in rtc-lib.c. The cause of the error was a faulty RTC driver: the RTC month value is 1-indexed, but the kernel assumes it is 0-indexed. I added a patch for this to rtc-[my_rtc_model].c, and now it seems to be working.
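One plausible reading of the failing transitions listed above: if the driver passes the month register through unconverted in both directions, the hardware effectively runs one month behind the kernel's view, and at exactly those boundaries the kernel reads back an impossible date (April 31, February 29/30, ...), which rtc_valid_tm() in rtc-lib.c rejects with EINVAL. After patching the driver, the test from Example 1 can be replayed quickly; the exact flags below assume a GNU/util-linux userland (on BusyBox the equivalents are roughly date -s ... and hwclock -w), so treat this as a sketch:

# jump the system clock to just before a previously failing boundary
date -s "2020-04-30 23:59:50"
# push the system time down to the RTC
hwclock --systohc
# wait for the month rollover, then read the RTC back
sleep 20
hwclock        # should now print a valid May 1 timestamp instead of failing
date           # and stay consistent with the system clock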

Couchbase: 20k items stuck in Tap Queue

We are currently evaluating Couchbase, primarily as a memcached replacement. Our setup looks like this:
php -> localhost moxi -> couchbase bucket (Total bucket size = 10240 MB (2048 MB x 5 nodes with replica count 1))
The servers have 16 GB RAM and are SSD-backed.
We were inserting at about 400 ops/s and had no problems for a few days, until we reached about 13 million items. At that point we found out that we had forgotten to implement the delete function in our test setup and that a lot of keys had no expiration set.
To start over, we flushed the bucket through the web interface. This is where our problems began.
We started to see temp OOMs and back-offs, and the TAP queue was filled with 20k items; the drain and fill rates were nearly the same. See the attached screenshot.
What also caught our eye was that node 4 had only 220k items, whereas every other node had around 1.39M.
Somehow it looks like the replication messed something up, but I'm relatively new to Couchbase. Any hints or suggestions? (Original question: http://www.couchbase.com/communities/q-and-a/20k-items-stuck-tap-queue)
The problem was solved for a short time after removing the failing node from the cluster.
Now, with four nodes left in the cluster, the same thing happened again with another node after some hours. We tried putting the now-failing node into failover state. That fixed the problem again, but after re-adding the node, the same phenomenon occurred once more on that node.
Other things we realized are:
* Three out of four nodes have thousands of items in their TAP replication queue, but one ("the failing one") has 0.
* Also three out of four nodes have a back-off rate of around 400, but one ("the failing one") has 0.
* Only the failing one has a massive amount of "Temp OOMs per second", but the other three have 0.
The phenomenon seems to disappear if we lower the load on the servers by disabling the Couchbase writes for one of the two software projects writing to Couchbase.
But if we enable the writes again, after around 10 minutes we can see this in the memcached.log on the failing node:
Tue Dec 17 12:29:05.010547 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:29:05.010576 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=277 rev=522
Tue Dec 17 12:29:08.748103 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:29:08.748257 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=321 rev=948
Tue Dec 17 12:40:17.354448 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 12:40:17.354476 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=303 rev=491
This error then happens around 5 times within four hours:
Tue Dec 17 14:19:32.145071 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
And after these four hours it starts spamming this constantly (maybe because the load increased heavily, since in the evening our page generates much more load than in the morning or at noon), together with this "error from mccouch":
Tue Dec 17 16:42:30.875343 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:42:36.493317 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:25.239876 CET 3: (CENSORED) Received error[86] from mccouch for unknown
Tue Dec 17 16:43:25.240052 CET 3: (CENSORED) Retry notify CouchDB of update, vbucket=296 rev=483
Tue Dec 17 16:43:25.903997 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:31.906178 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:36.913045 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:42.919114 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:48.920354 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:43:54.924017 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
Tue Dec 17 16:44:00.928572 CET 3: (CENSORED) TAP (Producer) eq_tapq:replication_ns_1#10.65.20.12 - Suspend for 5.00 secs
We have no clue what is happening here, or why this failing node seems to reject every replication and keeps throwing this error.
Do you have any idea?
Thanks for all your help and greetings from Cologne,
Andy!
Seeing as you just want to delete all the items in the bucket, have you tried simply deleting and re-creating the bucket?
This will be much faster than a flush, as flush actually needs to send a delete request for every document in the bucket.
I can't find it in the docs at the moment, but I think flush is not really recommended with the latest versions.
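For reference, the delete/re-create can also be done with couchbase-cli instead of the web UI. The host, credentials, bucket name, and sizes below are placeholders, and the exact flag names should be double-checked against the CLI docs for your 2.x version:

# drop the bucket entirely (much cheaper than flushing 13M items)
couchbase-cli bucket-delete -c localhost:8091 -u Administrator -p password --bucket=mybucket
# re-create it with the same quota and replica count as before
couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password \
  --bucket=mybucket --bucket-type=couchbase --bucket-ramsize=2048 --bucket-replica=1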
You don't say what your operating system is. If it's Linux, try checking the maximum number of open files/sockets allowed for the user running Couchbase. Check the file /etc/security/limits.conf.
The command to check this on Linux is: ulimit -Hn.
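For example, something along these lines; "couchbase" is assumed to be the user the service runs as, and the value is only an illustration:

# check the current hard limit as seen by the couchbase user
su - couchbase -s /bin/sh -c 'ulimit -Hn'
# if it is low, raise it in /etc/security/limits.conf and restart the service
couchbase  soft  nofile  40960
couchbase  hard  nofile  40960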
Hope that helps.
Daniel
I think you should try these settings:
http://docs.couchbase.com/couchbase-manual-2.1/#specifying-backoff-for-replication

mongorestore behaving differently on different machines?

I run the following:
mongorestore -d connect connect
on my local machine and it works fine. On my development machine on Amazon I get the following output from the same command, the same database dump, and the same version of MongoDB (2.0.4):
don't know what to do with file [connect/connect/channels.metadata.json]
don't know what to do with file [connect/connect/movies.metadata.json]
Thu Dec 12 09:11:46 connect/connect/movies.bson
Thu Dec 12 09:11:46 going into namespace [connect.movies]
2667 objects found
Thu Dec 12 09:11:46 connect/connect/teams.bson
Thu Dec 12 09:11:46 going into namespace [connect.teams]
335 objects found
don't know what to do with file [connect/connect/broadcasts.metadata.json]
Thu Dec 12 09:11:46 connect/connect/channels.bson
Thu Dec 12 09:11:46 going into namespace [connect.channels]
82 objects found
don't know what to do with file [connect/connect/series.metadata.json]
Thu Dec 12 09:11:46 connect/connect/sportsevents.bson
Thu Dec 12 09:11:46 going into namespace [connect.sportsevents]
24 objects found
The imported data is not complete. What am I doing wrong?
The metadata.json files are only created in MongoDB 2.2 or newer, so you definitely have a newer version of mongodump on your local machine than on your development machine (2.0.4).
The metadata.json file includes useful information like index definitions and capped collection properties. If you try to restore using an older version of mongorestore, it won't know how to handle those files and so your restore will not be complete. If you are relying on newer features of MongoDB such as the Aggregation Framework, these also won't be available in MongoDB 2.0.x.
You should upgrade your development machine on AWS to match the version on your local machine. If you are using a 2.2.x or 2.4.x that isn't the latest production point release in that series, you should also upgrade your local machine at the same time.
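A quick way to confirm the mismatch is to compare the tool versions on both machines before restoring (assuming the standard --version flag of the MongoDB tools):

# on the local machine, where the dump was taken
mongodump --version
# on the AWS development machine, where the restore runs
mongorestore --version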