Updated Django version giving conflicting migrations in installed "contenttypes" app - django-migrations

I just upgraded from Django 2.2.6 to 4.0.3.
When I try to run my app locally, I see:
You have 18 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
When I try to run the migrations, I get:
CommandError: Conflicting migrations detected; multiple leaf nodes in the migration graph: (0001_initial 2, 0002_remove_content_type_name, 0002_remove_content_type_name 2 in contenttypes; 0001_initial 2, 0002_alter_permission_name_max_length 2, 0003_alter_user_email_max_length 2, 0004_alter_user_username_opts 2, 0005_alter_user_last_login_null 2, 0006_require_contenttypes_0002 2, 0007_alter_validators_add_error_messages 2, 0008_alter_user_username_max_length 2, 0009_alter_user_last_name_max_length 2, 0010_alter_group_name_max_length 2, 0011_update_proxy_permissions 2, 0012_alter_user_first_name_max_length, 0012_alter_user_first_name_max_length 2 in auth; 0001_initial, 0001_initial 2 in sessions; 0001_initial 2, 0002_logentry_remove_auto_add 2, 0003_logentry_add_action_flag_choices, 0003_logentry_add_action_flag_choices 2 in admin).
To fix them run 'python manage.py makemigrations --merge'
If I try to run the merge command, I get:
ValueError: Could not find common ancestor of ['0001_initial', '0001_initial 2']
It seems to be coming from the built-in contenttypes app:
python ./manage.py showmigrations contenttypes
contenttypes
[ ] 0001_initial 2
[X] 0001_initial
[X] 0002_remove_content_type_name
[ ] 0002_remove_content_type_name 2
How can I edit the migrations of a built-in app? I'd like to remove the duplicate migrations so that they don't break my production server when I deploy my app, but I can't find them anywhere. Please help me :)

Try deleting your virtualenv, creating a new one, and reinstalling all packages. The duplicated migrations (e.g. "0001_initial 2") are most likely stray copies of the migration files that ended up inside the packages in your old environment, so a clean environment gets rid of them.
This fixed the problem for me.
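For reference, a minimal sketch of that approach, assuming a plain virtualenv named venv and a requirements.txt file (both names are assumptions, adjust to your own setup):
# Recreate the environment so any stray duplicated migration files
# (e.g. "0001_initial 2.py") inside site-packages are removed with it.
rm -rf venv                       # "venv" is an assumed path to your virtualenv
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt   # assumes your dependencies are listed here
python manage.py migrate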

Related

GPG Check fails on CentOS Stream 9, but not on Fedora 35

I am having an issue with a lab server running CentOS Stream 9: when I try to install Grafana, the GPG check fails. This is the output I get:
Importing GPG key 0x24098CB6:
Userid : "Grafana <info#grafana.com>"
Fingerprint: 4E40 DDF6 D76E 284A 4A67 80E4 8C8C 34C5 2409 8CB6
From : https://packages.grafana.com/gpg.key
Is this ok [y/N]: y
Key import failed (code 2). Failing package is: grafana-8.5.5-1.x86_64
GPG Keys are configured as: https://packages.grafana.com/gpg.key
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: GPG check FAILED
When I try the same on my local Fedora 35 machine, I get this:
Importing GPG key 0x24098CB6:
Userid : "Grafana <info#grafana.com>"
Fingerprint: 4E40 DDF6 D76E 284A 4A67 80E4 8C8C 34C5 2409 8CB6
From : https://packages.grafana.com/gpg.key
Is this ok [y/N]: y
Key imported successfully
Running transaction check
The package being downloaded is the same (grafana-8.5.5-1.x86_64.rpm) in both cases, I am using dnf for both installations, and the grafana.repo files are identical:
[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
I know I could just turn off the gpg checking, but I am not comfortable with a solution like that.
Any help resolving this would be greatly appreciated! Let me know if I should supply any more information.
I've quite recently swapped over to CentOS and Fedora, so I apologize if this has been resolved before, but I was unable to find it.
There has been a change to the default crypto policies in CentOS Stream 9: SHA-1 signatures are no longer accepted by default, which is why the same key imports fine on Fedora 35 but fails the GPG check on CentOS Stream 9. As a temporary workaround you can re-allow SHA-1:
update-crypto-policies --set DEFAULT:SHA1
Long term, the packages need to be re-signed with a SHA-256 or SHA-512 key instead of SHA-1.
Ref: https://access.redhat.com/articles/6846411
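A minimal sketch of applying the workaround and restoring the default policy afterwards (run as root; the package name is the one from the question):
update-crypto-policies --set DEFAULT:SHA1    # temporarily re-allow SHA-1 signatures
dnf install grafana                          # retry the install; the GPG check should now pass
update-crypto-policies --set DEFAULT         # revert once the repository is signed with SHA-256/512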

Azure DevOps TF400860: The current version of the following service is not supported: CodeSense. Version: 3, MinVersion: 3

I didn't find any similar question. I have Azure DevOps Server 2020 on-premises, and when I want to rebuild the code index I get the message TF400860: The current version of the following service is not supported: CodeSense. Version: 3, MinVersion: 3.
I tried the following command: tfsconfig codeIndex /reindexall /collectionName:MyCollection
But I get the same error with this command: tfsconfig codeIndex /indexingStatus /collectionName:MyCollection
What could be wrong?
Thanks

Mongo connector error: Unable to process oplog document

I am new to neo4j-doc-manager and I am trying to use it to view a collection from my MongoDB as a graph created in Neo4j, as per:
https://neo4j.com/developer/mongodb/
I have my MongoDB and Neo4j instances running locally and I'm using the following command:
mongo-connector -m mongodb://localhost:27017/axa -t http://<user_name>:<password>@localhost:7474/C:/Users/user_name/.Neo4jDesktop/neo4jDatabases/database-c791fa15-9a0d-4051-bb1f-316ec9f1c7df/installation-4.0.3/data/ -d neo4j_doc_manager
However I get an error:
2020-04-17 15:49:47,011 [ERROR] mongo_connector.oplog_manager:309 - **Unable to process oplog document** {'ts': Timestamp(1587118784, 2), 't': 9, 'h': 0, 'v': 2, 'op': 'i', 'ns': 'axa.talks', 'ui': UUID('3245621e-e204-49fc-8350-d9950246fa6c'), 'wall': datetime.datetime(2020, 4, 17, 10, 19, 44, 994000), 'o': {'session': {'title': '12 Years of Spring: An Open Source Journey', 'abstract': 'Spring emerged as a core open source project in early 2003 and evolved to a broad portfolio of open source projects up until 2015.'}, 'topics': ['keynote', 'spring'], 'room': 'Auditorium', 'timeslot': 'Wed 29th, 09:30-10:30', 'speaker': {'name': 'Juergen Hoeller', 'bio': 'Juergen Hoeller is co-founder of the Spring Framework open source project.', 'twitter': 'https://twitter.com/springjuergen', 'picture': 'http://www.springio.net/wp-content/uploads/2014/11/juergen_hoeller-220x220.jpeg'}}}
Traceback (most recent call last):
File "c:\users\user_name\pycharmprojects\axa_experience\venv\lib\site-packages\py2neo\core.py", line 258, in get
response = self.__base.get(headers=headers, redirect_limit=redirect_limit, **kwargs)
File "c:\users\user_name\pycharmprojects\axa_experience\venv\lib\site-packages\py2neo\packages\httpstream\http.py", line 966, in get
return self.__get_or_head("GET", if_modified_since, headers, redirect_limit, **kwargs)
File "c:\users\user_name\pycharmprojects\axa_experience\venv\lib\site-packages\py2neo\packages\httpstream\http.py", line 943, in __get_or_head
return rq.submit(redirect_limit=redirect_limit, **kwargs)
File "c:\users\user_name\pycharmprojects\axa_experience\venv\lib\site-packages\py2neo\packages\httpstream\http.py", line 452, in submit
return Response.wrap(http, uri, self, rs, **response_kwargs)
File "c:\users\user_name\pycharmprojects\axa_experience\venv\lib\site-packages\py2neo\packages\httpstream\http.py", line 489, in wrap
raise inst
**py2neo.packages.httpstream.http.ClientError: 404 Not Found**
Versions used:
Python - 3.8
mongoDB - 4.2.5
neo4j - 4.0.3
Any help in this regard would be really appreciated.
I was having the same problem and I think the issue has to do with the version of py2neo. mongo-connector only seems to work with py2neo 2.0.7, but Neo4j 4.0 doesn't work with that version. This is where I got stuck and found no way to fix it. Maybe using Neo4j 3.x could fix it, but that wouldn't work for me as I need 4.0 for a Fabric database. I've recently started looking into the APOC procedures for MongoDB instead. Hope this was helpful.
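If you still want to try the combination described above, a rough sketch of pinning the versions with pip (package names as used in the linked Neo4j guide; note that, as mentioned, py2neo 2.0.7 will not talk to a Neo4j 4.x server):
pip install "py2neo==2.0.7"                     # the version mongo-connector seems to need
pip install mongo-connector neo4j-doc-manager   # doc manager named in the linked guide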
The doc-manager library you are using requires the Mongo REST API to work, and in newer versions it no longer does. If you want to use it, you need a Mongo version < 3.2 (which still has the REST API active).

TYPO3 Upgrade Wizard Fails on DatabaseRowsUpdateWizard

I updated a project from TYPO3 7.6 to ^8 by following the official guide. The latest steps were the composer update. I removed extensions/packages not compatible with ^8 and updated the ones available for ^8. I'm able to reach the install tool, the TYPO3 admin backend and the frontend (with errors).
So I ended up at the step where I should use the upgrade wizards provided by the install tool. I completed a few wizards without any issues but then faced a tricky one: first I tried to run DatabaseRowsUpdateWizard within the install tool, but that failed with a memory error, so I tried the CLI approach with
php -d memory_limit=-1 vendor/bin/typo3cms upgrade:wizard DatabaseRowsUpdateWizard
The processing worked, but it ended up with the following error:
[ Helhum\Typo3Console\Mvc\Cli\FailedSubProcessCommandException ]
#1485130941: Executing command "upgrade:subprocess" failed (exit code: "1")
thrown in file vendor/helhum/typo3-console/Classes/Install/Upgrade/UpgradeHandling.php
in line 284
The command that initially failed is:
'/usr/bin/php7.2' 'vendor/bin/typo3cms' 'upgrade:subprocess' '--command' 'executeWizard' '--arguments' 'a:3:{i:0;s:24:"DatabaseRowsUpdateWizard";i:1;a:0:{}i:2;b:0;}'
And here is the subprocess exception:
[ Sub-process exception: TYPO3\CMS\Core\Resource\Exception\InvalidPathException ]
#1320286857: File ../disclaimer_de.html is not valid (".." and "//" is not allowed in path).
thrown in file typo3/sysext/core/Classes/Resource/Driver/AbstractHierarchicalFilesystemDriver.php
in line 71
I'm pretty much lost and don't know where to start to get this fixed. Help is much appreciated.
Issues like these usually stem from broken URLs in RTE fields as can be seen in the error output:
File ../disclaimer_de.html is not valid (".." and "//" is not allowed in path)
In this case you should manually prepare the database and run SQL statements which strip the broken/obsolete ../ prefix from all affected records. An example query:
UPDATE tt_content
SET bodytext = REPLACE(bodytext, 'href="../', 'href="')
WHERE bodytext LIKE '%href="../%';
Note that this query is very basic and can destroy your data, so run some SELECT statements first to verify what will actually change. Also keep a backup of your database at hand.
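For example, a preview of the rows the UPDATE above would touch could look like this (a sketch using the mysql CLI; the user and database names are placeholders):
# "typo3" / "typo3_db" are placeholder credentials for your TYPO3 database
mysql -u typo3 -p typo3_db -e \
  "SELECT uid, pid, bodytext FROM tt_content WHERE bodytext LIKE '%href=\"../%';"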
Sometimes custom or TER extensions such as tt_news also have RTE fields where you might come across the same issue. To fix that, just run the same query against the corresponding table and column.

Error 13038: Can't find any special indices: 2d (needs index), 2dsphere (needs index)

I've been searching for this for so many hours. Every time I call the 'near' method on my Model, it gives the following error:
2.0.0p247 :001 > Status.near(#coordinates, 10).to_a
Moped::Errors::QueryFailure: The operation: #<Moped::Protocol::Query
#length=157
#request_id=3
#response_to=0
#op_code=2004
#flags=[:slave_ok]
#full_collection_name="howsmycity_development.statuses"
#skip=0
#limit=0
#selector={"deleted_at"=>nil, "coordinates"=>{"$nearSphere"=>[74.3344609, 31.5130751], "$maxDistance"=>0.002526046147566618}}
#fields=nil>
failed with error 13038: "can't find any special indices: 2d (needs index), 2dsphere (needs index), for: { deleted_at: null, coordinates: { $nearSphere: [ 74.3344609, 31.5130751 ], $maxDistance: 0.002526046147566618 } }"
I've already tried running: rake db:mongoid:create_indexes
I'm using Ruby 2, Rails 4, Mongoid 4, MongoDB 2.4.4 and Geocoder 1.1.8. And BTW, I'm using the Mongoid-Paranoia gem too. I've also tried pointing all gems to their GitHub repos, with no luck. I've opened an issue here as well.
Any help appreciated.
After countless hours of debugging, I found that installing the Geocoder gem from HEAD actually fixed the problem, as the gem's author suggested:
gem 'geocoder', github: 'alexreisner/geocoder'
But at the time it didn't work for me. I think my database was corrupted; I say that because I had to completely remove all databases from my local machine and also remove and re-add the MongoHQ addon on my Heroku instance, just to make it work (since the problem persisted on Heroku as well).
Once I did that, I just re-ran rake db:mongoid:create_indexes and everything worked perfectly.
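If the rake task still does not create the index, one option is to add the 2d index directly in the mongo shell, a sketch assuming the database and collection names shown in the error output (MongoDB 2.4 uses ensureIndex; newer versions use createIndex):
# creates the geospatial index the $nearSphere query is complaining about
mongo howsmycity_development --eval 'db.statuses.ensureIndex({ coordinates: "2d" })'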