When running the update-database command against my MSSQL server to seed a large dataset parsed from CSV, I am getting a stack overflow exception.
I seed the data from a number of legacy CSV files in OnModelCreating.
It is a large dataset, containing around 9.5k lines of input split across a number of POCOs.
-verbose gives no further information on the issue other than that it is trying to apply the migration.
I am using Visual Studio 2019 Community and EntityFrameworkCore 3.1.3.
Can I safely comment out some of the seeding (e.g. seed only 50% of the data) in the Up() method of the migration?
How would I then seed the remaining data safely? (Would EF, via OnModelCreating, check the data on the next add-migration and generate the missing inserts, or would it assume the data was already seeded after checking the migrations table?)
Your InitialCreate.Up(MigrationBuilder) method is too big and is blowing up the JITter. The StackOverflowException occurs when the MSIL for the method is compiled to machine code.
This is because there's just too much code of the form
migrationBuilder.InsertData(
table: "Addresses",
columns: new[] { "Id", "Address1", "Address2", "Address3", "Country", "DateCreated", "IsArchived", "IsPrimaryAddress", "LastModifiedDate", "LastModifiedUser", "NickName", "Notes", "PostCode", "Region", "Town" },
values: new object[,]
{
{ 1, "669A George Street", null, null, "", new DateTime(2020, 3, 28, 14, 58, 51, 108, DateTimeKind.Utc).AddTicks(1514), null, true, new DateTime(2020, 3, 28, 14, 58, 51, 108, DateTimeKind.Utc).AddTicks(1545), null, null, null, "AB25 3XP", "", "Aberdeen" },
{ 12307, "7 Moredon Road", null, null, "", new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8799), null, true, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8800), null, null, null, "SN25 3DQ ", "", "Swindon" },
{ 12306, "", null, null, null, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8735), null, false, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8736), null, null, null, null, null, null },
{ 12305, "67 HIGH STREET, ", null, null, "", new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8690), null, true, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8690), null, null, null, "TQ95 5NU", "", "TOTNES" },
{ 12304, "", null, null, null, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8577), null, false, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8579), null, null, null, null, null, null },
{ 12303, "BOWS KITCHEN, 10 MARGATE PLACE,", null, null, "", new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8530), null, true, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8531), null, null, null, "CT9 1EN", "", "MARGATE" },
{ 12302, "", null, null, null, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8464), null, false, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8465), null, null, null, null, null, null },
{ 12308, "", null, null, null, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8849), null, false, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8850), null, null, null, null, null, null },
{ 12301, "270 ST SAVIOURS RD", null, null, "", new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8417), null, true, new DateTime(2020, 3, 28, 14, 58, 51, 198, DateTimeKind.Utc).AddTicks(8418), null, null, null, "LE5 4HF", "", "LEICESTER" },
in the method. To resolve this, the best approach is to read the data for each InsertData call from disk at runtime, or to embed the data in your assembly as, say, JSON strings and convert them to an object[,] at runtime.
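If you go the JSON route, a minimal sketch might look like the following (the file path, the Newtonsoft.Json dependency, and the row shape are all assumptions, not part of your project; DateTime columns would likely need explicit parsing rather than the boxed default):

```csharp
// Sketch: load seed rows from a JSON file shipped alongside the migration,
// so Up() contains one small call instead of thousands of inline literals.
// "Seed/addresses.json" is a hypothetical path.
using System.IO;
using Newtonsoft.Json;

internal static class SeedData
{
    public static object[,] LoadRows(string path)
    {
        // Each JSON row is an array of column values, in the same order
        // as the columns: argument passed to InsertData.
        var rows = JsonConvert.DeserializeObject<object[][]>(File.ReadAllText(path));
        var values = new object[rows.Length, rows[0].Length];
        for (var i = 0; i < rows.Length; i++)
            for (var j = 0; j < rows[i].Length; j++)
                values[i, j] = rows[i][j];
        return values;
    }
}

// Then, in the migration:
// migrationBuilder.InsertData(
//     table: "Addresses",
//     columns: new[] { "Id", "Address1", /* ... */ },
//     values: SeedData.LoadRows("Seed/addresses.json"));
```

This keeps Up() tiny regardless of how many rows you seed, since the data never becomes MSIL that the JIT has to compile.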
Related
RabbitMQ version 3.8.16.
I followed this guide and tried enabling the plugin:
sudo rabbitmq-plugins enable rabbitmq_auth_backend_oauth2
However, it throws an error:
** (CaseClauseError) no case clause matching: {:could_not_start, :jose, {:jose, {{:shutdown, {:failed_to_start_child, :jose_server, {{:case_clause, {:ECPrivateKey, 1, <<104, 152, 88, 12, 19, 82, 251, 156, 171, 31, 222, 207, 0, 76, 115, 88, 210, 229, 36, 106, 137, 192, 81, 153, 154, 254, 226, 38, 247, 70, 226, 157>>, {:namedCurve, {1, 2, 840, 10045, 3, 1, 7}}, <<4, 46, 75, 29, 46, 150, 77, 222, 40, 220, 159, 244, 193, 125, 18, 190, 254, 216, 38, 191, 11, 52, 115, 159, 213, 230, 77, 27, 131, 94, 17, ...>>, :asn1_NOVALUE}}, [{:jose_server, :check_ec_key_mode, 2, [file: 'src/jose_server.erl', line: 189]}, {:lists, :foldl, 3, [file: 'lists.erl', line: 1267]}, {:jose_server, :support_check, 0, [file: 'src/jose_server.erl', line: 153]}, {:jose_server, :init, 1, [file: 'src/jose_server.erl', line: 93]}, {:gen_server, :init_it, 2, [file: 'gen_server.erl', line: 423]}, {:gen_server, :init_it, 6, [file: 'gen_server.erl', line: 390]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 226]}]}}}, {:jose_app, :start, [:normal, []]}}}}
(rabbitmqctl 3.8.0-dev) lib/rabbitmq/cli/plugins/plugins_helpers.ex:210: RabbitMQ.CLI.Plugins.Helpers.update_enabled_plugins/2
(rabbitmqctl 3.8.0-dev) lib/rabbitmq/cli/plugins/plugins_helpers.ex:107: RabbitMQ.CLI.Plugins.Helpers.update_enabled_plugins/4
(rabbitmqctl 3.8.0-dev) lib/rabbitmq/cli/plugins/commands/enable_command.ex:121: anonymous fn/6 in RabbitMQ.CLI.Plugins.Commands.EnableCommand.do_run/2
(elixir 1.10.4) lib/stream.ex:1325: anonymous fn/2 in Stream.iterate/2
(elixir 1.10.4) lib/stream.ex:1538: Stream.do_unfold/4
(elixir 1.10.4) lib/stream.ex:1609: Enumerable.Stream.do_each/4
(elixir 1.10.4) lib/stream.ex:956: Stream.do_enum_transform/7
(elixir 1.10.4) lib/stream.ex:1609: Enumerable.Stream.do_each/4
{:case_clause, {:could_not_start, :jose, {:jose, {{:shutdown, {:failed_to_start_child, :jose_server, {{:case_clause, {:ECPrivateKey, 1, <<104, 152, 88, 12, 19, 82, 251, 156, 171, 31, 222, 207, 0, 76, 115, 88, 210, 229, 36, 106, 137, 192, 81, 153, 154, 254, 226, 38, 247, 70, 226, ...>>, {:namedCurve, {1, 2, 840, 10045, 3, 1, 7}}, <<4, 46, 75, 29, 46, 150, 77, 222, 40, 220, 159, 244, 193, 125, 18, 190, 254, 216, 38, 191, 11, 52, 115, 159, 213, 230, 77, 27, 131, ...>>, :asn1_NOVALUE}}, [{:jose_server, :check_ec_key_mode, 2, [file: 'src/jose_server.erl', line: 189]}, {:lists, :foldl, 3, [file: 'lists.erl', line: 1267]}, {:jose_server, :support_check, 0, [file: 'src/jose_server.erl', line: 153]}, {:jose_server, :init, 1, [file: 'src/jose_server.erl', line: 93]}, {:gen_server, :init_it, 2, [file: 'gen_server.erl', line: 423]}, {:gen_server, :init_it, 6, [file: 'gen_server.erl', line: 390]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 226]}]}}}, {:jose_app, :start, [:normal, []]}}}}}
Any pointers or documentation for this configuration would be appreciated.
Thanks,
Sajith
Well, RabbitMQ 3.8.5 seems to work, so I assume the plugin built with 3.8.16 has a problem.
How can I get region from the selected result from autocomplete?
In the result I am getting, there is a 3rd object named region, but it is actually the department, not the region.
Here is the example address:
54b route de brie, 91800 Brunoy, France
Mapbox is giving me: Essonne // that's the department, not the region
But actually it should be: Ile-de-France
How do I get the correct region?
Here is my working demo:
https://jsfiddle.net/rv085oL1/
That information isn't included. But if you just need your site to work in France, it would be straightforward to include a lookup table mapping from département to région, using the last two characters of the short_code. Here's one: https://gist.github.com/SiamKreative/f1074ed95507e69d08a0
"regions": {
"alsace": [67, 68],
"aquitaine": [40, 47, 33, 24, 64],
"auvergne": [43, 3, 15, 63],
"basse-normandie": [14, 61, 50],
"bourgogne": [21, 58, 71, 89],
"bretagne": [29, 35, 22, 56],
"centre": [45, 37, 41, 28, 36, 18],
"champagne-ardenne": [10, 8, 52, 51],
"corse": ["2b", "2a"],
"franche-compte": [39, 25, 70, 90],
"haute-normandie": [27, 76],
"languedoc-roussillon": [48, 30, 34, 11, 66],
"limousin": [19, 23, 87],
"lorraine": [55, 54, 57, 88],
"midi-pyrennees": [46, 32, 31, 12, 9, 65, 81, 82],
"nord-pas-de-calais": [62, 59],
"pays-de-la-loire": [49, 44, 72, 53, 85],
"picardie": [2, 60, 80],
"poitou-charentes": [17, 16, 86, 79],
"provences-alpes-cote-dazur": [4, 5, 6, 13, 84, 83],
"rhones-alpes": [38, 42, 26, 7, 1, 74, 73, 69],
"ile-de-france": [77, 75, 78, 93, 92, 91, 95, 94]
},
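With that table loaded, the lookup itself is only a few lines. A sketch (the `regions` object here is a trimmed copy of the gist above, and `regionFromShortCode` is a hypothetical helper; the "FR-91"-style input assumes Mapbox's district short_code format):

```javascript
// Département → région table (trimmed; fill in the rest from the gist above).
const regions = {
  "corse": ["2b", "2a"],
  "ile-de-france": [77, 75, 78, 93, 92, 91, 95, 94],
  // ... remaining entries from the gist
};

function regionFromShortCode(shortCode) {
  // "FR-91" -> "91"; Corsica uses "2a"/"2b", every other département is numeric.
  const dept = shortCode.slice(-2).toLowerCase();
  const key = /^\d+$/.test(dept) ? parseInt(dept, 10) : dept;
  for (const [region, depts] of Object.entries(regions)) {
    if (depts.includes(key)) return region;
  }
  return null; // unknown département
}

console.log(regionFromShortCode("FR-91")); // "ile-de-france"
```

So for the Brunoy example, département 91 (Essonne) maps to Île-de-France, which is the value the question is after.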
I am trying to create a bar chart where each bar has a different set of data.
This is how things normally work in Chart.js (without a nested array of data), but it is not helpful for me:
datasets: [
{
label: "My First Label",
stack: "Stack 0",
data: [65, 59, 20, 81, 56, 55, 40],
},
{
label: "My Second Label",
stack: "Stack 0",
data: [65, 59, 20, 81, 56, 55, 40],
}
]
And the code below doesn't work, as I am trying to feed data as a nested array. Is there a way to make it work?
datasets: [
{
label: "My First Label",
data: [[65, 59, 20, 81], [56, 55, 40]],
},
{
label: "My Second Label",
data: [[65, 59, 20], [30, 81, 56, 55, 40]],
}
]
I need to receive some information from Weather Company Data For IBM Bluemix APIs about a specific period (from jan 2012 to jan 2015).
The documentation includes this example API:
https://twcservice.mybluemix.net:443/api/weather/v1/geocode/33.40/-83.42/almanac/daily.json?units=e&start=0112&end=0115
But this is the result:
{"metadata":{"transaction_id":"1472145329818:-319071226","status_code":400},"success":false,"errors":[{"error":{"code":"PVE-0003","message":"The field 'start' contains a value '112' which is outside the expected range of [1 to 12]."}}]}
Can you let me know how I can retrieve the historical information?
Thank you
https://twcservice.eu-gb.mybluemix.net/api/weather/v1/geocode/33.40/-83.42/almanac/daily.json?start=0112&end=0115&units=e
It works for me! Here is the response:
{
"metadata": {
"language": "en-US",
"transaction_id": "1501849033676:-1423101749",
"version": "1",
"latitude": 33.4,
"longitude": -83.42,
"units": "e",
"expire_time_gmt": 1501869684,
"status_code": 200
},
"almanac_summaries": [
{
"class": "almanac",
"station_id": "095988",
"station_name": "MONTICELLO",
"almanac_dt": "0112",
"interval": "D",
"avg_hi": 56,
"avg_lo": 28,
"record_hi": 75,
"record_hi_yr": 1916,
"record_lo": 0,
"record_lo_yr": 1982,
"mean_temp": 42,
"avg_precip": 0.13,
"avg_snow": 0.1,
"record_period": 30
},
{
"class": "almanac",
"station_id": "095988",
"station_name": "MONTICELLO",
"almanac_dt": "0113",
"interval": "D",
"avg_hi": 56,
"avg_lo": 28,
"record_hi": 77,
"record_hi_yr": 1911,
"record_lo": 8,
"record_lo_yr": 1918,
"mean_temp": 42,
"avg_precip": 0.12,
"avg_snow": 0,
"record_period": 30
},
{
"class": "almanac",
"station_id": "095988",
"station_name": "MONTICELLO",
"almanac_dt": "0114",
"interval": "D",
"avg_hi": 56,
"avg_lo": 28,
"record_hi": 78,
"record_hi_yr": 1937,
"record_lo": 10,
"record_lo_yr": 1918,
"mean_temp": 42,
"avg_precip": 0.13,
"avg_snow": 0,
"record_period": 30
},
{
"class": "almanac",
"station_id": "095988",
"station_name": "MONTICELLO",
"almanac_dt": "0115",
"interval": "D",
"avg_hi": 56,
"avg_lo": 28,
"record_hi": 80,
"record_hi_yr": 1932,
"record_lo": 11,
"record_lo_yr": 1964,
"mean_temp": 42,
"avg_precip": 0.12,
"avg_snow": 0,
"record_period": 30
}
]
}
I have a simple "users" collection inside which, right now, I have only 2 documents.
{
"_id": ObjectId("4ef8e1e41d41c87069000074"),
"email_id": {
"0": 109,
"1": 101,
"2": 64,
"3": 97,
{
"_id": ObjectId("4ef6d2641d41c83bdd000001"),
"email_id": {
"0": 109,
"1": 97,
"2": 105,
"3": 108,
Now if I try to create a new index with {unique: true} on the email_id field, MongoDB complains with "E11000 duplicate key error index: db.users.$email_id dup key: { : 46 }". I get the same error even after specifying {dropDups: true}; however, I don't think duplicates are the issue here, as both documents have different email ids stored.
I am not sure what's going on here; any pointers will be greatly appreciated.
Edit: Full view of documents:
{
"_id": ObjectId("4ef8e1e41d41c87069000074"),
"email_id": {
"0": 109,
"1": 101,
"2": 64,
"3": 97,
"4": 98,
"5": 104,
"6": 105,
"7": 110,
"8": 97,
"9": 118,
"10": 115,
"11": 105,
"12": 110,
"13": 103,
"14": 104,
"15": 46,
"16": 99,
"17": 111,
"18": 109
}
}
and
{
"_id": ObjectId("4ef6d2641d41c83bdd000001"),
"email_id": {
"0": 109,
"1": 97,
"2": 105,
"3": 108,
"4": 115,
"5": 102,
"6": 111,
"7": 114,
"8": 97,
"9": 98,
"10": 104,
"11": 105,
"12": 110,
"13": 97,
"14": 118,
"15": 64,
"16": 103,
"17": 109,
"18": 97,
"19": 105,
"20": 108,
"21": 46,
"22": 99,
"23": 111,
"24": 109
}
}
There are a couple more fields like "display_name", "registered_since", etc., which I have omitted from the display above (I don't think they have any role in the error thrown; if you still need them I can paste the entire documents here).
I am using the Erlang MongoDB driver for communication with my Mongo instance. All fields, as can be seen, are saved as binary bytes; that's why you see such a weird email_id in the document.
Note: the binary byte format is not forced by my code logic; I pass a string email_id inside my BSON documents, but I always end up seeing my data as binary bytes. (Probably because of how the Erlang MongoDB driver is written; I didn't really investigate this, since my find(), find_one() and other queries work as expected even with fields saved as binary bytes.)
Edit: here is the output of db.users.findOne():
{
"_id" : ObjectId("4ef6d2641d41c83bdd000001"),
"email_id" : [
109,
97,
105,
108,
115,
102,
111,
114,
97,
98,
104,
105,
110,
97,
118,
64,
103,
109,
97,
105,
108,
46,
99,
111,
109
],
"display_name" : [
65,
98,
104,
105,
110,
97,
118,
43,
83,
105,
110,
103,
104
],
"provider" : [
106,
97,
120,
108,
46,
105,
109
],
"provider_id" : [ ]
}
When MongoDB indexes an array field, it actually indexes the individual elements in the array. This is to efficiently support queries looking for a particular element of an array, like:
db.users.find({email_id: 46})
Since this element (46) exists in both email_id arrays, there are duplicate keys in your unique index.
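You can see the collision by decoding the stored byte arrays (values copied from the documents above); both addresses contain the byte 46, which is the ASCII code for '.':

```javascript
// Decode the two email_id byte arrays from the documents above.
const emailA = [109, 101, 64, 97, 98, 104, 105, 110, 97, 118, 115, 105, 110,
                103, 104, 46, 99, 111, 109];
const emailB = [109, 97, 105, 108, 115, 102, 111, 114, 97, 98, 104, 105, 110,
                97, 118, 64, 103, 109, 97, 105, 108, 46, 99, 111, 109];

console.log(String.fromCharCode(...emailA)); // "me@abhinavsingh.com"
console.log(String.fromCharCode(...emailB)); // "mailsforabhinav@gmail.com"

// Different addresses, but as arrays both contain the element 46 ('.'),
// which is what a unique multikey index on email_id collides on.
console.log(emailA.includes(46) && emailB.includes(46)); // true
```

Storing email_id as a plain string (a BSON UTF-8 value) rather than a list of bytes would make the index treat each address as a single key, and the unique constraint would then behave as you expect.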
I'm not sure why you would get this error if you have dropDups: true set... Can you show a code sample of how you're invoking createIndex? You should also try dropDups: 1, as MongoDB erroneously treats 1 and true differently in this context (see https://jira.mongodb.org/browse/SERVER-4562).
For others having this problem: check your Mongo version with db.version(). If you are running Mongo 3.x and try to use dropDups to clear duplicates, it will fail and give you this error.