HoloLens store submission input not working - unity3d

Building from VS to the device works just fine, but a build with the Master configuration doesn't. Since Master is required for store submission, the version downloaded from the store has the same problem.
I have been trying to track down the issue, and basically the input is just not working. If I perform a tap, all I get is:
The thread 0x12a4 has exited with code 0 (0x0).
Spatial mapping also behaves strangely, which got me thinking this may be related to multithreading (the only correlation I can think of is that both use multithreading internally).
At first I thought it was only one of our apps, but then I was told that a second, totally different app is having the same problem.
Does anyone know what to do?
I posted on the HoloLens forum about three weeks ago, but no one has replied yet.
Contacting Microsoft is basically a waste of time, as I will probably never reach anyone with that kind of knowledge.
Any ideas?

The issue seems to come from the multiple .rcs files I was keeping around. Keeping only the latest one seems to let the Master build work.

I had this problem once, a long time ago; to be honest, I think it has something to do with the OS.
I started on a Windows 10 Home PC and was getting the error. When I switched to another PC with Windows 10 Enterprise, it worked fine, so I have been sticking with the Windows 10 Enterprise PC ever since.


My $LFS storage is gone every time I reboot my machine

I'm still a bit of a noob, so I hope you can understand my question and answer it respectfully.
I don't have any problems with the installation itself (the "book" is that good). But I can't afford the electricity to leave the machine running all the time, so I have to shut it down and reboot after finishing each stage.
And as expected, whenever I boot the machine back up, all my prepared files and folders on the $LFS path (/mnt/lfs, used specifically for building the LFS system) are suddenly gone. I say "as expected" because I've run into this before when building Arch, Gentoo, etc. Back then I was booted from a live USB, so I didn't think much of it. But now, on an installed Arch machine, I'm still hitting the same situation.
I think the main problem is the mount point or something like that. Does anyone have any ideas?
(Also, are packages like gcc-12.2.0 available for building LFS?)
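In case it helps anyone checking the mount-point theory, here is a minimal sketch in Python (assuming /mnt/lfs is the path in question, as above): a mount does not survive a reboot unless it is listed in /etc/fstab, so everything built under an unmounted $LFS only looks gone until the partition is mounted again.

    # Minimal check: after a reboot, is /mnt/lfs an actual mounted
    # filesystem, or just an empty directory on the root disk?
    import os

    LFS = "/mnt/lfs"

    if os.path.ismount(LFS):
        print(f"{LFS} is a mount point; the build files should be there")
    else:
        print(f"{LFS} is NOT mounted; remount the LFS partition "
              f"(or add it to /etc/fstab) and the files should reappear")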

Concurrent Connection Test

So I ran into a network problem the other day, and I've been trying to find a way to test for it in the future.
I had a lot of users online at once and hit my router's maximum IP connection limit (not DHCP leases! TCP/UDP connections).
Once I figured out what the problem was, it was fairly simple to fix. However, I was wondering if there is any way to simulate this kind of activity. Everything worked fine when I tested it; it wasn't until I had 150+ users that I discovered I had a problem.
I have spent the last 3-4 hours looking for such a test/audit tool. Here is what I found:
http://bittwist.sourceforge.net/ - DDoS simulator (can't make it work; I can barely get past 300 connections)
http://stevesouders.com/hpws/max-connections.php - Browser concurrent-connection tester (this hits the browser's limit (6 in Chrome) without making a dent in my router, even open in 70+ tabs at the same time)
http://www.smallnetbuilder.com/lanwan/lanwan-howto/31103-how-we-test-hardware-routers-revision-3 - Some tool linked about halfway down the page (reads like it's exactly what I want, but it barely has a noticeable effect on my router)
http://www.http-kit.org/600k-concurrent-connection-http-kit.html - Concurrent HTTP connection simulator (this one seems like it would do what I want, but my Linux-fu is limited and I can't get it working, *tear*)
So do you guys have a tool you test your routers with? I would love something that does both TCP and UDP.
(BTW, for anyone misunderstanding: I'm not trying to test "speed", just the sheer number of connections.)
Thanks!
Kz
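For reference, a minimal sketch in Python of the kind of test being described: it opens and holds N TCP connections so that the router's connection table fills up. The target address and count are placeholders; you would point it at a host you control on the far side of the router.

    # Open and hold COUNT TCP connections through the router.
    # TARGET and COUNT are placeholders -- adjust for your own setup.
    import socket
    import time

    TARGET = ("192.0.2.10", 80)   # example address; use a host beyond the router
    COUNT = 500

    conns = []
    try:
        for i in range(COUNT):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(5)
            try:
                s.connect(TARGET)
                conns.append(s)
            except OSError as e:
                print(f"failed at connection {i}: {e}")
                break
        print(f"holding {len(conns)} connections; Ctrl-C to release")
        time.sleep(300)           # keep them alive while you watch the router
    finally:
        for s in conns:
            s.close()

Note that the local machine's open-file limit (ulimit -n) can cap the count before the router does, and UDP "connections" in a router are just flow-table entries, so a UDP equivalent would send datagrams from many separate sockets so the router creates one entry per socket.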

Log4perl problems on Ubuntu Server

I have been running a large-ish site for years, with a typical nginx / apache setup, and all the "pages" are mod_perl. Up until recently, I was running on FreeBSD. After a hardware replacement, and for other reasons, I was forced to migrate to Ubuntu (12.04.2 LTS), which I use on many other servers, so no big deal. However, I now have a problem with my logs.
For some reason, more and more "actions" are no longer logged through Log4perl. This was never a problem on my previous setup, but now I seem to "lose" between 2 and 15% of my log entries. This is checked and verified by logging the same data to a database at the same time.
Does anyone have a clue why this would happen?
Is there something I should know about large log files and Ubuntu? (It's not that large, to be honest: 390 MB at the moment.)
I get nothing in my error logs anywhere, and since the database logging happens AFTER the $log->info("ENTRY HERE"), the script obviously doesn't crash. But I am missing a lot of those ENTRY HEREs :)
The log in question is "hit" about once per second on average, which I wouldn't have thought would be a big problem?
Could there be "too many processes" trying to write to the log in parallel, causing locking issues that prevent data from being appended to the file? Are there any typical Ubuntu settings that might need adjusting for something like this?
Any help would be greatly appreciated.
Spinner
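Not an answer, but one way to test the parallel-append hypothesis directly is a small harness like this (sketched in Python; the path and counts are arbitrary): N processes each append M lines to one file, and you count the lines afterwards. If the count comes up short, concurrent appends on that filesystem are losing data; if not, the loss is more likely in how the Log4perl handles are shared across the forked mod_perl workers.

    # N writer processes append to one file; compare expected vs. actual lines.
    from multiprocessing import Process

    LOG = "/tmp/append_test.log"   # arbitrary test path
    WRITERS, LINES = 20, 1000

    def writer(wid):
        for i in range(LINES):
            # short-lived append per write, mimicking many parallel writers
            with open(LOG, "a") as f:
                f.write(f"writer={wid} line={i}\n")

    if __name__ == "__main__":
        open(LOG, "w").close()     # truncate before the run
        procs = [Process(target=writer, args=(w,)) for w in range(WRITERS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        with open(LOG) as f:
            n = sum(1 for _ in f)
        print(f"expected {WRITERS * LINES}, found {n}")

(On Linux, small writes to a file opened in append mode are normally atomic, so this harness will usually report no loss.)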

Core data fails to open store: "Error validating url for store"

I've been working on an app for quite a while and suddenly started hitting this error when the app tries to open a Core Data store. I hadn't made any changes to my data model or the data-access code for over a month, so I don't think it can be anything I'm doing wrong in interacting with Core Data. (Meaning the URLs are OK, the call pattern is OK, etc.)
Interestingly, these are the log lines immediately before the error:
/SourceCache/GoogleMobileMaps/GoogleMobileMaps-217.2/googlenav/mac/TileStore.mm:209 unable to open /var/mobile/Library/Caches/MapTiles/MapTiles.sqlitedb: (14) unable to open database file
/SourceCache/GoogleMobileMaps/GoogleMobileMaps-217.2/googlenav/mac/TileStore.mm:155 file doesn't exist /var/mobile/Library/Caches/MapTiles/MapTiles.sqlitedb: (2)
/SourceCache/GoogleMobileMaps/GoogleMobileMaps-217.2/googlenav/mac/TileStore.mm:209 unable to open /var/mobile/Library/Caches/MapTiles/MapTiles.sqlitedb: (14) unable to open database file
/SourceCache/GoogleMobileMaps/GoogleMobileMaps-217.2/googlenav/mac/TileStore.mm:235 unable to open /var/mobile/Library/Caches/MapTiles/MapTiles.sqlitedb: tile data will not be cached
So it looks like there is "something" wrong with the SQLite layer in general. Has anybody seen this before? Is there a recovery option besides wiping my device? It's currently running 3.1.3, and I'd really hate to upgrade to 4 because it's currently my only way to test that the app will run for people who haven't upgraded.
One thing I did notice: shortly after I first hit this error, I wanted to see if any other apps were having problems. Sure enough, the iPod app had forgotten everything about me, but it was able to recover after syncing. So maybe there is some recovery mode? (Although, even if I can recover for my app, the Maps APIs might burn a lot of bandwidth if they can't cache the map tiles...)
Ryan
For what it's worth, I found the culprit, and it has nothing to do with Core Data, SQLite, or the file system. The app uses a lot of small audio clips, and I was pre-caching them all as AVAudioPlayers. I knew this was probably a bad idea, but it was quick and easy, so I figured I'd keep doing it that way until I hit some kind of problem. (I'd put a wrapper around the players so that I could delay instantiation if required without affecting the rest of the system, which is what I'm doing now.) I just assumed the problem would show up as an audio player problem and not somewhere else that seems totally unrelated.
I realized there must be a code error when I found the simulator misbehaving as well, but in a different, totally inexplicable way (keyed archives weren't being written properly). When I backed out my most recent change (adding a new batch of audio clips), the problems vanished.
Hopefully this helps someone in the future!
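For anyone curious what the delayed-instantiation wrapper looks like, here is a rough sketch in Python for illustration (the app itself uses AVAudioPlayer in Objective-C, and the names here are made up): the wrapper holds only the clip's path until the first play, so pre-caching hundreds of clips no longer opens hundreds of files up front.

    class FakePlayer:
        # Stand-in for AVAudioPlayer; the real one opens the audio file,
        # which is likely what consumes a file handle per cached clip.
        def __init__(self, path):
            self.path = path

        def play(self):
            print(f"playing {self.path}")

    class LazyPlayer:
        # Wrapper that defers creating the underlying player until first use.
        def __init__(self, clip_path):
            self.clip_path = clip_path
            self._player = None

        def play(self):
            if self._player is None:           # instantiate on demand
                self._player = FakePlayer(self.clip_path)
            self._player.play()

    clips = [LazyPlayer(f"clip_{i}.caf") for i in range(500)]  # no files opened yet
    clips[0].play()   # only now is a real player created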

How fast can you get a fixed bug into production?

I'm working with two very different applications.
App #1 is a web app where I have direct FTP access, so fixing bugs is pretty easy. Cat A bugs are usually fixed within a day. No problems here.
App #2 is an oil-business document control app, where we have to go through two acceptance test phases: end-user test and system test. Any bugs discovered after these phases remain until the next version, usually 2-3 months away. Every new release package is a huge cost. It's really hard to explain to the end users that they have to live with some of the bugs until the next version.
How do you deal with critical bugs that can't be fixed immediately?
The faster I fix bugs, the more bugs I find I need to fix.
In my personal opinion, the situation you describe is a very deep structural problem, and it should have been dealt with before the project started. Every programmer should know at least one person who can directly push changes if needed, and the procedure for this must be clear. Honestly, what about security issues or database problems with potential data loss? Of course, if you can't fix something directly, inform the staff and tell them "please don't do this", but the best way is to get the problem out of the world ASAP. I had a similar case with a terminal application where a program simply quit working after a button was pressed twice. The fix was trivial, but no one was allowed to apply it, and it literally cost hours for all the people depending on this thing to run. Demand a shortcut for important changes!
The speed with which management allows you to fix a bug is directly related to the cost management will endure while the bug remains unfixed.
I'm a one-man team. Nothing stands between me and my bugs :)
It really depends on a combination of the organisation's size, the system's size, the importance of the system, and the impact of the bug, e.g.:
One Man Shop or Low Impact System (quickest - App#1 above)
Time to fix bug = time to find bug + time to code fix + time to deploy to production
Large Organisation or Important System (longest - App#2 above)
Time to fix bug = time to find bug
  + time to document & prioritise bug
  + time to estimate cost
  + time to approve work on fix
  + time to design fix
  + time to document fix
  + time to code fix
  + time to document test plan
  + time to test fix
  + time to regression test
  + time to performance/load test
  + time to schedule & approve deployment
  + time to deploy fix
Edit: "How many Microsoft employees does it take to change a lightbulb?"[1] is an interesting read on the topic.
[1]: http://blogs.msdn.com/ericlippert/archive/2003/10/28/53298.aspx
The answer would be roughly the ratio of how much access you have to the production environment to the number of lives or the amount of money at stake.
Workarounds.
I once had a user deem a piece of functionality dead due to a bug, notify us, and wait until the bug was fixed, then tell us that during the downtime on that section they had been entering information into their old Excel version of the application (an Oracle APEX migration from Excel), and then nicely ask us the turnaround time for importing the data from their Excel application again. The turnaround for that was longer than the downtime for the original bug.