Ember CLI - Babel is too slow on Mac

For the last couple of days my Ember builds have been slow; here is the output:
version: 2.4.2
Livereload server on http://localhost:49152
Serving on http://localhost:4200/
Build successful - 76337ms.
Slowest Trees | Total
----------------------------------------------+---------------------
Babel | 70809ms
Slowest Trees (cumulative) | Total (avg)
----------------------------------------------+---------------------
Babel (20) | 72361ms (3618 ms)
How can I debug and fix this? I am running it on a Mac (SSD + 16 GB RAM).
Edit
After updating npm to the latest version, cleaning the npm cache, and reinstalling all node packages, it was quite fast, but after a couple of rebuilds it became slow again.
Build successful - 56809ms.
Slowest Trees | Total
----------------------------------------------+---------------------
Babel | 55422ms
Slowest Trees (cumulative) | Total (avg)
----------------------------------------------+---------------------
Babel (20) | 55580ms (2779 ms)
Edit #2
Does it have anything to do with which file changed? Here are two results:
file changed site/components/site/carpets/neworderform-form/template.hbs - 1822 lines in this template
Build successful - 62844ms.
Slowest Trees | Total
----------------------------------------------+---------------------
Babel | 61122ms
Slowest Trees (cumulative) | Total (avg)
----------------------------------------------+---------------------
Babel (20) | 61280ms (3064 ms)
file changed site/components/site/countries/country-form/template.hbs - 64 lines total in this template
Build successful - 1322ms.
Slowest Trees | Total
----------------------------------------------+---------------------
SourceMapConcat: Concat: App | 424ms
SourceMapConcat: Concat: App Tests | 77ms
Slowest Trees (cumulative) | Total (avg)
----------------------------------------------+---------------------
SourceMapConcat: Concat: App (1) | 424ms
Babel (20) | 199ms (9 ms)
SourceMapConcat: Concat: App Tests (1) | 77ms
Edit 3 (after 2 days)
Now I guess I really should fix this, or else I can't work on it.
Slowest Trees | Total
----------------------------------------------+---------------------
Babel | 146179ms
Slowest Trees (cumulative) | Total (avg)
----------------------------------------------+---------------------
Babel (20) | 170889ms (8544 ms)
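A debugging step that seems reasonable here (a sketch, not a verified fix: BROCCOLI_VIZ is ember-cli's build-instrumentation switch, and the paths assume a default app layout) is to wipe the on-disk build caches, reinstall, and rebuild with instrumentation so you can see which Babel tree is eating the time:
# Sketch: clear ember-cli's on-disk caches, then rebuild with instrumentation enabled.
rm -rf tmp dist node_modules
npm cache clean              # newer npm versions need: npm cache clean --force
npm install
BROCCOLI_VIZ=1 ember serve   # writes instrumentation output you can inspect per rebuild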

Related

Bootup/Startup Script Not Working for nVidia GPU Clock Offset

I am trying to change the GPU graphics and memory transfer rate clock offsets on my nVidia EVGA 1030 SC at bootup. I am a Linux noob here, using Rocky Linux 9.
Question:
How do I check my current set values for GPUGraphicsClockOffsetAllPerformanceLevels and GPUMemoryTransferRateOffsetAllPerformanceLevels?
Currently I check by starting nVidia X Server Settings and looking for the values in Graphics Clock Offset and Memory Transfer Rate Offset under the PowerMizer tab. Is there a better way? I don't even really know whether that is always up to date...
What did I do wrong and what do I need to change to fix my bootup script below to work?
I think my bootup script is NOT working because the PowerMizer tab in nVidia X Server Settings does NOT show clock offset values of 50 MHz and 200 MHz for the graphics and memory transfer rates, respectively, when I boot with my script. It only shows 0 and 0.
However, it does show 50 MHz and 200 MHz when I enter the following 3 commands line by line directly in a bash terminal.
nvidia-smi -pm 1
nvidia-settings -a [gpu:0]/"GPUGraphicsClockOffsetAllPerformanceLevels=50"
nvidia-settings -a [gpu:0]/"GPUMemoryTransferRateOffsetAllPerformanceLevels=200"
Below is the bootup script...
i.
Wrote a shell script file named nVidiaStartUp.sh and placed it in: /etc/rc.d/init.d
nVidiaStartUp.sh contains
#!/bin/bash
nvidia-smi -pm 1
nvidia-settings -a [gpu:0]/"GPUGraphicsClockOffsetAllPerformanceLevels=50"
nvidia-settings -a [gpu:0]/"GPUMemoryTransferRateOffsetAllPerformanceLevels=200"
ii.
In Terminal, executed chmod +x /etc/rc.d/init.d/nVidiaStartUp.sh
iii.
Added a unit file named nVidiaStartUp.service in /etc/systemd/system with the contents below
[Unit]
Description=nVidia Startup Script Call with Undervolt
After=getty.target
[Service]
Type=simple
ExecStart=/etc/rc.d/init.d/nVidiaStartUp.sh
TimeoutStartSec=0
#RemainAfterExit=yes
[Install]
WantedBy=default.target
#WantedBy=graphical.target
#WantedBy=multi-user.target
iv.
Ran in terminal
systemctl enable nVidiaStartUp.service
v.
Rebooted and then checked my clock offsets under PowerMizer in nVidia X Server Settings. I don't see 50 MHz and 200 MHz, I only see 0 and 0. That seems to imply my bootup script isn't working. Please help!
======================================================================================================================================================
Additional background info:
I have installed the nVidia driver and it loads with proper GPU information. Here is what it shows after running nvidia-smi:
Fri Nov 4 11:39:36 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:0B:00.0 On | N/A |
| 48% 50C P0 N/A / 30W | 373MiB / 2048MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2309 G /usr/libexec/Xorg 100MiB |
| 0 N/A N/A 2439 G /usr/bin/gnome-shell 139MiB |
| 0 N/A N/A 3270 G ...470561649073451082,131072 130MiB |
+-----------------------------------------------------------------------------+
The relevant sections of my /etc/X11/xorg.conf for Coolbits (I am using 8 for Coolbits, as this card is passively cooled) are:
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "NVIDIA GeForce GT 1030"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
Option "Coolbits" "8"
SubSection "Display"
Depth 24
EndSubSection
EndSection
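One thing worth checking about the script above (an assumption on my part, not something verified on this box): nvidia-settings talks to the running X server, and a systemd unit started at boot normally has no DISPLAY or XAUTHORITY in its environment, so the offsets can silently fail to apply. A sketch of the script with those variables set explicitly (the values are typical for a single-seat session and may need adjusting), together with ordering the unit After=graphical.target instead of getty.target:
#!/bin/bash
# Sketch: give nvidia-settings access to the running X server.
# DISPLAY/XAUTHORITY values below are assumptions for a typical single-user session.
export DISPLAY=:0
export XAUTHORITY=/home/youruser/.Xauthority   # adjust to the user that owns the X session
nvidia-smi -pm 1
nvidia-settings -a [gpu:0]/"GPUGraphicsClockOffsetAllPerformanceLevels=50"
nvidia-settings -a [gpu:0]/"GPUMemoryTransferRateOffsetAllPerformanceLevels=200"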

Modelica model simulating successfully in OMEdit but not as FMU

I am trying to export and simulate a model with the CVode solver.
If I simulate the model in OMEdit (Windows) using CVode, the simulation finishes successfully, even if I use something like a rectangular pulse as input. However, if I export the model as an FMU (via omc on Linux) with the CVode simulation flag, I get the following error after some time:
[CVODE ERROR] CVode tout too close to t0 to start integration.
fmi2DoStep: ##CVODE## -27 error occurred at time = 1.001.
Traceback (most recent call last):
...
Exception: fmi2DoStep failed with status 4.
To export the FMU I am using sundials-5.7.0, and for OMEdit, OpenModelica 1.17.0.
To my knowledge OMEdit uses the exact same solver, so I do not really understand why the simulation works in one case but not in the other.
Might this be related to running on Windows vs Linux, or does OMEdit maybe change some default simulation settings?
Any hints on possible causes and solutions are very welcome!
Notice:
The CVode settings when simulating the FMU look like this:
LOG_SOLVER | info | CVODE linear multistep method CV_BDF
LOG_SOLVER | info | CVODE maximum integration order CV_ITER_NEWTON
LOG_SOLVER | info | CVODE use equidistant time grid YES
LOG_SOLVER | info | CVODE Using relative error tolerance 1.000000e-06
LOG_SOLVER | info | CVODE Using dense internal linear solver SUNLinSol_Dense.
LOG_SOLVER | info | CVODE Use internal dense numeric jacobian method.
LOG_SOLVER | info | CVODE uses internal root finding method NO
LOG_SOLVER | info | CVODE maximum absolut step size 0
LOG_SOLVER | info | CVODE initial step size is set automatically
LOG_SOLVER | info | CVODE maximum integration order 5
LOG_SOLVER | info | CVODE maximum number of nonlinear convergence failures permitted during one step 10
LOG_SOLVER | info | CVODE BDF stability limit detection algorithm OFF
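For completeness, the export step on Linux is done with a small omc script along these lines (the model and file names here are placeholders; loadFile and buildModelFMU are OpenModelica's standard scripting calls):
# Sketch of the FMU export on Linux; "MyModel" / "MyModel.mo" are placeholder names.
cat > export.mos <<'EOF'
loadFile("MyModel.mo");
buildModelFMU(MyModel, version="2.0", fmuType="cs");
EOF
omc export.mos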

How to install camel-http feature on Karaf

I am using Fuse 7.7 on Apache Karaf.
I am getting this error
2020-09-28 18:08:57,689 | ERROR | lueprint Extender: 2 | o.a.a.b.c.BlueprintContainerImpl | 51 - org.apache.aries.blueprint.core - 1.10.2 |
Unable to start container for blueprint bundle com.esb.iis-to-rm-vr/1.0.0 due to unresolved dependencies [(&(component=http)(objectClass=org.apache.camel.spi.ComponentResolver))]
java.util.concurrent.TimeoutException: null
I did the steps below, but camel-http is not installed.
karaf@root()> features:install camel-http
karaf@root()> features:list | grep camel-http
camel-http4
karaf@root()> list | grep camel-http
67 | Active | 50 | 2.21.0.fuse-770013-redhat-00001 | camel-http-common
255 | Active | 50 | 2.21.0.fuse-770013-redhat-00001 | camel-http4
Please let me know, apart from the step below, what other steps I need to follow to install camel-http.
karaf@root()> features:install camel-http
Be careful: camel-http is only meant to be a producer. You won't be able to do from("http://...") with it alone. To do that, you need to add a Camel component that will allow your route to bind itself to Karaf's Jetty. You can try installing camel-jetty, as sketched below.
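A minimal sketch of that in the Karaf shell (feature names as they are typically exposed on Fuse/Karaf; verify the exact name with feature:list on your install):
karaf@root()> feature:install camel-jetty
karaf@root()> feature:list | grep camel-jetty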
Moreover, your feature:list | grep camel-http seems to have returned only camel-http4. I'm not sure whether the camel-http feature has been dropped, but you could always install the http4 component.

Cannot run VSCode source code because its unable to find electron app in directory

Overview:
When I attempt to run VSCode following the instructions given on the contributions page to download all the packages, build the source code, and then run it all in the terminal, an error message pops up saying that I don't have the Electron app in the vscode directory. Shouldn't the Electron app have been installed when I ran the yarn command to install and build all the dependencies?
Steps to reproduce the bug:
$ yarn                 # building and installing all dependencies
$ yarn watchd          # building vscode
$ ./scripts/code.sh    # running vscode
Error Message:
Error launching app
Unable to find Electron app at /home/juan/Desktop/Projects/vscode
Cannot find module '/home/juan/Desktop/Projects/vscode/out/main'. Please verify that the package.json has a valid "main" entry
System Details:
CPUs | Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz (4 x 3200)
-- | --
GPU Status | 2d_canvas: unavailable_software; flash_3d: disabled_software; flash_stage3d: disabled_software; flash_stage3d_baseline: disabled_software; gpu_compositing: disabled_software; multiple_raster_threads: enabled_on; oop_rasterization: disabled_off; protected_video_decode: disabled_off; rasterization: disabled_software; skia_renderer: disabled_off_ok; video_decode: disabled_software; viz_display_compositor: enabled_on; viz_hit_test_surface_layer: disabled_off_ok; webgl: unavailable_software; webgl2: unavailable_software
Load (avg) | 1, 1, 1
Memory (System) | 7.63GB (0.12GB free)
Process Argv | . --no-sandbox
Screen Reader | no
VM | 0%
OS | Ubuntu 18.04 LTS
Extensions:
Extension | Author (truncated) | Version
-- | -- | --
Bookmarks | ale | 11.2.0
vscode-sqlite | ale | 0.8.2
code-gnu-global | aus | 0.2.2
npm-intellisense | chr | 1.3.0
vscode-svgviewer | css | 2.0.0
vscode-markdownlint | Dav | 0.36.0
jshint | dba | 0.10.21
vscode-eslint | dba | 2.1.5
vscode-html-css | ecm | 0.2.3
EditorConfig | Edi | 0.15.1
vscode-npm-script | eg2 | 0.3.12
vscode-firefox-debug | fir | 2.8.0
beautify | Hoo | 1.5.0
vscode-emacs-friendly | lfs | 0.9.0
rainbow-csv | mec | 1.7.0
python | ms- | 2020.5.80290
cpptools | ms- | 0.28.2
debugger-for-chrome | msj | 4.12.8
sqltools | mtx | 0.22.5
material-icon-theme | PKi | 4.1.0
rust | rus | 0.7.8
lc2k | vio | 1.1.1
Here is the bug report I filled in the vscode github page: https://github.com/microsoft/vscode/issues/99537
I got this same error myself when the code did not build correctly.
In your second step you do:
yarn watchd
I tried this command myself, but ran into the same issue that you have stated here. Although the official wiki suggests this as a tip, I would just ignore it.
Instead, do either of the following (this is what the official wiki originally suggests):
Type: Ctrl + Shift + B
Or alternatively use the Command Palette:
Type: Ctrl + Shift + P
Search for the option called: Tasks: Run Build Task and select it.
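If you prefer to stay in the terminal, the equivalent (as far as I recall from the contribution wiki; check your checkout's package.json scripts to confirm) is to run the watch script directly and leave it running:
$ yarn watch   # incremental build that keeps watching for changes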
Once you start the build task you'll see a couple of things:
Firstly, at the bottom of VS Code (on your status line), VS Code will let you know the code is building.
Secondly, the build command will open two terminals:
Task - Build VS Code
Task - Build VS Code Extensions
Watch the output of both terminals and make sure:
Task - Build VS Code terminal outputs: [some time] Finished compilation ...
and
Task - Build VS Code Extensions terminal outputs: [some time] Finished compilation extensions ...
If the build fails, you'll probably get a notification from VS Code saying so (you'll probably get the error twice, once for each task):
yarn ... exited with code [some non-zero integer]
A common error that may occur is the ENOSPC error from inotify (also documented well in a Medium blog post). You'll want to issue this command:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Arch users would issue:
echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/40-max-user-watches.conf && sudo sysctl --system
After fixing this, trying to build again should work. Start the build task again and make sure both tasks succeed. (You'll notice that the tasks do not end after they succeed. This is because they will watch for changes you make in the code while developing and automatically recompile for you).
If successful you may finally issue:
./scripts/code.sh
A new instance of VS Code should open called: Code - OSS dev. This is the version of VS Code you just built.

PostgreSQL Transaction ID went backwards

In PostgreSQL 9.0, I have a table that keeps track of the last processed transactions. For some reason, it went backwards (in time)! Here is the table data:
seq_id | tx_id
628 | 10112
629 | 10118
630 | 10124
631 | 10130
632 | 10136
654 | 10160
655 | 10166 <---
656 | 4070 <---
657 | 4071
658 | 4084
659 | 4090
660 | 4096
How can this happen? Can a restart of the database induce such behavior?
Thanks for any hints.
Regards,
D.
This is an invalid issue. Please ignore.
It turns out that the issue came from restoring the table from a backup and continuing to work with (invalid) previous data in a newly created database :-(
Thank you to all those who responded already.
Case closed.
Lesson learned: TXIDs will NOT go backwards, and they do get synced to a slave instance if you're using a master/slave setup. TXID rollovers are also handled correctly. Hope this will help others who might be thinking TXIDs can go backwards!
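If you ever need to sanity-check this yourself, comparing the server's current transaction ID with the last tx_id your application recorded makes a restore-from-backup situation obvious (txid_current() has existed since PostgreSQL 8.3; the connection details and table name below are placeholders):
# Sketch: compare the live txid with the last value recorded in the tracking table.
psql -h localhost -d mydb -c "SELECT txid_current();"
psql -h localhost -d mydb -c "SELECT seq_id, tx_id FROM processed_transactions ORDER BY seq_id DESC LIMIT 1;"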