Ubuntu 18.04 on Xorg, multiple display nightmare - multiple-monitors

I have a Dell XPS 9550 on Ubuntu 18.04 with Gnome (on Xorg).
I have it configured with the nvidia drivers v390 as below:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 960M/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 390.87
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.6.0 NVIDIA 390.87
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 390.87
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:
It "works" but the issue is that the main screen on this laptop is 4K and I have connected 2 external monitors to it which have a max resolution of 1920x1080. If I zoom 200% on the 4K screen, the display is very comfortable but the zooming is also applying to the 2 other monitors making it far too big to do anything on them. If I set the zoom at 100%, the 2 external monitors display very well but then the 4K monitor font size and icons are far too small to even read.
I have read a lot of material where people use xrandr, but it doesn't seem to apply with the nvidia drivers in the way...
Has anyone managed to configure something similar, where 2 or 3 monitors are used with an NVIDIA card on Xorg, one of them 4K and the rest normal resolution?
Of course, another solution is to use Wayland, which scales this kind of setup very well, but it doesn't work with the NVIDIA drivers (so it falls back on the laptop's Intel card), and a lot of applications do not work on Wayland.
Any help is appreciated ;o)
Thanks

Related

OpenGL benchmark on iPhone

I'm looking for a comparison of the iPhone 3G / 3GS graphics systems (OpenGL) to the ones on a PC / Mac.
Google didn't help. Perhaps anybody here?
While this might be more of a hardware question, there is enough that might influence the design of an OpenGL-based application here that I'll bite.
Using my Molecules application as a template, I benchmarked the rendering throughput of that running on iPhone 3G, iPad, and a 2nd generation MacBook Air (Nvidia GeForce 9400M). For the MacBook, the numbers were generated from running the application in the Simulator with nothing else executing on the system:
iPhone 3G: 423,000 triangles / s
iPad: 1,830,000 triangles / s
MacBook Air: 2,150,000 triangles / s
You can grab the code for the application and try this yourself by enabling the RUN_OPENGL_BENCHMARKS define in SLSMoleculeGLViewController. This causes structures to be rendered for 100 frames, then the total time is measured and the rendering rate is calculated from the complexity of the model being shown.
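If you want to reproduce the measurement outside of that project, the approach is straightforward: draw a known amount of geometry for a fixed number of frames, wait for the GPU to finish, and divide. A rough sketch in C, where drawModel() and triangleCount are hypothetical stand-ins for your own scene code:

#include <OpenGLES/ES1/gl.h>   // iOS OpenGL ES 1.1 header
#include <mach/mach_time.h>    // high-resolution timer on iOS
#include <stdint.h>
#include <stdio.h>

/* Hypothetical scene-drawing call: submits `triangleCount` triangles
   per frame using an already-configured ES 1.1 context. */
extern void drawModel(void);
extern unsigned long triangleCount;

void runTriangleBenchmark(void)
{
    const int frames = 100;               // same frame count as above
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    uint64_t start = mach_absolute_time();
    for (int i = 0; i < frames; i++) {
        drawModel();
    }
    glFinish();                           // wait for the GPU to finish all submitted work
    uint64_t end = mach_absolute_time();

    double seconds = (double)(end - start) * tb.numer / tb.denom / 1e9;
    double trianglesPerSecond = (double)frames * triangleCount / seconds;
    printf("%.0f triangles / s\n", trianglesPerSecond);
}

The glFinish() at the end matters: without it you would only be timing command submission, not the actual rendering.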
Note that this is an OpenGL ES 1.1 application which is geometry-limited in its current state. A fill-rate-limited application might have completely different performance characteristics, same as with one that uses OpenGL / OpenGL ES 2.0 shaders.
Aside from performance differences, the OpenGL command set differs between iOS and the Mac. OpenGL ES cuts out a lot of the cruft that's built up in OpenGL over the years (immediate mode, etc.). In general, OpenGL ES is a subset of OpenGL, so you can pretty much port something written for OpenGL ES to OpenGL without a lot of trouble. OpenGL on the desktop also uses a newer version of GLSL for its shaders (1.4, I believe), so some of the commands supported there will not work on iOS devices.
Apple has more about platform-specific differences in their Platform Notes section in the OpenGL ES Programming Guide for iOS.

PowerVR SGX535 Shader Performance (OpenGL ES 2.0)

I'm currently working on a couple of shaders for an iPad game, and it seems as if Apple's GLSL compiler does little or no optimization. I can move a single line in a shader and drop my FPS from 30 to 24, but I really have no idea why this is happening.
Does anyone have any references for the following:
what PowerVR instructions are generated from GLSL instructions?
what are the timings of the PowerVR instructions?
what sort of parallel processing units are in the PowerVR SGX535, and how can they be exploited?
Thanks,
Tristan
Imagination Technologies recently added Mac support for their PVRUniSCo compiler and PVRUniSCoEditor interactive shader editor. These can be downloaded for free as part of the PowerVR SDK. The compiler supports the PowerVR SGX 53x series, including the SGX 543 found in the iPad 2. Unfortunately, the editor runs as a clunky X11 application, but at least it works now.
The editor gives you line-by-line estimates of the number of GPU cycles required throughout your vertex or fragment shader, as well as more accurate best and worst case estimates of total cycles required.
I've been using it to profile my iOS shaders, and it has proven to be extremely useful in finding hotspots:
http://www.imgtec.net/factsheets/SDK/POWERVR%20SGX.OpenGL%20ES%202.0%20Application%20Development%20Recommendations.1.1f.External.pdf
This document should help you to optimize your shaders for maximum performance. Apple should provide similar information as well.
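To give a feel for the kind of change those cycle estimates reward, here is an illustrative before/after pair (not taken from that document) showing one commonly cited SGX recommendation: move texture-coordinate math into the vertex shader so the fragment shader avoids a dependent texture read, and use mediump where full precision isn't needed. The GLSL is embedded as C string literals:

/* Before: the offset is applied in the fragment shader, so the texture
   lookup depends on per-fragment math (a dependent texture read). */
static const char *slowFragmentShader =
    "precision mediump float;                             \n"
    "varying mediump vec2 texCoord;                       \n"
    "uniform sampler2D tex;                               \n"
    "uniform mediump vec2 offset;                         \n"
    "void main()                                          \n"
    "{                                                    \n"
    "    gl_FragColor = texture2D(tex, texCoord + offset);\n"
    "}                                                    \n";

/* After: the offset is applied once per vertex, so the fragment shader
   only performs a plain, non-dependent texture read. */
static const char *fastVertexShader =
    "attribute vec4 position;                             \n"
    "attribute mediump vec2 inputTexCoord;                \n"
    "uniform mediump vec2 offset;                         \n"
    "varying mediump vec2 texCoord;                       \n"
    "void main()                                          \n"
    "{                                                    \n"
    "    texCoord = inputTexCoord + offset;               \n"
    "    gl_Position = position;                          \n"
    "}                                                    \n";

static const char *fastFragmentShader =
    "precision mediump float;                             \n"
    "varying mediump vec2 texCoord;                       \n"
    "uniform sampler2D tex;                               \n"
    "void main()                                          \n"
    "{                                                    \n"
    "    gl_FragColor = texture2D(tex, texCoord);         \n"
    "}                                                    \n";

Running both versions through PVRUniSCoEditor is a quick way to see how many cycles a single moved line can cost or save.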

How to utilize POWERVR™ SGX Series5 GPU from iPhone OS 4.0?

The POWERVR™ SGX Series5 GPU is embedded in the Apple iPhone, and I wonder how to utilize it (i.e. use it as a low-power GPGPU) from iPhone OS 4.0?
You can access the GPU using OpenGL ES 2.0. It is a shader-based API, and most GPGPU tutorials will map directly to GLSL.
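The usual pattern is to treat a fragment shader as the kernel: render a full-screen quad into a texture attached to a framebuffer object, then read the results back. A rough sketch in C against the ES 2.0 API, where compileProgram() is a hypothetical helper and error checking is omitted:

#include <OpenGLES/ES2/gl.h>  // iOS OpenGL ES 2.0 header

/* Hypothetical helper that compiles and links a vertex/fragment pair. */
extern GLuint compileProgram(const char *vertexSrc, const char *fragmentSrc);

/* Run a fragment-shader "kernel" over a width x height grid and read
   the results back. The math itself lives in the fragment shader. */
void runKernel(GLuint program, int width, int height, unsigned char *result)
{
    /* Output texture that will hold one RGBA8 result per cell. */
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Attach it to a framebuffer so the fragment shader writes into it. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* Full-screen quad: every fragment corresponds to one output cell. */
    static const GLfloat quad[] = { -1,-1,  1,-1,  -1,1,  1,1 };
    glUseProgram(program);
    GLint posAttrib = glGetAttribLocation(program, "position");
    glVertexAttribPointer((GLuint)posAttrib, 2, GL_FLOAT, GL_FALSE, 0, quad);
    glEnableVertexAttribArray((GLuint)posAttrib);

    glViewport(0, 0, width, height);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    /* Read the computed values back to the CPU. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, result);

    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);
}

Keep in mind that on this class of hardware the readback is typically limited to 8-bit RGBA, so results usually have to be packed into bytes, and glReadPixels itself is relatively slow.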

OpenGL ES 1.1 and iPhone games

Will iPhone games that draw 2D textures like this:
glPushMatrix();
glTranslatef(xLoc, yLoc, 0);
[myTexturePointer drawAtPoint:CGPointZero];
glPopMatrix();
work with newer iPhones (that support OpenGL ES 2.0)? I'm asking because I just learned OpenGL ES 2.0 doesn't support glPushMatrix etc.
Cheers!
The newer phones still support the older OpenGL ES 1.1, so this code should run fine, as long as you're running it in a 1.1 context.
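If you ever do move the drawing to an ES 2.0 context, the matrix stack has to be replaced by matrices you build yourself and pass to a shader as a uniform. A rough sketch in C of what the translate-and-draw above becomes, where the program and its modelViewProjection uniform are hypothetical:

#include <OpenGLES/ES2/gl.h>

/* In ES 2.0 there is no matrix stack: the application keeps its own
   matrices and hands them to the vertex shader as a uniform.  This is
   only an identity matrix with a translation folded in; a real port
   would also multiply in an orthographic projection. */
void drawSpriteAt(GLuint program, float xLoc, float yLoc)
{
    /* Column-major 4x4 matrix: translation lives in the last column. */
    GLfloat mvp[16] = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        xLoc, yLoc, 0, 1
    };

    glUseProgram(program);
    GLint mvpLoc = glGetUniformLocation(program, "modelViewProjection");
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);

    /* The vertex shader would then do
       gl_Position = modelViewProjection * position;
       followed by the usual glVertexAttribPointer / glDrawArrays calls
       in place of the Texture2D drawAtPoint: helper. */
}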
Newer iPhones do support both standards, so your code should work.
To maintain compatibility with the OpenGL ES 1.1 used in existing iPhone and iPod touch devices, "the graphics driver for the PowerVR SGX also implements OpenGL ES 1.1 by efficiently implementing the fixed-function pipeline using shaders," sources report. This indicates that games and other applications unique to the iPhone 3G S and other future models of the iPhone and iPod touch are likely to arrive that will either be exclusive to the new model, or more likely, will support improved 3D graphics on the new device while still working on previous models using the older fixed-function 3D pipeline.
Source

How to get started with implementing my own VGP code for iPhone OpenGL ES development?

How do I get started with implementing my own Vertex Geometry Processor code for iPhone OpenGL ES development, in order to fully benefit from its VGP Lite chip?
The iPhone chip can only be used for fixed-function processing as part of the normal transform and lighting pipeline. There's no way to run custom vertex programs.
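In other words, the vertex work is expressed entirely through ES 1.1 fixed-function state, and the driver maps it onto the hardware for you. A small illustrative sketch in C (not specific to the VGP Lite) of driving the transform and lighting pipeline:

#include <OpenGLES/ES1/gl.h>

/* Illustrative ES 1.1 fixed-function transform-and-lighting setup:
   the vertex hardware performs this work internally; there is no
   programmable hook for custom vertex programs. */
void setupFixedFunctionScene(float aspect)
{
    /* Transform stage: projection and model-view matrices. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-aspect, aspect, -1.0f, 1.0f, 1.0f, 100.0f);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);
    glRotatef(30.0f, 0.0f, 1.0f, 0.0f);

    /* Lighting stage: one directional light, evaluated per vertex. */
    const GLfloat lightDir[]   = { 0.0f, 0.0f, 1.0f, 0.0f };
    const GLfloat lightColor[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, lightDir);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor);

    /* Geometry is then submitted with glVertexPointer / glNormalPointer
       and glDrawArrays; the driver maps all of this onto the hardware. */
}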