I'm trying to make a 2D top-down game with a field of view.
My field of view is visualized by a 2D mesh that cannot pass through walls.
I need to be able to put some objects, such as enemies, on a layer that is only rendered where it falls inside the view cone.
I was following this tutorial but couldn't find the override setting shown at 18:16 (I believe this is because the LWRP no longer exists in Unity). Are there any alternatives or other solutions?
The concept of using layer-based render passes and the stencil buffer is basically still the same in URP (the replacement for LWRP).
The asset to configure this should by default be in
Assets/UniversalRenderPipelineAsset_Renderer
Also see See Through Objects with Stencil Buffers using Unity URP, which explains it pretty well.
Or, if you use the built-in render pipeline, you would need shaders that implement this directly via a Stencil block in a Pass or SubShader.
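For the built-in pipeline, a minimal sketch could look like the following (the shader name is made up, and this is only the mask half: the FOV mesh writes a stencil value without drawing anything visible):
// Assumed name - put this on the FOV mesh's material; it only marks the stencil buffer
Shader "Hidden/FovStencilMask"
{
    SubShader
    {
        // render before the enemies so the stencil values exist when they are drawn
        Tags { "Queue"="Geometry-1" "RenderType"="Opaque" }
        Pass
        {
            ColorMask 0      // write no color
            ZWrite Off       // write no depth
            Stencil
            {
                Ref 1
                Comp Always
                Pass Replace // stencil = 1 wherever the FOV mesh covers the screen
            }
        }
    }
}
The enemy shader's pass then gets a matching stencil test, so its pixels are discarded outside the cone:
            Stencil
            {
                Ref 1
                Comp Equal   // only keep pixels where the FOV mask wrote a 1
            }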
As an alternative you can also use the DepthTexture. For some examples using that together with Shader Graph, check out Creating Overlap Shader Effects, Such as Underwater and Night Vision, in Unity's Shader Graph.
If you want to implement it from scratch with the built-in render pipeline, read along:
Let's say this is the scene; the white plane is the FOV and the cubes are the enemies:
The FOV can use a very simple shader, for example:
Shader "Unlit/simple_shader"
{
Properties
{
}
SubShader
{
Tags { "RenderType"="Opaque" }
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
return 1;
}
ENDCG
}
}
}
Then you'll need a separate camera that renders ONLY the FOV mesh and saves it to an image, known as a RenderTexture, which will look like a black-and-white mask:
Then make another camera that only renders the enemy layer:
Then feed the generated mask to it:
Then put it on top of the main camera (a rough sketch of this setup in C# follows):
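Here is roughly what the camera setup could look like in code (the layer names "FOV" and "Enemies" and the "_MaskTex" property are my assumptions, not part of the exported sample):
using UnityEngine;

// Renders the FOV mesh into a RenderTexture and feeds it to the enemy
// camera's material as a mask. Attach to any scene object and wire up the references.
public class FovMaskSetup : MonoBehaviour
{
    public Camera maskCamera;          // sees only the FOV mesh
    public Camera enemyCamera;         // sees only the enemies
    public Material enemyMaskMaterial; // material that multiplies the enemy image by the mask

    RenderTexture maskTexture;

    void Start()
    {
        maskTexture = new RenderTexture(Screen.width, Screen.height, 16);

        maskCamera.cullingMask = LayerMask.GetMask("FOV");
        maskCamera.clearFlags = CameraClearFlags.SolidColor;
        maskCamera.backgroundColor = Color.black;     // everything outside the cone stays black
        maskCamera.targetTexture = maskTexture;       // this becomes the black-and-white mask

        enemyCamera.cullingMask = LayerMask.GetMask("Enemies");

        enemyMaskMaterial.SetTexture("_MaskTex", maskTexture);
    }
}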
Here is the sample asset I exported along with the scene:
https://usaupload.com/shared/hkcayffn-9psb5e_loknmu3086nnh2--tbk0fm5s8xqj8ilnyok2zklav8-4mk7i8tf1tsjfbstgzrpw3upwi_w2e96k9nqm05-knhk3m_1237myy8nvco910c67afmo
The solution used in your tutorial is to have a shader that uses the stencil buffer to only show the part of the enemies that falls inside your FOV mesh.
This solution is possible in any render pipeline.
Hope that helped ;)
If you are using URP on 2019.3 or later, you need to change the FieldOfView shader to Universal Render Pipeline/Simple Lit. It has the same Render Face option as the shader from the video. https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@7.1/manual/simple-lit-shader.html
Related
In Unity, I want to create an effect where an arbitrary shape (a quad or a cube) acts as a "portal" that reveals an image. No matter which way the object rotates, or what the camera perspective is, the image "inside the portal" always faces the same direction.
In this image, I have a 3D plane that reveals a checkerboard pattern texture, like a cut-out in the scene. Whichever way the plane object is rotated or camera is positioned, the image inside the portal remains completely fixed. The inner image doesn't move or distort.
I want to be able to do this with multiple objects in the scene. So a sphere could be a portal to a fixed picture of a dog, or a cube could be a portal into a tiled pattern. Even knowing the name of this effect would be helpful. Do I have to write a shader to do this?
This is called a Screen Space shader. Where most shaders will calculate uv coordinates based on a pixel's location on the mesh, these shaders use the location on the screen. Here's a great article about them.
Hot tip: this is commonly used with a second camera rendering to a RenderTexture in order to create portals to 3D spaces.
You may need to play with the tiling to get the aspect ratio of your texture correct; this shader assumes it is the same as your screen, i.e. 16:9.
Shader "Ahoy/Screen Space Texture"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "Queue"="Transparent" "RenderType"="Transparent"}
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float4 vertex : SV_POSITION;
float2 uv : TEXCOORD0;
float4 screenPos:TEXCOORD1;
};
sampler2D _MainTex;
float4 _MainTex_ST;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
o.screenPos = ComputeScreenPos(o.vertex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
float2 uvScreen = i.screenPos.xy / i.screenPos.w;
uvScreen = TRANSFORM_TEX(uvScreen,_MainTex);
return tex2D(_MainTex, uvScreen);
}
ENDCG
}
}
}
I am trying to get the world position of a pixel inside a fragment shader.
Let me explain. I have followed a tutorial for a fragment shader that lets me paint on objects. Right now it works through texture coordinates, but I want it to work through the pixel's world position. So when I click on a 3D model, I want to compare the Vector3 position where the click happened to the pixel's Vector3 position, and if the distance is small enough, lerp the color.
This is the setup I have. I created a new 3D project just for making the shader, with the intent to export it later into my main project. In the scene I have the default main camera, a directional light, an object with a script that shows me the FPS, and a default 3D cube with a mesh collider. I created a new material and a new Standard Surface Shader and added them to the cube. After that I assigned the C# script below to the cube, with the shader and a camera reference.
Update: The problem right now is that the blit doesn't work as expected. If you change the shader as Kalle said, remove the blit from the C# script, and change the 3D model's material to use the Draw shader directly, it works as expected, but without any lighting. For my purposes I had to change distance(_Mouse.xyz, i.worldPos.xyz); to distance(_Mouse.xz, i.worldPos.xz); so it paints all the way through to the other side. For debugging I created a RenderTexture, and every frame I use Blit to update the texture and see what is going on. The render texture does not hold the right position as the object is colored. The 3D model I have has a lot of geometry, and as the paint goes through to the other side it should be all over the place on the render texture... but right now it is just one line from the top to the bottom of the texture. Also, when I try to paint on the bottom half of the object, the render texture doesn't show anything; only when I paint on the top half can I see red lines (the default painting color).
If you want you can download the sample project here.
This is the code I am using.
Draw.shader
Shader "Unlit/Draw"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_Coordinate("Coordinate",Vector)=(0,0,0,0)
_Color("Paint Color",Color)=(1,1,1,1)
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 100
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
fixed4 _Coordinate,_Color;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
float draw =pow(saturate(1-distance(i.uv,_Coordinate.xy)),100);
fixed4 drawcol = _Color * (draw * 1);
return saturate(col + drawcol);
}
ENDCG
}
}
}
Draw.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Draw : MonoBehaviour
{
    public Camera cam;
    public Shader paintShader;

    RenderTexture splatMap;
    Material snowMaterial, drawMaterial;
    RaycastHit hit;

    private void Awake()
    {
        Application.targetFrameRate = 200;
    }

    void Start()
    {
        drawMaterial = new Material(paintShader);
        drawMaterial.SetVector("_Color", Color.red);

        snowMaterial = GetComponent<MeshRenderer>().material;
        splatMap = new RenderTexture(1024, 1024, 0, RenderTextureFormat.ARGBFloat);
        snowMaterial.mainTexture = splatMap;
    }

    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            if (Physics.Raycast(cam.ScreenPointToRay(Input.mousePosition), out hit))
            {
                drawMaterial.SetVector("_Coordinate", new Vector4(hit.textureCoord.x, hit.textureCoord.y, 0, 0));

                RenderTexture temp = RenderTexture.GetTemporary(splatMap.width, splatMap.height, 0, RenderTextureFormat.ARGBFloat);
                Graphics.Blit(splatMap, temp);
                Graphics.Blit(temp, splatMap, drawMaterial);
                RenderTexture.ReleaseTemporary(temp);
            }
        }
    }
}
As for what I have tried to solve the problem: I searched on Google and tried to implement what this thread is about in my project. I have also found some projects that have the feature I need, like Mesh Texture Painting. That one works exactly how I need it, but it doesn't work on iOS; the 3D object turns black. You can check out a previous post I made about that problem, and I also talked with the creator on Twitter, but he can't help me. I have also tried this asset, which works OK, but in my main project it runs at very low FPS, it's hard for me to customize it for my needs, and it doesn't paint on the edges of my 3D model.
The shader above works well and is simple enough that I can change it to get the desired effect.
Thank you!
There are two approaches to this problem - either you pass in the texture coordinate and try to convert it to world space inside the shader, or you pass in a world position and compare it to the fragment world position. The latter is no doubt the easiest.
So, let's say that you pass in the world position into the shader like so:
drawMaterial.SetVector("_Coordinate", new Vector4(hit.point.x, hit.point.y, hit.point.z, 0));
Calculating a world position per fragment is expensive, so we do it inside the vertex shader and let the hardware interpolate the value per fragment. Let's add a world position to our v2f struct:
struct v2f
{
    float2 uv : TEXCOORD0;
    float4 vertex : SV_POSITION;
    float3 worldPos : TEXCOORD1;
};
To calculate the world position inside the vertex shader, we can use the built-in matrix unity_ObjectToWorld:
v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}
Finally, we can access the value in the fragment shader like so:
float draw = pow(saturate(1 - distance(i.worldPos, _Coordinate.xyz)), 100);
EDIT: I just realized - when you do a blit pass, you are not rendering with your mesh, you are rendering to a quad which covers the whole screen. Because of this, when you calculate the distance to the vertex, you get the distance to the screen corners, which is not right. There is a way to solve this though - you can change the render target to your render texture and draw the mesh using a shader which projects the mesh UVs across the screen.
It's a bit hard to explain, but basically, the way vertex shaders work is that you take in a vertex which is in local object space and transform it to be relative to the screen in the space -1 to 1 on both axes, where 0 is in the center. This is called Normalized Device Coordinate Space, or NDC space. We can leverage this to make it so that instead of using the model and camera matrices to transform our vertices, we use the UV coordinates, converted from [0,1] space to [-1,1]. At the same time, we can calculate our world position and pass it onto the fragment separately. Here is how the shader would look:
v2f vert (appdata v)
{
    v2f o;
    float2 uv = v.texcoord.xy;

    // https://docs.unity3d.com/Manual/SL-PlatformDifferences.html
    if (_ProjectionParams.x < 0) {
        uv.y = 1 - uv.y;
    }

    // Convert from 0,1 to -1,1, for the blit
    o.vertex = float4(2 * (uv - 0.5), 0, 1);
    // We still need UVs to draw the base texture
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
    // Let's do the calculations in local space instead!
    o.localPos = v.vertex.xyz;
    return o;
}
Also remember to pass in the _Coordinate variable in local space, using transform.InverseTransformPoint.
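For example, in the question's Draw.cs this could replace the texture-coordinate line (a small sketch):
// Convert the world-space hit point into the mesh's local space, because the
// shader above now compares against localPos instead of a UV coordinate.
Vector3 localHit = transform.InverseTransformPoint(hit.point);
drawMaterial.SetVector("_Coordinate", new Vector4(localHit.x, localHit.y, localHit.z, 0));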
Now, we need to use a different approach to render this into the render texture. Basically, we need to render the actual mesh as if we were rendering from a camera - except that this mesh will be drawn as a splayed out UV sheet across the screen. First, we set the active render texture to the texture we want to draw into:
// Cache the old target so that we can reset it later
RenderTexture previousRT = RenderTexture.active;
RenderTexture.active = temp;
(You can read about how render targets work here)
Next, we need to bind our material and draw the mesh.
Material mat = drawMaterial;
Mesh mesh = yourAwesomeMesh;
mat.SetTexture("_MainTex", splatMap);
mat.SetPass(0); // This tells the renderer to use pass 0 from this material
Graphics.DrawMeshNow(mesh, Vector3.zero, Quaternion.identity);
Finally, blit the texture back to the original:
// Remember to reset the render target
RenderTexture.active = previousRT;
Graphics.Blit(temp, splatMap);
I haven't tested or verified this, but I have used a similar technique to draw a mesh into UVs before. You can read more about DrawMeshNow here.
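Putting the pieces together, the per-click flow in the question's Draw.cs could end up looking roughly like this (an untested sketch with the same caveat as above; the GetComponent<MeshFilter>() lookup and the GL.Clear call are my assumptions):
void PaintAt(RaycastHit hit)
{
    // local-space hit point, as described above
    Vector3 localHit = transform.InverseTransformPoint(hit.point);
    drawMaterial.SetVector("_Coordinate", new Vector4(localHit.x, localHit.y, localHit.z, 0));

    RenderTexture temp = RenderTexture.GetTemporary(splatMap.width, splatMap.height, 0, RenderTextureFormat.ARGBFloat);

    // Cache the old target and draw the mesh, splayed out over its UVs, into temp
    RenderTexture previousRT = RenderTexture.active;
    RenderTexture.active = temp;
    GL.Clear(true, true, Color.clear);   // the temporary texture may contain garbage

    drawMaterial.SetTexture("_MainTex", splatMap);
    drawMaterial.SetPass(0);
    Graphics.DrawMeshNow(GetComponent<MeshFilter>().sharedMesh, Vector3.zero, Quaternion.identity);

    // Restore the target and copy the result back into the splat map
    RenderTexture.active = previousRT;
    Graphics.Blit(temp, splatMap);
    RenderTexture.ReleaseTemporary(temp);
}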
I'm trying to make a decal shader to use with a projector in Unity. Here's what I've put together:
Shader "Custom/color_projector"
{
Properties {
_Color ("Tint Color", Color) = (1,1,1,1)
_MainTex ("Cookie", 2D) = "gray" {}
}
Subshader {
Tags {"Queue"="Transparent"}
Pass {
ZTest Less
ColorMask RGB
Blend One OneMinusSrcAlpha
Offset -1, -1
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 uvShadow : TEXCOORD0;
float4 pos : SV_POSITION;
};
float4x4 unity_Projector;
float4x4 unity_ProjectorClip;
v2f vert (float4 vertex : POSITION)
{
v2f o;
o.pos = UnityObjectToClipPos (vertex);
o.uvShadow = mul (unity_Projector, vertex);
return o;
}
sampler2D _MainTex;
fixed4 _Color;
fixed4 frag (v2f i) : SV_Target
{
fixed4 tex = tex2Dproj (_MainTex, UNITY_PROJ_COORD(i.uvShadow));
return _Color * tex.a;
}
ENDCG
}
}
}
This works well in most situations:
However, whenever it projects onto a transparent surface (or multiple surfaces) it seems to render an extra time for each surface. Here, I've broken up the divide between the grass and the paving using grass textures with transparent areas:
I've tried numerous blending options and all of the ZTest options. This is the best I can get it to look.
From reading around, I gather this might be because a transparent shader does not write to the depth buffer. I tried adding ZWrite On and I tried doing a pass before the main pass:
Pass {
    ZWrite On
    ColorMask 0
}
But neither had any effect at all.
How can this shader be modified so that it only projects the texture once on the nearest geometries?
Desired result (photoshopped):
The problem is due to how projectors work. Basically, they render all meshes within their field of view a second time, except with a different shader. In your case, this means that both the ground and the plane with the grass are rendered again and layered on top of each other. I think it could be possible to fix this in two steps:
First, add the following to the tags of the transparent (grass) shader:
"IgnoreProjector"="True"
Then, change the render queue of your projector from "Transparent" to "Transparent+1". This means that the ground will render first, then the grass edges, and finally the projector will project onto the ground (except appearing on top, since it is rendered last).
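For example (a sketch; keep whatever other tags each shader already has):
// in the grass (transparent) shader
Tags { "Queue"="Transparent" "RenderType"="Transparent" "IgnoreProjector"="True" }

// in the projector shader from the question
Tags { "Queue"="Transparent+1" }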
As for the blending, I think you want regular alpha blending:
Blend SrcAlpha OneMinusSrcAlpha
Another option if you are using deferred rendering is to use deferred decals. These are both cheaper and usually easier to use than projectors.
For my game I have written a shader that allows my texture to tile nicely over multiple objects. I do that by choosing the uv not based on the relative position of the vertex, but on the absolute world position. The custom shader is as follows. Basically it just tiles the texture in a grid of 1x1 world units.
Shader "MyGame/Tile"
{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert
sampler2D _MainTex;
struct Input
{
float2 uv_MainTex;
float3 worldPos;
};
void surf (Input IN, inout SurfaceOutput o)
{
//adjust UV for tiling
float2 cell = floor(IN.worldPos.xz);
float2 offset = IN.worldPos.xz - cell;
float2 uv = offset;
float4 mainTex = tex2D(_MainTex, uv);
o.Albedo = mainTex.rgb;
}
ENDCG
}
FallBack "Diffuse"
}
I have done this approach in Cg and in HLSL shaders on XNA before and it always worked like a charm. With the Unity shader, however, I get a very visible seam on the edges of the texture. I tried a Unity surface shader as well as a vertex/fragment shader, both with the same results.
The texture itself looks as follows. In my game it is actually a .tga, not a .png, but that doesn't cause the problem. The problem occurs on all texture filter settings and on repeat or clamp mode equally.
Now I've seen someone have a similar problem here: Seams between planes when lightmapping.
There was, however, no definitive answer on how to solve such a problem. Also, my problem doesn't relate to a lightmap or lighting at all. In the fragment shader I tested, there was no lighting enabled and the issue was still present.
The same question was also posted on the Unity answers site, but I received no answers and not a lot of views, so I am trying it here as well: Visible seams on borders when tiling texture
This describes the reason for your problem:
http://hacksoflife.blogspot.com/2011/01/derivatives-i-discontinuities-and.html
This is a great visual example, like yours:
http://aras-p.info/blog/2010/01/07/screenspace-vs-mip-mapping/
Unless you're going to disable mipmaps, I don't think this is solvable with Unity, because as far as I know, it won't let you use functions that let you specify what mip level to use in the fragment shader (at least on OS X / OpenGL ES; this might not be a problem if you're only targeting Windows).
That said, I have no idea why you're doing the fragment-level uv calculations that you are; just passing data from the vertex shader works just fine, with a tileable texture:
struct v2f {
    float4 position_clip : SV_POSITION;
    float2 position_world_xz : TEXCOORD;
};

#pragma vertex vert
v2f vert(float4 vertex : POSITION) {
    v2f o;
    o.position_clip = mul(UNITY_MATRIX_MVP, vertex);
    o.position_world_xz = mul(_Object2World, vertex).xz;
    return o;
}

#pragma fragment frag
uniform sampler2D _MainTex;
fixed4 frag(v2f i) : COLOR {
    return tex2D(_MainTex, i.position_world_xz);
}
I'm using Cg for writing shaders inside Unity3D.
I'm using vertex color attributes for passing some parameters to the shader. They won't be used for defining colors, and should be forwarded from the vertex shader to the pixel shader without modifying them.
This is the structure I'm taking as input from Unity3D to the vertex shader:
struct appdata_full {
    float4 vertex : POSITION;
    float4 tangent : TANGENT;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
    float4 texcoord1 : TEXCOORD1;
    fixed4 color : COLOR;
#if defined(SHADER_API_XBOX360)
    half4 texcoord2 : TEXCOORD2;
    half4 texcoord3 : TEXCOORD3;
    half4 texcoord4 : TEXCOORD4;
    half4 texcoord5 : TEXCOORD5;
#endif
};
This is the structure returned by vertex shader as input to the fragment:
struct v2f {
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
    fixed4 col : COLOR;
};
If I simply forward the parameter to the fragment shader, of course it will be interpolated:
v2f vert (appdata_full v)
{
    v2f output;
    //....
    output.col = v.color;
}
I'd like to pass the v.color parameter to the fragment shader without it being interpolated.
Is this possible? If yes, how?
EDIT
Like Tim pointed out, this is the expected behavior, because the shader can't do anything other than interpolate colors that are passed from the vertex shader to the fragment shader.
I'll try to explain better what I'm trying to achieve. I'm using per-vertex colors to store information other than colors. Without going into all the details of what I'm doing with it, you can consider each vertex color as an id (each vertex of the same triangle will have the same color; actually, each vertex of the same mesh will).
So I used the color trick to pack some parameters in, because I have no other way to do this. Now this piece of information must be available to the fragment shader in some way.
If I pass it as an out parameter of the vertex shader, this information encoded into a color will arrive interpolated at the fragment shader, which can no longer use it.
I'm looking for a way to propagate this information unchanged all the way to the fragment shader (maybe it is possible to use a global variable or something like that? If yes, how?).
I'm not sure this counts as an answer, but it's a little much for a comment. As Bjorke points out, the fragment shader will always receive an interpolated value. If/when Unity supports OpenGL 4.0 you might have access to interpolation qualifiers, namely 'flat', which disables interpolation and derives all values from a provoking vertex.
That said, the problem with trying to assign the same "color" value to all vertices of a triangle is that the vertex shader iterates over the vertices once, not per triangle. There will always be a "boundary" region where some vertex shares multiple edges with other vertices of a different "color" or "id", see my dumb example below. When applied to a box at (0,0,0), the top will be red, the bottom green, and the middle blue.
Shader "custom/colorbyheight" {
Properties {
_Unique_ID ("Unique Identifier", float) = 1.0
}
SubShader {
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 pos : SV_POSITION;
fixed4 color : COLOR;
};
uniform float _Unique_ID;
v2f vert (appdata_base v)
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
float3 worldpos = mul(_Object2World, v.vertex).xyz;
if(worldpos[1] >= 0.0)
o.color.xyz = 0.35; //unique_id = 0.35
else
o.color.xyz = 0.1; //unique_id = 0.1
o.color.w = 1.0;
return o;
}
fixed4 frag (v2f i) : COLOR0 {
// local unique_id's set by the vertex shader and stored in the color
if(i.color.x >= 0.349 && i.color.x <=0.351)
return float4(1.0,0.0,0.0,1.0); //red
else if(i.color.x >= 0.099 && i.color.x <=0.11)
return float4(0.0,1.0,0.0,1.0); //green
// global unique_id set by a Unity script
if(_Unique_ID == 42.0)
return float4(1.0,1.0,1.0,1.0); //white
// Fallback color = blue
return float4(0.0,0.0,1.0,1.0);
}
ENDCG
}
}
}
In your addendum note you say "Actually each vertex of the same mesh." If that's the case, why not use a modifiable property, like I have included above. Each mesh just needs a script then to change the unique_id.
public class ModifyShader : MonoBehaviour {

    public float unique_id = 1;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        renderer.material.SetFloat( "_Unique_ID", unique_id );
    }
}
I know this is an old thread, but it's worth answering anyway since this is one of the top google results.
You can now use the nointerpolation option for your variables in regular CG shaders. i.e.
nointerpolation fixed3 diff : COLOR0;
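For example, in a v2f struct like the ones above (a minimal sketch):
struct v2f
{
    float4 pos : SV_POSITION;
    nointerpolation fixed4 color : COLOR0; // value of the provoking vertex, not interpolated
};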
This is a pretty old thread, but I recently had a similar issue and I found a super simple answer. OSX Mavericks now supports OpenGL 4.1 so soon it won't be an issue at all, but it still may take a while before Unity3d picks it up.
Anyway, there is a neat way to enable flat shading in Unity even on earlier OSX (e.g. Mountain Lion)!
The shader below will do the job (the crucial part is the line with #extension; otherwise you'd get a compilation error for using the keyword flat).
Shader "GLSL flat shader" {
SubShader {
Pass {
GLSLPROGRAM
#extension GL_EXT_gpu_shader4 : require
flat varying vec4 color;
#ifdef VERTEX
void main()
{
color = gl_Color;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
#endif
#ifdef FRAGMENT
void main()
{
gl_FragColor = color; // set the output fragment color
}
#endif
ENDGLSL
}
}
}
Got to it by combining things from:
http://poniesandlight.co.uk/notes/flat_shading_on_osx/
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Debugging_of_Shaders
The GPU will always interpolate between values. If you want a constant value for a triangle, you need to set the same value for all vertices of that triangle. This can at times be inefficient, but it's how OpenGL (and DirectX) works. There is no inherent notion of a "face" value.
You might do this: glShadeModel(GL_FLAT). This turns off interpolation for all fragment shader inputs, and is available in older OpenGL also (pre 4.0).
If you have some inputs you want to interpolate and some you don't, render once with GL_FLAT to a texture of the same resolution as your output, and then render again with GL_SMOOTH and sample the texture to read the flat values for each pixel (while also getting interpolated values in the usual way).
If you could use DirectX instead, you can use the nointerpolation modifier on individual fragment shader inputs (shader model 4 or later).
The following steps work for me.
Unfortunately, DX uses vertex 0 as the provoking vertex, while GL by default uses 2.
You can change this in GL, but glProvokingVertex does not seem to be exposed.
We are doing flat shading, and this reduces our vertex count significantly.
We have to reorder the triangles and compute normals in a special way (if anyone is interested I can post example source).
The problem is that we have to have different meshes for GL vs DX, as the triangle indices need to be rotated in order for the triangles to use the appropriate provoking vertex.
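A hedged sketch of what that index rotation could look like (not the poster's actual source, just an illustration of rotating each triangle's indices so a different vertex becomes the provoking one):
// Rotate every triangle's index triple (a, b, c) -> (b, c, a).
// Winding order (and thus facing) is preserved, but the vertex that used to be
// first now sits in the last slot, matching the other API's provoking-vertex convention.
int[] RotateProvokingVertex(int[] triangles)
{
    int[] rotated = new int[triangles.Length];
    for (int t = 0; t < triangles.Length; t += 3)
    {
        rotated[t]     = triangles[t + 1];
        rotated[t + 1] = triangles[t + 2];
        rotated[t + 2] = triangles[t];
    }
    return rotated;
}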
Maybe there is some way to execute a GL command via a plugin.