

I would say there are no radical changes in the HLSL syntax itself between DX9 and DX10 (and by extension DX11). As codeka said, the changes are more a matter of cleaning up the API and a road toward generalization (for the sake of GPGPU). But there are indeed noticeable differences:

- Intrinsic functions (with some exceptions, such as for the GS stage): integer and bitwise operations are now fully IEEE-compliant (and no longer emulated via floating point), and you now have access to binary casts to reinterpret an int as a float, a float as a uint, and so on.
- Textures and samplers have been dissociated: you now use the syntax g_myTexture.Sample( g_mySampler, texCoord ) instead of tex2D( g_mySampledTexture, texCoord ) (see the shader sketches after this answer).
- Buffers: a new kind of resource for accessing data that needs no filtering, in a random-access way, via the new Object.Load function.
- System-Value Semantics: a generalization and extension of the POSITION, DEPTH and COLOR semantics, which are now SV_Position, SV_Depth and SV_Target, plus new per-stage semantics such as SV_InstanceID, SV_VertexID, etc.

If something new pops into my mind I will update my answer.
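To make the texture/sampler split and the system-value semantics concrete, here is a minimal SM4-style sampling sketch; names such as g_myTexture, g_mySampler and PSMain are placeholders rather than anything from the answers above:

// DX9 / SM3 style, for contrast: texture and sampler are fused into one object:
//   sampler2D g_mySampledTexture;
//   float4 c = tex2D(g_mySampledTexture, uv);

// DX10 / SM4 style: texture and sampler are separate, independently bindable objects.
Texture2D    g_myTexture : register(t0);
SamplerState g_mySampler : register(s0);

struct PSInput
{
    float4 pos : SV_Position;   // DX9 would use POSITION here
    float2 uv  : TEXCOORD0;
};

float4 PSMain(PSInput pin) : SV_Target   // DX9 would use COLOR0 here
{
    // The same texture can now be sampled with any number of separate samplers.
    return g_myTexture.Sample(g_mySampler, pin.uv);
}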

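A second sketch, for the Buffer resource and the binary-cast intrinsics; again, g_instanceOffsets and VSOffset are placeholder names:

// A Buffer is bound as a shader resource and read with Load() by element index:
// no sampler, no filtering, just random access.
Buffer<float4> g_instanceOffsets : register(t1);

float4 VSOffset(float3 posL : POSITION,
                uint instanceId : SV_InstanceID) : SV_Position
{
    float4 offset = g_instanceOffsets.Load(instanceId);

    // Binary casts reinterpret bits rather than converting values,
    // e.g. to unpack integer flags that were stored in a float channel.
    uint packedFlags = asuint(offset.w);
    float scale = ((packedFlags & 1u) != 0) ? 2.0f : 1.0f;

    return float4((posL + offset.xyz) * scale, 1.0f);
}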
The biggest change I've noticed between DX9 and DX10 is that under DX10 you need to set an entire render-state block, whereas in DX9 you could change individual states (a rough sketch of the difference follows this answer). This broke my architecture somewhat, because I was rather relying on being able to make a small change and leave all the rest of the states the same (this only really becomes a problem when you set states from a shader).

The other big change is that under DX10 vertex declarations are tied to a compiled shader (via CreateInputLayout). Under DX9 you just set a declaration and set a shader; under DX10 you need to create a shader and then create an input layout attached to that shader (see the input-layout sketch further below). As codeka points out, D3DVERTEXELEMENT9 has been the recommended way to describe vertex layouts since DX9 was introduced; FVF was already deprecated, and through FVF you are unable to do things like set up a tangent basis.
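Flipping the cull mode shows the render-state difference. This is only a sketch: it assumes d3d9Device (IDirect3DDevice9*) and d3d10Device (ID3D10Device*) already exist, and it omits error handling:

// DX9: flip a single state; everything else stays as it was.
d3d9Device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);

// DX10: fill in and create a whole rasterizer-state block up front...
D3D10_RASTERIZER_DESC rsDesc = {};          // zero-init, then set the fields we care about
rsDesc.FillMode        = D3D10_FILL_SOLID;
rsDesc.CullMode        = D3D10_CULL_NONE;   // the one value we actually wanted to change
rsDesc.DepthClipEnable = TRUE;              // ...but the rest of the block must be valid too

ID3D10RasterizerState* noCullState = NULL;
d3d10Device->CreateRasterizerState(&rsDesc, &noCullState);

// ...and bind the whole block; there is no call that changes just the cull mode.
d3d10Device->RSSetState(noCullState);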

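The input-layout sketch below contrasts the two declaration paths. It likewise assumes the device pointers exist, plus a vsBlob (ID3D10Blob*) holding the compiled vertex-shader bytecode:

// DX9: the vertex declaration stands alone; any shader can be set alongside it.
D3DVERTEXELEMENT9 decl9[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};
IDirect3DVertexDeclaration9* vertexDecl = NULL;
d3d9Device->CreateVertexDeclaration(decl9, &vertexDecl);

// DX10: the input layout is created against one specific compiled vertex shader,
// whose input signature it must match.
D3D10_INPUT_ELEMENT_DESC layout10[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
};
ID3D10InputLayout* inputLayout = NULL;
d3d10Device->CreateInputLayout(layout10, 2,
                               vsBlob->GetBufferPointer(),
                               vsBlob->GetBufferSize(),
                               &inputLayout);

Because the layout is validated against the vertex shader's input signature at creation time, changing either the vertex format or the shader's inputs generally means creating a new ID3D10InputLayout, which is exactly the coupling described above.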