to solve this, we find the distance from the pixel's current texture coordinate to the center of its texel. we take the rate of change of the texture coordinate across screen pixels (the screen-space derivatives) and pack it into a 2x2 jacobian matrix. inverting that matrix lets us map the uv offset to the texel center back into a screen-space offset, and from there into a world-space offset, which effectively "quantizes" the position data we write to the geometry buffer with respect to the texel size. that way we "trick" the light calculations into treating every pixel that falls within the same texel as having the same position value.
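here's a minimal numpy sketch of that math, assuming a purely linear uv/world mapping across the screen so the jacobians are constant (in a real shader `A` and `W` would come from ddx/ddy of the uv and world position; all names here are illustrative, not from any particular engine):

```python
import numpy as np

TEX_SIZE = 16  # texels per uv unit (a 16x16 texture)

# duv/dscreen: the 2x2 jacobian a shader would build from ddx(uv), ddy(uv)
A = np.array([[0.013, 0.002],
              [0.001, 0.011]])
b = np.array([0.2, 0.3])  # uv at screen origin

# dworld/dscreen: how world position changes per screen pixel
W = np.array([[0.05, 0.00],
              [0.00, 0.05],
              [0.01, 0.02]])
c = np.array([1.0, 2.0, 3.0])  # world position at screen origin

def snap_world(p):
    uv = A @ p + b          # this pixel's texture coordinate
    world = W @ p + c       # this pixel's world position
    # offset from the current uv to the center of its texel
    duv = (np.floor(uv * TEX_SIZE) + 0.5) / TEX_SIZE - uv
    # invert duv/dscreen to turn that uv offset into a screen-space offset
    dscreen = np.linalg.inv(A) @ duv
    # carry the screen-space offset into world space: the "quantized" position
    return world + W @ dscreen

# two nearby pixels that land inside the same texel get the same snapped position
w1 = snap_world(np.array([10.0, 10.0]))
w2 = snap_world(np.array([10.4, 10.3]))
```

because the mapping is linear here, the snapped result depends only on which texel center the pixel lands in, so `w1` and `w2` come out identical; in a real shader the derivatives are only locally linear, but the effect is the same within a texel.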