My understanding of film grain synthesis is that the encoder samples the frame-by-frame appearance and disappearance of grain spots (their general size and pattern), filters the grain out, and encodes the denoised output as if it were the source video; at playback, the grain is then recreated on the fly. This can actually help obscure some underlying encoding artefacts, and it also keeps the result from looking a little uncanny or 'blurry', as it would if the grain were simply left out.
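As a concrete illustration of that pipeline, here's a minimal sketch of enabling grain synthesis through ffmpeg's libsvtav1 wrapper. This assumes an ffmpeg build with SVT-AV1 support; the file names are placeholders, and the parameter meanings reflect my reading of SVT-AV1's film grain options rather than anything from this thread.

```shell
# Hedged sketch: AV1 encode with film grain synthesis via ffmpeg + libsvtav1.
# film-grain=10          strength of the estimated grain model (0 disables it)
# film-grain-denoise=0   signal the grain parameters, but encode the source
#                        frames as-is instead of the denoised frames
ffmpeg -i source.mkv \
  -c:v libsvtav1 -crf 30 -preset 6 \
  -svtav1-params film-grain=10:film-grain-denoise=0 \
  output.mkv
```

The decoder then reads the signalled grain parameters and re-applies synthetic grain on top of the decoded (grain-free) frames.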
In your sample the film grain seems very faint and fine, so there's probably not a great deal to be gained from it. Re-encoding some of the older X-Files episodes with grain synthesis, on the other hand, can yield some pretty big filesize savings, since the grain there is usually heavier and coarser.
10-bit definitely helps get rid of banding quite a bit, even when the source is 8-bit, because I believe the encoder has more precision to work with (rather than being more constrained) for the compressed representation; weirdly, the output may often even come out a little smaller, since most of the development focus and optimization seems to be on 10-bit encoding.
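The precision difference is easy to put a number on. Per channel, 8-bit gives 2^8 code values and 10-bit gives 2^10, so a smooth gradient has four times as many steps available before it has to band:

```shell
# Why 10-bit internal precision helps with banding: available code
# values per channel at each bit depth, computed with shell arithmetic.
echo "8-bit levels:  $((1 << 8))"             # 256
echo "10-bit levels: $((1 << 10))"            # 1024
echo "ratio: $(( (1 << 10) / (1 << 8) ))x"    # 4x
```

Even when the source only has 256 levels, the finer internal quantization means the encoder rounds less coarsely, which is where the reduced banding comes from.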
There is a bit of blocking noticeable in a few parts, but I assume that's purely from the source video, because it seems to be fairly hard to see blocking artefacts in AV1 unless it's a really potato-quality encode.
I assume that was encoded with Intel’s SVT-AV1 then? Any specific encoding preset used?
@arcanicanis In the future I'd use something like this "--preset 6 --rc 0 --input-depth 10 --superres-mode 4 --keyint -1 --film-grain 10 --film-grain-denoise 0 --crf 30"
--keyint -1 because scene detection is done elsewhere and I just feed the scenes into svt-av1 and then mux each GOP in order
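The per-scene workflow described above might look roughly like the sketch below. This is my own reconstruction, not the poster's actual script: the scenes.txt format, the file names, the ffmpeg trim step, and the use of mkvmerge to append the chunks are all assumptions.

```shell
# Hypothetical sketch: encode each detected scene as its own GOP
# (--keyint -1 = no forced keyframes inside a chunk), then append
# the chunks in order. scenes.txt is assumed to hold "start end"
# frame numbers, one scene per line.
i=0
while read -r start end; do
  ffmpeg -v error -i source.mkv \
    -vf "trim=start_frame=${start}:end_frame=${end},setpts=PTS-STARTPTS" \
    -pix_fmt yuv420p10le -strict -1 -f yuv4mpegpipe - |
  SvtAv1EncApp -i stdin \
    --preset 6 --rc 0 --crf 30 --input-depth 10 \
    --keyint -1 --film-grain 10 --film-grain-denoise 0 \
    -b "chunk_$(printf '%04d' "$i").ivf"
  i=$((i + 1))
done < scenes.txt

# Append the chunks back into one stream (mkvmerge's "+" appends).
mkvmerge -o output.mkv chunk_0000.ivf + chunk_0001.ivf  # + further chunks in order
```

Since each chunk starts on its own keyframe and contains no scene cuts internally, appending them in order reconstructs the full video without the encoder ever needing its own scene detection.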