Simulating CRT Monitors with FFmpeg (Pt. 1: Color CRTs)

...or: Who needs shaders? Just do it the slow way!

Sample video (watch @ 1080p)

Part of an update series on FFcrt.  See also:

  1. CRTs Part 1: Color
  2. CRTs Part 2: Monochrome
  3. Flat-Panel Displays

This is just a fun little experiment which I've been tinkering with here and there lately.  You won't get much serious use out of this, even if we pretend that there is such a thing as "serious use" for simulating CRT monitors.  That's mostly due to the speed of doing this all on the CPU.

I'm not kidding about "slow" - if you actually want to apply this to a video of more than a few seconds, in good quality, prepare to be outpaced by a glacier running a marathon through molasses.  Of course, there are a ton of shaders for use in your favorite emulator which do it on the GPU in real-time, but I don't know of a decent way to just apply them to a stand-alone image or video file without tearing your hair out.

(FFmpeg can also be told to use the GPU, but not for everything - also, its GPU acceleration isn't portable and doesn't seem to make things very much faster, anyway.  If you know of a way around these shortcomings, that'd be nice to know.)

This script (a Windows batch file only, at least for now) will let you perform a CRT transform on an image or a video of the original-resolution material.  You can get it at https://github.com/viler-int10h/FFmpeg-CRT-transform, which includes everything you'll need, except for FFmpeg itself of course.

The YouTube video sample will show you the effect in motion, but here are a few still-image results at a higher resolution:

NTSC TV w/slot mask (CGA)
PAL TV w/aperture grille (C64)
TTL RGBI: 320x200 w/overscan
TTL RGBI: fuzzy w/coarse dot mask
TTL RGBI: 640x200 EGA
Analog RGB, poor H-resolution
VGA (double-scanned) w/border
VGA custom mode (360x350), sharp
VGA (640x400), fuzzy

Usage

Other than the files included above, you'll only need to have ffmpeg.exe/ffprobe.exe in your PATH.  There used to be separate batch files for still images and for video, but a single batch file now handles both; to invoke, just run:

ffcrt <config_file> <input_file> [output_file]


If the output filename is omitted, the script will use the same name as the input appended with "_OUT".

You'll see some temporary files being created during the run ("TMP*"), but unless something goes wrong, the script cleans up after itself when it's done; in fact all files starting with "TMP" in the current directory will be deleted, so try not to have any.  If FFmpeg returns an error at any point, the script aborts without deleting the temp files, since they may help with debugging.

For video, keep this in mind: the output will always be in the RGB colorspace / pixel format, because these scripts assume RGB input and attempt to preserve the color information.  Specifically, the exact FFmpeg parameters are -c:v libx264rgb -crf 8 (all intermediate steps use -crf 0 for their temporary files, to keep things lossless).

For better compatibility with editing apps, and better color reproduction on video sharing and streaming services, you might want to convert the result to a YUV color space yourself.  More info on how to do this optimally will come in a future post.
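
In the meantime, if you just need something that editors and uploaders will accept, a quick-and-dirty conversion can look roughly like this (filenames and the CRF value are placeholders, and this glosses over the finer points of gamma/matrix handling):

ffmpeg -i video_OUT.mkv -vf "scale=out_color_matrix=bt709" -pix_fmt yuv420p -c:v libx264 -crf 16 -colorspace bt709 -color_primaries bt709 -color_trc bt709 video_yuv.mp4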

Configuration

The <config_file> argument specifies the configuration file, which is where you tune the simulation options: scaling, shadow-mask parameters, pixel blur, halation (the diffuse glow caused by scattering/reflection effects), scanline profile, beam bloom (where brighter pixels make the scanline appear wider), surface curvature (convexity), corner radius, and vignette (the darkening towards the edges of the CRT surface).

All these elements characterize the normal output of a functional tube display -- I tried to stay away from "aesthetic revisionism", so you won't find things like noise, chromatic aberration, ghosting, fake flicker/beam trail, sync errors and other 'glitch' visuals.  If you like such things, FFmpeg can pull them off quite well, so you can always have a go at adding to the script.

The download includes a few config files to start you out.  Here's one so you can get a feel for what you can tweak:

; FFmpeg CRT script config file (comments start with a semicolon)

; This example is intended for low-res VGA input which has already been double-
; scanned (such as DOSBox produces with machine=vgaonly), e.g. 640x400.
; For 1:1 input (e.g. 320x200), double the values for SX and SY, and set
; SCAN_FACTOR to double.

; Input pre-scaling + aspect ratio adjustment: --------------------------------

SX                5       ; Width scale factor (integer)
SY                6       ; Height scale factor (integer)
                          ; Larger factors = slower processing, better quality

; Final output scaling: -------------------------------------------------------

OY                1080    ; Output height (width will be set to height*4/3)
OMARGIN           8       ; Minimum width of included margins (edge padding)
OFILTER           lanczos ; Output scaling filter; recommended: 'lanczos' for
                          ; triad shadowmask, 'gauss' for slot/grille (reduces
                          ; moire)
                          ; See FFmpeg Scaler Options for all available values

; Simulation settings: --------------------------------------------------------

OVL_TYPE          triad   ; Shadow mask (phosphor overlay) type, e.g. triad,
                          ; slot, grille.  "_<OVL_TYPE>.png" must exist.
OVL_SCALE         0.08    ; Scale factor for shadow mask image
OVL_ALPHA         0.6     ; Shadow mask opacity (0..1)
OVL_BRIGHTFIX     2.05    ; Brightness multiplier applied after color overlay -
                          ;   compensates for apparent darkening

H_PX_BLUR         50      ; Horizontal pixel blur factor (% of pixel width)
V_PX_BLUR         12      ; Vertical pixel blur factor (% of pixel height) -
                          ; should generally be 0 if scanlines are enabled

HALATION_ON       no      ; Add halation?
HALATION_RADIUS   30      ;   - radius
HALATION_ALPHA    0.12    ;   - opacity (0..1)

SCANLINES_ON      yes     ; Add scanlines?
SL_ALPHA          0.8     ;   - opacity (0..1)
SL_WEIGHT         1       ; 0..1 for a slimmer beam, >1 for a fatter beam
SCAN_FACTOR       single  ; Ratio of scanline count to input height: 'single',
                          ; 'double', or 'half'

BLOOM_ON          yes     ; Add scanline bloom? (widens brighter scanlines)
BLOOM_POWER       0.65    ;   - bloom factor: 0 (none) .. 1 (full)
                          ;     more noticeable for smaller scanline weights

CORNER_RADIUS     0       ; Radius of rounded bezel corners; 0 to disable
CURVATURE         0.04    ; CRT curvature (barrel distortion); 0 to disable
VIGNETTE_ON       yes     ; Add vignette effect? (darkens image towards edge)
VIGNETTE_POWER    0.1     ;   - amount of darkening

; Video only - no effect on still images: -------------------------------------

P_DECAY_FACTOR    0       ; Phosphor decay factor (0..1) - exponential: <0.9 is
                          ; very mild, 0.95=heavy, 0.99=very long, 1.0=infinite
P_DECAY_ALPHA     0.5     ;   - opacity (0..1) - 0.3 to 0.5 seem realistic

Implementation

I won't go into what every single line in the batch file is doing (if you look at it, you'll understand why), but the general flow is this:

  • Setup:
    Make sure there's nothing funny with the command line arguments, get the input dimensions (using ffprobe), read config settings.
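
    For instance, grabbing the dimensions boils down to a one-liner along these lines (not necessarily the exact command the script uses), which prints something like "640x400":

    ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=s=x:p=0 input.mkv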

  • Rounded corners, if enabled:
    Draw a quarter-circle (using FFmpeg's geq filter), scale it down to the desired size (CORNER_RADIUS) w/anti-aliasing, rotate it four ways; create a blank canvas of the same dimensions as the scaled input, and place the results in the corners.  Apply the lenscorrection filter with the desired CRT curvature factor (CURVATURE).

    The result is a temporary image that will function as a transparent layer, except for the four corners.  You may wonder why we're already applying the curvature here, when we could just do it in a single pass later, when the layers have already been combined.  That will be answered below, but for now we're just putting this layer aside for later.
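
    As a rough illustration of the quarter-circle part - the sizes and expressions here are made up, and the actual script differs in detail - you could render a big one with geq and shrink it to CORNER_RADIUS to get the anti-aliasing:

    ffmpeg -f lavfi -i "color=c=white:s=1024x1024,format=gray,geq=lum='if(lte(hypot(X,Y),1024),0,255)'" -frames:v 1 TMPcorner_big.png
    ffmpeg -i TMPcorner_big.png -vf "scale=32:32:flags=lanczos" TMPcorner.png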

  • Scanlines, if enabled:
    Create a greyscale vertical profile of a single scanline, again with the geq filter: luminance is determined by a sine function (using SL_WEIGHT as a modifier), and height depends on the scaled input dimensions and the SCAN_FACTOR parameter.  This is tiled vertically to get an alpha map of the scanlines.

    Now we add curvature to this image as well - but this time, we pre-scale it by 4x before lenscorrection, and scale it back down afterwards.  This is necessary because FFmpeg's lens correction does nearest-neighbor interpolation, and with high-frequency detail (such as these scanlines), this creates awful moiré patterns.  The 4x oversampling gets around that, but operations at that size carry extreme CPU/RAM expenses, and THAT's why we separate the curvature passes: we want to do moiré mitigation only for still-image layers, and only when strictly necessary.
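
    A stripped-down sketch of the idea - the profile formula, sizes and curvature constants are illustrative, not the script's actual values:

    rem one sine-shaped scanline period (for SY=6), as a greyscale strip:
    ffmpeg -f lavfi -i "color=c=black:s=1600x6,format=gray,geq=lum='255*pow(sin(PI*(Y+0.5)/6),2)'" -frames:v 1 TMPsl.png
    rem stack 200 copies to cover the screen, then curve at 4x size to dodge the moire:
    ffmpeg -loop 1 -i TMPsl.png -vf "tile=1x200" -frames:v 1 TMPsl_tiled.png
    ffmpeg -i TMPsl_tiled.png -vf "scale=iw*4:ih*4,lenscorrection=k1=0.04:k2=0.02,scale=iw/4:ih/4:flags=lanczos" TMPsl_curved.png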

  • Shadowmask:
    The image of the phosphor overlay is taken from the respective .png file, scaled down by the specified OVL_SCALE factor (w/gamma correction before and after!), then tiled and cropped to fill the 'screen'.  This also gets the anti-moiré curvature treatment with 4x oversampling, and is saved as a temporary image.
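
    The gamma-corrected downscale amounts to something like this (the overlay filename and the OVL_SCALE value are just examples):

    ffmpeg -i _triad.png -vf "lutrgb=r=gammaval(2.2):g=gammaval(2.2):b=gammaval(2.2),scale=iw*0.08:ih*0.08:flags=lanczos,lutrgb=r=gammaval(0.454545):g=gammaval(0.454545):b=gammaval(0.454545)" TMPmask_cell.png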

  • Phosphor decay (video only):
    If the P_DECAY_FACTOR is nonzero, take the input video and simulate phosphor persistence as an additional stage before it gets upscaled.  For this we use the lagfun filter, with the decay factor as the parameter, and blend the result back with the original using P_DECAY_ALPHA as the opacity factor.
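
    Roughly like this - the decay value, blend mode and opacity here are illustrative, not the script's exact choices:

    ffmpeg -i input.mkv -filter_complex "split[a][b];[b]lagfun=decay=0.95[trail];[a][trail]blend=all_mode=lighten:all_opacity=0.5" -c:v libx264rgb -crf 0 TMPdecay.mkv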

  • Input scaling + pixel blur:
    Take the original resolution input image (or video - with or without phosphor decay), and upscale it by the configured width/height factors (SX/SY) to get an aspect-corrected version at a much higher resolution: this resolution will be used for all intermediate steps.
    Then, apply a gaussian pixel blur (gblur) using the configured PX_BLUR factors, but not before applying a 2.2 gamma transform using lutrgb, to ensure a gamma-aware application of the blur effect.
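
    In FFmpeg terms the chain looks something like this - SX=5, SY=6 as in the sample config, and the blur sigmas are example values standing in for the *_PX_BLUR percentages:

    ffmpeg -i input.png -vf "scale=iw*5:ih*6:flags=neighbor,lutrgb=r=gammaval(2.2):g=gammaval(2.2):b=gammaval(2.2),gblur=sigma=2.5:sigmaV=0.7" TMPscaled.png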

  • Halation, if enabled:
    Split the output of the previous step into two streams; blur one of them even more (using HALATION_RADIUS), to simulate the diffuse glow caused by slight scatter/backscatter of the electron beam and the phosphor light; then mix that with the other unmodified stream (using the blend filter in 'lighten' mode, with the opacity factor specified by HALATION_ALPHA).
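
    A minimal sketch, assuming HALATION_RADIUS maps onto the blur sigma and HALATION_ALPHA onto the blend opacity:

    ffmpeg -i TMPscaled.png -filter_complex "split[a][b];[b]gblur=sigma=30[glow];[a][glow]blend=all_mode=lighten:all_opacity=0.12" TMPhalation.png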

  • Surface curvature, if enabled:
    Now that we're done with the gamma-sensitive blur steps, apply the inverse (0.454545) gamma transform to our image/video; then feed it to the lenscorrection filter to get the convex effect, this time without the 4x pre-scaling, since moiré is not a real issue here, and oversampling the actual video would be crazy slow anyway.

  • Beam bloom, if enabled:
    Create a greyscale version of the previous stage's output (image/video), so we preserve only the luminance information.  This is done with the hue filter by setting saturation to 0 - once again using gamma transform before and after.

    Now we blend this with the mask of the already-curved scanlines, which we had put aside before, using a custom geq blending formula to "fatten" the scanline according to the brightness in the greyscale stream we've just created.  The result is a video/image stream containing the luminance mask for the scanlines, with 'beam bloom' included.

  • Blend everything together:
    Take the image/video which we've already scaled up, blurred, halated, and curved; if beam bloom was enabled - do a 'multiply' blend with the result of the previous stage, to add the scanlines with the bloom baked in; if it wasn't, just do the same with the original scanline mask we made previously.  The SL_ALPHA parameter determines the opacity.  (If scanlines are disabled, do none of the above.)

    Then... take the curved-up shadowmask image, do another 'multiply' blend (with opacity set by OVL_ALPHA), and add the one with the rounded corners while we're at it.  Because the scanlines and shadowmask darken the picture quite a bit, we now compensate by multiplying each color channel with the OVL_BRIGHTFIX parameter (clipping the result to 255).
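
    The individual blends are plain two-input blend filters; for instance, applying the shadowmask and the brightness fix could look like this (values taken from the sample config; the script's real filtergraph chains several of these together):

    ffmpeg -i TMPpicture.png -i TMPmask.png -filter_complex "[0][1]blend=all_mode=multiply:all_opacity=0.6,lutrgb=r=val*2.05:g=val*2.05:b=val*2.05" TMPcombined.png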

  • Crop to contents:
    The surface curvature distortion has introduced some superfluous padding around the picture area, so this part uses the cropdetect filter to find the dimensions of the actual content, without the extra fluff.  For this, we generate a "dummy" frame with nothing but white, put it through the same distortion, and feed it to the filter (3 times, since that's the minimum number of frames it needs for a reliable result).

    The dummy approach beats examining the actual video, because (1) it's much faster, and (2) if your video starts with a black frame - or just with any black whatsoever extending to the edge - it'd be worse than useless.
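
    The dummy-frame trick amounts to something like this (curvature and cropdetect settings are illustrative); the crop=... values it reports are then applied to the real stream:

    ffmpeg -f lavfi -i "color=c=white:s=3200x2400,lenscorrection=k1=0.04:k2=0.02" -vf "cropdetect=limit=24:round=2" -frames:v 3 -f null -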

  • Final scaling + vignette:
    After cropping, the image/video is scaled down to the final desired size while preserving a 4:3 aspect ratio.  This scaling step is also gamma-corrected, and if you've specified a nonzero OMARGIN, it'll leave room for that.  The scaling filter itself is specified in the configuration (OFILTER) - see Scaler Options for the list supported by FFmpeg.

    All we have to do now is to set the Storage Aspect Ratio (setsar) to 1:1, since the previous steps have likely mucked it up; apply vignette for the edge-darkening effect (using the VIGNETTE_POWER parameter); and pad with the afore-mentioned margins if applicable.
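
    A rough end-of-chain sketch - the numbers are examples for OY=1080 with an 8-pixel margin, and the vignette strength mapping is simplified:

    ffmpeg -i TMPcropped.png -vf "scale=1418:1064:flags=lanczos,setsar=1,vignette=angle=PI/5,pad=1440:1080:(ow-iw)/2:(oh-ih)/2" output.png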

If you haven't expired of natural causes yet, and your CPU hasn't melted, you will now have the final result!  Was it worth it? - you tell me.  And if you can figure out a way to speed up the process, or to make it more efficient without losing the essentials, or to get the GPU to put in some real effort, that'd be cool too.

12 comments:

Fantastic work!

Mark Rejhon here, inventor of TestUFO and founder of Blur Busters.

You should add a temporal component to this, to do a CRT emulation at sub refresh time scales, in a manner similar to my MAME Temporal HLSL proposal.

https://github.com/mamedev/mame/issues/6762

The concept is to emulate a CRT temporally on a high-Hz panel, with electron beam emulation over multiple refresh cycles. e.g. 4 segments of rolling scan to emulate a 60 Hz CRT onto a 240 Hz gaming monitor, and 6 segments of rolling scan to emulate a 60 Hz CRT onto a 360 Hz gaming monitor. ASUS has roadmapped a 1000Hz display, so that gives opportunity to emulate a CRT using 16 slices per 60Hz CRT refresh cycles.

BTW, I have beam-raced GeForces and Radeons, and wouldn't mind partnering up with any demoscene peeps on a private github for publicly releasing "Tearline Jedi": http://www.pouet.net/topic.php?which=11422&page=1 ... but to create something fancier, perhaps speed up the 8000 pixels/sec with a front buffer generating pixel rows bufferlessly at least 15,750/sec (NTSC scanrate), treating GeForces/Radeons/Intel GPUs like an Atari TIA. The plan is to open-source this work after some competition.

I've also helped some emulators with lagless VSYNC (synchronization of emulator raster and real raster), at https://github.com/tonioni/WinUAE/issues/133 which was successfully implemented into WinUAE (as well as a prototype build of Tom Hart's CLK as well as Calamity's GroovyMAME, respectively, plus some other derivative works such as RTSS Scanline Sync was based upon my Tearline Jedi work)

Oh, and I've got instructions to create 240fps video files via ffmpeg ("UltraHFR") at https://www.blurbusters.com/ultrahfr

So, it would be neat if your CRT emulator could also temporally emulate a rolling scan, with no refresh rate cap. As a hobbyist, I'm so interested in seeing open source temporal CRT emulators being developed to preserve CRT temporal look-and-feel during the refresh rate race to retina refresh rates.

Tests of a high speed video (following instructions at the Mame Temporal HLSL github) -- GoPro HERO8 240fps video of a 60Hz CRT, played back in real-time on a 240Hz monitor, shows a very uncanny temporal-mimicking behaviour (CRT flicker, CRT rolling scan, reduction of motion blur, more accurate phosphor emulation). So what we're looking for is a "temporally-accurate" software-based emulation of a CRT.

I remember seeing 'Tearline Jedi' (probably that same thread on pouet), and being much impressed - hats off!  Modern GPU coding is a few parsecs outside my area of expertise, but I enjoy the idea that these things can still be treated like oldschool display controllers by intentionally introducing tearlines for raster effects (and that you can race the beam using cross-platform HLLs, on top of that).

I'm still working on this spaghetti-monster of a script (there should be a Part 2 post soon), and concentrating on other features for now, but I guess simulating a simple 'segmented' rolling scan shouldn't be very difficult with FFmpeg.  And now that I'm thinking about it... you can even add a somewhat realistic phosphor persistence effect, so it'd look like an actual scanning electron beam (instead of a simple cropped rectangle sliding down a black background, use a moving alpha gradient that wraps around vertically).

Problem is, I'm stuck with garden-variety 60Hz IPS panels, and probably will be for the foreseeable future, so I'd have absolutely no way to test how realistic it actually turns out.  But I gotta say it's a neat idea, so I might eventually add it as an experimental feature if anyone feels like doing some testing.

Understood! I've been spreading the word about the need for temporal-based CRT emulation. I've also got a RetroArch version too at: https://github.com/libretro/RetroArch/issues/10757 ...

It's also partially CRT history preservation: even if you have no ability to implement it yourself, I welcome anybody in the scene to spread the word to other coders about the concept of sub-refresh temporal emulation of a CRT (whether via GPU shader, or CPU, or other means). As CRTs get harder to find, it gets harder to show people what they temporally looked like.

The refresh rate race provides a solution in the CRT temporal department. With the mainstreaming of 120Hz (smartphones getting it, consoles getting it, and most 4K HDTVs supporting 1080p 120Hz), and with DELL reportedly considering 120Hz for office monitors -- by the year 2030 it will be as hard to buy a 60Hz-only display as it is to buy a 480p or 720p HDTV.

Theoretically (algorithmically), the easiest way is likely to clone the CRT frames (e.g. 6 frames per frame) at the refresh rate divisor (e.g. 360/16) and delete the data from the frames, with some alpha-blended edge overlaps (gamma-corrected) between adjacent frames, possibly via precomputed alpha masks (6 masks for the 6 bar positions), which probably would be ffmpeg-script compatible. One would need to pre-supply masks for all refresh rate divisors (/4 /5 /6 /7 /8 etc), like mapping 60fps to 240/300/360/420/480. (Rumor is that 480Hz 1080p LCDs are coming out in 2022-2023).

I think I have the skills to make such a theoretical git commit (not now, I've got too many projects), but I will probably make a realtime HTML5 temporal CRT emulator first instead (TestUFO demo), since HTML5 <CANVAS> supports a lot of the primitives needed. I already do some impressive HTML5 feats such as realtime "interpolation" (in JavaScript) to emulate a variable refresh rate on a fixed-Hz display: https://www.testufo.com/vrr

I have come up with a method that does something similar to your ffmpeg filter but using HTML5 <CANVAS> GPU-accelerated bitmap blend/masking/math operations (no curvature though, but everything else would be simulatable) without needing WebGL. I posted a github entry at the JavaScript Emularity github, documenting the non-WebGL HTML5 APIs necessary: https://github.com/db48x/emularity/issues/74 but it would be still GPU accelerated because GPU-accelerated browsers run the bitmap math operations on the GPU instead. I think I will create something standalone in TestUFO later this year to emulate a CRT filtermask in real time (running a different TestUFO animation, but piped through a realtime CRT filtermask script)

Indeed, my expertise is currently all about high-Hz displays nowadays. I cut my teeth in raster interrupts back in the 8-bit days. I grew up on Commodore 64 and programmed the Space Zoom game (Uridium Clone) in 1986-1991 as a hobby during after-school time. I was age 11-16 when I created the 100% 6502 game in SuperMon64 https://www.lemon64.com/forum/viewtopic.php?t=62416 -- split screens, scrolling zones, multicolor -- although nothing as advanced as demoscene stuff of the day. But it gave me the necessary beam racing experience and low-level programming experience that I could mash-together with modern GPUs.

Oh, and if you (or anyone else) decide to do the temporally-compensated ffmpeg filter yourself, here are best practices:

- Can't use sharp bar boundaries, or you will get tearing artifacts.
- Need massive alphablend overlaps for same pixels in adjacent frames (much like adjacent frames of high speed video of CRT).
- Will need to gamma-correct the alphablend overlaps. This is because RGB(128,128,128) on two adjacent frames don't add up to the same number of photons as RGB(255,255,255). To fix this, you have to run the gamma-correction math.
- The video file would only look "correct" at the gamma it's targeted for.
- To have full control, probably need precomputed PNGs of the rolling-bar alpha masks (either precomputed at beginning, or committed to github)
- If precomputing upon startup, add curvature to the alpha masks in sync with CRT curvature. But this isn't critical initially, especially since the rolling-scan will be too fast for majority to see non-matching curvature in the rolling bars

For coders stuck with 60Hz LCDs, you can probably simulate a 15Hz CRT with 4-segment, and 20Hz CRT with 3-segment, and 30Hz CRT with 2-segment. Also, one can also verify if their HDTV has an undocumented 120Hz mode: www.blurbusters.com/overclock/120hz-pc-to-tv/ and some cheap 60Hz laptop LCDs are overclockable to 120Hz-180Hz: https://forums.blurbusters.com/viewtopic.php?f=8&t=188 (ironically, sometimes the cheaper the easier because the onscreen-display menu chips are an overclocking bottleneck, and removal of those often unlocks the overclockability because of lack of "OUT OF RANGE" watchdog cop firmware).

Even if you can't do it this year, these comments will probably help somebody else to do so eventually as higher Hz becomes cheaper and more widespread. This is a long-term goal (2025-2030): to have an open-source, temporally-accurate CRT emulation reference that others can build upon.

I agree that emulation in general should make use of such "temporal supersampling" wherever that helps with fidelity/accuracy.  And yep, on the CRT front it sure would.  Once these high-refresh displays become more widespread I believe we'll see your evangelization of this concept being taken up.

Re: gamma correction- roger that.  That's something I'm already mindful of, and tried to work into my script wherever blending, scaling, blurring, etc. are involved.  For some reason this concept is neglected too often in image/video processing, even in hardware (e.g. interpolated scaling in GPUs and monitors), not to mention emulation.

Back to the idea of high-refresh beam-scan simulation, there's still one big question mark left for me: how can the overall brightness of the picture be preserved?  All these methods would still run into the problem that the output grows darker as you increase your refresh-rate multiplier.  With hardware BFI, it seems that the display compensates by pulse-width-modulating the backlight's brightness level (as far as I can tell), but I can't really see how this can be overcome in software, let alone something like a video file... of course, I'd be happy to be proven wrong.

For brightness loss, there's a solution: HDR technology

There are new 1000-nit and 2000-nit HDR screens, and I saw the Sony 10,000-nit HDR prototype at CES 2019. Most of the time, HDR is a 1% window, meaning only 1% of the pixels are "permitted to peak" (eye protection algorithm) -- such as the sun in the sky, bright streetlamps/neon signs, or the sunglint reflections off a chromed car. It looks FRIGGIN' AMAZING to see that kind of "brighter-than-white" accent on a true >1000-nit HDR display.

Most CRTs are not as bright as today's LCDs.

Now, a future 1000Hz HDR display, would require 16 refresh cycles per 60Hz emulated CRT refresh cycle, which would be approximately a 1/16th HDR window (6.25%), which might not be allowed to peak at the maximum HDR values, but it will still be allowed to peak beyond the normal brightnesses (100-300 nits).

The G-SYNC Ultimate Displays are allowed to have >10% peaking on 1000nits, so realistically we can have 1000-nit to 1500-nit rolling bars. This should still provide ~200-300 nits at 360 to 480Hz range -- still brighter than some CRTs at their maximum brightness.

Admittedly, many high-Hz screens (above 120Hz) aren't HDR, but Huawei is rumored to be working on a 240Hz full-array-local-dimming HDR gaming monitor. The mainstream western brands will probably merge HDR and high-Hz this decade.

This means it's just a matter of time before the combination of HDR and high Hz solves the problem of nit headroom for software-based rolling-bar emulation.

Interesting, although transitioning to HDR sounds like it might be trouble for other reasons - guess I have some reading up to do about HDR color spaces, how sRGB->HDR color mapping is done, how the whole thing is (or will be) supported by video formats, and so on.  But I might just kick that can down the road and put something together at some point... the aforementioned headache could wait for when HDR displays start showing up en masse.

And those different luminance levels... at least the term "nit" gets us halfway to the proper expression of disdain for marketing departments playing around with units of measurement, I'll give them that.

Totally. HDR is a royal pain at the moment.

For now I'd just add a simple rolling scan filter, and worry about the HDR stuff later. The dimming problem doesn't become super-extreme until going to beyond approximately 3:1 to 4:1 ratios. Many LCDs can reach almost 400 nits, and 100 nits is the floor of comfortable CRT brightness.

About HDR windows, here's how a modern LG OLED behaves:

https://www.avsforum.com/threads/oled-tvs-technology-advancements-thread.681125/page-839#post-60408684

It's got the ability to peak at 1000nits at 10% window, but falls to 700nits at 25% window (screen area illuminated).

Now, if you're doing lots of black fill (CRT phosphor dots), then a 25% bar might actually be more like 10% window instead. So it benefits both the spatial dimming problem AND the temporal dimming problem.

Personally I'd wait on HDR addition to HLSL CRT emulation until a bit later, but it does present a solution to a lot of software-based tube emulation problems.

Wow VileR, that is seriously impressive. I have been trying to do the same thing in After Effects, and the word slow is just as apt. I will definitely be giving your scripts a run. I have found that working in floating point colour is going to help massively with creating the glow found on CRTs. Hopefully I will have something to share shortly, but I have posted an early example on the AmigaLove website if you want to have a look. Nowhere close to your stuff, but you might find it interesting.

@Gareth: thanks, appreciate the feedback.  I've never used AE so I can't comment on what might be possible with that, but I saw your preliminary screenshot and it's looking good already!

Full floating-point color processing would theoretically be best, but this script is probably slow and RAM-hungry enough as it is.  I went for the next-best thing and added an option for 16-bit (per channel) processing across the chain.  It does seem to eliminate the banding artifacts I used to get with the glow/halation effect, so I'm hoping it's good enough.

One problem that needs fixing is the h.264 video output format.  Right now the script does either RGB24 or 10-bit YUV 4:4:4, and many editors/converters don't seem to like these formats very much (YouTube can handle the latter, at least).  For some reason, it's the more standard formats that are giving me issues in the output stage.
Still working on that, so just a heads-up in case you get any funny colorspace problems with FFcrt's output.
