...or: Who needs shaders? Just do it the slow way!
This is just a fun little experiment which I've been tinkering with here and there lately. You won't get much serious use out of this, even if we pretend that there is such a thing as "serious use" for simulating CRT monitors. That's mostly due to the speed of doing this all on the CPU.
I'm not kidding about "slow" - if you actually want to apply this to a video of more than a few seconds, in good quality, prepare to be outpaced by a glacier running a marathon through molasses. Of course, there are a ton of shaders for use in your favorite emulator which do it on the GPU in real-time, but I don't know of a decent way to just apply them to a stand-alone image or video file without tearing your hair out.
(FFmpeg can also be told to use the GPU, but not for everything - also, its GPU acceleration isn't portable and doesn't seem to make things very much faster, anyway. If you know of a way around these shortcomings, that'd be nice to know.)
This script (a Windows batch file only, at least for now) will let you perform a CRT transform on an image or a video of the original-resolution material. You can get it at https://github.com/viler-int10h/FFmpeg-CRT-transform, which includes everything you'll need, except for FFmpeg itself of course.
The YouTube video sample will show you the effect in motion, but here are a few still-image results at a higher resolution:
Other than the files included above, you'll only need to have ffmpeg.exe/ffprobe.exe in your PATH. There's a single batch file which handles both still images and videos; to invoke it, just run:
If the output filename is omitted, the script will use the same name as the input appended with "_OUT".
You'll see some temporary files being created during the run ("TMP*"), but unless something goes wrong, the script cleans up after itself when it's done; in fact all files starting with "TMP" in the current directory will be deleted, so try not to have any. If FFmpeg returns an error at any point, the script aborts without deleting the temp files, since they may help with debugging.
For video, keep this in mind: the output will always be in the RGB colorspace / pixel format, because these scripts assume RGB input and attempt to preserve the color information. Specifically, the exact FFmpeg parameters are -c:v libx264rgb -crf 8 (all intermediate steps use -crf 0 for their temporary files, to keep things lossless).
For better compatibility with editing apps, and better color reproduction on video sharing and streaming services, you might want to convert the result to a YUV color space yourself. More info on how to do this optimally will come in a future post.
The configFile argument specifies the configuration file, which is where you tune the simulation options: scaling, shadow-mask parameters, pixel blur, halation (the diffuse glow caused by scattering/reflection effects), scanline profile, beam bloom (where brighter pixels make the scanline appear wider), surface curvature (convexity), corner radius, and vignette (the darkening towards the edges of the CRT surface).
All these elements characterize the normal output of a functional tube display -- I tried to stay away from "aesthetic revisionism", so you won't find things like noise, chromatic aberration, ghosting, fake flicker/beam trail, sync errors and other 'glitch' visuals. If you like such things, FFmpeg can pull them off quite well, so you can always have a go at adding to the script.
The download includes a few config files to start you out. Here's one so you can get a feel for what you can tweak:
; FFmpeg CRT script config file (comments start with a semicolon)
; This example is intended for low-res VGA input which has already been double-
; scanned (such as DOSBox produces with machine=vgaonly), e.g. 640x400.
; For 1:1 input (e.g. 320x200), double the values for SX and SY, and set
; SCAN_FACTOR to double.

; Input pre-scaling + aspect ratio adjustment: --------------------------------

SX              5        ; Width scale factor (integer)
SY              6        ; Height scale factor (integer)
                         ; Larger factors = slower processing, better quality

; Final output scaling: -------------------------------------------------------

OY              1080     ; Output height (width will be set to height*4/3)
OMARGIN         8        ; Minimum width of included margins (edge padding)
OFILTER         lanczos  ; Output scaling filter; recommended: 'lanczos' for
                         ; triad shadowmask, 'gauss' for slot/grille (reduces
                         ; moire)
                         ; See FFmpeg Scaler Options for all available values

; Simulation settings: --------------------------------------------------------

OVL_TYPE        triad    ; Shadow mask (phosphor overlay) type, e.g. triad,
                         ; slot, grille. "_<OVL_TYPE>.png" must exist.
OVL_SCALE       0.08     ; Scale factor for shadow mask image
OVL_ALPHA       0.6      ; Shadow mask opacity (0..1)
OVL_BRIGHTFIX   2.05     ; Brightness multiplier applied after color overlay -
                         ; compensates for apparent darkening
H_PX_BLUR       50       ; Horizontal pixel blur factor (% of pixel width)
V_PX_BLUR       12       ; Vertical pixel blur factor (% of pixel height) -
                         ; should generally be 0 if scanlines are enabled
HALATION_ON     no       ; Add halation?
HALATION_RADIUS 30       ; - radius
HALATION_ALPHA  0.12     ; - opacity (0..1)
SCANLINES_ON    yes      ; Add scanlines?
SL_ALPHA        0.8      ; - opacity (0..1)
SL_WEIGHT       1        ; 0..1 for a slimmer beam, >1 for a fatter beam
SCAN_FACTOR     single   ; Ratio of scanline count to input height: 'single',
                         ; 'double', or 'half'
BLOOM_ON        yes      ; Add scanline bloom? (widens brighter scanlines)
BLOOM_POWER     0.65     ; - bloom factor: 0 (none) .. 1 (full)
                         ; more noticeable for smaller scanline weights
CORNER_RADIUS   0        ; Radius of rounded bezel corners; 0 to disable
CURVATURE       0.04     ; CRT curvature (barrel distortion); 0 to disable
VIGNETTE_ON     yes      ; Add vignette effect? (darkens image towards edge)
VIGNETTE_POWER  0.1      ; - amount of darkening

; Video only - no effect on still images: -------------------------------------

P_DECAY_FACTOR  0        ; Phosphor decay factor (0..1) - exponential: <0.9 is
                         ; very mild, 0.95=heavy, 0.99=very long, 1.0=infinite
P_DECAY_ALPHA   0.5      ; - opacity (0..1) - 0.3 to 0.5 seem realistic
I won't go into what every single line in the batch file is doing (if you look at it, you'll understand why), but the general flow is this:
Make sure there's nothing funny with the command line arguments, get the input dimensions (using ffprobe), read config settings.
Rounded corners, if enabled:
Draw a quarter-circle (using FFmpeg's geq filter), scale it down to the desired size (CORNER_RADIUS) w/anti-aliasing, rotate it four ways; create a blank canvas of the same dimensions as the scaled input, and place the results in the corners. Apply the lenscorrection filter with the desired CRT curvature factor (CURVATURE).
The result is a temporary image that will function as a transparent layer, except for the four corners. You may wonder why we're already applying the curvature here, when we could just do it in a single pass later, when the layers have already been combined. That will be answered below, but for now we're just putting this layer aside for later.
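If you want to play with the corner-mask geometry outside of FFmpeg, here's a little Python sketch of the same idea. This is just an illustration of the concept - the script itself does it with a geq expression, and the names here are my own:

```python
def quarter_circle_mask(radius):
    """Return a radius x radius grid for one corner tile: 255 where the
    pixel is inside the rounded corner's circle (visible screen area),
    0 where it falls outside (the masked-off corner)."""
    mask = []
    for y in range(radius):
        row = []
        for x in range(radius):
            # Distance from the circle's center at the tile's inner corner
            dx = (radius - 1) - x
            dy = (radius - 1) - y
            inside = dx * dx + dy * dy <= (radius - 1) ** 2
            row.append(255 if inside else 0)
        mask.append(row)
    return mask
```

Rotating this tile four ways and placing the copies on a blank canvas gives the corner layer described above.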
Scanlines, if enabled:
Create a greyscale vertical profile of a single scanline, again with the geq filter: luminance is determined by a sine function (using SL_WEIGHT as a modifier), and height depends on the scaled input dimensions and the SCAN_FACTOR parameter. This is tiled vertically to get an alpha map of the scanlines.
Now we add curvature to this image as well - but this time, we pre-scale it by 4x before lenscorrection, and scale it back down afterwards. This is necessary because FFmpeg's lens correction does nearest-neighbor interpolation, and with high-frequency detail (such as these scanlines), this creates awful moiré patterns. The 4x oversampling gets around that, but operations at that size carry extreme CPU/RAM expenses, and THAT's why we separate the curvature passes: we want to do moiré mitigation only for still-image layers, and only when strictly necessary.
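To make the sine-profile idea concrete, here's a small Python sketch. The exponent-based weighting is my own simplification of how SL_WEIGHT reshapes the beam, not the script's exact geq expression:

```python
import math

def scanline_profile(height, weight=1.0):
    """One scanline's vertical brightness profile: a sine hump over 'height'
    pixels, where 'weight' (cf. SL_WEIGHT) reshapes the beam - values below
    1 slim it down, values above 1 fatten it. Illustrative only."""
    profile = []
    for y in range(height):
        # Sine rises from ~0 at the scanline's edges to 1 at its center
        s = math.sin(math.pi * (y + 0.5) / height)
        profile.append(round(255 * s ** (1.0 / weight)))
    return profile
```

Tiling this profile vertically yields the scanline alpha map described above.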
Shadowmask overlay:
The image of the phosphor overlay is taken from the respective .png file, scaled down by the specified OVL_SCALE factor (w/gamma correction before and after!), then tiled and cropped to fill the 'screen'. This also gets the anti-moiré curvature treatment with 4x oversampling, and is saved as a temporary image.
Phosphor decay (video only):
If the P_DECAY_FACTOR is nonzero, take the input video and simulate phosphor persistence as an additional stage before it gets upscaled. For this we use the lagfun filter, with the decay factor as the parameter, and blend the result back with the original using P_DECAY_ALPHA as the opacity factor.
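In pixel terms, lagfun keeps a running maximum of each pixel, faded by the decay factor every frame. A quick Python model of that recurrence, reduced to a single pixel over time (the blending detail is my simplification of the script's actual filtergraph):

```python
def phosphor_decay(frames, decay, alpha):
    """Per-pixel sketch of the phosphor-decay stage: each new value is the
    max of the incoming pixel and the previous output faded by 'decay'
    (cf. P_DECAY_FACTOR); the resulting trail is blended back over the
    input at 'alpha' opacity (cf. P_DECAY_ALPHA)."""
    out = []
    prev = 0.0
    for v in frames:
        prev = max(v, prev * decay)                 # lagfun: bright values linger
        out.append((1 - alpha) * v + alpha * prev)  # blend trail with original
    return out
```

A single white flash followed by black frames leaves an exponentially fading trail, which is exactly the persistence effect we're after.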
Input scaling + pixel blur:
Take the original resolution input image (or video - with or without phosphor decay), and upscale it by the configured width/height factors (SX/SY) to get an aspect-corrected version at a much higher resolution: this resolution will be used for all intermediate steps.
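With the sample config's values, the arithmetic works out like this (plain Python, just to show why SX=5 / SY=6 turns double-scanned VGA into a 4:3 intermediate frame):

```python
# Pre-scaling arithmetic for the sample config: 640x400 double-scanned VGA
# input, SX=5, SY=6. The intermediate frame used by all later steps is
# 3200x2400 - an exact 4:3 ratio, i.e. aspect-corrected.
in_w, in_h = 640, 400
sx, sy = 5, 6
mid_w, mid_h = in_w * sx, in_h * sy
```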
Then, apply a gaussian pixel blur (gblur) using the configured PX_BLUR factors, but not before applying a 2.2 gamma transform using lutrgb, to ensure a gamma-aware application of the blur effect.
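Why bother with the gamma round-trip? Because averaging gamma-encoded values darkens edges. A tiny Python sketch, with the blur reduced to a 2-tap average for clarity (the script's real blur is gblur, of course):

```python
def to_linear(v):
    """Undo the ~2.2 display gamma (what the lutrgb step before gblur does)."""
    return (v / 255.0) ** 2.2

def to_gamma(v):
    """Re-apply the display gamma afterwards."""
    return 255.0 * v ** (1 / 2.2)

def blur_gamma_aware(a, b):
    """Average two pixel values in linear light, as the gamma-wrapped blur
    effectively does (simplified here to a 2-tap 'blur')."""
    return to_gamma((to_linear(a) + to_linear(b)) / 2)
```

Blurring a white/black edge naively gives 127.5; doing it in linear light gives roughly 186, which is the perceptually correct result.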
Halation, if enabled:
Split the output of the previous step into two streams; blur one of them even more (using HALATION_RADIUS), to simulate the diffuse glow caused by slight scatter/backscatter of the electron beam and the phosphor light; then mix that with the other unmodified stream (using the blend filter in 'lighten' mode, with the opacity factor specified by HALATION_ALPHA).
Surface curvature, if enabled:
Now that we're done with the gamma-sensitive blur steps, apply the inverse (0.454545) gamma transform to our image/video; then feed it to the lenscorrection filter to get the convex effect, this time without the 4x pre-scaling, since moiré is not a real issue here, and oversampling the actual video would be crazy slow anyway.
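For the curious, here's roughly what a barrel-distortion remap does, as a Python sketch with a single made-up coefficient k. FFmpeg's lenscorrection has its own k1/k2 formulation, so treat this as the general idea only:

```python
def barrel_sample_pos(x, y, w, h, k):
    """Where a barrel-distorted output pixel samples from, in the spirit of
    a lenscorrection-style remap (simplified one-coefficient model, NOT the
    filter's exact equation). k > 0 bulges the image outward at the center."""
    # Normalize coordinates to [-1, 1] around the frame center
    nx = (2 * x - w) / w
    ny = (2 * y - h) / h
    r2 = nx * nx + ny * ny
    scale = 1 + k * r2          # sample further out toward the edges
    sx = (nx * scale + 1) * w / 2
    sy = (ny * scale + 1) * h / 2
    return sx, sy
```

Note how the extreme corners end up sampling from outside the frame - that's precisely the superfluous padding the crop-to-contents step removes later on.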
Beam bloom, if enabled:
Create a greyscale version of the previous stage's output (image/video), so we preserve only the luminance information. This is done with the hue filter by setting saturation to 0 - once again using gamma transform before and after.
Now we blend this with the mask of the already-curved scanlines, which we had put aside before, using a custom geq blending formula to "fatten" the scanline according to the brightness in the greyscale stream we've just created. The result is a video/image stream containing the luminance mask for the scanlines, with 'beam bloom' included.
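The actual geq formula is the script's business, but the principle can be sketched like so - a hypothetical blending function of my own, purely to illustrate the idea:

```python
def bloom_scanline(mask, luma, power):
    """Illustrative bloom blend (the script's real geq formula may differ):
    push the scanline mask value toward full brightness in proportion to
    the underlying pixel's luminance, so bright areas get a wider, softer
    beam. 'power' corresponds to BLOOM_POWER (0..1)."""
    m, l = mask / 255.0, luma / 255.0
    return 255.0 * (m + (1.0 - m) * l * power)
```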
Blend everything together:
Take the image/video which we've already scaled up, blurred, halated, and curved; if beam bloom was enabled - do a 'multiply' blend with the result of the previous stage, to add the scanlines with the bloom baked in; if it wasn't, just do the same with the original scanline mask we made previously. The SL_ALPHA parameter determines the opacity. (If scanlines are disabled, do none of the above.)
Then... take the curved-up shadowmask image, do another 'multiply' blend (with opacity set by OVL_ALPHA), and add the one with the rounded corners while we're at it. Because the scanlines and shadowmask darken the picture quite a bit, we now compensate by multiplying each color channel with the OVL_BRIGHTFIX parameter (clipping the result to 255).
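Per color channel, that compositing chain boils down to something like this (a simplified single-channel Python model, not the script's exact filtergraph):

```python
def apply_mask_and_brightfix(pixel, mask, alpha, brightfix):
    """Sketch of the final compositing math: a 'multiply' blend of a
    scanline/shadowmask layer at the given opacity, followed by the
    OVL_BRIGHTFIX channel multiplier, clipped to 255 to compensate for
    the overall darkening."""
    multiplied = pixel * mask / 255.0
    blended = (1 - alpha) * pixel + alpha * multiplied
    return min(255.0, blended * brightfix)
```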
Crop to contents:
The surface curvature distortion has introduced some superfluous padding around the picture area, so this part uses the cropdetect filter to find the dimensions of the actual content, without the extra fluff. For this, we generate a "dummy" frame with nothing but white, put it through the same distortion, and feed it to the filter (3 times, since that's the minimum number of frames it needs for a reliable result).
The dummy approach beats examining the actual video, because (1) it's much faster, and (2) if your video starts with a black frame - or just with any black whatsoever extending to the edge - it'd be worse than useless.
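What the cropdetect pass buys us is just a bounding box; in Python terms, roughly:

```python
def crop_box(frame):
    """Sketch of what the cropdetect pass accomplishes: find the bounding
    box (x, y, width, height) of the non-black content in a frame, here
    represented as a 2D list of luma values."""
    rows = [y for y, row in enumerate(frame) if any(row)]
    cols = [x for x in range(len(frame[0])) if any(row[x] for row in frame)]
    return (min(cols), min(rows),
            max(cols) - min(cols) + 1, max(rows) - min(rows) + 1)
```

Running this on the distorted all-white dummy frame gives the crop rectangle once, which is then applied to every real frame.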
Final scaling + vignette:
After cropping, the image/video is scaled down to the final desired size while preserving a 4:3 aspect ratio. This scaling step is also gamma-corrected, and if you've specified a nonzero OMARGIN, it'll leave room for that. The scaling filter itself is specified in the configuration (OFILTER) - see Scaler Options for the list supported by FFmpeg.
All we have to do now is to set the Sample Aspect Ratio (setsar) to 1:1, since the previous steps have likely mucked it up; apply vignette for the edge-darkening effect (using the VIGNETTE_POWER parameter); and pad with the aforementioned margins if applicable.
If you haven't expired of natural causes yet, and your CPU hasn't melted, you will now have the final result! Was it worth it? - you tell me. And if you can figure out a way to speed up the process, or to make it more efficient without losing the essentials, or to get the GPU to put in some real effort, that'd be cool too.