Find the native resolution(s) of upscaled material (mostly anime)
Start by installing the dependencies through pip.
Start by executing:
$ python getnative.py inputFile [--args]
That's it.
To run this script you will need:
- Python 3.6
- matplotlib
- Vapoursynth R39+
- descale_getnative (optimized for getnative) or descale (works, but really slow for this purpose)
- ffms2 or lsmash or imwri
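Since the script fails with an import error when a dependency is absent, it can help to check up front which of the required Python modules are importable. The helper below is not part of getnative itself, just a minimal sketch using the standard library:

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example: check the core Python-level requirements before running.
# (The source filters ffms2/lsmash/imwri are VapourSynth plugins, not
# Python modules, so they are not checked here.)
# missing_modules(["vapoursynth", "matplotlib"])
```

An empty list means everything checked is importable.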
Input Command:
$ python getnative.py /home/infi/mpv-shot0001.png -k bicubic -b 1/3 -c 1/3
Output Text:
Using imwri as source filter
501/501
Kernel: bicubic AR: 1.78 B: 0.33 C: 0.33
Native resolution(s) (best guess): 720p, 987p
done in 29.07s
Output Graph:
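The graph plots the rescale error against every candidate height; sharp dips ("notches") mark likely native resolutions. A toy illustration of that minimum-finding idea — not the script's actual heuristic — looks like this:

```python
def dips(heights, errors):
    """Return the heights where the error is a strict local minimum."""
    out = []
    for i in range(1, len(errors) - 1):
        if errors[i] < errors[i - 1] and errors[i] < errors[i + 1]:
            out.append(heights[i])
    return out

# A clean dip at 720 in otherwise smooth error data:
# dips([718, 719, 720, 721, 722], [5.0, 3.0, 0.1, 3.0, 5.0]) -> [720]
```

In practice the curve is noisy and several notches appear, which is why the output above is labeled a "best guess".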
Property | Description | Default value | Type |
---|---|---|---|
help | Automatically render the usage information when running -h or --help | False | Action |
inputFile | Absolute or relative path to the input file | Required | String |
frame | Specify a frame for the analysis | num_frames//3 | Int |
mode | Choose a predefined mode ["bilinear", "bicubic", "bl-bc", "all"] | None | String |
kernel | Resize kernel to be used | bicubic | String |
bicubic-b | B parameter of bicubic resize | 1/3 | Float |
bicubic-c | C parameter of bicubic resize | 1/3 | Float |
lanczos-taps | Taps parameter of lanczos resize | 3 | Int |
aspect-ratio | Force aspect ratio. Only useful for anamorphic input | w/h | Float |
min-height | Minimum height to consider | 500 | Int |
max-height | Maximum height to consider | 1000 | Int |
use | Use the specified source filter (e.g. "lsmas.LWLibavSource") | None | String |
is-image | Force image input | False | Action |
generate-images | Save detail mask as png | False | Action |
plot-scaling | Scaling of the y axis. Can be "linear" or "log" | log | String |
plot-format | Format of the output image. Specify multiple formats separated by commas. Can be svg, png, tif(f), and more | svg | String |
show-plot-gui | Show an interactive plot gui window | False | Action |
no-save | Do not save files to disk | False | Action |
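The bicubic-b and bicubic-c options are the two parameters of the standard Mitchell-Netravali bicubic kernel, and the default 1/3, 1/3 is the classic Mitchell filter. A sketch of the kernel those parameters define (this is general resampling math, not code from the script):

```python
def bicubic_kernel(x, b=1/3, c=1/3):
    """Mitchell-Netravali bicubic weight at offset x.

    b and c correspond to the bicubic-b and bicubic-c options;
    matching them to the kernel used in production is what makes
    the descale step line up.
    """
    x = abs(x)
    if x < 1:
        return ((12 - 9*b - 6*c) * x**3
                + (-18 + 12*b + 6*c) * x**2
                + (6 - 2*b)) / 6
    if x < 2:
        return ((-b - 6*c) * x**3
                + (6*b + 30*c) * x**2
                + (-12*b - 48*c) * x
                + (8*b + 24*c)) / 6
    return 0.0  # support is |x| < 2
```

For example, b=0, c=0.5 gives the common Catmull-Rom spline, which some studios use instead of the Mitchell default.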
This script's success rate is far from perfect. If possible, do multiple tests on different frames from the same source. Bright scenes generally yield the most accurate results. Graphs tend to have multiple notches, so the script's assumed resolution may be incorrect. Also, due to the current implementation of the autoguess, it is not possible for the script to automatically recognize 1080p productions. Use your eyes or anibin if necessary.
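When testing multiple frames as suggested above, evenly spaced frame numbers give decent coverage of the source. A hypothetical helper for picking them (sample_frames is not part of the script; pass each result via --frame):

```python
def sample_frames(num_frames, samples=3):
    """Pick evenly spaced frame indices across a clip.

    With samples=1 this reduces to num_frames // 2, close to the
    script's own default of num_frames // 3.
    """
    step = num_frames // (samples + 1)
    return [step * (i + 1) for i in range(samples)]

# For a 3000-frame clip: sample_frames(3000) -> [750, 1500, 2250]
```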
Thanks: BluBb_mADe, kageru, FichteFoll, stux!
Join https://discord.gg/V5vaWwr (Ask in #encode-autism for help)