ListianingrumR / fsds

Introduction to Programming module for CASA MSc programmes.

Home Page: https://www.ucl.ac.uk/bartlett/casa/programmes


About this Repository

The Foundations of Spatial Data Science (FSDS) module is part of CASA's MSc degree offering and builds on a step-change in our ability to work with large spatial data sets. The design of the module is informed by the dearth of planners and geographers able to think computationally using programming, data analysis, and data manipulation skills. There is a severe skills shortage in this domain across all sectors: non-profit, government, corporate, and academic.

The module's objective is to enable students to access, understand, utilise, visualise and communicate data in a spatial context. FSDS is not about pushing buttons, but about using logic, programming, and analytical skills to tackle complex real-world problems in a creative, reproducible, and open manner.

Using this Repository

In order to make use of these materials you will need to download and install the Spatial Data Science computing environment.

To Dos

  • Move Debugging Manifesto in Lecture 2.5 and re-render video.
  • Shift away from .values
  • Swap Weeks 8 (Visualising Data) and 9 (Dimensions in Data) + explain rationale to students.
  • Shift focus of (new) Week 9 & 10 to review/discussing the final assessment. Content still available to view/practice but not required/delivered.
  • Require more active use of GitHub/Git — a submission in which it’s easy to automate checks that they’ve created a repo and populated it with the completed notebooks?
  • Refresh Code Camp and make it more self-test oriented. Use the self-test to set expectations about level of effort/preparation required (and whether or not taking the module will be useful).
  • Standardise delivery of practical content by TAs: should have consistent approach across practicals.
  • Point to cross-module content/recaps.
  • Joint reading list?
  • Look into automating movie generation.
    • This code appears to be better-tuned but isn't compatible and generates larger files: StackOverflow
ffmpeg -framerate 30 -i input.jpg -t 15 \
    -c:v libx265 -x265-params lossless=1 \
    -pix_fmt yuv420p -vf "scale=3840:2160,loop=-1:1" \
    -movflags faststart \
    out.mp4
  • This code appears to generate something that is Mac-compatible from a PNG file:
ffmpeg -r 0.01 -loop 1 -i image.jpg -i audio.mp3 \
  -c:v libx264 -tune stillimage -preset ultrafast \
  -ss 00:00:00 -t 00:00:27 \
  -c:a aac -b:a 96k -pix_fmt yuv420p \
  -shortest out.mp4 -y

Working FFMPEG code

The below seems to generate an mp4 file with audio track that sounds like what I’d expect. The length seems to be automatically set to the length of the audio track (so -t 75 is ignored, which is probably for the best). So what we get here is a merger of the two files into one video file.

ffmpeg -r 30 -t 75 -loop 1 \
  -i 1.1-Getting_Oriented_19_1280x720.png -i test.m4a \
  -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p \
  -c:a aac -b:a 64k \
  out1.mp4

We need to repeat this for every slide in the deck and then merge the videos together into one long video. This approach does not re-encode the data, so it's probably faster and less prone to creating artefacts; however, it also generates errors and seems to leave a blank spot at the start.

$ cat list.txt
file 'out2.mp4'
file 'out3.mp4'
ffmpeg -f concat -safe 0 -i list.txt -c copy out4.mp4

This looks promising (from the documentation for the concat filter):

ffmpeg -i opening.mkv -i episode.mkv -i ending.mkv -filter_complex \
  '[0:0] [0:1] [0:2] [1:0] [1:1] [1:2] [2:0] [2:1] [2:2]
   concat=n=3:v=1:a=2 [v] [a1] [a2]' \
  -map '[v]' -map '[a1]' -map '[a2]' output.mkv

And:

movie=part1.mp4, scale=512:288 [v1] ; amovie=part1.mp4 [a1] ;
movie=part2.mp4, scale=512:288 [v2] ; amovie=part2.mp4 [a2] ;
[v1] [v2] concat [outv] ; [a1] [a2] concat=v=0:a=1 [outa]

This seems to work:

ffmpeg -i out2.mp4 -i out3.mp4 \
-filter_complex "[0:v] [0:a] [1:v] [1:a] concat=n=2:v=1:a=1 [vv] [aa]" \
-map "[vv]" -map "[aa]" out5.mp4

This is headed in the right direction… I think:

ffmpeg -r 30 \
  -i 1.1-Getting_Oriented_19_1280x720.png -i 1.1-Getting_Oriented_18_1280x720.png \
  -i test.m4a -i test2.m4a \
  -filter_complex "[0:v] [1:v] [0:a] [1:a] concat=n=2:v=1:a=1 [vv] [aa]" \
  -map "[vv]" -map "[aa]" \
  -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p \
  -c:a aac -b:a 64k \
  out1.mp4

But the only examples I can find involve a transparent overlay (see also this example).

Adding Filters

I think I'll need to do this later: see the docs.

Ooooh, and a cheatsheet

New Approach

This approach creates short, silent video segments from the PNG files and merges in the audio files at the concat stage rather than per slide. However, there seems to be a keyframe or other issue in the later video: you can't scan forward, although it does play properly if you don't fast-forward.

ffmpeg -r 30 -t 5 -loop 1 \
  -i 1.1-Getting_Oriented_19_1280x720.png \
  -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p \
  slide1.mp4
ffmpeg -r 30 -t 5 -loop 1 \
  -i 1.1-Getting_Oriented_20_1280x720.png \
  -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p \
  slide2.mp4
ffmpeg \
  -i slide1.mp4 -i slide2.mp4 \
  -i test.m4a -i test2.m4a \
  -filter_complex "[0:v:0][2:a:0][1:v:0][3:a:0] concat=n=2:v=1:a=1[outv][outa]" \
  -map "[outv]" -map "[outa]" \
  -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p \
  -c:a aac -b:a 64k \
  out6.mp4

Also Works

This works well too and may be the easiest: it's a straightforward concatenation.

ffmpeg -r 30 -loop 1 \
  -i 1.1-Getting_Oriented_19_1280x720.png -i test.m4a \
  -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p \
  -c:a aac -b:a 64k \
  out1.mp4
ffmpeg -r 30 -loop 1 \
  -i 1.1-Getting_Oriented_20_1280x720.png -i test2.m4a \
  -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p \
  -c:a aac -b:a 64k \
  out2.mp4
$ cat list.txt
file 'out2.mp4'
file 'out3.mp4'
ffmpeg -f concat -safe 0 -i list.txt -c copy out4.mp4
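To scale this beyond two slides, the concat list can be generated programmatically rather than by hand. A minimal sketch (`make_concat_list` is a hypothetical helper, and the out*.mp4 names are placeholders; the listed files don't need to exist for the list itself to be written):

```shell
#!/bin/sh
# Hypothetical helper: write a concat-demuxer list for the given files,
# in the format that `ffmpeg -f concat` expects.
make_concat_list() {
  : > list.txt
  for f in "$@"; do
    printf "file '%s'\n" "$f" >> list.txt
  done
}

make_concat_list out1.mp4 out2.mp4 out3.mp4
# Then merge without re-encoding:
# ffmpeg -f concat -safe 0 -i list.txt -c copy merged.mp4
```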

How about:

ffmpeg \
  -r 30 -t 50 -loop 1 -i 1.1-Getting_Oriented_19_1280x720.png \
  -r 30 -t 30 -loop 1 -i 1.1-Getting_Oriented_20_1280x720.png \
  -i test.m4a -i test2.m4a \
  -filter_complex "[0:v:0][2:a:0][1:v:0][3:a:0] concat=n=2:v=1:a=1[outv][outa]" \
  -map "[outv]" -map "[outa]" \
  -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p \
  -c:a aac -b:a 64k \
  out7.mp4

Something is happening here and in the other runs where I try to merge PNG images into one video: at about 3:39 the preview goes black, and if you fast-forward or rewind in that area you get no picture.

OK! So this problem disappears if the -t option is longer than the audio file for input > 0. But then you get video with no audio past the point where the recording stopped, so you really need to set the time to the length of the related audio file.
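Following that logic, the per-slide duration could be read from the audio file itself and passed as -t. A sketch, assuming ffprobe is installed alongside ffmpeg (`audio_duration` and `build_slide_cmd` are hypothetical helpers; the command mirrors the working per-slide recipe above):

```shell
#!/bin/sh
# Hypothetical helper: print an audio file's container duration in seconds.
audio_duration() {
  ffprobe -v error -show_entries format=duration \
          -of default=noprint_wrappers=1:nokey=1 "$1"
}

# Hypothetical helper: build the per-slide ffmpeg command with -t set to
# the matching audio duration ($1 = PNG, $2 = audio, $3 = duration, $4 = output).
build_slide_cmd() {
  printf 'ffmpeg -r 30 -t %s -loop 1 -i %s -i %s -c:v libx264 -tune stillimage -preset ultrafast -pix_fmt yuv420p -c:a aac -b:a 64k %s' \
    "$3" "$1" "$2" "$4"
}

# Usage (requires ffprobe and real media files):
# d=$(audio_duration test.m4a)
# $(build_slide_cmd 1.1-Getting_Oriented_19_1280x720.png test.m4a "$d" out1.mp4)
```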

Consider also adding filters on the audio and video channels:

  • atadenoise denoising here
  • blend for blending one layer into another (watermarking?) here
  • chromakey for green-screening here (also has useful thing for overlaying on a static black background)
  • colorize is here
  • colortemperature is here
  • coreimage to make use of Apple's CoreImage API here
  • crop is here
  • dblur for directional blur could be fun on intro/outro here
  • decimate is here
  • displace (probably a bad idea but...) is here
  • drawtext (for writing in date/year) is here (requires libfreetype)
  • fade (to fade-in/out the input video) is here
  • fps (frames per second) is here (not sure how it differs from framerate)
  • Gaussian blur is here
  • Hue/saturation/intensity is here
  • Colour adjustment is here
  • Loop is here
  • Monochrome is here and could be used with colourisation on live video, for instance
  • Normalise is here for mapping input histogram on to output range
  • Overlay is here and will be useful for adding an intro/outro
  • Perspective correction is here
  • Scale is here to rescale inputs.
  • Trim is here
  • Variable blur is here and could be useful for background blurring behind a talking head.
  • Vibrance to increase/change saturation is here
  • Vstack as faster alternative to Overlay and Pad is here
  • Xfade to perform cross-fading between input streams is here
  • Zoom and pan is here
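Several of these can be chained in a single -vf filtergraph. A minimal sketch (the filename, caption text, and filter parameters are placeholders, and drawtext requires an ffmpeg build with libfreetype):

```shell
#!/bin/sh
# Hypothetical filter chain: rescale, fade in over 1 second, stamp a caption.
VF="scale=1280:720,fade=t=in:st=0:d=1,drawtext=text='FSDS':x=20:y=20:fontsize=36:fontcolor=white"

# Only attempt the encode if the input actually exists (this is a sketch,
# not a pipeline); audio is passed through untouched.
if [ -f slide1.mp4 ]; then
  ffmpeg -i slide1.mp4 -vf "$VF" -c:a copy out_filtered.mp4
fi
```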

Currently available video sources are here

