chbrown / overdrive

Bash script to download mp3s from the OverDrive audiobook service

Is there a way to pull chapter timestamps out as we're doing this?

LeLawnGames opened this issue · comments

I don't know much about the back end of how this works (I'm admittedly very new to this world), but this whole project has encouraged me to learn how to use bash scripts and Python, so I'm slowly getting a feel for it.

I've written a script that deciphers the table of contents from OverDrive's "listen in browser" feature and extracts the timestamps to use as label markers for editing chapters, but that process still forces me to manually go into inspect-element for each book. Curious whether there's any way to pull that information out at the same time we're pulling the downloads.

Or if that just doesn't make sense for this specific project I'll find another solution. Thanks for all you do!

I don't understand exactly what you're trying to accomplish, but I'm fairly certain it's outside the scope of this repo, and I wish you luck. A few pointers that might speed your journey:

  • Open the raw .odm file you downloaded from your library in a text editor. If whatever timestamps / markers that you want are in there, it'd be relatively easy to pull them out.
  • If not, check whether the (meta)data you want is embedded in the .mp3 files. E.g., with https://exiftool.org/, call exiftool Part01.mp3. There's a "User Defined Text" field with some "OverDrive MediaMarkers" snippet, but at least for the book I've got right in front of me, it doesn't look very interesting.
  • Maybe this other user already wrote exactly what you want: #39
  • Another issue, another tool that someone likes: #18
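To illustrate the first pointer: the raw .odm is an XML document, so ordinary text tools can dig timestamps out of it if they're in there. A minimal sketch, using a made-up stand-in file with hypothetical element names (the real ODM schema may well differ):

```shell
# Create a tiny stand-in for an .odm file; the real one comes from your
# library, and its actual element names may differ from these.
cat > sample.odm <<'EOF'
<OverDriveMedia>
  <Part filename="Part01.mp3" duration="10:23"/>
  <Part filename="Part02.mp3" duration="09:58"/>
</OverDriveMedia>
EOF

# Pull out the Part elements to inspect their attributes
grep -o '<Part [^>]*/>' sample.odm
```

If your library's .odm has marker or chapter elements, the same grep approach (or a proper XML tool like xmllint) would get you the timestamps without touching the browser.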

HTH.


this whole project has encouraged me to learn how to use bash scripts

Aww, glad to hear it ❤️ ... my work here is done :)

Thanks so much for all of the pointers!

@chbrown One more q for you -- is there a way for me to modify the bash script so that it places the extracted folder inside a specific folder that isn't the one holding the .odm? I tried doing that myself using the --path argument but it didn't recognize that.

Yeah, there is no --path argument, but the script downloads files into a new Author - Title folder in your current directory, regardless of where the .odm file is, e.g.:

mkdir -p other/folder
cd other/folder
overdrive download ../../*.odm

Or you can modify the bash script around this line:
https://github.com/chbrown/overdrive/blob/2.3.1/overdrive.sh#L288

Helpful! I'm looking to have the extracted *.odm.metadata file land inside the folder where the mp3s and artwork are placed. I've been troubleshooting my way through the bash script by trial and error, but figured I'd ask if you knew how to go about it, in case it saves me some time. Thanks!

Oh, gotcha. You could run it a second time with the metadata subcommand:

overdrive metadata ../wherever/*.odm > just-metadata-right-here.xml

Really appreciate you giving me pointers. You really don't need to respond to this one if I'm overstepping, but I'm just going to share the code I'm using, as I'm having trouble figuring out how to pair what you said with what I'm doing.

Essentially I'm trying to automate extracting what I need from the .odm files and then trashing what I don't need. I'd love to find a way to get the metadata as an XML file into the folder holding the exported mp3s each time, so it's easy for me to keep track of what goes with what. Putting my code below. Again, no worries if you can't be bothered; I'm having plenty of fun exploring how to do this on my own.

folder="/Users/jonas/Documents/SERVER/QUEUE"

cd "$folder"
for file in "$folder"/*.odm
do
    # Keep running the command until it completes successfully
    while true; do
        if ~/.local/bin/overdrive download "$file"; then
            break
        fi
        sleep 2 # wait 2 seconds before retrying
    done

    # Delete the leftover OverDrive files for this book
    var=$(basename "$file" .odm)
    find "$folder" -type f -name "$var.odm.metadata" -delete
    find "$folder" -type f -name "$var.odm" -delete
    find "$folder" -type f -name "$var.odm.license" -delete
done

Oh hmm, right, that is a bit tricky, since you can't tell the script precisely what to call the new subfolder, which by default is named based on certain fields from the .odm file.

Couple of options:

  • Create a new empty directory (could use a temporary one with mktemp -d), cd into it, and download from there. At that point there will be one and only one subfolder, so you can easily write the metadata into it, like overdrive metadata /path/back/to/your/book.odm > metadata.xml && mv metadata.xml */, and then move everything back out elsewhere, like mv * ~/Desktop.
  • Hack the script to hardcode dir="newbook" or something, instead of dir="$Author - $Title", and then just always make sure that newbook/ is cleaned out before you start.

Actually, that second change isn't a bad idea, at least as an optional parameter. I'll have to think about it.
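The first option can be sketched end to end like this. The two functions below are stand-ins for the real `~/.local/bin/overdrive download book.odm` and `overdrive metadata book.odm` calls (which need a real .odm and network access), so the folder-shuffling flow can be followed on its own:

```shell
# Stand-ins for `overdrive download` and `overdrive metadata`;
# swap in the real commands when using this for real.
overdrive_download() { mkdir "Author - Title" && touch "Author - Title/Part01.mp3"; }
overdrive_metadata() { echo '<Metadata/>'; }

start=$PWD
tmp=$(mktemp -d)            # scratch space, so the */ glob below is unambiguous
cd "$tmp"

overdrive_download          # creates the "Author - Title" subfolder
overdrive_metadata > metadata.xml
mv metadata.xml */          # exactly one subfolder exists in the temp dir
mv -- */ "$start"/          # move the finished folder back out
cd "$start" && rm -rf "$tmp"
```

The `mv metadata.xml */` trick only works because the temp directory is otherwise empty; that's the whole point of downloading somewhere fresh.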

Figured it out! It's not perfect, but it's a solid building block for anyone to work off of, and it seems to be working well enough for my needs.

folder="/Users/jonas/Documents/SERVER/QUEUE"

for file in "$folder"/*.odm
do
    # Make temp dir inside the loop so that it clears even if there are errors
    temp_dir=$(mktemp -d)
    cd "$temp_dir"

    echo "folder path: $folder"
    echo "temp_dir path: $temp_dir"

    while true; do
        if ~/.local/bin/overdrive download "$file"; then
            break
        fi
        sleep 2 # wait 2 seconds before retrying
    done

    # Get the name of the downloaded folder
    downloaded_folder=$(find "$temp_dir" -mindepth 1 -maxdepth 1 -type d | head -n 1)
    echo "downloaded_folder: $downloaded_folder"

    # Move the downloaded folder to the destination
    mv "$downloaded_folder" "/Users/jonas/Documents/SERVER/BOOKS/TEST"

    # Move this book's .odm.metadata into the downloaded folder
    metadata_file=$(find "$folder" -name "$(basename "$file").metadata")
    mv "$metadata_file" "/Users/jonas/Documents/SERVER/BOOKS/TEST/$(basename "$downloaded_folder")"

    rm -rf "$temp_dir"
done

overdrive2opus will recode the entire thing to a much smaller file size with no discernible loss of quality and also add chapter information for the entire thing.

For anyone who stumbles across this and was interested I just built this repository to leverage @chbrown 's work here into a larger process that outputs chapterized mp3's with metadata: https://github.com/LeLawnGames/overdrive-plex

Does that output one mp3 file (not re-encoded) with all the chapter information or a set of mp3 files, each having chapter info?

It exports one mp3 file per chapter with all the book metadata (author, title, genre, etc.) encoded. It follows best practices for Prologue, as that's what it was optimized for.

Just to save someone else's time: this re-encodes the mp3s into new mp3s, with all the associated issues.

It is not a lossless conversion of the downloaded mp3 files.

For the record, it is possible to output a lossless conversion into one single mp3 file with the metadata and chapter information.
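One way to do that (an assumption about tooling, not necessarily what the commenter used) is ffmpeg's concat demuxer with `-c copy`, which joins the parts without re-encoding. The sketch below only builds the concat list from stand-in files; the actual join on real downloads would be `ffmpeg -f concat -safe 0 -i parts.txt -c copy book.mp3`:

```shell
# Stand-in part files; in practice these are the mp3s `overdrive download` produced.
mkdir -p book && touch book/Part01.mp3 book/Part02.mp3

# Build the file list that ffmpeg's concat demuxer reads
for f in book/Part*.mp3; do
    printf "file '%s'\n" "$f"
done > parts.txt

cat parts.txt
# Then: ffmpeg -f concat -safe 0 -i parts.txt -c copy book.mp3
```

Note that stream-copying joins the audio losslessly, but getting chapter markers into the single mp3's ID3 tags is a separate step.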

I don't know why that would be preferred when using a service like Prologue, which can't read chapter metadata, etc. No worries if this solution doesn't work for your use case, but it's the only answer for those looking for, say, an m4b output with chapters, or listening on a platform that requires a delineated set of chapters. (Also, if someone does have a good lossless solution for splitting mp3s, that would be dope, but in my research there wasn't a great option, and I built the best version I could with the experience I had.)