this post was submitted on 25 Mar 2026
445 points (93.4% liked)

Microblog Memes

11183 readers
2963 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

RULES:

  1. Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
  2. Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
  3. You are encouraged to provide a link back to the source of your screen capture in the body of your post.
  4. Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
  5. Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
  6. Absolutely no NSFL content.
  7. Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
  8. No advertising, brand promotion, or guerrilla marketing.

founded 2 years ago
[–] Mothra@mander.xyz 7 points 2 days ago (2 children)
[–] webghost0101@sopuli.xyz 20 points 2 days ago (3 children)

It's an open source tool to download YouTube videos.

Just about every mainstream YouTube download program you or your parents have ever used is actually just a wrapper around this.

Bonus: if you want to learn more about coding, it's not that hard to make a script that automatically downloads the latest video from a list of channels on a schedule. Even AI can do it.

[–] generic_computers@lemmy.zip 9 points 2 days ago (1 children)

Not just YouTube videos, but pretty much every video platform/website you can think of!

[–] rumschlumpel@feddit.org 4 points 2 days ago (1 children)
[–] generic_computers@lemmy.zip 2 points 1 day ago

Especially porn sites!

[–] CommissarVulpin@lemmy.world 3 points 2 days ago (2 children)

Is there like a “tutorial for dummies” for this? I tried to use it once but got nowhere.

[–] kazerniel@lemmy.world 5 points 2 days ago

I've been using Open Video Downloader (youtube-dl-gui) for some years, and it's very user-friendly.

[–] moody@lemmings.world 5 points 2 days ago (2 children)

It's a command-line tool. You type "yt-dlp" followed by the URL of a video, and it does the rest.

It has many other options, but the defaults are good enough for most cases.
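To make that concrete, here is a sketch of the everyday invocations (the video ID `EXAMPLE` is a placeholder, and this assumes yt-dlp is already on your PATH):

```shell
# The everyday invocations look like this (EXAMPLE is a placeholder ID):
#
#   yt-dlp "https://www.youtube.com/watch?v=EXAMPLE"                        # best video+audio
#   yt-dlp -x --audio-format mp3 "https://www.youtube.com/watch?v=EXAMPLE"  # audio only
#   yt-dlp -F "https://www.youtube.com/watch?v=EXAMPLE"                     # list available formats
#
# Quick sanity check that the tool is installed at all:
if command -v yt-dlp >/dev/null 2>&1; then
    yt-dlp --version
else
    echo "yt-dlp not found - see https://github.com/yt-dlp/yt-dlp"
fi
```

Every other option (output templates, format selection, archives) layers on top of that basic `yt-dlp <url>` shape.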

[–] CommissarVulpin@lemmy.world 2 points 2 days ago (1 children)

I think you vastly overestimate my level of computer savviness.

[–] FG_3479@lemmy.world 1 points 1 day ago

Use `winget install yt-dlp-nightly` to install it.

Then run `yt-dlp -f "bestvideo[height<=1080][ext=mp4]+bestaudio[ext=m4a]" "https://youtube.com/watch?v=EXAMPLE"` to download a video.

The file will be saved in `C:\Users\YourUsername` unless you `cd` into a different folder first.

If yt-dlp stops working, `yt-dlp --update-to nightly` should fix it.

[–] webghost0101@sopuli.xyz 1 points 2 days ago* (last edited 2 days ago) (2 children)

There is no single one-stop tutorial for stuff like this, because you could use any scripting language, and which ones you have available may depend on your OS.

But honestly, any half-decent LLM can generate something that works for your specific case.

If you really want to avoid using those, here is a simple example for Windows PowerShell.


# yt-dlp Channel Downloader
# --------------------------
# Downloads the latest video from each channel in channels.txt
#
# Setup:
#   1. Install yt-dlp:  winget install yt-dlp
#   2. Install ffmpeg:  winget install ffmpeg
#   3. Create channels.txt next to this script, one URL per line:
#        https://www.youtube.com/@SomeChannel
#        https://www.youtube.com/@AnotherChannel
#   4. Right-click this file → Run with PowerShell

# Read each line, skip blanks and comments (#)
foreach ($url in Get-Content ".\channels.txt") {
    $url = $url.Trim()
    if ($url -eq "" -or $url.StartsWith("#")) { continue }

    Write-Host "`nDownloading latest from: $url"

    yt-dlp --playlist-items 1 --merge-output-format mp4 --no-overwrites `
        -o "downloads\%(channel)s\%(title)s.%(ext)s" $url
}

Write-Host "`nDone."

And here is my own bash script (Linux), which has only grown with more customization over the years.

(part 1, part 2 in the next reply)

#!/bin/bash
# ============================================================================
#  yt-dlp Channel Downloader (Bash)
# ============================================================================
#
#  Automatically downloads new videos from a list of YouTube channels.
#
#  Features:
#    - Checks RSS feeds first to avoid unnecessary yt-dlp calls
#    - Skips livestreams, premieres, shorts, and members-only content
#    - Two-pass download: tries best quality first, falls back to 720p
#      if the file exceeds the size limit
#    - Maintains per-channel archive and skip files so nothing is
#      re-downloaded or re-checked
#    - Embeds thumbnails and metadata into the final .mp4
#    - Logs errors with timestamps
#
#  Requirements:
#    - yt-dlp       (https://github.com/yt-dlp/yt-dlp)
#    - ffmpeg        (for merging video+audio and thumbnail embedding)
#    - curl          (for RSS feed fetching)
#    - A SOCKS5 proxy on 127.0.0.1:40000 (remove --proxy flags if not needed)
#
#  Channel list format (Channels.txt):
#    The file uses a simple key=value block per channel, separated by blank
#    lines. Each block has four fields:
#
#      Cat=Gaming
#      Name=SomeChannel
#      VidLimit=5
#      URL=https://www.youtube.com/channel/UCxxxxxxxxxxxxxxxxxx
#
#    Cat       Category label (currently unused in paths, available for sorting)
#    Name      Short name used for filenames and archive tracking
#    VidLimit  How many recent videos to consider per run ("ALL" for no limit)
#    URL       Full YouTube channel URL (must contain the UC... channel ID)
#
# ============================================================================

export PATH=$PATH:/usr/local/bin

# --- Configuration -----------------------------------------------------------
# Change these to match your environment.

SCRIPT_DIR="/path/to/script"           # Folder containing this script and Channels.txt
ERROR_LOG="$SCRIPT_DIR/download_errors.log"
DOWNLOAD_DIR="/path/to/downloads"      # Where videos are saved
MAX_FILESIZE="5G"                      # Max file size before falling back to lower quality
PROXY="socks5://127.0.0.1:40000"       # SOCKS5 proxy (remove --proxy flags if unused)

# --- End of configuration ----------------------------------------------------

cd "$SCRIPT_DIR"

# ============================================================================
#  log_error - Append or update an error entry in the error log
# ============================================================================
#  If an entry with the same message (ignoring timestamp) already exists,
#  it replaces it so the log doesn't fill up with duplicates.
#
#  Usage: log_error "[2025-01-01 12:00:00] ChannelName - URL: ERROR message"

log_error() {
    local entry="$1"

    # Strip the timestamp prefix to get a stable key for deduplication
    local key=$(echo "$entry" | sed 's/^\[[0-9-]* [0-9:]*\] //')

    local tmp_log=$(mktemp)
    if [[ -f "$ERROR_LOG" ]]; then
        grep -vF "$key" "$ERROR_LOG" > "$tmp_log"
    fi
    echo "$entry" >> "$tmp_log"
    mv "$tmp_log" "$ERROR_LOG"
}

# ============================================================================
#  Parse Channels.txt
# ============================================================================
#  awk reads the key=value blocks and outputs one line per channel:
#    Category  Name  VidLimit  URL
#  The while loop then processes each channel.

awk -F'=' '
  /^Cat/ {Cat=$2}
  /^Name/ {Name=$2}
  /^VidLimit/ {VidLimit=$2}
  /^URL/ {URL=$2; print Cat, Name, VidLimit, URL}
' "$SCRIPT_DIR/Channels.txt" | while read -r Cat Name VidLimit URL; do

    archive_file="$SCRIPT_DIR/DLarchive$Name.txt"   # Tracks successfully downloaded video IDs
    skip_file="$SCRIPT_DIR/DLskip$Name.txt"          # Tracks IDs to permanently ignore
    mkdir -p "$DOWNLOAD_DIR"

    # ========================================================================
    #  Step 1: Check the RSS feed for new videos
    # ========================================================================
    #  YouTube provides an RSS feed per channel at a predictable URL.
    #  Checking this is much faster than calling yt-dlp, so we use it
    #  as a quick "anything new?" test.

    # Extract the channel ID (starts with UC) from the URL
    channel_id=$(echo "$URL" | grep -oP 'UC[a-zA-Z0-9_-]+')
    rss_url="https://www.youtube.com/feeds/videos.xml?channel_id=${channel_id}"

    # Fetch the feed and pull out all video IDs
    new_videos=$(curl -s --proxy "$PROXY" "$rss_url" | \
        grep -oP '(?<=<yt:videoId>)[^<]+')

    if [[ -z "$new_videos" ]]; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] RSS fetch failed or empty, skipping"
        continue
    fi

    # Compare RSS video IDs against archive and skip files.
    # If every ID is already known, there's nothing to do.
    has_new=false
    while IFS= read -r vid_id; do
        in_archive=false
        in_skip=false

        [[ -f "$archive_file" ]] && grep -q "youtube $vid_id" "$archive_file" && in_archive=true
        [[ -f "$skip_file" ]]    && grep -q "youtube $vid_id" "$skip_file"    && in_skip=true

        if [[ "$in_archive" == false && "$in_skip" == false ]]; then
            has_new=true
            break
        fi
    done <<< "$new_videos"

    if [[ "$has_new" == false ]]; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] No new videos, skipping"
        continue
    fi

    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] New videos found, processing"

    # ========================================================================
    #  Step 2: Build shared option arrays
    # ========================================================================

    # Playlist limit: restrict how many recent videos yt-dlp considers
    playlist_limit=()
    if [[ $VidLimit != "ALL" ]]; then
        playlist_limit=(--playlist-end "$VidLimit")
    fi

    # Options used during --simulate (dry-run) passes
    sim_base=(
        --proxy "$PROXY"
        --extractor-args "youtube:player-client=default,-tv_simply"
        --simulate
        "${playlist_limit[@]}"
    )

    # Options used during actual downloads
    common_opts=(
        --proxy "$PROXY"
        --download-archive "$archive_file"
        --extractor-args "youtube:player-client=default,-tv_simply"
        --write-thumbnail
        --convert-thumbnails jpg
        --add-metadata
        --embed-thumbnail
        --merge-output-format mp4
        --output "$DOWNLOAD_DIR/${Name} - %(title)s.%(ext)s"
        "${playlist_limit[@]}"
    )

    # ========================================================================
    #  Step 3: Pre-pass — identify and skip filtered content
    # ========================================================================
    #  Runs yt-dlp in simulate mode twice:
    #    1. Get ALL video IDs in the playlist window
    #    2. Get only IDs that pass the match-filter (no live, no shorts)
    #  Any ID in (1) but not in (2) gets added to the skip file so future
    #  runs don't waste time on them.

    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pre-pass: identifying filtered videos (live/shorts)"

    all_ids=$(yt-dlp "${sim_base[@]}" --print "%(id)s" "$URL" 2>/dev/null)
    passing_ids=$(yt-dlp "${sim_base[@]}" \
        --match-filter "!is_live & !was_live & original_url!*=/shorts/" \
        --print "%(id)s" "$URL" 2>/dev/null)

    while IFS= read -r vid_id; do
        [[ -z "$vid_id" ]] && continue
        grep -q "youtube $vid_id" "$archive_file" 2>/dev/null && continue
        grep -q "youtube $vid_id" "$skip_file"    2>/dev/null && continue
        if ! echo "$passing_ids" | grep -q "^${vid_id}$"; then
            echo "youtube $vid_id" >> "$skip_file"
            echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Added $vid_id to skip file (live/short/filtered)"
        fi
    done <<< "$all_ids"

[–] bold_omi@lemmy.today 2 points 2 days ago (1 children)

You had me until you said "LLM."

[–] webghost0101@sopuli.xyz 1 points 1 day ago

Absolutely fair, they are a major driver of the accelerated enshittification of modern life; that's why I provided examples, so people can still learn without one.

But it would also be ignorant of me not to recognise how much I managed to learn about Linux/open source from these same tools in the last few years. The traditional ways of learning were never compatible with my personal neurology.

Without LLMs, I'd probably still be stuck on Windows.

[–] webghost0101@sopuli.xyz 2 points 2 days ago

part 2

    # ========================================================================
    #  Step 4 (Pass 1): Download at best quality, with a size cap
    # ========================================================================
    #  Tries: best AVC1 video + best M4A audio → merged into .mp4
    #  If a video exceeds MAX_FILESIZE, its ID is saved for the fallback pass.
    #  Members-only and premiere errors cause the video to be permanently skipped.
 
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 1: best quality under $MAX_FILESIZE"
 
    yt-dlp \
        "${common_opts[@]}" \
        --match-filter "!is_live & !was_live & original_url!*=/shorts/" \
        --max-filesize "$MAX_FILESIZE" \
        --format "bestvideo[vcodec^=avc1]+bestaudio[ext=m4a]/best[ext=mp4]/best" \
        "$URL" 2>&1 | while IFS= read -r line; do
            echo "$line"
            if echo "$line" | grep -q "^ERROR:"; then
 
                # Too large → save ID for pass 2
                if echo "$line" | grep -qi "larger than max-filesize"; then
                    vid_id=$(echo "$line" | grep -oP '(?<=\[youtube\] )[a-zA-Z0-9_-]{11}')
                    [[ -n "$vid_id" ]] && echo "$vid_id" >> "$SCRIPT_DIR/.size_failed_$Name"
 
                # Permanently unavailable → skip forever
                elif echo "$line" | grep -qE "members only|Join this channel|This live event|premiere"; then
                    vid_id=$(echo "$line" | grep -oP '(?<=\[youtube\] )[a-zA-Z0-9_-]{11}')
                    if [[ -n "$vid_id" ]]; then
                        if ! grep -q "youtube $vid_id" "$skip_file" 2>/dev/null; then
                            echo "youtube $vid_id" >> "$skip_file"
                            echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Added $vid_id to skip file (permanent failure)"
                        fi
                    fi
                fi
 
                log_error "[$(date '+%Y-%m-%d %H:%M:%S')] ${Name} - ${URL}: $line"
            fi
        done
 
    # ========================================================================
    #  Step 5 (Pass 2): Retry oversized videos at lower quality
    # ========================================================================
    #  For any video that exceeded MAX_FILESIZE in pass 1, retry at 720p max.
    #  If it's STILL too large, log the actual size and skip permanently.
 
    if [[ -f "$SCRIPT_DIR/.size_failed_$Name" ]]; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 2: lower quality fallback for oversized videos"
 
        while IFS= read -r vid_id; do
            [[ -z "$vid_id" ]] && continue
            echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Retrying $vid_id at 720p max"
 
            yt-dlp \
                --proxy "$PROXY" \
                --download-archive "$archive_file" \
                --extractor-args "youtube:player-client=default,-tv_simply" \
                --write-thumbnail \
                --convert-thumbnails jpg \
                --add-metadata \
                --embed-thumbnail \
                --merge-output-format mp4 \
                --max-filesize "$MAX_FILESIZE" \
                --format "bestvideo[vcodec^=avc1][height<=720]+bestaudio[ext=m4a]/bestvideo[height<=720]+bestaudio[ext=m4a]/best[height<=720]/worst" \
                --output "$DOWNLOAD_DIR/${Name} - %(title)s.%(ext)s" \
                "https://www.youtube.com/watch?v=%24vid_id" 2>&1 | while IFS= read -r line; do
                    echo "$line"
                    if echo "$line" | grep -q "^ERROR:"; then
 
                        # Still too large even at 720p — give up and log the size
                        if echo "$line" | grep -qi "larger than max-filesize"; then
                            filesize_info=$(yt-dlp \
                                --proxy "$PROXY" \
                                --extractor-args "youtube:player-client=default,-tv_simply" \
                                --simulate \
                                --print "%(filesize,filesize_approx)s" \
                                "https://www.youtube.com/watch?v=%24vid_id" 2>/dev/null)
                            if [[ "$filesize_info" =~ ^[0-9]+$ ]]; then
                                filesize_gb=$(echo "scale=1; $filesize_info / 1073741824" | bc)
                                size_str="${filesize_gb}GB"
                            else
                                size_str="unknown size"
                            fi
                            if ! grep -q "youtube $vid_id" "$skip_file" 2>/dev/null; then
                                echo "youtube $vid_id" >> "$skip_file"
                                log_error "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Skipped $vid_id - still over $MAX_FILESIZE at 720p ($size_str)"
                            fi
                        fi
 
                        log_error "[$(date '+%Y-%m-%d %H:%M:%S')] ${Name} - ${URL}: $line"
                    fi
                done
        done < "$SCRIPT_DIR/.size_failed_$Name"
 
        rm -f "$SCRIPT_DIR/.size_failed_$Name"
    else
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 2: no oversized videos to retry"
    fi
 
    # Clean up any stray .description files yt-dlp may have left behind
    find "$DOWNLOAD_DIR" -name "${Name} - *.description" -type f -delete
 
done
[–] Mothra@mander.xyz 2 points 2 days ago (1 children)

I see. I am not a programmer, not by a long shot - more on the grandma side of things. So please forgive me if I'm saying something very stupid; I'm just ignorant.

I've been happy with NewPipe so far; 95% of my video watching happens on my phone. The only thing NewPipe can't do is access age-restricted videos. If this tool can do that on my phone, then I'm definitely interested.
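On the age-restricted point specifically: yt-dlp can usually get past the age gate by reusing the cookies of a browser profile that is already logged in to YouTube. A sketch (the browser name and URL below are placeholder choices):

```shell
# Borrow a logged-in browser session to fetch age-restricted videos.
# "firefox" is just an example - chrome, edge, safari etc. also work,
# and EXAMPLE is a placeholder video ID:
#
#   yt-dlp --cookies-from-browser firefox "https://www.youtube.com/watch?v=EXAMPLE"
#
# Check that your installed build actually has the option:
if command -v yt-dlp >/dev/null 2>&1; then
    yt-dlp --help | grep -- "--cookies-from-browser" || echo "option not found in this build"
else
    echo "yt-dlp not installed"
fi
```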

[–] webghost0101@sopuli.xyz 1 points 2 days ago

Yes and no.

Yes because I am doing it; no because it's just one part of the process.

NewPipe is cool, but it doesn't run on my phone, so I needed something else.

You may have heard of Plex ("run your own Netflix"); I much prefer its competitor Jellyfin, but that doesn't matter here.

The point is that I download my YouTube videos on a schedule, via a script, straight into Jellyfin's library folder; I can then log in to Jellyfin from any type of device.
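On Linux, the "on a schedule" part is typically just a cron entry. A sketch that builds the crontab line you would install with `crontab -e` (the script and log paths are placeholders, following the same convention as the SCRIPT_DIR setting in the bash script above):

```shell
# Build the crontab line for a nightly 03:00 run of the downloader.
# Both paths are placeholders - point them at your real script and log.
script="/path/to/script/downloader.sh"
log="/path/to/script/cron.log"

# minute hour day-of-month month day-of-week command
cron_line="0 3 * * * $script >> $log 2>&1"
echo "$cron_line"
```

Pointing the script's DOWNLOAD_DIR at the Jellyfin library folder is what closes the loop: each scheduled run drops new videos where Jellyfin already looks for media.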

[–] Successful_Try543@feddit.org 7 points 2 days ago* (last edited 2 days ago)

yt-dlp is a feature-rich command-line audio/video downloader with support for thousands of sites. The project is a fork of youtube-dl based on the now-inactive youtube-dlc.

https://github.com/yt-dlp/yt-dlp