Merge branch 'tubearchivist:master' into master

crocs 2023-10-17 16:08:50 -05:00 committed by GitHub
commit ae83b9f9d4
39 changed files with 684 additions and 288 deletions

.gitignore

@ -3,3 +3,6 @@ mkdocs/site
# ignore local cache
mkdocs/.cache/
# python
.venv


@ -1,6 +1,6 @@
# build the docs and load static files into nginx
FROM python:3.10.9-slim-bullseye AS builder
FROM python:3.11.3-slim-bullseye AS builder
ENV PATH=/root/.local/bin:$PATH
RUN apt-get update -y && apt-get install -y libcairo2

mkdocs/docs/advanced.md (new file)

@ -0,0 +1,142 @@
---
description: Collection of advanced concepts and debug info.
---
# Advanced Notes
!!! note
As a general rule of thumb, make sure your backups are up to date before continuing with anything here.
A loose collection of advanced debug info. It may or may not apply to you; only use this when you know what you are doing. Some of this functionality might get implemented in the regular UI in the future.
## Reactivate documents
As part of the metadata refresh task, Tube Archivist will mark videos, channels and playlists as deactivated if they are no longer available on YouTube. For various reasons, that might have deactivated something that shouldn't have been, for example if a video got reinstated after a copyright strike on YT. You can reactivate everything in bulk, so the refresh task will check these items again and deactivate only the ones that are actually no longer available.
Curl commands to run within the TA container to reactivate documents:
??? Videos
```bash
curl -XPOST "$ES_URL/ta_video/_update_by_query?pretty" -u elastic:$ELASTIC_PASSWORD -H "Content-Type: application/json" -d '
{
"query": {
"term": {
"active": {
"value": false
}
}
},
"script": {
"source": "ctx._source.active = true",
"lang": "painless"
}
}'
```
??? Channels
```bash
curl -XPOST "$ES_URL/ta_channel/_update_by_query?pretty" -u elastic:$ELASTIC_PASSWORD -H "Content-Type: application/json" -d '
{
"query": {
"term": {
"channel_active": {
"value": false
}
}
},
"script": {
"source": "ctx._source.channel_active = true",
"lang": "painless"
}
}'
```
??? Playlists
```bash
curl -XPOST "$ES_URL/ta_video/_update_by_query?pretty" -u elastic:$ELASTIC_PASSWORD -H "Content-Type: application/json" -d '
{
"query": {
"term": {
"playlist_active": {
"value": false
}
}
},
"script": {
"source": "ctx._source.playlist_active = true",
"lang": "painless"
}
}'
```
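To check how many documents are currently marked as inactive before reactivating them, you can query the Elasticsearch `_count` API with the same credentials; a quick sketch to run from within the TA container (the index and field names follow the examples above):
```bash
curl -u elastic:$ELASTIC_PASSWORD "$ES_URL/ta_video/_count?q=active:false&pretty"
curl -u elastic:$ELASTIC_PASSWORD "$ES_URL/ta_channel/_count?q=channel_active:false&pretty"
```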
## Corrupted ES index reset
After a hard reset of your server or any other hardware failure you might experience data corruption. ES can be particularly unhappy about that, especially if the reset happens while it is actively writing to disk. It's very likely that only your `/indices` folder got corrupted, as that is where the regular reads/writes happen. Luckily you have your [snapshots](settings/application.md#snapshots) set up.
ES will not start if the data is corrupted. So stop all containers and delete everything *except* the `/snapshot` folder in the ES volume. After that, start everything back up. Tube Archivist will create a new blank index. All your snapshots should be available for restore on your settings page; you probably want to restore the most recent one. After the restore, run a [filesystem rescan](settings/actions.md#rescan-filesystem) for good measure.
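A sketch of that recovery, assuming a docker compose setup and an Elasticsearch volume bind mounted at `./es` (both assumptions, adjust paths to your environment and double check before deleting anything):
```bash
docker compose down                                  # stop all containers first
cd ./es                                              # your Elasticsearch volume, adjust the path
# delete everything except the snapshot folder
find . -mindepth 1 -maxdepth 1 ! -name 'snapshot' -exec rm -rf {} +
docker compose up -d                                 # restart, Tube Archivist creates a blank index
```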
## ES mapping migrations troubleshooting
Tube Archivist will apply mapping changes at application startup. That is usually needed when changing how an existing field is indexed. This should be seamless and automatic, but it can leave your index in a messed up state if that process gets interrupted for any reason. Common reasons are artificially limiting the memory of the container so the OS can't manage it dynamically, not having enough available storage on the ES volume, or interrupting the process out of impatience (don't do that).
In general the process is:
- Compare existing mapping with predefined expected mapping
- If that is identical, there is nothing to do
- Else create a `_backup` of the existing index
- Delete the original index and create a new empty one with the new mapping in place
- Copy over the previously created `_backup` index to apply the new mappings
- Delete the now leftover `_backup` index.
If you are not sure whether anything is happening, you can monitor the `docs.count` value for each index; it should change over time during that process and gives you an indicator of progress:
From within the ES container:
```bash
curl -u elastic:$ELASTIC_PASSWORD "localhost:9200/_cat/indices?v&s=index"
```
If that process gets interrupted before deleting the `_backup` index and you try to run this again, you will see an error like `resource_already_exists_exception`, for example `index [ta_comment_backup/...] already exists`, indicating in this case that your migration of the `ta_comment` index previously failed.
First make sure the original index is still there with the command above. After verifying that, stop the TA container, then you can delete the leftover `_backup` index, e.g. `ta_comment_backup` in this case:
```bash
curl -XDELETE -u elastic:$ELASTIC_PASSWORD "localhost:9200/ta_comment_backup?pretty"
```
and you should get:
```json
{
"acknowledged" : true
}
```
Then you can start everything again and the migration will run again. If your error persists, the ES and TA logs should give additional debug info.
## Manual yt-dlp update
This project strives for timely updates when yt-dlp makes a new release, but sometimes ideals meet reality. Also sometimes yt-dlp has a fix published, but not yet released.
Doing this is **very likely** going to break things for you. You will want to try this out on a testing instance first. There have regularly been subtle changes in the yt-dlp API, so only do this if you know how to debug this project yourself, and do share your fixes so any problems can be dealt with before release.
**Build your own image**: Update the version in `requirements.txt` and rebuild the image from `Dockerfile`. This will use your own image, even on container rebuild.
**Update yt-dlp on its own**: You can also update the yt-dlp library alone in the container.
- Restart your container for changes to take effect.
- These changes won't persist a container rebuild from image.
Update to the newest regular yt-dlp release:
```
pip install --upgrade yt-dlp
```
To update to nightly you'll have to specify the correct `--target` folder:
```
pip install \
--upgrade \
--target=/root/.local/bin \
https://github.com/yt-dlp/yt-dlp/archive/master.tar.gz
```
This is obviously particularly likely to create problems. Also note that the `--version` command will only show the latest regular release and will not indicate a nightly build.
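To confirm which yt-dlp build actually ended up installed after either method, checking the package metadata with pip from within the container is a reasonable sketch (the exact version string differs between regular and nightly builds):
```bash
pip show yt-dlp | grep -i version
```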

View File

@ -132,10 +132,14 @@ Change watched state, where the `id` can be a single video, or channel/playlist
Validate your connection with the API
**GET** `/api/ping/`
When valid returns message with user id:
When valid, returns a message with the user id and the parsed TubeArchivist version (family, major, minor):
```json
{
"response": "pong",
"user": 1
"user": 1,
"version": [
0,
3,
6
]
}
```
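A sketch of calling this endpoint with curl, assuming the token authentication used by the Tube Archivist API and a local instance on port 8000 (`$TA_TOKEN` is a placeholder for your API token):
```bash
curl -H "Authorization: Token $TA_TOKEN" "http://localhost:8000/api/ping/"
```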


@ -7,7 +7,7 @@ Parameter:
- filter: subscribed
Subscribe to a list of channels:
Subscribe/Unsubscribe to a list of channels:
**POST** `/api/channel/`
```json
{
@ -17,6 +17,16 @@ Subscribe to a list of channels:
}
```
## Channel Search
⚠️ **Experimental**
**GET** `/api/channel/?q=`
Parameter:
- q: Query to search channel
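As a sketch, assuming the same token authentication and a local instance (the query string is only an example):
```bash
curl -H "Authorization: Token $TA_TOKEN" "http://localhost:8000/api/channel/?q=linux"
```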
## Channel Item
**GET** `/api/channel/<channel_id>/`
**DELETE** `/api/channel/<channel_id>/`


@ -17,6 +17,8 @@ Add list of videos to download queue:
]
}
```
Parameter:
- autostart: true
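A sketch of adding a video with automatic start, passing `autostart` as a query parameter; the token authentication and the `youtube_id`/`status` body fields follow the full endpoint documentation and are assumptions here, adjust to your version:
```bash
curl -XPOST "http://localhost:8000/api/download/?autostart=true" \
  -H "Authorization: Token $TA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"data": [{"youtube_id": "2tdiKTSdE9Y", "status": "pending"}]}'
```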
Delete download queue items by filter:
**DELETE** `/api/download/?filter=ignore`


@ -5,6 +5,12 @@ This page has a generic overview with how the Tube Archivist API functions. This
!!! note
These API endpoints *have* changed in the past and *will* change again while building out additional integrations and functionality. For the time being, don't expect backwards compatibility for third party integrations using these endpoints.
!!! note
Endpoints marked as **experimental** are particularly likely to change again.
!!! note
Not all endpoints will return expected status codes for errors, e.g. sometimes you'll see an error **500 Server Error** even though it should be **400 Bad request**. If you encounter any such cases, [please fix them](https://github.com/tubearchivist/tubearchivist/blob/master/CONTRIBUTING.md#how-to-make-a-pull-request) as you find them, no need to clutter up the issue queue.
## Context
- All changes to the API are marked with a `[API]` keyword for easy searching, for example search for [commits](https://github.com/tubearchivist/tubearchivist/search?o=desc&q=%5Bapi%5D&s=committer-date&type=commits). You'll find the same in the [release notes](https://github.com/tubearchivist/tubearchivist/releases).
- Check the commit history and release notes to see if a documented feature is already in your release. The documentation might be ahead of the regular release schedule.


@ -3,8 +3,24 @@
## Playlist List
**GET** `/api/playlist/`
Subscribe/Unsubscribe to a list of playlists:
**POST** `/api/playlist/`
```json
{
"data": [
{"playlist_id": "PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha", "playlist_subscribed": true}
]
}
```
## Playlist Item
**GET** `/api/playlist/<playlist_id>/`
Delete playlist, metadata only:
**DELETE** `/api/playlist/<playlist_id>/`
Delete playlist, also delete all videos in playlist:
**DELETE** `/api/playlist/<playlist_id>/?delete-videos=true`
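A sketch of deleting a playlist together with its videos, assuming token authentication and a local instance (the playlist id is the example used elsewhere in these docs):
```bash
curl -XDELETE -H "Authorization: Token $TA_TOKEN" \
  "http://localhost:8000/api/playlist/PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha/?delete-videos=true"
```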
## Playlist Videos
**GET** `/api/playlist/<playlist_id>/video/`

mkdocs/docs/api/stats.md (new file)

@ -0,0 +1,37 @@
# Statistics API endpoints
## Primary
⚠️ **Experimental**
**GET** `/api/stats/primary/`
Get primary statistics for your videos, channels, playlists and download queue.
## Watch Progress
⚠️ **Experimental**
**GET** `/api/stats/watch/`
Get statistics over your watch progress.
## Download History
⚠️ **Experimental**
**GET** `/api/stats/downloadhist/`
Get download history statistics for the last few days.
## Biggest Channels
⚠️ **Experimental**
**GET** `/api/stats/biggestchannels/`
Get a list of the biggest channels; specify the *order* parameter.
Parameter:
- order: doc_count (default), duration, media_size
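For example, to get the biggest channels ordered by media size (a sketch, assuming token authentication and a local instance):
```bash
curl -H "Authorization: Token $TA_TOKEN" \
  "http://localhost:8000/api/stats/biggestchannels/?order=media_size"
```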



@ -1,28 +1,31 @@
---
description: Subscribe to channels, browse your channels and access additional metadata.
---
# Channels Pages
The channels are organized on two different levels, similar to the [playlists](playlists.md):
## Channels Overview
Accessible at `/channel/` of your Tube Archivist, the **Overview Page** shows a list of all channels you have indexed.
- You can filter that list to show or hide subscribed channels with the toggle. Clicking on the channel banner or the channel name will direct you to the *Channel Detail Page*.
- If you are subscribed to a channel an *Unsubscribe* button will show, if you aren't subscribed, a *Subscribe* button will show instead.
The **Subscribe to Channels** button <img src="/assets/icon-add.png?raw=true" alt="add icon" width="20px" style="margin:0 5px;"> opens a text field to subscribe to a channel. You have a few options:
- Enter the YouTube channel ID, a 25 character alphanumeric string. Example:
- `UCBa659QWEk1AI4Tg--mrJ2A`
- Enter the URL to the channel page on YouTube. Example:
- `https://www.youtube.com/channel/UCBa659QWEk1AI4Tg--mrJ2A` or alias url `https://www.youtube.com/@TomScottGo`
- Enter a channel alias starting with *@*, for example: `@TomScottGo`
- Enter the video URL for any video and let Tube Archivist extract the channel ID for you, for example `https://www.youtube.com/watch?v=2tdiKTSdE9Y`
- If you want to subscribe to more than one channel directly, you can add one channel per line in the text field
- Enter a [channel](urls.md#channel).
- Enter a [video](urls.md#video) and let Tube Archivist extract the channel ID for you.
- Add one per line.
To search your channels, click on the search icon <img src="/assets/icon-search.png?raw=true" alt="search icon" width="20px" style="margin:0 5px;"> to reach the search page. Start your query with `channel:`, learn more on the [search](search.md) page.
## Channel Detail
Each channel gets a set of channel detail pages.
- If you are subscribed to the channel, an *Unsubscribe* button will show, else the *Subscribe* button will show.
- The **Mark as Watched** button will mark all videos of this channel as watched.
- You'll see some statistics of the channel, like how many videos you have, total playback time and total size. That aggregation is based on your filter, e.g. if you toggle *Hide watched*, the aggregation will be over your unwatched videos only.
- The **Mark as Watched** and **Mark as Unwatched** buttons will mark all videos of this channel as watched/unwatched.
### Videos
Accessible at `/channel/<channel-id>/`, this page shows all the videos you have downloaded from this channel.
@ -43,12 +46,14 @@ On the *Channel About* page, accessible at `/channel/<channel-id>/about/`, you c
- The button **Reindex** will reindex all channel metadata. This will also categorize existing videos as shorts or streams.
- The button **Reindex Videos** will reindex metadata for all videos in this channel.
The channel customize form gives options to change settings on a per channel basis. Any configurations here will overwrite your configurations from the [settings](settings.md) page.
If available, you can find the channel description and channel tags there.
The channel customize form gives options to change settings on a per channel basis. Any configurations here will overwrite your configurations from the [settings](settings/application.md) page.
- **Download Format**: Overwrite the download quality for videos from this channel.
- **Auto Delete**: Automatically delete watched videos from this channel after selected days.
- **Index Playlists**: Automatically add all playlists with at least one downloaded video to your index. Only do this for channels where you care about playlists, as this will slow down indexing of new videos because each video has to be checked against the playlists it belongs to.
- **SponsorBlock**: Using [SponsorBlock](https://sponsor.ajay.app/) to get and skip sponsored content. Customize per channel: You can *disable* or *enable* SponsorBlock for certain channels only to overwrite the behavior set on the [settings](settings.md) page. Selecting *unset* will remove the overwrite and your setting will fall back to the default on the settings page.
- **SponsorBlock**: Using [SponsorBlock](https://sponsor.ajay.app/) to get and skip sponsored content. Customize per channel: You can *disable* or *enable* SponsorBlock for certain channels only to overwrite the behavior set on the [settings](settings/application.md) page. Selecting *unset* will remove the overwrite and your setting will fall back to the default on the settings page.
### Downloads
If you have any videos from this channel pending in the download queue, a *Downloads* link will show, bringing you directly to the [downloads](downloads.md) page, filtering the list by the selected channel.


@ -0,0 +1,16 @@
You can enable support for authentication proxies such as Authelia.
This effectively disables credentials-based authentication and instead authenticates users if a specific request header contains a known username.
You must make sure that your proxy (nginx, Traefik, Caddy, ...) forwards this header from your auth proxy to tubearchivist.
Check the documentation of your auth proxy and your reverse proxy on how to correctly set this up.
Note that this automatically creates new users in the database if they do not already exist.
- `TA_ENABLE_AUTH_PROXY` (ex: `true`) - Set to anything besides empty string to use forward proxy authentication.
- `TA_AUTH_PROXY_USERNAME_HEADER` - The name of the request header that the auth proxy passes to the proxied application (tubearchivist in this case), so that the application can identify the user.
Check the documentation of your auth proxy to get this information.
Note that the request headers are rewritten in tubearchivist: all HTTP headers are prefixed with `HTTP_`, all letters are in uppercase, and dashes are replaced with underscores.
For example, for Authelia, which passes the `Remote-User` HTTP header, the `TA_AUTH_PROXY_USERNAME_HEADER` needs to be configured as `HTTP_REMOTE_USER`.
- `TA_AUTH_PROXY_LOGOUT_URL` - The URL that tubearchivist should redirect to after a logout.
By default, the logout redirects to the login URL, which means the user will be automatically authenticated again.
Instead, you might want to configure the logout URL of the auth proxy here.
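A minimal sketch of what these variables could look like in the Tube Archivist environment, assuming Authelia's `Remote-User` header and a hypothetical logout URL on your auth proxy:
```
TA_ENABLE_AUTH_PROXY=true
TA_AUTH_PROXY_USERNAME_HEADER=HTTP_REMOTE_USER
TA_AUTH_PROXY_LOGOUT_URL=https://auth.example.com/logout
```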


@ -1,3 +1,7 @@
---
description: Populate your download queue by rescanning your Subscriptions or manually adding items to the download queue.
---
# Downloads Page
Accessible at `/downloads/` of your Tube Archivist, this page handles all the download functionality.
@ -5,46 +9,31 @@ Accessible at `/downloads/` of your Tube Archivist, this page handles all the do
## Rescan Subscriptions
The **Rescan Subscriptions** icon <img src="/assets/icon-rescan.png?raw=true" alt="rescan icon" width="20px" style="margin:0 5px;"> will start a background task to look for new videos from the channels and playlists you are subscribed to.
Tube Archivist will get available *videos*, *shorts* and *streams* from each channel, you can define the channel and playlist page size on the [settings page](settings.md#subscriptions). With the default page size, expect this process to take around 2-3 seconds for each channel or playlist you are subscribed to. A status message will show the progress.
Tube Archivist will get available *videos*, *shorts* and *streams* from each channel, you can define the channel and playlist page size on the [settings page](settings/application.md#subscriptions). With the default page size, expect this process to take around 2-3 seconds for each channel or playlist you are subscribed to. A status message will show the progress.
Then for every video found, **Tube Archivist** will skip the video if it has already been downloaded or if you added it to the *ignored* list before. All the other videos will get added to the download queue. Expect this to take around 2 seconds for each video as **Tube Archivist** needs to grab some additional metadata and artwork. New videos will get added at the bottom of the download queue.
## Download Queue
The **Start Download** icon <img src="/assets/icon-download.png?raw=true" alt="download icon" width="20px" style="margin:0 5px;"> will start the download process starting from the top of the queue. Take a look at the relevant settings on the [Settings Page](settings.md#downloads). Once the process started, a progress message will show with additional details and controls:
The **Start Download** icon <img src="/assets/icon-download.png?raw=true" alt="download icon" width="20px" style="margin:0 5px;"> will start the download process. This will prioritize videos added as *auto start* or as *download now*, starting from the top of the queue. Once the process started, a progress message will show with additional details and controls:
- The stop icon <img src="/assets/icon-stop.png?raw=true" alt="stop icon" width="20px" style="margin:0 5px;"> will gracefully stop the download process, once the current video has been finished successfully.
- The stop icon <img src="/assets/icon-stop.png?raw=true" alt="stop icon" width="20px" style="margin:0 5px;"> will gracefully stop the download process, once the current video has been finished successfully. This will also reset the auto start behavior to avoid confusion.
- [Currently broken] The cancel icon <img src="/assets/icon-close-red.png?raw=true" alt="close icon" width="20px" style="margin:0 5px;"> is equivalent to killing the process and will stop the download immediately. Any leftover files will get deleted; the canceled video will still be available in the download queue.
After downloading, Tube Archivist tries to add new videos to already indexed playlists and if activated on the settings page, get comments for the new videos.
## Add to Download Queue
The **Add to Download Queue** icon <img src="/assets/icon-add.png?raw=true" alt="add icon" width="20px" style="margin:0 5px;"> opens a text field to manually add videos to the download queue. Add one item per line. You have a few options:
The **Add to Download Queue** icon <img src="/assets/icon-add.png?raw=true" alt="add icon" width="20px" style="margin:0 5px;"> opens a text field to manually add videos to the download queue. Add one item per line. The *Add to queue* button will add the videos as regular items to the queue; you'll be able to ignore undesired videos before starting the download. If you add them with *Download Now*, the download will start automatically with priority.
You have a few options:
### Videos
- Add a YouTube video ID, for example `2tdiKTSdE9Y`
- Add a link to a YouTube video, for example `https://www.youtube.com/watch?v=2tdiKTSdE9Y`
- Add a link to a YouTube video by providing the shortened URL, for example `https://youtu.be/2tdiKTSdE9Y`
- Add a link to a shorts video, for example `https://www.youtube.com/shorts/UOfe6e0k7cQ`
Add a [video](urls.md#video) URL to download a single video.
### Channels
- When adding a channel, Tube Archivist will ignore the channel page size as described above, this is meant for an initial download of the whole channel. You can still ignore selected videos from the queue before starting the download.
- Download a complete channel including shorts and streams by entering:
- Channel ID: `UCBa659QWEk1AI4Tg--mrJ2A`
- Channel URL: `https://www.youtube.com/channel/UCBa659QWEk1AI4Tg--mrJ2A`
- Channel `@` alias handle: For example `@TomScottGo`
- Channel alias URL: `https://www.youtube.com/@TomScottGo`
- Download videos, live streams or shorts only, by providing a partial channel URL:
- Videos only: `https://www.youtube.com/@IBRACORP/videos`
- Shorts only: `https://www.youtube.com/@IBRACORP/shorts`
- Streams only: `https://www.youtube.com/@IBRACORP/streams`
- Every other channel sub page will default to download all, for example `https://www.youtube.com/@IBRACORP/featured` will download videos and shorts and streams.
Add a [channel](urls.md#channel) to download the complete channel, or a [channel sub page](urls.md#channel-sub-pages) to download a partial channel.
### Playlist
- Add a playlist ID or URL to add every available video in the list to the download queue, for example `https://www.youtube.com/playlist?list=PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha` or `PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha`.
- When adding a playlist to the queue, this playlist will automatically get [indexed](playlists.md#playlist-detail).
- When you add a link to a video in a playlist, Tube Archivist assumes you want to download only the specific video and not the whole playlist, for example `https://www.youtube.com/watch?v=CINVwWHlzTY&list=PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha` will only add one video, `CINVwWHlzTY`, to the queue.
Add a [playlist](urls.md#playlist) to download all videos in the list. When adding a playlist to the queue, this playlist will automatically get [indexed](playlists.md#playlist-detail).
## The Download Queue
Below the three buttons you find the download queue. New items will get added at the bottom of the queue, the next video to download once you click on **Start Download** will be the first in the list.
@ -56,10 +45,12 @@ Every video in the download queue has two buttons:
- **Ignore**: This will remove that video from the download queue and this video will not get added again, even when you **Rescan Subscriptions**.
- **Download now**: This will give priority to this video. If the download process is already running, the prioritized video will get downloaded as soon as the current video is finished. If there is no download process running, this will start downloading this single video and stop after that.
Failed videos will show an error message describing what went wrong and give you additional options for how to continue. Usually this means the video in the queue is no longer available on YouTube. Tube Archivist will not retry downloading a failed video.
You can flip the view by activating **Show Only Ignored Videos**. This will show all videos you have previously *ignored*.
Every video in the ignored list has two buttons:
- **Forget**: This will delete the item from the ignored list.
- **Add to Queue**: This will add the ignored video back to the download queue.
You can delete your download queue from the [Settings](settings.md#actions) page.
You can delete your download queue from the [Settings](settings/actions.md) page.


@ -1,3 +1,7 @@
---
description: Frequently asked questions about what this project is, what it tries and what it doesn't try to do.
---
# Frequently Asked Questions
## What is the scope of this project?
@ -25,7 +29,8 @@ Although there are similarities between these excellent projects and Tube Archiv
Part of the scope is to be its own media server, to be able to overcome these limitations, so that's where the focus and effort of this project is. That being said, the nature of self hosted and open source software gives you all the possible freedom to use your media as you wish.
- **Jellyfin**: There is a proof of concept script for linking these two APIs together and to populate metadata from Tube Archivist to Jellyfin: [tubearchivist/jellyfin](https://github.com/tubearchivist/jellyfin). Please contribute to improve this integration.
- **Jellyfin**: There is an API to API integration available to sync metadata from Tube Archivist to Jellyfin: [tubearchivist/tubearchivist-jf](https://github.com/tubearchivist/tubearchivist-jf). Follow the instructions there. Please contribute to improve this integration.
- **Plex**: There is a Plex Scanner and Agent combination that allows integration between Tube Archivist and Plex: [tubearchivist/tubearchivist-plex](https://github.com/tubearchivist/tubearchivist-plex). Follow the instructions there. Please contribute to improve this integration.
## How do I install this natively?
This project is a classical Docker application: There are multiple moving parts that need to be able to interact with each other and need to be compatible with multiple architectures and operating systems. Additionally Docker also drastically reduces development complexity which is highly appreciated.
@ -43,9 +48,9 @@ That might be an unconventional choice at first glance. Tube Archivist is built
That comes at a price: ES can use a lot of memory, particularly on a big index, and will heavily use in memory cached queries to be able to respond within milliseconds, even when searching through multiple GBs of raw text.
## Why does subscribing to a channel not download the complete channel?
For Tube Archivist, these are two different things: To download a complete channel, add it to the [download queue](downloads/#add-to-download-queue) with the form or with [Tube Archivist Companion](https://github.com/tubearchivist/browser-extension), the browser extension. This is meant for a complete archival.
For Tube Archivist, these are two different things: To download a complete channel, add it to the [download queue](downloads.md#add-to-download-queue) with the form or with [Tube Archivist Companion](https://github.com/tubearchivist/browser-extension), the browser extension. This is meant for a complete archival.
Subscribing to a channel is for downloading new videos as they come out. That is designed to be as quick as possible, to allow you to efficiently rescan your favourite channels frequently. This will add videos to your download queue based on your [channel page size](settings/#subscriptions).
Subscribing to a channel is for downloading new videos as they come out. That is designed to be as quick as possible, to allow you to efficiently rescan your favourite channels frequently. This will add videos to your download queue based on your [channel page size](settings/application.md#subscriptions).
If you want to archive the complete channel **and** any future videos, you can do both.
@ -55,3 +60,12 @@ Using a Proxy/VPN can be advantages for heavy users of this project. Some users
This project doesn't make any recommendations: Some people prefer to convert their home router to a VPN client, some have a home firewall capable of routing traffic, some prefer to set up their host network as a client and others prefer to use a networking container to tunnel container traffic through. Some prefer one of the many proxy protocols, others use various OpenVPN configurations, others use WireGuard.
There are too many variations of that problem to be implemented in this project, use any of the various solutions out there that fits your needs.
## Why is there no flexible naming structure?
Unlike other similar projects, Tube Archivist needs to keep track of its media files indefinitely while everything can change: channel names, aliases and titles regularly change over time. Previous attempts failed at handling that properly and the metadata refresh task kept failing because of it.
This project tries to be compatible with as many filesystem/OS variations out there as possible. Using channel names and titles, which can contain any Unicode character, to build file paths is a flawed and highly error prone approach; there is always a filesystem/OS out there that proves to be incompatible with how something is named.
That's why this project has landed on `<channel-id>/<video-id>.mp4`. These values are guaranteed to be static, are guaranteed to be compatible with every filesystem out there and make things predictable where all files will go on every instance of Tube Archivist indefinitely.
For browsing these files you have the fancy interface provided by this project, or use a supported integration as stated above. If you really want to, you could also create your own file naming structure with the API and symlinks, but that is not part of the scope of this project.


@ -1,12 +1,16 @@
---
description: Home of the documentation, additional installation instructions and user guide. Recommended reading for all interested in the project.
---
# Tube Archivist
Welcome to the official Tube Archivist Docs. This is an up-to-date documentation of user functionality.
## Getting Started
1. [Subscribe](channels#channels-overview) to some of your favourite YouTube channels.
2. [Scan](downloads#rescan-subscriptions) subscriptions to add the latest videos to the download queue.
3. [Add](downloads#add-to-download-queue) additional videos, channels or playlist - ignore the ones you don't want to download.
4. [Download](downloads#download-queue) and let **Tube Archivist** do it's thing.
1. [Subscribe](channels.md#channels-overview) to some of your favourite YouTube channels.
2. [Scan](downloads.md#rescan-subscriptions) subscriptions to add the latest videos to the download queue.
3. [Add](downloads.md#add-to-download-queue) additional videos, channels or playlists - ignore the ones you don't want to download.
4. [Download](downloads.md#download-queue) and let **Tube Archivist** do its thing.
5. Sit back and enjoy your archived and indexed collection!
## General Navigation
@ -25,6 +29,7 @@ You can control the video player with the following keyboard shortcuts:
- `?`: Show help
- `m`: toggle mute
- `f`: toggle fullscreen
- `c`: toggle subtitles if available
- `>`: increase playback speed
- `<`: decrease playback speed


@ -21,10 +21,10 @@ The main Python application that displays and serves your video collection, buil
- Set the environment variable `TA_HOST` to match the system running Tube Archivist. This can be a domain like *example.com*, a subdomain like *ta.example.com* or an IP address like *192.168.1.20*. If you are running Tube Archivist behind an SSL reverse proxy, specify the protocol. You can add multiple hostnames separated by spaces. Any wrong configuration here will result in a `Bad Request (400)` response.
- Change the environment variables `TA_USERNAME` and `TA_PASSWORD` to create the initial credentials.
- `ELASTIC_PASSWORD` is for the password for Elasticsearch. The environment variable `ELASTIC_USER` is optional, should you want to change the username from the default *elastic*.
- Optionally set `ES_SNAPSHOT_DIR` to change the folder where ES is storing its snapshots. When changing that, make sure you have persistence. That is an absolute path from inside the ES container.
- Set `ES_DISABLE_VERIFY_SSL`, a boolean value, to disable SSL verification for connections to ES, e.g. for self-signed certificates.
- For the scheduler to know what time it is, set your timezone with the `TZ` environment variable, defaults to *UTC*.
- Serves the interface on port `8000`
- Needs a volume for the video archive at `/youtube`
- Set the environment variable `ENABLE_CAST=True` to send videos to your cast device, [read more](#enable-cast).
- Set the environment variable `ENABLE_CAST=True` to send videos to your cast device, [read more](../configuration/cast.md).
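As an illustration of the variables described above, a minimal environment could look like the following (all values are placeholders; `verysecret` mirrors the example credentials used elsewhere in these docs):
```
TA_HOST=tube.example.com
TA_USERNAME=tubearchivist
TA_PASSWORD=verysecret
ELASTIC_PASSWORD=verysecret
TZ=America/New_York
```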
## Configuring TubeArchivist


@ -1,4 +1,4 @@
!!! note
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing-and-updating). If you see any issues here while using these instructions, please contribute.
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing). If you see any issues here while using these instructions, please contribute.
There is a Helm Chart available at [https://github.com/insuusvenerati/helm-charts](https://github.com/insuusvenerati/helm-charts). Mostly self-explanatory but feel free to ask questions in the discord / subreddit.


@ -1,9 +1,9 @@
!!! note
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing-and-updating). If you see any issues here while using these instructions, please contribute.
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing). If you see any issues here while using these instructions, please contribute.
Podman handles container hostname resolution slightly differently than Docker, so you need to make a few changes to the `docker-compose.yml` to get up and running.
### Follow the installation instructions from the [README](https://github.com/tubearchivist/tubearchivist#installing-and-updating), with a few additional changes to the `docker-compose.yml`.
### Follow the installation instructions from the [README](https://github.com/tubearchivist/tubearchivist#installing), with a few additional changes to the `docker-compose.yml`.
Edit these additional changes to the `docker-compose.yml`:


@ -1,8 +1,8 @@
!!! note
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing-and-updating). If you see any issues here while using these instructions, please contribute.
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing). If you see any issues here while using these instructions, please contribute.
There are several different methods to install TubeArchivist on Synology platforms. This will focus on the available `docker` package and `docker-compose` implementations.
There are several different methods to install TubeArchivist on Synology platforms. This will focus on the available `docker` package implementation.<!-- and `docker-compose` implementations. -->
### Prepare Directories/Folders
Before we set up TubeArchivist, we need to set up the directories/folders. You are assumed to be logged into the Synology NAS.
@ -57,7 +57,9 @@ Once all of the folders have been created, it should have a folder structure wit
![Synology - Docker Folder Structure](../assets/Synology_0.2.0_Docker-Folder-Structure.png)
#### 8. Change Permissions - CLI Required
> If you do not have SSH access enabled for CLI, [enable it](https://kb.synology.com/en-sg/DSM/tutorial/How_to_login_to_DSM_with_root_permission_via_SSH_Telnet) before continuing.
!!! note
If you do not have SSH access enabled for CLI, [enable it](https://kb.synology.com/en-sg/DSM/tutorial/How_to_login_to_DSM_with_root_permission_via_SSH_Telnet) before continuing.
1. Open the SSH connection to the Synology. Login as your primary `Admin` user, or the user that was enabled for SSH access.
2. Elevate your access to `root`. Steps are provided [here](https://kb.synology.com/en-sg/DSM/tutorial/How_to_login_to_DSM_with_root_permission_via_SSH_Telnet).
3. Change directories to the **Volume** where the "Docker" folder resides.
@ -69,10 +71,10 @@ Once all of the folders have been created, it should have a folder structure wit
6. Change the owner of the "redis" folder. *If correct, this does not have an output.*
</br>Example: `chown 999:100 redis`
7. Change the owner of the "es" folder. *If correct, this does not have an output.*
</br>Example: `chown 1000:1000 es`
</br>Example: `chown 1000:0 es`
8. Confirm that the folders have the correct permissions.
</br>Example: `ls -hl`
![Synology - Docker Folder Permissions Command](../assets/Synology_0.2.0_Docker-Folder-Permissions-Commands.png)
![Synology - Docker Folder Permissions Command](../assets/Synology_0.3.6_Docker-Folder-Permissions-Commands.png)
9. Logout from root.
</br>Example: `logout`
10. Disconnect from the SSH connection.
@ -96,12 +98,14 @@ Once all of the folders have been created, it should have a folder structure wit
2. After `Docker` is installed, open the `Docker` utility.
3. Go to the `Registry` tab.
4. Search for the following `images` and download them. Follow the recommended versions for each of the images.
- `redislabs/rejson`
![Synology - Redis Image Search](../assets/Synology_0.2.0_Docker-Redis-Search.png)
- `redis/redis-stack-server`
![Synology - Redis Image Search](../assets/Synology_0.3.6_Docker-Redis-Search.png)
- `bbilly1/tubearchivist-es`
![Synology - ElasticSearch Image Search](../assets/Synology_0.2.0_Docker-ES-Search.png)
- `bbilly1/tubearchivist`
![Synology - TubeArchivist Image Search](../assets/Synology_0.2.0_Docker-TA-Search.png)
!!! note
Upgrades in Synology require use of the `latest` tag.
#### 3. Configure ElasticSearch
@ -162,12 +166,12 @@ Once all of the folders have been created, it should have a folder structure wit
9. In the **Port Settings** tab, replace the "Auto" entry under **Local Port** with the port that will be used to connect to TubeArchivist (default is 8000).
10. In the **Links** tab, select the "tubearchivist-es" container from the **Container Name** dropdown and provide it the same alias, "tubearchivist-es".
11. In the **Links** tab, select the "tubearchivist-redis" container from the **Container Name** dropdown and provide it the same alias, "tubearchivist-redis".
12. In the **Environment** tab, add in the following TubeArchivist specific environment variables that may apply. **Change the variables as-is appropriate to your use case. Follow the [README section](https://github.com/tubearchivist/tubearchivist#tube-archivist) for details on what to set each variable.**
12. In the **Environment** tab, add in the following TubeArchivist specific environment variables that may apply. **Change the variables as is appropriate to your use case. Follow the [README section](https://github.com/tubearchivist/tubearchivist#installing) for details on what to set each variable.**
- `TA_HOST=synology.local`
- `ES_URL=http://tubearchivist-es:9200`
- `REDIS_HOST=tubearchivist-redis`
- `HOST_UID=1000`
- `HOST_GID=1000`
- `HOST_GID=0`
- `TA_USERNAME=tubearchivist`
- `TA_PASSWORD=verysecret`
- `ELASTIC_PASSWORD=verysecret`
@ -176,6 +180,7 @@ Once all of the folders have been created, it should have a folder structure wit
- Do not use the default password as it is very insecure.
- Ensure that ELASTIC_PASSWORD matches the password used on the tubearchivist-es container.
![Synology - TubeArchivist Environment Configurations](../assets/Synology_0.2.0_Docker-TA-Env-Conf.png)
13. Click on the **Apply** button.
14. Back on the **Create Container** screen, click the **Next** button.
15. Review the settings to confirm, then click the **Apply** button.
@ -190,4 +195,24 @@ Once all of the folders have been created, it should have a folder structure wit
**From there, you should be able to start up your containers and you're good to go!**
If you're still having trouble, join us on [discord](https://www.tubearchivist.com/discord) and come to the #support channel.
### Synology Docker Upgrade
When a new version of the image is available, you can follow these steps to upgrade your previous instance more easily.
!!! note "If you did not use the `latest` tag, you may have some variances in your upgrade steps. Those are detailed below these instructions."
1. Go to the Registry Tab and download the newest instance of the `:latest` tag, as seen in the Installation Instructions earlier.
2. Go to Image Tab and confirm that you have the newer version available.
3. Stop the running `tubearchivist` container.
4. Click on the **Action🔽** button and choose "Reset".
5. This will load the newer image we downloaded earlier. This should not delete any files if all of your volumes were set up correctly.
6. If it doesn't start automatically, start the `tubearchivist` container. Monitor the upgrade in the logs and confirm that the service starts up successfully.
7. Once you are able to login successfully to the web page for TubeArchivist, you have successfully upgraded your container!
!!! note "If you did not use the `latest` tag for the `tubearchivist` container, then you will instead do the following:"
1. Shut down the old container.
2. Download the new image.
3. Follow the Installation instructions again *for just the TubeArchivist image*, using the same configurations as the existing container. It'll have to be named slightly differently.
4. Once the new image is running and the upgrade of the backend files has completed, shut down the new container. Rename or delete the old container, then rename the new container to the intended name.
!!! note "Links are incredibly important if you upgrade or change the ES or Redis container images. You will either need to remove the links, create the new containers, then re-add the links or rebuild all of the images with the same instructions as Installation, starting at Step 3."
If you're still having trouble, join us on [discord](https://www.tubearchivist.com/discord) and come to the #support channel.


@ -1,10 +1,10 @@
!!! note
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing-and-updating). If you see any issues here while using these instructions, please contribute.
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing). If you see any issues here while using these instructions, please contribute.
Truenas Scale can be a bit confusing, with its k3s kubernetes implementation.
However, there is a step-by-step guide available for its users here:
https://heavysetup.info/applications/tube-archivist/dataset/
[heavysetup.info](https://heavysetup.info/applications/tube-archivist/dataset/)
- Ensure you are navigating the columns under `Tube Archivist` on the left hand side of the screen


@ -1,5 +1,5 @@
!!! note
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing-and-updating). If you see any issues here while using these instructions, please contribute.
These are beginner's guides/installation instructions for additional platforms generously provided by users of these platforms. When in doubt, verify the details with the [project README](https://github.com/tubearchivist/tubearchivist#installing). If you see any issues here while using these instructions, please contribute.
Tube Archivist and all of its dependencies are located in the [community applications](https://unraid.net/community/apps?q=tubearchivist) store. The three containers you will need are as follows:
@ -12,7 +12,7 @@ Tube Archivist, and all if it's dependencies are located in the [community appli
![TubeArchivist-RedisJSON](../assets/unraid_redis_install.png)
This is the easiest container to set up of the three; just make sure that you do not have any port conflicts, and that your `/data` is mounted to the correct path. The other containers will map to the same root directory (/mnt/user/appdata/TubeArchivist).
If you need to install `TubeArchivist-RedisJSON`on a different port, you'll have to follow [these steps](https://github.com/tubearchivist/tubearchivist#redis-on-a-custom-port) later on when installing the `TubeArchivist` container.
If you need to install `TubeArchivist-RedisJSON` on a different port, you'll have to follow [these steps](docker-compose.md#redis-on-a-custom-port) later on when installing the `TubeArchivist` container.
Make sure to start the Redis and Elasticsearch containers approximately one minute before starting `TubeArchivist`.
@ -60,5 +60,7 @@ It's finally time to set up TubeArchivist!
**From there, you should be able to start up your containers and you're good to go!**
If you run into permission errors, try `newperms /mnt/user/appdata/TubeArchivist/` to reset the permissions to the root of your TubeArchivist appdata folder.
If you're still having trouble, join us on [discord](https://www.tubearchivist.com/discord) and come to the [#support channel.](https://discord.com/channels/920056098122248193/1006394050217246772)


@ -1,19 +1,21 @@
---
description: Subscribe to playlists, browse your playlists and access additional metadata.
---
# Playlist Pages
The playlists are organized in two different levels, similar to the [channels](channels.md):
## Playlist Overview
Accessible at `/playlist/` of your Tube Archivist, this **Overview Page** shows a list of all playlists you have indexed over all your channels.
- You can filter that list to show only subscribed playlists with the toggle.
You can index playlists of a channel from the channel detail page as described [here](channels.md#channel-detail).
The **Subscribe to Playlist** button <img src="/assets/icon-add.png?raw=true" alt="add icon" width="20px" style="margin:0 5px;"> opens a text field to subscribe to playlists. You have a few options:
- Enter the YouTube playlist id, for example:
- `PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha`
- Enter the Youtube dedicated playlist url, for example:
- `https://www.youtube.com/playlist?list=PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha`
- If you want to subscribe to more than one playlist directly, you can add one playlist per line in the text field
- Enter a [playlist](urls.md#playlist).
- Add one per line.
!!! note
It doesn't make sense to subscribe to a playlist if you are already subscribed to the corresponding channel, as this will slow down the **Rescan Subscriptions** [task](downloads.md#rescan-subscriptions).
@ -24,7 +26,7 @@ You can search your indexed playlists by clicking on the search icon <img src="/
Each playlist will get a dedicated playlist detail page accessible at `/playlist/<playlist-id>/` of your Tube Archivist. This page shows all the videos you have downloaded from this playlist.
- If you are subscribed to the playlist, an Unsubscribe button will show, else the Subscribe button will show.
- The **Mark as Watched** button will mark all videos of this playlist as watched.
- The **Mark as Watched** and **Mark as Unwatched** buttons will mark all videos of this playlist as watched/unwatched.
- The button **Reindex** will reindex the playlist metadata.
- The button **Reindex Videos** will reindex all videos from this playlist.
- The **Delete Playlist** button will give you the option to delete just the *metadata* which won't delete any media files or *delete all* which will delete metadata plus all videos belonging to this playlist.


@ -1,3 +1,7 @@
---
description: Unified search page to query your index.
---
# Search Page
Accessible at `/search/` of your **Tube Archivist**, search your archive for Videos, Channels and Playlists - or even full text search throughout your indexed subtitles.
@ -32,7 +36,7 @@ Start your query with the **primary keyword** `video:` to search for videos only
- Note the omitted term after the primary key, this will show all videos from the channel *Tom Scott* that are no longer active on YouTube.
## Channel
Start with the `channel:` **primary keyword** to search for channels matching your query. This will search through the *channel name* and *channel description* fields. Narrow your search down with secondary keywords:
Start with the `channel:` **primary keyword** to search for channels matching your query. This will search through the *channel name*, *channel description* and *channel tags* fields. Narrow your search down with secondary keywords:
- `subscribed:` is a boolean value, search for channels that you are subscribed to or not.
- `active:` is a boolean value, to search for channels that are still active on YouTube or that are no longer active.
@ -55,7 +59,7 @@ Start your query with the **primary keyword** `playlist:` to search for playlist
- `playlist:html css active:yes`: Search for playlists containing *HTML CSS* that are still active on YouTube.
## Full
Start a full text search by beginning your query with the **primary keyword** `full:`. This will search through your indexed Subtitles showing segments with possible matches. This will only show any results if you have activated *subtitle download and index* on the settings page. The operator for full text searches is `or` meaning when searching for multiple words not all words need to match, but additional words will change the ranking of the result, the more words match and the better they match, the higher ranked the result. The matching words will get highlighted in the text preview.
Start a full text search by beginning your query with the **primary keyword** `full:`. This will search through your indexed subtitles, showing segments with possible matches. This will only show results if you have activated *subtitle download and index* on the settings page. The operator for full text searches is `or`, meaning that when searching for multiple words not all words need to match; additional words will change the ranking of the result: the more words match and the better they match, the higher the result is ranked. The matching words will get highlighted in the text preview and you will see a score indicating how well your term matches.
Clicking the play button on the thumbnail will open the inplace player at the timestamp from where the segment starts. Same when clicking the video title, this will open the video page and put the player at the segment timestamp. This will overwrite any previous playback position.


@ -1,199 +0,0 @@
# Settings Page
Accessible at `/settings/` of your **Tube Archivist**, this page holds all the configurations and additional functionality related to the database.
Click on **Update Settings** at the bottom of the form to apply your configurations.
## Color scheme
Switch between the easy on the eyes dark theme and the burning bright theme.
## Archive View
- **Page Size**: Defines how many results get displayed on a given page. Same value goes for all archive views.
## Subscriptions
Settings related to the channel management. Disable shorts or streams by setting their page size to 0 (zero).
- **Channel Page Size**: Defines how many pages will get analyzed by **Tube Archivist** each time you click on *Rescan Subscriptions*. The default page size used by yt-dlp is **50**, that's also the recommended value to set here. Any value higher will slow down the rescan process, for example if you set the value to 51, that means yt-dlp will have to go through 2 pages of results instead of 1 and by that doubling the time that process takes.
- **Live Page Size**: Same as above, but for channel live streams.
- **Shorts page Size**: Same as above, but for shorts videos.
## Downloads
Settings related to the download process.
- **Download Limit**: Stop the download process after downloading the set quantity of videos.
- **Download Speed Limit**: Set your download speed limit in KB/s. This will pass the option `--limit-rate` to yt-dlp.
- **Throttled Rate Limit**: Restart download if the download speed drops below this value in KB/s. This will pass the option `--throttled-rate` to yt-dlp. Using this option might have a negative effect if you have an unstable or slow internet connection.
- **Sleep Interval**: Time in seconds to sleep between requests to YouTube. It's a good idea to set this to **3** seconds. Might be necessary to avoid throttling.
- **Auto Delete Watched Videos**: Automatically delete videos marked as watched after selected days. If activated, checks your videos after download task is finished.
## Download Format
Additional settings passed to yt-dlp.
- **Format**: This controls which streams get downloaded and is equivalent to passing `--format` to yt-dlp. Use one of the recommended one or look at the documentation of [yt-dlp](https://github.com/yt-dlp/yt-dlp#format-selection). Please note: The option `--merge-output-format mp4` is automatically passed to yt-dlp to guarantee browser compatibility. Similar to that, `--check-formats` is passed as well to check that the selected formats are actually downloadable.
- **Embed Metadata**: This saves the available tags directly into the media file by passing `--embed-metadata` to yt-dlp.
- **Embed Thumbnail**: This will save the thumbnail into the media file by passing `--embed-thumbnail` to yt-dlp.
## Subtitles
- **Download Setting**: Select the subtitle language you like to download. Add a comma separated list for multiple languages.
- **Source Settings**: User created subtitles are provided from the uploader and are usually the video script. Auto generated is from YouTube, quality varies, particularly for auto translated tracks.
- **Index Settings**: Enabling subtitle indexing will add the lines to Elasticsearch and will make subtitles searchable. This will increase the index size and is not recommended on low-end hardware.
## Comments
- **Download and index comments**: Set your configuration for downloading and indexing comments. This takes the same values as documented in the `max_comments` section for the youtube extractor of [yt-dlp](https://github.com/yt-dlp/yt-dlp#youtube). Add without space between the four different fields: *max-comments,max-parents,max-replies,max-replies-per-thread*. Example:
- `all,100,all,30`: Get 100 max-parents and 30 max-replies-per-thread.
- `1000,all,all,50`: Get a total of 1000 comments over all, 50 replies per thread.
- **Comment sort method**: Change sort method between *top* or *new*. The default is *top*, as decided by YouTube.
- The [Refresh Metadata](#refresh-metadata) background task will get comments from your already archived videos, spreading the requests out over time.
Archiving comments is slow as only very few comments get returned per request with yt-dlp. Choose your configuration above wisely. Tube Archivist will download comments after the download queue finishes, your videos will be already available while the comments are getting downloaded.
## Cookie
Importing your YouTube Cookie into Tube Archivist allows yt-dlp to bypass age restrictions, gives access to private videos and your *watch later* or *liked videos*.
### Security concerns
Cookies are used to store your session and contain the access token to your Google account; this information can be used to take over your account. Treat that data with utmost care as you would any other password or credential. *Tube Archivist* stores your cookie in Redis and will automatically append it to yt-dlp for every request.
### Auto import
The easiest way to import your cookie is to use the **Tube Archivist Companion** [browser extension](https://github.com/tubearchivist/browser-extension) for Firefox and Chrome.
### Manual import
Alternatively, you can manually import your cookie into Tube Archivist. Export your cookie as a *Netscape* formatted text file, name it *cookies.google.txt* and put it into the *cache/import* folder. After that you can enable the option on the settings page and your cookie file will get imported.
- There are various tools out there that allow you to export cookies from your browser. This project doesn't make any specific recommendations.
- Once imported, a **Validate Cookie File** button will show, where you can confirm if your cookie is working or not.
### Use your cookie
Once imported, in addition to the advantages above, your [Watch Later](https://www.youtube.com/playlist?list=WL) and [Liked Videos](https://www.youtube.com/playlist?list=LL) become regular playlists you can download and subscribe to like any other [playlist](playlists.md).
### Limitation
There is only one cookie per Tube Archivist instance; it is shared between all users.
## Integrations
All third party integrations of TubeArchivist will **always** be *opt in*.
- **API**: Your access token for the Tube Archivist API.
- **returnyoutubedislike.com**: This will get dislikes and average ratings for each video by integrating with the API from [returnyoutubedislike.com](https://www.returnyoutubedislike.com/).
- **SponsorBlock**: Using [SponsorBlock](https://sponsor.ajay.app/) to get and skip sponsored content. If a video doesn't have timestamps, or has unlocked timestamps, use the browser addon to contribute to this excellent project. This can also be activated and deactivated per channel via a [channel overwrite](Settings#channel-customize).
## Snapshots
!!! note
This will make a snapshot of your metadata index only, no media files or additional configuration variables you have set on the settings page will be backed up.
System snapshots will automatically make daily snapshots of the Elasticsearch index. The task will start at 12pm your local time. Snapshots are deduplicated, meaning that each snapshot will only have to back up changes since the last snapshot. Old snapshots will automatically get deleted after 30 days.
- **Create snapshot now**: Will start the snapshot process now, outside of the regular daily schedule.
- **Restore**: Restore your index to that point in time.
# Scheduler Setup
Schedule settings expect a cron-like format, where the first value is the minute, the second is the hour and the third is the day of the week. Day 0 is Sunday, day 1 is Monday etc.
Examples:
- `0 15 *`: Run task every day at 15:00 in the afternoon.
- `30 8 */2`: Run task every second day of the week (Sun, Tue, Thu, Sat) at 08:30 in the morning.
- `0 */3,8-17 *`: Execute every hour divisible by 3, and every hour during office hours (8 in the morning - 5 in the afternoon).
- `0 8,16 *`: Execute every day at 8 in the morning and at 4 in the afternoon.
- `auto`: Sensible default.
- `0`: (zero), deactivate that task.
!!! note "BE AWARE"
- Changes in the scheduler settings require a container restart to take effect.
- Cron expressions in the form *number*/*number* are non-standard cron and are not supported by the scheduler, for example `0 0/12 *` is invalid, use `0 */12 *` instead.
- Avoid an unnecessarily frequent schedule to not get blocked by YouTube. For that reason, the scheduler doesn't support schedules that trigger more than once per hour.
## Rescan Subscriptions
This is the equivalent of the task run from the downloads page: it looks through your channel and playlist subscriptions and adds missing videos to the download queue.
Become a sponsor and join [members.tubearchivist.com](https://members.tubearchivist.com/) to get access to *real time* notifications for new videos uploaded by your favorite channels.
## Start download
Start downloading all videos currently in the download queue.
## Refresh Metadata
Rescan videos, channels and playlists on YouTube and update metadata periodically. This will also refresh your subtitles and comments based on your current settings. If an item is no longer available on YouTube, this will deactivate it and exclude it from future refreshes. This task is meant to be run once per day; set your schedule accordingly.
The field **Refresh older than x days** takes the number of days after which TubeArchivist will consider an item *outdated*. This value is used to calculate how many items need to be refreshed today based on the total indexed, which spreads the requests to YouTube out over time. A sensible value here is **90** days.
In addition to the outdated documents, this will also refresh very recently published videos. This keeps metadata and statistics up to date during the first few days after a video goes live.
## Thumbnail check
This will check if all expected thumbnails are there and will delete any artwork without a matching video.
## ZIP file index backup
Create a zip file of the metadata and select **Max auto backups to keep** to automatically delete old backups created from this task. For data consistency, make sure there aren't any other tasks running that will change the index during the backup process. This is very slow, particularly for large archives. Use snapshots instead.
# Actions
## Delete download queue
The button **Delete all queued** will delete all pending videos from the download queue. The button **Delete all ignored** will delete all videos you have previously ignored.
## Manual Media Files Import
!!! note
This is inherently error prone, as there are many variables, some outside of the control of this project. Read this carefully and use at your own risk.
Add the files you'd like to import to the */cache/import* folder. Only add files, don't add subdirectories. All files you are adding need to have the same *base name* as the media file. Then start the process from the settings page under *Manual Media Files Import*.
Valid media extensions are *.mp4*, *.mkv* or *.webm*. If you have other file extensions or incompatible codecs, convert them first to mp4. **Tube Archivist** can identify the videos with one of the following methods.
### Method 1:
Add a matching *.info.json* file with the media file. Both files need to have the same base name, for example:
- For the media file: `<base-name>.mp4`
- For the JSON file: `<base-name>.info.json`
The import process then looks for the 'id' key within the JSON file to identify the video.
### Method 2:
Detect the YouTube ID from the filename; this accepts the default yt-dlp naming convention for file names like:
- `<base-name>[<youtube-id>].mp4`
- The YouTube ID in square brackets at the end of the filename is the crucial part.
### Offline import:
If the video you are trying to import is not available on YouTube any more, **Tube Archivist** can import the required metadata:
- The file `<base-name>.info.json` is required to extract the required information.
- Add the thumbnail as `<base-name>.<ext>`, where valid file extensions are *.jpg*, *.png* or *.webp*. If there is no thumbnail file, **Tube Archivist** will try to extract the embedded cover from the media file or will fallback to a default thumbnail.
- Add subtitles as `<base-name>.<lang>.vtt`, where *lang* is the two letter ISO language code. This will archive all subtitle files you add to the import folder, independent of your configuration. Subtitles can be archived and used in the player, but they can't be indexed or made searchable because their structure differs from the subtitles **Tube Archivist** needs for indexing.
- For videos where the whole channel is no longer available, you can add the `<channel-id>.info.json` file as generated by *youtube-dl/yt-dlp* to get the full channel metadata. Otherwise **Tube Archivist** will extract as much info as possible from the video's info.json file.
### Some notes:
- This will **consume** the files you put into the import folder: Files will get converted to mp4 if needed (this might take a long time...) and moved to the archive; *.json* files will get deleted upon completion to avoid having duplicates on the next run.
- For best file transcoding quality, convert your media files with desired settings first before importing.
- Maybe start with a subset of your files to import to make sure everything goes well...
- A notification box will show with progress, follow the docker logs to monitor for errors.
## Embed thumbnails into media file
This will write or overwrite all thumbnails in the media file using the downloaded thumbnail. This is only necessary if you didn't download the files with the option *Embed Thumbnail* enabled or you want to make sure all media files get the newest thumbnail.
## ZIP file index backup
This will backup your metadata into a zip file. The file will get stored at *cache/backup* and will contain the necessary files to restore the Elasticsearch index formatted **nd-json** files. For data consistency, make sure there aren't any other tasks running that will change the index during the backup process. This is very slow, particularly for large archives.
!!! note "BE AWARE"
This will **not** backup any media files, just the metadata from the Elasticsearch.
## Restore From Backup
The restore functionality will expect the same zip file in *cache/backup* as created from the **Backup database** function. This will recreate the index from the zip archive file. There will be a list of all available backups to choose from. The *source* tag can have these different values:
- **manual**: For backups manually created from here on the settings page.
- **auto**: For backups automatically created via a scheduled task.
- **update**: For backups created after a Tube Archivist update due to changes in the index.
- **False**: Undefined.
!!! note "BE AWARE"
This will **replace** your current index with the one from the backup file. This won't restore any media files.
## Rescan Filesystem
This function will go through all your media files and look at the whole index to find any issues:
- Should the filename not match the indexed media URL, this will rename the video files correctly and update the index with the new link.
- When you delete media files from the filesystem outside of the Tube Archivist interface, this will delete leftover metadata from the index.
- When you have media files that are not indexed yet, this will grab the metadata from YouTube as if it were a newly downloaded video. This can be useful when restoring from an older backup file with missing metadata but already downloaded media files. NOTE: This only works if the media files are named in the same convention as Tube Archivist uses, particularly the YouTube ID needs to be at the same position in the filename; alternatively see above for *Manual Media Files Import*.
- The task will stop when adding a video fails, for example if the video is no longer available on YouTube.
- This will also check all of your thumbnails and download any that are missing.
!!! note "BE AWARE"
There is no undo.

View File

@ -0,0 +1,78 @@
---
description: Administration tasks for the application.
---
# Actions Page
Accessible at `/settings/actions/` of your **Tube Archivist**, this page allows admins to perform actions related to the database and other functions.
## Delete download queue
The button **Delete all queued** will delete all pending videos from the download queue. The button **Delete all ignored** will delete all videos you have previously ignored.
## Manual Media Files Import
!!! note
This is inherently error prone, as there are many variables, some outside of the control of this project. Read this carefully and use at your own risk.
Add the files you'd like to import to the */cache/import* folder. Only add files, don't add subdirectories. All files you are adding need to have the same *base name* as the media file. Then start the process from the settings page under *Manual Media Files Import*.
Valid media extensions are *.mp4*, *.mkv* or *.webm*. If you have other file extensions or incompatible codecs, convert them first to mp4. **Tube Archivist** can identify the videos with one of the following methods.
### Method 1:
Add a matching *.info.json* file with the media file. Both files need to have the same base name, for example:
- For the media file: `<base-name>.mp4`
- For the JSON file: `<base-name>.info.json`
The import process then looks for the 'id' key within the JSON file to identify the video.
### Method 2:
Detect the YouTube ID from the filename; this accepts the default yt-dlp naming convention for file names like:
- `<base-name>[<youtube-id>].mp4`
- The YouTube ID in square brackets at the end of the filename is the crucial part.
### Offline import:
If the video you are trying to import is not available on YouTube any more, **Tube Archivist** can import the required metadata:
- The file `<base-name>.info.json` is required to extract the required information.
- Add the thumbnail as `<base-name>.<ext>`, where valid file extensions are *.jpg*, *.png* or *.webp*. If there is no thumbnail file, **Tube Archivist** will try to extract the embedded cover from the media file or will fallback to a default thumbnail.
- Add subtitles as `<base-name>.<lang>.vtt`, where *lang* is the two letter ISO language code. This will archive all subtitle files you add to the import folder, independent of your configuration. Subtitles can be archived and used in the player, but they can't be indexed or made searchable because their structure differs from the subtitles **Tube Archivist** needs for indexing.
- For videos where the whole channel is no longer available, you can add the `<channel-id>.info.json` file as generated by *youtube-dl/yt-dlp* to get the full channel metadata. Otherwise **Tube Archivist** will extract as much info as possible from the video's info.json file. A sketch of a possible import folder setup follows below.
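As an illustration only, an offline import of a single video could be prepared like this; the base name `Old Talk` and the cache path are assumptions, adjust them to your files and your volume mapping:

```bash
# copy one media file plus its sidecar files into the import folder
cp "Old Talk.mkv"       /path/to/ta/cache/import/   # the media file itself
cp "Old Talk.info.json" /path/to/ta/cache/import/   # metadata, required for offline import
cp "Old Talk.jpg"       /path/to/ta/cache/import/   # optional thumbnail
cp "Old Talk.en.vtt"    /path/to/ta/cache/import/   # optional English subtitles
```

After copying, start the task from the settings page as described above.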
### Some notes:
- This will **consume** the files you put into the import folder: Files will get converted to mp4 if needed (this might take a long time...) and moved to the archive; *.json* files will get deleted upon completion to avoid having duplicates on the next run.
- For best file transcoding quality, convert your media files with desired settings first before importing.
- Maybe start with a subset of your files to import to make sure everything goes well...
- A notification box will show with progress, follow the docker logs to monitor for errors.
## Embed thumbnails into media file
This will write or overwrite all thumbnails in the media file using the downloaded thumbnail. This is only necessary if you didn't download the files with the option *Embed Thumbnail* enabled or you want to make sure all media files get the newest thumbnail.
## ZIP file index backup
This will backup your metadata into a zip file. The file will get stored at *cache/backup* and will contain the necessary files to restore the Elasticsearch index formatted **nd-json** files. For data consistency, make sure there aren't any other tasks running that will change the index during the backup process. This is very slow, particularly for large archives.
!!! note "BE AWARE"
This will **not** backup any media files, just the metadata from the Elasticsearch.
## Restore From Backup
The restore functionality will expect the same zip file in *cache/backup* as created from the **Backup database** function. This will recreate the index from the zip archive file. There will be a list of all available backups to choose from. The *source* tag can have these different values:
- **manual**: For backups manually created from here on the settings page.
- **auto**: For backups automatically created via a scheduled task.
- **update**: For backups created after a Tube Archivist update due to changes in the index.
- **False**: Undefined.
!!! note "BE AWARE"
This will **replace** your current index with the one from the backup file. This won't restore any media files.
## Rescan Filesystem
This function will go through all your media files and look at the whole index to find any issues:
- Should the filename not match the indexed media URL, this will rename the video files correctly and update the index with the new link.
- When you delete media files from the filesystem outside of the Tube Archivist interface, this will delete leftover metadata from the index.
- When you have media files that are not indexed yet, this will grab the metadata from YouTube as if it were a newly downloaded video. This can be useful when restoring from an older backup file with missing metadata but already downloaded media files. NOTE: This only works if the media files are named in the same convention as Tube Archivist uses, particularly the YouTube ID needs to be at the same position in the filename; alternatively see above for *Manual Media Files Import*.
- The task will stop when adding a video fails, for example if the video is no longer available on YouTube.
- This will also check all of your thumbnails and download any that are missing.
!!! note "BE AWARE"
There is no undo.

View File

@ -0,0 +1,86 @@
---
description: Configure this application.
---
# Application Settings Page
Accessible at `/settings/application/` of your **Tube Archivist**, this page holds all of the general application configuration (minus configuration of the [scheduler](scheduling.md)).
Click on **Update Application Configurations** at the bottom of the page to apply your configurations.
## Subscriptions
Settings related to channel management. Disable shorts or streams by setting their page size to 0 (zero).
- **Channel Page Size**: Defines how many pages will get analyzed by **Tube Archivist** each time you click on *Rescan Subscriptions*. The default page size used by yt-dlp is **50**, which is also the recommended value to set here. Any higher value will slow down the rescan process: for example, a value of 51 means yt-dlp has to go through 2 pages of results instead of 1, doubling the time the process takes.
- **Live Page Size**: Same as above, but for channel live streams.
- **Shorts Page Size**: Same as above, but for shorts videos.
- **Auto Start**: This will prioritize and automatically start downloading videos from your subscriptions over regular videos added to the download queue.
## Downloads
Settings related to the download process.
- **Download Speed Limit**: Set your download speed limit in KB/s. This will pass the option `--limit-rate` to yt-dlp.
- **Throttled Rate Limit**: Restart the download if the download speed drops below this value in KB/s. This will pass the option `--throttled-rate` to yt-dlp; a sketch of both rate options follows after this list. Using this option might have a negative effect if you have an unstable or slow internet connection.
- **Sleep Interval**: Time in seconds to sleep between requests to YouTube. It's a good idea to set this to **3** seconds; this might be necessary to avoid throttling.
- **Auto Delete Watched Videos**: Automatically delete videos marked as watched after selected days. If activated, checks your videos after download task is finished.
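For orientation, setting **Download Speed Limit** to 1000 and **Throttled Rate Limit** to 100 corresponds roughly to the yt-dlp flags below. This is a hedged sketch with made-up values, not the exact command Tube Archivist assembles internally:

```bash
# illustrative values only: limit downloads to 1000 KB/s, restart below 100 KB/s
yt-dlp --limit-rate 1000K --throttled-rate 100K "https://www.youtube.com/watch?v=2tdiKTSdE9Y"
```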
## Download Format
Additional settings passed to yt-dlp.
- **Format**: This controls which streams get downloaded and is equivalent to passing `--format` to yt-dlp. Use one of the recommended values or look at the documentation of [yt-dlp](https://github.com/yt-dlp/yt-dlp#format-selection); an illustrative example follows after this list. Please note: The option `--merge-output-format mp4` is automatically passed to yt-dlp to guarantee browser compatibility. Similar to that, `--check-formats` is passed as well to check that the selected formats are actually downloadable.
- **Format Sort**: This allows you to change how yt-dlp sorts formats by passing `--format-sort` to yt-dlp. Refer to the [documentation](https://github.com/yt-dlp/yt-dlp#sorting-formats) for what you can pass here. Be aware that some codecs might not be compatible with your browser of choice.
- **Extractor Language**: Some channels provide translated video titles and descriptions. Add the two letter ISO language code to set your preferred default language. This will only have an effect if the uploader adds translations. Not all language codes are supported, see the [documentation](https://github.com/yt-dlp/yt-dlp#youtube) (the `lang` section) for more details.
- **Embed Metadata**: This saves the available tags directly into the media file by passing `--embed-metadata` to yt-dlp.
- **Embed Thumbnail**: This will save the thumbnail into the media file by passing `--embed-thumbnail` to yt-dlp.
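To make the format selection more concrete, here is one possible value preferring h264 video up to 1080p with AAC audio. This is only an illustration of the syntax, not a recommendation by the project; the equivalent standalone yt-dlp call is shown for context:

```bash
# hedged example of a format string, roughly "mp4 compatible streams up to 1080p"
yt-dlp --format "bestvideo[height<=1080][vcodec^=avc1]+bestaudio[acodec^=mp4a]/mp4" \
    --merge-output-format mp4 --check-formats \
    "https://www.youtube.com/watch?v=2tdiKTSdE9Y"
```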
## Subtitles
- **Download Setting**: Select the subtitle language you'd like to download. Add a comma separated list for multiple languages. For Chinese you must specify `zh-Hans` or `zh-Hant`; specifying `zh` is invalid and the subtitles won't download successfully.
- **Source Settings**: User created subtitles are provided from the uploader and are usually the video script. Auto generated is from YouTube, quality varies, particularly for auto translated tracks.
- **Index Settings**: Enabling subtitle indexing will add the lines to Elasticsearch and will make subtitles searchable. This will increase the index size and is not recommended on low-end hardware.
## Comments
- **Download and index comments**: Set your configuration for downloading and indexing comments. This takes the same values as documented in the `max_comments` section for the youtube extractor of [yt-dlp](https://github.com/yt-dlp/yt-dlp#youtube). Add the four fields without spaces in between: *max-comments,max-parents,max-replies,max-replies-per-thread*. Example:
    - `all,100,all,30`: Get 100 max-parents and 30 max-replies-per-thread.
    - `1000,all,all,50`: Get a total of 1000 comments overall, 50 replies per thread.
- **Comment sort method**: Change sort method between *top* or *new*. The default is *top*, as decided by YouTube.
- The [Refresh Metadata](scheduling.md#refresh-metadata) background task will get comments from your already archived videos, spreading the requests out over time.
Archiving comments is slow as only very few comments get returned per request with yt-dlp. Choose your configuration above wisely. Tube Archivist will download comments after the download queue finishes, so your videos will already be available while the comments are still being downloaded. A hedged yt-dlp sketch of the comment limits follows below.
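For context, the same comment limits handed to yt-dlp directly would look roughly like this; Tube Archivist passes the setting internally, so you never need to run this yourself:

```bash
# hedged example: max-comments,max-parents,max-replies,max-replies-per-thread
yt-dlp --write-comments --skip-download \
    --extractor-args "youtube:max_comments=all,100,all,30" \
    "https://www.youtube.com/watch?v=2tdiKTSdE9Y"
```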
## Cookie
Importing your YouTube Cookie into Tube Archivist allows yt-dlp to bypass age restrictions, gives access to private videos and your *watch later* or *liked videos*.
### Security concerns
Cookies are used to store your session and contain the access token to your Google account; this information can be used to take over your account. Treat that data with utmost care as you would any other password or credential. *Tube Archivist* stores your cookie in Redis and will automatically append it to yt-dlp for every request.
### Auto import
The easiest way to import your cookie is to use the **Tube Archivist Companion** [browser extension](https://github.com/tubearchivist/browser-extension) for Firefox and Chrome.
### Manual import
Alternatively, you can manually import your cookie into Tube Archivist. Export your cookie as a *Netscape* formatted text file, name it *cookies.google.txt* and put it into the *cache/import* folder. After that you can enable the option on the settings page and your cookie file will get imported. A minimal sketch of this follows after the points below.
- There are various tools out there that allow you to export cookies from your browser. This project doesn't make any specific recommendations.
- Once imported, a **Validate Cookie File** button will show, where you can confirm if your cookie is working or not.
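A minimal sketch of the manual import, assuming you already exported a Netscape formatted cookie file and your cache volume is mounted at `/path/to/ta/cache` (both the download location and the cache path are placeholders):

```bash
# copy the exported cookie into the import folder under the expected name
cp ~/Downloads/youtube-cookies.txt /path/to/ta/cache/import/cookies.google.txt
```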
### Use your cookie
Once imported, in addition to the advantages above, your [Watch Later](https://www.youtube.com/playlist?list=WL) and [Liked Videos](https://www.youtube.com/playlist?list=LL) become regular playlists you can download and subscribe to like any other [playlist](../playlists.md).
### Limitation
There is only one cookie per Tube Archivist instance; it is shared between all users.
## Integrations
All third party integrations of TubeArchivist will **always** be *opt in*.
- **API**: Your access token for the Tube Archivist API.
- **returnyoutubedislike.com**: This will get dislikes and average ratings for each video by integrating with the API from [returnyoutubedislike.com](https://www.returnyoutubedislike.com/).
- **SponsorBlock**: Using [SponsorBlock](https://sponsor.ajay.app/) to get and skip sponsored content. If a video doesn't have timestamps, or has unlocked timestamps, use the browser addon to contribute to this excellent project. This can also be activated and deactivated per channel via a [channel overwrite](../channels.md#about).
## Snapshots
!!! note
This will make a snapshot of your metadata index only, no media files or additional configuration variables you have set on the settings page will be backed up.
System snapshots will automatically make daily snapshots of the Elasticsearch index. The task will start at 12pm your local time. Snapshots are deduplicated, meaning that each snapshot will only have to back up changes since the last snapshot. Old snapshots will automatically get deleted after 30 days. A hedged example for listing your snapshots directly in Elasticsearch is shown further down.
- **Create snapshot now**: Will start the snapshot process now, outside of the regular daily schedule.
- **Restore**: Restore your index to that point in time.
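If you want to double check on the Elasticsearch side that snapshots are being created, something like the call below can help. Treat it as a sketch: it assumes you run it inside the Tube Archivist container, where `$ES_URL` and `$ELASTIC_PASSWORD` are set, and that your Elasticsearch version supports the `_all` repository wildcard:

```bash
# hedged example: list existing snapshots across all repositories
curl -u elastic:$ELASTIC_PASSWORD "$ES_URL/_cat/snapshots/_all?v"
```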

View File

@ -0,0 +1,6 @@
---
description: Overview and statistics about the application.
---
# Dashboard
Accessible at `/settings/` of your **Tube Archivist**, this page shows the status and various statistics related to your library.

View File

@ -0,0 +1,60 @@
---
description: Configure the scheduler.
---
# Scheduling Settings Page
Accessible at `/settings/scheduling/` of your **Tube Archivist**, this page holds all the configuration for scheduled tasks.
Click on **Update Scheduler Settings** at the bottom of the page to apply your configurations.
## Configuring Schedules
Schedule settings expect a cron-like format, where the first value is the minute, the second is the hour and the third is the day of the week. Day 0 is Sunday, day 1 is Monday etc.
Examples:
- `0 15 *`: Run task every day at 15:00 in the afternoon.
- `30 8 */2`: Run task every second day of the week (Sun, Tue, Thu, Sat) at 08:30 in the morning.
- `0 */3,8-17 *`: Execute every hour divisible by 3, and every hour during office hours (8 in the morning - 5 in the afternoon).
- `0 8,16 *`: Execute every day at 8 in the morning and at 4 in the afternoon.
- `auto`: Sensible default.
- `0`: (zero), deactivate that task.
!!! note "BE AWARE"
- Changes in the scheduler settings require a container restart to take effect, see the example after this note.
- Cron expressions in the form *number*/*number* are non-standard cron and are not supported by the scheduler, for example `0 0/12 *` is invalid, use `0 */12 *` instead.
- Avoid an unnecessarily frequent schedule to not get blocked by YouTube. For that reason, the scheduler doesn't support schedules that trigger more than once per hour.
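For example, with a docker compose setup the restart could look like the line below; the service name `tubearchivist` is an assumption, adjust it to your compose file:

```bash
# hedged example, restart only the application container
docker compose restart tubearchivist
```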
## Notifications
Some of the tasks support sending notifications at task completion with a short summary message. Tasks can get started through the scheduler or manually from the interface. This uses the amazing [Apprise](https://github.com/caronc/apprise) framework; refer to the wiki for the [basics](https://github.com/caronc/apprise/wiki/URLBasics) of how to build links and to the list of [supported services](https://github.com/caronc/apprise/wiki#notification-services) for the details.
Send yourself a test notification to verify your link works, e.g.:
```bash
docker exec -it tubearchivist apprise -b "Hello from TA" <link>
```
Notes:
- This will only send notifications when a task returns anything, e.g. if a [Rescan Subscriptions](#rescan-subscriptions) task doesn't find any new videos to add, no notification will get sent.
- Due to the fact that apprise is running inside a docker container, [desktop notifications](https://github.com/caronc/apprise/wiki#desktop-notification-services) will not work.
- Add one notification link per line; the sketch below shows what such links can look like.
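For illustration, two hypothetical Apprise links as they could be tested before adding them to the settings field; the Discord webhook ID/token and the mail credentials are placeholders:

```bash
# hedged examples, replace the placeholders with your own service details
docker exec -it tubearchivist apprise -b "Hello from TA" "discord://webhook_id/webhook_token"
docker exec -it tubearchivist apprise -b "Hello from TA" "mailto://user:password@gmail.com"
```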
## Rescan Subscriptions
This is the equivalent of the task run from the downloads page: it looks through your channel and playlist subscriptions and adds missing videos to the download queue.
Become a sponsor and join [members.tubearchivist.com](https://members.tubearchivist.com/) to get access to *real time* notifications for new videos uploaded by your favorite channels.
## Start download
Start downloading all videos currently in the download queue.
## Refresh Metadata
Rescan videos, channels and playlists on YouTube and update metadata periodically. This will also refresh your subtitles and comments based on your current settings. If an item is no longer available on YouTube, this will deactivate it and exclude it from future refreshes. This task is meant to be run once per day; set your schedule accordingly.
The field **Refresh older than x days** takes the number of days after which TubeArchivist will consider an item *outdated*. This value is used to calculate how many items need to be refreshed today based on the total indexed, which spreads the requests to YouTube out over time. A sensible value here is **90** days.
In addition to the outdated documents, this will also refresh very recently published videos. This keeps metadata and statistics up to date during the first few days after a video goes live.
## Thumbnail check
This will check if all expected thumbnails are there and will delete any artwork without a matching video.
## ZIP file index backup
Create a zip file of the metadata and select **Max auto backups to keep** to automatically delete old backups created from this task. For data consistency, make sure there aren't any other tasks running that will change the index during the backup process. This is very slow, particularly for large archives. Use [snapshots](application.md#snapshots) instead.

View File

@ -0,0 +1,14 @@
---
description: Configure the user settings of the application.
---
# User Settings Page
Accessible at `/settings/user/` of your **Tube Archivist**, this page holds all the settings that control the look and feel of the application.
Click on **Update User Configurations** at the bottom of the page to apply your configurations.
## Color scheme
Switch between the easy on the eyes dark theme and the burning bright theme.
## Archive View
- **Page Size**: Defines how many results get displayed on a given page. Same value goes for all archive views.

View File

@ -40,6 +40,10 @@ footer {
border-color: var(--accent-font-light);
}
.highlight .nv {
color: var(--highlight-error-light);
}
[data-md-color-scheme="tubearchivist"] {
--md-default-bg-color: var(--main-bg);
--md-default-fg-color: var(--main-font);
@ -60,6 +64,7 @@ footer {
--md-code-hl-string-color: var(--accent-font-dark);
--md-code-hl-number-color: var(--highlight-error-light);
--md-code-hl-operator-color: var(--highlight-error);
--md-code-nv-color: var(--highlight-error);
}
:root {

43
mkdocs/docs/urls.md Normal file
View File

@ -0,0 +1,43 @@
---
description: How URLs from YouTube get parsed
---
# URLs
This document describes how Tube Archivist identifies and treats links from YouTube.
!!! note
Application logic of Tube Archivist is tied only to the IDs, not the names.
## Video
A video ID is **11** characters long, e.g. `2tdiKTSdE9Y`.
URLs can have several forms; a sketch for extracting the ID follows after this list:
- Watch URL: Regular URLs you will see while browsing YouTube, with the path */watch* and a *v* parameter, e.g. `https://www.youtube.com/watch?v=2tdiKTSdE9Y`
- Share URL: Link you will get when you click on *share* on a video, e.g. `https://youtu.be/2tdiKTSdE9Y`
- Shorts URL: e.g. `https://www.youtube.com/shorts/U80grnZJm_8`
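As a rough illustration only (this is not the parser Tube Archivist actually uses), the 11 character ID sits at the end of each of these URL forms and could be pulled out like this:

```bash
# hedged sketch: extract the trailing 11 character video ID from common URL forms
for url in \
    "https://www.youtube.com/watch?v=2tdiKTSdE9Y" \
    "https://youtu.be/2tdiKTSdE9Y" \
    "https://www.youtube.com/shorts/U80grnZJm_8"; do
    echo "$url" | grep -oE '[0-9A-Za-z_-]{11}$'
done
```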
## Channel
A channel ID is **24** characters long, e.g. `UCBa659QWEk1AI4Tg--mrJ2A`.
Channel URLs can have these forms; all will get translated to the ID:
- ID URL: With a *channel* path, e.g. `https://www.youtube.com/channel/UCBa659QWEk1AI4Tg--mrJ2A`
- Channel Handle: Starting with an `@`, this handle is personal and unique, e.g. `@TomScottGo`
- Alias URL: Based off the channel handle, e.g. `https://www.youtube.com/@TomScottGo`
### Channel sub pages
Tube Archivist can distinguish between different sub pages:
- Videos only: `https://www.youtube.com/@IBRACORP/videos`
- Shorts only: `https://www.youtube.com/@IBRACORP/shorts`
- Streams only: `https://www.youtube.com/@IBRACORP/streams`
- Every other channel sub page will default to downloading everything, for example `https://www.youtube.com/@IBRACORP/featured` will download videos, shorts and streams.
## Playlist
A playlist ID can be `34`, `26` or `18` characters long, e.g. `PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha`.
- Playlist URLs start with a *playlist* path and have a *list* parameter, e.g. `https://www.youtube.com/playlist?list=PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha`
### Playlist vs Video URLs
While browsing YouTube videos in playlists, you might encounter URLs looking like this: `https://www.youtube.com/watch?v=QPZ0pIK_wsc&list=PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha`. As established above, based on the */watch* path and the *v* parameter, Tube Archivist will treat this as a video with the ID `QPZ0pIK_wsc` and **not** as a playlist. If you mean the playlist, grab the correct ID from the *list* parameter, e.g. `PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha`, as sketched below.
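A minimal sketch for grabbing the playlist ID out of such a combined URL; again this is just an illustration, not Tube Archivist's internal logic:

```bash
# hedged sketch: extract the value of the list parameter
url="https://www.youtube.com/watch?v=QPZ0pIK_wsc&list=PL96C35uN7xGLLeET0dOWaKHkAlPsrkcha"
echo "$url" | grep -oE 'list=[^&]+' | cut -d= -f2
```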

View File

@ -1,3 +1,7 @@
---
description: Create users, reset passwords, access the admin interface.
---
# User Management
For now, **Tube Archivist** is a single user application. You can create multiple users with different names and passwords; they will share the same videos and permissions, but some interface configurations are on a per-user basis. *More is on the roadmap*.

View File

@ -1,3 +1,7 @@
---
description: Complete Video metadata with playlist navigation and comments.
---
# Video Page
Every video downloaded gets a dedicated page accessible at `/video/<video-id>/` of your Tube Archivist. Throughout the interface, click on a video title to access the video page.
@ -9,6 +13,8 @@ Clicking on the channel name or the channel icon will bring you to the dedicated
If available, a tag cloud will show, representing the tags set by the uploader.
There you can also find stream metadata like file size, video codecs, video bitrate and resolution, audio codecs and bitrate.
The video description is truncated to the first few lines, click on *show more* to expand the whole description.
## Playlist

View File

@ -4,23 +4,31 @@ nav:
- Home: 'index.md'
- User Guide:
- 'FAQ': 'faq.md'
- 'URLs': 'urls.md'
- 'Downloads Page': 'downloads.md'
- 'Channels Pages': 'channels.md'
- 'Video': 'video.md'
- 'Playlists Pages': 'playlists.md'
- 'Search': 'search.md'
- 'Users': 'users.md'
- 'Settings': 'settings.md'
- Settings:
- 'Dashboard': 'settings/dashboard.md'
- 'User': 'settings/user.md'
- 'Application': 'settings/application.md'
- 'Scheduling': 'settings/scheduling.md'
- 'Actions': 'settings/actions.md'
- 'Advanced': 'advanced.md'
- Installation:
- 'Docker-Compose (default)': 'installation/docker-compose.md'
- 'Docker-Compose (default)': 'installation/docker-compose.md'
- 'Unraid': 'installation/unraid.md'
- 'Synology': 'installation/synology.md'
- 'Podman': 'installation/podman.md'
- 'Truenas Scale': 'installation/truenas-scale.md'
- 'Helm Charts': 'installation/helm-charts.md'
- Configuration:
- 'LDAP Authentication': 'configuration/ldap.md'
- 'Cast Support': 'configuration/cast.md'
- Configuration:
- 'LDAP Authentication': 'configuration/ldap.md'
- 'Forward Authentication': 'configuration/forward-auth.md'
- 'Cast Support': 'configuration/cast.md'
- API:
- 'Introduction': 'api/introduction.md'
- 'Video': 'api/video.md'
@ -29,6 +37,7 @@ nav:
- 'Download': 'api/download.md'
- 'Snapshot': 'api/snapshot.md'
- 'Task': 'api/task.md'
- 'Stats': 'api/stats.md'
- 'Additional': 'api/additional.md'
- Links:
- 'Main site': https://www.tubearchivist.com
@ -52,6 +61,7 @@ theme:
scheme: tubearchivist
features:
- navigation.footer
- content.code.copy
extra_css:
- stylesheets/style.css
extra:
@ -59,9 +69,8 @@ extra:
provider: custom
plugins:
- social:
cards_color:
fill: "#00202f"
text: "#eeeeee"
cards_font: Sen-Bold
cards_layout_options:
background_color: "#00202f"
color: "#eeeeee"
- search:
lang: en

View File

@ -1 +1 @@
<script async defer data-website-id="ce3dd392-5518-416e-8a6d-0ebe6baa0238" src="https://stats.tubearchivist.com/umami.js"></script>
<script async defer data-website-id="ce3dd392-5518-416e-8a6d-0ebe6baa0238" src="https://stats.tubearchivist.com/script.js"></script>

View File

@ -1,4 +1,4 @@
cairosvg==2.7.0
mkdocs==1.4.2
mkdocs-material==9.1.5
pillow==9.5.0
cairosvg==2.7.1
mkdocs==1.5.3
mkdocs-material==9.4.1
pillow==10.0.1