![Tube Archivist](assets/tube-archivist-banner.jpg?raw=true "Tube Archivist Banner")
<center><h1>Your self-hosted YouTube media server</h1></center>
## Table of contents:
* [Core functionality](#core-functionality)
* [Screenshots](#screenshots)
* [Problem Tube Archivist tries to solve](#problem-tube-archivist-tries-to-solve)
* [Installing and updating](#installing-and-updating)
* [Potential pitfalls](#potential-pitfalls)
* [Getting Started](#getting-started)
* [Import your existing library](#import-your-existing-library)
* [Backup and restore](#backup-and-restore)
* [Roadmap](#roadmap)
* [Known limitations](#known-limitations)
* [Donate](#donate)
------------------------
## Core functionality
* Subscribe to your favorite YouTube channels
* Download videos using **yt-dlp**
* Index and make videos searchable
* Play videos
* Keep track of viewed and unviewed videos
## Screenshots
![home screenshot](assets/tube-archivist-screenshot-home.png?raw=true "Tube Archivist Home")
*Home Page*
![channels screenshot](assets/tube-archivist-screenshot-channels.png?raw=true "Tube Archivist Channels")
*All Channels*
![single channel screenshot](assets/tube-archivist-screenshot-single-channel.png?raw=true "Tube Archivist Single Channel")
*Single Channel*
![video page screenshot](assets/tube-archivist-screenshot-video.png?raw=true "Tube Archivist Video Page")
*Video Page*
![downloads page screenshot](assets/tube-archivist-screenshot-download.png?raw=true "Tube Archivist Downloads Page")
*Downloads Page*
## Problem Tube Archivist tries to solve
Once your YouTube video collection grows, it becomes hard to search and find a specific video. That's where Tube Archivist comes in: by indexing your video collection with metadata from YouTube, you can organize, search and enjoy your archived YouTube videos offline through a convenient web interface.
## Installing and updating
Take a look at the example `docker-compose.yml` file provided. Tube Archivist depends on three main components split up into separate docker containers:
### Tube Archivist
The main Python application that displays and serves your video collection, built with Django.
- Serves the interface on port `8000`
- Needs a mandatory volume for the video archive at **/youtube**
- Another recommended volume to save the cache for thumbnails and artwork at **/cache**.
- The environment variables `ES_URL` and `REDIS_HOST` are needed to tell Tube Archivist where Elasticsearch and Redis are located.
- The environment variables `HOST_UID` and `HOST_GID` allow Tube Archivist to `chown` the video files to the main host system user instead of the container user (see the sketch below).
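As a rough orientation, a Tube Archivist service entry in your `docker-compose.yml` might look like the sketch below. The image tag, the host paths and the Elasticsearch/Redis service names (`archivist-es`, `archivist-redis`) are assumptions here; match everything to the provided example file:
```
services:
  tubearchivist:
    # image tag is an assumption, use the one from the provided example file
    image: bbilly1/tubearchivist:latest
    ports:
      - 8000:8000
    volumes:
      # mandatory video archive
      - /host/path/to/youtube:/youtube
      # recommended cache for thumbnails and artwork
      - /host/path/to/cache:/cache
    environment:
      # point to the Elasticsearch and Redis containers by service name
      - ES_URL=http://archivist-es:9200
      - REDIS_HOST=archivist-redis
      # chown downloaded files to this host user instead of the container user
      - HOST_UID=1000
      - HOST_GID=1000
```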
### Elasticsearch
Stores video metadata and makes everything searchable. Also keeps track of the download queue.
- Needs to be accessible over the default port `9200`
- Needs a volume at **/usr/share/elasticsearch/data** to store data
Follow the [documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) for additional installation details.
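A matching Elasticsearch service, added under the same `services:` key, could be sketched like this; the service name, image version and single-node setting are assumptions, so stick to whatever the provided `docker-compose.yml` pins:
```
services:
  archivist-es:
    # pin the image version to the one used in the provided example file
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    environment:
      # single-node setup as described in the Elasticsearch Docker docs
      - discovery.type=single-node
    expose:
      # only needs to be reachable by the tubearchivist container
      - "9200"
    volumes:
      # persistent storage for the index
      - /host/path/to/es-data:/usr/share/elasticsearch/data
```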
### Redis JSON
Functions as a cache and temporary link between the application and the file system. Used to store and display messages and configuration variables.
- Needs to be accessible over the default port `6379`
- Takes an optional volume at **/data** to make your configuration changes permanent.
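The Redis JSON container could look something like the following sketch; the image tag and host path are assumptions, the provided example file is authoritative:
```
services:
  archivist-redis:
    # Redis with the RedisJSON module; image tag is an assumption
    image: redislabs/rejson:latest
    ports:
      - 6379:6379
    volumes:
      # optional, keeps configuration changes across container restarts
      - /host/path/to/redis:/data
```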
### Redis on a custom port
For some architectures it might be required to run Redis JSON on a nonstandard port. For example, to change the Redis port to **6380**, set the following values, as shown in the sketch below:
- Set the environment variable `REDIS_PORT=6380` for the *tubearchivist* service.
- For the *archivist-redis* service, change the ports to `6380:6380`.
- Additionally set the following value for the *archivist-redis* service: `command: --port 6380 --loadmodule /usr/lib/redis/modules/rejson.so`
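Put together, the relevant parts of the compose file for a Redis port of **6380** could look like this (service names as in the sketches above):
```
services:
  tubearchivist:
    environment:
      # tell Tube Archivist about the nonstandard Redis port
      - REDIS_PORT=6380
  archivist-redis:
    ports:
      - 6380:6380
    # run Redis itself on the new port and load the RedisJSON module
    command: --port 6380 --loadmodule /usr/lib/redis/modules/rejson.so
```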
### Updating Tube Archivist
You will see the current version number of **Tube Archivist** in the footer of the interface so you can compare it with the latest release to make sure you are running the *latest and greatest*.
* There can be breaking changes between updates, particularly as the application grows: new environment variables or settings might be required for you to set in your docker-compose file. Any breaking changes will be noted in the **release notes**.
* All testing and development is done with the Elasticsearch version specified in the provided *docker-compose.yml* file. This will be updated when a new release of Elasticsearch is available. Running an older version of Elasticsearch is most likely not going to cause any issues, but it's still recommended to run the version mentioned there.
## Potential pitfalls
### vm.max_map_count
**Elasticsearch** in Docker requires the kernel setting `vm.max_map_count` on the host machine to be set to at least 262144.
To temporarily set the value, run:
```
sudo sysctl -w vm.max_map_count=262144
```
How to apply the change permanently depends on your host operating system:
- For example, on Ubuntu Server add `vm.max_map_count = 262144` to the file */etc/sysctl.conf*.
- On Arch based systems create a file */etc/sysctl.d/max_map_count.conf* with the content `vm.max_map_count = 262144`.
- On any other platform, check the documentation for how to set kernel parameters.
### Permissions for Elasticsearch
If you see a message similar to `AccessDeniedException[/usr/share/elasticsearch/data/nodes]` when initially starting Elasticsearch, that means the container is not allowed to write files to the volume.
That's most likely the case when you run `docker-compose` as an unprivileged user. To fix the issue, shut down the container and run the following on your host machine:
```
chown 1000:0 /path/to/mount/point
```
This will match the permissions with the **UID** and **GID** of Elasticsearch within the container and should fix the issue.
## Getting Started
1. Go through the **settings** page and look at the available options. In particular, set *Download Format* to your desired video quality before downloading; **Tube Archivist** downloads the best available quality by default.
2. Subscribe to some of your favorite YouTube channels on the **channels** page.
3. On the **downloads** page, click on *Rescan subscriptions* to add videos from the subscribed channels to your Download queue or click on *Add to download queue* to manually add Video IDs, links, channels or playlists.
4. Click on *Download queue* and let Tube Archivist do its thing.
5. Enjoy your archived collection!
## Import your existing library
For now, this requires the videos you are trying to import to still be available on YouTube, so their metadata can be downloaded. Add the files you'd like to import to the */cache/import* folder, then start the process from the settings page via *Manual media files import*. Make sure to follow one of the two methods below.
### Method 1:
Add a matching *.json* file with the media file. Both files need to have the same base name, for example:
- For the media file: \<base-name>.mp4
- For the JSON file: \<base-name>.info.json
- Alternate JSON file: \<base-name>.json
**Tube Archivist** then looks for the 'id' key within the JSON file to identify the video.
### Method 2:
Detect the YouTube ID from the filename; this accepts the default yt-dlp naming convention for filenames like:
- \<base-name>[\<youtube-id>].mp4
- The YouTube ID in square brackets at the end of the filename is the crucial part.
### Some notes:
- This will **consume** the files you put into the import folder: files will get converted to mp4 if needed (this might take a long time...) and moved to the archive, and *.json* files will get deleted upon completion to avoid duplicates on the next run.
- Maybe start with a subset of your files to import to make sure everything goes well...
- Follow the logs to monitor progress and errors: `docker-compose logs -f tubearchivist`.
## Backup and restore
From the settings page you can back up your metadata into a zip file. The file will be stored at *cache/backup* and will contain the **nd-json** files necessary to restore the Elasticsearch index, as well as a complete export of the index in a set of conventional **json** files.
The restore functionality will expect the same zip file in *cache/backup* and will recreate the index from the snapshot.
BE AWARE: This will **replace** your current index with the one from the backup file.
## Roadmap
This should be considered a **minimal viable product**; there is an extensive list of future functions and improvements planned.
### Functionality
- [ ] Access control
- [ ] User roles
- [ ] Delete videos and channels
- [ ] Create playlists
- [ ] Podcast mode to serve channel as mp3
- [ ] Implement [PyFilesystem](https://github.com/PyFilesystem/pyfilesystem2) for flexible video storage
- [ ] Un-ignore videos
- [ ] Add thumbnail embed option
- [X] Dynamic download queue [2021-09-26]
- [X] Backup and restore [2021-09-22]
- [X] Scan your file system to index already downloaded videos [2021-09-14]
### UI
- [ ] Create a GitHub wiki for user documentation
- [ ] Show similar videos on video page
- [ ] Multi language support
- [ ] Grid and list view for both channel and video list pages
- [ ] Show total videos downloaded vs. total videos available per channel
## Known limitations
- Video files created by Tube Archivist need to be in **mp4** format for best browser compatibility.
- Every limitation of **yt-dlp** will also be present in Tube Archivist. If **yt-dlp** can't download or extract a video for any reason, Tube Archivist won't be able to either.
- For now this is meant to be run in a trusted network environment.
## Donate
The best donation to **Tube Archivist** is your time; take a look at the [contribution page](CONTRIBUTING) to get started.
The second best way to support development is to provide caffeinated beverages:
* [Paypal.me](https://paypal.me/bbilly1) for a one time coffee
* [Paypal Subscription](https://www.paypal.com/webapps/billing/plans/subscribe?plan_id=P-03770005GR991451KMFGVPMQ) for a monthly coffee
* [ko-fi.com](https://ko-fi.com/bbilly1) for an alternative platform