New install startup fix, #build

Changed:
- Fixed not loading new default configs at expected time
- Better startup error handling
This commit is contained in:
simon 2023-02-18 09:57:11 +07:00
commit 5d8dc76e7a
No known key found for this signature in database
GPG Key ID: 2C15AA5E89985DD4
7 changed files with 18 additions and 11 deletions

View File

@ -66,10 +66,10 @@ There's dedicated user-contributed install steps under [docs/Installation.md](./
For minimal system requirements, the Tube Archivist stack needs around 2GB of available memory for a small testing setup and around 4GB of available memory for a mid to large sized installation. At minimum a dual core CPU with 4 threads; a quad core or better is recommended.
Note for arm64 hosts: The Tube Archivist container is multi arch, so is Elasticsearch. RedisJSON doesn't offer arm builds, but you can use the image `bbilly1/rejson`, an unofficial rebuild for arm64.
This project requires docker. Ensure it is installed and running on your system.
Note for **arm64**: Tube Archivist is a multi arch container, same for Redis. For Elasticsearch, use the official image for arm64 support. Other architectures are not supported.
Save the [docker-compose.yml](./docker-compose.yml) file from this repository somewhere permanent on your system, keeping it named `docker-compose.yml`. You'll need to refer to it whenever starting this application.
Edit the following values from that file:
@ -153,6 +153,8 @@ Wildcards "*" can not be used for the Access-Control-Allow-Origin header. If the
Use `bbilly1/tubearchivist-es` to automatically get the recommended version, or use the official image with the version tag in the docker-compose file.
Use official Elastic Search for **arm64**.
Stores video metadata and makes everything searchable. Also keeps track of the download queue.
- Needs to be accessible over the default port `9200`
- Needs a volume at **/usr/share/elasticsearch/data** to store data
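As a quick way to verify the first requirement, a minimal reachability check against the default port can look like this. This is an illustrative sketch only; the helper name and host URL are assumptions, not part of Tube Archivist:

```python
import json
from urllib.request import urlopen


def es_reachable(host="http://localhost:9200"):
    """Return True if Elasticsearch answers on the given host/port.

    Sketch: queries the cluster root endpoint, which returns a JSON
    document that includes a `cluster_name` field.
    """
    try:
        with urlopen(host, timeout=5) as response:
            return "cluster_name" in json.load(response)
    except (OSError, ValueError):
        # connection refused, timeout, or a non-JSON answer
        return False
```

Running it against a host where nothing is listening returns `False` instead of raising, which makes it safe to call in a startup loop.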

View File

@ -24,7 +24,7 @@ services:
- archivist-es
- archivist-redis
archivist-redis:
image: redislabs/rejson # for arm64 use bbilly1/rejson
image: redis/redis-stack-server
container_name: archivist-redis
restart: unless-stopped
expose:

View File

@ -121,8 +121,8 @@ The field **Refresh older than x days** takes a number where TubeArchivist will
## Thumbnail check
This will check if all expected thumbnails are there and will delete any artwork without a matching video.
## Index backup
Create a zip file of the metadata and select **Max auto backups to keep** to automatically delete old backups created from this task.
## ZIP file index backup
Create a zip file of the metadata and select **Max auto backups to keep** to automatically delete old backups created from this task. For data consistency, make sure no other tasks that modify the index are running during the backup. This is very slow, particularly for large archives; use snapshots instead.
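The retention behavior of **Max auto backups to keep** can be sketched as follows; the function name and the `ta_backup-<timestamp>.zip` naming scheme are assumptions for illustration:

```python
from pathlib import Path


def prune_backups(backup_dir, keep):
    """Keep only the `keep` newest zip backups in backup_dir (sketch).

    Assumes backup file names sort chronologically, e.g. a
    ta_backup-<timestamp>.zip naming scheme.
    """
    backups = sorted(Path(backup_dir).glob("*.zip"))
    to_delete = backups[:-keep] if keep > 0 else backups
    for old in to_delete:
        old.unlink()
```

With `keep=0` everything is deleted; with `keep` larger than the number of existing backups, nothing is touched.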
# Actions
@ -166,8 +166,8 @@ If the video you are trying to import is not available on YouTube any more, **Tu
## Embed thumbnails into media file
This will write or overwrite all thumbnails in the media file using the downloaded thumbnail. This is only necessary if you didn't download the files with the option *Embed Thumbnail* enabled or want to make sure all media files get the newest thumbnail. Follow the docker-compose logs to monitor progress.
## Backup Database
This will back up your metadata into a zip file. The file will get stored at *cache/backup* and will contain the necessary files to restore the Elasticsearch index, formatted as **nd-json** files.
## ZIP file index backup
This will back up your metadata into a zip file. The file will get stored at *cache/backup* and will contain the necessary files to restore the Elasticsearch index, formatted as **nd-json** files. For data consistency, make sure no other tasks that modify the index are running during the backup. This is very slow, particularly for large archives.
BE AWARE: This will **not** back up any media files, just the metadata from Elasticsearch.
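The nd-json layout mentioned above pairs a bulk action line with each document's source, one JSON object per line. A minimal sketch of writing such a backup; the function name, document shape, and in-archive file name are assumptions:

```python
import json
import zipfile


def write_index_backup(docs, index_name, zip_path):
    """Serialize (doc_id, document) pairs as Elasticsearch bulk-style
    nd-json inside a zip archive (illustrative sketch)."""
    lines = []
    for doc_id, doc in docs:
        # action line followed by the document source, one JSON per line
        lines.append(json.dumps({"index": {"_index": index_name, "_id": doc_id}}))
        lines.append(json.dumps(doc))
    ndjson = "\n".join(lines) + "\n"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(f"es_{index_name}.json", ndjson)
```

A file in this shape can later be replayed against the Elasticsearch bulk API to restore the index.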

View File

@ -168,4 +168,4 @@ class Command(BaseCommand):
        message = f" 🗙 {index_name} vid_type update failed"
        self.stdout.write(self.style.ERROR(message))
        self.stdout.write(response)
        CommandError(message)
        raise CommandError(message)

View File

@ -32,7 +32,9 @@ SECRET_KEY = PW_HASH.hexdigest()
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = bool(environ.get("DJANGO_DEBUG"))
ALLOWED_HOSTS = [i.strip() for i in environ.get("TA_HOST").split()]
ALLOWED_HOSTS = []
if environ.get("TA_HOST"):
    ALLOWED_HOSTS = [i.strip() for i in environ.get("TA_HOST").split()]
CSRF_TRUSTED_ORIGINS = []
for host in ALLOWED_HOSTS:
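The reason for the guard above: `environ.get("TA_HOST")` returns `None` when the variable is unset, and calling `.split()` on `None` raises `AttributeError` and crashes Django at startup. A standalone sketch of the guarded parse; the helper name is illustrative, not part of the settings module:

```python
def parse_allowed_hosts(env):
    """Return the host list from a TA_HOST-style variable, or an
    empty list when it is unset, instead of raising AttributeError."""
    raw = env.get("TA_HOST")
    if not raw:
        return []
    return [i.strip() for i in raw.split()]
```

With the guard, a missing `TA_HOST` leaves `ALLOWED_HOSTS` empty rather than aborting the whole process.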

View File

@ -266,6 +266,7 @@ class ScheduleBuilder:
    def build_schedule(self):
        """build schedule dict as expected by app.conf.beat_schedule"""
        AppConfig().load_new_defaults()
        self.config = AppConfig().config
        schedule_dict = {}
        for schedule_item in self.SCHEDULES:
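The `load_new_defaults()` call above makes sure config keys added in a newer release exist before the schedule is built. A merge of that kind can be sketched like this; the function shape is an assumption, not the actual `AppConfig` implementation:

```python
def merge_new_defaults(defaults, stored):
    """Recursively add keys from the shipped defaults that are
    missing in the stored config, keeping existing user values."""
    merged = dict(defaults)
    for key, value in stored.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_new_defaults(merged[key], value)
        else:
            merged[key] = value
    return merged
```

User-set values always win; only keys the stored config has never seen fall back to the new defaults.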

View File

@ -286,8 +286,9 @@
</div>
</div>
<div class="settings-group">
<h2>Index backup</h2>
<h2>ZIP file index backup</h2>
<div class="settings-item">
<p><i>Zip file backups are very slow for large archives and consistency is not guaranteed; use snapshots instead. Make sure no other tasks are running when creating a ZIP file backup.</i></p>
<p>Current index backup schedule: <span class="settings-current">
{% if config.scheduler.run_backup %}
{% for key, value in config.scheduler.run_backup.items %}
@ -332,8 +333,9 @@
</div>
</div>
<div class="settings-group">
<h2>Backup database</h2>
<h2>ZIP file index backup</h2>
<p>Export your database to a zip file stored at <span class="settings-current">cache/backup</span>.</p>
<p><i>Zip file backups are very slow for large archives and consistency is not guaranteed; use snapshots instead. Make sure no other tasks are running when creating a ZIP file backup.</i></p>
<div id="db-backup">
<button onclick="dbBackup()">Start backup</button>
</div>