diff --git a/NEWS.md b/NEWS.md index dcd1c6fc..9d6c2236 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,4 +1,24 @@ -# Old Updates +# NEWS + +This page contains old news about the project. + +### 2025-03-13 + +We have recently added support for Plex webhooks via Tautulli, which you can use if you don't have PlexPass. This should +help close the gap with other media servers. + +### 2025-02-19 + +We have introduced a new experimental feature to allow syncing watch progress for played items. This feature is still in +its early stages and might not work as expected, and there are probably still many bugs that we need to fix. Please report +any issues you might face. + +The feature is disabled by default. To enable it, set the `WS_PROGRESS_THRESHOLD` environment variable to a value +in seconds. The minimum value is `180` seconds, and `0` means it's disabled. We think a reasonable value is +`86400` (about one day) or more. + +We are still not keen on this feature, and it might be removed in future releases if we aren't able to deal with the +issues we are facing. ### 2025-02-11 diff --git a/README.md b/README.md index 1d437ac7..8b8873dd 100644 --- a/README.md +++ b/README.md @@ -9,23 +9,34 @@ out of the box, this tool support `Jellyfin`, `Plex` and `Emby` media servers. # Updates +### 2025-05-05 + +We’ve added a new feature that lets you send requests **sequentially** to the backends instead of using the default +**parallel** mode. This can be especially helpful if you have very large libraries, slow disks, or simply want to avoid +overloading the backends with too many concurrent requests. You can enable it by setting the `WS_HTTP_SYNC_REQUESTS` +environment variable. This mode only applies to the `import`, `export`, and `backup` tasks at the moment. + +Additionally, two command-line flags let you override the mode on the fly: `--sync-requests` and `--async-requests`. + +We’ll be evaluating this feature for a trial period. 
If it proves effective (and the slowdown is acceptable), we may +make **sequential** mode the default in a future release. + +> [!NOTE] +> Because we cache many HTTP requests, comparing timings between sequential and parallel runs of `import` can be +> misleading. To get an accurate benchmark of `--sync-requests`, either start with a fresh setup (new installation) or +> purge your Redis instance before testing. + ### 2025-04-06 -We have recently re-worked how the `backend:create` command works, and we no longer generate random name for invalid backends names or usernames. We do a normalization step to make sure the name is valid. This should help with the confusion of having random names. This means if you re-run the `backend:create` you most likely will get a different name than before. So, we suggest to re-run the command with `--re-create` flag. This flag will delete the current sub-users, and regenerate updated config files. +We have recently re-worked how the `backend:create` command works, and we no longer generate random names for invalid +backend names or usernames. We do a normalization step to make sure the name is valid. This should help with the +confusion of having random names. This means that if you re-run `backend:create`, you will most likely get a different +name than before, so we suggest re-running the command with the `--re-create` flag. This flag will delete the current +sub-users and regenerate updated config files.
- -### 2025-02-19 - -We have introduced new experimental feature to allow syncing watch progress for played items. This feature is still in early stages, and might not work as expected. and there are probably still many bugs that we need to fix. Please report any issues you might face. - -The feature is disabled by default, to enable it you need to run add this environment variable `WS_PROGRESS_THRESHOLD` with seconds as value, the minimum value is `180` seconds. `0` seconds means it's disabled. We think reasonable value is `86400` or more this number is about 1day. - -We are still not keen on this feature, and it might be removed in future releases if we aren't able to deal with the issues we are facing. +We have also added a new guard for the command, so if you already generated your sub-users, re-running the command will +show you a warning message and exit without doing anything. To run the command again, use either the +`--re-create` or `--run` flag. The `--run` flag will run the command without deleting the current sub-users. --- Refer to [NEWS](NEWS.md) for old updates. @@ -42,11 +53,13 @@ Refer to [NEWS](NEWS.md) for old updates. * Sync your watch progress/play state via webhooks or scheduled tasks. * Check if your media backends have stale references to old files. -If you like my work, you might also like my other project [YTPTube](https://github.com/arabcoders/ytptube), which is simple and to the point yt-dlp frontend to help download content from all supported sites by yt-dlp. +If you like my work, you might also like my other project [YTPTube](https://github.com/arabcoders/ytptube), which is a +simple and to-the-point yt-dlp frontend to help download content from all sites supported by yt-dlp. # Install -First, start by creating a directory to store the data, to follow along with this setup, create directory called `data` at your working directory. Then proceed to use your preferred method to install the tool. 
+First, start by creating a directory to store the data. To follow along with this setup, create a directory called `data` +in your working directory. Then proceed to use your preferred method to install the tool. ### Via compose file. @@ -79,26 +92,34 @@ $ docker run -d --rm --user "${UID:-1000}:${GID:-1000}" --name watchstate --rest ``` > [!IMPORTANT] -> It's really important to match the `user:`, `--user` to the owner of the `data` directory, the container is rootless, as such it will crash if it's unable to write to the data directory. -> -> It's really not recommended to run containers as root, but if you fail to run the container you can try setting the `user: "0:0"` or `--user '0:0'` if that works it means you have permissions issues. refer to [FAQ](FAQ.md) to troubleshoot the problem. +> It's really important to match the `user:`/`--user` value to the owner of the `data` directory; the container is rootless, +> and as such it will crash if it's unable to write to the data directory. +> +> It's really not recommended to run containers as root, but if the container fails to run, you can try setting +> `user: "0:0"` or `--user '0:0'`. If that works, it means you have permission issues; refer to [FAQ](FAQ.md) to +> troubleshoot the problem. ### Unraid users -For `Unraid` users You can install the `Community Applications` plugin, and search for **watchstate** it comes preconfigured. Otherwise, to manually install it, you need to add value to the `Extra Parameters` section in advanced tab/view. add the following value `--user 99:100`. - -This has to happen before you start the container, otherwise it will have the old user id, and - you then have to run the following command from terminal `chown -R 99:100 /mnt/user/appdata/watchstate`. +For `Unraid` users, you can install the `Community Applications` plugin and search for **watchstate**; it comes +preconfigured. 
Otherwise, to install it manually, add the value `--user 99:100` to the `Extra Parameters` section in the advanced +tab/view. + +This has to happen before you start the container; otherwise it will keep the old user ID, and +you will then have to run the following command from a terminal: `chown -R 99:100 /mnt/user/appdata/watchstate`. ### Podman instead of docker -To use this container with `podman` set `compose.yaml` `user` to `0:0`. it will appear to be working as root inside the container, but it will be mapped to the user in which the command was run under. +To use this container with `podman`, set `user` in `compose.yaml` to `0:0`. It will appear to run as root inside the +container, but it will be mapped to the user who ran the command. # Management After starting the container, you can access the WebUI by visiting `http://localhost:8080` in your browser. -At the start you won't see anything as the `WebUI` is decoupled from the WatchState and need to be configured to be able to access the API. In the top right corner, you will see a cogwheel icon, click on it and then Configure the connection settings. +At the start you won't see anything, as the `WebUI` is decoupled from WatchState and needs to be configured to be able +to access the API. In the top right corner, you will see a cogwheel icon; click on it, then configure the connection +settings. ![Connection settings](screenshots/api_settings.png) @@ -117,31 +138,40 @@ From the host machine, you can run the following command $ docker exec watchstate console system:apikey ``` -Insert the `API key` into the `API Token` field and make sure to set the `API URL` or click the `current page URL` link. If everything is ok, the reset of the navbar will show up. +Insert the `API key` into the `API Token` field and make sure to set the `API URL`, or click the `current page URL` link. +If everything is ok, the rest of the navbar will show up. 
-To add your backends, please click on the help button in the top right corner, and choose which method you want [one-way](guides/one-way-sync.md) or [two-way](guides/two-way-sync.md) sync. and follow the instructions. +To add your backends, please click on the help button in the top right corner, choose which method you +want, [one-way](guides/one-way-sync.md) or [two-way](guides/two-way-sync.md) sync, and follow the instructions. -### Supported import method +### Supported import methods Currently, the tool supports three methods to import data from backends. -- **Scheduled Tasks**. - - `A scheduled job that pull data from backends on a schedule.` -- **On demand**. - - `Pull data from backends on demand. By running the import task manually.` -- **Webhooks**. - - `Receive events from backends and update the database accordingly.` +- **Scheduled Tasks**. + - `A scheduled job that pulls data from backends on a schedule.` +- **On demand**. + - `Pull data from backends on demand by running the import task manually.` +- **Webhooks**. + - `Receive events from backends and update the database accordingly.` > [!NOTE] -> Even if all your backends support webhooks, you should keep import task enabled. This help keep healthy relationship and pick up any missed events. For more information please check the FAQ related to webhooks limitations. +> Even if all your backends support webhooks, you should keep the import task enabled. This helps keep a healthy relationship +> and picks up any missed events. For more information, please check the [webhook guide](/guides/webhooks.md) to +> understand webhook limitations. # FAQ -Take look at this [frequently asked questions](FAQ.md) page, or the [guides](guides/) for more in-depth guides on how to setup things. +Take a look at this [frequently asked questions](FAQ.md) page, or the [guides](/guides/) for more in-depth guides on how +to configure things. 
# Social channels -If you have short or quick questions, or just want to chat with other users, feel free to join this [discord server](https://discord.gg/haUXHJyj6Y), keep in mind it's solo project, as such it might take me a bit of time to reply to questions, I operate in `UTC+3` timezone. +If you have short or quick questions, or just want to chat with other users, feel free to join +this [discord server](https://discord.gg/haUXHJyj6Y). Keep in mind it's a solo project, so it might take me a bit of +time to reply to questions; I operate in the `UTC+3` timezone. # Donate diff --git a/config/config.php b/config/config.php index f17a4d29..f905430a 100644 --- a/config/config.php +++ b/config/config.php @@ -138,6 +138,7 @@ return (function () { $config['http'] = [ 'default' => [ 'maxRetries' => (int)env('WS_HTTP_MAX_RETRIES', 3), + 'sync_requests' => (bool)env('WS_HTTP_SYNC_REQUESTS', false), 'options' => [ 'headers' => [ 'User-Agent' => ag($config, 'name') . '/' . getAppVersion(), diff --git a/config/env.spec.php b/config/env.spec.php index ddc2ecca..54f6154c 100644 --- a/config/env.spec.php +++ b/config/env.spec.php @@ -229,6 +229,11 @@ return (function () { }, 'mask' => true, ], + [ + 'key' => 'WS_HTTP_SYNC_REQUESTS', + 'description' => 'Send backend requests sequentially instead of in parallel.', + 'type' => 'bool', + ], ]; $validateCronExpression = function (string $value): string { diff --git a/config/services.php b/config/services.php index f171d0c9..a310a7c7 100644 --- a/config/services.php +++ b/config/services.php @@ -44,7 +44,7 @@ use Symfony\Contracts\HttpClient\HttpClientInterface; return (function (): array { return [ iLogger::class => [ - 'class' => fn () => new Logger(name: 'logger', processors: [new LogMessageProcessor()]) + 'class' => fn() => new Logger(name: 'logger', processors: [new LogMessageProcessor()]) ], HttpClientInterface::class => [ @@ -63,6 +63,7 @@ return (function (): array { iLogger::class, ], ], + RetryableHttpClient::class => [ 
'class' => function (HttpClientInterface $client, iLogger $logger): RetryableHttpClient { return new RetryableHttpClient( @@ -76,6 +77,7 @@ return (function (): array { iLogger::class, ], ], + LogSuppressor::class => [ 'class' => function (): LogSuppressor { $suppress = []; @@ -90,11 +92,11 @@ return (function (): array { ], StateInterface::class => [ - 'class' => fn () => new StateEntity([]) + 'class' => fn() => new StateEntity([]) ], QueueRequests::class => [ - 'class' => fn () => new QueueRequests() + 'class' => fn() => new QueueRequests() ], Redis::class => [ @@ -189,16 +191,16 @@ return (function (): array { ], UriInterface::class => [ - 'class' => fn () => new Uri(''), + 'class' => fn() => new Uri(''), 'shared' => false, ], InputInterface::class => [ - 'class' => fn (): InputInterface => new ArgvInput() + 'class' => fn(): InputInterface => new ArgvInput() ], OutputInterface::class => [ - 'class' => fn (): OutputInterface => new ConsoleOutput() + 'class' => fn(): OutputInterface => new ConsoleOutput() ], PDO::class => [ @@ -226,7 +228,7 @@ return (function (): array { ], DBLayer::class => [ - 'class' => fn (PDO $pdo): DBLayer => new DBLayer($pdo), + 'class' => fn(PDO $pdo): DBLayer => new DBLayer($pdo), 'args' => [ PDO::class, ], @@ -290,16 +292,16 @@ return (function (): array { ], iImport::class => [ - 'class' => fn (iImport $mapper): iImport => $mapper, + 'class' => fn(iImport $mapper): iImport => $mapper, 'args' => [MemoryMapper::class], ], EventDispatcherInterface::class => [ - 'class' => fn (): EventDispatcher => new EventDispatcher(), + 'class' => fn(): EventDispatcher => new EventDispatcher(), ], UserContext::class => [ - 'class' => fn (iCache $cache, iImport $mapper, iDB $db): UserContext => new UserContext( + 'class' => fn(iCache $cache, iImport $mapper, iDB $db): UserContext => new UserContext( name: 'main', config: new ConfigFile( file: Config::get('backends_file'), diff --git a/frontend/components/Markdown.vue b/frontend/components/Markdown.vue 
index 88100ac1..69407c0e 100644 --- a/frontend/components/Markdown.vue +++ b/frontend/components/Markdown.vue @@ -1,4 +1,4 @@ -