v1.7.0 Merge

commit 6d61515b7a

21 changed files with 764 additions and 169 deletions
.gitignore (vendored) — 3 changes

@@ -5,10 +5,11 @@
 .Trashes
 ehthumbs.db
 Thumbs.db
-__pycache__
 GeoLite2-City.mmdb
 GeoLite2-City.tar.gz
 data/varken.ini
 .idea/
 varken-venv/
+venv/
 logs/
+__pycache__
CHANGELOG.md — 21 changes

@@ -1,7 +1,24 @@
 # Change Log

-## [v1.6.8](https://github.com/Boerderij/Varken/tree/v1.6.8) (2019-04-18)
-[Full Changelog](https://github.com/Boerderij/Varken/compare/1.6.7...v1.6.8)
+## [v1.7.0](https://github.com/Boerderij/Varken/tree/v1.7.0) (2019-05-05)
+[Full Changelog](https://github.com/Boerderij/Varken/compare/1.6.8...v1.7.0)
+
+**Implemented enhancements:**
+
+- \[ENHANCEMENT\] Add album and track totals to artist library from Tautulli [\#127](https://github.com/Boerderij/Varken/issues/127)
+- \[Feature Request\] No way to show music album / track count [\#125](https://github.com/Boerderij/Varken/issues/125)
+
+**Fixed bugs:**
+
+- \[BUG\] Invalid retention policy name causing retention policy creation failure [\#129](https://github.com/Boerderij/Varken/issues/129)
+- \[BUG\] Unifi errors on unnamed devices [\#126](https://github.com/Boerderij/Varken/issues/126)
+
+**Merged pull requests:**
+
+- v1.7.0 Merge [\#131](https://github.com/Boerderij/Varken/pull/131) ([DirtyCajunRice](https://github.com/DirtyCajunRice))
+
+## [1.6.8](https://github.com/Boerderij/Varken/tree/1.6.8) (2019-04-19)
+[Full Changelog](https://github.com/Boerderij/Varken/compare/1.6.7...1.6.8)

 **Implemented enhancements:**
@@ -2,7 +2,7 @@ FROM amd64/python:3.7.2-alpine
 LABEL maintainers="dirtycajunrice,samwiseg0"

-ENV DEBUG="False"
+ENV DEBUG="True"

 WORKDIR /app

@@ -12,6 +12,8 @@ COPY /varken /app/varken
 COPY /data /app/data
+COPY /utilities /app/data/utilities

 RUN apk add --no-cache tzdata && \
     python3 -m pip install -r /app/requirements.txt
@@ -2,7 +2,7 @@ FROM arm32v6/python:3.7.2-alpine
 LABEL maintainers="dirtycajunrice,samwiseg0"

-ENV DEBUG="False"
+ENV DEBUG="True"

 WORKDIR /app
@@ -2,7 +2,7 @@ FROM arm64v8/python:3.7.2-alpine
 LABEL maintainers="dirtycajunrice,samwiseg0"

-ENV DEBUG="False"
+ENV DEBUG="True"

 WORKDIR /app
README.md — 20 changes

@@ -1,5 +1,5 @@
 <p align="center">
-<img width="800" src="https://bin.cajun.pro/images/varken_full_banner.png">
+<img width="800" src="https://bin.cajun.pro/images/varken_full_banner.png" alt="Logo Banner">
 </p>

 [](https://jenkins.cajun.pro/job/Varken/job/master/)

@@ -11,19 +11,19 @@
 Dutch for PIG. PIG is an Acronym for Plex/InfluxDB/Grafana

-Varken is a standalone command-line utility to aggregate data
-from the Plex ecosystem into InfluxDB. Examples use Grafana for a
-frontend
+Varken is a standalone application to aggregate data from the Plex
+ecosystem into InfluxDB using Grafana for a frontend

 Requirements:
 * [Python 3.6.7+](https://www.python.org/downloads/release/python-367/)
 * [Python3-pip](https://pip.pypa.io/en/stable/installing/)
 * [InfluxDB](https://www.influxdata.com/)
+* [Grafana](https://grafana.com/)

 <p align="center">
 Example Dashboard

-<img width="800" src="https://i.imgur.com/3hNZTkC.png">
+<img width="800" src="https://i.imgur.com/3hNZTkC.png" alt="dashboard">
 </p>

 Supported Modules:

@@ -33,6 +33,7 @@ Supported Modules:
 * [Tautulli](https://tautulli.com/) - A Python based monitoring and tracking tool for Plex Media Server.
 * [Ombi](https://ombi.io/) - Want a Movie or TV Show on Plex or Emby? Use Ombi!
 * [Unifi](https://unifi-sdn.ubnt.com/) - The Global Leader in Managed Wi-Fi Systems
+* [Lidarr](https://lidarr.audio/) - Looks and smells like Sonarr but made for music.

 Key features:
 * Multiple server support for all modules

@@ -41,15 +42,15 @@ Key features:
 ## Installation Guides
-Varken Installation guides can be found in the [wiki](https://github.com/Boerderij/Varken/wiki/Installation).
+Varken Installation guides can be found in the [wiki](https://wiki.cajun.pro/books/varken/chapter/installation).

 ## Support
-Please read [Asking for Support](https://github.com/Boerderij/Varken/wiki/Asking-for-Support) before seeking support.
+Please read [Asking for Support](https://wiki.cajun.pro/books/varken/chapter/asking-for-support) before seeking support.

 [Click here for quick access to discord support](http://cyborg.decreator.dev/channels/518970285773422592/530424560504537105/). No app or account needed!

 ### InfluxDB
-[InfluxDB Installation Documentation](https://docs.influxdata.com/influxdb/v1.7/introduction/installation/)
+[InfluxDB Installation Documentation](https://wiki.cajun.pro/books/varken/page/influxdb-d1f)

 Influxdb is required but not packaged as part of Varken. Varken will create
 its database on its own. If you choose to give varken user permissions that

@@ -57,5 +58,4 @@ do not include database creation, please ensure you create an influx database
 named `varken`

 ### Grafana
-[Grafana Installation Documentation](http://docs.grafana.org/installation/)
-Official dashboard installation instructions can be found in the [wiki](https://github.com/Boerderij/Varken/wiki/Installation#grafana)
+[Grafana Installation/Dashboard Documentation](https://wiki.cajun.pro/books/varken/page/grafana)
Varken.py — 67 changes

@@ -17,6 +17,7 @@ from varken.unifi import UniFiAPI
 from varken import VERSION, BRANCH
 from varken.sonarr import SonarrAPI
 from varken.radarr import RadarrAPI
+from varken.lidarr import LidarrAPI
 from varken.iniparser import INIParser
 from varken.dbmanager import DBManager
 from varken.helpers import GeoIPHandler

@@ -28,13 +29,9 @@ from varken.varkenlogger import VarkenLogger
 PLATFORM_LINUX_DISTRO = ' '.join(x for x in linux_distribution() if x)


-def thread():
-    while schedule.jobs:
-        job = QUEUE.get()
-        a = job()
-        if a is not None:
-            schedule.clear(a)
-        QUEUE.task_done()
+def thread(job, **kwargs):
+    worker = Thread(target=job, kwargs=dict(**kwargs))
+    worker.start()


 if __name__ == "__main__":

@@ -43,7 +40,8 @@ if __name__ == "__main__":
                             formatter_class=RawTextHelpFormatter)

     parser.add_argument("-d", "--data-folder", help='Define an alternate data folder location')
-    parser.add_argument("-D", "--debug", action='store_true', help='Use to enable DEBUG logging')
+    parser.add_argument("-D", "--debug", action='store_true', help='Use to enable DEBUG logging. (Depreciated)')
+    parser.add_argument("-ND", "--no_debug", action='store_true', help='Use to disable DEBUG logging')

     opts = parser.parse_args()

@@ -72,10 +70,15 @@ if __name__ == "__main__":
     enable_opts = ['True', 'true', 'yes']
     debug_opts = ['debug', 'Debug', 'DEBUG']

-    if not opts.debug:
+    opts.debug = True
+
+    if getenv('DEBUG') is not None:
         opts.debug = True if any([getenv(string, False) for true in enable_opts
                                   for string in debug_opts if getenv(string, False) == true]) else False

+    elif opts.no_debug:
+        opts.debug = False
+
     # Initiate the logger
     vl = VarkenLogger(data_folder=DATA_FOLDER, debug=opts.debug)
     vl.logger.info('Starting Varken...')

@@ -98,72 +101,84 @@ if __name__ == "__main__":
             SONARR = SonarrAPI(server, DBMANAGER)
             if server.queue:
                 at_time = schedule.every(server.queue_run_seconds).seconds
-                at_time.do(QUEUE.put, SONARR.get_queue).tag("sonarr-{}-get_queue".format(server.id))
+                at_time.do(thread, SONARR.get_queue).tag("sonarr-{}-get_queue".format(server.id))
             if server.missing_days > 0:
                 at_time = schedule.every(server.missing_days_run_seconds).seconds
-                at_time.do(QUEUE.put, SONARR.get_missing).tag("sonarr-{}-get_missing".format(server.id))
+                at_time.do(thread, SONARR.get_calendar, query="Missing").tag("sonarr-{}-get_missing".format(server.id))
             if server.future_days > 0:
                 at_time = schedule.every(server.future_days_run_seconds).seconds
-                at_time.do(QUEUE.put, SONARR.get_future).tag("sonarr-{}-get_future".format(server.id))
+                at_time.do(thread, SONARR.get_calendar, query="Future").tag("sonarr-{}-get_future".format(server.id))

     if CONFIG.tautulli_enabled:
         GEOIPHANDLER = GeoIPHandler(DATA_FOLDER)
-        schedule.every(12).to(24).hours.do(QUEUE.put, GEOIPHANDLER.update)
+        schedule.every(12).to(24).hours.do(thread, GEOIPHANDLER.update)
         for server in CONFIG.tautulli_servers:
             TAUTULLI = TautulliAPI(server, DBMANAGER, GEOIPHANDLER)
             if server.get_activity:
                 at_time = schedule.every(server.get_activity_run_seconds).seconds
-                at_time.do(QUEUE.put, TAUTULLI.get_activity).tag("tautulli-{}-get_activity".format(server.id))
+                at_time.do(thread, TAUTULLI.get_activity).tag("tautulli-{}-get_activity".format(server.id))
             if server.get_stats:
                 at_time = schedule.every(server.get_stats_run_seconds).seconds
-                at_time.do(QUEUE.put, TAUTULLI.get_stats).tag("tautulli-{}-get_stats".format(server.id))
+                at_time.do(thread, TAUTULLI.get_stats).tag("tautulli-{}-get_stats".format(server.id))

     if CONFIG.radarr_enabled:
         for server in CONFIG.radarr_servers:
             RADARR = RadarrAPI(server, DBMANAGER)
             if server.get_missing:
                 at_time = schedule.every(server.get_missing_run_seconds).seconds
-                at_time.do(QUEUE.put, RADARR.get_missing).tag("radarr-{}-get_missing".format(server.id))
+                at_time.do(thread, RADARR.get_missing).tag("radarr-{}-get_missing".format(server.id))
             if server.queue:
                 at_time = schedule.every(server.queue_run_seconds).seconds
-                at_time.do(QUEUE.put, RADARR.get_queue).tag("radarr-{}-get_queue".format(server.id))
+                at_time.do(thread, RADARR.get_queue).tag("radarr-{}-get_queue".format(server.id))
+
+    if CONFIG.lidarr_enabled:
+        for server in CONFIG.lidarr_servers:
+            LIDARR = LidarrAPI(server, DBMANAGER)
+            if server.queue:
+                at_time = schedule.every(server.queue_run_seconds).seconds
+                at_time.do(thread, LIDARR.get_queue).tag("lidarr-{}-get_queue".format(server.id))
+            if server.missing_days > 0:
+                at_time = schedule.every(server.missing_days_run_seconds).seconds
+                at_time.do(thread, LIDARR.get_calendar, query="Missing").tag(
+                    "lidarr-{}-get_missing".format(server.id))
+            if server.future_days > 0:
+                at_time = schedule.every(server.future_days_run_seconds).seconds
+                at_time.do(thread, LIDARR.get_calendar, query="Future").tag("lidarr-{}-get_future".format(
+                    server.id))

     if CONFIG.ombi_enabled:
         for server in CONFIG.ombi_servers:
             OMBI = OmbiAPI(server, DBMANAGER)
             if server.request_type_counts:
                 at_time = schedule.every(server.request_type_run_seconds).seconds
-                at_time.do(QUEUE.put, OMBI.get_request_counts).tag("ombi-{}-get_request_counts".format(server.id))
+                at_time.do(thread, OMBI.get_request_counts).tag("ombi-{}-get_request_counts".format(server.id))
             if server.request_total_counts:
                 at_time = schedule.every(server.request_total_run_seconds).seconds
-                at_time.do(QUEUE.put, OMBI.get_all_requests).tag("ombi-{}-get_all_requests".format(server.id))
+                at_time.do(thread, OMBI.get_all_requests).tag("ombi-{}-get_all_requests".format(server.id))
             if server.issue_status_counts:
                 at_time = schedule.every(server.issue_status_run_seconds).seconds
-                at_time.do(QUEUE.put, OMBI.get_issue_counts).tag("ombi-{}-get_issue_counts".format(server.id))
+                at_time.do(thread, OMBI.get_issue_counts).tag("ombi-{}-get_issue_counts".format(server.id))

     if CONFIG.sickchill_enabled:
         for server in CONFIG.sickchill_servers:
             SICKCHILL = SickChillAPI(server, DBMANAGER)
             if server.get_missing:
                 at_time = schedule.every(server.get_missing_run_seconds).seconds
-                at_time.do(QUEUE.put, SICKCHILL.get_missing).tag("sickchill-{}-get_missing".format(server.id))
+                at_time.do(thread, SICKCHILL.get_missing).tag("sickchill-{}-get_missing".format(server.id))

     if CONFIG.unifi_enabled:
         for server in CONFIG.unifi_servers:
             UNIFI = UniFiAPI(server, DBMANAGER)
             at_time = schedule.every(server.get_usg_stats_run_seconds).seconds
-            at_time.do(QUEUE.put, UNIFI.get_usg_stats).tag("unifi-{}-get_usg_stats".format(server.id))
+            at_time.do(thread, UNIFI.get_usg_stats).tag("unifi-{}-get_usg_stats".format(server.id))

     # Run all on startup
     SERVICES_ENABLED = [CONFIG.ombi_enabled, CONFIG.radarr_enabled, CONFIG.tautulli_enabled, CONFIG.unifi_enabled,
-                        CONFIG.sonarr_enabled, CONFIG.sickchill_enabled]
+                        CONFIG.sonarr_enabled, CONFIG.sickchill_enabled, CONFIG.lidarr_enabled]
     if not [enabled for enabled in SERVICES_ENABLED if enabled]:
         vl.logger.error("All services disabled. Exiting")
         exit(1)
-
-    WORKER = Thread(target=thread)
-    WORKER.start()

     schedule.run_all()

     while schedule.jobs:
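In short, the scheduler no longer feeds a single queue-draining worker; each scheduled tag now spawns its own thread per run. A minimal sketch of that dispatch model, assuming placeholder names (the collector function here is a stand-in for e.g. SONARR.get_calendar, and the body of the final loop is an assumption, since the diff only shows `while schedule.jobs:`):

    import time
    from threading import Thread
    import schedule

    def thread(job, **kwargs):
        # same wrapper as the diff: run the collector in its own Thread
        worker = Thread(target=job, kwargs=dict(**kwargs))
        worker.start()

    def get_calendar(query="Missing"):      # hypothetical stand-in collector
        print(f"collecting {query} episodes")

    schedule.every(300).seconds.do(thread, get_calendar, query="Missing").tag("sonarr-1-get_missing")
    schedule.run_all()                      # kick everything off once at startup

    while schedule.jobs:
        schedule.run_pending()              # assumed loop body; not shown in the diff
        time.sleep(1)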
@@ -1,6 +1,7 @@
 [global]
 sonarr_server_ids = 1,2
 radarr_server_ids = 1,2
+lidarr_server_ids = false
 tautulli_server_ids = 1
 ombi_server_ids = 1
 sickchill_server_ids = false

@@ -69,6 +70,18 @@ queue_run_seconds = 300
 get_missing = true
 get_missing_run_seconds = 300

+[lidarr-1]
+url = lidarr1.domain.tld:8686
+apikey = xxxxxxxxxxxxxxxx
+ssl = false
+verify_ssl = false
+missing_days = 30
+missing_days_run_seconds = 300
+future_days = 30
+future_days_run_seconds = 300
+queue = true
+queue_run_seconds = 300
+
 [ombi-1]
 url = ombi.domain.tld
 apikey = xxxxxxxxxxxxxxxx
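The new [lidarr-1] block follows the same key layout as the sonarr sections. A hedged sketch of reading it with plain configparser (this is not the project's INIParser, just an illustration of the keys above):

    from configparser import ConfigParser

    config = ConfigParser(interpolation=None)
    config.read('data/varken.ini')          # path matches the .gitignore entry

    section = 'lidarr-1'
    scheme = 'https://' if config.getboolean(section, 'ssl') else 'http://'
    url = scheme + config.get(section, 'url')
    missing_days = config.getint(section, 'missing_days')
    queue = config.getboolean(section, 'queue')
    print(url, missing_days, queue)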
@@ -5,6 +5,7 @@ networks:
 services:
   influxdb:
     hostname: influxdb
+    container_name: influxdb
     image: influxdb
     networks:
       - internal

@@ -13,6 +14,7 @@ services:
     restart: unless-stopped
   varken:
     hostname: varken
+    container_name: varken
     image: boerderij/varken
     networks:
       - internal

@@ -27,6 +29,7 @@ services:
     restart: unless-stopped
   grafana:
     hostname: grafana
+    container_name: grafana
     image: grafana/grafana
     networks:
       - internal

@@ -41,4 +44,5 @@ services:
       - GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel
     depends_on:
       - influxdb
+      - varken
     restart: unless-stopped
@@ -2,9 +2,9 @@
 # Potential requirements.
 # pip3 install -r requirements.txt
 #---------------------------------------------------------
-requests>=2.20.1
-geoip2>=2.9.0
-influxdb>=5.2.0
-schedule>=0.5.0
-distro>=1.3.0
-urllib3>=1.22
+requests==2.21
+geoip2==2.9.0
+influxdb==5.2.0
+schedule==0.6.0
+distro==1.4.0
+urllib3==1.24.2
utilities/grafana_build.py — new file, 164 lines

#!/usr/bin/env python3
# To use:
# docker exec -it varken cp /app/data/utilities/grafana_build.py /config/grafana_build.py
# nano /opt/dockerconfigs/varken/grafana_build.py  # Edit vars. This assumes you have your persistent data there
# docker exec -it varken python3 /config/grafana_build.py
from sys import exit
from requests import Session
from json.decoder import JSONDecodeError

docker = True  # True if using a docker container, False if not
host_ip = '127.0.0.1'  # Only relevant if docker = False
username = 'admin'  # Grafana username
password = 'admin'  # Grafana password
grafana_url = 'http://grafana:3000'
verify = False  # Verify SSL

# Do not remove any of these, just change the ones you use
movies_library = 'Movies'
fourk_movies_library = 'Movies 4K'
tv_shows_library = 'TV Shows'
fourk_tv_shows_library = 'TV Shows 4K'
music_library = 'Music'
usg_name = 'Gateway'
ombi_url = 'https://yourdomain.com/ombi'
tautulli_url = 'https://yourdomain.com/tautulli'
sonarr_url = 'https://yourdomain.com/sonarr'
radarr_url = 'https://yourdomain.com/radarr'
sickchill_url = 'https://yourdomain.com/sickchill'
lidarr_url = 'https://yourdomain.com/lidarr'

# Do not edit past this line #
session = Session()
auth = (username, password)
url_base = f"{grafana_url.rstrip('/')}/api"

varken_datasource = []
datasource_name = "Varken-Script"
try:
    datasources = session.get(url_base + '/datasources', auth=auth, verify=verify).json()
    varken_datasource = [source for source in datasources if source['database'] == 'varken']
    if varken_datasource:
        print(f'varken datasource already exists with the name "{varken_datasource[0]["name"]}"')
        datasource_name = varken_datasource[0]["name"]
except JSONDecodeError:
    exit(f"Could not talk to grafana at {grafana_url}. Check URL/Username/Password")

if not varken_datasource:
    datasource_data = {
        "name": datasource_name,
        "type": "influxdb",
        "url": f"http://{'influxdb' if docker else host_ip}:8086",
        "access": "proxy",
        "basicAuth": False,
        "database": 'varken'
    }
    post = session.post(url_base + '/datasources', auth=auth, verify=verify, json=datasource_data).json()
    print(f'Created {datasource_name} datasource (id:{post["datasource"]["id"]})')

our_dashboard = session.get(url_base + '/gnet/dashboards/9585', auth=auth, verify=verify).json()['json']
dashboard_data = {
    "dashboard": our_dashboard,
    "overwrite": True,
    "inputs": [
        {"name": "DS_VARKEN", "label": "varken", "description": "", "type": "datasource",
         "pluginId": "influxdb", "pluginName": "InfluxDB", "value": datasource_name},
        {"name": "VAR_MOVIESLIBRARY", "type": "constant", "label": "Movies Library Name",
         "value": movies_library, "description": ""},
        {"name": "VAR_MOVIES4KLIBRARY", "type": "constant", "label": "4K Movies Library Name",
         "value": fourk_movies_library, "description": ""},
        {"name": "VAR_TVLIBRARY", "type": "constant", "label": "TV Library Name",
         "value": tv_shows_library, "description": ""},
        {"name": "VAR_TV4KLIBRARY", "type": "constant", "label": "TV 4K Library Name",
         "value": fourk_tv_shows_library, "description": ""},
        {"name": "VAR_MUSICLIBRARY", "type": "constant", "label": "Music Library Name",
         "value": music_library, "description": ""},
        {"name": "VAR_USGNAME", "type": "constant", "label": "Unifi USG Name",
         "value": usg_name, "description": ""},
        {"name": "VAR_OMBIURL", "type": "constant", "label": "Ombi URL",
         "value": ombi_url, "description": ""},
        {"name": "VAR_TAUTULLIURL", "type": "constant", "label": "Tautulli URL",
         "value": tautulli_url, "description": ""},
        {"name": "VAR_SONARRURL", "type": "constant", "label": "Sonarr URL",
         "value": sonarr_url, "description": ""},
        {"name": "VAR_RADARRURL", "type": "constant", "label": "Radarr URL",
         "value": radarr_url, "description": ""},
        {"name": "VAR_SICKCHILLURL", "type": "constant", "label": "Sickchill URL",
         "value": sickchill_url, "description": ""},
        {"name": "VAR_LIDARRURL", "type": "constant", "label": "lidarr URL",
         "value": lidarr_url, "description": ""}
    ]
}
try:
    make_dashboard = session.post(url_base + '/dashboards/import', json=dashboard_data, auth=auth, verify=verify)
    if make_dashboard.status_code == 200 and make_dashboard.json().get('imported'):
        print(f'Created dashboard "{our_dashboard["title"]}"')
except:
    print('Shit...')
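A quick follow-up check, reusing the same /api/datasources endpoint the script already calls, to confirm the Varken datasource landed in Grafana (credentials and URL mirror the script's defaults, adjust to your setup):

    from requests import Session

    session = Session()
    auth = ('admin', 'admin')
    # list datasources and keep the ones backed by the 'varken' InfluxDB database
    datasources = session.get('http://grafana:3000/api/datasources', auth=auth, verify=False).json()
    print([d['name'] for d in datasources if d.get('database') == 'varken'])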
utilities/historical_tautulli_import.py — new file, 47 lines

#!/usr/bin/env python3
from argparse import ArgumentParser
from os import access, R_OK
from os.path import isdir, abspath, dirname, join
from logging import getLogger, StreamHandler, Formatter, DEBUG

from varken.iniparser import INIParser
from varken.dbmanager import DBManager
from varken.helpers import GeoIPHandler
from varken.tautulli import TautulliAPI

if __name__ == "__main__":
    parser = ArgumentParser(prog='varken',
                            description='Tautulli historical import tool')
    parser.add_argument("-d", "--data-folder", help='Define an alternate data folder location')
    parser.add_argument("-D", "--days", default=30, type=int, help='Specify length of historical import')
    opts = parser.parse_args()

    DATA_FOLDER = abspath(join(dirname(__file__), '..', 'data'))

    templogger = getLogger('temp')
    templogger.setLevel(DEBUG)
    tempch = StreamHandler()
    tempformatter = Formatter('%(asctime)s : %(levelname)s : %(module)s : %(message)s', '%Y-%m-%d %H:%M:%S')
    tempch.setFormatter(tempformatter)
    templogger.addHandler(tempch)

    if opts.data_folder:
        ARG_FOLDER = opts.data_folder

        if isdir(ARG_FOLDER):
            DATA_FOLDER = ARG_FOLDER
            if not access(DATA_FOLDER, R_OK):
                templogger.error("Read permission error for %s", DATA_FOLDER)
                exit(1)
        else:
            templogger.error("%s does not exist", ARG_FOLDER)
            exit(1)

    CONFIG = INIParser(DATA_FOLDER)
    DBMANAGER = DBManager(CONFIG.influx_server)

    if CONFIG.tautulli_enabled:
        GEOIPHANDLER = GeoIPHandler(DATA_FOLDER)
        for server in CONFIG.tautulli_servers:
            TAUTULLI = TautulliAPI(server, DBMANAGER, GEOIPHANDLER)
            TAUTULLI.get_historical(days=opts.days)
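The same import can be driven from your own code rather than the CLI flags; a minimal sketch using the building blocks above (90 days is an arbitrary example, and '/app/data' assumes the docker data layout):

    from varken.iniparser import INIParser
    from varken.dbmanager import DBManager
    from varken.helpers import GeoIPHandler
    from varken.tautulli import TautulliAPI

    config = INIParser('/app/data')               # assumed docker data folder
    dbmanager = DBManager(config.influx_server)
    geoip = GeoIPHandler('/app/data')

    for server in config.tautulli_servers:
        # pull 90 days of Tautulli history into InfluxDB per configured server
        TautulliAPI(server, dbmanager, geoip).get_historical(days=90)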
@@ -1,2 +1,2 @@
-VERSION = "1.6.8"
+VERSION = "1.7.0"
 BRANCH = 'master'
@@ -1,3 +1,4 @@
+from sys import exit
 from logging import getLogger
 from influxdb import InfluxDBClient
 from requests.exceptions import ConnectionError

@@ -7,20 +8,32 @@ from influxdb.exceptions import InfluxDBServerError
 class DBManager(object):
     def __init__(self, server):
         self.server = server
+        if self.server.url == "influxdb.domain.tld":
+            self.logger.critical("You have not configured your varken.ini. Please read Wiki page for configuration")
+            exit()
         self.influx = InfluxDBClient(host=self.server.url, port=self.server.port, username=self.server.username,
                                      password=self.server.password, ssl=self.server.ssl, database='varken',
                                      verify_ssl=self.server.verify_ssl)
-        version = self.influx.request('ping', expected_response_code=204).headers['X-Influxdb-Version']
-        databases = [db['name'] for db in self.influx.get_list_database()]
         self.logger = getLogger()
-        self.logger.info('Influxdb version: %s', version)
+        try:
+            version = self.influx.request('ping', expected_response_code=204).headers['X-Influxdb-Version']
+            self.logger.info('Influxdb version: %s', version)
+        except ConnectionError:
+            self.logger.critical("Error testing connection to InfluxDB. Please check your url/hostname")
+            exit()
+
+        databases = [db['name'] for db in self.influx.get_list_database()]

         if 'varken' not in databases:
             self.logger.info("Creating varken database")
             self.influx.create_database('varken')

-            self.logger.info("Creating varken retention policy (30d/1h)")
-            self.influx.create_retention_policy('varken 30d/1h', '30d', '1', 'varken', False, '1h')
+        retention_policies = [policy['name'] for policy in
+                              self.influx.get_list_retention_policies(database='varken')]
+        if 'varken 30d-1h' not in retention_policies:
+            self.logger.info("Creating varken retention policy (30d-1h)")
+            self.influx.create_retention_policy(name='varken 30d-1h', duration='30d', replication='1',
+                                                database='varken', default=True, shard_duration='1h')

     def write_points(self, data):
         d = data
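Issue #129 was the old policy name "varken 30d/1h" failing to create; the new code renames the policy and checks for it before creating, so restarts no longer trip over an existing policy. A small verification sketch with the same influxdb client (the host is a placeholder for your InfluxDB instance):

    from influxdb import InfluxDBClient

    influx = InfluxDBClient(host='influxdb.domain.tld', port=8086, database='varken')  # placeholder host
    policies = influx.get_list_retention_policies(database='varken')
    # expected to include ('varken 30d-1h', True) once DBManager has run
    print([(p['name'], p['default']) for p in policies])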
@@ -15,7 +15,7 @@ class INIParser(object):
         self.config = None
         self.data_folder = data_folder
         self.filtered_strings = None
-        self.services = ['sonarr', 'radarr', 'ombi', 'tautulli', 'sickchill', 'unifi']
+        self.services = ['sonarr', 'radarr', 'lidarr', 'ombi', 'tautulli', 'sickchill', 'unifi']

         self.logger = getLogger()
         self.influx_server = InfluxServer()

@@ -174,7 +174,7 @@ class INIParser(object):
            url = self.url_check(self.config.get(section, 'url'), section=section)

            apikey = None
-           if service not in ['ciscoasa', 'unifi']:
+           if service != 'unifi':
                apikey = self.config.get(section, 'apikey')

            scheme = 'https://' if self.config.getboolean(section, 'ssl') else 'http://'

@@ -183,11 +183,11 @@ class INIParser(object):
            if scheme != 'https://':
                verify_ssl = False

-           if service in ['sonarr', 'radarr']:
+           if service in ['sonarr', 'radarr', 'lidarr']:
                queue = self.config.getboolean(section, 'queue')
                queue_run_seconds = self.config.getint(section, 'queue_run_seconds')

-           if service == 'sonarr':
+           if service in ['sonarr', 'lidarr']:
                missing_days = self.config.getint(section, 'missing_days')
                future_days = self.config.getint(section, 'future_days')
varken/lidarr.py — new file, 132 lines

from logging import getLogger
from requests import Session, Request
from datetime import datetime, timezone, date, timedelta

from varken.structures import LidarrQueue, LidarrAlbum
from varken.helpers import hashit, connection_handler


class LidarrAPI(object):
    def __init__(self, server, dbmanager):
        self.dbmanager = dbmanager
        self.server = server
        # Create session to reduce server web thread load, and globally define pageSize for all requests
        self.session = Session()
        self.session.headers = {'X-Api-Key': self.server.api_key}
        self.logger = getLogger()

    def __repr__(self):
        return f"<lidarr-{self.server.id}>"

    def get_calendar(self, query="Missing"):
        endpoint = '/api/v1/calendar'
        today = str(date.today())
        last_days = str(date.today() - timedelta(days=self.server.missing_days))
        future = str(date.today() + timedelta(days=self.server.future_days))
        now = datetime.now(timezone.utc).astimezone().isoformat()
        if query == "Missing":
            params = {'start': last_days, 'end': today}
        else:
            params = {'start': today, 'end': future}
        influx_payload = []
        influx_albums = []

        req = self.session.prepare_request(Request('GET', self.server.url + endpoint, params=params))
        get = connection_handler(self.session, req, self.server.verify_ssl)

        if not get:
            return

        # Iteratively create a list of LidarrAlbum Objects from response json
        albums = []
        for album in get:
            try:
                albums.append(LidarrAlbum(**album))
            except TypeError as e:
                self.logger.error('TypeError has occurred : %s while creating LidarrAlbum structure for album. Data '
                                  'attempted is: %s', e, album)

        # Add Album to missing list if album is not complete
        for album in albums:
            percent_of_tracks = album.statistics.get('percentOfTracks', 0)
            if percent_of_tracks != 100:
                influx_albums.append(
                    (album.title, album.releaseDate, album.artist['artistName'], album.id, percent_of_tracks,
                     f"{album.statistics.get('trackFileCount', 0)}/{album.statistics.get('trackCount', 0)}")
                )

        for title, release_date, artist_name, album_id, percent_complete, complete_count in influx_albums:
            hash_id = hashit(f'{self.server.id}{title}{album_id}')
            influx_payload.append(
                {
                    "measurement": "Lidarr",
                    "tags": {
                        "type": query,
                        "sonarrId": album_id,
                        "server": self.server.id,
                        "albumName": title,
                        "artistName": artist_name,
                        "percentComplete": percent_complete,
                        "completeCount": complete_count,
                        "releaseDate": release_date
                    },
                    "time": now,
                    "fields": {
                        "hash": hash_id
                    }
                }
            )

        self.dbmanager.write_points(influx_payload)

    def get_queue(self):
        endpoint = '/api/v1/queue'
        now = datetime.now(timezone.utc).astimezone().isoformat()
        influx_payload = []
        params = {'pageSize': 1000}

        req = self.session.prepare_request(Request('GET', self.server.url + endpoint, params=params))
        get = connection_handler(self.session, req, self.server.verify_ssl)

        if not get:
            return

        queue = []
        for song in get['records']:
            try:
                queue.append(LidarrQueue(**song))
            except TypeError as e:
                self.logger.error('TypeError has occurred : %s while creating LidarrQueue structure for show. Data '
                                  'attempted is: %s', e, song)

        if not queue:
            return

        for song in queue:
            if song.protocol.upper() == 'USENET':
                protocol_id = 1
            else:
                protocol_id = 0
            hash_id = hashit(f'{self.server.id}{song.title}{song.artistId}')
            influx_payload.append(
                {
                    "measurement": "Lidarr",
                    "tags": {
                        "type": "Queue",
                        "id": song.id,
                        "server": self.server.id,
                        "title": song.title,
                        "quality": song.quality['quality']['name'],
                        "protocol": song.protocol,
                        "protocol_id": protocol_id,
                        "indexer": song.indexer
                    },
                    "time": now,
                    "fields": {
                        "hash": hash_id
                    }
                }
            )

        self.dbmanager.write_points(influx_payload)
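For reference, a sketch of what a single Lidarr queue record becomes before write_points() ships it to InfluxDB (the sample record below is fabricated for illustration):

    from varken.structures import LidarrQueue

    record = {"id": 1, "artistId": 7, "title": "Some Album", "protocol": "usenet",
              "quality": {"quality": {"name": "FLAC"}}, "indexer": "Indexer-1"}
    song = LidarrQueue(**record)

    point = {
        "measurement": "Lidarr",
        "tags": {"type": "Queue", "id": song.id, "title": song.title,
                 "quality": song.quality['quality']['name'],
                 "protocol": song.protocol,
                 "protocol_id": 1 if song.protocol.upper() == 'USENET' else 0,
                 "indexer": song.indexer},
        "fields": {"hash": "..."},  # hashit(f'{server.id}{song.title}{song.artistId}') in the real code
    }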
@@ -25,27 +25,38 @@ class OmbiAPI(object):
         tv_req = self.session.prepare_request(Request('GET', self.server.url + tv_endpoint))
         movie_req = self.session.prepare_request(Request('GET', self.server.url + movie_endpoint))
-        get_tv = connection_handler(self.session, tv_req, self.server.verify_ssl)
-        get_movie = connection_handler(self.session, movie_req, self.server.verify_ssl)
+        get_tv = connection_handler(self.session, tv_req, self.server.verify_ssl) or []
+        get_movie = connection_handler(self.session, movie_req, self.server.verify_ssl) or []

         if not any([get_tv, get_movie]):
             self.logger.error('No json replies. Discarding job')
             return

-        movie_request_count = len(get_movie)
-        tv_request_count = len(get_tv)
+        if get_movie:
+            movie_request_count = len(get_movie)
+        else:
+            movie_request_count = 0
+
+        if get_tv:
+            tv_request_count = len(get_tv)
+        else:
+            tv_request_count = 0

-        try:
-            tv_show_requests = [OmbiTVRequest(**show) for show in get_tv]
-        except TypeError as e:
-            self.logger.error('TypeError has occurred : %s while creating OmbiTVRequest structure', e)
-            return
+        tv_show_requests = []
+        for show in get_tv:
+            try:
+                tv_show_requests.append(OmbiTVRequest(**show))
+            except TypeError as e:
+                self.logger.error('TypeError has occurred : %s while creating OmbiTVRequest structure for show. '
+                                  'data attempted is: %s', e, show)

-        try:
-            movie_requests = [OmbiMovieRequest(**movie) for movie in get_movie]
-        except TypeError as e:
-            self.logger.error('TypeError has occurred : %s while creating OmbiMovieRequest structure', e)
-            return
+        movie_requests = []
+        for movie in get_movie:
+            try:
+                movie_requests.append(OmbiMovieRequest(**movie))
+            except TypeError as e:
+                self.logger.error('TypeError has occurred : %s while creating OmbiMovieRequest structure for movie. '
+                                  'data attempted is: %s', e, movie)

         influx_payload = [
             {

@@ -133,7 +144,10 @@ class OmbiAPI(object):
                 }
             )

-        self.dbmanager.write_points(influx_payload)
+        if influx_payload:
+            self.dbmanager.write_points(influx_payload)
+        else:
+            self.logger.debug("Empty dataset for ombi module. Discarding...")

     def get_request_counts(self):
         now = datetime.now(timezone.utc).astimezone().isoformat()
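The key change is swapping the all-or-nothing list comprehension for a per-item loop, so one malformed request no longer discards the whole batch. The same pattern as a generic sketch (make_structure and records are hypothetical names, not part of the Ombi module):

    def build_all(make_structure, records, logger):
        # keep every record that maps cleanly onto the NamedTuple, log and skip the rest
        built = []
        for record in records:
            try:
                built.append(make_structure(**record))
            except TypeError as e:
                logger.error('TypeError has occurred : %s. Data attempted is: %s', e, record)
        return built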
@@ -19,13 +19,18 @@ class SonarrAPI(object):
     def __repr__(self):
         return f"<sonarr-{self.server.id}>"

-    def get_missing(self):
-        endpoint = '/api/calendar'
+    def get_calendar(self, query="Missing"):
+        endpoint = '/api/calendar/'
         today = str(date.today())
-        last_days = str(date.today() + timedelta(days=-self.server.missing_days))
+        last_days = str(date.today() - timedelta(days=self.server.missing_days))
+        future = str(date.today() + timedelta(days=self.server.future_days))
         now = datetime.now(timezone.utc).astimezone().isoformat()
-        params = {'start': last_days, 'end': today}
+        if query == "Missing":
+            params = {'start': last_days, 'end': today}
+        else:
+            params = {'start': today, 'end': future}
         influx_payload = []
+        air_days = []
         missing = []

         req = self.session.prepare_request(Request('GET', self.server.url + endpoint, params=params))

@@ -34,71 +39,13 @@ class SonarrAPI(object):
         if not get:
             return

-        # Iteratively create a list of SonarrTVShow Objects from response json
         tv_shows = []
         for show in get:
             try:
-                show_tuple = SonarrTVShow(**show)
-                tv_shows.append(show_tuple)
+                tv_shows.append(SonarrTVShow(**show))
             except TypeError as e:
-                self.logger.error('TypeError has occurred : %s while creating SonarrTVShow structure for show', e)
-        if not tv_shows:
-            return
-
-        # Add show to missing list if file does not exist
-        for show in tv_shows:
-            if not show.hasFile:
-                sxe = f'S{show.seasonNumber:0>2}E{show.episodeNumber:0>2}'
-                missing.append((show.series['title'], sxe, show.airDateUtc, show.title, show.id))
-
-        for series_title, sxe, air_date_utc, episode_title, sonarr_id in missing:
-            hash_id = hashit(f'{self.server.id}{series_title}{sxe}')
-            influx_payload.append(
-                {
-                    "measurement": "Sonarr",
-                    "tags": {
-                        "type": "Missing",
-                        "sonarrId": sonarr_id,
-                        "server": self.server.id,
-                        "name": series_title,
-                        "epname": episode_title,
-                        "sxe": sxe,
-                        "airsUTC": air_date_utc
-                    },
-                    "time": now,
-                    "fields": {
-                        "hash": hash_id
-                    }
-                }
-            )
-
-        self.dbmanager.write_points(influx_payload)
-
-    def get_future(self):
-        endpoint = '/api/calendar/'
-        today = str(date.today())
-        now = datetime.now(timezone.utc).astimezone().isoformat()
-        future = str(date.today() + timedelta(days=self.server.future_days))
-        influx_payload = []
-        air_days = []
-        params = {'start': today, 'end': future}
-
-        req = self.session.prepare_request(Request('GET', self.server.url + endpoint, params=params))
-        get = connection_handler(self.session, req, self.server.verify_ssl)
-
-        if not get:
-            return
-
-        tv_shows = []
-        for show in get:
-            try:
-                show_tuple = SonarrTVShow(**show)
-                tv_shows.append(show_tuple)
-            except TypeError as e:
-                self.logger.error('TypeError has occurred : %s while creating SonarrTVShow structure for show', e)
-        if not tv_shows:
-            return
+                self.logger.error('TypeError has occurred : %s while creating SonarrTVShow structure for show. Data '
+                                  'attempted is: %s', e, show)

         for show in tv_shows:
             sxe = f'S{show.seasonNumber:0>2}E{show.episodeNumber:0>2}'

@@ -106,15 +53,19 @@ class SonarrAPI(object):
                 downloaded = 1
             else:
                 downloaded = 0
-            air_days.append((show.series['title'], downloaded, sxe, show.title, show.airDateUtc, show.id))
+            if query == "Missing":
+                if not downloaded:
+                    missing.append((show.series['title'], downloaded, sxe, show.airDateUtc, show.title, show.id))
+            else:
+                air_days.append((show.series['title'], downloaded, sxe, show.title, show.airDateUtc, show.id))

-        for series_title, dl_status, sxe, episode_title, air_date_utc, sonarr_id in air_days:
+        for series_title, dl_status, sxe, episode_title, air_date_utc, sonarr_id in (air_days or missing):
             hash_id = hashit(f'{self.server.id}{series_title}{sxe}')
             influx_payload.append(
                 {
                     "measurement": "Sonarr",
                     "tags": {
-                        "type": "Future",
+                        "type": query,
                         "sonarrId": sonarr_id,
                         "server": self.server.id,
                         "name": series_title,

@@ -147,10 +98,10 @@ class SonarrAPI(object):
         download_queue = []
         for show in get:
             try:
-                show_tuple = Queue(**show)
-                download_queue.append(show_tuple)
+                download_queue.append(Queue(**show))
             except TypeError as e:
-                self.logger.error('TypeError has occurred : %s while creating Queue structure', e)
+                self.logger.error('TypeError has occurred : %s while creating Queue structure. Data attempted is: '
+                                  '%s', e, show)
         if not download_queue:
             return

@@ -159,7 +110,7 @@ class SonarrAPI(object):
             sxe = f"S{show.episode['seasonNumber']:0>2}E{show.episode['episodeNumber']:0>2}"
         except TypeError as e:
             self.logger.error('TypeError has occurred : %s while processing the sonarr queue. \
-                               Remove invalid queue entries.', e)
+                               Remove invalid queue entry. Data attempted is: %s', e, show)
             continue

         if show.protocol.upper() == 'USENET':

@@ -192,4 +143,7 @@ class SonarrAPI(object):
                     }
                 }
             )
-        self.dbmanager.write_points(influx_payload)
+        if influx_payload:
+            self.dbmanager.write_points(influx_payload)
+        else:
+            self.logger.debug("No data to send to influx for sonarr instance, discarding.")
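get_missing() and get_future() collapse into a single get_calendar(query=...) that only changes the date window and the point's type tag. The window logic isolated as a small sketch (parameter names mirror the diff; missing_days and future_days come from the server config):

    from datetime import date, timedelta

    def calendar_params(query, missing_days, future_days):
        today = str(date.today())
        if query == "Missing":
            # look back missing_days up to today
            return {'start': str(date.today() - timedelta(days=missing_days)), 'end': today}
        # otherwise look ahead future_days from today
        return {'start': today, 'end': str(date.today() + timedelta(days=future_days))}

    print(calendar_params("Missing", 30, 30))  # -> {'start': <today - 30d>, 'end': <today>}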
@ -273,8 +273,8 @@ class TautulliStream(NamedTuple):
|
||||||
audience_rating_image: str = None
|
audience_rating_image: str = None
|
||||||
audio_bitrate: str = None
|
audio_bitrate: str = None
|
||||||
audio_bitrate_mode: str = None
|
audio_bitrate_mode: str = None
|
||||||
audio_channels: str = None
|
|
||||||
audio_channel_layout: str = None
|
audio_channel_layout: str = None
|
||||||
|
audio_channels: str = None
|
||||||
audio_codec: str = None
|
audio_codec: str = None
|
||||||
audio_decision: str = None
|
audio_decision: str = None
|
||||||
audio_language: str = None
|
audio_language: str = None
|
||||||
|
@ -292,6 +292,8 @@ class TautulliStream(NamedTuple):
|
||||||
collections: list = None
|
collections: list = None
|
||||||
container: str = None
|
container: str = None
|
||||||
content_rating: str = None
|
content_rating: str = None
|
||||||
|
current_session: str = None
|
||||||
|
date: str = None
|
||||||
deleted_user: int = None
|
deleted_user: int = None
|
||||||
device: str = None
|
device: str = None
|
||||||
directors: list = None
|
directors: list = None
|
||||||
|
@ -307,6 +309,8 @@ class TautulliStream(NamedTuple):
|
||||||
grandparent_rating_key: str = None
|
grandparent_rating_key: str = None
|
||||||
grandparent_thumb: str = None
|
grandparent_thumb: str = None
|
||||||
grandparent_title: str = None
|
grandparent_title: str = None
|
||||||
|
group_count: int = None
|
||||||
|
group_ids: str = None
|
||||||
guid: str = None
|
guid: str = None
|
||||||
height: str = None
|
height: str = None
|
||||||
id: str = None
|
id: str = None
|
||||||
|
@ -331,16 +335,19 @@ class TautulliStream(NamedTuple):
|
||||||
optimized_version: int = None
|
optimized_version: int = None
|
||||||
optimized_version_profile: str = None
|
optimized_version_profile: str = None
|
||||||
optimized_version_title: str = None
|
optimized_version_title: str = None
|
||||||
originally_available_at: str = None
|
|
||||||
original_title: str = None
|
original_title: str = None
|
||||||
|
originally_available_at: str = None
|
||||||
parent_media_index: str = None
|
parent_media_index: str = None
|
||||||
parent_rating_key: str = None
|
parent_rating_key: str = None
|
||||||
parent_thumb: str = None
|
parent_thumb: str = None
|
||||||
parent_title: str = None
|
parent_title: str = None
|
||||||
|
paused_counter: int = None
|
||||||
|
percent_complete: int = None
|
||||||
platform: str = None
|
platform: str = None
|
||||||
platform_name: str = None
|
platform_name: str = None
|
||||||
platform_version: str = None
|
platform_version: str = None
|
||||||
player: str = None
|
player: str = None
|
||||||
|
pre_tautulli: str = None
|
||||||
product: str = None
|
product: str = None
|
||||||
product_version: str = None
|
product_version: str = None
|
||||||
profile: str = None
|
profile: str = None
|
||||||
|
@ -349,20 +356,25 @@ class TautulliStream(NamedTuple):
|
||||||
rating: str = None
|
rating: str = None
|
||||||
rating_image: str = None
|
rating_image: str = None
|
||||||
rating_key: str = None
|
rating_key: str = None
|
||||||
|
reference_id: int = None
|
||||||
relay: int = None
|
relay: int = None
|
||||||
|
relayed: int = None
|
||||||
section_id: str = None
|
section_id: str = None
|
||||||
|
secure: str = None
|
||||||
selected: int = None
|
selected: int = None
|
||||||
session_id: str = None
|
session_id: str = None
|
||||||
session_key: str = None
|
session_key: str = None
|
||||||
shared_libraries: list = None
|
shared_libraries: list = None
|
||||||
sort_title: str = None
|
sort_title: str = None
|
||||||
|
started: int = None
|
||||||
state: str = None
|
state: str = None
|
||||||
|
stopped: int = None
|
||||||
stream_aspect_ratio: str = None
|
stream_aspect_ratio: str = None
|
||||||
stream_audio_bitrate: str = None
|
stream_audio_bitrate: str = None
|
||||||
stream_audio_bitrate_mode: str = None
|
stream_audio_bitrate_mode: str = None
|
||||||
stream_audio_channels: str = None
|
|
||||||
stream_audio_channel_layout: str = None
|
stream_audio_channel_layout: str = None
|
||||||
stream_audio_channel_layout_: str = None
|
stream_audio_channel_layout_: str = None
|
||||||
|
stream_audio_channels: str = None
|
||||||
stream_audio_codec: str = None
|
stream_audio_codec: str = None
|
||||||
stream_audio_decision: str = None
|
stream_audio_decision: str = None
|
||||||
stream_audio_language: str = None
|
stream_audio_language: str = None
|
||||||
|
@ -380,8 +392,8 @@ class TautulliStream(NamedTuple):
|
||||||
stream_subtitle_language: str = None
|
stream_subtitle_language: str = None
|
||||||
stream_subtitle_language_code: str = None
|
stream_subtitle_language_code: str = None
|
||||||
stream_subtitle_location: str = None
|
stream_subtitle_location: str = None
|
||||||
stream_video_bitrate: str = None
|
|
||||||
stream_video_bit_depth: str = None
|
stream_video_bit_depth: str = None
|
||||||
|
stream_video_bitrate: str = None
|
||||||
stream_video_codec: str = None
|
stream_video_codec: str = None
|
||||||
stream_video_codec_level: str = None
|
stream_video_codec_level: str = None
|
||||||
stream_video_decision: str = None
|
stream_video_decision: str = None
|
||||||
|
@ -393,7 +405,7 @@ class TautulliStream(NamedTuple):
|
||||||
stream_video_resolution: str = None
|
stream_video_resolution: str = None
|
||||||
stream_video_width: str = None
|
stream_video_width: str = None
|
||||||
studio: str = None
|
studio: str = None
|
||||||
subtitles: int = None
|
sub_type: str = None
|
||||||
subtitle_codec: str = None
|
subtitle_codec: str = None
|
||||||
subtitle_container: str = None
|
subtitle_container: str = None
|
||||||
subtitle_decision: str = None
|
subtitle_decision: str = None
|
||||||
|
@@ -402,7 +414,7 @@ class TautulliStream(NamedTuple):
     subtitle_language: str = None
     subtitle_language_code: str = None
     subtitle_location: str = None
-    sub_type: str = None
+    subtitles: int = None
     summary: str = None
     synced_version: int = None
     synced_version_profile: str = None
@@ -433,17 +445,17 @@ class TautulliStream(NamedTuple):
     type: str = None
     updated_at: str = None
     user: str = None
-    username: str = None
     user_id: int = None
     user_rating: str = None
     user_thumb: str = None
-    video_bitrate: str = None
+    username: str = None
     video_bit_depth: str = None
+    video_bitrate: str = None
     video_codec: str = None
     video_codec_level: str = None
     video_decision: str = None
-    video_framerate: str = None
     video_frame_rate: str = None
+    video_framerate: str = None
     video_height: str = None
     video_language: str = None
     video_language_code: str = None
@@ -452,8 +464,53 @@ class TautulliStream(NamedTuple):
     video_resolution: str = None
     video_width: str = None
     view_offset: str = None
+    watched_status: int = None
     width: str = None
     writers: list = None
     year: str = None
-    secure: str = None
-    relayed: int = None
+
+
+# Lidarr
+class LidarrQueue(NamedTuple):
+    artistId: int = None
+    albumId: int = None
+    language: dict = None
+    quality: dict = None
+    size: float = None
+    title: str = None
+    timeleft: str = None
+    sizeleft: float = None
+    status: str = None
+    trackedDownloadStatus: str = None
+    statusMessages: list = None
+    downloadId: str = None
+    protocol: str = None
+    downloadClient: str = None
+    indexer: str = None
+    downloadForced: bool = None
+    id: int = None
+
+
+class LidarrAlbum(NamedTuple):
+    title: str = None
+    disambiguation: str = None
+    overview: str = None
+    artistId: int = None
+    foreignAlbumId: str = None
+    monitored: bool = None
+    anyReleaseOk: bool = None
+    profileId: int = None
+    duration: int = None
+    albumType: str = None
+    secondaryTypes: list = None
+    mediumCount: int = None
+    ratings: dict = None
+    releaseDate: str = None
+    releases: list = None
+    genres: list = None
+    media: list = None
+    artist: dict = None
+    images: list = None
+    links: list = None
+    statistics: dict = {}
+    id: int = None
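These structures follow the same pattern as the existing ones: every field has a keyword default so a record can be built directly from the JSON a service returns, and an unexpected key fails loudly. A minimal sketch of that pattern, assuming a hypothetical `raw` dict shaped like one element of a Lidarr queue response (the values are made up for illustration):

```python
from varken.structures import LidarrQueue

# Hypothetical payload resembling one item from Lidarr's queue endpoint
raw = {'artistId': 12, 'albumId': 34, 'title': 'Example Album',
       'size': 1024.0, 'sizeleft': 0.0, 'status': 'completed', 'id': 56}

# Any key not declared on the NamedTuple raises TypeError, which is why the
# collectors wrap this construction in try/except.
queue_item = LidarrQueue(**raw)
print(queue_item.title, queue_item.status)  # -> Example Album completed
```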
@@ -1,7 +1,8 @@
 from logging import getLogger
 from requests import Session, Request
-from datetime import datetime, timezone
 from geoip2.errors import AddressNotFoundError
+from datetime import datetime, timezone, date, timedelta
+from influxdb.exceptions import InfluxDBClientError
 
 from varken.structures import TautulliStream
 from varken.helpers import hashit, connection_handler
@@ -60,7 +61,7 @@ class TautulliAPI(object):
                 if not self.my_ip:
                     # Try the fallback ip in the config file
                     try:
-                        self.logger.debug('Atempting to use the failback IP...')
+                        self.logger.debug('Attempting to use the fallback IP...')
                         geodata = self.geoiphandler.lookup(self.server.fallback_ip)
                     except AddressNotFoundError as e:
                         self.logger.error('%s', e)
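For context on what is being logged here: the lookup wraps the geoip2 City reader, and a private or unknown address raises AddressNotFoundError, which is what triggers the fallback path. A rough standalone sketch of that fallback order using geoip2 directly; the database path and function name are placeholders, not Varken's own API:

```python
from geoip2.database import Reader
from geoip2.errors import AddressNotFoundError

reader = Reader('GeoLite2-City.mmdb')  # placeholder path to the MaxMind database

def lookup_with_fallback(session_ip, fallback_ip):
    """Prefer the session's own IP; fall back to the configured public IP."""
    try:
        # Private or unrouteable addresses raise AddressNotFoundError here
        return reader.city(session_ip)
    except (ValueError, AddressNotFoundError):
        # Fall back to the fallback_ip value from varken.ini
        return reader.city(fallback_ip)
```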
@@ -208,6 +209,142 @@ class TautulliAPI(object):
             if library['section_type'] == 'show':
                 data['fields']['seasons'] = int(library['parent_count'])
                 data['fields']['episodes'] = int(library['child_count'])
+
+            elif library['section_type'] == 'artist':
+                data['fields']['albums'] = int(library['parent_count'])
+                data['fields']['tracks'] = int(library['child_count'])
+
             influx_payload.append(data)
 
         self.dbmanager.write_points(influx_payload)
+
+    def get_historical(self, days=30):
+        influx_payload = []
+        start_date = date.today() - timedelta(days=days)
+        params = {'cmd': 'get_history', 'grouping': 1, 'length': 1000000}
+        req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
+        g = connection_handler(self.session, req, self.server.verify_ssl)
+
+        if not g:
+            return
+
+        get = g['response']['data']['data']
+
+        params = {'cmd': 'get_stream_data', 'row_id': 0}
+        sessions = []
+        for history_item in get:
+            if not history_item['id']:
+                self.logger.debug('Skipping entry with no ID. (%s)', history_item['full_title'])
+                continue
+            if date.fromtimestamp(history_item['started']) < start_date:
+                continue
+            params['row_id'] = history_item['id']
+            req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
+            g = connection_handler(self.session, req, self.server.verify_ssl)
+            if not g:
+                self.logger.debug('Could not get historical stream data for %s. Skipping.', history_item['full_title'])
+            try:
+                self.logger.debug('Adding %s to history', history_item['full_title'])
+                history_item.update(g['response']['data'])
+                sessions.append(TautulliStream(**history_item))
+            except TypeError as e:
+                self.logger.error('TypeError has occurred : %s while creating TautulliStream structure', e)
+                continue
+
+        for session in sessions:
+            try:
+                geodata = self.geoiphandler.lookup(session.ip_address)
+            except (ValueError, AddressNotFoundError):
+                self.logger.debug('Public IP missing for Tautulli session...')
+                if not self.my_ip:
+                    # Try the fallback ip in the config file
+                    try:
+                        self.logger.debug('Attempting to use the fallback IP...')
+                        geodata = self.geoiphandler.lookup(self.server.fallback_ip)
+                    except AddressNotFoundError as e:
+                        self.logger.error('%s', e)
+
+                        self.my_ip = self.session.get('http://ip.42.pl/raw').text
+                        self.logger.debug('Looked the public IP and set it to %s', self.my_ip)
+
+                        geodata = self.geoiphandler.lookup(self.my_ip)
+
+                else:
+                    geodata = self.geoiphandler.lookup(self.my_ip)
+
+            if not all([geodata.location.latitude, geodata.location.longitude]):
+                latitude = 37.234332396
+                longitude = -115.80666344
+            else:
+                latitude = geodata.location.latitude
+                longitude = geodata.location.longitude
+
+            if not geodata.city.name:
+                location = '👽'
+            else:
+                location = geodata.city.name
+
+            decision = session.transcode_decision
+            if decision == 'copy':
+                decision = 'direct stream'
+
+            video_decision = session.stream_video_decision
+            if video_decision == 'copy':
+                video_decision = 'direct stream'
+            elif video_decision == '':
+                video_decision = 'Music'
+
+            quality = session.stream_video_resolution
+            if not quality:
+                quality = session.container.upper()
+            elif quality in ('SD', 'sd', '4k'):
+                quality = session.stream_video_resolution.upper()
+            else:
+                quality = session.stream_video_resolution + 'p'
+
+            player_state = 100
+
+            hash_id = hashit(f'{session.id}{session.session_key}{session.user}{session.full_title}')
+            influx_payload.append(
+                {
+                    "measurement": "Tautulli",
+                    "tags": {
+                        "type": "Session",
+                        "session_id": session.session_id,
+                        "friendly_name": session.friendly_name,
+                        "username": session.user,
+                        "title": session.full_title,
+                        "platform": session.platform,
+                        "quality": quality,
+                        "video_decision": video_decision.title(),
+                        "transcode_decision": decision.title(),
+                        "transcode_hw_decoding": session.transcode_hw_decoding,
+                        "transcode_hw_encoding": session.transcode_hw_encoding,
+                        "media_type": session.media_type.title(),
+                        "audio_codec": session.audio_codec.upper(),
+                        "stream_audio_codec": session.stream_audio_codec.upper(),
+                        "quality_profile": session.quality_profile,
+                        "progress_percent": session.progress_percent,
+                        "region_code": geodata.subdivisions.most_specific.iso_code,
+                        "location": location,
+                        "full_location": f'{geodata.subdivisions.most_specific.name} - {geodata.city.name}',
+                        "latitude": latitude,
+                        "longitude": longitude,
+                        "player_state": player_state,
+                        "device_type": session.platform,
+                        "relayed": session.relayed,
+                        "secure": session.secure,
+                        "server": self.server.id
+                    },
+                    "time": datetime.fromtimestamp(session.stopped).astimezone().isoformat(),
+                    "fields": {
+                        "hash": hash_id
+                    }
+                }
+            )
+        try:
+            self.dbmanager.write_points(influx_payload)
+        except InfluxDBClientError as e:
+            if "beyond retention policy" in str(e):
+                self.logger.debug('Only imported 30 days of data per retention policy')
+            else:
+                self.logger.error('Something went wrong... post this output in discord: %s', e)
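One practical note for anyone backfilling history with this: InfluxDB rejects points whose timestamp falls outside the database's retention policy, which is why the new `get_historical` method swallows that specific client error instead of treating it as a failure. A rough sketch of the same guard in isolation, assuming a local InfluxDB and illustrative connection details (not taken from Varken's config handling):

```python
from influxdb import InfluxDBClient
from influxdb.exceptions import InfluxDBClientError

# Assumed connection details for the sketch only
client = InfluxDBClient(host='localhost', port=8086, database='varken')

# A single point shaped like the payload built in get_historical
points = [{
    "measurement": "Tautulli",
    "tags": {"type": "Session", "server": 1},
    "time": "2019-04-01T00:00:00Z",
    "fields": {"hash": "example"}
}]

try:
    client.write_points(points)
except InfluxDBClientError as e:
    # Old points beyond the retention policy are expected to be dropped when
    # importing history; anything else is a real error and should surface.
    if "beyond retention policy" not in str(e):
        raise
```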
@@ -9,11 +9,13 @@ class UniFiAPI(object):
     def __init__(self, server, dbmanager):
         self.dbmanager = dbmanager
         self.server = server
+        self.site = self.server.site
         # Create session to reduce server web thread load, and globally define pageSize for all requests
         self.session = Session()
         self.logger = getLogger()
+        self.get_retry = True
         self.get_cookie()
+        self.get_site()
 
     def __repr__(self):
         return f"<unifi-{self.server.id}>"
@@ -25,22 +27,49 @@ class UniFiAPI(object):
         post = connection_handler(self.session, req, self.server.verify_ssl, as_is_reply=True)
 
         if not post or not post.cookies.get('unifises'):
+            self.logger.error(f"Could not retrieve session cookie from UniFi Controller")
             return
 
         cookies = {'unifises': post.cookies.get('unifises')}
         self.session.cookies.update(cookies)
 
-    def get_usg_stats(self):
-        now = datetime.now(timezone.utc).astimezone().isoformat()
-        endpoint = f'/api/s/{self.server.site}/stat/device'
+    def get_site(self):
+        endpoint = '/api/self/sites'
         req = self.session.prepare_request(Request('GET', self.server.url + endpoint))
         get = connection_handler(self.session, req, self.server.verify_ssl)
 
         if not get:
+            self.logger.error(f"Could not get list of sites from UniFi Controller")
+            return
+        site = [site['name'] for site in get['data'] if site['name'].lower() == self.server.site.lower()
+                or site['desc'].lower() == self.server.site.lower()]
+        if site:
+            self.site = site[0]
+        else:
+            self.logger.error(f"Could not map site {self.server.site} to a site id/alias")
+
+    def get_usg_stats(self):
+        now = datetime.now(timezone.utc).astimezone().isoformat()
+        endpoint = f'/api/s/{self.site}/stat/device'
+        req = self.session.prepare_request(Request('GET', self.server.url + endpoint))
+        get = connection_handler(self.session, req, self.server.verify_ssl)
+
+        if not get:
+            if self.get_retry:
+                self.get_retry = False
+                self.logger.error("Attempting to reauthenticate for unifi-%s", self.server.id)
+                self.get_cookie()
+                self.get_usg_stats()
+            else:
+                self.get_retry = True
                 self.logger.error("Disregarding Job get_usg_stats for unifi-%s", self.server.id)
             return
 
-        devices = {device['name']: device for device in get['data']}
+        if not self.get_retry:
+            self.get_retry = True
+
+        devices = {device['name']: device for device in get['data'] if device.get('name')}
+
         if devices.get(self.server.usg_name):
             device = devices[self.server.usg_name]
         else:
@@ -62,10 +91,6 @@ class UniFiAPI(object):
                     "rx_bytes_current": device['wan1']['rx_bytes-r'],
                     "tx_bytes_total": device['wan1']['tx_bytes'],
                     "tx_bytes_current": device['wan1']['tx_bytes-r'],
-                    # Commenting speedtest out until Unifi gets their shit together
-                    # "speedtest_latency": device['speedtest-status']['latency'],
-                    # "speedtest_download": device['speedtest-status']['xput_download'],
-                    # "speedtest_upload": device['speedtest-status']['xput_upload'],
                     "cpu_loadavg_1": float(device['sys_stats']['loadavg_1']),
                     "cpu_loadavg_5": float(device['sys_stats']['loadavg_5']),
                     "cpu_loadavg_15": float(device['sys_stats']['loadavg_15']),
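A behavioural note on the UniFi changes above: the controller session cookie can expire between polls, so `get_usg_stats` now retries exactly once after re-authenticating before giving up on that run, and unnamed devices are filtered out so the device map no longer errors. A stripped-down sketch of the retry-once pattern on its own, with a hypothetical `fetch()` and `login()` standing in for `connection_handler` and `get_cookie` (this is not Varken's class, just the shape of the logic):

```python
class Poller:
    def __init__(self):
        self.get_retry = True  # one retry credit per polling run

    def login(self):
        # Hypothetical re-authentication step (plays the role of get_cookie)
        pass

    def fetch(self):
        # Hypothetical data request (plays the role of connection_handler);
        # returning None signals a failed request
        return None

    def poll(self):
        data = self.fetch()
        if not data:
            if self.get_retry:
                # First failure: assume the session expired, re-auth and retry once
                self.get_retry = False
                self.login()
                return self.poll()
            # Second consecutive failure: skip this run and reset the credit
            self.get_retry = True
            return None
        self.get_retry = True  # success also resets the credit
        return data
```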