Merge branch 'refactor-project' into nightly

Nicholas St. Germain 2018-12-05 19:47:38 -06:00
commit fa07879b11
25 changed files with 1457 additions and 998 deletions

.gitignore vendored (5 changes)

@@ -5,7 +5,10 @@
.Trashes
ehthumbs.db
Thumbs.db
configuration.py
__pycache__
GeoLite2-City.mmdb
GeoLite2-City.tar.gz
data/varken.ini
.idea/
Legacy/configuration.py
varken-venv/


@@ -3,7 +3,7 @@ import requests
from datetime import datetime, timezone
from influxdb import InfluxDBClient
import configuration
from Legacy import configuration
current_time = datetime.now(timezone.utc).astimezone().isoformat()

README.md (114 changes)

@@ -1,5 +1,9 @@
# Grafana Scripts
Repo for API scripts (both pushing and pulling) that aggregate data into InfluxDB for Grafana
# Varken
Dutch for PIG. PIG is an acronym for Plex/InfluxDB/Grafana.
Varken is a standalone command-line utility that aggregates data
from the Plex ecosystem into InfluxDB. Examples use Grafana for the
frontend.
Requirements with install links: [Grafana](http://docs.grafana.org/installation/), [Python3](https://www.python.org/downloads/), [InfluxDB](https://docs.influxdata.com/influxdb/v1.5/introduction/installation/)
@@ -7,25 +11,23 @@ Requirements with install links: [Grafana](http://docs.grafana.org/installation/),
<img width="800" src="https://i.imgur.com/av8e0HP.png">
</p>
## Quick Setup
1. Install requirements `pip3 install -r requirements.txt`
1. Copy `configuration.example.py` to `configuration.py`
1. Make the appropriate changes to `configuration.py`
1. Create your `plex` database in InfluxDB:
```sh
user@server: ~$ influx
> CREATE DATABASE plex
> quit
```
1. After completing the [getting started](http://docs.grafana.org/guides/getting_started/) portion of Grafana, create your data source for InfluxDB. At a minimum, you will need the `plex` database.
## Quick Setup (Varken Alpha)
1. Clone the repository `sudo git clone https://github.com/Boerderij/Varken.git /opt/Varken`
1. Follow the systemd install instructions located in `varken.systemd`
1. Create venv in project `cd /opt/Varken && /usr/bin/python3 -m venv varken-venv`
1. Install requirements `/opt/Varken/varken-venv/bin/python -m pip install -r requirements.txt`
1. Copy `varken.example.ini` to `varken.ini` in the `data` folder:
`cp /opt/Varken/data/varken.example.ini /opt/Varken/data/varken.ini`
1. Make the appropriate changes to `varken.ini`,
e.g. `nano /opt/Varken/data/varken.ini`
1. Make sure all the files have the appropriate permissions: `sudo chown varken:varken -R /opt/Varken`
1. After completing the [getting started](http://docs.grafana.org/guides/getting_started/) portion of Grafana, create your data source for InfluxDB (a quick connectivity check is sketched after this list).
1. Install the worldmap panel: `grafana-cli plugins install grafana-worldmap-panel`
1. Click the + on your menu and click import. Using the .json provided in this repo, paste it in and customize as you like.
1. TODO: Click the + on your menu and click import. Using the .json provided in this repo, paste it in and customize as you like.
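If you want to sanity-check InfluxDB before pointing Grafana at it, here is a minimal connectivity probe. It is only a sketch: the host, port, and empty credentials mirror the placeholders in `data/varken.example.ini`, and the `varken` database is created automatically by Varken's `DBManager` on first run.

```python
from influxdb import InfluxDBClient  # influxdb>=5.2.0, per requirements.txt

# Placeholder connection details, mirroring data/varken.example.ini [influxdb]
client = InfluxDBClient('influxdb.domain.tld', 8086, '', '')
print(client.get_list_database())  # 'varken' appears here after Varken's first run
```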
### Docker
Repo is included in [si0972/grafana-scripts](https://github.com/si0972/grafana-scripts-docker)
Repo is included in [si0972/grafana-scripts-docker](https://github.com/si0972/grafana-scripts-docker/tree/varken)
<details><summary>Example</summary>
<p>
@@ -34,84 +36,8 @@ Repo is included in [si0972/grafana-scripts](https://github.com/si0972/grafana-s
docker create \
--name=grafana-scripts \
-v <path to data>:/Scripts \
-e plex=true \
-e PGID=<gid> -e PUID=<uid> \
si0972/grafana-scripts:latest
si0972/grafana-scripts:varken
```
</p>
</details>
## Scripts
### `sonarr.py`
Gathers data from Sonarr and pushes it to InfluxDB.
```
Script to aid in data gathering from Sonarr
optional arguments:
-h, --help show this help message and exit
--missing Get all missing TV shows
--missing_days MISSING_DAYS
Get missing TV shows in past X days
--upcoming Get upcoming TV shows
--future FUTURE Get TV shows X days into the future. Includes today.
i.e. --future 2 is Today and Tomorrow
--queue Get TV shows in queue
```
- Notes:
- You cannot stack the arguments, e.g. `sonarr.py --missing --queue` will not work
- One argument must be supplied
### `radarr.py`
Gathers data from Radarr and pushes it to InfluxDB.
```
Script to aid in data gathering from Radarr
optional arguments:
-h, --help show this help message and exit
--missing Get missing movies
--missing_avl Get missing available movies
--queue Get movies in queue
```
- Notes:
- You cannot stack the arguments, e.g. `radarr.py --missing --queue` will not work
- One argument must be supplied
- `--missing_avl` refers to movies that Radarr has determined should be available to download. The easy way to tell whether a movie will appear on this list is its tag: a <span style="color:red">RED "Missing"</span> tag means the movie is missing and available to download, while a <span style="color:blue">BLUE "Missing"</span> tag means the movie is missing but not yet available. These tags are determined by your "Minimum Availability" setting for that movie.
### `ombi.py`
Gathers data from Ombi and pushes it to InfluxDB.
```
Script to aid in data gathering from Ombi
optional arguments:
-h, --help show this help message and exit
--total Get the total count of all requests
--counts Get the count of pending, approved, and available requests
```
- Notes:
- You cannot stack the arguments, e.g. `ombi.py --total --counts` will not work
- One argument must be supplied
### `tautulli.py`
Gathers data from Tautulli and pushes it to InfluxDB. On the initial run it downloads the GeoIP2 database and uses it for location lookups.
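For reference, a minimal sketch of the lookup this script performs once the database is in place (the database path is relative to the script and the IP is arbitrary):

```python
import geoip2.database  # geoip2>=2.9.0, per requirements.txt

# Assumes GeoLite2-City.mmdb has already been downloaded beside the script
reader = geoip2.database.Reader('GeoLite2-City.mmdb')
geodata = reader.city('8.8.8.8')  # any public IP; private IPs raise AddressNotFoundError
print(geodata.city.name, geodata.location.latitude, geodata.location.longitude)
```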
## Notes
The Python scripts are currently run via crontab. Examples:
```sh
### Modify paths as appropriate. python3 is located in different places for different users. (`which python3` will give you the path)
### to edit your crontab entry, do not modify /var/spool/cron/crontabs/<user> directly, use `crontab -e`
### Crontabs require an empty line at the end or they WILL NOT run. Make sure to have 2 lines to be safe
### It is bad practice to run any cronjob more than once a minute. For timing help: https://crontab.guru/
* * * * * /usr/bin/python3 /path-to-grafana-scripts/ombi.py --total
* * * * * /usr/bin/python3 /path-to-grafana-scripts/tautulli.py
* * * * * /usr/bin/python3 /path-to-grafana-scripts/radarr.py --queue
* * * * * /usr/bin/python3 /path-to-grafana-scripts/sonarr.py --queue
*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/radarr.py --missing
*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/sonarr.py --missing
*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/sickrage.py
```

Varken.py (new file, 104 lines)

@@ -0,0 +1,104 @@
import schedule
import threading
import sys
import platform
import distro
from sys import exit
from time import sleep
from os import access, R_OK
from os.path import isdir, abspath, dirname, join
from argparse import ArgumentParser, RawTextHelpFormatter
from varken.iniparser import INIParser
from varken.sonarr import SonarrAPI
from varken.tautulli import TautulliAPI
from varken.radarr import RadarrAPI
from varken.ombi import OmbiAPI
from varken.dbmanager import DBManager
from varken.varkenlogger import VarkenLogger
PLATFORM_LINUX_DISTRO = ' '.join(x for x in distro.linux_distribution() if x)
def threaded(job):
# Run each scheduled job in its own thread so a slow API call
# cannot block the schedule loop or other jobs
thread = threading.Thread(target=job)
thread.start()
if __name__ == "__main__":
parser = ArgumentParser(prog='varken',
description='Command-line utility to aggregate data from the plex ecosystem into InfluxDB',
formatter_class=RawTextHelpFormatter)
parser.add_argument("-d", "--data-folder", help='Define an alternate data folder location')
parser.add_argument("-D", "--debug", action='store_true', help='Use to enable DEBUG logging')
opts = parser.parse_args()
DATA_FOLDER = abspath(join(dirname(__file__), 'data'))
if opts.data_folder:
ARG_FOLDER = opts.data_folder
if isdir(ARG_FOLDER):
DATA_FOLDER = ARG_FOLDER
if not access(ARG_FOLDER, R_OK):
exit("Read permission error for {}".format(ARG_FOLDER))
else:
exit("{} does not exist".format(ARG_FOLDER))
# Initiate the logger
vl = VarkenLogger(data_folder=DATA_FOLDER, debug=opts.debug)
vl.logger.info('Starting Varken...')
vl.logger.info(u"{} {} ({}{})".format(
platform.system(), platform.release(), platform.version(),
' - {}'.format(PLATFORM_LINUX_DISTRO) if PLATFORM_LINUX_DISTRO else ''
))
vl.logger.info(u"Python {}".format(sys.version))
CONFIG = INIParser(DATA_FOLDER)
DBMANAGER = DBManager(CONFIG.influx_server)
if CONFIG.sonarr_enabled:
for server in CONFIG.sonarr_servers:
SONARR = SonarrAPI(server, DBMANAGER)
if server.queue:
schedule.every(server.queue_run_seconds).seconds.do(threaded, SONARR.get_queue)
if server.missing_days > 0:
schedule.every(server.missing_days_run_seconds).seconds.do(threaded, SONARR.get_missing)
if server.future_days > 0:
schedule.every(server.future_days_run_seconds).seconds.do(threaded, SONARR.get_future)
if CONFIG.tautulli_enabled:
for server in CONFIG.tautulli_servers:
TAUTULLI = TautulliAPI(server, DBMANAGER)
if server.get_activity:
schedule.every(server.get_activity_run_seconds).seconds.do(threaded, TAUTULLI.get_activity)
if CONFIG.radarr_enabled:
for server in CONFIG.radarr_servers:
RADARR = RadarrAPI(server, DBMANAGER)
if server.get_missing:
schedule.every(server.get_missing_run_seconds).seconds.do(threaded, RADARR.get_missing)
if server.queue:
schedule.every(server.queue_run_seconds).seconds.do(threaded, RADARR.get_queue)
if CONFIG.ombi_enabled:
for server in CONFIG.ombi_servers:
OMBI = OmbiAPI(server, DBMANAGER)
if server.request_type_counts:
schedule.every(server.request_type_run_seconds).seconds.do(threaded, OMBI.get_request_counts)
if server.request_total_counts:
schedule.every(server.request_total_run_seconds).seconds.do(threaded, OMBI.get_total_requests)
# Run all on startup
SERVICES_ENABLED = [CONFIG.ombi_enabled, CONFIG.radarr_enabled, CONFIG.tautulli_enabled, CONFIG.sonarr_enabled]
if not any(SERVICES_ENABLED):
exit("All services disabled. Exiting")
schedule.run_all()
while True:
schedule.run_pending()
sleep(1)
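For readers new to the `schedule` library driving the loop above, here is a stripped-down sketch of the same pattern; the 30-second interval and the `collect` job are illustrative stand-ins for the real collectors (e.g. `SONARR.get_queue`).

```python
import threading
from time import sleep

import schedule  # schedule>=0.5.0, per requirements.txt


def threaded(job):
    # Fire-and-forget: a slow API call never blocks the scheduler loop
    threading.Thread(target=job).start()


def collect():
    print('polling an API...')  # illustrative job body


schedule.every(30).seconds.do(threaded, collect)
schedule.run_all()          # run every job once at startup, as Varken does
while True:
    schedule.run_pending()  # then fire jobs as their intervals come due
    sleep(1)
```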

configuration.example.py (deleted, 53 lines)

@@ -1,53 +0,0 @@
'''
Notes:
- Domains should be either http(s)://subdomain.domain.com or http(s)://domain.com/url_suffix
- Sonarr + Radarr scripts support multiple servers. You can remove the second
server by putting a # in front of the line.
- tautulli_failback_ip: used when there is no IP listed in Tautulli,
which can happen when you are streaming locally. This is usually your public IP.
'''
########################### INFLUXDB CONFIG ###########################
influxdb_url = 'influxdb.domain.tld'
influxdb_port = 8086
influxdb_username = ''
influxdb_password = ''
############################ SONARR CONFIG ############################
sonarr_server_list = [
('https://sonarr1.domain.tld', 'xxxxxxxxxxxxxxx', '1'),
('https://sonarr2.domain.tld', 'xxxxxxxxxxxxxxx', '2'),
#('https://sonarr3.domain.tld', 'xxxxxxxxxxxxxxx', '3')
]
sonarr_influxdb_db_name = 'plex'
sonarr_verify_ssl = True
############################ RADARR CONFIG ############################
radarr_server_list = [
('https://radarr1.domain.tld', 'xxxxxxxxxxxxxxx', '1'),
('https://radarr2.domain.tld', 'xxxxxxxxxxxxxxx', '2'),
#('https://radarr3.domain.tld', 'xxxxxxxxxxxxxxx', '3')
]
radarr_influxdb_db_name = 'plex'
radarr_verify_ssl = True
############################ OMBI CONFIG ##############################
ombi_url = 'https://ombi.domain.tld'
ombi_api_key = 'xxxxxxxxxxxxxxx'
ombi_influxdb_db_name = 'plex'
ombi_verify_ssl = True
########################## TAUTULLI CONFIG ############################
tautulli_url = 'https://tautulli.domain.tld'
tautulli_api_key = 'xxxxxxxxxxxxxxx'
tautulli_failback_ip = ''
tautulli_influxdb_db_name = 'plex'
tautulli_verify_ssl = True
########################## FIREWALL CONFIG ############################
asa_url = 'https://firewall.domain.tld'
asa_username = 'cisco'
asa_password = 'cisco'
asa_influxdb_db_name = 'asa'


@@ -1,11 +0,0 @@
### Modify paths as appropriate. python3 is located in different places for different users. (`which python3` will give you the path)
### to edit your crontab entry, do not modify /var/spool/cron/crontabs/<user> directly, use `crontab -e`
### Crontabs require an empty line at the end or they WILL NOT run. Make sure to have 2 lines to be safe
###
* * * * * /usr/bin/python3 /path-to-grafana-scripts/ombi.py
* * * * * ( sleep 30 ; /usr/bin/python3 /path-to-grafana-scripts/ombi.py )
* * * * * /usr/bin/python3 /path-to-grafana-scripts/tautulli.py
* * * * * ( sleep 30 ; /usr/bin/python3 /path-to-grafana-scripts/tautulli.py )
*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/radarr.py
*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/sonarr.py
#*/30 * * * * /usr/bin/python3 /path-to-grafana-scripts/sickrage.py

data/varken.example.ini (new file, 90 lines)

@@ -0,0 +1,90 @@
# Notes:
# - Sonarr + Radarr support multiple servers. You can remove the second
# server by putting a # in front of its lines and section name, and removing
# that number from your server_ids list
# - fallback_ip: used when there is no IP listed in Tautulli,
# which can happen when you are streaming locally. This is usually your public IP.
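# Example (illustrative): a service is disabled entirely by setting its id list
# to a false-y value, e.g. `ombi_server_ids = false` (accepted: false, no, 0).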
[global]
sonarr_server_ids = 1,2
radarr_server_ids = 1,2
tautulli_server_ids = 1
ombi_server_ids = 1
asa = false
[influxdb]
url = influxdb.domain.tld
port = 8086
username =
password =
[tautulli-1]
url = tautulli.domain.tld
fallback_ip = 0.0.0.0
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = true
get_activity = true
get_activity_run_seconds = 30
[sonarr-1]
url = sonarr1.domain.tld
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = true
missing_days = 7
missing_days_run_seconds = 300
future_days = 1
future_days_run_seconds = 300
queue = true
queue_run_seconds = 300
[sonarr-2]
url = sonarr2.domain.tld
apikey = yyyyyyyyyyyyyyyy
ssl = false
verify_ssl = true
missing_days = 7
missing_days_run_seconds = 300
future_days = 1
future_days_run_seconds = 300
queue = true
queue_run_seconds = 300
[radarr-1]
url = radarr1.domain.tld
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = true
queue = true
queue_run_seconds = 300
get_missing = true
get_missing_run_seconds = 300
[radarr-2]
url = radarr2.domain.tld
apikey = yyyyyyyyyyyyyyyy
ssl = false
verify_ssl = true
queue = true
queue_run_seconds = 300
get_missing = true
get_missing_run_seconds = 300
[ombi-1]
url = ombi.domain.tld
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = true
get_request_type_counts = true
request_type_run_seconds = 300
get_request_total_counts = true
request_total_run_seconds = 300
[asa]
url = firewall.domain.tld
username = cisco
password = cisco
influx_db = asa
ssl = false
verify_ssl = true

ombi.py (deleted, 104 lines)

@@ -1,104 +0,0 @@
# Do not edit this script. Edit configuration.py
import sys
import requests
from datetime import datetime, timezone
from influxdb import InfluxDBClient
import argparse
from argparse import RawTextHelpFormatter
import configuration
headers = {'Apikey': configuration.ombi_api_key}
def now_iso():
now_iso = datetime.now(timezone.utc).astimezone().isoformat()
return now_iso
def influx_sender(influx_payload):
influx = InfluxDBClient(configuration.influxdb_url,
configuration.influxdb_port,
configuration.influxdb_username,
configuration.influxdb_password,
configuration.ombi_influxdb_db_name)
influx.write_points(influx_payload)
def get_total_requests():
get_tv_requests = requests.get('{}/api/v1/Request/tv'.format(
configuration.ombi_url), headers=headers,
verify=configuration.ombi_verify_ssl).json()
get_movie_requests = requests.get('{}/api/v1/Request/movie'.format(
configuration.ombi_url), headers=headers,
verify=configuration.ombi_verify_ssl).json()
count_movie_requests = 0
count_tv_requests = 0
for show in get_tv_requests:
count_tv_requests += 1
for movie in get_movie_requests:
count_movie_requests += 1
influx_payload = [
{
"measurement": "Ombi",
"tags": {
"type": "Request_Total"
},
"time": now_iso(),
"fields": {
"total": count_movie_requests + count_tv_requests
}
}
]
return influx_payload
def get_request_counts():
get_request_counts = requests.get('{}/api/v1/Request/count'.format(
configuration.ombi_url), headers=headers,
verify=configuration.ombi_verify_ssl).json()
influx_payload = [
{
"measurement": "Ombi",
"tags": {
"type": "Request_Counts"
},
"time": now_iso(),
"fields": {
"pending": int(get_request_counts['pending']),
"approved": int(get_request_counts['approved']),
"available": int(get_request_counts['available'])
}
}
]
return influx_payload
if __name__ == "__main__":
parser = argparse.ArgumentParser(prog='Ombi stats operations',
description='Script to aid in data gathering from Ombi',
formatter_class=RawTextHelpFormatter)
parser.add_argument("--total", action='store_true',
help='Get the total count of all requests')
parser.add_argument("--counts", action='store_true',
help='Get the count of pending, approved, and available requests')
opts = parser.parse_args()
if opts.total:
influx_sender(get_total_requests())
elif opts.counts:
influx_sender(get_request_counts())
elif len(sys.argv) == 1:
parser.print_help(sys.stderr)
sys.exit(1)

radarr.py (deleted, 191 lines)

@@ -1,191 +0,0 @@
# Do not edit this script. Edit configuration.py
import sys
import requests
from datetime import datetime, timezone
from influxdb import InfluxDBClient
import argparse
from argparse import RawTextHelpFormatter
import configuration
def now_iso():
now_iso = datetime.now(timezone.utc).astimezone().isoformat()
return now_iso
def influx_sender(influx_payload):
influx = InfluxDBClient(configuration.influxdb_url,
configuration.influxdb_port,
configuration.influxdb_username,
configuration.influxdb_password,
configuration.radarr_influxdb_db_name)
influx.write_points(influx_payload)
def get_missing_movies():
# Set the time here so we have one timestamp to work with
now = now_iso()
missing = []
influx_payload = []
for radarr_url, radarr_api_key, server_id in configuration.radarr_server_list:
headers = {'X-Api-Key': radarr_api_key}
get_movies = requests.get('{}/api/movie'.format(radarr_url),
headers=headers,
verify=configuration.radarr_verify_ssl).json()
movies = {d['tmdbId']: d for d in get_movies}
for movie in movies.keys():
if not movies[movie]['downloaded']:
movie_name = ('{} ({})'.format(movies[movie]['title'],
movies[movie]['year']))
missing.append((movie_name, movies[movie]['tmdbId']))
for movie, id in missing:
influx_payload.append(
{
"measurement": "Radarr",
"tags": {
"type": "Missing",
"tmdbId": id,
"server": server_id
},
"time": now,
"fields": {
"name": movie
}
}
)
# Empty missing or else things get foo bared
missing = []
return influx_payload
def get_missing_avl():
# Set the time here so we have one timestamp to work with
now = now_iso()
missing = []
influx_payload = []
for radarr_url, radarr_api_key, server_id in configuration.radarr_server_list:
headers = {'X-Api-Key': radarr_api_key}
get_movies = requests.get('{}/api/movie'.format(radarr_url),
headers=headers,
verify=configuration.radarr_verify_ssl).json()
movies = {d['tmdbId']: d for d in get_movies}
for movie in movies.keys():
if not movies[movie]['downloaded']:
if movies[movie]['isAvailable'] is True:
movie_name = ('{} ({})'.format(movies[movie]['title'],
movies[movie]['year']))
missing.append((movie_name, movies[movie]['tmdbId']))
for movie, id in missing:
influx_payload.append(
{
"measurement": "Radarr",
"tags": {
"type": "Missing_Available",
"tmdbId": id,
"server": server_id
},
"time": now,
"fields": {
"name": movie,
}
}
)
# Empty missing or else things get foo bared
missing = []
return influx_payload
def get_queue_movies():
# Set the time here so we have one timestamp to work with
now = now_iso()
influx_payload = []
queue = []
for radarr_url, radarr_api_key, server_id in configuration.radarr_server_list:
headers = {'X-Api-Key': radarr_api_key}
get_movies = requests.get('{}/api/queue'.format(radarr_url),
headers=headers,
verify=configuration.radarr_verify_ssl).json()
queue_movies = {d['id']: d for d in get_movies}
for movie in queue_movies.keys():
name = '{} ({})'.format(queue_movies[movie]['movie']['title'],
queue_movies[movie]['movie']['year'])
quality = (queue_movies[movie]['quality']['quality']['name'])
protocol = (queue_movies[movie]['protocol'].upper())
if protocol == 'USENET':
protocol_id = 1
else:
protocol_id = 0
queue.append((name, queue_movies[movie]['id']))
for movie, id in queue:
influx_payload.append(
{
"measurement": "Radarr",
"tags": {
"type": "Queue",
"tmdbId": id,
"server": server_id
},
"time": now,
"fields": {
"name": movie,
"quality": quality,
"protocol": protocol,
"protocol_id": protocol_id
}
}
)
# Empty queue or else things get foo bared
queue = []
return influx_payload
if __name__ == "__main__":
parser = argparse.ArgumentParser(prog='Radarr stats operations',
description='Script to aid in data gathering from Radarr',
formatter_class=RawTextHelpFormatter)
parser.add_argument("--missing", action='store_true',
help='Get missing movies')
parser.add_argument("--missing_avl", action='store_true',
help='Get missing yet available movies')
parser.add_argument("--queue", action='store_true',
help='Get movies in queue')
opts = parser.parse_args()
if opts.missing:
influx_sender(get_missing_movies())
elif opts.missing_avl:
influx_sender(get_missing_avl())
elif opts.queue:
influx_sender(get_queue_movies())
elif len(sys.argv) == 1:
parser.print_help(sys.stderr)
sys.exit(1)

requirements.txt

@@ -2,6 +2,8 @@
# Potential requirements.
# pip3 install -r requirements.txt
#---------------------------------------------------------
requests
geoip2
influxdb
requests>=2.20.1
geoip2>=2.9.0
influxdb>=5.2.0
schedule>=0.5.0
distro>=1.3.0

sonarr.py (deleted, 356 lines)

@@ -1,356 +0,0 @@
# Do not edit this script. Edit configuration.py
import sys
import requests
from datetime import datetime, timezone, date, timedelta
from influxdb import InfluxDBClient
import argparse
from argparse import RawTextHelpFormatter
import configuration
def now_iso():
now_iso = datetime.now(timezone.utc).astimezone().isoformat()
return now_iso
def influx_sender(influx_payload):
influx = InfluxDBClient(configuration.influxdb_url,
configuration.influxdb_port,
configuration.influxdb_username,
configuration.influxdb_password,
configuration.sonarr_influxdb_db_name)
influx.write_points(influx_payload)
def get_all_missing_shows():
# Set the time here so we have one timestamp to work with
now = now_iso()
missing = []
influx_payload = []
for sonarr_url, sonarr_api_key, server_id in configuration.sonarr_server_list:
headers = {'X-Api-Key': sonarr_api_key}
get_tv_shows = requests.get('{}/api/wanted/missing/?pageSize=1000'.format(sonarr_url),
headers=headers,
verify=configuration.sonarr_verify_ssl).json()['records']
tv_shows = {d['id']: d for d in get_tv_shows}
for show in tv_shows.keys():
series_title = '{}'.format(tv_shows[show]['series']['title'])
sxe = 'S{:0>2}E{:0>2}'.format(tv_shows[show]['seasonNumber'],
tv_shows[show]['episodeNumber'])
missing.append((series_title, sxe, tv_shows[show]['id'],
tv_shows[show]['title']))
for series_title, sxe, id, episode_title in missing:
influx_payload.append(
{
"measurement": "Sonarr",
"tags": {
"type": "Missing",
"sonarrId": id,
"server": server_id
},
"time": now,
"fields": {
"name": series_title,
"epname": episode_title,
"sxe": sxe
}
}
)
# Empty missing or else things get foo bared
missing = []
return influx_payload
def get_missing_shows(days_past):
# Set the time here so we have one timestamp to work with
now = now_iso()
last_days = str(date.today()+timedelta(days=-days_past))
today = str(date.today())
missing = []
influx_payload = []
for sonarr_url, sonarr_api_key, server_id in configuration.sonarr_server_list:
headers = {'X-Api-Key': sonarr_api_key}
get_tv_shows = requests.get('{}/api/calendar/?start={}&end={}&pageSize=1000'
.format(sonarr_url, last_days, today),
headers=headers,
verify=configuration.sonarr_verify_ssl).json()
tv_shows = {d['id']: d for d in get_tv_shows}
for show in tv_shows.keys():
if not (tv_shows[show]['hasFile']):
series_title = '{}'.format(tv_shows[show]['series']['title'])
sxe = 'S{:0>2}E{:0>2}'.format(tv_shows[show]['seasonNumber'],
tv_shows[show]['episodeNumber'])
air_date = (tv_shows[show]['airDate'])
missing.append((series_title, sxe, air_date, tv_shows[show]['id']))
for series_title, sxe, air_date, id in missing:
influx_payload.append(
{
"measurement": "Sonarr",
"tags": {
"type": "Missing_Days",
"sonarrId": id,
"server": server_id
},
"time": now,
"fields": {
"name": series_title,
"sxe": sxe,
"airs": air_date
}
}
)
# Empty missing or else things get foo bared
missing = []
return influx_payload
def get_upcoming_shows():
# Set the time here so we have one timestamp to work with
now = now_iso()
upcoming = []
influx_payload = []
for sonarr_url, sonarr_api_key, server_id in configuration.sonarr_server_list:
headers = {'X-Api-Key': sonarr_api_key}
get_upcoming_shows = requests.get('{}/api/calendar/'.format(sonarr_url),
headers=headers,
verify=configuration.sonarr_verify_ssl).json()
upcoming_shows = {d['id']: d for d in get_upcoming_shows}
for show in upcoming_shows.keys():
series_title = '{}'.format(upcoming_shows[show]['series']['title'])
sxe = 'S{:0>2}E{:0>2}'.format(upcoming_shows[show]['seasonNumber'],
upcoming_shows[show]['episodeNumber'])
upcoming.append((series_title, sxe,
upcoming_shows[show]['id'],
upcoming_shows[show]['title'],
upcoming_shows[show]['airDate']))
for series_title, sxe, id, episode_title, air_date in upcoming:
influx_payload.append(
{
"measurement": "Sonarr",
"tags": {
"type": "Soon",
"sonarrId": id,
"server": server_id
},
"time": now,
"fields": {
"name": series_title,
"epname": episode_title,
"sxe": sxe,
"airs": air_date
}
}
)
# Empty upcoming or else things get foo bared
upcoming = []
return influx_payload
def get_future_shows(future_days):
# Set the time here so we have one timestamp to work with
now = now_iso()
today = str(date.today())
future = str(date.today()+timedelta(days=future_days))
air_days = []
downloaded = []
influx_payload = []
for sonarr_url, sonarr_api_key, server_id in configuration.sonarr_server_list:
headers = {'X-Api-Key': sonarr_api_key}
get_tv_shows = requests.get('{}/api/calendar/?start={}&end={}&pageSize=200'
.format(sonarr_url, today, future),
headers=headers,
verify=configuration.sonarr_verify_ssl).json()
tv_shows = {d['id']: d for d in get_tv_shows}
for show in tv_shows.keys():
series_title = '{}'.format(tv_shows[show]['series']['title'])
dl_status = int(tv_shows[show]['hasFile'])
sxe = 'S{:0>2}E{:0>2}'.format(tv_shows[show]['seasonNumber'],
tv_shows[show]['episodeNumber'])
air_days.append((series_title, dl_status, sxe,
tv_shows[show]['title'],
tv_shows[show]['airDate'],
tv_shows[show]['id']))
for series_title, dl_status, sxe, episode_title, air_date, id in air_days:
influx_payload.append(
{
"measurement": "Sonarr",
"tags": {
"type": "Future",
"sonarrId": id,
"server": server_id
},
"time": now,
"fields": {
"name": series_title,
"epname": episode_title,
"sxe": sxe,
"airs": air_date,
"downloaded": dl_status
}
}
)
# Empty air_days or else things get foo bared
air_days = []
return influx_payload
def get_queue_shows():
# Set the time here so we have one timestamp to work with
now = now_iso()
queue = []
downloaded = []
influx_payload = []
for sonarr_url, sonarr_api_key, server_id in configuration.sonarr_server_list:
headers = {'X-Api-Key': sonarr_api_key}
get_tv_shows = requests.get('{}/api/queue'.format(sonarr_url),
headers=headers,
verify=configuration.sonarr_verify_ssl).json()
tv_shows = {d['id']: d for d in get_tv_shows}
for show in tv_shows.keys():
series_title = '{}'.format(tv_shows[show]['series']['title'])
episode_title = '{}'.format(tv_shows[show]['episode']['title'])
protocol = (tv_shows[show]['protocol'].upper())
sxe = 'S{:0>2}E{:0>2}'.format(tv_shows[show]['episode']['seasonNumber'],
tv_shows[show]['episode']['episodeNumber'])
if protocol == 'USENET':
protocol_id = 1
else:
protocol_id = 0
queue.append((series_title, episode_title, protocol,
protocol_id, sxe, tv_shows[show]['id']))
for series_title, episode_title, protocol, protocol_id, sxe, id in queue:
influx_payload.append(
{
"measurement": "Sonarr",
"tags": {
"type": "Queue",
"sonarrId": id,
"server": server_id
},
"time": now,
"fields": {
"name": series_title,
"epname": episode_title,
"sxe": sxe,
"protocol": protocol,
"protocol_id": protocol_id
}
}
)
# Empty queue or else things get foo bared
queue = []
return influx_payload
if __name__ == "__main__":
parser = argparse.ArgumentParser(prog='Sonarr stats operations',
description='Script to aid in data gathering from Sonarr',
formatter_class=RawTextHelpFormatter)
parser.add_argument("--missing", action='store_true',
help='Get all missing TV shows')
parser.add_argument("--missing_days", type=int,
help='Get missing TV shows in past X days')
parser.add_argument("--upcoming", action='store_true',
help='Get upcoming TV shows')
parser.add_argument("--future", type=int,
help='Get TV shows X days into the future. Includes today.'
'\ni.e. --future 2 is Today and Tomorrow')
parser.add_argument("--queue", action='store_true',
help='Get TV shows in queue')
opts = parser.parse_args()
if opts.missing:
influx_sender(get_all_missing_shows())
elif opts.missing_days:
influx_sender(get_missing_shows(opts.missing_days))
elif opts.upcoming:
influx_sender(get_upcoming_shows())
elif opts.future:
influx_sender(get_future_shows(opts.future))
elif opts.queue:
influx_sender(get_queue_shows())
elif len(sys.argv) == 1:
parser.print_help(sys.stderr)
sys.exit(1)

tautulli.py (deleted, 184 lines)

@@ -1,184 +0,0 @@
import os
import tarfile
import urllib.request
import time
import geoip2.database
import requests
import configuration
from geoip2.errors import AddressNotFoundError
from influxdb import InfluxDBClient
from datetime import datetime, timezone
CURRENT_TIME = datetime.now(timezone.utc).astimezone().isoformat()
PAYLOAD = {'apikey': configuration.tautulli_api_key, 'cmd': 'get_activity'}
ACTIVITY = requests.get('{}/api/v2'.format(configuration.tautulli_url.rstrip('/')),
params=PAYLOAD,
verify=configuration.tautulli_verify_ssl
).json()['response']['data']
SESSIONS = {d['session_id']: d for d in ACTIVITY['sessions']}
TAR_DBFILE = '{}/GeoLite2-City.tar.gz'.format(os.path.dirname(os.path.realpath(__file__)))
DBFILE = '{}/GeoLite2-City.mmdb'.format(os.path.dirname(os.path.realpath(__file__)))
NOW = time.time()
DB_AGE = NOW - (86400 * 35)
#remove the running db file if it is older than 35 days
try:
t = os.stat(DBFILE)
c = t.st_ctime
if c < DB_AGE:
os.remove(DBFILE)
except FileNotFoundError:
pass
def geo_lookup(ipaddress):
"""Lookup an IP using the local GeoLite2 DB"""
if not os.path.isfile(DBFILE):
urllib.request.urlretrieve(
'http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz',
TAR_DBFILE)
tar = tarfile.open(TAR_DBFILE, "r:gz")
for files in tar.getmembers():
if 'GeoLite2-City.mmdb' in files.name:
files.name = os.path.basename(files.name)
tar.extract(files, '{}/'.format(os.path.dirname(os.path.realpath(__file__))))
reader = geoip2.database.Reader(DBFILE)
return reader.city(ipaddress)
INFLUX_PAYLOAD = [
{
"measurement": "Tautulli",
"tags": {
"type": "stream_count"
},
"time": CURRENT_TIME,
"fields": {
"current_streams": int(ACTIVITY['stream_count']),
"transcode_streams": int(ACTIVITY['stream_count_transcode']),
"direct_play_streams": int(ACTIVITY['stream_count_direct_play']),
"direct_streams": int(ACTIVITY['stream_count_direct_stream'])
}
}
]
for session in SESSIONS.keys():
try:
geodata = geo_lookup(SESSIONS[session]['ip_address_public'])
except (ValueError, AddressNotFoundError):
if configuration.tautulli_failback_ip:
geodata = geo_lookup(configuration.tautulli_failback_ip)
else:
geodata = geo_lookup(requests.get('http://ip.42.pl/raw').text)
if not geodata.location.latitude:
latitude = 37.234332396
else:
latitude = geodata.location.latitude
if not geodata.location.longitude:
longitude = -115.80666344
else:
longitude = geodata.location.longitude
decision = SESSIONS[session]['transcode_decision']
if decision == 'copy':
decision = 'direct stream'
video_decision = SESSIONS[session]['stream_video_decision']
if video_decision == 'copy':
video_decision = 'direct stream'
elif video_decision == '':
video_decision = 'Music'
quality = SESSIONS[session]['stream_video_resolution']
# If the video resolution is empty, assume it's an audio stream
# and use the container for music
if not quality:
quality = SESSIONS[session]['container'].upper()
elif quality in ('SD', 'sd'):
quality = SESSIONS[session]['stream_video_resolution'].upper()
elif quality in '4k':
quality = SESSIONS[session]['stream_video_resolution'].upper()
else:
quality = '{}p'.format(SESSIONS[session]['stream_video_resolution'])
# Translate player_state to integers so we can colorize the table
player_state = SESSIONS[session]['state'].lower()
if player_state == 'playing':
player_state = 0
elif player_state == 'paused':
player_state = 1
elif player_state == 'buffering':
player_state = 3
INFLUX_PAYLOAD.append(
{
"measurement": "Tautulli",
"tags": {
"type": "Session",
"session_id": SESSIONS[session]['session_id'],
"name": SESSIONS[session]['friendly_name'],
"title": SESSIONS[session]['full_title'],
"platform": SESSIONS[session]['platform'],
"product_version": SESSIONS[session]['product_version'],
"quality": quality,
"video_decision": video_decision.title(),
"transcode_decision": decision.title(),
"media_type": SESSIONS[session]['media_type'].title(),
"audio_codec": SESSIONS[session]['audio_codec'].upper(),
"audio_profile": SESSIONS[session]['audio_profile'].upper(),
"stream_audio_codec": SESSIONS[session]['stream_audio_codec'].upper(),
"quality_profile": SESSIONS[session]['quality_profile'],
"progress_percent": SESSIONS[session]['progress_percent'],
"region_code": geodata.subdivisions.most_specific.iso_code,
"location": geodata.city.name,
"full_location": '{} - {}'.format(geodata.subdivisions.most_specific.name,
geodata.city.name),
"latitude": latitude,
"longitude": longitude,
"player_state": player_state,
"device_type": SESSIONS[session]['platform']
},
"time": CURRENT_TIME,
"fields": {
"session_id": SESSIONS[session]['session_id'],
"session_key": SESSIONS[session]['session_key']
}
}
)
INFLUX_SENDER = InfluxDBClient(configuration.influxdb_url,
configuration.influxdb_port,
configuration.influxdb_username,
configuration.influxdb_password,
configuration.tautulli_influxdb_db_name)
INFLUX_SENDER.write_points(INFLUX_PAYLOAD)

varken.systemd (new file, 53 lines)

@@ -0,0 +1,53 @@
# Varken - Command-line utility to aggregate data from the Plex ecosystem into InfluxDB.
#
# Service Unit file for systemd system manager
#
# INSTALLATION NOTES
#
# 1. Copy this file into your systemd service unit directory (often '/lib/systemd/system')
# and name it 'varken.service' with the following command:
# cp /opt/Varken/varken.systemd /lib/systemd/system/varken.service
#
# 2. Edit the new varken.service file with configuration settings as required.
# More details in the "CONFIGURATION NOTES" section shown below.
#
# 3. Enable boot-time autostart with the following commands:
# systemctl daemon-reload
# systemctl enable varken.service
#
# 4. Start now with the following command:
# systemctl start varken.service
#
# CONFIGURATION NOTES
#
# - The example settings in this file assume that you will run varken as user: varken
# - The example settings in this file assume that varken is installed to: /opt/Varken
#
# - To create this user and give it ownership of the Varken directory:
# Ubuntu/Debian: sudo addgroup varken && sudo adduser --system --no-create-home varken --ingroup varken
# CentOS/Fedora: sudo adduser --system --no-create-home varken
# sudo chown varken:varken -R /opt/Varken
#
# - Adjust User= and Group= to the user/group you want Varken to run as.
#
# - WantedBy= specifies which target (i.e. runlevel) to start Varken for.
# multi-user.target equates to runlevel 3 (multi-user text mode)
# graphical.target equates to runlevel 5 (multi-user X11 graphical mode)
[Unit]
Description=Varken - Command-line utility to aggregate data from the Plex ecosystem into InfluxDB.
After=network-online.target
StartLimitInterval=200
StartLimitBurst=3
[Service]
Type=simple
User=varken
Group=varken
WorkingDirectory=/opt/Varken
ExecStart=/opt/Varken/varken-venv/bin/python /opt/Varken/Varken.py
Restart=always
RestartSec=30
[Install]
WantedBy=multi-user.target

varken/__init__.py (new empty file)

varken/dbmanager.py (new file, 21 lines)

@@ -0,0 +1,21 @@
import logging
from influxdb import InfluxDBClient
logger = logging.getLogger('varken')
class DBManager(object):
def __init__(self, server):
self.server = server
self.influx = InfluxDBClient(self.server.url, self.server.port, self.server.username, self.server.password,
'varken')
databases = [db['name'] for db in self.influx.get_list_database()]
if 'varken' not in databases:
self.influx.create_database('varken')
self.influx.create_retention_policy('varken 30d/1h', '30d', '1', 'varken', False, '1h')
def write_points(self, data):
logger.debug('Writing Data to InfluxDB {}'.format(data))
self.influx.write_points(data)
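For context, a sketch of the point format the collectors hand to `write_points`; the tag and field values below are illustrative (compare the payloads built in varken/sonarr.py).

```python
example_payload = [
    {
        "measurement": "Sonarr",
        "tags": {"type": "Queue", "server": 1, "name": "Show Title"},
        "time": "2018-12-05T19:47:38+00:00",
        "fields": {"hash": "9e107d9d372bb6826bd81d3542a419d6"},
    }
]
# DBManager(CONFIG.influx_server).write_points(example_payload)
```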

varken/helpers.py (new file, 88 lines)

@@ -0,0 +1,88 @@
import os
import time
import tarfile
import hashlib
import geoip2.database
import logging
from functools import update_wrapper
from json.decoder import JSONDecodeError
from os.path import abspath, join
from requests.exceptions import InvalidSchema, SSLError
from urllib.request import urlretrieve
logger = logging.getLogger('varken')
def geoip_download():
tar_dbfile = abspath(join('.', 'data', 'GeoLite2-City.tar.gz'))
url = 'http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz'
urlretrieve(url, tar_dbfile)
tar = tarfile.open(tar_dbfile, 'r:gz')
for files in tar.getmembers():
if 'GeoLite2-City.mmdb' in files.name:
files.name = os.path.basename(files.name)
tar.extract(files, abspath(join('.', 'data')))
os.remove(tar_dbfile)
def geo_lookup(ipaddress):
dbfile = abspath(join('.', 'data', 'GeoLite2-City.mmdb'))
now = time.time()
try:
dbinfo = os.stat(dbfile)
db_age = now - dbinfo.st_ctime
if db_age > (35 * 86400):
os.remove(dbfile)
geoip_download()
except FileNotFoundError:
geoip_download()
reader = geoip2.database.Reader(dbfile)
return reader.city(ipaddress)
def hashit(string):
encoded = string.encode()
hashed = hashlib.md5(encoded).hexdigest()
return hashed
def connection_handler(session, request, verify):
s = session
r = request
v = verify
return_json = False
try:
get = s.send(r, verify=v)
if get.status_code == 401:
logger.info('Your API key is incorrect for {}'.format(r.url))
elif get.status_code == 404:
logger.info('This URL does not resolve: {}'.format(r.url))
elif get.status_code == 200:
try:
return_json = get.json()
except JSONDecodeError:
logger.error('No JSON response... BORKED! Let us know in Discord')
except InvalidSchema:
logger.error('You added http(s):// in the config file. Don\'t do that.')
except SSLError as e:
logger.error('Either your host is unreachable or you have an SSL issue. : %s', e)
return return_json
def mkdir_p(path):
"""http://stackoverflow.com/a/600612/190597 (tzot)"""
try:
logger.info('Creating folder %s ', path)
os.makedirs(path, exist_ok=True)
except Exception as e:
logger.error('Could not create folder %s : %s ', path, e)
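A hedged usage sketch of `connection_handler` and `hashit` as the API wrappers use them; the URL and API key are placeholders.

```python
from requests import Session, Request

from varken.helpers import connection_handler, hashit

session = Session()
session.headers = {'Apikey': 'xxxxxxxxxxxxxxxx'}  # placeholder key
req = session.prepare_request(Request('GET', 'http://ombi.domain.tld/api/v1/Request/count'))
data = connection_handler(session, req, True)  # parsed JSON on HTTP 200, False otherwise
if data:
    # md5 of server id plus identifying text, used as the single InfluxDB field
    print(hashit('{}{}'.format(1, 'some identifying text')))
```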

varken/iniparser.py (new file, 186 lines)

@@ -0,0 +1,186 @@
import sys
import configparser
import logging
from sys import exit
from os.path import join, exists
from varken.structures import SonarrServer, RadarrServer, OmbiServer, TautulliServer, InfluxServer
logger = logging.getLogger()
class INIParser(object):
def __init__(self, data_folder):
self.config = configparser.ConfigParser()
self.data_folder = data_folder
self.influx_server = InfluxServer()
self.sonarr_enabled = False
self.sonarr_servers = []
self.radarr_enabled = False
self.radarr_servers = []
self.ombi_enabled = False
self.ombi_servers = []
self.tautulli_enabled = False
self.tautulli_servers = []
self.asa_enabled = False
self.asa = None
self.parse_opts()
def enable_check(self, server_type=None):
t = server_type
global_server_ids = self.config.get('global', t)
if global_server_ids.lower() in ['false', 'no', '0']:
logger.info('%s disabled.', t.upper())
return False
else:
sids = self.clean_check(global_server_ids, t)
return sids
def clean_check(self, server_id_list, server_type=None):
t = server_type
sid_list = server_id_list
cleaned_list = sid_list.replace(' ', '').split(',')
valid_sids = []
for sid in cleaned_list:
try:
valid_sids.append(int(sid))
except ValueError:
logger.error("{} is not a valid server id number".format(sid))
if valid_sids:
logger.info('%s : %s', t.upper(), valid_sids)
return valid_sids
else:
logger.error('No valid %s', t.upper())
return False
def read_file(self):
file_path = join(self.data_folder, 'varken.ini')
if exists(file_path):
with open(file_path) as config_ini:
self.config.read_file(config_ini)
else:
exit('Config file missing (varken.ini) in {}'.format(self.data_folder))
def parse_opts(self):
self.read_file()
# Parse InfluxDB options
url = self.config.get('influxdb', 'url')
port = self.config.getint('influxdb', 'port')
username = self.config.get('influxdb', 'username')
password = self.config.get('influxdb', 'password')
self.influx_server = InfluxServer(url, port, username, password)
# Parse Sonarr options
self.sonarr_enabled = self.enable_check('sonarr_server_ids')
if self.sonarr_enabled:
sids = self.config.get('global', 'sonarr_server_ids').strip(' ').split(',')
for server_id in sids:
sonarr_section = 'sonarr-' + server_id
url = self.config.get(sonarr_section, 'url')
apikey = self.config.get(sonarr_section, 'apikey')
scheme = 'https://' if self.config.getboolean(sonarr_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(sonarr_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
queue = self.config.getboolean(sonarr_section, 'queue')
missing_days = self.config.getint(sonarr_section, 'missing_days')
future_days = self.config.getint(sonarr_section, 'future_days')
missing_days_run_seconds = self.config.getint(sonarr_section, 'missing_days_run_seconds')
future_days_run_seconds = self.config.getint(sonarr_section, 'future_days_run_seconds')
queue_run_seconds = self.config.getint(sonarr_section, 'queue_run_seconds')
server = SonarrServer(server_id, scheme + url, apikey, verify_ssl, missing_days,
missing_days_run_seconds, future_days, future_days_run_seconds,
queue, queue_run_seconds)
self.sonarr_servers.append(server)
# Parse Radarr options
self.radarr_enabled = self.enable_check('radarr_server_ids')
if self.radarr_enabled:
sids = self.config.get('global', 'radarr_server_ids').strip(' ').split(',')
for server_id in sids:
radarr_section = 'radarr-' + server_id
url = self.config.get(radarr_section, 'url')
apikey = self.config.get(radarr_section, 'apikey')
scheme = 'https://' if self.config.getboolean(radarr_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(radarr_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
queue = self.config.getboolean(radarr_section, 'queue')
queue_run_seconds = self.config.getint(radarr_section, 'queue_run_seconds')
get_missing = self.config.getboolean(radarr_section, 'get_missing')
get_missing_run_seconds = self.config.getint(radarr_section, 'get_missing_run_seconds')
server = RadarrServer(server_id, scheme + url, apikey, verify_ssl, queue, queue_run_seconds,
get_missing, get_missing_run_seconds)
self.radarr_servers.append(server)
# Parse Tautulli options
self.tautulli_enabled = self.enable_check('tautulli_server_ids')
if self.tautulli_enabled:
sids = self.config.get('global', 'tautulli_server_ids').strip(' ').split(',')
for server_id in sids:
tautulli_section = 'tautulli-' + server_id
url = self.config.get(tautulli_section, 'url')
fallback_ip = self.config.get(tautulli_section, 'fallback_ip')
apikey = self.config.get(tautulli_section, 'apikey')
scheme = 'https://' if self.config.getboolean(tautulli_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(tautulli_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
get_activity = self.config.getboolean(tautulli_section, 'get_activity')
get_activity_run_seconds = self.config.getint(tautulli_section, 'get_activity_run_seconds')
server = TautulliServer(server_id, scheme + url, fallback_ip, apikey, verify_ssl, get_activity,
get_activity_run_seconds)
self.tautulli_servers.append(server)
# Parse Ombi options
self.ombi_enabled = self.enable_check('ombi_server_ids')
if self.ombi_enabled:
sids = self.config.get('global', 'ombi_server_ids').strip(' ').split(',')
for server_id in sids:
ombi_section = 'ombi-' + server_id
url = self.config.get(ombi_section, 'url')
apikey = self.config.get(ombi_section, 'apikey')
scheme = 'https://' if self.config.getboolean(ombi_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(ombi_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
request_type_counts = self.config.getboolean(ombi_section, 'get_request_type_counts')
request_type_run_seconds = self.config.getint(ombi_section, 'request_type_run_seconds')
request_total_counts = self.config.getboolean(ombi_section, 'get_request_total_counts')
request_total_run_seconds = self.config.getint(ombi_section, 'request_total_run_seconds')
server = OmbiServer(server_id, scheme + url, apikey, verify_ssl, request_type_counts,
request_type_run_seconds, request_total_counts, request_total_run_seconds)
self.ombi_servers.append(server)
# Parse ASA opts
if self.config.getboolean('global', 'asa'):
self.asa_enabled = True
url = self.config.get('asa', 'url')
username = self.config.get('asa', 'username')
password = self.config.get('asa', 'password')
scheme = 'https://' if self.config.getboolean('asa', 'ssl') else 'http://'
verify_ssl = self.config.getboolean('asa', 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
db_name = self.config.get('asa', 'influx_db')
self.asa = (scheme + url, username, password, verify_ssl, db_name)
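To illustrate the server-id convention the parser expects, a small standalone sketch (the ini text is inlined for brevity; compare `enable_check`/`clean_check` above):

```python
from configparser import ConfigParser

cfg = ConfigParser()
cfg.read_string('[global]\nsonarr_server_ids = 1,2\nombi_server_ids = false\n')

ids = cfg.get('global', 'sonarr_server_ids').replace(' ', '').split(',')
print([int(i) for i in ids])  # [1, 2] -> INIParser reads [sonarr-1] and [sonarr-2]
print(cfg.get('global', 'ombi_server_ids').lower() in ['false', 'no', '0'])  # True -> disabled
```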

varken/ombi.py (new file, 79 lines)

@@ -0,0 +1,79 @@
from requests import Session, Request
from datetime import datetime, timezone
from varken.helpers import connection_handler
from varken.structures import OmbiRequestCounts
class OmbiAPI(object):
def __init__(self, server, dbmanager):
self.now = datetime.now(timezone.utc).astimezone().isoformat()
self.dbmanager = dbmanager
self.server = server
# Create session to reduce server web thread load, and globally define pageSize for all requests
self.session = Session()
self.session.headers = {'Apikey': self.server.api_key}
def __repr__(self):
return "<ombi-{}>".format(self.server.id)
def get_total_requests(self):
self.now = datetime.now(timezone.utc).astimezone().isoformat()
tv_endpoint = '/api/v1/Request/tv'
movie_endpoint = "/api/v1/Request/movie"
tv_req = self.session.prepare_request(Request('GET', self.server.url + tv_endpoint))
movie_req = self.session.prepare_request(Request('GET', self.server.url + movie_endpoint))
get_tv = connection_handler(self.session, tv_req, self.server.verify_ssl)
get_movie = connection_handler(self.session, movie_req, self.server.verify_ssl)
if not all([get_tv, get_movie]):
return
movie_requests = len(get_movie)
tv_requests = len(get_tv)
influx_payload = [
{
"measurement": "Ombi",
"tags": {
"type": "Request_Total"
},
"time": self.now,
"fields": {
"total": movie_requests + tv_requests,
"movies": movie_requests,
"tv_shows": tv_requests
}
}
]
self.dbmanager.write_points(influx_payload)
def get_request_counts(self):
self.now = datetime.now(timezone.utc).astimezone().isoformat()
endpoint = '/api/v1/Request/count'
req = self.session.prepare_request(Request('GET', self.server.url + endpoint))
get = connection_handler(self.session, req, self.server.verify_ssl)
if not get:
return
requests = OmbiRequestCounts(**get)
influx_payload = [
{
"measurement": "Ombi",
"tags": {
"type": "Request_Counts"
},
"time": self.now,
"fields": {
"pending": requests.pending,
"approved": requests.approved,
"available": requests.available
}
}
]
self.dbmanager.write_points(influx_payload)

varken/radarr.py (new file, 112 lines)

@@ -0,0 +1,112 @@
from requests import Session, Request
from datetime import datetime, timezone
from varken.helpers import hashit, connection_handler
from varken.structures import Movie, Queue
class RadarrAPI(object):
def __init__(self, server, dbmanager):
self.now = datetime.now(timezone.utc).astimezone().isoformat()
self.dbmanager = dbmanager
self.server = server
# Create session to reduce server web thread load, and globally define pageSize for all requests
self.session = Session()
self.session.headers = {'X-Api-Key': self.server.api_key}
def __repr__(self):
return "<radarr-{}>".format(self.server.id)
def get_missing(self):
endpoint = '/api/movie'
self.now = datetime.now(timezone.utc).astimezone().isoformat()
influx_payload = []
missing = []
req = self.session.prepare_request(Request('GET', self.server.url + endpoint))
get = connection_handler(self.session, req, self.server.verify_ssl)
if not get:
return
movies = [Movie(**movie) for movie in get]
for movie in movies:
if not movie.downloaded:
ma = bool(movie.isAvailable)
movie_name = '{} ({})'.format(movie.title, movie.year)
missing.append((movie_name, ma, movie.tmdbId))
for title, ma, mid in missing:
hash_id = hashit('{}{}{}'.format(self.server.id, title, mid))
influx_payload.append(
{
"measurement": "Radarr",
"tags": {
"Missing": True,
"Missing_Available": ma,
"tmdbId": mid,
"server": self.server.id,
"name": title
},
"time": self.now,
"fields": {
"hash": hash_id
}
}
)
self.dbmanager.write_points(influx_payload)
def get_queue(self):
endpoint = '/api/queue'
self.now = datetime.now(timezone.utc).astimezone().isoformat()
influx_payload = []
queue = []
req = self.session.prepare_request(Request('GET', self.server.url + endpoint))
get = connection_handler(self.session, req, self.server.verify_ssl)
if not get:
return
for movie in get:
movie['movie'] = Movie(**movie['movie'])
download_queue = [Queue(**movie) for movie in get]
for queue_item in download_queue:
name = '{} ({})'.format(queue_item.movie.title, queue_item.movie.year)
if queue_item.protocol.upper() == 'USENET':
protocol_id = 1
else:
protocol_id = 0
queue.append((name, queue_item.quality['quality']['name'], queue_item.protocol.upper(),
protocol_id, queue_item.id))
for movie, quality, protocol, protocol_id, qid in queue:
hash_id = hashit('{}{}{}'.format(self.server.id, movie, quality))
influx_payload.append(
{
"measurement": "Radarr",
"tags": {
"type": "Queue",
"tmdbId": qid,
"server": self.server.id,
"name": movie,
"quality": quality,
"protocol": protocol,
"protocol_id": protocol_id
},
"time": self.now,
"fields": {
"hash": hash_id
}
}
)
self.dbmanager.write_points(influx_payload)

varken/sonarr.py (new file, 163 lines)

@@ -0,0 +1,163 @@
from requests import Session, Request
from datetime import datetime, timezone, date, timedelta
from varken.helpers import hashit, connection_handler
from varken.structures import Queue, TVShow
class SonarrAPI(object):
def __init__(self, server, dbmanager):
# Set Time of initialization
self.now = datetime.now(timezone.utc).astimezone().isoformat()
self.dbmanager = dbmanager
self.today = str(date.today())
self.server = server
# Create session to reduce server web thread load, and globally define pageSize for all requests
self.session = Session()
self.session.headers = {'X-Api-Key': self.server.api_key}
self.session.params = {'pageSize': 1000}
def __repr__(self):
return "<sonarr-{}>".format(self.server.id)
def get_missing(self):
endpoint = '/api/calendar'
last_days = str(date.today() + timedelta(days=-self.server.missing_days))
self.now = datetime.now(timezone.utc).astimezone().isoformat()
params = {'start': last_days, 'end': self.today}
influx_payload = []
missing = []
req = self.session.prepare_request(Request('GET', self.server.url + endpoint, params=params))
get = connection_handler(self.session, req, self.server.verify_ssl)
if not get:
return
# Iteratively create a list of TVShow Objects from response json
tv_shows = [TVShow(**show) for show in get]
# Add show to missing list if file does not exist
for show in tv_shows:
if not show.hasFile:
sxe = 'S{:0>2}E{:0>2}'.format(show.seasonNumber, show.episodeNumber)
missing.append((show.series['title'], sxe, show.airDate, show.title, show.id))
for series_title, sxe, air_date, episode_title, sonarr_id in missing:
hash_id = hashit('{}{}{}'.format(self.server.id, series_title, sxe))
influx_payload.append(
{
"measurement": "Sonarr",
"tags": {
"type": "Missing",
"sonarrId": sonarr_id,
"server": self.server.id,
"name": series_title,
"epname": episode_title,
"sxe": sxe,
"airs": air_date
},
"time": self.now,
"fields": {
"hash": hash_id
}
}
)
self.dbmanager.write_points(influx_payload)
def get_future(self):
endpoint = '/api/calendar/'
self.now = datetime.now(timezone.utc).astimezone().isoformat()
future = str(date.today() + timedelta(days=self.server.future_days))
influx_payload = []
air_days = []
params = {'start': self.today, 'end': future}
req = self.session.prepare_request(Request('GET', self.server.url + endpoint, params=params))
get = connection_handler(self.session, req, self.server.verify_ssl)
if not get:
return
tv_shows = [TVShow(**show) for show in get]
for show in tv_shows:
sxe = 'S{:0>2}E{:0>2}'.format(show.seasonNumber, show.episodeNumber)
if show.hasFile:
downloaded = 1
else:
downloaded = 0
air_days.append((show.series['title'], downloaded, sxe, show.title, show.airDate, show.id))
for series_title, dl_status, sxe, episode_title, air_date, sonarr_id in air_days:
hash_id = hashit('{}{}{}'.format(self.server.id, series_title, sxe))
influx_payload.append(
{
"measurement": "Sonarr",
"tags": {
"type": "Future",
"sonarrId": sonarr_id,
"server": self.server.id,
"name": series_title,
"epname": episode_title,
"sxe": sxe,
"airs": air_date,
"downloaded": dl_status
},
"time": self.now,
"fields": {
"hash": hash_id
}
}
)
self.dbmanager.write_points(influx_payload)
def get_queue(self):
influx_payload = []
endpoint = '/api/queue'
self.now = datetime.now(timezone.utc).astimezone().isoformat()
queue = []
req = self.session.prepare_request(Request('GET', self.server.url + endpoint))
get = connection_handler(self.session, req, self.server.verify_ssl)
if not get:
return
download_queue = [Queue(**show) for show in get]
for show in download_queue:
sxe = 'S{:0>2}E{:0>2}'.format(show.episode['seasonNumber'], show.episode['episodeNumber'])
if show.protocol.upper() == 'USENET':
protocol_id = 1
else:
protocol_id = 0
queue.append((show.series['title'], show.episode['title'], show.protocol.upper(),
protocol_id, sxe, show.id))
for series_title, episode_title, protocol, protocol_id, sxe, sonarr_id in queue:
hash_id = hashit('{}{}{}'.format(self.server.id, series_title, sxe))
influx_payload.append(
{
"measurement": "Sonarr",
"tags": {
"type": "Queue",
"sonarrId": sonarr_id,
"server": self.server.id,
"name": series_title,
"epname": episode_title,
"sxe": sxe,
"protocol": protocol,
"protocol_id": protocol_id
},
"time": self.now,
"fields": {
"hash": hash_id
}
}
)
self.dbmanager.write_points(influx_payload)

varken/structures.py (new file, 327 lines)

@@ -0,0 +1,327 @@
from typing import NamedTuple
class Queue(NamedTuple):
movie: dict = None
series: dict = None
episode: dict = None
quality: dict = None
size: float = None
title: str = None
sizeleft: float = None
timeleft: str = None
estimatedCompletionTime: str = None
status: str = None
trackedDownloadStatus: str = None
statusMessages: list = None
downloadId: str = None
protocol: str = None
id: int = None
class SonarrServer(NamedTuple):
id: int = None
url: str = None
api_key: str = None
verify_ssl: bool = False
missing_days: int = 0
missing_days_run_seconds: int = 30
future_days: int = 0
future_days_run_seconds: int = 30
queue: bool = False
queue_run_seconds: int = 30
class RadarrServer(NamedTuple):
id: int = None
url: str = None
api_key: str = None
verify_ssl: bool = False
queue: bool = False
queue_run_seconds: int = 30
get_missing: bool = False
get_missing_run_seconds: int = 30
class OmbiServer(NamedTuple):
id: int = None
url: str = None
api_key: str = None
verify_ssl: bool = False
request_type_counts: bool = False
request_type_run_seconds: int = 30
request_total_counts: bool = False
request_total_run_seconds: int = 30
class TautulliServer(NamedTuple):
id: int = None
url: str = None
fallback_ip: str = None
api_key: str = None
verify_ssl: bool = None
get_activity: bool = False
get_activity_run_seconds: int = 30
class InfluxServer(NamedTuple):
url: str = 'localhost'
port: int = 8086
username: str = 'root'
password: str = 'root'
class OmbiRequestCounts(NamedTuple):
pending: int = 0
approved: int = 0
available: int = 0
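# Note (illustrative, not part of the original file): these NamedTuples are
# built directly from API JSON, e.g. varken/ombi.py does OmbiRequestCounts(**get).
# Missing keys fall back to the declared defaults; unexpected keys raise TypeError.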
class TautulliStream(NamedTuple):
rating: str = None
transcode_width: str = None
labels: list = None
stream_bitrate: str = None
bandwidth: str = None
optimized_version: int = None
video_language: str = None
parent_rating_key: str = None
rating_key: str = None
platform_version: str = None
transcode_hw_decoding: int = None
thumb: str = None
title: str = None
video_codec_level: str = None
tagline: str = None
last_viewed_at: str = None
audio_sample_rate: str = None
user_rating: str = None
platform: str = None
collections: list = None
location: str = None
transcode_container: str = None
audio_channel_layout: str = None
local: str = None
stream_subtitle_format: str = None
stream_video_ref_frames: str = None
transcode_hw_encode_title: str = None
stream_container_decision: str = None
audience_rating: str = None
full_title: str = None
ip_address: str = None
subtitles: int = None
stream_subtitle_language: str = None
channel_stream: int = None
video_bitrate: str = None
is_allow_sync: int = None
stream_video_bitrate: str = None
summary: str = None
stream_audio_decision: str = None
aspect_ratio: str = None
audio_bitrate_mode: str = None
transcode_hw_decode_title: str = None
stream_audio_channel_layout: str = None
deleted_user: int = None
library_name: str = None
art: str = None
stream_video_resolution: str = None
video_profile: str = None
sort_title: str = None
stream_video_codec_level: str = None
stream_video_height: str = None
year: str = None
stream_duration: str = None
stream_audio_channels: str = None
video_language_code: str = None
transcode_key: str = None
transcode_throttled: int = None
container: str = None
stream_audio_bitrate: str = None
user: str = None
selected: int = None
product_version: str = None
subtitle_location: str = None
transcode_hw_requested: int = None
video_height: str = None
state: str = None
is_restricted: int = None
email: str = None
stream_container: str = None
transcode_speed: str = None
video_bit_depth: str = None
stream_audio_sample_rate: str = None
grandparent_title: str = None
studio: str = None
transcode_decision: str = None
video_width: str = None
bitrate: str = None
machine_id: str = None
originally_available_at: str = None
video_frame_rate: str = None
synced_version_profile: str = None
friendly_name: str = None
audio_profile: str = None
optimized_version_title: str = None
platform_name: str = None
stream_video_language: str = None
keep_history: int = None
stream_audio_codec: str = None
stream_video_codec: str = None
grandparent_thumb: str = None
synced_version: int = None
transcode_hw_decode: str = None
user_thumb: str = None
stream_video_width: str = None
height: str = None
stream_subtitle_decision: str = None
audio_codec: str = None
parent_title: str = None
guid: str = None
audio_language_code: str = None
transcode_video_codec: str = None
transcode_audio_codec: str = None
stream_video_decision: str = None
user_id: int = None
transcode_height: str = None
transcode_hw_full_pipeline: int = None
throttled: str = None
quality_profile: str = None
width: str = None
live: int = None
stream_subtitle_forced: int = None
media_type: str = None
video_resolution: str = None
stream_subtitle_location: str = None
do_notify: int = None
video_ref_frames: str = None
stream_subtitle_language_code: str = None
audio_channels: str = None
stream_audio_language_code: str = None
optimized_version_profile: str = None
relay: int = None
duration: str = None
rating_image: str = None
is_home_user: int = None
is_admin: int = None
ip_address_public: str = None
allow_guest: int = None
transcode_audio_channels: str = None
stream_audio_channel_layout_: str = None
media_index: str = None
stream_video_framerate: str = None
transcode_hw_encode: str = None
grandparent_rating_key: str = None
original_title: str = None
added_at: str = None
banner: str = None
bif_thumb: str = None
parent_media_index: str = None
live_uuid: str = None
audio_language: str = None
stream_audio_bitrate_mode: str = None
username: str = None
subtitle_decision: str = None
children_count: str = None
updated_at: str = None
player: str = None
subtitle_format: str = None
file: str = None
file_size: str = None
session_key: str = None
id: str = None
subtitle_container: str = None
genres: list = None
stream_video_language_code: str = None
indexes: int = None
video_decision: str = None
stream_audio_language: str = None
writers: list = None
actors: list = None
progress_percent: str = None
audio_decision: str = None
subtitle_forced: int = None
profile: str = None
product: str = None
view_offset: str = None
type: str = None
audience_rating_image: str = None
audio_bitrate: str = None
section_id: str = None
stream_subtitle_codec: str = None
subtitle_codec: str = None
video_codec: str = None
device: str = None
stream_video_bit_depth: str = None
video_framerate: str = None
transcode_hw_encoding: int = None
transcode_protocol: str = None
shared_libraries: list = None
stream_aspect_ratio: str = None
content_rating: str = None
session_id: str = None
directors: list = None
parent_thumb: str = None
subtitle_language_code: str = None
transcode_progress: int = None
subtitle_language: str = None
stream_subtitle_container: str = None
sub_type: str = None
class TVShow(NamedTuple):
seriesId: int = None
episodeFileId: int = None
seasonNumber: int = None
episodeNumber: int = None
title: str = None
airDate: str = None
airDateUtc: str = None
overview: str = None
episodeFile: dict = None
hasFile: bool = None
monitored: bool = None
unverifiedSceneNumbering: bool = None
absoluteEpisodeNumber: int = None
series: dict = None
id: int = None
class Movie(NamedTuple):
title: str = None
alternativeTitles: list = None
secondaryYearSourceId: int = None
sortTitle: str = None
sizeOnDisk: int = None
status: str = None
overview: str = None
inCinemas: str = None
images: list = None
downloaded: bool = None
year: int = None
secondaryYear: str = None
hasFile: bool = None
youTubeTrailerId: str = None
studio: str = None
path: str = None
profileId: int = None
pathState: str = None
monitored: bool = None
minimumAvailability: str = None
isAvailable: bool = None
folderName: str = None
runtime: int = None
lastInfoSync: str = None
cleanTitle: str = None
imdbId: str = None
tmdbId: int = None
titleSlug: str = None
genres: list = None
tags: list = None
added: str = None
ratings: dict = None
movieFile: dict = None
qualityProfileId: int = None
physicalRelease: str = None
physicalReleaseNote: str = None
website: str = None
id: int = None
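
These NamedTuples exist so raw API JSON can be splatted straight into typed records: missing keys fall back to the `None` defaults, while unexpected keys raise `TypeError` (which callers such as `TautulliAPI.get_activity` guard with a try/except). A short sketch with a hypothetical payload; real responses carry many more keys:

```python
from varken.structures import Queue

# Hypothetical Sonarr queue item, far smaller than a real response.
sample = {
    'series': {'title': 'Some Series'},
    'episode': {'title': 'Pilot', 'seasonNumber': 1, 'episodeNumber': 1},
    'protocol': 'usenet',
    'id': 42,
}
item = Queue(**sample)
print(item.protocol.upper(), item.sizeleft)  # -> USENET None
```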

149
varken/tautulli.py Normal file
View file

@@ -0,0 +1,149 @@
import logging
from datetime import datetime, timezone
from geoip2.errors import AddressNotFoundError
from requests import Session, Request
from varken.helpers import geo_lookup, hashit, connection_handler
from varken.structures import TautulliStream
logger = logging.getLogger()
class TautulliAPI(object):
def __init__(self, server, dbmanager):
# Set Time of initialization
self.now = datetime.now(timezone.utc).astimezone().isoformat()
self.dbmanager = dbmanager
self.server = server
self.session = Session()
self.session.params = {'apikey': self.server.api_key, 'cmd': 'get_activity'}
self.endpoint = '/api/v2'
def __repr__(self):
return "<tautulli-{}>".format(self.server.id)
def get_activity(self):
self.now = datetime.now(timezone.utc).astimezone().isoformat()
influx_payload = []
req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint))
g = connection_handler(self.session, req, self.server.verify_ssl)
if not g:
return
get = g['response']['data']
try:
sessions = [TautulliStream(**session) for session in get['sessions']]
except TypeError as e:
logger.error('TypeError has occurred: %s', e)
return
for session in sessions:
try:
geodata = geo_lookup(session.ip_address_public)
except (ValueError, AddressNotFoundError):
if self.server.fallback_ip:
geodata = geo_lookup(self.server.fallback_ip)
else:
my_ip = self.session.get('http://ip.42.pl/raw').text
geodata = geo_lookup(my_ip)
if not all([geodata.location.latitude, geodata.location.longitude]):
latitude = 37.234332396
longitude = -115.80666344
else:
latitude = geodata.location.latitude
longitude = geodata.location.longitude
decision = session.transcode_decision
if decision == 'copy':
decision = 'direct stream'
video_decision = session.stream_video_decision
if video_decision == 'copy':
video_decision = 'direct stream'
elif video_decision == '':
video_decision = 'Music'
quality = session.stream_video_resolution
if not quality:
quality = session.container.upper()
elif quality in ('SD', 'sd', '4k'):
quality = session.stream_video_resolution.upper()
else:
quality = session.stream_video_resolution + 'p'
player_state = session.state.lower()
if player_state == 'playing':
player_state = 0
elif player_state == 'paused':
player_state = 1
elif player_state == 'buffering':
player_state = 3
product_version = session.product_version
if session.platform == 'Roku':
product_version = session.product_version.split('-')[0]
hash_id = hashit('{}{}{}{}'.format(session.session_id, session.session_key, session.username,
session.full_title))
influx_payload.append(
{
"measurement": "Tautulli",
"tags": {
"type": "Session",
"session_id": session.session_id,
"friendly_name": session.friendly_name,
"username": session.username,
"title": session.full_title,
"platform": session.platform,
"product_version": product_version,
"quality": quality,
"video_decision": video_decision.title(),
"transcode_decision": decision.title(),
"media_type": session.media_type.title(),
"audio_codec": session.audio_codec.upper(),
"audio_profile": session.audio_profile.upper(),
"stream_audio_codec": session.stream_audio_codec.upper(),
"quality_profile": session.quality_profile,
"progress_percent": session.progress_percent,
"region_code": geodata.subdivisions.most_specific.iso_code,
"location": geodata.city.name,
"full_location": '{} - {}'.format(geodata.subdivisions.most_specific.name,
geodata.city.name),
"latitude": latitude,
"longitude": longitude,
"player_state": player_state,
"device_type": session.platform,
"server": self.server.id
},
"time": self.now,
"fields": {
"hash": hash_id
}
}
)
influx_payload.append(
{
"measurement": "Tautulli",
"tags": {
"type": "current_stream_stats",
"server": self.server.id
},
"time": self.now,
"fields": {
"stream_count": int(get['stream_count']),
"total_bandwidth": int(get['total_bandwidth']),
"wan_bandwidth": int(get['wan_bandwidth']),
"lan_bandwidth": int(get['lan_bandwidth']),
"transcode_streams": int(get['stream_count_transcode']),
"direct_play_streams": int(get['stream_count_direct_play']),
"direct_streams": int(get['stream_count_direct_stream'])
}
}
)
self.dbmanager.write_points(influx_payload)
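
The GeoIP handling above falls through three sources before pinning the point to placeholder coordinates. A condensed sketch of that chain, with the lookup and WAN-IP fetch passed in as parameters rather than the real helpers:

```python
from geoip2.errors import AddressNotFoundError

def locate(session_ip, fallback_ip, geo_lookup, fetch_wan_ip):
    # 1. the stream's own public IP, 2. the configured fallback_ip,
    # 3. the server's WAN IP as a last resort.
    try:
        return geo_lookup(session_ip)
    except (ValueError, AddressNotFoundError):
        if fallback_ip:
            return geo_lookup(fallback_ip)
        return geo_lookup(fetch_wan_ip())
```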

55
varken/varkenlogger.py Normal file
View file

@@ -0,0 +1,55 @@
import logging
from logging.handlers import RotatingFileHandler
from varken.helpers import mkdir_p
FILENAME = "varken.log"
MAX_SIZE = 5000000 # 5 MB
MAX_FILES = 5
LOG_FOLDER = 'logs'
class VarkenLogger(object):
"""docstring for ."""
def __init__(self, log_path=None, debug=None, data_folder=None):
self.data_folder = data_folder
self.log_level = debug
# Set log level
if self.log_level:
self.log_level = logging.DEBUG
else:
self.log_level = logging.INFO
# Make the log directory if it does not exist
mkdir_p('{}/{}'.format(self.data_folder, LOG_FOLDER))
# Create the Logger
self.logger = logging.getLogger()
self.logger.setLevel(logging.DEBUG)
# Create a Formatter for formatting the log messages
logger_formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(module)s : %(message)s', '%Y-%m-%d %H:%M:%S')
# Create the Handler for logging data to a file
file_logger = RotatingFileHandler('{}/{}/{}'.format(self.data_folder, LOG_FOLDER, FILENAME),
mode='a', maxBytes=MAX_SIZE,
backupCount=MAX_FILES,
encoding=None, delay=0
)
file_logger.setLevel(self.log_level)
# Add the Formatter to the Handler
file_logger.setFormatter(logger_formatter)
# Add the console logger
console_logger = logging.StreamHandler()
console_logger.setFormatter(logger_formatter)
console_logger.setLevel(self.log_level)
# Add the Handler to the Logger
self.logger.addHandler(file_logger)
self.logger.addHandler(console_logger)
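
A minimal usage sketch, assuming the `/opt/Varken/data` layout from the install steps; the handler levels follow the debug flag while the root logger itself stays at DEBUG:

```python
from varken.varkenlogger import VarkenLogger

# Writes to <data_folder>/logs/varken.log and mirrors output to the console.
vl = VarkenLogger(data_folder='/opt/Varken/data', debug=False)
vl.logger.info('Logging initialized')
```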