v1.4 Merge

Nicholas St. Germain 2018-12-18 22:09:24 -06:00 committed by GitHub
commit 6bfbb3ddb9
18 changed files with 765 additions and 330 deletions

1 .gitignore vendored

@@ -10,6 +10,5 @@ GeoLite2-City.mmdb
GeoLite2-City.tar.gz
data/varken.ini
.idea/
Legacy/configuration.py
varken-venv/
logs/

CHANGELOG.md

@@ -1,5 +1,44 @@
# Change Log
## [v1.4](https://github.com/Boerderij/Varken/tree/v1.4) (2018-12-18)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v1.3-nightly...v1.4)
**Implemented enhancements:**
- \[Feature Request\] Add tautulli request for library stats [\#64](https://github.com/Boerderij/Varken/issues/64)
**Fixed bugs:**
- \[BUG\] Ombi all requests missing half of "pending" option [\#63](https://github.com/Boerderij/Varken/issues/63)
- \[BUG\] asa bug with checking for apikey [\#62](https://github.com/Boerderij/Varken/issues/62)
**Merged pull requests:**
- v1.4 Merge [\#65](https://github.com/Boerderij/Varken/pull/65) ([DirtyCajunRice](https://github.com/DirtyCajunRice))
## [v1.3-nightly](https://github.com/Boerderij/Varken/tree/v1.3-nightly) (2018-12-18)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v1.2-nightly...v1.3-nightly)
**Implemented enhancements:**
- Create randomized 12-24 hour checks to update GeoLite DB after the first wednesday of the month [\#60](https://github.com/Boerderij/Varken/issues/60)
**Fixed bugs:**
- \[BUG\] Add Catchall to ombi requests [\#59](https://github.com/Boerderij/Varken/issues/59)
**Closed issues:**
- Unify naming and cleanup duplication in iniparser [\#61](https://github.com/Boerderij/Varken/issues/61)
## [v1.2-nightly](https://github.com/Boerderij/Varken/tree/v1.2-nightly) (2018-12-16)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v1.1...v1.2-nightly)
**Implemented enhancements:**
- \[Feature Request\]: Pull list of requests \(instead of just counts\) [\#58](https://github.com/Boerderij/Varken/issues/58)
- Feature Request , Add Sickchill [\#48](https://github.com/Boerderij/Varken/issues/48)
## [v1.1](https://github.com/Boerderij/Varken/tree/v1.1) (2018-12-11)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v1.0...v1.1)

Dockerfile

@@ -2,6 +2,8 @@ FROM lsiobase/alpine.python3
LABEL maintainer="dirtycajunrice"
ENV DEBUG="False"
COPY / /app
RUN \
@@ -12,4 +14,4 @@ RUN \
CMD cp /app/data/varken.example.ini /config/varken.example.ini && python3 /app/Varken.py --data-folder /config
VOLUME /config
VOLUME /config

README.md

@@ -12,20 +12,36 @@ frontend
Requirements:
* Python3.6+
* Python3-pip
* InfluxDB
* [InfluxDB](https://www.influxdata.com/)
<p align="center">
<img width="800" src="https://i.imgur.com/av8e0HP.png">
Example Dashboard
<img width="800" src="https://nickflix.io/sharex/firefox_NxdrqisVLF.png">
</p>
Supported Modules:
* [Sonarr](https://sonarr.tv/) - Smart PVR for newsgroup and bittorrent users.
* [SickChill](https://sickchill.github.io/) - SickChill is an automatic Video Library Manager for TV Shows.
* [Radarr](https://radarr.video/) - A fork of Sonarr to work with movies à la Couchpotato.
* [Tautulli](https://tautulli.com/) - A Python based monitoring and tracking tool for Plex Media Server.
* [Ombi](https://ombi.io/) - Want a Movie or TV Show on Plex or Emby? Use Ombi!
* Cisco ASA
Key features:
* Multiple server support for all modules
* Geolocation mapping from [GeoLite2](https://dev.maxmind.com/geoip/geoip2/geolite2/)
* Grafana [Worldmap Panel](https://grafana.com/plugins/grafana-worldmap-panel/installation) support
## Quick Setup (Git Clone)
```
# Clone the repository
git clone https://github.com/Boerderij/Varken.git /opt/Varken
# Follow the systemd install instructions located in varken.systemd
nano /opt/Varken/varken.systemd
cp /opt/Varken/varken.systemd /etc/systemd/system/varken.service
nano /etc/systemd/system/varken.service
# Create venv in project
/usr/bin/python3 -m venv /opt/Varken/varken-venv
@@ -48,8 +64,8 @@ systemctl enable varken
```
### Docker
[![Docker-Layers](https://images.microbadger.com/badges/image/boerderij/varken.svg)](https://microbadger.com/images/boerderij/varken")
[![Docker-Version](https://images.microbadger.com/badges/version/boerderij/varken.svg)](https://microbadger.com/images/boerderij/varken")
[![Docker-Layers](https://images.microbadger.com/badges/image/boerderij/varken.svg)](https://microbadger.com/images/boerderij/varken)
[![Docker-Version](https://images.microbadger.com/badges/version/boerderij/varken.svg)](https://microbadger.com/images/boerderij/varken)
[![Docker Pulls](https://img.shields.io/docker/pulls/boerderij/varken.svg)](https://hub.docker.com/r/boerderij/varken/)
[![Docker Stars](https://img.shields.io/docker/stars/boerderij/varken.svg)](https://hub.docker.com/r/boerderij/varken/)
<details><summary>Example</summary>
@@ -59,7 +75,8 @@ systemctl enable varken
docker run -d \
--name=varken \
-v <path to data>:/config \
-e PGID=<gid> -e PUID=<uid> \
-e PGID=<gid> -e PUID=<uid> \
-e TZ=America/Chicago \
boerderij/varken
```
</p>
@@ -91,4 +108,4 @@ named `varken`
Grafana is used in our examples but not required, nor packaged as part of
Varken. Panel example pictures are pinned in the grafana-panels channel of
discord. Future releases may contain a json-generator, but it does not exist
as varken stands today.
as varken stands today.

Varken.py

@@ -1,34 +1,33 @@
import sys
# Check for python3.6 or newer to resolve erroneous typing.NamedTuple issues
if sys.version_info < (3, 6):
exit('Varken requires python3.6 or newer')
import schedule
import threading
import platform
import distro
import schedule
from sys import exit
from time import sleep
from os import access, R_OK
from sys import version
from threading import Thread
from os import access, R_OK, getenv
from distro import linux_distribution
from os.path import isdir, abspath, dirname, join
from argparse import ArgumentParser, RawTextHelpFormatter
from logging import getLogger, StreamHandler, Formatter, DEBUG
from varken.iniparser import INIParser
from varken.sonarr import SonarrAPI
from varken.tautulli import TautulliAPI
from varken.radarr import RadarrAPI
from varken.ombi import OmbiAPI
from varken.cisco import CiscoAPI
from varken import VERSION, BRANCH
from varken.sonarr import SonarrAPI
from varken.radarr import RadarrAPI
from varken.iniparser import INIParser
from varken.dbmanager import DBManager
from varken.helpers import GeoIPHandler
from varken.tautulli import TautulliAPI
from varken.sickchill import SickChillAPI
from varken.varkenlogger import VarkenLogger
PLATFORM_LINUX_DISTRO = ' '.join(x for x in distro.linux_distribution() if x)
PLATFORM_LINUX_DISTRO = ' '.join(x for x in linux_distribution() if x)
def threaded(job):
thread = threading.Thread(target=job)
thread = Thread(target=job)
thread.start()
@@ -44,15 +43,32 @@ if __name__ == "__main__":
DATA_FOLDER = abspath(join(dirname(__file__), 'data'))
templogger = getLogger('temp')
templogger.setLevel(DEBUG)
tempch = StreamHandler()
tempformatter = Formatter('%(asctime)s : %(levelname)s : %(module)s : %(message)s', '%Y-%m-%d %H:%M:%S')
tempch.setFormatter(tempformatter)
templogger.addHandler(tempch)
if opts.data_folder:
ARG_FOLDER = opts.data_folder
if isdir(ARG_FOLDER):
DATA_FOLDER = ARG_FOLDER
if not access(ARG_FOLDER, R_OK):
exit("Read permission error for {}".format(ARG_FOLDER))
if not access(DATA_FOLDER, R_OK):
templogger.error("Read permission error for %s", DATA_FOLDER)
exit(1)
else:
exit("{} does not exist".format(ARG_FOLDER))
templogger.error("%s does not exist", ARG_FOLDER)
exit(1)
# Set Debug to True if DEBUG env is set
enable_opts = ['True', 'true', 'yes']
debug_opts = ['debug', 'Debug', 'DEBUG']
if not opts.debug:
opts.debug = True if any([getenv(string, False) for true in enable_opts
for string in debug_opts if getenv(string, False) == true]) else False
# Initiate the logger
vl = VarkenLogger(data_folder=DATA_FOLDER, debug=opts.debug)
@@ -60,11 +76,12 @@ if __name__ == "__main__":
vl.logger.info('Data folder is "%s"', DATA_FOLDER)
vl.logger.info(u"{} {} ({}{})".format(
platform.system(), platform.release(), platform.version(),
' - {}'.format(PLATFORM_LINUX_DISTRO) if PLATFORM_LINUX_DISTRO else ''
))
vl.logger.info(u"Python {}".format(sys.version))
vl.logger.info(u"%s %s (%s%s)", platform.system(), platform.release(), platform.version(),
f' - {PLATFORM_LINUX_DISTRO}' if PLATFORM_LINUX_DISTRO else '')
vl.logger.info(u"Python %s", version)
vl.logger.info("Varken v%s-%s", VERSION, BRANCH)
CONFIG = INIParser(DATA_FOLDER)
DBMANAGER = DBManager(CONFIG.influx_server)
@@ -80,10 +97,14 @@ if __name__ == "__main__":
schedule.every(server.future_days_run_seconds).seconds.do(threaded, SONARR.get_future)
if CONFIG.tautulli_enabled:
GEOIPHANDLER = GeoIPHandler(DATA_FOLDER)
schedule.every(12).to(24).hours.do(threaded, GEOIPHANDLER.update)
for server in CONFIG.tautulli_servers:
TAUTULLI = TautulliAPI(server, DBMANAGER, DATA_FOLDER)
TAUTULLI = TautulliAPI(server, DBMANAGER, GEOIPHANDLER)
if server.get_activity:
schedule.every(server.get_activity_run_seconds).seconds.do(threaded, TAUTULLI.get_activity)
if server.get_stats:
schedule.every(server.get_stats_run_seconds).seconds.do(threaded, TAUTULLI.get_stats)
if CONFIG.radarr_enabled:
for server in CONFIG.radarr_servers:
@@ -99,18 +120,25 @@ if __name__ == "__main__":
if server.request_type_counts:
schedule.every(server.request_type_run_seconds).seconds.do(threaded, OMBI.get_request_counts)
if server.request_total_counts:
schedule.every(server.request_total_run_seconds).seconds.do(threaded, OMBI.get_total_requests)
schedule.every(server.request_total_run_seconds).seconds.do(threaded, OMBI.get_all_requests)
if CONFIG.sickchill_enabled:
for server in CONFIG.sickchill_servers:
SICKCHILL = SickChillAPI(server, DBMANAGER)
if server.get_missing:
schedule.every(server.get_missing_run_seconds).seconds.do(threaded, SICKCHILL.get_missing)
if CONFIG.ciscoasa_enabled:
for firewall in CONFIG.ciscoasa_firewalls:
for firewall in CONFIG.ciscoasa_servers:
ASA = CiscoAPI(firewall, DBMANAGER)
schedule.every(firewall.get_bandwidth_run_seconds).seconds.do(threaded, ASA.get_bandwidth)
# Run all on startup
SERVICES_ENABLED = [CONFIG.ombi_enabled, CONFIG.radarr_enabled, CONFIG.tautulli_enabled,
CONFIG.sonarr_enabled, CONFIG.ciscoasa_enabled]
CONFIG.sonarr_enabled, CONFIG.ciscoasa_enabled, CONFIG.sickchill_enabled]
if not [enabled for enabled in SERVICES_ENABLED if enabled]:
exit("All services disabled. Exiting")
vl.logger.error("All services disabled. Exiting")
exit(1)
schedule.run_all()
while True:
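
The new entrypoint keeps a single scheduler loop and hands every poll off to a short-lived thread so slow API calls never block the loop. A minimal standalone sketch of that pattern as used above (the job function and interval are hypothetical stand-ins):

```
from threading import Thread
from time import sleep

import schedule


def threaded(job):
    # same helper as in Varken.py: run the job off the scheduler thread
    Thread(target=job).start()


def poll_example_api():  # hypothetical job
    print('polling...')


schedule.every(30).seconds.do(threaded, poll_example_api)
schedule.run_all()  # fire every job once on startup, as Varken does

while True:
    schedule.run_pending()
    sleep(1)
```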

data/varken.example.ini

@@ -11,6 +11,7 @@ radarr_server_ids = 1,2
tautulli_server_ids = 1
ombi_server_ids = 1
ciscoasa_firewall_ids = false
sickchill_server_ids = false
[influxdb]
url = influxdb.domain.tld
@@ -19,19 +20,21 @@ username =
password =
[tautulli-1]
url = tautulli.domain.tld
url = tautulli.domain.tld:8181
fallback_ip = 0.0.0.0
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = true
verify_ssl = false
get_activity = true
get_activity_run_seconds = 30
get_stats = true
get_stats_run_seconds = 3600
[sonarr-1]
url = sonarr1.domain.tld
url = sonarr1.domain.tld:8989
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = true
verify_ssl = false
missing_days = 7
missing_days_run_seconds = 300
future_days = 1
@@ -40,10 +43,10 @@ queue = true
queue_run_seconds = 300
[sonarr-2]
url = sonarr2.domain.tld
url = sonarr2.domain.tld:8989
apikey = yyyyyyyyyyyyyyyy
ssl = false
verify_ssl = true
verify_ssl = false
missing_days = 7
missing_days_run_seconds = 300
future_days = 1
@@ -55,7 +58,7 @@ queue_run_seconds = 300
url = radarr1.domain.tld
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = true
verify_ssl = false
queue = true
queue_run_seconds = 300
get_missing = true
@@ -65,7 +68,7 @@ get_missing_run_seconds = 300
url = radarr2.domain.tld
apikey = yyyyyyyyyyyyyyyy
ssl = false
verify_ssl = true
verify_ssl = false
queue = true
queue_run_seconds = 300
get_missing = true
@@ -75,17 +78,27 @@ get_missing_run_seconds = 300
url = ombi.domain.tld
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = true
verify_ssl = false
get_request_type_counts = true
request_type_run_seconds = 300
get_request_total_counts = true
request_total_run_seconds = 300
[sickchill-1]
url = sickchill.domain.tld:8081
apikey = xxxxxxxxxxxxxxxx
ssl = false
verify_ssl = false
get_missing = true
get_missing_run_seconds = 300
[ciscoasa-1]
url = firewall.domain.tld
username = cisco
password = cisco
outside_interface = WAN
ssl = false
verify_ssl = true
verify_ssl = false
get_bandwidth_run_seconds = 300
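
These sections are read by varken/iniparser.py through ConfigParser, which is what makes the typed get/getboolean/getint calls below possible. A minimal sketch of that access pattern, using keys from the example file above:

```
from configparser import ConfigParser

config = ConfigParser(interpolation=None)
config.read_string("""
[tautulli-1]
url = tautulli.domain.tld:8181
ssl = false
get_activity_run_seconds = 30
""")

url = config.get('tautulli-1', 'url')
scheme = 'https://' if config.getboolean('tautulli-1', 'ssl') else 'http://'
interval = config.getint('tautulli-1', 'get_activity_run_seconds')
print(scheme + url, interval)  # http://tautulli.domain.tld:8181 30
```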

varken/__init__.py

@@ -1 +1,2 @@
VERSION = 1.1
VERSION = 1.4
BRANCH = 'master'

varken/cisco.py

@@ -1,4 +1,4 @@
import logging
from logging import getLogger
from requests import Session, Request
from datetime import datetime, timezone
@@ -13,12 +13,12 @@ class CiscoAPI(object):
# Create session to reduce server web thread load, and globally define pageSize for all requests
self.session = Session()
self.session.auth = (self.firewall.username, self.firewall.password)
self.logger = logging.getLogger()
self.logger = getLogger()
self.get_token()
def __repr__(self):
return "<ciscoasa-{}>".format(self.firewall.id)
return f"<ciscoasa-{self.firewall.id}>"
def get_token(self):
endpoint = '/api/tokenservices'

varken/dbmanager.py

@@ -1,8 +1,6 @@
import logging
from logging import getLogger
from influxdb import InfluxDBClient
logger = logging.getLogger('varken')
class DBManager(object):
def __init__(self, server):
@@ -10,12 +8,16 @@ class DBManager(object):
self.influx = InfluxDBClient(self.server.url, self.server.port, self.server.username, self.server.password,
'varken')
databases = [db['name'] for db in self.influx.get_list_database()]
self.logger = getLogger()
if 'varken' not in databases:
self.logger.info("Creating varken database")
self.influx.create_database('varken')
self.logger.info("Creating varken retention policy (30d/1h)")
self.influx.create_retention_policy('varken 30d/1h', '30d', '1', 'varken', False, '1h')
def write_points(self, data):
d = data
logger.debug('Writing Data to InfluxDB {}'.format(d))
self.logger.debug('Writing Data to InfluxDB %s', d)
self.influx.write_points(d)
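
DBManager gives every collector one shared InfluxDB client. A minimal sketch of the write path it wraps, assuming a local InfluxDB and using the payload shape the collectors below produce (all values are placeholders):

```
from datetime import datetime, timezone

from influxdb import InfluxDBClient

# positional args: host, port, username, password, database
influx = InfluxDBClient('localhost', 8086, 'root', 'root', 'varken')
payload = [{
    "measurement": "Varken",
    "tags": {"type": "example", "server": 1},
    "time": datetime.now(timezone.utc).astimezone().isoformat(),
    "fields": {"hash": "abc123"}
}]
influx.write_points(payload)
```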

varken/helpers.py

@@ -1,70 +1,82 @@
import os
import time
import tarfile
import hashlib
import urllib3
import geoip2.database
import logging
from json.decoder import JSONDecodeError
from os.path import abspath, join
from requests.exceptions import InvalidSchema, SSLError
from hashlib import md5
from datetime import date
from logging import getLogger
from calendar import monthcalendar
from geoip2.database import Reader
from tarfile import open as taropen
from urllib3 import disable_warnings
from os import stat, remove, makedirs
from urllib.request import urlretrieve
from json.decoder import JSONDecodeError
from os.path import abspath, join, basename, isdir
from urllib3.exceptions import InsecureRequestWarning
from requests.exceptions import InvalidSchema, SSLError, ConnectionError
logger = logging.getLogger('varken')
logger = getLogger()
def geoip_download(data_folder):
datafolder = data_folder
class GeoIPHandler(object):
def __init__(self, data_folder):
self.data_folder = data_folder
self.dbfile = abspath(join(self.data_folder, 'GeoLite2-City.mmdb'))
self.logger = getLogger()
self.update()
tar_dbfile = abspath(join(datafolder, 'GeoLite2-City.tar.gz'))
self.logger.info('Opening persistent connection to GeoLite2 DB...')
self.reader = Reader(self.dbfile)
url = 'http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz'
logger.info('Downloading GeoLite2 from %s', url)
urlretrieve(url, tar_dbfile)
def lookup(self, ipaddress):
ip = ipaddress
self.logger.debug('Getting lat/long for Tautulli stream')
return self.reader.city(ip)
tar = tarfile.open(tar_dbfile, 'r:gz')
logging.debug('Opening GeoLite2 tar file : %s', tar_dbfile)
def update(self):
today = date.today()
dbdate = None
try:
dbdate = date.fromtimestamp(stat(self.dbfile).st_ctime)
except FileNotFoundError:
self.logger.error("Could not find GeoLite2 DB as: %s", self.dbfile)
self.download()
first_wednesday_day = [week[2:3][0] for week in monthcalendar(today.year, today.month) if week[2:3][0] != 0][0]
first_wednesday_date = date(today.year, today.month, first_wednesday_day)
for files in tar.getmembers():
if 'GeoLite2-City.mmdb' in files.name:
logging.debug('"GeoLite2-City.mmdb" FOUND in tar file')
files.name = os.path.basename(files.name)
tar.extract(files, datafolder)
logging.debug('%s has been extracted to %s', files, datafolder)
os.remove(tar_dbfile)
if dbdate < first_wednesday_date < today:
self.logger.info("Newer GeoLite2 DB available, Updating...")
remove(self.dbfile)
self.download()
else:
td = first_wednesday_date - today
if td.days < 0:
self.logger.debug('Geolite2 DB is only %s days old. Keeping current copy', abs(td.days))
else:
self.logger.debug('Geolite2 DB will update in %s days', abs(td.days))
def geo_lookup(ipaddress, data_folder):
datafolder = data_folder
logging.debug('Reading GeoLite2 from %s', datafolder)
def download(self):
tar_dbfile = abspath(join(self.data_folder, 'GeoLite2-City.tar.gz'))
url = 'http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz'
dbfile = abspath(join(datafolder, 'GeoLite2-City.mmdb'))
now = time.time()
self.logger.info('Downloading GeoLite2 from %s', url)
urlretrieve(url, tar_dbfile)
try:
dbinfo = os.stat(dbfile)
db_age = now - dbinfo.st_ctime
if db_age > (35 * 86400):
logging.info('GeoLite2 DB is older than 35 days. Attempting to re-download...')
self.logger.debug('Opening GeoLite2 tar file : %s', tar_dbfile)
os.remove(dbfile)
tar = taropen(tar_dbfile, 'r:gz')
geoip_download(datafolder)
except FileNotFoundError:
logging.error('GeoLite2 DB not found. Attempting to download...')
geoip_download(datafolder)
reader = geoip2.database.Reader(dbfile)
return reader.city(ipaddress)
for files in tar.getmembers():
if 'GeoLite2-City.mmdb' in files.name:
self.logger.debug('"GeoLite2-City.mmdb" FOUND in tar file')
files.name = basename(files.name)
tar.extract(files, self.data_folder)
self.logger.debug('%s has been extracted to %s', files, self.data_folder)
tar.close()
remove(tar_dbfile)
def hashit(string):
encoded = string.encode()
hashed = hashlib.md5(encoded).hexdigest()
hashed = md5(encoded).hexdigest()
return hashed
@@ -75,37 +87,59 @@ def connection_handler(session, request, verify):
v = verify
return_json = False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
disable_warnings(InsecureRequestWarning)
try:
get = s.send(r, verify=v)
if get.status_code == 401:
logger.info('Your api key is incorrect for {}'.format(r.url))
logger.info('Your api key is incorrect for %s', r.url)
elif get.status_code == 404:
logger.info('This url doesnt even resolve: {}'.format(r.url))
logger.info('This url doesnt even resolve: %s', r.url)
elif get.status_code == 200:
try:
return_json = get.json()
except JSONDecodeError:
logger.error('No JSON response... BORKED! Let us know in discord')
logger.error('No JSON response. Response is: %s', get.text)
# 204 No Content is for ASA only
elif get.status_code == 204:
if get.headers['X-Auth-Token']:
return get.headers['X-Auth-Token']
except InvalidSchema:
logger.error('You added http(s):// in the config file. Don\'t do that.')
logger.error("You added http(s):// in the config file. Don't do that.")
except SSLError as e:
logger.error('Either your host is unreachable or you have an SSL issue. : %s', e)
except ConnectionError as e:
logger.error('Cannot resolve the url/ip/port. Check connectivity. Error: %s', e)
return return_json
def mkdir_p(path):
"""http://stackoverflow.com/a/600612/190597 (tzot)"""
templogger = getLogger('temp')
try:
logger.info('Creating folder %s ', path)
os.makedirs(path, exist_ok=True)
if not isdir(path):
templogger.info('Creating folder %s ', path)
makedirs(path, exist_ok=True)
except Exception as e:
logger.error('Could not create folder %s : %s ', path, e)
templogger.error('Could not create folder %s : %s ', path, e)
def clean_sid_check(server_id_list, server_type=None):
t = server_type
sid_list = server_id_list
cleaned_list = sid_list.replace(' ', '').split(',')
valid_sids = []
for sid in cleaned_list:
try:
valid_sids.append(int(sid))
except ValueError:
logger.error("%s is not a valid server id number", sid)
if valid_sids:
logger.info('%s : %s', t.upper(), valid_sids)
return valid_sids
else:
logger.error('No valid %s', t.upper())
return False
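
The first-Wednesday check in GeoIPHandler.update() leans on calendar.monthcalendar, which returns Monday-first weeks padded with zeros for days outside the month, so index 2 of each week is a Wednesday. A short worked example for the month this release shipped:

```
from calendar import monthcalendar
from datetime import date

today = date(2018, 12, 18)  # release date, used as a worked example
weeks = monthcalendar(today.year, today.month)
# first week of Dec 2018 is [0, 0, 0, 0, 0, 1, 2], so its Wednesday slot is 0
first_wednesday_day = [week[2] for week in weeks if week[2] != 0][0]
first_wednesday = date(today.year, today.month, first_wednesday_day)
print(first_wednesday)  # 2018-12-05: a DB older than this gets re-downloaded
```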

varken/iniparser.py

@@ -1,196 +1,208 @@
import configparser
import logging
from sys import exit
from logging import getLogger
from os.path import join, exists
from varken.structures import SonarrServer, RadarrServer, OmbiServer, TautulliServer, InfluxServer, CiscoASAFirewall
from re import match, compile, IGNORECASE
from configparser import ConfigParser, NoOptionError
logger = logging.getLogger()
from varken.helpers import clean_sid_check
from varken.structures import SickChillServer
from varken.varkenlogger import BlacklistFilter
from varken.structures import SonarrServer, RadarrServer, OmbiServer, TautulliServer, InfluxServer, CiscoASAFirewall
class INIParser(object):
def __init__(self, data_folder):
self.config = configparser.ConfigParser(interpolation=None)
self.config = ConfigParser(interpolation=None)
self.data_folder = data_folder
self.services = ['sonarr', 'radarr', 'ombi', 'tautulli', 'sickchill', 'ciscoasa']
for service in self.services:
setattr(self, f'{service}_servers', [])
self.logger = getLogger()
self.influx_server = InfluxServer()
self.sonarr_enabled = False
self.sonarr_servers = []
self.radarr_enabled = False
self.radarr_servers = []
self.ombi_enabled = False
self.ombi_servers = []
self.tautulli_enabled = False
self.tautulli_servers = []
self.ciscoasa_enabled = False
self.ciscoasa_firewalls = []
self.parse_opts()
self.filtered_strings = None
def config_blacklist(self):
filtered_strings = [section.get(k) for key, section in self.config.items()
for k in section if k in BlacklistFilter.blacklisted_strings]
self.filtered_strings = list(filter(None, filtered_strings))
# Added matching for domains that use /locations. ConnectionPool ignores the location in logs
domains_only = [string.split('/')[0] for string in filtered_strings if '/' in string]
self.filtered_strings.extend(domains_only)
for handler in self.logger.handlers:
handler.addFilter(BlacklistFilter(set(self.filtered_strings)))
def enable_check(self, server_type=None):
t = server_type
try:
global_server_ids = self.config.get('global', t)
if global_server_ids.lower() in ['false', 'no', '0']:
logger.info('%s disabled.', t.upper())
return False
self.logger.info('%s disabled.', t.upper())
else:
sids = self.clean_check(global_server_ids, t)
sids = clean_sid_check(global_server_ids, t)
return sids
except configparser.NoOptionError as e:
logger.error(e)
@staticmethod
def clean_check(server_id_list, server_type=None):
t = server_type
sid_list = server_id_list
cleaned_list = sid_list.replace(' ', '').split(',')
valid_sids = []
for sid in cleaned_list:
try:
valid_sids.append(int(sid))
except ValueError:
logger.error("{} is not a valid server id number".format(sid))
if valid_sids:
logger.info('%s : %s', t.upper(), valid_sids)
return valid_sids
else:
logger.error('No valid %s', t.upper())
return False
except NoOptionError as e:
self.logger.error(e)
def read_file(self):
file_path = join(self.data_folder, 'varken.ini')
if exists(file_path):
with open(file_path) as config_ini:
self.config.read_file(config_ini)
self.config_blacklist()
else:
exit('Config file missing (varken.ini) in {}'.format(self.data_folder))
self.logger.error('Config file missing (varken.ini) in %s', self.data_folder)
exit(1)
def url_check(self, url=None, include_port=True):
url_check = url
inc_port = include_port
search = (r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # domain...
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
)
if inc_port:
search = (search + r'(?::\d+)?' + r'(?:/?|[/?]\S+)$')
else:
search = (search + r'(?:/?|[/?]\S+)$')
regex = compile('{}'.format(search), IGNORECASE)
valid = match(regex, url_check) is not None
if not valid:
if inc_port:
self.logger.error('%s is invalid! URL must host/IP and port if not 80 or 443. ie. localhost:8080',
url_check)
exit(1)
else:
self.logger.error('%s is invalid! URL must host/IP. ie. localhost', url_check)
exit(1)
else:
self.logger.debug('%s is a valid URL in the config.', url_check)
return url_check
def parse_opts(self):
self.read_file()
# Parse InfluxDB options
url = self.config.get('influxdb', 'url')
url = self.url_check(self.config.get('influxdb', 'url'), include_port=False)
port = self.config.getint('influxdb', 'port')
username = self.config.get('influxdb', 'username')
password = self.config.get('influxdb', 'password')
self.influx_server = InfluxServer(url, port, username, password)
# Parse Sonarr options
self.sonarr_enabled = self.enable_check('sonarr_server_ids')
# Check for all enabled services
for service in self.services:
setattr(self, f'{service}_enabled', self.enable_check(f'{service}_server_ids'))
service_enabled = getattr(self, f'{service}_enabled')
if self.sonarr_enabled:
sids = self.config.get('global', 'sonarr_server_ids').strip(' ').split(',')
if service_enabled:
for server_id in service_enabled:
server = None
section = f"{service}-{server_id}"
try:
url = self.url_check(self.config.get(section, 'url'))
for server_id in sids:
sonarr_section = 'sonarr-' + server_id
url = self.config.get(sonarr_section, 'url')
apikey = self.config.get(sonarr_section, 'apikey')
scheme = 'https://' if self.config.getboolean(sonarr_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(sonarr_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
queue = self.config.getboolean(sonarr_section, 'queue')
missing_days = self.config.getint(sonarr_section, 'missing_days')
future_days = self.config.getint(sonarr_section, 'future_days')
missing_days_run_seconds = self.config.getint(sonarr_section, 'missing_days_run_seconds')
future_days_run_seconds = self.config.getint(sonarr_section, 'future_days_run_seconds')
queue_run_seconds = self.config.getint(sonarr_section, 'queue_run_seconds')
apikey = None
if service != 'ciscoasa':
apikey = self.config.get(section, 'apikey')
server = SonarrServer(server_id, scheme + url, apikey, verify_ssl, missing_days,
missing_days_run_seconds, future_days, future_days_run_seconds,
queue, queue_run_seconds)
self.sonarr_servers.append(server)
scheme = 'https://' if self.config.getboolean(section, 'ssl') else 'http://'
# Parse Radarr options
self.radarr_enabled = self.enable_check('radarr_server_ids')
verify_ssl = self.config.getboolean(section, 'verify_ssl')
if self.radarr_enabled:
sids = self.config.get('global', 'radarr_server_ids').strip(' ').split(',')
if scheme != 'https://':
verify_ssl = False
for server_id in sids:
radarr_section = 'radarr-' + server_id
url = self.config.get(radarr_section, 'url')
apikey = self.config.get(radarr_section, 'apikey')
scheme = 'https://' if self.config.getboolean(radarr_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(radarr_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
queue = self.config.getboolean(radarr_section, 'queue')
queue_run_seconds = self.config.getint(radarr_section, 'queue_run_seconds')
get_missing = self.config.getboolean(radarr_section, 'get_missing')
get_missing_run_seconds = self.config.getint(radarr_section, 'get_missing_run_seconds')
if service == 'sonarr':
queue = self.config.getboolean(section, 'queue')
server = RadarrServer(server_id, scheme + url, apikey, verify_ssl, queue, queue_run_seconds,
get_missing, get_missing_run_seconds)
self.radarr_servers.append(server)
missing_days = self.config.getint(section, 'missing_days')
# Parse Tautulli options
self.tautulli_enabled = self.enable_check('tautulli_server_ids')
future_days = self.config.getint(section, 'future_days')
if self.tautulli_enabled:
sids = self.config.get('global', 'tautulli_server_ids').strip(' ').split(',')
missing_days_run_seconds = self.config.getint(section, 'missing_days_run_seconds')
for server_id in sids:
tautulli_section = 'tautulli-' + server_id
url = self.config.get(tautulli_section, 'url')
fallback_ip = self.config.get(tautulli_section, 'fallback_ip')
apikey = self.config.get(tautulli_section, 'apikey')
scheme = 'https://' if self.config.getboolean(tautulli_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(tautulli_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
get_activity = self.config.getboolean(tautulli_section, 'get_activity')
get_activity_run_seconds = self.config.getint(tautulli_section, 'get_activity_run_seconds')
future_days_run_seconds = self.config.getint(section, 'future_days_run_seconds')
server = TautulliServer(server_id, scheme + url, fallback_ip, apikey, verify_ssl, get_activity,
get_activity_run_seconds)
self.tautulli_servers.append(server)
queue_run_seconds = self.config.getint(section, 'queue_run_seconds')
# Parse Ombi options
self.ombi_enabled = self.enable_check('ombi_server_ids')
server = SonarrServer(server_id, scheme + url, apikey, verify_ssl, missing_days,
missing_days_run_seconds, future_days, future_days_run_seconds,
queue, queue_run_seconds)
if self.ombi_enabled:
sids = self.config.get('global', 'ombi_server_ids').strip(' ').split(',')
for server_id in sids:
ombi_section = 'ombi-' + server_id
url = self.config.get(ombi_section, 'url')
apikey = self.config.get(ombi_section, 'apikey')
scheme = 'https://' if self.config.getboolean(ombi_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(ombi_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
request_type_counts = self.config.getboolean(ombi_section, 'get_request_type_counts')
request_type_run_seconds = self.config.getint(ombi_section, 'request_type_run_seconds')
request_total_counts = self.config.getboolean(ombi_section, 'get_request_total_counts')
request_total_run_seconds = self.config.getint(ombi_section, 'request_total_run_seconds')
if service == 'radarr':
queue = self.config.getboolean(section, 'queue')
server = OmbiServer(server_id, scheme + url, apikey, verify_ssl, request_type_counts,
request_type_run_seconds, request_total_counts, request_total_run_seconds)
self.ombi_servers.append(server)
queue_run_seconds = self.config.getint(section, 'queue_run_seconds')
# Parse ASA opts
self.ciscoasa_enabled = self.enable_check('ciscoasa_firewall_ids')
get_missing = self.config.getboolean(section, 'get_missing')
if self.ciscoasa_enabled:
fids = self.config.get('global', 'ciscoasa_firewall_ids').strip(' ').split(',')
for firewall_id in fids:
ciscoasa_section = 'ciscoasa-' + firewall_id
url = self.config.get(ciscoasa_section, 'url')
username = self.config.get(ciscoasa_section, 'username')
password = self.config.get(ciscoasa_section, 'password')
scheme = 'https://' if self.config.getboolean(ciscoasa_section, 'ssl') else 'http://'
verify_ssl = self.config.getboolean(ciscoasa_section, 'verify_ssl')
if scheme != 'https://':
verify_ssl = False
outside_interface = self.config.get(ciscoasa_section, 'outside_interface')
get_bandwidth_run_seconds = self.config.getint(ciscoasa_section, 'get_bandwidth_run_seconds')
get_missing_run_seconds = self.config.getint(section, 'get_missing_run_seconds')
firewall = CiscoASAFirewall(firewall_id, scheme + url, username, password, outside_interface,
verify_ssl, get_bandwidth_run_seconds)
self.ciscoasa_firewalls.append(firewall)
server = RadarrServer(server_id, scheme + url, apikey, verify_ssl, queue, queue_run_seconds,
get_missing, get_missing_run_seconds)
if service == 'tautulli':
fallback_ip = self.config.get(section, 'fallback_ip')
get_activity = self.config.getboolean(section, 'get_activity')
get_activity_run_seconds = self.config.getint(section, 'get_activity_run_seconds')
get_stats = self.config.getboolean(section, 'get_stats')
get_stats_run_seconds = self.config.getint(section, 'get_stats_run_seconds')
server = TautulliServer(server_id, scheme + url, fallback_ip, apikey, verify_ssl,
get_activity, get_activity_run_seconds, get_stats,
get_stats_run_seconds)
if service == 'ombi':
request_type_counts = self.config.getboolean(section, 'get_request_type_counts')
request_type_run_seconds = self.config.getint(section, 'request_type_run_seconds')
request_total_counts = self.config.getboolean(section, 'get_request_total_counts')
request_total_run_seconds = self.config.getint(section, 'request_total_run_seconds')
server = OmbiServer(server_id, scheme + url, apikey, verify_ssl, request_type_counts,
request_type_run_seconds, request_total_counts,
request_total_run_seconds)
if service == 'sickchill':
get_missing = self.config.getboolean(section, 'get_missing')
get_missing_run_seconds = self.config.getint(section, 'get_missing_run_seconds')
server = SickChillServer(server_id, scheme + url, apikey, verify_ssl,
get_missing, get_missing_run_seconds)
if service == 'ciscoasa':
username = self.config.get(section, 'username')
password = self.config.get(section, 'password')
outside_interface = self.config.get(section, 'outside_interface')
get_bandwidth_run_seconds = self.config.getint(section, 'get_bandwidth_run_seconds')
server = CiscoASAFirewall(server_id, scheme + url, username, password, outside_interface,
verify_ssl, get_bandwidth_run_seconds)
getattr(self, f'{service}_servers').append(server)
except NoOptionError as e:
setattr(self, f'{service}_enabled', False)
self.logger.error('%s disabled. Error: %s', section, e)
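
The rewritten parser replaces the old per-service copy/paste with one loop over self.services, keeping per-service state behind setattr/getattr. A stripped-down sketch of just that pattern (class name hypothetical):

```
class ParserSketch:
    def __init__(self):
        self.services = ['sonarr', 'radarr', 'ombi', 'tautulli',
                         'sickchill', 'ciscoasa']
        # creates self.sonarr_servers, self.radarr_servers, ... dynamically
        for service in self.services:
            setattr(self, f'{service}_servers', [])
            setattr(self, f'{service}_enabled', False)

    def add_server(self, service, server):
        # equivalent to self.sonarr_servers.append(server), etc.
        getattr(self, f'{service}_servers').append(server)


parser = ParserSketch()
parser.add_server('sonarr', 'server-1')
print(parser.sonarr_servers)  # ['server-1']
```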

varken/ombi.py

@@ -1,9 +1,9 @@
import logging
from logging import getLogger
from requests import Session, Request
from datetime import datetime, timezone
from varken.helpers import connection_handler
from varken.structures import OmbiRequestCounts
from varken.helpers import connection_handler, hashit
from varken.structures import OmbiRequestCounts, OmbiMovieRequest, OmbiTVRequest
class OmbiAPI(object):
@@ -13,12 +13,12 @@ class OmbiAPI(object):
# Create session to reduce server web thread load, and globally define pageSize for all requests
self.session = Session()
self.session.headers = {'Apikey': self.server.api_key}
self.logger = logging.getLogger()
self.logger = getLogger()
def __repr__(self):
return "<ombi-{}>".format(self.server.id)
return f"<ombi-{self.server.id}>"
def get_total_requests(self):
def get_all_requests(self):
now = datetime.now(timezone.utc).astimezone().isoformat()
tv_endpoint = '/api/v1/Request/tv'
movie_endpoint = "/api/v1/Request/movie"
@@ -28,11 +28,24 @@ class OmbiAPI(object):
get_tv = connection_handler(self.session, tv_req, self.server.verify_ssl)
get_movie = connection_handler(self.session, movie_req, self.server.verify_ssl)
if not all([get_tv, get_movie]):
if not any([get_tv, get_movie]):
self.logger.error('No json replies. Discarding job')
return
movie_requests = len(get_movie)
tv_requests = len(get_tv)
movie_request_count = len(get_movie)
tv_request_count = len(get_tv)
try:
tv_show_requests = [OmbiTVRequest(**show) for show in get_tv]
except TypeError as e:
self.logger.error('TypeError has occurred : %s while creating OmbiTVRequest structure', e)
return
try:
movie_requests = [OmbiMovieRequest(**movie) for movie in get_movie]
except TypeError as e:
self.logger.error('TypeError has occurred : %s while creating OmbiMovieRequest structure', e)
return
influx_payload = [
{
@@ -43,12 +56,76 @@
},
"time": now,
"fields": {
"total": movie_requests + tv_requests,
"movies": movie_requests,
"tv_shows": tv_requests
"total": movie_request_count + tv_request_count,
"movies": movie_request_count,
"tv_shows": tv_request_count
}
}
]
# Request Type: Movie = 1, TV Show = 0
for movie in movie_requests:
hash_id = hashit(f'{movie.id}{movie.theMovieDbId}{movie.title}')
status = None
# Denied = 0, Approved = 1, Completed = 2, Pending = 3
if movie.denied:
status = 0
elif movie.approved and movie.available:
status = 2
elif movie.approved:
status = 1
else:
status = 3
influx_payload.append(
{
"measurement": "Ombi",
"tags": {
"type": "Requests",
"server": self.server.id,
"request_type": 1,
"status": status,
"title": movie.title,
"requested_user": movie.requestedUser['userAlias'],
"requested_date": movie.requestedDate
},
"time": now,
"fields": {
"hash": hash_id
}
}
)
for show in tv_show_requests:
hash_id = hashit(f'{show.id}{show.tvDbId}{show.title}')
# Denied = 0, Approved = 1, Completed = 2, Pending = 3
if show.childRequests[0]['denied']:
status = 0
elif show.childRequests[0]['approved'] and show.childRequests[0]['available']:
status = 2
elif show.childRequests[0]['approved']:
status = 1
else:
status = 3
influx_payload.append(
{
"measurement": "Ombi",
"tags": {
"type": "Requests",
"server": self.server.id,
"request_type": 0,
"status": status,
"title": show.title,
"requested_user": show.childRequests[0]['requestedUser']['userAlias'],
"requested_date": show.childRequests[0]['requestedDate']
},
"time": now,
"fields": {
"hash": hash_id
}
}
)
self.dbmanager.write_points(influx_payload)
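
Both request loops above apply the same Denied/Approved/Completed/Pending mapping; factored into a standalone helper for clarity (the function is a hypothetical restatement, not part of the commit):

```
def ombi_status(denied, approved, available):
    # Denied = 0, Approved = 1, Completed = 2, Pending = 3
    if denied:
        return 0
    if approved and available:
        return 2
    if approved:
        return 1
    return 3


assert ombi_status(denied=False, approved=True, available=True) == 2
assert ombi_status(denied=False, approved=False, available=False) == 3
```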

varken/radarr.py

@@ -1,9 +1,9 @@
import logging
from logging import getLogger
from requests import Session, Request
from datetime import datetime, timezone
from varken.helpers import hashit, connection_handler
from varken.structures import Movie, Queue
from varken.helpers import hashit, connection_handler
class RadarrAPI(object):
@@ -13,10 +13,10 @@ class RadarrAPI(object):
# Create session to reduce server web thread load, and globally define pageSize for all requests
self.session = Session()
self.session.headers = {'X-Api-Key': self.server.api_key}
self.logger = logging.getLogger()
self.logger = getLogger()
def __repr__(self):
return "<radarr-{}>".format(self.server.id)
return f"<radarr-{self.server.id}>"
def get_missing(self):
endpoint = '/api/movie'
@@ -43,11 +43,11 @@ class RadarrAPI(object):
else:
ma = 1
movie_name = '{} ({})'.format(movie.title, movie.year)
movie_name = f'{movie.title} ({movie.year})'
missing.append((movie_name, ma, movie.tmdbId, movie.titleSlug))
for title, ma, mid, title_slug in missing:
hash_id = hashit('{}{}{}'.format(self.server.id, title, mid))
hash_id = hashit(f'{self.server.id}{title}{mid}')
influx_payload.append(
{
"measurement": "Radarr",
@@ -96,7 +96,7 @@ class RadarrAPI(object):
for queue_item in download_queue:
movie = queue_item.movie
name = '{} ({})'.format(movie.title, movie.year)
name = f'{movie.title} ({movie.year})'
if queue_item.protocol.upper() == 'USENET':
protocol_id = 1
@@ -107,7 +107,7 @@ class RadarrAPI(object):
protocol_id, queue_item.id, movie.titleSlug))
for name, quality, protocol, protocol_id, qid, title_slug in queue:
hash_id = hashit('{}{}{}'.format(self.server.id, name, quality))
hash_id = hashit(f'{self.server.id}{name}{quality}')
influx_payload.append(
{
"measurement": "Radarr",

64 varken/sickchill.py Normal file

@@ -0,0 +1,64 @@
from logging import getLogger
from requests import Session, Request
from datetime import datetime, timezone
from varken.structures import SickChillTVShow
from varken.helpers import hashit, connection_handler
class SickChillAPI(object):
def __init__(self, server, dbmanager):
self.dbmanager = dbmanager
self.server = server
# Create session to reduce server web thread load, and globally define pageSize for all requests
self.session = Session()
self.session.params = {'limit': 1000}
self.endpoint = f"/api/{self.server.api_key}"
self.logger = getLogger()
def __repr__(self):
return f"<sickchill-{self.server.id}>"
def get_missing(self):
now = datetime.now(timezone.utc).astimezone().isoformat()
influx_payload = []
params = {'cmd': 'future', 'paused': 1, 'type': 'missed|today|soon|later|snatched'}
req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
get = connection_handler(self.session, req, self.server.verify_ssl)
if not get:
return
try:
for key, section in get['data'].items():
get['data'][key] = [SickChillTVShow(**show) for show in section]
except TypeError as e:
self.logger.error('TypeError has occurred : %s while creating SickChillTVShow structure', e)
return
for key, section in get['data'].items():
for show in section:
sxe = f'S{show.season:0>2}E{show.episode:0>2}'
hash_id = hashit(f'{self.server.id}{show.show_name}{sxe}')
missing_types = [(0, 'future'), (1, 'later'), (2, 'soon'), (3, 'today'), (4, 'missed')]
influx_payload.append(
{
"measurement": "SickChill",
"tags": {
"type": [item[0] for item in missing_types if key in item][0],
"indexerid": show.indexerid,
"server": self.server.id,
"name": show.show_name,
"epname": show.ep_name,
"sxe": sxe,
"airdate": show.airdate,
},
"time": now,
"fields": {
"hash": hash_id
}
}
)
self.dbmanager.write_points(influx_payload)
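
The type tag above is derived by matching the API's section key ('missed', 'today', 'soon', ...) against the missing_types tuple list; the lookup in isolation:

```
missing_types = [(0, 'future'), (1, 'later'), (2, 'soon'),
                 (3, 'today'), (4, 'missed')]
key = 'soon'
# `key in item` matches the string member of each (int, str) tuple
type_tag = [item[0] for item in missing_types if key in item][0]
print(type_tag)  # 2
```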

varken/sonarr.py

@@ -1,9 +1,9 @@
import logging
from logging import getLogger
from requests import Session, Request
from datetime import datetime, timezone, date, timedelta
from varken.helpers import hashit, connection_handler
from varken.structures import Queue, TVShow
from varken.helpers import hashit, connection_handler
class SonarrAPI(object):
@@ -14,10 +14,10 @@ class SonarrAPI(object):
self.session = Session()
self.session.headers = {'X-Api-Key': self.server.api_key}
self.session.params = {'pageSize': 1000}
self.logger = logging.getLogger()
self.logger = getLogger()
def __repr__(self):
return "<sonarr-{}>".format(self.server.id)
return f"<sonarr-{self.server.id}>"
def get_missing(self):
endpoint = '/api/calendar'
@@ -44,11 +44,11 @@ class SonarrAPI(object):
# Add show to missing list if file does not exist
for show in tv_shows:
if not show.hasFile:
sxe = 'S{:0>2}E{:0>2}'.format(show.seasonNumber, show.episodeNumber)
missing.append((show.series['title'], sxe, show.airDate, show.title, show.id))
sxe = f'S{show.seasonNumber:0>2}E{show.episodeNumber:0>2}'
missing.append((show.series['title'], sxe, show.airDateUtc, show.title, show.id))
for series_title, sxe, air_date, episode_title, sonarr_id in missing:
hash_id = hashit('{}{}{}'.format(self.server.id, series_title, sxe))
for series_title, sxe, air_date_utc, episode_title, sonarr_id in missing:
hash_id = hashit(f'{self.server.id}{series_title}{sxe}')
influx_payload.append(
{
"measurement": "Sonarr",
@@ -59,7 +59,7 @@ class SonarrAPI(object):
"name": series_title,
"epname": episode_title,
"sxe": sxe,
"airs": air_date
"airsUTC": air_date_utc
},
"time": now,
"fields": {
@@ -93,15 +93,15 @@ class SonarrAPI(object):
return
for show in tv_shows:
sxe = 'S{:0>2}E{:0>2}'.format(show.seasonNumber, show.episodeNumber)
sxe = f'S{show.seasonNumber:0>2}E{show.episodeNumber:0>2}'
if show.hasFile:
downloaded = 1
else:
downloaded = 0
air_days.append((show.series['title'], downloaded, sxe, show.title, show.airDate, show.id))
air_days.append((show.series['title'], downloaded, sxe, show.title, show.airDateUtc, show.id))
for series_title, dl_status, sxe, episode_title, air_date, sonarr_id in air_days:
hash_id = hashit('{}{}{}'.format(self.server.id, series_title, sxe))
for series_title, dl_status, sxe, episode_title, air_date_utc, sonarr_id in air_days:
hash_id = hashit(f'{self.server.id}{series_title}{sxe}')
influx_payload.append(
{
"measurement": "Sonarr",
@@ -112,7 +112,7 @@ class SonarrAPI(object):
"name": series_title,
"epname": episode_title,
"sxe": sxe,
"airs": air_date,
"airsUTC": air_date_utc,
"downloaded": dl_status
},
"time": now,
@@ -143,7 +143,7 @@ class SonarrAPI(object):
return
for show in download_queue:
sxe = 'S{:0>2}E{:0>2}'.format(show.episode['seasonNumber'], show.episode['episodeNumber'])
sxe = f"S{show.episode['seasonNumber']:0>2}E{show.episode['episodeNumber']:0>2}"
if show.protocol.upper() == 'USENET':
protocol_id = 1
else:
@@ -153,7 +153,7 @@ class SonarrAPI(object):
protocol_id, sxe, show.id))
for series_title, episode_title, protocol, protocol_id, sxe, sonarr_id in queue:
hash_id = hashit('{}{}{}'.format(self.server.id, series_title, sxe))
hash_id = hashit(f'{self.server.id}{series_title}{sxe}')
influx_payload.append(
{
"measurement": "Sonarr",

varken/structures.py

@@ -1,4 +1,13 @@
from sys import version_info
from typing import NamedTuple
from logging import getLogger
logger = getLogger('temp')
# Check for python3.6 or newer to resolve erroneous typing.NamedTuple issues
if version_info < (3, 6):
logger.error('Varken requires python3.6 or newer. You are on python%s.%s - Exiting...',
version_info.major, version_info.minor)
exit(1)
class Queue(NamedTuple):
@@ -62,6 +71,8 @@ class TautulliServer(NamedTuple):
verify_ssl: bool = None
get_activity: bool = False
get_activity_run_seconds: int = 30
get_stats: bool = False
get_stats_run_seconds: int = 30
class InfluxServer(NamedTuple):
@@ -70,6 +81,16 @@ class InfluxServer(NamedTuple):
username: str = 'root'
password: str = 'root'
class SickChillServer(NamedTuple):
id: int = None
url: str = None
api_key: str = None
verify_ssl: bool = False
get_missing: bool = False
get_missing_run_seconds: int = 30
class CiscoASAFirewall(NamedTuple):
id: int = None
url: str = '192.168.1.1'
@@ -79,6 +100,7 @@ class CiscoASAFirewall(NamedTuple):
verify_ssl: bool = False
get_bandwidth_run_seconds: int = 30
class OmbiRequestCounts(NamedTuple):
pending: int = 0
approved: int = 0
@@ -337,3 +359,69 @@ class Movie(NamedTuple):
physicalReleaseNote: str = None
website: str = None
id: int = None
class OmbiMovieRequest(NamedTuple):
theMovieDbId: int = None
issueId: None = None
issues: None = None
subscribed: bool = None
showSubscribe: bool = None
rootPathOverride: int = None
qualityOverride: int = None
imdbId: str = None
overview: str = None
posterPath: str = None
releaseDate: str = None
digitalReleaseDate: None = None
status: str = None
background: str = None
released: bool = None
digitalRelease: bool = None
title: str = None
approved: bool = None
markedAsApproved: str = None
requestedDate: str = None
available: bool = None
markedAsAvailable: None = None
requestedUserId: str = None
denied: bool = None
markedAsDenied: str = None
deniedReason: None = None
requestType: int = None
requestedUser: dict = None
canApprove: bool = None
id: int = None
class OmbiTVRequest(NamedTuple):
tvDbId: int = None
imdbId: str = None
qualityOverride: None = None
rootFolder: None = None
overview: str = None
title: str = None
posterPath: str = None
background: str = None
releaseDate: str = None
status: str = None
totalSeasons: int = None
childRequests: list = None
id: int = None
class SickChillTVShow(NamedTuple):
airdate: str = None
airs: str = None
ep_name: str = None
ep_plot: str = None
episode: int = None
indexerid: int = None
network: str = None
paused: int = None
quality: str = None
season: int = None
show_name: str = None
show_status: str = None
tvdbid: int = None
weekday: int = None
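
Every collector builds these structures straight from decoded JSON with Structure(**payload); unexpected keys raise TypeError, which is why each constructor call in the collectors is wrapped in try/except TypeError. A minimal sketch with a trimmed-down stand-in class:

```
from typing import NamedTuple


class TVShowSketch(NamedTuple):  # trimmed-down stand-in structure
    show_name: str = None
    season: int = None
    episode: int = None


payload = {'show_name': 'Example', 'season': 1, 'episode': 2}
show = TVShowSketch(**payload)
print(f'S{show.season:0>2}E{show.episode:0>2}')  # S01E02

try:
    TVShowSketch(**{'unexpected_field': True})
except TypeError as e:
    print('TypeError has occurred :', e)
```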

varken/tautulli.py

@@ -1,30 +1,31 @@
import logging
from logging import getLogger
from requests import Session, Request
from datetime import datetime, timezone
from geoip2.errors import AddressNotFoundError
from varken.helpers import geo_lookup, hashit, connection_handler
from varken.structures import TautulliStream
from varken.helpers import hashit, connection_handler
class TautulliAPI(object):
def __init__(self, server, dbmanager, data_folder):
def __init__(self, server, dbmanager, geoiphandler):
self.dbmanager = dbmanager
self.server = server
self.geoiphandler = geoiphandler
self.session = Session()
self.session.params = {'apikey': self.server.api_key, 'cmd': 'get_activity'}
self.session.params = {'apikey': self.server.api_key}
self.endpoint = '/api/v2'
self.logger = logging.getLogger()
self.data_folder = data_folder
self.logger = getLogger()
def __repr__(self):
return "<tautulli-{}>".format(self.server.id)
return f"<tautulli-{self.server.id}>"
def get_activity(self):
now = datetime.now(timezone.utc).astimezone().isoformat()
influx_payload = []
params = {'cmd': 'get_activity'}
req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint))
req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
g = connection_handler(self.session, req, self.server.verify_ssl)
if not g:
@@ -39,14 +40,21 @@ class TautulliAPI(object):
return
for session in sessions:
# Check to see if ip_address_public attribute exists as it was introduced in v2
try:
geodata = geo_lookup(session.ip_address_public, self.data_folder)
getattr(session, 'ip_address_public')
except AttributeError:
self.logger.error('Public IP attribute missing!!! Do you have an old version of Tautulli (v1)?')
exit(1)
try:
geodata = self.geoiphandler.lookup(session.ip_address_public)
except (ValueError, AddressNotFoundError):
if self.server.fallback_ip:
geodata = geo_lookup(self.server.fallback_ip, self.data_folder)
geodata = self.geoiphandler.lookup(self.server.fallback_ip)
else:
my_ip = self.session.get('http://ip.42.pl/raw').text
geodata = geo_lookup(my_ip, self.data_folder)
geodata = self.geoiphandler.lookup(my_ip)
if not all([geodata.location.latitude, geodata.location.longitude]):
latitude = 37.234332396
@@ -85,8 +93,7 @@ class TautulliAPI(object):
if session.platform == 'Roku':
product_version = session.product_version.split('-')[0]
hash_id = hashit('{}{}{}{}'.format(session.session_id, session.session_key, session.username,
session.full_title))
hash_id = hashit(f'{session.session_id}{session.session_key}{session.username}{session.full_title}')
influx_payload.append(
{
"measurement": "Tautulli",
@@ -109,8 +116,7 @@ class TautulliAPI(object):
"progress_percent": session.progress_percent,
"region_code": geodata.subdivisions.most_specific.iso_code,
"location": geodata.city.name,
"full_location": '{} - {}'.format(geodata.subdivisions.most_specific.name,
geodata.city.name),
"full_location": f'{geodata.subdivisions.most_specific.name} - {geodata.city.name}',
"latitude": latitude,
"longitude": longitude,
"player_state": player_state,
@@ -145,3 +151,37 @@ class TautulliAPI(object):
)
self.dbmanager.write_points(influx_payload)
def get_stats(self):
now = datetime.now(timezone.utc).astimezone().isoformat()
influx_payload = []
params = {'cmd': 'get_libraries'}
req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
g = connection_handler(self.session, req, self.server.verify_ssl)
if not g:
return
get = g['response']['data']
for library in get:
data = {
"measurement": "Tautulli",
"tags": {
"type": "library_stats",
"server": self.server.id,
"section_name": library['section_name'],
"section_type": library['section_type']
},
"time": now,
"fields": {
"total": int(library['count'])
}
}
if library['section_type'] == 'show':
data['fields']['seasons'] = int(library['parent_count'])
data['fields']['episodes'] = int(library['child_count'])
influx_payload.append(data)
self.dbmanager.write_points(influx_payload)
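
get_activity() resolves each stream's location through a three-step fallback: the session's public IP, then the configured fallback_ip, then the host's own public address. The chain in isolation (GeoIPHandler is the class added in varken/helpers.py; the wrapper function is hypothetical):

```
from geoip2.errors import AddressNotFoundError
from requests import Session


def locate(geoiphandler, session_ip, fallback_ip=None):
    try:
        return geoiphandler.lookup(session_ip)
    except (ValueError, AddressNotFoundError):
        if fallback_ip:
            return geoiphandler.lookup(fallback_ip)
        # last resort: geolocate our own public address
        my_ip = Session().get('http://ip.42.pl/raw').text
        return geoiphandler.lookup(my_ip)
```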

varken/varkenlogger.py

@@ -1,44 +1,63 @@
import logging
from logging.handlers import RotatingFileHandler
from logging import Filter, DEBUG, INFO, getLogger, Formatter, StreamHandler
from varken.helpers import mkdir_p
FILENAME = "varken.log"
MAX_SIZE = 5000000 # 5 MB
MAX_FILES = 5
LOG_FOLDER = 'logs'
class BlacklistFilter(Filter):
"""
Log filter for blacklisted tokens and passwords
"""
filename = "varken.log"
max_size = 5000000 # 5 MB
max_files = 5
log_folder = 'logs'
blacklisted_strings = ['apikey', 'username', 'password', 'url']
def __init__(self, filteredstrings):
super().__init__()
self.filtered_strings = filteredstrings
def filter(self, record):
for item in self.filtered_strings:
try:
if item in record.msg:
record.msg = record.msg.replace(item, 8 * '*' + item[-5:])
if any(item in str(arg) for arg in record.args):
record.args = tuple(arg.replace(item, 8 * '*' + item[-5:]) if isinstance(arg, str) else arg
for arg in record.args)
except TypeError:
pass
return True
class VarkenLogger(object):
"""docstring for ."""
def __init__(self, log_path=None, debug=None, data_folder=None):
def __init__(self, debug=None, data_folder=None):
self.data_folder = data_folder
self.log_level = debug
# Set log level
if self.log_level:
self.log_level = logging.DEBUG
self.log_level = DEBUG
else:
self.log_level = logging.INFO
self.log_level = INFO
# Make the log directory if it does not exist
mkdir_p('{}/{}'.format(self.data_folder, LOG_FOLDER))
mkdir_p(f'{self.data_folder}/{BlacklistFilter.log_folder}')
# Create the Logger
self.logger = logging.getLogger()
self.logger.setLevel(logging.DEBUG)
self.logger = getLogger()
self.logger.setLevel(DEBUG)
# Create a Formatter for formatting the log messages
logger_formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(module)s : %(message)s', '%Y-%m-%d %H:%M:%S')
logger_formatter = Formatter('%(asctime)s : %(levelname)s : %(module)s : %(message)s', '%Y-%m-%d %H:%M:%S')
# Create the Handler for logging data to a file
file_logger = RotatingFileHandler('{}/{}/{}'.format(self.data_folder, LOG_FOLDER, FILENAME),
mode='a', maxBytes=MAX_SIZE,
backupCount=MAX_FILES,
encoding=None, delay=0
)
file_logger = RotatingFileHandler(f'{self.data_folder}/{BlacklistFilter.log_folder}/{BlacklistFilter.filename}',
mode='a', maxBytes=BlacklistFilter.max_size, encoding=None, delay=0,
backupCount=BlacklistFilter.max_files)
file_logger.setLevel(self.log_level)
@@ -46,7 +65,7 @@ class VarkenLogger(object):
file_logger.setFormatter(logger_formatter)
# Add the console logger
console_logger = logging.StreamHandler()
console_logger = StreamHandler()
console_logger.setFormatter(logger_formatter)
console_logger.setLevel(self.log_level)
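
BlacklistFilter masks every configured secret down to its last five characters before a record leaves a handler; the substitution in isolation:

```
secret = 'xxxxxxxxxxxxxxxx'
message = f'Your api key is incorrect for {secret}'
masked = message.replace(secret, 8 * '*' + secret[-5:])
print(masked)  # Your api key is incorrect for ********xxxxx
```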