v1.6.4 Merge

Commit 35ff610c0d by Nicholas St. Germain, 2019-02-03 23:53:15 -06:00 (committed via GitHub)
17 changed files with 238 additions and 174 deletions


@@ -7,11 +7,12 @@ addons:
apt:
packages:
- docker-ce
branches:
only:
- master
deploy:
- provider: script
script: bash deploy.sh
on:
branch: master
- provider: script
script: bash deploy.sh
on:
branch: develop


@@ -1,7 +1,20 @@
# Change Log
## [v1.6.3](https://github.com/Boerderij/Varken/tree/v1.6.3) (2019-01-16)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v1.6.2...v1.6.3)
## [v1.6.4](https://github.com/Boerderij/Varken/tree/v1.6.4) (2019-02-03)
[Full Changelog](https://github.com/Boerderij/Varken/compare/1.6.3...v1.6.4)
**Fixed bugs:**
- \[BUG\] fstring in Varken.py Doesnt allow py version check [\#102](https://github.com/Boerderij/Varken/issues/102)
- \[BUG\] Unifi loadavg is str instead of float [\#101](https://github.com/Boerderij/Varken/issues/101)
- \[BUG\] requestedByAlias to added to Ombi structures [\#97](https://github.com/Boerderij/Varken/issues/97)
**Merged pull requests:**
- v1.6.4 Merge [\#100](https://github.com/Boerderij/Varken/pull/100) ([DirtyCajunRice](https://github.com/DirtyCajunRice))
## [1.6.3](https://github.com/Boerderij/Varken/tree/1.6.3) (2019-01-16)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v1.6.2...1.6.3)
**Implemented enhancements:**
@@ -13,7 +26,9 @@
**Merged pull requests:**
- v1.6.3 Merge [\#94](https://github.com/Boerderij/Varken/pull/92) ([DirtyCajunRice](https://github.com/DirtyCajunRice))
- double typo [\#96](https://github.com/Boerderij/Varken/pull/96) ([DirtyCajunRice](https://github.com/DirtyCajunRice))
- tweaks [\#95](https://github.com/Boerderij/Varken/pull/95) ([DirtyCajunRice](https://github.com/DirtyCajunRice))
- v1.6.3 Merge [\#94](https://github.com/Boerderij/Varken/pull/94) ([DirtyCajunRice](https://github.com/DirtyCajunRice))
## [v1.6.2](https://github.com/Boerderij/Varken/tree/v1.6.2) (2019-01-12)
[Full Changelog](https://github.com/Boerderij/Varken/compare/v1.6.1...v1.6.2)
@@ -68,7 +83,6 @@
**Implemented enhancements:**
- \[Feature Request\] Add issues from Ombi [\#70](https://github.com/Boerderij/Varken/issues/70)
- \[Feature Request\] Allow DNS Hostnames [\#66](https://github.com/Boerderij/Varken/issues/66)
- Replace static grafana configs with a Public Example [\#32](https://github.com/Boerderij/Varken/issues/32)
**Fixed bugs:**
@@ -123,7 +137,6 @@
**Closed issues:**
- Initial startup requires admin access to InfluxDB [\#53](https://github.com/Boerderij/Varken/issues/53)
- Ability to add custom tautulli port [\#49](https://github.com/Boerderij/Varken/issues/49)
**Merged pull requests:**
@@ -151,12 +164,7 @@
- use a config.ini instead of command-line flags [\#33](https://github.com/Boerderij/Varken/issues/33)
- Migrate crontab to python schedule package [\#31](https://github.com/Boerderij/Varken/issues/31)
- Consolidate missing and missing\_days in sonarr.py [\#30](https://github.com/Boerderij/Varken/issues/30)
- Database Withou any scripts [\#29](https://github.com/Boerderij/Varken/issues/29)
- Grafana dashboard json doesn't match format of readme screenshot? [\#28](https://github.com/Boerderij/Varken/issues/28)
- Ombi something new \[Request\] [\#26](https://github.com/Boerderij/Varken/issues/26)
- Users Online not populating [\#24](https://github.com/Boerderij/Varken/issues/24)
- Missing dashboard [\#23](https://github.com/Boerderij/Varken/issues/23)
- Is there a Docker Image available for these scripts? [\#22](https://github.com/Boerderij/Varken/issues/22)
- Support for Linux without ASA [\#21](https://github.com/Boerderij/Varken/issues/21)
**Merged pull requests:**
@@ -173,7 +181,6 @@
**Closed issues:**
- Tautulli.py not working. [\#18](https://github.com/Boerderij/Varken/issues/18)
- Issues with scripts [\#12](https://github.com/Boerderij/Varken/issues/12)
- issue with new tautulli.py [\#10](https://github.com/Boerderij/Varken/issues/10)
- ombi.py fails when attempting to update influxdb [\#9](https://github.com/Boerderij/Varken/issues/9)


@@ -4,12 +4,16 @@ LABEL maintainers="dirtycajunrice,samwiseg0"
ENV DEBUG="False"
COPY / /app
WORKDIR /app
COPY /requirements.txt /Varken.py /app/
COPY /varken /app/varken
COPY /data /app/data
RUN python3 -m pip install -r /app/requirements.txt
WORKDIR /app
CMD cp /app/data/varken.example.ini /config/varken.example.ini && python3 /app/Varken.py --data-folder /config
VOLUME /config


@@ -4,14 +4,18 @@ LABEL maintainers="dirtycajunrice,samwiseg0"
ENV DEBUG="False"
COPY / /app
WORKDIR /app
COPY /tmp/qemu-arm-static /usr/bin/qemu-arm-static
COPY /qemu-arm-static /usr/bin/qemu-arm-static
COPY /requirements.txt /Varken.py /app/
COPY /varken /app/varken
COPY /data /app/data
RUN python3 -m pip install -r /app/requirements.txt
WORKDIR /app
CMD cp /app/data/varken.example.ini /config/varken.example.ini && python3 /app/Varken.py --data-folder /config
VOLUME /config


@@ -4,14 +4,18 @@ LABEL maintainers="dirtycajunrice,samwiseg0"
ENV DEBUG="False"
COPY / /app
WORKDIR /app
COPY /tmp/qemu-aarch64-static /usr/bin/qemu-aarch64-static
COPY /qemu-aarch64-static /usr/bin/qemu-aarch64-static
COPY /requirements.txt /Varken.py /app/
COPY /varken /app/varken
COPY /data /app/data
RUN python3 -m pip install -r /app/requirements.txt
WORKDIR /app
CMD cp /app/data/varken.example.ini /config/varken.example.ini && python3 /app/Varken.py --data-folder /config
VOLUME /config


@@ -1,17 +0,0 @@
FROM arm32v7/python:3.7.2-slim
LABEL maintainers="dirtycajunrice,samwiseg0"
ENV DEBUG="False"
COPY / /app
COPY /tmp/qemu-arm-static /usr/bin/qemu-arm-static
RUN python3 -m pip install -r /app/requirements.txt
WORKDIR /app
CMD cp /app/data/varken.example.ini /config/varken.example.ini && python3 /app/Varken.py --data-folder /config
VOLUME /config


@@ -1,9 +1,12 @@
# Varken
<p align="center">
<img width="800" src="https://bin.cajun.pro/images/varken_full_banner.png">
</p>
[![Build Status](https://travis-ci.org/Boerderij/Varken.svg?branch=master)](https://travis-ci.org/Boerderij/Varken)
[![Discord](https://img.shields.io/badge/Discord-Varken-7289DA.svg?logo=discord&style=flat-square)](https://discord.gg/VjZ6qSM)
[![Discord](https://img.shields.io/discord/518970285773422592.svg?colorB=7289DA&label=Discord&logo=Discord&logoColor=7289DA&style=flat-square)](https://discord.gg/VjZ6qSM)
[![BuyMeACoffee](https://img.shields.io/badge/BuyMeACoffee-Donate-ff813f.svg?logo=CoffeeScript&style=flat-square)](https://www.buymeacoffee.com/varken)
[![Docker-Layers](https://images.microbadger.com/badges/image/boerderij/varken.svg)](https://microbadger.com/images/boerderij/varken)
[![Docker-Version](https://images.microbadger.com/badges/version/boerderij/varken.svg)](https://microbadger.com/images/boerderij/varken)
[![Release](https://img.shields.io/github/release/boerderij/varken.svg?style=flat-square)](https://microbadger.com/images/boerderij/varken)
[![Docker Pulls](https://img.shields.io/docker/pulls/boerderij/varken.svg)](https://hub.docker.com/r/boerderij/varken/)
Dutch for PIG. PIG is an Acronym for Plex/InfluxDB/Grafana
@@ -14,7 +17,7 @@ frontend
Requirements:
* [Python 3.6.7+](https://www.python.org/downloads/release/python-367/)
* Python3-pip
* [Python3-pip](https://pip.pypa.io/en/stable/installing/)
* [InfluxDB](https://www.influxdata.com/)
<p align="center">


@@ -98,13 +98,13 @@ if __name__ == "__main__":
SONARR = SonarrAPI(server, DBMANAGER)
if server.queue:
at_time = schedule.every(server.queue_run_seconds).seconds
at_time.do(QUEUE.put, SONARR.get_queue).tag(f"sonarr-{server.id}-get_queue")
at_time.do(QUEUE.put, SONARR.get_queue).tag("sonarr-{}-get_queue".format(server.id))
if server.missing_days > 0:
at_time = schedule.every(server.missing_days_run_seconds).seconds
at_time.do(QUEUE.put, SONARR.get_missing).tag(f"sonarr-{server.id}-get_missing")
at_time.do(QUEUE.put, SONARR.get_missing).tag("sonarr-{}-get_missing".format(server.id))
if server.future_days > 0:
at_time = schedule.every(server.future_days_run_seconds).seconds
at_time.do(QUEUE.put, SONARR.get_future).tag(f"sonarr-{server.id}-get_future")
at_time.do(QUEUE.put, SONARR.get_future).tag("sonarr-{}-get_future".format(server.id))
if CONFIG.tautulli_enabled:
GEOIPHANDLER = GeoIPHandler(DATA_FOLDER)
@@ -113,46 +113,46 @@ if __name__ == "__main__":
TAUTULLI = TautulliAPI(server, DBMANAGER, GEOIPHANDLER)
if server.get_activity:
at_time = schedule.every(server.get_activity_run_seconds).seconds
at_time.do(QUEUE.put, TAUTULLI.get_activity).tag(f"tautulli-{server.id}-get_activity")
at_time.do(QUEUE.put, TAUTULLI.get_activity).tag("tautulli-{}-get_activity".format(server.id))
if server.get_stats:
at_time = schedule.every(server.get_stats_run_seconds).seconds
at_time.do(QUEUE.put, TAUTULLI.get_stats).tag(f"tautulli-{server.id}-get_stats")
at_time.do(QUEUE.put, TAUTULLI.get_stats).tag("tautulli-{}-get_stats".format(server.id))
if CONFIG.radarr_enabled:
for server in CONFIG.radarr_servers:
RADARR = RadarrAPI(server, DBMANAGER)
if server.get_missing:
at_time = schedule.every(server.get_missing_run_seconds).seconds
at_time.do(QUEUE.put, RADARR.get_missing).tag(f"radarr-{server.id}-get_missing")
at_time.do(QUEUE.put, RADARR.get_missing).tag("radarr-{}-get_missing".format(server.id))
if server.queue:
at_time = schedule.every(server.queue_run_seconds).seconds
at_time.do(QUEUE.put, RADARR.get_queue).tag(f"radarr-{server.id}-get_queue")
at_time.do(QUEUE.put, RADARR.get_queue).tag("radarr-{}-get_queue".format(server.id))
if CONFIG.ombi_enabled:
for server in CONFIG.ombi_servers:
OMBI = OmbiAPI(server, DBMANAGER)
if server.request_type_counts:
at_time = schedule.every(server.request_type_run_seconds).seconds
at_time.do(QUEUE.put, OMBI.get_request_counts).tag(f"ombi-{server.id}-get_request_counts")
at_time.do(QUEUE.put, OMBI.get_request_counts).tag("ombi-{}-get_request_counts".format(server.id))
if server.request_total_counts:
at_time = schedule.every(server.request_total_run_seconds).seconds
at_time.do(QUEUE.put, OMBI.get_all_requests).tag(f"ombi-{server.id}-get_all_requests")
at_time.do(QUEUE.put, OMBI.get_all_requests).tag("ombi-{}-get_all_requests".format(server.id))
if server.issue_status_counts:
at_time = schedule.every(server.issue_status_run_seconds).seconds
at_time.do(QUEUE.put, OMBI.get_issue_counts).tag(f"ombi-{server.id}-get_issue_counts")
at_time.do(QUEUE.put, OMBI.get_issue_counts).tag("ombi-{}-get_issue_counts".format(server.id))
if CONFIG.sickchill_enabled:
for server in CONFIG.sickchill_servers:
SICKCHILL = SickChillAPI(server, DBMANAGER)
if server.get_missing:
at_time = schedule.every(server.get_missing_run_seconds).seconds
at_time.do(QUEUE.put, SICKCHILL.get_missing).tag(f"sickchill-{server.id}-get_missing")
at_time.do(QUEUE.put, SICKCHILL.get_missing).tag("sickchill-{}-get_missing".format(server.id))
if CONFIG.unifi_enabled:
for server in CONFIG.unifi_servers:
UNIFI = UniFiAPI(server, DBMANAGER)
at_time = schedule.every(server.get_usg_stats_run_seconds).seconds
at_time.do(QUEUE.put, UNIFI.get_usg_stats).tag(f"unifi-{server.id}-get_usg_stats")
at_time.do(QUEUE.put, UNIFI.get_usg_stats).tag("unifi-{}-get_usg_stats".format(server.id))
# Run all on startup
SERVICES_ENABLED = [CONFIG.ombi_enabled, CONFIG.radarr_enabled, CONFIG.tautulli_enabled, CONFIG.unifi_enabled,
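The hunks above swap every f-string tag for `str.format`. Per issue #102, an f-string anywhere in the module is a SyntaxError on Python < 3.6, so the interpreter aborts while parsing Varken.py and the runtime version check never gets a chance to run. A minimal sketch of the idea (the `job_tag` helper is hypothetical, not part of Varken):

```python
def job_tag(service, server_id, job):
    # str.format keeps the module parseable on Python < 3.6, so a
    # startup version check can still run and exit with a clear message
    # instead of dying with an opaque SyntaxError
    return "{}-{}-{}".format(service, server_id, job)

tag = job_tag("sonarr", 1, "get_queue")  # "sonarr-1-get_queue"
```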


@@ -1,11 +1,24 @@
#!/usr/bin/env bash
# Travis-ci convenience environment vars used:
# TRAVIS_BRANCH | branch name
# $TRAVIS_REPO_SLUG | organization/project (GitHub Capitalization)
# Travis-ci manual environment vars used:
# GITHUB_USER | github username
# GITHUB_TOKEN | $GITHUB_USER's token
# DOCKER_USER | docker username
# DOCKER_PASSWORD | $DOCKER_USER's password
VERSION="$(grep -i version varken/__init__.py | cut -d' ' -f3 | tr -d \")"
# Docker
GITHUB_USER='dirtycajunrice'
DOCKER_USER='dirtycajunrice'
PROJECT='varken'
NAMESPACE="boerderij/${PROJECT}"
# Set branch to latest if master, else keep the same
if [[ "$TRAVIS_BRANCH" == "master" ]]; then
BRANCH="latest"
else
BRANCH="$TRAVIS_BRANCH"
fi
# get the docker lowercase variant of the repo_name
REPOSITORY="$(echo $TRAVIS_REPO_SLUG | tr '[:upper:]' '[:lower:]')"
# Docker experimental config
echo '{"experimental":true}' | sudo tee /etc/docker/daemon.json
@@ -15,40 +28,52 @@ echo '{"experimental":"enabled"}' | sudo tee ~/.docker/config.json
sudo service docker restart
# Auth
echo "$DOCKER_PASSWORD" | docker login -u="$DOCKER_USER" --password-stdin
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USER" --password-stdin
# Latest x64
docker build -t "${NAMESPACE}:latest" . && \
docker push "${NAMESPACE}:latest" && \
# Versioned x64
docker tag "${NAMESPACE}:latest" "${NAMESPACE}:${VERSION}" && \
docker push "${NAMESPACE}:${VERSION}" && \
# x64 Arch
docker tag "${NAMESPACE}:latest" "${NAMESPACE}:latest-amd64" && \
docker push "${NAMESPACE}:latest-amd64"
# Prepare qemu for ARM builds
# Prepare QEMU for ARM builds
docker run --rm --privileged multiarch/qemu-user-static:register --reset
wget -P tmp/ "https://github.com/multiarch/qemu-user-static/releases/download/v3.1.0-2/qemu-aarch64-static"
wget -P tmp/ "https://github.com/multiarch/qemu-user-static/releases/download/v3.1.0-2/qemu-arm-static"
chmod +x tmp/qemu-aarch64-static tmp/qemu-arm-static
bash prebuild.sh
chmod +x qemu-aarch64-static qemu-arm-static
# ARM images
# Set tag based off of branch
if [[ "$BRANCH" == "latest" ]]; then
TAG="$VERSION"
else
TAG="$BRANCH"
fi
# AMDx64
docker build -t "${REPOSITORY}:${TAG}-amd64" . && \
docker push "${REPOSITORY}:${TAG}-amd64"
# Create Initial Manifests
docker manifest create "${REPOSITORY}:${TAG}" "${REPOSITORY}:${TAG}-amd64"
if [[ "$BRANCH" == "latest" ]]; then
docker manifest create "${REPOSITORY}:${BRANCH}" "${REPOSITORY}:${TAG}-amd64"
fi
# ARM variants
for i in $(ls *arm*); do
arch="$(echo ${i} | cut -d. -f2)"
# Latest
docker build -f "./Dockerfile.${arch}" -t "${NAMESPACE}:latest-${arch}" . && \
docker push "${NAMESPACE}:latest-${arch}" && \
# Versioned
docker tag "${NAMESPACE}:latest-${arch}" "${NAMESPACE}:${VERSION}-${arch}" && \
docker push "${NAMESPACE}:${VERSION}-${arch}"
ARCH="$(echo ${i} | cut -d. -f2)"
docker build -f "Dockerfile.${ARCH}" -t "${REPOSITORY}:${TAG}-${ARCH}" . && \
docker push "${REPOSITORY}:${TAG}-${ARCH}"
# Add variant to manifest
docker manifest create -a "${REPOSITORY}:${TAG}" "${REPOSITORY}:${TAG}-${ARCH}"
if [[ "$BRANCH" == "latest" ]]; then
docker manifest create -a "${REPOSITORY}:${BRANCH}" "${REPOSITORY}:${TAG}-${ARCH}"
fi
done
wget -O manifest-tool https://github.com/estesp/manifest-tool/releases/download/v0.9.0/manifest-tool-linux-amd64 && \
chmod +x manifest-tool && \
python3 manifest_generator.py && \
./manifest-tool --username "$DOCKER_USER" --password "$DOCKER_PASSWORD" push from-spec ".manifest.yaml"
docker manifest inspect "${REPOSITORY}:${TAG}" && \
docker manifest push "${REPOSITORY}:${TAG}"
if [[ "$BRANCH" == "latest" ]]; then
docker manifest inspect "${REPOSITORY}:${BRANCH}" && \
docker manifest push "${REPOSITORY}:${BRANCH}"
fi
# Git tags
git remote set-url origin "https://${GITHUB_USER}:${GITHUB_API_KEY}@github.com/${NAMESPACE}.git" && \
if [[ "$BRANCH" == "latest" ]]; then
git remote set-url origin "https://${GITHUB_USER}:${GITHUB_TOKEN}@github.com/${REPOSITORY}.git" && \
git tag "${VERSION}" && \
git push --tags
fi
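The branch-to-tag logic the rewritten deploy.sh applies (master publishes under the release version plus a `latest` manifest; any other branch publishes under its own name) can be sketched in Python. The `image_tag` helper is hypothetical; the real script uses shell conditionals:

```python
def image_tag(travis_branch, version):
    # mirrors deploy.sh: master maps to the "latest" branch name,
    # which in turn publishes under the release version tag
    branch = "latest" if travis_branch == "master" else travis_branch
    return version if branch == "latest" else branch

# builds from master are tagged with the version; feature branches
# are tagged with the branch name
master_tag = image_tag("master", "1.6.4")    # "1.6.4"
develop_tag = image_tag("develop", "1.6.4")  # "develop"
```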


@@ -1,37 +0,0 @@
import yaml
from varken import VERSION
org = 'boerderij'
project = 'varken'
namespace = f"{org}/{project}"
yaml_arr = []
tags = ['latest', VERSION]
# Docker image, arch, variant, os
arch_list = [('arm', 'arm', 'v6', 'linux'),
('armhf', 'arm', 'v7', 'linux'),
('arm64', 'arm64', 'v8', 'linux'),
('amd64', 'amd64', None, 'linux')]
for tag in tags:
yaml_doc = {
'image': f'{namespace}:{tag}',
'manifests': []
}
for arch in arch_list:
info = {
'image': f"{namespace}:{tag}-{arch[0]}",
'platform': {
'architecture': arch[1],
'os': arch[3]
}
}
if arch[2]:
info['platform']['variant'] = arch[2]
yaml_doc['manifests'].append(info)
yaml_arr.append(yaml_doc)
with open(f".manifest.yaml", 'w') as file:
yaml.dump_all(yaml_arr, file, default_flow_style=False)

prebuild.sh (new file, 3 additions)

@@ -0,0 +1,3 @@
#!/usr/bin/env bash
wget -q "https://github.com/multiarch/qemu-user-static/releases/download/v3.1.0-2/qemu-aarch64-static"
wget -q "https://github.com/multiarch/qemu-user-static/releases/download/v3.1.0-2/qemu-arm-static"

varken.xml (new file, 55 additions)

@@ -0,0 +1,55 @@
<?xml version="1.0"?>
<!--Work In Progress-->
<Container version="2">
<Name>Varken</Name>
<Repository>boerderij/varken</Repository>
<Registry>https://hub.docker.com/r/boerderij/varken/~/dockerfile/</Registry>
<Network>bridge</Network>
<MyIP/>
<Shell>sh</Shell>
<Privileged>false</Privileged>
<Support>https://discord.gg/VjZ6qSM</Support>
<Project/>
<Overview>
Varken is a standalone command-line utility to aggregate data from the Plex ecosystem into InfluxDB. Examples use Grafana for a frontend
</Overview>
<Category/>
<WebUI/>
<TemplateURL/>
<Icon>Pig.png</Icon>
<ExtraParams/>
<PostArgs/>
<CPUset/>
<DonateText/>
<DonateLink/>
<Description>
Varken is a standalone command-line utility to aggregate data from the Plex ecosystem into InfluxDB. Examples use Grafana for a frontend
</Description>
<Networking>
<Mode>bridge</Mode>
<Publish/>
</Networking>
<Data>
<Volume>
<HostDir>/mnt/user/appdata/varken</HostDir>
<ContainerDir>/config</ContainerDir>
<Mode>rw</Mode>
</Volume>
</Data>
<Environment>
<Variable>
<Value>99</Value>
<Name>PGID</Name>
<Mode/>
</Variable>
<Variable>
<Value>100</Value>
<Name>PUID</Name>
<Mode/>
</Variable>
</Environment>
<Labels/>
<Config Name="PGID" Target="PGID" Default="" Mode="" Description="Container Variable: PGID" Type="Variable" Display="always" Required="true" Mask="false">99</Config>
<Config Name="PUID" Target="PUID" Default="" Mode="" Description="Container Variable: PUID" Type="Variable" Display="always" Required="true" Mask="false">100</Config>
<Config Name="Varken DataDir" Target="/config" Default="" Mode="rw" Description="Container Path: /config" Type="Path" Display="advanced-hide" Required="true" Mask="false">/mnt/user/appdata/varken</Config>
</Container>


@@ -1,2 +1,2 @@
VERSION = "1.6.3"
VERSION = "1.6.4"
BRANCH = 'master'


@@ -10,8 +10,10 @@ class DBManager(object):
self.influx = InfluxDBClient(host=self.server.url, port=self.server.port, username=self.server.username,
password=self.server.password, ssl=self.server.ssl, database='varken',
verify_ssl=self.server.verify_ssl)
version = self.influx.request('ping', expected_response_code=204).headers['X-Influxdb-Version']
databases = [db['name'] for db in self.influx.get_list_database()]
self.logger = getLogger()
self.logger.info('Influxdb version: %s', version)
if 'varken' not in databases:
self.logger.info("Creating varken database")
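The added ping call reads the server version from the `X-Influxdb-Version` response header of InfluxDB's `/ping` endpoint, which answers 204 No Content. The header handling can be sketched with a hypothetical `influx_version` helper (the real code goes through the influxdb client's `request` method):

```python
def influx_version(headers):
    # /ping returns 204 with the server version carried in a header;
    # fall back to a placeholder if the header is absent
    return headers.get('X-Influxdb-Version', 'unknown')
```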


@@ -42,6 +42,7 @@ class SickChillAPI(object):
sxe = f'S{show.season:0>2}E{show.episode:0>2}'
hash_id = hashit(f'{self.server.id}{show.show_name}{sxe}')
missing_types = [(0, 'future'), (1, 'later'), (2, 'soon'), (3, 'today'), (4, 'missed')]
try:
influx_payload.append(
{
"measurement": "SickChill",
@@ -60,5 +61,8 @@
}
}
)
except IndexError as e:
self.logger.error('Error building payload for sickchill. Discarding. Error: %s', e)
if influx_payload:
self.dbmanager.write_points(influx_payload)
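With the try/except added above, one malformed show entry no longer aborts the whole batch, and `write_points` is only called when something was actually collected. The pattern, sketched with hypothetical helpers:

```python
def build_points(shows, build_one, logger=None):
    # accumulate payloads, skipping entries whose data is incomplete
    points = []
    for show in shows:
        try:
            points.append(build_one(show))
        except IndexError as e:
            # log and discard the bad entry, keep the rest of the batch
            if logger:
                logger.error('Error building payload. Discarding. Error: %s', e)
    return points
```

Only writing when `points` is non-empty avoids an unnecessary (and possibly failing) call to the database with an empty payload.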


@@ -138,6 +138,7 @@ class OmbiTVRequest(NamedTuple):
title: str = None
totalSeasons: int = None
tvDbId: int = None
requestedByAlias: str = None
class OmbiMovieRequest(NamedTuple):
@@ -173,6 +174,7 @@ class OmbiMovieRequest(NamedTuple):
title: str = None
langCode: str = None
languageCode: str = None
requestedByAlias: str = None
# Sonarr
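Adding `requestedByAlias` with a `None` default matters because Varken constructs these NamedTuples from Ombi's JSON payloads; once the API started returning the new field, construction raised a TypeError for an unexpected keyword (issue #97). A trimmed sketch (only a few of the real fields shown):

```python
from typing import NamedTuple

class OmbiTVRequest(NamedTuple):
    # trimmed to a few fields for illustration
    title: str = None
    totalSeasons: int = None
    requestedByAlias: str = None

# payloads with or without the new field now both construct cleanly
with_alias = OmbiTVRequest(**{'title': 'Show', 'requestedByAlias': 'alice'})
without_alias = OmbiTVRequest(**{'title': 'Show'})
```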


@@ -47,6 +47,7 @@ class UniFiAPI(object):
self.logger.error("Could not find a USG named %s from your UniFi Controller", self.server.usg_name)
return
try:
influx_payload = [
{
"measurement": "UniFi",
@@ -61,15 +62,18 @@
"rx_bytes_current": device['wan1']['rx_bytes-r'],
"tx_bytes_total": device['wan1']['tx_bytes'],
"tx_bytes_current": device['wan1']['tx_bytes-r'],
"speedtest_latency": device['speedtest-status']['latency'],
"speedtest_download": device['speedtest-status']['xput_download'],
"speedtest_upload": device['speedtest-status']['xput_upload'],
"cpu_loadavg_1": device['sys_stats']['loadavg_1'],
"cpu_loadavg_5": device['sys_stats']['loadavg_5'],
"cpu_loadavg_15": device['sys_stats']['loadavg_15'],
"cpu_util": device['system-stats']['cpu'],
"mem_util": device['system-stats']['mem'],
# Commenting speedtest out until Unifi gets their shit together
# "speedtest_latency": device['speedtest-status']['latency'],
# "speedtest_download": device['speedtest-status']['xput_download'],
# "speedtest_upload": device['speedtest-status']['xput_upload'],
"cpu_loadavg_1": float(device['sys_stats']['loadavg_1']),
"cpu_loadavg_5": float(device['sys_stats']['loadavg_5']),
"cpu_loadavg_15": float(device['sys_stats']['loadavg_15']),
"cpu_util": float(device['system-stats']['cpu']),
"mem_util": float(device['system-stats']['mem']),
}
}
]
self.dbmanager.write_points(influx_payload)
except KeyError as e:
self.logger.error('Error building payload for unifi. Discarding. Error: %s', e)
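The `float(...)` casts above address issue #101: the UniFi controller reports load averages and utilization as strings, and since InfluxDB fixes a field's type on first write, a string `loadavg` produces a string field that can't be graphed. A sketch of the casting, using a hypothetical `usg_cpu_fields` helper and sample controller data:

```python
def usg_cpu_fields(device):
    # the controller returns these values as strings; cast so InfluxDB
    # stores numeric fields instead of string fields
    sys_stats = device['sys_stats']
    system_stats = device['system-stats']
    return {
        'cpu_loadavg_1': float(sys_stats['loadavg_1']),
        'cpu_loadavg_5': float(sys_stats['loadavg_5']),
        'cpu_loadavg_15': float(sys_stats['loadavg_15']),
        'cpu_util': float(system_stats['cpu']),
        'mem_util': float(system_stats['mem']),
    }
```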