- Installing and configuring yaVDR with Ansible
- Playbooks
- Hosts
- Group Variables
- Roles
- yavdr-common
- vdr
- STARTED yavdr-network
- STARTED nfs-server
- yavdr-remote
- automatic X-server configuration
- yavdr-xorg
- samba-install
- samba-config
- grub-config
- autoinstall-drivers
- autoinstall-satip
- autoinstall-targavfd
- autoinstall-imonlcd
- autoinstall-libcecdaemon
- autoinstall-pvr350
- autoinstall-dvbhddevice
- autoinstall-dvbsddevice
- autoinstall-plugins
- Handlers
Installing and configuring yaVDR with Ansible
This is an experimental feature which allows you to set up a yaVDR installation on top of a normal Ubuntu Server 16.04.x installation using Ansible.
This manual is written in org-mode for Emacs and can regenerate the complete Ansible configuration if you call org-babel-tangle from within Emacs.
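If you prefer not to start an interactive Emacs session, a batch invocation can tangle the files as well. This is only a sketch; the file name below is a placeholder for the actual org file of this manual:
emacs --batch --eval "(require 'org)" \
      --eval '(org-babel-tangle-file "yavdr-ansible.org")'  # replace with the real file name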
Playbooks
yavdr07.yml
The yavdr07.yml playbook sets up a fully-featured yaVDR installation:
---
# file: yavdr07.yml
# this playbook sets up a complete yaVDR 0.7 installation
- name: set up yaVDR
  hosts: all
  become: true
  roles:
    - yavdr-common          # install and configure the basic system
    - vdr                   # install vdr and related packages
    - yavdr-network         # enable network client capabilities
    - samba-install         # install samba server
    - samba-config          # configure samba server
    #- nfs-server           # install nfs server
    #- nfs-config           # configure nfs server
    - yavdr-xorg            # graphical session
    - yavdr-remote          # remote configuration files, services and scripts
    - grub-config           # configure grub
    - autoinstall-satip     # install vdr-plugin-satip if a Sat>IP server has been found
    - autoinstall-targavfd
    - autoinstall-imonlcd
  handlers:
    - include: handlers/main.yml
yavdr07-headless.yml
For a headless server installation, yavdr07-headless.yml is a good choice:
---
# file: yavdr07-headless.yml
# this playbook sets up a headless yaVDR 0.7 installation
- name: set up a headless yaVDR server
  hosts: all
  become: true
  roles:
    - yavdr-common
    - vdr
    - yavdr-network
    - samba-install
    - samba-config
    - nfs-server
    - nfs-config
    - grub-config
  handlers:
    - include: handlers/main.yml
Hosts
This playbook can either run the installation on the local machine or on any other PC in the network that is reachable via SSH. Simply add the host names or IP addresses to the hosts file in the respective section:
[yavdr-full]
localhost ansible_connection=local
#192.168.1.116
[yavdr-headless]
[yavdr-client]
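With the inventory in place, the installation can be started from the directory that contains the playbooks. A minimal sketch, assuming the inventory is saved as hosts and sudo asks for a password on the target:
ansible-playbook -i hosts yavdr07.yml --ask-become-pass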
Group Variables
default text for templates
# file: group_vars/all
# this is the standard text to put in templates
ansible_managed_file: "*** YAVDR: ANSIBLE MANAGED FILE ***"
PPAs
branch: unstable
ppa_owner: 'ppa:yavdr'
# a list of all package repositories to be added to the installation
repositories:
- '{{ ppa_owner }}/main'
- '{{ ppa_owner }}/unstable-main'
- '{{ ppa_owner }}/{{branch}}-vdr'
- '{{ ppa_owner }}/{{branch}}-yavdr'
- '{{ ppa_owner }}/{{branch}}-kodi'
Drivers
drivers:
  sundtek: auto
  ddvb-dkms: auto
Media directories
# dictionary of directories for (shared) files. Automatically exported via NFS and Samba if those roles are enabled
media_dirs:
  audio: /srv/audio
  video: /srv/video
  pictures: /srv/pictures
  files: /srv/files
VDR user, directories, special configuration and plugins
# properties of the user vdr and vdr-related options
vdr:
  user: vdr
  group: vdr
  uid: 666
  gid: 666
  home: /var/lib/vdr
  recdir: /srv/vdr/video
  hide_first_recording_level: false
  safe_dirnames: true
  override_vdr_charset: false
# add the vdr plugins you want to install
vdr_plugins:
- vdr-plugin-devstatus
- vdr-plugin-markad
- vdr-plugin-restfulapi
- vdr-plugin-softhddevice
Samba
samba:
  workgroup: YAVDR
Additional packages
# additional packages you want to install
extra_packages:
- vim
- tree
- w-scan
- bpython3
System pre-configuration
system:
  shutdown: poweroff
  grub:
    timeout: 0
  boot_options: quiet nosplash
Roles
yavdr-common
This role is used to set up a basic yaVDR installation. It creates the required directories and installs the VDR together with other useful packages.
default variables
This section is for reference only; please use the files in group_vars for customizations.
---
# file: roles/yavdr-common/defaults/main.yml
Repositories
You can set a list of package repositories which provide the necessary packages. Feel free to use your own PPAs if you need special customizations of the VDR and its plugins.
branch: unstable
repositories:
- 'ppa:yavdr/main'
- 'ppa:yavdr/unstable-main'
- 'ppa:yavdr/{{branch}}-vdr'
- 'ppa:yavdr/{{branch}}-kodi'
- 'ppa:yavdr/{{branch}}-yavdr'
Drivers
Automatically installed drivers can be very useful, but if you know you need a certain driver, you can simply set its value to true. If you don't want a driver to be installed, set its value to false.
drivers:
  sundtek: auto
  ddvb-dkms: auto
Additional Packages
Add any additional packages you would like to have installed to this list:
extra_packages:
- vim
- tree
- w-scan
VDR
This section allows you to set the recording directory, the user and group that run the VDR, and its home directory.
- user: the vdr user name
- group: the main group for the user vdr
- uid: the user id for the user vdr
- gid: the group id for the group vdr
- home: the home directory for the user vdr
- recdir: the recording directory used by VDR
- hide_first_recording_level: let the VDR hide the first directory level of its recording directory so the content of multiple directories is shown merged together
- safe_dirnames: replace special characters which are not compatible with Windows file systems and Samba shares
- override_vdr_charset: workaround for channels with weird EPG encodings, e.g. Sky
vdr:
  user: vdr
  group: vdr
  uid: 666
  gid: 666
  home: /var/lib/vdr
  recdir: /srv/vdr/video
  hide_first_recording_level: false
  safe_dirnames: true
  override_vdr_charset: false
tasks
yavdr-common executes the following tasks:
main.yml
Disable default installation of recommended packages
This task prevents apt from automatically installing all recommended dependencies of a package:
- name: apt | prevent automatic installation of recommended packages
  template:
    src: templates/90-norecommends.j2
    dest: /etc/apt/apt.conf.d/90norecommends
Setting up the package repositories
- name: add yaVDR PPAs
  apt_repository:
    repo: '{{ item }}'
    state: present
    update_cache: yes
  with_items: '{{ repositories }}'

- name: upgrade existing packages
  apt:
    upgrade: dist
    update_cache: yes
Installing essential packages
- name: apt | install basic packages
  apt:
    name: '{{ item }}'
    state: present
    install_recommends: no
  with_items:
    - anacron
    - at
    - bash-completion
    - biosdevname
    - linux-firmware
    - psmisc
    - python-kmodpy
    - python3-usb
    - software-properties-common
    - ssh
    - ubuntu-drivers-common
    - wget
    - wpasupplicant
    - usbutils
    - xfsprogs
Install and execute local fact scripts
- name: create directory for local facts
  file:
    dest: /etc/ansible/facts.d
    state: directory

- name: copy facts script for USB- and PCI(e)-IDs
  copy:
    src: files/hardware.fact.py
    dest: /etc/ansible/facts.d/hardware.fact
    mode: '0775'

- name: copy facts script for loaded modules
  copy:
    src: files/modules.fact.py
    dest: /etc/ansible/facts.d/modules.fact
    mode: '0775'

- name: copy facts script for Sat>IP server detection
  copy:
    src: files/satip.fact.py
    dest: /etc/ansible/facts.d/satip.fact
    mode: '0775'

- name: reload ansible local facts
  setup: filter=ansible_local
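After this step the custom facts are available below ansible_local. They can be inspected at any time with the setup module, for example on the local machine:
ansible localhost -m setup -a "filter=ansible_local"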
files
hardware facts
# This script returns a list of Vendor- and Product-IDs for all connected usb
# and pci(e) devices in json format
import glob
import json
import os
import sys

import usb.core

from collections import namedtuple

Device = namedtuple("Device", ['idVendor', 'idProduct'])


def get_pci_devices():
    for device in glob.glob('/sys/devices/pci*/*:*:*/'):
        with open(os.path.join(device, 'device')) as d:
            product_id = int(d.read().strip(), 16)
        with open(os.path.join(device, 'vendor')) as d:
            vendor_id = int(d.read().strip(), 16)
        yield Device(idVendor=vendor_id, idProduct=product_id)


def format_device_list(iterator):
    return ["{:04x}:{:04x}".format(d.idVendor, d.idProduct) for d in iterator]


if __name__ == '__main__':
    usb_devices = format_device_list(usb.core.find(find_all=True))
    pci_devices = format_device_list(get_pci_devices())
    print(json.dumps({'usb': usb_devices, 'pci': pci_devices}))
module facts
# This script returns a list of currently loaded kernel modules
from __future__ import print_function
import json
import kmodpy
k = kmodpy.Kmod()
print(json.dumps([module[0] for module in k.loaded()]))
satip facts
# This script sends a multicast message and awaits responses by Sat>IP servers.
# returns the boolean variable 'satip_detected' as json
import json
import socket
import sys
import time

SSDP_ADDR = "239.255.255.250"
SSDP_PORT = 1900
# SSDP_MX = max delay for server response
# a value of 2s is recommended by the SAT>IP specification 1.2.2
SSDP_MX = 2
SSDP_ST = "urn:ses-com:device:SatIPServer:1"

ssdpRequest = "\r\n".join((
    "M-SEARCH * HTTP/1.1",
    "HOST: %s:%d" % (SSDP_ADDR, SSDP_PORT),
    "MAN: \"ssdp:discover\"",
    "MX: %d" % (SSDP_MX),
    "ST: %s" % (SSDP_ST),
    "\r\n"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# according to Sat>IP Specification 1.2.2, p. 20
# a client should send three requests within 100 ms with a ttl of 2
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.settimeout(SSDP_MX + 0.5)

for _ in range(3):
    sock.sendto(ssdpRequest.encode('ascii'), (SSDP_ADDR, SSDP_PORT))
    time.sleep(0.03)

try:
    response = sock.recv(1000).decode()
    if response and "SERVER:" in response:
        got_response = True
    else:
        raise ValueError('No satip server detected')
except (socket.timeout, ValueError):
    got_response = False
finally:
    print(json.dumps(
        {'satip_detected': got_response}
    ))
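To test the detection by hand, the installed fact script can be run directly; assuming Python 3 is available on the target, it prints the same JSON that Ansible later picks up as a local fact:
python3 /etc/ansible/facts.d/satip.fact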
templates
{{ ansible_managed_file | comment('c') }}
// Recommends are as of now still abused in many packages
APT::Install-Recommends "0";
APT::Install-Suggests "0";
vdr
tasks
install the basic vdr packages
---
# file: roles/vdr/tasks/main.yml
- name: apt | install basic vdr packages
  apt:
    name: '{{ item }}'
    state: present
    install_recommends: no
  with_items:
    - vdr
    - vdrctl
    - vdr-plugin-dbus2vdr
Add svdrp/svdrp-disc to /etc/services
- name: add svdrp to /etc/services
  lineinfile:
    dest: /etc/services
    state: present
    line: "svdrp 6419/tcp"

- name: add svdrp-disc to /etc/services
  lineinfile:
    dest: /etc/services
    state: present
    line: "svdrp-disc 6419/udp"
Set up the recording directory for the vdr user
- name: create vdr recdir
  file:
    state: directory
    owner: '{{ vdr.user }}'
    group: '{{ vdr.group }}'
    mode: '0775'
    dest: '{{ vdr.recdir }}'

- name: set option to use hide-first-recording-level patch
  blockinfile:
    dest: /etc/vdr/conf.d/04-vdr-hide-first-recordinglevel.conf
    create: true
    block: |
      [vdr]
      --hide-first-recording-level
  when: vdr.hide_first_recording_level

- name: create local dir in recdir
  file:
    state: directory
    owner: '{{ vdr.user }}'
    group: '{{ vdr.group }}'
    mode: '0775'
    dest: '{{ vdr.recdir }}/local'
  when: vdr.hide_first_recording_level
Install additional vdr plugins
The additional plugins to install can be set in the variable {{ vdr_plugins }} in the group variables:
- name: install additional vdr plugins
  apt:
    name: '{{ item }}'
    state: present
    install_recommends: no
  with_items: '{{ vdr_plugins | default([]) }}'
Set up the directories for files in /srv
- name: create directories for media files
  file:
    state: directory
    owner: '{{ vdr.user }}'
    group: '{{ vdr.group }}'
    mode: '0777'
    dest: '{{ item }}'
  with_items:
    - /srv/videos
    - /srv/music
    - /srv/picture
    - /srv/backups
STARTED yavdr-network
default variables
install_avahi: true
install_epgd: true
install_mariadb: true
install_nfs_client: true
install_nfs_server: true
install_samba_client: true
install_samba_server: true
tasks
---
# this playbook sets up network services for a yaVDR installation
- name: install network packages
  apt:
    name: '{{ item }}'
    state: present
    install_recommends: no
  with_items:
    - avahi-daemon
    - avahi-utils
    - biosdevname
    - ethtool
    - nfs-common
    - vdr-addon-avahi-linker
    - wakeonlan

# Does this really work? We need a way to check whether an interface supports WOL - a Python script?
# - name: check WOL capabilities of network interfaces
#   shell: 'ethtool {{ item }} | grep -Po "(?<=Supports\sWake-on:\s).*$"'
#   register: wol
#   with_items: '{% for interface in ansible_interfaces if interface != "lo" and interface != "bond0" %}'
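As a starting point for the open question above, here is a rough shell sketch (not part of the tangled configuration) that lists all interfaces whose ethtool output advertises MagicPacket support. It assumes ethtool is installed and that the script runs as root, since ethtool may hide the Wake-on fields otherwise:
#!/bin/bash
# print all interfaces (except lo) that report Wake-on-LAN support via MagicPacket ("g")
for dev in /sys/class/net/*; do
    iface="$(basename "$dev")"
    [ "$iface" = "lo" ] && continue
    if ethtool "$iface" 2>/dev/null | grep -q "Supports Wake-on:.*g"; then
        echo "$iface"
    fi
done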
STARTED nfs-server
tasks
- name: install and configure nfs-kernel-server
  apt:
    name: "{{ item }}"
    state: present
    install_recommends: no
  with_items:
    - nfs-kernel-server
  when: install_nfs_server
TODO yavdr-remote
default variables
tasks
templates
files
TODO automatic X-server configuration
- detect connected displays (see the sketch after this list)
- read the EDID data from the displays
- create a xorg.conf for nvidia/intel/amd GPUs
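A possible building block for the first two points, sketched here under the assumption that a KMS-capable driver is loaded and the kernel exposes the DRM sysfs interface; it lists connected outputs and dumps their raw EDID blobs for later parsing:
#!/bin/bash
# list connected DRM outputs and store their EDID data
mkdir -p /tmp/edid
for status in /sys/class/drm/card*-*/status; do
    connector="$(basename "$(dirname "$status")")"
    if [ "$(cat "$status")" = "connected" ]; then
        echo "connected: $connector"
        # the edid file is empty if the display provides no EDID
        cat "$(dirname "$status")/edid" > "/tmp/edid/$connector.bin"
    fi
done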
templates
# file: roles/yavdr-xorg/templates/vdr-xorg.conf
# {{ ansible_managed_file }}
[Unit]
After=x@vt7.service
Wants=x@vt7.service
BindsTo=x@vt7.service

#!/bin/bash
# {{ ansible_managed_file }}
exec openbox-session

env | grep "DISPLAY\|DBUS_SESSION_BUS_ADDRESS\|XDG_RUNTIME_DIR" > ~/.session-env
systemctl --user import-environment
files
yavdr-xorg
default variables
tasks
---
# file: roles/yavdr-xorg/tasks/main.yml
- name: install packages for xorg
  apt:
    name: '{{ item }}'
    state: present
  with_items:
    - xorg
    - xserver-xorg-video-all
    - xserver-xorg-input-all
    - xlogin
    - xterm
    #- yavdr-xorg
    - openbox

- name: create folders for user session
  file:
    state: directory
    dest: '{{ item }}'
    mode: '0775'
    owner: '{{ vdr.user }}'
    group: '{{ vdr.group }}'
  with_items:
    - '{{ vdr.home }}/.config/systemd/user'
    - '{{ vdr.home }}/.config/openbox/autostart'

### TODO: move to yavdr-xorg package? ###
- name: create folder for customizations of vdr.service
  file:
    state: directory
    dest: /etc/systemd/system/vdr.service.d
    mode: '0775'

- name: add dependency to X-server for vdr.service using a drop-in
  template:
    src: templates/vdr-xorg.conf
    dest: /etc/systemd/system/vdr.service.d/
### END TODO ###

- name: create .xinitrc for vdr user
  template:
    src: 'templates/.xinitrc.j2'
    dest: '/var/lib/vdr/.xinitrc'
    mode: '0755'
    owner: '{{ vdr.user }}'
    group: '{{ vdr.group }}'

- name: populate autostart for openbox
  template:
    src: 'templates/autostart.j2'
    dest: '/var/lib/vdr/.config/openbox/autostart'
    mode: '0755'
    owner: '{{ vdr.user }}'
    group: '{{ vdr.group }}'

- name: set a login shell for the vdr user
  user:
    name: '{{ vdr.user }}'
    shell: '/bin/bash'
    state: present
    uid: '{{ vdr.uid }}'
    groups: '{{ vdr.group }}'
    append: yes

- name: enable and start xlogin for the vdr user
  systemd:
    daemon_reload: yes
    name: 'xlogin@{{ vdr.user }}'
    enabled: yes
    state: started
samba-install
tasks
# file: roles/samba-install/tasks/main.yml
- name: install samba server
  apt:
    name: '{{ item }}'
    state: present
    install_recommends: no
  with_items:
    - samba
    - samba-common
    - samba-common-bin
    - tdb-tools
samba-config
tasks
# file: roles/samba-config/tasks/main.yml
# TODO:
#- name: divert original smb.conf
- name: create smb.conf.custom
  file:
    state: touch
    dest: '/etc/samba/smb.conf.custom'
  notify: [ 'Restart Samba' ]

- name: expand template for smb.conf
  template:
    src: 'templates/smb.conf.j2'
    dest: '/etc/samba/smb.conf'
    #validate: 'testparm -s %s'
  notify: [ 'Restart Samba' ]
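Before enabling the commented-out validate option it can be helpful to check the expanded configuration by hand; testparm ships with samba and prints the effective settings:
testparm -s /etc/samba/smb.conf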
templates
# {{ ansible_managed_file }}
#======================= Global Settings =======================
[global]
## Browsing/Identification ###
# Change this to the workgroup/NT-domain name your Samba server will be part of
workgroup = {{ samba.workgroup }}
# server string is the equivalent of the NT Description field
server string = %h server (Samba, Ubuntu)
# This will prevent nmbd from searching for NetBIOS names through DNS.
dns proxy = no
#### Debugging/Accounting ####
# This tells Samba to use a separate log file for each machine
# that connects
log file = /var/log/samba/log.%m
# Cap the size of the individual log files (in KiB).
max log size = 1000
# We want Samba to log a minimum amount of information to syslog. Everything
# should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log
# through syslog you should set the following parameter to something higher.
syslog = 0
# Do something sensible when Samba crashes: mail the admin a backtrace
panic action = /usr/share/samba/panic-action %d
####### Authentication #######
# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
# security = user
# You may wish to use password encryption. See the section on
# 'encrypt passwords' in the smb.conf(5) manpage before enabling.
encrypt passwords = true
# If you are using encrypted passwords, Samba will need to know what
# password database type you are using.
passdb backend = tdbsam
obey pam restrictions = yes
# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
unix password sync = yes
# For Unix password sync to work on a Debian GNU/Linux system, the following
# parameters must be set (thanks to Ian Kahan <<kahan@informatik.tu-muenchen.de> for
# sending the correct chat script for the passwd program in Debian Sarge).
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
# This boolean controls whether PAM will be used for password changes
# when requested by an SMB client instead of the program listed in
# 'passwd program'. The default is 'no'.
pam password change = yes
# This option controls how unsuccessful authentication attempts are mapped
# to anonymous connections
map to guest = bad user
{% for name, path in media_dirs.items() %}
[{{ name }}]
path = {{ path }}
comment = {{ name }} on %h
browseable = yes
guest ok = yes
writeable = yes
create mode = 0664
directory mode = 0775
force user = {{ vdr.user }}
force group = {{ vdr.group }}
follow symlinks = yes
wide links = yes
{% endfor %}
include = /etc/samba/smb.conf.custom
grub-config
default variables
system:
  shutdown: poweroff
  grub:
    timeout: 0
tasks
- name: custom grub configuration for timeout and reboot halt
  template:
    src: templates/50_custom.j2
    dest: /etc/grub.d/50_custom
    mode: '0775'
  notify: [ 'Update GRUB' ]

# TODO: add special case if plymouth is used
- name: let the system boot quietly
  lineinfile:
    dest: /etc/default/grub
    state: present
    regexp: '^(GRUB_CMDLINE_LINUX_DEFAULT=")'
    line: '\1{{ system.grub.boot_options }}"'
    backrefs: yes
  notify: [ 'Update GRUB' ]
templates
#!/bin/sh
exec tail -n +3 $0
# This file is configured by the ansible configuration for yaVDR
{% if system.shutdown is defined and system.shutdown == 'reboot' %}
menuentry "PowerOff" {
    halt
}
{% endif %}
if [ "${recordfail}" = 1 ]; then
  set timeout={{ 3 if system.grub.timeout < 3 else system.grub.timeout }}
else
  set timeout={{ system.grub.timeout if system.grub.timeout is defined else 0 }}
fi
handlers
- name: Update GRUB
  command: update-grub
  register: grub_register_update
  failed_when: ('error' in grub_register_update.stderr)

# TODO: Do we need to use grub-set-default?
# https://github.com/yavdr/yavdr-utils/blob/master/events/actions/update-grub
TODO autoinstall-drivers
sundtek
autoinstall-satip
tasks
---
# file roles/autoinstall-satip/tasks/main.yml
- name: Display all variables/facts known for a host
  debug:
    var: ansible_local
    verbosity: 1

- name: apt | install vdr-plugin-satip if a Sat>IP server has been detected
  apt:
    name: vdr-plugin-satip
  when: ansible_local.satip.satip_detected
autoinstall-targavfd
tasks
---
# file roles/autoinstall-targavfd/tasks/main.yml
- name: apt | install vdr-plugin-targavfd if connected
  apt:
    name: vdr-plugin-targavfd
  when: '"19c2:6a11" in ansible_local.hardware.usb'
autoinstall-imonlcd
tasks
---
# file roles/autoinstall-imonlcd/tasks/main.yml
- name: apt | install vdr-plugin-imonlcd if connected
  apt:
    name: vdr-plugin-imonlcd
  when: '"15c2:0038" in ansible_local.hardware.usb or "15c2:ffdc" in ansible_local.hardware.usb'
autoinstall-libcecdaemon
tasks
---
# file roles/autoinstall-libcec-daemon/tasks/main.yml
- name: apt | install libcec-daemon if connected
  apt:
    name: libcec-daemon
  when: '"2548:1002" in ansible_local.hardware.usb'
autoinstall-pvr350
tasks
---
# file roles/autoinstall-pvr350/tasks/main.yml
- name: apt | install vdr-plugin-pvr350 if connected
  apt:
    name: vdr-plugin-pvr350
  when: '"19c2:6a11" in ansible_local.hardware.pci'
TODO autoinstall-dvbhddevice
Problem: where does the driver come from (AFAIK it is not yet in the mainline kernel)? The firmware should be contained in the yavdr-firmware package.
tasks
---
# file roles/autoinstall-dvbhddevice/tasks/main.yml
- name: apt | install vdr-plugin-dvbhddevice if connected
  apt:
    name: vdr-plugin-dvbhddevice
  when: '"13c2:300a" in ansible_local.hardware.pci or "13c2:300b" in ansible_local.hardware.pci'
autoinstall-dvbsddevice
tasks
---
# file roles/autoinstall-dvbsddevice/tasks/main.yml
- name: apt | install vdr-plugin-dvbsddevice if module is loaded
  apt:
    name: vdr-plugin-dvbsddevice
  when: '"dvb_ttpci" in ansible_local.modules'  # module name of the full-featured SD DVB cards (assumption)
TODO autoinstall-plugins
sddevice
hddevice
pvr350
Handlers
- name: Restart Samba
  systemd:
    name: smbd.service
    state: restarted
    enabled: yes
    #masked: no
  register: samba_reload