Sunday, December 11, 2016

espeak - alsa and hdmi environment

Overview: Speech synthesis has become nearly trivial, but a typical ALSA setup relies on sound card definitions from the config files ~/.asoundrc or /usr/share/alsa/alsa.conf (I typically delete the former and modify the latter). A complex hardware situation will almost always require some modifications to the config file, and installing espeak into such an environment often requires yet further config file modifications. I often wish OSS were still developed. Note: espeak uses the PortAudio library, specifically libportaudio. Verify it's installed in Arch with $ pacman -Qs portaudio.

symptom

Attempted espeak commands produce PCM error messages and no audio. Environment: two sound "cards" -- one Intel motherboard chip, one NVidia graphics card sound chip. The NVidia card is selected because the HDMI cable carries the audio. The ALSA HDMI configuration works flawlessly with all input and output apps (except espeak). ALSA espeak errors:
$ espeak "hello world"
ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.HDA-Intel.pcm.front.7:CARD=1'
ALSA lib conf.c:4371:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:4850:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2450:(snd_pcm_open_noupdate) Unknown PCM front
ALSA lib pcm.c:2450:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2450:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2450:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.HDA-Intel.pcm.surround51.7:CARD=1'
ALSA lib conf.c:4371:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:4850:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2450:(snd_pcm_open_noupdate) Unknown PCM surround21
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.HDA-Intel.pcm.surround71.7:CARD=1'
ALSA lib conf.c:4371:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:4850:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2450:(snd_pcm_open_noupdate) Unknown PCM surround71
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.HDA-Intel.pcm.iec958.7:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2'
ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.HDA-Intel.pcm.modem.7:CARD=1'
ALSA lib conf.c:4371:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:4850:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2450:(snd_pcm_open_noupdate) Unknown PCM phoneline
ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)
attempt to connect to server failed
At least two issues jump out: espeak sound is (inexplicably) attempted through the Intel card instead of the NVidia card, which is probably why all the downstream subdevices are also unreachable and spawning errors. "Dsnoop" may also be an issue.

To solve the first problem, we can change values inside /usr/share/alsa/alsa.conf beyond what my previous post on ALSA HDMI covered. That is, comment out the line for pcm.front, and each additional failing line, as follows:
# nano /usr/share/alsa/alsa.conf
# pcm.front cards.pcm.front
pcm.front cards.pcm.default
This reroutes dynamic loading, on a per-command basis, to the default card, eliminating the error. But this means lines and lines of corrections. Note that all of these failures appear to stem from HDA-Intel (instead of NVidia) being called during dynamic loading. Let's see if we can rewrite one line to fix the called card to NVidia, instead of doing line-by-line edits on everything downstream of the HDA-Intel call.

First let's try to specify the hardware device, as we can do in:
$ aplay -D plughw:1,7 /usr/share/sounds/alsa/Front_Center.wav
We know ALSA is thus working. So we can do a two-step method using ALSA to be sure espeak is working:
$ espeak "Hello world" --stdout |aplay -D plughw:1,7
If nothing comes out of this, or if there is no content inside a WAV produced with
$ espeak "Hello world" -w somefile.wav
... then it's time to strace.
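A starting point for the strace (my own sketch, not from the original session): follow forks, since espeak's PortAudio output runs in threads, and look at what it opens or connects to:
$ strace -f espeak "hello" 2>&1 |tee espeak-trace.txt
$ grep -E 'open|connect' espeak-trace.txt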

Sunday, December 4, 2016

[solved (suboptimal)] thunar udiskie fickleness - android, cgroups, lost weekend

Rambling hurried post here to be edited down later, but basically breadcrumbs from a basket of unhappiness attempting a USB download of a picture from my phone. Three of us guys took a picture at the football game on celly -- can we even look at the photo on a desktop?

setup

Normal. I have my regular user in the uucp group and libmtp installed per typical. I have EHCI and UHCI USB controllers that easily recognize and mount any USB drive. The phone is cleanly ID'd in lsusb.

solution (still suboptimal)

The only app that could interact with the phone was simple-mtpfs from the AUR (yaourt). The CLI is required to mount and unmount the phone; apart from mounting/unmounting, a GUI file manager works fine. Make a directory in ~ called mnt or something else easy to remember and verify that the device has been detected:
$ simple-mtpfs --list-devices #verify detected
$ simple-mtpfs ~/mnt #mount phone
$ fusermount -u ~/mnt #unmount phone
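If more than one device appears in the list, simple-mtpfs should accept a device number (this flag is an assumption on my part -- check the man page for your version):
$ simple-mtpfs --device 1 ~/mnt #mount the first listed device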
At the same time, one should blacklist auto-mounting from inside ~/.xinitrc. This stops gvfs attempts to automount the phone, or anything else. Auto-mounting usually fails in a way which apparently conflicts with udiskie.
$ nano .xinitrc
# stop gvfs auto-mounts
export GVFS_DISABLE_FUSE=1

problem

On every system I have but the newest (aka, "DRM"-est) one, I connect via USB and thunar detects and displays the phone. On the new system, even with MTP and all related software installed, I get the following symptom upon USB connection: the phone is detected, but in installer mode; the phone's taskbar shows "USB slow charge only"; and on the phone's main screen, a popup with "Media Transfer Protocol" and a spinning prompt asks me whether I'm on a Mac. That's three different descriptions for the same connection. Unbelievable. Once I click Mac or whatever, I lose the phone on my system, lose the phone screen, and see USB slow charge on the phone status bar. It's outrageously wrong that protocols are this kludged nowadays in the copyright universe (or whatever the ultimate underlying problem is, but typically copyrights/DRM), with proprietary, instead of standardized, file and mounting types. A simple file manager connection is a matter of a weekend.

thunar: volman issues

Volman is just not seeing the phone for what it is: if I start thunar from a terminal and connect the phone to a USB port, lsusb identifies it right down to the model number, but thunar does not, spewing, eg:
thunar-volman: Unsupported USB device type.
thunar-volman: Unsupported USB device type.
thunar-volman: Unknown block device type.
thunar-volman: Could not detect the volume corresponding to the device.

A good discussion about volman with USB is here, including many fellow gvfs and gnome-disk-utility haters (though one can install the no-gconf variant, which helps somewhat). Interestingly, one person, I think on the second page, noted they use rox, udiskie, and openbox (esp. the openbox pipe menus), but openbox is still too heavy for me, and I don't like rox that much. However, the idea of pipes intrigued me -- if I could get pipes in icewm, I could still use thunar for everything else, so: thunar, udiskie, icewm pipes. Found this icepipes discussion, which laid most of it out.

mtab: an old friend reveals another kludge

So was the phone being mounted at all? This led to yet another new issue: cgroups. I catted /etc/mtab, an old friend that simplifies viewing what's mounted (eg, where is my f*cking phone?). This time, in addition to the 4 or 5 entries I expected, I was greeted with an additional 8 unknown mounted file systems from who knows where in my system, eating who knows how many processor slices or memory blocks:
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
... as well as these 6 beauties...
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
... for a total of 14 new-to-me, unknown, mounted file systems -- but I can't mount my f*cking phone without creating an fstab entry for that model? I guess the end-user is just becoming a nuisance. Anyway, a panicked df followed, and this thankfully revealed only 3 (still too many) of these new tmpfs entries. Why wasn't this on the evening news?

Friday, November 18, 2016

pacman: update archlinux-keyring to solve corruption

I recently ran an update which did fine except for...
error: udiskie: signature from "Ambrevar " is unknown trust
:: File /var/cache/pacman/pkg/udiskie-1.5.1-1-any.pkg.tar.xz is corrupted (invalid or corrupted package (PGP signature)).
Do you want to delete it? [Y/n] Y
error: failed to commit transaction (invalid or corrupted package)
Errors occurred, no packages were upgraded.

After attempting several fixes and checking Arch forums, I stumbled across this manjaro forum. Sure enough:
# pacman -S archlinux-keyring
Following this, it was a simple
# pacman -S udiskie
to catch it up.
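If reinstalling the keyring alone doesn't clear the signature errors, repopulating the keys usually does (standard pacman-key invocations, nothing specific to this incident):
# pacman-key --populate archlinux
# pacman -Syu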

Saturday, November 12, 2016

openshot-qt fickleness

I recently attended a wedding anniversary and downloaded openshot-qt soon after, from the Arch repository. Openshot didn't open on the first attempt, instead spawning some window errors. To be sure which version(s) of Qt the package was compiled with, I ran ldd against it, but got the following (surprising1) result:
$ ldd /bin/openshot-qt
not a dynamic executable
I then went over to read up on any idiosyncrasies about the installation, and noticed...
"bwaynej commented on 2016-09-26 02:37
openshot-qt depends on python-pip in order to run"
...ergo...
# pacman -S python-pip desktop-file-utils libopenshot python python-httplib2 python-pyqt5 qt5-webkit shared-mime-info
Most of these were re-installs. Thereafter, to determine where everything was placed:
$ pacman -Ql openshot

solution

Went into yaourt and updated openshot-qt, during which I was asked about deleting the "conflicting" package openshot2. I authorized this and openshot-qt finally produced a display, albeit the window errors continued.


1 For a presumably large application like a video editor. Eg, I would expect openshot-qt to be a ridiculously, possibly even unusably, massive app if compiled statically.
2 In spite of significant searching, I never found an installed or tarball version of openshot on my system -- the "conflict" warning was the only clue of interference from the old openshot package.

Friday, November 11, 2016

Rebuilding Fonts

Fonts, printers, scanners, sound, and USB-connected items: have a sense of humor if you intend to deal seriously with these in Linux.

Anyway, too often during an Arch update, there will be a conflict in the fonts, sometimes between already installed fonts and one intending to update itself through pacman. When this happens, the entire update of all packages can be scotched when pacman exits, or perhaps the user will be forced to deal with /etc/fonts/fonts.conf in some immediate way.

Arch uses fontconfig, so I sometimes solve these conflicts by deleting everything inside /etc/fonts/. Since /etc/fonts/fonts.conf is eliminated along with the other files, any conflicts in /etc/fonts are eliminated and pacman can complete its update. However, after this nuclear method, fonts will be mangled. They cannot be repaired with a simple # fc-cache -vf. Even that action will spew errors that it can't find an /etc/fonts/fonts.conf file, instead of running a subroutine that simply builds such a file. Programmers.

So, how to rebuild /etc/fonts/fonts.conf? Turns out there's a lot of information about how to edit /etc/fonts/fonts.conf, but no information about rebuilding it. The solution was only obtained via frustration, after various manually constructed versions of /etc/fonts/fonts.conf failed:
# pacman -S fontconfig
That's right, the only thing that worked for me was to re-install the f**king package. Of course, I'm an idiot, but still. Also, be sure to explicitly rebuild one's font cache after a pacman update, in case pacman left any hanging chads.
# fc-cache -fv
Alternatively, when one runs into a font problem, a person can exit their update noting the name of the font that is causing the problem, and then rerun the update excluding that font until one has determined the conflict.
# pacman -Syu --ignore font-foo

[solved] xinit: unable to connect to x server: Connection refused

This is a common, highly annoying, permission fail following any update. If your /etc/group and /etc/shadow files have customary arrangements, and you haven't created conflicting rules in /etc/udev/rules.d, then it's not your fault -- you did your normal user-level part. Defenestration of various programmers is the deserved solution because, with PAM, LDAP, NSS, and the other ridiculous kludge of "security" since Sept 2001 -- which has mostly secured users out of their own systems -- it's probably another feature requiring a permission adjustment. If so, you have days of work ahead. Try the options below, and don't get into security files unless these don't work.

If no programmers are handy to toss, here are some options:
  1. Log out entirely, then log back into your user and make a second attempt
  2. Try rebooting entirely. If it's a large update, this solves it nearly every time.
  3. You have two main configuration files for X, besides the regular stuff in .xinitrc (or .xsession if you're a dumbass who uses a DM):
    • /etc/X11/xorg.conf -- you should never have to fuck with this file but (ironically, as with everything else in Linux) you will nevertheless have to do so one day, after a "helpful" "update" (although your system was running just fine before it), at which point you will have no experience at doing so, since you were never supposed to have to edit it, but now you suddenly must and... (you guessed it) at a level assuming you have a CS degree and years and years of experience with that file. Tada!
    • /usr/share/X11/xorg.conf.d -- this is the bullshit directory where 2 million conflicting configuration files live, files you cannot easily do anything but printout and lay side by side on the living room floor (which you don't have since you lost your job coming in late after all-nighters attempting to fix permission problems added since 2001) and desperately hope that you can identify some conflict between them or the aforementioned /etc/X11/xorg.conf file before losing another weekend or relationship with these printouts lying all over the house. Fun, right?

solutions

Reboot after large updates when this happens.

$ strace startx 2>&1 |tee bigfile.txt
This is obviously much better than a log file. Look for X authorities and see if they're not forthcoming.

$ xauth
Is there an ~/.Xauthority file? Two problems are possible: one, it's not being created, or two, it's being created with no content. It's a binary file, typically about 49 bytes in size when functional. Zero bytes is not good, so then try....

$ xauth list
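A quick check of the file itself covers both failure modes at once:
$ ls -l ~/.Xauthority #missing file, or 0 bytes, are both bad signs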

Tuesday, November 8, 2016

DConf - GDM specific

Early-2000s laptops shouldn't waste valuable memory on a fancy login or on running wpa_supplicant beneath the hood. I simply boot into runlevel 2 and take a few seconds to do these myself. Recently it became clear that TA's I oversee would experience a gentler learning curve if their logins and network connections had a GUI assist. So I began to toy with GDM. A few considerations:
  1. DM's move directly to runlevel 5. Can I occasionally go runlevel 2 if needed (eg. troubleshooting)?
    # systemctl disable gdm.service
    # reboot
  2. I like IceWM, but GDM's default is Unity, which is a memory hog. Can I still get there from the DM?
    $ cp .xinitrc .xsession
    This is because .xinitrc is used going from runlevel 2 to the GUI using startx, and GDM uses .xsession in a similar way, calling it to start the GUI.
  3. Can I test the login screen manually from runlevel 2?
  4. GDM automagically installs pulseaudio. Do I have available time to skillfully cripple pulseaudio so that it doesn't interfere w/ VLC and Java?

Configuration

A common Linux frustration is the omission or inaccuracy of critical details in forum and other Googled solutions. Such oversights silently sap hours or days of extra effort. Over the years, the accretion of lost time becomes alarming. Accordingly, for Linux processes I might irregularly use, or which required an extremely long disentangling process, I went to the extreme of creating a blog in order to document solutions for future time savings. Details can be anything from an intermediate step, (mis)ordered steps, permissions, group membership, etc. In this case, there is a great page for configuring GDM, but it overlooks the copying of some key files, some steps requiring root authority, and so on.

The icewm FAQ has the answers for this configuration. The files involved (a permissions note follows the list):
  • /etc/X11/gdm/Sessions/IceWM
    #!/bin/bash
    exec /etc/X11/xdm/Xsession icewm
  • /etc/X11/xdm/Xsession
  • /usr/share/apps/switchdesk/Xclients.icewm
    #!/bin/bash
    exec /usr/local/bin/icewm-session
  • /usr/local/bin/icewm-session
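One easily overlooked step (an assumption on my part -- verify these paths exist on your system): the two scripts above must be executable, or the session silently falls back to the default:
# chmod +x /etc/X11/gdm/Sessions/IceWM /usr/share/apps/switchdesk/Xclients.icewm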

Tuesday, October 18, 2016

xsane - usb scanner detection

Linux scanners can be even worse than Linux printers. Inadequacies in configuration software have continued for at least a decade. Sometimes scanners are detected, sometimes they're not. Sometimes only as root, sometimes only with sane-find-scanner but not with scanimage -L. Avoiding lost weekends and domestic disputes is not always possible.

A couple points. First, $ scanimage -L is the gold standard: if this works, scanning works, even if sane-find-scanner does not. Second, if it's a recent Epson, then no matter what the documentation or forums appear to indicate, libsane-epkowa is likely the answer. There are tools further down for tracking down the answers for any model of scanner. Just as an example, however, here is the impossibly counterintuitive solution to configuring an Epson Perfection V370.
  1. # nano /usr/share/sane
    Comment all but usb
  2. # nano /etc/sane.d/dll.conf
    Comment all but epkowa, and add epkowa if it's not included.
  3. # nano /etc/sane.d/epkowa.conf
    Comment all but usb
  4. # nano /lib/udev/rules.d/49-sane.rules
    Be sure an uncommented line exists for the printer allowing permissions...
    # EPSON Perfection V370 Photo
    ATTRS{idVendor}=="04b8", ATTRS{idProduct}=="014a", MODE="0664", ENV{libsane_matched}="yes", GROUP="scanner"
    ...and that you belong to the named group (see the usermod shortcut after this list):
    # nano /etc/group
    With multifunction printers, setfacl information may also be necessary.
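Rather than hand-editing /etc/group, usermod can append the membership (substitute your user for foo); log out and back in afterward for it to take effect:
# usermod -aG scanner foo
$ groups foo #verify membership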

tools

1) use strace when the scanner is found with sane-find-scanner, but not with scanimage -L.
# strace scanimage -L 2>&1 |tee bigfile.txt
Here's an interesting portion of the resultant 684K file (large until one comments out unnecessary printers inside /etc/sane.d/dll.conf):
stat64("/etc/sane.d/dll.d/epkowa.conf", {st_mode=S_IFREG|0644, st_size=7, ...}) = 0
open("./dll.d/epkowa.conf", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/etc/sane.d/dll.d/epkowa.conf", O_RDONLY) = 4
fstat64(4, {st_mode=S_IFREG|0644, st_size=7, ...}) = 0
read(4, "epkowa\n", 4096) = 7
read(4, "", 4096) = 0
close(4) = 0
getdents(3, /* 0 entries */, 32768) = 0
close(3)
Grep in your file for epson until you find something akin to /run/udev/data/c187:2, and cat it to verify that scanimage reads from it and that it matches your /lib/udev/rules.d/49-sane.rules file:
$ cat /run/udev/data/c189:14
I:54330183574
E:ID_VENDOR=EPSON
E:ID_VENDOR_ENC=EPSON
E:ID_VENDOR_ID=04b8
E:ID_MODEL=EPSON_Perfection_V37_V370
E:ID_MODEL_ENC=EPSON\x20Perfection\x20V37\x2fV370
E:ID_MODEL_ID=014a
E:ID_REVISION=0100
E:ID_SERIAL=EPSON_EPSON_Perfection_V37_V370
E:ID_BUS=usb
E:ID_USB_INTERFACES=:ffffff:
E:libsane_matched=yes
E:ID_VENDOR_FROM_DATABASE=Seiko Epson Corp.
E:ID_PATH=pci-0000:00:12.2-usb-0:1
E:ID_PATH_TAG=pci-0000_00_12_2-usb-0_1
E:ID_FOR_SEAT=usb-pci-0000_00_12_2-usb-0_1
G:uaccess
G:seat
2) use /lib/udev/rules.d/49-sane.rules
3) use
# lsusb
# udevadm info -a -p $(udevadm info -q path -n /dev/bus/usb/002/009)
... to write udev rules
4) use /usr/lib/libsane/
USB printers are not connected through a network, but their device IDs can be handled as if they were, eg as if they speak SNMP (Simple Network Management Protocol). Eg, from this page:
All the above is quite easy to implement if the printer is network connected, now if the printer is USB or PPI connected you need to get your hands into the HP SNMP Proxy Agent, you can find a great post here. It says that basically it is a little Windows software that piggy-backs on the standard Windows SNMP service and provides SNMP data on the default HP printer connected to a computer via USB or parallel cable.


5) look into /usr/lib/sane/libsane-[yourprinter], eg /usr/lib/sane/libsane-epkowa.so.1.0.15

Sunday, September 11, 2016

no gvfs

The gvfs is a kludge and a system hog. Its only value is a trash bin. I don't need or like a trash bin, I use udiskie for mounting USB sticks, I don't need thunar, and I don't need bookmarks in evince. Note: I also try to disable PAM and polkit whenever possible. I've found that PAM has some hooks that require it to be installed, though I am able to kill its processes from htop fairly often.

evince & gvfs

In 2021, an evince install requires gvfs (and fuse), even though gvfs is only for bookmarks. So I now use Okular, which installs about 300MB of KDE/Qt libraries with it. Outside of the HDD space, that's no problem, because those libraries only run when Okular is running; the KDE dependencies are not a file system, and are not persistent. That said, Okular is not as tight an interface as evince. If one disables the sidebar, it's pretty close. Prints fine.

undoing gvfs

To view gvfs dependent applications running...

$ ps aux |grep gvfs

Amazing number of things, right? Uninstall gvfs, and any others on the list, but also manually rid yourself of its stuff in your home directory, such as its CPU-hogging kludge of metadata. However, a person can't just eliminate gvfs, because nautilus will complain. That's apparently because nautilus depends on gnome-disk-utility and gnome-disk-utility depends on gvfs. One order of operations could be:

# pacman -Rsn nautilus thunar evince
# pacman -Rsn gvfs gvfs-mtp
$ pkill gvfsd-metadata
$ rm -rf ~/.local/share/gvfs-metadata

Reboot to check that things is a-workin'. There should be no trash can in Thunar. If it still remains, just delete the entire trash directory.

# rm -r .local/share/Trash .local/share/gvfs-metadata

Secondly, when gvfs installs, it routinely installs additional feature apps. Many of these, eg PAM, also cause permission conflicts. PAM is particularly hard to get off of one's system once it's taken hold. The real question then becomes not the alternative to gvfs, but what to do about gvfs and its friends once some application has inevitably and unfortunately installed and/or activated gvfs and company.

It's a little tricky to move everything from .bashrc to .xinitrc, since some of .bashrc's commands are there by default -- and where else can I see those commands? So I've added a section below with its default configs.

  1. get rid of .bashrc. Put all its regular commands for the terminal cursors and paths into .xinitrc
  2. thanks to this page:
    nano .xinitrc
    GVFS_DISABLE_FUSE=1
    export GVFS_DISABLE_FUSE

more thorough

The above is enough for most, but on older machines it won't be. Both gvfs and polkit are immense permission kludges, and you can do without polkit in favor of simply using sudo when needed. Polkit works through, I believe, the wheel group to address userspace privileges. The Arch page on it is worth a read. Invariably it's one of the top users of memory if you run htop.

default .bashrc

You'll need to move all of these into .xinitrc if you delete bashrc.

# ~/.bashrc
# file works on colors and alias commands
# do exports and paths in .xinitrc

# If not running interactively, don't do anything
[[ $- != *i* ]] && return

alias ls='ls --group-directories-first --color'
alias ll='ls -l'

#PS1='[\u@\h \W]\$ '

# for Video (VDPAU)
#export VDPAU_DRIVER=r600

# for some games like solitaire where locale is important
# export LC_ALL="C"

# for TexLive
#export PATH=/home/foo/latex/bin/x86_64-linux:$PATH
#export INFOPATH=/home/foo/latex/texmf-dist/doc/info:$INFOPATH
#export MANPATH=/home/foo/latex/texmf-dist/doc/man:$MANPATH
export NVM_DIR="/home/foo/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm

Monday, July 25, 2016

[Solved] Arch ALSA configuration (HDMI, nVidia PCIe card) minus PulseAudio

$ xrandr [get displays]
$ xrandr --output HDMI1 --mode "1280x800"
$ xrandr --output HDMI1 --off

I currently maintain an unlucky system with respect to HDMI audio: two sound chips -- an nVidia PCIe video card (with sound chip), and an Intel sound chip on the motherboard. One must choose between connecting the HDMI cable at the motherboard and having low-res motherboard-chip video, or connecting the HDMI cable to the nVidia board. Of course, we connect the HDMI cable to the nVidia board for better video, but this then led (in my case) to conflicting motherboard and video card audio functions. This post is the result.


The system OS is Arch w/Nouveau drivers (minus proprietary nVidia modules), and without PulseAudio. Eventually, I removed the Nouveau drivers and reluctantly installed the proprietary nVidia drivers: the Nouveau drivers could not seem to manage the audio on the nVidia card.

Avoiding PulseAudio was successful -- the steps for that begin below. The steps are entirely accomplished inside Runlevel 2/3 or a terminal, but I included one optional Xorg step at the bottom of the post.

HDMI complexity

HDMI sound requires at least four reconciled systems -- Xorg, ALSA, kernel modules and, of course, EDID -- and some of these have subfiles. That's too many variables for consistently easy sound configuration, and sound is already one of the more challenging aspects of Linux. One bit of Arch-specific luck: ALSA modules are nowadays compiled into the Arch kernel.

First steps

  • ALSA kernel modules present?
    $ systemctl list-unit-files |grep alsa
    alsa-restore.service static
    alsa-state.service static
    "Static", because compiled into the kernel. If not present, fundamental ALSA must be installed and the dynamic loading of its modules verified.
  • after the above, pacman all additional ALSA stuff in the repo, avoiding ALSA-PulseAudio libs (and other applications which automatically install PulseAudio) and, if feasible, also avoiding ALSA-OSS libs. Generally, the fewer non-ALSA audio libs the better.
  • verify (in an already-working system) one has a good HDMI cable, and then connect it between the nVidia card and one's desired monitor/TV. Once we connect to nVidia video, we're committed to nVidia's sound chip (since the HDMI cable carries both audio and video).
  • reboot
  • $ lsmod |grep soundcore :: soundcore must be present.

module load order --> default audio card

In a multi-card/chip sound system, the order in which the kernel loads the cards' modules determines the default audio card. Specific to my system is an additional complexity: both the motherboard and nVidia audio chips load the same module, snd_hda_intel. This prevented me from determining which chip was the default with a simple module check...
$ cat /proc/asound/modules
0 snd_hda_intel
1 snd_hda_intel
Which card is the default "0", the motherboard chip or the nVidia card chip? Use ALSA's "aplay" to distinguish:
$ aplay -L |grep :CARD
default:CARD=PCH
sysdefault:CARD=PCH
front:CARD=PCH,DEV=0
surround21:CARD=PCH,DEV=0
surround40:CARD=PCH,DEV=0
surround41:CARD=PCH,DEV=0
surround50:CARD=PCH,DEV=0
surround51:CARD=PCH,DEV=0
surround71:CARD=PCH,DEV=0
iec958:CARD=PCH,DEV=0
hdmi:CARD=NVidia,DEV=0
hdmi:CARD=NVidia,DEV=1
hdmi:CARD=NVidia,DEV=2
hdmi:CARD=NVidia,DEV=3

ALSA has loaded the motherboard chip "PCH" as the audio default, which is not the NVidia audio we need (for HDMI audio). And a third issue is apparent: a device designation of "0" is being used for both sound cards/chips. These numbers are supposed to be unique for each card, but two cards are using the same number, "0". This may be related to the dual use of snd_hda_intel. Regardless of how it happened, dual use of a (supposedly) card-specific number across two pieces of hardware is likely to lead to conflicts. If desired, one could detour and verify IRQ issues with, say, lspci -v, or get more information about the module, eg....

$ modinfo -p snd_hda_intel

...and do conflict tests. In my case, I made note of the overlap and pressed on.

So far: no sound yet, since default audio is going to PCH, not to nVidia. However, there is progress: we know ALSA assigned unique names to the two audio chips -- "PCH" and "NVidia". We know both audio chips rely on module snd_hda_intel, that the PCH chip is loading as the default, that the nVidia chip has four available devices (0,1,2,3), and that a good HDMI cable is connected between the nVidia HDMI port and my monitor. How to control the load order (and thus the default)?

Some users control load-order problems by creating an /etc/modprobe.d/[somename].conf file, then setting options inside the file, or by creative blacklisting. This didn't work for me: both audio cards relied on the same module, snd_hda_intel, so blocking this module would block both cards (chips).

I tried however. I created the file...

# nano /etc/modprobe.d/customsettings.conf

...and attempted various "options" and "install" lines within it. Options and install lines have helped other users (see appendix at bottom), but none of them solved my problem. In the end, my only use for the custom file was to blacklist the Nouveau modules -- in case I had not removed all of them properly -- after I had settled on the nVidia drivers:

# nano /etc/modprobe.d/customsettings.conf
blacklist nouveau

If kernel manipulation fails, we can turn to making modifications within ALSA. For ALSA customization, four files are possible:

  • /etc/modprobe.d/(somename).conf :: kernel manipulation, as just noted above.
  • /etc/asound.conf :: in Arch, the file does not exist by default, but can be created by apps (like PulseAudio) or by the user.
  • ~/.asoundrc :: home directory version of asound.conf, if one does not desire global settings.
  • /usr/share/alsa/alsa.conf :: a default file, installed with ALSA. To narrow troubleshooting variables, I do any ALSA customization only in this default /usr/share/alsa/alsa.conf file. The first modification is to prevent the other two files above from being sourced. Some users simply delete /etc/asound.conf and ~/.asoundrc entirely, or rename them to backup files. However, we need ELD information before we can finish modifying this file.

ELD verification

ELD information is critical to complete HDMI modifications inside /usr/share/alsa/alsa.conf. Be sure the HDMI cable is connected to the NVidia HDMI output and to the monitor to which you're connecting.
$ grep eld_valid /proc/asound/NVidia/eld*
/proc/asound/NVidia/eld#0.0:eld_valid 0
/proc/asound/NVidia/eld#0.1:eld_valid 1
/proc/asound/NVidia/eld#0.2:eld_valid 0
/proc/asound/NVidia/eld#0.3:eld_valid 0
Here, only nVidia device 1 is validated for HDMI audio. However, these ELD numbers 0,1,2,3 are not the actual hardware numbers. We need hardware numbers for ALSA.
$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 7: HDMI 1 [HDMI 1]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 8: HDMI 2 [HDMI 2]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 9: HDMI 3 [HDMI 3]
Subdevices: 1/1
Subdevice #0: subdevice #0
So, for HDMI 1, the ELD validated sound card/chip with my monitor, the hardware number will be hw:1,7. Now I can return to /usr/share/alsa/alsa.conf and make the necessary HDMI modifications.
# nano /usr/share/alsa/alsa.conf
defaults.ctl.card 1 #default 0
defaults.pcm.card 1 #default 0
defaults.pcm.device 7 #default 0
This means I've swapped the cards' default order from "0" to "1", and specified the ELD-validated device by its hardware number, "7". Reboot, and unmute S/PDIF 1 on the NVidia card in alsamixer.
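Before rebooting, speaker-test (part of alsa-utils) gives a quick sanity check; the plughw:1,7 below assumes the aplay -l layout above:
$ speaker-test -D plughw:1,7 -c 2 -t wav #should say "Front Left", "Front Right"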
For non-NVidia cards, the process is similar...
$ grep eld_valid /proc/asound/HDMI/eld*
/proc/asound/HDMI/eld#0.0:eld_valid 0
/proc/asound/HDMI/eld#0.1:eld_valid 0
/proc/asound/HDMI/eld#0.2:eld_valid 0
/proc/asound/HDMI/eld#0.3:eld_valid 1
/proc/asound/HDMI/eld#0.4:eld_valid 0
... and then determine which setting is associated with ELD 3, using "$ aplay -l" again, and then making the change inside /usr/share/alsa/alsa.conf.

Next steps

  1. reboot
  2. power on the tv/monitor and verify the volume is at a hearable level
  3. $ alsamixer :: unmute "S/PDIF 1" on the NVidia card. F6 switches between cards.
  4. $ aplay -D plughw:1,7 /usr/share/sounds/alsa/Front_Center.wav
    Should hear "Center" from TV's speakers. Aplay won't do MP3's to my knowledge -- use this ALSA player with a WAV.
  5. enjoy. I must manually unmute S/PDIF 1 each reboot, or modify some other file, but that's another day/process.

troubleshooting - post alsa update

Following any system update that changes ALSA (eg, following pacman -Syu), sound may be lost. First, open alsamixer to verify nothing was re-muted. Secondly, the update may have overwritten /usr/share/alsa/alsa.conf. Re-enter the modifications and device settings (above) in alsa.conf, resave, and log out and back in, or reboot.

troubleshooting- vlc issues

1.) deleting ~/.config/vlc as a fix. When sound comes out OK from ALSA ($ aplay -D plughw:1,7 /usr/share/sounds/alsa/Front_Center.wav, as above), but VLC is soundless, try deleting ~/.config/vlc, and let VLC respawn the file.

2.) no sound from media files with MP3 or AAC tracks. Movies and MP3's play with audio in other software, but there's no sound from VLC if it's MP3 or AAC. The fail will appear to be ALSA; eg, if you attempt to VLC the MP3 file from the command line, you'll see errors including...

alsa audio output error: cannot set buffer duration: Invalid argument

... yet MP3's play fine with other players. Without PulseAudio, those players are getting their output through ALSA too, so we know it's not ALSA, it's VLC. The fix for me has been the S/PDIF option, which sends audio directly to the device without decoding -- which the device can't always handle. Make sure it's unchecked in VLC's audio preferences, as below.

[screenshot omitted: VLC audio preferences, with the S/PDIF option unchecked]
Incidentally, below lossless quality (flac and wav), the compression quality apparently proceeds: Opus better than AAC, and AAC better than MP3.

Nouveau

One of the sound cards is an nVidia graphics card.
$ nvidia-xconfig --no-use-edid-dpi
$ nano /etc/X11/xorg.conf
Option "DPI" "100 x 100"

appendix 1
possible /etc/modprobe.d/[somename].conf settings

At the kernel level, I was unable to do more than blacklist the nouveau drivers. I could have instead blacklisted the nvidia drivers, had I ever been able to activate sound within nouveau. However, many users were able to fix their configurations in this file using options such as...
install sound-slot-0 modprobe snd-card-1
install sound-slot-1 modprobe snd-card-0
or...
options snd-hda-intel enable=1,0
Note that, in this second example, the underscores in the module name are changed to hyphens.

appendix 2

If a person has old 32-bit Windows programs and wants to run Wine, what are the concerns? You'll first have to uncomment the two multilib lines in /etc/pacman.conf to access 32-bit libs. Wine itself then becomes roughly a 600MB kludge of an install, due to all the attached 32-bit libs.
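For reference, the two multilib lines to uncomment in /etc/pacman.conf are these (stock file; just remove the # marks):
[multilib]
Include = /etc/pacman.d/mirrorlist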

Most importantly here, one of the 32-bit choices affects the nVidia card. During the Wine install, users must select either the lib32-mesa-libgl or lib32-nvidia-gl option. I have the nVidia card in my system, so the latter seems like the safe choice. What about compatibility, though? So many things use Mesa.

Saturday, July 23, 2016

LaTeX -- updating the 2015 installation (TexLive)

TexLive is a large install, typically 4 GB. I keep mine in a home directory folder (easier to update and back up - no root privileges necessary). Between 2015 and 2016 however, the internal structure of the update database apparently changed.

So, in 2016 (edit: also 2021), tlmgr update is sometimes finicky. A common fail is to see:

Unknown directive ...containerchecksum [etc]

This is a slight pain, so I've written the steps (for future reference). There are five steps:
1) DO
2) A
3) CLEAN
4) 2016
5) INSTALL

Next, update one's path statement in ~/.bashrc

Steps (1 hr, excluding download and burn)

  • Create some home directory, eg ~/latex, and delete the one holding the last install. Frees about 4 GB.
  • Delete old ~/texmf and any ~/.tex* files
  • Download the latest iso from TexLive, about 3.7GB. New ones come out once a year, around April.
  • $ md5sum [nameofiso].iso
  • $ growisofs -speed=2 -dvd-compat -Z /dev/sr0=nameofiso.iso
  • Put DVD in, install...
    $ cat /etc/mtab |grep "TexLive"
    /dev/sr0 /run/media/foo/TeXLive2016 iso9660...
    $ cd /run/media/foo/TeXLive2016
    $ ./install-tl -no-verify-downloads

  • select item "D" and give the home directory folder to all variables: eg ~/foo/latex
  • select item "O" and give letter paper size, as opposed to A4, if desired
  • Wait an hour or two while all 3500 files are installed.
Below, some features of these two methods of install, DVD and Internet...

DVD (1 hr)

Not verifying the downloads ("-no-verify-downloads") is important even when installing offline from a DVD. Otherwise, random checksum failures will likely occur while installing one of the ~3500 packages, at which time the entire installation will dump without recovery options. There is nothing in the crude installer to, say, skip one or another package and retry it later. After install, tlmgr can of course ignore specified packages, but... that's after the installation is complete. A second potential problem is heat from high CPU usage. A friend's 2008 Toshiba laptop would fail around 2000/3500 packages with an overheat lock-up. I tried it also on a 3.4 GHz i7, and it maxed that processor throughout the install as well, although it could remain cool since it was a desktop with fans. Why max the CPU in a simple install? My guess is the TexLive installer is probably c. 1995 coding, with a lot of subsequent bolt-ons, in such a way that the kernel can't smoothly allocate resources. Just a guess.

Internet (2 hrs)

An Internet install runs cooler because it has pauses between each package -- each package takes a few moments to download, as opposed to the near-instant draw from a DVD. The Internet install is launched from the small "install-tl" file, which initiates a download (from the TUG site) and installs the packages in sequence.

Post install

  • update PATHs in /etc/profile or, as I prefer, in ~/.bashrc (user permissions), eg.
    $ nano .bashrc
    export PATH=/home/foo/latex/bin/x86_64-linux/:$PATH
    export INFOPATH=/home/foo/latex/texmf-dist/doc/info:$INFOPATH
    export MANPATH=/home/foo/latex/texmf-dist/doc/man:$MANPATH
  • Logout, and log back in
  • $ tlmgr update --self
  • $ tlmgr update --all
  • $ texhash # typically only necessary following an installation of a non-repo package, etc, but I prefer to be certain. Secondly, as noted on this great page, the way to determine which configuration file tlmgr uses, or even if it's been created, is to
    $ tlmgr conf tlmgr
    Thirdly, a list of installed packages sent to text file:
    $ tlmgr list --only-installed > installed_texlive_packages.txt
  • Verify with a complex .tex file compile, eg the .tex for a book.
  • Smile. Go outside in the sun. Arguably, read the New Yorker.
If one has the time, one can also keep tabs on various fonts (scroll down to "updmap"), or install non-repo packages.

problems

There's a reliable internet connection, but...
$ tlmgr update --self
No connection to the internet. Unable to download the checksum of the remote TeX Live database, but found a local copy so using that.
I ran into that when my installation, including tlmgr, became about 2 years out of date. I could have downloaded a fresh install and then updated all the paths for it in /etc/profile, but there had to be an easier way. I headed to the tlmgr page and got the Unix script for the latest tlmgr, update-tlmgr-latest.sh.
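Running the script is a one-liner (a sketch; check the tlmgr page for current options):
$ sh update-tlmgr-latest.sh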

Wednesday, July 20, 2016

UEFI Arch install (GPT, ESP) Single boot

My advice is not to do this. I convert everything GPT to MBR, to avoid GPT labels and EFI, so that GRUB installs properly. So, if I run across a GPT-labeled volume with EFI, I go straight into cfdisk and blow out all the partitions. Then I gdisk that empty disk, hit "r" (recovery/transformation), then "g" (convert GPT to MBR), then write it with "w". Subsequently, fdisk can be used to write a DOS partition table, which will then show up in cfdisk as toggleable to "bootable" when I finally go to make the new partitions.
UEFI means partition and boot issues (eg, how to boot with no boot flag or MBR?) and possible boot loader settings. This page addresses many ESP issues, and this page UEFI issues more generally (scroll to UEFI). Also, of course, there is this UEFI page.

There's a lot to consider (screenshot omitted).

The booting order is apparently: the UEFI firmware contacts an ESP partition (a new concept), receiving boot information. This process also loads the EFI-specific kernel information, in the form of the "efivarfs" filesystem, mounted under /sys/firmware/efi/efivars.

disk 20 mins

You'll have to overburn the install CD to get 740MB onto it. Of the possible options, cdrdao cannot be used without a TOC file to direct it, and dd has its dangers, so I tend to rely on cdrecord. With cdrecord, overburning should be specified, and random DAO errors, which usually only appear at the end of the operation, can be prevented by running it as root.
$ cdrecord dev= HELP [get a device list]
# cdrecord -overburn -eject -dao -dev=/dev/sr0 ~/install.iso

Load the UEFI version of Arch, and then get information (eg fdisk -l /dev/sdX) at the prompt. I will want GPT, not MBR partitions: for partitioning, use cgdisk or gdisk instead of cfdisk.
cgdisk /dev/sdX
... and then be sure to assign the proper qualities. New: four-digit type identifiers (eg 8300 for Linux and 8200 for swap). It appears that filetype EF00 automatically sets an equivalent of a boot flag for its partition, so we only need one EF00 partition. This partition is said to need 250MB, so 1 GB should be more than enough.

Here might be a list of partitions:
  1. 1GB - /boot (ESP) sdb1 - ef00
  2. 40GB - / (apps) sdb2 - 8300 (this will be the basic "mnt" foundation)
  3. 50GB - swap sdb3 - 8200
  4. 900GB - /home sda1 - 8300
For GPT partitions, one can use cgdisk for ncurses, or simple gdisk:
# gdisk /dev/sdX
# n [add size, type]
# w [write it]
# gdisk -l /dev/sdX [list the partitions]
After partitioning, format the partitions, eg mke2fs /dev/sda1. However, the ESP partition needs FAT32:
# mkfs.fat -F32 /dev/sdb1

basic configuration 20 mins

# mkswap /dev/sdb3
# swapon /dev/sdb3
# free -m [check swap is on]
# mount -rw -t ext3 /dev/sdb2 /mnt
# mkdir /mnt/home
# mount -rw -t ext3 /dev/sda1 /mnt/home
# mkdir /mnt/boot
# mount -t vfat /dev/sdb1 /mnt/boot
# mkdir /mnt/etc/
# genfstab -p /mnt >> /mnt/etc/fstab
# pacstrap /mnt base
# arch-chroot /mnt
# ln -s /usr/share/zoneinfo/US/[zone] /etc/localtime
# mkinitcpio -p linux
# passwd
# pacman -Syu grub efibootmgr
# grub-mkconfig -o /boot/grub/grub.cfg
# ls -l /boot/EFI/arch [verify grubx64.efi is here or locate it]
# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch
# efibootmgr -v [verifies all bootable efi's ]
# exit
# reboot

boot 90mins

At reboot, one may get the basic GRUB2 command prompt, not the curses version. This is called "rescue mode" GRUB, with limited commands. If the command "boot" does not work, then one needs to return to the install disk, mount all the partitions again, and begin at the mkinitcpio step. Mkinitcpio cannot do its job of creating the RAM disk if it can't find a kernel, so it's a good litmus test that the kernel is properly installed and available. There should be some vmlinuz in /boot. If this is missing, reinstall the base group entirely with pacman, so the mkinitcpio step is 100% certain.
# pacman -S --needed base
If this step reports that the system is up to date ("nothing to do"), then force a kernel reinstall, especially if you could find no vmlinuz in /boot. Everything must be 100% for UEFI to work.
# find /var/cache/pacman/pkg -name 'linux*'
Get the specific name of the linux package in there, eg. 4.4-4, or whatever. Then...
# pacman -U /var/cache/pacman/pkg/linux[specifics].pkg.tar.xz
# ls /boot [verify a vmlinuz present]
# mkinitcpio -p linux
# grub-mkconfig -o /boot/grub/grub.cfg
# ls -l /boot/EFI/arch [verify grubx64.efi is here or locate it]
# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch
# nano /etc/systemd/journald.conf
SystemMaxUse=200K
# exit
# reboot

This should get us to Runlevel 2. Network configuration (eg, hosts, hostname, wifi arrangements, etc) usually comes first, since one needs a working connection for pacman. Then perhaps pacman servers and keys, and then apps. For example, nearly every fresh install experiences dhcpcd timeouts because it can't obtain an IPv6 DHCP address from the router.
# nano /etc/dhcpcd.conf
# custom to stop ipv6 service request timeouts
noipv6rs
# useradd -G wheel,lp,audio -s /bin/bash -m foo
# pacman -S wpa_supplicant wireless-tools jre8-openjdk jdk8-openjdk
# export LANG=C

nVIDIA or Nouveau

Still the same old story -- nVIDIA won't release the code (2016). If using nVIDIA drivers, some users have to blacklist the i915 driver. To blacklist, create any file with a ".conf" extension in /etc/modprobe.d/, eg /etc/modprobe.d/blacklist.conf, then:
# nano /etc/modprobe.d/blacklist.conf
blacklist i915
To review all drivers prior to selection...
# pacman -Ss xf86-video |less
# pacman -S xf86-video-nouveau mesa-vdpau libva-mesa-driver
I took the vdpau packages since the card's an nVidia. Also, I ultimately went with the nouveau drivers, so we'll see. I dunno. Now for 50MB of Xorg:
# pacman -S xorg-server xorg-server-utils xorg-xinit xorg-xrandr
Went entirely without the nVidia drivers and so chose mesa-libgl when the choice appeared. We'll see. I can check with, say...
# pacman -Qqs nvidia
... to see if I put nVidia in by mistake. The other thing is I have a simple mouse and keyboard, so I went with xf86-input-evdev instead of the more cutting-edge xf86-input-libinput. We'll see, again. Note that no /etc/X11/xorg.conf is automatically created these days (2016), but users can create one if, for example, a specific display problem requires it, viz to specify a BusID, etc.

I like IceWM. It hearkens back to the old days with its use of only a couple of classic text config files: .xinitrc and Xresources. Those two X files are read at start-up, and then there are a couple of text files internal to icewm also read during startx: ~/.icewm/preferences and ~/.icewm/startup. Preferences is like 1300 lines, with about 500 options. Love it.
# pacman -S icewm
$ cp /etc/X11/xinit/xinitrc .xinitrc
$ nano .xinitrc
exec dbus-launch icewm-session
$ startx
45MB of thunar and its dependencies. For USB drives and so forth, I don't like gvfs at all, since to me it's a kludge, it includes PAM, I don't need a trash bin, and on. So udiskie (116 MB) is a good substitute.
# pacman -S thunar thunar-media-tags-plugin thunar-volman udiskie dosfstools ntfs-3g parted
Let's do 2MB of xterm, 45MB of geany, 78MB of vlc (caveat: vlc is Qt1, so not gnome (gtk) friendly, and if you go with Qt, you eventually end up with PyQt -- dunno why, but it's true; all told, ~400MB more for PyQt and Qt), 380MB of openshot (used ttf-bitstream-vera; note: smbclient is part of the openshot install -- yuck), 112MB of evince, and 150K of recordmydesktop.

ffmpeg or libav

What to do here? Michael Niedermayer resigned from ffmpeg August 5, 2015. I'm still most used to ffmpeg, and I know it typically has more features, even if perhaps less stable. Ffmpeg, currently.

GRUB2 note

GRUB2 is one of the few linux boot managers that works with UEFI, but it is a horrible bootloader:
  1. will not locate kernels
  2. numbers disks differently than linux, eg "hd0". There is no intuitive correlation between them. If I have only two hard drives, mounted as three partitions, sda1, sdb1, and sdb2, these may show as hd0, hd5, hd6. That would be workable, but GRUB wants the partition number as well, and there are only 2 physical drives. You will have to map this by trial and error.
  3. requires as much or more configuration than the underlying OS installation
  4. continuing on the configuration point, one cannot directly edit GRUB's configuration file, /boot/grub/grub.cfg
1 Nokia sold the Qt trademark to Digia in 2012

Monday, July 4, 2016

Lookout security for Android devices

Perhaps two weeks after activating a new phone with T-Mobile, an app called "Lookout" prompted me with a cell screen to subscribe to their service. The pop-up included a correct email address for me, pre-entered.

I found that odd since, although the email address Lookout suggested for me was correct, it was different from the email address attached to my T-Mobile account. Where did it come from? Typically these come from phone "permissions" (access privileges) to one's phone information (eg, email accounts), but we all know they are rarely "permitted" in the sense of a user wittingly authorizing information to the software. Rather, they are often pre-configured and difficult to unravel. That is, in such cases, one has to dig to determine, and still may never determine (or revoke), privileges granted by: a provider (eg, T-Mobile) service update, the phone manufacturer (eg, Samsung), the Android (OS) installation process, or the app (eg, Lookout). These iniquities become more galling when one's data security is supposedly being looked after, particularly for a fee. One reasonably expects transparency.

A much smaller issue: beneath the email address was a blank for a password, without specifying if it was for the email address offered, or for a new Lookout account password.

Websites

Before entering any password, I navigated to the Lookout website. As I write today, I could find no information about the password sign-in or Lookout's information access on devices. The potential billing tiers for Lookout appeared to be two options -- Personal Premium ($3/$30) and Personal Free -- both buried in the site's "Contacts" pages. A difficult-to-find FAQ finally referenced T-Mobile accounts, $4, but nothing directly about partnerships, phone access privileges, etc. A third service, "Jump!", was referenced on the page, but without explanation or links.

Trying next the T-Mobile site: nothing about Lookout phone permissions, but there was billing information for a "Premium" Lookout account, $4 -- that is, more than accounts established directly with Lookout. Meanwhile, Jump! is a T-Mobile phone insurance or upgrade plan; I could not be sure which.

I'm supposed to feel secure about what again?

Inside the phone

[screenshot omitted: the phone's permission list for the Lookout app]
Voila. The permissions somehow granted to Lookout (never wittingly given by me), were as follows:
  • Your personal information
    Add or modify calendar events and send email to guests without owners' knowledge. Modify your contacts. Read call log. Read terms you added to the dictionary. Read your contacts. Read your web bookmarks and history. Write call log. Write web bookmarks and history.
  • Your location
    Approximate (network-based) location, Precise (GPS) location.
  • Your messages
    Edit your text messages (SMS or MMS), Read your text messages (SMS or MMS). Receive text messages (SMS).
  • Network communication
    Full network access
  • Your accounts
    Add or remove accounts
  • Storage
    Modify or delete the contents of your SD card
  • Hardware controls
    Change your audio settings, Take pictures and videos
  • Phone calls
    Read phone status and identity
  • System tools
    Change network connectivity, Delete all app cache data, Disable your screen lock, Make app always run, Modify system settings, Prevent phone from sleeping, Retrieve running apps, Toggle sync on and off
In short, I'd never use a smart phone if it weren't for the fact that T-Mobile can't enable MMS on simple feature phones: I need MMS to communicate in the workplace. Obviously, Lookout's smart phone permissions are not as comprehensive as what government agencies can gather or accomplish with one's phone (and other records), but the list gives a person a thumbnail sketch. It might be simpler if smart phones were directly issued by the government via some portion of our income tax revenue -- they've become little less than moving ID cards, with contact and quotidian information embedded.

Saturday, March 12, 2016

[solved] wpa_supplicant following system update

Looking at a friend's monitor (running Arch), I saw his clock was behind, so I wanted to # ntpdate pool.ntp.org his system. It turned out ntp hadn't been installed. I started with # pacman -S ntp, but dependency errors appeared -- the typical sign that a system needs updating. So, # pacman -Syu, and sure enough, it was at least 4GB out of whack. Afterwards, all was good for adding ntp and running ntpdate. His clock updated.

The problem was, the first time he cycled power, his wifi didn't connect. I found I could easily connect his system manually (CLI) with
# wpa_supplicant -Dwext -iwlp1s0 -c/etc/wpa_supplicant/wpa_supplicant.conf &
or better...
# wpa_supplicant -Dnl80211 -iwlp1s0 -c/etc/wpa_supplicant/wpa_supplicant.conf &
So, WTF? One possibility: beginning in 2016, dhcpcd stopped connecting its built-in wpa_supplicant hook during the dhcpcd install. It has to be done manually. There's also a second option, giving at least two solutions.

two solutions

  1. # systemctl disable wpa_supplicant.service
    # cp /usr/share/dhcpcd/hooks/10-wpa_supplicant /lib/dhcpcd/dhcpcd-hooks
    This puts dhcpcd in charge, calling and restarting wpa_supplicant as needed. Make sure the right PID directory (if needed) and password are in /etc/wpa_supplicant/wpa_supplicant.conf, so it can negotiate security. Also, if dhcpcd is hanging looking for an IPv6 connection, a person can add the "-4" flag to the "ExecStart" command in /usr/lib/systemd/system/dhcpcd.service (see the sketch after this list). Problem solved; back to a rock-solid wifi connection.
  2. Instead of putting dhcpcd in charge, put wpa_supplicant in charge, calling dhcpcd. This is not recommended since it requires a bunch of complicated custom shit described below
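For reference, that "-4" edit might look like the following (a sketch -- the stock ExecStart line varies by dhcpcd version, so adapt rather than paste):
# nano /usr/lib/systemd/system/dhcpcd.service
[Service]
# assumed stock line; just add the -4 flag to whatever is already there
ExecStart=/usr/bin/dhcpcd -4 -q -b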

unit file

I checked systemd (systemctl list-unit-files) and, as expected, I saw that wpa_supplicant was no longer enabled. When I enabled wpa_supplicant, it created symlinks which were not straightforward ("epitest.fi"?), which I did not trust as helpful. So I wrote him a straightforward service unit file specific to initializing his wifi:
# nano /etc/systemd/system/johndoewifi.service

# custom wifi connect
# Arch location: /etc/systemd/system/johndoewifi.service
[Unit]
Description=wpa_supplicant wext wifi
Wants=network.target
After=network.target
# Before=network.target

# execute and run in background
[Service]
Type=forking
PIDFile=/var/run/wpa_supplicant.pid
# -B backgrounds the daemon (needed with Type=forking); -P writes the PID file named above
ExecStart=/usr/bin/wpa_supplicant -B -P /var/run/wpa_supplicant.pid -Dwext -iwlp1s0 -c/etc/wpa_supplicant/wpa_supplicant.conf
# ExecStop=/usr/bin/wpa_supplicant -x

[Install]
WantedBy=multi-user.target
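
Then have systemd pick up the new unit and enable it at boot:
# systemctl daemon-reload
# systemctl enable johndoewifi.service
# systemctl start johndoewifi.service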

restart from user

You can let a regular user control wpa_supplicant, with eg
# nano /etc/wpa_supplicant/wpa_supplicant.conf
# Allow users in the 'wheel' group to control wpa_supplicant
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
If John Doe is in the wheel group, he can talk to wpa_supplicant no problema. However, bringing a connection up will still fail, since he doesn't have authority to bring the interface UP or DOWN from userspace. That requires another kludge.
(source: Gentoo wiki)
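
With the GROUP=wheel line above in place, a quick test from the user account -- wpa_cli can query the running daemon, but (per the kludge just mentioned) raising the interface itself still needs root:
$ wpa_cli -i wlp1s0 status
# ip link set wlp1s0 up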

Saturday, February 6, 2016

[solved] mount some unknown usb

In the current era of systemd, some of our old friends like udevadm aren't independent apps, but are elements of systemd. We used to be able to identify cranky (eg, doesn't automount) usb items simply with udevadm monitor. We'd # udevadm monitor, plug in our usb item, and read the information. Then we could craft a suitable mount, or whatever.

Currently, we do our system monitor as:
# udevadm monitor --environment --udev
After plugging in, say, a drive, Ctrl-C out of udevadm as quickly as possible, since it floods the terminal history with polling updates. Then just scroll up a bit to get the drive name, eg /dev/sdd. Create the appropriate mount.
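
If the flood scrolls the name away, lsblk is a quieter cross-check -- run it before and after plugging in and compare:
$ lsblk -o NAME,FSTYPE,SIZE,LABEL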

A standard mount might be
# mount /dev/sdd -o rw /somefolder

file ownership

Sometimes even udiskie will mount a USB incorrectly and it will be read-only. Simply # umount /dev/sdc1 (or whatever), and then remount it with the following. Note the uid and umask options apply to filesystems without Unix permissions, such as VFAT.

# mount -o rw,uid=1000,umask=000 /dev/sdc1 /home/foo/mnt

Sunday, January 31, 2016

opera and flash, other issues

Links: Opera flash ::

After a software update with Flash implications, pages were not displayed in the Opera browser. If I entered, say, the URL "www.google.com", normal loading statements flashed as the page loaded, but the webpage appeared as a blank white page once it completed loading. Internal to Opera, no error notices were displayed; the pages apparently loaded correctly as far as Opera's internal checks were concerned. However, to the user, nothing but white was displayed for the webpage. I found no information or similar complaints in Internet forums, etc. What was going on?

the cursor

On the blank page, all the pieces were apparently there, just not visible to me. I could move the cursor around the blank white page, and various link URLs would appear at the bottom of the page, just as they do when one hovers over visible links on a normally displayed page. Nevertheless, the page would otherwise appear blank. I wanted to look at my Opera settings, but entering, say, opera://settings, also loaded a blank page.

early progress - local html file

While this was going on, I happened to click on a local html file. For this local file, ie, loaded off the HDD, Opera opened and displayed the html normally. Any links clicked from this local page also loaded normally, and any bookmarks could then be used normally from that page. But if I opened a new tab during that session, all would be blank again, and even back-buttoning to the locally stored file would turn it blank. Could the blank page problem be some sort of security setting?

progress - private mode

Thinking "security", I let a page load blank. I selected a bookmark in the bookmark bar and R clicked, then selected the option "Open in New Private Window". The page loaded and displayed normally. Perhaps the blank pages were part of a misconfigured security policy.

progress - flash

I continued to browse normally in "Private Window", looking for clues. Another abnormality: Adobe Flash seemed to need updating. Some YouTube videos would show, some would not. Could this also be affecting the regular appearance of pages?
The paths for libpepflashplayer.so should be available to Opera in its .json-coded resource file:
/usr/lib/opera/resources/pepper_flash_config.json
"Cat" the file to verify it's in there, eg, one of its lines should typically be:
"/usr/lib/PepperFlash/libpepflashplayer.so",
That's a very common place for it to have been installed, but you can verify with "find" (sketch below). Some believe the player should also be in the Mozilla plugins folder, /usr/lib/mozilla/plugins, whether or not Firefox is installed. Create another such folder for Opera:
# mkdir /usr/lib/opera/plugins
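
For example, to verify both of the above (typical Arch paths assumed; adjust if your system differs):
$ grep -n "libpepflashplayer" /usr/lib/opera/resources/pepper_flash_config.json
$ find /usr/lib -name "libpepflashplayer.so"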

flash working, but still must be private

  1. clear all cache and cookies
  2. download latest libflashplayer.so
  3. $ chmod 775 libflashplayer.so
  4. # cp /home/foo/downloads/libflashplayer.so /usr/lib/opera/plugins/libflashplayer.so
  5. # cp /home/foo/downloads/libflashplayer.so /usr/lib/mozilla/plugins/libflashplayer.so
Hopefully, more will be revealed, but the problem has not been solved.

Saturday, January 30, 2016

Geolocation: always evolving toward a finer grain

I was looking at geolocation data on the laptop the other afternoon, and thinking how it is part of the data collection picture so desirable for advertisers these days and so saturated by government security programs. Both advertisers (business) and government seem important and thereby worthy of a short post on geolocation.

Advertisers can be controlled, but after 9/11, our own government transitioned into a silent and invisible 24/7 domestic data collector. How does this relate to location? Well, location privacy feels important because our location is immediate -- it's first-person and physical, not conceptual. It feels normal to occasionally want to be alone somewhere. We understand this in our personal relationships, for example. This used to be as simple as going for a walk in nature, or around the block for a smoke at midnight -- very simple actions a person takes for granted. People feel such moments are private. However, since at least 2012, non-exempt citizens can only guess at how comprehensively their daily activities fall within camera range. Citizens can likewise merely guess at what is done with the images. In other words, citizens are given no clues as to where to file an inquiry if they do not approve of some camera or want access to its images -- we don't know who's operating the cameras or what they're used for.

facial and license plate recognition

In addition to static cameras, note that every time you see a newer police car or parking enforcement vehicle, an ALPR or some facial recognition system is likely built-in. A police vehicle is, among other things, and depending on a department's budget, a network node continually transmitting information. The transmissions have time and geolocation stamps added to the information. For example, in the transmission of license plate numbers from a cruiser, a combination of license plate number + geotag + time is sent. This is a trivially small database entry. However, such entries are easily reassembled into patterns of travel. A lifetime collection of a person's driving and location-based facial recognition instances could easily fit on a USB stick. We'd want to hope that information of such incredible depth was being used in an entirely temporary and exculpatory manner by the agencies which gathered it. Good luck.

cell/smart

Assuming a phone with a battery and a SIM registered to its owner (not a borrowed or stolen phone), the owner's location is known to within roughly three meters. Added to this, government offices can listen in to the content of a call, or read its text messages, easily and in real time -- within agencies as low as city police departments, and with or without warrants. And this is just our friendly government and business organizations; foreign governments' interests are less well known, but can reasonably be imagined.

desktop/laptop

When we use our desktops, the public (government) sphere again sees whatever it wants; what about the private sphere? Consider your monthly ISP bill. One's home address is tied to one's account; it makes no difference whether one is served a dynamic or static IP. ISP's could sell this bundle of info to advertisers in real time. Further, physical street addresses are easily interchangeable with exact GPS coordinates -- it makes little difference whether the GPS coordinates or a physical street address is sold to advertisers.

Those in law enforcement, military, and perhaps some other protected categories (judges, etc) have some protections against commercial incursions or release of their information, depending on the situation. Citizens, however, have only customary limitations, since there are very few explicit, effective privacy laws. Customary business limitations are not black and white restrictions on the release of data, and they can easily change, as you may note in the fine print of any privacy policy you accept. For example, lawsuits might occur as a result of, say, a stalker purchasing one's street address directly from an ISP, or if ISP's made one's mailing address easily available to advertisers. But if wins in court made it possible to absolve ISP's of any responsibility for selling your information to a stalker posing as an advertiser, ISP's might start selling that information tomorrow. So ISP's don't divulge the entire package to advertisers... yet. Instead, ISP's divulge some network node/hub near your home, usually within a sphere of 10 or 12 blocks, probably in your zip code, but without your name attached. Try this site, for example. And again, these are simply business practices, not real privacy protections. They can be changed at any time.

misdirection

As just noted, public opinion or civil cases are probably the motivation for ISP's and major websites to provide some (grudging) small privacy protections -- for now. But even these appear to be at the lowest possible boundary of honesty. For example, with geolocation: by asking the user if they will allow geolocation, the provider only gives the user a false impression that geolocation information hasn't already been released. We've already seen from the link above that this is not the case. Let's say I'm browsing in Opera and I want to listen to a radio station in Pittsburgh. I go to the radio station's website and click on some "listen now" button. Very likely I will see a window similar to this:

[screenshot: Opera's location-permission prompt]
The impression given to the user is that the Pittsburgh station does not know my location and "needs" to learn it (for regional advertising, etc). But we've already seen above, at iplocation.net, that the station already has a fix on my location to within an accuracy of a few blocks of my device. What advertiser (or MPAA/RIAA stooge) needs more information than this? So what's really going on -- we know it can't honestly be location, so what is it? My guess is the acceptance of the attached privacy policy notice: I am accepting Google's, or Microsoft's (Silverlight), or the station's, privacy policy regarding location information. Recall that privacy policies, once accepted, can be changed in the future without the user being notified or having the opportunity to revoke them. At a later date, information about me can be added to whatever the site is selling to other businesses. In other words, once accepted, the privacy policy locks me into whatever that company does with my information downstream, and prevents me from suing them for it. This is why I think acceptance of the privacy policy is the real goal: it's much more valuable to the organization than my location, which they already have to within a couple of city blocks without asking. Follow the money.

browser

Just like other webpages, geolocation queries from webpages are cached, and need to be purged if you don't want the results read by other applications later. The Chrome browser used to have a way to "emulate" spurious GPS coordinates (again, only against private concerns, not government ones), but even this was too much for some businesses to tolerate. It's been eliminated, probably due to advertiser or MPAA/RIAA pressures. Essentially, if you are streaming anything, you are likely to see a window such as the Opera one above.

the future

Profit pressures will likely degrade these policies until, at some future date, our physical address/GPS coordinates are known in real time and possibly tied to our name. This is currently trivial for some government agencies, but I'm talking about within the private sphere also. At the point it becomes accepted for business, there will be little difference between a cell phone and a home desktop -- in fact, the desktop may be less private at such a time, since a home address is also a mailing address. Tellingly, businesses which support law enforcement, and law enforcement unions, have proactively lobbied for protections for their officers. These unions have two advantages the citizens who pay officer salaries don't have: 1) police unions know the true scope of privacy incursions, because their officers are using the tools; 2) they have the organization, financial resources, and support from legislators to lobby for protections for their members. In reality, all citizens, or at least taxpayers -- we pay gov't agencies to surveil us -- should have protections equal to officers'. Government agencies can pierce any privacy protection with ease, so there is no national security implication in extending protections to all taxpayers.

integrating

Take all of this location and identity information above, and integrate it with credit card data, browsing habits, email and text parsing, and you've got quite a case, or advertising file, on anyone. Still want to go outside for that walk or stream that radio station from Pittsburgh?

[solved] jnlp files in Arch

A lot of proprietary garbage predictably relies upon one of the most proprietary of the proprietary: Oracle. The most famous collusion is requiring Java for some (putatively) secure connection. How to do this from a Unix-based platform? IcedTea is part of the answer.

# pacman -S icedtea-web
[select the jre8 option]
Note that path information may have to be modified. Installation notice:
For the complete set of Java binaries to be available
in your PATH, you need to re-login or source /etc/profile.d/jre.sh
Please note that this package does not support forcing
JAVA_HOME as former package java-common did

when you use a non-reparenting window manager, set
_JAVA_AWT_WM_NONREPARENTING=1 in /etc/profile.d/jre.sh

Once all PATH information is set, logout, login, and run, eg:
$ javaws foo.jnlp
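
A couple of sanity checks after re-login (archlinux-java comes with Arch's java-runtime-common, assuming a jre8-openjdk style install):
$ which javaws
$ archlinux-java status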

vaapi and vdpau api's - dvd playback

Using VLC, I recently played a commercial 2-DVD set (720P) on a reliable optical drive. One DVD chapter would not play, and the disk appeared to have a small scratch. Before returning it for a refund, I thought I should try something besides VLC, to rule out those random situations where it's a software glitch. The command line version of MPlayer used to be a great equalizer, for example. However, I don't know the VAAPI and VDPAU parameters, or, say, XvMC. The situation seemed worthy of a trail of crumbs here.

overview

It's important to know one's hardware in this situation. To my understanding, VAAPI is a native Intel API, VDPAU is a native NVidia API.
$ lspci |grep -i vga
01:05.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS780MC [Mobility Radeon HD 3100]
And so of course, I get "Radeon", lol. At any rate, both VDPAU and VAAPI pass video decoding from the CPU to a GPU. Each is available to any software (scroll down), though some software is crafted with a preference. There is a lot of advice out there not to run one API as a wrapper through the other: pick native VAAPI or native VDPAU, whichever matches the hardware, then configure all software to work on that basis.
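
Two small standalone utilities report what each API would actually load on a given box -- vainfo (from the libva side) and vdpauinfo (its own small package):
$ vainfo
$ vdpauinfo | head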

settings beyond the API

Besides the API, other playback parameters are the cache and framedrop flags when running MPlayer. Running VLC, playback was clean, except that it paused on the scratched disc. Using MPlayer, I experienced no pauses, but had tearing and pixellation throughout -- aka, a configuration issue.

troubleshoot

$ mplayer dvd:// -dvd-device /dev/sr0 -nosound -framedrop
[pixellation, tearing, sync errors]

$ strace mplayer dvd:// -dvd-device /dev/sr0 -nosound -framedrop &> vidfail.txt

$ grep -in "vdp" vidfail.txt
3136:open("/usr/lib/vdpau/libvdpau_r600.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
3141:open("/usr/lib/libvdpau_r600.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
3143:write(2, "Failed to open VDPAU backend lib"..., 105Failed to open VDPAU backend libvdpau_r600.so: cannot open shared object file: No such file or directory
3145:write(2, "[vdpau] Error when calling vdp_d"..., 52[vdpau] Error when calling vdp_device_create_x11: 1

$ find / -name "libvd*" 2>/dev/null
/usr/lib/libvdpau.so.1.0.0
/usr/lib/libvdpau.so
/usr/lib/libvdpau.so.1

vaapi checks

VAAPI is less desirable these days, for example according to this post. But we can check to see if it's operational, in case we had MPlayer or VLC configured to use it -- it might explain the fail.
$ vainfo
libva info: VA-API version 0.38.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/dri/r600_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit
In this case, we see that VAAPI is not loading VDPAU, which VAAPI can only use as a wrapper. We know this because r600 is part of VDPAU, typically loaded along with Mesa or GStreamer. It appears the order of loading is VAAPI first, then an attempt to call the VDPAU wrapper:
$ vlc --avcodec-hw=/usr/lib/xorg/modules/dri/r600_dri.so video.mp4
VLC media player 2.2.1 Terry Pratchett (Weatherwax) (revision 2.2.1-0-ga425c42)
Failed to open VDPAU backend libvdpau_r600.so: cannot open shared object file: No such file or directory.
I'm pointing VLC directly at the r600.so lib, yet it continues not to "exist". It appears VAAPI is loading but can't find this backend lib, even with a path supplied. This will require additional investigation.

vdpau checks

$ grep -i vdpau ~/.local/share/xorg/Xorg.0.log
[ 34.798] (II) RADEON(0): [DRI2] VDPAU driver: r600
I see the r600 driver is there, and apparently operational. Not that I care about wrappers, but the reason VAAPI cannot find VDPAU to use as a wrapper is probably that I don't have a line exporting the VDPAU variable (export VDPAU_DRIVER=r600) in my ~/.bashrc file. Since I know r600 is operational, I merely need to configure VLC and MPlayer to work exclusively with VDPAU, or install gallium -- a sketch follows.
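
An untested sketch of that configuration, assuming the Mesa/Gallium r600 VDPAU driver (in Arch, libvdpau_r600.so ships in the mesa-vdpau package):
# pacman -S mesa-vdpau
$ echo 'export VDPAU_DRIVER=r600' >> ~/.bashrc
$ source ~/.bashrc
$ mplayer -vo vdpau -vc ffmpeg12vdpau,ffh264vdpau dvd:// -dvd-device /dev/sr0
The -vc names are MPlayer's VDPAU-accelerated codec families (MPEG-1/2 and H.264 here). VLC, for its part, reportedly wants the module name, --avcodec-hw=vdpau, rather than a library path as I tried above.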