Sunday, August 30, 2015

python and sqlite

A previous post discussed the huge (250MB) PostgreSQL installation, appropriate for something as robust as a LAMP stack. But suppose I just need a small database to keep track of business cards or some such?

SQLite CLI

Unlike PostgreSQL, SQLite has no server: everything lives in a local "db" file you create, and backing up the database is as simple as backing up that file. To "connect" to the database, just change into the directory containing it, let's say it's "sample.db". Then...
$ sqlite3 sample.db
... and just do your business. When finished, ".exit" quits. Easier still for this kind of thing is a simple GUI, like sqliteman (Qt based).
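The CLI can also be driven non-interactively, one statement per invocation. A sketch of that for the business-card idea (the table and values here are made-up examples):

```shell
# Hypothetical business-card table; names and values are examples.
db=$(mktemp -u /tmp/sample-XXXXXX.db)          # a fresh db file path
sqlite3 "$db" "CREATE TABLE cards(name TEXT, phone TEXT);"
sqlite3 "$db" "INSERT INTO cards VALUES('Ada','555-0100');"
sqlite3 "$db" "SELECT name FROM cards;"        # prints: Ada
```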

Python access - APSW

But in our Python app, we'll want to access this database directly from code, so we need Python bindings. If we want, we can use SQLite commands directly by importing the standard sqlite3 module into our code...
#!/usr/bin/python
import os,sys,time
import sqlite3
... and then write SQL statements, intermixed with Python.
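A minimal sketch of that approach (the table and column names are invented for illustration):

```python
#!/usr/bin/python
import sqlite3

# An in-memory database; use "sample.db" instead for an on-disk file.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE cards(name TEXT, phone TEXT)")
cur.execute("INSERT INTO cards VALUES (?, ?)", ("Ada", "555-0100"))
con.commit()
cur.execute("SELECT name, phone FROM cards")
print(cur.fetchall())   # [('Ada', '555-0100')]
```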

But if we want a wrapper with a richer set of Python classes, the easiest package to install on Arch is APSW (# pacman -S apsw), which is technically a wrapper rather than a DB-API module, but still provides classes for interacting with SQLite databases.

Once we have that, we can start our code with, say...
#!/usr/bin/python
import os,sys,time
import apsw
...and go on from there. However, I don't need any SQLite wrappers for the level of programming that I do --- I just write the database-related commands in SQL. It's not that hard.

Python application

Remember that standalone Python programs are relatively simple to accomplish in Linux, but it depends on whether one compiles against a Python release and set of libs, or attempts to bundle these into the package. Since most of what you run on your machine are scripts, like Bash scripts, the main concern across Python releases is which modules are natively available. So first determine the highest version of Python on your system, then check whether it has the modules your scripts need.
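That module check can also be scripted rather than done by eye in the interpreter. A sketch using the standard library's importlib (the module names below are just examples; substitute whatever your script needs):

```python
#!/usr/bin/python3
import importlib.util

# Report whether each named module can be imported by this Python.
for mod in ("sqlite3", "gi"):
    found = importlib.util.find_spec(mod) is not None
    print(mod, "is available" if found else "is missing")
```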

Example

Suppose I'm building a script for which I want to create a GTK window (or Qt, but let's use GTK in this example), and so I've got gtk3 and pygtk installed. I'm going to check the version of Python, then list its modules to see whether gtk and pygtk are among them.
$ python --version
Python 3.4.3
$ python3
Python 3.4.3 (default, Mar 25 2015, 17:13:50)
[GCC 4.9.2 20150304 (prerelease)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> help('modules')
[all modules are listed]
>>> exit()
I check the modules list and note gtk and pygtk are in the Python 2 modules list, but not in the Python 3 list. Some Googling shows that Python 3 has a compatibility module, pygtkcompat, which emulates pygtk. All I need to do then is import pygtkcompat and configure it. After that, I treat my code as if gtk and pygtk existed. For example, the script might begin...
#! /usr/bin/python3

import os, sys, time
import pygtkcompat

# enable "gobject" and "glib" modules
pygtkcompat.enable()
pygtkcompat.enable_gtk(version='3.0')  # GTK 3, matching the installed gtk3
import gtk
import glib
...and then whatever code comes next. The usual GTK window calls are then available, for example.

managers: system, session, window, volume, preference (and some use a display)

Links: using systemd as a session manager ::
I miss the SysV days when the system, session, window (e.g. compositing), volume, preference, and display managers were all cleanly defined. In fact, we never even needed the preference manager, gconf, because configurations were done with a simple text file for each app. I'm just a regular user, and these days it takes me a lot of work to keep these managers separate or to find their configuration files. They also intertwine on occasion, requiring detangling time as well.

Display Manager Disclaimer

Logins, startx, and shutdown are the only terminal commands needed in a normal session; display managers merely make these more aesthetic. Accordingly, I haven't used a display manager in ten years. Why have an aesthetic login if it costs memory and adds potential failures? Second, skipping it removes one variable from the kludge of six managers listed above. Display managers also inadvertently obscure start-up failures by keeping text log access out of sight, which is a problem during an install. In fact, during installs, I obtain a solid runlevel 2 before downloading any X elements.

Session manager sussed-out

I attempted to tweak Evince's default zoom level the other day without gvfs (GNOME's virtual filesystem) installed. Sadly, Evince does not use old-school config files; users have to set "schema keys", which are probably XML files. At any rate, one starts at the top: find the correct schema, then modify one of its keys. To shorten the codeblock below, here were the two Evince-related schemas returned by the list-schemas flag:
$ gsettings list-schemas
org.gnome.Evince.Default
org.gnome.Evince
And then the keys for those schemas.
$ gsettings list-keys org.gnome.Evince
document-directory
auto-reload
override-restrictions
page-cache-size
allow-links-change-zoom
pictures-directory
show-caret-navigation-message

$ gsettings list-keys org.gnome.Evince.Default
window-ratio
continuous
fullscreen
show-toolbar
show-sidebar
dual-page-odd-left
inverted-colors
zoom
sidebar-page
sidebar-size
dual-page
sizing-mode
It appears the "zoom" key is the one to set, which is in schema Evince.Default.
$ gsettings set org.gnome.Evince.Default zoom 1.0
$ gsettings get org.gnome.Evince.Default zoom
1.0
We can see the zoom was set correctly (to 100%). Now let's run Evince:
$ evince
GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
Evince starts, but spawns the error above and uses only the default zoom factor, instead of the 100% zoom I entered.

Was gnome-session not running, sharing duties with systemd, or what? A simple setting can often mean days of frustration in Linux. If I recall correctly, it's further the case that gnome-session can only be started after dealing with PAM, e.g. via gdm-password. It's unlikely any GNOME sessions are running.
# ps -A |grep -i g
614 ? 00:00:00 gconfd-2
3602 pts/2 00:00:00 grep
31658 ? 00:00:00 gnome-pty-helpe
Now it's clear: no gnome-session is running. The gsettings commands were accepted because, for the settings to be entered, only gconfd, GNOME's preference manager, had to be running. However, there was no way to apply those preferences, because the session manager, gnome-session, was not running; hence the error message. Meanwhile, Arch recommends against gconf schemas, noting they will soon be replaced with dconf; Evince is probably caught in the middle of that transition. So we really have no way to configure Evince without getting a session manager going, which in turn probably requires the dreaded gvfs, by the way.

Okular

In spite of the immense KDE libraries necessary for Okular, I'll have to take a look. If I can at least set preferences at this point, I'd be happy.




Saturday, August 15, 2015

[solved] first run of zotero back-up

I recently had a chance to attempt my first Zotero back-up and restoration. I have the stand-alone version, which I link to a few browsers with browser plug-ins (see bottom of page). In other words, Zotero installs as a regular application, available on or offline. Zotero browser plug-ins (typically XPI) install into the desired browser independent of whether Zotero is installed on one's system at all.

other bibliography/citation-related notes

BibTex tutorial :: exporting cite keys from Zotero :: helping cite keys match database


Overview

From what I could tell from Zotero's instructions, all I needed to do was back up my entire Zotero directory and then put it on the new drive. Helpful, but experience showed it could be broken down more explicitly:
  1. Back-up the Zotero database directory (usually a few hundred MB; mine fits on a CD).
  2. Install a blank version of Zotero, on the new drive. Be sure Zotero is stable (open and close it a couple times).
  3. Add a browser connector, if you want or need one (optional).
  4. Take the backed-up data directory and overwrite the fresh install's data directory.
  5. Open Zotero and all backed-up citations should be there.
  6. Associated back-up issue: suppose one has a folder with several ebooks or PDFs on some subject. Zotero can apparently store local HDD links to these documents, as well as metadata about them. These links will also be in the Zotero database, just like web links, and links use far less database space than the full documents would. However, after restoring Zotero on a new install, you'll also need to duplicate the directory structure where these documents are kept, so that Zotero's links still point to the documents.

Back-up

Initially, I made a mistake: I backed up the Zotero directory by burning it onto a DVD with other data. This directory and all files within it were suspiciously small, only about 10MB. Eventually I determined I had only backed up the application directory. The data directory is what should be backed up, but it is not in plain sight within the directory structure. To find it, I searched for one of its files, zotero.sqlite:
$ find -name zotero.sqlite
./.zotero/zotero/af5baci7.default/zotero/zotero.sqlite
This is the correct Zotero directory to back up, the data directory. That is, ~/.zotero/zotero/[hashnumber].default/zotero/*. This directory was a few hundred MB. As for the application directory, why back it up at all? Download and install the latest version of Zotero instead.
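The back-up itself can then be a simple tar of that data directory. A sketch (the profile hash "af5baci7.default" comes from the example above and will differ per system; a throwaway directory stands in for $HOME so the commands are self-contained):

```shell
home=$(mktemp -d)    # stand-in for $HOME in this sketch
mkdir -p "$home/.zotero/zotero/af5baci7.default/zotero"
echo data > "$home/.zotero/zotero/af5baci7.default/zotero/zotero.sqlite"

# Archive the data directory; burn or copy the tarball afterwards.
tar -czf "$home/zotero-backup.tar.gz" -C "$home/.zotero/zotero" af5baci7.default
tar -tzf "$home/zotero-backup.tar.gz"    # verify zotero.sqlite is inside
```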

Restoring

First, install the Zotero app, either the old version or the latest. A new install will create a new folder in your home directory: ~/.zotero/zotero/[newhashnumber].default/zotero/. Open and close Zotero a couple of times to be sure the install is clean. Then take the old data directory (backed up above) and overwrite the new data directory, yielding something like ~/.zotero/zotero/[newhashnumber].default/zotero/[old backed-up data]. The next time I opened Zotero, my backed-up citations were there.

Connect to browsers (optional)

You can enter Zotero entries manually; browser plug-ins make it easier for some online sources. But after a restore, the connector between stand-alone Zotero and its browser plug-ins may or may not be broken. Once the standalone is working and up-to-date, start Zotero and then go to the link for the connector matching your preferred browser. Here are a couple of them.

Chromium
Chromium Zotero Connector

Opera
Opera Zotero Connector

Thursday, July 23, 2015

blender odds and ends (250Mb)

In Arch, Blender is a 60MB download and roughly a 250MB installation. Several dependencies install with it, most of which are likely already present.

3 button mouse

This is the number one undocumented hassle when installing Blender. Users can opt for 3-button mouse emulation (Preferences -> Input tab), but emulation leads to overlap between X's management of mouse events and Blender's. For example, Blender's 3-button emulation of object rotation is Alt+LMB. But in X, Alt+LMB grabs an active window and moves it around the desktop. So what happens when a Blender user presses Alt+LMB in 3-button emulation? The entire window moves instead of the object inside Blender's window.

Solution: X mouse strokes can be altered by creating a "SectionLayout" file and putting it in /etc/X11/xorg.conf.d/. Time consuming, considering no such extra files or time are needed if a person has a $10 (Logitech M110, Ebay) 3-button mouse. Additionally, if you have a 2-button mouse with a scroll wheel, the wheel is typically a disguised third button which (in addition to being rolled) can be pressed down until it clicks. Clicking and holding the wheel down while moving the mouse is how to rotate around an object inside Blender.

numpad

Also in Preferences -> Input tab is numpad emulation. On a laptop this is necessary, since standard laptops have no numpad. As users might expect, numpad emulation allows using the number keys across the top of the keyboard instead.

selection/deselection - extrusion

There are dozens of YouTube video tutorials about extrusion, apparently a basic Blender feature. However, four of the seven steps for extrusion were not mentioned in any of the videos. Accordingly, for the first several hair-pulling days I attempted to extrude, the result was invariably new, unattached duplicate boxes, NOT a connected extrusion from the current box.

The unexplained step, discovered only inadvertently (auggh), is that the start-up box is, by default, already selected. So de-select ("A") the start-up box and then select a face, or however many one wishes, to extrude. When something is selected in Blender it changes from grey to gold:

  1. TAB to select "Edit" Mode
  2. Be in "Solid" view, not wireframe view
  3. Be in Face View, not Vertices View
  4. Use "A" to deselect/select all. Select faces by flying around the cube (MMB), and selecting the faces one wishes (Shift - LMB).
  5. Press "X", "Y", or "Z" to constrain the extrusion to that axis; or, to freehand it, "G".
  6. Press "E". You can also follow it with a number, e.g. "E", then "2", to extrude that many grid squares along the selected axis.
  7. Move the mouse (no buttons), which pulls the extrusion. Left-click once the shape is satisfactory.

floor plans

Links: Render DXF to 3-D
Floor plans are a common use of Blender for those not doing animations. Users can import standard .dxf line-art files and, with some additional work, extrude them into complete floor plans. Additionally, textures can be downloaded and added to one's texture library.

Friday, July 17, 2015

[solved] vlc playback odds and ends


audio playback gaps

Several forums describe buffering issues with VLC audio. For me, it was during playback of files on my HDD. For example, in the previous post, I described using VLC with .m3u files to play with WAV ordering prior to burning a CD (assuming anyone still uses CD's). Playing M3U's which contain nothing but WAV's through VLC would trigger the audio gaps.

The symptom is a one second gap in playback, a simultaneous spike in CPU use, and generation of a two-line error. Subbing in some variables for actual numbers:
[0000x7f5d030f7df8] core input error: ES_OUT_SET_(GROUP_)PCR is called too late (pts_delay increased to 2232 ms)
[0000x7f5d030f7df8] core input error: ES_OUT_RESET_PCR called
Apparently a Program Clock Reference is called too late and, in response, VLC increases the delay for the Presentation Time Stamp (PTS). PCR and PTS are nominally MPEG playback concepts, yet this error appears even in audio-only playback of WAVs.

Most of the solution for me was found here, except for 1) which value to change, 2) whether to increase or decrease it, and 3) by how much.


Since each setting has a pop-up, I read them. The item to change appeared to be "File Caching". On a guess, I adjusted the delay to near the pts_delay value noted in the error message above. Once set to 2000ms, I experienced no more cut-outs during file playback.

the orange cone

Some don't like the cone at all; I dislike it during audio-only playback. It sits conspicuously where album art would go. I noted how to get rid of it in a previous post, but it bears repetition. During playback, to disable the orange cone when there is no album art: Tools, Preferences, Select All (Bottom), Interface, Main interface, Qt, unselect "Display background cone or art".

Thursday, July 16, 2015

podcasts, audio cds, wine

Links: Overview of burning options :: Tutorial for Linux burning :: Mixed Data-Audio :: More TOC information

I download 60-minute podcasts and burn them to CDs to play while driving. The problem with a long CD is having to listen to, say, the first 20 minutes to reach minute 21. MP3 players overcome this, but I don't like MP3 players in traffic: I'd need to look down at the time stamp. Instead, I break the hour podcast into 12 five-minute tracks and burn those to a CD. I can keep my eyes on the road and just twist the knob a couple of times to the track I want, without looking down. Below is my process (about 15 minutes) for getting a podcast onto a CD, plus some information on one-time software installation.

Note: processing audio is not hugely resource-intensive on any system built in the last 15 years. Therefore, no "system requirements" or "rendering" time estimates are included here, unlike posts for video processing.

Steps

  1. Convert podcast into a .wav (audio CD format: 44Khz, 2chan, 16bit little endian)
  2. Set levels and so on
  3. Pick 11 break points and insert a second of silence at each
  4. Save each of the 12 segments using a numbering convention, eg "chunk01", "chunk02", etc
  5. Double-check the folder contains all 12 segments
  6. (optional, time permitting) Provide meta-information to display during CD playback (requires creation of a .toc file)
  7. Burn the segments to a CD as 12 tracks.

1. Convert to WAV (1 min)

Sometimes a podcast will be in a video format, sometimes just an MP3, etc. The line below will convert any format, since it nulls the video.
$ ffmpeg -i podcast.avi -vn -ar 44100 -ac 2 podcast.wav

2. Set levels (2 min)

You can always "normalize" the audio, something useful if you have several WAV's. That would be something like...
$ normalize-audio -m *.wav
For a single large WAV that I'm going to divide into sections, I'll need an audio editor, so I just set the level manually when I open the WAV in the editor.

  wine installation sidebar
If you (like me) prefer some outdated Windows audio editor from the 90's that fits like an old glove, installing Wine is necessary to use the app. Wine, however, requires 32-bit libraries. Therefore, in addition to the 500MB of extra multilib crap, you risk occasional 32-bit multilib conflicts with your modern x86_64 installation. Accepting that, in Arch, enable the "multilib" repository inside pacman.conf so pacman can locate and add the libraries, then let pacman do the heavy lifting.
# nano /etc/pacman.conf
[multilib]
Include = /etc/pacman.d/mirrorlist
# pacman -Syu
# pacman -S wine winetricks lib32-alsa-lib lib32-ncurses
$ wine [path to some old app]
If some of the application's sound controls don't work, run $ winecfg and check the sound settings for the correct card, and so on.

3. Select, mark divisions (5 min)

With the WAV open in a sound editor, I place 1-second silences at the end of a sentence (or other pause in speech), about 5 minutes apart. The one-hour podcast is thus divided into 12 segments, with silence pauses I can see on the timeline. Time permitting, if there are any commercials or unwanted portions of the podcast, I trim them away at this point.

4. Save the segments (5 min)

Moving from silence divider to silence divider, I save each segment of the WAV as a smaller WAV, labeled sequentially: "chunk01.wav", "chunk02.wav", etc. Burning software will follow this numbering convention, re-assembling the segments, in numbered order, as tracks on a CD.
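The zero-padded numbering matters: it keeps alphabetical order identical to track order. A quick sketch of the convention:

```shell
# chunk01.wav ... chunk12.wav -- padding keeps chunk10 from sorting before chunk2
for i in $(seq 1 12); do
    printf 'chunk%02d.wav\n' "$i"
done
```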

5. Double check folder

Verify all 11 or 12 tracks (WAVs) are in the folder and are sequentially named.

6. (optional) provide track information, pre-listen

CDs hold about 70-80 minutes, given 700MB. If you've trimmed a lot of the podcast, you might add additional tracks to the folder, change the play order around, etc. Maybe you have time to listen to how the entire CD will ultimately sound once burned. To do this, make a playlist using a text m3u or pls file and play it in your media player. In this post I use simple .m3u files (see bottom of page), though I've used the more complex .pls format in other situations. Additionally, you may want to create a .toc text file to supply meta-information during CD burning. An example of a .toc is also at the bottom of the page.

7. Burn to CD (2 min)

  burning software sidebar
Because I added one second to each segment when dividing the WAV (see step 4 above), I don't want the burner to pad the tracks with additional silence -- that would mean too much silence between tracks. Avoiding padding means DAO (disc-at-once) burning rather than TAO (track-at-once). Secondly, in Arch, pacman cannot install both wodim (cdrkit) and cdrecord (cdrtools), because their libraries conflict; pick one, in other words. I'm used to cdrecord, but Arch instructions describe burning only in terms of wodim and genisoimage (DVD), so I went with cdrkit. Cdrdao is the most important tool for my way of burning anyway, and that's a separate package.
# pacman -S cdrkit cdrdao


With installation complete, burning is available. Burn speeds may be tweaked up or down according to impatience or errors, but generally:

1) Burning without a Table of Contents, say from a folder in which WAV's are in numeric order:
$ wodim -v speed=1 dev=/dev/sr0 -dao *.wav
To test the burn first, insert the "-dummy" option into the line. For pads between tracks, add the "-pad" option. Other options are on the wodim man page.

2) Burning with a Table of Contents (.toc file) can utilize individualized track names and folder locations while burning, and can include track meta-information to display during playback. The application for this type of burning is cdrdao, but note that cdrdao actually requires a TOC file to burn:
$ cdrdao write --device /dev/sr0 --speed 6 --eject --driver generic-mmc-raw [tocfilename].toc

Other Audio Notes

BPM information: The Mixxx application (pacman -S mixxx) in the Arch repositories will calculate Beats Per Minute.

VLC note: During playback, to disable the orange cone when there is no album art: Tools, Preferences, Select All (Bottom), Interface, Main interface, Qt, unselect "Display background cone or art".

.m3u

This file type is good for auditioning the CD before we burn it. Inside the m3u, we can rearrange the track order until it sounds the way we prefer. Below is a minimal example with two tracks; expand it with as many WAVs as desired, e.g. a 4-hour playlist for a party. However, a CD cannot hold more than 70-80 minutes of audio (unless your stereo plays MP3 format, a different post), so keep a general idea of your minute total if you ultimately intend to burn it.
$ cat sample.m3u
#EXTM3U

#EXTINF:-1,July 15 Podcast - Part 1 (4:08)
chunk01.wav

#EXTINF:-1,Musical Interlude (3:00)
/home/foo/tracks/sunnyday.wav
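To keep an eye on the minute total mentioned above, WAV durations can be summed with Python's standard-library wave module. A sketch (the two generated one-second files are stand-ins for real tracks):

```python
import os, tempfile, wave

tmp = tempfile.mkdtemp()
names = ["chunk01.wav", "chunk02.wav"]

# Write one second of CD-format silence per file (44.1 kHz, 2 ch, 16-bit).
for name in names:
    with wave.open(os.path.join(tmp, name), "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(44100)
        w.writeframes(b"\x00" * 4 * 44100)

# Sum durations: frames / frame-rate gives seconds per file.
total = 0.0
for name in names:
    with wave.open(os.path.join(tmp, name), "rb") as w:
        total += w.getnframes() / w.getframerate()

print(f"{total / 60:.2f} minutes")   # 0.03 minutes for these stand-ins
```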

.toc

Links: TOC metadata explanation :: More TOC information :: Script for simple TOC production
The TOC is critical during DAO burning, so it's worth reading a couple of posts on them (see links above). A user may wish to modify the TOC from an existing audio CD, produce a TOC from scratch, etc. To examine a TOC from an existing audio CD:
$ cdrdao read-cd --device 0,1,0 --driver generic-mmc-raw somename.toc
These TOC files are easily edited with any text editor; use two forward slashes ("//") to add comments inside the file. I make very simple TOCs, but many tweaks are available for those with time to invest: get into those three links above.

Here's a truncated view of a TOC I made for a divided podcast burn. The actual TOC for the burn is obviously longer since it has more tracks, but you'll get the picture:
$ cat sample.toc
CD_DA

//Track 01
TRACK AUDIO
CD_TEXT {
  LANGUAGE 0 {
    TITLE "Part 1: 2014-09-24 Speech"
    PERFORMER "Matteo Renzi"
  }
}
FILE "chunk01.wav" 0

//Track 02
TRACK AUDIO
CD_TEXT {
  LANGUAGE 0 {
    TITLE "Part 2: 2014-09-24 Speech"
    PERFORMER "Matteo Renzi"
  }
}
FILE "chunk02.wav" 0
This information will be displayed for each track. Also, since the tracks are only 5 minutes each, when driving I can spin the knob between tracks without looking down; when stopped, I can glance down to see the actual title of the segment. Have fun and be safe!
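A skeletal TOC for a folder of sequentially numbered chunks can even be generated with a short shell loop. This hypothetical helper emits only the CD_DA header and FILE entries; CD_TEXT blocks like the ones above would still be filled in by hand:

```shell
# Stand-in tracks in a scratch directory so the sketch is self-contained.
dir=$(mktemp -d) && touch "$dir/chunk01.wav" "$dir/chunk02.wav"
cd "$dir"

echo "CD_DA"
n=0
for f in chunk*.wav; do
    n=$((n+1))
    printf '\n//Track %02d\nTRACK AUDIO\nFILE "%s" 0\n' "$n" "$f"
done
```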

Thursday, June 25, 2015

[solved] screen blanking

Link: Excellent Blanking Overview

Forum solutions in Linux often seem to lack some necessary detail. And if something is missing, a person doesn't even know it, of course, until they entirely overcome whatever was left out. That might be hours or days later, if ever. It can be frustrating.

For screen blanking management, the key detail, unstated in any forum solution I found, was to first make sure blanking was disabled in runlevel 3. Console blanking settings are baked into the kernel, and if they're not first managed in runlevel 3, prior to startx, blanking cannot be narrowed down to X settings. Rule out half the possible screen blanking problems by disabling runlevel 3 blanking first.

Run Level 3 (Non-GUI blanking isolation)

Again, do this first, prior to working on X, since terminal blanking is baked into the kernel and must be defeated before any X solution will work. In runlevel3 terminals, the setterm command controls several terminal settings, among them blanking times. To stop blanking in the terminal:
$ setterm -blank 0
The setterm command is not permanent when used in a terminal; the next boot resets to default kernel settings. To permanently stop blanking, users can disable blanking via their GRUB/LILO configuration, add a kernel boot-line flag, or create a systemd unit. But another permanent option, which I considered the easiest hack for my installation, was to add a setterm line to /etc/profile. In other words:
# nano /etc/profile
setterm -blank 0
This arrangement survived reboots. With runlevel 3 managed, I could attempt various X session solutions. Ideally, I wanted a 2.5-hour blank-out, so I could start a film while going to sleep and have the screen blank soon after.

X session failed configurations

Typically, the primary file for X configuration is ~/.xinitrc. If you're running Arch, multiple options are described here. Users may have to try several configurations or combine them. In my case, none of the suggested settings in ~/.xinitrc was successful. Even a suggested combination of runlevel 3 and X commands did not work, for example:
$ nano .xinitrc
setterm -blank 0 -powersave off -powerdown 0
xset s off

X session successful configuration

Per the same link, after the failures with ~/.xinitrc, I created a DPMS file to place in /etc/X11/xorg.conf.d/, named 10-monitor.conf. Here is one way to set it up.
# nano /etc/X11/xorg.conf.d/10-monitor.conf
Section "Monitor"
    Identifier "LVDS0"
    Option "DPMS" "false"
EndSection

Section "ServerLayout"
    Identifier "ServerLayout0"
    Option "BlankTime" "150"
    Option "StandbyTime" "155"
    Option "SuspendTime" "160"
    Option "OffTime" "0"
EndSection
This worked for me. As described here, one can tweak these entries. Trial and error revealed that times in the ServerLayout are in units of minutes. I left "OffTime" as 0, to disable automatic power-down.

Summary: Manipulating blanking requires two files, one to stop runlevel3 blanking, and a second to adjust X session blanking.

Tuesday, June 23, 2015

extra laptop storage - hdd in optical slot, notes: external usb, thunar

Backups are a pain. On a laptop, they can be managed more easily by purchasing a $10 drive caddy (photo) into which a backup SATA HDD is placed. Once the HDD is in the caddy, the laptop's DVD drive is removed and the caddy (with the HDD) goes into its place.

software steps

First, determine the names of the drives in your system, easily done with, say, fdisk -l. Using the drive's name, add a line for it to fstab:
# fdisk -l
[suppose result is "sdb1" for caddy drive]

# nano /etc/fstab
/dev/sdb1 /home/media ext3 rw,relatime 0 1

Now the drive will mount automatically at each boot, making it an available repository for back-ups. Files can be transferred between drives with a file manager, or a user might implement a backup scheme or program (such as rsync, e.g. with cron).

Note: if you decide to put the optical drive back in the slot, comment out the fstab entry for the removed HDD before rebooting; otherwise the system will seek the drive and take several minutes to boot.

external (usb) drives

For the setup above, no special applications are necessary. However, if one is going to use a USB stick or drive, the typical rule applies: you will need to install udisks2, fuse, gvfs or similar bullsh*t, unless you want to mount these manually and move in and out of root. Such applications create a permissions kludge, and may run memory-hogging notification daemons that continually poll your system (I dislike .gvfs; notification-daemon is slightly better). But there's little doubt some permutation of these is necessary if you regularly copy to a thumb drive or other USB block device and want a GUI file manager in user space, without sudoing up and some CLI skills. In Arch, I use udisks2 in tandem with udiskie (for user space). Taken together, these are 20MB:
# pacman -S udisks2 ntfs-3g parted dosfstools udiskie
With these, I can mount any format USB drive, including HFS (Mac).

udisks2 and udiskie note

Links: manual policykit/udiskie config :: systemctl udiskie config
This is a useful app for avoiding fuse, samba, .gvfs, and some others not needed on stand-alone systems, but it requires configuration. First, be sure you're in the "storage" group; then, for udisks2:
# nano /etc/polkit-1/rules.d/50-udisks.rules
polkit.addRule(function(action, subject) {
    var YES = polkit.Result.YES;
    var permission = {
        // only required for udisks2:
        "org.freedesktop.udisks2.filesystem-mount": YES,
        "org.freedesktop.udisks2.filesystem-mount-system": YES,
        "org.freedesktop.udisks2.encrypted-unlock": YES,
        "org.freedesktop.udisks2.eject-media": YES,
        "org.freedesktop.udisks2.power-off-drive": YES,
    };
    if (subject.isInGroup("storage")) {
        return permission[action.id];
    }
});
Then, for udiskie, add a line somewhere near the end of .xinitrc, ahead of dbus activation:
$ nano .xinitrc
udiskie &

Thunar note

Supposing Thunar is your file manager and you've connected a USB drive, you'll also need to install thunar-volman and set its options (see below) for Thunar to display the drive.


Enable at least the two mounting options for removable drives and media. The path to this dialog box is: Edit, Preferences, Advanced (tab), Configure.

Finally, if you did install .gvfs, don't forget to exclude it from any dynamic backups, or you're in for a world of pain.

Sunday, June 21, 2015

[solved] xsane detection of older or all-in-one HP printers

Sometimes you've got an old HP printer missing print heads or some such, but you still want to use it for scanning. Not all printers can be directly accessed by scanning software such as SANE unless the entire printer has been installed using printer "drivers" (in Windows terms). This is often the case with HPs. For HP printers, the standard "driver" package is HPLIP. HPLIP is available in the Arch repos; however, the Arch version doesn't include the specialized PPDs necessary for the printer (see section 3 below).

A recent encounter with a broken old HP OfficeJet Pro L7590 was a case in point. I only needed this printer to scan. I turned the printer on and connected the USB. My box detected the USB connection, but no modification of /etc/sane.d/dll.conf let SANE communicate with the HP. The standard symptom was present: $ sane-find-scanner located the printer, but $ scanimage -L did not. So, good connection, but no scanning: I would have to install the printer driver. Here are the steps.

1. SANE

As just noted above, SANE was properly installed. That was simply:
# pacman -S xsane
Xsane pulls in SANE as a dependency, so there's no need to separately install SANE.

2. HPLIP

As also noted above, once SANE was in, we'd connected everything (USB cable to scanner, scanner powered on) and attempted $ sane-find-scanner. This detected the printer, but $ scanimage -L did not. Since we were dealing with an HP, we needed the HP Linux printer driver set, HPLIP.
# pacman -S hplip

3. PPD

As noted at the outset, the Arch version of HPLIP does not include PPDs. PPDs are information files specific to each printer model; both CUPS and HPLIP rely on them. Since the Arch version of HPLIP omits them, the attempt to install an HP printer ($ hp-install -i) fails at the PPD step. Where to get the PPD for HP printers?

Go to the HP support website and download the tarball of the full version of HPLIP. Uncompress it. Enter the uncompressed folder and locate the "ppd" folder. Find the PPD for your model. After you unzip that PPD, put it anywhere you can easily remember, say your home folder. Now you are ready to complete the HPLIP printer driver installation.
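A sketch of hunting down the PPD inside the unpacked tarball. The helper name find_ppd and the model pattern "l7590" are my own, and the exact directory layout and tarball version vary by HPLIP release -- substitute whatever HP's site gave you.

```shell
# find_ppd DIR PATTERN: print the first PPD file matching PATTERN under DIR
find_ppd() {
    find "$1" -iname "*$2*.ppd*" | head -n 1
}

# usage, after uncompressing the HPLIP tarball (version number hypothetical):
#   tar xzf hplip-3.15.6.tar.gz
#   gunzip -c "$(find_ppd hplip-3.15.6 l7590)" > ~/hp_l7590.ppd
```

The gunzip step covers tarballs that ship the PPDs compressed; if yours is already a plain .ppd, just copy it.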

4. Finally


Run $ sane-find-scanner again, and write down the USB number. In this case, let's say it was "002:003". Then we simply run...
$ hp-install -i 002:003
...and follow its prompts. At some point, the HP install dialog will prompt you for a path to the PPD. Provide the path to wherever you put it, eg. "/home/foo/hp_l500.ppd". The installation of the printer will finish normally1.

After HPLIP and the proper PPD are present, and the scanner is connected, our last test is to $ scanimage -L . We should see the scanner:
$ scanimage -L
device `hpaio:/usb/Officejet_Pro_L7500?serial=MT97K251CP' is a Hewlett-Packard Officejet_Pro_L7500 all-in-one
Now we can fire-up Xsane and scan.

1 If you also need to print with this printer, do a customary CUPS printer installation (after doing the steps above). Assuming the CUPS daemon is already running and permissions are correct, access the CUPS administrative page at http://localhost:631 and add the printer to CUPS.

various chromium issues

You hate to have to use a browser as immense as Chromium. Installing a mainstream browser usually becomes inevitable however, and most of them are worse than Chromium.

What choice do we have but to use gigantic proprietary browsers when website designers are too lazy to create something that works well with lighter browsers? Light browsers like Dillo and NetSurf have properly functioning cookie capacity, and NetSurf carries simple JavaScript and SSL, yet sites like Yahoo mail or eBay disallow access with mislabeled complaints that cookies are being "rejected", or even that correct passwords are "incorrect". These are the result of lazy website design. Anyway...

Here are some installation notes and the three main hassles I've experienced with Chromium (plus safety further down).

Three hassles

These are the big three:
  1. AMD GL settings
  2. Flash settings
  3. Sound settings
Additionally, be sure "gnome-icon-theme" is installed. Note that the file ~/.config/chromium/Default/Preferences allows some settings; the pepper-flash path can go in there, while other things would have to be added to a start-up line instead. Anything in the file is overwritten if the user signs in to Chromium (not to various Google accounts).

1. AMD/GL settings

After # pacman -S chromium, the initial startup from command line is bound to throw similar errors to these, if you're running an AMD processor:
$ chromium
[6391:6391:0621/175259:ERROR:url_pattern_set.cc(240)] Invalid url pattern: chrome://print/*
[6417:6417:0621/175259:ERROR:sandbox_linux.cc(340)] InitializeSandbox() called with multiple threads in process gpu-process
[6417:6417:0621/175300:ERROR:gles2_cmd_decoder.cc(11539)] [GroupMarkerNotSet(crbug.com/242999)!:D0CCF6C427020000]GL ERROR :GL_INVALID_OPERATION : glTexStorage2DEXT: <- error from previous GL command
The first fix is in Settings -> Show Advanced Settings -> System. Deselect "Use hardware acceleration if available" (I also deselect "Continue running background apps when Chromium is closed", but for other reasons). Now restart Chromium without the GPU:
$ chromium --disable-gpu
[6391:6391:0621/175259:ERROR:url_pattern_set.cc(240)] Invalid url pattern: chrome://print/*
The print error is probably not worth the trouble, since it's a complex flag issue.

2. Flash settings

The place to turn off Adobe Flash and turn on Pepper Flash is chrome://plugins. At this point it should happen automagically, since Google has unwisely allowed Adobe to partner in some way: what is identified as the Flash player is not Adobe's player but PepperFlash, if you've installed it. Still, PepperFlash must be downloaded separately; it's apparently not yet bundled.

3. Sound settings

Assuming you want ALSA and not that PulseAudio crap, be sure to # pacman -S alsa-tools alsa-utils.

Safety

  1. One site suggested removing the RLZ tag from omnibar/omnibox searches (the omnibar/omnibox is the URL bar that has morphed into a search bar in Chromium). I don't consider it a threat, but it's good to understand it superficially. One explanation is here. If you don't want it, add another search engine option, calling it whatever you want, with a URL lacking "RLZ". For example:
    {google:baseURL}search?&safe=off&num=100&q=%s
    You can see this also provides 100 returns per page -- you may want fewer.

Saturday, June 13, 2015

systemd: time-outs, journal size, Xorg terminals

I prefer SysVinit to Systemd, in the same way I prefer OSS to ALSA, and ALSA to PulseAudio: the newer stuff creates problems where there were none. Among these problems are locating configuration commands or files in the face of opaque memory or CPU hogging defaults.

time-outs

During shut down and start-up, systemd will wait too long to kill or initialize internet connections, among other things. Set the systemd time-out restrictions...

dhcpcd tries to initialize

After an update, dhcpcd is occasionally re-enabled at boot. Since it tries all interfaces, it will hang on any that are not connected. Furthermore, the process is partly obscured, so it's tricky to find: the standard "list-unit-files" does not show the complete unit name. So although one disables dhcpcd.service, the instantiated service is cryptically called by a different name at boot.

# systemctl list-unit-files |grep dhcpcd
dhcpcd.service disabled disabled
dhcpcd@.service indirect disabled

$ systemctl status dhcpcd*@*.service
* dhcpcd@enp4s0.service - dhcpcd on enp4s0
Loaded: loaded (/usr/lib/systemd/system/dhcpcd@.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-06-09 04:32:17 PDT; 7min ago
Process: 362 ExecStart=/usr/bin/dhcpcd -q -w enp4s0 (code=exited, status=1/FAILURE)
The name you need here is dhcpcd@enp4s0.service; it cannot be disabled under the first name given, simply dhcpcd@.service. To stop this hang...
# systemctl disable dhcpcd@enp4s0.service
Removed /etc/systemd/system/multi-user.target.wants/dhcpcd@enp4s0.service.
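To catch every instantiated unit in one pass, one sketch is to strip the unit names out of the listing and feed them back to systemctl. The unit_names helper is my own, not a systemd tool:

```shell
# unit_names: read `systemctl list-units --no-legend` output on stdin
# and print only the unit names (first column)
unit_names() {
    awk '{print $1}'
}

# usage (as root):
#   systemctl list-units --all --no-legend 'dhcpcd@*.service' \
#     | unit_names | xargs -r -n1 systemctl disable
```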


journald logs

Prior to systemd, boot-ups used to log to /var/log, pretty much auto-magically via rsyslog (syslog). The only thing one had to configure was cron's timeline for rotating old logs into the trash. Suckily, journald logs grow until they take over something like 10% of disk space by default. To me, it's yet another mistake of the last 8 years of Linux -- adding boggy new application layers instead of improving and simplifying long-standing daemons1. At any rate, journald must be configured if you want anything reasonable and intelligible. Secondly, you must use "journalctl" to read the logs, because they are stored in a binary format.

Since there are about 30 settings in /etc/systemd/journald.conf, expect to waste an hour researching them.
# nano /etc/systemd/journald.conf
Storage=auto
SystemMaxUse=200K
When I want an ASCII record for grepping, etc., I use journalctl -r -o short-iso ("-r" reverses time to put the most recent entries on top; "short-iso" gives normal clock timestamps), and save the output to a text file:
$ journalctl -r -o short-iso > file.txt
Alternatively, one can output other formats, such as the apparently standard JSON format (-o json).

This is the file when the system is running well and no logging is needed:
Storage=none

notes

  • 200K of logging seems to cover about the last 10 boots.
  • journalctl --verify checks logs for corruption

xorg terminals

If systemd is not restricted, it will open 40+ terminals when you open X, burning hundreds of unnecessary MBs of memory. Seven terminals are sufficient for operations inside X:
# nano /etc/systemd/logind.conf
NAutoVTs=6

1 It's as stupid as when PulseAudio took hold of ALSA (which itself overlaid OSS).

Monday, February 16, 2015

xorg odds and ends, xorg.conf

xterm settings

In Arch it's become slightly difficult to implement xterm settings. The way Xorg initializes has changed -- it no longer directly loads .Xdefaults or .Xresources. Instead, they are called by .xinitrc. Sometimes, this extra step fails and your xterm will look vanilla. The correct .xinitrc lines calling .Xresources should already be present, and should look something like...
$ nano .xinitrc
userresources=$HOME/.Xresources

if [ -f "$userresources" ]; then
    xrdb -merge "$userresources"
fi
Troubleshoot: save either .Xresources or .Xdefaults. I kept .Xresources and therefore deleted .Xdefaults, but I could have switched this. Add your xterm customizations to whichever of the two is kept. Check these customizations by loading the file directly...
$ xrdb .Xresources
...then open a new xterm to see if it looks configured. I typically make modifications of color, clipboard availability, and so forth, eg...
$ nano .Xresources
XTerm*selectToClipboard:true
... but this can also be added to one's command to open xterm in their icewm menu.

If xterm was modified, .Xresources is being called by .xinitrc. However if xterm looks unmodified, then either .xinitrc needs modification to properly call .Xresources or, .Xresources requires further work.

xorg install order

  1. # pacman -S xf86-video-ati There are several drivers to pick from; you'll need the correct one for your chip. I found my chip this way:
    $ lspci |grep -i vga
    VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS780MC [Mobility Radeon HD 3100]
    So I had the Radeon HD 3100 RS780MC chip, which corresponded with the xf86-video-ati driver. Since it's radeon, I also proceeded with the next step; others may not need it.
  2. # nano /etc/mkinitcpio.conf
      MODULES="radeon"
    # mkinitcpio -p linux
    # reboot
  3. # pacman -S xorg-xinit xorg-server xorg-server-utils xorg-apps (80Mb)
  4. # pacman -S icewm icewm-themes (20Mb) -- just get some WM in there to test all the settings
  5. # pacman -S dillo -- a light browser for looking stuff up about the install
  6. get the skeletons for /etc/X11/xinit/xinitrc, /usr/share/icewm (the entire directory, then chown it and change it to ".icewm"), and /etc/dillo/dillorc
  7. modify ~/.xinitrc to add "exec icewm"
  8. I like to reboot here, before starting X, but some might just logout and log back in. Either way, "startx", and then tweak and add apps

xorg.conf

X configuration is a persisting mystery. The following may help in Arch.
  1. the files to configure 90% of your X session are in /etc/X11/xorg.conf.d/ directory
  2. create /etc/X11/xorg.conf.d/10-extras.conf for your custom stuff, like screen blanking, Blender mouse remapping, and so on -- all in one file
  3. the config files can have multiple "ServerLayout", "InputDevice", etc. sections
  4. comments in configuration files in the directory are with pound sign "#"
  5. .xinitrc and .Xresources play about a 10% role
  6. /etc/profile.d/locale.sh is important for "locales", which affect font encoding. You want all of these to be "C"
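As an illustration of item 2, a hypothetical /etc/X11/xorg.conf.d/10-extras.conf might hold screen-blanking times; the option names are standard ServerFlags options, but the values here are just examples:

```
Section "ServerFlags"
    # minutes of idle time before blank / standby / off
    Option "BlankTime"   "10"
    Option "StandbyTime" "20"
    Option "OffTime"     "30"
EndSection
```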

Sunday, February 15, 2015

[solved] distributed install (Tex Live info also)

Typically, the details I need to operate within Linux are difficult to find on the Net, yet what I don't need seems written again and again nearly everywhere on the Net. I often must acknowledge to myself later that, what I couldn't find at the time was too simple for anyone to even bother typing.

Recent example. For years, I've wanted to backup my data directory quickly, so I could have a cron backup script to automate it. "Quickly" to me also meant "dd" (data destroyer, lol) instead of "rsync" or "cp". In turn, "dd" meant "unmounted" --- I don't want "live acquisition" for a directory as important as "/home". But I could not grasp how a separate, unmountable, partition for "/home" would work exactly.

The allocations for each partition seemed easy: I used...
$ du -ch
... to determine usage, and formulated a plan for splitting-up the drive: 10G swap, 30G install, the rest to /home. But I couldn't figure out how to do it. How would applications find the partition containing the data files? Dual booters run into that problem, for example.

fstab - the key

One day, I was struggling with the problem and I finally recalled a Linux basic: everything is a network, everything is a file. For example, how does a worker in Building A access his home directory on a server in Building B? Of course! fstab would simply mount it. Fstab was the solution and it was so simple it's little wonder no one had bothered to explain this in their distributed install instructions.

new Arch install

Solution in hand, the new install had 3 pieces: "/home", "/", and "swap" (some add a separate boot partition also). Using cfdisk, I sized each partition as noted above. Then...
# mkswap /dev/sda3
# swapon /dev/sda3
# mount -rw -t ext3 /dev/sda1 /mnt
# mkdir /mnt/home
# mount -rw -t ext3 /dev/sda2 /mnt/home
# genfstab -p /mnt >> /mnt/etc/fstab
...and all was good. The rest of the install was normal. Knowing the mounting commands and their order were the key. The root directory had to be mounted first, then other directories, such as /home.

I also put the TexLive distro (4.5G) into /home, since it's so large. I don't use the Arch repo version, since the full install is more complete. To install, create a directory called, eg., "/home/foo/latex" and, using command "D" during the install, supply the directory information. TL will create the necessary environment within your userspace, no root required. You will just have to update your PATH variables subsequently (see below).
$ cd /home/foo/latex/install-tl-20150525/
$ ./install-tl
command: D
/home/foo/latex/2015

After install, TexLive provides a reminder about paths.
Add /home/foo/latex/texlive/2015/texmf-dist/doc/info to INFOPATH.
Add /home/foo/latex/texlive/2015/texmf-dist/doc/man to MANPATH
(if not dynamically found).

Most importantly, add /home/foo/latex/texlive/2015/bin/x86_64-linux
to your PATH for current and future sessions.

Welcome to TeX Live!
You can test the PATH by attempting to compile, say, a small test TEX file with $ pdflatex test.tex. If the command isn't found, then bash needs the PATHs. You could 1) make a small executable to add the paths in /etc/profile.d/ or, 2) add:
$ nano .bashrc
export PATH=/home/foo/latex/texlive/2015/bin/x86_64-linux:$PATH
export INFOPATH=/home/foo/latex/texlive/2015/texmf-dist/doc/info:$INFOPATH
export MANPATH=/home/foo/latex/texlive/2015/texmf-dist/doc/man:$MANPATH
Exit the X session and logout of the user (eg, "foo"), then log back in. The bash paths should be updated and TexLive normally available from non-X terminal, xterm, geany, etc.
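Option 1 above would look something like this sketch; the file name texlive.sh is arbitrary, and the paths are the ones from this install -- adjust the year and architecture directory to match yours:

```shell
# /etc/profile.d/texlive.sh -- sourced by login shells
export PATH=/home/foo/latex/texlive/2015/bin/x86_64-linux:$PATH
export INFOPATH=/home/foo/latex/texlive/2015/texmf-dist/doc/info:$INFOPATH
export MANPATH=/home/foo/latex/texlive/2015/texmf-dist/doc/man:$MANPATH
```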

backups

Simple now. The format of "dd":
dd if=[source] of=[target] bs=[byte size]
Essentially, "dd" goes from a device to a file. The easiest large file is probably an ISO. One other thing: "dd" copies the entire device, including the empty areas -- it's a raw copy -- so the target has to be as large as the source, unless one compresses.
Steps: assume here that home directory /dev/sda2 is to be backed up to a usb drive, /dev/sdb1.
  • boot-up into CLI
  • determine the block/byte size of /dev/sda2 (typically 4k these days), by writing an extremely small file, far below the size of a full block (for example a file only containing the number "1"), and then checking its disk usage (du):
    $ echo 1 > test
    $ du -h test
      4.0K   test
  • Verify the file system format, eg Reiser, ext3, etc. You can use "lsblk -fs /dev/sda2" or "file -sL /dev/sd*".
  • # umount /dev/sda2 (no writing to the partition; we want a clean backup)
  • attach and mount the usb drive, eg /dev/sdb1 at /mnt/usb
  • # dd if=/dev/sda2 of=/mnt/usb/20150210.iso bs=4k conv=sync,noerror
  • profit.
Profit unless you inverted your source and target drive names ("i" and "o" are next to each other on the keyboard) -- in which case dd wrote from the back-up drive to your HDD, destroying your HDD data.
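The steps above can be sketched as a pair of shell functions. The device and mountpoint names are this post's examples, and is_mounted/backup_home are names I made up; the point is the guard against a live acquisition (and against the i/o inversion just mentioned, since a still-mounted source aborts the run):

```shell
# is_mounted DEV FILE: succeed if DEV appears as a mounted device in FILE
# (normally /proc/mounts)
is_mounted() {
    grep -q "^$1 " "$2"
}

# backup_home DEV IMAGE: image DEV to IMAGE, refusing a live acquisition
backup_home() {
    if is_mounted "$1" /proc/mounts; then
        echo "unmount $1 first" >&2
        return 1
    fi
    dd if="$1" of="$2" bs=4k conv=sync,noerror
}

# usage (as root, with /dev/sdb1 mounted at /mnt/usb):
#   backup_home /dev/sda2 /mnt/usb/20150210.iso
```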

Sunday, February 8, 2015

[solved] clipboard note - clipit

A very useful note was found in this gentleman's post. I was receiving confounding GLIBC errors when loading clipit, and they didn't make sense. It appears the configuration can sometimes become corrupted. Simply delete it:
$ cd .local/share/
$ rm -r clipit/
Clipit loaded normally again.

however...

On a friend's HP system with all that proprietary HP keyboard and so on, there is an error which is not relieved by removing clipit's history. This was so intransigent that I emailed the developer. It's in line 1018 of the C code.

Saturday, January 31, 2015

[solved] more on Arch sound (fix for chipmunk sound)

No reasonable person likes PulseAudio currently. It's also a mystery why it was ever developed instead of, say, simply enhancing OSS (so we would only have one sound daemon). But, since OSS and ALSA were left behind, it goes without saying that, over time, a Linux distribution is near-certain to become infected with PulseAudio -- it will automatically be installed as a dependency by one application or another. Eventually, PulseAudio is almost certain to interfere with something, for example to make Audacious play MP3's at a chipmunk's pitch. As I write in early 2015, the strategy that works for me is,

crippling Pulse-Audio without removal

You can try a straight-up # pacman -Rs pulseaudio, but dependencies tend not to allow this. So cripple it.
Note: the PulseAudio daemon respawns if not properly neutered.
  • obviously start with $ pulseaudio --kill
  • directory /usr/share/alsa/alsa.conf.d hides a PulseAudio file (50-pulseaudio.conf) which ALSA executes during its startup. This file surreptitiously activates /bin/pulseaudio. Rename it so it's no longer executed, eg "mv 50-pulseaudio.conf 50-pulseaudio_conf.bak", OR change /bin/pulseaudio to /bin/true in the file.
  • stop PulseAudio autospawning by gutting /etc/pulse/client.conf and replacing its lines with
    # custom version
    autospawn = no
    # "/bin/true" doesn't do anything, but no errors
    daemon-binary = /bin/true
  • Check inside /etc/X11/xinit/xinitrc.d to be sure X11 isn't infected. Eliminate any PulseAudio files in that directory.
  • $ systemctl list-units |grep pulse to verify no systemctl pulseaudio targets are present
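To verify the crippling took, a quick sweep of the usual directories for leftover hooks helps; list_pulse_hooks is my own helper, not part of ALSA or PulseAudio:

```shell
# list_pulse_hooks DIR...: print any PulseAudio hook files under the
# given directories; no output means nothing left there to respawn from
list_pulse_hooks() {
    for d in "$@"; do
        find "$d" -name '*pulse*' 2>/dev/null
    done
}

# usage:
#   list_pulse_hooks /usr/share/alsa/alsa.conf.d /etc/X11/xinit/xinitrc.d
```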

/usr/share/alsa/alsa.conf vs. /etc/asound.conf

These initializing files use the same commands and formats, as does one other config file, ~/.asoundrc. Since all 3 are ALSA initialization files, only one is necessary; delete the other two to avoid interference. I typically keep /usr/share/alsa/alsa.conf because it's the most complete out of the box, and because so many applications (eg. Audacious) load it. Save a copy of the working version for future installs. I'm not sure how to reload the file under systemd, so I just log out and log back in to be sure the file is read. Next, the alsa.conf configuration.

/usr/share/alsa/alsa.conf ("USAA" here)

USAA has a few problems itself.
  • backup the default USAA, eg # cp /usr/share/alsa/alsa.conf /usr/share/alsa/alsa_conf.bak. Backup any other files that are modified as well.
  • USAA's first subroutine is to load /etc/alsa.conf and /home/foo/.asoundrc. These can have PulseAudio hooks that wrongly set playback frequency. Carefully delete this subroutine, from
    @hooks [
    {
    func load
    files [
    {
    @func concat
    strings [
    { @func datadir }
    "/alsa.conf.d/"
    ]
    }
    "/etc/asound.conf"
    "~/.asoundrc"
    ]

    errors false
    }
    ]
    ...to....
    @hooks [
    {
    func load
    files [
    {
    @func concat
    strings [
    { @func datadir }
    "/alsa.conf.d/"
    ]
    }
    ]

    errors false
    }
    ]
    and save.
  • backup and then eliminate any of these which exist
    1. /etc/asound.conf
    2. ~/.asoundrc
    3. ~/.config/asound.conf

NB: There is *one* situation where I occasionally want ~/.asoundrc, albeit customized. If I have an external USB mic and some application won't simply accept its direct information, eg "plughw:1,0", I can turn it into a second soundcard that's providing content. Of course, there is no playback via the mic. Audacity, for example, likes this arrangement.

verify

Logout, log back in, and then try an Audacious or command line play, etc... $ aplay somefile.mp3

Thursday, January 29, 2015

[solved] Linux alarm clock

I left my phone at work today, and I often use it as a morning alarm. What to do? This site came to the rescue.
sleep 5h 30m && vlc somemusic.mp3
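GNU sleep accepts the "5h 30m" suffixes, but if yours doesn't (busybox, for instance, may not), a tiny conversion helper does the same job; hm_to_seconds is a name of my own:

```shell
# hm_to_seconds HOURS MINUTES: print the total number of seconds
hm_to_seconds() {
    echo $(( $1 * 3600 + $2 * 60 ))
}

# usage:
#   sleep "$(hm_to_seconds 5 30)" && vlc somemusic.mp3
```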

Friday, January 2, 2015

[solved] xsane fails to find WLAN Brother scanner

Over the holiday I visited a buddy who put me on his home WLAN, which included a Brother MFC-8840D printer. This printer also has scanning capacity. I added the printer via CUPS, and it printed. I hadn't network scanned previously, so I tried Xsane, hoping for the best. Xsane failed to detect the MFC-8840D. I was skeptical about going straight to port 6566, since forwarding runs the risk of compromising the firewall or causing other security problems, which are easy to do, apparently. So, what to do? Below, I'll describe my solution, and then some of the troubleshooting (6+ hours) that preceded it.

1. overview

Budget a half-hour to forty minutes to accomplish the connection if you already know the steps. Some of my learning steps:
  1. SANE and CUPS are entirely different pathways; SANE does not require CUPS to be operating or enabled during scanning, at least on a Brother.
  2. Determine the correct backend software for your scanner and download it (if necessary).
  3. If using brscan-skey for special buttons (see below), start the necessary daemons with systemctl. I didn't need brscan-skey; however, had I needed it, this step would be important.
  4. Manually install the config file (after the backend).

2. scanner backend

This took getting used to. USB scanners typically don't require drivers in Linux, so for this WLAN scanner I figured all the scanner would need was a network connection. Over a network, however, a driver is also needed. Brother has a site for LAN backends; go there to determine which version you need for your Brother model. There are two files:
  • backend - for the older printer my buddy had it was "brscan". In the AUR, it says it's for USB scanners, but that's a misleading typo. Just download and install.
  • scan key - pointless unless you want to use automated physical keys on the scanner such as "scan to fax", "scan to email", etc. Secondly, if you obtained "brscan" in step 1 above from the AUR, then scan-key is included and you don't need this step anyway.
Note: For those who want brscan-skey, documentation shows it's good to omit the "user" and "group" fields, and to install the service file to /usr/lib/systemd/user/brscan-skey.service instead of /usr/lib/systemd/system/brscan-skey.service. That way, users can start (or enable) the brscan-skey daemon as a regular user without sudo, eg:
$ systemctl --user start brscan-skey
If you don't do all those permission changes, you apparently will need the standard:
# systemctl start brscan-skey.service
As noted above, I didn't need brscan-skey, so I disabled it (I also stopped CUPS).

3. install the config file

The following application is supposed to do the installation:
# /usr/share/brother/sane/setupSaneScan -i
This didn't work for me. In other words, after this step, I looked for the scanner and was greeted with the following:
$ scanimage -L
bugchk_free(ptr=(nil))@brother_modelinf.c(467)
Aborted (core dumped)

Strace indicated scanimage failed when looking for "Brsane.ini" at /usr/local/Brother/sane/Brsane.ini. The file does exist, however, at /usr/share/brother/sane/Brsane.ini, so I created the directory in /usr/local and copied the file to where scanimage was looking.
# cp /usr/share/brother/sane/Brsane.ini /usr/local/Brother/sane/Brsane.ini
At this point, the program ran through but, as user, could not create a socket connection due to permissions (go figure).
$ scanimage -L
[bjnp] create_broadcast_socket: bind socket to local address failed - Cannot assign requested address
What's apparently happened here is setupSaneScan doesn't work very well. It might even be unnecessary to run. In my case, it certainly failed to write the file /usr/share/brother/sane/brsanenetdevice.cfg, or to install the scanner. This site has the few lines needed to nano into /usr/share/brother/sane/brsanenetdevice.cfg. For example:
# nano /usr/share/brother/sane/brsanenetdevice.cfg
DEVICE=MFC8840D , "MFC-8840D" , 0x4f9:0x160 , IP-ADDRESS=192.168.1.4

In summary:
  • copy /usr/share/brother/sane/Brsane.ini to /usr/local/Brother/sane/Brsane.ini
  • create and enter lines into /usr/share/brother/sane/brsanenetdevice.cfg

4. install the scanner driver

Following the hand-entries in /usr/share/brother/sane/brsanenetdevice.cfg, the printer/scanner still must be installed.
  • unless desired, turn off CUPS and brscan-skey with systemctl
    # systemctl stop org.cups.cupsd.service
    # systemctl stop brscan-skey.service
  • obtain the IP of the printer, you'll recognize it by its operating system
    # nmap -O 192.168.1.1/24 -oG somefile.txt
  • obtain the exact model name for the printer from /usr/share/brother/sane/Brsane.ini
  • let's say the printer IP was 192.168.1.4, and the model name was "MFC-8840D". Using these values, or the ones for your printer, enter /usr/share/brother/sane/brsaneconfig -a name="common name" model="model from INI" ip=xxx.xxx.xx.xx, eg,
    # /usr/share/brother/sane/brsaneconfig -a name=MFC8840D model=MFC-8840D ip=192.168.1.4
  • verify this went through with "brsaneconfig -q", and "scanimage -L"
# /usr/share/brother/sane/brsaneconfig -q
Devices on network 0 MFC8840D MFC-8840D I:192.168.1.4
$ scanimage -L
device `brother:net1;dev0' is a Brother MFC-8840D MFC8840D
So, with scanimage -L showing detection, xsane can be initiated for scanning.

investigation leading to solution (6+ hrs)

This is not necessary to read; it's just crib notes (to save time in the future) of troubleshooting which eventually led to a solution.

To start with, I left the CUPS daemon on; I wasn't sure what might be necessary to detect the scanner. (Further down, I realized CUPS actually gets in the way of installation.) Secondly, I read that scanner drivers expect "nobody" to be included in the "scanner" group. You can do this with, eg...
# usermod -a -G scanner nobody
... but I like to directly (not recommended) type into the /etc/group and /etc/passwd files: I added nobody to the scanner group in /etc/group. None of these changes had any effect, but YMMV. I then checked for scanners.
$ scanimage -L
[bjnp] create_broadcast_socket: bind socket to local address failed - Cannot assign requested address
Since BJNP is the Canon-specific CUPS back-end, and since my attempt was to connect to a Brother, the unsolicited appearance of BJNP, repeatedly causing scanimage to fail, was... annoying.

A more powerful attempt to locate the fail...
# strace scanimage -L 2>&1 |tee bigfile.txt
# chown 500:500 bigfile.txt
$ grep socket bigfile.txt >bigfile2.txt
And here's the portion with the fail...
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 132
setsockopt(132, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
setsockopt(132, SOL_IPV6, IPV6_V6ONLY, [1], 4) = 0
bind(132, {sa_family=AF_INET6, sin6_port=htons(8612), inet_pton(AF_INET6, "fe80::b277:2173:31a6:e71", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=if_nametoindex("enp4s0")}, 28) = -1 EADDRNOTAVAIL (Cannot assign requested address)
fstat(2, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
write(2, "[bjnp] ", 7) = 7
write(2, "create_broadcast_socket: bind so"..., 95) = 95
close(132)
The bind on file descriptor 132 is failing. The real problem was that scanimage was demanding an IPv6 address, which of course is too rigid a restriction to place on an older IPv4 scanner. Additionally, scanimage was soliciting via "enp4s0", the wired LAN NIC. This was a further problem because the LAN NIC was not connected; I was only connecting via my WiFi card. If scanimage required these, nothing was going to work with the older Brother on the WLAN.

To be sure I wasn't crazy, I nmap-ped the LAN ("nmap -O 192.168.1.1/24 -oG somefile.txt") , found the printer by the Brother OS, and made sure I was able to ping it successfully, using only the WiFi NIC, and leaving the LAN NIC disconnected. This was a success. But I never was able to eliminate the BJNP, IPV6, or LAN NIC failures with the CUPS daemon "on".

/etc/sane.d/saned.conf

Digging through some pages, it appeared the first effort should be to modify /etc/sane.d/saned.conf. When I opened the file, all lines were commented, so I added two uncommented lines:
localhost
192.168.1.0/24

/etc/saned.d/net.conf

According to http://wiki.archlinux.org/index.php/sane, the file /etc/sane.d/net.conf must be similarly modified:
localhost
192.168.1.0/24

to xinetd or not xinetd?

Xinetd is for allowing anyone on a LAN to use a hardwired (eg. USB) scanner. My friend's Brother was sitting on the WLAN, independent of any system, so it should have been simpler. Still, after I installed xinetd (pacman), some tweaks were required. For example, in /etc/sane.d/saned.conf, these lines are at the bottom:
# NOTE: /etc/inetd.conf (or /etc/xinetd.conf) and # /etc/services must also be properly configured to start...
Well now. /etc/services is a listing of applications for each port, and sane-port was there at line 6566 for both UDP and TCP. This appeared OK. Next stop was /etc/xinetd.d/sane.
# cat /etc/xinetd.d/sane
service sane-port
{
    port        = 6566
    socket_type = stream
    wait        = no
    user        = nobody
    group       = scanner
    server      = /usr/bin/saned
    # disabled by default!
    disable     = yes
}
So there's nothing allowing LAN access. Using the site above, and this Slackware page, I changed the file by adding "tcp" capability (because WLAN):
# cat /etc/xinetd.d/sane
service sane-port
{
    port        = 6566
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    group       = scanner
    server      = /usr/bin/saned
    # disabled by default!
    disable     = no
}

check loopback

If you went the nuclear route and brought down your firewall during this, a good check to be sure it's re-established is to telnet to the service port. The connection should be refused.
telnet localhost [service port]
For example, xsane uses port 6566 so...
telnet localhost 6566
Try telnetting as both root and user; if groups are properly set-up, users should be able to telnet the port.
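Without telnet installed, bash's /dev/tcp pseudo-device gives the same check; port_open is a name of my own, and this trick only works in bash, not plain sh:

```shell
# port_open HOST PORT: succeed if a TCP connection can be opened
# (uses bash's /dev/tcp pseudo-device; no telnet needed)
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# usage: with saned exposed via xinetd on 6566, this succeeds for
# permitted hosts and fails once the firewall is back up
#   port_open localhost 6566 && echo open || echo refused
```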