Wednesday, December 25, 2013

PulseAudio reconsidered - hail OSS, ALSA

Links: arch wiki pulseaudio :: record system in pulseaudio (GUI) ::
record system in pulseaudio (CLI) :: ALSA configuration :: best disable link so far (3/2014)

Sound management and the ever-changing initialization schemes are the two sore spots of Linux. This post is about sound. PulseAudio, on the scene since perhaps 2009 and currently (2014) prevalent, has made the sound kludge even worse1.

OSS - still good

My Linux use began when OSS was the only sound layer communicating with the hardware. OSS had bugs, but it was straightforward, and therefore a good foundation which should have been developed further instead of dumped.

I'm not a recording engineer, but I never encountered the two main purported limitations of OSS: 1) an inability to process simultaneous sound sources (eg, capturing a mic input at the same time as a music stream), or 2) a supposed incapacity to split a single sound source into multiple types of files simultaneously.

When I wanted to listen to several files through the speakers and, simultaneously, capture the output (stdout) into a single WAV file, I piped them through sox. For example, in this case, I created a script which played several audio files in sequence, and used sox to collate the output into a single file:
$ | sox -t ossdsp -w -s -r 44100 -c 2 /dev/dsp foobarout.wav
That's all there was to it.

ALSA - meh

When ALSA became common, the simple /dev/dsp approach was gone. Under ALSA, we had to locate soundcard info with aplay -l, aplay -L, /proc/asound/pcm AND /dev/sound/. Some software couldn't handle the ALSA layer, and we'd have to name the device. Only now it wasn't generic like /dev/dsp: we'd have to research its PCI values and provide literal device information such as "hw:0,0". Another problem was hours lost configuring daemons. As users, we'd effectively have to choose between OSS and ALSA and be certain we'd blackballed the modules of the other. In other words, it's unclear what benefit ALSA provided.

Consider the play/capture scenario I described above. Under ALSA, a similar effect should have been available by researching ALSA commands. That's already wasted time (duplication of effort) to achieve the same outcome, but it also turns out that the ALSA commands were not as reliable. For example...
$ | arecord -D hw:0,0 -f cd foobarout.wav
... often would result in a string of error messages regarding "playback only", even though capturing had been enabled in ALSA, etc. To me, it seemed that ALSA, and not OSS, had multiplexing limitations.

Further, the ALSA "dmix" plugin, which one would think was created exactly to solve this problem, gives no respite...
ALSA lib pcm_dmix.c:961:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
Lol. In the end, the only helpful plugin (with nearly zero documentation and an unintuitive name, of course) is the asym plugin. You'd have to become aware of it in the first place, not so easy, nor will you escape having to build and tweak an /etc/asound.conf and/or a ~/.asoundrc. Have fun.
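For the record, here is a minimal sketch of an /etc/asound.conf using asym. This is assembled from the usual wiki pattern rather than copied from a working file of mine -- treat it as a starting point, not a known-good config:

```
# /etc/asound.conf -- sketch: full duplex via the asym plugin.
# Playback is routed through dmix (software mixing) and capture
# through dsnoop (shared capture), so both can run at once.
pcm.!default {
    type asym
    playback.pcm "plug:dmix"
    capture.pcm "plug:dsnoop"
}
```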


Instead of simply developing either ALSA or OSS more completely, some group developed PulseAudio. PulseAudio purported to make muxing easier than ALSA, which purported to make muxing easier than OSS. "Lol". Lol because ALSA and/or OSS continue to be layers underneath PulseAudio. So now we add PulseAudio's memory and CPU load to that of the OSS or ALSA modules running beneath it. Three different layers also multiply the potential for errors. And if you think PulseAudio makes configuration any easier, take a look here.

After PulseAudio configuration, recording my script requires the same steps as capturing a stream. It's too much information to regurgitate entirely here, but the shorthand is:
$ pavucontrol (set "Record Stream from" to "Monitor of Internal Audio Analog Stereo")
$ | parec --format=s16le --device=(from "pactl list") | oggenc --raw --quiet -o dump.ogg -
There is also hypothetically an OSS emulator called "padsp" (install ossp in Arch) with which one could use sox again. That is, PulseAudio apparently uses an emulator instead of just accessing a real OSS module. I haven't tried "padsp", but it may work.
$ padsp sox -r 44100 -t ossdsp /dev/dsp foobarout.wav

PulseAudio crippleware effect

Once PulseAudio has ever been installed, even inadvertently as a dependency for some other application, and even when you're sure its daemon is not running, one's soundcard will likely be reduced to analog mode. Eg, after PulseAudio was installed, but with its daemon not running, I observe...
$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: SB [HDA ATI SB], device 0: ALC268 Analog [ALC268 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
...when I should instead see...
$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: SB [HDA ATI SB], device 0: ALC268 Analog [ALC268 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: SB [HDA ATI SB], device 1: ALC268 Digital [ALC268 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
Results will be similar with $ arecord -l. There's no way to properly capture one's system in ALSA again until ALSA detects the entire soundcard. If we'd like, we can see the problem even more clearly:
$ aplay -L
Discard all samples (playback) or generate zero samples (capture)
Default ALSA Output (currently PulseAudio Sound Server)
HDA ATI SB, ALC268 Analog
Default Audio Device
HDA ATI SB, ALC268 Analog
Front speakers
[remaining 4 entries, all analog dev=0, snipped]
That is, even with PulseAudio disabled, ALSA remains infected with the PulseAudio Sound Server and its analog limitations.

PulseAudio disabling

Link: excellent disabling instructions
We'd like to disable PulseAudio, but we'd prefer not to uninstall PulseAudio --- for example, it's understood that some Gnome sound functions with hooks in PulseAudio libs may not otherwise work. We know from above that it's not enough to simply disable its daemon.
file :: note
/etc/pulse/ :: dir: PulseAudio config files
/etc/modprobe.d/alsa-base :: configuration for ALSA modules
/usr/share/alsa/ :: dir: ALSA config files
/etc/asound.conf :: ALSA config file for pulse
/usr/share/alsa/ :: change ALSA hooks
/etc/pulse/ :: change PulseAudio hooks

1. $ pulseaudio -k
2. # pacman -R pulseaudio-alsa
3. # rename /usr/share/alsa/alsa.conf.d/*.conf /usr/share/alsa/alsa.conf.d/*.bak
To be continued.
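Since the syntax of rename differs between the util-linux and Perl versions, a plain shell loop is a more portable sketch of step 3 (run as root; the directory argument is optional and defaults to the one named above):

```shell
#!/bin/sh
# Move every ALSA hook file out of the way by renaming *.conf to *.bak.
dir="${1:-/usr/share/alsa/alsa.conf.d}"
for f in "$dir"/*.conf; do
    [ -e "$f" ] || continue        # no matches: skip the literal glob
    mv -- "$f" "${f%.conf}.bak"
done
```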

1 Some configurations even include a Music Player Daemon ("MPD") and/or a Simple DirectMedia Layer ("SDL"), ridiculous 4th and 5th possible layers.

Saturday, December 21, 2013

fbxkb - multi-language keyboard

Fbxkb does a simple job: it sits in one's system tray and displays the current keyboard language as a national flag. It's a GUI alternative to keystroke-based keyboard language switching.

One might guess fbxkb would be a 10MB app, but htop revealed fbxkb uses 156MB to do its thing. Luckily, my friend's system (on which it was needed) runs the relatively lean icewm window manager, and the 156MB did not affect performance. Additionally, the visual cue of a flag in the system tray saves guesswork. All told, satisfaction with fbxkb probably depends on a person's available RAM and their preference for a visual cue. Otherwise, just bind setxkbmap keystrokes (see below) and forgo fbxkb.


The system onto which fbxkb was installed runs ArchLinux so I used yaourt to put it in.
$ yaourt -S fbxkb


This friend wanted US and German keyboard options. I looked into /usr/share/X11/xkb/ for configuring, but we don't need to configure so deeply for a simple installation. Instead, add two lines to .xinitrc, and then invoke fbxkb with a single line in .icewm/startup:
$ nano .xinitrc
setxkbmap -layout us,de
# optional keystroke for shifting without GUI
setxkbmap -option grp:alt_shift_toggle 'us,de'

$ nano .icewm/startup
fbxkb &

This arrangement gave him the option to switch languages with Alt+Shift, or to switch using the fbxkb icon in the tray. Just click on the flag and it will toggle to the other flag, indicating the other keyboard mapping is operational. This also works for more than two languages. Although I just needed US and DE layouts, more options can be found here.

Saturday, December 14, 2013

xdm - installing and customizing

Links: Linux Journal 1999(!)

I strongly agree with this post's comment that kdm and gdm are lib-laden and overall bloated. To simply log in, we just might want a different background photo than the standard X box, but that's all the customization I need. In fact, I prefer runlevel 3 at startup, but if I'm helping someone with Linux who prefers a Windows-esque GUI right from the start, then it's got to be runlevel 5.

install - file changes

One can just download xdm with pacman. There are only two important file changes: 1) xdm (like all display managers) reads .xsession instead of .xinitrc; 2) changing to runlevel 5 means altering /etc/inittab. I organized a working xdm on a friend's system with the following...
# nano /etc/inittab
x:5:respawn:/usr/X11R6/bin/xdm -nodaemon -config /etc/X11/xdm/archlinux/xdm-config

$ cp .xinitrc .xsession


One of the best instructions on this remains from 1999 (link at top). XDM goes way back.

Thursday, December 12, 2013

yaourt - tarball installation

Most like to use yaourt, and it's typically easy to install it from the French repo. But documentation is lacking if it has to be installed from the tarball.

the repo site...

Append these lines to /etc/pacman.conf, and then request pacman to retrieve yaourt...
# nano /etc/pacman.conf
SigLevel = Never
Server =$arch

# pacman -Sy yaourt

...but some require the tarball

It appears the method above is the common install method, since I could find very little good information for those forced to use the tarball (eg, on my friend's system). Here is the official page which was little help, and here is the only site with details I could follow nearly verbatim.

makepkg note

Installing yaourt uses makepkg and a PKGBUILD. The sites I found describing makepkg were adamant about running it as a normal user, and it even has a warning prompt inside the program. They say it will ask you for authentication at the right time, blah blah blah. This cost me a lot of time. Permissions are a critical step in Linux, as we all know, and I found that makepkg did not prompt me in a helpful way for authentication. In my view, it's much better to manually level up or down when using makepkg, the same way we do with "make" and "make install".

what worked

Here is what (finally) worked:

Sunday, December 8, 2013

power management - hibernation key(s)

I recently encountered an HP Pavilion for which a good friend wanted hibernation capability. Ideally, hibernation would result from an idle state (eg, when a user steps away from their system for a number of minutes) and/or when hibernation was selected by the user (a menu item or a hotkey). The process gave me a chance to step through key bindings and power management configuration.


I expected hibernation to really be two equally important functions - hibernation, and resuming from hibernation. Both could have subsections which might include: 1) hardware settings (eg, having a swap file for suspending to disk), 2) boot settings (eg, grub/lilo), 3) daemon settings for whatever daemon alerts the kernel (possibly many options), 4) key bindings, 5) inactivity time-out.
NB: Currently having hardware issues with full hibernation, so this entry ends with "suspending". Upgrade to full hibernation downstream.

1. swap

For hibernation, a swap partition is the easiest, though supposedly a swap file can be configured with enough work. I checked in cfdisk and noted that my friend had more than two gigs of swap partition on /dev/sda2. /etc/fstab was properly configured as well. Otherwise, mkswap and swapon, possibly followed by a fresh genfstab, would be the tasks in this step.

2. boot

This guy is running ArchLinux with grub, so I looked here, followed by:
# nano /etc/default/grub

# grub-mkconfig -o /boot/grub/grub.cfg
I got a couple of errors when updating grub, thanks apparently to this bug, but these cleared with an update to his system (pacman -Syu) the following day.

3. daemon

What to use to notify the kernel? Certainly, acpid and/or pm-utils are what most will choose. However, it's not necessary in this case. Arch's use of systemd already handles some power events without another daemon. Per this helpful page, I changed the login process, uncommenting the relevant hibernation line and restarting systemd:
# nano /etc/systemd/logind.conf

# systemctl restart systemd-logind
No daemons other than systemd are needed once systemd is installed. What's accomplished at this step is hibernate service enablement, but not activation. To activate through systemd we need a systemd service file telling it what to do, possibly a cron job, and possibly a key binding. Properly configured, the command to initiate hibernation is $ systemctl hibernate, and for suspension $ systemctl suspend. Select one of these and go through the configuration steps -- binding to a key, possibly a cron job, possibly a systemd service file, etc.
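For reference, the [Login] section being edited looks roughly like this once the relevant lines are uncommented. This is a sketch of the stock file, and key names may vary between systemd versions:

```
# /etc/systemd/logind.conf -- excerpt (sketch)
[Login]
HandleSuspendKey=suspend
HandleHibernateKey=hibernate
HandleLidSwitch=suspend
```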

3a. privileges

One expects privilege confusion with users taking daemon actions, so a check of group membership is wise. On the other hand, according to this page about systemd, polkit is required for user privileges to run hibernation or suspend --- adding users to groups such as "disk" can actually cause conflicts. YMMV, it appears. In lieu of polkit, simply granting ourselves a specific privilege in /etc/sudoers, eg "username ALL=NOPASSWD: /usr/bin/systemctl", also should work.

4. keys

This machine has a non-standard KB-0630 enhanced keyboard with a separate hibernation key, so pretty sweet. Per this page, I installed xbindkeys and hit the hibernation key to learn its keycode.
$ xbindkeys -k
[hit the hibernate key]
"(Scheme function)"
m:0x0 + c:150

$ nano .xbindkeysrc
# Suspend system
"systemctl suspend"
m:0x0 + c:150

$ nano .xinitrc (or .icewm/startup)
xbindkeys &

5. idle time

# nano /etc/systemd/logind.conf

# systemctl restart systemd-logind

This worked, except that it would go to sleep in 20 minutes, idle or not.
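My guess at the culprit: logind's IdleAction fires on its own clock unless the session reports activity, and many light window managers never update logind's idle hint, so the box sleeps at the interval regardless. The settings involved look like this (a sketch; verify against the comments in your own logind.conf):

```
# /etc/systemd/logind.conf -- excerpt (sketch)
[Login]
IdleAction=suspend
IdleActionSec=20min
```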

Saturday, November 23, 2013

fonts -- x11 core or xft?

In this wiki, and in this fellow's reflection, we see possible advantages of xft over X11 core fonts. But most installations have a portion of both. So when to use one over the other? What commands are available to each? Which is the easier or more reliable configuration path...and so on?

Friday, November 22, 2013

[solved] .xinitrc - include startup programs?

Many Linux users are in a GUI environment from startup to shutdown. They are in a GUI display manager (eg, GDM) before they enter their window manager or desktop environment.

Others log in to runlevel 3 to view startup messages before moving into runlevel 5. When we're satisfied with the boot messages, we use "$ startx". Startx of course initializes X via the local file "/home/[user]/.xinitrc". Its final line calls whatever window manager we want to use. So, for example, the last line in .xinitrc might be "exec twm" for those who use twm. Users can also include any other programs prior to that last line, as long as they fork them to the background ("&") so the script isn't waiting for a program to exit before the next line executes. A simple .xinitrc could be the following:
$ cat .xinitrc
# ~/.xinitrc
# set editor to nano
export EDITOR=nano

# clipboard app
/usr/bin/clipit &

# volume app
/usr/bin/volwheel &

# Tom's Window Manager
exec twm

A potential problem occurs for this "runlevel 3 + startx" group. Because .xinitrc runs through the list of whatever apps the user wants and then loads the window manager, the actual initialization of X11 is happening at the same time these programs are being called. Sometimes there's a conflict.


Desktop Environments and some Windows Managers (eg. Cinnamon) allow users to configure startup programs via some sort of client, typically called "startup applications" or some such, which the user configures for their next boot. (screenshot below). [insert example]

But again, some people use simpler window managers (eg, icewm, twm) that mostly rely on .xinitrc settings. For these, the solution appears to be keeping one's .xinitrc simple: have it load the window manager without adding applications. Instead, find which other scripts are called when the window manager loads. Icewm has a nice way of handling this: if one uses "exec icewm-session" in .xinitrc, icewm checks for a user-created script called ~/.icewm/startup. For example, I tried this script with good results...
$ cat .icewm/startup
#!/bin/sh
# chmod from 644 to 755 - must be executable to be read by icewm-session

sleep 4
synclient TouchPadOff=1
xgamma -gamma .7
volwheel &
sleep 1
clipit &
sleep 1
Icewm loaded quickly, and the other two programs appeared in the taskbar a few seconds later, without conflicts.

In the case of twm, it appears one could do something similar by placing a script call in ~/.twmrc. I didn't have time to test that option today. The point here is to avoid conflicts when light window managers load: if .xinitrc simply loads the selected window manager, we can create or modify scripts to load other programs or settings downstream from .xinitrc.

Tuesday, October 29, 2013

primefilm 7250u scanner - fuduntu

Links: CLI scanning w/scanimage :: sane-config options :: extracting .nal from Windows driver :: deeper on the .nal

Relevant folders: 1) /dev/bus/usb [device]; 2) /etc/sane.d [daemon only]; 3) /etc/udev/rules.d [custom rules]; 4) /usr/lib/sane [backends]; 5) /usr/lib/udev/rules.d [default rules]; 6) /usr/share/sane

A friend recently purchased a Primefilm 7250u to scan old slides. Closed-source Windows and Mac software came with it. Difficulty: No software for Linux, not even Vue-Scan. Fuduntu has no access to repos. A kludge of interacting files, as one can see from the folders above.

The most important steps

The first two steps need to happen in order: 1) the scanner must be detected, and 2) the correct backend must be called. For detection, /usr/lib/udev/rules.d/49-sane.rules or a user-created /etc/udev/rules.d/[custom].rules must contain the vendor ID. These files then call the correct backend(s).
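A quick, read-only check that the vendor ID actually appears in some rules file (05e3 is this scanner's idVendor, which shows up in the lsusb output later in this post; substitute your own):

```shell
#!/bin/sh
# Print the names of any udev rules files mentioning the vendor ID.
# If nothing prints, detection will fail and a custom rule is needed.
vid="${1:-05e3}"
grep -l "$vid" /usr/lib/udev/rules.d/*.rules /etc/udev/rules.d/*.rules 2>/dev/null
```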

hardware detection

Sane-find-scanner does not check the backend; it simply verifies the scanner is connected and that its vendor ID is in one of the rules files. This is hardware detection, pure and simple. Scanimage does both detection and backend pointing but, unfortunately, it fails with the same message whether hardware detection fails or the backend fails. Accordingly, even though scanimage must be working smoothly in order to use GUI frontends (eg, XSane), sane-find-scanner is very useful for troubleshooting hardware detection.
$ sane-find-scanner
...found USB scanner (vendor=0x05e3, product=0x0145) at libusb:002:009

$ scanimage -L

No scanners were identified. If you were expecting something different,
check that the scanner is plugged in, turned on and detected by the
sane-find-scanner tool (if appropriate). Please read the documentation
which came with this software (README, FAQ, manpages).
Success with hardware. However, scanimage -L subsequently failed, meaning the backend is misconfigured or missing. To solve this, we have two options -- modify the rules or force scanimage to call backends (stored in /usr/lib/sane) using CLI switches.


Before writing a rule, we'd like to see how the kernel names the device, ie, what is the /dev device node's name? I typically use dmesg but, on my friend's system, dmesg was not supplying it. Also, his /dev folder had no top-level "usb" folder; I located it at /dev/bus/usb...
$ ls /dev/bus/usb
001 002 003 004 005 006
Opening /dev/bus/usb/002/ revealed "009". Taken together, these correlate with lsusb:
$ lsusb
Bus 002 Device 009: ID 05e3:0145 Genesys Logic, Inc.
So I had enough to go after the necessary /etc/udev/rules.d information:
# udevadm info -a -p $(udevadm info -q path -n /dev/bus/usb/002/009)

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

looking at device '/devices/pci0000:00/0000:00:12.2/usb2/2-2':

looking at parent device '/devices/pci0000:00/0000:00:12.2/usb2':

looking at parent device '/devices/pci0000:00/0000:00:12.2':

looking at parent device '/devices/pci0000:00':


Tried this rule...
# nano /etc/udev/rules.d/10-primefilm.rules

# Custom for Primefilm scanner
KERNEL=="2-2", SUBSYSTEM=="usb", \
ATTRS{idVendor}=="05e3", ATTRS{idProduct}=="0145", \
GROUP="scanner", MODE="0664"
This rule worked correctly, as tested with...
# udevadm test $(udevadm info -q path -n /dev/bus/usb/002/009) 2>&1
...however, scanimage -L continued to fail. This means backend trouble -- typically complicated.

scanimage and sane-config

In /usr/lib/sane, I noted two libsane backends which might work, viz libsane-pie and libsane-genesys. Sane wasn't automatically pointing to either of these, so they hadn't been used. We can force the pointing from the CLI, but neither appeared to work:
$ scanimage --device=pie:/dev/bus/usb/002/009
scanimage: open of device pie failed: Invalid argument

$ scanimage --device=genesys:/dev/bus/usb/002/009
scanimage: open of device genesys failed: Invalid argument

Friday, August 23, 2013

[solved] removable media - gid and uid fix

I typically maintain uniform UID's and GID's across the different distros I try. It seems to contribute to less troublesome back-ups which come with distro-hopping.

For example, I recently caused myself a problem. During the latest install, I established my standard UID of 1500. But I overlooked the usual procedure of creating a new group for myself with a 1500 GID. Instead, I placed myself into the "users" group, which had a GID of 100. The files I thereafter created possessed a 1500:100 stamp.

A couple of weeks after the install, I attempted to back up a diff. I attached a USB HDD I've had for several months, which was formatted, including its folders, with my typical 1500:1500 ownership. Of course it has been no problem to copy from the 1500:1500 USB to my system. But when I attempted to back up 1500:100 files from my system to the 1500:1500 USB, "permission denied" write errors were generated. Writing to the USB HDD as root would have overcome this, but wasn't the solution I wanted: "chown"-ing the resulting 0:0 files back to 1500:1500 every time seemed to overlook a more efficient solution.

After some thought, it seemed best to permanently change the primary group attached to my username on the desktop system. I wanted this to happen in a way that automatically assigned the new GID to all existing files in my home directory. Note that the username is "patriot".


  1. exited X to command line
  2. logged off - "exit"
  3. logged in as root
  4. created group 1500 (# groupadd foo -g 1500)
  5. moved "patriot" from group 100 into group 1500 (# usermod -g 1500 patriot) Note: give this a few minutes to complete; many file GID's are being updated.
  6. # nano /etc/group - verified patriot was in desired groups
  7. rebooted and logged in as user "patriot", per usual.


All files changed to 1500:1500 and no permission problems noted when backing-up to a USB HDD.
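To verify the usermod really swept everything, a find for files still carrying the old GID is cheap (a sketch; 100 was the old "users" GID in my case):

```shell
#!/bin/sh
# List anything under $HOME still owned by the old group ID.
# After usermod -g has finished its sweep, this should print nothing.
oldgid="${1:-100}"
find "$HOME" -gid "$oldgid" -print
```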

Monday, August 19, 2013

crontab -- good times

Sometimes it's the simplest, potentially most helpful concepts which are the most poorly executed: adding a printer, calling a script at a certain time each day. For those that configure these things for a living, I'm sure it's easy --- the rest of us look forward to the day basic functions won't take a week to configure.

So...crontab. A simple concept which requires threading through a weird syntax, and kludgy daemon and script file relationships, until one finally gets it working on their particular installation. Here's what worked on a recent install of Arch. First, the unintuitive, unhelpful crontab syntax:
Example of job definition:
.---------------- minute (0 - 59)
| .------------- hour (0 - 23)
| | .---------- day of month (1 - 31)
| | | .------- month (1 - 12) OR jan,feb,mar,apr ...
| | | | .---- day of week (0 - 6) (Sunday=0 or 7)
| | | | |
* * * * * command to be executed

Let's say I want to delete a file every night at midnight; let's use the common example of .local/share/recently-used.xbel. I go to $ crontab -e to access my crontab file and create the job line:
0 0 * * * /usr/bin/rm -rf /home/foo/.local/share/recently-used*

So zero minute, zero hour, and the asterisks mean "every": every day of the month, every month, every day of the week, at midnight this command will be executed. The other asterisk, the one at the end of the file to be deleted? Another "feature" of crontab is that it appears to ignore commands with file extensions, so I replace any file extensions with asterisks or leave them off entirely. I've also found that the last line has to end with a newline so, following my command above, I make a hard return in the file. Note also that I used the full path to the rm command and to the file to be deleted, because of the common PATH issues with crontab. Finally, if the crontab has the wrong syntax, it loads without errors but simply doesn't perform its tasks.

Even if one manages to get the insanely finicky crontab syntax proper, it still is unlikely to run. Why? Daemon problems. There are at least 5 different cron daemons available, instead of one, and all of them behave differently: crond, cronie, fcron, anacron, and cronie-without-anacron, for example. More horrible design around this critical function. At least one of them must be running for the crontab to execute.


Systemd uses timers.
$ systemctl list-timers
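For comparison, here is a hedged sketch of the same midnight deletion as a systemd timer pair; the unit names are my own invention:

```
# /etc/systemd/system/clear-recent.service (sketch)
[Unit]
Description=Delete recently-used.xbel

[Service]
Type=oneshot
ExecStart=/usr/bin/rm -f /home/foo/.local/share/recently-used.xbel

# /etc/systemd/system/clear-recent.timer (sketch)
[Unit]
Description=Nightly run of clear-recent.service

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

OnCalendar=daily means 00:00:00 each day; the timer is enabled with # systemctl enable --now clear-recent.timer.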

Monday, July 22, 2013

more ipv6 / ipv4 fun

Links: ipv6 over adsl :: Linksys E1000 knowledge base :: Linksys list of IPv6 enabled

There is a useful site for checking IPv6 functionality. YouTube, probably due to its fallback capacities (PPAP, HTML 5), is much more tolerant. So it was of interest recently that I could access South Park at the gf's place, but not at home.

home setup

At home, we have ATT ADSL split between roommates. Service was initiated during the years when ATT provided Linksys E1000 routers in tandem with Motorola 2210-02 modems. This equipment is questionable; the Motorola overheats if it's breathed on, and the Linksys has no IPv6 support. This post is about the latter.

To set up IPv6, we only want one IPv6 choke-point, so we want the modem to pass everything through (bridge mode) and do the PPPoE inside the router. But can we get the E1000 to do IPv6? Apparently not easily.

Step 1 - ISP

Contact with ATT today noted that their DNS servers are resolving in IPv6.

Step 2 - Modem and Router

The modem is not in bridge mode; it's providing DHCP downstream to the LAN (E1000 router). It's unclear whether that traffic is already degraded or filtered to IPv4-only coming from the modem. It also appears the modem cannot be set to bridge mode without physical modification.

The router itself is not inherently IPv6 capable, so it may be worthwhile to start there. ATT sells the IPv6-capable Motorola NVG510 modem/router combo for $100, but it's not been getting good reviews. Another option is, say, the less expensive D-Link G604T ($60). Outdated, but OpenWRT or DD-WRT should handle things.

E3000 v.2.1 (dd-wrt)

In an attempt to enable IPv6, I installed dd-wrt. Apparently the E3000 has a small (4MB) flash capacity, so we're only looking for K2.6 builds, which run about 3.3MB. It was also suggested to keep Tx power down around 45mW for best throughput. I'll probably set my MTU at 1500 or 1400. We'll see.

Couple of useful links:
  • page 4 describes 16964 as first build supporting the router, then the 16968 build as stable, then 16994 (nokaid) having ipv6. 19519 may be most recent, but appears best to start with 16968 and then flash up to 19519 (nokaid).

  • firewall code

Friday, July 19, 2013

    [solved] slackware media march - july 2013

    Links: slackerboy install notes :: :: adobe flash 11 :: ibm bash shell tutorial

    "It's rarely if ever necessary to update a good Linux installation -- the odd security update being the exception". This should and would be a true statement if it weren't for the parade of media player and media codec changes perpetrated by media providers. Over the long run, media-related updates cost us months of wasted time and effort.

    One site sensitive to anything less than perfect Flash configuration is, ironically. If it indicates a Flash or some similar error, where do I begin?


    Running Firefox 17 ESR inside a Slackware KDE desktop, look into /usr/lib64/mozilla/plugins
    $ ls /usr/lib64/mozilla/plugins
    Nope, no flashplayer installed. And if I look into "about:plugins" in my browser, I also see no flash player installed.

    KDE directory gobbledy-gook

    The final version of Adobe Flash for Linux is 11.2, due to be entirely retired January 2014. Linux users may be forced into Chromium at that time. Lack of information is typical in the media-player realm. Meanwhile, we're supposed to install into a bunch of sub-directories and make soft links and so on, without KDE documentation. No thanks. [SUCCESS]

    Point the browser to adobe flash 11, and download.
    $ tar -xzvf install_flash_player_11_linux.x86_64.tar.gz
    Download the tarball and unpack it. A file named is created, along with a folder named "usr" containing a lot of KDE bologna (see below). Ignore the entire "usr" folder and focus upon
    # cp /usr/lib64/mozilla/plugins/
    Verify the Flash Player is active in Firefox via about:plugins. If it's not in there, create a plugins folder: /usr/lib64/firefox[version]/plugins; try that kind of thing. Done in five minutes. Go to YouTube and enjoy your bass fishing videos.

    Chromium in Arch

    This works, for those running Arch. Follow the instructions for installing "chromium-pepperflash-stable". [FAIL]

    Same as with above: head to adobe flash 11, and download.
    $ tar -xzvf install_flash_player_11_linux.x86_64.tar.gz
    The unpacked folder will be a /usr folder, with subfolders /bin, /lib, /share, and so on. Sudo up and waste an hour transferring files into the real versions of these directories in the file system. For example:
    $ cd [install directory]
    $ cd usr/lib64/kde4
    # mkdir /usr/lib64/kde4; mkdir /usr/lib/kde4
    # cp /usr/lib64/kde4/
    # cp /usr/lib/kde4/
    $ cd ..; cd .. ; cd bin
    # cp flash-player-properties /usr/bin/
    $ cd ..; cd share/applications
    # cp flash-player-properties.desktop /usr/share/applications/
    And so on through each directory. After these were complete:
    # update-desktop-database
    # update-desktop-database -q /usr/share/applications >/dev/null 2>&1
    # gtk-update-icon-cache
    # gtk-update-icon-cache /usr/share/icons/hicolor >/dev/null 2>&1

    None of this KDE sh*t worked. Suggest the easy way with or, if running Arch, get the pepperflash prior to Jan 2014.

    FAIL: cooling/acpi in a 2008 trashiba

    Links: fan speed control :: cooling and acpi :: fancontrol :: Satellite L305D-S5869 :: One user's solution - v.2.0
    I have a disposable Satellite L305D-S5869 or, more specifically, a PSLC8U-00Y010, for trips and low-priority activities. It has the older Toshiba BIOS (InsydeH2O Rev 3.5 v. 1.8) with flaky DSDT code; it's hit or miss with ACPI or kernel-module fan speed control. This appears to apply mostly to Slackware and Slackware hybrids; the laptop's fans do occasionally work with Fedora-based OS's. Another challenge for this crappy BIOS is hibernation, but that's another post.

    $ lsmod |grep thermal
    thermal 8082 0
    thermal_sys 13862 4 thermal,processor,video,fan
    hwmon 1473 3 radeon,thermal_sys,k10temp

    $ sensors
    Adapter: PCI adapter
    temp1: +56.9°C (high = +70.0°C)

    Adapter: Virtual device
    temp1: +56.0°C (crit = +105.0°C)
    Not bad, except I had never heard the fan run, even above 70C. Taking a trip into the BIOS, there were no ACPI settings, but I added append="acpi=force" into LILO. Still over 70C with no fans. (On the BIOS tip, you can read further down that I also updated the BIOS to the latest version, to no avail.)


    With fancontrol, we can set thresholds to whatever we'd like, increasing or decreasing fan use. I don't yet have a configuration file in place.
    # fancontrol
    Loading configuration from /etc/fancontrol ...
    Error: Can't read configuration file

    # ls /etc/fancontrol
    ls: cannot access /etc/fancontrol: No such file or directory
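For reference, when pwmconfig does work, it generates /etc/fancontrol in roughly the shape below. Every hwmon name and threshold here is a hypothetical illustration, not a value from this laptop, and the sketch writes to a scratch file rather than /etc:

```shell
# Illustrative /etc/fancontrol written to a scratch file.
# All hwmon device names and temperature thresholds are hypothetical;
# pwmconfig normally generates these lines interactively.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
INTERVAL=10
FCTEMPS=hwmon0/pwm1=hwmon0/temp1_input
FCFANS=hwmon0/pwm1=hwmon0/fan1_input
MINTEMP=hwmon0/pwm1=45
MAXTEMP=hwmon0/pwm1=70
MINSTART=hwmon0/pwm1=150
MINSTOP=hwmon0/pwm1=100
EOF
wc -l < "$cfg"
```

Each line after INTERVAL maps a PWM output to a temperature or fan-speed input; without a pwm-capable sensor, though, there is nothing to map.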
    A larger problem: it may be that fancontrol cannot detect my fans at all -- its configuration editor, pwmconfig, is unable to find them.
    # pwmconfig
    pwmconfig revision 5770 (2009-09-16)
    /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed
    At first cut, it appears no Pulse Width Modulation (PWM) controllable fans are in the laptop, but it may be that the system fan is controllable via some other method. It's also of note that KDE is the desktop environment running, and that Kinfocenter does not indicate hardware has been properly detected. Fedora-based distros have had no such problems. Accordingly, although this post began about cooling, detection is the first order of business, and it has been added to the title in parentheses.

    Notably, Kinfocenter provides the same results as lshal, where it might have looked for its information:
    # lshal
    udi = '/org/freedesktop/Hal/devices/acpi_CPU0'
    info.category = 'processor' (string)
    info.product = 'Unknown Processor' (string)

    So instead of hal or other OS detection, let's directly access fan information.
    # ls /sys/class/thermal
    cooling_device0 cooling_device1 cooling_device2 cooling_device3 thermal_zone0

    # cat /sys/class/thermal/cooling_device0/device/hid
    PNP0C0B

    # cat /sys/class/thermal/cooling_device0/device/modalias
    acpi:PNP0C0B:
    So this fan's ACPI identifier is PNP0C0B. The fan is currently off; what is its range of possible values when running?
    # cat /sys/class/thermal/cooling_device0/device/thermal_cooling/cur_state
    0

    # cat /sys/class/thermal/cooling_device0/device/thermal_cooling/max_state
    1

    # cat /sys/class/thermal/cooling_device0/device/thermal_cooling/power/control
    auto
    Now it's clear why no PWM functionality was detected by pwmconfig: the fan's options are apparently either "on" or "off". Power is controlled "auto"-matically. But, similar to the sysfs commands used for a backlight, let's attempt to turn on "cooling_device0".
    # echo 1 > /sys/class/thermal/cooling_device0/device/thermal_cooling/power/wakeup_active
    bash: /sys/class/thermal/cooling_device0/device/thermal_cooling/power/wakeup_active: Permission denied
    This sort of problem continued with other attempts ("invalid parameter" and so forth). The next step seemed to be querying the device for legal settings; assistance from a specialized program seemed the efficient route.
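Before reaching for a specialized program, a small loop speeds up this sort of sysfs spelunking. A sketch against a fake tree so it can be exercised anywhere; on the real machine, set base=/sys/class/thermal:

```shell
# Enumerate cooling devices: type, current state, max state.
# A fake tree is built here so the loop itself can be tried safely;
# on a real system, use base=/sys/class/thermal instead.
base=$(mktemp -d)
mkdir -p "$base/cooling_device0"
echo Fan > "$base/cooling_device0/type"
echo 0   > "$base/cooling_device0/cur_state"
echo 1   > "$base/cooling_device0/max_state"
for d in "$base"/cooling_device*; do
    printf '%s: %s state %s of %s\n' "${d##*/}" \
        "$(cat "$d/type")" "$(cat "$d/cur_state")" "$(cat "$d/max_state")"
done
# → cooling_device0: Fan state 0 of 1
```

On real hardware the type, cur_state, and max_state files sit directly under each cooling_deviceN, so the same loop reports every cooling device at once.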


    Links: acpitool :: acpitool GUI
    I'm running a Toshiba laptop, so let's try
    # acpitool -F 1
    Forcing the fan of/off is only supported on Toshiba laptops.
    No Toshiba ACPI extensions were found.
    Hah! OK, so "No Toshiba ACPI extensions" on a Toshiba laptop means either that the Toshiba-specific kernel module is not loading, or that the BIOS and LILO need fiddling until it does. Probably the former: toshiba_acpi.ko.
    # find /lib/modules -name toshiba_acpi.ko

    # lsmod |grep toshiba

    # grep -n toshiba /etc/rc.d/rc.modules
    178:#/sbin/modprobe toshiba_acpi

    # nano /etc/rc.d/rc.modules
    I uncommented line 178 to load toshiba_acpi at boot; hopefully this doesn't pile up so many modules that memory suffers. Following this, I rebooted.
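For the record, the nano edit was just uncommenting the modprobe line, and the same change can be scripted. A sketch on a scratch copy (the real target is /etc/rc.d/rc.modules):

```shell
# Uncomment the toshiba_acpi modprobe line, as done by hand in nano above.
# Done on a scratch file here; point f at /etc/rc.d/rc.modules for real.
f=$(mktemp)
echo '#/sbin/modprobe toshiba_acpi' > "$f"
sed -i 's|^#\(/sbin/modprobe toshiba_acpi\)|\1|' "$f"
cat "$f"
# → /sbin/modprobe toshiba_acpi
```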
    # lsmod |grep toshiba
    Huh. Okay then...
    # modprobe toshiba_acpi
    FATAL: Error inserting toshiba_acpi (/lib/modules/ No such device
    Not good. Per this site, I checked the BIOS and noted that it is not a true Toshiba BIOS. It appears I will have to recompile the kernel and enable (via menuconfig) Device Drivers / x86 Platform Specific Device Drivers / Toshiba Laptop Extras. I'm dubious: if the kernel can't load the external module, why would the built-in option fare better? Still, have to try everything for cooling...

    post kernel - BIOS

    As feared, the kernel was unable to detect the Toshiba-ness of the laptop, even after building in the Toshiba-specific features above, in addition to some other switches which I hoped would allow the kernel to grasp it was in a Toshiba.
    # acpitool -F 1
    Forcing the fan of/off is only supported on Toshiba laptops.
    No Toshiba ACPI extensions were found.
    Next stop is the BIOS. The BIOS, Insyde H20 v.1.2 rev.3.5, appears rudimentary and has no ACPI settings. Perhaps we can flash it to something better/more recent. The most recent BIOS for the L305D at the Toshiba website is from April 2009. Sort of old, and the file is Windows-specific: slc8v180.exe. I ran strings against it to see if it was just an archive.
    $ strings slc8v180.exe
    [snip] processorArchitecture="X86" name="Roshal.WinRAR.WinRAR" type="win32" /> WinRAR archiver
    I was able to unpack this and found a bootable ISO in the files, with a README saying to reboot with it. I burned the ISO to a CD and rebooted the system. Voila, the Insyde BIOS updated from 1.2 to 1.8. However, looking inside, no ACPI functions had been added, and I still get the following:
    # modprobe toshiba_acpi
    FATAL: Error inserting toshiba_acpi (/lib/modules/ No such device

    Arch installation

    Let's see if we can find better interaction with a different OS, with a newer kernel. Installed Arch and then compiled acpitool.
    # acpitool -F 1
    Could not open file : /proc/acpi/toshiba/fan
    You must have write access to /proc/acpi/toshiba/fan to stop or start the fan.
    Or ensure yourself you are running a kernel with Toshiba ACPI support enabled.
    Fails, but more information. Acpitool apparently only looks at /proc/acpi/toshiba/fan, but the fan directory for this system is /sys/bus/acpi/drivers/fan. Let's attempt a symlink, first making sure there is a /proc/acpi/toshiba directory into which we can link a "fan" entry.
    # ls /proc/acpi/toshiba
    keys version

    # ln -s /sys/bus/acpi/drivers/fan /proc/acpi/toshiba
    ln: failed to create symbolic link '/proc/acpi/toshiba/fan': No such
    file or directory
    Let me get this straight: ln fails to create a link because the link it's supposed to create doesn't exist yet? In fairness, /proc is a virtual filesystem and entries can't be created in it from userspace, so the symlink was never going to happen. At any rate, the device in the "fan" directory, PNP0C0B:00, is a symlink to another directory.
    $ ls -an /sys/devices/LNXSYSTM:00/device:44/PNP0C0B:00
    total 0
    drwxr-xr-x 4 0 0 0 Aug 1 12:06 .
    drwxr-xr-x 6 0 0 0 Aug 1 12:06 ..
    lrwxrwxrwx 1 0 0 0 Aug 1 18:42 driver -> ../../../../bus/acpi/drivers/fan
    -r--r--r-- 1 0 0 4096 Aug 1 18:41 hid
    -r--r--r-- 1 0 0 4096 Aug 1 18:41 modalias
    -r--r--r-- 1 0 0 4096 Aug 1 18:41 path
    drwxr-xr-x 2 0 0 0 Aug 1 18:41 power
    drwxr-xr-x 2 0 0 0 Aug 1 18:41 power_resources_D0
    -r--r--r-- 1 0 0 4096 Aug 1 18:41 power_state
    -r--r--r-- 1 0 0 4096 Aug 1 18:41 real_power_state
    lrwxrwxrwx 1 0 0 0 Aug 1 18:42 subsystem -> ../../../../bus/acpi
    lrwxrwxrwx 1 0 0 0 Aug 1 18:42 thermal_cooling -> ../../../virtual/thermal/cooling_device3
    -rw-r--r-- 1 0 0 4096 Aug 1 12:06 uevent
    -r--r--r-- 1 0 0 4096 Aug 1 18:41 uid
    Yep, ACPI's "proc" interface is being replaced by the newer "sys" one, which also affects "suspend" and other ACPI activity. Which is the power state we need to change for the fan? Here are the "cats" for various of these entries:
    driver directory
    hid PNP0C0B
    modalias acpi:PNP0C0B:
    path \_TZ_.FAN1
    power directory
    power_resources_D0 directory
    power_state D3cold
    real_power_state D3cold
    uevent DRIVER=fan
    uid 2
    Now, here are listings of each of the three sub-directories, "driver", "power", and "power_resources_D0":
    drwxr-xr-x 2 0 0 0 Aug 1 12:07 .
    drwxr-xr-x 14 0 0 0 Aug 1 12:06 ..
    lrwxrwxrwx 1 0 0 0 Aug 1 18:39 PNP0C0B:00 -> ../../../../devices/LNXSYSTM:00/device:44/PNP0C0B:00
    --w------- 1 0 0 4096 Aug 1 18:37 bind
    --w------- 1 0 0 4096 Aug 1 12:07 uevent
    --w------- 1 0 0 4096 Aug 1 18:37 unbind

    drwxr-xr-x 2 0 0 0 Aug 1 18:41 .
    drwxr-xr-x 4 0 0 0 Aug 1 12:06 ..
    -rw-r--r-- 1 0 0 4096 Aug 1 19:17 async
    -rw-r--r-- 1 0 0 4096 Aug 1 19:17 autosuspend_delay_ms
    -rw-r--r-- 1 0 0 4096 Aug 1 19:17 control
    -r--r--r-- 1 0 0 4096 Aug 1 19:17 runtime_active_kids
    -r--r--r-- 1 0 0 4096 Aug 1 19:17 runtime_active_time
    -r--r--r-- 1 0 0 4096 Aug 1 19:17 runtime_enabled
    -r--r--r-- 1 0 0 4096 Aug 1 19:17 runtime_status
    -r--r--r-- 1 0 0 4096 Aug 1 19:17 runtime_suspended_time
    -r--r--r-- 1 0 0 4096 Aug 1 19:17 runtime_usage

    drwxr-xr-x 2 0 0 0 Aug 1 18:41 .
    drwxr-xr-x 4 0 0 0 Aug 1 12:06 ..
    lrwxrwxrwx 1 0 0 0 Aug 1 19:19 LNXPOWER:00 -> ../../LNXPOWER:00
    The two symlinks loop back on themselves, but "power" appears to contain a useful writable file named "control". According to this site, which is about USB but has the appropriate power information, we should be able to use /power/control to change the fan's state from "auto" to "on".
    $ cat /sys/devices/LNXSYSTM:00/device:44/PNP0C0B:00/power/control
    auto

    # echo on > /sys/devices/LNXSYSTM:00/device:44/PNP0C0B:00/power/control

    $ cat /sys/devices/LNXSYSTM:00/device:44/PNP0C0B:00/power/control
    on

    $ cat /sys/devices/LNXSYSTM:00/device:44/PNP0C0B:00/power_state
    D3cold
    In other words, although the device seems to accept the power change, the fan does not power up. And though we can see it in /sys/devices, the kernel never fully registers it -- we can tell because it never forms an entry in /proc/acpi. During boot, dmesg calls it "FAN1", but after that... gone.

    $ dmesg |grep ACPI
    PnP ACPI: found 10 devices
    ACPI: bus type PNP unregistered
    ACPI: bus type USB registered
    ACPI: Battery Slot [BAT0] (battery present)
    ACPI: AC Adapter [ADP0] (on-line)
    ACPI: Power Button [PWRB]
    ACPI: Lid Switch [LID]
    ACPI: Power Button [PWRF]
    ACPI: Video Device [VGA] (multi-head: yes rom: no post: no)
    ACPI: Thermal Zone [THZN] (50 C)
    ACPI: Fan [FAN1] (off)

    # cd /proc/acpi

    # grep -rn LID *
    wakeup:2:LID S4 *enabled

    # grep -rn FAN *
    # grep -rni fan *

    $ acpi -V
    Battery 0: Unknown, 100%
    Battery 0: design capacity 4000 mAh, last full capacity 2541 mAh = 63%
    Adapter 0: on-line
    Thermal 0: ok, 48.0 degrees C
    Thermal 0: trip point 0 switches to mode critical at temperature 105.0 degrees C
    Cooling 0: Fan 0 of 1
    Cooling 1: LCD 3 of 7
    Cooling 2: Processor 0 of 3
    Cooling 3: Processor 0 of 10
    Seems to be there, but nothing....

    How about in /sys/bus/acpi/drivers/fan?
    $ ls /sys/bus/acpi/drivers/fan
    PNP0C0B:00 bind uevent unbind

    $ ls /sys/bus/acpi/drivers/fan/PNP0C0B:00/
    driver hid modalias path power power_resources_D0 power_state real_power_state subsystem thermal_cooling uevent uid
    These are the same settings (and same problems) seen earlier under /sys/devices/LNXSYSTM:00/device:44/PNP0C0B:00/. A Gordian knot with no apparent commands to reach inside it -- so close, but so far.

    Wednesday, July 10, 2013

    [solved] curl rabbit hole

    Links: DNS server list
    Note: the workaround described here is probably only necessary during this temporary era of the IPv4 -> IPv6 transition. I have little doubt this curl bug will eventually be resolved.

    Currently (7/2013), when we enter a URL via curl at the command line, curl's first action is a DNS query to resolve the address. By default, curl queries in both IPv4 and IPv6 formats. The rub is that, unless the DNS server responds in both formats, A and AAAA, curl throws an error (see below) and exits. Most major sites resolve in both formats, but many sites, including repositories needed by rpm/yum, do not resolve in IPv6.
    $ curl
    curl: (6) Could not resolve host:; Cannot allocate memory

    We can overcome the problem with the "--ipv4" switch.
    $ curl --ipv4
    [page loads normally]

    The more substantial problem is rpm/yum reliance upon curl for network repository access. Calls to curl from within rpm/yum are made without any switches available to the user. Accordingly, curl makes such DNS queries in a default IPv4 + IPv6 format. This means rpm/yum fail and exit unless curl receives both A and AAAA responses.
    # rpm -ivh
    curl: (6) Could not resolve host:; Cannot allocate memory

    troubleshoot - tcpdump

    Tcpdump information confirms that it is curl's insistence on both IPv4 and IPv6 information that causes the failure:
    # rpm -ivh
    curl: (6) Could not resolve host:; Cannot allocate memory
    [tcpdump information] > 41495+ A? (36) > 25356+ AAAA? (36) > 41495 1/0/0 A (52) > 25356- 0/0/0 (36)
    Curl has received a valid DNS resolution (the A response above), but only to the IPv4 query. Curl's failure inside of an rpm request thereby causes rpm to fail as well. Although we know so far that curl has a design flaw requiring that it receive both A and AAAA query responses (instead of either), and although we know we can't control curl inside of rpm, we need more information about why no AAAA (IPv6) data is being returned. We might be able to control that.

    troubleshoot - nslookup, dig

    $ nslookup

    Non-authoritative answer:

    $ nslookup -type=AAAA
    Non-authoritative answer:
    *** Can't find No answer

    $ dig +short A

    $ dig +short AAAA
    Perhaps an AAAA record (PTR or zone) was never created for the sourceforge repository (consider, e.g., this article). It's also possible AT&T does not update its DNS files often, or there is an incorrect AAAA record in their zone. Too many upstream variables to determine reliably. At this point, unless we work for the NSA, we're stuck with the information we have. What is a feasible solution?

    strategy 1

    Write a patch and recompile curl (or pfsense) to succeed with either IPv4 or IPv6 information. The most reliable solution --- except that I'm not a programmer.

    strategy 2 (inelegant but successful)

    Provide BIND with an AAAA-style /etc/hosts entry for Some good /etc/hosts IPv6 information is available here. An excellent converter site provided a set of conversion options for (recoverable from the hex below). The form which appeared best for tricky operations such as the current curl release (operating in IPv4 mode, but needing IPv6 info!) was the "IPv4-mapped address". This is 0:0:0:0:0:ffff:4e2e:11e4, written as ::ffff:4e2e:11e4.
    # nano /etc/hosts

    # nano /etc/host.conf
    order hosts,bind
    (Other helpful links were this article and this IPv4-6 translator.) Before the successful curl run, I tried the URL successfully in Chromium, entering http://[::ffff:4e2e:11e4] in the URL bar.

    strategy 3

    Try other DNS servers, ones likely to have the most up-to-date zones. Google provides solid instructions in this document describing how to point to their DNS servers. There is also this DNS list. Alternatively, I could simply add "nameserver" entries in /etc/resolv.conf using "supersede" to prevent overwriting by dhcpcd as described in comments here.
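For the resolv.conf route, the file content is just nameserver lines. A sketch using Google's public resolvers ( and, written to a scratch file rather than the real /etc/resolv.conf:

```shell
# Illustrative resolv.conf contents pointing at Google's public DNS.
# Written to a scratch file; the real target is /etc/resolv.conf.
r=$(mktemp)
printf 'nameserver\nnameserver\n' > "$r"
cat "$r"
```

Without a "supersede" (or equivalent) directive in the DHCP client's config, these lines get overwritten at the next lease renewal, which is the caveat noted above.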

    strategy 4

    A common way to manage IPv4 vs. IPv6 confusion in the past has been locking out IPv6; I even showed how to do this in a prior post. However, since curl now requires both A and AAAA records to be returned, shutting out IPv6 is no longer a sensible confusion-stopper, unless a person has no need to contact software repositories.

    xsane rabbit hole

    Links: scanimage man page :: clear overview of network and usb scanners, groups

    I recently connected a mothballed Epson that had scanned on every system it ever met, but found that it didn't work in Fedora. The sane-find-scanner utility detected the scanner, but scanimage -L did not (as root or user). This is apparently a common problem, sometimes associated with permissions, sometimes random bugs.

    I did get it to scan by manually entering the USB address detected by sane-find-scanner, prefixed with the backend name for Epsons* (per the libraries in /usr/lib/sane/). This looks like, eg...
    $ scanimage -d epson2:libusb:001:005 --resolution 75 --mode Gray > star.jpeg
    $ convert star.jpeg -normalize out.jpg
    Conversion (via ImageMagick) is required because the output image caused "not jpeg starts 0x50 0x34" errors: the file does not begin with a JPEG header because it's really a PGM/PNM file. Anyway, scanning in this way would be prohibitively time-intensive.

    *attempted both /usr/lib/sane/libsane-epson and /usr/lib/sane/libsane-epson2
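The header complaint is easy to verify by hand: 0x50 0x34 is ASCII "P4", a netpbm magic number, whereas a JPEG begins with the bytes 0xFF 0xD8. A sketch of the check, on a synthetic file:

```shell
# Inspect the magic bytes of a scan output file.
# "P1".."P6" mark netpbm (PBM/PGM/PPM) files; JPEG starts with 0xFF 0xD8.
f=$(mktemp)
printf 'P4\n2 2\n' > "$f"        # synthetic netpbm-style header
head -c 2 "$f"; echo
# → P4
```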

    $ scanimage --help --device-name epson:libusb:001:005

    Options specific to device `epson:libusb:001:005':
    Scan Mode:
    --mode Lineart|Gray|Color [Lineart]
    Selects the scan mode (e.g., lineart, monochrome, or color).
    --dropout None|Red|Green|Blue [None]
    Selects the dropout.

    --gamma-correction User defined (Gamma=1.0)|User defined (Gamma=1.8) [User defined (Gamma=1.8)]
    Selects the gamma correction value from a list of pre-defined devices
    or the user defined table, which can be downloaded to the scanner
    --resolution 75|150|300|600dpi [75]
    Sets the resolution of the scanned image.
    --speed[=(yes|no)] [no]
    Determines the speed at which the scan proceeds.

    --short-resolution[=(yes|no)] [no]
    Display short resolution list
    --red-gamma-table 0..255,...
    Gamma-correction table for the red band.
    --green-gamma-table 0..255,...
    Gamma-correction table for the green band.
    --blue-gamma-table 0..255,...
    Gamma-correction table for the blue band.

    --wait-for-button[=(yes|no)] [no]
    After sending the scan command, wait until the button on the scanner
    is pressed to actually start the scan process.
    --preview[=(yes|no)] [no]
    Request a preview-quality scan.
    --preview-speed[=(yes|no)] [no]

    -l 0..215.9mm [0]
    Top-left x position of scan area.
    -t 0..297.857mm [0]
    Top-left y position of scan area.
    -x 0..215.9mm [215.9]
    Width of scan-area.
    -y 0..297.857mm [297.857]
    Height of scan-area.
    --quick-format CD|A5 portrait|A5 landscape|Letter|A4|Max [Max]


    Stracing scanimage revealed that the appropriate .conf file was consulted, but nothing was returned. I also moved all .conf files in /etc/sane.d/ except the Epson ones, in case the client could not locate the appropriate .conf file among the 30 or so in that folder. No luck.


    One article (link is also at top of page) seemed to imply that saned was necessary to run the scanner. I had considered saned (configured via /etc/sane.d/saned.conf) necessary only for network access to scanners, and so looked more closely.

    Saned and CUPS appear similar, at least in concept. Since I could find no clear guideline for the syntax for adding USB scanners into saned, it's possible they are added to /etc/sane.d/saned.conf in a similar USB URL format as in CUPS, eg usb:/dev/usb/lp0. But it appears we may have to be more specific about which USB device, and so include each:


    The Fedora installation had no group for scanning. It might be necessary to add a "scanner" group and create user access. If there is an underlying permissions issue, this might resolve it.

    Tuesday, July 9, 2013

    [solved] dns, yum, rpm, curl, ping

    Links: yum variables :: IPv4 address conversion :: yum commands
    NB: This is a complicated post. It first addresses IPv6 (mostly successfully), but then a second problem is revealed, specific to Fuduntu, that I could not circumvent. Since Fuduntu is defunct, I'm disregarding that and posting "solved" above. Hopefully there's plenty of info below for others working on what might be a similar problem in another Fedora-flavored release like Fuduntu.
    Consider the following problem -- if I can ping, I should be equally able to curl, but I'm not:
    $ ping
    PING ( 56(84) bytes of data.
    64 bytes from ( icmp_seq=1 ttl=49 time=27.9 ms
    64 bytes from ( icmp_seq=2 ttl=49 time=27.7 ms
    $ curl
    curl: (6) Couldn't resolve host ''
    This did more than just raise my curiosity; rpm/yum relies on curl for access to repositories. First I checked proxy and IPv6 settings. All looked normal: no proxy, and IPv6 set to ignore, but not to block or force resolution. Let's look under the hood.


    Here are portions of dumps for the successful ping and struggling curl:
    # tcpdump -nli eth0 -s0 udp port 53 -vvv
    [during ping] > [udp sum ok] 11891+ A? (34) > [udp sum ok] 11891 q: A? 1/0/0 [5s] A (50) > [udp sum ok] 51147+ PTR? (43) > [udp sum ok] 51147 q: PTR? 1/0/0 [9h53m39s] PTR (73)

    [during curl] > [udp sum ok] 26082+ A? (34) > [udp sum ok] 54668+ AAAA? (34) > [udp sum ok] 26082 q: A? 1/0/0 [5s] A (50) > [udp sum ok] 54668- q: AAAA? 0/0/0 (34) > [udp sum ok] 47978+ A? (46) > [udp sum ok] 42568+ AAAA? (46) > [udp sum ok] 47978 NXDomain- q: A? 0/0/0 (46) > [udp sum ok] 42568 NXDomain- q: AAAA? 0/0/0 (46)
    Ping only queries the DNS server in IPv4 (A?) and has success. Curl initially requests in both IPv4 (A?) and IPv6 (AAAA?) formats. Although curl receives a proper response ( to its IPv4 request, nothing is returned for the IPv6 request. Apparently due to some bug, curl ignores the IPv4 resolution and requests a second time in both formats. It also mysteriously appends "localdomain" onto its query (!).

    solution - /etc/hosts + release awareness

    Links: IPv4 address conversion :: yum concerns :: cleaning old yum info

    We should write a patch for curl and recompile it, but that's for programmers. I only know how to supply curl with the IPv6 information it wants. The site may not have an AAAA record in its DNS zone file, but I can still manually enter IPv6 info into /etc/hosts and force curl to use that.
    # nano /etc/hosts

    # nano /etc/host.conf
    order hosts,bind

    $ curl
    [page loads normally]
    Problem 1 solved. However, there is a second problem, one specific to Fuduntu, not curl. Fuduntu is a hybrid. It accordingly doesn't have typical Fedora values in its rpm variables, eg $releasever.
    $ rpm -q fedora-release
    package fedora-release is not installed

    $ rpm -q fuduntu-release

    $ ls /etc/*release
    ls: cannot access /etc/*release: No such file or directory

    $ yum list fedora-release
    Loaded plugins: fastestmirror, langpacks, presto, refresh-packagekit
    Adding en_US to language list
    Determining fastest mirrors
    Could not retrieve mirrorlist error was
    14: PYCURL ERROR 6 - ""
    Error: Cannot find a valid baseurl for repo: fuduntu
    Glitches also cause this for Fedora users when version conflicts arise. In the case of Fuduntu, however, the repos no longer exist -- one strategy might be to spoof Fuduntu's version checking, as if it were being upgraded, when it accesses third-party repos. If we eliminate the locally-stored repo files and the rpm release file, we might be able to override with third-party information. First let's do a debug dump with the current info (in case we need it later), then remove local information.
    $ yum-debug-dump
    Output written to: /home/~/yum_debug_dump-local-2013-07-12_20:50:30.txt.gz

    $ rpm -q fuduntu-release

    # yum remove fuduntu-release-2013-3.noarch
    [screens and screens of removal]

    $ rpm -q fuduntu-release

    # ls /etc/yum.repos.d
    dropbox.repo fuduntu.repo
    # rm /etc/yum.repos.d/fuduntu.repo
    # ls /etc/pki/rpm-gpg/
    RPM-GPG-KEY-fuduntu RPM-GPG-KEY-fuduntu-i386
    RPM-GPG-KEY-fuduntu-2013-primary RPM-GPG-KEY-fuduntu-x86_64
    # rm /etc/pki/rpm-gpg/*

    # yum clean all

    $ rpm -q fuduntu-release


    Links: replace $releasever using sed :: yum variables
    The orphaned Fuduntu release has no access to Fuduntu repositories because they no longer exist; it must rely on third-party repos to move forward. Fedora-related repositories arrange their URLs around "f[$releasever]-[$basearch]". In Fuduntu, $releasever expanded to "2013", giving "2013-i386". That worked in Fuduntu's own repos but fails in 3rd-party repos -- $releasever needs to be changed to something standard, such as "17", to produce a Fedora-standard "f17-i386".

    But nothing worked. Not exporting $releasever=17 as an environment variable, not changing /etc/yum.conf, not swapping the value "2013" for "17" in each /etc/*release file. Nothing I could find globally changed this variable. Eventually, after a couple of lost days on the project, I gave up and brute-forced the repo files individually. Before modifying, I eliminated the old cache and backed up all the unmodified repos into a new directory I called "default". Then I modified the repos, replacing "$releasever" with "17" in each file.
    # rm -r /var/tmp/*

    # mkdir /etc/yum.repos.d/default
    # cp /etc/yum.repos.d/* /etc/yum.repos.d/default/

    # sed -i 's/$releasever/17/g' /etc/yum.repos.d/*.repo
    The repos finally loaded.

    (non)solution - remove IPv6 functionality

    Link: Disabling IPv6

    This is not a solution, because curl/rpm/yum needs both IPv6 and IPv4 information and does not get what it needs with the process below. I'm including this info, however, because it provides insight into how other TCP clients (ie, not strictly curl/rpm) can be assisted in a mixed A/AAAA environment; some readers may be interested in this for Chromium, etc.
    # nano /etc/sysconfig/network

    # nano /etc/modprobe.d/blacklist.conf
    blacklist nf_conntrack_ipv6
    blacklist nf_defrag_ipv6

    Appendix 1 - wireshark

    A good link for using wireshark to check DNS problems. The wireshark GUI is more elegant than my CLI approach above, and arguably more user-friendly for those working on IPv4 /IPv6 solutions.

    Appendix 2 - strace

    Straces are too long to regurgitate here; let's look at the relevant lines where ping succeeds and curl is unsuccessful. Of possible interest here is that, from inside the LAN, the DNS server, via DHCP, is simply the gateway at "".
    $ strace ping
    connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, 16) = 0
    gettimeofday({1373430779, 7870}, NULL) = 0
    poll([{fd=3, events=POLLOUT}], 1, 0) = 1 ([{fd=3, revents=POLLOUT}])
    send(3, "\346\f\1\0\0\1\0\0\0\0\0\0\3www\10websense\3com\0\0\1"..., 34, MSG_NOSIGNAL) = 34
    poll([{fd=3, events=POLLIN}], 1, 5000) = 1 ([{fd=3, revents=POLLIN}])
    ioctl(3, FIONREAD, [50]) = 0
    recvfrom(3, "\346\f\201\200\0\1\0\1\0\0\0\0\3www\10websense\3com\0\0\1"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, [16]) = 50
    close(3) = 0
    connect(3, {sa_family=AF_INET, sin_port=htons(1025), sin_addr=inet_addr("")}, 16) = 0
    getsockname(3, {sa_family=AF_INET, sin_port=htons(53553), sin_addr=inet_addr("")}, [16]) = 0

    And for the failing curl :
    $ strace curl
    connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, 16) = 0
    gettimeofday({1373429425, 713068}, NULL) = 0
    poll([{fd=3, events=POLLOUT}], 1, 0) = 1 ([{fd=3, revents=POLLOUT}])
    sendmmsg(3, {{{msg_name(0)=NULL, msg_iov(1)=[{"\215s\1\0\0\1\0\0\0\0\0\0\3www\10websense\3com\0\0\1"..., 34}], msg_controllen=0, msg_flags=0}, 34}, {{msg_name(0)=NULL, msg_iov(1)=[{"\34\305\1\0\0\1\0\0\0\0\0\0\3www\10websense\3com\0\0\34"..., 34}], msg_controllen=0, msg_flags=0}, 34}}, 2, MSG_NOSIGNAL) = 2
    poll([{fd=3, events=POLLIN}], 1, 5000) = 1 ([{fd=3, revents=POLLIN}])
    ioctl(3, FIONREAD, [34]) = 0
    recvfrom(3, "\34\305\200\0\0\1\0\0\0\0\0\0\3www\10websense\3com\0\0\34"..., 2048, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, [16]) = 34
    close(3) = 0
    connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, 16) = 0

    Curl never seems to leave port 53, and it also appears curl may have actually received the IP of the DNS server in response to its query to that selfsame DNS server. Perhaps this is due to curl embedding its request inside a more complex sendmmsg routine, as opposed to ping's simpler send routine. Additionally, ping uses a getsockname process not used by curl.

    More information: while Epiphany is running, we can check which calls are creating errors. Get its PID, open a terminal, and let strace run for several seconds while attempting to surf to an address in Epiphany. Then CTRL-C out and examine the data, eg...
    $ strace -c -p 3811
    Process 3811 attached
    ^CProcess 3811 detached
    % time seconds usecs/call calls errors syscall
    ------ ----------- ----------- --------- --------- ----------------
    54.08 0.000384 0 2167 writev
    14.79 0.000105 5 20 munmap
    13.24 0.000094 0 1526 clock_gettime
    13.24 0.000094 0 6690 4567 recv
    4.65 0.000033 0 4583 poll
    0.00 0.000000 0 1 restart_syscall
    0.00 0.000000 0 80 9 read
    0.00 0.000000 0 31 write
    0.00 0.000000 0 36 open
    0.00 0.000000 0 36 close
    0.00 0.000000 0 4 unlink
    0.00 0.000000 0 18 access
    0.00 0.000000 0 8 rename
    0.00 0.000000 0 262 gettimeofday
    0.00 0.000000 0 1 clone
    0.00 0.000000 0 14 _llseek
    0.00 0.000000 0 21 mmap2
    0.00 0.000000 0 58 46 stat64
    0.00 0.000000 0 84 8 lstat64
    0.00 0.000000 0 36 fstat64
    0.00 0.000000 0 2 1 madvise
    0.00 0.000000 0 70 6 futex
    0.00 0.000000 0 1 statfs64
    ------ ----------- ----------- --------- --------- ----------------
    100.00 0.000710 15749 4637 total
    Blog formatting squishes the data a little, but we see significant errors (4,567 of them) on "recv" calls, as well as some on "stat64" and a few others.

    Wish I could write in C and recompile curl.

    Saturday, July 6, 2013

    fuduntu - yum/rpm stuff

    links: yum setup :: yum command examples

    why yum/rpm?

    In the previous post, I noted an install of Fuduntu, a recently orphaned distro. The idea was to establish a simple system which might retain its stability against the tide of media player updates. An expected side benefit was the opportunity to learn yum. Specifically, it's necessary to modify Fuduntu's default yum files because Fuduntu's yum URLs are now invalid. Note: the final Fuduntu release appears to be based on Fedora 17 (or roughly Enterprise Linux 6), plus some Ubuntu features.


    No notable differences between Fedora and Fuduntu in the global /etc/yum.conf file. Standard.

    repos to remove - /etc/yum.repos.d/

    The helpful System > Administration > Software Sources window:

    We can see the Fuduntu repository entries are there, though they no longer point anywhere. To follow good housekeeping, toggle "enable=0" inside each undesired repo entry in /etc/yum.repos.d/. To be more thorough, we could delete unwanted repo files (in /etc/yum.repos.d/) and gpg keys (in /etc/pki/rpm-gpg/). In my simple world, I renamed all the repos with a .bak extension and kept them -- I might want to look inside one later. This also allows me to change the extension back to ".repo" if I want to reactivate one for some reason.
    # cd /etc/yum.repos.d/
    # rename .repo .bak *.repo
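Since the rename utility takes different arguments across distros (the util-linux version versus the Perl script), a plain shell loop is a portable equivalent; sketched on a scratch directory with the two repo names from above:

```shell
# Portable equivalent of: rename .repo .bak *.repo
# Done on a scratch directory; point d at /etc/yum.repos.d for real use.
d=$(mktemp -d)
touch "$d/dropbox.repo" "$d/fuduntu.repo"
for f in "$d"/*.repo; do
    mv "$f" "${f%.repo}.bak"   # strip the .repo suffix, append .bak
done
ls "$d"
```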

    repos to add

    As noted at the top, I assumed Fuduntu's nearest releases were F17 and EL6.
    (1) REPOFORGE (formerly RPM Forge)
    # rpm -ivh
    However, rpm for some reason could not resolve the host and produced the following error:
    curl: (6) Could not resolve host:; Cannot allocate memory
    error: skipping - transfer failed
    This is not a small problem, and it requires a separate post on IPv4 and IPv6 addressing, which will be my next post. Essentially, curl does the DNS resolution and downloading for rpm/yum actions. Users can specify "--ipv4" in direct CLI curl requests, but there is no such switch when rpm calls curl as a subroutine, out of sight of the user. If it doesn't receive IPv6 information, curl fails, causing rpm to fail.

So, for now, I downloaded the rpm repo file directly via my browser, and then...
    # yum --nogpgcheck localinstall rpmforge-release-0.5.3-1.el6.rf.i686.rpm
    ...and finally
    # yum install --enablerepo=rpmforge-extras
Looking at the repo file, it performs a gpg check on downloaded packages. Many settings are available; a simple example is the one above -- to toggle a repo on or off, change the state in its "enabled" line.

(2) ATRPMS
Below is the repair for curl's IPv6 finickiness. Then one can directly download and install the file using rpm.
    # nano /etc/host.conf
    order hosts,bind

    # nano /etc/hosts

    # curl --ipv6

    [install key]
    # rpm --import

    # rpm -ivh
    Preparing... ########################################### [100%]
    1:atrpms-repo ########################################### [100%]

    # ldconfig

    # yum clean all

    # nano /etc/host.conf
    order hosts,bind

    # nano /etc/hosts

    # curl --ipv6

    # rpm -ivh
    warning: /var/tmp/rpm-tmp.QSSeST: Header V3 RSA/SHA256 Signature, key ID 8296fa0f: NOKEY
    Preparing... ########################################### [100%]
    1:rpmfusion-free-release ########################################### [100%]

    # ldconfig

    # yum clean all

    manual repo pointing

Let's create a new repo file, fedora1.repo. Permissions of the file should be 644...
# chmod 644 /etc/yum.repos.d/fedora1.repo
The new file should appear in System > Administration > Software Sources:
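The fedora1.repo contents aren't reproduced in the post, so here is a hedged sketch of what a minimal entry might look like. The baseurl is a placeholder (the real F17 archive URL isn't in the post), and a scratch directory stands in for /etc/yum.repos.d:

```shell
repodir=$(mktemp -d)   # stand-in for /etc/yum.repos.d
cat > "$repodir/fedora1.repo" <<'EOF'
[fedora1]
name=Fedora 17 archive (manual)
baseurl=http://archive.example.invalid/fedora/releases/17/Everything/i386/os/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-17-primary
EOF
chmod 644 "$repodir/fedora1.repo"
ls -l "$repodir/fedora1.repo"
```

Note the gpgcheck and gpgkey lines -- they are what trigger the key problems described in the NOTES below.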

    quick test - yum repolist

A fast way to check that everything is operating correctly.

    NOTES: gpg keys

    On first cut, the new repository did not work. Note that in the above repo listing, we requested a gpg check and provided the folder to find it...
... but then of course we only had Fuduntu keys in that folder, no Fedora keys. Further, the Fedora project has moved on; it no longer operates repos for Fedora 17. So we could use rpm to import the keys, which places them in /etc/pki/rpm-gpg/...
    # rpm --import RPM-GPG-KEY-fedora-17-primary
    # rpm --import RPM-GPG-KEY-fedora-17-secondary
    ...but Fedora does not maintain older packages, so we'd have the keys to nothing. We could even add the old F17 keys manually...
# gedit /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-17-primary
...pasting the key block from the link into that file, then making a softlink for housekeeping...
    # ln -s /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-17-primary /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora
    ...but again this would be pointless since no F17 software is available at Fedora.

    There are also these sorts of possible yum problems.

    Saturday, May 18, 2013

    slacko 5.5 (puppy) and easy peasy on a hp110 mini

    Links: GPTfdisk  Puppy User and Security

    booting from USB

I downloaded the latest slacko ISO (5.5) and used UNetbootin to put the ISO onto a stick. I put in the stick and powered up. F10 got me into the BIOS, but the BIOS did not detect the USB --- the HDD was the single boot option. Pressing F1 and power-cycling the BIOS eventually got the USB detected. Each time thereafter, an F9 option to "change boot device order" appeared in the BIOS splash. Once the USB was detected, Slacko booted and installed quickly from it.

Next were attempts at a couple of Ubuntu-based distros: Joli OS and EasyPeasy. Both hung during auto-partitioning. Reckoning I had encountered the storied MBR/GPT conflict, I considered downloading GPTfdisk, but then I discovered an article which appeared to show how to remove the GPT manually. I applied the steps to both ends of the HDD (a GPT keeps a backup table at the end of the disk), but the installs still hung at the same step.

Why was an Ubuntu installer hanging and a Slackware-based installer not? During installs, Ubuntu's installer script apparently relies on the POS gparted for the partitioning phase. Since Ubuntu's installation screens are GUI, not CLI, gparted has no straightforward way to surface its failure information; gparted simply dies (exits) and hangs the installation. I was only able to guess it might be gparted after Googling similar hangs. To verify, I exited the setup script and entered the live CD desktop. I then opened a terminal, su'ed up ("sudo passwd ubuntu", then "su"), and ran gparted. Sure enough, gparted failed and exited. The failure:
    Assertion (head_size <= 63) at ../../../libparted/labels/dos.c:659 in function probe_partition_for_geom() failed.
    OK, the above explained why I couldn't see the failure, and why it didn't occur in Slackware, but what is this failure? Poor design, apparently. Certainly, the difference between a USB and a CDROM should be irrelevant for gparted to do its install job, but it obviously isn't. It is a nasty bug. I was only able to find one fix on my first chop at it, and that was designed for Verbatim brand USB's. I don't have a Verbatim USB. Nevertheless, cfdisk -z /dev/sdb got me partway there, and then I also formatted the USB with ext2 (mke2fs /dev/sdb1) just to be sure. I then ran UNetbootin on an EasyPeasy ISO. With this, I was able to install EasyPeasy (Ubuntu installer script) without issues. 3 lost hours.

    UEFI (Unified Extensible Firmware Interface) Note:

I also booted Slacko from the USB on a 2013 laptop. For this, I found I had to enter the BIOS and disable UEFI booting before the USB, or even a CDROM, would boot successfully.

    Slacko: adding a user, logins, X-settings for user

Slacko's default GUI access is root, which makes sense for a live distro. So how do we create users, and which files are required for X (and Bash, etc.) initialization for these users?

Create users (in this case "foo"), add them to groups, and set up the home directory...
    # mkdir /home/foo
    # adduser foo
    # nano /etc/group #(add foo to whatever)
    # su foo
    $ cp /boot/root/.bashrc /home/foo/
    $ cp /boot/root/.Xdefaults /home/foo/
    $ cp /boot/root/.Xresources /home/foo/
    $ cp /boot/root/.fonts-cache1 /home/foo/
    $ cp /boot/root/.gtkrc-2.0 /home/foo/
    ...then arrange for runlevels and booting logins.
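As an aside, the five cp lines above can be collapsed into a loop. A sketch, using scratch directories in place of root's home and /home/foo:

```shell
# Hedged sketch: copy root's dotfiles to a new user's home in one loop.
# src and dst are scratch stand-ins for the real directories.
src=$(mktemp -d)   # stand-in for root's home
dst=$(mktemp -d)   # stand-in for /home/foo
touch "$src/.bashrc" "$src/.Xdefaults" "$src/.Xresources" "$src/.fonts-cache1" "$src/.gtkrc-2.0"
for f in .bashrc .Xdefaults .Xresources .fonts-cache1 .gtkrc-2.0; do
    cp "$src/$f" "$dst/"
done
ls -A "$dst"
```

On the real system, a chown -R foo:foo on the copied files would also be sensible, since the copies are made from root's files.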

First, to eliminate autologins to root, change the second line in /etc/inittab from...
tty1::respawn:/sbin/getty -n -l /bin/autologinroot 38400 tty1
...to...
tty1::respawn:/sbin/getty 38400 tty1
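This edit can also be scripted with sed; a sketch against a scratch copy (on the real system the file is /etc/inittab):

```shell
# Hedged sketch: strip the autologin arguments from the tty1 getty line.
# A temp file stands in for /etc/inittab; tty2 is included to show
# that a normal getty line is left untouched.
f=$(mktemp)
cat > "$f" <<'EOF'
tty1::respawn:/sbin/getty -n -l /bin/autologinroot 38400 tty1
tty2::respawn:/sbin/getty 38400 tty2
EOF
sed -i 's|/sbin/getty -n -l /bin/autologinroot 38400 tty1|/sbin/getty 38400 tty1|' "$f"
cat "$f"
```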

    Wednesday, April 24, 2013

    CLI slackware on a 1999 laptop

    Links: CNET specs   broadcom module

    AmeriNote RL366C 366MHz Celeron  

    In the late 1990's, CompUSA1 apparently sold a line of laptops called "AmeriNote"s. It's unclear who manufactured these for CompUSA, but the laptops did garner at least some good reviews in their day. I recently had an opportunity to salvage one in unknown condition. Current market value would be less than 50 cents, but the project seemed interesting: a 366MHz Celeron with 256Mb of 144 pin 60MHz SODIMM, 3.2 GB HDD (not bad for '99), CD-ROM drive, single USB 1.0 port, and a 12.1" 800x600 screen. The screen appeared to be without a backlight, passively lit, if that's possible. Full specs are in the link at top.


The laptop powered into the BIOS OK but did not boot; apparently it suffered corrupted HDD sectors. I located a 2008 Slackware 12.1 CD, and it booted with "huge.s". The bad sectors totaled about 100 MB, leaving sufficient space for a CLI install. First, however, the verification of the checksum of the old install disk:
    $ cd /media/foo/S12.2d1
    $ md5sum -c CHECKSUMS.md5
    md5sum: WARNING: 12 lines are improperly formatted
The warning is inconsequential; it just means whoever created the checksum file made a syntax error. Any warning that computed checksums "did NOT match", however, would have to be evaluated.


The on-board battery had obviously not kept up: the clock was stuck in 2005. Some applications (e.g. WiFi modules) will not compile if the OS date is too far removed from the dates in the source code. I fixed this based mostly on information from this site, adding quotation marks as follows...
    # date --set="Sat Apr 24 18:49:00 EST 2013"
    ... and sent it to the hwclock using
    # hwclock --systohc --utc

    user - adduser

Creating a user with "useradd" appeared the most flexible, but it seemed to always lead to account-expiration errors even when the directory, expiry, and so forth seemed properly initialized. After several iterations of "useradd", "usermod", and "userdel" with different option flags, I eventually tried "adduser". "Adduser" worked the first time and so seemed worth the annoying GECOS hand-holding, etc. The command is simply...
    # adduser foo
Subsequent verification of the user, with # passwd -Sa ("status", "all") and $ groups, confirmed a normal scenario, including a user directory.


I had a USB stick with application sources --- how to get them into a user directory using the CLI? I found instructions here. I varied from these slightly by using udevmonitor to detect the device name, and by creating the mount point (as $) under my home directory, e.g. /home/foo/myusb...
    $ mkdir /home/foo/myusb
    # udevmonitor [returned "/dev/sda1" from usb stick]
    # mount -t vfat -o rw,users /dev/sda1 /home/foo/myusb
    I was able to easily move files as a user. Then, prior to removing the USB stick...
    $ umount /dev/sda1

    network - pcmcia

The AmeriNote of course only had a built-in 56K modem. This left the PCMCIA or USB ports as possible network-adapter inputs (how did we get by back in '99?). PCMCIA seemed best for WiFi: leave the USB available. For whatever reason, I had a small pile of PCMCIA WiFi cards in an old shoebox. Each was detected, but judging by the numbers in lspci -vn, the subassembly chips seemed to require ndiswrapper, which I didn't want to use. And peering into /lib/modules/`uname -r`/kernel/net/wireless, it seemed that, after all these years, very few native modules have emerged2. However, one of the cards was a Broadcom 4300 series. It seemed worth a try with a 32-bit driver. The source compiled without errors, but when I went to insert the module, I had errors which ultimately required a kernel recompile enabling more WiFi networking features than the vanilla install provides. I have instructions for such recompiles in a March 2009 post. I also considered stripping down the running modules and running make localmodconfig, since that runs "lsmod" to determine the configuration. In the end, I recompiled the Slackware 12.1 source, enabling wireless.

    Before trying the Broadcom driver noted above, I checked to see if the newly compiled modules might natively drive the card. But using...
$ ls -l /sys/class/net
    ...only revealed an eth0 and no wlan0. Hmmm...
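For reference, the interface list lives in /sys/class/net on sysfs-era kernels, and a wireless interface additionally exposes a "wireless" subdirectory there. A sketch for checking both:

```shell
# List network interfaces, then flag any that the kernel drives as wireless.
# On the AmeriNote this would show eth0 only, confirming no native wlan driver.
ls /sys/class/net
for i in /sys/class/net/*; do
    if [ -d "$i/wireless" ]; then
        echo "wireless interface: ${i##*/}"
    fi
done
```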

    1Absorbed since 2012 into
    2Self-defeating firmware opacity on the part of manufacturers?