Friday, November 16, 2018

pam.d and polkit strike again

I hate pam.d (system level) and polkit (group level). Both cause interaction problems and are part of a larger failure of Linux progression over the years: what users previously configured with a few quick text changes in their /etc folder (most of the original appeal of Linux, in other words) is now handled by memory-hogging daemons. Each of these applications seems to have its own syntax inside a (difficult to locate) configuration file, plus handshake issues. For example, group and user policy overlord polkit runs multiple threads and hogs memory for zero benefit. Here are a few solutions to pam.d and polkit issues I encounter. BTW, gconf is nearly as disgusting.

disable polkit

Disabling pam.d will break your session because so many app writers have caved to its policies, but disabling polkit just means, e.g., entering $ udiskie & when mounting a USB drive. Best solution:
# systemctl stop polkit
# systemctl disable polkit.service
The above worked and freed up a few percent of CPU cycles and about 50M of RAM. None of what was described here worked -- neither creating the following file as /etc/polkit-1/rules.d/99-deny-all.rules
polkit.addRule(function(action, subject) {
    return polkit.Result.YES;
});
... nor using $ startx --vt7.

authorization levels

Most old-time Linux users configuring access authority between users, groups, and applications expect simple, fast, text-editor options in /etc/sudoers and maybe /etc/group. Not since the abomination pam.d was developed. Changes to, say, /etc/sudoers are still made in the old syntax, but, as noted well in this post, more complicated work is now necessary in pam.d as well.
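For reference, the old sudoers syntax still looks the way it always did. A generic example, run through visudo (substitute your own user or group): the first line gives the whole wheel group sudo, the second a single user.
# visudo
%wheel ALL=(ALL) ALL
foo ALL=(ALL) ALL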

Monday, October 29, 2018

[solved] (sort of) webm format on older hardware

Laptops and desktops from 2007 or 2008 seem too slow to view WEBM files -- their hardware is probably insufficient to decode WEBM on the fly. This manifests in VLC as significant tearing and block pixelation. I've tweaked acceleration and RAM arrangements, but have never found a solution for these distortions when watching WEBM on old hardware.

Eventually, it came down (sadly) to converting WEBM to MP4 or AVI. Since MP4 is a long-established container, it plays on nearly anything if the bitrate isn't too high. A simple CLI solution always seems most efficient for this kind of conversion. After Googling, I came across this page, which advised the following:
$ ffmpeg -i input.webm -c:v libx264 -crf 20 -c:a aac -strict experimental out.mp4
This works all right.

round 2

If time permits, or if the bitrate has to be brought down (4700K is fine for sports, 1100K fine for teaching), I like the approach of breaking out the video and the sound and then recombining them. Let's say I want a teaching vid, shot at some ridiculously high 14000K bitrate, to play well on an old laptop or look good on a 55" screen...
$ ffmpeg -i original.webm -vn -ar 44100 -ac 2 sound.wav
$ ffmpeg -i original.webm -vcodec libx264 -b:v 1000k -s wxga -an video.avi
$ ffmpeg -i video.avi -i sound.wav -acodec libmp3lame -ar 44100 -ab 192k -ac 2 -vol 330 -vcodec copy -b:v 1000k output.mp4
... taking the volume up just a hair to overcome any transcoding loss.

rotation

A lot of times with cell phone vids, the video is rotated 90° one direction or the other. To straighten it, add the '-vf transpose' option: 1 rotates 90° clockwise, 2 rotates 90° counter-clockwise, and 0 and 3 add a vertical flip to the counter-clockwise and clockwise rotations, respectively...
$ ffmpeg -i original.webm -vn -ar 44100 -ac 2 sound.wav
$ ffmpeg -i original.webm -vcodec libx264 -b:v 1000k -vf "transpose=2" -s wxga -an video.avi
$ ffmpeg -i video.avi -i sound.wav -acodec libmp3lame -ar 44100 -ab 192k -ac 2 -vol 330 -vcodec copy -b:v 1000k output.mp4

resolutions

Here's a resolution page that might come in handy. The 5:3 ratio is just a hair away from the Golden Ratio, and I like it. The most common 5:3 resolution is 1280x768, or WXGA. A standard GoPro does 1280x720 (HD 720), which is 16:9. But since I like 5:3, I try WXGA first -- if it looks stretched, I drop to XGA (1024x768), which is back at the old 4:3 ratio of VGA, SVGA and so forth. Incidentally, there are still perfectly usable old handheld media players around that do 4:3 320x240 (QVGA). These are maybe $5 in a bargain bin at Fry's or whatever. Naxa made one of these rechargeables with 4GB of memory.
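To check a source's actual resolution and codec before picking a target size, ffprobe (it ships with ffmpeg) is enough -- read the Stream lines of its output:
$ ffprobe original.webm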

Friday, September 28, 2018

mount /tmp on tmpfs? compile and runtime tmpfs overloads

I located (via yaourt) a large wine client -- a few hundred megabytes and half an hour to compile on an older laptop. But two wine compilation problems appeared during the install: 1) wine doesn't compile because its PGP key is unrecognized by yaourt, 2) wine doesn't compile because the system runs out of available RAM.

First the PGP problem: when the PGP rejection notice is generated, the key appears in the notice. Users can copy it and add it to their keyring. The source of the confusion is that yaourt relies on the user's personal keyring, which is a different keyring than pacman's dedicated one. So accept the key into your personal gpg ring... then restart the install and yaourt will accept the app.
$ gpg --recv-key [key]
...or you can examine the key owner's information with --fingerprint [key].
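For contrast, pacman's own keyring is managed with pacman-key, and the two don't share keys -- a generic example, only needed when an official repo package is the one complaining:
# pacman-key --recv-keys [key]
# pacman-key --lsign-key [key]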

memory

Now that we have the application, there are several ways to check the memory parameters.
$ free
total used free shared buff/cache available
Mem: 2034884 330740 1301548 45624 402596 1512420
Swap: 50426196 0 50426196

$ swapon
NAME TYPE SIZE USED PRIO
/dev/sda2 partition 48.1G 0B -2

$ df
Filesystem 1K-blocks Used Available Use% Mounted on
dev 1010468 0 1010468 0% /dev
run 1017440 592 1016848 1% /run
/dev/sda1 258030980 27348180 217575600 12% /
tmpfs 1017440 2972 1014468 1% /dev/shm
tmpfs 1017440 0 1017440 0% /sys/fs/cgroup
tmpfs 1017440 8 1017432 1% /tmp
tmpfs 203488 0 203488 0% /run/user/1000

$ cat /etc/fstab
/dev/sda1 / ext3 rw,relatime,block_validity,barrier,user_xattr,acl 0 1

/dev/sda2 none swap defaults,pri=-2 0 0

Where do these additional entries, not in fstab, originate? Can they be controlled? It appears we need some strategy for our tmpfs to spill onto swap space once RAM is full so that the system doesn't lock up.
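A quick way to list every tmpfs the system has mounted, fstab or not (findmnt is part of util-linux); the ones missing from fstab are set up by systemd's default mount units:
$ findmnt -t tmpfs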

memory during compiling

During application compiling, tmpfs is used for parking files spawned or used by make. The tmpfs space is in RAM, viewable inside /tmp, and it allows compilations to progress quickly. However, compiling larger applications can require so much /tmp space that RAM becomes filled. Once RAM is nearly full, make aborts, complaining of a lack of space. Although tmpfs is supposed to overflow into swap space, by default it never gets the chance: the mount is capped at half of RAM. How can we compile these larger programs?

As noted, /tmp naturally resides in RAM. At the expense of 1) time -- perhaps 25% longer compilation -- and 2) the risk of thrashing one's hard drive, one can let /tmp grow beyond RAM and onto the disk. One option is pointing builds at a disk-backed directory such as /var/tmp; the other is simply enlarging the tmpfs so its overflow pages out to swap. I took the second route, and the important thing is to set it back to normal after compiling the project.
# nano /etc/fstab
tmpfs /tmp tmpfs size=25G 0 0
This is all that's necessary. Checking...
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 1016196 5752 1010444 1% /dev/shm
tmpfs 26214400 8 26214392 1% /tmp
Now we can compile wine. We still have /dev/shm as a regular RAM-sized tmpfs, while /tmp can spill over into the 25G of HDD swap if needed. Since I won't need this regularly, I will comment it out of /etc/fstab once wine is compiled.
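If you'd rather not touch fstab at all, the same enlargement can be done as a one-off that lasts until the next reboot -- a minimal sketch, assuming /tmp is already mounted as a tmpfs:
# mount -o remount,size=25G /tmp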



Tuesday, September 25, 2018

[solved] more hdmi sound hijinx

Whichever card is running the HDMI must be set as the default inside ALSA, whether or not there are secondary issues with pulseaudio. Your order of business upon finding that, say, YouTube videos no longer have sound, or VLC is without audio, is
  • ye olde "verify audio is unmuted in alsamixer" (while you're in alsamixer, why not check your card for the annoying "automute" setting and disable it -- select it and use the "+" or "-" keys to toggle it)
  • check sound with a command aimed at the specific hardware card. If this doesn't work, detour into verifying the HDMI path itself. For example, this MUST work if this is the named card and HDMI device
    $ aplay -D plughw:1,7 /usr/share/sounds/alsa/Front_Center.wav
  • assuming the above works, the wrong default card is likely being assigned, so that YouTube is, say, attempting to play through [default] motherboard audio instead of the HDMI-capable sound card. This means, particularly after a software update, doing the thing you've done for years now (the aplay -l check after this list will confirm which card and device numbers to use)...
    # nano /usr/share/alsa/alsa.conf
    @hooks [
        {
            func load
            files [
                {
                    @func concat
                    strings [
                        { @func datadir }
                        "/alsa.conf.d/"
                    ]
                }
            ]
            errors false
        }
    ]
    ... and then further down in the file change the defaults...
    # nano /usr/share/alsa/alsa.conf
    defaults.ctl.card 1 #default 0
    defaults.pcm.card 1 #default 0
    defaults.pcm.device 7 #default 0
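To confirm which card and device numbers belong in those defaults, list the playback devices; aplay -l comes with alsa-utils, and the HDMI line will read something like the following (hardware-dependent):
$ aplay -l
card 1: PCH [HDA Intel PCH], device 7: HDMI 1 [HDMI 1]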

Saturday, September 15, 2018

post boot startup scripts

Going into "X", there are ways to run startups for various GUI applications. But what about at boot time? GRUB/LILO's job is only to initialize the kernel and the system: the boot loader is not the place to, say, run an app that connects to wifi or kick off a cron job. How to do it? My ugly solution is a plain-text rc.local file, added with a second step of creating and attaching a systemd service. Prior to systemd, in less complicated Linux years, we could create text init files for apps inside /etc/init.d/.

/etc/rc.local

There are more elegant ways (e.g. as argued here) to connect to the web at startup, but let's use the web connection as an easy rc.local example, then pick your own application(s) to put into rc.local.

# nano /etc/rc.local
#!/bin/bash
echo "router connect"
wpa_supplicant -iwlan0 -Dwext -B -c /etc/wpa_supplicant/wpa_supplicant.conf
exit 0

# chmod 755 /etc/rc.local
... or some like to just "+x" it.

/etc/systemd/system/rc-local.service


Next, create a systemd service file to call rc.local. As well noted here, the file must include a "wanted by" line.

# nano /etc/systemd/system/rc-local.service
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target
... and then enable the service in systemd...
# systemctl enable rc-local.service

The next time the system boots, it runs the rc.local file and connects to the router.
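To verify the unit actually fired after a reboot, the ordinary status check applies:
$ systemctl status rc-local.service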

[solved] chk files

Sometimes after plugging a USB key into a Microsoft product, you'll get weird CHK-extension files, possibly in cryptic "FOUND" directories -- fragments that Windows attempted to "recover" off the drive. Seeing what was recovered often requires a hex dump reader. The simplest one I've found is GNOME's ghex, easily installable in Arch with (obviously)
# pacman -S ghex
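Before reaching for a hex editor, the file command will often identify what a recovered fragment actually is; the path below is just the typical chkdsk naming pattern, not anything special:
$ file FOUND.000/FILE0000.CHK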

[unsolved] boot problems after upgrade

I had some kind of problem with mkinitcpio during a long-overdue upgrade on my uncle's system. Boot was dropping out on the udev early hook with an error that it couldn't determine the file system type. Commands such as blkid and fsck come back clean, so you know the problem is in GRUB or the initramfs (mkinitcpio), or possibly the kernel line or some other related issue.
# set
You should see a line like BOOT_IMAGE=/boot/vmlinuz-linux

fix (3hrs)

Fresh reinstall. See this post. Your only problem is if the Arch install is over password-protected WiFi instead of ethernet. You'll have to HAND COPY your wpa_supplicant.conf into the installation machine prior to pacstrap, since there is no way to mount an inserted USB in the machine before pacstrap, yet WiFi must be up before pacstrap. It's a double bind, not unheard of during Arch installations. Said otherwise, there's no legal mount point for the USB in the ramdisk.
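If retyping the whole file sounds grim, wpa_passphrase (present on the live ISO) can regenerate the network block from just the SSID and passphrase -- placeholders below:
# wpa_passphrase "MySSID" "MyPassphrase" > /etc/wpa_supplicant/wpa_supplicant.conf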

no fix (20hrs)

Unfortunately, to match the system you're fixing, you'll have to download the latest core Arch install ISO to access GRUB. The reason is not that GRUB cannot find the Linux kernel -- it found it right away. Loose notes from the many dead ends:
  • the "bootparam" man page on kernel ("linux" line) parameters is about 50% helpful (more here), and it doesn't even list "rootfstype=ext2"
  • kernel modules
  • grub modules (grub manual)
  • initramfs (mkinitcpio) modules
  • udev hooks vs. early hooks
  • multiple kernel options in GRUB
The boot itself dies like this:
running early hook [udev]
starting version 239
running hook [udev]
ERROR: device '' not found. Skipping fsck
mounting '' on real root
mount: /new_root: no filesystem type specified.
You are now being dropped into an emergency shell.
sh: can't access tty; job control turned off

Press "e" on the (two-line) GRUB menu entry to see the params inside it. Then, if you want the rescue prompt, press F2.
grub>

hahah... of course!
"There are different types of hooks [earlyhook, hook, latehook, cleanuphook]. Apart from that it should be in order."
uname -r and mkinitcpio -M display information for the running kernel. However, after updating, the kernel mkinitcpio builds against is the newer version, so it throws an error.

possibly gold plate post

Thursday, August 2, 2018

mods and apps on an old F3/F3Q

$40-55 (+16 hrs if unfamiliar with concepts below)

This year, first responders begin receiving ATT cellular devices operating on the FirstNet secure, priority communication network, all for free. The average citizen brings home a fraction of what first responders do, yet can expect to pay $300-400 (and up) for a phone, plus another $70 per month for an unsecured, questionably reliable service plan. If a citizen is lower-income -- say an educator, a student, or someone elderly on a fixed income -- even the device price is out of reach, never mind the phone plans. This post suggests one workaround for the device price, though it requires some work, perhaps 8 hrs worth. The service plan issue cannot be solved without massive PUC intervention. Good luck on that.
As I write (2018), excellent-condition, legally unlocked T-Mobile LG F3 and F3Q Android phones can be purchased for $40. The phones support NFC, Bluetooth, WiFi, and GPS. Screen size is 800x480 WVGA, and they record in HD 1280x720. The F3/F3Q has a 3.5mm headphone jack and natively plays MP3s.

Android 4.1 (Jelly Bean) phones such as the F3 or F3Q, or later, accommodate micro memory cards (SDHCs) up to 32GB (29.7 usable). 32GB U1 micro-SDHCs run ~$11 (2018)1. If we root the phone, we can configure the memory card to reliably run all recent apps (Uber, Slack, whatever). So far we're at $51, and there's an additional $4 app below, though a person can also get by with the free version. They might also have a spare micro-SDHC on hand, keeping the cost to just the unlocked phone.

root and mod (20 mins to root)

Rooting doesn't unlock a phone, yet a phone must be unlocked to properly root it. Accordingly, I purchase unlocked phones or take them to my service plan provider to unlock. Once unlocked, the rooting process provides much greater freedom inside the phone's Android OS. For example, rooting gives apps such as Apps2SDPro ($4)2 the authority to move other apps onto the micro-SDHC card, which frees up memory inside the phone. Further efficiency can be gained by modding or replacing the Android OS itself, which requires a stock-based or custom ROM. Modding has benefits and drawbacks (see below), whereas rooting the phone is entirely positive.

For the phone here (F3/F3Q), two types of root software can be used: earlier-release F3's have a software version prior to "c" and can be rooted with motochopper or Saferoot. Phones with software version "c" or later can be rooted with Saferoot (which installs a free version of the SuperSu ADB). Again, phones are not unlocked by simply rooting them.

Here are the Linux rooting steps I used (Note: Windows users can Google and install the saferoot EXE; the second and third bullet points below still apply):
  • install/verify lib32-ncurses, since older ncurses5 emulation is required for these rooting programs from 2014. You might need the PGP key from here.
    # pacman-key --populate archlinux
    # pacman -S multilib-devel lib32-glibc
    $ yaourt lib32-ncurses5-compat-lib
    It still barked at me about the keyring, so I knew gpg (which runs at user level) was hung, and I just did a # pacman -U [localfile and location] on the package file made by yaourt.
  • in the phone, verify USB debugging is on (Settings -> Developer Options)
  • connect phone via USB
  • on the PC, cd to the saferoot directory containing install.sh, or the motochopper directory containing run.sh. Verify those files, along with any associated BAT, ADB, PWN, etc. files, all have 755 permissions.
  • from the PC,
    $ ./install.sh
  • I answered "y" to installing BusyBox during the root process. It's an old small version.
  • reboot phone and look for SuperSu app in Apps

troubleshoot/saferoot
If the PWN binary fails (the actual rooting portion), yet the phone reboots OK and you see a Superuser app, then saferoot (or motochopper) and your USB configuration are working and haven't damaged the phone; the PWN binary is likely just outdated. Remove the Superuser app and run a newer version of Saferoot. This Saferoot post includes a phone compatibility list.

Nota bene: Rooting should pose no risk of losing MMS, phone, contacts, etc. Saferoot went smoothly, and I was quickly able to install Apps2SDPro from the Google store and move bulky apps, including their data and libraries, to the microSD card. A mod/ROM, however, is a further step than a root; it's an actual OS modification. Before a person tries a ROM, they'll likely want the capacity to bring the phone back to factory specs if things go haywire. OK, on with the post-root work.

phone apps

I considered these 3 the core rooting apps and didn't move them to the SDHC. At around 40MB total, and used together, they provide the ability to free hundreds of MB used by other apps. All are available in the Google Play Store.

Apps2SDPro (28MB)
$4 -- Allows partitioning and automatically sets up a 2nd partition as extra system memory for linking/moving. Don't link/move the ADB app (e.g. SuperSu), but most others can be linked or moved. Trial and error.
SuperSu (7MB)
Free -- The free version runs a sufficient ADB, but you can pay $2 for one with a few more features. Saferoot installed an older version via the USB cable, and it then easily upgraded via Google Play.
Trimmer (5MB)
Free -- Works quickly on the NAND to delete orphans. Very useful, and no ads or nags.

making memory space (5 mins)

Using Apps2SD, I planned to reserve 20GB for Android space (apps) and retain 9GB for photo files, etc. File storage is VFAT, and I've read that Android uses ext4. During the Apps2SD setup, I learned the 2nd partition would automatically be formatted ext4 and mounted as phone memory. I used the blue slider to select about 9GB of FAT32 for photos/files, entered 800MB for swap, and hit the thumbs-up for the partitions to be created. All three virtual drives were created smoothly.3

custom ROM

I have had such good results with rooting and the three apps above that I'm not sure I'll ever mod the OS unless I get entirely bored. Modding risks losing the phone's "data connection" (necessary for MMS pic send/receive), call quality, and battery life. Modified stock ROMs, as opposed to full custom ROMs, usually have better camera quality. There's some talk about that here. Also discussed is the need for running Trimmer, as all these custom ROMs apparently begin to lose speed. If I do ever mod the phone, I will finish this portion of the post or do a new post on the topic.


1 The phone's highest video recording resolution is 16:9 HD 1280x720, which is why we can get away with the less expensive U1 SDHCs. Incidentally, the phone records at 1280x720, but the screen is 5:3 WVGA -- 800x480. That means video playback on the device, YouTube/Netflix included, is at 480p: small but clear.
2 Apps2SDPro is an entire suite of tools, for example it includes a partition manager for setting up the SDHC card.
3 If the card is removed, can it be re-linked to the apps? Complex unmounting/remounting steps might be necessary if one removes the micro-SD from the phone.

Tuesday, June 12, 2018

pacman failure "404", duid (dhcpcd)

There's a mix of ipv4, ipv6, and duid stuff in here. On a home network -- our home LAN -- why would we ever need NSA-level DUID (DHCP Unique IDentifier) fingerprinting of our system? If it's configured but there's some conflict, our dhcpcd will time out, or make a connection but fail to browse (no DNS), and so on. It's just another level of possible failure on a home system; why would we ever want duid in a million years? DUID settings are further down, but they seem to get set no matter what a person does to remove them.

pacman

One time, I began receiving 404 failures during # pacman -Syu. I dug deeper using # pacman -Syu --debug 2>&1 | tee debugfile.txt and found that either curl itself (used by pacman to retrieve packages) or the way pacman was using curl was the source. This is not always a bed of roses to determine and repair, since these are often random glitches in what's become a decades-long, intermittent, annoying ipv6/ipv4 transition mess. It can involve tens of kernel and application settings and confusing conf files that interact badly. For example, one can no longer edit /etc/resolv.conf directly, since it will be overwritten by resolvconf during dhcpcd initialization. One must now edit /etc/resolvconf.conf to indirectly get at /etc/resolv.conf, and resolvconf.conf has its own syntax, different from resolv.conf, so both file syntaxes must be verified after any change. And so on.

which ipv is working?

The most obvious start is to verify that curl functions over both ipv4 and ipv6. Then one can disable or activate the failed half or, possibly, force pacman to use the ip version that *is* working (although I'm not sure the ip version can be specified during a pacman operation).

First, the curl ipv functionality. Take a URL from one's mirror list and ping it to see that DNS is working. Then...
# curl aprs.ele.etsmtl.ca
Bienvenue | Welcome [curl works]
# curl --ipv4 aprs.ele.etsmtl.ca
Bienvenue | Welcome
[curl, and therefore pacman, is working ipv4]
# curl --ipv6 aprs.ele.etsmtl.ca
curl: (6) Could not resolve host: aprs.ele.etsmtl.ca
[curl, and therefore pacman, is failing ipv6]

In this case, it appears pacman is requesting both ipv6 (AAAA) and ipv4 (A) lookups via curl but, at various points in its development, pacman could only accomplish ipv4. strace shows further that ipv6 is not even able to open a socket.

Another way to check the kernel network settings is # sysctl -A, which reveals all kernel settings; grep the output for "net", "ipv4", or "ipv6".

Users could also run # ip addr list and look to see whether there's an ipv6 address, an ipv4 address, or both.
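Two quick one-liners in that vein (the sysctl keys shown are the stock kernel ones):
# sysctl -A | grep disable_ipv6
$ ip -6 addr show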

~/.curlrc

The curl man page documents the settings available for its configuration file.
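For instance, if only ipv4 is healthy, the curl command-line tool can be pinned to it -- a minimal sketch; the config file takes long option names without the leading dashes. Note this affects the curl binary only; pacman's internal downloader doesn't read it.
$ nano ~/.curlrc
ipv4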

In the case above, I was getting a proper ipv6 configuration but still getting a curl failure on ipv6. I was only able to find a single post on this, here. The post indicates it's really a glibc issue that can only be resolved by slowing down DNS resolution -- manipulating /etc/resolvconf.conf in order to manipulate /etc/resolv.conf. We want /etc/resolv.conf to say "options single-request",
and so, to achieve this, we write...
# nano /etc/resolvconf.conf
# Solve ipv4/v6 fails
resolv_conf_options="single-request"
For a list of all resolvconf.conf options, see this man page.
Disabling ipv6 in Arch is no bed of roses -- one either has to put a boot parameter into GRUB or create an effective /etc/sysctl.d/40-ipv6.conf (see bottom). Then again, we may instead need to enable ipv6 more thoroughly across all applications, not cripple it further. It's a typical ipv4/ipv6 trial-and-error mess due to opaque reliance upon multiple programs (e.g. curl).

old fix

This one used to work, but nowadays, with built-in ipv6, it doesn't always. Add a couple of lines to your blacklist file, e.g...
# nano /etc/modprobe.d/blacklist
# stop pacman failure when encounter ipv6 sites
blacklist nf_conntrack_ipv6
blacklist nf_defrag_ipv6

grub

Use nano or whatever to add ipv6.disable=1 to the kernel line (GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub), then regenerate grub.cfg.
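Roughly like this, assuming stock GRUB2 paths -- only the added parameter matters; keep whatever else is already on the line:
# nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet ipv6.disable=1"
# grub-mkconfig -o /boot/grub/grub.cfg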

/etc/sysctl.d/40-ipv6.conf

Here's one example, not guaranteed to work.
# Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.wlan0.disable_ipv6 = 1

duid

I've removed all duid references, but I still get some duid when I connect with dhcpcd. Be sure also to delete all old "leases" in /var/lib/dhcpcd. Note too that a duplicate version of dhcpcd.conf gets put inside /var/lib/dhcpcd/etc/, and that this is the one used by the system. You could probably even delete /etc/dhcpcd.conf. So "duid" needs to be commented out in that /var/lib copy as well.

# rm /etc/dhcpcd.duid
# rm /etc/dhcpcd.secret
# rm /var/lib/dhcpcd/duid
# nano /etc/dhcpcd.conf [comment out the duid and clientid lines]
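After the cleanup, a restart plus a journal grep shows whether dhcpcd still announces a DUID (this assumes dhcpcd runs as a plain systemd service here):
# systemctl restart dhcpcd
# journalctl -u dhcpcd -b | grep -i duid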

So then, here's a truncated version of /etc/dhcpcd.conf. But even with all duid options disabled, the router is apparently coaxed into collecting one.

# nano /etc/dhcpcd.conf
# A sample configuration for dhcpcd.
# See dhcpcd.conf(5) for details.

# Inform the DHCP server of our hostname for DDNS.
hostname

# duid

# Persist interface configuration when dhcpcd exits.
persistent

option rapid_commit

# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search
option classless_static_routes

# Get the hostname
option host_name

# Most distributions have NTP support.
option ntp_servers

# Respect the network MTU. This is applied to DHCP routes.
option interface_mtu

# A ServerID is required by RFC2131.
require dhcp_server_identifier

# Prevent timeouts for ipv6 failures
noipv6
noipv6rs

Sunday, June 10, 2018

[solved] pacman issues, keys

Recently attempted a pacman -Syu and had some of the typical failures of breaking dependencies and so forth, repaired by identifying orphan applications...

# pacman -Qdt

and then removing them...

# pacman -Rs [the package]

However, the system failed a subsequent update attempt, reporting a lack of disk space. Let's check why.

$ df
Filesystem 1K-blocks Used Available Use% Mounted on
dev 1884252 0 1884252 0% /dev
run 1891812 760 1891052 1% /run
/dev/sda1 27867324 25491112 960636 97% /
tmpfs 1891812 5880 1885932 1% /dev/shm
tmpfs 1891812 0 1891812 0% /sys/fs/cgroup
tmpfs 1891812 8 1891804 1% /tmp
/dev/sda2 681201384 114146820 532451556 18% /home
So, shit, yah. Not good -- 25GB of 27GB used? Let's try the usual suspect, /var/cache/pacman.
# du -sh /var/cache/pacman
18G /var/cache/pacman
# pacman -Sc
Packages to keep:
All locally installed packages

Cache directory: /var/cache/pacman/pkg/
:: Do you want to remove all other packages from cache? [Y/n] y
removing old packages from cache...

Database directory: /var/lib/pacman/
:: Do you want to remove unused repositories? [Y/n] y
removing unused sync repositories...
# du -sh /var/cache/pacman
1.5G /var/cache/pacman
Pretty nice to get rid of 16GB of unused shit.
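If wiping the whole cache feels drastic, paccache (from the pacman-contrib package) can instead keep the most recent couple of versions of each installed package:
# paccache -rk2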

install some local package

If you're in this mess, you might also have to CLI-mount a USB drive. I first checked which partitions I had on the USB drive: I plugged in the USB and noted it was assigned /dev/sdg, so I then...
# lsblk /dev/sdg
... and see there's only one partition "sdg1", so...
# mount -t vfat -o rw,users /dev/sdg1 /mnt
... then when done with copying the files, I umount the device.
Now you have a local package and want to install it using pacman:
# pacman -U [package.tgz]

signatures

We'd like pacman to verify signatures, but occasionally there are error messages if it's been a long time since the last update. Often, I've found that simply reinstalling the keyring works.
# pacman -S archlinux-keyring

/etc/pacman.d/mirrorlist

Occasionally mirrors go offline so that, if it's been a while since an update, pacman can't locate the previously enabled mirror. Go into this file and uncomment some other mirror, or get the latest available list from the pacman mirrorlist generator.
# pacman -Syy
# pacman -Syu

ffmpeg dependencies

Another potential long-gap update failure is a pacman exit due to any of several ffmpeg dependencies. This may look like...
error: failed to prepare transaction (could not satisfy dependencies)
:: ffmpeg2.8: installing libvpx (1.7.0-1) breaks dependency 'libvpx.so=4-64'
:: ffmpeg2.8: removing libx264 breaks dependency 'libx264.so=148-64'
:: ffmpeg2.8: installing x265 (2.8-1) breaks dependency 'libx265.so=146-64'
Fix...
# pacman -Rs vlc
Sometimes this leads to a second dependency issue with phonon. If so, uninstall phonon, then vlc. Next, try to update normally with the -Syu flags. This usually works; then just re-install vlc after the update.

pacman key failures

Sometimes a pacman update fails due to key errors. All packages have PGP keys now, and pacman occasionally won't update due to some error in that database or so forth. The usual cure:
# pacman -Sy archlinux-keyring
...after which try the more nuclear option
# pacman-key --refresh-keys

There is also a problem that occurs when the fucking keyserver (right, you have to worry about keyservers now) doesn't have the key, causing an error. This typically shows up in yay operations.

$ yay -Syu
==> PGP keys need importing:
-> E0C4CDDB8A6B4FDA4F8468E024ADFAAF698F1516, required by: pgadmin3
==> Import? [Y/n] y
:: Importing keys with gpg......
gpg: keyserver receive failed: General error
==> Error: Problem importing keys

And it's at this point that a person has to import the key manually; see the Arch Linux key list.
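Manual import means pulling the fingerprint shown in the error into your user gpg keyring, optionally naming a keyserver that actually has it (the keyserver below is just one common choice):
$ gpg --keyserver keyserver.ubuntu.com --recv-keys E0C4CDDB8A6B4FDA4F8468E024ADFAAF698F1516
... then re-run the yay command.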

yay issues

Sometimes after an update, you want to use yay....

$ yay -Syu
yay: error while loading shared libraries: libalpm.so.12: cannot open shared object file: No such file or directory

This page explains all the steps but, in short...

# pacman -Rs yay
$ git clone https://aur.archlinux.org/yay.git
$ cd yay
$ makepkg -si

The "i" flag prompts an install when the build completes (it uses pacman -U), so just enter your password when requested. Yay should work thereafter. You can check the version with the "--version" flag. To get rid of the build directory without a MILLION confirmations...

$ cd ..
$ rm -rI yay

Thursday, March 8, 2018

[solved] USB drive mounting as read-only

1. 80% fix

# pacman -S ntfs-3g

2. 10% fix

Occasionally the FS is correct, but a file corruption causes the dirty bit to be set. File splice or other permission errors will occur when attempting to copy. Once that happens, you gotta fsck the drive -- but what is the drive reference? Plug in the drive and then...

# dmesg |tail

...then take the resultant drive number (eg, /dev/sdc1) and...

$ dosfsck /dev/sdc1

...or if not VFAT

$ fsck /dev/sdc1
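If the only complaint is the dirty bit, the non-interactive repair flag is usually enough (-a tells dosfsck to fix what it safely can on its own):
$ dosfsck -a /dev/sdc1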

3. 10% fix

This one is really annoying and began in 2020 with the latest versions of udiskie. Udiskie mounts the USB into /run/media/foo/ using the full UUID of the USB, and it's root-only. It looks like problem 1 above, but a check of the logs shows it's attempting to mount an ext2 fs using ext4. Udiskie no longer mounts ext2 natively, so it "mounts ext2 using the ext4 subsystem," which it apparently will only do read-only. This pisses people off.

$ journalctl -r
EXT4-fs (sda1): mounting ext2 file system using the ext4 subsystem
EXT4-fs (sda1): warning: mounting unchecked fs, running e2fsck is recommended

...but yet you know it's an e2 fs...

$ df -Th
/dev/sda1 ext2 3.6G

...so WTF?

[Fail 1] doubled down on the fs type...

$ mke2fs -t ext2 /dev/sda1

[Success] After 4 days wasted, I gave up and put a VFAT fs on it. I mean, udiskie is going to mount ext2 with ext4 no matter what, and I sure AF am not going to format a USB drive, or anything else under the sun, with ext4. So just format with something udiskie handles cleanly, like VFAT.

$ mkdosfs /dev/sda1

Problem solved. Mounts and reads as user.

HOWEVER... if you go the VFAT route, be sure to run either dosfsck or fsck.msdos on the drive now and then, or you may further corrupt it.

Wednesday, January 10, 2018

Latitude D520 Arch install

Problem: your grandparent or a teacher's assistant needs a workaday laptop, but you're not a first responder, professional athlete, or investment banker. If you can carry one (5 lbs), 2007 Latitude D520's are available on eBay for about $60. They have 64-bit, 2-core processors. Load 'em up with 2 gigs of RAM, Arch Linux, and IceWM: within an $80 budget, you've got a laptop that'll do common functions without much lag. It'll show movies and edit photos and audio. High-CPU tasks like rendering Blender output or other video editing take ages (so just avoid video editing).

CD (10 mins)

Download an ISO and check the MD5. Put in a fresh disc. I found it's best to burn as root -- some systems bark at cdrecord at user level, even with group permissions configured. I typically add "-eject" as a simple completion notification, but it's not needed.
$ cdrecord -scanbus
# cdrecord -dao -eject dev=0,0,0 install.iso
$ md5sum /dev/sr0

base CLI (runlevel 2) configuration :: 20 mins

There are no UEFI or other boot complexities on older systems, but if you have a newer system, get rid of UEFI. Partition the disk with fdisk using an MBR (DOS) disklabel. The only real question is whether to make multiple partitions; the instructions below use them. If multiple partitions aren't desired, just make two primary partitions in cfdisk -- a linux (83) sda1 and a linux swap (82) sda2 -- and jump from the step mounting on /mnt to the step with mkdir /mnt/etc.
# mkswap /dev/sdb3
# swapon /dev/sdb3
# free -m [check swap is on]
# mount -rw -t ext3 /dev/sdb2 /mnt
# mkdir /mnt/home
# mount -rw -t ext3 /dev/sda1 /mnt/home
# mkdir /mnt/boot
# mount -rw -t ext3 /dev/sdb1 /mnt/boot
# mkdir /mnt/etc/
# genfstab -p /mnt >> /mnt/etc/fstab
# pacstrap /mnt base base-devel linux linux-firmware
# arch-chroot /mnt
# ln -s /usr/share/zoneinfo/US/[zone] /etc/localtime
# mkinitcpio -p linux
# passwd
# pacman -Syu grub
# mkdir /boot/grub
# grub-mkconfig -o /boot/grub/grub.cfg
# grub-install /dev/sda
# exit
# reboot

useradd :: 5 mins

User 500, name "foo", home directory of "foo", using bash shell.
# useradd -G wheel,lp,audio -u 500 -s /bin/bash -m foo
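Don't forget to give the new account a password, or you won't be able to log in as foo:
# passwd foo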

aur :: 30 mins

Used to be so simple, so I'm including it again here in case it returns some day:
# nano /etc/pacman.conf
[archlinuxfr]
SigLevel = Never
Server = http://repo.archlinux.fr/$arch

# pacman -Sy yaourt
However, even in the past, one had to have base-devel installed and add oneself to the sudoers file. Unfortunately, editing sudoers is done through the visudo command, which uses those stupid vi commands:
# visudo
foo ALL=(ALL) ALL
However, yaourt hasn't been maintained since 2018, so the (only) way to get it is to build it from scratch out of AUR source, as in olden times. The AUR website has the source. Be sure you've got wget installed (# pacman -S wget), retrieve the package, untar it, turn it into a pacman-installable package ("makepkg"), then install it using pacman with the "U" flag, which has pacman looking in the current directory instead of out over the Net. Prior to building any AUR helper (yaourt, aurman), you'll first need to build "package-query", also from the AUR.

There is also a Wikipedia page and various reddit conversations. In spite of its weaknesses, I continued to use yaourt. I downloaded aurman but was unable to compile it.
$ mkdir pkgs
$ cd pkgs
$ wget https://aur.archlinux.org/cgit/aur.git/snapshot/package-query.tar.gz
$ wget https://aur.archlinux.org/cgit/aur.git/snapshot/yaourt.tar.gz
$ tar xzvf package-query.tar.gz
$ cd package-query
$ makepkg -s
# pacman -U package-query-[version].pkg.tar.xz
And then the same with yaourt thereafter.

dhcpcd timeouts/limit journalctl size :: 5 mins

If you're having dhcpcd timeout problems, they can show up in pacman and curl, and are usually due to ipv6. This ipv6 request mod has to be re-applied after every dhcpcd package update.
# nano /etc/dhcpcd.conf
# custom to stop ipv6 service request timeouts
noipv6rs
Systemd will log GBs and GBs of data if not limited:
# nano /etc/systemd/journald.conf
SystemMaxUse=200K
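That only caps future growth; to trim journals already on disk, journalctl can vacuum them down to a target size:
# journalctl --vacuum-size=200K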

X install :: 20 mins

These Dells have Intel's integrated 945GM graphics; the correct xf86 driver is xf86-video-intel, and video acceleration goes through VA-API (libVA).
# pacman -S xf86-video-intel
This driver now uses DRI3 as the default Direct Rendering
Infrastructure. You can try falling back to DRI2 if you run
into trouble. To do so, save a file with the following
content as /etc/X11/xorg.conf.d/20-intel.conf :
Section "Device"
Identifier "Intel Graphics"
Driver "intel"
Option "DRI" "2" # DRI3 is now default
#Option "AccelMethod" "sna" # default
#Option "AccelMethod" "uxa" # fallback
EndSection
# pacman -S xorg-server xorg-apps xorg-xinit xorg-xrandr
If you scroll down this Arch page discussing Xorg, we can see that we'll want mesa for OpenGL and lib32-mesa for older apps. Also, Intel chips, as we know (scroll down to about item 13), do not support VDPAU, viz:
"Intel Embedded Graphics Drivers do not support VDPAU. VDPAU stands for video decode and presentation API for UNIX*. VDPAU is an open source library and API originally designed by NVIDIA that provides an interface to support hardware-accelerated video decode."
... and so Intel sez libVA is correct. More specific to Arch, there's additional information in their video acceleration page, if you like to read. The only problem with VA-API here is that it can't handle MP4 and Flash containers, but it does all other common formats and codecs, including H264 and the newer VP8 and VP9. I just use MKV and AVI containers and set the VLC acceleration to VA (instead of its default VDPAU). In spite of inefficiencies on the Intel hardware, some may wish to overlay VDPAU functionality onto their Intel chip, which is an installation beyond this post. If a person does that, any mistakes will keep X from working properly -- no harm, just revert to runlevel 2 and reconfigure until Xorg is working well. FYI, one of the tweaks I've seen for VDPAU overlaid onto VA is adding "export VDPAU_DRIVER=r600" in one's ~/.xinitrc file. Anyway, back to pure libVA...
# pacman -S libva libva-intel-driver libva-utils libva-mesa-driver
... then check the install with "$ vainfo".

window manager :: 10 mins

On an old system, I don't waste memory on display managers; instead I log in and "startx" from runlevel 2. I like Ice Window Manager, a light interface with simple text configuration, wallpaper, and menu files (look inside ~/.icewm). It's efficient on older systems: perhaps 150M of usage after logging in, connecting to the network, and starting X. In Arch, the template files are inside /usr/share/icewm/, including the themes. See the main Arch wiki page.
# pacman -S icewm
$ cp /etc/X11/xinit/xinitrc .xinitrc
$ nano .xinitrc
exec dbus-launch icewm-session
$ startx
I also looked here to get the names of additional drivers, for example to solve the pesky touchpad problem: I couldn't stop the Dell touchpad with synclient TouchPadOff=1 until I installed xf86-input-synaptics (# pacman -S xf86-input-synaptics).

wifi digression :: 60 mins

Some of these old Latitudes have the dreaded Broadcom 43 series card. For example, if users run lspci and the wifi readout line is this or similar...
Network controller: Broadcom Limited BCM4311 802.11b/g WLAN (rev 01)
... then it's unlikely the typical fix for older cards, # pacman -S wireless_tools, will have any effect. It's also unlikely anything will appear in either of these two verification steps...
# iw dev
# cat /proc/net/wireless
If this is the case, the easiest move is installing yaourt and building a b43 driver, but take note:
"BCM4306 rev.3, BCM4311, BCM4312 and BCM4318 rev.2 have been noticed to experience problems with b43-firmware. Use b43-firmware-classicAUR for these cards instead."
During that b43-firmware-classic (AUR) install, the following warning was displayed...
Please add $VISUAL to your environment variables
for example:
export VISUAL="vim" (in ~/.bashrc)
(replace vim with your favorite editor)
...and you may also encounter
~/.config/aurvote must have username and password. Run: aurvote --configure
...if you installed the yaourt rating app but hadn't configured it. You can use any username and pass you like. Reboot after this install, and something like this is normal:
# iw dev
phy#0
Interface wlan0
ifindex 3
wdev 0x1
etc..

QT or Gtk

I try to stick with one or the other to keep a smaller install and shorter updates. Sometimes both become necessary. I prefer GTK (except gvfs), but VLC requires Qt. Qt is about 400MB and typically pulls in PyQt. But since I'm a fan of VLC... Qt became my baseline toolkit.
# pacman -S qt4
However, you're going to see that udiskie (to avoid gvfs) brings in about 80MB of shit, including basic Gtk.

sound

I avoid PulseAudio as much as I can. See my post from 2016. ALSA is now built into the kernel, so all that's required is alsamixer to control the sound levels (unmute, etc.):
# pacman -S alsa-utils
Done.
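A quick check that sound actually reaches the speakers (speaker-test is part of alsa-utils):
$ speaker-test -c 2 -t wav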

rc.local

A consolidated place for random startup shit you're too lazy to configure individually. It's like an .xinitrc for X, but for runlevel 3.
# nano /etc/rc.local
#!/bin/bash
wpa_supplicant etc
dhcpcd etc
exit 0
# systemctl enable rc-local.service should make it happen on the next boot, but you also have to create the service file before enabling it.
# nano /etc/systemd/system/rc-local.service

# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.

[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target