Monday, July 25, 2016

[Solved] Arch ALSA configuration (HDMI, nVidia PCIe card) minus PulseAudio

$ xrandr [get displays]
$ xrandr --output HDMI1 --mode "1280x800"
$ xrandr --output HDMI1 --off

I currently maintain an unlucky system with respect to HDMI audio: two sound chips -- an nVidia PCIe video card (with its own sound chip) and an Intel sound chip on the motherboard. One must choose between connecting the HDMI cable at the motherboard and getting low-res motherboard video, or connecting the HDMI cable to the nVidia board. Of course, we connect the HDMI cable to the nVidia board for better video, but this then led (in my case) to conflicting motherboard and video-card audio functions. This post is the result.


The system OS is Arch w/Nouveau drivers (minus proprietary nVidia modules), and without PulseAudio. Eventually, I removed the Nouveau drivers and reluctantly installed the proprietary nVidia drivers: the Nouveau drivers could not seem to manage the audio on the nVidia card.

Avoiding PulseAudio was successful -- the steps for that begin below. The steps are entirely accomplished inside Runlevel 2/3 or a terminal, but I included one optional Xorg step at the bottom of the post.

HDMI complexity

HDMI sound requires at least four reconciled systems -- Xorg, ALSA, kernel modules and, of course, EDID -- and some of these have subfiles. That's too many variables for consistently easy sound configuration, and sound is already one of the more challenging aspects of Linux. One bit of Arch-specific luck: ALSA modules are nowadays compiled into the Arch kernel.

First steps

  • ALSA kernel modules present?
    $ systemctl list-unit-files |grep alsa
    alsa-restore.service static
    alsa-state.service static
    "Static", because compiled into the kernel. If not present, fundamental ALSA must be installed and the dynamic loading of its modules verified.
  • after the above, pacman in any additional ALSA packages from the repo (example after this list), avoiding the ALSA-PulseAudio libs (and other applications which automatically install PulseAudio) and, if feasible, also avoiding the ALSA-OSS libs. Generally, the fewer non-ALSA audio libs, the better.
  • verify (in an already-working system) that one has a good HDMI cable, then connect it between the nVidia card and the desired monitor/TV. Once we connect to nVidia video, we're committed to nVidia's sound chip (the HDMI cable carries both audio and video).
  • reboot
  • $ lsmod |grep soundcore :: soundcore must be present.
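For reference, a minimal ALSA userspace set from the official repos -- package names as of 2016; alsa-utils supplies the alsamixer and aplay tools used throughout this post:
# pacman -S alsa-utils alsa-tools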

module load order --> default audio card

In a multi-card/chip sound system, the order in which the kernel loads the cards' modules determines the default audio card. Specific to my system is an additional complexity: both the motherboard and nVidia audio chips load the same module, snd_hda_intel. This prevented me from determining which chip was the default with a simple module check...
$ cat /proc/asound/modules
0 snd_hda_intel
1 snd_hda_intel
Which card is the default "0", the motherboard chip or the nVidia card chip? Use ALSA's "aplay" to distinguish:
$ aplay -L |grep :CARD
default:CARD=PCH
sysdefault:CARD=PCH
front:CARD=PCH,DEV=0
surround21:CARD=PCH,DEV=0
surround40:CARD=PCH,DEV=0
surround41:CARD=PCH,DEV=0
surround50:CARD=PCH,DEV=0
surround51:CARD=PCH,DEV=0
surround71:CARD=PCH,DEV=0
iec958:CARD=PCH,DEV=0
hdmi:CARD=NVidia,DEV=0
hdmi:CARD=NVidia,DEV=1
hdmi:CARD=NVidia,DEV=2
hdmi:CARD=NVidia,DEV=3

ALSA has loaded the motherboard chip "PCH" as the audio default, which is not the NVidia audio we need (for HDMI audio). A third wrinkle is also apparent: the device designation "0" appears under both sound cards/chips. Device numbers are only unique within a card, not across cards, so the overlap isn't an error by itself, but combined with the dual use of snd_hda_intel it makes it easy to address the wrong chip. If desired, one could detour and verify IRQ issues with, say, lspci -v, or get more information about the module, eg....

$ modinfo -p snd_hda_intel

...and do conflict tests. In my case, I made note of the overlap and pressed on.

So far: No sound yet, since default audio is going to PCH, not to nVidia. However there is progress: we know ALSA assigned unique names for the two audio chips -- "PCH" and "NVidia". We know both audio chips rely on module snd_hda_intel, that the PCH chip is loading as the default, that the nVidia chip has four available devices (0,1,2,3), and that a good HDMI cable has been connected between the nVidia HDMI port and my monitor. How to control the load order (default)?

Some users control load-order problems by creating an /etc/modprobe.d/[somename].conf file, then setting options inside the file, or using creative blacklisting. This didn't work for me: both audio cards rely on the same module, snd_hda_intel, so blacklisting the module would block both cards (chips).

I tried however. I created the file...

# nano /etc/modprobe.d/customsettings.conf

...and attempted various "options" and "install" lines within it. Options and install lines have helped other users (see appendix at bottom), but none of them solved my problem. In the end, my only use for the custom file was to blacklist the Nouveau modules, in case I had not removed all of them properly after settling on the nVidia drivers:

# nano /etc/modprobe.d/customsettings.conf
blacklist nouveau

If kernel manipulation fails, we can turn to making modifications within ALSA. For ALSA customization, four configuration files are in play:

  • /etc/modprobe.d/(somename).conf :: kernel manipulation, as just noted above.
  • /etc/asound.conf :: in Arch, this file does not exist by default, but it can be created by apps (like PulseAudio) or by the user.
  • ~/.asoundrc :: home-directory version of asound.conf, if one does not desire global settings.
  • /usr/share/alsa/alsa.conf :: a default file, installed with ALSA. To narrow troubleshooting variables, I do any ALSA customization only in this default /usr/share/alsa/alsa.conf file. The first modification is to prevent the other two files above from being sourced (sketch below); some users simply delete /etc/asound.conf and ~/.asoundrc entirely, or rename them to backup files. We need ELD information before the remaining modifications, however.
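In my copy of alsa.conf, those two files are pulled in by a load hook near the top of the file; commenting out the lines that name them stops the sourcing. A sketch, assuming a 2016-era alsa-lib -- the exact hook layout varies by version:
# nano /usr/share/alsa/alsa.conf
@hooks [
    {
        func load
        files [
#           "/etc/asound.conf"
#           "~/.asoundrc"
        ]
        errors false
    }
]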

ELD verification

ELD information is critical to completing the HDMI modifications inside /usr/share/alsa/alsa.conf. Be sure the HDMI cable is connected between the NVidia HDMI output and the monitor you intend to use.
$ grep eld_valid /proc/asound/NVidia/eld*
/proc/asound/NVidia/eld#0.0:eld_valid 0
/proc/asound/NVidia/eld#0.1:eld_valid 1
/proc/asound/NVidia/eld#0.2:eld_valid 0
/proc/asound/NVidia/eld#0.3:eld_valid 0
Here, only ELD entry 1 on the nVidia card is validated for HDMI audio. However, these entry numbers 0,1,2,3 are not the actual hardware device numbers. We need hardware numbers for ALSA.
$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 7: HDMI 1 [HDMI 1]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 8: HDMI 2 [HDMI 2]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 9: HDMI 3 [HDMI 3]
Subdevices: 1/1
Subdevice #0: subdevice #0
So, for HDMI 1, the ELD-validated output to my monitor, the hardware address will be hw:1,7. Now I can return to /usr/share/alsa/alsa.conf and make the necessary HDMI modifications.
# nano /usr/share/alsa/alsa.conf
defaults.ctl.card 1 #default 0
defaults.pcm.card 1 #default 0
defaults.pcm.device 7 #default 0
This means I've swapped the default card from "0" to "1", and specified the ELD-authorized device by its hardware number, "7". Reboot, and unmute "S/PDIF 1" on the NVidia card in alsamixer.
For non-NVidia cards, the process is similar...
$ grep eld_valid /proc/asound/HDMI/eld*
/proc/asound/HDMI/eld#0.0:eld_valid 0
/proc/asound/HDMI/eld#0.1:eld_valid 0
/proc/asound/HDMI/eld#0.2:eld_valid 0
/proc/asound/HDMI/eld#0.3:eld_valid 1
/proc/asound/HDMI/eld#0.4:eld_valid 0
... and then determine which setting is associated with ELD 3, using "$ aplay -l" again, and then make the change inside /usr/share/alsa/alsa.conf.

Next steps

  1. reboot
  2. power on the TV/monitor and verify the volume is at an audible level
  3. $ alsamixer :: unmute "S/PDIF 1" on the NVidia card. F6 switches between cards.
  4. $ aplay -D plughw:1,7 /usr/share/sounds/alsa/Front_Center.wav
    Should hear "Center" from TV's speakers. Aplay won't do MP3's to my knowledge -- use this ALSA player with a WAV.
  5. enjoy. I must manually unmute S/PDIF 1 after each reboot, or modify some other file, but that's another day/process (one candidate fix below).
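One approach that may remove the per-boot unmute, though I haven't verified it on this system: after unmuting in alsamixer, save the mixer state so that the static alsa-restore.service can reapply it at boot.
# alsactl store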

troubleshooting - post alsa update

Following any system update that touches ALSA (eg, after pacman -Syu), sound may be lost. First, open alsamixer to verify nothing was re-muted. Second, the update may have overwritten /usr/share/alsa/alsa.conf. Re-enter the modifications and device settings in alsa.conf (described above), resave, and log out and back in, or reboot.
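A quick way to see whether an update clobbered the edits -- the grep should return the card number you set, not the stock "0":
$ grep defaults.pcm.card /usr/share/alsa/alsa.conf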

troubleshooting - vlc issues

1.) deleting ~/.config/vlc as a fix. When sound comes through fine from ALSA ($ aplay -D plughw:1,7 /usr/share/sounds/alsa/Front_Center.wav, as above), but VLC is soundless, try deleting ~/.config/vlc and letting VLC respawn the directory.

2.) no sound from media files with MP3 or AAC tracks. Movies and MP3s play with audio in other software, but there's no sound from VLC if the track is MP3 or AAC. The failure will appear to be ALSA's, eg, if you play the MP3 file with VLC from the command line, you'll see errors including...

alsa audio output error: cannot set buffer duration: Invalid argument

... yet MP3s play fine with other players. Without PulseAudio, those players also get their output through ALSA, so we know it's not ALSA, it's VLC. The fix for me has been the S/PDIF function, which sends audio directly to the device without decoding -- something the device can't always handle. Make sure it's unchecked in VLC's audio preferences.



Incidentally, below the lossless formats (FLAC and WAV), compression quality apparently ranks: Opus better than AAC, and AAC better than MP3.

Nouveau

One of the sound cards is on an nVidia graphics card; this is the optional Xorg step noted at the top of the post, fixing the DPI rather than trusting the monitor's EDID:
$ nvidia-xconfig --no-use-edid-dpi
$ nano /etc/X11/xorg.conf
Option "DPI" "100 x 100"
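For context, a sketch of where these land in the generated xorg.conf -- nvidia-xconfig writes the real section and identifiers, so the ones below are illustrative:
Section "Screen"
    Identifier "Screen0"
    Option "UseEdidDpi" "False"
    Option "DPI" "100 x 100"
EndSection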

appendix 1
possible /etc/modprobe.d/[somename].conf settings

At the kernel level, I was unable to do more than blacklist the nouveau drivers. I could have instead blacklisted the nvidia drivers, had I ever been able to activate sound under nouveau. However, many users have solved their configuration problems in this file using such lines as
install sound-slot-0 modprobe snd-card-1
install sound-slot-1 modprobe snd-card-0
or...
options snd-hda-intel enable=1,0
Note that, in this second example, the underscores in the module name are changed to hyphens.
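Another commonly cited line for two chips on one module -- untested here -- pins the probe order through the module's index parameter, so the card you want becomes card 0:
options snd-hda-intel index=1,0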

appendix 2

If a person has old 32-bit Windows programs and wants to run Wine, what are the concerns? You'll first have to uncomment the two multilib lines in /etc/pacman.conf to access 32-bit libs. Wine itself then becomes roughly a 600MB kludge of an install, due to all the attached 32-bit libs.
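The two lines to uncomment (the Include path shown is the stock one):
# nano /etc/pacman.conf
[multilib]
Include = /etc/pacman.d/mirrorlist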

Most importantly here, one of the 32-bit choices affects the nVidia card. During the Wine install, users must select either lib32-mesa-libgl or lib32-nvidia-libgl. I have the nVidia card in my system, so the nVidia lib seems like the safe choice. What about compatibility, though? So many things use Mesa.

Saturday, July 23, 2016

LaTeX -- updating the 2015 installation (TexLive)

TexLive is a large install, typically 4 GB. I keep mine in a home directory folder (easier to update and back up -- no root privileges necessary). Between 2015 and 2016, however, the internal structure of the update database apparently changed.

So, in 2016 (edit: also 2021), tlmgr update is sometimes finicky. A common failure is to see:

Unknown directive ...containerchecksum [etc]

This is a slight pain, so I've written the steps (for future reference). There are five steps:
1) DO
2) A
3) CLEAN
4) 2016
5) INSTALL

Next, update one's path statement in ~/.bashrc

Steps (1 hr, excluding download and burn)

  • Create a fresh home-directory folder, eg, ~/latex, and delete the one holding the last install. Frees about 4 GB.
  • Delete old ~/texmf and any ~/.tex* files
  • Download the latest iso from TexLive, about 3.7GB. New ones come out once a year, around April.
  • $ md5sum [nameofiso].iso
  • $ growisofs -speed=2 -dvd-compat -Z /dev/sr0=nameofiso.iso
  • Put DVD in, install...
    $ cat /etc/mtab |grep "TexLive"
    /dev/sr0 /run/media/foo/TeXLive2016 iso9660...
    $ cd /run/media/foo/TeXLive2016
    $ ./install-tl -no-verify-downloads



  • select item "D" and give the home directory folder to all variables: eg ~/foo/latex
  • select item "O" and give letter paper size, as opposed to A4, if desired
  • Wait an hour or two while all ~3500 packages are installed.
Below, some features of these two methods of install, DVD and Internet...

DVD (1 hr)

Not verifying the downloads ("-no-verify-downloads") is important even when installing offline from a DVD. Otherwise, random checksum failures will likely occur while installing one of the ~3500 packages, at which point the entire installation dumps without recovery options. There is nothing in the crude installer to, say, skip one or another package and retry it later. (After install, tlmgr can of course ignore specified packages, but that's after the installation is complete.) A second potential problem is heat from high CPU usage. A friend's 2008 Toshiba laptop would fail around package 2000 of 3500 with an overheat lock-up. I tried it also on a 3.4 GHz i7, and it maxed that processor throughout the install as well, although it could remain cool since it was a desktop with fans. Why the maxed CPU during a simple install? My guess is the TexLive installer is c. 1995 coding, with a lot of subsequent bolt-ons, in such a way that the kernel can't smoothly allocate resources. Just a guess.

Internet (2 hrs)

An Internet install runs cooler because it pauses between packages -- each package takes a few moments to download, as opposed to the near-instant draw from a DVD. The Internet install is launched from the small "install-tl" script, which downloads each package from a TUG mirror and installs them in sequence.

Post install

  • update PATHs in /etc/profile or, as I prefer, in ~/.bashrc (user permissions), eg.
    $ nano .bashrc
    export PATH=/home/foo/latex/bin/x86_64-linux/:$PATH
    export INFOPATH=/home/foo/latex/texmf-dist/doc/info:$INFOPATH
    export MANPATH=/home/foo/latex/texmf-dist/doc/man:$MANPATH
  • Logout, and log back in
  • $ tlmgr update --self
  • $ tlmgr update --all
  • $ texhash # typically only necessary following installation of a non-repo package, etc, but I prefer to be certain. Secondly, as noted on this great page, the way to determine which configuration file tlmgr is using,
    or even whether one has been created, is to
    $ tlmgr conf tlmgr
    Thirdly, a list of installed packages sent to text file:
    $ tlmgr list --only-installed > installed_texlive_packages.txt
  • Verify with a complex .tex file compile, eg the .tex for a book.
  • Smile. Go outside in the sun. Arguably, read the New Yorker.
If one has the time, one can also keep tabs on various fonts (scroll down to "updmap"), or install non-repo packages.

problems

There's a reliable internet connection, but...
$ tlmgr update --self
No connection to the internet. Unable to download the checksum of the remote TeX Live database, but found a local copy so using that.
I ran into that when my installation, including tlmgr, became about two years out of date. I could have downloaded a fresh install and then updated all the paths for it in /etc/profile, but there's an easier way: head to the tlmgr page and get the Unix script for the latest tlmgr, update-tlmgr-latest.sh.
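Running the script is then a single step -- a sketch, assuming the script sits in the current directory and your PATH already points at the existing install (it rebuilds only the tlmgr infrastructure, not the packages):
$ sh update-tlmgr-latest.sh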

Wednesday, July 20, 2016

UEFI Arch install (GPT, ESP) Single boot

My advice is not to do this. I convert everything GPT to MBR to avoid GPT labels and EFI, so that GRUB installs properly. So, if I run across a GPT-labeled volume with EFI, I go straight into cfdisk and blow out all the partitions. Then I gdisk that empty disk, hit "r", then "g", then write it with "w". Subsequently, fdisk can be used to write a DOS partition table, which will then show up in cfdisk as toggleable to "bootable" when I finally go to make the new partitions.
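The keystrokes, for reference -- this is an interactive gdisk session, not a script, and mirrors the steps just described:
# gdisk /dev/sdX
r    (open the recovery/transformation menu)
g    (convert GPT into MBR)
w    (write the new table and exit)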
UEFI means partition and boot issues (eg, how does one boot with no boot flag or MBR?) and possible boot-loader settings. This page addresses many ESP issues, and this page UEFI issues more generally (scroll to UEFI). Also, of course, there is this UEFI page.

There's a lot to consider.

The boot order is apparently: the UEFI firmware contacts an ESP partition (a new concept) and reads boot information from it. The kernel, in turn, exposes EFI-specific information as a filesystem, "efivarfs", mounted at /sys/firmware/efi/efivars.
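A quick sanity check that the installer actually booted in UEFI mode -- the directory below is only populated under UEFI (path as of 2016):
# ls /sys/firmware/efi/efivars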

disk 20 mins

You'll have to overburn the install CD to get 740MB onto it. Of the possible options, cdrdao cannot be used without a TOC file to direct it, and dd has its dangers, so I tend to rely on cdrecord. With cdrecord, overburning should be specified, and the random DAO errors that usually appear only at the end of the operation can be prevented by running it as root.
$ cdrecord dev=HELP [get a device list]
# cdrecord -overburn -eject -dao dev=/dev/sr0 ~/install.iso

Load the UEFI version of Arch, and then get information (eg, fdisk -l /dev/sdX) at the prompt. I will want GPT, not MBR, partitions: for partitioning, use cgdisk or gdisk instead of cfdisk.
cgdisk /dev/sdX
... and then be sure to assign the proper type codes. New: four-digit identifiers (eg, 8300 for Linux and 8200 for swap). It appears that type EF00 automatically sets the equivalent of a boot flag for its partition, so we only need one EF00 partition. This partition is said to need 250MB, so 1 GB should be plenty.

Here might be a list of partitions:
  1. 1GB - /boot (ESP) sdb1 - ef00
  2. 40GB - / (apps) sdb2 - 8300 (this will be the basic "mnt" foundation)
  3. 50GB - swap sdb3 - 8200
  4. 900GB - /home sda1 - 8300
For GPT partitions, one can use cgdisk for ncurses, or plain gdisk
# gdisk /dev/sdX
# n [add size, type]
# w [write it]
# gdisk -l /dev/sdX [list the partitions]
After partitioning, format them, eg, mke2fs -j /dev/sda1 (-j gives ext3, matching the mounts below). However, the ESP partition needs FAT32.
# mkfs.fat -F32 /dev/sdb1

basic configuration 20 mins

# mkswap /dev/sdb3
# swapon /dev/sdb3
# free -m [check swap is on]
# mount -rw -t ext3 /dev/sdb2 /mnt
# mkdir /mnt/home
# mount -rw -t ext3 /dev/sda1 /mnt/home
# mkdir /mnt/boot
# mount -rw -t vfat /dev/sdb1 /mnt/boot
# pacstrap /mnt base
# genfstab -p /mnt >> /mnt/etc/fstab
# arch-chroot /mnt
# ln -s /usr/share/zoneinfo/US/[zone] /etc/localtime
# mkinitcpio -p linux
# passwd
# pacman -Syu grub efibootmgr
# grub-mkconfig -o /boot/grub/grub.cfg
# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch
# ls -l /boot/EFI/arch [verify grubx64.efi landed here]
# efibootmgr -v [verifies all bootable efi's ]
# exit
# reboot

boot 90 mins

At reboot, one may get the basic GRUB2 command prompt, not the curses menu. This is "rescue mode" GRUB, with limited commands. If the command "boot" does not work, then one needs to return to the install disk, mount all the partitions again, and begin at the mkinitcpio step. Mkinitcpio cannot do its job of creating the RAM disk if it can't find a kernel, so it's a good litmus test that the kernel is properly installed and available. There should be some vmlinuz in /boot. So, if this is missing, reinstall the base group entirely with pacman, so the mkinitcpio step is 100% certain.
# pacman -S --needed base
If this step informs you that the system is up to date ("nothing to do"), then force a kernel reinstall, especially if you could find no vmlinuz in /boot. Everything must be 100% correct for UEFI to work.
# find /var/cache/pacman/pkg -name 'linux*'
Get the specific name of the linux package in there, eg. 4.4-4, or whatever. Then...
# pacman -U /var/cache/pacman/pkg/linux[specifics].pkg.tar.xz
# ls /boot [verify a vmlinuz present]
# mkinitcpio -p linux
# grub-mkconfig -o /boot/grub/grub.cfg
# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch
# ls -l /boot/EFI/arch [verify grubx64.efi is here]
# nano /etc/systemd/journald.conf
SystemMaxUse=200K
# exit
# reboot

This should get us to Runlevel 2. Network configuration (eg, hosts, hostname, wifi arrangements, etc) usually comes first, since pacman needs a working connection. Then perhaps pacman servers and keys, and then apps. For example, nearly every fresh install experiences dhcpcd timeouts because it can't obtain an IPv6 DHCP address from the router.
# nano /etc/dhcpcd.conf
# custom to stop ipv6 service request timeouts
noipv6rs
# useradd -G wheel,lp,audio -s /bin/bash -m foo
# pacman -S wpa_supplicant wireless-tools jre8-openjdk jdk8-openjdk
# export LANG=C

nVIDIA or Nouveau

Still the same old story -- nVIDIA won't release the code (2016). If using nVIDIA drivers, some have to blacklist the i915 driver. To blacklist, create any file with a ".conf" extension in /etc/modprobe.d/, eg /etc/modprobe.d/blacklist.conf, then
# nano /etc/modprobe.d/blacklist.conf
blacklist i915
To review all drivers prior to selection...
# pacman -Ss xf86-video |less
# pacman -S xf86-video-nouveau mesa-vdpau libva-mesa-driver
I took the vdpau since the card's an nVidia. Also, I ultimately went with the nouveau drivers so we'll see. I dunno. Now for 50MB of xorg
# pacman -S xorg-server xorg-server-utils xorg-xinit xorg-xrandr
I went entirely without the nVidia driver and so chose mesa-libgl when the choice appeared. We'll see. I can check it with, say,
# pacman -Qqs nvidia
... to see if I put nVidia in by mistake. The other thing is I have simple mouse and keyboard input, so I went with xf86-input-evdev instead of the more cutting-edge xf86-input-libinput. We'll see, again. Note that no /etc/X11/xorg.conf is automatically created these days (2016), but users can create one -- for example, when a specific display problem requires it, viz, specifying a BusID (sketch below).
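A minimal hand-written xorg.conf for that BusID case might look like the following; the Identifier and BusID values are illustrative, and the real bus address comes from lspci:
Section "Device"
    Identifier "Nouveau0"
    Driver "nouveau"
    BusID "PCI:1:0:0"
EndSection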

I like IceWM. It hearkens back to the old days with its use of only a couple of classic text config files, .xinitrc and Xresources, read at start-up, plus a couple of text files internal to IceWM, also read during startx: ~/.icewm/preferences and ~/.icewm/startup. Preferences is something like 1300 lines, with about 500 options. Love it.
# pacman -S icewm
$ cp /etc/X11/xinit/xinitrc .xinitrc
$ nano .xinitrc
exec dbus-launch icewm-session
$ startx
45MB of thunar and its dependencies. For USB drives and so forth, I don't like gvfs at all: to me it's a kludge, it includes PAM, I don't need a trash bin, and so on. So udiskie (116 MB) is a good substitute.
# pacman -S thunar thunar-media-tags-plugin thunar-volman udiskie dosfstools ntfs-3g parted
Let's do 2MB of xterm, 45MB of geany, 78MB of vlc (caveat: vlc is Qt[1], so not gnome (gtk) friendly, and if you go with Qt, you eventually end up with PyQt -- dunno why, but it's true; all told, ~400MB more for PyQt and Qt), 380MB of openshot (used ttf-bitstream-vera; note: smbclient is part of the openshot install - yuck), 112MB of evince, and 150K of recordmydesktop.

ffmpeg or libav

What to do here? Michael Niedermayer resigned from ffmpeg August 5, 2015. I am still most used to ffmpeg, and I know it typically has more features, if perhaps less stability. Ffmpeg, currently.

GRUB2 note

GRUB2 is one of the few linux boot managers that works with UEFI, but it is a horrible bootloader:
  1. will not locate kernels
  2. numbers disks differently than linux, eg, "hd". There is no intuitive correlation between the two schemes. If I have only two hard drives, mounted as three partitions, sda1, sdb1, and sdb2, these may show as hd0, hd5, hd6. That would be workable, but GRUB wants the partition number as well, and there are only two physical drives. You will have to map this by trial and error (see the sketch after this list).
  3. requires as much or more configuration than the underlying OS installation
  4. continuing on the configuration point, one cannot directly edit GRUB's configuration file, /boot/grub/grub.cfg
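A sketch of that trial-and-error mapping from point 2, done at the GRUB prompt itself (this is GRUB's own shell, not bash; the device list shown is illustrative):
grub> ls
(hd0) (hd0,gpt1) (hd0,gpt2) (hd1) (hd1,gpt1)
grub> ls (hd0,gpt2)/
boot/ etc/ home/ usr/ ...
grub> set root=(hd0,gpt2)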
[1] Nokia sold the Qt trademark to Digia in 2012

Monday, July 4, 2016

Lookout security for Android devices

Perhaps two weeks after activating a new phone with T-Mobile, an app called "Lookout" prompted me with a cell screen to subscribe to their service. The pop-up included a correct email address for me, pre-entered.

I found that odd since, although the email address Lookout suggested for me was correct, it was different than the email address attached to my T-Mobile account. Where did it come from? Typically these come from phone "permissions" (access privileges) to one's phone information (eg, email accounts), but we all know they are rarely "permitted" in the sense of a user wittingly authorizing information to the software. Rather, they are often pre-configured and difficult to unravel. That is, in such cases, one has to dig to determine, and still may never determine (or revoke), privileges granted by: a provider (eg. T-Mobile) service update, the phone manufacturer (eg. Samsung), the Android (OS) installation process, or the app (eg, Lookout). These iniquities become more galling when one's data security is supposedly being looked-after, particularly for a fee. One reasonably expects transparency.

A much smaller issue: beneath the email address was a blank for a password, without specifying whether it was for the email account offered or for a new Lookout account password.

Websites

Before entering any password, I navigated to the Lookout website. As I write today, I could find no information about the password sign-in or about Lookout's information access on devices. The potential billing tiers for Lookout appeared to be two -- Personal Premium ($3/$30) and Personal Free -- and both were buried in the site's "Contacts" pages. A difficult-to-find FAQ finally referenced T-Mobile accounts, $4, but nothing directly about partnerships, phone access privileges, etc. A third service, "Jump!", was referenced on the page, but without explanation or links.

Trying next the T-Mobile site: nothing about Lookout phone permissions, but there was billing information for a "Premium" Lookout account, $4 -- that is, more than accounts established directly with Lookout. Meanwhile, Jump! is either a T-Mobile phone insurance plan or an upgrade plan; I could not be sure which.

I'm supposed to feel secure about what again?

Inside the phone



Voila. The permissions somehow granted to Lookout (never wittingly given by me), were as follows:
  • Your personal information
    Add or modify calendar events and send email to guests without owners' knowledge. Modify your contacts. Read call log. Read terms you added to the dictionary. Read your contacts. Read your web bookmarks and history. Write call log. Write web bookmarks and history.
  • Your location
    Approximate (network-based) location, Precise (GPS) location.
  • Your messages
    Edit your text messages (SMS or MMS), Read your text messages (SMS or MMS). Receive text messages (SMS).
  • Network communication
    Full network access
  • Your accounts
    Add or remove accounts
  • Storage
    Modify or delete the contents of your SD card
  • Hardware controls
    Change your audio settings, Take pictures and videos
  • Phone calls
    Read phone status and identity
  • System tools
    Change network connectivity, Delete all app cache data, Disable your screen lock, Make app always run, Modify system settings, Prevent phone from sleeping, Retrieve running apps, Toggle sync on and off
In short, I'd never use a smart phone if it weren't for the fact that T-Mobile can't enable MMS on simple feature phones: I need MMS to communicate in the workplace. Obviously, Lookout smart phone permissions are not as comprehensive as what government agencies can gather or accomplish with one's phone (and other records), but it gives a person a thumbnail sketch. It might be easier if smart phones were directly issued by the government via some portion of our income tax revenue -- they've become little less than moving ID cards, with contact and quotidian information embedded.