Sunday, December 28, 2008

CUPS - Wifi Brother HL-2170W



basics (through USB)


Appears there are several steps.
1. Unbox and confirm printer works using test page.
2. The Brother drivers appear to come only in .deb and .rpm formats, so I verified the rpm program was installed (if not, netpkg rpm).
3. Download the rpm printer drivers from the Brother site, both the cupswrapper and the lpr drivers.
4. Install the lpr driver
# rpm -ihv --nodeps brhl2170wlpr-2.0.2-1.i386.rpm
5. Install the cupswrapper
# rpm -ihv --nodeps cupswrapperHL2170W-2.0.2-1.i386.rpm
6. Verify these have properly installed by finding them in the results from this command:
# rpm -qa
7. Turn off the printer and connect it with USB. Turn the printer back on.
8. Add printer to CUPS
# lpadmin -p Brother -E -v usb:/dev/usb/lp0 -m HL2170W.ppd
9. Restart CUPS
# service cups restart
10. Print a test page; it should work.
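To double-check from the command line (assuming the queue name "Brother" from step 8):
# lpstat -p Brother
# lp -d Brother /etc/profile
lpstat should show the queue enabled, and lp sends an arbitrary text file through it as a test job.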

Saturday, December 27, 2008

XFCE4 start-up actions

I was recently at my folks' for the holidays and plopped in Zenwalk 5.2 on an old system they have there. One thing I wanted Xfce4 to do when Mom logged in was to run
$ xgamma -gamma 0.7

startup


To solve this, I created a .desktop file (I named mine xgamma.desktop), made it executable (chmod 755 xgamma.desktop), and placed it in the xfce autostart folder (~/.config/autostart). This format for the .desktop file worked the first time:
[Desktop Entry]
Encoding=UTF-8
Version=1.0
Type=Application
Name=xgamma
Comment=takes screen to .7 bright
Exec=xgamma -gamma 0.7
StartupNotify=false
Terminal=false
I then logged out. After logging back in, the screen went to 0.7 brightness automagically.
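Condensed, the steps were something like this (file name per the example above):
$ mkdir -p ~/.config/autostart
$ cp xgamma.desktop ~/.config/autostart/
$ chmod 755 ~/.config/autostart/xgamma.desktop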

Friday, December 26, 2008

CUPS - usb HP DJet F300

(HP a1350e)
    Zenwalk 5.2
    2.3G Athlon 64 X2 dual-core processor
    400 MB RAM
    HP Deskjet F300 (USB)

CUPS, good documentation here.

Sketchy CUPS Webtool


I was at my folks' house recently for the holidays and installed Zenwalk 5.2, a Slackware-based OS that's pretty light. All went well until I attempted to add an HP Deskjet F300 All-In-One. The HP website says support only begins with the F310. I figured I could get a workaround going.
1. In spite of (significant) password efforts, you will likely find that the webtool at http://localhost:631 simply rejects any and all passwords, no matter what, when, or why, and will deny you adding a printer, always waiting until the final step of a lengthy six-part process to do so. After a couple of lost days working with /etc/cups/cupsd.conf and passwords, this of course becomes ballistically frustrating. The temptation to sledge-hammer the computer becomes more pronounced as one realizes the webtool's rejection means the printer must be added manually using lpadmin which, like any CLI tool, is powerful and simultaneously filled with the potential for damaging errors. One can easily spend days. But what are you going to do - you have to print, right?
2. The ppd file has to be locatable by lpadmin. Put the ones which need to be found in /usr/share/cups/model/. I renamed the original ppd to dj350.ppd for ease of loading and made sure it had the proper permissions for ppd files (644). Now try this:
#lpadmin -p HP300 -E -v usb:/dev/usb/lp0 -m dj350.ppd 
The printer added easily. If a mistake is made and a PPD file is installed that you later want to substitute with a different PPD file, put the new file in /usr/share/cups/model/ and run the following command:
#lpadmin -p HP300 -P another.ppd
To delete a printer entirely, use the command:
#lpadmin -x HP300
To make certain, you can always check the list of installed printers using:
#lpstat -v



Day 2 (Afternoon/Evening) More lpadmin and configuration files


It may be possible, though unlikely, to work via CUPS web interface. If not, look at the file /etc/cups/printers.conf and note its arrangement. Pay particular attention to this line.
DeviceURI usb:/dev/usb/lp0
This critical printer setting must be accurate.
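For orientation, a queue entry in /etc/cups/printers.conf looks roughly like this (a sketch of the CUPS format, not a verbatim copy of my file):
<Printer HP300>
Info HP Deskjet F300
DeviceURI usb:/dev/usb/lp0
State Idle
Accepting Yes
</Printer>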

test prints


Test prints are one thing that's easy from the CUPS web interface; although the command line is required to stop test prints, the web interface is an easy way to initiate them. While testing the printer configs, many test prints might be desired. If they don't print, they pile up in the queue and use resources re-attempting. One can't use the CUPS web interface to stop prints, so cancel them from the command line:
#lpstat -o
provides pending print jobs and the job number eg, "HP1100-1". Then
#cancel HP1100-1
will get rid of the job. Run "lpstat -o" again to verify, if you like.

xfce4 connection


To print from Mousepad and others that use X settings, xfce printing needs to recognize the CUPS printer. Try printing in Mousepad and see if the CUPS printer is available. If not, configuring xfce4 for CUPS will be necessary. This was simple for me: I went to the XFCE menu, then Settings -> Settings Manager -> Printing System. Once in Printing System, I selected the CUPS network printer and closed the menu. I then opened a Mousepad file to print; sure enough, the CUPS printer appeared in my options.

SNMP HPLIP


SNMP (Simple Network Management Protocol) is a powerful protocol originally designed to simplify network management. Many processes take advantage of SNMP functionality, and one of them is HPLIP.

If hp-setup doesn't work, then the road may be long. Try using SNMP to determine if the kernel can see the printer at the network address:
snmpwalk -Os -c public -v 1 ip.address.of.printer 1.3.6.1.4.1.11.2.3.9.1.1.7.0 

Per this HP troubleshooting site, the response should be something which shows the manufacturer; if it doesn't, SNMP may not be installed correctly. This means working with the /etc/snmp/snmpd.conf file.
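If the local SNMP install is the problem, a minimal read-only entry in /etc/snmp/snmpd.conf looks like this (net-snmp syntax; a sketch, not a line I needed on this machine):
rocommunity public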

Tuesday, November 25, 2008

usb storage - corrupted e2fs

I don't know how it happened, since I verify each time I disconnect my external 350G USB drive: I wait for hal to indicate "OK to disconnect" every time. Nevertheless, it's corrupted. I found the list of superblock backups easily
# dumpe2fs /dev/sdb1 |grep superblock
Then, I attempted a customary
# fsck -b 32768 /dev/sdb1
with a backup block from the list, 32768. However, several attempts netted the same response, that the device was busy
# umount /dev/sdb1
# fsck -b 32768 /dev/sdb1
fsck 1.40.8 (13-Mar-2008)
e2fsck 1.40.8 (13-Mar-2008)
/sbin/e2fsck: Device or resource busy while trying to open /dev/sdb1
Filesystem mounted or opened exclusively by another program?
Making sure, I ran
# umount /dev/sdb1
umount: /dev/sdb1: not mounted
But yet, again, if I tried to fsck, I got
# fsck -b 32768 /dev/sdb1
fsck 1.40.8 (13-Mar-2008)
e2fsck 1.40.8 (13-Mar-2008)
/sbin/e2fsck: Device or resource busy while trying to open /dev/sdb1
Filesystem mounted or opened exclusively by another program?
What was going on? I verified the blocksize
# dumpe2fs /dev/sdb1 |grep -i "block size"
dumpe2fs 1.40.8 (13-Mar-2008)
Block size: 4096
and proceeded with
# mke2fs -S -b 4096 -v /dev/sdb1
to restore the superblocks without touching the data. Unfortunately, this move resulted in stale file handles and the drive wouldn't mount. Another move would be
# e2fsck -y -f -v -C 0 /dev/sdb1
# tune2fs -j /dev/sdb1
This makes the file system into an ext3 system, and the blocks are off if the drive started as ext2 in the first place. So I lost everything. Years of data, photos, passwords, tax records, the lot. If I knew a little more about e2fsck, I might have been able to get there, but all I could find was what to do with ext3 file systems.
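In hindsight, the first question should have been what still held the device open; fuser and lsof are the standard tools for that (a diagnostic sketch, not from the original session):
# fuser -vm /dev/sdb1
# lsof /dev/sdb1
A hal or automount process re-grabbing the partition is one possible culprit.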

Monday, November 24, 2008

voip - skype, others

Typically, I'm using Skype, and here's how I set it up.

skype
1. Check the dependency requirements at the skype site and install them from netpkg or installpkg. I get the static tar.gz from the site.
2. Untar the package. The executable is included; all the junk just needs to be copied to proper directories.
3. I don't run Skype as root, but it's necessary to use sudo or become root to make some folders and place some files into the proper directories.
# mkdir /usr/share/skype
# cp -a sounds langs avatars /usr/share/skype
# cp skype /usr/bin (the binary itself)
# cp skype.desktop /usr/share/applications
$ mkdir /home/[username]/.Skype
$ cp skype.conf ~/.Skype/

other voip

Sunday, November 23, 2008

postgresql - user level install

We want to set up a relational database and have browser access through localhost on our local machine. If successful, we can next learn cloud installs or connecting clusters. DBeaver is installed as our GUI admin tool. PgAdmin is good when running but often spawns lib errors. Regardless, the install portion of administration is CLI.

installation steps (2 hrs)

1. server (1 hour)

A fully initialized PostgreSQL setup has two parts, a database and a server. The server acts like Apache and manages connections. We start with that portion first; databases need to be created from within the server. The one exception is when we run "initdb": a default database "postgres" is created, so that admin tools have something to communicate with.

Pacman installs the PostgreSQL server into a "bin" directory. This is fine. But we want all configuration and data written into our home directory, and we don't want to have to become another user or an admin to run PostgreSQL. The idea is simple backup, portability, and user-level permissions; no root ownership or weird PAM errors.

1a. install server

# pacman -S postgresql dbeaver

The Postgresql install is about 51MB, not too bad. The service runs on port 5432 (change this in postgresql.conf), and is not run as a systemctl service. At the user level we can start it and stop it without becoming root.

1b. initialize default cluster

Like any other application, I want to access all PostgreSQL files and tasks at the user level: never want to become root or change to another user. Yet when pacman installs Postgresql, it creates a "postgres" group in /etc/group and does not append $USER (eg. 'foo') to this group.

When initdb initializes PostgreSQL, it creates a default cluster and DB named "postgres" and puts them in /var/lib/postgres/data. No! I want the files in the home "foo" directory, like any other app.

Solution: 1) Append ourselves ("foo") to the "postgres" group in /etc/group. 2) run $ initdb and specify our home directory as the data repository.

# nano /etc/group [add self to postgres group]
$ mkdir postgres
$ initdb -D /home/foo/postgres/data

1c. lockfile hack

There's a third permissions issue. When PostgreSQL runs at user level, it is thwarted attempting to write a session lockfile into the root-controlled /run/postgresql folder. Properly solving this error probably requires manipulating groups, but I simply changed the ownership of /run/postgresql to $USER. The problem disappeared.

# mkdir /run/postgresql/
# chown -R foo:foo /run/postgresql

1d. start server

With all the above accomplished, we can start the server as a user.

$ postgres -D '/home/foo/postgres/data'

1e. make permanent

We'd prefer not to have to specify the data directory each time we start the postgres server, so we want to set the path correctly. This initdb wiki notes that adding the "PGDATA" variable to the /home/foo/.bashrc file will make this permanent.

$ nano .bashrc
export PGDATA="/home/foo/postgres/data"

Log out and back in, then verify using "$ printenv PGDATA". Now we can start our cluster (server) with $ postgres
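With PGDATA set, the bundled pg_ctl wrapper should also work for starting and stopping (a sketch; the logfile path is arbitrary):
$ pg_ctl start -l /home/foo/postgres/logfile
$ pg_ctl stop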

1f. logs, tables, configuration

Unless modified in postgresql.conf, logs will appear in our home directory. Tables are in /home/foo/postgres/data. We can also configure the server, apart from the database and cluster, in /home/foo/postgres/data/postgresql.conf. This arranges which port we listen on, among other features. The security-related conf is pg_hba.conf. There are others. All of these control the server, not the databases themselves.

2. connect to defaults (10 minutes)

With a lot of work, our server environment is complete. The default "postgres" cluster and database were created by the initdb process. Let's leave them so we don't disrupt any administrative dependencies, but let's also move on to creating our own databases.

From the CLI, there is plenty we can arrange without DBeaver. DBeaver kicks butt once our databases are filled with data. For now though, to check our connection and create databases, we want to use the psql command. Our default username is "foo" when we connect, because that's the user that ran initdb. The default database "postgres", however, must also be named. So here is our connection command.

$ psql -U foo postgres

Again, there are three entities: the cluster and the default DB are named "postgres", and the username is our own home directory name.

3. create clusters and databases (whatever no. of hours)

Our server environment is complete and our plans for data begin. We might have created a UML schema or it might be trial and error against a CSV or something else.

We're connected to the database through psql, so now we want to make a cluster and a database.

BTW, there are many pages about dropping clusters and databases, and we can delete them from the data folder as well, so dropping is not that difficult. OK, but let's add; a minimal sketch follows.
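A minimal sketch using PostgreSQL's bundled CLI tools (the database name "mydb" is hypothetical):
$ createdb -U foo mydb
$ psql -U foo mydb
The same thing works from inside psql with CREATE DATABASE mydb; createdb is just a wrapper around that SQL.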

4. dbeaver

Now that we're at step 3, we can add DBeaver to visualize our setup. DBeaver is Java-based and talks to databases over JDBC; PostgreSQL ships a Type 4 (pure Java) JDBC driver, so it is inherently compatible. DBeaver connects as a user at startup, 'postgres' by default, so change it to your $USER, eg "foo".
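For reference, the JDBC URL DBeaver builds for a local PostgreSQL looks like this (port and database per the defaults above):
jdbc:postgresql://localhost:5432/postgres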

concepts

Roles and users

Somewhat confusing -- an imperfect analogy is that roles are governments and users are those operating within those governments. None of these will be created until we create a database. Just turning on the server ("postgres" command) does not create a database nor its associated roles.

postgres - users and roles (22:52) E-Multi-Skills Database Tutorials, 2020. Users can log in, roles cannot. A role is a function, a user is a client. DBeaver seeks to connect as a role; it operates as an agent, so it needs a role with significant privileges.
postgres - roles, schema, security (32:31) databasetorque, 2021.
PBX - true overhead costs (11:49) Rich Technology Center, 2020. Average vid, but tells hard facts. Asterisk server ($180) discussed.

wifi - ralink pci rt2600 drama

--(edited 20090421)--
I had occasion to reinstall Slackware the other day. Default distribution drivers for the ralink rt2600 are the rt61pci series drivers which seem to provide only flaky and, therefore, annoying connections.

drivers
First, retrieve the latest drivers from Ralink's linux page. For me, this was the 2008_0723_RT61_Linux_STA_v1.1.2.2.tar.bz2 driver. Unfortunately, the README was almost unintelligible, without even proper line returns. It gives only a portion of the necessary information, and hours are required to find the solution. Highly annoying. In addition to the drivers, both the kernel source and headers are required. So:
a. drivers
b. kernel source
c. kernel headers

file changes
Prior to compiling, there are some changes, at least for these 2008 and 2009 versions. After copying Makefile.6 to Makefile, change the CFLAGS statement around line 28 to:

EXTRA_CFLAGS+= $(WFLAGS)
If you don't want to change the Makefile, you can export KBUILD_NOPEDANTIC=1 just prior to compiling; one way or the other, this environment setting is necessary.

Another change, this one merely suggested, is near the end of the rtmp_main.c file. Here, we want to change as follows: from
static INT __init rt61_init_module(VOID)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,5,0)
   return pci_register_driver(&rt61_driver);
#else
   return pci_module_init(&rt61_driver);
#endif
}
to this:
static INT __init rt61_init_module(VOID)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,5,0)
   return pci_register_driver(&rt61_driver);
#else
   return pci_register_driver(&rt61_driver);
#endif
}


environment
As noted above and described here, if you don't change the environment setting in the Makefile, you'll need to add
$export KBUILD_NOPEDANTIC=1
as a command just prior to compiling. For systems with csh and tcsh environments, the command is apparently
$setenv KBUILD_NOPEDANTIC 1

putting it together

$ tar -xvjf 2008_0723_RT61_Linux_STA_v1.1.2.2.tar.bz2
$ cd 2008*
$ cd Module
$ cp Makefile.6 Makefile
$ chmod 755 Configure
$ ./Configure
$ export KBUILD_NOPEDANTIC=1 (if you didn't change the Makefile)
$ make

Root up

# mkdir /etc/Wireless
# mkdir /etc/Wireless/RT61STA
# cp *.bin /etc/Wireless/RT61STA (three firmware files)
# cp sta.dat /etc/Wireless/RT61STA (configuration file)
# cp rt61.ko /lib/modules/<kernel>/kernel/drivers/net/wireless
final steps
# nano /etc/modprobe.d/blacklist
blacklist rt61pci
blacklist rt2x00pci
blacklist rt2x00lib (save and close)

# nano /etc/modprobe.d/modprobe.conf
alias ra0 rt61 (save and close)

# depmod -a
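To verify without rebooting, something like this should load the module and show the new interface (assuming the steps above went cleanly):
# modprobe rt61
# iwconfig ra0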

Saturday, November 22, 2008

slackware - find and replace

I occasionally encounter compatibility problems with Internet Explorer and Firefox when writing site pages. When I do, finding and replacing one word across many files can be a problem. For example, I recently learned that Internet Explorer does not process the word "grey" - it apparently has to be spelled "gray". So how to find and replace all instances of "grey" with "gray" across all the directories on the site?

grep
Of course, finding text is no problem for grep. If it's only one or two files, I can have grep locate the file for me and change them by hand.
grep -lr 'text' *
where "l" provides the line number, "r" will recursively check all files, 'text' is the text I want to locate, and "*" means all files will be checked.

sed
To replace the text, we can use sed.
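For a single file, a GNU sed one-liner is enough (file name hypothetical):
$ sed -i 's/grey/gray/g' style.css
The -i switch edits the file in place; drop it to preview the output first.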

grep and sed script
To both find and replace the text, we need a mix of sed and grep together. A simple bash script follows that does it just fine.
#!/bin/bash

function search {
find $1 -type f \( -name '*php' \) -print | while read file
do
echo replacing \"$2\" with \"$3\" in $file
sed "s,$2,$3,g" < "$file" > "$file".tmp
mv "$file".tmp "$file"
done
}

function nothing {
echo "dir: $1 search string: $2 replace string: $3"
}

# A directory has been given. Search for files containing the term, and replace it
if [ -d "$1" ]; then
search $1 $2 $3

# A file has been given. Search the file for the term and replace it
elif [ -f "$1" ]; then
sed "s,$2,$3,g;" < "$1" > "$1".tmp
mv "$1".tmp "$1"

# A file to parse or test has been given. If parse, set the directory,
# then step through every file in the directory replacing search / replace pairs.
# Keep going until there are no more pairs.
elif [ "$1" == '-f' ] || [ "$1" == '-t' ] && [ -f "$2" ]; then
index=0
cat $2 | while read line
do
if [ $index -eq 0 ] && [ -d "$line" ]; then
dir=$line
elif [ $index -eq 0 ] && [ ! -d "$line" ]; then
exit
elif [ $index -gt 0 ]; then
findSt=`echo ${line%% *}`
repSt=`echo ${line##* }`
fi
if [ "$1" == '-f' ] && [ ! "$repSt" == '' ] && [ ! "$findSt" == '' ]; then
search $dir $findSt $repSt
elif [ "$1" == '-t' ] && [ ! "$repSt" == '' ] && [ ! "$findSt" == '' ]; then
nothing $dir $findSt $repSt
fi
let "index += 1"
done
else
echo "Search and replacer:"
echo "useage:"
echo "$0 [directory to search] [phrase to search for] [replacement phrase]"
echo "$0 [file to search] [phrase to search for] [replacement phrase]"
echo "$0 [-f|-t] [file to parse]"
echo "To parse a file the first line should be the directory, then each line after is a pair of terms"
echo "and replacements seperated by white space. If the -f os given, the file will be parsed. If -t "
echo "is given then the file will be read and the terms that would be replaced are output."
echo
fi

You can see at the top that this only works on php files, so I just changed the "php" to "css".
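Invocation then looks like this (script name hypothetical):
$ ./replace.sh ./site grey gray
or, with a parse file whose first line is the directory to work in:
$ ./replace.sh -f pairs.txt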

Thursday, November 13, 2008

slackware 12.0 - wpa, wicd

I'm using the Atheros AR242x 64 (5007 chipset) in a Toshiba Satellite running Zenwalk 5.2. So this is not technically a Slackware 12.0 post, but the Slackware foundation of Zenwalk means they are similar. I'll edit the specifically Slackware aspects of this post over Christmas break. For Zenwalk WPA, I mostly followed these excellent instructions.

Atheros AR242x 64 (5007 chipset)
The information here was helpful for understanding this newer chip. For a module, Zenwalk provides ath5k, but ath5k wasn't responding well to configuration attempts, so I turned to a Madwifi module. Incidentally, the Madwifi site also contains information on ath5k here, and it appears the ath5k module will eventually be effective. Currently however, the steps which worked with the Madwifi module were:

1. in /etc/modprobe.d/blacklist, blacklist the "ath5k" module
2. reboot and lsmod - make sure ath5k is gone
3. download madwifi-hal-0.10.5.6-r3861-20080903.tar.gz , or the newest one there, make, and install.
4. reboot again and lsmod
5. iwconfig ath0

WEP
WEP is trivial, merely requiring the two iwconfig commands "essid" and "key restricted", matched to whatever network I was using. Because I'm multihomed, the order of bringing up the interfaces was the only other pay-attention issue. If the interfaces are activated backwards in /etc/rc.d/rc.local, then dhcpcd apparently attempts to assign DNS to the interface on the LAN, rather than the one on the WLAN. blah blah blah.

WPA
The basis here was again gained at this Zenwalk wiki, but there are a couple of tweaks or clarifications. It appears that wpa_supplicant relies upon inserting the wext module into the kernel.
1. Using wpa_passphrase with the [essid] and [password] varies with the WLAN to which one wants to connect. The command is used similarly to the WEP iwconfig ath0 key restricted "xxxxx", which varies with each WLAN. The wpa_passphrase output, then, is not a key for the laptop I'm working from; it's a password key hashed for the WLAN I'm attempting to connect with. This means I have a different entry for each WPA WLAN I'm working on.
2. wpa_passphrase [essid] [password] > /etc/wpa_supplicant.conf is a brilliant way to start the initial conf file. And, as noted in the wiki, the file can be this simple and still work:

network={
ssid="BART Transit"
#psk="oct2008@rezt9bit"
psk=3ad964f16045787dec86a4730e9dec4bedaa9e24f2998eacfa363e80510e3393
key_mgmt=WPA-PSK
proto=WPA
}
3. The following command line from the wiki configured ath0 properly on the first shot:
wpa_supplicant -i ath0 -D wext -c /etc/wpa_supplicant.conf -B
The "-B" switch makes the program run as a daemon, and might not be necessary.
4. After the above, I had only to dhcpcd ath0 to get a valid connection.
5. One issue appears to be quickly enabling or disabling configurations for different WLANs via /etc/wpa_supplicant.conf. (see wicd below).
6. Permanence/boot - inserting the line from Step 3 above into /etc/rc.d/rc.local should work if I have an /etc/wpa_supplicant.conf file with a single network configuration in it; a sketch follows.
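The rc.local lines would be something like this (a sketch; interface name per the madwifi setup above):
wpa_supplicant -i ath0 -D wext -c /etc/wpa_supplicant.conf -B
dhcpcd ath0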

more wpa_supplicant.conf
Wpa_supplicant is not necessary for WEP security, since we can program the card directly with CLI commands to prepare WEP. However, if one chooses to run wpa_supplicant for everything, /etc/wpa_supplicant.conf files appear able to configure WEP and unencrypted connections, in addition to WPA connections. The conf files are useful for storing various wi-fi location configurations. Each connection requires a different conf file, eg /etc/wpa_supplicant1.conf, /etc/wpa_supplicant2.conf, etc. Apparently wicd can do this with a gui interface which manages the switching. But this also means the wicd daemon must run - additional memory usage.

A site with the wpa_supplicant.conf commands is this one, and a site that shows the different WLAN setups in the "conf" file is here (scroll down).

wicd
Wicd configuration files live in /etc/wicd/encryption/templates. A primer for converting wpa_supplicant.conf files to wicd configurations is at this site. Once the various templates are in place, it appears we can switch between them using the wicd gui, though it's unclear whether dhcpcd would need to be killed and restarted from the command line.

Sunday, October 26, 2008

Audio - SiS966 (Realtek ALC660-VD) Drama

A strange case appeared recently that might be helpful to someone. I updated ALSA on an SiS966 card and the sound died. Alsamixer was unmuted, etc.; /proc shows the card as a single unit with a single interrupt:
$ cat /proc/asound/cards
0 [SIS966 ]: HDA-Intel - HDA SIS966
HDA SIS966 at 0xfbfe8000 irq 22

Aplay shows more information about the card.

$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: SIS966 [HDA SIS966], device 0: ALC660-VD Analog [ALC660-VD Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: SIS966 [HDA SIS966], device 1: ALC660-VD Digital [ALC660-VD Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
So it appeared hw:0,0 was the analog device, and hw:0,1 was the digital version. It followed that, for the analog portion...
$ cat /proc/asound/card0/pcm0p/info
card: 0
device: 0
subdevice: 0
stream: PLAYBACK
id: ALC660-VD Analog
name: ALC660-VD Analog
subname: subdevice #0
class: 0
subclass: 0
subdevices_count: 1
subdevices_avail: 1
....and for the digital portion...
$ cat /proc/asound/card0/pcm1p/info
card: 0
device: 1
subdevice: 0
stream: PLAYBACK
id: ALC660-VD Digital
name: ALC660-VD Digital
subname: subdevice #0
class: 0
subclass: 0
subdevices_count: 1
subdevices_avail: 1
Why no sound when levels are tested in alsamixer? I took a look at a reliable page, alsa-hw, that has helped me in the past. From there, I checked to see which of the devices was the default:
$ aplay -L
default:CARD=SIS966
HDA SIS966, ALC660-VD Analog
Default Audio Device
front:CARD=SIS966,DEV=0
HDA SIS966, ALC660-VD Analog
Front speakers
surround40:CARD=SIS966,DEV=0
HDA SIS966, ALC660-VD Analog
4.0 Surround output to Front and Rear speakers
surround41:CARD=SIS966,DEV=0
HDA SIS966, ALC660-VD Analog
4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=SIS966,DEV=0
HDA SIS966, ALC660-VD Analog
5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=SIS966,DEV=0
HDA SIS966, ALC660-VD Analog
5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=SIS966,DEV=0
HDA SIS966, ALC660-VD Analog
7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
null
So it's clear the analog portion of the card is the default. Can we hear anything from the digital portion of the card?
$ aplay -D hw:0,1 alsatest.wav
Playing WAVE 'alsatest.wav' : Signed 16 bit Little Endian, Rate 44100 Hz, Stereo
No sound here on the digital, what about the analog...
$ aplay -D hw:0,0 alsatest.wav
Playing WAVE 'alsatest.wav' : Signed 16 bit Little Endian, Rate 44100 Hz, Stereo
...and here I have sound. Apparently only the analog portion will be available.
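A quick way to re-test output devices without a wav file on hand is speaker-test from alsa-utils (a sketch; not part of the original session):
$ speaker-test -D hw:0,0 -c 2 -t wav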
testing
To adjust microphone settings, a nice way to check the levels is to open two terminals. Record a couple of minutes using arecord in one terminal and use alsamixer in the other window to vary levels until it works.
$ arecord -d 60 -f cd -t wav -D hw:0,0 foobar.wav
This will record for 60 seconds, but we can change the number of seconds to any value we wish. I call out the changes into the microphone as I make them. To play foobar.wav and check the various settings, we just
$ aplay -D hw:0,0 foobar.wav
After the settings are correct, don't forget
# alsactl store

Thursday, October 16, 2008

Radeon 3100HD RS780MC drama

On install, Zenwalk loaded a stock vesa driver in /etc/X11/xorg.conf, providing resolutions of 800x600. Common sense and the command # gtf seemed to indicate higher resolutions were available. In /etc/X11/xorg.conf, I replaced "vesa" with, alternately, "ati", "radeon", and "radeonhd"; these did nothing but break X. I then relented and went with the proprietary driver "fglrx", described on most blogs as bloaty and slow, but operational. The driver was available here by selecting Linux x86_64 -> Radeon -> ATI Radeon HD 3xxx Series and pressing "go". One note about installing this - I received checksum errors when I attempted to run the installer directly; I had to explicitly invoke bash: # bash ati*. However, following this installation, I simply replaced "vesa" with "fglrx" in the Device section of /etc/X11/xorg.conf, rebooted, and everything worked. With the vesa-fglrx swap, the /etc/X11/xorg.conf file looks like this:
Section "Device"
Identifier "Videocard1"
VendorName "ATI Technologies Inc"
BoardName "Video device"
Driver "fglrx"
BusID "PCI:1:5:0"
Option "RenderAccel" "true"
EndSection
Adjustments to the fglrx module "Options" can come some other weekend; resolution and display appear sharp currently.

software rendering


It appears the problem with slow, boggy rendering continued after installation and a few different option tweaks. The card appears to default to software rendering instead of rendering with the considerable hardware memory available. The test program glxgears crashed, and the information tool glxinfo noted that direct rendering is not enabled. The best post I could find about this was here at phoronix. Accordingly, I changed xorg.conf to the following:
Option "KernelModuleParm" "string"
Option "KernelModuleParm" "agplock=0"
Option "KernelModuleParm" "agp_try_unsupported=1"
Option "KernelModuleParm" "debug=1"
Option "KernelModuleParm" "maxlockedmem=256"
where 256 represents the memory size in MB.
Unfortunately, this had no visible effect. Snooping with lsmod yielded nothing like "fglrx" or "glx", etc. Following up with find -name 'fglrx*', however, revealed two modules: fglrx_drv.so and fglrx_dri.so - that is, two "shared object" (.so) modules, but no "kernel object" (.ko) modules. That explained the lsmod blank, and it also rules out kernel object loaders such as modprobe, which could have been handy in rc.local. So what next? Is a kernel module available and preferable? Why the .so's? Checking dependencies with ldd glxgears does not yield much either, and I'm unclear if it's possible to depmod .so's to check their dependencies.

radeonhd


As noted above, the open-source radeonhd driver, which appears to be a .ko, is improving. The radeonhd site has information about this driver as well as some useful "phoronix forums" to assist. The source code for radeonhd is available by looking under item 6 on the wiki. The source was most recently updated on Oct. 13, 2008.

more fglrx considerations


The fglrx driver, although supplied by ATI, does appear to draw significant Googleland complaints for slowness. For me, it renders well, but very slowly: I'm experiencing update lines even scrolling through a simple text page in Mousepad, etc. So it appears something either a) very inefficient, or b) very underpowered, is taking place in terms of memory usage with the fglrx driver. From Google, it appears there are a few things to investigate: 1) Does fglrx load as a module? (lsmod |grep fglrx). Mine does not appear in lsmod, and this apparently means xorg.conf loads a substandard fglrx_drv module; lsmod failed to locate that module either. Odd. 2) Settings in /etc/X11/xorg.conf for the fglrx driver, under "Options", may be important. 3) Settings (I don't know the syntax or location) for whether the ATI card uses SIDEPORT mode (card uses its own memory), UMA mode (card shares system memory), or another unnamed mode where it uses a mix of SIDEPORT and UMA. One thing for sure, a lifesaver in these forum boards is fglrxinfo or fglrxinfo -v:
# fglrxinfo
display: :0.0 screen: 0
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.4 (2.1 Mesa 7.0.3)

glxinfo is also important. On my system, glx is not enabled, though this may be a different problem than the overall slowness of screen panning and scrolling that I'm experiencing. A good forum thread for these issues is here - not for solutions, but for the many aspects of the problem. I located a Zenwalk-specific ATI installation wiki which informed me of the aticonfig command. This command didn't seem to change much, other than appending an "Extensions" section to the end of /etc/X11/xorg.conf. That is, following the use of aticonfig, lsmod continues not to show any fglrx module.
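A quick check for this state (glxinfo ships with the Mesa utilities):
$ glxinfo | grep -i "direct rendering"
A "direct rendering: No" line means the DRI stack isn't engaged.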

Monday, October 13, 2008

screencasting - slackware

links: ffmpeg commands
Lee Lefever's videos are not screencasts, but they reveal the value of a simple idea. His guiding philosophical considerations are well-described in his blog. Educational screencasts do well to follow similar lines. Screencasts from teacher Joe Wood have a similar flavor to Lee Lefever's videos, and Wood clarifies his ideas well.

What about screencasts in Linux? My previous work making a required video for an education class left me feeling underwhelmed. It was initially raw AVI from the camera, but I muxed it to MPEG2. There appeared to be resolution issues - the picture wasn't as clean as I hoped. This time, I'm starting with screencasts. I'll eventually work back to recording and cutting video when I can easily manage screencasts. Another side of it is the computing power required for rendering -- I want to increase the efficiency there.

istanbul


First attempt was with Istanbul, which came pre-installed with Zenwalk 5.1. Something was wrong with the framerates, and all I was seeing was flashing screens, which appeared to indicate a large number of frame drops. Then I looked at this guy's post, and it appears there might be screen size as well as framerate issues. Istanbul supplies no sound; it must subsequently be muxed. What could possibly be more annoying?

recordmydesktop


Includes sound and screen, and outputs Ogg Theora as .ogv files. Seems the most useful, but the ability to record sound varies with systems. When it works, it works well. The .ogv file can be shifted to .flv format for YouTube uploads or a website. I used a script for doing so from here, though I'm sure it's also around at other sites. I had to modify the script slightly for CLI use, and libmp3lame.so.0 must be installed for the mencoder inside it to follow the script properly. I renamed the script ogv2flv.sh and it runs on the command line once libmp3lame.so.0 is installed:
$ ogv2flv.sh input.ogv
There is no config file for the CLI version of recordmydesktop, which means hideously long command line entries. Further, typing $ man recordmydesktop only produced "No manual entry for recordmydesktop". Nice going. It appears the best documentation is the sourceforge version, until that URL breaks. Strike two. However, a nicely sized screen which works well from the command line for capturing the browser without the status bar, but including the URL bar, is
$ recordmydesktop -x 14 -y 55 -width 988 -height 674 -fps 12 -o wobbly.ogv
This seems to default stubbornly to 44100 Hz and 2 channels in spite of my entering 1 channel, a 22050 frequency, and the device hw:0,0, so I eventually deleted these parameters from the line.

Recordmydesktop also has a Python-based GTK front-end available to those who are interested. This program does have a config file, .gtk-recordmydesktop, which appears to be an advantage over remembering the complex CLI commands necessary to avoid the inevitable "broken pipe" errors if you forget one parameter. But whenever I edited the config file by hand, the program overwrote my screen size settings the next time it opened. Strike three.

xvidcap


Solid, but a few quirks. 1) .xvidcap.scf is supposed to be available as a config file, but it apparently doesn't work well or is not read. Accordingly, right-clicking on the gui controls provides preferences. Not too bad, and they can be saved there too. 2) Have to adjust the box size each time. 3) On at least one occasion, it might have muted my microphone during start-up. 4) Sound is garbled unless using OSS emulation, aoss. So, I might start xvidcap to do 10 frames per second like this:
$ aoss xvidcap --fps 10
Once screencasting is complete, ffmpeg can shift the mpeg into a YouTube flv in a single command
$ ffmpeg -i test2.mpeg -ar 44100 test2.flv

sound levels


Microphone settings become significant in screencasting. Here are a couple of cards.

Realtek ALC660-VD
I set "Mic" as the capture source, and vary the relationship between Mic Boost, Digital, and the Capture bar. The settings which avoid clipping and feedback have been Mic Boost=33, Digital~65-70, and Capture~77-82. Digital seems to be the most important for hiss, and I trade off between Digital and Capture until the hiss disappears, while attempting to avoid the clipping distortion if Capture is set too high.

Friday, October 10, 2008

slackware - application list

A baseline group of applications we typically install, mostly to solve the problem of how to retrieve various types of media or other information quickly. Slackware includes several by default. The configuration file locations are not given. Whether they are command-line or gui apps is not always indicated.

ffmpeg: necessary media translator
streamtuner: media stream consolidation
recordmydesktop: screencasting. best with command-line settings.
audacious (netpkg): gui, audio, skins
eclipse (netpkg): C++/Java IDE
unison (netpkg): folder comparison and merging
cycas ($300): professional architectural program ($128 - basic)



database apps
Nola Pro: bookkeeping/accounting. MySQL, PHP, browser

Friday, October 3, 2008

fall 2008 - celly status

I've waited a while for new cell service. For several years I've run a Motorola V3 on Cingular -> SBC -> ATT service. In other words, I've used a basic second generation (2G) cellphone with 2G service. When I selected a 2G provider several years ago, I selected the GSM (ATT) version of 2G over the CDMA (eg. Metro PCS/Verizon) version of 2G because GSM is prevalent in Europe. When visiting Europe, I use my Motorola V3 relatively cheaply by purchasing a European plan with a SIM and swapping out my US SIM. Further, GSM phones hold about 75% of the worldwide market share.

3G

Last year the iPhone debuted - the first interesting third generation (3G) device, at least in my opinion. I've since watched the developments around 3G data/voice with the idea of jumping in at the right moment. The types of service are somewhat complex, and I made a little chart for this blog entry, simplifying what I understood about these services:

I assume, like most people, the reason I'm migrating to 3G is web and phone access in a single device. Until the iPhone, a person needed a (bulky) laptop for that level of service. I considered an iPhone when they first appeared last year, but the iPhone seemed too outrageously priced for the product and services package. More recently, I noticed Sprint "Wi-Max" service was scheduled for availability in August of 2008 on Nokia phones. That was interesting but, upon researching further, WCDMA devices look like a better choice than Wi-Max devices. WCDMA succeeds GSM (see chart above), and WCDMA devices accordingly are backwards-compatible with GSM. That plays a role when traveling to Europe, where cell phone service exists at different stages of development in different regions. Having a phone that is backwards-compatible with European (2G GSM) networks makes it possible to travel there with fewer phone hassles. I set my mind to finding WCDMA service.

3G drawbacks

As seen in the chart above, there are two versions of 3G: WCDMA (formerly GSM) and CDMA-2000 (formerly CDMA). If I understand correctly, geo-locating in the CDMA branch used GPS from the start. GSM users, on the other hand, turned off their phones to have locational privacy, and had to be triangulated via cell towers when their phones were turned on. Stated otherwise, it's beneficial for security agencies if the public migrates to CDMA. Luckily for them, the data transfer requirements of 3G require the transmission style of CDMA. Accordingly, as GSM providers attempt to provide 3G on previous GSM networks, GSM phones will have to morph into a version of CDMA phones for 3G availability. This means a degree of privacy reduction for GSM users. Secondly, although CDMA has a single advantage - it can manage more users - battery life suffers for managing this transmission methodology. Finally, Qualcomm holds the patents for both forms of 3G (WCDMA and CDMA-2000). Qualcomm is an American company, and we may assume their chips integrate CALEA backdoors or other monitoring options. Any backdoor is subject to exploits and so can be considered a privacy drop. All told, our initial 3G phones would appear to suggest decreased privacy in at least two ways, and decreased battery life.

TMobile

TMobile, the American wing of Deutsche Telekom, is of interest to me as a 3G provider. They currently sell both 2G GSM and 3G WCDMA service in the US, and are rolling out WCDMA in more and more cities. In the Bay Area, TMobile already (10/2008) has 3G. The phone which interests me, the G1, uses a SIM and is backwards-compatible with 2G GSM networks. According to one or two sites I've seen, the price is $199 via a pre-order that arrives FedEx by Nov 10. It requires a 2-year contract that appears to be $89 a month for 3G voice and data service. If that's correct, it's a significant improvement over iPhone prices, plus the G1 software (Android OS) is supposedly open-source in cooperation with Google. Pre-order link: http://www.t-mobileg1.com/

Saturday, September 27, 2008

zenwalk 5.2 in Toshiba Satellite (L305D-S5869)

2022 update

The original post from 2008 is at the bottom, but I just couldn't help wanting to keep this dinosaur alive. NB: You must get the replacement battery for the model S5869, which is 11.1 VDC, not the 10.8 VDC of most replacements. I could not get the laptop to boot with a 10.8 VDC battery.

battery and ssd

  • $0 CMOS battery: run down, but I don't have an adjustable soldering iron right now, so I installed ntp and ran
    # ntpdate pool.ntp.org
    right after connecting to my local router
  • $19 battery: an inexpensive Chinese "TA3533LH" Li-Ion, 5200mAh, 6 cell. However, when I received it, it was for the 10.8 VDC spec rather than the 11.1 VDC this model needs (see the NB above).
  • $15 hdd -> ssd: I'd read somewhere that SATAIII was backwards compatible with SATAI and II, so I simply bought a new drive and moved stuff over.
  • $0 OS: The latest version of Arch worked fine with the old hardware.

From 2008

These were on sale recently (9/25) at Fry's and seemed like a good deal, although it's understood the Linux factor might be difficult with ATI video and so on. Still, for $400:

15.4 WXGA
AMD Athlon 64 X2 Dual Core
1024 MB PC6400 SDRAM
Radeon 3100HD (RS780) w/VGA out
Atheros AR5007EG (wifi)
Realtek RTL8102E (ethernet)
Realtek ALC268 (sound)
120GB 5400 RPM SATA 2.5"
DVD RW, PCMCIA, SD port
3xUSB 2.0
No bluetooth or videocamera

Booted into pre-installed Windows Vista first. The return policy is 15 days for laptops and specifies software and hardware must remain unmodified. After verification of hardware features, I blew out the unbelievably bloaty factory load, dropped in a boot disk, formatted, and mke2fs /dev/sda1. Nice.

Slackware 12.0


I had a Slack 12.0 DVD gathering dust, and Slackware is my favorite. However, errors appeared on installation and it seemed an extensive parameter set was required to tame them:
#boot nosmp noapic irqpoll
To me, these problems meant that, if I continued with the Slack install on the newer hardware, I might be compiling and patching over the weekend, or that I should download and burn Slack 12.1 and begin there. I also had a copy of Slackware-based Zenwalk (formerly Minislack) 5.2 which possessed a newer kernel and a supposedly candy-coated installation process. Choices.

Zenwalk 5.2


Installed smoothly with only irqpoll needed as a parameter.

Atheros AR242x 64 (5007 chipset)
The instructions here were helpful for understanding this newer card. Zenwalk provides ath5k, but it wasn't going well. The Madwifi site has information on ath5k here, and it appears the ath5k module will eventually be effective. Currently however, the steps which worked were:

1. in /etc/modprobe.d/blacklist, blacklist the "ath5k" module
2. reboot and lsmod - make sure the ath5k is gone
3. download madwifi-hal-0.10.5.6-r3861-20080903.tar.gz , or the newest one there, make, and install.
4. reboot again and lsmod
5. iwconfig ath0

WEP and WPA
WEP is trivial, merely needing the two iwconfig commands "essid" and "key restricted" to make it work. WPA, on the other hand, is a separate post. It only took 10 minutes to configure, but the description is too long for this overview. If one has a distro which requires kernel modification for WPA, the process becomes longer. This site seems to explain it; I'm also currently building a chart for easier understanding based on this site.

screen brightness and gamma
The default settings for screen brightness and backlighting install maxed at 100%, and the Fn buttons don't seem to work except in Windows - battery life, screen life, eyestrain. Without going into X, one has command-line control over the brightness. Look in /proc/acpi/video/VGA/LCD/brightness to see the possible brightness settings for the card, such as 25, 50, 75%, and so on. I like 25%, so:
#echo -n 25 > /proc/acpi/video/VGA/LCD/brightness
It appears we cannot change the backlighting outside of X, though I haven't researched. Once in X however, open a terminal and select any number between 0.00 and 1.0 for gamma, eg:
xgamma -gamma 0.75

Realtek ALC268 Sound
Some duplicate alsamixer settings were seen. For example, alsamixer showed two microphone capture bars when only one channel was connected. I went to the Realtek downloads site, clicked a link there to the "HD Audio Codec Driver", and agreed to the licensing language. After download and unpacking, it turned out this was the latest release of ALSA; it basically installs the latest ALSA, but apparently with a newer HD driver. Alsamixer showed proper inputs following this ALSA update, and so, after setting levels, it was time for # alsactl store.

Radeon 3100HD RS780MC
Initially, Zenwalk loaded the vesa driver in /etc/X11/xorg.conf, providing resolutions of 800x600. Common sense and # gtf seemed to indicate higher resolutions were available. In /etc/X11/xorg.conf, I replaced "vesa" with, alternately, "ati", "radeon", and "radeonhd"; these did nothing but break X. I then relented and went with the proprietary driver "fglrx", described on most blogs as bloaty and slow, but operational. The driver was available here by selecting Linux x86_64 -> Radeon -> ATI Radeon HD 3xxx Series and pressing "go". One note about installing this - I received checksum errors when I attempted to run the installer directly; I had to explicitly invoke bash: # bash ati*. However, following this installation, I simply replaced "vesa" with "fglrx" in the Device section of /etc/X11/xorg.conf, rebooted, and everything worked. With the vesa-fglrx swap, the /etc/X11/xorg.conf file looks like this:
Section "Device"
Identifier "Videocard1"
VendorName "ATI Technologies Inc"
BoardName "Video device"
Driver "fglrx"
BusID "PCI:1:5:0"
Option "RenderAccel" "true"
EndSection
Adjustments to the fglrx module "Options" can come some other weekend; resolution and display appear sharp currently.

additional fglrx considerations for the Radeon 3100HD RS780MC
The fglrx driver, although supplied by ATI, does appear to draw significant Googleland complaints for slowness. For me, it renders well, but very slowly: I'm experiencing update lines even scrolling through a simple text page in Mousepad, etc. So it appears something either a) very inefficient, or b) very underpowered, is taking place in terms of memory usage with the fglrx driver. From Google, it appears there are a few things to investigate: 1) Does fglrx load as a module? (lsmod |grep fglrx). Mine does not appear in lsmod, and this apparently means xorg.conf loads a substandard fglrx_drv module; lsmod failed to locate that module either. Odd. 2) Settings in /etc/X11/xorg.conf for the fglrx driver, under "Options", may be important. 3) Settings (I don't know the syntax or location) for whether the ATI card uses SIDEPORT mode (card uses its own memory), UMA mode (card shares system memory), or another unnamed mode where it uses a mix of SIDEPORT and UMA. One thing for sure, a lifesaver in these forum boards is fglrxinfo or fglrxinfo -v:
# fglrxinfo
display: :0.0 screen: 0
OpenGL vendor string: Mesa project: www.mesa3d.org
OpenGL renderer string: Mesa GLX Indirect
OpenGL version string: 1.4 (2.1 Mesa 7.0.3)

glxinfo is also important. On my system, glx is not enabled, though this may be a different problem than the overall slowness of screen panning and scrolling that I'm experiencing. A good forum thread for these issues is here - not for solutions, but for the many aspects of the problem.

netpkg repos
Each time Zenwalk is released, the repositories update to the most current release so that, if one needs a certain package, they generally have to update the rest of their system to be in sync with it. Suppose I like my release and just want to keep it into perpetuity. As long as 1) I have the installation disk, 2) have netpkg'ed all the programs from that release I want (they download to /var/packages), and 3) have downloaded a copy of PACKAGES.TXT.gz, I can point netpkg to /var/packages and use the older release. A couple of simple modifications are required in two configuration files, since netpkg doesn't inherently recognize URLs of the type "file:///". This link describes the changes to the two files, /usr/libexec/netpkg-functions and /etc/netpkg.conf, which I repeat here.
  • /usr/libexec/netpkg-functions, at or about line 144:
    if [ $( echo "$url" | egrep -e "ftp:.*|http:.*|file:.*" ) ]; then
  • /usr/libexec/netpkg-functions, at or about line 205:
    if [ ! "$(echo $mirror | egrep 'http://|ftp://|file://')" ] ; then
  • /etc/netpkg.conf, add another line such as
    Internet_mirror = file:///var/packages

Put a copy of PACKAGES.TXT.gz into /var/packages, and you've got a self-contained distribution.

unsolved: multiple instantiation of mplayer, thunar, etc
Perhaps because of multiple processors, there's currently a problem when using DVDs. Multiple instances of related applications appear, eg 2 x MPlayer or 2 x Thunar. I'm working around this by disabling automatic HAL events for the time being and manually opening one instance of the application.

Saturday, September 20, 2008

Clickable JavaScript-less thumbnails

A project required some clickable JPGs, and I somehow became determined to attempt it without the boggy overhead of JavaScript. The plan was to generate thumbnails with a Bash script and then manipulate the rest with Cascading Style Sheets (CSS).
Note: At the bottom of this entry, however, I describe a lightweight and probably foolproof JavaScript solution I found. Additionally, in the search for the CSS solution, I ran across a notable introductory CSS tutorial here, and one that explains how to create rounded corners here.

thumbnail generation


In lieu of a Bash script, I downloaded a small thumbnail generator application, jpgtn. It compiled but threw errors on first use:
$ jpgtn -H -s 128 -d "./thumbs/" -p "tn_" *.jpg

Using strace, an unstated dependency on ld.so seemed to be the problem:
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
Although ld.so.preload was indeed absent, that mechanism had long been deprecated, and the real problem was operator error: I had specified "jpg" in the command, which limited jpgtn to processing only files with a lowercase ".jpg" extension. Other extensions -- JPEG, jpeg, JPG, or even TXT -- caused jpgtn to exit with an error. The program also exits with an error if a "thumbs" sub-folder has not previously been created. So, once I had verified or renamed all the JPGs with a lowercase "jpg" extension and created a "thumbs" subfolder, jpgtn processed the photos into "thumbs" very rapidly.

mouseover text


For ease, mousing over the thumbnail should provide a text description of it, but I wanted to do this without the pain of JavaScript, and within CSS. Several solutions appeared. The first good description of how to understand CSS mouseover options was here, but it appeared to deal mostly with links. A more complete description using span was found here, but this involved editing the style sheet in some way I didn't understand. The site that ultimately had a page-by-page solution was this one. The author apparently wished to insert some effects on his MySpace page; MySpace apparently allows CSS but not JavaScript.

Using the code from the site, I put a thumbnail on the page with the mouseover text "Statics Class 2005", in the following way:

<div class="popuptext0">
<center>
<a class= "hoverTest" href="stats.jpg">
<img src="tn_stats.jpg"> <br>
<span class="popUpSpan">Statics Class, 2005 </span>
</a>
</center>
</div>
<style>
div.popuptext0 {width:150px; height:auto}
a:hover img {filter:none;}
a img {border:0px !important;}
a.hoverTest {width:150px; height:auto; }
a span.popUpSpan {visibility:hidden;}
a:hover span.popUpSpan {visibility:visible; display:block; border:1px silver solid}
a span.popUpSpan {color:black; font-size:13px}
a:link, a:hover {text-decoration:none !important;}
</style>

This worked fine - it named the photo and recalled the larger photo when clicked. But what about having multiple thumbnails across a page, in gallery format, with text available? How to arrange that?

lightweight JavaScript solution


Eventually, I found a partial JavaScript solution which was lightweight, meaning it didn't require pages of code. Additionally, it automatically generated the thumbnails for me. I found a micro script at dynamicdrive which did the trick and didn't require many resources. Problem solved, pretty easily.

Friday, September 5, 2008

Slackware 12.0 - Wine Install

It appears Slackware doesn't include Wine. In early September of 2008, there was a dedicated Wine package available for Slack 10.1 here, but this package, made for 10.1, may not have worked with the 2.6.21.5 kernel in Slack 12.0. I thought it best to go with source and compile. The latest Wine package download info is typically found at this portion of the WineHQ site. I selected the latest release, which was Wine 1.1.4 in early September 2008.

dependencies

Dependencies are supposed to be the bane of installing Wine. I was unable to locate a dependency list at the WineHQ site that would preempt vigilance with configure. Nevertheless, it seemed it would be easy to watch configure, note misses, download and install them, and then re-run configure. It seemed that easy.

I untarred the package and proceeded to the first run of configure. Several apparent problems which didn't appear to be dependency problems displayed as configure scrolled its checks. These scrolling messages seemed to indicate various audio features were unavailable. Here are a few of the hundreds that appeared:
checking machine/soundcard.h usability... no
checking machine/soundcard.h presence... no
checking for machine/soundcard.h... no
When configure finished its run, it also provided a message which may have been dependency related:
configure: libcapi20 development files not found, ISDN will not be supported.

Additionally, I looked in include/config.h to check further for missing dependencies. All told, I didn't think I'd need the audio stuff that was scrolling or the ISDN support the message indicated, so I went on to the remaining standard installation steps:
$ ./configure
$ make depend
$ make
# make install
These steps seemed to go well, though in the back of my mind I had a feeling my dependency laziness might bite me later.

configuration


After installation, I began with $ winecfg and received another apparent error message:
$ winecfg
wine: created the configuration directory '/home/doofus/.wine'
Could not load Mozilla. HTML rendering will be disabled.
wine: configuration in '/home/doofus/.wine' has been updated.
Not sure what this meant either, but I kept going, unzipping and installing a Windows gradekeeper program. It started without any problems. So far, so good, though I may yet find that these HTML and audio messages imply some limitations.

Saturday, August 23, 2008

FAIL: Slackware 12.0 - NFS, NAS, rsync

Links:

I created an NFS Ethernet LAN with a 4-port router. One of the ports went to a NAS, another to a printer, a third to my desktop, and the fourth was left free. The point was a LAN which allowed me to use the NAS as a central sync point, something like a cloud server. First, I would sync with my desktop; second, when I arrived home with a work laptop, I could plug it into the open router port and sync that also.

I chose NFS because it's an older service built upon Sun's RPC services -- it's tried and true. Additionally, my systems currently are Linux systems, so why add the security and memory problems of SMB or Samba? Ultimately however, I was defeated by a sh*tty NAS enclosure.

Vantec LX NAS (NST-375LX-BK)

I purchased a Vantec NAS enclosure and ran into problems immediately, probably because I didn't research the product in advance. Thinking it would work generically, I installed a dormant 350GB PATA drive into the Vantec. The enclosure has USB and Ethernet ports, but it appears Vantec's brilliant designers made the Ethernet firmware exclusive to Microsoft protocols. This meant it was not native NFS and would instead require SMB (Samba). I was faced with installing Windows-compatible Samba garbage or using the USB connection. In addition to no network compatibility (except Samba), USB speeds are similar to molasses. What a load of crap. For those who want to jeopardize their box w/Samba, here is a forum link with the info on what to do.


    Edit: the power conditioning in this box is just a voltage regulator chip; even normal line transients will change the power delivered to the enclosed HDD and possibly zorch the data. After I gave up on this box as a NAS, I used it as an external back-up and lost about 100G of data, some of it irreplaceable, e.g. family photos. I now use only USB-powered externals.

    politics


    A short digression for a rant. Following my above situation, I looked for other options. There is a hideous lack of home-user NAS enclosures running NFS in the US consumer marketplace. In the US, to purchase a NAS that works with NFS, one must spend at business levels, about $1000. Interestingly, if one only needs a Windows enclosure, the cost is perhaps $400. Meanwhile, there are many options for inexpensive NFS-serving NAS enclosures in foreign markets, such as the UK. These foreign markets don't require Windows or Samba, so why is NFS so shut out in the US, especially when it's understood that Windows and Samba have security vulnerabilities? One has to ask oneself what marketplace and/or government influences would lead to such a situation. Interesting.

    NFS, RPC, portmap


    Without an NFS box, let's still take a look at what I would have done if I'd had one. As noted above, NFS lies on top of RPC services. RPC ports are not dedicated -- they move around -- so we need the port mapper as a connection tracker. The portion of NFS which sends commands and acknowledgments between server and client uses a normal dedicated port, 2049. But NFS uses undedicated RPC ports to move the data payloads, e.g. the powerpoint files, the text files, whatever we are moving. The portmap application is necessary for this portion of the transfer. If portmap crashes, data may be lost or unsaved. To see what ports portmap is currently tracking, use, e.g. ...
    $ rpcinfo -p
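    On a working host, the result is a table of registered RPC programs and their currently assigned ports. A rough sketch of the sort of output to expect; mountd's port is dynamically assigned, so the exact numbers will differ:
    program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   udp   2049  nfs
    100005    1   udp  32771  mountd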

    RPC functionality


    This tutorial is a good start to setting up the RPC functionality necessary for NFS.
    1. Verify the presence of /etc/hosts.allow and /etc/hosts.deny, which TCP wrappers use to control access to portmap; see the sketch below.
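    A minimal sketch of that pair, assuming a hypothetical 192.168.1.0/24 LAN that should be allowed while everything else is denied:
    /etc/hosts.allow
    portmap: 192.168.1.0/255.255.255.0
    /etc/hosts.deny
    portmap: ALL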

    portmapper functionality
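    On Slackware, the port mapper is started from an rc script. A minimal sketch, assuming Slack 12.0's stock /etc/rc.d/rc.rpc is present:
    # chmod 755 /etc/rc.d/rc.rpc
    # /etc/rc.d/rc.rpc start
    $ ps aux | grep portmap #verify it's running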



    NFS functionality


    1. In function, I've seen that domains have to be the same across the machines. For example, if one machine's /etc/HOSTNAME indicates "green.example.net", another machine on the same LAN should also carry the "example.net" domain, e.g. "blue.example.net". Mismatched domains make the situation very difficult.

    NFS from command line


    NFS is essentially a mount, but remote - we mount a drive, or folders from a drive, from another system on the LAN, and it appears as a drive on our current system, though labeled so we know it's an NFS share. We can mount it temporarily from the command line, as sketched below, or make it permanent as part of the boot process.
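    A sketch of a temporary mount, assuming the NAS had exported a directory /shared at 192.168.1.100 (both names hypothetical):
    # mkdir -p /mnt/nas
    # mount -t nfs 192.168.1.100:/shared /mnt/nas
    # umount /mnt/nas #release it when finished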

    NFS as an fstab line
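    The same mount made permanent is one line in /etc/fstab. A sketch with the same hypothetical server and export; hard,intr keeps the client retrying but still interruptible if the server disappears:
    192.168.1.100:/shared  /mnt/nas  nfs  rw,hard,intr  0  0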



    NFS directory mounting
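    On the serving side, the directories offered to clients live in /etc/exports. A minimal sketch with the same hypothetical names, restricted to the LAN:
    /etc/exports
    /shared 192.168.1.0/24(rw,sync,no_subtree_check)
    After editing, # exportfs -ra makes the server re-read the file.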



    security

    NFS requires various ports. The system should be operating smoothly before we attempt to add firewall functionality, because the firewall rules may affect those ports. Then, if anything goes wrong after the rules are added, we know it was on the firewall side, not in our NFS configuration.
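    A sketch of the kind of iptables rules involved, again assuming the hypothetical 192.168.1.0/24 LAN; since the RPC data ports move around, opening 111 (portmap) and 2049 (NFS) is necessary but not always sufficient:
    # iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 111 -j ACCEPT
    # iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 111 -j ACCEPT
    # iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 2049 -j ACCEPT
    # iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 2049 -j ACCEPT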

    Thursday, August 21, 2008

    Slackware 12.0 - Dual Homed


    Challenges can arise when a wifi NIC is handling wifi internet access and, in the same computer or "host", an ethernet NIC is operating in a wired LAN (e.g. with a printer and backup storage). Since the host exists in two different LANs simultaneously, it is "dual-homed". Obviously, two LANs means two routers.

    gateway vs. dhcpcd


    Consider checking email at Yahoo. The host PC makes this request to Yahoo, and Yahoo is outside the host's network, so the request needs to use the gateway and the name servers beyond the gateway. But when interfaces are being initialized at boot, the dhcp client, dhcpcd, overwrites the /etc/resolv.conf file that holds the IP addresses of these nameservers. So, if we bring the eth0 connection up after the wlan0 connection, our nameservers will be overwritten. To avoid this we do all these things:
    /etc/rc.d/rc.local
    ifconfig eth0 down #keep the wired NIC quiet while wifi configures
    ifconfig wlan0 up
    dhcpcd wlan0 #wifi lease first, so its nameservers land in resolv.conf
    sleep 3
    route #sanity check of the routing table
    sleep 5
    route del -net 169.254.0.0 netmask 255.255.0.0 dev wlan0 #drop any IPV4LL fallback route
    ifconfig eth0 up
    sleep 3
    dhcpcd -R eth0 #-R prevents resolv.conf overwrite
    sleep 3

    Sometimes (rarely) I have a "UG" default gateway for both wlan0 and eth0 even after these steps. I then remove the extra gateway at the command line:
    route del default eth0
    A quick verification with "route" shows me the second gateway is gone.
    Also, there are these files to consider...

    loading order


    rc.inet1 reads rc.inet1.conf, and loads rc.wireless, which in turn reads rc.wireless.conf. I don't worry about rc.wireless.conf but, in rc.inet1.conf, I look for the default gateway line and change it from "" to the IP of the gateway router, as shown below.
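    In /etc/rc.d/rc.inet1.conf the line in question looks like this; the address here is a hypothetical router IP:
    GATEWAY="192.168.1.1"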

    security


    This is the final step: we want to be sure everything is running before we lock down ports.

    Wednesday, August 20, 2008

    Slackware 12.0 - SiS190 eth0 module

    Links: sis191 instructions

    symptoms


    I connect a hard-wired Cat5 cable from my box (integrated SiS190 NIC) to a known-good Cisco router. Dhcpcd times out and assigns a default IPV4LL address, e.g. 169.254.126.11. The network is unreachable. Is it the cable, the router, the NIC, the router firmware, or the OS software?

    Unfortunately for my weekend, several hours pass narrowing the options. Eventually, it appears the loadable module for the NIC is from 5/2007 and does nothing. Isn't that nice? The process is explained relatively well here.

    Kernel recompiling was required. Since I had no Net on this box, I had to download kernel source on another box and sneaker-net it to the box with SiS191.

    Tuesday, August 19, 2008

    Slackware 12.0 - Safety Part 1: expectations

    high expectations, low knowledge, under-equipped


    We typical home pc users combine our computer security system, end-user applications, and our critical data into a single box. Further, we design this arrangement without IT or programming experience. How effective do you believe such a security arrangement is likely to be?

    Accordingly, seen in security terms, our home computer systems are an "opportunity". In large organizations, well-paid IT professionals design layers of dedicated security systems to protect workstation networks, and data is backed up and retained elsewhere. Since we don't have that option, let's at least think about our configuration.

    situational awareness - threats


    Given that our home systems present opportunities, are there threats which seek to take advantage of them? First, it appears one may assume governments, US or otherwise, can easily penetrate our leaky home systems, and perhaps they do so regularly. It's also likely a large number of civilian entities penetrate our home computer systems. Given the opportunity and the threat, and our typical lack of resources to build a $3-5K layered and ported system, about all we can do is make ourselves a less inviting target than other home machines out there. Let's suppose we're going to make the effort; where do we begin?

    step one - no security


    Linux security features sometimes function like shells around processes, sometimes around files, sometimes around ports, sometimes around data streams, sometimes around other things. The security system is complex. The dedicated security servers of the corporate world do this more easily since they operate with firmware or static-linked libraries and have no other applications or data to interfere. But we want applications and data on our home systems. This means we're bound to have permission or authentication problems when we configure complex security functions around desktop applications and web interfaces.

    Accordingly, I have one rule for a home system prior to activating iptables chains, tripwire, ssh, really any security function: the system has to work exactly the way it's supposed to work before the security is implemented, and hopefully before it's connected to any network. If we don't have desktop functionality first, is the permission problem we see when activating security related to the security configuration, or is it due to the user configuration? We have no way to cut the problem in half. Alternatively, we could bring up our security half before the desktop-operations half, but this means we would have to estimate user applications in advance. So the first step of (home) security is to configure the desktop system to run reliably, including all its peripherals: printers, scanners, cameras, audio inputs, and so on. Subsequently, we will activate our security and the net packages together, and adjust the mix when a desktop application ceases to work due to security layers. Ideally, we should test system applications after adding each security layer.

    Monday, August 18, 2008

    Slackware 12.0 - CUPS network printer

    Here's how CUPS worked-out on a couple different systems connected to a wired ethernet LAN, with a printer attached to the same LAN, and wifi access to the Web on each box (dual-homed boxes).

    slackware 12.0 - huge.s kernel patched for non-smp
    500 MHz PIII
    500 MB Ram
    --printer--
    HP1100 LaserJet
    Netgear PS 101 MiniServer
    Linksys BEFSR41 10/100 router
    RTL8139 type card

    What works: install


    For most of us, the CUPS webtool at http://localhost:631 simply never works. Not only this, but it waits until the final step of a lengthy 6-part process to note the failure. For me, it feels more efficient to add printers manually. Perhaps some day all the permissions will align with the stars and CUPS will work. At any rate, there is no limit to the number of printers we can manually add to our workstation.

    Lpadmin is the command, but first prepare the printspace:
    • add the print group ("lp", in many cases) to the groups of users I want to have print access. Users who are not in lp (or whatever the group is) cannot print or cancel print jobs without rooting-up. See the sketch after this list.
    • decide which ppd to use with the printer.
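    A minimal sketch of the group step, assuming the print group is lp and a hypothetical user doofus; usermod -a appends the group without disturbing existing ones:
    # usermod -a -G lp doofus
    $ groups doofus #verify lp now appears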
    With these two accomplished, try to add the printer:
    # lpadmin -p HP1100 -E -v socket://192.168.1.101/printer -m hpijs.ppd
    This may return the error that the hpijs ppd file, in this case hpijs.ppd, cannot be copied. Ppd's need to be in the correct format (uncompressed ppd) and directory (/usr/share/cups/model) if lpadmin is to find them and add them.

    Foomatic hides and zips their ppds. I finally located them in /usr/share/cups/model/foomatic-ppds/HP. The one I needed was HP-LaserJet1100.ppd.gz. I copied this file into /usr/share/cups/model , unzipped it: "gzip -d HP*", and renamed it hp1100.ppd for ease of loading. Then I ran:
    # lpadmin -p HP1100 -E -v socket://192.168.1.101/printer -m hp1100.ppd
    The printer was successfully added to the system. Suppose, however, I wanted to try several .ppd files to determine the one with the clearest print and functionality? Ppd files can easily be added and deleted. Simply copy the new .ppd file (let's call this new one eg, another.ppd) into /usr/share/cups/model/ and run:
    # lpadmin -p HP1100 -P another.ppd


    What works: configuration


    Ideally, again, one should be able to use the CUPS webtool at http://localhost:631 for configuration, just as we were supposed to be able to add printers with it. If the web interface is working, use it. If not, part two of our work will be to modify the file /etc/cups/printers.conf with the right settings and restart the CUPS server so it reads the file. In /etc/cups/printers.conf, one line has different possible structures depending on how the printer is attached. Let's say we've named our device "printername"; some possibilities are:
    DeviceURI socket://ip.address/printername #LAN
    DeviceURI socket://ip.address:ps101name #LAN version2
    DeviceURI parallel:/dev/lp0 #parallel port
    DeviceURI usb:/dev/usb/lp0 #usb port

    I experimented until I determined the correct one for the physical connection which, in my case, was on a LAN. I looked in my router and found my printserver received DHCP at 192.168.1.101.
    I navigated there (http://192.168.1.101) in Firefox, and found a GUI interface which allowed me to rename the printserver to any name I desired, but decided to leave it PS101. Accordingly, the successful syntax for my DeviceURI was this one:
    DeviceURI socket://192.168.1.101/PS101
    and my final working configuration file went like this:
    /etc/cups/printers.conf
    # Printer configuration file for CUPS v1.3.7
    # Written by cupsd on 2008-12-01 04:19
    <DefaultPrinter HP1100>
    Info HP1100
    DeviceURI socket://192.168.1.101/PS101
    State Idle
    StateTime 1228132708
    Accepting Yes
    Shared Yes
    JobSheets none none
    QuotaPeriod 0
    PageLimit 0
    KLimit 0
    OpPolicy default
    ErrorPolicy stop-printer
    </Printer>

    After all this, I ran
    # /etc/rc.d/rc.cups restart
    in order for CUPS to re-read the /etc/cups/printers.conf file with the new information. I was able to print from all programs.

    test prints


    While testing printer PPDs, I ran a lot of test prints. Some worked, but the ones that didn't print piled-up in the queue, and the CUPS interface at http://localhost:631 would never allow me to delete jobs. The CLI solution to delete print jobs (assuming CUPS is running; there is no "lpstat" w/out CUPS):
    # lpstat -o
    provides pending print jobs and the job number eg, "HP1100-1". Then
    # cancel HP1100-1
    will get rid of the job. Run "lpstat -o" again to verify, if you like.
    # lpstat -v [list of installed printers]
    # lpstat -d [names default printer, also can specify using this]



    xfce4 note


    To print from Mousepad and others that use X settings, xfce printing needs to recognize the CUPS printer. Try printing in Mousepad and see if the CUPS printer is available. If not, configuring xfce4 for CUPS will be necessary. This was simple for me: I went to the XFCE menu, then accessed Settings -> Settings Manager -> Printing System. Once in Printing System, I selected the CUPS network printer and closed the menu. I then opened a Mousepad file to print; sure enough, the CUPS printer appeared in my options.

    What is supposed to work (as opposed to all of the above)


    First, good CUPS documentation here.
    Essentially, one should be able to navigate to CUPS at  http://localhost:631 with Firefox and add printers, manage the print queue, or do other config changes. I never could get the parts to sync enough to add a printer, let alone administer them. Some considerations:
    • GUI authentication: CUPS apparently relies on PAM, but Slackware does not provide PAM.
    • parallel: HPLIP is supposedly needed for support if using a parallel port, but not for a network printer. I could not get HPLIP to see the parallel port on the legacy system.
    • parallel: snmp might be a factor
    In the name of gleaning understanding and advancing progress from previously attempted approaches or, at least, of not duplicating effort, here were some initial directions:
    1. configure the network and verify the dhcp-assigned print server and slackbox IPs. Let's call these IPs as follows:
      192.168.1.101 - Print Server
      192.168.1.102 - Slackbox

    2. check /etc/rc.d/ - verify rc.cups (and rc.hplip, if it were needed) are 755.
    3. is printer plugged in? PS101 plugged in?
    4. Firefox to http://192.168.1.101. Blue admin page. Change or leave the name of the PS101, but be sure to write it down - you will need it later in your /etc/cups/printers.conf file. Print a test page to see that the server is talking to the printer properly.
    5. Firefox to http://localhost:631 CUPS admin interface. Make sure it comes up.
    6. Check that tcp and udp ports 161, 162, and 9100 are not locked down.

    CUPS Webtool notes http://localhost:631


    1. Try adding the printer with the CUPS webtool. It will likely wait until the (frustratingly) last step in the process and reject your password. Since CUPS uses PAM for password authentication and Slackware doesn't use PAM, I made many, many attempts at editing the /etc/cups/cupsd.conf file to eliminate the password requirement, but the CUPS webtool always prompted me for a password. According to the information at this slack site, here is how one should proceed:
    A. Let's say your name is "doofus" and you declared your SystemGroup in cupsd.conf as "wheel":
    # lppasswd -g wheel -a doofus
    B. Check to be certain:
    # cat /etc/cups/passwd.md5
    doofus:wheel:01234567890abcde1234567890abcde12
    C. Now restart CUPS and go back to the webtool in Firefox and add the printer:
    #/etc/rc.d/rc.cups restart
    I never got it to work; the password was always rejected and I could never add a printer.

    SNMP HPLIP


    SNMP (Simple Network Management Protocol) is a powerful protocol originally designed to simplify network management. Many processes take advantage of SNMP functionality, and one of them is HPLIP.

    If hp-setup doesn't work, then the road may be long. Try using SNMP to determine if the kernel can see the printer at the network address:
    snmpwalk -Os -c public -v 1 ip.address.of.printer 1.3.6.1.4.1.11.2.3.9.1.1.7.0
    Per this site HP tshoot, the response should be something which shows the manufacturer; otherwise, SNMP may not be installed correctly. This means working with the /etc/snmp/snmpd.conf file.
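    If the local net-snmp side does need configuring, a minimal sketch of an /etc/snmp/snmpd.conf entry, assuming a read-only community named "public" restricted to a hypothetical 192.168.1.0/24 LAN:
    rocommunity public 192.168.1.0/24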

    Slackware 12.0 - install 1

    Intro


    Moving from Debian and Debian-based hybrids (e.g. Ubuntu) means a different initialization configuration. For me, this was for the better: the straightforward /etc/rc.d layout in Slack (and its hybrids, e.g. Zenwalk) is clean and almost intuitive. Another bonus was learning that Slackware doesn't use PAM. Let's look at the Slack 12.0 install.

    Older Machines


    1. boot with the non-smp kernel huge.s
    2. cfdisk, mke2fs, and mkswap per taste.
    3."setup"
    4. activate disks, including swapon. Skip formatting (done in #2)
    5. install from CD's or DVD but don't configure the network
    6. reboot, run the patch for non-smp.
    7. add group 1000 and user 1000, do a genfstab, etc.
    8. reboot, login, and further configure users, groups, fstab, inittab, profile, visudo, modules, other initialization.
    9. build nvidia drivers if necessary
    10. build wifi drivers if necessary (madwifi, ndiswrapper, other)
    11. a second look at modules to /etc/rc.d/rc.modules and any associated commands to /etc/rc.d/rc.local
    12. reboot, check dmesg | tail, verify wifi or eth0 (e.g. with "route"). To make permanent see here. If the box is dual-homed, alter network files appropriately.
    13. download and configure xorg.conf or copy premade to /etc/X11/
    14. copy /etc/X11/xinit/xinitrc to ~/.xinitrc and add the desired Window Manager line at the end, e.g. exec dbus-launch twm
    15. reboot, attempt $ startx and tune.

    Wireless

    No need to download and compile a driver module if one came with the distro. Check here:
    ls /lib/modules/2.6.23.12/kernel/drivers/net/wireless/

    Atheros AR242x 64 (5007 chipset)

    Memory location on my card for this is 53100000. The instructions here were crucial. The ath5k module apparently doesn't work well in Satellites; however, it's described here, and it appears ath5k will eventually be the way to go. For now, the steps seem to be:
    1. in /etc/modprobe.d/blacklist, blacklist the "ath5k" module (a single line; see the example after this list)
    2. reboot and lsmod - verify ath5k is gone
    3. download madwifi-hal-0.10.5.6-r3861-20080903.tar.gz, or the newest one there, make, and install.
    4. reboot again and lsmod again
    5. # iwconfig ath0
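    For step 1, the blacklist entry is a single line in /etc/modprobe.d/blacklist:
    blacklist ath5k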

    Atheros AR5005G

    download the latest madwifi, e.g. madwifi-0.9.9.3, then the usual:
    $ tar -xzvf madwifi*
    $ cd madwifi*
    $ make
    # make install
    # modprobe ath-pci

    RaLink RT2600 802.11 MIMO
    1. download latest ralink rt61 driver from ralink support.
    2. $tar -xzvf 2008*
    3. $cd RT61_Linux*
    4. $cp Makefile.6 Makefile [kernel 2.6.x]
    5. Alter the module source rtmp_main.c by commenting out (near the bottom, around line 900):
    return pci_module_init(&rt61_driver);
    and replacing it with:
    return pci_register_driver(&rt61_driver);
    6. Their "Configure" file is not executable, so change it, then configure and make the module:
    $chmod 755 Configure
    $./Configure
    $make

    7. Make a directory where the module will locate configuration info and put these info files in it.
    # mkdir /etc/Wireless/RT61STA
    # cp rt2561.bin /etc/Wireless/RT61STA/
    # cp rt2561s.bin /etc/Wireless/RT61STA/
    # cp rt2661.bin /etc/Wireless/RT61STA/
    8. The last file to go into that config directory may have DOS carriage-return (CTRL+M) line ends, and we have to be sure these are eliminated: use $ dos2unix rt61sta.dat (or use sed). This file has the particulars for our LAN and WEP.
    9. Copy that file to where the others are:
    #cp rt61sta.dat /etc/Wireless/RT61STA/
    10. Send the module to the kernel's module folder (substituting the running kernel version, e.g. with $(uname -r)):
    #cp rt61.ko /lib/modules/$(uname -r)/misc/
    11. Tell the kernel where to find the module, either by running # depmod -a or by adding a line with the literal path to /lib/modules/<kernel-version>/modules.dep:
    "/lib/modules/<kernel-version>/misc/rt61.ko:"
    12. Load the module:
    #modprobe rt61

    AFTER INSTALLING WIRELESS
    1. provide permanence by adding /sbin/modprobe ath-pci to /etc/rc.d/rc.modules, and any configuration for it (e.g. iwconfig ath0 essid "loser") to /etc/rc.d/rc.local.
    1b. in the case of Ralink, add /sbin/modprobe rt61 to /etc/rc.d/rc.modules.
    2. further modify /etc/rc.d/rc.local to be sure the card comes up. Last line for the card: ifconfig ra0 up or ifconfig ath0 up (iwconfig sets wireless parameters; bringing the interface up is ifconfig's job).
    3. reboot and "dhcpcd ra0" (or ath0), then check with "route" and a ping. A consolidated sketch follows this list.
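    Putting the Ralink case together, a minimal sketch of the additions to the two files; the ESSID "loser" is just the example name from above, and running dhcpcd from rc.local rather than by hand is optional:
    /etc/rc.d/rc.modules
    /sbin/modprobe rt61
    /etc/rc.d/rc.local
    iwconfig ra0 essid "loser"
    ifconfig ra0 up
    dhcpcd ra0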

    A NOTE ON PACKAGES

    In Slackware, packages typically need to be individually compiled and installed. This leads to dependency problems, because individual applications might overwrite or otherwise break dependencies from other packages. Slackware has pkgtool, and users are urged to use it whenever possible.