Sunday, December 20, 2009

LaTeX hijinx

latex forum  TexLive install (zenwalk)   TeX Live guide   summary of commands   general guide   general guide  latex basics   latex basics   formatting   formatting tips   font info  more font   equations   graphing information  graphing instructions   photos/graphics   complex graphs   tables  lines, boxes  templates: letters, tables  BibTeX   BibTeX  Hongzhi's Notes   convert latex to png

Note: If Perl or Tk version problems appear when running tlmgr or tlmgr -gui, it might be searching the DVD instead of the HDD installation. To confirm, run $ tlmgr update --list, which attempts to evaluate what packages could be updated by examining the database. If it comes back unable to find the database, and the folder it names is the DVD rather than the HDD, update where it looks:
$ tlmgr option location
Also update tlmgr itself before checking anything else:
$ tlmgr update --self


Unlike MS Word, LaTeX is not WYSIWYG, but its features are transparent, non-proprietary, and configurable to a very fine grain. LaTeX seems slightly ungainly initially, before one understands which set of binaries one will typically use. One becomes more efficient with experience, but the easiest approach for a noob appears to be to download a complete 2.8 GB TeX Live or MiKTeX iso, which has all potential binaries and many templates. Just burn it to DVD and then install it to the hard drive from the DVD. Having a complete installation avoids running into missing binaries later. The greatest advantage of basic LaTeX being ASCII text is that it is easily searchable with grep, unlike proprietary formats (e.g. Word). Additionally, publishers often produce .cls files which automatically, or nearly automatically, format one's text to the style of the journal's submission requirements. After placing the .cls file in the appropriate directory, one need only change one line at the top of the document, e.g. \documentclass{theircls}.

tex files
The basic LaTeX file is the ASCII .tex file. It can be edited with any text editor. Once complete, the source .tex is compiled into a .dvi file (device independent), but it can also be compiled into other formats, such as .pdf, .ps, .eps, etc.

tex file syntax
The default settings for margins are huge, around 2" in every direction. In the basic .tex file below, I added {geometry} package information to overcome this but, if the default geometry is desired, {geometry} can be deleted. One can create their own style sheets and call them with \usepackage, applying desired behavior across any document, similar to the way a css sheet does in an html document.

% percents are comments
\documentclass{article}
\usepackage[margin=1in]{geometry}
% \usepackage[T1]{fontenc} % accents, umlauts, etc.
% \usepackage[utf8]{inputenc} % Chinese characters
% \usepackage{graphicx} % if photos
% \usepackage{indentfirst} % indents first paragraph in a section
% \usepackage[scaled]{helvet} % sans serif, pt 1
% \renewcommand*\familydefault{\sfdefault} % sans serif, pt 2
\author{\LaTeX{} Newbie}
\title{A Quick Example}
\begin{document}
\maketitle
\section{Notes Wk 1}
Math 674 -- Spring 2010
\subsection{20100116 Introduction}
We review the syllabus and introductions. A primary concern seems to be the use of a calculator.\\

Here are a couple of equations that are pretty well known, the second being the quadratic formula. I'm uncertain how to make a larger space between the two equations:
\[ E = mc^2 \]
\[ \begin{array}{*{20}c} {x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}} &
{\mathrm{when}} & {ax^2 + bx + c = 0} \\ \end{array} \]
Just a short follow-up text.
\end{document}

If we then compile it into a .pdf with the command, say $ pdflatex test.tex , we can view the formatted result in any PDF reader.

graphs and photos
Many people create graphs or plots of equations outside LaTeX and "\include" the results, using a package to process them. The main packages are eepic, graphicx, and tikz. Eepic is not known to work with pdflatex, which I use to compile my docs into PDF files. A simple way is to use gnuplot from the command line and export the resulting graph as an *.eps file. In the main document, use the graphicx package ("\usepackage{graphicx}") and then, wherever graphics are desired, call the eps file(s) with \includegraphics and the file name. Graphicx can also import JPGs, PNGs, and the like, as described in this wiki primer.
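A minimal sketch of that workflow; plot.eps is a placeholder name for whatever gnuplot exported, and note that plain latex/dvips handles EPS directly, while pdflatex wants the graphic converted to PDF/PNG/JPG first:

```latex
\documentclass{article}
\usepackage{graphicx}
\begin{document}
% plot.eps is a placeholder for the gnuplot-exported file
\begin{figure}[h]
  \centering
  \includegraphics[width=0.8\textwidth]{plot.eps}
  \caption{A graph exported from gnuplot.}
\end{figure}
\end{document}
```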

Another option is tikz, which produces actual vector graphics. The package is \usepackage{tikz}; the callout is \begin{figure}, which delimits the graphics area, and, nested inside it, \begin{tikzpicture} with the associated code to create the graph. Tikz is a user application layer for a program called "pgf". Info on pgf, along with some typical tikz examples, is available at the pgf site. Chapter 12 of the TikZ manual there is particularly helpful for mathematics graphing, though it does not manage equations and smooth curves easily. It is possible, and looks clean, as seen here.
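A small sketch of that nesting, graphing a parabola; the axis ranges and the function are arbitrary choices for illustration:

```latex
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{figure}[h]
  \centering
  \begin{tikzpicture}
    % axes
    \draw[->] (-2,0) -- (2,0) node[right] {$x$};
    \draw[->] (0,-0.5) -- (0,2.5) node[above] {$y$};
    % a smooth curve: y = x^2
    \draw[smooth] plot[domain=-1.5:1.5] (\x, {\x*\x});
  \end{tikzpicture}
  \caption{$y = x^2$ drawn with tikz.}
\end{figure}
\end{document}
```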

currency symbols inside math mode
Since dollar signs are special operators used to delimit math mode, what if we need actual dollar signs to display in math output? The only answer I've found to date is to insert \text{\$}, which escapes the dollar sign inside an embedded text block.

tex to html, open office, word
Link: Geico Caveman's attempt
In TeX Live, tex4ht appears worthless. For straight html, I didn't find anything better than $ htlatex foo.tex . This created an html document and an associated css stylesheet that properly rendered math and text, at least in Firefox. The css stylesheet was bulky, and a few features will not parse, notably dfrac.

combining multiple documents
I can't write it any better than this excellent post for combining multiple .tex files into a book or other larger document.

Saturday, December 19, 2009

dvd-burning burn-out

Links: Firmware info   Firmware   Latest DVRFlash

It used to be that DVD-burning was pretty straightforward under various distributions:
$ growisofs -dvd-compat -Z /dev/hdc=myvacation.iso
Mostly, all would go smoothly. (Note: This is not a discussion of the current cdrecord/wodim problem)
But advances have come at different speeds for different components, such as drives, cables, I/O ports, firmware, and the BIOS. A DVD drive might read OK, but encounter problems when burning:
$ growisofs -dvd-compat -Z /dev/hdc=myvacation.iso
Executing 'builtin_dd if=myvacation.iso of=/dev/sr0 obs=32k seek=0'
:-[ PERFORM OPC failed with SK=3h/POWER CALIBRATION AREA ERROR]: Input/output error

One can have Brasero or a similar program do the burning to bypass any manual settings, but then we might notice that our drive, which is supposed to burn at, say, 4-16x, is instead burning at about .5x. That would be something like 680 KB/sec, if we agree 1x is supposed to be about 1.32 MB/sec. This is annoying, to be sure. (Note: one good thing about Brasero: it seems to be a rare non-K3b burner which will do video, at least if configured properly.)

Facing this, is it our cable, our DVD firmware, what? Certainly, the burner only working at 1/8 of its lowest burn speed is going to cause I/O problems and user delays. First, we'd like to run a few tests, just as we see here, to give us a rough view.

# hdparm -i /dev/sr0

Model=PIONEER DVD-RW DVR-108 , FwRev=1.18 , SerialNo=
Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
BuffType=13395, BuffSize=64kB, MaxMultSect=0
(maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2 udma3 *udma4
Drive conforms to: Unspecified: ATA/ATAPI-2,3,4,5

* signifies the current active mode

# hdparm -I /dev/sr0

ATAPI CD-ROM, with removable media
Model Number: PIONEER DVD-RW DVR-108
Serial Number: DKDC451400WL
Firmware Revision: 1.18
Likely used CD-ROM ATAPI-1
DRQ response: 50us.
Packet size: 12 bytes
cache/buffer size = unknown
LBA, IORDY(can be disabled)
Buffer size: 64.0kB
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 *udma4
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=240ns IORDY flow control=120ns
Enabled Supported:
* Power Management feature set
* PACKET command feature set
* DEVICE_RESET command
HW reset results:
CBLID- above Vih
Device num = 0 determined by the jumper

# hdparm -tT /dev/sr0

Timing cached reads: 1234 MB in 2.00 seconds = 616.60 MB/sec
Timing buffered disk reads: 22 MB in 3.23 seconds = 6.82 MB/sec

With an advertised 12x read speed, about 16 MB/sec, we see that the true, uncached read speed of 6.82 MB/sec is not coming close. And, as noted above, a write speed of about .682 MB/sec is anywhere between 1/8 and 1/32 of the advertised 4x-16x write speed range. Hmm...

We see in both of the first two checks that UDMA - Ultra Direct Memory Access - is operational, and that in fact UDMA4, one of the faster forms of UDMA, has been selected. Let's also check with this tool:
$ cat /proc/ide/piix

It's notable that UDMA4 requires an 80-conductor cable to transfer reliably at the 66 MB/sec rate UDMA4 ostensibly carries; avoiding capacitive interference is the reason for the 80-conductor design over its 40-conductor predecessor. I haven't opened the box, but it's unlikely that an 80-conductor cable was used for this drive, since I simply cannibalized two drives from older systems when constructing this one. The 80-conductor cables are more reliable as well as backwards-compatible with earlier drives. At any rate, I can note this as potentially a problem, but perhaps not a full explanation.

Monday, November 2, 2009

linux - timezone/time changes

Brief clock-setting example       Ntpd daemon documentation (in depth)
Clock processes on Red Hat     Clock processes in Archlinux

The Red Hat link describes time features functionally with some commands. The Archlinux link includes specific configuration file information.
Note: "RTC" and "UTC" are similar acronyms, and of course we want to keep them straight. My device is to remember, "U is close to Z", since UTC is Zulu time (Zulu=GMT=UTC). "RTC" is the BIOS clock.

Overview of clock/timezones in Slack/Zenwalk

The hardware clock, more technically the RTC (Real Time Clock), is the battery-powered CMOS clock that sits on the motherboard. During startup, we can enter the BIOS and set it directly, or we can set it through commands (see below). The software system clock reads the hardware clock at boot time. To assist with system clock accuracy, some users also employ an ntpd daemon: after the system boots, ntpd periodically checks calibrated servers via the Web and makes corrections to the system clock. At shutdown, the system clock's time is written back to the hardware clock, so the hardware clock stays reasonably accurate while the machine is powered off on battery power.

RTC -> System Clock (once,at boot) ->System Clock (corrected by NTPD) -> Timezone offset -> Desktop display

Linux views the hardware time in one of two ways: as carrying "local time" or "UTC" (aka "GMT", "Zulu") time. This decision is made by the user during installation, but the file can be directly accessed and toggled at any time by the root user. In Zenwalk, the file is /etc/hardwareclock. The file contains only one word, either "localtime" or "UTC" (minus quotes). In other Linux systems, local or UTC is noted in /etc/sysconfig/clock. Zenwalk/Linux uses this setting when calculating the current time to display to the user.

It's up to the individual, but I set the BIOS to UTC and select "UTC" in /etc/hardwareclock. With UTC: 1) my system remains consistent across both the hardware and system clocks, and 2) UTC is the time served by Internet ntp time servers. Also, when traveling, I needn't alter clock settings beyond the timezone (via /usr/share/zoneinfo) or, e.g., Orage (if I want to use a GUI).

What appears when I type "date"
Neither the system clock nor the hardware clock, exactly. "date" shows the system clock (seeded from the BIOS at boot), modified according to the /etc/hardwareclock and /usr/share/zoneinfo settings. As noted above, because I keep both the system and hardware clocks set to Zulu, the simplest way to keep things accurate at all 3 levels (BIOS, system, user display) is to run a time server update on the system clock and then send it to the BIOS:
# ntpdate pool.ntp.org
# hwclock --systohc
These commands are explained more below, but the pair above makes life easy after, say, one replaces a CMOS battery on the motherboard. Browsers won't be able to establish an SSL lock unless the system time is relatively close to the current date, i.e. within the validity period of the CA certificate. For example, Chromium won't navigate to the Google home page with a system date of June 1, 2001 in 2009, because Google always connects over "https".

Hardware clock (RTC)
It's possible to directly set the time on one's hardware clock to whatever one wishes using hwclock commands.

To view the hardware clock:
# hwclock

To change the hardware clock to whatever the system clock currently indicates:
# hwclock --systohc

To change the system clock to whatever the hardware clock currently indicates (this is what happens at boot):
# hwclock --hctosys

System clock
The system clock periodically receives accurate UTC from ntpd. Settings are configured in /etc/ntp.conf, and one can check operation with # service list. One can also force an ntp update of the system clock. Turn off ntpd (# service stop ntpd) to free the ntp port, then run ntpdate against a Googled ntp time server, or just use the ntp pool, which will determine and use the nearest server:
# ntpdate pool.ntp.org

To view the system clock, for example to verify an update:
$ date

Timezone offset
Suppose I fly from Chicago to NYC or vice versa; how do I set the timezone? From the desktop, I open Orage and change the timezone there. If I want to do it without a GUI, most of the answer is in /usr/share/zoneinfo.

The file /etc/localtime is supposed to be a soft ("sym") link pointing to the correct timezone file in /usr/share/zoneinfo. On my system, /etc/localtime was a regular file instead of a symlink. Accordingly, to be sure, I removed both files and created a new symlink to the correct timezone. For instance, since NYC is EST, this would be the process when flying to NYC:

# rm -r /etc/localtime
# rm -r /usr/share/zoneinfo/localtime
# ln -s /usr/share/zoneinfo/US/Eastern /etc/localtime

With these three commands, and setting /etc/hardwareclock to UTC, all should be good at the next reboot.

Sometimes a system is sticky even with this. If that's the case, do all of the above and also export the time variable $TZ, to the kernel. For example in PST regions:
$ export TZ=PST8PDT
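A quick way to see the effect of $TZ without rebooting; date honors it per invocation:

```shell
# The same system clock rendered through two zone settings
TZ=UTC date +%Z         # prints "UTC"
TZ=PST8PDT date +%Z     # prints "PST" or "PDT", depending on daylight saving
```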

Ancillary notes
  • the directory /usr/share/zoneinfo contains the premade time zone options. It appears all potential time zones are in here.
  • the timezone file /etc/localtime appears filled with weird symbols like it's a bin file. It can't therefore be edited with a text editor.
  • the /usr/share/zoneinfo directory contains two soft links: localtime -> /etc/localtime, and timeconfig -> /usr/sbin/timeconfig. The symlink for /etc/localtime is explained above, but I'm uncertain why the timeconfig application would be linked here, since it can easily be run in any terminal.

Saturday, October 10, 2009

zenwalk - package management

Forum Thread: Prior release repositories
Forum Thread: Making a local repository for a release

The Zenwalk OS (Slackware-based) is updated once or twice a year. Following an update, the mirrors for packages (programs) are also updated and contain the latest package versions.

Let's suppose I like to use the audacious package to play music. In order to keep the installation disc as small as possible, packages such as audacious are not included in the Zenwalk installation disc. These additional packages are retrieved separately from one of the package mirrors. Open a terminal and it's easy to download and install any Zenwalk package (in this case, audacious) using the command line:

# netpkg audacious

Or, if removing:

# netpkg -remove audacious

That's about all there is to installing or removing applications Zenwalk maintains.

Maintaining Previous Versions
As noted above, Zenwalk releases entirely new distributions once or twice a year. What if I don't want to upgrade my entire operating system, but I still want to install applications? For example, suppose I've had Zenwalk 6.0 installed for a year before I remember I want to install audacious. I try to netpkg audacious but, when I do, I discover Zenwalk has upgraded to v.6.4. If I try to install the newer version of audacious, netpkg asks to upgrade portions of v.6.0. I can let netpkg do this, but maybe I don't want parts of my system to be in 6.4, while other parts are in 6.0. How do I avoid upgrading to 6.4, but still get the packages I want for 6.0?

Three Solutions

Solution a -- install without using netpkg
One can always go to the home page of any application one desires and simply download, unpack, configure, compile, and install the general release tarball (.tgz). In the case of audacious, start from its home page. One might want to check for dependencies when doing so.

But a couple of solutions for installing older software can be accomplished inside of the netpkg package manager. Both of these require a small degree of manipulation of the /etc/netpkg.conf file. The second option additionally requires manipulation of the /usr/libexec/netpkg-functions file.

Solution b -- point netpkg to prior release mirrors
For a period of time after a new Zenwalk release, a few mirrors contain the previous release. One must open the /etc/netpkg.conf file with a text editor and manually add URLs for older mirrors. After doing so, I run

# netpkg mirror

and select one of the older mirrors. I found a few prior release URLs listed here, and had success with one of them. Other archive URLs can probably be Googled, but there is a limitation to this solution: archive mirrors trail the current Zenwalk release by only one version. Users therefore only have a grace period of 6 months to a year before they will be forced to upgrade to some extent. The more permanent solution is the one below.

Solution c -- download all desired packages for dvd or other local access
This is a permanent solution, in case one wishes never to upgrade Zenwalk. The catch is that one needs to consider nearly any application one might ever need, because thinking of it two years later will be too late (if that happens, just use "solution a"). Downloading all potential applications in a Zenwalk release comes to roughly 10 GB. I then alter netpkg to find the files on my hard drive or a DVD, instead of on a mirror. Alterations include the /etc/netpkg.conf and /usr/libexec/netpkg-functions files. Here are instructions: local repository

Supposedly it's also possible to point to a DVD with everything on it (if Apache is running) by adding its URL to /etc/netpkg.conf and selecting it via # netpkg mirror.

!! For "solution c", it's necessary to download PACKAGES.txt and PACKAGES.txt.gz/ from the mirror. Netpkg appears unable to traverse directories to locate packages without these meta-information files. Get these files before the previous release mirrors move to a newer release or face recreating them manually, a time-consuming, nearly prohibitive task.

Saturday, September 19, 2009

Browser ID String - User Agent

User Agent Switcher

Superficial entry here. I'm a Yahoo! Premium User but, even if I weren't, I believe I'm supposed to have access to their News videos. I don't, using Firefox, currently version 3. I sent them an email some months back and they assured me that their videos are tested and viewable on Firefox. Um... no. Or maybe "yes" on some version on some system they created. Anyway, eventually I had little choice but to pursue the annoyance of a User Agent spoofer.

I went ahead with the popular User-Agent Switcher developed by Chris Pederick. It installs easily; then one simply restarts Firefox.

This is a good piece of software, but doesn't have the three or four strings I wanted to use. I wanted to save the large file that comes with UAS, but also to make my own short list so that I would only need to select from three options, and the menu would therefore be much smaller and more useful.

1) Backed up the ID file that came with UAS: $ cp useragents.xml useragents.bak. I then opened the original useragents.xml file and added the strings I wanted from various GIS's. The one which allowed me to view content in Yahoo was:

Wednesday, September 16, 2009

layman data III

Helpful links:
**Google MySQL Primer
CERT Bulletins
Webmaster World Forum
Simulate foreign keys - MyISAM
Cascading and key constraints - MyISAM, InnoDB, NDB
Create tables using PHP script

This is the third in the series, though not meant as a coherent progression; it's a random collection of tidbits and crumbs to follow. Recently:

* Cascading and foreign key constraints with different engines. My webhoster provides only the MyISAM engine, so no foreign keys. Foreign keys are the "relation" in an RDBMS: auto-updating child relations when a parent is updated, cascading inserts and deletes, and so on. This can apparently be approximated in a number of ways in MyISAM: TRIGGERs can be created, loops which do multiple inserts, etc. The InnoDB engine makes this native from the time the tables are created. Much easier. To switch engines on an existing table, we use ALTER TABLE, e.g. ALTER TABLE mytable ENGINE = InnoDB;
* Added CERT link above. The CERT bulletin link above quickly reveals the many injection threats arising each week. It appears one has to lock-down the code of a production server which, in turn, apparently requires time and patience to learn and implement.
* Scripts to install tables. These appear to be formatted as .sql dump files, but without the data inside.
* Proper documentation, once this is more focused and defined. So far, a simple RTF file using underline for primary, and italic for foreign key, has been helpfully direct. Seen it elsewhere too, but read it in Welling, L., Thomson, L. (2008). PHP and MySQL® Web Development, Fourth Edition. Addison-Wesley Professional. pg 208-209 informit link ~$50.
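As a sketch of the TRIGGER idea from the first bullet above, here is one way to emulate ON DELETE CASCADE under MyISAM; the parent/child tables are hypothetical, and note that MySQL triggers fire regardless of storage engine:

```sql
CREATE TABLE parent (id INT NOT NULL, PRIMARY KEY (id)) ENGINE=MyISAM;
CREATE TABLE child  (id INT NOT NULL, parent_id INT NOT NULL, PRIMARY KEY (id)) ENGINE=MyISAM;

-- When a parent row is deleted, remove its children by hand,
-- since MyISAM has no native foreign keys.
DELIMITER //
CREATE TRIGGER parent_cascade_delete
AFTER DELETE ON parent
FOR EACH ROW
BEGIN
  DELETE FROM child WHERE parent_id = OLD.id;
END//
DELIMITER ;
```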


Friday, September 4, 2009

layman data II

related links
Apache   PHP   PostgreSQL
security modifications to avoid root
LAPP on Redhat (very helpful)
clear PHP/Apache compile notes

Running Apache, PostgreSQL (or MySQL), PHP, and a browser together (taken together, a LAMP) is intercomplex and an annoying resource drain, but required these days. If one has photos or many other files, something beyond file folders is needed, and they cannot be managed without a LAMP unless one has a CS degree or can afford Oracle. I run a LAMP on my website to make files accessible, but the provider where I park the site has older versions of all this software. This makes the LAMP vanilla and slower (e.g., no InnoDB). Additionally, there are no options for PostgreSQL.

Since I prefer PostgreSQL, for the LAMP on my local drive, I created a LAPP, substituting Postgres for MySQL. Even on a local drive, security issues arise. Apache, Postgresql, PHP, and some browsers require ports. I want to be sure no ports are open to the outside. Learning how to lock-down Apache, PostgreSQL, and PHP to make them only localhost accessible is a work in progress. Configuration files need to be altered for localhost only, but it appears there is more to it than this, if one is simultaneously connected on the Web.

On this local drive, running hybridized Slackware (Zenwalk), a reliable LAMP exists out of the box, but morphing it to a PostgreSQL LAPP required compiling PostgreSQL and PHP (see "Notes" below). The kernel didn't require alteration and a recompile, thankfully.


Install PostgreSQL (source, don't use netpkg) and MySQL (netpkg) first. In Zenwalk, PHP is precompiled without PostgreSQL support. PHP must therefore be recompiled with it: "--with-pgsql=/usr/local".

Default Users, Ports, Home

Postgresql - user:postgres, port 5432, /usr/local/pgsql. Apache - user:root, port 80, /etc/httpd.conf. PHP - /usr/local/lib/php. MySQL - user?, port 3306, /usr/share/mysql. I compile Postgresql instead of netpkging it because of a Catch-22 that occurs after installation: one would have to log in and out every time one wanted to use the database or create group permission trees. On a standalone, it's easier to compile Postgresql and initialize with one's own user as the owner instead of "postgres". Create databases with createdb (see DATABASE FILES below).


FIRSTRUN DBMS - Compiling is easier downstream than the Zenwalk package. When compiling, simply supply one's username during initdb, e.g. if one's username were "foo": $ initdb foo --encoding=utf8 --locale=POSIX . Then just make a directory in /home like "/home/pgsql" and # chown -R 1000:100 /home/pgsql so "foo" can use it at will. If using the Zenwalk package, postgresql.conf and pg_hba.conf must be configured prior to first run. Zenwalk also makes the default user postgres, so its password needs to be created: # passwd postgres, and enter a simple password. A point of confusion in Zenwalk is that "postgres" is both the god user of the DBMS and the command to start the DBMS ("postmaster" is deprecated).
START/STOP DBMS - # service start/stop postgresql (Zenwalk), or # postgres -D /var/lib/pgsql/data/ -r logname.txt. This second command starts the database at its default location and provides a logname of choice.
DATABASE FILES - Zenwalk installs a PostgreSQL tablespace at /var/lib/pgsql/data, but installing from source puts it in /usr/lib/pgsql. # createdb -U postgres -W -D /var/lib/pgsql/data/sub01 -E utf8 -e employees.


SECURITY - Once it's running, if Apache is listening for outside connections, it's a significant security problem. Set it to listen on localhost only. Skype also uses port 80, but you can reset Skype to, say, port 81 in its advanced settings. Meanwhile, to change Apache:
# nano /etc/apache2/ports.conf
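Inside ports.conf, the relevant change is a single Listen directive; a sketch, assuming the stock file (bind to the loopback address so nothing outside the machine can connect):

```apache
# Listen only on the loopback interface, standard http port
Listen 127.0.0.1:80
```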
START/STOP - # service start/stop httpd (Zenwalk), or # apachectl start/stop (any distro). Check it by pointing a browser to "http://localhost".
CONFIG FILES - Netpkg handles it, but following a PHP recompile, Apache configuration tweaks are necessary to serve PHP. A short list is here. Additionally, one must open /etc/apache/mod_php.conf and provide the complete path to the PHP module, typically in /usr/libexec/apache/, if it's not in there. Following changes, restart httpd, which should initialize PHP.
HTML FILES - (Zenwalk) We can serve files from anywhere on our hard disk through the browser, but it's easiest to put them in /var/www/htdocs/, because this is the default. Writing there from logs or anything else isn't easily done, since /var/www/ is owned by root. A solution is to create a new group.


START/STOP - # php -v checks the version; PHP loads as an Apache module, not as a separate program. I used # netpkg remove php to remove the Zenwalk version of PHP, because it fails to support PostgreSQL.

COMPILE - necessary for PostgreSQL; netpkg PHP does not support Postgres. The configuration phase, prior to "make", is critical. The correct syntax for the PostgreSQL functionality is --with-pgsql=/usr/local. However, other options can be useful. Taking most situations into account, a reasonable configure string might be:
$ ./configure --with-apxs2=/usr/sbin/apxs \
--with-pgsql=/usr/local \
--with-mysql=/usr/share \
--with-libxml-dir=/usr/lib \
--with-curl=/usr/bin/curl \
--with-zlib \
--with-gettext \
--with-gdbm \
--enable-inline-optimization

"Make", then root "make install"; it installs to /usr/local/lib/php. Copy the ini files to there: # cp php.ini* /usr/local/lib/php/. Pick one of the two to be the ini file, eg # cp php.ini-development /etc/apache/php.ini. It can be tweaked later.


Tuesday, August 4, 2009

layman data I

Helpful links:
**Google MySQL Primer
MySQL Forge
OpenSourceCMS compare
insert data from html
html data entry formats
useful php code
Blog w/simple folksonomy schemas


Folders and a file manager aren't sufficient for speedy file retrieval once a few thousand documents have accumulated. Further, they don't allow for proper metadata storage. I'm sure many home users are in this situation. Our needs are great, but we are basically forced to rely on folders and a file manager, or to contact, say, Oracle and pay business rates. The only in-between option seems to be going to all the trouble of learning how to build and implement a CMS such as a LAMP, or installing a boggy pre-designed LAMP like Joomla or Drupal.

I had a few considerations:

  • PostgreSQL data warehouse
  • browser initiated query ability (JavaScript, PHP, blah blah blah)
  • methods to vacuum, backup, and restore the DB
  • a schema representing the above in some reasonably intuitive way

  • relationships

    The problem was how to establish relationships between a file and several tags. Three commonly used schemas are MySQLicious, Scuttle, and Toxi. There are others, more complex and faster, but my provider is simple and only has MySQL. Toxi appeared passable for my arrangement. The key, though, is the PHP to enter the relationships, and in the proper order. Anyway, first, the schemas. This site shows the three options using ER modeling, but crow's foot notation is probably the clearest representation. Crow's foot versions can be seen in some of the representations in the blog link above the introduction.


    The provider on which my site is parked only provides MySQL for manipulation. This was OK for a trial run. Below are the three tables, taken more or less verbatim from MySQL Forge's excellent page:

    CREATE TABLE Items (
      item_id INT UNSIGNED NOT NULL AUTO_INCREMENT
    , item_name VARCHAR(255) NOT NULL
    /* Many more attributes of the item... */
    , PRIMARY KEY (item_id)
    ) ENGINE=InnoDB;

    CREATE TABLE Tags (
      tag_id INT UNSIGNED NOT NULL AUTO_INCREMENT
    , tag_text TEXT NOT NULL
    , PRIMARY KEY (tag_id)
    , UNIQUE INDEX (tag_text)
    ) ENGINE=InnoDB;

    CREATE TABLE Item2Tag (
      item_id INT UNSIGNED NOT NULL
    , tag_id INT UNSIGNED NOT NULL
    , PRIMARY KEY (item_id, tag_id)
    , INDEX (tag_id)
    , FOREIGN KEY fk_Item (item_id) REFERENCES Items (item_id)
    , FOREIGN KEY fk_Tag (tag_id) REFERENCES Tags (tag_id)
    ) ENGINE=InnoDB;

    With these I was nearly able to be up and running, but I received an error when attempting to create "Tags", namely that:

    Error: tag_text used in key specification without a key length

    I changed UNIQUE INDEX (tag_text) to index on the first 12 characters:

    UNIQUE INDEX (tag_text(12))

    The table was created properly, but my provider does not allow InnoDB, so the command was silently converted to the ungainly MyISAM. Nothing I could do there. Subsequently, however, I added a column to the table to provide more complete descriptions of files:
    ALTER TABLE `mydb`.`mytable` ADD COLUMN `item_desc` TEXT NOT NULL AFTER `item_name`

    I wanted a column to list the page numbers or slide numbers of whatever file I was looking at:
    ALTER TABLE `mydb`.`mytable` ADD COLUMN `item_pages` SMALLINT NOT NULL DEFAULT '1' AFTER `item_name`
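The "proper order" of inserts mentioned earlier amounts to: the item first, then any new tags, then the linking rows. A sketch in plain SQL against the three tables above, with placeholder values; LAST_INSERT_ID() carries the new item's key:

```sql
-- placeholder values throughout
INSERT INTO Items (item_name) VALUES ('notes.pdf');
SET @item = LAST_INSERT_ID();

-- INSERT IGNORE skips the row if tag_text already exists (UNIQUE index)
INSERT IGNORE INTO Tags (tag_text) VALUES ('math');
SELECT tag_id INTO @tag FROM Tags WHERE tag_text = 'math';

-- finally, the relationship row
INSERT INTO Item2Tag (item_id, tag_id) VALUES (@item, @tag);
```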

    Tutorials describe back-ups using phpMyAdmin, or directly from a PHP browser page.


    Challenges include setting up forms inside a table, the order of html and php, and resetting the form after the submission of data. An example of a form inside a table, with a reset after the data is submitted:
    <form action="insert1.php" method="post" onsubmit="this.submit(); this.reset(); return false">
    <table bordercolorlight="#CFCFCF" bordercolordark="#FFFFFF" border="1"
    bordercolor="#cfcfcf" cellpadding="2" cellspacing="0" width="100%">
    <tr>
    <td align="left" bgcolor="#009dd0" valign="top"><b><font color="#ffffff">description</font></b></td>
    <td bgcolor="#009dd0"><b><font color="#ffffff">tags</font></b></td>
    <td bgcolor="#009dd0"><b><font color="#ffffff">slides/pages</font></b></td>
    <td bgcolor="#009dd0"><b><font color="#ffffff">filename</font></b></td>
    </tr>
    <tr>
    <td><TEXTAREA class="expands" name="item_desc" rows="8" cols="35"></TEXTAREA></td>
    <td><input name="#" type="text" size="35" /></td>
    <td><input name="Position[]" type="text" id="Position[]" size="5" /></td>
    <td><input name="item_name" type="text" /></td>
    </tr>
    </table>
    <input type="submit" value="submit" />
    </form>


    At first it appeared that one HTML file containing forms had to be made to enter data and another to retrieve data, each presumably calling an appropriate PHP script to do the database work. However, it now appears best to include the PHP right in the HTML files.


    The core portion of the insert, which pulls its values from the previous page's SUBMIT:
    <?php
    // $hostname, $username and $password are assumed set earlier in the script

    //connection to the MySQL server
    $dbhandle = mysql_connect($hostname, $username, $password)
    or die("Unable to connect to MySQL");
    printf ("<p>Status: Connected to server </p>");

    //connection to a database
    $selected = mysql_select_db("foo",$dbhandle)
    or die("Could not select database");
    printf (" <p>Status: Connected to database </p>");

    //escape the POSTed form values before they go into the query
    $filename = mysql_real_escape_string($_POST['item_name']);
    $descrip = mysql_real_escape_string($_POST['item_desc']);

    mysql_query("INSERT INTO Items (item_name, item_desc) VALUES ('$filename','$descrip')") or die(mysql_error());
    ?>


    Tuesday, March 31, 2009

    java frameworks

    I registered the term "Java" only vaguely, for example from encountering a "JavaScript error" while web browsing. Digging deeper, I learned that "Java" and "JavaScript" are unrelated, except in name. JavaScript is the current name of what was formerly called LiveScript. LiveScript was developed at Netscape (the same Netscape now owned by AOL), but the name was eventually changed to JavaScript, probably to leverage the (then) popularity of Java. Sun Microsystems, which created Java well before the renaming, apparently didn't sue because Netscape licensed the name as part of a deal with Sun.

    Java is a programming language whose source is compiled, though into bytecode for a virtual machine rather than directly into native binaries the way "C" is. Java-written programs therefore require a Java run-time environment installed on the executing computer. JavaScript, by contrast, is a scripting language: it isn't compiled ahead of time, and it operates almost exclusively inside a browser (except to write cookies). Strictly speaking, "applets" are small Java programs embedded in web pages; browser JavaScript programs are just scripts, though the two terms are often confused. Either way, a run-time is required -- the browser's built-in JavaScript interpreter for scripts, or the Java browser plug-in for applets -- and the user can enable or disable each. For example, a warning that JavaScript has not been enabled refers to the browser's JavaScript interpreter being switched off, not to the page's script itself.

    As just noted, JavaScript browser scripts run via the interpreter built into essentially every browser. They are difficult to write for all browsers at once, since each browser (IE, Firefox, Opera, etc.) implements its run-time slightly differently; a mismatch between the script and what that particular browser expects leads to the fairly common "JavaScript error" messages encountered when browsing. The core language is standardized by ECMA (as ECMAScript, ECMA-262), but the browsers' object models still vary. Another difficulty is security. Scripts operate within limited parameters ("sandboxes") designed to restrict access to the user's system, but sandboxes have repeatedly been shown not to deny that access entirely. A nice thing about the Java browser plug-in, by contrast, is that it alerts users to necessary updates.

    What about the original Java? Java is a language for writing applications that run on the workstation itself, not merely in a browser. For example, you could write a program in Java that edits photos, or documents, etc. Like other compiled programs, Java programs require a runtime environment, but Java was also designed to be cross-platform. That is, the Java concept is to create programs which run inside various operating systems (Linux, Windows, Mac OS X) and hardware platforms. The software framework of the runtime is what varies for each machine, and this runtime framework is installed first, prior to the program. Once the virtual framework is installed, the Java application may then be installed -- the application is supposed to work regardless of the type of machine or OS, since the underlying runtime layer is handling any OS idiosyncrasies.

    I've found that Java applications do work on almost anything if the runtime is properly installed. Unfortunately, in my experience at least, installing the Java runtime framework is often problematic, so much so that the original reason for this blog page was to write down the typical steps for my own reference. One other note: framework version updates are not typically announced automatically, so installers must remember to check periodically for upgrades.

    The Java Runtime Environment (JRE) is the primary framework for Java-written applications, in a similar way that the Visual Basic Runtime Environment (VBRE) is the virtual machine for applications written in Microsoft's Visual Basic. The current packaging of the JRE appears under the name J2SE (Java 2 Platform, Standard Edition). Sometimes J2SE by itself is enough to run a Java-written application; other times, additional Java frameworks have to be compiled and added on top of J2SE to provide functionality. Some examples: I have Sun's OpenOffice on my workstation. Without installing the Java Media Framework (JMF), a separately compiled framework which relies upon J2SE already being installed underneath it, there is no way I know of for OpenOffice's Impress application (it's like PowerPoint) to play back sound and video. An mp3 addition to the JMF can also be compiled and added. The Java Database Connectivity (JDBC) software likewise requires J2SE to be in place first. JDBC supports databases such as Neo4j (a graph database, good for tags). There are additional Java environments/frameworks available. Taking all these Java flavors together, the terminology and installation picture appears complicated, and the best overall description I've seen is here. The remainder of this post is hands-on.

    Java Media Framework (JMF)

    A fundamental problem is that the Java installation site provides ambiguous information, and mere awareness of the ambiguity isn't going to overcome it. In one forum, a guy complained that he spent so long attempting to configure his JMF installation that he decided to return to M$ Windows. That's extreme, but it is fair to ask why installing software roughly four years old still requires a significant understanding of paths, classpaths, and softlinks. Installation should be simpler by this point.

    Step One: JRE

    I haven't yet installed J2SE, so I use the older Java Runtime Environment. It's important to check that it's installed and that it exists in both the user's and root's PATH. I checked both of these to be sure, first as the user and then repeated as root:
    $ java -version
    java version "1.6.0_11"
    Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
    Java HotSpot(TM) Client VM (build 11.0-b16, mixed mode, sharing)

    $ echo $PATH
    If these don't show up, it's best to check and make sure the JRE path has been exported. For example, check that /etc/profile includes somewhere:
    export JAVA_HOME
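As an illustration of those exports (the JRE path below is an assumption -- substitute wherever your JRE actually lives), the /etc/profile entries might look like:

```shell
# Hypothetical JRE location -- adjust to the real install directory.
JAVA_HOME=/usr/lib/java
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME PATH

# Sanity check: the JRE bin directory should now be on the PATH.
echo "$PATH" | grep -q "$JAVA_HOME/bin" && echo "PATH ok"
```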

    Step Two: unpacking and moving

    Download the bin file and unpack it:
    $ sh ./jmf-2_1_1e-linux-i586.bin

    It unpacks into a directory, like any tarball. I switched to root and moved the entire directory to /usr/lib/JMF-2.1.1e

    Step Three: user and root paths

    JMF needs to be found on occasion by root and on other occasions by the user. These are two different setups:
    1) User: Sun recommends creating a ~/.profile file with these entries:
    # path settings for JMF
    export JMFHOME=/usr/lib/JMF-2.1.1e
    export CLASSPATH=.:$CLASSPATH:$JMFHOME/lib/jmf.jar

    I like things explicit, so I leave out the "JMFHOME" variable and write the same paths out in full:

    export CLASSPATH=.:$CLASSPATH:/usr/lib/JMF-2.1.1e/lib/jmf.jar
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/JMF-2.1.1e/lib

    2) Root: add the same lines, but at the top of /etc/profile.

    3) User and root: echo the path ("echo $PATH") to be sure that it includes paths to the jmf files. If not, add additional paths as the final lines of "/etc/profile". For example, I also wanted some X11 bin files in my path, so my final two "/etc/profile" lines look like (the X11 path is a placeholder):

    PATH=$PATH:/path/to/X11/bin
    export PATH
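Checking the CLASSPATH by eye is fiddly; here is a sketch of a small helper for it (the JMF path is the install location assumed in the steps above, not anything from Sun's docs):

```shell
# Return success if the given jar appears as a CLASSPATH component.
on_classpath() {
  case ":$CLASSPATH:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Example, using the install location assumed earlier:
CLASSPATH=.:/usr/lib/JMF-2.1.1e/lib/jmf.jar
on_classpath /usr/lib/JMF-2.1.1e/lib/jmf.jar && echo "jmf.jar on CLASSPATH"
```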

    Following this, JMF was working, except that the new profile settings still had to be sourced.

    Step Four: activate

    Easiest: restart the computer so it sources both profiles.

    Step whatever - diagnostics

    The one good thing Java has done is provide an online diagnostics page with which to check the JMF installation. Just point the browser there and it checks the JMF (and JRE). However, checking via the browser in this way seems, confoundingly, to generate classpath errors regardless of whether there are any. See below.


    Firefox is a separate matter. Even with JMF properly configured for the workstation, visiting the online diagnostics page may still return an error mentioning "classes..".


    JMF also has a download to play MP3's.

    Monday, March 30, 2009

    slackware 12.2 - kernel recompile

    Last edited: 2013/05/03
    Links: slackware compile simple   good: includes LILO   full Debian   gentoo instructions w/LILO   arch instructions   kernel performance
    Kernel builds are straightforward and can be done as a regular user in /usr/src/[version]:
    $ make mrproper
    $ cp /boot/config /usr/src/[version]/.config
    $ make menuconfig
    $ make dep (only needed for 2.4-series kernels)
    $ make bzImage (or just "make")
    $ make modules
    # make modules_install
    Recompiles are mostly the same, but we reuse the same source version repeatedly. That is, each recompile uses source with the same version number, and that causes a modules problem.

    the problem and solution

    A problem with using the same version source repeatedly is that modules are installed according to kernel version numbers: they go into /lib/modules/[version]. We don't want modules from each recompile overwriting or mixing with those of the prior recompile; we want a unique directory for each build's modules. How do we do that? We could change $MOD_INSTALL_PATH in the Makefile, but we also need our new kernel to be able to find its modules. A better option, then, is to change $EXTRAVERSION to something unique. It is incorporated into $KERNELRELEASE, so everything downstream (the module directory, and the pointers to those modules) will be properly named and installed. So, before running "make menuconfig", change the top-level $EXTRAVERSION (or occasionally "$EXTRAVER") variable in the Makefile to a different suffix for each build, and before running "make", double-check the Makefile for your identifier. Beginning with kernel 2.6 there is also a "local version" option within "make menuconfig" that creates a suffix the same way.
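The Makefile edit can also be scripted. This is a sketch run against a throwaway Makefile in a scratch directory (the version numbers and suffix are invented), not the real kernel tree:

```shell
# Work in a scratch directory standing in for /usr/src/[version].
cd "$(mktemp -d)"
printf 'VERSION = 2\nPATCHLEVEL = 6\nSUBLEVEL = 27\nEXTRAVERSION =\n' > Makefile

# Stamp a unique suffix for this build into EXTRAVERSION.
SUFFIX="-build20090312"
sed -i "s/^EXTRAVERSION =.*/EXTRAVERSION = $SUFFIX/" Makefile

grep '^EXTRAVERSION' Makefile
```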

    Note: to examine the kernel settings in a GUI without changing them, run "make xconfig". This displays the configuration in the source tree's .config (which matches the booted kernel if you copied /boot/config there). Select "discard changes" before exiting. To do the same in an ncurses CLI interface, the command is "make menuconfig" (also discard changes when exiting).

    Other - unintended effects

    There's a small chance one's video drivers or other hardware will act odd with the new kernel and modules. It's therefore worthwhile to back-up the old modules in case one needs to use the old kernel. If one recompiles GLibC, it's nearly certain some applications will have to be recompiled.

    Other - what is /boot/vmlinuz?

    When bzImage is created in /usr/src/[version]/arch/i386/boot/ and we copy it to /boot, we change its name. The file is the same, but we rename it "vmlinuz". We also append a suffix to its file name to distinguish it from other kernels in /boot. For example, we might title it "vmlinuz-20120604" or "vmlinuz-test". We add this same suffix to the "config" and "System.map" files for that build when they are copied to /boot.

    pre-compile: patches, optimizations, modules

    Patches are applied prior to opening the configuration files; they potentially expand the kernel options with new features. To patch, put the patch file in the same directory as the source, consider the level for "-p" (in this example I'll use "1") and:
    $ patch -p1 < patchname
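To illustrate what the "-p" number controls (the filename below is made up): it is the count of leading path components patch strips from the file names recorded in the patch header, so "-p1" turns "a/drivers/net/foo.c" into a path relative to the source root:

```shell
# A typical patch-header filename, with the "a/" prefix diff/git adds.
header="a/drivers/net/foo.c"

# -p1 strips one leading component; emulate that with shell expansion.
p1="${header#*/}"
echo "$p1"    # drivers/net/foo.c
```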

    For optimizing the kernel, gather hardware information with # lshw or # lspci. Save this to a file, and print it. For example, I have one older laptop with
      AMD Athlon(tm) X2 Dual-Core QL-60, 64 bits, 1900MHz, L1 256KB, L2 2x512KB
    It appears this processor can run an i686 instruction set (using amd64 / i686). Instruction sets are backwards compatible to 386, 486, 586. It's also worth reading here about CFLAGS (SLKCFLAGS for Slackware), CHOST, and so on, since compiling the kernel is just another compile. Also examine these two commands:
    $ cat /proc/cpuinfo
    $ echo | gcc -dM -E - -march=native
    I save march info to the lshw file. Of course, we can adjust the march to something forced, but it's valuable to start with examining what's automatically selected (by gcc).


    So "vmlinuz" is simply "bzImage" renamed and copied into /boot. If we compile using the standard make install technique, the vmlinuz file in /boot will be overwritten during that step. Instead, use make (or make bzImage) without "install". Later, we'll manually copy the new kernel "bzImage" into /boot, changing its name to, say, vmlinuz-20090312 as we do so, preventing overwrites.
    $ make mrproper
    $ make menuconfig
    $ make dep (only needed for 2.4-series kernels)
    $ make bzImage (or just "make")
    $ make modules
    # make modules_install
    Kernel compilation varies depending on the processor speed and the number of cores. See "j" flag options for "make".
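As a sketch (the job-count choice is a common convention, not something from the original post), one compile job per CPU core is a reasonable value for "-j":

```shell
# One compile job per CPU core is a common starting point.
JOBS=$(nproc)
echo "would run: make -j$JOBS bzImage"
```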


    Booting requires System.map and vmlinuz, and each kernel requires its own System.map file. These both need to be in /boot. Also place the ".config" file in /boot for future reference, and make all of them easy to locate within /boot. E.g., supposing the unique identifier for that build were "20090312", I would copy the files to /boot (from /usr/src/linux/) as
    # cp arch/i386/boot/bzImage /boot/vmlinuz-20090312
    # cp /usr/src/linux/System.map /boot/System.map-20090312
    # cp /usr/src/linux/.config /boot/config-20090312
    # rm /boot/vmlinuz
    # ln -s /boot/vmlinuz-20090312 /boot/vmlinuz

    Double check kernels, System.maps, config files and proper softlinks within /boot.
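The symlink swap can be rehearsed in a scratch directory first. The filenames below just mirror the example suffixes above, and "ln -sfn" replaces the link without the separate rm step:

```shell
cd "$(mktemp -d)"                # stand-in for /boot
touch vmlinuz-20090101 vmlinuz-20090312
ln -s vmlinuz-20090101 vmlinuz   # link to the old kernel

# Point the link at the new kernel in one step.
ln -sfn vmlinuz-20090312 vmlinuz
readlink vmlinuz                 # vmlinuz-20090312
```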


    If you're certain about a single kernel, a single softlink named "vmlinuz" to that kernel will work, as just described, and LILO needn't be changed. However, if wanting to select between multiple kernels at boot time, LILO needs additional entries. The kernel we built above would be added as
      image = /boot/vmlinuz-20090312
      label = somenameforscreen
      root = /dev/sda1
      read-only
    Then to update LILO:
    # lilo -v

    Appendix - Cleaning source


    After we have compiled the source at least once, if we want to clean the object files and other temporary files we run:
    make clean
    This removes most generated files but keeps the configuration file. If we need an absolute cleaning, i.e. if we want to return the source to the state it was in before we started the compilation, then do a
    make mrproper
    This deletes all generated files, the configuration file, and various backup files, in effect unwinding all the changes we made to the source. After this step the source is as good as it was just after the download and untar.

    Tuesday, March 3, 2009

    toshiba l305d-s5869 - touchpad

    Synaptics touchpad fun
    Xorg touchpad notes
    Disabling synaptics touchpad
    Touchpads or, as I call them, "random cursor placement devices", annoy me. Attempting to work on a paper where natural thumb placement means inadvertently moving the cursor from line to random line, or from word to random word, brings a desire to tear out one's hair.

    1. Synclient: In my Toshiba, I have the Synaptics pad and driver, though some models have the ALPS. The command to disable the touchpad:
    $ synclient TouchPadOff=1

    Tuesday, February 10, 2009

    toshiba l305d-s5869 - more video

    Last September, I posted about picking up one of these Toshibas. At that time, the RS780 (HD3100) ATI/Radeon video chip was problematic, and may pose some additional problems. However, currently it appears the radeonhd driver has finally matured enough to supplant the proprietary fglrx driver which seemed to only provide software rendering.


    fglrx driver
    fglrx installation
    radeonhd download and install
    ati chip audio considerations


    1. Back-up the working xorg.conf in /etc/X11.
    2. Regardless of whether going forward with a newer fglrx driver or some other driver such as radeonhd, remove the outdated fglrx driver to avoid conflicts (ATI ships an uninstall script in this directory):
    # cd /usr/share/ati
    # sh ./fglrx-uninstall.sh

    3. Install git from respository or source.
    4. Decide whether to install fglrx or radeonhd.
    4a. fglrx driver
    4b. (Radeonhd)
    # git clone git://
    # cd xf86-video-radeonhd
    # ./autogen.sh --prefix=/usr
    # make
    # make install
    # gtf 1280 800 60 -x (or whatever the native resolution and refresh)
    # nano /etc/X11/xorg.conf

    More coming, need some sleep

    Tuesday, January 20, 2009

    tweet clients, drm

    Since Adobe took over Macromedia and added DRM, Flash has suffered, in my opinion. Adobe purchases content creation companies, sells the creation software, then adds a couple of layers of DRM, so whatever free player exists for the content probably slows down checking DRM and/or nags users for updates. It's brill for making money, but it squeezes out a percentage of creativity, at least from all those who can't buy the Adobe software.

    In the case of Twitter, Adobe evaluators probably determined they couldn't control content creation from the millions of contributors who constantly tweet. Instead Adobe attempts to add Twitter functionality by creating aesthetically appealing tag hashing software that those millions might want to install. But, surprise, the software, TweetDeck, also requires customers to sign a EULA and to install Adobe's proprietary run-time platform Adobe AIR. Adobe AIR appears to have Adobe's customary DRM layers and potential phone-homes (updates, statistics) drawbacks. To me, TweetDeck means "Twitter, now with statistical data-mining, nagware, and under-the-hood file manipulation". Extrapolating a bit, watch for Adobe to someday "partner" with Twitter or otherwise make "reliability" data-sharing agreements with Twitter. One of the simplest explanations for why DRM seems to make financial sense to companies is this article which notes how creation software helps lock non-purchasers out of the creative process. The article is from Techdirt.

    tag hasher options
    TweetDeck :: Linux version available but, as noted, requires the installation of Adobe Air runtime.

    TweetGrid :: Linux compatible. Nothing to download. It appears to require that one's referrer header is set to "2" in "about:config". The developer told me this is to prevent hotlinking. I left my referrer header at "0" and installed the RefControl add-on to firefox to manage the header. This helps on other sites which require headers as well (eg. Adobe!). Appears TweetGrid searches both a tweet's text and its title.

    Twitter Search :: This is a sort of rudimentary way to go about it, which is why Adobe and TweetGrid can step in, but it can find whatever I want. Apparently limits its search to tags.

    Sunday, January 4, 2009

    wine - details

    Wine installs with no problem via netpkg in Zenwalk, but the latest version needs a tweak or two to open a browser from whatever application is running in Wine.

    The first step, found here suggested adding this entry to ~/.wine/user.reg:
    [Software\\Wine\\WineBrowser] 1178036531
    The number "1178036531" will not be the same for each installation. The first seven digits are uniform throughout a Wine installation, and I duplicated the three trailing digits from the [Software\\Wine\\MSHTML] entry.
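That copy-the-digits step can be scripted. This sketch runs against a fabricated sample of user.reg (the value and contents below are made up for illustration) and pulls the number off the MSHTML section header so it can be reused:

```shell
reg=$(mktemp)
cat > "$reg" <<'EOF'
[Software\\Wine\\MSHTML] 1178036531
"GeckoUrl"="http://example.invalid/"
EOF

# Extract the number that follows the MSHTML section header.
ts=$(sed -n 's/^\[Software\\\\Wine\\\\MSHTML\] \([0-9]*\)$/\1/p' "$reg")
echo "$ts"    # 1178036531
```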

    The second step, discovered here, notes that a "%1" placeholder must be added to each line containing winebrowser.exe in ~/.wine/system.reg:
    @="C:\\windows\\system32\\winebrowser.exe -nohome"
    would be changed to
    @="C:\\windows\\system32\\winebrowser.exe -nohome %1"
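The same change can be applied with sed instead of by hand. This sketch works on a scratch sample file (the line below is illustrative); in practice, run it against a backup copy of ~/.wine/system.reg:

```shell
reg=$(mktemp)
cat > "$reg" <<'EOF'
@="C:\\windows\\system32\\winebrowser.exe -nohome"
EOF

# Append " %1" inside the closing quote on every winebrowser.exe line.
sed -i '/winebrowser\.exe/ s/"$/ %1"/' "$reg"
cat "$reg"
```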
    Following these two tweaks, Firefox opened without error when called from Wine applications.