Monday, December 20, 2010

backup data disk

Links: command line howto

It seemed there should be an easy way to make back-up copies of data disks. Of course, one can just copy the files to the system and then create another DVD, but it makes sense that there should be a more efficient way to duplicate data disks for back-ups. I found a way that takes only two commands.

terminal commands
It appears difficult to get started without at least copying the DVD data to the hard drive in the form of an .iso. Still, a single .iso is more efficient than copying over all of the files and creating a new iso. Start by putting the data DVD into the drive, unmounting it if it auto-mounts. Next, eg:
$ dd if=/dev/hdc of=somename.iso
On some systems, this might be
$ dd if=/dev/sr0 of=somename.iso
Now that we have the .iso, we can just burn as many copies as we'd like.
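The rip-then-burn round trip can be sketched as below. The device name /dev/sr0 and the growisofs burn command are assumptions that vary by system; the verify step reads back exactly the image's size and compares checksums. A stand-in file is used here so the checksum comparison is runnable anywhere:

```shell
# Hedged sketch: /dev/sr0 and growisofs are assumptions; adjust for your system.
# Stand-in "image" so the verify steps below are runnable without a drive:
printf 'demo disc contents' > somename.iso

# On real hardware the sequence would be:
#   dd if=/dev/sr0 of=somename.iso                  # rip the data DVD
#   growisofs -dvd-compat -Z /dev/sr0=somename.iso  # burn each copy

# Verify a burned copy: read back exactly the image's size, then compare sums.
SIZE=$(wc -c < somename.iso)
dd if=somename.iso bs=1 count="$SIZE" 2>/dev/null | md5sum | cut -d' ' -f1
md5sum somename.iso | cut -d' ' -f1
```

If the two checksums match, the copy is bit-identical to the image. (Against a real disc, substitute /dev/sr0 for the file in the first dd.)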

Sunday, December 19, 2010

ffmpeg update

Links: Ubuntu tutorial

2011-11-21 update: NEVER blow out your old ffmpeg when compiling a new ffmpeg until you first determine whether you need to update your x264 to compile the newer ffmpeg version (during ./configure). x264 requires that ffmpeg (eg. your old version) already be installed to provide lavf support during x264 assembly, a mind-bendingly stupid developer Catch-22. Without lavf, ffmpeg is as useful as a text editor when it comes to AV files. After updating x264 SUCCESSFULLY, you can then blow out your old ffmpeg version.

2011-09-01 update: mencoder will not open without libdirac present. That's the BBC one I talk about below. A good time to build it is when doing the ffmpeg compile; then both ffmpeg and mencoder can use it. For this, add "--enable-libdirac" to the stuff I have below in the ffmpeg configure line.
I recently upgraded/replaced an installed ffmpeg package. The idea was cell phone interoperability. The previously installed ffmpeg package apparently wasn't compiled with amr, 3gp, or 3gpp file support, so I couldn't convert the files generated by my cellphone. Upgrading ffmpeg turned out to be a trip down the rabbit hole, so I'm posting here for posterity.

Note: Zenwalk installs lib modules in /usr/lib. The newly compiled libs installed to the standard /usr/local/lib. Solution: I left the old Zenwalk modules in /usr/lib but removed their softlinks in /usr/lib. I then made new softlinks in /usr/lib, but these point to the new modules in /usr/local/lib. To restore the original Zenwalk installation, just recreate the softlinks in /usr/lib pointing to the old modules in /usr/lib and delete the ones pointing to /usr/local/lib.
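The softlink juggling might look like this. The library name is hypothetical, and the commands run against a scratch directory tree so they can be tried safely (on the real system the paths would be /usr/lib and /usr/local/lib, run as root):

```shell
# Hypothetical module name (libfoo.so.1); a scratch tree stands in for /usr.
mkdir -p demo/usr/lib demo/usr/local/lib
touch demo/usr/lib/libfoo.so.1.old    # old Zenwalk module stays put
touch demo/usr/local/lib/libfoo.so.1  # newly compiled module

# New softlink in (demo/)usr/lib pointing at the module in (demo/)usr/local/lib:
ln -sf ../local/lib/libfoo.so.1 demo/usr/lib/libfoo.so.1
readlink demo/usr/lib/libfoo.so.1
```

To restore the stock setup, delete the link and re-point it at the old module.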

VideoLAN's libx264 is the backbone of ffmpeg. When compiling ffmpeg, if ffmpeg doesn't detect a recent version of libx264, it will exit and request that a more recent version be built. Daily libx264 builds are available for download at the VideoLAN site. The difference between libx264 (and libxvidcore below) and most other modules is in being assembled, rather than compiled. Assembly takes advantage of machine-level instruction efficiency when encoding video. Accordingly, I downloaded and installed the yasm assembler before configuring libx264. My configure line for libx264 then looked something like:

$ ./configure --mandir=/usr/man --enable-yasm --enable-visualize --enable-shared

Libxvidcore is a codec of similar importance to libx264. Its source was easily located and downloaded. After untarring it on my drive, I found the Linux source a couple directories from the top, in build -> generic. Googling around, I found conflicting information about whether libxvidcore could be assembled or compiled. Running $ ./configure --help also did not provide a solid answer. Eventually, I just compiled it without any selected configuration options, though I would have preferred to assemble it, had it been clear how to do so.

Libschroedinger is for the BBC's Dirac encoder. Even though I was using the most recent ffmpeg from an svn repository, I couldn't get ffmpeg to properly recognize libschroedinger. Eventually, I took libschroedinger out of my ffmpeg configure line.

other updates
During configuration of ffmpeg, prior to the make phase, ffmpeg requested several other updates: the libtheora open-source encoder, swscale, and opencore-amr. Opencore-amr is a downloadable Sourceforge project, swscale comes inside the ffmpeg source, and libtheora source is found in the downloads portion of its site. It appeared I didn't need to update libspeex during this particular ffmpeg upgrade. Another nice thing is that libfaad and libfaad-bin are nowadays incorporated into ffmpeg and no longer need to be separately compiled.

ffmpeg & configure line
Finally, after completing all updates to the supporting modules above, I located the svn URL at the ffmpeg download page and retrieved the most recent ffmpeg source code:
$ svn checkout svn:// ffmpeg

The final configure line, the configuration which resulted in a working ffmpeg installation, is below. It's long and will wrap several lines in this blog, but it's actually a single terminal command:
$ ./configure --mandir=/usr/man --enable-shared --disable-static --enable-pthreads --enable-x11grab --enable-swscale --enable-libfaac --enable-libmp3lame --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-gpl --enable-postproc --disable-ssse3 --enable-libopencore-amrwb --enable-libopencore-amrnb --enable-yasm --enable-version3 --enable-nonfree --arch=i686 --cpu=i486

updated library list
I had to manually link these from /usr/lib to their actual locations in /usr/local/lib, since ffmpeg sought them in /usr/lib.

Edit: this easily could have been avoided by adding a line in /etc/ and then running ldconfig, or by adding --prefix=/usr to the $ ./configure line of ffmpeg.
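The ldconfig route would look roughly like this. The file name /etc/ld.so.conf is from memory (the path above was clipped), so treat it as an assumption; the sketch works against a scratch copy so it's safe to run unprivileged:

```shell
# Assumed path: Slackware-family systems keep the linker search list in
# /etc/ld.so.conf. Demonstrated on a scratch copy; do the real edit as root.
: > ld.so.conf                       # stand-in for /etc/ld.so.conf
echo "/usr/local/lib" >> ld.so.conf  # add the new library directory
grep -x "/usr/local/lib" ld.so.conf  # confirm the entry took

# For real: append the same line to /etc/ld.so.conf, then run: ldconfig
```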

final tally
The entire ffmpeg update process took most of a weekend, but ffmpeg seemed to work without a hitch following completion. In fact, it seemed to run faster, perhaps due to using an assembler for libx264. After the update, I quickly converted a .3gpp cell phone file to a .wav:
$ ffmpeg -i recording36180.3gpp -vn -acodec pcm_s16le -ar 44100 -ac 1 lecture_03.wav
A list of the required actions to update ffmpeg:

  • yasm - download, compile, install

  • libx264 - download, assemble, install

  • libxvidcore - download, compile, install

  • libopencore - download, compile, install

  • libtheora - download, compile, install

  • ffmpeg - svn download, compile, install

  • library linking (likely avoidable for those with "ld" knowledge)

Thursday, November 25, 2010

Garmin Nuvi 295w Linux

Links: Tiger Direct pricing

Looking closely at this Garmin below. It appears to have GPS functionality and, near WiFi sources, email and web. That would seem to avoid the necessity of a service plan with a phone provider, and I certainly don't want another $$ervice plan. The catch is I'm uncertain how this unit will interact with Linux. I want PC connectivity to update maps and download images. If all this works, it looks like a nice travel toy for the glovebox.

We'll see. This is a potentially nice toy on sale for around $90. It comes with this in the kit.

The complaint I'm seeing is that this unit appears to drain its battery rapidly. In my experience, battery zapping is a common problem on my cell phone when I'm in wireless mode, so I'm guessing the Nuvi's WiFi might be always-on by default. This would be an additional load on top of its GPS power needs. Before purchase, I'd want to learn whether it's possible for users to individually control power to these features.

Saturday, August 28, 2010

Flash Player updates, fail

Links: Flashplayer

In Zenwalk (mini-Slack), I run Iceweasel for a browser. This is just a stripped Firefox that doesn't use copyrighted code, a nice touch. However, it means the User Agent string is typically unrecognized at mainstream sites like Hulu. When there is, for example, a periodic Flash update that sites like Hulu want us to install, Adobe provides a warning message that I have an unrecognized or unsupported system.

In spite of the ominous warning message, Adobe provides the latest module at their site. Wipe out the old directory and the two softlinks, put in the latest, and create two softlinks to the new module. Right as rain.

download the module
Go to the Adobe Flash Player page, which has a sniffer to determine that the OS is Linux. I took the basic tar.gz version, which is pre-compiled (can we say "proprietary"?). Unpack it. That's it. However, there are times when Flash updates have entirely broken my browser, which then requires an OS update. We never want to update an OS for any reason. We want a stable, 30-year installation.

remove the old stuff...
To be reasonably sure older versions didn't linger and cause conflicts:

# rm -rf /usr/lib/flashplugin*
# rm /home/foo/.mozilla/plugins/
# rm /usr/lib/mozilla-firefox/plugins/
# rm /usr/lib/mozilla/plugins/

...and in with the new stuff
The softlink commands are wrapping here in the blog's narrow column; there are just two of them.

# mkdir /usr/lib/flashplugin
# cp /home/foo/downloads/ /usr/lib/flashplugin/
# ln -s /usr/lib/flashplugin/ /usr/lib/mozilla/plugins/
# ln -s /usr/lib/flashplugin/ /usr/lib/mozilla-firefox/plugins/
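The blog's narrow column clipped the module filename from those commands; Adobe's tarball ships the plugin as libflashplayer.so (from memory, so verify against your download). A scratch-tree reconstruction of the four commands, runnable without root:

```shell
# Reconstruction with the (assumed) module name libflashplayer.so; a scratch
# tree stands in for the real /usr/lib so no root is needed to try it.
mkdir -p usr/lib/flashplugin usr/lib/mozilla/plugins usr/lib/mozilla-firefox/plugins
touch libflashplayer.so  # stand-in for the unpacked module

cp libflashplayer.so usr/lib/flashplugin/
ln -s ../../flashplugin/libflashplayer.so usr/lib/mozilla/plugins/libflashplayer.so
ln -s ../../flashplugin/libflashplayer.so usr/lib/mozilla-firefox/plugins/libflashplayer.so
ls usr/lib/mozilla/plugins/
```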

Friday, August 6, 2010


Links: TiLP   TiEmu

The effort to get a Ti-89 connected to the laptop. Didn't want the Windows stuff that came with the calculator. Located the TiLP Project for Linux. All good. All the files are linked from the site.

Versions installed during this blog entry:
  • tilp2 - 1.16
  • libticables2 - 1.3.3
  • libticalcs2 - 1.1.7
  • libticonv - 1.1.3
  • libtifiles2 - 1.1.5

The program comes in two packages, the program itself and a package of libs. Obviously install the libs package first, which will do the libticables2, libticalcs2, libticonv, and libtifiles2.

When compiling, be sure to use

$ ./configure --prefix=/usr

Otherwise there's hell to pay with subsequent library location. The libs seem to have to be installed in a certain order, which I sussed out from the READMEs...
  1. libticonv
  2. libtifiles
  3. libticables
  4. libticalcs
...which worked.
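That install order can be captured in one loop. The source directory names are whatever the tarballs unpack to, so this sketch just echoes each step as a dry run rather than executing real builds:

```shell
# Dry-run sketch of the README build order; drop the echo to actually build.
for lib in libticonv libtifiles libticables libticalcs; do
  echo "cd $lib && ./configure --prefix=/usr && make && make install && cd .."
done
```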

loading applications
Take the stats program, for example. Download it from the TI site; its name is tistatl.89k.
  1. Connect the Ti-89 w/usb
  2. $ tilp
  3. In the TiLP window, select tistatl.89k and send it to the calculator

I also downloaded Group File Manager v. 1.06, but haven't installed or played with it yet. I'll add the info and edit this portion of the page one of these weekends.

Sunday, June 6, 2010

Yellow Dog - 2002 eMac Pt 2

Links: Open Firmware discussion

Round two of the Yellow Dog Linux installation on a vintage eMac. At this point, the variables had settled down to PPC idiosyncrasies around disk partitioning, boot loading, and firmware keystrokes. Several weeks ago, in an earlier post, I thoroughly cleaned and physically inspected the eMac. This post is more about the installation. The overall order:
1. get yaboot (mac-ish lilo/grub) operational and boot a kernel
2. partition using mac-fdisk
3. make file systems on the linux specific areas
4. install core packages and modules (eg. network)
5. packages
So, pretty standard. The only reason for blogging this is the idiosyncrasies.

The first thing to consider was the "SuperDrive". Initially, I thought it was able to read DVDs, making for an easier installation, but a second check of the specs showed it to only read and write CDs. Further, I had read on several sites that one couldn't boot from the drive unless jumpers were moved to make it the master drive. I was messing with the CD drive jumper for quite a while until I realized it would boot fine if I had the correct Open Firmware keystrokes. The two greatest of these were:
* holding "c" to boot from the CD. Holding down "c" during start-up directly loads yaboot from a CD. No jumper changes were necessary.
* apple+option+o+f for Open Firmware access. This is a much more analytic approach, since a person can stay in the Apple firmware and check settings and so forth before deciding to access their boot CD. When it's time for that, I used
boot cd:\\yaboot
Other commands allow a person to check the CD's files first
dir cd:\\
and there are others listed here.

disk partitioning
The problem here was making sure Linux could see the Mac partitions properly and that the hard drive was consistent with what the Mac expected to see at boot time. I hadn't bothered to read this and simply used my (previously) reliable cfdisk approach. Ineffective. A Mac cannot boot without Mac-specific boot areas defined in its partition map. One can use cfdisk to start that process, but cfdisk and fdisk can't reliably define all 4 or 5 partitions (Mac uses 9 partitions with its own OS!) in the Mac protocol. Even if you start with cfdisk, you'll need to finish with mac-fdisk. I decided to learn how to do the entire process in mac-fdisk, just to limit the number of applications involved. That said, mac-fdisk has a learning curve.
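For reference, a minimal mac-fdisk session might look like the sketch below. The keystrokes are from memory of the PowerPC install guides, so double-check the man page before writing a real partition map:

```
# mac-fdisk /dev/hda        (whole-disk device; yours may differ)
i                           # initialize a fresh partition map
b 2p                        # create the Apple_Bootstrap partition for yaboot
c 3p 1G swap                # a 1GB swap partition
c 4p 4p root                # give the remaining space to root
p                           # print the map and double-check it
w                           # write the map to disk, then q to quit
```

The Apple_Bootstrap partition is the piece cfdisk can't create; it's what lets the firmware find yaboot.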

continuing problems - Ubuntu
Yellow Dog continued to fail during installation. I then attempted Slackintosh, which also failed. I then attempted Ubuntu. It failed. The Ubuntu failure, however, was traceable to a specific problem. I logged a bug about it and waited for an updated daily build (Maverick version). The daily build was repaired; however, this time the daily build was too large for a CD! Since this older Mac only has a CD player, not a DVD player, I was limited to a USB install. This is extremely complex before you know the tricks. One of those "easy when you know how", "difficult to find information to learn how" issues. A day of googling and trial and error was involved. I was irritable; I'd already put in a couple of days on various PPC CD-install attempts.

I had the oversized iso, but I had to put it on the USB. Should I leave it in the iso format or extract the files from the iso? Hard to tell from Googling. I decided to go with extracting the files. I had no app to extract the files from the iso. This meant I had to mount it. I went into the directory where I downloaded the iso, made a temporary folder there, and mounted the iso in there to get at the files.
$ mkdir temp
$ sudo mount maverick-alternate-powerpc.iso temp -o loop
$ cp -r temp/* /media/usb1/

I got a few errors about not being able to make symbolic links
cp: cannot create symbolic link `/media/usb1/dists/stable': Operation not permitted
cp: cannot create symbolic link `/media/usb1/dists/unstable': Operation not permitted
cp: cannot create symbolic link `/media/usb1/ubuntu': Operation not permitted
I made a note of their targets and went into the USB disk and (re)created the softlinks there manually:

$ cd /media/usb1/dists
$ ln -s maverick stable
(If you make a mistake, use unlink {link name} to delete the soft link, then remake it.) HOWEVER, I was thwarted yet again because the USB stick was formatted in FAT32, which does not allow the creation of symlinks. So far, thwarted about 6 different ways on this install. Since Open Firmware will only read HFS, there is a Catch-22: how to copy files, symlinks and all, onto a USB stick that both Open Firmware and the Linux installer can read.

Conclusion: It appears that the only f*cking way to do this Ubuntu install, since it's oversized for a CD, is to purchase a DVD player, burn the iso to a DVD, and go that route. Wow.

Monday, May 3, 2010


Links: gui kits comparison   C++ examples   Connecting C++ to MySQL   makefile tutorial   make manual   YoLinux C++ links   C++ tips  CScene mag   Vid:Glade, GTK+, C++

gui issues
So, after some functionality with database interaction, is it worth the effort to put this into a GUI format? For building Windows apps from Linux, the API will apparently be wxWidgets or VCF. For our Linux C++ program, there are too many to choose from. There is no unified Linux GUI API, unlike Windows. Some would say this is one of the few true drawbacks of Linux; others would say it demonstrates Linux diversity. Essentially, you either have to go to the trouble of coding in Xlib, at the time-consuming atomic level, or parse through the APIs until one seems to match up with the look and feel (L&F), functionality, and bloat that you are willing to tolerate.

This comparison shows several different formats. SFML and FLTK, maybe even Xlib, will probably be in my future, but the Glade+GTK(gtkmm)+C++ approach seems good for the present. Users of my application will need the GTK libraries installed to run it. Xlib coding may be the only way to avoid that kind of dependency, since X is installed on all Linux systems.

Glade is a useful GTK tool for interfacing with C++. In it, we can create the GUI pieces and signals from user actions (clicks, data entry, etc), and send them to our underlying C++ program for intelligent processing. In my view, it's better than an IDE, since it just manages the interface without messing with the logic. On the negative side, it creates a glade file for the GUI, and that file requires a separate library, libglade, to be present at compile time. That's one layer of bloat, but probably worth it. Additionally, making sure the make process handles the extra type of file means a slightly more complex compile command or makefile.

$ gcc -g -O2 -export-dynamic testapp.c -o testapp `pkg-config --cflags --libs gtk+-2.0 libglade-2.0`

It gets more complex if one first transforms the glade file into an object and inserts it into the final application to make it more portable.

Friday, April 30, 2010

C++ - Eclipse

Links: Short tutorial   C++ examples   Connecting C++ to MySQL   makefile tutorial   make manual  

Test-driving the Eclipse Java/C++ interface (IDE) as part of my continuing work on the Paperhater database solution. Previously, I thought a LAMP was the best idea, but the four layers of software make it challenging to focus on function. I intend to do a LAMP version as I gain PHP ability, but a C++ application needs to be accomplished to get basic functionality. One hitch...I don't know any C++.

Make (gcc)
Although this post is mostly about the Eclipse IDE, I will likely eventually rely on hand-made Makefiles for more fine-grained control. As an example, Paperhater needs to interface with MySQL. There are special compilation commands to allow code to interface with MySQL. These are unlikely to be managed by a generic IDE, unless it allows for customization.
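A hand-made Makefile along those lines might look like the sketch below. The target and file names are invented, and it leans on mysql_config, the helper shipped with MySQL's client development package that prints the needed include and link flags:

```shell
# Hypothetical Makefile for a C++ + MySQL build; mysql_config supplies the
# special flags a generic IDE wouldn't know about. Written to a file here so
# the sketch is self-contained:
cat > Makefile.demo <<'EOF'
CXXFLAGS += $(shell mysql_config --cflags)
LDLIBS   += $(shell mysql_config --libs)

paperhater: main.o db.o
	$(CXX) $(CXXFLAGS) -o $@ $^ $(LDLIBS)
EOF
grep -c 'mysql_config' Makefile.demo
```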

This interface is a great thing. It's oriented around Java, but I don't want to write in Java until they solve the problems they seem to have with the JRE in Linux environments. The C++ IDE (Eclipse calls it the "CDT", C/C++ Development Tooling) can be downloaded directly into Eclipse. It's just a plug-in for Eclipse, since Eclipse was designed for Java programming, but it's nearly as robust as the Java tool.

The CDT is Eclipse version specific. When I went to the CDT download page, it noted I would need the link specifically for my Eclipse release. I have Eclipse "Ganymede" (aka version 3.4.1). I copied the link they provided for version 3.4.1 to my clipboard.

With the link in my clipboard, I went to Eclipse and: Help -> Software Updates -> Available Updates. Select "New Sites" and put the link in there. Eclipse downloaded and configured the necessary CDT files once I pointed it to that link.

Sunday, April 25, 2010

Yellow Dog - 2002 eMac Pt 1

Links: the eMac   eject the cd tray   eMac disassembly

A friend recently graced me with a CRT display 2002 eMac (apparently built 07/11/2002). The system would not boot as of late. These early eMacs are underpowered dinosaurs with 128MB of RAM, a 700 MHz processor, and so on, although they were the thing to have in 2002. To make it available for classroom use, a free OS, a keyboard, a mouse, and a GB of RAM seemed right, hopefully while remaining beneath $100.

some features
  • Model A1002, G4 700 MHz PowerPC
  • Firmware 4.x OS
  • Display 17", CD-R/W drive
  • NVIDIA graphics (32MB VRAM)
Upon disassembly, I learned I had also lucked into the prior owner's decision for a 512MB RAM upgrade, giving a total of ~641MB RAM. Taken with a 700MHz processor speed, watching video might or might not be possible.

Yellow Dog
To get it running, Yellow Dog Linux looked like a natural choice. They've apparently been releasing a PPC distro (in this case 6.1, dated 20081119) for some time. The originally installed software was Mac OSX 10.1.4, and the machine supposedly could handle up to 10.4.11. If any of these became available later, I could always blow out the Yellow Dog.

For hardware upgrades, it looked like I could get a $50 keyboard, a $60 RAM upgrade to 1GB, upgrade the 2x speed "Super Drive" (optical drive) to a 32x DVD-RW, and put in a larger IDE HDD than the standard 60 GB, if I wanted. But a keyboard and mouse at least.

CD/DVD tray
In order to install Yellow Dog, I had to open the CD/DVD tray. The CD/DVD drive tray flap on the front of the shell would not open with the system powered-off. Powered-on, the system was not booting, so I could not use the keyboard eject. So, it was a Catch-22. Luckily, the shell needed to be removed anyway to clean 8 years of dust from the system.

I found, upon disassembly, that the flap opens from the top to the bottom, that is, the hinges are on the bottom of the flap covering the DVD drive. Unpowered, one can insert a small screwdriver at the top of the flap and pull the flap down. Additionally, the Airport card is behind a cover plate, behind that flap. To change the Airport card then, open the flap and remove the interior cover plate (two Phillips). Pulling the Airport card should be accomplished before the disassembly required for swapping the DVD drive or the HDD.

Disassembly was no problem; essentially, a 2.5 mm hex and a good Phillips. I followed these instructions and would only add one clarification: the power switch cable needs to be disconnected from the chassis, not from the cover; this was left unstated in the guide. Needle-nosed pliers were helpful to apply pulling pressure on the plug itself rather than on the plug's (thin) wires. The plug is keyed for proper reinstallation. Disassembly voids the warranty, apparently, but that was no problem on a donated system.

Once apart, as expected, the insides were caked with dust. Dust is a large problem for enclosed cooling systems that rely on unfiltered air (think also "laptop"). I took an air hose to the uncovered chassis before proceeding further.

As I got into the system, it appeared that a RAM upgrade had already taken place. It appeared to be a 512MB card, which would provide about 640MB total RAM, probably enough. At any rate, adding or deleting RAM involves nothing more than removing the bottom plate. The same for changing the jumper on the optical drive, albeit the angle is slightly difficult. I moved the jumper from the far left (slave) to the center jumper, hoping this was the right setting for "master". This is needed so I can boot off the DVD. We'll see if I got it right -- there was no jumper sticker.

Sunday, April 11, 2010

Data acquisition, solar

We all are trying to use alternative energy when we can at this point, arguably to save costs, but certainly to help with the various resource problems of the stupendous global overpopulation.

At any rate, let's look at a few options that might be available to us:

Energy Accounting
The first step in residential use is determining usage. Once we know

A simple solar PV mat can charge your car battery or cell phone while it blocks the light from coming onto your dash. These

Thursday, April 1, 2010

PHP - MySQL arrays

PHP is a glue for many web sites. It shapes the pages of the browser interface, and also accomplishes interactions with the database, when needed.

The grist of Web activities involves databases. What comes out of and goes into a database during a web transaction often requires additional manipulation. Many times, these processes require arrays. Suppose we're using MySQL as a database, calling a few records to compare with some user-entered data. Sometimes the data can be manipulated on the hosting database server. More often, data is retrieved as an array and needs to be managed after it has been extracted. For example, it might be extracted with a PHP command, manipulated, and then the database might subsequently be updated.

Manipulating arrays is a core and somewhat complex PHP activity. Aside from security, arrays play a role in most of the complex PHP functions. As noted above, user-entered data might be arriving from the browser side at the same time that info from a database is being extracted. In such a case, three or more arrays, even if temporary, might be created with PHP. PHP would also accomplish the comparisons, increments, and so on, that would take place with these arrays. A programmer who can understand and write scripts to manipulate arrays can quickly learn the less complicated PHP functions.

Here are some resources for learning array-related PHP skills:
Dovlet Tatlok
WebMaster World
PHP Freaks
Big Resource
PHP Builder
reading directories

Monday, March 29, 2010

Python - PHP - PostgreSQL - Android - C++

Links: Python GUI thread   wxPython tutorial

A lot to learn, but probably the only way to properly get to the point where I can catalog files. PHP first (web-based), Python next (desktop), PostgreSQL (desktop, but growing on the web), and then a step up from Python to C++. C++ is apparently more efficient than Python, but takes deeper knowledge. I'm not fond of Java until its Run Time Environment becomes more refined and stable.

Python flavors
The issue with Python is that, once one gets to the GUI level, there are two main flavors. One, Tkinter, is essentially Python using the Tk libraries. wxPython has its own library set. It seems that wxPython is more on a growth path than the older Tkinter; I read one description that said so. So far, I've been happy with wxPython.

Connecting to PostgreSQL does not seem problematic. Python imports a module called psycopg2, which seems to have hooks for the DB. At this site, there is a simple tutorial about how to make the connection.

For programming, using arrays is the real work.

Don't care, but have to learn it.

Difficult part is using arrays in best possible ways.

Tuesday, March 9, 2010

Firefox - bookmarks - SQLite

Links: SQLite browser   SQLite backup

My parents noted some problems with Firefox bookmarks; crashes when attempting to organize bookmarks and so on. I experienced some weird effects from a large bookmark file on my system as well. I looked into how the bookmarks are stored.

We users typically access and manage our bookmarks via Firefox. However, unseen to users, Firefox relies upon SQLite, a lightweight file-based database engine, to store and retrieve bookmark information on our hard drives. Many applications, like Firefox, require limited data storage and retrieval; a full client-server database would mean an extra installation process with additional problems. Accordingly, it's not uncommon to find SQLite built into many lighter applications; it's a free, open-source library that developers can easily adapt. What this means with Firefox is that the bookmark data itself is kept in an SQLite file called places.sqlite. This file lives in the default, hidden Mozilla folder in the user's home directory, eg:

/home/foo/.mozilla/firefox/bQt4mrl3a.default/places.sqlite
(Of course, "foo" is a stand-in for whatever one has actually named their user directory.) Although the data is in SQLite format, the bookmarks remain portable; Firefox allows users to export and import bookmarks as html files, instead of the less familiar SQLite database file format.

bookmark problems
If there are problems with bookmarks, it's not always clear whether the problem is in a single bookmark entry from a selected website, or whether the SQLite index which manages all the bookmarks has become corrupted. It's much easier to restore the database, so this is probably the best starting solution attempt.

database solution
As a first solution, it's easier to restore the database, which can be done in about 3 minutes. The steps, as I do them, are to 1) export the bookmarks file, 2) delete the old bookmarks file (and related backups) in the mozilla folder, and then 3) re-open Firefox and import the exported bookmarks. This will reestablish and repopulate the database, which should then be stable.

In greater detail, the steps are:

1) Open Firefox, go into Bookmarks -> Organize Bookmarks -> Import and Backup -> Export HTML. This exports the bookmarks in html format. I gave my exported file the name bookmark.html, but any name is fine.

2) Exit Firefox, open a terminal, and delete places.sqlite and other bookmark related files:
$ cd .mozilla/firefox/bQt4mrl3a.default
$ rm places.sqlite
$ rm bookmarks.html
$ cd bookmarkbackups
$ rm *

3) Re-open Firefox and import the previously exported bookmark.html file (or whatever you named it when exporting).

the database solution didn't work
The steps above address potential database corruption, since a new database is created when Firefox is re-opened after deleting the bookmark files. If this does not solve the problem, then the culprit might be a corrupt bookmark entry, which would have been imported back into the database and so would persist. I have hundreds of bookmarks, which means that looking through them, bookmark by bookmark, attempting to locate a corrupted one, might take hours and hours of hunting. If the solution above doesn't repair my problem, and if I know it's not a hardware or OS problem, I would find it easier to simply delete all bookmarks and start over. However, if I were retired and had loads of free time, I might read on here...

entry by entry solution
First, it's worthwhile to make a copy of places.sqlite, and then examine the copy instead of the original. If one is able to locate the problem in the copy, one can repair it there without damaging the original. Copy it back when complete and tested.

Secondly, going into the places.sqlite file to examine it, bookmark by bookmark, can be done by Command Line commands or via a GUI. We're talking about masochism either way -- the ultimate masochism would be using the command line. Using command line sqlite commands is beyond the scope here, but there are many tutorials out there, such as this one at the SQLite site.

Thirdly, because command line appears less efficient in this case, I'd suggest using a GUI SQLite manager of some sort. This should make an inspection of the places.sqlite file easier than with the command line, at least for most people. The free, open-source SQLite Browser may be a good option.

SQLite Browser
I found this application easy to download, compile, and install. It took about 10 minutes altogether. I downloaded the latest release from the SQLite Browser sourceforge project and then followed the standard steps, with two variations:

1) it uses qmake since it's Qt-based:
$ qmake -unix -recursive
$ make

2) it doesn't install, just compiles a binary. But we can copy the binary into any directory we wish and start it from there.

Once the binary was created, I put it in a folder with some other apps and started it from the command line:
$ ./sqlitebrowser

Fig. 1 SQLite Browser

With the program running, open the copy of the bookmarks file places.sqlite (Figure 1). The bookmarks file/database appears to have a fairly elaborate structure. Looking for corrupted characters would take too much time for my taste in this structure; it's easier for me to wipe out my bookmarks and start over again. That said, SQLite Browser is a helpful application and would be very useful in a situation where one had a simple SQLite database, say, a personal database for birthdays or anniversaries.

direct backup of an SQLite database
We noted above that we could simply copy the database file (in Firefox's case, places.sqlite) and use that for a backup. However, if a person wants a compressed snapshot which includes the entire directory structure, it's relatively easy from a terminal:

$ tar cvfz `date +%Y%m%d`_mtdb_bkup.sql.tar.gz /path/to/database/

And then to restore:
$ tar xvfz ARCHIVE.tar.gz
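As a sanity check, the backup-and-restore pair can be exercised end to end on a throwaway directory (the directory and file names below are placeholders):

```shell
# Create a stand-in "database directory", archive it with a date-stamped name,
# delete the original, and restore it from the archive.
mkdir -p dbdir
echo "bookmark data" > dbdir/places.sqlite
archive="$(date +%Y%m%d)_mtdb_bkup.sql.tar.gz"
tar cvfz "$archive" dbdir
rm -r dbdir
tar xvfz "$archive"
cat dbdir/places.sqlite    # the restored copy is intact
```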

Much more can obviously be said about these bookmark problems and about SQLite. It's good to understand that there is an additional layer beyond Firefox, under the hood. A couple of options were provided above and it's my hope we can all hang on to our bookmarks.

Saturday, February 27, 2010

more data fun - PaperHater

Links: Common MySQL Commands  WizzyWeb   VistA database

I've been working on going paperless since about 2002. It's a maddeningly dull, time-consuming affair, so, for my own amusement, I call this project "PaperHater". I'm not a php programmer or CS guy and it's been a long road, but, who knows, maybe someday it will become an open-source project.

In mid-2002, I started scanning my documents. These varied as widely as old letters from my deceased grandmother to recent bank statements. I learned that the number and variety of the generated files grew rapidly and could not easily be organized; I was spending significant time categorizing and renaming the files using customary file-naming and folder-naming conventions. In spite of this effort, and with only a couple thousand files (albeit growing in number), I had to admit to myself that I was wasting more time arranging, and failing to find, files than I had previously spent hassling with the inconvenience of paper documents. What good was the computer accomplishing for me?

I noted one common feature in the software products that could actually have (too expensively) solved my problems in 2002: a DBMS. A proper database sitting between thousands of stored electronic files and the user attempting to retrieve one of them appeared to make all the difference. However, since an immediate solution looked neither likely nor affordable at that time, I reverted to storing documents in banker's boxes. I did this for several years, but began slowly to acquaint myself with database programs.

By 2009, I had become routinely frustrated with paper accumulation again. Stuck between a computer and a pile of papers, there appeared to be no easy way out. Since early 2009, alternating Saturdays have been spent reading and learning installation processes, webserver configurations, and the like -- LAMP stuff. It seems to have paid off. I'm roughly at the 2/3 mark on this project. Over the next several months, I intend to add an entry here and there about PaperHater's progress to summarize for myself and to possibly help other home users who might be attempting to design and implement something similar on their own system(s). Good luck to all of us. It would be helpful if expert organizations with CMS-type experience could release reasonably priced PC/Mac database solutions instead of gouging us.

step 1

Design the database. I mostly use localhost for this, since it's fast and secure, but sometimes run things on the cloud server. In other words, I spent the many weekends required to learn how to configure reliable LAMPs on both my localhost and server site, and to have them running with the permissions and tweaks (think, e.g., php.ini) I wanted. Following that portion came additional reading, head-scratching, and back-of-envelope sketches; then downloading and installing working applications. Finally, using phpMyAdmin and MySQL commands, I reviewed and modified table structures until I could determine a desirable schema. I also worked on php scripts. Eventually, I had a set of working databases, some copied to .sql scripts, and some partially completed php scripts. That's mostly where it stands now, but I'll move on to describe what's going to come next.

step 2

Copy the structure of a localhost or online database I've designed and tested and want to use for interaction. We put this into an .sql script.
(localhost)$ mysqldump --no-data database >/home/foo/database_template.sql

(online)$ mysqldump -h HOST -uUSER -pPASSWORD --no-data database >database_template.sql

Then go to whatever web address I keep databases on, create a new database there (thanks to this site), and give it the structure we want from the .sql script.

$ mysql -h HOST -uUSER -pPASSWORD
mysql> CREATE DATABASE newdatabase;
mysql> USE newdatabase;
mysql> SOURCE /home/foo/database_template.sql;
mysql> quit;

The new database is now ready to accept data-entry and to run queries.

step 3

Upload some php scripts. I've written a large percentage of the interfacing php scripts but, today, I learned about WizzyWeb, a $99 product which could help tune my scripts or quickly create similar ones. That seems a very reasonable price if it does what's advertised; if it saves me two hours, I've made back my money.

step 4

Enjoy. But remember: the PHP, PostgreSQL/MySQL, Apache, etc. are all on the server side. Users must still create some JavaScript additions for their served pages to influence the browser client. However, it is possible to write PHP so skillfully on the server side that a client needs very little browser-side code to have a nice experience on the site. And remember that server-side code is usually more secure, since it can't be tampered with without compromising the site itself, not just a webpage. Here's a list of some pre-built server-side PHP "UI frameworks" written in PHP (as opposed to Java or .NET). Edit: Also, Sitepoint's poll results from 2015 for PHP frameworks.

PHP vs Python vs Java vs C#

Java, no way, since the 2010 Oracle purchase of Sun. C#, no way, since it's more or less locked to Microsoft's .NET framework. But Python now has modules which allow it to work on a server (of course it can still be used to code stand-alone programs too). PHP is inherently designed to work on a webserver, and so is more naturally integrated into HTML, but if one wants to be consistent, one could just do everything with (open-source) Python, using its server-side add-on modules (scroll down to "Compared as Web Development Frameworks"). Even low-end ISPs like APlus, which lags terribly at implementing anything, have PHP4/5 and Python capacity.

Thursday, February 25, 2010

zenwalk - corrupted disk

Links: helpful command explanations

A couple of Christmases ago I installed Zenwalk 6.0 on one of my parents' HP systems. It remained mostly reliable until this week, when it froze and subsequently stopped booting properly. Ultimately, we found that their system contained one or more inode conflicts. We fixed these cross-links with two commands and roughly 1,000 confirmation keystrokes, all via an hour-long telephone call. My mother is a patient and persistent soul.

Part one
My folks told me they recalled being online arranging bookmarks in Iceweasel (Zenwalk's rebranded Mozilla Firefox). While they were arranging, the system froze entirely, apparently remaining unresponsive even to the last-resort X11 Ctrl+Alt+Backspace exit. Mom said she next attempted a hard reboot, but this also failed. During the attempt, error messages describing cross-indexed inodes and a corrupt superblock were scrolling past; this was no doubt fsck scanning /dev/sda1. Following fsck, the script eventually exited to a single-user root login. No action was taken at the root login, and the system eventually rebooted itself and again initiated fsck. And so on. Then my phone rang ;)

Part two
It appeared the most direct route was to reboot the system and let it exit to the point where it allowed the single-user root login. From that point, we could enter commands directly, such as to locate back-up copies of the superblock on /dev/sda1. With a back-up copy located, we could run a clean fsck and resolve any doubly referenced inodes. Accordingly, the first step was to retrieve a list of the backup superblocks.

# dumpe2fs /dev/sda1 | grep superblock

This provided us with a list of about 12 back-up superblock locations. The first of these was "32768".
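The same kind of superblock listing can be practiced safely on a throwaway ext2 image in a plain file, assuming e2fsprogs is installed; no root access or real disk is needed:

```shell
# 16 MB image with 1K blocks, so backup superblocks exist (the first backup
# normally lands at block 8193 with these parameters).
dd if=/dev/zero of=demo.img bs=1024 count=16384 2>/dev/null
mke2fs -F -q -b 1024 demo.img
dumpe2fs demo.img 2>/dev/null | grep -i superblock
```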

Part three
Used the back-up superblock to reestablish order in the system files and clear up the multiply-referenced inode conflict.

# fsck -b 32768 /dev/sda1

Fsck eventually resolved the inode conflicts and some directory problems. During the process, my mother had to answer "yes" nearly 1,000 times.

The problem was solved for the time being. Since I could not access the system, it remains unclear whether the problem was related to Iceweasel, to a possible hard drive problem, or to some combination of these.

Wednesday, February 17, 2010

accounting...and php

Links: IonCube loader instructions

Thought I'd try that NolaPro software and see what it's all about to put something enterprise-level on a localhost, and what kinds of security holes it opens. Blah blah blah. Along the way, I fell prey to a php problem I was embarrassed to have overlooked, and got to try out the IonCube code obfuscator with its associated configuration challenges.

Figure 1

IonCube's product is a code obfuscator, one that makes it nearly impossible for visitors to determine the underlying code of a website's pages, sort of like Zend. NolaPro requires the IonCube loader as a prerequisite for installation. The IonCube loader is free; the complete IonCube package is not. This business model reminded me of the early PDF era, when a free Adobe Reader was provided to read PDF files, but Adobe required that Acrobat be purchased, at a significant price, to produce them. At any rate, I navigated to IonCube's main site, then to "Products->Free Loaders for Encoded Files", and downloaded the latest .tgz. Instructions were to untar it and move its entire folder into /var/www/htdocs. The folder appeared to contain a group of libraries.

To install NolaPro itself, I untarred the NolaPro .tgz in my home directory. From there, the instructions were to move the entire folder into /var/www/htdocs. They also suggested full "777" permissions on this folder, which I don't like to grant. I made sure Apache was on and pointed my browser to http://localhost/nolapro. From there, NolaPro's installer displayed the necessary php.ini settings. Here's a summary of its php complaints.

Figure 2

I thought that mbstring was properly configured in my php.ini, so that was weird, along with the gd thing. It seemed an odd diagnostic at first....

php embarrassment
A check with http://localhost/phpinfo.php soon solved one problem. How could I have overlooked that php wasn't loading my php.ini file? Idiot.
Figure 3
I remedied this problem by copying php.ini to /usr/local/lib/php.ini. But, about this time, I also recalled I had initially installed php v5.2.8 on my system and subsequently upgraded to 5.3. Although it can't be seen in the photo clip above, http://localhost/phpinfo.php showed that Apache was reporting version 5.3. However, checking $ php -v yielded the following:
PHP 5.2.8 (cli) (built: Jan 9 2009 16:26:32)
Copyright (c) 1997-2008 The PHP Group
Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
Nice -- a mixed-up set of installations. Additionally, when I compiled the module last year, I apparently overlooked gd and mbstring support:
'./configure' '--with-apxs2=/usr/sbin/apxs' '--with-pgsql=/usr/local' '--with-mysql=/usr/share' '--with-libxml-dir=/usr/lib' '--with-curl=/usr/bin/curl' '--with-zlib' '--with-gettext' '--with-gdbm' '--enable-inline-optimization' '--enable-track-vars'

Accordingly, I went back to my php source folder, did a $ make clean, then reconfigured, re-made, and # make installed the php module. This time I used all of the above and added --with-gd --enable-mbstring to the configure line. Success.

more IonCube
With the php problems resolved, NolaPro was ready to install except for the IonCube loader (Figure 2). IonCube provides a diagnostic script, ioncube-loader-helper.php, but it displayed a blank page when I attempted to load it. I then copied the IonCube folder to both the nolapro folder and the htdocs folder as a possible solution. No change. In the end, ensuring I had loader files corresponding to my processor was the answer. I had thought that loader files noting 64-bit processing would be backward-compatible on my 32-bit system. They weren't. Once I dropped to the 32-bit version, ioncube-loader-helper.php displayed a successful test screen. Just to be certain, I also updated /usr/local/lib/php.ini with

zend_extension = /var/www/htdocs/ioncube/ioncube_loader_lin_5.3.so

I used the 5.3 loader to correspond with my php release; others should substitute the loader for their php release.

NolaPro was finally ready to install and did so routinely. At first look, NolaPro seems pleasing. It also appears to be a foot in the door to lure users into purchasing premium add-ons. If that's right, it's not freeware, but more like crippleware. I'll take a closer look over the coming week. It was a good experience insofar as I gained additional experience with php and IonCube configurations.

Friday, February 12, 2010

cdrecord/wodim woes

Links: Bug w/wodim info  Firmware info

Apparently there has been a fork in what was previously cdrtools. The previously robust cdrecord may have had licensing issues, so that what's now called cdrecord is actually an alias for wodim. Wodim appears to be a freely licensed forefather of Jörg Schilling's more developed cdrecord. It seems that, once the licensing problems with cdrecord became evident in 2006, the freely licensed, more rudimentary wodim was resurrected and included in cdrkit, née cdrtools.

This came up in an attempt to upgrade firmware on an old Pioneer drive that, for some reason, was only burning at about a 1/10th of its rated speeds. When I ran the reliable and familiar
$ cdrecord -scanbus
just to get a part number, I found results under the command wodim, which caused concern. On a web-facing PC, any time I enter one command and another appears, I'm going to ponder the worst scenario. Ah, but it's just a licensing issue.

But a real concern is any limitation in wodim that wasn't present in cdrecord. For example, in the same box where I used to get good results with cdrecord, I get these results with wodim.

$ wodim -scanbus

wodim: No such file or directory.
Cannot open SCSI driver!
For possible targets try 'wodim --devices' or 'wodim -scanbus'.
For possible transport specifiers try 'wodim dev=help'.
For IDE/ATAPI devices configuration, see the file README.ATAPI.setup from
the wodim documentation.

$ find -name README.ATAPI


Monday, February 8, 2010

Inspiron 7000 - Linksys WPC54G w/BCM4306

Links: Discussion on 43b module   b43 module installation (doesn't work)   b43 v. ndiswrapper  b43legacy issues  ndiswrapper solution   linksys WPC54G support

This is an install I'm working on with a friend's older Inspiron 7000. We dropped Zenwalk 6.0 (Slackware light) into the system and everything configured out-of-the box, except his PCMCIA Linksys WPC54G (version 1.2) card. This entry is meant as a trail of crumbs for how we solved it.

The legacy Linksys WPC54G card employs a Broadcom 4306 chip. I've never understood the impulse behind closed-source drivers for hardware. There are millions of Linux users; if Broadcom only produces a driver for Microsoft, it should at least make that driver open-source. Linux users could then easily design a good one, and they'd want to buy Broadcom-based hardware.

Since the Broadcom driver was proprietary, any drivers/modules for it needed to be reverse-engineered for Linux, with predictable results. For example, the b43 module described here didn't work. The fwcutter program appeared to properly extract info from bcmwl5.sys and install it into /lib/firmware; the card was detected by the kernel and apparently semi-configured by the b43 module; yet the card never fully initialized. This meant it came down to either the b43legacy module or, as a last resort, to using ndiswrapper on the driver Broadcom coded for MSoft. You know, like back in 2004. But before we capitulated to ndiswrapper or b43legacy, we wanted to try fwcutter again with a more researched approach.

$ uname -r

# lspci -vnn
06:00.0 Network controller [0280]: Broadcom Corporation BCM4306 802.11b/g Wireless LAN Controller [14e4:4320] (rev 03)
Subsystem: Linksys WPC54G [1737:4320]
Flags: bus master, fast devsel, latency 64, IRQ 11
Memory at 1c000000 (32-bit, non-prefetchable) [size=8K]
Capabilities: [40] Power Management version 2
Kernel driver in use: b43-pci-bridge
Kernel modules: ssb

Understanding from above that we're dealing with the "4320" Broadcom chip, the b43 module should have worked previously; b43 is the recommended driver for the 4320 id. I decided to recompile fwcutter, this time making sure I had version "012" of fwcutter and a matching version of Broadcom's proprietary driver.

$ wget
$ wget

Untarred and compiled fwcutter -- "$ make". Didn't even have to configure. Then, after untarring the driver, put the driver files in the folder with the fwcutter program and ran:

# ./b43-fwcutter -w /lib/firmware wl_apsta_mimo.o

Rebooted and the card came right up.

Sunday, January 31, 2010

logo - gimp

There's a great site for a tutorial, but the steps are a little vague. Here's how I did it.

1) Made some garbage logo and saved it as a png
2) Duplicated the layer
   Layer > Duplicate Layer
3) Opened the stack viewer so I could move layers up and down
   Windows > Dockable Dialogs > Layers
4) Highlighted the duplicate in the stack, ran a Gaussian blur about "5"
   Filters > Blur > Gaussian
5) Made a new layer, colored white
   Layer > New Layer > (check white)
6) Put a vertical gradient on the new layer (white to blue looks cool)
7) In the stack viewer, moved the blue layer to top, above the blurred Gaussian.
8) Bumpmapped the blurred layer into the blue layer
   Filters > Map > Bump map
9) With these three layers in place, slide them around and change their opacity until the desired dropshadow appears in the picture, then merge down and flatten.

Monday, January 18, 2010

MySQL - GUI's - Workbench

Links: MySQL GUIs   Common MySQL commands   MySQL Workbench issues   MySQL field types

Website maintenance can be considered in two main halves: file administration and database administration (if any). For file administration, adding and deleting files, I use a secure ftp program and an ssh tunnel to my provider. But what about database administration? If I'm given MySQL databases, and if the provider has phpMyAdmin installed, then I use a mix of command-line and phpMyAdmin database commands. As a side note, I like PostgreSQL more than MySQL, but most webspace providers only install MySQL.

command line
Making a command-line connection to the database is a good place to start, even if we will use a GUI later. Through the command line, we can easily verify connection is possible, and we can also run MySQL installation scripts, etc.

Let's imagine we had a database at Google that was web-facing w/remote access permissions. This would never happen, but this is the way ISP's often provide database access to you and me. We could connect to our imaginary Google database in the following way:
$ mysql -h HOST -u pietro -D mystuff -pwiggetystop

(Note: there is no space between the "-p" option and the password itself, "wiggetystop".) So, if I've done this correctly, I'll be logged into my Google database with a cheery "mysql>" prompt waiting for mysql commands.
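A side note, since the password sits on the command line above (visible in shell history and ps output): MySQL clients can read credentials from an options file instead. A sketch, using the imaginary user and password from above and a hypothetical file name:

```shell
# Write a [client] options file that mysql/mysqldump can read.
cat > my_opts.cnf <<'EOF'
[client]
user=pietro
password=wiggetystop
EOF
chmod 600 my_opts.cnf   # keep it private
# Then connect without -u/-p on the command line; --defaults-extra-file
# must be the first option given:
#   mysql --defaults-extra-file=./my_opts.cnf -D mystuff
```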

By default, many distros with package management install phpMyAdmin into the Apache area. It's understood that Apache is used to parse phpMyAdmin's php files, but Apache is another annoying layer of overhead, and it raises security concerns if used outside localhost. I only use phpMyAdmin when it's provided on an ISP's server. So what is the solution for accessing a database remotely from one's local machine using a GUI? We don't want to pay for Navicat, now do we?

mysql workbench
It turns out MySQL makes a GUI developer suite called "Workbench", which includes administration, navigation, and design tools. After checking dependencies, I downloaded, compiled, and installed the program, all of which appeared to proceed successfully. The start-up command is $ mysql-workbench, and it may take a few options.

The reason this screenshot shows no administration tools is that Workbench couldn't load the administration and navigation modules, which left only the design module (pictured). There is a documented bug when compiling in Linux, even with all the Python libraries I could think of installed. The Python program pexpect was considered helpful in the bug report, so I installed that as well. No improvement. Currently, I have only the design interface, as shown in the image above.

After digging around and filing a bug report (incidentally, they don't even want to answer within the bug forum, where the answer would be useful to everyone; they direct one instead to their IRC channel, where only the people currently in the room will see the solution - ridiculous), all of the usual stuff, I dug in deeper with strace. It appears to be the usual problem: Java, the most ridiculous run-time library since Visual Basic. This garbage appears for each object MySQL Workbench attempts to retrieve:

**Message: WARNING: MetaClass db.maxdb.Catalog is registered but was not loaded from a XML

This seems to be the Java getDocument call, and there is no way I know of to fix it except to once again spend hours checking every exported path and poorly designed JRE directory-search requirement. I've seen this kind of thing in other Java-based programs. It's not Java itself that sucks so badly, it's the (apparently) lazily designed afterthought of a Run-Time Environment. And Java's inability to tolerate errors slewing off its own runtime environment certainly doesn't help. This particular failure is probably related to the "MetaClass" requirements.

I don't want an application with Wine built-in. It conflicts with the Wine I already have installed and is an incredibly boggy duplication of effort.

Saturday, January 16, 2010

linux - cheapo usb camera/webcam

Video: use and settings (not good for compile instructions)
Blog: making it work with Flash

getting started (FAIL)

# udevmonitor

the program '/bin/bash' called 'udevmonitor', it should use 'udevadm monitor', this will stop working in a future release
monitor will print the received events for:
UDEV the event which udev sends out after rule processing
UEVENT the kernel uevent
UEVENT[1263672227.910603] add /class/usb_device/usbdev1.2 (usb_device)
UEVENT[1263672227.910786] add /class/usb_endpoint/usbdev1.2_ep00 (usb_endpoint)

# lsusb

Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 093a:010e Pixart Imaging, Inc. Digital camera, CD302N/Elta Medi@ digi-cam/HE-501A
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

So the kernel sees the camera. We have a $2 Pixart CD302N/HE-501A (ID 093a:010e) camera. How do we capture from this piece of junk?

A simple interface for the camera will be spcagui (launched from the command line), once the camera is initialized, provided the camera is driven by the gspca module. How do I determine the right module for my CD302N/HE-501A (ID 093a:010e) camera?

Navigated to the Linux Kernel driver database and found that the 093a:010e camera uses the gspca module. Checking to see whether it's present:
# modprobe gspca
FATAL: Module gspca not found.

So, it's not installed. Would the Kernel allow it, if we had it?
# cd /etc
# grep -rn "CONFIG_USB" *
udev/rules.d/30-scanners.rules:25:# For Linux >= 2.6.22 without CONFIG_USB_DEVICE_CLASS=y

So, it appears I won't have to modify the kernel, but I will have to build a module (in Windows-speak, a "driver") to load into it. If I'd had to change the kernel config from "n" to "y", I would go here and accomplish all this. But I don't have to change the kernel.

The documentation above noted the GSPCA module relies on libv4l, so it must also be checked:
# netpkg libv4l
[I][l] Found installed libv4l-0.5.8-i486-60.1.tgz on the repository
what should I do ?
1) reinstall
2) download
3) skip
It's there and installed. So only the source and a patch for the GSPCA module need to be found somewhere.
  • At this site appears gspcav1-20071224.tar.gz is the most recent version.

  • At this site appears gspcapatch.gz is a 2009 version.

Unzip them both into the same folder, then patch:
$ patch < gspcapatch
Following the patch, root-up and run their excellent compilation script
# ./gspca_build
The module will be created. Then just modprobe it and check to see if it loaded
# modprobe gspca
# lsmod
gspca          601572  0
videodev       23680   1   gspca
v4l1_compat   9732   1   videodev

So, all are loaded. I plugged in the camera, checked /dev, and found that there was no /dev/video0, so there was no way to reach the camera. This is a well-known bug, but the standard fixes, such as reloading the module, haven't worked.

# modinfo gspca
filename: /lib/modules/
license: GPL
description: GSPCA/SPCA5XX USB Camera Driver
author: Michel Xhaard

# modinfo gspca |grep 93A
alias: usb:v093Ap2463d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2472d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap260Fd*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap260Ed*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2608d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2603d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2601d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2600d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2470d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2460d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2471d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap2468d*dc*dsc*dp*ic*isc*ip*
alias: usb:v093Ap050Fd*dc*dsc*dp*ic*isc*ip*
Notice that there is no 093a:010e in this list of Pixart cameras. So we are definitely going to need to add another module or patch the current one further. The OS needs to create the node /dev/video0, or else software that displays images can't interact with the camera.
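Those alias lines follow a fixed pattern, usb:v<VENDOR>p<PRODUCT>..., with the hex digits uppercased. Here's a small sketch that converts lsusb's vendor:product form into that search pattern, which can then be grepped in modinfo output or in /lib/modules/$(uname -r)/modules.alias:

```shell
# Convert an lsusb id (vendor:product) into the modalias search pattern.
id="093a:010e"
vendor=$(printf '%s' "${id%:*}" | tr 'a-f' 'A-F')
product=$(printf '%s' "${id#*:}" | tr 'a-f' 'A-F')
pattern="v${vendor}p${product}"
echo "$pattern"    # prints: v093Ap010E
# e.g.: modinfo gspca | grep "$pattern"  -- no match here, confirming the gap
```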

mr97310a.c may be the patch. It's described at the Linux driver database. Scrolling down, the site seems to indicate that the 093a:010e Pixart Imaging CD302N/Elta Medi@ digi-cam/HE-501A requires this "C" module to be patched into the gspca module when compiling it; however, this patch cannot be downloaded or copied and pasted. Alternatively, perhaps it will be a module named gspca_mr97310a. I can't figure out how to build this from the documentation.

Since I don't want to go to that trouble (and don't have the information anyway), I'm going to hand-code what the module source needs to find the hardware, and then recompile the module:

In gspca.mod.c, added:

In gspca_core.c, added line 413:
added Line 613:
{PAC7310, "Pixart Kaibo 7310"},
added Line 628:
{USB_DEVICE(0x093a, 0x010e)}, /* Pixart Kaibo 7310 */
added Lines @ 4120:
case 0x010e:
spca50x->desc = PAC7310;
spca50x->bridge = BRIDGE_PAC7311;
spca50x->sensor = SENSOR_PAC7311;

Then, took out the old module and recompiled and installed:
# modprobe -r gspca videodev v4l1_compat
# rm /lib/modules/
# cd /home/foo/Download/gspcav1-20071224
# ./gspca_build

So, after these steps, and a # modprobe gspca and plugging in the camera, we get:
[dev]# find . -name "video*"

Great! The system is seeing the camera and creating /dev/video0 and the associated nodes. Let's see if we can get a picture. No. It may be that the 7311 bridge is not going to work properly with a 7310 camera:
$ spcagui
SpcaGui version: 0.3.5 date: 18 September 2005
video device /dev/video0
ERROR opening V4L interface
: Input/output error

Reinstalling the driver with modprobe yields the same result. It appears I'm very close, but the bridge for the 7311 is not going to work with the 7310. Not sure what to do without that 7310 bridge, but at least the process for compiling the driver and recognizing the camera is confirmed. More to come.

linux analog video tape -> digital format

There are two parts. First, there are many different types of video format to consider: what do we need as an outcome? DivX, QuickTime, swf? Start by considering that, or whether the result might have to go out in multiple formats or be edited. Next is taking the old RCA video outputs from a VCR and getting the analog audio and video into the Linux box for capture and potential editing before burning to DVD, sending to YouTube, whatever. Finally, there is a software layer for editing, but that will likely be a separate post, so I'm not considering it here.

Friday, January 1, 2010

Math - Grammar - Law

Geometry tends to bore people, for example, me. It might be interesting in one way though. There's a connection we feel between Math, Grammar, and Law (within this essay, "MGL") that emerges when we do Geometric proofs. I don't think there's any other subject where there's a convergence like this. What allows them to work together appears to be the rule structures underlying each concept.

Familiarity, intuitive use
Most of us who are not professionals in MGL fields still operate with them enough to have a feel for them. In Math, most of us recall that different rules seem to apply to different types of problems, for example, commutativity and additive inverses. Without knowing the names of the rules, we understand that 3 - 2 is not the same as 2 - 3, but that 2 + 3 and 3 + 2 are equal. In Grammar, we know that adverbs describe actions, but even if we've forgotten the name of that rule, it feels wrong if we mistakenly use an adverb to describe a noun. E.g., "An unused heavy weight makes a good doorstop" looks correct, as compared to "An unused heavily weight makes a good doorstop." In Law, most of us understand that, if contracts are broken, there might be a lawsuit, whether or not we happen to know the applicable law. So in each of these areas, most of us operate out of habit without needing to consult the specific MGL rule that applies. In Geometry, when we make proofs, we have to be explicit about only the Math portion, but the other rule systems are also at work.

Grammar is determined by usage and social convention. In the case of US English, media status and academia tend to promote one form of usage into Standard English and Received Pronunciation. These then normatively reinforce that usage over others. Outside of this politicking, however, grammar "rules" are descriptive, not normative; they are descriptions of what we observe across languages. One of the potential hiccups is agreeing on the linguistic terms. Linguistic terms are themselves words -- defined partly by usage -- so they risk a circularity of using themselves to define themselves. We break the circle fairly effectively by first having a conversation about what our linguistic terms refer to, a meta-conversation, attempting to solidify what our discussion will point out in a language before we start examining languages. Outside of Linguistics study, in the world, language operates without bounds. Inside Linguistics, that is, while studying worldly language effects, we want our own words to point to agreed-upon concepts.

As an example, let's suppose we're Linguists who agree about the meaning of the word "case", insofar as language is concerned. Using this definition, we're able to observe the number or nature of cases across languages in a way we both understand. In German, we could agree there are likely at least four cases: the nominative, accusative, genitive, and dative. Observing English, we might agree that typical English usage does not split these apart so neatly. The language rules society follows come through usage; the rules we use to describe them must come through academic agreement. In one sense, Grammar is a posteriori, but the study of Grammar requires a priori definitions.

With Math, we believe we establish rules based on logic first and experience second. So the trajectory is deductive: we start with principles and build logically from that point to conclusions within that logic, or else must expand it. The basis is a priori, but the expansion of axioms to encompass new information is a posteriori. Of course, Kant considered Mathematics "synthetic" for this blend. But what about our meta-conversation in Math, like the one we had in Linguistics? Isn't it true that we must first have a conversation about what Math words mean, and agree upon what they point to, before we subsequently do the Math? For the part of Math that is arithmetic, which uses symbols such as 2+2=4, it's relatively easy to agree, because these point to quantities, not just concepts. We can place two items in front of anyone in any language and, once it's clear we are discussing a quantity, need only agree on the symbol. So for quantitative Math, there can be agreement about the language. But Math is not all quantities; it's also performing operations on quantities and applying theorems to them, and these operations and concepts may need clearly agreed terms before we can discuss them.

With Law, the statutory scenario is a priori; the law is true by its definition as declared, not through observation (inductively). In this way it is like the first discussion we had about Linguistics: we had to agree first on what we meant by the words we were going to use. However, with precedents, the law also allows for interpretation, which can affect its implementation in an a posteriori sense.

Geometry connection
One thing that seems to combine all three of these is Geometry. Geometric proofs utilize deductive logic, the obviousness of quantities and graphical constructs, and clear definitions. Perhaps Geometry is not so boring after all.