
Sunday, July 19, 2020

Eachine E58 -- Drone X Pro scam

I wanted to inspect the chimney on the roof with a drone and thus fell prey to the Drone-X Pro scam. I trusted a YouTube ad without doing research, and paid $100 for a $25 drone. Now that I have this Eachine E58 (the only thing missing is the brand name on the controller), could I make the best of it? Nope. Failure.

The camera is only 2 megapixels, its feed can only be viewed through an app, and flight time is given as 7 minutes. Reviews note that even the slightest breeze will take it out of range. On mine, I hit a common problem: the WiFi connected to my phone, but I could not work the controller or see video. The E58 apparently does not work with all phones. Review: dronedeliver.uk.

wifi - piloting - video - app

These four are tied together because the drone controller has no video screen. Suppose I wanted to inspect my roof. As the drone passes from my direct line of sight, I can no longer see where the drone is going, and I lose the ability to pilot the aircraft. This is remedied by a complicated solution with several possible failure points:

  • the manufacturer inserted a wifi transceiver into the drone; it appears as a hotspot to wifi systems
  • users connect their cellphone to the hotspot using whatever wifi functionality is present in the phone
  • after connecting, users open a pre-downloaded app to view through the drone's video camera and pilot the craft
There is no suggested application in the directions, at least not in English; however, after an hour of YouTube videos and forum posts, this one appeared most likely:

 

...and don't forget Google Play needs port 5228.

failure

My Droid Turbo connected to the drone's WiFi, but with a warning that I had no Internet connection, which I think (just a guess) disabled the HTTP transport necessary for viewing the video. The drone app kept insisting that I needed to "connect", in spite of the phone showing it was WiFi-connected, as noted.

Following my hunch, I went to this site and learned that I could potentially enable HTTP transport on a phone that normally doesn't allow it on non-Internet LANs; however, I would have to root my phone, which I didn't want to do.

sd video

Video is supposedly saved onto a micro SD, viewable after flight. However, the apps could never connect to the drone, and the hand controller had no function to start video recording. It appears that the video recording was never initialized in the drone -- there was no video on the SD card after the device was flown.

aftermath

I'll give the drone away to one of my friends who has a compatible cell phone, for $10 and a chimney inspection. "Compatible" apparently means a phone that allows HTTP transport over its WiFi connection even when on a local LAN without DNS (eg, mDNS only).

Thursday, April 9, 2020

[unsolvable] disable passwords for Zoom Basic meetings (Arch, Android)

If one's about to install Zoom on any device, first open a browser and create an account at the Zoom website. Other than the "hands-free" note further down, I would limit myself to changing any Zoom settings only from this website account, rather than from a device. If done from the website account, settings will waterfall down into whichever device(s) one uses with a Zoom client.

I like to disable the Zoom Personal ID ("PID"). The PID is similar to a personal phone number. I have never given mine out, and I disabled PID meetings via the web account (shown below). The effect on my phone is that neither PID meetings nor the PID itself appear. De-clutter.

largest problems

The largest problems on Zoom are the hidden ones, probably obscured on the orders of some marketing hack.
  • no way to disable passwords for scheduled meetings in the basic account. If you'd like to meet with grandpa joe without a password to make it easier on him, be prepared to pay $15 per month; basic users have no ability to disable the password requirement. Send him the entire link with the embedded password, or devise a simple password scheme, say the letter "j", for all meetings.
  • opaque appropriation of email domains. There's a screen warning, but I failed to get a screenshot of it before it disappeared. Say one has an email address at their employer, Acme:
      chump@acme.com
      Chump goes to the Zoom website, creates a basic account, and is Zooming to their job. Maybe he even pays $15 for extra features. But now Acme decides as a corporation to purchase an enterprise Zoom account. Without informing Acme or Chump, Zoom restricts control over any Zoom logins with emails ending in "acme.com". The next time Chump logs in for a Zoom work meeting, a pop-up warns Chump that he cannot log in and misleads him with a choice between accepting all the Acme settings or simply changing his account email address. Chump updates with another email address. Unknown to Chump, or likely to Chump's boss, when Chump changed his email to keep his settings, his Zoom login lost acceptance into Acme-hosted Zooms. Through no fault of his own, Chump can't log in, and he can't figure out why, since Zoom didn't provide that information (at this writing). This means Chump also lacks an explanation for his boss, who likely feels Chump is a liar, lazy, or incompetent for the missed meeting(s). Chump madly rifles through the hundreds of his Zoom account settings, and still, all login attempts are rejected. The only solution is apparently for Chump to make a new account, as Cornell eventually learned.
  • Numerous, sometimes overlapping settings. COVID will long be over by the time we figure out these combinatorics: 4 levels, 3 roles, and 40 or 50 settings. Some settings only apply to a certain level, others apply to all, and it's pretty much trial and error. The four levels are meeting, user, group, and account. Now add the 3 roles: user, admin, and owner. The entire 16 minutes of video in the link below only deals with the blue-selected "Settings" button in the web account's menu. Notice that there is an entire "Admin" menu area, and that it expands into many other menu setting areas. All these settings may be necessary or beneficial to some users, but they're also time-consuming, complex, and therefore error-prone, for all.

    Advanced Zoom settings - Basic and Pro (16:50) Lifelong Learning at VTS, 2020. Pedantic, side by side run-down of settings for Basic and Pro features.

  • Features locked by default require identity verification to unlock. Verification is accomplished via a credit card or PayPal, including a home address. Now they have your zip code.

Android - phone

Go to the Google Play Store, and download and install Zoom. Zoom has step-by-step instructions for getting started, but there's nothing weird except one thing: disable the hands-free option in settings, or it's a serious nagware problem every time the app is opened.

When opening the application, "sign in" and "sign up" prompts appear. "Signing up" is the one-time event I recommend doing at the Zoom website; the website has far more settings than the phone app. I ignore the "sign up" prompt no matter the device because I already accomplished it on the website. "Signing in" I do each time I use the application.

If one has already created a web account, one can simply "sign-in" to the current device and have all the settings which one configured at the website. Create the account at the website, install the app, sign-in to the app.

creating and joining meetings

I create all meetings on the Zoom website. I do not create meetings through the phone application, I just attend or host them through it. If one intends to use Zoom, it's helpful to try a practice meeting with a friend before going live to a conference and so forth.

Arch - desktop

No one wants to install this 256MB lead weight because it brings in PulseAudio, which is effectively a virus. Some apps (eg, recordmydesktop) will then fail to directly access the soundcard. If you need to screen-capture during a Zoom, either turn on recording in the Zoom itself, or use ffmpeg.

These are the (7.45MB of) dependencies noted during the (AUR) Zoom installation, via...

$ yay zoom
... (of course, remove with # pacman -Rns zoom)
  • alsa-plugins
  • pulseaudio
  • pulseaudio-alsa
  • rtkit
  • web-rtc-audio-processing

ffmpeg :: screen and sound capture

One should know their screen size, eg 1366x768, and cut off the bottom 30 pixels or however many constitute a toolbar. This keeps the toolbar out of the capture while still allowing window switching via it. Syntax: these three flags should come first, and in this order:
-video_size 1366x738 -f x11grab -i :0
...else you'll probably get only a small corner of the picture, or errors. Then come all the typical bitrate and framerate options:
$ ffmpeg -video_size 1366x738 -f x11grab -i :0 -r 30 output.mp4
I've never been able to set a bitrate in a screencast (eg, -b:v 1M, -b:a 192k) without fatal errors. And then to add the sound... well, you're stuck with PulseAudio if you installed Zoom, so just add -f pulse -ac 2 -i default...
$ ffmpeg -video_size 1366x738 -f x11grab -i :0 -r 30 -f pulse -ac 2 -i default output.mp4
There are also ways to record a specific window only, using its name; a rough sketch follows.
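A sketch of that approach (my own guess at a workable method, not something from the steps above: xdotool supplies the window geometry, and the window name "Zoom Meeting" is just an example):
$ WIN=$(xdotool search --name "Zoom Meeting" | head -1)
$ eval $(xdotool getwindowgeometry --shell $WIN)
$ ffmpeg -video_size ${WIDTH}x${HEIGHT} -f x11grab -i :0.0+${X},${Y} -r 30 -f pulse -ac 2 -i default window.mp4
The second line exports X, Y, WIDTH, and HEIGHT into the shell, which the x11grab input offset then reuses.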

Tuesday, July 9, 2013

[solved] dns, yum, rpm, curl, ping

Links: yum variables :: IPv4 address conversion :: yum commands
NB: This is a complicated post. It first addresses IPv6 (mostly successfully), but a second problem is revealed specific to Fuduntu, which I could not circumvent. Since Fuduntu is defunct, I'm disregarding that part and posting "solved" above. Hopefully there's plenty of info below for others working on what might be a similar problem in some other Fedora flavor.
Consider the following problem -- if I can ping, I should be equally able to curl, but I'm not:
$ ping www.websense.com
PING www.websense.com (204.15.67.11) 56(84) bytes of data.
64 bytes from www.websense.com (204.15.67.11): icmp_seq=1 ttl=49 time=27.9 ms
64 bytes from www.websense.com (204.15.67.11): icmp_seq=2 ttl=49 time=27.7 ms
^C
$ curl www.websense.com
curl: (6) Couldn't resolve host 'www.websense.com'
This did more than just raise my curiosity; rpm/yum relies on curl for access to repositories. First I checked for proxy and IPv6 settings. All looked normal: no proxy, and IPv6 set to ignore rather than to block or force resolution. Let's look under the hood.

tcpdump

Here are portions of dumps for the successful ping and struggling curl:
# tcpdump -nl -i eth0 -s0 -vvv udp port 53
[during ping]
192.168.1.20.34097 > 192.168.1.254.53: [udp sum ok] 11891+ A? www.websense.com. (34)
192.168.1.254.53 > 192.168.1.20.34097: [udp sum ok] 11891 q: A? www.websense.com. 1/0/0 www.websense.com. [5s] A 204.15.67.11 (50)
192.168.1.20.58651 > 192.168.1.254.53: [udp sum ok] 51147+ PTR? 11.67.15.204.in-addr.arpa. (43)
192.168.1.254.53 > 192.168.1.20.58651: [udp sum ok] 51147 q: PTR? 11.67.15.204.in-addr.arpa. 1/0/0 11.67.15.204.in-addr.arpa. [9h53m39s] PTR www.websense.com. (73)

[during curl]
192.168.1.20.41050 > 192.168.1.254.53: [udp sum ok] 26082+ A? www.websense.com. (34)
192.168.1.20.41050 > 192.168.1.254.53: [udp sum ok] 54668+ AAAA? www.websense.com. (34)
192.168.1.254.53 > 192.168.1.20.41050: [udp sum ok] 26082 q: A? www.websense.com. 1/0/0 www.websense.com. [5s] A 204.15.67.11 (50)
192.168.1.254.53 > 192.168.1.20.41050: [udp sum ok] 54668- q: AAAA? www.websense.com. 0/0/0 (34)
192.168.1.20.58040 > 192.168.1.254.53: [udp sum ok] 47978+ A? www.websense.com.localdomain. (46)
192.168.1.20.58040 > 192.168.1.254.53: [udp sum ok] 42568+ AAAA? www.websense.com.localdomain. (46)
192.168.1.254.53 > 192.168.1.20.58040: [udp sum ok] 47978 NXDomain- q: A? www.websense.com.localdomain. 0/0/0 (46)
192.168.1.254.53 > 192.168.1.20.58040: [udp sum ok] 42568 NXDomain- q: AAAA? www.websense.com.localdomain. 0/0/0 (46)
Ping only queries the DNS server in IPv4 (A?) and has success. Curl initially requests in both IPv4 (A?) and IPv6 (AAAA?). Although curl receives a proper response (204.15.67.11) to its IPv4 request, nothing is returned for the IPv6 request. Apparently due to some bug, curl ignores the IPv4 resolution and requests a second time in both formats. It also mysteriously appends "localdomain" onto its query(!).
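The same asymmetry is visible without tcpdump. Using dig (from bind-utils) as a quick cross-check, the A query should return 204.15.67.11 while the AAAA query should come back empty, matching the dumps above:
$ dig +short A www.websense.com
$ dig +short AAAA www.websense.com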

solution - /etc/hosts + release awareness

Links: IPv4 address conversion :: yum concerns :: cleaning old yum info

We should write a patch for curl and recompile it, but that's for programmers. I only know how to supply curl with the IPv6 information it wants. The site www.websense.com may not have an AAAA record in its DNS zone file, but I can still manually enter IPv6 info into /etc/hosts and force curl to use that.
# nano /etc/hosts
::ffff:cc0f:430b www.websense.com
204.15.67.11 www.websense.com

# nano /etc/host.conf
order hosts,bind

$ curl www.websense.com
[page loads normally]
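As an aside, the ::ffff:cc0f:430b entry above is just 204.15.67.11 expressed as an IPv4-mapped IPv6 address. One way to generate such an address from the four octets (a quick sketch):
$ printf '::ffff:%02x%02x:%02x%02x\n' 204 15 67 11
::ffff:cc0f:430b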
Problem 1 solved. However, there is a second problem, one specific to Fuduntu, not curl. Fuduntu is a hybrid. It accordingly doesn't have typical Fedora values in its rpm variables, eg $releasever.
$ rpm -q fedora-release
package fedora-release is not installed

$ rpm -q fuduntu-release
fuduntu-release-2013-3.noarch

$ ls /etc/*release
ls: cannot access /etc/*release: No such file or directory

$ yum list fedora-release
Loaded plugins: fastestmirror, langpacks, presto, refresh-packagekit
Adding en_US to language list
Determining fastest mirrors
Could not retrieve mirrorlist http://packages.fuduntu.org/repo/mirrors/fuduntu-stable-rpms-2013 error was
14: PYCURL ERROR 6 - ""
Error: Cannot find a valid baseurl for repo: fuduntu
Glitches also cause this for Fedora users when version conflicts arise. In the case of Fuduntu, however, the repos no longer exist -- one strategy might be to spoof Fuduntu's version checking, as if it were being upgraded, when it accesses third-party repos. If we eliminate the locally-stored repo files and the rpm release file, we might be able to override them with third-party information. First let's do a debug dump of the current info (in case we need it later), then remove the local information.
$ yum-debug-dump
Output written to: /home/~/yum_debug_dump-local-2013-07-12_20:50:30.txt.gz

$ rpm -q fuduntu-release
fuduntu-release-2013-3.noarch

# yum remove fuduntu-release-2013-3.noarch
[screens and screens of removal]

$ rpm -q fuduntu-release
fuduntu-release-2013-3.noarch

# ls /etc/yum.repos.d
dropbox.repo fuduntu.repo
# rm /etc/yum.repos.d/fuduntu.repo
# ls /etc/pki/rpm-gpg/
RPM-GPG-KEY-fuduntu RPM-GPG-KEY-fuduntu-i386
RPM-GPG-KEY-fuduntu-2013-primary RPM-GPG-KEY-fuduntu-x86_64
# rm /etc/pki/rpm-gpg/*

# yum clean all

$ rpm -q fuduntu-release
fuduntu-release-2013-3.noarch


$releasever


Links: replace $releasever using sed :: yum variables
The orphaned Fuduntu release has no access to Fuduntu repositories because they no longer exist. Fuduntu must rely on third-party repos to move forward. Fedora-related repositories are arranged as "f[$releasever]-[$basearch]". In Fuduntu this expanded to "2013-i386". That worked in the Fuduntu repos, but fails in 3rd-party repos -- $releasever needs to be changed to something standard, such as "17", to create a more Fedora-standard "f17-i386" string.

But nothing worked. Not exporting $releasever=17 into the environment, not changing /etc/yum.conf, not swapping out the value of "2013" for "17" in each /etc/*release. Nothing I could find globally changed this variable. Eventually, after a couple of lost days on the project, I gave up and brute-forced the repo files individually. Before modifying, I eliminated the old cache and backed up all the unmodified repos into a new directory I called "default". Then I modified the repos, exchanging "$releasever" for "17" in each file.
# rm -r /var/tmp/*

# mkdir /etc/yum.repos.d/default
# cp /etc/yum.repos.d/* /etc/yum.repos.d/default/


# sed -i 's/$releasever/17/g' /etc/yum.repos.d/*
The repos finally loaded.
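For anyone debugging something similar: yum's own view of $releasever (and the rest of its substitution variables) can be printed through its Python API. This is only a diagnostic, not part of the fix above:
$ python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar)'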

(non)solution - remove IPv6 functionality

Link: Disabling IPv6

This is not a solution, because curl/rpm/yum needs both IPv6 and IPv4 information and does not get what it needs with the process below. I'm including this info, however, because it provides insight into how other TCP clients (ie, not strictly curl/rpm) can be assisted in a mixed A/AAAA environment. Some readers may be interested in this for, eg, Chromium.
# nano /etc/sysconfig/network
NETWORKING_IPV6=no

# nano /etc/modprobe.d/blacklist.conf
blacklist nf_conntrack_ipv6
blacklist nf_defrag_ipv6
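For reference, the same thing can usually be toggled without blacklisting modules, via a sysctl setting (not tested here, just noting it):
# nano /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
# sysctl -p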


Appendix 1 - wireshark

A good link for using wireshark to check DNS problems. The wireshark GUI is more elegant than my CLI approach above, and arguably more user-friendly for those working on IPv4 /IPv6 solutions.

Appendix 2 - strace

Straces are too long to regurgitate here; let's look at the 14 relevant lines where ping succeeds and curl is unsuccessful. Of possible interest here is that, from inside the LAN, the DNS server, via DHCP, is simply the gateway at "192.168.1.254".
$ strace ping www.websense.com
socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, 16) = 0
gettimeofday({1373430779, 7870}, NULL) = 0
poll([{fd=3, events=POLLOUT}], 1, 0) = 1 ([{fd=3, revents=POLLOUT}])
send(3, "\346\f\1\0\0\1\0\0\0\0\0\0\3www\10websense\3com\0\0\1"..., 34, MSG_NOSIGNAL) = 34
poll([{fd=3, events=POLLIN}], 1, 5000) = 1 ([{fd=3, revents=POLLIN}])
ioctl(3, FIONREAD, [50]) = 0
recvfrom(3, "\346\f\201\200\0\1\0\1\0\0\0\0\3www\10websense\3com\0\0\1"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, [16]) = 50
close(3) = 0
socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(1025), sin_addr=inet_addr("204.15.67.11")}, 16) = 0
getsockname(3, {sa_family=AF_INET, sin_port=htons(53553), sin_addr=inet_addr("192.168.1.20")}, [16]) = 0
close(3)

And for the failing curl :
$ strace curl www.websense.com
socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, 16) = 0
gettimeofday({1373429425, 713068}, NULL) = 0
poll([{fd=3, events=POLLOUT}], 1, 0) = 1 ([{fd=3, revents=POLLOUT}])
sendmmsg(3, {{{msg_name(0)=NULL, msg_iov(1)=[{"\215s\1\0\0\1\0\0\0\0\0\0\3www\10websense\3com\0\0\1"..., 34}], msg_controllen=0, msg_flags=0}, 34}, {{msg_name(0)=NULL, msg_iov(1)=[{"\34\305\1\0\0\1\0\0\0\0\0\0\3www\10websense\3com\0\0\34"..., 34}], msg_controllen=0, msg_flags=0}, 34}}, 2, MSG_NOSIGNAL) = 2
poll([{fd=3, events=POLLIN}], 1, 5000) = 1 ([{fd=3, revents=POLLIN}])
ioctl(3, FIONREAD, [34]) = 0
recvfrom(3, "\34\305\200\0\0\1\0\0\0\0\0\0\3www\10websense\3com\0\0\34"..., 2048, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, [16]) = 34
close(3) = 0
socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, 16) = 0

Curl never seems to leave port 53, and it also appears curl may have actually received the IP of the DNS server in response to its query to that selfsame DNS server. Perhaps this is due to curl embedding its request inside a more complex sendmmsg routine, as opposed to ping's simpler send routine. Additionally, ping uses a getsockname process not used by curl.

More information: while the Epiphany browser is running, we check to see what calls are creating errors. Get its PID, open a terminal, and let strace run for several seconds while attempting to surf to an address in Epiphany. Then CTRL-C out and examine the data, eg...
$ strace -c -p 13881
Process 3811 attached
^CProcess 3811 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
54.08 0.000384 0 2167 writev
14.79 0.000105 5 20 munmap
13.24 0.000094 0 1526 clock_gettime
13.24 0.000094 0 6690 4567 recv
4.65 0.000033 0 4583 poll
0.00 0.000000 0 1 restart_syscall
0.00 0.000000 0 80 9 read
0.00 0.000000 0 31 write
0.00 0.000000 0 36 open
0.00 0.000000 0 36 close
0.00 0.000000 0 4 unlink
0.00 0.000000 0 18 access
0.00 0.000000 0 8 rename
0.00 0.000000 0 262 gettimeofday
0.00 0.000000 0 1 clone
0.00 0.000000 0 14 _llseek
0.00 0.000000 0 21 mmap2
0.00 0.000000 0 58 46 stat64
0.00 0.000000 0 84 8 lstat64
0.00 0.000000 0 36 fstat64
0.00 0.000000 0 2 1 madvise
0.00 0.000000 0 70 6 futex
0.00 0.000000 0 1 statfs64
------ ----------- ----------- --------- --------- ----------------
100.00 0.000710 15749 4637 total
Blog formatting squishes the data a little, but we see significant errors (4,567 of them) on "recv" calls, as well as some on "stat64" and a few others.

Wish I could write in C and recompile curl.

Sunday, March 24, 2013

gnome-keyring, LDAP, PAM -- lost weekend blues

Links: LDAP workaround for Slack   CUPS LDAP issues

Today's distros often install security as if one's stand-alone machine were a network box -- localhost looping back onto itself with PAM and LDAP. In an environment like this, if one attempts to directly attach a peripheral, one is delayed or entirely thwarted. Anything from an improperly operating keyring to some limitation of root (eg, root apparently cannot inherently navigate an IP localhost), and so forth, can extend attaching a printer into weeks. These are not problems for hackers; these are limits upon the computer's owner(s). Why?

I have no f*cking idea. Public agencies can already directly access our systems, and private hackers either know how to use these LEA backdoors or have their own software methods. So for our home systems there is perhaps a 1% security gain for having an encrypted-vaulted system that loops back onto itself with layered authentication and cryptography. Meanwhile there is about a 70% productivity loss and about a 140% frustration increase to go along with it. Nearly all savvy computer users below the level of industry professionals or CS majors (who presumably can write patches to solve authentication situations) would pay GOOD money to defang all of their "security" beyond an initial login. As I noted in the previous post, users of Ubuntu (which appears to use every layer of loopback LDAP, PAM, SOAP, encryption spaghetti available) are forced to, basically, hack their own systems to accomplish something as simple as directly connecting a printer.

Of course there are "solutions" for us; spend hours on forums and maybe make a post --- one could wait anywhere from one to several days or weeks for a possible workaround. Or one can spend the weeks necessary to resurrect their IPtables knowledge, uninstall LDAP, and parse out how to connect to web services without having LDAP and so on in place (eg try NSS without LDAP!).

This rant isn't against Ubuntu, it's a vent regarding loopback security for stand-alone machines. Leave layered loopback security, which places a security server and client on a single machine, out of vanilla distros. The so-called hackers out there already read our credit card numbers and have a hundred other entry points into our systems through dynamic libraries and so forth than this ridiculous set of authentications can ever prevent. Meanwhile, we end-users struggle to connect peripherals. Iptables and (arguably) a well configured PAM are our stand-alone friends. Loopback localhost security "services" are not.

Saturday, March 23, 2013

cups - hp office jet pro 8500a - 12 hours of fail in Lubuntu

Links: openprinting.org   scroll down for tarball link   uninstall hplip   hp-setup options
Note: this may also play HEAVILY into the problems encountered with Ubuntu/Lubuntu printer addition: the gnome-keyring fiasco
So a year and a half ago, I added this printer in Minislack/Zenwalk. In 2013, I'm into Lubuntu, mostly for an updated set of libraries for the time being. I know Lubuntu (so far) uses usblp for USB printers, so it should go OK. One thing: I'm expecting to compile my core applications, as I've already noticed that the Ubuntu/Lubuntu versions of avconv and ffmpeg seem watered down. I suppose they have to be for such a popular distribution, so I limit my expectations of this distro to having a recent set of libraries for compiling. Back to our printer story... scroll down to Pt 3 for installation.

Pt 1 Groundwork

  • HPLIP has apparently changed such that it must now be compiled to get the .ppd's out of the source? (Edit 2013-03-24: after unpacking the HPLIP source, the .ppd's are available, but in compressed "gz" format, eg "hp-officejet_pro_8500_a910.ppd.gz". Select and use as needed (chmod 644 the file) without proceeding further with HPLIP compilation. Save both the "hpcups" and the "hpijs" versions, since hpcups is new, its rasterization is different, and YMMV. If one proceeds with the HPLIP installation, the gz'ed ppd's are also placed into /usr/share/ppd/HP.) A quick extraction sketch follows this list.
  • Before compiling, use Synaptic to install the developer versions of libjpeg, libcups, libusb1, and libsane to avoid ./configure failures. My final HPLIP configure line was $ ./configure --prefix=/usr --enable-lite-build --disable-network-build, since I only needed scanning and printing over a USB connection.
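The extraction mentioned in the first bullet, roughly (paths inside the tarball vary by HPLIP version, so find is the safe way to locate the file; "path/to" below is a placeholder):
$ tar xf hplip-3.13.3.tar.gz
$ find hplip-3.13.3 -name '*8500*.ppd.gz'
$ zcat path/to/hp-officejet_pro_8500_a910.ppd.gz > hp-officejet_pro_8500_a910.ppd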

Pt 2 errors

The make proceeded along for a while until exiting with:
prnt/hpcups/CommonDefinitions.h:43:25: fatal error: cups/raster.h: No such file or directory
I installed apt-file (# apt-get install apt-file && apt-file update) and sought out raster.h. Sure enough, it was located:
$ apt-file search cups/raster.h
libcupsimage2-dev: /usr/include/cups/raster.h
lsb-build-base3: /usr/include/lsb3/cups/raster.h

So accordingly...
# apt-get install libcupsimage2-dev
...and back to the compiling. I have to hand it to whoever came up with the excellent idea of apt-file. The remainder, including # make install, appeared to go normally.

next errors

To check the installation...
$ hp-check -t
The program 'hp-check' is currently not installed. You can install it by typing:
sudo apt-get install hplip
Perhaps some library linking did not go well, $PATH was not updated, or files were installed into a non-standard location for the PATH or CUPS. The latter is what happened during a prior HPLIP install in 2011.
$ ls /usr/bin |grep hp
dvihp
hp-mkuri
php
php5
pitchplay
Not encouraging.
$ which hp-check
$
Not encouraging.
$ find -name hp-check
$
Not encouraging.
# service cups restart
cups stop/waiting
cups start/running, process 11411
$ hp-check -t
The program 'hp-check' is currently not installed. You can install it by typing:
sudo apt-get install hplip
Not encouraging.
# reboot
$ hp-check -t
The program 'hp-check' is currently not installed. You can install it by typing:
sudo apt-get install hplip
Not encouraging.

uninstalling HPLIP

Link: Uninstall instructions. From the installation directory...
# make uninstall
( cd '/usr/share/cups/drv/hp' && rm -f hpcups.drv )
( cd '/etc/cron.daily' && rm -f hplip_cron )
( cd '/usr/share/hal/fdi/preprobe/10osvendor' && rm -f 20-hplip-devices.fdi )
( cd '/usr/share/ppd/HP'...all ppds)
( cd '/etc/udev/rules.d' && rm -f 56-hpmud_support.rules 86-hpmud_plugin.rules 56-hpmud_add_printer.rules 55-hpmud.rules )
( cd '/usr/share/doc/...various docs.)
( cd '/usr/share/doc/hplip-3.13.3/styles' && rm -f css.css )
( cd '/usr/share/doc/hplip-3.13.3/images' && rm -f favicon.ico print.png toolbox_actions.png toolbox_fax.png toolbox_print_control.png toolbox_print_settings.png toolbox_status.png toolbox_supplies.png xsane.png )
( cd '/usr/share/doc/hplip-3.13.3' && rm -f COPYING copyright README_LIBJPG )
( cd '/usr/lib/cups/backend' && rm -f hp )
( cd '/usr/bin' && rm -f hp-mkuri )
( cd '/usr/lib/cups/filter' && rm -f hpcups )
( cd '/usr/lib/cups/filter' && rm -f hpcupsfax )
( cd '/etc/hp' && rm -f hplip.conf )
This is where HPLIP had been installed. I don't believe the HAL issue will be important, but we'll see. Additionally, from the uninstall, we have some idea what libraries were installed and why they might not have been located...
/bin/bash ./libtool --mode=uninstall rm -f '/usr/lib/libhpmud.la'
libtool: uninstall: rm -f /usr/lib/libhpmud.la /usr/lib/libhpmud.so.0.0.6 /usr/lib/libhpmud.so.0 /usr/lib/libhpmud.so
/bin/bash ./libtool --mode=uninstall rm -f '/usr/lib/libhpip.la'
libtool: uninstall: rm -f /usr/lib/libhpip.la /usr/lib/libhpip.so.0.0.1 /usr/lib/libhpip.so.0 /usr/lib/libhpip.so
/bin/bash ./libtool --mode=uninstall rm -f '/usr/lib/sane/libsane-hpaio.la'
libtool: uninstall: rm -f /usr/lib/sane/libsane-hpaio.la /usr/lib/sane/libsane-hpaio.so.1.0.0 /usr/lib/sane/libsane-hpaio.so.1 /usr/lib/sane/libsane-hpaio.so
( cd '/usr/lib/cups/filter' && rm -f pstotiff )
We'll see if we can get by without these libraries -- eg, scanning may not work. Still, simple printing to start; a decent PPD should be sufficient.

Pt 3 Printer Installation (10 minutes)

Note: "udevmonitor" is now "udevadm monitor".
1. Uncompress the ppd wherever you find it as a .ppd.gz from the HPLIP source (no need to compile HPLIP)
2. Copy it to where it can be found and chmod it.
# cp hp-officejet_pro_8500_a910.ppd /usr/share/cups/model/1sttry.ppd
# chmod 644 /usr/share/cups/model/1sttry.ppd
3. Verify that your user is in the lp and lpadmin groups, eg via "$ groups" or "# userconfig"
4. Verify the file /etc/cups/cupsd.conf has the line (or add it and restart CUPS):
FileDevice Yes
5. Run # udevadm monitor
6. Connect the printer via USB and write down the device address that appears, eg usb:/dev/usb/lp0.
7. Be sure CUPS is running, eg # service cups restart
8. Add the printer to that USB address.
# lpadmin -p hp8500 -E -v usb:/dev/usb/lp0 -m 1sttry.ppd
9. Print, profit (a quick smoke test is sketched after this list)
10. If problems occur, see my post from 2011. You might have to uninstall the printer and try a different PPD or a different location for the PPD. The command to cancel current print jobs is # lprm -, the command to remove the printer (eg, named 'hp8500' above): # lpadmin -x hp8500. Also # lpstat -v is your friend.
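The smoke test mentioned in step 9, assuming the queue name hp8500 from step 8 (any small text file will do as a test print):
$ lpstat -p hp8500 -l
$ lp -d hp8500 /etc/fstab
The first command should show the queue enabled and idle; the second sends a small file through the new queue.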

One additional error

Uninstalling the HPLIP libraries earlier did come back to bite me. Everything installed properly, but attempting to print produced an error: "/usr/lib/cups/filter/hpcups not available: No such file or directory". I then reinstalled HPLIP but installed the PPD manually. All appeared good, but print jobs gave an error that the printer was waiting to become available. I looked into /var/log/cups/error.log and noted the following:
The name org.freedesktop.ColorManager was not provided by any .service files
This appears to be a common error. By now, I had also seen a lot of others pissed off about the various security (non)interactions between loopback security and their distro. For example, this person's SuSE rant.

Incidentally, I noted that, in each case, a gnome-keyring error appears, stopping communication with the printer. Apparently, a person has to hack their own laptop(!) for normal use if they use Ubuntu/Lubuntu. The roots of gnome-keyring persist into PAM, of course, and the LDAP abortion in their own system. To avoid contributing to my own productivity loss past the 12 hours already lost here, I eventually installed the 8500a with "apt-get install hplip". At some point, I will return to Slackware but, before doing so, I will attempt to euthanize gnome-keyring, PAM, and whatever other interconnected encryption garbage is in this distro when I have a few days to spare, and see if a printer can be attached.

Again, adding a printer is typically a console job of 10 minutes, but I just gave up and let the immense Ubuntu RAM-killing daemon collage do it for me after 12 lost hours. Google "cups pam" and you'll see why I gave up.

lpadmin

Lpadmin is part of CUPS and doesn't work when CUPS is not running.

GET A LIST OF PENDING PRINT JOBS BY INSTALLED PRINTER NAME
# lpstat

CANCEL ALL PENDING PRINT JOBS
# cancel -a

PRINT A TEST PAGE (eg, the man page for lpadmin)
# man lpadmin | lp -d HP8500

Saturday, December 22, 2012

Browser - Adobe Flash

Links: Slackware Flash update :: Opera plug-in page

Like most reasonable people, I dislike Adobe's proprietary obfuscation. It's most oppressive in the Linux environment, where its intrusive modules don't interact well with Linux's more transparent libraries.

A recent Adobe Flash update[1] screwed my Iceweasel installation and, in turn, destabilized a previously 3-years'-stable Zenwalk install. That is, immediately following the update of libflashplayer.so (including complete deletion of all prior versions, etc), the previously rock-solid Iceweasel intermittently crashed at Flash-intensive sites. A new installation of FlashBlock did not stop the crashes. Reinstallation of all three applications did no better. I eventually had to move from Zenwalk to ArchLinux due to these Adobe-related Flash crashes. In other words, I had to change my entire OS thanks to Adobe's closed-source, DRM-intensive elements, which are so far impossible for average users to ignore in a typical browsing environment.

[1] Pop-up windows demanding Adobe Flash Player updates began to appear in sponsored YouTube videos in December 2012. Prior to this update, ads could be bypassed. Following the update, they could not.

Monday, November 21, 2011

ffmpeg - x264 (FAIL)

Links: forum discussion of circular dependency for lavf
Full disclosure: This post is not a solution - it's a vent about program dependencies, one of those items where you're forced to waste a DAY or TWO for no good reason other than that the developers didn't use their heads around installation. Anyone who uses ffmpeg knows it's linked to libx264 hand-in-glove. Yesterday, I came across a very unappetizing twist in this relationship. I was deep into shrinking some videos into QVGA format for a NAXA device (see previous post) and found that my 1-year-old ffmpeg release was muxing video and audio streams into an AVI file in ways that gave flaky playback. Video (libxvid) and audio (MP3) were stable individually but, once muxed, could not reliably play back on every device. With MP2 audio there was no problem, so it appeared possible the problem was how MP3 streams were being packaged by ffmpeg. To rule this out, I started an update of ffmpeg.

Catch-22

After deleting my old version of ffmpeg, I downloaded ffmpeg version 0.8.6 and found during ./configure that my version of x264 needed updating. I blasted my old x264 and got on that, whereby I learned that I wanted yasm, of course, for an assembled (faster) version of x264. Yasm in, I returned to x264, and found that lavf support was not showing during ./configure. WTF? This is where I learned of a motherfarking CATCH-22 circular dependency that the x264 developers had implemented. X264's LAVF support requires a version of ffmpeg already installed. But ffmpeg's x264 support requires that x264 already be installed. Each requires the other for lavf support, so where can you start? This means I could not install x264 or ffmpeg with lavf support or, in other words, that they would both be USELESS CRIPPLEWARE. WTF?

The only solution possibility I can see for this GARBAGE is to install x264 without lavf support, subsequently install ffmpeg without lavf (because it's not in the x264, because it's not in the ffmpeg, because it's not in the x264, because...), then turn around and blow out the x264 and reinstall it with the crippled ffmpeg already in place (and no LAVF support within) and HOPE that the x264 can somehow provide LAVF when it doesn't see any in the ffmpeg. Finally, blow out the ffmpeg again, and reinstall it with x264 support enabled. Altogether, this would be 4 installations, but I don't see any other way to get the LAVF in there. Thanks, guys.
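Spelled out, the four passes would look roughly like this (configure flag names recalled from the x264 and ffmpeg build scripts of that era -- verify against ./configure --help before trusting them):
[x264 source]    $ ./configure --disable-lavf && make && su -c 'make install'
[ffmpeg source]  $ ./configure --enable-gpl --enable-libx264 && make && su -c 'make install'
[x264 again]     $ make distclean && ./configure && make && su -c 'make install'
[ffmpeg again]   $ make distclean && ./configure --enable-gpl --enable-libx264 && make && su -c 'make install'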

All I wanted was to get some videos down to QVGA size, which had already taken days; now I see I'm going to have to fix the thing that I need to fix the other thing, etc etc, in order to, days later, have a working ffmpeg again, in order to get back on the long road to making QVGA-sized videos. What a bunch of shiat.

Sunday, November 13, 2011

pdfedit - pdf's etc (FAIL)

Links: sourceforge - pdfedit   www.boost.org   Xournal

Like most blog posts, this one is born from annoyance. My current rage was with PDF books retrieved from Project Gutenberg. Typical PDF book files should be a few hundred K and fast to load. Some are. Some are several MB and also open quickly. But a few are several megabytes and, while loading, push one's CPU to an unhealthy 100% for minutes instead of a few seconds. This subset of larger PDF's is of course impossible to open on portable devices. The problem is that the Gutenberg volunteers make a normal PDF, but then add a 1200-line resolution photo of the book's cover to the first page. It takes a lot of CPU and memory for PDF software to simultaneously render that huge photo down to a tray icon, display it full-screen, and load the first few pages of text. The fix is to edit such a PDF's initial page, reducing the first-page photo to a typical 75- or 150-line resolution.

So these PDFs need repairing or else one's CPU will need replacing, but is there a Linux program out there which does this? We can say "yes" definitively if we want to spend hundreds on the Adobe Acrobat solution. And there are well-proven Linux tools like pdftotext that quickly extract all of a PDF's text, unformatted. But what about a Linux program that just opens the PDF, allows us to edit it, and then closes the file? Based on this ideal, I decided to give PDFedit a shot.
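As an aside, for the narrow goal of shrinking an oversized first-page image, Ghostscript can rewrite a PDF with its images downsampled -- a sketch only, not something attempted in this post, and the quality depends on the -dPDFSETTINGS preset:
$ gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -o smaller.pdf huge.pdf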

installation (v.0.4.5)

Comes as a .bz2 because they have decided to pander to the Windows crowd, apparently. The README indicates "Boost" is the dependency. Boost is just a set of C++ libraries, so I ran configure before doing any checks to see if they might already be installed. Nope:
checking for boostlib >= 1.20.0... configure: error: We could not detect the boost libraries (version 1.20 or higher). If you have a staged boost library (still not installed) please specify $BOOST_ROOT in your environment and do not give a PATH to --with-boost option. If you are sure you have boost installed, then check your version number looking in . See http://randspringer.de/boost for more documentation.

boost (v.1.47) installation

PDFedit's pretension of requiring Boost is annoying. For example, 1) C++ libraries sufficient for compiling are already on people's systems, so we don't need a redundant set, 2) installing them means bloating one's system for no reason and, worst of all, 3) they are on Sourceforge servers, so they add an hour to the installation timeline. (Edit: indeed, the first download was 30MB and turned out to be a set of PDF documents mislabeled as source.) A half hour was already wasted, but it's a dependency, gotta get it in. Let's go directly to the Boost site to get the libraries. And... the Boost site bounced me back to Sourceforge for another 53 frigging MB at Sourceforge 60K "speeds". Installing PDFedit is starting to look like a 2-hour operation.

bootstrap

So, I opened the Boost source. No configure file, no README. Great. There are some bootstrap files, however, so we're apparently dealing with frigging bootstrap. Now we have bad choices by both the Boost and the PDFedit developers. Also, Boost appears to require Python. So the real dependency tree is apparently: PYTHON-->BOOST-->XPDF-->PDFEDIT
$ ./bootstrap.sh
$ ./b2 --prefix=/usr
This doesn't work. I finally located some installation instructions. They're on the Boost website instead of in a simple README in the source. They appear partially inaccurate, since they omit the need for root. Let's start over and do it in a way that will work.
$ ./bootstrap.sh --prefix=/usr
# ./b2 install

back to pdfedit (hours later)

I've almost forgotten why I needed to install PDFedit in the first place, but here we go. I did a mostly standard configure -prefix=/usr; however, the results showed that no tools or kernel tests would be included. Start over.
$ configure -prefix=/usr --enable-tools --enable-kernel-tests
This went well except that the kernel checks couldn't be configured due to a missing package, apparently called CppUnit, of which it wants version 1.10 or later. Let's see if we can get that in.

Cppunit (v.1.12.1) installation

This was a standard configure -prefix=/usr, make, # make install. No problems.

back to pdfedit

Attempted 3 ways
$ configure -prefix=/usr --enable-tools --enable-pdfedit-core-dev --enable-kernel-tests
$ configure -prefix=/usr --enable-tools --enable-pdfedit-core-dev
$ configure -prefix=/usr
All of these resulted in fatal errors during make
make[2]: *** [cpagecontents.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory `/home/foo/Download/pdfedit-0.4.5/src/kernel'
make[1]: *** [kernel] Error 2
make[1]: Leaving directory `/home/foo/Download/pdfedit-0.4.5/src'
make: *** [source] Error 2
Apparently the PDFedit source has design and documentation flaws much deeper than one can suss out in the time required for reasonable installation. On the first account, it should run with normal kernel settings. On the second account, they left the little detail of kernel recompiling out of their hard-to-locate documentation, when it should be the first thing noted. Further, the documentation neglects any information regarding which kernel switches would need to be set. So really, users would have to guess among 2,600 kernel options in order to use PDFedit. In short, PDFedit will either work on one's PC or it won't, dealer's choice. Troubleshooting using strace and finding the needle in the haystack of the entire PDFedit source, goes far beyond the investment most people should have to make to simply install a program. I certainly have more appealing things to do with two weeks.

I wasted half a day on the shiatty PDFedit product and was unable to install it or edit my PDF's. In the end, I ran pdftotext on the particular PDF I wanted to fix. I'll format that basic text file with LaTeX as I read it, and then recompile when finished -- the resultant PDF will be easily read on a portable device. This is extra work I'll first have to do with a desktop, so I guess I'll read the book at home.

Saturday, August 28, 2010

Flash Player updates, fail

Links: Flashplayer

In Zenwalk (mini-Slack), I run Iceweasel for a browser. This is just a stripped Firefox that doesn't use Mozilla's trademarked branding, a nice touch. However, it means the User-Agent string is typically unrecognized at mainstream sites like Hulu. When there is, for example, a periodic Flash update that sites like Hulu want us to install, Adobe provides a warning message that I have an unrecognized or unsupported system.

In spite of the ominous warning message, Adobe provides the latest libflashplayer.so module at their site. Wipe out the old directory and the two softlinks. Put in the latest libflashplayer.so and create two softlinks to the new module. Right as rain.

download the module
Go to the Adobe Flashplayer page, which has a sniffer to determine that the OS is Linux. I took the basic tar.gz version, which is pre-compiled (can we say "proprietary"?). Unpack it. That's it. However, there are times when Flash updates have entirely broken my browser, which then requires an OS update. We never want to update an OS for any reason. We want a stable, 30-year installation.



remove the old stuff...
To be reasonably sure older versions didn't linger and cause conflicts:

# rm -rf /usr/lib/flashplugin*
# rm /home/foo/.mozilla/plugins/libflashplayer.so
# rm /usr/lib/mozilla-firefox/plugins/libflashplayer.so
# rm /usr/lib/mozilla/plugins/libflashplayer.so

...and in with the new stuff
The softlink commands are wrapping here in the blog's narrow column; there are just two of them.

# mkdir /usr/lib/flashplugin
# cp /home/foo/downloads/libflashplayer.so /usr/lib/flashplugin/
# ln -s /usr/lib/flashplugin/libflashplayer.so /usr/lib/mozilla/plugins/libflashplayer.so
# ln -s /usr/lib/flashplugin/libflashplayer.so /usr/lib/mozilla-firefox/plugins/libflashplayer.so
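A quick sanity check that both links now resolve to the new module (then confirm the version on the browser's about:plugins page):
$ ls -lL /usr/lib/mozilla/plugins/libflashplayer.so
$ ls -lL /usr/lib/mozilla-firefox/plugins/libflashplayer.so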