Sunday, April 26, 2020

class :: distance

During the COVID fiasco, we've had to adapt. By far, the first break-out concerns are 1) hardware and 2) internet connection. There are a couple of Zoom/Meet tips for class organizers at the bottom.

If you can organize databases and an email server, you're ahead of the program.

MySQL configuration (14:26) Engineer Man, 2019. MySQL, but can adapt to Postgres.

  • connection reliable FiOS is the only way to underwrite live streaming and large webinars without problems. IME, since 5G, 80% of students have connections that can manage this format; in the 4G era, reliable connections or data plans averaged about 30% participation.
  • hardware a laptop from 2016 at minimum: eg, Xeon E3-1200 v5/E3-1500 v5/6th Gen Core processor with at least 8GB RAM. Linux, Mac, or Windows makes no difference, but it has to be able to encode/decode in real time. Additionally, one or both of:
    • a digital tablet to write equations (connect with USB, switch in Zoom).
    • a document camera (connect with USB, switch in Zoom).
  • paper doing Zooms (or any contact with students, admin, or parents), I have 1) a plain spiral notebook, 2) a teacher's planner, and 3) printed copies of my rosters on a clipboard. I can take notes and keep up with checking off who I've spoken with, objectives, attendance, and so on.
    These paper planners are old school, but it's a cheap investment in sanity, IME.
  • space room for comfortable seating and cross lighting

second best

anything less than the above. The main difference is how contact is made. For example, suppose there is no Zoom capability with the students? Then I have to make videos and phone calls, both very time-consuming. Of course, I have a Google Voice number available, but there is extensive logging and sometimes a tickler required -- it's casework. Tier 2 means reaching out, since Zoom rooms are unavailable for students and parents to visit themselves. The point is I can be just as successful in Tier 1 or 2; they just look slightly different in how I spend my time, and how much time.

common strategy

Eighty percent of what I do is the same whether I'm in Tier 1 or 2. Accordingly, most of this post is taken up with what the two tiers have in common. At the bottom are notes on strategies specific to each.

admin

Apart from academics, one has to provide their own admin layer.
  • time-keeping spreadsheet, daily entries, semester long. I have categories (columns) for each day and just put in my hours and let the spreadsheet total them. The first page has this; a second page specifically tracks Zoom/webinar hours. This has served me well when providing hours to others. If cost were no object, I would want a complete database with custom reports, but the risk is spending more time entering data than doing work, so a spreadsheet manages this OK.
  • personal calendar I use old-fashioned paper for this: oil changes, MD appts, all of it. Time-keeping runs in the spreadsheet (hours worked, with categories); all appointments go in a separate calendar.

solo

An even leaner approach than Tier 2 is to have no support and work on one's own. At this level, a person's largest problem becomes grades, testing, and assignment turn-ins. Without any organizational support, use a free Schoology account, Zoom, one's Google account (email, Voice, Drive), and then post occasional YouTube videos that link back to one's Schoology. Since you can post grades and attendance in Schoology, you run it as the LMS. Engrade used to be able to handle this but no longer exists.

  • connection you more or less need a FiOS-speed connection to handle large Zooms or streaming. Without it, you're down to posting videos and answering emails.
  • hardware

school supported

During COVID, I knew a teacher who worked for a school using Aeries for grade tracking and Illuminate for attendance tracking (Aeries was used while in the classroom), with staff email on Microsoft Outlook, student communication in Google Suite, work assigned via Khan Academy, and staff meetings in Zoom. Teachers had to track their own hours in a spreadsheet.

video-conferencing

  • Meet free inside G-Suite. You'll want Google Extensions and/or tactiq.
  • Zoom educational accounts have dropped time restrictions.
  • BigMarker java-based, takes plugins for, eg, RStudio.
  • Skype 1-1; MicroSoft data collection for the US Govt.

editing

I'll list editors below, but ffmpeg is, sadly, the only reliable way to edit video in Linux: CLI work with a time sheet, clips, and ffmpeg.

scheduling

  • Calendar in G-Suite, has an appointment slots feature.

equipment

  • Webcams old cell phones are cheapest, with USB connection and proper software.

Zoom

The settings screen has probably too many options, perhaps 50 settings.


  • attendance$$$ (Pro Version or higher) 1) OFF: Allow participants to rename themselves (In Meeting (Basic)), 2) ON: Attention tracking (In Meeting (Advanced)) 3) Go into Reports -> Usage after meeting. NB: renaming is turned off to prevent students from checking in as their friends.

Attendance via focus (2:33) Dr. Veronica Paz (IUP), 2020. How

  • attendance (Poor version) 1) OFF: Allow participants to rename themselves (In Meeting (Basic)), 2) a chat transcript automatically appears in the recording folder, but you will have to grep or script something to pull all the names from the file (a grep sketch follows the video link below).

Extraction from a file (25:20) theurbanpenguin, 2013. Thorough treatment of data extraction from a file.
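For the grep route, a minimal sketch, assuming the transcript uses the older "HH:MM:SS From Name : message" line format (newer Zoom builds write "From Name to Everyone:", so check a line of your own file first and adjust the pattern); the file name here is just an example:
$ grep -oP '(?<=From ).*(?= : )' meeting_saved_chat.txt | sort -u > attendance.txt
$ wc -l attendance.txt
This pulls each unique sender name into attendance.txt and counts them.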

Moodle/LMS

Moodle is a good option for the LMS layer, as is Canvas. You can build your own server, or launch Moodle via Bitnami in Google Cloud for about $5 per month. I have a separate post on Moodle. Moodle works to be compliant with LRS/xAPI connections, since it is content-based, not an LRS server. For exams, you'll want to complete SCORM versions, also covered in another post.

Moodle exam configuration (12:11) Centre for Professional and Part Time Learning, 2019. Exam configuration for an institution, but a lot of good information for anyone's settings.

LRS

The newer xAPI realm uses H5P and different reporting.

Monday, April 20, 2020

PiP screencast pt 1

contents
  • capture: organizing the screen, opening PiP, capture commands
  • cuts
  • precision cuts
  • other effects: fade, text, saturation
  • speed changes: slow motion, speed ramps
  • scripting
  • audio and sync
  • subtitles/captions

NB: Try to make all cuts at I-frame keyframes, if possible.


Links: 1) PiP Pt II 2) capture commands 3) settings

Editing video in Linux becomes a mental health issue after a decade or more of teeth grinding with Linux GUI video editors. There are basically two backends: ffmpeg and MLT. After a lost 10 years, some users like me resign themselves to command-line editing with ffmpeg and melt (the MLT CLI editor).

This post deconstructs a simple PiP screencast, perhaps 6 minutes long. A small project like this exposes nearly all the Linux editing problems which appear in a production-length film. This is the added irony of Linux video editing -- having to become practically an expert just to do the simplest things; all or nothing.

At least five steps are involved, even for a 3.5 minute video.

  1. get the content together and laid out, an impromptu storyboard. What order do I want to provide information?
  2. verify the video inputs work
  3. present and screencapture - ffplay, ffmpeg CLI
  4. cut clips w/out render - ffmpeg CLI
  5. assemble clips with transitions - ffmpeg CLI

capturing the raw video

The command-line PiP video setup requires 3 terminals to be open: 1) for the PiP, 2) for the document cam, and 3) for the screen capture. Each terminal runs one command: 1) ffplay, 2) ffplay, 3) ffmpeg.

1. ffplay :: PiP (always on top)

The inset window of the host narrating is a PiP that should always be on top. Open a terminal and get this running first. The source is typically the built in webcam, trained on one's face.
$ ffplay -i /dev/video0 -alwaysontop -video_size 320x240

The window always seems to open at 640x480, but it can then be resized down to 160x120 and moved anywhere on the desktop. And then to dress it up with more brightness, some color saturation, and a mirror flip...

$ ffplay -i /dev/video0 -vf eq=brightness=0.09:saturation=1.3,hflip -alwaysontop -video_size 320x240

2. ffplay :: document cam

I start this one second, and make it nearly full size, so I can use it interchangeably with any footage of the web browser.
$ ffplay -i /dev/video2 -video_size 640x480

3. ffmpeg :: screen and sound capture

Get your screen size with xrandr, eg 1366x768, then subtract the bottom 30 pixels (20 on some systems) to omit the toolbar. If the toolbar isn't captured, it can still be used during recording to switch windows. Syntax: put the 3 flags in this order:

-video_size 1366x738 -f x11grab -i :0
...else you'll probably get only a small left corner picture or errors. Then come all your typical bitrate and framerate commands
$ ffmpeg -video_size 1366x738 -f x11grab -i :0 -r 30 output.mp4

This will encode a cleanly discernable screen at a cost of about 5M every 10 minutes. The native encoding is h264. If a person wanted to instead be "old-skool" with MPEG2 (codec:v mpeg2video), the price for the same quality is about 36 times larger: about 180M for the same 10 minutes. For MPEG2, we set a bitrate around 3M per second (b:v 3M), to capture similarly to h264 at 90K.

Stopping the screen capture is CTRL-C. However: A) be certain CTRL-C is entered only once. The hard part is that it doesn't indicate any change for over a minute, so a person is tempted to CTRL-C a second time. Don't do that (else untrunc). Click the mouse on the blinking terminal cursor to be sure the terminal is focused, then CTRL-C one time. It could take a minute or two and the file size will continue to increase, but wait. B) Before closing the terminal, be certain ffmpeg has exited.

If you CTRL-C twice, or you close the terminal before ffmpeg exits, you're gonna get the dreaded "missing moov atom" error. 1) install untrunc, 2) make another file about as long as the first but which exits normally, and 3) run untrunc against it.
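The recovery run is something like the following, assuming the good reference capture is named good.mp4 and the damaged one broken.mp4; untrunc reads the structure of the good file and should leave a repaired copy (with a _fixed suffix) next to the broken one:
$ untrunc good.mp4 broken.mp4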

Explicitly setting the screencast bitrate (eg, b:v 1M b:a 192k) typically spawns fatal errors, so I only set the frame rate.

Adding sound...well you're stuck with PulseAudio if you installed Zoom, so just add -f pulse -ac 2 -i default...I've never been able to capture sound in a Zoom meeting however.

$ ffmpeg -video_size 1366x738 -f x11grab -i :0 -r 30 -f pulse -ac 2 -i default output.mp4

manage sound sources

If a person has a Zoom going and attempts to record it locally, without benefit of the Zoom app's own recorder, they typically capture only the sound from their own microphone. Users must switch to the sound source of Zoom's playback itself to capture the conversation. This is the same with any VOIP, of course. This can create problems -- a person needs to make a choice.

Other people will say that old-school audio will be 200mV (0.002) p-p (peak-to-peak). Unless all these signals are changed to digital, gain needs to be set differently for each source. One first needs to know the names of the devices. Note that strange video tells more about computer mic input than I've seen anywhere.
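A quick way to list what's available, assuming PulseAudio is running (the *.monitor source is the one that captures what the machine is playing back, ie the far end of the call); the monitor name in the last line is only an example and will differ per machine:
$ pactl list short sources
$ arecord -l
$ ffmpeg -video_size 1366x738 -f x11grab -i :0 -f pulse -i alsa_output.pci-0000_00_1f.3.analog-stereo.monitor -r 30 output.mp4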

basic edits, separation, and render

Link: Cuts on keyframes :: immense amounts of information on cut and keyframe syntax


Ffmpeg can make non-destructive, non-rerendered cuts, but they may not occur on an I-frame (esp. keyframe) unless seek syntax and additional flags are used. I first run $ ffprobe foo.mp4 or $ ffmpeg -i foo.mp4 on the source file: bitrate, frame rate, audio sampling rates, etc. Typical source video might be 310Kb h264 (high), with 128 kb/s, stereo, 48000 Hz AAC audio. Time permitting, one might also want to obtain the video's I-frame (keyframe) timestamps and send them to a text file to reference during editing...

$ ffprobe -loglevel error -skip_frame nokey -select_streams v:0 -show_entries frame=pkt_pts_time -of csv=print_section=0 foo.mp4 >fooframesinfo.txt 2>&1
  • no recoding, save tail, delete leading 20 seconds. this method places seeking before the input and it will go to the closest keyframe to 20 seconds.
    $ ffmpeg -ss 0:20 -i foo.mp4 -c copy output.mp4
  • no recoding, save beginning, delete tailing 20 seconds. In this case, seeking comes after the input. Suppose the example video is 4 minutes duration, but I want it to be 3:40 duration.
    $ ffmpeg -i foo.mp4 -t 3:40 -c copy output.mp4
    Do not forget "-c copy" or it will render. Obviously, some circumstances require this level of precision, and a person has little choice but to render.
    $ ffmpeg -i foo.mp4 -t 3:40 -strict 2 output.mp4
    This gives cleaner transitions.
  • save an interior 25 second clip, beginning 3:00 minutes into a source video
    $ ffmpeg -ss 3:00 -i foo.mp4 -t 25 -c copy output.mp4
...split-out audio and video
$ ffmpeg -i foo.mp4 -vn -ar 44100 -ac 2 sound.wav
$ ffmpeg -i foo.mp4 -c copy -an video.mp4
...recombine (requires render) with mp3 for sound, raised slightly above neutral "300", for transcoding loss
$ ffmpeg -i video.mp4 -i sound.wav -acodec libmp3lame -ar 44100 -ab 192k -ac 2 -vol 330 -vcodec copy recombined.mp4

precision cuts (+1 render)

Ffmpeg doesn't allow cutting by frame number. If you set a time without recoding, it will rough cut to a number of seconds and a decimal. This works poorly for transitions. So what you'll have to do is recode it, enforce strict time limits, and then time it by the number of frames. You can always bring the clip into Blender to see the exact number of frames. Even though Blender is backended with Python and ffmpeg, it somehow counts frames a la MLT.
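If only the frame count is needed, ffprobe can report it without a trip through Blender; a sketch, assuming a single video stream (it decodes the whole file, so it takes a moment):
$ ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 foo.mp4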

other effects (+1 render)

Try to keep the number of renders as low as possible, since each is lossy.

fade in/out

...2 second fade-in. It's covered directly here, however, it requires the "fade" and "afade" filters which don't come standardly compiled in Arch, AND, it must re-render the video for this.
$ ffmpeg -i foo.mp4 -vf "fade=type=in:duration=2" -c:a copy output.mp4

For the fade-out, the location must be given in seconds; most recommend using ffprobe, then just enter a start 2 seconds before where you want it to finish. This video was 7:07.95, or 427.95 seconds. Here it is embedded with some other filters I was using for color balancing and de-interlacing.

$ ffmpeg -i foo.mp4 -max_muxing_queue_size 999 -vf "fade=type=out:st=426:d=2,bwdif=1,colorbalance=rs=-0.1,colorbalance=bm=-0.1" -an foofinal.mp4

text labeling +1 render

A thorough video (18:35), 2017, exists on the process. Essentially a filter and a text file, but font files must be specified. If you install a font manager like gnome-tweaks, the virus called PulseAudio must be installed, so it's better to get a list of fonts from the command line
$ fc-list
...and from this pick the font you want in your video. The filter flag will include it.
-vf "[in]drawtext=fontfile=/usr/share/fonts/cantarell/Cantarell-Regular.otf:fontsize=40:fontcolor=white:x=100:y=100:enable='between(t,10,35)':text='this is cantarell'[out]"
... which you will want to drop into the regular command
$ ffmpeg -i foo.mp4 -vf "[stuff from above]" -c:v copy -c:a copy output.mp4

...however this cannot be done, because stream copying isn't possible after a filter has been added -- the video must be re-encoded. Accordingly, you'll need to drop it into something like...

$ ffmpeg -i foo.mp4 -vf "[stuff from above]" output.mp4

Ffmpeg will copy most of the settings, but I often specify the bit rate, since ffmpeg occasionally doubles it unnecessarily. This would just be "q:v" (variable) or "b:v" (constant). It's also possible to run multiple filters; put a comma between each filter statement.

$ ffmpeg -i foo.mp4 -vf "filter1","filter2" -c:a copy output.mp4

saturation

This great video (1:08), 2020, describes color saturation.

$ ffmpeg -i foo.mp4 -vf "eq=saturation=1.5" -c:a copy output.mp4

speed changes

1. slow entire, or either end of clip (+1 render)

The same video shows slow motion.

$ ffmpeg -i foo.mp4 -filter:v "setpts=2.0*PTS" -c:a copy output.mp4
OR
$ ffmpeg -i foo.mp4 -vf "setpts=2.0*PTS" output.mp4

Sometimes the bitrate is too low on recode. Eg, ffmpeg is likely to choose around 2,000Kb if the user doesn't specify a bitrate. Yet if there's water in the video, it will likely appear jerky below a 5,000Kb bitrate...

$ ffmpeg -i foo.mp4 -vf "setpts=2.0*PTS" -b 5M output.mp4

2. slowing a portion inside a clip (+2 render)

Complicated. If we want to slow a 2-second portion of a 3-minute normal-speed clip, but those two seconds are not at either end of the clip, then ffmpeg must slice out the portion, slow it (+1 render), then concatenate the pieces again (+1 render). Also, since the single clip temporarily becomes more than one clip, a filter statement with a labeling scheme is required. It's covered here. It can be done in a single command, but it's a big one.

Suppose we slow-mo a section from 10 through 12 seconds in this clip. The slow down adds a few seconds to the output video.

$ ffmpeg -i foo.mp4 -filter_complex "[0:v]trim=0:10,setpts=PTS-STARTPTS[v1];[0:v]trim=10:12,setpts=2*(PTS-STARTPTS)[v2];[0:v]trim=12,setpts=PTS-STARTPTS[v3];[v1][v2][v3] concat=n=3:v=1" output.mp4

supporting documents

Because of the large number of command flags and commands necessary for even a short edit, we benefit from making a text file holding all the commands for the edit, another with all the text we are going to add to the screen or the script for the TTS we are going to add, and a list of sounds, etc. With these documents we end up storyboarding our edit in text. Finally, we might want to automate the edit with a Python file that runs through all of our commands and calls to TTS and labels.

basic concatenation txt

Without filters, file lists (~17 minutes into the video) are the way to do this with jump cuts.
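A minimal example of the file-list (concat demuxer) approach, assuming all the clips share the same codec, resolution, and frame rate; the clip names are placeholders, and -c copy keeps it render-free:
$ cat mylist.txt
file 'clip01.mp4'
file 'clip02.mp4'
file 'clip03.mp4'
$ ffmpeg -f concat -safe 0 -i mylist.txt -c copy joined.mp4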

python automation

Python ffmpeg scripts are a large topic requiring a separate post; just a few notes here. A relatively basic video (2:48), 2015, describes Python basics inside text editors. The IDE discussion can be lengthy too, and one might want to watch this one (14:06), 2020, although if you want to avoid running a server (typically Anaconda), you might want to run a simpler IDE (Eric, IDLE), PyCharm, or even avoid IDEs altogether (6:50), 2019. Automating ffmpeg commands with Python doesn't require Jupyter, since the operations just occur on one's desktop OS, not inside a browser.

considerations

We want to have a small screen of us talking about a larger document or some such and not just during recording
  • we want the small screen PiP to always be on top :: use -alwaysontop flag
  • we'd like to be able to move it
  • we'd like to make it smaller than 320x240
link: ffplay :: more settings

small screen

$ ffplay -f video4linux2 -i /dev/video0 -video_size 320x240
...or, to keep it always on top
$ ffplay -i /dev/video0 -alwaysontop -video_size 320x240

commands

The CLI commands run long. This is because ffmpeg defaults run high. Without limitations inside the commands, ffmpeg pulls 60fps, h264(high), at something like 127K bitrate. Insanely huge files. For a screencast, we're just fine with
  • 30fps
  • h264(medium)
  • 1K bitrate
flags
  • b:v 1M; if movement in the PiP is too much, up this
  • f x11grab; must be followed immediately by the input option "-i" and, eg, ":0" or "desktop" -- this will also bring the h264 codec
  • framerate 30. Some would drop it to 25, but I keep with YouTube customs even when making these things. Production level would be 60fps.

video4linux2

This is indispensable for playing one's webcam on the desktop, but it tends to default to the highest possible framerates (14,000Kbs) and to a 640x480 window size, though the latter is resizeable. The thing is, it's unclear whether this is due to the video4linux2 settings or to the ffplay defaults on top of it. So is there a solid configuration file to reset these? This site does show a file to do this.
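Short of a configuration file, the defaults can at least be overridden per invocation by passing input options before -i; the sizes and rates a given webcam actually accepts can be listed with v4l2-ctl from the v4l-utils package (a sketch; the values are examples):
$ v4l2-ctl --device=/dev/video0 --list-formats-ext
$ ffplay -f video4linux2 -framerate 25 -video_size 320x240 -i /dev/video0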

scripting

You might want to run a series of commands. The key issue is figuring out the chaining. Do you want to start 3 programs at once, one after the other, one after the other as each one finishes, or one after the other with the output of the prior program as the input for the next?

Bash Scripting (59:11) Derek Banas, 2016. Full tutorial on Bash scripting.
Linking commands in a script (Website) Ways to link commands.

$ nano pauseandtalk.sh (don't need sh, btw)
#!/bin/bash

There are several types of scripts. You might want a file that sequentially runs a series of ffmpeg commands, or you might just want a list of files for ffmpeg to look at to do a concatenation, etc.
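A minimal sketch of the first type: a file that runs the edit steps in order and stops at the first failure. The file names are placeholders for whatever clips the edit uses, and the commands are just ones like those earlier in this post:
#!/bin/bash
# pauseandtalk.sh -- run the edit steps in order; exit on the first error
set -e
ffmpeg -y -ss 0:20 -i raw.mp4 -c copy clip1.mp4
ffmpeg -y -i clip1.mp4 -vf "fade=type=in:duration=2" -c:a copy clip1fade.mp4
ffmpeg -y -f concat -safe 0 -i mylist.txt -c copy final.mp4
...then $ bash pauseandtalk.sh runs the whole edit.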

Sample Video Editing Workflow using FFmpeg (19:33) Rick Makes, 2019. Covers de-interlacing to get rid of lines, cropping, and so on.
Video Editing Comparison: Final Cut Pro vs. FFmpeg (4:44) Rick Makes, 2019. Compares editing on the two interfaces, using scripts for FFmpeg

audio and narration/voiceover

Text-to-speech has been covered in another post; however, there are commonly times when a person wants to talk over some silent video. $ yay -S audio-recorder. The question is how to pause the video, speak at a point, and still be able to concatenate.

inputs

If you've got a desktop with HDMI output, a 3.5mm hands-free mic won't go into the video card; use the RED 3.5mm mic input, then filter out the 60Hz hum. There are ideal mics with phantom power supplies, but even a decent USB mic is $50.
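For the hum itself, ffmpeg's audio filters can do the cleanup after recording; a sketch, assuming a 60Hz mains hum and a WAV narration file (highpass removes the low rumble, bandreject notches the hum frequency):
$ ffmpeg -i narration.wav -af "highpass=f=100,bandreject=f=60:width_type=q:w=1.0" narration-clean.wav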

For syncing, you're going to want your audio editor and Xplayer running on the same desktop. This is because it's easier to edit the audio than the video; there's no rendering needed to edit audio.

Using only Free Software (12:42) Chris Titus Tech, 2020. Plenty of good audio information, including Auphonic starting at 4:20, mics (don't use the Yeti - 10:42), and how to sync (9:40) at 0.4 speed.
Best for less than $50 (9:52) GearedInc, 2019. FifinePNP, Blue Snowball. Points out that once we get to $60, it's an "XLR" situation with preamps and so forth to mitigate background noise.
Top 5 Mics under $50 (7:41) Obey Jc, 2020. Neewer NW-7000.

find the microphone - 3.5mm

Suppose we know we're using card 0

$ amixer -c0
$ aplay -l
These give us plenty of information. However, it's still likely in an HDMI setup to hit the following problem
$ arecord -Ddefault test-mic.wav
ALSA lib pcm_dsnoop.c:641:(snd_pcm_dsnoop_open) unable to open slave
arecord: main:830: audio open error: No such file or directory

This means there is no "default" configured in ~/.asoundrc. There would be other errors too, if it's not specified. The minimum command specifies the card, format, number of channels, and rate.

$ arecord -D hw:0,0 -f S16_LE -c 2 -r 44100 test-mic.wav

subtitles/captions

Saturday, April 18, 2020

[solved] Brother HL-L2315DW USB install (04f9:0092)

2022 update: due to evolving colord conflicts with CUPS color settings, it's best to follow the instructions in my most recent post on this install. There's plenty of relevant older info below, but it depends on one's time and inclination whether to move on to the other post or start below.


Typical printing links and commands:

  • http://localhost:631 CUPS admin page
  • # lpadmin -x Brother remove a printer by its installed name, eg. "Brother"
  • $ lpinfo -v get info on all connected printers
  • # lpstat -o list all print jobs
  • # cancel -a cancel all print jobs
  • # systemctl [enable/disable/start/stop/restart] cups.service
  • /var/log/cups/error_log CUPS error logs

The printers are priced at $90 on clearance (c. 2020) or sometimes $70 refurbished. Refurbished with a 2-year warranty is best. Eg, "refurbished" from Wal-Mart ($70) with an Allstate 2-year plan ($6) costs less than a new model ($90) yet has deeper protections, such as free shipping for repairs.

Links: Openprinting.org database :: Brother L2315DW downloads page


solution

Go to the Brother L2315DW downloads page and select Linux -> RPM's. Note that, even though they offer three downloads, both the PPD and LPD are contained in the single 0.2MB PPD download. The file is hll2315dwcupswrapper-3.2.1-1.i386.rpm.

Use xarchiver to open the RPM and extract its single folder, "opt". Inside opt, I continued drilling down into its subdirectories and located both necessary files in /opt/brother/Printers/HLL2315DW/cupswrapper/. The CUPS PPD is named brother-HLL2315DW-cups-en.ppd; feel free to rename it to [whatever].ppd. As for the LPD filter, it must retain its name, and you might want to save it (it's a Perl script), but you could also just create the file, since its contents are a single line (see further down).

owner and permission

Ownership will automatically become root because these can't be copied into their directories as a user -- when you 'su' up to copy them, they'll move over owned by root. I have a working system with 755 on the filter, but the instructions say it should be 751.

  • CUPS (PPD) 755 $ chmod 755 file
  • LPD text file 751 $ chmod 751 file

locations

  • CUPS (PPD) /usr/share/cups/model/foo.ppd
  • LPD file /usr/lib/cups/filter/brother_lpdwrapper_HLL2315DW

LPD file

The ONLY way I could get it to print was to use default options, which meant creating a custom brother_lpdwrapper_HLL2315DW file. Duplex printing -- an option physically available on the printer -- might be possible from some printer setting, but it cannot be accomplished in the CUPS software without errors that prevent printing altogether. Let the printer report its defaults to CUPS for successful printing. The file which makes CUPS adopt printer defaults is a single shebang line, with no line breaks. Don't forget: permissions must be 751.
#! /opt/brother/Printers/HLL2315DW/cupswrapper/brother_lpdwrapper_HLL2315DW

PPD file

The PPD part is like years prior.
  • assigned read/execute (5) permissions that have worked with prior PPD's... $ chmod 755 printer.ppd
  • copied it to the Arch PPD directory... # cp printer.ppd /usr/share/cups/model/printer.ppd
  • verify the USB-attached printer is detected... $ lsusb
  • verify CUPS is running... # systemctl
  • install the printer # lpadmin -p Brother -E -v usb:/dev/usb/lp0 -m printer.ppd
  • alternatively, if having problems, you can use the bus ID's in lsusb and be more specific about the PPD locations as well (all one line):
    lpadmin -p Brother -E -v usb:/dev/bus/usb/lp0 -m printer.ppd
  • check it in CUPS http://localhost:631, and verify or assign it default printer
  • still in CUPS, verify the printer is awaiting print jobs and not paused
  • print a test page
  • copy the PPD to one's installation USB key so they needn't download it again for an OS/CUPS re-install

wifi install

Not worth it for a home-use stand-alone printer; it's too flaky. Eg, after the next power outage, the WiFi router assigns the printer a new DHCP address and one has to install the printer again.

somewhat common filter problem

There's a fail where the filter does not copy over to the /opt directory.

$ ls /opt/brother/Printers
ls: cannot access '/opt/brother/Printers': No such file or directory

The error log looks like this:

$ cat /var/log/cups/error_log
W [14/Dec/2022:22:24:51 -0800] CreateProfile failed: org.freedesktop.ColorManager.AlreadyExists: profile id 'Brother-Gray..' already exists

What has happened? The printer driver PPD attempted to create a settings file in /opt/brother/Printers/[model], but part of the PPD attempt included setting printer colors. The desktop ColorManager (ICC) settings from freedesktop (files are in /usr/share/dbus-1/interfaces) conflict with the PPD colors. Due to the conflict, CUPS aborts creation of the printer configuration file in /opt and then fails to print for lack of that filter.

These color manager related failures are based on colord ICC profiles for each device and are complex.

colormgr is the command-line tool for managing colord ICC profiles.
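Two colormgr calls that help when diagnosing this, assuming colord is installed and its daemon is running; they list the devices colord knows about and the ICC profiles attached to them:
$ colormgr get-devices
$ colormgr get-profiles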

additional

There have been changes in CUPS, such as requiring the LPR translation file, and PPDs may disappear entirely at some point.
  • I cannot select Duplex and have it print. If I could, I'd want "DuplexNoTumble". Duplex with tumble inverts the top and bottom of the front/back of a page, as for clipboard use. However, the printer only works in single-sided mode.
  • after installation, it's efficient to set it as the default printer in case one has an application that sends to an LPR default, not to a printer name (eg, geeqie).
  • Some failures require a CUPS restart...
    # systemctl restart cups.service

Thursday, April 9, 2020

[unsolvable] disable passwords for Zoom Basic meetings (Arch, Android)

If one's about to install Zoom on any device, first open a browser and create an account at the Zoom website. Other than the "hands-free" note further down, I would limit myself to changing Zoom settings only from this website account, rather than from a device. If done from the website account, settings will waterfall down into whichever downstream device(s) one uses with a Zoom client.

I like to disable the Zoom Personal ID ("PID"). The PID is similar to a personal phone number. I have never given mine out, and I disabled PID meetings via the web account. The effect on my phone is that neither PID meetings nor the PID itself appear. De-clutter.

largest problems

The largest problems with Zoom are the hidden ones, probably obscured at the behest of some marketing hack.
  • no way to disable passwords for scheduled meetings in the basic account. If you'd like to meet with grandpa Joe without a password, to make it easier on him, be prepared to pay $15 per month; basic users have no ability to disable the password requirement. Send him the entire link with the embedded password, or devise a simple password scheme, say the letter "j", for all meetings.
  • opaque appropriation of email domains. There's a screen warning, but I failed to get a screenshot of it before it disappeared. Say one has an email address at their employer, Acme:
      chump@acme.com
    Chump goes to the Zoom website, creates a basic account, and is Zooming for their job. Maybe he even pays $15 for extra features. But now Acme decides as a corporation to purchase an enterprise Zoom account. Without informing Acme or Chump, Zoom restricts control over any Zoom logins with emails ending in "acme.com". The next time Chump logs in for a Zoom work meeting, a pop-up warns Chump he cannot log in and misleads him with a choice between accepting all the Acme settings or simply changing his account email address. Chump updates with another email address. Unknown to Chump, or likely to Chump's boss, when Chump changed his email to keep his settings, his Zoom login lost acceptance into Acme-hosted Zooms. Through no fault of his own, Chump can't log in, and he can't figure out why, since Zoom didn't provide that information (at this writing). This means Chump also lacks an explanation for his boss, who likely feels Chump is a liar, lazy, or incompetent for the missed meeting(s). Chump madly rifles through the hundreds of his Zoom account settings, and still, all login attempts are rejected. The only solution is apparently for Chump to make a new account, as Cornell eventually learned.
  • Numerous, sometimes overlapping settings. COVID will long be over by the time we figure out these combinatorics: 4 levels, 3 roles, and 40 or 50 settings. Some settings only apply to a certain level, others apply to all, and it's pretty much trial and error. The four levels are meeting, user, group, and account. Now add the 3 roles: user, admin, owner. The entire 16 minutes of video in the link below only deals with the "Settings" button in the menu. Notice that there is an entire "Admin" menu area, and that this expands into many other menu setting areas. All these settings may be necessary or beneficial to some users, but it's also time-consuming, complex, and therefore error-prone, for all.

    Advanced Zoom settings - Basic and Pro (16:50) Lifelong Learning at VTS, 2020. Pedantic, side by side run-down of settings for Basic and Pro features.

  • Features locked by default require identity verification to unlock. Verification is accomplished via a credit card or PayPal, including a home address. Now they have your zip code.

Android - phone

Go to the Google Play Store, and download and install Zoom. Zoom has step-by-step instructions for getting started, and there's nothing weird except one thing: disable the hands-free option in settings, or it's a serious nagware problem every time the app is opened.

When opening the application, "sign in" and "sign up" prompts appear. "Signing up" is the one-time event I recommend doing at the Zoom website, which has far more settings than the phone app. I ignore the "sign up" prompt no matter the device, because I already accomplished it on the website. "Signing in" I do each time I use the application.

If one has already created a web account, one can simply "sign-in" to the current device and have all the settings which one configured at the website. Create the account at the website, install the app, sign-in to the app.

creating and joining meetings

I create all meetings on the Zoom website. I do not create meetings through the phone application, I just attend or host them through it. If one intends to use Zoom, it's helpful to try a practice meeting with a friend before going live to a conference and so forth.

Arch - desktop

No one wants to install this 256Mb lead weight, because it brings in PulseAudio, which is effectively a virus. Some apps (eg, recordmydesktop) will then fail to access the soundcard directly. If you need to screen-capture during a Zoom, a person can either turn on recording for the Zoom itself or use ffmpeg.

These are the (7.45Mb of) dependencies noted during the (AUR) Zoom installation, via...

$ yay zoom
... (of course, remove with # pacman -Rns zoom)
  • alsa-plugins
  • pulseaudio
  • pulseaudio-alsa
  • rtkit
  • web-rtc-audio-processing

ffmpeg :: screen and sound capture

One should know their screen size, eg 1366x768, and cut off the bottom 30 pixels, or however many constitute a toolbar. This allows switching between windows via the toolbar off-screen. Syntax: these three flags should come first and in this order
-video_size 1366x738 -f x11grab -i :0
...else you'll probably get only a small left corner picture or errors. Then come all your typical bitrate and framerate commands
$ ffmpeg -video_size 1366x738 -f x11grab -i :0 -r 30 output.mp4
I've never been able to set a bitrate in a screencast without fatal errors (eg, b:v 1M b:a 192k). And then to add the sound...well, you're stuck with PulseAudio if you installed Zoom, so just add -f pulse -ac 2 -i default...
$ ffmpeg -video_size 1366x738 -f x11grab -i :0 -r 30 -f pulse -ac 2 -i default output.mp4
There are also ways to get it to record a specific app only, using the name of the window, not covered here.
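One workable approach, assuming an X11 session: read the window's size and position with xwininfo (click the target window), then hand those numbers to x11grab as the capture size and offset. The 1280x720 and +64,32 below are only example values:
$ xwininfo | grep -e Width -e Height -e Absolute
$ ffmpeg -video_size 1280x720 -f x11grab -i :0.0+64,32 -r 30 zoomwindow.mp4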

2016 Latitude E7270 6600U Install

Safety: BitLocker (encryption), UEFI, Fingerprint. Uninstall packages:

# pacman -Rsn

NB: Headphones (also hands-free) Mijiaer/Langsdom JM26 3.5mm earbuds


Problem: your situation requires reliable Zoom (or other) teleconferencing, but you're not a first responder, professional athlete, or investment banker. If you can carry one (2.74 lbs), 2016 carbon-fiber, non-touch-screen Latitude E7270's are available on EBay for about $250 delivered. These are not display show-offs, with 12.5" screens and only integrated Intel HD Graphics (520), but they are solid business grade: 16 GB DDR4, i7 6600U Skylake w/ 100Mhz FSB, and a 1080P webcam: they will do the job reliably. Think 256G PCIe3 SSD, Arch Linux, and IceWM: for that $250, one has a laptop which will teleconference, and audio might even be edited -- a single 3.5mm combined jack. Watching movies or editing pics on a 12.5" screen, however, is suboptimal.

upgrade/replacement notes: please scroll down to the bottom for sections on replacement, esp. batteries and SSD. The laptop takes an M.2 card-style NVMe SSD, so it's a little complicated.

Packaging appeared good, visual inspection was clean and the charger and battery tests went well (1 hr). The subsequent Arch installation was an additional 4 hours.

bootable USB :: 20 mins

  1. Download the latest ISO. Necessary or the signature keys might error during install.
  2. # fdisk -l find the drive name, eg /dev/sdc OR...
  3. $ lsblk | grep -i sd
  4. # cfdisk /dev/sdc, delete all partitions and write, exit
  5. # dd bs=4M if=/home/foo/Downloads/archlinux-2020.03.01-x86_64.iso of=/dev/sdc oflag=sync The sync flag takes a little longer but you can be sure it will be properly on the usb.

BIOS shifts :: 15 mins

Two equally important adjustments must be made in BIOS: 1) booting from USB for install and, 2) when booting from HDD, non-windows, non-secure, booting and no disk encryption.
  • Enter BIOS, press F12 repetitively during power up.
  • Secure Boot Enable -> "Disabled"
  • General -> Advanced Boot Options -> "Enable Legacy ROM's"
  • General -> Boot Sequence -> Enable "Legacy"
Also, I noted I luckily had the latest BIOS version 1.20.3 (7/2016) but there's a process if not. Link: configure BIOS for USB boot :: update the BIOS

GRUB/safety problem :: 15 mins

Currently (2020), however, GRUB seems finicky when using a GPT (GUID partition table) disk. So, in addition to eliminating EFI, I convert all GPT to MBR. This requires three applications: cfdisk (delete all partitions), gdisk (to detect GPT and revert to MBR), and fdisk ("fdisk -t" to create an MBR, 0x04).
  1. # fdisk -l determine the drive name, eg /dev/nvme0n1
  2. # cfdisk /dev/nvme0n1, delete all partitions, which eliminates EFI. Write this partitionless drive and exit
  3. # gdisk /dev/nvme0n1 note whether GPT is present and MBR protected. Mine noted both and that it was "using GPT". I selected "x" for "expert command", then "z" to zap the entire GPT structure, which was confirmed. It then asked if I wanted to "Blank out MBR?", to which I replied "yes". When I then re-ran # gdisk /dev/nvme0n1, all (MBR, BSD, APM, and GPT) were reported as not present. This is correct, but "x,z,y" again to be sure.
  4. # cfdisk /dev/nvme0n1, select DOS label, and proceed. The main partition is a Linux (83) nvme0n1p1 and a Linux swap (82) nvme0n1p2.
The only real question is whether to make multiple partitions for different parts of the OS. For this example, I've put them all in a single partition.

base CLI (runlevel 2) configuration :: 20 mins

# mkswap /dev/nvme0n1p2
# swapon /dev/nvme0n1p2
# free -m [check swap is on]
# mke2fs /dev/nvme0n1p1
# mount -rw -t ext2 /dev/nvme0n1p1 /mnt
# mkdir /mnt/etc
# genfstab -p /mnt >> /mnt/etc/fstab
# pacstrap /mnt base base-devel linux linux-firmware
# arch-chroot /mnt
# ln -s /usr/share/zoneinfo/US/[zone] /etc/localtime
# mkinitcpio -p linux
# passwd
# pacman -Syu grub dhcpcd
# mkdir /boot/grub
# grub-mkconfig -o /boot/grub/grub.cfg
# grub-install /dev/nvme0n1 [no partition]
# exit
# reboot

go back to install disk

Let's say I forgot to # pacman -S dhcpcd after pacstrap, and now I'm booting straight off the drive. It's extremely difficult to set up even a wired connection, let alone WiFi, with just the tools in the base install. So go back to arch-chroot and install from there.
# mount -rw -t ext2 /dev/nvme0n1p1 /mnt
# arch-chroot /mnt
# pacman -Syu dhcpcd wireless-tools
# exit
# reboot
Similarly, you want to pull window manager menus and so on prior to starting X the first time...
# mkdir -p /media
# mount -rw -t ext2 /dev/nvme0n1p1 /media
$ cd /media
[do whatever operations]
# umount /dev/nvme0n1p1

locale :: 0 mins

I don't set a locale, but here's some information if interested.

useradd :: 5 mins

User 500, name "foo", home directory of "foo", using bash shell.
# useradd -G wheel,lp,audio -u 500 -s /bin/bash -m foo

aur :: 30 mins

It used to be so simple with yaourt, but that hasn't been maintained since 2018. The best option now is yay, built on the 300Mb behemoth Go dependency. This first part is the same:
# nano /etc/pacman.conf
[archlinuxfr]
SigLevel = Never
Server = http://repo.archlinux.fr/$arch

# pacman -S go wget git base-devel
$ git clone https://aur.archlinux.org/yay.git
To build and install they do it through fakeroot, which looks at the god-damned sudoers file. Unfortunately, editing sudoers is done through the visudo command, which every stinking website tells users not to circumvent, and which uses the ridiculous VIM editor. Here's the workaround...
# nano /etc/sudoers
foo ALL=(ALL) ALL
... and once that's done...
$ cd yay
$ makepkg -si
If you have to hand-install the TAR.XZ, use pacman with the "U" flag, which has pacman look in the current directory instead of out over the Net. Prior to building any AUR helper (yaourt, aurman), you'll need to first build "package-query", also from the AUR.
# pacman -U package-query[version]tar.xz
And then the same with yaourt thereafter.

limit journalctl size :: 5 mins

Systemd will log GB's and GB's of data if not limited
# nano /etc/systemd/journald.conf
SystemMaxUse=200K
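That setting caps future growth; to trim what's already on disk, restart journald and run the vacuum flag (the 200M here is only an example size):
# systemctl restart systemd-journald
# journalctl --vacuum-size=200M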

X install :: 20 mins

These Dells have integrated Intel HD Graphics 520; the correct xf86 driver is xf86-video-intel.
# pacman -S xf86-video-intel
This driver now uses DRI3 as the default Direct Rendering
Infrastructure. You can try falling back to DRI2 if you run
into trouble. To do so, save a file with the following
content as /etc/X11/xorg.conf.d/20-intel.conf :
Section "Device"
Identifier "Intel Graphics"
Driver "intel"
Option "DRI" "2" # DRI3 is now default
#Option "AccelMethod" "sna" # default
#Option "AccelMethod" "uxa" # fallback
EndSection
# pacman -S xorg-server xorg-apps xorg-xinit xorg-xrandr
If you scroll down this Arch page discussing Xorg, we can see that we'll want mesa for OpenGL and lib32-mesa for older 32-bit apps. Also, Intel chips, as we know (scroll down to about item 13), do not support VDPAU, viz:
"Intel Embedded Graphics Drivers do not support VDPAU. VDPAU stands for video decode and presentation API for UNIX*. VDPAU is an open source library and API originally designed by NVIDIA that provides an interface to support hardware-accelerated video decode."
... and so Intel sez libVA is correct. More specific to Arch, there's additional information in their video acceleration page, if you like to read. The only problem with the VA-API is it can't decode MP4 and FLASH containers, but it does all other common formats, and all codecs, including H264 and the new VP8 and 9. I just use MKV and AVI containers and set the VLC codec to VA (instead of its default VDPAU). In spite of inefficiencies on the Intel hardware, some may wish to overlay VDPAU functionality onto their Intel chip, which is an installation beyond this post. If a person does that, any mistakes will defeat X working properly -- no harm, just revert to runlevel 2 and reconfigure until Xorg is working well. FYI, one of the tweaks I've seen for VDPAU overlaid onto VA is adding "export VDPAU_DRIVER=r600" in one's ~/.xinitrc file. Anyway, back to pure libVA...
# pacman -S libva libva-intel-driver libva-utils libva-mesa-driver
... then check the install with "$ vainfo".

window manager :: 10 mins

On an old system, I don't waste memory on display managers; instead I log in and "startx" from runlevel 2. I like Ice Window Manager, a light interface with simple text configuration, wallpaper, and menu files (look inside ~/.icewm). Efficient on older systems: perhaps 150M usage after logging in, connecting to the network, and starting X. In Arch, the template files are inside /usr/share/icewm/, including the themes. See the main Arch page.
# pacman -S icewm
$ cp /etc/X11/xinit/xinitrc .xinitrc
$ nano .xinitrc
exec dbus-launch icewm-session
$ startx
I also looked here to get the names of additional drivers, for example to solve the pesky touchpad problem. I couldn't stop the Dell touchpad with synclient TouchPadOff=1 until I # pacman -S xf86-input-synaptics.

QT or Gtk

I used to stick with one or the other to keep a smaller install and shorter update. Nowadays both seem necessary. I prefer GTK (except gvfs), but VLC requires QT. QT is about 400MB, and typically pulls in PyQT. But since I'm a fan of VLC ... QT became my baseline API.
# pacman -S qt4
However, you're going to see that udiskie (to avoid gvfs) brings in about 80MB of shit, including basic Gtk.

sound

I avoid PulseAudio as much as I can; see my post from 2016. ALSA is now built-in, so all that's required is alsamixer to control the sound levels (unmute, etc).
# pacman -S alsa-utils
Done.

rc.local

A consolidated place for random startup shit one is too lazy to configure individually. It's like an xinitrc in X, but for runlevel 3.
# nano /etc/rc.local
#!/bin/bash
wpa_supplicant etc
dhcpcd etc
exit 0
# systemctl enable rc-local.service should make it happen next boot, but you also have to create the service file before enabling it.
# nano /etc/systemd/system/rc-local.service

# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.

[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target

replacement info

batteries - 55wh(4 Cell) LPD-J60J5 $30

Li-Ion, power is 7.6VDC. These go bad. When I first opened the case, the battery was swollen and pushing against the case cover. The charging circuit is built in, which might have failed. Or the battery has a place for heat paste on the bottom, which might not have been seated. If only there were a LiPo which fit for less than $100. A replacement Li-Ion J60J5 costs $25 (2022) plus 2.5 bucks tax, then thermal paste and cleaner are another $15. We're looking at $40+.

After install, there are several ways to check the battery, but this is perhaps the easiest way:

$ upower -i /org/freedesktop/UPower/devices/battery_BAT0

Thermal paste application (8:48) Laptop Performance Tricks, 2019. Appears to be the solution video, with all relevant tips. Vid is for a Dell G7 rather than the Dell E7270.

storage: ssd pcie $30

Nowadays, we're not dealing with 2.5" SATA HDDs that spin up. The installed card is a 256Gb NVMe in a PCIe Gen3 x4 slot; the PCIe bus (FSB) is the standard 100Mhz, but this discount model only uses 2 lanes, so 2 lanes go unused. It's upgradeable to 1TB for probably $70, and simply replacing the 256Gb Gen3 runs about $30.