Friday, December 11, 2020

maths resources

Links: Photomath :: Wolfram Alpha :: Desmos
reddit post (self-teach calc) :: Ranking engineering degrees by difficulty (15:07), 2023


Math is difficult because the explanations come too late in the game: you have to reach Linear Algebra before the basics get explained, and most people never get that far.

spreadsheets, Python, R, SPSS

Most people cannot afford SPSS, ergo PSPP is a good option. If using Linux, most repos have it; in Arch, it's in the AUR. There are plenty of PSPP videos on YT, as well as SPSS videos that a person can adapt.

Python, Numpy, Plotly (1:10:57) Derek Banas, 2022. Uses Jupyter on Anaconda but a couple tweaks will do it in Google Cloud. 16:00 explains Numpy, 17:00 read CSV, scraping 21:00, Plotly 45:00
Pandas Merge (Excel) (9:08) Alan Hettinger, 2022. Similar to a SQL join: when we want to put things together from various tables into a combined output. Can adapt Jupyter to Google Colab with Google Sheets as-is, or download them as CSVs first and then merge.
Pandas Merge (Sheets) (10:48) Tobias Willman, 2020. Adapt the Jupyter above using this guy's vid.
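Since the merge in these videos is "similar to a SQL join", here is a minimal sketch of the idea in plain Python (the tables and the "id" key are made-up examples; pandas wraps this logic with many more options):

```python
# Minimal inner-join sketch: conceptually what a pandas merge does.
# The tables and the "id" key below are hypothetical examples.
students = [{"id": 1, "name": "Ana"}, {"id": 2, "name": "Ben"}]
grades = [{"id": 1, "grade": 90}, {"id": 2, "grade": 85}, {"id": 3, "grade": 70}]

def inner_join(left, right, key):
    """Combine rows from two tables wherever the key matches (SQL INNER JOIN)."""
    index = {row[key]: row for row in right}   # hash the right table by key
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

merged = inner_join(students, grades, "id")
print(merged)   # rows 1 and 2 match; the grade-only row 3 is dropped
```

In pandas the same idea would be `left_df.merge(right_df, on="id", how="inner")`.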

Self-study generally

self-study feynman's books (27:33) Math Sorcerer, 2023. Mathematics for self-study series of books, especially trig and calculus.
V600 scanning tips (5:53) erwnalejo, 2021. Somewhere in Western Kansas. Lomo digitaliza, blue tack to adjust height. Links in comments to products. Lomo is an extra $50.
epson scanning tips (28:11) Nick Carver, 2019. Paper masks some of the holder. 5:40 uses a squeeze ball. 2400 dpi is fine for any normal use including 8.5 x 11 blowups, etc.
V600 overview (52:17) Film Friends,

PreCalc

Two best books I know are Larson and Sullivan's. I found a 9th edition Sullivan and a

Polar coordinates

Difficult because we move back and forth between analytic trig and graphic trig, then introduce time changes in the angles

Series

Infinite series means infinite terms, not that the sum diverges to infinity. Also the best way to learn them is M1 in MVP math.

Harmonic Series (sum of sums) (46:34) Mathologer, 2020. Infinite sum of 1/(n+1) Gamma explained at 24:21.

Matrices

My road to matrices has been difficult, b/c for years it was one of those categories where 1) I simply had to remember formulas and 2) math teachers acted like matrices were easy, like I was an idiot, because the problems were easy. Nearly always in math, my intuition is right when something doesn't quite seem that easy. So let's call matrices what they are: representations of vectors with several degrees of freedom, not just 'simple arrays'. Although the latter is correct, it's like saying cars are easy to make b/c they fit in a garage.

The real way to look at it is via transformations, rotations, and vectors, as well as ways to write functions (remember we can also write functions with limits). That is to say, linear algebra.

1. Berkeley HS Math 1, Module 6 (webpage). This has some good videos on different transformations: reflections, rotations, and translations. The shortcoming is that they move the points, instead of changing the underlying grid (as we do with matrix scalars).
2. Linear transformations (10:58) 3 Blue 1 Brown, 2016. The real value of what matrices actually are, and also acknowledged by the host. They know how important this is. Connecting matrices and vectors. 3:52 i-hat (x), j-hat (y).
3. Why can't we multiply vectors (51:15) Freya Holmer. A good review, awkwardly stated, of basic maths up to matrices/vectors. Multiplying R3 vectors gives quaternions, multiplying R2 vectors gives us complex numbers.
A tour of differential equations (27:16) 3Blue1Brown, 2017. How we work backwards with differential equations.
4. interesting vector usage (14:40) Kieran Borovic, 2023. an interesting way to convert categories of nature into a 7 row vector.
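To make the "matrices are transformations" view concrete, a minimal sketch in plain Python (no libraries): a rotation matrix moves the basis vectors i-hat and j-hat, and every other vector follows the same grid.

```python
import math

# A 2x2 matrix is a transformation of the plane: its columns are where the
# basis vectors i-hat (x) and j-hat (y) land, per the 3Blue1Brown video above.
def mat_vec(m, v):
    """Apply a 2x2 matrix (list of rows) to a 2D vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

theta = math.pi / 2                        # rotate 90 degrees counterclockwise
rot = [[math.cos(theta), -math.sin(theta)],
       [math.sin(theta),  math.cos(theta)]]

i_hat = mat_vec(rot, [1, 0])               # lands on (0, 1), up to float error
j_hat = mat_vec(rot, [0, 1])               # lands on (-1, 0)
```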

Integral tricks

Most difficult are the problems that combine U-sub (the reverse chain rule) and IBP, integration by parts; one must also know d/dx, dx/dy, and Dx.

Integral conceptualization (10:54) Math The World, 2024. *does not* use the best Cherkassky/Russian way of series, however gets much nearer than most and points out the problem of focusing too much on area under curve.
Why dx at end of integral (4:36) Krista King, 2016. Sometimes I forget why it's smashed onto the end of an integral or double integral, so this is a good reminder that it's dx (the limit of the distance toward 0), not Dx (the distance).
What is dx alone? (5:38) Jim Fowler, (2013). Just as important as the one above. A little deeper.
u-subs (11:02) The Organic Chemistry Tutor, 2018. A few definite integrals.
Cheat sheet (PDF) Most of the tricks and common integrals.
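A quick numeric sanity check of a u-sub, in Python: with u = x^2, du = 2x dx, so the integral of 2x*cos(x^2) from 0 to 1 equals sin(1). A crude midpoint sum should agree with the analytic answer:

```python
import math

# U-sub (reverse chain rule) check: substituting u = x^2, du = 2x dx turns
# the integral of 2*x*cos(x^2) on [0, 1] into the integral of cos(u) on
# [0, 1], which is sin(1). Compare a midpoint sum against that answer.
def midpoint_integral(f, a, b, n=100_000):
    """Approximate the definite integral of f on [a, b] with n midpoints."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

numeric = midpoint_integral(lambda x: 2 * x * math.cos(x**2), 0.0, 1.0)
analytic = math.sin(1.0)   # the antiderivative sin(x^2), evaluated 0 to 1
```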

Exponents/Logs

exponentiate (30:08) Mathemaniac, 2024.

Lesser used

I consider these the irregular verbs of math: we have to know them by use, as it's difficult to remember how to derive them. Trig identities, hyperbolic derivatives and integrals, Geometric Algebra (nuclear physics).

Logic

Russell's paradox (28:27) Jeffrey Kaplan, 2022. Covers naive set theory inadvertently (as opposed to axiomatic set theory).

Statistics

Old-skool Brandon Foltz videos are still some of the best out there.

the most important statistical skill (13:35) Very Normal, 2024. Monte Carlo is about iterations.
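As a sketch of the Monte Carlo idea (the method, not the video's own code): estimate pi by iterating random samples and counting hits.

```python
import random

# Monte Carlo is about iterations: throw random points at the unit square
# and count the fraction landing inside the quarter circle x^2 + y^2 <= 1,
# whose area is pi/4.
random.seed(1)                    # fixed seed so the run is repeatable
n = 200_000
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4 * inside / n      # error shrinks slowly, on the order 1/sqrt(n)
```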

Thursday, October 8, 2020

old cell phones for webcams (usb, wifi)

Links: Android Partitions :: Arch Android tools :: xda F3Q
xda xiaomi redmi2 :: xiaomi community :: more fastboot commands

NB: check out updated 2022 version


Overview

Two main layers of software function:

  • system video is of course displayed on a laptop or screen. Several layers of internal software must accommodate webcams.
    1. The kernel should detect the video source
    2. software like V4L2 should be able to transport detected video.
    3. software like ffplay or OBS with ftl can then display transported video.
  • phone configuration software, typically fastboot, usually has many poorly documented model-specific functions. Still, it's a necessary step because the phone webcam is not a stand-alone camera, but functioning within phone firmware.

The handshake is the trickier, more time-consuming project. Yet the handshake tends to be low maintenance, once established. When configured properly, the phone cam should be detected like any other cam by the kernel. We'll start with USB access into the phone to configure it; following configuration, the phone cam can connect via WiFi or via USB.

phone powerup

Rooting a phone typically compromises its camera commands and degrades resolution. To avoid rooting the phone, we want to set Android into developer mode with USB debugging turned on. Then we can connect via USB and authorize an ADB program. Once that's in, we can get to the phone's cam.

Most Androids can initialize in several modes. We want them in either ADB mode or Fastboot mode. Another site about this. The F3 and one of the F3Q's powered into fastboot mode (more fastboot commands), and still did not boot after a hardware reset. Good article here, and it's important to know the Android partition names.

Even with correct fastboot syntax....

# pacman -Syu android-tools
# fastboot oem unlock
OKAY [ 0.291s]
Finished. Total time: 0.291s
# fastboot erase boot
Erasing 'boot' FAILED (remote: 'failed to erase partition')
fastboot: error: Command failed
# fastboot format boot
fastboot: error: Formatting is not supported for file system with type ''.

It was a weekend-long process to simply boot one of these phones into operation as a webcam.

Note also that the adb commands were inoperative with all the phones, eg # adb devices produced nothing. Eventually, adb kill-server was run. Back to fastboot and determining the proper flags for it.

xiaomi tools

All three devices have Xiaomi inside. There's a good Xiaomi tool, xiaomi-adb-fastboot-tools, in the AUR. It runs on Java and needs the JRE...

$ yay -S xiaomi-adb-fastboot-tools

Sometimes this worked, but it failed to compile on boxes with the newest version of Java. On systems where it did compile, I couldn't find the executable.

$ which xiaomi*

...returned nothing. A Redditor helped me out...

$ pacman -Qql "$pkg_name" | while read fn; do [[ -x $fn && -f $fn ]] && echo "$fn"; done

...which returned...


Phones

One LG F3 and two LG F3Q's were available. All 3 were Android 4.1.2 (Jelly Bean), with 5MP cameras.

BTW, no idea why these 2 fastbooted; I hadn't tried to root or tamper with the devices. The 3rd phone booted normally, but since I had moved my SIM into my active phone, I could only connect by WiFi, without the Google Play store. How then to download webcam apps? Ultimately, without being able to boot the phones or get access to Google Play, the unfortunate reality seemed to be rooting the two unbootables. They may also have had some sort of Factory Reset Protection (FRP), not sure.

Xiaomi sidenote

Xiaomi is the phone's cell transponder manufacturer. It's actually a Qualcomm SnapDragon 410E. The phones I used were all Redmi 2, presumably with wt88047 hardware. When a phone powers into fastboot, lsusb provides the phone's internal manufacturer ID instead of the LG phone ID. For example, below you'll see the bootable F3Q displayed a charging identifier, but the F3Q that powered into fastboot displayed the 18d1:d00d Xiaomi Redmi 2 hardware identifier. Xiaomi itself produces some ROMs; their site is worthy of research also.

LG Optimus F3Q D520 (lg-d520)

$ lsusb
1004:6300 LG Electronics, Inc. G2/Optimus Android Phone [Charge mode]
1004:632c LG Electronics, Inc. LGE Android Phone [MTP files]
1004:631e LG Electronics, Inc. LM-X420xxx/G2/Optimus Android Phone (PTP/camera mode)
18d1:d00d Google Inc. Xiaomi Mi/Redmi 2 (fastboot)

5MP HDMI autofocus, Snapdragon 400, Android 4.1.2 (Jelly Bean). Only 1.3GB internal, but takes a 32GB microSD.

files
stock for the F3Q (d520): D52010c_00.kdz. Factory firmware (ZIP or KDZ), flashed to "boot"; aka the ROM.
stock for the F3 (p659): P65910b_05.kdz. Factory firmware (ZIP or KDZ), flashed to "boot"; aka the ROM.
factory image (always IMG) for the device; application (always APK) to root.

LG Optimus F3 (lg-p659)

$ lsusb
18d1:d00d Google Inc. Xiaomi Mi/Redmi 2 (fastboot)

5MP HDMI autofocus, Snapdragon 400, Android 4.1.2 (Jelly Bean). Only 1.3GB internal, but takes a 32GB microSD.

Fastboot Process

YES

fastboot oem device-info
fastboot getvar product
    product: FX3Q_TMO_US
    product: FX3_TMUS
fastboot continue

NOPE

fastboot erase boot
fastboot flashing unlock_critical

failed f3q hard reset

The first pass at restoring factory settings failed with the message...

Secure booting error!
Cause: boot certificate verify

errors

  • partition table doesn't exist :: command: fastboot erase data. Valuable response also helping with directory names.
  • Error2: Failure sending erase group start command to the card (RCA:2) :: command: fastboot erase recovery

Card RCA:2

This card failed nearly continually.

# fastboot continue
[phone] Error No. 2: Failure sending read command to the Card (RCA:2)

One site had a theory about why recovery writes fail having to do with multiple bootloader slots. My phones though....

# fastboot --set-active=a
fastboot: error: Device does not support slots
# fastboot getvar slot-count
slot-count:

ADB Process

ADB mode is less important than Fastboot mode, but problems in Fastboot can sometimes be solved by having a proper ADB capability. So it can be worth it. The commands are different from Fastboot. Also, you might even have to manually make Plugdev rules for the damned thing. If you can see it in lsusb, you have a chance.

YES

adb start-server
adb kill-server
adb logcat
adb devices


plugdev group

I want to add a system group, add myself to it, and then of course make a rule...

# groupadd -r plugdev
# nano /etc/udev/rules.d/51-android.rules

The rule is described well here.
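As a sketch of what 51-android.rules might contain (the vendor IDs come from the lsusb output elsewhere in this post; treat the exact lines as an assumption to verify against the linked description):

```
# /etc/udev/rules.d/51-android.rules (sketch)
# the 18d1 fastboot vendor ID seen in lsusb; hand the node to plugdev
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0660", GROUP="plugdev"
# LG's own vendor ID, for the phone's normal modes
SUBSYSTEM=="usb", ATTR{idVendor}=="1004", MODE="0660", GROUP="plugdev"
```

Then reload with # udevadm control --reload-rules and replug the cable.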

SD/USB boot

Site describes how to unbrick using the EMMC method, with special software. It appears, however, that this method also requires converting the KDZ file into BIN files for the software. A pain.

A33 Unbrick with SD Card (8:09) Kiko Dog, 2015. Requires special program to burn the OS onto the SD card.

Monday, August 17, 2020

notes - pulseaudio

Links: PulseAudio configuration :: PulseAudio examples :: Kodi PulseAudio guide


For 5-6 years, I avoided PulseAudio by sending any hooks to /dev/null. Zoom was my recent undoing -- Zoom's settings were so complicated that it was much less timely to install PulseAudio than to circumvent it. Still, instead of the unplanned kludge of PulseAudio on top of ALSA on top of OSS, and instead of the elaborate troubleshooting and configuration nightmares a nested audio 'system' brings, why couldn't OSS simply have been fully developed? It was close to completion when abandoned for ALSA. And when something so irrational as a 3-tier audio system is developed, reasonable people might assume the RIAA/MPAA and their IP attorneys (the DMCA Force) were involved. Anyway... that said, PulseAudio has at least improved. So now here we are.

source v. sink

We all understand the one-way graphic below, but just a note to keep in mind: these are relative.


For a person recording her voice to her laptop, she is the ultimate source, and the SSD sector is the ultimate sink, but intermediate stops can also be defined. The microphone is a sink for the source voice, but the microphone is a source for the Analog-to-Digital Converter sink, and so on down the line until we get to the final sink, or final destination, the SSD. When configuring, know what the software defines as a source or sink at that point in the chain.

Other nomenclature issues concern the difference between cards, profiles, indexes, sources, and outputs.

sorting for configuration

Three things to keep in mind: 1) if you know how to change the underlying ALSA and/or OSS, these are still preferred; 2) failing this, well-defined PulseAudio configuration files can usually accomplish it; 3) go to /etc/pulse/ for the two PulseAudio configuration files: default.pa and client.conf. Default is just what it sounds like, so it's not the place to put all one's configurations, only the default settings for when PulseAudio starts. Client is the larger file with all profiles and so on. If one runs the PulseAudio daemon, there's also a configuration file for that: /etc/pulse/daemon.conf.

Here are a few (of 50+) PulseAudio commands used during configuration...

  • pacmd list-cards
  • pacmd list-sources
  • pacmd list-sinks

These commands return overwhelming information, and only a small portion of it is used in PulseAudio configuration files. Which ones? Which nomenclature matters: sink, index, name, card, etc.? If I use the command...

$ pacmd list-cards

...in order to find the names to use for configuration, I will receive intimidating amounts of information.

Only the two circled items are needed for use in configuration, however which is better?

symbolic-name: alsa_card.pci-0000_00_1f.3
the preferred parameter for PulseAudio configuration files.
card-index: 0
unpreferred parameter inside PulseAudio configuration files; the card-index value can change across boots.

In short, I have one sound card, known to the system as alsa_card.pci-0000_00_1f.3, and this is what I should use to create configuration customizations. The good news is that I can increase the number of "sinks" on the card so that ffmpeg can access them separately or together through PulseAudio.

configuration

Go to /etc/pulse/ for the two main configuration files: default.pa, and client.conf. The overarching theme is to have well-configured (or 'well-defined') "sources", which we can then easily select from for recording or muxing. We'll need to add sinks.

Multiple Audio Collection (11:29) Kris Occhipinti, 2017. How to do multiple audio sources.

Utilizing the results from pacmd list-source-outputs, we can create recording commands.

$ ffmpeg -f pulse -i 0 -map '0' foo.mp3

adding sinks

Links Adding sinks ::

Adding sinks is important for having sources and outputs available to mux.
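A sketch of what the added lines in /etc/pulse/default.pa might look like (the sink name "mix" is made up; module-null-sink and module-loopback are standard PulseAudio modules, but verify the exact lines against the linked examples):

```
# /etc/pulse/default.pa additions (sketch)
# a null sink acts as a virtual patch point; ffmpeg can record from its
# monitor source, e.g. "mix.monitor"
load-module module-null-sink sink_name=mix sink_properties=device.description=MixSink
# route the default microphone into the virtual sink so it can be muxed
load-module module-loopback sink=mix
```

Recording it would then look like $ ffmpeg -f pulse -i mix.monitor foo.mp3.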

Tuesday, August 11, 2020

google cloud services initialization (cloud, colab, sites, dns, ai)

Some of Google's web services have their own names but are tied together with GCP (Google Cloud Platform) and/or some specific GCP API. GCP is at the top, but a person can enter through lesser services and be unaware of the larger picture. Again, GCP is at the top, but then come related but semi-independent services: Colab, Sites, AI. In turn, each of these might rely on just a GCP API, or be under another service. For example, Colab is tied into GCP, but a person can get started in it through Drive, without knowing its larger role. When a person's trying to budget, it's a wide landscape in which to understand exactly what they are being charged for, and under which service.

Static Google Cloud site (9:51) Stuffed Box, 2019. Starting at the top and getting a simple static (no database) website rolling. One must already have purchased a DNS name.
Hosting on Cloud via a Colab project (30:32) Chris Titus Tech, 2019. This is a bit dated, so prices have come up, but it shows how it's done.
Hosting a Pre-packed Moodle stack (9:51) Arpit Argawal, 2020. Shows the value of a notebook VM in Colab

Google's iPython front-end Colab takes the Jupyter concept one step further, placing configuration and computing on a web platform. Customers don't have to configure an environment on their laptop; everything runs in the Google-sphere, and there are several APIs (and TensorFlow) that Google makes available.

During 2020, the documentation was a little sparse, so I made a post here, but now there are more vids and it's easier to see how we might have different notebooks running on different servers, across various Google products. This could also include something where we want to run a stack, eg for a Moodle. If all this seems arcane, don't forget we can host traditionally through Google Domains. What's going to be interesting is how blockchain changes the database concept in something like Moodle. Currently, blockchain is mostly for smart contracts and DAPPs.

Colab

Notebooks are created, run, and saved via the Drive menu, or go directly to colab.research.google.com. Users don't need a Google Cloud account to use Colab. The easiest way to access Colab is to connect it to one's Drive account, where it will save files anyway. Open Drive, click on the "+" sign to create a new file and go down to "More". Connect Colab and, from then on, Colab notebooks can be created and accessed straight from Drive.

There's a lot you can do with a basic Colab account, if you have a good enough internet connection to keep up with it. The Pro version is another conversation. I often prefer to strengthen Colab projects by adding interactivity with Cloud.

GUI Creation in Google Colab (21:31) AI Science, 2020. Basics for opening a project and getting it operational.
Blender in Colab (15:28) Micro Singularity, 2020. Inadvertently explains an immense amount about Colab, Python, and Blender.

Colab and Google Cloud

Suppose one writes Python for Colab that needed to call a Google API at some point. Or suppose a person wanted to run a notebook on a VM that they customized? These are the two added strengths of adding Cloud: 1) make a VM (website) to craft a browser project, 2) add Google API calls. Google Cloud requires a credit card.

Cloud and Colab can be run separately, but fusing them is good in some cases. Understanding the process lets users know when to rely on Colab, on Google Cloud, or on the two interdependently.

Colab vs. Google Cloud (9:51) Arpit Argawal, 2020. Shows the value of a notebook VM in Colab
Hosting on Cloud via a Colab project (30:32) Chris Titus Tech, 2019. This is a bit dated, so prices have come up, but it shows how it's done.

Note the Google Cloud platform homepage above. The menu on the left is richer than the one in the Colab screenshot higher above. We run the risk of being charged for some of these features, so Google will display estimated charges before we submit our requests to use Google APIs.

API credentials

We might want to make calls to Cloud's APIs. Say a Colab notebook requires a Google API call, for example to send some text for translation into another language. The user switches to their Cloud account and selects the Google API for translation. Google gives them an estimate of what calls to that API will cost. The user accepts the estimate, and then Google provides the API JSON credentials, which are then pasted into their Colab notebook. When the Colab notebook runs, it can make the calls to the Google API. Protect such credentials, because we don't want others to use them against our credit card.

Cloud account VM details

When running notebooks, if you update something, did it update on your machine or on Google's? It's more clear on Google Cloud.

API dependencies

When a person first opens a Colab notebook, it's on a Google server, and the path for the installed Python is typically /usr/local/lib/python/[version]. So I start writing code cells, and importing and updating API dependencies. Google will update all the dependencies on the virtual machine it creates for your project on the server. ALLEGEDLY.

Suppose I want to use google-cloud-texttospeech. Then the way to update its dependencies (supposedly):

% pip install --upgrade google-cloud-texttospeech

Users can observe all the file updates necessary for the API, unless they add the "--quiet" flag to suppress them. However, no matter that this process is undertaken, when the API itself is called, there can be dependency version problems between Python and iPython.

Note that in the case above the code exits with a "ContextualVersionConflict" listing a 1.16 version detected in the given folder. (BTW, this folder is on the virtual machine, not one's home system). Yet the initial upgrade command AND a subsequent "freeze" command show the installed version as 1.22. How can we possibly clear this since Google has told itself that 1.22 is installed, but the API detects version 1.16? Why are they looking in different folders? Where are they?

problem: restart the runtime

Python imports, iPython does not (page) Stack Overflow, 2013. Notes this is a problem with sys.path.

You'd think, of course, that there's a problem with sys.path, and to obviate *that* problem, I now explicitly import sys and make sure of the path in the first two commands...

import sys
sys.path.append('/usr/local/lib/python3.6/dist-packages/')

... in spite of the fact these are probably being fully accomplished by Colab. No, the real problem, undocumented anywhere I could find, is that one simply has to restart the runtime after updating the files. Apparently, if the runtime is not restarted, the newer version stamp is not reread into the API.

what persists and where is it?

basic checklist

  • login to colab.
  • select the desired project.
  • run the update cell
  • restart the runtime
  • proceed to subsequent cells
  • gather any rendered output files from the folder icon to the left (/content folder). Alternatively, one can hard code it into the script so that they transfer to one's system:
    from google.colab import files
    files.download("data.csv")
    There might also be a way to have these sent to Google Drive instead.

Saturday, August 1, 2020

keyframes

Links: keyframe :: (wikipedia) keyframe is an "I" frame -- intraframe. types of frames :: (wikipedia) I, P, & B frames.

These blog posts are application oriented. Sometimes, however, a little theory helps the application. Simply stated, keyframes are a type of i-frame manually added by a user editing a video, increasing the total i-frame count of the video.

i-frames

Video cameras record a chronological procession of pictures, frame by frame. They do so with 3 types of frames: i, p, and b. The i-frames are complete pictures, like a photo, of what is being recorded. As the camera records, it takes an i-frame and then several p- or b-frames, and then another i-frame, and so on. The p- or b-frames refer to an associated i-frame to complete the picture during playback. The Wikipedia graphic below shows the sequence. The i-, p-, b-frame schema was created to save memory space.

Most video devices insert i-frames about every 8 seconds or every 240 frames or so (I shoot mostly 30fps) when recording video. Newer codecs set these intervals dynamically: shorter intervals when action increases, and longer intervals when action decreases. H264 comes to mind.

keyframes

When software users edit video effects, say a dissolve transition, their editing software adds an i-frame to the beginning and end of the effect. These post-recording, user-added i-frames are in addition to the existing i-frames embedded by their camera during recording. Only these post-recording, user inserted i-frames are properly termed "keyframes".

Further nomenclature confusion can arise when software uses proprietary terms for key or i-frame intervals. For example, in ffmpeg, i-frame intervals are called GOPs, "Groups of Pictures", without regard to whether they fall between keyframes or i-frames.
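As an illustration of controlling this interval in ffmpeg itself (filenames are placeholders): the -g flag sets the GOP size in frames, i.e. the maximum i-frame interval.

```
# force an i-frame at least every 240 frames (~8 s at 30 fps) with x264
$ ffmpeg -i in.mp4 -c:v libx264 -g 240 -c:a copy out.mp4
```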

raw video analysis

When I import uncut clips, I first-off detect and save all the i-frame time stamps to a file I can refer to when editing. If it's a simple edit without transition, and all my cuts are at i-frames, I might not need to add keyframes and re-encode the video. How do I get a list of i-frame time stamps? Suppose I have a clip, "foo.mp4".

$ ffprobe -loglevel error -skip_frame nokey -select_streams v:0 -show_entries frame=pkt_pts_time -of csv=print_section=0 foo.mp4 >iframes_foo.txt 2>&1
The output will be timestamps, which we can easily see by catting the file.
$ cat iframes_foo.txt
0.000000
4.771433
10.410400
18.752067
27.093733
...

To determine the exact frame number (always an integer) of the i-frame, multiply the time stamp by the recording FPS. I can determine the FPS using a basic $ ffprobe foo.mp4. In this example, it revealed a 29.97 FPS. So...

29.97 x 4.771 = 142.99 or frame 143.
29.97 x 10.4104 = 311.99 or frame 312.
29.97 x 18.7521 = 561.99 or frame 562.

...and so on. We could write a short program to calculate this or export it into Excel/Gnumeric/Sheets. But this is for only a single clip, and of course I want this information for each of my clips.
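That short program might look like this in Python (timestamps inlined here for illustration; in practice they'd be read from iframes_foo.txt):

```python
# Convert i-frame timestamps (seconds) into frame numbers: frame = round(t * fps).
fps = 29.97                   # found with a basic ffprobe of the clip
stamps = [0.000000, 4.771433, 10.410400, 18.752067, 27.093733]

frames = [round(t * fps) for t in stamps]
print(frames)                 # [0, 143, 312, 562, 812]
```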

repair - ffmpeg

Sometimes, the keyframes become unmanageable for some reason and need to be cleaned. Typically re-encoding is required. But there is a methodology.

Thursday, July 30, 2020

blender 2.8 notes - vse, video

contents
clip matching
plugins (necessary plugins)
setup and render
keyframes
watermark

NB: set rendering output configuration prior to editing, esp FPS


Blender or, as I call it, "Death from 1000 Cuts", is vast, almost an operating system. It is Python + FFmpeg based. KDenLive, the other major Linux GUI editor, is MLT based. Whether using Blender for animation or video (this post concerns video), a reasonably good understanding of keyframes goes a long way. The reason is that portions of edits which don't require keyframes can be done with a CLI editor. Blender is likely to be used for sliding text into or out of a frame, etc.

what has to match?

  • frame rate (in ffmpeg: avg_frame_rate) must match in the final edit. Having a uniform frame rate, dimensions, bitrate, and so on, makes for easier editing. So if just using a few varied clips, it's worthwhile to recode them to match prior to importing. Obviously, frame rate will ultimately be forced in the final render, and it can be jerry-rigged during editing if desired...
    Dealing with Variable Framerates (7:03) Michael Chu, 2019. Audio unaffected by framerate, but we want the video to correspond. This is a manual solution when many clip-types are present.
  • tbn, tbc, tbr These are ffmpeg (Blender's backend) names for timing information beyond the fps. The time_base (tbc) is simply the inverse of the time_scale (tbn), but there is not necessarily one frame for each tick of the timebase (see 2nd link below). Timescale (tbn) is often 90K.
    Variable names w/descriptions (page) GitHub, 2020. What each variable pertains to. Inaccurately substitutes "timebase" for "timescale".
    Mismatch between clips (page) 2013. An example timing mismatch between clips. Timebase vs. Codec Time base (page) 2013. Variations between these two can lead to problems.
    Container vs. Codec (page) Stack Overflow, 2013.
    Ffmpeg timebase (page) Alibaba, 2018. Description of how obtained.
    **NB: 90,000 is typical tbn because divisible by 24, 25, and 30, although 600 will work, a la Quicktime.
  • bit rate helpful if matched; varies in quality if not. Along with the i-frame interval, bit rate determines the quality of a clip. It's OK for bit rate to differ across project clips -- they can still be fused -- as long as one understands that each clip's quality will vary accordingly.
  • i-frame interval these can vary from clip to clip and change throughout editing as keyframes are needed. However, attention should be paid to this again at the final render to see that an efficient, hopefully dynamic, setting is selected. In ffmpeg itself, the i-frame interval is defined by "Groups of Pictures".
  • PTS
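The tbn/tbc arithmetic in the bullets above can be sketched numerically (values are the typical ones quoted: a 90,000-tick timescale at 30 fps; the pts value is a made-up example):

```python
from fractions import Fraction

# PTS values are integer ticks of the stream's timebase, not seconds.
time_scale = 90_000                    # tbn: ticks per second (divisible by 24, 25, 30)
time_base = Fraction(1, time_scale)    # tbc: seconds per tick, i.e. 1/tbn
fps = 30

ticks_per_frame = time_scale // fps    # 3000 ticks between frames at 30 fps
pts = 9_000                            # a hypothetical frame timestamp, in ticks
seconds = float(pts * time_base)       # 9000/90000 = 0.1 s into the stream
frame_number = pts // ticks_per_frame  # i.e. frame 3
```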

setup (also render)

I strongly suggest configuring one's output render settings as the first step of any Blender project. A consistent framerate, codec, and other details set to the desired output, forces clips into alignment from the start. As a bare minimum, set the final rendering FPS when getting started. That being said, the final render requires i-frame interval adjustments. The newer codecs will do this dynamically, so that if there are periods of zero action, i-frame intervals can expand to, say, one every 10 seconds, etc.

Dealing with Variable Framerates (7:03) Michael Chu, 2019. Audio unaffected by framerate, but we want the video to correspond. This is a manual solution when many clip-types are present.
Blender 2.8 Setup (19:27) Mikeycal Meyers, 2019. Render settings begin at 5:00.

The directory structure follows a pretty standard setup with a user file in ~/.config/blender/[version]/startup.conf, but there are also files inside /usr/share/blender/, which is where add-ons seem to live. So it may be that there are configurations to save from both of these sources.

Keyframes (and other) in 2.8 (9:48) Code, Tech, and Tutorials, 2019. Solid inadvertent tour around the VSE via a simple edit. Shows how to do transitions simply without inserting them, pressing "i" for keyframe.
Ffmpeg blur box (14:48) Thilakanathan, 2019. Fast, but covers nearly everything. Rendering in last 5 minutes. Comments have a lot of tips.

render troubleshoot

I've put these rendering fixes at the top of the blog to help determine how to preventatively configure settings.

  • grey timeline the render appears to have a sheen of gray over the entire playback, as if viewed through a dirty window. Under the little camera icon, go all the way to the bottom "Color Management" section and change it from Filmic to Standard.

keyframes

Video keyframes are a large editing topic I cover separately, but a few notes here. Fundamentally, keyframes are a hand-entered subset of i-frames. Since they are a sub-type of i-frame, all keyframes are i-frames, but not all i-frames are keyframes.

  • "i" to add, "Alt+i" to delete
  • keyframes can be edited in Timeline, Dope Sheet, or Graph Editor. Only manually added keyframes appear in these editors, not generic i-frames.
    Keyframe manipulation (6:18) Michael Chu, 2020.
  • the final render should eliminate as many keyframes as possible, to decrease the size of our output file

How to Delete all Keyframes (page) TechJunkie, 2018. This is in the older version of Blender but has loads of solid keyframe information.
Keyframes (and other) in 2.8 (9:48) Code, Tech, and Tutorials, 2019. Solid inadvertent tour around the VSE via a simple edit. Shows how to do transitions simply without inserting them, pressing "i" for keyframe.

plugins

Plugins appear to live in /usr/share/blender/[ver]/scripts/addons. There are some key ones to have.

proxy encoding

If a person has a gaming system, this is not a concern. However, many systems edit more smoothly if the video is proxied; without it, the system lags and jumps during playback. Proxying does not work well if a person has elaborate transitions that require exact keyframes and so on.

Proxy a clip or clips (4:17) Waylight Creations, 2019. How to deal with a slower system.
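Apart from Blender's built-in proxies, a rough half-resolution proxy can also be cut with ffmpeg. A sketch with illustrative filenames:

```shell
# Generate a lightweight 50% proxy for smoother timeline scrubbing;
# edit against the proxy, then swap the original back in for the final render
SRC=clip.mp4
PROXY="${SRC%.mp4}_proxy.mp4"
ffmpeg -i "$SRC" -vf scale=iw/2:ih/2 -c:v libx264 -crf 28 -preset veryfast -c:a copy "$PROXY"
```

The high CRF and fast preset trade quality for decode speed, which is the whole point of a proxy.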

sound effects

Occasionally, we'd like to add a sound effect to this or that in Blender without bringing in a video clip and so on.

Inserting Sound effects in Blender (11:11) Oracles Stream School, 2020. OBS based tutorial, using the computer, not a capture card.

Sunday, July 26, 2020

rebuild ASUS Sabertooth Z77 ATX

A buddy has a 22lb Cooler Master RC-692-KKN2 CM690 ($120) from 2012 and inside we found a Sabertooth Z77 (LGA 1155 Slot) system Mobo ($350). The Z77 is a proven Mobo c. 2012: a firm foundation for a budget upgrade, or whatever.

 
CM RC-692 ($120)
Sabertooth Z77 ($350)
Corsair CMPSU-750TXv2 ($140)

 
 
Corsair CMPSU-750TXv2 Manual :: generic, no voltages, schematics, pinouts.
 
Sabertooth Z77 Manual :: missing voltages for fans or the pinouts on board.

ASUS Sabertooth Z77 (9:37) Linus Tech Tips, 2012. Full rundown plus some useful comments underneath
Using Cloud Storage (7:31) Wolfgang's Channel, 2019. Be sure to pick a provider that uses Xen or KVM, rather than OpenVz-based virtual machines.
ECC Memory considerations (9:47) Linus Tech Tips, 2021. ECC explained and shows the value of error checking in AMD vs. Intel, where it costs more.

baseline

Here are some of the other system features we started with. I got this info by executing dmidecode, lshw, and smartctl -a /dev/sda (following # pacman -S smartmontools).

  • CPU Intel i7-3770 3.4G clockable to 3.8 ($140)
    LGA 1155 slot accepts newer i7-9700 or 10700, for roughly $300 or $400 respectively
  • BIOS AMI v.2104 8/13/2013
  • cooling All are 12VDC. 5x 140mm ball bearing, 3 pin (Cooler Master A14025-10CB-3BN-F1) $20 each, 2 x 35mm auxiliaries (Sunon EF35101S2-Q010-G99).
  • RAM - 16GB 1332MHz DDR3 synchronous SDRAM 240 pin (Corsair CMZ16GX3M4A1600C9 ~$140)
    array capacity is 32 GB. Currently 4 x 4GB sticks onboard. "Upgrade" to 32GB A-Tech RAM ~$140.
  • GPU  
  • Storage 1 x Seagate Barracuda HDD IDE (not detected by BIOS), 1 x SanDisk SDSSDHP256G in ATA mode on SATA 3. These are laptop SSD's. (~$120 used)
    Capacity: 2 x SATA3 and 4 x SATA2
  • Power Corsair CMPSU-750TXv2. No problems ($140)
  • Optical
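The survey above came from commands like these (run as root; the disk path is whichever applies on your system):

```shell
# Baseline hardware survey; smartctl comes from the smartmontools package
DISK=/dev/sda
pacman -S --needed smartmontools
dmidecode -t memory      # RAM sticks and array capacity
lshw -short              # one-line summary of motherboard components
smartctl -a "$DISK"      # SMART health data for the chosen disk
```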

first inspection

Dust is too prevalent inside and the fans are worn. There are several unfiltered openings. The previous owner wired the fans undirected, with air entering from the bottom and exiting the top. The dust problem should be solved, and one might also seek a lighter box, eg the Q500L Midtower (magnetic filter).

fans - motherboard (9)

The motherboard seems to have 5 x 12VDC directed fans, 1 x undirected (3-pin) 12vdc fan, and 1 directed CPU fan.

We also have these oddballs:

  • Sunon EF35101S2-Q010-G99 x 2. Location: motherboard. These are both 35 mm MoBo 3-pin (unmanaged) "Assistant Fans", 12vdc (pg 2-33/35 manual below). The Motherboard Manual provided no pinout voltages, so there was no way to determine 5VDC vs. 12VDC without a teardown. The diagram isn't clear whether the fittings take 35 or 40mm, but I found 35's installed and measured that the fittings will take up to 40mm.

The Sabertooth has a fan controller, the AI Suite II. However, this is a software controller that runs only under Windoze. If not running Windoze, we might want an independent fan controller. Check out the one in the video below.

Fan types (17:32) Jayz Two Cents, 2020. Other information about hydraulic, mag-lev, and ball bearings. Hydraulic is more susceptible to dust.

The EZDIY-FAB 5 pack (5 VDC - use SATA connector) is a good $50 120mm horizontal (they are hydraulic) solution

fans - case fittings (7)

As noted above, the case has too many unfiltered openings, so filters or jerry-rigged pantyhose must be installed. These are the 5 listed in the case manual...

... and I found these two additional (undocumented) locations.

  • side panel directly beneath CPU. space for 1 x 70mm 3-pin 12vdc fan. This would blow directly onto the bottom of the CPU.
  • front 1 x 140mm currently blowing into the case. This appears controlled.

temp sensors

To understand the fan settings, we need temperature information.

S/PDIF

I noticed this was entirely disconnected, even to my GPU or Optical Drive. Seeking information, I learned that I was not alone. It seems the best option, if one is going to exploit it, is a small case outlet that sends the signal to an RCA jack.

Sunday, July 19, 2020

Eachine E58 -- Drone X Pro scam

I wanted to inspect the chimney on the roof with a drone and thus fell prey to the Drone-X Pro scam. I trusted a YouTube ad without doing research, and paid $100 for a $25 drone. Now that I have this Eachine E58 (the only thing missing is the brand name on the controller), could I make the best of it? Nope. Failure.

The camera is only 2 megapixels, can only be seen through an app, and flight time is given as 7 minutes. Reviews note that even the slightest breeze will take it out of range. On mine, I had a common problem of the wifi connecting to my phone, but I could not work the controller or see video. The E58 apparently does not work with all phones. Review: dronedeliver.uk.

wifi - piloting - video - app

These four are tied together because the drone controller has no video screen. Suppose I wanted to inspect my roof. As the drone passes from my direct line of sight, I can no longer see where the drone is going, and I lose the ability to pilot the aircraft. This is remedied by a complicated solution with several possible failure points:

  • the manufacturer inserted a wifi transceiver into the drone; it appears as a hotspot to wifi systems
  • users connect their cellphone to the hotspot using whatever wifi functionality is present in the phone
  • after connecting, users open a pre-downloaded app to view through the drone's video camera and pilot the craft
The directions suggest no application, at least not in English; however, after an hour of YouTube videos and forum posts, this one appeared most likely:

 

...and don't forget Google Play needs port 5228.

failure

My Droid Turbo connected to WiFi, but with a warning that I had no Internet connection -- thus, I think (just a guess), disabling the http transport necessary for viewing the video. The drone app reported that I needed to "connect" in spite of the phone showing a wi-fi connection, as noted.

Following my hunch, I went to this site and learned that I could potentially enable http transport on a phone that normally doesn't allow it on non-internet LANs; however, I would have to root my phone, which I didn't want to do.

sd video

Video is supposedly saved onto a micro SD, viewable after flight. However, the apps could never connect to the drone, and the hand controller had no function to start video recording. It appears that the video recording was never initialized in the drone -- there was no video on the SD card after the device was flown.

aftermath

I'll give the drone away to one of my friends who has a compatible cell phone, for $10 and a chimney inspection. It appears "compatible" means phones that allow HTTP transport over wifi connections even when on a local LAN without DNS, eg with mDNS.

Tuesday, July 7, 2020

rclone details

In a prior post, I'd found that using rclone to upload RESTful (rclone uses REST, not SOAP) data had become more complex -- by at least three steps -- than two foundational videos from 2017:
1. Rclone basics   (8:30) Tyler, 2017.
2. Rclone encrypted   (10:21) Tyler, 2017.
These videos are still worthy for concepts, but additional steps -- choices, actually -- must be navigated for both encrypted and unencrypted storage, whichever one desires. Thus, a second post. Unlike signing in and out of one's various Google and OneDrive accounts, all are accessed from a single rclone client. Rclone is written in Go, so building it from source pulls in that immense (500Mb) dependency.

across devices

To install rclone on multiple devices, including one's Android phone (RCX), save one's ~/.config/rclone/rclone.conf. For each installed client, simply duplicate this file to replicate the features of the original installation. If one has encryption, losing this file would be very bad.
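A sketch of duplicating the config to a second machine (the hostname is a placeholder; `rclone config file` prints the path of the active config):

```shell
# Locate the active config file, then copy it to another device over ssh
CONF="$(rclone config file | tail -n 1)"
scp "$CONF" otherbox:~/.config/rclone/rclone.conf
```

Keep an offline copy somewhere safe as well, since this file is the only key to encrypted remotes.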

deleted configurations

  1. ~/.config/rclone/rclone.conf (client). If this file is lost, duplicate it from another device. If lost entirely, access must be re-established from scratch, and the encrypted files will be lost permanently.
  2. scope (google) Google requires OAuth authentication and keeps the access details on its side. Documentation is difficult to find beyond that OAuth info. It appears that users cannot directly edit any of the 11 defined access scopes (files), only select among them through a Google dialog screen. When installing rclone, 5 of the 11 scopes are offered; I typically like "drive.file".

command usage

For simplest use, to the root directory...

$ rclone copy freedom.txt mygoogleserv:/

Not all commands work on all servers, so use...

$ rclone help

instead of...

$ rclone --help

The former will display only those commands on the installed version of rclone. The latter shows all commands, but not every compilation has these.

$ rclone about mygoogleserv:
Total: 15G
Used: 10.855k
Free: 14.961G
Trashed: 0
Other: 40.264M
Of course, there's also the GUI, rclone-browser.

encryption notes

Rclone documentation notes strong encryption, especially if salt is used. Minimally, we're talking 256-bit. Of course governments can read it, but what can't they read?
  • unencrypted accounts must be established first. Encryption is an additional feature superimposed onto unencrypted accounts.
  • remember the names of uploaded encrypted files; even the names of files are encrypted on the server and the original filename is necessary for download.
  • keep the same encryption password on all devices on which rclone is installed.
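Assuming an unencrypted remote already exists and a crypt remote named `secret:` (a placeholder) has been layered over it, usage looks like:

```shell
# Files copied through the crypt remote are encrypted server-side;
# even the filenames are encrypted, so keep track of the originals
REMOTE=secret:
rclone copy freedom.txt "$REMOTE"
rclone ls "$REMOTE"      # listing decrypts the names locally
```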

glossary

  • application data folder (Google) a hidden folder in one's Drive (not on one's PC). The folder cannot be accessed directly via a web browser, but can be accessed from authorized (eg OAuth) apps, eg rclone. The folder holds "scope" information for file permissions.
  • authorization (OAuth, JWT, OpenID) protocols for authorizing a third party REST app (rclone) to move files in and out of a cloud server (Google, AWS, Azure, Oracle); there is an authorization step between them even though you are authenticated with both.
    What is OAuth (10:56) Java Brains, 2019.
    What is JWT (10:34) Bitfumes, 2018.
  • scope (Google). the permissions granted inside Drive to RESTful data uploaded by users using, eg, rclone.
  • REST Representational State Transfer, an API style for server to client data transfer. Wikipedia notes this is an industry term, not a concept copyrighted by Oracle or Google. It refers to data exchanged, with user authorization, by third party apps between applications or databases, as opposed to data entered directly by users, or unauthorized server-to-server transfers.

    REST API concepts and examples (8:52) WebConcepts, 2014. Conceptually sound on this HTTP API, even though dated with respect to applications. Around 7:00 covers OAuth comprehensibly.

  • SOAP Simple Object Access Protocol. This is the older API for server to client data transfer.

    SOAP v. REST API (2:34) SmartBear, 2017. Very quick comparison.


Google 15GB

Users can personally upload and save files in Google Drive through their browser, as we all know. However, Google treats rclone as a third party app doing a RESTful transfer and uses OAuth to authorize it. Additional hidden files are created by Google and placed into one's Drive account to limit or control the process.
Within that process, there are two ways to rclone with Google Drive: slower or faster. The faster method requires Google Cloud services (with credit card) and a ClientID (personal API key). The slower way uses rclone's generic API connection.

1. Slower uploads

Faster to set up, but slower uploads. Users regularly backing up only a few MB of files can use this to avoid set-up hassles. It bypasses the Cloud Services API and uses the built-in rclone ID to upload as directed.
  1. $ rclone config
    ... and just accept all defaults. For scope access, I chose option "3", which gives control over whatever's uploaded.
  2. verify function by uploading a sample file and by looking in ~/.config/rclone/rclone.conf to see that the entry looks sane
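Step 2's sanity check might look like this (the remote name is a placeholder):

```shell
# Upload a test file, list the remote, and eyeball the config entry
echo "hello" > sample.txt
rclone copy sample.txt mygoogleserv:
rclone ls mygoogleserv:
grep -A 3 '^\[mygoogleserv\]' ~/.config/rclone/rclone.conf
```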

2. Faster uploads

This method requires a lengthier set-up but, once configured, rclone transfers files more quickly than the generic method above. Users need a credit card for a Google Cloud Services account, which in turn supplies them with a ClientID or API key for rclone or other 3rd party access into Drive.
  1. get a Google email
  2. sign-up for Google Cloud services
  3. register one's project "app" (in this case it's just rclone) with the Google API development team
  4. wait for their approval -- up to 2 weeks
  5. receive a Client ID and Client Secret, which allow faster uploading and downloading through one's Drive account

These two videos move very quickly; however, they use the preferred Client ID and Client Secret method that supposedly speeds the process over the built-in IDs.

Rclone with Google API (6:38) Seedit4me, 2020. The first four minutes cover creating a remote and the 5 steps in creating the Client ID and Secret.
Get Client ID and Secret (7:29) DashSpan.me, 2020. Download and watch at 40% speed.

OneDrive 2GB

This primer is probably the best for OneDrive; however, it also applies to many of the other providers.

metadata and scope

These are hidden files within one's Google Drive. This is part of the Google Drive API v.3, which is what rclone uses to connect and transfer files. In particular, you will want to know about the Application Data Folder.
Google API v3 usage (5:28) EVERYDAY BE CODING, 2017.
Get Client ID and Secret (7:29) DashSpan.me, 2020. Download and watch at 40% speed.
RESTFUL resources and OAuth (55:49) Oracle Developers, 2017.

Tuesday, June 30, 2020

backup - texts (sms/mms)

Unlike years prior to Patriot Act, BSA, and DMCA, both criminal and civil agencies seem to operate assuming anything on citizens' cell phones or computers will eventually be discoverable, if not somehow prosecutable. These forces are much larger than us and there's little way we can protect ourselves from their creepiness.

In the face of this, we'd prefer to delete all of our data every day but our lives would be even more negatively impacted by these forces if we do. We'd lose track of birthdays, anniversaries, important receipts, and so forth. So we still need to retain some data for our daily affairs in these absurd times.

Understanding that protecting data is impossible for less than a team of experts, what can we individuals do to retain some data, and somewhat mitigate its exposure? If we can do some selective capture, we might be able to maintain our activities without gov't and info-agency stalkers into 100% of our private affairs. Texts are possibly a good test-case.

  • application my two primary considerations are 1) include all conversations in a single back-up, 2) have a clear text, non-proprietary format. There was a great app, "Email My Texts", which collated a selected period of texts into TXT and included attachment file-names. Google mysteriously removed this app from the PlayStore. It seemed like a killer app that customers were choosing over all alternative$, and maybe Google disapproved, not sure. The remaining apps have awkward formats which parse conversations or use proprietary XTML, PDF's (immensely inefficient), and so on. The best option I've found of the remaining PlayStore apps in 2020 is...
    ... which is worthy of upgrading to Pro, for something like $4.
  • format CSV is the only option in text apps in 2020. If a TXT app returns that can put all conversations in a single file and name the attachments, I'd take it over CSV, but that's 2020 for you.
  • storage run rclone to encrypt CSV files to cloud storage. I cover this thoroughly in next month's rclone post. Meanwhile...
    $ rclone listremotes
  • searching obviously, the reason to back-up texts is the same reason to back-up emails. You might need some information downstream. How can we search encrypted CSV files in such a way that we can easily find keywords, and then print all the date and parties to the interaction? Not easily. Perhaps a Python script which displays the results in a browser, sequentially as it processes a CSV file.

storage

Assuming this is encrypted via rclone, watch these videos first...

1. Rclone basics (8:30) Tyler, 2017.
2. Rclone encrypted (10:21) Tyler, 2017.

...which can be followed verbatim. There are some new details since these videos, discussed in one of my other blog entries, but the core of the setup is the same. Additionally, this video (8:19, 2017) has some good basic commands.

search

In order to find information in a haystack of encrypted CSV files, each file must be decrypted and grepped individually. Since we have many encrypted CSV files, this is unlikely to be efficient. It's probably worthwhile to keep a passel of encrypted CSV files on the Cloud, and then a local backup for parsing with one's Python script.
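With a crypt remote (here called `secret:`, a placeholder), `rclone cat` can decrypt and grep one file at a time without a full download. A sketch, assuming filenames without spaces:

```shell
# Stream each CSV through the crypt remote and grep for a keyword,
# labeling each match with the filename it came from
KEYWORD="birthday"
for f in $(rclone lsf secret: --include '*.csv'); do
    rclone cat "secret:$f" | grep -iH --label="$f" "$KEYWORD"
done
```

This is still one round-trip per file, so for frequent searching the local-backup approach above remains more practical.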

emails

A second question arising from text retention is how to save and encrypt the emails we want to keep. And how to display them afterward?

Sunday, June 14, 2020

toolbox :: appslist

contents
cli-misc a lot of the install stuff
coding Python, misc API
documents
math/stats minus calculator
media playback, creation, editing
network basic connectivity
safety

Arch :: list of applications


Items marked with an asterisk "*" below should be accomplished during install, prior to leaving arch-chroot. Remember that $ strace > catchit.txt 2>&1 [command] is our friend, and that there can be some pacman judo required. To safely uninstall without breaking dependencies, # pacman -Rs [app] is typically enough. Our old post on cleaning up orphan space is also useful.
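The orphan cleanup mentioned above can be done directly with pacman's query flags (standard usage; the guard skips removal when there are no orphans):

```shell
# List true orphans (-Qtdq) and remove them with dependency/config cleanup (-Rns);
# pacman -Qtdq exits non-zero when no orphans exist, so the && guard prevents
# an empty -Rns invocation
orphans="$(pacman -Qtdq)" && pacman -Rns $orphans
```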

system configuration

During install, a few details handled prior to downloading apps make life easier. I went over some of these in 2015. Limiting journal size...

# nano /etc/systemd/journald.conf
Storage=auto
SystemMaxUse=200K
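After editing, confirm the setting is in place, restart journald, and check that usage respects the cap:

```shell
# Verify the cap, apply it, and report current journal disk usage
grep '^SystemMaxUse' /etc/systemd/journald.conf
systemctl restart systemd-journald
journalctl --disk-usage
```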

CLI (misc)

  • diff: 1MB, # base :: typical: $ diff -s file1.txt file2.txt You can also send the output to a file; see here. A result like 180c180 means line 180 differs: copying line 180 from file1 over line 180 in file2 makes file2 match file1.
  • fdupes: 1MB, # pacman -S fdupes :: cleans ridiculous amounts of duplicate files prior or after backups.
  • htop: 1MB # pacman -S htop :: color informative version of top
  • jmtpfs: 1MB, $ yay -S jmtpfs :: MTP support to plug in an Android and move files on and off it.
  • lshw: 1MB, # pacman -S lshw :: provide info on MB components better than lspci.
  • * nano: 1MB, # pacman -S nano :: light text editor
  • strace: 1MB, # pacman -S strace :: typical: $ strace > catchit.txt 2>&1 [command]
  • * usbutils: 1MB, # pacman -S usbutils :: lsusb, some others
  • archiving utilities. In addition to xarchiver, install all of the various formats or you're just going to get pissed on the occasion when you need a manual or something and have to update your entire system just to get one zip library. Among these are bzip, gzip, lrzip, lz4, lzip, lzop, xz, zstd, zip, unzip, p7zip, and unarchiver, among others.
  • yt-dlp: 1MB, $ yay -S yt-dlp :: seems lately to work more reliably than the parent youtube-dl from which it's forked.

coding

With the advent of Google Colab (essentially Google's version of a Jupyter notebook, but with TensorFlow and so on for deep learning), this seems less important; however, we still occasionally need a sane offline environment.

  • pipenv: 3MB, # pacman -S pipenv :: critical so can do projects (eg TTS) requiring special pip updates and so on without hurting the OS install
  • geany: 1MB, # pacman -S geany :: coding editor.
  • umlet: 1MB, $ yay -S umlet :: a light UML editor. Filetype UXF
  • rclone: 2MB, # pacman -S rclone :: encrypted cloud backups. GUI rclone-browser

documents

LaTeX needs to be installed from, eg, TexLive in a separate directory in ~ somewhere.

  • cups: 12MB, # pacman -S cups, necessary evil if printing. Get the PPD files from the AUR.
  • evince: 14MB, # pacman -S evince basic PDF reading. This comes with the price of gvfs, so another option is okular, which doesn't have the problem of gvfs.
  • xournalpp: 4MB, # pacman -S xournalpp. editing and creating PDF with Huion
  • xsane: 5MB, # pacman -S xsane scanning documents

A problematic aside is what to do with old DOC and DOCX documents possibly on one's system. LibreOffice can do the conversion from the command line, but a person has to install LibreOffice writer (300MB) to get this one feature.

# pacman -S libreoffice-fresh
$ lowriter --convert-to pdf somefile.doc
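For a directory of old documents, the same command loops cleanly (paths illustrative):

```shell
# Convert every DOC/DOCX in the current directory to PDF;
# the -e guard skips the literal glob pattern when nothing matches
for f in *.doc *.docx; do
    [ -e "$f" ] || continue
    lowriter --convert-to pdf "$f"
done
```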

math/stats

  • PSPP: $ yay -S pspp. GNU version of SPSS. Does most of the functions. GUI: psppire
  • gretl: $ yay -S gretl. Econometrics
  • octave: # pacman -S octave :: GNU version of MatLab
  • RStudio: R-specific IDE

media

playback

  • vlc: 100MB, # pacman -S vlc :: necessary for speed variance, some obscure filetypes. Also plays playlists and reaches to non-drm streams.
  • xplayer: 7MB, 15 min compilation $ yay -S xplayer :: much lighter than anything else, plays clean, loops.
  • libdvdcss (backup) 8MB # pacman -S libdvdcss :: any kind of backup off a DVD (eg. an old Newhart episode on a 2000 DVD) requires this. Unintuitive errors result without this. Possibly also consider $ yay -S vobcopy, which will decrypt as it copies it over.
  • pipelight, widevine - TBD. support for DRM protected media a la silverlight

creation & editing

  • audacity (sound) 20MB # pacman -S audacity :: necessary for voice recording to view levels in real time. uses PortAudio
  • ffmpeg (sound, video) CLI 20MB # pacman -S ffmpeg :: screencast audio and video capture. ffplay to precheck video.
  • flowblade (video) 20MB $ yay -S flowblade :: does cross-fades of multiple files that are far too complex in ffmpeg. Open a project, then import MP4's. Avidemux is no good for cross-fades. Openshot crashes. Pitivi crashes. However, all of the Pitivi dependencies.... 
    ... are also good for Flowblade. Install them before getting Flowblade off the AUR. Of these, the only critical item is python-cairo. The AUR Flowblade install does not always check for python-cairo, and Flowblade will not start (errors) without it.
  • gimp (JPG, PNG) 20MB # pacman -S gimp:: Swiss army knife
  • goldwave.exe (WAV) 5MB 1 hr due to WINE installation. This 90's app still the easiest and most thorough for polishing sound. See Wine configuration vid below.
  • mlt (video) 3MB # pacman -S mlt :: needed for the melt command as well as Flowblade. Be sure to add # pacman -S rubberband if using melt commands. melt FAQ.
  • obs-studio (video) 20 MB # pacman -S obs-studio :: mixing media on the fly if want live productions and saving to file.
  • shotcut (slideshow) 20 MB # pacman -S shotcut :: supposedly this is good for making slideshows though I have never tried it. This would be the only reason to install as it's (per Linux usual) worthless for cross-fades (can't adjust overlap consistently). MLT-based.
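For reference, the melt command underneath Flowblade and Shotcut can do a simple luma cross-fade directly. A sketch with illustrative filenames and frame counts:

```shell
# Overlap the last FRAMES frames of clip1 with clip2 using a luma wipe,
# then render the result to out.mp4
FRAMES=25
melt clip1.mp4 clip2.mp4 -mix "$FRAMES" -mixer luma \
     -consumer avformat:out.mp4 acodec=aac vcodec=libx264
```

This stays consistent only when both clips share a framerate, which is another reason to normalize clips before editing.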

Wine Configuration (19:58) Chris Titus Tech, 2019. WINE converts Windows system calls to POSIX system calls. Make a bottle for every Windows app.

network

  • * dhcpcd: 3MB, # pacman -S dhcpcd :: important to add before leaving chroot or no internet after post-install reboot. Also disable it in systemd or possible boot hangs.
  • * ntp: 1MB, # pacman -S ntp :: typical: # ntpdate pool.ntp.org
  • * wpa_supplicant: 1MB, # pacman -S wpa_supplicant ::

graphic

Wine: Google SketchUp. The old
Wine: ConceptDraw ($99)
Wine: VideoMeld64: fonts-corefonts,tahoma DLL's-none,

audio

Wine: GoldWave: fonts-arial,corefonts,tahoma DLL's-none, however errors on ntdll

safety

  • glasswire :: monitor net usage
  • zoneminder :: security camera management

safety - security devices, rules

Our phones and yubico keys are security devices. A cell phone's primary function is a security device, however its communication functions have been conveniently conflated and incorporated. These should be separated of course, and a security device provided for free by the government, since they are the ones who access and benefit from these functions. The phone itself should return to a secure, non-traceable way (unless search warrant) to communicate. Obviously this would inconvenience security agencies, and the collaboration aspects of government and immensely profitable communication agencies. So it will never happen. That is to say, it will happen just as soon as education is reformed for national benefit.

TLDR: These are security devices, so 1) PAM is involved, and therefore 2) udev rules must be written or the devices will not be usable, even if they appear in lsusb

phone example

  1. #pacman
  2. # nano /etc/udev/rules.d/90-android-tethering.rules
    # Execute pairing program when appropriate
    ACTION=="add|remove", SUBSYSTEM=="net", ATTR{idVendor}=="22d9", ATTR{idProduct}=="276a", ENV{ID_USB_DRIVER}=="rndis_host", SYMLINK+="android"
  3. # udevadm control --reload
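To find the vendor/product IDs the rule keys on, and to confirm the rule fires, something like this works (the grep pattern is a placeholder; adjust to your device):

```shell
# Extract the idVendor:idProduct pair from an lsusb line
line="$(lsusb | grep -i oppo)"
pair="$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]*:[0-9a-f]*\).*/\1/p')"
echo "$pair"
# Watch events live while re-plugging the device
udevadm monitor --environment --udev
```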

yubico example

It's arguably worthwhile to know the model of one's yubikey -- there are perhaps 40 versions. Let's take an older one, though still FIPS compliant. Nowadays we'd want one that's FIDO compliant. The problem is that we can't use some FIDO compliant ones on older computers that only have USB-A ports, so it's good to have an older FIPS key and a newer FIDO key. They of course make one to use with a phone as well. When they stop working, it's a PITA to diagnose; udevadm monitor was my friend. Then I bought a $50 Yubikey 5. I think the old one was somehow UV sensitive -- after it had been in the heat, it stopped working.

But back to our rule configuration. According to their website...

  1. # pacman -S yubico pam
  2. # nano /etc/udev/rules.d/70-u2f.rules
    # Yubico publishes the full 70-u2f.rules on its website; the core line
    # keys on Yubico's USB vendor ID (1050) and grants user access
    ACTION=="add|change", KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1050", TAG+="uaccess"
  3. # udevadm control --reload

Tuesday, June 9, 2020

system - server - hosting

We want a system for learning management (LMS), and another for general usage. I like the Moodle LMS and Nextcloud. The problem is that, for years, both of these had to be run locally (VPN); you couldn't really webface them. New solutions are making it possible to do both. I've previously had webhosting, and I think that's been part of the problem. This time around I want a VPS. I would still put Nextcloud on a VPN, but I think Moodle can reasonably be done on a VPS at this point with TOTP. So we can host Moodle on Google, but the question is which tech stack (see below). The idea is that there are 3 layers: the hosting (Google), the http server (Apache), and the system (Moodle, Nextcloud).

  • VPS - Virtual Private Server. Cloud server. Google, UpCloud
  • VPN - Virtual Private Network. Home server. Unlimited storage, only limited by HDD space. I am uninterested in the typical web usage of VPN's for anonymity and so on. These are mostly useless (see vid from Wolfgang's Channel below). Thinking here of the much more prudent usage of a home network for a VPN. It's possible to make it web-facing also, but this should not be done without 2FA and SSL.
  • Backup Critical files need this. Probably anything paper that's irreplaceable, eg, DD214, grades, etc. This shouldn't need to be more than about 1-5 GB anyway, but critical. Chris Titus uses BackBlaze. BackBlaze however relies on Duplicity, which in turn relies upon the dreaded gvfs, one of the top 5 no-no items (pulse audio, gvfs, microsoft, oracle, adobe). Use some other with rclone, rsync, remmina, cron.

plan

Current A-Plus costs: $5 month x 2 sites ($120) + annual 2 x domain w/privacy ($30), one site only MySQL.

  1. DNS - Google ($12 yr x 2 incl.privacy)
  2. rclone some criticals to Drive
  3. Moodle VPS on Google LXC
    • $ yay -S google-cloud-sdk 282MB
    • go to Google Cloud and provide credit card
    • follow Chris Titus' instructions in video below

    Host on Google (30:32) Chris Titus Tech, 2019. Do an inexpensive, shared kernel setup. Uses Ubuntu server and Wordpress in this case.
    Moodle 3.5 Install (22:47) A. Hasbiyatmoko, 2018. Soundless. Steps through every basic setup feature. Ubuntu 18.04 server.

  4. Nextcloud VPS on Skysilk ($60)

1. transfer DNS to Google

Chatted with the old provider and obtained the EPP's for both domains, then began registration with the new provider. Once these are established, we'll have to change the A-records, and perhaps the "@" and CNAME records, to point to current hosting. Each possible VPS provider handles DNS in different ways. Some providers manage the entire process under the hood; at others, a person must manually make any changes to their A-records.
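After changing records, propagation can be verified against a public resolver (the domain is a placeholder):

```shell
# Check the A and CNAME records against Google's public resolver
DOMAIN=example.com
dig +short @8.8.8.8 "$DOMAIN" A
dig +short @8.8.8.8 "www.$DOMAIN" CNAME
```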

Rsync Backup on Linux (9:19) Chris Titus Tech, 2019. Great rundown plus excellent comments below.
New DNS Update (7:18) Without Code, 2018. Proprietary, but a transparent example of what is involved in the process.

server blend

Nextcloud is not an actual server itself; the underlying server should be something like Apache or Nginx. Nextcloud then overlays these and serves files via the server underlying it. The logins and so forth are accomplished in Nextcloud in the same way we used to do so with, eg, Joomla or Wordpress (optimized for blogs).

Nextcloud: Setting Up Your Server (17:43) Chris Titus Tech, 2019. Uses Ubuntu as underlying server on (sponsored) Xen or Upcloud. Rule of thumb $0.10 month per GB, eg $5 for 50G.
What are Snaps for Linux (4:47) quidsup, 2018. These are the apps that are installable across distros.

2. existing storage for backup

We can use free storage such as Drive or Dropbox to back up data. The key is that it should be encrypted on these data-mining, big-tech servers.

RClone encryption (10:21) Tyler, 2017. Methods to encrypt with rclone. Also a good idea to download rclone-browser, for an easy GUI.
Rsync Backup on Linux (9:19) Chris Titus Tech, 2019. Great rundown plus excellent comments below.
Using Cloud Storage (22:55) Chris Titus Tech, 2019. Easy ways to encrypt before dropping into Google Drive, etc. (sponsor:Skysilk)

choosing a VPS

One can of course select Google, but what virtualization do they typically employ? Skysilk uses LXC containers via ProxMox.

Rsync Backup on Linux (9:19) Chris Titus Tech, 2019. Great rundown plus excellent comments below.
Using Cloud Storage (7:31) Wolfgang's Channel, 2019. Be sure to pick a provider that uses Xen or KVM, rather than OpenVz-based virtual machines.

tech stack

I used to use a LAMP stack, but I am trying to avoid MySQL (proprietary RDBMS) and use PostgreSQL (ORDBMS) as a minimal update (LAPP), and have looked at some other stuff (see below). I may try a PERN stack if I can get it going with Moodle.

Various Tech Stacks (48:25) RealToughCandy, 2020. Decent rundown plus large number of comments below. Narrator skews "random with passion" over "methodical presentation", but useful. PostgreSQL around 38:00.
Using Arch as Server (33:11) LearnLinuxTV, 2019. He's running on Linode (sponsor), but the basics the same anywhere. Arch is rolling, but just keep it as the OS for one app.