Showing posts with label safety.

Wednesday, November 8, 2023

portable (usb) disk recovery

We've nearly all dropped an important HDD on the way to the airport or work and know the (sinking) feeling later that day when data can't be accessed. It's expensive, slow, and intrusive to have data recovered professionally.

So we've mostly switched to SSD's, but even these are of questionable[1] reliability.

levels of loss

  • level of destruction -- unable to work at all, unable to be detected, unable to boot
  • type of drive: SSD, USB, SDC, HDD
  • file system: ext2, NTFS, ReiserFS, etc.

The easiest thing, the thing we'd like to do, is reset the dirty bit, or re-establish a corrupted MBR, and be able to plug our drive back in for normal use. Then we find that each crash can have a different level of complexity -- the dirty bit is often possible, the corrupted boot sector typically not.

Some of our luck in a crash also has to do with the file system. If a person has an HDD formatted in NTFS, then it needs a bootable NTFS sector even if it's just a portable data disk.

expectations: think data first, access second

The data crash reveals the drive to be unreliable; if I greedily aim to re-establish a boot sector, I might lose the data if the boot sector attempt fails. Check to see if the data is intact and shoot for that first, even if it's more time consuming than restoring access through a repaired boot.

software: testdisk

At the start of the fail I cycled through all my disk software -- fdisk, cfdisk, gparted, ntfsfixboot -- and the best so far seems to be testdisk. Also might take a look at wipefreespace on the AUR.

Data Recovery with Test Disk (18:22) oswyguy, 2018. This poor guy needs sunlight, but a superb TestDisk video beginning about 7:30.

WD Passport 1000GB, (1058:0748)

I employed the usual tools to gather info and attempt repairs -- cfdisk, fdisk, gparted, and others from the AUR. None of them did anything.

# pacman -S testdisk

This retrieved files, but didn't repair the MBR index.
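For the record, file retrieval in TestDisk is entirely menu-driven; on this drive the path was roughly the following (the /dev/sdb device name is an assumption -- check lsblk first):

# testdisk /dev/sdb
(menus: Proceed -> Intel for an MBR disk -> Analyse -> Quick Search;
highlight the found partition, press "P" to list its files,
then select the files and copy them to a safe destination on another drive)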

nope

# pacman -S ntfsfixboot
Zero.

# pacman -S ntfsfix
No.

[1] Consider this SSD drive article, from which I quote:

According to Western Digital's firmware update page, the impacted products are from the SanDisk Extreme Portable SSD V2, SanDisk Extreme Pro Portable SSD V2, and WD My Passport SSD lines, and the models are listed as follows:

SanDisk Extreme Portable 4TB (SDSSDE61-4T00)
SanDisk Extreme Pro Portable 4TB (SDSSDE81-4T00)
SanDisk Extreme Pro 2TB (SDSSDE81-2T00)
SanDisk Extreme Pro 1TB (SDSSDE81-1T00)
WD My Passport 4TB (WDBAGF0040BGY)

And these drives are still best-sellers on Amazon, so suppliers are being shady AF. SSD's rely on firmware but still must be properly constructed, of course. HDD's were mostly just a hardware situation.

Thursday, July 27, 2023

data retention

Most people want perpetual storage of all their personal data, and they would probably prefer that it is stored under European data privacy laws. Of course, streaming one's data to a site out of the country means it will be NSA-repeated as it leaves the US. Still, it is likely to have better protections once it arrives in the EU.

For those who cannot afford to physically fly their hard drives around and have them securely duplicated elsewhere, what can we do? Is there a reasonable plan? Not in any secure sense. However a partially secure solution could be to export a percentage of their data to a European Cloud server. Perhaps the line could be drawn at storing personal photos and documents on the Cloud. It's understood any government would be able to see them, run text analysis and facial recognition -- as is true anywhere data is stored -- but we'd at least have a portion of our data backed up against most any non-nuclear catastrophe.

TLDR; IME, PCloud (if one can afford it), since it's perpetual with no subscription fees, is a reasonable unsecured solution. Here we can store photos and documents perpetually (one or two terabytes), though we can't afford enough space for video, audio, and so on. So our poor man's plan could look something like this...

  • photos and docs: PCloud. Managing docs and citations will need a database. In lieu of a database, one can possibly organize BIB files over collections and then use, eg, jabref to keep it clean and manageable. It's unclear yet how to cite specific SMS or emails. A database, or at least a spreadsheet, seems inevitable.
    Very small audio and videos (eg screen captures) which document something may possibly be kept on the cloud.
  • videos: HDD, SSD. There's no way for a poor man to store, or have available to edit, these media on the Cloud. What might be Cloud stored is some sort of filing spreadsheet or database table which allows the user to date-match vids (SSD), pics (cloud), and docs (cloud), if desired for a family reunion or forensics.
  • audio: HDD, SSD. Music and longer podcasts must be kept here. Too expensive for cloud storage.
  • sensitive: decision time. If documents have current PII, might want to keep them on a USB key, backed up to SSD, or something else off-cloud. Tough decision when it's OK to Cloud store. Of course, it's more convenient to have on the Cloud, but not sure that's recommendable from a safety perspective.

oversight

It's as yet unclear how to database one's entire collection, but some attempt to herd the cats must be made. If one has the time and resources to implement NARA-level storage plans, then some version of those can be followed.

If a database is possible, either through fiverr or some such, that's probably recommendable, since all one would need to do is occasionally make a diff database backup and keep it on the cloud. But personal files are a wide-ranging "collection", and people often change file and folder names, and so forth. If that's correct, a system may be more important than capturing each file, not sure.

jabref and bibtex

$ yay -S jabref-latest

Let's take the example of emails. These are a horrible security risk, and a person typically wouldn't want to archive them on the Cloud. It's equally true, however, that we sometimes need to archive them. Suppose we decide to archive some of them. Let's take an example of how we could manage the metadata without a database.

Suppose we print-to-PDF an email conversation and give it the filename 20230705_anytownoffice.pdf. For metadata, let's create a text file called emails.bib. This is a BibTeX file, using standard LaTeX formatting.

@misc{email000001,
title = "20230705_anytownoffice.pdf",
author = "homeoffice@gmail",
year = "2023",
booktitle = "{A}nytown {O}ffice {S}upply",
address= "Anytown, CA",
abstract="April-June. Regarding shipping some reams of papers. Contact was Joe Foo",
note= "folder: emails, file: 20230705_anytownoffice.PDF"

And then if a person opens the BIB file using jabref, they will have all the relevant info displayed as in a spreadsheet. So jabref can work for more than articles and books.

old DVD's

Standardize the format. A lot of old movies and episodes are 480p (DVD) and that's fine. However, they're often in MKV or WEBM containers with VP9 encoding and so on. There's no way to reliably burn these back onto a DVD or even play them on most players.

Toying around, I've come up with...

ffmpeg -i oldthing.mkv -b:v 1.5M standard.mp4

... which yields about 1.3G and shows well on a large screen...

I prefer...

ffmpeg -i oldthing.mkv -b:v 2M standard.mp4

...but this yields perhaps 1.7G for a standard 1:40-1:45 film, which is a lot of space.

If a person has the time, it's even more interesting to break these larger films into a folder of 20-minute segments, as sketched below.
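A sketch of that split, using ffmpeg's segment muxer on the already re-encoded file (the 20-minute figure and the filenames are just examples; with -c copy the cuts land on keyframes, so segment lengths are approximate):

$ ffmpeg -i standard.mp4 -c copy -map 0 -f segment -segment_time 1200 -reset_timestamps 1 part_%02d.mp4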

Tuesday, October 25, 2022

log management

The first step is to locate and comprehend all log activities, the second to determine triggers for undesirable events and conditions, the third to send a text alert if these happen, and the fourth to send email summaries of log changes.

I doubt I'll ever complete this post, as there are so many ways to skin this cat, both in CLI and GUI. Overall it's part of SIEM and should be approached with some thought.

directories

At the simplest level, this is the local directory:

/var/log

And of course to see how much use:

$ du -sh /var/log

And of course to limit the largest offender, the systemd journal, in the first place:

# nano /etc/systemd/journald.conf
SystemMaxUse=200K
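The cap above only applies going forward. To see what journald already holds on disk, and to trim it by hand, the standard journalctl flags are enough:

$ journalctl --disk-usage
# journalctl --vacuum-size=200M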

log programs

1. Linux log apps (webpage), 2022. Several log apps show information.

Friday, September 23, 2022

phone provider oddities

Phone company interaction has become complex, at least at T-Mobile. However, outside of the idiosyncrasies below, 5G reception is better than 4G (eg, inside buildings) and 5G hotspotting carries good speed, eg 6-7Mb/sec during software updates (pacman). There are some idiosyncrasies...

1. unlock - hard 40 days

It's necessary to use an unlocked phone if traveling overseas and wanting to purchase a SIM for local service. IME, especially if planning for a trip, buy any new devices 2 months ahead of trip. New devices have a hard 40 day hold (postpaid plans - see below) before T-Mobile will unlock the phone.

Secondly, probably because I purchased my device on EBay, there was no T-Mobile unlock application any longer in the device apps. This means a call was necessary to T-Mobile to accomplish unlocking. Have the IMEI and an email address handy. They will ask "Why? International? Leaving T-Mobile?"; it's almost as bad as KYC in banking. The agent notes unlocking takes 72 hours and that an email will be generated once complete. So 40+3 = 43 days from purchase, best case.

2. misc

  • call/chat - chat is preferable. Calls are recorded, but there's nothing in writing and no way to attach screenshots. Talking also means KYC questions "do you plan to travel?"
  • prepaid, postpaid plan - postpaid plans require credit checks and report to credit agencies, so this is higher level account. Prepaid plans are lower end and have overage charges. For best service, get all account numbers postpaid, since pre/post can vary by the number on the same account.
  • unlock phones - postpaid account: 40 days on the network. Prepaid accounts: 365 days on the network. Website declaring a device is "eligible" for unlock and/or being purchased independently is irrelevant.
  • temporary and permanent phone unlock: which is which and why?
  • gigabyte accounting: extremely liberal. I'd wager T-Mobile tallies GB usage at 1.5:1 or greater. Eg, 1GB actual usage will be declared as 1.5GB. Also there's delayed accounting -- the account might show 3GB immediately after use and gradually rise to 4.5GB over the next 12 hours of non-use.
  • throttling/overage: device dependent. On my dedicated hotspot, I get throttled back to 126K. On my phone account, I get throttled to 56K. Does the account automatically add gigabytes of data for some charge?
  • sim swap - there's typically some network confusion or throttling with a recently swapped SIM or a new SIM. Usually apps will require a new login for an "unrecognized device" when the SIM changes. Google apps can work without the SIM if Wi-Fi is available.

3. usage

Day to day use -- emails and so on, blogging, browsing, is about 1Gb per day, maybe 2 if several videos (typically 360p in phone). Some problems:

  • Surfline (app): no way to adjust/lower quality. Immense AJAX and ads. Seems to use 500Mb each time opened, up to 1GB if on there 4 mins.
  • NFL GamePass (hotspot): no way to adjust/lower quality. Apparently, the server takes over and streams at 1080 or greater. This means a game is 15GB, or about 3.5GB per quarter of a full replay.
  • Criterion Channel (hotspot): about 3-5Gb per film, 520-720p, adjustable.

4. phone app

Say a person has an unlimited 5G phone and 40G hotspot. When they look in the T-Mobile app, they see the screenshot below. Turns out the 17.6GB at the top is the combined number for both phone data and hotspotting usage.

Sunday, September 4, 2022

5G - OnePlus Nord200 - Oppo N1 - DE2118

Links: Wiki page

model

Like all smartphones, manufacturing is murky and obfuscated, and the maker can best be determined by its USB VID

$ lsusb
22d9:2765 OPPO Electronics Corp. Oppo N1

"22d9" is Oppo Electronics, so that's our real indicator. Interestingly, the OS is Cyanogen Mod, which they call Oxygen. $125 refurb off EBay. For travel it's GSM and

phone history

The best I've had was the LG-D520: best text feel, fast, least distracting.

  • 5G Nord N200 (2022/08) -- $125 refurb unlocked EBay, originally on T-Mobile. Android 12, upgraded from original Android 11/Oxygen. Type C USB cable.
  • 4G Droid Turbo 2 (2020/05) -- $50 XT 1585 refurb unlocked from Verizon. Android 7 (Nougat). Reliable Motorola phone. Could not achieve VoLTE after 5G upgrade due to Verizon firmware. Type B ("micro USB") USB cable. Battery about 45 minute turnaround. Keep unrooted as a good cam phone.
  • 3G Optimus F3Q (2015/05) -- $50 LG-D520. Refurb unlocked from T-Mobile. Android 4.4 (KitKat). Easy battery swap. This phone has slightly better touch for texting than the G1, which was already good for texting. The LG-P659 was a smaller, glass-screen (non-slider) version.
  • 3G LG C195N (2012) -- $100 new. T-Mobile. Slider. Android 1.5 (Cupcake). Great texting touch and pocket and hand feel. Wide enough not to slip out of a pocket. Simple, reliable, great battery life. GSM. Might still be good for Europe. SMS/MMS, no Internet.
  • 3G LG G1 (2010) -- $100 new. T-Mobile. Slider. Android 1.5 (Cupcake). Slider has good touch for texting. Stolen at work.
  • 2G Moto Razr V3 (2006) -- $? T-Mobile. Favorite phone. Black metal case, reliable, battery features, easy to use, heft even better than Nokia, easy battery.
  • 2G Nokia 6010 (2002) -- $? Cingular. Candy bar, GSM 850/1900. Also called NPM-10. Superb heft and single-hand key access. Easy battery (NK3310), dedicated charger. FCC: GMLNPM-10X
  • Gx Alcatel (2002) -- $30 new. Price was part of a Deutsche Telekom package which included a number of minutes and other features.

Settings

Android 12. One of the key things to do is disable "Quick Device Connect", a persistent problematic app that will ask for one's location. Settings -> Apps -> App management -> (three dots in upper R) -> Quick Device Connect. Force stop and turn off all notifications.

  • awake: double tap, then swipe up.
  • usb setup: Settings -> System settings -> Developer options -> Select USB configuration
  • nav bar / 3 buttons: there's no back button in the Chrome browser, so the buttons are helpful; the triangle is a back button. Settings -> System settings -> System navigation -> select buttons or gestures (gestures has Navigation Bar).
  • screen timeout: Settings -> Display & brightness -> Auto screen off (set mine to 1 min)
  • notifications: apps like Uber, EBay often interrupt with announcements. Settings -> Notifications & Status bar -> Apps are in a column; toggle the notifications on/off as desired for each app.

Glass/LCD Replacement ($38) and Battery ($18)

Had the phone a couple months and then was at a meeting with some guys and dropped it on concrete. First broken-screen experience. For this phone it's a significant job whether it's glass only or glass+frame. Why not also replace the (LiPo) battery, since it has to be removed to do the glass. Comments on that a little further down.

Replacing only the glass is easiest fix, but requires bending-out the frame to remove the glass. I considered a smooth press-fit would be more durable and stock-appearing, so I purchased a frame/glass combo. An extra $6. Another $5 for the toolset with B-7000 glue. With tax, call it $38 for all of this, non-OEM. I also read the seller's customer feedback comments for additional tips from buyers. Delicate job and, again, not OEM.

Beyond this, having or jerry-rigging a heat pad or gun, and clamps for re-gluing, are all required. This is true also for a straight battery job. The video below shows the "with-frame" process I did, but one can do just the glass, as noted above, after the battery step. See the video comments.

Screen w/frame replace (17:09) Geardo, 2022. Latter half of video -- reassembly portion -- valuable to watch prior to initial disassembly. Comments valuable.

The battery was an extra $18 incl. tax. I got a LiPo version, like the OEM. Li-Ion requires an overcharge protection circuit that's likely not in the phone since it's LiPo OEM. I don't want the phone burning up on the charger.

Saturday, August 13, 2022

4G -> 5G WTF

NB: The least expensive (T-Mobile) 5G entry with enough features (Android 11, Snapdragon 480 vs. Snapdragon 805 and Android 7 in my 2015 Droid2), is probably the Nord N200 5G. ~$200, or about $140 (including tax) refurbished.

Links: EBay.com :: New in box older phones :: T-Mobile


5G is definitely here and governments are rejoicing (all digital! AI packet inspection!). But as part of this rollout, providers began shutting down 3G networks. This meant voice calls now had to go over digital 4G (VoLTE); there was no more falling back to circuit-switched 3G. Also, different providers use different frequencies.

1. VoLTE functions (15:54) Telecom Tutorial info, 2018. Pre-5G video provides history of 2G, 3G, 4G and thorough description of VoLTE services including important note that the provider may not enable VoLTE on all handsets. 11:00 particularly applied to me.
2. VoLTE functions turn/on off (6:52) Make Knowledge Free, 2021. Thorough description of ways to enable VoLTE, assuming provider, handset, SIM, and handset firmware are provisioned to do so.

part 1 - 4G begins to malfunction

I have a reliable 4G phone. In March, a T-Mobile store checked the device and said, "it will work after the July 5G rollout". In August however, 4G data was reliable, but the phone dropped voice calls. Evidently, the phone was attempting to switch voice traffic to 3G, which no longer existed.

1st diagnostic

I checked several screens, but will only repeat the most relevant one below.

analysis

  1. 4G data is not a problem in this phone. The 4G icon is illuminated in the upper right, and there had been no interruptions to texts or internet.
  2. my device is probably white-listed by T-Mobile, just as the store clerk said.
  3. Hardware is provisioned for VoLTE or the greyed out "VoLTE provisioned" switch would not be present

Given 1-3, the dysfunction must be either the device's firmware or the device's SIM card. If it's the SIM, T-Mobile can send me a VoLTE-provisioned SIM. If it's firmware... I'm out of luck and will have to purchase a new device. The phone (though reliable) is an unlocked former Verizon phone with Verizon firmware. Verizon firmware is impossible to update through T-Mobile.

2nd diagnostic - t-mobile

I recorded the IMEI of the phone but, as we can see from the screenshot above, SIM numbers were not provided to the user in Android 7. I physically pulled the SIM to record the ICCID number. With the ICCID and IMEI, I opened a laptop, went to the T-Mobile website and initiated a text chat with a tech. He took the numbers and determined the problem was likely SIM provisioning. He ordered a new SIM overnighted to me. It arrived 2 days later.

3rd diagnostic - SIM install

After installing the new SIM (see below), there's a 2 hour delay for T-Mobile to fully detect and provision it.

I waited 3 hours and restarted the phone twice, however the VoLTE option was still greyed out as previously. Verified also that voice calls were dropping. The SIM was no solution. Failure.

2nd analysis

The remaining unverified problem was firmware. We can see from the phone info that both baseband and system firmware are Verizon. If I'd had unlimited patience, I might have written Verizon to see if they had a more recent version that was VoLTE enabled -- there was nothing on their website which looked like a solution.

conclusion

It appears I need to purchase a 5G phone; the Verizon firmware limitation apparently cannot be remedied. Lesson: if buying a cheap unlocked phone, it's smart to purchase a device which was previously used on one's current provider's network. Accordingly, I found a 5G OnePlus Nord 200 on Ebay for $130, unlocked from T-Mobile.


part 2 - related notes

Home: SIM installation (T-Mobile)

Home installation relies partly on one's online T-Mobile account. The old SIM needs to be operating to log-in (2FA) to that account, and a person must complete the SIM swap before the online account times-out. There's no way to 2FA back into the online account without the new SIM being registered. Here's what I needed:

  • paper; copy down the new SIM number from the package
  • a tool to quickly pop the SIM from the phone
  • laptop to get on the T-Mobile website
  1. login to T-Mobile. T-Mobile sends a 2FA text to my phone
  2. Go to Account->Lines and Devices ->Change SIM.
  3. Work fast before you're logged out.
    • turn-off the phone
    • pop-out old SIM and put in new SIM
    • power-on the phone (all kinds of error messages will appear -- disregard them)
  4. enter the new SIM number in the box for it on website (laptop) and then "continue"
  5. completion message appears
  6. After about 20 minutes of the phone being inoperable, a text from T-Mobile arrived asking for new SIM verification.
  7. re-establish accounts in phone. Most accounts (google play, voice, phone) are tied to a SIM card, not to a phone, so all of these accounts logged out and needed to be re-established.

Vendor: possible store visit

SIM config can only be done by the provider, so if the phone is still not properly registering in the network, take it to the provider. More about it here. Also the phone probably has to be registered for IMS, unlike 3G. And if a person has a perfectly configured 4G phone, the provider might also be banning that device model from the 5G network. With so many variables, it's good to be patient with oneself if a person eventually must visit the vendor shop.

VoLTE functions (23:15) Telecom Tutorial info, 2018. Pre-5G video provides in-depth PowerPoint of how 4G VoLTE connects from User Equipment(UE) with voice in an IP Multimedia System (IMS). Covers congested network scenarios also.

Monday, February 14, 2022

stream and record - obs and ffmpeg 1

Links: OBS site w/forums

A high-speed internet connection is foundational for streaming, but what are some other considerations? Some live stream sites (YouTube, Discord, Glimesh) will need an OBS type app on the user's system to format a stream to transmit to their site. Other sites (Zoom) have a proprietary streaming app, but the app experience can sometimes be enhanced by routing it through an OBS-type app. A third issue is the various streaming protocols and site authentications. A fourth issue is hardware problems which can be specific to a streaming app. All this complexity allows for multiple problems and solutions.

Note: a fifth issue is that OBS is natively set up for the notorious Nvidia hardware and the PulseAudio software. Detection of audio is particularly difficult without PulseAudio, eg requiring JACK config.

protocols and authentication

RTMP streaming providers typically require a cell number via the "security" (forensic record) requirement of 2FA. This is an immense safety issue. Who knows how these providers tie cell numbers to credit reports, "trusted 3rd parties", etc? The answer is consumers are expected to understand a multi-page "privacy" policy filled with legalistic language and equivocations, which regularly changes, and which varies from site to site. Way to protect us Congress, lol. Accordingly, since I essentially have no idea what they're doing with my cell, I try to avoid streaming services which require a cell.

 

Although they require a cell*, YouTube's advantage is streaming directly from a desktop/laptop with nothing beyond a browser. Discord can do similarly with limited functionality, and they have a discord app which adds features. Glimesh works well with OBS -- it provides a stream key for OBS, or whatever a person is using.

*YouTube requires "account verification" at https://www.youtube.com/verify prior to streaming. The verification is 2FA to a cell.

obs

Those not intending to use OBS can still find utility in its attempts to stream or record. A great deal will be revealed about one's system. OBS logs are also valuable to identify/troubleshoot problems, eg the infamous 'ftl_output' not found issue -- you'll find it in the logs (~/.config/obs-studio/logs). OBS can encounter a couple of problems.

obs hardware issue: nvidia graphics card

Obviously, no one wants NVidia hardware: the associated bloatware is almost unbearable. However, its use is so common that many users have it in their system(s). OBS therefore makes Nvidia the default. This setting spawns errors for systems with AMD Radeons. Change the "NV12" (or 15 by now) circled below to an option which works for one's hardware.

1. obs audio problem - alsa and jack

Most desktops unfortunately have two audio systems: an MB chip, and a graphics-card chip. Difficulty can arise when one source is needed for mic input, and the other source is needed for playback (eg, for HDMI). This is bad enough. However there's an additional problem with OBS -- it doesn't detect ALSA. Your options are PulseAudio (gag), or JACK (some config work, depending). I end up using a modified PulseAudio. More about that here.

2. obs local configuration and usage: to MP4

Ffmpeg works great for screen and input captures, but OBS can be preferable when mixing in more sources during a live session. In OBS terminology, "scenes" and "sources" are important words. A scene is a collection of inputs (sources). OBS is good at hardware detection, but files can be played, websites shown, hardware (cams, mics) captured, images (eg for watermarks) overlaid, other videos included, and so on. For making MP4's, "Display Capture" is obviously an important source.
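For comparison, the ffmpeg-only route mentioned above is a single command; something along these lines works on an X11 desktop with a PulseAudio default source (the screen size and device names are assumptions for one's own system):

$ ffmpeg -f x11grab -video_size 1920x1080 -framerate 30 -i :0.0 -f pulse -i default -c:v libx264 -preset veryfast -c:a aac capture.mp4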

Scenes and Sources (8:08) Hammer Dance, 2021. How to add the scenes and sources to them.

V4L2 issues

1. loopback issue

Unimportant, though you might find it in the OBS logs:

v4l2loopback not installed, virtual camera disabled.

The solution steps are here.

  • v4l2loopback-dkms: pacman. Basic loopback. This builds a module, so you need to do pacman -S linux-headers prior to the loopback install.
  • v4l2loopback-dc-dkms: AUR. haven't tried this one. apparently allows connecting an Android device and using it as a webcam via wifi

We're not done, because the loopback device will take over /dev/video0, denying use of our camera. So we need to configure our loopback to run on /dev/video1. This has to be specified by putting a load-order file into /etc/modules-load.d/ (a sketch follows the install command below).

Install the loopback, if desired.

# pacman -S v4l2loopback-dkms
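A sketch of the /dev/video1 arrangement: the load-order file makes the module load at boot, and an options file (my assumption is that /etc/modprobe.d/ is the place for this part) pins the device number via the module's video_nr option.

# nano /etc/modules-load.d/v4l2loopback.conf
v4l2loopback

# nano /etc/modprobe.d/v4l2loopback.conf
options v4l2loopback video_nr=1 card_label="OBS Cam"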

2. ftl_output issue

This one is important.

$ lsmod |grep video
uvcvideo 114688 1
videobuf2_vmalloc 20480 1 uvcvideo
videobuf2_memops 20480 1 videobuf2_vmalloc
videobuf2_v4l2 36864 1 uvcvideo
videobuf2_common 65536 2 videobuf2_v4l2,uvcvideo
videodev 282624 4 videobuf2_v4l2,uvcvideo,videobuf2_common
video 53248 3 dell_wmi,dell_laptop,i915
mc 65536 4 videodev,videobuf2_v4l2,uvcvideo,videobuf2_common

If we haven't installed loopback, then video0 is the default. Note this is verified by the lack of any settings or capabilities returned on video1.

$ v4l2-ctl --list-devices
Integrated_Webcam_HD: Integrate (usb-0000:00:14.0-2):
/dev/video0
/dev/video1
/dev/media0
$ v4l2-ctl -l -d 0
brightness 0x00980900 (int) : min=-64 max=64 step=1 default=0 value=0
contrast 0x00980901 (int) : min=0 max=95 step=1 default=0 value=0
saturation 0x00980902 (int) : min=0 max=100 step=1 default=64 value=64
hue 0x00980903 (int) : min=-2000 max=2000 step=1 default=0 value=0
white_balance_temperature_auto 0x0098090c (bool) : default=1 value=1
gamma 0x00980910 (int) : min=100 max=300 step=1 default=100 value=100
gain 0x00980913 (int) : min=1 max=8 step=1 default=1 value=1
power_line_frequency 0x00980918 (menu) : min=0 max=2 default=2 value=2
white_balance_temperature 0x0098091a (int) : min=2800 max=6500 step=1 default=4600 value=4600 flags=inactive
sharpness 0x0098091b (int) : min=1 max=7 step=1 default=2 value=2
backlight_compensation 0x0098091c (int) : min=0 max=3 step=1 default=3 value=3
exposure_auto 0x009a0901 (menu) : min=0 max=3 default=3 value=3
exposure_absolute 0x009a0902 (int) : min=10 max=626 step=1 default=156 value=156 flags=inactive
$ v4l2-ctl -l -d 1
[nothing]

However, even with this default correct, there is an ftl_output error remaining which prevents an output video stream.

$ yay -S ftl-sdk

plug-ins

OBS has plugins, for example one that shows keystrokes and mouse clicks.

Streaming and Recording (11:08) Gaming Careers, 2019. OBS-based tutorial, using the computer, not a capture card.
GoPro to WiFi (page) Action Gadgets, 2019. Used GoPros can work as well as newer cams.

settings - device

repurposed cams

attendance

  • meet: only in highly paid plans beginning about $12 per month (2023). The higher-level educator plans also have it.
  • zoom: only in paid plans
  • teams: only in paid plans - Teams is part of the Microsoft 365 business suite
  • webex: webex is inherently pay-only


Thursday, January 20, 2022

password safety

Links :: devdungeon primer :: NASA page

This post assumes use of gpg for password protection at the 256-bit level. Geofencing is not covered here, and anyway has no current protections: too many 3rd parties sell to each other and governments.

gpg shortcomings

  1. trivially overcome by government organizations
  2. modifying defaults requires a configuration file
    • default password timeout is several minutes
    • default cypher (CAST5) easily bypassed
  3. default encrypted file is a binary. May not therefore pass virus checks, etc.
  4. the encrypted file exists without password-attempt safeguards. It need only be copied, and then a second program can run a brute-force password attack millions of times per second until it is cracked
  5. often used without a hash, degrading its security

In short, as provided, this program is little more than window dressing providing a (dangerously) false sense of security. Let's look at solutions by number.

TLDR

GPG non-destructively encrypts: an encrypted output file is created and the unencrypted input file still exists. If no output file is specified (-o flag) during encryption, an output file will still be created; it will have the input filename with a ".gpg" extension added. An output file should also be specified (-o flag) for decryption, or the decrypted file will simply be printed to the screen, with no file creation. So: 1) make the configuration changes in section 2, "configuration file", below and restart, then 2) encrypt and decrypt as follows.

encrypt

$ gpg -c inputfiletoencrypt.txt
[password]
inputfiletoencrypt.txt.gpg

In lieu of providing an output name, rename the file with the GPG extension whatever you wish. It makes no difference to decoding what the file's name is.

decrypt

$ gpg -o decryptedoutputname.txt -d encrypted.txt
[password]
decryptedoutputname.txt

The encrypted file is a binary, and may not pass all virus tests, eg required for uploading into GDrive. If this is a problem, add the flag --armor into the encryption command. This encodes the file into a scrambled block of ASCII text, instead of a binary.

1. transparency to gov't

No reasonable solution. Securing files against gov't intrusion requires a CS degree or steps more problematic for the user than the privacy is worth. If a citizen's files are stored on the Web, it's easiest for a government, but any device connected to the Web for even a few seconds likely has been indexed or otherwise compromised. Citizens' thoughts, patterns, networks, and locations are trivially determinable with AI. If deemed interesting, their information is forwarded for human review. Accept the gov't panopticon.

2. configuration file

Average users want the convenience of symmetric encryption. If so, a person wants the strongest symmetric setup.

$ nano ~/.gnupg/gpg-agent.conf
# Comment line indicator
# Seconds until password required for decrypt
default-cache-ttl 20
# Seconds until all passwords dropped
max-cache-ttl 25

$ nano ~/.gnupg/gpg.conf
# AES256 for symmetric encryption (cipher-algo is a gpg.conf option, not gpg-agent.conf)
cipher-algo AES256

2a encrypt/decrypt

With the changes above, standard encryption syntax is OK:

$ gpg -c toencrypt.txt
[password]
toencrypt.txt.gpg

Idiots leave files on their system with "GPG" extensions, so specify the file name. Give all your encrypted files the same prefix, say "t", so you'll know:

$ gpg -o t20211101.docx -c toencrypt.txt
[password]
t20211101.docx

To decrypt, give an output filename or it just reports to the screen.

$ gpg -o output.txt -d t20211101.docx
[password]
output.txt

3. binary file

By default, a binary file is created, and that's space efficient. But binaries look bad to some virus checkers and sometimes are flagged as viruses, preventing uploads or storage. When this occurs, the solution is to create an encrypted file with ASCII text. For this we simply add the "armor" flag.

$ gpg -o t20211101.docx --armor -c toencrypt.txt
[password]
t20211101.docx

To verify, cat the encrypted file. It should be a block of text.

Wednesday, October 6, 2021

dashboard options

We'll first want to run a dashboard on our local system, using a light webserver and PHP server, before attempting it over a LAN or WAN. Step one is to inventory all our databases (including the browser's IndexedDB), logs, mail, and daemons (units/services) on our systems prior to attempting the dashboard. That's because a dashboard will add to what's already present. We need a system baseline. For example, even the simplest MTA's add a database of some sort. SMTP4dev adds an SQLite.db in its directory, same with notmuch (or maybe an XML database). If we go so far as Postfix, it may want a full relational database. So we need a pre-dashboard inventory of databases, logs, and mail, written on a static HTML page.

why dashboard

We may want to monitor certain applications or system parameters, financial counts, student attendance, anything. If we start with our own system as a model there are 4 things which regularly change: logs, timers/cronjobs, system values (temp, ram usage, hdd usage, net throughput, etc), and application information. We might even want to craft a database to store some values. Postgres is best, but we can get general theory from MySQL models.

MySQL table basics (14:26) Engineer Man, 2020. Great overview for the time invested.

Prior to a real-time dashboard, a slower process of daily emails with a summary is a good start, even if we just mail them to localhost disconnected from the Web.

Once we can update a daily localhost email, we can attempt to expand that to internet email. And/or, we can add a light local webserver + PHP. We need this to have dynamic webpages; opening an HTML webpage from a file directory is "load once", not updatable.
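For the light webserver + PHP piece on localhost, PHP's built-in development server is enough to serve a dynamic page (the document root here is just a placeholder path):

$ php -S localhost:8080 -t ~/dashboard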

configuration models

Security camera configurations were created around updating pages, and these configurations might be adaptable to system monitoring. More than the security models, though, the DevOps/BI models seem glean-able. These servers might be, eg, Fluentd, Prometheus, and Grafana -- production server software, but localhost possibilities. Most are in the AUR. Prometheus is sort of a DAQ for software -- with email alerts possible -- and Grafana is typically the display for it. But neither requires the other. Grafana can display from any data source, or take its info from Prometheus. For Postgres-built DB's, TimescaleDB has a lot of videos that might apply. We might even be able to modify a Moodle setup, now that we can upload quizzes using the Aiken format.
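For a sense of scale, a localhost Prometheus only needs a few lines of prometheus.yml to scrape something like node_exporter (9100 is node_exporter's default port; the job name is arbitrary):

scrape_configs:
  - job_name: 'node'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9100']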

services

We can attempt several daemons on a local machine and see which ones are too resource intensive. Also timer scripts to execute those services only as needed, stopping them after.

local system dashboard

logs and timers

Nearly all logs are inside /var/log, but we need to evaluate our system carefully at least once for all relevant log locations. Some logs are ASCII, others are binaries that require an application or command to obtain their info. Once tallied, systemd timers and scripts are the simplest, with possible output via postfix. If we then add a webserver and PHP, we could run a systemd timer script every hour on logs and which updates a localhost webpage. To see running timers...

# systemctl list-timers
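A minimal hourly pair might look like the following; the unit names and the script path are made up for illustration:

# nano /etc/systemd/system/logsummary.service
[Unit]
Description=Hourly log summary

[Service]
Type=oneshot
ExecStart=/usr/local/bin/logsummary.sh

# nano /etc/systemd/system/logsummary.timer
[Unit]
Description=Run the log summary hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

# systemctl enable --now logsummary.timer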

Fluentd is a log aggregator available in the AUR, but may also need a DB to write to.

Timer files (18:16) Penguin Propaganda, 2021. This guy makes one to run a script. However, what about running a program and then shutting down the service after? 10:00 restarting systemctl daemons and timers.
Systemd start files. (7:52) Engineer Man, 2020. How to make unit files, which we would need anyway for a timer file.
Dealing with system logs (10:00) Linux Learning, 2018. Mostly about /var/log. Explains that some logs are ASCII, others binary.

email inside our system (mua + mta)

Link: localhost email setup - scroll down to email portion.

For a graphical MUA, I used (in Chromium) an extension called Open Email Client. Some configuration information is provided by the developer.

various

ISP Monitoring (8:04) Jeff Geerling, 2021. Jeff (St. Louis) describes typical frustrations and getting information on power and internet usage. Shelly plugs ($15 ebay) are one answer. However there are excursions into several dashboard options.
How Prometheus Works (21:30) Techworld with Nana, 2020. Why and how Prometheus is used.
Grafana Seminar (1:02:50) TimescaleDB, 2020. Avthar Sewrathan (S. Africa). Demonstration and some of his use cases.
Grafana Seminar for DevOps (1:01:59) Edureka!, 2021. Grafana half of an Edureka seminar on DevOps with Prometheus and Grafana. Thorough description including how to create a systemd service file to run it locally.
Prometheus and Grafana (54:06) DevOpsLearnEasy, 2020. Adam provides a description of a server deployment of Prometheus and Grafana. This guy even shows the setup of a VM on AWS. He seems confused at times, but we learn.

Tuesday, July 7, 2020

rclone details

In a prior post, I'd found that using rclone to upload RESTful data (rclone uses REST, not SOAP) had become more complex -- by at least three steps -- than two foundational videos from 2017:
1. Rclone basics   (8:30) Tyler, 2017.
2. Rclone encrypted   (10:21) Tyler, 2017.
These videos are still worthwhile for concepts, but additional steps -- choices, actually -- must be navigated for both encrypted and unencrypted storage, whichever one desires. Thus, a second post. Unlike signing in and out of one's various Google and OneDrive accounts, all are accessed from a single rclone client. Rclone is written in Go (500Mb), so that immense dependency must be installed.

across devices

To install rclone on multiple devices, including one's Android phone (RCX), save one's ~/.config/rclone/rclone.conf. For each installed client, simply copy this file over and one can duplicate the features of the original installation. If one has encryption, losing this file would be very bad.

deleted configurations

  1. ~/.config/rclone/rclone.conf (client). If this file is lost, duplicate it from another device. If it's lost entirely, access must be re-established from scratch, and the encrypted files will be lost permanently.
  2. scope (google). Google requires authentication for access, the details of which Google keeps. Documentation is difficult to find, other than the OAuth info in the prior sentence. It appears that users cannot directly edit any of the 11 access scopes (files) defined, but only through a Google dialog screen. When installing rclone, 5 of the 11 scopes are available, of which I typically like "drive.file".

command usage

For simplest use, to the root directory...

$ rclone copy freedom.txt mygoogleserv:/

Not all commands work on all servers, so use...

$ rclone help

instead of...

$ rclone --help

The former will display only those commands available in the installed version of rclone. The latter shows all commands, but not every compilation has all of these.

$ rclone about mygoogleserv:
Total: 15G
Used: 10.855k
Free: 14.961G
Trashed: 0
Other: 40.264M
Of course, there's also the GUI, rclone-browser.

encryption notes

Rclone documentation notes strong encryption, especially if salt is used. Minimally, we're talking 256-bit. Of course governments can read it, but what can't they read?
  • unencrypted accounts must be established first. Encryption is an additional feature superimposed onto unencrypted accounts.
  • remember the names of uploaded encrypted files; even the names of files are encrypted on the server and the original filename is necessary for download.
  • keep the same encryption password on all devices on which rclone is installed.
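For orientation, once the plain and crypt remotes exist, ~/.config/rclone/rclone.conf ends up looking roughly like this -- the remote names and the target folder are made up, and rclone config generates and obscures the passwords itself:

[mygoogleserv]
type = drive
scope = drive.file
token = {"access_token":"..."}

[mycrypt]
type = crypt
remote = mygoogleserv:encrypted
password = *** (obscured by rclone config) ***

Uploading through the crypt remote then looks the same as a plain copy:

$ rclone copy freedom.txt mycrypt:/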

glossary

  • application data folder (Google): a hidden folder in one's Drive (not on one's PC). The folder cannot be accessed directly via a web browser, but can be accessed from authorized (eg OAuth) apps, eg rclone. The folder holds "scope" information for file permissions.
  • authorization (OAuth, JWT, OpenID): protocols for using a third-party REST app (rclone) to move files in and out of a cloud server (Google, AWS, Azure, Oracle). There's an authorization process between the two, even though you are authenticated with both.
    What is OAuth (10:56) Java Brains, 2019.
    What is JWT (10:34) Bitfumes, 2018.
  • scope (Google). the permissions granted inside Drive to RESTful data uploaded by users using, eg, rclone.
  • REST Representational State Transfer. An API style for server-to-client data transfer. Wikipedia notes this is an industry term and not a concept copyrighted by Oracle or Google. It refers to data exchanged between applications (or between databases and applications) by user-authorized third-party apps, as opposed to data entered directly by users, or to server-to-server transfers not authorized by users.

    REST API concepts and examples (8:52) WebConcepts, 2014. Conceptually sound on this HTTP API, even though dated with respect to applications. Around 7:00 covers OAuth comprehensibly.

  • SOAP Simple Object Access Protocol. This is the older API for server to client data transfer.

    SOAP v. REST API (2:34) SmartBear, 2017. Very quick comparison.


Google 15GB

Users can personally upload and save files in Google Drive through their browser, as we all know. However, Google treats rclone as a third-party app doing a RESTful transfer and uses OAuth to authorize it. Additional hidden files are created by Google and placed into one's Drive account to limit or control the process.
Within that process, there are two ways to rclone with Google Drive, slower or faster. The faster method requires Google Cloud services (with credit card) and a ClientID (personal API key). The slower way uses rclone's generic API connection.

1. Slower uploads

Faster to set up, but slower uploads. Users regularly backing up only a few MB of files can use this to avoid set-up hassles. It bypasses the Cloud Services API and uses the built-in rclone ID to upload as directed.
  1. $ rclone config
    ... and just accept all defaults. For scope access, I chose option "3", which gives control over whatever's uploaded.
  2. verify function by uploading a sample file and by looking in ~/.config/rclone/rclone.conf to see that the entry looks sane

2. Faster uploads

This method requires a lengthier set-up but, once configured, rclone transfers files more quickly than the generic method above. Users need a credit card for a Google Cloud Services account, which in turn supplies them with a ClientID or API key for rclone or other 3rd party access into Drive.
  1. get a Google email
  2. sign-up for Google Cloud services
  3. register one's project "app" (in this case it's just rclone) with the Google API development team
  4. wait for their approval -- up to 2 weeks
  5. receive a Client ID and Client Secret which allow faster uploading and downloading through one's Drive account

These two videos move very quickly; however, they show the preferred Client ID and Client Secret method that supposedly speeds the process over the built-in IDs.

Rclone with Google API (6:38) Seedit4me, 2020. The first four minutes cover creating a remote and the 5 steps in creating the Client ID and Secret.
Get Client ID and Secret (7:29) DashSpan.me, 2020. Download and watch at 40% speed.

OneDrive 2GB

This primer is probably the best for OneDrive; however, it also applies to many of the other providers.

metadata and scope

These are hidden files within one's Google Drive. This is part of the Google Drive API v3, which is what rclone uses to connect and transfer files. In particular, you will want to know about the Application Data Folder.
Google API v3 usage (5:28) EVERYDAY BE CODING, 2017.
Get Client ID and Secret (7:29) DashSpan.me, 2020. Download and watch at 40% speed.
RESTFUL resources and OAuth (55:49) Oracle Developers, 2017.

Tuesday, June 9, 2020

system - server - hosting

We want a system for learning management (LMS), and another for general usage. I like the Moodle LMS and Nextcloud. The problem is that, for years, both of these had to be done locally (VPN); you couldn't really webface them. New solutions are making it possible to do both. I've previously had webhosting, and I think that's been part of the problem. This time around I want to do a VPS. I would still put Nextcloud on a VPN, but I think Moodle can reasonably be done on a VPS at this point with TOTP. So we can host Moodle on Google, but the question is which tech stack (see below). The idea is there are 3 layers: the hosting (Google), the HTTP server (Apache), and the system (Moodle, Nextcloud).

  • VPS - Virtual Private Server. Cloud server. Google, UpCloud
  • VPN - Virtual Private Network. Home server. Storage limited only by HDD space. I am uninterested in the typical web usage of VPN's for anonymity and so on; those are mostly useless (see vid from Wolfgang's Channel below). Thinking here of the much more prudent usage of a home network as a VPN. It's possible to make it web-facing also, but this should not be done without 2FA and SSL.
  • Backup - Critical files need this. Probably anything paper that's irreplaceable, eg, DD214, grades, etc. This shouldn't need to be more than about 1-5 GB anyway, but it's critical. Chris Titus uses BackBlaze. BackBlaze however relies on Duplicity, which in turn relies upon the dreaded gvfs, one of the top 5 no-no items (pulse audio, gvfs, microsoft, oracle, adobe). Use some other target with rclone, rsync, remmina, cron.

plan

Current A-Plus costs: $5/month x 2 sites ($120/yr) + annual 2 x domain w/privacy ($30); only one site has MySQL.

  1. DNS - Google ($12 yr x 2 incl.privacy)
  2. rclone some criticals to Drive
  3. Moodle VPS on Google LXC
    • $ yay -S google-cloud-sdk 282MB
    • go to Google Cloud and provide credit card
    • follow Chris Titus' instructions in video below

    Host on Google (30:32) Chris Titus Tech, 2019. Do an inexpensive, shared kernel setup. Uses Ubuntu server and Wordpress in this case.
    Moodle 3.5 Install (22:47) A. Hasbiyatmoko, 2018. Soundless. Steps through every basic setup feature. Ubuntu 18.04 server.

  4. Nextcloud VPS on Skysilk ($60)

1. transfer DNS to Google

Chatted with the old provider and obtained the EPP codes for both domains, and began registration with the new provider. Once these are established, we'll have to change the A records, and perhaps the "@" and CNAME records, to point to current hosting. Each possible VPS provider handles their DNS in different ways. Some providers manage the entire process under the hood; at others a person must manually make any changes to their A records.
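Concretely, the records to re-point are few. At most providers the relevant zone entries end up looking something like this (the IP is a documentation placeholder, and the CNAME target is just an example):

@     A       203.0.113.10
www   CNAME   example.com.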

Rsync Backup on Linux (9:19) Chris Titus Tech, 2019. Great rundown plus excellent comments below.
New DNS Update (7:18) Without Code, 2018. Proprietary, but a transparent example of what is involved in the process.

server blend

Nextcloud is not an actual web server itself; the underlying server should be something like Apache or Nginx. Nextcloud then overlays these and serves files via the server underlying it. The logins and so forth are accomplished in Nextcloud in the same way we used to do so with, eg, Joomla or Wordpress (optimized for blogs).

Nextcloud: Setting Up Your Server (17:43) Chris Titus Tech, 2019. Uses Ubuntu as underlying server on (sponsored) Xen or Upcloud. Rule of thumb $0.10 month per GB, eg $5 for 50G.
What are Snaps for Linux (4:47) quidsup, 2018. These are the apps that are installable across distros.

2. existing storage for backup

We can use free storage such as Drive or Dropbox to back up data. The key is it should be encrypted on these data-mining, big-tech servers.

RClone encryption (10:21) Tyler, 2017. Methods to encrypt with rclone. Also a good idea to download rclone-browser, for an easy GUI.
Rsync Backup on Linux (9:19) Chris Titus Tech, 2019. Great rundown plus excellent comments below.
Using Cloud Storage (22:55) Chris Titus Tech, 2019. Easy ways to encrypt before dropping into Google Drive, etc. (sponsor:Skysilk)

choosing a VPS

One can of course select Google, but what virtualization do they typically employ? Skysilk uses LXC containers via ProxMox.

Using Cloud Storage (7:31) Wolfgang's Channel, 2019. Be sure to pick a provider that uses Xen or KVM, rather than OpenVz-based virtual machines.

tech stack

I used to use a LAMP stack, but I am trying to avoid MySQL (proprietary RDBMS) and use PostgreSQL (ORDBMS) as a minimum update (LAPP), and have looked at some other stuff (see below). I may try a PERN stack if I can get it going with Moodle.

Various Tech Stacks (48:25) RealToughCandy, 2020. Decent rundown plus large number of comments below. Narrator skews "random with passion" over "methodical presentation", but useful. PostgreSQL around 38:00.
Using Arch as Server (33:11) LearnLinuxTV, 2019. He's running on Linode (sponsor), but the basics the same anywhere. Arch is rolling, but just keep it as the OS for one app.

Friday, March 13, 2020

covid-19 digression

There's not a nation in the world where communication channels are more numerous than the US. We're a global communication pacesetter. How is it that, during a national crisis, communication is so bad here that people don't know what to believe about Covid-19 ("CV")? It seems impossible without willful meddling, and this is why people have lost trust: "Why are they f*cking with us during a life/death crisis?"

Maybe the past 35 years have already been every man for himself.

  • broadcast and internet video information sources tie viewership to profits, through ad revenues. Ads are not paragons of honesty.
  • political system requires continual fund-raising for re-election efforts. Politicians are not paragons of honesty.
  • bureaucratic offices protect agency turf or contracts and attempt to grow budgets. Bureaucratic offices are not paragons of transparent disclosure.
  • businesses, in addition to ad leverage, spend immense amounts on lobbying and electioneering. Lobbyists are not paragons of public information reliability.

But suppose we face a crisis as deadly as CV has become; don't these forces step aside and make room for the welfare of the public? Perhaps some do, in percentages. But it appears that, in the main, the clouding of public communication channels with disguised revenue or vote-seeking appeals has continued. Meanwhile, look what's happened to the economy and public trust.

Is a percentage of these declines related to the forces above? If we agree that people need regular accurate information to make good decisions about their welfare then the answer must be "yes": those who don't work for, or who lack access to, organizations with inside information, are doomed to random luck with their decisions about safety and finances. How is that American?

summary of broadcast improvements needed now during CV

  1. provide government level comms to public Does a police officer responding to a call for a domestic dispute receive information from dispatch that there's an "amazing argument somewhere on the Northside"? Imagine anyone in such a safety situation making decisions with vague, sensationalized information. During times of crises, we need accurate information. Someone in CDC has projected mortality rates, best forms of self-care, proper decision trees for hospital visit, and so on. Interagency, (non-commercially) this information is readily available, undiluted. Placing such information in public hands is likely beneficial during non-crisis times, but it's absolutely critical in a pandemic.
  2. de-incentivize commercial elements during crises Would a police officer responding to a call hear it radio-ed to him with a proviso to "stay-tuned" through ads for carpeting, a cruise, and virus software? And would these ads be inserted b/c the department made money from them, regardless of the safety risk to the officer? If we must rely on commercial broadcast journalism, then the incentive to sensationalize, to keep us listening and generate more ad revenue, must be reduced or eliminated during crises.
  3. cease exploitative announcements and reports Reports filled with, eg, throw-away superlatives about the "amazing spread" of CV are more dross for the public to waste time sifting in a crisis. 1) Vagueness leads to guesswork and panic; 2) when many are losing their income, empty adjectives used to compel viewers to watch past ad breaks are opportunism. Give information clearly and directly -- it's compelling enough without embellishments.
  4. interview survivors Show numbers and closed locales on slides or crawlers and get to interviewing. First person interviews with survivors are almost entirely not present in broadcasts. Without first-person interviews, the prospect of some unknown experience too dreadful to imagine is easy for the public to assume. Withholding what is natural -- first person interviews -- leaves viewers aware we're being managed by reporters instead of hearing from our fellow sufferers. We know it's widespread, where are all the interviews? (Edit 2020/04/03) Some edited accounts of survivors now on the news. Even YouTube will not show any unedited first hand accounts -- all are from news agencies reposted. Why?
  5. clarify the problem, do not cloud it Be honest. A simple, informative announcement should be broadcast, say, hourly, along the lines of the following with important dates, infection numbers, and a website.
    We're doing our best to assist with this pandemic. We do need some public cooperation. CV severity varies, but if we slow the spread, we can clean ventilators after use and treat additional patients, hopefully all the patients who might need them. Distancing is, so far, our only known weapon for slowing the spread, and is therefore extremely important. We understand the public is enduring an economic and social sacrifice to distance, but we must continue it longer to save lives. If we don't distance during peak phases of the outbreak, ventilator demand might exceed ventilator supply, leading to loss of life.

    For those few weeks' critical period which we will announce, we'd appreciate people staying away from all but necessary travel, and we might ask police to politely disperse groups of more than two persons.

1. communication exploitation examples (panic creation)

1A. broadcast exploitation

CV socio-economic effects are significant, so reporting is expected, sometimes interrupting regular programs. What most of us witness however is emergency flavored continuous coverage with multiple entangled panic producing elements. Each one is potent -- and when combined, panic is nearly certain. Thus viewers stay tuned-in, even through advertising breaks. Mission accomplished. $$$.
  1. The actual CV experience is minimized or excluded. First-person reports are anecdotal, and so might seem irrelevant, but they are exactly what's missing, as noted above. Instead reporters emote and gesticulate between themselves, creating panic.
  2. Related is how long does CV last? Will I be sick a week, a month? The regular omission of CV symptoms, timeline, and cure leads to an unsettled audience more prone to panic.
  3. Using "Covid 19" to obscure Stating the uninformative words "covid-19" over and over obscures what "ventilator shortage" means and how we might accurately frame the problem socially. The inventory of ventilators is simply less than the number patients who might eventually need one during a few days of their CV infection. Euphemizing leaves the public feeling discouraged or angry about transparency.
  4. "Social distancing": necessary, but exploited Social distancing importantly slows viral spread. Its use goes unexplained and the images are exploited to cause more fear (and thus, continual viewing). Social distancing should never be mentioned without the calming context of why its necessary: ventilator demand could be distributed over a few months instead of a few weeks. Ventilator availability (through re-use) saves lives, and the public should not be made to feel simply afraid of fellow citizens. Save lives by distributing ventilator use over time.

1B. political exploitation

There are also problems with some who create public policy or speak to the press. This is another contribution to the panic.
  1. CV truths are mostly excluded. We will not "defeat" CV by distancing -- CV will still exist, like any other virus. Any vaccine that prevents outbreaks takes time. But this almost doesn't matter: CV is a slightly stronger flu virus, not an outlier. CV appears to fall within normal CDC flu season expectations, with normal or near-normal (20,000-60,000) loss of life. Still, public figures continue to proclaim our mission is to "defeat" the virus by distancing, again confusing the public about the real distancing goal: ventilator availability.
  2. public safety response By deploying first responders, public leaders have again caused public concern, perhaps inflaming panic. Bringing a heavy hand seems to validate panic; it's self-reinforcing: "if they need to bring first responders and martial law, this disease must be deadly because they need to bring first responders and martial law." Everyone loves overtime and hazard pay for our heroes, but can we pay for this? Involving first responders should always be announced in concert with its purpose: preserving available ventilator supplies, which is the reason for social distancing and for enforcing it.
  3. indeterminate duration Some are wondering how long these restrictions will be in effect. That is, if an officer can ticket me in a state park, for how many more weeks does that persist? Where is an omnibus information center for these restrictions and/or closures?

2. why was panic manufactured?

As I say at the top, something stinks. We all know that's true. Certainly there is a virus spreading which needs coverage. It's flu season, after all. But why is the coverage continual, only vaguely helpful, and panic-producing? This level of response doesn't correspond to the actual threat.

During manufactured panics, it's typically interesting to determine who benefits from the events, to what extent they are the main beneficiary, where the money is going, what the contributing elements are, and so on. This is often impossible or nearly so, so we want to avoid wild speculation and settle for educated guesses (if available).
  • what else is happening in the nation that would normally garner significant press yet is marginalized or buried during the panic? For example, what significant economic or political events of the last two months would normally have received press but went nearly unreported? Were any interesting bills passed or committee reports published?
  • what are the viewership numbers (and thereby ad revenue) for press coverage during this press-induced panic?
  • what government agencies are gathering information during this quasi martial law event, and what will they share about their gathering and how it's being used? This brings to mind preparatory, exploratory, or proof-of-concept scenarios.

3. real information

3A. the experience and duration

I'm not a doctor, but the Covid-19 experience is reported to be a respiratory one with little or no GI tract involvement: no diarrhea, no loss of liquids or electrolytes. For most of us, the virus will range from unnoticeable to a cough, maybe with elevated temperature and other viral flu aches, lasting a week or two, as noted below. Some people, mostly the elderly or others with compromised respiratory systems, cannot tolerate the irritation in the lungs without hospitalization to increase oxygen transfer, and some in that group apparently cannot get enough oxygen to survive.

Medicalnewstoday.com :: description of symptoms

Secondary bacterial infections (e.g., sinus infections, pneumonia) can develop during the viral phase, outlast it, and worsen. They might require antibiotics or a ventilator, even after the viral phase has passed.

Healthline.com :: normal flu/cold (virus) duration and risks.


3B. apparent severity

People can die from it. But this is typical during any flu season. The CDC chart of flu season effects is below.

CDC Influenza effects, yearly

Every flu season results in 20,000-50,000 deaths, and the same categories of people are at risk in other years as with Covid-19: those with compromised immune systems or underlying conditions that decrease the ability to withstand an illness -- the elderly, chemotherapy patients, those with heart disease, young children. There is nothing new here. What's new is the extremely interesting response.

Monday, February 13, 2017

tcpdump in userspace

Many times we'll have inexplicable collisions -- e.g., 2000 ms pings -- on our home LAN. Is someone squatting on our LAN? Is it a configuration problem?

1. squatting

If I'm not at the LAN's router terminal to view the DHCP table...

# nmap -sn --max-rate 100 192.168.1.0/24

Slowing the rate to 100 packets per second helps catch cell phones, which otherwise may not have time to respond. The generic 192.168.1.0/24 net will obviously vary depending on setup.
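
To cross-check what nmap reports, a minimal sketch -- assuming a Linux client with the iproute2 tools, and that the wireless interface is named wlan0 (both assumptions, not values from this post) -- compares against the kernel's neighbor table:

$ ip neigh show dev wlan0
$ nmap -sn --max-rate 100 -oG - 192.168.1.0/24 | awk '/Up$/ {print $2}' | sort

Anything in either list beyond our known devices (router, phones, laptops) is worth checking against the router's DHCP table.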

2. weird collisions

If possible, switch off of channel 1 on the Wi-Fi, of course. And if our client is at the outer range of the router, we'd expect more interference and might have to move things around.

If these are not the causes, we'd want to capture some traffic and review it for suspicious activity. This is typically not trivial. To access traffic, we need to install tcpdump and reconfigure it for user-level execution, so that its capture files (PCAP) are easy to manage. Next, we attempt to constrain tcpdump's enormous PCAP captures to a manageable size and format. Finally, we can evaluate the PCAP network data. Let's do the last two first, since many readers already have tcpdump configured.

A. use and constrain PCAP

As usual, StackOverflow closes the most relevant questions as irrelevant, lol. But you can scroll down in that thread to see how it's done with bro. Wireshark and Zeek are additional options.

Try to pre-simplify by limiting tcpdump. By default, tcpdump only prints to the screen, so a person has to write its output to a file with -w (or tee the text output); let's start there...

The command gets long, but we can get a good PCAP.

time limit

Tcpdump doesn't have a simple "capture for N seconds" flag. Users typically must CTRL-C, which can corrupt the output file -- the program was natively designed to write to STDOUT. However, the "-c" flag (followed by a packet count) tells it how many packets to capture before quitting. (The -G and -W file-rotation flags can also approximate a timed run, if the installed build supports them, but -c is simpler.)

$ tcpdump [other filters] -c 10000 -w filename.$(date +%Y-%m-%d.%Z.%H.%M.%S).pcap
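
A slightly fuller sketch -- the wlan0 interface and the 192.168.1.50 "suspect" host are placeholders, not values from this post:

$ tcpdump -i wlan0 -nn -s 96 'arp or host 192.168.1.50' -c 10000 -w lan.$(date +%Y-%m-%d.%Z.%H.%M.%S).pcap

The -nn skips DNS and port-name lookups, and the small snap length (-s 96) keeps headers while trimming payloads, so the PCAP stays a manageable size.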

B. evaluate PCAP

Problematic areas include unfamiliar MAC or IP addresses (possible squatters), unexpected destinations or ports, and bursts of retransmissions that line up with the slow pings.
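
Whatever we decide counts as suspicious, a rough first pass can be made with tcpdump itself reading the capture back (-r); lan.pcap stands for whichever file we just wrote, and the awk field position is an assumption about tcpdump's default text output that may need adjusting:

$ tcpdump -nn -r lan.pcap | head -50
$ tcpdump -nn -r lan.pcap arp
$ tcpdump -nn -r lan.pcap | awk '{print $3}' | sort | uniq -c | sort -rn | head

The first line is a quick eyeball of the capture, the second shows who is announcing themselves on the LAN, and the third counts the chattiest source address/port pairs. Unfamiliar entries near the top are the ones to chase in Wireshark or Zeek.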

C. configure tcpdump for user

Users must root-up to operate tcpdump as installed:

$ tcpdump
tcpdump: wifi01: You don't have permission to capture on that device

Of course, to see the error messages more completely:

$ strace tcpdump 2>&1 | tee file.txt
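
The trace is verbose; a shortcut is to keep only the permission failures, which usually names the file or device refusing us:

$ strace tcpdump 2>&1 | grep -iE 'eacces|eperm|permission denied'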

after initial setup

Typically problems only occur after a restart and/or an update. The steps for initial setup follow this section and users can refer to those if anything needs to be reset. Otherwise...

  1. attempt a user-level simple usage.
    $ tcpdump
  2. if all goes well, use the more complex commands discussed in section A.

initial setup

Generally, know which copy of tcpdump actually executes, and add the user to any created groups. General instructions.

For the location, run $ strace tcpdump and note whether it fails in /bin, /usr/bin, or /usr/sbin. For groups, verify inside /etc/group. Then...

  1. create pcap group and add the user
    # groupadd pcap
    # usermod -a -G pcap user
  2. give the pcap group ownership of tcpdump and make it group-executable (750). Let's suppose tcpdump executes from /usr/bin.
    # chgrp pcap /usr/bin/tcpdump
    # chmod 750 /usr/bin/tcpdump
  3. modify the binary's file capabilities -- in this case tcpdump's -- using setcap. Members of the pcap group (not just root) can then put the wifi device into promiscuous mode.
    # setcap cap_net_raw,cap_net_admin=eip /usr/bin/tcpdump
  4. Verify the binary capabilities were updated
    $ getcap /usr/bin/tcpdump
    /usr/bin/tcpdump cap_net_admin,cap_net_raw=eip
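
After logging out and back in, so the new pcap group membership takes effect, a quick unprivileged smoke test might look like this (wlan0 is again an assumed interface name):

$ id -nG | grep -wo pcap
$ tcpdump -i wlan0 -nn -c 5

The first command confirms the group is active in the session; the second captures five packets and exits, no root needed.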

problems

There can be other permission problems. The initial problem is user permission to the wifi device; that is handled above. A secondary problem is user permission to execute tcpdump itself, which gives the following failure.

$ tcpdump
bash: /usr/bin/tcpdump: Permission denied

One site adds the following to ensure the binary stays root-owned but executable by a dedicated tcpdump group. Suppose we verify via strace that our tcpdump executes from /usr/bin/tcpdump. The commands would be...

# groupadd tcpdump
# usermod -a -G tcpdump user
# chown root:tcpdump /usr/bin/tcpdump
# chmod 0750 /usr/bin/tcpdump
# setcap cap_net_raw,cap_net_admin=eip /usr/bin/tcpdump

last resort

If everything is set, getcap returns the proper values, and the error still appears, we can open execution beyond the group (750) to everyone (755). I consider this a last resort because, at that point, there is essentially zero security on the wifi card. However, a person could switch to 755 only when they want to run wireshark or some such, and then restore 750 afterward.
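
If we do loosen it, the change is easy to make temporary -- run wireshark (or whatever needed the looser permission) between the two commands, then put the group-only setting back:

# chmod 755 /usr/bin/tcpdump
# chmod 750 /usr/bin/tcpdump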

Saturday, January 30, 2016

Geolocation: always evolving toward a finer grain

I was looking at geolocation data on the laptop the other afternoon, and thinking how it is part of the data-collection picture so desirable to advertisers these days and so saturated by government security programs. Both sides -- advertisers (business) and government -- seem important enough to warrant a short post on geolocation.

Advertisers can be controlled, but after 9/11, our own government transitioned into a silent and invisible 24/7 domestic data collector. How does this relate to location? Well, location privacy feels important because our location is immediate -- it's first-person and physical, not conceptual. It feels normal to occasionally want to be alone somewhere. We understand this in our personal relationships, for example. This used to be as simple as going for a walk in nature, or around the block for a smoke at midnight -- very simple actions a person takes for granted. People feel such moments are private. However, at least since 2012, non-exempt citizens can only guess how comprehensively their daily activities fall within camera range. Citizens can likewise merely guess at what is done with the images. In other words, citizens are given no clue where to file an inquiry if they do not approve of some camera or want access to its images -- we don't know who's operating them or what they're used for.

facial and license plate recognition

In addition to static cameras, note that every time you see a newer police car or parking-enforcement vehicle, an ALPR or facial recognition system is likely built in. A police vehicle is, among other things, and depending on a department's budget, a network node continually transmitting information. The transmissions have time and geolocation stamps added. For example, when a cruiser transmits license plate numbers, the combination of plate number + geotag + time is sent. This is a nearly insignificant database entry. However, the entered data is easily reassembled into patterns of travel. A lifetime collection of a person's driving and location-based facial recognition hits could easily fit on a USB stick. We'd want to hope that information of such incredible depth was being used in an entirely temporary and exculpatory manner by the agencies which gathered it. Good luck.

cell/smart

Assuming a phone with a battery and a SIM registered to its owner (not a borrowed or stolen phone), the owner's location is known to within roughly three meters. Added to this, government offices can listen in to the content of a call or read its text messages easily and in real time, at agencies as local as city police departments, with or without warrants. And this is just our friendly government and business organizations; foreign governments' interests are less well known, but can reasonably be imagined.

desktop/laptop

When we use our desktops, the public (government) sphere again sees whatever it wants; what about the private sphere? Consider your monthly ISP bill. One's home address is tied to the account; it makes no difference whether one is served a dynamic or static IP. ISP's could sell this bundle of info to advertisers in real time. Further, physical street addresses are easily interchangeable with exact GPS coordinates -- it makes little difference whether GPS coordinates or a physical street address is sold to advertisers.

Those in law enforcement, the military, and perhaps some other protected categories (judges, etc.) have some protections against commercial incursions or release of their information, depending on the situation. Citizens, however, have only whatever limits are customary, since there are very few explicit, effective privacy laws. Customary business limitations are not black-and-white restrictions on the release of data, and they can easily change, as you may note in the fine print of any privacy policy you accept. For example, lawsuits might follow if, say, a stalker purchased one's street address directly from an ISP, or if ISP's made one's mailing address easily available to advertisers. But if court wins absolved ISP's of any responsibility for selling your information to a stalker posing as an advertiser, ISP's might start selling that information tomorrow. So ISP's don't divulge the entire package to advertisers... yet. Instead, ISP's divulge some network node/hub near your home, usually within 10 or 12 blocks, probably in your zip code, but without your name attached. Try this site, for example. And again, these are simply business practices, not real privacy protections. They can be changed at any time.

misdirection

As just noted, public opinion or civil cases are probably the motivation for ISP's and major websites to provide some (grudging) small privacy protections -- for now. But even these appear to sit at the lowest possible boundary of honesty. For example, with geolocation, by asking whether the user will allow geolocation, the provider only gives the user the false impression that geolocation information hasn't already been released. We've already seen from the link above that this is not the case: let's say I'm browsing in Opera and I want to listen to a radio station in Pittsburgh. I go to the radio station's website and click some "listen now" button. Very likely I will see a window similar to this:


The impression given to the user is that the Pittsburgh station does not know my location and "needs" to learn it (for regional advertising, etc.). But we've already seen above, at iplocation.net, that the station already has a fix on my location to within a few blocks of my device. What advertiser (or MPAA/RIAA stooge) needs more information than that? So what's really going on -- we know it can't honestly be location, so what is it? My guess is acceptance of the attached privacy policy notice: I am accepting Google's, or Microsoft's (Silverlight), or the station's privacy policy regarding location information. Recall that privacy policies, once accepted, can be changed in the future without the user being notified or having the opportunity to revoke acceptance. At a later date, information about me can be added to whatever the site is selling to other businesses. In other words, once accepted, the privacy policy locks me into whatever that company does with my information downstream, and prevents me from suing them for it. This is why I think acceptance of the privacy policy is the real goal: it's much more valuable to the organization than my location, which they already have to within a couple of city blocks without asking. Follow the money.

browser

Just like other web content, geolocation query results are cached and need to be purged if you don't want them read by other applications later. The Chrome browser used to have a way to "emulate" spurious GPS coordinates (again, only against private concerns, not government ones), but even this was too much for some businesses to tolerate. It's been eliminated, probably due to advertiser or MPAA/RIAA pressure. Essentially, if you are streaming anything, you are likely to see a window such as the Opera one above.

the future

Profit pressures will likely degrade these policies until, at some future date, it seems reasonable to assume our physical address/GPS coordinates will be known in real time and possibly tied to our name. This is currently trivial for some government agencies, but I'm talking about the private sphere as well. At the point it becomes accepted for business, there will be little difference between a cell phone and a home desktop; in fact, the desktop may be less private at such a time, since a home address is also a mailing address. Accordingly, businesses which support law enforcement, and law enforcement unions, have proactively lobbied for protections for their officers. These unions have two advantages citizens who pay officer salaries don't have: 1) police unions know the true scope of privacy incursions because their officers are using the tools, and 2) they have the organization, financial resources, and legislative support to lobby for protections for their members. In reality, all citizens, or at least taxpayers -- we pay gov't agencies to surveil us -- should have protections equal to officers'. Government agencies can pierce any privacy protection with ease, so there is no national security implication in extending protections to all taxpayers.

integrating

Take all of this location and identity information above, and integrate it with credit card data, browsing habits, email and text parsing, and you've got quite a case, or advertising file, on anyone. Still want to go outside for that walk or stream that radio station from Pittsburgh?