yojimbo

Thoughts that should have a longer lifetime than my Mastodon posts ...

Elite:Dangerous speedrunning

My Type-8 made the Hutton Run in 26m40s (with a theoretical min time of 23:20) ... but now we have the Cobra Mk V, and that's a significantly faster ship. What can we achieve?

CMDR Osiliran has added the new speed data to their SCO Speed and Fuel Rates spreadsheet, and with a max speed of 7017c we can make the 6,395,831 Ls journey in 911.47 seconds, or about 15:11 (this doesn't account for acceleration/deceleration/alignment and final approach/docking, which is why the Hutton Run Leaderboard hasn't broken the 17:00 mark at time of writing).
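
Working in light-seconds makes the arithmetic easy: distance in Ls divided by speed in multiples of c gives the travel time in seconds directly. A quick check with bc :-

$ echo 'scale=2; 6395831/7017' | bc
911.47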

Fuel though ...

While at max overcharge, we'll be consuming fuel at 351.12T/hour; we need around 20 minutes of fuel, which works out to 117.04T. The Cobra has a core 4C fuel tank holding 16T, so we'll need to add some more tanks ... and luckily the Cobra has just enough space.

We have optional internal slots for one 5C tank (32T), three 4C tanks (16T each), and three 3C tanks (8T each).

This gives us an additional 32 + (3 x 16) + (3 x 8) = 104T, and when we add the core 4C 16T tank we get 120T of fuel. You could add another 2C and 1C tank for an additional 6T, but you won't need it. Alternatively, you could read to the end to find out how much fuel I had left over, and trim that off before you start; but I don't believe SCO speed is affected by ship mass, so there's probably little need for this sort of finesse, unless you really wanted a shield on this ship.

Yeah, but it's a dry heat ...

A quick test shows that an unengineered A-rated power plant does slowly build up heat under overcharge; luckily with the recent Thargoid War we're all experts in heat management now, so some Low Emissions engineering is easy. This gave me a stable 48% heat maximum, so actually no need for heatsinks at all!

Need for Speed

So, how does this ship perform? Honestly, much better than I expected. My run came in at 17:42, allowing for a little loop or two at the destination as I was coming out of overcharge. I had 27T fuel remaining after a 16Ly jump into the system from Gendalla. A reasonably low-effort 3rd place that will soon be knocked off the perch by other commanders giving this fun ship a good workout!

This is a true IT horror story for the spooky season. Enjoy.

One day many years ago, on a Friday afternoon, I was visiting a client site in order to upgrade the OS on their unix minicomputer (for those that don't know, that's about half the size of a modern fridge/freezer). I think it might have been an ICL DRS 3000, if anyone cares that much ... so early 1990s.

It was raining outside, and I was just verifying the latest backup tape, prior to the upgrade. There were rumbles of thunder in the distance.

Then, all of a sudden ... boom as a lightning strike connected somewhere nearby. The office lights went out. The server and its screen stayed on. At the bottom of the screen, interrupting the backup log, was some text I've never seen since and can't completely remember, but it was something like

ERROR: MAIN POWER SUPPLY FAILU

In the dark, I was staring at the green screen in a small amount of shock. I had no idea how such a message could be generated, let alone still be visible ...

Then I noticed the smell of smoke. Blue smoke. Something was either burning or had already burst open ... and it was not pleasant.

The server beeped.

I hadn't touched it.

It beeped again, and the screen cleared ...

It started to boot back up.

It was still smoking, perhaps it was still on fire!

In a rush, I looked around the darkened room for the power lead going to the wall socket, pushed the server aside and pulled the power lead out as quickly as possible ...

The office lights came back on – I guess someone found a fusebox or breaker switch – and the server continued to boot. It wasn't plugged into the wall, but it was still booting up ...

After a moment or two more of panic (I didn't want this thing to come back up until I'd contacted proper hardware support engineers!) I looked again at the power lead I'd pulled out ... and it didn't actually go into the server after all ... it went into the UPS unit sitting next to it! The UPS that had done its job of dealing with the loss of mains power had allowed the server to ... catch fire? Tell me it was on fire? HCF isn't a real instruction, is it?

Well, I found the correct power supply switch and finally got it switched off properly, before the OS could come up (machines used to take ages to boot back in the day) and before some enthusiastic fsck could wreak havoc on my filesystems ... and before anything else could happen to the tape drive with the latest backup tape still in it ...

But there were still wisps of smoke rising from the back of the server ...

So pushing the server around again, I exposed the back where the cables all went in and the option cards were connected – the site had a lot of terminals all directly connected to the server by serial lines, connecting to RS-232 'concentrator' cards. I can't clearly remember how to open the case of one of these machines today, but it must have been easy enough, because it wasn't long before I found the source of the blue smoke ... one of the serial cards had exploded. So no active fire risk ... but still a nasty smell.

So a little investigation and back-tracking was needed, but pretty quickly it turned out that some of these serial cables were run up to the roof and out the side of the building, strung across the carpark bundled with telephone lines, and into one of the smaller portacabin offices out the back, and then directly down to the terminals. No separation, no isolation, no grounding. They always were a little ... unreliable ... but never bad enough for anyone to spend any time or money improving them.

The lightning strike had contacted some of those cables. So at the same time that the site power went down, the cables jumped a little bit above their normal 5-12V states and ... out came the magic blue smoke. The UPS hadn't been quick enough switching over, so the server was reporting PSU errors ... and the message stayed on the screen (until the server rebooted) because it was a serial console; as long as it still had power, whatever was on the screen stayed on the screen ...

So nothing supernatural ...

For the aftermath ... I spent a lot of time on the phone with engineering, I removed the broken serial card, and we restarted the server. Which came back up again just fine; fsck had its way but there were no detectable problems there. The backup tape was fine, all the other hardware was fine ... (a couple of terminals in the external office were toasted, but not dramatically so). We took the machine slowly through its paces over the weekend (which is when we should have been upgrading it), and got everything back to normal – obviously with only half the number of terminals. The customer replaced the serial cabling with more of the same, and when the new serial card arrived on Monday we were able to get them all back up again.

But I'll not forget that server, telling me its PSU had failed and restarting while the power was off ... all while it was still on fire.

The new Type-8, with its integrated SCO frameshift drive, is capable of getting to Hutton Orbital within less than 30 minutes of arriving at Alpha Centauri's jump point.

However, to do this takes a little thought and some gentle engineering.

And materials. Quite a few materials; for heatsink reloads, and the engineering itself.

General configuration

Mostly, you need your ship fitted with a lot of extra fuel tanks. At maximum speed, overcharged supercruise consumes a staggering 700T per hour, compared to less than 2T/hour in normal supercruise.

Then you probably need lots of heatsinks. While running overcharged, the ship's temperature builds steadily until you start to suffer internal heat damage; firing a heatsink drops the ship temperature down to 0%, and then it builds back up again. The number of heatsinks you need is based on how long it takes you to heat back up, and you need to be prepared to synthesise more.

Heatsink Hot-foot Shuffle

In my original Type-8 build (not quite stock), the ship was rapidly overheating during overcharge. I was getting only 30 seconds between popping one heatsink and the next, measured from firing one as heat passed 105% to the heat climbing back up there again.

The ship carried 4 heatsink launchers, each holding 3 units. Synthesis of Basic Heatsinks takes 20 seconds. It would be difficult to complete this before the ship started taking heat damage again, and I'd have to be doing a lot of synthesis, without any mistakes or delays. That didn't sound reasonable.

I upgraded to a 5A Power Plant and used Premium heatsinks (i.e. I synthesised the whole load myself rather than using the standard rounds bought via resuppliers), which gave me a 50 second overheating cycle and a 30 second synthesis time. Combining this with four of the pre-engineered high capacity Sirius Heatsink Launchers, each holding 5 charges, I now had enough to last for 16 minutes, so 4 complete recharges would give enough coverage for the full flight.

With this setup, I took my first trip to Hutton Orbital, getting there in 27 minutes 30 seconds (recorded with the official Hutton Helper Lite running in EDMC, and verified by the pilot's manual chronometer).

Engineering the Power Plant, G5 Low Emissions FTW!

Lakon ships have long been infamous for overheating, though; the Type-6 in particular has a bad reputation for overheating as it jumps into hyperspace anywhere near a star. The only approach has been to get an A-rated Power Plant and just deal with the difference through heatsinks.

But aftermarket engineering is the route to success here. Hera Tani in Kuwemaki can deliver a reliable Low Emissions modification, and at Grade 5 it really shines (there are a couple of other suppliers who can have a go, but hers is the only workshop in the Bubble delivering G5).

I was now getting 90 seconds between overheat events, which extends protection to 5 x 4 x 90 seconds = 30 minutes, meaning that I don't need to do any synthesis during the flight at all! Much, much less stress on the pilot.

The Low Emissions engineering could be stretched further with the Thermal Spread experimental effect, but I didn't apply that. Mostly because I didn't fancy the 250+Ly trip in a ship making just over 20Ly per jump, although refuelling wouldn't have been a problem ...

But that 90-second overheat cycle isn't the only win for this modification, oh no ... if I'm not trying to break a speed record by running at maximum throttle (maxing out just over 4,580c) I can throttle back and cruise at a mere 3,500c, consuming fuel at only (only!) 500T/hour, and be actively cooling the whole time! So much so that I can ramp up to max speed, and when the overheat warnings start to blare, just throttle down until things stabilise. So there's no need for a full fit-out of heatsinks on normal journeys, freeing up space for other modules.

Hands on the Stick

With this improved heat build in place, I can now fly all the way there without having to spend time in the synthesis panels, which might not sound like much of a win. However, the SCO FSD doesn't deliver the smoothest ride – not only does the speed fluctuate by about 10c, but directional control isn't perfect either, and the ship's heading bounces around all the time. Keeping the ship's nose pointed at the target requires constant feedback, and over such a distance that attention is sure to pay off in terms of total distance and therefore time.

One additional small adjustment that helps is to choose your arrival point at Alpha Centauri so that Hutton Orbital ends up in front of you somewhere, letting you boost immediately in the right direction without lots of slow maneuvering. Entering the system from Sol isn't perfect, but it's the best option of the nearest few systems.

Second run ...

With a better starting position and more time to keep the ship on target, my second run in a Type-8 clocked in at 26 minutes 40 seconds.

I used 275T of fuel and 15 Premium heatsinks, and dropped ship integrity to 36%. I spent just under 250,000 CR to repair the ship afterwards.

This doesn't represent a massive improvement on the previous time, sadly. It was less stressful though.

Can we go faster?

Hutton Orbital is 6,395,831 Ls away from the jump point, which is 1.917 x 10^12 km (call it 'd'). Speed of light in vacuum (known as 'c') is 299792 km/sec, and the max speed here is 4580c. d/4580c = 1396.2 seconds, which is basically 23min 20sec – if we ignore acceleration/deceleration times. So there isn't much room for improvement to be had here, except a quicker drop onto the landing pad.

But it turns out that unlike the normal FSD speed being fixed for all ship types at a maximum of 2001c, the Achilles SCO drive performs differently for each different ship model and FSD grade.

This SCO performance has been analysed in detail by CMDR Osiliran, who currently sits in pole position for the Hutton Run, somewhat appropriately. https://forums.frontier.co.uk/threads/measured-sco-fuel-hour-and-speed-rates.624525/ will help you find out more and access the raw data.

The Type-8 is a safe ship to experiment in, but once you get the feel for this speed run, perhaps you need to try something different? Let's keep our eyes open for the ships arriving over the next few months ...

Or, How Much Does A Light-Year Cost?

August 3310 (2024), CMDR Yojimbosan, Radio Sidewinder Crew

If you want the maximum frameshift drive jump distance in Elite:Dangerous, your choices have been pretty stable for a long time – the Anaconda holds the record, and the Asp Explorer comes a really, really close second; it's not cheap, but it costs much less than its big cousin.

Over the years, the technology has improved and more ships have become available. Engineers have squeezed maximum performance out of standard units, Guardian technology has been integrated with ours, and now Thargoid technology is available via the Achilles Supercruise Overcharge FSDs. The Diamondback Explorer and Krait Phantom are new and very strong challengers for the crown of long range jumping ships as well.

So, just as the Lakon Spaceways Type-8 is being released (and before there are changes to Engineering & materials gathering), I thought I'd do some quick theory-crafting using Coriolis.IO and see what the situation is ... I described a couple of real ships I had to Coriolis, and the final jump distances agreed to within a 0.05ly discrepancy – either I forgot something, or their models are a little off. But that's more than enough accuracy for our purposes here ... I checked 14 different ships, so there might be a surprise waiting for us in the others, but I doubt it.

The Long Range Build

As far as possible I'm keeping the same reference build choices for all of the ships. This has no hardpoints or utility mounts, and only a Fuel Scoop and a Guardian FSD Booster in the optional slots (i.e. no shields, repair technology or surface vehicles). For the core internals we will target the lowest mass, using engineering/experimental effects to get there. I'm not reducing the size of the fuel tanks, however, even though that makes a ~2ly difference to the Anaconda. It's also possible to make some more shavings on the Anaconda by switching in pre-engineered 'drive distributor' 4D thrusters, but I couldn't get that working on Coriolis.

  • Hull : Lightweight Alloy (no changes)
  • Power Plant: the smallest that works – usually the 2A Guardian Hybrid model, but if not then a traditional model, with Grade 1 Overcharge + Stripped Down
  • Thrusters: the smallest again, Grade 1 Clean Drive, Stripped Down.
  • FSD: The largest A-rated SCO model, with Grade 5 Increased Range + Mass Manager
  • Life Support & Sensors: D-Rated, Grade 5 Lightweight
  • Power Distributor: The Guardian models, or Grade 1 System Focus + Stripped Down
  • Fuel Tank: The default, often a 5C.

Results

The final results are not surprising in terms of the ordering of ships; but perhaps the actual ranges themselves are.

Unladen Jump Range top 5

On sheer jump range, there are no real surprises :-

  • Anaconda – 83.85 ly
  • Diamondback Explorer – 75.89 ly
  • Krait Phantom – 73.98 ly
  • Asp Explorer – 73.10 ly
  • Orca – 70.77 ly

Oddly, the Python Mk II makes only 53.01 ly, despite integration with the Achilles SCO FSD for supercruise – the older Python can make 60.09.

Cost per Light Year

But when we look at the total cost of these ships, we start to see some other things stand out. If you're not the sort of commander that's swimming in credits you'll have to look at the price ticket carefully ... how many lightyears does a million credits buy you?

  • Diamondback Explorer – 75.89 ly – 14 million credits – 5.42 ly/mil
  • Asp Explorer – 73.10 ly – 48 million credits – 1.52 ly/mil
  • Krait Phantom – 73.98 ly – 78 million credits – 0.96 ly/mil
  • Orca – 70.77 Ly – 90 million credits – 0.79 ly/mil
  • Anaconda – 83.85 ly – 262 million credits – 0.32 ly/mil

The Diamondback Explorer is so small that it can only use the 4H Guardian FSD Booster, and all the other ships on that list are using the 5H. So we get an excellent price per lightyear result. But there's another surprise hidden further down the figures ... the Sidewinder! This little-ship-who-could, the starter ship that every new pilot gets, is just so cheap that you can upgrade it to a startling jump range of 47.25ly for only 1.3 million credits! The next best super-jumper surprise is the Hauler: spending 3 million credits gets you 63.80 ly performance.

Conclusion

  • The Anaconda is still the best deep-space longest jumping ship, and with such an immense hull you can fill up with useful equipment and still outpace the competition. But you pay a premium for equipment on such a large vessel. 83.85 lightyears for 262 million credits.

  • The Diamondback Explorer is almost 20 times cheaper than the Anaconda, but has 90% of the range capability. It's far less flexible in terms of mission capability though. 75.89 lightyears for 14 million credits.

  • The Sidewinder wins the value prize, with a stunning range capability on a shoestring budget. 47.25 lightyears for 1.3 million credits. You should listen to the radio station too!

I've always liked being able to use the text console as an alternative to starting a full graphical environment; but by default on Linux the console's capabilities are pretty limited, especially in terms of available fonts.

But, if we replace the default terminal with FbTerm (https://code.google.com/archive/p/fbterm/), we can now use all of the (monospaced) fonts we're used to seeing in graphical environments. And at a significantly higher resolution, giving us more variation over sizes.

Debian 12's defaults

The console is initialised during boot, and we can set the font using an interactive tool with dpkg-reconfigure console-setup, or by editing /etc/default/console-setup. Then run setupcon to make the changes active.

There are only a small number of fonts (Fixed, Terminus, VGA) by default, and each comes only in a few sizes. See /usr/share/consolefonts/ for the original font files.
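
For reference, /etc/default/console-setup is a small shell-style settings file; a typical selection looks something like this (values here are just an example) :-

ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="UTF-8"
CODESET="guess"
FONTFACE="Terminus"
FONTSIZE="8x16"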

FbTerm

Instead of the default console environment, we can install and run fbterm. We can add a couple of nice fonts at the same time too ... which of course will be available for use in your graphical environment as well.

apt install fbterm fonts-firacode fonts-hack

The console's login: prompt is kicked off by the systemd getty service. See /etc/systemd/system/getty.target.wants/getty@tty1.service for the ExecStart line, which reads :

ExecStart=-/sbin/agetty -o '-p -- \\u' --noclear - $TERM

This line means that agetty will prompt for a username, and then run /bin/login, passing the collected username along as a parameter (the \\u part of the commandline).

The change we need

Because I can't see a way to get login to start fbterm for us, we will instead ask agetty to run fbterm before running login; and that means we need to hand the job of asking for the username over to login, instead of doing it within agetty.

We achieve this by telling agetty that the login program executable is 'fbterm', and then using the login options to tell fbterm to execute the original login.

This will mean that the “Security Notes” section of the fbterm man page doesn't apply to us – we will be running fbterm as root, rather than as a user. There is no need to add named capabilities to the executable.

Experimenting with getty

If you want to try a few different possibilities with the getty configuration, you'll find yourself going through these commands a few times :-

  • editor /etc/systemd/system/getty.target.wants/getty@tty1.service
  • systemctl daemon-reload
  • systemctl restart getty@tty1

This is the most basic command that works to activate fbterm and invoke login correctly :-

ExecStart=-/sbin/agetty --skip-login --login-program /usr/bin/fbterm --login-options /bin/login - -

Setting FbTerm options

Because the systemd service is running as root in order to be able to use 'login', when FbTerm starts it will only look for configuration from the command arguments, or in /root/.fbtermrc. This isn't ideal, but we can work with it.

Select the specific font ordering and size that you want in the .fbtermrc file :-

font-names=Fira Code, Hack, mono
font-size=13

So that we can confirm FbTerm's behaviour, we add the --verbose option to the login-options (which now have to be 'quoted'); this will give us the active font list and other settings on the console, above the login prompt. We're also including -- as the explicit marker between the options for fbterm itself, and the command/arguments for it to run.

ExecStart=-/sbin/agetty --skip-login --login-program /usr/bin/fbterm --login-options '--verbose -- /bin/login' - -

Verbose only reports font-size as 'width' and 'height'. At least we get a value for the terminal size in characters. On my system in mode 1920x1080-32bpp, using Fira Code at size 13, we end up with width 8, height 16 and 240x67 characters. At size 14, it's 9x17 and 213x63; at 16 we get 10x20 and 192x54.

Because 'login' times out if you don't interact with it, and 'agetty' automatically restarts the command after the timeout, any changes you save to the '.fbtermrc' file will be picked up on your console a few seconds later without any other interaction needed.

Setting $TERM for 256 colours

FbTerm supports 256 colours, and ships a relevant termcap entry for 'fbterm', but by default it selects a TERM value of 'linux' that does not let other programs know these features are available.

We should be able to set the correct TERM value in the systemd service file with the config line Environment=TERM=fbterm; or on agetty's command line (the terminal type is the final argument, currently just '-' in the examples above). Unfortunately, while these work just fine in setting the environment for fbterm itself (as confirmed by looking in /proc/<pid>/environ), the value is not passed onwards to the login program, and therefore does not end up in our eventual shell. This seems to be a limitation of fbterm itself; TERM will default to 'linux'.
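
For the record, the two variants that set TERM for fbterm itself (but, as noted, not for the shell behind login) look like this; the second just replaces the final terminal-type argument of the basic ExecStart above :-

Environment=TERM=fbterm

ExecStart=-/sbin/agetty --skip-login --login-program /usr/bin/fbterm --login-options /bin/login - fbterm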

The way to fix this is to ask fbterm to invoke a shell capable of explicitly setting an environment variable, and then use that to start login. The hard part here is that the systemd.service ExecStart lines provide an environment that feels like it is capable of many things, where in practice there are some strong limitations that don't appear to be clearly documented.

However, we can get fbterm to run a shell, and ask the shell to set a value for TERM before running login, but only if we limit ourselves to a single variable with no spaces in the value, because any additional quoting seems to reliably break the command line parsing ... we cannot even use or add a space around the ; character.

ExecStart=-/sbin/agetty --skip-login --login-program /usr/bin/fbterm --login-options '--verbose -- /usr/bin/sh -c TERM=fbterm;/bin/login' - -

Alternative approaches we did not need, or that did not work

According to the man page for 'login', we can pass ENV=VAR parameters at the end of the command arguments, but this seems to be silently ignored.

/etc/ttytype is a documented mechanism to set TERM, but by default Debian does not enable this. In any case, setting the commented TTYTYPE_FILE in /etc/login.defs and populating /etc/ttytype doesn't have the desired outcome – I'm not sure that login(1) is even consulting ttytype at all; I didn't see any access of it via strace. The man page for login.defs notes that much of its functionality is provided by PAM instead, but I haven't seen any way to get PAM to set an environment variable conditionally.

If we follow the advice in https://superuser.com/questions/438950/how-do-i-make-ubuntu-start-fbterm-in-the-tty-on-startup we will create a new command (preferably in /usr/local/bin rather than /usr/sbin!) that manually sets TERM using a shell as an intermediate to calling login. This feels a little wasteful; but if we want to do more than setting a single environment variable with no spaces in it, then that approach is probably justified. However ... this is the approach that we eventually have to use for the more complex use-cases.

.profile is a thing

We could also edit our own .profile to fix the TERM value problem – after all, this is probably going to end up being used on a single-person computer anyway, so fixing the problem in the user's session rather than system-wide is a valid choice.

Detecting that we're running on the console under fbterm turns out to be less straightforward than you might expect. Because of the architecture of fbterm, our actual tty is a pty, just the same as it is for terminal sessions within a graphical desktop, so I can't make a condition based on detecting the use of /dev/tty*.

I can look up my process tree, and see that my current shell is parented by fbterm and login, so some approach based on the output of ps or by looking in /proc/<pid>/{stat,status} might help. It seems a bit overly complex though. If 'pstree' is installed (from the 'psmisc' package) we can use the '-s' option to get a single-line output with all the parent process names in it, pstree -s $$ will present systemd---fbterm---login---bash---pstree.

Another approach would be to note that where FbTerm sets TERM=linux, the graphical terminals often have some way to set their own TERM values that are different. We tend to get defaults related to 'xterm' when running under X11. We could also look for the presence of the DISPLAY environment variable, which isn't set for console logins; but we might end up mistaking our session for something like an incoming ssh connection.

So I could pop a simple conditional into .profile, and change any instance of 'TERM=linux' into 'TERM=fbterm', leaving other values alone. In bash, that's a concise TERM=${TERM/linux/fbterm} using parameter expansion. If you use a different shell you'll have to make sure your approach is appropriate, but bash is the default Debian 12 shell so I'm comfortable with that.
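
Putting the DISPLAY check and the TERM test together, a minimal .profile sketch (assuming bash, and that DISPLAY is unset for console logins) could be :-

# ~/.profile - only remap TERM when this looks like a raw console login
if [ -z "$DISPLAY" ] && [ "$TERM" = "linux" ]; then
    export TERM=fbterm
fi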

Background Image

One of the advertised features of FbTerm is the ability to set a background image and layer the text console over the top of it. To achieve this we are going to have to reach outside the Debian packaged files, as I haven't found anything with the necessary features yet.

The FbTerm documentation recommends a program called 'fbv' available from http://s-tech.elsat.net.pl/fbv/, but although archive.org claims to have a copy from only a few days ago, I can't get to it now. There are a few forked copies on places like GitHub, and there is a copy maintained by Arch Linux.

However, a simpler program written in Python is 'convertfb', from https://github.com/zqb-all/convertfb, and it has a minimal dependency on Python3's Pillow library, available in Debian as 'python3-pil'.

  • apt install python3-pil
  • git clone https://github.com/zqb-all/convertfb
  • cd convertfb
  • python3 convertfb --help

If asking for '--help' results in a TabError from python, then https://github.com/zqb-all/convertfb/issues/4 has probably not been fixed yet. Remove the unwanted space character from the front of line 57 with sed -ie '57s/^ //' convertfb.

Pick a source image to try with your console. 'convertfb' can truncate an image's width and height to your specifications, but you may prefer to use other tools to get the source image to the right size first. You'll need to know the size of your console, which is provided by the '--verbose' output from fbterm when it starts, as '[screen] ... mode: width x height – bitdepth'.
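
If you have ImageMagick available, one way (of many) to get a source image to exactly the console resolution first is something like this (filenames are placeholders; 1920x1080 matches the mode mentioned earlier) :-

convert wallpaper.jpg -resize 1920x1080^ -gravity center -extent 1920x1080 sourceimage.png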

'convertfb' will take a very wide range of input image types thanks to the Python Imaging Library. It will map each pixel into separate Alpha, Red, Green and Blue values, and then output them in a simple bitmap, in the format you select.

The right format for the framebuffer is probably 'rgba' but some hardware might be different. If you want to check on your own hardware, install the 'fbset' package, and run fbset to see what the current settings are.

All we have to do now is to decide where to store the image, and for this I've chosen /etc/fbterm.background.fb.

python3 ./convertfb -i sourceimage -o /etc/fbterm.background.fb -f rgba

In the 'getty@tty1.service' file, add the following two lines to the '[Service]' definition; just above our new ExecStart is probably sensible :-

ExecStartPre=-/usr/bin/cp /etc/fbterm.background.fb /dev/fb0
Environment=FBTERM_BACKGROUND_IMAGE=1

I haven't found the setting of the background this way to be particularly reliable. Sometimes the image is present, sometimes just a white background, and sometimes a default terminal banner has been caught. I suspect the problem is down to timing between the ExecStartPre command and the ExecStart one, allowing other processes to access the framebuffer and change the state.

Screen rotation

FbTerm can rotate its display, so you can use your console in portrait mode as well as in the default landscape.

To do this, set the option -r, --screen-rotate= to 0,1,2 or 3.

  • 0 = default landscape
  • 1 = 90° CW (i.e. top now on the right, equivalent to xrandr ... --rotate right)
  • 2 = 180° (upside-down)
  • 3 = 90° CCW (i.e. top now on the left)
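
If you always want the rotated console, the same long option name should also work as a key in the .fbtermrc file (I believe the generated default config includes it) :-

screen-rotate=1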

Mouse integration with gpm

The 'General Purpose Mouse interface' packaged in Debian as 'gpm' works just fine with FbTerm, no additional configuration is needed; just install the package. Left-click-drag or double-left-click to select text, right-click to paste it. There are probably more features but those are the essential ones!

Problems

We have a few problems with this approach, or places where we'd want to have improvements made.

systemd

First of all, the 'systemd.service' execution environment doesn't seem to be reliable, and it certainly isn't flexible enough to perform all the tasks needed for running FbTerm correctly.

  • Unreliable – as discussed above, ExecStartPre and ExecStart have some timing issues, and the end result is that we cannot reliably set the framebuffer state before fbterm starts, using these command lines.
  • Inflexible – command lines are not interpreted by a shell (which is probably for good reasons); quoting is available but doesn't seem to nest very well, and there is no way to exec over executables that were only present for glue purposes (usually shells). We cannot combine the functions of ExecStartPre and ExecStart for fbterm in one single ExecStart line.
  • Unclear – the documentation available in the man pages doesn't have advanced examples, so it is hard to see where the limits are, and what the recommended workarounds would be.

FbTerm limitations

FbTerm itself isn't well-integrated with a modern Linux, and has implicit configuration choices.

  • No system-wide configuration file – config options come only from the command line, and from $HOME/.fbtermrc – and this file is forcibly created if absent.
  • TERM is not preserved – fbterm sets the TERM value to 'linux' and provides no way to change this. It will throw away any TERM value presented to it and refuse to propagate it to child processes. At least one of the forked versions of fbterm addresses this.
  • Background image is implicit – in order to have a background image, fbterm will only use “the current framebuffer contents”, and won't read a specific file.
  • Font colour can be set only from the first 7 standard colours, and not the full range.

virtual consoles conflict

There is still a conflict with the pre-existing virtual consoles – although FbTerm allows the use of multiple terminal sessions within itself, if we use the Alt-F{1,2,3...} or Alt-{Right,Left} keys, the original system configuration kicks in and getty spawns fbterm on the next /dev/tty. This causes a problem with putting the background image onto /dev/fb0, which may overwrite another terminal's view.

You could try very hard to not accidentally invoke these ttys ... or you could disable them, by editing /etc/systemd/logind.conf and changing two settings in the '[Login]' section :-

NAutoVTs=1
ReserveVT=1

You can then activate this setting with systemctl restart systemd-logind but that might not clear up any existing terminals; for that a reboot is simplest but loginctl is the tool to manually fix things.

There can be only one /dev/fb0

fbterm holds /dev/fb0 open while running; framebuffer-aware browsers like links2, w3m-img and so on also want to have access to /dev/fb0 while running, which they cannot get (actually, they can't get it when they're running in a pty rather than on a tty – so they won't work inside tmux panes either). So no inline-graphics on an fbterm console, I guess.

/dev/console

By default, /dev/console will be hooked up to the currently active tty, and therefore the kernel printk messages will occasionally show up overwriting the state of your display; FbTerm seems to freeze output when this happens, but you can regain control quickly by just switching the display to a different tty and back again (e.g. with Alt-Right, Alt-Left).

We probably should divert /dev/console elsewhere to avoid this problem; but I'm not sure how this is done without building a new kernel. That seems like an overly-complex change.

Font selection

Fira Code is a great font, but we don't seem to have any way to ask for the optional stylistic sets to be switched on. We may need to build our own copy of the font with these options permanently enabled.

Conclusion: Use the shell, Luke

I've worked hard to try to get 'systemd' to work with FbTerm here, because that's apparently the “right” way to run a Linux these days. But we can still use some good old-fashioned unix skills to get an easier-to-manage and arguably “better” result :-

  • Use 'getty@tty1.service' to launch a shell-script for this task
  • Use the shell to exec over intermediate commands

So, we don't need any ExecStartPre or Environment settings in the service file, just a simple ExecStart line

ExecStart=/sbin/agetty --skip-login --login-program /usr/local/bin/fbterm-login - $TERM

The script '/usr/local/bin/fbterm-login' does everything we need, all in one place and in one well-known environment :-

#!/usr/bin/sh
export TERM=fbterm
export FBTERM_BACKGROUND_IMAGE="/etc/fbterm.background.fb"

clear
cp $FBTERM_BACKGROUND_IMAGE /dev/fb0
exec /usr/bin/fbterm --verbose --\
    /usr/bin/sh -c \
    "TERM=$TERM FBTERM_CONSOLE=true exec login -p"

When we get our shell, we have a simple process tree above us, and an environment variable (FBTERM_CONSOLE) that unambiguously identifies how we got here. Intermediate shells that were created on the way to the final executable have been exec'd over. The provision of the background image is much more reliable than before – i.e. it actually works.

$ pstree -s $$
systemd---fbterm---login---bash---pstree

Conclusion

Is FbTerm worth having?

It's a relatively small change to the system configuration, and provides a much more visually interesting environment for the system console, at the cost of changing/disabling the keystrokes usually used to invoke new terminals.

Are you really going to spend a lot of time running programs in the console? Yes, tmux exists and so does byobu, and they can replicate much of the 'windowed' world, but you won't get graphics inside them anyway, and you won't get graphics under FbTerm at all.

I do like text UI programs; but I can run them under a tiling window manager under X and get a better user experience. I can have different windows running with different text sizes, for example. You can run X under framebuffer if you want to ...

If your system resources are so tight that you can't run X, and you don't need graphics, then setting up FbTerm is probably a good thing. Otherwise ... why change the defaults for a text console that you won't be using?

I'm currently using Hosted Elasticsearch for work, and want to grab data via a custom API so we can use it in reporting against events.

Of course, the remote API isn't structured in any way that makes this easy to achieve, so a little pre-processing is needed.

My POC approach is to use bash, curl and jq to restructure the data, but that's not a very robust production solution. It's also difficult to find a reasonable place to run ad-hoc scripts from, when in theory we don't have any infrastructure of our own for this piece of work.
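
The POC boiled down to something like this (a sketch only; the URL, header and field names here are placeholders rather than the real API) :-

# rough shape of the bash/curl/jq POC; endpoint and field names are placeholders
curl -s -H "x-api-key: $API_KEY" -H "Accept: application/json" \
     "https://api.example.com/applications" \
  | jq '[.results[] | {id, name: .displayName}]'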

So I've been recommended the “Custom API using CEL” as the Elastic way to do this. https://www.elastic.co/docs/current/integrations/cel

CEL is new to me, and it's been difficult to find any examples that match the environment and sort of work that I want to do. Running CEL scripts through the Elastic web UI is challenging, as the program input box doesn't have any understanding of the script syntax, and results are decoupled from that screen as well.

So, I was hunting to find a way to play with the language in a meaningful way, and I think I've got there now ...

Using mito

We start with Elastic's actual implementation of the engine, https://github.com/elastic/mito. I've got Go installed on my machine, even though I'd rarely use it ...

$ git clone https://github.com/elastic/mito.git
$ cd mito
$ cd cmd/mito
$ go build

This should give you an executable mito in your copy of the repo; I copied mine out to ~/bin, which is on my PATH.

Then I switched to a tmp directory for testing, and tried a very simple CEL program, a Hello World :-

hello.cel:

"Hello World"

Executing this:

$ mito hello.cel
"Hello World"

So that works, and you have a functioning test environment!

CEL is an odd-feeling language for me. It seems like the whole program is “one single statement”, and semi-templated output is mixed in with function calls. Some functions chain using dot-notation, and others have to be invoked with arguments. I haven't worked out why this is the case, or how to read the docs to understand which to use, so I'm still doing a lot of trial and error ...

The 'state'

CEL (and therefore mito) is intended to take some initial state, mutate it in a non-Turing-complete environment, and output the result. Although it can access external resources (file and network), the basic flow is “take an initial state, run a program over it, output the result” and that's all. Elastic's implementation adds a data-access cursor, but we can ignore that.

From the command-line, we can pop a JSON Object into a file, and introduce that to our program implicitly. If you invoke mito with -data <filename> the JSON object in that file will be available as 'state' in your program, in just the same way that the Elastic Web UI data fields are available as 'state'. An Elastic example is the API's resource URL field, and other examples would include authentication details.

Specific example – JumpCloud

For this article, I'm going to be asking the API for JumpCloud (an Identity provider) to provide me with a list of 'Applications' (these are per-service SSO setups), and then provide me with the list of Users that are allowed to access them. That's one API call to get the list of Applications, and then another call per Application to get the Users for each. I want the final output document to list all applications and each of their users, all in one go – so that's multiple API calls.

My state.json file is very simple :-

{ "url": "https://console.jumpcloud.com/api" }

My CEL/mito program on the other hand is not! It's basically viewed as a one-line command, but in that one line I can use a map function to do more work per-result ...

We start with an HTTP GET to the API, where I have to add some Headers to the request (specifically a hard-coded API key; there are ways to obscure this key but I'm not using them here).

request("GET", state.url+"/applications") will give us the correct URL for this first request, and then I add the headers using .with(). The argument to 'with' is a JSON object, but the header values have to be arrays – { "Header": { "Accept": ["application/json"], "x-api-key": ["hardcoded"] } }

(I could and probably should get the api key from something like “state.apikey”, but it isn't clear to me how to do this in the Elastic web UI right now)

Once I have the request object created, I call .do_request() to make the actual HTTP call; this returns JSON with the response, and I'm interested only in the Body of the response (I could/should be checking for HTTP 200 though): request(...).with(...).do_request().Body

Then, for “reasons”, I need to cast the whole Body into bytes() (I could use string() if I just wanted to see the response in a human-readable form). These bytes(...) are then passed into .decode_json(): bytes(request(...).with(...).do_request().Body).decode_json()

So this takes our API call response, and parses it as a JSON object (which is good, because that's what the API returns). I see that our response has an array called results so I'll take that and run map over it. For each of the result objects (which represent each of the “Applications”), I want to grab the 'id' and 'displayName' fields, renaming the latter to just 'name' ...

.decode_json().results.map(app, { "id": app.id, "name": app.displayName } )

That's great so far ... the next step is to add a “users” key to that, and in there put the results of another API call, one to the '/application/{id}/users' endpoint ...

This will be, similar to the original call, a request(...).with(...), calling do_request().Body to get just the part of the response we need, and then casting the whole thing as bytes(...).decode_json() ... then I invoke map() again, and select just the key/values that I want ...

The whole mess in one go

It really feels like CEL/mito just doesn't care about whitespace, so I can lay my program out any way I want to get readability ... so here's the whole thing. I hope you can follow it from the explanation above!

bytes(  request("GET", state.url+"/applications")
        .with( { "Header": { "Accept": ["application/json"] , "x-api-key": ["HARDCODED"] } } )
        .do_request().Body).decode_json().results.map(app,
                { "id": app.id
                , "name": app.displayName
                , "label": app.displayLabel
                , "users": bytes(       request("GET", state.url+"/v2/applications/"+app.id+"/users")
                                        .with( { "Header":{ "Accept": ["application/json"], "x-api-key": ["HARDCODED"] } })
                                        .do_request().Body).decode_json()
                                        .map(user, user.id)
} )

For performance, that's one API call per-application, plus one. From my desktop, on a JumpCloud org with 29 applications, that takes 13 seconds elapsed.

The (redacted) output looks like this :-

[
  {
    "id": "c160c315088d44ad80ce8464419119c5",
    "label": "platyfish-minionism-obsignatory",
    "name": "idiobiology-reoxidize-dooja",
    "users": [
      "d0e94fd60a884c17ac5ea904a8f71412",
      "27eb539596af466b8e1ff3d183d4b0b7"
    ]
  },
  {
    "id": "65a27cbb50bd473b85e20b700a834225",
    "label": "fibroneuroma-resalvage-pronomination",
    "name": "pseudobasidium-shinglewise-unmortal",
    "users": [
      "ced6d82df7a04ae7ac7768756471a950",
      "3dfd0a2adf2747cfb618f273f9a1410c"
    ]
  }
]

2024-03-13 (Previous article: “First experiences with Konilo”)

This time I'm starting with the Nightly Snapshot tarball from (http://konilo.org/), because it's been quite a while since 2023.10 and I'm expecting a number of rough edges have been sorted out.

Less is Better

Unpacking the tarball results in a significantly cleaner and smaller set of files, and a QUICK-START.txt that points you directly to starting the system easily.

The Konilo Welcome page now only lists three commands, and some of the TUI is using colour-coding, which looks quite effective.

How is ilo.blocks built?

The snapshot doesn't contain much, besides a working ilo with several alternative VMs.

The source available via fossil has a load more, but still the blocks file comes basically “from above”. I'm getting the impression that this is because Konilo is self-hosting, and the author is dogfooding this environment; there is no “source” for the blocks, only the “current state” chosen for release.

This seems to be very consistent with the ambition of Konilo; but I'm not entirely sure it's the best way to encourage contribution from outside.

How to determine the source of a word?

This might be a generic question about Retro Forth itself (I'm not sure there's a handy way to decompile words), but seeing as the blocks store the text source for many of the words in use, finding out how to at least see the definition is good.

The catalogue viewer is handy – although it doesn't actually seem to tell you which block is currently being displayed, so you have to keep a good track of things yourself, or constantly re-assert a known state.

You can look through the list of blocks, and hope that the title texts displayed help you decide where to look next :-

a | Welcome to Konilo                                                | 0
b | (startup) (set_blocks,_load_extensions)                          | 1
c | (startup) (local_configuration)                                  | 2
d |                                                                  | 3
e |                                                                  | 4
f | (std) (constants)                                                | 5
g | (std) (s:) (constants)                                           | 6
h | (std) (c:) (classification)                                      | 7
i | (std) (c:) (conversions)                                         | 8
j | (std) (n:) (even,odd,sign,square,sqrt)                           | 9
k | (std) (v:)                                                       | 10
l | (std) (buffer:)                                                  | 11
m | (std) (s:) (begins-with,ends-with)                               | 12
n | (std) (s:) (contains/s?) (index/s)                               | 13
o | (std) (bit:) (bit-access)                                        | 14
p | (std) (b:) (byte-addressing)                                     | 15
---------------------------------------------------------------------------
1 Previous     2 Next         3 Display      4 Jump         5              
6              7              8 Rename       9 Leap         0 Quit         

But you could be a while paging through this list! Instead, use the “Leap” command to get to your destination.

In this case, I'd like to see where the manual command is getting its data from (because I want to chase down a simple typo I'd spotted). So, I think I'm supposed to Leap to the definition with the Catalogue request “9manual” ... but that doesn't seem to give me any meaningful results at all. “9catalogue” on the other hand does provide me with a bunch of related blocks :-

a | (tools:catalogue) (variables,_actions)                           | 80
b | (tools:catalogue) (actions,_continued)                           | 81
c | (tools:catalogue) (key-bindings)                                 | 82
d | (tools:catalogue) (key-bindings)                                 | 83
e | (tools:catalogue) (key-bindings)                                 | 84
f | (tools:catalogue) (hints)                                        | 85
g | (tools:catalogue) (display)                                      | 86
h | (tools:catalogue) (top-level)                                    | 87
i |                                                                  | 88
j |                                                                  | 89
k |                                                                  | 90
l |                                                                  | 91
m |                                                                  | 92
n |                                                                  | 93
o |                                                                  | 94
p | catalogue shortcuts ============================================ | 95
---------------------------------------------------------------------------
1 Previous     2 Next         3 Display      4 Jump         5              
6              7              8 Rename       9 Leap         0 Quit         

Looking at page h does indeed result in some useful source :-

a | (tools:catalogue) (top-level)                                    |
b |                                                                  |
c | :catalogue (-)                                                   |
d |   [ &catalogue:actions &catalogue:hints &catalogue:display ]     |
e |   ti:application ;                                               |
f |                                                                  |
g | &catalogue \catalog                                              |
h |                                                                  |
i |                                                                  |
j |                                                                  |
k |                                                                  |
l |                                                                  |
m |                                                                  |
n |                                                                  |
o |                                                                  |
p |                                                                  |
---------------------------------------------------------------------------
1 Previous     2 Next         3              4 Jump         5 Run          
6 Save         7 Load         8              9 Leap         0 Quit         

So now I know a little more, including that there's an alias for the catalogue command allowing me to type a little less ... catalog!

That's probably enough for now; there's a lot more hacking to do here, and I'll probably have to ask the author some questions to make sure I'm getting the terminology correct for the current names of commands and concepts before writing much more :-)

2024-03-12

A while ago I spent some time getting into RetroForth, and getting very interested in the VM that runs the language as well.

I thought I'd check in with this project again recently, and discovered that the author, Charles Childers, has moved forward and is now implementing a full operating system based on this work!

Konilo, the Tool for Thinking

You get the following main parts for the system :-

  • ilo, the virtual machine (multicore CPU with a specialist instruction set, dual stack memory model)
  • ilo.rom, a 256KB image with the base system image (Retro Forth)
  • ilo.blocks, a 16MB “filesystem” pre-loaded with utility programs, documentation & examples

You can run this on a baremetal x86, using a multiboot binary for the VM, directly on a Sparkfun Teensy 4.1, in the browser (slowly) via an x86 emulator or more usefully from a terminal on your current computer with prebuilt binaries for Linux, *BSD, macOS, Windows, also Haiku, Mac System 5,6,7 and MS-DOS. If you prefer to build from source, help yourself to C, C#, C++, Lisp, Go, Kotlin, Lua, Nim, Python, Rust, Swift ... or hand-built assembly.

This isn't a simple “here, I figured out how to build it” project. This is a “if it's going to be done portably, we need to show it being done everywhere” demonstration of skill. If you want to learn any of those other languages, the ilo VM makes a great Rosetta Stone to compare with.

Using Konilo

Despite the technical excellence of the implementation, Konilo (currently on its second release, 2023.10) isn't all that friendly to get started with. You have to have the right files in the current working directory, and get used to REPL loops that don't identify themselves with anything as helpful as a prompt.

So grab the tarball or zip file from (http://konilo.org/), and unpack it somewhere. From a command-line, go to the top-level of the package, and execute one of the VMs from the bin directory. For my Linux PC, that's bin/ilo-amd64-linux. Hopefully you'll get the Welcome messages, then you'll have a simple input line waiting for you to say something incisive. Try “bye” to immediately exit :-)

$ bin/ilo-amd64-linux 

Welcome to Konilo                                               
                                                                
 ,dPYb,                                    ,dPYb,               
 IP'`Yb                                    IP'`Yb               
 I8  8I                               gg   I8  8I               
 I8  8bgg,                            ""   I8  8'               
 I8 dP" "8    ,ggggg,   ,ggg,,ggg,    gg   I8 dP    ,ggggg,     
 I8d8bggP"   dP"  "Y8  ,8" "8P" "8,   88   I8dP    dP"  "Y8     
 I8P' "Yb,  i8'    ,8I d8   8I   8I   88   I8P    i8'    ,8I    
,d8    `Yb,,d8,   ,d8P8P    8I   Yb,_,88,_,d8b,_ ,d8,   ,d8'    
88P      Y8P"Y8888P"        8I   `Y88P""Y88P'"Y88P"Y8888P"      
                                                                
`manual` for the users guide                                    
`blocks` for help using the block editor                        
`describe name` for help on a specific word                     
`catalogue` to explore the blocks                               

The suggested commands manual, blocks and catalogue all run a console-based text user interface called (termina), which expects a screen that's at least 75 columns wide and 20 lines high (configurable, but that comes later!). The bottom 3 lines are reserved for a help section showing the current numeric selections that will work, and the convention is that 0 will quit from these at any time, returning you to the main RetroForth REPL – but there's no prompt, so you have to know where you are ...

Another gotcha is that there is nothing in the way of command history or autocomplete. If you're on a unix, and you have access to the rlwrap command, there's an autocomplete file in the source distribution (a separate download direct from the author's Fossil repository, or via a Git-based mirror on SourceHut) – see the example ri.sh script.
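
A minimal sketch of what such a wrapper does (the completion filename here is a placeholder for whatever the autocomplete file in the source distribution is called) :-

$ rlwrap -f konilo-completions.txt bin/ilo-amd64-linux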

That's all for my initial noodling around, I hadn't noticed this project starting up but this will be fun I hope ...

Exploring Veilid Bootstrap

This is early in Veilid's history, 16 Aug 2023.

Went to https://veilid.net, read up on concepts. Joined the Discord.

Notes from that reading: “All nodes are equal”; “Bootstrap nodes publish a network key”; “The network key is the network”. Absent config, a DNS TXT query will be used to start up.

At this stage in the project there is little documentation. However, there are Debian/Fedora packages for veilid-server & veilid-cli, which I started with rather than the sources.

Install veilid-server and veilid-cli. Very minimal packages – just the binaries and /etc/veilid-server/veilid-server.conf. The -server install tells you how to enable the systemd service.

$ veilid-server --help produces the most documentation I've seen so far ...

Specifically, you get a full config output (not sure if it's actually a valid config file yet) with veilid-server --dump-config

This reveals some nice defaults, and in particular :-

  • network.routing_table.bootstrap: bootstrap.veilid.net

But the results from a TXT lookup there are sparse :-

bootstrap.veilid.net. 3600 IN TXT "1,2"

So, when veilid-server starts, it sets config from the command line, then starts veilid-core internally. routing_table/tasks/bootstrap.rs tells us a little more ...

// Bootstrap TXT Record Format Version 0:
// txt_version|envelope_support|node_ids|hostname|dialinfo_short*
//
// Split bootstrap node record by '|' and then lists by ','. Example:
// 0|0|VLD0:7lxDEabKqgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ|bootstrap-1.dev.veilid.net|T5150,U5150,W5150/ws

This calls txt_lookup on each bootstrap hostname, so we dive into intf/native/system.rs, where txt_lookup ends up calling trust_dns_resolver by default, which ends up on the local system resolver. The initial TXT query retrieves a list (“1,2”), which is treated as prefixes to the hostname, giving us 1.bootstrap.veilid.net and 2.bootstrap.veilid.net (thanks Discord for helping me notice the second lookups).
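
You can follow the same chain by hand with dig; the two prefixed names return the real bootstrap records :-

$ dig +noall +answer TXT bootstrap.veilid.net
bootstrap.veilid.net. 3600 IN TXT "1,2"
$ dig +noall +answer TXT 1.bootstrap.veilid.net 2.bootstrap.veilid.net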

1.bootstrap.veilid.net. 2345 IN TXT "0|0|VLD0:m5OY1uhPTq2VWhpYJASmzATsKTC7eZBQmyNs6tRJMmA|bootstrap-1.veilid.net|T5150,U5150,W5150/ws"

2.bootstrap.veilid.net. 3600 IN TXT "0|0|VLD0:6-FfH7TPb70U-JntwjHS7XqTCMK0lhVqPQ17dJuwlBM|bootstrap-2.veilid.net|T5150,U5150,W5150/ws"

I'm not sure why this complexity exists, rather than a simple CNAME alternative; i.e. look up the bootstrap hostname, follow any CNAME entries until there are no more, then query TXT on the result.

More discussion on the Discord suggested that SRV records might be more appropriate; and of course this is all about the process of joining a network, not operating it, so I need to go do some more reading.

2022-11-20 @yojimbo@hackers.town

TL;DR only 5% of fediverse servers publish security.txt

Introduction

There's a continuing influx of new users to the Fediverse at the moment (Oct/Nov 2022), and with them came a fair number of new infosec people.

Some of them started kicking at the tyres of the services that they were on, which has caused some concern – because of the current growth rate of users, many servers are running very tight on resources, and the admins are working hard enough just keeping things running; they don't want to have to also deal with potential outages caused by 'testing'.

There are now a few servers being set aside for testing, and this is a great resource, but they're not necessarily easy to discover.

So if you're a white-hat hacker, you should be trying to co-operate with the servers you are working on. One of the ways to do that is to consult SECURITY.TXT, an informational page described by RFC9116, and informally specified for many years before that.

This tells you who to contact with potential security issues, and can be used to describe any bounty program in place, as well as overall policies (and perhaps pleas for being left alone?)

So, I thought I'd survey how many fediverse sites actually have a SECURITY.TXT page in the first place.

Data Sources

I took lists of known servers from three different resources :-

  • https://instances.social/
  • https://fediverse.observer/
  • https://fediverse.party/

I combined the results to come up with a list of just over 28k domain names, and then requested /.well-known/security.txt from each one of them.
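
The fetching itself doesn't need anything clever; a sketch of the kind of loop involved (filenames here are illustrative, and the timeout is a guess) :-

# domains.txt is the combined ~28k list, one hostname per line
mkdir -p results
while read -r domain; do
  code=$(curl -s --max-time 10 -o "results/${domain}" -w '%{http_code}' \
         "https://${domain}/.well-known/security.txt")
  echo "${code} ${domain}" >> status.log
done < domains.txt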

Detailed Results

Many sites didn't respond at all. This probably reflects on the “timeliness” of the list data maintained by the sources I chose; some of them actively re-survey and curate their data to make sure it is current, and others view it more as a record of servers that have been seen at some point.

Many of the sites responded with a clean 404 status code, indicating that they simply don't have the page at all.

Some of the sites responded with content (a 200 status code), but the content wasn't security.txt, it was some other default content from the site instead.

On my initial run through the servers, I ran out of inodes as I got to sites starting with 'm' :-) and had to quickly pause the job, clean up, and resume. And curl was refusing to connect to sites with 'invalid' certificates. So I'm sure I missed a few original data points. Sadly this doesn't seem to matter because the overall picture is pretty sad ...

28805 queries made, 20526 did not provide a response.

  • 8279 valid responses
  • 5520 “404”s
  • 1617 “200 OK”
  • 437 “500”-series
  • 338 redirects – I didn't ask curl to follow redirects

Of the 1617 “200 OK” responses, only 664 provided a Contact value ... although 217 of those were 'info@frendi.ca', so I'm excluding them to leave only 447. It's notable that 208 of these were actually PeerTube instances, too, with what looks like an automatically-generated file containing the site admin's address as well as the project's central reporting details, so well done PeerTube.
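
If you want to repeat that counting afterwards, grep over the saved responses is enough; something like this, again assuming one saved file per domain as in the sketch above :-

$ grep -rlia '^contact:' results/ | wc -l
$ grep -rlia 'info@frendi.ca' results/ | wc -l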

My final result therefore is that only 447 valid security.txt files were found in over 8000 servers, which is an approximately 5% hit rate.

Perhaps we should clean this up before complaining about white-hat activity?