Read the latest posts from write.angry.im.

from petertechtips

I have been using systemd-nspawn on my Debian servers as a simple isolation tool, but there is just one small problem: it does not play nicely with Docker nested inside. I had to use a dirty hack to make it work, but the hack basically makes the namespace isolation pointless. This seems to be a limitation of cgroups v1, i.e. the “legacy hierarchy”, yet for some reason enabling the cgroups v2 “unified hierarchy” makes it impossible to use Docker inside altogether, even with the hack. With Debian's migration to cgroups v2 on the horizon, I figured it would be a good idea to start preparing for the transition now rather than later. Since systemd-nspawn did not work as of my previous attempt, I decided to try LXC, which I have heard supports nesting Docker inside even on cgroups v1.

Enabling CGroups V2 in Systemd

To use pure cgroups v2, we have to make sure systemd uses only the unified hierarchy; otherwise, as of Debian 10, it defaults to the “hybrid hierarchy”, which still mounts the v1 controllers and defeats the purpose of preparing for a pure cgroups v2 setup.

It is fairly straightforward to enable cgroups v2 support in systemd: just add systemd.unified_cgroup_hierarchy=1 to the kernel command line (for GRUB, edit /etc/default/grub), and then rebuild your bootloader configuration (for GRUB, run update-grub).
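Concretely, on a stock Debian GRUB setup the change amounts to one sed invocation. The sketch below runs against a scratch copy of the file so it can be tried safely; apply the same edit to the real /etc/default/grub as root, then run update-grub:

```shell
# Scratch copy standing in for /etc/default/grub
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\nGRUB_CMDLINE_LINUX=""\n' > grub.default.example

# Append the flag to GRUB_CMDLINE_LINUX, preserving anything already there
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 systemd.unified_cgroup_hierarchy=1"/' grub.default.example

cat grub.default.example
```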

After rebooting, run mount | grep cgroup and check if the output looks like

cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)

Notice the cgroup2 at the start and after type.

LXC Configuration for CGroups V2

I won't go into the details about how to set up an LXC container — you can do it the “template” way, or you can generate the rootfs yourself using something like debootstrap and write the configuration like what's described in this article. Either way, you'll have to apply the following additional configuration yourself.

Debian has an excellent wiki page about LXC cgroups v2 compatibility, but it is for LXC 4.0, which will probably be included in Debian 11 “Bullseye”. As of now, Debian 10 “Buster” comes with LXC 3.0, which contains some bugs in terms of cgroups v2 compatibility. Specifically,

lxc.mount.auto = cgroup:rw:force

is broken on LXC 3.0: it mounts cgroup2 at /sys/fs/cgroup/cgroup instead of the correct path /sys/fs/cgroup (reported here and fixed in 4.0). The workaround is to mount cgroup2 at the correct path manually:

lxc.mount.entry = cgroup2 sys/fs/cgroup cgroup2 create=dir,rw 0 0

(Note: in theory, you could use the alternative solution from the Debian wiki page, lxc.init.cmd = /sbin/init systemd.unified_cgroup_hierarchy=1, but this did not work for me either with a Debian 10 rootfs inside the container. I am not sure why, but force-mounting the cgroup2 filesystem seems to work just fine.)

In addition, the automatically generated AppArmor profile is broken for cgroups v2, causing some systemd services inside the container, such as systemd-networkd, to crash because they cannot set up their service-specific namespaces. For now, the dirty fix is to simply put containers in the unconfined profile:

lxc.apparmor.profile = unconfined

This is, obviously, not exactly great for security, but rest assured you shouldn't need this once Debian 11 is released and you can upgrade to LXC 4.0.

I have put together a configuration file that you can simply include into your own configuration via the lxc.include directive:

# Mount cgroup2 at ${CONTAINER_ROOT}/sys/fs/cgroup
# to force systemd to use cgroups v2
# TODO: After LXC 4.0, we only need
#   lxc.mount.auto = cgroup:rw:force
# for this. The line above does not work in LXC 3.0
lxc.mount.entry = cgroup2 sys/fs/cgroup cgroup2 create=dir,rw 0 0

# Clear cgroups v1 rules
lxc.cgroup.devices.allow =
lxc.cgroup.devices.deny =

# LXC 3.0 has apparmor bugs for cgroup v2
# TODO: Remove this after LXC 4.0
lxc.apparmor.profile = unconfined

Docker nested in LXC

Actually, with the configuration above, you should already be able to use Docker inside the container. The problem is that Docker has only supported cgroups v2 since 20.10, while Debian 10 ships a much older version. To use Docker 20.10 on Debian 10, you'll need to add

deb [arch=amd64] https://download.docker.com/linux/debian buster stable

to your APT sources list (along with Docker's repository GPG key, as described in their installation docs), and then install docker-ce from it.

Remember that for Docker to behave correctly inside a container, you must load the overlay kernel module on the host; otherwise it will fall back to the very inefficient vfs storage driver.
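A quick, root-free way to check is to look for overlay in /proc/filesystems; the commands echoed below are the usual way to load the module and persist it on a standard Debian host:

```shell
# Is the overlay filesystem available on this kernel?
if grep -qw overlay /proc/filesystems; then
    echo "overlay is available"
else
    # Loading and persisting it requires root, so just print the commands
    echo "overlay missing; run: modprobe overlay"
    echo "then persist it with: echo overlay > /etc/modules-load.d/overlay.conf"
fi
```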

To verify your Docker installation is working, run docker info and make sure it shows

 Cgroup Driver: systemd
 Cgroup Version: 2

from petertechtips

Recently I got very interested in retro gaming and began looking for solutions I could use to build my own portable retro-style gaming console. I could just design my own and 3D-print a case, but I did not want to go through all that trouble if one was readily available. Additionally, I wanted to make use of my Raspberry Pi 3B, as it had been lying around collecting dust for almost two whole years.

What I ended up buying was one of WaveShare's gaming console kits, GamePi43. It is basically a shield PCB made for the RPi3, with a D-pad and some buttons pre-installed, along with a full enclosure. It is powered by two 18650 lithium ion batteries and has a built-in charging IC. The screen — a 4.3 inch IPS panel — also looks extraordinarily nice for something at this price point.

Upsides aside, like many electronics made in Shenzhen, it has some quirks in its official documentation, along with things one can modify to improve it. Listed below are some tips to work around the documentation issues or drastically improve the user experience.

System Image / Drivers

Despite WaveShare's claims on their Wiki page, the case can actually be fully operational without using the images they provide or executing any of the scripts listed on their page. Only two things need to be done on top of a default RetroPie installation to make it fully operational inside the GamePi43 case.

Update (2020-09-24): You don't actually even need the retrogame daemon described below. I have written a Device Tree Overlay for GamePi43's GPIO layout, and you can use the file (and instructions) available at this gist in place of the following paragraph of instructions. Going the Device Tree Overlay route allows the input events to be processed in the kernel instead of in userspace, reducing latency significantly.

The first is to enable controller input via GPIO — the D-pad and buttons on the case are connected to the Pi's GPIO pins. The “driver” provided by WaveShare is actually the same as the retrogame daemon released by Adafruit, so you can just download the binary from Adafruit. I put it in /usr/local/bin/retrogame, gave it +x permissions, and then extracted /boot/retrogame.cfg (you might want to uncomment the ESC line) from WaveShare's “driver” image and placed it in /boot. Then, unlike the official driver image, I used a systemd unit to start the daemon on boot:

[Unit]
Description=Retrogame GPIO Input Service

[Service]
ExecStart=/usr/local/bin/retrogame
Restart=on-failure

[Install]
WantedBy=multi-user.target

(The [Service] and [Install] sections above are a minimal completion of the unit; save it as /etc/systemd/system/retrogame.service and enable it with systemctl enable --now retrogame.)

You might now notice that the screen looks blurry for some reason. This is the second change to make — add the following to /boot/config.txt:

hdmi_cvt=800 480 60 6 0 0 0

(You can make these changes by plugging a keyboard into the Pi's USB port)
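Note that on Raspberry Pi firmware, a custom hdmi_cvt mode normally only takes effect when it is also selected with hdmi_group=2 and hdmi_mode=87 (the custom DMT mode), so if the screen still looks blurry, the full /boot/config.txt fragment would look like:

```
hdmi_group=2
hdmi_mode=87
hdmi_cvt=800 480 60 6 0 0 0
```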

Screen Blanking

By default, the screen is never turned fully off when idle. This can be problematic for heat output and LCD lifespan, and more importantly, if you want to use your GamePi as a power bank (lol). Fortunately, the solution is pretty easy: just add hdmi_blanking=1 to /boot/config.txt, and change consoleblank=0 to consoleblank=<timeout_in_seconds> in /boot/cmdline.txt. Now, if the machine stays idle for <timeout_in_seconds>, the screen will turn completely off.
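The cmdline.txt edit can be sketched as follows, again against a scratch copy since the real file lives in /boot (the kernel arguments here are placeholders, and 600 seconds is just an example timeout):

```shell
# Scratch copy standing in for /boot/cmdline.txt (contents are placeholders)
printf 'console=tty1 root=PARTUUID=xxxxxxxx rootfstype=ext4 fsck.repair=yes consoleblank=0\n' > cmdline.example

# Blank the screen after 10 minutes (600 seconds) of inactivity
sed -i 's/consoleblank=0/consoleblank=600/' cmdline.example

cat cmdline.example
```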

“Soft” Halt / Power On Button

The GamePi43 comes with a clicky power switch at the top, which is nice, except that it cuts power physically and does not allow a clean shutdown. You can run shutdown from the command line or from RetroPie's menu, but that puts the RPi into a halted state that cannot be woken up under the case's default configuration, and you still need to flip the power switch off and on to power it back up, which feels far from elegant.

After a bit of digging I learned that you can actually wake up a halted Pi by pulling GPIO3 (pin 5) LOW. Unfortunately, according to WaveShare's documentation, GPIO3 is unused in GamePi43, so we do not have anything on the machine that can power it up. What I then realized is that GPIO2 is actually used for the HK button and it is an active-LOW momentary switch, which means that if I short GPIO2 and GPIO3, I should be able to use the HK button as a soft (graceful) power button (because now HK pulls both GPIO2 and GPIO3 to LOW).

The mod was quite simple. I just pulled the Pi out of its socket on GamePi's board and then shorted the legs of the socket corresponding to GPIO2 and GPIO3 (pin 3 and pin 5). To short them, I simply cut a short piece of resistor leg and soldered it to both of the two pins. You could also just short the two pins on the back of your Pi but I figured that it's better to do my modifications on GamePi instead of my Pi.

Now, after you run shutdown -h now in the OS, pushing the HK button once should boot the Pi back up. All that's left is to write a simple script that adds a shutdown feature to the HK button:

#!/usr/bin/env python

import RPi.GPIO as GPIO
import subprocess
import time

GPIO.setmode(GPIO.BCM)  # BCM numbering: GPIO3 is physical pin 5
GPIO.setup(3, GPIO.IN, pull_up_down=GPIO.PUD_UP)

while True:
  GPIO.wait_for_edge(3, GPIO.FALLING)
  down_ts = time.time()
  GPIO.wait_for_edge(3, GPIO.RISING)
  down_secs = time.time() - down_ts
  if down_secs >= 3:
      # Shut down if long-pressed for more than 3 secs
      subprocess.call(['shutdown', '-h', 'now'], shell=False)

The script listens for press and release events on GPIO3 (now shorted to the HK button) and triggers a halt if the button is held for more than 3 seconds. Just make the script run on boot using systemd or whatever you like, and you now have a graceful shutdown / power-on button in place of the forceful power switch.
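For the systemd route, a minimal unit would look something like the following, assuming the script above is saved as /usr/local/bin/hk-power (the name and path are arbitrary) and marked executable:

```ini
[Unit]
Description=HK soft power button handler

[Service]
ExecStart=/usr/local/bin/hk-power
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Install it as /etc/systemd/system/hk-power.service, then run systemctl enable --now hk-power.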

Joystick-like Cap for D-pad

Let's face it: the D-pad that comes with GamePi43 is sub-par. It is hard to press and makes my thumb sore all the time, and you cannot even push down more than one direction at a time due to its sheer stiffness.

When I was playing with another open-source retro console owned by my sister, I noticed a trick they used to make the D-pad feel better. All they did was stick a round cap on top of the D-pad (it came as a separate part that the end user puts on), and suddenly the D-pad felt almost like a joystick. Of course, there are dents on the round cap to make it fit fingers better.

Learning from this trick, I quickly took some measurements of the D-pad on the GamePi43, and then modeled a similar cap in OpenSCAD.

$fn = 100;
d = 23.3;
cross_width = 8.5;
dpad_height = 1;
cap_height = 5;
cap_roundness = 1;
dent_size = 12;

// Base plate: the D-pad cross, clipped to a circle of diameter d
intersection() {
    union() {
        cube(size = [d, cross_width, dpad_height], center = true);
        cube(size = [cross_width, d, dpad_height], center = true);
    }
    cylinder(h = 1, r = d / 2, center = true);
}

// Round cap on top, with rounded edges and finger dents
// (the sphere operands below reconstruct lost lines; tweak to taste)
translate(v = [0, 0, cap_height / 2 + dpad_height / 2])
    difference() {
        minkowski() {
            cylinder(h = cap_height - cap_roundness * 2, r = d / 2, center = true);
            sphere(cap_roundness);
        }
        union() {
            // Large shallow dish in the middle
            translate(v = [0, 0, dent_size * 2])
                sphere(dent_size * 2);
            // Four smaller dents toward the edges
            translate(v = [5, 0, dent_size])
                sphere(dent_size);
            translate(v = [0, 5, dent_size])
                sphere(dent_size);
            translate(v = [-5, 0, dent_size])
                sphere(dent_size);
            translate(v = [0, -5, dent_size])
                sphere(dent_size);
        }
    }

This was then printed on my Ender-3S and stuck onto the D-pad with some 3M double-sided tape.

The D-pad now feels way better than before, but unfortunately it is still stiffer than I'd like. Nevertheless, with this simple mod you can at least press two directions at once.


from petertechtips

I have recently purchased a Pixel 4a from Japan, shipped all the way to China via EMS. Unfortunately, unlike people living in “the outside world”, I have to hack around some limitations to make it work like a “normal” Pixel 4a (as sold and used in the US, Canada, or the EU). Specifically,

  • Japanese phones are required to have a shutter sound, for both the camera and screenshots, that cannot be muted
  • Pixel phones have no proper support for China Mainland carriers

These two problems are the most serious blockers to using the phone as a daily driver. Thankfully, both can be worked around (at least to some extent) with custom Magisk modules.

Installing Magisk

There is (at the time of writing) no proper TWRP for the phone yet, because it shipped with Android 10 and uses dynamic partitions; TWRP also does not work well on A/B devices that lack a standalone recovery partition.

Like other devices in similar situations, we install Magisk by first installing Magisk Manager, then extracting boot.img from the factory image and patching it with Magisk Manager. After that, just flash the patched image to the boot partition using fastboot as usual.

Note that, at the time of writing, using Magisk on Android 11 requires opting into the Canary channel.

Global Carrier Support

The core issue with carrier support on Pixels is that very few carrier configurations are available in RFS (on Pixels, RFS is located in /vendor/rfs, unlike other QCOM devices). I do not know the rationale behind this, but it makes Pixels basically useless outside the regions where they are sold.

One simple fix is to just kang RFS images from devices with the same or similar SoCs and global carrier support. As the Pixel 4a uses a Snapdragon 730G, a natural choice of donor device is the Redmi K30, sporting the same SoC. The RFS image on the K30 can be extracted from NON-HLOS.bin inside its factory image, simply by mounting the file using the mount command on Linux (it is actually a VFAT filesystem). What is useful to us, however, is the mcfg_sw directory located somewhere deep in the directory tree. Take that directory and make a Magisk module that overlays it at the path system/vendor/rfs/msm/mpss/readonly/vendor/mbn/, where a directory of the same name can be found, and voila: the Pixel 4a now works with basically every carrier it can physically support.
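The module layout can be sketched as follows; the module id and metadata here are arbitrary examples rather than an official template, and you would copy the extracted mcfg_sw contents into the target directory before zipping the module up:

```shell
# Skeleton of the overlay module; Magisk mounts everything under
# $MODDIR/system over the real filesystem at boot.
MODDIR=mcfg-overlay
TARGET="$MODDIR/system/vendor/rfs/msm/mpss/readonly/vendor/mbn/mcfg_sw"
mkdir -p "$TARGET"

# Minimal module metadata (values are illustrative)
cat > "$MODDIR/module.prop" <<'EOF'
id=mcfg-overlay
name=mcfg_sw overlay for Pixel 4a
version=1.0
versionCode=1
author=example
description=Carrier configuration files kanged from a Redmi K30
EOF

ls "$MODDIR"
```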

Remember to fix the SELinux contexts and permissions with the following customize.sh script:

ui_print "-- Setting permissions for modem config files"

find "$MODPATH/system/vendor/rfs/msm/mpss/readonly/vendor/mbn/mcfg_sw" | while IFS= read -r f; do
  set_perm "$f" root root 755 u:object_r:vendor_file:s0
done

(Note: for some reason, even though my Pixel 4a seems to work fine with China Telecom over pure LTE (the JPN version has no CDMA support) after this hack, it fails to output audio during outgoing VoLTE calls. Incoming calls are fine, and other carriers seem fine too.)

Remove Screenshot Shutter

This part is easy: the shutter sound is located in /product/media/audio/ui/camera_click.ogg. Just make a Magisk module to override that file with an empty ogg file created using ffmpeg or something similar. Also remember to fix the SELinux context properly.

I tried to remove the camera shutter sound but failed. Normally one would expect the camera to use the same audio resource file in /product, but apparently this is not true for Google Camera. It packages its own shutter sound inside its APK, and I cannot think of a sane way to remove it without something like Xposed. I also tried editing its preference XML in /data, but that did not work either. Since the camera shutter is not as annoying as the screenshot one, I decided to just live with it.

Upgrading Android OS

After installing Magisk, the automatic system update functionality no longer works, and the old trick of restoring the stock boot image and re-flashing Magisk to the other slot also stopped working as of Android 11. What I do instead is:

  1. Download latest factory image from Google
  2. IMPORTANT: Edit flash-all.{bat,sh} and flash-base.sh to remove all -w flags after fastboot. This prevents wiping data.
  3. Extract boot.img from image-<codename>-<build_number>.zip
  4. Patch that boot.img with Magisk Manager on the phone
  5. Fetch the patched image back to PC and replace the one in the zip with the patched one
  6. Reboot to fastboot and execute flash-all.{bat,sh}

from petercxy

I have not been very active in any aspect of my life since the beginning of 2020 compared to how I used to be — my projects have pretty much stagnated and I have hardly updated my main blog. More than at any time before, I have been in a state of burnout in this unusual year of 2020.

The feeling of burnout is strange. For me, it is not that I cannot work on my projects, or that I have no energy to spare for them. Quite the opposite: I can spend all my free time playing Minecraft and building complicated redstone circuits, which is no less tiring than my “real” projects, that is, the things we consider “productive” to do. There is simply some sort of reluctance to doing anything productive, anything that could result in some kind of accomplishment. I am aware that these projects are not that complex, and that once I start working on them they would not be very tiring — but I just do not want to work on them.

Of course, knowing all of this often makes me “condemn” myself internally for my reluctance to do productive work, for which I constantly have to find excuses, an act called “moral licensing”. My initial excuse was my final year of studies, including my final year project, which I did not actually invest that much time in. After that came to an end and I realized I was still burned out, I switched my excuse to preparing applications for master's studies. This one made some sense, as the exams I have to go through are (and I still think so) very tedious to prepare for, especially the GRE, which requires me to memorize a bunch of intricate words and their nuances that I will probably never need for the rest of my life. But it takes nowhere near all of my time or energy, and I still have a lot of time lying around doing absolutely nothing.

I think the situation is more psychological than anything else. My productivity has always been intermittent — just look at my activity graph on GitHub. I would normally hit a one- or two-month streak, then go dormant for the following month or two, usually because of a loss of direction. That is when I get confused about the future of a project and what purpose it serves, not just for me but for everyone else. The loss of direction incurs a loss of motivation, which drives my productivity-burnout cycles.

This time it is different, though. It has lasted for the entirety of 2020 so far, and I still do not feel any better than I did at the start of the year, when everything ridiculous was still in its infancy. The longer this state continues, the worse I feel, thanks to all the “moral licensing” I have done in vain, whose only effect was to make me feel even worse about myself. I think it comes partially from uncertainties about my future: I am now laser-focused on getting somewhere overseas for my master's studies and, hopefully, staying there, and if that fails, I do not even know what my plans for the future should be. This is more than being confused about the purpose of a project, because if this attempt fails, none of my projects, if any still live on, would make sense to me.

Social media, of course, has not helped the situation. I do not want to sound like a cynic, but as someone easily frustrated by a few trolls on the Internet, I am overwhelmed by the amount of pointless arguments and disappointing news since the beginning of this year. I am aware that killing my own productivity by indulging in these helps nobody, including myself, but awareness is not the same as feeling. Without reading the news, I feel isolated; but once I take a look at any news at all this year, the frustration comes back. I cannot even go outside more and get a “real” life — travel restrictions mean no friends, no university, no parties, nothing.

As of now, although I am trying to get back to some of my projects, my motivation is not really increasing enough for me to resume normal activity as the exam dates draw closer. I am still not sure what the future holds for me, at least until the results of all these exams come in. I am sorry this is not an uplifting post at such a time, but the fact is that I have no idea how to get out of all of this without the circumstances changing first. I guess the best I can do for now is rest and try not to break down completely before everything ridiculous even ends.


from petercxy

I have been told numerous times throughout my school years that to be a truly good learner, one must not refrain from asking questions. Even simple questions, they say, are a sign of thinking and of being in the process of learning. I myself, however, seldom find myself willing to ask questions, not because I have no questions, or because I think asking simple questions is stupid, but because, more often than not, I cannot formulate questions that feel truly worth asking.

I am not denying that questions are valuable to the learning process — they most definitely are. What I am often reluctant to do — or rather, what I consider a waste of time — is the act of articulating the question to someone else (let us define “asking”, in this article, as “articulating”). This is the age of information, and it is no exaggeration to say that over 80% of questions, especially those that come up when learning a new subject, have an answer lurking somewhere online, waiting to be discovered through a search. More often than not, running a few simple search queries is much more efficient than sending the question to someone else, not to mention that the person being asked may themselves have to find the answer online. This is often the case when someone asks me a question I am not 100% sure about either. It is not even laziness: finding someone to help normally requires more effort, not less, than typing a few words into a search engine. Asking this 80% of questions is, to me, nothing more than an inefficient use of precious time.

What about the other 20% of questions? In those cases, asking seldom gets you a single definitive answer; rather, it brings about a discussion, which is good if you are prepared for a productive one. Unfortunately, when one asks a question simply to get an answer, one is most likely not actually prepared for a discussion. The questioner can only watch others discuss, feeling lonely and helpless, with no substantial point to bring in, which, for me, causes frustration, and nothing is truly gained in the end. By the time one has thought and researched enough to be prepared for such a discussion, one has almost certainly already formed an answer of one's own. At that point I would no longer classify the act as asking a question, but rather as an exchange and critique of ideas, which I love to see, but it is not “asking” per se. It is somewhat like rubber duck debugging: when you actually know what you want to ask, you usually already have an answer, even if it might be wrong. That is what discussions are for.

In my opinion, this is also what “the art of asking questions”, a notion prevalent on several tech forums, really expects. For most of the questions one can brainlessly post on these forums, the answer could be found with no more than a little effort on a search engine. For the rest, just throwing a question mark out there will not get you anywhere, and it can even be considered rude when the question is difficult, has no definitive answer, and the asker shows no evidence of trying to solve it on their own. What is more valuable is an exchange of ideas and attempted solutions, not questions whose answers literally take seconds to find with a modern search engine.

In other words, the act of asking questions does not imply intelligence. Rather, it is the ability to try to solve problems, think, discuss, and exchange ideas that really makes one gain knowledge. One needs to question and think to be a truly good learner.


from petercxy

I have been a long-time blogger since the good old days of the internet, and my habit of writing long, elaborate essays on every interesting topic I can think of has become almost an instinct after all these years. I am also someone who loves social networks (despite all my previous rants about social networks, ironically posted on social networks), and I even run a Mastodon instance of my own. For a long while, the distinction between the two was pretty clear to me: my blog is for long, pedantic essays, while social networks are for jokes, short comments, stupid arguments, or whatever I cannot be bothered to write a full article on.

In recent years, I found myself posting longer and longer messages on my Mastodon account, while my blog saw barely any updates throughout the year. I hit Mastodon's character limit so regularly that I even modified its code to raise it. Reviewing my own posts, I find them genuinely too short and incomplete for publication on my main blog, but too long to read comfortably in the UI of a social media platform. Expanding them into full articles takes a considerable amount of time and effort, and as other aspects of my life get more and more occupied by “important stuff” as one grows up, I find myself increasingly reluctant to do so.

So here is the problem. I do not want any more thousand-character posts on my social media account, but on the other hand, I do not want my main blog to fill up with random, incomplete comments or bare ideas, defeating the purpose of having a blog separate from social media in the first place. I could have just set up a secondary blog (again), but I find WriteFreely more suitable for this role: it supports federation via ActivityPub, allowing me to integrate it with my circle on Mastodon, while providing a simple writing experience. Since my purpose in running a WriteFreely instance is to replace my super-long posts on social media, ActivityPub integration matters a lot to me. In addition, compared to something like Plume, WriteFreely's codebase seems far simpler and cleaner. There is no 2-megabyte WASM binary just to view an article, and no fancy eye-hurting styles, but it is still beautiful.

And here we are. A new WriteFreely instance. I welcome you (if you have an invitation) to join me in exchanging our writing and our ideas on this instance. It does not matter whether you use it as your main place of publication or, like me, just to replace long social media posts. And who knows, maybe one day WriteFreely will actually replace my main blog. For now, it does not matter. What is important is that we keep thinking and keep writing.

Keep writing. Keep smiling. Don't be angry :)