Good morning! :coffee:
Recently, I’ve been playing around with some ideas that’ll eventually get me 24 monitors. Why the impractical and unholy number of monitors?
If I’m being completely realistic, that’s way too many monitors to be practical, or even useful, except in some extenuating circumstances like stock trading.
So, instead of coming up with a good excuse, I present to you the best argument I’ve had since time immemorial: why not? Sounds fun!
There have been some interesting ideas to accomplish this surrounding the use of Virtual Reality (VR) headsets, using apps like Immersed to achieve both a multi-monitor setup and a distraction-free environment. When COVID-19 was still a prominent part of life, I could see this being used fairly frequently.
On Immersed’s FAQ, under “What devices can run Immersed?”, it is stated that “Currently, Linux only supports plugged-in external monitors” with “virtual displays coming soon!” (accurate at the time of writing).
Borrowing a mate’s Quest 2, I was able to verify that Immersed wasn’t able to spawn new displays on Linux.
So that got me wondering: what if I could implement this killer feature? Apart from Immersed, I could turn old devices into high-speed, low-latency external displays.
Apps like these already exist; notable examples include GNOME’s virtual displays and deskreen.
“But!” I exclaimed to myself.
“I want a potentially unlimited number of virtual displays!” I bemoaned.
“And I want to do it by myself!” I lamented.
Lo and behold, of course I’d find a way.
Ideally, I want whatever solution I come up with to be GPU-agnostic; i.e. it shouldn’t be something that only works for Intel iGPUs (as is the case with VirtualHeads in Xorg configurations).
When searching online, I came across this wonderful person’s post, which suggested that we can use DisplayLink’s evdi kernel module, which allows us to set an initial number of devices. Running the right xrandr commands will then get us virtual monitors that we can’t directly observe on physical monitors, but that can instead be accessed via something like VNC, or via an alternative method which I will propose later.
The first step is to install the evdi kernel module. On Ubuntu, it is as simple as running sudo apt install evdi-dkms and then restarting the system. On other systems, either look for evdi in your package manager, or compile it from source.
Now, run modprobe evdi initial_device_count=2 (or however many you want).
After which, restart your X session; this can typically be done by signing out and then logging back in, although I’ve only ever tested it by using the “restart X session” functionality in my i3 config (i.e. killing the X session and restarting it).
You’d have to do this after every restart. If you already have a good idea of how many additional virtual monitors you want, you can choose to add this to /etc/modprobe.d/local-evdi.conf:

options evdi initial_device_count=X

where X is the number of monitors you intend to boot with.
Now, run xrandr --query. You should see a bunch of disconnected monitors, which can look like this:

DVI-I-3-2 disconnected (normal left inverted right x axis y axis)
DVI-I-2-1 disconnected (normal left inverted right x axis y axis)
eDP-1-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) ...
At this point, add the resolution you want your virtual monitors to be. There are plenty of guides online on how you can add custom resolutions, but if you’re adding well-known resolutions (such as “1920x1080” or “1920x1200”), you can do so by running these commands:
xrandr --addmode DVI-I-2-1 1920x1200
xrandr --addmode DVI-I-3-2 1920x1080
Note: If you have other interfaces that are free, you can use those instead. The DVI-I ones are generated by evdi.
Figure out how you want to lay out your monitors. In my setup, I want DVI-I-2-1 to be on the right of eDP-1-1, and DVI-I-3-2 to be on the right of DVI-I-2-1. Here’s the xrandr magic to achieve that:
xrandr --output DVI-I-2-1 --mode 1920x1200 --right-of eDP-1-1
xrandr --output DVI-I-3-2 --mode 1920x1080 --right-of DVI-I-2-1
Congratulations! You’ve managed to set up virtual monitors. Way to go :beers:
Now, how do you see content on those monitors?
Suppose you have two other devices to display the two new virtual monitors you’ve set up; then there are actually a fair number of ways you can go about this. The easiest way is probably with a VNC server and client, for which there are plenty of guides.
One way I tried that didn’t work was NoMachine (known for being incredibly fast); it wasn’t happy about the virtual monitors and drew a large black box over where they were supposed to be positioned.
If you have a VR workspace emulator like Immersed, the virtual monitors you’ve created should just work straight away (tried and tested).
The rest of this blog post will outline a less conventional way: using ffmpeg and ffplay. This allows me to take advantage of the host’s NVIDIA card to display my virtual monitors on other devices.
First, run xrandr --query to figure out the offsets of your outputs. For instance, here’s what mine looks like:
> xrandr --query
DVI-I-3-2 disconnected 1920x1200+3360+1000 (normal left inverted right x axis y axis) 0mm x 0mm
DVI-I-2-1 disconnected 1920x1080+5280+1000 (normal left inverted right x axis y axis) 0mm x 0mm
This means that DVI-I-3-2 has an x-offset of 3360 and a y-offset of 1000, while DVI-I-2-1 has an x-offset of 5280 and a y-offset of 1000 (I have a weird setup).
As you may have guessed, the devices I am planning to project the virtual monitors to are 1920x1200 and 1920x1080 in resolution, respectively.
Hence, on the host, I run the following command:
ffmpeg -video_size 3840x1200 -f x11grab -framerate 60 -i :0.0+3360,1000 \
-c:v h264_nvenc -zerolatency 1 -profile:v main -preset llhq -maxrate 500k \
-bufsize 1m -qp 0 -f mpegts udp://<client 1 IP>:<some port you choose> -c:v \
h264_nvenc -zerolatency 1 -profile:v main -preset llhq -maxrate 500k -bufsize 1m \
-qp 0 -f mpegts udp://<client 2 IP>:<some port you choose>
Quick disclaimer: I am not an ffmpeg pro. I’m fairly certain this can be optimized to smithereens, but for the purposes of this blog post (and my usage), this is more than good enough.
The command above screen-grabs the region defined above (essentially the two virtual monitors) and sends the stream to both devices. The other flags are there to decrease the latency as much as possible. Note that this setup still doesn’t achieve sub-one-second latency, but it is much faster than what’s achievable with VNC (ffmpeg pros could probably get it there).
Ensure that each client allows ingress on its port in the firewall; then, on the respective clients, run the following ffplay commands:
# Client 1 (the 1920x1200 one)
ffplay -vf "crop=1920:1200:0:0,setpts=0" -fflags nobuffer -flags low_delay \
-framedrop -strict experimental -probesize 32 -fast -an udp://127.0.0.1:<port>

# Client 2 (the 1920x1080 one)
ffplay -vf "crop=1920:1080:1920:0,setpts=0" -fflags nobuffer -flags low_delay \
-framedrop -strict experimental -probesize 32 -fast -an udp://127.0.0.1:<port>
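The crop values in these commands are just each monitor’s absolute xrandr offset minus the origin of the grabbed region. Here’s a tiny helper (my own hypothetical illustration, not part of the original setup) that makes the arithmetic explicit:

```python
# Hypothetical helper: derive each client's ffplay crop filter from the
# monitor geometry reported by `xrandr --query` and the x11grab origin.
def crop_filter(width, height, mon_x, mon_y, grab_x, grab_y):
    # crop=w:h:x:y, where x/y are relative to the top-left of the grab region.
    return f"crop={width}:{height}:{mon_x - grab_x}:{mon_y - grab_y}"

# My grab origin is :0.0+3360,1000:
print(crop_filter(1920, 1200, 3360, 1000, 3360, 1000))  # crop=1920:1200:0:0
print(crop_filter(1920, 1080, 5280, 1000, 3360, 1000))  # crop=1920:1080:1920:0
```

Client 1’s monitor sits exactly at the grab origin, so its crop offset is 0:0; client 2’s monitor sits 1920 pixels further right within the grabbed region.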
The two clients should connect, after which you can press “F” to fullscreen the window. Congratulations, both devices should now be displaying your virtual screens! You can now interact with them as if they were external monitors to your host machine.
The solution above can be combined into a single script to suit your needs. It shows that even without dedicated software, it is possible to have a virtual monitor setup that (basically) supports unlimited monitors.
If you have any spare devices lying around that can run VNC clients / ffplay, give this a shot! You may be able to give them a new lease on life as secondary monitors.
Happy Coding,
CodingIndex
---

Ah yes, I can already hear the scorn and disdain of some of you wondering where I’ve been all this time. Short answer: I got lazy. Long answer: I have a lot to do, but I’m also procrastinating. Man, I swear, it’s probably a comedic routine at this point to start my blog posts with some kind of excuse.
Anyways, you’ve read the title right; I tried to make another game!
untitled game - Source: Me
Note: Skip to Day 1 for the actual blog content.
Table of contents:
Now, if you’ve ever read any of my blog posts, you’d know that I’ve tried making a game before, with a deadline to boot. It was called Failed Game, and was made for my buddy ModelConverge. It was a massive failure, with me spending eons trying to get the exact movement I wanted, fixing size mismatches, writing a “manager”, etc. Absolutely horrendous time management.
Surely, I have been polishing my skills to create a game that can captivate players for a game jam, right? I definitely should have improved since 2020!
Nope, I’ve not touched game development since then because I was traumatized by how little I got done.
Fast forward a few years (2023, around March): I started watching Neuro-sama, and got hooked the moment an Alternate Reality Game (ARG) was released. This somehow warmed my cold introverted heart, leading me to create my first Twitch account, revive my old Discord account, and chat with random internet strangers about how we hold this cute little AI and her creator in eternal reverence.
Here are some clips I hold dear to my heart:
The ARG is found here.
The Neuro-sama community is one of the most “at-home” places I have felt in a while. Even during my lurking phase, I felt nothing but awe; people were kind, talented, and helpful. To me, the fact that this community exists at all is nothing short of a miracle.
So, out of love for the little AI and her creator, I began contributing by attempting the ARG (badly; I’ve been nothing much but deadweight). I had no other talents to contribute: I can’t do art or music, have no bright ideas, and worst of all, I’m not exactly a superstar programmer, especially compared to the AI’s creator and most people in the #programming channel. Talk about a failure for someone who literally runs a technology-related blog!
When the Neuro-sama Birthday Game Jam (28/12/2023 - 31/12/2023, 72 hours) rolled around during the subathon, I knew this was my only chance to get involved and actually do something. A recap:
I was hesitating. On one hand, I knew for a fact that I couldn’t have created anything remotely playable even with infinite time, much less in 72 hours. Plus, I actually had real-life work to complete, which I took up because I was sure I wouldn’t have any other commitments. On the other, I literally had no other chances to contribute to the community. Furthermore, I’ve found that I work amazingly under tight deadlines, as was the case during my serial hackathon days with ModelConverge and nikhilr.
Amidst tormenting myself with hesitation, the theme was announced on-stream by Neuro-sama to be “Lost & Found”. Pondering what I could create, I realized that I actually had ideas; more than anything, I had a story I wanted to tell.
While you’re here, have a look at my short stories. They’re a horrible collection of short stories I wrote expressing what I feel about our current world.
And so, I joined the Game Jam. Alone (I think I have crippling social anxiety).
When I heard the theme “Lost & Found”, my mind wandered to “oh, digging!” The player could dig up lost items. Sounds good to me!
What else can you dig?
…
Graves.
I love games that can evoke emotions, because I don’t normally feel them. Happiness is an overrated emotion, so I decided to go for the sad route.
Tragedies are pretty difficult to convey; overexploiting elements will cause the story to become stale, turning into a case of “oh, the author’s at it again”. For example, if I kept killing important characters left and right, the players would become numb to that sensation, and the story may become predictable.
A tragedy is good when it is unexpected from the onset, but makes sense after the fact.
Note: I have no idea what I’m talking about, this is all just personal takes on what makes a tragedy.
Digging for items? Digging for graves?
Let’s add a plot twist. Let’s make them dig their own grave.
Grave Sprite - Source: me
After spying on some streams of people building their games, I set out to code the core mechanics first (this was a good idea). The ideas are as such:
At this point, I have not figured out how to progress the story yet.
This set of core mechanics took a good total of 2 days to fully implement, including actually sleeping. Most of the difficulty stemmed from me trying to figure out how Godot worked (I’d never used a game engine in my life), what the heck nodes are, and how they interact.
I had the most trouble figuring out the difference between Area2D, CharacterBody2D, and RigidBody2D, which all have different callbacks and different uses. Figuring out the difference between area collision and body collision was a huge time sink :sweat_smile:
I used a walker for the level generation, which basically uses DFS and some parameters to randomly generate a walkable path. This effectively means we have infinite level generation!
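For illustration, here’s a minimal sketch of the walker idea (a simplified random walk in Python, not the actual code used in the game):

```python
import random

def generate_path(steps, seed=None):
    """Random-walk 'walker': carve out a connected path of grid cells."""
    rng = random.Random(seed)
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        # Step one cell in a random cardinal direction.
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = generate_path(50, seed=1)
# Every consecutive pair of cells is adjacent, so the carved path is walkable;
# run it with a bigger `steps` value and you effectively get endless levels.
```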
I also implemented enemies, with plans to implement different types of enemies (didn’t end up doing it because of time). To attack, the player would also use the shovel; hence, you couldn’t dig and attack at the same time. The idea was to challenge the player to knock the enemies back far enough before digging for items. Speaking of items, I implemented various levels of items to add some variance to the game.
Amazing green ball with quality indicator - Source: me
The most challenging part of this day was figuring out (for the life of me) how collisions with tilemaps worked, because I had no idea. Even after setting the right tiles for collision on a TileSet, the player character couldn’t collide with the TileMap properly. It took a few hours, but I eventually figured it out and used a CharacterBody2D in floating mode to introduce tile map collision physics.
The HUD was also introduced on this day, on another scene, adding HP and Stamina stats. I also eventually added other elements to the HUD, like the score and a timer indicating the amount of time left before the end of the level.
The second most challenging part of the day was navigation; it turns out navigation only works on one layer (at the time of writing) based on this PR, and so I tore my hair out for no reason trying to figure out what in the world was going on. In the end, I resolved the navigation layer issue by using the background layer.
At first, I wrote linear pathfinding using the Behaviour design pattern. But, like, who has time for design patterns in a Game Jam?
To end the day, I used the path generated by the walker to also randomly place items.
On this day, I implemented real bars to represent HP:
Bars - Source: me
And finally added stamina. When attacking, I figured the player should have some visual feedback that something is happening, so I decided to add slashes:
Slashes - Source: me
I realized that not many people will understand the digging mechanic upon spawning on a random level, so I decided to make a tutorial level:
Tutorial Level - Source: me
Then, I put healthbars on enemies:
Enemy HealthBar - Source: me
I also added some more qualityoflife mechanics, such as pressing a button to start a map, restarting a level, and a “level over” screen.
Honestly, at this point, I wasn’t sure if I could complete the game. I saw some people in the community becoming disheartened that they might not finish their game and dropping out, but I figured I’d continue anyway.
So, I sat my butt down on my chair and began working harder.
I implemented “landmarks” - kinda like random small terrain objects that spawn on the foreground layer as memorization helpers for the player.
Landmarks - Source: me
And… well, I think the core mechanics were done!
I suck at art. Nevertheless, I sat down and drew some sprites:
Sprites - Source: me
And implemented them into the game.
Finally, I decided to actually flesh out the story. In my mind, I wanted to create something that would evoke some sort of emotion within the player. The rough idea of the story was “dig to recover memory fragments”, ending off with “here’s the whole reason why you’re in this mess”. The rough storyboard was as follows:
- Upon collecting x number of fragments, play a special level
- Upon collecting y number of fragments in total, play the ending special level

I ended up with 3 different special levels; I don’t really want to spoil the story, so here is an overview of two of the levels (the 3rd one is a story spoiler, so I won’t show it):
Special Level 1 - Source: me
Special Level 2 - Source: me
I would say I did pretty okay with the story. I’m not a professional writer, but I reckon it got the job done.
At this point, I only had one hour left, so I definitely couldn’t learn to compose my own music in time; instead, I searched online for a suitable track. I wanted a “lost in the forest” kind of vibe, but in a depressing tone, which led me to this page, which has a very fitting tune called “Goodbye Tales”.
Adding an audio player, whipping out some quick code to play it on a loop, and I shipped it and called it a day.
It was not a good game. The core mechanics were “complete”, but definitely not polished. The art style was absolute garbage, and the music wasn’t even mine.
The scoring mechanism to obtain fragments and reach special levels was completely broken, so I had to add a small note to guide players towards obtaining them.
Many community members were able to notice the novice attempt at creating something resembling a game and gave me some encouragement:
Community Encouragement - Source: rating page
Once again, my heart is warmed by the thoughtfulness of the community members.
I could have done better on the following aspects:
If I had collaborators, I wouldn’t have needed to spend so much time creating assets and could have focused on actually building a fun game. However, I am also socially awkward, and have no idea how to properly do game development. Furthermore, I realize that if I hadn’t worked alone, I’d likely never have had the chance to write the story I wanted to convey. After all, not everyone wishes death upon someone/something they hold dear.
To be honest, the reviewing stage. I was given the chance to rate other people’s games, and found many gems. Take a look at the submissions page and try some yourself!
I’m a numbers-oriented person, and so I obsess over my analytics. This is partly the reason why this website doesn’t have Google Analytics: I’d probably compulsively obsess over it and get no work done.
However, that wasn’t it. I also enjoyed watching people play my game (when they try their best to avoid the bugs, of course). The fact that someone out there is experiencing the thing I’ve crafted, feeling the things I intended for them to feel, and generally thinking it wasn’t an abysmally horrible creation is something that keeps me ticking.
It reminded me of why I wanted to do technology in the first place: to create things used by others, creating as large an impact as possible. To this end, I’ve explored being a content creator (if you remember this, you’re a real one), hosted large services, written stories, joined companies to work on large stuff, and made many of my impactful projects open-source. Most of them were misses, but I can at least exclaim that I tried at one point.
Just like hackathons, the adrenaline really helped me realize I had skills I never thought I possessed: from rapid learning via experimentation in Godot, to drawing sprites even though I failed art, to writing stories even though no one ever reads mine. All of these (albeit almost nonexistent) skills helped me complete something as complicated and scary as a game.
However, unlike hackathons, where I enjoyed working with likeminded individuals to get a product out that could potentially solve industrylevel problems, Game Jams are an expression of the team/individual’s creativity.
I don’t think I’ll join just any game jam in the future; I do need to care about it. The fact that the game jam was centered around Neuro-sama helped a lot, because I already had a desire to give back.
However, I think I’ll participate in the next Neuro-sama game jam, whenever that happens. I have many more stories I want to tell with the Neuro-sama-verse characters. They’ll probably not be very happy stories, though!
You can play the game on itch.io, and the source code can be found here.
Again, it’s not a very interesting game, so I hope you’ll forgive me for not providing a better experience.
Nevertheless, it was a fun 72-hour game jam! Hopefully you had as much fun reading about my experience as I did reminiscing about it.
Happy Coding
CodingIndex
---

Suppose a hypothetical situation where you gained access to a Python REPL on some server, somewhere. The REPL is artificially limited such that you have no access to any file or networking.
Given that you are in a REPL, you can theoretically write any program you want; however, you are too lazy to write such a program, and instead wish to run an arbitrary executable. After running some REPL commands, like:
>>> import sys
>>> sys.version
'3.10.12 (main, Jun 11 2023, 00:00:00) [GCC 11.4.0]'
>>> sys.platform
'linux'
You realize the following:
As someone who knows how to code, surely you can whip up a script that can execute any arbitrary binary file even under these conditions, right?
On Unix and Windows, Python supports executing a binary, as long as a file path is specified. This is typically done with the following recipe:
import os
os.execv("/bin/echo", ["-e", "hello world"])
The code above causes /bin/echo to replace the current process immediately and prints “hello world”. After /bin/echo quits, so does Python.
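Since os.execv replaces the calling process, a common trick (sketched below; assumes a Unix system with /bin/echo) is to fork first and exec only in the child, so the parent survives to collect the output:

```python
import os

# Fork, then exec in the child; the parent survives and reads the
# child's output through a pipe.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: point stdout at the pipe, then replace ourselves with echo.
    os.close(r)
    os.dup2(w, 1)
    os.execv("/bin/echo", ["echo", "hello world"])
    os._exit(1)  # only reached if execv itself fails
else:
    # Parent: wait for the (now-replaced) child and read what it printed.
    os.close(w)
    os.waitpid(pid, 0)
    output = os.read(r, 1024)
    os.close(r)
```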
Great, problem solved, right? Unfortunately, the oddly specific constraints stated above have explicitly denied access to files, which includes the /bin/echo executable.
Okay, so maybe we include the executable as part of the script instead. Since we know that the REPL runs on Linux, we spin up a Docker container, and begin experimenting.
First, we get the /bin/echo program as bytes:
>>> data = open('/bin/echo', 'rb').read()
>>> data
b'\x7fELF\x02\x01\x01\x00...'
Backslashes look really scary, so let’s convert it to a Base64-encoded string:
>>> import base64
>>> data_str = base64.b64encode(data)
>>> data_str
b'f0VMRgIBAQAAAAAAAAAAAAMA...'
Great, let’s copy the whole string and keep it in our clipboard for now.
Next, we whip up a script, and write:
import os
import base64
bin_file = base64.b64decode(b'f0VMRgIBAQAAAAAAAAAAAAMA...')
os.execv(bin_file, ['-e', 'hello world'])
Then we run the program, and oh…
Traceback (most recent call last):
File "your_file_here.py", line 1, in <module>
ValueError: execv: embedded null character in path
Turns out, even with all the bytes of the executable, you can’t just run it; Python’s os.exec* series of functions only supports executables specified as paths.
Well…
That statement is only half-true. As of Python version 3.3, os.execve supports taking a file descriptor in place of a path, according to the official Python documentation.
According to this StackOverflow answer, a file descriptor is an entry created by the OS when a resource (e.g. file or sockets) is opened. This entry stores information about the resource, which includes how to access it. On Windows, this is also known as a handle.
The file descriptors on Unix can be found in /proc/<pid>/fd, where <pid> is the process ID of the current process. Each file descriptor is represented by an integer.
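As a quick aside (my own illustration, not from the original post), you can see that file descriptors are plain integers by asking the OS for a fresh pair via os.pipe() and driving them with the low-level os.read/os.write calls:

```python
import os

# os.pipe() hands back two fresh file descriptors:
# a read end and a write end, both plain integers.
r, w = os.pipe()
print(r, w)  # two small integers, e.g. 3 4

os.write(w, b"hello fd")   # write raw bytes to the write end
data = os.read(r, 1024)    # read them back from the read end
os.close(r)
os.close(w)
```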
Okay, but why is this important? Because the standard streams, i.e. standard input, standard output and standard error all have their own file descriptors, which are 0, 1, and 2 respectively.
Notably, those standard streams definitely don’t occupy disk space; the file descriptors of these standard streams simply represent the concept of those streams (StackOverflow). Even though the files /dev/stdout, /dev/stdin, and /dev/stderr exist, they actually point to /proc/self/fd/<0/1/2>, which is basically /proc/<pid>/fd/<0/1/2>, the file descriptors in question.
In some sense, you can say that these streams exist in memory (they’re technically buffered there, according to this Quora post).
Now, answer me this: what happens if I pass os.execve a file descriptor pointing to a resource that has executable content?
The theoretical answer: we can execute things.
Let’s run an experiment on a computer we have full access to.
We create two files: redirect.py, which basically redirects the standard input to the standard output, and execute.py, which spawns redirect.py as a subprocess and attaches pipes to its standard output. execute.py will write the Base64 string to redirect.py, and redirect.py will respond with the raw bytes.
We have to do it this way because sys.stdin.read() reads strings instead of bytes, which causes issues when trying to pass an entire executable. With sys.stdout.buffer.write(), we can write raw bytes into the standard output. Since we hijack redirect.py’s standard streams with pipes, execute.py can also receive raw bytes from redirect.py.
In redirect.py:
import base64
import sys
r = base64.b64decode(sys.stdin.read())
sys.stdout.buffer.write(r)
sys.stdout.flush()
sys.stdout.close()
In execute.py:
import os
import subprocess
bin_file = b'f0VMRgIBAQAAAAAAAAAAAAMA...'
process = subprocess.Popen(['python', 'redirect.py'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
process.stdin.write(bin_file)
process.stdin.close()
os.execve(process.stdout.fileno(), ['-e', 'hello world'], {})
Giving it a quick whirl, we see… oh…
Traceback (most recent call last):
File "execute.py", line 10, in <module>
os.execve(process.stdout.fileno(), ['-e', 'hello world'], {})
PermissionError: [Errno 13] Permission denied: 5
Looking at this AskPython article, it seems like this error happens when:
Given that we’re using one of the standard streams, surely the file descriptor points to something that actually exists; and given that standard streams are exclusive to processes, we couldn’t have concurrent reads.
Hence, the only logical explanation stems from us receiving a permissions error. However, that conclusion is relatively ill-conceived: how do we assign permissions to a pipe?
After calling os.stat on both the process.stdout.fileno() file descriptor and a normal executable’s file descriptor, we discover that there are indeed indicators in the file mode that differentiate a stream from an actual file on the system.
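Here’s a small sketch of that check (my own illustration, using a pipe and a temporary file rather than the actual subprocess): the file-type bits of st_mode distinguish a FIFO from a regular file:

```python
import os
import stat
import tempfile

# A pipe's descriptor reports itself as a FIFO...
r, w = os.pipe()
pipe_is_fifo = stat.S_ISFIFO(os.fstat(r).st_mode)

# ...while a real file's descriptor reports itself as a regular file.
with tempfile.NamedTemporaryFile() as f:
    file_is_regular = stat.S_ISREG(os.fstat(f.fileno()).st_mode)

print(pipe_is_fifo, file_is_regular)  # True True
os.close(r)
os.close(w)
```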
In fact, it is possible to use os.chmod to change process.stdout.fileno()’s file mode, but that will still not yield a working result.
So, end of the road? Can’t be done? Not quite.
We have just established that we need files; the operating system has to understand that the file descriptor points to a resource that is meant to be a file.
This would mean that creating a temporary file would work; however, since we don’t have write access to the filesystem, as constrained above, we can’t do that. Instead, we simply create a file in memory.
But how?
If we look carefully in the Linux kernel manual, under the sys/mman.h header file, we see that there is an interesting function by the name of memfd_create. Here is a link to that manpage. The manpage describes that:
And wouldn’t you know it, Python’s os module has a memfd_create function!
Here’s the plan:

1. Create an anonymous, in-memory file with os.memfd_create
2. Write the decoded bytes of the executable into it
3. Seek back to the start of the in-memory file
4. Pass its file descriptor to os.execve and we’re off to the races!

Here is the final script:
# script.py
import base64
import os
import sys
bin_file = base64.b64decode(b'f0VMRgIBAQAAAAAAAAAAAAMA...')
in_mem_fd = os.memfd_create("bin_name", os.MFD_CLOEXEC)
os.write(in_mem_fd, bin_file)
os.lseek(in_mem_fd, 0, os.SEEK_SET)
os.execve(in_mem_fd, ['-e', 'hello world'], {})
Finally, running the script will net us the result we were expecting:
$ python3 script.py
hello world
What are the implications of this? For starters, you can embed any kind of executable into a Python script. In the case of malware, the script can download any random executable from the internet and run it without leaving a file trace on your computer.
With enough trickery, the script can also hijack standard input and standard output of the embedded executable, with the UI being indistinguishable from just running the executable directly.
On a lighter note, you can, in theory, package your entire suite of applications into a single Python script. It isn’t feasible in production, sure, but you can rest well knowing that it is indeed, possible.
Nevertheless, I hope this little fun adventure was entertaining to read. Until next time!
Happy Coding,
CodingIndex
---

Recently, I’ve worked heavily on GitLab CI/CD pipelines. In my line of work, these pipelines must incorporate security requirements, such as Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), Code Scanning, Dependency Scanning, and so on. Furthermore, the pipelines themselves should be templated to support several deployment variants, e.g. managed cloud services, and Kubernetes.
As with all things, if you’ve dedicated 60% of your time on something for 3 months, you’re sure to develop a lovehate relationship with that particular thing. For example, here is one gripe I have for GitLab CI/CD:
According to Variable Precedence, Project Variables have higher precedence compared to .gitlab-ci.yml variables. Hence, why in the world are .gitlab-ci.yml variables passed down to child pipelines spawned via trigger? That overrides the settings I’ve set in Project Variables, and it just doesn’t make any sense.
Moreover, there are so many issues open on GitLab’s own repository regarding CI that I sometimes find myself wondering if the tool is actually production-ready. Luckily (for you), this blog post is not about all the 789 problems I have with GitLab CI; instead, it is about the biggest pain point I had when developing pipelines: not being able to develop them locally.
Typically, if you were to develop a pipeline, you’d really only know if it worked when you push the commit to a branch somewhere. On a good day, the pipeline would fail because of some misconfigured variables; for example, wrong SonarQube credentials. In that scenario, you’d just have to modify the CI/CD variable from your settings, and invoke the reincarnation of our lord and savior: the retry button.
The Retry Button
What if the problem arises from your script? Unfortunately, this would mean you’d have to vim into your YAML config, change the offending script, create a commit, push, and wait for the entire pipeline to go through before you’d get feedback on whether your job is successful.
As a pipeline author, my job is to architect pipelines that will fail quickly so that developers get feedback as soon as something is wrong. Why, as the pipeline author, do I have to wait for an entire pipeline to figure out if I’ve fixed a job 4 stages later?
Being unable to test a pipeline locally also pollutes the commit log with unnecessary commits; of course, I can simply squash them prior to merging the .gitlab-ci.yml file into the default branch, but I still find it clunky and inelegant. The worst I’ve done is pushing 70 CI-related commits in a single afternoon, debugging GitLab CI services. For some reason, services networking wasn’t functioning properly for an in-runner DAST scan.
By the way, $CI_DEBUG_SERVICES is not an omnipotent flag that forces applications to produce logs; in some Kubernetes configurations, services simply won’t output logs.
In an ideal world, I’d be able to run the entire pipeline locally. Hence, I looked online and found firecow/gitlab-ci-local.
This tool makes use of docker to emulate the functionality of GitLab CI/CD, and even has services support. The feature parity is the most accurate I’ve seen; in fact, I’ve contributed PR #905, which pushes the tool closer to feature parity with actual CI/CD pipelines on GitLab.
The remainder of this blog post will walk through the typical workflow I follow when developing pipelines; it is not a comprehensive look into the full feature set provided by gitlab-ci-local. The focus here is feature parity, meaning to change as little as possible in .gitlab-ci.yml to get a pipeline working both on the runner and locally on your computer.
There are instructions for setting up the tool on various platforms in the tool’s README.md, so get Docker and the tool installed before continuing.
Let’s suppose we have a simple .gitlab-ci.yml file, like this:

```yaml
image: debian:latest

stages:
  - somestage

somejob:
  stage: somestage
  script:
    - echo "Hello, world"
```
If you run this with gitlab-ci-local --list, you should see somejob:
somejob is listed
Let’s quickly run it with gitlab-ci-local somejob:
somejob output works locally
This allows us to run fairly simple jobs that take in no variables. What if we want some variables?
Let’s update the .gitlab-ci.yml file:

```yaml
image: debian:latest

stages:
  - somestage

somejob:
  stage: somestage
  script:
    - echo "$SOME_TEXT"
```
If we suppose the variable will be set within GitLab’s CI/CD settings, then surely we need a “local” version of those settings; this is achieved via the .gitlab-ci-local-variables.yml file. Let’s create that file, and define SOME_TEXT:

```yaml
SOME_TEXT: "hello from the other side!"
```
Great, let’s make it so that our job creates some sort of artifact. This pattern is commonly found in build jobs:
```yaml
# ... just change somejob
somejob:
  stage: somestage
  script:
    - echo $SOME_TEXT > some_output
  artifacts:
    paths:
      - some_output
```
If you were to execute gitlab-ci-local somejob now, you should observe that some_output appears within your directory. By default, the tool will copy the artifacts to the root of the repository for your inspection. Of course, you can turn this off by running gitlab-ci-local --artifacts-to-source=false somejob.
Let’s suppose we write another job in the same stage that depends on the job above:

```yaml
# append this job into your .gitlab-ci.yml file
anotherjob:
  stage: somestage
  script:
    - echo "The file outputs"
    - cat some_output
  needs:
    - somejob
  dependencies:
    - somejob
```
If we now run gitlab-ci-local anotherjob, we should see that this job is able to get the artifacts from the dependent job. Caches also work the same way.
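As a sketch of what that looks like (the cache key and path are my own placeholders), a cached job is declared the usual way, and the cache is stored and restored between local runs:

```yaml
# Hypothetical cache declaration; restored on subsequent local runs
# the same way artifacts are.
somejob:
  cache:
    key: somecachekey
    paths:
      - .cache/
```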
You’d have noticed that I specified the job to run, gitlab-ci-local anotherjob, and noted that artifacts and caches are propagated correctly. This saves lots of development time: you don’t have to run all of the stages prior to the current job to check if your job works. This, to me, is a massive improvement over the original cycle of iteration, which required me to commit every change and wait for all prerequisite stages to run, only to be met with yet another error message to fix.
The whole pipeline can now be run with simply gitlab-ci-local. If you just want to run a single stage, then run gitlab-ci-local --stage somestage.
Typically, upon a successful build, we would want to upload the artifacts to some registry. For example, if I were to build a container, it is likely that I want to push to some sort of Docker registry.
GitLab offers a bunch of registries, including a Container Registry; you can read more about the supported registries here.
Note: As a GitLab user, you can authenticate to the Container Registry using your username and a Personal Access Token via docker login. The registry URL will typically be registry.<gitlab url>.com, where <gitlab url> is the instance URL. You can then use images from the Container Registry, like so: image: registry.<gitlab url>.com/some/project/path/image:latest. By default, GitLab runners will already be authenticated to the registry, so there is no additional step to authenticate your jobs.
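For completeness, the login from your own machine looks something like this; the username and token are placeholders:

```shell
# Authenticate against the instance's Container Registry with a
# Personal Access Token (placeholders, not real credentials).
docker login registry.<gitlab url>.com -u <username> -p <personal access token>
```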
Another Note: To push to the container registry, you need to define the following variables within your .gitlab-ci-local-variables.yml:

```yaml
CI_REGISTRY_USER: someusername
CI_REGISTRY_PASSWORD: somepassword
CI_REGISTRY_IMAGE: <registry URL>/<namespace>/<project>
```
Let’s say we’re on an esoteric project that doesn’t really use any of the above registries; so we’d choose to use the Generic Package Registry.
On GitLab runners, a token known as $CI_JOB_TOKEN
will be populated automatically, allowing the CI
job to authenticate to most GitLab services without any additional configuration from the job
runner. This also bypasses issues related to secrets rotation, which is a huge boon overall for
everyone involved.
However, $CI_JOB_TOKEN will not be populated automatically when running gitlab-ci-local, because, obviously, there just isn’t a valid job token to use. Hence, the obvious solution is to use a Project Access Token, and then change our .gitlab-ci-local-variables.yml to reflect the token:

```yaml
# ...whatever variables before
CI_JOB_TOKEN: <project access token here>
```
However, upon closer inspection of the GitLab documentation, we observe that the curl command has an issue:

```shell
curl --header "PRIVATE-TOKEN: <project_access_token>" \
     --upload-file path/to/file.txt \
     "https://gitlab.example.com/api/v4/projects/24/packages/generic/my_package/0.0.1/file.txt?select=package_file"
```
Here’s the catch: the $CI_JOB_TOKEN that is populated by the runner has the type of BOT-TOKEN, which means the correct flag to use in the job would be --header "JOB-TOKEN: $CI_JOB_TOKEN". However, the Project Access Token we’ve generated earlier requires the flag to be --header "PRIVATE-TOKEN: $CI_JOB_TOKEN" to run locally.
Remember the motto we’ve established earlier: feature parity with the GitLab runner. With the motto in mind, we simply change the flag to be --header "$TOKEN_TYPE: $CI_JOB_TOKEN". According to variable precedence, since our .gitlab-ci-local-variables.yml is considered to be a part of “Project Variables”, it has a higher precedence compared to job variables. So, all we need to do now is to set the job variable to TOKEN_TYPE: JOB-TOKEN, and set TOKEN_TYPE: PRIVATE-TOKEN within .gitlab-ci-local-variables.yml.
Hence, the final curl command that should be used is:

```shell
curl --header "$TOKEN_TYPE: $CI_JOB_TOKEN" --upload-file some_output "https://gitlab.example.com/api/v4/projects/24/packages/generic/my_package/0.0.1/some_output?select=package_file"
```
So, we create a job within our .gitlab-ci.yml, like this:

```yaml
# append
uploadjob:
  image: curlimages/curl
  stage: somestage
  needs:
    - somejob
  dependencies:
    - somejob
  variables:
    TOKEN_TYPE: JOB-TOKEN
  script:
    - curl --header "$TOKEN_TYPE: $CI_JOB_TOKEN" --upload-file some_output "https://gitlab.example.com/api/v4/projects/24/packages/generic/my_package/0.0.1/some_output?select=package_file"
```
And then amend our .gitlab-ci-local-variables.yml like so:

```yaml
# ...whatever variables before
CI_JOB_TOKEN: <project access token here>
TOKEN_TYPE: PRIVATE-TOKEN
```
Running gitlab-ci-local uploadjob should then yield a successful result:
A successful publish
A file output
Needless to say, this .gitlab-ci.yml also works when pushed to GitLab.
In large enough enterprises, you may encounter the need to include other templates, something like this:

```yaml
# appears at the top of the .gitlab-ci.yml file
include:
  - project: "sometemplateproject"
    ref: "sometag"
    file:
      - BUILD.gitlab-ci.yml
      - COPY.gitlab-ci.yml
      - TEST.gitlab-ci.yml
```
As long as you have access to git clone the current repository, the include will work transparently with the local tool. The gitlab-ci-local tool looks through your Git remote list, picks the first one, and attempts to fetch the referenced files.
This is useful because you can now do gitlab-ci-local --preview | less, which will render all of the included files into one gigantic file. If you have multiple layers of include, i.e. the included references themselves include other references, they will all be flattened and displayed. This makes debugging templates much easier.
In some pipeline architectures, child pipelines are heavily relied upon. In such configurations, you may have two pipeline files, maybe something like:

- .gitlab-ci.yml for common CI work,
- DEPLOYMENT.gitlab-ci.yml for project-specific deployment,

where you have a .gitlab-ci.yml job that looks something like this:
```yaml
spawnchildpipeline:
  stage: somestage
  trigger:
    include: DEPLOYMENT.gitlab-ci.yml
```
GitLab’s own pipeline editor doesn’t support multiple files; hence, you won’t have the nice features to validate rules, check conditions, etc.
However, that isn’t an issue with gitlab-ci-local; simply pass --file DEPLOYMENT.gitlab-ci.yml during development. All the flags used thus far, such as --list and --preview, work as expected.
Unfortunately, it seems like trigger is currently not supported by the local tool; you can only perform a “close” imitation by running this command:

```shell
gitlab-ci-local --file "DEPLOYMENT.gitlab-ci.yml" --variable CI_PIPELINE_SOURCE=parent_pipeline
```
Sometimes, when testing locally, you may not want to pollute the GitLab registry with unnecessary container images. In scenarios like this, it might be useful to create your own local registry for testing. Here’s a useful script to create 3 registries at once:
```bash
#!/bin/bash
if [[ "$1" == "" ]]; then
  docker run -d -p 5000:5000 --name registry -v "$(pwd)"/auth:/auth -v "$(pwd)"/certs:/certs -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry:2
  docker run -d -p 5001:5000 --name registry2 -v "$(pwd)"/auth:/auth -v "$(pwd)"/certs:/certs -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry:2
  docker run -d -p 5002:5000 --name registry3 -v "$(pwd)"/auth:/auth -v "$(pwd)"/certs:/certs -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry:2
fi

if [[ "$1" == "stop" ]]; then
  docker stop registry
  docker stop registry2
  docker stop registry3
  docker rm registry
  docker rm registry2
  docker rm registry3
fi
```
You can then hijack the $CI_REGISTRY_* variables via .gitlab-ci-local-variables.yml to point to your local registry:

```yaml
CI_REGISTRY_USER: someusername
CI_REGISTRY_PASSWORD: somepassword
CI_REGISTRY_IMAGE: 172.17.0.2:5000 # use your docker IP here, obtained via docker inspect
```
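A job consuming those hijacked variables might look like the sketch below; the job name and image tag are my own placeholders, and the same job should work unchanged against the real registry on a runner:

```yaml
# Hypothetical build job; $CI_REGISTRY_* resolve to the local registry
# when run via gitlab-ci-local, and to GitLab's registry on a real runner.
buildimage:
  image: docker:latest
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY_IMAGE"
    - docker build -t "$CI_REGISTRY_IMAGE/someimage:latest" .
    - docker push "$CI_REGISTRY_IMAGE/someimage:latest"
```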
To create a user, do the following:

```shell
mkdir -p auth
docker run --entrypoint htpasswd httpd:2 -Bbn testuser testpassword >> auth/htpasswd
docker run --entrypoint htpasswd httpd:2 -Bbn AWS testpassword >> auth/htpasswd
```
To list the images in the registry:

```shell
curl -X GET -u testuser:testpassword http://localhost:5000/v2/_catalog
```
The above spawns registries without TLS verification. If you’re using skopeo, you may have to set --src-tls-verify=false and --dest-tls-verify=false in your job scripts, via something akin to $ADDITIONAL_OPTS.
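Something along these lines; the job name, image paths, and the idea of routing the overrides through $ADDITIONAL_OPTS (left empty on the real runner) are my own sketch:

```yaml
# Hypothetical skopeo job; set ADDITIONAL_OPTS to the TLS overrides only
# in .gitlab-ci-local-variables.yml, and leave it empty on GitLab.
copyimage:
  script:
    - skopeo copy $ADDITIONAL_OPTS docker://$CI_REGISTRY_IMAGE/someimage:latest docker://some.other.registry/someimage:latest
```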
Generally speaking, developing pipelines can be fairly painful; however, on the road towards DevOps and automating away the pain of deploying applications, it is a necessary sacrifice. By using supporting tools such as gitlab-ci-local, engineers can iterate quickly during pipeline development. Hopefully, both the tool and this blog post have proved useful for any pipeline work you’ll be doing.
Happy Coding,
CodingIndex
Ever since my last blog post, which was the Advent of Code ’22, it seems like lots of things have happened, and the world is devolving into discourse.
My life has also experienced a drastic change as I finally went back to academia and touched a textbook for the first time in 3 years. I want to share some opinions I have. However, since no one really reads opinions fully anymore, I figured I’ll spin them into short stories that you can enjoy!
Note: Whatever views I express here are not related to my employer, college, alma mater, my parents, my hypothetical pets, my imaginary friends, or anyone/anything else. Hence, don’t go around claiming that “CodingIndex says this, imagine what other people from <insert institution here> think!”. Chances are, nine times out of ten, I hold the pessimistic and antisocietal opinion amongst my peers.
Steve is a great builder in his favourite video game, Minecraft. Back in 2011, he started off with little dirt shacks, and waited the night to pass while he watched online walkthroughs of the game.
He discovered many content creators in the same situation as he was. Being impatient, he furiously clicked through the episodes to see where he could have ended up.
What started off as a simple dirt shack grew to become a cosy wooden cabin, decorated with flowers, filled to the brim with utility on the interior. It wasn’t too shabby of a home for a game of blocks.
He clicked through another 10 episodes, and saw the content creator putting hours of work into their house, adding gardens, farms, lakes, building paths, creating stables, and becoming something magnificent. Houses slowly became castles, and utilitarian buildings became aesthetically pleasing constructions with modern architecture.
Steve was inspired. He started to watch tutorials on how to build better, and began refining his craft.
A few months later, he got to the point where he could will mansions and entire environments into existence. He knew the intricacies of how every block added color, variety and personality to the builds. He was genuinely enjoying the artform.
He was building a giant city when he got a ping from his friend on IRC. “Hey”, they said. “Check this out”.
It was a link that pointed him to a game mod that could procedurally generate modern architecture in the game. It used state-of-the-art technology that, with reference to all the architectural designs up till 2021, generated structures that looked realistic. It was even able to generate interiors using countless interior design plans!
“This is so cool!”, replied Steve. He downloaded the program and started to watch tutorials on how to use the mod. In his peripheral vision, however, he started noticing videos with the titles like “This mod will ruin Minecraft creativity!” and “Why building is dead” in his suggestions feed.
“Have you seen videos on it? Seems like you’ll be replaced soon.” the friend commented.
Steve pondered what this meant for himself; will people no longer appreciate the builders in this community? Will he become irrelevant? Have all the skills he has learnt till this point been for naught?
Steve went out for a walk. He observed that the real world was in a state of turmoil, with people fighting against the worsening economy, aging population, and social media overexposure. He was part of the gloomiest generation alive, and he was worried he wouldn’t be able to land a job with the everadvancing technology, or find a place to live. Is it even affordable to eat 3 meals a day anymore?
His mind wandered, thought after thought tangling in his mind, creating a large sense of unease about the future. He then reminisced about the past, about the good old days where he just built structures in Minecraft, worrying about nothing in particular.
An epiphany struck. Why did it matter if he was going to be replaced? Or if his skills are no longer appreciated? Or if he becomes irrelevant? Why did it matter to himself?
Steve sat in front of the computer, booted it, launched Minecraft, and began building. To him, building is what he does to escape reality. It’s his hobby, his passion, and what he wants to continue doing. Even if the entire world forgets such trivialities used to be done by humans, he wants to continue doing it; not because it is the “hip” thing to do, but because he finds it fun.
“I think the mod’s pretty cool, I’ve basically been doing the same thing. By the way, look at what I’ve built!” Steve excitedly sends a screenshot on IRC.
Steve is perfectly happy.
I am a wood carpenter.
When I was younger, I followed my dad around as he fulfilled odd jobs. He was multifaceted in his expertise, being well-known for repairing electrical installations, plumbing, pest control, and even motorbike maintenance. However, the one job that always fascinated me was wood carpentry.
Looking back at it now, he definitely wasn’t very skilled at it; he knew enough to get by repairing furniture, but definitely not enough to build furniture from logs. That level of skill would require machines and hand power tools, something outside the capability of our family’s finances.
Regardless, the thing that captured my interest was the first time he showed me how to join two pieces of wood; turns out, there were many creative ways to do so. The best kind of joints would hide the fact that there were two pieces of wood involved in the first place.
From there, my obsession with wood spiralled out of control; the types of wood, their strengths, what they signify in superstition, woodworking techniques, and so on. Eventually, I was the guy in town who could not only repair furniture, but also build it. In that sense, I surpassed my father.
While half-drunk, my father sent me to my first competition as a trial of my carpentry skills. There, I caught the attention of an executive from the Ministry of Labour, who decided to grant me a full-ride scholarship to a trade school. Overjoyed, I went on to the best trade school in the world and specialized in wood carpentry.
I’ve built over a hundred pieces of furniture at this point. While they have not seen use outside of my town, townsfolk would always comment on how my wooden furniture has withstood their harsh daily use for years and required minimal maintenance. Needless to say, I am proud of what I’ve created, but I definitely have a ways to go before I become a master.
The first year of the wood carpentry curriculum was everything I had already known, digested and used extensively in actual carpentry. However, being conceited goes against the values of an aspiring master craftsman; hence, I used the opportunity to etch the concepts onto my soul.
Of the hundred pieces of furniture I have created in my lifetime, some were catastrophic failures. Unbalanced stool supports, wood not covered properly in resin causing rot, etc. Nevertheless, I’ve learnt from those mistakes to create even better furniture. Craftsmanship is practical art: creative endeavours turned utility. I loved it as a hobby and a career; surely, I must love learning about it.
One of the assignments for a critical skill officially named “Blueprints” was to theorize about the strengths and weaknesses of different types of leg stands for a table. We were given four types of wood to think about; all of varying brittleness, weight, and resistance to bugs.
Sounds straightforward enough.
And so I submitted a report detailing a table, and theorized about each leg stand, how much force they can each support, and which wood to pick for a longlasting piece of furniture. I also submitted a blueprint, and an additional analysis guide for the derivation of that blueprint. I took some photographs of a mock table I built with that blueprint to prove my point.
After a month or so, I received devastating feedback. Turns out, the instructor expected a rectangular table, while I had analyzed table stands for a triangular table. I was also expected to report on each leg of the table, even though the results would have been identical to reporting just one. For the blueprint, I was chided for providing additional materials. When I wanted to refute the feedback, I was told that this was professional judgement. Maybe I was too narrow-minded to realize what the master wanted.
Regardless, I now have an official record of being weak at “Blueprints”, a skill so fundamental to being a craftsman that bad = incompetency, no matter the other skills. Perhaps I’m thinking too much about it, maybe I’ll have the chance to explain it when I look for an apprenticeship programme. Surely I wouldn’t be rejected before even stepping foot into the real working world, right?
The Ministry of Labour dropped my scholarship for my poor performance in trade school, where “Blueprints” had a great influence in the decisionmaking process. It can’t be helped; trade school performance is baked into the contract after all.
During examinations, all I could think about was “Blueprints”. Distracted, I flunked all the examinations. Seems like dropping my scholarship was a good move for the Ministry of Labour; somehow I’ve lost the “spark” and “interest” to remain in the woodworking business.
In my second year, I tried to get an apprenticeship to further my woodworking. I would always get questions about “Blueprints”, and why it received such an undesirable record. Whenever I was given the chance, I would explain; but what would an apprentice know anyway? Compared to the well-esteemed expert that is the master craftsman, my words may as well be the whispering wind. Of course, I wasn’t accepted to any apprenticeship programme, because I supposedly can’t do blueprints. Meanwhile, my peers were seen as budding talents of the craft; up to this point, they had built a single table.
I couldn’t find a job, and had to drop out for my third year. Trade school is expensive, after all. Back at home, I was ridiculed by the same people who encouraged me in the past for being a quitter, and being lucky. Or maybe I was just imagining it.
Somehow, my entire life now revolves around being weak at “Blueprints”. Maybe I’m just imagining it.
During a family reunion, I saw cousins who had no interest in wood working becoming successful after attending the trade school. They seem to have their life put together. Strange that the dynamic was the other way just a few years ago. They’re looking at me. Their faces seem to be in disdain. I could see hatred from the years of being compared to me, all directed towards me at once. They seemed satisfied. Surely, it must be my imagination.
I went home and tried to build a table. I didn’t have enough tools to do so; I’ve gotten into an argument with my family and they’ve tossed away most of them, stating that “it has ruined my life”. Surely, they still support me, and I’m just imagining things.
It is now 4 years since I left the trade school. I’m jobless, except for the occasional odd job. I’ve not been asked to perform carpentry, since we now have several experts (my cousins) in town. Oh, how I envy them. While I experience financial drought, they comfortably get by creating masterpiece after masterpiece.
Today is the day my last remaining family died. They said it was of old age. I no longer have a reason to work. I no longer had a reason to live. Maybe I’ll go back to wood carpentry?
No. Of the hundred and one pieces of furniture I have created, all of them were catastrophic failures. If the world thinks so, then it must be true. I shouldn’t soil this world with my horrendous work.
They’d be sad if I just stopped living. So I’ll continue, but I’ll lay rest what I am within. This is the best way.
I wish I was a wood carpenter, I thought, as I quelled my anxiety after waking up from my own nightmare.
“Hey, you’re still working on that draft?”
After nearly falling over from the friendly strong pat from David, Jack regained his balance and composed himself.
“Yeah, it really is taking a while. I’ve only gotten reliable firsthand accounts from the Core, but the AntiCore? Hearsay at most.” Jack replied, taking a huge swig from his glass of ale.
David followed suit. Ever since the war started, the entire news agency went into chaosmode covering the events. Jack and David were specially tasked to gather information for an exclusive news column on the war.
“I’m surprised you’ve even gotten a hold of a contact from the Core countries at all; who’s free enough to answer you?” David inquired. A reasonable question, seeing how their country was neither Core nor AntiCore.
Jack put his drink down, and spent some time staring at the table. It looked as if he was too drunk to hear anything properly, or he was deep in thought  it wasn’t exactly clear given the question.
“I… made a promise to one of them.” Jack said, chugging the rest of his ale, and stood up in one swift motion. Before David could respond, Jack walked out of the door, but not before shouting across the bar, “sorry, see you soon, I’ve gotta meet someone!”
It was 01:31AM; who would he even be meeting? It’s probably none of my business…
David found himself tailing Jack. Jack looked around him at every intersection he took  he seemed to be more wary as streets became alleyways. Eventually, Jack stopped in front of a metallic backdoor of an otherwise unsuspecting noodle shop.
Knock knock knock
David pressed himself against a nearby wall. What was Jack doing in this suspicious alleyway?
A husky accent replied softly in broken English through the door. David couldn’t quite hear it, but it didn’t stop his imagination from going wild. Was Jack involved in some illegal business? Moneylending? A drug deal perhaps? Maybe even…
The metal door creaked open, and Jack slipped inside. When the door closed, David edged closer to the door, and leaned his ear against the cold metal. While muffled, he could just about make out the conversation:
“… sellout … you and your family … citizenship.” this was a native voice. This must be Jack.
A long silence loomed. He could hear a disgruntled husky sigh. This must have been the other person through the door.
“… corruption … attack … no basis … lose. locations… do not reveal my identity. my family … anticore”
The rest of the conversation was inaudible, except: “I’ll bring you 28 thousand tomorrow”, which got progressively louder. David took the cue and fled the scene.
The next day, David went over to Jack’s desk, where he found Jack furiously typing away. David kept silent, having inferred how Jack was getting his information. At night, during their drinking session, as David was mentally anticipating Jack to fulfil his promise, the television was broadcasting about the current state of the war.
“That’s your work, right? Good job!” David happily exclaimed, as he raised his ale to clink glasses in celebration. Jack happily obliged, while watching the television intently.
“… AntiCore Citizen Garn Nova said that the country is full of corrupted officials. In this exclusive report, we reveal key AntiCore military installations never discovered till today. Stay tuned.”
David’s eyes widened in horror; at the corner of his eyes, he observed Jack still maintaining his perfect smile.
All of a sudden, the door to the bar slammed open. All eyes were on the perpetrator. He unholstered his rifle, letting out a battle cry, followed by uncontrolled shots into the ceiling and anything else in his path. In the ensuing panic, full of shrieks, the man shouted in a rather familiar voice: “Where are you, Jack!”
Among the shrieks and a general sense of fear in the air, David’s mind remained calm; he recognized this voice. The tone and signature were identical to the husky voice he heard yesterday. This was Jack’s correspondent from the AntiCore.
“How dare you, Jack! You have doomed my entire family!” the man was livid. He shot at random things in the room, hoping one of his bullets would slot itself cleanly through Jack’s temples by sheer luck  he didn’t care about the innocent crossfire; to him, his life has ended the moment his name was published on television.
David’s eyes darted around the room, searching for the antagonist of their current predicament. What he found was not his meek coworker who stood for writing excellence, but a monster who kept his smile, talking to his phone under cover from the uncontrolled bullet spray.
In seconds, men donning blue tactical gear emerged from the back of the bar. The resulting chaos between the police force and a man who lost everything was too gruesome to watch. To an outsider, it would seem to be a case where “the police force saves the day from a madman”, but to David, he was witnessing the tale of a man who trusted a monster.
The event made national news  politicians used it to talk about national defence, and there were even talks to ally themselves with the Core. Under the leadership of the newlypromoted chief investigator, who authored both the exclusive insight to the war, and the bar event that shook the nation, the news agency prospered.
David resigned a few months after, supposedly to “look for new opportunities elsewhere”.
David sighed as he carried his bag of canned beer on his way home. He had since moved away from the district where he used to work, to stay away from the things that sometimes fuel his nightmares. He never thought that someone he was close to could kill a man, let alone their entire family.
Sometimes, the sad tale of a poor man who sought refuge in another country, alone and away from his family in the AntiCore states replayed itself, rentfree in the mind of David. What would he have done if he was the man? What should he have done when he found Jack performing shady dealings? In the first place, how could he have misjudged Jack?
As David reached his street of residence, he noticed a familiar figure and stopped in his tracks. The figure noticed this and turned towards his direction.
Somehow, without even confirming, the figure said, “David! It has been a while, I’ve been trying to reach you forever!”.
“How did you know…”
“It’s not nice, ignoring your friend, ya know.”
“I didn’t tell any…”
“Especially since I let you follow me to my secret spot the other day.”
David froze. Jack knew that he was followed all along.
“You know, trust is a funny thing. They say it’s difficult to build, but easy to break. Have those people ever been desperate?”
“You are a monster.”
Jack paused his overexaggerated movements, and stared at David for a while. Disgust filled David as Jack expressed a puzzled look.
“I just wanted to be promoted. The economy’s hard, you know?”
Repulsive. Abhorrent. Absolute filth. Trash. How is this parasite still alive?
“This is basically what everyone does, ya know? The guy wasn’t even one of our own!”
David snapped. He couldn’t remember exactly what he said, but it was something about exposing Jack to the entire world.
“That’s troublesome. It’ll hinder my progress to become a minister.” Jack stated apathetically, as if he practiced for this exact scenario.
“Hence, it’ll be nice if you don’t do that.”
In David’s anger, he failed to notice the blacksuited people surrounding him. Before he could react, his entire world went black.
In his final moments, he thought about the first time he met Jack. Jack was serious about his work, but he absolutely hated how the world worked  it was a worldview that resonated with David. When did this Jack change? Or, could it be, that there were no changes in the first place?
Then, he thought about the man. Even in David’s final moments, David couldn’t help but feel pity for the man who was misled to accept the poisoned kindness of a monster. Just how many more monsters roam this earth?
Hope you enjoyed the short stories! I’m not much of a writer, but these were quite fun to assemble.
If you like programming, please subscribe to my blog, either via RSS (point your reader to https://codingindex.xyz/feed.xml) or via email.
Happy Coding,
CodingIndex
After having absolutely zero blog posts for the past 11 months, including on my treasured anime page, here I am declaring that I will be participating in the Advent of Code (AOC).
I’ve never completed an AOC before, so it’ll be a nice challenge to breathe vitality into this blog before the New Years. To motivate me, I have invited my buddies over at modelconverge and nikhilr to join me.
Throughout AOC, I will update this blog post in a rolling manner to discuss my thought processes from ideation to solution. Do check back every day!
Thanks to deadlines being a thing, I ended up doing Day 1 twenty-four hours late. Anyway, it seems like we need to make a simple program to figure out who is carrying the most calories among the elves.
I broke down the problem into processing chunks of numbers at once:

- chunks are separated by \n\n (two newlines), and
- numbers within a chunk are separated by \n.

So, the steps to solve this problem will be:

1. Create a list, l;
2. On encountering an empty line, push a new 0 into the list, l;
3. Otherwise, add the parsed number to the last element of l;
4. Take the maximum of l, completing our algorithm.
is the accumulator of integers, and we are processing a list of strings with a function that:
Then, we take the maximum of the list. Naturally, this means that the problem can be solved with two lines of Python:
```python
from functools import reduce
print(max((reduce(lambda accum, y: accum + [0] if y == "" else accum[:-1] + [accum[-1] + int(y)], open("input.txt").read().splitlines(), [0]))))
```
Where the contents of input.txt are given by the puzzle input.
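For readability, the same fold can be written as an explicit loop; max_calories is my own helper name, and the tiny input below is made up for illustration:

```python
# Expanded version of the reduce one-liner above (my own sketch).
def max_calories(text: str) -> int:
    totals = [0]
    for line in text.splitlines():
        if line == "":
            totals.append(0)          # blank line: start a new elf's total
        else:
            totals[-1] += int(line)   # accumulate calories for the current elf
    return max(totals)

print(max_calories("1000\n2000\n\n4000"))  # prints 4000
```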
The second part wants us to get the three highest elements in the list. So, just a small tweak to part 1:
from functools import reduce
print(sum(sorted(reduce(lambda accum, y: accum + [0] if y == "" else accum[:-1] + [accum[-1] + int(y)], open("input.txt").read().splitlines(), [0]), reverse=True)[:3]))
All I did here was to replace max
with a composition of sum
and sorted
.
Parsing the problem into programmer monkey brain language, the question is essentially:
Each line is of the form `A X`, where `A = ['A','B','C']` and `X = ['X','Y','Z']`, with lines delimited by `\n`. `A` and `X` are enumeration representations of the possible moves in rock, paper and scissors. The truth table is as follows:

| Left | Right | State |
|------|-------|-------|
| A | X | Tie |
| B | Y | Tie |
| C | Z | Tie |
| A | Y | Win |
| B | Z | Win |
| C | X | Win |
| A | Z | Lose |
| B | X | Lose |
| C | Y | Lose |
`X`, `Y`, `Z` have a partial score of 1, 2, 3 respectively. The first thing I did was to “normalize” and simplify the truth table by taking the difference between `X` and `A`. So, before simplification, the table looked like this:
| Left | Right | Diff | State |
|------|-------|------|-------|
| 1 | 1 | 0 | Tie |
| 2 | 2 | 0 | Tie |
| 3 | 3 | 0 | Tie |
| 1 | 2 | 1 | Win |
| 2 | 3 | 1 | Win |
| 3 | 1 | -2 | Win |
| 1 | 3 | 2 | Lose |
| 2 | 1 | -1 | Lose |
| 3 | 2 | -1 | Lose |
I then simplified the table by noticing that only the difference matters, and that each difference value maps to exactly one state. So, the table looks like this:

| Diff | State |
|------|-------|
| 0 | Tie |
| 1, -2 | Win |
| 2, -1 | Lose |
Now, the problem of obtaining the win/tie/loss partial score has been simplified to check for these 3 cases. So, I could now write something like:
// a is normalized left, x is normalized right
int partial_score = (a == x) * 3 + (x - a == 1 || x - a == -2) * 6;
The next subproblem to tackle will be to normalize our inputs. All ASCII characters can be expressed as integers, and hence can be normalized by the lowest value of each range. In other words:
// a is left, x is right
int normalised_a = a - 'A';
int normalised_x = x - 'X';
Performing this normalization almost conforms to the partial sum where `'X', 'Y', 'Z' -> 1, 2, 3`. Right now, the map looks like `'X', 'Y', 'Z' -> 0, 1, 2`. To fix this, just add 1:
// normalised_x as above
int partial_score = normalised_x + 1;
So, the total score can now be expressed as:
// a is normalised left, x is normalised right
int score = (x + 1) + (a == x) * 3 + (x - a == 1 || x - a == -2) * 6;
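As a sanity check, the closed-form score can be compared against a naive evaluation of rock-paper-scissors. A sketch in Python for brevity; the helper names are mine:

```python
def score_formula(a, x):
    # closed form from above: a, x are normalised moves in 0..2 (opponent, us)
    return (x + 1) + (a == x) * 3 + (x - a == 1 or x - a == -2) * 6

def score_naive(a, x):
    # brute-force check: (x - a) % 3 == 1 exactly when we win
    outcome = 3 if a == x else (6 if (x - a) % 3 == 1 else 0)
    return (x + 1) + outcome

# the two formulas agree on all nine move pairs
assert all(score_formula(a, x) == score_naive(a, x)
           for a in range(3) for x in range(3))
```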
All we need to do now is to do the preprocessing and required code to actually obtain x
and a
. I first wrote it in C, which looks like this:
#include <stdlib.h>
#include <stdio.h>
int eval_score(char a, char b) {
char opp_a = a - 'A';
char opp_b = b - 'X';
return opp_b + 1 + (opp_b - opp_a == 1 || opp_b - opp_a == -2) * 6 + (opp_a == opp_b) * 3;
}
int main() {
FILE* file = fopen("input.txt", "r");
long accum_score = 0;
do {
char first, second;
fscanf(file, "%c %c\n", &first, &second);
accum_score += eval_score(first, second);
} while (!feof(file));
printf("%ld\n", accum_score);
return 0;
}
This was too long, so I decided to rewrite the same thing in JavaScript:
inputStr = `` // puzzle input
inputStr.split('\n').reduce((acc, curr) =>
acc.concat(
((codes) => codes[1] + 1 +
(codes[1] - codes[0] == 1 || codes[1] - codes[0] == -2) * 6 +
(codes[0] == codes[1]) * 3)
(((raw) => [raw[0].charCodeAt() - 65, raw[1].charCodeAt() - 88])(curr.split(' ')))), [])
.reduce((acc, curr) => acc + curr, 0)
Which is shorter but kinda unreadable.
Part 2 changes the interpretation of `X`. `"X"`, `"Y"`, and `"Z"` now represent lose, tie, and win respectively. Upon closer inspection, this really only affects the partial sum used to calculate the score based on state; if anything, it made calculating the win/loss/tie partial score simpler.
It can be easily realised that associating tie to `0`, win to `1` and loss to `-1` will make deriving the rock/paper/scissors move simple.
| Left | State | Right |
|------|-------|-------|
| x | Tie (0) | x |
| x | Win (1) | 0 if x + 1 == 3 else x + 1 |
| x | Lose (-1) | 2 if x - 1 == -1 else x - 1 |

Remember that the normalised `"A", "B", "C" -> 0, 1, 2`, so ties would imply playing Rock, Paper, Scissors respectively, wins would imply playing Paper, Scissors, Rock, and losses Scissors, Rock, Paper.
Hence, the code would be changed to:
inputStr = ``
inputStr.split('\n').reduce((acc, curr) =>
acc.concat(
((codes) => ((codes[0] + codes[1] == -1) ? 2 : (codes[0] + codes[1]) % 3) + 1 +
(codes[1] == 1) * 6 +
(codes[1] == 0) * 3)
(((raw) => [raw[0].charCodeAt() - 65, raw[1].charCodeAt() - 89])(curr.split(' ')))), [])
.reduce((acc, curr) => acc + curr, 0)
Notice the change to `raw[1].charCodeAt() - 89`, which essentially absorbs an offset of `-1`, mapping `X`, `Y`, `Z` to `-1`, `0`, `1`.
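Because Python's modulo already wraps negative numbers, the equivalent move derivation needs no special case at all; a quick cross-check of the logic (the function name is mine):

```python
def derived_move(a, o):
    # a: opponent move 0..2; o: outcome -1 (lose), 0 (tie), 1 (win)
    # Python's % wraps negatives, so the JavaScript ternary collapses away
    return (a + o) % 3

assert derived_move(0, 0) == 0    # tie against rock -> play rock
assert derived_move(0, -1) == 2   # lose against rock -> play scissors
assert derived_move(2, 1) == 0    # win against scissors -> play rock
```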
Today’s part 1 problem can be broken down into the following subproblems:
1. go through the input line by line;
2. split each line into two halves;
3. find the intersection of the two halves;
4. calculate the priority of the common item.

I decided to use Haskell, because :shrug:. Input handling in Haskell is notoriously complex, so I decided to bypass it by using my browser’s JavaScript engine to convert the multiline string to a single string delimited by `\n`, like this:
Converting to a single-line string with JavaScript
Doing so, I was able to bypass all input-related processing in Haskell by assigning the string to a variable.
Let’s solve each subproblem in Haskell:
-- input string
input = ""
-- going through line by line
lines input
-- split line by half
splitAt (round $ (/2) $ fromIntegral $ length line) line
-- find intersection between the two halves
intersect splitted_xs splitted_ys
-- calculate priority
(\x -> if x `elem` ['a'..'z'] then ord x - 96 else ord x - 65 + 27) $ (!! 0) intersected_list
Some notes:

- `length line` strictly returns an integer, which needs to be converted for division in Haskell;
- lowercase letters map to their sequence number from the ASCII value of ‘a’, offset by +1;
- `['A'..'Z']` has an offset of 26 + 1 after getting its sequence number from the ASCII value of ‘A’.

Combining these together, we have:
import Data.Char
import Data.List
input = ""
solution input = sum [(\x -> if x `elem` ['a'..'z'] then ord x - 96 else ord x - 65 + 27) $ (!! 0) $ (\(xs, ys) -> intersect xs ys) $ splitAt (round $ (/2) $ fromIntegral $ length line) line | line <- lines input]
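The same priority computation translates directly to Python, which makes it easy to sanity-check. A sketch with names of my own; the sample rucksack is the puzzle's published example, quoted from memory:

```python
def priority(c):
    # a..z -> 1..26, A..Z -> 27..52, mirroring the Haskell lambda above
    return ord(c) - 96 if c.islower() else ord(c) - 65 + 27

def line_priority(line):
    # intersect the two halves of the rucksack and score the common item
    half = len(line) // 2
    common = set(line[:half]) & set(line[half:])
    return priority(next(iter(common)))

assert priority('a') == 1 and priority('z') == 26
assert priority('A') == 27 and priority('Z') == 52
assert line_priority("vJrwpWtwJgWrhcsFMMfFFhFp") == 16  # common item 'p'
```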
The slight twist introduced in part 2 requires us to group the lines into threes and find the common item across each group.
It is guaranteed by the nature of the problem that our input’s number of lines will be divisible by 3.
There are many ways to group the lines by 3, and the way I chose is to maintain an accumulated list of lists, where each element list will contain 3 elements.
With that, we solve the subproblems:
-- grouping the lines by 3
foldr (\x acc@(y:ys) -> if length y == 3 then [x]:acc else (x:y):ys) [[]] $ lines input
-- intersecting 3 lines
map (foldr1 intersect) output_of_above
Then, reassembling the final solution:
import Data.Char
import Data.List
solution' input = sum $ map ((\x -> if x `elem` ['a'..'z'] then ord x - 96 else ord x - 65 + 27) . (!! 0)) $ map (foldr1 intersect) $ foldr (\x acc@(y:ys) -> if length y == 3 then [x]:acc else (x:y):ys) [[]] $ lines input
Feeling a little lazy today, I decided to work in Python. Today’s problem breaks down into the following, familiar subproblems:

1. read the input line by line;
2. split each line by `,`, giving what we will call segments;
3. split each segment by `-`, giving what we will call fragments;
4. convert the fragments to integers;
5. compare the two ranges.

Let’s talk about step 5. In set theory, if we wanted to know if `A` is fully contained in `B`, we would check `A ⊂ B`; however, this can be simplified if `A` and `B` are sorted lists, which is the case for ranges defined solely by their boundaries. So, if I had an input line of `6-6,4-6`, we can verify quite quickly that the left range is fully contained in the right range, not by imagining whether all elements of the left range are in the right range, but from the lower bounds: `6 > 4`, and the upper bounds: `6 == 6`, so therefore `6-6` is in `4-6`.
Similarly, for `2-8,3-7`, we see that `3 > 2` and `7 < 8`, so this means `3-7` must be in `2-8`.
With that context, the subproblems can be solved like so in Python:
# read input line by line e.g. "2-8,3-7"
open("input.txt", "r").readlines()
# split line by ',', so we get ["2-8", "3-7"]
segments = line.split(',')
# split a single segment by '-' so we get fragment = ["2", "8"]
fragment = segment.split('-')
# note that all fragments = [["2", "8"], ["3", "7"]]
# convert to int [2, 8]
fragment_prime = map(int, fragment)
# compare the ranges
possibility_1 = fragment_1[0] <= fragment_2[0] and fragment_1[1] >= fragment_2[1]
possibility_2 = fragment_2[0] <= fragment_1[0] and fragment_2[1] >= fragment_1[1]
result = possibility_1 or possibility_2
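The boundary comparison can be sanity-checked against the two worked examples above (a sketch; the function name is mine):

```python
def fully_contains(a, b):
    # True when range a = (lo, hi) fully contains b, or vice versa,
    # using only the boundaries, exactly as argued above
    return (a[0] <= b[0] and a[1] >= b[1]) or (b[0] <= a[0] and b[1] >= a[1])

assert fully_contains((6, 6), (4, 6))      # 6-6 sits inside 4-6
assert fully_contains((2, 8), (3, 7))      # 3-7 sits inside 2-8
assert not fully_contains((2, 4), (6, 8))  # disjoint ranges
```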
The way I used to combine all of the subproblems together is to use an unholy concoction of maps:
print(sum(list(map(lambda xys: (xys[0][0] <= xys[1][0] and xys[0][1] >= xys[1][1]) or (xys[1][0] <= xys[0][0] and xys[1][1] >= xys[0][1]), list(map(lambda segments: list(map(lambda segment: list(map(int, segment.split('-'))), segments)), list(map(lambda line: line.split(','), open("input.txt", "r").readlines()))))))))
Part 2 changes the so-called “set operation” we are performing. Instead of “fully contains”, we are looking for overlaps, or in set terms, “A ∩ B ≠ Ø”.
Let’s consider the few possible cases, if we have a string in the format `a-b,x-y`:
case 1
......a###########b...
.x#y..................
case 2
..a######b...
.x###y....
case 3
..a###b....
....x###y..
case 4
.a####b.......
.........x##y.
case 5
....a####b....
......x#y.....
The cases imply the following:

1. `a > x`, `b > x`, `x < a`, **`y < a`**;
2. `a > x`, `b > x`, `x < a`, `y > a`;
3. `a < x`, `b > x`, `x > a`, `y > a`;
4. `a < x`, **`b < x`**, `x > a`, `y > a`;
5. `a < x`, `b > x`, `x > a`, `y > a`.

The relations in bold matter the most; we see that for any two ranges to intersect, the lower bound of the first range must be at most the lower bound of the second range, and the upper bound of the first range must be at least the lower bound of the second range, or vice versa.
Writing that in code, the testing statement becomes:
possibility_1 = fragment_1[0] <= fragment_2[0] and fragment_1[1] >= fragment_2[0]
possibility_2 = fragment_2[0] <= fragment_1[0] and fragment_2[1] >= fragment_1[0]
result = possibility_1 or possibility_2
So, our resulting code looks very similar to part 1, with a minor change of index in our comparison lambda:
print(sum(list(map(lambda xys: (xys[0][0] <= xys[1][0] and xys[0][1] >= xys[1][0]) or (xys[1][0] <= xys[0][0] and xys[1][1] >= xys[0][0]), list(map(lambda segments: list(map(lambda segment: list(map(int, segment.split('-'))), segments)), list(map(lambda line: line.split(','), open("input.txt", "r").readlines()))))))))
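The boundary-only overlap test can be checked against a brute-force set intersection over all small ranges, which exercises all five cases above (a quick property test; the names are mine):

```python
def overlaps(a, b):
    # boundary-only test distilled from the five cases above
    return (a[0] <= b[0] and a[1] >= b[0]) or (b[0] <= a[0] and b[1] >= a[0])

def overlaps_naive(a, b):
    # brute force: materialise both inclusive ranges and intersect them
    return bool(set(range(a[0], a[1] + 1)) & set(range(b[0], b[1] + 1)))

# every inclusive range over 0..4, compared pairwise
ranges = [(lo, hi) for lo in range(5) for hi in range(lo, 5)]
assert all(overlaps(a, b) == overlaps_naive(a, b) for a in ranges for b in ranges)
```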
Deadlines are looming, so I haven’t got the time to compact this. However, a streak is a streak!
Immediately after reading the question, I thought of stacks. The subproblems are as follows:

1. parse the initial crate diagram into stacks;
2. parse each instruction into a count, a `from` stack, and a `to` stack;
3. perform the movements;
4. read off the top of every stack.

Not being in the headspace to do function composition, I left the code separated in their respective chunks:
import functools
data = open('input.txt', 'r').readlines()
# \n here is the divider
segments = functools.reduce(lambda accum, x: accum[:-1] + [accum[-1] + [x]] if x != '\n' else accum + [[]], data, [[]])
# all characters are +4 away from one another, first one at pos 1. reparse accordingly
segments[0] = list(map(lambda x: [x[i] for i in range(1, len(x), 4)], segments[0]))
# flatten segments[0] into a queue-like structure
stacks = [[] for i in range(len(segments[0][0]))]
for row in segments[0][:-1]:
for i, col in enumerate(row):
if col != ' ':
stacks[i].append(col)
stacks = [list(reversed(stack)) for stack in stacks]
# flatten segments[1] into a list of tuple instructions
digit_fn = lambda s: [int(x) for x in s.split() if x.isdigit()]
instructions = [digit_fn(s) for s in segments[1]]
# do the movements
for instruction in instructions:
stack_from = instruction[1] - 1
stack_to = instruction[2] - 1
number = instruction[0]
for _ in range(number):
stacks[stack_to].append(stacks[stack_from].pop())
# get the top of all
print(''.join([s[-1] for s in stacks]))
Part 2 essentially changes the data structure we are working with. Now, we’re breaking lists off at an arbitrary point and appending the piece to another list (is there a name for this type of data structure?).
However, since this is a small change, I decided to change two lines and reuse the rest of the code, meaning that the main data structure in use is misnamed. Regardless, here it is:
import functools
data = open('input.txt', 'r').readlines()
# \n here is the divider
segments = functools.reduce(lambda accum, x: accum[:-1] + [accum[-1] + [x]] if x != '\n' else accum + [[]], data, [[]])
# all characters are +4 away from one another, first one at pos 1. reparse accordingly
segments[0] = list(map(lambda x: [x[i] for i in range(1, len(x), 4)], segments[0]))
# flatten segments[0] into a queue-like structure
stacks = [[] for i in range(len(segments[0][0]))]
for row in segments[0][:-1]:
for i, col in enumerate(row):
if col != ' ':
stacks[i].append(col)
stacks = [list(reversed(stack)) for stack in stacks]
# flatten segments[1] into a list of tuple instructions
digit_fn = lambda s: [int(x) for x in s.split() if x.isdigit()]
instructions = [digit_fn(s) for s in segments[1]]
# do the movements
for instruction in instructions:
stack_from = instruction[1] - 1
stack_to = instruction[2] - 1
number = instruction[0]
stacks[stack_to].extend(stacks[stack_from][-number:])
stacks[stack_from] = stacks[stack_from][:-number]
# get the top of all
print(''.join([s[-1] for s in stacks]))
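The one behavioural difference between the two parts is whether the moved crates end up reversed; a minimal illustration (the stack contents and helper names are made up):

```python
def move_one_at_a_time(stacks, n, src, dst):
    # part 1: n pops and pushes, which reverses the moved crates
    for _ in range(n):
        stacks[dst].append(stacks[src].pop())

def move_in_bulk(stacks, n, src, dst):
    # part 2: slice the top n crates off in one piece, preserving order
    stacks[dst].extend(stacks[src][-n:])
    stacks[src] = stacks[src][:-n]

s1 = [['A', 'B', 'C'], []]
move_one_at_a_time(s1, 2, 0, 1)
assert s1[1] == ['C', 'B']   # reversed

s2 = [['A', 'B', 'C'], []]
move_in_bulk(s2, 2, 0, 1)
assert s2[1] == ['B', 'C']   # order preserved
```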
Oh no, I can feel the deadlines! I decided to take a crack at implementing another puzzle in C.
Today’s puzzle involves us picking out the position of the first unique character in a sliding frame of 4. The most obvious algorithm is generally as follows:
The above algorithm is probably also the fastest I know, since the set operations involved are O(4). Iterating through the string is O(n), so the total runtime of this solution would be O(4n).
In C, however, we don’t have sets, and I don’t really feel like implementing one. Instead, I employed a technique known as dynamic programming to implement something like a queue, which memorizes 4 values at once. Whenever a new character is read from the input stream, the head of the queue is popped, and the new character is pushed into the queue.
To speed up figuring out if there are any duplicate elements, I created a map of 26 characters and maintained a reference count of each letter in the queue. In theory, the function will simply need to iterate through the queue, look up each letter in the map, look at the reference count, and if they’re all 1, we’ve found our character.
This method has a rough time complexity of: O(n) for going through the string, O(4) for the queue bookkeeping, and O(4) for checking the queue. If 4 is an unknown k, this’ll be O(k * n). Damn.
So:
#include <stdlib.h>
#include <stdio.h>
int main() {
FILE *f = fopen("input.txt", "r");
char exist_map[26] = {0};
char *a = NULL, *b = NULL, *c = NULL, *d = NULL;
size_t n_processed = 0;
char buf = 0;
while ((buf = fgetc(f)) != EOF) {
++n_processed;
if (exist_map[buf - 'a'] == 0 && a != NULL && *a == 1 && *b == 1 && *c == 1) {
printf("delimiter found at %lu\n", n_processed);
break;
}
if (a) *a -= 1;
d = exist_map + (buf - 'a');
*d += 1;
a = b; b = c; c = d; d = NULL;
}
fclose(f);
return 0;
}
The dynamic programming implementation can be improved, but oh well.
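For contrast, in a language with sets the whole exercise shrinks to a few lines. A Python sketch of the sliding-window idea described above; the sample string is the puzzle's published example, quoted from memory:

```python
def first_marker(stream, k):
    # slide a window of size k; the marker ends where all k characters differ
    for i in range(k, len(stream) + 1):
        if len(set(stream[i - k:i])) == k:
            return i
    return None  # no marker found

example = "mjqjpqmgbljsphdztnvjfqwrcgsmlb"
assert first_marker(example, 4) == 7
assert first_marker(example, 14) == 19
```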
Increasing the required unique characters from 4 to 14 would have been much easier in Python, but in C, this means I had to abstract my functions and use an array of `char*` instead of defining each position in the queue on my own.
The two functions to abstract are the all-ones check on the reference counts (`areOnes`) and the queue shift (`leftShiftExistMap`).
Improving the “queue” is easy to see in this example: introduce variables that keep track of where the head and tail are. However, I was lazy. So:
#include <stdlib.h>
#include <stdio.h>
char areOnes(char** pointers, size_t size) {
    for (size_t i = 0; i < size - 1; i++)
        if (*(pointers[i]) != 1) return 0;
    return 1;
}

void leftShiftExistMap(char* map, char** pointers, char newVal, size_t size) {
    if (pointers[0]) *(pointers[0]) -= 1;
    pointers[size - 1] = map + (newVal - 'a');
    *(pointers[size - 1]) += 1;
    for (size_t i = 0; i < size - 1; i++)
        pointers[i] = pointers[i + 1];
    pointers[size - 1] = NULL;
}
int main() {
FILE *f = fopen("input.txt", "r");
char exist_map[26] = {0};
char *pointers[14] = {NULL};
size_t n_processed = 0;
char buf = 0;
while ((buf = fgetc(f)) != EOF) {
++n_processed;
if (exist_map[buf - 'a'] == 0 && pointers[0] != NULL && areOnes(pointers, 14)) {
printf("delimiter found at %lu\n", n_processed);
break;
}
leftShiftExistMap(exist_map, pointers, buf, 14);
}
fclose(f);
return 0;
}
The time complexity is still the same, O(k * n), where k = 14. Use the right tools (i.e. Python) for the right job!
After a mere 4 hours of sleep, I continued to rush deadlines fueled by nothing but coffee in my stomach. Suffice it to say, I’m not entirely satisfied with the work I’ve turned in, but what’s done is done, am I right?
Day 7 was done together with Day 8, because time was just simply not on my side. But hey, I’ve done both, cut me some slack!
An interesting use case is presented in day 7, where we essentially had to rebuild the folder structure based on the output of a few commands, and figure out the sum of the set of folders (including subdirectories) that exceeds 100000.
My very tired and uncaffeinated (the half-life of coffee was out) brain immediately thought “trees” and jumped straight into the code. We also have to write a simple parser to figure out what each line in the output did / displayed, so that we can use the information meaningfully.
So the subproblems were: parsing each line of terminal output, rebuilding the directory tree, and summing the sizes of the qualifying directories.
Parsing each line is simple, by using spaces as delimiters and tokenizing each word:
tokens = x.strip().split(' ') # x is a line
if tokens[0] == "$":
if tokens[1] == 'ls':
# do something
elif tokens[2] == '..':
# do something
elif tokens[2] == '/':
# do something
else:
# do something, is a directory
elif tokens[0].isdigit():
# is size of file
elif tokens[0] == 'dir':
# is telling us directory exist
All we need to do now is to create a Node
class that represents our tree:
class Node:
def __init__(self, dirname, parent = None):
self.dirname = dirname
self.value = None
self.parent = parent
self.nodes = []
def __eq__(self, other):
return self.dirname == other.dirname
def __hash__(self):
return hash(self.dirname)
def __str__(self):
return "{} {}".format(self.dirname, [str(n) for n in self.nodes])
def getSize(self):
return self.value if self.value is not None else sum([x.getSize() for x in self.nodes])
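A quick illustration of how `getSize` recurses over files and directories (a standalone demo with a minimal restatement of the class; the directory names and sizes are made up):

```python
class Node:
    """Minimal restatement of the Node class above, for a standalone demo."""
    def __init__(self, dirname, parent=None):
        self.dirname = dirname
        self.value = None  # set for files, None for directories
        self.parent = parent
        self.nodes = []

    def getSize(self):
        # a file reports its own size; a directory sums its children
        return self.value if self.value is not None else sum(x.getSize() for x in self.nodes)

root = Node('/')
sub = Node('a', root)
f1 = Node('f1', root); f1.value = 100
f2 = Node('f2', sub); f2.value = 50
root.nodes = [f1, sub]
sub.nodes = [f2]
assert sub.getSize() == 50
assert root.getSize() == 150  # 100 + 50
```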
And then combine all the code together. I also added a `getSolutionSize` function to `Node`, which traverses the tree depth-first, takes each directory’s occupied size if it’s no larger than `100000` (specified in the problem), and accumulates the sizes:
import functools
import sys
class Node:
def __init__(self, dirname, parent = None):
self.dirname = dirname
self.value = None
self.parent = parent
self.nodes = []
def __eq__(self, other):
return self.dirname == other.dirname
def __hash__(self):
return hash(self.dirname)
def __str__(self):
return "{} {}".format(self.dirname, [str(n) for n in self.nodes])
def getSolutionSize(self):
if self.value is not None:
return 0
else:
size = self.getSize()
return (0 if size > 100000 else size) + sum([x.getSolutionSize() for x in self.nodes])
def getSize(self):
return self.value if self.value is not None else sum([x.getSize() for x in self.nodes])
def parselines(xs, rootNode, node):
if xs == []: return
x = xs[0]
tokens = x.strip().split(' ')
if tokens[0] == "$":
if tokens[1] == 'ls':
parselines(xs[1:], rootNode, node)
elif tokens[2] == '..':
parselines(xs[1:], rootNode, node.parent)
elif tokens[2] == '/':
parselines(xs[1:], rootNode, rootNode)
else:
n = Node(tokens[2], node)
if n in node.nodes:
n = node.nodes[node.nodes.index(n)]
parselines(xs[1:], rootNode, n)
elif tokens[0].isdigit():
n = Node(tokens[1], node)
n.value = int(tokens[0])
node.nodes.append(n)
parselines(xs[1:], rootNode, node)
elif tokens[0] == 'dir':
n = Node(tokens[1], node)
node.nodes.append(n)
parselines(xs[1:], rootNode, node)
n = Node('/')
data = open("input.txt", "r").readlines()[1:]
sys.setrecursionlimit(len(data) * 2)
parselines(data, n, n)
print(n.getSolutionSize())
Because we use recursion extensively, we have to increase our recursion limit to something we can work with.
In Part 2, we find the folder with the lowest size that is still at least the amount of space we need to free up. Luckily, this is a small change (I use tuples, but we could actually just omit the `dirname` to remove that information, as we don’t need it for our solution):
import functools
import sys
class Node:
def __init__(self, dirname, parent = None):
self.dirname = dirname
self.value = None
self.parent = parent
self.nodes = []
def __eq__(self, other):
return self.dirname == other.dirname
def __hash__(self):
return hash(self.dirname)
def __str__(self):
return "{} {}".format(self.dirname, [str(n) for n in self.nodes])
def getSolution(self, target):
if self.value is not None:
return (self.dirname, 999999)
else:
bestTuple = (self.dirname, self.getSize())
for x in self.nodes:
childTuple = x.getSolution(target)
if childTuple[1] > target and childTuple[1] < bestTuple[1]:
bestTuple = childTuple
return bestTuple
def getSize(self):
return self.value if self.value is not None else sum([x.getSize() for x in self.nodes])
def parselines(xs, rootNode, node):
if xs == []: return
x = xs[0]
tokens = x.strip().split(' ')
if tokens[0] == "$":
if tokens[1] == 'ls':
parselines(xs[1:], rootNode, node)
elif tokens[2] == '..':
parselines(xs[1:], rootNode, node.parent)
elif tokens[2] == '/':
parselines(xs[1:], rootNode, rootNode)
else:
n = Node(tokens[2], node)
if n in node.nodes:
n = node.nodes[node.nodes.index(n)]
parselines(xs[1:], rootNode, n)
elif tokens[0].isdigit():
n = Node(tokens[1], node)
n.value = int(tokens[0])
node.nodes.append(n)
parselines(xs[1:], rootNode, node)
elif tokens[0] == 'dir':
n = Node(tokens[1], node)
node.nodes.append(n)
parselines(xs[1:], rootNode, node)
n = Node('/')
data = open("input.txt", "r").readlines()[1:]
sys.setrecursionlimit(len(data) * 2)
parselines(data, n, n)
print(n.getSolution(30000000 - 70000000 + n.getSize()))
`70000000` is the total disk space and `30000000` is the free space we need, so `30000000 - 70000000 + n.getSize()` is the minimum amount of space we must free. The only change was to `getSolutionSize()`, which became `getSolution()`:
def getSolution(self, target):
if self.value is not None:
return (self.dirname, 999999)
else:
bestTuple = (self.dirname, self.getSize())
for x in self.nodes:
childTuple = x.getSolution(target)
if childTuple[1] > target and childTuple[1] < bestTuple[1]:
bestTuple = childTuple
return bestTuple
The code recursively figures out whether any child directory is closer to the target value than the current node.
Are you tired of humanreadable code yet?
This is a classic problem, in the sense that many applications rely on figuring out if adjacent cells are blocking the view of a current cell. An example could be collision detection (blocking view distance = 1). The problem we are trying to solve, in programmer terms, is: given a grid of numbers, find out if all the numbers from the current (x, y) to any edge of the grid are less than the value at (x, y).
Interestingly, this problem doesn’t have subproblems, since it’s quite a well-contained problem. The algorithm to solve this would be:

1. iterate through every interior coordinate, starting at `(1, 1)` and ending at `(max_x - 1, max_y - 1)`;
2. for rows `0` to `x - 1`, find out if there are any values that exceed the value at (x, y);
3. repeat for rows `x + 1` to `max_x - 1`;
4. repeat for columns `0` to `y - 1`;
5. repeat for columns `y + 1` to `max_y - 1`.

The code is, hence:
import itertools
trees = [[int(y) for y in x if y != '\n'] for x in open('input.txt', 'r').readlines()]
result = itertools.starmap(lambda row, r_trees: list(itertools.starmap(lambda col, tree: all([trees[c_u][col + 1] < tree for c_u in range(0, row + 1)]) or all([trees[c_d][col + 1] < tree for c_d in range(row + 2, len(trees))]) or all([trees[row + 1][r_l] < tree for r_l in range(0, col + 1)]) or all([trees[row + 1][r_r] < tree for r_r in range(col + 2, len(r_trees))]), enumerate(r_trees[1:-1]))), enumerate(trees[1:-1]))
print(sum([sum(r) for r in result]) + len(trees) * 2 + len(trees[0]) * 2 - 4)
The most readable thing on the planet, I know.
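Unpacked from the one-liner, the per-tree visibility test reads like this (a sketch with names of my own; the 5×5 grid is the puzzle's published example, quoted from memory):

```python
def visible_count(grid):
    # a tree is visible if all trees to any one edge are strictly shorter
    n, m = len(grid), len(grid[0])
    def visible(r, c):
        t = grid[r][c]
        return (all(grid[r][k] < t for k in range(c)) or          # from the left
                all(grid[r][k] < t for k in range(c + 1, m)) or   # from the right
                all(grid[k][c] < t for k in range(r)) or          # from above
                all(grid[k][c] < t for k in range(r + 1, n)))     # from below
    return sum(visible(r, c) for r in range(n) for c in range(m))

grid = [[int(ch) for ch in row] for row in "30373 25512 65332 33549 35390".split()]
assert visible_count(grid) == 21  # 16 edge trees + 5 visible interior trees
```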
Instead of figuring out how many (x, y)s have larger values than all the values to any edge of the grid, we now compute a score for each (x, y) based on how many values there are until the current value is `<=` a value along the path to the edge of the grid, composited with multiplication.
It’s really changing the function `all` to a composition of `sum`, `list` and `itertools.takewhile`, which sums the list of True values while the current value is still more than the values it traverses to reach the edge. As the stopping tree itself is counted into the sum (+1), we need to handle the case where all of the numbers were lower than the value at (x, y), which shouldn’t have the +1 offset. A `min` function is applied to handle that case. So:
import itertools
trees = [[int(y) for y in x if y != '\n'] for x in open('input.txt', 'r').readlines()]
result = itertools.starmap(lambda row, r_trees: list(itertools.starmap(lambda col, tree: min(sum(list(itertools.takewhile(lambda x: x, [trees[c_u][col + 1] < tree for c_u in range(row, -1, -1)]))) + 1, row + 1) * min(sum(list(itertools.takewhile(lambda x: x, [trees[c_d][col + 1] < tree for c_d in range(row + 2, len(trees))]))) + 1, len(trees) - row - 2) * min(sum(list(itertools.takewhile(lambda x: x, [trees[row + 1][r_l] < tree for r_l in range(col, -1, -1)]))) + 1, col + 1) * min(sum(list(itertools.takewhile(lambda x: x, [trees[row + 1][r_r] < tree for r_r in range(col + 2, len(r_trees))]))) + 1, len(r_trees) - col - 2), enumerate(r_trees[1:-1]))), enumerate(trees[1:-1]))
print(max([max(r) for r in result]))
Ah yes, nothing like simulating ropes, innit?
Our adventure today brings us to simulating a head and a tail, where the tail has well-defined following behaviour, which the prompt has kindly provided.
The head is given a list of directions and numbers of squares to move. So, the subproblems are parsing the instructions, stepping the head, updating the tail, and counting the squares the tail has visited.
My code today is a lot more readable, so it’s quite obvious how the subproblems are defined:
head_instructions = [(direction, int(value.strip())) for direction, value in [x.split(' ') for x in open('input.txt', 'r').readlines()]]
tail_positions = {(0, 0)}
last_head_pos = (0, 0)
last_tail_pos = (0, 0)
for instruction in head_instructions:
dir, val = instruction
h_x,h_y = last_head_pos
t_x,t_y = last_tail_pos
step = -1 if dir in 'LD' else 1
for incr in [step] * val:
h_y += step if dir in 'UD' else 0
h_x += step if dir in 'LR' else 0
if abs(h_x - t_x) <= 1 and abs(h_y - t_y) <= 1:
continue
else:
t_x += int(0 if h_x == t_x else (h_x - t_x) / abs(h_x - t_x))
t_y += int(0 if h_y == t_y else (h_y - t_y) / abs(h_y - t_y))
tail_positions.add((t_x, t_y))
last_head_pos = (h_x, h_y)
last_tail_pos = (t_x, t_y)
print(len(tail_positions))
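Isolated from the loop, the tail update is just a clamp-and-sign step (a sketch; the helper name is mine):

```python
def follow(head, tail):
    # the tail stays put while touching; otherwise it steps one square
    # toward the head, moving diagonally when it is off-axis
    hx, hy = head
    tx, ty = tail
    if abs(hx - tx) <= 1 and abs(hy - ty) <= 1:
        return (tx, ty)
    sign = lambda d: (d > 0) - (d < 0)  # -1, 0 or 1
    return (tx + sign(hx - tx), ty + sign(hy - ty))

assert follow((2, 0), (0, 0)) == (1, 0)  # straight-line catch-up
assert follow((2, 1), (0, 0)) == (1, 1)  # diagonal catch-up
assert follow((1, 1), (0, 0)) == (0, 0)  # still touching, no move
```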
Part 2 gives us more points to control (i.e. the tail follows a point which follows another point, etc until the head). This means we have to maintain the positions of all the points, and compare the positions pairwise. Luckily for us, the behaviour is the same. So, for each step in our instructions, we go through the positions pairwise and to update positions. Since we are interested in how the tail moves, we only store all the coordinates visited by the tail in our set.
So:
head_instructions = [(direction, int(value.strip())) for direction, value in [x.split(' ') for x in open('input.txt', 'r').readlines()]]
tail_positions = {(0, 0)}
last_positions = 10 * [(0, 0)]
for instruction in head_instructions:
dir, val = instruction
step = -1 if dir in 'LD' else 1
for incr in [step] * val:
g_x, g_y = last_positions[0]
g_y += step if dir in 'UD' else 0
g_x += step if dir in 'LR' else 0
last_positions[0] = (g_x, g_y)
for i in range(len(last_positions)  1):
h_x,h_y = last_positions[i]
t_x,t_y = last_positions[i + 1]
if abs(h_x - t_x) <= 1 and abs(h_y - t_y) <= 1:
continue
else:
t_x += int(0 if h_x == t_x else (h_x - t_x) / abs(h_x - t_x))
t_y += int(0 if h_y == t_y else (h_y - t_y) / abs(h_y - t_y))
if i + 1 == 9:
tail_positions.add((t_x, t_y))
last_positions[i] = (h_x, h_y)
last_positions[i + 1] = (t_x, t_y)
print(len(tail_positions))
CPU instructions!
This problem is what I would classify as a parsertype problem; it usually involves the programmer writing some sort of basic parser.
The subproblems are:

- parse each line into an instruction and its argument;
- on `addx`, increment cycles by two, figure out if within the two increments we’ve crossed `20` or a multiple of 40 past 20 (i.e. `(cycles - 20) mod 40 == 0`), and update the signal strength accordingly;
- on `noop`, increment cycles by one and perform the same check.

Thinking that this would be easy to do in Haskell, I gave it a go:
inputStr = ""
solution :: String -> Integer
solution input = (\(_,_,z) -> z) $ foldr (\(x:xs) accum -> step x (if null xs then 0 else (read $ head xs)) accum) (1,1,0) $ map words $ lines input
  where
    stepAddX x accum@(cycles,sums,sigstr) y = if ((cycles + y) == 20) || ((cycles + y - 20) `mod` 40 == 0) then (cycles + 2, sums + x, sigstr + if y == 1 then sums * (cycles + y) else (sums + x) * (cycles + y)) else (cycles + 2, sums + x, sigstr)
    step "noop" _ accum@(cycles,sums,sigstr) = if ((cycles + 1) == 20) || ((cycles + 1 - 20) `mod` 40 == 0) then (cycles + 1, sums, sigstr + sums * (cycles + 1)) else (cycles + 1, sums, sigstr)
    step "addx" x accum@(cycles,_,_) = stepAddX x accum (if odd cycles then 1 else 2)
Compiles fine, but gives nonsensical values. I’ll give you some time, figure out what may have went wrong here.
Have you thought about it yet?
Right, the reason why this doesn’t work is that we’re checking against `20` and `(cycles - 20) mod 40`, which depends on the order in which instructions are processed. The key to this error is `foldr`, which processes elements starting from the last element. This cost me 3 hours, no joke.
So, the final code works once I changed foldr
to foldl
, which processes lists starting from the first element.
inputStr = ""
solution :: String -> Integer
solution input = (\(_,_,z) -> z) $ foldl (\accum (x:xs) -> step x (if null xs then 0 else (read $ head xs)) accum) (1,1,0) $ map words $ lines input
where
stepAddX x accum@(cycles,sums,sigstr) y = if ((cycles + y) == 20) || ((cycles + y - 20) `mod` 40 == 0) then (cycles + 2, sums + x, sigstr + if y == 1 then sums * (cycles + y) else (sums + x) * (cycles + y)) else (cycles + 2, sums + x, sigstr)
step "noop" _ accum@(cycles,sums,sigstr) = if ((cycles + 1) == 20) || ((cycles + 1 - 20) `mod` 40 == 0) then (cycles + 1, sums, sigstr + sums * (cycles + 1)) else (cycles + 1, sums, sigstr)
step "addx" x accum@(cycles,_,_) = stepAddX x accum (if odd cycles then 1 else 2)
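The cycle bookkeeping is easy to mirror in Python for debugging, since the signal strength samples X *during* a cycle. A sketch; the three-instruction program is the prompt's small example, quoted from memory:

```python
def x_during_cycles(program):
    # X as seen *during* each cycle; addx takes two cycles and
    # only lands after its second cycle completes
    xs, x = [], 1
    for line in program.splitlines():
        parts = line.split()
        xs.append(x)
        if parts[0] == "addx":
            xs.append(x)  # addx's second cycle still sees the old X
            x += int(parts[1])
    return xs

# X is 1,1,1,4,4 during cycles 1-5, and -1 afterwards
assert x_during_cycles("noop\naddx 3\naddx -5") == [1, 1, 1, 4, 4]
```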
Each day’s part 2 is typically a quick edit of each day’s part 1. However, not for this particular puzzle. By changing the purpose of the CPU instructions, I had to pretty much change my entire function definition.
Luckily for me, for the most part, cycles
and sums
still have the same concepts. Hence, the only thing I really needed to modify was sigstr
, and how I render the output:
import Data.List.Split (chunksOf)
inputStr = ""
solution :: String -> [String]
solution input = (\(_,_,z) -> chunksOf 40 $ reverse z) $ foldl (\accum (x:xs) -> step x (if null xs then 0 else (read $ head xs)) accum) (1,1,"#") $ map words $ lines input
where
isWithin cycles x = (cycles `mod` 40) < x + 3 && (cycles `mod` 40) >= x
step "noop" _ (cycles,lastx,result) = (cycles + 1, lastx, (if (isWithin (cycles + 1) lastx) then '#' else '.') : result)
step "addx" x (cycles,lastx,result) = (cycles + 2, lastx + x, (if isWithin (cycles + 2) (lastx + x) then '#' else '.') : (if isWithin (cycles + 1) lastx then '#' else '.') : result)
The answer would be a list of Strings, which I then manually copy and paste into a text editor to reformat into text that had any meaning to me.
I’ll be honest; this is the hardest part 2 yet. I solved part 2 instinctively, but it took a long time for me to figure out why my solution worked.
Part 1 is quite simple; in simple programmer terms, we have some queues of items, and move the items around based on conditions that have its parameters changed based on the input.
Let’s deconstruct the problem a little bit more:

- each monkey’s operation is either `+` or `*`;
- the operand is either a literal number or `old`, which refers to the value of the item.

So, the subproblems are: parsing each monkey’s block, simulating the throws round by round, and multiplying the two highest inspection counts.
I decided to write my code with some level of structure this time round, because the implementation is slightly complicated compared to the past days.
from itertools import islice
from functools import reduce
from queue import Queue
class Monkey:
def __init__(self, block):
self.items_inspected = 0
self.parse_block(block)
def parse_block(self, block):
self.id = int(block[0].split(' ')[1][:-1])
self.items = Queue()
[self.items.put(int(x.rstrip(' ,'))) for x in block[1].split(' ')[2:]]
self.operation = (lambda x,y: x*y) if block[2].split(' ')[4] == '*' else (lambda x,y: x+y)
self.is_mult = block[2].split(' ')[4] == '*'
self.operand = block[2].split(' ')[5]
self.test = int(block[3].split(' ')[3])
self.true_result = int(block[4].split(' ')[5])
self.false_result = int(block[5].split(' ')[5])
def throw_items(self, monkeys):
while not self.items.empty():
item = self.items.get()
worry = self.operation(item, item if self.operand == 'old' else int(self.operand)) // 3
monkeys[self.true_result if worry % self.test == 0 else self.false_result].items.put(worry)
self.items_inspected += 1
def processor(monkeys, target_rounds):
for n_rounds in range(target_rounds):
for monkey in monkeys:
monkey.throw_items(monkeys)
best_two = list(islice(sorted(monkeys, key=lambda x: x.items_inspected, reverse=True), 2))
return best_two[0].items_inspected * best_two[1].items_inspected
if __name__ == '__main__':
lines = open('input.txt', 'r').readlines()
blocks = reduce(lambda accum, line: accum + [[]] if line == '\n' else accum[:1] + [accum[1] + [line.strip()]], lines, [[]])
monkeys = [Monkey(block) for block in blocks]
print(processor(monkeys, 20))
In this part, the condition was changed to no longer include the // 3
, meaning that the numbers grew out of proportion, especially since we now want 10000 rounds. Python supports arbitrarily large integers, but operations on them slow down as the numbers grow, and hence the program would take too long to complete.
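To get a feel for how badly the numbers blow up without the // 3, consider the old * old operation from the sample input: each application roughly doubles the size of the value. A tiny sketch, squaring the sample starting item 79 a mere 16 times:

```python
# Squaring ("old * old") roughly doubles the number of bits each time,
# so after k squarings the value has about 2^k times as many bits.
x = 79  # one of the sample starting items
for _ in range(16):
    x = x * x
print(x.bit_length())  # hundreds of thousands of bits after just 16 squarings
```

And an item can be squared thousands of times over 10000 rounds, so in practice the values get far larger still.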
Hence, part 2’s prompt suggested that we find a better way to represent the worry
variable. I went to inspect the counts of the queue at the end of 10, 20 and 30 rounds; even though there is some correlation in the rate of change of counts, it is not strictly linear. This is because the operations are different; inspect the input:
Monkey 0:
Starting items: 79, 98
Operation: new = old * 19
Test: divisible by 23
If true: throw to monkey 2
If false: throw to monkey 3
Monkey 1:
Starting items: 54, 65, 75, 74
Operation: new = old + 6
Test: divisible by 19
If true: throw to monkey 2
If false: throw to monkey 0
Monkey 2:
Starting items: 79, 60, 97
Operation: new = old * old
Test: divisible by 13
If true: throw to monkey 1
If false: throw to monkey 3
Monkey 3:
Starting items: 74
Operation: new = old + 3
Test: divisible by 17
If true: throw to monkey 0
If false: throw to monkey 1
There is a high probability that a value will go through queues 0, 3, and 1, but a probability still exists that it will go through queue 2, which affects the final queue count. Hence, attempting to map the queue count linearly is not viable.
The next thing I looked at was the input. I tried to think about how the operations would affect the divisibility of the items and concluded (after 30 minutes of thinking) that there is no fixed pattern, due to the addition operations. If all operations were multiplications, the story would be different; we would be able to tell definitively whether a number will be divisible by the condition the first time we look at the item, or the operand.
The next observation I made was that each test was relatively constant; they are always in the format: divisible by <prime number>
. For a moment, I thought of some math, like “how would I know if 2^x + 3^y = 7n, where x, y, n are natural numbers?”; the answer is I have no idea.
Then, my instincts took over and I just replaced // 3
with mod (sum of all test prime numbers in the input)
and ran the script on the input without blinking twice. To my surprise, it worked; it was one of those situations where my instincts completed its processes far ahead of the capabilities of my logical thinking.
The code change was one of those that looks insignificant (it literally replaces 4 characters with a modulo), but had a few hours of effort put into it.
from queue import Queue
from itertools import islice
from functools import reduce

class Monkey:
    def __init__(self, block):
        self.items_inspected = 0
        self.parse_block(block)

    def parse_block(self, block):
        self.id = int(block[0].split(' ')[1][:-1])
        self.items = Queue()
        [self.items.put(int(x.rstrip(' ,'))) for x in block[1].split(' ')[2:]]
        self.operation = (lambda x,y: x*y) if block[2].split(' ')[4] == '*' else (lambda x,y: x+y)
        self.is_mult = block[2].split(' ')[4] == '*'
        self.operand = block[2].split(' ')[5]
        self.test = int(block[3].split(' ')[3])
        self.true_result = int(block[4].split(' ')[5])
        self.false_result = int(block[5].split(' ')[5])

    def throw_items(self, monkeys):
        while not self.items.empty():
            item = self.items.get()
            worry = self.operation(item, item if self.operand == 'old' else int(self.operand)) % (2 * 17 * 7 * 11 * 19 * 5 * 13 * 3)
            monkeys[self.true_result if worry % self.test == 0 else self.false_result].items.put(worry)
            self.items_inspected += 1

def processor(monkeys, target_rounds):
    for n_rounds in range(target_rounds):
        for monkey in monkeys:
            monkey.throw_items(monkeys)
    best_two = list(islice(sorted(monkeys, key=lambda x: x.items_inspected, reverse=True), 2))
    return best_two[0].items_inspected * best_two[1].items_inspected

if __name__ == '__main__':
    lines = open('input.txt', 'r').readlines()
    blocks = reduce(lambda accum, line: accum + [[]] if line == '\n' else accum[:-1] + [accum[-1] + [line.strip()]], lines, [[]])
    monkeys = [Monkey(block) for block in blocks]
    print(processor(monkeys, 10000))
After taking a shower, my logical thinking finally reached a conclusion.
Let’s break this down into a much simpler problem. Let’s say we have two test prime numbers, 2 and 3. There are 4 things that could possibly happen after applying the operation to our item’s value:
So, if we were to talk about the possible values of each of the bullet points:
Let’s think about all the numbers in their prime factors:
If we link this to our question, we realise that these numbers are a combination of multiplication and addition. A further observation suggests that all numbers greater than 6 can be broken down into n = q * 6 + r, where n is the original number, q is some number, and r is a number less than 6. We then realise that r is the remainder, and we also know that n % 6 == r.
We then realize that if we add a number m such that n + m is still not divisible by 6 and r + m < 6, then n + m = q * 6 + r + m. Since n + m is not divisible by 6, surely r + m is not divisible by 6 either. Likewise for 2: since n + m = q * 6 + r + m, and q * 6 is always divisible by 2, n + m is divisible by 2 exactly when r + m is. This wouldn’t work if we tried to test for divisibility by 7: r + m not being divisible by 7 (which is the case for all possible values of r + m, since r + m is within 0 to 6) does not necessarily mean n + m is not divisible by 7, because q * 6 is not necessarily divisible by 7.
So, what this means is that any addition that does not make the expression immediately divisible by 6 is added to the remainder, and the modulo of the remainder is equal to the modulo of the original number. Since 6 can be broken down into the primes 2 and 3, which are our test prime numbers, by performing modulo with the product of all the test prime numbers within our input, we can fully express the divisibility of our number by any one of the primes just by maintaining the remainder.
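The argument above can be sanity-checked numerically: after reducing modulo the product of all the test divisors, divisibility by each individual divisor is preserved under the monkeys’ additions and multiplications. A small sketch using the sample input’s numbers:

```python
# Keeping only (value mod M), where M is the product of every monkey's
# test divisor, preserves divisibility by each individual divisor.
primes = [23, 19, 13, 17]        # the "divisible by" values in the sample input
M = 23 * 19 * 13 * 17

n = 123456789                    # some worry value
reduced = n % M
for op in (lambda v: v * 19, lambda v: v + 6, lambda v: v * v, lambda v: v + 3):
    n = op(n)                    # the "true" (huge) worry value
    reduced = op(reduced) % M    # the value we actually keep
    for p in primes:
        assert (n % p == 0) == (reduced % p == 0)
print("divisibility preserved across all operations")
```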
Hence,
worry = self.operation(item, item if self.operand == 'old' else int(self.operand)) % (2 * 17 * 7 * 11 * 19 * 5 * 13 * 3)
must work (the prime numbers are the terms I’m too lazy to evaluate).
Today is quite obviously a pathfinding challenge.
Admittedly, I spent an embarrassing amount of time figuring out that while I can only go up by one altitude unit at a time, I can actually descend more than one level at a time. I decided to use Breadth-First Search for pathfinding, since it’s good enough for the use case.
For every node I’ve visited, I replace its position with #, which denotes a visited node. So:
from queue import Queue

grid = [[y for y in x.strip()] for x in open('input.txt', 'r').readlines()]
grid[0][20] = 'a'

def bfs(pos):
    q = Queue()
    p = Queue()
    q.put(pos)
    count = 0
    while True:
        while not q.empty():
            x, y = q.get()
            elevation = 'a' if grid[y][x] == 'S' else grid[y][x]
            grid[y][x] = '#'
            moves = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            if elevation == 'E':
                return count
            for new_x, new_y in moves:
                if 0 <= new_x < len(grid[0]) and 0 <= new_y < len(grid) \
                    and grid[new_y][new_x] != '#' \
                    and (-999 <= ord(grid[new_y][new_x]) - ord(elevation) <= 1 \
                    or (elevation == 'z' and grid[new_y][new_x] == 'E')):
                    p.put((new_x, new_y))
        count += 1
        q = p
        p = Queue()

print(bfs((0, 20)))
It might be worth mentioning that -999 is too large a magnitude; -2 would have been good enough, meaning I would be able to descend a maximum of 2 levels at a time. Experimental results for the win.
Also, if you think hardcoding the starting position is hacky, then you can look away.
Part 2 requires us to find a better starting position, so that we minimize the number of steps it takes to reach the peak, denoted by E. So, I first approached the problem the dumb way, which was to iterate through all positions of a, the lowest altitude, and accumulate the minimum.
Obviously, that was slow, so I thought about using another algorithm, like Dijkstra’s Shortest Path algorithm; however, there would be no benefit whatsoever over BFS, since the weights of all edges are the same.
Hence, I decided to perform a reverse BFS; instead of checking for E, I check for the closest a, given that we can now ascend at most 2 levels and descend only 1 level (the inverse of our original constraints). So:
from queue import Queue

grid = [[y for y in x.strip()] for x in open('input.txt', 'r').readlines()]

def bfs(pos):
    q = Queue()
    p = Queue()
    q.put(pos)
    count = 0
    while True:
        while not q.empty():
            x, y = q.get()
            elevation = 'z' if grid[y][x] == 'E' else grid[y][x]
            grid[y][x] = '#'
            moves = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            if elevation == 'a':
                return count
            for new_x, new_y in moves:
                if 0 <= new_x < len(grid[0]) and 0 <= new_y < len(grid) \
                    and grid[new_y][new_x] != '#' \
                    and (-1 <= ord(grid[new_y][new_x]) - ord(elevation) <= 2 \
                    or (elevation == 'a' and grid[new_y][new_x] == 'S')):
                    p.put((new_x, new_y))
        count += 1
        q = p
        p = Queue()

print(bfs((len(grid[0]) - 22, 20)))
Nothing like spending 5 hours on Advent of Code, eh?
Felt a little down, so I decided to use good old C to do this. Little did I know, that was going to be a huge ordeal.
This part was essentially about parsing. I can summarize what I did here, but the process of getting there was error-prone; I had to painstakingly debug the corner cases that occurred during my parsing.
In hindsight, it might have been a better idea to list all the possible corner cases before attempting the problem.
The input we are to parse can come in the following format:
[[[[3],[]],5],[[],[7,[3,3,3],2,[1],[6,7,9]],[],8,1],[9,[0,0,[5,3,5,1],[2],2],3],[2,[0,4]]]
[[[]],[[[],10,[8,0,5,5],[5,4,8,10,1],[6,8,0,3,5]],2,[9,[5],[9,2],[]],[8,[]]]]
Defining the first list as ‘a’, and the second list as ‘b’, if:
Sounds easy, but it was actually much more difficult than I imagined. I converted each comparison method above into its own function, and wrapped all three functions in a main function called “think” that decides which comparison method to use based on the current tokens. I also confirmed that the list pairs are always strictly greater or less than one another, so I was able to discard all thoughts related to equality.
Now, time to think about each case step by step, which I only thought was a good idea in hindsight. Let’s say the current character in ‘a’ and ‘b’ are ‘x’ and ‘y’:
Embarrassingly enough, it took me a long time to figure out that two-digit numbers exist within our problem space; I’d been comparing ASCII for a few hours not knowing why my solution didn’t work.
With the steps described above, it becomes possible to define a recursive function that steps through the lists, building something like a syntax tree on the stack:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int comparevaluethenlist(char* a, char* b, size_t l_levels, size_t r_levels, int c);
int comparevalue(char* a, char* b, size_t l_levels, size_t r_levels, int c);
int comparelist(char* a, char* b, size_t l_levels, size_t r_levels, int c);
int think(char* a, char* b, size_t l_levels, size_t r_levels, int c);

int comparevaluethenlist(char* a, char* b, size_t l_levels, size_t r_levels, int c) {
    return think(a, b + 1, l_levels + 1, r_levels + 1, c + 1);
}

int think(char* a, char* b, size_t l_levels, size_t r_levels, int c) {
    if (*a == '[' && *b == '[') {
        int res = comparelist(a, b, l_levels, r_levels, c);
        if (res == -1 || res == 1) return res;
    } else if (*a != '[' && *a != ']' && *b != '[' && *b != ']')
        return comparevalue(a, b, l_levels, r_levels, c);
    else if (*a == ']' && *b != ']') {
        l_levels--;
        if (l_levels < r_levels) return -1;
        if (l_levels > r_levels) return 1;
        a++;
        if (*a == ',') a++;
        return think(a + 1, b, l_levels, r_levels, c);
    } else if (*a != ']' && *b == ']') {
        r_levels--;
        if (l_levels < r_levels) return -1;
        if (l_levels > r_levels) return 1;
        b++;
        if (*b == ',') b++;
        return think(a, b + 1, l_levels, r_levels, c);
    } else if (*a == ']' && *b == ']') {
        l_levels--;
        r_levels--;
        if (l_levels < r_levels) return -1;
        if (l_levels > r_levels) return 1;
        a++;
        b++;
        if (*a == ',') a++;
        if (*b == ',') b++;
        return think(a, b, l_levels, r_levels, c);
    } else {
        if (*a != '[' && *a != ']')
            return comparevaluethenlist(a, b, l_levels, r_levels, c);
        else if (*b != '[' && *b != ']')
            return comparevaluethenlist(b, a, r_levels, l_levels, c);
    }
    return 0;
}

int comparevalue(char* a, char* b, size_t l_levels, size_t r_levels, int c) {
    char numBufA[20];
    char numBufB[20];
    char *tokA_com = strchr(a, ','), *tokA_brac = strchr(a, ']'),
         *tokB_com = strchr(b, ','), *tokB_brac = strchr(b, ']');
    char *tokA = (tokA_com < tokA_brac && tokA_com != NULL) ? tokA_com : tokA_brac;
    char *tokB = (tokB_com < tokB_brac && tokB_com != NULL) ? tokB_com : tokB_brac;
    strncpy(numBufA, a, tokA - a);
    numBufA[tokA - a] = '\0';
    strncpy(numBufB, b, tokB - b);
    numBufB[tokB - b] = '\0';
    int a_i = 0, b_i = 0;
    a_i = atoi(numBufA);
    b_i = atoi(numBufB);
    if (a_i > b_i) return 1;
    if (a_i < b_i) return -1;
    a += tokA - a;
    b += tokB - b;
    if (c && *b == ',') return -1;
    if (c && *b != ',' && *a == ',') return 1;
    if (*a == ',') a++;
    if (*b == ',') b++;
    return think(a, b, l_levels, r_levels, c);
}

int comparelist(char* a, char* b, size_t l_levels, size_t r_levels, int c) {
    l_levels++;
    r_levels++;
    a++; b++;
    if (*a == ',') a++;
    if (*b == ',') b++;
    return think(a, b, l_levels, r_levels, c);
}

int parse(char* line1, char* line2) {
    return comparelist(line1, line2, 0, 0, 0);
}

int main() {
    unsigned long accum = 0, count = 0;
    char line1[1000], line2[1000];
    FILE *f = fopen("input.txt", "r");
    do {
        count++;
        fscanf(f, "%s\n", line1);
        fscanf(f, "%s\n", line2);
        int val = parse(line1, line2);
        if (val == -1) {
            accum += count;
        }
    } while (!feof(f));
    fclose(f);
    printf("Result: %lu\n", accum);
    return 0;
}
After some hours of debugging, I also had to introduce c to track that we are currently within a list that has been upgraded from a value for the sake of comparison, so that we can return early upon encountering a ,. This had by far the most corner cases in the whole problem.
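As an aside, the ordering rules are much shorter to express if the packets are parsed into nested lists first. The sketch below is a Python reformulation of the same rules, not the pointer-stepping C approach above; json.loads happens to accept the packet syntax:

```python
import json

def compare(a, b):
    # Returns -1 if a comes before b (right order), 1 if after, 0 if equal.
    if isinstance(a, int) and isinstance(b, int):
        return (a > b) - (a < b)
    if isinstance(a, int):
        a = [a]          # promote a lone value to a one-element list
    if isinstance(b, int):
        b = [b]
    for x, y in zip(a, b):
        res = compare(x, y)
        if res != 0:
            return res
    # All shared elements equal: the shorter list comes first.
    return (len(a) > len(b)) - (len(a) < len(b))

left = json.loads("[1,1,3,1,1]")
right = json.loads("[1,1,5,1,1]")
print(compare(left, right))  # -1: the pair is in the right order
```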
Part 2 repurposes the think function into a binary comparison function. Luckily, I had already defined think to return the values required by the qsort standard library function, so I simply used that, appended [[2]] and [[6]] to the input.txt file, and multiplied their indices after sorting to acquire the final solution:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int comparevaluethenlist(char* a, char* b, size_t l_levels, size_t r_levels, int c);
int comparevalue(char* a, char* b, size_t l_levels, size_t r_levels, int c);
int comparelist(char* a, char* b, size_t l_levels, size_t r_levels, int c);
int think(char* a, char* b, size_t l_levels, size_t r_levels, int c);

int comparevaluethenlist(char* a, char* b, size_t l_levels, size_t r_levels, int c) {
    return think(a, b + 1, l_levels + 1, r_levels + 1, c + 1);
}

int think(char* a, char* b, size_t l_levels, size_t r_levels, int c) {
    if (*a == '[' && *b == '[') {
        int res = comparelist(a, b, l_levels, r_levels, c);
        if (res == -1 || res == 1) return res;
    } else if (*a != '[' && *a != ']' && *b != '[' && *b != ']')
        return comparevalue(a, b, l_levels, r_levels, c);
    else if (*a == ']' && *b != ']') {
        l_levels--;
        if (l_levels < r_levels) return -1;
        if (l_levels > r_levels) return 1;
        a++;
        if (*a == ',') a++;
        return think(a + 1, b, l_levels, r_levels, c);
    } else if (*a != ']' && *b == ']') {
        r_levels--;
        if (l_levels < r_levels) return -1;
        if (l_levels > r_levels) return 1;
        b++;
        if (*b == ',') b++;
        return think(a, b + 1, l_levels, r_levels, c);
    } else if (*a == ']' && *b == ']') {
        l_levels--;
        r_levels--;
        if (l_levels < r_levels) return -1;
        if (l_levels > r_levels) return 1;
        a++;
        b++;
        if (*a == ',') a++;
        if (*b == ',') b++;
        return think(a, b, l_levels, r_levels, c);
    } else {
        if (*a != '[' && *a != ']')
            return comparevaluethenlist(a, b, l_levels, r_levels, c);
        else if (*b != '[' && *b != ']')
            return comparevaluethenlist(b, a, r_levels, l_levels, c);
    }
    return 0;
}

int comparevalue(char* a, char* b, size_t l_levels, size_t r_levels, int c) {
    char numBufA[20];
    char numBufB[20];
    char *tokA_com = strchr(a, ','), *tokA_brac = strchr(a, ']'),
         *tokB_com = strchr(b, ','), *tokB_brac = strchr(b, ']');
    char *tokA = (tokA_com < tokA_brac && tokA_com != NULL) ? tokA_com : tokA_brac;
    char *tokB = (tokB_com < tokB_brac && tokB_com != NULL) ? tokB_com : tokB_brac;
    strncpy(numBufA, a, tokA - a);
    numBufA[tokA - a] = '\0';
    strncpy(numBufB, b, tokB - b);
    numBufB[tokB - b] = '\0';
    int a_i = 0, b_i = 0;
    a_i = atoi(numBufA);
    b_i = atoi(numBufB);
    if (a_i > b_i) return 1;
    if (a_i < b_i) return -1;
    a += tokA - a;
    b += tokB - b;
    if (c && *b == ',') return -1;
    if (c && *b != ',' && *a == ',') return 1;
    if (*a == ',') a++;
    if (*b == ',') b++;
    return think(a, b, l_levels, r_levels, c);
}

int comparelist(char* a, char* b, size_t l_levels, size_t r_levels, int c) {
    l_levels++;
    r_levels++;
    a++; b++;
    if (*a == ',') a++;
    if (*b == ',') b++;
    return think(a, b, l_levels, r_levels, c);
}

int comparison(const void* line1, const void* line2) {
    return comparelist((char*) line1, (char*) line2, 0, 0, 0);
}

int main() {
    unsigned long count = 0;
    unsigned long result = 0;
    char lines[1000][1000];
    FILE *f = fopen("input.txt", "r");
    while (!feof(f))
        fscanf(f, "%s\n", lines[count++]);
    fclose(f);
    qsort(lines, count, 1000 * sizeof(char), comparison);
    for (int i = 0; i < count; i++) {
        if (strcmp(lines[i], "[[2]]") == 0)
            result = i + 1;
        if (strcmp(lines[i], "[[6]]") == 0)
            result *= i + 1;
    }
    printf("Result: %lu\n", result);
    return 0;
}
Bury me in sand, please.
Today’s problem involved the following subproblems:
What about the size of the grid? Well, since our input is fixed, we don’t really have to figure that out; just guess a large enough size. I’m sure that won’t come back to bite me in the future :new_moon_with_face:. The first subproblem was easily solved like so:
grid = [['.' for _ in range(600)] for _ in range(200)]
with open('input.txt', 'r') as f:
    line = f.readline()
    while line:
        if line:
            xys = [tuple(map(lambda y: int(y), x.split(','))) for x in line.split(' ') if x != '->']
            for i in range(len(xys) - 1):
                x1, y1 = xys[i]
                x2, y2 = xys[i + 1]
                while abs(x1 - x2) > 0:
                    grid[y1][x1] = '#'
                    x1 += -1 if x1 > x2 else 1
                while abs(y1 - y2) > 0:
                    grid[y1][x1] = '#'
                    y1 += -1 if y1 > y2 else 1
                grid[y1][x1] = '#'
        line = f.readline()
The input looks like this:
498,4 -> 498,6 -> 496,6
503,4 -> 502,4 -> 502,9 -> 494,9
So, when parsing each line, we need to strip spaces, filter out the -> separators, and split the resultant strings by ,. We also want to convert each list of strings into a tuple of integers, so we do that in the same line.
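As a quick sanity check, here is that one-liner run against the first sample line:

```python
# Split on spaces, drop the '->' separators, and turn each "x,y" into a tuple.
line = "498,4 -> 498,6 -> 496,6"
xys = [tuple(map(int, x.split(','))) for x in line.split(' ') if x != '->']
print(xys)  # [(498, 4), (498, 6), (496, 6)]
```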
For each adjacent pair of x and y coordinates, we attempt to draw the walls that will affect sand interactions.
To solve the next subproblem, we convert the behavior into a bunch of if statements, and keep looping until one grain of sand enters the void, defined as anything falling past y = 200
:
voided = False
settled_grains = 0
while not voided:
    grain_x, grain_y = (500, 0)
    is_occupied = lambda x: x == '#' or x == '+'
    settled = False
    while not settled:
        if grain_y + 1 >= 200:
            voided = True
            break
        elif not is_occupied(grid[grain_y + 1][grain_x]):
            grain_y += 1
        elif grain_x - 1 >= 0 and not is_occupied(grid[grain_y + 1][grain_x - 1]):
            grain_x -= 1
            grain_y += 1
        elif grain_x + 1 < 600 and not is_occupied(grid[grain_y + 1][grain_x + 1]):
            grain_x += 1
            grain_y += 1
        else:
            settled = True
            grid[grain_y][grain_x] = '+'
    if not voided:
        settled_grains += 1
Piecing it all together:
grid = [['.' for _ in range(600)] for _ in range(200)]
with open('input.txt', 'r') as f:
    line = f.readline()
    while line:
        if line:
            xys = [tuple(map(lambda y: int(y), x.split(','))) for x in line.split(' ') if x != '->']
            for i in range(len(xys) - 1):
                x1, y1 = xys[i]
                x2, y2 = xys[i + 1]
                while abs(x1 - x2) > 0:
                    grid[y1][x1] = '#'
                    x1 += -1 if x1 > x2 else 1
                while abs(y1 - y2) > 0:
                    grid[y1][x1] = '#'
                    y1 += -1 if y1 > y2 else 1
                grid[y1][x1] = '#'
        line = f.readline()

voided = False
settled_grains = 0
while not voided:
    grain_x, grain_y = (500, 0)
    is_occupied = lambda x: x == '#' or x == '+'
    settled = False
    while not settled:
        if grain_y + 1 >= 200:
            voided = True
            break
        elif not is_occupied(grid[grain_y + 1][grain_x]):
            grain_y += 1
        elif grain_x - 1 >= 0 and not is_occupied(grid[grain_y + 1][grain_x - 1]):
            grain_x -= 1
            grain_y += 1
        elif grain_x + 1 < 600 and not is_occupied(grid[grain_y + 1][grain_x + 1]):
            grain_x += 1
            grain_y += 1
        else:
            settled = True
            grid[grain_y][grain_x] = '+'
    if not voided:
        settled_grains += 1
print(settled_grains)
In this part, we realize that the void doesn’t exist (damn it, there goes one option). Instead, there is an infinite floor at max_y + 2, where max_y is the largest y found while parsing the lines.
Luckily for me, that was simple to do; we just store the maximum y every time we see one:
highest_y = max(y1, y2, highest_y)
Then, after reading the entire input, we just fill that row with the floor symbol:
grid[highest_y + 2] = ['#' for _ in range(600)]
Next, our stop condition changes to a sand particle settling at (500, 0), meaning that the generator of sand particles is blocked.
else:
    settled = True
    grid[grain_y][grain_x] = 'o'
    if (grain_x, grain_y) == (500, 0):
        settled_grains += 1
        stop = True
        break
However, all these changes weren’t enough, as I was greeted by the “wrong answer” prompt on AOC. Turns out, due to the floor, the sand particles tend to form large pyramids, whose bases are too wide to fit into our grid. Incidentally, I decided to mark settled grains as 'o', to differentiate between falling grains and settled grains.
Luckily, since we know our sand particles are generated from (500, 0), we know for sure that the maximum x is somewhere around 750, due to how the pyramid’s sloped sides spread. To be safe, we increase the grid width all the way to 1000. So, the final code looks like this.
grid = [['.' for _ in range(1000)] for _ in range(200)]
with open('input.txt', 'r') as f:
    line = f.readline()
    highest_y = 0
    while line:
        if line:
            xys = [tuple(map(lambda y: int(y), x.split(','))) for x in line.split(' ') if x != '->']
            for i in range(len(xys) - 1):
                x1, y1 = xys[i]
                x2, y2 = xys[i + 1]
                highest_y = max(y1, y2, highest_y)
                while abs(x1 - x2) > 0:
                    grid[y1][x1] = '#'
                    x1 += -1 if x1 > x2 else 1
                while abs(y1 - y2) > 0:
                    grid[y1][x1] = '#'
                    y1 += -1 if y1 > y2 else 1
                grid[y1][x1] = '#'
        line = f.readline()

grid[highest_y + 2] = ['#' for _ in range(1000)]
stop = False
settled_grains = 0
while not stop:
    grain_x, grain_y = (500, 0)
    is_occupied = lambda x: x == '#' or x == 'o'
    settled = False
    while not settled:
        if grain_y + 1 >= 200:
            stop = True
            break
        elif not is_occupied(grid[grain_y + 1][grain_x]):
            grain_y += 1
        elif grain_x - 1 >= 0 and not is_occupied(grid[grain_y + 1][grain_x - 1]):
            grain_x -= 1
            grain_y += 1
        elif grain_x + 1 < 1000 and not is_occupied(grid[grain_y + 1][grain_x + 1]):
            grain_x += 1
            grain_y += 1
        else:
            settled = True
            grid[grain_y][grain_x] = 'o'
            if (grain_x, grain_y) == (500, 0):
                settled_grains += 1
                stop = True
                break
    if not stop:
        settled_grains += 1
print(settled_grains)
Today was an excellent lesson in how time and space requirements can grow to very noticeable sizes.
In pure logical terms, there are two entities in question: the sensor and the beacon. Both of these entities have a position, and can be mapped with the relation sensor -> beacon.
The problem constrains the positions to integers, and each relation sensor -> beacon maps a sensor to its closest beacon in Manhattan distance.
Manhattan distance is the distance along the x-axis plus the distance along the y-axis, which is different from the typical Euclidean distance, the hypotenuse of x and y.
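As a minimal illustration of the difference:

```python
import math

def manhattan(a, b):
    # Sum of the axis-aligned distances.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # Straight-line (hypotenuse) distance, for comparison.
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```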
With the constraints out of the way, behold the question: get the number of positions that cannot contain another beacon, i.e. positions within the Manhattan distance of some sensor -> beacon relation. The position is constrained by y, so we essentially check a single row of positions for the condition.
At first, I thought about performing a BFS from every sensor, marking visited nodes, then counting the unmarked nodes. Of course, this works, but then the puzzle input looks like this:
Sensor at x=2832148, y=322979: closest beacon is at x=3015667, y=141020
Sensor at x=1449180, y=3883502: closest beacon is at x=2656952, y=4188971
which I interpreted as “aw crap, I’d need like a hundred gigabytes of memory to store a grid that size”. Instead, let’s approach the problem from another angle: we take the possible positions, ranging from the minimum x and y seen in the input minus the largest distance we know, to the maximum x and y plus the largest distance. Luckily for us, since y is constrained to a single row, we only need to process one row of x columns.
Then, calculate the Manhattan distance from each candidate position to every sensor, and check whether it is within the distance of that sensor -> beacon relation. If it is, the position is covered and cannot hold another beacon. Finally, count the number of covered positions, as required of us.
The above, in code:
min_x, min_y = 0, 0
max_x, max_y = 0, 0
max_dist = 0
coordinate_map = dict()
beacons = set()
with open('input.txt', 'r') as f:
    line = f.readline()
    while line:
        tokens = line.strip().split(' ')
        s_x = int(tokens[2].rstrip(',').split('=')[1])
        s_y = int(tokens[3].rstrip(':').split('=')[1])
        b_x = int(tokens[8].rstrip(',').split('=')[1])
        b_y = int(tokens[9].rstrip(',').split('=')[1])
        min_x = min(s_x, b_x, min_x)
        min_y = min(s_y, b_y, min_y)
        max_x = max(s_x, b_x, max_x)
        max_y = max(s_y, b_y, max_y)
        dist = abs(b_x - s_x) + abs(b_y - s_y)
        max_dist = max(max_dist, dist)
        coordinate_map[(s_x, s_y)] = dist
        beacons.add((b_x, b_y))
        line = f.readline()

target_y = 2000000
count = 0
for x in range(min_x - max_dist, max_x + max_dist + 1):
    for k, v in coordinate_map.items():
        s_x, s_y = k
        dist = abs(x - s_x) + abs(target_y - s_y)
        if (x, target_y) not in beacons and (x, target_y) not in coordinate_map and dist <= v:
            count += 1
            break
print(count)
Part 2 requires us to limit our search space and find the one position that no sensor can reach; the problem guarantees that there is only one such position within the x and y constraints. Our y constraint is released, which creates a huge problem for us; now, our constraints are x between 0 and 4000000 and y between 0 and 4000000.
If I were to draw a grid, and assuming each unit of data we talk about here is 1 byte, that’s like 16 terabytes of data. ‘Tis but a small issue, let’s just buy more RAM.
Luckily, part 1 doesn’t really store anything in a grid; we have great space complexity, so why not just reuse it? Turns out, we would run into time issues; even though the algorithm is O(x * n) in time complexity, where x is the column count of the theoretical grid and n is the number of sensors, the algorithm in this new context becomes O(y * x * n), since y is no longer just a constant. n is a small number, so it basically doesn’t matter, but x and y multiplied together is huge. Suffice to say, the code doesn’t finish within a few hours.
Instead, let’s slightly change how we approach the problem; instead of finding unreachable locations line by line, we observe that the single unreachable position must sit just outside the boundary of at least one sensor -> beacon relation. Hence, we can generate all the points lying at Manhattan distance + 1 from each sensor.
However, this presents a problem; if the Manhattan distance is some absurd size, like 100000, and we have 16 sensors, then we have an absurd number of generated points: 16 * 4 * 100000 = 6,400,000 of them. Even at an idealized 16 bytes per point, that is 102,400,000 bytes, about 100MB; with Python’s real per-object overhead for tuples of boxed integers, the true figure is several times larger. No biggie, just buy more RAM, amirite?
Well ok, we’ve reduced the storage our solution requires from 16TB to something in the hundreds-of-megabytes range, a tiny fraction of the original size, which is an improvement :tada:. However, that’s still not good enough. So what do we do instead?
We make sacrifices in time. Now, for every sensor position, we generate all the just-unreachable locations from that one sensor position, and check whether each is also unreachable from every other sensor position. Rinse and repeat until we find that one bloody point.
Originally, if we stored every generated point at once, our time complexity would be O(m * n), where m is the number of points generated and n is the number of sensor positions. Now, we only hold one sensor’s worth of points at a time, and have a time complexity of O(m * n^2). In this particular case, I feel that this is a perfectly reasonable trade-off for our problem.
Hence, the Python code:
coordinate_map = dict()
beacons = set()
with open('input.txt', 'r') as f:
    line = f.readline()
    while line:
        tokens = line.strip().split(' ')
        s_x = int(tokens[2].rstrip(',').split('=')[1])
        s_y = int(tokens[3].rstrip(':').split('=')[1])
        b_x = int(tokens[8].rstrip(',').split('=')[1])
        b_y = int(tokens[9].rstrip(',').split('=')[1])
        dist = abs(b_x - s_x) + abs(b_y - s_y)
        coordinate_map[(s_x, s_y)] = dist
        beacons.add((b_x, b_y))
        line = f.readline()

def sensor_barrier_coords(sensor_pos):
    s_x, s_y = sensor_pos
    dist = coordinate_map[sensor_pos] + 1
    res = set()
    for i in range(dist + 1):
        res.add((s_x + i, s_y + (dist - i)))
        res.add((s_x - i, s_y - (dist - i)))
        res.add((s_x + i, s_y - (dist - i)))
        res.add((s_x - i, s_y + (dist - i)))
    return res

for k, _ in coordinate_map.items():
    for pos in sensor_barrier_coords(k):
        exclusive = True
        x, y = pos
        if pos in beacons or pos in coordinate_map:
            continue
        if x < 0 or x > 4000000 or y < 0 or y > 4000000:
            continue
        for k1, v in coordinate_map.items():
            s_x, s_y = k1
            dist = abs(x - s_x) + abs(y - s_y)
            if dist <= v:
                exclusive = False
                break
        if exclusive:
            print(x * 4000000 + y)
            exit()
x * 4000000 + y is just the problem statement’s instruction on how to encode the answer, so that AOC can check if the result is valid.
This day was, for lack of a better phrase, really difficult. Part 1 was relatively simple, although I still struggled for a day to get it working, and I needed some hints for part 2.
Part 1 presents an input that looks something like this:
Valve AA has flow rate=0; tunnels lead to valves DD, II, BB
Valve BB has flow rate=13; tunnels lead to valves CC, AA
Valve CC has flow rate=2; tunnels lead to valves DD, BB
Valve DD has flow rate=20; tunnels lead to valves CC, AA, EE
Valve EE has flow rate=3; tunnels lead to valves FF, DD
Valve FF has flow rate=0; tunnels lead to valves EE, GG
Valve GG has flow rate=0; tunnels lead to valves FF, HH
Valve HH has flow rate=22; tunnel leads to valve GG
Valve II has flow rate=0; tunnels lead to valves AA, JJ
Valve JJ has flow rate=21; tunnel leads to valve II
To understand this problem, there are a few pieces of important information that we need to extract from the context:
- XX denotes a node;
- flow rate=xx; denotes a weight on the node;
- ... DD, II, BB denotes what the node is connected to.

Each of the valves must be “turned on” to have an impact on the context. The highest sum over a period of 30 units of time will be the solution to the problem.
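Pulling those three pieces out of each line can be sketched with a regular expression; the helper below is my own illustration, not code from the solution (note the optional plurals, since single-tunnel lines say “tunnel leads to valve”):

```python
import re

# Handles both "tunnels lead to valves" and "tunnel leads to valve".
PATTERN = re.compile(r"Valve (\w+) has flow rate=(\d+); tunnels? leads? to valves? (.+)")

def parse_valve(line):
    name, rate, neighbours = PATTERN.match(line).groups()
    return name, int(rate), neighbours.split(', ')

print(parse_valve("Valve AA has flow rate=0; tunnels lead to valves DD, II, BB"))
# ('AA', 0, ['DD', 'II', 'BB'])
print(parse_valve("Valve HH has flow rate=22; tunnel leads to valve GG"))
# ('HH', 22, ['GG'])
```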
If we were to directly translate the input to a graph without much thought, we would end up with an undirected cyclic graph, which, for lack of a better term, is a pain to work with.
Hence, I decided to boil it down using Dijkstra’s Algorithm. Before that, I got myself a refresher on how to properly implement a priority queue with a flat array, which is possible because a binary heap is a complete binary tree.
heap_rep = []
def queue_add(val):
    global heap_rep
    heap_rep.append(val)
    curr_ind = len(heap_rep) - 1
    # => odd number = left child, even number = right child
    parent = (curr_ind - 2) // 2 if curr_ind % 2 == 0 else (curr_ind - 1) // 2
    while parent >= 0 and heap_rep[parent] < heap_rep[curr_ind]:
        heap_rep[parent], heap_rep[curr_ind] = heap_rep[curr_ind], heap_rep[parent]
        curr_ind = parent
        parent = (curr_ind - 2) // 2 if curr_ind % 2 == 0 else (curr_ind - 1) // 2
def queue_pop():
    global heap_rep
    retval = heap_rep[0]
    heap_rep[0], heap_rep[-1] = heap_rep[-1], heap_rep[0]
    heap_rep = heap_rep[:-1]
    indx = 0
    left_child = indx * 2 + 1
    right_child = indx * 2 + 2
    while (left_child < len(heap_rep) and heap_rep[indx] < heap_rep[left_child]) or (right_child < len(heap_rep) and heap_rep[indx] < heap_rep[right_child]):
        if right_child < len(heap_rep) and heap_rep[left_child] < heap_rep[right_child]:
            heap_rep[indx], heap_rep[right_child] = heap_rep[right_child], heap_rep[indx]
            indx = right_child
        else:
            heap_rep[indx], heap_rep[left_child] = heap_rep[left_child], heap_rep[indx]
            indx = left_child
        left_child = indx * 2 + 1
        right_child = indx * 2 + 2
    return retval
queue_add(14)
queue_add(7)
queue_add(12)
queue_add(18)
queue_add(7)
queue_add(11)
queue_add(20)
queue_add(31)
queue_add(45)
print(heap_rep)
while len(heap_rep) != 0:
    print(queue_pop())
NOTE: Yes, the code looks ugly. It was meant to be a refresher after all!
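For comparison, Python’s standard library already ships a binary min-heap over a plain list in heapq; negating values on the way in makes pops come out largest-first, matching the hand-rolled max-heap above. This is just a cross-check, not part of the original refresher:

```python
import heapq

# heapq maintains a min-heap in a plain list; pushing negated values
# and negating again on pop yields max-heap (descending) order.
values = [14, 7, 12, 18, 7, 11, 20, 31, 45]
heap = []
for v in values:
    heapq.heappush(heap, -v)

descending = [-heapq.heappop(heap) for _ in range(len(heap))]
print(descending)  # [45, 31, 20, 18, 14, 12, 11, 7, 7]
```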
Then, I used GeeksForGeeks’s picture of their graph as a reference to test my Dijkstra’s algorithm:
# testing implementation of d algo
list_of_distances = []
association_list = []
# add some values
association_list.append([(1, 4), (7, 8)])
association_list.append([(0, 4), (7, 11), (2, 8)])
association_list.append([(1, 8), (8, 2), (5, 4), (3, 7)])
association_list.append([(2, 7), (4, 9), (5, 14)])
association_list.append([(3, 9), (5, 10)])
association_list.append([(4, 10), (3, 14), (2, 4), (6, 2)])
association_list.append([(5, 2), (8, 6), (7, 1)])
association_list.append([(0, 8), (8, 7), (1, 11), (6, 1)])
association_list.append([(2, 2), (7, 7), (6, 6)])
# calculate distances
list_of_distances = [999999 for i in range(len(association_list))]
list_of_distances[0] = 0
# non-priority queue implementation
spt_set = set()
while len(spt_set) != len(list_of_distances):
    min_index = 0
    min_distance = 999999
    for k, v in enumerate(list_of_distances):
        if k in spt_set:
            continue
        if v < min_distance:
            min_index = k
            min_distance = v
    spt_set.add(min_index)
    for association in association_list[min_index]:
        list_of_distances[association[0]] = min(min_distance + association[1], list_of_distances[association[0]])
# get path from one to another
print(list_of_distances)
for i in range(1, 9):
    target = i
    path = [i]
    while target != 0:
        min_dist = 999999
        min_ind = 0
        for association in association_list[target]:
            dist = list_of_distances[association[0]] + association[1]
            if dist < min_dist:
                min_ind = association[0]
                min_dist = dist
        path.append(min_ind)
        target = min_ind
    print('->'.join([str(x) for x in reversed(path)]))
Great, warmup done. Let’s talk about the problem now.
The distance between any two connected nodes is actually just 1 unit; so, we boil those 1-unit hops down into weighted edges. When those intermediate nodes become edges, we realize that information about how we traverse from one node to another is lost. In other words, we could be doing crazy things like walking back and forth past a node without actually turning on the valve at that node (gasp). Thankfully, that is exactly what we want. The conversion process looks something like this:
from queue import PriorityQueue
def get_distances(source, associations):
    to_visit = PriorityQueue()
    distances = dict()
    for k in associations.keys():
        distances[k] = 999999
    distances[source] = 0
    to_visit.put((0, source))
    while not to_visit.empty():
        _, node = to_visit.get()
        association = associations[node]
        for neighbor in association[1]:
            if distances[neighbor] > distances[node] + 1:
                distances[neighbor] = distances[node] + 1
                to_visit.put((distances[neighbor], neighbor))
    return distances
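To see what this distance pass produces, here is a self-contained run on a small made-up graph (the valves and tunnels below are invented for illustration, not the real sample input). associations maps a valve to a (flow rate, neighbour list) pair, matching the parsing used later:

```python
from queue import PriorityQueue

def get_distances(source, associations):
    # Dijkstra over unit-weight edges: distance = number of tunnels walked.
    to_visit = PriorityQueue()
    distances = {k: 999999 for k in associations}
    distances[source] = 0
    to_visit.put((0, source))
    while not to_visit.empty():
        _, node = to_visit.get()
        for neighbor in associations[node][1]:
            if distances[neighbor] > distances[node] + 1:
                distances[neighbor] = distances[node] + 1
                to_visit.put((distances[neighbor], neighbor))
    return distances

associations = {
    'AA': (0, ['BB', 'CC']),
    'BB': (13, ['AA', 'CC']),
    'CC': (2, ['AA', 'BB', 'DD']),
    'DD': (20, ['CC']),
}
print(get_distances('AA', associations))  # {'AA': 0, 'BB': 1, 'CC': 1, 'DD': 2}
```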
Let’s talk about the structure we get from this. If we were to pass source the root node, we’ll get the minimum spanning tree (i.e. the minimum distance from the current node to any other node in the graph). So, if we were to iterate through a list of all of the nodes with a valve that has a flow rate, we’ll get a map of minimum spanning trees from all nodes. Some questions you may now have are:
Wouldn’t the minimum spanning tree from every other node simply be an adjustment of the distance traveled from the starting node, to the ending node?
Me: No, because remember that we lost information about how to actually traverse from one node to another; we only know the distance. Imagine a cyclic graph, A <-> B <-> C <-> D <-> A, where only A, B and D are nodes with valves, which means a minimum spanning tree that looks like this: A <1> B, A <3> D. If I was at A, and I first go to D, I travel a distance of 3. How would I then travel to B? We know that the distance from A to D is 3, and the distance from A to B is 1. So is the answer 4? Of course not: there is a shorter path that connects B to D through C, which means the answer is actually 2, but we wouldn’t have known that with just the minimum spanning tree of A. So, we necessarily must generate the spanning tree of all the nodes with valves.
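The argument above can be checked numerically. Treating the example as a chain A - B - C - D with unit edges (which reproduces the distances quoted, A <1> B and A <3> D), a per-node BFS shows that B’s own distance map is needed to discover B <2> D:

```python
from collections import deque

# Unit-weight chain A - B - C - D.
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}

def bfs_distances(source):
    # Plain BFS is enough for unit edges.
    dist = {source: 0}
    q = deque([source])
    while q:
        node = q.popleft()
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                q.append(nb)
    return dist

print(bfs_distances('A'))       # A's map alone: {'A': 0, 'B': 1, 'C': 2, 'D': 3}
print(bfs_distances('B')['D'])  # 2: only B's own map reveals this
```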
What is the resulting structure?
Me: Before I answer this question, let me go through what went through my head for over half a day. “This structure must be a web, because each node has its own minimum spanning tree!” Naturally, I thought that I had ended up with a 3D fully connected web. It took me a while before I was able to reinterpret the graph as a directed acyclic graph, a.k.a. a tree. Realizing it is a tree has many benefits, which include: being able to actually solve the problem. To see how it is a tree, remember that the graph has lost all information about paths through the actual nodes. Then, each node now represents actually turning on a valve; with path information lost, someone could navigate through the nodes with valves to reach a more important valve before coming back later. Since you are unable to turn on a valve twice, this means that in the graph, the arrows always point outwards, and there will never be a situation where a path points back to itself. Hence, it is directed and acyclic, which makes the resulting structure a tree.
How does this new structure solve the problem?
Now that it’s represented as a tree, we can use a variety of ways (like I tried to do) to solve the problem. However, there is one extremely important thing about the problem that makes it challenging to use conventional graph search algorithms: we are maximizing our sum.
All pathfinding algorithms minimize paths. In a nutshell, this means we have to either look for fantastic heuristics that can turn our maximizing problem into a minimizing problem, or figure out another way.
Heuristics are hard, particularly because approximate ones may not yield an accurate result, while an accurate one will either take too long to compute, or is very challenging to define. For instance, A* Search and Dijkstra both require heuristics to make decisions on what to explore next; if we had heuristics that kept on increasing in value, the pathfinding algorithm would be stuck on a single path, and we would end up with an inaccurate result. Even if we were to solve that problem by inverting the heuristic, we still find that our reliance on the accumulated pressure, which is always increasing, causes the heuristic to produce inaccurate results. Heuristics work best when they are calculated between two nodes and do not involve any context-wide variables, such as time, which is required to calculate the total pressure amassed between any two nodes.
Then, you may ask: what about using a slightly inaccurate heuristic, such as time / pressure? The larger the time, the less ideal that path. The lower the pressure amassed, the less ideal the path. Perfect!
Perfect?
Well, I tried it out, and it somehow worked for the example, but not the actual input. The rationale is simple: it’s actually (w1 * time) / (w2 * pressure), where w1 and w2 are arbitrary weights dictating how important time and pressure are. This is the nature of approximation: we need to declare how important something is relative to something else. However, for our use case, we need precise answers; hence, even approximate heuristics are not suitable.
There is likely a proper heuristic that can be used for this particular problem, but I decided that it is no longer worth the effort. Instead, I explored BFS and DFS.
I didn’t think too much about BFS, because I had a gut feeling that it wouldn’t be suitable for the rest of the puzzle; turns out, in part 2, where I actually implemented BFS because I ran out of options, I was right. The space complexity of BFS is O(V), which is synonymous with every node in existence. When we reach part 2, we can see why storing V is a terrible idea. Meanwhile, for DFS, the space complexity is however many edges we have for the node we are currently processing, which is O(E). In a nutshell, for our problem in particular, the storage complexity of DFS is beneficial.
DFS is great because we can do anything with it, even a problem like maximizing accumulated sums. Although there are better ways to do it, like linear programming, the nature of the problem probably prevents us from expressing it as a linear equation (I tried boiling it down to one, but after spending a fair bit of time, I decided not to).
So, after figuring out that it’s a tree, and DFS is the way forward, and attempting to implement the other searches as an experiment, I ended up with a simple implementation like so:
from queue import PriorityQueue
def get_distances(source, associations):
    to_visit = PriorityQueue()
    distances = dict()
    for k in associations.keys():
        distances[k] = 999999
    distances[source] = 0
    to_visit.put((0, source))
    while not to_visit.empty():
        _, node = to_visit.get()
        association = associations[node]
        for neighbor in association[1]:
            if distances[neighbor] > distances[node] + 1:
                distances[neighbor] = distances[node] + 1
                to_visit.put((distances[neighbor], neighbor))
    return distances
def dfs(source, time, pressure, visited, important_nodes):
    if time >= 30:
        return pressure
    distances = get_distances(source, associations)
    best_pressure = pressure
    for impt_node in important_nodes:
        node, (point_pressure, _) = impt_node
        if node in visited:
            continue
        new_time = time + distances[node] + 1
        new_pressure = pressure + point_pressure * (30 - new_time)
        new_visited = visited.copy()
        new_visited.add(node)
        res = dfs(node, new_time, new_pressure, new_visited, important_nodes)
        if res > best_pressure:
            best_pressure = res
    return best_pressure
associations = dict()
with open('input.txt', 'r') as f:
    line = f.readline().strip().split(' ')
    while line[0] != '':
        associations[line[1]] = (int(line[4].rstrip(';').split('=')[1]),
                                 [valve.strip(',') for valve in line[9:]])
        line = f.readline().strip().split(' ')
print(dfs('AA', 0, 0, set(), [(k,v) for k, v in associations.items() if v[0] > 0]))
And wouldn’t you know, it worked!
This part is the main reason why I spent 4 days writing the blog posts from Day 16 to Day 19. The problem introduces a new entity that can explore the graph, which is affectionately chosen to be an elephant, and cuts the amount of time we have to explore the nodes to 26 units of time.
To save you the trouble of thinking about it: no, a double for-loop in DFS doesn’t work. Well, it would, if you ran the program for 16 hours (actual calculation), but it is definitely not the intended solution.
Of course, it didn’t stop me from trying:
def dfs(info_source_1, info_source_2, pressure, visited, important_nodes, distances_map, depth=0):
    source_1, time_1 = info_source_1
    source_2, time_2 = info_source_2
    if time_2 >= 26 and time_1 < 26:
        return dfs(info_source_2, info_source_1, pressure, visited, important_nodes, distances_map, depth+1)
    if time_1 >= 26 and time_2 >= 26:
        return pressure
    distances_1 = distances_map[source_1]
    distances_2 = distances_map[source_2]
    best_pressure = pressure
    for impt_node_1 in important_nodes:
        node_1, (point_pressure_1, _) = impt_node_1
        if node_1 in visited:
            continue
        new_time_1 = time_1 + distances_1[node_1] + 1
        new_visited = visited.copy()
        new_visited.add(node_1)
        if time_2 >= 26:
            new_pressure = pressure + point_pressure_1 * (26 - new_time_1)
            res = dfs((node_1, new_time_1), info_source_2, new_pressure, new_visited, important_nodes, distances_map, depth+1)
            if res > best_pressure:
                best_pressure = res
            continue
        for impt_node_2 in important_nodes:
            node_2, (point_pressure_2, _) = impt_node_2
            if node_2 in visited or node_1 == node_2:
                continue
            new_time_2 = time_2 + distances_2[node_2] + 1
            new_pressure = pressure + point_pressure_1 * (26 - new_time_1) + point_pressure_2 * (26 - new_time_2)
            new_visited_inner = new_visited.copy()
            new_visited_inner.add(node_2)
            res = dfs((node_1, new_time_1), (node_2, new_time_2), new_pressure, new_visited_inner, important_nodes, distances_map, depth+1)
            if res > best_pressure:
                best_pressure = res
    return best_pressure
While it worked for the example input, it doesn’t work (i.e. doesn’t finish within acceptable time) for the real input. This is because there are 15! * 14! = 114,000,816,848,279,961,600,000 possible combinations for the algorithm to run through.
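The quoted figure is easy to verify: one explorer orders 15 valves, the other orders the remaining 14.

```python
import math

# Branching estimate: 15! orderings for one explorer times 14! for the other.
combinations = math.factorial(15) * math.factorial(14)
print(combinations)  # 114000816848279961600000
```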
So, what next? I tried BFS as well:
def bfs(info_source_1, info_source_2, important_nodes, distances_map):
    q = Queue()
    p = Queue()
    q.put((info_source_1, info_source_2, 0, set(), 0, []))
    set_of_all_important_nodes = set([k for k, _ in important_nodes])
    found_pressure = 0
    found_depth = 999999
    while not q.empty():
        info_source_1, info_source_2, pressure, visited, depth, path = q.get()
        if depth > found_depth:
            break
        source_1, time_1 = info_source_1
        source_2, time_2 = info_source_2
        if time_2 >= 26 and time_1 < 26:
            source_1, source_2 = source_2, source_1
            time_1, time_2 = time_2, time_1
        elif time_1 >= 26 and time_2 >= 26:
            continue
        elif len(visited & set_of_all_important_nodes) == len(important_nodes):
            if pressure > found_pressure:
                found_pressure = pressure
                found_depth = depth
        distances_1 = distances_map[source_1]
        distances_2 = distances_map[source_2]
        best_pressure = pressure
        for impt_node_1 in important_nodes:
            node_1, (point_pressure_1, _) = impt_node_1
            if node_1 in visited:
                continue
            new_time_1 = time_1 + distances_1[node_1] + 1
            new_visited = visited.copy()
            new_visited.add(node_1)
            if time_2 >= 26:
                new_pressure = pressure + point_pressure_1 * (26 - new_time_1)
                q.put(((node_1, new_time_1), info_source_2, new_pressure, new_visited, depth+1, path))
            for impt_node_2 in important_nodes:
                node_2, (point_pressure_2, _) = impt_node_2
                if node_2 in visited or node_1 == node_2:
                    continue
                new_time_2 = time_2 + distances_2[node_2] + 1
                new_pressure = pressure + point_pressure_1 * (26 - new_time_1) + point_pressure_2 * (26 - new_time_2)
                new_visited_inner = new_visited.copy()
                new_visited_inner.add(node_2)
                q.put(((node_1, new_time_1), (node_2, new_time_2), new_pressure, new_visited_inner, depth+1, path + [(node_1, node_2)]))
    return found_pressure
The BFS mechanism uses a gimmick to break out early, because I reasoned that beyond a certain depth, we approach diminishing returns. Needless to say, BFS worked on the example input, but not on the actual input, due to space complexity.
I went berserk and also implemented Dijkstra’s to find the minimum spanning tree, but in hindsight, I have no idea what I was trying to accomplish with it.
Eventually, I gave up and went to bed. On and off, I would try my hand again, including attempting to use permutations to shuffle the order of valves to open, but again, due to space complexity, this was infeasible.
Finally, I decided to look for inspiration. Without looking at the solutions, I looked through the Reddit post, and found a post by betaveros (at time of writing, the top on the leaderboard), which contained a sentence that gave me the inspiration to settle on the answer: “one person first, then the same DFS for the other over all unopened valves”.
If I may: “god damn it”! I had thought about this at one point, but my implementation was naive: I simply made one explorer explore half the list, and the other explorer the other half. However, this failed because, obviously, not all possibilities were considered.
However, let’s think about it another way. Assume I have 6 valves to open. If I were to open the valves alone, I may not be able to finish within the 26 measly minutes given to me. So, the whole point of teamwork is to split up the work. Hence, two explorers should open roughly 3 valves each. However, recall that once a valve has been opened, it cannot be opened again. Hence, all I need to do is to perform DFS on 3 valves, then change the actor to the other explorer, and perform DFS on the remaining 3 valves. Hence, instead of searching through 6! * 5! possibilities, I am now at 6! possibilities, which is definitely doable within human time.
Supersizing to the current problem, we now have an opportunity to restrict the problem to 15! possibilities, which may be a huge number, but is definitely much smaller than 15! * 14! possibilities. Hence, the new DFS is implemented as such:
def dfs(info_source_1, info_source_2, pressure, visited, important_nodes, distances_map):
    source_1, time_1 = info_source_1
    source_2, time_2 = info_source_2
    if time_1 >= 26 and time_2 >= 26:
        return pressure
    elif time_1 >= 26 or (len(visited) + 1 > len(important_nodes) // 2 and time_2 != 9999):
        return dfs(info_source_2, (source_1, 9999), pressure, visited, important_nodes, distances_map)
    distances = distances_map[source_1]
    best_pressure = pressure
    for impt_node in important_nodes:
        node, (point_pressure, _) = impt_node
        if node in visited:
            continue
        new_time = time_1 + distances[node] + 1
        new_visited = visited.copy()
        new_visited.add(node)
        new_pressure = pressure + point_pressure * (26 - new_time)
        res = dfs((node, new_time), info_source_2, new_pressure, new_visited, important_nodes, distances_map)
        if res > best_pressure:
            best_pressure = res
    return best_pressure
So, applying this diff (< is part 1, > is part 2) to the part 1 solution, and running the program for roughly 20 minutes, will give us the final result.
22,24c22,29
< def dfs(source, time, pressure, visited, important_nodes):
<     if time >= 30:
<         return pressure
---
> def dfs(info_source_1, info_source_2, pressure, visited, important_nodes, distances_map):
>     source_1, time_1 = info_source_1
>     source_2, time_2 = info_source_2
>
>     if time_1 >= 26 and time_2 >= 26:
>         return pressure
>     elif time_1 >= 26 or (len(visited) + 1 > len(important_nodes) // 2 and time_2 != 9999):
>         return dfs(info_source_2, (source_1, 9999), pressure, visited, important_nodes, distances_map)
26c31
<     distances = get_distances(source, associations)
---
>     distances = distances_map[source_1]
34,35c39
<         new_time = time + distances[node] + 1
<         new_pressure = pressure + point_pressure * (30 - new_time)
---
>         new_time = time_1 + distances[node] + 1
38c42,44
<         res = dfs(node, new_time, new_pressure, new_visited, important_nodes)
---
>
>         new_pressure = pressure + point_pressure * (26 - new_time)
>         res = dfs((node, new_time), info_source_2, new_pressure, new_visited, important_nodes, distances_map)
52c58,60
< print(dfs('AA', 0, 0, set(), [(k,v) for k, v in associations.items() if v[0] > 0]))
---
> important_elements = [(k,v) for k, v in associations.items() if v[0] > 0]
> distances_map = {k: get_distances(k, associations) for k in associations.keys()}
> print(dfs(('AA', 0), ('AA', 0), 0, set(), important_elements, distances_map))
Wha…? Is this Tetris?
Yeah, this is almost like Tetris. Given a bunch of blocks (the horizontal line, cross, L-shape, vertical line and square), we are tasked to get the height of the Tetris board after 2022 tetrominoes have settled on the board. The tetrominoes follow a sequence of movements, which is our input; it looks something like this:
>>><<><>><<<>><>>><<<>>><<<><<<>><>><<>>
> stands for right, and < stands for left. The sequence of movements repeats. The tetrominoes themselves follow the standard set of rules, which are:
So, the subproblems are:
I decided to represent the tetromino positions as a set of coordinates, and adjust the positions based on how the block is falling. So, the code is as follows:
shapes = [[(0, 0), (1, 0), (2, 0), (3, 0)],
          [(1, 0), (0, 1), (1, 1), (2, 1), (1, 2)],
          [(2, 2), (2, 1), (0, 0), (1, 0), (2, 0)],
          [(0, 0), (0, 1), (0, 2), (0, 3)],
          [(0, 0), (1, 0), (0, 1), (1, 1)]]
offset = {
    '<': (-1, 0),
    '>': (1, 0)
}
shape_offset = 2
width = 7
sequence = open('input.txt', 'r').read().strip()
positions = set()
shape_i = 1
dropped = 0
current_block = [(x + shape_offset, y + 3) for x, y in shapes[0]]
board_max_y = 0
while dropped < 2022:
    for s in sequence:
        current_block = [(x + offset[s][0], y + offset[s][1]) for x, y in current_block]
        xs = sorted(current_block, key=lambda b: b[0])
        ys = sorted(current_block, key=lambda b: b[1])
        min_x, max_x = xs[0][0], xs[-1][0]
        min_y, max_y = ys[0][1], ys[-1][1]
        if min_x < 0 or max_x >= width or len(set(current_block) & positions):
            current_block = [(x - offset[s][0], y - offset[s][1]) for x, y in current_block]
        before_down_block = current_block.copy()
        current_block = [(x, y - 1) for x, y in current_block]
        if min_y <= 0 or len(set(current_block) & positions):
            dropped += 1
            positions |= set(before_down_block)
            board_max_y = max(positions, key=lambda x: x[1])[1]
            current_block = [(x + shape_offset, y + board_max_y + 4) for x, y in shapes[shape_i]]
            shape_i = shape_i + 1 if shape_i < len(shapes) - 1 else 0
        if dropped >= 2022:
            break
print(board_max_y + 1)
Ah yes, following the pattern we’ve seen in Day 16, we experience another expansion of the problem statement beyond what is reasonable to do with our original algorithm. Our goal now is simple: instead of getting the height at 2022 blocks, we want 1000000000000 (that’s 12 zeros, which means this is 1 trillion). Obviously, not feasible.
Turns out, this problem can be boiled down into a simple sequencing problem. I began by hypothesizing that at some point, there must be a pattern for height increments; there are a limited number of blocks, and a limited number of sequences. In logical hindsight, this is likely due to the pigeonhole principle: I’ll reach a point where I’m going through the exact same blocks for the exact same sequences.
To confirm this experimentally, I inserted a print statement to figure out if this was true:
stats = dict()
...
if new_max_y - board_max_y not in stats:
    stats[new_max_y - board_max_y] = 1
else:
    stats[new_max_y - board_max_y] += 1
if s_i % len(sequence) == 0: # on the actual input, this is (s_i + 1) % ...
    print(stats)
...
print(stats)
where s_i is the index in the sequence.
I quickly realised that the pattern holds beyond the first statement; this implies that after a certain starting sequence, the sequence started to repeat, implying a predictable increase in height for a fixed increase in block drops.
In the code, I added a cheeky little comment that says the actual input would require me to change the condition to s_i + 1. Why?
Let’s use the actual numbers: the sequence given in the example has 40 tokens, while the sequence given in the actual input has 10091 tokens. s_i is bounded from 0 to 39 in the example, and from 0 to 10090 in the actual input. Hence, s_i % len(sequence) == 0 is only true when s_i is a multiple of 40 in the example, while (s_i + 1) % len(sequence) == 0 is only true when s_i is 10090, 20181, and so on. This is not a coincidence, because 40 and 10090 are divisible by the number of possible blocks in the context, 5.
Intuitively, this means that 40 and 10090 sequences respectively encapsulate a multiple of block drops for all 5 blocks perfectly. Remember the pigeonhole principle? Let’s say I have 20 pigeons and 10 holes. If we dictate each pigeon to always fly into adjacent holes, then necessarily, each hole must have 2 pigeons (without dictating the behaviour of the pigeons, we could have 1 hole with 19 pigeons). The same applies in this context: with 40 sequences and 5 blocks, where each sequence always applies to the next block in order, necessarily, each block must have 8 sequences associated with it, always.
So, with the pattern repeating every 40 or 10090 sequences, we can bypass the need to simulate falling tetrominoes, and just simulate height differences instead.
Okay, so we now have the theory. How do we translate this to practice?
Turns out, we are able to “shortcut” most of 1000000000000 block drops, by estimating as much as we can with just pure mathematics.
estimated_height = int((1000000000000 - blocks_1) / blocks_difference) * repeat_height_difference
Where blocks_difference is the number of blocks dropped from a sequence of 40 to 80, or 10090 to 20181, and repeat_height_difference is the height difference between two repeating sequences. I will discuss how to get these later.
Then, we process the rest of the blocks using the sequences we derived:
remaining_blocks = (1000000000000 - blocks_1) % blocks_difference
remaining_height = 0
height_epoch_x_i = 0
while remaining_blocks >= 0:
    remaining_height += height_epoch_x[height_epoch_x_i]
    remaining_blocks -= 1
    if height_epoch_x_i >= len(height_epoch_x) - 1:
        height_epoch_x_i = 0
    else:
        height_epoch_x_i += 1
where height_epoch_x is the height difference per sequence.
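To make the arithmetic concrete, here is a toy version of the same extrapolation with made-up numbers (the warmup heights and cycle below are invented for illustration, not taken from any real input), cross-checked against a naive block-by-block accumulation:

```python
# Toy model: the first 5 blocks add the increments in `head`; after that,
# every block adds the increments in `cycle`, repeating forever.
head = [3, 1, 2, 2, 4]   # warmup height gains (invented)
cycle = [2, 0, 3, 1]     # repeating height gains per block (invented)

def height_after(n_blocks):
    # Closed form: warmup + whole repeats + partial repeat.
    if n_blocks <= len(head):
        return sum(head[:n_blocks])
    rest = n_blocks - len(head)
    whole, part = divmod(rest, len(cycle))
    return sum(head) + whole * sum(cycle) + sum(cycle[:part])

# Cross-check against naively accumulating increment by increment.
naive, total = [], 0
for inc in head + [cycle[i % len(cycle)] for i in range(100)]:
    total += inc
    naive.append(total)
assert all(height_after(i + 1) == naive[i] for i in range(len(naive)))

print(height_after(1000))  # 1505, without simulating 1000 blocks
```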
Now, how do we get blocks_difference, height_epoch_x, and repeat_height_difference? We know from experimental data that, starting from a certain number of blocks, the sequence holds. Hence, we need to acquire this certain number of blocks, which is tied to the number of sequences processed (they need to be multiples of 40 or 10090), and then continue simulating tetrominoes until we get the sequences from one multiple of 40 / 10090 to the next.
Hence, the code diff to get the final answer is as follows:
15a16
> # Figuring out the pattern
16a18
> s_i = 0
20,38c22,35
< while dropped < 2022:
<     for s in sequence:
<         current_block = [(x + offset[s][0], y + offset[s][1]) for x, y in current_block]
<         xs = sorted(current_block, key=lambda b: b[0])
<         ys = sorted(current_block, key=lambda b: b[1])
<         min_x, max_x = xs[0][0], xs[-1][0]
<         min_y, max_y = ys[0][1], ys[-1][1]
<
<         if min_x < 0 or max_x >= width or len(set(current_block) & positions):
<             current_block = [(x - offset[s][0], y - offset[s][1]) for x, y in current_block]
<         before_down_block = current_block.copy()
<         current_block = [(x, y - 1) for x, y in current_block]
<
<         if min_y <= 0 or len(set(current_block) & positions):
<             dropped += 1
<             positions |= set(before_down_block)
<             board_max_y = max(positions, key=lambda x: x[1])[1]
<             current_block = [(x + shape_offset, y + board_max_y + 4) for x, y in shapes[shape_i]]
<             shape_i = shape_i + 1 if shape_i < len(shapes) - 1 else 0
---
> runs = 2
> height_epoch_1 = []
> height_epoch_x = []
> si_1 = 0
> si_difference = 0
> blocks_1 = 0
> blocks_difference = 0
> while runs > 0:
>     s = sequence[s_i % len(sequence)]
>     current_block = [(x + offset[s][0], y + offset[s][1]) for x, y in current_block]
>     xs = sorted(current_block, key=lambda b: b[0])
>     ys = sorted(current_block, key=lambda b: b[1])
>     min_x, max_x = xs[0][0], xs[-1][0]
>     min_y, max_y = ys[0][1], ys[-1][1]
40,41c37,45
<         if dropped >= 2022:
<             break
---
>     if min_x < 0 or max_x >= width or len(set(current_block) & positions):
>         current_block = [(x - offset[s][0], y - offset[s][1]) for x, y in current_block]
>     before_down_block = current_block.copy()
>     current_block = [(x, y - 1) for x, y in current_block]
>
>     if min_y <= 0 or len(set(current_block) & positions):
>         dropped += 1
>         positions |= set(before_down_block)
>         new_max_y = max(positions, key=lambda x: x[1])[1]
43c47,86
< print(board_max_y + 1)
---
>         if runs == 2:
>             height_epoch_1.append(new_max_y - board_max_y)
>         else:
>             height_epoch_x.append(new_max_y - board_max_y)
>
>         if (s_i + 1) % len(sequence) == 0:
>             if runs == 2:
>                 si_1 = s_i
>                 blocks_1 = dropped
>             else:
>                 si_difference = s_i - si_1
>                 blocks_difference = dropped - blocks_1
>             runs -= 1
>         board_max_y = max(positions, key=lambda x: x[1])[1]
>         current_block = [(x + shape_offset, y + board_max_y + 4) for x, y in shapes[shape_i]]
>         shape_i = shape_i + 1 if shape_i < len(shapes) - 1 else 0
>
>     s_i += 1
>     if runs <= 0:
>         break
>
> # Use the pattern to engineer the simulation
> repeat_start_height = len(height_epoch_1)
> repeat_height_difference = sum(height_epoch_x)
>
> estimated_height = int((1000000000000 - blocks_1) / blocks_difference) * repeat_height_difference
> remaining_blocks = (1000000000000 - blocks_1) % blocks_difference
>
> remaining_height = 0
> height_epoch_x_i = 0
> while remaining_blocks >= 0:
>     remaining_height += height_epoch_x[height_epoch_x_i]
>     remaining_blocks -= 1
>     if height_epoch_x_i >= len(height_epoch_x) - 1:
>         height_epoch_x_i = 0
>     else:
>         height_epoch_x_i += 1
>
> height = int(estimated_height) + remaining_height + sum(height_epoch_1)
> print(height)
Lava and whatnot, oh my!
So, we have a bunch of positions, each representing a lava particle. We want to find the surface area of the lava particles that make up the droplet.
The problem, in programmer terms, is to accumulate (6 - number of edges) over all vertices of a graph.
This straightforward problem is broken down into a graph problem, which can be traversed using any of the graph traversal algorithms. Each missing edge (i.e. 6 - number of edges) counts towards a global variable, which represents the solution.
So, the solution is as follows:
from queue import Queue
positions = set([tuple(map(lambda x: int(x), line.strip().split(','))) for line in open('input.txt', 'r').readlines()])
possibilities = [
    (1, 0, 0), (0, 1, 0), (0, 0, 1),
    (-1, 0, 0), (0, -1, 0), (0, 0, -1)
]
area = 0
visited = set()
q = Queue()
while len(visited & positions) != len(positions):
    d = (positions - visited).pop()
    visited.add(d)
    q.put(d)
    while not q.empty():
        x, y, z = q.get()
        for (dx, dy, dz) in possibilities:
            nx, ny, nz = x + dx, y + dy, z + dz
            if (nx, ny, nz) not in positions:
                area += 1
            elif (nx, ny, nz) not in visited:
                visited.add((nx, ny, nz))
                q.put((nx, ny, nz))
print(area)
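As a sanity check of the face-counting idea, the smallest interesting input, two adjacent cubes, should give 2 * 6 - 2 = 10 exposed faces (the made-up coordinates below are mine):

```python
# Two adjacent 1x1x1 cubes share exactly one face.
positions = {(1, 1, 1), (2, 1, 1)}
neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# A face is exposed when its neighbouring cell holds no cube.
area = sum(1
           for (x, y, z) in positions
           for (dx, dy, dz) in neighbours
           if (x + dx, y + dy, z + dz) not in positions)
print(area)  # 10
```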
Now, we want to find only the external surface area; meaning, any surface area that is enclosed within the lava should not be considered. There were two main ways I could approach this:
1. While counting faces, check whether the empty neighbour can reach 0 in any dimension, or the maximum of any dimension, then accumulate the area. Otherwise, don’t accumulate the area.
2. When q is empty, we have explored one connected body. A connected body that touches 0 in any dimension or the maximum in any dimension must be a liquid / water vapour. Otherwise, it is trapped gas between all the positions.
I decided to do step 2, and broke the problem down accordingly.
So the diff to implement part 2 is:
2c2,16
< positions = set([tuple(map(lambda x: int(x), line.strip().split(','))) for line in open('input.txt', 'r').readlines()])
---
> from itertools import product
> positions = set()
> max_x, max_y, max_z = 0, 0, 0
> with open('input.txt', 'r') as f:
>     line = f.readline().strip()
>     while line:
>         pos = tuple(map(lambda x: int(x), line.split(',')))
>         max_x = max(max_x, pos[0])
>         max_y = max(max_y, pos[1])
>         max_z = max(max_z, pos[2])
>
>         positions.add(pos)
>         line = f.readline().strip()
>
> positions_prime = {(x, y, z) for x, y, z in product(range(-1, max_x + 2), range(-1, max_y + 2), range(-1, max_z + 2)) if (x, y, z) not in positions}
12,13c26,27
< while len(visited & positions) != len(positions):
<     d = (positions - visited).pop()
---
> while len(visited & positions_prime) != len(positions_prime):
>     d = (positions_prime - visited).pop()
16a31,32
>     isOutside = False
>     areaInContact = 0
19a36,38
>         if not (0 < x < max_x and 0 < y < max_y and 0 < z < max_z):
>             isOutside = True
>
22,24c41,44
<             if (nx, ny, nz) not in positions:
<                 area += 1
<             elif (nx, ny, nz) not in visited:
---
>             if (nx, ny, nz) in positions:
>                 areaInContact += 1
>
>             if (nx, ny, nz) not in visited and (nx, ny, nz) in positions_prime:
26a47,49
>
>     if isOutside:
>         area += areaInContact
Right, another entry down for the count. This day was quite similar to Day 16, because it involves doing a search on a space that is too large for comfort.
So, we have blueprints, which are defined like so:
Blueprint 1:
Each ore robot costs 4 ore.
Each clay robot costs 2 ore.
Each obsidian robot costs 3 ore and 14 clay.
Each geode robot costs 2 ore and 7 obsidian.
Blueprint 2:
Each ore robot costs 2 ore.
Each clay robot costs 3 ore.
Each obsidian robot costs 3 ore and 8 clay.
Each geode robot costs 3 ore and 12 obsidian.
That allows us to build robots that gather resources to build even more robots to gather even more resources and so on. We start with 1 ore robot, and each robot takes 1 time unit to build before it can contribute to our resource pool. Our goal is to calculate a score for each of the blueprints, and print out a linear combination of the scores. The score is defined as the number of geodes the blueprint can possibly generate within 24 units of time.
Following the pattern I saw in Day 16, I quickly eliminated the typical graph search algorithms, and decided that DFS was the way to go. So, let’s think about when we want to call DFS.
A direct approach would be to, for every time unit, call DFS in an attempt to expend resources to build every type of robot. Instinctively, I knew that this search space was too huge to consider.
Instead, we have to do a good enough approximation of what may happen. Here are some considerations:
The considerations help to reduce the search space into something that completes within reasonable time (~10 minutes), and doesn't waste CPU cycles looking at graphs that don't matter in the grand scheme of things. Putting together all the considerations, and some trial and error later, I end up with the following implementation:
blueprints = list()
with open('input.txt', 'r') as f:
    line = f.readline().strip()
    while line:
        data = line.split(' ')
        blueprints.append((int(data[6]), int(data[12]), (int(data[18]), int(data[21])), (int(data[27]), int(data[30]))))
        line = f.readline().strip()

def dfs(blueprint, resources=(0, 0, 0, 0), bot_count=(1, 0, 0, 0), new_bot_count=(0, 0, 0, 0), minutes=0):
    best_quality = (resources[-1], resources, bot_count)
    if minutes > 24:
        return best_quality
    ores, clays, obsidians, geodes = resources
    ore_bots, clay_bots, obsidian_bots, geode_bots = bot_count
    ores += ore_bots
    clays += clay_bots
    obsidians += obsidian_bots
    geodes += geode_bots
    bot_count = tuple(map(lambda x: x[0] + x[1], zip(bot_count, new_bot_count)))
    minutes += 1
    if minutes == 24:
        return (geodes, (ores, clays, obsidians, geodes), bot_count)
    maximum_ores_required = max(blueprint[0], blueprint[1], blueprint[2][0], blueprint[3][0])
    if ores >= blueprint[3][0] and obsidians >= blueprint[3][1]:
        quality = dfs(blueprint, (ores - blueprint[3][0], clays, obsidians - blueprint[3][1], geodes),
                      bot_count, (0, 0, 0, 1), minutes)
        if quality[0] > best_quality[0]:
            best_quality = quality
    elif ores >= blueprint[2][0] and clays >= blueprint[2][1]:
        quality = dfs(blueprint, (ores - blueprint[2][0], clays - blueprint[2][1], obsidians, geodes),
                      bot_count, (0, 0, 1, 0), minutes)
        if quality[0] > best_quality[0]:
            best_quality = quality

        quality = dfs(blueprint, (ores, clays, obsidians, geodes), bot_count, (0, 0, 0, 0), minutes)
        if quality[0] > best_quality[0]:
            best_quality = quality
    else:
        if ores >= blueprint[1]:
            quality = dfs(blueprint, (ores - blueprint[1], clays, obsidians, geodes),
                          bot_count, (0, 1, 0, 0), minutes)
            if quality[0] > best_quality[0]:
                best_quality = quality

        # snapback pruning: don't accumulate just ores
        if ores >= blueprint[0] and ores < 2 * maximum_ores_required:
            quality = dfs(blueprint, (ores - blueprint[0], clays, obsidians, geodes),
                          bot_count, (1, 0, 0, 0), minutes)
            if quality[0] > best_quality[0]:
                best_quality = quality

        quality = dfs(blueprint, (ores, clays, obsidians, geodes), bot_count, (0, 0, 0, 0), minutes)
        if quality[0] > best_quality[0]:
            best_quality = quality
    return best_quality

accum_quality = 0
for i, blueprint in enumerate(blueprints):
    quality = dfs(blueprint)
    print(i, quality)
    accum_quality += (i + 1) * quality[0]
print(accum_quality)
The question increased the depth of the tree by adjusting the time unit from 24 to 32, and cutting down the number of blueprints to search to 3. This is a significant adjustment, as increasing tree depth exponentially increases the number of nodes to traverse. Hence, to create an algorithm that completes within reasonable time, we need to make even more assumptions of what may happen.
Based on observations, only a few blueprints have their highest number of geodes actually depend on saving resources whenever an obsidian robot could be built instead. Furthermore, with the time limit adjusted to 32, there is enough time to build an obsidian robot and still provide resources for a geode robot afterwards. Hence, the probability that saving up beats building an obsidian robot drops drastically.
As it turns out, the assumptions are true in our particular case, and removing just that one possibility from the previous algorithm allowed my solution to complete within human time (also, the scoring function changed as required by the question):
11c11
<     if minutes > 24:
---
>     if minutes > 32:
25c25
<     if minutes == 24:
---
>     if minutes == 32:
39,42d38
<
<         quality = dfs(blueprint, (ores, clays, obsidians, geodes), bot_count, (0, 0, 0, 0), minutes)
<         if quality[0] > best_quality[0]:
<             best_quality = quality
63c59
< accum_quality = 0
---
> accum_quality = 1
67c63
<     accum_quality += (i + 1) * quality[0]
---
>     accum_quality *= quality[0]
Is there a faster way? Probably, using heuristics and the other search algorithms. Do I want to implement it? Not this year!
This puzzle highlights the power of Python, because I don't have to think about huge numbers at all. Having done Days 17, 18, and 20 in a row, I didn't bother trying to make the code run faster in Day 20; it's plenty fast compared to all the graph traversal I've done!
This problem is one of the easier ones among all of the challenges in AOC 2022 so far. Essentially, given a list of numbers, we need to rearrange the numbers such that each of the numbers is moved according to the value it represents. So, I solved it by:

1. Reading the input into a list of (index, value) tuples
2. Copying that list into a mutable working list
3. Popping each tuple from the mutable list and reinserting it based on the value of the int

Here is the code:
movements = [(i, int(line.strip())) for i, line in enumerate(open('input.txt', 'r').readlines())]
mutable = movements.copy()
zero_tuple = tuple()
for i, m in movements:
    ind = mutable.index((i, m))
    mutable.pop(ind)
    new_ind = ind + m
    if new_ind > len(movements):
        new_ind %= len(movements) - 1
    elif new_ind <= 0:
        new_ind += len(movements) - 1
    mutable.insert(new_ind, (i, m))
    if m == 0:
        zero_tuple = (i, m)
zero_ind = mutable.index(zero_tuple)
print(mutable[(zero_ind + 1000) % len(movements)][1] +
      mutable[(zero_ind + 2000) % len(movements)][1] +
      mutable[(zero_ind + 3000) % len(movements)][1])
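To sanity-check the mixing logic, here is a condensed sketch of it (using modular insertion in place of the explicit wrap-around branches) run against the example list from the puzzle description:

```python
def grove_sum(nums):
    order = list(enumerate(nums))  # (original_index, value) pairs stay unique
    state = order.copy()
    for item in order:
        i = state.index(item)
        state.pop(i)
        # after the pop the circle has n - 1 slots, so insert modulo that
        state.insert((i + item[1]) % len(state), item)
    values = [v for _, v in state]
    z = values.index(0)
    return sum(values[(z + off) % len(values)] for off in (1000, 2000, 3000))

print(grove_sum([1, 2, -3, 3, -2, 0, 4]))  # 3, matching the example answer
```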
The only thing that changed was the input numbers. In Python, integers have no bounds. Then, we just perform the mixing operation 10 times, so the code is almost the same, but indented to fit the new for loop:
1c1
< movements = [(i, int(line.strip())) for i, line in enumerate(open('input.txt', 'r').readlines())]
---
> movements = [(i, 811589153 * int(line.strip())) for i, line in enumerate(open('input.txt', 'r').readlines())]
4,6c4,7
< for i, m in movements:
<     ind = mutable.index((i, m))
<     mutable.pop(ind)
---
> for _ in range(10):
>     for i, m in movements:
>         ind = mutable.index((i, m))
>         mutable.pop(ind)
8,13c9,16
<     new_ind = ind + m
<     if new_ind > len(movements):
<         new_ind %= len(movements) - 1
<     elif new_ind <= 0:
<         new_ind += len(movements) - 1
<     mutable.insert(new_ind, (i, m))
---
>         new_ind = ind + m
>         if new_ind > len(movements):
>             new_ind %= len(movements) - 1
>         elif new_ind <= 0:
>             new_ind += len(movements) - 1
>             factor = ((-new_ind) // (len(movements) - 1)) + 1
>             new_ind += (factor * (len(movements) - 1))
>         mutable.insert(new_ind, (i, m))
15,16c18,19
<     if m == 0:
<         zero_tuple = (i, m)
---
>         if m == 0:
>             zero_tuple = (i, m)
If I were to optimize this, it'll probably be similar to Day 17; since finite sequences are involved, repeats are bound to happen. However, the effort-to-result ratio is probably not worth it.
Today we have expression evaluations. It’s quite a simple day, although I spent an embarrassing amount of time trying to figure out why my part 2 solution didn’t work. More below.
We have a bunch of expressions that use a bunch of symbols, like so:
root: pppw + sjmn
dbpl: 5
cczh: sllz + lgvd
zczc: 2
ptdq: humn - dvpt
dvpt: 3
lfqf: 4
humn: 5
ljgn: 2
sjmn: drzm * dbpl
sllz: 4
pppw: cczh / lfqf
lgvd: ljgn * ptdq
drzm: hmdt - zczc
hmdt: 32
All we have to do is to evaluate the value at root. Quite immediately, I was reminded of Prolog, which is a logic programming language that works on constraints. From what I know, Prolog does a depth-first search to obtain the results based on the constraints defined, just like our input.
So, I thought about using trees to express the expression. However, I quickly realised that it would take too much effort; instead, a much faster way is probably to use a hash table, where the key is the symbol to be evaluated, and the value is the expression to evaluate.
Then, I jump to the root symbol, and recursively evaluate the constituent symbols until I figure out the final answer. Seems simple enough!
operation_map = {
    '+': lambda x, y: x + y,
    '-': lambda x, y: x - y,
    '/': lambda x, y: x / y,
    '*': lambda x, y: x * y
}

expressions = {expr[0].rstrip(':'): expr[1:] for expr in [line.strip().split(' ') for line in open('input.txt').readlines()]}

def evaluate(expr):
    if expr[0].isdigit():
        return int(expr[0])
    else:
        return operation_map[expr[1]](evaluate(expressions[expr[0]]),
                                      evaluate(expressions[expr[2]]))

print(int(evaluate(expressions['root'])))
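As a sanity check, the same logic run against the example input above evaluates root to 152 (the expression table is rebuilt inline here so the snippet is self-contained):

```python
operation_map = {'+': lambda x, y: x + y, '-': lambda x, y: x - y,
                 '/': lambda x, y: x / y, '*': lambda x, y: x * y}

sample = """root: pppw + sjmn
dbpl: 5
cczh: sllz + lgvd
zczc: 2
ptdq: humn - dvpt
dvpt: 3
lfqf: 4
humn: 5
ljgn: 2
sjmn: drzm * dbpl
sllz: 4
pppw: cczh / lfqf
lgvd: ljgn * ptdq
drzm: hmdt - zczc
hmdt: 32"""

# symbol -> remaining tokens, e.g. 'pppw' -> ['cczh', '/', 'lfqf']
expressions = {expr[0].rstrip(':'): expr[1:]
               for expr in [line.split(' ') for line in sample.splitlines()]}

def evaluate(expr):
    if expr[0].isdigit():
        return int(expr[0])
    return operation_map[expr[1]](evaluate(expressions[expr[0]]),
                                  evaluate(expressions[expr[2]]))

print(int(evaluate(expressions['root'])))  # 152
```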
Part 2 redefines the problem:

1. We now supply the value of humn (the original value of humn is now discarded).
2. The operation at root is now a = b.

I decided to approach the problem mathematically, by performing inverse operations. Suppose we have an equation, a = b op c, where a, b, and c are unknowns. If we want to find the value of b, then we can rearrange the equation as: b = a op' c. Then, we see how the new root fits into the picture; since root is essentially lhs = rhs, this implies that if:
root: a = c
a: x + y
c: b + z
If, again, we want to find b, then b = c - z, and since root: a = c, so b = a - z, therefore b = x + y - z. So this means we need to consider the following to change our equation:

1. For every symbol = left op right, record the inverted associations left = symbol op' right and right = symbol op' left, where op' is the inverse of op.
2. Starting from humn in our case, find it within the associations left = ... or right = ...
3. Recursively evaluate symbol within the associations left = ... and right = ... This will inverse our operators. For the other operand, use the normal association symbol = ... to evaluate it.
4. Once symbol is root, we evaluate the other operand with the normal association symbol = ... This essentially does the operation left = right within our evaluation.
5. The final result is the value humn must be.

The main assumption being made here is that the input cannot repeat a symbol twice (specifically, not the target symbol we are finding). Otherwise, the inverse operation approach here probably wouldn't work.
Next, let's figure out the rules to get op':

1. If the operator is +, then transmute it to symbol - operand.
2. If the operator is *, then transmute it to symbol / operand.
3. If the operator is -, then transmute it to symbol + right_operand and symbol _ left_operand, where _ effectively performs left_operand - symbol. I forgot to do this, which took me an hour or so to discover, as it does not affect the example input :sweat_smile:
4. If the operator is /, then transmute it to symbol * right_operand and symbol \ left_operand, where \ effectively performs left_operand / symbol. I remembered this, but unluckily for me it wasn't used at all.

With that out of the way, we can finally implement it:
operation_map = {
    '+': lambda x, y: x + y,
    '-': lambda x, y: x - y,
    '_': lambda x, y: y - x,
    '/': lambda x, y: x / y,
    '*': lambda x, y: x * y,
    '\\': lambda x, y: y / x
}

expressions = dict()
left_expressions = dict()
right_expressions = dict()
with open('input.txt', 'r') as f:
    line = f.readline()
    while line:
        tokens = line.strip().split(' ')
        symbol = tokens[0].rstrip(':')
        expressions[symbol] = tokens[1:]
        if not tokens[1].isdigit():
            left, right = tokens[1], tokens[3]
            op = tokens[2]
            if op == '+':
                left_expressions[left] = [symbol, '-', right]
                right_expressions[right] = [symbol, '-', left]
            elif op == '-':
                left_expressions[left] = [symbol, '+', right]
                right_expressions[right] = [symbol, '_', left]
            elif op == '/':
                left_expressions[left] = [symbol, '*', right]
                right_expressions[right] = [symbol, '\\', left]
            else:
                left_expressions[left] = [symbol, '/', right]
                right_expressions[right] = [symbol, '/', left]
        line = f.readline()

def evaluate(expr):
    if expr[0].isdigit():
        return int(expr[0])
    else:
        return operation_map[expr[1]](evaluate(expressions[expr[0]]),
                                      evaluate(expressions[expr[2]]))

def evaluate_unknown(expr):
    if expr in left_expressions:
        (symbol, op, operand) = left_expressions[expr]
    else:
        (symbol, op, operand) = right_expressions[expr]
    if symbol == 'root':
        return evaluate(expressions[operand])
    return operation_map[op](evaluate_unknown(symbol), evaluate(expressions[operand]))

print(int(evaluate_unknown('humn')))
I’m embarrassed to say this, but I spent way too long on this day, even though it should be fundamentally simple.
Part 1's context is actually quite simple; given a maze-like structure, navigate it with the instructions given in the input. If, at any point of navigation, the navigator falls off the map, then we warp the navigator to the other side of the map.
So, we just need to consider the min x, max x, min y and max y to do the problem. Here is a helpful snippet to print the boards being traversed:
from time import sleep

def print_board(x, y, direction, instruction):
    print('\033[2J')
    print('\033[H')
    print(x, y, direction, instruction)
    y_output = (y // 50) * 50
    for row in range(y_output, y_output + 50):
        for col in range(0, max(boundary_xs, key=lambda t: t[1])[1] + 1):
            if (col, row) == (x, y):
                if direction == 0:
                    print('>', end='')
                elif direction == 1:
                    print('v', end='')
                elif direction == 2:
                    print('<', end='')
                elif direction == 3:
                    print('^', end='')
            elif (col, row) in tiles:
                print(tiles[(col, row)], end='')
            else:
                print(' ', end='')
        print()
    print()
    print()
    sleep(0.1)
With some level of consideration to speed, I’ve decided to sacrifice my otherwise very free RAM to store way more dictionaries and lists than I really needed to. Here is how I solved it in the end:
from functools import reduce

tiles = dict()
boundary_xs = list()
boundary_ys = list()
instructions = ''
start_pos = (-1, -1)
movement_map = [
    (1, 0),
    (0, 1),
    (-1, 0),
    (0, -1)
]

with open('input.txt', 'r') as f:
    line = f.readline().rstrip()
    y = 0
    while line:
        min_x, max_x = 999, -1
        for x, c in enumerate(line):
            if c != ' ':
                tiles[(x, y)] = c
                min_x = min(x, min_x)
                max_x = max(x, max_x)
                if x > len(boundary_ys) - 1:
                    boundary_ys += (x - len(boundary_ys) + 1) * [(999, -1)]
                boundary_ys[x] = (min(boundary_ys[x][0], y),
                                  max(boundary_ys[x][1], y))
                if start_pos == (-1, -1):
                    start_pos = (x, y)
        line = f.readline().rstrip()
        boundary_xs.append((min_x, max_x))
        y += 1
    instructions = reduce(lambda a, y: a[:-1] + [a[-1] + y] if y != 'L' and y != 'R' else a[:-1] + [a[-1] + y] + [''], f.readline().strip(), [''])

direction = 0
x, y = start_pos
for i, instruction in enumerate(instructions):
    steps = int(instruction[0:-1] if i != len(instructions) - 1 else instruction)
    min_x, max_x = boundary_xs[y]
    min_y, max_y = boundary_ys[x]
    while steps:
        diff = movement_map[direction]
        new_x, new_y = x + diff[0], y + diff[1]
        while (x, y) in tiles and (new_x, new_y) not in tiles:
            if new_x > max_x:
                new_x = min_x
                continue
            elif new_x > min_x:
                new_x = new_x + diff[0]
            elif new_x < min_x:
                new_x = max_x
                continue
            if new_y > max_y:
                new_y = min_y
                continue
            elif new_y > min_y:
                new_y = new_y + diff[1]
            elif new_y < min_y:
                new_y = max_y
                continue
        if tiles[(new_x, new_y)] == '#':
            break
        else:
            x, y = new_x, new_y
        steps -= 1
    dirchange = instruction[-1] if i != len(instructions) - 1 else None
    if dirchange == 'L':
        direction -= 1
        if direction < 0:
            direction += len(movement_map)
    elif dirchange == 'R':
        direction += 1
        direction %= len(movement_map)
print(1000 * (y + 1) + 4 * (x + 1) + direction)
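The reduce one-liner that parses the instruction string is fairly dense; a hypothetical, equivalent tokenization can be sketched with a regex (note it yields separate number/turn tokens, rather than the '10R'-style chunks used above):

```python
import re

def parse_instructions(s):
    # split a path string into its run lengths and turn letters
    return re.findall(r'\d+|[LR]', s)

print(parse_instructions('10R5L5R10L4R5L5'))
# ['10', 'R', '5', 'L', '5', 'R', '10', 'L', '4', 'R', '5', 'L', '5']
```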
Now the maze becomes a cube. I first tried to map the coordinates to 3D, which was fine, until I realised I needed to find a way to fold the cube. After hours of thinking, drawing stuff till I went insane, I decided it was not worth the hassle.
So I decided to hardcode the relationship between each side of the cube. However, because there is no generalization, debugging exactly what went wrong was ungodly. Thankfully, someone who had solved this before provided a great cube visualizer, written by nanot1m, which I used to debug my script. I also ran my script against another solution to check the output per instruction, only to find out that one of my functions that mapped the sides had the wrong offset.
So after roughly 5 hours, here is the final code:
from functools import reduce

face_width = 50
tiles = dict()
boundary_xs = list()
boundary_ys = list()
instructions = ''
start_pos = (-1, -1)
movement_map = [
    (1, 0),
    (0, 1),
    (-1, 0),
    (0, -1)
]
cube_connection_operations = {
    1: [
        lambda x, y: (x + 1, y, 0),                   # 2
        lambda x, y: (x, y + 1, 1),                   # 3
        lambda x, y: (x - 50, 2 * 50 + (49 - y), 0),  # 4
        lambda x, y: (0, (x % 50) + 3 * 50, 0)        # 6
    ],
    2: [
        lambda x, y: (x - 50, 2 * 50 + (49 - y), 2),  # 5
        lambda x, y: (99, 50 + (x % 50), 2),          # 3
        lambda x, y: (x - 1, y, 2),                   # 1
        lambda x, y: (x % 50, 4 * 50 - 1, 3),         # 6
    ],
    3: [
        lambda x, y: (2 * 50 + (y % 50), 49, 3),      # 2
        lambda x, y: (x, y + 1, 1),                   # 5
        lambda x, y: (y % 50, 2 * 50, 1),             # 4
        lambda x, y: (x, y - 1, 3),                   # 1
    ],
    4: [
        lambda x, y: (x + 1, y, 0),                   # 5
        lambda x, y: (x, y + 1, 1),                   # 6
        lambda x, y: (50, (49 - (y % 50)), 0),        # 1
        lambda x, y: (50, 50 + x, 0),                 # 3
    ],
    5: [
        lambda x, y: (149, (49 - (y % 50)), 2),       # 2
        lambda x, y: (49, 3 * 50 + (x % 50), 2),      # 6
        lambda x, y: (x - 1, y, 2),                   # 4
        lambda x, y: (x, y - 1, 3),                   # 3
    ],
    6: [
        lambda x, y: (50 + (y % 50), 149, 3),         # 5
        lambda x, y: (x + 100, 0, 1),                 # 2
        lambda x, y: (50 + (y % 50), 0, 1),           # 1
        lambda x, y: (x, y - 1, 3)                    # 4
    ]
}
cube_toplefts = 6 * [None]

with open('input.txt', 'r') as f:
    line = f.readline().rstrip()
    y = 0
    while line:
        min_x, max_x = 999, -1
        for x, c in enumerate(line):
            if c != ' ':
                tiles[(x, y)] = c
                min_x = min(x, min_x)
                max_x = max(x, max_x)
                side_exist = False
                for topleft in cube_toplefts:
                    if topleft is not None:
                        tx, ty = topleft
                        if tx <= x < tx + face_width and ty <= y < ty + face_width:
                            side_exist = True
                if not side_exist:
                    cube_toplefts[cube_toplefts.index(None)] = (x, y)
                if x > len(boundary_ys) - 1:
                    boundary_ys += (x - len(boundary_ys) + 1) * [(999, -1)]
                boundary_ys[x] = (min(boundary_ys[x][0], y),
                                  max(boundary_ys[x][1], y))
                if start_pos == (-1, -1):
                    start_pos = (x, y)
        line = f.readline().rstrip()
        boundary_xs.append((min_x, max_x))
        y += 1
    instructions = reduce(lambda a, y: a[:-1] + [a[-1] + y] if y != 'L' and y != 'R' else a[:-1] + [a[-1] + y] + [''], f.readline().strip(), [''])

direction = 0
x, y = start_pos
for i, instruction in enumerate(instructions):
    steps = int(instruction[0:-1] if i != len(instructions) - 1 else instruction)
    min_x, max_x = boundary_xs[y]
    min_y, max_y = boundary_ys[x]
    cube_side = cube_toplefts.index(next(filter(lambda topleft: topleft[0] <= x < topleft[0] + face_width and topleft[1] <= y < topleft[1] + face_width, cube_toplefts)))
    while steps:
        diff = movement_map[direction]
        new_x, new_y = x + diff[0], y + diff[1]
        new_direction = direction
        topleft = cube_toplefts[cube_side]
        fell_out = not (topleft[0] <= new_x < topleft[0] + face_width and topleft[1] <= new_y < topleft[1] + face_width)
        if fell_out:
            new_x, new_y, new_direction = cube_connection_operations[cube_side + 1][direction](x, y)
        if tiles[(new_x, new_y)] == '#':
            break
        else:
            x, y, direction = new_x, new_y, new_direction
            cube_side = cube_toplefts.index(next(filter(lambda topleft: topleft[0] <= x < topleft[0] + face_width and topleft[1] <= y < topleft[1] + face_width, cube_toplefts)))
        steps -= 1
    dirchange = instruction[-1] if i != len(instructions) - 1 else None
    if dirchange == 'L':
        direction -= 1
        if direction < 0:
            direction += len(movement_map)
    elif dirchange == 'R':
        direction += 1
        direction %= len(movement_map)
print(1000 * (y + 1) + 4 * (x + 1) + direction)
It's ugly, the process is error-prone, I'm tired, this'll do. I've put off plans for this, man!
Today’s puzzle was much more manageable than the previous days! TGIF & Merry Christmas, amirite?
We follow our hero’s journey as we now have to scatter elves in a fixed way. I spent roughly 30 minutes debugging why my code didn’t work, only to realise that I haven’t fully digested the specifications. Lesson learnt!
Okay so, we have an input like this:
....#..
..###.#
#...#.#
.#...##
#.###..
##.#.##
.#..#..
Each little hashtag moves according to a certain set of rules, which varies by the round number. The rules are:
An "attempt" to move only becomes an actual move if the attempted position is unique among all hashtags.
After every round of movement, steps 2 to 5 are rearranged to 3, 4, 5, 2. Essentially the first considered position is now the last considered position, and the second becomes the first, and so on.
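The rotating order of considered directions can be sketched with modular indexing (a small illustration; the direction labels are just names):

```python
# round r starts its consideration from index r % 4, then wraps around
order = ['N', 'S', 'W', 'E']

def considered(rnd):
    return [order[(rnd + i) % len(order)] for i in range(len(order))]

print(considered(0))  # ['N', 'S', 'W', 'E']
print(considered(1))  # ['S', 'W', 'E', 'N']
```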
With that, here’s a helpful little function to print the board:
def print_board():
    print('\033[0J')
    print('\033[H')
    pos_sorted_x = sorted(list(positions), key=lambda p: p[0])
    pos_sorted_y = sorted(list(positions), key=lambda p: p[1])
    min_x, max_x = pos_sorted_x[0][0], pos_sorted_x[-1][0]
    min_y, max_y = pos_sorted_y[0][1], pos_sorted_y[-1][1]
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            if (x, y) in positions:
                print('#', end='')
            else:
                print('.', end='')
        print()
And here is the solution:
positions = set()
with open('input.txt', 'r') as f:
    line = f.readline().strip()
    y = 0
    while line:
        for x, c in enumerate(line):
            if c == '#':
                positions.add((x, y))
        y += 1
        line = f.readline().strip()

def generate_decisions(rounds):
    decisions = dict()
    for (x, y) in positions:
        intersect_results = [
            len({(x - 1, y - 1), (x, y - 1), (x + 1, y - 1)} & positions) == 0,
            len({(x - 1, y + 1), (x, y + 1), (x + 1, y + 1)} & positions) == 0,
            len({(x - 1, y - 1), (x - 1, y), (x - 1, y + 1)} & positions) == 0,
            len({(x + 1, y - 1), (x + 1, y), (x + 1, y + 1)} & positions) == 0,
        ]
        if all(intersect_results):
            decisions[(x, y)] = (x, y)
            continue
        for iterator in range(len(intersect_results)):
            i = (rounds + iterator) % len(intersect_results)
            match i:
                case 0:
                    if intersect_results[0]:
                        decisions[(x, y)] = (x, y - 1)
                        break
                case 1:
                    if intersect_results[1]:
                        decisions[(x, y)] = (x, y + 1)
                        break
                case 2:
                    if intersect_results[2]:
                        decisions[(x, y)] = (x - 1, y)
                        break
                case 3:
                    if intersect_results[3]:
                        decisions[(x, y)] = (x + 1, y)
                        break
        if (x, y) not in decisions:
            decisions[(x, y)] = (x, y)
    return decisions

def count_empty():
    pos_sorted_x = sorted(list(positions), key=lambda p: p[0])
    pos_sorted_y = sorted(list(positions), key=lambda p: p[1])
    min_x, max_x = pos_sorted_x[0][0], pos_sorted_x[-1][0]
    min_y, max_y = pos_sorted_y[0][1], pos_sorted_y[-1][1]
    return ((max_x - min_x + 1) * (max_y - min_y + 1)) - len(positions)

rounds = 0
while rounds < 10:
    decisions = generate_decisions(rounds)
    hits = dict()
    new_positions = set()
    for result_pos in decisions.values():
        if result_pos in hits:
            hits[result_pos] += 1
        else:
            hits[result_pos] = 1
    for original_pos, result_pos in decisions.items():
        if hits[result_pos] > 1:
            new_positions.add(original_pos)
        else:
            new_positions.add(result_pos)
    positions = new_positions
    rounds += 1
print(count_empty())
Today’s part two is the most natural out of all the part twos I have attempted in this year’s AOC. Simply, we remove the boundaries of rounds, and figure out when all the hashtags run out of moves. So, basically, we just keep running until positions == new_positions
. Hence, our diff would be:
61c61
< while rounds < 10:
---
> while True:
78d77
<     positions = new_positions
79a79,81
>     if positions == new_positions:
>         break
>     positions = new_positions
81c83
< print(count_empty())
---
> print(rounds)
It’s not the fastest piece of code ever, but for the amount of effort I put in, being able to get the answer in five seconds is reasonable enough.
Today’s puzzle is about pathfinding, but on crack.
Let’s examine an example:
#.######
#>>.<^<#
#.<..<<#
#>v.><>#
#<^v^^>#
######.#
The arrows, which are >v<^, are moving obstacles on the board, each moving in the direction it points. These arrows can overlap, and wrap around the board. Our goal is to perform pathfinding through this board, and output the shortest possible path.
Okay, what's the best method? The first method I immediately thought of was to implement a path searching algorithm, and find the shortest path at every step. However, this is largely inefficient; with as many obstacles as shown on the board above, too much effort goes into recalculating the path at every step as obstacles cut across it.
Instead, let's include the moving obstacles in our path search algorithm; at every step, we clone the board, move the obstacles, figure out the best next step, and repeat the process ad infinitum until we reach the target position. To do this effectively, we need an algorithm that quickly converges on the target position, without searching unnecessary paths.
For this, I chose to use A* search.
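A* isn't shown in isolation anywhere in this post, so here is a minimal, generic sketch of it on a hypothetical static grid (no blizzards), just to illustrate the f = g + h priority ordering; the grid, function name, and heuristic are illustrative, not the solution below:

```python
from heapq import heappush, heappop

def astar_grid(grid, start, goal):
    # h: Manhattan distance, an admissible heuristic on a 4-connected grid
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    best = {start: 0}               # best known g (steps so far) per cell
    frontier = [(h(start), start)]  # entries ordered by f = g + h
    while frontier:
        _, (x, y) = heappop(frontier)
        g = best[(x, y)]
        if (x, y) == goal:
            return g
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[ny])
                    and grid[ny][nx] != '#'
                    and g + 1 < best.get((nx, ny), float('inf'))):
                best[(nx, ny)] = g + 1
                heappush(frontier, (g + 1 + h((nx, ny)), (nx, ny)))
    return None

grid = ["....",
        ".##.",
        "...."]
print(astar_grid(grid, (0, 0), (3, 2)))  # 5
```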
As usual, here is a useful function to print the board:
from time import sleep

def print_board(p, hs, steps):
    print('\033[2J')
    print('\033[H')
    print('Steps:', steps)
    px, py = p
    for y in range(0, height):
        for x in range(0, width):
            if ((x, y) == start_position) or ((x, y) == end_position):
                if (px, py) == (x, y):
                    print('E', end='')
                else:
                    print(' ', end='')
                continue
            if x % (width - 1) == 0:
                print('#', end='')
            elif y % (height - 1) == 0:
                print('#', end='')
            elif (x, y) == p:
                print('E', end='')
            else:
                hasDir = 0
                lastDir = '^'
                for c, _ in enumerate(directions):
                    if (x, y, c) in hs:
                        lastDir = directions[c]
                        hasDir += 1
                if not hasDir:
                    print('.', end='')
                elif hasDir > 1:
                    print(hasDir, end='')
                else:
                    print(lastDir, end='')
        print()
    print()
    sleep(0.1)
And here is the search implemented:
from queue import PriorityQueue

directions = '>v<^'
directions_movement = [
    (1, 0),
    (0, 1),
    (-1, 0),
    (0, -1),
    (0, 0)
]
hurricanes = list()
width, height = -1, -1

with open('input.txt', 'r') as f:
    line = f.readline().strip()
    y = 0
    width = len(line)
    while line:
        for x, c in enumerate(line):
            if c in directions:
                hurricanes.append((x, y, directions.index(c)))
        line = f.readline().strip()
        y += 1
    height = y

def move(pos, isPlayer):
    x, y, c = pos
    diff = directions_movement[c]
    x += diff[0]
    y += diff[1]
    if isPlayer:
        return (x, y, c)
    if x > width - 2:
        x = 1
    elif x < 1:
        x = width - 2
    if y > height - 2:
        y = 1
    elif y < 1:
        y = height - 2
    return (x, y, c)

start_position = (1, 0)
end_position = (width - 2, height - 1)
visited = set()
p = PriorityQueue()

p.put((0, start_position, hurricanes, 0))
found = False

while not p.empty():
    old_heuristic, (px, py), current_hurricanes, steps = p.get()
    steps += 1

    # move hurricanes
    new_hurricanes = list()
    for pos in current_hurricanes:
        new_hurricanes.append(move(pos, False))

    # attempt to move
    for c, direction in enumerate(directions_movement):
        x, y, _ = move((px, py, c), True)
        if (x, y) == end_position:
            found = True
            break

        if not (0 < x < width - 1 and 0 < y < height - 1):
            continue

        collides = False
        for (hx, hy, _) in new_hurricanes:
            if (x, y) == (hx, hy):
                collides = True
                break
        if collides:
            continue

        new_heuristic = steps + abs(end_position[0] - x) + abs(end_position[1] - y)
        if (x, y, steps) not in visited:
            p.put((new_heuristic, (x, y), new_hurricanes, steps))
            visited.add((x, y, steps))

    if found:
        print(steps)
        break
In part 2, I found a bug in my original code. If, right out of the gate, there is a hurricane blocking the path of the starting position, then the A* search will return prematurely with no results:
if (x, y) == end_position:
    found = True
    break
To fix this, I simply check if the current position is the starting position; if it is, the subsequent block of code is executed, which has “stay still” as one of the possible actions to take.
Hence, after fixing the bug, I just moved all of the pathfinding code into its own function, which returns the number of steps taken and the state of the board, and called it three times; once from start -> end, end -> start, and start -> end again.
Here is the final diff:
46,49c46,67
< start_position = (1, 0)
< end_position = (width - 2, height - 1)
< visited = set()
< p = PriorityQueue()
---
> def astar(start_position, end_position, hurricanes):
>     visited = set()
>     p = PriorityQueue()
>
>     p.put((0, start_position, hurricanes, 0))
>     found = False
>
>     while not p.empty():
>         old_heuristic, (px, py), current_hurricanes, steps = p.get()
>         steps += 1
>
>         # move hurricanes
>         new_hurricanes = list()
>         for pos in current_hurricanes:
>             new_hurricanes.append(move(pos, False))
>
>         # attempt to move
>         for c, direction in enumerate(directions_movement):
>             x, y, _ = move((px, py, c), True)
>             if (x, y) == end_position:
>                 found = True
>                 break
51,52c69,84
< p.put((0, start_position, hurricanes, 0))
< found = False
---
>             if not (0 < x < width - 1 and 0 < y < height - 1) \
>                     and (x, y) != start_position:
>                 continue
>
>             collides = False
>             for (hx, hy, _) in new_hurricanes:
>                 if (x, y) == (hx, hy):
>                     collides = True
>                     break
>             if collides:
>                 continue
>
>             new_heuristic = steps + abs(end_position[0] - x) + abs(end_position[1] - y)
>             if (x, y, steps) not in visited:
>                 p.put((new_heuristic, (x, y), new_hurricanes, steps))
>                 visited.add((x, y, steps))
54,67c86,87
< while not p.empty():
<     old_heuristic, (px, py), current_hurricanes, steps = p.get()
<     steps += 1
<
<     # move hurricanes
<     new_hurricanes = list()
<     for pos in current_hurricanes:
<         new_hurricanes.append(move(pos, False))
<
<     # attempt to move
<     for c, direction in enumerate(directions_movement):
<         x, y, _ = move((px, py, c), True)
<         if (x, y) == end_position:
<             found = True
---
>         if found:
>             return current_hurricanes, steps
70,88c90,95
<         if not (0 < x < width - 1 and 0 < y < height - 1):
<             continue
<
<         collides = False
<         for (hx, hy, _) in new_hurricanes:
<             if (x, y) == (hx, hy):
<                 collides = True
<                 break
<         if collides:
<             continue
<
<         new_heuristic = steps + abs(end_position[0] - x) + abs(end_position[1] - y)
<         if (x, y, steps) not in visited:
<             p.put((new_heuristic, (x, y), new_hurricanes, steps))
<             visited.add((x, y, steps))
<
<     if found:
<         print(steps)
<         break
---
> start_position = (1, 0)
> end_position = (width - 2, height - 1)
> hurricanes, steps = astar(start_position, end_position, hurricanes)
> hurricanes, backsteps = astar(end_position, start_position, hurricanes)
> hurricanes, backbacksteps = astar(start_position, end_position, hurricanes)
> print(backbacksteps + backsteps + steps - 2)
There’s only one part to this puzzle; and it’s probably the most fun I had in a puzzle thus far!
Nothing like alternate number representations to end off the advent, eh? In this puzzle, we have a bunch of alien-looking numbers, like so:
1=-0-2
12111
2=0=
21
2=01
111
20012
112
1=-1=
1-12
12
1=
122
We eventually find out that each of these numbers is in base 5, but with a twist (as there usually is); - and = represent -1 and -2 respectively, and the maximum digit that can be represented is 2. From a list of these numbers, we need to sum them up, and return our sum in the same format.
Okay, so there are two subproblems: converting a SNAFU number into a regular integer, and converting an integer back into a SNAFU number.
The first subproblem is really simple. All we have to do is to sum the value represented by each digit position, negatives and all: for example, `1=-0-2` can be converted to an integer by this method: 2 + (-1) * 5 + 0 * 5^2 + (-1) * 5^3 + (-2) * 5^4 + 1 * 5^5 = 1747.
In Haskell, this is a `foldr` zipped with the position of each digit, something like this:
snafuToInt :: SNAFU -> Int
snafuToInt = foldr convert 0 . enumerate
  where
    convert (i, digit) acc = acc + (5 ^ i) * (snafuDigitToInt digit)
    enumerate xs = zip [(length xs) - 1, (length xs) - 2 .. -1] xs
where `SNAFU` is just a `String`, and `snafuDigitToInt` converts `=-012` to an integer, namely -2, -1, 0, 1, 2 respectively.
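For readers who prefer Python, the same fold can be sketched there too (a hypothetical mirror of the Haskell above, not part of the original solution):

```python
# Map each SNAFU digit to its value: '=' is -2 and '-' is -1.
SNAFU_DIGITS = {'=': -2, '-': -1, '0': 0, '1': 1, '2': 2}

def snafu_to_int(s: str) -> int:
    """Accumulate left-to-right, multiplying by the base each step."""
    total = 0
    for ch in s:
        total = total * 5 + SNAFU_DIGITS[ch]
    return total

print(snafu_to_int("1=-0-2"))  # the worked example: 1747
```

This is equivalent to the `foldr`: each digit contributes `value * 5^position`.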
To approach the second subproblem, we must understand that we are in a situation where we need to perform repeated divisions to convert a normal base 10 integer to this strange version of an integer. Okay, what if it were to a normal base 5 integer? Normally, we would need to perform the following:
1747 % 5 = 2 (last digit is 2)
1747 / 5 = 349
349 % 5 = 4 (fourth digit is 4)
349 / 5 = 69
69 % 5 = 4 (third digit is 4)
69 / 5 = 13
13 % 5 = 3 (second digit is 3)
13 / 5 = 2
2 % 5 = 2 (first digit is 2)
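The repeated division above can be double-checked with a tiny Python sketch (illustrative only):

```python
def to_base5(n: int) -> str:
    """Plain base 5 conversion by repeated division, least digit first."""
    digits = []
    while n:
        digits.append(n % 5)
        n //= 5
    return ''.join(str(d) for d in reversed(digits))

print(to_base5(1747))  # matches the walkthrough: 23442
```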
As such, our base 5 representation of 1747 is 23442. Now, let’s think about how our number system changes things. If we now want to represent, say, 8, in normal base 5, that would be 1 * 5^1 + 3. In our unique representation, it’s 2 * 5^1 - 2, which means `2=`. We discover that the difference is actually just 1 * 5^1 + (3 - 5) + 5 = 2 * 5^1 - 2, which is `2=`. Okay, what about a smaller number, like 6? That’s 1 * 5^1 + 1 for both normal base 5 and our unique base 5 (`11`).
Hence, we find out that should our normal base 5 digit exceed 2, we need to perform (digit - 5) on it to get the correct representation at that point. But doing so will offset our answer by 5; how do we intend to fix that? Let’s think about a larger number, say 74. This is 2 * 5^2 + 4 * 5^1 + 4 * 5^0 in normal base 5. Using our logic above, to represent this with our unique numbers, we see that: 2 * 5^2 + (4 - 5) * 5^1 + (4 - 5) * 5^0, which is offset by + 5 * 5^1 + 5 * 5^0, missing from the expression. Wait, isn’t that just 5^2 + 5^1? If we apply this back to the unique number expression, then: 3 * 5^2 + (4 - 5 + 1) * 5^1 - 1 * 5^0, which is just 3 * 5^2 - 1, which is 5 * 5^2 - (5 - 3) * 5^2 - 1, which is 5^3 - 2 * 5^2 - 1, which finally translates to `1=0-` in our special integer representation.
What this whole shtick implies is that we need to carry over a 1 to the next significant digit whenever our base 5 digit exceeds the maximum digit, 2.
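In other words: take the plain base 5 digits, and whenever a digit (plus any incoming carry) exceeds 2, subtract 5 from it and carry a 1 upward. A quick Python sketch of that carry rule, shown here only to sanity-check the logic before the Haskell:

```python
# Carry rule sketch: digits that exceed 2 fold down by 5 and push a carry.
TO_CHAR = {-2: '=', -1: '-', 0: '0', 1: '1', 2: '2'}

def int_to_snafu(n: int) -> str:
    """Assumes n > 0; repeated division by 5 with a carry for big digits."""
    digits = []
    carry = 0
    while n > 0 or carry > 0:
        d = n % 5 + carry
        if d > 2:
            d -= 5     # fold the digit into [-2, 2]...
            carry = 1  # ...and push the lost 5 up as a carry
        else:
            carry = 0
        digits.append(TO_CHAR[d])
        n //= 5
    return ''.join(reversed(digits))

print(int_to_snafu(74))    # the worked example: 1=0-
print(int_to_snafu(1747))  # round-trips the earlier example: 1=-0-2
```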
With that finally out of the way, we can implement our logic:
intToSnafu :: Int -> SNAFU
intToSnafu x = reverse $ convertDigits x 0 []
  where
    convertDigits num carry xs
      | num == 0 && carry == 0 = []
      | num' + carry > 2 = intToSnafuDigit (num' + carry - 5) : convertDigits num'' 1 xs
      | otherwise = intToSnafuDigit (num' + carry) : convertDigits num'' 0 xs
      where
        num' = num `mod` 5
        num'' = floor $ ((fromIntegral num) / 5)
I’m reversing the list because I don’t want to append to the end with `++`, which increases my time complexity, however much that matters. Now that we have both of our conversion functions, we can finally do the problem, which is to sum all the numbers together in our special base 5 representation. The full code is as follows:
import System.IO

type SNAFUDigit = Char
type SNAFU = String

snafuDigitToInt :: SNAFUDigit -> Int
snafuDigitToInt '=' = -2
snafuDigitToInt '-' = -1
snafuDigitToInt '0' = 0
snafuDigitToInt '1' = 1
snafuDigitToInt '2' = 2

intToSnafuDigit :: Int -> SNAFUDigit
intToSnafuDigit (-2) = '='
intToSnafuDigit (-1) = '-'
intToSnafuDigit 0 = '0'
intToSnafuDigit 1 = '1'
intToSnafuDigit 2 = '2'

snafuToInt :: SNAFU -> Int
snafuToInt = foldr convert 0 . enumerate
  where
    convert (i, digit) acc = acc + (5 ^ i) * (snafuDigitToInt digit)
    enumerate xs = zip [(length xs) - 1, (length xs) - 2 .. -1] xs

intToSnafu :: Int -> SNAFU
intToSnafu x = reverse $ convertDigits x 0 []
  where
    convertDigits num carry xs
      | num == 0 && carry == 0 = []
      | num' + carry > 2 = intToSnafuDigit (num' + carry - 5) : convertDigits num'' 1 xs
      | otherwise = intToSnafuDigit (num' + carry) : convertDigits num'' 0 xs
      where
        num' = num `mod` 5
        num'' = floor $ ((fromIntegral num) / 5)

main = do
  contents <- readFile "input.txt"
  let result = intToSnafu . sum . map snafuToInt $ lines contents
  print result
And with that, we’ve completed Advent of Code 2022, the first time ever I’ve done so!
Advent of Code Calendar 2022
I’ll probably update this blog post for formatting, English and clearer explanations after Christmas, but I will not change the published date.
AOC has been a fun experience for me to hone my skills in a way that did not feel too overbearing, yet fun and engaging. The puzzles taught me a lot, highlighting things that I should improve on. In a nutshell, the lessons were:
I hope to do AOC next year too, hopefully with fewer mistakes!
Merry Christmas and Happy 2023, folks.
Happy Coding,
CodingIndex
Today, I’ll be writing about the hidden magical gem that is Linear Programming, available in your nearest spreadsheet program, be it LibreOffice Calc, Excel, or Google Sheets.
Unlike the other posts you might have seen within my blog, Linear Programming isn’t actually programming. Rather, it is “a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships”, according to Wikipedia.
For those who are uninitiated, or need a mini not-so-professional refresher, let us break down the definition.
Essentially, this is “optimization”. We construct an equation, and we try to either minimize or maximize it  you probably had some exposure to it in high school when they taught us how to differentiate.
However, in optimization, instead of figuring out if an equation should be minimized/maximized, we define if the equation should be minimized/maximized based on our requirements.
Linear relationships are essentially either equalities or inequalities (`=`, `>`, `<` and so on):
An example of a linear inequality  Source: Me
Since the relationships must be linear, it implies that equations like the following cannot be solved with Linear Programming:
An example of a nonlinear inequality  Source: Me
If inequalities like the above present themselves, the best course of action would be to use another kind of solver, like a nonlinear programming solver, or a Constraint Problem (CP) solver like this one by Google. However, chances are that with a touch of creativity, most problems can be expressed as a linear programming problem.
In a nutshell, given a bunch of inputs, let’s say:
A bunch of inputs  Source: Me
We can define a bunch of constraints represented via linear relationships, like:
An example of a linear inequality  Source: Me
For Linear Programming involving only two variables, we can visualize how it works with graphs. Let’s say our two variables are `x` and `y`, and our constraints are:
First Constraint  Source: Me
Second Constraint  Source: Me
We will find that the graph on Desmos will look like this:
Graph. Green represents constraint 1, Blue represents constraint 2  Source: Me
The intersected area (i.e. the areas that are both blue and green) contains the solutions to the inequalities (note that the boundary itself is not a solution, since both of our inequalities are strict). Now, if we were to define an objective function, which is the function we want to minimize or maximize:
Objective Function, 2x + y  Source: Me
And then plot it on the graph:
Objective Function (purple) on the graph  Source: Me
We see that the intersection between the line and the overlapping shaded areas contains all the values that satisfy both constraints and the objective function. All we need to do now is to determine what `x` and `y` should be if we choose to maximize or minimize our objective function. If we wanted to maximize our objective function, then the answer we seek is as close to the intersection as possible. Otherwise, if we were to minimize our objective, then the answer we seek should technically be at another intersection, which isn’t possible with these particular constraints; hence, minimizing the objective would be “INFEASIBLE”.
To wrap up the example, performing linear programming would give us a few results:
- `x` if the objective function was minimized/maximized
- `y` if the objective function was minimized/maximized

And subsequently, to wrap up generally:

- `x_1, x_2, x_3, ... x_n` if the objective function was minimized/maximized

With more variables, we are essentially working with linear constraints in n-dimensional graphs, which might sound difficult to visualize until you realize it doesn’t really matter, since the user is the one that defines the constraints anyway.
To learn more about how exactly to solve Linear Programming problems, look at the Simplex algorithm. Good solvers will indicate if there is more than one possible answer, or offer a “close-enough” solution should the entire system be infeasible; although, that is in no way necessary or universal in well-used solvers.
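Since this post’s actual constraints live in the images, here is a purely hypothetical feasible region (`y > x` and `x + y < 4`) with the same objective `2x + y` as the example above, brute-forced over a grid instead of a real solver, just to make the “answer hugs the constraint intersection” intuition concrete:

```python
# Hypothetical feasible region standing in for the figures: y > x, x + y < 4,
# with x, y in [0, 4]. Objective: maximize 2x + y. A grid scan stands in for
# a real Simplex-based solver.
best = max(
    (2 * (x / 20) + (y / 20), x / 20, y / 20)
    for x in range(81)  # x/20 sweeps [0, 4] in steps of 0.05
    for y in range(81)
    if (y / 20) > (x / 20) and (x / 20) + (y / 20) < 4
)
print(best)
```

On this grid the best point is (x, y) = (1.95, 2.0), hugging the intersection of the two constraints at (2, 2), exactly as the graphical argument predicts.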
In my line of work (and probably most of yours, too), duty is a necessary part of work. As a software engineer, this could translate to being on-call; as a doctor, it could be ER shifts, and so on. Needless to say, countless psychological battles have been waged across the globe thanks to conflicting agendas when it comes to planning duty slots: “no weekends please”, or “no public holidays please”, or “my wife’s pregnant”, or “I need to walk my pet rock”.
As a duty planner, if you were to ignore these claims, you would be seen as a cold-hearted human being. So I thought: why not just offload the work onto a computer program? Not only would this save time and be much fairer compared to a human (especially if you are also planning for yourself), you would be disguising your own stone-cold, immovable heart and instead blaming your inhumanness on a computer program.
Leveraging Google Sheets’ integration with Google Forms, I modeled our own planning considerations as linear relationships, maximized preferred dates, and minimized the amount & quality (defined by weekends and public holidays) of duty disparity between duty personnel. Then, I solved them using Google’s Linear Optimization Service (GLOS).
Originally, I used the OpenSolver app on Google Sheets to solve, but I later realized how slow it was when I was developing the Google Sheet.
Here is a GitHub snippet link that contains all of the Google Apps Script used within the relevant Google Sheet. The Google Sheet itself is not opensource, since it contains sensitive data that I won’t try cleaning.
Did you know that Google Sheets collaboration isn’t actually simultaneous? The edits from each user just happen so quickly that you see them as simultaneous. I gathered this not only from JavaScript browser engines being incapable of multiprocessing, but also from personal experience, where a busy script can lock all of the users out. Also, programmatically reading / writing each cell is extremely slow compared to bulk-writing an entire matrix into Google Sheets.
Instead, allow me to explain how I managed to create linear relationships for some of our planning considerations.
Take `x_i_j` (synonymous with `x` generally) to be any duty date, where `i` and `j` are the personnel and day respectively, and `b_i_j` (synonymous with `b` generally) to be any backup date, where the personnel is to serve as a backup for the duty personnel. `1` represents duty / backup on that day (depending on whether `x` or `b` is referred to), and `0` represents no duty / backup on that day.
As this is considered an “innovation” rather than an “invention”, it is meant to work as a transition between the old process (manual planning) and the new process (automatic planning). Hence, dates that are manually planned must not be changed. “Set-In-Stone” acquires non-empty cells, and adds a linear constraint for each affected cell:

- Cells marked as duty: `x = 1`, `b = 0`.
- Cells marked as neither duty nor backup: `x = 0`, `b = 0`.
- Cells marked as backup: `x = 0`, `b = 1`.
Every single day should have 1 personnel performing duties, while another personnel will be the backup. This is achieved simply by summing over all `i`, in the same `j`, for all duties / backups.
Repeat this for every `j`  Source: Me
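As a sketch of how such constraint rows could be generated (the flattened variable layout and the sizes here are hypothetical, not taken from the actual Sheet):

```python
# One "exactly one duty person per day" equality row per day j, expressed as
# coefficients over flattened variables x[i][j] stored at column i * days + j.
def one_per_day_rows(people: int, days: int):
    rows = []
    for j in range(days):
        row = [1 if col % days == j else 0 for col in range(people * days)]
        rows.append((row, '=', 1))  # encodes: sum over i of x[i][j] == 1
    return rows

rows = one_per_day_rows(people=3, days=4)
print(len(rows))  # one constraint per day: 4
```

The backup variables `b` would get an identical set of rows of their own.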
If we were to generate a duty timetable now without some specific constraints, the model would simply assign all the duties to one single person who is free. There are three ways that we are combating this:
For a single slot (i.e. the status of duty for a particular person on a particular day), consecutive days are prevented by using this clever little equation, iterating `x_ij` over all possible values of `j` for `a` number of days, where `a` is the limit to the number of days someone is allowed to serve.
Clever Sum  Source: Me
In effect, this ensures that for `a` days after a duty / backup (not shown) slot, there will not be any more duties.
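The equation itself is only shown as an image; one plausible reading of it is a sliding-window sum, where for every day `j` the sum of `x[i][j..j+a]` must be at most 1. A hedged Python sketch of that reading:

```python
# Sliding-window reading of the "clever sum": at most one duty inside any
# window of a + 1 consecutive days, which spaces duties at least `a` apart.
def violates_spacing(schedule, a):
    """schedule: 0/1 duty flags for one person across the whole period."""
    for j in range(len(schedule) - a):
        if sum(schedule[j:j + a + 1]) > 1:
            return True
    return False

print(violates_spacing([1, 1, 0, 0], 1))  # back-to-back duty: True
print(violates_spacing([1, 0, 0, 1], 2))  # spaced far enough apart: False
```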
Without delving too deep into the point system details (as this is subject to individual implementation), a common understanding between the planners and the personnel involved is that everyone should have roughly equal points.
As one of the pivotal factors in eliminating model bias, the points must be allocated fairly to each person. This is done quite simply by taking the projected amount of points (calculated by summing the possible points earned throughout the entire period, then dividing by the number of days in the period), introducing a deviation variable, which dictates how many points each person can differ from one another by, and then summing the points for each person, ensuring that it is between `point_avg - deviation` and `point_avg + deviation`.
Repeat this for every i  Source: Me
In effect, this means that the model itself determines the value of `deviation`, which we want to minimize as much as possible.
This is actually quite simple. After plotting each unavailable date onto a matrix, `x` and `b` just have to be constrained from `0` to `0`, or `0` to `1`, if unavailable or available respectively.
This is also pretty simple. `0 <= x + b <= 1` would do the trick: `x` and `b` cannot both be `1`, as that would sum to `2`, which is greater than `1`. This prevents a person from simultaneously being his own backup.
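A two-line check of that constraint over all binary combinations (illustrative only):

```python
# With binary x (duty) and b (backup), 0 <= x + b <= 1 excludes exactly the
# case of being duty and backup at once.
def valid_pair(x, b):
    return x in (0, 1) and b in (0, 1) and 0 <= x + b <= 1

print([(x, b) for x in (0, 1) for b in (0, 1) if valid_pair(x, b)])
# (1, 1) is the only combination ruled out
```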
This is done in two parts:
The constraints are quite simple:

- Sum the weekday duty slots (every `j` per `i`), and it must be greater than or equal to the projected average.

Weekday is chosen solely due to the large availability compared to weekends; there may be fewer weekends than the number of people you are planning for!
For obvious reasons, `x` and `b` can only be either `0` or `1`. `deviation` is a continuous variable, solely because points can be expressed as decimals.
Combined together, the objective of our function is to prioritize & maximize preferred slots (by modifying the points at the objective level, which has benefits over the constraint level: objectives are suggestive, while constraints are requirements), while minimizing the deviation variable mentioned earlier.
In a nutshell we are maximizing:
Objective Function  Source: Me
where `deviation` and `x` are as expected, while `s` is a matrix containing values that are either `0`, `1` or `2`. `1` represents a normal day, while `2` represents a preferred day; `0` is essentially nil, since there is already a constraint that prevents duty slots from being filled if personnel are unavailable.
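A sketch of evaluating such an objective (the `penalty` weight on the deviation and the tiny matrices are hypothetical; the real weighting lives in the Sheet’s script):

```python
# Reward preferred slots (s == 2) over normal ones (s == 1), and penalize
# the fairness deviation; higher is better since we are maximizing.
def objective(s, x, deviation, penalty=10):
    gain = sum(
        s[i][j] * x[i][j]
        for i in range(len(s))
        for j in range(len(s[0]))
    )
    return gain - penalty * deviation

s = [[1, 2], [1, 1]]          # person 0 prefers day 1
preferred = [[0, 1], [1, 0]]  # assignment honouring the preference
swapped = [[1, 0], [0, 1]]    # assignment ignoring it
print(objective(s, preferred, 0) > objective(s, swapped, 0))  # True
```

Because preferred slots are worth more in the objective, the solver gravitates toward schedules like `preferred` without being forced to pick them.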
The result:
Beautiful Result  Source: Me
Green is duty, Yellow is backup, Black is unavailable, and Red is nothing. C is special consideration.
Before I learnt about optimization functions through a mathematical nerd friend of mine, I always thought this kind of problem would be more easily solved with things like Machine Learning or brute-force search.
However, optimization functions not only reduce the search space by a lot, they also ensure that the result is mathematically sound, and a hundred percent cold and calculating, so that you can freeze anyone who decides you are being too inhumane in planning. Or, you could be soft like me and add in dates of consideration (like ethnic holidays). Feel free to use my Google Apps Script code; that is, after you figure out how to create the Google Sheet that it requires as input!
Happy Coding,
CodingIndex
It’s been a while, huh? I’ve completely broken my New Year’s resolution of delivering 1 blog post every month, gotten listless in my life, and generally lost a great chunk of motivation to maintain my Gentoo installation. In terms of recreation during my weekends, I spend a great deal of time playing gacha video games (as a free-to-play player, because I’m broke) and watching a bunch of anime.
It has been a few months since I last touched code for recreation; something I regret greatly. Sometimes, in moments of panic and anxiety for the future, I would log into HackerRank just to practice. Recently, I’ve been slowly going through the Rust Programming Language book, which has a notable “borrowing” concept to manage memory that piques my interest; I hope to explore it in the near future.
Of course, this isn’t all I’ve been doing for the past few months. I’ve picked up chess puzzles (not chess itself), learnt how to solve the Rubik’s cube, automated some parts of my job with simple excel skills, and:
This external HDD has been with me for three years, containing many projects and Linux containers used for academic purposes and hackathons. My laptop, while a powerhouse, has limited storage, and could not meet the storage demand required to archive these projects. Hence, I relied on the external HDD as if it were a built-in drive, which meant that it was plugged in at every moment the laptop was online.
Despite knowing that the typical hard drive lasts for 3 to 5 years, I thought that there was no need to do predictive maintenance (like backing up) on the drive; furthermore, S.M.A.R.T. was still returning an “ok” a few weeks prior to the failure. Alas, it failed spectacularly, and I was unable to recover the data on the drive with tools like `dd` and Live Recovery CDs like CloneZilla.
To troubleshoot further, I thought about what an external (implied: portable) HDD is typically made up of:
Cracking open my Maxtor drive, I found out that it uses a 1TB internal hard drive from Seagate. Inputting the serial number into the Seagate warranty website suggests that the internal HDD was made particularly for Maxtor HDDs.
Internal HDD bundled with the adapter  Source: Me
SATA to USB3.0 adapter  Source: Me
I took the Maxtor internal HDD and plugged it into a desktop with a SATA cable to try recovering the data again.
Plugging in the HDD  Source: Me
To my dismay, it didn’t work: all of the sectors after the header sector are completely unreadable; furthermore, I’ve encrypted the drive with VeraCrypt, so chances are, the data is impractical to recover.
It may seem like I am barking up the wrong tree here, but I am quite disappointed in the performance of Seagate drives in general. A majority of the drives in my possession are from Western Digital, which are fantastic drives that have lasted me 5 - 6 years since I acquired them.
And it’s not just me; a majority of those in the tech community agrees that WD drives (particularly WD Black) are very reliable hard drives. In my life, I’ve owned two Seagate drives; both of them have failed, despite being newer than my WD Blue drives. I also own an old WD Passport from 2015, which has outlasted everything I’ve had in my possession, although it is an unfair comparison since I don’t run the drive unless necessary.
In the first place, relying on the external HDD for daily, I/O intensive stuff is generally a bad idea :tm:. So, I decided to solve the root cause: my laptop did not have enough storage capacity for my needs. My laptop has a 512GB Samsung 960 M.2 SSD, which is partitioned somewhat in half for dualbooting Windows & Ubuntu. This meant around 230GB for each operating system for programs, documents, Machine Learning models, and highly bloated IDEs.
Hence, I decided to get a 1TB Samsung 980 M.2 SSD:
Samsung 980 M.2 SSD  Source: Samsung Official Site
I also decided to replace the external HDD with a new WD Passport, to archive projects that I won’t be working on anymore. This new management of storage will allow my HDD to last much longer, and allow me to bring everything I need on my laptop to do productive work. Furthermore, the upgrade even improved my boot times quite a bit.
One day, my display ceased to function. To troubleshoot it, I decided to open up both of my monitors and swap parts until I figured out that:
This is the motherboard in question:
23es Motherboard  Source: Me
I bought the monitor in a sale for $150, which is an insanely good price because on Amazon, it costs about $250. However, because I have basic repair skills, I instead bought the motherboard on Aliexpress for $30.
Sure enough, swapping out the parts fixed the issue; however, seeing as how this (perfectly normal, cooler-less) monitor’s motherboard broke down after a mere 4 years of usage, the problem by and large is likely the humid yet dry environment the monitor is operating in, which causes wear and tear in its components. At the same time, I have an Acer monitor that has lived longer; hopefully, this was just a one-time fluke in the operating life expectancy of the HP 23es monitor.
Windows is a widely used operating system used worldwide in various environments and settings. Hence, there are games and tools built specifically for Windows alone. Linux has many tools to try and bridge the gap between Windows applications and the Linux operating system, either by porting the application, building an alternative, using WINE, or optimizing virtual machines to run only for Windows applications.
The last option stated above has gotten really good recently; by applying a technique known as Single GPU passthrough, Linux users can use applications that require hardware acceleration, most notably in video games or professional video editing software. However, this technique does not work on CPUs prior to Intel Broadwell, which is unfortunately exactly what I have on my x220.
As an advocate of privacy, I love the concept of ephemeral runtimes: whatever changes are done by any application within an ephemeral runtime will not be committed to the system, which makes it simple to run “throwaway” applications should I only need them occasionally, rather than frequently. When I am done with the application, I can simply shut down the system, and it will be like I never used the application in the first place. Furthermore, it does not take up space on the hard drive like a traditional operating system would, meaning that I have more space for storing the files and applications that affect my day-to-day life, which makes my setup more organized and purposeful.
Hence, I decided to create my own ephemeral Windows 10 LiveCD, using a tool called Winbuilder. To support 3D applications and as wide an application spectrum as possible, I ensured that the following features were enabled:
Another consideration, when choosing which project to run within Winbuilder, is the difference between WinRE and WinPE. RE stands for Recovery Environment, while PE stands for Preinstallation Environment.
Even though WinRE is based on WinPE, WinPE loads network drivers and is a more complete environment compared to WinRE, which provides more recovery tools than operating system utilities (Tom’s Guide). Since what I need is essentially as complete a Windows 10 environment as possible, the natural right answer is WinPE. Furthermore, in my testing, there were some applications that just refuse to run on WinRE, but run perfectly fine on WinPE.
Happy Coding
CodingIndex
So, I don’t have many friends. The friends that I have are… strange, to say the least.
A while ago, I, ModelConverge and nikhilr migrated to Signal, to escape from the privacy policy change imposed by Whatsapp. While Whatsapp claims that the privacy policy change will only affect Whatsapp Business users, we had already wanted to migrate away from Whatsapp ever since Facebook acquired it; so the policy change by Whatsapp simply acted as a catalyst. We are hence glad to report that we were part of the masses that hugged Signal to death during a mass migration to the Signal platform, especially after Elon Musk’s tweet.
For those of you living under a rock, Signal is an instant messenger just like Whatsapp. Many people migrated to Signal because: (i) it is opensource, (ii) it is run by a nonprofit organization and (iii) has libraries & specifications for developers who want to leverage the Signal protocol or platform to build apps.
The Signal messenger is wonderful; but the users  they have too much power. One of my pals, nikhilr decided to change the group’s avatar photo, drastically changing the friendly democratic climate we shared, effectively serving as a declaration of war between all parties involved. What followed was a great group war that is described in history books as the pivotal moment of the greatest creation.
Fighting a great war in Signal  Source: Me
I couldn’t just sit idly by and watch as my enemy won battle after battle, getting foothold after foothold on my sanctuary; hence, as a responsible and perfectly rational adult, I decided to abandon all of the work society had me do, and built a Signal bot to eliminate my enemy’s only advantage (free time), and exploit his weakest point (the fact that he is human and hence slower).
Using a bot to fight the war  Source: Me
As you can clearly see, before nikhilr decided to remove my privileges to edit the Group Avatar like a true savage undeserving of a respectful knight, my bot fought an admirable battle, stunning my enemies who displayed sheer awe towards my cunning plot.
Today, we won’t be building Group Contender Bot; instead, we’ll just be making a simple Signal bot, to jog your creativity and get you started.
Being a container nerd, I decided that my bot must be set up and run in a container. Automatically, this means that the Signal bot can be run from any platform that can run Docker; furthermore, this would deploy nicely on a home server running most services on `docker-compose`.
When searching for a way to interface with Signal, I found Signal CLI, which exposes a DBus interface for applications to interact with. Hence, all I needed to do was to get a library that could interface with DBus, like `pydbus`.
Many Linux applications talk to each other over the System DBus; according to this StackOverflow post, it is used as an alternative to `sudo`, by allowing a non-privileged application to perform inter-process communication (IPC) with a more privileged application through a bunch of exposed functions. Hence, the System DBus is also the default DBus used by many applications.
Because of the non-privileged <-> privileged method of communication, container software does not normally expose the System DBus to guest containers, because it would open up a whole array of possible vulnerabilities. Thankfully, when digging deeper into what DBus actually is, I found out that it is essentially a protocol slapped on top of a UNIX socket, meaning that theoretically, it should be possible to construct my own DBus instance just for Signal communication.
The beauty of using DBus to communicate implies that any language under the sun can be used; I decided to go with Python on an impulse with no clear thought; if I were to make a rational choice, I would have selected Golang, for how simple it is to spawn goroutines for multiprocessing.
On the other hand, Python makes the code more understandable to a wider audience, given its simplicity, and how it is the “comfy” language for most people, allowing a wider audience to develop useful Signal bots.
So, let us build a Signal bot!
First and foremost, we need to install all of the dependencies. On an Ubuntu system, Signal CLI requires `default-jre`, while the `pydbus` package requires `build-essential`, `libcairo2-dev` and `libgirepository1.0-dev`. As you can see, for a bot that will run in a container, there are quite a lot of dependencies; hence, instead of polluting my otherwise pure host environment, I decided to create a `Dockerfile` to build me an environment that can handle Signal CLI.
FROM ubuntu:latest
RUN apt-get update && DEBIAN_FRONTEND="noninteractive" apt-get install -y python3 python3-pip default-jre coreutils curl wget libcairo2-dev libgirepository1.0-dev
WORKDIR /tmp
RUN curl -s https://api.github.com/repos/AsamK/signal-cli/releases/latest \
    | grep "browser_download_url.*tar.gz" \
    | cut -d : -f 2,3 \
    | tr -d \" \
    | grep ".gz$" \
    | wget -qi -
RUN mkdir -p /opt/cli && mkdir -p /opt/bot && tar xvf *.tar.gz -C /opt/cli && mv /opt/cli/signal* /opt/cli/signal
WORKDIR /opt/bot
RUN pip install pydbus PyGObject
I adapted the `curl` command from this GitHub gist written by @steinwaywhw.
This creates an Ubuntu container with the latest Signal CLI installed in `/opt/cli`, and the working directory planted in `/opt/bot`. To use this container for Signal bot development, you will need to keep some things at hand:
Once you have figured out the phone number & directories you want to use, set them in a terminal you’ll be using for Signal bot related work:
export PHONE_NUMBER="<a phone number, with +countrycode prefixed>"
export SIGNAL_CLI_DATA="<a directory for signal secrets>"
export SIGNAL_BOT_PROJECT="<the directory to your bot project>"
export SIGNAL_BOT_NAME="<any alphanumeric name for your bot>"
alias signalcli='docker run -v "$SIGNAL_BOT_PROJECT:/opt/bot" -v "$SIGNAL_CLI_DATA:/root/.local/share/signal-cli" -e PHONE_NUMBER="$PHONE_NUMBER" signalbot:latest /opt/cli/signal/bin/signal-cli'
For development purposes, we should first link the Signal CLI to our phone number, so that the bot can send and receive messages. To do this, we first copy + paste the Dockerfile
to a local directory, and build the Docker image:
wget https://gist.githubusercontent.com/jameshi16/71764cc0bac84adda717e9ddb0b44364/raw/2fff57fac78826e17ad097dcb4c7ed1e873ddb1e/Dockerfile
docker build . -t signalbot:latest
Now, if you want to link to a phone number you already use for daily Signal usage, then run this command:
signalcli link -n "$SIGNAL_BOT_NAME" > /tmp/output & \sleep 10 && cat /tmp/output | curl -F-=\<- https://qrenco.de && fg
A QR code should be generated and then printed on your terminal window; scan the result with your phone’s Signal messenger. If you don’t know how, follow the guide on the official Signal Support.
Your device should be linked.
Otherwise, if you want to link a completely new phone number, then run this Signal CLI command through the container:
signalcli -u ${PHONE_NUMBER} register
You should receive an SMS with your OTP code to activate Signal. Copy that verification code before running this command:
signalcli -u ${PHONE_NUMBER} verify <insert verification code>
Now, we move to the stage where we write the bot. No matter what language the bot is written in, the bot needs at least two other running processes:
Hence, before we can even write the content required for the bot, we must first write an entrypoint script for the Docker container. Luckily, we can quite easily write this script:
entrypoint.sh
#!/bin/bash
set -e
export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --session --fork --print-address)
touch /tmp/output.log
/opt/cli/signal/bin/signal-cli -u "${PHONE_NUMBER}" daemon >> /tmp/output.log 2>&1 &
dbus-monitor --session >> /tmp/output.log 2>&1 & # comment this out if you no longer need to monitor the bus
sleep 20s && python3 /opt/bot/script.py >> /tmp/output.log 2>&1 &
tail -f /tmp/output.log
The script above assumes that you are executing a bot written in Python, with the entrypoint of that bot within `script.py` of your project folder, and also assumes that you have set the environment variables correctly. Let’s test it out:
alias run_bot="docker run -v \"$SIGNAL_BOT_PROJECT:/opt/bot\" -v \"$SIGNAL_CLI_DATA:/root/.local/share/signal-cli\" -e PHONE_NUMBER=\"$PHONE_NUMBER\" signalbot:latest ./entrypoint.sh"
wget -O script.py https://gist.githubusercontent.com/jameshi16/71764cc0bac84adda717e9ddb0b44364/raw/fd3fc896bfe56d18741ba84c8c63d00f34c8434b/receive.py
run_bot
The script, written by `mhg` and modified by me to use a Session Bus instead, essentially reads every message pumped into Signal out onto the terminal window.
The purpose of
sleep 20s
is to give Signal CLI some time to: (i) start daemonizing, (ii) connect to DBus, and (iii) synchronize messages a little before the actual script starts. Sometimes this takes more than 20s, but for our purposes it should be good enough. You may find your bot unresponsive during this stage, but trust me: it will work eventually, once it has caught up with all of the messages.
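Incidentally, if the fixed sleep offends you, a more robust approach (a sketch of my own, not part of the original setup) is to poll until the daemon is actually reachable. A generic retry helper could look like the one below; the actual probe, e.g. attempting `bus.get('org.asamk.Signal', '/org/asamk/Signal')` and catching the failure, is an assumption left to you:

```python
import time

def wait_for(probe, timeout=60.0, interval=1.0):
    """Repeatedly call probe() until it returns True or the timeout elapses.

    Returns True if the probe succeeded, False if we gave up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False
```

With something like this in place, the bot starts as soon as signal-cli is up, instead of always waiting the full 20 seconds.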
Once you have verified that your whole setup works, it is time to write the code for the Signal bot. Let’s start with the receive.py
sample code you downloaded to test your setup:
script.py
#!/usr/bin/python3
from pydbus import SessionBus
from gi.repository import GLib

def msgRcv(timestamp, source, groupID, message, attachments):
    print("msgRcv called")
    print(message)
    return

bus = SessionBus()
loop = GLib.MainLoop()
signal = bus.get('org.asamk.Signal', '/org/asamk/Signal')
signal.onMessageReceived = msgRcv

if __name__ == '__main__':
    loop.run()
If you’ve linked the bot to a number that is already using Signal, you will notice that this piece of code only works when people other than yourself message you. If you want to receive all messages, including the ones from yourself, then change:
  def msgRcv(timestamp, source, groupID, message, attachments):
      print("msgRcv called")
      print(message)
      return

+ def msgSyncRcv(timestamp, source, destination, groupID, message, attachments):
+     msgRcv(timestamp, source, groupID, message, attachments)
+     return

  ...

  signal = bus.get('org.asamk.Signal', '/org/asamk/Signal')
  signal.onMessageReceived = msgRcv
+ signal.onSyncMessageReceived = msgSyncRcv

  if __name__ == '__main__':
And then run the bot again with the run_bot
command.
Let’s make the bot respond to commands that start with the /
prefix, by changing the contents of the msgRcv
function:
def msgRcv(timestamp, sender, groupID, message, attachments):
    if len(message) > 0 and message[0] == '/':
        signal.sendGroupMessage("{:s} said {:s}".format(sender, message), [], groupID)
    return
Now, send a message to the bot with the /
prefix, and you should see that the bot echoes you like a parrot. With that, we now have a basic bot. For more things that the bot can do, check out Signal CLI’s DBus wiki; all of the functions are available by lower-casing the first letter and accessing them as members of the signal
object. This also includes the DBus signals listed in the manpage.
For a more complete guide, let’s make an 8-ball bot, which returns a Magic 8-Ball-style response chosen at random.
A Magic 8-Ball has 20 different answers, which can be represented by the following Python list:
responses = ['It is Certain.', 'It is decidedly so.', 'Without a doubt.', 'Yes definitely.', 'You may rely on it.', 'As I see it, yes.', 'Most likely.', 'Outlook good.', 'Yes.', 'Signs point to yes.', 'Reply hazy, try again.', 'Ask again later.', 'Better not tell you now.', 'Cannot predict now.', 'Concentrate and ask again.', 'Don\'t count on it.', 'My reply is no.', 'My sources say no.', 'Outlook not so good.', 'Very doubtful.']
For the msgRcv
function, we simply choose a random string from the list and return it whenever we see 8ball after the /
prefix:
import random

responses = ['It is Certain.', 'It is decidedly so.', 'Without a doubt.', 'Yes definitely.', 'You may rely on it.', 'As I see it, yes.', 'Most likely.', 'Outlook good.', 'Yes.', 'Signs point to yes.', 'Reply hazy, try again.', 'Ask again later.', 'Better not tell you now.', 'Cannot predict now.', 'Concentrate and ask again.', 'Don\'t count on it.', 'My reply is no.', 'My sources say no.', 'Outlook not so good.', 'Very doubtful.']

def msgRcv(timestamp, sender, groupID, message, attachments):
    if len(message) > 0 and message[0] == '/':
        if '8ball' in message[1:]:
            signal.sendGroupMessage('8ball: ' + random.choice(responses), [], groupID)
    return
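Checking substrings inside msgRcv scales poorly once the bot grows beyond a couple of commands. One tidier pattern (a sketch of my own, not from the gist) is a dispatch table keyed by command name:

```python
import random

# Trimmed for brevity; use the full 20-answer list from above.
responses = ['It is Certain.', 'My reply is no.', 'Very doubtful.']

def eight_ball(args):
    """Handler for /8ball: ignore the arguments, pick a random answer."""
    return '8ball: ' + random.choice(responses)

# Map each '/command' name to its handler function.
COMMANDS = {'8ball': eight_ball}

def handle(message):
    """Return the bot's reply for a '/command' message, or None."""
    if not message.startswith('/'):
        return None
    name, _, args = message[1:].partition(' ')
    handler = COMMANDS.get(name)
    return handler(args) if handler else None
```

msgRcv then reduces to calling handle(message) and sending the result through sendGroupMessage if it isn’t None; adding a new command is just one more entry in COMMANDS.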
Full code for script.py
can be found in my gist. After editing the script, the bot can be run with:
run_bot
Now, on Signal, messaging /8ball
should yield:
A response from the magical 8-ball - Source: Me
The last part is probably the simplest: writing the docker-compose.yml
file. The template should be quite self-explanatory:
version: '3'
services:
  signalbot:
    build: https://gist.githubusercontent.com/jameshi16/71764cc0bac84adda717e9ddb0b44364/raw/Dockerfile
    image: signalbot
    command: /bin/bash -c "./entrypoint.sh"
    volumes:
      - ${SIGNAL_CLI_DATA}:/root/.local/share/signal-cli
      - ${SIGNAL_BOT_PROJECT}:/opt/bot
    environment:
      - PHONE_NUMBER=${PHONE_NUMBER}
Then, fill in the relevant details in the .env
file. If you have not shut down the terminal you used in the prerequisite stage, you can use this command to generate the .env
file:
echo -e "SIGNAL_CLI_DATA=${SIGNAL_CLI_DATA}\nSIGNAL_BOT_PROJECT=${SIGNAL_BOT_PROJECT}\nPHONE_NUMBER=${PHONE_NUMBER}" > .env
docker-compose config
You should see all of the environment variables substituted. If they are all there, then you can run:
docker-compose up
to see the bot in action, and run:
docker-compose up -d
to detach it from the terminal and run it in the background.
Welp, that was fun! I will make the source code for the Group Avatar Contender bot available soon; but don’t count on it being online after this blog post. Hopefully, this post makes up for the missing one you would otherwise have gotten in May. There should be a separate blog post for June; until then, ciao!
Happy Coding,
CodingIndex
Instead, I changed my plans and wanted to write about studying techniques for remembering content you have little interest in, since that has been my most utilized skill for the past few months. Then, I looked at my coffee, drank the coffee, thought about how interesting it was that coffee was banned a few times in history, and decided that I was going to write a blog post about coffee instead.
So, let’s get started!
Coffee contains caffeine, a drug that stimulates the brain and helps us stay awake and alert (ref. 1, 2, 3). Coffee is ubiquitous in almost any profession and scenario that requires the worker to stay awake: artists, musicians, labourers, architects, engineers, and of course, programmers. For enterprise programmers, it helps them trudge through client complaints, unreasonable demands from product managers, and failing unit tests, just to name a few. Caffeine also boosts mood, metabolism and physical performance (ref. 8).
Despite its name, decaffeinated coffee is not fully caffeine-free; a cup of coffee containing 180mg of caffeine will still have about 5.4mg of caffeine after decaffeination (ref. 11).
For adults, the recommended daily caffeine intake is 400mg, which is about 4 or 5 cups of coffee per day (ref. 4). For reference, a cup conventionally contains 200ml to 250ml of coffee (ref. 5); the former limits intake to 5 cups a day, while the latter limits it to 4 cups a day. In metric terms, this means an adult can drink up to 1 litre of coffee a day.
There exist light coffee drinkers and heavy coffee drinkers, differentiated by their caffeine tolerance (ref. 6), which affects the magnitude and duration of coffee’s undesirable side effects. An individual who is unsure of their tolerance should ingest at most 200mg of caffeine per day, which amounts to 2 to 2.5 cups a day.
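Since this is a programming blog after all, the arithmetic above can be sanity-checked in a couple of lines of Python (cup sizes and the 400mg limit are the figures from the references above):

```python
MAX_DAILY_CAFFEINE_MG = 400  # recommended adult limit (ref. 4)

def max_daily_volume_litres(cup_ml, cups):
    """Total coffee volume for a given cup size and cup count."""
    return cup_ml * cups / 1000

# 5 cups of 200ml, or 4 cups of 250ml: both land on 1 litre a day.
assert max_daily_volume_litres(200, 5) == 1.0
assert max_daily_volume_litres(250, 4) == 1.0
```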
Some figures for commonly ingested sources of caffeine:
Caffeine doses in coffee - Source: Stacy Lu, APA (ref. 7)
Some more figures from ref. 6:
Recalling that the recommended maximum daily caffeine intake is 400mg, ingesting 3 times that amount will likely result in toxic effects (ref. 4). Some toxic effects include (ref. 8, 9):
Coffee - Photo by Nathan Dumlao on Unsplash (ref. 10)
The origins of coffee (as in, the coffee bean) are highly debated (ref. 12), although it can be said that, having been discovered since time immemorial, it has touched upon every aspect of human civilisation, including religion (as when it replaced wine for Muslims), politics (as with the various bans on coffee), and trade.
An interesting aspect of coffee was its use by America in World War I, and its relation to coffeehouses, which were cornerstones for people to “hang out”.
In World War I, coffee was considered an essential item in the American army (ref. 13); it was so essential that coffee later became a rationed item on November 29, 1942 (ref. 14). There were only two reasons any resource needed to be rationed in the context of war:
Coffee Rationing in the US - Source: WeAreTheMighty (ref. 15)
As for why coffee was so needed in the trenches of World War I: it was used to combat fatigue, much like how we use coffee nowadays. Re-energized troops can make a huge difference in the context of war.
Coffeehouses were popular places for people of all classes to “meet, discuss business, exchange ideas and debate the news of the day” (ref. 16). Free debate and discussion between people of all social statuses led to the spread of democracy. However, because coffeehouses were so commonplace and popular, some were filled with criminals, scoundrels and pimps (ref. 16), enough that an attempt was made to ban coffeehouses in 1675.
Coffeehouses soon fell out of favour as the popularity of tea rose, but their impact on democracy can still be felt today.
It may seem strange to write a blog post about a widely enjoyed commodity instead of a programming post; luckily, as my blog name suggests, I like to do “Random Shenanigans”, which means that once in a while, an informative post such as this is nice to have.
I hope you enjoyed this blog post, as much as I enjoyed writing it! Let’s hope that I get apples instead of lemons from life next time; I haven’t got a lemonade maker.
Happy Coding (& enjoy a cup of coffee :coffee:),
CodingIndex
Here is a list of references, informally attributed, because I’m too lazy to do a proper citation.