Add old posts

This commit is contained in:
Kaan Barmore-Genç 2023-11-18 23:12:33 -06:00
parent ac4e2d66e0
commit b6b750b16a
Signed by: kaan
GPG key ID: B2E280771CD62FCF
44 changed files with 3694 additions and 0 deletions

@ -14,5 +14,7 @@ export async function load() {
})
);
console.log(posts);
return { posts };
}

@ -0,0 +1,78 @@
---
title: Optimizing My Hugo Website
date: 2022-04-10T22:54:35-04:00
toc: false
images:
tags:
- dev
---
> This post is day 11 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
I just migrated my website to Hugo. I used to have a very minimal custom setup
which I enjoyed having, but it had a few shortcomings which I thought would be
easier to address if I migrated over.
I was pretty happy with the move, but I did find that my website got quite a bit
heavier after the move. Smaller websites load faster and run smoothly on more
limited hardware, so I decided to see if I could cut some things out and make my
website smaller.
## Trimming Down Fonts
First, the theme I was using had included multiple weights of a font. It had
variants from "thin" to "bold" to "bold italic". While these variants make the
font look better when bold or italic, each variant adds about 100KB to the
website. I did notice that my browser was smart enough to only load the variants
needed, which was just 2 of them, but I don't think having users download 100KB
just to display a few characters makes sense. Thankfully the browser can
automatically synthesize the bold and italic styles even if those variants are
not included in the website, so I started by tearing them out.
Even when loading a single variant, the font is still the largest part of the
website. That's when I came across [this post](https://stackoverflow.com/a/44440992), which describes how to reduce the
size of a font file by removing character sets you don't use with FontForge. I
then removed the Cyrillic, Greek, and other character sets for languages I was very
unlikely to use. The file is now down to about 24KB, finally knocking it down
from the "largest thing on the page" status! And the nice thing about how fonts
work is that if I do ever end up typing something in one of those languages that
I removed, the browser will fall back to another font the reader has on their
system so nothing breaks.
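This can also be scripted with FontForge's Python bindings. Here's a rough sketch of the idea (the ranges and file names are just examples, not exactly what I did):

```python
import fontforge

font = fontforge.open("font-regular.ttf")
# Select the Unicode ranges to drop, Greek and Cyrillic in this example...
font.selection.select(("ranges", "unicode"), 0x0370, 0x03FF)
font.selection.select(("ranges", "unicode", "more"), 0x0400, 0x04FF)
# ...then delete the selected glyphs and write out the smaller font.
for glyph in list(font.selection.byGlyphs):
    font.removeGlyph(glyph)
font.generate("font-regular.stripped.woff")
```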
## Generating code highlighting during build
A lot of Hugo themes seem to use javascript libraries to do code highlighting.
The theme adds a javascript library like [PrismJS](https://prismjs.com/), which
runs in the browser of whoever is looking at your website and adds the
highlighting to any code blocks.
This is a really weird approach to me, since you're building a static website.
Why not do the code highlighting when you're building the site? That saves
downloaded data, makes the website easier for the browser to process, and also
works if the person has javascript disabled! I think the reason is that hugo
doesn't have a built-in highlighter but relies on
[Pygments](https://pygments.org/) being installed, so theme developers find it
easier to just add more javascript instead of explaining to people how to
install Pygments.
The savings are amazing, however. PrismJS is pretty lightweight at its core, but
gets heavier and heavier as you add support for more programming languages. The
version shipped with the theme I picked came in at 167KB, and even gzipped it was
slightly above 60KB, plus another 2.8KB for the required CSS. I was able to tear
all of this out, which saved me even more space!
## Final tally
Finally... 54.6KB, which is 34.2KB gzipped. The whole website is smaller than PrismJS!
The largest thing is, yet again, the font. Even stripped down it takes quite a
bit of space, which makes me consider fully removing it and just relying on
system fonts. But I'll leave that for another day.
## Sources?
The customized theme is open source; you can find it here if you want to grab
the optimized fonts, or just want to see what changes I made: [github.com/SeriousBug/hugo-theme-catafalque](https://github.com/SeriousBug/hugo-theme-catafalque)
This website itself is also open source: [gitea.bgenc.net/kaan/bgenc.net](https://gitea.bgenc.net/kaan/bgenc.net)

@ -0,0 +1,103 @@
---
title: "Handling Errors in Rust"
date: 2022-04-13T15:31:11-04:00
toc: false
images:
tags:
- dev
- rust
---
> This post is day 12 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
Rust uses a pretty interesting way to deal with errors. Languages like Python
and JavaScript allow you to throw errors completely unchecked. C does error
handling through `errno`, which is also completely unchecked and not enforced
in any way by the programming language. This means any function you call may
throw an error, and other than documentation there's nothing to let you know
that it might do so.
Worse, most of these languages don't tell you what kind of error a function
might throw. That's obvious in JavaScript, and TypeScript doesn't help with it
either (errors caught have `any` or `unknown` type). C++ and Python let you
catch specific types of errors, but you have to rely on documentation to know
which types those are.
```typescript
try {
  // ...
} catch (err: any) {
  // Is it a file error? Did I access something undefined? Who knows.
}
```
Java, on the other hand, requires you to explicitly mark what errors a function
can throw and enforces that you handle all the errors or explicitly mark that
you are propagating it.
```java
public class Main {
  static void example() throws ArithmeticException {
    // ...
  }
}
```
Rust is a lot closer to this as it enforces that you handle errors. The main
difference is that instead of using a special syntax for error handling, it's
built into the return type of the function directly. Which is why you'll see
functions like this:
```rust
fn example() -> Result<String, io::Error>;
```
This sometimes makes error handling a little harder, but luckily we have many
tools that can help us handle errors.
## When you don't care about the error
Sometimes you just don't care about the error. Perhaps the error is impossible
to recover from and the best you can do is print an error message and exit, or
perhaps you handle it the same way regardless of what the error is.
### To just exit
Two great options in this case are [die](https://crates.io/crates/die) and
[tracing-unwrap](https://crates.io/crates/tracing-unwrap). Both of these options
allow you to unwrap a `Result` type, printing a message and exiting if it's an
`Err`. `die` allows you to pick the error code to exit with, while
`tracing-unwrap` uses the [tracing](https://crates.io/crates/tracing) logging
framework. You can also always use the built-in [unwrap or expect](https://learning-rust.github.io/docs/e4.unwrap_and_expect.html) functions.
```rust
// die
let output = example().die_code("some error happened", 12);
// tracing-unwrap
let output = example().unwrap_or_log();
```
If you are writing a function that might return any type of error, then
[anyhow](https://crates.io/crates/anyhow) is your best option.
```rust
fn main() -> anyhow::Result<()> {
    let output = example()?;
    // ...
    Ok(())
}
```
## When you do care about the error
If you do care about what type of error you return, then you need
[thiserror](https://crates.io/crates/thiserror). `thiserror` allows you to write
your own error types and propagate errors.
```rust
use std::io;
use thiserror::Error;

#[derive(Error, Debug)]
pub enum ExampleError {
    #[error("The error message for this type")]
    Simple(String),
    #[error("An error that you are propagating")]
    FileError(#[from] io::Error),
}
```
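As a rough sketch of how this error type gets used (the function here is made up for illustration), the `#[from]` attribute is what lets the `?` operator convert an `io::Error` into your error type automatically:

```rust
use std::fs;

fn read_config(path: &str) -> Result<String, ExampleError> {
    if path.is_empty() {
        return Err(ExampleError::Simple("no path was given".to_string()));
    }
    // The ? operator converts io::Error into ExampleError::FileError via #[from].
    let contents = fs::read_to_string(path)?;
    Ok(contents)
}
```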

@ -0,0 +1,43 @@
---
title: 'Adding an HTML-only interface for Bulgur Cloud'
date: 2022-04-17T04:49:20-04:00
draft: true
toc: false
images:
tags:
- dev
- rust
- web
- bulgur-cloud
---
I have talked about my project [Bulgur Cloud](/bulgur-cloud-intro/) before. I'm
very happy with the user interface I've been building for it, which fully relies
on JavaScript with React. Well, React Native with React Native for web
specifically. The nice thing about it is that I can reuse most of my code for
mobile and desktop applications, without resorting to Electron.
I also want to keep in mind though that sometimes you want a web page to work
without JavaScript. Maybe it's a basic text-based browser like Lynx, or it's
someone on a slow device that can't run a web app.
So I decided to experiment with a basic HTML interface for Bulgur Cloud. And it
was shockingly easy! With maybe 5 or 6 hours of work, I was able to get a
read-only interface off the ground, and I was able to reuse a lot of the code I
had written for the Bulgur Cloud API that the web app interface uses.
The key packages in my experiment are [askama](https://crates.io/crates/askama)
with [askama_actix](https://crates.io/crates/askama_actix), and
[rust-embed](https://crates.io/crates/rust-embed). Askama is a template
rendering engine which builds the templates into rust code. `askama_actix` adds
support for the Actix framework so I can return a templated page from a route,
and Actix will finish rendering and serving the page for me. Finally,
`rust-embed` allows you to embed files into your executable binary, which is
great because you can ship a single binary which includes all the files you
need!
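To give a rough idea of what this looks like, here's a simplified sketch (not the actual Bulgur Cloud code, the template, route, and field names are made up):

```rust
use actix_web::{get, Responder};
use askama_actix::Template;

// Rendered at build time from templates/folder_list.html.
#[derive(Template)]
#[template(path = "folder_list.html")]
struct FolderListTemplate {
    username: String,
    entries: Vec<String>,
}

#[get("/basic/{path:.*}")]
async fn folder_list(/* ... */) -> impl Responder {
    // askama_actix implements Responder for templates, so returning the
    // struct is enough for Actix to render and serve the page.
    FolderListTemplate {
        username: "kaan".to_string(),
        entries: vec!["documents/".to_string(), "notes.txt".to_string()],
    }
}
```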
![A web page with the name kaan and a link Logout at the top. Below is a list of files and folders. The bottom has some text noting Bulgur Cloud is open source, and that this is the HTML-only version.](/img/bulgur-cloud-basic-html.png)
It's all hand-written HTML and CSS. It was very quick to get all of this
working. Once the web app is done, I'll come back to this interface to add full
functionality!

@ -0,0 +1,39 @@
---
title: "JavaScript error \"Super Expression must either be null or a function\""
date: 2022-04-18T04:05:40-04:00
draft: true
toc: false
images:
tags:
- dev
- javascript
- typescript
---
I just got this error when working on some TypeScript code.
```
Uncaught TypeError: Super Expression must either be null or a function
```
The line for the error pointed to the constructor of a class. What's happening?
Circular dependencies, it turns out.
```ts
// in foo.ts
import { Bar } from "./bar";

export class Foo {
  foo() {
    new Bar().doSomething();
  }
}

// in bar.ts
import { Foo } from "./foo";

export class Bar extends Foo {
  // ...
}
```
It's obvious when boiled down like this, but it's something you'll want to make
sure to avoid. I solved this issue by making `Bar` not extend `Foo`. It added
little to no duplicated code for me in this case, so just separating the classes
was easy enough.
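For reference, here's a minimal sketch of the fix with the same made-up names: once `Bar` no longer extends `Foo`, `bar.ts` doesn't need to import `foo.ts` at all, which breaks the cycle.

```ts
// in bar.ts -- no longer extends Foo, so the import of foo.ts goes away
export class Bar {
  doSomething() {
    // ...
  }
}
```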

@ -0,0 +1,72 @@
---
title: "actix-web Url Dispatch and Middleware"
date: 2022-04-24T03:37:47-04:00
draft: false
toc: false
images:
tags:
- dev
- rust
---
I've hit an issue with `actix-web` recently, and ended up learning a bit about
how it does routing (or URL dispatch as they name it).
Here's a simplified version of my problem:
```rust
// service code
#[get("/{path:.*}")]
pub async fn basic_handler(params: web::Path<String>) -> HttpResponse {
    // ...
}

#[post("/auth/")]
pub async fn auth_handler() -> HttpResponse {
    // ...
}

// in main
let auth_service = web::scope("")
    .wrap(auth_middleware)
    .service(auth_handler);
App::new()
    .service(auth_service)
    .service(basic_handler)
```
`auth_middleware` is a custom middleware I wrote which checks for the existence
of an auth token in a header or cookie, then validates the token. The middleware
responds early with a 401 if the token is missing, which ensures that the
protected handlers can only be reached by authenticated users.
I expected Actix to realize that if a request doesn't match `/auth/`, it should
go to the `basic_handler` and the authentication middleware shouldn't run. But
the middleware did run even if the path had nothing to do with `/auth/`! That
would cause the middleware to respond with a 401 and stop the request from
propagating, so it never reached `basic_handler`.
The solution, it turns out, is using the `web::scope` to scope out the
authenticated routes. If the scope doesn't match, Actix then seems to skip over
that entire scope and never runs the middleware. Here's the same code, fixed:
```rust
// service code
#[get("/{path:.*}")]
pub async fn basic_handler(params: web::Path<String>) -> HttpResponse {
    // ...
}

#[post("/")] // <-- change here
pub async fn auth_handler() -> HttpResponse {
    // ...
}

// in main
let auth_service = web::scope("/auth") // <-- change here
    .wrap(auth_middleware)
    .service(auth_handler);
App::new()
    .service(auth_service)
    .service(basic_handler)
```

@ -0,0 +1,145 @@
---
title: "Using a path as a parameter in React Navigation"
date: 2022-05-01T17:49:02-04:00
lastmod: 2022-05-09T04:28:00-04:00
draft: true
toc: false
images:
tags:
- dev
- react
- javascript
- typescript
---
I've been trying to integrate [React Navigation](https://reactnavigation.org/)
into [Bulgur Cloud](https://github.com/SeriousBug/bulgur-cloud) and I hit an
issue when trying to use a path as a parameter.
What I wanted to do was to have a route where the rest of the path in the route
would be a parameter. For example, I can do this in my backend:
```rust
#[get("/s/{store}/{path:.*}")]
pub async fn get_storage(/* ... */) {
    // ...
}
```
This route will match all paths like `/s/user/`, as well as `/s/user/foo/` and
`/s/user/foo/bar.txt`. The key is that the path portion is a path with an
arbitrary number of segments.
Unfortunately there doesn't seem to be built-in support for this in React
Navigation. Here's what I had at first:
```ts
import { NavigationContainer } from "@react-navigation/native";
import { createNativeStackNavigator } from "@react-navigation/native-stack";
// Login and Dashboard are the screen components, defined elsewhere.

export type RoutingStackParams = {
  Login: undefined;
  Dashboard: {
    store: string;
    path: string;
  };
};

export const Stack: any = createNativeStackNavigator<RoutingStackParams>();

export const LINKING = {
  prefixes: ["bulgur-cloud://"],
  config: {
    screens: {
      Login: "",
      Dashboard: "s/:store/", // Can't do "s/:store/*" or something like that
    },
  },
};

function App() {
  return (
    <NavigationContainer linking={LINKING}>
      <Stack.Navigator initialRouteName="Login">
        <Stack.Screen name="Login" component={Login} />
        <Stack.Screen name="Dashboard" component={Dashboard} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}
```
This would cause the URLs for the `Dashboard` to look like `/s/user/?path=file`.
I read through all the docs, looked up many examples, and scoured any
Stackoverflow answers I could find. Nope, nobody seems to be talking about this.
This feels like such a fundamental piece of routing tech to me that I'm shocked
that not only is there no built-in support, but nobody seems to be questioning
why it doesn't exist.
Thankfully some folks in the [Reactiflux](https://www.reactiflux.com/) discord
pointed me towards the right way: using
[`getStateFromPath`](https://reactnavigation.org/docs/navigation-container#linkinggetstatefrompath)
and
[`getPathFromState`](https://reactnavigation.org/docs/navigation-container#linkinggetpathfromstate)
to write a custom formatter and parser for the URL.
This is made easier thanks to the fact that you can still import and use the
built-in formatter and parser, and just handle the cases that you need to.
Here's what I implemented:
```ts
import { getStateFromPath, getPathFromState } from "@react-navigation/native";

export const LINKING = {
  prefixes: ["bulgur-cloud://"],
  config: {
    screens: {
      Login: "",
      Dashboard: "s/:store/",
    },
  },
  getStateFromPath: (path: string, config: any) => {
    // For the Dashboard URLs only...
    if (path.startsWith("/s/")) {
      // ...parse the URL as /s/:store/...path
      const matches = /^[/]s[/](?<store>[^/]+)[/](?<path>.*)$/.exec(path);
      const out = {
        routes: [
          {
            name: "Dashboard",
            path,
            params: {
              store: matches?.groups?.store,
              path: matches?.groups?.path,
            },
          },
        ],
      };
      return out;
    }
    // For all other URLs fall back to the built-in
    const state = getStateFromPath(path, config);
    return state;
  },
  getPathFromState: (state: any, config: any) => {
    // Getting the "top route" if we're using a stack navigator
    const route = state.routes[state.routes.length - 1];
    // For the Dashboard routes only...
    if (route?.name === "Dashboard") {
      // ...directly put the path into the URL
      const params: RoutingStackParams["Dashboard"] = route.params;
      return `/s/${params.store}/${params.path}`;
    }
    // For all other routes fall back to the built-in
    return getPathFromState(state, config);
  },
};
```
I'm not sure how to get the types to be a little nicer, but just going with
`any` is fine for me in this case since it's a very small portion of the
codebase.
> Edit: Made the following change after noticing that this code didn't work with a stack navigator
>
> ```ts
> // Getting the "top route" if we're using a stack navigator
> const route = state.routes[state.routes.length - 1];
> ```

@ -0,0 +1,59 @@
---
title: "React Navigation on web, getting browser history to work with links"
date: 2022-05-09T04:12:24-04:00
toc: false
images:
tags:
- dev
- react
- javascript
- typescript
---
I've hit another issue with React Navigation when building it with React Native for Web.
The issue is with the browser history integration and `<Link>`s.
React Navigation supports browser [history API](https://developer.mozilla.org/en-US/docs/Web/API/History_API)
when targeting the web, but it wouldn't add new URLs to the history when navigating
using `<Link to=...` elements. That meant you couldn't hit the back button in the browser to navigate back.
I think the issue specifically happens with the stack navigator, but I'm not sure.
In any case, it turns out the problem was that the links were not using the right
action to navigate to that path, which meant the navigator did not have the
right history. The solution is luckily easy: you need to pass the `action`
property to the link to tell it how to navigate. To make this foolproof, I
created a wrapper component to handle it. Here's what it looks like:
```ts
import React from "react";
import { Link, StackActions } from "@react-navigation/native";
import { TextProps } from "react-native";
// This is the type defining all the routes. It looks like:
//
// export type RoutingStackParams = {
//   Login: undefined;
//   Dashboard: {
//     path: string;
//     isFile: boolean;
//   };
// };
import { RoutingStackParams } from "./routes";

export function BLink<Route extends keyof RoutingStackParams>(
  props: TextProps & {
    screen: Route;
    params: RoutingStackParams[Route];
  },
) {
  return (
    <Link
      to={{
        screen: props.screen,
        params: props.params,
      }}
      action={StackActions.push(props.screen, props.params)}
    >
      {props.children}
    </Link>
  );
}
```
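Using it looks the same as using a regular `Link`, except the screen name and params are type checked (the params here are just an example):

```ts
<BLink screen="Dashboard" params={{ path: "documents/notes.txt", isFile: true }}>
  Open notes.txt
</BLink>
```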

@ -0,0 +1,64 @@
---
title: "My New Backup Setup with Kopia"
date: 2022-05-29T16:37:03-04:00
draft: false
toc: false
images:
tags:
- homelab
---
I've recently switched to [Kopia](https://kopia.io/) after having some trouble with Duplicati. There
was some sort of issue with mono (the runtime used by Duplicati) not reading the
certificate files on my system, and failing to authenticate the Backblaze B2
connections. After the workarounds I found online failed to solve the issue, and
months of waiting didn't bring a fix either, I decided it might be time to
check out some other backup solutions.
## What I want from backup software
There are some features that I think are crucial for backup software.
- Incremental backups. These save massive amounts of space, and it's
non-negotiable in my opinion. I'm not going to waste space storing a hundred
duplicates of each file, any sane backup solution must be able to deduplicate
the data in the backups.
- Client-side encryption. While I have some level of trust for the services I'm
backing up my data on, I don't trust them to not read through my data. Between Google implementing a [copyrighted material scanner](https://torrentfreak.com/google-drive-uses-hash-matching-detect-pirated-content/) and [said scanner going haywire](https://www.bleepingcomputer.com/news/security/google-drive-flags-nearly-empty-files-for-copyright-infringement/), while I have nothing illegal in my backups, I'd rather keep my data out of these services' hands.
- Compression is also important to me. A lot of the data on my computer that I
want to back up is stuff like code files, configuration files, game saves and
such. A lot of these files are not compressed well or at all, so compressing
the backed up data can be a major win in terms of space savings. With the right
algorithms, modern processors can compress and decompress data faster than disks
can read or write it, so this usually comes at effectively no cost. Of
course this may be less important for you if what you are trying to back up is
already compressed data like images, videos, and music files.
- Being able to restore only some files or folders without doing a full restore.
Some services like Backblaze B2 charge you for data downloaded, so it's
important that if I'm only restoring a few files, I can do so without
downloading the entire archive.
## Kopia
Kopia checks all these boxes. Backups are incremental, everything is encrypted
client side. Compression is supported and is configurable, and you can mount
your backups to restore only a subset of files or read them without restoring.
Something small that is amazing though is that Kopia can read `.gitignore` files
to filter those files out of your backups! This is amazing as a developer
because I already have gitignore files set up to ignore things like
`node_modules` or project build directories, which I wouldn't care about backing
up. Thanks to Kopia, these are immediately filtered out of my backups without
any extra effort.
## Are incremental backups and compression really worth it?
Yes, absolutely!
Right now I have Kopia set up to back up my home directory, which contains about
9.8GB of data after excluding all the cache files and video games, and applying
gitignores. I have 13 snapshots of this, which in total take up 4.9GB on disk.
13 snapshots take less space than the entirety of my home directory!
I have Kopia set up to use pgzip compression which seems to be the best option
in terms of compression speed versus storage size.

@ -0,0 +1,127 @@
---
title: "My Experience Applying & Interviewing for Software Engineering Positions"
date: 2022-07-10T15:49:51-04:00
draft: false
toc: false
images:
tags: []
---
I've recently been interviewing for jobs, and decided it might be interesting to
write about how the process went. Hopefully this will be useful for other
developers in my position, or hiring folks who are looking for a candidate's
viewpoint.
#### By the numbers:
- 67 applications
- 5 companies interviewed with
- 1 offer
## Applications
Overall, I made around 67 applications. I'm counting that by the number of
confirmation emails I got, so the actual number may be slightly higher because
a few companies didn't send a confirmation email.
Out of these, I got interviews with 5 companies. That's around a 7.5% response
rate. Keep in mind however that I did apply to a lot of positions that required
more years of experience or experience in specific technologies that I didn't
have. I do think these applications are still useful, because I did get
interviews with some of these. I think it's pretty well known that companies
will sometimes exaggerate the requirements, so it can be worth applying to
these jobs even when you don't perfectly match them.
The application process was very exhausting. The good news is that there's no
shortage of positions on LinkedIn or other sites that you can apply to. The bad
news is that repetitively filling out applications is extremely boring. I could
have sworn that I applied to hundreds of jobs until I counted them.
My recommendation for applicants is to break it up and try to apply to a few
places every day. Don't get discouraged if you are not getting a lot of
responses: looking at what LinkedIn reports, most job listings get tens if not
hundreds of applications, so sometimes your application might just get lost in
the flood.
## Interview Process
The interviewing process varies from company to company, but there's often a
general flow they all follow:
- You'll first get an interview with a recruiter from the HR department. Their
questions are typically surface-level questions like "do you have experience
with AWS?" or "how many years of experience do you have with javascript?".
Keep in mind that this person most often will have no technical knowledge, so
for example they wouldn't know that listing "Next.js" on your resume implies
you know React.
They will also typically ask you what range of salary you are looking for. It
can be hard to come up with a number especially for your first job, but you
should look up average salaries and figure out a range you think is good. If
you can't come up with anything, I don't think it hurts to ask what range they
are looking for. I did this once and figured out that the pay range they were
offering was just far lower than what I would accept.
- The next step is usually a technical challenge. In most cases for me this was
a pair-programming exercise, although I've also had a take-home programming
challenge. What sort of challenge you get here depends heavily on the company
and the interviewer. Usually it is an opportunity to just chat with you and
see how you think about problems, but sometimes it can be a "can you solve
this puzzle".
- What happens after the technical challenge is a lot more varied: some places
go with the typical "tell us what your strengths are" kind of interviews, some
do whiteboard high-level design, and some just want to chat with you.
The biggest thing to remember is that the interview process isn't just for the
company to vet you, but for you to vet the company as well. If the interviewer
is being harsh towards you, or the process demands too much of your time (I had
one company who wanted to do an 8-hour "virtual on-site" followed by more
technical interviews!), or if you just get a bad vibe from the people you are
talking to; don't be afraid to say no. Luckily software jobs, even heading into
a recession, are plentiful so you don't have to be stuck with a place you hate.
### Remember to ask questions!
Speaking of you vetting the company, you should make sure to prepare some
questions to ask. Make sure you ask about anything you are curious about or
anything that's a dealbreaker for you.
If you're not sure what to ask, there are [a lot of resources
online](https://hackernoon.com/what-to-ask-an-interviewer-during-a-tech-interview-865a293e548c)
for examples. I like to ask a mix of "people questions" (e.g. how big is the
team, is it a new team or an existing one that you are expanding etc.), technical
questions (how do you do code reviews, are there any standard tools you use etc.),
and company culture questions (do you work overtime and how often, is the design
process collaborative etc.).
### Take notes
Maybe you have a great memory and can remember every little detail, but I can't.
I always take notes during my interviews. I'll write down the names and roles of
the people I'm meeting, and any important points discussed. I'd strongly
recommend doing this because you might need to go back and reference things
later when you need to make a decision.
## Interviewers: what not to do
I'm not going to shame any companies or recruiters here, but a few bad
experiences stand out:
- One job listing I applied to was extremely vague about what I might work on:
this was a big Fortune 500 company that's not a "software company" so it
wasn't clear what I might even work on. When I asked the recruiter for
details, they only knew what was written on the job listing.
- I think job listings should be clear about what the role entails, and
recruiters should have some knowledge of what the role is.
- At a technical interview, the interviewer seemed to be unhappy with my
solution and wanted me to explore a different way of solving the problem.
That's totally fine, but the interviewer would not tell me what was wrong with
my solution or what sorts of issues I should consider. Instead, they just kept
repeating "but how else would you do it".
- Remember that the interviewee can't read your mind. Treat the interview
like a code review: you wouldn't tell a coworker "this is bad, fix it",
you would tell them what's wrong. See if the interviewee can handle
constructive criticism and change their design to solve the issue.

@ -0,0 +1,112 @@
---
title: 'Solving React Redux Triggering Too Many Re-Renders'
date: 2022-09-18T18:13:31-04:00
toc: false
images:
tags:
- javascript
- typescript
- react
- web
- dev
---
This might be obvious for some, but I was struggling with a performance issue in
[Bulgur Cloud](/portfolio/#bulgur-cloud), my React (well, React Native) based
web application. Bulgur Cloud is an app like Google Drive or NextCloud, and one
of the features is that you can upload files. But I noticed that the page would
slow down to a crawl and my computer's fans would spin up during uploads. It
can't be that expensive to display a progress bar for an upload, so let's figure
out what's happening.
As the first step, I installed the "React Developer Tools" extension in my
browser and enabled the "Highlight updates when components render" option. Then
I used the network tab to add a throttle so I could see the upload happen more
slowly, and started another upload.
<video controls width="100%">
<source src="/vid/react-redux-causes-re-renders.mp4" type="video/mp4">
<p>A video showing a progress bar slowly increasing. As the progress bar goes up, the entire screen flashes with blue borders.</p>
</video>
Pretty much the entire screen flashes every time the progress bar goes up.
Something **is** causing unnecessary re-renders! The next step in my diagnosis
was to record a profile of the upload process with the react developer tools. I
enabled the option "Record why each component rendered when profiling", then ran
the profiler as I uploaded another file.
![A flame graph with many elements highlighted in blue.](/img/react-redux-rerender-flamegraph.png)
Walking through the commits in the flame graph (the part that says "select next
commit" in the screenshot, top right), I can see this weird jagged pattern that
repeats through the upload process. Selecting one of the tall commits again
confirms that pretty much everything had to be re-rendered. Hovering over the
items being re-rendered shows me that the items being re-rendered are my folder
listings, the rows representing files. It also tells me why they re-rendered:
"Hook 3 changed".
Next step is to switch over to the components feature of the react developer
tools, and take a look at `FolderList`. Once I select it, I get a list of the
hooks that it uses. Keep expanding the tree of hooks, and the hook numbers are
revealed.
![A tree view displaying hooks for a component named FolderList](/img/react-redux-component-hooks.png)
The hook names here seem to be the names in the source code minus the `use`
part, so my `useFetch` hook becomes `Fetch`. They are in a tree view since hooks
can call other hooks inside them. Following the tree, hook 3 is `State`, and is
located under `Selector` which is a React Redux hook `useSelector`. At this
point things become clearer: I'm using redux to store the upload progress, and
every time I update the progress it causes everything to re-render. And all of
this is being caused through my fetch hook. Let's look at the code for that:
```ts
export function useFetch<D, R>(
  params: RequestParams<D>,
  swrConfig?: SWRConfiguration,
) {
  // useAppSelector is useSelector from react redux,
  // just a wrapper to use my app types
  const { access_token, site } = useAppSelector((selector) =>
    // pick is same as Lodash's _.pick
    pick(selector.auth, "access_token", "site"),
  );
  // ...
```
I realized the issue once I had tracked it down to here! The state selector is
using the `pick` function to extract values out of an object. React Redux checks
if the value selected by the selector has changed to decide if things need to be
re-rendered, but it uses a basic equality comparison and not a deep equality
check. Because `pick` keeps creating new objects, the objects are never equal to
each other, and Redux keeps thinking that it has to re-render everything!
The solution luckily is easy: we can tell redux to use a custom function for
comparison. I used a `shallowEquals` function to do a single-depth comparison of
the objects (the object is flat so I don't need recursion).
```ts
export function shallowEquals<
  Left extends Record<string, unknown>,
  Right extends Record<string, unknown>
>(left: Left, right: Right) {
  if (Object.keys(left).length !== Object.keys(right).length) return false;
  for (const key of Object.keys(left)) {
    if (left[key] !== right[key]) return false;
  }
  return true;
}

// ...
const { access_token, site } = useAppSelector(
  (selector) => pick(selector.auth, 'access_token', 'site'),
  shallowEquals
);
```
Let's look at the profile now:
![A flame graph with only a tiny portion of elements highlighted.](/img/react-redux-after-flamegraph.png)
Much better! The only thing re-rendering now is the progress bar itself, which
is ideal.

@ -0,0 +1,47 @@
---
title: "Browser Caching: Assets not revalidated when server sends a 304 'Not Modified' for html page"
date: 2022-10-15T20:56:36-04:00
toc: false
images:
tags:
- dev
- web
---
I've been working on some web server middleware, and hit a weird issue that I
couldn't find documented anywhere. First, let's look at an overview of how
browser caching works:
If your web server sends an
[ETag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) header in
a HTTP response, the web browser may choose to cache the response. Next time the
same object is requested, the browser may add an
[If-None-Match](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-None-Match)
header to let the server know that the browser might have the object cached. At this point, the server should respond with the
[304 Not Modified](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/304)
code and skip sending the response. This can also happen with the
[Last Modified](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified)
and
[If-Modified-Since](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Modified-Since)
headers as well, if `ETag` is not supported.
After implementing this in my middleware, I made a quick test website to try it
out. That's when I ran into a weird behavior: the browser would revalidate the
HTML page itself with the `If-None-Match` header, but when the server responded
with `304` it would not attempt to revalidate the linked stylesheets, scripts,
and images. The browser would not request them at all and immediately use the
cached version. It looks like if the server responds with `304` on the HTML
page, the browser assumes that all the linked assets are not modified as well.
That means that if an asset does change (and you weren't using something like
fingerprinting or versioning on your assets), then the browser will use outdated
assets. Oops!
Luckily it looks like there's an easy solution: add a `Cache-Control: no-cache`
header to your responses. `no-cache` doesn't actually mean "don't cache at all",
but rather means that the browser needs to revalidate objects before using
the cached version.
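As an illustration, here's a minimal sketch of setting that header in actix-web (I'm using actix-web just as an example here; the same idea applies to any framework):

```rust
use actix_web::http::header::{CacheControl, CacheDirective};
use actix_web::{get, HttpResponse};

#[get("/")]
async fn index() -> HttpResponse {
    HttpResponse::Ok()
        // no-cache: the browser may cache this, but must revalidate before using it.
        .insert_header(CacheControl(vec![CacheDirective::NoCache]))
        .body("<html><!-- ... --></html>")
}
```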
Without the `Cache-Control` header:
![Browser developer tools window, there is only 1 request for /](/img/browser-caching-before.png)
With the `Cache-Control` header:
![Browser developer tools window, there are 5 requests in total, including /, style.css, and 3 images.](/img/browser-caching-after.png)

@ -0,0 +1,64 @@
---
title: "Automating My Blog With Gitea and Woodpecker"
date: 2022-11-19T12:21:40-05:00
toc: false
images:
tags:
- homelab
---
I had been using Gitea for a while. If you haven't heard of it before, it's a
"git forge" meaning a website where you can host git repositories, track issues,
accept pull requests, and so on. If you have seen Github, it's just like that.
My Gitea is for personal use only, but I do keep it accessible publicly: https://gitea.bgenc.net/
I have been using this personal instance to keep a few small experiments and my
personal blog, but one thing I've found missing was a CI. Again in case you are
unfamiliar, CIs are tools that automate processes in your projects. For example,
you could automate a testing process so the code gets tested every time a pull
request is created. Or automate a release so the code gets built and uploaded
when a new version is tagged on git.
I hadn't looked into a CI since I wasn't using my Gitea for anything important,
and for any "big" project I could just use Github and get their free Github
Actions for public repositories. But I recently started working on some projects
I'd rather keep private, and thought that having a CI to automate testing on
them would be useful.
After looking around for a CI and not finding a lot that I like, I came across
[Woodpecker CI](https://woodpecker-ci.org/). Woodpecker is a relatively simple
CI system: it's all built on top of Docker and runs everything in containers.
You specify your actions as container images and steps to be executed inside
those images... and that's all!
Setting up Woodpecker and connecting it to Gitea was a breeze: you just point a
few variables to Gitea and create an app on Gitea, then you're done! It uses
OAuth to log you into Woodpecker through Gitea, and automatically grabs all your
repositories. All you have to do is hit enable on Woodpecker, then add a
`.woodpecker.yml` to your repository.
I ended up trying out Woodpecker with 2 repositories:
- The first is a [containers](https://gitea.bgenc.net/kaan/containers)
repository: I realized that I might need to create some simple container
images to use in my Woodpecker pipelines so this is a repository to keep these
simple images. The containers are also automatically built with Woodpecker:
there's a Woodpecker plugin (the plugin itself is a container too!) to build
and publish docker containers so the process is trivial.
- The second is [this blog](https://gitea.bgenc.net/kaan/bgenc.net)! I used to
just manually run the build for this and rsync it over to my server. But with
Woodpecker I was able to automate this process. The blog gets built using a
hugo container, then a container I created `rsync`s it over to my server. I
created a special system user that has ownership over the `www` folder, who
can only log in with an SSH key that's stored as a Woodpecker secret. A rough
sketch of this pipeline is below.
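To be clear, this sketch is illustrative (the image names, paths, and secret name aren't my exact setup), but it shows the shape of a pipeline that builds a Hugo site and `rsync`s it to a server:

```yml
pipeline:
  build:
    image: klakegg/hugo:ext-alpine
    commands:
      - hugo --minify
  deploy:
    image: alpine:3.17
    # The SSH private key is stored as a Woodpecker secret and exposed to this
    # step as the DEPLOY_SSH_KEY environment variable.
    secrets: [deploy_ssh_key]
    commands:
      - apk add --no-cache openssh-client rsync
      - echo "$DEPLOY_SSH_KEY" > /tmp/deploy_key && chmod 600 /tmp/deploy_key
      - rsync -e "ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no" -r public/ www@example.net:/var/www/blog/
```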
The whole process absolutely amazed me! Woodpecker is missing a few minor
features I would have liked, like the ability to trigger builds with a click
(like github actions `workflow_dispatch` option) or to trigger builds with a
timer. A timer would have been especially useful with my `containers` repository
to keep all the containers up to date. But I imagine this will be possible
eventually. At the very least, it looks like they are working on a
[CLI](https://woodpecker-ci.org/docs/next/cli) for the next version of
Woodpecker which can start pipelines, so it would be possible to set up a timer
with a bit of scripting.

@ -0,0 +1,44 @@
---
title: 'Solving "XML Parsing Error: no root element found" in Firefox'
date: 2022-12-17T21:27:37-05:00
toc: false
images:
tags:
- dev
- web
---
I've been seeing this error a lot while working on my project [Bulgur Cloud](/bulgur-cloud-intro/).
It seems to show up only on Firefox:
![Error message in Firefox console. XML Parsing Error: no root element found. Location: http://...701.jpg Line Number 1, Column 1](/img/2022-12-17.firefox.xml-parsing-error.png)
What was curious was that I was not actually loading the file mentioned in the
error message. I tried looking up what the error might mean, but all that came
up were very specific issues regarding some web frameworks, unrelated to what
I'm using.
I later realized, however, that while I wasn't loading the file in the error
message, I was sending a `DELETE` request to that path. The Bulgur Cloud server responds
to this request with an empty `200 OK` response. Turns out, if you send an empty
response to Firefox, it tries to parse the body of the response even though it's
empty and gives you this error.
To quiet this error, I started sending a minimal JSON response even for
requests that don't actually need it. Here's some rust code to illustrate the
solution:
```rust
use actix_web::HttpResponse;
use serde::Serialize;

#[derive(Debug, Serialize)]
pub struct EmptySuccess {
    status: &'static str,
}

// Using actix-web
fn empty_ok_response() -> HttpResponse {
    HttpResponse::Ok().json(EmptySuccess { status: "ok" })
}
```
This adds a `{"status": "ok"}` json to the response, which seems to be enough to
make Firefox happy.

@ -0,0 +1,63 @@
---
title: "Hosting websites without a static IP with Gandi LiveDNS"
date: 2022-12-29T18:11:42-05:00
toc: false
images:
tags:
- web
- homelab
---
I've been hosting this website at home now for a few years. My ISP doesn't offer
a static IP address however, which means my IP address occasionally changes.
This is sort of a dealbreaker, because your domain will be left pointing to the
wrong address whenever your IP address changes.
Luckily you can solve this by using a dynamic DNS solution, like DynDNS, but these
can be pretty pricy.
Which is why I was very excited when I saw [gandi.net](https://www.gandi.net/)
has a system they call "LiveDNS" which allows you to update the IP address your
domain points to very quickly. Their website advertises that updates are
propagated in under 250ms which is amazing. Although other DNS servers may cache
results and not update for significantly longer, that's not a massive issue for
me. My IP address doesn't change that often, and my personal blog having a short
downtime is not a big deal.
Gandi provides this service at no additional cost, all you need to do is to
register your domain with them (or transfer it over). I've been using them for
years and have had great service, and the LiveDNS is the cherry on top.
## Updating your IP
Unlike some other dynamic DNS providers, Gandi does not provide a program that
you can use to update your IP address. But since the API is open, there are many
programs and scripts you can use to update your IP. One of these is a program I
made, [gandi-live-dns-rust](https://github.com/SeriousBug/gandi-live-dns-rust).
After installing `gandi-live-dns` on my home server (I used the ArchLinux
package, but there are other options available), I copied over the
[example configuration](https://github.com/SeriousBug/gandi-live-dns-rust/blob/master/example.toml)
in the repository. I added my domain, obtained an API key and set it up,
then added the subdomains I want.
```toml
fqdn = "bgenc.net"
api_key = "key goes here"

[[entry]]
name = "@"
types = ["A"]

[[entry]]
name = "gitea"
types = ["A"]

[[entry]]
fqdn = "kaangenc.me"
name = "@"
types = ["A"]
```
The configuration file is a bit trimmed, but it shows the gist of everything.
I'm updating `bgenc.net`, along with `gitea.bgenc.net`. I also update
`kaangenc.me`, which is an old domain I was using.

@ -0,0 +1,79 @@
---
title: "Get inferred type for a generic parameter in TypeScript"
date: 2023-01-28T14:50:54-05:00
toc: false
images:
tags:
- dev
- typescript
---
Have you used [Zod](https://zod.dev/)? It's a very cool TypeScript library for
schema validation. Compared to alternatives like Joi, one of the biggest
strengths of Zod is that it can do type inference. For example,
```ts
import { z } from "zod";

const PersonSchema = z.object({
  name: z.string(),
  age: z.number(),
});

type Person = z.infer<typeof PersonSchema>;
// This is equivalent to { name: string; age: number; }
```
Now I was recently working on a database client, where I'm using a validator
function to ensure the data in the database matches what the client expects. I
then take advantage of TypeScript's type inference so the type of everything
matches. It looks like this:
```ts
class Database<T> {
  constructor(validator: (input: unknown) => T) { /* ... */ }
  get(key: string): T | undefined { /* ... */ }
  put(key: string, data: T) { /* ... */ }
}

// Note I didn't have to specify the type parameter,
// TypeScript infers it from the validator argument
const PersonDB = new Database(PersonSchema.parse);
```
At this point I started to wonder, could I do something similar to what Zod does
and get the inferred type for the objects that are stored in the database? While
this is not required in this example above since I could get the type from Zod,
the validator function doesn't necessarily have to be implemented with Zod.
After reading through Zod's codebase, I found the trick they use, and it's very
simple. Let's see it:
```ts
class Database<T> {
  readonly _output!: T;
  constructor(validator: (input: unknown) => T) { /* ... */ }
  get(key: string): T | undefined { /* ... */ }
  put(key: string, data: T) { /* ... */ }
}

type EntityOf<D extends Database<any>> = D["_output"];

const PersonDB = new Database(PersonSchema.parse);
type Person = EntityOf<typeof PersonDB>;
```
This is surprisingly simple. We add a property `_output` to the class, which has
the inferred type. We can then get the type through that property with
`D["_output"]`. The `!` in the definition of the property is there because we
never actually set any value for `_output`. TypeScript normally will detect and
warn us that we did not set `_output`, but the exclamation point suppresses
that.
This is not without drawbacks of course, because the `_output` property will be
visible in the instances of the class. We can't hide the property with `private`
because TypeScript won't let us look it up in `EntityOf` if we do so. So the
best we can do is document the fact that this should not be used, and throw in
the prefix so it stands out from regular properties.

@ -0,0 +1,157 @@
---
title: 'Why I use Dev containers for most of my projects'
date: 2023-02-09T23:14:05-05:00
toc: false
images:
tags:
- dev
---
It is important to have a consistent and streamlined setup process for your
development environment. This saves time and minimizes frustration, both making
you more productive and making it much easier to onboard new developers.
Whether we're talking about a company that wants to onboard new engineers or an
open source project that needs more contributors, being able to press one
button and get a fully functional development environment is incredibly
valuable.
That's why I was thrilled when I discovered dev containers! Dev containers use
containers to automate the setup of your development environment. You can have
it install compilers, tools, and more. Set up a specific version of nodejs,
install AWS cli, or run a bash script to run a code generator. Anything you need
to set up your development environment. And you can also run services like
databases or message queues along with your development environment because it
has support for Docker compose.
For example, I use dev containers to spin up Redis and CouchDB instances for a
project I'm working on. It also installs pnpm, then uses it to install all the
dependencies for the codebase. The end result is that you can press one button
to have a fully functional development environment in under a minute.
This has many advantages. It ensures that everyone has the same version of any
services or tools needed, and isolates these tools from the base system. And if
you ship your code with containers, it also makes your development environment
very similar to your production environment. No more "well it works on my
machine" issues!
## Basic setup
I use dev containers with VSCode. It has pretty good support. I've also tried
the [dev container CLI](https://github.com/devcontainers/cli) which works fine
if you just want to keep everything in the CLI (although you could probably
stick with docker compose alone then!).
VSCode comes with commands to automatically generate a dev container
configuration for you by answering a few questions.
![A VSCode prompt window. deb is typed into the prompt, and the text Simple debian container with git installed is highlighted below.](/img/devcontainer-debian-example.png)
At the core of dev containers, what sets them apart from just using Docker is
the "features". These are pre-made recipes that install some tool or set up
some dependency within your dev container. There are a lot of these available,
when the container is created --or even every time the container is started-- to
install or set up anything else that features didn't cover.
```json
{
  // ...
  "features": {
    "ghcr.io/devcontainers/features/node:1": {},
    "ghcr.io/devcontainers-contrib/features/pnpm:2": {}
  },
  "updateContentCommand": "pnpm install"
  // ...
}
```
Above is an excerpt from the dev container config of a project I'm working on. I needed nodejs and pnpm,
and I then use pnpm to install the dependencies.
## Docker compose
But I honestly probably would not have used dev containers if this was all they
did. What I find even more impressive is that they can be set up to use docker
compose to bring up other services like I mentioned at the beginning.
To do that, you create your docker compose file with all the services you need,
but also add in the dev container.
```yml
version: '3.8'
services:
  devcontainer:
    image: mcr.microsoft.com/devcontainers/base:bullseye
    volumes:
      - ..:/workspaces/my-project:cached
    command: sleep infinity
    environment:
      - COUCHDB=http://test:test@couchdb:5984
      # Read by the postCreateCommand in the dev container config below.
      - S3_ENDPOINT=http://minio:9000
  couchdb:
    restart: unless-stopped
    image: couchdb:3.3
    volumes:
      - couchdb-data:/opt/couchdb/data
    environment:
      - COUCHDB_USER=test
      - COUCHDB_PASSWORD=test
  minio:
    restart: unless-stopped
    image: minio/minio
    volumes:
      - minio-data:/data
    command: server /data --console-address ":9001"
volumes:
  couchdb-data:
  minio-data:
```
In the example above, I'm setting up a CouchDB database and Minio S3-compatible
store. Docker gives containers access to each other using the container names. I
pass the endpoint URLs as environment variables to my dev container, where I can
read and use them.
Then, you just tell your dev container config to use the docker compose file.
```json
{
  "name": "my-project",
  "dockerComposeFile": "docker-compose.yml",
  "service": "devcontainer",
  "workspaceFolder": "/workspaces/my-project",
  // Adding the Rust compiler, plus the AWS cli so I can access the S3 API of minio from the CLI.
  "features": {
    "ghcr.io/devcontainers/features/rust:1": {},
    "ghcr.io/devcontainers/features/aws-cli:1": {}
  },
  // The project I'm working on exposes the port 8080.
  // I forward that out so I can look at it on my browser.
  "forwardPorts": [8080],
  // Set up the development AWS config and credentials with test values,
  "onCreateCommand": "mkdir -p ~/.aws/ && /bin/echo -e '[default]\nregion = local' > ~/.aws/config && /bin/echo -e '[default]\naws_access_key_id = minioadmin\naws_secret_access_key = minioadmin' > ~/.aws/credentials",
  // Create the S3 bucket
  "postCreateCommand": "aws s3 --endpoint-url $S3_ENDPOINT mb s3://my-bucket",
  // I found that I have to add this, but it's not the default. Not sure why.
  "remoteUser": "root",
  "customizations": {
    // You can even add in VSCode extensions that everyone working on the project
    // would need, without them having to install it on their own setup manually.
    "vscode": {
      "extensions": ["rust-lang.rust-analyzer", "streetsidesoftware.code-spell-checker"]
    }
  }
}
```
That's it! Run the "Dev Containers: Reopen in Container" command in VSCode, give
it a few minutes, and you'll have your full development environment ready.

@ -0,0 +1,105 @@
---
title: Enforcing a "Do Not Merge" label with Github Actions
date: 2023-02-18T12:33:32-05:00
toc: false
images:
tags:
- dev
---
At my workplace, we sometimes find ourselves in situations where a PR passes all
tests, and has been reviewed and approved, but still shouldn't be merged.
Sometimes this is because that PR needs some other work in another repository to
be merged and deployed first. Sometimes it's because merging the PR will
kickstart some process like sending out emails, and we are waiting to start that
process at a certain time.
Whatever the reason, our convention is that we add a "Do Not Merge" label to the
PR. But we recently had a case where someone didn't see the label and clicked
the merge button anyway. I can tell you that it's not fun scrambling to hit the
"cancel action" button on Github before the code gets deployed! So we started
looking into a way to prevent such issues.
Now, you might ask why we don't just leave these PRs as drafts. While that would
stop them from being merged on an accidental click, there is still some risk
that someone might just mark it ready for review and merge it without checking
the label. We also have some automation set up, like automatically changing card
state when a PR is marked as ready, which would not work if we leave PRs in
draft. Luckily, I found a better solution.
After coming across this [post from Jesse Squires](https://www.jessesquires.com/blog/2021/08/24/useful-label-based-github-actions-workflows/),
I decided to try the improved version of a "Do Not Merge" check he suggests.
```yml
name: Do Not Merge

on:
  pull_request:
    types: [synchronize, opened, reopened, labeled, unlabeled]

jobs:
  do-not-merge:
    if: ${{ contains(github.event.*.labels.*.name, 'do not merge') }}
    name: Prevent Merging
    runs-on: ubuntu-latest
    steps:
      - name: Check for label
        run: |
          echo "Pull request is labeled as 'do not merge'"
          echo "This workflow fails so that the pull request cannot be merged"
          exit 1
```
Our first attempt was dropping this into the repository, which worked, but we
have a lot of repositories and we sometimes create new ones too. Having to copy
this check to all repositories seems like a lot of work! But thanks to a
coworker discovering that you can set organization-wide required workflows, we
were able to set this up for all of our repositories at once.
To do that, you first add this workflow file in some repository. It doesn't need
to be (and probably shouldn't be) in your `.github/workflows` folder. You might
even want to create a new repository to contain just this workflow file.
![A github repository, with a single file named do-not-merge.yml at the root of the repository. The file contains the code listed earlier in this page.](/img/gh-do-not-merge-action.png)
Next, go to your organization settings and select "Actions > General" on the
side menu.
![Github side bar menu, with heading Action expanded, and General selected inside that section.](/img/gh-menu-actions-general.png)
Scroll to the bottom, where you'll find "Required workflows". Click
to add a workflow.
![The required workflows section in Github organization settings. An Add workflow button is present.](/img/gh-required-workflows.png)
Then select the repository where you added your action, and
write the path to the workflow file within that repository.
![Add required workflow page. The previously mentioned repository is selected, and the path do-not-merge.yml is written next to that. A selection below has picked 'All repositories'.](/img/gh-required-workflows-config.png)
You're now done! All PRs in all repositories will run the do not merge label
check, and will prevent you from merging any PR with the label.
![The checks section on a PR page. A check named Do Not Merge has failed, and the merge button is disabled. Github warns that all checks must pass before merging.](/img/gh-do-not-merge-fail.png)
One caveat is
that there seems to be a bug on Github's end of things where for any PR that was
open at the time you added this check, the check gets stuck with the message
"Expected - Waiting for status to be reported". If that happens, add the "do not
merge" label then remove it. This will remind Github to actually run the check.
To make the experience a bit smoother for new repositories, you can also add "do
not merge" as a default PR label. To do so, go to the "Repository > Repository
defaults" section on the side bar.
![Github side bar menu, with heading Repository expanded, and Repository defaults selected inside that section.](/img/gh-repository-defaults.png)
Click "New label", and create a label named
"do not merge".
![The repository labels section in Github organization settings. A new label is being added, with the name do not merge.](/img/gh-repository-defaults-labels.png)
This will only apply to new repositories, so you may need to add
the label to your existing repositories. But even if you don't add the label to
the repository, the check should not block you so you don't have to worry about
going through all your repositories to add this label.

@ -0,0 +1,90 @@
---
title: "Making the Slow Explicit: Dynamodb vs SQL"
date: 2023-02-26T15:51:19-05:00
toc: false
images:
tags:
- dev
- web
---
SQL databases like MySQL, MariaDB, and PostgreSQL are highly performant and can
scale well. However, in practice it's not rare for people to run into performance
issues with these databases and turn to NoSQL solutions like DynamoDB.
Proponents of DynamoDB like Alex DeBrie, the author of ["The DynamoDB Book"](https://www.dynamodbbook.com/),
point to a few reasons for this difference: the HTTP-based APIs of NoSQL databases are more efficient than the TCP connections used by SQL databases,
table joins are slow, and SQL databases are designed to save disk space while NoSQL databases take advantage of large modern disks.[^1]
[^1]: I don't have my copy of the book handy, so I wrote these arguments from
memory. I'm confident that I remember them correctly, but apologies if I
misremembered some details.
These claims don't make a lot of sense to me though. HTTP runs over TCP, so it's
not going to be magically faster. Table joins do make queries more complex, but they
are a common feature that SQL engines are designed to optimize. And I don't
understand the point about SQL databases being designed to save space. While
disk capacities have skyrocketed, even the fastest disks are still extremely slow
compared to how fast CPUs can crunch numbers. A read that has to go all the way to
disk can stall a CPU core for millions of cycles, so it's critical to keep your hot
data in memory and caches. That means making your data take up as little space as
possible. Perhaps Alex is talking about data normalization, which is a property of
database schemas rather than the database itself, but normalization isn't about saving
space either: it's about keeping a single source of truth for everything. I feel like
at the end of the day, these arguments just boil down to "SQL is old and ugly, NoSQL
is new and fresh".
That being said, I think there is still the undeniable truth that people in
practice do hit performance issues with SQL databases far more often than they
hit performance issues with NoSQL databases like DynamoDB. And I think I know
why: it's because DynamoDB makes what is slow explicit.
Look at these two SQL queries. Can you spot the performance difference between
them?
```SQL
SELECT * FROM users WHERE user_id = ?;
SELECT * FROM users WHERE group_id = ?;
```
It's a trick question: of course you can't! Not without looking at the table
schema to check if there are indexes on `user_id` or `group_id`. And if the query
were more complex, you'd likely have to run `EXPLAIN ...` to make sure the
database will actually execute it the way you think it will.
I think this makes it easy to write bad queries. Look at [Jesse Skinner's article](https://www.codingwithjesse.com/blog/debugging-a-slow-web-app/)
about the time he found a web app where all the `SELECT` queries used `LIKE` instead of `=`,
which meant the queries were not using indexes at all! While it's easy to
think that the developer who used `LIKE` everywhere was just
a bad developer, I think the realization we need to come to is that it is too easy to make these mistakes.
The same `SELECT` query could be looking up a single item by its primary key,
or it could be doing a slow table scan. The same syntax could return you a single result, or it could return you a million results.
If you make a mistake, there is no indication of it until your application has been
live for months or even years and your database has grown to a size
where these queries start choking.
On one hand, I think this speaks to how performant SQL databases are. You
can write garbage queries and still get decent performance until your tables
grow to hundreds of thousands of rows! But at the same time, I think this is
exactly why DynamoDB ends up being more scalable in production: because bad
queries are explicit.
With DynamoDB, if you want to get just one item by its unique key, you use
a `Get` operation that makes this explicit. If you make a query that selects
items based on a key condition, that's an explicit `Query` operation. And your
query will only return a small number of results at a time and require you to paginate
with a cursor, again making it explicit that you could be querying for many
items! A query never falls back to scanning an entire table either; you use a `Scan`
operation for that, which makes it explicit that you are doing something wrong.
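To make that concrete, here is a minimal sketch of what these three operations look like with the AWS SDK for JavaScript's document client. The table name, index name, and key attributes are made up for illustration; the point is just that each access pattern is a separate, explicitly named operation.
```ts
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  DynamoDBDocumentClient,
  GetCommand,
  QueryCommand,
  ScanCommand,
} from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Get: exactly one item, looked up by its full primary key.
const user = await client.send(
  new GetCommand({ TableName: "users", Key: { user_id: "123" } })
);

// Query: items matching a key condition, returned one page at a time.
const page = await client.send(
  new QueryCommand({
    TableName: "users",
    IndexName: "by_group", // hypothetical secondary index on group_id
    KeyConditionExpression: "group_id = :g",
    ExpressionAttributeValues: { ":g": "42" },
    Limit: 100,
  })
);
// page.LastEvaluatedKey is the cursor you pass back in to get the next page.

// Scan: reads the whole table, page by page. The expensive option is named as such.
const everything = await client.send(
  new ScanCommand({ TableName: "users", Limit: 100 })
);
```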
Rather than any magic about table joins or differences in connection types, I
think this is really the biggest reason DynamoDB ends up more
scalable. It's not because DynamoDB is magic, it's because it makes bad patterns
more visible. I think it's critical that our tools are explicit, and even
painful, about bad usage patterns, because we will accidentally follow bad
patterns if it's easy to do so.
I want to add, though, that DynamoDB is not perfect in this regard either. I
particularly see this with filters: it's easy to see why Amazon added filters,
but it's not rare for people to use them without understanding how they work
and end up making mistakes (for example, [here](https://stackoverflow.com/questions/64814040/dynamodb-scan-filter-not-returning-results-for-some-requests)).

View file

@ -0,0 +1,97 @@
---
title: 'Setting up my blog as an Onion service (Tor hidden service)'
date: 2023-03-05T15:54:13-05:00
toc: false
images:
---
If you don't know about it, Tor is software that protects online privacy and
fights censorship using the Onion network. For example, [tens of thousands of people in Iran and Russia are using Tor through Tor's Snowflake proxies](https://blog.torproject.org/snowflake-daily-operations/) to get
around government censorship and access vital information, as news organizations like the [BBC started offering access through Tor](https://www.wsj.com/articles/russia-rolls-down-internet-iron-curtain-but-gaps-remain-11647087321).
As [online services are happy to turn over our data to the authorities](https://www.businessinsider.com/police-getting-help-social-media-to-prosecute-people-seeking-abortions-2023-2?op=1),
it is crucial for Tor to exist so journalists, activists, whistle-blowers, and
anyone living under oppressive regimes can access information and communicate freely.
![A chart showing daily snowflake users in 2022. The numbers start to rise in December 2021, which is marked as Unblocking in Russia. The numbers then skyrocket in September, which is marked as Protests in Iran.](/img/tor-censorship-snowflake-chart.webp)
But there is really no reason for Tor to be used solely by people trying to
avoid censorship or stay private. In fact, I think it is good for people to use
Tor for other things, because this way Tor is not just a tool for "people with
something to hide" but a tool that everyone uses. It's a bit like adding
pronouns in your bio on social media: it's good when cis people put pronouns in
their bios because otherwise just having your pronouns in your bio would
immediately flag you as a trans or gender nonconforming person. Everyone else
joining in gives security to those who really need it.
## Setting up the Onion service
My first step was to set up a Docker container to run Tor in.
I put this container on DockerHub for others to use: [seriousbug/tor](https://hub.docker.com/repository/docker/seriousbug/tor/general).
Next, I used [mkp224o](https://github.com/cathugger/mkp224o) to get a vanity
address. Onion addresses are made out of long, random sequences like
`xbbubmuxby...qd.onion`, but you can try to generate one that starts with a
special prefix, for example DuckDuckGo has an Onion service that starts with
"duckduckgo": `duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion`.
Doing this is computationally expensive, but short prefixes are easy to
generate. I wanted something that starts with `bgenc`, which only took a few
seconds. I also tried `kaanbgenc` but gave up after waiting several minutes: the
difficulty goes up exponentially the longer the prefix you want is, so 9
characters would have likely taken months on my desktop.
Next, I set up the configuration file for Tor. That looks like this:
```
Log notice stdout
HiddenServiceDir /etc/tor/service
HiddenServicePort 80 unix:/var/run/tor/bgenc.net.sock
```
I put the keys that `mkp224o` generated into a subfolder named `service` next to
my Tor config. These are going to be mounted at `/etc/tor` in the Tor container.
I then told Tor to look at `/var/run/tor/bgenc.net.sock`, which is where I'll be
mounting my nginx unix socket.
And that reminds me, it's time to set up nginx! Under the `server` block that
serves my website, I added my onion address as one of the host names:
```
server_name bgenc.net;
server_name bgenc2iv62mumkhu2p564vxtao6ha7ihavmzwpetkmazgq6av7zvfwyd.onion;
```
Then, I added the listen directive to create and listen to that socket:
```
listen 443 ssl http2;
listen unix:/var/run/nginx/bgenc.net.sock;
```
I'm using a unix socket here because my nginx is actually running on the host
system without a container, while Tor is in a container. To allow Tor to
connect to the nginx on the host, I would have had to give the Tor container
access to the host network. But I can get around that with a Unix socket, because the
socket can be mounted from the host into the container.
Also note that I'm not using SSL or http2 for the unix socket. Very few
certificate services support .onion addresses, and it's not necessary anyway because the Tor
network already provides the same security guarantees. I also found that
`http2` does not work over the socket, though I'm not sure why.
I finally added the tor container to a `docker-compose.yml` to make it easier to
rebuild if needed. That looks like this:
```yml
tor-hidden-service:
image: seriousbug/tor
restart: always
volumes:
- ./tor:/etc/tor
- /var/run/nginx:/var/run/tor
```
I also needed to make the tor directory containing the configuration file and the service keys
owned by root, with 700 as the permissions. Otherwise Tor refuses to start.
Once all of this is set up, I restarted nginx and my Tor container. And that was about it!
The website is now accessible through Tor! You can find it at [bgenc2iv62mumkhu2p564vxtao6ha7ihavmzwpetkmazgq6av7zvfwyd.onion](http://bgenc2iv62mumkhu2p564vxtao6ha7ihavmzwpetkmazgq6av7zvfwyd.onion/).

View file

@ -0,0 +1,129 @@
---
title: 'Self Hosted Backups with Minio, Kopia, and Tailscale'
date: 2023-04-25T00:11:31-04:00
toc: false
images:
tags:
- homelab
---
I've been struggling with my local backup setup for a while now.
I use [Kopia for backups](/2022.05.29.my-new-backup-kopia/), which is really good,
and I have a [custom built NAS](/raid/) where I can store backups.
That's all good so far, but how do I get the backups from my desktop to my NAS?
My first attempt was to set up Kopia to do backups with SSH access, which Kopia does support.
But when I decided to also limit how much of the server Kopia could access, I started to hit issues.
You can set up the OpenSSH server to limit certain users to SFTP only with the `ForceCommand internal-sftp` setting, and the `ChrootDirectory` option lets you
restrict them to specific folders too. But I kept hitting issues while setting this up, with the server
refusing connections whenever those restrictions were active. While I'm sure there's an answer to why I was failing to set this up,
I came up with an easier solution: Minio.
Minio is an S3-compatible, self-hosted object storage service.
It is generally meant to be used in clusters, but there's nothing stopping you from putting it on a single device!
You do lose a few features, like object locking, but most things still work.
Kopia has S3 support, so it should work with Minio.
To set up Minio, I put it in a `docker-compose.yml` like this:
```yml
minio:
image: minio/minio
command: minio server /data --console-address ":9001"
restart: always
volumes:
- minio-data:/data
ports:
- '9000:9000'
- '9001:9001'
env_file:
- .minio.env
```
Then in `.minio.env`, I enter the root username and password:
```
MINIO_ROOT_USER=...
MINIO_ROOT_PASSWORD=...
```
A `docker compose up -d`, and minio was running!
I hit a minor issue though: the minio server was running over HTTP, not HTTPS, so no encryption.
This is not a big deal because the connection is only local:
it's literally two computers sitting in a room, wired to each other through a network switch.
Kopia does have a setting to allow HTTP connections,
and I could have created a self-signed certificate and told Kopia to use that,
but dealing with self-signed certificates can be a little annoying.
Now, I've also been looking for excuses to play around with Tailscale. Tailscale
is a mesh VPN software that lets you connect devices securely, while still
allowing them to communicate peer-to-peer directly (when possible). I recently
set up all my devices with Tailscale to make it easier for myself to access my
home network remotely.
But Tailscale also comes with a lot of cool additional features.
One of these is MagicDNS, which automatically assigns "hostname.network.ts.net" domain names
to devices on your Tailscale network. Another feature allows you to generate real TLS
certificates for your MagicDNS domains. This is really cool because the generated certificates are "real":
they are not self-signed, so browsers and other tools accept them without any special setup.
![A web page with the contents: HTTPS Certificates. Beta. Allow users to provision HTTPS certificates for their devices. Learn More. Below is a button labeled Disable HTTPS.](/img/2023-04-25.tailscale.png)
So putting these together, I enabled MagicDNS and HTTPS certificates for my network. Then,
I generated my certificates with `sudo tailscale cert --cert-file public.crt --key-file private.key hostname.network.ts.net`,
and put those certificates into a `certs` folder.
Next, I adjusted my `docker-compose` file to make Minio use these certificates:
```yml
minio:
image: minio/minio
command: minio server /data --console-address ":9001" --certs-dir /certs
restart: always
volumes:
- minio-data:/data
- ./certs:/certs
ports:
- '9000:9000'
- '9001:9001'
env_file:
- .minio.env
```
Note the added `--certs-dir /certs` in the command, and the extra mount under volumes.
And that's about it! I rebuilt the container with `docker compose up -d minio`,
then navigated to `https://hostname.network.ts.net:9001` on a browser on a
Tailscale connected computer. And boom! HTTPS protected Minio console. While the
URL suggests it could be public, these domains are local to your Tailscale
network unless you explicitly expose them.
Next, I created a bucket named `backup` to house my backups. Then, I created an
access key. Minio allows you to restrict what a client can and can't do with an
access key by defining a policy. I restricted this access key to only access my
backups bucket with this policy:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": ["arn:aws:s3:::backup/*", "arn:aws:s3:::backup"]
},
{
"Effect": "Allow",
"Action": ["s3:ListAllMyBuckets"],
"Resource": ["arn:aws:s3:::*"]
}
]
}
```
There are two statements in the policy: the first allows full access to the backup
bucket only, and the second allows the key to list what buckets are
available. I'm not sure if I could have restricted that further as well, but I'm
happy with how strict this is already.
All that's left is to point Kopia at `https://hostname.network.ts.net:9000`,
enter the access key, and let it back things up.

View file

@ -0,0 +1,64 @@
---
title: "Fully Headless Setup for Raspberry Pi"
date: 2023-04-27T20:40:00-04:00
toc: false
images:
tags:
- homelab
---
I always hit this issue: I have a Raspberry Pi I want to set up, but I don't
want to break out the cables to hook it up to a monitor and keyboard.
Luckily, the Raspbian OS actually has built-in features for this.
Just follow these steps:
```sh
# Extract the Raspbian OS Lite image
xz -d <image>-lite.img.xz
# Flash the image to the SD card (needs root)
sudo dd if=<image>-lite.img of=/dev/sdX bs=4M
# Mount the boot partition
sudo mount /dev/sdX1 /mnt
# Enable SSH by creating an empty file named "ssh" on the boot partition
sudo touch /mnt/ssh
```
Create a file `/mnt/wpa_supplicant.conf`, with the contents:
```
country=<2 letter country code here>
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="wifi ssid here"
psk="wifi password here"
}
```
Set up your user:
```sh
# replace username and password below
echo "username:$(echo 'password' | openssl passwd -6 -stdin)" | sudo tee /mnt/userconf
```
Optional: Set up a hostname.
```sh
sync
sudo umount /mnt
# Mount the root partition this time
sudo mount /dev/sdX2 /mnt
# replace host below
echo host | sudo tee /mnt/etc/hostname
```
Also edit `/mnt/etc/hosts` so the `127.0.1.1` entry points to your new hostname.
Otherwise, just `sync && sudo umount /mnt` and you are done.
If you did set up the hostname, and you have `.local` domains with Avahi set up,
you should be able to `ssh username@host.local`. If you didn't, or if that
doesn't work, use your router's DHCP page to find the IP address for the
Raspberry Pi.

View file

@ -0,0 +1,59 @@
---
title: 'CSS only placeholder for contenteditable elements'
date: 2023-07-02T13:06:15-05:00
toc: false
images:
tags:
- dev
- web
---
The HTML elements `input` and `textarea` include a `placeholder` property. This
property displays a placeholder text, which disappears once the user selects
the input and types something in. I'm sure you've seen a placeholder before; it
doesn't need much explanation.
But what if an `input` or `textarea` doesn't fit your needs? `input` only allows
a single line of text. While `textarea` does allow multiple lines, the height of
the text area is fixed: if you want it to expand as the user types, you need to
add javascript to resize it on the fly. But there is an alternative: you can use
a `div` that is marked `contenteditable`. Divs can resize based on their
contents, so this is an easy way to create a text area that resizes
automatically without any javascript.
But the downside to using an editable div is that basic functionality like
`placeholder` doesn't exist. And if you duckduckgo this, you'll find a lot of
people working around the problem with javascript. While this is certainly
possible, I am trying to minimize the amount of javascript on this page. That's
the whole reason why I didn't use a `textarea` in the first place! But I found a
way to add a placeholder to a `contenteditable` span without javascript. Here it is:
```html
<span contenteditable="true" data-placeholder="click on me and type"></span>
<style>
/* Add the placeholder text */
[data-placeholder]::before {
content: attr(data-placeholder);
/* Or whatever */
color: gray;
}
/* Hide the placeholder when selected, or when there is text inside */
[data-placeholder]:focus::before,
[data-placeholder]:not(:empty)::before {
content: none;
}
</style>
```
And here's what it looks like:
<iframe width="100%" height="300" src="//jsfiddle.net/SeriousBug/t9hmgyq5/13/embedded/result,html,css/" allowfullscreen="allowfullscreen" allowpaymentrequest frameborder="0"></iframe>
This also works with a `div`, but there is one caveat: If the user types
something into the `div` then deletes it, the browser will leave a `<br/>` in
the `div`. This means the `div` won't be empty, and the placeholder won't come
back. Boo! The same issue doesn't happen with a `span` though, so use a `span`
that is set to `display: block` instead.
Also, remember not to rely on just the placeholder. Make sure to add labels
to your inputs.

View file

@ -0,0 +1,132 @@
---
title: 'Getting theme colors in JavaScript using React with DaisyUI and TailwindCSS'
date: 2023-08-10T00:18:27-05:00
toc: false
images:
tags:
- web
- dev
- react
---
I've been building a web app using React and TailwindCSS, with DaisyUI. But
while working on it I hit a minor snag: I'm trying to use Chart.js, and Chart.js
creates a canvas to render charts. But the canvas can't pick up the CSS
variables that are defined on the page itself. That means that my charts can't
use the colors from my theme, unless I manually copy and paste the theme colors!
This is pretty bad for maintainability because if the theme colors are ever
changed, or themes are added or removed, you'll have to come back and update the
colors for the charts as well. Boo!
Luckily, I found out that you can read CSS variables from JavaScript. So I added
this code:
```ts
function getPropertyValue(name: string) {
return getComputedStyle(document.body).getPropertyValue(name);
}
```
This returns the value of any CSS variable you pass it. For example
`getPropertyValue("--tab-border")` returns `1px` for my theme!
Next, I just looked through the CSS on the page to figure out what CSS variables
DaisyUI sets for themes. I quickly found the most important ones I needed: the
primary and secondary colors, and the colors for the text that goes on top of
them.
```ts
const primary = getPropertyValue('--p');
const secondary = getPropertyValue('--s');
const primaryText = getPropertyValue('--pc');
const secondaryText = getPropertyValue('--sc');
```
This is all great! But I had one more concern: I needed a way to change these
variables and re-render components whenever the user toggles between the light
and dark themes.
I decided to use SWR for this. SWR is mainly meant to be used to fetch data from
an API, but there's really nothing stopping you from using it for anything else.
In this case, SWR will cache all the colors in one central place, and allow me to
re-render all the components when the colors change using its `mutate` API.
Here's what that code looks like:
```ts
export function useThemeColor() {
const themeFetcher = useCallback(() => {
const primary = getPropertyValue('--p');
const primaryText = getPropertyValue('--pc');
const secondary = getPropertyValue('--s');
const secondaryText = getPropertyValue('--sc');
return { primary, primaryText, secondary, secondaryText };
}, []);
// The key "data:theme" could be anything, as long as it's unique in the app
const { data: color, mutate } = useSWR('data:theme', themeFetcher);
return { ...color, mutate };
}
```
It's very easy to use:
```ts
export default function Dashboard() {
const { primary, primaryText } = useThemeColor();
// ... fetch the data and labels ...
return (
<Line
data={{
labels,
datasets: [
{
data,
// borderColor is the color of the line
borderColor: `hsl(${primary})`,
// backgroundColor is the color of the dots
backgroundColor: `hsl(${primaryText})`,
},
],
}}
/>
);
}
```
Here's what that looks like:
![A screenshot of a web page. At the top there is a dark red colored button labelled Dashboard. There is a line chart below, which uses the same dark red color as the button.](/img/2023-08-10.chartjs.png)
To keep the colors changing whenever the user toggles the theme, you then just
have to call the `mutate` function inside your toggle button.
```ts
export function ThemeToggle() {
const { mutate: mutateTheme } = useThemeColor();
const [theme, setTheme] = useState("light");
const toggleTheme = useCallback(() => {
if (theme === "dark") {
document.body.dataset.theme = "autumn";
setTheme("light");
} else {
document.body.dataset.theme = "forest";
setTheme("dark");
}
mutateTheme();
}, [theme, mutateTheme, setTheme]);
return (
<div className="btn btn-ghost text-xl" onClick={toggleTheme}>
<PiLightbulbBold />
</div>
);
}
```
Oh, and that's a bonus trick for you: you can swap the DaisyUI theme just by
setting `document.body.dataset.theme`, as long as that theme is enabled in your
DaisyUI settings.

View file

@ -0,0 +1,49 @@
---
title: "Next.js error about native node modules using bindings"
date: 2023-08-13T16:44:41Z
toc: false
images:
tags:
- dev
---
This might be a little niche, but I had trouble finding anyone else write about
it so I might as well.
I was trying to use a native node module [libheif-node-dy](https://www.npmjs.com/package/libheif-node-dy) that uses [bindings](https://www.npmjs.com/package/bindings) in a Next.js app, and I kept getting errors from bindings:
```
- error node_modules/.pnpm/bindings@1.5.0/node_modules/bindings/bindings.js (179:15) @ indexOf
- error Error [TypeError]: Cannot read properties of undefined (reading 'indexOf')
at Function.getFileName (webpack-internal:///(rsc)/./node_modules/.pnpm/bindings@1.5.0/node_modules/bindings/bindings.js:193:18)
at bindings (webpack-internal:///(rsc)/./node_modules/.pnpm/bindings@1.5.0/node_modules/bindings/bindings.js:130:52)
```
After a little digging, I came across [this Github issue by Nicholas Ramz](https://github.com/TooTallNate/node-bindings/issues/61), who traced the issue to the Webpack/Terser minimizer, which removes a line that looks like a no-op but is actually supposed to
trigger an error stack. Bindings needs that to happen, otherwise you get this error.
The solution he suggested is to change your Webpack config to disable that
Terser optimization. This sounds like a good solution, but it's a bit hard to
apply to Next.js because I don't have a webpack config of my own. Next.js has
its Webpack config built in, and while it allows you to override it, the
documentation is a little bare.
I found an easier solution for Next.js though: you can tell Next.js to not
bundle a module. You can configure this with the
`serverComponentsExternalPackages` option. My config file now looks like this:
```js
/** @type {import('next').NextConfig} */
const nextConfig = {
experimental: {
serverComponentsExternalPackages: ["libheif-node-dy"],
},
};
module.exports = nextConfig;
```
And this immediately worked: the module no longer gets bundled and is instead
executed like a regular node module, which gets around the issue. Since native
modules can't be bundled into a JS file anyway, I don't think you lose much by
disabling bundling just for them.

View file

@ -0,0 +1,134 @@
---
title: "Amazon SES Production Access Approval"
date: 2023-10-03T04:34:37Z
toc: false
images:
tags:
---
I've been setting up something for a relative who's trying to start a business,
and as part of that they needed to be able to send transactional emails. After
looking through some of the services available, Amazon SES looked like a good
option. You get a fairly large number of emails for free every day, and
additional emails are available at a decent price paid per email rather than the
big "pay $X to get 10,000 more emails" kind of packages that most other
providers seem to offer.
## Let's sign up with AWS!
So I signed up for AWS, created my access tokens, imported the SES client into
the app, and verified a domain so I could start testing. Phew! It did work
pretty painlessly at this point: I was sending my test emails and receiving them
very quickly.
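For reference, the sending side is only a few lines. Here's a minimal sketch using `@aws-sdk/client-ses`; the region and addresses are placeholders rather than anything from the actual app.
```ts
import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";

const ses = new SESClient({ region: "us-east-1" });

// Send a single transactional email. The sender domain must be verified in SES.
await ses.send(
  new SendEmailCommand({
    Source: "orders@example.com",
    Destination: { ToAddresses: ["customer@example.com"] },
    Message: {
      Subject: { Data: "Your order" },
      Body: {
        Text: { Data: "Thanks for your purchase!" },
        Html: { Data: "<p>Thanks for your purchase!</p>" },
      },
    },
  })
);
```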
But there's a weird thing Amazon does for SES: your account starts out in a
sandbox mode where you can only send test emails to your own verified
domains or inboxes. To actually start sending emails, you need to request
production access first. I thought this was sort of a formality, but it turns out that's
not the case, because AWS very quickly rejected my request. Now, I get it: spam
emails are a big problem and Amazon doesn't want spammers abusing SES, which
could hurt SES's email reputation. If you are unfamiliar: if email services
decide your IP address belongs to a spammer, your emails start going straight into
the spam folder. With SES, you use shared IP addresses unless you pay
extra to reserve your own IP address, so a spammer using SES would cause issues
for everyone who isn't paying for a dedicated IP.
This was very frustrating though, because I then decided to sign up for SendGrid
who banned my account immediately before I could even complete the sign up. It
was a bizarre experience to receive a "here's the code to verify your email
address" email and a "you have been banned, goodbye" email simultaneously. What
did I even do?
I was able to get approved after some back and forth with the AWS support team,
and I wanted to write about this in case others hit this same issue because the
feedback AWS gives you is basically nonexistent. When my request was rejected,
the response I got just said:
> We reviewed your request and determined that your use of Amazon SES could have
> a negative impact on our service. We are denying this request to prevent other
> Amazon SES customers from experiencing interruptions in service. For security
> purposes, we are unable to provide specific details.
Oof. I was especially confused because I had explicitly described that I would
only be sending transactional emails to paying customers, and only once, just to
deliver their order after they had paid. This is as far away from spam as you can
get: the only way you would receive an email is if you asked for it and paid.
But I think I now understand a few things the AWS support team is looking for
before they approve your request. I wish they would just spell this out, but I
guess that's the "security purposes".
## What to put in your production access request
1. Note how many emails you'll be sending. Give your best estimate. This won't be your sending limit: I said I'd send 100 emails a day and got a quota for 50,000 emails per day. So there's no need to over-estimate to get a higher limit or to lie that you'll send less, your best estimate should be good enough.
2. Explain what you'll do with bounces, complaints, and unsubscribe requests. These might not even make sense for you: in my case the emails are only sent to paying customers after a successful transaction, and there are no recurring emails, so there is nothing to unsubscribe from. But explaining that wasn't enough for AWS; I also had to explain that I would stop sending emails to any bounced addresses or to anyone who complains. If you don't have the capability to do that in your code, make sure to implement it first (there's a minimal sketch of what that could look like right after this list).
3. Attach at least one screenshot of the emails you'll be sending. A picture is worth a thousand words and all that; I think this gets your point across much more quickly. I'm not sure if you can attach a picture to your initial request, but I think you can comment again to attach one afterwards without waiting for them to reject you.
4. Write down where you'll be getting the email addresses to send emails to,
even if it feels obvious. I had already said I was going to send emails to
customers who bought something, but I think this wasn't clear enough. In
followups I described that customers would enter their email addresses when
making a purchase, and added a screenshot of the checkout page where it
clearly says "your purchase will be sent to this email address". I think
showing this also helped.
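To make point 2 concrete, here is a minimal sketch of what a suppression check could look like. The in-memory set is a stand-in for whatever database your app already uses, and the function names are made up for illustration.
```ts
// Addresses that bounced or complained; persist this in your database in practice.
const suppressed = new Set<string>();

// Call this from wherever you process bounce and complaint notifications.
export function recordBounceOrComplaint(address: string) {
  suppressed.add(address.toLowerCase());
}

// Check this before every send, and skip suppressed addresses.
export function canSendTo(address: string): boolean {
  return !suppressed.has(address.toLowerCase());
}
```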
I'm sure there are other things to consider and explain, but this worked for me.
If you get denied, reopen the case, add even more screenshots and information
and try again.
## Brevo
Alternatively, I also signed up for [Brevo](https://www.brevo.com/). It works,
and was honestly easier to set up than SES because I didn't have to pull in AWS
client libraries. Instead I just had to call `fetch` and that was it. Hey, here
is the code for that actually:
```ts
export type Email = {
sender: {
name: string;
email: string;
};
to: string;
content: {
text: string;
html: string;
};
subject: string;
};
export async function sendEmailBrevo({
sender,
to,
content: { html: htmlContent, text: textContent },
subject,
}: Email) {
await fetch("https://api.brevo.com/v3/smtp/email", {
method: "POST",
headers: {
accept: "application/json",
"content-type": "application/json",
"api-key": process.env.BREVO_API_KEY,
},
body: JSON.stringify({
sender,
to: [{ email: to }],
subject,
htmlContent,
textContent,
}),
});
}
```
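Calling it then looks something like this, with placeholder addresses:
```ts
await sendEmailBrevo({
  sender: { name: "My Store", email: "orders@example.com" },
  to: "customer@example.com",
  subject: "Your order",
  content: {
    text: "Thanks for your purchase!",
    html: "<p>Thanks for your purchase!</p>",
  },
});
```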

View file

@ -0,0 +1,117 @@
---
title: Solving `app_data` or `ReqData` missing in requests for actix-web
date: 2022-03-26
---
> This post is day 5 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
I'm using `actix-web` to set up a web server, and I've been hitting a small
problem that I think other people may come across too.
To explain the problem, let me talk a bit about my setup. I have a custom
middleware that checks if a user is authorized to access a route. It looks like
this:
```rust
impl<S: 'static, B> Service<ServiceRequest> for CheckLoginMiddleware<S>
where
S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
S::Future: 'static,
{
type Response = ServiceResponse<EitherBody<B>>;
type Error = Error;
type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;
dev::forward_ready!(service);
fn call(&self, req: ServiceRequest) -> Self::Future {
let state = self.state.clone();
let (request, payload) = req.into_parts();
let service = self.service.clone();
let user_token = get_token_from_header(&request);
let path_token = if self.allow_path_tokens {
get_token_from_query(&request)
} else {
None
};
Box::pin(async move {
match verify_auth(state, user_token, path_token, request.path()).await {
Ok(authorized) => {
tracing::debug!("Request authorized, inserting authorization token");
// This is the "important bit" where we insert the authorization token into the request data
request.extensions_mut().insert(authorized);
let service_request =
service.call(ServiceRequest::from_parts(request, payload));
service_request
.await
.map(ServiceResponse::map_into_left_body)
}
Err(err) => {
let response = HttpResponse::Unauthorized().json(err).map_into_right_body();
Ok(ServiceResponse::new(request, response))
}
}
})
}
}
```
The `verify_auth` function is omitted, but the gist of it is that it returns a `Result<Authorized, Error>`.
If the user is authorized, the authorization token `verify_auth` returned is then attached to the request.
Then here's how I use it in a path:
```rust
#[delete("/{store}/{path:.*}")]
async fn delete_storage(
params: web::Path<(String, String)>,
// This parameter is automatically filled with the token
authorized: Option<ReqData<Authorized>>,
) -> Result<HttpResponse, StorageError> {
let (store, path) = params.as_ref();
let mut store_path = get_authorized_path(&authorized, store)?;
store_path.push(path);
if fs::metadata(&store_path).await?.is_file() {
tracing::debug!("Deleting file {:?}", store_path);
fs::remove_file(&store_path).await?;
} else {
tracing::debug!("Deleting folder {:?}", store_path);
fs::remove_dir(&store_path).await?;
}
Ok(HttpResponse::Ok().finish())
}
```
This setup worked for this path, but would absolutely not work for another path.
I inserted logs to track everything, and just found that the middleware would
insert the token, but the path would just get `None`. How‽ I tried to slowly
strip everything away from the non-functional path until it was identical to
this one, but it still would not work.
Well, it turns out the solution was very simple. See this:
```rust
use my_package::storage::put_storage;
use crate::storage::delete_storage;
```
Ah! They are imported differently. I had set up my project as both a library and
a binary for various reasons. However, it turns out importing the same thing
through `crate` is different from importing it through the library: the two
paths produce what Actix considers two distinct types, so the route can't find
the attached token under the type it's looking for.
The solution is normalizing the imports. I went with going through the library
for everything, because that's what `rust-analyzer`'s automatic import seems to
prefer.
```rust
use my_package::storage::{put_storage, delete_storage};
```
Solved!

10
src/routes/posts/bash.md Normal file
View file

@ -0,0 +1,10 @@
---
title: Writing a Program in Bash
date: 2015-04-12
---
I don't really know why, but writing code in Bash makes me kinda anxious. It feels really old, outdated, and confusing. Why can't a function return a string? And no classes, or even data types? After getting confused, usually, I just end up switching to Python.
<!--more-->
But this time, I decided to stick with Bash. And I am surprised. It is unbelievably good. I must say, now I understand the Unix philosophy much better. Having small programs that do one thing very well allows you to combine the power of those programs in your scripts. You think your favourite programming language has a lot of libraries? Well, bash has access to more: the entire Unix ecosystem powers bash. Converting videos, taking screenshots, sending mails, downloading and processing pages; there are already command line tools for all of that, and you have full access to all of them.
The program I've started writing is called [WoWutils](https://github.com/SeriousBug/WoWutils). And I'm still shocked at just how much functionality I have added with so little code. If you are considering writing a program in Bash too, just go through with it. It really is very powerful.

View file

@ -0,0 +1,35 @@
---
title: "Black Crown Initiate"
date: 2022-04-02
---
> This post is day 9 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
I love metal; I've been listening to metal since I was 13. It was the first
music genre that I actually liked: until I discovered metal I thought I
didn't like music at all, because nothing I heard on the radio or heard my
friends listening to was interesting to me. My taste in music has expanded and
changed over the years to include different types of music and genres, but metal
remains the one I love the most.
Demonstrating my metal-worthiness aside, I've always listened to European metal
bands. I had this weird elitist thought that "good" metal could only come from
Europe, with exceptions for some non-European bands, and that American metal was
just always bad. This is obviously false, but I had just never come across
anything American that I liked. That is, until recently.
I recently came across [Black Crown Initiate](https://www.metal-archives.com/bands/Black_Crown_Initiate/3540386765),
a progressive death metal band from Pennsylvania. And I have to tell you that they are amazing.
Their first release "Song of the Crippled Bull" is absolutely amazing. The music
is just the right amount of metal and progressive, and lyrics are amazing. The
clean vocals get the themes of the song across, while the growls give a lot of
power to the songs. My favorite songs from this release are "Stench of the Iron
Age" and the title track "Song of the Crippled Bull". Other hightlights from the
band I've listened to so far include "A Great Mistake", "Death Comes in
Reverse", "Vicious Lives".
I'm still making my way through their songs, but I'm glad to have discovered
something from America that I absolutely love. I'm now trying to find more
non-European bands that I enjoy.

View file

@ -0,0 +1,56 @@
---
title: An introduction to Bulgur Cloud - simple self hosted cloud storage
date: 2022-03-29
---
> This post is day 8 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
I've been recently working on Bulgur Cloud, a self hosted cloud storage
software. It's essentially Nextcloud, minus all the productivity software. It's
also designed to be much simpler, using no databases and keeping everything on
disk.
The software is still too early to actually demo, but the frontend is at a point
where I can show off some features, so that's what I want to do here.
![A white web page with the words Bulgur Cloud. Below is Simple and delicious cloud storage and sharing. Under that are two fields titled Username and Password, and a black button titled Login.](/img/2022-03-29-00-17-38.png)
I've been going for a clean print-like look. I think it's going pretty well so far.
![A web page with 3 files listed, sprite-fright.mp4, test.txt, and sprite-fright.LICENSE.txt. There are pencil and thrash bin symbols to the right of the file names. A leftward arrow is grayed out on the top left, and top right says kaan. On the bottom right there's a symbol of a cloud with an up arrow.](/img/2022-03-29-00-16-13.png)
I'm not sure about the details of how the directory listing will look. I don't
think I like the upload button in the corner, and the rename and delete icons
feel like they would be easy to mis-press. There is a confirmation before
anything is actually deleted, but it still would be annoying.
![A pop up that says Delete file text.txt, with the buttons Delete and Cancel below it.](/img/2022-03-29-00-20-48.png)
Something I'm pretty happy with is the file previews. I've added support for
images, videos, and PDFs. Video support is restricted to whatever formats your
browser supports, since the server doesn't do any transcoding, but I think
it's still very useful for a quick preview. I'm also planning on adding support for
audio files. The server supports range requests, so you can seek around in the
video without waiting to download everything (although I've found that Firefox
doesn't handle that very well).
![A page with the text sprite-fright.mp4, and a video player below showing a frame from the movie. Below the player is a link that says Download this file.](/img/2022-03-29-00-22-48.png)
This is a web interface only so far, but I'm planning to add support for mobile
and desktop apps eventually. I've been building the interface with React Native
so adding mobile/desktop support shouldn't be too difficult, but I've been
finding that "write once, run everywhere" isn't always that simple. I ended up
having to add web-only code to support stuff like the video and PDF previews, so
I'll have to find replacements for some parts. Mobile and desktop apps natively
support more video and audio formats too, and with native code you usually have
the kind of performance to transcode video if needed.
The backend is written in Rust with `actix-web`, using async operations. It's
incredibly fast, and uses a tiny amount of resources (a basic measurement
suggests less than 2 MB of memory used). I'm pretty excited about it!
After a few more features (namely being able to move files), I'm planning to put
together a demo to show this off live! The whole thing will be open source, but
I'm waiting until it's a bit more put together before I make the source public.
The source will go live at the same time as the demo.

View file

@ -0,0 +1,92 @@
---
title: Emacs and extensibility
date: 2015-10-06
---
Update: I've put the small Emacs tools I have written to a
[gist](https://gist.github.com/91c38ddde617b98ffbcb).
I have been using Emacs for some time, and I really love it. The
amount of power it has, and the customizability, is incredible. What
other editor allows you to connect to a server over SSH and edit files,
which is what I am doing to write this post? How many editors or IDEs
have support for so many languages?
<!--more-->
One thing I didn't know much about in the past, however, is the extensibility of
Emacs. I mean, I do use a lot of packages, but I had never written
Elisp and I didn't know how hard or easy it would be. But after
starting to learn Clojure a bit, and feeling more comfortable with
lots of parentheses, I decided to extend Emacs a bit to make it fit
me better.
The first thing I added is an "insert date" function. I use Emacs to
take notes during lessons (using Org-mode), and I start every note with
the date of the lesson. Sure, glancing at the date in the corner of my
screen and writing it down takes just a few seconds, but why not write
a command to do it for me? Here is what I came up with:
~~~commonlisp
(defun insert-current-date ()
"Insert the current date in YYYY-MM-DD format."
(interactive)
(shell-command "date +'%Y-%m-%d'" t))
~~~
Now that was easy and convenient. And being able to write my first
piece of Elisp so easily was really fun, so I decided to tackle
something bigger.
It is not rare that I need to compile and run a single C file. Nothing
fancy, no libraries, no makefile, just a single C file to compile and
run. I searched around the internet for things like "Emacs compile and run C", but
couldn't find anything. I had been doing this by opening a shell in
Emacs and compiling/running the program there, but again, why not automate
it?
The code that follows is not really good. "It works" is as good as it
gets, really. Then again, considering that this is the first
substantial Elisp I have written, that is pretty impressive, I think,
for the language and for Emacs, which are both very helpful and powerful.
```commonlisp
(require 's)
(defun compile-run-buffer ()
"Compile and run buffer."
(interactive)
(let* ((split-file-path (split-string buffer-file-name "/"))
(file-name (car (last split-file-path)))
(file-name-noext (car (split-string file-name "[.]")))
(buffer-name (concat "compile-run: " file-name-noext))
(buffer-name* (concat "*" buffer-name "*")))
(make-comint buffer-name "gcc" nil "-Wall" "-Wextra" "-o" file-name-noext file-name)
(switch-to-buffer-other-window buffer-name*)
(set-process-sentinel (get-buffer-process (current-buffer))
(apply-partially
'(lambda (prog-name proc even)
(if (s-suffix? "finished\n" even)
(progn
(insert "Compilation successful.\n\n")
(comint-exec (current-buffer) prog-name (concat "./" prog-name) nil nil))
(insert (concat "Compilation failed!\n" even))))
file-name-noext))))
```
Again, the code is not really good. I'm uploading it here right now
because I'm actually very excited that I wrote this. Even now I can
think of ways to improve it, for example moving the compiler and the
flags to variables so that they can be customized. I could also
improve the presentation, because the strings printed by this function,
comint, and the running programs get mixed up. I'll update this blog post
if I get to updating the code.
If this is your first time hearing about Emacs, this post may look
very confusing. I don't do Emacs any justice here, so do check it out
somewhere like [Emacs rocks](http://emacsrocks.com/). On the other
hand, if you have been looking for functionality like this, I hope this
helps. If you have any suggestions about the code, I'd love to hear
them; you can find my email on the "about me" page. Anyway, have a
good day!

View file

@ -0,0 +1,78 @@
---
title: Do kids not know computers now?
date: 2022-03-28
---
> This post is day 7 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
One discussion point I've seen around is that kids nowadays don't know how to
use computers. Okay, that's a bit of a strawman, but take this article titled [File Not Found](https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z).
The gist of the article is that Gen-Z kids are too used to search interfaces.
That means they don't actually know where files are stored or how they
are organized; they only know that they can access the files by searching for
them. The article talks about how professors ended up having to teach them how
to navigate directory structures and file extensions.
As the article claims, it seems to be related to how modern user interfaces are
designed. Our UIs nowadays are more focused around search capabilities: you just
type in a search bar and find what you need.
![A desktop, displaying a bar with the words launch, followed by fi. On the right side of the bar are program names Firefox, fish, Profiler, Frontend, Patch Fixes, and Qt File Manager. Firefox is highlighted.](/img/app-search-bar.png)
In some sense I do like this sort of interface. I use something like that when
launching applications, both on my Desktop and on my laptop! It's actually a
better interface compared to hunting for icons on your desktop. I use similar
interfaces in VSCode to switch between open editor tabs.
However, this is a complementary interface to hierarchy and organization. Going
back to the file systems example discussed in the article, being able to search
through your files and folders is useful. But it's not a replacement for
hierarchy. You can't just throw files into a folder, and expect to always find
them accurately.
Let me give an example with Google Photos. I have been keeping all my photos on
Google Photos, and between migrating photos from old phones and ones I have
taken on new phones, I have over 8,000 photos. This is completely disorganized
of course, but Google Photos has a search functionality. It even uses AI to
recognize the items in the photos, which you can use in the search. A search for
"tree" brings up photos of trees, "cat" brings up cats, and you can even tag
people and pets and then search for their names. Very useful, right?
Well, it is sometimes. I recently had to remember what my wife's car license
plate is. A quick search for "license plate" on google photos and luckily, I had
taken a photo of her car that included the license plate in the frame. Success!
On the other hand, I was trying to find some photos from a particular gathering
with my friends. Searches for their names, the name of the place, or stuff I know
is in the picture turned up nothing. I eventually had to painstakingly
scroll through all my photos to find the one I wanted.
This reminds me of 2 things. One is this article named [To Organize The World's
Information](https://dkb.io/post/organize-the-world-information) by
[@dkb868@twitter.com](https://nitter.net/dkb868). One thing I found interesting
in that article was that the concept of "the library" has been lost over the
last few decades as a way to organize information. They define the library as a
hierarchical, categorized directory of information. The article also talks about
other organizational methods, and is worth a read.
The other thing is the note taking software we're building at my workplace,
[Dendron](https://dendron.so/). One of the core tenets of Dendron is that
information is hierarchical. Something the founder Kevin recognized was that
other note taking software makes it easy to create new notes, but doesn't
support hierarchical structures, which makes it hard to find those notes later.
I've also experienced this: when I used other note taking software (or sticky
notes!), I found it easy to jot down a few notes, but they very
quickly got lost or became hard to find when I needed them. A hierarchical organization
makes it possible to actually find and reference the information later.
Requiring organization creates a barrier of entry to storing information, but
what good is storing information if you can't retrieve the information later?
This seems to work pretty well with Dendron. Would it not work for other things?
Why not for taking photos? You of course want to be able to quickly snap a photo
so you can capture a moment before it's gone, but perhaps you could be required
to organize your photos afterwards. Before modern cellphones and internet
connected cameras, you'd have to get your photos developed or transfer them off
an SD card: a step where you would have to (or at least have the opportunity to) organize
your photos. I wonder if cloud services could ask you to organize your photos
before syncing them as well.

View file

@ -0,0 +1,57 @@
---
title: Taking Backups with Duplicity
date: 2015-05-16
---
I wanted to start taking backups for some time, but I haven't had the time to do any research and set everything up. After reading another [horror story that was saved by backups](https://www.reddit.com/r/linuxmasterrace/comments/35ljcq/couple_of_days_ago_i_did_rm_rf_in_my_home/), I decided to start taking some backups.
<!--more-->
After doing some research on backup options, I decided on [duplicity](http://duplicity.nongnu.org/). The backups are compressed, encrypted and incremental, both saving space and ensuring security. It supports both local and ssh files (as well as many other protocols), so it has everything I need.
I first took a backup onto my external hard drive, then onto my VPS. The main problem I encountered was that duplicity uses [paramiko](https://github.com/paramiko/paramiko) for ssh, but it wasn't able to negotiate a key exchange algorithm with my VPS. Luckily, duplicity also supports [pexpect](http://pexpect.sourceforge.net/pexpect.html), which uses OpenSSH. If you encounter the same problem, you just need to tell duplicity to use the pexpect backend by prepending your url with `pexpect+`, like `pexpect+ssh://example.com`.
Duplicity doesn't seem to have any sort of configuration file of its own, so I ended up writing a small bash script to serve as a sort of configuration, and also to keep me from running duplicity with the wrong arguments. I kept forgetting to add the extra slash to `file://`, causing duplicity to back up my home directory into my home directory! :D
If anyone is interested, here's the script:
```bash
#!/bin/bash
if [[ $(id -u) != "0" ]]; then
read -p "Backup should be run as root! Continue? [y/N]" yn
case $yn in
[Yy]*) ;; # continue with the backup
*) exit;;
esac
fi
if [[ $1 = file://* ]]; then
echo "Doing local backup."
ARGS="--no-encryption"
if [[ $1 = file:///* ]]; then
URL=$1
else
echo "Use absolute paths for backup."
exit 1
fi
elif [[ $1 = scp* ]]; then
echo "Doing SSH backup."
ARGS="--ssh-askpass"
URL="pexpect+$1"
else
echo "Unknown URL, use scp:// or file://"
exit 1
fi
if [[ -n "$1" ]]; then
duplicity $ARGS --exclude-filelist /home/kaan/.config/duplicity-files /home/kaan "$URL/backup"
else
echo "Please specify a location to backup into."
exit 1
fi
```

View file

@ -0,0 +1,238 @@
---
title: Emacs as an operating system
date: 2016-04-14
modified: 2016-05-29
---
Emacs is sometimes jokingly called a good operating system with a bad
text editor. Over the last year, I found myself using more and more of
Emacs, so I decided to try out how much of an operating system it
is. Of course, operating system here is referring to the programs that
the user interacts with, although I would love to try out some sort of
Emacs-based kernel.
<!--more-->
# Emacs as a terminal emulator / multiplexer
Terminals are all about text, and Emacs is all about text as well. Not
only that, but Emacs is also very good at running other processes and
interacting with them. It is no surprise, I think, that Emacs works
well as a terminal emulator.
Emacs comes out of the box with `shell` and `term`. Both of these
commands run the shell of your choice, and give you a buffer to
interact with it. Shell gives you a more emacs-y experience, while
term overrides all default keymaps to give you a full terminal
experience.
![A terminal interface, with the outputs of the commands ls and git status displayed.](/img/emacs-terminal.png)
To use emacs as a full terminal, you can bind these to a key in your
window manager. I'm using i3, and my keybinding looks like this:
```
bindsym $mod+Shift+Return exec --no-startup-id emacs --eval "(shell)"
```
You can also create a desktop file to get a launcher entry for this in a
desktop environment. Try putting the following text in a file at
`~/.local/share/applications/emacs-terminal.desktop`:
```
[Desktop Entry]
Name=Emacs Terminal
GenericName=Terminal Emulator
Comment=Emacs as a terminal emulator.
Exec=emacs --eval '(shell)'
Icon=emacs
Type=Application
Terminal=false
StartupWMClass=Emacs
```
If you want to use term instead, replace `(shell)` above with `(term "/usr/bin/bash")`.
A very useful feature of terminal multiplexers is the ability to leave
the shell running, even after the terminal is closed, or after the ssh
connection has dropped if you are connecting over that. Emacs can also
achieve this with its server-client mode. To use that, start emacs
with `emacs --daemon`, and then create a terminal by running
`emacsclient -c --eval '(shell)'`. Even after you close emacsclient,
since Emacs itself is still running, you can run the same command
again to get back to your shell.
One caveat is that if there is a terminal/shell already running, Emacs
will automatically open that whenever you try opening a new one. This
can be a problem if you are using Emacs in server-client mode, or want
to have multiple terminals in the same window. In that case, you can
either do `M-x rename-uniquely` to change the name of the existing
terminal, which will make Emacs create a new one next time, or you can
add that to a hook in your `init.el` to always get that behaviour:
```lisp
(add-hook 'shell-mode-hook 'rename-uniquely)
(add-hook 'term-mode-hook 'rename-uniquely)
```
# Emacs as a shell
Of course, it is not enough that Emacs works as a terminal
emulator. Why not use Emacs as a shell directly, instead of bash/zsh?
Emacs has you covered for that too. You can use eshell, which is a
shell implementation written completely in Emacs Lisp. All you need
to do is run `M-x eshell`.
![An Emacs window, split in two. Left side shows a command line with the command cat README.rst buffer scratch. Right side shows the emacs scratch buffer, with the contents of the readme file displayed.](/img/eshell.png)
The upside is that eshell can evaluate and expand lisp expressions, as
well as redirect output to Emacs buffers. The downside, however, is
that eshell is not feature complete. It lacks some features such
as input redirection, and the documentation notes that it is
inefficient at piping output between programs.
If you want to use eshell instead of shell or term, you can replace
`shell` in the examples of terminal emulator section with `eshell`.
# Emacs as a mail client
[Zawinski's Law](http://www.catb.org/~esr/jargon/html/Z/Zawinskis-Law.html):
Every program attempts to expand until it can read mail. Of course, it
would be disappointing for Emacs to not handle mail as well.
Emacs already ships with some mail capability. To get a full
experience however, I'd recommend using
[mu4e](http://www.djcbsoftware.nl/code/mu/mu4e.html) (mu for emacs). I
have personally set up [OfflineIMAP](http://www.offlineimap.org/) to
retrieve my emails, and mu4e gives me a nice interface on top of that.
![An emacs window, displaying several emails on top with titles like Announcing Docker Cloud, or Order #29659 shipped. An email titles Add 'url' option to 'list' command' is selected, and the bottom half of the window displays the contents of this email. Email display includes From and To fields, Date, Flags, and the body of the email.](/img/mu4e.png)
I'm not going to talk about configuring these programs here; I'd
recommend checking out their documentation. Before ending this
section, I also want to mention
[mu4e-alert](https://github.com/iqbalansari/mu4e-alert), which adds
desktop notifications and a mode line indicator for new mail.
# Emacs as a feed reader (RSS/Atom)
Emacs handles feeds very well too. The packages I'm using here are
[Elfeed](https://github.com/skeeto/elfeed) and
[Elfeed goodies](https://github.com/algernon/elfeed-goodies). Emacs
can even show images in the feeds, so it covers everything I need from
a feed reader.
![A window, with a list on the left displaying entries from xkcd.com, Sandra and Woo, and The Codeless Code. An entry titled Pipelines is selected, and the right side of the window displays the contents of that XKCD.](/img/elfeed.png)
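If you want to try it, pointing Elfeed at some feeds is a single variable in your `init.el` (the second URL is just a placeholder):

```lisp
;; Example feed list -- replace with your own.
(setq elfeed-feeds
      '("https://xkcd.com/atom.xml"
        "https://example.com/feed.xml"))
;; Then M-x elfeed, and press G to fetch the feeds.
```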
# Emacs as a file manager
Why use a different program to manage your files when you can use
Emacs? Emacs ships with dired, as well as image-dired. This gives you
a file browser, with optional image thumbnail support.
# Emacs as a document viewer
Want to read a pdf? Need a program to do a presentation? Again, Emacs.
![An emacs window displaying a PDF file, titled Clojure for the Brave and True.pdf. The page includes some clojure code, and talks about Emacs.](/img/docview.png)
Emacs comes with
[DocView](https://www.gnu.org/software/emacs/manual/html_node/emacs/Document-View.html)
which has support for PDF, OpenDocument and Microsoft Office files. It
works surprisingly well.
Also, [PDF Tools](https://github.com/politza/pdf-tools) brings even
more PDF viewing capabilities to Emacs, including annotations, text
search and outline. After installing PDF Tools, Emacs has become my
primary choice for reading PDF files.
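Setting it up is a single call once the package is installed; a minimal sketch:

```lisp
;; Activate the pdf-tools backend and take over PDF files from DocView.
(pdf-tools-install)
```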
# Emacs as a browser
Emacs comes out of box with
[eww](https://www.gnu.org/software/emacs/manual/html_node/eww/index.html#Top),
a text-based web browser with support for images as well.
![An Emacs window, displaying the Wikipedia web page for Emacs.](/img/eww.png)
Honestly, I don't think I'll be using Emacs to browse the web. But
still, it is nice that the functionality is there.
# Emacs as a music player
Emacs can also act as a music player thanks to
[EMMS](https://www.gnu.org/software/emms/), Emacs MultiMedia
System. If you are wondering, it doesn't play the music by itself but
instead uses other players like vlc or mpd.
It has support for playlists, and can show thumbnails as well. As for
formats, it supports whatever the players it uses support, which
means you can use basically any file type.
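A minimal setup, roughly what the EMMS documentation suggests (the music directory is an assumption):

```lisp
(require 'emms-setup)
(emms-all)              ; load the stable EMMS features
(emms-default-players)  ; use whatever supported players are installed
(setq emms-source-file-default-directory "~/Music/")
```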
# Emacs as an IRC client
I don't use IRC a lot, but Emacs comes out of the box with support for
that as well thanks to
[ERC](https://www.emacswiki.org/emacs?action=browse;oldid=EmacsIrcClient;id=ERC).
![An Emacs window, displaying an IRC chat for emacs freenode.](/img/erc.png)
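You can start it with `M-x erc`, or non-interactively from your config; the server and nick below are placeholders:

```lisp
(erc :server "irc.example.net" :port 6667 :nick "my-nick")
```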
# Emacs as a text editor
Finally, Emacs can also work well as a text editor.
Emacs is a pretty fine text editor out of the box, but I want to
mention some packages here.
First,
[multiple cursors](https://github.com/magnars/multiple-cursors.el). Multiple
cursors mode allows you to edit text at multiple places at the same
time.
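The bindings suggested in its README give you a quick start:

```lisp
(global-set-key (kbd "C-S-c C-S-c") 'mc/edit-lines)
(global-set-key (kbd "C->")         'mc/mark-next-like-this)
(global-set-key (kbd "C-<")         'mc/mark-previous-like-this)
(global-set-key (kbd "C-c C-<")     'mc/mark-all-like-this)
```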
I also want to mention
[undo-tree](http://www.dr-qubit.org/emacs.php#undo-tree). It acts like
a mini revision control system, allowing you to undo and redo without
ever losing any text.
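Enabling it everywhere is one line; `C-x u` then opens the undo history visualizer:

```lisp
(global-undo-tree-mode)
```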
Another great mode is
[iy-go-to-char](https://github.com/doitian/iy-go-to-char). It allows
you to quickly jump around by going to the next or previous occurrence of a
character. It is very useful when you are trying to move around within a
line.
[Ace Jump Mode](https://github.com/winterTTr/ace-jump-mode/) allows
you to jump around the visible buffers. It can jump around based on
initial characters of words, or jump to specific lines. It can also
jump from one buffer to another, which is very useful when you have
several buffers open on your screen.
![An emacs window, with Python code displayed. Several locations within the code are highlighted with different letters.](/img/ace-jump-mode.png)
Finally, I want to mention [ag.el](https://github.com/Wilfred/ag.el),
which is an Emacs frontend for the silver searcher. If you don't know
about ag, it is a replacement for grep that recursively searches
directories, has some special handling for projects, and is very
fast.
# Emacs as an IDE
People sometimes compare Emacs to IDEs and complain that a text
editor such as Emacs doesn't have enough features. What they are
forgetting, of course, is that Emacs is an operating system, and we
can have an IDE in it as well.
There are different packages for every language, so I'll only be
covering language-agnostic ones here.
For interacting with git, [magit](http://magit.vc/) is a wonderful
interface.
![An emacs window, displaying the git log for a repository at the top, and the shortcuts for git commands such as Apply, Stage, Unstage below.](/img/magit.png)
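The one piece of setup worth doing is the binding its documentation suggests:

```lisp
(global-set-key (kbd "C-x g") 'magit-status)
```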
For auto-completion, [Company mode](https://company-mode.github.io/)
works wonders. I rely heavily on completion while writing code, and
company mode has support for anything I tried writing.
If you like having your code checked as you type,
[flycheck](https://www.flycheck.org/) has you covered. It has support
for many tools and languages.
![A C code file, with the letters st are written. A pop-up below the cursor displays options like strcat, strchr, strcmp and more.](/img/company-flycheck.png)
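If you want to try them, enabling both globally takes just two lines in your `init.el` (assuming both packages are installed):

```lisp
(add-hook 'after-init-hook 'global-company-mode)
(add-hook 'after-init-hook 'global-flycheck-mode)
```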


@ -0,0 +1,99 @@
---
title: Getting Deus Ex GOTY Edition running on Linux
date: 2022-03-12
---
I've been struggling with this for a few hours, so I might as well document how
I did it.
I have a particular setup, which ended up causing issues. Most important are
that I'm using Sway, a tiling Wayland compositor, and a flatpak install of
Steam.
## Mouse doesn't move when the game is launched
It looks like there's a problem with the game window grabbing the cursor on my
setup, so moving the mouse doesn't move the cursor in the game and if you move
it too much to the side it takes you out of the game window.
The solution to this is using Gamescope, which is a nested Wayland compositor
that makes the window inside it play nice with your actual compositor.
Because I'm using the flatpak install of Steam, I needed to install the
[flatpak version of gamescope](https://github.com/flathub/com.valvesoftware.Steam.Utility.gamescope).
One catch here is that for me, this wouldn't work if I also had the flatpak MangoHud installed.
The only solution I could come up with right now was to uninstall MangoHud.
```bash
flatpak remove org.freedesktop.Platform.VulkanLayer.MangoHud # if you have it installed
flatpak install com.valvesoftware.Steam.Utility.gamescope
```
Then, right click on the game and select properties, then in launch options type
`gamescope -f -- %command%`. This will launch the game inside gamescope, and the
cursor should move inside the game now.
## The game is too dark to see anything
It looks like the game relied on some old DirectX or OpenGL features or
something, because once you do launch into the game, everything is extremely
dark and hard to see. At first I was wondering how anyone could play the game
like this, but it turns out that's not how the game is supposed to look!
I finally managed to solve this by following the installer steps for the
[Deus Ex CD on Lutris](https://lutris.net/games/install/948/view). Yeah,
roundabout way to solve it, but it worked.
First download the updated D3D9 and OpenGL renderers from the page, and extract
them into the `System` folder inside the game.
```bash
cd "$HOME/.var/app/com.valvesoftware.Steam/.steam/steam/steamapps/common/Deus Ex/System"
wget https://lutris.net/files/games/deus-ex/dxd3d9r13.zip
wget https://lutris.net/files/games/deus-ex/dxglr20.zip
unzip dxd3d9r13.zip
unzip dxglr20.zip
```
Next, download and install the `1112fm` patch.
```bash
cd "$HOME/.var/app/com.valvesoftware.Steam/.steam/steam/steamapps/common/Deus Ex/System"
wget https://lutris.net/files/games/deus-ex/DeusExMPPatch1112fm.exe
env WINEPREFIX="$HOME/.var/app/com.valvesoftware.Steam/.steam/steam/steamapps/compatdata/6910/pfx/" wine DeusExMPPatch1112fm.exe
```
Follow the steps of the installer. It should automatically find where the game
is installed. Once the install is done, launch the game, then head into the
settings and pick "Display Settings", then "Rendering Device". In the renderer
selection window, pick "Show all devices", and then select "Direct3D9 Support".
![A window with the title Deus Ex in a stylized font. Below it lists several options such as Direct3D Support, Direct3D9 Support, and OpenGL Support. Direct3D9 is selected. Below are two radio buttons, with the one titled Show all devices selected.](/img/deus-ex-render-settings.png)
Launch back into the game, head into the display settings again, pick your
resolution, and restart the game. Then head into the display settings yet again,
this time change the color depth to 32 bit. Restart once more. Yes, you do have
to do them separately or the game doesn't save the color depth change for some
reason. Finally, you can start playing!
![A game screenshot displaying the Statue of Liberty in front of a cityscape. Closer to the player are wooden docks. The image is split down the middle, left side says before and is very dark, the right side says after and is much lighter.](/img/deus-ex-renderer-comparison.png)
## Other small issues
Here are a few more issues you might hit during this whole process:
> My cursor moves too fast!
You need to turn down the cursor speed. My mouse has buttons to adjust the speed on the fly, so I use those to turn down the speed.
> After changing resolution, I can't move my cursor!
Use the keyboard shortcuts (arrow keys and enter) to exit the game. It should work again when you restart.
> The cursor doesn't move when I open the game, even with gamescope!
I'm not fully sure why or how this happens, but a few things I found useful:
- When the game is launching, and it's showing the animation of the studio logo, don't click! Press escape to bring up the menu instead.
- Press escape to bring the menu up, then hit escape again to dismiss it. It sometimes starts working after that.
- Use the keyboard to exit the game and restart. It always works the next time for me.


@ -0,0 +1,166 @@
---
title: "Managing my recipes with Dendron"
date: 2022-04-04
---
> This post is day 10 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
I like to cook at home, but for a long time I never wrote down or saved any of
my recipes. Because of that I would occasionally completely forget how to make
something. My mom, and my grandmom write down their recipes in notebooks, but I
want something more powerful and resilient than some pen and paper.
At first I tried writing down my recipes in Google Keep, but found it a bit
tedious. That's where Dendron came in. Dendron is a knowledge management and
note taking tool. It comes with features that enhance the writing experience,
but more importantly it has a lot of features that enhance the discoverability
of what you wrote.
For reference, I have the [repository for the recipes](https://gitea.bgenc.net/kaan/recipes) available publicly.
## Setup
[Dendron](https://marketplace.visualstudio.com/items?itemName=dendron.dendron)
is an extension for Visual Studio Code, so you'll need to install both. There's
a great tutorial to go through, but I'm already experienced with it so I went
ahead and created a new workspace that I called "recipes".
Next, I created a template and a schema to help me write new recipes. The
template is just a regular Dendron note, which I named `templates.recipe`.
```md
* Servings:
* Calories:
* Protein:
* Fiber:
## Ingredients
## Instructions
## Notes
```
This template immediately gives me the basic structure of a recipe. I have the
ingredients and instructions, and then I have a place to put any additional
notes about the recipe (for example, things I want to change next time I cook
it, or how to serve it best). I also have a section at the top to fill out some
nutritional information. I use the mobile app Cronometer to calculate that,
although most of the time I don't bother because it's just a nice-to-have that I
don't really need.
Next, here's my schema.
```yml
version: 1
imports: []
schemas:
- id: recipes
title: recipes
parent: root
children:
- id: bowls
title: bowls
namespace: true
template: templates.recipe
- id: baked
title: baked
namespace: true
template: templates.recipe
- id: dessert
title: dessert
namespace: true
template: templates.recipe
- id: misc
title: misc
namespace: true
template: templates.recipe
- id: soup
title: soup
namespace: true
template: templates.recipe
```
The schema helps me keep my recipes organized (and also automatically applies
the template note). You can see that I have my recipes organized under `bowls`
for stuff like rice and pasta dishes, `baked` for bread, pies and anything else
where you bake everything, `dessert` and `soup` which are self descriptive, and
`misc` which holds anything else like salad toppings.
## Publishing
I publish my [recipes online](https://bgenc.net/recipes/), which makes it very
easy to pull up a recipe when I'm cooking or at the grocery store.
I use a self-hosted setup, so all I have to do is just run the Dendron CLI to
build the site. To automate this process, I set up some VSCode tasks to build
and publish the site.
```json
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "build site",
"type": "shell",
"command": "dendron publish export",
"options": {
"cwd": "${workspaceFolder}"
}
},
{
"label": "publish site",
"type": "shell",
"command": "rsync -av .next/out/ /var/www/recipes/",
"options": {
"cwd": "${workspaceFolder}"
},
"dependsOn": ["build site"],
"problemMatcher": []
},
]
}
```
I think before running these tasks, you first have to run `dendron publish init && dendron publish build`.
The first task builds the site using Dendron, and then the second task copies
the generated static website to where I have it published. I'm running a web
server on my desktop so this is just a folder, but `rsync` can also copy things
over SSH if you host your site on a different machine. There are also
[tutorials](https://wiki.dendron.so/notes/x0geoUlKJzmIs4vlmwLn3/) for things
like Github pages or Netlify.
Because I'm publishing under a subfolder (`.../recipes`), I also had to set
`assetsPrefix` in my `dendron.yml` configuration file.
```yml
publishing:
assetsPrefix: "/recipes"
...
```
## Bonus: What do I cook this week?
My wife and I go shopping once a week, so every week we need to decide what
we're going to eat. Sometimes it can be hard to pick something to eat
though! Luckily, Dendron comes with a command `Dendron: Random Note` which shows
you a random note. You can even configure it to only show some notes, which I
used so it will only show me recipes.
```yml
commands:
randomNote:
include:
- "recipes"
```
Now when I'm having trouble picking, I can just use this command and get
something to cook!


@ -0,0 +1,60 @@
---
title: Mass batch processing on the CLI
date: 2022-03-19
---
> This post is day 4 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
Some time ago, I needed to process a lot of video files with ffmpeg. This is
usually pretty easy to do, `for file in *.mp4 ; do ffmpeg ... ; done` is about
all you need in most cases. However, sometimes the files you are trying to
process are in different folders. And sometimes you want to process some files
in a folder but not others. That's the exact situation I was in, and I was
wondering if I needed to find some graphical application with batch processing
capabilities so I can queue up all the processing I need.
After a bit of thinking though, I realized I could do this very easily with a
simple shell script! That shell script lives in my [mark-list](https://github.com/SeriousBug/mark-list)
repository.
The idea is simple, you use the command to mark a bunch of files. Every file you
mark is saved into a file for later use.
```bash
$ mark-list my-video.mp4 # Choose a file
Marked 1 file.
$ mark-list *.webm # Choose many files
Marked 3 files.
$ cd Downloads
$ mark-list last.mpg # You can go to other directories and keep marking
```
You can mark a single file, or a bunch of files, or even navigate to other
directories and mark files there.
Once you are done marking, you can recall what you marked with the same tool:
```bash
$ mark-list --list
/home/kaan/my-video.mp4
/home/kaan/part-1.webm
/home/kaan/part-2.webm
/home/kaan/part-3.webm
/home/kaan/Downloads/last.mpg
```
You can then use this in the command line. For example, I was trying to convert everything to `mkv` files.
```bash
for file in `mark-list --list` ; do ffmpeg -i "${file}" "${file}.mkv" ; done
```
It works! After you are done with it, you then need to clear out your marks:
```bash
mark-list --clear
```
Hopefully this will be useful for someone else as well. It does make it a lot
easier to just queue up a lot of videos, and convert all of them overnight.
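One caveat with the `for` loop above: it splits on whitespace, so filenames with spaces would break. A sketch that avoids that, assuming `mark-list --list` prints one path per line:

```bash
mark-list --list | while IFS= read -r file; do
  ffmpeg -i "${file}" "${file}.mkv"
done
```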

58
src/routes/posts/mpv.md Normal file

@ -0,0 +1,58 @@
---
title: Motion Interpolation, 24 FPS to 60 FPS with mpv, VapourSynth and MVTools
date: 2015-07-18
modified: 2015-07-20
---
Watching videos at 60 FPS is great. It makes the video significantly smoother and much more enjoyable. Sadly, lots of movies and TV shows are still at 24 FPS. However, I recently discovered that it is actually possible to interpolate the extra frames by using motion interpolation, and convert a video from 24 FPS to 60 FPS in real time. While it is far from perfect, I think the visual artifacts are a reasonable tradeoff for high framerate.
<!--more-->
Firstly, what we need is mpv with VapourSynth enabled, and MVTools plugin for VapourSynth. VapourSynth must be enabled while compiling mpv. I adopted an AUR package [mpv-vapoursynth](https://aur4.archlinux.org/packages/mpv-vapoursynth/) which you can use if you are on Arch. Otherwise, all you need to do is use `--enable-vapoursynth` flag when doing `./waf --configure`. They explain the compilation on their [repository](https://github.com/mpv-player/mpv), so look into there if you are compiling yourself.
After that, we need MVTools plugin for VapourSynth. This is available on Arch via [vapoursynth-plugin-mvtools](https://www.archlinux.org/packages/community/x86_64/vapoursynth-plugin-mvtools/), otherwise you can find their repository [here](https://github.com/dubhater/vapoursynth-mvtools). There is also a [PPA for Ubuntu](https://launchpad.net/~djcj/+archive/ubuntu/vapoursynth) where you can find `vapoursynth-extra-plugins`, but I haven't used it myself so I can't comment on it.
After both of these are enabled, we need a script to use MVTools from VapourSynth. There is one written by Niklas Haas, which you can find here as [mvtools.vpy](https://github.com/haasn/gentoo-conf/blob/master/home/nand/.mpv/filters/mvtools.vpy). Personally, I tweaked the block sizes and precision to my liking, as well as removing the resolution limit he added. I'll put the modified version here:
```python
# vim: set ft=python:
import vapoursynth as vs
core = vs.get_core()
clip = video_in
dst_fps = display_fps
# Interpolating to fps higher than 60 is too CPU-expensive, smoothmotion can handle the rest.
while (dst_fps > 60):
    dst_fps /= 2
# Skip interpolation for 60 Hz content
if not (container_fps > 59):
    src_fps_num = int(container_fps * 1e8)
    src_fps_den = int(1e8)
    dst_fps_num = int(dst_fps * 1e4)
    dst_fps_den = int(1e4)
    # Needed because clip FPS is missing
    clip = core.std.AssumeFPS(clip, fpsnum=src_fps_num, fpsden=src_fps_den)
    print("Reflowing from ", src_fps_num / src_fps_den, " fps to ", dst_fps_num / dst_fps_den, " fps.")
    sup = core.mv.Super(clip, pel=1, hpad=8, vpad=8)
    bvec = core.mv.Analyse(sup, blksize=8, isb=True, chroma=True, search=3, searchparam=1)
    fvec = core.mv.Analyse(sup, blksize=8, isb=False, chroma=True, search=3, searchparam=1)
    clip = core.mv.BlockFPS(clip, sup, bvec, fvec, num=dst_fps_num, den=dst_fps_den, mode=3, thscd2=12)
clip.set_output()
```
At this point, you should be able to try this out as suggested in the script. To set this up more permanently, I'd suggest placing this script as `~/.config/mpv/mvtools.vpy`, and also writing the following as `~/.config/mpv/mpv.conf`:
```
hwdec=no
vf=vapoursynth=~/.config/mpv/mvtools.vpy
```
Now, whenever you open mpv, it will always use motion interpolation.
The result is fairly good. I noticed some significant artifacts while watching anime, but it works well with movies. I'm guessing that it is harder to track the motion in animations since they are generally exaggerated.
One thing to keep in mind, however, is performance. With `pel=2`, viewing a 1080p video results in around 90% CPU usage across all cores and 1.6 GB of RAM on my Intel i7 4700MQ. With `pel=1`, CPU usage goes down to about 60% per core. This process is very heavy on the processor, and you may have trouble unless you have a fast CPU.


@ -0,0 +1,44 @@
---
title: My response to Aurynn Shaw's "Contempt Culture" post
date: 2022-03-27
---
> This post is day 6 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
I recently came across [Aurynn Shaw's article on "Contempt Culture"](https://blog.aurynn.com/2015/12/16-contempt-culture/).
I'm a bit late to the party, but I wanted to talk about this too.
Aurynn's article talks about how some programming languages are considered
inferior, and programmers using these languages are considered less competent.
It's a good article, and you should take a look at it if you haven't.
## My thoughts
One thing I've come to realize over the years is that there are really no "bad
programming languages". Ignoring esolangs like brainfuck which are not really
meant to be used for anything serious, most programming languages are designed
to fit a niche. I'm using the term like it's used in ecology: every programming
language has a place in the ecosystem of technology and programming.
PHP is bad? PHP certainly has its drawbacks, but it also has its advantages.
"Drop these files into a folder and it works" is an amazing way to get started
programming. It's also a great way to inject a bit of dynamic content into
otherwise static pages. In fact, it's a simpler and more straightforward solution
than building a REST API and a web app where you have to re-invent server side
rendering just to get back to where PHP already was!
That's not to say PHP is perfect or the best language to use. It's a language I
personally don't like. But that doesn't make it a bad or "stupid" programming
language. At worst it's a programming language that doesn't fit my needs. If I
extrapolate that and say that PHP is a bad language, that would instead show my
ego. Do I really think I'm so great that anything I don't like is just
immediately bad? Something Aurynn said resonates with me here:
> It didn't matter that it was (and remains) difficult to read, it was that we
> were better for using it.
I just want to conclude this with one thing: next time you think a programming
language or tool or whatever is bad, think to yourself whether that's because it
doesn't feel cool or because you saw others making fun of it, or because you
actually evaluated the pros and cons and came up with a calculated decision.

18
src/routes/posts/pass.md Normal file

@ -0,0 +1,18 @@
---
title: Switching to pass
date: 2015-03-30
---
For some time, I used LastPass to store my passwords. While LastPass works well, it doesn't fit into the keyboard driven setup I have. I have been looking into alternatives for a while; I looked into KeePassX, but just like LastPass, it doesn't give me any way to set up keyboard shortcuts. On the other hand, I recently came across [pass](http://www.passwordstore.org/), and it provides everything I want.
<!--more-->
Pass uses GPG keys to encrypt the passwords, and git to keep revisions and backups. It integrates well with the shell, and there is a dmenu script, a Firefox plugin and an Android app. All the passwords are just GPG-encrypted files stored in some folders, so you don't need anything special to work with them.
![A terminal window with the command pass ls archlinux.org. The output lists SeriousBug@Gmail.com and SeriousBug. Above the terminal is a bar, with archlin typed on the left, and the option archlinux.org/SeriousBug@Gmail.com displayed on the right.](/img/passmenu.png)
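Day-to-day use from the shell looks something like this (the entry name matches the one in the screenshot above):

```bash
pass insert archlinux.org/SeriousBug   # prompts for the password and encrypts it
pass ls                                # list all stored entries
pass -c archlinux.org/SeriousBug       # copy the password to the clipboard
```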
So first, I needed to migrate my passwords from LastPass to pass. The website lists some scripts for migration, but sadly I missed that when I first looked at the page. So I decided to write a [python script to handle the migration](https://gist.github.com/SeriousBug/e9f33873d10ad944cbe6) myself. It inserts all passwords in `domain/username` format, and if there is any extra data written, it is added after the password as well. Secure notes are placed into their own folder, and any "Generated Password for ..." entries are skipped. If you're migrating from LastPass to pass, feel free to give it a try. If you are taking an export from their website however, do make sure that there is no whitespace before and after the csv.
![An Android phone screenshot. A search bar at the top displays archlin typed in, and below the search bar the options archlinux.org and wiki.archlinux.org are listed.](/img/password_store.png)
I certainly recommend trying out pass. It works very well, and it fits in with the unix philosophy.

188
src/routes/posts/raid.md Normal file

@ -0,0 +1,188 @@
---
title: My local data storage setup
date: 2022-03-10
---
Recently, I've needed a bit more storage. In the past I've relied on Google
Drive, but if you need a lot of space Google Drive becomes prohibitively
expensive. The largest option available, 2 TB, runs you $100 a year at the time
of writing. While Google Drive comes with a lot of features, it also comes with
a lot of privacy concerns, and I need more than 2 TB anyway. Another option
would be Backblaze B2 or AWS S3, but the cost is even higher. Just to set a
point of comparison, 16 TB of storage would cost $960 a year with B2 and a
whopping $4000 a year with S3.
Luckily in reality, the cost of storage per GB has been coming down steadily.
Large hard drives are cheap to come by, and while these drives are not
incredibly fast, they are much faster than the speed of my internet connection.
Hard drives it is then!
While I could get a very large hard drive, it's generally a better idea to get
multiple smaller hard drives. That's because these drives often offer a better
$/GB rate, but also because it allows us to mitigate the risk of data loss. So
after a bit of search, I found these "Seagate Barracuda Compute 4TB" drives. You
can find them on [Amazon](https://www.amazon.com/gp/product/B07D9C7SQH/) or
[BestBuy](https://www.bestbuy.com/site/seagate-barracuda-4tb-internal-sata-hard-drive-for-desktops/6387158.p?skuId=6387158).
These hard drives are available for $70 each at the time I'm writing this, and I bought 6 of them.
This gets me to around $420, plus a bit more for SATA cables.
Looking at [Backblaze Hard Drive Stats](https://www.backblaze.com/blog/backblaze-drive-stats-for-2021/),
I think it's fair to assume these drives will last at least 5 years.
Dividing the cost by the expected lifetime, that gets me $84 per year, far below what the cloud storage costs!
It's of course not as reliable, and it requires maintenance on my end, but
the difference in price is just too far to ignore.
## Setup
I decided to set this all up inside my desktop computer. I have a large case so
fitting all the hard drives in is not a big problem, and my motherboard does
support 6 SATA drives (in addition to the NVMe that I'm booting off of). I also
run Linux on my desktop computer, so I've got all the required software
available.
For the software side of things, I decided to go with `mdadm` and `ext4`. There
are also other options available like ZFS (not included in the linux kernel) or
btrfs (raid-5 and raid-6 are known to be unreliable), but this was the setup I
found the most comfortable and easy to understand for me. How it works is that
`mdadm` combines the disks and presents it as a block device, then `ext4`
formats and uses the block device the same way you use it with any regular
drive.
### Steps
I was originally planning to write the steps I followed here, but in truth I
just followed whatever the [ArchLinux wiki](https://wiki.archlinux.org/title/RAID#Installation)
told me. So I'll just recommend you follow that as well.
The only thing I'll warn you is that the wiki doesn't clearly note just how long
this process takes. It took almost a week for the array to build, and until the
build is complete the array runs at a reduced performance. Be patient, and just
give it some time to finish. As a reminder, you can always check the build
status with `cat /proc/mdstat`.
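Just to give a flavor of what the wiki walks you through, the core of it boils down to something like the following; the RAID level, device names and array name here are assumptions for illustration, so follow the wiki rather than copying this:

```bash
# Illustration only: a RAID-6 array over six disks, with ext4 on top.
mdadm --create /dev/md/storage --level=6 --raid-devices=6 /dev/sd[b-g]
mkfs.ext4 /dev/md/storage
cat /proc/mdstat   # check the build progress
```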
## Preventative maintenance
Hard drives have a tendency to fail, and because RAID arrays are resilient, the
failures can go unnoticed. You **need** to regularly check that the array is
okay. Unfortunately, while there are quite a few resources online on how to set
up RAID, very few of them actually talk about how to set up scrubs (full scans
to look for errors) and error monitoring.
For my setup, I decided to set up systemd to check and report issues. For this,
I first set up two timers: one that checks if there are any reported errors on the
RAID array, and another that scrubs the RAID array. Systemd timers come in two parts,
a service file and a timer file, so here are all the files.
- `array-scrub.service`
```toml
[Unit]
Description=Scrub the disk array
After=multi-user.target
OnFailure=report-failure-email@array-scrub.service
[Service]
Type=oneshot
User=root
ExecStart=bash -c '/usr/bin/echo check > /sys/block/md127/md/sync_action'
[Install]
WantedBy=multi-user.target
```
- `array-scrub.timer`
```toml
[Unit]
Description=Periodically scrub the array.
[Timer]
OnCalendar=Sat *-*-* 05:00:00
[Install]
WantedBy=timers.target
```
The timer above is the scrub operation, it tells RAID to scan the drives for
errors. It actually takes up to a couple days in my experience for the scan to
complete, so I run it once a week.
- `array-report.service`
```toml
[Unit]
Description=Check raid array errors that were found during a scrub or normal operation and report them.
After=multi-user.target
OnFailure=report-failure-email@array-report.service
[Service]
Type=oneshot
ExecStart=/usr/bin/mdadm -D /dev/md127
[Install]
WantedBy=multi-user.target
```
- `array-report.timer`
```toml
[Unit]
Description=Periodically report any issues in the array.
[Timer]
OnCalendar=daily
[Install]
WantedBy=timers.target
```
And this timer above checks the RAID array status to see if there were any
errors found. This timer runs much more often (once a day), because it's
instant, and also because RAID can find errors during regular operation even
when you are not actively running a scan.
### Error reporting
Another important thing here is this line in the service file:
```toml
OnFailure=report-failure-email@array-report.service
```
The automated checks are of no use if I don't know when something actually
fails. Luckily, systemd can run a service when another service fails, so I'm
using this to report failures to myself. Here's what the service file looks like:
- `report-failure-email@.service`
```toml
[Unit]
Description=status email for %i to user
[Service]
Type=oneshot
ExecStart=/usr/local/bin/systemd-email address %i
User=root
```
- `/usr/local/bin/systemd-email`
```sh
#!/bin/sh
/usr/bin/sendmail -t <<ERRMAIL
To: homelab@bgenc.net
From: systemd <root@$HOSTNAME>
Subject: Failure on $2
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
$(systemctl status --lines 100 --no-pager "$2")
ERRMAIL
```
The service just runs this shell script, which is a thin wrapper around
sendmail. The `%i` in the service is the part after the `@` when you use the
service: the `OnFailure` hook puts `array-report` after the `@`,
which gets passed to the email service, which in turn passes it on to the mail
script.
To send emails, you also need to set up `sendmail`. I decided to install
[msmtp](https://wiki.archlinux.org/title/Msmtp), and set it up to use my GMail
account to send me an email.
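For reference, my understanding is that the msmtp side ends up looking roughly like this; every value below is a placeholder, so check the msmtp documentation (and the wiki page above) for the real details:

```
# ~/.msmtprc (placeholders only)
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        ~/.msmtp.log

account        gmail
host           smtp.gmail.com
port           587
from           example@gmail.com
user           example@gmail.com
passwordeval   "cat ~/.msmtp-gmail-password"

account default : gmail
```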
To test if the error reporting works, edit `array-report.service` and change
the `ExecStart` line to `ExecStart=false`. Then run the report service with
`systemctl start array-report.service`. You should now get an email letting you
know that the `array-report` service failed, with the last 100 lines of
the service status attached.


@ -0,0 +1,91 @@
---
title: Running graphical user services with systemd
date: 2022-03-18
---
> This post is day 3 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
I've recently switched from KDE Plasma to sway as my window manager. I had a problem with the change though: the amazing kdeconnect service wasn't working!
My first attempt at fixing this was to just add a line to my sway config to launch it along with sway.
```
exec /usr/lib/kdeconnectd
```
Looks simple enough. But for some reason, `kdeconnectd` would just disappear
after a while. It would appear to run at startup, and then an hour or two later
I would pull up the kdeconnect app on my phone and it would tell me that my computer
is disconnected.
The biggest issue here was that I had no way to see why kdeconnect had failed.
In comes systemd to save the day. Systemd is a service manager, so it will
actually maintain the logs for these services. That means if kdeconnect is
crashing, I can check the logs for kdeconnect to see why it crashed. I can also
configure it to auto-restart after a crash if I want to.
To launch graphical applications with systemd though, you need to pass the
appropriate environment variables to it so it knows how to launch new windows.
I added this line to my sway config to do exactly that.
```
# Pass all variables to dbus & systemd to run graphical user services
exec dbus-update-activation-environment --all --systemd
```
Next, we need to write a service file to run the application. This is easier
than it sounds; here's the service file I wrote for kdeconnect:
```
[Unit]
Description=Run kdeconnectd.
After=graphical-session.target
StartLimitIntervalSec=600
StartLimitBurst=5
[Service]
Type=simple
ExecStart=/usr/lib/kdeconnectd
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=graphical-session.target
```
I saved this as `~/.config/systemd/user/kdeconnectd.service`. Finally, enabled it for my user with `systemctl --user enable kdeconnectd.service` and then restarted.
The service is configured to automatically restart on failure, but not if it
failed more than 5 times in the last 10 minutes. Systemd also waits 5 seconds
before trying to restart the failed service. This way if it crashes for some
reason, it will restart. But if it keeps crashing rapidly, it won't keep
trying to restart which could take up too much system resources.
I can now check how the service is doing with systemd!
```
Warning: The unit file, source configuration file or drop-ins of kdeconnectd.service changed on disk. Run 'systemctl --user daemon-reload>
● kdeconnectd.service - Run kdeconnectd.
Loaded: loaded (/home/kaan/.config/systemd/user/kdeconnectd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-03-17 14:18:15 EDT; 1h 46min ago
Main PID: 2188363 (kdeconnectd)
Tasks: 6 (limit: 77007)
Memory: 24.2M
CPU: 2.440s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/kdeconnectd.service
└─2188363 /usr/lib/kdeconnectd
Mar 17 14:20:58 eclipse systemd[817]: /home/kaan/.config/systemd/user/kdeconnectd.service:6: Unknown key name 'type' in section 'Service'>
Mar 17 15:16:11 eclipse kdeconnectd[2188363]: QObject::connect(KWayland::Client::Registry, Unknown): invalid nullptr parameter
Mar 17 15:16:11 eclipse kdeconnectd[2188363]: kdeconnect.plugin.battery: No Primary Battery detected on this system. This may be a bug.
Mar 17 15:16:11 eclipse kdeconnectd[2188363]: kdeconnect.plugin.battery: Total quantity of batteries found: 0
Mar 17 15:23:26 eclipse kdeconnectd[2188363]: QObject::connect(KWayland::Client::Registry, Unknown): invalid nullptr parameter
Mar 17 15:23:26 eclipse kdeconnectd[2188363]: kdeconnect.plugin.battery: No Primary Battery detected on this system. This may be a bug.
Mar 17 15:23:26 eclipse kdeconnectd[2188363]: kdeconnect.plugin.battery: Total quantity of batteries found: 0
Mar 17 15:23:26 eclipse kdeconnectd[2188363]: QMetaObject::invokeMethod: No such method KIO::StoredTransferJob::slotDataReqFromDevice()
Mar 17 15:24:35 eclipse kdeconnectd[2188363]: QMetaObject::invokeMethod: No such method KIO::StoredTransferJob::slotDataReqFromDevice()
Mar 17 15:57:29 eclipse systemd[817]: /home/kaan/.config/systemd/user/kdeconnectd.service:9: Unknown key name 'type' in section 'Service'>
```
A bunch of warnings so far, but no crashes yet. But if it does crash again, I'll finally know why.


@ -0,0 +1,47 @@
---
title: A little type system trick in Rust
date: 2022-03-15
---
> This post is day 1 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
While working on a small project recently, I ended up writing this type in Rust.
```rust
type ImageData = Arc<Mutex<Option<ImageBuffer<Rgba<u8>, Vec<u8>>>>>;
```
Even though I wrote it myself, it actually took me a bit after writing it to
figure out what this type was doing so I wanted to write about it.
Let me start from outside-in, the first type we have is `Arc`. `Arc` stands for
"atomic reference counting". [Reference counting](https://en.wikipedia.org/wiki/Reference_counting)
is a method to handle ownership of the data, or in other words to figure out
when the data needs to be freed. Garbage collected languages do this
transparently in the background, but in Rust we explicitly need to state that we
want it. Atomic means this is done using [atomic operations](https://en.wikipedia.org/wiki/Linearizability#Primitive_atomic_instructions),
so it is thread safe. In my case, I needed this because this data was going to
be shared between multiple threads, and I didn't know exactly when I would be "done"
with the data.
The next type is `Mutex`, which means [mutual exclusion](https://en.wikipedia.org/wiki/Lock_(computer_science))
or locking. Locks are used to restrict access to data to a single thread at a time.
That means whatever type is inside of this is not thread safe,
so I'm using the lock to protect it. Which is true!
The type after that is `Option`. This basically means "nullable", there may or may not be a thing inside this.
The interesting thing here is that this is a [sum type](https://en.wikipedia.org/wiki/Tagged_union),
so Rust helps remind us that this is nullable without introducing a nullability concept to the language. It's just part of the type system!
Then we have `ImageBuffer`, a type from the popular [image crate](https://docs.rs/image/latest/image/index.html).
Not much to talk about with this, that's the data I wanted to store.
The next thing that *is* interesting is the `Rgba<u8>` and `Vec<u8>` inside the image buffer.
What that means (and I'm speculating here because I'm lazy/too busy to check), is that
`Rgba` is just a basic wrapper type (or a "newtype"). It makes the compiler enforce the type of the
image data that's stored in this image buffer, so the user doesn't mix up different data types.
Similar for `Vec<u8>`, (I think) it means that the data inside this buffer is stored in a vector.
Finally, `u8` is probably self descriptive, the pixels and the vector are made out of 8-bit unsigned integers.
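To tie it together, here's a hypothetical sketch (not from the actual project, and assuming the `image` crate is a dependency) of how a type like this gets used across threads:

```rust
use std::sync::{Arc, Mutex};
use image::{ImageBuffer, Rgba};

type ImageData = Arc<Mutex<Option<ImageBuffer<Rgba<u8>, Vec<u8>>>>>;

fn main() {
    // Starts out empty: no image has been produced yet.
    let data: ImageData = Arc::new(Mutex::new(None));

    let writer = Arc::clone(&data);
    let handle = std::thread::spawn(move || {
        // Pretend this came from a camera or a file.
        let img = ImageBuffer::from_pixel(2, 2, Rgba([255u8, 0, 0, 255]));
        *writer.lock().unwrap() = Some(img);
    });
    handle.join().unwrap();

    // Another thread can check whether an image has arrived yet.
    if let Some(img) = data.lock().unwrap().as_ref() {
        println!("got an image: {}x{}", img.width(), img.height());
    }
}
```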


@ -0,0 +1,83 @@
---
title: State of Rust GUIs
date: 2022-03-17
---
> This post is day 2 of me taking part in the
> [#100DaysToOffload](https://100daystooffload.com/) challenge.
The website [Are we GUI Yet?](https://www.areweguiyet.com/) helpfully lists a
lot of the libraries and frameworks available for making a GUI in Rust. I've
been looking into making a GUI program in Rust, so I've been working my way
through some of these options.
This is not a thorough review, just my thoughts after a brief look. I'd recommend
looking over the website and deciding for yourself.
## Best candidate: Dioxus
- Website: https://dioxuslabs.com/
Dioxus is probably the option I like the best from a quick look. Declarative
applications similar to React, encapsulated components, first class async
support, and good type checking.
Downsides? Right now it's web only. Desktop applications are just web
applications rendered inside a web view. That's okay for cross platform apps,
but not for what I want to do, which is a lightweight native application.
## Better Electron: Tauri
- Website: https://github.com/tauri-apps/tauri
Tauri is a really good replacement for Electron. You can see the comparison on
their Github page, smaller binaries, less memory use, and faster launch times.
But again, it is a web app running in a web view. Not a native desktop app. Even
though Tauri uses less memory than electron, it still uses ~180 MB according to
their comparison. And the fast launch time is still around 0.4 seconds, way
longer than what I would expect.
## My current preference: Slint
- Website: https://slint-ui.com/
I really like Slint. It is a native GUI with their own OpenGL renderer, and an
optional Qt backend. From some basic experimentation, it seems to launch in less
than 50ms, and uses less than 80 MB of memory (mostly shared libraries).
You can write the code in either `.slint` files (and they actually have okay
editor support for this file type), or inside macros in your code files. The
code also looks pretty intuitive.
The downsides? The theming support is not great/nonexistent, you can't
dynamically generate UI elements (well kinda, you can generate them based on
properties you change at runtime, but the components themselves are hardcoded),
and the code sometimes gets awkward due to current limitations.
```rust
MainWindow := Window {
// You then have to bind to this callback inside rust code. No way to just write a hook that calls a rust function.
callback save_to_file(string);
HorizontalLayout {
height: 32px;
FilePath := LineEdit {
placeholder_text: "placeholder here";
}
Button {
text: "Save to file";
clicked => { save_to_file(FilePath.text); }
}
}
}
```
There is also no way to do some things, like setting a dialog hint for your main
window, which is something I needed to do.
## Conclusion?
It looks like the state of GUIs in rust is still "not yet". There are a few more
projects I need to look at, like [Relm](https://github.com/antoyo/relm), but
their code looks way too verbose to me. In the end, I think the best option
might be to just write my GUI in C++ with Qt, and maybe integrate bits written
in rust inside of that.